
Kubernetes v1.36 Introduces Beta for In-Place Vertical Scaling of Pod Resources

Apr 30, 2026 · 5 min read

The graduation of In-Place Pod-Level Resources Vertical Scaling to Beta in Kubernetes v1.36 is more than a feature enhancement; it marks a substantial evolution in how Kubernetes manages resource utilization for containerized applications. By allowing dynamic adjustments to a Pod's aggregate resource budget without requiring container restarts, the Kubernetes community is addressing a long-standing pain point for operators in cloud-native environments.

Understanding the Importance of In-Place Resizing

Traditionally, managing resource allocation for complex Pods, including those with multiple containers or sidecars, required meticulous planning and frequent manual recalibration. The in-place resizing feature fundamentally shifts this paradigm. With Kubernetes v1.36, users can modify a Pod's shared resource pool on the fly, gaining the flexibility to respond to varying demand without interrupting service.

This capability is particularly valuable where individual containers lack predefined limits and instead draw from the Pod's shared budget. By expanding that budget during peak load, Kubernetes lets applications adapt without the overhead of reconfiguring each container individually. As workloads fluctuate, a Pod's resource allocation can now be adjusted without disruption, a much-needed capability in today's increasingly agile CI/CD pipelines.
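As a sketch, a Pod-level resource budget is declared under spec.resources, alongside the usual container definitions. The image and resource values here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  # Pod-level budget shared by all containers in the Pod.
  resources:
    requests:
      cpu: "1"
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 2Gi
  containers:
  - name: app
    image: nginx   # illustrative workload
    # No per-container limits: the container draws from the Pod-level budget.
```

An in-place resize is then expressed as an update to these values; in recent releases this goes through the Pod's resize subresource rather than a plain spec update.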

Implementation and Policy Considerations

Initiating a pod-level resize triggers a series of checks in the kubelet. When a resize request is made, the kubelet evaluates each container's resizePolicy to determine whether the change can be applied dynamically or whether the container must be restarted. This reflects a more nuanced approach to resource management:

  • Non-disruptive Updates: For containers whose resizePolicy specifies a restartPolicy of NotRequired for the affected resource, the system can adjust resource allocations seamlessly.
  • Disruptive Updates: Containers whose resizePolicy specifies RestartContainer will be restarted to reflect the updated Pod-level budget, which can cause temporary service interruptions but may be necessary for applications that only read their resource limits at startup.
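Per-container behavior is controlled by the resizePolicy field, which maps each resource to a restart policy. A minimal sketch (NotRequired is the default when no policy is specified):

```yaml
containers:
- name: app
  image: nginx   # illustrative workload
  resizePolicy:
  - resourceName: cpu
    restartPolicy: NotRequired       # CPU can be resized in place
  - resourceName: memory
    restartPolicy: RestartContainer  # memory changes restart this container
  resources:
    requests:
      cpu: 500m
      memory: 256Mi
```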

However, it's crucial to note that the current implementation does not support a pod-level resizePolicy. This could lead to inconsistent results if individual container policies conflict with the overall Pod-level directive. The responsibility lies with application operators to understand these dynamics thoroughly to optimize their resource management strategies.

Tracking Resizing Status: A New Level of Observability

Alongside the resizing capabilities, Kubernetes v1.36 improves observability for Pod resource management. New Pod Conditions enhance the ability to track the status of resize operations, including:

  • PodResizePending: Indicates that the Pod's specification has been updated, but the node has not yet accepted the change, for example because it currently lacks the capacity to accommodate it.
  • PodResizeInProgress: Confirms that the node has accepted the resize, but the changes have not yet fully propagated to the containers.
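As an illustration, a deferred resize might surface in the Pod status roughly like this (the reason and message shown are illustrative):

```yaml
status:
  conditions:
  - type: PodResizePending
    status: "True"
    reason: Deferred   # the node lacks spare capacity right now
    message: "Node didn't have enough capacity: cpu"
```

These conditions can be inspected with kubectl describe pod, or by querying .status.conditions directly.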

This tracking capability is pivotal for diagnosing issues in real time and provides feedback loops for operators to understand resource utilization better. Operational transparency can significantly reduce troubleshooting time and enhance the responsiveness of teams managing Kubernetes environments under fluctuating loads.

Constraints and Future Opportunities

For Kubernetes practitioners, it's essential to understand the constraints associated with this feature. It currently requires cgroup v2 on the node, along with a container runtime that can apply dynamic resource updates, such as recent versions of containerd or CRI-O. Furthermore, it is limited to Linux nodes at this stage. Understanding these limitations will help teams prepare adequately for integrating in-place resizing into their workflows.
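For clusters where the relevant feature gates are not yet enabled by default, they can be turned on via kubelet configuration. The gate names below are assumptions based on the feature's naming and should be checked against the release notes for your version:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodLevelResources: true                        # pod-level resource budgets
  InPlacePodLevelResourcesVerticalScaling: true  # gate name assumed; verify per release
```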

Forward-Looking Insights: What’s Next for Kubernetes

The journey doesn't end here. As Kubernetes edges closer to general availability for this feature, the focus shifts toward integrating it with the Vertical Pod Autoscaler (VPA). This integration can potentially automate the issuance of resource recommendations at the Pod level, streamlining the whole actuation process—reducing human intervention and increasing overall efficiency. For organizations running Kubernetes at scale, this could redefine operational practices and truly push the boundaries of resource optimization.

If you’re currently operationalizing Kubernetes, the In-Place Pod-Level Resources Vertical Scaling feature is a significant advancement. It addresses longstanding complexities in dynamic resource allocation with a more responsive, automated approach. Engage with this feature actively and provide feedback—your insights could help shape the next iteration of Kubernetes capabilities as the community strides towards more sophisticated resource management solutions.

For practical engagement, consider testing this feature in your own environments and sharing your experiences through the Kubernetes community channels, such as the #sig-node Slack channel or the SIG Node mailing list.