What's new for scheduling and resource management in Kubernetes v1.34?

Nicholas Thomson

Kubernetes v1.34, which is scheduled for release August 27, 2025, focuses on improved scheduler visibility, deeper life cycle observability, and enhanced resource management. As always, the list of changes and improvements in the official changelog is extensive, and cluster operators may be wondering which changes are most important. If you're operating a monitoring platform or depend on deep Kubernetes observability, here's how a number of new features will affect your workflows.

Better scheduling visibility, predictive metrics, and life cycle observability

When scheduling decisions, pod placement, or autoscaling behavior are opaque, incident response slows down and it becomes harder to correlate infrastructure changes with application performance. This lack of visibility also increases the risk of inefficient resource use. As such, cluster operators need to maintain deep visibility into how workloads are scheduled, started, and scaled. Kubernetes v1.34 introduces several features that help teams gain earlier, more accurate signals about pod readiness, container termination, and expected node placement, enabling faster root cause analysis, smarter alerting, and more reliable predictive dashboards.

Asynchronous scheduler API calls – Beta (enabled via SchedulerAsyncAPICalls feature gate)

Kubernetes scheduler performance can degrade because the scheduler must complete blocking API calls, such as updating a pod's status or binding it to a node, before it can move on to the next scheduling decision. This creates scheduling bottlenecks and introduces delays that ripple into downstream observability systems.

With this new feature, the scheduler can now perform API interactions asynchronously, increasing throughput and responsiveness. Faster scheduling means that pod state transitions will be reflected more quickly in metrics, logs, and alerts related to pod life cycle events.
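
If the gate isn't already on in your scheduler build, you can enable it explicitly. Here's a minimal sketch for a kubeadm-managed control plane (an assumption on our part; how you pass kube-scheduler flags varies by distribution), using the v1beta3 ClusterConfiguration format:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
scheduler:
  extraArgs:
    # Passed to kube-scheduler as --feature-gates=SchedulerAsyncAPICalls=true
    feature-gates: "SchedulerAsyncAPICalls=true"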

NominatedNodeName for pod placement – Alpha

Observability tools typically have limited insight into where unscheduled pods are expected to run, as this information isn't exposed until the scheduler completes its decision and the pod is actually bound to a node. This lack of visibility makes it difficult to anticipate placement bottlenecks, preemptively allocate resources, or correlate scheduling delays with downstream effects in logs, metrics, or alerts, especially in environments with resource constraints or taints.

With this feature, the scheduler will now set the nominatedNodeName field for more pods—not just those pending preemption—exposing their expected node earlier in the scheduling process. This field indicates which node the scheduler is reserving for the pod, and teams can use this metadata for predictive scheduling dashboards or pre-scheduling alerting logic, as well as enhanced workload tracing and capacity forecasting.
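
Because nominatedNodeName is part of the pod object itself, anything that already watches pods can pick it up without new tooling. Here's a quick illustration of what a pending pod might look like once a node has been nominated; the node name is hypothetical:

# Query it directly with, for example:
#   kubectl get pod <pod-name> -o jsonpath='{.status.nominatedNodeName}'
status:
  phase: Pending
  nominatedNodeName: gpu-node-3   # set by the scheduler before binding completes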

Container stop signals – Alpha (feature-gated)

In Kubernetes, distinguishing between different reasons for container shutdowns—such as graceful exits versus forced terminations—has traditionally been difficult, since the STOPSIGNAL was configured in the container image or runtime and was not visible in the pod spec. This limited both control over shutdown behavior and clarity during incident analysis.

The new container stop signals feature surfaces the termination signal in a pod's status, making it visible to operators. Making termination intent more explicit will improve life cycle observability, alerting accuracy, and the ability to debug container exits without relying solely on logs. Additionally, this new feature allows developers to configure custom stop signals directly in the pod specification, rather than being constrained to the image defaults, providing greater control over shutdown semantics.
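
Because this is an alpha feature, the exact field shape may still change, but the design places the signal under the container's lifecycle. Here's a minimal sketch, assuming the relevant alpha feature gate is enabled on your cluster; the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-worker
spec:
  os:
    name: linux                  # the alpha design requires spec.os.name when setting a stop signal
  containers:
  - name: worker
    image: registry.example.com/worker:1.2   # placeholder image
    lifecycle:
      stopSignal: SIGUSR1        # overrides the image's STOPSIGNAL / runtime default of SIGTERM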

Enhancing observability for extended resources with Dynamic Resource Allocation (DRA)

Increasingly, Kubernetes workloads depend on specialized hardware, like GPUs, FPGAs, and smart NICs. Accordingly, there is a growing need for better visibility into how these extended resources are allocated, consumed, and managed. Dynamic Resource Allocation (DRA) is a Kubernetes framework that lets workloads request specialized hardware like GPUs or FPGAs through standardized, pluggable drivers, rather than vendor-specific plugins or ad hoc APIs. Several new features in Kubernetes v1.34 enhance DRA to improve observability across the entire life cycle, from scheduling to health monitoring.

Handle extended resource requests via DRA driver – Alpha

Kubernetes provides the DRA framework as a pluggable mechanism for managing complex hardware resources, using ResourceClaim and ResourceSlice objects. However, extended resources such as GPUs, FPGAs, or smart NICs were previously handled outside of DRA, often through custom device plugins or ad hoc annotations. This created fragmentation, making it harder to consistently track resource usage, enforce policies, and support multi-tenant or heterogeneous hardware environments.

This feature extends DRA to cover extended resources, enabling them to be allocated and tracked through standardized DRA drivers rather than one-off integrations. Workloads requesting DRA-managed resources can now be tracked with fine-grained metadata, such as specific device classes, allocation counts, and driver-backed ResourceSlice references, making it easier to report resource allocations and usage per workload. For instance, instead of tracking nvidia.com/gpu and intel.com/fpga through separate mechanisms, both can now be managed and monitored through their respective DRA drivers with consistent metadata and allocation patterns. This enhancement improves visibility, auditability, and allocation efficiency across heterogeneous clusters, providing a much clearer understanding of who’s using which hardware and how.
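
To make that concrete, the sketch below maps an extended resource name onto a DeviceClass so that an unmodified pod spec keeps requesting example.com/gpu while allocation flows through the DRA driver. The extendedResourceName field, the driver, and the resource names follow the alpha design and are assumptions here, so check the API version and schema your cluster actually serves before relying on them:

apiVersion: resource.k8s.io/v1          # served DRA API version varies by cluster
kind: DeviceClass
metadata:
  name: example-gpu
spec:
  selectors:
  - cel:
      expression: 'device.driver == "gpu.example.com"'   # hypothetical driver
  extendedResourceName: example.com/gpu                   # field per the alpha design
---
apiVersion: v1
kind: Pod
metadata:
  name: trainer
spec:
  containers:
  - name: train
    image: registry.example.com/train:0.1   # placeholder image
    resources:
      limits:
        example.com/gpu: "1"      # unchanged request syntax, now satisfied via the DRA driver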

Device binding conditions – Alpha (DRADeviceBindingConditions and DRAResourceClaimDeviceStatus feature gates required)

When a pod requests extended resources, the scheduler first evaluates those requirements through the DRA framework and then assigns the pod to a node. Once scheduled, the kubelet on that node calls the appropriate DRA driver to prepare the hardware, and the driver allocates the actual device. Because this allocation process can take time, users have traditionally had no clear way to know whether the device was ready; the only options were digging into logs or inspecting node internals.

This enhancement adds device binding conditions, allowing the kubelet to update pod.status.conditions with the binding state reported by the DRA driver. By surfacing readiness information directly in the pod status, users can easily see whether device preparation is in progress, complete, or failed. This improves observability, enables proactive alerting on delayed or failed bindings, and makes diagnosing hardware-related startup issues far more straightforward.
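
In practice, that means existing pod-level monitors can surface device readiness without driver-specific plumbing. The fragment below is purely illustrative; condition types, reasons, and messages are set by the driver, and the alpha API may change:

# Inspect with, for example:
#   kubectl get pod <pod-name> -o jsonpath='{.status.conditions}'
status:
  phase: Pending
  conditions:
  - type: example.com/DeviceBindingReady     # hypothetical, driver-defined condition type
    status: "False"
    reason: BindingInProgress                # hypothetical reason
    message: gpu.example.com is still attaching device gpu-0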

Consumable capacity for devices – Alpha

Before this feature, Kubernetes handled extended resources as opaque, non-overcommittable integer counts. If a device advertised three units and a pod requested one, the scheduler could allocate two more pods that each requested one. But from Kubernetes’ perspective, a device was either allocatable or not—it couldn’t represent “this device has X capacity left.” This meant that workloads that only needed a fraction of a device couldn’t express this natively, and that Kubernetes had no notion of whether a device was saturated, underutilized, or had headroom remaining. Vendors tried to work around this by carving devices into artificial resource types (for example, NVIDIA exposing nvidia.com/mig-1g.5gb, …2g.10gb, etc.), but this fragmented reporting and forced operators to rely on vendor-specific plugins and metrics.

This feature introduces consumable capacity for DRA, allowing drivers to expose how much capacity remains on a device. Operators gain visibility into metrics like saturation, headroom, and overcommitment risk, so SREs and platform engineers can reason about utilization across heterogeneous hardware and tune scheduling policies with greater accuracy.
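
As a sketch of what that looks like, here a hypothetical gpu.example.com driver publishes a device's capacity in a ResourceSlice; with consumable capacity, claims can draw down portions of that total rather than taking the whole device. The driver, node, and device names are placeholders, and the served API version and sharing-related fields of the alpha API may differ in your cluster:

apiVersion: resource.k8s.io/v1          # served DRA API version varies by cluster
kind: ResourceSlice
metadata:
  name: node-a-gpu.example.com          # placeholder name
spec:
  driver: gpu.example.com               # hypothetical driver
  nodeName: node-a
  pool:
    name: node-a
    generation: 1
    resourceSliceCount: 1
  devices:
  - name: gpu-0
    capacity:
      memory:
        value: 40Gi                     # total capacity that claims can consume in portions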

Add resource health status to PodStatus – Beta

When a pod using specialized hardware crashes or misbehaves, it’s often unclear whether the issue lies with the device, the application, or the node. This can force cluster operators into considerable guesswork, delaying incident resolution.

This new field in PodStatus lets DRA and device plugins report device health directly in the pod’s status. Teams can now detect hardware issues at the pod level without sifting through logs, enabling faster alerting, diagnosis, and automated remediation when faulty devices impact workload stability.
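
For example, with the feature enabled, a pod backed by a failing device could report something like the fragment below; the resource name and device ID are placeholders. Alerting on this field is far cheaper than scraping driver logs:

# Inspect with, for example:
#   kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[*].allocatedResourcesStatus}'
status:
  containerStatuses:
  - name: train
    ready: false
    allocatedResourcesStatus:
    - name: example.com/gpu              # the allocated resource being reported on
      resources:
      - resourceID: gpu-0                # placeholder device ID reported by the driver
        health: Unhealthy                # Healthy | Unhealthy | Unknown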

Get the most out of Kubernetes v1.34

For cluster operators, it’s important to keep an eye on each new Kubernetes version release for updates that will enable teams to scale up faster and more efficiently. Kubernetes v1.34 brings a wave of observability-aligned enhancements, from deeper scheduling visibility to improved DRA observability. These features unlock faster insights into system health and performance and better coverage of modern workloads.

Explore the benefits of using Datadog for Kubernetes observability in our dedicated blog post, or check out our docs to learn more. If you’re new to Datadog and would like to monitor the health and performance of your Kubernetes clusters, sign up for a free trial to get started.
