
David Lentz
In the first two parts of this series, we explored how Karpenter’s architecture enables just-in-time provisioning and active node consolidation, and we identified the key Karpenter metrics you should track to keep your cluster performant and cost-efficient. In this post, we’ll look at vendor-agnostic tools you can use to capture these signals. We’ll show you how to:
- Audit Karpenter’s current state by using native Kubernetes commands
- Collect and visualize Karpenter metrics by using Prometheus and Grafana to track provisioning latency and consolidation trends
- Monitor logs to audit Karpenter’s activity and understand its scheduling and disruption decisions
Use Kubernetes-native tools to spot-check Karpenter status
Before implementing a comprehensive observability pipeline, it can be helpful to use Kubernetes’ built-in tooling to do a quick, real-time check of what Karpenter is doing. These spot checks are especially handy when pods are pending and you’re quickly trying to understand why they haven’t been scheduled.
A good first step is to use kubectl to look at the resources Karpenter creates and manages. Karpenter adds (and replaces) cluster capacity by creating NodeClaim Kubernetes objects that represent the compute it wants the cloud provider to provision. The following command can give you a high-level inventory of what Karpenter has created:
kubectl get nodeclaims
You’ll see a compact view of recent and active NodeClaims, including the cloud instance type, zone, associated Kubernetes node name (if it has registered), and whether the NodeClaim (and, by association, the node) is ready.
Here’s an example of what that output can look like:
```
NAME            TYPE         ZONE         NODE                          READY   AGE
default-t5k2p   c5.large     us-east-1a   ip-10-0-12-192.ec2.internal   True    12m
default-8g4n1   m5.2xlarge   us-east-1b   ip-10-0-45-21.ec2.internal    True    4m
```
When you need more than a snapshot—especially if you’re investigating a specific scaling event—kubectl describe can show you the full state of a single NodeClaim. The following command illustrates how to describe a NodeClaim to see its status and events, which are often the most valuable pieces of troubleshooting data.
kubectl describe nodeclaim <nodeclaim-name>
The following sample output shows the detailed history of the NodeClaim named default-t5k2p. The Events section at the bottom is particularly useful for verifying that Karpenter successfully launched the instance and that it registered with the cluster.
```
Name:     default-t5k2p
Labels:   karpenter.sh/nodepool=default
          karpenter.sh/capacity-type=on-demand
Status:
  Conditions:
    Type         Status
    ----         ------
    Launched     True
    Registered   True
    Initialized  True
  Node Name:  ip-10-0-12-192.ec2.internal
Events:
  Type    Reason      Age  From                            Message
  ----    ------      ---  ----                            -------
  Normal  Launched    12m  controller.nodeclaim.lifecycle  Launched instance: i-0abcdef1234567890
  Normal  Registered  11m  controller.nodeclaim.lifecycle  Registered node: ip-10-0-12-192.ec2.internal
```
Collect and visualize Karpenter metrics
Karpenter exposes metrics in Prometheus format. To retain them for long-term storage and analysis, you can send the metrics to a Prometheus server running in your cluster or to any other compatible backend. In this section, we’ll show you how to use a Prometheus server to scrape and store Karpenter metrics and then visualize them in Grafana to track disruption, batching efficiency, and provisioning latency.
Scrape the Karpenter /metrics endpoint
Karpenter exposes metrics at the /metrics endpoint, which is on port 8080 by default. Note that this is distinct from the health probe port, which defaults to 8081 and serves Kubernetes liveness checks. If you inadvertently configure your monitoring tool to scrape the health port instead, you may see a status of 200 OK but no metric values. You can learn more about the distinct roles and behavior of the metrics endpoint versus the health endpoint in the documentation for Karpenter and for Kubebuilder—the framework that Karpenter is built on.
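A quick way to confirm you’re looking at the metrics port rather than the health port is to check whether the response contains karpenter_* metric families. The following sketch runs the check against a canned sample of the Prometheus exposition format (the sample metric name and value here are illustrative, not taken from a live cluster):

```shell
# Illustrative sample of the Prometheus exposition format returned by
# a /metrics endpoint. The metric name and value are assumptions for
# demonstration purposes only.
metrics='# HELP karpenter_pods_state Sample help text.
karpenter_pods_state{phase="Running"} 42'

# A real metrics response contains karpenter_* series; the health port
# returns 200 OK without them. Against a live cluster, you could run:
#   kubectl -n karpenter port-forward deploy/karpenter 8080:8080 &
#   curl -s localhost:8080/metrics | grep -c '^karpenter_'
printf '%s\n' "$metrics" | grep -c '^karpenter_'
```

If the count is zero, you’re likely scraping the wrong port.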
Store Karpenter metrics with a Prometheus server
In Kubernetes, Prometheus typically runs as a dedicated monitoring workload in your cluster. It can automatically discover targets (such as Karpenter’s /metrics endpoint), scrape them on an interval you define (which defaults to 60 seconds), and store the resulting timeseries so you can query and analyze them later.
Prometheus’s built-in timeseries database enables you to retain Karpenter metrics long term so you can visualize, analyze, and alert on them. Karpenter automatically enriches these metrics with labels for attributes like NodePool names, instance types, and capacity types, keeping dashboards and alerts manageable as your environment grows.
A common way to run Prometheus on Kubernetes is via the Prometheus Operator, which enables you to launch and manage Prometheus by using Kubernetes-native objects. With the Operator, you define scrape targets declaratively by using ServiceMonitor and PodMonitor custom resource definitions (CRDs), and the Operator automatically generates and keeps Prometheus’s scrape configuration in sync. The Operator also manages the Prometheus instance life cycle (for example, updates, scaling, and persistence), which makes Prometheus easier to operate over time.
To configure the Prometheus Operator to scrape Karpenter, you can apply a ServiceMonitor like the one shown here. This configuration assumes Karpenter is running in the karpenter namespace and tells Prometheus to scrape the service labeled app.kubernetes.io/name: karpenter:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: karpenter-monitor
  namespace: monitoring  # This varies with your Operator's config
  labels:
    release: prometheus  # This varies with your Operator's config
spec:
  namespaceSelector:
    matchNames:
      - karpenter
  selector:
    matchLabels:
      app.kubernetes.io/name: karpenter
  endpoints:
    - path: /metrics
      port: http-metrics  # Matches Karpenter's Service port name
      interval: 60s
```
For more detail on how Prometheus is commonly installed and managed on Kubernetes, see the Prometheus Operator installation guide.
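If you manage Prometheus’s configuration directly rather than through the Operator, you can achieve the same result with a static scrape job. The following is a minimal sketch, assuming the same karpenter namespace and http-metrics port name as above; adjust both to match your install:

```yaml
# Sketch of a plain Prometheus scrape job for Karpenter (no Operator).
# The namespace and port name are assumptions; adjust for your cluster.
scrape_configs:
  - job_name: karpenter
    scrape_interval: 60s
    kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
            - karpenter
    relabel_configs:
      # Keep only endpoints whose port is named http-metrics
      - source_labels: [__meta_kubernetes_endpoint_port_name]
        action: keep
        regex: http-metrics
```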
Visualize and alert on Karpenter trends with Grafana
Grafana is a convenient next step once you’re scraping and storing Karpenter metrics. Grafana enables you to visualize your metrics, explore trends by writing custom queries in the PromQL language, and set alerts on the signals you care about. You can get started quickly by using Grafana’s community dashboards. You can then customize them to optimize your visibility, for example, by tracking telemetry data from your environment alongside three critical areas of Karpenter metrics: disruption behavior, batching behavior, and provisioning latency.
- Disruption behavior: The karpenter_voluntary_disruption_eligible_nodes metric counts the nodes Karpenter considers eligible for voluntary disruption. If this number stays high, Karpenter may be identifying optimization opportunities but getting blocked by constraints like PodDisruptionBudgets (PDBs), which limit how many replicas can be voluntarily evicted at once.
- Batching behavior: The karpenter_cloudprovider_batcher_batch_size metric illustrates how efficiently Karpenter is interacting with your cloud provider during scaling events. Healthy batching typically means requests are grouped, and small batch sizes can be a sign that scaling activity is fragmented and less efficient.
- Provisioning latency: Karpenter is known for fast scale-out, and you should see that reflected in karpenter_scheduler_scheduling_duration_seconds, which tracks the duration of Karpenter’s scheduling simulations. A sustained increase here can indicate Karpenter is spending more time evaluating scheduling requirements before it can act. If your pods are pending too long but this metric stays flat, check cloud provider metrics. The metrics karpenter_cloudprovider_duration_seconds and karpenter_cloudprovider_errors_total can help you spot API latency, throttling, or quota-related failures.
You can create Grafana alerts to notify you proactively of any metrics that signal a problem with Karpenter’s performance. If any key metrics trend outside your expected ranges, you’ll get notified early. For example, you might alert on a sustained rise in scheduling latency or on the number of consolidation-eligible nodes staying elevated for an extended period.
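If you prefer to define alerts alongside your Prometheus configuration rather than in Grafana, the same conditions can be expressed as Prometheus alerting rules. The following sketch covers the two examples above; the thresholds, durations, and metadata names are illustrative assumptions, not recommendations:

```yaml
# Sketch of Prometheus alerting rules for the examples above. The
# thresholds (1s, 10 nodes) and durations are assumptions to tune.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: karpenter-alerts
  namespace: monitoring
spec:
  groups:
    - name: karpenter
      rules:
        - alert: KarpenterSchedulingLatencyHigh
          # p99 of scheduling simulation latency over the last 10 minutes
          expr: histogram_quantile(0.99, sum(rate(karpenter_scheduler_scheduling_duration_seconds_bucket[10m])) by (le)) > 1
          for: 15m
        - alert: KarpenterEligibleNodesElevated
          # Consolidation-eligible nodes staying elevated for an hour
          expr: sum(karpenter_voluntary_disruption_eligible_nodes) > 10
          for: 1h
```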
Monitor Karpenter logs
Karpenter’s metrics can illustrate its health, but logs can explain Karpenter’s specific decisions that shape the performance of your Kubernetes-based applications. If you need to troubleshoot Karpenter performance problems such as sudden scheduling delays, unexpected node churn, or failed consolidation, logs are often the quickest way to gain the detail necessary to understand what went wrong.
An overview of Karpenter logs
Karpenter emits structured JSON logs that include a log level, a logger name (which identifies the component that produced the entry), and a message describing the event. The logger field is especially useful because it naturally groups logs by category. In practice, you’ll frequently see entries from the provisioner controller and the NodeClaim life cycle controller.
For example, when Karpenter identifies pods that it can provision capacity for, it will log an event like this:
```
{"level":"INFO","time":"2024-06-22T02:24:16.114Z","logger":"controller.provisioner","message":"found provisionable pod(s)","Pods":"default/inflate-...","duration":"10.5ms"}
```
This line tells you the provisioner controller (controller.provisioner) found unscheduled pods it can handle. This can be a helpful starting point if you’re investigating why pods are pending.
Once a provisionable pod is identified, the NodeClaim life cycle controller (controller.nodeclaim.lifecycle) narrates what happens as Karpenter creates and brings up capacity. You’ll typically see a sequence like this:
```
{"level":"INFO","time":"2024-06-22T02:24:19.028Z","logger":"controller.nodeclaim.lifecycle","message":"launched nodeclaim","NodeClaim":{"name":"default-sfpsl"},"provider-id":"aws:///us-west-2b/i-01234567adb205c7e","instance-type":"c5.2xlarge","zone":"us-west-2b","capacity-type":"spot","allocatable":{"cpu":"8","memory":"16Gi"}}
{"level":"INFO","time":"2024-06-22T02:26:19.028Z","logger":"controller.nodeclaim.lifecycle","message":"registered nodeclaim","NodeClaim":{"name":"default-sfpsl"},"Node":{"name":"ip-10-0-12-34.us-west-2.compute.internal"}}
{"level":"INFO","time":"2024-06-22T02:26:52.642Z","logger":"controller.nodeclaim.lifecycle","message":"initialized nodeclaim","NodeClaim":{"name":"default-sfpsl"},"Node":{"name":"ip-10-0-12-34.us-west-2.compute.internal"}}
```
The launched nodeclaim message confirms that Karpenter requested capacity and shows details like instance type, zone, and capacity type. The next message—registered nodeclaim—confirms that the node joined the cluster, and the initialized nodeclaim message indicates that the node is ready for use.
Collecting Karpenter logs
You can access Karpenter logs the same way you access any Kubernetes workload logs. Because Karpenter pod names may include random suffixes, it’s best practice to use label selectors rather than targeting a specific pod name. The following command tails logs from all pods in the karpenter namespace that carry the app.kubernetes.io/name=karpenter label:
kubectl logs -n karpenter -l app.kubernetes.io/name=karpenter -f
Streaming Karpenter logs via kubectl makes it easy to gain context around an active incident or validate changes to your NodePools, disruption settings, or workload scheduling behavior.
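Because the logs are structured JSON, you can narrow a stream to one controller by matching on the logger field. The following self-contained sketch applies the filter to a sample line adapted from the output shown earlier; against a live cluster, you would pipe the kubectl logs command above into the same filter:

```shell
# A sample structured log line, adapted from the output shown earlier.
log='{"level":"INFO","logger":"controller.provisioner","message":"found provisionable pod(s)"}'

# Keep only provisioner-controller entries. In a live cluster, you could
# pipe the streaming command into the same filter:
#   kubectl logs -n karpenter -l app.kubernetes.io/name=karpenter -f \
#     | grep -F '"logger":"controller.provisioner"'
echo "$log" | grep -F '"logger":"controller.provisioner"'
```

A tool like jq can apply richer filters (for example, by level or NodeClaim name) on the same stream.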
See the Karpenter documentation for detailed information about using logs for troubleshooting.
Get deeper visibility into Karpenter behavior
Karpenter is a powerful tool for optimizing Kubernetes clusters, but maintaining efficiency and availability requires deep visibility into its decision-making process. By collecting metrics and logs, you can help keep Karpenter provisioning nodes rapidly, batching requests efficiently, and respecting your disruption budgets. While Prometheus and Grafana provide a solid foundation for monitoring Karpenter, managing a self-hosted observability stack can become complex at scale. In the next part of this series, we’ll look at how to send these metrics to Datadog for a fully managed view of your cluster’s scaling performance.





