Tools for Collecting Etcd Metrics and Logs | Datadog


Author: David Lentz

Published: February 23, 2024

In Part 1 of this series, we looked at how etcd works and the role it plays in managing the state of a Kubernetes cluster. We also explored key etcd metrics you should monitor to ensure the health and performance of your etcd cluster. In this post, we’ll show you how you can use tools like Prometheus, Grafana, and etcdctl to collect and visualize etcd metrics. We’ll also show you how to collect etcd logs that provide context for those metrics.

Collect and visualize etcd metrics

Each server in your etcd cluster exposes metrics in the standard Prometheus format. In this section, we’ll show you how you can view these metrics and how you can collect, store, and visualize them.

View a snapshot via the /metrics endpoint

Each node in your etcd cluster exposes metrics at the /metrics endpoint, which is enabled by default. In this section, we’ll show you how to use curl to view current values for all available metrics, so you can always see a snapshot of your etcd servers’ performance.

Etcd supports using mutual TLS (mTLS) to secure the communication with clients (such as curl and kube-apiserver) and among the peers within the cluster. Once you’ve configured your etcd servers to use the necessary certificates, you can communicate securely by passing authentication information with your request. The command shown below calls the /metrics endpoint and specifies the TLS client certificate (--cert), client key (--key), and certificate authority (--cacert) to secure the communication. The command is prefaced with sudo to provide the privileges necessary to access the certificate files. It uses an example IP address but the port shown—2379—is the default port that all etcd nodes use for client communication.

sudo curl --cert /etc/etcd/kubernetes.pem --key /etc/etcd/kubernetes-key.pem --cacert /etc/etcd/ca.pem https://10.240.0.10:2379/metrics

The sample output shows an excerpt of the etcd server’s reply. This snapshot includes the current values of the commit duration, cluster leadership status, and heartbeat metrics:

# HELP etcd_debugging_disk_backend_commit_write_duration_seconds The latency distributions of commit.write called by bboltdb backend.
# TYPE etcd_debugging_disk_backend_commit_write_duration_seconds histogram
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.001"} 1
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.002"} 638
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.004"} 3333
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.008"} 3339
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.016"} 3341
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.032"} 3341
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.064"} 3341
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.128"} 3341
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.256"} 3342
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.512"} 3342
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="1.024"} 3342
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="2.048"} 3342
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="4.096"} 3342
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="8.192"} 3342
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="+Inf"} 3342
etcd_debugging_disk_backend_commit_write_duration_seconds_sum 7.666080665000013
etcd_debugging_disk_backend_commit_write_duration_seconds_count 3342
# HELP etcd_server_has_leader Whether or not a leader exists. 1 is existence, 0 is not.
# TYPE etcd_server_has_leader gauge
etcd_server_has_leader 1
# HELP etcd_server_heartbeat_send_failures_total The total number of leader heartbeat send failures (likely overloaded from slow disk).
# TYPE etcd_server_heartbeat_send_failures_total counter
etcd_server_heartbeat_send_failures_total 0
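Histogram metrics like the one above also let you derive an average: dividing the `_sum` series by the `_count` series gives the mean commit.write latency. The snippet below is a minimal sketch that applies this to the two sample lines from the output above:

```shell
# Average latency = histogram _sum / _count (values copied from the sample output above)
metrics='etcd_debugging_disk_backend_commit_write_duration_seconds_sum 7.666080665000013
etcd_debugging_disk_backend_commit_write_duration_seconds_count 3342'

avg=$(printf '%s\n' "$metrics" | awk '
  /_sum /   { sum = $2 }    # total seconds spent in commit.write
  /_count / { count = $2 }  # number of observations
  END       { printf "%.6f", sum / count }')
echo "average commit.write latency: ${avg}s"
```

Against a live node, you could feed the same awk program with the output of the curl command shown earlier instead of a hardcoded string.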

Expand your etcd visibility with Prometheus and Grafana

The /metrics endpoint displays the current values of etcd metrics, but you can track your cluster’s performance over time by running a Prometheus server. The etcd documentation provides guidance on configuring Prometheus monitoring in your etcd cluster.
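For reference, a minimal Prometheus scrape configuration for etcd might look like the following sketch. The job name, certificate paths, and target address are assumptions you'd replace with your own values:

```yaml
scrape_configs:
  - job_name: "etcd"
    scheme: https
    tls_config:
      ca_file: /etc/etcd/ca.pem
      cert_file: /etc/etcd/kubernetes.pem
      key_file: /etc/etcd/kubernetes-key.pem
    static_configs:
      - targets: ["10.240.0.10:2379"]  # example etcd node address
```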

You can use Grafana to see history, trends, and patterns in the values of metrics stored in Prometheus. Etcd provides a pre-built Grafana dashboard (shown below) that includes key metrics, such as leader changes and failed proposals.

A Grafana dashboard shows Prometheus metrics over time.

Check endpoint health and performance with etcdctl

Etcd’s command-line tool—etcdctl—allows you to execute simple tests and query the status of your etcd nodes. In this section, we’ll show you how you can use etcdctl to gather information about your cluster for troubleshooting and analysis. Note that the sample commands shown throughout this section set an environment variable called ETCDCTL_API that specifies which version of the etcd API to use; see the documentation for more information on the available versions.

The check perf subcommand sends write requests to one or more etcd nodes and measures the throughput and latency of those operations. The sample command shown below uses the --endpoints option to specify which of the cluster’s nodes to check.

sudo ETCDCTL_API=3 etcdctl --endpoints=https://10.240.0.10:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem check perf

The output of check perf summarizes the test results and—in this example—indicates that the node has passed the test.

PASS: Throughput is 150 writes/s
PASS: Slowest request took 0.095848s
PASS: Stddev is 0.003297s

The endpoint health subcommand tests the time required for each node in the cluster to commit a test proposal. The example command below uses the --cluster option to specify that all nodes in the cluster should be checked.

sudo ETCDCTL_API=3 etcdctl --cluster --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem endpoint health

Checking the health of the entire cluster can help you focus your troubleshooting by quickly identifying any unhealthy nodes. The output below shows that four of the endpoints responded successfully, but etcdctl was unable to reach the other endpoint and returned a context deadline exceeded error. Because not all nodes passed the test, etcdctl returned an unhealthy cluster error.

is healthy: successfully committed proposal: took = 9.562129ms
is healthy: successfully committed proposal: took = 12.199038ms
is unhealthy: failed to commit proposal: context deadline exceeded
is healthy: successfully committed proposal: took = 21.537375ms
is healthy: successfully committed proposal: took = 20.272916ms
Error: unhealthy cluster
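In a script, you can act on this output by counting failures before proceeding. The sketch below greps sample health output the way you might grep the real etcdctl output; the endpoint URLs are made up for illustration:

```shell
# Sample endpoint health output; the endpoint URLs are hypothetical
health_output='https://10.240.0.10:2379 is healthy: successfully committed proposal: took = 9.562129ms
https://10.240.0.11:2379 is unhealthy: failed to commit proposal: context deadline exceeded
https://10.240.0.12:2379 is healthy: successfully committed proposal: took = 21.537375ms'

# Count unhealthy endpoints; a nonzero count means the cluster check failed
unhealthy=$(printf '%s\n' "$health_output" | grep -c 'is unhealthy')
echo "unhealthy endpoints: $unhealthy"
```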

The endpoint status subcommand can help you troubleshoot etcd by showing you each node’s database size, error messages (if any), and Raft status—including its role as a leader or follower and the most recent log entries it has committed and applied. The example command below uses the --write-out=table option to format the output. Other formatting options include json and fields, which writes the output as a list.

sudo ETCDCTL_API=3 etcdctl --cluster --write-out=table --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem endpoint status

The sample output below shows no errors pending on any of the nodes.

+----------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          | 2fe2f5d17fc97dab |   3.5.9 |   22 MB |     false |      false |         2 |       9002 |               9002 |        |
|          | 3a57933972cb5131 |   3.5.9 |   22 MB |     false |      false |         2 |       9002 |               9002 |        |
|          | 5e3509fb8e8c6cae |   3.5.9 |   22 MB |     false |      false |         2 |       9002 |               9002 |        |
|          | f98dc20bce6225a0 |   3.5.9 |   22 MB |     false |      false |         2 |       9002 |               9002 |        |
|          | ffed16798470cab5 |   3.5.9 |   22 MB |      true |      false |         2 |       9002 |               9002 |        |
+----------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
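A quick consistency check on this output is to confirm that exactly one node reports IS LEADER as true. The sketch below parses sample table rows (the endpoint URLs are hypothetical) with awk:

```shell
# Sample endpoint status rows (--write-out=table); endpoint URLs are hypothetical
status_table='| https://10.240.0.10:2379 | 2fe2f5d17fc97dab | 3.5.9 | 22 MB | false | false | 2 | 9002 | 9002 | |
| https://10.240.0.11:2379 | ffed16798470cab5 | 3.5.9 | 22 MB | true | false | 2 | 9002 | 9002 | |'

# The IS LEADER column follows ENDPOINT, ID, VERSION, and DB SIZE; with "|" as
# the field separator it lands in awk field 6. A healthy cluster has exactly one leader.
leaders=$(printf '%s\n' "$status_table" | awk -F'|' '$6 ~ /true/ { n++ } END { print n + 0 }')
echo "leader count: $leaders"
```

For machine parsing, the `--write-out=json` option is generally more robust than splitting the table output.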

Use journalctl to view etcd logs

Metrics and health data can help you understand the state of your etcd cluster, and you can gain greater context around that data by collecting and exploring etcd logs. These logs provide information about the etcd process (such as startup, shutdown, and errors), as well as etcd activity, including reads, writes, and leader elections.

Etcd uses the zap library to structure its logs. Each log entry is a JSON object that identifies the node and the log level (e.g., warn or error), as well as a message and other data that can help you troubleshoot.

By default, etcd sends logs to journald, which is the service that processes logs from applications managed by systemd. The example command below shows how you can use journald’s command-line tool—journalctl—to browse the logs on an etcd node.

journalctl _SYSTEMD_UNIT=etcd.service

The sample output below shows details of a leader election.

Dec 21 21:23:07 controller-0 etcd[1950]: {"level":"info","ts":"2023-12-21T21:23:07.76964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98dc20bce6225a0 became candidate at term 2"}
Dec 21 21:23:07 controller-0 etcd[1950]: {"level":"info","ts":"2023-12-21T21:23:07.769719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 2"}
Dec 21 21:23:07 controller-0 etcd[1950]: {"level":"info","ts":"2023-12-21T21:23:07.769739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 at term 2"}
Dec 21 21:23:07 controller-0 etcd[1950]: {"level":"info","ts":"2023-12-21T21:23:07.769796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 at term 2"}
Dec 21 21:23:07 controller-0 etcd[1950]: {"level":"info","ts":"2023-12-21T21:23:07.773656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98dc20bce6225a0 received MsgVoteResp from ffed16798470cab5 at term 2"}
Dec 21 21:23:07 controller-0 etcd[1950]: {"level":"info","ts":"2023-12-21T21:23:07.773714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98dc20bce6225a0 has received 2 MsgVoteResp votes and 0 vote rejections"}
Dec 21 21:23:07 controller-0 etcd[1950]: {"level":"info","ts":"2023-12-21T21:23:07.773738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98dc20bce6225a0 became leader at term 2"}
Dec 21 21:23:07 controller-0 etcd[1950]: {"level":"info","ts":"2023-12-21T21:23:07.773755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f98dc20bce6225a0 elected leader f98dc20bce6225a0 at term 2"}
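To surface just the election events from a stream like this, you can pipe the journal output through grep. The sketch below demonstrates the pattern on one of the sample lines above:

```shell
# One sample journal line from the election above
log_line='Dec 21 21:23:07 controller-0 etcd[1950]: {"level":"info","ts":"2023-12-21T21:23:07.773738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98dc20bce6225a0 became leader at term 2"}'

# Extract the election event; against a live node you would pipe journalctl instead:
#   journalctl _SYSTEMD_UNIT=etcd.service | grep -o 'became leader at term [0-9]*'
event=$(printf '%s\n' "$log_line" | grep -o 'became leader at term [0-9]*')
echo "$event"
```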

If you prefer not to use journald—for example, if you’re using a platform that doesn’t support systemd or you need to implement a lightweight logging solution that doesn’t require it—you can configure etcd to send logs to stderr or stdout.

You can also configure etcd’s log level. By default, you’ll collect logs designated as info-level and higher, but you can instead use the debug level for more verbose logging. This level also makes additional data available under the /debug endpoint, including profiling data from the pprof Go package, as well as request and trace data. This additional data can be useful for examining etcd’s behavior in development environments or troubleshooting production issues. But using the debug log level can degrade the performance of your etcd cluster and increase the volume of your logs, so you should use it only when necessary.
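Both of these settings can be expressed in etcd's configuration file. The fragment below is a sketch showing logs routed to stderr at the debug level; the equivalent command-line flags are --log-outputs and --log-level:

```yaml
# Excerpt of an etcd configuration file: send logs to stderr at debug level
log-outputs: [stderr]
log-level: debug
```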

Monitor etcd and Kubernetes with Datadog

In this post, we’ve shown you the tools and processes you can use to collect and view metrics and logs from your etcd clusters. In Part 3 of this series, we’ll show you how Datadog gives you visibility into your etcd metrics and logs—alongside data from your Kubernetes control plane and workloads, plus the infrastructure that runs it all—in one unified platform.