Analyzing etcd performance

The brain of any Kubernetes cluster is etcd (pronounced et-see-dee), an open source, distributed, consistent key-value store used for shared configuration, service discovery, and scheduler coordination in distributed systems or clusters of machines.

As the primary datastore of Kubernetes, etcd stores and replicates all Kubernetes cluster state. Because it is such a critical component, etcd needs a reliable approach to configuration and management and must be hosted on hardware that can keep up with its rigorous demands. When etcd experiences performance issues, the entire cluster suffers and can become unstable or even unusable.

In this module we will use a Python script to analyze the etcd pod logs in an OpenShift must-gather, identify common errors, and correlate those errors across the cluster.

etcd-ocp-diag.py

  1. With this Python script we will be able to identify common errors in the etcd pod logs, including the following (a rough sketch of this kind of pattern matching follows the list):

    • "Apply Request Took Too Long" - etcd took longer than expected to apply the request.

    • "Failed to Send Out Heartbeat" - The etcd node took longer than expected to send out a heartbeat.

    • "Lost Leader" and "Elected Leader" - The node has not heard from the leader within the expected period, triggering a leader election.

    • "wal: Sync Duration" - The Write Ahead Log Sync Duration took too long.

    • "The Clock Difference Against Peer" - The peers have too large of a clock difference.

    • "Server is Likely Overloaded" - This error is a direct result of disk performance, per the etcd documentation.

    • "Sending Buffer is Full" - The sending buffer for etcd raft messages is full due to slowness of other nodes.

    • and more.
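
The exact patterns etcd-ocp-diag.py matches are internal to the script, but the idea is simple substring matching against the etcd pod logs. The following is a minimal sketch of that approach; the pattern strings are illustrative values derived from the error names above, and the log path in the usage comment is a hypothetical must-gather location, not the script's actual code.

# Minimal sketch: tally common etcd error messages in a single pod log.
# The substrings below are illustrative; the real script's patterns may differ.
from collections import Counter

ERROR_PATTERNS = [
    "apply request took too long",
    "failed to send out heartbeat",
    "lost leader",
    "elected leader",
    "slow fdatasync",
    "clock difference",
    "likely overloaded",
    "sending buffer is full",
]

def count_errors(log_path):
    """Count how many log lines contain each known error substring."""
    counts = Counter()
    with open(log_path, errors="replace") as log:
        for line in log:
            for pattern in ERROR_PATTERNS:
                if pattern in line:
                    counts[pattern] += 1
    return counts

# Hypothetical usage against one etcd pod log extracted from a must-gather:
# print(count_errors("namespaces/openshift-etcd/pods/etcd-master-0/etcd/etcd/logs/current.log"))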

Understanding etcd-ocp-diag.py
etcd-ocp-diag.py --help

usage: etcd-ocp-diag.py [-h] --path PATH [--ttl] [--heartbeat] [--election] [--fdatasync] [--buffer] [--etcd_timeout] [--pod POD] [--date DATE] [--compare] [--errors] [--stats] [--previous] [--rotated]

Process etcd logs and gather statistics.

options:
  -h, --help      show this help message and exit
  --path PATH     Path to the must-gather
  --ttl           Check apply request took too long
  --heartbeat     Check failed to send out heartbeat
  --election      Check election issues
  --fdatasync     Check slow fdatasync
  --buffer        Check sending buffer is full
  --etcd_timeout  Check etcdserver: request timed out
  --pod POD       Specify the pod to analyze
  --date DATE     Specify date for error search in YYYY-MM-DD format
  --compare       Display only dates or times that happen in all pods
  --errors        Display etcd errors
  --stats         Display etcd stats
  --previous      Use previous logs
  --rotated       Use rotated logs

etcd Error Stats

  1. The first thing to check is whether performance issues are affecting the etcd cluster. To do this, run the script with --stats. It looks for "Took Too Long" error messages, collects the expected time, and then calculates the maximum, the minimum time over the expected time, the median, and the average, along with the count, and displays the results. It does the same for the "slow fdatasync" error message (see the sketch after this list).

  2. If you encounter a large number of these errors (more than 100 per day on average), or the maximums approach 1 second or higher, you will want to dig deeper into etcd performance to determine when the slowness is happening and whether issues appear elsewhere in the cluster.
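
As a rough illustration of what the --stats pass involves, the sketch below pulls the reported durations out of "apply request took too long" lines and computes the same figures the script displays. The regular expression assumes the newer JSON log format where the duration appears as a "took" field, and the 200ms expected value is a default chosen for the example; both are assumptions, not the script's actual implementation.

# Sketch: compute max/min/median/average/count for "took too long" durations.
# Assumes each matching line carries a duration such as "took":"554.2048ms";
# the exact log format varies by etcd version.
import re
import statistics

TOOK_RE = re.compile(r'"took":"([\d.]+)(ms|s)"')

def took_too_long_stats(log_path, expected_ms=200.0):
    durations = []
    with open(log_path, errors="replace") as log:
        for line in log:
            if "apply request took too long" not in line:
                continue
            match = TOOK_RE.search(line)
            if match:
                value, unit = float(match.group(1)), match.group(2)
                durations.append(value * 1000 if unit == "s" else value)
    if not durations:
        return None
    return {
        "Maximum": max(durations),
        "Minimum": min(durations),
        "Median": statistics.median(durations),
        "Average": statistics.mean(durations),
        "Count": len(durations),
        "Expected": expected_ms,
    }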

Checking for etcd Error Stats
cd ~/Module6/
etcd-ocp-diag.py --path module6-must-gather.6521552859184261155/ --stats

Stats about etcd "apply request took too long" messages: etcd-ocp4-2nvq7-master-0
	First Occurrence: 2024-07-28T04:00:27
	Last Occurrence: 2024-08-14T15:18:23
	Maximum: 2515.8442ms
	Minimum: 200.1755ms
	Median: 554.2048ms
	Average: 607.3171ms
	Count: 3614
	Expected: 200ms

etc.

Common Errors

  1. etcd emits well-known error messages that indicate which issues are affecting your cluster, and this script lets you find them quickly so you can determine which problems to focus on.

  2. To do this, run etcd-ocp-diag.py --path <path_to_must_gather> --errors. It searches all of the etcd pods and returns the pod name, the error, and the count (a sketch of this aggregation follows the list).
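
Producing that table is essentially a matter of walking each etcd pod's log in the must-gather and tallying matches per pod. Below is a hedged sketch that assumes the path you pass in points at the directory containing namespaces/ and that pod logs live under namespaces/openshift-etcd/pods/<pod>/etcd/etcd/logs/current.log; the layout details and error substrings are illustrative, not the script's actual code.

# Sketch: build a POD / ERROR / COUNT table across all etcd pods in a must-gather.
# Directory layout and error substrings are assumptions for illustration only.
# previous.log and rotated logs also exist; the real script exposes those via
# its --previous and --rotated options.
import glob
import os
from collections import Counter

ERROR_PATTERNS = [
    "apply request took too long",
    "slow fdatasync",
    "waiting for ReadIndex response took too long",
    "elected leader",
]

def error_table(must_gather_path):
    counts = Counter()  # keyed by (pod name, error string)
    log_glob = os.path.join(
        must_gather_path, "namespaces", "openshift-etcd", "pods",
        "etcd-*", "etcd", "etcd", "logs", "current.log")
    for log_path in glob.glob(log_glob):
        # Pull the pod name out of the path, e.g. .../pods/<pod>/etcd/...
        pod = log_path.split(os.sep + "pods" + os.sep)[1].split(os.sep)[0]
        with open(log_path, errors="replace") as log:
            for line in log:
                for pattern in ERROR_PATTERNS:
                    if pattern in line:
                        counts[(pod, pattern)] += 1
    for (pod, error), count in sorted(counts.items()):
        print(f"{pod}\t{error}\t{count}")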

Reviewing Common Errors
etcd-ocp-diag.py --path module6-must-gather.6521552859184261155/ --errors

POD                       	ERROR                                                 	COUNT
etcd-ocp4-2nvq7-master-0	waiting for ReadIndex response took too long, retrying	 295
etcd-ocp4-2nvq7-master-0	slow fdatasync                                        	  60
etcd-ocp4-2nvq7-master-0	apply request took too long                           	3614
etcd-ocp4-2nvq7-master-0	leader is overloaded likely from slow disk            	 500
etcd-ocp4-2nvq7-master-0	elected leader                                        	   9

Searching for Specific Errors

  1. Once you know which errors are present, you can search for a specific error type to see when it occurred on each node, which makes it easier to correlate the errors with other events in the cluster.

  2. To do this, run etcd-ocp-diag.py --path <path_to_must_gather> --ttl. It searches all of the etcd pods and returns the pod name, the date, and the count.

  3. In addition to "Took Too Long" errors, you can also use the following options (conceptually, each option maps to one class of log message, as sketched after this list):

  --heartbeat     Check failed to send out heartbeat
  --election      Check election issues
  --fdatasync     Check slow fdatasync
  --buffer        Check sending buffer is full
  --etcd_timeout  Check etcdserver: request timed out
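
Conceptually, each of these options narrows the search to one class of log message. The mapping might look something like the sketch below; the substrings are assumptions based on the error names shown earlier and on the help text, not the script's actual pattern table.

# Illustrative mapping of etcd-ocp-diag.py options to the kind of log message
# each one searches for. These substrings are assumptions, not the real table.
OPTION_PATTERNS = {
    "--ttl": "apply request took too long",
    "--heartbeat": "failed to send out heartbeat",
    "--election": "elected leader",
    "--fdatasync": "slow fdatasync",
    "--buffer": "sending buffer is full",
    "--etcd_timeout": "etcdserver: request timed out",
}
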
Using Options to Look for Specific Errors
etcd-ocp-diag.py --path module6-must-gather.6521552859184261155/ --ttl | sort -u -k3,3nr

POD                       	DATE      	COUNT
etcd-prodshift-2nvq7-master-2	2024-08-14	952
etcd-prodshift-2nvq7-master-1	2024-08-14	887
etcd-prodshift-2nvq7-master-1	2024-08-13	805
etcd-prodshift-2nvq7-master-2	2024-08-13	744
etcd-prodshift-2nvq7-master-0	2024-08-14	702
etcd-prodshift-2nvq7-master-2	2024-08-12	627
etcd-prodshift-2nvq7-master-1	2024-08-12	579
etcd-prodshift-2nvq7-master-0	2024-08-13	550
etcd-prodshift-2nvq7-master-0	2024-08-12	455
etcd-prodshift-2nvq7-master-2	2024-08-11	360
etcd-prodshift-2nvq7-master-1	2024-08-11	333
etcd-prodshift-2nvq7-master-1	2024-08-10	288
etcd-prodshift-2nvq7-master-2	2024-08-09	268
  1. After you have the results for all dates and pods, you can drill down further by specifying the --date and/or --pod options to return the hour and minute each error happened, limited to one specific pod (a rough sketch of this drill-down follows).
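
Once a date and a pod are fixed, the remaining work is just re-bucketing the same matches by hour and minute. A minimal sketch, assuming each log line starts with an ISO-8601 timestamp such as 2024-08-14T05:39:12; the real log format and the script's implementation may differ.

# Sketch: count "took too long" matches per HH:MM for one pod log and one date.
from collections import Counter

def per_minute_counts(log_path, date, pattern="apply request took too long"):
    counts = Counter()
    with open(log_path, errors="replace") as log:
        for line in log:
            if pattern in line and line.startswith(date):
                counts[line[11:16]] += 1  # slice HH:MM out of the leading timestamp
    return counts

# Hypothetical usage for the example above:
# per_minute_counts("namespaces/openshift-etcd/pods/etcd-prodshift-2nvq7-master-1/etcd/etcd/logs/current.log",
#                   "2024-08-14")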

Using the Date and Pod Options
etcd-ocp-diag.py --path module6-must-gather.6521552859184261155/ --ttl --date 2024-08-14 --pod etcd-prodshift-2nvq7-master-1 | sort -k3,3rn

POD                       	DATE 	COUNT
etcd-prodshift-2nvq7-master-1	00:10	26
etcd-prodshift-2nvq7-master-1	05:39	21
etcd-prodshift-2nvq7-master-1	06:08	21
etcd-prodshift-2nvq7-master-1	05:41	20
etcd-prodshift-2nvq7-master-1	14:21	20
etcd-prodshift-2nvq7-master-1	15:12	20
etcd-prodshift-2nvq7-master-1	05:54	19
  1. Finally, you can use the --compare option to see when errors happened on the same date across all pods. You can combine it with the --date option to narrow issues down to specific hours or minutes (a minimal sketch of this comparison follows).
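
The comparison itself is essentially a set intersection: keep only the dates (or, with --date, the times) that appear in every pod's results. A minimal sketch, assuming you already have per-pod counts keyed by date, for example from the kind of counting shown earlier:

# Sketch: print only the dates that appear in every pod, mirroring the idea
# behind --compare. Input shape is {pod: {date: count}}.
def compare(per_pod_counts):
    common = set.intersection(*(set(dates) for dates in per_pod_counts.values()))
    for date in sorted(common):
        print(f"Date: {date}")
        print(f"{'POD':<30} COUNT")
        for pod, dates in sorted(per_pod_counts.items()):
            print(f"{pod:<30} {dates[date]}")
        print()

# Hypothetical usage with made-up counts:
# compare({
#     "etcd-ocp4-2nvq7-master-0": {"2024-07-28": 121, "2024-07-29": 112},
#     "etcd-ocp4-2nvq7-master-1": {"2024-07-28": 60},
# })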

Using Compare
etcd-ocp-diag.py --path module6-must-gather.6521552859184261155/ --ttl --compare

Date: 2024-07-28
POD                            COUNT
etcd-ocp4-2nvq7-master-0     121
etcd-ocp4-2nvq7-master-1     60
etcd-ocp4-2nvq7-master-2     98

Date: 2024-07-29
POD                            COUNT
etcd-ocp4-2nvq7-master-0     112
etcd-ocp4-2nvq7-master-1     83
etcd-ocp4-2nvq7-master-2     58

Date: 2024-07-30
POD                            COUNT
etcd-ocp4-2nvq7-master-0     133
etcd-ocp4-2nvq7-master-1     114
etcd-ocp4-2nvq7-master-2     152

etcd-ocp-diag.py --path module6-must-gather.6521552859184261155/ --ttl --date 2024-07-28 --compare

Date: 04:02
POD                            COUNT
etcd-ocp4-2nvq7-master-0     8
etcd-ocp4-2nvq7-master-2     13

Date: 05:16
POD                            COUNT
etcd-ocp4-2nvq7-master-0     14
etcd-ocp4-2nvq7-master-1     12
etcd-ocp4-2nvq7-master-2     15

Debugging etcd can be complex, and there are often a handful of potential root causes. If you find a customer cluster that is showing signs of etcd performance issues, the best course of action is to open a support ticket to engage with our experts. You can also determine next steps by reviewing our detailed etcd debugging article at https://access.redhat.com/articles/6271341.