Introduction
Possible IOPS contention appearing in your metrics and cluster behavior
There are times when a problem in a cluster presents itself as an IOPS issue, showing up as slow writes, context deadline exceeded errors, and other symptoms.
When troubleshooting IOPS issues in your Vault cluster, the tool fio can come in very handy. The command and output below are examples of what HashiCorp support has used in the past to either confirm that a problem is IOPS-related or rule IOPS out. Ruling out IOPS lets the support engineer quickly move past storage performance and focus on other areas, streamlining the time it takes to resolve an issue.
NOTE: fio is a third-party tool, and any issues running the tool itself are not supported by HashiCorp.
The command
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=fiotest --filename=testfio --bs=4k --iodepth=64 \
    --size=8G --readwrite=randrw
Run it from a directory on the volume that backs Vault's storage. Note that fio does not delete the test file when it finishes, so remove testfio afterwards.
The flags
randrepeat
- Seed the random number generator used for random I/O patterns in a predictable way so the pattern is repeatable across runs. Default: true.
ioengine
- Defines how the job issues I/O to the file. libaio is the Linux-native asynchronous I/O engine; see the fio documentation for the full list.
direct
- If true, use non-buffered I/O. This is usually O_DIRECT. Note that OpenBSD and ZFS on Solaris don't support direct I/O, and on Windows the synchronous ioengines don't support it. Default: false.
gtod_reduce
- Reduces calls to gettimeofday(2) to lower measurement overhead.
name
- Name of the test job.
filename
- Name of the file to write to.
bs
- The block size for each I/O unit. Defaults to 4k.
iodepth
- Number of I/O units to keep in flight against the file.
size
- Total size of the file to write. 8G is fine for a basic test and can be set higher.
readwrite (alias: rw)
- The type of I/O pattern to use; this decides whether the I/O is issued sequentially or randomly (randrw mixes random reads and writes).
Further settings can be found in the fio documentation (linked below), but for a basic test the above should work just fine.
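The same options can also be kept in a fio job file instead of on the command line, which makes it easy to re-run an identical test on every node. A minimal sketch; the file and section names here are arbitrary:

```ini
; fiotest.fio -- job-file equivalent of the command above
[fiotest]
randrepeat=1
ioengine=libaio
direct=1
gtod_reduce=1
filename=testfio
bs=4k
iodepth=64
size=8G
readwrite=randrw
```

Run it with fio fiotest.fio.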
Understanding the output
read
- The results for the read portion of the run.
bw (KiB/s)
- Bandwidth of the reads.
iops
- IOPS achieved for reads.
write
- The results for the write portion of the run.
bw (KiB/s)
- Bandwidth of the writes.
iops
- IOPS achieved for writes.
cpu
- CPU usage during the test.
IO depths
- The distribution of I/O depths over the job lifetime.
submit
- How many pieces of I/O were submitted in a single submit call.
complete
- Identical to submit, but for completions.
issued rwts
- Total issued read, write, and trim requests.
latency
- Latency figures for the run.
Run status group 0 (all jobs):
This section is a summary of the READS and WRITES during the test.
Disk stats (read/write):
Stats of each particular disk that was engaged during the test.
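When reading the read and write lines, note that the reported bandwidth and IOPS are tied together by the block size (bw is roughly IOPS multiplied by bs), which is a useful sanity check. A small sketch; the numbers are made-up illustrations, not real fio output:

```shell
# Bandwidth reported by fio is roughly IOPS multiplied by the block size.
# These figures are fabricated for illustration, not real fio output.
iops=15300
bs_kib=4                  # matches the --bs=4k flag
bw_kib=$((iops * bs_kib))
echo "${bw_kib} KiB/s"    # prints "61200 KiB/s"
```

If the bw and iops fields in a real run don't roughly agree this way, the block size in effect probably isn't what you expected.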
Why we have you run this
There are times when a problem within your cluster may present itself as an IOPS issue. To verify whether that is the case, run this command from a node against the underlying storage. If the measured IOPS is higher than Vault's current IOPS usage, the storage system has sufficient IOPS, and other avenues should be explored, such as network bandwidth throttling or configuration within the Vault cluster itself.
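To get Vault's current IOPS usage for that comparison, one option on Linux is to diff two samples of /proc/diskstats for the disk backing Vault's storage. A sketch with fabricated sample lines; in practice, read the real lines from /proc/diskstats and substitute your own device name for sda:

```shell
# Compute reads/s and writes/s from two /proc/diskstats samples taken
# $interval seconds apart. In /proc/diskstats, field 4 is reads completed
# and field 8 is writes completed. The sample lines below are fabricated
# for illustration only.
interval=5
sample1="8 0 sda 120000 0 0 0 45000 0 0 0 0 0 0"
sample2="8 0 sda 121500 0 0 0 45500 0 0 0 0 0 0"
r1=$(echo "$sample1" | awk '{print $4}'); w1=$(echo "$sample1" | awk '{print $8}')
r2=$(echo "$sample2" | awk '{print $4}'); w2=$(echo "$sample2" | awk '{print $8}')
reads_s=$(( (r2 - r1) / interval ))    # 300 reads/s in this example
writes_s=$(( (w2 - w1) / interval ))   # 100 writes/s in this example
echo "reads/s: ${reads_s}  writes/s: ${writes_s}"
```

Tools such as iostat report the same rates directly (r/s and w/s columns) if the sysstat package is available on the node.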
Additional links
https://fio.readthedocs.io/en/latest/fio_doc.html
https://fio.readthedocs.io/en/latest/fio_doc.html#command-line-options
https://fio.readthedocs.io/en/latest/fio_doc.html#interpreting-the-output