What is Vault Eventual Consistency?
Vault Enterprise applies an eventual consistency model. Only one Vault node (the leader) can write to storage at any given time. Users generally expect read-after-write consistency: in other words, after writing foo=1, a subsequent read of foo should return 1. Depending on how Vault is configured, this isn't always the case. When using performance standbys with Integrated Storage, or when using performance replication, some sequences of operations do not yield read-after-write consistency.
Issue
A customer configured a static role in the Oracle database secrets engine to generate a password for one of their applications. The application later reads that password via API calls. When the application team called the GET credentials API, requests frequently failed with ERROR:400; per the customer, 8 out of 10 requests were failing.
Diagnosis
It turned out that the application team was issuing the GET request for the secret immediately after it was generated and written, which runs afoul of Vault's eventual consistency model: the read could land on a performance standby node that had not yet replicated the write.
Solution
The customer had to introduce a small delay and retry logic in the code that fetches the secret with the token, giving Vault time to replicate the secret to the standby nodes before it is read.
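The delay-and-retry approach can be sketched as follows. This is a minimal illustration, not the customer's actual code: `fetch_with_retry` and `fake_fetch` are hypothetical names, and a real application would wrap its HTTP client call to Vault instead of the simulated fetch used here.

```python
import time

def fetch_with_retry(fetch, retries=5, delay=0.5):
    """Retry a read that may fail while Vault replicates the write.

    `fetch` is any callable that returns the secret or raises on a
    400 response; real code would catch its HTTP client's error type.
    """
    last_err = None
    for attempt in range(retries):
        try:
            return fetch()
        except Exception as err:
            last_err = err
            time.sleep(delay * (2 ** attempt))  # simple exponential backoff
    raise last_err

# Simulated eventually-consistent read: fails twice, then succeeds,
# as if the standby node caught up after two replication intervals.
calls = {"n": 0}
def fake_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("400: secret not yet replicated")
    return {"password": "example"}

print(fetch_with_retry(fake_fetch, delay=0.01))
```

Exponential backoff keeps the total wait short when replication is fast while still tolerating a slow standby.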
Alternatively, for a short-running process, you can use a batch token instead of a service token. The lease attached to a batch token is never written to disk, so there is nothing to sync to the performance standby nodes; the client can authenticate to Vault without being affected by eventual consistency. For long-running processes, service tokens remain the better option.
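The choice between token types maps to the `type` parameter on Vault's token-create endpoint (`POST /v1/auth/token/create`). The sketch below only builds the request body rather than calling a live server; the policy name is a made-up example.

```python
def token_create_body(token_type="service", policies=None, ttl="15m"):
    """Build the JSON body for POST /v1/auth/token/create.

    token_type "batch" yields a lease-less token that is never
    persisted, so it is unaffected by standby replication lag.
    """
    if token_type not in ("service", "batch"):
        raise ValueError("token_type must be 'service' or 'batch'")
    return {
        "type": token_type,
        "policies": policies or ["default"],
        "ttl": ttl,
    }

# Short-lived job: request a batch token (policy name is hypothetical).
print(token_create_body("batch", policies=["read-db-creds"], ttl="5m"))
```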
Feature Update
Starting with Vault v1.10, service tokens embed additional metadata that is returned to the client as part of the token itself. If a client sends a request to a performance standby node that has not yet synced the token, the node uses this metadata to forward the request to the leader node automatically. No client configuration is required to enable this; the noticeable difference is a new prefix on all tokens:
| Token Type | Old Prefix | New Prefix |
|------------|------------|------------|
| Service    | s.         | hvs.       |
| Batch      | b.         | hvb.       |
The other change is the size of the token, which grows from 26 bytes to 92 bytes.
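The new prefixes make it easy to tell token types apart on the client side. A small illustrative helper (the token strings below are invented examples, not real tokens):

```python
def token_type(token):
    """Classify a Vault token by its prefix (Vault 1.10+ and legacy)."""
    prefixes = {
        "hvs.": "service",
        "hvb.": "batch",
        "s.": "service (legacy)",
        "b.": "batch (legacy)",
    }
    for prefix, kind in prefixes.items():
        if token.startswith(prefix):
            return kind
    return "unknown"

print(token_type("hvs.CAESIexample"))  # service
print(token_type("b.AAAexample"))      # batch (legacy)
```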