Introduction
If you are operating a Vault cluster on a traditional platform such as bare metal or VMs, with a storage backend such as AWS DynamoDB, and wish to migrate both the Vault cluster and its backend storage to Integrated Storage (Raft) on a new Vault cluster on Kubernetes, this guide can be used as a reference.
Prerequisites
- After the storage migration process has been completed, you will need access to relocate files within the Kubernetes Persistent Volume Claim (PVC) used by the Vault pod.
- Administrative level access to the Kubernetes cluster on which the destination Vault cluster will reside.
- Credentials allowing access to the existing storage backend (AWS DynamoDB in this example), as these are required for the storage migration.
- The existing Vault cluster using the DynamoDB storage backend must be shut down while the migration process is run.
- A new Vault deployment to Kubernetes should be running with one replica/pod and be in an uninitialised state.
- The new Vault deployment should be configured with the same seal method as the existing Vault cluster.
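Before starting, the uninitialised state of the new deployment can be confirmed with `vault status`; a quick check (note that `vault status` exits non-zero while a node is sealed or uninitialised, so a non-zero exit code here is expected):

```shell
# The new pod should report Initialized: false and Sealed: true
# before any migration work begins.
kubectl -n vault exec vault-0 -- vault status
```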
Procedure
- Review the `vault operator migrate` documentation to ensure familiarity with the migration process.
- Deploy the new Vault cluster to Kubernetes with one replica/pod, but do not initialise Vault.
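A minimal deployment sketch using the official Helm chart, assuming the release and namespace are both named `vault` (adjust to your environment). HA mode with Raft is enabled but scaled to a single replica for the migration; the seal configuration matching the existing cluster would also be supplied, e.g. via the `server.ha.raft.config` value:

```shell
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install vault hashicorp/vault --namespace vault --create-namespace \
  --set server.ha.enabled=true \
  --set server.ha.raft.enabled=true \
  --set server.ha.replicas=1
```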
- Locate the `data-vault-0` PVC associated with the singular Vault pod, i.e. `kubectl -n vault get pvc`.
- Open a shell on the Vault pod once ready, i.e. `kubectl -n vault exec -ti vault-0 -- sh`.
- Run `cd ~`.
- Create the file `migrate.hcl` and populate it with the relevant `storage_source` and `storage_destination` values. Note: the `path` value must match the `mountPath` value used for the new Vault cluster deployment (see this reference in the Helm chart); however, a sub-folder of this path which is not currently in use must be specified.
```hcl
storage_source "dynamodb" {
  region = "ap-southeast-2"
  table  = "vault-cluster"
}

storage_destination "raft" {
  path    = "/vault/data/dyn-migrate"
  node_id = "raft_node_1_k8s_was_dynamodb"
}

cluster_addr = "http://127.0.0.1:8201"
```
- Create the destination directory in the Vault pod which corresponds to the `path` value: `mkdir /vault/data/dyn-migrate`
- Set the Vault log level to trace for maximum visibility via `export VAULT_LOG_LEVEL=trace`.
- Provide credentials to the Vault pod which can be used to access the existing storage backend, i.e. `export AWS_ACCESS_KEY_ID=AXHHWW; export AWS_SECRET_ACCESS_KEY=XYZ; export AWS_SESSION_TOKEN=432432423KS`
- Stop the existing Vault cluster when ready to perform the storage migration.
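How the existing cluster is stopped depends on how it was installed; on a typical systemd-managed VM or bare-metal node, a sketch (assuming the service unit is named `vault`) would be:

```shell
# Run on each node of the existing Vault cluster.
sudo systemctl stop vault
```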
- Return to the shell on the Vault pod.
  - Begin the migration by running `vault operator migrate -config migrate.hcl`.
  - Once completed, the following message should be visible: `Success! All of the keys have been migrated.`
  - Make a note of the timestamps in the migration log.
  - Exit the shell on the Vault pod.
- The storage lock placed on the source storage backend will now have been released, and you can optionally restart the Vault service on the existing Vault cluster.
  - Ideally this should be avoided, as any writes or changes to the cluster storage will not be carried over to the new cluster, potentially leading to a split-brain scenario.
  - If the Vault use case is servicing read requests for static KV secrets and no writes occur, restarting the Vault service on the existing Vault cluster is acceptable.
- Patch the Vault StatefulSet on the Kubernetes cluster to change the `spec.replicas` field from 1 to 0, for example: `kubectl -n vault patch statefulset vault --type='json' -p='[{"op": "replace", "path": "/spec/replicas", "value": 0}]'`
  - Confirm the number of Vault pods has changed from 1 to 0.
- Establish edit access to the PVC for the `data-vault-0` volume.
  - Confirm the PVC has a `raft` directory and a `vault.db` file.
    - Their creation timestamps should correspond to the deployment of Vault to the Kubernetes cluster.
  - Confirm the PVC has a `dyn-migrate` directory present, and within it another `raft` directory and `vault.db` file.
    - The creation timestamps on these should correspond to the previously noted time when the migration was run, and should be more recent than those of the first set of files.
  - Move the `raft` directory and `vault.db` file from within the `dyn-migrate` directory to the root of the PVC, replacing the older files, as shown in the sketch after this list.
    - As the files being replaced were created when Vault was deployed to the Kubernetes cluster and Vault was not initialised (per the Prerequisites section at the top of this document), no data should be lost.
  - Exit the shell used to edit the PVC.
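A minimal sketch of the relocation, assuming the PVC contents are accessible at `/vault/data` (the Helm chart's default `mountPath`) from whichever pod or host is used to edit the volume; the `dyn-migrate` sub-folder name matches the `migrate.hcl` example above:

```shell
cd /vault/data

# Remove the empty pre-initialisation files created when the
# uninitialised Vault pod first started.
rm -rf raft vault.db

# Promote the migrated data to the root of the volume, then remove
# the now-empty staging directory.
mv dyn-migrate/raft dyn-migrate/vault.db .
rmdir dyn-migrate
```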
- Patch the Vault StatefulSet on the Kubernetes cluster to change the `spec.replicas` field from 0 to 1, for example: `kubectl -n vault patch statefulset vault --type='json' -p='[{"op": "replace", "path": "/spec/replicas", "value": 1}]'`
- Vault should now have one running pod in the Kubernetes cluster; confirm with `kubectl -n vault get po`.
- If Vault is using a form of auto-unseal, the Vault service should now report as follows via `kubectl -n vault exec vault-0 -- vault status`:
  - `Initialized` status should be `true`.
  - `Sealed` status should be `false`.
- If Vault is using a Shamir seal, open a shell to the pod and complete the unseal process using the Shamir keys which were previously used to unseal the existing Vault cluster with the DynamoDB storage backend. `vault status` output should now show:
  - `Initialized` status should be `true`.
  - `Sealed` status should be `false`.
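A sketch of the unseal process, assuming the default threshold of three key shares; each invocation prompts for one Shamir key:

```shell
# Repeat until the unseal threshold is reached (three keys by default).
kubectl -n vault exec -ti vault-0 -- vault operator unseal
```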
- Authenticate to Vault and validate the presence of expected configuration and secrets engine data.
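For example, a quick spot-check might look like the following, assuming a token with sufficient read access and a KV secrets engine mounted at `secret/` (adjust the mount path to your environment):

```shell
# Authenticate (prompts for a token), list the mounted secrets engines,
# then spot-check data in an assumed KV mount at secret/.
kubectl -n vault exec -ti vault-0 -- vault login
kubectl -n vault exec -ti vault-0 -- vault secrets list
kubectl -n vault exec -ti vault-0 -- vault kv list secret/
```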
- Patch the Vault StatefulSet on the Kubernetes cluster, or update the replicas count in the Helm chart and apply the changes, to reach the desired number of replicas, with a minimum of 3.
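If the deployment is managed via the Helm chart, the replica count can be raised with `helm upgrade`; a sketch assuming the release is named `vault` and the deployment's existing settings live in a hypothetical `values.yaml` in which `server.ha.replicas` has been raised to 3:

```shell
# values.yaml already contains the deployment's settings,
# with server.ha.replicas set to 3.
helm upgrade vault hashicorp/vault --namespace vault -f values.yaml
```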
- Join the new replicas/pods to the Vault cluster, unseal as necessary, and confirm their membership in the cluster via the `vault operator members` command.
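For each new pod, a join-and-verify sketch, assuming the chart's default internal service name (`vault-internal`), namespace `vault`, and a non-TLS listener (use `https://` if TLS is configured):

```shell
# Join the second pod to the Raft cluster via the first node's API address.
kubectl -n vault exec -ti vault-1 -- vault operator raft join \
  http://vault-0.vault-internal:8200

# Repeat for each additional pod, unseal if using Shamir, then verify.
kubectl -n vault exec -ti vault-0 -- vault operator members
```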