Summary
This article outlines the step-by-step process to restore a Consul snapshot from one Vault cluster (using Consul as the storage backend) to another Vault cluster also backed by Consul. This is useful in disaster recovery, environment cloning (e.g., staging to production), or migration scenarios.
Scope
- Source Vault Cluster uses Consul as its storage backend.
- Target Vault Cluster also uses Consul as its storage backend.
- The Vault binary version should be compatible across both clusters (recommended: same version).
- Both Consul clusters are accessible from the host performing the operations (or snapshots are securely transferred).
Prerequisites
- Vault binaries installed on both source and destination environments.
- Access to both Consul clusters (consul CLI or HTTP API).
- Source Vault is initialized and unsealed.
- Target Vault is either:
  - ❗ Not yet initialized — recommended for clean DR or environment bootstrapping, or
  - ⚠️ Already initialized — in which case restoring the snapshot will overwrite existing Vault data (make sure to back it up first).
- Consul snapshot permissions and connectivity.
- Secure copy tools (scp, rsync, etc.) or another method to transfer the snapshot between environments.
- The following versions are used in this KB article; any compatible versions may be used:
  - Consul: 1.11.4+ent
  - Vault: 1.15.2+ent
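Before taking or restoring a snapshot, it helps to confirm that the two clusters run compatible binary versions. A minimal sketch of a hypothetical `same_minor` helper is shown below; the version strings are illustrative, and in practice they would come from `vault version` and `consul version` on each host:

```shell
# Hypothetical helper: compare the major.minor portion of two version
# strings, e.g. as reported by `vault version` on each cluster.
same_minor() {
  a=$(echo "$1" | cut -d. -f1,2)
  b=$(echo "$2" | cut -d. -f1,2)
  [ "$a" = "$b" ] && echo "compatible" || echo "mismatch"
}

same_minor "1.15.2" "1.15.4"   # prints "compatible"
same_minor "1.15.2" "1.16.1"   # prints "mismatch"
```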
Procedures
Step 1: Run a Consul cluster locally with a single node to simulate the "PROD" environment
% consul agent -server -bind 10.118.101.66 -client=10.118.101.66 -data-dir=./prod/data/ -bootstrap=true -ui=true -node=prod-server
==> Starting Consul agent...
           Version: '1.11.4+ent'
           Node ID: '6e91dc98-9351-f6b9-1ae4-96877f3f677d'
         Node name: 'prod-server'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: true)
       Client Addr: [10.118.101.66] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
      Cluster Addr: 10.118.101.66 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false

==> Log data will now stream in as it occurs:
2025-07-08T12:40:02.383+0530 [WARN]  agent: bootstrap = true: do not enable unless necessary
2025-07-08T12:40:02.389+0530 [WARN]  agent.auto_config: bootstrap = true: do not enable unless necessary
...
Step 2: Run a Vault cluster with a single node using Consul as the storage backend
prod_vault_config.hcl
storage "consul" {
  address = "10.118.101.66:8500"
  path    = "vault/"
}

listener "tcp" {
  address     = "10.118.101.66:8200"
  tls_disable = true
}

ui            = true
cluster_addr  = "https://10.118.101.66:8201"
api_addr      = "http://10.118.101.66:8200"
disable_mlock = true
% vault server -config=prod_vault_config.hcl
==> Vault server configuration:

Administrative Namespace:
             Api Address: http://10.118.101.66:8200
                     Cgo: disabled
         Cluster Address: https://10.118.101.66:8201
   Environment Variables: COLORFGBG, COLORTERM, COMMAND_MODE, GODEBUG, HOME, HOMEBREW_CELLAR, HOMEBREW_PREFIX, HOMEBREW_REPOSITORY, INFOPATH, ITERM_PROFILE, ITERM_SESSION_ID, LANG, LC_TERMINAL, LC_TERMINAL_VERSION, LOGNAME, LaunchInstanceID, OLDPWD, PATH, PWD, SECURITYSESSIONID, SHELL, SHLVL, SSH_AUTH_SOCK, TERM, TERMINFO_DIRS, TERM_FEATURES, TERM_PROGRAM, TERM_PROGRAM_VERSION, TERM_SESSION_ID, TMPDIR, USER, VAULT_LICENSE, XPC_FLAGS, XPC_SERVICE_NAME, _, __CFBundleIdentifier, __CF_USER_TEXT_ENCODING
              Go Version: go1.21.3
              Listener 1: tcp (addr: "10.118.101.66:8200", cluster address: "10.118.101.66:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level:
                   Mlock: supported: false, enabled: false
           Recovery Mode: false
                 Storage: consul (HA available)
                 Version: Vault v1.15.2+ent, built 2023-11-07T13:52:33Z
             Version Sha: 8b6cdc3100961bfd91cf03cfb5eaa0a2448199b5

==> Vault server started! Log data will stream in below:
2025-07-08T12:42:52.315+0530 [INFO]  proxy environment: http_proxy="" https_proxy="" no_proxy=""
2025-07-08T12:42:52.332+0530 [INFO]  incrementing seal generation: generation=1
...
Initialise and unseal the Vault cluster:
% vault operator init -key-shares=1 -key-threshold=1 | tee prod_init.txt
% vault operator unseal <prod_unseal_key>
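The unseal key can be pulled out of the tee'd init output rather than copied by hand. A sketch is shown below; the file written here is a stand-in with the standard `vault operator init` output format (the key and token values are fabricated), and in practice you would read the real prod_init.txt and protect it carefully:

```shell
# Stand-in for the captured init output; real contents come from
# `vault operator init ... | tee prod_init.txt`.
cat > /tmp/prod_init.txt <<'EOF'
Unseal Key 1: abcDEF123example=
Initial Root Token: hvs.exampletoken
EOF

# Extract the first unseal key from the captured output.
UNSEAL_KEY=$(grep 'Unseal Key 1' /tmp/prod_init.txt | awk -F': ' '{print $2}')
echo "$UNSEAL_KEY"
# vault operator unseal "$UNSEAL_KEY"
```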
Create some sample KV secrets and enable an auth method on Vault:
% vault auth enable kubernetes
Success! Enabled kubernetes auth method at: kubernetes/

% vault secrets enable -path=secret kv-v2
Success! Enabled the kv-v2 secrets engine at: secret/

% vault kv put -mount=secret test1/test2 key=1234
===== Secret Path =====
secret/data/test1/test2

======= Metadata =======
Key                Value
---                -----
created_time       2025-07-08T07:20:55.741173Z
custom_metadata    <nil>
deletion_time      n/a
destroyed          false
version            1

% vault auth list
Path           Type          Accessor                   Description                Version
----           ----          --------                   -----------                -------
kubernetes/    kubernetes    auth_kubernetes_c1f5bdfd   n/a                        n/a
token/         token         auth_token_9a897d81        token based credentials    n/a

% vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.15.2+ent
Build Date      2023-11-07T13:52:33Z
Storage Type    consul
Cluster Name    vault-cluster-4902ef6e
Cluster ID      afe6416d-3819-1d14-2fc9-8f987835c951
HA Enabled      true
HA Cluster      https://10.118.101.66:8201
HA Mode         active
Active Since    2025-07-08T07:15:51.166347Z
Last WAL        45
Take a Consul snapshot from the "PROD" cluster:
% consul snapshot save prod_snap_8_Jul_25.snap
Saved and verified snapshot to index 299
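Before transferring the snapshot to the target environment (with scp, rsync, etc.), it is worth recording a checksum so you can confirm the file arrived intact; `consul snapshot inspect <file>` is also useful for sanity-checking a snapshot. A sketch of the checksum step, demonstrated on a stand-in file (in practice you would run this against prod_snap_8_Jul_25.snap):

```shell
# Stand-in for the real snapshot file.
printf 'fake snapshot bytes' > /tmp/demo.snap

# Record the checksum alongside the snapshot before copying it.
sha256sum /tmp/demo.snap | cut -d' ' -f1 > /tmp/demo.snap.sha256

# ... after scp/rsync, on the target host, verify the copy:
test "$(sha256sum /tmp/demo.snap | cut -d' ' -f1)" = "$(cat /tmp/demo.snap.sha256)" \
  && echo "snapshot intact" || echo "checksum mismatch"
```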
Step 3: Run a Consul cluster locally with a single node to simulate the "DEV" environment
% consul agent -server -bind 127.0.0.1 -client=127.0.0.1 -data-dir=./dev/data/ -bootstrap=true -ui=true -node=dev-server
==> Starting Consul agent...
           Version: '1.11.4+ent'
           Node ID: '91521d19-921c-a7d5-d378-5725d6a9ad85'
         Node name: 'dev-server'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: true)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
      Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false

==> Log data will now stream in as it occurs:
2025-07-08T12:53:13.320+0530 [WARN]  agent: bootstrap = true: do not enable unless necessary
2025-07-08T12:53:13.323+0530 [WARN]  agent.auto_config: bootstrap = true: do not enable unless necessary
...
Step 4: Run a Vault cluster with a single node using Consul as the storage backend
dev_vault_config.hcl
storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = true
}

ui            = true
cluster_addr  = "https://127.0.0.1:8201"
api_addr      = "http://127.0.0.1:8200"
disable_mlock = true
% vault server -config=dev_vault_config.hcl
==> Vault server configuration:

Administrative Namespace:
             Api Address: http://127.0.0.1:8200
                     Cgo: disabled
         Cluster Address: https://127.0.0.1:8201
   Environment Variables: COLORFGBG, COLORTERM, COMMAND_MODE, GODEBUG, HOME, HOMEBREW_CELLAR, HOMEBREW_PREFIX, HOMEBREW_REPOSITORY, INFOPATH, ITERM_PROFILE, ITERM_SESSION_ID, LANG, LC_TERMINAL, LC_TERMINAL_VERSION, LOGNAME, LaunchInstanceID, OLDPWD, PATH, PWD, SECURITYSESSIONID, SHELL, SHLVL, SSH_AUTH_SOCK, TERM, TERMINFO_DIRS, TERM_FEATURES, TERM_PROGRAM, TERM_PROGRAM_VERSION, TERM_SESSION_ID, TMPDIR, USER, VAULT_LICENSE, XPC_FLAGS, XPC_SERVICE_NAME, _, __CFBundleIdentifier, __CF_USER_TEXT_ENCODING
              Go Version: go1.21.3
              Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level:
                   Mlock: supported: false, enabled: false
           Recovery Mode: false
                 Storage: consul (HA available)
                 Version: Vault v1.15.2+ent, built 2023-11-07T13:52:33Z
             Version Sha: 8b6cdc3100961bfd91cf03cfb5eaa0a2448199b5

==> Vault server started! Log data will stream in below:
2025-07-08T12:55:05.664+0530 [INFO]  proxy environment: http_proxy="" https_proxy="" no_proxy=""
2025-07-08T12:55:05.666+0530 [INFO]  incrementing seal generation: generation=1
...
Initialise and unseal the Vault cluster:
% vault operator init -key-shares=1 -key-threshold=1 | tee dev_init.txt
% vault operator unseal <dev_unseal_key>
% vault auth list
Path      Type     Accessor              Description                Version
----      ----     --------              -----------                -------
token/    token    auth_token_cffb51a6   token based credentials    n/a
Restore the Consul snapshot taken from the "PROD" environment into the "DEV" cluster:
% consul snapshot restore prod_snap_8_Jul_25.snap
Restored snapshot
After the snapshot restore, Vault goes into a sealed state. To unseal it, use the unseal keys from the "PROD" cluster.
% vault status
Key                Value
---                -----
Seal Type          shamir
Initialized        true
Sealed             true
Total Shares       1
Threshold          1
Unseal Progress    0/1
Unseal Nonce       n/a
Version            1.15.2+ent
Build Date         2023-11-07T13:52:33Z
Storage Type       consul
HA Enabled         true
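In an automated restore procedure, the sealed state can be detected programmatically instead of eyeballing the table above. A sketch using `vault status -format=json` output; the JSON written here is a stand-in for what the real command would return:

```shell
# Stand-in for `vault status -format=json > /tmp/dev_status.json`.
cat > /tmp/dev_status.json <<'EOF'
{"initialized": true, "sealed": true}
EOF

# Detect the sealed flag and prompt for the PROD unseal keys.
if grep -q '"sealed": *true' /tmp/dev_status.json; then
  echo "vault is sealed - unseal with the PROD unseal keys"
fi
```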
% vault operator unseal <prod_unseal_key>
% vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.15.2+ent
Build Date      2023-11-07T13:52:33Z
Storage Type    consul
Cluster Name    vault-cluster-20faba45
Cluster ID      afe6416d-3819-1d14-2fc9-8f987835c951
HA Enabled      true
HA Cluster      https://127.0.0.1:8201
HA Mode         active
Active Since    2025-07-08T07:35:31.208781Z
Last WAL        47
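Note that after the restore the "DEV" cluster reports the same Cluster ID as "PROD" (afe6416d-…), which is a quick way to confirm the restore actually took effect. A sketch of that check, using stand-in captures of `vault status` from each side:

```shell
# Stand-ins for `vault status` captured on each cluster; the Cluster ID value
# below is the one observed in this walkthrough.
printf 'Cluster ID      afe6416d-3819-1d14-2fc9-8f987835c951\n' > /tmp/prod_status.txt
printf 'Cluster ID      afe6416d-3819-1d14-2fc9-8f987835c951\n' > /tmp/dev_status.txt

# Compare the Cluster ID fields from the two captures.
p=$(awk '/Cluster ID/ {print $3}' /tmp/prod_status.txt)
d=$(awk '/Cluster ID/ {print $3}' /tmp/dev_status.txt)
[ "$p" = "$d" ] && echo "cluster IDs match - restore verified"
```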
In the "DEV" cluster, validate that the auth method and KV secret data were populated through Consul's snapshot:
% vault kv get -mount=secret test1/test2
===== Secret Path =====
secret/data/test1/test2

======= Metadata =======
Key                Value
---                -----
created_time       2025-07-08T07:20:55.741173Z
custom_metadata    <nil>
deletion_time      n/a
destroyed          false
version            1

=== Data ===
Key    Value
---    -----
key    1234
% vault auth list
Path           Type          Accessor                   Description                Version
----           ----          --------                   -----------                -------
kubernetes/    kubernetes    auth_kubernetes_c1f5bdfd   n/a                        n/a
token/         token         auth_token_9a897d81        token based credentials    n/a
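The validation above can be scripted by capturing `vault auth list` from both clusters to files and diffing them. A sketch with stand-in captures (real ones would come from running the command against each cluster's VAULT_ADDR):

```shell
# Stand-ins for `vault auth list` output captured from each cluster.
printf 'kubernetes/\ntoken/\n' > /tmp/prod_auth.txt
printf 'kubernetes/\ntoken/\n' > /tmp/dev_auth.txt

# An empty diff means the restore carried the auth mounts over.
diff /tmp/prod_auth.txt /tmp/dev_auth.txt && echo "auth mounts match"
```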
Conclusion
Restoring a Vault cluster backed by Consul from a snapshot is a powerful way to recover, migrate, or clone environments. By leveraging Consul’s native snapshot capabilities, you can safely move Vault's underlying data between clusters. Always validate the restored environment thoroughly, and ensure proper security practices are followed, especially around unseal keys and access tokens.