The information contained in this article has been verified as up‑to‑date on the date of the original publication of the article. HashiCorp endeavors to keep this information up‑to‑date and correct, but it makes no representations or warranties of any kind, express or implied, about the ongoing completeness, accuracy, reliability, or suitability of the information provided.
All information contained in this article is for general information purposes only. Any reliance you place on such information as it applies to your use of your HashiCorp product is therefore strictly at your own risk.
Introduction
After OS‑level or platform maintenance, a Vault node may unexpectedly remain sealed and show:
Removed From Cluster: true
in vault status.
In this state, the node cannot join Raft consensus or participate in cluster operations.
This behavior most often occurs when the node has been pruned from Raft membership, either:
- Automatically by Autopilot (health check failures, network isolation, quorum issues), or
- Manually by operators during maintenance
Once removed, the node’s local Raft state no longer reflects valid cluster state. Vault prevents the node from rejoining until that outdated state is cleared.
This article explains why the node appears sealed and removed, and provides the steps to safely restore it.
Problem
The affected node typically shows:
Sealed: true
Removed From Cluster: true
- Failure to join Raft consensus
- Inability to list peers
- Raft RPC or membership‑related log errors
Even if the Vault service is running, the node cannot operate until its local Raft state is rebuilt from healthy peers.
Prerequisites
This guide applies to:
- Vault Enterprise
- Clusters using Integrated Storage (Raft)
- Nodes using auto‑unseal or manual unseal
Operators must have:
- OS‑level access to the affected node
- Ability to inspect Vault logs
- Ability to inspect both:
  /opt/vault/data
  /opt/vault/data/raft
- Ability to restart Vault services
A valid retry_join stanza must be present; otherwise manual peering may be required.
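If you need to add one, a minimal sketch is shown below inside the `storage "raft"` stanza; the node ID, hostnames, and CA path are placeholders to adjust for your cluster:

```hcl
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "node-a"                # placeholder: unique ID for this node

  # One retry_join block per peer the node should attempt to contact
  retry_join {
    leader_api_addr = "https://vault-0.example.internal:8200"
    leader_ca_cert_file = "/etc/vault.d/tls/ca.pem"
  }
  retry_join {
    leader_api_addr = "https://vault-1.example.internal:8200"
    leader_ca_cert_file = "/etc/vault.d/tls/ca.pem"
  }
}
```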
Cause
The node has been removed from Raft membership. Common causes:
- Autopilot pruning (health failures, network isolation, quorum issues)
- Manual operator removal
- Disk failures or corruption in the Raft storage directory
When this occurs, the node’s local:
- vault.db (operational data), and
- raft/raft.db (Raft metadata)
no longer reflect the active cluster configuration. Vault treats the node as outside the cluster until outdated local state is cleared.
For a deeper explanation of what these files contain and how they work, see:
Understanding vault.db and raft.db in Vault Integrated Storage
Troubleshooting and Remediation Steps
Step 1: Validate the Node’s State
Run:
vault status
Indicators of this issue include:
Sealed: true
HA Cluster: n/a
- Messages indicating the node is removed from cluster membership
Attempt to list peers:
vault operator raft list-peers
Review the Vault logs:
journalctl -u vault -f
Common log entries include:
- "node is not in HA cluster membership" messages
- Raft RPC errors
- Backend initialization failures followed by seal activation
- Shutdown of cluster listener components
These confirm the node cannot initialize its Raft subsystem.
Step 2: Locate the Raft Data Directory
Check the storage configuration:
cat /etc/vault.d/vault.hcl
Example:
storage "raft" {
path = "/opt/vault/data"
}
Inside the storage path, you should see:
- vault.db
- raft/, containing:
  - raft.db
  - snapshots/
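The layout above can be checked with a short loop. This is a sketch, not part of the official procedure; it assumes the default /opt/vault/data path from the example config, and simply reports whether each expected entry exists:

```shell
# Verify the expected Integrated Storage layout.
# DATA_DIR defaults to the example path; override it if your config differs.
DATA_DIR="${DATA_DIR:-/opt/vault/data}"
for f in vault.db raft/raft.db raft/snapshots; do
  if [ -e "$DATA_DIR/$f" ]; then
    echo "found:   $DATA_DIR/$f"
  else
    echo "missing: $DATA_DIR/$f"
  fi
done
```

On a healthy node every entry should be reported as found; a missing raft/raft.db is consistent with the removed-from-cluster state described in this article.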
Step 3: Back Up Existing Raft Data
This provides an additional safeguard in case restoration is needed after clearing and rebuilding Raft state.
Note that this step is optional and may not be required for your use case. It creates a temporary backup; once the process is complete and verified, you can safely remove the backup files it generates. Be mindful of available disk space, as these backup files may introduce storage constraints on the node.
Stop Vault on the node
Note: Adjust commands for your platform (systemd vs Kubernetes, etc.)
sudo systemctl stop vault
Create file-level backups of vault.db and raft.db
cd /opt/vault/data
cp vault.db vault.db.bak
cp raft/raft.db raft.db.bak
Why this step matters
- These .db files will be removed in the next step to clear outdated Raft state.
- Backups provide a rollback option if the node cannot rejoin the cluster.
- The example uses cp (copy), but administrators may choose alternative methods such as mv (move) according to internal practices.
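One common variation on the backup commands above is to include a timestamp in the backup names, so repeated attempts do not overwrite each other. The sketch below demonstrates this against a throwaway temporary directory so it is safe to run anywhere; in production you would set DATA_DIR=/opt/vault/data and stop Vault first:

```shell
# Timestamped file-level backups of vault.db and raft/raft.db (illustrative sketch).
# A mock data directory stands in for /opt/vault/data so the commands can be tested safely.
DATA_DIR="$(mktemp -d)"                      # production: DATA_DIR=/opt/vault/data
mkdir -p "$DATA_DIR/raft"
touch "$DATA_DIR/vault.db" "$DATA_DIR/raft/raft.db"

STAMP="$(date +%Y%m%d-%H%M%S)"
cp "$DATA_DIR/vault.db"     "$DATA_DIR/vault.db.$STAMP.bak"
cp "$DATA_DIR/raft/raft.db" "$DATA_DIR/raft/raft.db.$STAMP.bak"
ls "$DATA_DIR" "$DATA_DIR/raft"
```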
Step 4: Remove the Node’s Existing Raft Data
Run these commands only on the affected node. Do NOT run these on the leader Vault node.
This clears outdated Raft data so the node can rebuild healthy state from the cluster leader.
cd /opt/vault/data
rm -f vault.db
rm -f raft/raft.db
Step 5: Start Vault
sudo systemctl start vault
If using manual Shamir unseal:
vault operator unseal
What happens next
After restart, the node will:
- Contact healthy Raft peers
- Receive the leader’s snapshot and log
- Rebuild vault.db and raft/raft.db
- Rejoin the cluster as a follower
Verify
vault status
vault operator raft list-peers
Expected:
Sealed: false
- Node listed as a follower
- No “removed from cluster” message
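The unseal check can also be scripted: `vault status -format=json` emits machine-readable output with a `sealed` field. The sketch below uses a canned sample response in place of live output so the parsing can be exercised without a running cluster; against a real node you would substitute the command itself:

```shell
# Scripted seal check (sketch). A canned JSON sample stands in for live output;
# on a real node use: STATUS_JSON="$(vault status -format=json)"
STATUS_JSON='{"sealed": false, "ha_enabled": true}'

if printf '%s\n' "$STATUS_JSON" | grep -q '"sealed": false'; then
  RESULT="unsealed"
else
  RESULT="sealed"
fi
echo "node is $RESULT"
```

A `sealed` value of `false` together with the node appearing as a follower in `vault operator raft list-peers` indicates the rejoin succeeded.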
Outcome
After completing these steps:
- The node unseals successfully
- Raft state is fully reconstructed
- The node rejoins the cluster
- Cluster redundancy is restored
- “Removed From Cluster: true” no longer appears
If issues persist:
- Confirm other peers are healthy
- Verify network connectivity
- Check disk space and permissions
- Review logs for repeated membership errors