This guide provides a detailed step-by-step process for migrating from an HCP-managed Consul cluster to a self-managed Consul Enterprise cluster using snapshot restoration.
The process begins with creating a snapshot of the HCP-managed cluster, which is transferred and restored to the self-managed environment. After restoration, the client configurations are updated to point to the new cluster. Finally, clients are restarted, and connectivity is verified before decommissioning the HCP cluster.
NOTE: HashiCorp strongly recommends testing the migration in a development environment before attempting it in production. Customers should schedule a period of inactivity, using Consul Maintenance Mode, to reduce the risk of data loss or sync issues.
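For example, a node can be placed into maintenance mode from its local agent ahead of the cutover (a minimal sketch; the -reason text is arbitrary):
# Enable maintenance mode on the local node, taking it out of service discovery
consul maint -enable -reason "HCP to self-managed migration"
# After the migration completes, return the node to service
consul maint -disable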
Prerequisites
- An HCP Consul Dedicated cluster and a self-managed Consul cluster with matching configurations.
  - This guide assumes the cluster has 3 server nodes.
- ACLs, TLS, and gossip encryption must be enabled.
  - The gossip encryption key should be set to the same value that the client agents in the cluster are using.
  - If there are no client agents in the cluster, this step isn't mandatory, although gossip encryption is strongly recommended for any Consul cluster.
- Identify and prepare any client nodes that will be migrated.
- Access to the command line on both the HCP and self-managed clusters to run migration commands.
- Ensure VPC or peering connectivity between the clusters for seamless communication.
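To confirm the gossip encryption key the existing agents are using, the keyring can be listed from any agent (this assumes an ACL token with operator permissions):
# Lists the gossip encryption keys installed on all reachable agents
consul keyring -list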
NOTE: Following this flow results in a temporary outage between restoring the snapshot to the self-managed cluster and forcing all client agents to rejoin the new Consul Enterprise cluster; during this window, reads and writes to Consul APIs, other than those served by the local client agents, will fail. Additionally, any data written to the HCP servers after the snapshot is taken will not be present in the new cluster. In practice, this means downtime lasts from Step 1 through Step 5.
Downtime also grows with the number of clients, the complexity of the environment, and the amount of automation involved, all of which increase the time required to switch clients over. Users with many clients should expect correspondingly longer delays.
Migration Steps
Step 1: Take a Snapshot from the HCP Cluster
A snapshot is a backup of your HCP Consul cluster’s state. This snapshot will be restored in the new self-managed environment.
consul snapshot save /path/to/hcp-cluster.snapshot
Example:
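When the CLI is run from outside the HCP environment, it must be pointed at the HCP cluster's endpoint and supplied an ACL token. The following sketch uses the standard CONSUL_HTTP_ADDR and CONSUL_HTTP_TOKEN environment variables; the placeholder values are illustrative:
# Point the CLI at the HCP-managed cluster and authenticate
export CONSUL_HTTP_ADDR="https://<hcp-cluster-endpoint>"
export CONSUL_HTTP_TOKEN="<hcp-acl-token>"
# Save the snapshot locally
consul snapshot save /path/to/hcp-cluster.snapshot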
For more info, see Taking Snapshots
Step 2: Transfer the Snapshot to the Self-Managed Cluster
Use a secure copy (SCP) command to move the snapshot file to the self-managed Consul cluster.
scp /path/to/hcp-cluster.snapshot <user>@<self-managed-node>:/path/to/destination
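Optionally, confirm the file arrived intact by comparing checksums on the two hosts (assuming sha256sum is available on both):
# On the source machine
sha256sum /path/to/hcp-cluster.snapshot
# On the self-managed node; the two hashes should match
sha256sum /path/to/destination/hcp-cluster.snapshot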
Step 3: Restore the Snapshot on the Self-Managed Cluster
Once the snapshot file has been transferred to the self-managed cluster, restore the cluster's state from the snapshot into your self-managed environment.
consul snapshot restore /path/to/destination/hcp-cluster.snapshot
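You can also inspect the snapshot's metadata, before or after restoring, as a quick sanity check that the file is intact:
# Prints metadata such as the snapshot's ID, size, Raft index, and term
consul snapshot inspect /path/to/destination/hcp-cluster.snapshot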
For more info, see Restoring Snapshots
Step 4: Update the Client Configuration File
Modify the client configuration to point to the new cluster by updating the following parameters:
1. Datacenter:
# This value is set to "hcp-managed" for HCP Consul clusters
# Change it to match your self-managed datacenter name (here "dc1")
datacenter = "dc1"
2. Join addresses:
# Similarly, be sure to set IPs under "retry_join" from the HCP Managed IPs to your self-managed IPs
retry_join = ["<new-server-IP>"]
3. TLS certificate:
tls {
  defaults {
    verify_incoming = true
    verify_outgoing = true
  }
}

# auto_encrypt is a top-level stanza, separate from the tls block.
# On client agents, "tls = true" requests TLS certificates from the servers;
# the corresponding server-side setting is "allow_tls = true".
auto_encrypt {
  tls = true
}
4. ACL Agent token:
# Use the SecretID generated earlier with 'consul acl bootstrap'
acl {
  enabled = true
  default_policy = "deny"
  enable_token_persistence = true
  tokens {
    agent = "8945c65f-65we-1414-fds8-a1r6d5s688ww"
  }
}
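After updating these parameters, the configuration can be checked for syntax errors before restarting the agent (this assumes the config files live in /etc/consul.d):
# Validates all Consul configuration files in the directory
consul validate /etc/consul.d/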
For more info, see Client Configuration
Step 5: Restart the Client Agent
Restart the client to apply the updated configuration and reconnect it to the new cluster.
sudo systemctl restart consul
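If the agent does not rejoin cleanly, its logs are the first place to look (assuming Consul runs as a systemd unit named consul):
# Follow the agent's logs and watch for it joining the new servers
journalctl -u consul -f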
Step 6: Verify the Migration
- Access the self-managed cluster UI and confirm that all nodes and services are correctly connected.
- Run the following to confirm the clients were migrated successfully; each migrated client should be listed with a Status of "alive" and the new datacenter:
consul members
- In the HCP cluster, ensure the client appears as inactive or left.
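Optionally, confirm that the self-managed servers have formed a healthy Raft quorum (assuming three server nodes, as in the prerequisites, and a token with operator permissions):
# Lists the Raft peers; expect all three servers, with one marked as leader
consul operator raft list-peers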
Step 7: Disconnect and Decommission the HCP Cluster
Once you confirm the migration is successful:
- Delete any VPC peering if no longer needed.
- Terminate the HCP-managed cluster from the portal.
Conclusion
Following these steps ensures a smooth migration from HCP Consul to a self-managed cluster, with downtime limited to the window described above. Verify all configurations and confirm that client nodes are correctly connected to the new environment before decommissioning the old cluster.