This guide provides a step-by-step process for migrating your Kubernetes (K8s) service mesh resources from HCP Consul to a self-managed Consul Enterprise cluster. It assumes that network connectivity (e.g., VPC peering, security group rules) is already established and that the self-managed cluster runs a Consul version compatible with the existing installation. Additionally, a snapshot of the HCP-managed cluster should be restored to the self-managed environment before you begin. For instructions on migrating HCP Consul resources to a self-managed environment, see:
How to Migrate from HCP Consul to a Self-Managed Consul Cluster Using Snapshot Restoration
NOTE: HashiCorp strongly recommends testing this migration in a development environment before migrating production. Following this flow results in a temporary outage between restoring the snapshot to the self-managed cluster and forcing all client agents to rejoin the new Consul Enterprise cluster; during this window, reads and writes to Consul APIs, other than those handled by the local client agents, will fail. Downtime increases with the number of clients, the complexity of the environment, and any automation involved, so expect longer delays when many clients must be switched. Customers should schedule a period of inactivity using Consul Maintenance Mode to address potential data loss or synchronization concerns.
Prerequisites
- Existing HCP Consul and self-managed Consul clusters
- Network connectivity (VPC peering, security group rules)
- A Kubernetes cluster compatible with the target Consul version
- Snapshot of the HCP-managed cluster restored on the self-managed cluster
- Access to Consul CLI and Kubernetes command-line tools
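For reference, the snapshot restoration covered in the linked guide uses the consul snapshot commands. The addresses and token below are placeholders; see that guide for the full procedure:
# Take a snapshot from the HCP-managed cluster, then restore it to the self-managed cluster
consul snapshot save -http-addr=https://<hcp-managed-endpoint> -token=<token> backup.snap
consul snapshot restore -http-addr=https://<self-managed-endpoint> -token=<token> backup.snap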
Step 1: Update CoreDNS Configuration
To ensure the Kubernetes environment can resolve the hostname of the HCP-managed cluster, configure CoreDNS with a host entry that points to the HCP-managed cluster's IP.
- Edit the CoreDNS configuration by adding the IP and hostname:
  Corefile: |-
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        hosts {
            35.91.49.134 server.hcp-managed.consul
            fallthrough
        }
        prometheus 0.0.0.0:9153
        forward . 8.8.8.8 8.8.4.4 /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
- Confirm the hostname resolves to the HCP-managed cluster's IP from inside a deployed pod; a verification sketch follows at the end of this step. Fetch the Cluster IP of the cluster DNS service using:
kubectl -n kube-system get svc
- If the nameserver does not match the Cluster IP, verify the cluster-dns field in the kubelet configuration on each Kubernetes worker node and restart kubelet.
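A quick way to verify resolution, assuming the cluster DNS service is named kube-dns and the pod image includes nslookup (both are assumptions; adjust to your environment):
# ClusterIP of the cluster DNS service
kubectl -n kube-system get svc kube-dns
# The nameserver used by the pod should match that ClusterIP
kubectl exec -it <pod-name> -- cat /etc/resolv.conf
# The hostname should resolve to the HCP-managed cluster's IP
kubectl exec -it <pod-name> -- nslookup server.hcp-managed.consul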
Step 2: Update Consul Configuration
Modify the Consul Helm values.yaml file for the self-managed cluster by adding the following configurations (a consolidated sketch follows this list):
- Server Host:
  - Update the server host to match the hostname specified in CoreDNS.
  - Example: server.hcp-managed.consul
- CA Certificates:
  - Create a Kubernetes secret in the Consul namespace, aggregating the required CA files.
  - Add the CA file contents for the self-managed server to the end of the CA file.
- TLS Server Name:
  - Set tlsServerName to the hostname of the managed cluster.
  - Note: If the value is incorrect, Consul will display possible values in the error logs during the next upgrade step.
- System Roots:
  - Set useSystemRoots to false to use the new CA certs provided by the Kubernetes secret.
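Putting these together, a minimal values.yaml sketch covering only the relevant keys might look like the following. The secret name consul-ca-cert and key tls.crt are assumptions for illustration (create the secret from the aggregated CA file beforehand), and the ports match the values shown in the pod environment in Step 4; adjust all of these to your environment:
global:
  tls:
    enabled: true
    caCert:
      # Assumed secret/key names; the secret must contain the aggregated CA file
      secretName: consul-ca-cert
      secretKey: tls.crt
externalServers:
  enabled: true
  # Hostname configured in CoreDNS
  hosts: ["server.hcp-managed.consul"]
  httpsPort: 443
  grpcPort: 8502
  tlsServerName: server.hcp-managed.consul
  useSystemRoots: false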
Step 3: Upgrade Consul Cluster Configuration
Run the following command to upgrade the Consul pods on Kubernetes with the new configurations:
consul-k8s upgrade -config-file=values.yaml
Note: After this step, the Consul installation remains connected to the HCP-managed cluster, but now includes the updated CA files.
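To confirm the upgrade rolled out cleanly, check that the Consul pods return to a Ready state (assuming the release is installed in the consul namespace):
kubectl -n consul get pods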
Step 4: Redeploy Application Workloads
Redeploy all workloads so they pick up the updated configuration; a minimal redeploy sketch is shown below.
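One way to trigger the redeploy, assuming the workloads are Deployments and substituting your own namespace and names, is a rollout restart:
# Repeat for each workload; DaemonSets and StatefulSets can be restarted the same way
kubectl -n <namespace> rollout restart deployment <deployment-name>
After the pods restart, verify for each workload that its init containers have fetched the new CA files by running: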
kubectl describe pod <pod-name>
You should see environment variables similar to the following:
CONSUL_ADDRESSES: server.hcp-managed.consul
CONSUL_GRPC_PORT: 8502
CONSUL_HTTP_PORT: 443
CONSUL_API_TIMEOUT: 5m0s
CONSUL_NODE_NAME: $(NODE_NAME)-virtual
CONSUL_USE_TLS: true
CONSUL_CACERT_PEM: -----BEGIN CERTIFICATE-----
...
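To narrow the describe output down to these variables, a simple filter such as the following can help (the pod name is a placeholder):
kubectl describe pod <pod-name> | grep CONSUL_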
Step 5: Switch CoreDNS Entry to Self-Managed Server IP
After redeploying workloads, update the CoreDNS configuration to point to the self-managed Consul server’s IP instead of the HCP-managed server.
- Change the IP in CoreDNS for server.hcp-managed.consul to the self-managed server's IP (see the example below).
- If tlsServerName for the self-managed cluster differs from the managed cluster, update it and rerun consul-k8s upgrade -config-file=values.yaml.
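For example, the hosts block in the Corefile would change to point at the self-managed server; 10.0.0.10 below is a placeholder for your self-managed server's IP:
hosts {
    10.0.0.10 server.hcp-managed.consul
    fallthrough
}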
Last Step: Verify the Migration
If everything has worked thus far, the service mesh should now be running on the self-managed cluster. Verify connectivity to the Consul UI and monitor traffic between services to confirm a successful migration; a few example checks follow.
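Example checks, assuming the Consul release is installed in the consul namespace (adjust names, addresses, and credentials to your installation):
# Confirm the Consul pods and the installation are healthy
kubectl -n consul get pods
consul-k8s status
# List members as seen by the self-managed servers (address is a placeholder; token/CA flags may also be required)
consul members -http-addr=https://server.hcp-managed.consul:443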