Introduction
This article provides detailed instructions on how to update the encryption password for Terraform Enterprise Kubernetes deployments.
Expected Outcome
Once you complete all steps, the application will be running with the updated password.
Prerequisites
- You must have command line access to the Terraform Enterprise deployment.
Procedure
- Get the name of the Terraform Enterprise pod.

  ```shell
  $ kubectl get pods -n terraform-enterprise
  NAME                                    READY   STATUS    RESTARTS   AGE
  terraform-enterprise-79db4fcdb7-wf448   1/1     Running   0          5d
  ```
- Start an interactive shell session in the pod.

  ```shell
  $ kubectl exec -n terraform-enterprise -it terraform-enterprise-79db4fcdb7-wf448 -- bash
  ```
- From within the pod shell, verify the current password using `tfectl`.

  ```shell
  $ tfectl app config --unredacted | grep "encryption_password"
      "encryption_password": "Password123"
  ```
- Update the encryption password. You must also update `TFE_ENCRYPTION_PASSWORD` in the Deployment or ConfigMap by creating a new Helm release with the updated setting, as directed by the command output.

  ```shell
  $ tfectl app rotate-encryption-password
  WARNING: this operation is irreversible, and you will need to restart all of the TFE nodes once this operation is done. Make sure that no one is using TFE at this time.
  Do you desire to continue? 'yes' is the only valid option. [yes/no]: yes
  Current Encryption Password: ***********
  New Encryption Password: **********
  Encryption key successfully rotated
  Current TFE_ENCRYPTION_PASSWORD value in the Kubernetes deployment/config map is not up to date.
  Update it with the new encryption password and rotate the pods.
  ```
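One way to roll out the updated setting is a Helm upgrade. The following is a minimal sketch only: it assumes the password is supplied through a Kubernetes Secret named `tfe-secrets`, that the Helm release is named `terraform-enterprise`, and that it uses the `hashicorp/terraform-enterprise` chart. Your secret name, key, release name, and chart values layout may differ, so adapt it to your deployment.

```shell
# Hypothetical names throughout: adjust the secret, key, release, and chart
# to match your own deployment before running anything.

# Recreate the secret that carries the encryption password with the new value.
kubectl create secret generic tfe-secrets \
  -n terraform-enterprise \
  --from-literal=TFE_ENCRYPTION_PASSWORD='NewPassword456' \
  --dry-run=client -o yaml | kubectl apply -f -

# Create a new Helm release revision so the Deployment references the
# updated setting; existing chart values are kept as-is.
helm upgrade terraform-enterprise hashicorp/terraform-enterprise \
  -n terraform-enterprise \
  --reuse-values
```

Note that updating the Secret or ConfigMap alone does not restart running pods; the drain and restart steps below are still required.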
- Drain the node or nodes to prevent them from accepting new tasks. The following example is for a single-node instance. If you have multiple nodes, use the `--all` flag.

  ```shell
  $ tfectl node drain
  stopping service: service=sidekiq
  successfully stopped service: service=sidekiq
  stopping service: service=task-worker
  successfully stopped service: service=task-worker
  node successfully drained: node=terraform-enterprise-79db4fcdb7-wf448
  ```
- Exit the Terraform Enterprise pod shell.

  ```shell
  $ exit
  exit
  command terminated with exit code 1
  ```
- Restart the Terraform Enterprise pod. If you followed all steps correctly, a new Terraform Enterprise pod starts.

  ```shell
  $ kubectl get pods -n terraform-enterprise
  ```
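One common way to trigger the restart is a rollout restart of the Deployment. This is a sketch, assuming the Deployment is named `terraform-enterprise` (verify the actual name with `kubectl get deploy` in your namespace):

```shell
# Assumed Deployment name; check yours with:
#   kubectl get deploy -n terraform-enterprise
kubectl rollout restart deployment/terraform-enterprise -n terraform-enterprise

# Wait for the rollout to complete, then confirm the new pod reports
# STATUS Running and READY 1/1.
kubectl rollout status deployment/terraform-enterprise -n terraform-enterprise
kubectl get pods -n terraform-enterprise
```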
Troubleshooting
If the new pod fails to enter a healthy state, use the following commands to review the pod logs for more information.
- Get the new pod's name.

  ```shell
  $ kubectl get pods -n terraform-enterprise
  NAME                                    READY   STATUS    RESTARTS   AGE
  terraform-enterprise-79db4fcdb7-z87t8   0/1     Running   0          131m
  ```
- Check the logs for error messages. In this example, an error message indicates the Vault unseal key could not be decrypted. This error occurs when the `TFE_ENCRYPTION_PASSWORD` in the Kubernetes Deployment or ConfigMap was not updated with the new value.

  ```shell
  $ kubectl logs terraform-enterprise-79db4fcdb7-z87t8 -n terraform-enterprise | tail
  {"log":"10.0.172.152 - - [...] \"GET /_health_check HTTP/1.1\" 502 150 \"-\" \"kube-probe/1.28+\"","component":"nginx"}
  {"log":"Waiting for Atlas to become active.","component":"task-worker"}
  {"log":"Error reading Vault configuration: failed decrypting unseal key: could not decrypt ciphertext: chacha20poly1305: message authentication failed","component":"vault"}
  {"log":"Waiting to retrieve Vault unseal key.","component":"vault"}
  {"log":"[...] [error] 496#496: *1767 connect() failed (111: Unknown error) while connecting to upstream, client: 10.0.172.152, server: , request: \"GET /_health_check HTTP/1.1\", upstream: \"http://127.0.0.1:9292/_health_check\", host: \"10.0.160.173:8080\"","component":"nginx"}
  {"log":"10.0.172.152 - - [...] \"GET /_health_check HTTP/1.1\" 502 150 \"-\" \"kube-probe/1.28+\"","component":"nginx"}
  {"log":"Error checking seal status: Get \"http://127.0.0.1:8200/v1/sys/seal-status\": dial tcp 127.0.0.1:8200: connect: connection refused","component":"archivist"}
  {"log":"Waiting for Vault to unseal.","component":"archivist"}
  {"log":"Error checking seal status: Get \"http://127.0.0.1:8200/v1/sys/seal-status\": dial tcp 127.0.0.1:8200: connect: connection refused","component":"backup-restore"}
  {"log":"Waiting for Vault to unseal.","component":"backup-restore"}
  ```
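Because these logs are structured JSON, you can narrow the output to the relevant decryption failure rather than scanning the full tail. A sketch, assuming the example pod name above (substitute your own) and that `jq` is available on your workstation:

```shell
# Hypothetical pod name taken from the example output; replace with yours.
# Keep only log lines emitted by the vault component, then look for the
# decryption failure that signals a stale TFE_ENCRYPTION_PASSWORD.
kubectl logs terraform-enterprise-79db4fcdb7-z87t8 -n terraform-enterprise \
  | jq -r 'select(.component == "vault") | .log' 2>/dev/null \
  | grep -i "decrypt"
```

If this prints the `failed decrypting unseal key` message, update `TFE_ENCRYPTION_PASSWORD` in the Deployment or ConfigMap to the new value and restart the pods, as described in the procedure above.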
TFE_ENCRYPTION_PASSWORDin the Kubernetes Deployment or ConfigMap was not updated with the new value.$ kubectl logs terraform-enterprise-79db4fcdb7-z87t8 -n terraform-enterprise | tail {"log":"10.0.172.152 - - [...] \"GET /_health_check HTTP/1.1\" 502 150 \"-\" \"kube-probe/1.28+\"","component":"nginx"} {"log":"Waiting for Atlas to become active.","component":"task-worker"} {"log":"Error reading Vault configuration: failed decrypting unseal key: could not decrypt ciphertext: chacha20poly1305: message authentication failed","component":"vault"} {"log":"Waiting to retrieve Vault unseal key.","component":"vault"} {"log":"[...] [error] 496#496: *1767 connect() failed (111: Unknown error) while connecting to upstream, client: 10.0.172.152, server: , request: \"GET /_health_check HTTP/1.1\", upstream: \"http://127.0.0.1:9292/_health_check\", host: \"10.0.160.173:8080\"","component":"nginx"} {"log":"10.0.172.152 - - [...] \"GET /_health_check HTTP/1.1\" 502 150 \"-\" \"kube-probe/1.28+\"","component":"nginx"} {"log":"Error checking seal status: Get \"http://127.0.0.1:8200/v1/sys/seal-status\": dial tcp 127.0.0.1:8200: connect: connection refused","component":"archivist"} {"log":"Waiting for Vault to unseal.","component":"archivist"} {"log":"Error checking seal status: Get \"http://127.0.0.1:8200/v1/sys/seal-status\": dial tcp 127.0.0.1:8200: connect: connection refused","component":"backup-restore"} {"log":"Waiting for Vault to unseal.","component":"backup-restore"}