There is currently no supported method to switch a Terraform Enterprise (TFE) installation in place from Mounted Disk mode to External Services mode. However, Terraform Enterprise provides an API to back up and restore all of its application data, and this API is the only supported way to migrate between operational modes (Mounted Disk, External Services). This article documents how to migrate TFE application data from an existing installation to a new installation using the backup and restore API.
Prerequisites:
- The destination Terraform Enterprise installation must be a new, running installation with no existing application data.
- The Terraform Enterprise and PostgreSQL versions must be the same at backup time and restore time.
- Once a restore is completed, the Terraform Enterprise application must be restarted before it can use the restored data.
Step 1: Set Environment Variables for Backup API Tokens
The backup and restore API uses a separate authorization token from other Terraform Enterprise tokens. You cannot use a user, team, or organization token. This token can be found on the Replicated settings dashboard (https://<tfe-host>:8800/settings#backup_token). It is specific to each Terraform Enterprise installation, so you will need to collect it for both the current and the new installation.
Replace each <value> with the corresponding source and destination token, then run the following commands:
export SOURCE_TFE_TOKEN=<value>
export DESTINATION_TFE_TOKEN=<value>
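Before proceeding, it can help to confirm both variables are actually set. A minimal bash sketch (the check_tokens helper name is ours, not part of any TFE tooling):

```shell
# Sanity check (a sketch): confirm both backup API token variables are exported.
check_tokens() {
  local missing=0 var
  for var in SOURCE_TFE_TOKEN DESTINATION_TFE_TOKEN; do
    # ${!var} is bash indirect expansion: the value of the variable named by $var.
    if [ -z "${!var:-}" ]; then
      echo "missing: $var" >&2
      missing=1
    fi
  done
  return "$missing"
}
check_tokens && echo "both tokens are set" || echo "set the missing token(s) before continuing" >&2
```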
Step 2: Create a payload file
The backup API encrypts the backup file contents. The API requires a password payload during backup; the same payload must be passed during restore to decrypt the backup file.
Create a file named payload.json, copy in one of the option blocks below, replace <password> with a secure password, and save the file.
Option 1: Backup all data
{
  "password": "<password>"
}
Option 2: Skip backing up object storage data
{
  "password": "<password>",
  "skip_object_storage": true
}
Note: Option 2 is recommended for large TFE installations to reduce backup and restore time; the object storage data is then copied separately in Step 6.
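As a convenience, the payload file can be generated with a random password. A sketch assuming openssl is available (Option 2 shown); store the password somewhere safe, because the same payload is required at restore time to decrypt the backup:

```shell
# Generate payload.json with a random password (a sketch; Option 2 shown).
# The password MUST be kept: restore uses the same payload to decrypt the backup.
BACKUP_PASSWORD="$(openssl rand -base64 24)"
cat > payload.json <<EOF
{
  "password": "${BACKUP_PASSWORD}",
  "skip_object_storage": true
}
EOF
echo "payload.json written; password held in \$BACKUP_PASSWORD for this session"
```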
Step 3: Create a backup
In this step, a POST request will be sent to the backup API, passing the source TFE token and the payload file.
NOTE: It is recommended to run the backup and restore commands from a server co-located with the Terraform Enterprise installation rather than from a workstation over a VPN. This gives the best performance and helps avoid disconnects.
Replace <source-tfe-host> with the fully qualified domain name of your current TFE host, then run the following command:
curl \
--header "Authorization: Bearer $SOURCE_TFE_TOKEN" \
--request POST \
--data @payload.json \
--output backup.blob \
https://<source-tfe-host>/_backup/api/v1/backup
If successful, the command returns no error and creates a backup.blob file.
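Because curl writes whatever the server returns into backup.blob, a failed request can leave a short JSON error body in that file instead of a backup. A quick sanity check (a sketch; the check_backup helper name is ours):

```shell
# Sanity check helper (a sketch): a real backup is a non-empty encrypted blob,
# while an API error is a short JSON body starting with "{".
check_backup() {
  local file="${1:-backup.blob}"
  if [ -s "$file" ] && [ "$(head -c 1 "$file")" != "{" ]; then
    echo "$file looks valid: $(wc -c < "$file") bytes"
  else
    echo "$file is missing, empty, or contains an API error body" >&2
    return 1
  fi
}
check_backup backup.blob || true   # non-fatal; inspect the file contents on failure
```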
Step 4: Stop TFE
It is important to stop the Terraform Enterprise application to make sure no changes occur after the backup. SSH to the source TFE instance and run the following command:
replicatedctl app stop
Step 5: Restore the backup
In this step, a POST request will be sent to the restore API, passing the destination TFE token, the payload file, and the backup file.
Replace <destination-tfe-host> with the fully qualified domain name of your new TFE host, then run the following command:
curl \
--header "Authorization: Bearer $DESTINATION_TFE_TOKEN" \
--request POST \
--form config=@payload.json \
--form snapshot=@backup.blob \
https://<destination-tfe-host>/_backup/api/v1/restore
This command may take some time depending on the size of your environment. Please be patient.
Step 6: Upload to S3 (skip if you included object storage in backup)
This step copies the data on disk to object storage (S3). Only perform this step if you chose not to back up object storage data (Option 2).
First, SSH to the source TFE instance and obtain the disk path.
replicatedctl app-config export | grep "disk_path" -A1
Set your AWS credentials in your environment variables.
export AWS_ACCESS_KEY_ID=<your_access_key_id>
export AWS_SECRET_ACCESS_KEY=<your_secret_access_key>
export REGION=<your_region>
Upload the data on disk to S3.
cd <disk_path>/aux/archivist/terraform
aws s3 cp . s3://<s3_bucket>/archivistterraform --recursive
cd <disk_path>/aux/archivist/sentinel
aws s3 cp . s3://<s3_bucket>/archivistsentinel --recursive
cd <disk_path>/aux/archivist/plan-export
aws s3 cp . s3://<s3_bucket>/archivistplan-export --recursive
cd <disk_path>/aux/archivist/policy-set-versions
aws s3 cp . s3://<s3_bucket>/archivistpolicy-set-versions --recursive
NOTE: The folder structure may differ depending on application version and usage. Validate that every folder under <disk_path>/aux/archivist/ is copied to a matching archivist<folder_name> prefix (note there is no slash in the bucket prefix).
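The four copies above can also be expressed as a loop. A dry-run sketch (the sync_archivist helper name, disk path, and bucket name are illustrative; the echo prefix only prints each command, so remove it to execute):

```shell
# Dry-run sketch: print one aws s3 cp command per archivist folder.
sync_archivist() {
  local disk_path="$1" bucket="$2" folder
  for folder in terraform sentinel plan-export policy-set-versions; do
    # Note: no slash between "archivist" and the folder name in the prefix.
    echo aws s3 cp "${disk_path}/aux/archivist/${folder}" \
      "s3://${bucket}/archivist${folder}" --recursive
  done
}
sync_archivist "/opt/tfe-disk" "my-tfe-bucket"   # illustrative values
```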
Step 7: Restart TFE
Once the restore has completed, the new Terraform Enterprise installation must be restarted for the changes to take effect. SSH to the new TFE instance and run the following command to stop TFE:
replicatedctl app stop
You can monitor the status of this action with the following command:
watch replicatedctl app status
Once it is stopped, start TFE with the following command:
replicatedctl app start
Once the Terraform Enterprise application has come online, the new installation should mirror the old one. If the TFE hostname has changed, VCS connections will need to be re-created.
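To confirm the application is back online, TFE's health check endpoint can be polled. A sketch (wait_for_tfe is our helper name; adjust the retry count and interval to taste):

```shell
# Poll TFE's /_health_check endpoint until it responds (a sketch).
wait_for_tfe() {
  local host="$1" tries="${2:-30}" interval="${3:-10}" i
  for i in $(seq 1 "$tries"); do
    if curl -sfk "https://${host}/_health_check" > /dev/null; then
      echo "TFE is healthy"
      return 0
    fi
    sleep "$interval"
  done
  echo "TFE not healthy after ${tries} attempts" >&2
  return 1
}
# Example (replace with your hostname):
# wait_for_tfe tfe.example.com
```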
Sources:
Backups and Restores - Infrastructure Administration - Terraform Enterprise