Introduction
Expected Outcome
The steps below outline the process of upgrading multiple Active/Active instances through any required releases to a target release. Note: this process requires application downtime.
Use Case
There are some Active/Active environments where an upgrade cannot be performed using the preferred method, for example when running on VMware on-premises, which does not provide native auto-scaling, or in environments where company restrictions prevent rebuilding the instances.
Backup
As standard practice, always ensure that database and object storage backups are performed before the upgrade in the event a rollback is required.
Terraform Enterprise Backup - Recommended Pattern
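For reference, the recommended pattern linked above uses the Terraform Enterprise Backup and Restore API. The following is a minimal sketch of triggering a backup before the upgrade; the hostname, token, and encryption password are placeholders, and the full workflow should be taken from the linked pattern.
# Placeholder values: $TOKEN is the Backup/Restore API token, tfe.example.com is the TFE hostname.
curl \
  --header "Authorization: Bearer $TOKEN" \
  --request POST \
  --data '{"password": "<encryption-password>"}' \
  --output backup.blob \
  https://tfe.example.com/_backup/api/v1/backup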
Required Versions
Check the Terraform Enterprise Releases page to note any required releases and to find the release sequence number for the target release. On that page, * denotes a required release, accompanied by the note: "Airgap customers must upgrade to this version before proceeding to later releases."
Note the required release and the target release to which the instances will ultimately be upgraded. In this case, the required release is 610 and the target release is 636. These versions will be set using the ReleaseSequence parameter in the commands below.
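Before setting a new value, it can help to confirm which release sequence is currently pinned on each node (a value of 0 typically means the instance follows the latest release in its channel). A quick check, assuming replicatedctl is available on the host:
replicatedctl params export | grep -i ReleaseSequence   # Shows the currently pinned release sequence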
Airgap Considerations
Airgap installations require some extra steps, such as setting the AirgapPackagePath for each release before starting the replicatedctl app-release apply upgrade process below.
- Connect to the Terraform Enterprise host machine using SSH.
- Print the AirgapPackagePath.
- On the Terraform Enterprise host machine, upload the desired airgap packages into the AirgapPackagePath.
- Fetch the versions of Terraform Enterprise from the uploaded airgap packages (example commands shown after this list).
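For reference, the print and fetch steps above are typically performed with replicatedctl, roughly as follows; verify the exact commands against the airgapped upgrade documentation for the installed version.
# Print the configured AirgapPackagePath on the host machine.
replicatedctl params export --template '{{.AirgapPackagePath}}'
# After uploading the .airgap packages into that path, fetch and list the available releases.
replicatedctl app-release ls --fetch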
Procedure
The procedure below upgrades two instances first to the required release and then to the selected (target) release.
Adjust Health-Check
If there is a health-check process that may rebuild instances upon failure, disable the health check or increase its interval and threshold so that both instances can be updated before the next check (an illustrative example follows).
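How this is done depends entirely on the environment. As a purely illustrative sketch, an environment fronted by an AWS load balancer could relax the target group health check for the maintenance window; the target group ARN below is a placeholder, and on-prem VMware environments would make the equivalent change in their own load balancer or monitoring tooling.
# Illustrative only: relax a target group health check during the upgrade window.
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/tfe/abc123 \
  --health-check-interval-seconds 300 \
  --unhealthy-threshold-count 10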
Stop Applications
# Stop the application on both nodes.
tfe-admin node-drain              # node-drain only affects the local node; it cannot drain other nodes remotely
replicatedctl app stop
watch replicatedctl app status    # Verify the app is stopped
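The commands above must be run on each node. As a minimal convenience sketch, assuming the nodes are reachable over SSH as tfe-node-1 and tfe-node-2 (hypothetical hostnames):
# Hypothetical helper: drain and stop the application on both nodes over SSH.
for host in tfe-node-1 tfe-node-2; do
  ssh "$host" 'tfe-admin node-drain && replicatedctl app stop'
done
# Verify on each node that the app reports stopped, e.g.:
# ssh tfe-node-1 'replicatedctl app status'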
Node 1:
replicatedctl params set ReleaseSequence --value 610
replicatedctl app-release apply
replicatedctl app status # Verify the app is started
tfe-admin node-drain
replicatedctl app stop
watch replicatedctl app status # Verify the app is stopped
Node 2:
replicatedctl params set ReleaseSequence --value 610
replicatedctl app-release apply
replicatedctl app status # Verify the app is started
# If only one version jump is needed, skip to the final Node 1 block below to bring that node back up and finish.
tfe-admin node-drain
replicatedctl app stop
watch replicatedctl app status # Verify the app is stopped
Node 1:
replicatedctl params set ReleaseSequence --value 636
replicatedctl app-release apply
replicatedctl app status # Verify the app is started
tfe-admin node-drain
replicatedctl app stop
watch replicatedctl app status # Verify the app is stopped
Node 2:
replicatedctl params set ReleaseSequence --value 636
replicatedctl app-release apply
Node 1:
replicatedctl app start
replicatedctl app status # Verify the app is started
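Once both nodes report the application as started, restore any health-check settings changed earlier and confirm that the expected release sequence is applied; for example (the grep filter is just one way to read the parameter output):
replicatedctl params export | grep -i ReleaseSequence   # Expect the target sequence (636)
replicatedctl app status                                # Run on each node; the app should report started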