Overview
When a new module version (release tag) is pushed to the VCS, Terraform Enterprise should automatically update the Terraform Private Module Registry with the latest version. However, there are scenarios where the new module version does not appear in the registry.
Scenario 1: Webhook Delivery from VCS Failed
Terraform Enterprise relies on webhooks from the VCS to receive notifications whenever a new tag or module version is pushed. If webhook delivery fails, Terraform Enterprise will not be notified of the new module version, and the registry will not be updated.
Cause:
- Webhook Misconfiguration: The webhook might be pointing to the wrong URL or using incorrect authentication credentials.
- Terraform Enterprise Downtime: If Terraform Enterprise is experiencing issues, webhook events may not be processed.
- Network or Firewall Restrictions: If firewall rules block outgoing requests from the VCS or inbound requests to Terraform Enterprise, webhook delivery may fail. Most commonly, network errors occur when the network a Terraform Enterprise instance is connected to does not meet the minimum requirements described in the Terraform Enterprise Network Requirements documentation. These minimum network requirements must be met along the entire network path, including any firewall, security group, load balancer, proxy, or other network device.
Solution:
Check the VCS Webhook logs in the VCS webhook settings to see if any delivery errors have occurred.
- Webhook Misconfiguration: Validate the webhook URL configuration in the VCS settings. The webhook URL should match the one shown in the workspace’s “Version Control” settings area.
- Terraform Enterprise Downtime: Once the issue is resolved, redeliver the webhook request (or trigger a test delivery if the VCS offers that option), or delete and re-push the tag.
- Network or Firewall Restrictions: A network-related issue will show up in the VCS webhook delivery log or in the Terraform Enterprise atlas or sidekiq logs. Ensure firewall rules allow communication between the VCS and Terraform Enterprise, and check for denied connections for outbound traffic from the VCS or inbound traffic to Terraform Enterprise. From both the VCS host and the Terraform Enterprise host, test the connection with ping or a third-party utility such as curl, as shown in the example below. If the VCS is hosted externally (e.g., GitHub, GitLab), allow its IP ranges or domain names in the firewall so that Terraform Enterprise can reach it, and similarly ensure that Terraform Enterprise can receive traffic from the VCS host.
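For example, a quick way to test the network path is with curl from each side. This is a minimal sketch; the hostnames below are placeholders for your own Terraform Enterprise and VCS hostnames.

```shell
# From the VCS host: confirm it can reach Terraform Enterprise over HTTPS.
# tfe.example.com is a placeholder; /_health_check is the TFE health check endpoint.
curl -sv https://tfe.example.com/_health_check

# From the Terraform Enterprise host: confirm it can reach the VCS.
# vcs.example.com is a placeholder for the VCS hostname.
curl -sv https://vcs.example.com

# A timeout or TLS error here usually points to the network path
# (firewall, security group, proxy, or load balancer) rather than to TFE itself.
```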
Scenario 2: Module Version Could Not Be Ingressed by Slug-Ingress
Once the webhook is received, Terraform Enterprise processes the new module version using a service called slug-ingress. This service is responsible for fetching the module source code, packaging it into an archive, and persisting it to object storage. If this process fails, the new module version will not appear in the registry.
Cause:
- Incorrect Module Naming Conventions: Terraform modules must follow the correct naming format.
- Tagging Issues: If the tag does not follow semantic versioning, Terraform Enterprise might reject it.
- Repository Visibility: The repository might be private, and the VCS account under which the Terraform Enterprise VCS integration was configured may lack permission to access it.
- Expired OAuth Token: The VCS OAuth token configured in Terraform Cloud or Terraform Enterprise is no longer valid.
Solution:
Check the sidekiq and slug-ingress logs in Terraform Enterprise for any processing or ingress errors.
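For example, on a Replicated-based installation the service logs can be inspected with docker logs. This is a sketch only: the container names below are assumptions and may differ between Terraform Enterprise releases, and on Flexible Deployment (Docker) installations all services log to a single terraform-enterprise container.

```shell
# Replicated-based installation: per-service containers
# (container names are assumptions and may vary by TFE release).
docker logs ptfe_sidekiq 2>&1 | grep -iE "error|fail" | tail -n 50
docker logs ptfe_atlas   2>&1 | grep -i "slug"        | tail -n 50

# Flexible Deployment (Docker): all services log to a single container.
docker logs terraform-enterprise 2>&1 | grep -iE "sidekiq|slug-ingress" | tail -n 50
```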
- Incorrect Module Naming Conventions: Ensure that the module name and tagging format comply with the Terraform Module Registry Requirements; the repository name must follow the three-part format terraform-<PROVIDER>-<NAME>, for example terraform-aws-vpc.
- Tagging Issues: Release tag names must be a semantic version, which can optionally be prefixed with a "v", for example v1.0.4 or 0.9.2. To publish a module initially, at least one release tag must be present (see the tagging example after this list).
- Repository Visibility: Verify that the repository visibility settings allow Terraform Enterprise to access it. Ensure the webhook is configured to trigger on push events and release/tag creation. Retry the ingestion process by making a no-op change and pushing a new tag.
- Expired OAuth Token: Check the sidekiq and slug-ingress logs for authorization errors to confirm that the token has expired. Attempt to reauthorize the VCS provider by using the following guide. If the VCS connection cannot be reauthorized and needs to be recreated, follow the steps in this guide to delete and republish the modules after the VCS connection is recreated.
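As a sketch of the tagging workflow, the following shows how to publish a valid semantic-version tag and how to delete and re-push it to re-trigger webhook delivery and ingestion. The tag name and remote are examples only.

```shell
# Create and push a semantic-version release tag (a leading "v" is optional).
git tag v1.0.5
git push origin v1.0.5

# If a version failed to appear, delete the tag on the remote and locally,
# then re-create and push it to re-trigger webhook delivery and ingestion.
git push --delete origin v1.0.5
git tag -d v1.0.5
git tag v1.0.5
git push origin v1.0.5
```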
Sometimes the webhook ID associated with the VCS repository appears to be missing (Atlas); it is possible that the VCS object or module became disassociated. As a potential solution, try deleting and recreating the module on the Terraform Enterprise side to re-establish the connection.
Scenario 3: Module Version Could Not Be Uploaded to Object Storage by Archivist
Once a module version is ingressed, it must be uploaded to object storage (e.g., AWS S3, Azure Blob Storage, or Google Cloud Storage) by the archivist service in Terraform Enterprise. If this step fails, the module version will not appear in the registry.
Cause:
- Insufficient Storage Permissions: The archivist service may not have the necessary IAM roles to upload files.
- Incorrect Storage Configuration: The object storage endpoint or credentials might be misconfigured.
- Connectivity Issues: If there is high latency between Terraform Enterprise and the object storage provider, uploads may time out.
Solution:
Check the archivist logs in Terraform Enterprise for upload-related errors.
- Insufficient Storage Permissions: Validate storage IAM permissions to ensure Terraform Enterprise can write to the storage bucket.
- Incorrect Storage Configuration: Test connectivity to object storage with a manual upload to verify accessibility (see the example after this list).
- Connectivity Issues: Retry the module upload by making a no-op change and pushing a new tag. Verify the health of any intermediary proxies between Terraform Enterprise and object storage.
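As a sketch of a manual upload test, assuming an AWS S3 backend; the bucket name is a placeholder, and the equivalent tooling applies to Azure Blob Storage or Google Cloud Storage.

```shell
# Run from the Terraform Enterprise host. The bucket name is a placeholder;
# use the bucket configured as the TFE object storage backend.
echo "connectivity test" > /tmp/tfe-storage-test.txt
aws s3 cp /tmp/tfe-storage-test.txt s3://my-tfe-storage-bucket/tfe-storage-test.txt

# A successful copy confirms credentials, permissions, and the network path;
# an AccessDenied or timeout error narrows the problem down accordingly.
aws s3 rm s3://my-tfe-storage-bucket/tfe-storage-test.txt
```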
Scenario 4: Module Version Failed Validation by terraform-registry-worker
After a module is ingested, it undergoes validation by the terraform-registry-worker to ensure that it meets Terraform’s syntax and structural requirements. If the module fails validation, it will not be added to the registry.
Cause:
- Syntax Errors in Terraform Code: If the module contains invalid HCL (HashiCorp Configuration Language) syntax, it will fail validation.
- Dependency Requirements: If the module depends on other modules or providers that are unavailable or incorrectly referenced, validation will fail.
- Incompatible Terraform Versions: The module might require a Terraform version that is not supported by the current Terraform Enterprise environment.
Solution:
Check the terraform-registry-worker logs for validation error messages.
- Syntax Errors in Terraform Code: Run terraform validate locally on the module directory to detect syntax issues (see the example after this list).
- Dependency Requirements: Verify that all dependencies are available and correctly defined in the module.
- Incompatible Terraform Versions: Check the Terraform versions available in Terraform Enterprise and update the module’s required_version constraint in its configuration files to match an available version.
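As a sketch of validating a module locally before re-publishing (the module directory name is a placeholder):

```shell
# Clone the module repository and validate it with the same Terraform
# version that Terraform Enterprise has available.
cd terraform-aws-example-module   # placeholder module directory
terraform init -backend=false     # install providers without configuring a backend
terraform validate

# Locate version constraints that may conflict with the Terraform
# versions available in Terraform Enterprise.
grep -rn "required_version" .
```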
Additional Information