Introduction
When running Terraform Enterprise (TFE) in Kubernetes, operational metrics are exposed on port 9090 at the /metrics endpoint. Prometheus can scrape these metrics for monitoring and alerting.
In some cases, scraping fails due to format or configuration issues. This article explains the problem, cause, and resolution.
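To see why the default response cannot be scraped, it helps to contrast the two formats. The lines below are hypothetical samples, not actual TFE output: the Prometheus text (exposition) format is line-oriented, `metric_name{labels} value`, while the default TFE response is a JSON document that begins with `{`.

```python
# Hypothetical samples contrasting the two response formats.
exposition_line = 'tfe_runs_total{status="active"} 3'    # hypothetical metric line
json_response = '{"Timestamp": "2024-01-01T00:00:00Z"}'  # JSON starts with "{"

# The Prometheus parser reads a metric name up to the first "{";
# a JSON body offers no metric name, so parsing fails immediately.
print(exposition_line.partition("{")[0])  # -> tfe_runs_total
print(json_response.partition("{")[0])    # -> "" (no metric name)
```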
Problem
Prometheus fails to scrape TFE metrics with errors such as:
expected equal, got ":" ("INVALID") while parsing: "{\"Timestamp\":"

After updating the configuration to:

metrics_path: /metrics?format=prometheus

Prometheus shows targets like:

http://POD_IP:9090/metrics%3Fformat=prometheus

resulting in:

HTTP 404 Not Found

Prerequisites
- Terraform Enterprise deployed in Kubernetes
- Metrics enabled on port 9090
- Prometheus configured with Kubernetes service discovery
Cause
By default, the /metrics endpoint returns JSON. Prometheus expects the Prometheus exposition format, not JSON.

Adding ?format=prometheus directly to metrics_path causes Prometheus to URL-encode it:

/metrics?format=prometheus

becomes:

/metrics%3Fformat=prometheus

Terraform Enterprise does not recognise the encoded path, leading to a 404 error.
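The encoding step can be reproduced with Python's standard library: a "?" is not a legal character inside a URL path, so it is percent-encoded, while "/" and "=" are legal path characters and survive.

```python
from urllib.parse import quote

# metrics_path is treated as a path, not a path plus query string,
# so the "?" is percent-encoded to %3F.
path = "/metrics?format=prometheus"
print(quote(path, safe="/="))  # -> /metrics%3Fformat=prometheus
```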
Solution
Do not append query parameters directly to metrics_path; use the params block instead.
Correct Configuration
scrape_configs:
  - job_name: 'terraform-enterprise-pods'
    metrics_path: /metrics
    params:
      format: [prometheus]
    scrape_interval: 20s
    kubernetes_sd_configs:
      - role: pod
This ensures Prometheus correctly requests:
/metrics?format=prometheus
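The difference between the two configurations can be sketched in Python (POD_IP is a placeholder): with a params block, the query string is built separately and appended after the path, so the "?" stays literal.

```python
from urllib.parse import urlencode

# params are encoded as a query string and appended to the path,
# mirroring how metrics_path and params compose in a scrape config.
metrics_path = "/metrics"
params = {"format": ["prometheus"]}
query = urlencode(params, doseq=True)
url = f"http://POD_IP:9090{metrics_path}?{query}"
print(url)  # -> http://POD_IP:9090/metrics?format=prometheus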
Reload Prometheus and verify the target status is UP in the Prometheus UI.
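Target health can also be confirmed programmatically via Prometheus's /api/v1/targets API. The JSON below is a hypothetical, abbreviated example of that response, used only to show the fields to inspect:

```python
import json

# Abbreviated, hypothetical /api/v1/targets response.
sample = json.loads("""
{"status": "success", "data": {"activeTargets": [
  {"scrapeUrl": "http://10.0.0.5:9090/metrics?format=prometheus", "health": "up"}
]}}
""")

# A healthy target reports health == "up" and an un-encoded scrape URL.
for target in sample["data"]["activeTargets"]:
    print(target["scrapeUrl"], target["health"])
```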