Overview
This article explains how to configure Consul connect-injector when it needs to authenticate service accounts against an external Kubernetes cluster's API server. This scenario typically occurs in multi-cluster deployments where Consul servers run in one cluster (Server Cluster) and connect-injector runs in another (Client Cluster).
Common Symptoms
x509: certificate signed by unknown authority
unable to verify the first certificate
x509: certificate is valid for X, not Y
Architecture
In this configuration:
- Server Cluster: Hosts Consul servers with custom CA
- Client Cluster: Hosts Consul connect-injector and application workloads
- TokenReview Flow: Connect-injector validates service account tokens against the Server Cluster's Kubernetes API
Client Cluster                       Server Cluster
┌────────────────────┐               ┌──────────────────────┐
│  Connect-Injector  │──Consul RPC──▶│  Consul Servers      │
│                    │               └──────────────────────┘
│                    │               ┌──────────────────────┐
│                    │──TokenReview─▶│  Kubernetes API      │
└────────────────────┘               └──────────────────────┘
Certificate Authority Requirements
Understanding the CA Chain
This setup requires a complete CA chain combining two certificate contexts:
1. Custom Consul CA (your PKI)
   - Root CA and Intermediate CA used to issue Consul certificates
   - Intermediate CA private key supplied as Consul's caKey
2. Kubernetes API server CA
   - Cloud provider CA (EKS/GKE/AKS) or enterprise CA (RKE/on-prem)
   - Extracted from the Server Cluster's kubeconfig
   - Required for Consul servers to trust Kubernetes LoadBalancer certificates
Complete CA Chain Structure
ca-chain.pem = Intermediate CA + Root CA + Kubernetes API CA
Critical: The Intermediate CA private key (inter.key) is only provided to the Server Cluster as Consul's caKey. The Client Cluster receives the certificate chain but not the private key.
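To confirm what actually ended up in a bundle, split it and print each certificate's subject and issuer. The sketch below generates two throwaway self-signed certs as stand-ins; in practice, run the awk/for loop against your real ca-chain.pem and confirm the order Intermediate, Root, Kubernetes API CA:

```shell
# Split a PEM bundle and list each certificate's subject and issuer.
set -e
tmp=$(mktemp -d)

# Throwaway stand-in certs; use your real ca-chain.pem in practice.
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/a.key" \
  -subj "/CN=Intermediate CA" -days 1 -out "$tmp/a.crt" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/b.key" \
  -subj "/CN=Root CA" -days 1 -out "$tmp/b.crt" 2>/dev/null
cat "$tmp/a.crt" "$tmp/b.crt" > "$tmp/ca-chain.pem"

# The inspection loop: one file per certificate, then subject + issuer.
awk -v d="$tmp" '/BEGIN CERT/{n++} {print > (d "/cert-" n ".pem")}' \
  "$tmp/ca-chain.pem"
for c in "$tmp"/cert-[1-9].pem; do
  openssl x509 -in "$c" -noout -subject -issuer
done
```

The subjects should come out in the same order the certificates were concatenated.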
Prerequisites
- Two Kubernetes clusters with network connectivity
- kubectl configured for both clusters
- OpenSSL for certificate operations
- Helm 3.x
- Network access:
  - Client → Server Consul (ports 8300-8302, 8501-8502)
  - Client → Server Kubernetes API (port 6443)
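A quick reachability probe before installing anything saves debugging time later. This sketch uses bash's /dev/tcp; the two hostnames are placeholders for your own endpoints:

```shell
# Probe a TCP port; prints "open" or "unreachable".
probe() {
  if timeout 3 bash -c "exec 3<>'/dev/tcp/$1/$2'" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 unreachable"
  fi
}

# Replace the placeholders with your real endpoints:
probe "<consul-loadbalancer-dns>" 8501
probe "<k8s-api-host>" 6443
```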
Configuration Steps
Step 1: Generate Custom PKI
Create a two-tier PKI structure:
# Root CA
openssl genrsa -out root.key 4096
openssl req -x509 -new -nodes -key root.key -sha256 -days 3650 \
  -subj "/C=US/O=Example/CN=Root CA" -out root.crt
# Intermediate CA (with CA:TRUE for certificate signing)
openssl genrsa -out inter.key 4096
openssl req -new -key inter.key \
  -subj "/C=US/O=Example/CN=Intermediate CA" -out inter.csr
openssl x509 -req -in inter.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -out inter.crt -days 3650 -sha256 \
  -extfile <(printf "basicConstraints=CA:TRUE\nkeyUsage=critical,keyCertSign,cRLSign")
# Verify
openssl verify -CAfile root.crt inter.crt
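If the -extfile step is skipped, the intermediate is issued without the CA extension and signing fails later. It is worth confirming CA:TRUE is present before going further; this sketch checks a throwaway self-signed CA (run the same grep against your inter.crt):

```shell
# Verify a certificate carries the CA:TRUE basic constraint.
set -e
tmp=$(mktemp -d)
# Stand-in CA cert (req -x509 adds CA:TRUE by default);
# in practice inspect inter.crt instead.
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/ca.key" \
  -subj "/CN=Demo CA" -days 1 -out "$tmp/ca.crt" 2>/dev/null
openssl x509 -in "$tmp/ca.crt" -noout -text | grep -A1 "Basic Constraints"
```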
Step 2: Extract Kubernetes API CA
Extract the Kubernetes API CA certificate from the Server Cluster:
kubectl config use-context <server-cluster-context>
# For EKS/GKE/AKS
kubectl config view --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | \
  base64 -d > k8s-api-ca.crt
# For RKE/On-premises
kubectl config view --raw --flatten \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | \
  base64 -d > k8s-api-ca.crt
Step 3: Create Complete CA Chain
Combine all CA certificates in order:
cat inter.crt root.crt k8s-api-ca.crt > ca-chain.pem
# Verify (should show 3 certificates)
openssl crl2pkcs7 -nocrl -certfile ca-chain.pem | \
  openssl pkcs7 -print_certs -noout
Step 4: Configure Server Cluster
Create CA secret with both certificate and key:
kubectl create namespace consul
kubectl create secret generic ca-cert -n consul \
  --from-file=tls.crt=ca-chain.pem \
  --from-file=tls.key=inter.key
Helm values (values-server.yaml):
global:
  name: consul
  datacenter: dc1
  tls:
    enabled: true
    enableAutoEncrypt: true
    caCert:
      secretName: ca-cert
      secretKey: tls.crt
    caKey:
      secretName: ca-cert
      secretKey: tls.key
    serverAdditionalDNSSANs:
      - "<consul-loadbalancer-dns>"  # Add after installation
  acls:
    manageSystemACLs: true
  gossipEncryption:
    autoGenerate: true
server:
  enabled: true
  replicas: 3
  exposeService:
    enabled: true
    type: LoadBalancer
ui:
  enabled: true
  service:
    type: LoadBalancer
connectInject:
  enabled: false
Install Consul:
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install consul hashicorp/consul -f values-server.yaml -n consul
# Get LoadBalancer DNS
CONSUL_LB=$(kubectl get svc consul-expose-servers -n consul \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# Update serverAdditionalDNSSANs and upgrade
helm upgrade consul hashicorp/consul -f values-server.yaml -n consul
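After the upgrade, confirm the LoadBalancer DNS name actually landed in the served certificate's SANs. The live certificate comes from `openssl s_client -connect "$CONSUL_LB:8501" </dev/null`; the SAN check itself is sketched here against a stand-in cert so it runs offline:

```shell
# Check that a certificate lists the expected DNS SAN.
set -e
tmp=$(mktemp -d)
# Stand-in server cert; in practice pipe the output of
#   openssl s_client -connect "$CONSUL_LB:8501" </dev/null 2>/dev/null
# into the final openssl x509 command.
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/s.key" \
  -subj "/CN=server.dc1.consul" -days 1 \
  -addext "subjectAltName=DNS:server.dc1.consul,DNS:consul-lb.example.com" \
  -out "$tmp/s.crt" 2>/dev/null
openssl x509 -in "$tmp/s.crt" -noout -text | grep -A1 "Subject Alternative Name"
```

The hostname consul-lb.example.com stands in for your LoadBalancer DNS name.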
Step 5: Export Secrets
Export required secrets from Server Cluster:
# Bootstrap token
kubectl get secret consul-bootstrap-acl-token -n consul \
  -o jsonpath='{.data.token}' | base64 -d > bootstrap-token.txt
# Gossip key
kubectl get secret consul-gossip-encryption-key -n consul \
  -o jsonpath='{.data.key}' | base64 -d > gossip-key.txt
# Kubernetes API endpoint
kubectl config view --minify \
  -o jsonpath='{.clusters[0].cluster.server}' > k8s-api-endpoint.txt
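A malformed export (extra newline, truncated base64) causes confusing failures later, so it is worth sanity-checking the values before copying them. The sketch uses stand-in values; substitute the contents of bootstrap-token.txt and gossip-key.txt:

```shell
# Sanity-check exported secrets before moving them to the Client Cluster.
set -e
token="b1f8f6a2-0c1d-4e2a-9f3b-7d5e6c8a9b0c"  # stand-in for bootstrap-token.txt
gossip_key=$(openssl rand -base64 32)          # stand-in for gossip-key.txt

# Consul ACL tokens are UUIDs.
echo "$token" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$' \
  && echo "token: looks like a UUID"

# Gossip keys decode to 16 or 32 raw bytes.
bytes=$(echo "$gossip_key" | base64 -d | wc -c)
{ [ "$bytes" -eq 16 ] || [ "$bytes" -eq 32 ]; } && echo "gossip key: $bytes bytes"
```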
Step 6: Configure Client Cluster
Create CA secret (certificate only, no key):
kubectl config use-context <client-cluster-context>
kubectl create namespace consul
kubectl create secret generic ca-cert -n consul \
  --from-file=tls.crt=ca-chain.pem
Verify the secret has only tls.crt:
kubectl get secret ca-cert -n consul -o jsonpath='{.data}' | jq 'keys'
# Expected: ["tls.crt"] ← NO tls.key
Create other secrets:
kubectl create secret generic consul-bootstrap-acl-token -n consul \
  --from-literal=token=$(cat bootstrap-token.txt)
kubectl create secret generic consul-gossip-encryption-key -n consul \
  --from-literal=key=$(cat gossip-key.txt)
Helm values (values-client.yaml):
global:
  name: consul
  datacenter: dc1
  tls:
    enabled: true
    enableAutoEncrypt: false  # false in the client cluster (no caKey)
    caCert:
      secretName: ca-cert
      secretKey: tls.crt
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: consul-bootstrap-acl-token
      secretKey: token
  gossipEncryption:
    autoGenerate: false
    secretName: consul-gossip-encryption-key
    secretKey: key
server:
  enabled: false
ui:
  enabled: false
connectInject:
  enabled: true
  default: false
externalServers:
  enabled: true
  hosts:
    - "<consul-loadbalancer-dns>"
  httpsPort: 8501
  grpcPort: 8502
  tlsServerName: server.dc1.consul
  k8sAuthMethodHost: "<k8s-api-endpoint>"  # From k8s-api-endpoint.txt
  useSystemRoots: false
Install Consul:
helm install consul hashicorp/consul -f values-client.yaml -n consul
Step 7: Validate
Deploy a test application with sidecar injection:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
      annotations:
        consul.hashicorp.com/connect-inject: "true"
    spec:
      serviceAccountName: test-app
      containers:
        - name: test-app
          image: nginx:latest
EOF
# Verify sidecar injection
kubectl get pod -l app=test-app \
  -o jsonpath='{.items[0].spec.containers[*].name}'
# Expected: test-app plus the injected sidecar (name varies by consul-k8s
# version); the connect-init container is listed under .spec.initContainers
Check connect-injector logs:
kubectl logs -n consul -l component=connect-injector --tail=50
# Success indicators:
# ✓ No TLS/certificate errors
# ✓ TokenReview requests succeeding
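Under the hood, the injector's check is a TokenReview call against the Server Cluster's API. When injection fails with TLS errors, replaying one by hand isolates whether the CA chain or the endpoint is at fault. The endpoint and tokens below are placeholders, and the curl line is commented out so the sketch runs offline:

```shell
# Build the TokenReview request body that connect-injector effectively sends.
set -e
SA_TOKEN="<service-account-jwt-under-review>"  # placeholder
body=$(cat <<JSON
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "TokenReview",
  "spec": { "token": "$SA_TOKEN" }
}
JSON
)
echo "$body"

# Replay it against the Server Cluster's API (placeholders):
# curl --cacert ca-chain.pem \
#   -H "Authorization: Bearer <reviewer-token>" \
#   -H "Content-Type: application/json" \
#   -d "$body" \
#   "<k8s-api-endpoint>/apis/authentication.k8s.io/v1/tokenreviews"
```

If the curl step fails with an x509 error, the problem is the CA chain, not Consul.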
Troubleshooting
Certificate Signed by Unknown Authority
Symptom: x509: certificate signed by unknown authority
Cause: Incomplete CA chain in Client Cluster.
Solution:
# Verify chain has 3 certificates
openssl crl2pkcs7 -nocrl -certfile ca-chain.pem | \
  openssl pkcs7 -print_certs -noout | grep "subject=" | wc -l
# Should return: 3
# If incorrect, recreate
cat inter.crt root.crt k8s-api-ca.crt > ca-chain.pem
# Update secret
kubectl delete secret ca-cert -n consul
kubectl create secret generic ca-cert -n consul \
  --from-file=tls.crt=ca-chain.pem
kubectl rollout restart deployment consul-connect-injector -n consul
Certificate Hostname Mismatch
Symptom: x509: certificate is valid for kubernetes, not <custom-domain>
Cause: Using custom domain instead of direct Kubernetes API endpoint.
Solution:
Use the direct cloud provider endpoint in k8sAuthMethodHost:
externalServers:
  k8sAuthMethodHost: "https://ABC123.gr7.us-east-1.eks.amazonaws.com"  # ✓ Correct
  # NOT: https://custom-domain.example.com                             # ✗ Incorrect
Auto-Encrypt Configuration Error
Symptom: error signing certificate: certificate signing unavailable
Cause: Incorrect enableAutoEncrypt setting.
Solution:
- Server Cluster: enableAutoEncrypt: true (has caKey)
- Client Cluster: enableAutoEncrypt: false (no caKey)
Invalid Bearer Token
Symptom: error performing token review: invalid bearer token
Cause: ACL token mismatch between clusters.
Solution:
# Export from Server Cluster
kubectl get secret consul-bootstrap-acl-token -n consul \
  --context <server-cluster> -o jsonpath='{.data.token}' | base64 -d > token.txt
# Update in Client Cluster
kubectl delete secret consul-bootstrap-acl-token -n consul \
  --context <client-cluster>
kubectl create secret generic consul-bootstrap-acl-token -n consul \
  --context <client-cluster> --from-literal=token=$(cat token.txt)
kubectl rollout restart deployment consul-connect-injector -n consul \
  --context <client-cluster>
Gossip Key Mismatch
Symptom: Failed to decrypt gossip message
Cause: Different gossip encryption keys between clusters.
Solution:
# Export from Server Cluster
kubectl get secret consul-gossip-encryption-key -n consul \
  --context <server-cluster> -o jsonpath='{.data.key}' | base64 -d > gossip.txt
# Update in Client Cluster
kubectl delete secret consul-gossip-encryption-key -n consul \
  --context <client-cluster>
kubectl create secret generic consul-gossip-encryption-key -n consul \
  --context <client-cluster> --from-literal=key=$(cat gossip.txt)
kubectl rollout restart deployment consul-connect-injector -n consul \
  --context <client-cluster>
Configuration Comparison
Server Cluster vs Client Cluster
| Setting | Server Cluster | Client Cluster |
|---|---|---|
| server.enabled | true | false |
| connectInject.enabled | false | true |
| tls.enableAutoEncrypt | true | false |
| tls.caKey | Provided (inter.key) | Not provided |
| ca-cert secret | tls.crt + tls.key | tls.crt only |
| gossipEncryption.autoGenerate | true | false |
| externalServers.enabled | N/A | true |
Secret Structures
Server Cluster:
ca-cert:
  tls.crt: <ca-chain.pem>  # 3 certificates
  tls.key: <inter.key>     # Intermediate CA private key
Client Cluster:
ca-cert:
  tls.crt: <ca-chain.pem>  # Same 3 certificates
  # NO tls.key
Best Practices
Use an Intermediate CA: Generate a Root → Intermediate structure and provide only the intermediate's private key as Consul's caKey; keep the root key offline.
Include Kubernetes API CA: Always add the Kubernetes API CA to your ca-chain.pem to enable LoadBalancer certificate trust.
Use Direct API Endpoints: In k8sAuthMethodHost, use the direct cloud provider endpoint (not custom domains or proxies).
Principle of Least Privilege: Client Cluster should never have access to CA private keys.
Monitor Logs: Regularly check connect-injector logs for certificate or TokenReview errors.
Document Certificate Sources: Maintain clear documentation of which CA issued which certificate and expiration dates.
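For the last practice, openssl's -checkend flag makes expiry monitoring a one-liner. This sketch warns when a certificate is inside a 30-day window; the throwaway demo cert stands in for inter.crt, root.crt, and k8s-api-ca.crt:

```shell
# Warn if a certificate expires within 30 days (2592000 seconds).
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/c.key" \
  -subj "/CN=Demo CA" -days 365 -out "$tmp/c.crt" 2>/dev/null
for cert in "$tmp/c.crt"; do  # in practice: inter.crt root.crt k8s-api-ca.crt
  if openssl x509 -in "$cert" -noout -checkend 2592000 >/dev/null; then
    echo "$cert: ok"
  else
    echo "$cert: WARN expires within 30 days"
  fi
done
```

A cron job running this loop over the three chain certificates catches renewals before they turn into x509 errors.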