Problem
When running a single Consul control plane with multiple Kubernetes dataplane clusters, components such as the Consul Kubernetes connect-injector may fail ACL login in some clusters with errors like:
rpc error: code = Unauthenticated desc = Unauthorized
or:
rpc error: code = Unauthenticated desc = lookup failed: [invalid bearer token, invalid signature, no keys found]
This typically occurs in architectures where:
- A single Consul control-plane cluster serves multiple Kubernetes “dataplane” clusters.
- Each dataplane’s Kubernetes API is reached via an ingress/proxy endpoint over TLS.
- Consul’s Kubernetes ACL auth-method(s) are shared across these dataplanes.
Affected environment (example)
The issue can appear in environments similar to:
- Consul Enterprise (with ACLs and TLS enabled), deployed via the official Helm chart.
- One primary Consul control-plane cluster.
- Two or more Kubernetes dataplane clusters, each:
  - Running Consul via Helm with externalServers.enabled=true.
  - Configured to reach the control plane using externalServers.hosts.
  - Exposing their Kubernetes API via a TLS endpoint (e.g., https://kubeapi-dataplane-a.example.com:6443, https://kubeapi-dataplane-b.example.com:6443).
- Default Kubernetes auth-methods created by the Helm chart:
  - consul-k8s-auth-method
  - consul-k8s-component-auth-method
The ingress endpoints may share a common external CA chain, but each Kubernetes cluster still has its own service-account token issuer.
Symptoms
In this multi-dataplane setup you may observe:
- The first dataplane cluster you configure (“dataplane A”) works after:
  - Installing Consul via Helm, and/or
  - Patching the default Kubernetes auth-methods to trust the external CA chain used by its Kubernetes API endpoint.
- A later dataplane (“dataplane B”) fails with:
  - connect-injector crash-looping.
  - Sidecar injection failing.
  - Logs showing Unauthenticated / invalid bearer token / invalid signature errors.
- After configuring dataplane B, previously working workloads in dataplane A may start failing ACL login with similar errors.
Root cause
1. Kubernetes auth-methods are cluster-specific
Consul’s Kubernetes ACL auth-method:
- Is designed to talk to one specific Kubernetes API server per auth-method.
- Validates service-account tokens by sending a TokenReview request to the configured kubernetes-host.

Each Kubernetes cluster signs its service-account tokens with its own private key. Tokens from dataplane A cannot be validated by the API server of dataplane B, and vice versa.
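The validation call is a standard Kubernetes TokenReview. As a sketch, a request of roughly this shape (field names from the Kubernetes authentication.k8s.io/v1 API; the token value is illustrative) is POSTed to the configured kubernetes-host:

```yaml
# TokenReview request sent to <kubernetes-host>/apis/authentication.k8s.io/v1/tokenreviews
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  # Service-account JWT presented at login time (illustrative value)
  token: eyJhbGciOiJSUzI1NiIs...
```

The API server receiving this request can only authenticate tokens signed with its own service-account key, which is why sending dataplane A’s token to dataplane B’s endpoint fails.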
2. Single kubernetes-host per auth-method
A Kubernetes auth-method in Consul has a single kubernetes-host field, for example:
- kubernetes-host="https://kubeapi-dataplane-a.example.com:6443"
There is currently no built‑in support for multiple Kubernetes API endpoints per auth-method.
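For illustration, an auth-method read back from Consul has a shape roughly like the following (structure per the Consul ACL auth-method API; certificate and JWT values elided):

```json
{
  "Name": "consul-k8s-component-auth-method",
  "Type": "kubernetes",
  "Config": {
    "Host": "https://kubeapi-dataplane-a.example.com:6443",
    "CACert": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
    "ServiceAccountJWT": "eyJhbGciOiJSUzI1NiIs..."
  }
}
```

Note the single Config.Host value: there is nowhere to list a second cluster’s API endpoint.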
3. Shared CA chain does not remove cluster specificity
Even when all Kubernetes API ingress endpoints share an identical external CA chain:
- The auth-method is still bound to one kubernetes-host.
- If a single shared auth-method is pointed to dataplane B’s Kubernetes API:
  - Tokens from dataplane A are sent to dataplane B’s API server for TokenReview.
  - Dataplane B’s API server cannot validate tokens it did not issue.
  - Consul ACL login for dataplane A workloads fails with invalid bearer token / invalid signature errors.
In short: multiple Kubernetes clusters sharing the same external CA does not make them interchangeable for the Consul Kubernetes auth-method.
Why this often appears after adding a second dataplane
When you add another dataplane and install Consul via the Helm chart:
- The consul-server-acl-init-job in the control plane may:
  - Update the default Kubernetes auth-methods’ kubernetes-host.
  - Replace the CA configuration to match the new dataplane’s Kubernetes API.
- This overwrites the configuration that previously worked for the first dataplane.
- The result is:
  - The last-deployed dataplane cluster begins to work.
  - Earlier dataplane clusters begin to fail ACL login.
Manual attempts to “toggle” auth-method configuration between clusters are not scalable and are prone to breakage.
Known limitations
Today, the following limitations are key:
- Single-host design for Kubernetes auth-method
  - Each Kubernetes auth-method is tied to exactly one kubernetes-host and one cluster’s CA / token issuer.
- Default auth-method usage in Helm deployments
  - The Consul Helm chart configures components (such as connect-injector) to use the default Kubernetes auth-method (for example, consul-k8s-component-auth-method).
  - This means all dataplanes using the same control-plane Helm configuration will, by default, share the same Kubernetes auth-method, reintroducing the single-host limitation.
- No native multi-dataplane support over ingress/proxy
  - Consul’s current Kubernetes integration does not provide out-of-the-box support for many dataplane clusters, each reaching its own Kubernetes API only via ingress/proxy, all authenticating through a single shared control plane with a single shared set of default auth-methods.
Troubleshooting and verification
If you suspect you are hitting this issue, you can:
- Check connect-injector (and other component) logs:
  - Look for errors such as:
    - rpc error: code = Unauthenticated desc = Unauthorized
    - rpc error: code = Unauthenticated desc = lookup failed: [invalid bearer token, invalid signature, no keys found]
  - Confirm whether only one dataplane is working while others fail.
- Inspect the Kubernetes auth-method(s) in Consul:
  - Use:
    - consul acl auth-method list
    - consul acl auth-method read -name <auth-method-name>
  - Verify:
    - Which kubernetes-host is configured.
    - Which CA certificates are in use.
  - If a single shared auth-method is repeatedly updated to point to different kubernetes-host values (from different dataplanes), you are likely affected.
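To check which kubernetes-host a method currently points at, you can read it as JSON (consul acl auth-method read -name <name> -format=json) and inspect Config.Host. A minimal sketch, assuming output shaped like the embedded sample (values illustrative; real output also carries CACert and ServiceAccountJWT):

```python
import json

# Sample `consul acl auth-method read -format=json` output (illustrative).
sample = """
{
  "Name": "consul-k8s-component-auth-method",
  "Type": "kubernetes",
  "Config": {
    "Host": "https://kubeapi-dataplane-b.example.com:6443"
  }
}
"""

def kubernetes_host(auth_method_json: str) -> str:
    """Extract the single kubernetes-host an auth-method is bound to."""
    return json.loads(auth_method_json)["Config"]["Host"]

print(kubernetes_host(sample))
```

If this prints dataplane B’s endpoint while dataplane A workloads are the ones failing, the shared auth-method has been repointed and you are likely affected.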
- Confirm that each Kubernetes cluster has its own token issuer:
  - Each Kubernetes cluster normally has a distinct service-account signing key.
  - If service-account tokens from cluster A are being validated against cluster B’s API server, validation will fail.
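The issuer baked into a service-account token can be checked directly. The sketch below decodes a JWT payload without verifying the signature (a hypothetical helper, not Consul code) so you can compare the iss claim of tokens taken from two clusters:

```python
import base64
import json

def token_issuer(jwt: str) -> str:
    """Return the `iss` claim of a JWT (no signature verification)."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))["iss"]

def _fake_token(issuer: str) -> str:
    """Build an unsigned sample token for demonstration only."""
    enc = lambda obj: base64.urlsafe_b64encode(
        json.dumps(obj).encode()).decode().rstrip("=")
    return f'{enc({"alg": "none"})}.{enc({"iss": issuer})}.'

token_a = _fake_token("https://kubeapi-dataplane-a.example.com:6443")
token_b = _fake_token("https://kubeapi-dataplane-b.example.com:6443")
print(token_issuer(token_a) == token_issuer(token_b))  # False: distinct issuers
```

Distinct issuers (and distinct signing keys behind them) are exactly why one cluster’s API server rejects another cluster’s tokens during TokenReview.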
Current recommendations and workarounds
Because of the above limitations, the supported options today are:
Option 1: Separate Consul control-plane per Kubernetes cluster
Approach
Run a dedicated Consul deployment (control plane) for each Kubernetes cluster.
Pros
- No cross-cluster conflicts for Kubernetes auth-methods.
- Each environment can be managed and upgraded independently.
Cons
- Higher operational cost (more Consul deployments to manage).
- No shared Consul state across clusters (may not suit shared / federated service mesh topologies).
This is the most straightforward and reliable approach given current product behavior.
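As a sketch, each cluster then runs its own servers and lets the chart manage that cluster’s own auth-methods. The keys below are real Helm chart settings (global.acls.manageSystemACLs, server.enabled, connectInject.enabled); the file name is illustrative:

```yaml
# values-dataplane-a.yaml: one independent Consul per cluster
global:
  name: consul
  acls:
    manageSystemACLs: true   # chart creates this cluster's own auth-methods
  tls:
    enabled: true
server:
  enabled: true              # dedicated control plane inside this cluster
connectInject:
  enabled: true
```

Because the auth-methods are created per cluster, no second dataplane ever overwrites their kubernetes-host.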
Option 2: Avoid using ingress/proxy for Kubernetes API where possible
Approach
- Where your network topology allows, configure each dataplane to:
  - Reach its own Kubernetes API directly (for example, using the cluster-internal API endpoint).
  - Use the native Kubernetes CA instead of a shared external CA chain via ingress.
Benefits
- Simplifies Kubernetes auth-method configuration.
- Avoids having to patch auth-methods with an external CA chain.
- Reduces the chance of conflicts when multiple dataplanes are configured.
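In Helm terms, the chart’s externalServers.k8sAuthMethodHost setting controls which Kubernetes API address is written into the auth-method. A sketch (hostnames illustrative) pointing it at the cluster-internal endpoint rather than an ingress:

```yaml
# Dataplane values (illustrative): keep auth-method login against the
# cluster-internal API endpoint instead of an external ingress.
externalServers:
  enabled: true
  hosts:
    - consul-control-plane.example.com
  # Address written into this cluster's Kubernetes auth-method; the
  # in-cluster endpoint presents the native Kubernetes CA. The Consul
  # servers must be able to reach this address for TokenReview to work.
  k8sAuthMethodHost: https://kubernetes.default.svc:443
```

This avoids patching auth-methods with an external CA chain, but only helps where the control plane can actually reach each cluster’s API directly.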
Advanced / custom workarounds (use with caution)
In limited scenarios, some users have experimented with:
- Creating separate Kubernetes auth-methods per dataplane in the same control plane.
- Customizing their deployment so each dataplane’s Consul components consistently use the correct auth-method.

However, such approaches typically require:
- Manual creation and ongoing management of multiple Kubernetes auth-methods and ACL binding rules.
- Customization or forking of Helm templates to ensure the correct auth-method is used by each dataplane’s components.
- Additional validation after each upgrade.
These patterns are advanced, may not be officially supported, and should be treated as temporary workarounds rather than long-term solutions.
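To make the moving parts concrete, a sketch of the manual per-dataplane setup using real consul acl flags but illustrative names, hosts, and file paths (this is the unsupported pattern described above, not a recommended procedure):

```shell
# Create a dedicated auth-method for dataplane B (names/paths illustrative).
consul acl auth-method create \
  -name=consul-k8s-component-auth-method-dataplane-b \
  -type=kubernetes \
  -kubernetes-host="https://kubeapi-dataplane-b.example.com:6443" \
  -kubernetes-ca-cert=@dataplane-b-ca.pem \
  -kubernetes-service-account-jwt="$(cat dataplane-b-reviewer-token.jwt)"

# Bind logins from that method to an ACL identity, as the chart would.
consul acl binding-rule create \
  -method=consul-k8s-component-auth-method-dataplane-b \
  -bind-type=service \
  -bind-name='${serviceaccount.name}'
```

Every dataplane then needs its own auth-method, binding rules, and Helm customization to make components log in against the right method, which is why this does not scale well.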
Future improvements (high-level)
To improve support for this architecture, the following product enhancements would help:
- Per-dataplane configuration of the Kubernetes auth-method used by Consul components
  - For example, a Helm configuration that allows each dataplane release to specify which Kubernetes auth-method name to use.
- Clearer documentation of supported multi-cluster patterns
  - Explicitly documenting recommended and unsupported topologies for:
    - Shared control plane with multiple Kubernetes clusters.
    - Use of ingress/proxy for Kubernetes APIs.
    - Auth-method and CA configuration patterns.
- (Longer term) Enhanced Kubernetes auth-method capabilities
  - Designs that might allow multiple Kubernetes hosts or more flexible token validation across clusters would need careful consideration for security and complexity.
References
- Kubernetes auth-method documentation: Auth methods overview | Consul | HashiCorp Developer
- consul acl auth-method create command: Commands: ACL Auth Method Create | Consul | HashiCorp Developer
- consul acl auth-method update command: Commands: ACL Auth Method Update | Consul | HashiCorp Developer
- Consul Helm chart configuration: Helm Chart Reference | Consul | HashiCorp Developer