The information contained in this article has been verified as up-to-date on the date of the original publication of the article. HashiCorp endeavors to keep this information up-to-date and correct, but it makes no representations or warranties of any kind, express or implied, about the ongoing completeness, accuracy, reliability, or suitability of the information provided.
All information contained in this article is for general information purposes only. Any reliance you place on such information as it applies to your use of your HashiCorp product is therefore strictly at your own risk.
This article serves as a guide for resolving a situation where a Kubernetes cluster is integrated with external Consul servers, and an attempt to deploy a service within the Kubernetes cluster results in the pod entering an error state.
- The Consul Kubernetes components (webhook/injector) appear to be running:
NAMESPACE   NAME                                          READY   STATUS    RESTARTS   AGE
default     consul-connect-injector-5645d5b8f4-dq4cx      1/1     Running   0          74m
default     consul-webhook-cert-manager-d445986b4-z9m92   1/1     Running   0          74m
- When a new service pod is deployed/registered, the volumes are mounted correctly, but the pod itself enters an error state.
In the service pod log, the following error is present:
2023-10-10T16:30:16.102Z [INFO] Registered service has been detected: service=static-server-sidecar-proxy
2023-10-10T16:30:16.173Z [ERROR] error applying traffic redirection rules:
| failed to run command: /sbin/iptables -t nat -N CONSUL_PROXY_INBOUND, err: exit status 3, output: modprobe: can't change directory to '/lib/modules': No such file or directory
| iptables v1.8.8 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
| Perhaps iptables or your kernel needs to be upgraded.
- firewalld has been disabled, and the error occurs only when Consul is installed in Kubernetes.
- The external Consul UI does display the service, but it is marked as having all service checks failing.
The error in this article was identified within the setup specified below; nonetheless, it may also occur in other configurations.
- Consul Server 1.15.3+ent on a virtual machine running Alma Linux 8
- Kubernetes cluster running consul-k8s 1.1.2, with 1 controller and 2 worker nodes
- kube-proxy in IPVS mode (not iptables)
- The environment does not have iptables support, since kube-proxy runs in IPVS mode
- IPVS mode is used to enable transparent proxy support
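To confirm which mode kube-proxy is actually running in, its configuration or logs can be inspected. A sketch, assuming kube-proxy was deployed with a ConfigMap in the kube-system namespace (as kubeadm-based clusters do); the label selector may differ in other distributions:

```
# Show the configured proxy mode (ipvs vs. iptables)
kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'

# Or check the kube-proxy logs for the proxier it actually started
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier
```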
Overview of possible solutions
- Disable transparent proxy via the Consul Helm chart configuration (or a per-pod annotation)
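A minimal sketch of the Helm values change, assuming the service mesh is deployed with the official consul-k8s Helm chart (this disables transparent proxy for all injected pods cluster-wide):

```yaml
# values.yaml -- disable transparent proxy for all injected pods
connectInject:
  enabled: true
  transparentProxy:
    defaultEnabled: false
```

Alternatively, transparent proxy can be disabled for a single workload with the pod annotation `consul.hashicorp.com/transparent-proxy: "false"`. The change can then be applied with, for example, `helm upgrade consul hashicorp/consul -f values.yaml -n consul` (the release and namespace names here are assumptions and should match the existing installation).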
- After disabling transparent proxy, all the pods should be in the Running state
- The Consul UI should show the service running with all health checks passing