Introduction
Services can be synced in two ways:
- using Catalog Sync
- using the Connect sidecar
We recommend using one of these approaches, not both together. For cluster security, the Connect sidecar is the better choice. In fact, if the Consul cluster has Connect enabled, only the Connect injector should be used. Using Catalog Sync in that situation will either not work at all (if the services have been secured appropriately) or it will bypass the proxy entirely, leaving the communication unsecured.
With Connect, tokens do not need to be passed manually, because the Connect injector uses Consul's Kubernetes auth method to retrieve an individual service token before each container starts up.
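Under the hood, this token retrieval is a Consul ACL login: the pod's Kubernetes service account JWT is exchanged for a Consul ACL token. A rough sketch of the equivalent CLI call is below (the method name matches the example later in this article; the file paths are illustrative assumptions, and a reachable ACL-enabled Consul agent is required):

```shell
# Roughly what the injected init container does at pod startup:
# exchange the pod's service account JWT for a Consul ACL token.
consul login \
  -method=auth-method-sujata \
  -bearer-token-file=/var/run/secrets/kubernetes.io/serviceaccount/token \
  -token-sink-file=/consul/connect-inject/acl-token
```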
Procedure
In this setup, one ACL-enabled Consul client running in a minikube Kubernetes cluster and one Consul server outside of Kubernetes were used.
The steps to follow are below:
1. First, a Kubernetes service account needs to be created to be used for the auth method. In this example, the default service account for the cluster was used. Since the Helm release name was sujata, the service account name was sujata-consul-client.
% kubectl get serviceaccounts
NAME                                                 SECRETS   AGE
counting                                             1         45h
default                                              1         46h
static-client                                        1         83m
static-server                                        1         84m
sujata-consul-client                                 1         46h
sujata-consul-connect-injector-webhook-svc-account   1         46h
The RBAC configuration snippet described in this link was used to grant the necessary permissions to the service account sujata-consul-client. The name consul-auth-method-example was replaced with the service account above.
~ % cat sujata_rbac.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: review-tokens
  namespace: default
subjects:
- kind: ServiceAccount
  name: sujata-consul-client
  namespace: default
roleRef:
  kind: ClusterRole
  name: system:auth-delegator
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-account-getter
  namespace: default
rules:
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: get-service-accounts
  namespace: default
subjects:
- kind: ServiceAccount
  name: sujata-consul-client
  namespace: default
roleRef:
  kind: ClusterRole
  name: service-account-getter
  apiGroup: rbac.authorization.k8s.io

~ % kubectl create -f sujata_rbac.yaml
clusterrolebinding.rbac.authorization.k8s.io/review-tokens created
clusterrole.rbac.authorization.k8s.io/service-account-getter created
clusterrolebinding.rbac.authorization.k8s.io/get-service-accounts created
2. Next, an ACL auth method needs to be created for the injector to use. For example, the command below was used. The CA cert and token contents came from the secret that Kubernetes created for the service account in step 1.
consul acl auth-method create -type "kubernetes" \
  -name "my-k8s" \
  -description "This is an example kube method" \
  -kubernetes-host "https://<kubernetes service cluster IP>:443" \
  -kubernetes-ca-file /path/to/kube.ca.crt \
  -kubernetes-service-account-jwt "<jwt token contents of the service account from step 1>"
The auth method named auth-method-sujata was created as below:
% kubectl get endpoints | grep kubernetes
kubernetes   192.xxx.xx.1x:8443   23h

~ % kubectl get sa sujata-consul-client -o yaml | grep "\- name:"
- name: sujata-consul-client-token-56q9c

The next step was to extract the secret token and CA certificate.

~ % kubectl get secret sujata-consul-client-token-56q9c -o yaml | grep token:

The token from this output was base64-decoded, and the value was saved as the token value.

~ % kubectl get secret sujata-consul-client-token-56q9c -o yaml | grep ca.crt:

The certificate from this output was base64-decoded in the same way, and the value was saved in a file named ca.crt.

~ % consul acl auth-method create \
  -type "kubernetes" \
  -name "auth-method-sujata" \
  -description "This is an auth method using kubernetes for the cluster sujata" \
  -kubernetes-host "192.xxx.xx.1x:8443" \
  -kubernetes-ca-cert=@ca.crt \
  -kubernetes-service-account-jwt="<use the secret token after decoding>"
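The base64 decoding above can be done directly in the shell. A small sketch (the encoded value below is a made-up placeholder, not a real service account JWT; in practice it would come from the kubectl command shown in the comment):

```shell
# The secret data printed by kubectl is base64-encoded; decode it before
# passing it to `consul acl auth-method create`. In practice the value comes
# from something like:
#   kubectl get secret sujata-consul-client-token-56q9c -o jsonpath='{.data.token}'
encoded="aGVsbG8tand0"   # placeholder, not a real JWT
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"          # prints: hello-jwt
```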
3. After the auth method is created, the next step is to create a binding rule:
~ % consul acl binding-rule create \
  -method=auth-method-sujata \
  -bind-type=service \
  -bind-name='${serviceaccount.name}' \
  -selector="serviceaccount.name!=default"
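This rule logs each pod in under the name of its Kubernetes service account, while the selector skips pods that use the default service account. The rule can be confirmed by listing the binding rules attached to the method (this requires connectivity to the ACL-enabled Consul server from this example):

```shell
# List binding rules attached to the auth method to confirm creation.
consul acl binding-rule list -method=auth-method-sujata
```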
4. Next, the auth method name needs to be set as the value of connectInject.overrideAuthMethodName in the Helm chart, and then the Helm chart should be re-installed. This instructs the Connect injector to use this auth method to get a service token for each service that uses sidecar injection.
Here is the example client config file:
---
client:
  enabled: true
  exposeGossipPorts: true
  extraConfig: |-
    {
      "acl": {
        "enabled": true,
        "default_policy": "deny",
        "enable_token_persistence": true,
        "tokens": {
          "agent": "489027f3-43f8-5c6b-d2b1-7300b786ac78"
        }
      }
    }
  grpc: true
  image: "consul:latest"
  join:
    - "10.x.x.xx"
connectInject:
  enabled: true
  aclBindingRuleSelector: "serviceaccount.name!=default"
  # If not using global.bootstrapACLs and instead manually setting up an auth
  # method for Connect inject, set this to the name of your auth method.
  overrideAuthMethodName: "auth-method-sujata"
global:
  enabled: false
  image: "consul:1.7.2"
server:
  enabled: false
syncCatalog:
  enabled: false
ui:
  enabled: false
Note: It is only necessary to manually create an auth method because the Consul server is outside of Kubernetes. If the server were running inside Kubernetes with the clients, the auth method would be created automatically. There is also another way to deal with external servers that requires fewer manual steps: create a Kubernetes secret containing the server's ACL bootstrap token and provide it in the Helm chart under the global.acls.bootstrapToken value. All of the ACLs will then be configured automatically, including the client token and the auth method. The details can be found in this link.
5. Finally, it is time to deploy the services to Kubernetes.
The services do not need to be manually registered with Consul because the Connect injector automatically adds the sidecars and registers the services. If services were already running in the cluster when the Connect injector was first installed, they can be injected and registered by restarting them. To have a service injected, either add an annotation to the pod spec or set the connectInject.default Helm value to true.
annotation: https://www.consul.io/docs/platform/k8s/connect.html#consul-hashicorp-com-connect-inject helm chart option: https://www.consul.io/docs/platform/k8s/helm.html#v-connectinject-default
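For reference, a minimal pod spec fragment using the annotation from the first link might look like the following (the image and args are assumptions based on the typical static-server demo, not taken from this article):

```yaml
# Hypothetical pod spec enabling Connect sidecar injection via annotation.
apiVersion: v1
kind: Pod
metadata:
  name: static-server
  annotations:
    "consul.hashicorp.com/connect-inject": "true"
spec:
  serviceAccountName: static-server
  containers:
    - name: static-server
      image: hashicorp/http-echo:latest   # assumed demo image
      args: ["-text=hello"]
```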
For example, here, the policies for both static-server and static-client were created using the Consul UI. Then, tokens for both were created in the Consul UI using those policies. The command line can also be used. The static-server and static-client files used in this example are referenced in the link.
Only the static-server example is provided here.
service "static-server" {
policy = "write"
}
service "static-server-sidecar-proxy" {
policy = "write"
}
service_prefix "" {
policy = "read"
}
node_prefix "" {
policy = "read"
}
~ % kubectl create -f consul-helm/static-server.yaml
serviceaccount/static-server created
pod/static-server created
% kubectl get pods
NAME READY STATUS RESTARTS AGE
static-client 3/3 Running 0 98m
static-server 3/3 Running 0 99m
sujata-consul-bw4bl 1/1 Running 0 6h30m
sujata-consul-connect-injector-webhook-deployment-84cb649584j47 1/1 Running 0 6h30m
Now, the running services should be visible in the Consul UI.
Additional Information
https://www.consul.io/docs/platform/k8s/connect.html
https://discuss.hashicorp.com/t/consul-kubernetes/6931/2