Introduction
In this tutorial, we will use Vault with Kubernetes to store and manage secrets required for a Consul datacenter. A secure Consul datacenter requires us to distribute a number of secrets to the Consul agents before we can perform any operations. This includes a gossip encryption key, TLS certificates for the servers, and ACL tokens for all configurations.
If you are deploying Consul on Kubernetes, you have the option to provide these secrets to the Consul agents through Vault as a secrets management backend for Consul. Vault alleviates the problem of secret sprawl and provides centralized storage to secure sensitive data such as tokens, passwords, certificates, keys, and licenses.
You can use HashiCorp Vault to authenticate your applications with a Kubernetes Service Account token. The kubernetes authentication method automatically injects a Vault token into a Kubernetes pod. This lets us use Vault to store all the other secrets, including the ones required by Consul.
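As an illustration of what that injection looks like in practice, a pod that wants a Vault token and a rendered secret would carry annotations along these lines. This is a minimal sketch: the role consul-server and the secret path consul/data/secret/gossip are the ones created later in this tutorial, and the Consul Helm chart adds equivalent annotations for you, so you do not have to write them by hand.

# Illustrative pod metadata only -- the Consul Helm chart generates the real annotations.
metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"          # ask the injector to add a Vault agent sidecar
    vault.hashicorp.com/role: "consul-server"         # Vault Kubernetes auth role to log in with
    vault.hashicorp.com/agent-inject-secret-gossip.txt: "consul/data/secret/gossip"  # render this secret to /vault/secrets/gossip.txt
spec:
  serviceAccountName: consul-server                   # the JWT of this service account is used for the Vault login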
Setup Instructions:-
In order to proceed with the integration, we need a Kubernetes cluster. You may choose any of the following options to create a multi-node K8s cluster (a sample kind configuration follows the list).
* Create a kind cluster setup.
* Create an EKS cluster.
* Create a K3s cluster setup.
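For example, if you choose kind, a minimal multi-node cluster definition could look like the following (a sketch; the file name kind-config.yaml and the cluster name dc1 are arbitrary):

# kind-config.yaml -- one control-plane node and two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: dc1
nodes:
  - role: control-plane
  - role: worker
  - role: worker

$ kind create cluster --config kind-config.yaml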
In this tutorial, a three-node K3s cluster is used:

ubuntu@dc1-master:~$ k get nodes
NAME          STATUS   ROLES                  AGE   VERSION
dc1-master    Ready    control-plane,master   15d   v1.24.7+k3s1
dc1-worker2   Ready    <none>                 15d   v1.24.7+k3s1
dc1-worker1   Ready    <none>                 15d   v1.24.7+k3s1
Create a Vault Cluster on Kubernetes:-
We will create a single-server Vault cluster without TLS using the `vault-values.yaml` file below, which creates one Vault server pod and a vault-agent-injector pod. If you want to create a cluster with TLS, please refer to the guide.
global:
  enabled: true
  tlsDisable: true
server:
  standalone:
    enabled: true
    config: |
      listener "tcp" {
        address = "[::]:8200"
        tls_disable = "true"
        cluster_address = "[::]:8201"
      }
      storage "raft" {
        path = "/vault/data"
      }
To get more info on the available helm values configuration options, check the Helm Chart Configuration page.
$ helm repo add hashicorp https://helm.releases.hashicorp.com && helm repo update
$ helm install vault -f ./vault-values.yaml hashicorp/vault --version "0.25.0"
The Vault server starts uninitialized and in a sealed state.
$ k get po
NAME                                    READY   STATUS    RESTARTS   AGE
vault-0                                 0/1     Running   0          32s
vault-agent-injector-6549d85b8f-l64fx   1/1     Running   0          32s
Initialize Vault with one key share and one key threshold.
$ kubectl exec vault-0 -- vault operator init \
    -key-shares=1 \
    -key-threshold=1 \
    -format=json > cluster-keys.json
$ kubectl exec vault-0 -- vault operator unseal $(cat cluster-keys.json | jq -r ".unseal_keys_b64[]")
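The cluster-keys.json file written above also holds the initial root token, which we will use to log in to Vault in the next section. One way to pull it into an environment variable with jq (the variable name is just for convenience):

$ export VAULT_ROOT_TOKEN=$(cat cluster-keys.json | jq -r ".root_token")
$ echo $VAULT_ROOT_TOKEN   # paste this value when 'vault login' prompts for a token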
Thereafter, check the status of the vault-0 pod; it should show 1/1, and the output of kubectl exec -it vault-0 -- vault status should show HA Mode as active and Sealed as false.

$ k get po
NAME                                    READY   STATUS    RESTARTS   AGE
vault-agent-injector-6549d85b8f-l64fx   1/1     Running   0          11m
vault-0                                 1/1     Running   0          11m
$ vault status
Handling connection for 8200
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 1
Threshold 1
Version 1.14.0
Build Date 2023-06-19T11:40:23Z
Storage Type raft
Cluster Name vault-cluster-9fe28389
Cluster ID f0594bd1-9be4-69b1-f72e-f91f85851ca6
HA Enabled true
HA Cluster https://vault-0.vault-internal:8201
HA Mode active
Active Since 2023-09-25T16:22:13.442214949Z
Raft Committed Index 2402
Raft Applied Index 2402
Before we deploy the Consul Helm chart, we need to create all of the required secrets, policies, and roles inside the Vault cluster.
$ kubectl port-forward svc/vault 8200:8200 &
$ export VAULT_ADDR=http://127.0.0.1:8200

# Log in with the root token from the "cluster-keys.json" file created earlier.
$ vault login

# Create a KV secrets engine to store secrets at the path "consul"
$ vault secrets enable -path=consul kv-v2

# Store the Consul gossip encryption key in Vault
$ vault kv put consul/secret/gossip gossip="$(consul keygen)"

# Create a Vault PKI secrets engine at the "pki" path, and tune it to issue certificates with a TTL of 10 years.
$ vault secrets enable pki
$ vault secrets tune -max-lease-ttl=87600h pki

# Generate the root certificate for the Consul CA.
$ vault write -field=certificate pki/root/generate/internal \
    common_name="dc1.consul" \
    ttl=87600h | tee consul_ca.crt

# Create a role that defines the configuration for the certificates.
$ vault write pki/roles/consul-server \
    allowed_domains="dc1.consul,consul-server,consul-server.consul,consul-server.consul.svc" \
    allow_subdomains=true \
    allow_bare_domains=true \
    allow_localhost=true \
    generate_lease=true \
    max_ttl="720h"

# Enable Vault's PKI secrets engine at the "connect-root" path to be used as the root CA for the Consul service mesh.
$ vault secrets enable -path connect-root pki

# Next, configure the Kubernetes authentication method provided by Vault, which enables clients to authenticate with a Kubernetes Service Account token and gives them access to the secrets.
$ vault auth enable kubernetes

# Starting in Kubernetes 1.21 (ref. https://developer.hashicorp.com/vault/docs/auth/kubernetes#kubernetes-1-21), the "BoundServiceAccountTokenVolume" feature defaults to enabled. This changes the JWT token mounted into containers by default. We therefore create a service account, a secret, and a ClusterRoleBinding with the necessary permissions to allow Vault to perform token reviews with Kubernetes.
$ cat <<EOF | kubectl create -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault
---
apiVersion: v1
kind: Secret
metadata:
  name: vault
  annotations:
    kubernetes.io/service-account.name: vault
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: vault
    namespace: default
EOF

# Vault accepts service account tokens from any client within the Kubernetes cluster. During authentication, Vault verifies that the service account token is valid by querying a configured Kubernetes endpoint. To do that, configure the Kubernetes auth method with the JSON web token (JWT) for the service account, the Kubernetes CA certificate, and the Kubernetes host URL.
$ TOKEN_REVIEW_JWT=$(kubectl get secret vault -o go-template='{{ .data.token }}' | base64 --decode)
$ KUBE_CA_CERT=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}' | base64 --decode)
$ KUBE_HOST=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.server}')

# Configure the Vault Kubernetes auth method to use the service account token.
$ vault write auth/kubernetes/config \
    token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
    kubernetes_host="$KUBE_HOST" \
    kubernetes_ca_cert="$KUBE_CA_CERT" \
    disable_local_ca_jwt="true"

# Once authentication happens, the Vault policies attached to the role let the Consul agents generate or retrieve secrets.
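To make sure the Kubernetes auth method is wired up before creating policies, you can read the configuration back (a quick sanity check; the exact fields shown depend on your Vault version):

$ vault auth list
$ vault read auth/kubernetes/config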
# Create a policy that grants access to the path "consul" where you stored the gossip encryption key.
$ vault policy write gossip-policy - <<EOF
path "consul/data/secret/gossip" {
  capabilities = ["read"]
}
EOF

# Consul servers need to generate TLS certificates (pki/issue/consul-server) and retrieve the CA certificate (pki/cert/ca).
$ vault policy write consul-server - <<EOF
path "kv/data/consul-server" {
  capabilities = ["read"]
}
path "pki/issue/consul-server" {
  capabilities = ["read","update"]
}
path "pki/cert/ca" {
  capabilities = ["read"]
}
EOF

# Create a policy "ca-policy" that grants access to the Consul root CA so that Consul agents and services can verify the certificates used in the service mesh are authentic.
$ vault policy write ca-policy - <<EOF
path "pki/cert/ca" {
  capabilities = ["read"]
}
EOF

# Create a policy to create and manage the root and intermediate PKI secrets engines for generating service mesh certificates. Here, "RootPKIPath" is "connect-root" and the "IntermediatePKIPath" is "connect-intermediate-dc1".
$ vault policy write connect - <<EOF
path "/sys/mounts/connect-root" {
  capabilities = [ "create", "read", "update", "delete", "list" ]
}
path "/sys/mounts/connect-intermediate-dc1" {
  capabilities = [ "create", "read", "update", "delete", "list" ]
}
path "/sys/mounts/connect-intermediate-dc1/tune" {
  capabilities = [ "update" ]
}
path "/connect-root/*" {
  capabilities = [ "create", "read", "update", "delete", "list" ]
}
path "/connect-intermediate-dc1/*" {
  capabilities = [ "create", "read", "update", "delete", "list" ]
}
path "auth/token/renew-self" {
  capabilities = [ "update" ]
}
path "auth/token/lookup-self" {
  capabilities = [ "read" ]
}
EOF
# Finally, we need to create different roles mapped to the above policies. These roles define the association between Kubernetes ServiceAccounts and Vault policies.
$ vault write auth/kubernetes/role/consul-server \
    bound_service_account_names=consul-server \
    bound_service_account_namespaces=consul \
    policies="gossip-policy,consul-server,connect" \
    ttl=24h

$ vault write auth/kubernetes/role/consul-client \
    bound_service_account_names=consul-client \
    bound_service_account_namespaces=consul \
    policies="gossip-policy,ca-policy" \
    ttl=24h

$ vault write auth/kubernetes/role/consul-ca \
    bound_service_account_names="*" \
    bound_service_account_namespaces=consul \
    policies=ca-policy \
    ttl=1h
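It is worth confirming that the policies and roles landed as expected before installing Consul (a sanity check, not strictly required):

$ vault policy list
$ vault read auth/kubernetes/role/consul-server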
Create a Consul Cluster:-
Create a `consul-values.yaml` file with the configuration below.

global:
  datacenter: "dc1"
  name: consul
  domain: consul
  secretsBackend:
    vault:
      # Once enabled, Consul will refer to Vault to fetch its secrets.
      enabled: true
      # K8s auth role in Vault that connects the K8s ServiceAccount (consul-server) and its namespace (consul) with the Vault policies (gossip-policy, consul-server and connect), and returns a token after authentication.
      consulServerRole: consul-server
      # K8s auth role in Vault that connects the K8s ServiceAccount (consul-client) and its namespace (consul) with the Vault policies (gossip-policy and ca-policy), and returns a token after authentication.
      consulClientRole: consul-client
      # K8s auth role in Vault that connects all K8s ServiceAccounts and their namespace (consul) with the Vault policy (ca-policy), and returns a token after authentication.
      consulCARole: consul-ca
      # Below is the configuration of the connect CA, which creates mTLS certificates for services in K8s and stores them in the root and intermediate PKI paths.
      connectCA:
        # The connect CA address is the Vault service, resolved like vault.<namespace>.svc.cluster.local:<port_address>
        address: http://vault.default:8200
        rootPKIPath: connect-root/
        intermediatePKIPath: connect-intermediate-dc1/
        additionalConfig: "{\"connect\": [{ \"ca_config\": [{ \"namespace\": \"root\"}]}]}"
      agentAnnotations: |
        "vault.hashicorp.com/namespace": "root" # As Vault operates in the root namespace
  tls:
    enabled: true
    enableAutoEncrypt: true
    caCert:
      secretName: "pki/cert/ca" # Path to retrieve the CA cert
  federation:
    enabled: false
    createFederationSecret: false
  acls:
    manageSystemACLs: false
  gossipEncryption:
    secretName: consul/data/secret/gossip # Secret path where the `gossip` key is stored
    secretKey: gossip
server:
  replicas: 1
  exposeGossipAndRPCPorts: true
  serverCert:
    secretName: "pki/issue/consul-server" # PKI role the Consul server uses to generate its TLS certificate
connectInject:
  replicas: 1
  enabled: true
controller:
  enabled: false # Controller functionality is merged into connectInject in Helm chart >= 1.0.0
meshGateway:
  enabled: false
  replicas: 1
ingressGateways:
  replicas: 1
  enabled: true
  gateways:
    - name: ingress-gateway
      service:
        type: LoadBalancer
terminatingGateways:
  replicas: 1
  enabled: true
  gateways:
    - name: terminating-gateway
      service:
        type: LoadBalancer
ui:
  enabled: true
  service:
    type: LoadBalancer
syncCatalog:
  enabled: true
  consulNamespaces:
    mirroringK8S: true
  k8sDenyNamespaces: ["kube-system", "kube-public"]
Now, we will finally deploy our Consul Helm chart.
$ helm install --namespace consul --create-namespace \
    --values ./consul-values.yaml \
    consul hashicorp/consul --version "1.1.4" \
    --wait --debug
# Verify the installation is complete.
$ kubectl get po -n consul
NAME                                           READY   STATUS    RESTARTS   AGE
consul-connect-injector-7ddfbd84b8-7n8dd       2/2     Running   0          5d
consul-ingress-gateway-644b8896d7-phzc6        2/2     Running   0          5d
consul-server-0                                2/2     Running   0          3d2h
consul-sync-catalog-57487bc878-5dx9j           2/2     Running   0          5d
consul-terminating-gateway-6d9c6d5f59-wh68p    2/2     Running   0          5d
consul-webhook-cert-manager-6c6d66bdd5-4tvr8   1/1     Running   0          5d
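You can also confirm that the chart wired the server pod to Vault by looking for the vault.hashicorp.com annotations on the pod (the exact set of annotations depends on the chart version):

$ kubectl describe pod consul-server-0 -n consul | grep vault.hashicorp.com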
Now, let's test the Kubernetes authentication by logging in to Vault with the consul-server service account JWT and its role consul-server defined in Vault. First, create an environment variable SERVER_JWT by fetching the JWT token from the consul-server service account and its secret.
$ kubectl describe sa consul-server -n consul
Name:                consul-server
Namespace:           consul
Labels:              app=consul
                     app.kubernetes.io/managed-by=Helm
                     chart=consul-helm
                     component=server
                     heritage=Helm
                     release=consul
Annotations:         meta.helm.sh/release-name: consul
                     meta.helm.sh/release-namespace: consul
Image pull secrets:  <none>
Mountable secrets:   consul-server-token-ljg7h
Tokens:              consul-server-token-ljg7h
Events:              <none>

$ export SERVER_JWT=$(kubectl get secret $(kubectl get sa consul-server -n consul -o jsonpath='{.secrets[0].name}') -n consul -o jsonpath='{.data.token}' | base64 -d)

$ curl --request POST --data '{"jwt": "'$SERVER_JWT'", "role": "consul-server"}' http://127.0.0.1:8200/v1/auth/kubernetes/login | jq .
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  1798  100   841  100   957   1144   1301 --:--:-- --:--:-- --:--:--  2456
{
  "request_id": "93427bba-93e3-7132-a388-c5da5336ea9d",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": null,
  "wrap_info": null,
  "warnings": null,
  "auth": {
    "client_token": "hvs.CAESIDiQktvzXQbocfQ2KlcnIcP9WVYDNT7vOopZ-oMq2rzDGh4KHGh2cy5BZmkxdlFFeVdLckk5NEd4OU0xUUdCaUE",
    "accessor": "9NxeUbcHVBf8ll9LOEltOIfb",
    "policies": [
      "connect",
      "consul-server",
      "default",
      "gossip-policy"
    ],
    "token_policies": [
      "connect",
      "consul-server",
      "default",
      "gossip-policy"
    ],
    "metadata": {
      "role": "consul-server",
      "service_account_name": "consul-server",
      "service_account_namespace": "consul",
      "service_account_secret_name": "consul-server-token-ljg7h",
      "service_account_uid": "e205be82-9338-47f0-94cc-d1a8fd1452b7"
    },
    "lease_duration": 86400,
    "renewable": true,
    "entity_id": "5ff5f94b-9c3e-4fe6-6309-5e44b37575d6",
    "token_type": "service",
    "orphan": true,
    "mfa_requirement": null,
    "num_uses": 0
  }
}
This means the login is successful: it generates a client_token with a lease duration of 24 hrs, and the Consul agent fetches all of its secrets and stores them under the path /vault/secrets/ inside the pod. Additionally, we can also find the JWT, ca.crt, and namespace used by the agent itself under the path /run/secrets/kubernetes.io/serviceaccount/ inside the pod.

$ kubectl exec -it consul-server-0 -n consul -- sh
~ $ cd /vault/secrets/
/vault/secrets $ ls
gossip.txt  serverca.crt  servercert.crt  servercert.key
/vault/secrets $ cd /var/run/secrets/kubernetes.io/serviceaccount/
/run/secrets/kubernetes.io/serviceaccount $ ls
ca.crt  namespace  token
/run/secrets/kubernetes.io/serviceaccount $
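As one more optional check, you can compare the gossip key that the Vault agent rendered into the server pod with the value stored in Vault (assuming the rendered file holds the raw key, as the gossip.txt listing above suggests):

$ kubectl exec consul-server-0 -n consul -- cat /vault/secrets/gossip.txt
$ vault kv get -field=gossip consul/secret/gossip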
Additional References:-