Introduction
There is a blog article on connecting to a Kubernetes cluster using Boundary that includes extensive steps on how to access pod resources in the Kubernetes cluster through Boundary. This guide focuses on retrieving the list of nodes, namespaces, or gaining full access to objects that are cluster-scoped, i.e. shown with NAMESPACED set to false in kubectl api-resources. This section outlines the steps to accomplish the above by making a few changes to the setup from the existing blog post.
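For reference, the cluster-scoped objects in question (nodes, namespaces, and so on) can be listed as shown below; they appear with NAMESPACED set to false:
# List only the resources that are not namespaced
kubectl api-resources --namespaced=false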
Solution
Update the vault-cluster-role.yaml file mentioned in the blog article with the content below; applying it creates the service account, ClusterRole, and ClusterRoleBinding with full access:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault
  namespace: vault
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-full-secrets-abilities-with-labels
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-token-creator-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8s-full-secrets-abilities-with-labels
subjects:
- kind: ServiceAccount
  name: vault
  namespace: vault
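Apply the manifest and, if desired, confirm that the objects were created (the file name follows the blog article):
kubectl apply -f vault-cluster-role.yaml
kubectl get serviceaccount vault -n vault
kubectl get clusterrole k8s-full-secrets-abilities-with-labels
kubectl get clusterrolebinding vault-token-creator-binding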
Generate and export the service account token for the vault service account in the vault namespace:
# For Kubernetes version 1.24 or higher, run:
export VAULT_SVC_ACCT_TOKEN="$(kubectl create token vault -n vault)"
# For Kubernetes version 1.23 or lower, run:
export VAULT_SVC_ACCT_TOKEN="$(kubectl get secret -n vault `kubectl get serviceaccounts vault -n vault -o jsonpath='{.secrets[0].name}'` -o jsonpath='{.data.token}' | base64 -d)"
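As a quick, optional sanity check, the exported value should be a three-part JWT:
# A service account token is a JWT, so this should print "3 segments"
echo "$VAULT_SVC_ACCT_TOKEN" | awk -F. '{print NF" segments"}'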
Set the KUBE_API_URL, retrieve the ca.crt, and configure the Kubernetes secrets engine on Vault as shown below:
# Copy the Kubernetes API server URL into an environment variable
export KUBE_API_URL=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$(kubectl config current-context)\")].cluster.server}")
# Retrieve the Kubernetes CA certificate and write it to a ca.crt file
kubectl config view --minify --raw --output 'jsonpath={..cluster.certificate-authority-data}' | base64 -d > ca.crt
# With KUBE_API_URL and VAULT_SVC_ACCT_TOKEN exported and ca.crt in place, configure the secrets engine
root@vaults0:/home/vagrant/boundarySetup# vault write -f kubernetes/config kubernetes_host=$KUBE_API_URL kubernetes_ca_cert=@ca.crt service_account_jwt=$VAULT_SVC_ACCT_TOKEN
Success! Data written to: kubernetes/config
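Optionally, read the configuration back to confirm it was stored:
# The kubernetes_host in the output should match $KUBE_API_URL
vault read kubernetes/config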
root@vaults0:/home/vagrant/boundarySetup# vault write kubernetes/roles/auto-managed-sa-and-role allowed_kubernetes_namespaces="*" kubernetes_role_type="ClusterRole" kubernetes_role_name="k8s-full-secrets-abilities-with-labels"
Success! Data written to: kubernetes/roles/auto-managed-sa-and-role
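Before wiring this into Boundary, you can optionally verify that Vault is able to mint a cluster-scoped token directly. The parameters below mirror the request body used later in this guide; the ttl value is only illustrative:
# Vault returns a short-lived service_account_token bound to the ClusterRole
vault write kubernetes/creds/auto-managed-sa-and-role kubernetes_namespace=default cluster_role_binding=true ttl=20m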
Note that the ClusterRole created above is different from the Role mentioned in the blog; please go through the HashiCorp Vault Kubernetes secrets engine documentation for more details. For the already created ClusterRole in Kubernetes (k8s-full-secrets-abilities-with-labels), a service account token, service account, and role binding objects are created when credentials are generated.
Now, update the HTTP Method POST Request Body parameter of the kubernetes-creds-lib credential library in Boundary, as seen in the blog, with the details below:
{
  "kubernetes_namespace": "default",
  "cluster_role_binding": "true"
}
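One way to apply this change from the CLI is sketched below; it assumes the vault-generic credential library ID (clvlt_1234567890 is a placeholder) and that the JSON above is saved as http-request-body.json:
# Update the existing credential library with the new request body
boundary credential-libraries update vault-generic -id clvlt_1234567890 -http-method POST -http-request-body file://http-request-body.json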
Boundary will then broker a dynamic service account token, created by HashiCorp Vault at the cluster level, which has full access to the Kubernetes cluster. Please go through this article for the steps to access the Kubernetes cluster using Boundary.
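As a final, rough check (the KUBE_TOKEN name is illustrative and assumes the Boundary-brokered token has been exported into the environment), cluster-scoped resources should now be readable with the dynamic token:
# Both commands target non-namespaced resources, which is the goal of this guide
kubectl --server="$KUBE_API_URL" --certificate-authority=ca.crt --token="$KUBE_TOKEN" get nodes
kubectl --server="$KUBE_API_URL" --certificate-authority=ca.crt --token="$KUBE_TOKEN" get namespaces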