Introduction
This article covers the use case where AKS workloads need to leverage the Azure auth method instead of the default Kubernetes auth method, because the auth methods are configured on a Vault server hosted externally to AKS, for example on an Azure VM or outside of Azure.
In this setup, an Azure Kubernetes Service (AKS) workload securely accesses secrets from a HashiCorp Vault server hosted externally on a virtual machine (VM). The integration uses Vault's Azure authentication method, allowing AKS workloads to authenticate using managed identities without hardcoding credentials. Vault validates the identity of the workload through Azure Active Directory and issues a Vault token, enabling secure, role-based access to secrets.
Expected Outcome
The expected outcome is that AKS workloads securely authenticate to the external Vault server using Azure managed identities. Vault issues tokens based on defined roles, allowing controlled access to secrets. This ensures secure, credential-free secret retrieval, leveraging Azure identity for authentication and maintaining strong access control across the infrastructure.
Example:
A Kubernetes pod running in AKS uses its assigned managed identity to authenticate with the external Vault server hosted on a VM. The pod sends a request to Vault's /v1/auth/azure/login endpoint with a JWT obtained from Azure's metadata service. Vault verifies the token, confirms the pod's identity matches a configured role (e.g., dev-role), and returns a Vault token. The pod then uses this token to read a secret, for example:
vault kv get secret/database-creds
The secret is returned securely, allowing the pod to connect to a database without ever storing static credentials in the container or code.
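Under the hood, the login step is a plain HTTP call. The following is a minimal sketch of that exchange using curl, assuming VAULT_ADDR points at the external Vault server, the workload can reach the Azure Instance Metadata Service (IMDS), and the role name matches the dev-role example above:

# Obtain a JWT for the managed identity from the Azure Instance Metadata Service (IMDS)
JWT=$(curl -s -H Metadata:true \
  'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' \
  | jq -r '.access_token')

# Exchange the JWT for a Vault token via the Azure auth method
curl -s --request POST \
  --data "{\"role\": \"dev-role\", \"jwt\": \"$JWT\"}" \
  "$VAULT_ADDR/v1/auth/azure/login"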
Prerequisites (if applicable)
- HashiCorp Vault Server (on VM)
- Azure Infrastructure
- AKS Cluster deployed.
- Managed Identity or App Registration assigned to AKS workloads (via Azure Workload Identity or Pod Identity).
- The VM hosting Vault is in the same or peered Virtual Network (if private access is used).
Procedure
Step 1: For the Vault server on the VM.
Run the Vault server with the following sample config file. Also, ensure that System Assigned Identity is enabled on the Azure VM.
ui = true
disable_mlock = true

storage "file" {
  path = "/opt/vault/data"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}

license_path = "/etc/vault.d/vault.hclic"
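As a sketch, enabling the system-assigned identity and starting Vault with this config can be done as follows (the VM name, resource group, and config path are illustrative):

# Enable the system-assigned managed identity on the Vault VM
az vm identity assign --name vault-vm --resource-group vault-rg

# Start the Vault server with the config above
vault server -config=/etc/vault.d/vault.hcl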
Step 2: Create an AKS cluster and ensure to pass the --enable-oidc-issuer and --enable-workload-identity parameters. Use the following values.yaml file to install the Vault Helm chart on the AKS cluster. In this file, externalVaultAddr points to the Azure VM exposed on a public IP and port 8200.
himanshu.sharma@himanshu azure_auth_aks % cat aks_values.yaml
global:
  externalVaultAddr: $VAULT_ADDR
injector:
  enabled: true
  agentImage:
    repository: "hashicorp/vault-enterprise"
    tag: "1.16.1-ent"
Use-Case 1:- By using Managed Identities.
On Vault Server VM:-
First, create a Managed Identity in Azure, and use its respective client_id to configure the Azure auth method.
Ref. Azure - Auth Methods | Vault | HashiCorp Developer
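If the managed identity and the Azure auth method are not already in place, a minimal sketch looks like this (the identity and resource group names are illustrative):

# Create a user-assigned managed identity and capture its client_id
az identity create --name vault-azure-auth-mi --resource-group vault-rg --query clientId -o tsv

# Enable the Azure auth method on the Vault server before configuring it
vault auth enable azure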
vault write auth/azure/config tenant_id=<tenant_id> \
  resource=https://management.azure.com/ \
  client_id=<managed_identity_client_id> \
  identity_token_audience=vault.example/v1/identity/oidc/plugins
Create the respective Azure auth role, bound to the subscription_id and the resource_group. You may also assign some Vault policies to the role to grant access to a secrets engine (for example, to read or update a kv-v2 path).
Note: The bound resource group should be the resource group of the workload identity that is going to hit the Azure auth method for login. In our case, that is the resource group of the AKS node pool.
vault write auth/azure/role/dev-role policies="<policy_name>" \
bound_subscription_ids=<subscription_id> \
bound_resource_groups=<rg_name_of_AKS_nodepool>
If you do not provide the correct bound_resource_groups value, you may see the following error from the auth/azure/login endpoint.
azureuser@vault-vm:~$ vault write auth/azure/login role="dev-role" \
jwt="$(curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' -H Metadata:true | jq -r '.access_token')" \
subscription_id=$(curl -s -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01" | jq -r '.compute | .subscriptionId') \
resource_group_name=$(curl -s -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01" | jq -r '.compute | .resourceGroupName') \
vm_name=$(curl -s -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01" | jq -r '.compute | .name')
Error writing data to auth/azure/login: Error making API request.
URL: PUT http://127.0.0.1:8200/v1/auth/azure/login
Code: 500. Errors:
* resource group not authorized
Also, ensure that the proper permissions (such as a role assignment) are granted to the identity so that it can read the required scope (for example, accessing the vault-vm metadata). Otherwise, you might face the following error.
azureuser@vault-vm:~$ vault write auth/azure/login role="dev-role" \
jwt="$(curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' -H Metadata:true | jq -r '.access_token')" \
subscription_id=$(curl -s -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01" | jq -r '.compute | .subscriptionId') \
resource_group_name=MC_vault-rg_myAKSCluster_eastus \
vm_name=$(curl -s -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01" | jq -r '.compute | .name')
Error writing data to auth/azure/login: Error making API request.
URL: PUT http://127.0.0.1:8200/v1/auth/azure/login
Code: 500. Errors:
* unable to retrieve virtual machine metadata: GET https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/MC_vault-rg_myAKSCluster_eastus/providers/Microsoft.Compute/virtualMachines/vault-vm
--------------------------------------------------------------------------------
RESPONSE 404: 404 Not Found
ERROR CODE: ResourceNotFound
--------------------------------------------------------------------------------
{
"error": {
"code": "ResourceNotFound",
"message": "The Resource 'Microsoft.Compute/virtualMachines/vault-vm' under resource group 'MC_vault-rg_myAKSCluster_eastus' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"
}
}
--------------------------------------------------------------------------------
You can use the Azure CLI to assign the required role to the identity, granting it access on the appropriate scope.
az role assignment create \
--assignee-object-id 9dcc29d5-ae6b-48e9-85bf-2988ead69186 \
--role Reader \
--scope /subscriptions/<subscription_id>/resourceGroups/vault-rg
% az role assignment create \
--assignee-object-id 9dcc29d5-ae6b-48e9-85bf-2988ead69186 \
--role Reader \
--scope /subscriptions/<subscription_id>/resourceGroups/MC_vault-rg_myAKSCluster_eastus
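The --assignee-object-id above is the object (principal) ID of the identity performing the login. As a sketch, if you are testing from the Vault VM with its system-assigned identity as in the example above, you can look it up like this (the VM and resource group names are illustrative):

# Retrieve the principal ID of the VM's system-assigned managed identity
az vm show --name vault-vm --resource-group vault-rg --query identity.principalId -o tsv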
On Managed Identity service in Azure:-
You need to create a “Federated credential” in the Managed Identity to configure an identity from an external OpenID Connect provider to get tokens.
In the federated credential, set the Cluster Issuer URL to the AKS OIDC Issuer URL, and define the Namespace and Service Account name for the serviceAccount in the AKS cluster.
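As a sketch, the federated credential can also be created with the Azure CLI (the identity, credential, cluster, and resource group names are illustrative; the subject must match the namespace and service account used in AKS):

# Look up the AKS OIDC issuer URL
AKS_OIDC_ISSUER=$(az aks show --name myAKSCluster --resource-group vault-rg \
  --query "oidcIssuerProfile.issuerUrl" -o tsv)

# Create the federated credential on the managed identity
az identity federated-credential create \
  --name vault-aks-federation \
  --identity-name vault-azure-auth-mi \
  --resource-group vault-rg \
  --issuer "$AKS_OIDC_ISSUER" \
  --subject "system:serviceaccount:default:workload-identity-sa17874e" \
  --audience api://AzureADTokenExchange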
On AKS cluster:-
- Modify the serviceAccount to pass the clientID for the Managed Identity, using the annotation azure.workload.identity/client-id: <managed_identity_client_id>.
% k get sa workload-identity-sa17874e -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: ddc93440-cebd-4e09-946c-6c90bb33bd11
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{"azure.workload.identity/client-id":"88d12b7c-2413-4354-9f2f-60f862de326c"},"name":"workload-identity-sa17874e","namespace":"default"}}
  creationTimestamp: "2025-04-14T04:42:23Z"
  name: workload-identity-sa17874e
  namespace: default
  resourceVersion: "178411"
  uid: c1c3c2e7-6207-4ff0-b7b8-d0f7c1a1361c
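For reference, a minimal sketch of creating such a service account (using the name, namespace, and annotation shown above) could be:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workload-identity-sa17874e
  namespace: default
  annotations:
    azure.workload.identity/client-id: <managed_identity_client_id>
EOF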
- Run the following Pod object file, which uses different annotations to pass auth_type, role, and auth_path as per the doc. Also, we passed the serviceAccount name to the pod, which has a mapping to the client_id of the Managed Identity.
Ref. Vault Agent Injector annotations | Vault | HashiCorp Developer
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sample-workload-identity
  namespace: ${SERVICE_ACCOUNT_NAMESPACE} # Replace with your namespace
  annotations:
    vault.hashicorp.com/agent-inject: 'true'
    vault.hashicorp.com/role: 'dev-role'
    vault.hashicorp.com/agent-inject-secret-credentials.txt: 'secret/data/devwebapp/config'
    vault.hashicorp.com/auth-path: 'auth/azure/'
    vault.hashicorp.com/auth-type: 'azure'
    vault.hashicorp.com/auth-config-resource: "https://management.azure.com/"
    vault.hashicorp.com/namespace: 'root'
  labels:
    azure.workload.identity/use: "true" # Required. Only pods with this label can use workload identity.
spec:
  serviceAccountName: workload-identity-sa17874e # Replace with your service account name
  containers:
    - name: rabbitmq # Replace with your container name
      image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine # Replace with your Docker image
      ports:
        - containerPort: 5672
          name: rabbitmq-amqp
        - containerPort: 15672
          name: rabbitmq-http
      env:
        - name: RABBITMQ_DEFAULT_USER
          value: "username"
        - name: RABBITMQ_DEFAULT_PASS
          value: "password"
      resources:
        requests:
          cpu: 10m
          memory: 128Mi
        limits:
          cpu: 250m
          memory: 256Mi
EOF
You will see that the application pod is up and running, having logged in via the Azure auth method in the vault-agent-init container, and the respective secrets from the Vault KV have been fetched and placed on the desired path.
% k get po
NAME READY STATUS RESTARTS AGE
sample-workload-identity 2/2 Running 0 23h
vault-agent-injector-fdcbf9d68-l7x6l 1/1 Running 0 46h
% k exec -it sample-workload-identity -- sh
Defaulted container "rabbitmq" out of: rabbitmq, vault-agent, vault-agent-init (init)
/ # cat /vault/secrets/credentials.txt
data: map[password:salsa username:giraffe]
metadata: map[created_time:2025-04-14T05:13:10.951783043Z custom_metadata:<nil> deletion_time: destroyed:false version:1]
/ # exit
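If the init container fails to authenticate, a quick way to inspect the Azure login attempt is to check its logs (the pod name matches the example above):

# Inspect the Vault Agent init container logs for the Azure auth login attempt
kubectl logs sample-workload-identity -c vault-agent-init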
Use-Case 2:- Using App Registration
Register an Enterprise Application in Microsoft Entra ID; you may then generate a client_secret for the app.
Ref. Quickstart: Register an app in Microsoft Entra ID - Microsoft identity platform
Just like with the Managed Identity, we create federated credentials the same way for the Kubernetes workload.
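As a sketch, the app registration, client secret, and federated credential can be created with the Azure CLI (the app name, namespace, and service account are illustrative; replace <aks_oidc_issuer_url> with your cluster's OIDC issuer URL):

# Register the application and generate a client secret
APP_ID=$(az ad app create --display-name vault-azure-auth-app --query appId -o tsv)
az ad app credential reset --id "$APP_ID" --query password -o tsv

# Add a federated credential for the AKS service account on the app registration
az ad app federated-credential create --id "$APP_ID" --parameters '{
  "name": "vault-aks-federation",
  "issuer": "<aks_oidc_issuer_url>",
  "subject": "system:serviceaccount:default:workload-identity-sa17874e",
  "audiences": ["api://AzureADTokenExchange"]
}'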
On Azure VM:-
Once App Registration is done, you may use the client_id and client_secret to configure the Azure auth method.
vault write auth/azure/config tenant_id=<tenant_id> \
resource=https://management.azure.com/ \
client_id=ddc93440-cebd-4e09-946c-6c90bb33bd11 \
client_secret=<client_secret>
Configure the role to be bound to the correct subscription_id and resource_group. Also, assign the required policy to the role.
vault write auth/azure/role/dev-role policies="devwebapp,default" \
bound_subscription_ids=<subscription_id> \
bound_resource_groups=MC_vault-rg_myAKSCluster_eastus
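To confirm the configuration before testing from AKS, you can read back the auth method settings and role as a quick sanity check:

vault read auth/azure/config
vault read auth/azure/role/dev-role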
On AKS cluster:-
- Modify the serviceAccount to pass the clientID for the App Registration.
- Run the same Pod object file as follows.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sample-workload-identity
  namespace: ${SERVICE_ACCOUNT_NAMESPACE} # Replace with your namespace
  annotations:
    vault.hashicorp.com/agent-inject: 'true'
    vault.hashicorp.com/role: 'dev-role'
    vault.hashicorp.com/agent-inject-secret-credentials.txt: 'secret/data/devwebapp/config'
    vault.hashicorp.com/auth-path: 'auth/azure/'
    vault.hashicorp.com/auth-type: 'azure'
    vault.hashicorp.com/auth-config-resource: "https://management.azure.com/"
    vault.hashicorp.com/namespace: 'root'
  labels:
    azure.workload.identity/use: "true" # Required. Only pods with this label can use workload identity.
spec:
  serviceAccountName: workload-identity-sa17874e # Replace with your service account name
  containers:
    - name: rabbitmq # Replace with your container name
      image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine # Replace with your Docker image
      ports:
        - containerPort: 5672
          name: rabbitmq-amqp
        - containerPort: 15672
          name: rabbitmq-http
      env:
        - name: RABBITMQ_DEFAULT_USER
          value: "username"
        - name: RABBITMQ_DEFAULT_PASS
          value: "password"
      resources:
        requests:
          cpu: 10m
          memory: 128Mi
        limits:
          cpu: 250m
          memory: 256Mi
EOF
You will see that the application pod is up and running, having logged in via the Azure auth method in the vault-agent-init container, and the respective secrets from the Vault KV have been fetched and placed on the desired path.
himanshu.sharma@himanshu azure_auth_aks % k exec -it sample-workload-identity -- sh
Defaulted container "rabbitmq" out of: rabbitmq, vault-agent, vault-agent-init (init)
/ # cat /vault/secrets/credentials.txt
data: map[password:salsa username:giraffe]
metadata: map[created_time:2025-04-14T05:13:10.951783043Z custom_metadata:<nil> deletion_time: destroyed:false version:1]
/ # exit
Conclusion
In simple terms, this setup allows your AKS workloads to securely access secrets stored in an external Vault server using Azure identities. By using App Registrations and Managed Identities, you avoid hardcoding usernames or passwords in your applications. Instead, Vault checks the identity of the workload through Azure and then gives it access to the secrets it needs. This makes your system more secure, easier to manage, and ready to scale with your cloud applications.
Additional Information
Azure - Auth Methods | Vault | HashiCorp Developer
Azure - Auth Methods - HTTP API | Vault | HashiCorp Developer
Vault Agent Injector annotations | Vault | HashiCorp Developer
Integrate Kubernetes with an external Vault cluster | Vault | HashiCorp Developer
Vault installation to Azure Kubernetes Service via Helm | Vault | HashiCorp Developer
Use a managed identity in Azure Kubernetes Service (AKS) - Azure Kubernetes Service
Troubleshooting vault-agent init container for azure auth login failure