Getting Started with Vault Enterprise: AppRole Authentication Backend
Introduction
HashiCorp Vault can be used to secure application secrets in a variety of fashions. While there are many common workflows that leverage Vault as a source of security for your secrets, this guide focuses on deploying a Vault cluster to serve as a secret store for applications, using the AppRole authentication backend to handle the authentication needs of a variety of applications. AppRoles are effective for these workflows because they can be scoped to extremely broad or extremely fine grains, and everything in between, making them an effective tool for enterprises with a large number of applications.
This guide outlines the process of deploying and configuring a Vault Enterprise cluster and a Consul Enterprise cluster configured as a secret storage backend, followed by the process of configuring a sample AppRole. By following the instructions in this guide, you will configure a Vault Enterprise cluster that reflects HashiCorp’s deployment best practices, and a sample AppRole that will help you feel comfortable developing and deploying other AppRoles.
The examples in this guide walk through the process of deploying Vault and Consul clusters manually. These processes are easily automated and/or scripted, and code snippets that cover these workflows are linked in these documents. If the examples do not match your preferred tooling, it is relatively straightforward to adapt the scripting to your preferred tooling, or to write your own.
Prerequisites
To deploy a Vault cluster backed with Consul as your secret storage, there are a number of prerequisites that you need to have prepared.
- Three identically-configured servers to serve as Vault Servers and Consul Client Agents.
- Three or five identically-configured servers to serve as Consul Server Agents.
- If using three servers, we recommend configuring one non-voting server for each voting server to serve as a hot spare in the event of node failure.
- If using AWS, you must properly configure a VPC. If using another cloud provider’s service, configure the corresponding applicable concept.
- You will probably also want to configure a network gateway so that your instances can download binaries, unless you plan to SCP binaries around your cluster, or your topology involves bastion hosts or similar.
- Each instance should have the most recent Consul Enterprise binary archive ready to be unzipped.
- Each Vault Server should also have the most recent Vault Enterprise binary archive ready to be unzipped.
- All Consul Server instances must have ports 8300 (TCP), 8301 (TCP and UDP), 8500 (TCP) and 8600 (TCP and UDP) open.
- If the Consul Servers will gossip over Serf WAN across data centers, port 8302 must also be opened to TCP and UDP traffic.
- All Vault Server instances (with colocated agents) must have ports 8200 (TCP), 8201 (TCP), 8301 (TCP and UDP), 8500 (TCP) and 8600 (TCP and UDP) open.
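The port matrix above is easy to get subtly wrong when configured by hand, so it helps to keep it in script form and apply it identically to every server. A minimal sketch, assuming firewalld as the host firewall; the helper name and the WAN_GOSSIP toggle are ours, not part of Consul:

```shell
#!/usr/bin/env bash
# Emit the Consul server port list from the prerequisites above, one per line.
# WAN_GOSSIP is a hypothetical toggle for the cross-datacenter Serf WAN case.
consul_server_ports() {
  printf '%s\n' 8300/tcp 8301/tcp 8301/udp 8500/tcp 8600/tcp 8600/udp
  if [ "${WAN_GOSSIP:-false}" = "true" ]; then
    printf '%s\n' 8302/tcp 8302/udp
  fi
}

# Apply with firewalld (adapt for security groups, iptables, etc.):
#   consul_server_ports | xargs -I{} sudo firewall-cmd --permanent --add-port={}
#   sudo firewall-cmd --reload
```

The same list minus 8300 (and with 8200/8201 added) covers the Vault servers with colocated client agents.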
Deploying Consul
Servers
- Verify that your Consul binaries are deployed to an easily accessible location in each of your instances.
- On your servers, move your Consul binaries to /usr/local/bin.
- Configure your systemd unit in /etc/systemd/system/consul.service on each Consul server. Since most of the configuration will be loaded from the config directory, you can use the same unit file to bootstrap all servers, shown below:
$ sudo vim /etc/systemd/system/consul.service
[Unit]
Description=Consul
Documentation=https://www.consul.io/
Requires=network-online.target
After=network-online.target
[Service]
Restart=on-failure
ExecStart=/usr/local/bin/consul agent -data-dir=/opt/consul/data -config-dir=/etc/consul.d/
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGTERM
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
- Configure your JSON config files in the config directory referenced above. Since a good amount of the configuration was rolled into the systemd unit, this file is fairly minimal to start.
$ sudo vim /etc/consul.d/server.json
{
"data_dir": "/opt/consul/data",
"ui": true,
"server": true,
"bind_addr": "172.31.20.53",
"node_name": "consulserv-01",
"bootstrap_expect": 3
}
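Only bind_addr and node_name differ between the three servers, so this config is easy to template rather than hand-edit on each host. A sketch, with a helper name of our own invention:

```shell
#!/usr/bin/env bash
# Hypothetical helper (not from the guide): render a per-node Consul server
# config so all three servers can share one deployment script.
render_consul_server_config() {
  local node_name="$1" bind_addr="$2" out="$3"
  cat > "$out" <<EOF
{
  "data_dir": "/opt/consul/data",
  "ui": true,
  "server": true,
  "bind_addr": "${bind_addr}",
  "node_name": "${node_name}",
  "bootstrap_expect": 3
}
EOF
}

# Example: render the config for the first server shown above.
render_consul_server_config consulserv-01 172.31.20.53 /tmp/consul-server.json
```

In practice you would drive this from your inventory (one node_name/bind_addr pair per server) and write to /etc/consul.d/server.json.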
- Start Consul across your cluster, starting with consulserv-01
$ systemctl daemon-reload
$ systemctl start consul.service
$ systemctl enable consul.service
- Check your logs to make sure your Consul server initialized properly
$ sudo journalctl -u consul
- Verify that your cluster has had all members join correctly. In the event of nodes missing from your list, check to make sure your nodes do not have duplicate names.
$ consul members
Node Address Status Type Build Protocol DC Segment
consulserv-01 172.31.20.53:8301 alive server 1.0.0+ent 2 dc1 <all>
consulserv-02 172.31.19.160:8301 alive server 1.0.0+ent 2 dc1 <all>
consulserv-03 172.31.19.62:8301 alive server 1.0.0+ent 2 dc1 <all>
Clients
- Verify that your Consul binaries are deployed to an easily accessible location in each of your instances.
- On your clients, move your Consul binaries to /usr/local/bin.
- Configure your systemd unit in /etc/systemd/system/consul.service. A unit file for your client agents would look like this:
$ sudo vim /etc/systemd/system/consul.service
[Unit]
Description=Consul
Documentation=https://www.consul.io/
Requires=network-online.target
After=network-online.target
[Service]
Restart=on-failure
ExecStart=/usr/local/bin/consul agent -data-dir=/opt/consul/data -config-dir=/etc/consul.d/
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGTERM
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
- Configure your JSON config files in the config directory referenced above. Since client configuration is generally minimal, this file is short.
$ sudo vim /etc/consul.d/client.json
{
"data_dir": "/opt/consul/data",
"ui": true,
"node_name": "consulclient-01",
"retry_join": [
"172.31.20.53:8301",
"172.31.19.62:8301",
"172.31.19.160:8301"
]
}
- Start Consul on your agents
$ systemctl daemon-reload
$ systemctl start consul.service
$ systemctl enable consul.service
- Check your logs to make sure your Consul client initialized properly
$ sudo journalctl -u consul
- Verify that your cluster has had all members join correctly. In the event of nodes missing from your list, check to make sure your nodes do not have duplicate names.
$ consul members
Node Address Status Type Build Protocol DC Segment
consulserv-01 172.31.20.53:8301 alive server 1.0.0+ent 2 dc1 <all>
consulserv-02 172.31.19.160:8301 alive server 1.0.0+ent 2 dc1 <all>
consulserv-03 172.31.19.62:8301 alive server 1.0.0+ent 2 dc1 <all>
consulclient-01 172.31.24.93:8301 alive client 1.0.0+ent 2 dc1 <default>
consulclient-02 172.31.17.194:8301 alive client 1.0.0+ent 2 dc1 <default>
consulclient-03 172.31.31.216:8301 alive client 1.0.0+ent 2 dc1 <default>
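When automating, the membership check above reduces to counting nodes that report "alive" (the Status column). A sketch that parses the `consul members` table; the helper name is ours:

```shell
#!/usr/bin/env bash
# Count cluster members whose Status column reads "alive".
# Skips the header row; reads `consul members` output on stdin.
count_alive() {
  awk 'NR > 1 && $3 == "alive" { n++ } END { print n + 0 }'
}

# Usage against a live cluster (3 servers + 3 clients expected):
#   [ "$(consul members | count_alive)" -eq 6 ] || echo "cluster degraded" >&2
```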
Deploying Vault
- Verify that your Vault binaries are deployed to an easily accessible location in each of your instances.
- On your servers, move your Vault binaries to /usr/local/bin.
- Create the path /etc/vault.d and create a config file in that directory called vault_server.hcl. This file is fairly straightforward: it tells Vault to connect to the Consul agent running locally, and to run the Vault listener on port 8200. Repeat on all three Vault servers. This setup disables TLS; if your configuration requires TLS, configure it here. All subsequent commands assume HTTP communication instead of HTTPS (remove the -address flags if you use HTTPS).
$ sudo vim /etc/vault.d/vault_server.hcl
storage "consul" {
address = "127.0.0.1:8500"
path = "vault"
}
listener "tcp" {
address = "127.0.0.1:8200"
tls_disable = 1
}
ui = true
- Configure your systemd unit to run Vault as follows on all three Vault servers. This unit references the config directory you populated in the previous step.
$ sudo vim /etc/systemd/system/vault.service
[Unit]
Description=Vault
Requires=network-online.target
After=network-online.target
[Service]
Restart=on-failure
ExecStart=/usr/local/bin/vault server -config /etc/vault.d/
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGTERM
[Install]
WantedBy=multi-user.target
- Start Vault on your servers. The Consul clients you configured earlier should already be running on these servers.
$ systemctl daemon-reload
$ systemctl start vault.service
$ systemctl enable vault.service
- Check your logs to make sure your Vault server initialized properly
$ sudo journalctl -u vault
- Verify that your Vault servers are all running and sealed
$ vault status -address http://127.0.0.1:8200
Sealed: true
Key Shares: 5
Key Threshold: 3
Unseal Progress: 0
Unseal Nonce:
Version: 0.8.3+ent
High-Availability Enabled: true
Mode: sealed
- Run vault init on one of your servers to initialize Vault and generate barrier keys. Do not lose these keys: you will need at least the key threshold number of them to unseal.
$ vault init -address=http://127.0.0.1:8200
Unseal Key 1: p4V/SqR2zP7jSu9pKHMTH1a0RmVsUczQ2VK6DzolyU8h
Unseal Key 2: lFu5hKkwTKq64/InktpnQawMnetFeYDS/36Rh6/ytJEZ
Unseal Key 3: 53uqVeF3u+EyaYb+LxS4/gsS0Eef4+nKko4dT0A26Kdu
Unseal Key 4: l+Iyf3ZkLSOtsnNBcZyn5Qd4IuxfeXiu7xqZIAOwO5Ca
Unseal Key 5: M4XSAXKbynuM9Kh0fAQp9If6+RHMSB1TozHDkKNaNl3y
Initial Root Token: e5c8c2f7-aec2-9859-db1d-b94ccdc2955c
Vault initialized with 5 keys and a key threshold of 3. Please
securely distribute the above keys. When the vault is re-sealed,
restarted, or stopped, you must provide at least 3 of these keys
to unseal it again.
Vault does not store the master key. Without at least 3 keys,
your vault will remain permanently sealed.
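For automation you will usually capture this output rather than copy keys by hand. A sketch that parses the plain-text output shown above; the helper names are ours, and in production you should prefer `vault init` with PGP keys so no single operator sees all of the shares:

```shell
#!/usr/bin/env bash
# Parse `vault init` plain-text output (illustrative only -- treat this
# material as highly sensitive and never write it to an unencrypted file).
extract_unseal_keys() {
  # prints one unseal key per line
  grep '^Unseal Key' | awk '{ print $NF }'
}
extract_root_token() {
  awk '/^Initial Root Token:/ { print $NF }'
}

# init_output="$(vault init -address=http://127.0.0.1:8200)"
# printf '%s\n' "$init_output" | extract_unseal_keys
```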
- Unseal your servers. You will need to run the same unseal commands on all three servers.
$ vault unseal -address=http://127.0.0.1:8200 p4V/SqR2zP7jSu9pKHMTH1a0RmVsUczQ2VK6DzolyU8h
Sealed: true
Key Shares: 5
Key Threshold: 3
Unseal Progress: 1
Unseal Nonce: e686b815-531c-2284-4cd6-215709515716
$ vault unseal -address=http://127.0.0.1:8200 lFu5hKkwTKq64/InktpnQawMnetFeYDS/36Rh6/ytJEZ
Sealed: true
Key Shares: 5
Key Threshold: 3
Unseal Progress: 2
Unseal Nonce: e686b815-531c-2284-4cd6-215709515716
$ vault unseal -address=http://127.0.0.1:8200 53uqVeF3u+EyaYb+LxS4/gsS0Eef4+nKko4dT0A26Kdu
Sealed: false
Key Shares: 5
Key Threshold: 3
Unseal Progress: 0
Unseal Nonce:
- Your Vault Cluster is ready for use!
Configuring an AppRole
AppRole Pull
- Export your Vault server’s address:
export VAULT_ADDR='http://127.0.0.1:8200'
- Enable your AppRole backend
vault auth-enable approle
Successfully enabled 'approle' at 'approle'!
- Create a Policy
echo 'path "sys/*" {
capabilities = ["deny"]
}
path "secret/app1" {
capabilities = ["read", "list"]
}
path "secret/app2" {
capabilities = ["read", "list"]
}
path "secret/super-secret" {
capabilities = ["deny"]
}' | vault policy-write approle -
Policy 'approle' written.
- Create an AppRole role with associated configuration details and the above policy
vault write auth/approle/role/app1 \
secret_id_ttl=60m \
token_ttl=60m \
token_max_ttl=60m \
secret_id_num_uses=40 \
policies="approle"
Success! Data written to: auth/approle/role/app1
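The next steps fetch a role_id and a secret_id for this role and then perform a login. In automation those three calls are usually a single script; a sketch using the CLI's -field flag (the function name is ours, and you should verify -field behavior against your Vault version):

```shell
#!/usr/bin/env bash
# Pull-mode AppRole login: fetch role_id and secret_id, log in, print token.
approle_login() {
  local role="$1" role_id secret_id
  role_id="$(vault read -field=role_id "auth/approle/role/${role}/role-id")"
  secret_id="$(vault write -f -field=secret_id "auth/approle/role/${role}/secret-id")"
  vault write -field=token auth/approle/login \
    role_id="$role_id" secret_id="$secret_id"
}

# TOKEN="$(approle_login app1)"
```

Note that running all three calls from one place defeats much of the point of AppRole; normally a trusted orchestrator delivers the role_id and secret_id through separate channels to the node that performs the login.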
- Now we can retrieve a role_id specific to this role. Note that the role_id does not change.
vault read auth/approle/role/app1/role-id
Key Value
--- -----
role_id e1235151-f5fb-9eae-83b5-35cdf65108be
vault read auth/approle/role/app1/role-id
Key Value
--- -----
role_id e1235151-f5fb-9eae-83b5-35cdf65108be
- Now we can retrieve a secret_id against the AppRole. Note that the secret_id changes for each request.
vault write -f auth/approle/role/app1/secret-id
Key Value
--- -----
secret_id ba27ea84-c093-e1fd-1a4b-54db367fe380
secret_id_accessor 56109f4e-1f44-4694-c45c-16633e618b0d
vault write -f auth/approle/role/app1/secret-id
Key Value
--- -----
secret_id 0e9f355c-4f0c-9e5a-8ccd-05d06e8cc281
secret_id_accessor b75259ce-f4f3-7254-8340-dee378c61bd9
vault write -f auth/approle/role/app1/secret-id
Key Value
--- -----
secret_id 41ba1ee7-f4f4-a76e-9a5c-c0a4580eb695
secret_id_accessor f09561d8-2df9-828a-be85-18cf7303d49c
- Now we can perform a login to retrieve a token. Note that a secret_id can be used multiple times in the configuration used here; we could have limited it using the secret_id_num_uses parameter. This process would normally take place within a newly instantiated node or container in order to retrieve an authentication token. That token is then used on the system to authenticate to Vault and retrieve secrets, using tools like consul-template, envconsul, or Vault-aware libraries.
vault write auth/approle/login \
role_id=e1235151-f5fb-9eae-83b5-35cdf65108be \
secret_id=41ba1ee7-f4f4-a76e-9a5c-c0a4580eb695
Key Value
--- -----
token d4f963a9-d167-a0d4-1856-5716c63d6711
token_accessor dd0c6d88-f2cb-0237-165e-12711e0694bd
token_duration 1h0m0s
token_renewable true
token_policies [approle default]
vault write auth/approle/login \
role_id=e1235151-f5fb-9eae-83b5-35cdf65108be \
secret_id=41ba1ee7-f4f4-a76e-9a5c-c0a4580eb695
Key Value
--- -----
token 23e6c258-1b02-e65f-61b2-b8af7628cd21
token_accessor ecf1d66a-648f-e9c6-f17d-ad7d642d175d
token_duration 1h0m0s
token_renewable true
token_policies [approle default]
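The token returned by the login is what the application actually uses. A minimal sketch of reading a path the approle policy permits (the helper name is ours):

```shell
#!/usr/bin/env bash
# Read a secret path using a token obtained from an AppRole login.
read_app_secret() {
  local token="$1" path="$2"
  VAULT_TOKEN="$token" vault read "$path"
}

# read_app_secret d4f963a9-d167-a0d4-1856-5716c63d6711 secret/app1
```

Per the policy above, the same token would be denied on secret/super-secret and sys/ paths.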
AppRole Pull via API
Now let’s do the same operation using the API.
- First enable the AppRole backend at mount 'approle-foo'. Note that this pathing structure allows multiple instances of similar authentication and secret backends. Substitute your root token for the password string below.
curl -X POST \
-H "X-Vault-Token:password" \
-d '{"type":"approle"}' \
http://127.0.0.1:8200/v1/sys/auth/approle-foo
- Next let’s create a policy
curl -X POST \
-H "X-Vault-Token:password" \
http://127.0.0.1:8200/v1/sys/policy/approle-foo \
-d '{"rules":"path \"secret/foo\" {\n capabilities = [\"read\"]\n} \npath \"auth/token/renew\" {\n capabilities = [\"update\"]\n} \npath \"auth/token/lookup-accessor\" {\n capabilities = [\"update\"]\n} \npath \"auth/token/lookup\" {\n capabilities = [\"read\"]\n}"}'
- Validate that the policy was written
vault policies
approle
approle-foo
default
root
- Create an AppRole role with associated configuration details and the above policy
curl -X POST \
-H "X-Vault-Token:password" \
-d '{"policies":"approle-foo","secret_id_num_uses":"3","period":"3600"}' \
http://127.0.0.1:8200/v1/auth/approle/role/approle-foo
- Retrieve a role_id for this AppRole.
curl -X GET \
-H "X-Vault-Token:password" \
http://127.0.0.1:8200/v1/auth/approle/role/approle-foo/role-id
{
"request_id": "9d6cb463-afc3-f5fa-99ef-56330bd60cfa",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": {
"role_id": "19e8b827-7dad-0f3e-f204-b4550842581b"
},
"wrap_info": null,
"warnings": null,
"auth": null
}
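When scripting against the API, jq (used later in this guide) pulls the role_id straight out of that response envelope; the helper name below is ours:

```shell
#!/usr/bin/env bash
# Extract .data.role_id from a role-id API response read on stdin.
extract_role_id() {
  jq -r '.data.role_id'
}

# curl -s -H "X-Vault-Token:$VAULT_TOKEN" \
#   http://127.0.0.1:8200/v1/auth/approle/role/approle-foo/role-id | extract_role_id
```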
- Now generate a secret_id for the AppRole. Note that as before, this generates a new secret_id each time it is requested.
curl -X POST \
-H "X-Vault-Token:password" \
http://127.0.0.1:8200/v1/auth/approle/role/approle-foo/secret-id
{
"request_id": "9964c730-896d-7132-3ee6-79ec00994e33",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": {
"secret_id": "7858d709-c58f-cef1-d4f6-9e9698380a0f",
"secret_id_accessor": "198b1995-25e0-8ec6-30b4-a2c8b1673413"
},
"wrap_info": null,
"warnings": null,
"auth": null
}
curl -X POST \
-H "X-Vault-Token:password" \
http://127.0.0.1:8200/v1/auth/approle/role/approle-foo/secret-id
{
"request_id": "a2d2a8e4-a4c9-3de2-35d3-7a33874dc945",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": {
"secret_id": "f9bdddeb-8283-bd21-95f4-e78e0d52e095",
"secret_id_accessor": "5cf0ad99-b8e7-21f0-1d00-e3c8c4a57c4d"
},
"wrap_info": null,
"warnings": null,
"auth": null
}
- Let’s write some secrets to the path that this AppRole has access to.
curl -X POST \
-H "X-Vault-Token:password" \
-H "Content-Type: application/json" \
http://127.0.0.1:8200/v1/secret/foo \
-d '{"name":"Burns, Charles Montgomery", "ssn": "000-00-0002"}'
- Now let’s perform a login event with a valid role_id and secret_id
curl -X POST \
-H "X-Vault-Token:e5c8c2f7-aec2-9859-db1d-b94ccdc2955c" \
-d '{"role_id":"19e8b827-7dad-0f3e-f204-b4550842581b","secret_id":"f9bdddeb-8283-bd21-95f4-e78e0d52e095"}' \
http://127.0.0.1:8200/v1/auth/approle/login | jq '.auth.client_token'
"3ce8449a-552f-37c7-0cf9-9e9149030715"
- Note that after 3 login events using this secret_id, it can no longer be used (as set by the secret_id_num_uses parameter).
curl -X POST \
-H "X-Vault-Token:password" \
-d '{"role_id":"19e8b827-7dad-0f3e-f204-b4550842581b","secret_id":"f9bdddeb-8283-bd21-95f4-e78e0d52e095"}' \
http://127.0.0.1:8200/v1/auth/approle/login
{
"errors": [
"failed to validate SecretID: invalid secret_id \"f9bdddeb-8283-bd21-95f4-e78e0d52e095\""
]
}
- Now we should be able to use the token retrieved to read our secrets
curl -s -X GET \
-H "X-Vault-Token:3ce8449a-552f-37c7-0cf9-9e9149030715" \
http://127.0.0.1:8200/v1/secret/foo | jq '.data'
{
"name": "Burns, Charles Montgomery",
"ssn": "000-00-0002"
}
- Note that we can also create a secret_id that is response-wrapped.
curl -s -X POST \
-H "X-Vault-Token:password" \
-H "X-Vault-Wrap-TTL:60s" \
http://127.0.0.1:8200/v1/auth/approle/role/approle-foo/secret-id | jq '.wrap_info.token'
"6d417c3e-a0a0-eb77-23aa-08cf9fa295fc"
- In this situation the data must first be unwrapped before the secret_id can be used. This operation needs to happen within the TTL specified in the X-Vault-Wrap-TTL header above.
curl -s -X POST \
-H "X-Vault-Token:6d417c3e-a0a0-eb77-23aa-08cf9fa295fc" \
http://127.0.0.1:8200/v1/sys/wrapping/unwrap | jq
{
"request_id": "d8014807-f7d3-8a6f-bf54-328d4b45d63d",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": {
"secret_id": "8c7f4aa8-fbcf-f684-a1ed-532c119fa57a",
"secret_id_accessor": "bdfcebb4-c2c1-277b-0992-7f8c64fbbd1c"
},
"wrap_info": null,
"warnings": null,
"auth": null
}
- If we try to unwrap that same token after the 60s TTL, we are denied.
curl -s -X POST \
-H "X-Vault-Token:6d417c3e-a0a0-eb77-23aa-08cf9fa295fc" \
http://127.0.0.1:8200/v1/sys/wrapping/unwrap | jq
{
"errors": [
"wrapping token is not valid or does not exist"
]
}
AppRole Push
AppRole Push is a slightly different workflow from the Pull method. In this method, the secret_id is written to Vault rather than being retrieved from Vault (pushed to Vault versus pulled from Vault).
A simple example where this might be desired: suppose you maintain a spreadsheet of servers with hardware information, and a piece of information like a MAC address serves as a unique identifier. You could pre-populate Vault with a secret_id derived from these details (perhaps obfuscated in some fashion, for example by combining multiple pieces and hashing the result in a repeatable calculation). A process on the server side would then re-compute the secret_id from the locally known information (MAC address, etc.) using the same calculation, and use the calculated secret_id to perform the AppRole login for authentication.
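The push computation described above can be sketched as follows. The hashing recipe (MAC plus salt through SHA-256) is purely illustrative and ours, not a HashiCorp recommendation; the custom-secret-id endpoint is the real push-mode API:

```shell
#!/usr/bin/env bash
# Derive a repeatable secret_id from locally known hardware details.
# Both the operator side and the server side run the same calculation.
derive_secret_id() {
  local mac="$1" salt="$2"
  printf '%s:%s' "$mac" "$salt" | sha256sum | awk '{ print $1 }'
}

# Operator side (push the value into Vault):
#   vault write auth/approle/role/app1/custom-secret-id \
#     secret_id="$(derive_secret_id 0a:1b:2c:3d:4e:5f my-salt)"
# Server side: recompute the same value locally, then log in:
#   vault write auth/approle/login role_id="$ROLE_ID" \
#     secret_id="$(derive_secret_id "$(cat /sys/class/net/eth0/address)" my-salt)"
```

Keep in mind that a derivable secret_id is only as secret as its inputs; the salt (or whatever obfuscation you choose) carries most of the security weight here.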