Introduction:
Integrating multi-cluster Consul with Nomad lets you combine both tools to build more scalable and resilient infrastructure. This integration enables Nomad to use Consul for service discovery and health checking across multiple clusters, providing a unified view of your entire environment.
Nomad users on the Enterprise Platform Package can also integrate with multiple Consul clusters using a single Nomad cluster. Previously, there was a one-to-one relationship between a Nomad cluster and a Consul cluster.
Nomad administrators can now define multiple integrations in the consul blocks of the Nomad agent configuration. Nomad job spec authors can then choose which Consul cluster to use in each job.
Prerequisites:
- Consul and Nomad Enterprise Edition: This feature requires the Enterprise editions of Nomad (1.7 and above) and Consul (this article was tested with Consul v1.16.4+ent).
- Valid Nomad License: A valid Nomad Enterprise license that includes the "Multiple Consul Clusters" feature.
- Two Consul Clusters Installed: Ensure you have two Consul clusters installed and properly configured.
- One Nomad Cluster: Ensure that you have a Nomad cluster installed and properly configured.
- Network Connectivity: Ensure that there is network connectivity between the Consul and Nomad servers and clients across clusters.
- Compatibility between Nomad and Consul: Most supported versions of Nomad are compatible with the most recent versions of Consul, with some exceptions. See the compatibility matrix for details: https://developer.hashicorp.com/nomad/docs/integrations/consul#compatibility
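Before proceeding, you can quickly confirm the installed versions on each node with the standard version commands; the output below is illustrative and will vary with your environment.
$ nomad version
Nomad v1.7.2+ent
$ consul version
Consul v1.16.4+ent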
Steps:
Here's a detailed step-by-step guide on how to integrate multi-cluster Consul with Nomad:
1. Configure Consul Cluster:
Configure two Consul clusters separately. Please note that the IP addresses of these Consul clusters must be reachable from the Nomad servers and clients.
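For reference, a minimal single-server Consul configuration might look like the sketch below. The datacenter name, paths, and license location are illustrative assumptions, and production clusters will typically also need ACL, TLS, and gossip encryption settings.
# Minimal Consul server configuration (sketch; values are placeholders)
datacenter       = "dc1"
data_dir         = "/opt/consul/data"
bind_addr        = "<consul server IP>"
client_addr      = "0.0.0.0" # expose the HTTP API so Nomad can reach it
server           = true
bootstrap_expect = 1
license_path     = "/etc/consul.d/license.hclic" # Consul Enterprise license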
2. Configure Nomad Cluster:
You can use the Nomad configuration below for the Nomad cluster setup. Please note that this is the minimal configuration required to run Nomad as a combined server and client node with a multi-cluster Consul configuration.
log_level = "TRACE"
data_dir  = "/opt/nomad/data"
bind_addr = "<paste your IP here>"

server {
  license_path     = "/etc/nomad.d/license.hclic"
  enabled          = true
  bootstrap_expect = 1
}

client {
  enabled = true
  servers = ["<paste your IP here>"]
}

# Default Consul cluster
consul {
  address         = "<first consul IP:consul_port>"
  default_cluster = true
  name            = "default"
}

# Additional, non-default Consul cluster
consul {
  address         = "<second consul IP:consul_port>"
  default_cluster = false
  name            = "alpha"
}
Let’s take a look at using multiple Consul clusters as an example. First, in the Nomad agent configuration, add two named consul blocks, one of which is the default cluster. Designating one of the consul blocks as the default cluster is mandatory: the block acting as the default cluster must set default_cluster = true and name = "default", while the other, non-default consul block must set default_cluster = false, and its name parameter can use any naming convention.
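With this configuration in place, start (or restart) the Nomad agent. A minimal sketch of the command, assuming the configuration files live under /etc/nomad.d/ (the same directory as the license_path above, though the path itself is an assumption):
# Load all configuration files from the directory
$ nomad agent -config=/etc/nomad.d/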
3. Validate the Integration:
You can run the Nomad command nomad node status <node_id> -verbose | grep -i consul to see the integration of Nomad with both Consul clusters. Below is sample output for reference, where Nomad is integrated with both clusters: the consul.alpha.* attributes are fingerprinted from the alpha cluster, while the unprefixed consul.* attributes come from the default cluster.
consul.alpha.connect = true
consul.alpha.datacenter = dc1
consul.alpha.ft.namespaces = true
consul.alpha.grpc = -1
consul.alpha.partition = default
consul.alpha.revision = ac9dd5b8
consul.alpha.server = true
consul.alpha.sku = ent
consul.alpha.version = 1.16.4+ent
consul.connect = true
consul.datacenter = dc1
consul.ft.namespaces = true
consul.grpc = -1
consul.partition = default
consul.revision = ac9dd5b8
consul.server = true
consul.sku = ent
consul.version = 1.16.4+ent
4. Configure Nomad Jobs:
When defining Nomad jobs, users can opt into the default or a non-default Consul cluster using the cluster value in the consul block of the job specification. Here is the sample syntax:
job "example" {
  …
  consul {
    cluster = "alpha"
  }
  …
}
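To make this concrete, here is a minimal sketch of a complete job that registers a redis service in the alpha cluster. The Docker driver, the redis:7 image, the db port label, and the service name are illustrative assumptions, chosen to line up with the sample output in the next step.
job "example" {
  datacenters = ["dc1"]

  # Register this job's services in the non-default "alpha" Consul cluster.
  consul {
    cluster = "alpha"
  }

  group "cache" {
    network {
      port "db" {
        to = 6379
      }
    }

    service {
      name     = "redis"
      port     = "db"
      provider = "consul"
      task     = "redis"
    }

    task "redis" {
      driver = "docker"

      config {
        image = "redis:7" # illustrative image
        ports = ["db"]
      }
    }
  }
}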
5. Validate the Nomad Job:
Once the Nomad job has been deployed, you can validate it using the Nomad CLI command nomad job inspect <job_name>.
"Job": {
  ...
  "TaskGroups": [
    {
      "Affinities": null,
      "Constraints": null,
      "Consul": {
        "Cluster": "alpha",
        "Namespace": "",
        "Partition": ""
      },
      ...
      "Services": [
        {
          "Address": "",
          "AddressMode": "auto",
          "CanaryMeta": null,
          "CanaryTags": null,
          "CheckRestart": null,
          "Checks": null,
          "Cluster": "alpha",
          "Connect": null,
          "EnableTagOverride": false,
          "Identity": null,
          "Meta": null,
          "Name": "redis",
          "OnUpdate": "require_healthy",
          "PortLabel": "db",
          "Provider": "consul",
          "TaggedAddresses": null,
          "Tags": null,
          "TaskName": "redis"
        }
      ]
    }
  ],
  ...
}
The JSON output from the nomad job inspect command shows that this job, and its service named redis, have been registered in the alpha Consul cluster using Consul as the service provider.
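As an additional check, you can confirm the registration from the Consul side by querying the alpha cluster's catalog directly with the Consul CLI; the address below is an assumption that should match the second cluster from the agent configuration, and the output is illustrative.
$ consul catalog services -http-addr=http://<second consul IP:consul_port>
consul
redis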
Conclusion:
By following these steps, you can successfully integrate multi-cluster Consul with Nomad, providing a scalable and resilient infrastructure for your applications.