The information contained in this article has been verified as up-to-date on the date of the original publication of the article. HashiCorp endeavors to keep this information up-to-date and correct, but it makes no representations or warranties of any kind, express or implied, about the ongoing completeness, accuracy, reliability, or suitability of the information provided.
All information contained in this article is for general information purposes only. Any reliance you place on such information as it applies to your use of your HashiCorp product is therefore strictly at your own risk.
Introduction:
Consul provides a prepared queries feature that enables users to register a complex service query and execute it on demand. Prepared queries offer a rich set of lookup features, such as filtering by multiple tags and automatically failing over to remote data centers if no healthy service instances are available in the local data center.
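For reference, a minimal prepared query with data-center failover can be registered through the HTTP API roughly as follows (this is an illustrative sketch, not part of this lab; the query name and failover list are assumptions):
backend-query.json
{
  "Name": "backend",
  "Service": {
    "Service": "backend",
    "Failover": {
      "Datacenters": ["dc2"]
    }
  }
}
$ curl --request POST --data @backend-query.json http://127.0.0.1:8500/v1/query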
In this guide, we will explore whether similar functionality can be achieved with a load balancer to simulate the same results.
This document walks through a lab setup (built using the reference docs below) to test a load balancer that provides fault tolerance and load balancing across DCs.
https://developer.hashicorp.com/consul/tutorials/archive/load-balancing-nginx-plus
https://developer.hashicorp.com/consul/tutorials/network-automation/consul-template-load-balancing
Use-Cases:
There could be a scenario where a user wants to achieve the same functionality as a prepared query by using a load balancer. However, there is no major advantage to using an LB for this purpose, since it adds complexity and intricacies to the LB workflow.
One such requirement could be a hard restriction that this real-time redirection of traffic must happen at the load-balancer level (for example, when an Ingress controller/Ingress on Kubernetes is used to access the service/application), so that the LB layer checks the liveness of each DC and renders its configuration (via consul-template) to point at the respective clients' service instances.
Lab Setup:
Below are the server and client node configurations used to create the multi-DC setup.
Server's config
ubuntu@server1:~$ cat /etc/consul.d/server1.hcl
node_name = "server-1"
bind_addr = "192.168.64.45"
client_addr = "192.168.64.45"
data_dir = "/opt/consul/data/"
retry_join = ["192.168.64.46"]
server = true
bootstrap_expect = 1
license_path = "/etc/consul.d/license.hclic"
ui_config {
enabled = true
}
ports {
grpc = 8502
}
connect {
enabled = true
}
Client’s config
"server" = false
"datacenter" = "dc1"
"data_dir" = "/opt/consul/data/"
"log_level" = "DEBUG"
"enable_script_checks" = true
"enable_syslog" = true
"leave_on_terminate" = true
bind_addr = "192.168.64.49"
client_addr = "192.168.64.49"
ui_config {
enabled = true
}
node_name = "client-1"
retry_join = ["192.168.64.45","192.168.64.46"]
license_path = "/etc/consul.d/license.hclic"
ports {
grpc = 8502
}
connect {
enabled = true
}
- In data center dc1, create two servers and one client.
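Each node in this lab is assumed to be started by pointing the Consul agent at its configuration directory, for example:
$ consul agent -config-dir=/etc/consul.d/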
ubuntu@server1:~$ consul members
Node Address Status Type Build Protocol DC Partition Segment
server-1 192.168.64.45:8301 alive server 1.15.12+ent 2 dc1 default <all>
server-2 192.168.64.46:8301 alive server 1.15.12+ent 2 dc1 default <all>
client-1 192.168.64.49:8301 alive client 1.15.12+ent 2 dc1 default <default>
- In data center dc2, which is federated with dc1, create one more server and one client.
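The dc2 server configuration is not reproduced here; a minimal sketch of the WAN-federation-related settings it is assumed to contain (values are illustrative) would be:
datacenter = "dc2"
primary_datacenter = "dc1"
retry_join_wan = ["192.168.64.45", "192.168.64.46"]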
ubuntu@server3:~$ consul members
Node Address Status Type Build Protocol DC Partition Segment
server-3 192.168.64.44:8301 alive server 1.15.12+ent 2 dc2 default <all>
client-2 192.168.64.50:8301 alive client 1.15.12+ent 2 dc2 default <default>
ubuntu@server3:~$ consul members -wan
Node Address Status Type Build Protocol DC Partition Segment
server-1.dc1 192.168.64.45:8302 alive server 1.15.12+ent 2 dc1 default <all>
server-2.dc1 192.168.64.46:8302 alive server 1.15.12+ent 2 dc1 default <all>
server-3.dc2 192.168.64.44:8302 alive server 1.15.12+ent 2 dc2 default <all>
- Install nginx on both clients and modify /var/www/html/index.html on each node to include a message like the one below (Note: replace <datacenter_name> with the actual data center name). Example setup commands are shown after the HTML snippet.
<!DOCTYPE html>
<html>
<head>
<title>Backend Server </title>
</head>
<body>
<h1>This is Backend <datacenter_name> </h1>
</body>
</html>
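On each client node, a minimal sequence to install nginx and replace the default page could look like the following (assuming an Ubuntu host; adjust the package manager to your OS):
$ sudo apt-get update && sudo apt-get install -y nginx
$ sudo vi /var/www/html/index.html    # paste the HTML above, replacing <datacenter_name>
$ sudo systemctl enable --now nginx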
- Register the sample service below on both clients, client-1 and client-2, across the two DCs, with a check so the agent monitors the health status of the service on the node.
backend.hcl
services {
name = "backend"
port = 80
check = {
id = "nginx"
http = "http://localhost"
interval = "10s"
timeout = "1s"
}
}
To register the service
$ consul services register backend.hcl
Registered service: backend
To check service status across both DCs
ubuntu@server1:~$ curl http://192.168.64.45:8500/v1/catalog/service/backend?dc=dc1 | jq .
[
{
"ID": "85b1375f-fef7-4e5b-9197-934e58c2bbf6",
"Node": "client-1",
"Address": "192.168.64.49",
"Datacenter": "dc1",
"TaggedAddresses": {
"lan": "192.168.64.49",
"lan_ipv4": "192.168.64.49",
"wan": "192.168.64.49",
"wan_ipv4": "192.168.64.49"
},
"NodeMeta": {
"consul-network-segment": ""
},
"ServiceKind": "",
"ServiceID": "backend",
"ServiceName": "backend",
"ServiceTags": [],
"ServiceAddress": "",
"ServiceWeights": {
"Passing": 10,
"Warning": 1
},
"ServiceMeta": {},
"ServicePort": 80,
"ServiceSocketPath": "",
"ServiceEnableTagOverride": false,
"ServiceProxy": {
"Mode": "",
"MeshGateway": {},
"Expose": {}
},
"ServiceConnect": {},
"Partition": "default",
"Namespace": "default",
"CreateIndex": 50,
"ModifyIndex": 142
}
]
ubuntu@server1:~$ curl http://192.168.64.45:8500/v1/catalog/service/backend?dc=dc2 | jq .
[
{
"ID": "aa8da731-b525-e60c-adc5-ca2846a5d396",
"Node": "client-2",
"Address": "192.168.64.50",
"Datacenter": "dc2",
"TaggedAddresses": {
"lan": "192.168.64.50",
"lan_ipv4": "192.168.64.50",
"wan": "192.168.64.50",
"wan_ipv4": "192.168.64.50"
},
"NodeMeta": {
"consul-network-segment": ""
},
"ServiceKind": "",
"ServiceID": "backend",
"ServiceName": "backend",
"ServiceTags": [],
"ServiceAddress": "",
"ServiceWeights": {
"Passing": 1,
"Warning": 1
},
"ServiceMeta": {},
"ServicePort": 80,
"ServiceSocketPath": "",
"ServiceEnableTagOverride": false,
"ServiceProxy": {
"Mode": "",
"MeshGateway": {},
"Expose": {}
},
"ServiceConnect": {},
"Partition": "default",
"Namespace": "default",
"CreateIndex": 36,
"ModifyIndex": 36
}
]
- Create a standalone nginx server to act as the load balancer (you may use any LB of your choice), and run consul-template (pick any recent binary) on it to render the nginx configuration file with the application/service endpoints from both DCs.
consul-template configuration file
consul {
address = "192.168.64.46:8500" # IP address of a Consul server in the primary DC (dc1)
token = "" # If ACLs are enabled, pass a token that can read the services
retry {
enabled = true
attempts = 12
backoff = "250ms"
}
}
template {
source = "/etc/nginx/conf.d/load-balancer.conf.ctmpl" # Source consul-template file to render
destination = "/etc/nginx/conf.d/load-balancer.conf" # Rendered output file, which is used by the nginx LB
perms = 0600
command = "service nginx reload"
}
Content of the consul-template source file to be rendered
Ref. consul-template/docs/templating-language.md at main · hashicorp/consul-template
ubuntu@nginx-lb:/etc/nginx$ cat /etc/nginx/conf.d/load-balancer.conf.ctmpl
upstream backend {
{{- range service "backend@dc1" }}
server {{ .Address }}:{{ .Port }};
{{- end }}
{{- range service "backend@dc2" }}
server {{ .Address }}:{{ .Port }};
{{- end }}
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
}
Once the user runs the consul-template agent on the LB machine with its configuration file, it renders the source file and writes the output to the destination file shown below. From the rendered file it is evident that consul-template queries the primary DC's server for the service in each data center (using the @<datacenter_name> suffix) and produces the <IP_ADDR>:<PORT> values of both clients, client-1 and client-2, running the backend service.
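For example, assuming the configuration above is saved as /etc/consul-template.d/config.hcl (an illustrative path), consul-template can be started with:
$ consul-template -config=/etc/consul-template.d/config.hcl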
Output of rendered file
upstream backend {
server 192.168.64.49:80;
server 192.168.64.50:80;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
}
Finally, if the user hits the nginx LB IP address/DNS name, the request is served by the service instances across both DCs in a round-robin manner.
ubuntu@nginx-lb:/etc/nginx$ curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Backend Server </title>
</head>
<body>
<h1>This is Backend DC1</h1>
</body>
</html>
ubuntu@nginx-lb:/etc/nginx$ curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Backend Server </title>
</head>
<body>
<h1>This is Backend DC2</h1>
</body>
</html>
However, if the user wants to influence how traffic is distributed, they can attach a weights stanza to the service definition itself (a sketch follows the reference below).
Ref. Load Balancing with NGINX Plus' Service Discovery Integration | Consul | HashiCorp Developer
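A minimal sketch of such a service definition with a weights stanza, and a template line adjusted to consume it, is shown below (the weight values are illustrative, and this assumes your consul-template version exposes .Weights on service entries):
services {
  name = "backend"
  port = 80
  weights {
    passing = 10
    warning = 1
  }
  check = {
    id = "nginx"
    http = "http://localhost"
    interval = "10s"
    timeout = "1s"
  }
}
With the corresponding template line:
{{- range service "backend@dc1" }}
server {{ .Address }}:{{ .Port }} weight={{ .Weights.Passing }};
{{- end }}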
- Just like a prepared query, if the service in any DC goes down, the consul-template daemon picks up the state change and re-renders the load-balancer.conf file so that only the healthy nodes of the service are included.
For example, if the user changes the health check on the client-1 node from http to https, the check fails and the service is marked unhealthy; hitting the nginx LB URL then only returns output from the client-2 service, since the client-1 backend service is down.
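One way to simulate this failure (an assumption about how the check was broken, not the only way) is to edit the check on client-1 to point at an HTTPS URL that nginx does not serve, then reload the agent:
check = {
  id = "nginx"
  http = "https://localhost"  # nginx is not serving TLS here, so this check fails
  interval = "10s"
  timeout = "1s"
}
$ consul reload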
ubuntu@nginx-lb:/etc/nginx$ curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Backend Server </title>
</head>
<body>
<h1>This is Backend DC2</h1>
</body>
</html>
ubuntu@nginx-lb:/etc/nginx$ curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Backend Server </title>
</head>
<body>
<h1>This is Backend DC2</h1>
</body>
</html>