The information contained in this article has been verified as up-to-date on the date of the original publication of the article. HashiCorp endeavors to keep this information up-to-date and correct, but it makes no representations or warranties of any kind, express or implied, about the ongoing completeness, accuracy, reliability, or suitability of the information provided.
All information contained in this article is for general information purposes only. Any reliance you place on such information as it applies to your use of your HashiCorp product is therefore strictly at your own risk.
Introduction
Dynatrace is a metrics, observability, and analytics platform used for monitoring infrastructure and application health in development platforms. The technology offers a wide array of features for cloud and on-premises environments, including Kubernetes and OpenShift.
Purpose of Article
This article focuses solely on integrating Dynatrace Kubernetes application stack-injection with Consul's service mesh when transparent proxy is enabled.
What this article does not cover:
- Configuring Dynatrace for Consul specific metrics collection.
- Configuring Dynatrace for Envoy (Dataplane) mesh specific metrics collection.
Background on Dynatrace and Kubernetes
This section outlines the components of interest as they pertain more closely to the Consul service mesh internal networking operability.
Note: Skip to Consul Kubernetes Mesh Integration to see Consul configuration specifics.
Dynatrace Software-as-a-Service (SaaS) Connectivity
Image Ownership belongs to Dynatrace (ref: Dynatrace Saas connectivity scheme)
The image above demonstrates the connectivity relationship between several Dynatrace Components for operation.
| Component | Description | Port(s) | Service Relationship |
| --- | --- | --- | --- |
| OneAgent | Manages automation, configuration, and OneAgent code injection into pods. | Egress: 443/tcp, 9999/tcp | Outbound traffic to: ActiveGate (9999/tcp) or Dynatrace Clusters (443/tcp) |
| ActiveGate | Secure proxy that routes traffic between OneAgents and Dynatrace Clusters, or between OneAgents and other ActiveGates. | Ingress: 9999/tcp. Egress: 443/tcp | Inbound traffic from: OneAgents. Outbound traffic to: Dynatrace Clusters |
| Dynatrace Clusters | Upstream cluster managing metrics ingestion. | Ingress: 443/tcp | Inbound traffic from: OneAgents and ActiveGates |
Dynatrace OneAgent Full-stack Injection on Kubernetes
Image Ownership belongs to Dynatrace (ref: Dynatrace Full-stack Injection)
Dynatrace's Full-stack injection deployment pattern requires several networking considerations for proper metrics emission and collection.
Application Pod, Dynatrace OneAgent, and ActiveGate Pods
- Dynatrace OneAgent: The diagram may appear to show inbound connections from the OneAgent pod to the Kubernetes-hosted application pod; however, this is not the case. The relationship works as follows:
- OneAgent is deployed as a DaemonSet that collects host metrics from Kube Nodes and detects new container deployments to the node.
- Its core functionality works to inject OneAgent code modules into application pods via publicly available modules and SDKs. This is made possible by accessing the Pod's underlying host:
- Network Namespace
- PID Namespace
- Host Root Filesystem
- See Dynatrace documentation for more information surrounding code module injection.
- Dynatrace ActiveGate: ActiveGate pods (deployed as a StatefulSet) establish a Dynatrace single point of access on the local network for metric aggregation and submission. Network traffic is outbound from the application pod and inbound to the ActiveGate pod on port 9999.
Dynatrace ActiveGate Connection Schemes
When integrating Kubernetes or OpenShift cluster application metrics monitoring using Dynatrace Full-stack injection, the following environment configurations need to be known and understood:
- Dynatrace ActiveGate Model: Which connectivity schema is being used?
- Environment ActiveGate
- Cluster Managed ActiveGate
- Dynatrace Managed (SaaS) ActiveGate
| Model | Description |
| --- | --- |
| Environment | Kubernetes (OpenShift) local ActiveGate pod deployed to aggregate OneAgent-collected metrics and submit them to the user's Dynatrace Cluster. Uses port 9999 for local OneAgent-to-ActiveGate communication. |
| Cluster Managed | Used if OneAgent pods and/or Environment ActiveGate pods have no direct connectivity to the external Dynatrace SaaS endpoint over port 443. This can be a virtual machine or container deployment external to the Kubernetes (OpenShift) cluster. |
| Dynatrace Managed (embedded) | OneAgent metrics are collected and emitted directly to the user's Dynatrace Cluster over port 443. ActiveGate functionality is embedded alongside the SaaS Dynatrace cluster. |
Dynatrace Connectivity Order of Operations
Network connection attempts occur in the following order when ActiveGate emits metrics to Dynatrace. This ordering matters when multiple ActiveGates are enabled for the same Dynatrace environment.
- Environment ActiveGates
- Cluster ActiveGates
- Embedded Cluster ActiveGates
Consul Kubernetes Mesh Integration
When Consul service mesh is deployed with consul-k8s-control-plane and consul-dataplane with transparent proxy enabled, there are two main methods for integrating with Dynatrace: routing ActiveGate traffic through a Terminating Gateway, or excluding that traffic from transparent proxy redirection. Both are covered below.
Transparent Proxying
Application Stack-injection metrics submission to ActiveGate blocked by pod iptables traffic redirection rules
Consul service mesh leverages iptables traffic redirection rules to enforce Envoy sidecar network connectivity between services (forced mTLS authentication). Traffic from the Dynatrace-injected application to the Dynatrace ActiveGate is blocked for two reasons:
- Dynatrace application components are not part of the Consul service mesh, so there is no way for Consul to know about Dynatrace because it is not a registered service.
- iptables, by design, blocks all non-mesh traffic for applications on the mesh (i.e., those Connect-injected via the consul.hashicorp.com/connect-inject: true annotation).
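For context, a workload is subject to these iptables rules when its pod template carries the Connect injection annotations. The following is a minimal sketch of the relevant pod-template metadata, reusing the frontend application name from the examples later in this article (labels and other fields omitted):

```yaml
# Illustrative pod-template metadata only; the frontend name matches the
# stack-injected example application used throughout this article.
metadata:
  name: frontend
  annotations:
    'consul.hashicorp.com/connect-inject': 'true'
    'consul.hashicorp.com/transparent-proxy': 'true'
```

With these annotations present, all outbound traffic from the pod is redirected through the Envoy sidecar, which is what blocks direct communication with ActiveGate.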
Transparent Proxy Destinations via Terminating Gateway
One method of integrating with Dynatrace's components while using transparent proxying would be to implement Consul's Terminating Gateway to the mesh and include ActiveGate as externally accessible destination service.
Benefits:
- Security: Kubernetes application outbound traffic is never compromised, as all communication enforces mTLS via the Consul service mesh.
- Flexibility: Once Terminating Gateway external service communication is established, users can quickly add or remove ActiveGate services from cluster accessibility in the future.
Drawbacks:
- Initial Configuration Complexity: If users haven't implemented a Terminating Gateway in their environment yet, additional planning time may be needed to become familiar with it.
Terminating Gateway Example Configuration
A thorough breakdown of deploying a Terminating Gateway is beyond the scope of this article. Instead, the example configurations below outline the main idea, assuming a Terminating Gateway has already been deployed.
Example Assumptions:
- Stack-injected application name: frontend
- ActiveGate deployed as either a Service or StatefulSet using static clusterIP: 10.43.0.66
Note: Although this article only discusses ActiveGate as a Kubernetes Service/StatefulSet, ActiveGate can also be deployed as an external VM-hosted service.
Terminating Gateway Mesh Destination Configuration Files
ServiceDefaults (Destinations Configuration)
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
name: dynatrace-activegate
namespace: dynatrace
spec:
protocol: tcp
destination:
addresses:
- "10.43.0.66"
port: 9999
TerminatingGateway (Hosting External Connection to ActiveGate)
apiVersion: consul.hashicorp.com/v1alpha1
kind: TerminatingGateway
metadata:
name: dynatrace-terminating-gateway
spec:
services:
- name: dynatrace-activegate
namespace: dynatrace
ServiceIntentions (Establishing Allow Connection from frontend application to ActiveGate)
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
name: dynatrace-activegate
spec:
destination:
name: dynatrace-activegate
namespace: dynatrace
sources:
- action: allow
name: frontend
namespace: consul
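Once saved to files, the three config entries above can be applied with kubectl. The filenames below are illustrative, and this sketch assumes the Consul CRDs are installed in the cluster:

```shell
# Apply the destination, gateway, and intentions config entries
# (filenames are assumptions; use whatever names you saved them under).
kubectl apply -f service-defaults-activegate.yaml
kubectl apply -f terminating-gateway.yaml
kubectl apply -f service-intentions.yaml

# Confirm Consul accepted each config entry (the SYNCED column should read True).
kubectl get servicedefaults,terminatinggateway,serviceintentions -A
```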
Terminating Gateway Envoy Dynamic Endpoint (/config_dump?include_eds endpoint)
Note: The following verifications leverage the Terminating Gateway container and the application pod's container to reach the Envoy Admin API (port 19000).
Once the Terminating Gateway and applicable external mesh destinations are configured for ActiveGate, you can verify that the Terminating Gateway's Envoy configuration has dynamically populated a valid endpoint by parsing the Envoy admin config_dump.
Verify TGW ActiveGate Cluster Creation
# TGW Cluster for ActiveGate
$ curl -s 0:19000/clusters\?format=json | jq '.cluster_statuses[] | .name' | grep dynatrace-activegate
"destination.10-43-0-66.dynatrace-activegate.dynatrace.dc1.internal.9d891a97-6f0c-61bb-40c4-eb8fffa28743.consul"
Verify TGW ActiveGate Endpoint Creation
## TGW Envoy EDS Configuration Dump verifying proper endpoint for ActiveGate
$ curl -s 0:19000/config_dump\?include_eds | jq '.configs[]
  | select(. != null)
  | .dynamic_endpoint_configs
  | select(. != null)'
[
{
"endpoint_config": {
"@type": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment",
"cluster_name": "destination.10-43-0-66.dynatrace-activegate.dynatrace.dc1.internal.9d891a97-6f0c-61bb-40c4-eb8fffa28743.consul",
"endpoints": [
{
"locality": {},
"lb_endpoints": [
{
"endpoint": {
"address": {
"socket_address": {
"address": "10.43.0.66",
"port_value": 9999
}
},
"health_check_config": {}
},
"health_status": "HEALTHY",
"load_balancing_weight": 1
}
]
}
],
"policy": {
"overprovisioning_factor": 140
}
}
}
]
Kubernetes Service Application Sidecar Dynamic Endpoint
The application pod intended to participate in Dynatrace stack injection needs a way to route to ActiveGate via its Dataplane sidecar proxy.
- The traffic flow would be Application -> Application Sidecar -> Terminating Gateway -> ActiveGate.
Obtain TGW Internal Mesh Cluster IP:Port
# TGW Default Listener for internal Mesh Traffic
$ curl -s 0:19000/listeners
default:10.42.0.65:8443::10.42.0.65:8443
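The default listener line follows the format name:address:port::address:port, so the internal mesh address:port that application sidecars will target can be extracted with cut (the line below is copied from the output above):

```shell
# Extract the internal mesh address:port from the Envoy /listeners output
# (line format: name:address:port::address:port).
listener_line="default:10.42.0.65:8443::10.42.0.65:8443"
mesh_addr=$(printf '%s' "$listener_line" | cut -d: -f2-3)
echo "$mesh_addr"   # 10.42.0.65:8443
```

This 10.42.0.65:8443 address is what should appear as the endpoint in the application sidecar's EDS configuration below.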
Verify Application Pod is Aware of TGW Cluster Endpoint for Routing to ActiveGate
# Kubernetes Service Sidecar Application Envoy Endpoint Config
$ curl -s 0:19000/config_dump\?include_eds | jq '.configs[]
  | select(. != null)
  | .dynamic_endpoint_configs
  | select(. != null)'
[
{
"endpoint_config": {
"@type": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment",
"cluster_name": "destination.10-43-0-66.dynatrace-activegate.dynatrace.dc1.internal.9d891a97-6f0c-61bb-40c4-eb8fffa28743.consul",
"endpoints": [
{
"locality": {},
"lb_endpoints": [
{
"endpoint": {
"address": {
"socket_address": {
"address": "10.42.0.65",
"port_value": 8443
}
},
"health_check_config": {}
},
"health_status": "HEALTHY",
"load_balancing_weight": 1
}
]
}
],
"policy": {
"overprovisioning_factor": 140
}
}
},
# cut for brevity...
Transparent Proxy Outbound Traffic Exclusions
Another means to overcome the iptables traffic block is to add one of the following annotation adjustments to the deployed application pod spec, permitting traffic to flow to the ActiveGate service. Users would choose one of the following exclude annotations:
- Outbound Port: 9999
  - Annotation: 'consul.hashicorp.com/transparent-proxy-exclude-outbound-ports': '9999'
- Outbound CIDR: Dynatrace ActiveGate PodIP (requires a reserved static IP for ActiveGate)
  - Annotation: 'consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs': '10.43.226.92/32'
Benefits:
- Simple: Easy to configure if deploying a Terminating Gateway is undesired.
Drawbacks:
- Less Secure (Port Exclusion): Any source IP with a destination port of 9999 can bypass the service mesh from the annotated pod.
- Security: This is an undesirable configuration when using a Cluster ActiveGate that resides outside of the Kubernetes or OpenShift cluster and the Consul service mesh. A Terminating Gateway is highly recommended in this case.
- Potential IP Conflict (IP Exclusion): Requires setting a static clusterIP in the ActiveGate Service or StatefulSet deployment if deploying ActiveGate as a pod. If other Services are created before or in parallel with dynamic allocation, they may claim this IP, in which case creating the ActiveGate Service will fail with a conflict error.
Dynatrace Stack Injection (Environment/Cluster ActiveGate) and Transparent Proxy Exclusion
The following annotations exclude the Dynatrace ActiveGate IP (or port 9999) from transparent proxy redirection, permitting outbound Dynatrace traffic to reach ActiveGate:
'consul.hashicorp.com/transparent-proxy': 'true'
'consul.hashicorp.com/transparent-proxy-exclude-outbound-ports': '9999'
# or
'consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs': '10.43.226.92/32'
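In context, a Deployment pod template using the port-exclusion variant might look like the following sketch (the Deployment name is illustrative, and only one exclusion annotation should be chosen):

```yaml
# Illustrative Deployment fragment; container spec omitted for brevity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  template:
    metadata:
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/transparent-proxy': 'true'
        # Port exclusion; alternatively use transparent-proxy-exclude-outbound-cidrs.
        'consul.hashicorp.com/transparent-proxy-exclude-outbound-ports': '9999'
```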