How to add custom tag on Datadog metrics for your running applications [Temporary Solution]

Hello,

One of the issues you might face today while using Datadog is finding the right app within your Datadog dashboards. The container_name or kube_service tag does not contain the name of the application (e.g. “Front-end”) but instead the Qovery internal name (e.g. “app-zf988aa72”), which makes it hard to quickly find an application within the dashboard.

We are working on automatically adding the right labels to the Kubernetes resources (service/environment/project/organization names), plus any custom label you would like to add (like a “team”), but as a temporary solution you can follow these steps:

  1. Open the environment variable section of the application that you want to monitor via Datadog
  2. Define an environment variable for each tag that you want to find in your Datadog dashboard for the selected application. Don’t forget to re-deploy your application to apply the new setup (a way to verify the injection is sketched just after the configuration snippet below).
  3. In your datadog-values.yaml file (or whatever agent configuration file you use), add a custom agent config that maps the container environment variables to tags within Datadog:
agents:
  useConfigMap: true
  customAgentConfig:
    container_env_as_tags:
      <ENVIRONMENT_VARIABLE_NAME>: "<DESIRED_TAG_NAME>"
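
To verify that the variables were actually injected after the re-deploy, you can inspect the running container directly. A minimal sketch, assuming you have kubectl access to the cluster (the pod and namespace names below are placeholders):

# List the pods of your application (namespace name is a placeholder)
kubectl get pods -n <your-app-namespace>

# Print the container's environment and check that your variables are present
kubectl exec -n <your-app-namespace> <your-pod-name> -- env | grep -i qovery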

Example: you have an application “Front-end” managed by the team “MyTeamA”. This application is currently running on the staging environment and you want these three pieces of information added to the metrics collected by Datadog.

  1. Add the following environment variables:
    – qovery_service_name = "Front-end". Scope = Application
    – qovery_environment_name = "Staging". Scope = Environment
    – qovery_team = "MyTeamA". Scope = Application
  2. Redeploy the application
  3. Modify the agent configuration to include the mapping below
agents:
  useConfigMap: true
  customAgentConfig:
    container_env_as_tags:
      qovery_service_name: "service_name"
      qovery_environment_name: "environment_name"
      qovery_team: "team_name"
  4. Apply the configuration change on your agent
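
If you installed the agent through the official Helm chart, applying the change means re-running the Helm upgrade with your updated values file. A minimal sketch, assuming a release named “datadog” in a “datadog-agent” namespace (both names are assumptions, adjust them to your setup):

# Re-apply the agent configuration (release and namespace names are assumptions)
helm upgrade datadog datadog/datadog -n datadog-agent -f datadog-values.yaml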

You should now see within the Datadog metrics that three new tags are available: service_name, environment_name and team_name. They will contain the values of the environment variables defined in step 1.

This solution is based on the Datadog documentation provided at this link.

Note: these tags are associated with containers and NOT with Pods. Some of the dashboards in Datadog do not allow pulling the container tags if you are at the Pod level.

Thank you Alessandro. Super useful as a workaround.

Hey @a_carrano,
Thank you very much for your post. I tried this but for some reason it did not work. I opened a ticket with Datadog and will keep you posted on this thread if there are any findings. It’s probably something I did wrong:

agents:
  useConfigMap: true
  customAgentConfig:
    container_env_as_tags:
      app_service_name: "app_service_name"

From the Pod manifest file (the environment variable is there):

- name: app_service_name
  valueFrom:
    secretKeyRef:
      name: app-z7c0f6559
      key: app_service_name

Hey @a_carrano, thanks for sharing, but it’s not working for me.

I’ve got Datadog installed through helm in my Kubernetes cluster and here is my full agent configuration:

datadog:
  # valid config values: https://github.com/DataDog/helm-charts/blob/main/charts/datadog/values.yaml

  clusterName: qovery-z35b0097b

  site: datadoghq.com

  logLevel: INFO

  logs:
    enabled: true
    containerCollectAll: true
  
  # datadog environment variables: https://docs.datadoghq.com/containers/docker/?tab=standard#environment-variables
  env:
    - name: DD_ENV
      value: staging

  # Set to false if you are not using APM.
  apm:
    portEnabled: true
    port: 8125
  
  dogstatsd:
    originDetection: true
    useHostPort: true
  
  processAgent:
    enabled: true
    processCollection: true
    processDiscovery: true
  
  networkMonitoring:
    enabled: true
  
  serviceMonitoring:
    enabled: true
  
  containerExcludeLogs: "kube_namespace:kube-system kube_namespace:qovery kube_namespace:cert-manager kube_namespace:logging kube_namespace:prometheus kube_namespace:datadog-agent"

agents:
  useConfigMap: true
  customAgentConfig:
    container_env_as_tags:
      qovery_service_name: "service_name"
      qovery_environment_name: "environment_name"
      qovery_team: "team_name"

# You can remove this part if you are not using APM.
# Note that it will be enabled for all your applications.
clusterAgent:
  replicas: 2
  createPodDisruptionBudget: true
  admissionController:
    enabled: true
    mutateUnlabelled: true

Do you see anything incorrectly configured? Thanks in advance.

There are a few things to keep in mind to ensure that everything works as expected:

  1. Qovery Side: Once the environment variables are set, you need to re-deploy the application to inject the new environment variables.
  2. Datadog Side:
    2.a Once the config is changed and deployed, it takes a few minutes for the metrics to arrive with the new tags (sometimes we even had to kill the Datadog agent; see the restart sketch after this list)
    2.b These tags are associated with the containers and NOT with the Pods. Some of the dashboards in Datadog do not allow pulling the container tags if you are at Pods level. Example:
    The orchestration overview by Pod doesn’t seem to let you display the container tags in its table (or filter the data based on them).
    The Pods overview page lets you define a scope and thus choose which container tags to display in your dashboard. For example, you can filter the list of pods and display only the metrics of the pods running your front-end application (via the environment variable called “qovery_service_name”).
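
As mentioned in point 2.a, restarting the agent pods can speed things up if the new tags don’t show up. A minimal sketch, assuming the agent DaemonSet is named “datadog” and runs in a “datadog-agent” namespace (both names are assumptions tied to your Helm release; check with kubectl get ds -A):

# Restart the node agents so they pick up the new config
# (DaemonSet and namespace names are assumptions)
kubectl rollout restart daemonset/datadog -n datadog-agent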

The agent configuration that we used for our test:

datadog:
  clusterName: qovery-dx-cluster

  # datadog.site -- The site of the Datadog intake to send Agent data to
  ## Set to 'datadoghq.eu' to send data to the EU site.
  site: datadoghq.eu

  # Export custom Kubernetes labels as tags
  podLabelsAsTags:
    "qovery.com/*": "%%label%%"

  logs:
    enabled: true
    containerCollectAll: true

  # Set to false if you are not using APM.
  apm:
    enabled: true
  
  containerExcludeLogs: "kube_namespace:kube-system kube_namespace:qovery kube_namespace:cert-manager kube_namespace:nginx-ingress kube_namespace:logging kube_namespace:prometheus"

# You can remove this part if you are not using APM.
# Note that it will be enabled for all your applications.
clusterAgent:
  admissionController:
    enabled: true
    mutateUnlabelled: true

agents:
  useConfigMap: true
  customAgentConfig:
    container_env_as_tags:
      qovery_service_name: "qovery_service_name"

Note: we have a rule "qovery.com/*": "%%label%%" but this works only with Kubernetes labels, which, for now, are not in a readable form.
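
If you want to see which labels that "qovery.com/*" rule would actually pick up, you can dump the labels of one of your application pods. A minimal sketch (the pod and namespace names are placeholders):

# Show the Kubernetes labels attached to an application pod (names are placeholders)
kubectl get pod <your-pod-name> -n <your-app-namespace> -o jsonpath='{.metadata.labels}'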

Next sprint, we will start working on automatically adding labels to the Kubernetes resources based on the names defined within Qovery (at least service name & environment name), instead of relying on this environment variable configuration (which is still helpful if you want to define custom tags within the Datadog metrics).

Can we please get a similar set of documentation for AWS Container Insights? It sorta feels like AWS monitoring is a second-class citizen around here, which surprises me considering most if not all of these clusters are deployed there.