NGINX Ingress Controller using C++ module

Starting with version 1.12.0, the Ingress-NGINX controller removed support for custom Lua plugins, which were previously used for instrumentation. Traceable now supports injecting a C++ module into the Ingress-NGINX controller to continue supporting Ingress traffic monitoring and protection. This approach ensures compatibility with the latest controller versions and offers a reliable alternative to Lua-based instrumentation.

This document guides you through enabling Traceable’s C++ module injection for Ingress-NGINX using namespace labels or custom match selectors.


What Is This About?

This topic explains how to instrument the Ingress-NGINX controller using Traceable’s native C++ module. The module provides API observability and protection and is injected directly into the Ingress-NGINX controller pods. This solution is designed for environments where Lua-based plugins are no longer supported.


What Problem Does This Solve?

The deprecation of Lua plugin support in recent Ingress-NGINX versions (≥ 1.12.0) breaks compatibility with older Traceable instrumentation methods. This module-based injection approach resolves that by:

  • Ensuring instrumentation continues to work with updated NGINX versions.

  • Supporting fine-grained resource and behavior control through Helm or Terraform values.

  • Providing full observability via the Traceable Platform.


Before You Begin

Ensure the following prerequisites are met before proceeding:

Important: The configuration options listed in this topic are part of the Traceable Platform Agent (TPA) Helm chart and Terraform module. Ensure that TPA is installed using one of these methods (Helm or Terraform). For more information on the available Helm and Terraform values, see the Helm and Terraform values topic.

  • Ingress-NGINX version ≥ 1.12.0 (a command to check the installed version is shown after this list).

  • You have kubectl access with permissions to:

    • Label namespaces.

    • Patch deployments.

    • Restart deployments.

  • Your deployment environment supports init containers and ConfigMap-based configuration.

  • You have either:

    • Helm access to modify values.yaml, or

    • Terraform configuration control for deployment.
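
To confirm which controller version you are running, you can inspect the controller image tag. The deployment name ingress-nginx-controller and namespace ingress-nginx match the examples used throughout this topic; adjust them if your installation uses different names.

kubectl get deployment ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.spec.template.spec.containers[0].image}'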


Configuration Steps

Step 1: Label the Namespace (Optional but Recommended)

Apply a label to the namespace where Ingress-NGINX is deployed. This label signals Traceable’s injection mechanism to automatically inject the C++ module into any matching pods within this namespace.

kubectl label ns ingress-nginx traceableai-inject-nginx-cpp=enabled

What this does:
This command adds the label traceableai-inject-nginx-cpp=enabled to the ingress-nginx namespace. When this label is present, Traceable’s injection logic knows to target this namespace for instrumentation.
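
To verify that the label was applied, list the namespace together with its labels:

kubectl get namespace ingress-nginx --show-labels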

Note

If you prefer to control injection based on pod annotations or labels instead of namespace-wide injection, you can skip this step and configure the matchSelectors parameter (see the Helm and Terraform Configuration Options table below).


Step 2: Patch the Ingress Controller Deployment

Add a pod-level annotation to the Ingress-NGINX controller deployment to explicitly request injection of the Traceable C++ module.

Before proceeding, ensure that the Traceable Platform Agent (TPA) has been installed using one of the supported methods (Helm or Terraform).

kubectl patch deployment.apps/ingress-nginx-controller \
  -p '{"spec": {"template": {"metadata": {"annotations": {"nginx.cpp.traceable.ai/inject": "true"}}}}}' \
  -n ingress-nginx

What this does:
This command adds an annotation to the pod template within the ingress-nginx-controller deployment. The annotation is:

nginx.cpp.traceable.ai/inject: "true"

This instructs the Traceable agent injector to inject the C++ module into the pod during its next restart.

Why patch instead of editing directly?
Patching is a safer and quicker way to update specific parts of a Kubernetes object, especially in automated environments or CI/CD pipelines.
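
To confirm that the annotation is now present on the pod template, you can print the template annotations:

kubectl get deployment ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.spec.template.metadata.annotations}'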


Step 3: Restart the Ingress Controller

Restart the Ingress-NGINX deployment to apply the annotation changes and trigger injection of the Traceable module.

kubectl rollout restart deployment ingress-nginx-controller -n ingress-nginx

What this does:
This command safely restarts all pods in the ingress-nginx-controller deployment by triggering a rolling restart. Kubernetes recreates the pods using the updated pod template, which now includes the injection annotation.

Note

A restart is required because changes to pod templates (like annotations) only apply to new pods. Existing pods must be restarted for changes to take effect.
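
To watch the rollout complete and confirm that an init container was injected, you can run the following commands. The grep pattern assumes the default injected container name contains "traceable"; if you set a custom name through injector.nginxCpp.containerName, search for that name instead.

kubectl rollout status deployment ingress-nginx-controller -n ingress-nginx

# The pattern "traceable" is an assumption about the default injected container name.
kubectl get pods -n ingress-nginx \
  -o jsonpath='{.items[*].spec.initContainers[*].name}' | grep -i traceable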


Helm and Terraform Configuration Options

The following table lists configuration parameters for Helm (values.yaml) and Terraform (snake_case) to customize C++ module behavior:

Helm Key | Terraform Key | Description
injector.nginxCpp.agentVersion | injector.nginx_cpp.agent_version | Version of the agent to be injected.
injector.nginxCpp.imageVersion | injector.nginx_cpp.image_version | Custom image version (if not using the default).
injector.nginxCpp.imageName | injector.nginx_cpp.image_name | Init container image name.
injector.nginxCpp.configMapName | injector.nginx_cpp.config_map_name | Name of the injector ConfigMap (optional if using the default).
injector.nginxCpp.containerName | injector.nginx_cpp.container_name | Name of the container (optional if using the default).
injector.nginxCpp.initContainerResources.limits.cpu | injector.nginx_cpp.init_container_limits_cpu | CPU limit for the init container.
injector.nginxCpp.initContainerResources.limits.memory | injector.nginx_cpp.init_container_limits_memory | Memory limit for the init container.
injector.nginxCpp.initContainerResources.requests.cpu | injector.nginx_cpp.init_container_requests_cpu | CPU request for the init container.
injector.nginxCpp.initContainerResources.requests.memory | injector.nginx_cpp.init_container_requests_memory | Memory request for the init container.
injector.nginxCpp.matchSelectors | injector.nginx_cpp.match_selectors | List of match selectors if not using namespace labels.
injector.nginxCpp.config.serviceName | injector.nginx_cpp.config_service_name | Name of the service for Traceable registration.
injector.nginxCpp.config.configPollPeriodSeconds | injector.nginx_cpp.config_poll_period_seconds | Configuration polling interval in seconds.
injector.nginxCpp.config.blocking | injector.nginx_cpp.config_blocking | Enable or disable request blocking. Allowed values: on, off.
injector.nginxCpp.config.blockingStatusCode | injector.nginx_cpp.config_blocking_status_code | HTTP status code to return on blocked requests.
injector.nginxCpp.config.blockingSkipInternalRequest | injector.nginx_cpp.config_blocking_skip_internal_request | Skip blocking for internal requests. Allowed values: on, off.
injector.nginxCpp.config.sampling | injector.nginx_cpp.config_sampling | Enable or disable traffic sampling. Allowed values: on, off.
injector.nginxCpp.config.logLevel | injector.nginx_cpp.config_log_level | Log level: LOG_LEVEL_TRACE, LOG_LEVEL_DEBUG, LOG_LEVEL_INFO, LOG_LEVEL_WARN, or LOG_LEVEL_ERROR.
injector.nginxCpp.config.metrics | injector.nginx_cpp.config_metrics | Enable or disable metrics collection. Allowed values: on, off.
injector.nginxCpp.config.metricsLog | injector.nginx_cpp.config_metrics_log | Enable or disable metrics logging. Allowed values: on, off.
injector.nginxCpp.config.metricsLogFrequency | injector.nginx_cpp.config_metrics_log_frequency | Frequency of metrics log output.
injector.nginxCpp.config.endpointMetrics | injector.nginx_cpp.config_endpoint_metrics | Enable or disable endpoint-level metrics. Allowed values: on, off.
injector.nginxCpp.config.endpointMetricsLog | injector.nginx_cpp.config_endpoint_metrics_log | Enable or disable endpoint metrics logging. Allowed values: on, off.
injector.nginxCpp.config.endpointMetricsLogFrequency | injector.nginx_cpp.config_endpoint_metrics_log_frequency | Log frequency for endpoint metrics.
injector.nginxCpp.config.endpointMetricsMaxEndpoints | injector.nginx_cpp.config_endpoint_metrics_max_endpoints | Maximum number of endpoints to track for metrics.
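
For Helm installations, these values can be set in values.yaml or passed with --set. The following is a minimal sketch: the release name traceable-agent, chart reference traceableai/traceable-agent, and namespace traceableai are assumptions, so substitute the names from your own TPA installation.

# Release name, chart, and namespace are assumptions; use the ones from your TPA installation.
helm upgrade traceable-agent traceableai/traceable-agent \
  -n traceableai \
  --reuse-values \
  --set injector.nginxCpp.config.blocking=on \
  --set injector.nginxCpp.config.blockingStatusCode=403 \
  --set injector.nginxCpp.config.logLevel=LOG_LEVEL_INFO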


Upgrade

To upgrade the Traceable instrumentation:

Step 1: Upgrade the Traceable Platform Agent (TPA)

Update your TPA installation with the required configuration changes using your preferred method (Helm or Terraform).
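
For a Helm-based installation, the upgrade is typically a helm upgrade of the existing release; the repository, release, chart, and namespace names below are assumptions, so substitute the ones from your installation. Configuration changes can be added with --set flags, as shown in the configuration example above.

# Repository, release, chart, and namespace names are assumptions; use your own.
helm repo update
helm upgrade traceable-agent traceableai/traceable-agent \
  -n traceableai \
  --reuse-values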

Step 2: Clean up Traceable-specific directives

Manually remove the following directives from the Ingress-NGINX ConfigMap:

  • traceableai

  • opentracing

  • opentracing_trace_locations

  • opentracing_propagate_context
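
These directives can be removed by editing the controller ConfigMap in place. The ConfigMap name ingress-nginx-controller is an assumption based on the default ingress-nginx installation; adjust it if your installation uses a different name.

# The ConfigMap name is an assumption based on the default ingress-nginx install.
kubectl edit configmap ingress-nginx-controller -n ingress-nginx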

Step 3: Restart the Ingress-NGINX Deployment

kubectl rollout restart deployment ingress-nginx-controller -n ingress-nginx

This command triggers a rolling restart of all pods in the ingress-nginx-controller deployment and ensures that any changes to the ConfigMap or injected configuration take effect.


Uninstall

To completely remove the Traceable instrumentation from your Ingress-NGINX setup:

Step 1: Remove the namespace label (if configured)

kubectl label ns ingress-nginx traceableai-inject-nginx-cpp-

Step 2: Remove the deployment annotation (if configured)

kubectl patch deployment.apps/ingress-nginx-controller \
  -p '{"spec": {"template": {"metadata": {"annotations": {"nginx.cpp.traceable.ai/inject": null}}}}}' \
  -n ingress-nginx

Step 3: Remove Traceable directives from the Ingress-NGINX ConfigMap

Manually delete the following directives:

  • traceableai

  • opentracing

  • opentracing_trace_locations

  • opentracing_propagate_context

Step 4: Uninstall the Traceable Platform Agent (TPA)

Uninstall the TPA from your cluster using the same method you used for installation (Helm or Terraform).
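
If TPA was installed with Helm, removal is typically a helm uninstall of the release; the release name and namespace below are assumptions, so substitute the ones from your installation.

# Release name and namespace are assumptions; use the ones from your installation.
helm uninstall traceable-agent -n traceableai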