Sidecar proxy

This topic explains how to deploy Traceable's sidecar proxy in a Kubernetes environment.


The Traceable sidecar proxy intercepts the requests and responses going to the service running in your pod. It aggregates all ingress and egress traffic and sends it to Traceable’s Platform agent, which in turn sends it to Traceable’s SaaS Platform.


Before you begin

Make sure that you have installed the Traceable Platform agent using one of the methods described in the Kubernetes topic.


Download and label the namespace

There are no separate steps for downloading Traceable’s sidecar proxy. It is downloaded as part of the Traceable Platform agent installation mentioned in the Before you begin section. To enable the sidecar proxy, apply the label described in the next section.

Label the namespace

To install or inject the sidecar proxy into your infrastructure, label the namespace you wish to protect. For a pod to be considered for proxy injection, its namespace must have the label traceableai-inject-proxy=enabled. This is a required configuration. Enter the following command:

kubectl label namespace $NAMESPACE traceableai-inject-proxy=enabled

Alternatively, you can add the following to the namespace manifest:

apiVersion: v1
kind: Namespace
metadata:
    labels:
        traceableai-inject-proxy: enabled

Restart the pods

For the Traceable sidecar proxy to take effect, restart all the pods in the namespaces that you labeled in the previous step. Restarting the pods injects the Traceable sidecar proxy into them.

Enter the following command:

kubectl rollout restart deployment -n $NAMESPACE


Annotations and labels

You can optionally configure annotations and labels for the Traceable sidecar proxy. The following sections list the available annotations and labels at the namespace and pod level.

Namespace

The following tables describe the annotations and labels for the namespace.

Label

traceableai-inject-proxy
Set the value of this label to enabled for the namespace where you want to inject the proxy.

Annotation

proxy.traceable.ai/defaultInject
This annotation defines the default injection behavior for pods in a namespace, that is, whether injection is enabled or disabled by default.

  • Value set to true - By default, injection is enabled. If you do not want injection for a particular pod, set proxy.traceable.ai/inject: false on that pod.
  • Value set to false - By default, injection is disabled. If you want injection for a particular pod, set proxy.traceable.ai/inject: true on that pod.

When this annotation is not set, the default behavior is the same as when the value is set to true.
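For example, a minimal namespace manifest sketch that keeps the injection label on but disables injection by default might look like the following. The namespace name traceshop is taken from the troubleshooting examples later in this topic; individual pods would then opt in with proxy.traceable.ai/inject: "true" in their pod template.

apiVersion: v1
kind: Namespace
metadata:
  name: traceshop
  labels:
    traceableai-inject-proxy: enabled
  annotations:
    # Disable injection by default; pods opt in individually.
    proxy.traceable.ai/defaultInject: "false"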

Pod

The following sections describe annotations and labels for the pod.

Labels

There are no pod-level labels for the proxy.

Annotations

proxy.traceable.ai/inboundPortmaps
Defines the port mapping between the service and Envoy. For example, 8080:8081, 9090:9091.

proxy.traceable.ai/inject
Set the value to true to enable injection and false to disable it. The default behavior, when this annotation is not specified, depends on the namespace. See the proxy.traceable.ai/defaultInject annotation above.

proxy.traceable.ai/blocking
Defines request blocking in the injected proxy. Set the value to true to enable blocking and false to disable it. Blocking is enabled by default.

proxy.traceable.ai/ignoreMatcher
A JSON array formatted string used to configure URLs to ignore. Used for the --ignore-url-regex proxy argument. For example, [{"url_path":"\/login"},{"url_path":"\/logout"}].
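As an illustrative sketch, a Deployment's pod template might carry these annotations as follows. The workload name userreviewservice matches the troubleshooting output later in this topic; the image, ports, and annotation values are placeholders, not required settings.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: userreviewservice
  namespace: traceshop
spec:
  replicas: 1
  selector:
    matchLabels:
      app: userreviewservice
  template:
    metadata:
      labels:
        app: userreviewservice
      annotations:
        # Map service port 8080 to Envoy port 8081 (illustrative ports).
        proxy.traceable.ai/inboundPortmaps: "8080:8081"
        # Explicitly opt this pod in to proxy injection.
        proxy.traceable.ai/inject: "true"
        # Keep request blocking enabled (the default).
        proxy.traceable.ai/blocking: "true"
        # Ignore the /login and /logout URL paths (--ignore-url-regex).
        proxy.traceable.ai/ignoreMatcher: '[{"url_path":"\/login"},{"url_path":"\/logout"}]'
    spec:
      containers:
        - name: userreviewservice
          image: userreviewservice:latest   # illustrative image
          ports:
            - containerPort: 8080

Note that Kubernetes annotation values must be strings, so quote booleans and port maps as shown.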

Verification

To verify that the Traceable sidecar proxy has been successfully injected into your pod, check that a new container named traceable-proxy is present in the desired pods. You can also log in to Traceable’s platform and navigate to Administration > Configuration > Data Collection, click the Tracing Agents button, and check for the proxy agent type.
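A quick command-line check is to list the pod's container names and look for traceable-proxy, for example:

kubectl get pod <pod-name> -n <namespace-name> -o jsonpath='{.spec.containers[*].name}'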


Troubleshooting

Following are a few troubleshooting steps that you can carry out to diagnose sidecar proxy issues yourself. If you need further assistance, reach out to Traceable support.

Check if proxy is running

Check whether your pod has one more container (n+1) than it usually has by running kubectl get pods -n <namespace-name>. For example, in the output below, the READY column shows 2/2:

system@traceable % kubectl get pods -n traceshop      
userreviewservice-8fcfdc576-jkbzz            2/2     Running   0          3d20h
userservice-5c7d6555bb-z4xln                 2/2     Running   0          3d20h

If you do not see the new container, check that your namespace is labeled with traceableai-inject-proxy=enabled to enable proxy injection. Run the kubectl describe namespace <namespace-name> command:

system@traceable Desktop % kubectl describe namespace traceshop
Name:         traceshop
Labels:       traceableai-inject-proxy=enabled
Annotations:  <none>
Status:       Active

Check if proxy is configured correctly

Check whether the proxy is configured and deployed properly by describing the pod and inspecting the traceable-proxy-init and traceable-proxy containers. Use the describe command:

kubectl describe pod <pod-name> -n <namespace-name> 

Following is a sample output:

...
traceable-proxy-init:
    Container ID:  docker://a3ed21a189ba09332244f18b14522674dfc523604e4c5c6d604657edd17c15f4
    Image:         traceableai/proxy-init:1.0.1
    Image ID:      docker-pullable://traceableai/proxy-init@sha256:53a657680a26c5c82ceb1b21fd9ea0e21bf446a17c6053f71d85b23a8c73a99f
    Port:          <none>
    Host Port:     <none>
    Args:
      -i
      3306 27017 5432 1433 1434 1521 6379 7419 15000 15020
      -o
      9411 14268 55678 4317 3306 27017 5672 2181 5432 1433 1434 1521 6379 7419
      -m
      REDIRECT
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 07 Nov 2021 13:28:50 -0800
      Finished:     Sun, 07 Nov 2021 13:28:50 -0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:        10m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jszrw (ro)
...
traceable-proxy:
    Container ID:  docker://8c4ced4de41fd05723667bdaaff8d3a197910eb2d34237f8f6e63e9844416114
    Image:         traceableai/proxy:1.0.1
    Image ID:      docker-pullable://traceableai/proxy@sha256:794d7d4889ecfff76d3fdff54aa340787821041371597b06f8af8c46ad38665b
    Port:          <none>
    Host Port:     <none>
    Args:
      --service-port
      9396
      --port
      15020
      --envoy-admin-port
      15000
      --envoy-inbound-capture-port
      15006
      --envoy-outbound-capture-port
      15001
      --envoy-path
      /usr/local/bin/envoy
      --envoy-template
      /etc/traceable/envoy.tmpl.yaml
      --inbound-interception-mode
      REDIRECT
      --collector-address
      agent.traceableai:9411
      --trace-context
      trace_context
      --capture-content-type
      json
      --capture-content-type
      grpc
      --capture-content-type
      x-www-form-urlencoded
      --opa-address
      http://agent.traceableai:8181/
      --modsec
      --evaluate-body
      --skip-internal-request
      --region-blocking
      --remote-config
      --remote-config-endpoint
      agent.traceableai:5441
      --remote-config-poll-period
      30
      --log-level
      info
      --termination-grace-period-seconds
      5
      --idle-timeout
      3600s
      --service-cluster
      userservice
      --max-processing-size
      1048576
      --envoy-concurrency
      2
    State:          Running
      Started:      Sun, 07 Nov 2021 13:30:02 -0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1536Mi
    Requests:
      cpu:        75m
      memory:     128Mi
    Liveness:     http-get http://:15020/live delay=3s timeout=1s period=3s #success=1 #failure=3
    Readiness:    http-get http://:15020/ready delay=1s timeout=1s period=5s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jszrw (ro)

Check the logs

To view the proxy logs, run the kubectl logs command:

kubectl logs <pod-name> -n <namespace-name> -c traceable-proxy

Following is sample log output from a proxy that has started and is working without a problem. The log shows info-level entries; if there are any issues, errors appear in the same logs.

system@traceable Desktop % kubectl logs userservice-5c7d6555bb-z4xln  -n traceshop -c traceable-proxy  
{"level":"info","ts":1636320602.7263637,"caller":"proxy/proxy.go:117","msg":"Starting Traceable proxy version 1.0.1"}
{"level":"info","ts":1636320602.7265475,"caller":"proxy/envoymanager.go:44","msg":"Starting envoy"}
{"level":"info","ts":1636320602.741311,"caller":"proxy/proxy.go:143","msg":"Starting server"}
{"level":"info","ts":1636320602.7503529,"caller":"proxy/envoymanager.go:63","msg":"Envoy command: /usr/local/bin/envoy [-c /etc/traceable/envoy.yaml --log-level info --service-cluster userservice --concurrency 2]"}
[2021-11-07 21:30:03.069][12][info][main] [external/envoy/source/server/server.cc:330] initializing epoch 0 (base id=0, hot restart version=11.104)

Turn on debug logging

Envoy offers a way to turn on debug logging for its various loggers using a curl command. To change the log level of a logger, execute the command using kubectl exec on the target pod. The default log level is info.

View loggers and their log levels

Run the following command:

kubectl exec <pod-name> -n <namespace> -c traceable-proxy -- curl -X POST http://localhost:15000/logging

The output of the command shows the list of available loggers and their current log levels:

system@traceable Desktop % kubectl exec userservice-5c7d6555bb-z4xln  -n traceshop -c traceable-proxy -- curl -X POST http://localhost:15000/logging
active loggers:
  admin: info
  aws: info
  assert: info
  backtrace: info
  cache_filter: info
  client: info
  config: info
  connection: info
  conn_handler: info
  decompression: info
  dubbo: info
  envoy_bug: info
  ext_authz: info
  rocketmq: info
  file: info
  filter: info
  forward_proxy: info
  grpc: info
  hc: info
  health_checker: info
  http: info
  http2: info
  hystrix: info
  init: info
  io: info
  jwt: info
  kafka: info
  lua: info
  main: info
  matcher: info
  misc: info
  mongo: info
  quic: info
  quic_stream: info
  pool: info
  rbac: info
  redis: info
  router: info
  runtime: info
  stats: info
  secret: info
  tap: info
  testing: info
  thrift: info
  tracing: info
  upstream: info
  udp: info
  wasm: info


Change the log level of a logger

Run the following command:

kubectl exec <pod-name> -n <namespace> -c traceable-proxy -- curl -X POST http://localhost:15000/logging?http=debug

In the following example, the log level for http logger is updated to debug.

system@traceable Desktop % kubectl exec userservice-5c7d6555bb-z4xln  -n traceshop -c traceable-proxy -- curl -X POST http://localhost:15000/logging?http=debug
active loggers:
  admin: info
  aws: info
  assert: info
  backtrace: info
  cache_filter: info
  client: info
  config: info
  connection: info
  conn_handler: info
  decompression: info
  dubbo: info
  envoy_bug: info
  ext_authz: info
  rocketmq: info
  file: info
  filter: info
  forward_proxy: info
  grpc: info
  hc: info
  health_checker: info
  http: debug
  http2: info
  hystrix: info
  init: info
  io: info
  jwt: info
  kafka: info
  lua: info
  main: info
  matcher: info
  misc: info
  mongo: info
  quic: info
  quic_stream: info
  pool: info
  rbac: info
  redis: info
  router: info
  runtime: info
  stats: info
  secret: info
  tap: info
  testing: info
  thrift: info
  tracing: info
  upstream: info
  udp: info
  wasm: info


If you fetch the traceable-proxy logs again, you now see debug entries for the http logger:

[2021-11-11 19:14:28.232][29][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:261] [C120915] new stream
[2021-11-11 19:14:28.232][29][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:882] [C120915][S10752496571223545411] request headers complete (end_stream=true):
':authority', 'userservice:9396'
':path', '/user/'
':method', 'GET'
'user-agent', 'Go-http-client/1.1'
'traceparent', '00-4efccecd19ad14924c9c122745e4bf4b-6eaa017eb3836b37-01'
'accept-encoding', 'gzip'

You can switch back to info by entering the following command: 

kubectl exec <pod-name> -n <namespace> -c traceable-proxy -- curl -X POST http://localhost:15000/logging?http=info

Change log level for all loggers

Run the following command to change the log level of all the loggers:

kubectl exec <pod-name> -n <namespace> -c traceable-proxy -- curl -X POST http://localhost:15000/logging?level=debug


Upgrade

The Traceable sidecar proxy is upgraded as part of the Traceable Platform agent; there is no separate upgrade path for the sidecar proxy. Make sure that you restart the injected deployments after you upgrade the Traceable Platform agent. For more information on the latest Traceable Platform agent, see Release notes.
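For example, the same restart command used during installation applies after an upgrade, where $NAMESPACE is the labeled namespace:

kubectl rollout restart deployment -n $NAMESPACE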

Uninstall

You can uninstall the Traceable sidecar proxy by uninstalling the Traceable Platform agent. For more information, see Uninstall. Restart the deployments after you have uninstalled the Traceable Platform agent.

