This guide explains how to deploy the Cmd Audit agent as a DaemonSet to a cluster in EKS, AKS, or GKE.

Prerequisites:

This feature is supported starting with Cmd Audit agent v1.1.0.

Supported platforms:

  • GKE: COS or Ubuntu nodes.
  • EKS: Amazon Linux 2, with EC2-based worker nodes.
  • AKS: Ubuntu nodes.

Outline:

  1. Download the agent .deb
  2. Set up the file structure
  3. Build the Docker image and push it to a registry
  4. Create a ConfigMap
  5. Load the custom AppArmor profile
  6. Modify cmd_daemonset.yaml
  7. Deploy the DaemonSet

1. Download the agent .deb

You can download the latest version from the web app's Agent settings (top-right drop-down menu > App settings > Agent > scroll to bottom). Specific older versions are also available for download (Premium users only).


2. Set up the file structure

To build the DaemonSet’s Docker image, first download a tarball of the following files:

daemonset
├── Dockerfile
├── apparmor
│   ├── apparmor_configmap.yaml
│   └── apparmor_daemonset.yaml
└── cmd_daemonset.yaml

Next, extract the files and copy the agent .deb from step 1 into the daemonset directory.

If you plan to turn on the agent status API, add netcat-openbsd to the list of dependencies under apt-install in the Dockerfile.
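The Dockerfile you downloaded in step 2 is authoritative; purely as an illustrative sketch (the base image, the .deb filename, and the exact install line here are assumptions, not the real file's contents), the dependency change would look something like this:

```dockerfile
# Hypothetical sketch only -- use the Dockerfile from the downloaded tarball.
FROM ubuntu:20.04

# Agent .deb from step 1 (filename is an assumption).
COPY cmd-agent.deb /tmp/cmd-agent.deb

# Append netcat-openbsd to the install list if you plan to
# turn on the agent status API.
RUN apt-get update && \
    apt-get install -y /tmp/cmd-agent.deb netcat-openbsd && \
    rm -rf /var/lib/apt/lists/*
```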

3. Build the Docker image and push it to a registry

From the daemonset directory, build the Docker image, for example:

docker build -t <image_tag> -f Dockerfile .

Then push the image to GCR, ECR, or ACR.
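As a sketch of the push step for each registry (the repository name cmd-agent is an example, and every <...> placeholder must be replaced with your own values before these commands will run):

```shell
# GCR
docker tag <image_tag> gcr.io/<project>/cmd-agent:latest
docker push gcr.io/<project>/cmd-agent:latest

# ECR
aws ecr get-login-password --region <region> | \
  docker login --username AWS --password-stdin <account>.dkr.ecr.<region>.amazonaws.com
docker tag <image_tag> <account>.dkr.ecr.<region>.amazonaws.com/cmd-agent:latest
docker push <account>.dkr.ecr.<region>.amazonaws.com/cmd-agent:latest

# ACR
az acr login --name <registry>
docker tag <image_tag> <registry>.azurecr.io/cmd-agent:latest
docker push <registry>.azurecr.io/cmd-agent:latest
```

Whichever registry you use, note the full image reference (including the tag); you will need it in step 6.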

4. Create a ConfigMap

The ConfigMap provides configuration variables to cmd_daemonset.yaml. Save the manifest as cmd_config.yaml (the ConfigMap itself is named cmd-config), and include values for:

  • CMD_PROJECT_KEY — your project key.
  • CMD_API_URL — use this URL: https://<SUB>.c-app.cmd.com/ws .
    Replace <SUB> with the subdomain used by your instance of the Cmd web app (e.g. if your web app is at sub1.app.cmd.com , replace "<SUB>" with "sub1").
  • CMD_SOS_URL — use this URL: https://<SUB>.sos-app.cmd.com . Replace
    "<SUB>" just as in the previous URL.
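To make the substitution concrete, here is a small shell sketch that derives both URLs from the subdomain (sub1 is just an example value):

```shell
# If your web app lives at sub1.app.cmd.com, the subdomain is "sub1".
SUB="sub1"

CMD_API_URL="https://${SUB}.c-app.cmd.com/ws"
CMD_SOS_URL="https://${SUB}.sos-app.cmd.com"

echo "CMD_API_URL=${CMD_API_URL}"
echo "CMD_SOS_URL=${CMD_SOS_URL}"
```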

cmd_config.yaml :

apiVersion: v1
kind: ConfigMap
metadata:
  name: cmd-config
data:
  CMD_PROJECT_KEY: xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  CMD_API_URL: https://sub1.c-app.cmd.com/ws
  CMD_SOS_URL: https://sub1.sos-app.cmd.com

Apply the ConfigMap to your cluster, e.g.: kubectl apply -f cmd_config.yaml .

5. Load the custom AppArmor profile

On GKE and AKS (but not EKS), a custom AppArmor profile is required to run the DaemonSet: the default Docker AppArmor profile does not allow access to the files the agent needs. Alternatively, you can run the DaemonSet container in privileged mode, which disables the default AppArmor/seccomp/SELinux profiles; running production systems in privileged mode is not recommended.

Set up a custom AppArmor profile

To use a custom AppArmor profile in GKE or AKS, load it onto your nodes before the Cmd DaemonSet. The following example shows how to achieve this with apparmor-loader, another DaemonSet:

  1. On your cluster, create a Cmd AppArmor profile ConfigMap:
    kubectl create -f apparmor/apparmor_configmap.yaml
  2. Create the apparmor-loader DaemonSet:
    kubectl create -f apparmor/apparmor_daemonset.yaml
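One way to confirm the profile actually loaded (assuming you have shell access to a node, or a privileged debug pod on it):

```shell
# On the node: "docker-cmd" should appear among the loaded profiles.
grep docker-cmd /sys/kernel/security/apparmor/profiles

# Or check the loader's logs via its pod label:
kubectl logs -l daemon=apparmor-loader
```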

To learn more, see the Google, Azure, and Kubernetes AppArmor documentation.

Example AppArmor files

Below are examples of the two necessary files: apparmor_configmap.yaml and apparmor_daemonset.yaml.

apparmor/apparmor_configmap.yaml

# ConfigMap that contains an AppArmor profile for the cmd-daemonset.
# Compared to the Docker default AppArmor policy, this allows access
# to additional files required by the Cmd agent in /proc/sys/kernel.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cmd-apparmor-profile
data:
  cmd-profile: |-
    #include <tunables/global>

    profile docker-cmd flags=(attach_disconnected,mediate_deleted) {
      #include <abstractions/base>

      network,
      capability,
      file,
      ptrace (trace,read),

      deny mount,
      deny umount,

      deny @{PROC}/* w,
      deny @{PROC}/{[^1-9],[^1-9][^0-9],[^1-9s][^0-9y][^0-9s],[^1-9][^0-9][^0-9][^0-9]*}/** w,
      deny @{PROC}/sys/** w,
      deny @{PROC}/sysrq-trigger rwklx,
      deny @{PROC}/kcore rwklx,
      deny @{PROC}/mem rwklx,
      deny @{PROC}/kmem rwklx,
      deny /sys/[^f]*/** wklx,
      deny /sys/f[^s]*/** wklx,
      deny /sys/fs/[^c]*/** wklx,
      deny /sys/fs/c[^g]*/** wklx,
      deny /sys/fs/cg[^r]*/** wklx,
      deny /sys/firmware/** rwklx,
      deny /sys/kernel/security/** rwklx,
    }

apparmor/apparmor_daemonset.yaml

# This is an example of how to load the docker-cmd AppArmor profile onto Kubernetes nodes.
# It deploys the profile loader onto a cluster to automatically load AppArmor profiles from a ConfigMap.
#
# Based on: https://github.com/kubernetes/kubernetes/tree/master/test/images/apparmor-loader
#
# It is also possible to use alternative methods, see:
# https://kubernetes.io/docs/tutorials/clusters/apparmor/#setting-up-nodes-with-profiles
#

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: apparmor-loader
  labels:
    daemon: apparmor-loader
spec:
  selector:
    matchLabels:
      daemon: apparmor-loader
  template:
    metadata:
      name: apparmor-loader
      labels:
        daemon: apparmor-loader
    spec:
      containers:
      - name: apparmor-loader
        image: google/apparmor-loader:latest
        args:
        # Tell the loader to poll the /profiles directory every 30 seconds.
        - -poll
        - 30s
        - /profiles
        securityContext:
          # The loader requires root permissions to actually load the profiles.
          privileged: true
        volumeMounts:
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: apparmor-includes
          mountPath: /etc/apparmor.d
          readOnly: true
        - name: profiles
          mountPath: /profiles
          readOnly: true
      volumes:
      # The /sys directory must be mounted to interact with the AppArmor module.
      - name: sys
        hostPath:
          path: /sys
      # The /etc/apparmor.d directory is required for most AppArmor include templates.
      - name: apparmor-includes
        hostPath:
          path: /etc/apparmor.d
      # Map in the profile data.
      - name: profiles
        configMap:
          name: cmd-apparmor-profile

6. Modify cmd_daemonset.yaml

Below is an example of cmd_daemonset.yaml. Make sure the container image referenced in the YAML matches the tag you pushed to your image registry, and that the intended cluster has permission to pull from that registry.

If you are using EKS, remove the following from cmd_daemonset.yaml:

annotations:
  container.apparmor.security.beta.kubernetes.io/cmd-daemonset: localhost/docker-cmd

cmd_daemonset.yaml:


apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cmd-daemonset
spec:
  selector:
    matchLabels:
      app: cmd-daemonset
  template:
    metadata:
      labels:
        app: cmd-daemonset
      # When using nodes with AppArmor enabled, a custom Cmd AppArmor
      # profile is required. This annotation will apply the AppArmor
      # policy to the DaemonSet pods. The Cmd AppArmor policy allows
      # access to some procfs files that are not allowed by the
      # default Docker AppArmor policy. See the apparmor/ directory
      # for more information.
      #
      # Remove this annotation when using nodes that do not have
      # AppArmor. Kubernetes will report an error when attempting to
      # apply an AppArmor annotation to systems without AppArmor.
      annotations:
        container.apparmor.security.beta.kubernetes.io/cmd-daemonset: localhost/docker-cmd
    spec:
      containers:
      - name: cmd-daemonset
        image: <Specify Cmd Daemonset Image>
        imagePullPolicy: Always
        volumeMounts:
        # The Cmd agent must have access to the running kernel config,
        # which is in /boot/config-* on some systems. For hosts that have
        # the running kernel config in procfs (/proc/config.gz), this
        # mount is not required.
        - name: host-boot
          mountPath: /boot
          readOnly: true
        # This mount is required for BPF probes to run on the host kernel.
        - name: host-debugfs
          mountPath: /sys/kernel/debug
        securityContext:
          # These capabilities are required by the Cmd agent, in order to
          # load BPF probes and access required information from procfs.
          capabilities:
            add:
            - SYS_ADMIN
            - SYS_PTRACE
            - SYS_RESOURCE
        env:
        - name: CMD_SERVER_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: CMD_PROJECT_KEY
          valueFrom:
            configMapKeyRef:
              name: cmd-config
              key: CMD_PROJECT_KEY
        - name: CMD_API_URL
          valueFrom:
            configMapKeyRef:
              name: cmd-config
              key: CMD_API_URL
        - name: CMD_SOS_URL
          valueFrom:
            configMapKeyRef:
              name: cmd-config
              key: CMD_SOS_URL
      # Using the host PID namespace allows the Cmd agent to properly
      # process PID information in other containers.
      hostPID: true
      restartPolicy: Always
      volumes:
      - name: host-boot
        hostPath:
          path: /boot
          type: Directory
      - name: host-debugfs
        hostPath:
          path: /sys/kernel/debug
          type: Directory

7. Deploy the DaemonSet

For example: kubectl apply -f cmd_daemonset.yaml .

After a few seconds, the DaemonSet will begin monitoring your cluster, and your nodes will appear in the Cmd web app.
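To check the rollout from the command line (the resource name and labels below come from the example cmd_daemonset.yaml above; adjust them if you changed the manifest):

```shell
# One cmd-daemonset pod should be running per node.
kubectl get daemonset cmd-daemonset
kubectl get pods -l app=cmd-daemonset -o wide

# If nodes do not appear in the web app, inspect a pod's recent logs.
kubectl logs -l app=cmd-daemonset --tail=50
```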


Learn how triggers can define alerting behavior on monitored servers
