This guide explains how to deploy the Cmd Audit agent as a DaemonSet to a cluster in GKE.

Outline

  1. Download the agent .tgz
  2. Set up the file structure
  3. Build the Docker image and push it to GCR
  4. Create a ConfigMap
  5. Load the custom AppArmor profile
  6. Modify cmd_daemonset.yaml
  7. Deploy the DaemonSet

1. Download the agent .tgz

You can download the latest version from the web app's Agent settings (top-right dropdown menu > App settings > Agent > scroll to bottom). You can also download specific older versions (Premium users only).

2. Set up the file structure

To build the DaemonSet’s Docker image, first download a tarball of the following files:

daemonset
├── GKE                  # GKE-specific files.
│   ├── Dockerfile
│   └── run_cmd.sh
└── scripts              # Scripts for building eBPF probes.
    ├── cos.sh
    └── ubuntu.sh

  • Next, extract the files and copy the agent .tgz from step 1 into the daemonset directory (see the example commands after this list).
  • If you plan to turn on the agent status API, add netcat-openbsd to the list of dependencies installed with apt in the Dockerfile.
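
A minimal sketch of these steps, assuming the downloaded tarball is named daemonset.tgz and the agent tarball is cmd-agent-x.y.z.tgz (hypothetical names; substitute your own):

# Extract the DaemonSet build files, then copy the agent tarball into the daemonset directory
tar -xzf daemonset.tgz
cp cmd-agent-x.y.z.tgz daemonset/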



3. Build the Docker image and push it to GCR

From the daemonset directory, build the Docker image, for example:

docker build --build-arg CMD_VERSION=x.y.z -t <image_tag> -f GKE/Dockerfile .

(Set CMD_VERSION to the version number of the agent you want to deploy.)

Then, push the image to GCR.
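
For example, assuming your GCP project ID is my-gcp-project and you want to host the image as cmd-daemonset (substitute your own project and image names):

# Authenticate Docker with GCR (requires the gcloud CLI), then tag and push the image
gcloud auth configure-docker
docker tag <image_tag> gcr.io/my-gcp-project/cmd-daemonset:x.y.z
docker push gcr.io/my-gcp-project/cmd-daemonset:x.y.z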

4. Create a ConfigMap

The ConfigMap provides configuration variables to cmd_daemonset.yaml. Save it in a file named cmd_config.yaml and include values for:

  • CMD_PROJECT_KEY — your project key.
  • CMD_API_URL — use this URL: https://<SUB>.c-app.cmd.com/ws.
    Replace <SUB> with the subdomain used by your instance of the Cmd web app (e.g. if your web app is at sub1.app.cmd.com, replace <SUB> with sub1).
  • CMD_SOS_URL — use this URL: https://<SUB>.sos-app.cmd.com. Replace <SUB> just as you did in the previous URL.

cmd_config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cmd-config
data:
  CMD_PROJECT_KEY: xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  CMD_API_URL: https://sub1.c-app.cmd.com/ws
  CMD_SOS_URL: https://sub1.sos-app.cmd.com

Apply the ConfigMap to your cluster, e.g.: kubectl apply -f cmd_config.yaml.
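
To confirm the values landed in the cluster, you can inspect the ConfigMap, e.g.:

kubectl get configmap cmd-config -o yaml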


5. Load the custom AppArmor profile

A custom AppArmor profile is required to run the DaemonSet in Docker or Kubernetes because the default Docker AppArmor profile does not allow backtraces. Alternatively, you can run the container in privileged mode, which disables the default AppArmor/seccomp/SELinux profiles (not recommended for production systems).

Set up a custom AppArmor profile in GKE

To use a custom AppArmor profile in GKE, load it onto your nodes before deploying the Cmd DaemonSet. The following example shows how to do this with apparmor-loader, a separate DaemonSet:

  1. On your cluster, create a Cmd AppArmor profile ConfigMap:
    kubectl create -f apparmor/apparmor_configmap.yaml
  2. Create the apparmor-loader DaemonSet:
    kubectl create -f apparmor/apparmor_daemonset.yaml

To learn more, see the Google and Kubernetes AppArmor documentation.
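
To verify that the loader picked up the profile, you can check its logs (the profile name docker-cmd should appear once it loads), e.g.:

kubectl logs -l daemon=apparmor-loader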

Example AppArmor files
Below are examples of the necessary files: apparmor_configmap.yaml and apparmor_daemonset.yaml.

apparmor_configmap.yaml

# ConfigMap containing the AppArmor profile for the Cmd DaemonSet
apiVersion: v1
kind: ConfigMap
metadata:
  name: cmd-apparmor-profile
data:
  cmd-profile: |-
    #include <tunables/global>

    profile docker-cmd flags=(attach_disconnected,mediate_deleted) {

      #include <abstractions/base>

      network,
      capability,
      file,
      signal (receive) peer=unconfined,
      ptrace (trace,read,tracedby,readby),

      deny mount,
      deny umount,

      deny @{PROC}/* w,
      deny @{PROC}/{[^1-9],[^1-9][^0-9],[^1-9s][^0-9y][^0-9s],[^1-9][^0-9][^0-9][^0-9]*}/** w,
      deny @{PROC}/sys/** w,
      deny @{PROC}/sysrq-trigger rwklx,
      deny @{PROC}/kcore rwklx,

      deny /sys/[^f]*/** wklx,
      deny /sys/f[^s]*/** wklx,
      deny /sys/fs/[^c]*/** wklx,
      deny /sys/fs/c[^g]*/** wklx,
      deny /sys/fs/cg[^r]*/** wklx,
      deny /sys/firmware/** rwklx,
      deny /sys/kernel/security/** rwklx,

      deny /host/etc/* wklx,
      deny /host/etc/** wklx,

    }

apparmor_daemonset.yaml

# An example DaemonSet demonstrating how the profile loader can be
# deployed onto a cluster to automatically load AppArmor profiles from
# a ConfigMap.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: apparmor-loader
  labels:
    daemon: apparmor-loader
spec:
  selector:
    matchLabels:
      daemon: apparmor-loader
  template:
    metadata:
      name: apparmor-loader
      labels:
        daemon: apparmor-loader
    spec:
      containers:
      - name: apparmor-loader
        image: google/apparmor-loader:latest
        args:
          # Tell the loader to poll the /profiles directory every 30 seconds.
          - -poll
          - 30s
          - /profiles
        securityContext:
          # The loader requires root permissions to actually load the profiles.
          privileged: true
        volumeMounts:
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: apparmor-includes
          mountPath: /etc/apparmor.d
          readOnly: true
        - name: profiles
          mountPath: /profiles
          readOnly: true
      volumes:
      # The /sys directory must be mounted to interact with the AppArmor module.
      - name: sys
        hostPath:
          path: /sys
      # The /etc/apparmor.d directory is required for most AppArmor include templates.
      - name: apparmor-includes
        hostPath:
          path: /etc/apparmor.d
      # Map in the profile data.
      - name: profiles
        configMap:
          name: cmd-apparmor-profile

6. Modify cmd_daemonset.yaml

Below, you can find an example of cmd_daemonset.yaml . Make sure the container image referenced in the .yaml matches the tag that you set in GCR, and that the intended cluster has permission to access the image in GCR. (Clusters can access images by default if both the image and the cluster are in the same GCP project.)
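
If you want to confirm the image is available before deploying, you can list its tags with the gcloud CLI, for example (assuming the image is named cmd-daemonset in project my-gcp-project):

gcloud container images list-tags gcr.io/my-gcp-project/cmd-daemonset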

cmd_daemonset.yaml:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cmd-daemonset
spec:
  selector:
    matchLabels:
      app: cmd-daemonset
  template:
    metadata:
      labels:
        app: cmd-daemonset
      # Use either the custom AppArmor profile with this annotation, or run in privileged mode.
      annotations:
        container.apparmor.security.beta.kubernetes.io/cmd-daemonset: localhost/docker-cmd
    spec:
      containers:
      - name: cmd-daemonset
        image: gcr.io/[your_gcp_project]/cmd-daemonset:latest
        imagePullPolicy: Always
        volumeMounts:
        - name: host-etc
          mountPath: /host/etc
        - name: host-debugfs
          mountPath: /sys/kernel/debug
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
            - SYS_PTRACE
          # Use privileged mode only if the Cmd AppArmor profile is not used.
          # privileged: true
        env:
        - name: CMD_SERVER_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: CMD_PROJECT_KEY
          valueFrom:
            configMapKeyRef:
              name: cmd-config
              key: CMD_PROJECT_KEY
        - name: CMD_API_URL
          valueFrom:
            configMapKeyRef:
              name: cmd-config
              key: CMD_API_URL
        - name: CMD_SOS_URL
          valueFrom:
            configMapKeyRef:
              name: cmd-config
              key: CMD_SOS_URL
      hostPID: true
      restartPolicy: Always
      volumes:
      - name: host-etc
        hostPath:
          path: /etc
          type: Directory
      - name: host-debugfs
        hostPath:
          path: /sys/kernel/debug
          type: Directory

7. Deploy the DaemonSet

For example: kubectl apply -f cmd_daemonset.yaml

After the probes compile, which normally takes less than a minute, the DaemonSet will begin monitoring your cluster, and your nodes will appear in the Cmd web app.
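
To check on the rollout, you can watch the DaemonSet and its pods, and inspect a pod's logs, e.g.:

kubectl get daemonset cmd-daemonset
kubectl get pods -l app=cmd-daemonset
kubectl logs -l app=cmd-daemonset

If you loaded the custom AppArmor profile, you can also confirm that a pod is confined by it (the command should report docker-cmd (enforce)):

kubectl exec <cmd-daemonset-pod> -- cat /proc/1/attr/current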


Related resources:

Learn how triggers can define alerting behavior on monitored servers
