To deploy the Cmd Audit agent to your cloud as a DaemonSet, modify and use the template in step 1. If you are deploying to a cloud platform that uses AppArmor, also use the AppArmor profile in step 2. If you plan to install on Red Hat CoreOS, Fedora CoreOS, or Flatcar OS, also read this guide.

1. Modify the DaemonSet template

Download daemonset.yaml, and update it in the following ways:

  1. Fill in values for CMD_PROJECT_KEY and CMD_SUB, as described here.

  2. If you want the nodes you're installing on to have Cmd server groups, fill in a value for CMD_SERVER_GROUPS (e.g. group1 or group1,group2). A sketch of the filled-in environment variables appears after this list.

  3. If installing on nodes without AppArmor, remove this section:

annotations:
  container.apparmor.security.beta.kubernetes.io/cmd-agent: localhost/docker-cmd
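
For reference, here is a sketch of what the filled-in environment variables might look like. The values are placeholders, and the CMD_SERVER_GROUPS entry is not part of the stock template below; it is shown on the assumption that it is added as an extra environment variable on the cmd-agent container alongside CMD_PROJECT_KEY and CMD_SUB.

env:
- name: CMD_PROJECT_KEY
  value: "pk-example-1234"      # placeholder value, not a real project key
- name: CMD_SUB
  value: "example-sub"          # placeholder value
- name: CMD_SERVER_GROUPS       # optional; assumed extra env var, not in the stock template
  value: "group1,group2"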

daemonset.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/name: cmd-agent
    app.kubernetes.io/version: "v1.2.1"
    app.kubernetes.io/part-of: cmd-agent
  name: cmd-agent
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: cmd-agent
    app.kubernetes.io/version: "v1.2.1"
    app.kubernetes.io/part-of: cmd-agent
  name: cmd-agent
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - containers
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app.kubernetes.io/name: cmd-agent
    app.kubernetes.io/version: "v1.2.1"
    app.kubernetes.io/part-of: cmd-agent
  name: cmd-agent
roleRef:
  kind: ClusterRole
  name: cmd-agent
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: cmd-agent
  namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/name: cmd-agent
    app.kubernetes.io/version: "v1.2.1"
    app.kubernetes.io/part-of: cmd-agent
  name: cmd-agent
spec:
  selector:
    matchLabels:
      app: cmd-agent
  template:
    metadata:
      labels:
        app: cmd-agent
      # When using nodes with AppArmor enabled, a custom Cmd AppArmor profile is required. This annotation
      # will apply the AppArmor policy to the daemonset pods.
      # The Cmd AppArmor policy allows access to some procfs files that are not allowed by the default Docker
      # AppArmor policy. See `apparmor/` directory for more information.
      #
      # Remove this annotation when using nodes that do not have AppArmor. Kubernetes will report an error when
      # attempting to apply an AppArmor annotation to systems without AppArmor.
      annotations:
        container.apparmor.security.beta.kubernetes.io/cmd-agent: localhost/docker-cmd
    spec:
      serviceAccountName: cmd-agent
      containers:
      - name: cmd-k8sgate
        image: registry.sw.cmd.com/cmdinc/cmd-agent:v1.2.1
        imagePullPolicy: Always
        command: ["/sbin/cmd-k8sgate"]
        volumeMounts:
        - name: shared-data
          mountPath: /var/run/cmd
        - name: cri-sock
          mountPath: /var/run/cmd/cri.sock
      - name: cmd-agent
        image: registry.sw.cmd.com/cmdinc/cmd-agent:v1.2.1
        imagePullPolicy: Always
        volumeMounts:
        # The Cmd agent must have access to the running kernel config, which is in /boot/config-* on some systems.
        # For hosts that have the running kernel config in procfs (/proc/config.gz), this mount is not required.
        - name: host-boot
          mountPath: /boot
          readOnly: true
        # This mount is required for BPF probes to run on the host kernel
        - name: host-debugfs
          mountPath: /sys/kernel/debug
        - name: shared-data
          mountPath: /var/run/cmd
        securityContext:
          # These capabilities are required by the Cmd agent, in order to load BPF probes and access required
          # information from procfs.
          capabilities:
            add:
            - SYS_ADMIN
            - SYS_PTRACE
            - SYS_RESOURCE
        env:
        - name: CMD_SERVER_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: CMD_ENABLE_K8SGATE
          value: "1"
        # To find your CMD_PROJECT_KEY and CMD_SUB values, see https://help.cmd.com/en/articles/4257620-the-agent-download-endpoint
        # The CMD_PROJECT_KEY and CMD_SUB values can also be kept in a k8s ConfigMap or Secret
        - name: CMD_PROJECT_KEY
          value: <add your project key here>
        - name: CMD_SUB
          value: <add your Cmd sub here>
      # Using the host PID namespace allows the Cmd agent to properly process PID information in other containers
      hostPID: true
      restartPolicy: Always
      volumes:
      - name: host-boot
        hostPath:
          path: /boot
          type: Directory
      - name: host-debugfs
        hostPath:
          path: /sys/kernel/debug
          type: Directory
      - name: shared-data
        emptyDir: {}
      - name: cri-sock
        hostPath:
          path: /run/containerd/containerd.sock
          type: Socket
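
As the comments in the template note, CMD_PROJECT_KEY and CMD_SUB can also be kept in a Kubernetes ConfigMap or Secret rather than written directly into daemonset.yaml. The following is a minimal sketch of the Secret approach; the Secret name cmd-agent-keys and its key names are hypothetical and chosen only for this example.

# Hypothetical Secret holding the Cmd credentials (names chosen for this example only)
apiVersion: v1
kind: Secret
metadata:
  name: cmd-agent-keys
  namespace: default
type: Opaque
stringData:
  project-key: <add your project key here>
  sub: <add your Cmd sub here>

With that Secret in place, the CMD_PROJECT_KEY and CMD_SUB entries in the cmd-agent container's env section would reference the Secret instead of using literal values:

- name: CMD_PROJECT_KEY
  valueFrom:
    secretKeyRef:
      name: cmd-agent-keys
      key: project-key
- name: CMD_SUB
  valueFrom:
    secretKeyRef:
      name: cmd-agent-keys
      key: sub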

2. For systems using AppArmor only: Load the AppArmor profile

On some cloud platforms, a custom AppArmor profile is required to run the DaemonSet in Docker or Kubernetes (otherwise the default Docker AppArmor profile won't allow access to the necessary files). Alternatively, you can run the DaemonSet container in privileged mode, which disables the default AppArmor/seccomp/SELinux profiles (running production systems in privileged mode is not recommended).
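
If you do accept that trade-off and run the container in privileged mode (for example, on a short-lived test cluster), the change is a small one to the cmd-agent container's securityContext in daemonset.yaml. The snippet below is only an illustration of that alternative, not the recommended configuration:

# Not recommended for production: replaces the capabilities-based securityContext
# on the cmd-agent container and disables the default AppArmor/seccomp/SELinux confinement.
securityContext:
  privileged: true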

Set up a custom AppArmor profile

To use a custom AppArmor profile, load it onto your nodes before deploying the Cmd DaemonSet. The following example shows how to achieve this with apparmor-loader, another DaemonSet:

  1. On your cluster, create a Cmd AppArmor profile ConfigMap:
    kubectl create -f apparmor/apparmor_configmap.yaml

  2. Create the apparmor-loader DaemonSet:
    kubectl create -f apparmor/apparmor_daemonset.yaml
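
Once the loader pods are running, you can check that the profile is registered on a node. The check below is a sketch that assumes shell access to the node; docker-cmd is the profile name defined in the ConfigMap further down.

# On a node (as root), confirm that the docker-cmd profile has been loaded:
sudo cat /sys/kernel/security/apparmor/profiles | grep docker-cmd
# Expected output if the profile is loaded:
# docker-cmd (enforce)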

To learn more, see the Google / Azure and Kubernetes AppArmor documentation.

Example AppArmor files

Below, you will find examples of the necessary files: apparmor_configmap.yaml and apparmor_daemonset.yaml.

apparmor/apparmor_configmap.yaml

# ConfigMap that contains an AppArmor profile for the cmd-daemonset.
# Compared to the Docker default AppArmor policy, this allows access
# to additional files required by the Cmd agent in /proc/sys/kernel.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cmd-apparmor-profile
data:
  cmd-profile: |-
    #include <tunables/global>


    profile docker-cmd flags=(attach_disconnected,mediate_deleted) {

      #include <abstractions/base>
      network,
      capability,
      file,
      ptrace (trace,read),
      deny mount, deny umount,
      deny @{PROC}/* w,
      deny @{PROC}/{[^1-9],[^1-9][^0-9],[^1-9s][^0-9y][^0-9s],[^1-9][^0-9][^0-9][^0-9]*}/** w,
      deny @{PROC}/sys/** w,
      deny @{PROC}/sysrq-trigger rwklx,
      deny @{PROC}/kcore rwklx,
      deny @{PROC}/mem rwklx,
      deny @{PROC}/kmem rwklx,
      deny /sys/[^f]*/** wklx,
      deny /sys/f[^s]*/** wklx,
      deny /sys/fs/[^c]*/** wklx,
      deny /sys/fs/c[^g]*/** wklx,
      deny /sys/fs/cg[^r]*/** wklx,
      deny /sys/firmware/** rwklx,
      deny /sys/kernel/security/** rwklx,
    }

apparmor/apparmor_daemonset.yaml

# This is an example of how to load the cmd-docker AppArmor profile onto k8s nodes. 
# It deploys the profile loader onto a cluster to automatically load AppArmor profiles from a ConfigMap.
#
# Based on: https://github.com/kubernetes/kubernetes/tree/master/test/images/apparmor-loader
#
# It is also possible to use alternative methods, see:
# https://kubernetes.io/docs/tutorials/clusters/apparmor/#setting-up-nodes-with-profiles
#

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: apparmor-loader
  labels:
    daemon: apparmor-loader
spec:
  selector:
    matchLabels:
      daemon: apparmor-loader
  template:
    metadata:
      name: apparmor-loader
      labels:
        daemon: apparmor-loader
    spec:
      containers:
      - name: apparmor-loader
        image: google/apparmor-loader:latest
        args:
        # Tell the loader to poll the /profiles directory every 30 seconds.
        - -poll
        - 30s
        - /profiles
        securityContext:
          # The loader requires root permissions to actually load the profiles.
          privileged: true
        volumeMounts:
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: apparmor-includes
          mountPath: /etc/apparmor.d
          readOnly: true
        - name: profiles
          mountPath: /profiles
          readOnly: true
      volumes:
      # The /sys directory must be mounted to interact with the AppArmor module.
      - name: sys
        hostPath:
          path: /sys
      # The /etc/apparmor.d directory is required for most AppArmor include templates.
      - name: apparmor-includes
        hostPath:
          path: /etc/apparmor.d
      # Map in the profile data.
      - name: profiles
        configMap:
          name: cmd-apparmor-profile

3. Create the Cmd DaemonSet in your cluster

For example: kubectl apply -f daemonset.yaml.

After a few seconds, the DaemonSet will begin monitoring your cluster, and your nodes will appear in the Cmd web app.
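
To confirm the rollout from the command line, you can check the DaemonSet and its pods. The commands below assume the manifest was applied to the default namespace, as in the template above:

kubectl get daemonset cmd-agent
kubectl get pods -l app=cmd-agent -o wide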
