You can use tools like jq to work directly with Cmd data exported as JSON files.

In this example, objects are downloaded from S3 and decompressed with gunzip. The objects are then processed with jq to create a two-column CSV file of commands executed by users:

Note: This example assumes that your S3 access credentials have already been configured for the aws command-line tool.

aws s3 cp --recursive s3://my-bucket/cmd_export/cmd-v1/CMP-XXX/PRJ-YYY/2018/08/20/ ./

gunzip *.json.gz

cat uw1_cmd-v1_PRJ-YYY_* | jq -r '.exec_user + ", " + .exec_path' | sort | uniq > user-cmds.csv
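Once user-cmds.csv exists, you can summarize it with standard text tools. As a sketch, the snippet below counts how many distinct commands each user executed; the inlined sample rows are hypothetical stand-ins for real export output:

```shell
# Sample rows standing in for real export output (hypothetical data).
printf 'peterson, /bin/ls\npeterson, /bin/ps\nroot, /bin/ls\n' > user-cmds.csv

# Count how many distinct commands each user executed.
# Field 1 (before the comma) is the user; uniq -c tallies each user.
cut -d, -f1 user-cmds.csv | sort | uniq -c
```

Because the CSV was already deduplicated with uniq, each count reflects distinct commands rather than total invocations.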

For example, output from the above command in 'user-cmds.csv' might look like this:

peterson, .
peterson, /bin/grep
peterson, /bin/ls
peterson, /bin/ps
peterson, /usr/bin/basename
peterson, /usr/bin/clear_console
peterson, /usr/bin/dircolors
peterson, /usr/bin/dirname
peterson, /usr/bin/env
peterson, /usr/bin/groups
peterson, /usr/bin/lesspipe
peterson, /usr/bin/locale
peterson, /usr/bin/seq
peterson, /usr/bin/sudo
peterson, :
peterson, [
peterson, alias
peterson, command
peterson, complete
peterson, declare
peterson, eval
peterson, export
peterson, history
peterson, local
peterson, printf
peterson, pwd
peterson, read
peterson, readonly
peterson, return
peterson, set
peterson, shift
peterson, shopt
peterson, test
peterson, type
peterson, unset
root, /bin/bash
root, /bin/ls
root, /bin/su
root, /usr/bin/basename
root, /usr/bin/dircolors
root, /usr/bin/dirname
root, /usr/bin/groups
root, /usr/bin/lesspipe
root, [
root, alias
root, eval
root, export
root, shopt
root, test
root, unset
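To narrow output like the above to a single user, grep on the first column is enough. A minimal sketch, again using hypothetical sample rows in place of a real export:

```shell
# Sample two-column rows as produced above (hypothetical data).
printf 'peterson, /bin/ls\nroot, /bin/su\nroot, /bin/ls\n' > user-cmds.csv

# Keep only rows where the user column is root.
grep '^root,' user-cmds.csv
```

The same pattern works for any user; anchoring with ^ avoids matching paths that happen to contain the user name.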