Check that the "Per User Limit" Login Session Control in the UCP Admin Settings is set according to the values defined in the System Security Plan. via UI: In the UCP web console, navigate to "Admin Settings" | "Authentication & Authorization" and verify the "Per User Limit" field is set according to the number specified in the System Security Plan. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine with connectivity to the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml|grep per_user_limit If the "per_user_limit" entry under the "[auth.sessions]" section in the output is not set according to the value defined in the SSP, this is a finding.
Set the "Per User Limit" Login Session Control in the UCP Admin Settings per the requirements set forth by the System Security Plan (SSP). via UI: In the UCP web console, navigate to "Admin Settings" | "Authentication & Authorization" and set the "Per User Limit" field according to the requirements of this control. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands on either a UCP Manager node or using a UCP client bundle. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml > ucp-config.toml Open the "ucp-config.toml" file, set the "per_user_limit" entry under the "[auth.sessions]" section according to the requirements of this control. Save the file. Execute the following commands to update UCP with the new configuration: curl -sk -H "Authorization: Bearer $AUTHTOKEN" --upload-file ucp-config.toml https://[ucp_url]/api/ucp/config-toml
This check only applies to the Docker Engine - Enterprise component of Docker Enterprise. via CLI: Linux: Verify the daemon has not been started with the "-H TCP://[host]" argument by running the following command: ps -ef | grep dockerd If the daemon is listening only on a UNIX socket ("-H unix://"), this is not a finding. If the "-H TCP://[host]" argument appears in the output, then this is a finding.
This fix only applies to Docker Engine - Enterprise nodes that are part of a UCP cluster. Apply this fix to every node in the cluster. (Linux) Execute the following command to open an override file for docker.service: sudo systemctl edit docker.service Remove any "-H" host daemon flags from the "ExecStart=/usr/bin/dockerd" line in the override file. Save the file and reload the config with the following command: sudo systemctl daemon-reload Restart Docker with the following command: sudo systemctl restart docker.service
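A minimal override sketch (the ExecStart value shown assumes the packaged unit file; restate whatever the existing line contains, minus any "-H" flags). The first, empty ExecStart= line is needed to clear the packaged value before redefining it:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd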
This check only applies to Docker Engine - Enterprise. Verify FIPS mode is enabled on the host operating system. Execute the following command to verify that FIPS mode is enabled on the Engine: docker info The "Security Options" section in the response should show a "fips" label, indicating that, when configured, the remotely accessible Engine API uses FIPS-validated digital signatures in conjunction with an approved hash function to protect the integrity of remote access sessions. If the "fips" label is not shown in the "Security Options" section, then this is a finding.
Enable FIPS mode on the host operating system. Start the Engine after FIPS mode is enabled on the host to automatically enable FIPS mode on the Engine. FIPS mode can also be enabled by explicitly setting the DOCKER_FIPS=1 environment variable in an active terminal session prior to the execution of any Docker commands.
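A sketch of making the DOCKER_FIPS variable persistent for a systemd-managed Engine (the drop-in approach is an assumption; adapt it to the environment's service management practice):

sudo systemctl edit docker.service
# add the following to the override file:
[Service]
Environment="DOCKER_FIPS=1"
# then reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart docker.service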
This check only applies to the UCP component of Docker Enterprise. Verify that the audit log configuration level in UCP is set to "request": Via UI: As a Docker EE Admin, navigate to "Admin Settings" | "Audit Logs" in the UCP management console, and verify "Audit Log Level" is set to "Request". If the audit log configuration level is not set to "Request", this is a finding. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine that can communicate with the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml Look for the "level" entry under the "[audit_log_configuration]" section in the output, and verify that it is set to "request". If the "level" entry under the "[audit_log_configuration]" section in the output is not set to "request", then this is a finding.
This fix only applies to the UCP component of Docker Enterprise. Set the audit log configuration level in UCP: via UI: As a Docker EE Admin, navigate to "Admin Settings" | "Audit Logs" in the UCP management console, and set the "Audit Log Level" to "Request". via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine that can communicate with the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml > ucp-config.toml Open the "ucp-config.toml" file and, under the "[audit_log_configuration]" section, set the "level" entry to "request". Save the file. Execute the following commands to update UCP with the new configuration: curl -sk -H "Authorization: Bearer $AUTHTOKEN" --upload-file ucp-config.toml https://[ucp_url]/api/ucp/config-toml
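For illustration, the relevant portion of the "ucp-config.toml" file after this change would look similar to the following (other keys in the section are left as-is):

[audit_log_configuration]
  level = "request"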
This check only applies to the underlying host operating system on which the Docker Engine - Enterprise instance is running. Verify that the auditing capabilities provided by the underlying host have been properly configured to audit Docker Engine - Enterprise: (Linux) Check that auditd has been installed and that audit rules are configured against the following components of Docker Engine - Enterprise: auditctl -l | grep -e /usr/bin/docker -e /var/lib/docker -e /etc/docker -e /etc/default/docker -e /etc/docker/daemon.json -e /usr/bin/docker-containerd -e /usr/bin/docker-runc systemctl show -p FragmentPath docker.service or auditctl -l | grep docker.service systemctl show -p FragmentPath docker.socket or auditctl -l | grep docker.sock If audit rules aren't properly configured for the paths and services listed above, then this is a finding.
This fix applies to the underlying host operating system on which the Docker Engine - Enterprise instance is running. Enable and configure audit policies for Docker Engine - Enterprise on the host operating system: (Linux) Check that auditd has been installed, then apply the following audit rules (and persist them in /etc/audit/audit.rules so they survive a reboot): auditctl -w /usr/bin/docker -k docker auditctl -w /var/lib/docker -k docker auditctl -w /etc/docker -k docker auditctl -w [docker.service-path] -k docker (where [docker.service-path] is the result of systemctl show -p FragmentPath docker.service) auditctl -w [docker.socket-path] -k docker (where [docker.socket-path] is the result of systemctl show -p FragmentPath docker.socket) auditctl -w /etc/default/docker -k docker auditctl -w /etc/docker/daemon.json -k docker auditctl -w /usr/bin/docker-containerd -k docker auditctl -w /usr/bin/docker-runc -k docker
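The equivalent persistent entries in /etc/audit/audit.rules (or a file under /etc/audit/rules.d/ on distributions that use it) would look similar to the sketch below; the two systemd unit paths are illustrative and must be replaced with the FragmentPath values reported by systemctl:

-w /usr/bin/docker -k docker
-w /var/lib/docker -k docker
-w /etc/docker -k docker
-w /etc/default/docker -k docker
-w /etc/docker/daemon.json -k docker
-w /usr/bin/docker-containerd -k docker
-w /usr/bin/docker-runc -k docker
-w /usr/lib/systemd/system/docker.service -k docker
-w /usr/lib/systemd/system/docker.socket -k docker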
Verify that LDAP integration is enabled and properly configured in the UCP Admin Settings and verify that the LDAP/AD server is configured per the requirements set forth in the appropriate OS STIG. via UI: In the UCP web console, navigate to "Admin Settings" | "Authentication & Authorization" and verify "LDAP Enabled" is set to "Yes" and that it is properly configured. If "LDAP Enabled" is not set to "Yes", or if the LDAP server is not properly configured, then this is a finding. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine with connectivity to the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml Look for the "backend" entry under the "[auth]" section in the output, and verify that it is set to "ldap". *NOTE: For security reasons, the "[auth.ldap]" section is not stored in the config file and can only be viewed from the UCP Admin Settings UI. If the "backend =" entry under the "[auth]" section in the output is not set to "ldap", then this is a finding.
Enable and configure LDAP integration in the UCP Admin Settings. via UI: In the UCP web console, navigate to "Admin Settings" | "Authentication & Authorization" and set "LDAP Enabled" to "Yes" and properly configure the LDAP/AD settings as per the appropriate OS STIG. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands on either a UCP Manager node or using a UCP client bundle. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml > ucp-config.toml Open the "ucp-config.toml" file, set the "backend" entry under the "[auth]" section to "ldap", and add an "[auth.ldap]" sub-section per the UCP configuration options as documented at https://docs.docker.com/ee/ucp/admin/configure/ucp-configuration-file/#authldap-optional. Save the file. Execute the following commands to update UCP with the new configuration: curl -sk -H "Authorization: Bearer $AUTHTOKEN" --upload-file ucp-config.toml https://[ucp_url]/api/ucp/config-toml
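A minimal sketch of the resulting "ucp-config.toml" sections (the "[auth.ldap]" key names and values shown are illustrative placeholders; use the options from the linked UCP configuration documentation and configure the server per the OS STIG):

[auth]
  backend = "ldap"

[auth.ldap]
  server_url = "ldaps://[ldap_server]"
  reader_dn = "[reader_dn]"
  reader_password = "[reader_password]"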
This check only applies to the UCP component of Docker Enterprise. Verify that the applied RBAC policy sets in UCP are configured per the requirements set forth by the System Security Plan (SSP). via UI: As a Docker EE Admin, navigate to "Access Control" | "Grants" in the UCP web console. Verify that all grants and cluster role bindings applied to Swarm are configured per the requirements set forth by the System Security Plan (SSP). If the applied RBAC policy sets in UCP are not configured per the requirements set forth by the SSP, then this is a finding. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands on a machine that can communicate with the UCP management console: AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" "https://[ucp_url]/collectionGrants?subjectType=all&expandUser=true&showPaths=true" Verify that all grants applied to Swarm in the API response are configured per the requirements set forth by the System Security Plan (SSP). If the applied RBAC policy sets in UCP are not configured per the requirements set forth by the SSP, then this is a finding.
This fix only applies to the UCP component of Docker Enterprise. Apply RBAC policy sets in UCP per the requirements set forth by the SSP. via UI: As a Docker EE Admin, navigate to "Access Control" | "Grants" in the UCP web console. Create grants and cluster role bindings for Swarm per the requirements set forth by the SSP. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands on a machine that can communicate with the UCP management console: AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) Create grants for Swarm for applicable subjects, objects and roles using the following command: curl -sk -H "Authorization: Bearer $AUTHTOKEN" -X PUT https://[ucp_url]/collectionGrants/[subjectID]/[objectID]/[roleID]
This check only applies to the DTR component of Docker Enterprise. Verify that the organization, team and user permissions in DTR are configured per the System Security Plan (SSP). Obtain and review SSP. Identify organization roles, teams and users. via UI: As a Docker EE Admin, navigate to "Organizations" and verify the list of organizations and teams within those organizations are setup per the SSP. Navigate to "Users" and verify that the list of users are assigned to appropriate organizations, teams and repositories per the SSP. If the organization, team and user permissions in DTR are not configured per the SSP, this is a finding. via CLI: Linux (requires curl and jq): As a Docker EE admin, execute the following commands on a machine that can communicate with the DTR management console: AUTHTOKEN=$(curl -kLsS -u <username>:<password> "https://[dtr_url]/auth/token" | jq -r .token) Execute the following command to verify that the teams associated with an organization have access to the appropriate repositories per the System Security Plan: curl -k -H "Authorization: Bearer $AUTHTOKEN" -X GET "https://[dtr_url]/api/v0/accounts/[org_name]/teams/[team_name]/repositoryAccess" Execute the following commands on a machine that can communicate with the UCP management console to verify that the members of the team with access to these repositories is appropriate per the SSP: AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/accounts/[orgNameOrID]/teams/[teamNameOrID]/members If the organization, team and user permissions in DTR are not configured per the SSP, this is a finding.
This fix only applies to the DTR component of Docker Enterprise. Verify that the applied organization, team and user permissions in DTR are configured per the SSP. via UI: As a Docker EE Admin, navigate to "Organizations" and setup the list of organizations and teams within those organizations per the requirements set forth by the SSP. Navigate to "Users" and assign users to appropriate organizations, teams and repositories per the SSP. via CLI: Linux (requires curl and jq): As a Docker EE admin, execute the following commands on a machine that can communicate with the DTR management console: AUTHTOKEN=$(curl -kLsS -u <username>:<password> "https://[dtr_url]/auth/token" | jq -r .token) Execute the following command to give teams in an organization access to the appropriate repositories per the System Security Plan: curl -k -H "Authorization: Bearer $AUTHTOKEN" -X PUT "https://[dtr_url]/api/v0/repositories/[namespace]/[reponame]/teamAccess/[teamname]" Execute the following commands on a machine that can communicate with the UCP management console to add/remove members to/from the team with access to these repositories as appropriate per the SSP: AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) Add: curl -sk -H "Authorization: Bearer $AUTHTOKEN" -X PUT https://[ucp_url]/accounts/[orgNameOrID]/teams/[teamNameOrID]/members/[memberNameOrID] Remove: curl -sk -H "Authorization: Bearer $AUTHTOKEN" -X DELETE https://[ucp_url]/accounts/[orgNameOrID]/teams/[teamNameOrID]/members/[memberNameOrID]
This check only applies to the use of Docker Engine - Enterprise. Verify that no running containers have mounted sensitive host system directories. Refer to System Security Plan for list of sensitive folders. via CLI: Execute the following command as a trusted user on the host operating system: docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: Volumes={{ .Mounts }}' | grep -iv "ucp\|kubelet\|dtr" Verify in the output that no containers are running with mounted RW access to sensitive host system directories. If there are containers mounted with RW access to sensitive host system directories, this is a finding.
This fix only applies to the use of Docker Engine - Enterprise. Do not mount host sensitive directories on containers especially in read-write mode.
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Ensure the host's process namespace is not shared. via CLI: Linux: As a Docker EE Admin, execute the following command using a UCP client bundle: docker ps --all | grep -iv "ucp\|kube\|dtr" | awk '{print $1}' | xargs docker inspect --format '{{ .Id }}: PidMode={{ .HostConfig.PidMode }}' If PidMode = "host", it means the host PID namespace is shared with the container and this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Do not start a container with --pid=host argument. For example, do not start a container as below: docker run --interactive --tty --pid=host centos /bin/bash
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Ensure the host's IPC namespace is not shared. via CLI: Linux: As a Docker EE Admin, execute the following command using a UCP client bundle: docker ps --all | grep -iv "ucp\|kube\|dtr" | awk '{print $1}' | xargs docker inspect --format '{{ .Id }}: IpcMode={{ .HostConfig.IpcMode }}' If IpcMode="shareable", then the host's IPC namespace is shared and this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Do not start a container with --ipc=host argument. For example, do not start a container as below: docker run --interactive --tty --ipc=host centos /bin/bash
Verify this check on all Docker Engine - Enterprise nodes in the cluster. via CLI: Linux: Execute the following commands as a trusted user on the host operating system: Note: daemon.json file does not exist by default and must be created. Refer to https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file for all options. cat /etc/docker/daemon.json Verify that the "log-opts" object includes the "max-size" and "max-file" properties and that they are set accordingly in the output. If the "log-opts" object does not include the "max-size" and "max-file" properties and/or are not set accordingly, then this is a finding.
Execute this fix on all Docker Engine - Enterprise nodes in the cluster. via CLI: Linux: Execute the following commands as a trusted user on the host operating system: Open "/etc/docker/daemon.json" for editing. Set the "log-opts" object and its "max-size" and "max-file" properties accordingly. Save the file. Restart the Docker daemon.
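For illustration, a daemon.json fragment that configures log rotation (the values shown are examples; set them per the SSP and merge them with any existing settings in the file rather than replacing it):

{
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}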
via CLI: Linux: Execute the following commands as a trusted user on the host operating system: cat /etc/docker/daemon.json Verify that the "log-driver" property is set to one of the following: "syslog", "awslogs", "splunk", "gcplogs", "logentries" or "<plugin>" (where <plugin> is the naming of a third-party Docker logging driver plugin). Work with the SIEM administrator to determine if an alert is configured when audit data is no longer received as expected. If "log-driver" is not set, or if alarms are not configured in the SIEM, then this is a finding.
via CLI: Linux: As a trusted user on the host operating system, open the /etc/docker/daemon.json file for editing. If the file doesn't exist, it must be created. Set the "log-driver" property to one of the following: "syslog", "awslogs", "splunk", "gcplogs", "logentries" or "<plugin>" (where <plugin> is the naming of a third-party Docker logging driver plugin). Configure the "log-opts" object as required by the selected "log-driver". Save the file. Restart the docker daemon. Work with the SIEM administrator to configure an alert when no audit data is received from Docker.
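A minimal sketch assuming the "syslog" driver forwarding to a notional SIEM collector ([siem_host] is a placeholder; merge with any existing daemon.json settings):

{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://[siem_host]:514"
  }
}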
For Linux systems, verify that the host is configured to trust Docker Inc's repository GPG keys and that Docker Engine - Enterprise is installed from these repositories. If installing in an offline environment, validate that the Engine's package signature matches the signature published by Docker, Inc. Execute the following command to validate the Docker image signature digests of UCP and DTR: docker trust inspect docker/ucp:[ucp_version] docker/dtr:[dtr_version] Check that the "SignedTags" array for both images in the output includes a "Digest" field. If the "SignedTags" array does not contain a "Digest" field, this is a finding.
For Linux systems, add Docker Inc's official GPG key to the host using the operating system's respective package repository management tooling. If not using a package repository to install/update Docker Engine - Enterprise, verify that the Engine's package signature matches the signature published by Docker, Inc. When retrieving the UCP and DTR installation images, use Docker, Inc's officially managed image repositories as follows: docker.io/docker/ucp:[ucp_version] docker.io/docker/dtr:[dtr_version] If downloading the UCP and DTR images for offline installation, use only Docker, Inc's officially managed package links as follows: https://docs.docker.com/ee/ucp/admin/install/install-offline/ https://docs.docker.com/ee/dtr/admin/install/install-offline/
This check only applies to the Docker Engine - Enterprise component of Docker Enterprise. via CLI: Linux: As a trusted user on the underlying host operating system, execute the following command: ps -ef | grep dockerd Ensure that the "--insecure-registry" parameter is not present. If it is present, then this is a finding.
This fix only applies to the Docker Engine - Enterprise component of Docker Enterprise. via CLI: Linux: As a trusted user on the underlying host operating system, edit the "/etc/docker/daemon.json" file and set the "insecure-registries" property to an empty array. If the daemon.json file doesn't exist, it must be created. Restart the Docker daemon.
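For illustration, the resulting daemon.json entry (merged with any existing settings) would contain:

{
  "insecure-registries": []
}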
This check only applies to the Docker Engine - Enterprise component of Docker Enterprise and only when it is used on a Linux host operating system. via CLI: Linux: As a trusted user on the underlying host operating system, execute the following command: docker info | grep -e "^Storage Driver:\s*aufs\s*$" If the Storage Driver setting is "aufs", then this is a finding. If the above command returns no values, this is not a finding.
This fix only applies to the Docker Engine - Enterprise component of Docker Enterprise and only when it is used on a Linux host operating system. via CLI: Linux: As a trusted user on the underlying host operating system, edit the "/etc/docker/daemon.json" file and set the "storage-driver" property to a value that is not "aufs". If the daemon.json file does not exist, it must be created. Restart the Docker daemon.
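For illustration, one compliant setting ("overlay2" is shown as a commonly supported alternative; choose a driver appropriate for the host per Docker's storage driver documentation):

{
  "storage-driver": "overlay2"
}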
This check only applies to the Docker Engine - Enterprise component of Docker Enterprise and only when it is not being operated as part of a UCP cluster. via CLI: Linux: As a trusted user on the underlying host operating system, execute the following command: ps -ef | grep dockerd Ensure that the "--userland-proxy" parameter is set to "false". If it is not, then this is a finding.
This fix only applies to the Docker Engine - Enterprise component of Docker Enterprise and only when it is not being operated as part of a UCP cluster. via CLI: Linux: As a trusted user on the underlying host operating system, edit the "/etc/docker/daemon.json" file and set the "userland-proxy" property to a value of "false". Restart the Docker daemon.
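The corresponding daemon.json fragment (merged with any existing settings) would be:

{
  "userland-proxy": false
}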
This check only applies to the Docker Engine - Enterprise component of Docker Enterprise. via CLI: Linux: As a trusted user on the underlying host operating system, execute the following command: docker version --format '{{ .Server.Experimental }}' Ensure that the "Experimental" property is set to "false". If it is not, then this is a finding.
This fix only applies to the Docker Engine - Enterprise component of Docker Enterprise. via CLI: Linux: As a trusted user on the underlying host operating system, edit the "/etc/docker/daemon.json" file and set the "experimental" property to a value of "false". If the daemon.json file doesn't exist, it must be created. Restart the Docker daemon.
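The corresponding daemon.json fragment (merged with any existing settings) would be:

{
  "experimental": false
}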
Check that UCP has been integrated with a trusted certificate authority (CA). via UI: In the UCP web console, navigate to "Admin Settings" | "Certificates" and click on the "Download UCP Server CA Certificate" link. Verify that the contents of the downloaded "ca.pem" file match that of the trusted CA certificate. via CLI: Linux: Execute the following command and verify the certificate chain in the output is valid and matches that of the trusted CA: echo "" | openssl s_client -connect [ucp_url]:443 | openssl x509 -noout -text If the certificate chain does not match the chain as defined by the System Security Plan, then this is a finding.
This fix only applies to the UCP component of Docker Enterprise. Integrate UCP with a trusted certificate authority (CA). via UI: In the UCP web console, navigate to "Admin Settings" | "Certificates". Fill in (or click on the "Upload" links) the "CA Certificate" field with the contents of the external public CA certificate. Assuming the user generated a server certificate from that CA for UCP, also fill in the "Server Certificate" and "Private Key" fields with the contents of the public/private certificates respectively. The "Server Certificate" field must include both the UCP server certificate and any intermediate certificates. Click on the "Save" button. If DTR was previously integrated with this UCP cluster, execute a "dtr reconfigure" command as a superuser on one of the UCP Manager nodes in the cluster to re-configure DTR with the updated UCP certificates. via CLI: Linux: As a superuser, execute the following commands on each UCP Manager node in the cluster and in the directory where keys and certificates are located: Create a container that attaches to the same volume where certificates are stored: docker create --name replace-certs -v ucp-controller-server-certs:/data busybox Copy keys and certificates to the container's volumes: docker cp cert.pem replace-certs:/data/cert.pem docker cp ca.pem replace-certs:/data/ca.pem docker cp key.pem replace-certs:/data/key.pem Remove the container, since it is no longer needed: docker rm replace-certs Restart the ucp-controller container: docker restart ucp-controller If DTR was previously integrated with this UCP cluster, execute a "dtr reconfigure" command as a superuser on one of the UCP Manager nodes in the cluster to re-configure DTR with the updated UCP certificates.
Check that DTR has been integrated with a trusted certificate authority (CA). via UI: In the DTR web console, navigate to "System" | "General" and click on the "Show TLS settings" link in the "Domain & Proxies" section. Verify the certificate chain in "TLS Root CA" box is valid and matches that of the trusted CA. via CLI: Linux: Execute the following command and verify the certificate chain in the output is valid and matches that of the trusted CA: echo "" | openssl s_client -connect [dtr_url]:443 | openssl x509 -noout -text If the certificate chain in the output is not valid and does not match that of the trusted CA, then this is a finding.
This fix only applies to the DTR component of Docker Enterprise. Integrate DTR with a trusted CA. via UI: In the DTR web console, navigate to "System" | "General" and click on the "Show TLS Settings" link in the "Domain & Proxies" section. Fill in the "TLS Root CA" field with the contents of the external public CA certificate. Assuming the user generated a server certificate from that CA for DTR, also fill in the "TLS Certificate Chain" and "TLS Private Key" fields with the contents of the public/private certificates respectively. The "TLS Certificate Chain" field must include both the DTR server certificate and any intermediate certificates. Click on the "Save" button. via CLI: Linux: Execute the following command as a superuser on one of the UCP Manager nodes in the cluster: docker run -it --rm docker/dtr:[dtr_version] reconfigure --dtr-ca "$(cat [ca.pem])" --dtr-cert "$(cat [dtr_cert.pem])" --dtr-key "$(cat [dtr_private_key.pem])"
Verify that admins and users are not allowed to schedule containers on manager nodes and DTR nodes. via UI: As a Docker EE Admin, navigate to "Admin Settings" | "Scheduler" in the UCP management console. Verify that the "Allow administrators to deploy containers on UCP managers or nodes running DTR" and "Allow users to schedule on all nodes, including UCP managers and DTR nodes" options are both unchecked. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands on a machine that can communicate with the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml Look for the "enable_admin_ucp_scheduling" entry under the "[scheduling_configuration]" section in the output, and verify that it is set to "false". If "enable_admin_ucp_scheduling" is not set to "false", this is a finding. Execute the following command: curl -sk -H "Authorization: Bearer $AUTHTOKEN" "https://[ucp_url]/collectionGrants?subjectType=all&expandUser=true&showPaths=true" Ensure a Grant for the "Scheduler" role against the "/" collection for the "docker-datacenter" organization does not exist in the output. If it does, then this is a finding.
This fix only applies to the UCP component of Docker Enterprise. Prevent admins and users from being able to schedule containers on manager nodes and DTR nodes. via UI: As a Docker EE Admin, navigate to "Admin Settings" | "Scheduler" in the UCP management console. Uncheck both the "Allow administrators to deploy containers on UCP managers or nodes running DTR" and "Allow users to schedule on all nodes, including UCP managers and DTR nodes" options. Click "Save". via CLI: Linux: As a Docker EE Admin, execute the following commands on a machine that can communicate with the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator: AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml > ucp-config.toml Open the "ucp-config.toml" file, set the "enable_admin_ucp_scheduling" entry under the "[scheduling_configuration]" section to "false". Save the file. Execute the following commands to update UCP with the new configuration: curl -sk -H "Authorization: Bearer $AUTHTOKEN" --upload-file ucp-config.toml https://[ucp_url]/api/ucp/config-toml Delete the Grant for the "Scheduler" role against the "/" collection for the "docker-datacenter" organization by executing the following command: curl -sk -H "Authorization: Bearer $AUTHTOKEN" -X DELETE https://[ucp_url]/collectionGrants/[subjectID]/[objectID]/[roleID]
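For illustration, the relevant portion of the "ucp-config.toml" file after this change:

[scheduling_configuration]
  enable_admin_ucp_scheduling = false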
This check only applies to the DTR component of Docker Enterprise. Verify that the "Create repository on push" option is disabled in DTR: via UI: As a Docker EE Admin, navigate to "System" | "General" in the DTR management console. Verify that the "Create repository on push" slider is turned off. via CLI: Linux (requires curl and jq): AUTHTOKEN=$(curl -sk -u <username>:<password> "https://[dtr_url]/auth/token" | jq -r .token) curl -k -H "Authorization: Bearer $AUTHTOKEN"" -X GET ""https://[dtr_url]/api/v0/meta/settings" Look for the "createRepositoryOnPush" field in the output and verify that it is set to "false". If it is not, then this is a finding.
This fix only applies to the DTR component of Docker Enterprise. Disable the "Create repository on push" option in DTR: via UI: As a Docker EE Admin, navigate to "System" | "General" in the DTR management console. Click the "Create repository on push" slider to disable this capability. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine with connectivity to the DTR management console: AUTHTOKEN=$(curl -sk -u <username>:<password> "https://[dtr_url]/auth/token" | jq -r .token) curl -k -H "Authorization: Bearer $AUTHTOKEN" -X POST -d '{"createRepositoryOnPush":true}' -H 'Content-Type: application/json' "https://[dtr_url]/api/v0/meta/settings"
This check only applies to the UCP component of Docker Enterprise. Verify that usage and API analytics tracking is disabled in UCP: via UI: As a Docker EE Admin, navigate to "Admin Settings" | "Usage" in the UCP management console. Verify that the "Enable hourly usage reporting" and "Enable API and UI tracking" options are both unchecked. If either box is checked, this is a finding. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands on a machine that can communicate with the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml Look for the "disable_usageinfo" and "disable_tracking" entries under the "[tracking_configuration]" section in the output, and verify that they are both set to "true". If they are not, then this is a finding.
This fix only applies to the UCP component of Docker Enterprise. Disable usage and API analytics tracking in UCP: via UI: As a Docker EE Admin, navigate to "Admin Settings" | "Usage" in the UCP management console. Uncheck both the "Enable hourly usage reporting" and "Enable API and UI tracking" options. Click "Save". via CLI: Linux: As a Docker EE Admin, execute the following commands on a machine that can communicate with the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator: AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml > ucp-config.toml Open the "ucp-config.toml" file. Set both the "disable_usageinfo" and "disable_tracking" entries under the "[tracking_configuration]" section to "true". Save the file. Execute the following commands to update UCP with the new configuration: curl -sk -H "Authorization: Bearer $AUTHTOKEN" --upload-file ucp-config.toml https://[ucp_url]/api/ucp/config-toml
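For illustration, the relevant portion of the "ucp-config.toml" file after this change:

[tracking_configuration]
  disable_usageinfo = true
  disable_tracking = true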
This check only applies to the DTR component of Docker Enterprise. Verify that usage and API analytics tracking is disabled in DTR: via UI: As a Docker EE Admin, navigate to "System" | "General" in the DTR management console. Verify that the "Send data" option is disabled. via CLI: Linux (requires curl and jq): AUTHTOKEN=$(curl -sk -u <username>:<password> "https://[dtr_url]/auth/token" | jq -r .token) curl -k -H "Authorization: Bearer $AUTHTOKEN" -X GET "https://[dtr_url]/api/v0/meta/settings" Look for the "reportAnalytics" field in the output and verify that it is set to "false". If it is not, then this is a finding.
This fix only applies to the DTR component of Docker Enterprise. Disable usage and API analytics tracking in DTR: via UI: As a Docker EE Admin, navigate to "System" | "General" in the DTR management console. Click the "Send data" slider to disable this capability. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine with connectivity to the DTR management console: AUTHTOKEN=$(curl -sk -u <username>:<password> "https://[dtr_url]/auth/token" | jq -r .token) curl -k -H "Authorization: Bearer $AUTHTOKEN" -X POST -d '{"reportAnalytics":false}' -H 'Content-Type: application/json' "https://[dtr_url]/api/v0/meta/settings"
This check only applies to the use of Docker Engine - Enterprise on the Ubuntu host operating system and should be executed on all nodes in a Docker Enterprise cluster. Verify that all running containers include a valid AppArmor profile: via CLI: Linux: Execute the following command as a trusted user on the host operating system: docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: AppArmorProfile={{ .AppArmorProfile }}' Verify that all containers include a valid AppArmor Profile in the output. If they do not, then this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on the Ubuntu host operating system where AppArmor is in use and should be executed on all nodes in a Docker Enterprise cluster. Run all containers using an AppArmor profile: via CLI: Linux: Install AppArmor (if not already installed). Create/import an AppArmor profile (if not using the "docker-default" profile). Put the profile in "enforcing" mode. Execute the following command as a trusted user on the host operating system to run the container using the customized AppArmor profile: docker run [options] --security-opt="apparmor:[PROFILENAME]" [image] [command] If using the "docker-default" default profile, run the container using the following command instead: docker run [options] --security-opt apparmor=docker-default [image] [command]
This check only applies to the use of Docker Engine - Enterprise on either the Red Hat Enterprise Linux or CentOS host operating systems where SELinux is in use and should be executed on all nodes in a Docker Enterprise cluster. Verify that the appropriate security options are configured for all running containers: via CLI: Linux: Execute the following command as a trusted user on the host operating system: docker ps --quiet --all | xargs docker inspect --format '{{ .Name }}: SecurityOpt={{ .HostConfig.SecurityOpt }}' | grep -iv "ucp\|kube\|dtr" If SecurityOpt=[label=disable], then this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on either the Red Hat Enterprise Linux or CentOS host operating systems where SELinux is in use and should be executed on all nodes in a Docker Enterprise cluster. Start the Docker daemon with SELinux mode enabled. Run Docker containers using appropriate security options. via CLI: Linux: Set the SELinux state. Set the SELinux policy. Create or import a SELinux policy template for Docker containers. Start the Docker daemon with SELinux mode enabled by either adding the "--selinux-enabled" flag to the systemd drop-in file or by setting the "selinux-enabled" property to "true" in the "/etc/docker/daemon.json" daemon configuration file. Restart the Docker daemon.
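For illustration, the daemon.json fragment that enables SELinux mode (merge with any existing settings in the file):

{
  "selinux-enabled": true
}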
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system and should be executed on all nodes in a Docker Enterprise cluster. Verify that the added and dropped Linux Kernel Capabilities are in line with the ones needed for container processes for each container instance as defined in the SSP. via CLI: Linux: Execute the following command as a trusted user on the host operating system: docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: CapAdd={{ .HostConfig.CapAdd }} CapDrop={{ .HostConfig.CapDrop }}' If Linux Kernel Capabilities exceed what is defined in the SSP, then this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system and should be executed on all nodes in a Docker Enterprise cluster. Document the required Kernel Capabilities for each container in the SSP. Only add needed capabilities when running containers. via CLI: Linux: Execute the below command to add needed capabilities: docker run --cap-add={"Capability 1","Capability 2"} [image] [command] Execute the below command to drop unneeded capabilities: docker run --cap-drop={"Capability 1","Capability 2"} [image] [command] The user may also choose to drop all capabilities and add only the needed ones per the SSP: docker run --cap-drop=all --cap-add={"Capability 1","Capability 2"} [image] [command]
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system and should be executed on all nodes in a Docker Enterprise cluster. Verify that no containers are running with the --privileged flag. The --privileged flag provides full kernel capabilities. Capabilities must be specified in the System Security Plan (SSP) rather than allowing full privileges. via CLI: Linux: Execute the following command as a trusted user on the host operating system: docker ps --quiet --all | grep -iv "ucp\|kube\|dtr" | awk '{print $1}' | xargs docker inspect --format '{{ .Id }}: Privileged={{ .HostConfig.Privileged }}' Verify in the output that no containers are running with the --privileged flag. If there are, then this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system and should be executed on all nodes in a Docker Enterprise cluster. Do not run containers with the --privileged flag. For example, do not start a container as below: docker run --interactive --tty --privileged centos /bin/bash
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system and should be executed on all nodes in a Docker Enterprise cluster. Verify that no running containers have a process for SSH server. via CLI: for i in $(docker ps -qa); do echo $i; docker exec $i ps -el | grep -i sshd;done Container not running errors are not a finding. If running containers have a process for SSH server, this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system and should be executed on all nodes in a Docker Enterprise cluster. Remove SSH packages from all Docker base images in use in the user's environment.
Ensure that mapped ports are the ones that are needed by the containers. This check should be executed on all nodes in a Docker Enterprise cluster. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --quiet | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}' Review the list and ensure that the ports mapped are the ones that are really needed for the container. If there are any mapped ports that aren't documented by the System Security Plan (SSP), then this is a finding.
Document the ports required for each container in the SSP. Fix the Dockerfile of the container image to expose only needed ports by the containerized application. Ignore the list of ports defined in the Dockerfile by NOT using -P (UPPERCASE) or --publish-all flag when starting the container. Use the -p (lowercase) or --publish flag to explicitly define the ports needed for a particular container instance. Example: docker run --interactive --tty --publish 5000 --publish 5001 --publish 5002 centos /bin/bash
Ensure the host's network namespace is not shared. This check should be executed on all nodes in a Docker Enterprise cluster. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --all | grep -iv "ucp\|kube\|dtr" | awk '{print $1}' | xargs docker inspect --format '{{ .Id }}: NetworkMode={{ .HostConfig.NetworkMode }}' If the above command returns NetworkMode=host, this is a finding.
Do not pass the --net=host or --network=host option when starting the container. For example, when executing docker run, do not use the --net=host or --network=host arguments. A more detailed reference for the docker run command can be found at https://docs.docker.com/engine/reference/run/.
Ensure memory limits are in place for all containers. This check should be executed on all nodes in a Docker Enterprise cluster. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: Memory={{ .HostConfig.Memory }}' If the above command returns 0, it means the memory limits are not in place and this is a finding.
Document container memory requirements in the System Security Plan (SSP). Run the container with only as much memory as required. Always run the container using the --memory argument. For example, run a container as below: docker run --interactive --tty --memory 256m centos /bin/bash In the above example, the container is started with a memory limit of 256 MB. Note: The output of the below command would return values in scientific notation if memory limits are in place. docker inspect --format='{{.Config.Memory}}' 7c5a2d4c7fe0 For example, if the memory limit is set to 256 MB for the above container instance, the output of the above command would be 2.68435456e+08 and NOT 256m. Convert this value using a scientific calculator or programmatic methods.
Ensure CPU shares are in place for all containers. This check should be executed on all nodes in a Docker Enterprise cluster. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: CpuShares={{ .HostConfig.CpuShares }}' If the above command returns 0 or 1024, it means the CPU shares are not in place and this is a finding.
Document container CPU requirements in the System Security Plan (SSP). Manage the CPU shares between containers by starting each container with the --cpu-shares argument. For example, run a container as below: docker run --interactive --tty --cpu-shares 512 [image] [command] In the above example, the container is started with half of the default weight of 1024, so under CPU contention it receives half the CPU time of a container running with the default shares. Note: Every new container has 1024 CPU shares by default; however, this value is shown as 0 by the command in the check content. Alternatively: 1. Navigate to the /sys/fs/cgroup/cpu/system.slice/ directory. 2. Determine the container instance ID using docker ps. 3. Inside the directory from step 1, there will be a directory named docker-<Instance ID>.scope, for example docker-4acae729e8659c6be696ee35b2237cc1fe4edd2672e9186434c5116e1a6fbed6.scope. Navigate to this directory. 4. Find the file named cpu.shares and execute cat cpu.shares. This always shows the effective CPU share value, so even if no CPU shares were configured using the -c or --cpu-shares argument in the docker run command, this file will contain 1024. Treat 1024 as 100% and derive the value to set for the desired relative share; for example, use 512 for 50% and 256 for 25%.
Ensure all containers' root filesystem is mounted as read only. This check should be executed on all nodes in a Docker Enterprise cluster. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --quiet --all | xargs -L 1 docker inspect --format '{{ .Id }}: ReadonlyRootfs={{ .HostConfig.ReadonlyRootfs }}' If ReadonlyRootfs=false, it means the container's root filesystem is writable and this is a finding.
Add a --read-only flag at a container's runtime to enforce the container's root filesystem to be mounted as read only. docker run <Run arguments> --read-only <Container Image Name or ID> <Command> Enabling the --read-only option at a container's runtime should be used by administrators to force a container's executable processes to only write container data to explicit storage locations during the container's runtime. Examples of explicit storage locations during a container's runtime include, but are not limited to: 1. Use the --tmpfs option to mount a temporary file system for non-persistent data writes. Example: docker run --interactive --tty --read-only --tmpfs "/run" --tmpfs "/tmp" [image] [command] 2. Enabling Docker rw mounts at a container's runtime to persist container data directly on the Docker host filesystem. Example: docker run --interactive --tty --read-only -v /opt/app/data:/run/app/data:rw [image] [command] 3. Utilizing Docker shared-storage volume plugins for Docker data volume to persist container data. docker volume create -d convoy --opt o=size=20GB my-named-volume docker run --interactive --tty --read-only -v my-named-volume:/run/app/data [image] [command]
Ensure host devices are not directly exposed to containers. Verify that the host device needs to be accessed from within the container and the permissions required are correctly set. This check should be executed on all nodes in a Docker Enterprise cluster. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: Devices={{ .HostConfig.Devices }}' The above command lists out each device with below information: - CgroupPermissions - For example, rwm - PathInContainer - Device path within the container - PathOnHost - Device path on the host If Devices=[], or Devices=<no value>, this is not a finding. If Devices are listed and the host device is not documented and approved in the System Security Plan (SSP), this is a finding.
Do not directly expose the host devices to containers. If host devices must be exposed to a container, use the correct set of permissions. For example, do not start a container as below: docker run --interactive --tty --device=/dev/tty0:/dev/tty0:rwm --device=/dev/temp_sda:/dev/temp_sda:rwm centos bash Instead, share the host device with the correct permissions: docker run --interactive --tty --device=/dev/tty0:/dev/tty0:rw --device=/dev/temp_sda:/dev/temp_sda:r centos bash
Ensure mount propagation mode is not set to shared or rshared. This check should be executed on all nodes in a Docker Enterprise cluster. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --all | grep -iv "ucp\|kube\|dtr" | awk '{print $1}' | xargs docker inspect --format '{{ .Id }}: Propagation={{range $mnt := .Mounts}} {{json $mnt.Propagation}} {{end}}' If Propagation=shared or Propagation=rshared, then this is a finding.
Do not mount volumes in shared mode propagation. For example, do not start container as below: docker run <Run arguments> --volume=/hostPath:/containerPath:shared <Container Image Name or ID> <Command>
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system and should be executed on all nodes in a Docker Enterprise cluster. Ensure the host's UTS namespace is not shared. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: UTSMode={{ .HostConfig.UTSMode }}' If the above command returns host, it means the host UTS namespace is shared with the container and this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Do not start a container with --uts=host argument. For example, do not start a container as below: docker run --rm --interactive --tty --uts=host rhel7.2
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system and should be executed on all nodes in a Docker Enterprise cluster. Ensure the default seccomp profile is not disabled. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --all | grep -iv "ucp\|kube\|dtr" | awk '{print $1}' | xargs docker inspect --format '{{ .Id }}: SecurityOpt={{ .HostConfig.SecurityOpt }}' If the SecurityOpt output for a container includes seccomp=unconfined (or seccomp:unconfined), then the container is running without any seccomp profile and this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system. By default, seccomp profiles are enabled. It is not necessary to do anything unless the user wants to modify the seccomp profile. Do not pass unconfined flags to run a container without the default seccomp profile. Refer to seccomp documentation for details. https://docs.docker.com/engine/security/seccomp/
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system and should be executed on all nodes in a Docker Enterprise cluster. Ensure docker exec commands are not used with the privileged option. via CLI: Linux: As a trusted user on the host operating system, use the below command to filter out docker exec commands that used the --privileged option. sudo ausearch -k docker | grep exec | grep privileged If there are any in the output, then this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Do not use --privileged option in docker exec command. A reference for the docker exec command can be found at https://docs.docker.com/engine/reference/commandline/exec/.
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system and should be executed on all nodes in a Docker Enterprise cluster. Ensure docker exec commands are not used with the user option. via CLI: Linux: As a trusted user on the host operating system, use the below command to filter out docker exec commands that used the --user option. sudo ausearch -k docker | grep exec | grep user If there are any in the output, then this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Do not use --user option in docker exec command. A reference for the docker exec command can be found at https://docs.docker.com/engine/reference/commandline/exec/.
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system and should be executed on all nodes in a Docker Enterprise cluster. Ensure cgroup usage is confirmed. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: CgroupParent={{ .HostConfig.CgroupParent }}' If the cgroup is blank, the container is running under default docker cgroup. If the containers are found to be running under cgroup other than the one that is documented in the System Security Plan (SSP), then this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Do not use --cgroup-parent option in docker run command unless needed. If required, document cgroup usage in the SSP. A reference for the docker run command can be found at https://docs.docker.com/engine/reference/run/.
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system and should be executed on all nodes in a Docker Enterprise cluster. Ensure all containers are restricted from acquiring additional privileges. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --quiet --all | xargs -L 1 docker inspect --format '{{ .Id }}: SecurityOpt={{ .HostConfig.SecurityOpt }}' The above command returns the security options currently configured for the running containers. If the SecurityOpt setting for a container does not include the no-new-privileges flag, this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Start the containers as below: docker run --rm -it --security-opt=no-new-privileges <image> A reference for the docker run command can be found at https://docs.docker.com/engine/reference/run/.
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system and should be executed on all nodes in a Docker Enterprise cluster. Ensure the host's user namespace is not shared with containers. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: UsernsMode={{ .HostConfig.UsernsMode }}' Ensure that it does not return any value for UsernsMode. If it returns a value of host, it means the host user namespace is shared with the containers and this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Do not share user namespaces between host and containers. For example, do not run a container as below: docker run --rm -it --userns=host <image>
This check should be executed on all nodes in a Docker Enterprise cluster. via CLI: As a Docker EE Admin, execute the following command using a UCP client bundle: docker ps --all | grep -iv "ucp\|kube\|dtr" | awk '{print $1}' | xargs docker inspect --format '{{ .Id }}: Volumes={{ .Mounts }}' | grep -i "docker.sock\|docker_engine" If the Docker socket is mounted inside containers, this is a finding.
When using the -v/--volume flags to mount volumes to containers in a docker run command, do not use docker.sock as a volume. A reference for the docker run command can be found at https://docs.docker.com/engine/reference/run/.
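Illustrative example (the host path, container path, and image are placeholders): mount only application data into containers, for example: docker run -d -v /srv/app-data:/data <image> A command such as docker run -d -v /var/run/docker.sock:/var/run/docker.sock <image> exposes the Docker daemon socket inside the container and would be a finding under the associated check.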
This check should be executed on all nodes in a Docker Enterprise cluster. Verify that no running containers are mapping host port numbers below 1024. via CLI: Linux: Execute the following command as a trusted user on the host operating system: docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}' Review the list and ensure that container ports are not mapped to host port numbers below 1024. If they are, then this is a finding.
This fix only applies to the use of Docker Engine - Enterprise. Do not map the container ports to privileged host ports when starting a container. Also, ensure that there is no such container to host privileged port mapping declarations in the Dockerfile.
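Illustrative example (the port numbers and image are placeholders): map container ports to unprivileged host ports, for example: docker run -d -p 8080:80 nginx Here container port 80 is published on host port 8080; publishing it on a host port below 1024 (for example, -p 80:80) would be a finding under the associated check.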
Ensure incoming container traffic is bound to a specific host interface. This check should be executed on all nodes in a Docker Enterprise cluster. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle to list all the running instances of containers and their port mapping: docker ps --quiet | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}' Review the list and ensure that the exposed container ports are tied to a particular interface and not to the wildcard IP address - 0.0.0.0. If they are, then this is a finding. For example, if the above command returns the output below, the container can accept connections on any host interface on the specified port 49153 and this is a finding. Ports=map[443/TCP:<nil> 80/TCP:[map[HostPort:49153 HostIp:0.0.0.0]]] However, if the exposed port is tied to a particular interface on the host as below, then this recommendation is configured as desired and is compliant. Ports=map[443/TCP:<nil> 80/TCP:[map[HostIp:10.2.3.4 HostPort:49153]]]
Bind the container port to a specific host interface on the desired host port. Example: docker run --detach --publish 10.2.3.4:49153:80 nginx In the example above, container port 80 is bound to host port 49153 on the 10.2.3.4 interface and would accept incoming connections only on that interface.
Verify that SAML integration is enabled and properly configured in the UCP Admin Settings. via UI: In the UCP web console, navigate to "Admin Settings" | "Authentication & Authorization" and verify "SAML Enabled" is set to "Yes" and that it is properly configured. If SAML authentication is not enabled, this is a finding. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine with connectivity to the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml Verify that the "samlEnabled" entry under the "[auth]" section is set to "true". If the "samlEnabled" entry under the "[auth]" section is not set to "true", then this is a finding.
Enable and configure SAML integration in the UCP Admin Settings. via UI: In the UCP web console, navigate to "Admin Settings" | "Authentication & Authorization" and set "SAML Enabled" to "Yes" and properly configure the SAML settings. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine with connectivity to the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml > ucp-config.toml Open the "ucp-config.toml" file. Set the "samlEnabled" entry under the "[auth]" section to "true". Set the "idpMetadataURL" and "spHost" entries under the "[auth.saml]" to appropriate values per the UCP configuration options as documented at https://docs.docker.com/ee/ucp/admin/configure/ucp-configuration-file/#authsaml-optional. Save the file. Execute the following commands to update UCP with the new configuration: curl -sk -H "Authorization: Bearer $AUTHTOKEN" --upload-file ucp-config.toml https://[ucp_url]/api/ucp/config-toml
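For reference, a minimal sketch of the resulting entries in "ucp-config.toml" (the metadata URL and spHost values are placeholders and must be replaced with organizational values):
[auth]
  samlEnabled = true
[auth.saml]
  idpMetadataURL = "https://idp.example.com/metadata"
  spHost = "ucp.example.com:443"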
via CLI: Execute the following command from within the directory in which the UCP client bundle is located. (Linux) openssl x509 -noout -text -in cert.pem |grep "Subject\|Issuer" Verify that the Subject and Issuer output matches that which is defined in the SSP. If the Subject and Issuer do not match what is documented in the SSP, this is a finding.
via GUI: As any user with access to UCP, within the UCP web console, click on the username dropdown in the top-left corner, and select "My Profile". On the "Client Bundles" tab, select the "New Client Bundle" dropdown and click "Add Existing Client Bundle". Provide an appropriate "Label", and in the "Public Key" field, paste the public key of the certificate chain provided to that user by the organization. Click "Confirm" to save the bundle. via CLI: Linux (requires curl): As a Docker EE Admin, execute the following commands using a client bundle and from a machine with connectivity to the UCP management console. curl --cacert ca.pem --cert cert.pem --key key.pem -X POST -H "Content-Type: application/json" -d '{"certificates":[{"cert":"[encoded_PEM_for_cert]","label":"[cert_label]"}],"label":"[key_description]","publicKey":"[encoded_PEM_for_public_key]"}' https://[ucp_url]/api/accounts/[account_name_or_id]/publickeys
Ensure swarm manager is run in auto-lock mode. via CLI: Linux: As a Docker EE Admin, follow the steps below using a Universal Control Plane (UCP) client bundle: Run the below command. If it outputs the key, it means swarm was initialized with the --autolock flag. docker swarm unlock-key If the output is "no unlock key is set", it means that swarm was NOT initialized with the --autolock flag and this is a finding.
If initializing swarm, use the below command. docker swarm init --autolock If setting --autolock on an existing swarm manager node, use the below command. docker swarm update --autolock
Ensure Docker's secret management commands are used for managing secrets in a Swarm cluster. Refer to the System Security Plan (SSP) and verify that it includes documented processes for using Docker secrets commands to manage sensitive data that can be stored in key/value pairs. Examples include API tokens, database connection strings and credentials, SSL certificates, and the like. If the SSP does not have this documented, then this is a finding.
Update the SSP so that it includes documented processes for using Docker secrets commands to manage sensitive data that can be stored in key/value pairs. Examples include API tokens, database connection strings and credentials, SSL certificates, and the like. Follow docker secret documentation and use it to manage secrets effectively. This documentation can be found at https://docs.docker.com/engine/swarm/secrets/.
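Illustrative sketch of the documented workflow (the secret name, value, service name, and image are placeholders):
echo "s3cretValue" | docker secret create db_password -
docker service create --name web --secret db_password <image>
Services granted the secret read it from /run/secrets/db_password inside the container rather than from an environment variable or image layer.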
Verify that the "Lifetime Minutes" and "Renewal Threshold Minutes" Login Session Controls in the Universal Control Plane (UCP) Admin Settings to "10" and "0" respectively. via UI: In the UCP web console, navigate to "Admin Settings" | "Authentication & Authorization" and verify the "Lifetime Minutes" field is set to "10" and "Renewal Threshold Minutes" field is set to "0". If they are not, then this is a finding. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine with connectivity to the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml Look for the "lifetime_minutes" and "renewal_threshold_minutes" entries under the "[auth.sessions]" section in the output, and verify that the "lifetime_minutes" field is set to "10" and the "renewal_threshold_minutes" field is set to "0". If they are not, then this is a finding.
Set the "Lifetime Minutes" and "Renewal Threshold Minutes" Login Session Controls in the UCP Admin Settings to "10" and "0" respectively. via UI: In the UCP web console, navigate to "Admin Settings" | "Authentication & Authorization" and set the "Lifetime Minutes" and "Renewal Threshold Minutes" fields to "10" and "0" respectively. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine with connectivity to the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml > ucp-config.toml Open the "ucp-config.toml" file, set the "lifetime_minutes" and "renewal_threshold_minutes" entries under the "[auth.sessions]" section to "10" and "0" respectively. Execute the following commands to update UCP with the new configuration: curl -sk -H "Authorization: Bearer $AUTHTOKEN" --upload-file ucp-config.toml https://[ucp_url]/api/ucp/config-toml
Review System Security Plan (SSP) and identify applications that leverage configuration files and/or small amounts of user-generated data, ensure that data is stored in Docker Secrets or Kubernetes Secrets. Using a Universal Control Plane (UCP) client bundle, verify that secrets are in use by executing the following commands: docker secret ls Confirm containerized applications identified in SSP as utilizing Docker secrets have a corresponding secret configured. If the SSP requires Docker secrets be used but the containerized application does not use Docker secrets, this is a finding.
For all containerized applications that leverage configuration files and/or small amounts of user-generated data, store that data in Docker Secrets. All secrets should be created and managed using a UCP client bundle. A reference for the use of docker secrets can be found at https://docs.docker.com/engine/swarm/secrets/.
Ensure container health is checked at runtime. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle and ensure that all containers are reporting a health status: docker ps --quiet | xargs docker inspect --format '{{ .Id }}: Health={{ .State.Health.Status }}' If the Health value for any container is not "healthy", this is a finding.
Run the container using --health-cmd and the other parameters, or include the HEALTHCHECK instruction in the Dockerfiles. Example: docker run -d --health-cmd='stat /etc/passwd || exit 1' nginx
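Alternatively, an illustrative HEALTHCHECK instruction for a Dockerfile (the probe command is a placeholder and assumes curl is available in the image):
HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl -f http://localhost/ || exit 1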
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Ensure PIDs cgroup limit is used. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: PidsLimit={{ .HostConfig.PidsLimit }}' Ensure that PidsLimit is not set to 0 or -1. A PidsLimit of 0 or -1 means that any number of processes can be forked inside the container concurrently. If the PidsLimit is set to either 0 or -1 then this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Use --pids-limit flag while launching the container with an appropriate value. Example: docker run -it --pids-limit 100 <Image_ID> In the above example, the number of processes allowed to run at any given time is set to 100. After a limit of 100 concurrently running processes is reached, docker would restrict any new process creation.
Check that the "Per User Limit" Login Session Control in the UCP Admin Settings are set according to the System Security Plan but not set to "0". via UI: In the UCP web console, navigate to "Admin Settings" | "Authentication & Authorization" and verify the "Per User Limit" field is set according to the settings described in the SSP. If the per user limit setting is not set to the value defined in the SSP or is set to "0", this is a finding. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine with connectivity to the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml Look for the "per_user_limit" entry under the "[auth.sessions]" section in the output, and verify that it is set according to the requirements of this control. If the "per_user_limit" entry under the "[auth.sessions]" section in the output is not set according to the value defined in the SSP, or if the per user limit is set to "0", then this is a finding.
Set the "Per User Limit" Login Session Control in the UCP Admin Settings per the requirements set forth by the SSP but not "0". via UI: In the UCP web console, navigate to "Admin Settings" | "Authentication & Authorization" and set the "Per User Limit" field according to the SSP. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine with connectivity to the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml > ucp-config.toml Open the "ucp-config.toml" file, set the "per_user_limit" entry under the "[auth.sessions]" section according to the SSP but not 0. Save the file. Execute the following commands to update UCP with the new configuration: curl -sk -H "Authorization: Bearer $AUTHTOKEN" --upload-file ucp-config.toml https://[ucp_url]/api/ucp/config-toml
Verify that all containers are running as non-root users. via CLI: As a Docker EE admin, execute the following command using a client bundle: docker ps -q -a | xargs docker inspect --format '{{ .Id }}: User={{ .Config.User }}' Ensure that a non-root username or user ID is returned for all containers in the output. If User is 0, root, or undefined, this is a finding.
Set a non-root user for all container images. Include the following line in all Dockerfiles where username or ID refers to the user that can be found in the container base image or one that is created as part of that same Dockerfile: USER [username/ID]
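Illustrative Dockerfile sketch (the base image and user name are placeholders, and the example assumes useradd is available in the base image):
FROM <base_image>
RUN useradd --system --no-create-home appuser
USER appuser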
via CLI: Linux: Execute the following commands as a trusted user on the host operating system: cat /etc/docker/daemon.json | grep -i log-driver Verify that the "log-driver" property is set to one of the following: "syslog", "awslogs", "splunk", "gcplogs", "logentries" or "<plugin>" (where <plugin> is the naming of a third-party Docker logging driver plugin). If "log-driver" is not set, then this is a finding.
via CLI: Linux: As a trusted user on the host operating system, open the /etc/docker/daemon.json file for editing. If the file doesn't exist, it must be created. Set the "log-driver" property to one of the following: "syslog", "awslogs", "splunk", "gcplogs", "logentries" or "<plugin>" (where <plugin> is the naming of a third-party Docker logging driver plugin). Configure the "log-opts" object as required by the selected "log-driver". Save the file. Restart the docker daemon.
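For reference, a minimal sketch of a compliant "/etc/docker/daemon.json" (the driver shown is only one of the acceptable options listed above):
{
  "log-driver": "syslog"
}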
This check only applies to the Docker Engine - Enterprise component of Docker Enterprise. via CLI: Linux: Execute the following commands as a trusted user on the host operating system: cat /etc/docker/daemon.json Verify that the "log-opts" object includes the "max-size" and "max-file" properties and that they are set according to requirements specified in the SSP. If they are not configured according to values defined in the SSP, this is a finding.
This fix only applies to the Docker Engine - Enterprise component of Docker Enterprise. via CLI: Linux: Execute the following commands as a trusted user on the host operating system: Open "/etc/docker/daemon.json" for editing. If the file doesn't exist, it must be created. Set the "log-opts" object and its "max-size" and "max-file" properties according to values defined in the SSP. Save the file. Restart the Docker daemon.
via CLI: Linux: Execute the following commands as a trusted user on the host operating system: cat /etc/docker/daemon.json Verify that the "log-driver" property is set to one of the following: "syslog", "awslogs", "splunk", "gcplogs", "logentries" or "<plugin>" (where <plugin> is the naming of a third-party Docker logging driver plugin). Ask the sys admin to demonstrate how the log driver that is being used is configured to send log events to a log aggregation server or SIEM. If "log-driver" is not set and configured to send logs to an aggregation server or SIEM, then this is a finding.
via CLI: Linux: As a trusted user on the host operating system, open the /etc/docker/daemon.json file for editing. If the file doesn't exist, it must be created. Set the "log-driver" property to one of the following: "syslog", "awslogs", "splunk", "gcplogs", "logentries" or "<plugin>" (where <plugin> is the naming of a third-party Docker logging driver plugin). Configure the "log-opts" object as required by the selected "log-driver" to ensure log aggregation is configured. Save the file. Restart the docker daemon. Configure the selected log system to send Docker events to a log aggregation server or SIEM.
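For reference, a sketch of a "/etc/docker/daemon.json" that forwards events to a remote collector (the address is a placeholder for the organization's log aggregation server or SIEM):
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://siem.example.com:514"
  }
}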
Work with the SIEM administrator to determine if an alert is configured to alarm when audit storage space for Docker Engine - Enterprise nodes exceeds 75% usage. If there is no alert configured, this is a finding.
Work with the SIEM administrator to configure an alert when audit storage space exceeds 75% usage.
Work with the SIEM administrator to determine if an alert is configured to notify the SA and ISSO when audit failure events occur. If there is no alert configured, this is a finding.
Work with the SIEM administrator to create an alert to notify the SA and ISSO when audit failure events occur.
Work with the SIEM administrator to determine if an alert is configured to notify the ISSO/ISSM when unauthorized software is installed on Docker nodes. If there is no alert configured, this is a finding.
Work with the SIEM administrator to create an alert to notify the ISSO/ISSM when unauthorized software is installed on Docker nodes.
Verify that only needed ports are open on all running containers. via CLI: As a Docker EE admin, execute the following command using a client bundle: docker ps -q | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}' Review the list and ensure that the ports mapped are the ones really needed for the containers per the requirements set forth by the SSP. If ports are not documented and approved in the SSP, this is a finding.
Publish only needed ports for all container images and running containers per the requirements set forth by the SSP. Update Dockerfiles and set or remove any EXPOSE lines accordingly. To ignore exposed ports as defined by a Dockerfile during container start, do not pass the "-P/--publish-all" flag to the Docker commands. When publishing needed ports at container start, use the "-p/--publish" flag to explicitly define the ports that are needed.
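Illustrative example (the port number and image are placeholders): docker run -d --publish 8443:8443 <image> publishes only the single port the application requires, whereas starting the same image with -P would publish every port declared by an EXPOSE line in the Dockerfile.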
This check only applies to the UCP component of Docker Enterprise. Check that UCP is configured to only run signed images by applicable Orgs and Teams. via UI: In the UCP web console, navigate to "Admin Settings" | "Docker Content Trust" and verify that "Run only signed images" is checked. Verify that the Orgs and Teams that images must be signed by in the dropdown that follows matches that of your organizational policies. If "Run only signed images" box is not checked, this is a finding. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands on a machine that can communicate with the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml Look for the "require_content_trust" entry under the "[trust_configuration]" section in the output, and verify that it is set to "true". If require_content_trust is not set to true, this is a finding.
This fix only applies to the UCP component of Docker Enterprise. Enable Content Trust enforcement in UCP. via UI: In the UCP web console, navigate to "Admin Settings" | "Docker Content Trust" and check the box next to "Run only signed images". Set the appropriate Orgs and Teams that images must be signed by in the dropdown that follows to match that of the organizational policies. via CLI: Linux: As a Docker EE Admin, execute the following commands on a machine that can communicate with the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator: AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml > ucp-config.toml Open the "ucp-config.toml" file, set the "require_content_trust" entry under the "[trust_configuration]" section to "true". Save the file. Execute the following commands to update UCP with the new configuration: curl -sk -H "Authorization: Bearer $AUTHTOKEN" --upload-file ucp-config.toml https://[ucp_url]/api/ucp/config-toml
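For reference, a minimal sketch of the resulting section in "ucp-config.toml":
[trust_configuration]
  require_content_trust = true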
This check only applies to the UCP component of Docker Enterprise. Verify that all images sitting on a UCP cluster are signed. via CLI: Linux: As a Docker EE Admin, execute the following commands using a client bundle: docker trust inspect $(docker images | awk '{print $1 ":" $2}') Verify that all image tags in the output have valid signatures. If the images are not signed, this is a finding.
This fix only applies to the UCP component of Docker Enterprise. Pull and run only signed images on a UCP cluster. via CLI: Linux: When using a client bundle, set the "DOCKER_CONTENT_TRUST" environment variable to a value of "1" prior the execution of any of the following commands: docker push, docker build, docker create, docker pull and docker run.
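Illustrative example using the placeholder style of this guide:
export DOCKER_CONTENT_TRUST=1
docker pull [dtr_url]/[namespace]/[imageName]:[tag]
docker run --rm [dtr_url]/[namespace]/[imageName]:[tag]
With the variable set, pull and run operations fail if the requested image tag is not signed.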
This check only applies to the DTR component of Docker Enterprise. Check that image vulnerability scanning is enabled for all repositories: via UI: As a Docker EE Admin, navigate to "System" | "Security" in the DTR management console. Verify that the "Enable Scanning" slider is turned on and that the vulnerability database has been successfully synced (online)/uploaded (offline). If "Enable Scanning" is turned off or if the vulnerability database is not synced or uploaded, this is a finding. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine with connectivity to the DTR management console: AUTHTOKEN=$(curl -sk -u <username>:<password> "https://[dtr_url]/auth/token" | jq -r .token) curl -k -H "Authorization: Bearer $AUTHTOKEN" -X GET "https://[dtr_url]/api/v0/imagescan/status" Verify that the response is successful with HTTP Status Code 200, look for the "lastDBUpdateFailed" and "lastVulnOverridesDBUpdateFailed" properties in the "Response body", and verify that they are both "false". If either is not "false", this is a finding.
This fix only applies to the DTR component of Docker Enterprise. Enable vulnerability scanning: via UI: As a Docker EE Admin, navigate to "System" | "Security" in the DTR management console. Click the "Enable Scanning" slider to enable this capability. Sync (online) or upload (offline) the vulnerability database. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine with connectivity to the DTR management console: AUTHTOKEN=$(curl -sk -u <username>:<password> "https://[dtr_url]/auth/token" | jq -r .token) curl -k -H "Authorization: Bearer $AUTHTOKEN" -X POST -d '{"scanningEnabled":true}' -H 'Content-Type: application/json' "https://[dtr_url]/api/v0/meta/settings" If DTR is offline, upload the latest vulnerability database (retrievable via Docker Enterprise subscription): AUTHTOKEN=$(curl -sk -u <username>:<password> "https://[dtr_url]/auth/token" | jq -r .token) UPDATE_FILE="[path_to_cve_database].tar" curl -k -H "Authorization: Bearer $AUTHTOKEN" -H "Content-Type: multipart/form-data" -H "Accept: application/json" -X PUT -F upload=@${UPDATE_FILE} "https://[dtr_url]/api/v0/imagescan/scan/update?online=false"
This check only applies to the UCP component of Docker Enterprise. Check that UCP has been integrated with a trusted CA. via UI: In the UCP web console, navigate to "Admin Settings" | "Certificates" and click on the "Download UCP Server CA Certificate" link. Verify that the contents of the downloaded "ca.pem" file match that of the trusted CA certificate. If the certificate chain is not valid or does not match the trusted CA, this is a finding. via CLI: Linux: Execute the following command and verify the certificate chain in the output is valid and matches that of the trusted CA: echo "" | openssl s_client -connect [ucp_url]:443 | openssl x509 -noout -text If the certificate chain is not valid or does not match the trusted CA, this is a finding.
This fix only applies to the UCP component of Docker Enterprise. Integrate UCP with a trusted CA. via UI: In the UCP web console, navigate to "Admin Settings" | "Certificates". Fill in (or click on the "Upload" links) the "CA Certificate" field with the contents of the trusted CA certificate. Assuming the user has generated a server certificate from that CA for UCP, also fill in the "Server Certificate" and "Private Key" fields with the contents of the public/private certificates respectively. The "Server Certificate" field must include both the UCP server certificate and any intermediate certificates. Click on the "Save" button. If DTR was previously integrated with this UCP cluster, execute a "dtr reconfigure" command as a superuser on one of the UCP Manager nodes in the cluster to re-configure DTR with the updated UCP certificates. via CLI: Linux: As a superuser, execute the following commands on each UCP Manager node in the cluster and in the directory where keys and certificates are stored: Create a container that attaches to the same volume where certificates are stored: docker create --name replace-certs -v ucp-controller-server-certs:/data busybox Copy keys and certificates to the container's volumes: docker cp cert.pem replace-certs:/data/cert.pem docker cp ca.pem replace-certs:/data/ca.pem docker cp key.pem replace-certs:/data/key.pem Remove the container, since it is no longer needed: docker rm replace-certs Restart the ucp-controller container: docker restart ucp-controller If DTR was previously integrated with this UCP cluster, execute a "dtr reconfigure" command as a superuser on one of the UCP Manager nodes in the cluster to re-configure DTR with the updated UCP certificates.
This check only applies to the DTR component of Docker Enterprise. Check that DTR has been integrated with a trusted CA. via UI: In the DTR web console, navigate to "System" | "General" and click on the "Show TLS settings" link in the "Domain & Proxies" section. Verify the certificate chain in "TLS Root CA" box is valid and matches that of the trusted CA. via CLI: Linux: Execute the following command and verify the certificate chain in the output is valid and matches that of the trusted CA: echo "" | openssl s_client -connect [dtr_url]:443 | openssl x509 -noout -text If the certificate chain is not valid or does not match the trusted CA, this is a finding.
This fix only applies to the DTR component of Docker Enterprise. Integrate DTR with a trusted CA. via UI: In the DTR web console, navigate to "System" | "General" and click on the "Show TLS Settings" link in the "Domain & Proxies" section. Fill in the "TLS Root CA" field with the contents of the trusted CA certificate. Assuming the user has generated a server certificate from that CA for DTR, also fill in the "TLS Certificate Chain" and "TLS Private Key" fields with the contents of the public/private certificates respectively. The "TLS Certificate Chain" field must include both the DTR server certificate and any intermediate certificates. Click on the "Save" button. via CLI: Linux: Execute the following command as a superuser on one of the UCP Manager nodes in the cluster: docker run -it --rm docker/dtr:[dtr_version] reconfigure --dtr-ca "$(cat [ca.pem])" --dtr-cert "$(cat [dtr_cert.pem])" --dtr-key "$(cat [dtr_private_key.pem])"
Ensure 'on-failure' container restart policy is set to 5. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --all | grep -iv "ucp\|kube\|dtr" | awk '{print $1}' | xargs docker inspect --format '{{ .Id }}: RestartPolicyName={{ .HostConfig.RestartPolicy.Name }} MaximumRetryCount={{ .HostConfig.RestartPolicy.MaximumRetryCount }}' If RestartPolicyName= "" and MaximumRetryCount=0, this is not a finding. If RestartPolicyName=always, this is a finding. If RestartPolicyName=on-failure, verify that the number of restart attempts is set to 5 or less by looking at MaximumRetryCount. If RestartPolicyName=on-failure and MaximumRetryCount is > 5, this is a finding.
If a container is desired to be restarted on its own, then, for example, start the container as below: docker run --detach --restart=on-failure:5 nginx
This check only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Ensure the default ulimit is not overwritten at runtime unless approved in the SSP. via CLI: Linux: As a Docker EE Admin, execute the following command using a Universal Control Plane (UCP) client bundle: docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: Ulimits={{ .HostConfig.Ulimits }}' If each container instance returns Ulimits=<no value>, this is not a finding. If a container sets a Ulimit and the setting is not approved in the SSP, this is a finding.
This fix only applies to the use of Docker Engine - Enterprise on a Linux host operating system. Only override the default ulimit settings if needed and if so, document these settings in the SSP. For example, to override default ulimit settings start a container as below: docker run --ulimit nofile=1024:1024 --interactive --tty [image] [command]
Verify that all outdated UCP and DTR container images have been removed from all nodes in the cluster. via CLI: As a Docker EE admin, execute the following command using a client bundle: docker images --filter reference='docker/[ucp|dtr]*' Verify that there are no tags listed that are older than the currently installed versions of UCP and DTR. If any of the tags listed are older than the currently installed versions of UCP and DTR, then this is a finding.
Remove all outdated UCP and DTR container images from all nodes in the cluster: via CLI: As a Docker EE admin, execute the following commands using a client bundle: docker rmi -f $(docker images --filter reference='docker/ucp*:[outdated_tags]' -q) docker rmi -f $(docker images --filter reference='docker/dtr*:[outdated_tags]' -q)
This check only applies to the DTR component of Docker Enterprise. Verify that all images that are stored in DTR are trusted, signed images: via UI: As a Docker EE Admin, navigate to "Repositories" in the DTR management console. Select a repository from the list. Navigate to the "Images" tab and verify that the "Signed" checkmark is indicated for each image tag. Repeat this for all repositories stored in DTR. If images stored in DTR are not signed, this is a finding. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands on a machine that can communicate with the DTR management console. Replace [dtr_url] with the DTR URL, [dtr_username] with the username of a Docker EE Admin and [dtr_password] with the password of a Docker EE Admin. AUTHTOKEN=$(curl -sk -u [dtr_username]:[dtr_password] -X GET "https://[dtr_url]/auth/token" | jq -r .token) REPOS=$(curl -sk -H "Authorization: Bearer $AUTHTOKEN" -X GET "https://[dtr_url]/api/v0/repositories" | jq -r '.repositories[] | "\(.namespace)/\(.name)"') for r in $REPOS; do curl -sk -H "Authorization: Bearer $AUTHTOKEN" -X GET "https://[dtr_url]/api/v0/repositories/$r/tags?domain=[dtr_url]"; done | jq -r '.[] | [.name, .inNotary] | @csv' Verify that "true" is output next to all tags listed. If all images stored in DTR are not signed and trusted, this is a finding.
This fix only applies to the DTR component of Docker Enterprise. Store only trusted, signed images in DTR. via CLI: Linux: Execute the following commands as a user with access to the repository in DTR for which image signing is being enabled: docker login [dtr_url] docker trust signer add --key [ucp_client_bundle_cert].pem [ucp_user] [dtr_url]/[namespace]/[imageName] docker trust key load [ucp_client_bundle_key].pem docker tag [source_image] [dtr_url]/[namespace]/[imageName]:[tag] export DOCKER_CONTENT_TRUST=1 docker push [dtr_url]/[namespace]/[imageName]:[tag]
This check only applies to the UCP component of Docker Enterprise. Check that UCP is configured to only run signed images by applicable Orgs and Teams. via UI: In the UCP web console, navigate to "Admin Settings" | "Docker Content Trust" and verify that "Run only signed images" is checked. Verify that the Orgs and Teams that images must be signed by in the dropdown that follows matches that of the organizational policies. If "Run only signed images" is not checked, this is a finding. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands on a machine that can communicate with the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml Look for the "require_content_trust" entry under the "[trust_configuration]" section in the output, and verify that it is set to "true". If require_content_trust is not set to true, this is a finding.
This fix only applies to the UCP component of Docker Enterprise. Enable Content Trust enforcement in UCP. via UI: In the UCP web console, navigate to "Admin Settings" | "Docker Content Trust" and check the box next to "Run only signed images". Set the appropriate Orgs and Teams that images must be signed by in the dropdown that follows to match that of the organizational policies. via CLI: Linux: As a Docker EE Admin, execute the following commands on a machine that can communicate with the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator: AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml > ucp-config.toml Open the "ucp-config.toml" file, set the "require_content_trust" entry under the "[trust_configuration]" section to "true". Save the file. Execute the following commands to update UCP with the new configuration: curl -sk -H "Authorization: Bearer $AUTHTOKEN" --upload-file ucp-config.toml https://[ucp_url]/api/ucp/config-toml
Ensure the correct number of manager nodes has been created in the swarm. via CLI: Linux: As a Docker EE Admin, follow the steps below using a Universal Control Plane (UCP) client bundle: Run the following command. docker info --format '{{ .Swarm.Managers }}' Alternatively, run the below command. docker node ls | grep 'Leader' Ensure the number of leaders is between 1 and 3. If the number of leaders is not 1, 2, or 3, this is a finding.
If an excessive number of managers is configured, the excess can be demoted to workers using the following command: docker node demote <ID> Where <ID> is the node ID value of the manager to be demoted.
Interview the system administrator to identify the key rotation process. Determine if there is a key rotation record and if the keys are rotated at a pre-defined frequency. If the swarm manager auto-lock key is not rotated on a regular basis, this is a finding.
Run the below command to rotate the keys. docker swarm unlock-key --rotate Additionally, to facilitate audit for this recommendation, maintain key rotation records and ensure that a pre-defined frequency for key rotation is established.
Ensure node certificates are rotated as appropriate. via CLI: Linux: As a Docker EE Admin, follow the steps below using a Universal Control Plane (UCP) client bundle: Run the below command and ensure that the node certificate Expiry Duration is set according to the System Security Plan (SSP). docker info | grep "Expiry Duration" If the expiry duration is not set according to the SSP, this is a finding.
Run the below command to set the desired expiry time. Example: docker swarm update --cert-expiry 48h
Ensure that docker.service file ownership is set to root:root. Step 1: Find out the file location: systemctl show -p FragmentPath docker.service Step 2: If the file does not exist, this is not a finding. If the file exists, execute the below command with the correct file path to verify that the file is owned and group-owned by root. Example: stat -c %U:%G /usr/lib/systemd/system/docker.service | grep -v root:root If the above command returns nothing, this is not a finding. If the command returns any output, the file ownership is not set to root:root and this is a finding.
Step 1: Find out the file location: systemctl show -p FragmentPath docker.service Step 2: If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to root. Example: chown root:root /usr/lib/systemd/system/docker.service
Ensure that docker.service file permissions are set to 644 or more restrictive. Step 1: Find out the file location: systemctl show -p FragmentPath docker.service Step 2: If the file does not exist, this is not a finding. If the file exists, execute the below command with the correct file path to verify that the file permissions are set to 644 or more restrictive. stat -c %a /usr/lib/systemd/system/docker.service If the file permissions are not set to 644 or a more restrictive permission, this is a finding.
Step 1: Find out the file location: systemctl show -p FragmentPath docker.service Step 2: If the file exists, execute the below command with the correct file path to set the file permissions to 644. Example: chmod 644 /usr/lib/systemd/system/docker.service
Ensure that docker.socket file ownership is set to root:root. Step 1: Find out the file location: systemctl show -p FragmentPath docker.socket Step 2: If the file does not exist, this is not a finding. If the file exists, execute the below command with the correct file path to verify that the file is owned and group-owned by root. Example: stat -c %U:%G /usr/lib/systemd/system/docker.socket | grep -v root:root If the above command returns nothing, this is not a finding. If the command returns any output, the file ownership is not set to root:root and this is a finding.
Step 1: Find out the file location: systemctl show -p FragmentPath docker.socket Step 2: If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to root. Example: chown root:root /usr/lib/systemd/system/docker.socket
Ensure that docker.socket file permissions are set to 644 or more restrictive. Step 1: Find out the file location: systemctl show -p FragmentPath docker.socket Step 2: If the file does not exist, this is not a finding. If the file exists, execute the below command with the correct file path to verify that the file permissions are set to 644 or more restrictive. stat -c %a /usr/lib/systemd/system/docker.socket If the file permissions are not set to 644 or a more restrictive permission, this is a finding.
Step 1: Find out the file location: systemctl show -p FragmentPath docker.socket Step 2: If the file exists, execute the below command with the correct file path to set the file permissions to 644. Example: chmod 644 /usr/lib/systemd/system/docker.socket
Ensure that /etc/docker directory ownership is set to root:root. On CentOS host OS's, execute the below command to verify that the directory is owned and group-owned by root: stat -c %U:%G /etc/docker If root:root is not displayed, this is a finding. On Ubuntu host OS's, execute the below command to verify that the /etc/default/docker directory ownership is set to root:root: stat -c %U:%G /etc/default/docker If root:root is not displayed, this is a finding.
Set the ownership and group-ownership for the directory to root. On CentOS host OS's, execute the following command: chown root:root /etc/docker On Ubuntu host OS's, execute the following command: chown root:root /etc/default/docker
Ensure that /etc/docker directory permissions are set to 755 or more restrictive. Execute the below command to verify that the directory has permissions of 755 or more restrictive: stat -c %a /etc/docker If the permissions are not set to 755 or a more restrictive permission, this is a finding.
Set the permissions for the directory to 755. Execute the following command: chmod 755 /etc/docker
Ensure that registry certificate file ownership is set to root:root. Execute the below command to verify that the registry certificate files are owned and group-owned by root: stat -c %U:%G /etc/docker/certs.d/* If the certificate files are not owned by root, this is a finding.
Set the ownership and group-ownership for the registry certificate files to root. Run the following command: chown root:root /etc/docker/certs.d/<registry-name>/*
Ensure that registry certificate file permissions are set to 444 or more restrictive. Execute the below command to verify that the registry certificate files have permissions of 444 or more restrictive: stat -c %a /etc/docker/certs.d/<registry-name>/* If the permissions are not set to 444, this is a finding.
Set the permissions for registry certificate files to 444. Run the following command: chmod 444 /etc/docker/certs.d/<registry-name>/*
Ensure that TLS CA certificate file ownership is set to root:root. Execute the below command to verify that the TLS CA certificate file is owned and group-owned by root: stat -c %U:%G <path to TLS CA certificate file> If the TLS CA certificate file ownership is not set to root:root, this is a finding.
Set the ownership and group-ownership for the TLS CA certificate file to root. Run the following command: chown root:root <path to TLS CA certificate file>
Ensure that TLS CA certificate file permissions are set to 444 or more restrictive. Execute the below command to verify that the TLS CA certificate file has permissions of 444 or more restrictive: stat -c %a <path to TLS CA certificate file> If the permissions are not set to 444, this is a finding.
chmod 444 <path to TLS CA certificate file> This sets the file permissions of the TLS CA file to 444.
Ensure that Docker server certificate file ownership is set to root:root. Execute the below command to verify that the Docker server certificate file is owned and group-owned by root: stat -c %U:%G <path to Docker server certificate file> If the command does not return root:root, this is a finding.
chown root:root <path to Docker server certificate file> This sets the ownership and group-ownership for the Docker server certificate file to root.
Ensure that Docker server certificate file permissions are set to 444 or more restrictive. Execute the below command to verify that the Docker server certificate file has permissions of 444 or more restrictive: stat -c %a <path to Docker server certificate file> If the permissions are not set to 444, this is a finding.
chmod 444 <path to Docker server certificate file> This sets the file permissions of the Docker server file to 444.
Ensure that Docker server certificate key file ownership is set to root:root. Execute the below command to verify that the Docker server certificate key file is owned and group-owned by root: stat -c %U:%G <path to Docker server certificate key file> If the certificate key file is not owned by root:root, this is a finding.
chown root:root <path to Docker server certificate key file> This sets the ownership and group-ownership for the Docker server certificate key file to root.
Ensure that Docker server certificate key file permissions are set to 400. Execute the below command to verify that the Docker server certificate key file has permissions of 400: stat -c %a <path to Docker server certificate key file> If the permissions are not set to 400, this is a finding.
Set the Docker server certificate key file permissions to 400. Run the following command: chmod 400 <path to Docker server certificate key file>
Ensure that Docker socket file ownership is set to root:docker. Execute the below command to verify that the Docker socket file is owned by root and group-owned by docker: stat -c %U:%G /var/run/docker.sock If docker.sock file ownership is not set to root:docker, this is a finding.
chown root:docker /var/run/docker.sock This sets the ownership to root and group-ownership to docker for default Docker socket file.
Ensure that Docker socket file permissions are set to 660 or more restrictive. Execute the below command to verify that the Docker socket file has permissions of 660 or more restrictive: stat -c %a /var/run/docker.sock If the permissions are not set to 660, this is a finding.
chmod 660 /var/run/docker.sock This sets the file permissions of the Docker socket file to 660.
The daemon.json file is not created on installation and must be created. Ensure that daemon.json file ownership is set to root:root. Execute the below command to verify that the file is owned and group-owned by root: stat -c %U:%G /etc/docker/daemon.json If the daemon.json file does not exist or if its ownership is not set to root:root, this is a finding.
If daemon.json does not exist, create the file and set the ownership and group-ownership for the file to root. Run the following command: chown root:root /etc/docker/daemon.json
The daemon.json file is not created on installation and must be created. Ensure that daemon.json file permissions are set to 644 or more restrictive. Execute the below command to verify that the file permissions are correctly set to 644 or more restrictive: stat -c %a /etc/docker/daemon.json If the permissions are not set to 644 or a more restrictive setting, this is a finding.
If daemon.json does not exist, create the file and set the file permissions for this file to 644. Run the following command: chmod 644 /etc/docker/daemon.json
This requirement applies to Ubuntu Linux systems only. Ensure that /etc/default/docker file ownership is set to root:root. Execute the below command to verify that the file is owned and group-owned by root: stat -c %U:%G /etc/default/docker If file ownership is not set to root:root, this is a finding.
Set the ownership and group-ownership for the file to root. Run the following command: chown root:root /etc/default/docker
This requirement applies to Ubuntu Linux systems only. Ensure that /etc/default/docker file permissions are set to 644 or more restrictive. Execute the below command to verify that the file permissions are correctly set to 644 or more restrictive: stat -c %a /etc/default/docker If the permissions are not set to 644 or a more restrictive permission, this is a finding.
Set the file permissions for this file to 644. Run the following command: chmod 644 /etc/default/docker
This check only applies to the UCP component of Docker Enterprise. Check that UCP has been integrated with a trusted CA. via UI: In the UCP web console, navigate to "Admin Settings" | "Certificates" and click on the "Download UCP Server CA Certificate" link. Verify that the contents of the downloaded "ca.pem" file match that of the user's trusted CA certificate. If the UCP certificate is not signed by a trusted DoD CA this is a finding. via CLI: Linux: Execute the following command and verify the certificate chain in the output is valid and matches that of the trusted CA: echo "" | openssl s_client -connect [ucp_url]:443 | openssl x509 -noout -text If the UCP certificate is not signed by a trusted DoD CA this is a finding.
This fix only applies to the UCP component of Docker Enterprise. Integrate UCP with a trusted CA. via UI: In the UCP web console, navigate to "Admin Settings" | "Certificates". Fill in (or click on the "Upload" links) the "CA Certificate" field with the contents of the trusted CA certificate. Assuming the user generated a server certificate from that CA for UCP, also fill in the "Server Certificate" and "Private Key" fields with the contents of the public/private certificates respectively. The "Server Certificate" field must include both the UCP server certificate and any intermediate certificates. Click on the "Save" button. If DTR was previously integrated with this UCP cluster, execute a "dtr reconfigure" command as a superuser on one of the UCP Manager nodes in the cluster to re-configure DTR with the updated UCP certificates. via CLI: Linux: As a superuser, execute the following commands on each UCP Manager node in the cluster and in the directory where keys and certificates are located: Create a container that attaches to the same volume where certificates are stored: docker create --name replace-certs -v ucp-controller-server-certs:/data busybox Copy keys and certificates to the container's volumes: docker cp cert.pem replace-certs:/data/cert.pem docker cp ca.pem replace-certs:/data/ca.pem docker cp key.pem replace-certs:/data/key.pem Remove the container, since it is no longer needed: docker rm replace-certs Restart the ucp-controller container: docker restart ucp-controller If DTR was previously integrated with this UCP cluster, execute a "dtr reconfigure" command as a superuser on one of the UCP Manager nodes in the cluster to re-configure DTR with the updated UCP certificates.
Ensure data exchanged between containers on different nodes of the overlay network is encrypted. via CLI: Linux: As a Docker EE Admin, follow the steps below using a Universal Control Plane (UCP) client bundle: Run the below command and ensure that each overlay network has been encrypted. docker network ls --filter driver=overlay --quiet | xargs docker network inspect --format '{{.Name}} {{ .Options }}' | grep -v "dtr\|interlock map\|ingress map" If the Options output for an overlay network does not show the "encrypted:" option for the com.docker.network.driver.overlay driver, ask for evidence that encryption is being handled at the application layer. If no evidence of encryption at the network or application layer is provided, this is a finding.
Create overlay network with --opt encrypted flag. Example: docker network create --opt encrypted --driver overlay my-network
Ensure swarm services are bound to a specific host interface. Linux: List the network listener on port 2377/TCP (the default for docker swarm) and confirm that it is only listening on specific interfaces. For example, on Ubuntu this could be done with the following command: netstat -lt | grep -i 2377 If the swarm service is not bound to a specific host interface address, this is a finding.
Rebuild the cluster and utilize the --listen-addr parameter.
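Illustrative example (the address is a placeholder for the desired host interface): docker swarm init --advertise-addr 10.2.3.4 --listen-addr 10.2.3.4:2377 Worker and additional manager nodes then join the swarm using that interface address.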
This check only applies to the UCP component of Docker Enterprise. Via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine that can communicate with the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml Look for the "min_TLS_version =" entry under the "[cluster_config]" section in the output, and verify that it is set to "TLSv1.2". If the "min_TLS_version" entry under the "[cluster_config]" section in the output is not set to "TLSv1.2", then this is a finding.
This fix only applies to the UCP component of Docker Enterprise. via CLI: Linux (requires curl and jq): As a Docker EE Admin, execute the following commands from a machine that can communicate with the UCP management console. Replace [ucp_url] with the UCP URL, [ucp_username] with the username of a UCP administrator and [ucp_password] with the password of a UCP administrator. AUTHTOKEN=$(curl -sk -d '{"username":"[ucp_username]","password":"[ucp_password]"}' https://[ucp_url]/auth/login | jq -r .auth_token) curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://[ucp_url]/api/ucp/config-toml > ucp-config.toml Open the "ucp-config.toml" file and, under the "[cluster_config]" section, set "min_TLS_version = TLSv1.2". Save the file. Execute the following commands to update UCP with the new configuration: curl -sk -H "Authorization: Bearer $AUTHTOKEN" --upload-file ucp-config.toml https://[ucp_url]/api/ucp/config-toml
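For reference, a minimal sketch of the resulting section in "ucp-config.toml" (key shown as referenced in this requirement):
[cluster_config]
  min_TLS_version = "TLSv1.2"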
Docker Enterprise Edition 2.x is no longer supported by the vendor. If the system is running Docker Enterprise Edition 2.x, this is a finding.
Upgrade to a supported version.