Mirantis Kubernetes Engine Security Technical Implementation Guide
- RMF Control: SC-10
- Severity: Medium
- CCI: CCI-001133
- Version: CNTR-MK-000940
- Vuln IDs: V-260903
- Rule IDs: SV-260903r986160_rule
Checks: C-64632r966064_chk
Log in to the MKE web UI and navigate to admin >> Admin Settings >> Authentication & Authorization. Ensure that "Lifetime Minutes" is set to "10" and "Renewal Threshold Minutes" is set to "0". If these settings are not configured as specified, this is a finding.
Fix: F-64540r966065_fix
Log in to the MKE web UI and navigate to admin >> Admin Settings >> Authentication & Authorization.
- Below "Lifetime Minutes", enter "10".
- Below "Renewal Threshold Minutes", enter "0".
- Click "Save".
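The same settings can also be checked from the CLI against an exported copy of the MKE configuration. A minimal sketch: the config-toml export endpoint and the lifetime_minutes/renewal_threshold_minutes key names are assumptions based on MKE's configuration file format, and the heredoc below stands in for a real export.

```shell
# Sketch: verify session settings in an exported MKE config (TOML).
# On a live system the file could be exported with something like:
#   curl -sk -H "Authorization: Bearer $AUTH" \
#     "https://$MKE_ADDRESS/api/ucp/config-toml" > mke-config.toml
# (endpoint and key names are assumptions, not verified against this release)
cat > mke-config.toml <<'EOF'
[auth.sessions]
  lifetime_minutes = 10
  renewal_threshold_minutes = 0
EOF

# pull the two values out of the [auth.sessions] section
lifetime=$(awk -F'= *' '/lifetime_minutes/ {print $2}' mke-config.toml)
renewal=$(awk -F'= *' '/renewal_threshold_minutes/ {print $2}' mke-config.toml)
echo "lifetime=$lifetime renewal=$renewal"
rm -f mke-config.toml
```

Any value other than lifetime=10 and renewal=0 would indicate a finding under the check above.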
- RMF Control: CM-5
- Severity: Medium
- CCI: CCI-001499
- Version: CNTR-MK-000430
- Vuln IDs: V-260904
- Rule IDs: SV-260904r966069_rule
Checks: C-64633r966067_chk
If MSR is not being utilized, this is Not Applicable.
Verify the organizations, user permissions, and repositories in MSR are configured per the System Security Plan (SSP). Obtain and review the SSP.
1. Log in to the MSR web UI as Admin and navigate to "Organizations". Verify the list of organizations is set up per the SSP.
2. Navigate to "Users" and verify that the users are assigned to appropriate organizations per the SSP.
3. Click on each user and verify the assigned repositories are appropriate per the SSP.
If the organizations, users, or assigned repositories in MSR are not configured per the SSP, this is a finding.
Fix: F-64541r966068_fix
If MSR is not being utilized, this is Not Applicable.
Set the organizations, user permissions, and repositories in MSR so they are configured per the SSP.
1. Modify Organizations according to the SSP by logging in to the MSR web UI as Admin and navigating to Organizations.
To delete an Organization:
- Click on the "Organization".
- Click the "Settings Tab".
- Click "Delete".
- Confirm and click "Delete".
To add an Organization:
- Click "New organization".
- Input the Organization name.
- Click "Save".
To assign Users to an Organization:
- Click on an Organization.
- Under the Members tab, click "Add user".
- Select "New" or "Existing".
- Fill in User information.
- Click "Save".
2. Modify Users according to the SSP.
- Navigate to "Users".
To add a User:
- Click "New User".
- Fill in User information.
- Click "Save".
To delete a User:
- Click on the "User".
- Select the "Settings Tab".
- Click "Delete User".
- Confirm and click "Delete".
3. Modify Repositories according to the SSP:
- Click on the User.
- Under the Repositories tab, modify the assigned repositories to what is appropriate per the SSP.
- RMF Control: CM-7
- Severity: Medium
- CCI: CCI-000381
- Version: CNTR-MK-000580
- Vuln IDs: V-260905
- Rule IDs: SV-260905r966072_rule
Checks: C-64634r966070_chk
This check only applies when using Kubernetes orchestration. Log in to the MKE web UI and navigate to Kubernetes >> Namespaces. The default namespaces are "default", "kube-system", "kube-public", and "kube-node-lease".
1. In the top right corner, if "Set context for all namespaces" is not enabled, this is a finding.
2. Navigate to Kubernetes >> Services. Confirm that no service except "kubernetes" has the "default" namespace listed. Confirm that only approved system services have the "kube-system" namespace listed.
If "default" has a service other than the "kubernetes" service, this is a finding. If "kube-system" has a service that is not listed in the System Security Plan (SSP), this is a finding.
Fix: F-64542r966071_fix
Log in to the MKE web UI and navigate to Kubernetes >> Namespaces. In the top right corner, enable "Set context for all namespaces".
Move any user-managed resources from the default, kube-public, and kube-node-lease namespaces to user namespaces:
- Navigate to Kubernetes >> Services.
- Select the user-managed service.
- Click on the settings wheel in the top right corner to view the .yaml for that service.
- Change the "namespace" to a user namespace.
- Click "Save".
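The service portion of the check above can be sketched as a one-line filter. The sample list below stands in for the output of `kubectl get services -n default -o name` on a live cluster; the service name "my-app" is hypothetical.

```shell
# Flag any service in the "default" namespace other than the built-in
# "kubernetes" service. Sample data stands in for:
#   kubectl get services -n default -o name
services='service/kubernetes
service/my-app'

# anything surviving the filter is a candidate finding
extras=$(printf '%s\n' "$services" | grep -v '^service/kubernetes$' || true)
echo "unexpected services: $extras"
```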
- RMF Control: AC-3
- Severity: High
- CCI: CCI-000213
- Version: CNTR-MK-000110
- Vuln IDs: V-260906
- Rule IDs: SV-260906r986161_rule
Checks: C-64635r966073_chk
Access to use the docker CLI must be limited to root only.
1. Log on to the host CLI and execute the following:
stat -c %U:%G /var/run/docker.sock | grep -v root:docker
If any output is present, this is a finding.
2. Verify that the docker group has only the required users by executing:
getent group docker
If any users listed are not required to have direct access to MCR, this is a finding.
3. Execute the following command to verify the Docker socket file has permissions of "660" or more restrictive:
stat -c %a /var/run/docker.sock
If permissions are more permissive than "660", this is a finding.
Fix: F-64543r966074_fix
To remove unauthorized users from the docker group, access the host CLI and run:
gpasswd -d [username to remove] docker
To ensure the Docker socket is group-owned by docker, execute the following:
chown root:docker /var/run/docker.sock
To set the file permissions of the Docker socket file to "660", execute the following:
chmod 660 /var/run/docker.sock
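The permission step above can be sketched as a script. Here a temporary file stands in for /var/run/docker.sock, and the chown step is omitted because it requires root and an existing docker group.

```shell
# Demonstrate the chmod remediation on a stand-in for the Docker socket.
sock=$(mktemp)
chmod 777 "$sock"          # simulate an overly permissive socket
chmod 660 "$sock"          # remediation step from the fix text
perms=$(stat -c %a "$sock")  # GNU stat, as used in the check text
echo "permissions: $perms"
rm -f "$sock"
```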
- RMF Control: CM-7
- Severity: High
- CCI: CCI-000382
- Version: CNTR-MK-000640
- Vuln IDs: V-260907
- Rule IDs: SV-260907r966078_rule
Checks: C-64636r966076_chk
This check must be executed on all nodes in an MKE cluster to ensure that mapped ports are the ones needed by the containers.
Via CLI:
Linux: As an administrator, execute the following command using a Universal Control Plane (MKE) client bundle:
docker ps --quiet | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}'
Review the list and ensure the ports mapped are those needed for the container.
If there are any mapped ports not documented in the System Security Plan (SSP), this is a finding.
Fix: F-64544r966077_fix
Document the ports required for each container in the SSP.
Fix the container image to expose only the ports needed by the containerized application.
Ignore the list of ports defined in the Dockerfile by NOT using the -P (uppercase) or --publish-all flag when starting the container.
Use the -p (lowercase) or --publish flag to explicitly define the ports needed for a particular container instance.
Example:
docker run --interactive --tty --publish 5000 --publish 5001 --publish 5002 centos /bin/bash
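Comparing mapped ports against the SSP can be partially scripted by filtering the inspect output with jq. A minimal sketch, using embedded sample JSON in place of live `docker inspect` output (the container ID and ports shown are hypothetical):

```shell
# List published host ports from docker-inspect-style JSON so they can be
# compared against the SSP. Sample input stands in for:
#   docker ps --quiet | xargs docker inspect
cat > inspect.json <<'EOF'
[{"Id":"abc123","NetworkSettings":{"Ports":{"443/tcp":[{"HostIp":"0.0.0.0","HostPort":"8443"}],"5000/tcp":null}}}]
EOF

# unpublished ports have a null value and are skipped
ports=$(jq -r '.[] | .NetworkSettings.Ports | to_entries[]
               | select(.value != null) | .value[].HostPort' inspect.json)
echo "published host ports: $ports"
rm -f inspect.json
```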
- RMF Control: IA-5
- Severity: High
- CCI: CCI-000197
- Version: CNTR-MK-000870
- Vuln IDs: V-260908
- Rule IDs: SV-260908r966081_rule
Checks: C-64637r966079_chk
On the MKE controller, verify FIPS mode is enabled. Execute the following command through the CLI:
docker info
The "Security Options" section in the output must show a "fips" label, indicating that, when configured, the remotely accessible MKE UI uses FIPS-validated digital signatures in conjunction with an approved hash function to protect the integrity of remote access sessions.
If the "fips" label is not shown in the "Security Options" section, this is a finding.
Fix: F-64545r966080_fix
If the operating system has FIPS enabled, FIPS mode is enabled by default in MCR. The preferred method is to ensure FIPS mode is set on the operating system prior to installation. If a change is required on a deployed system:
1. Create the directory if it does not exist by executing the following:
mkdir -p /etc/systemd/system/docker.service.d/
2. Create a file called /etc/systemd/system/docker.service.d/fips-module.conf and add the following:
[Service]
Environment="DOCKER_FIPS=1"
3. Reload the Docker configuration into systemd by executing the following:
sudo systemctl daemon-reload
4. Restart the Docker service by executing the following:
sudo systemctl restart docker
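The drop-in steps above can be sketched as a small script. In this sketch the file is written into a scratch directory rather than /etc/systemd/system so it can run without root; on a real node the path would be /etc/systemd/system/docker.service.d/fips-module.conf, followed by daemon-reload and a Docker restart.

```shell
# Write the FIPS drop-in from the fix text into a scratch directory.
dir=$(mktemp -d)
mkdir -p "$dir/docker.service.d"
cat > "$dir/docker.service.d/fips-module.conf" <<'EOF'
[Service]
Environment="DOCKER_FIPS=1"
EOF

content=$(cat "$dir/docker.service.d/fips-module.conf")
echo "$content"
rm -rf "$dir"
```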
- RMF Control: AC-2
- Severity: Medium
- CCI: CCI-000015
- Version: CNTR-MK-000030
- Vuln IDs: V-260909
- Rule IDs: SV-260909r986168_rule
Checks: C-64638r966082_chk
Verify that Enterprise Identity Provider integration is enabled and properly configured in the MKE Admin Settings.
1. Log in to the MKE web UI and navigate to admin >> Admin Settings >> Authentication & Authorization. If neither LDAP nor SAML is set to "Enabled", this is a finding.
2. Identity Provider configurations:
When using LDAP, ensure the following are set:
- LDAP/AD server's URL.
- Reader DN.
- Reader Password.
When using SAML, in the "SAML IdP Server" section, ensure the following:
- The URL for the identity provider exists in the "IdP Metadata URL" field.
- "Skip TLS Verification" is unchecked.
- "Root Certificate Bundle" is filled.
In the "SAML Service Provider" section, ensure the MKE Host field has the MKE UI IP address.
If the Identity Provider configurations do not match the System Security Plan (SSP), this is a finding.
Fix: F-64546r986167_fix
To configure the Identity Provider, log in to the MKE web UI and navigate to admin >> Admin Settings >> Authentication & Authorization >> Identity Provider Integration section.
To configure LDAP, click the radio button to set LDAP to "Enabled". In the "LDAP Server" subsection set the following:
- "LDAP Server URL" to the URL for the organization's AD or LDAP server (URL must be https).
- "Reader DN" with the DN of the account used to search the LDAP entries.
- "Reader Password" with the password for the Reader account.
Click "Save".
To configure SAML, click the radio button to set SAML to "Enabled".
- Enter the URL in the "Service Provider Metadata URL" field.
- Upload the certificate bundle for the IdP provider in "Root Certificates Bundle".
- In the "SAML Service Provider" section, enter the MKE IP address in the MKE Host field.
Click "Save".
- RMF Control: AC-3
- Severity: Medium
- CCI: CCI-000213
- Version: CNTR-MK-000120
- Vuln IDs: V-260910
- Rule IDs: SV-260910r966087_rule
Checks: C-64639r966085_chk
This check must be executed on all nodes in a Docker Enterprise cluster. Verify no running containers have a process for an SSH server. Using the CLI, execute the following:
for i in $(docker container ls --format "{{.ID}}"); do
  pid=$(docker inspect -f '{{.State.Pid}}' "$i")
  ps -h --ppid "$pid" -o cmd
done | grep sshd
If any output is returned, a container is running an SSH server process, and this is a finding.
Fix: F-64547r966086_fix
Containers found with an SSH server must be removed by executing the following:
docker rm [container name]
Then, a new image must be built with the SSH server removed.
- RMF Control: AC-3
- Severity: Medium
- CCI: CCI-000213
- Version: CNTR-MK-000130
- Vuln IDs: V-260911
- Rule IDs: SV-260911r986162_rule
Checks: C-64640r966088_chk
Review the System Security Plan (SSP) and identify applications that leverage configuration files and/or small amounts of user-generated data, and ensure the data is stored in Docker Secrets or Kubernetes Secrets.
When using Swarm orchestration, log in to the MKE web UI, navigate to Swarm >> Secrets, and view the configured secrets. If items identified for secure storage are not included in the secrets, this is a finding.
When using Kubernetes orchestration, log on to the MKE Controller node and run the following command:
kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A
Or, using the API, configure the $AUTH variable to contain the token for the SCIM API endpoint:
curl -k -H 'Accept: application/json' -H "Authorization: Bearer $AUTH" -s "https://$MKE_ADDRESS/api/MKE/config/kubernetes" | jq '.KMSEnabled'
The expected output is "true".
If any of the values returned reference environment variables, this is a finding.
Fix: F-64548r966089_fix
To create secrets when using Swarm orchestration, log in to the MKE UI. Navigate to Swarm >> Secrets, and then click "Create". Provide a name for the secret and enter the data into the "Content" field. Add a label to allow RBAC features to be used for access to the secret. Click "Save".
To create secrets when using Kubernetes orchestration, run the following command on the MKE Controller node, with the $AUTH variable configured to contain the token for the SCIM API endpoint:
curl -X PUT -H 'Accept: application/json' -H "Authorization: Bearer $AUTH" -d '{"KMSEnabled":true,"KMSName":"<kms_name>","KMSEndpoint":"/var/kms"}' "https://$MKE_ADDRESS/api/MKE/config/kubernetes"
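The intent of the check (secrets mounted rather than injected as environment variables) can be sketched with jq. The sample JSON below stands in for `kubectl get pods -A -o json` output; the pod and secret names are hypothetical.

```shell
# Flag workloads that pull secrets into environment variables via
# secretKeyRef instead of mounting them. Sample data stands in for:
#   kubectl get pods -A -o json
cat > pods.json <<'EOF'
{"items":[
 {"kind":"Pod","metadata":{"name":"web"},
  "spec":{"containers":[{"name":"app",
   "env":[{"name":"DB_PASS","valueFrom":{"secretKeyRef":{"name":"db","key":"pass"}}}]}]}},
 {"kind":"Pod","metadata":{"name":"clean"},
  "spec":{"containers":[{"name":"app"}]}}
]}
EOF

# recursively search each item for any object carrying a secretKeyRef key
offenders=$(jq -r '.items[]
  | select([.. | objects | has("secretKeyRef")] | any)
  | .metadata.name' pods.json)
echo "pods using env secrets: $offenders"
rm -f pods.json
```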
- RMF Control: AC-4
- Severity: Medium
- CCI: CCI-001368
- Version: CNTR-MK-000140
- Vuln IDs: V-260912
- Rule IDs: SV-260912r966093_rule
Checks: C-64641r966091_chk
Verify the applied RBAC policies set in MKE are configured per the requirements set forth by the System Security Plan (SSP). Log in to the MKE web UI as an MKE Admin and navigate to Access Control >> Grants.
When using Kubernetes orchestration, select the "Kubernetes" tab and verify that cluster role bindings are configured per the requirements set forth by the SSP.
When using Swarm orchestration, select the "Swarm" tab and verify that all grants are configured per the requirements set forth by the SSP.
If the grants are not configured per the requirements set forth by the SSP, this is a finding.
Fix: F-64549r966092_fix
Create Role Bindings/Grants by logging in to the MKE web UI as an MKE Admin. Navigate to Access Control >> Grants.
Using Kubernetes orchestration:
- Select the "Kubernetes" tab and click "Create Role Binding".
- Add Users, Organizations, or Service Accounts as needed and click "Next".
- Under "Resource Set", enable "Apply Role Binding to all namespaces", and then click "Next".
- Under "Role", select a cluster role.
- Click "Create".
Using Swarm orchestration:
- Select the "Swarm" tab and click "Create Grant".
- Add Users, Organizations, or Service Accounts as needed and click "Next".
- Under "Resource Set", click "View Children" until the required Swarm collection displays, and then click "Next".
- Under "Role", select a cluster role.
- Click "Create".
- RMF Control: AC-4
- Severity: Medium
- CCI: CCI-001414
- Version: CNTR-MK-000150
- Vuln IDs: V-260913
- Rule IDs: SV-260913r966096_rule
Checks: C-64642r966094_chk
When using Kubernetes orchestration, ensure that Pods do not use the host machine's network namespace and instead use their own isolated network namespaces. Note: If the hostNetwork field is not explicitly set in the Pod's specification, it uses the default behavior, which is equivalent to hostNetwork: false. Execute the following for all pods:
kubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.hostNetwork == true) | .metadata.name'
If the above command returns any Pod names (meaning "hostNetwork" is true), this is a finding unless a documented exception is present in the System Security Plan (SSP).
When using Swarm orchestration, check that the host's network namespace is not shared. Via CLI:
Linux: As an administrator, execute the following command using a Universal Control Plane (MKE) client bundle:
docker ps --filter "label=com.docker.ucp.version" | awk '{print $1}' | xargs docker inspect --format '{{ .Name }}: NetworkMode={{ .HostConfig.NetworkMode }}'
If the above command returns NetworkMode=host, this is a finding unless a documented exception is present in the SSP.
Fix: F-64550r966095_fix
When using Kubernetes orchestration: In Kubernetes, the hostNetwork setting is part of the Pod's specification, and once a Pod is created, its hostNetwork setting cannot be directly modified. However, the desired effect can be achieved by creating a new Pod with the updated hostNetwork setting and then deleting the existing Pod. This process replaces the old Pod with the new one.
When using Swarm orchestration: Nonsystem containers previously created with access to the host network namespace must be reviewed and removed by executing:
docker container rm [container]
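The Kubernetes audit above can be sketched against sample data. The JSON below stands in for `kubectl get pods --all-namespaces -o json`; the pod names are hypothetical.

```shell
# Flag pods that share the host network namespace, mirroring the jq filter
# from the check. Sample data stands in for:
#   kubectl get pods --all-namespaces -o json
cat > pods.json <<'EOF'
{"items":[
 {"metadata":{"name":"isolated"},"spec":{}},
 {"metadata":{"name":"hostnet"},"spec":{"hostNetwork":true}}
]}
EOF

flagged=$(jq -r '.items[] | select(.spec.hostNetwork == true) | .metadata.name' pods.json)
echo "pods on host network: $flagged"
rm -f pods.json
```

Note that the pod named "isolated" is not flagged: an absent hostNetwork field is equivalent to hostNetwork: false, as stated in the check.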
- RMF Control: AU-14
- Severity: Medium
- CCI: CCI-001464
- Version: CNTR-MK-000220
- Vuln IDs: V-260914
- Rule IDs: SV-260914r966099_rule
Checks: C-64643r966097_chk
Check auditing configuration level for MKE nodes and controller: Log in to the MKE web UI and navigate to admin >> Admin Settings >> Logs & Audit Logs. If "AUDIT LOG LEVEL" is not set to "Request", this is a finding. If "DEBUG LEVEL" is set to "ERROR", this is a finding.
Fix: F-64551r966098_fix
Log in to the MKE web UI and navigate to admin >> Admin Settings >> Logs & Audit Logs.
- In the "Configure Audit Log Level" section, select "Request".
- In the "Configure Global Log Level" section, select "INFO" or "DEBUG". Note: The recommended setting is "INFO".
- Click "Save".
- RMF Control: AU-5
- Severity: Medium
- CCI: CCI-000140
- Version: CNTR-MK-000310
- Vuln IDs: V-260915
- Rule IDs: SV-260915r966102_rule
Checks: C-64644r966100_chk
Check the centralized log server configuration. Via CLI, execute the following command as a trusted user on the host operating system:
cat /etc/docker/daemon.json
Verify that the "log-driver" property is set to one of the following: "syslog", "journald", or "<plugin>" (where <plugin> is the name of a third-party Docker logging driver plugin).
Work with the SIEM administrator to determine if an alert is configured when audit data is no longer received as expected.
If "log-driver" is not set, or if alarms are not configured in the SIEM, this is a finding.
Fix: F-64552r966101_fix
Configure the logging driver by setting the log-driver and log-opts keys to appropriate values in the daemon.json file. Refer to this link for extra assistance: https://docs.docker.com/config/containers/logging/syslog/.
Via CLI:
Linux:
1. As a trusted user on the host OS, open the /etc/docker/daemon.json file for editing. If the file does not exist, it must be created.
2. Set the "log-driver" property to one of the following: "syslog", "journald", or "<plugin>" (where <plugin> is the name of a third-party MKE logging driver plugin). Note: Mirantis recommends the "journald" setting. The following example sets the log driver to journald:
{ "log-driver": "journald" }
3. Configure the "log-opts" object as required by the selected "log-driver".
4. Save the file.
5. Restart the Docker daemon by executing the following:
sudo systemctl restart docker
Configure rsyslog to send logs to the SIEM system.
1. Edit the /etc/rsyslog.conf file and add the IP address of the remote server. Example:
*.* @@loghost.example.com
2. Work with the SIEM administrator to configure an alert when no audit data is received from Mirantis.
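A minimal daemon.json combining the pieces above, assuming the syslog driver with the rsyslog host from the example (the syslog-address option is taken from Docker's syslog logging driver documentation; the hostname and port are illustrative):

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://loghost.example.com:514"
  }
}
```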
- RMF Control: CM-7
- Severity: Medium
- CCI: CCI-000381
- Version: CNTR-MK-000480
- Vuln IDs: V-260916
- Rule IDs: SV-260916r966105_rule
Checks: C-64645r966103_chk
If MSR is not being utilized, this is Not Applicable.
Check that MSR has been integrated with a trusted certificate authority (CA).
1. In one terminal window, execute the following:
kubectl port-forward service/msr 8443:443
2. In a second terminal window, execute the following:
openssl s_client -connect localhost:8443 -showcerts </dev/null
If the certificate chain in the output is not valid or does not match that of the trusted CA, this is a finding.
Fix: F-64553r966104_fix
If MSR is not being utilized, this is Not Applicable. Ensure the certificates are from a trusted DOD CA.
1. Add the secret to the cluster by executing the following:
kubectl create secret tls <secret-name> --key <keyfile>.pem --cert <certfile>.pem
2. Update MSR with the custom certificate by executing the following:
helm upgrade msr [REPO_NAME]/msr --version <helm-chart-version> --set-file license=path/to/file/license.lic --set nginx.webtls.create=false --set nginx.webtls.secretName="<secret-name>"
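Chain validation from the check can be scripted with openssl verify. In this sketch a throwaway self-signed certificate stands in for both the trusted DOD CA bundle and the certificate captured from the s_client output.

```shell
# Verify a certificate against a CA bundle with openssl verify.
# On a real system: openssl verify -CAfile dod-ca-bundle.pem served-cert.pem
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -subj "/CN=example-ca" 2>/dev/null

# a self-signed cert validates against itself, standing in for a real chain
result=$(openssl verify -CAfile "$dir/ca.pem" "$dir/ca.pem")
echo "$result"
rm -rf "$dir"
```

Anything other than an "OK" verdict from openssl verify would correspond to a finding under the check above.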
- RMF Control: CM-7
- Severity: Medium
- CCI: CCI-000381
- Version: CNTR-MK-000490
- Vuln IDs: V-260917
- Rule IDs: SV-260917r966108_rule
Checks: C-64646r966106_chk
To ensure this setting has not been modified, follow these steps on each node: Log in to the MKE web UI and navigate to admin >> Admin Settings >> Orchestration. Scroll down to "Container Scheduling".
Verify that "Allow administrators to deploy containers on MKE managers or nodes running MSR" is disabled. If it is checked (enabled), this is a finding.
Verify that "Allow users to schedule on all nodes, including MKE managers and MSR nodes" is disabled. If it is checked (enabled), this is a finding.
Fix: F-64554r966107_fix
Set MKE and MSR to disallow administrators and users from scheduling containers. Log in to the MKE web UI and navigate to admin >> Admin Settings >> Orchestration. Scroll down to "Container Scheduling".
- Disable "Allow administrators to deploy containers on MKE managers or nodes running MSR".
- Disable "Allow users to schedule on all nodes, including MKE managers and MSR nodes".
- Click "Save".
- RMF Control: CM-7
- Severity: Medium
- CCI: CCI-000381
- Version: CNTR-MK-000500
- Vuln IDs: V-260918
- Rule IDs: SV-260918r966111_rule
Checks: C-64647r966109_chk
Verify that usage and API analytics tracking is disabled in MKE. Log in to the MKE web UI and navigate to admin >> Admin Settings >> Usage. Verify the "Enable hourly usage reporting" and "Enable API and UI tracking" options are both unchecked. If either box is checked, this is a finding.
Fix: F-64555r966110_fix
Disable usage and API analytics tracking in MKE. Log in to the MKE web UI and navigate to admin >> Admin Settings >> Usage. Uncheck both the "Enable hourly usage reporting" and "Enable API and UI tracking" options. Click "Save".
- RMF Control: CM-7
- Severity: Medium
- CCI: CCI-000381
- Version: CNTR-MK-000510
- Vuln IDs: V-260919
- Rule IDs: SV-260919r966114_rule
Checks: C-64648r966112_chk
If MSR is not being utilized, this is Not Applicable. Verify that usage and API analytics tracking is disabled in MSR. Log in to the MSR web UI and navigate to System >> General Tab. Scroll to the "Analytics" section. If the "Send data" option is enabled, this is a finding.
Fix: F-64556r966113_fix
If MSR is not being utilized, this is Not Applicable. Disable usage and API analytics tracking in MSR. Log in to the MSR web UI and navigate to System >> General Tab. Scroll to the "Analytics" section. Click the "Send data" slider to disable this capability.
- RMF Control: CM-7
- Severity: Medium
- CCI: CCI-000381
- Version: CNTR-MK-000520
- Vuln IDs: V-260920
- Rule IDs: SV-260920r966117_rule
Checks: C-64649r966115_chk
If MKE is not being used on an Ubuntu host operating system, this is Not Applicable. If AppArmor is not in use, this is Not Applicable. This check must be executed on all nodes in a cluster.
Via CLI:
Linux: Execute the following command as a trusted user on the host operating system:
docker ps -a -q | xargs -I {} docker inspect {} --format '{{ .Name }}: AppArmorProfile={{ .AppArmorProfile }}, Privileged={{ .HostConfig.Privileged }}' | grep 'AppArmorProfile=unconfined' | grep 'Privileged=false'
If there is any output, this is a finding.
Fix: F-64557r966116_fix
If not using MKE on an Ubuntu host operating system, this is Not Applicable. If AppArmor is not in use, this is Not Applicable. This fix must be applied on all nodes in a cluster. Run all nonprivileged containers using an AppArmor profile.
Via CLI:
Linux:
1. Install AppArmor (if not already installed).
2. Create/import an AppArmor profile (if not using the "docker-default" profile).
3. Put the profile in "enforcing" mode.
4. Execute the following command as a trusted user on the host operating system to run the container using the customized AppArmor profile:
docker run [options] --security-opt="apparmor:[PROFILENAME]" [image] [command]
When using the "docker-default" profile, run the container using the following command instead:
docker run [options] --security-opt apparmor=docker-default [image] [command]
- RMF Control: CM-7
- Severity: Medium
- CCI: CCI-000381
- Version: CNTR-MK-000530
- Vuln IDs: V-260921
- Rule IDs: SV-260921r966120_rule
Checks: C-64650r966118_chk
If using MKE on operating systems other than Red Hat Enterprise Linux or CentOS host operating systems where SELinux is in use, this check is Not Applicable. Execute on all nodes in a cluster.
Verify that the appropriate security options are configured for all running containers. Via CLI:
Linux: Execute the following command as a user on the host operating system:
docker info --format '{{.SecurityOptions}}'
Expected output:
[name=seccomp,profile=default name=selinux name=fips]
If there is no output, or "name=selinux" is not listed, this is a finding.
Fix: F-64558r966119_fix
If using MKE on operating systems other than Red Hat Enterprise Linux or CentOS host operating systems where SELinux is in use, this fix is Not Applicable. Apply on all nodes in a cluster.
Start MKE with SELinux mode enabled and run containers using appropriate security options. Via CLI:
Linux:
1. Set the SELinux state and policy.
2. Create or import an SELinux policy template for MKE.
3. Start MKE with SELinux mode enabled by setting the "selinux-enabled" property to "true" in the "/etc/docker/daemon.json" daemon configuration file.
4. Restart MKE.
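A minimal /etc/docker/daemon.json fragment for step 3, assuming no other daemon options are configured (the selinux-enabled key is documented in the Docker daemon configuration reference):

```json
{
  "selinux-enabled": true
}
```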
- RMF Control: CM-7
- Severity: Medium
- CCI: CCI-000381
- Version: CNTR-MK-000550
- Vuln IDs: V-260922
- Rule IDs: SV-260922r966123_rule
Checks: C-64651r966121_chk
If using Kubernetes orchestration, this check is Not Applicable.
When using Swarm orchestration, log in to the CLI as an MKE Admin and execute the following command using an MKE client bundle:
docker ps --all --quiet --filter "label=com.docker.ucp.version" | xargs docker inspect --format '{{ .Id }}: Volumes={{ .Mounts }}' | grep -i "docker.sock\|docker_engine"
If any output is returned, the Docker socket is mounted inside a container, and this is a finding.
Fix: F-64559r966122_fix
If using Kubernetes orchestration, this check is Not Applicable.
When using Swarm orchestration and using the -v/--volume flags to mount volumes to containers in a docker run command, do not use docker.sock as a volume. A reference for the docker run command can be found at https://docs.docker.com/engine/reference/run/.
Nonsystem containers previously created with the Docker socket mounted must be reviewed and removed by executing:
docker container rm [container]
- RMF Control: CM-7
- Severity: Medium
- CCI: CCI-000381
- Version: CNTR-MK-000560
- Vuln IDs: V-260923
- Rule IDs: SV-260923r966126_rule
Checks: C-64652r966124_chk
When using Kubernetes orchestration, this check is Not Applicable.
When using Swarm orchestration, via CLI:
Linux: Execute the following command as a trusted user on the host operating system:
docker ps --quiet --all | xargs docker inspect --format '{{ .Name }}: CapAdd={{ .HostConfig.CapAdd }} CapDrop={{ .HostConfig.CapDrop }}'
The command will output all Linux kernel capabilities. If the Linux kernel capabilities exceed what is defined in the System Security Plan (SSP), this is a finding.
Fix: F-64560r966125_fix
When using Kubernetes orchestration, this check is Not Applicable.
When using Swarm orchestration, nonsystem containers previously created with added Linux kernel capabilities beyond those defined in the SSP must be reviewed and removed by executing:
docker container rm [container]
- RMF Control: CM-7
- Severity: Medium
- CCI: CCI-000381
- Version: CNTR-MK-000570
- Vuln IDs: V-260924
- Rule IDs: SV-260924r966129_rule
Checks: C-64653r966127_chk
This check must be executed on all nodes in an MKE cluster. Verify that no running containers are mapping host port numbers below 1024.
Via CLI:
Linux: Execute the following command as a trusted user on the host operating system:
docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}'
Review the list and ensure that container ports are not mapped to host port numbers below 1024. If they are, this is a finding.
Ensure that there are no such container-to-host privileged port mapping declarations in the Mirantis config file. View the config file. If container-to-host privileged port mapping declarations exist, this is a finding.
Fix: F-64561r966128_fix
To edit container ports, log in to the MKE web UI and navigate to Shared Resources >> Containers.
- Locate the container with the incorrect port mapping.
- Click on the container name and stop the container by clicking on the three dots in the upper right-hand corner.
- Scroll down to Ports to check if ports have been manually assigned.
- Edit the port to a nonprivileged port.
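Flagging privileged host ports can be sketched with jq, where 1024 is the cutoff from the check. The sample JSON stands in for live `docker inspect` output; the container ID and ports are hypothetical.

```shell
# Extract host ports from docker-inspect-style JSON and keep only those
# below 1024. Sample data stands in for:
#   docker ps --quiet --all | xargs docker inspect
cat > inspect.json <<'EOF'
[{"Id":"abc","NetworkSettings":{"Ports":{"80/tcp":[{"HostPort":"80"}],"8080/tcp":[{"HostPort":"8080"}]}}}]
EOF

low=$(jq -r '.[] | .NetworkSettings.Ports | to_entries[]
             | select(.value != null) | .value[].HostPort
             | select(tonumber < 1024)' inspect.json)
echo "privileged host ports: $low"
rm -f inspect.json
```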
- RMF Control: CM-7
- Severity: Medium
- CCI: CCI-000381
- Version: CNTR-MK-000590
- Vuln IDs: V-260925
- Rule IDs: SV-260925r966132_rule
Checks: C-64654r966130_chk
Ensure Resource Quotas and CPU priority are set for each namespace.
When using Kubernetes orchestration: Log in to the MKE web UI, navigate to Kubernetes >> Namespaces, and then click on each defined Namespace. If the Namespace states "Quotas Nothing has been defined for this resource.", or the limits.cpu or limits.memory settings do not match the System Security Plan (SSP), this is a finding.
When using Swarm orchestration:
1. Check Resource Quotas:
Linux: As an administrator, execute the following command using a Universal Control Plane (MKE) client bundle:
docker ps --quiet --filter "label=com.docker.ucp.version" | xargs docker inspect --format '{{ .Name }}: Memory={{ .HostConfig.Memory }}'
If the above command returns "0", the memory limits are not in place, and this is a finding.
2. Check CPU Priority:
Linux: As an MKE Admin, execute the following command using a Universal Control Plane (MKE) client bundle:
docker ps --quiet --filter "label=com.docker.ucp.version" | xargs docker inspect --format '{{ .Name }}: CpuShares={{ .HostConfig.CpuShares }}'
Compare the output against the SSP. If any containers are set to "0" or "1024" and they are not documented in the SSP, this is a finding.
Fix: F-64562r966131_fix
Set Resource Quotas and CPU priority for each namespace.

When using Kubernetes orchestration:
1. Create a resource quota as follows (quotaexample.yaml):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
The limits can be set according to the SSP. Save this file.
2. Apply the quota to a namespace within the cluster by executing:
kubectl apply -f [full path to quotaexample.yaml] --namespace=[name of namespace on cluster]
This must be repeated for all namespaces. Quotas can differ per namespace as required by the site.

When using Swarm orchestration:
1. Set Resource Quotas by executing the following:
docker update --memory="2g" [container name]
This must be repeated for all containers. Quotas can differ per container as required by the site.
2. Set CPU Priority:
When using Swarm orchestration to manage the CPU shares between containers, start the container using the --cpu-shares argument. For example, run a container as below:
docker run --interactive --tty --cpu-shares 512 [image] [command]
In the above example, the container is started with CPU shares of 50 percent of what the other containers use. So, if the other container has CPU shares of 80 percent, this container will have CPU shares of 40 percent.
Note: Every new container has 1024 CPU shares by default. However, this value is shown as "0" when running the command mentioned in the audit section.
Alternatively:
1. Navigate to the /sys/fs/cgroup/cpu/system.slice/ directory.
2. Check the container instance ID using docker ps.
3. Inside the directory from step 1, there will be a directory called docker-<Instance ID>.scope, for example, docker-4acae729e8659c6be696ee35b2237cc1fe4edd2672e9186434c5116e1a6fbed6.scope. Navigate to this directory.
4. Find a file named cpu.shares and execute cat cpu.shares. This will always show the CPU share value based on the system. Even if there are no CPU shares configured using the -c or --cpu-shares argument in the docker run command, this file will have a value of 1024.
By setting one container's CPU shares to 512, it will receive half of the CPU time compared to a container at the default. Take 1024 as 100 percent and derive the number to set for the respective CPU shares. For example, use 512 to set 50 percent and 256 to set 25 percent.
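The Swarm audit output above lends itself to mechanical screening. A minimal sketch, assuming the "Name: Memory=N" / "Name: CpuShares=N" line format produced by the inspect commands in the check; the flag_unlimited helper name is illustrative, not part of the STIG:

```shell
# Illustrative helper (not part of the STIG): print the names of containers
# whose inspect output shows Memory=0 (no limit) or CpuShares=0/1024 (the
# scheduler default), i.e., the cases the check flags for SSP review.
flag_unlimited() {
  awk -F': ' '/Memory=0$|CpuShares=(0|1024)$/ { print $1 }'
}

# Canned sample; real usage pipes the docker commands from the check in:
printf '%s\n' '/app1: Memory=0' '/app2: Memory=2147483648' | flag_unlimited   # prints: /app1
```

Containers it prints either need limits applied or an explicit SSP entry documenting the exception.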
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-MK-000600
- Vuln IDs
-
- V-260926
- Rule IDs
-
- SV-260926r966135_rule
Checks: C-64655r966133_chk
The default storage driver for MCR is overlay2. To confirm this has not been changed, via CLI: As a trusted user on the underlying host operating system, execute the following command:
docker info | grep -e "Storage Driver:"
If the Storage Driver setting is "aufs" or "btrfs", this is a finding. If the above command returns no values, this is not a finding.
Fix: F-64563r966134_fix
Modify the Storage Driver setting via CLI as a trusted user on the underlying host operating system:
1. If the "/etc/docker/daemon.json" file does not exist, create it.
2. Edit "/etc/docker/daemon.json" and set the "storage-driver" property to a value that is not "aufs" or "btrfs":
{
  "storage-driver": "overlay2"
}
3. Restart the Docker daemon by executing the following:
sudo systemctl restart docker
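The check and fix above can be combined into a small validation step. A sketch, assuming the "Storage Driver:" line format shown by docker info; check_storage_driver is an illustrative name:

```shell
# Illustrative helper: read "docker info" output on stdin and fail when a
# disallowed storage driver (aufs or btrfs) is configured.
check_storage_driver() {
  driver=$(sed -n 's/^ *Storage Driver: *//p')
  case "$driver" in
    aufs|btrfs) echo "FINDING: disallowed storage driver $driver"; return 1 ;;
    *)          echo "OK: storage driver ${driver:-not reported}" ;;
  esac
}

# Real usage: docker info | check_storage_driver
printf ' Storage Driver: overlay2\n' | check_storage_driver   # prints: OK: storage driver overlay2
```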
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000381
- Version
- CNTR-MK-000610
- Vuln IDs
-
- V-260927
- Rule IDs
-
- SV-260927r966138_rule
Checks: C-64656r966136_chk
If Kubernetes ingress is being used, this is Not Applicable. Check that MKE has been integrated with a trusted certificate authority (CA). Log in to the MKE web UI and navigate to admin >> Admin Settings >> Certificates. Click "Download MKE Server CA Certificate". Verify that the contents of the downloaded "ca.pem" file match that of the trusted CA certificate. If the certificate chain does not match the chain as defined by the System Security Plan (SSP), then this is a finding.
Fix: F-64564r966137_fix
If Kubernetes ingress is being used, this is Not Applicable. Integrate MKE and MSR (if used) with a trusted certificate authority (CA). Log in to the MKE web UI and navigate to admin >> Admin Settings >> Certificates. Either fill in the "CA Certificate" field with the contents of the external public CA certificate or upload a file. Either fill in the "Server Certificate" and "Private Key" fields with the contents of the public/private certificates or upload a file. The "Server Certificate" field must include both the MKE server certificate and any intermediate certificates. Click "Save".
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000381
- Version
- CNTR-MK-000620
- Vuln IDs
-
- V-260928
- Rule IDs
-
- SV-260928r966141_rule
Checks: C-64657r966139_chk
If MSR is not being utilized, this is Not Applicable. Verify the "Create repository on push" option is disabled in MSR: Log in to the MSR web UI as an administrator and navigate to System >> General Tab >> Repositories Section. Verify the "Create repository on push" slider is turned off. If it is turned on, this is a finding.
Fix: F-64565r966140_fix
If MSR is not being utilized, this is Not Applicable. Disable the "Create repository on push" option in MSR: Log in to the MSR web UI as an administrator and navigate to System >> General Tab >> Repositories Section. Set the "Create repository on push" slider to off.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-MK-000650
- Vuln IDs
-
- V-260929
- Rule IDs
-
- SV-260929r966144_rule
Checks: C-64658r966142_chk
This check must be executed on all nodes in an MKE cluster. Verify no running containers are mapping host port numbers below 1024. Via CLI: Linux: Execute the following command as a trusted user on the host operating system: docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}' Review the list and ensure container ports are not mapped to host port numbers below 1024. If they are, then this is a finding.
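That output can be screened mechanically. A sketch, assuming docker's Go-map rendering of port bindings where each host binding appears as "{IP PORT}"; low_host_ports is an illustrative name:

```shell
# Illustrative helper: extract host ports from "Ports=map[...]" audit lines
# and print any port below 1024 (the privileged range the check forbids).
low_host_ports() {
  grep -oE '\{[0-9.:]+ [0-9]+\}' | awk '{ p=$2; sub(/}/,"",p); if (p+0 < 1024) print p }'
}

# Canned sample; real usage pipes the docker command from the check in:
printf 'abc: Ports=map[80/tcp:[{0.0.0.0 80}] 8443/tcp:[{0.0.0.0 8443}]]\n' | low_host_ports   # prints: 80
```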
Fix: F-64566r966143_fix
To edit container ports, log in to the MKE web UI and navigate to Shared Resources >> Containers. - Locate the container with the incorrect port mapping. - Click on the container name and stop the container by clicking the three dots in the upper right corner. - Scroll down to Ports and check if ports have been manually assigned. - Edit the port to a nonprivileged port.
- RMF Control
- IA-2
- Severity
- M
- CCI
- CCI-000764
- Version
- CNTR-MK-000680
- Vuln IDs
-
- V-260930
- Rule IDs
-
- SV-260930r966147_rule
Checks: C-64659r966145_chk
When using Kubernetes orchestration, this check is Not Applicable. When using Swarm orchestration, to ensure the host's process namespace is not shared, log in via CLI and execute the following using the MKE client bundle:
container_ids=$(docker ps --quiet --filter=label=com.docker.ucp.version)
for container_id in $container_ids
do
  container_name=$(docker inspect -f '{{.Name}}' $container_id | cut -c2-)
  pid_mode=$(docker inspect -f '{{.HostConfig.PidMode}}' $container_id)
  echo "Container Name: $container_name, ID: $container_id, PidMode: $pid_mode"
done
If PidMode = "host", this is a finding.
Fix: F-64567r966146_fix
When using Kubernetes orchestration, this check is Not Applicable. Using Swarm orchestration, review and remove nonsystem containers previously created by these users utilizing shared namespaces or with a PidMode=host using the following: docker container rm [container]
- RMF Control
- IA-3
- Severity
- M
- CCI
- CCI-000778
- Version
- CNTR-MK-000770
- Vuln IDs
-
- V-260931
- Rule IDs
-
- SV-260931r966150_rule
Checks: C-64660r966148_chk
Verify IPSec network encryption. For Swarm orchestration, log in to the MKE web UI and navigate to Swarm >> Networks. If the "scope" is not local and the "driver" is not overlay, this is a finding.
For Kubernetes orchestration (note: the path may need to be edited):
cat /etc/mke/config.toml | grep secure_overlay
If the "secure_overlay" setting is not set to "true", this is a finding.
Fix: F-64568r966149_fix
To configure IPSec network encryption in Swarm orchestration, create an overlay network with the --opt encrypted flag. Example:
docker network create --opt encrypted --driver overlay my-network
To configure IPSec network encryption in Kubernetes orchestration, modify the existing MKE configuration. Working as an MKE admin, use the config-toml API from within the directory of the client certificate bundle to export the current MKE settings to a TOML file (mke-config.toml).
1. Define the following environment variables:
export MKE_USERNAME=<mke-username>
export MKE_PASSWORD=<mke-password>
export MKE_HOST=<mke-fqdn-or-ip-address>
2. Obtain and define an AUTHTOKEN environment variable by executing the following:
AUTHTOKEN=$(curl --silent --insecure --data '{"username":"'$MKE_USERNAME'","password":"'$MKE_PASSWORD'"}' https://$MKE_HOST/auth/login | jq --raw-output .auth_token)
3. Download the current MKE configuration file by executing the following:
curl --silent --insecure -X GET "https://$MKE_HOST/api/MKE/config-toml" -H "accept: application/toml" -H "Authorization: Bearer $AUTHTOKEN" > mke-config.toml
4. Modify the "secure_overlay" setting to "true".
5. Upload the newly edited MKE configuration file by executing the following:
curl --silent --insecure -X PUT -H "accept: application/toml" -H "Authorization: Bearer $AUTHTOKEN" --upload-file 'mke-config.toml' https://$MKE_HOST/api/MKE/config-toml
Note: Users may need to reacquire the AUTHTOKEN if significant time has passed since it was first obtained.
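Step 4 can be done with a one-line edit, assuming the exported mke-config.toml already contains a "secure_overlay = false" line (if the key is absent it must be added instead); set_secure_overlay is an illustrative name:

```shell
# Illustrative helper: rewrite "secure_overlay = false" to "true" on stdin,
# preserving any leading indentation.
set_secure_overlay() {
  sed 's/^\( *secure_overlay *= *\)false/\1true/'
}

# Real usage: set_secure_overlay < mke-config.toml > mke-config.new.toml
printf 'secure_overlay = false\n' | set_secure_overlay   # prints: secure_overlay = true
```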
- RMF Control
- SC-24
- Severity
- M
- CCI
- CCI-001665
- Version
- CNTR-MK-000980
- Vuln IDs
-
- V-260932
- Rule IDs
-
- SV-260932r966153_rule
Checks: C-64661r966151_chk
When using Swarm orchestration, this check is Not Applicable. Review the Kubernetes configuration to determine if information necessary to determine the cause of a disruption or failure is preserved.
Notes:
- The ReadWriteOnce access mode in the PVC means the volume can be mounted as read-write by a single node. Ensure the storage backend supports this mode.
- Adjust the sleep duration in the writer pod as needed.
- Ensure that the namespace and PVC names match the setup.
Steps to verify data durability:
1. Create a namespace to manage the testing:
apiVersion: v1
kind: Namespace
metadata:
  name: stig
2. PersistentVolumeClaim (PVC): Ensure a PVC is created. If using a storage class like Longhorn, it would look similar to:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: stig-pvc
  namespace: stig
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn # Replace with your storage class if different, e.g., NFS
  resources:
    requests:
      storage: 5Gi
3. Deploying the Initial Pod: Create a pod that writes data to the PVC. This pod will use a simple loop to write data (e.g., timestamps) to a file on the mounted PVC. Example:
apiVersion: v1
kind: Pod
metadata:
  name: write-pod
  namespace: stig
spec:
  volumes:
    - name: log-storage
      persistentVolumeClaim:
        claimName: stig-pvc
  containers:
    - name: writer
      image: busybox
      command: ["/bin/sh", "-c"]
      args: ["while true; do date >> /data/logs.log; sleep 10; done"]
      volumeMounts:
        - name: log-storage
          mountPath: /data
4. Simulate Pod Failure: After the pod has been writing data for some time, delete it to simulate a failure by executing the following:
kubectl delete pod write-pod -n stig
5. Deploying a New Pod to Verify Data: Deploy another pod that mounts the same PVC to verify that the data is still there.
apiVersion: v1
kind: Pod
metadata:
  name: read-pod
  namespace: stig
spec:
  volumes:
    - name: log-storage
      persistentVolumeClaim:
        claimName: stig-pvc
  containers:
    - name: reader
      image: busybox
      command: ["/bin/sh", "-c"]
      args: ["sleep infinity"]
      volumeMounts:
        - name: log-storage
          mountPath: /data
6. Verify Data Persistence: Check the contents of the log file in the new pod to ensure that the data written by the first pod is still there by executing the following:
kubectl exec read-pod -n stig -- cat /data/logs.log
If there is no log data, this is a finding.
Fix: F-64569r966152_fix
When using Swarm orchestration, this check is Not Applicable. This is a catastrophic error; contact Mirantis support.
- RMF Control
- SC-3
- Severity
- M
- CCI
- CCI-001084
- Version
- CNTR-MK-000990
- Vuln IDs
-
- V-260933
- Rule IDs
-
- SV-260933r966156_rule
Checks: C-64662r966154_chk
Verify kernel protection.
When using Kubernetes orchestration, change to the /etc/sysconfig/ directory on the Kubernetes Control Plane and execute the command:
grep -i protect-kernel-defaults kubelet
If the "protect-kernel-defaults" setting is set to false or not set in the Kubernetes kubelet configuration, this is a finding.
When using Swarm orchestration:
Linux: Execute the following command as a trusted user on the host operating system:
docker ps --quiet --all | xargs docker inspect --format '{{ .Name }}: CapAdd={{ .HostConfig.CapAdd }} CapDrop={{ .HostConfig.CapDrop }}'
The command will output all Linux kernel capabilities. If the Linux kernel capabilities exceed what is defined in the System Security Plan (SSP), this is a finding.
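The Swarm capability audit can be compared against the SSP mechanically. A sketch; ALLOWED below is a hypothetical SSP capability allowlist, not taken from the STIG, and audit_caps is an illustrative name:

```shell
# Illustrative helper: extract CapAdd entries from the audit lines above
# (e.g. "/app: CapAdd=[NET_ADMIN] CapDrop=[]") and flag any capability not
# in ALLOWED, a hypothetical SSP allowlist.
ALLOWED="NET_BIND_SERVICE CHOWN"
audit_caps() {
  grep -oE 'CapAdd=\[[^]]*\]' | tr -d '[]' | sed 's/CapAdd=//' | tr ' ' '\n' |
  while read -r cap; do
    [ -z "$cap" ] && continue
    case " $ALLOWED " in
      *" $cap "*) ;;                          # documented in the SSP
      *) echo "FINDING: $cap not in SSP" ;;
    esac
  done
}

# Canned sample; real usage pipes the docker command from the check in:
printf '/app: CapAdd=[NET_ADMIN] CapDrop=[]\n' | audit_caps   # prints: FINDING: NET_ADMIN not in SSP
```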
Fix: F-64570r966155_fix
When using Kubernetes orchestration, edit the Kubernetes kubelet file in the /etc/sysconfig directory on the Kubernetes Control Plane. Set the argument "--protect-kernel-defaults" to "true". Restart the kubelet service using the following command:
service kubelet restart
When using Swarm orchestration, review and remove nonsystem containers previously created by these users that add kernel capabilities beyond what the SSP allows, using:
docker container rm [container]
- RMF Control
- SC-4
- Severity
- M
- CCI
- CCI-001090
- Version
- CNTR-MK-001010
- Vuln IDs
-
- V-260934
- Rule IDs
-
- SV-260934r966159_rule
Checks: C-64663r966157_chk
This check must be executed on all nodes in an MKE cluster to ensure all containers are restricted from acquiring additional privileges. Via CLI: Linux: As an MKE Admin, execute the following command using a Universal Control Plane (MKE) client bundle: docker ps --quiet --all | xargs -L 1 docker inspect --format '{{ .Id }}: SecurityOpt={{ .HostConfig.SecurityOpt }}' The above command returns the security options currently configured for the running containers. If the "SecurityOpt=" setting does not include the "no-new-privileges" flag, this is a finding.
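A sketch for isolating the failing containers from that output, assuming the "Id: SecurityOpt=[...]" line format; missing_nnp is an illustrative name:

```shell
# Illustrative helper: print the IDs of containers whose SecurityOpt list
# does not include the no-new-privileges flag.
missing_nnp() {
  grep -v 'no-new-privileges' | awk -F': ' '{ print $1 }'
}

# Canned sample; real usage pipes the docker command from the check in:
printf '%s\n' 'abc: SecurityOpt=[no-new-privileges]' 'def: SecurityOpt=[]' | missing_nnp   # prints: def
```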
Fix: F-64571r966158_fix
Start the containers using the following: docker run --rm -it --security-opt=no-new-privileges <image> A reference for the Docker run command can be found at https://docs.docker.com/engine/reference/run/. no-new-privileges command information can be found here: https://docs.mirantis.com/mke/3.7/install/plan-deployment/mcr-considerations/no-new-privileges.html.
- RMF Control
- SC-4
- Severity
- M
- CCI
- CCI-001090
- Version
- CNTR-MK-001020
- Vuln IDs
-
- V-260935
- Rule IDs
-
- SV-260935r966162_rule
Checks: C-64664r966160_chk
Check if the "IpcMode" is set to "host" for a running or stopped container. Log in to the MKE web UI and navigate to admin >> Admin Settings >> Privileges. If hostIPC is checked for User account privileges or Service account privileges, consult the System Security Plan (SSP). If hostIPC is not allowed per the SSP, this is a finding.
Fix: F-64572r966161_fix
Modify IpcMode for a container by logging in to the MKE web UI and navigating to admin >> Admin Settings >> Privileges.
- Uncheck hostIPC.
- Click "Save".
- RMF Control
- AC-6
- Severity
- M
- CCI
- CCI-002233
- Version
- CNTR-MK-001160
- Vuln IDs
-
- V-260936
- Rule IDs
-
- SV-260936r966165_rule
Checks: C-64665r966163_chk
When using Kubernetes orchestration, this check is Not Applicable. For Swarm orchestration, check via CLI: Linux: As an MKE Admin, execute the following command using a Universal Control Plane (MKE) client bundle:
docker ps --quiet --all | xargs -L 1 docker inspect --format '{{ .Name }}: ReadonlyRootfs={{ .HostConfig.ReadonlyRootfs }}'
If ReadonlyRootfs=false, the container's root filesystem is writable, and this is a finding.
Fix: F-64573r966164_fix
When using Kubernetes orchestration, this check is Not Applicable.
When using Swarm orchestration, review and remove nonsystem containers previously created by these users with writable root filesystems using:
docker container rm [container]
Add a --read-only flag at a container's runtime to force the container's root filesystem to be mounted as read only:
docker run <Run arguments> --read-only <Container Image Name or ID> <Command>
Enabling the --read-only option at a container's runtime must be used by administrators to force a container's executable processes to only write container data to explicit storage locations during the container's runtime. Examples of explicit storage locations during a container's runtime include, but are not limited to:
1. Use the --tmpfs option to mount a temporary file system for nonpersistent data writes. Example:
docker run --interactive --tty --read-only --tmpfs "/run" --tmpfs "/tmp" [image] [command]
2. Enable Docker rw mounts at a container's runtime to persist container data directly on the Docker host filesystem. Example:
docker run --interactive --tty --read-only -v /opt/app/data:/run/app/data:rw [image] [command]
3. Utilize Docker shared-storage volume plugins for Docker data volumes to persist container data. Example:
docker volume create -d convoy --opt o=size=20GB my-named-volume
docker run --interactive --tty --read-only -v my-named-volume:/run/app/data [image] [command]
- RMF Control
- AC-6
- Severity
- M
- CCI
- CCI-002233
- Version
- CNTR-MK-001170
- Vuln IDs
-
- V-260937
- Rule IDs
-
- SV-260937r966168_rule
Checks: C-64666r966166_chk
When using Kubernetes orchestration, this check is Not Applicable. For Swarm orchestration, to ensure the default seccomp profile is not disabled, log in to the CLI: Linux: As an MKE Admin, execute the following command using a Universal Control Plane (MKE) client bundle:
docker ps --quiet --filter "label=com.docker.ucp.version" | xargs docker inspect --format '{{ .Id }}: SecurityOpt={{ .HostConfig.SecurityOpt }}'
If the output contains "seccomp=unconfined", the container is running without any seccomp profile, and this is a finding.
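A sketch for surfacing offending containers from that output; it matches both the "seccomp=unconfined" and the older "seccomp:unconfined" spelling, and seccomp_findings is an illustrative name:

```shell
# Illustrative helper: print containers whose SecurityOpt output shows
# seccomp disabled; matches "seccomp=unconfined" and "seccomp:unconfined".
seccomp_findings() {
  grep -E 'seccomp[:=]unconfined' | awk -F': ' '{ print $1 " runs unconfined" }'
}

# Canned sample; real usage pipes the docker command from the check in:
printf 'abc: SecurityOpt=[seccomp=unconfined]\n' | seccomp_findings   # prints: abc runs unconfined
```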
Fix: F-64574r966167_fix
When using Kubernetes orchestration, this check is Not Applicable. When using Swarm orchestration, do not pass unconfined flags to run a container without the default seccomp profile. Refer to seccomp documentation for details: https://docs.docker.com/engine/security/seccomp/.
- RMF Control
- AC-6
- Severity
- M
- CCI
- CCI-002233
- Version
- CNTR-MK-001180
- Vuln IDs
-
- V-260938
- Rule IDs
-
- SV-260938r986163_rule
Checks: C-64667r966169_chk
The host OS must be locked down so that only authorized users with a client bundle can access docker commands. To ensure that no commands with privilege or user authorizations are present, via CLI: Linux: As a trusted user on the host operating system, use the command below to filter out docker exec commands that used the --privileged or --user option:
sudo ausearch -k docker | grep exec | grep -E -- '--privileged|--user'
If there are any in the output, this is a finding.
Fix: F-64575r966170_fix
Docker CLI commands must only be run with a client bundle and must not use the --privileged or --user options. Refer to https://docs.mirantis.com/mke/3.7/ops/access-cluster/client-bundle/configure-client-bundle.html?highlight=client%20bundle.
- RMF Control
- AC-6
- Severity
- M
- CCI
- CCI-002233
- Version
- CNTR-MK-001200
- Vuln IDs
-
- V-260939
- Rule IDs
-
- SV-260939r966174_rule
Checks: C-64668r966172_chk
When using Kubernetes orchestration, this check is Not Applicable. When using Swarm orchestration, ensure the host's user namespace is not shared with containers. Log in to the CLI as an MKE Admin and execute the following command using a Universal Control Plane (MKE) client bundle:
docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: UsernsMode={{ .HostConfig.UsernsMode }}'
Ensure it does not return any value for UsernsMode. If it returns a value of "host", the host user namespace is shared with the containers, and this is a finding.
Fix: F-64576r966173_fix
When using Kubernetes orchestration, this check is Not Applicable. When using Swarm orchestration, review and remove nonsystem containers previously created by these users with UsernsMode set to "host" using:
docker container rm [container]
- RMF Control
- AC-6
- Severity
- M
- CCI
- CCI-002233
- Version
- CNTR-MK-001220
- Vuln IDs
-
- V-260940
- Rule IDs
-
- SV-260940r966177_rule
Checks: C-64669r966175_chk
When using Kubernetes orchestration, this check is Not Applicable. When using Swarm orchestration, execute the following command as a trusted user on the host operating system via CLI: docker ps --quiet --all | grep -iv "MKE\|kube\|dtr" | awk '{print $1}' | xargs docker inspect --format '{{ .Id }}: Privileged={{ .HostConfig.Privileged }}' Verify in the output that no containers are running with the --privileged flag. If there are, this is a finding.
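A sketch that turns that audit into a pass/fail exit status; privileged_check is an illustrative name, not part of the STIG:

```shell
# Illustrative helper: read the Privileged= audit output on stdin; exit
# nonzero when any container reports Privileged=true.
privileged_check() {
  findings=$(grep -c 'Privileged=true' || true)
  if [ "$findings" -gt 0 ]; then
    echo "FINDING: $findings privileged container(s)"
    return 1
  fi
  echo "OK: no privileged containers"
}

# Canned sample; real usage pipes the docker command from the check in:
printf '%s\n' 'a: Privileged=false' 'b: Privileged=true' | privileged_check || true
```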
Fix: F-64577r966176_fix
When using Kubernetes orchestration, this check is Not Applicable. Review and remove nonsystem containers previously created by these users that allowed privileged execution using: docker container rm [container]
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-001762
- Version
- CNTR-MK-001360
- Vuln IDs
-
- V-260941
- Rule IDs
-
- SV-260941r966180_rule
Checks: C-64670r966178_chk
Verify that only needed ports are open on all running containers. If an ingress controller is configured for the cluster, this check is not applicable. Via CLI: As a remote MKE admin, execute the following command using a client bundle: docker ps -q | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}' Review the list and ensure that the ports mapped are the ones really needed for the containers per the requirements set forth by the System Security Plan (SSP). If ports are not documented and approved in the SSP, this is a finding.
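A sketch comparing mapped host ports against the SSP; SSP_PORTS below is a hypothetical allowlist, and the parsing assumes docker's "{IP PORT}" host-binding rendering (unapproved_ports is an illustrative name):

```shell
# Illustrative helper: extract host ports from "Ports=map[...]" audit lines
# and flag any port absent from SSP_PORTS, a hypothetical SSP allowlist.
SSP_PORTS="443 8443"
unapproved_ports() {
  grep -oE '\{[0-9.:]+ [0-9]+\}' | awk '{ sub(/}/,"",$2); print $2 }' | sort -u |
  while read -r p; do
    case " $SSP_PORTS " in
      *" $p "*) ;;                              # documented in the SSP
      *) echo "FINDING: port $p not in SSP" ;;
    esac
  done
}

# Canned sample; real usage pipes the docker command from the check in:
printf 'abc: Ports=map[80/tcp:[{0.0.0.0 8080}]]\n' | unapproved_ports   # prints: FINDING: port 8080 not in SSP
```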
Fix: F-64578r966179_fix
Configuring an ingress controller is the preferred method to manage external ports. If an ingress controller is not used and unnecessary ports are in use, the container or pod network configurations must be updated.
To update a pod's configuration, log in to the MKE UI as an administrator. Navigate to Kubernetes >> Pods and click the pod with an open port that is not allowed. Click the three dots in the upper right corner (edit). Modify the .yaml file to remove the port. Example:
spec:
  containers:
    - name: [pod name]
      ports:
        - containerPort: 80 [replace with 443]
Click "Save".
For a Swarm service, navigate to Swarm >> Services and click on the service with the unauthorized port. Click the three dots in the top left corner. Select "Network" in the pop-up and remove the unauthorized port. Click "Save".
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-001774
- Version
- CNTR-MK-001380
- Vuln IDs
-
- V-260942
- Rule IDs
-
- SV-260942r986164_rule
Checks: C-64671r966181_chk
On each node, check that MKE is configured to only run images signed by applicable Orgs and Teams.
1. Log in to the MKE web UI and navigate to admin >> Admin Settings >> Docker Content Trust. If the Content Trust Settings "Run only signed images" option is disabled, this is a finding.
2. Verify that the Orgs and Teams that images must be signed by in the drop-down match the organizational policies. If an Org or Team selected does not match organizational policies, this is a finding.
3. Verify that all images sitting on an MKE cluster are signed. Via CLI: Linux: As an MKE Admin, execute the following command using a client bundle:
docker trust inspect $(docker images --format '{{.Repository}}:{{.Tag}}')
Verify that all image tags in the output have valid signatures. If the images are not signed, this is a finding.
Fix: F-64579r966182_fix
On each node, enable Content Trust enforcement in MKE. 1. Log in to the MKE web UI and navigate to admin >> Admin Settings >> Docker Content Trust. Under Content Trust Settings section, enable "Run only signed images". 2. Log in to the MKE web UI and navigate to admin >> Admin Settings >> Docker Content Trust. Click "Add Team +" and set the appropriate Orgs and Teams that must sign images. Use the drop-down ("v") that follows to match the organizational policies. Remove any unwanted teams by clicking the minus symbol. Click "Save". 3. Manually remove any unsigned images sitting on an MKE cluster by executing the following: docker rmi <IMAGE_ID>
- RMF Control
- RA-5
- Severity
- M
- CCI
- CCI-001067
- Version
- CNTR-MK-001490
- Vuln IDs
-
- V-260943
- Rule IDs
-
- SV-260943r966186_rule
Checks: C-64672r966184_chk
If MSR is not being utilized, this is Not Applicable. Check that image vulnerability scanning is enabled for all repositories. Log in to the MSR web UI and navigate to System >> Security Tab. Verify that the "Enable Scanning" slider is turned on and the vulnerability database has been successfully synced (online) or uploaded (offline). If the "Enable Scanning" slider is turned off, this is a finding. If the vulnerability database is not synced or uploaded, this is a finding.
Fix: F-64580r966185_fix
If MSR is not being utilized, this is Not Applicable. Enable vulnerability scanning on the MSR UI by logging in to the MSR web UI and navigating to System >> Security Tab. Click the "Enable Scanning" slider to enable this capability. Sync (online) or upload (offline) the vulnerability database.
- RMF Control
- SI-2
- Severity
- M
- CCI
- CCI-002617
- Version
- CNTR-MK-001600
- Vuln IDs
-
- V-260944
- Rule IDs
-
- SV-260944r966189_rule
Checks: C-64673r966187_chk
Verify all outdated MKE and DTR container images have been removed from all nodes in the cluster. Via CLI: As an MKE admin, execute the following commands using a client bundle:
docker images --filter reference='mirantis/ucp*'
docker images --filter reference='registry.mirantis.com/msr/[msr]*'
Verify there are no tags listed older than the currently installed versions of MKE and DTR. If any of the tags listed are older than the currently installed versions of MKE and DTR, this is a finding. If no tags are listed, this is not a finding.
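A sketch of that comparison; CURRENT is a placeholder for the installed version (3.7.5 here is illustrative), and the tag match is a plain string comparison, not a semantic-version compare:

```shell
# Illustrative helper: read "repository tag" pairs on stdin, e.g. from
#   docker images --format '{{.Repository}} {{.Tag}}' --filter reference='mirantis/ucp*'
# and print any image:tag that differs from the installed version.
CURRENT="3.7.5"   # hypothetical installed MKE version
outdated_tags() {
  awk -v cur="$CURRENT" '$2 != cur { print $1 ":" $2 }'
}

# Canned sample:
printf '%s\n' 'mirantis/ucp 3.7.5' 'mirantis/ucp 3.6.2' | outdated_tags   # prints: mirantis/ucp:3.6.2
```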
Fix: F-64581r966188_fix
Remove all outdated MKE and DTR container images from all nodes in the cluster: Via CLI: As an MKE admin, execute the following commands using a client bundle: docker rmi -f $(docker images --filter reference='mirantis/ucp*:[outdated_tags]' -q) docker rmi -f $(docker images --filter reference='registry.mirantis.com/msr/[msr]*:[outdated_tags]' -q)
- RMF Control
- SI-2
- Severity
- M
- CCI
- CCI-002605
- Version
- CNTR-MK-001630
- Vuln IDs
-
- V-260945
- Rule IDs
-
- SV-260945r966192_rule
Checks: C-64674r966190_chk
Check for updates by logging in to the MKE web UI and navigating to admin >> Admin Settings >> Upgrade. In the "Choose MKE Version" section, select the drop-down. The UI will provide a list of available versions. If an updated version is available in the list, this is a finding.
Fix: F-64582r966191_fix
Note: It is advisable to review the release notes to understand what changes and improvements come with the new version. Log in to the MKE web UI and navigate to admin >> Admin Settings >> Upgrade. In the "Choose MKE Version" section, select the drop-down. Follow the on-screen instructions to start the upgrade.
- RMF Control
- AC-8
- Severity
- L
- CCI
- CCI-000048
- Version
- CNTR-MK-000170
- Vuln IDs
-
- V-260946
- Rule IDs
-
- SV-260946r966345_rule
Checks: C-64675r966193_chk
Review the MKE configuration to determine if the Standard Mandatory DOD Notice and Consent Banner is configured to be displayed before granting access to platform components. Log in to MKE and verify that the Standard Mandatory DOD Notice and Consent Banner is being displayed before granting access. If the Standard Mandatory DOD Notice and Consent Banner is not configured or is not displayed before granting access to MKE, this is a finding.
Fix: F-64583r966345_fix
Configure MKE to display the Standard Mandatory DOD Notice and Consent Banner before granting access to MKE by modifying the existing MKE configuration. Working as an MKE admin, use the config-toml API from within the directory of the client certificate bundle to export the current MKE settings to a TOML file (mke-config.toml).
1. Define the following environment variables:
export MKE_USERNAME=<mke-username>
export MKE_PASSWORD=<mke-password>
export MKE_HOST=<mke-fqdn-or-ip-address>
2. Obtain and define an AUTHTOKEN environment variable by executing the following:
AUTHTOKEN=$(curl --silent --insecure --data '{"username":"'$MKE_USERNAME'","password":"'$MKE_PASSWORD'"}' https://$MKE_HOST/auth/login | jq --raw-output .auth_token)
3. Download the current MKE configuration file by executing the following:
curl --silent --insecure -X GET "https://$MKE_HOST/api/MKE/config-toml" -H "accept: application/toml" -H "Authorization: Bearer $AUTHTOKEN" > mke-config.toml
4. Edit the MKE configuration (mke-config.toml) and modify the "pre_logon_message" setting to match the Standard Mandatory DOD Notice and Consent Banner. Example:
pre_logon_message = "You are accessing a U.S. Government (USG) Information System (IS) that is provided for USG-authorized use only. 
By using this IS (which includes any device attached to this IS), you consent to the following conditions:\n\n-The USG routinely intercepts and monitors communications on this IS for purposes including, but not limited to, penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM), law enforcement (LE), and counterintelligence (CI) investigations.\n\n-At any time, the USG may inspect and seize data stored on this IS.\n\n-Communications using, or data stored on, this IS are not private, are subject to routine monitoring, interception, and search, and may be disclosed or used for any USG-authorized purpose.\n\n-This IS includes security measures (e.g., authentication and access controls) to protect USG interests--not for your personal benefit or privacy.\n\n-Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching or monitoring of the content of privileged communications, or work product, related to personal representation or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work product are private and confidential. See User Agreement for details."
5. Upload the edited MKE configuration file by executing the following:
curl --silent --insecure -X PUT -H "accept: application/toml" -H "Authorization: Bearer $AUTHTOKEN" --upload-file 'mke-config.toml' https://$MKE_HOST/api/MKE/config-toml
Note: Users may need to reacquire the AUTHTOKEN if significant time has passed since it was first acquired.
6. Log in to MKE and verify that the Standard Mandatory DOD Notice and Consent Banner is being displayed before granting access.