Kubernetes Security Technical Implementation Guide
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000150
- Vuln IDs
-
- CNTR-K8-000150
- Rule IDs
-
- CNTR-K8-000150_rule
Checks: C-CNTR-K8-000150_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i tls-min-version * If the setting “tls-min-version” is not set in the Kubernetes Controller Manager manifest file or it is set to “VersionTLS10” or “VersionTLS11”, this is a finding.
Fix: F-CNTR-K8-000150_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--tls-min-version” to either “VersionTLS12” or “VersionTLS13”.
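Most manifest checks in this guide follow the same grep pattern. As a sketch of the check logic (the file name and manifest contents below are hypothetical samples, not taken from a live cluster):

```shell
# Create a throwaway directory with a sample controller-manager manifest
# (hypothetical contents, for illustration only).
workdir=$(mktemp -d)
cat > "$workdir/kube-controller-manager.manifest" <<'EOF'
    - --tls-min-version=VersionTLS12
EOF
cd "$workdir"

# The STIG check: the flag must be present and must not permit TLS 1.0/1.1.
tlsmin=$(grep -i tls-min-version ./* | sed 's/.*--tls-min-version=//')
if [ -z "$tlsmin" ] || [ "$tlsmin" = "VersionTLS10" ] || [ "$tlsmin" = "VersionTLS11" ]; then
  result="FINDING"
else
  result="NOT A FINDING"
fi
echo "$result"
```

On a real Master Node the grep would run against /etc/kubernetes/manifests/ instead of the scratch directory.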
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000160
- Vuln IDs
-
- CNTR-K8-000160
- Rule IDs
-
- CNTR-K8-000160_rule
Checks: C-CNTR-K8-000160_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i tls-min-version * If the setting “tls-min-version” is not set in the Kubernetes Scheduler manifest file or it is set to “VersionTLS10” or “VersionTLS11”, this is a finding.
Fix: F-CNTR-K8-000160_fix
Edit the Kubernetes Scheduler manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--tls-min-version” to either “VersionTLS12” or “VersionTLS13”.
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000170
- Vuln IDs
-
- CNTR-K8-000170
- Rule IDs
-
- CNTR-K8-000170_rule
Checks: C-CNTR-K8-000170_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i tls-min-version * If the setting “tls-min-version” is not set in the Kubernetes API Server manifest file or it is set to “VersionTLS10” or “VersionTLS11”, this is a finding.
Fix: F-CNTR-K8-000170_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--tls-min-version” to either “VersionTLS12” or “VersionTLS13”.
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000180
- Vuln IDs
-
- CNTR-K8-000180
- Rule IDs
-
- CNTR-K8-000180_rule
Checks: C-CNTR-K8-000180_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i auto-tls * If the setting “auto-tls” is not set in the Kubernetes etcd manifest file or it is set to “true”, this is a finding.
Fix: F-CNTR-K8-000180_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--auto-tls” to “false”.
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000190
- Vuln IDs
-
- CNTR-K8-000190
- Rule IDs
-
- CNTR-K8-000190_rule
Checks: C-CNTR-K8-000190_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i peer-auto-tls * If the setting “peer-auto-tls” is not set in the Kubernetes etcd manifest file or it is set to “true”, this is a finding.
Fix: F-CNTR-K8-000190_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--peer-auto-tls” to “false”.
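CNTR-K8-000180 and CNTR-K8-000190 inspect two flags in the same etcd manifest, so they can be verified together. A sketch over a hypothetical manifest fragment:

```shell
# Sample etcd manifest fragment (hypothetical) with both auto-TLS flags disabled.
workdir=$(mktemp -d)
cat > "$workdir/etcd.manifest" <<'EOF'
    - --auto-tls=false
    - --peer-auto-tls=false
EOF
cd "$workdir"

# Both settings must be present and explicitly false.
ok=yes
for flag in auto-tls peer-auto-tls; do
  value=$(grep -i -- "--$flag=" ./* | sed "s/.*--$flag=//" | head -n 1)
  [ "$value" = "false" ] || ok=no
done
echo "auto-TLS disabled: $ok"
```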
- RMF Control
- AC-2
- Severity
- H
- CCI
- CCI-000015
- Version
- CNTR-K8-000220
- Vuln IDs
-
- CNTR-K8-000220
- Rule IDs
-
- CNTR-K8-000220_rule
Checks: C-CNTR-K8-000220_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i use-service-account-credentials * If the setting “use-service-account-credentials” is not set in the Kubernetes Controller Manager manifest file or it is set to “false”, this is a finding.
Fix: F-CNTR-K8-000220_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--use-service-account-credentials” to “true”.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000270
- Vuln IDs
-
- CNTR-K8-000270
- Rule IDs
-
- CNTR-K8-000270_rule
Checks: C-CNTR-K8-000270_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i authorization-mode * If the setting “authorization-mode” is not set in the Kubernetes API Server manifest file or is not set to “Node”, this is a finding.
Fix: F-CNTR-K8-000270_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--authorization-mode” to “Node”.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000280
- Vuln IDs
-
- CNTR-K8-000280
- Rule IDs
-
- CNTR-K8-000280_rule
Checks: C-CNTR-K8-000280_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i authorization-mode * If the setting “authorization-mode” is not set in the Kubernetes API Server manifest file or is not set to “RBAC”, this is a finding.
Fix: F-CNTR-K8-000280_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--authorization-mode” to “RBAC”.
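CNTR-K8-000270 and CNTR-K8-000280 inspect the same flag, so in practice the API Server carries a combined value such as --authorization-mode=Node,RBAC. A sketch of checking both modes at once (the sample manifest contents are hypothetical):

```shell
# Sample API Server manifest fragment (hypothetical).
workdir=$(mktemp -d)
cat > "$workdir/kube-apiserver.manifest" <<'EOF'
    - --authorization-mode=Node,RBAC
EOF
cd "$workdir"

# Extract the comma-separated mode list and confirm Node and RBAC are both enabled.
modes=$(grep -i authorization-mode ./* | sed 's/.*=//')
node_ok=no; rbac_ok=no
case ",$modes," in *,Node,*) node_ok=yes ;; esac
case ",$modes," in *,RBAC,*) rbac_ok=yes ;; esac
echo "Node=$node_ok RBAC=$rbac_ok"
```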
- RMF Control
- CM-6
- Severity
- H
- CCI
- CCI-000366
- Version
- CNTR-K8-000290
- Vuln IDs
-
- CNTR-K8-000290
- Rule IDs
-
- CNTR-K8-000290_rule
Checks: C-CNTR-K8-000290_chk
To view the available namespaces, run the command: kubectl get namespaces The default namespaces to be validated are default, kube-public, and kube-node-lease (if it has been created). For the default namespace, execute the commands: kubectl config set-context --current --namespace=default kubectl get all For the kube-public namespace, execute the commands: kubectl config set-context --current --namespace=kube-public kubectl get all For the kube-node-lease namespace, execute the commands: kubectl config set-context --current --namespace=kube-node-lease kubectl get all The only valid return values are the kubernetes service (i.e., service/kubernetes) and nothing at all. If a return value is returned from the "kubectl get all" command and it is not the kubernetes service (i.e., service/kubernetes), this is a finding.
Fix: F-CNTR-K8-000290_fix
Move any user-managed resources from the default, kube-public and kube-node-lease namespaces, to user namespaces.
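The filtering step of this check can be scripted. The snippet below works on simulated "kubectl get all" output (the resource names are hypothetical); on a live cluster, capture the real command output instead:

```shell
# Simulated "kubectl get all" output for a default namespace
# (hypothetical resource names, for illustration only).
sample='service/kubernetes   ClusterIP   10.96.0.1   443/TCP
pod/stray-debug-pod   1/1   Running'

# Anything other than the built-in kubernetes service is user-managed
# and therefore a finding.
stray=$(printf '%s\n' "$sample" | grep -v '^service/kubernetes' || true)
if [ -n "$stray" ]; then result="FINDING"; else result="NOT A FINDING"; fi
echo "$result"
```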
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000300
- Vuln IDs
-
- CNTR-K8-000300
- Rule IDs
-
- CNTR-K8-000300_rule
Checks: C-CNTR-K8-000300_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i bind-address * If the setting “bind-address” is not set to “127.0.0.1” or is not found in the Kubernetes Scheduler manifest file, this is a finding.
Fix: F-CNTR-K8-000300_fix
Edit the Kubernetes Scheduler manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument “--bind-address” to “127.0.0.1”.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000310
- Vuln IDs
-
- CNTR-K8-000310
- Rule IDs
-
- CNTR-K8-000310_rule
Checks: C-CNTR-K8-000310_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i bind-address * If the setting bind-address is not set to “127.0.0.1” or is not found in the Kubernetes Controller Manager manifest file, this is a finding.
Fix: F-CNTR-K8-000310_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument “--bind-address” to “127.0.0.1”.
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000320
- Vuln IDs
-
- CNTR-K8-000320
- Rule IDs
-
- CNTR-K8-000320_rule
Checks: C-CNTR-K8-000320_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i insecure-port * If the setting insecure-port is not set to “0” or is not found in the Kubernetes API server manifest file, this is a finding.
Fix: F-CNTR-K8-000320_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument --insecure-port to “0”.
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000330
- Vuln IDs
-
- CNTR-K8-000330
- Rule IDs
-
- CNTR-K8-000330_rule
Checks: C-CNTR-K8-000330_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i read-only-port kubelet If the setting “read-only-port” is not set to “0” or is not set in the Kubernetes Kubelet, this is a finding.
Fix: F-CNTR-K8-000330_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument “--read-only-port” to “0”. Restart the kubelet service using the following command: service kubelet restart
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000340
- Vuln IDs
-
- CNTR-K8-000340
- Rule IDs
-
- CNTR-K8-000340_rule
Checks: C-CNTR-K8-000340_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i insecure-bind-address * If the setting insecure-bind-address is found and set to “localhost” in the Kubernetes API manifest file, this is a finding.
Fix: F-CNTR-K8-000340_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Remove the value for the --insecure-bind-address setting.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000350
- Vuln IDs
-
- CNTR-K8-000350
- Rule IDs
-
- CNTR-K8-000350_rule
Checks: C-CNTR-K8-000350_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i secure-port * If the setting secure-port is set to “0” or is not found in the Kubernetes API manifest file, this is a finding.
Fix: F-CNTR-K8-000350_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument --secure-port to a value greater than “0”.
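CNTR-K8-000320 and CNTR-K8-000350 pair up: the insecure port must be disabled and the secure port must be nonzero. A combined sketch over a hypothetical manifest fragment:

```shell
# Sample API Server manifest fragment (hypothetical port values).
workdir=$(mktemp -d)
cat > "$workdir/kube-apiserver.manifest" <<'EOF'
    - --insecure-port=0
    - --secure-port=6443
EOF
cd "$workdir"

insecure=$(grep -i insecure-port ./* | sed 's/.*=//')
secure=$(grep -i -- '--secure-port' ./* | sed 's/.*=//')
# Compliant when the insecure port is disabled and the secure port is greater than 0.
if [ "$insecure" = "0" ] && [ "$secure" -gt 0 ]; then result="compliant"; else result="FINDING"; fi
echo "$result"
```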
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000360
- Vuln IDs
-
- CNTR-K8-000360
- Rule IDs
-
- CNTR-K8-000360_rule
Checks: C-CNTR-K8-000360_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i anonymous-auth * If the setting anonymous-auth is set to “true” in the Kubernetes API Server manifest file, this is a finding.
Fix: F-CNTR-K8-000360_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument --anonymous-auth to “false”.
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000370
- Vuln IDs
-
- CNTR-K8-000370
- Rule IDs
-
- CNTR-K8-000370_rule
Checks: C-CNTR-K8-000370_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i anonymous-auth kubelet If the setting “anonymous-auth” is set to “true” or the parameter is not set in the Kubernetes Kubelet, this is a finding.
Fix: F-CNTR-K8-000370_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig/ directory on the Kubernetes Master Node. Set the argument “--anonymous-auth” to “false”. Restart the kubelet service using the command: service kubelet restart
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000380
- Vuln IDs
-
- CNTR-K8-000380
- Rule IDs
-
- CNTR-K8-000380_rule
Checks: C-CNTR-K8-000380_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i authorization-mode kubelet On each Worker node, change to the /etc/sysconfig/ directory. Run the command: grep -i authorization-mode kubelet If authorization-mode is missing or is set to “AlwaysAllow” on the Master node or any of the Worker nodes, this is a finding.
Fix: F-CNTR-K8-000380_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig/ directory on the Kubernetes Master and Worker nodes. Set the argument --authorization-mode to “Webhook”. Restart each kubelet service after the change is made using the command: service kubelet restart
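The kubelet checks in this guide (read-only-port, anonymous-auth, authorization-mode) all grep the same sysconfig file, so they can be validated in one pass. A sketch using a hypothetical sample of /etc/sysconfig/kubelet:

```shell
# Hypothetical kubelet sysconfig file with all three settings compliant.
workdir=$(mktemp -d)
cat > "$workdir/kubelet" <<'EOF'
KUBELET_ARGS="--read-only-port=0 --anonymous-auth=false --authorization-mode=Webhook"
EOF
cd "$workdir"

# Pull a single flag's value out of the argument string.
extract() { grep -o -- "--$1=[^ \"]*" kubelet | sed "s/.*=//"; }
ro=$(extract read-only-port)
anon=$(extract anonymous-auth)
authz=$(extract authorization-mode)
# Findings if: read-only-port != 0, anonymous-auth != false,
# or authorization-mode is missing/AlwaysAllow.
echo "read-only-port=$ro anonymous-auth=$anon authorization-mode=$authz"
```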
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000390
- Vuln IDs
-
- CNTR-K8-000390
- Rule IDs
-
- CNTR-K8-000390_rule
Checks: C-CNTR-K8-000390_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i authorization-mode * If the setting authorization-mode is set to “AlwaysAllow” in the Kubernetes API Server manifest file, this is a finding.
Fix: F-CNTR-K8-000390_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument “--authorization-mode” to any valid authorization mode other than “AlwaysAllow”.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000400
- Vuln IDs
-
- CNTR-K8-000400
- Rule IDs
-
- CNTR-K8-000400_rule
Checks: C-CNTR-K8-000400_chk
Log in to each worker node. Verify that the sshd service is not running. To validate that the service is not running, run the command: systemctl status sshd If the service sshd is active (running), this is a finding. Note: If console access is not available, SSH access can be attempted. If the worker nodes cannot be reached, this requirement is “not a finding”.
Fix: F-CNTR-K8-000400_fix
To stop the sshd service, run the command: systemctl stop sshd Note: If access to the worker node is through an SSH session, it is important to realize there are two requirements for disabling and stopping the sshd service and they should be done during the same SSH session. Disabling the service must be performed first and then the service stopped to guarantee both settings can be made if the session is interrupted.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000410
- Vuln IDs
-
- CNTR-K8-000410
- Rule IDs
-
- CNTR-K8-000410_rule
Checks: C-CNTR-K8-000410_chk
Log in to each worker node. Verify that the sshd service is not enabled. To validate the service is not enabled, run the command: systemctl is-enabled sshd.service If the service sshd is enabled, this is a finding. Note: If console access is not available, SSH access can be attempted. If the worker nodes cannot be reached, this requirement is “not a finding”.
Fix: F-CNTR-K8-000410_fix
To disable the sshd service, run the command: chkconfig sshd off Note: If access to the worker node is through an SSH session, it is important to realize there are two requirements for disabling and stopping the sshd service that must be done during the same SSH session. Disabling the service must be performed first and then the service stopped to guarantee both settings can be made if the session is interrupted.
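Because CNTR-K8-000400 and CNTR-K8-000410 must both be applied within one SSH session, the safe ordering is disable first, then stop. The sketch below stubs out systemctl so the ordering can be demonstrated outside a real node; on an actual worker node, drop the stub and run the real commands (systemctl disable is the modern equivalent of the chkconfig command above):

```shell
# Stub systemctl for demonstration only -- remove on a real worker node.
systemctl() { echo "systemctl $*"; }

# Disable first, then stop: if the SSH session drops mid-way, the service
# at least will not come back on the next reboot.
order=$(
  systemctl disable sshd
  systemctl stop sshd
)
echo "$order"
```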
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000420
- Vuln IDs
-
- CNTR-K8-000420
- Rule IDs
-
- CNTR-K8-000420_rule
Checks: C-CNTR-K8-000420_chk
From the master node, run the command: kubectl get pods --all-namespaces -l k8s-app=kubernetes-dashboard If any resources are returned, this is a finding.
Fix: F-CNTR-K8-000420_fix
Delete the Kubernetes dashboard deployment with the following command: kubectl delete deployment kubernetes-dashboard --namespace=kube-system
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000430
- Vuln IDs
-
- CNTR-K8-000430
- Rule IDs
-
- CNTR-K8-000430_rule
Checks: C-CNTR-K8-000430_chk
From the Master and each Worker node, check the version of kubectl by executing the command: kubectl version --client If the Master or any Worker nodes are not using kubectl version 1.12.9 or newer, this is a finding.
Fix: F-CNTR-K8-000430_fix
Upgrade the Master and Worker nodes to the latest version of kubectl.
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000440
- Vuln IDs
-
- CNTR-K8-000440
- Rule IDs
-
- CNTR-K8-000440_rule
Checks: C-CNTR-K8-000440_chk
On the Master and Worker nodes, change to the /etc/sysconfig/ directory and run the command: grep -i staticPodPath kubelet If any of the nodes return a value for staticPodPath, this is a finding.
Fix: F-CNTR-K8-000440_fix
Edit the kubelet file on each node under the /etc/sysconfig directory to remove the staticPodPath setting and restart the kubelet service by executing the command: service kubelet restart
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000450
- Vuln IDs
-
- CNTR-K8-000450
- Rule IDs
-
- CNTR-K8-000450_rule
Checks: C-CNTR-K8-000450_chk
On the Master node, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the feature-gates setting, if one is returned. If the feature-gates setting is available and contains the DynamicAuditing flag set to “true”, this is a finding. Change to the directory /etc/sysconfig on the Master and each Worker node and execute the command: grep -i feature-gates kubelet Review every feature-gates setting that is returned. If any feature-gates setting is available and contains the “DynamicAuditing” flag set to “true”, this is a finding.
Fix: F-CNTR-K8-000450_fix
Edit any manifest files or kubelet config files that contain the feature-gates setting with DynamicAuditing set to “true”. Set the flag to “false” or remove the “DynamicAuditing” setting completely. Restart the kubelet service if the kubelet config file is changed.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000460
- Vuln IDs
-
- CNTR-K8-000460
- Rule IDs
-
- CNTR-K8-000460_rule
Checks: C-CNTR-K8-000460_chk
On the Master node, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the feature-gates setting if one is returned. If the feature-gates setting does not exist, or feature-gates does not contain the DynamicKubeletConfig flag, or the “DynamicKubeletConfig” flag is set to “true”, this is a finding. Change to the directory /etc/sysconfig on the Master and each Worker node and execute the command: grep -i feature-gates kubelet Review every feature-gates setting if one is returned. If the feature-gates setting does not exist, or feature-gates does not contain the DynamicKubeletConfig flag, or the DynamicKubeletConfig flag is set to “true”, this is a finding.
Fix: F-CNTR-K8-000460_fix
Edit any manifest file or kubelet config file that does not contain a feature-gates setting or that has DynamicKubeletConfig set to “true”. Omitting DynamicKubeletConfig from feature-gates defaults it to “true”, so set DynamicKubeletConfig to “false” explicitly. Restart the kubelet service if the kubelet config file is changed.
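feature-gates is a comma-separated list of name=bool pairs, so the checks above have to isolate one flag within the list. A parsing sketch over a hypothetical feature-gates value:

```shell
# Hypothetical feature-gates value as it might appear in a manifest or kubelet config.
gates="AllAlpha=false,DynamicKubeletConfig=false,DynamicAuditing=false"

# Return a flag's value, or empty if absent (absence of DynamicKubeletConfig
# defaulted to true, hence the explicit "unset" handling below).
gate_value() {
  echo ",$gates," | grep -o ",$1=[^,]*" | sed 's/.*=//'
}
dkc=$(gate_value DynamicKubeletConfig); dkc=${dkc:-unset}
da=$(gate_value DynamicAuditing); da=${da:-unset}
echo "DynamicKubeletConfig=$dkc DynamicAuditing=$da"
```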
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000470
- Vuln IDs
-
- CNTR-K8-000470
- Rule IDs
-
- CNTR-K8-000470_rule
Checks: C-CNTR-K8-000470_chk
On the Master node, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the feature-gates setting, if one is returned. If the feature-gates setting is available and contains the AllAlpha flag set to “true”, this is a finding.
Fix: F-CNTR-K8-000470_fix
Edit any manifest files that contain the feature-gates setting with AllAlpha set to “true”. Set the flag to “false” or remove the AllAlpha setting completely.
- RMF Control
- AU-14
- Severity
- M
- CCI
- CCI-001464
- Version
- CNTR-K8-000600
- Vuln IDs
-
- CNTR-K8-000600
- Rule IDs
-
- CNTR-K8-000600_rule
Checks: C-CNTR-K8-000600_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * If the audit-policy-file is not set, this is a finding.
Fix: F-CNTR-K8-000600_fix
Edit the Kubernetes API Server manifest and set “--audit-policy-file” to the audit policy file. Note: If the API server is running as a Pod, then the manifest will also need to be updated to mount the host system filesystem where the audit policy file resides.
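A minimal sketch of putting the audit policy in place (the file path below is a hypothetical example location; the policy contents are the RequestResponse-level policy this guide requires):

```shell
# Write the minimal RequestResponse-level audit policy
# (hypothetical example path).
policy=/tmp/audit-policy-demo.yaml
cat > "$policy" <<'EOF'
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
EOF

# The API Server manifest would then carry:
#   --audit-policy-file=/tmp/audit-policy-demo.yaml
matches=$(grep -c 'level: RequestResponse' "$policy")
echo "$matches"
```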
- RMF Control
- AU-14
- Severity
- M
- CCI
- CCI-001464
- Version
- CNTR-K8-000610
- Vuln IDs
-
- CNTR-K8-000610
- Rule IDs
-
- CNTR-K8-000610_rule
Checks: C-CNTR-K8-000610_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-log-path * If the audit-log-path is not set, this is a finding.
Fix: F-CNTR-K8-000610_fix
Edit the Kubernetes API Server manifest and set “--audit-log-path” to a secure location for the audit logs to be written. Note: If the API server is running as a Pod, then the manifest will also need to be updated to mount the host system filesystem where the audit log file is to be written.
- RMF Control
- AU-3
- Severity
- M
- CCI
- CCI-000130
- Version
- CNTR-K8-000630
- Vuln IDs
-
- CNTR-K8-000630
- Rule IDs
-
- CNTR-K8-000630_rule
Checks: C-CNTR-K8-000630_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-000630_fix
Edit the Kubernetes API Server audit policy and set it to look like below.
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-3
- Severity
- M
- CCI
- CCI-000131
- Version
- CNTR-K8-000640
- Vuln IDs
-
- CNTR-K8-000640
- Rule IDs
-
- CNTR-K8-000640_rule
Checks: C-CNTR-K8-000640_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-000640_fix
Edit the Kubernetes API Server audit policy and set it to look like below.
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-3
- Severity
- M
- CCI
- CCI-000132
- Version
- CNTR-K8-000650
- Vuln IDs
-
- CNTR-K8-000650
- Rule IDs
-
- CNTR-K8-000650_rule
Checks: C-CNTR-K8-000650_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-000650_fix
Edit the Kubernetes API Server audit policy and set it to look like below.
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-3
- Severity
- M
- CCI
- CCI-000133
- Version
- CNTR-K8-000660
- Vuln IDs
-
- CNTR-K8-000660
- Rule IDs
-
- CNTR-K8-000660_rule
Checks: C-CNTR-K8-000660_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-000660_fix
Edit the Kubernetes API Server audit policy and set it to look like below.
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-3
- Severity
- M
- CCI
- CCI-000134
- Version
- CNTR-K8-000670
- Vuln IDs
-
- CNTR-K8-000670
- Rule IDs
-
- CNTR-K8-000670_rule
Checks: C-CNTR-K8-000670_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-000670_fix
Edit the Kubernetes API Server audit policy and set it to look like below.
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-3
- Severity
- M
- CCI
- CCI-001487
- Version
- CNTR-K8-000680
- Vuln IDs
-
- CNTR-K8-000680
- Rule IDs
-
- CNTR-K8-000680_rule
Checks: C-CNTR-K8-000680_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-000680_fix
Edit the Kubernetes API Server audit policy and set it to look like below.
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-3
- Severity
- M
- CCI
- CCI-001487
- Version
- CNTR-K8-000690
- Vuln IDs
-
- CNTR-K8-000690
- Rule IDs
-
- CNTR-K8-000690_rule
Checks: C-CNTR-K8-000690_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-000690_fix
Edit the Kubernetes API Server audit policy and set it to look like below.
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-3
- Severity
- M
- CCI
- CCI-000135
- Version
- CNTR-K8-000700
- Vuln IDs
-
- CNTR-K8-000700
- Rule IDs
-
- CNTR-K8-000700_rule
Checks: C-CNTR-K8-000700_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-000700_fix
Edit the Kubernetes API Server audit policy and set it to look like below.
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000850
- Vuln IDs
-
- CNTR-K8-000850
- Rule IDs
-
- CNTR-K8-000850_rule
Checks: C-CNTR-K8-000850_chk
On the Master and each Worker node, change to the /etc/sysconfig/ directory and run the command: grep -i hostname-override kubelet If any of the nodes have the setting “hostname-override” present, this is a finding.
Fix: F-CNTR-K8-000850_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Master and Worker nodes and remove the “--hostname-override” setting. Restart the service after the change is made by running: service kubelet restart
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000860
- Vuln IDs
-
- CNTR-K8-000860
- Rule IDs
-
- CNTR-K8-000860_rule
Checks: C-CNTR-K8-000860_chk
On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must be owned by root:root. If any manifest file is not owned by root:root, this is a finding.
Fix: F-CNTR-K8-000860_fix
On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: chown root:root * To verify the change took place, run the command: ls -l * All the manifest files should be owned by root:root.
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000870
- Vuln IDs
-
- CNTR-K8-000870
- Rule IDs
-
- CNTR-K8-000870_rule
Checks: C-CNTR-K8-000870_chk
On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must have permissions of “644” or more restrictive. If any manifest file has permissions less restrictive than “644”, this is a finding.
Fix: F-CNTR-K8-000870_fix
On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: chmod 644 * To verify the change took place, run the command: ls -l * All the manifest files should now have permissions of “644”.
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000880
- Vuln IDs
-
- CNTR-K8-000880
- Rule IDs
-
- CNTR-K8-000880_rule
Checks: C-CNTR-K8-000880_chk
On the Master and Worker nodes, change to the /etc/sysconfig directory. Run the command: ls -l kubelet Each kubelet configuration file must be owned by root:root. If any kubelet configuration file is not owned by root:root, this is a finding.
Fix: F-CNTR-K8-000880_fix
On the Master and Worker nodes, change to the /etc/sysconfig directory. Run the command: chown root:root kubelet To verify the change took place, run the command: ls -l kubelet The kubelet file should now be owned by root:root.
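The ownership and permission checks above can be automated with find rather than eyeballing ls output. A sketch against a scratch directory (the file below is a hypothetical stand-in for the manifest and kubelet files):

```shell
# Scratch directory with one compliant sample file.
workdir=$(mktemp -d)
touch "$workdir/kube-apiserver.manifest"
chmod 644 "$workdir/kube-apiserver.manifest"

# List files whose mode is looser than 644 (any group/other write,
# any execute, or any special bit); an empty list means no finding here.
loose=$(find "$workdir" -type f -perm /7133)
[ -z "$loose" ] && perm_result="NOT A FINDING" || perm_result="FINDING"
echo "$perm_result"

# Ownership check on a real node (files not owned by root:root are findings):
#   find /etc/kubernetes/manifests -type f ! -user root -o -type f ! -group root
```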
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000890
- Vuln IDs
-
- CNTR-K8-000890
- Rule IDs
-
- CNTR-K8-000890_rule
Checks: C-CNTR-K8-000890_chk
On the Master and worker nodes, change to the /etc/sysconfig directory. Run the command: ls -l kubelet Each kubelet configuration file must have permissions of “644” or more restrictive. If any kubelet configuration file is less restrictive than “644”, this is a finding.
Fix: F-CNTR-K8-000890_fix
On the Master and Worker nodes, change to the /etc/sysconfig directory. Run the command: chmod 644 kubelet To verify the change took place, run the command: ls -l kubelet The kubelet file should now have permissions of “644”.
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000900
- Vuln IDs
-
- CNTR-K8-000900
- Rule IDs
-
- CNTR-K8-000900_rule
Checks: C-CNTR-K8-000900_chk
On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must have permissions of “644” or more restrictive. If any manifest file has permissions less restrictive than “644”, this is a finding.
Fix: F-CNTR-K8-000900_fix
On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: chmod 644 * To verify the change took place, run the command: ls -l * All the manifest files should now have permissions of “644”.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000381
- Version
- CNTR-K8-000910
- Vuln IDs
-
- CNTR-K8-000910
- Rule IDs
-
- CNTR-K8-000910_rule
Checks: C-CNTR-K8-000910_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i profiling * If the setting “profiling” is not set in the Kubernetes Controller Manager manifest file or it is set to “True”, this is a finding.
Fix: F-CNTR-K8-000910_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument “--profiling” to “false”.
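The profiling check can be sketched as a small function run against a manifest fragment. The fragment below is illustrative, not taken from a live cluster; the logic mirrors the check text: a finding when --profiling is absent or not explicitly “false”.

```shell
# Sketch: a finding when --profiling is missing or not set to "false".
profiling_finding() {
  value=$(printf '%s\n' "$1" | grep -i -- '--profiling' | sed 's/.*=//')
  [ "$value" != "false" ]
}

# Sample manifest fragments (illustrative):
good='    - --profiling=false'
bad='    - --secure-port=6443'

r_good=$(profiling_finding "$good" && echo finding || echo ok)
r_bad=$(profiling_finding "$bad" && echo finding || echo ok)
```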
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000920
- Vuln IDs
-
- CNTR-K8-000920
- Rule IDs
-
- CNTR-K8-000920_rule
Checks: C-CNTR-K8-000920_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands: grep -i insecure-port kube-apiserver.manifest grep -i secure-port kube-apiserver.manifest grep -i etcd-servers kube-apiserver.manifest Edit the manifest file: vim <manifest name> Review livenessProbe: httpGet: port: Review ports: - containerPort: hostPort: Run the command: kubectl describe services --all-namespaces Search the labels for any apiserver namespaces and review the ports. Any manifest and namespace PPS or services configuration not in compliance with the PPSM CAL is a finding. Review the information system documentation and interview the team to gain an understanding of the API Server architecture and determine the applicable PPS. If there are any ports, protocols, and services in the system documentation not in compliance with the PPSM CAL, this is a finding. Any PPS not set in the system documentation is a finding. Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/ Verify the API Server network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-CNTR-K8-000920_fix
Amend any system documentation requiring revision. Update Kubernetes API Server manifest and namespace PPS configuration to comply with PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000930
- Vuln IDs
-
- CNTR-K8-000930
- Rule IDs
-
- CNTR-K8-000930_rule
Checks: C-CNTR-K8-000930_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands: grep -i insecure-port kube-scheduler.manifest grep -i secure-port kube-scheduler.manifest Edit the manifest file: vim <manifest name> Review livenessProbe: httpGet: port: Review ports: - containerPort: hostPort: Run the command: kubectl describe services --all-namespaces Search the labels for any scheduler namespaces and review the ports. Any manifest and namespace PPS configuration not in compliance with the PPSM CAL is a finding. Review the information system documentation and interview the team to gain an understanding of the Scheduler architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding. Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/ Verify the Scheduler network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-CNTR-K8-000930_fix
Amend any system documentation requiring revision. Update Kubernetes Scheduler manifest and namespace PPS configuration to comply with the PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000940
- Vuln IDs
-
- CNTR-K8-000940
- Rule IDs
-
- CNTR-K8-000940_rule
Checks: C-CNTR-K8-000940_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands: grep -i insecure-port kube-controller-manager.manifest grep -i secure-port kube-controller-manager.manifest Edit the manifest file: vim <manifest name> Review livenessProbe: httpGet: port: Review ports: - containerPort: hostPort: Run the command: kubectl describe services --all-namespaces Search the labels for any controller namespaces and review the ports. Any manifest and namespace PPS or services configuration not in compliance with the PPSM CAL is a finding. Review the information system documentation and interview the team to gain an understanding of the Controller architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding. Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/ Verify the Controller network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-CNTR-K8-000940_fix
Amend any system documentation requiring revision. Update Kubernetes Controller manifest and namespace PPS configuration to comply with PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000950
- Vuln IDs
-
- CNTR-K8-000950
- Rule IDs
-
- CNTR-K8-000950_rule
Checks: C-CNTR-K8-000950_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i etcd-servers kube-apiserver.manifest Edit the etcd-main.manifest file: vim <manifest name> Review livenessProbe: httpGet: port: Review ports: - containerPort: hostPort: Run the command: kubectl describe services --all-namespaces Search the labels for any apiserver namespaces and review the ports. Any manifest and namespace PPS configuration not in compliance with the PPSM CAL is a finding. Review the information system documentation and interview the team to gain an understanding of the etcd architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding. Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/ Verify the etcd network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-CNTR-K8-000950_fix
Amend any system documentation requiring revision. Update Kubernetes etcd manifest and namespace PPS configuration to comply with PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000960
- Vuln IDs
-
- CNTR-K8-000960
- Rule IDs
-
- CNTR-K8-000960_rule
Checks: C-CNTR-K8-000960_chk
On the Master node, run the command: kubectl get pods --all-namespaces The list returned is all pods running within the Kubernetes cluster. For those pods running within the user namespaces (System namespaces are kube-system, kube-node-lease and kube-public), run the command: kubectl get pod podname -o yaml | grep -i port Note: In the above command, “podname” is the name of the pod. For the command to work correctly, the current context must be changed to the namespace for the pod. The command to do this is: kubectl config set-context --current --namespace=namespace-name where namespace-name is the name of the namespace. Review the ports that are returned for the pod. If any host privileged ports are returned for any of the pods, this is a finding.
Fix: F-CNTR-K8-000960_fix
For any pods that are using host privileged ports, reconfigure the pod to use a service that maps a host non-privileged port to the pod port, or reconfigure the image to use non-privileged ports.
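The port review in the check can be scripted: extract hostPort values from the pod YAML and flag any in the privileged range (below 1024). The YAML fragment below is a sample, not live cluster output.

```shell
# Sample fragment of `kubectl get pod <podname> -o yaml` output (illustrative):
pod_yaml='    ports:
    - containerPort: 8080
      hostPort: 80
    - containerPort: 9090
      hostPort: 9090'

# Print every hostPort below 1024; any output here is a finding.
privileged=$(printf '%s\n' "$pod_yaml" | awk '$1 == "hostPort:" && $2 + 0 < 1024 { print $2 }')
```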
- RMF Control
- IA-5
- Severity
- H
- CCI
- CCI-000196
- Version
- CNTR-K8-001160
- Vuln IDs
-
- CNTR-K8-001160
- Rule IDs
-
- CNTR-K8-001160_rule
Checks: C-CNTR-K8-001160_chk
On the Kubernetes Master node, run the following command: kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A If any of the values returned reference environment variables, this is a finding.
Fix: F-CNTR-K8-001160_fix
Any secrets stored as environment variables must be moved to secret files with the proper protections and enforcement or placed within a password vault.
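One way to apply this fix is to mount the secret as a read-only file rather than injecting it through a secretKeyRef environment variable. The pod spec below is a hypothetical example; all names (app, db-creds, the image reference, the mount path) are illustrative.

```shell
# Write a hypothetical pod spec that consumes the secret "db-creds" as a
# file under /etc/creds instead of as an environment variable.
cat <<'EOF' > pod-with-secret-file.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0
    volumeMounts:
    - name: db-creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: db-creds
    secret:
      secretName: db-creds
EOF
```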
- RMF Control
- SC-10
- Severity
- M
- CCI
- CCI-001133
- Version
- CNTR-K8-001300
- Vuln IDs
-
- CNTR-K8-001300
- Rule IDs
-
- CNTR-K8-001300_rule
Checks: C-CNTR-K8-001300_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i streaming-connection-idle-timeout kubelet If the setting streaming-connection-idle-timeout is set to “0” or the parameter is not set in the Kubernetes Kubelet, this is a finding.
Fix: F-CNTR-K8-001300_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument “--streaming-connection-idle-timeout” to a value other than “0”. Restart the kubelet service using the following command: service kubelet restart
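The idle-timeout check can be sketched against a kubelet argument string. The sample value below is illustrative; the logic follows the check text: a finding when the flag is absent or explicitly “0”.

```shell
# Sample kubelet arguments (illustrative):
kubelet_args='--streaming-connection-idle-timeout=5m --protect-kernel-defaults=true'

# Extract the flag value; empty or "0" is a finding.
timeout=$(printf '%s\n' "$kubelet_args" | tr ' ' '\n' \
  | grep -- '--streaming-connection-idle-timeout' | cut -d= -f2)
case "$timeout" in
  ''|0) idle_finding=yes ;;
  *)    idle_finding=no  ;;
esac
```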
- RMF Control
- SC-2
- Severity
- M
- CCI
- CCI-001082
- Version
- CNTR-K8-001360
- Vuln IDs
-
- CNTR-K8-001360
- Rule IDs
-
- CNTR-K8-001360_rule
Checks: C-CNTR-K8-001360_chk
On the Master node, run the command: kubectl get pods --all-namespaces Review the namespaces and pods that are returned. Kubernetes system namespaces are kube-node-lease, kube-public, and kube-system. If any user pods are present in the Kubernetes system namespaces, this is a finding.
Fix: F-CNTR-K8-001360_fix
Move any user pods that are present in the Kubernetes system namespaces to user-specific namespaces.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001400
- Vuln IDs
-
- CNTR-K8-001400
- Rule IDs
-
- CNTR-K8-001400_rule
Checks: C-CNTR-K8-001400_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i tls-cipher-suites * If the setting tls-cipher-suites is not set in the Kubernetes API server manifest file, contains no value, or does not contain TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, this is a finding.
Fix: F-CNTR-K8-001400_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of tls-cipher-suites to: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
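The cipher check can be sketched by comparing a configured --tls-cipher-suites value against the approved list from the check text. The configured value below is a deliberately non-compliant sample.

```shell
# Approved ciphers from the check text, space-separated for matching:
approved='TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384'

# Sample configured value (illustrative; includes one unapproved CBC cipher):
configured='TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA'

cipher_finding=no
for c in $(printf '%s' "$configured" | tr ',' ' '); do
  case " $approved " in
    *" $c "*) : ;;            # cipher is on the approved list
    *) cipher_finding=yes ;;  # unapproved cipher configured
  esac
done
```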
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001410
- Vuln IDs
-
- CNTR-K8-001410
- Rule IDs
-
- CNTR-K8-001410_rule
Checks: C-CNTR-K8-001410_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i client-ca-file * If the setting client-ca-file is not set in the Kubernetes API server manifest file or contains no value, this is a finding.
Fix: F-CNTR-K8-001410_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of client-ca-file to the path containing the Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001420
- Vuln IDs
-
- CNTR-K8-001420
- Rule IDs
-
- CNTR-K8-001420_rule
Checks: C-CNTR-K8-001420_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i client-ca-file kubelet If the setting client-ca-file is not set in the Kubernetes Kubelet or contains no value, this is a finding.
Fix: F-CNTR-K8-001420_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig/ directory on the Kubernetes Master Node. Set the value of client-ca-file to the path containing the Approved Organizational Certificate. Restart the kubelet service using the following command: service kubelet restart
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001430
- Vuln IDs
-
- CNTR-K8-001430
- Rule IDs
-
- CNTR-K8-001430_rule
Checks: C-CNTR-K8-001430_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i root-ca-file * If the setting root-ca-file is not set in the Kubernetes Controller Manager manifest file or contains no value, this is a finding.
Fix: F-CNTR-K8-001430_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of root-ca-file to the path containing the Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001440
- Vuln IDs
-
- CNTR-K8-001440
- Rule IDs
-
- CNTR-K8-001440_rule
Checks: C-CNTR-K8-001440_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands: grep -i tls-cert-file * grep -i tls-private-key-file * If the settings tls-cert-file and tls-private-key-file are not set in the Kubernetes API server manifest file or contain no value, this is a finding.
Fix: F-CNTR-K8-001440_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the values of tls-cert-file and tls-private-key-file to the paths containing the Approved Organizational Certificate and its private key.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001450
- Vuln IDs
-
- CNTR-K8-001450
- Rule IDs
-
- CNTR-K8-001450_rule
Checks: C-CNTR-K8-001450_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i client-cert-auth * If the setting client-cert-auth is not set in the Kubernetes etcd manifest file or set to “false”, this is a finding.
Fix: F-CNTR-K8-001450_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--client-cert-auth” to “true” for etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001460
- Vuln IDs
-
- CNTR-K8-001460
- Rule IDs
-
- CNTR-K8-001460_rule
Checks: C-CNTR-K8-001460_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i tls-private-key-file kubelet If the setting “tls-private-key-file” is not set in the Kubernetes Kubelet, this is a finding.
Fix: F-CNTR-K8-001460_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument “--tls-private-key-file” to the private key of an Approved Organizational Certificate. Restart the kubelet service using the following command: service kubelet restart
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001470
- Vuln IDs
-
- CNTR-K8-001470
- Rule IDs
-
- CNTR-K8-001470_rule
Checks: C-CNTR-K8-001470_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i tls-cert-file kubelet If the setting “tls-cert-file” is not set in the Kubernetes Kubelet, this is a finding.
Fix: F-CNTR-K8-001470_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument “--tls-cert-file” to an Approved Organizational Certificate. Restart the kubelet service using the following command: service kubelet restart
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001480
- Vuln IDs
-
- CNTR-K8-001480
- Rule IDs
-
- CNTR-K8-001480_rule
Checks: C-CNTR-K8-001480_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i peer-client-cert-auth * If the setting “peer-client-cert-auth” is not set in the Kubernetes etcd manifest file or is set to “false”, this is a finding.
Fix: F-CNTR-K8-001480_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--peer-client-cert-auth” to “true” for the etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001490
- Vuln IDs
-
- CNTR-K8-001490
- Rule IDs
-
- CNTR-K8-001490_rule
Checks: C-CNTR-K8-001490_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i key-file * If the setting “key-file” is not set in the Kubernetes etcd manifest file, this is a finding.
Fix: F-CNTR-K8-001490_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--key-file” to the private key of the Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001500
- Vuln IDs
-
- CNTR-K8-001500
- Rule IDs
-
- CNTR-K8-001500_rule
Checks: C-CNTR-K8-001500_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i cert-file * If the setting “cert-file” is not set in the Kubernetes etcd manifest file, this is a finding.
Fix: F-CNTR-K8-001500_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--cert-file” to the Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001510
- Vuln IDs
-
- CNTR-K8-001510
- Rule IDs
-
- CNTR-K8-001510_rule
Checks: C-CNTR-K8-001510_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i etcd-cafile * If the setting “etcd-cafile” is not set in the Kubernetes API Server manifest file, this is a finding.
Fix: F-CNTR-K8-001510_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--etcd-cafile” to the Certificate Authority for etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001520
- Vuln IDs
-
- CNTR-K8-001520
- Rule IDs
-
- CNTR-K8-001520_rule
Checks: C-CNTR-K8-001520_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i etcd-certfile * If the setting “etcd-certfile” is not set in the Kubernetes API Server manifest file, this is a finding.
Fix: F-CNTR-K8-001520_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--etcd-certfile” to the certificate to be used for communication with etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001530
- Vuln IDs
-
- CNTR-K8-001530
- Rule IDs
-
- CNTR-K8-001530_rule
Checks: C-CNTR-K8-001530_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i etcd-keyfile * If the setting “etcd-keyfile” is not set in the Kubernetes API Server manifest file, this is a finding.
Fix: F-CNTR-K8-001530_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--etcd-keyfile” to the key to be used for communication with etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001540
- Vuln IDs
-
- CNTR-K8-001540
- Rule IDs
-
- CNTR-K8-001540_rule
Checks: C-CNTR-K8-001540_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i peer-cert-file * If the setting “peer-cert-file” is not set in the Kubernetes etcd manifest file, this is a finding.
Fix: F-CNTR-K8-001540_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “peer-cert-file” to the certificate to be used for peer-to-peer communication between etcd members.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001550
- Vuln IDs
-
- CNTR-K8-001550
- Rule IDs
-
- CNTR-K8-001550_rule
Checks: C-CNTR-K8-001550_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i peer-key-file * If the setting “peer-key-file” is not set in the Kubernetes etcd manifest file, this is a finding.
Fix: F-CNTR-K8-001550_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “peer-key-file” to the key to be used for peer-to-peer communication between etcd members.
- RMF Control
- SC-3
- Severity
- M
- CCI
- CCI-001084
- Version
- CNTR-K8-001620
- Vuln IDs
-
- CNTR-K8-001620
- Rule IDs
-
- CNTR-K8-001620_rule
Checks: C-CNTR-K8-001620_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i protect-kernel-defaults kubelet If the setting “protect-kernel-defaults” is set to false or not set in the Kubernetes Kubelet, this is a finding.
Fix: F-CNTR-K8-001620_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument “--protect-kernel-defaults” to “true”. Restart the kubelet service using the following command: service kubelet restart
- RMF Control
- AC-6
- Severity
- H
- CCI
- CCI-002235
- Version
- CNTR-K8-001990
- Vuln IDs
-
- CNTR-K8-001990
- Rule IDs
-
- CNTR-K8-001990_rule
Checks: C-CNTR-K8-001990_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i authorization-mode * If the setting authorization-mode is set to “AlwaysAllow” in the Kubernetes API Server manifest file, this is a finding.
Fix: F-CNTR-K8-001990_fix
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Edit the API server manifest and set the authorization-mode setting to any valid mode except for AlwaysAllow.
- RMF Control
- AC-6
- Severity
- M
- CCI
- CCI-002233
- Version
- CNTR-K8-002000
- Vuln IDs
-
- CNTR-K8-002000
- Rule IDs
-
- CNTR-K8-002000_rule
Checks: C-CNTR-K8-002000_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i ValidatingAdmissionWebhook * If a line is not returned that includes enable-admission-plugins and ValidatingAdmissionWebhook, this is a finding.
Fix: F-CNTR-K8-002000_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument --enable-admission-plugins to include ValidatingAdmissionWebhook. Each enabled plugin is separated by commas. Note: It is best to implement policies first and then enable the webhook, otherwise a denial of service may occur.
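The plugin check can be scripted by testing whether ValidatingAdmissionWebhook appears in the comma-separated --enable-admission-plugins value. The sample value below is illustrative.

```shell
# Sample --enable-admission-plugins value (illustrative):
plugins='NodeRestriction,ValidatingAdmissionWebhook'

# Wrap in commas so the match cannot hit a substring of another plugin name.
case ",$plugins," in
  *,ValidatingAdmissionWebhook,*) webhook_state=enabled ;;
  *)                              webhook_state=missing ;;
esac
```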
- RMF Control
- AC-6
- Severity
- M
- CCI
- CCI-002233
- Version
- CNTR-K8-002010
- Vuln IDs
-
- CNTR-K8-002010
- Rule IDs
-
- CNTR-K8-002010_rule
Checks: C-CNTR-K8-002010_chk
On the Master Node, run the command: kubectl get podsecuritypolicy For any pod security policies listed, edit the policy with the command: kubectl edit podsecuritypolicy policyname Where policyname is the name of the policy. Review the runAsUser, supplementalGroups, and fsGroup sections of the policy. If any of these sections are missing, this is a finding. If the rule within the runAsUser section is not set to “MustRunAsNonRoot”, this is a finding. If the ranges within the supplementalGroups section have min set to “0” or min is missing, this is a finding. If the ranges within the fsGroup section have min set to “0” or min is missing, this is a finding.
Fix: F-CNTR-K8-002010_fix
From the Master node, save the following policy to a file called restricted.yml.

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  # This is redundant with non-root + disallow privilege escalation,
  # but we can provide it for defense in depth.
  requiredDropCapabilities:
    - ALL
  # Allow core volume types.
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that persistentVolumes set up by the cluster admin are safe to use.
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false

To implement the policy, run the command: kubectl create -f restricted.yml
- RMF Control
- AC-6
- Severity
- M
- CCI
- CCI-002234
- Version
- CNTR-K8-002020
- Vuln IDs
-
- CNTR-K8-002020
- Rule IDs
-
- CNTR-K8-002020_rule
Checks: C-CNTR-K8-002020_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-002020_fix
Edit the Kubernetes API Server audit policy and set it to look like the below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- CM-11
- Severity
- M
- CCI
- CCI-001812
- Version
- CNTR-K8-002220
- Vuln IDs
-
- CNTR-K8-002220
- Rule IDs
-
- CNTR-K8-002220_rule
Checks: C-CNTR-K8-002220_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i authorization-mode * If the setting authorization-mode is set to “AlwaysAllow” in the Kubernetes API Server manifest file, this is a finding.
Fix: F-CNTR-K8-002220_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument “--authorization-mode” to any valid authorization mode other than AlwaysAllow.
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001814
- Version
- CNTR-K8-002260
- Vuln IDs
-
- CNTR-K8-002260
- Rule IDs
-
- CNTR-K8-002260_rule
Checks: C-CNTR-K8-002260_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-002260_fix
Edit the Kubernetes API Server audit policy and set it to look like the below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- SC-5
- Severity
- M
- CCI
- CCI-002385
- Version
- CNTR-K8-002600
- Vuln IDs
-
- CNTR-K8-002600
- Rule IDs
-
- CNTR-K8-002600_rule
Checks: C-CNTR-K8-002600_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i request-timeout * If the setting request-timeout is set to “0” in the Kubernetes API Server manifest file, this is a finding.
Fix: F-CNTR-K8-002600_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of request-timeout greater than “0”.
- RMF Control
- SC-8
- Severity
- H
- CCI
- CCI-002418
- Version
- CNTR-K8-002620
- Vuln IDs
-
- CNTR-K8-002620
- Rule IDs
-
- CNTR-K8-002620_rule
Checks: C-CNTR-K8-002620_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i basic-auth-file * If “basic-auth-file” is set in the Kubernetes API server manifest file, this is a finding.
Fix: F-CNTR-K8-002620_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Remove the setting “--basic-auth-file”.
- RMF Control
- SC-8
- Severity
- M
- CCI
- CCI-002418
- Version
- CNTR-K8-002630
- Vuln IDs
-
- CNTR-K8-002630
- Rule IDs
-
- CNTR-K8-002630_rule
Checks: C-CNTR-K8-002630_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i token-auth-file * If “token-auth-file” is set in the Kubernetes API server manifest file, this is a finding.
Fix: F-CNTR-K8-002630_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Remove parameter “--token-auth-file”.
- RMF Control
- SC-8
- Severity
- M
- CCI
- CCI-002418
- Version
- CNTR-K8-002640
- Vuln IDs
-
- CNTR-K8-002640
- Rule IDs
-
- CNTR-K8-002640_rule
Checks: C-CNTR-K8-002640_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands: grep -i kubelet-client-certificate * grep -i kubelet-client-key * If the setting “--kubelet-client-certificate” is not set in the Kubernetes API server manifest file or contains no value, this is a finding. If the setting “--kubelet-client-key” is not set in the Kubernetes API server manifest file or contains no value, this is a finding.
Fix: F-CNTR-K8-002640_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--kubelet-client-certificate” and “--kubelet-client-key” to an Approved Organizational Certificate and key pair.
- RMF Control
- SI-2
- Severity
- M
- CCI
- CCI-002617
- Version
- CNTR-K8-002700
- Vuln IDs
-
- CNTR-K8-002700
- Rule IDs
-
- CNTR-K8-002700_rule
Checks: C-CNTR-K8-002700_chk
To view all pods and the images used to create the pods, from the Master node, run the following command: kubectl get pods --all-namespaces -o jsonpath="{..image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c Review the images used for pods running within Kubernetes. If there are multiple versions of the same image, this is a finding.
Fix: F-CNTR-K8-002700_fix
Remove any old pods that are using older images. On the Master node, run the command: kubectl delete pod podname Where podname is the name of the pod to delete.
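The duplicate-version review can be scripted: strip the tag from each image reference and report repositories that appear with more than one tag. The image list below is a sample, not live cluster output.

```shell
# Sample image list, as produced by the jsonpath/tr/sort pipeline above
# (illustrative):
images='registry.local/nginx:1.19
registry.local/nginx:1.21
registry.local/redis:6.2'

# Strip the trailing :tag and print repositories seen more than once.
dupes=$(printf '%s\n' "$images" | sed 's/:[^:]*$//' | sort | uniq -d)
```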
- RMF Control
- SI-2
- Severity
- M
- CCI
- CCI-002605
- Version
- CNTR-K8-002720
- Vuln IDs
-
- CNTR-K8-002720
- Rule IDs
-
- CNTR-K8-002720_rule
Checks: C-CNTR-K8-002720_chk
Authenticate on the Kubernetes Master Node. Run the command: kubectl version --short If the client or server version is not supported under the Kubernetes version skew policy, this is a finding. Note: The Kubernetes version skew policy can be found at: https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions
Fix: F-CNTR-K8-002720_fix
Upgrade Kubernetes to a supported version. Institute and adhere to the policies and procedures to ensure that patches are consistently applied within the time allowed.
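A skew comparison can be sketched on sample version strings. This assumes the policy that kubelet may be at most two minor versions older than kube-apiserver; verify that assumption against the current skew policy linked in the check, since the allowed skew has changed across Kubernetes releases.

```shell
# Sample versions (illustrative):
apiserver=v1.18.6
kubelet=v1.15.2

# Compare minor versions; more than two minors behind is out of policy
# under the assumption stated above.
api_minor=$(printf '%s' "$apiserver" | cut -d. -f2)
node_minor=$(printf '%s' "$kubelet" | cut -d. -f2)
if [ $(( api_minor - node_minor )) -gt 2 ]; then
  skew_finding=yes
else
  skew_finding=no
fi
```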
- RMF Control
- AU-12
- Severity
- M
- CCI
- CCI-000172
- Version
- CNTR-K8-002900
- Vuln IDs
-
- CNTR-K8-002900
- Rule IDs
-
- CNTR-K8-002900_rule
Checks: C-CNTR-K8-002900_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-002900_fix
Edit the Kubernetes API Server audit policy and set it to look like the below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-12
- Severity
- M
- CCI
- CCI-000172
- Version
- CNTR-K8-002910
- Vuln IDs
-
- CNTR-K8-002910
- Rule IDs
-
- CNTR-K8-002910_rule
Checks: C-CNTR-K8-002910_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-002910_fix
Edit the Kubernetes API Server audit policy and set it to look like below. # Log all requests at the RequestResponse level. apiVersion: audit.k8s.io/v1 kind: Policy rules: - level: RequestResponse
- RMF Control
- AU-12
- Severity
- M
- CCI
- CCI-000172
- Version
- CNTR-K8-002940
- Vuln IDs
-
- CNTR-K8-002940
- Rule IDs
-
- CNTR-K8-002940_rule
Checks: C-CNTR-K8-002940_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-002940_fix
Edit the Kubernetes API Server audit policy to look like the below:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-12
- Severity
- M
- CCI
- CCI-000172
- Version
- CNTR-K8-002950
- Vuln IDs
-
- CNTR-K8-002950
- Rule IDs
-
- CNTR-K8-002950_rule
Checks: C-CNTR-K8-002950_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-002950_fix
Edit the Kubernetes API Server audit policy to look like the below:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-12
- Severity
- M
- CCI
- CCI-000172
- Version
- CNTR-K8-002980
- Vuln IDs
-
- CNTR-K8-002980
- Rule IDs
-
- CNTR-K8-002980_rule
Checks: C-CNTR-K8-002980_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-002980_fix
Edit the Kubernetes API Server audit policy to look like the below:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-12
- Severity
- M
- CCI
- CCI-000172
- Version
- CNTR-K8-002990
- Vuln IDs
-
- CNTR-K8-002990
- Rule IDs
-
- CNTR-K8-002990_rule
Checks: C-CNTR-K8-002990_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-002990_fix
Edit the Kubernetes API Server audit policy to look like the below:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-12
- Severity
- M
- CCI
- CCI-000172
- Version
- CNTR-K8-003010
- Vuln IDs
-
- CNTR-K8-003010
- Rule IDs
-
- CNTR-K8-003010_rule
Checks: C-CNTR-K8-003010_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-003010_fix
Edit the Kubernetes API Server audit policy to look like the below:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-12
- Severity
- M
- CCI
- CCI-000172
- Version
- CNTR-K8-003020
- Vuln IDs
-
- CNTR-K8-003020
- Rule IDs
-
- CNTR-K8-003020_rule
Checks: C-CNTR-K8-003020_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-003020_fix
Edit the Kubernetes API Server audit policy to look like the below:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- AU-12
- Severity
- M
- CCI
- CCI-000172
- Version
- CNTR-K8-003050
- Vuln IDs
-
- CNTR-K8-003050
- Rule IDs
-
- CNTR-K8-003050_rule
Checks: C-CNTR-K8-003050_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-CNTR-K8-003050_fix
Edit the Kubernetes API Server audit policy to look like the below:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003110
- Vuln IDs
-
- CNTR-K8-003110
- Rule IDs
-
- CNTR-K8-003110_rule
Checks: C-CNTR-K8-003110_chk
Review the ownership of the Kubernetes manifest files by using the command: stat -c %U:%G /etc/kubernetes/manifests/* | grep -v root:root If the command returns any non-root:root ownership, this is a finding.
Fix: F-CNTR-K8-003110_fix
Change the ownership of the manifest files to root:root by executing the command: chown root:root /etc/kubernetes/manifests/*
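The ownership checks in this section all follow the same shape, so they can be sketched as one parameterized helper. This is illustrative, not STIG text; the same function covers the manifest (root:root), etcd (etcd:etcd), and conf-file (root:root) checks.

```shell
#!/bin/sh
# Sketch: list files under a directory whose owner:group differs from an
# expected value. Filenames containing spaces are not handled; this is a
# quick audit helper, not a hardened script.
bad_owner() {
  # $1 = directory, $2 = expected user:group
  stat -c '%n %U:%G' "$1"/* | awk -v want="$2" '$NF != want { print $1 }'
}

# Example (fixing requires root):
# bad_owner /etc/kubernetes/manifests root:root | xargs -r chown root:root
```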
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003120
- Vuln IDs
-
- CNTR-K8-003120
- Rule IDs
-
- CNTR-K8-003120_rule
Checks: C-CNTR-K8-003120_chk
Review the ownership of the Kubernetes etcd files by using the command: stat -c %U:%G /var/lib/etcd/* | grep -v etcd:etcd If the command returns any non-etcd:etcd ownership, this is a finding.
Fix: F-CNTR-K8-003120_fix
Change the ownership of the etcd files to etcd:etcd by executing the command: chown etcd:etcd /var/lib/etcd/*
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003130
- Vuln IDs
-
- CNTR-K8-003130
- Rule IDs
-
- CNTR-K8-003130_rule
Checks: C-CNTR-K8-003130_chk
Review the ownership of the Kubernetes conf files by using the commands: stat -c %U:%G /etc/kubernetes/admin.conf | grep -v root:root stat -c %U:%G /etc/kubernetes/scheduler.conf | grep -v root:root stat -c %U:%G /etc/kubernetes/controller-manager.conf | grep -v root:root If the commands return any non-root:root ownership, this is a finding.
Fix: F-CNTR-K8-003130_fix
Change the ownership of the conf files to root:root by executing the commands: chown root:root /etc/kubernetes/admin.conf chown root:root /etc/kubernetes/scheduler.conf chown root:root /etc/kubernetes/controller-manager.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003140
- Vuln IDs
-
- CNTR-K8-003140
- Rule IDs
-
- CNTR-K8-003140_rule
Checks: C-CNTR-K8-003140_chk
Check whether Kube-Proxy is running and obtain the --kubeconfig parameter by using the following command: ps -ef | grep kube-proxy Review the permissions of the Kubernetes Kube-Proxy kubeconfig file by using the command: stat -c %a <location from --kubeconfig> If the file has permissions more permissive than “644”, this is a finding.
Fix: F-CNTR-K8-003140_fix
Change the permissions of the Kube-Proxy kubeconfig file to “644” by executing the command: chmod 644 <location from --kubeconfig>
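The two steps above can be sketched together. The function names are hypothetical, and the extraction assumes the flag is passed in --kubeconfig=<path> form; adjust the parsing if your deployment passes the flag and value as separate arguments.

```shell
#!/bin/sh
# Sketch: locate the kube-proxy kubeconfig from the process table, then test
# a file's mode against a maximum. A file is "more permissive" if any bit
# outside the allowed octal mode is set.
kube_proxy_kubeconfig() {
  ps -ef | grep '[k]ube-proxy' | tr ' ' '\n' | sed -n 's/^--kubeconfig=//p' | head -1
}

too_permissive() {
  # $1 = file, $2 = maximum octal mode, e.g. 644
  mode=$(stat -c %a "$1")
  [ $(( 0$mode & ~0$2 & 0777 )) -ne 0 ]
}

# Example:
# cfg=$(kube_proxy_kubeconfig)
# [ -n "$cfg" ] && too_permissive "$cfg" 644 && echo "finding: $cfg"
```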
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003150
- Vuln IDs
-
- CNTR-K8-003150
- Rule IDs
-
- CNTR-K8-003150_rule
Checks: C-CNTR-K8-003150_chk
Check whether Kube-Proxy is running and obtain the --kubeconfig parameter by using the following command: ps -ef | grep kube-proxy Review the ownership of the Kubernetes Kube-Proxy kubeconfig file by using the command: stat -c %U:%G <location from --kubeconfig> | grep -v root:root If the command returns any non-root:root ownership, this is a finding.
Fix: F-CNTR-K8-003150_fix
Change the ownership of the Kube-Proxy kubeconfig file to root:root by executing the command: chown root:root <location from --kubeconfig>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003160
- Vuln IDs
-
- CNTR-K8-003160
- Rule IDs
-
- CNTR-K8-003160_rule
Checks: C-CNTR-K8-003160_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: more kubelet Note the certificate location given in the --client-ca-file argument. Review the permissions of that file by using the command: stat -c %a <location from --client-ca-file> If the file has permissions more permissive than “644”, this is a finding.
Fix: F-CNTR-K8-003160_fix
Change the permissions of the --client-ca-file certificate to “644” by executing the command: chmod 644 <location from --client-ca-file>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003170
- Vuln IDs
-
- CNTR-K8-003170
- Rule IDs
-
- CNTR-K8-003170_rule
Checks: C-CNTR-K8-003170_chk
Check whether Kube-Proxy is running and obtain the --kubeconfig parameter by using the following command: ps -ef | grep kube-proxy Review the ownership of the Kubernetes Kube-Proxy kubeconfig file by using the command: stat -c %U:%G <location from --kubeconfig> | grep -v root:root If the command returns any non-root:root ownership, this is a finding.
Fix: F-CNTR-K8-003170_fix
Change the ownership of the Kube-Proxy kubeconfig file to root:root by executing the command: chown root:root <location from --kubeconfig>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003180
- Vuln IDs
-
- CNTR-K8-003180
- Rule IDs
-
- CNTR-K8-003180_rule
Checks: C-CNTR-K8-003180_chk
Review the ownership of the Kubernetes PKI files by using the command: ls -laR /etc/kubernetes/pki/ If the command returns any files not owned by root:root, this is a finding.
Fix: F-CNTR-K8-003180_fix
Change the ownership of the PKI files to root:root by executing the command: chown -R root:root /etc/kubernetes/pki/
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003190
- Vuln IDs
-
- CNTR-K8-003190
- Rule IDs
-
- CNTR-K8-003190_rule
Checks: C-CNTR-K8-003190_chk
Review the permissions of the Kubernetes kubelet.conf by using the command: stat -c %a /etc/kubernetes/kubelet.conf If the file has permissions more permissive than “644”, this is a finding.
Fix: F-CNTR-K8-003190_fix
Change the permissions of the kubelet.conf to “644” by executing the command: chmod 644 /etc/kubernetes/kubelet.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003200
- Vuln IDs
-
- CNTR-K8-003200
- Rule IDs
-
- CNTR-K8-003200_rule
Checks: C-CNTR-K8-003200_chk
Review the ownership of the Kubernetes kubelet.conf file by using the command: stat -c %U:%G /etc/kubernetes/kubelet.conf | grep -v root:root If the command returns any non-root:root ownership, this is a finding.
Fix: F-CNTR-K8-003200_fix
Change the ownership of the kubelet.conf to root:root by executing the command: chown root:root /etc/kubernetes/kubelet.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003210
- Vuln IDs
-
- CNTR-K8-003210
- Rule IDs
-
- CNTR-K8-003210_rule
Checks: C-CNTR-K8-003210_chk
Review the ownership of the Kubernetes kubeadm.conf file by using the command: stat -c %U:%G /usr/bin/kubeadm.conf | grep -v root:root If the command returns any non-root:root ownership, this is a finding.
Fix: F-CNTR-K8-003210_fix
Change the ownership of the kubeadm.conf to root:root by executing the command: chown root:root /usr/bin/kubeadm.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003220
- Vuln IDs
-
- CNTR-K8-003220
- Rule IDs
-
- CNTR-K8-003220_rule
Checks: C-CNTR-K8-003220_chk
Review the permissions of the Kubernetes kubeadm.conf by using the command: stat -c %a /usr/bin/kubeadm.conf If the file has permissions more permissive than “644”, this is a finding.
Fix: F-CNTR-K8-003220_fix
Change the permissions of the kubeadm.conf to “644” by executing the command: chmod 644 /usr/bin/kubeadm.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003230
- Vuln IDs
-
- CNTR-K8-003230
- Rule IDs
-
- CNTR-K8-003230_rule
Checks: C-CNTR-K8-003230_chk
Review the permissions of the Kubernetes config.yaml by using the command: stat -c %a /var/lib/kubelet/config.yaml If the file has permissions more permissive than “644”, this is a finding.
Fix: F-CNTR-K8-003230_fix
Change the permissions of the config.yaml to “644” by executing the command: chmod 644 /var/lib/kubelet/config.yaml
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003240
- Vuln IDs
-
- CNTR-K8-003240
- Rule IDs
-
- CNTR-K8-003240_rule
Checks: C-CNTR-K8-003240_chk
Review the ownership of the Kubernetes kubelet config.yaml by using the command: stat -c %U:%G /var/lib/kubelet/config.yaml | grep -v root:root If the command returns any non-root:root ownership, this is a finding.
Fix: F-CNTR-K8-003240_fix
Change the ownership of the kubelet config.yaml to root:root by executing the command: chown root:root /var/lib/kubelet/config.yaml
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003250
- Vuln IDs
-
- CNTR-K8-003250
- Rule IDs
-
- CNTR-K8-003250_rule
Checks: C-CNTR-K8-003250_chk
Review the permissions of the Kubernetes manifest files by using the command: stat -c %a /etc/kubernetes/manifests/* If any of the files have permissions more permissive than “644”, this is a finding.
Fix: F-CNTR-K8-003250_fix
Change the permissions of the manifest files to “644” by executing the command: chmod 644 /etc/kubernetes/manifests/*
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003260
- Vuln IDs
-
- CNTR-K8-003260
- Rule IDs
-
- CNTR-K8-003260_rule
Checks: C-CNTR-K8-003260_chk
Review the permissions of the Kubernetes etcd files by using the command: stat -c %a /var/lib/etcd/* If any of the files have permissions more permissive than “700”, this is a finding.
Fix: F-CNTR-K8-003260_fix
Change the permissions of the etcd files to “700” by executing the command: chmod 700 /var/lib/etcd/*
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003270
- Vuln IDs
-
- CNTR-K8-003270
- Rule IDs
-
- CNTR-K8-003270_rule
Checks: C-CNTR-K8-003270_chk
Review the permissions of the Kubernetes conf files by using the commands: stat -c %a /etc/kubernetes/admin.conf stat -c %a /etc/kubernetes/scheduler.conf stat -c %a /etc/kubernetes/controller-manager.conf If any of the files have permissions more permissive than “644”, this is a finding.
Fix: F-CNTR-K8-003270_fix
Change the permissions of the conf files to “644” by executing the commands: chmod 644 /etc/kubernetes/admin.conf chmod 644 /etc/kubernetes/scheduler.conf chmod 644 /etc/kubernetes/controller-manager.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003280
- Vuln IDs
-
- CNTR-K8-003280
- Rule IDs
-
- CNTR-K8-003280_rule
Checks: C-CNTR-K8-003280_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * If the setting “audit-policy-file” is not set in the Kubernetes API Server manifest file, or it is set without valid content, this is a finding.
Fix: F-CNTR-K8-003280_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument “--audit-policy-file” to the path of a valid audit policy file.
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003290
- Vuln IDs
-
- CNTR-K8-003290
- Rule IDs
-
- CNTR-K8-003290_rule
Checks: C-CNTR-K8-003290_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i audit-log-maxsize * If the setting “audit-log-maxsize” is not set in the Kubernetes API Server manifest file or it is set to less than “100”, this is a finding.
Fix: F-CNTR-K8-003290_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--audit-log-maxsize” to a minimum of “100”.
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003300
- Vuln IDs
-
- CNTR-K8-003300
- Rule IDs
-
- CNTR-K8-003300_rule
Checks: C-CNTR-K8-003300_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i audit-log-maxbackup * If the setting “audit-log-maxbackup” is not set in the Kubernetes API Server manifest file or it is set to less than “10”, this is a finding.
Fix: F-CNTR-K8-003300_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--audit-log-maxbackup” to a minimum of “10”.
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003310
- Vuln IDs
-
- CNTR-K8-003310
- Rule IDs
-
- CNTR-K8-003310_rule
Checks: C-CNTR-K8-003310_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i audit-log-maxage * If the setting “audit-log-maxage” is not set in the Kubernetes API Server manifest file or it is set to less than “30”, this is a finding.
Fix: F-CNTR-K8-003310_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--audit-log-maxage” to a minimum of “30”.
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003320
- Vuln IDs
-
- CNTR-K8-003320
- Rule IDs
-
- CNTR-K8-003320_rule
Checks: C-CNTR-K8-003320_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i audit-log-path * If the setting “audit-log-path” is not set in the Kubernetes API Server manifest file or it is not set to a valid path, this is a finding.
Fix: F-CNTR-K8-003320_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--audit-log-path” to a valid location.
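The audit-log flag checks in this section can be automated with one small extractor. This is a sketch: the manifest name kube-apiserver.yaml is the kubeadm default, and the parsing assumes flags are written in --flag=value form; verify both in your environment.

```shell
#!/bin/sh
# Sketch: extract a --flag=value setting from a static pod manifest and
# report any required audit flag that is missing.
MANIFEST=/etc/kubernetes/manifests/kube-apiserver.yaml

flag_value() {
  # $1 = flag name without leading dashes, $2 = manifest file
  sed -n "s/.*--$1=\([^\" ]*\).*/\1/p" "$2" | head -1
}

if [ -f "$MANIFEST" ]; then
  for f in audit-policy-file audit-log-path audit-log-maxage audit-log-maxbackup audit-log-maxsize; do
    if [ -z "$(flag_value "$f" "$MANIFEST")" ]; then
      echo "finding: --$f not set"
    fi
  done
fi
```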
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003330
- Vuln IDs
-
- CNTR-K8-003330
- Rule IDs
-
- CNTR-K8-003330_rule
Checks: C-CNTR-K8-003330_chk
Review the permissions of the Kubernetes PKI cert files by using the command: find /etc/kubernetes/pki -name "*.crt" | xargs stat -c '%n %a' If any of the files have permissions more permissive than “644”, this is a finding.
Fix: F-CNTR-K8-003330_fix
Change the permissions of the cert files to “644” by executing the command: chmod -R 644 /etc/kubernetes/pki/*.crt
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003340
- Vuln IDs
-
- CNTR-K8-003340
- Rule IDs
-
- CNTR-K8-003340_rule
Checks: C-CNTR-K8-003340_chk
Review the permissions of the Kubernetes PKI key files by using the command: find /etc/kubernetes/pki -name "*.key" | xargs stat -c '%n %a' If any of the files have permissions more permissive than “600”, this is a finding.
Fix: F-CNTR-K8-003340_fix
Change the permissions of the key files to “600” by executing the command: chmod -R 600 /etc/kubernetes/pki/*.key
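The cert and key checks above can also be expressed with GNU find's -perm /mode test, which matches files having any permission bit from the given mode. The complement of 644 within 777 is 133, and of 600 is 177, so a match means "more permissive than allowed". This is a sketch under that assumption:

```shell
#!/bin/sh
# Sketch: report files whose mode has any bit outside the allowed maximum.
loose_files() {
  # $1 = directory, $2 = name glob, $3 = complement mode (133 for max 644,
  # 177 for max 600)
  find "$1" -name "$2" -perm "/$3"
}

# Example:
# loose_files /etc/kubernetes/pki '*.crt' 133
# loose_files /etc/kubernetes/pki '*.key' 177
```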
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-001453
- Version
- CNTR-K8-003350
- Vuln IDs
-
- CNTR-K8-003350
- Rule IDs
-
- CNTR-K8-003350_rule
Checks: C-CNTR-K8-003350_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i tls-min-version * If the setting tls-min-version is not set in the Kubernetes API Server manifest file or it is set to “VersionTLS10” or “VersionTLS11”, this is a finding.
Fix: F-CNTR-K8-003350_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--tls-min-version” to either “VersionTLS12” or “VersionTLS13”.
- RMF Control
- AC-2
- Severity
- M
- CCI
- CCI-000018
- Version
- CNTR-K8-003500
- Vuln IDs
-
- CNTR-K8-003500
- Rule IDs
-
- CNTR-K8-003500_rule
Checks: C-CNTR-K8-003500_chk
Review the container platform configuration to determine if audit records are automatically created upon account creation. If audit records are not automatically created upon account creation, this is a finding.
Fix: F-CNTR-K8-003500_fix
Configure the container platform to automatically create audit records on account creation.
- RMF Control
- AC-2
- Severity
- M
- CCI
- CCI-001403
- Version
- CNTR-K8-003510
- Vuln IDs
-
- CNTR-K8-003510
- Rule IDs
-
- CNTR-K8-003510_rule
Checks: C-CNTR-K8-003510_chk
Review the container platform configuration to determine if account modification is automatically audited. If account modification is not automatically audited, this is a finding.
Fix: F-CNTR-K8-003510_fix
Configure the container platform to automatically audit account modification.
- RMF Control
- AC-2
- Severity
- M
- CCI
- CCI-001404
- Version
- CNTR-K8-003520
- Vuln IDs
-
- CNTR-K8-003520
- Rule IDs
-
- CNTR-K8-003520_rule
Checks: C-CNTR-K8-003520_chk
Review the container platform configuration to determine if account disabling is automatically audited. If account disabling is not automatically audited, this is a finding.
Fix: F-CNTR-K8-003520_fix
Configure the container platform to automatically audit account disabling.
- RMF Control
- AU-5
- Severity
- M
- CCI
- CCI-000140
- Version
- CNTR-K8-003530
- Vuln IDs
-
- CNTR-K8-003530
- Rule IDs
-
- CNTR-K8-003530_rule
Checks: C-CNTR-K8-003530_chk
Review the configuration settings to determine how the container platform components are configured for audit failures. When the audit failure is due to the lack of audit record storage, the container platform must continue generating audit records, restarting services if necessary, and overwrite the oldest audit records in a first-in-first-out manner. If the audit failure is due to a communication to a centralized collection server, the container platform must queue audit records locally until communication is restored or the records are retrieved manually. If the container platform is not configured to handle audit failures appropriately, this is a finding.
Fix: F-CNTR-K8-003530_fix
Configure the container platform to continue generating audit records, overwriting oldest audit records in a first-in-first-out manner when the failure is due to a lack of audit record storage. When the audit failure is due to a communication to a centralized collection server, configure the container platform to queue audit records locally until communication is restored or the records are retrieved manually. If other actions are to be taken for audit record failures, document the actions and rationale in the system security plan and obtain risk acceptance approvals.