Kubernetes Security Technical Implementation Guide
Comparison against the immediately-prior release (V2R3). Rule matching uses the Group Vuln ID. Content-change detection compares the rule’s description, check, and fix text after stripping inline markup — cosmetic-only edits aren’t flagged.
Added rules: 3
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000150
- Vuln IDs
-
- V-242376
- Rule IDs
-
- SV-242376r960759_rule
Checks: C-45651r863731_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i tls-min-version * If the setting "tls-min-version" is not configured in the Kubernetes Controller Manager manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Fix: F-45609r863732_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
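For illustration only, a minimal excerpt of a kubeadm-style kube-controller-manager static Pod manifest with this flag applied might look like the following (the image tag and field values other than the flag are placeholders, and flags unrelated to this rule are omitted). The same "--tls-min-version" pattern applies to the kube-scheduler and kube-apiserver manifests covered by the next two rules.
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: registry.k8s.io/kube-controller-manager:v1.29.0   # placeholder version
    command:
    - kube-controller-manager
    - --tls-min-version=VersionTLS12   # enforce TLS 1.2 as the minimum protocol version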
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000160
- Vuln IDs
-
- V-242377
- Rule IDs
-
- SV-242377r960759_rule
Checks: C-45652r863734_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i tls-min-version * If the setting "tls-min-version" is not configured in the Kubernetes Scheduler manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Fix: F-45610r863735_fix
Edit the Kubernetes Scheduler manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000170
- Vuln IDs
-
- V-242378
- Rule IDs
-
- SV-242378r960759_rule
Checks: C-45653r863737_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i tls-min-version * If the setting "tls-min-version" is not configured in the Kubernetes API Server manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Fix: F-45611r863738_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000180
- Vuln IDs
-
- V-242379
- Rule IDs
-
- SV-242379r960759_rule
Checks: C-45654r927069_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i auto-tls * If the setting "--auto-tls" is not configured in the Kubernetes etcd manifest file or it is set to true, this is a finding.
Fix: F-45612r927070_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--auto-tls" to "false".
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000190
- Vuln IDs
-
- V-242380
- Rule IDs
-
- SV-242380r960759_rule
Checks: C-45655r927072_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -I peer-auto-tls * If the setting "--peer-auto-tls" is not configured in the Kubernetes etcd manifest file or it is set to "true", this is a finding.
Fix: F-45613r927073_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--peer-auto-tls" to "false".
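As an illustrative sketch (not mandated text), the etcd static Pod manifest's command section would carry both flags from this rule and the preceding one, for example:
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  containers:
  - name: etcd
    image: registry.k8s.io/etcd:3.5.12-0   # placeholder version
    command:
    - etcd
    - --auto-tls=false        # do not generate self-signed certificates for client connections (V-242379)
    - --peer-auto-tls=false   # do not generate self-signed certificates for peer connections (V-242380)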
- RMF Control
- AC-2
- Severity
- H
- CCI
- CCI-000015
- Version
- CNTR-K8-000220
- Vuln IDs
-
- V-242381
- Rule IDs
-
- SV-242381r1043176_rule
Checks: C-45656r927075_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i use-service-account-credentials * If the setting "--use-service-account-credentials" is not configured in the Kubernetes Controller Manager manifest file or it is set to "false", this is a finding.
Fix: F-45614r927076_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--use-service-account-credentials" to "true".
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000270
- Vuln IDs
-
- V-242382
- Rule IDs
-
- SV-242382r960792_rule
Checks: C-45657r918144_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i authorization-mode * If the setting authorization-mode is set to "AlwaysAllow" in the Kubernetes API Server manifest file or is not configured, this is a finding.
Fix: F-45615r918145_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--authorization-mode" to "Node,RBAC".
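A hedged example of what the resulting flag looks like inside the kube-apiserver static Pod manifest (other required flags omitted; the image tag is a placeholder):
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.29.0   # placeholder version
    command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC   # never AlwaysAllow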
- RMF Control
- CM-6
- Severity
- H
- CCI
- CCI-000366
- Version
- CNTR-K8-000290
- Vuln IDs
-
- V-242383
- Rule IDs
-
- SV-242383r960801_rule
Checks: C-45658r863752_chk
To view the available namespaces, run the command: kubectl get namespaces
The default namespaces to be validated are default, kube-public, and kube-node-lease if it is created.
For the default namespace, execute the commands:
kubectl config set-context --current --namespace=default
kubectl get all
For the kube-public namespace, execute the commands:
kubectl config set-context --current --namespace=kube-public
kubectl get all
For the kube-node-lease namespace, execute the commands:
kubectl config set-context --current --namespace=kube-node-lease
kubectl get all
The only valid return values are the kubernetes service (i.e., service/kubernetes) and nothing at all. If a return value is returned from the "kubectl get all" command and it is not the kubernetes service (i.e., service/kubernetes), this is a finding.
Fix: F-45616r863753_fix
Move any user-managed resources from the default, kube-public, and kube-node-lease namespaces to user namespaces.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000300
- Vuln IDs
-
- V-242384
- Rule IDs
-
- SV-242384r960792_rule
Checks: C-45659r863755_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i bind-address * If the setting "bind-address" is not set to "127.0.0.1" or is not found in the Kubernetes Scheduler manifest file, this is a finding.
Fix: F-45617r863756_fix
Edit the Kubernetes Scheduler manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--bind-address" to "127.0.0.1".
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000310
- Vuln IDs
-
- V-242385
- Rule IDs
-
- SV-242385r960792_rule
Checks: C-45660r863758_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i bind-address * If the setting bind-address is not set to "127.0.0.1" or is not found in the Kubernetes Controller Manager manifest file, this is a finding.
Fix: F-45618r863759_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--bind-address" to "127.0.0.1".
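For illustration, assuming a kubeadm-style manifest, the relevant portion of the controller manager command list would read as in the excerpt below; the scheduler manifest from the previous rule takes the same flag. Everything except the flag itself is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: registry.k8s.io/kube-controller-manager:v1.29.0   # placeholder version
    command:
    - kube-controller-manager
    - --bind-address=127.0.0.1   # listen on loopback only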
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000320
- Vuln IDs
-
- V-242386
- Rule IDs
-
- SV-242386r960792_rule
Checks: C-45661r927079_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i insecure-port * If the setting "--insecure-port" is not set to "0" or is not configured in the Kubernetes API server manifest file, this is a finding. Note: The "--insecure-port" flag has been deprecated and can only be set to "0". This flag will be removed in v1.24.
Fix: F-45619r927080_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--insecure-port" to "0".
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000330
- Vuln IDs
-
- V-242387
- Rule IDs
-
- SV-242387r960792_rule
Checks: C-45662r918147_chk
On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet If the "--read-only-port" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i readOnlyPort <path_to_config_file> If the setting "readOnlyPort" exists and is not set to "0", this is a finding.
Fix: F-45620r918148_fix
On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet Remove the "--read-only-port" option if present. Note the path to the config file (identified by --config). Edit the config file: Set "readOnlyPort" to "0" or remove the setting. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
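A minimal sketch of the corresponding setting in the kubelet's config file (the file identified by --config; its path and other fields vary by installation):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
readOnlyPort: 0   # disable the unauthenticated read-only port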
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000340
- Vuln IDs
-
- V-242388
- Rule IDs
-
- SV-242388r960792_rule
Checks: C-45663r927082_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i insecure-bind-address * If the setting "--insecure-bind-address" is found and set to "localhost" in the Kubernetes API manifest file, this is a finding.
Fix: F-45621r927083_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Remove the value of "--insecure-bind-address" setting.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000350
- Vuln IDs
-
- V-242389
- Rule IDs
-
- SV-242389r960792_rule
Checks: C-45664r927085_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i secure-port * If the setting "--secure-port" is set to "0" or is not configured in the Kubernetes API manifest file, this is a finding.
Fix: F-45622r927086_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--secure-port" to a value greater than "0".
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000360
- Vuln IDs
-
- V-242390
- Rule IDs
-
- SV-242390r960792_rule
Checks: C-45665r927088_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i anonymous-auth * If the setting "--anonymous-auth" is set to "true" in the Kubernetes API Server manifest file, this is a finding.
Fix: F-45623r927089_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--anonymous-auth" to "false".
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000370
- Vuln IDs
-
- V-242391
- Rule IDs
-
- SV-242391r960792_rule
Checks: C-45666r918150_chk
On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet If the "--anonymous-auth" option exists, this is a finding. Note the path to the config file (identified by --config). Inspect the content of the config file: Locate the "anonymous" section under "authentication". In this section, if the field "enabled" does not exist or is set to "true", this is a finding.
Fix: F-45624r918151_fix
On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet Remove the "anonymous-auth" option if present. Note the path to the config file (identified by --config). Edit the config file: Locate the "authentication" section and the "anonymous" subsection. Within the "anonymous" subsection, set "enabled" to "false". Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
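For reference, a minimal KubeletConfiguration fragment with anonymous authentication disabled, assuming the kubelet is driven by a --config file:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false   # reject unauthenticated requests to the kubelet API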
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000380
- Vuln IDs
-
- V-242392
- Rule IDs
-
- SV-242392r1069461_rule
Checks: C-45667r1069459_chk
Run the following command on each Worker Node: ps -ef | grep kubelet Verify that the --authorization-mode argument exists and is set to "Webhook". If the --authorization-mode argument is not set to "Webhook" or does not exist, this is a finding.
Fix: F-45625r1069460_fix
Edit the Kubernetes Kubelet service file in the --config directory on the Kubernetes Worker Node: Set the value of "--authorization-mode" to "Webhook" in the KUBELET_SYSTEM_PODS_ARGS variable. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
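If the cluster instead drives the kubelet through a --config file (the layout most of the kubelet rules in this guide assume), the equivalent setting is the authorization mode in the KubeletConfiguration; a minimal, assumed sketch:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authorization:
  mode: Webhook   # delegate kubelet API authorization decisions to the API server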
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000400
- Vuln IDs
-
- V-242393
- Rule IDs
-
- SV-242393r960792_rule
Checks: C-45668r712533_chk
Log in to each worker node. Verify that the sshd service is not running. To validate that the service is not running, run the command: systemctl status sshd If the service sshd is active (running), this is a finding. Note: If console access is not available, SSH access can be attempted. If the worker nodes cannot be reached, this requirement is "not a finding".
Fix: F-45626r863782_fix
To stop the sshd service, run the command: systemctl stop sshd Note: If access to the worker node is through an SSH session, it is important to realize there are two requirements for disabling and stopping the sshd service and they should be done during the same SSH session. Disabling the service must be performed first and then the service stopped to guarantee both settings can be made if the session is interrupted.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000410
- Vuln IDs
-
- V-242394
- Rule IDs
-
- SV-242394r960792_rule
Checks: C-45669r712536_chk
Log in to each worker node. Verify that the sshd service is not enabled. To validate the service is not enabled, run the command: systemctl is-enabled sshd.service If the service sshd is enabled, this is a finding. Note: If console access is not available, SSH access can be attempted. If the worker nodes cannot be reached, this requirement is "not a finding".
Fix: F-45627r863784_fix
To disable the sshd service, run the command: chkconfig sshd off Note: If access to the worker node is through an SSH session, it is important to realize there are two requirements for disabling and stopping the sshd service that must be done during the same SSH session. Disabling the service must be performed first and then the service stopped to guarantee both settings can be made if the session is interrupted.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000420
- Vuln IDs
-
- V-242395
- Rule IDs
-
- SV-242395r960792_rule
Checks: C-45670r863786_chk
From the Control Plane, run the command: kubectl get pods --all-namespaces -l k8s-app=kubernetes-dashboard If any resources are returned, this is a finding.
Fix: F-45628r712540_fix
Delete the Kubernetes dashboard deployment with the following command: kubectl delete deployment kubernetes-dashboard --namespace=kube-system
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000430
- Vuln IDs
-
- V-242396
- Rule IDs
-
- SV-242396r960792_rule
Checks: C-45671r863788_chk
From the Control Plane and each Worker node, check the version of kubectl by executing the command: kubectl version --client If the Control Plane or any Worker nodes are not using kubectl version 1.12.9 or newer, this is a finding.
Fix: F-45629r863789_fix
Upgrade the Control Plane and Worker nodes to the latest version of kubectl.
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000440
- Vuln IDs
-
- V-242397
- Rule IDs
-
- SV-242397r1069464_rule
Checks: C-45672r1069462_chk
If staticPodPath is missing in the Kubelet config and in the systemd arguments, the node does not support static pods.
1. To find the staticPodPath setting on Kubernetes worker nodes, follow these steps:
a. On the Worker nodes, run the command: ps -ef | grep kubelet
b. Note the path to the Kubelet configuration file (identified by --config). (ls /var/lib/kubelet/config.yaml is the common location.)
c. Run the command: grep -i staticPodPath <path_to_config_file>
If any of the Worker nodes return a value for "staticPodPath", this is a finding.
If staticPodPath is not in the config file, check if it is set as a command-line argument.
2. Check the Kubelet systemd service arguments:
a. Run the following command to check the Kubelet service: sudo systemctl cat kubelet | grep pod-manifest-path
If there is no output, staticPodPath is not set in the systemd arguments. If there is any output, this is a finding.
(Example return: ExecStart=/usr/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests This means static pods are defined in /etc/kubernetes/manifests.)
Fix: F-45630r1069463_fix
1. Remove the staticPodPath setting on Kubernetes worker nodes:
a. On each Worker node, run the command: ps -ef | grep kubelet
b. Note the path to the config file (identified by --config).
c. Edit the Kubernetes kubelet file in the --config directory on the Worker nodes. Remove the setting "staticPodPath".
d. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
2. Remove the Kubelet systemd service arguments:
a. Modify the systemd service file. Run the command: sudo systemctl edit --full kubelet
(Example return: ExecStart=/usr/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests)
b. Find and remove --pod-manifest-path.
c. Save and exit the editor.
d. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000450
- Vuln IDs
-
- V-242398
- Rule IDs
-
- SV-242398r960792_rule
Checks: C-45673r918159_chk
On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the feature-gates setting, if one is returned. If the feature-gates setting is available and contains the DynamicAuditing flag set to "true", this is a finding. On each Control Plane and Worker node, run the command: ps -ef | grep kubelet If the "--feature-gates" option exists, this is a finding. Note the path to the config file (identified by: --config). Inspect the content of the config file: If the "featureGates" setting is present and has the "DynamicAuditing" flag set to "true", this is a finding.
Fix: F-45631r918160_fix
On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * If any "--feature-gates" setting is available and contains the "DynamicAuditing" flag, remove the flag or set it to false. On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet Remove the "--feature-gates" option if present. Note the path to the config file (identified by: --config). Edit the Kubernetes Kubelet config file: If the "featureGates" setting is present, remove the "DynamicAuditing" flag or set the flag to false. Restart the kubelet service using the following command: service kubelet restart
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000460
- Vuln IDs
-
- V-242399
- Rule IDs
-
- SV-242399r960792_rule
Checks: C-45674r918162_chk
This check is only applicable for Kubernetes versions 1.25 and older. On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * In each manifest file, if the feature-gates setting does not exist, does not contain the "DynamicKubeletConfig" flag, or sets the flag to "true", this is a finding. On each Control Plane and Worker node, run the command: ps -ef | grep kubelet Verify the "feature-gates" option is not present. Note the path to the config file (identified by --config). Inspect the content of the config file: If the "featureGates" setting is not present, does not contain "DynamicKubeletConfig", or sets the flag to "true", this is a finding.
Fix: F-45632r918163_fix
This fix is only applicable to Kubernetes version 1.25 and older. On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Edit the manifest files so that every manifest has a "--feature-gates" setting with "DynamicKubeletConfig=false". On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet Remove the "feature-gates" option if present. Note the path to the config file (identified by --config). Edit the config file: Add a "featureGates" setting if one does not yet exist. Add the feature gate "DynamicKubeletConfig=false". Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000470
- Vuln IDs
-
- V-242400
- Rule IDs
-
- SV-242400r960792_rule
Checks: C-45675r927094_chk
On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the "--feature-gates" setting, if one is returned. If the "--feature-gates" setting is available and contains the "AllAlpha" flag set to "true", this is a finding.
Fix: F-45633r927095_fix
Edit any manifest file that contains the "--feature-gates" setting with "AllAlpha" set to "true". Set the value of "AllAlpha" to "false" or remove the setting completely. (AllAlpha - default=false)
- RMF Control
- AU-14
- Severity
- M
- CCI
- CCI-001464
- Version
- CNTR-K8-000610
- Vuln IDs
-
- V-242402
- Rule IDs
-
- SV-242402r960888_rule
Checks: C-45677r927100_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-path * If the "--audit-log-path" is not set, this is a finding.
Fix: F-45635r927101_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-path" to a secure location for the audit logs to be written. Note: If the API server is running as a Pod, then the manifest will also need to be updated to mount the host system filesystem where the audit log file is to be written.
- RMF Control
- AC-2
- Severity
- M
- CCI
- CCI-000018
- Version
- CNTR-K8-000700
- Vuln IDs
-
- V-242403
- Rule IDs
-
- SV-242403r986135_rule
Checks: C-45678r863807_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i audit-policy-file If the audit-policy-file is not set, this is a finding. The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/vX (Where X is the latest apiVersion)
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-45636r927103_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-policy-file" to the path of a file with the following content:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/vX (Where X is the latest apiVersion)
kind: Policy
rules:
- level: RequestResponse
Note: If the API server is running as a Pod, then the manifest will also need to be updated to mount the host system filesystem where the audit policy file resides.
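To illustrate the note about mounting the host filesystem, a hedged excerpt of a kube-apiserver static Pod manifest wiring up both the audit policy file (this rule) and the audit log path (the preceding rule); all paths and the image tag are placeholders chosen for this example, not values required by the STIG:
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.29.0   # placeholder version
    command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml   # placeholder path
    - --audit-log-path=/var/log/kubernetes/audit/audit.log    # placeholder path
    volumeMounts:
    - name: audit-policy
      mountPath: /etc/kubernetes/audit-policy.yaml
      readOnly: true
    - name: audit-log
      mountPath: /var/log/kubernetes/audit
  volumes:
  - name: audit-policy
    hostPath:
      path: /etc/kubernetes/audit-policy.yaml
      type: File
  - name: audit-log
    hostPath:
      path: /var/log/kubernetes/audit
      type: DirectoryOrCreate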
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000850
- Vuln IDs
-
- V-242404
- Rule IDs
-
- SV-242404r960960_rule
Checks: C-45679r918165_chk
On the Control Plane and Worker nodes, run the command: ps -ef | grep kubelet If the option "--hostname-override" is present, this is a finding.
Fix: F-45637r918166_fix
Run the command: systemctl status kubelet. Note the path to the drop-in file. Determine the path to the environment file(s) with the command: grep -i EnvironmentFile <path_to_drop_in_file>. Remove the "--hostname-override" option from any environment file where it is present. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000860
- Vuln IDs
-
- V-242405
- Rule IDs
-
- SV-242405r960960_rule
Checks: C-45680r863812_chk
On the Control Plane, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must be owned by root:root. If any manifest file is not owned by root:root, this is a finding.
Fix: F-45638r863813_fix
On the Control Plane, change to the /etc/kubernetes/manifests directory. Run the command: chown root:root * To verify the change took place, run the command: ls -l * All the manifest files should be owned by root:root.
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000880
- Vuln IDs
-
- V-242406
- Rule IDs
-
- SV-242406r960960_rule
Checks: C-45681r863815_chk
On the Kubernetes Control Plane and Worker nodes, run the command: ps -ef | grep kubelet Check the config file (path identified by: --config): Change to the directory identified by --config (example /etc/sysconfig/) and run the command: ls -l kubelet Each kubelet configuration file must be owned by root:root. If any kubelet configuration file is not owned by root:root, this is a finding.
Fix: F-45639r863816_fix
On the Control Plane and Worker nodes, change to the --config directory. Run the command: chown root:root kubelet To verify the change took place, run the command: ls -l kubelet The kubelet file should now be owned by root:root.
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000890
- Vuln IDs
-
- V-242407
- Rule IDs
-
- SV-242407r960960_rule
Checks: C-45682r918169_chk
On the Kubernetes Control Plane and Worker nodes, run the command: ps -ef | grep kubelet Check the config file (path identified by: --config): Change to the directory identified by --config (example /etc/sysconfig/) and run the command: ls -l kubelet Each KubeletConfiguration file must have permissions of "644" or more restrictive. If any KubeletConfiguration file has permissions less restrictive than "644", this is a finding.
Fix: F-45640r918170_fix
On the Kubernetes Control Plane and Worker nodes, run the command: ps -ef | grep kubelet Check the config file (path identified by: --config): Change to the directory identified by --config (example /etc/sysconfig/) and run the command: chmod 644 kubelet To verify the change took place, run the command: ls -l kubelet The kubelet file should now have the permissions of "644".
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000900
- Vuln IDs
-
- V-242408
- Rule IDs
-
- SV-242408r960960_rule
Checks: C-45683r918172_chk
On both Control Plane and Worker Nodes, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must have permissions of "644" or more restrictive. If any manifest file has permissions less restrictive than "644", this is a finding.
Fix: F-45641r918173_fix
On both Control Plane and Worker Nodes, change to the /etc/kubernetes/manifests directory. Run the command: chmod 644 * To verify the change took place, run the command: ls -l * All the manifest files should now have permissions of "644".
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000381
- Version
- CNTR-K8-000910
- Vuln IDs
-
- V-242409
- Rule IDs
-
- SV-242409r960963_rule
Checks: C-45684r863824_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i profiling * If the setting "profiling" is not configured in the Kubernetes Controller Manager manifest file or it is set to "True", this is a finding.
Fix: F-45642r863825_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--profiling" to "false".
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000920
- Vuln IDs
-
- V-242410
- Rule IDs
-
- SV-242410r1043177_rule
Checks: C-45685r1007472_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the commands:
grep kube-apiserver.manifest -I -secure-port *
grep kube-apiserver.manifest -I -etcd-servers *
Edit the manifest file (vim <Manifest Name>) and review:
livenessProbe: httpGet: port:
ports: - containerPort: hostPort: - containerPort: hostPort:
Run the command: kubectl describe services --all-namespaces
Search labels for any apiserver namespaces and note the ports.
Any manifest and namespace PPS or services configuration not in compliance with the PPSM CAL is a finding.
Review the information system's documentation and interview the team, gain an understanding of the API Server architecture, and determine the applicable PPS. If there are any PPS in the system documentation not in compliance with the PPSM CAL, this is a finding. Any PPS not set in the system documentation is a finding.
Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/
Verify the API Server network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-45643r1007473_fix
Amend any system documentation requiring revision to comply with PPSM CAL. Update Kubernetes API Server manifest and namespace PPS configuration to comply with PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000930
- Vuln IDs
-
- V-242411
- Rule IDs
-
- SV-242411r1043177_rule
Checks: C-45686r1007475_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the commands:
grep kube-scheduler.manifest -I -insecure-port
grep kube-scheduler.manifest -I -secure-port
Edit the manifest file (vim <Manifest Name>) and review:
livenessProbe: httpGet: port:
ports: - containerPort: hostPort: - containerPort: hostPort:
Run the command: kubectl describe services --all-namespaces
Search labels for any scheduler namespaces and note the ports.
Any manifest and namespace PPS configuration not in compliance with the PPSM CAL is a finding.
Review the information system's documentation and interview the team, gain an understanding of the Scheduler architecture, and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding.
Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/
Verify the Scheduler network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-45644r1007476_fix
Amend any system documentation requiring revision to comply with the PPSM CAL. Update Kubernetes Scheduler manifest and namespace PPS configuration to comply with the PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000940
- Vuln IDs
-
- V-242412
- Rule IDs
-
- SV-242412r1043177_rule
Checks: C-45687r1007478_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command:
grep kube-controller-manager.manifest -I -secure-port
Review the manifest file (vim <Manifest Name>) and check:
livenessProbe: httpGet: port:
ports: - containerPort: hostPort: - containerPort: hostPort:
Run the command: kubectl describe services --all-namespaces
Search labels for any controller namespaces.
Any manifest and namespace PPS or services configuration not in compliance with the PPSM CAL is a finding.
Review the information system's documentation and interview the team, gain an understanding of the Controller architecture, and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding.
Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/
Verify the Controller network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-45645r1007479_fix
Amend any system documentation requiring revision to comply with the PPSM CAL. Update Kubernetes Controller manifest and namespace PPS configuration to comply with PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000950
- Vuln IDs
-
- V-242413
- Rule IDs
-
- SV-242413r1043177_rule
Checks: C-45688r863833_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command:
grep kube-apiserver.manifest -I -etcd-servers *
Edit the etcd-main.manifest file (vim <Manifest Name>) and review:
livenessProbe: httpGet: port:
ports: - containerPort: hostPort: - containerPort: hostPort:
Run the command: kubectl describe services --all-namespaces
Search labels for any apiserver namespaces and note the ports.
Any manifest and namespace PPS configuration not in compliance with the PPSM CAL is a finding.
Review the information system's documentation and interview the team, gain an understanding of the etcd architecture, and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding.
Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/
Verify the etcd network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-45646r712594_fix
Amend any system documentation requiring revision. Update Kubernetes etcd manifest and namespace PPS configuration to comply with PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000960
- Vuln IDs
-
- V-242414
- Rule IDs
-
- SV-242414r1043177_rule
Checks: C-45689r863835_chk
On the Control Plane, run the command: kubectl get pods --all-namespaces The list returned is all pods running within the Kubernetes cluster. For those pods running within the user namespaces (System namespaces are kube-system, kube-node-lease and kube-public), run the command: kubectl get pod podname -o yaml | grep -i port Note: In the above command, "podname" is the name of the pod. For the command to work correctly, the current context must be changed to the namespace for the pod. The command to do this is: kubectl config set-context --current --namespace=namespace-name (Note: "namespace-name" is the name of the namespace.) Review the ports that are returned for the pod. If any host-privileged ports are returned for any of the pods, this is a finding.
Fix: F-45647r717032_fix
For any of the pods that are using host-privileged ports, reconfigure the pod to use a service to map a host non-privileged port to the pod port or reconfigure the image to use non-privileged ports.
- RMF Control
- Severity
- H
- CCI
- CCI-004062
- Version
- CNTR-K8-001160
- Vuln IDs
-
- V-242415
- Rule IDs
-
- SV-242415r1069466_rule
Checks: C-45690r1069465_chk
Follow these steps to check, from the Kubernetes Control Plane, whether secrets are stored as environment variables.
1. Find all pods using secrets in environment variables. To list all pods using secrets as environment variables, execute: kubectl get pods --all-namespaces -o yaml | grep -A5 "secretKeyRef" If any of the values returned reference environment variables, this is a finding.
2. Check environment variables in a specific pod. To check if a specific pod is using secrets as environment variables, execute: kubectl get pods -n <namespace> (Replace <namespace> with the actual namespace, or omit -n <namespace> to check the default namespace.) kubectl describe pod <pod-name> -n <namespace> | grep -A5 "Environment:" If secrets are used, output like the following will be displayed:
Environment:
SECRET_USERNAME: <set from secret: my-secret key: username>
SECRET_PASSWORD: <set from secret: my-secret key: password>
If the output is similar to this, the pod is using Kubernetes secrets as environment variables, and this is a finding.
3. Check the pod YAML for secret usage. To check the full YAML definition for environment variables, execute: kubectl get pod <pod-name> -n <namespace> -o yaml | grep -A5 "env:" Example output:
env:
- name: SECRET_USERNAME
  valueFrom:
    secretKeyRef:
      name: my-secret
      key: username
This means the pod is pulling the secret named my-secret and setting SECRET_USERNAME from its username key. If the pod is pulling a secret and setting an environment variable in the "env:" section, this is a finding.
4. Check secrets in a Deployment, StatefulSet, or DaemonSet. If the pod is managed by a Deployment, StatefulSet, or DaemonSet, check their configurations: kubectl get deployment <deployment-name> -n <namespace> -o yaml | grep -A5 "env:" or, for all Deployments in all namespaces: kubectl get deployments --all-namespaces -o yaml | grep -A5 "env:" If the pod is pulling a secret and setting an environment variable in the "env:" section, this is a finding.
5. Check environment variables inside a running pod. If needed, check the environment variables inside a running pod: kubectl exec -it <pod-name> -n <namespace> -- env | grep SECRET If any of the values returned reference environment variables, this is a finding.
Fix: F-45648r712600_fix
Any secrets stored as environment variables must be moved to the secret files with the proper protections and enforcements or placed within a password vault.
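As a sketch of the remediation pattern (names and paths are hypothetical, not part of this STIG), the Pod below consumes the secret as mounted files instead of environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: example-app               # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    volumeMounts:
    - name: app-credentials
      mountPath: /etc/app/secrets  # secret exposed as files rather than env vars
      readOnly: true
  volumes:
  - name: app-credentials
    secret:
      secretName: my-secret        # the secret formerly referenced via secretKeyRef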
- RMF Control
- SC-2
- Severity
- M
- CCI
- CCI-001082
- Version
- CNTR-K8-001360
- Vuln IDs
-
- V-242417
- Rule IDs
-
- SV-242417r961095_rule
Checks: C-45692r863840_chk
On the Control Plane, run the command: kubectl get pods --all-namespaces Review the namespaces and pods that are returned. Kubernetes system namespaces are kube-node-lease, kube-public, and kube-system. If any user pods are present in the Kubernetes system namespaces, this is a finding.
Fix: F-45650r712606_fix
Move any user pods that are present in the Kubernetes system namespaces to user specific namespaces.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001400
- Vuln IDs
-
- V-242418
- Rule IDs
-
- SV-242418r1043178_rule
Checks: C-45693r863842_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i tls-cipher-suites * If the "tls-cipher-suites" setting is not set in the Kubernetes API server manifest file, contains no value, or does not contain TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, this is a finding.
Fix: F-45651r927105_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--tls-cipher-suites" to: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
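Illustrative excerpt of the kube-apiserver command section with the required suites; the check only requires that all four suites be present in the comma-separated value:
    command:
    - kube-apiserver
    - --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384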
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001410
- Vuln IDs
-
- V-242419
- Rule IDs
-
- SV-242419r1043178_rule
Checks: C-45694r863845_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i client-ca-file * If the "client-ca-file" setting is not set in the Kubernetes API server manifest file or contains no value, this is a finding.
Fix: F-45652r918175_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--client-ca-file" to a path containing an Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001420
- Vuln IDs
-
- V-242420
- Rule IDs
-
- SV-242420r1043178_rule
Checks: C-45695r918177_chk
On the Control Plane, run the command: ps -ef | grep kubelet If the "--client-ca-file" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i clientCAFile <path_to_config_file> If the setting "clientCAFile" is not set or contains no value, this is a finding.
Fix: F-45653r918178_fix
On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--client-ca-file" option if present. Note the path to the config file (identified by --config). Edit the Kubernetes Kubelet config file: Set the value of "clientCAFile" to a path containing an Approved Organizational Certificate. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001430
- Vuln IDs
-
- V-242421
- Rule IDs
-
- SV-242421r1043178_rule
Checks: C-45696r927107_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i root-ca-file * If the setting "--root-ca-file" is not set in the Kubernetes Controller Manager manifest file or contains no value, this is a finding.
Fix: F-45654r927108_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--root-ca-file" to a path containing an Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001440
- Vuln IDs
-
- V-242422
- Rule IDs
-
- SV-242422r1043178_rule
Checks: C-45697r863854_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the commands: grep -i tls-cert-file * grep -i tls-private-key-file * If the settings tls-cert-file and tls-private-key-file are not set in the Kubernetes API server manifest file or contain no value, this is a finding.
Fix: F-45655r863855_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the values of "tls-cert-file" and "tls-private-key-file" to paths containing an Approved Organizational Certificate and its private key.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001450
- Vuln IDs
-
- V-242423
- Rule IDs
-
- SV-242423r1043178_rule
Checks: C-45698r863857_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i client-cert-auth * If the setting client-cert-auth is not configured in the Kubernetes etcd manifest file or set to "false", this is a finding.
Fix: F-45656r863858_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--client-cert-auth" to "true" for the etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001460
- Vuln IDs
-
- V-242424
- Rule IDs
-
- SV-242424r1043178_rule
Checks: C-45699r918180_chk
On the Control Plane, run the command: ps -ef | grep kubelet If the "--tls-private-key-file" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i tlsPrivateKeyFile <path_to_config_file> If the setting "tlsPrivateKeyFile" is not set or contains no value, this is a finding.
Fix: F-45657r918181_fix
On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--tls-private-key-file" option if present. Note the path to the config file (identified by --config). Edit the Kubernetes Kubelet config file: Set "tlsPrivateKeyFile" to a path containing the appropriate private key. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001470
- Vuln IDs
-
- V-242425
- Rule IDs
-
- SV-242425r1043178_rule
Checks: C-45700r918183_chk
On the Control Plane, run the command: ps -ef | grep kubelet If the argument for "--tls-cert-file" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i tlsCertFile <path_to_config_file> If the setting "tlsCertFile" is not set or contains no value, this is a finding.
Fix: F-45658r918184_fix
On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--tls-cert-file" option if present. Note the path to the config file (identified by --config). Edit the Kubernetes Kubelet config file: Set "tlsCertFile" to a path containing an Approved Organization Certificate. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
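A minimal KubeletConfiguration sketch covering this rule and the previous one; the paths are placeholders for the organization-issued certificate and key, and the rest of the file is omitted:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tlsCertFile: /etc/kubernetes/pki/kubelet.crt        # placeholder path to the approved certificate
tlsPrivateKeyFile: /etc/kubernetes/pki/kubelet.key  # placeholder path to the matching private key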
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001480
- Vuln IDs
-
- V-242426
- Rule IDs
-
- SV-242426r1043178_rule
Checks: C-45701r927110_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i peer-client-cert-auth * If the setting "--peer-client-cert-auth" is not configured in the Kubernetes etcd manifest file or set to "false", this is a finding.
Fix: F-45659r927111_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--peer-client-cert-auth" to "true" for the etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001490
- Vuln IDs
-
- V-242427
- Rule IDs
-
- SV-242427r1043178_rule
Checks: C-45702r863869_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i key-file * If the setting "key-file" is not configured in the etcd manifest file, this is a finding.
Fix: F-45660r863870_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--key-file" to the Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001500
- Vuln IDs
-
- V-242428
- Rule IDs
-
- SV-242428r1043178_rule
Checks: C-45703r863872_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i cert-file * If the setting "cert-file" is not configured in the Kubernetes etcd manifest file, this is a finding.
Fix: F-45661r863873_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--cert-file" to the Approved Organizational Certificate.
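Taken together with the client-cert-auth rule above, an illustrative excerpt of the etcd command section might read as follows; the pki paths are common kubeadm defaults used here only as placeholders:
    command:
    - etcd
    - --client-cert-auth=true
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt   # serving certificate (placeholder path)
    - --key-file=/etc/kubernetes/pki/etcd/server.key    # matching private key (placeholder path)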
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001510
- Vuln IDs
-
- V-242429
- Rule IDs
-
- SV-242429r1043178_rule
Checks: C-45704r927113_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i etcd-cafile * If the setting "--etcd-cafile" is not configured in the Kubernetes API Server manifest file, this is a finding.
Fix: F-45662r927114_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--etcd-cafile" to the Certificate Authority for etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001520
- Vuln IDs
-
- V-242430
- Rule IDs
-
- SV-242430r1043178_rule
Checks: C-45705r927116_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i etcd-certfile * If the setting "--etcd-certfile" is not set in the Kubernetes API Server manifest file, this is a finding.
Fix: F-45663r927117_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--etcd-certfile" to the certificate to be used for communication with etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001530
- Vuln IDs
-
- V-242431
- Rule IDs
-
- SV-242431r1043178_rule
Checks: C-45706r927119_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i etcd-keyfile * If the setting "--etcd-keyfile" is not configured in the Kubernetes API Server manifest file, this is a finding.
Fix: F-45664r927120_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--etcd-keyfile" to the certificate to be used for communication with etcd.
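An illustrative excerpt of the kube-apiserver command section covering the three etcd client-TLS rules above; the paths shown are typical kubeadm defaults, used as placeholders:
    command:
    - kube-apiserver
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt                   # CA that signed the etcd serving certificate
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt   # client certificate presented to etcd
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key    # matching private key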
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001540
- Vuln IDs
-
- V-242432
- Rule IDs
-
- SV-242432r1043178_rule
Checks: C-45707r863884_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i peer-cert-file * If the setting "peer-cert-file" is not configured in the Kubernetes etcd manifest file, this is a finding.
Fix: F-45665r863885_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--peer-cert-file" to the certificate to be used for communication with etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001550
- Vuln IDs
-
- V-242433
- Rule IDs
-
- SV-242433r1043178_rule
Checks: C-45708r863887_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i peer-key-file * If the setting "peer-key-file" is not set in the Kubernetes etcd manifest file, this is a finding.
Fix: F-45666r863888_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--peer-key-file" to the certificate to be used for communication with etcd.
- RMF Control
- SC-3
- Severity
- H
- CCI
- CCI-001084
- Version
- CNTR-K8-001620
- Vuln IDs
-
- V-242434
- Rule IDs
-
- SV-242434r961131_rule
Checks: C-45709r918186_chk
On the Control Plane, run the command: ps -ef | grep kubelet If the "--protect-kernel-defaults" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i protectKernelDefaults <path_to_config_file> If the setting "protectKernelDefaults" is not set or is set to false, this is a finding.
Fix: F-45667r918187_fix
On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--protect-kernel-defaults" option if present. Note the path to the Kubernetes Kubelet config file (identified by --config). Edit the Kubernetes Kubelet config file: Set "protectKernelDefaults" to "true". Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
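A minimal KubeletConfiguration sketch for this setting, assuming the kubelet uses a --config file:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
protectKernelDefaults: true   # error out instead of modifying kernel tunables that differ from kubelet defaults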
- RMF Control
- AC-16
- Severity
- H
- CCI
- CCI-002263
- Version
- CNTR-K8-002000
- Vuln IDs
-
- V-242436
- Rule IDs
-
- SV-242436r961359_rule
Checks: C-45711r863896_chk
Prior to version 1.21, Pod Security Policies (PSPs) were used to enforce security policies. PSPs are now deprecated and will be removed in version 1.25. Migrate from PSP to PSA: https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/ Pre-version 1.25 Check: Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i ValidatingAdmissionWebhook * If a line is not returned that includes enable-admission-plugins and ValidatingAdmissionWebhook, this is a finding.
Fix: F-45669r863897_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--enable-admission-plugins" to include "ValidatingAdmissionWebhook". Each enabled plugin is separated by commas. Note: It is best to implement policies first and then enable the webhook, otherwise a denial of service may occur.
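Sketch of the relevant kube-apiserver flag; NodeRestriction is shown only as an example of another plugin that may already appear in the comma-separated list and is not required by this rule:
    command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,ValidatingAdmissionWebhook   # plugins are comma-separated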
- RMF Control
- AC-16
- Severity
- H
- CCI
- CCI-002263
- Version
- CNTR-K8-002010
- Vuln IDs
-
- V-242437
- Rule IDs
-
- SV-242437r961359_rule
Checks: C-45712r863899_chk
Prior to version 1.21, Pod Security Policies (PSPs) were used to enforce security policies. PSPs are now deprecated and will be removed in version 1.25. Migrate from PSP to PSA: https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/ Pre-version 1.25 Check: On the Control Plane, run the command: kubectl get podsecuritypolicy If there is no pod security policy configured, this is a finding. For any pod security policies listed, edit the policy with the command: kubectl edit podsecuritypolicy policyname (Note: "policyname" is the name of the policy.) Review the runAsUser, supplementalGroups and fsGroup sections of the policy. If any of these sections are missing, this is a finding. If the rule within the runAsUser section is not set to "MustRunAsNonRoot", this is a finding. If the ranges within the supplementalGroups section have min set to "0" or min is missing, this is a finding. If the ranges within the fsGroup section have a min set to "0" or the min is missing, this is a finding.
Fix: F-45670r863900_fix
From the Control Plane, save the following policy to a file called restricted.yml.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  # This is redundant with non-root + disallow privilege escalation,
  # but we can provide it for defense in depth.
  requiredDropCapabilities:
  - ALL
  # Allow core volume types.
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  # Assume that persistentVolumes set up by the cluster admin are safe to use.
  - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  readOnlyRootFilesystem: false
To implement the policy, run the command: kubectl create -f restricted.yml
- RMF Control
- SC-7
- Severity
- M
- CCI
- CCI-002415
- Version
- CNTR-K8-002600
- Vuln IDs
-
- V-242438
- Rule IDs
-
- SV-242438r961620_rule
Checks: C-45713r927126_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -I request-timeout * If the Kubernetes API Server manifest file does not exist, this is a finding. If the setting "--request-timeout" is set to "0" in the Kubernetes API Server manifest file, or is not configured, this is a finding.
Fix: F-45671r927127_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--request-timeout" to a value greater than "0".
- RMF Control
- SI-4
- Severity
- M
- CCI
- CCI-002647
- Version
- CNTR-K8-002700
- Vuln IDs
-
- V-242442
- Rule IDs
-
- SV-242442r961677_rule
Checks: C-45717r863905_chk
To view all pods and the images used to create the pods, from the Control Plane, run the following command:
kubectl get pods --all-namespaces -o jsonpath="{..image}" | \
tr -s '[[:space:]]' '\n' | \
sort | \
uniq -c
Review the images used for pods running within Kubernetes. If there are multiple versions of the same image, this is a finding.
Fix: F-45675r863906_fix
Remove any old pods that are using older images. On the Control Plane, run the command: kubectl delete pod podname (Note: "podname" is the name of the pod to delete.)
- RMF Control
- SI-3
- Severity
- M
- CCI
- CCI-002635
- Version
- CNTR-K8-002720
- Vuln IDs
-
- V-242443
- Rule IDs
-
- SV-242443r961683_rule
Checks: C-45718r863908_chk
Authenticate on the Kubernetes Control Plane. Run the command: kubectl version --short If the reported versions do not conform to the Kubernetes version skew policy, this is a finding. Note: The Kubernetes skew policy can be found at: https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions
Fix: F-45676r712684_fix
Upgrade Kubernetes to the supported version. Institute and adhere to the policies and procedures to ensure that patches are consistently applied within the time allowed.
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003110
- Vuln IDs
-
- V-242444
- Rule IDs
-
- SV-242444r961863_rule
Checks: C-45719r712686_chk
Review the ownership of the Kubernetes manifest files by using the command: stat -c %U:%G /etc/kubernetes/manifests/* | grep -v root:root If the command returns any files not owned by root:root, this is a finding.
Fix: F-45677r712687_fix
Change the ownership of the manifest files to root:root by executing the command: chown root:root /etc/kubernetes/manifests/*
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003120
- Vuln IDs
-
- V-242445
- Rule IDs
-
- SV-242445r961863_rule
Checks: C-45720r712689_chk
Review the ownership of the Kubernetes etcd files by using the command: stat -c %U:%G /var/lib/etcd/* | grep -v etcd:etcd If the command returns any files not owned by etcd:etcd, this is a finding.
Fix: F-45678r712690_fix
Change the ownership of the etcd data files to etcd:etcd by executing the command: chown etcd:etcd /var/lib/etcd/*
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003130
- Vuln IDs
-
- V-242446
- Rule IDs
-
- SV-242446r961863_rule
Checks: C-45721r712692_chk
Review the Kubernetes conf files by using the commands: stat -c %U:%G /etc/kubernetes/admin.conf | grep -v root:root stat -c %U:%G /etc/kubernetes/scheduler.conf | grep -v root:root stat -c %U:%G /etc/kubernetes/controller-manager.conf | grep -v root:root If any of the commands return a file not owned by root:root, this is a finding.
Fix: F-45679r712693_fix
Change the ownership of the conf files to root:root by executing the commands: chown root:root /etc/kubernetes/admin.conf chown root:root /etc/kubernetes/scheduler.conf chown root:root /etc/kubernetes/controller-manager.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003140
- Vuln IDs
-
- V-242447
- Rule IDs
-
- SV-242447r961863_rule
Checks: C-45722r712695_chk
Check whether Kube-Proxy is running and obtain its --kubeconfig parameter by using the following command: ps -ef | grep kube-proxy If Kube-Proxy exists: Review the permissions of the Kubernetes Kube-Proxy kubeconfig file by using the command: stat -c %a <location from --kubeconfig> If the file has permissions more permissive than "644", this is a finding.
Fix: F-45680r821611_fix
Change the permissions of the Kube-Proxy kubeconfig file to "644" by executing the command: chmod 644 <location from --kubeconfig>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003150
- Vuln IDs
-
- V-242448
- Rule IDs
-
- SV-242448r961863_rule
Checks: C-45723r712698_chk
Check whether Kube-Proxy is running by using the following command: ps -ef | grep kube-proxy If Kube-Proxy exists: Review the ownership of the Kubernetes Kube-Proxy kubeconfig file by using the command: stat -c %U:%G <location from --kubeconfig> | grep -v root:root If the command returns a file not owned by root:root, this is a finding.
Fix: F-45681r712699_fix
Change the ownership of the Kube-Proxy kubeconfig file to root:root by executing the command: chown root:root <location from --kubeconfig>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003160
- Vuln IDs
-
- V-242449
- Rule IDs
-
- SV-242449r961863_rule
Checks: C-45724r919321_chk
On the Control Plane, run the command: ps -ef | grep kubelet If the "--client-ca-file" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i clientCAFile <path_to_config_file> Note the path to the client ca file. Run the command: stat -c %a <path_to_client_ca_file> If the client ca file has permissions more permissive than "644", this is a finding.
Fix: F-45682r919324_fix
On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--client-ca-file" option. Note the path to the config file (identified by --config). Run the command: grep -i clientCAFile <path_to_config_file> Note the path to the client ca file. Run the command: chmod 644 <path_to_client_ca_file>
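For context, a hedged sketch of where "clientCAFile" typically lives in the kubelet config file referenced by --config; the CA path and surrounding fields are assumptions and must match the site's actual file.

```yaml
# Illustrative excerpt of the kubelet config file identified by --config (not from this STIG)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt   # assumed path; this is the file to chmod 644
```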
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003170
- Vuln IDs
-
- V-242450
- Rule IDs
-
- SV-242450r961863_rule
Checks: C-45725r918194_chk
On the Control Plane, run the command: ps -ef | grep kubelet If the "--client-ca-file" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i clientCAFile <path_to_config_file> Note the path to the client ca file. Run the command: stat -c %U:%G <path_to_client_ca_file> If the file is not owned by root:root, this is a finding.
Fix: F-45683r918195_fix
On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--client-ca-file" option. Note the path to the config file (identified by --config). Run the command: grep -i clientCAFile <path_to_config_file> Note the path to the client ca file. Run the command: chown root:root <path_to_client_ca_file>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003180
- Vuln IDs
-
- V-242451
- Rule IDs
-
- SV-242451r961863_rule
Checks: C-45726r712707_chk
Review the PKI files in Kubernetes by using the command: ls -laR /etc/kubernetes/pki/ If any file or directory is not owned by root:root, this is a finding.
Fix: F-45684r712708_fix
Change the ownership of the PKI files to root:root by executing the command: chown -R root:root /etc/kubernetes/pki/
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003190
- Vuln IDs
-
- V-242452
- Rule IDs
-
- SV-242452r961863_rule
Checks: C-45727r712710_chk
Review the permissions of the Kubernetes kubelet.conf file by using the command: stat -c %a /etc/kubernetes/kubelet.conf If the file has permissions more permissive than "644", this is a finding.
Fix: F-45685r821615_fix
Change the permissions of kubelet.conf to "644" by executing the command: chmod 644 /etc/kubernetes/kubelet.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003200
- Vuln IDs
-
- V-242453
- Rule IDs
-
- SV-242453r961863_rule
Checks: C-45728r712713_chk
Review the ownership of the Kubernetes kubelet.conf file by using the command: stat -c %U:%G /etc/kubernetes/kubelet.conf | grep -v root:root If the file is not owned by root:root, this is a finding.
Fix: F-45686r712714_fix
Change the ownership of the kubelet.conf to root:root by executing the command: chown root:root /etc/kubernetes/kubelet.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003210
- Vuln IDs
-
- V-242454
- Rule IDs
-
- SV-242454r961863_rule
Checks: C-45729r754817_chk
Review the kubeadm.conf file: Get the path for kubeadm.conf by running: systemctl status kubelet Note the location the configuration file installed by kubeadm is written to (default location: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Run the command: stat -c %U:%G <kubeadm.conf path> | grep -v root:root If the file is not owned by root:root, this is a finding.
Fix: F-45687r754818_fix
Change the ownership of the kubeadm.conf to root:root by executing the command: chown root:root <kubeadm.conf path>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003220
- Vuln IDs
-
- V-242455
- Rule IDs
-
- SV-242455r961863_rule
Checks: C-45730r754820_chk
Review the kubeadm.conf file: Get the path for kubeadm.conf by running: systemctl status kubelet Note the location the configuration file installed by kubeadm is written to (default location: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Run the command: stat -c %a <kubeadm.conf path> If the file has permissions more permissive than "644", this is a finding.
Fix: F-45688r754821_fix
Change the permissions of kubeadm.conf to "644" by executing the command: chmod 644 <kubeadm.conf path>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003230
- Vuln IDs
-
- V-242456
- Rule IDs
-
- SV-242456r961863_rule
Checks: C-45731r712722_chk
Review the permissions of the Kubernetes config.yaml by using the command: stat -c %a /var/lib/kubelet/config.yaml If the file has permissions more permissive than "644", this is a finding.
Fix: F-45689r821617_fix
Change the permissions of the config.yaml to "644" by executing the command: chmod 644 /var/lib/kubelet/config.yaml
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003240
- Vuln IDs
-
- V-242457
- Rule IDs
-
- SV-242457r961863_rule
Checks: C-45732r712725_chk
Review the ownership of the Kubernetes Kubeadm kubelet config file by using the command: stat -c %U:%G /var/lib/kubelet/config.yaml | grep -v root:root If the file is not owned by root:root, this is a finding.
Fix: F-45690r712726_fix
Change the ownership of the kubelet config to "root:root" by executing the command: chown root:root /var/lib/kubelet/config.yaml
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003260
- Vuln IDs
-
- V-242459
- Rule IDs
-
- SV-242459r961863_rule
Checks: C-45734r918198_chk
Review the permissions of the Kubernetes etcd data files by using the command: ls -lAR /var/lib/etcd/* If any of the files have permissions more permissive than "644", this is a finding.
Fix: F-45692r918199_fix
Change the permissions of the etcd data files to "644" by executing the command: chmod -R 644 /var/lib/etcd/*
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003270
- Vuln IDs
-
- V-242460
- Rule IDs
-
- SV-242460r961863_rule
Checks: C-45735r712734_chk
Review the permissions of the Kubernetes config files by using the commands: stat -c %a /etc/kubernetes/admin.conf stat -c %a /etc/kubernetes/scheduler.conf stat -c %a /etc/kubernetes/controller-manager.conf If any of the files have permissions more permissive than "644", this is a finding.
Fix: F-45693r712735_fix
Change the permissions of the conf files to "644" by executing the command: chmod 644 /etc/kubernetes/admin.conf chmod 644 /etc/kubernetes/scheduler.conf chmod 644 /etc/kubernetes/controller-manager.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003280
- Vuln IDs
-
- V-242461
- Rule IDs
-
- SV-242461r961863_rule
Checks: C-45736r863922_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i audit-policy-file * If the setting "audit-policy-file" is not set in the Kubernetes API Server manifest file, or it is set but does not reference a valid audit policy file, this is a finding.
Fix: F-45694r863923_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--audit-policy-file" to the path of a valid audit policy file.
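As an illustration only, a minimal audit policy file might look like the sketch below; the file name, location, and single rule are assumptions and should be replaced with a policy that meets organizational auditing requirements.

```yaml
# Illustrative minimal audit policy, e.g. saved as /etc/kubernetes/audit-policy.yaml (assumed path)
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata   # record request metadata for all requests; tighten or expand per policy
```

With a file like this in place, the API server argument would reference it, for example: --audit-policy-file=/etc/kubernetes/audit-policy.yaml (the path is an assumption).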
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003290
- Vuln IDs
-
- V-242462
- Rule IDs
-
- SV-242462r961863_rule
Checks: C-45737r927135_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-maxsize * If the setting "--audit-log-maxsize" is not set in the Kubernetes API Server manifest file or it is set to less than "100", this is a finding.
Fix: F-45695r927136_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-maxsize" to a minimum of "100".
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003300
- Vuln IDs
-
- V-242463
- Rule IDs
-
- SV-242463r961863_rule
Checks: C-45738r863928_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-maxbackup * If the setting "audit-log-maxbackup" is not set in the Kubernetes API Server manifest file or it is set to less than "10", this is a finding.
Fix: F-45696r863929_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-maxbackup" to a minimum of "10".
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003310
- Vuln IDs
-
- V-242464
- Rule IDs
-
- SV-242464r961863_rule
Checks: C-45739r863931_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-maxage * If the setting "audit-log-maxage" is not set in the Kubernetes API Server manifest file or it is set to less than "30", this is a finding.
Fix: F-45697r863932_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-maxage" to a minimum of "30".
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003320
- Vuln IDs
-
- V-242465
- Rule IDs
-
- SV-242465r961863_rule
Checks: C-45740r863934_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-path * If the setting "audit-log-path" is not set in the Kubernetes API Server manifest file or it is not set to a valid path, this is a finding.
Fix: F-45698r863935_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-path" to a valid location.
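Taken together, the audit logging arguments in this group of rules (CNTR-K8-003280 through CNTR-K8-003320) might appear in the kube-apiserver static pod manifest as sketched below; the file paths are assumptions, and values beyond the stated minimums are examples only.

```yaml
# Illustrative fragment of the kube-apiserver command list in
# /etc/kubernetes/manifests/kube-apiserver.yaml (not from this STIG)
    command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml   # assumed path
    - --audit-log-path=/var/log/kubernetes/audit/audit.log    # assumed path; any valid location
    - --audit-log-maxage=30                                   # minimum required by this guide
    - --audit-log-maxbackup=10                                # minimum required by this guide
    - --audit-log-maxsize=100                                 # minimum required by this guide
```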
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003330
- Vuln IDs
-
- V-242466
- Rule IDs
-
- SV-242466r961863_rule
Checks: C-45741r927138_chk
Review the permissions of the Kubernetes PKI cert files by using the command: sudo find /etc/kubernetes/pki/* -name "*.crt" | xargs stat -c '%n %a' If any of the files have permissions more permissive than "644", this is a finding.
Fix: F-45699r918202_fix
Change the permissions of the cert files to "644" by executing the command: find /etc/kubernetes/pki -name "*.crt" | xargs chmod 644
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003340
- Vuln IDs
-
- V-242467
- Rule IDs
-
- SV-242467r961863_rule
Checks: C-45742r918205_chk
Review the permissions of the Kubernetes PKI key files by using the command: sudo find /etc/kubernetes/pki -name "*.key" | xargs stat -c '%n %a' If any of the files have permissions more permissive than "600", this is a finding.
Fix: F-45700r918206_fix
Change the permissions of the key files to "600" by executing the command: find /etc/kubernetes/pki -name "*.key" | xargs chmod 600
- RMF Control
- SC-10
- Severity
- M
- CCI
- CCI-001133
- Version
- CNTR-K8-001300
- Vuln IDs
-
- V-245541
- Rule IDs
-
- SV-245541r1069469_rule
Checks: C-48816r1069467_chk
Follow these steps to check streaming-connection-idle-timeout: 1. On the Control Plane, run the command: ps -ef | grep kubelet If the "--streaming-connection-idle-timeout" option exists, this is a finding. Note the path to the config file (identified by --config). 2. Run the command: grep -i streamingConnectionIdleTimeout <path_to_config_file> If the setting "streamingConnectionIdleTimeout" is set to less than "5m" or is not configured, this is a finding.
Fix: F-48771r1069468_fix
Follow these steps to configure streaming-connection-idle-timeout: 1. On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--streaming-connection-idle-timeout" option if present. Note the path to the config file (identified by --config). 2. Edit the Kubernetes Kubelet file in the --config directory on the Kubernetes Control Plane: Set the argument "streamingConnectionIdleTimeout" to a value of "5m".
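A hedged sketch of the corresponding kubelet config entry is shown below; the surrounding fields are assumptions and only "streamingConnectionIdleTimeout" is required by this rule.

```yaml
# Illustrative excerpt of the kubelet config file identified by --config (not from this STIG)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
streamingConnectionIdleTimeout: 5m   # value required by this rule
```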
- RMF Control
- SC-12
- Severity
- H
- CCI
- CCI-002448
- Version
- CNTR-K8-002620
- Vuln IDs
-
- V-245542
- Rule IDs
-
- SV-245542r961632_rule
Checks: C-48817r863943_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i basic-auth-file * If "basic-auth-file" is set in the Kubernetes API server manifest file, this is a finding.
Fix: F-48772r863944_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Remove the setting "--basic-auth-file".
- RMF Control
- SC-12
- Severity
- H
- CCI
- CCI-002448
- Version
- CNTR-K8-002630
- Vuln IDs
-
- V-245543
- Rule IDs
-
- SV-245543r961632_rule
Checks: C-48818r927129_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i token-auth-file * If "--token-auth-file" is set in the Kubernetes API server manifest file, this is a finding.
Fix: F-48773r927130_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Remove the setting "--token-auth-file".
- RMF Control
- SC-12
- Severity
- H
- CCI
- CCI-002448
- Version
- CNTR-K8-002640
- Vuln IDs
-
- V-245544
- Rule IDs
-
- SV-245544r961632_rule
Checks: C-48819r863949_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i kubelet-client-certificate * grep -I kubelet-client-key * If the setting "--kubelet-client-certificate" is not configured in the Kubernetes API server manifest file or contains no value, this is a finding. If the setting "--kubelet-client-key" is not configured in the Kubernetes API server manifest file or contains no value, this is a finding.
Fix: F-48774r863950_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--kubelet-client-certificate" and "--kubelet-client-key" to an Approved Organizational Certificate and key pair.
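For illustration only, these arguments might appear in the kube-apiserver static pod manifest as follows; the certificate and key paths are assumptions and must point to an approved organizational certificate and key pair.

```yaml
# Illustrative fragment of the kube-apiserver command list in
# /etc/kubernetes/manifests/kube-apiserver.yaml (not from this STIG)
    command:
    - kube-apiserver
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt   # assumed path
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key           # assumed path
```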
- RMF Control
- AC-16
- Severity
- H
- CCI
- CCI-002263
- Version
- CNTR-K8-002011
- Vuln IDs
-
- V-254800
- Rule IDs
-
- SV-254800r961359_rule
Checks: C-58411r927123_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: "grep -i admission-control-config-file *" If the setting "--admission-control-config-file" is not configured in the Kubernetes API Server manifest file, this is a finding. Inspect the .yaml file defined by the --admission-control-config-file. Verify PodSecurity is properly configured. If least privilege is not represented, this is a finding.
Fix: F-58357r927124_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--admission-control-config-file" to a valid path for the file. Create an admission controller config file. Example file:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    # Defaults applied when a mode label is not set.
    defaults:
      enforce: "privileged"
      enforce-version: "latest"
    exemptions:
      # Don't forget to exempt namespaces or users that are responsible for deploying
      # cluster components, because they need to run privileged containers.
      usernames: ["admin"]
      namespaces: ["kube-system"]
```

See the following for more details: Migrate from PSP to PSA: https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/ Best practice: https://kubernetes.io/docs/concepts/security/pod-security-policy/#recommended-practice
- RMF Control
- AC-16
- Severity
- H
- CCI
- CCI-002263
- Version
- CNTR-K8-002001
- Vuln IDs
-
- V-254801
- Rule IDs
-
- SV-254801r961359_rule
Checks: C-58412r918278_chk
On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * For each manifest file, if the "--feature-gates" setting does not exist, does not contain the "PodSecurity" flag, or sets the flag to "false", this is a finding. On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet If the "--feature-gates" option exists, this is a finding. Note the path to the config file (identified by --config). Inspect the content of the config file: If the "featureGates" setting is not present, does not contain the "PodSecurity" flag, or sets the flag to "false", this is a finding.
Fix: F-58358r918213_fix
On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Ensure the argument "--feature-gates=PodSecurity=true" is present in each manifest file. On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet Remove the "--feature-gates" option if present. Note the path to the config file (identified by --config). Edit the Kubernetes Kubelet config file: Add a "featureGates" setting if one does not yet exist. Add the feature gate "PodSecurity=true". Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
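A hedged sketch of the kubelet config change described above is shown below; the surrounding fields are assumptions, and the setting is only relevant on Kubernetes versions where the PodSecurity feature gate still exists.

```yaml
# Illustrative excerpt of the kubelet config file identified by --config (not from this STIG)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodSecurity: true   # feature gate required by this rule
```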
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-001162
- Vuln IDs
-
- V-274882
- Rule IDs
-
- SV-274882r1107233_rule
Checks: C-78983r1107231_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i encryption-provider-config * If the setting "encryption-provider-config" is not configured, this is a finding. If the setting is configured, check the contents of the file specified by its argument. If the file does not specify the "secrets" resource, this is a finding. If the identity provider is specified as the first provider for the resource, this is also a finding.
Fix: F-78888r1107232_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--encryption-provider-config" to the path to the encryption config. The encryption config must specify the "secrets" resource and provider. Below is an example:

```json
{
  "kind": "EncryptionConfiguration",
  "apiVersion": "apiserver.config.k8s.io/v1",
  "resources": [
    {
      "resources": [
        "secrets"
      ],
      "providers": [
        {
          "aescbc": {
            "keys": [
              {
                "name": "aescbckey",
                "secret": "xxxxxxxxxxxxxxxxxxx"
              }
            ]
          }
        },
        {
          "identity": {}
        }
      ]
    }
  ]
}
```
- RMF Control
- Severity
- H
- CCI
- CCI-004062
- Version
- CNTR-K8-001161
- Vuln IDs
-
- V-274883
- Rule IDs
-
- SV-274883r1107230_rule
Checks: C-78984r1107228_chk
On the Kubernetes Master node, run the following command: kubectl get all,cm -A -o yaml Manually review the output for sensitive information. If any sensitive information is found, this is a finding.
Fix: F-78889r1107229_fix
Any sensitive information found must be stored in an approved external Secret store provider or use Kubernetes Secrets (attached on an as-needed basis to pods).
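As a hedged illustration of moving sensitive values out of general-purpose resources, a Secret attached on an as-needed basis to the single pod that consumes it might look like the sketch below; all names, images, and keys are hypothetical.

```yaml
# Illustrative only: a Secret consumed by one pod via an environment variable
apiVersion: v1
kind: Secret
metadata:
  name: app-db-credentials            # hypothetical name
type: Opaque
stringData:
  password: "<redacted>"              # store the real value here, not in a ConfigMap
---
apiVersion: v1
kind: Pod
metadata:
  name: app                           # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: app-db-credentials
          key: password
```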
- RMF Control
- SC-28
- Severity
- M
- CCI
- CCI-002476
- Version
- CNTR-K8-001163
- Vuln IDs
-
- V-274884
- Rule IDs
-
- SV-274884r1107236_rule
Checks: C-78985r1107234_chk
Review the Kubernetes accounts and their corresponding roles. If any accounts have read (list, watch, get) access to Secrets without a documented organizational requirement, this is a finding. Run the below command to list the workload resources for applications deployed to Kubernetes: kubectl get all -A -o yaml If Secrets are attached to applications without a documented requirement, this is a finding.
Fix: F-78890r1107235_fix
For Kubernetes accounts that have read access to Secrets without a documented requirement, modify the corresponding Role or ClusterRole to remove list, watch, and get privileges for Secrets.
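A hedged example of a namespaced Role that grants workload access without Secret read verbs is sketched below; the name, namespace, and resource list are hypothetical and should be tailored to the documented organizational requirement.

```yaml
# Illustrative only: Role with no list, watch, or get on secrets
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-operator        # hypothetical name
  namespace: app            # hypothetical namespace
rules:
- apiGroups: [""]
  resources: ["pods", "configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "update"]
```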