Kubernetes Security Technical Implementation Guide
Digest of updates: 4 rules added, 4 rules removed, 10 content changes.
Comparison against the immediately-prior release (V1R1). Rule matching uses the Group Vuln ID. Content-change detection compares the rule’s description, check, and fix text after stripping inline markup — cosmetic-only edits aren’t flagged.
Added rules 4
- V-245541 Medium Kubernetes Kubelet must not disable timeouts.
- V-245542 High Kubernetes API Server must disable basic authentication to protect information in transit.
- V-245543 Medium Kubernetes API Server must disable token authentication to protect information in transit.
- V-245544 Medium Kubernetes endpoints must use approved organizational certificate and key pair to protect information in transit.
Removed rules 4
- V-242416 Medium Kubernetes Kubelet must not disable timeouts.
- V-242439 High Kubernetes API Server must disable basic authentication to protect information in transit.
- V-242440 Medium Kubernetes API Server must disable token authentication to protect information in transit.
- V-242441 Medium Kubernetes endpoints must use approved organizational certificate and key pair to protect information in transit.
Content changes 10
- V-242379 Medium description The Kubernetes etcd must use TLS to protect the confidentiality of sensitive data during electronic dissemination.
- V-242380 Medium description The Kubernetes etcd must use TLS to protect the confidentiality of sensitive data during electronic dissemination.
- V-242426 Medium check/fix Kubernetes etcd must enable client authentication to secure service.
- V-242428 Medium description/check/fix Kubernetes etcd must have a certificate for communication.
- V-242450 Medium check The Kubernetes Kubelet certificate authority must be owned by root.
- V-242454 Medium check/fix The Kubernetes kubeadm.conf must be owned by root.
- V-242455 Medium check/fix The Kubernetes kubeadm.conf must have file permissions set to 644 or more restrictive.
- V-242458 Medium fix The Kubernetes API Server must have file permissions set to 644 or more restrictive.
- V-242464 Medium check The Kubernetes API Server audit log retention must be set.
- V-242465 Medium check The Kubernetes API Server audit log path must be set.
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000150
- Vuln IDs
-
- V-242376
- Rule IDs
-
- SV-242376r712484_rule
Checks: C-45651r712482_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i tls-min-version * If the setting "tls-min-version" is not configured in the Kubernetes Controller Manager manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Fix: F-45609r712483_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
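The grep-based check above (and the equivalent checks for the Scheduler and API Server that follow) can be scripted. This is a minimal sketch run against a fixture file rather than the real /etc/kubernetes/manifests directory; the fixture name and helper function are illustrative only.

```shell
# Sketch: flag a manifest as a finding when --tls-min-version is absent
# or permits TLS 1.0/1.1. A temp fixture stands in for the real
# /etc/kubernetes/manifests directory.
MANIFEST_DIR="$(mktemp -d)"
cat > "$MANIFEST_DIR/kube-controller-manager.manifest" <<'EOF'
spec:
  containers:
  - command:
    - kube-controller-manager
    - --tls-min-version=VersionTLS12
EOF

check_tls_min_version() {
  # Prints "finding" if the flag is missing or allows TLS 1.0/1.1.
  v=$(grep -o 'tls-min-version=[A-Za-z0-9]*' "$1" | cut -d= -f2)
  case "$v" in
    VersionTLS12|VersionTLS13) echo "compliant" ;;
    *) echo "finding" ;;
  esac
}

check_tls_min_version "$MANIFEST_DIR/kube-controller-manager.manifest"  # prints "compliant"
```

The same helper applies unchanged to the Scheduler and API Server manifests, since all three rules share the VersionTLS12-or-higher requirement.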
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000160
- Vuln IDs
-
- V-242377
- Rule IDs
-
- SV-242377r712487_rule
Checks: C-45652r712485_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i tls-min-version * If the setting "tls-min-version" is not configured in the Kubernetes Scheduler manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Fix: F-45610r712486_fix
Edit the Kubernetes Scheduler manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000170
- Vuln IDs
-
- V-242378
- Rule IDs
-
- SV-242378r712490_rule
Checks: C-45653r712488_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i tls-min-version * If the setting "tls-min-version" is not configured in the Kubernetes API Server manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Fix: F-45611r712489_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000180
- Vuln IDs
-
- V-242379
- Rule IDs
-
- SV-242379r754799_rule
Checks: C-45654r712491_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i auto-tls * If the setting "auto-tls" is not configured in the Kubernetes etcd manifest file or it is set to true, this is a finding.
Fix: F-45612r712492_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--auto-tls" to "false".
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-000068
- Version
- CNTR-K8-000190
- Vuln IDs
-
- V-242380
- Rule IDs
-
- SV-242380r754800_rule
Checks: C-45655r712494_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i peer-auto-tls * If the setting "peer-auto-tls" is not configured in the Kubernetes etcd manifest file or it is set to "true", this is a finding.
Fix: F-45613r712495_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--peer-auto-tls" to "false".
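The two etcd auto-TLS rules above can be checked together: each flag must be present and explicitly "false". A sketch, using a temp fixture in place of the real etcd manifest; the function name is hypothetical.

```shell
# Sketch: a finding if --auto-tls or --peer-auto-tls is missing or not
# explicitly set to "false" in the etcd manifest (fixture used here).
ETCD_MANIFEST="$(mktemp)"
cat > "$ETCD_MANIFEST" <<'EOF'
    - etcd
    - --auto-tls=false
    - --peer-auto-tls=false
EOF

check_etcd_auto_tls() {
  for flag in auto-tls peer-auto-tls; do
    if ! grep -q -- "--${flag}=false" "$1"; then
      echo "finding: ${flag}"
      return 1
    fi
  done
  echo "compliant"
}

check_etcd_auto_tls "$ETCD_MANIFEST"  # prints "compliant"
```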
- RMF Control
- AC-2
- Severity
- H
- CCI
- CCI-000015
- Version
- CNTR-K8-000220
- Vuln IDs
-
- V-242381
- Rule IDs
-
- SV-242381r712499_rule
Checks: C-45656r712497_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i use-service-account-credentials * If the setting "use-service-account-credentials" is not configured in the Kubernetes Controller Manager manifest file or it is set to "false", this is a finding.
Fix: F-45614r712498_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--use-service-account-credentials" to "true".
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000270
- Vuln IDs
-
- V-242382
- Rule IDs
-
- SV-242382r712502_rule
Checks: C-45657r712500_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: "grep -i authorization-mode *" If the setting "authorization-mode" is not configured in the Kubernetes API Server manifest file or is not set to "Node,RBAC", this is a finding.
Fix: F-45615r712501_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--authorization-mode" to "Node,RBAC".
- RMF Control
- CM-6
- Severity
- H
- CCI
- CCI-000366
- Version
- CNTR-K8-000290
- Vuln IDs
-
- V-242383
- Rule IDs
-
- SV-242383r712505_rule
Checks: C-45658r712503_chk
To view the available namespaces, run the command:
kubectl get namespaces
The default namespaces to be validated are default, kube-public, and kube-node-lease (if it has been created).
For the default namespace, execute the commands:
kubectl config set-context --current --namespace=default
kubectl get all
For the kube-public namespace, execute the commands:
kubectl config set-context --current --namespace=kube-public
kubectl get all
For the kube-node-lease namespace, execute the commands:
kubectl config set-context --current --namespace=kube-node-lease
kubectl get all
The only valid return values are the kubernetes service (i.e., service/kubernetes) and nothing at all. If "kubectl get all" returns anything other than the kubernetes service (i.e., service/kubernetes), this is a finding.
Fix: F-45616r712504_fix
Move any user-managed resources from the default, kube-public, and kube-node-lease namespaces to user-created namespaces.
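The check above boils down to filtering the output of "kubectl get all" for anything other than the built-in kubernetes service. A sketch, run here against simulated command output since no live cluster is assumed; the helper name is illustrative.

```shell
# Sketch: anything besides service/kubernetes remaining in a default
# namespace is a finding. Input is the output of
# `kubectl get all -o name` (simulated below).
check_namespace_empty() {
  leftovers=$(printf '%s\n' "$1" | grep -v '^service/kubernetes$' | grep -v '^$')
  if [ -z "$leftovers" ]; then echo "compliant"; else echo "finding"; fi
}

SAMPLE='service/kubernetes'
check_namespace_empty "$SAMPLE"  # prints "compliant"
```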
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000300
- Vuln IDs
-
- V-242384
- Rule IDs
-
- SV-242384r712508_rule
Checks: C-45659r712506_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i bind-address * If the setting "bind-address" is not set to "127.0.0.1" or is not found in the Kubernetes Scheduler manifest file, this is a finding.
Fix: F-45617r712507_fix
Edit the Kubernetes Scheduler manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument "--bind-address" to "127.0.0.1".
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000310
- Vuln IDs
-
- V-242385
- Rule IDs
-
- SV-242385r712511_rule
Checks: C-45660r712509_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i bind-address * If the setting bind-address is not set to "127.0.0.1" or is not found in the Kubernetes Controller Manager manifest file, this is a finding.
Fix: F-45618r712510_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument "--bind-address" to "127.0.0.1".
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000320
- Vuln IDs
-
- V-242386
- Rule IDs
-
- SV-242386r712514_rule
Checks: C-45661r712512_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i insecure-port * If the setting insecure-port is not set to "0" or is not configured in the Kubernetes API server manifest file, this is a finding.
Fix: F-45619r712513_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument --insecure-port to "0".
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000330
- Vuln IDs
-
- V-242387
- Rule IDs
-
- SV-242387r717013_rule
Checks: C-45662r712515_chk
Run the following command on each Worker Node:
ps -ef | grep kubelet
Verify that the --read-only-port argument exists and is set to "0". If the --read-only-port argument exists and is not set to "0", this is a finding.
If the --read-only-port argument does not exist, check the kubelet config file. On the Kubernetes Master Node, run the command:
ps -ef | grep kubelet
and note the config file path identified by the --config argument. Verify there is a readOnlyPort entry in that config file and that it is set to "0". If readOnlyPort exists and is not set to "0", this is a finding.
If neither a "--read-only-port=0" argument nor a readOnlyPort entry of "0" exists on the Worker and Master nodes, this is a finding.
Fix: F-45620r717012_fix
Edit the Kubernetes kubelet config file (the path identified by the --config argument) on the Kubernetes Master Node. Set the read-only port to "0", then restart the kubelet service using the following command: service kubelet restart If using worker node arguments instead, edit the kubelet service file /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf on each Worker Node and set the parameter in the KUBELET_SYSTEM_PODS_ARGS variable to "--read-only-port=0".
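The two-step logic in the check above (prefer the command-line argument, fall back to the readOnlyPort entry in the config file) can be sketched as follows; the fixture path and function name are illustrative.

```shell
# Sketch: resolve the effective read-only port from kubelet arguments
# first, then from the config file; anything other than 0 is a finding.
KUBELET_CONFIG="$(mktemp)"
cat > "$KUBELET_CONFIG" <<'EOF'
kind: KubeletConfiguration
readOnlyPort: 0
EOF

check_read_only_port() {
  args="$1"; config="$2"
  case "$args" in
    *--read-only-port=*)
      # Argument wins: extract the value that follows the flag.
      val=${args##*--read-only-port=}
      val=${val%% *} ;;
    *)
      # Fall back to the readOnlyPort entry in the config file.
      val=$(grep -o 'readOnlyPort: *[0-9]*' "$config" | grep -o '[0-9]*$') ;;
  esac
  if [ "$val" = "0" ]; then echo "compliant"; else echo "finding"; fi
}

check_read_only_port "" "$KUBELET_CONFIG"  # prints "compliant"
```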
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000340
- Vuln IDs
-
- V-242388
- Rule IDs
-
- SV-242388r712520_rule
Checks: C-45663r712518_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i insecure-bind-address * If the setting insecure-bind-address is found and set to "localhost" in the Kubernetes API manifest file, this is a finding.
Fix: F-45621r712519_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Remove the value for the --insecure-bind-address setting.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000350
- Vuln IDs
-
- V-242389
- Rule IDs
-
- SV-242389r712523_rule
Checks: C-45664r712521_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i secure-port * If the setting secure-port is set to "0" or is not configured in the Kubernetes API manifest file, this is a finding.
Fix: F-45622r712522_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument --secure-port to a value greater than "0".
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000360
- Vuln IDs
-
- V-242390
- Rule IDs
-
- SV-242390r712526_rule
Checks: C-45665r712524_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i anonymous-auth * If the setting anonymous-auth is set to "true" in the Kubernetes API Server manifest file, this is a finding.
Fix: F-45623r712525_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument --anonymous-auth to "false".
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000370
- Vuln IDs
-
- V-242391
- Rule IDs
-
- SV-242391r712529_rule
Checks: C-45666r712527_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i anonymous-auth kubelet If the setting "anonymous-auth" is set to "true" or the parameter is not set in the Kubernetes Kubelet configuration file, this is a finding.
Fix: F-45624r712528_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig/ directory on the Kubernetes Master Node. Set the argument "--anonymous-auth" to "false". Restart the kubelet service using the command: service kubelet restart
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000380
- Vuln IDs
-
- V-242392
- Rule IDs
-
- SV-242392r712532_rule
Checks: C-45667r712530_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i authorization-mode kubelet On each Worker node, change to the /etc/sysconfig/ directory. Run the command: grep -i authorization-mode kubelet If authorization-mode is missing or is set to "AllowAlways" on the Master node or any of the Worker nodes, this is a finding.
Fix: F-45625r717029_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig/ directory on the Kubernetes Master and Worker nodes. Set the argument "--authorization-mode" to "Webhook". Restart each kubelet service after the change is made using the command: service kubelet restart
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000400
- Vuln IDs
-
- V-242393
- Rule IDs
-
- SV-242393r717015_rule
Checks: C-45668r712533_chk
Log in to each worker node. Verify that the sshd service is not running. To validate that the service is not running, run the command: systemctl status sshd If the service sshd is active (running), this is a finding. Note: If console access is not available, SSH access can be attempted. If the worker nodes cannot be reached, this requirement is "not a finding".
Fix: F-45626r717014_fix
To stop the sshd service, run the command: systemctl stop sshd Note: If access to the worker node is through an SSH session, it is important to realize there are two requirements for disabling and stopping the sshd service and they should be done during the same SSH session. Disabling the service must be performed first and then the service stopped to guarantee both settings can be made if the session is interrupted.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000410
- Vuln IDs
-
- V-242394
- Rule IDs
-
- SV-242394r717017_rule
Checks: C-45669r712536_chk
Log in to each worker node. Verify that the sshd service is not enabled. To validate the service is not enabled, run the command: systemctl is-enabled sshd.service If the service sshd is enabled, this is a finding. Note: If console access is not available, SSH access can be attempted. If the worker nodes cannot be reached, this requirement is "not a finding".
Fix: F-45627r717016_fix
To disable the sshd service, run the command: chkconfig sshd off Note: If access to the worker node is through an SSH session, it is important to realize there are two requirements for disabling and stopping the sshd service that must be done during the same SSH session. Disabling the service must be performed first and then the service stopped to guarantee both settings can be made if the session is interrupted.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000420
- Vuln IDs
-
- V-242395
- Rule IDs
-
- SV-242395r712541_rule
Checks: C-45670r712539_chk
From the master node, run the command: kubectl get pods --all-namespaces -l k8s-app=kubernetes-dashboard If any resources are returned, this is a finding.
Fix: F-45628r712540_fix
Delete the Kubernetes dashboard deployment with the following command: kubectl delete deployment kubernetes-dashboard --namespace=kube-system
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000430
- Vuln IDs
-
- V-242396
- Rule IDs
-
- SV-242396r712544_rule
Checks: C-45671r712542_chk
From the Master and each Worker node, check the version of kubectl by executing the command: kubectl version --client If the Master or any Worker nodes are not using kubectl version 1.12.9 or newer, this is a finding.
Fix: F-45629r712543_fix
Upgrade the Master and Worker nodes to the latest version of kubectl.
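Comparing the reported client version against the 1.12.9 floor needs a version-aware comparison, not a plain string compare. A sketch assuming GNU "sort -V" and a version string already extracted from "kubectl version --client":

```shell
# Sketch: succeed when version $1 is at least version $2.
# Relies on GNU sort's -V (version sort) option.
version_at_least() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

KUBECTL_VERSION="1.21.3"  # illustrative; parse from `kubectl version --client`
if version_at_least "$KUBECTL_VERSION" "1.12.9"; then
  echo "compliant"
else
  echo "finding"
fi
```

A plain lexical compare would wrongly rank "1.9.0" above "1.12.9", which is why the version sort matters here.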
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-000440
- Vuln IDs
-
- V-242397
- Rule IDs
-
- SV-242397r712547_rule
Checks: C-45672r712545_chk
On the Master and Worker nodes, change to the /etc/sysconfig/ directory and run the command: grep -i staticPodPath kubelet If any of the nodes return a value for staticPodPath, this is a finding.
Fix: F-45630r712546_fix
Edit the kubelet file on each node under the /etc/sysconfig directory to remove the staticPodPath setting and restart the kubelet service by executing the command: service kubelet restart
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000450
- Vuln IDs
-
- V-242398
- Rule IDs
-
- SV-242398r717019_rule
Checks: C-45673r712548_chk
On the Master node, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the feature-gates setting, if one is returned. If the feature-gates setting is present and contains the "DynamicAuditing" flag set to "true", this is a finding. Change to the /etc/sysconfig directory on the Master and each Worker node and execute the command: grep -i feature-gates kubelet Review every feature-gates setting that is returned. If any feature-gates setting is present and contains the "DynamicAuditing" flag set to "true", this is a finding.
Fix: F-45631r717018_fix
Edit any manifest files or kubelet config files that contain the feature-gates setting with DynamicAuditing set to "true". Set the flag to "false" or remove the "DynamicAuditing" setting completely. Restart the kubelet service if the kubelet config file is changed.
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000460
- Vuln IDs
-
- V-242399
- Rule IDs
-
- SV-242399r717021_rule
Checks: C-45674r712551_chk
On the Master node, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the feature-gates setting, if one is returned. If the feature-gates setting does not exist, does not contain the "DynamicKubeletConfig" flag, or has the "DynamicKubeletConfig" flag set to "true", this is a finding. Change to the /etc/sysconfig directory on the Master and each Worker node and execute the command: grep -i feature-gates kubelet Review every feature-gates setting that is returned. If any feature-gates setting does not exist, does not contain the "DynamicKubeletConfig" flag, or has the "DynamicKubeletConfig" flag set to "true", this is a finding.
Fix: F-45632r717020_fix
Edit any manifest file or kubelet config file that does not contain a feature-gates setting or that has DynamicKubeletConfig set to "true". If DynamicKubeletConfig is omitted from feature-gates, it defaults to "true". Set DynamicKubeletConfig to "false". Restart the kubelet service if the kubelet config file is changed.
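Because an omitted gate defaults to true for this rule, the check must distinguish "absent" from "false". A sketch of parsing a comma-separated feature-gates list; the helper name is illustrative.

```shell
# Sketch: print one gate's value from a --feature-gates list, or nothing
# if the gate is not listed. For this rule an absent DynamicKubeletConfig
# defaults to true, which is a finding.
get_feature_gate() {
  printf '%s\n' "$1" | tr ',' '\n' | grep "^$2=" | cut -d= -f2
}

GATES="AllAlpha=false,DynamicKubeletConfig=false"
get_feature_gate "$GATES" DynamicKubeletConfig  # prints "false"
```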
- RMF Control
- AC-3
- Severity
- M
- CCI
- CCI-000213
- Version
- CNTR-K8-000470
- Vuln IDs
-
- V-242400
- Rule IDs
-
- SV-242400r712556_rule
Checks: C-45675r712554_chk
On the Master node, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the feature-gates setting, if one is returned. If the feature-gates setting is present and contains the "AllAlpha" flag set to "true", this is a finding.
Fix: F-45633r712555_fix
Edit any manifest files that contain the feature-gates setting with AllAlpha set to "true". Set the flag to "false" or remove the AllAlpha setting completely. (AllAlpha defaults to "false".)
- RMF Control
- AU-14
- Severity
- M
- CCI
- CCI-001464
- Version
- CNTR-K8-000600
- Vuln IDs
-
- V-242401
- Rule IDs
-
- SV-242401r712559_rule
Checks: C-45676r712557_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * If the audit-policy-file is not set, this is a finding.
Fix: F-45634r717023_fix
Edit the Kubernetes API Server manifest and set "--audit-policy-file" to the audit policy file. Note: If the API server is running as a Pod, then the manifest will also need to be updated to mount the host system filesystem where the audit policy file resides.
- RMF Control
- AU-14
- Severity
- M
- CCI
- CCI-001464
- Version
- CNTR-K8-000610
- Vuln IDs
-
- V-242402
- Rule IDs
-
- SV-242402r712562_rule
Checks: C-45677r712560_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-log-path * If the audit-log-path is not set, this is a finding.
Fix: F-45635r717024_fix
Edit the Kubernetes API Server manifest and set "--audit-log-path" to a secure location for the audit logs to be written. Note: If the API server is running as a Pod, then the manifest will also need to be updated to mount the host system filesystem where the audit log file is to be written.
- RMF Control
- AC-2
- Severity
- M
- CCI
- CCI-000018
- Version
- CNTR-K8-000700
- Vuln IDs
-
- V-242403
- Rule IDs
-
- SV-242403r712565_rule
Checks: C-45678r712563_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:
grep -i audit-policy-file *
If the audit-policy-file is not set, this is a finding.
The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/vX (where X is the latest apiVersion)
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not look like the above, this is a finding.
Fix: F-45636r712564_fix
Edit the Kubernetes API Server audit policy so it looks like the following:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/vX (where X is the latest apiVersion)
kind: Policy
rules:
- level: RequestResponse
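A minimal sanity check of the audit policy file can be scripted. The fixture below uses audit.k8s.io/v1 as an example apiVersion, and the grep patterns are an assumption about what "looks like the above" means in practice.

```shell
# Sketch: verify the audit policy declares a Policy object whose rule
# logs at the RequestResponse level. A temp fixture stands in for the
# real file passed to --audit-policy-file.
POLICY_FILE="$(mktemp)"
cat > "$POLICY_FILE" <<'EOF'
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
EOF

check_audit_policy() {
  if grep -q '^kind: Policy$' "$1" && grep -q 'level: RequestResponse' "$1"; then
    echo "compliant"
  else
    echo "finding"
  fi
}

check_audit_policy "$POLICY_FILE"  # prints "compliant"
```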
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000850
- Vuln IDs
-
- V-242404
- Rule IDs
-
- SV-242404r712568_rule
Checks: C-45679r712566_chk
On the Master and each Worker node, change to the /etc/sysconfig/ directory and run the command: grep -i hostname-override kubelet If any of the nodes have the "--hostname-override" setting present, this is a finding.
Fix: F-45637r712567_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Master and Worker nodes and remove the "--hostname-override" setting. Restart the service after the change is made by running: service kubelet restart
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000860
- Vuln IDs
-
- V-242405
- Rule IDs
-
- SV-242405r712571_rule
Checks: C-45680r712569_chk
On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must be owned by root:root. If any manifest file is not owned by root:root, this is a finding.
Fix: F-45638r712570_fix
On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: chown root:root * To verify the change took place, run the command: ls -l * All the manifest files should be owned by root:root.
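The ownership check above generalizes to a small loop. Note that "stat -c" is GNU coreutils (BSD stat uses -f instead), the function name is illustrative, and the demo runs against a temp directory rather than the real manifests directory.

```shell
# Sketch: report any file in a directory not owned by root:root.
check_owned_by_root() {
  rc=0
  for f in "$1"/*; do
    owner=$(stat -c '%U:%G' "$f")
    if [ "$owner" != "root:root" ]; then
      echo "finding: $f owned by $owner"
      rc=1
    fi
  done
  [ "$rc" -eq 0 ] && echo "compliant"
  return "$rc"
}

DEMO_DIR="$(mktemp -d)"
touch "$DEMO_DIR/etcd.manifest"
check_owned_by_root "$DEMO_DIR"  # "compliant" only when run as root
```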
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000880
- Vuln IDs
-
- V-242406
- Rule IDs
-
- SV-242406r712574_rule
Checks: C-45681r712572_chk
On the Master and worker nodes, change to the /etc/sysconfig directory. Run the command: ls -l kubelet Each kubelet configuration file must be owned by root:root. If any manifest file is not owned by root:root, this is a finding.
Fix: F-45639r712573_fix
On the Master and Worker nodes, change to the /etc/sysconfig directory. Run the command: chown root:root kubelet To verify the change took place, run the command: ls -l kubelet The kubelet file should now be owned by root:root.
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000890
- Vuln IDs
-
- V-242407
- Rule IDs
-
- SV-242407r712577_rule
Checks: C-45682r712575_chk
On the Master and worker nodes, change to the /etc/kubernetes/manifest directory. Run the command: ls -l kubelet Each kubelet configuration file must have permissions of "644" or more restrictive. If any kubelet configuration file is less restrictive than "644", this is a finding.
Fix: F-45640r712576_fix
On the Master node, change to the /etc/kubernetes/manifest directory. Run the command: chmod 644 kubelet To verify the change took place, run the command: ls -l kubelet The kubelet file should now have the permissions of "644".
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000900
- Vuln IDs
-
- V-242408
- Rule IDs
-
- SV-242408r712580_rule
Checks: C-45683r712578_chk
On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must have permissions of "644" or more restrictive. If any manifest file is less restrictive than "644", this is a finding.
Fix: F-45641r712579_fix
On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: chmod 644 * To verify the change took place, run the command: ls -l * All the manifest files should now have permissions of "644".
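"644 or more restrictive" means no permission bits outside rw-r--r--: no execute bits anywhere and no group/other write, i.e. mode AND octal 0133 must be zero. A sketch of that test, assuming GNU stat and using a temp fixture:

```shell
# Sketch: a finding if a file's mode has any bit outside 644
# (any execute bit, or group/other write): mode & 0133 != 0.
check_mode_644() {
  mode=$(stat -c '%a' "$1")
  if [ $(( 0$mode & 0133 )) -eq 0 ]; then
    echo "compliant"
  else
    echo "finding: mode $mode"
  fi
}

DEMO_FILE="$(mktemp)"
chmod 644 "$DEMO_FILE"
check_mode_644 "$DEMO_FILE"  # prints "compliant"
```

Modes like 600 or 444 pass (strictly more restrictive), while 664 or 755 fail, which matches the rule's intent better than a literal string compare against "644".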
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000381
- Version
- CNTR-K8-000910
- Vuln IDs
-
- V-242409
- Rule IDs
-
- SV-242409r712583_rule
Checks: C-45684r712581_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i profiling * If the setting "profiling" is not configured in the Kubernetes Controller Manager manifest file or it is set to "True", this is a finding.
Fix: F-45642r712582_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument "--profiling" to "false".
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000920
- Vuln IDs
-
- V-242410
- Rule IDs
-
- SV-242410r712586_rule
Checks: C-45685r712584_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands:
grep -i insecure-port kube-apiserver.manifest
grep -i secure-port kube-apiserver.manifest
grep -i etcd-servers kube-apiserver.manifest
Edit the manifest file (e.g., vim <manifest name>) and review the livenessProbe httpGet port and each containerPort/hostPort pair under ports.
Run the command:
kubectl describe services --all-namespaces
Search the labels for any apiserver namespaces and review the ports.
Any manifest and namespace PPS or services configuration not in compliance with the PPSM CAL is a finding.
Review the information system documentation and interview the team to gain an understanding of the API Server architecture and determine the applicable PPS. If there are any ports, protocols, and services in the system documentation not in compliance with the PPSM CAL, this is a finding. Any PPS not set in the system documentation is a finding.
Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/
Verify the API Server network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-45643r712585_fix
Amend any system documentation requiring revision. Update Kubernetes API Server manifest and namespace PPS configuration to comply with PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000930
- Vuln IDs
-
- V-242411
- Rule IDs
-
- SV-242411r712589_rule
Checks: C-45686r712587_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands:
grep -i insecure-port kube-scheduler.manifest
grep -i secure-port kube-scheduler.manifest
Edit the manifest file (e.g., vim <manifest name>) and review the livenessProbe httpGet port and each containerPort/hostPort pair under ports.
Run the command:
kubectl describe services --all-namespaces
Search the labels for any scheduler namespaces and review the ports.
Any manifest and namespace PPS configuration not in compliance with the PPSM CAL is a finding.
Review the information system documentation and interview the team to gain an understanding of the Scheduler architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding.
Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/
Verify the Scheduler network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-45644r712588_fix
Amend any system documentation requiring revision. Update Kubernetes Scheduler manifest and namespace PPS configuration to comply with the PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000940
- Vuln IDs
-
- V-242412
- Rule IDs
-
- SV-242412r712592_rule
Checks: C-45687r712590_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands:
grep -i insecure-port kube-controller-manager.manifest
grep -i secure-port kube-controller-manager.manifest
Edit the manifest file (e.g., vim <manifest name>) and review the livenessProbe httpGet port and each containerPort/hostPort pair under ports.
Run the command:
kubectl describe services --all-namespaces
Search the labels for any controller namespaces and review the ports.
Any manifest and namespace PPS or services configuration not in compliance with the PPSM CAL is a finding.
Review the information system documentation and interview the team to gain an understanding of the Controller architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding.
Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/
Verify the Controller network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-45645r712591_fix
Amend any system documentation requiring revision. Update Kubernetes Controller manifest and namespace PPS configuration to comply with PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000950
- Vuln IDs
-
- V-242413
- Rule IDs
-
- SV-242413r712595_rule
Checks: C-45688r712593_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command:
grep -i etcd-servers kube-apiserver.manifest
Edit the etcd-main.manifest file (e.g., vim <manifest name>) and review the livenessProbe httpGet port and each containerPort/hostPort pair under ports.
Run the command:
kubectl describe services --all-namespaces
Search the labels for any apiserver namespaces and review the ports.
Any manifest and namespace PPS configuration not in compliance with the PPSM CAL is a finding.
Review the information system documentation and interview the team to gain an understanding of the etcd architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding.
Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/
Verify the etcd network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-45646r712594_fix
Amend any system documentation requiring revision. Update Kubernetes etcd manifest and namespace PPS configuration to comply with PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000960
- Vuln IDs
-
- V-242414
- Rule IDs
-
- SV-242414r717030_rule
Checks: C-45689r717031_chk
On the Master node, run the command:
kubectl get pods --all-namespaces
The list returned is all pods running within the Kubernetes cluster. For those pods running within the user namespaces (system namespaces are kube-system, kube-node-lease, and kube-public), run the command:
kubectl get pod podname -o yaml | grep -i port
Note: In the above command, "podname" is the name of the pod. For the command to work correctly, the current context must be changed to the namespace for the pod. The command to do this is:
kubectl config set-context --current --namespace=namespace-name
(Note: "namespace-name" is the name of the namespace.)
Review the ports that are returned for the pod. If any host-privileged ports are returned for any of the pods, this is a finding.
Fix: F-45647r717032_fix
For any of the pods that are using host-privileged ports, reconfigure the pod to use a service to map a host non-privileged port to the pod port or reconfigure the image to use non-privileged ports.
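As an illustration of the service-mapping approach, a Service can expose a pod that itself listens only on a non-privileged port. This is a minimal sketch; the names, labels, and port numbers below are hypothetical and not from this guide:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc          # hypothetical name
spec:
  type: NodePort
  selector:
    app: web             # hypothetical label on the target pod
  ports:
    - port: 8080         # service port (non-privileged)
      targetPort: 8080   # the container listens here instead of 80/443
      nodePort: 30080    # the NodePort range is non-privileged by default
```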
- RMF Control
- IA-5
- Severity
- H
- CCI
- CCI-000196
- Version
- CNTR-K8-001160
- Vuln IDs
-
- V-242415
- Rule IDs
-
- SV-242415r712601_rule
Checks: C-45690r712599_chk
On the Kubernetes Master node, run the following command: kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A If any of the values returned reference environment variables, this is a finding.
Fix: F-45648r712600_fix
Any secrets stored as environment variables must be moved to the secret files with the proper protections and enforcements or placed within a password vault.
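One way to satisfy this is to mount the secret as a file instead of injecting it through an environment variable. A sketch with hypothetical names (the secret "db-creds", the image, and the mount path are illustrative, not from this guide):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app                                 # hypothetical
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      volumeMounts:
        - name: creds
          mountPath: /etc/creds             # secret appears as files here
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: db-creds                # hypothetical secret name
```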
- RMF Control
- SC-2
- Severity
- M
- CCI
- CCI-001082
- Version
- CNTR-K8-001360
- Vuln IDs
-
- V-242417
- Rule IDs
-
- SV-242417r712607_rule
Checks: C-45692r712605_chk
On the Master node, run the command: kubectl get pods --all-namespaces Review the namespaces and pods that are returned. Kubernetes system namespaces are kube-node-lease, kube-public, and kube-system. If any user pods are present in the Kubernetes system namespaces, this is a finding.
Fix: F-45650r712606_fix
Move any user pods that are present in the Kubernetes system namespaces to user specific namespaces.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001400
- Vuln IDs
-
- V-242418
- Rule IDs
-
- SV-242418r712610_rule
Checks: C-45693r712608_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command:
grep -i tls-cipher-suites *
If the tls-cipher-suites setting is not set in the Kubernetes API server manifest file, contains no value, or does not contain TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, this is a finding.
Fix: F-45651r717025_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of tls-cipher-suites to:
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
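The grep-based audit can be sketched against a stand-in manifest fragment; the temporary file below replaces the real /etc/kubernetes/manifests files, and the single suite shown is only a sample of the approved list:

```shell
# Create a stand-in manifest fragment to illustrate the check.
m=$(mktemp)
cat > "$m" <<'EOF'
    - --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
EOF
# The audit passes only if the flag is present and names approved suites.
if grep -qi 'tls-cipher-suites=TLS_ECDHE' "$m"; then
  echo "compliant"
else
  echo "finding"
fi
rm -f "$m"
```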
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001410
- Vuln IDs
-
- V-242419
- Rule IDs
-
- SV-242419r712613_rule
Checks: C-45694r712611_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command:
grep -i client-ca-file *
If the client-ca-file setting is not set in the Kubernetes API server manifest file or contains no value, this is a finding.
Fix: F-45652r712612_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of client-ca-file to the path containing the Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001420
- Vuln IDs
-
- V-242420
- Rule IDs
-
- SV-242420r712616_rule
Checks: C-45695r712614_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command:
grep -i client-ca-file kubelet
If the setting client-ca-file is not set in the Kubernetes Kubelet configuration or contains no value, this is a finding.
Fix: F-45653r717026_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig/ directory on the Kubernetes Master Node. Set the value of client-ca-file to the path containing the Approved Organizational Certificate. Restart the kubelet service using the following command:
service kubelet restart
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001430
- Vuln IDs
-
- V-242421
- Rule IDs
-
- SV-242421r717033_rule
Checks: C-45696r712617_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command:
grep -i root-ca-file *
If the setting root-ca-file is not set in the Kubernetes Controller Manager manifest file or contains no value, this is a finding.
Fix: F-45654r712618_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of root-ca-file to path containing Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001440
- Vuln IDs
-
- V-242422
- Rule IDs
-
- SV-242422r712622_rule
Checks: C-45697r712620_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands:
grep -i tls-cert-file *
grep -i tls-private-key-file *
If the settings tls-cert-file and tls-private-key-file are not set in the Kubernetes API server manifest file or contain no value, this is a finding.
Fix: F-45655r712621_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the values of tls-cert-file and tls-private-key-file to the paths containing the Approved Organizational Certificate and key.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001450
- Vuln IDs
-
- V-242423
- Rule IDs
-
- SV-242423r712625_rule
Checks: C-45698r712623_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i client-cert-auth * If the setting client-cert-auth is not configured in the Kubernetes etcd manifest file or set to "false", this is a finding.
Fix: F-45656r712624_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--client-cert-auth" to "true" for the etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001460
- Vuln IDs
-
- V-242424
- Rule IDs
-
- SV-242424r712628_rule
Checks: C-45699r712626_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command:
grep -i tls-private-key-file kubelet
If the setting "tls-private-key-file" is not configured in the Kubernetes Kubelet, this is a finding.
Fix: F-45657r712627_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument tls-private-key-file to the key for an Approved Organizational Certificate. Restart the kubelet service using the following command:
service kubelet restart
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001470
- Vuln IDs
-
- V-242425
- Rule IDs
-
- SV-242425r712631_rule
Checks: C-45700r712629_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command:
grep -i tls-cert-file kubelet
If the setting "tls-cert-file" is not configured in the Kubernetes Kubelet, this is a finding.
Fix: F-45658r712630_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument "tls-cert-file" to an Approved Organizational Certificate. Restart the kubelet service using the following command:
service kubelet restart
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001480
- Vuln IDs
-
- V-242426
- Rule IDs
-
- SV-242426r754813_rule
Checks: C-45701r754811_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i peer-client-cert-auth * If the setting peer-client-cert-auth is not configured in the Kubernetes etcd manifest file or set to "false", this is a finding.
Fix: F-45659r754812_fix
Edit the Kubernetes etcd file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--peer-client-cert-auth" to "true" for the etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001490
- Vuln IDs
-
- V-242427
- Rule IDs
-
- SV-242427r712637_rule
Checks: C-45702r712635_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i etcd-keyfile * If the setting "etcd-keyfile" is not configured in the Kubernetes etcd manifest file, this is a finding.
Fix: F-45660r712636_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--etcd-keyfile" to the key for the Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001500
- Vuln IDs
-
- V-242428
- Rule IDs
-
- SV-242428r754816_rule
Checks: C-45703r754814_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i certfile * If the setting "certfile" is not configured in the Kubernetes etcd manifest file, this is a finding.
Fix: F-45661r754815_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--certfile" to the Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001510
- Vuln IDs
-
- V-242429
- Rule IDs
-
- SV-242429r712643_rule
Checks: C-45704r712641_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i etcd-cafile * If the setting "etcd-cafile" is not configured in the Kubernetes API Server manifest file, this is a finding.
Fix: F-45662r712642_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--etcd-cafile" to the Certificate Authority for etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001520
- Vuln IDs
-
- V-242430
- Rule IDs
-
- SV-242430r712646_rule
Checks: C-45705r712644_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i etcd-certfile * If the setting "etcd-certfile" is not set in the Kubernetes API Server manifest file, this is a finding.
Fix: F-45663r712645_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--etcd-certfile" to the certificate to be used for communication with etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001530
- Vuln IDs
-
- V-242431
- Rule IDs
-
- SV-242431r712649_rule
Checks: C-45706r712647_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i etcd-keyfile * If the setting "etcd-keyfile" is not configured in the Kubernetes API Server manifest file, this is a finding.
Fix: F-45664r712648_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--etcd-keyfile" to the key to be used for communication with etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001540
- Vuln IDs
-
- V-242432
- Rule IDs
-
- SV-242432r712652_rule
Checks: C-45707r712650_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i peer-cert-file * If the setting "peer-cert-file" is not configured in the Kubernetes etcd manifest file, this is a finding.
Fix: F-45665r712651_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--peer-cert-file" to the certificate to be used for communication with etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001550
- Vuln IDs
-
- V-242433
- Rule IDs
-
- SV-242433r712655_rule
Checks: C-45708r712653_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i peer-key-file * If the setting "peer-key-file" is not set in the Kubernetes etcd manifest file, this is a finding.
Fix: F-45666r712654_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--peer-key-file" to the key to be used for peer communication with etcd.
- RMF Control
- SC-3
- Severity
- H
- CCI
- CCI-001084
- Version
- CNTR-K8-001620
- Vuln IDs
-
- V-242434
- Rule IDs
-
- SV-242434r712658_rule
Checks: C-45709r712656_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i protect-kernel-defaults kubelet If the setting "protect-kernel-defaults" is set to false or not set in the Kubernetes Kubelet, this is a finding.
Fix: F-45667r712657_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument "--protect-kernel-defaults" to "true". Restart the kubelet service using the following command:
service kubelet restart
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-001990
- Vuln IDs
-
- V-242435
- Rule IDs
-
- SV-242435r712661_rule
Checks: C-45710r712659_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i authorization-mode * If the setting authorization-mode is set to "AlwaysAllow" in the Kubernetes API Server manifest file or is not configured, this is a finding.
Fix: F-45668r712660_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument "--authorization-mode" to any valid authorization mode other than AlwaysAllow.
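The AlwaysAllow test reduces to inspecting the flag's value; a minimal sketch run against a stand-in flag string rather than the real manifest files:

```shell
# Stand-in for a kube-apiserver manifest flag; the real audit greps
# /etc/kubernetes/manifests. "Node,RBAC" is an illustrative compliant value.
flag='--authorization-mode=Node,RBAC'
case "$flag" in
  *AlwaysAllow*) echo "finding" ;;     # AlwaysAllow bypasses authorization
  *=*)           echo "compliant" ;;   # any other configured mode passes
  *)             echo "finding" ;;     # an unset flag is also a finding
esac
```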
- RMF Control
- AC-6
- Severity
- H
- CCI
- CCI-002233
- Version
- CNTR-K8-002000
- Vuln IDs
-
- V-242436
- Rule IDs
-
- SV-242436r712664_rule
Checks: C-45711r712662_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i ValidatingAdmissionWebhook * If a line is not returned that includes enable-admission-plugins and ValidatingAdmissionWebhook, this is a finding.
Fix: F-45669r717027_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument "--enable-admission-plugins" to include "ValidatingAdmissionWebhook". Each enabled plugin is separated by commas. Note: It is best to implement policies first and then enable the webhook, otherwise a denial of service may occur.
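Because --enable-admission-plugins takes a comma-separated list, the check is membership in that list. A sketch with a stand-in flag value (the other plugin names are illustrative):

```shell
# Stand-in value; the real check greps the kube-apiserver manifest.
plugins='NodeRestriction,ValidatingAdmissionWebhook,PodSecurityPolicy'
# Split on commas and require an exact name match; a bare substring match
# could be fooled by a similarly named plugin.
if printf '%s\n' "$plugins" | tr ',' '\n' | grep -qx 'ValidatingAdmissionWebhook'; then
  echo "compliant"
else
  echo "finding"
fi
```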
- RMF Control
- AC-6
- Severity
- H
- CCI
- CCI-002233
- Version
- CNTR-K8-002010
- Vuln IDs
-
- V-242437
- Rule IDs
-
- SV-242437r712667_rule
Checks: C-45712r712665_chk
On the Master Node, run the command:
kubectl get podsecuritypolicy
If there is no pod security policy configured, this is a finding.
For any pod security policies listed, edit the policy with the command:
kubectl edit podsecuritypolicy policyname
(Note: "policyname" is the name of the policy.)
Review the runAsUser, supplementalGroups and fsGroup sections of the policy.
If any of these sections are missing, this is a finding.
If the rule within the runAsUser section is not set to "MustRunAsNonRoot", this is a finding.
If the ranges within the supplementalGroups section have min set to "0" or min is missing, this is a finding.
If the ranges within the fsGroup section have min set to "0" or min is missing, this is a finding.
Fix: F-45670r717028_fix
From the Master node, save the following policy to a file called restricted.yml:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  # This is redundant with non-root + disallow privilege escalation,
  # but we can provide it for defense in depth.
  requiredDropCapabilities:
    - ALL
  # Allow core volume types.
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that persistentVolumes set up by the cluster admin are safe to use.
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
To implement the policy, run the command:
kubectl create -f restricted.yml
- RMF Control
- SC-5
- Severity
- M
- CCI
- CCI-002385
- Version
- CNTR-K8-002600
- Vuln IDs
-
- V-242438
- Rule IDs
-
- SV-242438r754802_rule
Checks: C-45713r754801_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command:
grep -I request-timeout *
If the Kubernetes API Server manifest file does not exist, this is a finding.
If the setting request-timeout is set to "0" in the Kubernetes API Server manifest file, or is not configured, this is a finding.
Fix: F-45671r712669_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of request-timeout greater than "0".
- RMF Control
- SI-2
- Severity
- M
- CCI
- CCI-002617
- Version
- CNTR-K8-002700
- Vuln IDs
-
- V-242442
- Rule IDs
-
- SV-242442r712682_rule
Checks: C-45717r712680_chk
To view all pods and the images used to create the pods, from the Master node, run the following command:
kubectl get pods --all-namespaces -o jsonpath="{..image}" | \
tr -s '[[:space:]]' '\n' | \
sort | \
uniq -c
Review the images used for pods running within Kubernetes. If there are multiple versions of the same image, this is a finding.
Fix: F-45675r712681_fix
Remove any old pods that are using older images. On the Master node, run the command: kubectl delete pod podname (Note: "podname" is the name of the pod to delete.)
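Assuming image references of the form repo:tag, the audit can be narrowed to print only repositories that appear under more than one version. A self-contained sketch with illustrative image names in place of the kubectl output:

```shell
# Sample image list (illustrative); the real input comes from
# kubectl get pods --all-namespaces -o jsonpath="{..image}".
printf '%s\n' nginx:1.19 nginx:1.20 redis:6 redis:6 \
  | sort -u              `# collapse pods running the identical image` \
  | sed 's/:[^:]*$//'    `# strip the tag, leaving the repository` \
  | sort | uniq -c \
  | awk '$1 > 1 {print $2}'   # repositories with multiple versions
```

Here redis:6 appearing twice is not flagged (same version), while nginx is printed because two tags of it are in use.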
- RMF Control
- SI-2
- Severity
- M
- CCI
- CCI-002605
- Version
- CNTR-K8-002720
- Vuln IDs
-
- V-242443
- Rule IDs
-
- SV-242443r712685_rule
Checks: C-45718r712683_chk
Authenticate on the Kubernetes Master Node. Run the command:
kubectl version --short
If the reported versions fall outside the supported Kubernetes version skew policy, this is a finding.
Note: The Kubernetes version skew policy can be found at: https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions
Fix: F-45676r712684_fix
Upgrade Kubernetes to the supported version. Institute and adhere to the policies and procedures to ensure that patches are consistently applied within the time allowed.
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003110
- Vuln IDs
-
- V-242444
- Rule IDs
-
- SV-242444r712688_rule
Checks: C-45719r712686_chk
Review the ownership of the Kubernetes manifest files by using the command:
stat -c %U:%G /etc/kubernetes/manifests/* | grep -v root:root
If the command returns any ownership other than root:root, this is a finding.
Fix: F-45677r712687_fix
Change the ownership of the manifest files to root:root by executing the command:
chown root:root /etc/kubernetes/manifests/*
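The stat-plus-grep pattern generalizes to any expected owner; a self-contained sketch using a scratch file in place of /etc/kubernetes/manifests/* (GNU stat -c, as used throughout this guide):

```shell
# A scratch file stands in for a manifest file.
f=$(mktemp)
expected=$(id -un)          # "root" in the actual check
owner=$(stat -c %U "$f")
# grep -v in the real check prints only non-compliant entries,
# so silence means no finding.
if [ "$owner" = "$expected" ]; then
  echo "no finding"
else
  echo "finding: $owner"
fi
rm -f "$f"
```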
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003120
- Vuln IDs
-
- V-242445
- Rule IDs
-
- SV-242445r712691_rule
Checks: C-45720r712689_chk
Review the ownership of the Kubernetes etcd files by using the command:
stat -c %U:%G /var/lib/etcd/* | grep -v etcd:etcd
If the command returns any ownership other than etcd:etcd, this is a finding.
Fix: F-45678r712690_fix
Change the ownership of the etcd files to etcd:etcd by executing the command:
chown etcd:etcd /var/lib/etcd/*
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003130
- Vuln IDs
-
- V-242446
- Rule IDs
-
- SV-242446r712694_rule
Checks: C-45721r712692_chk
Review the ownership of the Kubernetes conf files by using the commands:
stat -c %U:%G /etc/kubernetes/admin.conf | grep -v root:root
stat -c %U:%G /etc/kubernetes/scheduler.conf | grep -v root:root
stat -c %U:%G /etc/kubernetes/controller-manager.conf | grep -v root:root
If any command returns ownership other than root:root, this is a finding.
Fix: F-45679r712693_fix
Change the ownership of the conf files to root:root by executing the commands:
chown root:root /etc/kubernetes/admin.conf
chown root:root /etc/kubernetes/scheduler.conf
chown root:root /etc/kubernetes/controller-manager.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003140
- Vuln IDs
-
- V-242447
- Rule IDs
-
- SV-242447r712697_rule
Checks: C-45722r712695_chk
Check whether kube-proxy is running and obtain its --kubeconfig parameter by using the following command:
ps -ef | grep kube-proxy
If kube-proxy exists, review the permissions of the kubeconfig file by using the command:
stat -c %a <location from --kubeconfig>
If the file has permissions more permissive than "644", this is a finding.
Fix: F-45680r712696_fix
Change the permissions of the kube-proxy kubeconfig file to "644" by executing the command:
chmod 644 <location from --kubeconfig>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003150
- Vuln IDs
-
- V-242448
- Rule IDs
-
- SV-242448r712700_rule
Checks: C-45723r712698_chk
Check whether kube-proxy is running by using the following command:
ps -ef | grep kube-proxy
If kube-proxy exists, review the ownership of the kubeconfig file by using the command:
stat -c %U:%G <location from --kubeconfig> | grep -v root:root
If the command returns any ownership other than root:root, this is a finding.
Fix: F-45681r712699_fix
Change the ownership of the kube-proxy kubeconfig file to root:root by executing the command:
chown root:root <location from --kubeconfig>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003160
- Vuln IDs
-
- V-242449
- Rule IDs
-
- SV-242449r712703_rule
Checks: C-45724r712701_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command:
more kubelet
Note the certificate location given by the --client-ca-file argument.
If the file at that location has permissions more permissive than "644", this is a finding.
Fix: F-45682r712702_fix
Change the permissions of the --client-ca-file certificate to "644" by executing the command:
chmod 644 <kubelet --client-ca-file argument location>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003170
- Vuln IDs
-
- V-242450
- Rule IDs
-
- SV-242450r754804_rule
Checks: C-45725r754803_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command:
more kubelet
Note the certificate location given by the --client-ca-file argument.
Review the ownership of the Kubernetes client-ca-file by using the command:
stat -c %U:%G <location from --client-ca-file argument> | grep -v root:root
If the command returns any ownership other than root:root, this is a finding.
Fix: F-45683r712705_fix
Change the ownership of the client-ca-file to root:root by executing the command:
chown root:root <location from --client-ca-file argument>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003180
- Vuln IDs
-
- V-242451
- Rule IDs
-
- SV-242451r712709_rule
Checks: C-45726r712707_chk
Review the ownership of the PKI files in Kubernetes by using the command:
ls -laR /etc/kubernetes/pki/
If the command returns any file or directory not owned by root:root, this is a finding.
Fix: F-45684r712708_fix
Change the ownership of the PKI directory to root:root by executing the command:
chown -R root:root /etc/kubernetes/pki/
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003190
- Vuln IDs
-
- V-242452
- Rule IDs
-
- SV-242452r712712_rule
Checks: C-45727r712710_chk
Review the permissions of the Kubernetes kubelet.conf by using the command:
stat -c %a /etc/kubernetes/kubelet.conf
If the file has permissions more permissive than "644", this is a finding.
Fix: F-45685r712711_fix
Change the permissions of kubelet.conf to "644" by executing the command:
chmod 644 /etc/kubernetes/kubelet.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003200
- Vuln IDs
-
- V-242453
- Rule IDs
-
- SV-242453r712715_rule
Checks: C-45728r712713_chk
Review the ownership of the Kubernetes kubelet.conf file by using the command:
stat -c %U:%G /etc/kubernetes/kubelet.conf | grep -v root:root
If the command returns any ownership other than root:root, this is a finding.
Fix: F-45686r712714_fix
Change the ownership of kubelet.conf to root:root by executing the command:
chown root:root /etc/kubernetes/kubelet.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003210
- Vuln IDs
-
- V-242454
- Rule IDs
-
- SV-242454r754819_rule
Checks: C-45729r754817_chk
Review the kubeadm.conf file:
Get the path for kubeadm.conf by running:
systemctl status kubelet
Note the configuration file installed by kubeadm (default location: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Then run:
stat -c %U:%G <kubeadm.conf path> | grep -v root:root
If the command returns any ownership other than root:root, this is a finding.
Fix: F-45687r754818_fix
Change the ownership of kubeadm.conf to root:root by executing the command:
chown root:root <kubeadm.conf path>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003220
- Vuln IDs
-
- V-242455
- Rule IDs
-
- SV-242455r754822_rule
Checks: C-45730r754820_chk
Review the kubeadm.conf file:
Get the path for kubeadm.conf by running:
systemctl status kubelet
Note the configuration file installed by kubeadm (default location: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Then run:
stat -c %a <kubeadm.conf path>
If the file has permissions more permissive than "644", this is a finding.
Fix: F-45688r754821_fix
Change the permissions of kubeadm.conf to "644" by executing the command: chmod 644 <kubeadm.conf path>
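The fix-then-verify cycle can be sketched on a scratch file standing in for the kubeadm.conf path (which in practice must be taken from systemctl status kubelet):

```shell
f=$(mktemp)       # stand-in for <kubeadm.conf path>
chmod 600 "$f"    # simulate a non-644 starting mode
chmod 644 "$f"    # the remediation from the fix text
stat -c %a "$f"   # re-run the check; expect 644
rm -f "$f"
```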
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003230
- Vuln IDs
-
- V-242456
- Rule IDs
-
- SV-242456r712724_rule
Checks: C-45731r712722_chk
Review the permissions of the Kubernetes config.yaml by using the command:
stat -c %a /var/lib/kubelet/config.yaml
If the file has permissions more permissive than "644", this is a finding.
Fix: F-45689r712723_fix
Change the permissions of config.yaml to "644" by executing the command:
chmod 644 /var/lib/kubelet/config.yaml
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003240
- Vuln IDs
-
- V-242457
- Rule IDs
-
- SV-242457r712727_rule
Checks: C-45732r712725_chk
Review the ownership of the Kubernetes kubelet config file by using the command:
stat -c %U:%G /var/lib/kubelet/config.yaml | grep -v root:root
If the command returns any ownership other than root:root, this is a finding.
Fix: F-45690r712726_fix
Change the ownership of the kubelet config to root:root by executing the command:
chown root:root /var/lib/kubelet/config.yaml
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003250
- Vuln IDs
-
- V-242458
- Rule IDs
-
- SV-242458r754806_rule
Checks: C-45733r712728_chk
Review the permissions of the Kubernetes manifest files by using the command:
stat -c %a /etc/kubernetes/manifests/*
If any of the files have permissions more permissive than "644", this is a finding.
Fix: F-45691r754805_fix
Change the permissions of the manifest files by executing the command: chmod 644 /etc/kubernetes/manifests/*
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003260
- Vuln IDs
-
- V-242459
- Rule IDs
-
- SV-242459r712733_rule
Checks: C-45734r712731_chk
Review the permissions of the Kubernetes etcd files by using the command:
stat -c %a /var/lib/etcd/*
If any of the files have permissions more permissive than "644", this is a finding.
Fix: F-45692r712732_fix
Change the permissions of the etcd files to "644" by executing the command:
chmod 644 /var/lib/etcd/*
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003270
- Vuln IDs
-
- V-242460
- Rule IDs
-
- SV-242460r712736_rule
Checks: C-45735r712734_chk
Review the permissions of the Kubernetes config files by using the commands:
stat -c %a /etc/kubernetes/admin.conf
stat -c %a /etc/kubernetes/scheduler.conf
stat -c %a /etc/kubernetes/controller-manager.conf
If any of the files have permissions more permissive than "644", this is a finding.
Fix: F-45693r712735_fix
Change the permissions of the conf files to "644" by executing the commands:
chmod 644 /etc/kubernetes/admin.conf
chmod 644 /etc/kubernetes/scheduler.conf
chmod 644 /etc/kubernetes/controller-manager.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003280
- Vuln IDs
-
- V-242461
- Rule IDs
-
- SV-242461r712739_rule
Checks: C-45736r712737_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * If the setting "audit-policy-file" is not set or is found in the Kubernetes API manifest file without valid content, this is a finding.
Fix: F-45694r712738_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument "--audit-policy-file" to the location of a valid audit policy file.
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003290
- Vuln IDs
-
- V-242462
- Rule IDs
-
- SV-242462r712742_rule
Checks: C-45737r712740_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i audit-log-maxsize * If the setting "audit-log-maxsize" is not set in the Kubernetes API Server manifest file or it is set to less than "100", this is a finding.
Fix: F-45695r712741_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--audit-log-maxsize" to a minimum of "100".
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003300
- Vuln IDs
-
- V-242463
- Rule IDs
-
- SV-242463r712745_rule
Checks: C-45738r712743_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command:
grep -i audit-log-maxbackup *
If the setting "audit-log-maxbackup" is not set in the Kubernetes API Server manifest file or it is set to less than "10", this is a finding.
Fix: F-45696r712744_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--audit-log-maxbackup" to a minimum of "10".
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003310
- Vuln IDs
-
- V-242464
- Rule IDs
-
- SV-242464r754808_rule
Checks: C-45739r754807_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command:
grep -i audit-log-maxage *
If the setting "audit-log-maxage" is not set in the Kubernetes API Server manifest file or it is set to less than "30", this is a finding.
Fix: F-45697r712747_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--audit-log-maxage" to a minimum of "30".
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003320
- Vuln IDs
-
- V-242465
- Rule IDs
-
- SV-242465r754810_rule
Checks: C-45740r754809_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i audit-log-path * If the setting "audit-log-path" is not set in the Kubernetes API Server manifest file or it is not set to a valid path, this is a finding.
Fix: F-45698r712750_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--audit-log-path" to a valid location.
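The four audit-log rules above can be verified in one pass. A minimal sketch, using the STIG minimums (maxsize 100, maxbackup 10, maxage 30, plus a set log path) against an illustrative manifest fragment:

```shell
# Stand-in manifest fragment with compliant audit-log settings.
manifest=$(mktemp)
cat > "$manifest" <<'EOF'
    - --audit-log-maxsize=100
    - --audit-log-maxbackup=10
    - --audit-log-maxage=30
    - --audit-log-path=/var/log/kube-apiserver-audit.log
EOF

findings=0
# Numeric flags: each must be present and at least the STIG minimum.
for pair in maxsize:100 maxbackup:10 maxage:30; do
  flag=audit-log-${pair%%:*}; min=${pair##*:}
  val=$(sed -n "s/.*--$flag=\([0-9]*\).*/\1/p" "$manifest")
  if [ -z "$val" ] || [ "$val" -lt "$min" ]; then
    findings=$((findings + 1))
  fi
done
# The log path must be set to an absolute path.
grep -q -- '--audit-log-path=/' "$manifest" || findings=$((findings + 1))
echo "findings: $findings"
rm -f "$manifest"
```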
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003330
- Vuln IDs
-
- V-242466
- Rule IDs
-
- SV-242466r712754_rule
Checks: C-45741r712752_chk
Review the permissions of the Kubernetes PKI cert files by using the command: find /etc/kubernetes/pki -name "*.crt" | xargs stat -c '%n %a' If any of the files have permissions more permissive than "644", this is a finding.
Fix: F-45699r712753_fix
Change the permissions of the cert files to "644" by executing the command: chmod -R 644 /etc/kubernetes/pki/*.crt
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003340
- Vuln IDs
-
- V-242467
- Rule IDs
-
- SV-242467r712757_rule
Checks: C-45742r712755_chk
Review the permissions of the Kubernetes PKI key files by using the command: find /etc/kubernetes/pki -name "*.key" | xargs stat -c '%n %a' If any of the files have permissions more permissive than "600", this is a finding.
Fix: F-45700r712756_fix
Change the permissions of the key files to "600" by executing the command: chmod -R 600 /etc/kubernetes/pki/*.key
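The two PKI rules above ("*.crt" no more permissive than 644, "*.key" no more permissive than 600) can be audited with a bitmask rather than a simple string compare, so that modes such as 700 are also flagged. A sketch against a throwaway directory standing in for /etc/kubernetes/pki (stat -c is the GNU form found on Linux nodes):

```shell
# Throwaway stand-in for /etc/kubernetes/pki with compliant modes.
pki=$(mktemp -d)
touch "$pki/apiserver.crt" "$pki/apiserver.key"
chmod 644 "$pki/apiserver.crt"
chmod 600 "$pki/apiserver.key"

bad=0
for f in "$pki"/*.crt; do
  mode=$(stat -c '%a' "$f")
  # any permission bit outside rw-r--r-- (0644) is a finding
  [ $(( 0$mode & ~0644 )) -eq 0 ] || bad=$((bad + 1))
done
for f in "$pki"/*.key; do
  mode=$(stat -c '%a' "$f")
  # any permission bit outside rw------- (0600) is a finding
  [ $(( 0$mode & ~0600 )) -eq 0 ] || bad=$((bad + 1))
done
echo "non-compliant files: $bad"
rm -rf "$pki"
```

Prefixing the stat output with `0` makes the shell parse it as octal, so the mask test compares actual permission bits instead of decimal digits.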
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-001453
- Version
- CNTR-K8-003350
- Vuln IDs
-
- V-242468
- Rule IDs
-
- SV-242468r712760_rule
Checks: C-45743r712758_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i tls-min-version * If the setting "tls-min-version" is not configured in the Kubernetes API Server manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Fix: F-45701r712759_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
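A minimal sketch of the version test, assuming the Go-style value names the rule uses ("VersionTLS12" and "VersionTLS13" pass; anything else, or an unset flag, is a finding). The manifest line is illustrative:

```shell
# Stand-in manifest line; extract everything after the flag's "=".
line='    - --tls-min-version=VersionTLS12'
ver=${line##*--tls-min-version=}

case "$ver" in
  VersionTLS12|VersionTLS13) tls_ok=1 ;;  # compliant minimum
  *)                         tls_ok=0 ;;  # TLS 1.0/1.1, empty, or unset
esac
echo "tls_ok=$tls_ok"
```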
- RMF Control
- SC-10
- Severity
- M
- CCI
- CCI-001133
- Version
- CNTR-K8-001300
- Vuln IDs
-
- V-245541
- Rule IDs
-
- SV-245541r754888_rule
Checks: C-48816r754886_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i streaming-connection-idle-timeout kubelet If the setting "streaming-connection-idle-timeout" is set to "0" or the parameter is not configured in the Kubernetes Kubelet, this is a finding.
Fix: F-48771r754887_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument "--streaming-connection-idle-timeout" to a value other than "0". Restart the Kubelet service using the following command: service kubelet restart
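The timeout check can be sketched by parsing the Kubelet's argument line. The variable name and value below are illustrative; the sysconfig layout varies by distribution and install method:

```shell
# Hypothetical sysconfig line as it might appear in /etc/sysconfig/kubelet.
kubelet_args='KUBELET_EXTRA_ARGS=--streaming-connection-idle-timeout=5m'

# Pull out the timeout value; empty means the flag is not configured.
timeout=$(printf '%s\n' "$kubelet_args" |
  sed -n 's/.*--streaming-connection-idle-timeout=\([^ ]*\).*/\1/p')

# Unset or "0" (timeouts disabled) is a finding.
if [ -z "$timeout" ] || [ "$timeout" = "0" ]; then
  idle_finding=1
else
  idle_finding=0
fi
echo "idle_finding=$idle_finding"
```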
- RMF Control
- SC-8
- Severity
- H
- CCI
- CCI-002418
- Version
- CNTR-K8-002620
- Vuln IDs
-
- V-245542
- Rule IDs
-
- SV-245542r754891_rule
Checks: C-48817r754889_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i basic-auth-file * If "basic-auth-file" is set in the Kubernetes API server manifest file, this is a finding.
Fix: F-48772r754890_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Remove the setting "--basic-auth-file".
- RMF Control
- SC-8
- Severity
- M
- CCI
- CCI-002418
- Version
- CNTR-K8-002630
- Vuln IDs
-
- V-245543
- Rule IDs
-
- SV-245543r754894_rule
Checks: C-48818r754892_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i token-auth-file * If "token-auth-file" is set in the Kubernetes API server manifest file, this is a finding.
Fix: F-48773r754893_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Remove the parameter "--token-auth-file".
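For these two rules the logic inverts: the flag being present at all is the finding. A sketch over an illustrative, compliant manifest fragment:

```shell
# Stand-in manifest fragment with neither auth-file flag set.
manifest=$(mktemp)
cat > "$manifest" <<'EOF'
    - kube-apiserver
    - --secure-port=6443
EOF

auth_findings=0
for flag in basic-auth-file token-auth-file; do
  # A match means the deprecated flag is configured: count it as a finding.
  grep -qi -- "--$flag" "$manifest" && auth_findings=$((auth_findings + 1))
done
echo "auth_findings=$auth_findings"
rm -f "$manifest"
```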
- RMF Control
- SC-8
- Severity
- M
- CCI
- CCI-002418
- Version
- CNTR-K8-002640
- Vuln IDs
-
- V-245544
- Rule IDs
-
- SV-245544r754897_rule
Checks: C-48819r754895_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands: grep -i kubelet-client-certificate * grep -i kubelet-client-key * If the setting "--kubelet-client-certificate" is not configured in the Kubernetes API server manifest file or contains no value, this is a finding. If the setting "--kubelet-client-key" is not configured in the Kubernetes API server manifest file or contains no value, this is a finding.
Fix: F-48774r754896_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of "--kubelet-client-certificate" and "--kubelet-client-key" to an Approved Organizational Certificate and key pair.
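A sketch of the paired check: both flags must be present and carry a non-empty value. The cert and key paths below are placeholders for the organization-approved pair, not a prescription:

```shell
# Stand-in manifest fragment; paths are illustrative placeholders.
manifest=$(mktemp)
cat > "$manifest" <<'EOF'
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
EOF

pair_findings=0
for flag in kubelet-client-certificate kubelet-client-key; do
  # "--flag=." requires at least one character after "=", i.e. a non-empty value.
  grep -q -- "--$flag=." "$manifest" || pair_findings=$((pair_findings + 1))
done
echo "pair_findings=$pair_findings"
rm -f "$manifest"
```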