Kubernetes Security Technical Implementation Guide
Digest of Updates
Comparison against the immediately prior release (V1R6). Rule matching uses the Group Vuln ID. Content-change detection compares each rule's description, check, and fix text after stripping inline markup; cosmetic-only edits are not flagged.
Added rules: 2
Content changes: 85
- V-242376 Medium (check, fix) The Kubernetes Controller Manager must use TLS 1.2, at a minimum, to protect the confidentiality of sensitive data during electronic dissemination.
- V-242377 Medium (check, fix) The Kubernetes Scheduler must use TLS 1.2, at a minimum, to protect the confidentiality of sensitive data during electronic dissemination.
- V-242378 Medium (check, fix) The Kubernetes API Server must use TLS 1.2, at a minimum, to protect the confidentiality of sensitive data during electronic dissemination.
- V-242379 Medium (check, fix) The Kubernetes etcd must use TLS to protect the confidentiality of sensitive data during electronic dissemination.
- V-242380 Medium (check, fix) The Kubernetes etcd must use TLS to protect the confidentiality of sensitive data during electronic dissemination.
- V-242381 High (check, fix) The Kubernetes Controller Manager must create unique service accounts for each work payload.
- V-242382 Medium (check, fix) The Kubernetes API Server must enable Node,RBAC as the authorization mode.
- V-242383 High (check, fix) User-managed resources must be created in dedicated namespaces.
- V-242384 Medium (check, fix) The Kubernetes Scheduler must have secure binding.
- V-242385 Medium (check, fix) The Kubernetes Controller Manager must have secure binding.
- V-242386 High (description, check, fix) The Kubernetes API server must have the insecure port flag disabled.
- V-242387 High (check, fix) The Kubernetes Kubelet must have the read-only port flag disabled.
- V-242388 High (description, check, fix) The Kubernetes API server must have the insecure bind address not set.
- V-242389 Medium (description, check, fix) The Kubernetes API server must have the secure port set.
- V-242390 High (check, fix) The Kubernetes API server must have anonymous authentication disabled.
- V-242391 High (check, fix) The Kubernetes Kubelet must have anonymous authentication disabled.
- V-242392 High (check, fix) The Kubernetes kubelet must enable explicit authorization.
- V-242393 Medium (description) Kubernetes Worker Nodes must not have sshd service running.
- V-242394 Medium (description) Kubernetes Worker Nodes must not have the sshd service enabled.
- V-242395 Medium (check) Kubernetes dashboard must not be enabled.
- V-242396 Medium (check, fix) Kubernetes Kubectl cp command must give expected access and results.
- V-242397 High (check, fix) The Kubernetes kubelet static PodPath must not enable static pods.
- V-242398 Medium (check) Kubernetes DynamicAuditing must not be enabled.
- V-242399 Medium (check) Kubernetes DynamicKubeletConfig must not be enabled.
- V-242400 Medium (check) The Kubernetes API server must have Alpha APIs disabled.
- V-242401 Medium (check) The Kubernetes API Server must have an audit policy set.
- V-242402 Medium (check) The Kubernetes API Server must have an audit log path set.
- V-242403 Medium (check) Kubernetes API Server must generate audit records that identify what type of event has occurred, identify the source of the event, contain the event results, identify any users, and identify any containers associated with the event.
- V-242404 Medium (check, fix) Kubernetes Kubelet must deny hostname override.
- V-242405 Medium (check, fix) The Kubernetes manifests must be owned by root.
- V-242406 Medium (check, fix) The Kubernetes kubelet configuration file must be owned by root.
- V-242407 Medium (check, fix) The Kubernetes kubelet configuration files must have file permissions set to 644 or more restrictive.
- V-242408 Medium (check, fix) The Kubernetes manifests must have least privileges.
- V-242409 Medium (check, fix) Kubernetes Controller Manager must disable profiling.
- V-242410 Medium (check) The Kubernetes API Server must enforce ports, protocols, and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL).
- V-242411 Medium (check) The Kubernetes Scheduler must enforce ports, protocols, and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL).
- V-242412 Medium (check) The Kubernetes Controllers must enforce ports, protocols, and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL).
- V-242413 Medium (check) The Kubernetes etcd must enforce ports, protocols, and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL).
- V-242414 Medium (check) The Kubernetes cluster must use non-privileged host ports for user pods.
- V-242415 High (check) Secrets in Kubernetes must not be stored as environment variables.
- V-242417 Medium (check) Kubernetes must separate user functionality.
- V-242418 Medium (check, fix) The Kubernetes API server must use approved cipher suites.
- V-242419 Medium (check, fix) Kubernetes API Server must have the SSL Certificate Authority set.
- V-242420 Medium (check, fix) Kubernetes Kubelet must have the SSL Certificate Authority set.
- V-242421 Medium (check, fix) Kubernetes Controller Manager must have the SSL Certificate Authority set.
- V-242422 Medium (check, fix) Kubernetes API Server must have a certificate for communication.
- V-242423 Medium (check, fix) Kubernetes etcd must enable client authentication to secure service.
- V-242424 Medium (check, fix) Kubernetes Kubelet must enable tls-private-key-file for client authentication to secure service.
- V-242425 Medium (check, fix) Kubernetes Kubelet must enable tls-cert-file for client authentication to secure service.
- V-242426 Medium (check, fix) Kubernetes etcd must enable client authentication to secure service.
- V-242427 Medium (check, fix) Kubernetes etcd must have a key file for secure communication.
- V-242428 Medium (check, fix) Kubernetes etcd must have a certificate for communication.
- V-242429 Medium (check, fix) Kubernetes etcd must have the SSL Certificate Authority set.
- V-242430 Medium (check, fix) Kubernetes etcd must have a certificate for communication.
- V-242431 Medium (check, fix) Kubernetes etcd must have a key file for secure communication.
- V-242432 Medium (check, fix) Kubernetes etcd must have peer-cert-file set for secure communication.
- V-242433 Medium (check, fix) Kubernetes etcd must have a peer-key-file set for secure communication.
- V-242434 High (check, fix) Kubernetes Kubelet must enable kernel protection.
- V-242435 High (check, fix) Kubernetes must prevent non-privileged users from executing privileged functions to include disabling, circumventing, or altering implemented security safeguards/countermeasures or the installation of patches and updates.
- V-242436 High (check, fix) The Kubernetes API server must have the ValidatingAdmissionWebhook enabled.
- V-242437 High (check, fix) Kubernetes must have a pod security policy set.
- V-242438 Medium (check, fix) Kubernetes API Server must configure timeouts to limit attack surface.
- V-242442 Medium (check, fix) Kubernetes must remove old components after updated versions have been installed.
- V-242443 Medium (check) Kubernetes must contain the latest updates as authorized by IAVMs, CTOs, DTMs, and STIGs.
- V-242444 Medium (description) The Kubernetes component manifests must be owned by root.
- V-242445 Medium (description) The Kubernetes component etcd must be owned by etcd.
- V-242446 Medium (description) The Kubernetes conf files must be owned by root.
- V-242447 Medium (description) The Kubernetes Kube Proxy must have file permissions set to 644 or more restrictive.
- V-242448 Medium (description) The Kubernetes Kube Proxy must be owned by root.
- V-242449 Medium (check) The Kubernetes Kubelet certificate authority file must have file permissions set to 644 or more restrictive.
- V-242450 Medium (description, check) The Kubernetes Kubelet certificate authority must be owned by root.
- V-242458 Medium (description) The Kubernetes API Server must have file permissions set to 644 or more restrictive.
- V-242459 Medium (description) The Kubernetes etcd must have file permissions set to 644 or more restrictive.
- V-242460 Medium (description) The Kubernetes admin.conf must have file permissions set to 644 or more restrictive.
- V-242461 Medium (check, fix) Kubernetes API Server audit logs must be enabled.
- V-242462 Medium (check, fix) The Kubernetes API Server must be set to audit log max size.
- V-242463 Medium (check, fix) The Kubernetes API Server must be set to audit log maximum backup.
- V-242464 Medium (check, fix) The Kubernetes API Server audit log retention must be set.
- V-242465 Medium (check, fix) The Kubernetes API Server audit log path must be set.
- V-242467 Medium (check) The Kubernetes PKI keys must have file permissions set to 600 or more restrictive.
- V-242468 Medium (check, fix) The Kubernetes API Server must prohibit communication using TLS version 1.0 and 1.1, and SSL 2.0 and 3.0.
- V-245541 Medium (check, fix) Kubernetes Kubelet must not disable timeouts.
- V-245542 High (check, fix) Kubernetes API Server must disable basic authentication to protect information in transit.
- V-245543 Medium (check, fix) Kubernetes API Server must disable token authentication to protect information in transit.
- V-245544 Medium (check, fix) Kubernetes endpoints must use approved organizational certificate and key pair to protect information in transit.
- RMF Control: AC-17
- Severity: M
- CCI: CCI-000068
- Version: CNTR-K8-000150
- Vuln IDs: V-242376
- Rule IDs: SV-242376r863952_rule
Checks: C-45651r863731_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i tls-min-version * If the setting "tls-min-version" is not configured in the Kubernetes Controller Manager manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Fix: F-45609r863732_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
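The check and fix above reduce to one comparison: "tls-min-version" must be present and must not be "VersionTLS10" or "VersionTLS11". A minimal sketch of that evaluation, run against hypothetical manifest lines rather than the live /etc/kubernetes/manifests directory:

```shell
# Evaluate a manifest's tls-min-version setting (sketch of the V-242376 logic).
check_tls_min_version() {
  # $1: text of a manifest file (hypothetical sample, not a real manifest)
  local val
  val=$(printf '%s\n' "$1" | grep -o 'tls-min-version=[A-Za-z0-9]*' | cut -d= -f2)
  case "$val" in
    ''|VersionTLS10|VersionTLS11) echo "finding" ;;   # absent or too low
    *) echo "compliant" ;;
  esac
}

check_tls_min_version '- --tls-min-version=VersionTLS12'   # → compliant
check_tls_min_version '- --tls-min-version=VersionTLS10'   # → finding
check_tls_min_version '- --secure-port=6443'               # → finding (flag absent)
```

The same logic applies verbatim to the Scheduler (V-242377) and API Server (V-242378) checks below.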
- RMF Control: AC-17
- Severity: M
- CCI: CCI-000068
- Version: CNTR-K8-000160
- Vuln IDs: V-242377
- Rule IDs: SV-242377r863953_rule
Checks: C-45652r863734_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i tls-min-version * If the setting "tls-min-version" is not configured in the Kubernetes Scheduler manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Fix: F-45610r863735_fix
Edit the Kubernetes Scheduler manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
- RMF Control: AC-17
- Severity: M
- CCI: CCI-000068
- Version: CNTR-K8-000170
- Vuln IDs: V-242378
- Rule IDs: SV-242378r863954_rule
Checks: C-45653r863737_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i tls-min-version * If the setting "tls-min-version" is not configured in the Kubernetes API Server manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Fix: F-45611r863738_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
- RMF Control: AC-17
- Severity: M
- CCI: CCI-000068
- Version: CNTR-K8-000180
- Vuln IDs: V-242379
- Rule IDs: SV-242379r863955_rule
Checks: C-45654r863740_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i auto-tls * If the setting "auto-tls" is not configured in the Kubernetes etcd manifest file or it is set to true, this is a finding.
Fix: F-45612r863741_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--auto-tls" to "false".
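An illustrative pass/fail evaluation of this rule, using sample etcd manifest lines (not a real cluster): the flag must be explicitly present and set to false.

```shell
# Sketch of the V-242379 auto-tls check against hypothetical manifest lines.
check_auto_tls() {
  local line
  line=$(printf '%s\n' "$1" | grep -o 'auto-tls=[a-z]*' | head -n1)
  if [ "$line" = "auto-tls=false" ]; then
    echo "compliant"
  else
    echo "finding"   # flag missing, or set to true
  fi
}

check_auto_tls '- --auto-tls=false'   # → compliant
check_auto_tls '- --auto-tls=true'    # → finding
check_auto_tls '- --name=etcd0'       # → finding
```

The peer-auto-tls check in V-242380 follows the same pattern for the "peer-auto-tls" flag.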
- RMF Control: AC-17
- Severity: M
- CCI: CCI-000068
- Version: CNTR-K8-000190
- Vuln IDs: V-242380
- Rule IDs: SV-242380r863956_rule
Checks: C-45655r863743_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -I peer-auto-tls * If the setting "peer-auto-tls" is not configured in the Kubernetes etcd manifest file or it is set to "true", this is a finding.
Fix: F-45613r863744_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "peer-auto-tls" to "false".
- RMF Control: AC-2
- Severity: H
- CCI: CCI-000015
- Version: CNTR-K8-000220
- Vuln IDs: V-242381
- Rule IDs: SV-242381r863957_rule
Checks: C-45656r863746_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i use-service-account-credentials * If the setting use-service-account-credentials is not configured in the Kubernetes Controller Manager manifest file or it is set to "false", this is a finding.
Fix: F-45614r863747_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "use-service-account-credentials" to "true".
- RMF Control: AC-3
- Severity: M
- CCI: CCI-000213
- Version: CNTR-K8-000270
- Vuln IDs: V-242382
- Rule IDs: SV-242382r863958_rule
Checks: C-45657r863749_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: "grep -i authorization-mode *" If the setting "authorization-mode" is not configured in the Kubernetes API Server manifest file or is not set to "Node,RBAC", this is a finding.
Fix: F-45615r863750_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--authorization-mode" to "Node,RBAC".
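The authorization-mode evaluation can be sketched the same way, against a hypothetical API Server manifest line; the value must be exactly "Node,RBAC".

```shell
# Sketch of the V-242382 authorization-mode check (sample input, not a cluster).
check_authz_mode() {
  local val
  val=$(printf '%s\n' "$1" | grep -o 'authorization-mode=[A-Za-z,]*' | cut -d= -f2)
  if [ "$val" = "Node,RBAC" ]; then
    echo "compliant"
  else
    echo "finding"   # unset or any other mode
  fi
}

check_authz_mode '- --authorization-mode=Node,RBAC'     # → compliant
check_authz_mode '- --authorization-mode=AlwaysAllow'   # → finding
```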
- RMF Control: CM-6
- Severity: H
- CCI: CCI-000366
- Version: CNTR-K8-000290
- Vuln IDs: V-242383
- Rule IDs: SV-242383r863959_rule
Checks: C-45658r863752_chk
To view the available namespaces, run the command:

kubectl get namespaces

The default namespaces to be validated are default, kube-public, and kube-node-lease (if it has been created).

For the default namespace, execute the commands:
kubectl config set-context --current --namespace=default
kubectl get all

For the kube-public namespace, execute the commands:
kubectl config set-context --current --namespace=kube-public
kubectl get all

For the kube-node-lease namespace, execute the commands:
kubectl config set-context --current --namespace=kube-node-lease
kubectl get all

The only valid return values are the kubernetes service (i.e., service/kubernetes) or nothing at all. If "kubectl get all" returns anything other than the kubernetes service (i.e., service/kubernetes), this is a finding.
Fix: F-45616r863753_fix
Move any user-managed resources from the default, kube-public, and kube-node-lease namespaces to user namespaces.
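The evaluation above amounts to scanning the "kubectl get all" output for any resource other than service/kubernetes. A minimal sketch of that scan, fed hypothetical command output instead of querying a live cluster:

```shell
# Sketch of the V-242383 namespace-hygiene evaluation on captured output.
check_namespace_output() {
  # $1: output of "kubectl get all" with headers stripped (sample data)
  printf '%s\n' "$1" | awk '
    NF && $1 != "service/kubernetes" { bad = 1 }
    END { print (bad ? "finding" : "compliant") }'
}

check_namespace_output 'service/kubernetes'   # → compliant
check_namespace_output 'pod/demo-abc'         # → finding (user resource present)
```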
- RMF Control: AC-3
- Severity: M
- CCI: CCI-000213
- Version: CNTR-K8-000300
- Vuln IDs: V-242384
- Rule IDs: SV-242384r863960_rule
Checks: C-45659r863755_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i bind-address * If the setting "bind-address" is not set to "127.0.0.1" or is not found in the Kubernetes Scheduler manifest file, this is a finding.
Fix: F-45617r863756_fix
Edit the Kubernetes Scheduler manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--bind-address" to "127.0.0.1".
- RMF Control: AC-3
- Severity: M
- CCI: CCI-000213
- Version: CNTR-K8-000310
- Vuln IDs: V-242385
- Rule IDs: SV-242385r863961_rule
Checks: C-45660r863758_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i bind-address * If the setting bind-address is not set to "127.0.0.1" or is not found in the Kubernetes Controller Manager manifest file, this is a finding.
Fix: F-45618r863759_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--bind-address" to "127.0.0.1".
- RMF Control: AC-3
- Severity: H
- CCI: CCI-000213
- Version: CNTR-K8-000320
- Vuln IDs: V-242386
- Rule IDs: SV-242386r863962_rule
Checks: C-45661r863761_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i insecure-port * If the setting insecure-port is not set to "0" or is not configured in the Kubernetes API server manifest file, this is a finding. Note: The --insecure-port flag has been deprecated and can only be set to "0". This flag will be removed in v1.24.
Fix: F-45619r863762_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument --insecure-port to "0".
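Sketched as a standalone evaluation on a hypothetical API Server manifest line, the rule is: the insecure-port flag must be present and equal to 0.

```shell
# Sketch of the V-242386 insecure-port check (sample inputs only).
check_insecure_port() {
  local val
  val=$(printf '%s\n' "$1" | grep -o 'insecure-port=[0-9]*' | cut -d= -f2)
  if [ "$val" = "0" ]; then
    echo "compliant"
  else
    echo "finding"   # unset, or a nonzero port
  fi
}

check_insecure_port '- --insecure-port=0'      # → compliant
check_insecure_port '- --insecure-port=8080'   # → finding
```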
- RMF Control: AC-3
- Severity: H
- CCI: CCI-000213
- Version: CNTR-K8-000330
- Vuln IDs: V-242387
- Rule IDs: SV-242387r863963_rule
Checks: C-45662r863764_chk
Run the following command on each Worker Node:

ps -ef | grep kubelet

Verify that the --read-only-port argument exists and is set to "0". If the --read-only-port argument exists and is not set to "0", this is a finding.

If the --read-only-port argument does not exist, check the Control Plane kubelet config file. On the Kubernetes Control Plane, run the command:

ps -ef | grep kubelet

Check the config file (path identified by --config). Verify there is a readOnlyPort entry in the config file and that it is set to "0". If the readOnlyPort entry exists and is not set to "0", this is a finding.

If the "--read-only-port=0" argument does not exist on the worker nodes and "readOnlyPort=0" does not exist on the Control Plane, this is a finding.
Fix: F-45620r863765_fix
Edit the Kubernetes Kubelet file in the --config directory on the Kubernetes Control Plane and set "readOnlyPort=0".

If using worker node arguments, edit the kubelet service file (identified in the --config directory) on each Worker Node: set the parameter in the KUBELET_SYSTEM_PODS_ARGS variable to "--read-only-port=0".

Restart the kubelet service using the following command:

service kubelet restart
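This is a two-level check: the command-line flag wins if present; otherwise the config file entry is consulted. A sketch of that precedence, applied to hypothetical kubelet inputs:

```shell
# Sketch of the V-242387 read-only-port evaluation (sample inputs, no cluster).
eval_read_only_port() {
  # $1: kubelet command line; $2: kubelet config file contents
  local flag cfg
  flag=$(printf '%s\n' "$1" | grep -o 'read-only-port=[0-9]*' | cut -d= -f2)
  if [ -n "$flag" ]; then
    # The command-line flag takes precedence when present.
    if [ "$flag" = "0" ]; then echo "compliant"; else echo "finding"; fi
    return
  fi
  cfg=$(printf '%s\n' "$2" | awk '$1 == "readOnlyPort:" { print $2 }')
  if [ "$cfg" = "0" ]; then echo "compliant"; else echo "finding"; fi
}

eval_read_only_port '/usr/bin/kubelet --read-only-port=0' ''     # → compliant
eval_read_only_port '/usr/bin/kubelet' 'readOnlyPort: 10255'     # → finding
eval_read_only_port '/usr/bin/kubelet' 'readOnlyPort: 0'         # → compliant
```

The anonymous-auth (V-242391) and authorization-mode (V-242392) kubelet checks below follow the same flag-then-config pattern.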
- RMF Control: AC-3
- Severity: H
- CCI: CCI-000213
- Version: CNTR-K8-000340
- Vuln IDs: V-242388
- Rule IDs: SV-242388r863964_rule
Checks: C-45663r863767_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i insecure-bind-address * If the setting insecure-bind-address is found and set to "localhost" in the Kubernetes API manifest file, this is a finding.
Fix: F-45621r863768_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Remove the value for the --insecure-bind-address setting.
- RMF Control: AC-3
- Severity: M
- CCI: CCI-000213
- Version: CNTR-K8-000350
- Vuln IDs: V-242389
- Rule IDs: SV-242389r863965_rule
Checks: C-45664r863770_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i secure-port * If the setting secure-port is set to "0" or is not configured in the Kubernetes API manifest file, this is a finding.
Fix: F-45622r863771_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument --secure-port to a value greater than "0".
- RMF Control: AC-3
- Severity: H
- CCI: CCI-000213
- Version: CNTR-K8-000360
- Vuln IDs: V-242390
- Rule IDs: SV-242390r863966_rule
Checks: C-45665r863773_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i anonymous-auth * If the setting anonymous-auth is set to "true" in the Kubernetes API Server manifest file, this is a finding.
Fix: F-45623r863774_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument --anonymous-auth to "false".
- RMF Control: AC-3
- Severity: H
- CCI: CCI-000213
- Version: CNTR-K8-000370
- Vuln IDs: V-242391
- Rule IDs: SV-242391r863967_rule
Checks: C-45666r863776_chk
Run the following command on each Worker Node:

ps -ef | grep kubelet

Verify that the --anonymous-auth argument exists and is set to "false". If the --anonymous-auth argument exists and is not set to "false", this is a finding.

If the --anonymous-auth argument does not exist, check the Control Plane kubelet config file. On the Kubernetes Control Plane, run the command:

ps -ef | grep kubelet

Check the config file (path identified by --config). Verify "authentication: anonymous: enabled=false". If this is not set to "false", this is a finding.

If the "--anonymous-auth=false" argument does not exist on the worker nodes or "authentication: anonymous: enabled=false" does not exist on the Control Plane, this is a finding.
Fix: F-45624r863777_fix
Edit the Kubernetes Kubelet file in the --config directory on the Kubernetes Control Plane and set "authentication: anonymous: enabled=false".

If using worker node arguments, edit the kubelet service file (identified in the --config directory) on each Worker Node: set the parameter in the KUBELET_SYSTEM_PODS_ARGS variable to "--anonymous-auth=false".

Restart the kubelet service using the following command:

service kubelet restart
- RMF Control: AC-3
- Severity: H
- CCI: CCI-000213
- Version: CNTR-K8-000380
- Vuln IDs: V-242392
- Rule IDs: SV-242392r863968_rule
Checks: C-45667r863779_chk
Run the following command on each Worker Node:

ps -ef | grep kubelet

Verify that the --authorization-mode argument exists and is set to "Webhook". If the --authorization-mode argument exists and is not set to "Webhook", this is a finding.

If the --authorization-mode argument does not exist, check the Control Plane kubelet config file. On the Kubernetes Control Plane, run the command:

ps -ef | grep kubelet

Check the config file (path identified by --config) and verify "authorization: mode". If this is not set to "Webhook", this is a finding.

If the "--authorization-mode=Webhook" argument does not exist on the worker nodes or "authorization: mode=Webhook" does not exist on the Control Plane, this is a finding.
Fix: F-45625r863780_fix
Edit the Kubernetes Kubelet file in the --config directory on the Kubernetes Control Plane and set "authorization: mode=Webhook".

If using worker node arguments, edit the kubelet service file (identified in the --config directory) on each Worker Node: set the parameter in the KUBELET_SYSTEM_PODS_ARGS variable to "--authorization-mode=Webhook".

Restart the kubelet service using the following command:

service kubelet restart
- RMF Control: AC-3
- Severity: M
- CCI: CCI-000213
- Version: CNTR-K8-000400
- Vuln IDs: V-242393
- Rule IDs: SV-242393r863969_rule
Checks: C-45668r712533_chk
Log in to each worker node. Verify that the sshd service is not running. To validate that the service is not running, run the command: systemctl status sshd If the service sshd is active (running), this is a finding. Note: If console access is not available, SSH access can be attempted. If the worker nodes cannot be reached, this requirement is "not a finding".
Fix: F-45626r863782_fix
To stop the sshd service, run the command: systemctl stop sshd Note: If access to the worker node is through an SSH session, it is important to realize there are two requirements for disabling and stopping the sshd service and they should be done during the same SSH session. Disabling the service must be performed first and then the service stopped to guarantee both settings can be made if the session is interrupted.
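Together with V-242394 below, the sshd requirements reduce to two state checks: the service must be neither active nor enabled. This sketch evaluates captured systemctl output rather than querying a live worker node:

```shell
# Sketch of the V-242393/V-242394 sshd evaluation on sample systemctl output.
evaluate_sshd() {
  # $1: output of "systemctl is-active sshd"
  # $2: output of "systemctl is-enabled sshd"
  if [ "$1" = "active" ] || [ "$2" = "enabled" ]; then
    echo "finding"
  else
    echo "compliant"
  fi
}

evaluate_sshd 'inactive' 'disabled'   # → compliant
evaluate_sshd 'active' 'disabled'     # → finding (service running)
evaluate_sshd 'inactive' 'enabled'    # → finding (service enabled)
```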
- RMF Control: AC-3
- Severity: M
- CCI: CCI-000213
- Version: CNTR-K8-000410
- Vuln IDs: V-242394
- Rule IDs: SV-242394r863970_rule
Checks: C-45669r712536_chk
Log in to each worker node. Verify that the sshd service is not enabled. To validate the service is not enabled, run the command: systemctl is-enabled sshd.service If the service sshd is enabled, this is a finding. Note: If console access is not available, SSH access can be attempted. If the worker nodes cannot be reached, this requirement is "not a finding".
Fix: F-45627r863784_fix
To disable the sshd service, run the command: chkconfig sshd off Note: If access to the worker node is through an SSH session, it is important to realize there are two requirements for disabling and stopping the sshd service that must be done during the same SSH session. Disabling the service must be performed first and then the service stopped to guarantee both settings can be made if the session is interrupted.
- RMF Control: AC-3
- Severity: M
- CCI: CCI-000213
- Version: CNTR-K8-000420
- Vuln IDs: V-242395
- Rule IDs: SV-242395r863971_rule
Checks: C-45670r863786_chk
From the Control Plane, run the command: kubectl get pods --all-namespaces -l k8s-app=kubernetes-dashboard If any resources are returned, this is a finding.
Fix: F-45628r712540_fix
Delete the Kubernetes dashboard deployment with the following command: kubectl delete deployment kubernetes-dashboard --namespace=kube-system
- RMF Control: AC-3
- Severity: M
- CCI: CCI-000213
- Version: CNTR-K8-000430
- Vuln IDs: V-242396
- Rule IDs: SV-242396r863972_rule
Checks: C-45671r863788_chk
From the Control Plane and each Worker node, check the version of kubectl by executing the command: kubectl version --client If the Control Plane or any Worker nodes are not using kubectl version 1.12.9 or newer, this is a finding.
Fix: F-45629r863789_fix
Upgrade the Control Plane and Worker nodes to the latest version of kubectl.
- RMF Control: AC-3
- Severity: H
- CCI: CCI-000213
- Version: CNTR-K8-000440
- Vuln IDs: V-242397
- Rule IDs: SV-242397r863973_rule
Checks: C-45672r863791_chk
On the Kubernetes Control Plane and Worker nodes, run the command:

ps -ef | grep kubelet

Check the config file (path identified by --config). Change to the directory identified by --config (example: /etc/sysconfig/) and run the command:

grep -i staticPodPath kubelet

If any of the nodes return a value for staticPodPath, this is a finding.
Fix: F-45630r863792_fix
Edit the kubelet file on each node under the --config directory and remove the staticPodPath setting. Reset Kubelet service using the following command: service kubelet restart
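Unlike the flag checks above, this rule flags any staticPodPath value at all. A minimal sketch against hypothetical kubelet config fragments:

```shell
# Sketch of the V-242397 check: any staticPodPath setting is a finding.
check_static_pod_path() {
  # $1: kubelet config file contents (hypothetical sample)
  if printf '%s\n' "$1" | grep -qi 'staticPodPath'; then
    echo "finding"
  else
    echo "compliant"
  fi
}

check_static_pod_path 'staticPodPath: /etc/kubernetes/manifests'   # → finding
check_static_pod_path 'readOnlyPort: 0'                            # → compliant
```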
- RMF Control: AC-3
- Severity: M
- CCI: CCI-000213
- Version: CNTR-K8-000450
- Vuln IDs: V-242398
- Rule IDs: SV-242398r863974_rule
Checks: C-45673r863794_chk
On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command:

grep -i feature-gates *

Review the feature-gates setting, if one is returned. If the feature-gates setting is available and contains the "DynamicAuditing" flag set to "true", this is a finding.

Change to the directory /etc/sysconfig on the Control Plane and each Worker Node and execute the command:

grep -i feature-gates kubelet

Review every feature-gates setting that is returned. If any feature-gates setting is available and contains the "DynamicAuditing" flag set to "true", this is a finding.
Fix: F-45631r717018_fix
Edit any manifest files or kubelet config files that contain the feature-gates setting with DynamicAuditing set to "true". Set the flag to "false" or remove the "DynamicAuditing" setting completely. Restart the kubelet service if the kubelet config file is changed.
- RMF Control: AC-3
- Severity: M
- CCI: CCI-000213
- Version: CNTR-K8-000460
- Vuln IDs: V-242399
- Rule IDs: SV-242399r863975_rule
Checks: C-45674r863796_chk
On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command:

grep -i feature-gates *

Review the feature-gates setting if one is returned. If the feature-gates setting does not exist, does not contain the "DynamicKubeletConfig" flag, or the "DynamicKubeletConfig" flag is set to "true", this is a finding.

Change to the directory /etc/sysconfig on the Control Plane and each Worker node and execute the command:

grep -i feature-gates kubelet

Review every feature-gates setting if one is returned. If the feature-gates setting does not exist, does not contain the "DynamicKubeletConfig" flag, or the "DynamicKubeletConfig" flag is set to "true", this is a finding.
Fix: F-45632r863797_fix
Edit any manifest file or kubelet config file that does not contain a feature-gates setting or has DynamicKubeletConfig set to "true". An omission of DynamicKubeletConfig within the feature-gates defaults to true. Set DynamicKubeletConfig to "false". Restart the kubelet service if the kubelet config file is changed.
- RMF Control: AC-3
- Severity: M
- CCI: CCI-000213
- Version: CNTR-K8-000470
- Vuln IDs: V-242400
- Rule IDs: SV-242400r863976_rule
Checks: C-45675r863799_chk
On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the feature-gates setting, if one is returned. If the feature-gates setting is available and contains the AllAlpha flag set to "true", this is a finding.
Fix: F-45633r712555_fix
Edit any manifest files that contain the feature-gates setting with AllAlpha set to "true". Set the flag to "false" or remove the AllAlpha setting completely. (AllAlpha- default=false)
- RMF Control: AU-14
- Severity: M
- CCI: CCI-001464
- Version: CNTR-K8-000600
- Vuln IDs: V-242401
- Rule IDs: SV-242401r863977_rule
Checks: C-45676r863801_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i audit-policy-file * If the audit-policy-file is not set, this is a finding.
Fix: F-45634r863802_fix
Edit the Kubernetes API Server manifest and set "--audit-policy-file" to the audit policy file. Note: If the API server is running as a Pod, then the manifest will also need to be updated to mount the host system filesystem where the audit policy file resides.
- RMF Control: AU-14
- Severity: M
- CCI: CCI-001464
- Version: CNTR-K8-000610
- Vuln IDs: V-242402
- Rule IDs: SV-242402r863978_rule
Checks: C-45677r863804_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-path * If the audit-log-path is not set, this is a finding.
Fix: F-45635r863805_fix
Edit the Kubernetes API Server manifest and set "--audit-log-path" to a secure location for the audit logs to be written. Note: If the API server is running as a Pod, then the manifest will also need to be updated to mount the host system filesystem where the audit log file is to be written.
- RMF Control: AC-2
- Severity: M
- CCI: CCI-000018
- Version: CNTR-K8-000700
- Vuln IDs: V-242403
- Rule IDs: SV-242403r863979_rule
Checks: C-45678r863807_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command:

grep -i audit-policy-file

If the audit-policy-file is not set, this is a finding. The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/vX (Where X is the latest apiVersion)
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.
Fix: F-45636r712564_fix
Edit the Kubernetes API Server audit policy and set it to look like the following:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/vX (Where X is the latest apiVersion)
kind: Policy
rules:
- level: RequestResponse
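A minimal sketch of applying this fix: write the required policy to a temporary file (the real path is whatever --audit-policy-file points to; audit.k8s.io/v1 is the current stable apiVersion and is an assumption here, per the "latest apiVersion" note above).

```shell
# Write the required audit policy (V-242403) to a hypothetical location.
policy_file=$(mktemp)
cat > "$policy_file" <<'EOF'
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
EOF

# Confirm the required rule is present before pointing the API server at it.
grep -q 'level: RequestResponse' "$policy_file" && echo "policy written"
```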
- RMF Control: CM-5
- Severity: M
- CCI: CCI-001499
- Version: CNTR-K8-000850
- Vuln IDs: V-242404
- Rule IDs: SV-242404r863980_rule
Checks: C-45679r863809_chk
On the Kubernetes Control Plane and Worker nodes, run the command: ps -ef | grep kubelet Check the config file (path identified by --config). Change to the directory identified by --config (for example, /etc/sysconfig/) and run the command: grep -i hostname-override kubelet If any of the nodes have the setting "hostname-override" present, this is a finding.
Fix: F-45637r863810_fix
Edit the kubelet file on each node under the --config directory and remove the hostname-override setting. Restart the kubelet service using the following command: service kubelet restart
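As an illustration of the check above, against a hypothetical kubelet config file (path and contents are placeholders, not the STIG's required values):

```shell
mkdir -p /tmp/stig-242404-demo
# Hypothetical kubelet config; a compliant file has no hostname-override entry.
cat > /tmp/stig-242404-demo/kubelet <<'EOF'
KUBELET_ARGS="--config=/etc/kubernetes/kubelet-config.yaml"
EOF
# The STIG check: any match here would be a finding.
grep -i hostname-override /tmp/stig-242404-demo/kubelet || echo "no hostname-override set"
```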
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000860
- Vuln IDs
-
- V-242405
- Rule IDs
-
- SV-242405r863981_rule
Checks: C-45680r863812_chk
On the Control Plane, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must be owned by root:root. If any manifest file is not owned by root:root, this is a finding.
Fix: F-45638r863813_fix
On the Control Plane, change to the /etc/kubernetes/manifests directory. Run the command: chown root:root * To verify the change took place, run the command: ls -l * All the manifest files should be owned by root:root.
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000880
- Vuln IDs
-
- V-242406
- Rule IDs
-
- SV-242406r863982_rule
Checks: C-45681r863815_chk
On the Kubernetes Control Plane and Worker nodes, run the command: ps -ef | grep kubelet Check the config file (path identified by --config). Change to the directory identified by --config (for example, /etc/sysconfig/) and run the command: ls -l kubelet Each kubelet configuration file must be owned by root:root. If any kubelet configuration file is not owned by root:root, this is a finding.
Fix: F-45639r863816_fix
On the Control Plane and Worker nodes, change to the --config directory. Run the command: chown root:root kubelet To verify the change took place, run the command: ls -l kubelet The kubelet file should now be owned by root:root.
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000890
- Vuln IDs
-
- V-242407
- Rule IDs
-
- SV-242407r863983_rule
Checks: C-45682r863818_chk
On the Control Plane and Worker nodes, change to the directory identified by --config. Run the command: ls -l kubelet Each kubelet configuration file must have permissions of "644" or more restrictive. If any kubelet configuration file has permissions less restrictive than "644", this is a finding.
Fix: F-45640r863819_fix
On the Control Plane and Worker nodes, change to the directory identified by --config. Run the command: chmod 644 kubelet To verify the change took place, run the command: ls -l kubelet The kubelet file should now have permissions of "644".
- RMF Control
- CM-5
- Severity
- M
- CCI
- CCI-001499
- Version
- CNTR-K8-000900
- Vuln IDs
-
- V-242408
- Rule IDs
-
- SV-242408r863984_rule
Checks: C-45683r863821_chk
On the Control Plane, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must have permissions of "644" or more restrictive. If any manifest file has permissions less restrictive than "644", this is a finding.
Fix: F-45641r863822_fix
On the Control Plane, change to the /etc/kubernetes/manifests directory. Run the command: chmod 644 * To verify the change took place, run the command: ls -l * All the manifest files should now have permissions of "644".
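A self-contained sketch of the fix-and-verify sequence on throwaway files (the directory and file names are illustrative; on a real Control Plane the target is the manifests directory):

```shell
mkdir -p /tmp/stig-242408-demo
touch /tmp/stig-242408-demo/kube-apiserver.manifest /tmp/stig-242408-demo/etcd.manifest
chmod 600 /tmp/stig-242408-demo/*   # start from an arbitrary mode
chmod 644 /tmp/stig-242408-demo/*   # the STIG fix
# Verify: each line should show mode 644.
stat -c '%a %n' /tmp/stig-242408-demo/*
```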
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000381
- Version
- CNTR-K8-000910
- Vuln IDs
-
- V-242409
- Rule IDs
-
- SV-242409r863985_rule
Checks: C-45684r863824_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i profiling * If the setting "profiling" is not configured in the Kubernetes Controller Manager manifest file or it is set to "true", this is a finding.
Fix: F-45642r863825_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--profiling" to "false".
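A sketch of a compliant manifest and the corresponding check, using a throwaway file (path and surrounding contents are illustrative):

```shell
mkdir -p /tmp/stig-242409-demo
cat > /tmp/stig-242409-demo/kube-controller-manager.manifest <<'EOF'
spec:
  containers:
  - command:
    - kube-controller-manager
    - --profiling=false
EOF
# The STIG check: "profiling" absent, or set to true, would be a finding.
grep -i profiling /tmp/stig-242409-demo/*
```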
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000920
- Vuln IDs
-
- V-242410
- Rule IDs
-
- SV-242410r863986_rule
Checks: C-45685r863827_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the commands:
grep -i secure-port kube-apiserver.manifest
grep -i etcd-servers kube-apiserver.manifest
Edit the manifest file (vim <manifest name>) and review the livenessProbe httpGet port and the containerPort and hostPort entries under ports.
Run the command: kubectl describe services --all-namespaces
Search the labels for any apiserver namespaces and note the ports. Any manifest and namespace PPS or services configuration not in compliance with the PPSM CAL is a finding.
Review the information system's documentation and interview the team to gain an understanding of the API Server architecture and determine the applicable PPS. If any ports, protocols, or services in the system documentation are not in compliance with the PPSM CAL, this is a finding. Any PPS not set in the system documentation is a finding.
Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/
Verify the API Server network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-45643r712585_fix
Amend any system documentation requiring revision. Update Kubernetes API Server manifest and namespace PPS configuration to comply with PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000930
- Vuln IDs
-
- V-242411
- Rule IDs
-
- SV-242411r863987_rule
Checks: C-45686r863829_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the commands:
grep -i insecure-port kube-scheduler.manifest
grep -i secure-port kube-scheduler.manifest
Edit the manifest file (vim <manifest name>) and review the livenessProbe httpGet port and the containerPort and hostPort entries under ports.
Run the command: kubectl describe services --all-namespaces
Search the labels for any scheduler namespaces and note the ports. Any manifest and namespace PPS configuration not in compliance with the PPSM CAL is a finding.
Review the information system's documentation and interview the team to gain an understanding of the Scheduler architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding.
Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/
Verify the Scheduler network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-45644r712588_fix
Amend any system documentation requiring revision. Update Kubernetes Scheduler manifest and namespace PPS configuration to comply with the PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000940
- Vuln IDs
-
- V-242412
- Rule IDs
-
- SV-242412r863988_rule
Checks: C-45687r863831_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command:
grep -i secure-port kube-controller-manager.manifest
Edit the manifest file (vim <manifest name>) and review the livenessProbe httpGet port and the containerPort and hostPort entries under ports.
Run the command: kubectl describe services --all-namespaces
Search the labels for any controller namespaces. Any manifest and namespace PPS or services configuration not in compliance with the PPSM CAL is a finding.
Review the information system's documentation and interview the team to gain an understanding of the Controller architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding.
Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/
Verify the Controller network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-45645r712591_fix
Amend any system documentation requiring revision. Update Kubernetes Controller manifest and namespace PPS configuration to comply with PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000950
- Vuln IDs
-
- V-242413
- Rule IDs
-
- SV-242413r863989_rule
Checks: C-45688r863833_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command:
grep -i etcd-servers kube-apiserver.manifest
Edit the etcd-main.manifest file (vim <manifest name>) and review the livenessProbe httpGet port and the containerPort and hostPort entries under ports.
Run the command: kubectl describe services --all-namespaces
Search the labels for any apiserver namespaces and note the ports. Any manifest and namespace PPS configuration not in compliance with the PPSM CAL is a finding.
Review the information system's documentation and interview the team to gain an understanding of the etcd architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding.
Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/
Verify the etcd network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Fix: F-45646r712594_fix
Amend any system documentation requiring revision. Update Kubernetes etcd manifest and namespace PPS configuration to comply with PPSM CAL.
- RMF Control
- CM-7
- Severity
- M
- CCI
- CCI-000382
- Version
- CNTR-K8-000960
- Vuln IDs
-
- V-242414
- Rule IDs
-
- SV-242414r863990_rule
Checks: C-45689r863835_chk
On the Control Plane, run the command: kubectl get pods --all-namespaces The list returned is all pods running within the Kubernetes cluster. For those pods running within the user namespaces (System namespaces are kube-system, kube-node-lease and kube-public), run the command: kubectl get pod podname -o yaml | grep -i port Note: In the above command, "podname" is the name of the pod. For the command to work correctly, the current context must be changed to the namespace for the pod. The command to do this is: kubectl config set-context --current --namespace=namespace-name (Note: "namespace-name" is the name of the namespace.) Review the ports that are returned for the pod. If any host-privileged ports are returned for any of the pods, this is a finding.
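Host ports below 1024 are the privileged range. The decision logic can be sketched on a hypothetical list of hostPort values standing in for the kubectl output above:

```shell
# Hypothetical hostPort values as returned by the kubectl command above.
ports="443
8080
31000"
# Ports below 1024 are host-privileged; any such port on a user pod is a finding.
echo "$ports" | awk '$1 < 1024 { print "FINDING: privileged hostPort " $1 }'
# prints: FINDING: privileged hostPort 443
```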
Fix: F-45647r717032_fix
For any of the pods that are using host-privileged ports, reconfigure the pod to use a service to map a host non-privileged port to the pod port or reconfigure the image to use non-privileged ports.
- RMF Control
- IA-5
- Severity
- H
- CCI
- CCI-000196
- Version
- CNTR-K8-001160
- Vuln IDs
-
- V-242415
- Rule IDs
-
- SV-242415r863991_rule
Checks: C-45690r863838_chk
On the Kubernetes Control Plane, run the following command: kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A If any of the values returned reference environment variables, this is a finding.
Fix: F-45648r712600_fix
Any secrets stored as environment variables must be moved to the secret files with the proper protections and enforcements or placed within a password vault.
- RMF Control
- SC-2
- Severity
- M
- CCI
- CCI-001082
- Version
- CNTR-K8-001360
- Vuln IDs
-
- V-242417
- Rule IDs
-
- SV-242417r863992_rule
Checks: C-45692r863840_chk
On the Control Plane, run the command: kubectl get pods --all-namespaces Review the namespaces and pods that are returned. Kubernetes system namespaces are kube-node-lease, kube-public, and kube-system. If any user pods are present in the Kubernetes system namespaces, this is a finding.
Fix: F-45650r712606_fix
Move any user pods that are present in the Kubernetes system namespaces to user specific namespaces.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001400
- Vuln IDs
-
- V-242418
- Rule IDs
-
- SV-242418r863993_rule
Checks: C-45693r863842_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i tls-cipher-suites * If the setting tls-cipher-suites is not set in the Kubernetes API server manifest file, contains no value, or does not contain TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, this is a finding.
Fix: F-45651r863843_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of tls-cipher-suites to: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
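A self-contained sketch that verifies each required suite individually against a throwaway manifest (the cipher list is the one the rule mandates; the file path is illustrative):

```shell
mkdir -p /tmp/stig-242418-demo
cat > /tmp/stig-242418-demo/kube-apiserver.manifest <<'EOF'
    - --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
EOF
# Verify every required suite is present; a missing suite would be a finding.
for suite in TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \
             TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 \
             TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 \
             TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384; do
  grep -q "$suite" /tmp/stig-242418-demo/kube-apiserver.manifest || echo "FINDING: missing $suite"
done
```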
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001410
- Vuln IDs
-
- V-242419
- Rule IDs
-
- SV-242419r863994_rule
Checks: C-45694r863845_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i client-ca-file * If the setting client-ca-file is not set in the Kubernetes API server manifest file or contains no value, this is a finding.
Fix: F-45652r863846_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of client-ca-file to the path of the Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001420
- Vuln IDs
-
- V-242420
- Rule IDs
-
- SV-242420r863995_rule
Checks: C-45695r863848_chk
On the Kubernetes Control Plane, run the command: ps -ef | grep kubelet Check the config file (path identified by --config). Change to the directory identified by --config (for example, /etc/sysconfig/) and run the command: grep -i client-ca-file kubelet If the setting client-ca-file is not set in the Kubernetes Kubelet configuration file or contains no value, this is a finding.
Fix: F-45653r863849_fix
Edit the Kubernetes Kubelet file in the --config directory on the Kubernetes Control Plane. Set the value of client-ca-file to the path of the Approved Organizational Certificate. Restart the kubelet service using the following command: service kubelet restart
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001430
- Vuln IDs
-
- V-242421
- Rule IDs
-
- SV-242421r863996_rule
Checks: C-45696r863851_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i root-ca-file * If the setting root-ca-file is not set in the Kubernetes Controller Manager manifest file or contains no value, this is a finding.
Fix: F-45654r863852_fix
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of root-ca-file to the path of the Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001440
- Vuln IDs
-
- V-242422
- Rule IDs
-
- SV-242422r863997_rule
Checks: C-45697r863854_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the commands: grep -i tls-cert-file * grep -i tls-private-key-file * If the settings tls-cert-file and tls-private-key-file are not set in the Kubernetes API server manifest file or contain no value, this is a finding.
Fix: F-45655r863855_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set tls-cert-file to the path of the Approved Organizational Certificate and tls-private-key-file to the path of its corresponding private key.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001450
- Vuln IDs
-
- V-242423
- Rule IDs
-
- SV-242423r863998_rule
Checks: C-45698r863857_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i client-cert-auth * If the setting client-cert-auth is not configured in the Kubernetes etcd manifest file or set to "false", this is a finding.
Fix: F-45656r863858_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--client-cert-auth" to "true" for the etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001460
- Vuln IDs
-
- V-242424
- Rule IDs
-
- SV-242424r863999_rule
Checks: C-45699r863860_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Control Plane. Run the command: grep -i tls-private-key-file kubelet If the setting "tls-private-key-file" is not configured in the Kubernetes Kubelet, this is a finding.
Fix: F-45657r863861_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Control Plane. Set the argument tls-private-key-file to the private key of an Approved Organization Certificate. Restart the kubelet service using the following command: service kubelet restart
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001470
- Vuln IDs
-
- V-242425
- Rule IDs
-
- SV-242425r864000_rule
Checks: C-45700r863863_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Control Plane. Run the command: grep -i tls-cert-file kubelet If the setting "tls-cert-file" is not configured in the Kubernetes Kubelet, this is a finding.
Fix: F-45658r863864_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Control Plane. Set the argument "tls-cert-file" to an Approved Organization Certificate. Restart the kubelet service using the following command: service kubelet restart
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001480
- Vuln IDs
-
- V-242426
- Rule IDs
-
- SV-242426r864001_rule
Checks: C-45701r863866_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i peer-client-cert-auth * If the setting peer-client-cert-auth is not configured in the Kubernetes etcd manifest file or set to "false", this is a finding.
Fix: F-45659r863867_fix
Edit the Kubernetes etcd file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--peer-client-cert-auth" to "true" for the etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001490
- Vuln IDs
-
- V-242427
- Rule IDs
-
- SV-242427r864002_rule
Checks: C-45702r863869_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i key-file * If the setting "key-file" is not configured in the etcd manifest file, this is a finding.
Fix: F-45660r863870_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--key-file" to the private key of the Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001500
- Vuln IDs
-
- V-242428
- Rule IDs
-
- SV-242428r864003_rule
Checks: C-45703r863872_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i cert-file * If the setting "cert-file" is not configured in the Kubernetes etcd manifest file, this is a finding.
Fix: F-45661r863873_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--cert-file" to the Approved Organizational Certificate.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001510
- Vuln IDs
-
- V-242429
- Rule IDs
-
- SV-242429r864004_rule
Checks: C-45704r863875_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i etcd-cafile * If the setting "etcd-cafile" is not configured in the Kubernetes kube-apiserver manifest file, this is a finding.
Fix: F-45662r863876_fix
Edit the Kubernetes kube-apiserver manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--etcd-cafile" to the Certificate Authority for etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001520
- Vuln IDs
-
- V-242430
- Rule IDs
-
- SV-242430r864005_rule
Checks: C-45705r863878_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i etcd-certfile * If the setting "etcd-certfile" is not set in the Kubernetes kube-apiserver manifest file, this is a finding.
Fix: F-45663r863879_fix
Edit the Kubernetes kube-apiserver manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--etcd-certfile" to the certificate to be used for communication with etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001530
- Vuln IDs
-
- V-242431
- Rule IDs
-
- SV-242431r864006_rule
Checks: C-45706r863881_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i etcd-keyfile * If the setting "etcd-keyfile" is not configured in the Kubernetes kube-apiserver manifest file, this is a finding.
Fix: F-45664r863882_fix
Edit the Kubernetes kube-apiserver manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--etcd-keyfile" to the private key to be used for communication with etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001540
- Vuln IDs
-
- V-242432
- Rule IDs
-
- SV-242432r864007_rule
Checks: C-45707r863884_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i peer-cert-file * If the setting "peer-cert-file" is not configured in the Kubernetes etcd manifest file, this is a finding.
Fix: F-45665r863885_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--peer-cert-file" to the certificate to be used for communication with etcd.
- RMF Control
- SC-23
- Severity
- M
- CCI
- CCI-001184
- Version
- CNTR-K8-001550
- Vuln IDs
-
- V-242433
- Rule IDs
-
- SV-242433r864008_rule
Checks: C-45708r863887_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i peer-key-file * If the setting "peer-key-file" is not set in the Kubernetes etcd manifest file, this is a finding.
Fix: F-45666r863888_fix
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--peer-key-file" to the private key to be used for peer communication with etcd.
- RMF Control
- SC-3
- Severity
- H
- CCI
- CCI-001084
- Version
- CNTR-K8-001620
- Vuln IDs
-
- V-242434
- Rule IDs
-
- SV-242434r864009_rule
Checks: C-45709r863890_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Control Plane. Run the command: grep -i protect-kernel-defaults kubelet If the setting "protect-kernel-defaults" is set to false or not set in the Kubernetes Kubelet, this is a finding.
Fix: F-45667r863891_fix
Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Control Plane. Set the argument "--protect-kernel-defaults" to "true". Restart the kubelet service using the following command: service kubelet restart
- RMF Control
- AC-3
- Severity
- H
- CCI
- CCI-000213
- Version
- CNTR-K8-001990
- Vuln IDs
-
- V-242435
- Rule IDs
-
- SV-242435r864010_rule
Checks: C-45710r863893_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i authorization-mode * If the setting authorization-mode is set to "AlwaysAllow" in the Kubernetes API Server manifest file or is not configured, this is a finding.
Fix: F-45668r863894_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--authorization-mode" to any valid authorization mode other than AlwaysAllow.
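A sketch of the check against a throwaway manifest, using Node,RBAC (the mode this STIG requires elsewhere) as the example value; the file path is illustrative:

```shell
mkdir -p /tmp/stig-242435-demo
cat > /tmp/stig-242435-demo/kube-apiserver.manifest <<'EOF'
    - --authorization-mode=Node,RBAC
EOF
# The STIG check: no match at all, or a value of AlwaysAllow, would be a finding.
grep -i authorization-mode /tmp/stig-242435-demo/* | grep -q AlwaysAllow \
  && echo "FINDING: AlwaysAllow in use" || echo "not a finding"
```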
- RMF Control
- AC-16
- Severity
- H
- CCI
- CCI-002263
- Version
- CNTR-K8-002000
- Vuln IDs
-
- V-242436
- Rule IDs
-
- SV-242436r864011_rule
Checks: C-45711r863896_chk
Prior to version 1.21, Pod Security Policies (PSPs) were used to enforce security policies. PSPs are deprecated and will be removed in version 1.25. Migrate from PSP to PSA: https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/ Pre-version 1.25 check: Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i ValidatingAdmissionWebhook * If a line is not returned that includes enable-admission-plugins and ValidatingAdmissionWebhook, this is a finding.
Fix: F-45669r863897_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--enable-admission-plugins" to include "ValidatingAdmissionWebhook". Each enabled plugin is separated by commas. Note: It is best to implement policies first and then enable the webhook, otherwise a denial of service may occur.
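A sketch of a compliant flag line and the paired check, on a throwaway file (the path and the other plugin in the comma-separated list, NodeRestriction, are illustrative):

```shell
mkdir -p /tmp/stig-242436-demo
cat > /tmp/stig-242436-demo/kube-apiserver.manifest <<'EOF'
    - --enable-admission-plugins=NodeRestriction,ValidatingAdmissionWebhook
EOF
# The STIG check: a finding unless a line pairs enable-admission-plugins
# with ValidatingAdmissionWebhook.
grep -i ValidatingAdmissionWebhook /tmp/stig-242436-demo/* | grep enable-admission-plugins
```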
- RMF Control
- AC-16
- Severity
- H
- CCI
- CCI-002263
- Version
- CNTR-K8-002010
- Vuln IDs
-
- V-242437
- Rule IDs
-
- SV-242437r864012_rule
Checks: C-45712r863899_chk
Prior to version 1.21, Pod Security Policies (PSPs) were used to enforce security policies. PSPs are deprecated and will be removed in version 1.25. Migrate from PSP to PSA: https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/ Pre-version 1.25 check: On the Control Plane, run the command: kubectl get podsecuritypolicy If there is no pod security policy configured, this is a finding. For any pod security policies listed, edit the policy with the command: kubectl edit podsecuritypolicy policyname (Note: "policyname" is the name of the policy.) Review the runAsUser, supplementalGroups, and fsGroup sections of the policy. If any of these sections are missing, this is a finding. If the rule within the runAsUser section is not set to "MustRunAsNonRoot", this is a finding. If the ranges within the supplementalGroups section have min set to "0" or min is missing, this is a finding. If the ranges within the fsGroup section have min set to "0" or min is missing, this is a finding.
Fix: F-45670r863900_fix
From the Control Plane, save the following policy to a file called restricted.yml.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  # This is redundant with non-root + disallow privilege escalation,
  # but we can provide it for defense in depth.
  requiredDropCapabilities:
    - ALL
  # Allow core volume types.
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that persistentVolumes set up by the cluster admin are safe to use.
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
To implement the policy, run the command: kubectl create -f restricted.yml
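For the post-1.25 migration path mentioned in the check, PSP enforcement is replaced by Pod Security Admission namespace labels. A minimal sketch, assuming a hypothetical namespace name and the "restricted" level (both illustrative, not mandated by this rule):

```shell
mkdir -p /tmp/stig-242437-demo
# Hypothetical namespace manifest; PSA enforces the labeled level on all pods in it.
cat > /tmp/stig-242437-demo/namespace.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: restricted-apps
  labels:
    pod-security.kubernetes.io/enforce: restricted
EOF
# Would be applied on a real cluster with: kubectl apply -f namespace.yaml
grep pod-security /tmp/stig-242437-demo/namespace.yaml
```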
- RMF Control
- SC-7
- Severity
- M
- CCI
- CCI-002415
- Version
- CNTR-K8-002600
- Vuln IDs
-
- V-242438
- Rule IDs
-
- SV-242438r864013_rule
Checks: C-45713r863902_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i request-timeout * If the Kubernetes API Server manifest file does not exist, this is a finding. If the setting request-timeout is set to "0" in the Kubernetes API Server manifest file or is not configured, this is a finding.
Fix: F-45671r863903_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of request-timeout greater than "0".
- RMF Control
- SI-4
- Severity
- M
- CCI
- CCI-002647
- Version
- CNTR-K8-002700
- Vuln IDs
-
- V-242442
- Rule IDs
-
- SV-242442r878098_rule
Checks: C-45717r863905_chk
To view all pods and the images used to create the pods, from the Control Plane, run the following command: kubectl get pods --all-namespaces -o jsonpath="{..image}" | \ tr -s '[[:space:]]' '\n' | \ sort | \ uniq -c Review the images used for pods running within Kubernetes. If there are multiple versions of the same image, this is a finding.
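The multiple-versions test in the check reduces to comparing repository names. A self-contained sketch with sample image strings standing in for the kubectl pipeline's output (names and tags are illustrative):

```shell
# Sample image list as the kubectl pipeline above might emit it.
# Stripping the tag and looking for repeated repositories reveals
# the same image deployed at multiple versions (a finding).
printf '%s\n' nginx:1.21.6 nginx:1.25.3 redis:7.0.5 |
  cut -d: -f1 | sort | uniq -d
# prints: nginx
```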
Fix: F-45675r863906_fix
Remove any old pods that are using older images. On the Control Plane, run the command: kubectl delete pod podname (Note: "podname" is the name of the pod to delete.)
- RMF Control
- SI-3
- Severity
- M
- CCI
- CCI-002635
- Version
- CNTR-K8-002720
- Vuln IDs
-
- V-242443
- Rule IDs
-
- SV-242443r864015_rule
Checks: C-45718r863908_chk
Authenticate on the Kubernetes Control Plane. Run the command: kubectl version --short If the versions returned do not comply with the Kubernetes version skew policy, this is a finding. Note: The Kubernetes version skew policy can be found at: https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions
Fix: F-45676r712684_fix
Upgrade Kubernetes to the supported version. Institute and adhere to the policies and procedures to ensure that patches are consistently applied within the time allowed.
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003110
- Vuln IDs
-
- V-242444
- Rule IDs
-
- SV-242444r864016_rule
Checks: C-45719r712686_chk
Review the ownership of the Kubernetes manifest files by using the command: stat -c %U:%G /etc/kubernetes/manifests/* | grep -v root:root If the command returns any files not owned by root:root, this is a finding.
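A sketch of the ownership review on a throwaway file (the path is illustrative; on a real node the target is /etc/kubernetes/manifests/*, and ownership of this demo file depends on who runs it):

```shell
mkdir -p /tmp/stig-242444-demo
touch /tmp/stig-242444-demo/kube-apiserver.manifest
# List owner:group per file; piped through `grep -v root:root`,
# any remaining output would be a finding.
stat -c '%U:%G %n' /tmp/stig-242444-demo/*
```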
Fix: F-45677r712687_fix
Change the ownership of the manifest files to root:root by executing the command: chown root:root /etc/kubernetes/manifests/*
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003120
- Vuln IDs
-
- V-242445
- Rule IDs
-
- SV-242445r864017_rule
Checks: C-45720r712689_chk
Review the ownership of the Kubernetes etcd files by using the command: stat -c %U:%G /var/lib/etcd/* | grep -v etcd:etcd If the command returns any files not owned by etcd:etcd, this is a finding.
Fix: F-45678r712690_fix
Change the ownership of the manifest files to etcd:etcd by executing the command: chown etcd:etcd /var/lib/etcd/*
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003130
- Vuln IDs
-
- V-242446
- Rule IDs
-
- SV-242446r864018_rule
Checks: C-45721r712692_chk
Review the ownership of the Kubernetes conf files by using the commands: stat -c %U:%G /etc/kubernetes/admin.conf | grep -v root:root stat -c %U:%G /etc/kubernetes/scheduler.conf | grep -v root:root stat -c %U:%G /etc/kubernetes/controller-manager.conf | grep -v root:root If the commands return any files not owned by root:root, this is a finding.
Fix: F-45679r712693_fix
Change the ownership of the conf files to root:root by executing the commands: chown root:root /etc/kubernetes/admin.conf chown root:root /etc/kubernetes/scheduler.conf chown root:root /etc/kubernetes/controller-manager.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003140
- Vuln IDs
-
- V-242447
- Rule IDs
-
- SV-242447r864019_rule
Checks: C-45722r712695_chk
Check if Kube-Proxy is running and obtain the --kubeconfig parameter by using the following command: ps -ef | grep kube-proxy If Kube-Proxy exists, review the permissions of the Kubernetes Kube-Proxy kubeconfig file by using the command: stat -c %a <location from --kubeconfig> If the file has permissions more permissive than "644", this is a finding.
Fix: F-45680r821611_fix
Change the permissions of the Kube-Proxy kubeconfig file to "644" by executing the command: chmod 644 <location from --kubeconfig>.
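Extracting the --kubeconfig path from the running process can be done with grep and cut. A minimal sketch; the ps line is canned so the demo runs anywhere, and the /var/lib/kube-proxy/kubeconfig path is a hypothetical example value:

```shell
# Pull the --kubeconfig path out of a kube-proxy command line.
# On a real node, replace the canned line with: ps -ef | grep [k]ube-proxy
cmdline='root 1234 1 0 ? 00:00 /usr/local/bin/kube-proxy --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2'
kubeconfig=$(printf '%s\n' "$cmdline" | grep -o -- '--kubeconfig=[^ ]*' | cut -d= -f2)
echo "$kubeconfig"
```

The extracted path can then be fed to stat -c %a (check) or chmod 644 (fix) as the rule describes.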
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003150
- Vuln IDs
-
- V-242448
- Rule IDs
-
- SV-242448r864020_rule
Checks: C-45723r712698_chk
Check if Kube-Proxy is running by using the following command: ps -ef | grep kube-proxy If Kube-Proxy exists, review the ownership of the Kubernetes Kube-Proxy kubeconfig file by using the command: stat -c %U:%G <location from --kubeconfig> | grep -v root:root If the command returns any file not owned by root:root, this is a finding.
Fix: F-45681r712699_fix
Change the ownership of the Kube-Proxy kubeconfig file to root:root by executing the command: chown root:root <location from --kubeconfig>.
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003160
- Vuln IDs
-
- V-242449
- Rule IDs
-
- SV-242449r864021_rule
Checks: C-45724r863915_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Control Plane. Run the command: more kubelet Locate the --client-ca-file argument and note the certificate location. If the file referenced by the --client-ca-file argument has permissions more permissive than "644", this is a finding.
Fix: F-45682r821613_fix
Change the permissions of the --client-ca-file to "644" by executing the command: chmod 644 <kubelet --client-ca-file argument location>.
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003170
- Vuln IDs
-
- V-242450
- Rule IDs
-
- SV-242450r864022_rule
Checks: C-45725r863917_chk
Change to the /etc/sysconfig/ directory on the Kubernetes Control Plane. Run the command: more kubelet Locate the --client-ca-file argument and note the certificate location. Review the ownership of the Kubernetes client-ca-file by using the command: stat -c %U:%G <location from --client-ca-file argument> | grep -v root:root If the command returns any file not owned by root:root, this is a finding.
Fix: F-45683r712705_fix
Change the ownership of the client-ca-file to root:root by executing the command: chown root:root <location from --client-ca-file argument>.
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003180
- Vuln IDs
-
- V-242451
- Rule IDs
-
- SV-242451r712709_rule
Checks: C-45726r712707_chk
Review the ownership of the PKI files in Kubernetes by using the command: ls -laR /etc/kubernetes/pki/ If the command returns any file not owned by root:root, this is a finding.
Fix: F-45684r712708_fix
Change the ownership of the PKI files to root:root by executing the command: chown -R root:root /etc/kubernetes/pki/
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003190
- Vuln IDs
-
- V-242452
- Rule IDs
-
- SV-242452r821616_rule
Checks: C-45727r712710_chk
Review the permissions of the Kubernetes kubelet.conf by using the command: stat -c %a /etc/kubernetes/kubelet.conf If the file has permissions more permissive than "644", this is a finding.
Fix: F-45685r821615_fix
Change the permissions of kubelet.conf to "644" by executing the command: chmod 644 /etc/kubernetes/kubelet.conf
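The "more permissive than 644" test used throughout these rules means: no permission bit outside rw-r--r-- may be set. A minimal sketch of that logic, using scratch files in place of /etc/kubernetes/kubelet.conf so the demo is safe to run:

```shell
# A file fails the audit when any permission bit outside 0644 is set.
dir=$(mktemp -d)
touch "$dir/ok.conf" "$dir/loose.conf"
chmod 600 "$dir/ok.conf"      # stricter than 644: compliant
chmod 664 "$dir/loose.conf"   # group-writable: a finding
findings=""
for f in "$dir"/*; do
  mode=$(stat -c '%a' "$f")
  # mask off the allowed 0644 bits; anything left over is excess permission
  if [ $(( 0$mode & ~0644 & 0777 )) -ne 0 ]; then
    findings="$findings $f"
  fi
done
echo "findings:$findings"
```

Note that a stricter mode such as 600 passes; only extra bits (group/other write, any execute) trigger a finding.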
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003200
- Vuln IDs
-
- V-242453
- Rule IDs
-
- SV-242453r712715_rule
Checks: C-45728r712713_chk
Review the ownership of the Kubernetes kubelet.conf file by using the command: stat -c %U:%G /etc/kubernetes/kubelet.conf | grep -v root:root If the command returns any file not owned by root:root, this is a finding.
Fix: F-45686r712714_fix
Change the ownership of the kubelet.conf to root:root by executing the command: chown root:root /etc/kubernetes/kubelet.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003210
- Vuln IDs
-
- V-242454
- Rule IDs
-
- SV-242454r754819_rule
Checks: C-45729r754817_chk
Review the kubeadm.conf file: Get the path for kubeadm.conf by running: systemctl status kubelet Note the configuration file installed by kubeadm (Default Location: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Run the command: stat -c %U:%G <kubeadm.conf path> | grep -v root:root If the command returns any file not owned by root:root, this is a finding.
Fix: F-45687r754818_fix
Change the ownership of the kubeadm.conf to root:root by executing the command: chown root:root <kubeadm.conf path>
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003220
- Vuln IDs
-
- V-242455
- Rule IDs
-
- SV-242455r754822_rule
Checks: C-45730r754820_chk
Review the kubeadm.conf file: Get the path for kubeadm.conf by running: systemctl status kubelet Note the configuration file installed by kubeadm (Default Location: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Run the command: stat -c %a <kubeadm.conf path> If the file has permissions more permissive than "644", this is a finding.
Fix: F-45688r754821_fix
Change the permissions of kubeadm.conf to "644" by executing the command: chmod 644 <kubeadm.conf path>
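Deriving the kubeadm.conf path from the service status can be scripted. A minimal sketch; the status text is canned so the demo runs without systemd, and on a real node the actual `systemctl status kubelet` output would be piped in instead:

```shell
# Derive the kubeadm drop-in path from systemctl-status-style output.
status='Drop-In: /etc/systemd/system/kubelet.service.d
        10-kubeadm.conf'
dropin_dir=$(printf '%s\n' "$status" | awk '/Drop-In:/ {print $2}')
conf=$(printf '%s\n' "$status" | awk 'NR==2 {print $1}')
echo "$dropin_dir/$conf"
```

The resulting path is what the check feeds to stat and the fix feeds to chown/chmod as <kubeadm.conf path>.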
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003230
- Vuln IDs
-
- V-242456
- Rule IDs
-
- SV-242456r821618_rule
Checks: C-45731r712722_chk
Review the permissions of the Kubernetes config.yaml by using the command: stat -c %a /var/lib/kubelet/config.yaml If the file has permissions more permissive than "644", this is a finding.
Fix: F-45689r821617_fix
Change the permissions of the config.yaml to "644" by executing the command: chmod 644 /var/lib/kubelet/config.yaml
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003240
- Vuln IDs
-
- V-242457
- Rule IDs
-
- SV-242457r712727_rule
Checks: C-45732r712725_chk
Review the ownership of the Kubernetes kubelet config file by using the command: stat -c %U:%G /var/lib/kubelet/config.yaml | grep -v root:root If the command returns any file not owned by root:root, this is a finding.
Fix: F-45690r712726_fix
Change the ownership of the kubelet config to root:root by executing the command: chown root:root /var/lib/kubelet/config.yaml
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003250
- Vuln IDs
-
- V-242458
- Rule IDs
-
- SV-242458r864023_rule
Checks: C-45733r712728_chk
Review the permissions of the Kubernetes manifest files by using the command: stat -c %a /etc/kubernetes/manifests/* If any of the files have permissions more permissive than "644", this is a finding.
Fix: F-45691r754805_fix
Change the permissions of the manifest files to "644" by executing the command: chmod 644 /etc/kubernetes/manifests/*
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003260
- Vuln IDs
-
- V-242459
- Rule IDs
-
- SV-242459r864024_rule
Checks: C-45734r712731_chk
Review the permissions of the Kubernetes etcd files by using the command: stat -c %a /var/lib/etcd/* If any of the files have permissions more permissive than "644", this is a finding.
Fix: F-45692r712732_fix
Change the permissions of the etcd files to "644" by executing the command: chmod 644 /var/lib/etcd/*
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003270
- Vuln IDs
-
- V-242460
- Rule IDs
-
- SV-242460r864025_rule
Checks: C-45735r712734_chk
Review the permissions of the Kubernetes config files by using the commands: stat -c %a /etc/kubernetes/admin.conf stat -c %a /etc/kubernetes/scheduler.conf stat -c %a /etc/kubernetes/controller-manager.conf If any of the files have permissions more permissive than "644", this is a finding.
Fix: F-45693r712735_fix
Change the permissions of the conf files to "644" by executing the command: chmod 644 /etc/kubernetes/admin.conf chmod 644 /etc/kubernetes/scheduler.conf chmod 644 /etc/kubernetes/controller-manager.conf
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003280
- Vuln IDs
-
- V-242461
- Rule IDs
-
- SV-242461r864026_rule
Checks: C-45736r863922_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i audit-policy-file * If the setting "audit-policy-file" is not set or is found in the Kubernetes API manifest file without valid content, this is a finding.
Fix: F-45694r863923_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--audit-policy-file" to the path of a valid audit policy file.
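Several of the following checks reduce to "is this apiserver flag set with a non-empty value". A minimal sketch of that pattern; the manifest line is canned, and the /etc/kubernetes/audit-policy.yaml path is a hypothetical example, whereas the real check greps the files under /etc/kubernetes/manifests:

```shell
# Confirm a kube-apiserver flag is present with a non-empty value.
manifest='    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml'
value=$(printf '%s\n' "$manifest" | sed -n 's/.*--audit-policy-file=\([^ ]*\).*/\1/p')
if [ -n "$value" ]; then
  echo "audit-policy-file=$value"
else
  echo "FINDING: flag unset"
fi
```

The same sed extraction works for the audit-log-maxsize, audit-log-maxbackup, audit-log-maxage, and audit-log-path checks below by swapping the flag name.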
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003290
- Vuln IDs
-
- V-242462
- Rule IDs
-
- SV-242462r864027_rule
Checks: C-45737r863925_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-maxsize * If the setting "audit-log-maxsize" is not set in the Kubernetes API Server manifest file or it is set to less than "100", this is a finding.
Fix: F-45695r863926_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-maxsize" to a minimum of "100".
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003300
- Vuln IDs
-
- V-242463
- Rule IDs
-
- SV-242463r864028_rule
Checks: C-45738r863928_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-maxbackup * If the setting "audit-log-maxbackup" is not set in the Kubernetes API Server manifest file or it is set to less than "10", this is a finding.
Fix: F-45696r863929_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-maxbackup" to a minimum of "10".
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003310
- Vuln IDs
-
- V-242464
- Rule IDs
-
- SV-242464r864029_rule
Checks: C-45739r863931_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-maxage * If the setting "audit-log-maxage" is not set in the Kubernetes API Server manifest file or it is set to less than "30", this is a finding.
Fix: F-45697r863932_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-maxage" to a minimum of "30".
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003320
- Vuln IDs
-
- V-242465
- Rule IDs
-
- SV-242465r864030_rule
Checks: C-45740r863934_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-path * If the setting audit-log-path is not set in the Kubernetes API Server manifest file or it is not set to a valid path, this is a finding.
Fix: F-45698r863935_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-path" to a valid location.
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003330
- Vuln IDs
-
- V-242466
- Rule IDs
-
- SV-242466r712754_rule
Checks: C-45741r712752_chk
Review the permissions of the Kubernetes PKI cert files by using the command: find /etc/kubernetes/pki -name "*.crt" | xargs stat -c '%n %a' If any of the files have permissions more permissive than "644", this is a finding.
Fix: F-45699r712753_fix
Change the permissions of the cert files to "644" by executing the command: chmod -R 644 /etc/kubernetes/pki/*.crt
- RMF Control
- CM-6
- Severity
- M
- CCI
- CCI-000366
- Version
- CNTR-K8-003340
- Vuln IDs
-
- V-242467
- Rule IDs
-
- SV-242467r878165_rule
Checks: C-45742r878164_chk
Review the permissions of the Kubernetes PKI key files by using the command: sudo find /etc/kubernetes/pki/* -name "*.key" | xargs stat -c '%n %a' If any of the files have permissions more permissive than "600", this is a finding.
Fix: F-45700r712756_fix
Change the permissions of the key files to "600" by executing the command: chmod -R 600 /etc/kubernetes/pki/*.key
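The key audit can also be expressed directly with find's permission matching: "more permissive than 600" means any group/other bit is set. A minimal sketch, assuming GNU findutils' `-perm /mode` syntax and a scratch tree in place of /etc/kubernetes/pki:

```shell
# Private keys must be 600 or stricter: no group/other bits at all.
pki=$(mktemp -d)
touch "$pki/apiserver.key" "$pki/ca.key"
chmod 600 "$pki/apiserver.key"
chmod 640 "$pki/ca.key"        # group-readable: a finding
# -perm /077 matches files with ANY group/other permission bit set
findings=$(find "$pki" -name '*.key' -perm /077)
echo "findings:$findings"
```

Using `-perm /077` avoids parsing stat output: find itself emits only the non-compliant keys.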
- RMF Control
- AC-17
- Severity
- M
- CCI
- CCI-001453
- Version
- CNTR-K8-003350
- Vuln IDs
-
- V-242468
- Rule IDs
-
- SV-242468r864031_rule
Checks: C-45743r863937_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i tls-min-version * If the setting tls-min-version is not configured in the Kubernetes API Server manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Fix: F-45701r863938_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
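The pass condition amounts to an allow-list of versions. A minimal sketch; the flag value is canned here, and on a real control plane it would be extracted from the kube-apiserver manifest:

```shell
# The check passes only for TLS 1.2 or newer (Go tls version constants).
tls_min="VersionTLS12"
case "$tls_min" in
  VersionTLS12|VersionTLS13) result="compliant" ;;
  *)                         result="FINDING" ;;   # unset, TLS10, or TLS11
esac
echo "$result"
```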
- RMF Control
- SC-10
- Severity
- M
- CCI
- CCI-001133
- Version
- CNTR-K8-001300
- Vuln IDs
-
- V-245541
- Rule IDs
-
- SV-245541r864032_rule
Checks: C-48816r863940_chk
On the Kubernetes Control Plane, run the command: ps -ef | grep kubelet Check the config file (path identified by: --config): Change to the directory identified by --config (example: /etc/sysconfig/) and run the command: grep -i streaming-connection-idle-timeout kubelet If the setting streaming-connection-idle-timeout is set to less than "5m" or the parameter is not configured in the Kubernetes Kubelet, this is a finding.
Fix: F-48771r863941_fix
Edit the Kubernetes Kubelet file in the --config directory on the Kubernetes Control Plane: Set the argument "--streaming-connection-idle-timeout" to a value of "5m". Restart the kubelet service using the following command: service kubelet restart
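Comparing a duration like "5m" against the minimum requires normalizing it to seconds first. A minimal sketch; the value is canned, whereas on a real node it would be read from the streaming-connection-idle-timeout setting in the kubelet --config file:

```shell
# Normalize the timeout to seconds and require at least 5m (300s).
timeout="5m"
case "$timeout" in
  *h) secs=$(( ${timeout%h} * 3600 )) ;;
  *m) secs=$(( ${timeout%m} * 60 )) ;;
  *s) secs=${timeout%s} ;;
  *)  secs=0 ;;   # unset or unparseable: treat as a finding
esac
if [ "$secs" -ge 300 ]; then result=compliant; else result=FINDING; fi
echo "$result"
```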
- RMF Control
- SC-12
- Severity
- H
- CCI
- CCI-002448
- Version
- CNTR-K8-002620
- Vuln IDs
-
- V-245542
- Rule IDs
-
- SV-245542r864033_rule
Checks: C-48817r863943_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i basic-auth-file * If "basic-auth-file" is set in the Kubernetes API server manifest file, this is a finding.
Fix: F-48772r863944_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Remove the setting "--basic-auth-file".
- RMF Control
- SC-12
- Severity
- M
- CCI
- CCI-002448
- Version
- CNTR-K8-002630
- Vuln IDs
-
- V-245543
- Rule IDs
-
- SV-245543r864034_rule
Checks: C-48818r863946_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i token-auth-file * If "token-auth-file" is set in the Kubernetes API server manifest file, this is a finding.
Fix: F-48773r863947_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Remove parameter "--token-auth-file".
- RMF Control
- SC-12
- Severity
- M
- CCI
- CCI-002448
- Version
- CNTR-K8-002640
- Vuln IDs
-
- V-245544
- Rule IDs
-
- SV-245544r864035_rule
Checks: C-48819r863949_chk
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the commands: grep -i kubelet-client-certificate * grep -i kubelet-client-key * If the setting "--kubelet-client-certificate" is not configured in the Kubernetes API server manifest file or contains no value, this is a finding. If the setting "--kubelet-client-key" is not configured in the Kubernetes API server manifest file or contains no value, this is a finding.
Fix: F-48774r863950_fix
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--kubelet-client-certificate" and "--kubelet-client-key" to an Approved Organizational Certificate and key pair.
- RMF Control
- AC-16
- Severity
- H
- CCI
- CCI-002263
- Version
- CNTR-K8-002011
- Vuln IDs
-
- V-254800
- Rule IDs
-
- SV-254800r864040_rule
Checks: C-58411r863727_chk
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: "grep -i admission-control-config-file *" If the setting "admission-control-config-file" is not configured in the Kubernetes API Server manifest file, this is a finding. Inspect the .yaml file defined by the --admission-control-config-file. Verify PodSecurity is properly configured. If least privilege is not represented, this is a finding.
Fix: F-58357r863728_fix
Modify the file /etc/kubernetes/manifests/kube-apiserver.yaml and add the flag --admission-control-config-file (with a valid path for the file) to the apiserver configuration. Create an admission controller config file.

Example file:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    # Defaults applied when a mode label is not set.
    defaults:
      enforce: "privileged"
      enforce-version: "latest"
    exemptions:
      # Don't forget to exempt namespaces or users that are responsible for deploying
      # cluster components, because they need to run privileged containers
      usernames: ["admin"]
      namespaces: ["kube-system"]
```

For more details, see: Migrate from PSP to PSA: https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/ Best Practice: https://kubernetes.io/docs/concepts/security/pod-security-policy/#recommended-practice
- RMF Control
- AC-16
- Severity
- H
- CCI
- CCI-002263
- Version
- CNTR-K8-002001
- Vuln IDs
-
- V-254801
- Rule IDs
-
- SV-254801r864044_rule
Checks: C-58412r863729_chk
Check Static Pods: On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i PodSecurity=true * Ensure the argument "--feature-gates=PodSecurity=true" is present in each manifest file. If kube-apiserver, kube-controller-manager, or kube-scheduler is missing the argument "--feature-gates=PodSecurity=true", this is a finding. Check Kubelet: Run the following command on each Worker Node: ps -ef | grep kubelet Verify that the "--feature-gates=PodSecurity=true" argument exists. If it does not exist, this is a finding. Check Control Plane Kubelet config file: On the Kubernetes Control Plane, run the command: ps -ef | grep kubelet Check the config file (path identified by: --config). Verify that the "--feature-gates=PodSecurity=true" argument exists. If it does not exist, this is a finding.
Fix: F-58358r863730_fix
Add the "--feature-gates=PodSecurity=true" argument to every component of Kubernetes. kube-apiserver, kube-controller-manager, and kube-scheduler: These components are started as static pods; their manifests are in the /etc/kubernetes/manifests/ folder. Add the "--feature-gates=PodSecurity=true" argument in each of the files. Kubelet: Edit the Kubernetes Kubelet file in the --config directory on the Kubernetes Control Plane: Add "--feature-gates=PodSecurity=true" Restart the kubelet service using the following command: service kubelet restart Note: If the cluster has multiple nodes, make the changes on every node where the components are deployed.
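Auditing all three static-pod manifests at once is easiest with grep -L, which lists files that lack a match. A minimal sketch; canned manifest fragments stand in for the real files under /etc/kubernetes/manifests, with kube-scheduler deliberately missing the flag to show a finding:

```shell
# grep -L lists manifests that lack the PodSecurity feature gate.
dir=$(mktemp -d)
printf -- '- --feature-gates=PodSecurity=true\n' > "$dir/kube-apiserver.yaml"
printf -- '- --feature-gates=PodSecurity=true\n' > "$dir/kube-controller-manager.yaml"
printf -- '- --v=2\n' > "$dir/kube-scheduler.yaml"
missing=$(grep -L -- 'feature-gates=PodSecurity=true' "$dir"/*.yaml)
echo "missing:$missing"
```

Any file named in the output is missing the gate and is a finding under this rule.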