Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i tls-min-version * If the setting "tls-min-version" is not configured in the Kubernetes Controller Manager manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i tls-min-version * If the setting "tls-min-version" is not configured in the Kubernetes Scheduler manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Edit the Kubernetes Scheduler manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i tls-min-version * If the setting "tls-min-version" is not configured in the Kubernetes API Server manifest file or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--tls-min-version" to "VersionTLS12" or higher.
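For illustration, a minimal sketch of where this flag lives in a kubeadm-style static Pod manifest; the file name, image, and version shown are assumptions, and the same pattern applies to the Controller Manager and Scheduler manifests. Only the "--tls-min-version" argument is prescribed by these requirements:
```yaml
# Hypothetical fragment of /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.27.0   # example version, not prescribed
    command:
    - kube-apiserver
    - --tls-min-version=VersionTLS12   # TLS 1.2 minimum; VersionTLS13 is also acceptable
```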
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i auto-tls * If the setting "--auto-tls" is not configured in the Kubernetes etcd manifest file or it is set to "true", this is a finding.
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--auto-tls" to "false".
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i peer-auto-tls * If the setting "--peer-auto-tls" is not configured in the Kubernetes etcd manifest file or it is set to "true", this is a finding.
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--peer-auto-tls" to "false".
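A minimal sketch of the two etcd arguments covered above, assuming a kubeadm-style etcd.yaml static Pod manifest (the surrounding fields are illustrative):
```yaml
# Hypothetical fragment of /etc/kubernetes/manifests/etcd.yaml
spec:
  containers:
  - name: etcd
    command:
    - etcd
    - --auto-tls=false        # disable self-signed client-facing certificates
    - --peer-auto-tls=false   # disable self-signed peer certificates
```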
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i use-service-account-credentials * If the setting "--use-service-account-credentials" is not configured in the Kubernetes Controller Manager manifest file or it is set to "false", this is a finding.
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--use-service-account-credentials" to "true".
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i authorization-mode * If the setting "--authorization-mode" is set to "AlwaysAllow" in the Kubernetes API Server manifest file or is not configured, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--authorization-mode" to "Node,RBAC".
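A sketch of the argument's placement in the API Server manifest (other flags are omitted; this is not the full required command line):
```yaml
# Hypothetical fragment of /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC   # never AlwaysAllow
```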
To view the available namespaces, run the command: kubectl get namespaces The default namespaces to be validated are default, kube-public, and kube-node-lease (if it has been created). For the default namespace, execute the commands: kubectl config set-context --current --namespace=default kubectl get all For the kube-public namespace, execute the commands: kubectl config set-context --current --namespace=kube-public kubectl get all For the kube-node-lease namespace, execute the commands: kubectl config set-context --current --namespace=kube-node-lease kubectl get all The only valid return values are the kubernetes service (i.e., service/kubernetes) and nothing at all. If "kubectl get all" returns anything other than the kubernetes service (i.e., service/kubernetes), this is a finding.
Move any user-managed resources from the default, kube-public, and kube-node-lease namespaces to user namespaces.
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i bind-address * If the setting "bind-address" is not set to "127.0.0.1" or is not found in the Kubernetes Scheduler manifest file, this is a finding.
Edit the Kubernetes Scheduler manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--bind-address" to "127.0.0.1".
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i bind-address * If the setting "bind-address" is not set to "127.0.0.1" or is not found in the Kubernetes Controller Manager manifest file, this is a finding.
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--bind-address" to "127.0.0.1".
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i insecure-port * If the setting "--insecure-port" is not set to "0" or is not configured in the Kubernetes API server manifest file, this is a finding. Note: The "--insecure-port" flag has been deprecated, can only be set to "0", and will be removed in v1.24.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--insecure-port" to "0".
On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet If the "--read-only-port" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i readOnlyPort <path_to_config_file> If the setting "readOnlyPort" exists and is not set to "0", this is a finding.
On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet Remove the "--read-only-port" option if present. Note the path to the config file (identified by --config). Edit the config file: Set "readOnlyPort" to "0" or remove the setting. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
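A sketch of the corresponding entry, assuming the --config file is a standard KubeletConfiguration document (the path and surrounding fields vary by installation):
```yaml
# Hypothetical fragment of the kubelet --config file, e.g. /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
readOnlyPort: 0   # 0 disables the unauthenticated read-only port
```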
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i insecure-bind-address * If the setting "--insecure-bind-address" is found and set to "localhost" in the Kubernetes API manifest file, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Remove the "--insecure-bind-address" setting.
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i secure-port * If the setting "--secure-port" is set to "0" or is not configured in the Kubernetes API manifest file, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--secure-port" to a value greater than "0".
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i anonymous-auth * If the setting "--anonymous-auth" is set to "true" in the Kubernetes API Server manifest file, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--anonymous-auth" to "false".
On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet If the "--anonymous-auth" option exists, this is a finding. Note the path to the config file (identified by --config). Inspect the content of the config file: Locate the "anonymous" section under "authentication". In this section, if the field "enabled" does not exist or is set to "true", this is a finding.
On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet Remove the "anonymous-auth" option if present. Note the path to the config file (identified by --config). Edit the config file: Locate the "authentication" section and the "anonymous" subsection. Within the "anonymous" subsection, set "enabled" to "false". Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet If the "--authorization-mode" option exists, this is a finding. Note the path to the config file (identified by --config). Inspect the content of the config file: Locate the "authorization" section. If the field "mode" does not exist or is not set to "Webhook", this is a finding.
On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet Remove the "--authorization-mode" option if present. Note the path to the config file (identified by --config). Edit the config file: In the "authorization" section, set "mode" to "Webhook". Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
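The two kubelet settings above (anonymous authentication disabled, Webhook authorization) sit in the same KubeletConfiguration file; a minimal sketch, with the file path assumed:
```yaml
# Hypothetical fragment of the kubelet --config file
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false   # reject unauthenticated requests
authorization:
  mode: Webhook      # delegate authorization decisions to the API server
```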
Log in to each worker node. Verify that the sshd service is not running. To validate that the service is not running, run the command: systemctl status sshd If the service sshd is active (running), this is a finding. Note: If console access is not available, SSH access can be attempted. If the worker nodes cannot be reached, this requirement is "not a finding".
To stop the sshd service, run the command: systemctl stop sshd Note: If access to the worker node is through an SSH session, it is important to realize there are two requirements for disabling and stopping the sshd service and they should be done during the same SSH session. Disabling the service must be performed first and then the service stopped to guarantee both settings can be made if the session is interrupted.
Log in to each worker node. Verify that the sshd service is not enabled. To validate the service is not enabled, run the command: systemctl is-enabled sshd.service If the service sshd is enabled, this is a finding. Note: If console access is not available, SSH access can be attempted. If the worker nodes cannot be reached, this requirement is "not a finding".
To disable the sshd service, run the command: systemctl disable sshd Note: If access to the worker node is through an SSH session, it is important to realize there are two requirements for disabling and stopping the sshd service that must be done during the same SSH session. Disabling the service must be performed first and then the service stopped to guarantee both settings can be made if the session is interrupted.
From the Control Plane, run the command: kubectl get pods --all-namespaces -l k8s-app=kubernetes-dashboard If any resources are returned, this is a finding.
Delete the Kubernetes dashboard deployment with the following command: kubectl delete deployment kubernetes-dashboard --namespace=kube-system
From the Control Plane and each Worker node, check the version of kubectl by executing the command: kubectl version --client If the Control Plane or any Worker nodes are not using kubectl version 1.12.9 or newer, this is a finding.
Upgrade the Control Plane and Worker nodes to the latest version of kubectl.
Ensure that Kubernetes static PodPath is not enabled on each Control Plane and Worker node. On the Control Plane and Worker nodes, run the command: ps -ef | grep kubelet Note the path to the config file (identified by --config). Run the command: grep -i staticPodPath <path_to_config_file> If any of the Control Plane and Worker nodes return a value for "staticPodPath", this is a finding.
On each Control Plane and Worker node, run the command: ps -ef | grep kubelet Note the path to the config file (identified by --config). Edit the Kubernetes kubelet file in the --config directory on the Kubernetes Control Plane and Worker nodes. Remove the setting "staticPodPath". Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the "--feature-gates" setting, if one is returned. If the "--feature-gates" setting is present and contains the "DynamicAuditing" flag set to "true", this is a finding. On each Control Plane and Worker node, run the command: ps -ef | grep kubelet If the "--feature-gates" option exists, this is a finding. Note the path to the config file (identified by --config). Inspect the content of the config file: If the "featureGates" setting is present and has the "DynamicAuditing" flag set to "true", this is a finding.
On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * If any "--feature-gates" setting is present and contains the "DynamicAuditing" flag, remove the flag or set it to "false". On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet Remove the "--feature-gates" option if present. Note the path to the config file (identified by --config). Edit the Kubernetes Kubelet config file: If the "featureGates" setting is present, remove the "DynamicAuditing" flag or set the flag to "false". Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
This check is only applicable for Kubernetes versions 1.25 and older. On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * In each manifest file, if the "--feature-gates" setting does not exist, does not contain the "DynamicKubeletConfig" flag, or sets the flag to "true", this is a finding. On each Control Plane and Worker node, run the command: ps -ef | grep kubelet Verify the "--feature-gates" option is not present. Note the path to the config file (identified by --config). Inspect the content of the config file: If the "featureGates" setting is not present, does not contain the "DynamicKubeletConfig" flag, or sets the flag to "true", this is a finding.
This fix is only applicable to Kubernetes version 1.25 and older. On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Edit the manifest files so that every manifest has a "--feature-gates" setting with "DynamicKubeletConfig=false". On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet Remove the "--feature-gates" option if present. Note the path to the config file (identified by --config). Edit the config file: Add a "featureGates" setting if one does not yet exist. Add the feature gate "DynamicKubeletConfig=false". Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the "--feature-gates" setting, if one is returned. If the "--feature-gates" setting is present and contains the "AllAlpha" flag set to "true", this is a finding.
Edit any manifest file that contains the "--feature-gates" setting with "AllAlpha" set to "true". Set the value of "AllAlpha" to "false" or remove the setting completely. (AllAlpha - default=false)
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-path * If the "--audit-log-path" is not set, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-path" to a secure location for the audit logs to be written. Note: If the API server is running as a Pod, then the manifest will also need to be updated to mount the host system filesystem where the audit log file is to be written.
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i audit-policy-file * If the "--audit-policy-file" setting is not configured, this is a finding. The file specified is the policy file, which defines what is audited and what information is included with each event. The policy file must look like this:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/vX (Where X is the latest apiVersion)
kind: Policy
rules:
- level: RequestResponse
If the audit policy file does not match the above, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-policy-file" to the path of a file with the following content:
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/vX (Where X is the latest apiVersion)
kind: Policy
rules:
- level: RequestResponse
Note: If the API server is running as a Pod, then the manifest will also need to be updated to mount the host system filesystem where the audit policy file resides.
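Because the API server usually runs as a static Pod, both the policy file and the log path must be visible inside the container; a sketch of the flags plus hostPath mounts (the paths shown are assumptions, not prescribed values):
```yaml
# Hypothetical fragment of /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml   # assumed path
    - --audit-log-path=/var/log/kubernetes/audit/audit.log    # assumed path
    volumeMounts:
    - name: audit-policy
      mountPath: /etc/kubernetes/audit-policy.yaml
      readOnly: true
    - name: audit-log
      mountPath: /var/log/kubernetes/audit
  volumes:
  - name: audit-policy
    hostPath:
      path: /etc/kubernetes/audit-policy.yaml
      type: File
  - name: audit-log
    hostPath:
      path: /var/log/kubernetes/audit
      type: DirectoryOrCreate
```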
On the Control Plane and Worker nodes, run the command: ps -ef | grep kubelet If the option "--hostname-override" is present, this is a finding.
Run the command: systemctl status kubelet. Note the path to the drop-in file. Determine the path to the environment file(s) with the command: grep -i EnvironmentFile <path_to_drop_in_file>. Remove the "--hostname-override" option from any environment file where it is present. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
On the Control Plane, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must be owned by root:root. If any manifest file is not owned by root:root, this is a finding.
On the Control Plane, change to the /etc/kubernetes/manifests directory. Run the command: chown root:root * To verify the change took place, run the command: ls -l * All the manifest files should be owned by root:root.
On the Kubernetes Control Plane and Worker nodes, run the command: ps -ef | grep kubelet Check the config file (path identified by --config): Change to the directory identified by --config (example: /etc/sysconfig/) and run the command: ls -l kubelet Each kubelet configuration file must be owned by root:root. If any kubelet configuration file is not owned by root:root, this is a finding.
On the Control Plane and Worker nodes, change to the --config directory. Run the command: chown root:root kubelet To verify the change took place, run the command: ls -l kubelet The kubelet file should now be owned by root:root.
On the Kubernetes Control Plane and Worker nodes, run the command: ps -ef | grep kubelet Check the config file (path identified by: --config): Change to the directory identified by --config (example /etc/sysconfig/) and run the command: ls -l kubelet Each KubeletConfiguration file must have permissions of "644" or more restrictive. If any KubeletConfiguration file is less restrictive than "644", this is a finding.
On the Kubernetes Control Plane and Worker nodes, run the command: ps -ef | grep kubelet Check the config file (path identified by: --config): Change to the directory identified by --config (example /etc/sysconfig/) and run the command: chmod 644 kubelet To verify the change took place, run the command: ls -l kubelet The kubelet file should now have the permissions of "644".
On both Control Plane and Worker Nodes, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must have permissions of "644" or more restrictive. If any manifest file is less restrictive than "644", this is a finding.
On both Control Plane and Worker Nodes, change to the /etc/kubernetes/manifests directory. Run the command: chmod 644 * To verify the change took place, run the command: ls -l * All the manifest files should now have permissions of "644".
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i profiling * If the setting "--profiling" is not configured in the Kubernetes Controller Manager manifest file or it is set to "true", this is a finding.
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--profiling" to "false".
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the commands: grep -i secure-port kube-apiserver.manifest grep -i etcd-servers kube-apiserver.manifest Edit the manifest file (vim <manifest name>) and review the livenessProbe: httpGet: port: value and the ports: containerPort: and hostPort: values. Run the command: kubectl describe services --all-namespaces Search the labels for any API server namespaces and note the ports. Any manifest and namespace PPS or services configuration not in compliance with the PPSM CAL is a finding. Review the information system's documentation and interview the team to gain an understanding of the API Server architecture and determine the applicable PPS. If there are any ports, protocols, and services in the system documentation not in compliance with the PPSM CAL, this is a finding. Any PPS not set in the system documentation is a finding. Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/ Verify the API Server network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Amend any system documentation requiring revision. Update Kubernetes API Server manifest and namespace PPS configuration to comply with PPSM CAL.
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the commands: grep -i insecure-port kube-scheduler.manifest grep -i secure-port kube-scheduler.manifest Edit the manifest file (vim <manifest name>) and review the livenessProbe: httpGet: port: value and the ports: containerPort: and hostPort: values. Run the command: kubectl describe services --all-namespaces Search the labels for any scheduler namespaces and note the ports. Any manifest and namespace PPS configuration not in compliance with the PPSM CAL is a finding. Review the information system's documentation and interview the team to gain an understanding of the Scheduler architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding. Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/ Verify the Scheduler network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Amend any system documentation requiring revision. Update Kubernetes Scheduler manifest and namespace PPS configuration to comply with the PPSM CAL.
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i secure-port kube-controller-manager.manifest Edit the manifest file (vim <manifest name>) and review the livenessProbe: httpGet: port: value and the ports: containerPort: and hostPort: values. Run the command: kubectl describe services --all-namespaces Search the labels for any controller namespaces and note the ports. Any manifest and namespace PPS or services configuration not in compliance with the PPSM CAL is a finding. Review the information system's documentation and interview the team to gain an understanding of the Controller architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding. Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/ Verify the Controller network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Amend any system documentation requiring revision. Update Kubernetes Controller manifest and namespace PPS configuration to comply with PPSM CAL.
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i etcd-servers kube-apiserver.manifest Edit the etcd manifest file (vim <manifest name>) and review the livenessProbe: httpGet: port: value and the ports: containerPort: and hostPort: values. Run the command: kubectl describe services --all-namespaces Search the labels for any API server namespaces and note the ports. Any manifest and namespace PPS configuration not in compliance with the PPSM CAL is a finding. Review the information system's documentation and interview the team to gain an understanding of the etcd architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding. Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/ Verify the etcd network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.
Amend any system documentation requiring revision. Update Kubernetes etcd manifest and namespace PPS configuration to comply with PPSM CAL.
On the Control Plane, run the command: kubectl get pods --all-namespaces The list returned contains all pods running within the Kubernetes cluster. For those pods running within user namespaces (i.e., any namespace other than the system namespaces kube-system, kube-node-lease, and kube-public), run the command: kubectl get pod podname -o yaml | grep -i port Note: In the above command, "podname" is the name of the pod. For the command to work correctly, the current context must be changed to the namespace for the pod. The command to do this is: kubectl config set-context --current --namespace=namespace-name (Note: "namespace-name" is the name of the namespace.) Review the ports that are returned for the pod. If any privileged host ports (i.e., hostPort values below 1024) are returned for any of the pods, this is a finding.
For any of the pods that are using host-privileged ports, reconfigure the pod to use a service to map a host non-privileged port to the pod port or reconfigure the image to use non-privileged ports.
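One way to remediate, sketched below: keep the container on a non-privileged port and expose it through a Service, so no pod needs a hostPort below 1024. The names and ports are hypothetical:
```yaml
# Hypothetical Service mapping a well-known port to a non-privileged pod port
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  selector:
    app: web-frontend
  ports:
  - port: 80          # port exposed by the Service
    targetPort: 8080  # non-privileged port the container actually listens on
```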
On the Kubernetes Control Plane, run the following command: kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A If any of the values returned reference environment variables, this is a finding.
Any secrets stored as environment variables must be moved to the secret files with the proper protections and enforcements or placed within a password vault.
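A sketch of the file-based alternative: mount the Secret as a volume so the value arrives as a file rather than an environment variable. The names, image, and paths are hypothetical:
```yaml
# Hypothetical Pod fragment consuming a Secret as a mounted file
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    volumeMounts:
    - name: db-creds
      mountPath: /etc/secrets      # app reads /etc/secrets/password
      readOnly: true
  volumes:
  - name: db-creds
    secret:
      secretName: db-credentials   # hypothetical Secret name
```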
On the Control Plane, run the command: kubectl get pods --all-namespaces Review the namespaces and pods that are returned. Kubernetes system namespaces are kube-node-lease, kube-public, and kube-system. If any user pods are present in the Kubernetes system namespaces, this is a finding.
Move any user pods that are present in the Kubernetes system namespaces to user-specific namespaces.
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i tls-cipher-suites * If the setting "--tls-cipher-suites" is not set in the Kubernetes API server manifest file, contains no value, or does not contain TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--tls-cipher-suites" to: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i client-ca-file * If the setting "--client-ca-file" is not set in the Kubernetes API server manifest file or contains no value, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--client-ca-file" to the path of the Approved Organizational Certificate.
On the Control Plane, run the command: ps -ef | grep kubelet If the "--client-ca-file" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i clientCAFile <path_to_config_file> If the setting "clientCAFile" is not set or contains no value, this is a finding.
On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--client-ca-file" option if present. Note the path to the config file (identified by --config). Edit the Kubernetes Kubelet config file: Set the value of "clientCAFile" to a path containing an Approved Organizational Certificate. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i root-ca-file * If the setting "--root-ca-file" is not set in the Kubernetes Controller Manager manifest file or contains no value, this is a finding.
Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--root-ca-file" to the path of the Approved Organizational Certificate.
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the commands: grep -i tls-cert-file * grep -i tls-private-key-file * If the settings "--tls-cert-file" and "--tls-private-key-file" are not set in the Kubernetes API server manifest file or contain no value, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the values of "--tls-cert-file" and "--tls-private-key-file" to the paths of the Approved Organizational Certificate and its private key.
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i client-cert-auth * If the setting "--client-cert-auth" is not configured in the Kubernetes etcd manifest file or is set to "false", this is a finding.
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--client-cert-auth" to "true" for etcd.
On the Control Plane, run the command: ps -ef | grep kubelet If the "--tls-private-key-file" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i tlsPrivateKeyFile <path_to_config_file> If the setting "tlsPrivateKeyFile" is not set or contains no value, this is a finding.
On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--tls-private-key-file" option if present. Note the path to the config file (identified by --config). Edit the Kubernetes Kubelet config file: Set "tlsPrivateKeyFile" to a path containing the appropriate private key. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
On the Control Plane, run the command: ps -ef | grep kubelet If the argument for "--tls-cert-file" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i tlsCertFile <path_to_config_file> If the setting "tlsCertFile" is not set or contains no value, this is a finding.
On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--tls-cert-file" option if present. Note the path to the config file (identified by --config). Edit the Kubernetes Kubelet config file: Set "tlsCertFile" to a path containing an Approved Organization Certificate. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i peer-client-cert-auth * If the setting "--peer-client-cert-auth" is not configured in the Kubernetes etcd manifest file or is set to "false", this is a finding.
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--peer-client-cert-auth" to "true" for etcd.
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i key-file * If the setting "key-file" is not configured in the etcd manifest file, this is a finding.
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--key-file" to the path of the private key for the Approved Organizational Certificate.
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i cert-file * If the setting "cert-file" is not configured in the Kubernetes etcd manifest file, this is a finding.
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--cert-file" to the path of the Approved Organizational Certificate.
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i etcd-cafile * If the setting "--etcd-cafile" is not configured in the Kubernetes API Server manifest file, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--etcd-cafile" to the path of the Certificate Authority certificate for etcd.
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i etcd-certfile * If the setting "--etcd-certfile" is not set in the Kubernetes API Server manifest file, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--etcd-certfile" to the path of the certificate to be used for communication with etcd.
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i etcd-keyfile * If the setting "--etcd-keyfile" is not configured in the Kubernetes API Server manifest file, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--etcd-keyfile" to the path of the private key to be used for communication with etcd.
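The three etcd client flags above typically appear together; a sketch using kubeadm's default PKI paths (the paths are kubeadm conventions, not mandated values):
```yaml
# Hypothetical fragment of /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt   # CA that signed etcd's serving cert
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
```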
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i peer-cert-file * If the setting "peer-cert-file" is not configured in the Kubernetes etcd manifest file, this is a finding.
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--peer-cert-file" to the path of the certificate to be used for peer-to-peer communication between etcd members.
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i peer-key-file * If the setting "peer-key-file" is not set in the Kubernetes etcd manifest file, this is a finding.
Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--peer-key-file" to the path of the private key to be used for peer-to-peer communication between etcd members.
On the Control Plane, run the command: ps -ef | grep kubelet If the "--protect-kernel-defaults" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i protectKernelDefaults <path_to_config_file> If the setting "protectKernelDefaults" is not set or is set to false, this is a finding.
On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--protect-kernel-defaults" option if present. Note the path to the Kubernetes Kubelet config file (identified by --config). Edit the Kubernetes Kubelet config file: Set "protectKernelDefaults" to "true". Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
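A sketch of the resulting KubeletConfiguration entry (file path assumed):
```yaml
# Hypothetical fragment of the kubelet --config file
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
protectKernelDefaults: true   # fail on mismatched kernel tunables rather than modifying them
```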
Prior to version 1.21, Pod Security Policies (PSPs) were used to enforce security policies. PSPs are now deprecated and will be removed in version 1.25. Migrate from PSP to PSA: https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/ Pre-version 1.25 Check: Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i ValidatingAdmissionWebhook * If a line is not returned that includes "enable-admission-plugins" and "ValidatingAdmissionWebhook", this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--enable-admission-plugins" to include "ValidatingAdmissionWebhook". Each enabled plugin is separated by commas. Note: It is best to implement policies first and then enable the webhook, otherwise a denial of service may occur.
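A sketch of the flag; the plugin list shown is illustrative and must be merged with whatever plugins the site already enables:
```yaml
# Hypothetical fragment of /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,ValidatingAdmissionWebhook   # comma-separated list
```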
Prior to version 1.21, Pod Security Policies (PSPs) were used to enforce security policies. PSPs are now deprecated and will be removed in version 1.25. Migrate from PSP to PSA: https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/ Pre-version 1.25 Check: On the Control Plane, run the command: kubectl get podsecuritypolicy If there is no pod security policy configured, this is a finding. For any pod security policies listed, edit the policy with the command: kubectl edit podsecuritypolicy policyname (Note: "policyname" is the name of the policy.) Review the runAsUser, supplementalGroups, and fsGroup sections of the policy. If any of these sections are missing, this is a finding. If the rule within the runAsUser section is not set to "MustRunAsNonRoot", this is a finding. If the ranges within the supplementalGroups section have min set to "0" or min is missing, this is a finding. If the ranges within the fsGroup section have min set to "0" or min is missing, this is a finding.
From the Control Plane, save the following policy to a file called restricted.yml:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  # This is redundant with non-root + disallow privilege escalation,
  # but we can provide it for defense in depth.
  requiredDropCapabilities:
  - ALL
  # Allow core volume types.
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  # Assume that persistentVolumes set up by the cluster admin are safe to use.
  - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  readOnlyRootFilesystem: false
To implement the policy, run the command: kubectl create -f restricted.yml
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i request-timeout * If the Kubernetes API Server manifest file does not exist, this is a finding. If the setting "--request-timeout" is set to "0" in the Kubernetes API Server manifest file, or is not configured, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--request-timeout" to a value greater than "0".
To view all pods and the images used to create the pods, from the Control Plane, run the following command:
kubectl get pods --all-namespaces -o jsonpath="{..image}" | \
tr -s '[[:space:]]' '\n' | \
sort | \
uniq -c
Review the images used for pods running within Kubernetes. If there are multiple versions of the same image, this is a finding.
Remove any old pods that are using older images. On the Control Plane, run the command: kubectl delete pod podname (Note: "podname" is the name of the pod to delete.)
Authenticate on the Kubernetes Control Plane. Run the command: kubectl version --short If the reported client and server versions do not comply with the Kubernetes version skew policy, this is a finding. Note: The Kubernetes version skew policy can be found at: https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions
Upgrade Kubernetes to the supported version. Institute and adhere to the policies and procedures to ensure that patches are consistently applied within the time allowed.
Review the ownership of the Kubernetes manifests files by using the command: stat -c %U:%G /etc/kubernetes/manifests/* | grep -v root:root If the command returns any files not owned by root:root, this is a finding.
Change the ownership of the manifest files to root:root by executing the command: chown root:root /etc/kubernetes/manifests/*
Review the ownership of the Kubernetes etcd files by using the command: stat -c %U:%G /var/lib/etcd/* | grep -v etcd:etcd If the command returns any files not owned by etcd:etcd, this is a finding.
Change the ownership of the manifest files to etcd:etcd by executing the command: chown etcd:etcd /var/lib/etcd/*
Review the ownership of the Kubernetes conf files by using the commands: stat -c %U:%G /etc/kubernetes/admin.conf | grep -v root:root stat -c %U:%G /etc/kubernetes/scheduler.conf | grep -v root:root stat -c %U:%G /etc/kubernetes/controller-manager.conf | grep -v root:root If the commands return any files not owned by root:root, this is a finding.
Change the ownership of the conf files to root:root by executing the commands: chown root:root /etc/kubernetes/admin.conf chown root:root /etc/kubernetes/scheduler.conf chown root:root /etc/kubernetes/controller-manager.conf
Check whether Kube-Proxy is running and obtain the --kubeconfig parameter by using the following command: ps -ef | grep kube-proxy If Kube-Proxy exists: Review the permissions of the Kubernetes Kube Proxy kubeconfig file by using the command: stat -c %a <location from --kubeconfig> If the file has permissions more permissive than "644", this is a finding.
Change the permissions of the Kube Proxy kubeconfig file to "644" by executing the command: chmod 644 <location from --kubeconfig>
Check whether Kube-Proxy is running by using the following command: ps -ef | grep kube-proxy If Kube-Proxy exists: Review the ownership of the Kubernetes Kube Proxy kubeconfig file by using the command: stat -c %U:%G <location from --kubeconfig> | grep -v root:root If the command returns any file not owned by root:root, this is a finding.
Change the ownership of the Kube Proxy kubeconfig file to root:root by executing the command: chown root:root <location from --kubeconfig>
On the Control Plane, run the command: ps -ef | grep kubelet If the "--client-ca-file" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i clientCAFile <path_to_config_file> Note the path to the client ca file. Run the command: stat -c %a <path_to_client_ca_file> If the client ca file has permissions more permissive than "644", this is a finding.
On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--client-ca-file" option. Note the path to the config file (identified by --config). Run the command: grep -i clientCAFile <path_to_config_file> Note the path to the client ca file. Run the command: chmod 644 <path_to_client_ca_file>
On the Control Plane, run the command: ps -ef | grep kubelet If the "--client-ca-file" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i clientCAFile <path_to_config_file> Note the path to the client ca file. Run the command: stat -c %U:%G <path_to_client_ca_file> If the file is not owned by root:root, this is a finding.
On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--client-ca-file" option. Note the path to the config file (identified by --config). Run the command: grep -i clientCAFile <path_to_config_file> Note the path to the client ca file. Run the command: chown root:root <path_to_client_ca_file>
Review the ownership of the PKI files in Kubernetes by using the command: ls -laR /etc/kubernetes/pki/ If any file is not owned by root:root, this is a finding.
Change the ownership of the PKI files to root:root by executing the command: chown -R root:root /etc/kubernetes/pki/
Review the permissions of the Kubernetes Kubelet conf file by using the command: stat -c %a /etc/kubernetes/kubelet.conf If the file has permissions more permissive than "644", this is a finding.
Change the permissions of the Kubelet to "644" by executing the command: chmod 644 /etc/kubernetes/kubelet.conf
Review the ownership of the Kubernetes Kubelet conf file by using the command: stat -c %U:%G /etc/kubernetes/kubelet.conf | grep -v root:root If the command returns any file not owned by root:root, this is a finding.
Change the ownership of the kubelet.conf to root:root by executing the command: chown root:root /etc/kubernetes/kubelet.conf
Review the kubeadm.conf file: Get the path of kubeadm.conf by running: systemctl status kubelet Note the location of the configuration file installed by kubeadm (default location: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Run the command: stat -c %U:%G <kubeadm.conf path> | grep -v root:root If the command returns any file not owned by root:root, this is a finding.
Change the ownership of the kubeadm.conf to root:root by executing the command: chown root:root <kubeadm.conf path>
Review the kubeadm.conf file: Get the path of kubeadm.conf by running: systemctl status kubelet Note the location of the configuration file installed by kubeadm (default location: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Run the command: stat -c %a <kubeadm.conf path> If the file has permissions more permissive than "644", this is a finding.
Change the permissions of kubeadm.conf to "644" by executing the command: chmod 644 <kubeadm.conf path>
Review the permissions of the Kubernetes config.yaml by using the command: stat -c %a /var/lib/kubelet/config.yaml If the file has permissions more permissive than "644", this is a finding.
Change the permissions of the config.yaml to "644" by executing the command: chmod 644 /var/lib/kubelet/config.yaml
Review the ownership of the Kubernetes Kubeadm kubelet conf file by using the command: stat -c %U:%G /var/lib/kubelet/config.yaml | grep -v root:root If the command returns any file not owned by root:root, this is a finding.
Change the ownership of the kubelet config to root:root by executing the command: chown root:root /var/lib/kubelet/config.yaml
Review the permissions of the Kubernetes etcd files by using the command: ls -lAR /var/lib/etcd/* If any of the files have permissions more permissive than "644", this is a finding.
Change the permissions of the etcd files to "644" by executing the command: chmod -R 644 /var/lib/etcd/*
Review the permissions of the Kubernetes config files by using the commands: stat -c %a /etc/kubernetes/admin.conf stat -c %a /etc/kubernetes/scheduler.conf stat -c %a /etc/kubernetes/controller-manager.conf If any of the files have permissions more permissive than "644", this is a finding.
Change the permissions of the conf files to "644" by executing the command: chmod 644 /etc/kubernetes/admin.conf chmod 644 /etc/kubernetes/scheduler.conf chmod 644 /etc/kubernetes/controller-manager.conf
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: grep -i audit-policy-file * If the setting "--audit-policy-file" is not set in the Kubernetes API Server manifest file, or is set without valid content, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the argument "--audit-policy-file" to the path of the audit policy file.
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-maxsize * If the setting "--audit-log-maxsize" is not set in the Kubernetes API Server manifest file or it is set to less than "100", this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-maxsize" to a minimum of "100".
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-maxbackup * If the setting "--audit-log-maxbackup" is not set in the Kubernetes API Server manifest file or it is set to less than "10", this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-maxbackup" to a minimum of "10".
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-maxage * If the setting "--audit-log-maxage" is not set in the Kubernetes API Server manifest file or it is set to less than "30", this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-maxage" to a minimum of "30".
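The three audit retention flags above (maxsize, maxbackup, maxage) are typically set together; a sketch with the STIG minimums:
```yaml
# Hypothetical fragment of /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --audit-log-maxsize=100    # MB per log file before rotation (minimum 100)
    - --audit-log-maxbackup=10   # rotated files to retain (minimum 10)
    - --audit-log-maxage=30      # days to retain rotated files (minimum 30)
```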
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i audit-log-path * If the setting "--audit-log-path" is not set in the Kubernetes API Server manifest file or it is not set to a valid path, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--audit-log-path" to a valid location.
Review the permissions of the Kubernetes PKI cert files by using the command: sudo find /etc/kubernetes/pki/* -name "*.crt" | xargs stat -c '%n %a' If any of the files have permissions more permissive than "644", this is a finding.
Change the permissions of the cert files to "644" by executing the command: find /etc/kubernetes/pki -name "*.crt" | xargs chmod 644
Review the permissions of the Kubernetes PKI key files by using the command: sudo find /etc/kubernetes/pki -name "*.key" | xargs stat -c '%n %a' If any of the files have permissions more permissive than "600", this is a finding.
Change the permissions of the key files to "600" by executing the command: find /etc/kubernetes/pki -name "*.key" | xargs chmod 600
On the Control Plane, run the command: ps -ef | grep kubelet If the "--streaming-connection-idle-timeout" option exists, this is a finding. Note the path to the config file (identified by --config). Run the command: grep -i streamingConnectionIdleTimeout <path_to_config_file> If the setting "streamingConnectionIdleTimeout" is set to less than "5m" or is not configured, this is a finding.
On the Control Plane, run the command: ps -ef | grep kubelet Remove the "--streaming-connection-idle-timeout" option if present. Note the path to the config file (identified by --config). Edit the Kubernetes Kubelet config file: Set the argument "streamingConnectionIdleTimeout" to a value of "5m" or greater. Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
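For reference, a sketch of the resulting KubeletConfiguration entry (file path assumed):
```yaml
# Hypothetical fragment of the kubelet --config file
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
streamingConnectionIdleTimeout: 5m   # idle exec/attach/port-forward streams close after 5 minutes
```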
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i basic-auth-file * If "--basic-auth-file" is set in the Kubernetes API server manifest file, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Remove the setting "--basic-auth-file".
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the command: grep -i token-auth-file * If "--token-auth-file" is set in the Kubernetes API server manifest file, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Remove the setting "--token-auth-file".
Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Control Plane. Run the commands: grep -i kubelet-client-certificate * grep -i kubelet-client-key * If the setting "--kubelet-client-certificate" is not configured in the Kubernetes API server manifest file or contains no value, this is a finding. If the setting "--kubelet-client-key" is not configured in the Kubernetes API server manifest file or contains no value, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--kubelet-client-certificate" and "--kubelet-client-key" to an Approved Organizational Certificate and key pair.
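A sketch using kubeadm's default client certificate pair (the paths are kubeadm conventions; substitute the organization's approved certificate and key):
```yaml
# Hypothetical fragment of /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
```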
Change to the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Run the command: "grep -i admission-control-config-file *" If the setting "--admission-control-config-file" is not configured in the Kubernetes API Server manifest file, this is a finding. Inspect the .yaml file defined by the --admission-control-config-file. Verify PodSecurity is properly configured. If least privilege is not represented, this is a finding.
Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Control Plane. Set the value of "--admission-control-config-file" to a valid path for the file. Create an admission controller config file. Example file:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    # Defaults applied when a mode label is not set.
    defaults:
      enforce: "privileged"
      enforce-version: "latest"
    exemptions:
      # Don't forget to exempt namespaces or users that are responsible for deploying
      # cluster components, because they need to run privileged containers.
      usernames: ["admin"]
      namespaces: ["kube-system"]
```
See for more details: Migrate from PSP to PSA: https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/ Best Practice: https://kubernetes.io/docs/concepts/security/pod-security-policy/#recommended-practice
On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * For each manifest file, if the "--feature-gates" setting does not exist, does not contain the "PodSecurity" flag, or sets the flag to "false", this is a finding. On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet If the "--feature-gates" option exists, this is a finding. Note the path to the config file (identified by --config). Inspect the content of the config file: If the "featureGates" setting is not present, does not contain the "PodSecurity" flag, or sets the flag to "false", this is a finding.
On the Control Plane, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Ensure the argument "--feature-gates=PodSecurity=true" is present in each manifest file. On each Control Plane and Worker Node, run the command: ps -ef | grep kubelet Remove the "--feature-gates" option if present. Note the path to the config file (identified by --config). Edit the Kubernetes Kubelet config file: Add a "featureGates" setting if one does not yet exist. Add the feature gate "PodSecurity=true". Restart the kubelet service using the following command: systemctl daemon-reload && systemctl restart kubelet
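A sketch of the kubelet side of this fix (file path assumed; this applies only to versions where the PodSecurity feature gate still exists and has not yet been removed):
```yaml
# Hypothetical fragment of the kubelet --config file
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodSecurity: true   # set explicitly, even on versions where it defaults to true
```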