Use strong TLS settings. On an RKE2 server, run each command:
/bin/ps -ef | grep kube-apiserver | grep -v grep
/bin/ps -ef | grep kube-controller-manager | grep -v grep
/bin/ps -ef | grep kube-scheduler | grep -v grep

For each, look for the existence of "tls-min-version" (appending "| grep tls-min-version" to the command can help). If the setting "tls-min-version" is not configured or is set to "VersionTLS10" or "VersionTLS11", this is a finding.

For each, look for the existence of "tls-cipher-suites". If "tls-cipher-suites" is not set for all servers, or does not contain the following, this is a finding:
--tls-cipher-suites=TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384
Use strong TLS settings. Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:

kube-controller-manager-arg:
- "tls-min-version=VersionTLS12" [or higher]
- "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
kube-scheduler-arg:
- "tls-min-version=VersionTLS12"
- "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
kube-apiserver-arg:
- "tls-min-version=VersionTLS12"
- "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"

Once the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server
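As an illustrative aid (not part of the STIG text), the running flags can be confirmed with a small loop; the tr/grep filtering is an assumption about how the arguments appear on the process command line, and empty output for a component indicates the flag is not set.
# Illustrative only: print the TLS settings of each RKE2 control plane component.
for component in kube-apiserver kube-controller-manager kube-scheduler; do
  echo "=== ${component} ==="
  /bin/ps -ef | grep "${component}" | grep -v grep | tr ' ' '\n' | grep -E '^--tls-(min-version|cipher-suites)='
done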
Ensure the use-service-account-credentials argument is set correctly. Run this command on the RKE2 Control Plane:
/bin/ps -ef | grep kube-controller-manager | grep -v grep
If the --use-service-account-credentials argument is not set to "true" or is not configured, this is a finding.
Edit the RKE2 Configuration File /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following "kube-controller-manager-arg" argument:
- use-service-account-credentials=true
Once the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server
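A minimal sketch (illustrative, not part of the STIG text) to confirm the setting both in the configuration file and on the running process; the grep context length is an assumption about where the argument sits in config.yaml.
# Illustrative only: confirm use-service-account-credentials is configured and active.
grep -A 3 'kube-controller-manager-arg' /etc/rancher/rke2/config.yaml
/bin/ps -ef | grep kube-controller-manager | grep -v grep | tr ' ' '\n' | grep '^--use-service-account-credentials='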
Audit logging and policies:

1. On all hosts running RKE2 Server, run the command:
/bin/ps -ef | grep kube-apiserver | grep -v grep
If --audit-policy-file is not set, this is a finding.
If --audit-log-mode is not set to "blocking-strict", this is a finding.

2. Ensure the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, contains the CIS profile setting. Run the following command:
cat /etc/rancher/rke2/config.yaml
If a value for profile is not found, this is a finding. (Example: "profile: cis-1.6")

3. Check the contents of the audit-policy file. By default, RKE2 expects the audit-policy file to be located at /etc/rancher/rke2/audit-policy.yaml; however, this location can be overridden in the /etc/rancher/rke2/config.yaml file with the argument 'kube-apiserver-arg: "audit-policy-file=/etc/rancher/rke2/audit-policy.yaml"'. If the audit policy file does not exist or does not look like the following, this is a finding.

apiVersion: audit.k8s.io/v1
kind: Policy
metadata:
  name: rke2-audit-policy
rules:
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
- level: RequestResponse
  resources:
  - group: ""
    resources: ["*"]
Audit logging and policies: Edit the /etc/rancher/rke2/config.yaml file, and enable the audit policy:
audit-policy-file: /etc/rancher/rke2/audit-policy.yaml

1. Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, so that it contains the required configuration.
--audit-policy-file= Path to the file that defines the audit policy configuration. (Example: /etc/rancher/rke2/audit-policy.yaml)
--audit-log-mode=blocking-strict
If the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server

2. Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, so that it contains the required configuration.
If using RKE2 v1.24 or older, set: profile: cis-1.6
If using RKE2 v1.25 or newer, set: profile: cis-1.23
If the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server

3. Edit the audit policy file, by default located at /etc/rancher/rke2/audit-policy.yaml, to look like the following:

apiVersion: audit.k8s.io/v1
kind: Policy
metadata:
  name: rke2-audit-policy
rules:
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
- level: RequestResponse
  resources:
  - group: ""
    resources: ["*"]

If configuration files are updated on a host, restart the RKE2 Service. Run the command "systemctl restart rke2-server" for server hosts and "systemctl restart rke2-agent" for agent hosts.
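A quick review of the resulting audit configuration on a server host is sketched below (illustrative only; it assumes the default audit-policy.yaml location has not been overridden).
# Illustrative only: review audit settings after applying the fix.
grep -E 'audit-policy-file|audit-log-mode|^profile:' /etc/rancher/rke2/config.yaml
/bin/ps -ef | grep kube-apiserver | grep -v grep | tr ' ' '\n' | grep -E '^--audit-(policy-file|log-mode)='
cat /etc/rancher/rke2/audit-policy.yaml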
Ensure bind-address is set correctly. Run this command on the RKE2 Control Plane:
/bin/ps -ef | grep kube-controller-manager | grep -v grep
If --bind-address is not set to "127.0.0.1" or is not configured, this is a finding.
Edit the RKE2 Configuration File /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following "kube-controller-manager-arg" argument:
- bind-address=127.0.0.1
Once the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server
Ensure anonymous-auth is set correctly so anonymous requests will be rejected. Run this command on each node:
/bin/ps -ef | grep kubelet | grep -v grep
If --anonymous-auth is set to "true" or is not configured, this is a finding.
Edit the Kubernetes kubelet settings in the RKE2 configuration file /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following:
--anonymous-auth=false
Once the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server
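A per-node spot check can be scripted as below (illustrative sketch; the 'bin/kubelet' pattern is an assumption used to match only the kubelet process rather than other components whose flags mention "kubelet").
# Illustrative only: report the anonymous-auth setting of the kubelet running on this node.
setting=$(/bin/ps -ef | grep 'bin/kubelet' | grep -v grep | tr ' ' '\n' | grep '^--anonymous-auth=')
echo "${setting:-anonymous-auth is not configured on the kubelet (finding)}"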
Ensure insecure-port is set correctly. If running v1.20 through v1.23, this is the default configuration, so no change is necessary if it is not configured. If running v1.24 or newer, this check is Not Applicable. Run this command on the RKE2 Control Plane:
/bin/ps -ef | grep kube-apiserver | grep -v grep
If --insecure-port is not set to "0" or is not configured, this is a finding.
Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:
kube-apiserver-arg:
- insecure-port=0
Once configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server
Ensure read-only-port is set correctly so anonymous requests will be rejected. Run this command on each node:
/bin/ps -ef | grep kubelet | grep -v grep
If --read-only-port is not set to "0" or is not configured, this is a finding.
Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:
kubelet-arg: --read-only-port=0
If configuration files are updated on a host, restart the RKE2 Service. Run the command "systemctl restart rke2-server" for server hosts and "systemctl restart rke2-agent" for agent hosts.
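Because the kubelet runs on both server and agent nodes, the read-only port should be verified everywhere. A minimal sketch (illustrative, not STIG text; the 'bin/kubelet' pattern and the grep context length are assumptions):
# Illustrative only: run on every node; missing or non-zero output indicates a finding.
/bin/ps -ef | grep 'bin/kubelet' | grep -v grep | tr ' ' '\n' | grep '^--read-only-port=' || echo "read-only-port not set (finding)"
grep -A 3 'kubelet-arg' /etc/rancher/rke2/config.yaml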
If running rke2 Kubernetes version > 1.20, this requirement is not applicable (NA).
Ensure insecure-bind-address is set correctly. Run the command:
ps -ef | grep kube-apiserver
If the setting insecure-bind-address is found and set to "localhost", this is a finding.
If running rke2 Kubernetes version > 1.20, this requirement is NA. Upgrade to a supported version of RKE2 Kubernetes.
Ensure authorization-mode is set correctly in the kubelet on each rke2 node. Run this command on each node:
/bin/ps -ef | grep kubelet | grep -v grep
If --authorization-mode is not set to "Webhook" or is not configured, this is a finding.
Edit the RKE2 Configuration File /etc/rancher/rke2/config.yaml on every RKE2 node and set the following "kubelet-arg" argument:
- authorization-mode=Webhook
Once the configuration file is updated, restart the RKE2 Server or Agent. Run the command:
systemctl restart rke2-server
or
systemctl restart rke2-agent
Ensure the anonymous-auth argument is set correctly. Run this command on the RKE2 Control Plane:
/bin/ps -ef | grep kube-apiserver | grep -v grep
If --anonymous-auth is set to "true" or is not configured, this is a finding.
Edit the RKE2 Configuration File /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following "kube-apiserver-arg" argument:
- anonymous-auth=false
Once the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server
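The condition described in the check can be tested directly (illustrative sketch only; the compliant/finding messages are an assumption, not STIG wording):
# Illustrative only: require anonymous-auth=false on the kube-apiserver command line.
/bin/ps -ef | grep kube-apiserver | grep -v grep | tr ' ' '\n' | grep -q '^--anonymous-auth=false' \
  && echo "anonymous-auth=false (compliant)" \
  || echo "anonymous-auth missing or not set to false (finding)"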
Ensure audit-log-maxage is set correctly. Run the following command on the RKE2 Control Plane:
/bin/ps -ef | grep kube-apiserver | grep -v grep
If the --audit-log-maxage argument is not set to at least 30 or is not configured, this is a finding. (By default, RKE2 sets the --audit-log-maxage parameter to 30.)
Edit the RKE2 Configuration File /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following "kube-apiserver-arg" argument:
- audit-log-maxage=30
Once the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server
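Since the requirement is a minimum of 30 days, a numeric comparison is more reliable than eyeballing the process list; a sketch (illustrative only, not part of the STIG text):
# Illustrative only: extract audit-log-maxage and compare it against the 30-day minimum.
maxage=$(/bin/ps -ef | grep kube-apiserver | grep -v grep | tr ' ' '\n' | grep '^--audit-log-maxage=' | cut -d= -f2)
if [ -n "${maxage}" ] && [ "${maxage}" -ge 30 ]; then
  echo "audit-log-maxage=${maxage} (compliant)"
else
  echo "audit-log-maxage missing or below 30 (finding)"
fi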
File system permissions:

1. Ensure correct permissions of the files in /etc/rancher/rke2:
cd /etc/rancher/rke2
ls -l
all owners are root:root
all permissions are 0600

2. Ensure correct permissions of the files in /var/lib/rancher/rke2:
cd /var/lib/rancher/rke2
ls -l
all owners are root:root

3. Ensure correct permissions of the files and directories in /var/lib/rancher/rke2/agent:
cd /var/lib/rancher/rke2/agent
ls -l
owners and group are root:root
File permissions set to 0640 for the following:
rke2controller.kubeconfig
kubelet.kubeconfig
kubeproxy.kubeconfig
Certificate file permissions set to 0600:
client-ca.crt
client-kubelet.crt
client-kube-proxy.crt
client-rke2-controller.crt
server-ca.crt
serving-kubelet.crt
Key file permissions set to 0600:
client-kubelet.key
serving-kubelet.key
client-rke2-controller.key
client-kube-proxy.key
Directory permissions set to 0700:
pod-manifests
etc

4. Ensure correct permissions of the files in /var/lib/rancher/rke2/bin:
cd /var/lib/rancher/rke2/bin
ls -l
all owners are root:root
all files are 0750

5. Ensure correct permissions of the directory /var/lib/rancher/rke2/data:
cd /var/lib/rancher/rke2
ls -l
all owners are root:root
permissions are 0750

6. Ensure correct permissions of each file in /var/lib/rancher/rke2/data:
cd /var/lib/rancher/rke2/data
ls -l
all owners are root:root
all files are 0640

7. Ensure correct permissions of /var/lib/rancher/rke2/server:
cd /var/lib/rancher/rke2/server
ls -l
all owners are root:root
The following directories are set to 0700:
cred
db
tls
The following directories are set to 0750:
manifests
logs
The following file is set to 0600:
token

8. Ensure the RKE2 Server configuration file on all RKE2 Server hosts (cat /etc/rancher/rke2/config.yaml) contains the following:
write-kubeconfig-mode: "0600"

If any of the permissions specified above do not match the required level, this is a finding.
File system permissions:

1. Fix permissions of the files in /etc/rancher/rke2:
cd /etc/rancher/rke2
chmod 0600 ./*
chown root:root ./*
ls -l

2. Fix permissions of the files in /var/lib/rancher/rke2:
cd /var/lib/rancher/rke2
chown root:root ./*
ls -l

3. Fix permissions of the files and directories in /var/lib/rancher/rke2/agent:
cd /var/lib/rancher/rke2/agent
chown root:root ./*
chmod 0700 pod-manifests
chmod 0700 etc
find . -maxdepth 1 -type f -name "*.kubeconfig" -exec chmod 0640 {} \;
find . -maxdepth 1 -type f -name "*.crt" -exec chmod 0600 {} \;
find . -maxdepth 1 -type f -name "*.key" -exec chmod 0600 {} \;
ls -l

4. Fix permissions of the files in /var/lib/rancher/rke2/bin:
cd /var/lib/rancher/rke2/bin
chown root:root ./*
chmod 0750 ./*
ls -l

5. Fix permissions of the /var/lib/rancher/rke2/data directory:
cd /var/lib/rancher/rke2
chown root:root data
chmod 0750 data
ls -l

6. Fix permissions of the files in /var/lib/rancher/rke2/data:
cd /var/lib/rancher/rke2/data
chown root:root ./*
chmod 0640 ./*
ls -l

7. Fix permissions in /var/lib/rancher/rke2/server:
cd /var/lib/rancher/rke2/server
chown root:root ./*
chmod 0700 cred
chmod 0700 db
chmod 0700 tls
chmod 0750 manifests
chmod 0750 logs
chmod 0600 token
ls -l

Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:
write-kubeconfig-mode: "0600"

Once the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server
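A read-only audit of ownership and modes is often easier than walking each directory by hand; the sketch below (illustrative, not STIG text; it assumes GNU find and stat, as found on typical RKE2 hosts) prints mode, owner, and path so values can be compared against the list above.
# Illustrative only: list mode, owner:group, and path for the RKE2 directories named above.
for dir in /etc/rancher/rke2 /var/lib/rancher/rke2 /var/lib/rancher/rke2/agent \
           /var/lib/rancher/rke2/bin /var/lib/rancher/rke2/data /var/lib/rancher/rke2/server; do
  echo "=== ${dir} ==="
  find "${dir}" -maxdepth 1 -exec stat -c '%a %U:%G %n' {} \;
done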
Ensure the RKE2 Server configuration file on all RKE2 Server hosts contains a "disable" flag only if there are default RKE2 components that need to be disabled. If there are no default components that need to be disabled, this is not a finding.

Run this command on the RKE2 Control Plane:
cat /etc/rancher/rke2/config.yaml

RKE2 allows disabling the following components. If any of the components are not required, they can be disabled:
- rke2-canal
- rke2-coredns
- rke2-ingress-nginx
- rke2-kube-proxy
- rke2-metrics-server

If services not in use are enabled, this is a finding.
Disable unnecessary RKE2 components. Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, so that it contains a "disable" flag if any default RKE2 components are unnecessary. Example:
disable: rke2-canal
disable: rke2-coredns
disable: rke2-ingress-nginx
disable: rke2-kube-proxy
disable: rke2-metrics-server
Once the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server
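As an aside (an assumption to verify against the RKE2 documentation for the deployed release), the disabled components are commonly written as a single YAML list under one "disable:" key rather than repeated keys. An illustrative review of the result after the restart:
# Illustrative only: review what is disabled and which packaged components are still deployed.
grep -n 'disable' /etc/rancher/rke2/config.yaml
/var/lib/rancher/rke2/bin/kubectl get pods -n kube-system
# RKE2 delivers packaged components as HelmChart resources; listing them can also help (if the CRD is present).
/var/lib/rancher/rke2/bin/kubectl get helmcharts -n kube-system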
Check Ports, Protocols, and Services (PPS).

Change to the /var/lib/rancher/rke2/agent/pod-manifests directory on the Kubernetes RKE2 Control Plane. Run the commands:
grep -i insecure-port kube-apiserver.yaml
grep -i secure-port kube-apiserver.yaml
grep -i etcd-servers kube-apiserver.yaml

Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/

Any manifest and namespace PPS or services configuration not in compliance with the PPSM CAL or otherwise approved by the information system security officer (ISSO) is a finding. If there are any ports, protocols, or services in the system documentation not in compliance with the PPSM CAL or otherwise approved by the ISSO, this is a finding. Any PPS not set in the system documentation is a finding.

Verify the API Server network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements or otherwise approved by the ISSO is a finding. Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/

Running the following commands individually will show what ports are currently configured to be used by each of the core components. Inspect this output and ensure only proper ports are being utilized. If any ports other than the proper ports are being used, this is a finding.
/var/lib/rancher/rke2/bin/kubectl get po -n kube-system -l component=kube-controller-manager -o=jsonpath="{.items[*].spec.containers[*].args}"
/var/lib/rancher/rke2/bin/kubectl get po -n kube-system -l component=kube-scheduler -o=jsonpath="{.items[*].spec.containers[*].args}"
/var/lib/rancher/rke2/bin/kubectl get po -n kube-system -l component=kube-apiserver -o=jsonpath="{.items[*].spec.containers[*].args}"

Verify user pods: user pods will also need to be inspected to ensure compliance. This will need to be done on a case-by-case basis.
cat /var/lib/rancher/rke2/server/db/etcd/config

If any ports other than the proper ports are being used, or have not otherwise been approved by the ISSO, this is a finding.
Review the documentation covering how to set these ports, protocols, and services, and update the configuration file /etc/rancher/rke2/config.yaml accordingly. Once the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server
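One practical way to compare the configured values against what is actually exposed is to list the ports RKE2 components are listening on (illustrative sketch, not STIG text; assumes the ss utility is available and the command is run as root on each node).
# Illustrative only: list listening TCP ports owned by RKE2 control plane and node components.
ss -tlnp | grep -E 'kube-apiserver|kube-controller|kube-scheduler|kubelet|kube-proxy|etcd'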
On the RKE2 Control Plane, run the following commands:
kubectl get pods -A
kubectl get jobs -A
kubectl get cronjobs -A
This will output all running pods, jobs, and cronjobs.
Evaluate each of the above using the respective commands below:
kubectl get pod -n <namespace> <pod> -o yaml
kubectl get job -n <namespace> <job> -o yaml
kubectl get cronjob -n <namespace> <cronjob> -o yaml
If any contain sensitive values as environment variables, this is a finding.
Any secrets stored as environment variables must be moved to the secret files with the proper protections and enforcements or placed within a password vault.
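To help locate candidates before remediating, environment variable names can be scanned cluster-wide (illustrative sketch; the name patterns are assumptions, and a matching name alone does not prove a finding — inspect each flagged pod).
# Illustrative only: list pods whose container env variable names suggest embedded secrets.
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{": "}{.spec.containers[*].env[*].name}{"\n"}{end}' \
  | grep -Ei 'pass|token|secret|key'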
Ensure the streaming-connection-idle-timeout argument is set correctly. Run this command on each node:
/bin/ps -ef | grep kubelet | grep -v grep
If --streaming-connection-idle-timeout is set to less than "5m" or the parameter is not configured, this is a finding.
Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:
kubelet-arg:
- streaming-connection-idle-timeout=5m
Once the configuration file is updated, restart the RKE2 Service. Run the command "systemctl restart rke2-server" for server hosts and "systemctl restart rke2-agent" for agent hosts.
Ensure the protect-kernel-defaults argument is set correctly. Run this command on each node:
/bin/ps -ef | grep kubelet | grep -v grep
If --protect-kernel-defaults is not set to "true" or is not configured, this is a finding.
Edit the Kubernetes kubelet settings in the RKE2 configuration file /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following:
--protect-kernel-defaults=true
Once the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server
Audit logging and policies: Edit the /etc/rancher/rke2/config.yaml file and enable the audit policy:
audit-policy-file: /etc/rancher/rke2/audit-policy.yaml

1. Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, so that it contains the required configuration.
--audit-policy-file= Path to the file that defines the audit policy configuration. (Example: /etc/rancher/rke2/audit-policy.yaml)
--audit-log-mode=blocking-strict
If the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server

2. Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, so that it contains the required configuration.
If using RKE2 v1.24 or older, set: profile: cis-1.6
If using RKE2 v1.25 or newer, set: profile: cis-1.23
If the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server

3. Edit the audit policy file, by default located at /etc/rancher/rke2/audit-policy.yaml, to look like the following:

apiVersion: audit.k8s.io/v1
kind: Policy
metadata:
  name: rke2-audit-policy
rules:
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
- level: RequestResponse
  resources:
  - group: ""
    resources: ["*"]

If configuration files are updated on a host, restart the RKE2 Service. Run the command "systemctl restart rke2-server" for server hosts and "systemctl restart rke2-agent" for agent hosts.
System namespaces are reserved and isolated. A resource cannot move to a new namespace; the resource must be deleted and recreated in the new namespace.
kubectl delete <resource_type> <resource_name>
kubectl create -f <resource.yaml> --namespace=<user_created_namespace>
If using RKE2 v1.24 or older:

On the Server Node, run the command:
kubectl get podsecuritypolicy

For any pod security policies listed, with the exception of system-unrestricted-psp (which is required for core Kubernetes functionality), edit the policy with the command:
kubectl edit podsecuritypolicy policyname
where policyname is the name of the policy.

Review the runAsUser, supplementalGroups, and fsGroup sections of the policy.
If any of these sections are missing, this is a finding.
If the rule within the runAsUser section is not set to "MustRunAsNonRoot", this is a finding.
If the ranges within the supplementalGroups section have min set to "0" or min is missing, this is a finding.
If the ranges within the fsGroup section have min set to "0" or min is missing, this is a finding.

If using RKE2 v1.25 or newer:

On each control plane node, validate that the file "/etc/rancher/rke2/rke2-pss.yaml" exists and the default configuration settings match the following:
defaults:
  audit: restricted
  audit-version: latest
  enforce: restricted
  enforce-version: latest
  warn: restricted
  warn-version: latest

If the configuration file differs from the above, this is a finding.
If using RKE2 v1.24 or older:

On each Control Plane node, create the following policy in a file called restricted.yml.

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
  privileged: false # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  # This is redundant with non-root + disallow privilege escalation,
  # but we can provide it for defense in depth.
  requiredDropCapabilities:
  - ALL
  # Allow core volume types.
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  # Assume that persistentVolumes set up by the cluster admin are safe to use.
  - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  readOnlyRootFilesystem: false

To implement the policy, run the command:
kubectl create -f restricted.yml

If using RKE2 v1.25 or newer:

On each Control Plane node, create the file "/etc/rancher/rke2/rke2-pss.yaml" and add the following content:

apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "restricted"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: [kube-system, cis-operator-system, tigera-operator]

Ensure the namespace exemptions contain only namespaces requiring access to capabilities outside of the restricted settings above.

Once the file is created, restart the Control Plane nodes with:
systemctl restart rke2-server
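On RKE2 v1.25 or newer, the resulting defaults can be spot-checked without opening an editor (illustrative only; assumes the default file path given above and that the defaults and exemptions blocks fit within the grep context shown).
# Illustrative only: confirm the Pod Security admission defaults and exemptions.
grep -A 8 'defaults:' /etc/rancher/rke2/rke2-pss.yaml
grep -A 4 'exemptions:' /etc/rancher/rke2/rke2-pss.yaml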
Ensure authorization-mode is set correctly in the apiserver. Run this command on the RKE2 Control Plane:
/bin/ps -ef | grep kube-apiserver | grep -v grep
If --authorization-mode is not set to "RBAC,Node" or is not configured, this is a finding. (By default, RKE2 sets Node,RBAC as the parameter to the --authorization-mode argument.)
Edit the /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml file and set:
--authorization-mode=RBAC,Node
Once the configuration file is updated, restart the RKE2 Server. Run the command: systemctl restart rke2-server
Review the encryption configuration file. As root or with root permissions, run the following command:
view /var/lib/rancher/rke2/server/cred/encryption-config.json
Ensure the RKE2 configuration file on all RKE2 servers, located at /etc/rancher/rke2/config.yaml, does NOT contain:
secrets-encryption: false
If secrets encryption is turned off, this is a finding.
Enable secrets encryption. Edit the RKE2 configuration file on all RKE2 servers, located at /etc/rancher/rke2/config.yaml, so that it contains: secrets-encryption: true
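Whether encryption is active can be confirmed from the API server arguments and the generated provider configuration (illustrative sketch, not STIG text; the "rke2 secrets-encrypt status" subcommand is assumed to be available on recent RKE2 releases).
# Illustrative only: verify secrets encryption is wired into the API server.
grep 'secrets-encryption' /etc/rancher/rke2/config.yaml
/bin/ps -ef | grep kube-apiserver | grep -v grep | tr ' ' '\n' | grep '^--encryption-provider-config='
rke2 secrets-encrypt status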
To view all pods and the images used to create the pods, from the RKE2 Control Plane, run the following command:
kubectl get pods --all-namespaces -o jsonpath="{..image}" | \
tr -s '[[:space:]]' '\n' | \
sort | \
uniq -c
Review the images used for pods running within Kubernetes. If there are multiple versions of the same image, this is a finding.
Remove any old pods that are using older images. On the RKE2 Control Plane, run the command: kubectl delete pod podname (Note: "podname" is the name of the pod to delete.) Run the command: systemctl restart rke2-server
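The duplicate-version condition from the check can be narrowed down with a small pipeline (illustrative only; images pinned by digest rather than tag, or untagged images pulled from a registry that includes a port, may need manual review).
# Illustrative only: list image repositories that are running under more than one tag.
kubectl get pods --all-namespaces -o jsonpath="{..image}" \
  | tr -s '[[:space:]]' '\n' | sort -u \
  | sed 's/:[^:]*$//' | sort | uniq -d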
Authenticate on the RKE2 Control Plane.
Verify all nodes in the cluster are running a supported version of RKE2 Kubernetes. Run the command:
kubectl get nodes
If any nodes are running an unsupported version of RKE2 Kubernetes, this is a finding.
Verify all images running in the cluster are patched to the latest version. Run the command:
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c
If any images running in the cluster are not the latest version, this is a finding.
Note: Kubernetes release support levels can be found at: https://kubernetes.io/releases/
Upgrade RKE2 to a supported version. Institute and adhere to the policies and procedures to ensure that patches are consistently applied within the time allowed.