Rancher Government Solutions RKE2 Security Technical Implementation Guide
Digest of Updates (7)
Comparison against the immediately prior release (V1R1). Rules are matched by Group Vuln ID; content-change detection compares each rule's description, check, and fix text after stripping inline markup, so cosmetic-only edits are not flagged.
Content changes: 7
- V-254553 (High; check updated): Rancher RKE2 must protect authenticity of communications sessions with the use of FIPS-validated 140-2 or 140-3 security requirements for cryptographic modules.
- V-254555 (Medium; check and fix updated): Rancher RKE2 components must be configured in accordance with the security configuration settings based on DoD security configuration or implementation guidance, including SRGs, STIGs, NSA configuration guides, CTOs, and DTMs.
- V-254558 (High; check and fix updated): The Kubernetes API server must have the insecure port flag disabled.
- V-254566 (Medium; check updated): Rancher RKE2 runtime must enforce ports, protocols, and services that adhere to the PPSM CAL.
- V-254567 (Medium; check updated): Rancher RKE2 must store only cryptographic representations of passwords.
- V-254568 (Medium; fix updated): Rancher RKE2 must terminate all network connections associated with a communications session at the end of the session, or as follows: for in-band management sessions (privileged sessions), the session must be terminated after five minutes of inactivity.
- V-254571 (Medium; check and fix updated): Rancher RKE2 must prevent nonprivileged users from executing privileged functions to include disabling, circumventing, or altering implemented security safeguards/countermeasures.
- RMF Control: AC-17
- Severity: High
- CCI: CCI-000068
- Version: CNTR-R2-000010
- Vuln IDs: V-254553
- Rule IDs: SV-254553r894451_rule
Checks: C-58037r894450_chk
Use strong TLS settings. On an RKE2 server, run each command:

/bin/ps -ef | grep kube-apiserver | grep -v grep
/bin/ps -ef | grep kube-controller-manager | grep -v grep
/bin/ps -ef | grep kube-scheduler | grep -v grep

For each, look for the existence of tls-min-version (appending "| grep tls-min-version" can help). If the setting "tls-min-version" is not configured, or it is set to "VersionTLS10" or "VersionTLS11", this is a finding.

For each, look for the existence of tls-cipher-suites. If "tls-cipher-suites" is not set for all servers, or does not contain the following, this is a finding:

--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
Fix: F-57986r859228_fix
Use strong TLS settings. Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:

kube-controller-manager-arg:
- "tls-min-version=VersionTLS12" [or higher]
- "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
kube-scheduler-arg:
- "tls-min-version=VersionTLS12"
- "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
kube-apiserver-arg:
- "tls-min-version=VersionTLS12"
- "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"

Once the configuration file is updated, restart the RKE2 Server. Run the command:

systemctl restart rke2-server
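The following is a minimal verification sketch, not part of the official check text, that loops over the three control-plane processes and prints any TLS-related flags so they can be compared against the required values:

# Print the TLS arguments of each control-plane process, if present
for c in kube-apiserver kube-controller-manager kube-scheduler; do
  echo "== $c =="
  /bin/ps -ef | grep "$c" | grep -v grep | tr ' ' '\n' | grep -E -- '--tls-(min-version|cipher-suites)' || echo "no TLS flags found"
done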
- RMF Control: AC-2
- Severity: Medium
- CCI: CCI-000015
- Version: CNTR-R2-000030
- Vuln IDs: V-254554
- Rule IDs: SV-254554r879522_rule
Checks: C-58038r859230_chk
Ensure the use-service-account-credentials argument is set correctly. Run this command on the RKE2 Control Plane:

/bin/ps -ef | grep kube-controller-manager | grep -v grep

If --use-service-account-credentials is not set to "true" or is not configured, this is a finding.
Fix: F-57987r859231_fix
Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml on the RKE2 Control Plane to set the following parameter:

--use-service-account-credentials=true

Once the configuration file is updated, restart the RKE2 Server. Run the command:

systemctl restart rke2-server
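RKE2 regenerates its static pod manifests from its own configuration, so an edit made directly to kube-controller-manager.yaml may not survive a restart. A hedged alternative, assuming the standard kube-controller-manager-arg pass-through in the RKE2 configuration file:

# /etc/rancher/rke2/config.yaml -- flag is passed through to the controller manager
kube-controller-manager-arg:
- "use-service-account-credentials=true"

Restart afterward with: systemctl restart rke2-server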
- RMF Control: AC-2
- Severity: Medium
- CCI: CCI-000018
- Version: CNTR-R2-000060
- Vuln IDs: V-254555
- Rule IDs: SV-254555r894454_rule
Checks: C-58039r894452_chk
Audit logging and policies:

1. On all hosts running RKE2 Server, run the command:

/bin/ps -ef | grep kube-apiserver | grep -v grep

If --audit-policy-file is not set, this is a finding.
If --audit-log-mode is not set to "blocking-strict", this is a finding.

2. Ensure the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, contains a CIS profile setting. Run the following command:

cat /etc/rancher/rke2/config.yaml

If a value for profile is not found, this is a finding. (Example: "profile: cis-1.6")

3. Check the contents of the audit policy file. By default, RKE2 expects the audit policy file to be located at /etc/rancher/rke2/audit-policy.yaml; however, this location can be overridden in the /etc/rancher/rke2/config.yaml file with the argument 'kube-apiserver-arg: "audit-policy-file=/etc/rancher/rke2/audit-policy.yaml"'. If the audit policy file does not exist or does not look like the following, this is a finding.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/vX (where X is the latest apiVersion)
kind: Policy
rules:
- level: RequestResponse
Fix: F-57988r894453_fix
Audit logging and policies:

Edit the /etc/rancher/rke2/config.yaml file, and enable the audit policy:

audit-policy-file: /etc/rancher/rke2/audit-policy.yaml

1. Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, so that it contains the required configuration:

--audit-policy-file= Path to the file that defines the audit policy configuration. (Example: /etc/rancher/rke2/audit-policy.yaml)
--audit-log-mode=blocking-strict

If the configuration file is updated, restart the RKE2 Server. Run the command:

systemctl restart rke2-server

2. Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, so that it contains the required configuration. For example:

profile: cis-1.6

If the configuration file is updated, restart the RKE2 Server. Run the command:

systemctl restart rke2-server

3. Edit the audit policy file, by default located at /etc/rancher/rke2/audit-policy.yaml, to look like the following:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If configuration files are updated on a host, restart the RKE2 service. Run 'systemctl restart rke2-server' on server hosts and 'systemctl restart rke2-agent' on agent hosts.
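Taken together, a hedged sketch of the relevant portion of /etc/rancher/rke2/config.yaml after all three fixes (the audit policy path and the CIS profile value are examples, not mandates):

# /etc/rancher/rke2/config.yaml (RKE2 Server hosts)
profile: cis-1.6                                        # CIS hardening profile (step 2)
audit-policy-file: /etc/rancher/rke2/audit-policy.yaml  # audit policy location (step 1)
kube-apiserver-arg:
- "audit-log-mode=blocking-strict"                      # block requests until the audit entry is written (step 1)

Then restart: systemctl restart rke2-server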
- RMF Control: AC-3
- Severity: Medium
- CCI: CCI-000213
- Version: CNTR-R2-000100
- Vuln IDs: V-254556
- Rule IDs: SV-254556r879530_rule
Checks: C-58040r859236_chk
Ensure bind-address is set correctly. Run this command on the RKE2 Control Plane:

/bin/ps -ef | grep kube-controller-manager | grep -v grep

If --bind-address is not set to "127.0.0.1" or is not configured, this is a finding.
Fix: F-57989r859237_fix
Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml on the RKE2 Control Plane to set the following parameter:

--bind-address=127.0.0.1

Once the configuration file is updated, restart the RKE2 Server. Run the command:

systemctl restart rke2-server
- RMF Control: AC-3
- Severity: Medium
- CCI: CCI-000213
- Version: CNTR-R2-000110
- Vuln IDs: V-254557
- Rule IDs: SV-254557r879530_rule
Checks: C-58041r859239_chk
Ensure anonymous-auth is set correctly so anonymous requests will be rejected. Run this command on each node:

/bin/ps -ef | grep kubelet | grep -v grep

If --anonymous-auth is set to "true" or is not configured, this is a finding.
Fix: F-57990r859240_fix
Edit the Kubernetes kubelet configuration file /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following:

--anonymous-auth=false

Once the configuration file is updated, restart the RKE2 Agent. Run the command:

systemctl restart rke2-agent
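The same kubelet-arg pass-through in /etc/rancher/rke2/config.yaml can carry the other kubelet flags this guide requires (see CNTR-R2-000130, -000150, -000890, and -000940 below). A hedged sketch combining them:

# /etc/rancher/rke2/config.yaml -- flags passed through to the kubelet
kubelet-arg:
- "anonymous-auth=false"                  # this rule
- "read-only-port=0"                      # CNTR-R2-000130
- "authorization-mode=Webhook"            # CNTR-R2-000150
- "streaming-connection-idle-timeout=5m"  # CNTR-R2-000890
- "protect-kernel-defaults=true"          # CNTR-R2-000940

Restart rke2-server on server hosts or rke2-agent on agent hosts after editing.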
- RMF Control: AC-3
- Severity: High
- CCI: CCI-000213
- Version: CNTR-R2-000120
- Vuln IDs: V-254558
- Rule IDs: SV-254558r894457_rule
Checks: C-58042r894455_chk
Ensure insecure-port is set correctly. If running v1.20 through v1.23, this is the default configuration, so no change is necessary if it is not configured. If running v1.24, this check is Not Applicable.

Run this command on the RKE2 Control Plane:

/bin/ps -ef | grep kube-apiserver | grep -v grep

If --insecure-port is not set to "0" or is not configured, this is a finding.
Fix: F-57991r894456_fix
Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:

kube-apiserver-arg:
- insecure-port=0

Once the configuration file is updated, restart the RKE2 Server. Run the command:

systemctl restart rke2-server
- RMF Control: AC-3
- Severity: High
- CCI: CCI-000213
- Version: CNTR-R2-000130
- Vuln IDs: V-254559
- Rule IDs: SV-254559r879530_rule
Checks: C-58043r870253_chk
Ensure read-only-port is set correctly so anonymous requests will be rejected. Run this command on each node:

/bin/ps -ef | grep kubelet | grep -v grep

If --read-only-port is not set to "0" or is not configured, this is a finding.
Fix: F-57992r859246_fix
Edit the Kubernetes kubelet configuration file /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following:

--read-only-port=0

Once the configuration file is updated, restart the RKE2 Agent. Run the command:

systemctl restart rke2-agent
- RMF Control: AC-3
- Severity: High
- CCI: CCI-000213
- Version: CNTR-R2-000140
- Vuln IDs: V-254560
- Rule IDs: SV-254560r879530_rule
Checks: C-58044r859248_chk
Ensure insecure-bind-address is set correctly. Run the command:

ps -ef | grep kube-apiserver

If the setting insecure-bind-address is found and set to "localhost" in the Kubernetes API manifest file, this is a finding.
Fix: F-57993r859249_fix
Edit /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml on the Kubernetes RKE2 Control Plane and remove the value for the --insecure-bind-address setting.

Once the configuration file is updated, restart the RKE2 Server. Run the command:

systemctl restart rke2-server
- RMF Control: AC-3
- Severity: High
- CCI: CCI-000213
- Version: CNTR-R2-000150
- Vuln IDs: V-254561
- Rule IDs: SV-254561r879530_rule
Checks: C-58045r870255_chk
Ensure authorization-mode is set correctly in the kubelet. Run this command on each node:

/bin/ps -ef | grep kubelet | grep -v grep

If --authorization-mode is not set to "Webhook" or is not configured, this is a finding.
Fix: F-57994r859252_fix
Edit the Kubernetes kubelet configuration file /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following:

--authorization-mode=Webhook

Once the configuration file is updated, restart the RKE2 Agent. Run the command:

systemctl restart rke2-agent
- RMF Control: AC-3
- Severity: High
- CCI: CCI-000213
- Version: CNTR-R2-000160
- Vuln IDs: V-254562
- Rule IDs: SV-254562r879530_rule
Checks: C-58046r859254_chk
Ensure the anonymous-auth argument is set correctly. Run this command on the RKE2 Control Plane:

/bin/ps -ef | grep kube-apiserver | grep -v grep

If --anonymous-auth is set to "true" or is not configured, this is a finding.
Fix: F-57995r859255_fix
Edit the /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml file and set:

--anonymous-auth=false

Once the configuration file is updated, restart the RKE2 Server. Run the command:

systemctl restart rke2-server
- RMF Control: AU-3
- Severity: Medium
- CCI: CCI-001487
- Version: CNTR-R2-000320
- Vuln IDs: V-254563
- Rule IDs: SV-254563r879568_rule
Checks: C-58047r859257_chk
Ensure audit-log-maxage is set correctly. Run the below command on the RKE2 Control Plane:

/bin/ps -ef | grep kube-apiserver | grep -v grep

If the --audit-log-maxage argument is not set to at least 30 or is not configured, this is a finding. (By default, RKE2 sets the --audit-log-maxage argument to 30.)
Fix: F-57996r859258_fix
Edit the /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml file and set:

--audit-log-maxage=30

Once the configuration file is updated, restart the RKE2 Server. Run the command:

systemctl restart rke2-server
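As with the kubelet flags, the API server flags in the preceding rules can be set persistently through kube-apiserver-arg in /etc/rancher/rke2/config.yaml rather than by editing the generated manifest, which RKE2 may rewrite on restart. A hedged sketch:

# /etc/rancher/rke2/config.yaml (RKE2 Server hosts)
kube-apiserver-arg:
- "anonymous-auth=false"   # CNTR-R2-000160
- "audit-log-maxage=30"    # this rule

Then restart: systemctl restart rke2-server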
- RMF Control: CM-5
- Severity: Medium
- CCI: CCI-001499
- Version: CNTR-R2-000520
- Vuln IDs: V-254564
- Rule IDs: SV-254564r879586_rule
Checks: C-58048r859260_chk
File system permissions:

1. Ensure correct permissions of the files in /etc/rancher/rke2:
cd /etc/rancher/rke2
ls -l
All owners are root:root. All permissions are 0640.

2. Ensure correct permissions of the files in /var/lib/rancher/rke2:
cd /var/lib/rancher/rke2
ls -l
All owners are root:root.

3. Ensure correct permissions of the files and directories in /var/lib/rancher/rke2/agent:
cd /var/lib/rancher/rke2/agent
ls -l
Owners and group are root:root.
File permissions set to 0640 for the following:
rke2controller.kubeconfig
kubelet.kubeconfig
kubeproxy.kubeconfig
Certificate file permissions set to 0600:
client-ca.crt
client-kubelet.crt
client-kube-proxy.crt
client-rke2-controller.crt
server-ca.crt
serving-kubelet.crt
Key file permissions set to 0600:
client-kubelet.key
serving-kubelet.key
client-rke2-controller.key
client-kube-proxy.key
Directory permissions set to 0700:
pod-manifests
etc

4. Ensure correct permissions of the files in /var/lib/rancher/rke2/bin:
cd /var/lib/rancher/rke2/bin
ls -l
All owners are root:root. All files are 0750.

5. Ensure correct permissions of the directory /var/lib/rancher/rke2/data:
cd /var/lib/rancher/rke2
ls -l
All owners are root:root. Permissions are 0750.

6. Ensure correct permissions of each file in /var/lib/rancher/rke2/data:
cd /var/lib/rancher/rke2/data
ls -l
All owners are root:root. All files are 0640.

7. Ensure correct permissions of /var/lib/rancher/rke2/server:
cd /var/lib/rancher/rke2/server
ls -l
All owners are root:root.
The following directories are set to 0700:
cred
db
tls
The following directories are set to 0750:
manifests
logs
The following file is set to 0600:
token

8. Ensure the RKE2 Server configuration file on all RKE2 Server hosts contains the following (cat /etc/rancher/rke2/config.yaml):
write-kubeconfig-mode: "0640"

If any of the permissions specified above do not match the required level, this is a finding.
Fix: F-57997r859261_fix
File system permissions:

1. Fix permissions of the files in /etc/rancher/rke2:
cd /etc/rancher/rke2
chmod 0640 ./*
chown root:root ./*
ls -l

2. Fix permissions of the files in /var/lib/rancher/rke2:
cd /var/lib/rancher/rke2
chown root:root ./*
ls -l

3. Fix permissions of the files and directories in /var/lib/rancher/rke2/agent:
cd /var/lib/rancher/rke2/agent
chown root:root ./*
chmod 0700 pod-manifests
chmod 0700 etc
find . -maxdepth 1 -type f -name "*.kubeconfig" -exec chmod 0640 {} \;
find . -maxdepth 1 -type f -name "*.crt" -exec chmod 0600 {} \;
find . -maxdepth 1 -type f -name "*.key" -exec chmod 0600 {} \;
ls -l

4. Fix permissions of the files in /var/lib/rancher/rke2/bin:
cd /var/lib/rancher/rke2/bin
chown root:root ./*
chmod 0750 ./*
ls -l

5. Fix permissions of the directory /var/lib/rancher/rke2/data:
cd /var/lib/rancher/rke2
chown root:root data
chmod 0750 data
ls -l

6. Fix permissions of the files in /var/lib/rancher/rke2/data:
cd /var/lib/rancher/rke2/data
chown root:root ./*
chmod 0640 ./*
ls -l

7. Fix permissions in /var/lib/rancher/rke2/server:
cd /var/lib/rancher/rke2/server
chown root:root ./*
chmod 0700 cred db tls
chmod 0750 manifests logs
chmod 0600 token
ls -l

Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:

write-kubeconfig-mode: "0640"

Once the configuration file is updated, restart the RKE2 Server. Run the command:

systemctl restart rke2-server
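A hedged spot-check helper, separate from the official fix text, that prints mode and ownership for the sensitive items under /var/lib/rancher/rke2/server so they can be compared against the values above (GNU stat assumed):

# Print "mode owner:group path" for each listed item
for p in cred db tls manifests logs token; do
  stat -c '%a %U:%G %n' "/var/lib/rancher/rke2/server/$p"
done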
- RMF Control: CM-7
- Severity: Medium
- CCI: CCI-000381
- Version: CNTR-R2-000550
- Vuln IDs: V-254565
- Rule IDs: SV-254565r879587_rule
Checks: C-58049r859263_chk
Ensure the RKE2 Server configuration file on all RKE2 Server hosts contains a "disable" flag for all unnecessary components. Run this command on the RKE2 Control Plane:

cat /etc/rancher/rke2/config.yaml

RKE2 allows disabling the following components. If any of the components are not required, they can be disabled:
- rke2-canal
- rke2-coredns
- rke2-ingress-nginx
- rke2-kube-proxy
- rke2-metrics-server

If services not in use are enabled, this is a finding.
Fix: F-57998r859264_fix
Disable unnecessary RKE2 components. Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, so that it contains a "disable" flag for all unnecessary components. Example:

disable: rke2-canal
disable: rke2-coredns
disable: rke2-ingress-nginx
disable: rke2-kube-proxy
disable: rke2-metrics-server

Once the configuration file is updated, restart the RKE2 Server. Run the command:

systemctl restart rke2-server
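Note that repeated "disable:" keys are not valid YAML; most parsers keep only the last entry. A hedged sketch of the equivalent configuration written as a YAML sequence (assuming RKE2 accepts a list for this option, and disabling only components that are genuinely unused):

# /etc/rancher/rke2/config.yaml
disable:
- rke2-canal
- rke2-coredns
- rke2-ingress-nginx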
- RMF Control: CM-7
- Severity: Medium
- CCI: CCI-000382
- Version: CNTR-R2-000580
- Vuln IDs: V-254566
- Rule IDs: SV-254566r894459_rule
Checks: C-58050r894458_chk
Check Ports, Protocols, and Services (PPS).

Change to the /var/lib/rancher/rke2/agent/pod-manifests directory on the Kubernetes RKE2 Control Plane. Run the commands:

grep -i insecure-port kube-apiserver.yaml
grep -i secure-port kube-apiserver.yaml
grep -i etcd-servers kube-apiserver.yaml

Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/

Any manifest and namespace PPS or services configuration not in compliance with the PPSM CAL is a finding.
If there are any ports, protocols, or services in the system documentation not in compliance with the PPSM CAL, this is a finding.
Any PPS not set in the system documentation is a finding.

Verify the API Server network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.

Running the following commands individually will show which ports each of the core components is currently configured to use. Inspect the output and ensure only proper ports are being utilized. If any ports not defined as the proper ports are being used, this is a finding.

/var/lib/rancher/rke2/bin/kubectl get po -n kube-system -l component=kube-controller-manager -o=jsonpath="{.items[*].spec.containers[*].args}"
/var/lib/rancher/rke2/bin/kubectl get po -n kube-system -l component=kube-scheduler -o=jsonpath="{.items[*].spec.containers[*].args}"
/var/lib/rancher/rke2/bin/kubectl get po -n kube-system -l component=kube-apiserver -o=jsonpath="{.items[*].spec.containers[*].args}" | grep tls-min-version

Verify user pods: user pods must also be inspected for compliance, on a case-by-case basis.

Check the etcd configuration as well:

cat /var/lib/rancher/rke2/server/db/etcd/config

If any ports not defined as the proper ports are being used, this is a finding.
Fix: F-57999r859267_fix
Review the documentation covering how to set these PPS values, and update the configuration file /etc/rancher/rke2/config.yaml accordingly.

Once the configuration file is updated, restart the RKE2 Server. Run the command:

systemctl restart rke2-server
- RMF Control: IA-5
- Severity: Medium
- CCI: CCI-000196
- Version: CNTR-R2-000800
- Vuln IDs: V-254567
- Rule IDs: SV-254567r894461_rule
Checks: C-58051r894460_chk
On the RKE2 Control Plane, run the following commands:

kubectl get pods -A
kubectl get jobs -A
kubectl get cronjobs -A

This will output all running pods, jobs, and cronjobs. Evaluate each using the respective commands below:

kubectl get pod -n <namespace> <pod> -o yaml
kubectl get job -n <namespace> <job> -o yaml
kubectl get cronjob -n <namespace> <cronjob> -o yaml

If any contain sensitive values as environment variables, this is a finding.
Fix: F-58000r859270_fix
Any secrets stored as environment variables must be moved to secret files with the proper protections and enforcements, or placed within a password vault.
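A hedged illustration of that pattern; the names (db-credentials, /etc/app/creds, the app image) are hypothetical. The value lives in a Kubernetes Secret mounted as a file rather than appearing in the pod's environment:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials          # hypothetical name
type: Opaque
stringData:
  password: change-me           # placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: app                     # hypothetical pod
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    volumeMounts:
    - name: creds
      mountPath: /etc/app/creds # secret contents appear as files here
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-credentials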
- RMF Control: SC-10
- Severity: Medium
- CCI: CCI-001133
- Version: CNTR-R2-000890
- Vuln IDs: V-254568
- Rule IDs: SV-254568r894464_rule
Checks: C-58052r894462_chk
Ensure the streaming-connection-idle-timeout argument is set correctly. Run this command on each node:

/bin/ps -ef | grep kubelet | grep -v grep

If --streaming-connection-idle-timeout is set to less than "5m" or the parameter is not configured, this is a finding.
Fix: F-58001r894463_fix
Edit the RKE2 Server configuration file on all RKE2 Server hosts, located at /etc/rancher/rke2/config.yaml, to contain the following:

kubelet-arg:
- streaming-connection-idle-timeout=5m

Once the configuration file is updated, restart the RKE2 Agent. Run the command:

systemctl restart rke2-agent
- RMF Control: SC-3
- Severity: Medium
- CCI: CCI-001084
- Version: CNTR-R2-000940
- Vuln IDs: V-254569
- Rule IDs: SV-254569r879643_rule
Checks: C-58053r859275_chk
Ensure the protect-kernel-defaults argument is set correctly. Run this command on each node:

/bin/ps -ef | grep kubelet | grep -v grep

If --protect-kernel-defaults is not set to "true" or is not configured, this is a finding.
Fix: F-58002r859276_fix
Edit the Kubernetes kubelet configuration file /etc/rancher/rke2/config.yaml on the RKE2 Control Plane and set the following:

--protect-kernel-defaults=true

Once the configuration file is updated, restart the RKE2 Agent. Run the command:

systemctl restart rke2-agent
- RMF Control: SC-2
- Severity: Medium
- CCI: CCI-001082
- Version: CNTR-R2-000970
- Vuln IDs: V-254570
- Rule IDs: SV-254570r879649_rule
Checks: C-58054r870259_chk
System namespaces are reserved and isolated. To view the available namespaces, run the command:

kubectl get namespaces

The namespaces to be validated include:
- default
- kube-public
- kube-system
- kube-node-lease

For the default namespace, execute the commands:
kubectl config set-context --current --namespace=default
kubectl get all

For the kube-public namespace, execute the commands:
kubectl config set-context --current --namespace=kube-public
kubectl get all

For the kube-node-lease namespace, execute the commands:
kubectl config set-context --current --namespace=kube-node-lease
kubectl get all

The only return values are the Kubernetes service objects (e.g., service/kubernetes).

For the kube-system namespace, execute the commands:
kubectl config set-context --current --namespace=kube-system
kubectl get all

The values returned include the following resources:
- ETCD
- Helm
- Kubernetes API Server
- Kubernetes Controller Manager
- Kubernetes Proxy
- Kubernetes Scheduler
- Kubernetes Networking Components
- Ingress Controller Components
- Metrics Server

If a return value from the "kubectl get all" command is not the Kubernetes service or one from the above list, this is a finding.
Fix: F-58003r870260_fix
System namespaces are reserved and isolated. A resource cannot move to a new namespace; the resource must be deleted and recreated in the new namespace.

kubectl delete <resource_type> <resource_name>
kubectl create -f <resource.yaml> --namespace=<user_created_namespace>
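A hedged helper for the check above that lists everything in each reserved namespace without changing the current kubectl context (using -n rather than set-context, so the operator's context is left untouched):

for ns in default kube-public kube-node-lease kube-system; do
  echo "== $ns =="
  kubectl get all -n "$ns"
done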
- RMF Control: AC-6
- Severity: Medium
- CCI: CCI-002233
- Version: CNTR-R2-001130
- Vuln IDs: V-254571
- Rule IDs: SV-254571r894467_rule
Checks: C-58055r894465_chk
On the Server Node, run the command:

kubectl get podsecuritypolicy

For any pod security policies listed, with the exception of system-unrestricted-psp (which is required for core Kubernetes functionality), edit the policy with the command:

kubectl edit podsecuritypolicy policyname

(where policyname is the name of the policy)

Review the runAsUser, supplementalGroups, and fsGroup sections of the policy.

If any of these sections are missing, this is a finding.
If the rule within the runAsUser section is not set to "MustRunAsNonRoot", this is a finding.
If the ranges within the supplementalGroups section have min set to "0" or min is missing, this is a finding.
If the ranges within the fsGroup section have min set to "0" or min is missing, this is a finding.
Fix: F-58004r894466_fix
From the Server node, save the following policy to a file called restricted.yml:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
  # Required to prevent escalations to root.
  privileged: false
  # This is redundant with non-root + disallow privilege escalation,
  # but we can provide it for defense in depth.
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
  - ALL
  # Allow core volume types.
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  # Assume that persistentVolumes set up by the cluster admin are safe to use.
  - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  readOnlyRootFilesystem: false

To implement the policy, run the command:

kubectl create -f restricted.yml
- RMF Control: CM-11
- Severity: Medium
- CCI: CCI-001812
- Version: CNTR-R2-001270
- Vuln IDs: V-254572
- Rule IDs: SV-254572r879751_rule
Checks: C-58056r859284_chk
Ensure authorization-mode is set correctly in the apiserver. Run this command on the RKE2 Control Plane:

/bin/ps -ef | grep kube-apiserver | grep -v grep

If --authorization-mode is not set to "RBAC,Node" or is not configured, this is a finding. (By default, RKE2 sets Node,RBAC as the parameter to the --authorization-mode argument.)
Fix: F-58005r859285_fix
Edit the /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml file and set:

--authorization-mode=RBAC,Node

Once the configuration file is updated, restart the RKE2 Server. Run the command:

systemctl restart rke2-server
- RMF Control: SC-28
- Severity: Medium
- CCI: CCI-002476
- Version: CNTR-R2-001500
- Vuln IDs: V-254573
- Rule IDs: SV-254573r879800_rule
Checks: C-58057r859287_chk
Review the encryption configuration file. As root or with root permissions, run the following command:

view /var/lib/rancher/rke2/server/cred/encryption-config.json

Ensure the RKE2 configuration file on all RKE2 servers, located at /etc/rancher/rke2/config.yaml, does NOT contain:

secrets-encryption: false

If secrets encryption is turned off, this is a finding.
Fix: F-58006r859288_fix
Enable secrets encryption. Edit the RKE2 configuration file on all RKE2 servers, located at /etc/rancher/rke2/config.yaml, so that it does NOT contain "secrets-encryption: false", or so that secrets-encryption is set to true.
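A hedged way to confirm the result, assuming the rke2 secrets-encrypt subcommand shipped with current RKE2 releases:

# On an RKE2 server node; should report that secrets encryption is enabled
rke2 secrets-encrypt status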
- RMF Control: SI-2
- Severity: Medium
- CCI: CCI-002617
- Version: CNTR-R2-001580
- Vuln IDs: V-254574
- Rule IDs: SV-254574r879825_rule
Checks: C-58058r859290_chk
To view all pods and the images used to create the pods, from the RKE2 Control Plane, run the following command:

kubectl get pods --all-namespaces -o jsonpath="{..image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c

Review the images used for pods running within Kubernetes. If there are multiple versions of the same image, this is a finding.
Fix: F-58007r859291_fix
Remove any old pods that are using older images. On the RKE2 Control Plane, run the command:

kubectl delete pod podname

(Note: "podname" is the name of the pod to delete.)

Run the command:

systemctl restart rke2-server
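A hedged extension of the check command (our own sketch, not part of the official text) that strips the tag from each unique image reference and flags repositories deployed under more than one tag:

# List unique image references, drop the tag (text after the final colon),
# then report repositories that appear more than once.
kubectl get pods --all-namespaces -o jsonpath="{..image}" |
tr -s '[[:space:]]' '\n' | sort -u |
sed 's/:[^:/]*$//' | sort | uniq -c |
awk '$1 > 1 {print "multiple versions deployed:", $2}'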
- RMF Control: SI-2
- Severity: Medium
- CCI: CCI-002605
- Version: CNTR-R2-001620
- Vuln IDs: V-254575
- Rule IDs: SV-254575r879827_rule
Checks: C-58059r859293_chk
Authenticate on the RKE2 Control Plane. Run the command:

kubectl version --short

If the reported versions do not comply with the Kubernetes version skew policy, this is a finding.

Note: The Kubernetes version skew policy can be found at: https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions.
Fix: F-58008r859294_fix
Upgrade RKE2 to a supported version. Institute and adhere to policies and procedures to ensure that patches are consistently applied within the time allowed.
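A hedged companion check: the kubelet version of every node appears in the VERSION column of "kubectl get nodes" and can be compared against the control plane version to confirm the skew policy is met.

# Control plane and client versions
kubectl version --short
# Per-node kubelet versions (VERSION column)
kubectl get nodes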