Rancher Government Solutions Multi-Cluster Manager Security Technical Implementation Guide
Digest of Updates: Comparison is against the immediately-prior release (V1R1). Rule matching uses the Group Vuln ID; content-change detection compares each rule's description, check, and fix text after stripping inline markup, so cosmetic-only edits are not flagged. No substantive changes were detected against the previous release: all 7 rules matched cleanly.
- RMF Control: AC-2
- Severity: High
- CCI: CCI-000015
- Version: CNTR-RM-000030
- Vuln IDs: V-252843
- Rule IDs: SV-252843r819979_rule
Checks: C-56299r819977_chk
RBAC Integration and Authn/Authz: View and modify authentication settings through the Rancher MCM UI. Navigate to Triple Bar Symbol (Global) >> Users & Authentication >> Auth Provider. This screen shows the authentication mechanism that is configured. If no authentication mechanism is configured, or the configured mechanism is disabled, this is a finding.
Fix: F-56249r819978_fix
RBAC Integration and Authn/Authz: Navigate to Triple Bar Symbol (Global) >> Users & Authentication >> Auth Provider. From this screen the authentication mechanism can be selected and configured. This STIG was written and tested with Keycloak, which is not included with Rancher MCM. Installation instructions for Keycloak can be found here: https://www.keycloak.org/getting-started/getting-started-kube
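As a command-line cross-check (a hedged sketch, not part of the official check procedure): Rancher stores each provider's settings as AuthConfig resources under the management.cattle.io API group, and the top-level `enabled` field used below is an assumption about that schema.

```
# List Rancher auth providers and whether each is enabled.
# Assumes kubectl access to the Rancher local cluster.
kubectl get authconfigs.management.cattle.io \
  -o custom-columns=NAME:.metadata.name,ENABLED:.enabled
```

If only the "local" provider reports enabled, no external authentication mechanism is configured and the UI check above should show a finding.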
- RMF Control: AC-2
- Severity: Medium
- CCI: CCI-000018
- Version: CNTR-RM-000060
- Vuln IDs: V-252844
- Rule IDs: SV-252844r819982_rule
Checks: C-56300r819980_chk
Ensure audit logging is enabled: Navigate to Triple Bar Symbol (Global) >> <local cluster>.
- From the drop-down next to the cluster name, select "cattle-system".
- Click "Deployments" under the Workload menu item.
- Select "rancher" in the Deployments section.
- Click the three-dot config menu on the right.
- Choose "Edit Config".
- Scroll down to the "Environment Variables" section.
If the AUDIT_LEVEL environment variable does not exist, or is set to a value less than 2, this is a finding.
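The same condition can be read non-interactively. This is a minimal sketch, assuming kubectl access to the local cluster and that the first container in the rancher deployment carries the environment variables:

```
# Print the current AUDIT_LEVEL value from the rancher deployment.
# Empty output means the variable is not set, which would be a finding.
kubectl -n cattle-system get deployment rancher \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="AUDIT_LEVEL")].value}'
```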
Fix: F-56250r819981_fix
Ensure audit logging is enabled: Navigate to Triple Bar Symbol (Global) >> <local cluster>.
- From the drop-down next to the cluster name, select "cattle-system".
- Click "Deployments" under the Workload menu item.
- Select "rancher" in the Deployments section.
- Click the three-dot config menu on the right.
- Choose "Edit Config".
- Scroll down to the "Environment Variables" section.
- Change the AUDIT_LEVEL value to "2" or "3", and then click "Save".
If the variable does not exist:
- Click "Add Variable".
- Keep the default "Key/Value Pair" as the Type.
- Add "AUDIT_LEVEL" as the Variable Name.
- Input "2" or "3" for the value.
- Click "Save".
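An equivalent change can be made from the command line. This is a hedged sketch assuming kubectl access to the local cluster; it triggers the same rollout the UI edit would:

```
# Set (or add) the AUDIT_LEVEL environment variable on the rancher deployment.
# Use "3" instead of "2" if request and response bodies must also be audited.
kubectl -n cattle-system set env deployment/rancher AUDIT_LEVEL=2
```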
- RMF Control: AC-2
- Severity: Medium
- CCI: CCI-001404
- Version: CNTR-RM-000080
- Vuln IDs: V-252845
- Rule IDs: SV-252845r822506_rule
Checks: C-56301r822505_chk
Verify User-Base is the default assigned role:
- From the GUI, navigate to Triple Bar Symbol (Global) >> Users & Authentication >> Roles.
- Click "Standard User".
- At the top right, click the three dots, and then choose "Edit Config".
- Under "New User Default", ensure "No" is selected.
- Click "User-Base".
- At the top right, click the three dots, and then choose "Edit Config".
- Under "New User Default", ensure "Yes" is selected.
If "No" is not selected for Standard User, this is a finding. If "Yes" is not selected for User-Base, this is a finding.
Fix: F-56251r822506_fix
From the GUI, navigate to Triple Bar Symbol (Global) >> Users & Authentication >> Roles.
- Click "Standard User".
- At the top right, click the three dots, and then choose "Edit Config".
- Under "New User Default", select "No" and click "Save".
- Click "User-Base".
- At the top right, click the three dots, and then choose "Edit Config".
- Under "New User Default", select "Yes", and then click "Save".
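For scripted verification of the same settings, the sketch below assumes Rancher's GlobalRole schema: the resource names `user` (Standard User) and `user-base` (User-Base) and the top-level `newUserDefault` field are assumptions, not confirmed by this STIG.

```
# Show which global roles are assigned to new users by default.
# Expected: user-base reports true; user (Standard User) reports false or empty.
kubectl get globalroles.management.cattle.io \
  -o custom-columns=NAME:.metadata.name,NEW_USER_DEFAULT:.newUserDefault
```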
- RMF Control: AU-3
- Severity: Medium
- CCI: CCI-000133
- Version: CNTR-RM-000250
- Vuln IDs: V-252846
- Rule IDs: SV-252846r819988_rule
Checks: C-56302r819986_chk
Ensure logging aggregation is enabled: Navigate to Triple Bar Symbol (Global). For each cluster under "EXPLORE CLUSTER":
- Select the cluster.
- Select "Cluster Tools" (bottom left). This screen shows the current configuration for logging.
If the Logging block has an "Install" button, this is a finding.
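A command-line spot check of the same state (a hedged sketch; it assumes the UI installs the logging stack as a Helm release in the `cattle-logging-system` namespace):

```
# On each cluster, an installed logging stack should appear as a deployed
# Helm release with running pods; no output suggests logging is not installed.
helm list -n cattle-logging-system
kubectl get pods -n cattle-logging-system
```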
Fix: F-56252r819987_fix
Enable log aggregation: Navigate to Triple Bar Symbol (Global). For each cluster under "EXPLORE CLUSTER":
- Select the cluster.
- Select "Cluster Tools" (bottom left).
- In the Logging block, select "Install".
- Select the newest version of logging in the dropdown.
- Open the "Install into Project" dropdown.
- Select the Project. (Note: The Kubernetes STIG requires creating a new project and namespace for deployments. Using Default or System is not best practice.)
- Click "Next".
- Review the options and click "Install".
- RMF Control: AC-2
- Severity: Medium
- CCI: CCI-001682
- Version: CNTR-RM-000850
- Vuln IDs: V-252847
- Rule IDs: SV-252847r819991_rule
Checks: C-56303r819989_chk
Ensure the local emergency admin account has not been removed and is the only local account. Navigate to Triple Bar Symbol (Global) >> Users & Authentication. In the left navigation menu, click "Users". There should be only one local account, and that account should have the Administrator role. If no local administrator account exists, or there is more than one local account, this is a finding.
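Local accounts can also be enumerated from the command line. This sketch assumes Rancher's User resources under management.cattle.io and that local accounts are the entries carrying a `username` field; both are assumptions about the schema:

```
# List Rancher user objects with their usernames and display names.
# Only one entry should have a username: the emergency local administrator.
kubectl get users.management.cattle.io \
  -o custom-columns=NAME:.metadata.name,USERNAME:.username,DISPLAY:.displayName
```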
Fix: F-56253r819990_fix
Ensure the local emergency admin account has not been removed and is the only local account. Navigate to Triple Bar Symbol (Global) >> Users & Authentication. In the left navigation menu, click "Users".
To create a user:
- Click "Create".
- Complete the "Add User" form. Ensure Global Permissions are set to "Administrator".
- Click "Create".
To delete a user:
- Select the user and click "Delete".
- RMF Control: AC-2
- Severity: Medium
- CCI: CCI-002145
- Version: CNTR-RM-000970
- Vuln IDs: V-252848
- Rule IDs: SV-252848r819994_rule
Checks: C-56304r819992_chk
Verify the Helm installation contains the correct parameters: Navigate to Triple Bar Symbol (Global) >> <local cluster>. From the kubectl shell (>_), execute: `helm get values rancher -n cattle-system`

The output must contain:

```
privateCA: true
ingress:
  tls:
    source: secret
```

If the output source is not "secret", this is a finding.

Verify the contents of the certificates are correct. From the console, type:

kubectl -n cattle-system get secret tls-rancher-ingress -o 'jsonpath={.data.tls\.crt}' | base64 --decode | openssl x509 -noout -text

kubectl -n cattle-system get secret tls-ca -o 'jsonpath={.data.cacerts\.pem}' | base64 --decode | openssl x509 -noout -text
Fix: F-56254r819993_fix
Update the secrets to contain valid certificates.

Put the correct and valid DOD certificate and key in files called "tls.crt" and "tls.key", respectively, and then run:

kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key

Upload the CA required for the certificates by creating another file called "cacerts.pem" and running:

kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem=./cacerts.pem

The Helm chart values need to be updated to include the section shown in the check:

privateCA: true
ingress:
  tls:
    source: secret

Re-run helm upgrade with the new values for the certificates to take effect.
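One hedged way to perform that upgrade without losing existing settings is to export the current values, edit them to include the block above, and feed them back to Helm. The `rancher-stable/rancher` chart reference and the values file name below are assumptions; use whatever repository and values the original install used.

```
# Capture the current chart values, then re-apply them after adding
# privateCA: true and ingress.tls.source: secret.
helm get values rancher -n cattle-system -o yaml > rancher-values.yaml
# ...edit rancher-values.yaml to include the block shown in the check...
helm upgrade rancher rancher-stable/rancher -n cattle-system -f rancher-values.yaml
```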
- RMF Control: CM-7
- Severity: High
- CCI: CCI-000382
- Version: CNTR-RM-001730
- Vuln IDs: V-252849
- Rule IDs: SV-252849r819997_rule
Checks: C-56305r819995_chk
Navigate to Triple Bar Symbol (Global) >> <local cluster>. From the kubectl shell (>_), execute:

kubectl get ingress -n cattle-system rancher -o yaml

Verify:

spec:
  rules:
  - host: rancher.example.com
    http:
      paths:
      - backend:
          service:
            name: rancher
            port:
              number: 443

Then execute:

kubectl get svc rancher -n cattle-system -o yaml

Verify:

spec:
  clusterIP: 10.43.145.4
  clusterIPs:
  - 10.43.145.4
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: https-internal
    port: 443
    protocol: TCP
    targetPort: 443

If the output does not match the above, this is a finding.
Fix: F-56255r819996_fix
From the dropdown, select Global >> <local cluster>. From the kubectl shell (>_), execute the following:

kubectl patch -n cattle-system service rancher -p '{"spec":{"ports":[{"port":443,"targetPort":443}]}}'

export RANCHER_HOSTNAME=rancher.disa-eval-2-6.tomatodamato.com

kubectl -n cattle-system patch ingress rancher -p "{\"metadata\":{\"annotations\":{\"nginx.ingress.kubernetes.io/backend-protocol\":\"HTTPS\"}},\"spec\":{\"rules\":[{\"host\":\"$RANCHER_HOSTNAME\",\"http\":{\"paths\":[{\"backend\":{\"service\":{\"name\":\"rancher\",\"port\":{\"number\":443}}},\"pathType\":\"ImplementationSpecific\"}]}}]}}"

kubectl patch -n cattle-system service rancher --type=json -p '[{"op":"remove","path":"/spec/ports/0"}]'
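A quick re-check after the patches (a sketch assuming they applied cleanly; the jsonpath expressions simply read back the fields the check above inspects):

```
# The service should now expose only port 443.
kubectl get svc rancher -n cattle-system -o jsonpath='{.spec.ports[*].port}{"\n"}'

# The ingress should route to the rancher service over HTTPS.
kubectl get ingress rancher -n cattle-system \
  -o jsonpath='{.metadata.annotations.nginx\.ingress\.kubernetes\.io/backend-protocol}{"\n"}'
```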