RBAC Integration and Authn/Authz

Check Text: View and modify authentication settings through the Rancher MCM UI. Navigate to Triple Bar Symbol(Global) >> Users & Authentication >> Auth Provider. This screen shows the authentication mechanism that is configured.

If no authentication mechanism is configured, or the configured mechanism is disabled, this is a finding.
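The configured provider can also be spot-checked from the kubectl shell on the local cluster. This is a sketch, not part of the official check text; it assumes Rancher's management.cattle.io/v3 AuthConfig resources are present with an "enabled" field, which should be verified against the deployed Rancher version:

```
# List Rancher auth providers and whether each is enabled
# (AuthConfig resource layout assumed; verify for your version).
kubectl get authconfigs.management.cattle.io \
  -o custom-columns=NAME:.metadata.name,TYPE:.type,ENABLED:.enabled
```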
Fix Text: Navigate to Triple Bar Symbol(Global) >> Users & Authentication >> Auth Provider. From this screen, the authentication mechanism can be selected and configured. This STIG was written and tested against KeyCloak, which is not included with Rancher MCM. Installation instructions for KeyCloak can be found here: https://www.keycloak.org/getting-started/getting-started-kube
Check Text: Ensure audit logging is enabled:

Navigate to Triple Bar Symbol(Global) >> <local cluster>.
- From the dropdown next to the cluster name, select "cattle-system".
- Click "Deployments" under the Workload menu item.
- Select "rancher" in the Deployments section.
- Click the three-dot config menu on the right.
- Choose "Edit Config".
- Scroll down to the "Environment Variables" section.

If the "AUDIT_LEVEL" environment variable does not exist or is set to a value less than "2", this is a finding.
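The same value can be read without the UI. A minimal sketch, assuming the variable is set on the rancher deployment's first container:

```
# Print the current AUDIT_LEVEL value from the rancher deployment;
# empty output means the variable is not set.
kubectl -n cattle-system get deployment rancher \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="AUDIT_LEVEL")].value}'
```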
Fix Text: Ensure audit logging is enabled:

Navigate to Triple Bar Symbol(Global) >> <local cluster>.
- From the dropdown next to the cluster name, select "cattle-system".
- Click "Deployments" under the Workload menu item.
- Select "rancher" in the Deployments section.
- Click the three-dot config menu on the right.
- Choose "Edit Config".
- Scroll down to the "Environment Variables" section.
- Change the AUDIT_LEVEL value to "2" or "3" and then click "Save".

If the variable does not exist:
- Click "Add Variable".
- Keep "Default Key/Value Pair" as the "Type".
- Add "AUDIT_LEVEL" as the Variable Name.
- Input "2" or "3" as the value.
- Click "Save".
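As a CLI alternative to the UI steps above (a sketch, not part of the official fix text), `kubectl set env` applies the same change:

```
# Set (or add) AUDIT_LEVEL=2 on the rancher deployment; the deployment
# rolls its pods to pick up the new variable.
kubectl -n cattle-system set env deployment/rancher AUDIT_LEVEL=2
```

Note that a later `helm upgrade` of the Rancher chart may overwrite a variable set this way, so persisting it in the chart values (e.g., via the chart's extra-environment settings) is the more durable option.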
Check Text: Verify User-Base is the default assigned role:
- From the GUI, navigate to Triple Bar Symbol(Global) >> Users & Authentication >> Roles.
- Click "Standard User".
- At the top right, click the three dots, and then choose "Edit Config".
- Under "New User Default", ensure "No" is selected.
- Click "User-Base".
- At the top right, click the three dots, and then "Edit Config".
- Under "New User Default", ensure "Yes" is selected.

If "No" is not selected for Standard User, this is a finding. If "Yes" is not selected for User-Base, this is a finding.
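Rancher backs these roles with cluster-scoped GlobalRole resources, so the same state can be checked from the kubectl shell. The object names "user" (Standard User) and "user-base" (User-Base) and the "newUserDefault" field are assumptions based on common Rancher deployments; confirm them for your install:

```
# Show whether each built-in global role is assigned to new users by default
# (GlobalRole names "user" and "user-base" assumed).
kubectl get globalroles.management.cattle.io user user-base \
  -o custom-columns=NAME:.metadata.name,NEW-USER-DEFAULT:.newUserDefault
```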
Fix Text: From the GUI, navigate to Triple Bar Symbol(Global) >> Users & Authentication >> Roles.
- Click "Standard User".
- At the top right, click the three dots, and then "Edit Config".
- Under "New User Default", select "No" and click "Save".
- Click "User-Base".
- At the top right, click the three dots, and then click "Edit Config".
- Under "New User Default", select "Yes", and then click "Save".
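The equivalent change can be patched directly, under the same naming assumptions as the check sketch above:

```
# Make User-Base the default role for new users and remove the default
# from Standard User (GlobalRole names assumed as above).
kubectl patch globalroles.management.cattle.io user-base --type=merge -p '{"newUserDefault": true}'
kubectl patch globalroles.management.cattle.io user --type=merge -p '{"newUserDefault": false}'
```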
Check Text: Ensure logging aggregation is enabled:

Navigate to Triple Bar Symbol(Global). For each cluster in "EXPLORE CLUSTER":
- Select "Cluster".
- Select "Cluster Tools" (bottom left).

This screen shows the current configuration for logging.

OR

Ensure logs are being aggregated and stored in a central logging solution.

If the Logging block has an "Install" button, or logs are not being aggregated in a central logging solution, this is a finding.
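Whether the Rancher logging tool is installed can also be checked per cluster from its kubectl shell. This sketch assumes the stock rancher-logging deployment, which installs into the cattle-logging-system namespace:

```
# The Rancher logging tool runs as a helm release in cattle-logging-system;
# no releases listed means it is not installed on this cluster.
helm list -n cattle-logging-system
```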
Fix Text: Enable log aggregation:

Navigate to Triple Bar Symbol(Global). For each cluster in "EXPLORE CLUSTER":
- Select "Cluster".
- Select "Cluster Tools" (bottom left).
- In the "Logging" block, select "Install".
- Select the newest version of logging in the dropdown.
- Open the "Install into Project" dropdown.
- Select the Project. (Note: The Kubernetes STIG requires creating a new project and namespace for deployments. Using Default or System is not best practice.)
- Click "Next".
- Review the options and click "Install".
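For a scripted alternative, the same charts can be installed with helm. The repository URL and chart names below are assumptions based on Rancher's public chart repository and should be verified before use:

```
# Install the Rancher logging CRDs and operator from the public
# rancher-charts repo (URL and chart names assumed; verify for your setup).
helm repo add rancher-charts https://charts.rancher.io
helm repo update
helm install rancher-logging-crd rancher-charts/rancher-logging-crd \
  -n cattle-logging-system --create-namespace
helm install rancher-logging rancher-charts/rancher-logging \
  -n cattle-logging-system
```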
Check Text: Ensure the local emergency admin account has not been removed and is the only local account.

Navigate to Triple Bar Symbol(Global) >> Users & Authentication. In the left navigation menu, click "Users".

There should be only one local account, and that account should have the Administrator role. If no local administrator account exists, or there is more than one local account, this is a finding.
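User objects can also be listed from the kubectl shell on the local cluster. Treating every users.management.cattle.io object that carries a username as a local account is an assumption that should be confirmed for your Rancher version:

```
# List Rancher user objects; local accounts typically have a username set,
# while auth-provider principals usually do not.
kubectl get users.management.cattle.io \
  -o custom-columns=NAME:.metadata.name,USERNAME:.username,DISPLAY:.displayName
```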
Fix Text: Ensure the local emergency admin account has not been removed and is the only local account.

Navigate to Triple Bar Symbol(Global) >> Users & Authentication. In the left navigation menu, click "Users".

To create a user:
- Click "Create".
- Complete the "Add User" form. Ensure Global Permissions are set to "Administrator".
- Click "Create".

To delete a user:
- Select the user and click "Delete".
Check Text: Navigate to Triple Bar Symbol(Global) >> <local cluster>. From the kubectl shell (>_), execute:

```
kubectl get ingress -n cattle-system rancher -o yaml
```

Verify the port number for Rancher is using "443", like the following:

```
spec:
  rules:
  - host: rancher.rfed.us
    http:
      paths:
      - backend:
          service:
            name: rancher
            port:
              number: 443
```

From the kubectl shell (>_), execute:

```
kubectl get networkpolicies -n cattle-system
```

Verify network policies exist and that they are only allowing traffic to port "444" of the Rancher pods, like the following:

```
NAME                   POD-SELECTOR   AGE
rancher-allow-https    app=rancher    10h
rancher-deny-ingress   app=rancher    10h
```

If the ingress output is not using port 443, or there are no network policies in place to only allow traffic to port 444, this is a finding.
Fix Text: Gather the current values of the Rancher deployment by running the following:

```
helm get values -n cattle-system rancher > /tmp/rancher-values.yaml
```

Create another values file to upgrade Rancher's ingress object for HTTPS. Add the following to "/tmp/rancher-ingress-values.yaml":

```
ingress:
  extraAnnotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"  # If using NGINX ingress
    traefik.ingress.kubernetes.io/router.tls: "true"       # If using Traefik ingress
  servicePort: 443
```

If using a different ingress controller than NGINX or Traefik, other annotations may need to be added to ensure the controller knows the Rancher backend is HTTPS.

Upgrade Rancher, referencing the two files created:

```
helm upgrade -n cattle-system -f /tmp/rancher-values.yaml -f /tmp/rancher-ingress-values.yaml \
  rancher rancher-stable/rancher --version=CURRENT_RANCHER_VERSION
```

Once the Rancher ingress has been updated and it has been verified that Rancher is still accessible, run the following command to create NetworkPolicies that will block all traffic to Rancher with the exception of HTTPS:

```
cat <<EOF | kubectl apply -f -
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: rancher-allow-https
  namespace: cattle-system
spec:
  podSelector:
    matchLabels:
      app: rancher
  ingress:
  - ports:
    - port: 444
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: rancher-deny-ingress
  namespace: cattle-system
spec:
  podSelector:
    matchLabels:
      app: rancher
  policyTypes:
  - Ingress
EOF
```
Check Text: Verify the helm installation contains the correct parameters:

Navigate to Triple Bar Symbol(Global) >> <local cluster>. From the kubectl shell (>_), execute:

```
helm get values rancher -n cattle-system
```

The output must contain:

```
privateCA: true
ingress:
  tls:
    source: secret
```

If the output source is not "secret", this is a finding.

Verify the contents of the certificates are correct. From the console, type:

```
kubectl -n cattle-system get secret tls-rancher-ingress -o 'jsonpath={.data.tls\.crt}' | base64 --decode | openssl x509 -noout -text
kubectl -n cattle-system get secret tls-ca -o 'jsonpath={.data.cacerts\.pem}' | base64 --decode | openssl x509 -noout -text
```
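When only the key fields matter, the full `-text` dump can be narrowed. A minimal variant of the same check:

```
# Show just the subject, issuer, and validity window of the serving cert.
kubectl -n cattle-system get secret tls-rancher-ingress -o 'jsonpath={.data.tls\.crt}' \
  | base64 --decode | openssl x509 -noout -subject -issuer -dates
```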
Fix Text: Update the secrets to contain valid certificates.

Put the correct and valid DOD certificate and key in files called "tls.crt" and "tls.key", respectively, and then run:

```
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```

Upload the CA required for the certs by creating another file called "cacerts.pem" and running:

```
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem=./cacerts.pem
```

The helm chart values need to be updated to include the section shown in the check above:

```
privateCA: true
ingress:
  tls:
    source: secret
```

Rerun `helm upgrade` with the new values for the certs to take effect.
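A sketch of that final upgrade, plus a sanity check that the new certificate chains to the uploaded CA. `--reuse-values` keeps the release's existing values, and the repo/chart names follow the fix text above; adjust both to match your deployment:

```
# Verify the serving cert chains to the CA before switching Rancher over.
openssl verify -CAfile cacerts.pem tls.crt

# Apply the secret-based TLS settings on top of the existing values.
helm upgrade rancher rancher-stable/rancher -n cattle-system \
  --reuse-values \
  --set privateCA=true \
  --set ingress.tls.source=secret
```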