DoD Compliance · STIG

Kubernetes Security Technical Implementation Guide

V1R0.1 · Released 01 Dec 2020 · 119 rules

This Security Technical Implementation Guide is published as a tool to improve the security of Department of Defense (DoD) information systems. The requirements are derived from the National Institute of Standards and Technology (NIST) 800-53 and related documents. Comments or proposed revisions to this document should be sent via email to the following address: disa.stig_spt@mail.mil.
The Kubernetes Controller Manager must use TLS 1.2, at a minimum, to protect the confidentiality of sensitive data during electronic dissemination.
AC-17 - Medium - CCI-000068 - CNTR-K8-000150 - CNTR-K8-000150_rule
RMF Control
AC-17
Severity
M
CCI
CCI-000068
Version
CNTR-K8-000150
Vuln IDs
  • CNTR-K8-000150
Rule IDs
  • CNTR-K8-000150_rule
The Kubernetes Controller Manager will prohibit the use of SSL and unauthorized versions of TLS protocols to properly secure communication. The use of an unsupported protocol exposes the Kubernetes cluster to rogue traffic interception, man-in-the-middle attacks, and impersonation of users or services from the container platform runtime, registry, and key store. To enforce a minimum version of TLS for the Kubernetes Controller Manager, the setting “tls-min-version” must be set.
Checks: C-CNTR-K8-000150_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command:

grep -i tls-min-version *

If the setting “tls-min-version” is not set in the Kubernetes Controller Manager manifest file or it is set to “VersionTLS10” or “VersionTLS11”, this is a finding.

Fix: F-CNTR-K8-000150_fix

Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--tls-min-version” to either “VersionTLS12” or “VersionTLS13”.
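The check above lends itself to scripting. A minimal sketch, assuming kubeadm's default manifest name (kube-controller-manager.yaml) and layout; adjust the path for other distributions:

```shell
#!/bin/sh
# Sketch of the tls-min-version check. MANIFEST_DIR and the manifest file
# name (kube-controller-manager.yaml, kubeadm's default) are assumptions.
MANIFEST_DIR="${MANIFEST_DIR:-/etc/kubernetes/manifests}"

check_tls_min_version() {
  # Extract the value of tls-min-version from the manifest given as $1.
  val=$(grep -io 'tls-min-version=[A-Za-z0-9.]*' "$1" 2>/dev/null | cut -d= -f2)
  case "$val" in
    VersionTLS12|VersionTLS13) echo "PASS: $1 ($val)" ;;
    *)                         echo "FINDING: $1 (tls-min-version='$val')" ;;
  esac
}

check_tls_min_version "$MANIFEST_DIR/kube-controller-manager.yaml"
```

The same function can be pointed at the Scheduler and API Server manifests, which carry the identical requirement in the rules that follow.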

The Kubernetes Scheduler must use TLS 1.2, at a minimum, to protect the confidentiality of sensitive data during electronic dissemination.
AC-17 - Medium - CCI-000068 - CNTR-K8-000160 - CNTR-K8-000160_rule
RMF Control
AC-17
Severity
M
CCI
CCI-000068
Version
CNTR-K8-000160
Vuln IDs
  • CNTR-K8-000160
Rule IDs
  • CNTR-K8-000160_rule
The Kubernetes Scheduler will prohibit the use of SSL and unauthorized versions of TLS protocols to properly secure communication. The use of an unsupported protocol exposes the Kubernetes cluster to rogue traffic interception, man-in-the-middle attacks, and impersonation of users or services from the container platform runtime, registry, and keystore. To enforce a minimum version of TLS for the Kubernetes Scheduler, the setting “tls-min-version” must be set.
Checks: C-CNTR-K8-000160_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command:

grep -i tls-min-version *

If the setting “tls-min-version” is not set in the Kubernetes Scheduler manifest file or it is set to “VersionTLS10” or “VersionTLS11”, this is a finding.

Fix: F-CNTR-K8-000160_fix

Edit the Kubernetes Scheduler manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--tls-min-version” to either “VersionTLS12” or “VersionTLS13”.

The Kubernetes API Server must use TLS 1.2, at a minimum, to protect the confidentiality of sensitive data during electronic dissemination.
AC-17 - Medium - CCI-000068 - CNTR-K8-000170 - CNTR-K8-000170_rule
RMF Control
AC-17
Severity
M
CCI
CCI-000068
Version
CNTR-K8-000170
Vuln IDs
  • CNTR-K8-000170
Rule IDs
  • CNTR-K8-000170_rule
The Kubernetes API Server will prohibit the use of SSL and unauthorized versions of TLS protocols to properly secure communication. The use of an unsupported protocol exposes the Kubernetes cluster to rogue traffic interception, man-in-the-middle attacks, and impersonation of users or services from the container platform runtime, registry, and keystore. To enforce a minimum version of TLS for the Kubernetes API Server, the setting “tls-min-version” must be set.
Checks: C-CNTR-K8-000170_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i tls-min-version *

If the setting “tls-min-version” is not set in the Kubernetes API Server manifest file or it is set to “VersionTLS10” or “VersionTLS11”, this is a finding.

Fix: F-CNTR-K8-000170_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--tls-min-version” to either “VersionTLS12” or “VersionTLS13”.

The Kubernetes etcd must use TLS to protect the confidentiality of sensitive data during electronic dissemination.
AC-17 - Medium - CCI-000068 - CNTR-K8-000180 - CNTR-K8-000180_rule
RMF Control
AC-17
Severity
M
CCI
CCI-000068
Version
CNTR-K8-000180
Vuln IDs
  • CNTR-K8-000180
Rule IDs
  • CNTR-K8-000180_rule
Kubernetes etcd will prohibit the use of SSL and unauthorized versions of TLS protocols to properly secure communication. The use of an unsupported protocol exposes the Kubernetes cluster to rogue traffic interception, man-in-the-middle attacks, and impersonation of users or services from the container platform runtime, registry, and keystore. To ensure etcd does not fall back to self-generated (untrusted) certificates for client connections, the setting “auto-tls” must be set to “false”.
Checks: C-CNTR-K8-000180_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i auto-tls *

If the setting “auto-tls” is not set in the Kubernetes etcd manifest file or it is set to “true”, this is a finding.

Fix: F-CNTR-K8-000180_fix

Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--auto-tls” to “false”.
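A sketch of this check as a script; the manifest name etcd.yaml is kubeadm's default and is an assumption here. The pattern anchors on the leading double dash so “--peer-auto-tls” (covered by the next rule) is not matched by mistake:

```shell
#!/bin/sh
# Sketch of the etcd auto-tls check: a finding if --auto-tls is absent
# or enabled. The manifest name etcd.yaml is an assumption.
check_auto_tls() {
  if grep -qi -e '--auto-tls=false' "$1" 2>/dev/null; then
    echo "PASS: $1"
  elif grep -qi -e '--auto-tls=true' "$1" 2>/dev/null; then
    echo "FINDING: auto-tls enabled in $1"
  else
    echo "FINDING: auto-tls not set in $1"
  fi
}

check_auto_tls /etc/kubernetes/manifests/etcd.yaml
```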

The Kubernetes etcd must use TLS to protect the confidentiality of sensitive data during peer-to-peer electronic dissemination.
AC-17 - Medium - CCI-000068 - CNTR-K8-000190 - CNTR-K8-000190_rule
RMF Control
AC-17
Severity
M
CCI
CCI-000068
Version
CNTR-K8-000190
Vuln IDs
  • CNTR-K8-000190
Rule IDs
  • CNTR-K8-000190_rule
Kubernetes etcd will prohibit the use of SSL and unauthorized versions of TLS protocols to properly secure peer-to-peer communication. The use of an unsupported protocol exposes the Kubernetes cluster to rogue traffic interception, man-in-the-middle attacks, and impersonation of users or services from the container platform runtime, registry, and keystore. To ensure etcd does not fall back to self-generated (untrusted) certificates for peer connections, the setting “peer-auto-tls” must be set to “false”.
Checks: C-CNTR-K8-000190_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i peer-auto-tls *

If the setting “peer-auto-tls” is not set in the Kubernetes etcd manifest file or it is set to “true”, this is a finding.

Fix: F-CNTR-K8-000190_fix

Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--peer-auto-tls” to “false”.

The Kubernetes Controller Manager must create unique service accounts for each workload.
AC-2 - High - CCI-000015 - CNTR-K8-000220 - CNTR-K8-000220_rule
RMF Control
AC-2
Severity
H
CCI
CCI-000015
Version
CNTR-K8-000220
Vuln IDs
  • CNTR-K8-000220
Rule IDs
  • CNTR-K8-000220_rule
The Kubernetes Controller Manager is a background process that embeds core control loops regulating cluster system state through the API Server. Every process executed in a pod has an associated service account. By default, service accounts use the same credentials for authentication. Implementing the default settings poses a high risk to the Kubernetes Controller Manager. Setting the “use-service-account-credentials” flag lowers the attack surface by using a unique service account for each controller instance.
Checks: C-CNTR-K8-000220_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i use-service-account-credentials *

If the setting “use-service-account-credentials” is not set in the Kubernetes Controller Manager manifest file or it is set to “false”, this is a finding.

Fix: F-CNTR-K8-000220_fix

Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--use-service-account-credentials” to “true”.

The Kubernetes API Server must enable Node as the authorization mode.
AC-3 - Medium - CCI-000213 - CNTR-K8-000270 - CNTR-K8-000270_rule
RMF Control
AC-3
Severity
M
CCI
CCI-000213
Version
CNTR-K8-000270
Vuln IDs
  • CNTR-K8-000270
Rule IDs
  • CNTR-K8-000270_rule
To mitigate the risk of unauthorized access to sensitive information by entities that have been issued certificates by DoD-approved PKIs, all DoD systems (e.g., networks, web servers, and web portals) must be properly configured to incorporate access control methods that do not rely solely on the possession of a certificate for access. Successful authentication must not automatically give an entity access to an asset or security boundary. Authorization procedures and controls must be implemented to ensure each authenticated entity also has a validated and current authorization. Authorization is the process of determining whether an entity, once authenticated, is permitted to access a specific asset. Node is the authorization mode within Kubernetes that controls access for kubelets; Kubernetes uses it to authorize API requests made by kubelets.
Checks: C-CNTR-K8-000270_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i authorization-mode *

If the setting “authorization-mode” is not set in the Kubernetes API Server manifest file or does not include “Node”, this is a finding.

Fix: F-CNTR-K8-000270_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--authorization-mode” to “Node”.

The Kubernetes API Server must enable Role-Based Access (RBAC) as the authorization mode.
AC-3 - Medium - CCI-000213 - CNTR-K8-000280 - CNTR-K8-000280_rule
RMF Control
AC-3
Severity
M
CCI
CCI-000213
Version
CNTR-K8-000280
Vuln IDs
  • CNTR-K8-000280
Rule IDs
  • CNTR-K8-000280_rule
To mitigate the risk of unauthorized access to sensitive information by entities that have been issued certificates by DoD-approved PKIs, all DoD systems (e.g., networks, web servers, and web portals) must be properly configured to incorporate access control methods that do not rely solely on the possession of a certificate for access. Successful authentication must not automatically give an entity access to an asset or security boundary. Authorization procedures and controls must be implemented to ensure each authenticated entity also has a validated and current authorization. Authorization is the process of determining whether an entity, once authenticated, is permitted to access a specific asset. RBAC is the method within Kubernetes to control access of users and applications. Kubernetes uses roles to grant authorization to resources. RBAC is the default configuration for Kubernetes.
Checks: C-CNTR-K8-000280_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i authorization-mode *

If the setting “authorization-mode” is not set in the Kubernetes API Server manifest file or does not include “RBAC”, this is a finding.

Fix: F-CNTR-K8-000280_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--authorization-mode” to “RBAC”.
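Rules CNTR-K8-000270 and CNTR-K8-000280 both constrain the same flag, so in practice “--authorization-mode” is a comma-separated list that must include both Node and RBAC. A combined sketch, assuming kubeadm's default manifest name kube-apiserver.yaml:

```shell
#!/bin/sh
# Sketch combining the Node and RBAC authorization-mode checks.
# The manifest name kube-apiserver.yaml (kubeadm's default) is an assumption.
check_authz_modes() {
  node="" rbac=""
  modes=$(grep -io 'authorization-mode=[A-Za-z,]*' "$1" 2>/dev/null | cut -d= -f2)
  case ",$modes," in
    *,Node,*) node=ok ;;
    *)        echo "FINDING: Node authorization not enabled in $1" ;;
  esac
  case ",$modes," in
    *,RBAC,*) rbac=ok ;;
    *)        echo "FINDING: RBAC authorization not enabled in $1" ;;
  esac
  if [ "$node" = ok ] && [ "$rbac" = ok ]; then
    echo "PASS: $1 ($modes)"
  fi
}

check_authz_modes /etc/kubernetes/manifests/kube-apiserver.yaml
```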

User managed resources must be created in dedicated namespaces.
CM-6 - High - CCI-000366 - CNTR-K8-000290 - CNTR-K8-000290_rule
RMF Control
CM-6
Severity
H
CCI
CCI-000366
Version
CNTR-K8-000290
Vuln IDs
  • CNTR-K8-000290
Rule IDs
  • CNTR-K8-000290_rule
Creating namespaces for user-managed resources is important when implementing Role-Based Access Control (RBAC). RBAC allows for the authorization of users and helps support proper API server permissions separation and network microsegmentation. If user-managed resources are placed within the default namespaces, it becomes impossible to implement policies for RBAC permissions, service account usage, network policies, and more.
Checks: C-CNTR-K8-000290_chk

To view the available namespaces, run the command:

kubectl get namespaces

The default namespaces to be validated are default, kube-public, and kube-node-lease (if it is created).

For the default namespace, execute the commands:

kubectl config set-context --current --namespace=default
kubectl get all

For the kube-public namespace, execute the commands:

kubectl config set-context --current --namespace=kube-public
kubectl get all

For the kube-node-lease namespace, execute the commands:

kubectl config set-context --current --namespace=kube-node-lease
kubectl get all

The only valid return values are the kubernetes service (i.e., service/kubernetes) or nothing at all. If the “kubectl get all” command returns anything other than the kubernetes service (i.e., service/kubernetes), this is a finding.

Fix: F-CNTR-K8-000290_fix

Move any user-managed resources from the default, kube-public, and kube-node-lease namespaces to dedicated user namespaces.
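The namespace walkthrough above can be automated. A sketch that uses “kubectl get all -n <namespace>” instead of switching the current context, so the auditor's kubeconfig is left untouched (a configured kubectl is assumed):

```shell
#!/bin/sh
# Sketch of the default-namespace audit. Only service/kubernetes (or no
# output at all) is an acceptable result; anything else is a finding.
audit_namespaces() {
  for ns in default kube-public kube-node-lease; do
    extra=$(kubectl get all -n "$ns" --no-headers 2>/dev/null \
            | awk '$1 != "service/kubernetes"')
    if [ -n "$extra" ]; then
      printf 'FINDING in %s:\n%s\n' "$ns" "$extra"
    fi
  done
}

audit_namespaces
```

Querying with “-n” rather than “kubectl config set-context” avoids mutating the shared kubeconfig mid-audit, which matters if the session is interrupted.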

The Kubernetes Scheduler must have secure binding.
AC-3 - Medium - CCI-000213 - CNTR-K8-000300 - CNTR-K8-000300_rule
RMF Control
AC-3
Severity
M
CCI
CCI-000213
Version
CNTR-K8-000300
Vuln IDs
  • CNTR-K8-000300
Rule IDs
  • CNTR-K8-000300_rule
Limiting the number of attack vectors and implementing authentication and encryption on the endpoints available to external sources is paramount when securing the overall Kubernetes cluster. The Scheduler API service exposes port 10251/TCP by default for health and metrics information use. This port does not encrypt or authenticate connections. If this port is exposed externally, an attacker can use this port to attack the entire Kubernetes cluster. By setting the bind address to localhost (i.e., 127.0.0.1), only those internal services that require health and metrics information can access the Scheduler API.
Checks: C-CNTR-K8-000300_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i bind-address *

If the setting “bind-address” is not set to “127.0.0.1” or is not found in the Kubernetes Scheduler manifest file, this is a finding.

Fix: F-CNTR-K8-000300_fix

Edit the Kubernetes Scheduler manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument “--bind-address” to “127.0.0.1”.

The Kubernetes Controller Manager must have secure binding.
AC-3 - Medium - CCI-000213 - CNTR-K8-000310 - CNTR-K8-000310_rule
RMF Control
AC-3
Severity
M
CCI
CCI-000213
Version
CNTR-K8-000310
Vuln IDs
  • CNTR-K8-000310
Rule IDs
  • CNTR-K8-000310_rule
Limiting the number of attack vectors and implementing authentication and encryption on the endpoints available to external sources is paramount when securing the overall Kubernetes cluster. The Controller Manager API service exposes port 10252/TCP by default for health and metrics information use. This port does not encrypt or authenticate connections. If this port is exposed externally, an attacker can use this port to attack the entire Kubernetes cluster. By setting the bind address to only localhost (i.e., 127.0.0.1), only those internal services that require health and metrics information can access the Controller Manager API.
Checks: C-CNTR-K8-000310_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i bind-address *

If the setting “bind-address” is not set to “127.0.0.1” or is not found in the Kubernetes Controller Manager manifest file, this is a finding.

Fix: F-CNTR-K8-000310_fix

Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument “--bind-address” to “127.0.0.1”.

The Kubernetes API server must have the insecure port flag disabled.
AC-3 - High - CCI-000213 - CNTR-K8-000320 - CNTR-K8-000320_rule
RMF Control
AC-3
Severity
H
CCI
CCI-000213
Version
CNTR-K8-000320
Vuln IDs
  • CNTR-K8-000320
Rule IDs
  • CNTR-K8-000320_rule
By default, the API server will listen on two ports. One port is the secure port and the other port is called the “localhost port”. This port is also called the “insecure port”, port 8080. Any requests to this port bypass authentication and authorization checks. If this port is left open, anyone who gains access to the host on which the master is running can bypass all authorization and authentication mechanisms put in place, and have full control over the entire cluster. Close the insecure port by setting the API server’s --insecure-port flag to “0” and ensuring that the --insecure-bind-address is not set.
Checks: C-CNTR-K8-000320_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i insecure-port *

If the setting “insecure-port” is not set to “0” or is not found in the Kubernetes API Server manifest file, this is a finding.

Fix: F-CNTR-K8-000320_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument --insecure-port to “0”.
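Several of the checks in this guide reduce to the same pattern: grep a manifest for a flag and compare its value. A generic sketch, shown here for the insecure port; the manifest path is kubeadm's default and the helper name is illustrative:

```shell
#!/bin/sh
# Generic sketch: verify that a manifest flag equals an expected value.
# Usage: require_flag <file> <flag> <expected>
require_flag() {
  file=$1 flag=$2 want=$3
  got=$(grep -io "$flag=[^\" ]*" "$file" 2>/dev/null | head -n1 | cut -d= -f2)
  if [ "$got" = "$want" ]; then
    echo "PASS: $flag=$got"
  else
    echo "FINDING: $flag is '$got' (want '$want') in $file"
  fi
}

# Rule CNTR-K8-000320: the insecure port must be 0.
require_flag /etc/kubernetes/manifests/kube-apiserver.yaml insecure-port 0
```

The same helper covers, for example, “anonymous-auth” against “false” and “bind-address” against “127.0.0.1”.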

The Kubernetes Kubelet must have the read-only port flag disabled.
AC-3 - High - CCI-000213 - CNTR-K8-000330 - CNTR-K8-000330_rule
RMF Control
AC-3
Severity
H
CCI
CCI-000213
Version
CNTR-K8-000330
Vuln IDs
  • CNTR-K8-000330
Rule IDs
  • CNTR-K8-000330_rule
Kubelet serves a small REST API with read access on port 10255. The read-only port provides no authentication or authorization security control. Providing unrestricted access on port 10255 exposes Kubernetes pods and containers to malicious attacks or compromise. Port 10255 is deprecated and should be disabled. Close the read-only port by setting the Kubelet’s “--read-only-port” flag to “0”.
Checks: C-CNTR-K8-000330_chk

Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command:

grep -i read-only-port kubelet

If the setting “read-only-port” is not set to “0” or is not set in the Kubernetes Kubelet configuration, this is a finding.

Fix: F-CNTR-K8-000330_fix

Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument “--read-only-port” to “0”. Restart the kubelet service using the following command:

service kubelet restart

The Kubernetes API server must have the insecure bind address not set.
AC-3 - High - CCI-000213 - CNTR-K8-000340 - CNTR-K8-000340_rule
RMF Control
AC-3
Severity
H
CCI
CCI-000213
Version
CNTR-K8-000340
Vuln IDs
  • CNTR-K8-000340
Rule IDs
  • CNTR-K8-000340_rule
By default, the API server will listen on two ports and addresses. One address is the secure address, and the other is called the “insecure bind” address, which defaults to localhost. Any requests to this address bypass authentication and authorization checks. If the insecure bind address is set, anyone who gains access to the host on which the master is running can bypass all authorization and authentication mechanisms put in place and have full control over the entire cluster. Close the insecure address by leaving the API server’s “--insecure-bind-address” flag unset and ensuring that the “--insecure-port” flag is set to “0”.
Checks: C-CNTR-K8-000340_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i insecure-bind-address *

If the setting “insecure-bind-address” is found and set to “localhost” in the Kubernetes API Server manifest file, this is a finding.

Fix: F-CNTR-K8-000340_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Remove the “--insecure-bind-address” setting.

The Kubernetes API server must have the secure port set.
AC-3 - Medium - CCI-000213 - CNTR-K8-000350 - CNTR-K8-000350_rule
RMF Control
AC-3
Severity
M
CCI
CCI-000213
Version
CNTR-K8-000350
Vuln IDs
  • CNTR-K8-000350
Rule IDs
  • CNTR-K8-000350_rule
By default, the API server will listen on what is rightfully called the secure port, port 6443. Any requests to this port will perform authentication and authorization checks. If this port is disabled, anyone who gains access to the host on which the master is running has full control of the entire cluster over unencrypted traffic. Open the secure port by setting the API server’s “--secure-port” flag to a value other than “0”.
Checks: C-CNTR-K8-000350_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i secure-port *

If the setting “secure-port” is set to “0” or is not found in the Kubernetes API Server manifest file, this is a finding.

Fix: F-CNTR-K8-000350_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument --secure-port to a value greater than “0”.

The Kubernetes API server must have anonymous authentication disabled.
AC-3 - High - CCI-000213 - CNTR-K8-000360 - CNTR-K8-000360_rule
RMF Control
AC-3
Severity
H
CCI
CCI-000213
Version
CNTR-K8-000360
Vuln IDs
  • CNTR-K8-000360
Rule IDs
  • CNTR-K8-000360_rule
The Kubernetes API Server controls Kubernetes via an API interface. A user who has access to the API essentially has root access to the entire Kubernetes cluster. To control access, users must be authenticated and authorized. By allowing anonymous connections, the controls put in place to secure the API can be bypassed. Setting anonymous authentication to “false” also disables unauthenticated requests from kubelets. While there are instances where anonymous connections may be needed (e.g., health checks) and Role-Based Access controls (RBAC) are in place to limit the anonymous access, this access should be disabled, and only enabled when necessary.
Checks: C-CNTR-K8-000360_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i anonymous-auth *

If the setting “anonymous-auth” is set to “true” in the Kubernetes API Server manifest file, this is a finding.

Fix: F-CNTR-K8-000360_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument --anonymous-auth to “false”.

The Kubernetes Kubelet must have anonymous authentication disabled.
AC-3 - High - CCI-000213 - CNTR-K8-000370 - CNTR-K8-000370_rule
RMF Control
AC-3
Severity
H
CCI
CCI-000213
Version
CNTR-K8-000370
Vuln IDs
  • CNTR-K8-000370
Rule IDs
  • CNTR-K8-000370_rule
A user who has access to the Kubelet essentially has root access to the nodes contained within the Kubernetes Control Plane. To control access, users must be authenticated and authorized. By allowing anonymous connections, the controls put in place to secure the Kubelet can be bypassed. Setting anonymous authentication to “false” also disables unauthenticated requests from kubelets. While there are instances where anonymous connections may be needed (e.g., health checks) and Role-Based Access controls (RBAC) are in place to limit the anonymous access, this access must be disabled and only enabled when necessary.
Checks: C-CNTR-K8-000370_chk

Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command:

grep -i anonymous-auth kubelet

If the setting “anonymous-auth” is set to “true” or the parameter is not set in the Kubernetes Kubelet, this is a finding.

Fix: F-CNTR-K8-000370_fix

Edit the Kubernetes Kubelet file in the /etc/sysconfig/ directory on the Kubernetes Master Node. Set the argument “--anonymous-auth” to “false”. Restart the kubelet service using the command:

service kubelet restart

The Kubernetes kubelet must enable explicit authorization.
AC-3 - High - CCI-000213 - CNTR-K8-000380 - CNTR-K8-000380_rule
RMF Control
AC-3
Severity
H
CCI
CCI-000213
Version
CNTR-K8-000380
Vuln IDs
  • CNTR-K8-000380
Rule IDs
  • CNTR-K8-000380_rule
Kubelet is the primary agent on each node. The API server communicates with each kubelet to perform tasks such as starting/stopping pods. By default, kubelets allow all authenticated requests, even anonymous ones, without requiring any authorization checks from the API server. This default behavior bypasses any authorization controls put in place to limit what users may perform within the Kubernetes cluster. To change this behavior, the default setting of AlwaysAllow for the authorization mode must be set to “Webhook”.
Checks: C-CNTR-K8-000380_chk

Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command:

grep -i authorization-mode kubelet

On each Worker node, change to the /etc/sysconfig/ directory and run the same command:

grep -i authorization-mode kubelet

If “authorization-mode” is missing or is set to “AlwaysAllow” on the Master node or any of the Worker nodes, this is a finding.

Fix: F-CNTR-K8-000380_fix

Edit the Kubernetes Kubelet file in the /etc/sysconfig/ directory on the Kubernetes Master and Worker nodes. Set the argument “--authorization-mode” to “Webhook”. Restart each kubelet service after the change is made using the command:

service kubelet restart

The Kubernetes API server must have an authorization mode set.
AC-3 - Medium - CCI-000213 - CNTR-K8-000390 - CNTR-K8-000390_rule
RMF Control
AC-3
Severity
M
CCI
CCI-000213
Version
CNTR-K8-000390
Vuln IDs
  • CNTR-K8-000390
Rule IDs
  • CNTR-K8-000390_rule
The Kubernetes API Server controls Kubernetes via an API interface. Access to the API gives a user root access to the cluster. Using the setting “--authorization-mode=AlwaysAllow” allows all requests with no authorization checks. The valid modes for this setting are:
  • --authorization-mode=ABAC: Attribute-Based Access Control (ABAC) mode allows policies to be configured using local files.
  • --authorization-mode=RBAC: Role-Based Access Control (RBAC) mode allows a user to create and store policies using the Kubernetes API.
  • --authorization-mode=Webhook: Webhook is an HTTP callback mode that allows a user to manage authorization using a remote REST endpoint.
  • --authorization-mode=Node: Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.
  • --authorization-mode=AlwaysDeny: This flag blocks all requests. Use this flag only for testing.
Checks: C-CNTR-K8-000390_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i authorization-mode *

If the setting “authorization-mode” is set to “AlwaysAllow” in the Kubernetes API Server manifest file, this is a finding.

Fix: F-CNTR-K8-000390_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument “--authorization-mode” to any valid authorization mode other than “AlwaysAllow”.

Kubernetes worker nodes must not have sshd service running.
AC-3 - Medium - CCI-000213 - CNTR-K8-000400 - CNTR-K8-000400_rule
RMF Control
AC-3
Severity
M
CCI
CCI-000213
Version
CNTR-K8-000400
Vuln IDs
  • CNTR-K8-000400
Rule IDs
  • CNTR-K8-000400_rule
Worker Nodes are maintained and monitored by the Master Node. Direct access and manipulation of the nodes should not take place by administrators. Worker nodes should be treated as immutable and updated via replacement rather than in-place upgrades.
Checks: C-CNTR-K8-000400_chk

Log in to each worker node and verify that the sshd service is not running. To validate that the service is not running, run the command:

systemctl status sshd

If the sshd service is active (running), this is a finding.

Note: If console access is not available, SSH access can be attempted. If the worker nodes cannot be reached, this requirement is not a finding.

Fix: F-CNTR-K8-000400_fix

To stop the sshd service, run the command:

systemctl stop sshd

Note: If access to the worker node is through an SSH session, there are two requirements for disabling and stopping the sshd service, and both should be completed during the same SSH session. Disable the service first and then stop it, to guarantee both settings can be made if the session is interrupted.
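The ordering caveat in the note can be captured in a script: disabling before stopping means an interrupted session never leaves sshd enabled for the next boot. A sketch that dry-runs by default (a systemd-based worker node is assumed):

```shell
#!/bin/sh
# Sketch of the disable-then-stop ordering for sshd on a worker node.
RUN=echo   # dry run: prints the commands; set RUN= (empty) to execute for real

lock_down_sshd() {
  # Disable first, then stop: if the SSH session drops between the two
  # commands, sshd still comes back disabled on the next boot.
  $RUN systemctl disable sshd && $RUN systemctl stop sshd
}

lock_down_sshd
```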

Kubernetes Worker Nodes must not have the sshd service enabled.
AC-3 - Medium - CCI-000213 - CNTR-K8-000410 - CNTR-K8-000410_rule
RMF Control
AC-3
Severity
M
CCI
CCI-000213
Version
CNTR-K8-000410
Vuln IDs
  • CNTR-K8-000410
Rule IDs
  • CNTR-K8-000410_rule
Worker Nodes are maintained and monitored by the Master Node. Direct access and manipulation of the nodes must not take place by administrators. Worker nodes must be treated as immutable and updated via replacement rather than in-place upgrades.
Checks: C-CNTR-K8-000410_chk

Log in to each worker node and verify that the sshd service is not enabled. To validate the service is not enabled, run the command:

systemctl is-enabled sshd.service

If the sshd service is enabled, this is a finding.

Note: If console access is not available, SSH access can be attempted. If the worker nodes cannot be reached, this requirement is not a finding.

Fix: F-CNTR-K8-000410_fix

To disable the sshd service, run the command:

chkconfig sshd off

Note: If access to the worker node is through an SSH session, there are two requirements for disabling and stopping the sshd service, and both must be completed during the same SSH session. Disable the service first and then stop it, to guarantee both settings can be made if the session is interrupted.

Kubernetes dashboard must not be enabled.
AC-3 - Medium - CCI-000213 - CNTR-K8-000420 - CNTR-K8-000420_rule
RMF Control
AC-3
Severity
M
CCI
CCI-000213
Version
CNTR-K8-000420
Vuln IDs
  • CNTR-K8-000420
Rule IDs
  • CNTR-K8-000420_rule
While the Kubernetes dashboard is not inherently insecure on its own, it is often coupled with a misconfiguration of Role-Based Access Control (RBAC) permissions that can unintentionally over-grant access. It is also not commonly protected with “NetworkPolicies” that would prevent arbitrary pods from reaching it. In increasingly rare circumstances, the Kubernetes dashboard is exposed publicly to the internet.
Checks: C-CNTR-K8-000420_chk

From the Master node, run the command:

kubectl get pods --all-namespaces -l k8s-app=kubernetes-dashboard

If any resources are returned, this is a finding.

Fix: F-CNTR-K8-000420_fix

Delete the Kubernetes dashboard deployment with the following command:

kubectl delete deployment kubernetes-dashboard --namespace=kube-system

Kubernetes Kubectl cp command must give expected access and results.
AC-3 - Medium - CCI-000213 - CNTR-K8-000430 - CNTR-K8-000430_rule
RMF Control
AC-3
Severity
M
CCI
CCI-000213
Version
CNTR-K8-000430
Vuln IDs
  • CNTR-K8-000430
Rule IDs
  • CNTR-K8-000430_rule
One of the tools heavily used to interact with containers in the Kubernetes cluster is kubectl. It is the tool System Administrators use to create, modify, and delete resources. One of the capabilities of the tool is to copy files to and from running containers (i.e., kubectl cp). The command uses the “tar” binary inside the container to copy files from the container to the host executing the “kubectl cp” command. If the “tar” binary in the container has been replaced by a malicious user, the command can copy files anywhere on the host machine. This flaw has been fixed in later versions of the tool. It is recommended to use kubectl version 1.12.9 or newer.
Checks: C-CNTR-K8-000430_chk

From the Master and each Worker node, check the version of kubectl by executing the command:

kubectl version --client

If the Master or any Worker nodes are not using kubectl version 1.12.9 or newer, this is a finding.

Fix: F-CNTR-K8-000430_fix

Upgrade the Master and Worker nodes to the latest version of kubectl.
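Version strings do not compare correctly as plain numbers (1.9 sorts after 1.12 lexically), so one way to script the comparison is GNU sort's version ordering. version_ok is a hypothetical helper name, not part of kubectl:

```shell
# Hypothetical helper: succeed if the given kubectl client version is
# 1.12.9 or newer. Relies on GNU sort's version ordering (-V).
version_ok() {
  min="1.12.9"
  # If the minimum sorts first (or the two are equal), the candidate is new enough.
  [ "$(printf '%s\n%s\n' "$min" "$1" | sort -V | head -n1)" = "$min" ]
}

# Usage sketch: pass the GitVersion reported by `kubectl version --client`
# (with the leading "v" stripped), e.g.:
#   version_ok "1.18.2" && echo "kubectl ok" || echo "FINDING: upgrade kubectl"
```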

Kubernetes kubelet static PodPath must not enable static pods.
AC-3 - High - CCI-000213 - CNTR-K8-000440 - CNTR-K8-000440_rule
RMF Control
AC-3
Severity
H
CCI
CCI-000213
Version
CNTR-K8-000440
Vuln IDs
  • CNTR-K8-000440
Rule IDs
  • CNTR-K8-000440_rule
Allowing kubelet to set a staticPodPath gives containers with root access permission to traverse the hosting filesystem. The danger comes when the container can create a manifest file within the /etc/kubernetes/manifests directory. When a manifest is created within this directory, containers are entirely governed by the kubelet, not the API Server. The container is not subject to admission control at all. Any containers or pods that are instantiated in this manner are called “static pods” and are meant to be used for pods such as the API server, scheduler, controller, etc., not workload pods that need to be governed by the API Server.
Checks: C-CNTR-K8-000440_chk

On the Master and Worker nodes, change to the /etc/sysconfig/ directory and run the command: grep -i staticPodPath kubelet If any of the nodes return a value for staticPodPath, this is a finding.

Fix: F-CNTR-K8-000440_fix

Edit the kubelet file on each node under the /etc/sysconfig directory to remove the staticPodPath setting and restart the kubelet service by executing the command: service kubelet restart

Kubernetes DynamicAuditing must not be enabled.
AC-3 - Medium - CCI-000213 - CNTR-K8-000450 - CNTR-K8-000450_rule
RMF Control
AC-3
Severity
M
CCI
CCI-000213
Version
CNTR-K8-000450
Vuln IDs
  • CNTR-K8-000450
Rule IDs
  • CNTR-K8-000450_rule
Protecting the audit data from change or deletion is important when an attack occurs. One way an attacker can cover their tracks is to change or delete audit records. This will either make the attack unnoticeable or make it more difficult to investigate how the attack took place and what changes were made. The audit data can be protected through audit log file protections and user authorization. One way for an attacker to thwart these measures is to send the audit logs to another source and filter the audited results before sending them on to the original target. This can be done in Kubernetes through the configuration of dynamic audit webhooks through the DynamicAuditing flag.
Checks: C-CNTR-K8-000450_chk

On the Master node, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the feature-gates setting, if one is returned. If the feature-gates setting is available and contains the “DynamicAuditing” flag set to “true”, this is a finding. Change to the directory /etc/sysconfig on the Master and each Worker node and execute the command: grep -i feature-gates kubelet Review every feature-gates setting that is returned. If any feature-gates setting is available and contains the “DynamicAuditing” flag set to “true”, this is a finding.

Fix: F-CNTR-K8-000450_fix

Edit any manifest files or kubelet config files that contain the feature-gates setting with DynamicAuditing set to “true”. Set the flag to “false” or remove the “DynamicAuditing” setting completely. Restart the kubelet service if the kubelet config file is changed.
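The grep logic in the check above can be wrapped in a small sketch. has_dynamic_auditing is an illustrative helper name; the feature-gates value format it assumes is the standard comma-separated Kubernetes form:

```shell
# Hypothetical helper: succeed (exit 0) if the given file enables the
# DynamicAuditing feature gate, i.e. the condition that is a finding.
has_dynamic_auditing() {
  # feature-gates values look like: --feature-gates=...,DynamicAuditing=true
  grep -Eiq 'feature-gates[^ ]*DynamicAuditing=true' "$1"
}

# Usage sketch, covering the manifests and kubelet config named above:
#   for f in /etc/kubernetes/manifests/* /etc/sysconfig/kubelet; do
#     has_dynamic_auditing "$f" && echo "FINDING: $f"
#   done
```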

Kubernetes DynamicKubeletConfig must not be enabled.
AC-3 - Medium - CCI-000213 - CNTR-K8-000460 - CNTR-K8-000460_rule
RMF Control
AC-3
Severity
M
CCI
CCI-000213
Version
CNTR-K8-000460
Vuln IDs
  • CNTR-K8-000460
Rule IDs
  • CNTR-K8-000460_rule
Kubernetes allows a user to configure kubelets with dynamic configurations. When dynamic configuration is used, the kubelet will watch for changes to the configuration file. When changes are made, the kubelet will automatically restart. Allowing this capability bypasses access restrictions and authorizations. Using this capability, an attacker can lower the security posture of the kubelet, which includes allowing the ability to run arbitrary commands in any container running on that node.
Checks: C-CNTR-K8-000460_chk

On the Master node, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the feature-gates setting, if one is returned. If the feature-gates setting does not exist, or it does not contain the “DynamicKubeletConfig” flag, or the “DynamicKubeletConfig” flag is set to “true”, this is a finding. Change to the directory /etc/sysconfig on the Master and each Worker node and execute the command: grep -i feature-gates kubelet Review every feature-gates setting that is returned. If the feature-gates setting does not exist, or it does not contain the “DynamicKubeletConfig” flag, or the “DynamicKubeletConfig” flag is set to “true”, this is a finding.

Fix: F-CNTR-K8-000460_fix

Edit any manifest file or kubelet config file that does not contain a feature-gates setting or has DynamicKubeletConfig set to “true”. Omitting DynamicKubeletConfig from the feature-gates setting defaults the flag to “true”. Set DynamicKubeletConfig to “false”. Restart the kubelet service if the kubelet config file is changed.
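Note that this check is stricter than the DynamicAuditing one: the flag must be explicitly present and set to false, because an absent flag defaults to true. A hypothetical sketch of that logic (dynamic_kubelet_config_ok is an illustrative name):

```shell
# Hypothetical helper mirroring the stricter logic above: the file passes
# only if feature-gates explicitly sets DynamicKubeletConfig=false; an
# absent flag defaults to true and is therefore a finding.
dynamic_kubelet_config_ok() {
  grep -Eiq 'feature-gates[^ ]*DynamicKubeletConfig=false' "$1"
}

# Usage sketch:
#   dynamic_kubelet_config_ok /etc/sysconfig/kubelet || echo "FINDING"
```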

The Kubernetes API server must have Alpha APIs disabled.
AC-3 - Medium - CCI-000213 - CNTR-K8-000470 - CNTR-K8-000470_rule
RMF Control
AC-3
Severity
M
CCI
CCI-000213
Version
CNTR-K8-000470
Vuln IDs
  • CNTR-K8-000470
Rule IDs
  • CNTR-K8-000470_rule
Kubernetes allows alpha API calls within the API server. The alpha features are disabled by default since they are not ready for production and likely to change without notice. These features may also contain security issues that are rectified as the feature matures. To keep the Kubernetes cluster secure and stable, these alpha features must not be used.
Checks: C-CNTR-K8-000470_chk

On the Master node, change to the manifests directory at /etc/kubernetes/manifests and run the command: grep -i feature-gates * Review the feature-gates setting, if one is returned. If the feature-gates setting is available and contains the “AllAlpha” flag set to “true”, this is a finding.

Fix: F-CNTR-K8-000470_fix

Edit any manifest files that contain the feature-gates setting with AllAlpha set to “true”. Set the flag to “false” or remove the AllAlpha setting completely.

The Kubernetes API Server must have an audit policy set.
AU-14 - Medium - CCI-001464 - CNTR-K8-000600 - CNTR-K8-000600_rule
RMF Control
AU-14
Severity
M
CCI
CCI-001464
Version
CNTR-K8-000600
Vuln IDs
  • CNTR-K8-000600
Rule IDs
  • CNTR-K8-000600_rule
When Kubernetes is started, components and user services are started. For auditing startup events, and events for components and services, it is important that auditing begin on startup. Within Kubernetes, audit data for all components is generated by the API server. To enable auditing to begin, an audit policy must be defined for the events and the information to be stored with each event. It is also necessary to give a secure location where the audit logs are to be stored. If an audit log path is not specified, all audit data is sent to stdout.
Checks: C-CNTR-K8-000600_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * If the audit-policy-file is not set, this is a finding.

Fix: F-CNTR-K8-000600_fix

Edit the Kubernetes API Server manifest and set “--audit-policy-file” to the audit policy file. Note: If the API server is running as a Pod, then the manifest will also need to be updated to mount the host system filesystem where the audit policy file resides.
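As an illustrative sketch only (the file names and mount paths below are assumptions, not mandated by this STIG), the flag and the host mount for a Pod-based API server might look like:

```yaml
# kube-apiserver static pod manifest excerpt (illustrative paths)
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit/policy.yaml
    volumeMounts:
    - name: audit-policy
      mountPath: /etc/kubernetes/audit
      readOnly: true
  volumes:
  - name: audit-policy
    hostPath:
      path: /etc/kubernetes/audit
      type: DirectoryOrCreate
```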

The Kubernetes API Server must have an audit log path set.
AU-14 - Medium - CCI-001464 - CNTR-K8-000610 - CNTR-K8-000610_rule
RMF Control
AU-14
Severity
M
CCI
CCI-001464
Version
CNTR-K8-000610
Vuln IDs
  • CNTR-K8-000610
Rule IDs
  • CNTR-K8-000610_rule
When Kubernetes is started, components and user services are started. For auditing startup events, and events for components and services, it is important that auditing begin on startup. Within Kubernetes, audit data for all components is generated by the API server. To enable auditing to begin, an audit policy must be defined for the events and the information to be stored with each event. It is also necessary to give a secure location where the audit logs are to be stored. If an audit log path is not specified, all audit data is sent to stdout.
Checks: C-CNTR-K8-000610_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-log-path * If the audit-log-path is not set, this is a finding.

Fix: F-CNTR-K8-000610_fix

Edit the Kubernetes API Server manifest and set “--audit-log-path” to a secure location for the audit logs to be written. Note: If the API server is running as a Pod, then the manifest will also need to be updated to mount the host system filesystem where the audit log file is to be written.
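An illustrative excerpt (the log path is an assumption; the rotation flags are optional kube-apiserver settings that complement the required one):

```yaml
# kube-apiserver command excerpt (illustrative path and rotation values)
- command:
  - kube-apiserver
  - --audit-log-path=/var/log/kubernetes/audit/audit.log
  - --audit-log-maxage=30      # days to retain old audit log files
  - --audit-log-maxbackup=10   # number of rotated files to retain
  - --audit-log-maxsize=100    # megabytes per file before rotation
```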

The Kubernetes API server must generate audit records that identify what type of event has occurred.
AU-3 - Medium - CCI-000130 - CNTR-K8-000630 - CNTR-K8-000630_rule
RMF Control
AU-3
Severity
M
CCI
CCI-000130
Version
CNTR-K8-000630
Vuln IDs
  • CNTR-K8-000630
Rule IDs
  • CNTR-K8-000630_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents, that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to understand the type of event. The API server policy file allows for the following levels of auditing:
None - do not log events that match the rule.
Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
Request - log event metadata and request body but not response body.
RequestResponse - log event metadata, request, and response bodies.
Checks: C-CNTR-K8-000630_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i audit-policy-file *

The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.

Fix: F-CNTR-K8-000630_fix

Edit the Kubernetes API Server audit policy and set it to look like the below:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

The Kubernetes API server must generate audit records that have a date and time association with all events.
AU-3 - Medium - CCI-000131 - CNTR-K8-000640 - CNTR-K8-000640_rule
RMF Control
AU-3
Severity
M
CCI
CCI-000131
Version
CNTR-K8-000640
Vuln IDs
  • CNTR-K8-000640
Rule IDs
  • CNTR-K8-000640_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents, that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to know the date and time of the event. The API server policy file allows for the following levels of auditing:
None - do not log events that match the rule.
Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
Request - log event metadata and request body but not response body.
RequestResponse - log event metadata, request, and response bodies.
Checks: C-CNTR-K8-000640_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i audit-policy-file *

The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.

Fix: F-CNTR-K8-000640_fix

Edit the Kubernetes API Server audit policy and set it to look like the below:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

The Kubernetes API server must generate audit records that identify where in Kubernetes the event occurred.
AU-3 - Medium - CCI-000132 - CNTR-K8-000650 - CNTR-K8-000650_rule
RMF Control
AU-3
Severity
M
CCI
CCI-000132
Version
CNTR-K8-000650
Vuln IDs
  • CNTR-K8-000650
Rule IDs
  • CNTR-K8-000650_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents, that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to understand where in Kubernetes the event occurred. The API server policy file allows for the following levels of auditing:
None - do not log events that match the rule.
Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
Request - log event metadata and request body but not response body.
RequestResponse - log event metadata, request, and response bodies.
Checks: C-CNTR-K8-000650_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i audit-policy-file *

The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.

Fix: F-CNTR-K8-000650_fix

Edit the Kubernetes API Server audit policy and set it to look like the below:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

The Kubernetes API server must generate audit records that identify the source of the event.
AU-3 - Medium - CCI-000133 - CNTR-K8-000660 - CNTR-K8-000660_rule
RMF Control
AU-3
Severity
M
CCI
CCI-000133
Version
CNTR-K8-000660
Vuln IDs
  • CNTR-K8-000660
Rule IDs
  • CNTR-K8-000660_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents, that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to understand the source of the event. The API server policy file allows for the following levels of auditing:
None - do not log events that match the rule.
Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
Request - log event metadata and request body but not response body.
RequestResponse - log event metadata, request, and response bodies.
Checks: C-CNTR-K8-000660_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i audit-policy-file *

The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.

Fix: F-CNTR-K8-000660_fix

Edit the Kubernetes API Server audit policy and set it to look like the below:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

The Kubernetes API server must generate audit records that contain the event results.
AU-3 - Medium - CCI-000134 - CNTR-K8-000670 - CNTR-K8-000670_rule
RMF Control
AU-3
Severity
M
CCI
CCI-000134
Version
CNTR-K8-000670
Vuln IDs
  • CNTR-K8-000670
Rule IDs
  • CNTR-K8-000670_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents, that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to know the outcome of the event. The API server policy file allows for the following levels of auditing:
None - do not log events that match the rule.
Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
Request - log event metadata and request body but not response body.
RequestResponse - log event metadata, request, and response bodies.
Checks: C-CNTR-K8-000670_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i audit-policy-file *

The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.

Fix: F-CNTR-K8-000670_fix

Edit the Kubernetes API Server audit policy and set it to look like the below:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

The Kubernetes API server must generate audit records that identify any users associated with the event.
AU-3 - Medium - CCI-001487 - CNTR-K8-000680 - CNTR-K8-000680_rule
RMF Control
AU-3
Severity
M
CCI
CCI-001487
Version
CNTR-K8-000680
Vuln IDs
  • CNTR-K8-000680
Rule IDs
  • CNTR-K8-000680_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents, that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to identify any users associated with the event. The API server policy file allows for the following levels of auditing:
None - do not log events that match the rule.
Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
Request - log event metadata and request body but not response body.
RequestResponse - log event metadata, request, and response bodies.
Checks: C-CNTR-K8-000680_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i audit-policy-file *

The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.

Fix: F-CNTR-K8-000680_fix

Edit the Kubernetes API Server audit policy and set it to look like the below:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

The Kubernetes API server must generate audit records that identify any containers associated with the event.
AU-3 - Medium - CCI-001487 - CNTR-K8-000690 - CNTR-K8-000690_rule
RMF Control
AU-3
Severity
M
CCI
CCI-001487
Version
CNTR-K8-000690
Vuln IDs
  • CNTR-K8-000690
Rule IDs
  • CNTR-K8-000690_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents, that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to know any containers associated with the event. The API server policy file allows for the following levels of auditing:
None - do not log events that match the rule.
Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
Request - log event metadata and request body but not response body.
RequestResponse - log event metadata, request, and response bodies.
Checks: C-CNTR-K8-000690_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i audit-policy-file *

The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.

Fix: F-CNTR-K8-000690_fix

Edit the Kubernetes API Server audit policy and set it to look like the below:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

The API Server must generate audit records containing the full-text recording of privileged commands or the individual identities of group account users.
AU-3 - Medium - CCI-000135 - CNTR-K8-000700 - CNTR-K8-000700_rule
RMF Control
AU-3
Severity
M
CCI
CCI-000135
Version
CNTR-K8-000700
Vuln IDs
  • CNTR-K8-000700
Rule IDs
  • CNTR-K8-000700_rule
During an investigation of an incident, it is important to fully understand what took place. Often, information is not part of the audited event due to the data's nature, a security risk, or a need to limit audit log size. Organizations must consider limiting the additional audit information to only that information explicitly needed for specific audit requirements. At a minimum, the organization must audit either the full-text recording of privileged commands or the individual identities of group users, or both.
Checks: C-CNTR-K8-000700_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i audit-policy-file *

The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.

Fix: F-CNTR-K8-000700_fix

Edit the Kubernetes API Server audit policy and set it to look like the below:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

Kubernetes Kubelet must deny hostname override.
CM-5 - Medium - CCI-001499 - CNTR-K8-000850 - CNTR-K8-000850_rule
RMF Control
CM-5
Severity
M
CCI
CCI-001499
Version
CNTR-K8-000850
Vuln IDs
  • CNTR-K8-000850
Rule IDs
  • CNTR-K8-000850_rule
Kubernetes allows for the overriding of hostnames. Allowing this feature to be implemented within the kubelets may break the TLS setup between the kubelet service and the API server. This setting can also make it difficult to associate logs with nodes if security analytics needs to take place. The better practice is to set up nodes with resolvable FQDNs and avoid overriding the hostnames.
Checks: C-CNTR-K8-000850_chk

On the Master and each Worker node, change to the /etc/sysconfig/ directory and run the command: grep -i hostname-override kubelet If any of the nodes have the “--hostname-override” setting present, this is a finding.

Fix: F-CNTR-K8-000850_fix

Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Master and Worker nodes and remove the “--hostname-override” setting. Restart the service after the change is made by running: service kubelet restart

The Kubernetes manifests must be owned by root.
CM-5 - Medium - CCI-001499 - CNTR-K8-000860 - CNTR-K8-000860_rule
RMF Control
CM-5
Severity
M
CCI
CCI-001499
Version
CNTR-K8-000860
Vuln IDs
  • CNTR-K8-000860
Rule IDs
  • CNTR-K8-000860_rule
The manifest files contain the runtime configuration of the API server, proxy, scheduler, controller, and etcd. If an attacker can gain access to these files, changes can be made to open vulnerabilities and bypass user authorizations inherent within Kubernetes with RBAC implemented.
Checks: C-CNTR-K8-000860_chk

On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must be owned by root:root. If any manifest file is not owned by root:root, this is a finding.

Fix: F-CNTR-K8-000860_fix

On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: chown root:root * To verify the change took place, run the command: ls -l * All the manifest files should be owned by root:root.
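The verification step can be scripted. check_owner is a hypothetical helper, assuming GNU stat:

```shell
# Hypothetical helper: report whether a file is owned by root:root.
check_owner() {
  owner=$(stat -c '%U:%G' "$1")   # GNU stat; prints e.g. root:root
  if [ "$owner" = "root:root" ]; then
    echo "ok: $1"
  else
    echo "FINDING: $1 owned by $owner"
  fi
}

# Usage sketch, run from the manifests directory:
#   for f in *; do check_owner "$f"; done
```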

The Kubernetes manifests must have least privileges.
CM-5 - Medium - CCI-001499 - CNTR-K8-000870 - CNTR-K8-000870_rule
RMF Control
CM-5
Severity
M
CCI
CCI-001499
Version
CNTR-K8-000870
Vuln IDs
  • CNTR-K8-000870
Rule IDs
  • CNTR-K8-000870_rule
The manifest files contain the runtime configuration of the API server, proxy, scheduler, controller, and etcd. If an attacker can gain access to these files, changes can be made to open vulnerabilities and bypass user authorizations inherent within Kubernetes with RBAC implemented.
Checks: C-CNTR-K8-000870_chk

On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must have permissions of “644” or more restrictive. If any manifest file is less restrictive than “644”, this is a finding.

Fix: F-CNTR-K8-000870_fix

On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: chmod 644 * To verify the change took place, run the command: ls -l * All the manifest files should now have permissions of “644”.
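Note that “644 or more restrictive” is a bitwise condition, not a numeric one: mode 611 is numerically below 644 but adds execute bits. A hypothetical bash sketch (perms_ok is an illustrative name, assuming GNU stat):

```shell
# Hypothetical helper (bash): succeed only if a file's mode grants no
# permission bit beyond rw-r--r-- (644).
perms_ok() {
  mode=$(stat -c '%a' "$1")            # e.g. 644, 600, 755 (GNU stat)
  # 8#$mode parses the octal string; any bit outside 0644 is a finding.
  [ $(( 8#$mode & ~8#644 & 8#777 )) -eq 0 ]
}

# Usage sketch, run from the manifests directory:
#   for f in *; do perms_ok "$f" || echo "FINDING: $f"; done
```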

The Kubernetes kubelet configuration file must be owned by root.
CM-5 - Medium - CCI-001499 - CNTR-K8-000880 - CNTR-K8-000880_rule
RMF Control
CM-5
Severity
M
CCI
CCI-001499
Version
CNTR-K8-000880
Vuln IDs
  • CNTR-K8-000880
Rule IDs
  • CNTR-K8-000880_rule
The kubelet configuration file contains the runtime configuration of the kubelet service. If an attacker can gain access to this file, changes can be made to open vulnerabilities and bypass user authorizations inherent within Kubernetes with RBAC implemented.
Checks: C-CNTR-K8-000880_chk

On the Master and Worker nodes, change to the /etc/sysconfig directory. Run the command: ls -l kubelet The kubelet configuration file must be owned by root:root. If the kubelet configuration file is not owned by root:root, this is a finding.

Fix: F-CNTR-K8-000880_fix

On the Master and Worker nodes, change to the /etc/sysconfig directory. Run the command: chown root:root kubelet To verify the change took place, run the command: ls -l kubelet The kubelet file should now be owned by root:root.

The Kubernetes kubelet configuration file must have file permissions set to 644 or more restrictive.
CM-5 - Medium - CCI-001499 - CNTR-K8-000890 - CNTR-K8-000890_rule
RMF Control
CM-5
Severity
M
CCI
CCI-001499
Version
CNTR-K8-000890
Vuln IDs
  • CNTR-K8-000890
Rule IDs
  • CNTR-K8-000890_rule
The kubelet configuration file contains the runtime configuration of the kubelet service. If an attacker can gain access to this file, changes can be made to open vulnerabilities and bypass user authorizations inherent within Kubernetes with RBAC implemented.
Checks: C-CNTR-K8-000890_chk

On the Master and Worker nodes, change to the /etc/sysconfig directory. Run the command: ls -l kubelet The kubelet configuration file must have permissions of “644” or more restrictive. If the kubelet configuration file is less restrictive than “644”, this is a finding.

Fix: F-CNTR-K8-000890_fix

On the Master and Worker nodes, change to the /etc/sysconfig directory. Run the command: chmod 644 kubelet To verify the change took place, run the command: ls -l kubelet The kubelet file should now have permissions of “644”.

The Kubernetes manifests must have least privileges.
CM-5 - Medium - CCI-001499 - CNTR-K8-000900 - CNTR-K8-000900_rule
RMF Control
CM-5
Severity
M
CCI
CCI-001499
Version
CNTR-K8-000900
Vuln IDs
  • CNTR-K8-000900
Rule IDs
  • CNTR-K8-000900_rule
The manifest files contain the runtime configuration of the API server, scheduler, controller, and etcd. If an attacker can gain access to these files, changes can be made to open vulnerabilities and bypass user authorizations inherent within Kubernetes with RBAC implemented.
Checks: C-CNTR-K8-000900_chk

On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: ls -l * Each manifest file must have permissions of “644” or more restrictive. If any manifest file is less restrictive than “644”, this is a finding.

Fix: F-CNTR-K8-000900_fix

On the Master node, change to the /etc/kubernetes/manifests directory. Run the command: chmod 644 * To verify the change took place, run the command: ls -l * All the manifest files should now have permissions of “644”.

Kubernetes Controller Manager must disable profiling.
CM-7 - Medium - CCI-000381 - CNTR-K8-000910 - CNTR-K8-000910_rule
RMF Control
CM-7
Severity
M
CCI
CCI-000381
Version
CNTR-K8-000910
Vuln IDs
  • CNTR-K8-000910
Rule IDs
  • CNTR-K8-000910_rule
Kubernetes profiling provides the ability to analyze and troubleshoot Controller Manager events over a web interface on a host port. Enabling this service can expose details about the Kubernetes architecture. This service must not be enabled unless deemed necessary.
Checks: C-CNTR-K8-000910_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i profiling * If the setting “profiling” is not set in the Kubernetes Controller Manager manifest file or it is set to “true”, this is a finding.

Fix: F-CNTR-K8-000910_fix

Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument “--profiling” to “false”.

The Kubernetes API Server must enforce ports, protocols, and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL).
CM-7 - Medium - CCI-000382 - CNTR-K8-000920 - CNTR-K8-000920_rule
RMF Control
CM-7
Severity
M
CCI
CCI-000382
Version
CNTR-K8-000920
Vuln IDs
  • CNTR-K8-000920
Rule IDs
  • CNTR-K8-000920_rule
Kubernetes API Server PPSs must be controlled and conform to the PPSM CAL. Those PPS that fall outside the PPSM CAL must be blocked. Instructions on the PPSM can be found in DoD Instruction 8551.01 Policy.
Checks: C-CNTR-K8-000920_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands:

grep -i insecure-port kube-apiserver.manifest
grep -i secure-port kube-apiserver.manifest
grep -i etcd-servers kube-apiserver.manifest

Edit the manifest file:

vim <Manifest Name>

Review livenessProbe:
  httpGet:
    port:

Review ports:
- containerPort:
  hostPort:
- containerPort:
  hostPort:

Run the command:

kubectl describe services --all-namespaces

Search the labels for any apiserver namespaces and review the ports. Any manifest and namespace PPS or services configuration not in compliance with the PPSM CAL is a finding.

Review the information system documentation and interview the team to gain an understanding of the API Server architecture and determine the applicable PPS. If there are any ports, protocols, or services in the system documentation not in compliance with the PPSM CAL, this is a finding. Any PPS not set in the system documentation is a finding.

Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/

Verify the API Server network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.

Fix: F-CNTR-K8-000920_fix

Amend any system documentation requiring revision. Update Kubernetes API Server manifest and namespace PPS configuration to comply with PPSM CAL.
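As a practical aid to the review above, the declared listener flags and advertised service ports can be gathered in one pass. This is a sketch only: the manifest file name (kube-apiserver.yaml) and kubeadm-style paths are assumptions; adjust both for the distribution in use.

```shell
# Collect the port-related flags from the API Server manifest
# (file name assumed; some distributions use kube-apiserver.manifest).
grep -E -- '--(secure-port|insecure-port|etcd-servers)=' \
  /etc/kubernetes/manifests/kube-apiserver.yaml

# Collect the ports every service advertises, for comparison with the PPSM CAL.
kubectl describe services --all-namespaces | grep -E '^(Name|Port|TargetPort):'
```

The filtered output gives a single list of names and ports to check line by line against the CAL.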

b
The Kubernetes Scheduler must enforce ports, protocols and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL).
CM-7 - Medium - CCI-000382 - CNTR-K8-000930 - CNTR-K8-000930_rule
RMF Control
CM-7
Severity
M
CCI
CCI-000382
Version
CNTR-K8-000930
Vuln IDs
  • CNTR-K8-000930
Rule IDs
  • CNTR-K8-000930_rule
Kubernetes Scheduler PPS must be controlled and conform to the PPSM CAL. Those ports, protocols and services that fall outside the PPSM CAL must be blocked. Instructions on the PPSM can be found in DoD Instruction 8551.01 Policy.
Checks: C-CNTR-K8-000930_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands:

grep -i insecure-port kube-scheduler.manifest
grep -i secure-port kube-scheduler.manifest

Edit the manifest file (vim <manifest name>) and review the configured ports:

livenessProbe:
  httpGet:
    port:
ports:
  - containerPort:
    hostPort:

Run the command:

kubectl describe services --all-namespaces

Search the labels for any Scheduler namespaces and review the ports. Any manifest or namespace PPS configuration not in compliance with the PPSM CAL is a finding.

Review the information system documentation and interview the team to gain an understanding of the Scheduler architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding.

Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/

Verify the Scheduler network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.

Fix: F-CNTR-K8-000930_fix

Amend any system documentation requiring revision. Update Kubernetes Scheduler manifest and namespace PPS configuration to comply with the PPSM CAL.

b
The Kubernetes Controllers must enforce ports, protocols and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL).
CM-7 - Medium - CCI-000382 - CNTR-K8-000940 - CNTR-K8-000940_rule
RMF Control
CM-7
Severity
M
CCI
CCI-000382
Version
CNTR-K8-000940
Vuln IDs
  • CNTR-K8-000940
Rule IDs
  • CNTR-K8-000940_rule
Kubernetes Controller ports, protocols and services must be controlled and conform to the PPSM CAL. Those PPS that fall outside the PPSM CAL must be blocked. Instructions on the PPSM can be found in DoD Instruction 8551.01 Policy.
Checks: C-CNTR-K8-000940_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands:

grep -i insecure-port kube-controller-manager.manifest
grep -i secure-port kube-controller-manager.manifest

Edit the manifest file (vim <manifest name>) and review the configured ports:

livenessProbe:
  httpGet:
    port:
ports:
  - containerPort:
    hostPort:

Run the command:

kubectl describe services --all-namespaces

Search the labels for any Controller namespaces and review the ports. Any manifest or namespace PPS or services configuration not in compliance with the PPSM CAL is a finding.

Review the information system documentation and interview the team to gain an understanding of the Controller architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding.

Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/

Verify the Controller network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.

Fix: F-CNTR-K8-000940_fix

Amend any system documentation requiring revision. Update Kubernetes Controller manifest and namespace PPS configuration to comply with PPSM CAL.

b
The Kubernetes etcd must enforce ports, protocols and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL).
CM-7 - Medium - CCI-000382 - CNTR-K8-000950 - CNTR-K8-000950_rule
RMF Control
CM-7
Severity
M
CCI
CCI-000382
Version
CNTR-K8-000950
Vuln IDs
  • CNTR-K8-000950
Rule IDs
  • CNTR-K8-000950_rule
Kubernetes etcd PPS must be controlled and conform to the PPSM CAL. Those PPS that fall outside the PPSM CAL must be blocked. Instructions on the PPSM can be found in DoD Instruction 8551.01 Policy.
Checks: C-CNTR-K8-000950_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command:

grep -i etcd-servers kube-apiserver.manifest

Edit the etcd-main.manifest file (vim <manifest name>) and review the configured ports:

livenessProbe:
  httpGet:
    port:
ports:
  - containerPort:
    hostPort:

Run the command:

kubectl describe services --all-namespaces

Search the labels for any apiserver namespaces and review the ports. Any manifest or namespace PPS configuration not in compliance with the PPSM CAL is a finding.

Review the information system documentation and interview the team to gain an understanding of the etcd architecture and determine the applicable PPS. Any PPS in the system documentation not in compliance with the PPSM CAL is a finding. Any PPS not set in the system documentation is a finding.

Review findings against the most recent PPSM CAL: https://cyber.mil/ppsm/cal/

Verify the etcd network boundary with the PPS associated with the CAL Assurance Categories. Any PPS not in compliance with the CAL Assurance Category requirements is a finding.

Fix: F-CNTR-K8-000950_fix

Amend any system documentation requiring revision. Update Kubernetes etcd manifest and namespace PPS configuration to comply with PPSM CAL.

b
The Kubernetes cluster must use non-privileged host ports for user pods.
CM-7 - Medium - CCI-000382 - CNTR-K8-000960 - CNTR-K8-000960_rule
RMF Control
CM-7
Severity
M
CCI
CCI-000382
Version
CNTR-K8-000960
Vuln IDs
  • CNTR-K8-000960
Rule IDs
  • CNTR-K8-000960_rule
Privileged ports are ports below 1024 that require system privileges for their use. If containers are able to use these ports, the container must be run as a privileged user. Kubernetes must stop containers that try to map to these ports directly. Mapping a non-privileged host port to a privileged container port is the allowable method when a certain port is needed. An example is mapping host port 8080 to port 80 in the container.
Checks: C-CNTR-K8-000960_chk

On the Master node, run the command: kubectl get pods --all-namespaces The list returned is all pods running within the Kubernetes cluster. For those pods running within the user namespaces (System namespaces are kube-system, kube-node-lease and kube-public), run the command: kubectl get pod podname -o yaml | grep -i port Note: In the above command, “podname” is the name of the pod. For the command to work correctly, the current context must be changed to the namespace for the pod. The command to do this is: kubectl config set-context --current --namespace=namespace-name where namespace-name is the name of the namespace. Review the ports that are returned for the pod. If any host privileged ports are returned for any of the pods, this is a finding.

Fix: F-CNTR-K8-000960_fix

For any pods that are using privileged host ports, reconfigure the pod to use a service that maps a non-privileged host port to the pod port, or reconfigure the image to use non-privileged ports.
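The per-pod check above can be automated across all user namespaces. A minimal sketch, assuming the default system namespaces and that hostPort declarations live under .spec.containers[*].ports; verify the jsonpath against the cluster's API version before relying on it.

```shell
# Print "<pod> <hostPort>" for every hostPort below 1024 declared by a pod
# outside the system namespaces.
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  case "$ns" in kube-system|kube-public|kube-node-lease) continue ;; esac
  kubectl get pods -n "$ns" \
    -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' |
  awk '{for (i = 2; i <= NF; i++) if ($i + 0 < 1024 && $i != "") print $1, $i}'
done
```

Any line of output is a candidate finding under this rule.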

c
Secrets in Kubernetes must not be stored as environment variables.
IA-5 - High - CCI-000196 - CNTR-K8-001160 - CNTR-K8-001160_rule
RMF Control
IA-5
Severity
H
CCI
CCI-000196
Version
CNTR-K8-001160
Vuln IDs
  • CNTR-K8-001160
Rule IDs
  • CNTR-K8-001160_rule
Secrets, such as passwords, keys, tokens, and certificates, should not be stored as environment variables. These environment variables are accessible inside Kubernetes by the "Get Pod" API call and by any system, such as a CI/CD pipeline, that has access to the definition file of the container. Secrets must be mounted from files or stored within password vaults.
Checks: C-CNTR-K8-001160_chk

On the Kubernetes Master node, run the following command: kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A If any of the values returned reference environment variables, this is a finding.

Fix: F-CNTR-K8-001160_fix

Any secrets stored as environment variables must be moved to secret files with the proper protections and enforcements or placed within a password vault.
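One way to remediate is to create the secret object and mount it as a read-only file instead of injecting it via env. The names here ("db-creds", "password.txt", the mount path) are illustrative assumptions, not values from this STIG.

```shell
# Create the secret from a file rather than a literal on the command line
# (illustrative names; adjust to the workload).
kubectl create secret generic db-creds \
  --from-file=password=./password.txt

# Pod-spec fragment that mounts the secret as a read-only file
# instead of an environment variable:
cat <<'EOF'
volumes:
  - name: db-creds
    secret:
      secretName: db-creds
      defaultMode: 0400
containers:
  - name: app
    volumeMounts:
      - name: db-creds
        mountPath: /etc/secrets
        readOnly: true
EOF
```

The application then reads /etc/secrets/password at startup, and the value never appears in the pod definition or the "Get Pod" API response.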

b
Kubernetes Kubelet must not disable timeouts.
SC-10 - Medium - CCI-001133 - CNTR-K8-001300 - CNTR-K8-001300_rule
RMF Control
SC-10
Severity
M
CCI
CCI-001133
Version
CNTR-K8-001300
Vuln IDs
  • CNTR-K8-001300
Rule IDs
  • CNTR-K8-001300_rule
Idle connections from the Kubelet can be used by unauthorized users to perform malicious activity against the nodes, pods, containers, and cluster within the Kubernetes Control Plane. Setting the streaming connection idle timeout defines the maximum time an idle session is permitted before disconnection. Setting the value to “0” never disconnects idle sessions. Idle timeouts must never be set to “0” and should be set to a minimum of five minutes.
Checks: C-CNTR-K8-001300_chk

Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i streaming-connection-idle-timeout kubelet If the setting streaming-connection-idle-timeout is set to “0” or the parameter is not set in the Kubernetes Kubelet, this is a finding.

Fix: F-CNTR-K8-001300_fix

Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument “--streaming-connection-idle-timeout” to a value other than “0”. Restart the Kubelet service using the following command: service kubelet restart
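A quick post-fix check can confirm the timeout is a non-zero duration. A sketch, assuming the flag appears in /etc/sysconfig/kubelet on one line; the file name and layout vary by distribution.

```shell
# Extract the configured idle timeout and flag disabled or unset values.
timeout=$(grep -o -- '--streaming-connection-idle-timeout=[^ ]*' /etc/sysconfig/kubelet |
          cut -d= -f2)
case "$timeout" in
  ""|0|0s) echo "FINDING: idle timeout disabled or unset" ;;
  *)       echo "OK: idle timeout is $timeout" ;;
esac
```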

b
Kubernetes must separate user functionality.
SC-2 - Medium - CCI-001082 - CNTR-K8-001360 - CNTR-K8-001360_rule
RMF Control
SC-2
Severity
M
CCI
CCI-001082
Version
CNTR-K8-001360
Vuln IDs
  • CNTR-K8-001360
Rule IDs
  • CNTR-K8-001360_rule
Separating user functionality from management functionality is a requirement for all components within the Kubernetes Control Plane. Without this separation, users may gain access to management functions that can degrade the Kubernetes architecture and the services being offered, and the lack of separation can provide a way to bypass testing and validation of functions before they are introduced into a production environment.
Checks: C-CNTR-K8-001360_chk

On the Master node, run the command: kubectl get pods --all-namespaces Review the namespaces and pods that are returned. Kubernetes system namespaces are kube-node-lease, kube-public, and kube-system. If any user pods are present in the Kubernetes system namespaces, this is a finding.

Fix: F-CNTR-K8-001360_fix

Move any user pods that are present in the Kubernetes system namespaces to user specific namespaces.
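Pods cannot be moved between namespaces in place; the spec must be exported, retargeted, and re-applied. A sketch with illustrative names ("webapp", "team-a" are assumptions):

```shell
# Recreate a user pod in its own namespace.
kubectl create namespace team-a
kubectl get pod webapp -n kube-system -o yaml > webapp.yaml

# Point the saved spec at the new namespace (also strip the status section
# and any ownerReferences before re-applying).
sed 's/namespace: kube-system/namespace: team-a/' webapp.yaml > webapp-moved.yaml

kubectl delete pod webapp -n kube-system
kubectl apply -f webapp-moved.yaml
```

For pods managed by a Deployment or other controller, edit the controller's namespace instead of the pod's.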

b
The Kubernetes API server must use approved cipher suites.
SC-23 - Medium - CCI-001184 - CNTR-K8-001400 - CNTR-K8-001400_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001400
Vuln IDs
  • CNTR-K8-001400
Rule IDs
  • CNTR-K8-001400_rule
The Kubernetes API server communicates with the kubelet service on the nodes to deploy, update, and delete resources. If an attacker were able to intercept and modify this communication, the Kubernetes cluster could be compromised. Using approved cipher suites for the communication ensures the confidentiality and integrity of the transmitted information so that an attacker cannot read or alter it.
Checks: C-CNTR-K8-001400_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i tls-cipher-suites * If the setting “tls-cipher-suites” is not set in the Kubernetes API Server manifest file, contains no value, or does not contain TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, this is a finding.

Fix: F-CNTR-K8-001400_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of tls-cipher-suites to: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
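After the fix, the configured list can be audited mechanically: split the flag's value and reject anything outside the six approved suites. A sketch only; the manifest file name (kube-apiserver.yaml) is an assumption.

```shell
# Print any configured cipher suite that is NOT on the approved list.
# No output means the configuration is compliant.
grep -o -- '--tls-cipher-suites=[^ ]*' /etc/kubernetes/manifests/kube-apiserver.yaml |
  cut -d= -f2 | tr ',' '\n' |
  grep -v -E '^TLS_ECDHE_(ECDSA|RSA)_WITH_(AES_128_GCM_SHA256|AES_256_GCM_SHA384|CHACHA20_POLY1305)$'
```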

b
Kubernetes API Server must have the SSL Certificate Authority set.
SC-23 - Medium - CCI-001184 - CNTR-K8-001410 - CNTR-K8-001410_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001410
Vuln IDs
  • CNTR-K8-001410
Rule IDs
  • CNTR-K8-001410_rule
Kubernetes control plane and external communication is managed by the API Server. The main implementation of the API Server is to manage hardware resources for pods and containers using horizontal or vertical scaling. Anyone who can access the API Server can effectively control the Kubernetes architecture. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubernetes API Server with a means to authenticate sessions and encrypt traffic. To enable encrypted communication for the API Server, the parameter client-ca-file must be set. This parameter gives the location of the SSL Certificate Authority file used to secure API Server communication.
Checks: C-CNTR-K8-001410_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i client-ca-file * If the setting feature client-ca-file is not set in the Kubernetes API server manifest file or contains no value, this is a finding.

Fix: F-CNTR-K8-001410_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of client-ca-file to the path containing the Approved Organizational Certificate.

b
Kubernetes Kubelet must have the SSL Certificate Authority set.
SC-23 - Medium - CCI-001184 - CNTR-K8-001420 - CNTR-K8-001420_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001420
Vuln IDs
  • CNTR-K8-001420
Rule IDs
  • CNTR-K8-001420_rule
Kubernetes container and pod configuration is maintained by the Kubelet. Kubelet agents register nodes with the API Server, mount volume storage, and perform health checks for containers and pods. Anyone who gains access to Kubelet agents can effectively control applications within the pods and containers. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubelet with a means to authenticate sessions and encrypt traffic. To enable encrypted communication for the Kubelet, the parameter client-ca-file must be set. This parameter gives the location of the SSL Certificate Authority file used to secure Kubelet communication.
Checks: C-CNTR-K8-001420_chk

Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i client-ca-file kubelet If the setting client-ca-file is not set in the Kubernetes Kubelet configuration file or contains no value, this is a finding.

Fix: F-CNTR-K8-001420_fix

Edit the Kubernetes Kubelet file in the /etc/sysconfig/ directory on the Kubernetes Master Node. Set the value of client-ca-file to the path containing the Approved Organizational Certificate. Restart the Kubelet service using the following command: service kubelet restart
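After setting the flag it is worth confirming the referenced file actually exists and parses as a certificate. A sketch, assuming the flag sits on one line of /etc/sysconfig/kubelet; file locations vary by distribution.

```shell
# Pull the configured CA path and sanity-check the certificate.
ca=$(grep -o -- '--client-ca-file=[^ ]*' /etc/sysconfig/kubelet | cut -d= -f2)
test -r "$ca" && openssl x509 -in "$ca" -noout -subject -enddate
```

A readable file with a valid subject and a future notAfter date indicates the CA is in place.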

b
Kubernetes Controller Manager must have the SSL Certificate Authority set.
SC-23 - Medium - CCI-001184 - CNTR-K8-001430 - CNTR-K8-001430_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001430
Vuln IDs
  • CNTR-K8-001430
Rule IDs
  • CNTR-K8-001430_rule
The Kubernetes Controller Manager is responsible for creating service accounts and tokens for the API Server, maintaining the correct number of pods for every replication controller, and providing notifications when nodes are offline. Anyone who gains access to the Controller Manager can generate backdoor accounts, take possession of the cluster, or diminish system performance without detection by disabling system notifications. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubernetes Controller Manager with a means to authenticate sessions and encrypt traffic.
Checks: C-CNTR-K8-001430_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i root-ca-file * If the setting root-ca-file is not set in the Kubernetes Controller Manager manifest file or contains no value, this is a finding.

Fix: F-CNTR-K8-001430_fix

Edit the Kubernetes Controller Manager manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of root-ca-file to the path containing the Approved Organizational Certificate.

b
Kubernetes API Server must have a certificate for communication.
SC-23 - Medium - CCI-001184 - CNTR-K8-001440 - CNTR-K8-001440_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001440
Vuln IDs
  • CNTR-K8-001440
Rule IDs
  • CNTR-K8-001440_rule
Kubernetes control plane and external communication is managed by the API Server. The main implementation of the API Server is to manage hardware resources for pods and containers using horizontal or vertical scaling. Anyone who can access the API Server can effectively control the Kubernetes architecture. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubernetes API Server with a means to authenticate sessions and encrypt traffic. To enable encrypted communication for the API Server, the parameters tls-cert-file and tls-private-key-file must be set. These parameters give the locations of the certificate and private key files used to secure API Server communication.
Checks: C-CNTR-K8-001440_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands: grep -i tls-cert-file * grep -i tls-private-key-file * If the settings tls-cert-file and tls-private-key-file are not set in the Kubernetes API Server manifest file or contain no value, this is a finding.

Fix: F-CNTR-K8-001440_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the values of tls-cert-file and tls-private-key-file to the paths containing the Approved Organizational Certificate and its private key.
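A common misconfiguration is pointing the two flags at a certificate and key that do not belong together. The pair can be checked by comparing public keys; the paths below are illustrative assumptions, not values from this STIG.

```shell
# Verify the configured certificate and private key form a matching pair.
cert=/etc/kubernetes/pki/apiserver.crt   # path assumed; take it from the manifest
key=/etc/kubernetes/pki/apiserver.key    # path assumed; take it from the manifest
[ "$(openssl x509 -in "$cert" -noout -pubkey)" = \
  "$(openssl pkey -in "$key" -pubout)" ] && echo "cert/key match"
```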

b
Kubernetes etcd must enable client authentication to secure service.
SC-23 - Medium - CCI-001184 - CNTR-K8-001450 - CNTR-K8-001450_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001450
Vuln IDs
  • CNTR-K8-001450
Rule IDs
  • CNTR-K8-001450_rule
Kubernetes stores configuration and state information in a distributed key-value store called etcd. Anyone who can write to etcd can effectively control the Kubernetes cluster. Even just reading the contents of etcd could easily provide helpful hints to a would-be attacker. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubernetes API Server and etcd with a means to authenticate sessions and encrypt traffic. To enforce client authentication for etcd, the parameter client-cert-auth must be set to “true”. This parameter requires every client connecting to etcd to present a valid TLS certificate.
Checks: C-CNTR-K8-001450_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i client-cert-auth * If the setting client-cert-auth is not set in the Kubernetes etcd manifest file or set to “false”, this is a finding.

Fix: F-CNTR-K8-001450_fix

Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--client-cert-auth” to “true” for etcd.
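The setting can be verified programmatically rather than by eye. A sketch, assuming the flag is written as --client-cert-auth=true in an etcd.yaml manifest (the file name is an assumption; some deployments use etcd-main.manifest).

```shell
# Report whether etcd enforces client certificate authentication.
val=$(grep -o -- '--client-cert-auth=[^ ]*' /etc/kubernetes/manifests/etcd.yaml |
      cut -d= -f2)
[ "$val" = true ] && echo "OK: client authentication enforced" \
                  || echo "FINDING: client-cert-auth is '${val:-unset}'"
```

Note that some manifests pass the flag without a value (--client-cert-auth), which etcd treats as true; adjust the extraction if that form is in use.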

b
Kubernetes Kubelet must enable tls-private-key-file for client authentication to secure service.
SC-23 - Medium - CCI-001184 - CNTR-K8-001460 - CNTR-K8-001460_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001460
Vuln IDs
  • CNTR-K8-001460
Rule IDs
  • CNTR-K8-001460_rule
Kubernetes container and pod configuration is maintained by the Kubelet. Kubelet agents register nodes with the API Server, mount volume storage, and perform health checks for containers and pods. Anyone who gains access to Kubelet agents can effectively control applications within the pods and containers. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubelet with a means to authenticate sessions and encrypt traffic. To enable encrypted communication for the Kubelet, the parameter tls-private-key-file must be set. This parameter gives the location of the private key used to secure Kubelet communication.
Checks: C-CNTR-K8-001460_chk

Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the commands: grep -i tls-private-key-file kubelet If the setting “tls-private-key-file” is not set in the Kubernetes Kubelet, this is a finding.

Fix: F-CNTR-K8-001460_fix

Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument tls-private-key-file to the private key for the Approved Organization Certificate. Restart the Kubelet service using the following command: service kubelet restart

b
Kubernetes Kubelet must enable tls-cert-file for client authentication to secure service.
SC-23 - Medium - CCI-001184 - CNTR-K8-001470 - CNTR-K8-001470_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001470
Vuln IDs
  • CNTR-K8-001470
Rule IDs
  • CNTR-K8-001470_rule
Kubernetes container and pod configuration is maintained by the Kubelet. Kubelet agents register nodes with the API Server, mount volume storage, and perform health checks for containers and pods. Anyone who gains access to Kubelet agents can effectively control applications within the pods and containers. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubelet with a means to authenticate sessions and encrypt traffic. To enable encrypted communication for the Kubelet, the parameter tls-cert-file must be set. This parameter gives the location of the certificate used to secure Kubelet communication.
Checks: C-CNTR-K8-001470_chk

Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the commands: grep -i tls-cert-file kubelet If the setting “tls-cert-file” is not set in the Kubernetes Kubelet, this is a finding.

Fix: F-CNTR-K8-001470_fix

Edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument “tls-cert-file” to an Approved Organization Certificate. Restart the Kubelet service using the following command: service kubelet restart

b
Kubernetes etcd must enable client authentication to secure service.
SC-23 - Medium - CCI-001184 - CNTR-K8-001480 - CNTR-K8-001480_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001480
Vuln IDs
  • CNTR-K8-001480
Rule IDs
  • CNTR-K8-001480_rule
Kubernetes stores configuration and state information in a distributed key-value store called etcd. Anyone who can write to etcd can effectively control the Kubernetes cluster. Even just reading the contents of etcd could easily provide helpful hints to a would-be attacker. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubernetes API Server and etcd with a means to authenticate sessions and encrypt traffic. To enforce peer authentication for etcd, the parameter peer-client-cert-auth must be set to “true”. This parameter requires every etcd peer to present a valid TLS certificate before joining the cluster.
Checks: C-CNTR-K8-001480_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i peer-client-cert-auth * If the setting “peer-client-cert-auth” is not set in the Kubernetes etcd manifest file or is set to “false”, this is a finding.

Fix: F-CNTR-K8-001480_fix

Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--peer-client-cert-auth” to “true” for the etcd.

b
Kubernetes etcd must have a key file for secure communication.
SC-23 - Medium - CCI-001184 - CNTR-K8-001490 - CNTR-K8-001490_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001490
Vuln IDs
  • CNTR-K8-001490
Rule IDs
  • CNTR-K8-001490_rule
Kubernetes stores configuration and state information in a distributed key-value store called etcd. Anyone who can write to etcd can effectively control the Kubernetes cluster. Even just reading the contents of etcd could easily provide helpful hints to a would-be attacker. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubernetes API Server and etcd with a means to be able to authenticate sessions and encrypt traffic. To enable encrypted communication for etcd, the parameter etcd-keyfile must be set. This parameter gives the location of the key file used to secure etcd communication.
Checks: C-CNTR-K8-001490_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i key-file * If the setting “key-file” is not set in the Kubernetes etcd manifest file, this is a finding.

Fix: F-CNTR-K8-001490_fix

Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--key-file” to the private key for the Approved Organizational Certificate.

b
Kubernetes etcd must have a certificate for communication.
SC-23 - Medium - CCI-001184 - CNTR-K8-001500 - CNTR-K8-001500_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001500
Vuln IDs
  • CNTR-K8-001500
Rule IDs
  • CNTR-K8-001500_rule
Kubernetes stores configuration and state information in a distributed key-value store called etcd. Anyone who can write to etcd can effectively control a Kubernetes cluster. Even just reading the contents of etcd could easily provide helpful hints to a would-be attacker. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubernetes API Server and etcd with a means to be able to authenticate sessions and encrypt traffic. To enable encrypted communication for etcd, the parameter etcd-certfile must be set. This parameter gives the location of the SSL certification file used to secure etcd communication.
Checks: C-CNTR-K8-001500_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i cert-file * If the setting “cert-file” is not set in the Kubernetes etcd manifest file, this is a finding.

Fix: F-CNTR-K8-001500_fix

Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--cert-file” to the Approved Organizational Certificate.

b
Kubernetes etcd must have the SSL Certificate Authority set.
SC-23 - Medium - CCI-001184 - CNTR-K8-001510 - CNTR-K8-001510_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001510
Vuln IDs
  • CNTR-K8-001510
Rule IDs
  • CNTR-K8-001510_rule
Kubernetes stores configuration and state information in a distributed key-value store called etcd. Anyone who can write to etcd can effectively control a Kubernetes cluster. Even just reading the contents of etcd could easily provide helpful hints to a would-be attacker. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubernetes API Server and etcd with a means to be able to authenticate sessions and encrypt traffic. To enable encrypted communication for etcd, the parameter etcd-cafile must be set. This parameter gives the location of the SSL Certificate Authority file used to secure etcd communication.
Checks: C-CNTR-K8-001510_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i etcd-cafile * If the setting “etcd-cafile” is not set in the Kubernetes API Server manifest file, this is a finding.

Fix: F-CNTR-K8-001510_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--etcd-cafile” to the Certificate Authority for etcd.
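Once --etcd-cafile is set, the chain can be validated offline: the etcd serving certificate should verify against the configured CA. Both paths below are assumptions (a kubeadm-style layout); take the actual paths from the API Server and etcd manifests.

```shell
# Confirm etcd's serving certificate chains to the CA named by --etcd-cafile.
openssl verify -CAfile /etc/kubernetes/pki/etcd/ca.crt \
               /etc/kubernetes/pki/etcd/server.crt
```

Output ending in "OK" means the API Server will be able to authenticate the etcd endpoint; any verification error indicates a mismatched or expired certificate.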

b
Kubernetes etcd must have a certificate for communication.
SC-23 - Medium - CCI-001184 - CNTR-K8-001520 - CNTR-K8-001520_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001520
Vuln IDs
  • CNTR-K8-001520
Rule IDs
  • CNTR-K8-001520_rule
Kubernetes stores configuration and state information in a distributed key-value store called etcd. Anyone who can write to etcd can effectively control your Kubernetes cluster. Even just reading the contents of etcd could easily provide helpful hints to a would-be attacker. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubernetes API Server and etcd with a means to be able to authenticate sessions and encrypt traffic. To enable encrypted communication for etcd, the parameter etcd-certfile must be set. This parameter gives the location of the SSL certification file used to secure etcd communication.
Checks: C-CNTR-K8-001520_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i etcd-certfile * If the setting “etcd-certfile” is not set in the Kubernetes API Server manifest file, this is a finding.

Fix: F-CNTR-K8-001520_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--etcd-certfile” to the certificate to be used for communication with etcd.

b
Kubernetes etcd must have a key file for secure communication.
SC-23 - Medium - CCI-001184 - CNTR-K8-001530 - CNTR-K8-001530_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001530
Vuln IDs
  • CNTR-K8-001530
Rule IDs
  • CNTR-K8-001530_rule
Kubernetes stores configuration and state information in a distributed key-value store called etcd. Anyone who can write to etcd can effectively control a Kubernetes cluster. Even just reading the contents of etcd could easily provide helpful hints to a would-be attacker. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubernetes API Server and etcd with a means to authenticate sessions and encrypt traffic. To enable encrypted communication for etcd, the parameter etcd-keyfile must be set. This parameter gives the location of the key file used to secure etcd communication.
Checks: C-CNTR-K8-001530_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i etcd-keyfile * If the setting “etcd-keyfile” is not set in the Kubernetes API Server manifest file, this is a finding.

Fix: F-CNTR-K8-001530_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--etcd-keyfile” to the key file to be used for communication with etcd.
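As a sketch, the etcd client certificate and key flags from this and the preceding rule are normally set together in the API Server manifest; the paths below are kubeadm-style assumptions.

```yaml
# Illustrative excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml
# (paths follow kubeadm defaults; adjust to the cluster's PKI layout).
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # Client certificate the API Server presents to etcd.
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    # Matching private key.
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
```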

b
Kubernetes etcd must have peer-cert-file set for secure communication.
SC-23 - Medium - CCI-001184 - CNTR-K8-001540 - CNTR-K8-001540_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001540
Vuln IDs
  • CNTR-K8-001540
Rule IDs
  • CNTR-K8-001540_rule
Kubernetes stores configuration and state information in a distributed key-value store called etcd. Anyone who can write to etcd can effectively control the Kubernetes cluster. Even just reading the contents of etcd could easily provide helpful hints to a would-be attacker. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubernetes API Server and etcd with a means to authenticate sessions and encrypt traffic. To enable encrypted communication for etcd, the parameter peer-cert-file must be set. This parameter gives the location of the SSL certificate file used to secure etcd peer-to-peer communication.
Checks: C-CNTR-K8-001540_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i peer-cert-file * If the setting “peer-cert-file” is not set in the Kubernetes etcd manifest file, this is a finding.

Fix: F-CNTR-K8-001540_fix

Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “peer-cert-file” to the certificate to be used for etcd peer-to-peer communication.

b
Kubernetes etcd must have a peer-key-file set for secure communication.
SC-23 - Medium - CCI-001184 - CNTR-K8-001550 - CNTR-K8-001550_rule
RMF Control
SC-23
Severity
M
CCI
CCI-001184
Version
CNTR-K8-001550
Vuln IDs
  • CNTR-K8-001550
Rule IDs
  • CNTR-K8-001550_rule
Kubernetes stores configuration and state information in a distributed key-value store called etcd. Anyone who can write to etcd can effectively control a Kubernetes cluster. Even just reading the contents of etcd could easily provide helpful hints to a would-be attacker. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubernetes API Server and etcd with a means to authenticate sessions and encrypt traffic. To enable encrypted communication for etcd, the parameter peer-key-file must be set. This parameter gives the location of the key file used to secure etcd peer-to-peer communication.
Checks: C-CNTR-K8-001550_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i peer-key-file * If the setting “peer-key-file” is not set in the Kubernetes etcd manifest file, this is a finding.

Fix: F-CNTR-K8-001550_fix

Edit the Kubernetes etcd manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “peer-key-file” to the key file to be used for etcd peer-to-peer communication.
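As a sketch, the peer certificate and key flags from this and the preceding rule are normally set together in the etcd manifest; the paths below are kubeadm-style assumptions.

```yaml
# Illustrative excerpt of /etc/kubernetes/manifests/etcd.yaml
# (paths follow kubeadm defaults; adjust to the cluster's PKI layout).
spec:
  containers:
  - name: etcd
    command:
    - etcd
    # Certificate used for etcd peer-to-peer TLS.
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    # Matching private key.
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
```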

b
Kubernetes Kubelet must enable kernel protection.
SC-3 - Medium - CCI-001084 - CNTR-K8-001620 - CNTR-K8-001620_rule
RMF Control
SC-3
Severity
M
CCI
CCI-001084
Version
CNTR-K8-001620
Vuln IDs
  • CNTR-K8-001620
Rule IDs
  • CNTR-K8-001620_rule
The system kernel is responsible for memory, disk, and task management. The kernel provides a gateway between the system hardware and software. Kubernetes requires kernel access to allocate resources to the Control Plane. Threat actors that penetrate the system kernel can inject malicious code or hijack the Kubernetes architecture. It is vital to implement protections through Kubernetes components to reduce the attack surface.
Checks: C-CNTR-K8-001620_chk

Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: grep -i protect-kernel-defaults kubelet If the setting “protect-kernel-defaults” is set to false or not set in the Kubernetes Kubelet, this is a finding.

Fix: F-CNTR-K8-001620_fix

Edit the Kubernetes kubelet file in the /etc/sysconfig directory on the Kubernetes Master Node. Set the argument “--protect-kernel-defaults” to “true”. Restart the kubelet service using the following command:

service kubelet restart
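On clusters that configure the kubelet through a KubeletConfiguration file instead of command-line flags, the equivalent setting can be sketched as below; the file path given in the comment is an assumption.

```yaml
# Illustrative excerpt of a KubeletConfiguration file
# (e.g. /var/lib/kubelet/config.yaml on kubeadm clusters; path is an assumption).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Refuse to start if kernel tunables differ from the kubelet's expected defaults.
protectKernelDefaults: true
```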

c
Kubernetes must prevent non-privileged users from executing privileged functions to include disabling, circumventing, or altering implemented security safeguards/countermeasures.
AC-6 - High - CCI-002235 - CNTR-K8-001990 - CNTR-K8-001990_rule
RMF Control
AC-6
Severity
H
CCI
CCI-002235
Version
CNTR-K8-001990
Vuln IDs
  • CNTR-K8-001990
Rule IDs
  • CNTR-K8-001990_rule
Kubernetes uses the API Server to control communication to the other services that make up Kubernetes. The API Server can use several different authorization modes to determine what a user may do within the cluster. The default authorization mode is “AlwaysAllow”, which performs no authorization checks and would allow all users to install any software. To control access to those users and roles responsible for patching and updating the Kubernetes cluster, the API server must have one of the following options set for the authorization mode:

--authorization-mode=ABAC: Attribute-Based Access Control (ABAC) mode allows a user to configure policies using local files.

--authorization-mode=RBAC: Role-based access control (RBAC) mode allows a user to create and store policies using the Kubernetes API.

--authorization-mode=Webhook: Webhook is an HTTP callback mode that allows a user to manage authorization using a remote REST endpoint.

--authorization-mode=Node: Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.

--authorization-mode=AlwaysDeny: This flag blocks all requests. Use this flag only for testing.

Using an explicit authorization mode rather than the default AlwaysAllow restricts controlled Kubernetes functions to the groups that need them.
Checks: C-CNTR-K8-001990_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i authorization-mode * If the setting authorization-mode is set to “AlwaysAllow” in the Kubernetes API Server manifest file, this is a finding.

Fix: F-CNTR-K8-001990_fix

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Edit the API server manifest and set the authorization-mode setting to any valid mode except for AlwaysAllow.
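A minimal sketch of a compliant setting, assuming the common Node plus RBAC combination (any valid mode other than AlwaysAllow satisfies the rule):

```yaml
# Illustrative excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # Comma-separated list of authorization modes; must not be AlwaysAllow.
    - --authorization-mode=Node,RBAC
```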

b
The Kubernetes API server must have the ValidatingAdmissionWebhook enabled.
AC-6 - Medium - CCI-002233 - CNTR-K8-002000 - CNTR-K8-002000_rule
RMF Control
AC-6
Severity
M
CCI
CCI-002233
Version
CNTR-K8-002000
Vuln IDs
  • CNTR-K8-002000
Rule IDs
  • CNTR-K8-002000_rule
Enabling the admissions webhook allows Kubernetes to apply policies against objects that are to be created, read, updated, or deleted. By applying a pod security policy, control can be exerted to prevent instantiation of images that run as the root user. If pods run as the root user, the pod then has root privileges to the host system and all the resources it has. An attacker can use this to attack the Kubernetes cluster. By implementing a policy that does not allow root or privileged pods, the pod users are limited in what the pod can do and access.
Checks: C-CNTR-K8-002000_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i ValidatingAdmissionWebhook * If a line is not returned that includes enable-admission-plugins and ValidatingAdmissionWebhook, this is a finding.

Fix: F-CNTR-K8-002000_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument --enable-admission-plugins to include ValidatingAdmissionWebhook. Each enabled plugin is separated by commas. Note: It is best to implement policies first and then enable the webhook, otherwise a denial of service may occur.
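A minimal sketch of a compliant flag, assuming an example plugin list (only ValidatingAdmissionWebhook is required by this rule; the other plugins shown are illustrative):

```yaml
# Illustrative excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # Comma-separated plugin list; must include ValidatingAdmissionWebhook.
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ValidatingAdmissionWebhook
```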

b
Kubernetes must have a pod security policy set.
AC-6 - Medium - CCI-002233 - CNTR-K8-002010 - CNTR-K8-002010_rule
RMF Control
AC-6
Severity
M
CCI
CCI-002233
Version
CNTR-K8-002010
Vuln IDs
  • CNTR-K8-002010
Rule IDs
  • CNTR-K8-002010_rule
Enabling the admissions webhook allows Kubernetes to apply policies against objects that are to be created, read, updated, or deleted. By applying a pod security policy, control can be exerted to prevent instantiation of images that run as the root user. If pods run as the root user, the pod then has root privileges to the host system and all the resources it has. An attacker can use this to attack the Kubernetes cluster. By implementing a policy that does not allow root or privileged pods, the pod users are limited in what the pod can do and access.
Checks: C-CNTR-K8-002010_chk

On the Master Node, run the command:

kubectl get podsecuritypolicy

For any pod security policies listed, edit the policy with the command:

kubectl edit podsecuritypolicy policyname

Where policyname is the name of the policy.

Review the runAsUser, supplementalGroups, and fsGroup sections of the policy. If any of these sections are missing, this is a finding. If the rule within the runAsUser section is not set to “MustRunAsNonRoot”, this is a finding. If the ranges within the supplementalGroups section have min set to “0” or min is missing, this is a finding. If the ranges within the fsGroup section have min set to “0” or min is missing, this is a finding.

Fix: F-CNTR-K8-002010_fix

From the Master node, save the following policy to a file called restricted.yml.

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  # This is redundant with non-root + disallow privilege escalation,
  # but we can provide it for defense in depth.
  requiredDropCapabilities:
    - ALL
  # Allow core volume types.
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that persistentVolumes set up by the cluster admin are safe to use.
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false

To implement the policy, run the command:

kubectl create -f restricted.yml

b
The Kubernetes API Server must audit the execution of privileged functions.
AC-6 - Medium - CCI-002234 - CNTR-K8-002020 - CNTR-K8-002020_rule
RMF Control
AC-6
Severity
M
CCI
CCI-002234
Version
CNTR-K8-002020
Vuln IDs
  • CNTR-K8-002020
Rule IDs
  • CNTR-K8-002020_rule
During an investigation of an incident, it is important to fully understand what took place. Often, information is not part of the audited event due to the nature of the data, security risk, or to limit audit log size. Organizations must consider limiting the additional audit information to only that information explicitly needed for specific audit requirements. At a minimum, the organization must audit either full-text recording of privileged commands or the individual identities of group users, or both.
Checks: C-CNTR-K8-002020_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i audit-policy-file *

The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.

Fix: F-CNTR-K8-002020_fix

Edit the Kubernetes API Server audit policy and set it to look like the below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
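The policy file only takes effect when the API Server is pointed at it; a sketch of the related flags, with assumed paths and log settings, is below.

```yaml
# Illustrative excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml
# (the policy and log paths are example values, not requirements).
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # Audit policy file defining what is logged for each event.
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    # Destination for the audit event log.
    - --audit-log-path=/var/log/kubernetes/audit/audit.log
```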

b
Kubernetes must prohibit the installation of patches and updates without explicit privileged status.
CM-11 - Medium - CCI-001812 - CNTR-K8-002220 - CNTR-K8-002220_rule
RMF Control
CM-11
Severity
M
CCI
CCI-001812
Version
CNTR-K8-002220
Vuln IDs
  • CNTR-K8-002220
Rule IDs
  • CNTR-K8-002220_rule
Kubernetes uses the API Server to control communication to the other services that make up Kubernetes. The API Server can use several different authorization modes to determine what a user may do within the cluster. The default authorization mode is “AlwaysAllow”, which does not perform authorization checks and would allow all users to install any software. To control access to those users and roles responsible for patching and updating the Kubernetes cluster, the API server must have one of the following options set for the authorization mode:

--authorization-mode=ABAC: Attribute-Based Access Control (ABAC) mode allows a user to configure policies using local files.

--authorization-mode=RBAC: Role-based access control (RBAC) mode allows a user to create and store policies using the Kubernetes API.

--authorization-mode=Webhook: Webhook is an HTTP callback mode that allows a user to manage authorization using a remote REST endpoint.

--authorization-mode=Node: Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.

--authorization-mode=AlwaysDeny: This flag blocks all requests. Use this flag only for testing.
Checks: C-CNTR-K8-002220_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i authorization-mode * If the setting authorization-mode is set to “AlwaysAllow” in the Kubernetes API Server manifest file, this is a finding.

Fix: F-CNTR-K8-002220_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the argument “--authorization-mode” to any valid authorization mode other than AlwaysAllow.

b
Kubernetes must audit enforcement access restrictions and support auditing of the enforcement actions.
CM-5 - Medium - CCI-001814 - CNTR-K8-002260 - CNTR-K8-002260_rule
RMF Control
CM-5
Severity
M
CCI
CCI-001814
Version
CNTR-K8-002260
Vuln IDs
  • CNTR-K8-002260
Rule IDs
  • CNTR-K8-002260_rule
Auditing the enforcement of access restrictions against changes to the Kubernetes Control Plane helps identify attacks and provides forensic data for after-the-fact investigation. Attempts to change configurations, components, or data maintained by a component, e.g., images in the registry, running containers in the runtime, or keys in the keystore, must be audited. Enforcement actions are the methods or mechanisms used to prevent unauthorized changes to configuration settings. Enforcement action methods may be as simple as denying access to a file based on the application of file permissions (access restriction). Audit items may consist of lists of actions blocked by access restrictions or changes identified after the fact.
Checks: C-CNTR-K8-002260_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i audit-policy-file *

The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.

Fix: F-CNTR-K8-002260_fix

Edit the Kubernetes API Server audit policy and set it to look like the below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

b
Kubernetes API Server must configure timeouts to limit attack surface.
SC-5 - Medium - CCI-002385 - CNTR-K8-002600 - CNTR-K8-002600_rule
RMF Control
SC-5
Severity
M
CCI
CCI-002385
Version
CNTR-K8-002600
Vuln IDs
  • CNTR-K8-002600
Rule IDs
  • CNTR-K8-002600_rule
The Kubernetes API Server request timeout sets the duration a request stays open before timing out. Since the API Server is the central component in the Kubernetes Control Plane, it is vital to protect this service. If request timeouts are not set, malicious attacks or unwanted activities might affect multiple deployments across different applications or environments. This might deplete all resources from the Kubernetes infrastructure, causing the information system to go offline. The request-timeout value must never be set to “0”, as this disables the request-timeout feature. By default, the request-timeout is set to “1 minute”.
Checks: C-CNTR-K8-002600_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i request-timeout * If the setting request-timeout is set to “0” in the Kubernetes API Server manifest file, this is a finding.

Fix: F-CNTR-K8-002600_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of request-timeout greater than “0”.
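A minimal sketch of a compliant setting, assuming an example timeout of 300 seconds (any duration greater than “0” satisfies the rule):

```yaml
# Illustrative excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # Example value; must be a duration greater than 0.
    - --request-timeout=300s
```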

c
Kubernetes API Server must disable basic authentication to protect information in transit.
SC-8 - High - CCI-002418 - CNTR-K8-002620 - CNTR-K8-002620_rule
RMF Control
SC-8
Severity
H
CCI
CCI-002418
Version
CNTR-K8-002620
Vuln IDs
  • CNTR-K8-002620
Rule IDs
  • CNTR-K8-002620_rule
Kubernetes basic authentication sends and receives requests containing the username, UID, groups, and other fields over clear text HTTP communication. Basic authentication does not provide any security mechanisms using encryption standards. PKI certificate-based authentication must be set up over a secure channel to ensure confidentiality and integrity. Basic authentication must not be set in the manifest file.
Checks: C-CNTR-K8-002620_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i basic-auth-file * If “basic-auth-file” is set in the Kubernetes API server manifest file, this is a finding.

Fix: F-CNTR-K8-002620_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Remove the setting “--basic-auth-file”.

b
Kubernetes API Server must disable token authentication to protect information in transit.
SC-8 - Medium - CCI-002418 - CNTR-K8-002630 - CNTR-K8-002630_rule
RMF Control
SC-8
Severity
M
CCI
CCI-002418
Version
CNTR-K8-002630
Vuln IDs
  • CNTR-K8-002630
Rule IDs
  • CNTR-K8-002630_rule
Kubernetes token authentication uses passwords, known as secrets, stored in a plaintext file. This file contains sensitive information such as the token, username, and user UID. The token is used by service accounts within pods to authenticate with the API Server. This information is very valuable to attackers with malicious intent: if a privileged service account's token is compromised, a threat actor can impersonate the service account and gain access to the REST API service.
Checks: C-CNTR-K8-002630_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i token-auth-file * If “token-auth-file” is set in the Kubernetes API server manifest file, this is a finding.

Fix: F-CNTR-K8-002630_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Remove parameter “--token-auth-file”.

b
Kubernetes endpoints must use approved organizational certificate and key pair to protect information in transit.
SC-8 - Medium - CCI-002418 - CNTR-K8-002640 - CNTR-K8-002640_rule
RMF Control
SC-8
Severity
M
CCI
CCI-002418
Version
CNTR-K8-002640
Vuln IDs
  • CNTR-K8-002640
Rule IDs
  • CNTR-K8-002640_rule
Kubernetes control plane and external communication are managed by the API Server. The main implementation of the API Server is to manage hardware resources for pods and containers using horizontal or vertical scaling. Anyone who can gain access to the API Server can effectively control the Kubernetes architecture. Using authenticity protection, the communication can be protected against man-in-the-middle attacks/session hijacking and the insertion of false information into sessions. The communication session is protected by utilizing transport encryption protocols, such as TLS. TLS provides the Kubernetes API Server with a means to authenticate sessions and encrypt traffic. By default, the API Server does not authenticate to the kubelet HTTPS endpoint. To enable secure communication for the API Server, the parameters --kubelet-client-certificate and --kubelet-client-key must be set. These parameters give the location of the certificate and key pair used to secure API Server communication.
Checks: C-CNTR-K8-002640_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the commands:

grep -i kubelet-client-certificate *
grep -i kubelet-client-key *

If the setting “kubelet-client-certificate” is not set in the Kubernetes API server manifest file or contains no value, this is a finding. If the setting “kubelet-client-key” is not set in the Kubernetes API server manifest file or contains no value, this is a finding.

Fix: F-CNTR-K8-002640_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--kubelet-client-certificate” and “--kubelet-client-key” to an Approved Organizational Certificate and key pair.
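A minimal sketch, assuming kubeadm-style certificate paths; substitute the organization's approved certificate and key pair.

```yaml
# Illustrative excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml
# (paths follow kubeadm defaults; use the approved organizational pair).
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # Client certificate the API Server presents to each kubelet.
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    # Matching private key.
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
```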

b
Kubernetes must remove old components after updated versions have been installed.
SI-2 - Medium - CCI-002617 - CNTR-K8-002700 - CNTR-K8-002700_rule
RMF Control
SI-2
Severity
M
CCI
CCI-002617
Version
CNTR-K8-002700
Vuln IDs
  • CNTR-K8-002700
Rule IDs
  • CNTR-K8-002700_rule
Previous versions of Kubernetes components that are not removed after updates have been installed leave known vulnerabilities within the cluster for adversaries to exploit. It is important for Kubernetes to remove old pods when newer pods are created using new images so the cluster always remains at the desired security state.
Checks: C-CNTR-K8-002700_chk

To view all pods and the images used to create the pods, from the Master node, run the following command:

kubectl get pods --all-namespaces -o jsonpath="{..image}" | \
tr -s '[[:space:]]' '\n' | \
sort | \
uniq -c

Review the images used for pods running within Kubernetes. If there are multiple versions of the same image, this is a finding.

Fix: F-CNTR-K8-002700_fix

Remove any old pods that are using older images. On the Master node, run the command: kubectl delete pod podname Where podname is the name of the pod to delete.

b
Kubernetes must contain the latest updates as authorized by IAVMs, CTOs, DTMs, and STIGs.
SI-2 - Medium - CCI-002605 - CNTR-K8-002720 - CNTR-K8-002720_rule
RMF Control
SI-2
Severity
M
CCI
CCI-002605
Version
CNTR-K8-002720
Vuln IDs
  • CNTR-K8-002720
Rule IDs
  • CNTR-K8-002720_rule
Kubernetes software must stay up to date with the latest patches, service packs, and hot fixes. Not updating the Kubernetes control plane will expose the organization to vulnerabilities. Flaws discovered during security assessments, continuous monitoring, incident response activities, or information system error handling must also be addressed expeditiously. Organization-defined time periods for updating security-relevant container platform components may vary based on a variety of factors including, for example, the security category of the information system or the criticality of the update (i.e., severity of the vulnerability related to the discovered flaw). This requirement will apply to software patch management solutions that are used to install patches across the enclave and also to applications themselves that are not part of that patch management solution. For example, many browsers today provide the capability to install their own patch software. Patch criticality, as well as system criticality will vary. Therefore, the tactical situations regarding the patch management process will also vary. This means that the time period utilized must be a configurable parameter. Time frames for application of security-relevant software updates may be dependent upon the IAVM process. The container platform components will be configured to check for and install security-relevant software updates within an identified time period from the availability of the update. The container platform registry will ensure the images are current. The specific time period will be defined by an authoritative source (e.g., IAVM, CTOs, DTMs, and STIGs).
Checks: C-CNTR-K8-002720_chk

Authenticate on the Kubernetes Master Node. Run the command:

kubectl version --short

If the reported versions are not supported under the Kubernetes version skew policy, this is a finding.

Note: The Kubernetes version skew policy can be found at: https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions

Fix: F-CNTR-K8-002720_fix

Upgrade Kubernetes to the supported version. Institute and adhere to the policies and procedures to ensure that patches are consistently applied within the time allowed.

b
The Kubernetes API Server must generate audit records when successful/unsuccessful attempts to access security objects occur.
AU-12 - Medium - CCI-000172 - CNTR-K8-002900 - CNTR-K8-002900_rule
RMF Control
AU-12
Severity
M
CCI
CCI-000172
Version
CNTR-K8-002900
Vuln IDs
  • CNTR-K8-002900
Rule IDs
  • CNTR-K8-002900_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to identify any users associated with the event. The API server policy file allows for the following levels of auditing:

None: do not log events that match the rule.

Metadata: log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.

Request: log event metadata and request body but not response body.

RequestResponse: log event metadata, request, and response bodies.
Checks: C-CNTR-K8-002900_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i audit-policy-file *

The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.

Fix: F-CNTR-K8-002900_fix

Edit the Kubernetes API Server audit policy and set it to look like the below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

b
The Kubernetes API Server must generate audit records when successful/unsuccessful attempts to access security levels occur.
AU-12 - Medium - CCI-000172 - CNTR-K8-002910 - CNTR-K8-002910_rule
RMF Control
AU-12
Severity
M
CCI
CCI-000172
Version
CNTR-K8-002910
Vuln IDs
  • CNTR-K8-002910
Rule IDs
  • CNTR-K8-002910_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to identify any users associated with the event. The API server policy file allows for the following levels of auditing:

None: do not log events that match the rule.

Metadata: log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.

Request: log event metadata and request body but not response body.

RequestResponse: log event metadata, request, and response bodies.
Checks: C-CNTR-K8-002910_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command:

grep -i audit-policy-file *

The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like the above, this is a finding.

Fix: F-CNTR-K8-002910_fix

Edit the Kubernetes API Server audit policy and set it to look like the below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

b
The Kubernetes API Server must generate audit records when successful/unsuccessful attempts to modify objects occur.
AU-12 - Medium - CCI-000172 - CNTR-K8-002940 - CNTR-K8-002940_rule
RMF Control
AU-12
Severity
M
CCI
CCI-000172
Version
CNTR-K8-002940
Vuln IDs
  • CNTR-K8-002940
Rule IDs
  • CNTR-K8-002940_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to identify any users associated with the event. The API server policy file allows for the following levels of auditing:

None: do not log events that match the rule.

Metadata: log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.

Request: log event metadata and request body but not response body.

RequestResponse: log event metadata, request, and response bodies.
Checks: C-CNTR-K8-002940_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like above, this is a finding.

Fix: F-CNTR-K8-002940_fix

Edit the Kubernetes API Server audit policy and set it to look like below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

b
The Kubernetes API Server must generate audit records when successful/unsuccessful attempts to modify security levels occur.
AU-12 - Medium - CCI-000172 - CNTR-K8-002950 - CNTR-K8-002950_rule
RMF Control
AU-12
Severity
M
CCI
CCI-000172
Version
CNTR-K8-002950
Vuln IDs
  • CNTR-K8-002950
Rule IDs
  • CNTR-K8-002950_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to identify any users associated with the event. The API server policy file allows for the following levels of auditing: None – do not log events that match the rule. Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body. Request - log event metadata and request body but not response body. RequestResponse - log event metadata, request, and response bodies.
Checks: C-CNTR-K8-002950_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like above, this is a finding.

Fix: F-CNTR-K8-002950_fix

Edit the Kubernetes API Server audit policy and set it to look like below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

b
The Kubernetes API Server must generate audit records when successful/unsuccessful attempts to delete security levels occur.
AU-12 - Medium - CCI-000172 - CNTR-K8-002980 - CNTR-K8-002980_rule
RMF Control
AU-12
Severity
M
CCI
CCI-000172
Version
CNTR-K8-002980
Vuln IDs
  • CNTR-K8-002980
Rule IDs
  • CNTR-K8-002980_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to identify any users associated with the event. The API server policy file allows for the following levels of auditing: None – do not log events that match the rule. Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body. Request - log event metadata and request body but not response body. RequestResponse - log event metadata, request, and response bodies.
Checks: C-CNTR-K8-002980_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like above, this is a finding.

Fix: F-CNTR-K8-002980_fix

Edit the Kubernetes API Server audit policy and set it to look like below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

b
The Kubernetes API Server must generate audit records when successful/unsuccessful attempts to delete security objects occur.
AU-12 - Medium - CCI-000172 - CNTR-K8-002990 - CNTR-K8-002990_rule
RMF Control
AU-12
Severity
M
CCI
CCI-000172
Version
CNTR-K8-002990
Vuln IDs
  • CNTR-K8-002990
Rule IDs
  • CNTR-K8-002990_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to identify any users associated with the event. The API server policy file allows for the following levels of auditing: None – do not log events that match the rule. Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body. Request - log event metadata and request body but not response body. RequestResponse - log event metadata, request, and response bodies.
Checks: C-CNTR-K8-002990_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like above, this is a finding.

Fix: F-CNTR-K8-002990_fix

Edit the Kubernetes API Server audit policy and set it to look like below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

b
The Kubernetes API Server must generate audit records when successful/unsuccessful logon attempts occur.
AU-12 - Medium - CCI-000172 - CNTR-K8-003010 - CNTR-K8-003010_rule
RMF Control
AU-12
Severity
M
CCI
CCI-000172
Version
CNTR-K8-003010
Vuln IDs
  • CNTR-K8-003010
Rule IDs
  • CNTR-K8-003010_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to identify any users associated with the event. The API server policy file allows for the following levels of auditing: None – do not log events that match the rule. Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body. Request - log event metadata and request body but not response body. RequestResponse - log event metadata, request, and response bodies.
Checks: C-CNTR-K8-003010_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like above, this is a finding.

Fix: F-CNTR-K8-003010_fix

Edit the Kubernetes API Server audit policy and set it to look like below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

b
The Kubernetes API Server must generate audit records for privileged activities.
AU-12 - Medium - CCI-000172 - CNTR-K8-003020 - CNTR-K8-003020_rule
RMF Control
AU-12
Severity
M
CCI
CCI-000172
Version
CNTR-K8-003020
Vuln IDs
  • CNTR-K8-003020
Rule IDs
  • CNTR-K8-003020_rule
Within Kubernetes, audit data for all components is generated by the API server. This audit data is important when there are issues, to include security incidents that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to have the appropriate and required data logged. To fully understand the event, it is important to identify any users associated with the event. The API server policy file allows for the following levels of auditing: None – do not log events that match the rule. Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body. Request - log event metadata and request body but not response body. RequestResponse - log event metadata, request, and response bodies.
Checks: C-CNTR-K8-003020_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like above, this is a finding.

Fix: F-CNTR-K8-003020_fix

Edit the Kubernetes API Server audit policy and set it to look like below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

b
Kubernetes API Server must generate audit records when successful/unsuccessful accesses to objects occur.
AU-12 - Medium - CCI-000172 - CNTR-K8-003050 - CNTR-K8-003050_rule
RMF Control
AU-12
Severity
M
CCI
CCI-000172
Version
CNTR-K8-003050
Vuln IDs
  • CNTR-K8-003050
Rule IDs
  • CNTR-K8-003050_rule
Within Kubernetes, all create, read, update, and delete events for objects go through the API Server. It is important to create an audit record for any of these events, whether successful or unsuccessful. Without audit record generation, unauthorized users could access objects without detection, creating vulnerabilities within the container platform.
Checks: C-CNTR-K8-003050_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * The file given is the policy file and defines what is audited and what information is included with each event. The policy file must look like this:

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

If the audit policy file does not look like above, this is a finding.

Fix: F-CNTR-K8-003050_fix

Edit the Kubernetes API Server audit policy and set it to look like below.

# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse

b
The Kubernetes component manifests must be owned by root.
CM-6 - Medium - CCI-000366 - CNTR-K8-003110 - CNTR-K8-003110_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003110
Vuln IDs
  • CNTR-K8-003110
Rule IDs
  • CNTR-K8-003110_rule
The Kubernetes manifests are those files that contain the arguments and settings for the Master Node services. These services are etcd, the API Server, controller, proxy, and scheduler. If these files can be changed, the scheduler will implement the changes immediately. Many of the security settings within the document are implemented through these manifests.
Checks: C-CNTR-K8-003110_chk

Review the ownership of the Kubernetes manifest files by using the command: stat -c %U:%G /etc/kubernetes/manifests/* | grep -v root:root If the command returns any non-root:root ownership, this is a finding.

Fix: F-CNTR-K8-003110_fix

Change the ownership of the manifest files to root:root by executing the command: chown root:root /etc/kubernetes/manifests/*
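The stat-plus-grep pattern in the check above generalizes to a small helper. A sketch, assuming bash and GNU stat, with the directory and expected owner passed in (on a real Master Node these would be /etc/kubernetes/manifests and root:root):

```shell
# Sketch, assuming bash and GNU stat: list files in a directory whose
# owner:group differs from the expected value; fail when any are found.
audit_ownership() {
  local dir="$1" want="$2" findings=0 f
  for f in "$dir"/*; do
    [ -e "$f" ] || continue
    if [ "$(stat -c %U:%G "$f")" != "$want" ]; then
      echo "finding: $f is owned by $(stat -c %U:%G "$f")"
      findings=$((findings + 1))
    fi
  done
  [ "$findings" -eq 0 ]
}

# On a real Master Node:
#   audit_ownership /etc/kubernetes/manifests root:root ||
#     chown root:root /etc/kubernetes/manifests/*
```

The same helper applies to the etcd ownership check below by passing /var/lib/etcd and etcd:etcd.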

b
The Kubernetes component etcd must be owned by etcd.
CM-6 - Medium - CCI-000366 - CNTR-K8-003120 - CNTR-K8-003120_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003120
Vuln IDs
  • CNTR-K8-003120
Rule IDs
  • CNTR-K8-003120_rule
The Kubernetes etcd key-value store provides a way to store data for the Master Node. If these files can be changed, the data for API objects and the Master Node would be compromised. The scheduler will implement the changes immediately. Many of the security settings within the document are implemented through these files.
Checks: C-CNTR-K8-003120_chk

Review the ownership of the Kubernetes etcd files by using the command: stat -c %U:%G /var/lib/etcd/* | grep -v etcd:etcd If the command returns any non-etcd:etcd ownership, this is a finding.

Fix: F-CNTR-K8-003120_fix

Change the ownership of the etcd files to etcd:etcd by executing the command: chown etcd:etcd /var/lib/etcd/*

b
The Kubernetes conf files must be owned by root.
CM-6 - Medium - CCI-000366 - CNTR-K8-003130 - CNTR-K8-003130_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003130
Vuln IDs
  • CNTR-K8-003130
Rule IDs
  • CNTR-K8-003130_rule
The Kubernetes conf files contain the arguments and settings for the Master Node services. These services are the controller and scheduler. If these files can be changed, the scheduler will implement the changes immediately. Many of the security settings within the document are implemented through these files.
Checks: C-CNTR-K8-003130_chk

Review the ownership of the Kubernetes conf files by using the commands:

stat -c %U:%G /etc/kubernetes/admin.conf | grep -v root:root
stat -c %U:%G /etc/kubernetes/scheduler.conf | grep -v root:root
stat -c %U:%G /etc/kubernetes/controller-manager.conf | grep -v root:root

If any command returns non-root:root ownership, this is a finding.

Fix: F-CNTR-K8-003130_fix

Change the ownership of the conf files to root:root by executing the commands:

chown root:root /etc/kubernetes/admin.conf
chown root:root /etc/kubernetes/scheduler.conf
chown root:root /etc/kubernetes/controller-manager.conf

b
The Kubernetes Kube Proxy must have file permissions set to 644 or more restrictive.
CM-6 - Medium - CCI-000366 - CNTR-K8-003140 - CNTR-K8-003140_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003140
Vuln IDs
  • CNTR-K8-003140
Rule IDs
  • CNTR-K8-003140_rule
The Kubernetes kube proxy kubeconfig file contains the arguments and settings for the Master Node. These settings contain network rules for restricting network communication between pods, clusters, and networks. If this file can be changed, data traversing between the Kubernetes Control Plane components would be compromised. Many of the security settings within the document are implemented through this file.
Checks: C-CNTR-K8-003140_chk

To check if Kube-Proxy is running and obtain the --kubeconfig parameter, use the following command: ps -ef | grep kube-proxy Review the permissions of the Kubernetes Kube Proxy kubeconfig by using the command: stat -c %a <location from --kubeconfig> If the file has permissions more permissive than “644”, this is a finding.

Fix: F-CNTR-K8-003140_fix

Change the permissions of the Kube Proxy kubeconfig to “644” by executing the command: chmod 644 <location from --kubeconfig>
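“644 or more restrictive” can be tested mechanically: a mode passes when it sets no permission bit beyond rw-r--r--. A sketch, assuming bash and GNU stat; the temporary file is a stand-in for the kubeconfig located via ps:

```shell
# Sketch, assuming bash and GNU stat: succeed only when a file's mode sets
# no permission bit beyond 644 (rw-r--r--).
is_644_or_stricter() {
  local mode
  mode="$(stat -c %a "$1")"
  # Mask off everything 644 allows; any remaining bit (group/other write,
  # any execute) is a finding.
  [ $(( 0$mode & ~0644 & 0777 )) -eq 0 ]
}

# Demonstration on a temporary file.
f="$(mktemp)"
chmod 640 "$f"
is_644_or_stricter "$f" && echo "640: not a finding"
chmod 755 "$f"
is_644_or_stricter "$f" || echo "755: finding"
rm -f "$f"
```

Note that 700 also fails this test: it is numerically smaller but grants execute, which 644 does not, so it is not “more restrictive” bit-for-bit.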

b
The Kubernetes Kube Proxy must be owned by root.
CM-6 - Medium - CCI-000366 - CNTR-K8-003150 - CNTR-K8-003150_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003150
Vuln IDs
  • CNTR-K8-003150
Rule IDs
  • CNTR-K8-003150_rule
The Kubernetes kube proxy kubeconfig file contains the arguments and settings for the Master Node. These settings contain network rules for restricting network communication between pods, clusters, and networks. If this file can be changed, data traversing between the Kubernetes Control Plane components would be compromised. Many of the security settings within the document are implemented through this file.
Checks: C-CNTR-K8-003150_chk

To check if Kube-Proxy is running and obtain the --kubeconfig parameter, use the following command: ps -ef | grep kube-proxy Review the ownership of the Kubernetes Kube Proxy kubeconfig by using the command: stat -c %U:%G <location from --kubeconfig> | grep -v root:root If the command returns any non-root:root ownership, this is a finding.

Fix: F-CNTR-K8-003150_fix

Change the ownership of the Kube Proxy kubeconfig to root:root by executing the command: chown root:root <location from --kubeconfig>

b
The Kubernetes Kubelet certificate authority file must have file permissions set to 644 or more restrictive.
CM-6 - Medium - CCI-000366 - CNTR-K8-003160 - CNTR-K8-003160_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003160
Vuln IDs
  • CNTR-K8-003160
Rule IDs
  • CNTR-K8-003160_rule
The Kubernetes kubelet certificate authority file contains settings for the Kubernetes Node TLS certificate authority. Any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate. If this file can be changed, the Kubernetes architecture could be compromised. The scheduler will implement the changes immediately. Many of the security settings within the document are implemented through this file.
Checks: C-CNTR-K8-003160_chk

Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: more kubelet Note the location of the certificate given in the --client-ca-file argument. Review the permissions of that file by using the command: stat -c %a <--client-ca-file location> If the file has permissions more permissive than “644”, this is a finding.

Fix: F-CNTR-K8-003160_fix

Change the permissions of the --client-ca-file to “644” by executing the command: chmod 644 <kubelet --client-ca-file argument location>

b
The Kubernetes Kubelet certificate authority must be owned by root.
CM-6 - Medium - CCI-000366 - CNTR-K8-003170 - CNTR-K8-003170_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003170
Vuln IDs
  • CNTR-K8-003170
Rule IDs
  • CNTR-K8-003170_rule
The Kubernetes kube proxy kubeconfig file contains the arguments and settings for the Master Node. These settings contain network rules for restricting network communication between pods, clusters, and networks. If this file can be changed, data traversing between the Kubernetes Control Plane components would be compromised. Many of the security settings within the document are implemented through this file.
Checks: C-CNTR-K8-003170_chk

Change to the /etc/sysconfig/ directory on the Kubernetes Master Node. Run the command: more kubelet Note the location of the certificate given in the --client-ca-file argument. Review the ownership of that file by using the command: stat -c %U:%G <--client-ca-file location> | grep -v root:root If the command returns any non-root:root ownership, this is a finding.

Fix: F-CNTR-K8-003170_fix

Change the ownership of the --client-ca-file to root:root by executing the command: chown root:root <kubelet --client-ca-file argument location>

b
The Kubernetes component PKI must be owned by root.
CM-6 - Medium - CCI-000366 - CNTR-K8-003180 - CNTR-K8-003180_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003180
Vuln IDs
  • CNTR-K8-003180
Rule IDs
  • CNTR-K8-003180_rule
The Kubernetes PKI directory contains all certificates (.crt files) supporting secure network communications in the Kubernetes Control Plane. If these files can be modified, data traversing within the architecture components would become insecure and compromised. Many of the security settings within the document are implemented through these files.
Checks: C-CNTR-K8-003180_chk

Review the ownership of the PKI files in Kubernetes by using the command: ls -laR /etc/kubernetes/pki/ If any of the files are not owned by root:root, this is a finding.

Fix: F-CNTR-K8-003180_fix

Change the ownership of the PKI directory to root:root by executing the command: chown -R root:root /etc/kubernetes/pki/
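Rather than eyeballing `ls -laR` output, `find` can print only the offending entries. A sketch, assuming GNU findutils, with the directory and expected user/group as parameters (on a real Master Node: /etc/kubernetes/pki/, root, root):

```shell
# Sketch, assuming GNU findutils: recursively print entries not owned by the
# expected user or group; output is empty when the tree is compliant.
not_owned_by() {
  find "$1" \( ! -user "$2" -o ! -group "$3" \) -print
}

# On a real Master Node:
#   not_owned_by /etc/kubernetes/pki root root
# Remediation when anything prints:
#   chown -R root:root /etc/kubernetes/pki/
```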

b
The Kubernetes kubelet config must have file permissions set to 644 or more restrictive.
CM-6 - Medium - CCI-000366 - CNTR-K8-003190 - CNTR-K8-003190_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003190
Vuln IDs
  • CNTR-K8-003190
Rule IDs
  • CNTR-K8-003190_rule
The Kubernetes kubelet agent registers nodes with the API Server, mounts volume storage for pods, and performs health checks on containers within pods. If these files can be modified, the information system would be unaware of pod or container degradation. Many of the security settings within the document are implemented through this file.
Checks: C-CNTR-K8-003190_chk

Review the permissions of the Kubernetes kubelet conf file by using the command: stat -c %a /etc/kubernetes/kubelet.conf If the file has permissions more permissive than “644”, this is a finding.

Fix: F-CNTR-K8-003190_fix

Change the permissions of the kubelet.conf to “644” by executing the command: chmod 644 /etc/kubernetes/kubelet.conf

b
The Kubernetes kubelet config must be owned by root.
CM-6 - Medium - CCI-000366 - CNTR-K8-003200 - CNTR-K8-003200_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003200
Vuln IDs
  • CNTR-K8-003200
Rule IDs
  • CNTR-K8-003200_rule
The Kubernetes kubelet agent registers nodes with the API server and performs health checks on containers within pods. If these files can be modified, the information system would be unaware of pod or container degradation. Many of the security settings within the document are implemented through this file.
Checks: C-CNTR-K8-003200_chk

Review the ownership of the Kubernetes kubelet conf file by using the command: stat -c %U:%G /etc/kubernetes/kubelet.conf | grep -v root:root If the command returns any non-root:root ownership, this is a finding.

Fix: F-CNTR-K8-003200_fix

Change the ownership of the kubelet.conf to root:root by executing the command: chown root:root /etc/kubernetes/kubelet.conf

b
The Kubernetes kubeadm.conf must be owned by root.
CM-6 - Medium - CCI-000366 - CNTR-K8-003210 - CNTR-K8-003210_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003210
Vuln IDs
  • CNTR-K8-003210
Rule IDs
  • CNTR-K8-003210_rule
The Kubernetes kubeadm.conf contains sensitive information regarding the cluster node configuration. If this file can be modified, the Kubernetes Control Plane would be degraded or compromised for malicious intent. Many of the security settings within the document are implemented through this file.
Checks: C-CNTR-K8-003210_chk

Review the ownership of the Kubernetes kubeadm.conf file by using the command: stat -c %U:%G /usr/bin/kubeadm.conf | grep -v root:root If the command returns any non-root:root ownership, this is a finding.

Fix: F-CNTR-K8-003210_fix

Change the ownership of the kubeadm.conf to root:root by executing the command: chown root:root /usr/bin/kubeadm.conf

b
The Kubernetes kubelet service must have file permissions set to 644 or more restrictive.
CM-6 - Medium - CCI-000366 - CNTR-K8-003220 - CNTR-K8-003220_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003220
Vuln IDs
  • CNTR-K8-003220
Rule IDs
  • CNTR-K8-003220_rule
The Kubernetes kubeadm.conf contains sensitive information regarding the cluster node configuration. If this file can be modified, the Kubernetes Control Plane would be degraded or compromised for malicious intent. Many of the security settings within the document are implemented through this file.
Checks: C-CNTR-K8-003220_chk

Review the permissions of the kubeadm.conf by using the command: stat -c %a /usr/bin/kubeadm.conf If the file has permissions more permissive than “644”, this is a finding.

Fix: F-CNTR-K8-003220_fix

Change the permissions of the kubeadm.conf to “644” by executing the command: chmod 644 /usr/bin/kubeadm.conf

b
The Kubernetes kubelet config must have file permissions set to 644 or more restrictive.
CM-6 - Medium - CCI-000366 - CNTR-K8-003230 - CNTR-K8-003230_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003230
Vuln IDs
  • CNTR-K8-003230
Rule IDs
  • CNTR-K8-003230_rule
The Kubernetes kubelet agent registers nodes with the API server and performs health checks on containers within pods. If this file can be modified, the information system would be unaware of pod or container degradation.
Checks: C-CNTR-K8-003230_chk

Review the permissions of the Kubernetes config.yaml by using the command: stat -c %a /var/lib/kubelet/config.yaml If the file has permissions more permissive than “644”, this is a finding.

Fix: F-CNTR-K8-003230_fix

Change the permissions of the config.yaml to “644” by executing the command: chmod 644 /var/lib/kubelet/config.yaml

b
The Kubernetes kubelet config must be owned by root.
CM-6 - Medium - CCI-000366 - CNTR-K8-003240 - CNTR-K8-003240_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003240
Vuln IDs
  • CNTR-K8-003240
Rule IDs
  • CNTR-K8-003240_rule
The Kubernetes kubelet agent registers nodes with the API Server and performs health checks on containers within pods. If this file can be modified, the information system would be unaware of pod or container degradation.
Checks: C-CNTR-K8-003240_chk

Review the ownership of the Kubernetes kubelet config.yaml file by using the command: stat -c %U:%G /var/lib/kubelet/config.yaml | grep -v root:root If the command returns any non-root:root ownership, this is a finding.

Fix: F-CNTR-K8-003240_fix

Change the ownership of the kubelet config to root:root by executing the command: chown root:root /var/lib/kubelet/config.yaml

b
The Kubernetes API Server must have file permissions set to 644 or more restrictive.
CM-6 - Medium - CCI-000366 - CNTR-K8-003250 - CNTR-K8-003250_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003250
Vuln IDs
  • CNTR-K8-003250
Rule IDs
  • CNTR-K8-003250_rule
The Kubernetes manifests are those files that contain the arguments and settings for the Master Node services. These services are etcd, the API Server, controller, proxy, and scheduler. If these files can be changed, the scheduler will implement the changes immediately. Many of the security settings within the document are implemented through these manifests.
Checks: C-CNTR-K8-003250_chk

Review the permissions of the Kubernetes manifest files by using the command: stat -c %a /etc/kubernetes/manifests/* If any of the files have permissions more permissive than “644”, this is a finding.

Fix: F-CNTR-K8-003250_fix

Change the permissions of the manifest files to “644” by executing the command: chmod 644 /etc/kubernetes/manifests/*

b
The Kubernetes etcd must have file permissions set to 644 or more restrictive.
CM-6 - Medium - CCI-000366 - CNTR-K8-003260 - CNTR-K8-003260_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003260
Vuln IDs
  • CNTR-K8-003260
Rule IDs
  • CNTR-K8-003260_rule
The Kubernetes etcd key-value store provides a way to store data for the Master Node. If these files can be changed, the data for API objects and the Master Node would be compromised.
Checks: C-CNTR-K8-003260_chk

Review the permissions of the Kubernetes etcd files by using the command: stat -c %a /var/lib/etcd/* If any of the files have permissions more permissive than “700”, this is a finding.

Fix: F-CNTR-K8-003260_fix

Change the permissions of the etcd files to “700” by executing the command: chmod 700 /var/lib/etcd/*

b
The Kubernetes admin.conf must have file permissions set to 644 or more restrictive.
CM-6 - Medium - CCI-000366 - CNTR-K8-003270 - CNTR-K8-003270_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003270
Vuln IDs
  • CNTR-K8-003270
Rule IDs
  • CNTR-K8-003270_rule
The Kubernetes conf files contain the arguments and settings for the Master Node services. These services are the controller and scheduler. If these files can be changed, the scheduler will implement the changes immediately.
Checks: C-CNTR-K8-003270_chk

Review the permissions of the Kubernetes conf files by using the commands:

stat -c %a /etc/kubernetes/admin.conf
stat -c %a /etc/kubernetes/scheduler.conf
stat -c %a /etc/kubernetes/controller-manager.conf

If any of the files have permissions more permissive than “644”, this is a finding.

Fix: F-CNTR-K8-003270_fix

Change the permissions of the conf files to “644” by executing the commands:

chmod 644 /etc/kubernetes/admin.conf
chmod 644 /etc/kubernetes/scheduler.conf
chmod 644 /etc/kubernetes/controller-manager.conf
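One pitfall with a blanket `chmod 644`: it would loosen a file already at a stricter mode such as 600. Removing only the offending bits avoids that. A sketch, assuming bash and GNU coreutils, with the conf-file paths from the fix above passed as arguments:

```shell
# Sketch, assuming bash and GNU coreutils: strip group/other write and all
# execute bits, leaving each file at 644 or more restrictive without ever
# loosening an already-stricter mode (600 stays 600).
tighten_to_644() {
  local f
  for f in "$@"; do
    [ -e "$f" ] && chmod go-w,a-x "$f"
  done
}

# On a real Master Node:
#   tighten_to_644 /etc/kubernetes/admin.conf /etc/kubernetes/scheduler.conf \
#     /etc/kubernetes/controller-manager.conf
```

The symbolic mode is the design choice here: `go-w,a-x` is idempotent and monotonically restrictive, so re-running it never widens access.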

b
Kubernetes API Server audit logs must be enabled.
CM-6 - Medium - CCI-000366 - CNTR-K8-003280 - CNTR-K8-003280_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003280
Vuln IDs
  • CNTR-K8-003280
Rule IDs
  • CNTR-K8-003280_rule
The Kubernetes API Server validates and configures pods and services for the API object. The REST operation provides frontend functionality to the cluster shared state. Enabling audit logs provides a way to monitor and identify security risk events or misuse of information. Audit logs are necessary to provide evidence in the event the Kubernetes API Server is compromised and a Cyber Security Investigation is required.
Checks: C-CNTR-K8-003280_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i audit-policy-file * If the setting “audit-policy-file” is not set in the Kubernetes API Server manifest file, or is set without valid content, this is a finding.

Fix: F-CNTR-K8-003280_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--audit-policy-file” to the path of the audit policy file.

b
The Kubernetes API Server must be set to audit log max size.
CM-6 - Medium - CCI-000366 - CNTR-K8-003290 - CNTR-K8-003290_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003290
Vuln IDs
  • CNTR-K8-003290
Rule IDs
  • CNTR-K8-003290_rule
The Kubernetes API Server must be configured with enough storage to retain log information over the required period. When audit logs grow too large, the monitoring service for events becomes degraded. The maximum log file size setting establishes these limits.
Checks: C-CNTR-K8-003290_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i audit-log-maxsize * If the setting “audit-log-maxsize” is not set in the Kubernetes API Server manifest file or it is set to less than “100”, this is a finding.

Fix: F-CNTR-K8-003290_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--audit-log-maxsize” to a minimum of “100”.

b
The Kubernetes API Server must be set to audit log maximum backup.
CM-6 - Medium - CCI-000366 - CNTR-K8-003300 - CNTR-K8-003300_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003300
Vuln IDs
  • CNTR-K8-003300
Rule IDs
  • CNTR-K8-003300_rule
The Kubernetes API Server must allocate enough storage to retain logs for monitoring suspicious activity and system misconfiguration, and to provide evidence for Cyber Security Investigations.
Checks: C-CNTR-K8-003300_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i audit-log-maxbackup * If the setting “audit-log-maxbackup” is not set in the Kubernetes API Server manifest file or it is set to less than “10”, this is a finding.

Fix: F-CNTR-K8-003300_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--audit-log-maxbackup” to a minimum of “10”.
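The maxsize, maxbackup, and maxage checks all reduce to “extract a numeric flag from the manifest and compare it to a floor.” A sketch, assuming bash, GNU grep, and a kubeadm-style manifest; the temporary fixture stands in for kube-apiserver.yaml:

```shell
# Sketch, assuming bash and GNU grep: extract --<flag>=<number> from a
# manifest and succeed only when the flag is present and meets the floor.
flag_at_least() {
  local manifest="$1" flag="$2" floor="$3" val
  val="$(grep -o -- "--${flag}=[0-9]*" "$manifest" | head -n1 | cut -d= -f2)"
  [ -n "$val" ] && [ "$val" -ge "$floor" ]
}

# Demonstration against a temporary fixture.
m="$(mktemp)"
printf -- '    - --audit-log-maxbackup=10\n    - --audit-log-maxsize=100\n' > "$m"
flag_at_least "$m" audit-log-maxbackup 10 && echo "maxbackup: not a finding"
flag_at_least "$m" audit-log-maxage 30 || echo "maxage: finding (flag not set)"
rm -f "$m"
```

On a real Master Node, point the first argument at the API Server manifest and call the helper once per flag with the floors from the checks (100, 10, 30).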

b
The Kubernetes API Server audit log retention must be set.
CM-6 - Medium - CCI-000366 - CNTR-K8-003310 - CNTR-K8-003310_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003310
Vuln IDs
  • CNTR-K8-003310
Rule IDs
  • CNTR-K8-003310_rule
The Kubernetes API Server must allocate enough storage to retain logs for monitoring suspicious activity and system misconfiguration, and to provide evidence for Cyber Security Investigations.
Checks: C-CNTR-K8-003310_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i audit-log-maxage * If the setting “audit-log-maxage” is not set in the Kubernetes API Server manifest file or it is set to less than “30”, this is a finding.

Fix: F-CNTR-K8-003310_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--audit-log-maxage” to a minimum of “30”.

b
The Kubernetes API Server audit log path must be set.
CM-6 - Medium - CCI-000366 - CNTR-K8-003320 - CNTR-K8-003320_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003320
Vuln IDs
  • CNTR-K8-003320
Rule IDs
  • CNTR-K8-003320_rule
Kubernetes API Server validates and configures pods and services for the API object. The REST operation provides frontend functionality to the cluster shared state. Audit logs are necessary to provide evidence in the case the Kubernetes API Server is compromised, requiring a Cyber Security Investigation. To record events in the audit log, the log path value must be set.
Checks: C-CNTR-K8-003320_chk

Change to the /etc/kubernetes/manifests/ directory on the Kubernetes Master Node. Run the command: grep -i audit-log-path * If the setting “audit-log-path” is not set in the Kubernetes API Server manifest file or it is not set to a valid path, this is a finding.

Fix: F-CNTR-K8-003320_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--audit-log-path” to a valid location.
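The four audit log rules above (maximum size, maximum backup, maximum age, and log path) all edit the same API Server manifest. The fragment below is a hypothetical illustration, assuming the kubeadm default file name kube-apiserver.yaml; the log path shown is an assumed example location, and the numeric values are this STIG's minimums.

```yaml
# Illustrative fragment of /etc/kubernetes/manifests/kube-apiserver.yaml.
# The audit-log-path value is an assumption; numeric values are minimums.
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-log-path=/var/log/kubernetes/audit/audit.log
    - --audit-log-maxsize=100    # maximum megabytes per log file
    - --audit-log-maxbackup=10   # rotated log files retained
    - --audit-log-maxage=30     # days of retention
```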

b
The Kubernetes PKI CRT must have file permissions set to 644 or more restrictive.
CM-6 - Medium - CCI-000366 - CNTR-K8-003330 - CNTR-K8-003330_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003330
Vuln IDs
  • CNTR-K8-003330
Rule IDs
  • CNTR-K8-003330_rule
The Kubernetes PKI directory contains all certificates (.crt files) supporting secure network communications in the Kubernetes Control Plane. If these files can be modified, data traversing within the architecture components would become unsecure and compromised.
Checks: C-CNTR-K8-003330_chk

Review the permissions of the Kubernetes PKI cert files by using the command: find /etc/kubernetes/pki -name "*.crt" | xargs stat -c '%n %a' If any of the files have permissions more permissive than “644”, this is a finding.

Fix: F-CNTR-K8-003330_fix

Change the permissions of the cert files to “644” by executing the command: chmod -R 644 /etc/kubernetes/pki/*.crt
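The check and fix for this rule can be combined into a small script. The helper below is a sketch under the assumption that the PKI directory layout matches the check above; the helper name is hypothetical and not part of the STIG.

```shell
#!/bin/sh
# fix_crt_perms: hypothetical helper for this rule. First prints any .crt
# file under the given directory that is more permissive than 644
# ("-perm /133" matches any group/other write bit or any execute bit),
# then applies mode 644 to every .crt file, per the Fix text.
fix_crt_perms() {
  find "$1" -name '*.crt' -perm /133 -print
  find "$1" -name '*.crt' -exec chmod 644 {} +
}
```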

b
The Kubernetes PKI keys must have file permissions set to 600 or more restrictive.
CM-6 - Medium - CCI-000366 - CNTR-K8-003340 - CNTR-K8-003340_rule
RMF Control
CM-6
Severity
M
CCI
CCI-000366
Version
CNTR-K8-003340
Vuln IDs
  • CNTR-K8-003340
Rule IDs
  • CNTR-K8-003340_rule
The Kubernetes PKI directory contains all certificate key files supporting secure network communications in the Kubernetes Control Plane. If these files can be modified, data traversing within the architecture components would become unsecure and compromised.
Checks: C-CNTR-K8-003340_chk

Review the permissions of the Kubernetes PKI key files by using the command: find /etc/kubernetes/pki -name "*.key" | xargs stat -c '%n %a' If any of the files have permissions more permissive than “600”, this is a finding.

Fix: F-CNTR-K8-003340_fix

Change the permissions of the key files to “600” by executing the command: chmod -R 600 /etc/kubernetes/pki/*.key
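Because the stat output in the check still has to be read by eye, the check can also be expressed as a find predicate that prints only non-compliant key files. This is a sketch, not STIG text; the helper name is hypothetical.

```shell
#!/bin/sh
# list_loose_keys: print every .key file under the given directory that is
# more permissive than 600. "-perm /177" matches if any owner execute bit
# or any group/other permission bit is set. Empty output means compliant.
list_loose_keys() {
  find "$1" -name '*.key' -perm /177
}
```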

b
The Kubernetes API Server must prohibit communication using TLS version 1.0 and 1.1, and SSL 2.0 and 3.0.
AC-17 - Medium - CCI-001453 - CNTR-K8-003350 - CNTR-K8-003350_rule
RMF Control
AC-17
Severity
M
CCI
CCI-001453
Version
CNTR-K8-003350
Vuln IDs
  • CNTR-K8-003350
Rule IDs
  • CNTR-K8-003350_rule
The Kubernetes API Server will prohibit the use of SSL and unauthorized versions of TLS protocols to properly secure communication. The use of unsupported protocols exposes Kubernetes to vulnerabilities through rogue traffic interception, man-in-the-middle attacks, and impersonation of users or services from the container platform runtime, registry, and keystore. To enable the minimum version of TLS to be used by the Kubernetes API Server, the setting “tls-min-version” must be set. The container platform and its components will adhere to NIST 800-52R2.
Checks: C-CNTR-K8-003350_chk

Change to the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Run the command: grep -i tls-min-version * If the setting tls-min-version is not set in the Kubernetes API Server manifest file or it is set to “VersionTLS10” or “VersionTLS11”, this is a finding.

Fix: F-CNTR-K8-003350_fix

Edit the Kubernetes API Server manifest file in the /etc/kubernetes/manifests directory on the Kubernetes Master Node. Set the value of “--tls-min-version” to either “VersionTLS12” or “VersionTLS13”.
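The accepted values here form a small allowlist, so the check reduces to a string comparison. The helper below is a sketch; its name and output format are illustrative, and it assumes the flag appears in the manifest as “--tls-min-version=&lt;value&gt;”.

```shell
#!/bin/sh
# check_tls_min: hypothetical helper for this check. Reports PASS only
# when the manifest pins tls-min-version to TLS 1.2 or 1.3; a missing
# flag or an older version (VersionTLS10/VersionTLS11) is a finding.
check_tls_min() {
  v=$(grep -o 'tls-min-version=[A-Za-z0-9.]*' "$1" | head -n 1 | cut -d= -f2)
  case "$v" in
    VersionTLS12|VersionTLS13) echo "PASS ($v)" ;;
    *) echo "FINDING" ;;
  esac
}
```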

b
The container platform must automatically audit account creation.
AC-2 - Medium - CCI-000018 - CNTR-K8-003500 - CNTR-K8-003500_rule
RMF Control
AC-2
Severity
M
CCI
CCI-000018
Version
CNTR-K8-003500
Vuln IDs
  • CNTR-K8-003500
Rule IDs
  • CNTR-K8-003500_rule
Once an attacker establishes access to a system, the attacker often attempts to create a persistent method of reestablishing access. One way to accomplish this is for the attacker to create a new account. Auditing of account creation is one method for mitigating this risk. A comprehensive account management process will ensure an audit trail documents the creation of application user accounts and, as required, notifies administrators and/or application when accounts are created. Such a process greatly reduces the risk that accounts will be surreptitiously created, and provides logging that can be used for forensic purposes. To address access requirements, many application developers choose to integrate their applications with enterprise-level authentication/access/auditing mechanisms that meet or exceed access control policy requirements. Such integration allows the application developer to offload those access control functions and focus on core application features and functionality.
Checks: C-CNTR-K8-003500_chk

Review the container platform configuration to determine if audit records are automatically created upon account creation. If audit records are not automatically created upon account creation, this is a finding.

Fix: F-CNTR-K8-003500_fix

Configure the container platform to automatically create audit records on account creation.

b
The container platform must automatically audit account modification.
AC-2 - Medium - CCI-001403 - CNTR-K8-003510 - CNTR-K8-003510_rule
RMF Control
AC-2
Severity
M
CCI
CCI-001403
Version
CNTR-K8-003510
Vuln IDs
  • CNTR-K8-003510
Rule IDs
  • CNTR-K8-003510_rule
Once an attacker establishes access to a system, the attacker often attempts to create a persistent method of reestablishing access. One way to accomplish this is for the attacker to modify an existing account. Auditing of account modification is one method for mitigating this risk. A comprehensive account management process will ensure an audit trail documents the modification of application user accounts and, as required, notifies administrators and/or the application when accounts are modified. Such a process greatly reduces the risk that accounts will be surreptitiously modified and provides logging that can be used for forensic purposes. To address access requirements, many application developers choose to integrate their applications with enterprise-level authentication/access/auditing mechanisms that meet or exceed access control policy requirements. Such integration allows the application developer to off-load those access control functions and focus on core application features and functionality.
Checks: C-CNTR-K8-003510_chk

Review the container platform configuration to determine if account modification is automatically audited. If account modification is not automatically audited, this is a finding.

Fix: F-CNTR-K8-003510_fix

Configure the container platform to automatically audit account modification.

b
The container platform must automatically audit account-disabling actions.
AC-2 - Medium - CCI-001404 - CNTR-K8-003520 - CNTR-K8-003520_rule
RMF Control
AC-2
Severity
M
CCI
CCI-001404
Version
CNTR-K8-003520
Vuln IDs
  • CNTR-K8-003520
Rule IDs
  • CNTR-K8-003520_rule
When application accounts are disabled, user accessibility is affected. Once an attacker establishes access to an application, the attacker often attempts to disable authorized accounts to disrupt services or prevent the implementation of countermeasures. Auditing account-disabling actions provides logging that can be used for forensic purposes. To address access requirements, many application developers choose to integrate their applications with enterprise-level authentication/access/audit mechanisms meeting or exceeding access control policy requirements. Such integration allows the application developer to off-load those access control functions and focus on core application features and functionality.
Checks: C-CNTR-K8-003520_chk

Review the container platform configuration to determine if account disabling is automatically audited. If account disabling is not automatically audited, this is a finding.

Fix: F-CNTR-K8-003520_fix

Configure the container platform to automatically audit account disabling.
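For Kubernetes itself, the three account-auditing requirements above (creation, modification, and disabling) can be met with an audit policy that the API Server loads via its --audit-policy-file flag. The fragment below is a hypothetical sketch, not text mandated by the STIG; it records write operations on ServiceAccounts and RBAC bindings.

```yaml
# Hypothetical audit policy fragment (audit.k8s.io/v1). Captures the
# account lifecycle: create (account creation), update/patch
# (modification), and delete (disabling/removal).
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: ""
        resources: ["serviceaccounts"]
      - group: "rbac.authorization.k8s.io"
        resources: ["rolebindings", "clusterrolebindings"]
```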

b
The container platform must take appropriate action upon an audit failure.
AU-5 - Medium - CCI-000140 - CNTR-K8-003530 - CNTR-K8-003530_rule
RMF Control
AU-5
Severity
M
CCI
CCI-000140
Version
CNTR-K8-003530
Vuln IDs
  • CNTR-K8-003530
Rule IDs
  • CNTR-K8-003530_rule
It is critical that, when the container platform is at risk of failing to process audit logs as required, it take action to mitigate the failure. Audit processing failures include software/hardware errors, failures in the audit capturing mechanisms, and audit storage capacity being reached or exceeded. Responses to audit failure depend upon the nature of the failure mode. Because the availability of the services provided by the container platform must be preserved, approved actions in response to an audit failure are as follows: (i) If the failure was caused by the lack of audit record storage capacity, the container platform must continue generating audit records if possible (automatically restarting the audit service if necessary), overwriting the oldest audit records in a first-in-first-out manner. (ii) If audit records are sent to a centralized collection server and communication with this server is lost or the server fails, the container platform must queue audit records locally until communication is restored or until the audit records are retrieved manually. Upon restoration of the connection to the centralized collection server, action should be taken to synchronize the local audit data with the collection server.
Checks: C-CNTR-K8-003530_chk

Review the configuration settings to determine how the container platform components are configured for audit failures. When the audit failure is due to the lack of audit record storage, the container platform must continue generating audit records, restarting services if necessary, and overwrite the oldest audit records in a first-in-first-out manner. If the audit failure is due to a communication to a centralized collection server, the container platform must queue audit records locally until communication is restored or the records are retrieved manually. If the container platform is not configured to handle audit failures appropriately, this is a finding.

Fix: F-CNTR-K8-003530_fix

Configure the container platform to continue generating audit records, overwriting oldest audit records in a first-in-first-out manner when the failure is due to a lack of audit record storage. When the audit failure is due to a communication to a centralized collection server, configure the container platform to queue audit records locally until communication is restored or the records are retrieved manually. If other actions are to be taken for audit record failures, document the actions and rationale in the system security plan and obtain risk acceptance approvals.