Review the components within the SDN framework that send and receive southbound API messages and verify that the messages are authenticated using a FIPS-approved message authentication code algorithm. FIPS-approved algorithms for authentication are the cipher-based message authentication code (CMAC) and the keyed-hash message authentication code (HMAC). CMAC may be used with the NIST-approved block ciphers AES and 3DES; HMAC may be used with the NIST-approved hash algorithms SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256. If the SDN controller or SDN-enabled network elements do not authenticate received southbound API messages using a FIPS-approved message authentication code algorithm, this is a finding.
Ensure that all components within the SDN framework authenticate southbound API messages using a FIPS-approved message authentication code algorithm. FIPS-approved algorithms for authentication are the CMAC and the HMAC. CMAC may be used with the NIST-approved block ciphers AES and 3DES; HMAC may be used with the NIST-approved hash algorithms SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256.
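As an illustration only, the sketch below shows how an API message could be authenticated with HMAC-SHA-256, one of the FIPS-approved options above. In practice this is normally provided by the southbound protocol or the controller platform's own configuration rather than application code; the key, payload, and function names here are hypothetical placeholders.

    import hmac
    import hashlib

    # Illustrative only: compute and verify an HMAC-SHA-256 tag (a FIPS-approved MAC)
    # over an API payload. The key and payload are placeholders; in a real deployment
    # the key would come from the platform's key management.
    def sign_message(key: bytes, payload: bytes) -> bytes:
        return hmac.new(key, payload, hashlib.sha256).digest()

    def verify_message(key: bytes, payload: bytes, tag: bytes) -> bool:
        expected = hmac.new(key, payload, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)    # constant-time comparison

    shared_key = b"example-pre-shared-key"           # placeholder
    message = b'{"action": "flow-mod", "priority": 100}'
    tag = sign_message(shared_key, message)
    assert verify_message(shared_key, message, tag)  # reject the message if this fails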
Review the configuration of the SDN controllers and verify that the northbound API messages received are authenticated using a FIPS-approved message authentication code algorithm. FIPS-approved algorithms for authentication are the cipher-based message authentication code (CMAC) and the keyed-hash message authentication code (HMAC). CMAC may be used with the NIST-approved block ciphers AES and 3DES; HMAC may be used with the NIST-approved hash algorithms SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256. If the SDN controllers do not authenticate received northbound API messages using a FIPS-approved message authentication code algorithm, this is a finding.
Configure all SDN controllers to authenticate received northbound API messages using a FIPS-approved message authentication code algorithm. FIPS-approved algorithms for authentication are the CMAC and the HMAC. CMAC may be used with the NIST-approved block ciphers AES and 3DES; HMAC may be used with the NIST-approved hash algorithms SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256.
Review all management and orchestration systems within the SDN framework and verify that access to these components requires DOD PKI certificate-based authentication. If access to the SDN management and orchestration systems does not require DOD PKI certificate-based authentication, this is a finding.
Configure all management and orchestration systems within the SDN framework to require DOD PKI certificate-based authentication for access.
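For illustration, the following minimal sketch shows the general pattern of certificate-based authentication via mutual TLS: the server presents its own certificate, trusts only the DOD CA bundle, and rejects any client that cannot present a valid certificate. It assumes a Python-based management service and uses placeholder file names; most SDN management and orchestration products expose this through their own configuration interfaces rather than code.

    import ssl

    # Minimal sketch: require certificate-based (mutual TLS) authentication for access
    # to a management interface. File names are placeholders; a real deployment would
    # load a DOD PKI-issued server certificate and the DOD root/intermediate CA bundle.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    context.load_cert_chain(certfile="server.pem", keyfile="server.key")
    context.load_verify_locations(cafile="dod_ca_bundle.pem")
    context.verify_mode = ssl.CERT_REQUIRED   # reject clients without a valid certificate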
Determine if the southbound API control plane traffic between the SDN controllers and the SDN-enabled network elements traverses an out-of-band path. If not, verify that the southbound API traffic is encrypted using a FIPS-validated cryptographic module. If the southbound API traffic does not traverse an out-of-band path or is not encrypted using a FIPS-validated cryptographic module, this is a finding. Note: An out-of-band path would be a path between two nodes that traverses one or more links on an out-of-band network; that is, a dedicated layer 2 infrastructure separate from a production network.
Deploy an out-of-band network to provision paths between the SDN controllers and the SDN-enabled network elements for providing transport for southbound API control plane traffic. An alternative is to encrypt all southbound API control plane traffic using a FIPS-validated cryptographic module. Implement a cryptographic module which has a validation certification and is listed on the NIST Cryptographic Module Validation Program's (CMVP) validation list.
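Where encryption is used instead of an out-of-band path, southbound protocols such as OpenFlow can run over TLS. The hedged sketch below shows the general idea from a client's perspective; the host name and CA file are placeholders, 6653 is the IANA-assigned OpenFlow port, and whether the underlying cryptography is FIPS-validated depends on the OpenSSL/OS build in use, not on this code.

    import socket
    import ssl

    # Minimal sketch of carrying southbound API traffic over TLS when no out-of-band
    # path exists. Host name and CA file are placeholders; FIPS validation is a
    # property of the platform's crypto module, not of this code.
    context = ssl.create_default_context(cafile="controller_ca.pem")
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection(("sdn-controller.example.mil", 6653)) as sock:
        with context.wrap_socket(sock, server_hostname="sdn-controller.example.mil") as tls:
            tls.sendall(b"southbound API message")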
Determine if the northbound API traffic between the SDN controllers and the SDN management/orchestration systems traverses an out-of-band path. If not, verify that the northbound API traffic is encrypted using a FIPS-validated cryptographic module. If the northbound API traffic does not traverse an out-of-band path or is not encrypted using a FIPS-validated cryptographic module, this is a finding. Note: An out-of-band path would be a path between two nodes that traverses one or more links on an out-of-band network; that is, a dedicated layer 2 infrastructure separate from a production network.
Deploy an out-of-band network to provision paths between the SDN controllers and the SDN management/orchestration systems for providing transport for northbound API traffic. An alternative is to encrypt all northbound API traffic using a FIPS-validated cryptographic module. Implement a cryptographic module which has a validation certification and is listed on the NIST Cryptographic Module Validation Program's (CMVP) validation list.
Verify that all southbound API management plane traffic is authenticated using a FIPS-approved message authentication code algorithm. Review SDN management and orchestration systems, as well as all hypervisor hosts that compose the NVP framework, to determine if a FIPS-approved message authentication code algorithm is used to ensure the authenticity and integrity of messages used to deploy and configure software-defined network elements. If southbound API management plane traffic is not authenticated using a FIPS-approved message authentication code algorithm, this is a finding.
Configure these components to use a FIPS-approved message authentication code algorithm to authenticate southbound API management messages.
Determine if the southbound API management plane traffic traverses an out-of-band path. If not, verify that the southbound API management plane traffic is encrypted using a FIPS-validated cryptographic module. If the southbound API management plane traffic does not traverse an out-of-band path or is not encrypted using a FIPS-validated cryptographic module, this is a finding.
Deploy an out-of-band network to provision paths between management systems, orchestration systems, and all hypervisor hosts that compose the SDN infrastructure to provide transport for southbound API management plane traffic. An alternative is to encrypt all southbound API management plane traffic using a FIPS-validated cryptographic module. Implement a cryptographic module that has a validation certification and is listed on the NIST Cryptographic Module Validation Program's (CMVP) validation list.
Review the management and orchestration systems, as well as all SDN controllers and physical SDN-enabled network elements that compose the network virtualization platform (NVP), to determine if certificate-based authentication is used to ensure the authenticity and integrity of southbound API management messages. If southbound API management plane traffic is not authenticated using DOD PKI certificates, this is a finding.
Deploy DOD PKI certificates to all orchestration systems, management systems, and physical SDN-enabled network elements. Configure these components to use the certificates to authenticate southbound API management messages.
Determine if the southbound API management plane traffic is encrypted using a FIPS-validated cryptographic module. If the southbound API management plane traffic is not encrypted using a FIPS-validated cryptographic module, this is a finding.
Encrypt all southbound API management plane traffic using a FIPS-validated cryptographic module. Implement a cryptographic module that has a validation certification and is listed on the NIST Cryptographic Module Validation Program's (CMVP) validation list.
Review the SDN infrastructure topology to verify that all physical SDN controllers, management appliances, and servers hosting SDN applications reside within a management network that has multiple paths and is secured by a firewall. If these physical NVP components do not reside within a management network that has multiple paths and is secured by a firewall, this is a finding. Note: If the SDN physical components reside within an out-of-band network, this requirement is not applicable.
Deploy all physical controllers, management appliances, and servers hosting SDN applications into a management network that has multiple paths and is secured by a firewall inspecting all ingress traffic.
Review the configurations for all SDN-enabled routers and switches and verify that link state information is provided to the SDN controllers. If the SDN-enabled routers and switches do not provide link state information to the SDN controllers, this is a finding. Note: This requirement is not applicable if the SDN deployment model does not rely on the controller for network forwarding or convergence.
Configure all SDN-enabled routers and switches to send link state information to the SDN controllers.
Note: This requirement is not applicable if an out-of-band network is used to transport SDN control and management plane traffic. Review the router and multilayer switch configurations to verify that SDN control and management plane packets receive sufficient priority to ensure this traffic has preference over normal production traffic. If any routers or multilayer switches do not impose preferred treatment for SDN control and management plane traffic during periods of congestion, this is a finding.
Determine the paths in which SDN control and management plane traffic will flow between the SDN controllers and SDN-enabled switches and routers. Configure each router and multilayer switch to impose preferred treatment for this traffic so it has priority over normal production traffic during periods of congestion.
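The preferred treatment itself is enforced by QoS policies on the routers and multilayer switches; endpoints can assist by marking SDN control plane traffic so those policies can classify it. The sketch below (hypothetical host and port, Linux-specific socket option) marks outbound controller traffic with DSCP CS6 (network control); marking alone grants no priority without matching QoS policies on the transit devices.

    import socket

    # Hypothetical sketch (Linux socket option): mark outbound SDN control plane traffic
    # with DSCP CS6 (network control) so routers and multilayer switches can classify it.
    # The transit devices still need matching QoS policies to give it priority.
    DSCP_CS6 = 48
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_CS6 << 2)  # DSCP occupies the upper 6 bits of the ToS byte
    sock.connect(("sdn-controller.example.mil", 6653))                # placeholder controller address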
Review the network virtualization platform topology and the SDN configuration to verify that SDN controllers have been deployed as clusters on separate physical hosts. If the SDN controllers have not been deployed as clusters on separate physical hosts, this is a finding.
Deploy SDN controllers as clusters on separate physical hosts to eliminate a single point of failure.
Review the network topology as well as the physical connection between the physical device hosting an SDN controller and the switches. The device must have NIC teaming enabled and must be dual-homed, with each upstream link connected to a different switch. If the physical device hosting an SDN controller is not connected to two switches using NIC teaming, this is a finding.
Enable NIC teaming on the device hosting an SDN controller in either Link Aggregation Control Protocol (LACP) or switch-independent mode. Connect each interface to a different access switch.
Review the parameters provided by the SDN manager or controller when deploying router or switch instances to determine if they set a threshold on the number of unknown data plane packets that are allowed to be punted by a virtual router or switch to the controller within a specific amount of time. Review the configuration of all physical SDN-enabled switches and routers and verify that packet-in messages are rate limited. If SDN-enabled routers and switches do not rate limit the number of unknown data plane packets that are punted to the SDN controller, this is a finding.
Configure the SDN manager or controller to set a threshold on the number of unknown data plane packets that are allowed to be punted by a virtual router or switch to the controller within a specific amount of time. Configure all physical SDN-enabled switches and routers to rate limit the number of packets that are punted to the SDN controller.
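Conceptually, such a threshold behaves like a token bucket: unknown packets are punted only while tokens remain, and excess packets are dropped or counted locally. The sketch below is a generic illustration of that mechanism, not any vendor's implementation; the class name and rates are hypothetical.

    import time

    # Generic token-bucket illustration of a packet-in threshold: punt an unknown packet
    # to the controller only while tokens remain; otherwise drop or count it locally.
    class PacketInRateLimiter:
        def __init__(self, rate: float, burst: float):
            self.rate = rate             # sustained packets per second allowed toward the controller
            self.burst = burst           # maximum short-term burst
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True              # punt this packet to the controller
            return False                 # over the threshold: do not punt

    limiter = PacketInRateLimiter(rate=100, burst=200)   # e.g., at most 100 packets/second sustained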
Review all servers hosting an SDN controller and verify that logging has been enabled. If logging is not enabled on all servers hosting an SDN controller, this is a finding.
Enable logging on all servers hosting an SDN controller.
Review all servers hosting an SDN controller and verify that an HIDS has been installed and enabled. If an HIDS has not been installed and enabled on all servers hosting an SDN controller, this is a finding.
Install and enable an HIDS on all servers hosting an SDN controller.
Review the VXLAN topology and documentation for the SDN deployment that identifies each VXLAN segment and distributed logical switch. Review the configuration of all physical VXLAN-enabled switches to verify that the applicable VNIs are defined. If the applicable VNIs have not been defined on all VXLAN-enabled switches, this is a finding. Note: This requirement is applicable to the implementation of technologies similar to VXLAN (e.g., NVGRE, STT) for the purpose of transporting traffic between virtual machines residing on different physical hosts.
Define all applicable member VNIs on each VXLAN-enabled switch.
Review the VXLAN topology and documentation for the SDN deployment that identifies each VXLAN segment via VNI, VLAN membership, and the VLAN-to-VNI mapping to be implemented. Review the VTEP configuration of all physical VXLAN-enabled switches to verify that the appropriate VLAN-to-VNI mapping has been defined. If the correct VLAN-to-VNI mapping has not been configured on all VXLAN-enabled switches, this is a finding. Note: This requirement is only applicable to VNIs that must be defined on each VXLAN-enabled switch. In addition, this requirement is applicable to the implementation of technologies similar to VXLAN (e.g., NVGRE, STT) for the purpose of transporting traffic between virtual machines residing on different physical hosts.
Configure the appropriate VLAN-to-VNI mapping on all VXLAN-enabled switches.
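Where many switches are involved, the documented mapping can be compared mechanically against what each switch reports. The sketch below is a hypothetical audit aid only; the mapping values are placeholders, and retrieving the configured mapping from a switch (which varies by vendor) is not shown.

    # Hypothetical audit aid: compare the documented VLAN-to-VNI plan against the mapping
    # each VXLAN-enabled switch reports. Any mismatch or missing entry is a finding.
    documented = {100: 10100, 200: 10200, 300: 10300}    # placeholder VLAN -> VNI plan

    def audit_mapping(switch_name: str, configured: dict[int, int]) -> list[str]:
        findings = []
        for vlan, vni in documented.items():
            if configured.get(vlan) != vni:
                findings.append(f"{switch_name}: VLAN {vlan} should map to VNI {vni}, found {configured.get(vlan)}")
        return findings

    print(audit_mapping("leaf-1", {100: 10100, 200: 10999}))   # flags VLAN 200 mismatch and missing VLAN 300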
Review the VXLAN topology as well as documentation for the SDN deployment that identifies each VXLAN segment via VNI and the associated multicast groups. Review the VTEP configuration of all physical VXLAN-enabled switches to verify that the appropriate multicast group is defined for each VNI. If the appropriate multicast group is not configured for each member VNI, this is a finding. Note: This requirement is only applicable to VNIs that must be defined on each VXLAN-enabled switch. In addition, this requirement is applicable to the implementation of technologies similar to VXLAN (e.g., NVGRE, STT) for the purpose of transporting traffic between virtual machines residing on different physical hosts.
Configure the appropriate multicast group that is assigned to each VNI on all VXLAN-enabled switches.
Review the VXLAN topology and the configuration of all hypervisor hosts and VXLAN-enabled switches to verify that every VTEP is dual-homed to two physical network nodes. If any VTEPs are not dual-homed to two physical network nodes, this is a finding. Note: This requirement is only applicable to VNIs that must be defined on each VXLAN-enabled switch. In addition, this requirement is applicable to the implementation of technologies similar to VXLAN (e.g., NVGRE, STT) for the purpose of transporting traffic between virtual machines residing on different physical hosts.
Configure all hypervisor hosts and VXLAN-enabled switches so the VTEP will be dual-homed to two physical network nodes. In the case of the VXLAN-enabled switch, the VTEP will be the loopback interface; hence, dual-homing can be achieved by having two links going upstream to two switches or to two routers. The hypervisor can use network interface card (NIC) teaming for the VTEP interface, with each link connected to an access switch.
Review the VXLAN topology to determine if any hypervisor hosts are dual-homed to two VXLAN-enabled switches deployed as a multi-chassis configuration (e.g., vPC domain, MLAG, MCLAG) to function as a single VTEP. For VXLAN-enabled switches deployed as a multi-chassis configuration, review the configuration to verify that a secondary IP address has been defined for the VTEP loopback interface. If a secondary IP address has not been configured for the VTEP, this is a finding.
Configure a secondary IP address for all VTEP loopback interfaces for VXLAN-enabled switches deployed as a multi-chassis configuration to function as a single VTEP for dual-homed attached hypervisor hosts.
Review the network topology diagram for both the physical infrastructure and the NVP to determine if two or more edge gateways have been deployed between the virtual and physical networks. If two or more edge gateways connecting the NVP and the physical network have not been deployed, this is a finding. Note: This requirement is not applicable if hardware switches are deployed as VTEP devices that also function as gateways between VXLANs and between VXLAN and non-VXLAN infrastructures.
Deploy two or more edge gateways connecting the network virtualization platform and the physical network.
Review the network virtualization platform topology and the SDN manager to verify that each virtual edge gateway has been deployed across multiple hypervisor hosts. If each virtual edge gateway has not been deployed across multiple hypervisor hosts, this is a finding.
Deploy each virtual edge gateway across multiple hypervisor hosts.
Review the network topology diagram for both the physical infrastructure and the network virtualization platform (NVP) to determine if the virtual edge gateways have routing adjacencies with two or more physical routers. In addition, verify that the routing adjacencies are established by having the administrator enter the appropriate commands to display the neighbor relationships between the edge gateway and the upstream routers. If the virtual edge gateway does not have routing adjacencies established with two or more physical routers, this is a finding.
Configure the virtual edge gateways to have routing adjacencies established with two or more physical routers.