SDN Using NV Security Technical Implementation Guide

  • Version/Release: V1R1
  • Published: 2017-03-01


This Security Technical Implementation Guide is published as a tool to improve the security of Department of Defense (DoD) information systems. The requirements are derived from the National Institute of Standards and Technology (NIST) Special Publication 800-53 and related documents. Comments or proposed revisions to this document should be sent via email to the following address: disa.stig_spt@mail.mil.
Southbound API control plane traffic between the SDN controller and SDN-enabled network elements must be mutually authenticated using a FIPS-approved message authentication code algorithm.
IA-7 - High - CCI-000803 - V-73073 - SV-87725r1_rule
RMF Control
IA-7
Severity
High
CCI
CCI-000803
Version
NET-SDN-001
Vuln IDs
  • V-73073
Rule IDs
  • SV-87725r1_rule
Southbound APIs such as OpenFlow provide the forwarding tables to network devices such as switches and routers, both physical and virtual (hypervisor-based). The SDN controllers use the concept of flows to identify network traffic based on predefined rules that can be statically or dynamically programmed by the SDN control software. The controller thereby determines how traffic should flow through network devices based on usage patterns, applications, and policy, optimizing traffic paths according to business requirements rather than network infrastructure design. If an SDN-aware router or switch received erroneous forwarding information from a rogue controller, traffic could be black-holed or even forwarded to a malicious user to sniff traffic and perform a man-in-the-middle attack. Hence, it is imperative that mutual authentication be enabled between the SDN controller and the SDN-aware network elements for all southbound API traffic.
Checks: C-73207r1_chk

Review the components within the SDN framework that send and receive southbound API messages and verify that the messages are authenticated using a FIPS-approved message authentication code algorithm. FIPS-approved message authentication code algorithms are the cipher-based message authentication code (CMAC) and the keyed-hash message authentication code (HMAC). CMAC may be used with the NIST-approved AES and 3DES block ciphers. HMAC may be used with the following NIST-approved hash algorithms: SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256. If the SDN controller or SDN-enabled network elements do not authenticate received southbound API messages using a FIPS-approved message authentication code algorithm, this is a finding.

Fix: F-79519r1_fix

Ensure that all components within the SDN framework authenticate southbound API messages using a FIPS-approved message authentication code algorithm. FIPS-approved message authentication code algorithms are the CMAC and the HMAC. CMAC may be used with the NIST-approved AES and 3DES block ciphers. HMAC may be used with the following NIST-approved hash algorithms: SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256.
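
The STIG does not prescribe a particular implementation of the check above. As a minimal illustration only, the following Python sketch shows what HMAC-based verification of a received southbound API message looks like using the standard hmac and hashlib modules; the function name, message format, and pre-shared key handling are assumptions, and most products satisfy this requirement through TLS backed by a FIPS-validated module rather than application-level code.

```python
import hashlib
import hmac

def verify_southbound_message(key: bytes, message: bytes, received_tag: bytes) -> bool:
    """Verify an HMAC-SHA-256 tag over a received southbound API message.

    HMAC with a NIST-approved hash (SHA-256 here) is one of the
    FIPS-approved message authentication code algorithms cited in the
    check text. Key provisioning is out of scope for this sketch.
    """
    expected = hmac.new(key, message, hashlib.sha256).digest()
    # compare_digest() is a constant-time comparison, avoiding timing leaks.
    return hmac.compare_digest(expected, received_tag)

if __name__ == "__main__":
    key = b"replace-with-a-provisioned-secret"           # placeholder key
    msg = b'{"match": "tcp/443", "action": "output:2"}'  # placeholder flow message
    tag = hmac.new(key, msg, hashlib.sha256).digest()
    print(verify_southbound_message(key, msg, tag))      # True
```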

Northbound API traffic received by the SDN controller must be authenticated using a FIPS-approved message authentication code algorithm.
IA-7 - High - CCI-000803 - V-73075 - SV-87727r1_rule
RMF Control
IA-7
Severity
High
CCI
CCI-000803
Version
NET-SDN-002
Vuln IDs
  • V-73075
Rule IDs
  • SV-87727r1_rule
The SDN controller determines how traffic should flow through physical and virtual network devices based on application profiles, network infrastructure resources, security policies, and business requirements that it receives via the northbound API. It also receives network service requests from orchestration and management systems to deploy and configure network elements via this API. In turn, the northbound API presents a network abstraction to these orchestration and management systems. If attackers could leverage a vulnerable northbound API, they would have control over the SDN infrastructure through the controller by inserting policies. If the SDN controller were to receive fictitious information from a rogue application or orchestration system, non-optimized network paths would be produced that could disrupt network operations, resulting in inefficient application and business processes. Hence, it is imperative that all northbound API traffic received by the SDN controller is authenticated.
Checks: C-73209r1_chk

Review the configuration of the SDN controllers and verify that the northbound API messages received are authenticated using a FIPS-approved message authentication code algorithm. FIPS-approved message authentication code algorithms are the cipher-based message authentication code (CMAC) and the keyed-hash message authentication code (HMAC). CMAC may be used with the NIST-approved AES and 3DES block ciphers. HMAC may be used with the following NIST-approved hash algorithms: SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256. If the SDN controllers do not authenticate received northbound API messages using a FIPS-approved message authentication code algorithm, this is a finding.

Fix: F-79521r1_fix

Configure all SDN controllers to authenticate received northbound API messages using a FIPS-approved message authentication code algorithm. FIPS-approved message authentication code algorithms are the CMAC and the HMAC. CMAC may be used with the NIST-approved AES and 3DES block ciphers. HMAC may be used with the following NIST-approved hash algorithms: SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256.
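
For the northbound side, a common pattern is a REST call whose body is authenticated with an HMAC tag. The sketch below is a hedged illustration using only the Python standard library; the controller URL, header name, and key-handling scheme are hypothetical and are not taken from the STIG or any particular controller's API.

```python
import hashlib
import hmac
import json
import urllib.request

CONTROLLER_URL = "https://sdn-controller.example.mil/restconf/data/flows"  # placeholder
SHARED_KEY = b"replace-with-a-provisioned-secret"                          # placeholder

def post_signed_request(payload: dict) -> int:
    body = json.dumps(payload).encode("utf-8")
    # HMAC-SHA-384 over the request body; SHA-384 is one of the
    # NIST-approved hash algorithms listed in the check text.
    tag = hmac.new(SHARED_KEY, body, hashlib.sha384).hexdigest()
    req = urllib.request.Request(
        CONTROLLER_URL,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "X-Auth-HMAC": tag,  # hypothetical header; real controller APIs differ
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print(post_signed_request({"flow": {"match": "tcp/443", "action": "permit"}}))
```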

Access to the SDN management and orchestration systems must be authenticated using a FIPS-approved message authentication code algorithm.
IA-5 - Medium - CCI-000186 - V-73077 - SV-87729r1_rule
RMF Control
IA-5
Severity
Medium
CCI
CCI-000186
Version
NET-SDN-003
Vuln IDs
  • V-73077
Rule IDs
  • SV-87729r1_rule
The SDN controller receives network service requests from orchestration and management systems to deploy and configure network elements via the northbound API. In turn, the northbound API presents a network abstraction to these systems. If either the orchestration or management system were breached, a rogue user could make modifications to the business or security policy that could disrupt network operations, resulting in inefficient application and business processes as well as bypassing security controls. In addition, invalid network service requests could be processed that could exhaust compute, storage, and network resources, leaving no resources available for legitimate business requirements.
Checks: C-73211r1_chk

Review all management and orchestration systems within the SDN framework and verify that access to these components requires DoD PKI certificate-based authentication. If access to the SDN management and orchestration systems does not require DoD PKI certificate-based authentication, this is a finding.

Fix: F-79523r1_fix

Configure all management and orchestration systems within the SDN framework to require DoD PKI certificate-based authentication for access.
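
Certificate-based access is typically enforced with mutual TLS at the management interface. The following sketch, using Python's ssl module, shows the core settings; the file names and port are placeholders, and in practice the CA bundle would hold the DoD PKI root and intermediate certificates.

```python
import socket
import ssl

# Placeholder paths; a production deployment would reference DoD-issued
# server credentials and the DoD PKI CA bundle.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server-cert.pem", keyfile="server-key.pem")
context.load_verify_locations(cafile="dod-ca-bundle.pem")
context.verify_mode = ssl.CERT_REQUIRED  # every client must present a valid certificate

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        # accept() completes the TLS handshake; clients without a certificate
        # that chains to the trusted CA bundle are rejected.
        conn, addr = tls_listener.accept()
        print("authenticated client subject:", conn.getpeercert().get("subject"))
        conn.close()
```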

Southbound API control plane traffic must traverse an out-of-band path or be encrypted using a FIPS-validated cryptographic module.
CM-6 - High - CCI-000366 - V-73079 - SV-87731r1_rule
RMF Control
CM-6
Severity
High
CCI
CCI-000366
Version
NET-SDN-004
Vuln IDs
  • V-73079
Rule IDs
  • SV-87731r1_rule
Southbound APIs such as OpenFlow provide the forwarding tables to network devices such as switches and routers, both physical and virtual (hypervisor-based). The SDN controllers use the concept of flows to identify network traffic based on predefined rules that can be statically or dynamically programmed by the SDN control software. The controller thereby determines how traffic should flow through network devices based on usage patterns, applications, and policy, optimizing traffic paths according to business requirements rather than network infrastructure design. If an SDN-aware router or switch received erroneous forwarding information from a rogue controller, traffic could be black-holed or even forwarded to a malicious user to sniff traffic and perform a man-in-the-middle attack. Hence, it is imperative to secure flow table updates by encrypting all southbound API traffic or deploying an out-of-band network for this traffic to traverse.
Checks: C-73213r1_chk

Determine if the southbound API control plane traffic between the SDN controllers and the SDN-enabled network elements traverses an out-of-band path. If not, verify that the southbound API traffic is encrypted using a FIPS-validated cryptographic module. If the southbound API traffic does not traverse an out-of-band path or is not encrypted using a FIPS-validated cryptographic module, this is a finding. Note: An out-of-band path would be a path between two nodes that traverses one or more links on an out-of-band network; that is, a dedicated layer 2 infrastructure separate from a production network.

Fix: F-79525r1_fix

Deploy an out-of-band network to provision paths between the SDN controllers and the SDN-enabled network elements for providing transport for southbound API control plane traffic. An alternative is to encrypt all southbound API control plane traffic using a FIPS-validated cryptographic module. Implement a cryptographic module which has a validation certification and is listed on the NIST Cryptographic Module Validation Program's (CMVP) validation list.
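
As one concrete (software switch) illustration of the encryption alternative, Open vSwitch can be pointed at its controller over TLS with the ovs-vsctl commands wrapped below. The certificate paths, bridge name, and controller address are placeholders; hardware SDN-enabled elements have vendor-specific equivalents, and whether the session actually uses a FIPS-validated cryptographic module depends on the TLS library the switch is built against.

```python
import subprocess

def enable_openflow_tls(bridge: str, controller_ip: str) -> None:
    # Private key, certificate, and CA certificate used to validate the
    # controller (providing mutual authentication as well as encryption).
    subprocess.run(
        ["ovs-vsctl", "set-ssl",
         "/etc/openvswitch/sc-privkey.pem",
         "/etc/openvswitch/sc-cert.pem",
         "/etc/openvswitch/cacert.pem"],
        check=True,
    )
    # ssl: (rather than tcp:) encrypts the southbound OpenFlow channel.
    subprocess.run(
        ["ovs-vsctl", "set-controller", bridge, f"ssl:{controller_ip}:6653"],
        check=True,
    )

enable_openflow_tls("br0", "203.0.113.10")  # placeholder bridge and controller address
```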

Northbound API traffic must traverse an out-of-band path or be encrypted using a FIPS-validated cryptographic module.
CM-6 - High - CCI-000366 - V-73081 - SV-87733r1_rule
RMF Control
CM-6
Severity
High
CCI
CCI-000366
Version
NET-SDN-005
Vuln IDs
  • V-73081
Rule IDs
  • SV-87733r1_rule
The SDN controller receives network service requests from orchestration and management systems to deploy and configure network elements via the northbound API. In turn, the northbound API presents a network abstraction to these systems. If either the orchestration or management system were breached, a rogue user could make modifications to the business or security policy that could disrupt network operations, resulting in inefficient application and business processes and bypassing security controls. In addition, invalid network service requests could be processed that could exhaust compute, storage, and network resources, leaving no resources available for legitimate business requirements. Hence, it is imperative that all northbound API traffic is secured by encrypting the traffic or deploying an out-of-band network for this traffic to traverse.
Checks: C-73215r1_chk

Determine if the northbound API traffic between the SDN controllers and the SDN management/orchestration systems traverses an out-of-band path. If not, verify that the northbound API traffic is encrypted using a FIPS-validated cryptographic module. If the northbound API traffic does not traverse an out-of-band path or is not encrypted using a FIPS-validated cryptographic module, this is a finding. Note: An out-of-band path would be a path between two nodes that traverses one or more links on an out-of-band network; that is, a dedicated layer 2 infrastructure separate from a production network.

Fix: F-79527r1_fix

Deploy an out-of-band network to provision paths between the SDN controllers and the SDN management/orchestration systems for providing transport for northbound API traffic. An alternative is to encrypt all northbound API traffic using a FIPS-validated cryptographic module. Implement a cryptographic module which has a validation certification and is listed on the NIST Cryptographic Module Validation Program's (CMVP) validation list.

Southbound API management plane traffic for provisioning and configuring virtual network elements within the SDN infrastructure must be authenticated using a FIPS-approved message authentication code algorithm.
IA-5 - Medium - CCI-000186 - V-73083 - SV-87735r1_rule
RMF Control
IA-5
Severity
Medium
CCI
CCI-000186
Version
NET-SDN-006
Vuln IDs
  • V-73083
Rule IDs
  • SV-87735r1_rule
Management and orchestration systems within the SDN framework instantiate, deploy, and configure virtual network elements. These systems also define the virtual network topology by specifying the connectivity between the network elements and the workloads, both virtual and physical. If a hypervisor host within the SDN infrastructure were to receive fictitious information from a rogue management or orchestration system because the traffic is not authenticated, the virtual network topology could be altered by deploying rogue network elements to create non-optimized network paths, resulting in inefficient application and business processes. By altering the network topology, the attacker would have the ability to force traffic to bypass security controls.
Checks: C-73217r1_chk

Verify that all southbound API management plane traffic is authenticated using a FIPS-approved message authentication code algorithm. Review SDN management and orchestration systems, as well as all hypervisor hosts that compose the network virtualization platform (NVP) framework, to determine if a FIPS-approved message authentication code algorithm is used to ensure the authenticity and integrity of messages used to deploy and configure software-defined network elements. If southbound API management plane traffic is not authenticated using a FIPS-approved message authentication code algorithm, this is a finding.

Fix: F-79529r1_fix

Configure the SDN management and orchestration systems, as well as all hypervisor hosts within the NVP framework, to use a FIPS-approved message authentication code algorithm to authenticate southbound API management messages.

Southbound API management plane traffic for provisioning and configuring virtual network elements within the SDN infrastructure must traverse an out-of-band path or be encrypted using a FIPS-validated cryptographic module.
CM-6 - Medium - CCI-000366 - V-73085 - SV-87737r1_rule
RMF Control
CM-6
Severity
Medium
CCI
CCI-000366
Version
NET-SDN-007
Vuln IDs
  • V-73085
Rule IDs
  • SV-87737r1_rule
Management and orchestration systems within the SDN framework instantiate, deploy, and configure network elements within the SDN infrastructure. These systems also define the virtual network topology by specifying the connectivity between the network elements and the workloads, both virtual and physical. If a hypervisor host within the SDN infrastructure were to receive fictitious information from a rogue management or orchestration system, the virtual network topology could be altered by deploying rogue network elements to create non-optimized network paths, resulting in inefficient application and business processes. By altering the network topology, the attacker would have the ability to force traffic to bypass security controls. Spoofed management plane traffic generated by a rogue management system could result in a denial-of-service attack on the hypervisor hosts, exhausting the computing resources and disrupting workload processing or even creating a network outage. Hence, it is imperative that all SDN management plane traffic is secured by encrypting the traffic or deploying an out-of-band network for this traffic to traverse.
Checks: C-73219r1_chk

Determine if the southbound API management plane traffic traverses an out-of-band path. If not, verify that the southbound API management plane traffic is encrypted using a FIPS-validated cryptographic module. If the southbound API management plane traffic does not traverse an out-of-band path or is not encrypted using a FIPS-validated cryptographic module, this is a finding.

Fix: F-79531r1_fix

Deploy an out-of-band network to provision paths between management systems, orchestration systems, and all hypervisor hosts that compose the SDN infrastructure to provide transport for southbound API management plane traffic. An alternative is to encrypt all southbound API management plane traffic using a FIPS-validated cryptographic module. Implement a cryptographic module that has a validation certification and is listed on the NIST Cryptographic Module Validation Program's (CMVP) validation list.

Southbound API management plane traffic for configuring SDN parameters on physical network elements must be authenticated using DoD PKI certificate-based authentication.
IA-5 - Medium - CCI-000186 - V-73087 - SV-87739r1_rule
RMF Control
IA-5
Severity
Medium
CCI
CCI-000186
Version
NET-SDN-008
Vuln IDs
  • V-73087
Rule IDs
  • SV-87739r1_rule
Physical SDN-enabled switches are dependent on the SDN controller for their forwarding tables as well as their configuration and service parameters. This information is provided to the switches via SDN management plane protocols such as Network Configuration Protocol (NETCONF) and Open vSwitch Database Management Protocol (OVSDB). The latter provides configuration support for OpenFlow-enabled switches such as Open vSwitch, as well as many vendor switches. Without authenticating management packets, physical switches within the SDN infrastructure could receive fictitious information from a rogue management system that could shut down interfaces, thereby altering the physical network topology. By altering the network topology, the attacker would have the ability to force traffic to bypass security controls. Legitimate traffic could be dropped by deploying access control lists to active interfaces. Spoofed management plane traffic generated by a rogue management system could result in a denial-of-service attack on the switches, resulting in a network outage.
Checks: C-73221r1_chk

Review both management and orchestration systems, as well as all SDN controllers and physical SDN-enabled network elements that compose the network virtualization platform (NVP), to determine if certificate-based authentication is used to ensure the authenticity and integrity of southbound API management messages. If southbound API management plane traffic is not authenticated using DoD PKI certificates, this is a finding.

Fix: F-79533r1_fix

Deploy DoD PKI certificates to all orchestration systems, management systems, and physical SDN-enabled network elements. Configure these components to use the certificates to authenticate southbound API management messages.
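
For elements managed through OVSDB (for example, Open vSwitch or hardware VTEP gateways that expose an OVSDB server), the management channel can be tied to the deployed certificates as sketched below; the manager address is a placeholder, and the certificates are the ones installed with ovs-vsctl set-ssl as in the earlier OpenFlow sketch. NETCONF-managed devices achieve the same goal with vendor-specific NETCONF-over-TLS or SSH configuration.

```python
import subprocess

# Connect the OVSDB management channel over SSL so that southbound API
# management messages are authenticated with the deployed PKI certificates
# (and encrypted). 6640 is the registered OVSDB management port.
subprocess.run(
    ["ovs-vsctl", "set-manager", "ssl:203.0.113.20:6640"],  # placeholder manager address
    check=True,
)
```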

Southbound API management plane traffic for configuring SDN parameters on physical network elements must be encrypted using a FIPS-validated cryptographic module.
CM-6 - Medium - CCI-000366 - V-73089 - SV-87741r1_rule
RMF Control
CM-6
Severity
Medium
CCI
CCI-000366
Version
NET-SDN-009
Vuln IDs
  • V-73089
Rule IDs
  • SV-87741r1_rule
Physical SDN-enabled switches are dependent on the SDN controller for their forwarding tables, as well as their configuration and service parameters. This information is provided to the switches via SDN management plane protocols such as Network Configuration Protocol (NETCONF) and Open vSwitch Database Management Protocol (OVSDB). The latter provides configuration support for OpenFlow-enabled switches such as Open vSwitch, as well as many vendor switches. If a switch within the SDN infrastructure were to receive fictitious information from a rogue management system, the physical network topology could be altered by shutting down interfaces. Legitimate traffic could be dropped by deploying access control lists to active interfaces. By altering the network topology, the attacker would have the ability to force traffic to bypass security controls. Spoofed management plane traffic generated by a rogue management system could result in a denial-of-service attack on the switches, resulting in a network outage. Hence, it is imperative that all SDN management plane traffic is secured by encrypting the traffic using a FIPS-validated cryptographic module.
Checks: C-73223r1_chk

Determine if the southbound API management plane traffic is encrypted using a FIPS-validated cryptographic module. If the southbound API management plane traffic is not encrypted using a FIPS-validated cryptographic module, this is a finding.

Fix: F-79535r1_fix

Encrypt all southbound API management plane traffic using a FIPS-validated cryptographic module. Implement a cryptographic module that has a validation certification and is listed on the NIST Cryptographic Module Validation Program's (CMVP) validation list.

Physical SDN controllers and servers hosting SDN applications must reside within the management network with multiple paths that are secured by a firewall to inspect all ingress traffic.
CM-6 - Medium - CCI-000366 - V-73091 - SV-87743r1_rule
RMF Control
CM-6
Severity
Medium
CCI
CCI-000366
Version
NET-SDN-010
Vuln IDs
  • V-73091
Rule IDs
  • SV-87743r1_rule
Management and orchestration systems deploy and configure network devices such as switches and routers, both physical and virtual. SDN controllers are made aware of the deployments and are able to define the network topology through abstraction. The controllers are then able to provide forwarding table information to each router or switch instance within the SDN infrastructure. If an SDN-aware router or switch received erroneous forwarding information from a rogue controller, traffic could be black-holed or even forwarded to a malicious user to sniff traffic and to perform a man-in-the-middle attack. If attackers could leverage a vulnerable northbound API, they would have control over the SDN infrastructure through the controller by creating their own policies. If the SDN controller were to receive fictitious information from a rogue application, non-optimized network paths would be produced that could disrupt network operations, resulting in inefficient application and business processes. If either the orchestration or management system were breached, invalid network service requests could be processed that could exhaust compute, storage, and network resources, leaving no resources available for legitimate business requirements.
Checks: C-73225r1_chk

Review the SDN infrastructure topology to verify that all physical SDN controllers, management appliances, and servers hosting SDN applications reside within the management network that has multiple paths and is also secured by a firewall. If these physical NVP components do not reside within the management network with multiple paths, and are not secured by a firewall, this is a finding. Note: If the SDN physical components reside within an out-of-band network, this requirement would not be applicable.

Fix: F-79537r1_fix

Deploy all physical controllers, management appliances, and servers hosting SDN applications into the management network with multiple paths that are secured by a firewall inspecting all ingress traffic.

SDN-enabled routers and switches must provide link state information to the SDN controller to create new forwarding decisions for the network elements.
CM-6 - Low - CCI-000366 - V-73093 - SV-87745r1_rule
RMF Control
CM-6
Severity
Low
CCI
CCI-000366
Version
NET-SDN-011
Vuln IDs
  • V-73093
Rule IDs
  • SV-87745r1_rule
Southbound APIs such as OpenFlow provide the forwarding tables to network devices such as switches and routers. SDN controllers have an abstraction of the network topology based on discovery and provisioning information provided by management and orchestration systems. The SDN controllers use the concept of flows to identify network traffic based on predefined rules that can be statically or dynamically programmed by the SDN control software. With the network topology abstraction, they are able to determine how traffic should flow through network devices based on application data, business policy, bandwidth, and path availability. If the SDN-enabled network elements do not provide updated link state information, the SDN controller is not able to reconverge the network to ensure there is reachability to all destinations.
Checks: C-73227r1_chk

Review the configurations for all SDN-enabled routers and switches and verify that link state information is provided to the SDN controllers. If the SDN-enabled routers and switches do not provide link state information to the SDN controllers, this is a finding. Note: This requirement is not applicable if the SDN deployment model does not rely on the controller for network forwarding or convergence.

Fix: F-79539r2_fix

Configure all SDN-enabled routers and switches to send link state information to the SDN controllers.

Quality of service (QoS) must be implemented on the underlying IP network to provide preferred treatment for traffic between the SDN controllers and SDN-enabled switches and hypervisors.
CM-6 - Low - CCI-000366 - V-73095 - SV-87747r1_rule
RMF Control
CM-6
Severity
Low
CCI
CCI-000366
Version
NET-SDN-012
Vuln IDs
  • V-73095
Rule IDs
  • SV-87747r1_rule
With the network topology abstraction, the SDN controllers are able to determine how traffic should flow through network devices based on application data, business policy, bandwidth, and path availability. When updated link state information is provided by the network elements, the SDN controller must recalculate the optimized paths for network reconvergence and provide the new forwarding tables to the network elements. When network congestion occurs, all traffic has an equal chance of being dropped. QoS provisioning categorizes network traffic, prioritizes it according to its relative importance, and provides preferential treatment using various priority queuing techniques. Prioritization of both link state updates and control plane traffic must be implemented to ensure that the network can converge during periods of severe network congestion.
Checks: C-73229r3_chk

Note: This requirement will not be applicable if an out-of-band network is used to transport SDN control and management plane traffic. Review the router and multilayer switch configurations to verify that SDN control and management plane packets are receiving the appropriate amount of priority to ensure this traffic has preference over normal production traffic. If not all routers and multilayer switches impose preferred treatment for SDN control and management plane traffic during periods of congestion, this is a finding.

Fix: F-79541r1_fix

Determine the paths in which SDN control and management plane traffic will flow between the SDN controllers and SDN-enabled switches and routers. Configure each router and multilayer switch to impose preferred treatment for this traffic so it has priority over normal production traffic during periods of congestion.
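
The queuing policy itself lives on the routers and multilayer switches, but classification is simpler when the SDN control plane endpoints mark their own traffic. The sketch below is a hedged illustration only: it marks a controller-originated session with DSCP CS6 (network control) using the standard socket option; the peer address is a placeholder, and the matching per-hop priority queue must still be configured on each device in the path.

```python
import socket

DSCP_CS6 = 48            # DSCP 48 (CS6), commonly reserved for network control traffic
TOS_CS6 = DSCP_CS6 << 2  # IP_TOS takes the full 8-bit ToS byte (DSCP in the upper 6 bits)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_CS6)
sock.connect(("203.0.113.30", 6653))  # placeholder: a southbound OpenFlow session
```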

SDN controllers must be deployed as clusters and on separate physical hosts to eliminate a single point of failure.
CM-6 - Medium - CCI-000366 - V-73097 - SV-87749r1_rule
RMF Control
CM-6
Severity
Medium
CCI
CCI-000366
Version
NET-SDN-013
Vuln IDs
  • V-73097
Rule IDs
  • SV-87749r1_rule
SDN relies heavily on control messages between a controller and the forwarding devices for network convergence. The controller uses node and link state discovery information to calculate and determine optimum pathing within the SDN network infrastructure based on application, business, and security policies. Operating in the proactive flow instantiation mode, the SDN controller populates forwarding tables to the SDN-aware forwarding devices. At times, the SDN controller must function in reactive flow instantiation mode; that is, when a forwarding device receives a packet for a flow not found in its forwarding table, it must send it to the controller to receive forwarding instructions. With total dependence on the SDN controller for determining forwarding decisions and path optimization within the SDN infrastructure for both proactive and reactive flow modes of operation, having a single point of failure is not acceptable. A controller failure with no failover backup leaves the network in an unmanaged state. Hence, it is imperative that the SDN controllers are deployed as clusters on separate physical hosts to guarantee network high availability.
Checks: C-73231r1_chk

Review the network virtualization platform topology and the SDN configuration to verify that SDN controllers have been deployed as clusters on separate physical hosts. If the SDN controllers have not been deployed as clusters on separate physical hosts, this is a finding.

Fix: F-79543r1_fix

Deploy SDN controllers as clusters on separate physical hosts to eliminate a single point of failure.

Physical devices hosting an SDN controller must be connected to two switches for high availability.
CM-6 - Low - CCI-000366 - V-73099 - SV-87751r1_rule
RMF Control
CM-6
Severity
Low
CCI
CCI-000366
Version
NET-SDN-014
Vuln IDs
  • V-73099
Rule IDs
  • SV-87751r1_rule
SDN relies heavily on control messages between a controller and the forwarding devices for network convergence. The controller uses node and link state discovery information to calculate and determine optimum pathing within the SDN network infrastructure based on application, business, and security policies. Operating in the proactive flow instantiation mode, the SDN controller populates forwarding tables to the SDN-aware forwarding devices. At times, the SDN controller must function in reactive flow instantiation mode; that is, when a forwarding device receives a packet for a flow not found in its forwarding table, it must send it to the controller to receive forwarding instructions. With total dependence on the SDN controller for determining forwarding decisions and path optimization within the SDN infrastructure for both proactive and reactive flow modes of operation, having a single point of failure is not acceptable. Hence, it is imperative that all physical devices hosting an SDN controller are connected to two switches using NIC teaming to guarantee network high availability.
Checks: C-73233r1_chk

Review the network topology as well as the physical connection between the physical device hosting an SDN controller and the switches. The device must have NIC teaming enabled and must be dual homed, with each upstream link connected to a different switch. If the physical device hosting an SDN controller is not connected to two switches using NIC teaming, this is a finding.

Fix: F-79545r1_fix

Enable NIC teaming on the device hosting an SDN controller in either Link Aggregation Control Protocol (LACP) or switch-independent mode. Connect each interface to a different access switch.
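
On a Linux host running an SDN controller, LACP-mode NIC teaming can be expressed with the iproute2 bonding driver as sketched below. The interface names are placeholders, the two upstream switches must present the links as a single aggregation group (e.g., MLAG/vPC) for LACP mode, and persistent configuration would normally be done through the distribution's network management tooling rather than ad hoc commands.

```python
import subprocess

# Create an 802.3ad (LACP) bond from two NICs, each cabled to a different
# access switch, so the loss of one link or switch does not isolate the host.
COMMANDS = [
    ["ip", "link", "add", "bond0", "type", "bond", "mode", "802.3ad"],
    ["ip", "link", "set", "eth0", "down"],
    ["ip", "link", "set", "eth1", "down"],
    ["ip", "link", "set", "eth0", "master", "bond0"],
    ["ip", "link", "set", "eth1", "master", "bond0"],
    ["ip", "link", "set", "eth0", "up"],
    ["ip", "link", "set", "eth1", "up"],
    ["ip", "link", "set", "bond0", "up"],
]
for cmd in COMMANDS:
    subprocess.run(cmd, check=True)
```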

SDN-enabled routers and switches must rate limit the number of unknown data plane packets that are punted to the SDN controller.
CM-6 - Low - CCI-000366 - V-73101 - SV-87753r1_rule
RMF Control
CM-6
Severity
Low
CCI
CCI-000366
Version
NET-SDN-015
Vuln IDs
  • V-73101
Rule IDs
  • SV-87753r1_rule
SDN-enabled forwarding devices are dependent on the SDN controller for their forwarding tables as well as their configuration and service parameters. The controller uses node and link state discovery information to calculate and determine optimum pathing within the SDN network infrastructure based on application, business, and security policies. Operating in the proactive flow instantiation mode, the SDN controller pre-populates forwarding tables to the forwarding devices. At times, the SDN controller must function in reactive flow instantiation mode; that is, when a forwarding device receives a packet for a flow not found in its forwarding table, it must send or punt it to the controller to receive forwarding instructions. Upon receiving the punted packet, the controller must determine how to forward the packet, create a rule, and populate a new forwarding table to the forwarding device. High rates of punted packets result in excessive controller CPU and memory utilization. Hence, a denial-of-service attack targeting the SDN controller can be perpetrated either inadvertently or maliciously, involving high rates of packets for new flows that must be punted to the controller.
Checks: C-73235r1_chk

Review the parameters provided by the SDN manager or controller when deploying router or switch instances to determine if they set a threshold on the number of unknown data plane packets that are allowed to be punted by a virtual router or switch to the controller within a specific amount of time. Review the configuration of all physical SDN-enabled switches and routers and verify that packet-in messages are rate limited. If SDN-enabled routers and switches do not rate limit the number of unknown data plane packets that are punted to the SDN controller, this is a finding.

Fix: F-79547r1_fix

Configure the SDN manager or controller to set a threshold on the number of unknown data plane packets that are allowed to be punted by a virtual router or switch to the controller within a specific amount of time. Configure all physical SDN-enabled switches and routers to rate limit the number of packets that are punted to the SDN controller.
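
The actual limit is enforced in the switch or controller software, but the behavior being asked for is essentially a token bucket applied to packet-in (punted) traffic. The following generic Python sketch illustrates that behavior; the rate and burst values are placeholders, not recommended thresholds.

```python
import time

class TokenBucket:
    """Allow at most `rate` punts per second with a burst allowance of
    `burst`; unknown-flow packets beyond that are dropped locally rather
    than punted, protecting controller CPU and memory."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

punt_limiter = TokenBucket(rate=100.0, burst=200.0)  # placeholder values

def handle_unknown_flow(packet: bytes) -> None:
    if punt_limiter.allow():
        pass  # punt the packet to the SDN controller (packet-in)
    else:
        pass  # drop locally; do not overwhelm the controller
```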

Servers hosting SDN controllers must have logging enabled.
AU-3 - Medium - CCI-001846 - V-73103 - SV-87755r1_rule
RMF Control
AU-3
Severity
Medium
CCI
CCI-001846
Version
NET-SDN-016
Vuln IDs
  • V-73103
Rule IDs
  • SV-87755r1_rule
It is critical for both network and security personnel to be aware of the state of the SDN infrastructure to maintain network stability. Associating logged events that have occurred within the SDN controller as well as network state information provided by the SDN-enabled components is essential to compile an accurate risk assessment and troubleshoot network outages.
Checks: C-73237r1_chk

Review all servers hosting an SDN controller and verify that logging has been enabled. If logging is not enabled on all servers hosting an SDN controller, this is a finding.

Fix: F-79549r1_fix

Enable logging on all servers hosting an SDN controller.
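
Most controller platforms ship their own logging configuration, which should be preferred; as a minimal stand-in, the sketch below shows host-level logging in Python forwarded to a central syslog collector. The collector hostname and port are placeholders.

```python
import logging
import logging.handlers

logger = logging.getLogger("sdn-controller-host")
logger.setLevel(logging.INFO)

# Forward events to a central syslog collector (placeholder address) so the
# records survive a compromise or failure of the host itself.
syslog = logging.handlers.SysLogHandler(address=("loghost.example.mil", 514))
syslog.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
logger.addHandler(syslog)

logger.info("southbound API session established with switch 203.0.113.40")
```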

Servers hosting SDN controllers must have an HIDS implemented to detect unauthorized changes.
SI-4 - Medium - CCI-001255 - V-73105 - SV-87757r1_rule
RMF Control
SI-4
Severity
Medium
CCI
CCI-001255
Version
NET-SDN-018
Vuln IDs
  • V-73105
Rule IDs
  • SV-87757r1_rule
The SDN controller is the backbone of the SDN infrastructure. If the server hosting the SDN controller is breached or if unauthorized changes are made to the device, the SDN controller may not have the appropriate resources to function properly or may even be disabled. A host intrusion detection system (HIDS) can monitor and report system configuration changes and prevent malicious or anomalous activity.
Checks: C-73239r1_chk

Review all servers hosting an SDN controller and verify that an HIDS has been installed and enabled. If an HIDS has not been installed and enabled on all servers hosting an SDN controller, this is a finding.

Fix: F-79551r1_fix

Install and enable an HIDS on all servers hosting an SDN controller.

All Virtual Extensible Local Area Network (VXLAN) enabled switches must be configured with the appropriate VXLAN network identifier (VNI) to ensure VMs can send and receive all associated traffic for their Layer 2 domain.
CM-6 - Medium - CCI-000366 - V-73107 - SV-87759r1_rule
RMF Control
CM-6
Severity
Medium
CCI
CCI-000366
Version
NET-SDN-020
Vuln IDs
  • V-73107
Rule IDs
  • SV-87759r1_rule
VXLAN is a Layer 2 network that overlays a Layer 3 network; that is, it creates a Layer 2 adjacency across a routed IP fabric. Each Layer 2 overlay network is known as a VXLAN segment and is identified by a unique segment ID called a VXLAN Network Identifier (VNI). The VXLAN network enables virtual machines with the same VNI deployed on different hosts to communicate with each other. Virtual machines are identified uniquely by the combination of the MAC addresses of their virtual network interface card (NIC) and VNI. Hence, it is possible to have duplicate MAC addresses within the SDN infrastructure, provided they are in different VXLAN segments. Within the VXLAN architecture, virtual tunnel endpoints (VTEPs) perform the encapsulation and de-encapsulation of the Layer 2 traffic. The VXLAN segments are independent of the underlying network topology; conversely, the underlying IP network between VTEPs is independent of the VXLAN overlay. It routes the encapsulated packets based on the outer IP address header, which has the initiating VTEP as the source IP address and the terminating VTEP as the destination IP address. The VTEP must be configured with the appropriate VNIs to enable the VTEP to build forwarding tables for active VXLAN segments (Layer 2 domains) by learning MAC addresses per VNI packet flows.
Checks: C-73241r1_chk

Review the VXLAN topology and documentation for the SDN deployment that identifies each VXLAN segment and distributed logical switch. Review the configuration of all physical VXLAN-enabled switches to verify that the applicable VNIs are defined. If the applicable VNIs have not been defined on all VXLAN-enabled switches, this is a finding. Note: This requirement is applicable to the implementation of technologies similar to VXLAN (e.g., NVGRE, STT) for the purpose of transporting traffic between virtual machines residing on different physical hosts.

Fix: F-79553r1_fix

Define all applicable member VNIs on each VXLAN-enabled switch.

Virtual Extensible Local Area Network (VXLAN) identifiers must be mapped to the appropriate VLAN identifiers.
CM-6 - Medium - CCI-000366 - V-73109 - SV-87761r1_rule
RMF Control
CM-6
Severity
Medium
CCI
CCI-000366
Version
NET-SDN-021
Vuln IDs
  • V-73109
Rule IDs
  • SV-87761r1_rule
VXLAN is a Layer 2 network that overlays a Layer 3 network; that is, it creates a Layer 2 adjacency across a routed IP fabric. Each Layer 2 overlay network is known as a VXLAN segment and is identified by a unique segment ID called a VXLAN Network Identifier (VNI). The VXLAN network enables virtual machines with the same VNI deployed on different hosts to communicate with each other. Virtual machines are identified uniquely by the combination of the MAC addresses of their virtual network interface card (NIC) and VNI. Hence, it is possible to have duplicate MAC addresses within the SDN infrastructure, provided they are in different VXLAN segments. Within the VXLAN architecture, virtual tunnel endpoints (VTEPs) perform the encapsulation and de-encapsulation of the Layer 2 traffic. The VXLAN segments are independent of the underlying network topology; conversely, the underlying IP network between VTEPs is independent of the VXLAN overlay. It routes the encapsulated packets based on the outer IP address header, which has the initiating VTEP as the source IP address and the terminating VTEP as the destination IP address. VTEP-enabled switches will determine the VNI to insert into the VXLAN header based on the 802.1Q VLAN tag of each frame received from the hypervisor host connected via trunk link or the VLAN assignment of an access switchport. The mapping of VLAN to VNI is configured on the switch. Since the VNI is used to segregate all Layer 2 domains, the correct mapping is critical to ensure all traffic for each Layer 2 domain within the SDN infrastructure is forwarded correctly and that broadcast and multicast traffic does not leak into the wrong domain.
Checks: C-73243r1_chk

Review the VXLAN topology and documentation for the SDN deployment that identifies each VXLAN segment via VNI, VLAN membership, and the VLAN-to-VNI mapping to be implemented. Review the VTEP configuration of all physical VXLAN-enabled switches to verify that the appropriate VLAN-to-VNI mapping has been defined. If the correct VLAN-to-VNI mapping has not been configured on all VXLAN-enabled switches, this is a finding. Note: This requirement is only applicable to VNIs that must be defined on each VXLAN-enabled switch. In addition, this requirement is applicable to the implementation of technologies similar to VXLAN (e.g., NVGRE, STT) for the purpose of transporting traffic between virtual machines residing on different physical hosts.

Fix: F-79555r1_fix

Configure the appropriate VLAN-to-VNI mapping on all VXLAN-enabled switches.

The proper multicast group for each Virtual Extensible Local Area Network (VXLAN) identifier must be mapped to the appropriate virtual tunnel endpoint (VTEP) so the VTEP will join the associated multicast groups.
CM-6 - Medium - CCI-000366 - V-73111 - SV-87763r1_rule
RMF Control
CM-6
Severity
Medium
CCI
CCI-000366
Version
NET-SDN-022
Vuln IDs
  • V-73111
Rule IDs
  • SV-87763r1_rule
VXLAN is a Layer 2 network that overlays a Layer 3 network; that is, it creates a Layer 2 adjacency across a routed IP fabric. Each Layer 2 overlay network is known as a VXLAN segment and is identified by a unique segment ID called a VXLAN Network Identifier (VNI). The VXLAN network enables virtual machines with the same VNI deployed on different hosts to communicate with each other. Virtual machines are identified uniquely by the combination of the MAC addresses of their virtual network interface card (NIC) and VNI. Hence, it is possible to have duplicate MAC addresses within the SDN infrastructure, provided they are in different VXLAN segments. Within the VXLAN architecture, VTEPs perform the encapsulation and de-encapsulation of the Layer 2 traffic. The VXLAN segments are independent of the underlying network topology; conversely, the underlying IP network between VTEPs is independent of the VXLAN overlay. It routes the encapsulated packets based on the outer IP address header, which has the initiating VTEP as the source IP address and the terminating VTEP as the destination IP address. Each VXLAN segment is mapped to an IP multicast group in the transport IP network. Hence, VTEPs join IP multicast groups based on VNI membership. This is the method by which VTEPs can discover other VTEPs belonging to the same VXLAN segment. Each VTEP-enabled switch is configured to join the applicable multicast group for each VNI through Internet Group Management Protocol (IGMP). The IGMP joins will trigger Protocol Independent Multicast (PIM) joins, thereby signaling a multicast distribution tree for each group through the transport network based on the locations of participating VTEPs. The multicast group is used to transmit broadcast, unknown unicast, and multicast traffic through the IP network for each VXLAN segment, limiting all Layer 2 flooding to those switches that have end systems participating in the same VXLAN segment. Because the VNI is used to segregate all Layer 2 domains via the VXLAN header encapsulation by the VTEPs, and discovery of each VTEP member is dependent on a specific multicast group, it is imperative that the correct mapping of multicast groups to VNI is configured.
Checks: C-73245r1_chk

Review the VXLAN topology as well as documentation for the SDN deployment that identifies each VXLAN segment via VNI and the associated multicast groups. Review the VTEP configuration of all physical VXLAN-enabled switches to verify that the appropriate multicast group is defined for each VNI. If the appropriate multicast group is not configured for each member VNI, this is a finding. Note: This requirement is only applicable to VNIs that must be defined on each VXLAN-enabled switch. In addition, this requirement is applicable to the implementation of technologies similar to VXLAN (e.g., NVGRE, STT) for the purpose of transporting traffic between virtual machines residing on different physical hosts.

Fix: F-79557r1_fix

Configure the appropriate multicast group that is assigned to each VNI on all VXLAN-enabled switches.
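
On a software VTEP, the VNI-to-multicast-group mapping is a single interface definition; hardware VXLAN-enabled switches express the same mapping in vendor CLI. The Linux/iproute2 sketch below is illustrative only, with the VNI, group, underlay interface, and UDP port taken as placeholders that would come from the deployment documentation the check references.

```python
import subprocess

VNI = 5001               # placeholder VXLAN network identifier
GROUP = "239.1.1.1"      # placeholder multicast group assigned to this segment
UNDERLAY_IF = "eth1"     # interface facing the IP transport network

# Create the VXLAN interface; broadcast, unknown unicast, and multicast
# (BUM) traffic for this VNI is sent to the mapped multicast group.
subprocess.run(
    ["ip", "link", "add", f"vxlan{VNI}", "type", "vxlan",
     "id", str(VNI),
     "group", GROUP,
     "dev", UNDERLAY_IF,
     "dstport", "4789"],  # IANA-assigned VXLAN UDP port
    check=True,
)
subprocess.run(["ip", "link", "set", f"vxlan{VNI}", "up"], check=True)
```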

The virtual tunnel endpoint (VTEP) must be dual-homed to two physical network nodes.
CM-6 - Low - CCI-000366 - V-73113 - SV-87765r1_rule
RMF Control
CM-6
Severity
Low
CCI
CCI-000366
Version
NET-SDN-024
Vuln IDs
  • V-73113
Rule IDs
  • SV-87765r1_rule
If uplink connectivity for the VTEP to the Virtual Extensible Local Area Network (VXLAN) transport network fails, traffic to and from the VM servers resident on the affected hypervisor host is dropped. Whether it is a hardware (VXLAN-enabled switch) or software (hypervisor resident) VTEP, dedicating a pair of physical uplinks from the VTEP to two separate network nodes adds high availability and resiliency to the VXLAN implementation. If either an uplink or one of the attached network nodes fails, the VTEP would still have connectivity to the underlying IP network for VXLAN traffic.
Checks: C-73247r1_chk

Review the VXLAN topology and the configuration of all hypervisor hosts and VXLAN-enabled switches to verify that every VTEP is dual-homed to two physical network nodes. If any VTEPs are not dual-homed to two physical network nodes, this is a finding. Note: This requirement is only applicable to VNIs that must be defined on each VXLAN-enabled switch. In addition, this requirement is applicable to the implementation of technologies similar to VXLAN (e.g., NVGRE, STT) for the purpose of transporting traffic between virtual machines residing on different physical hosts.

Fix: F-79559r1_fix

Configure all hypervisor hosts and VXLAN-enabled switches so the VTEP will be dual-homed to two physical network nodes. In the case of the VXLAN-enabled switch, the VTEP will be the loopback interface; hence, dual-homing can be achieved by having two links going upstream to two switches or to two routers. The hypervisor can use network interface card (NIC) teaming for the VTEP interface, with each link connected to an access switch.

A secondary IP address must be specified for the virtual tunnel endpoint (VTEP) loopback interface when Virtual Extensible Local Area Network (VXLAN) enabled switches are deployed as a multi-chassis configuration.
CM-6 - Low - CCI-000366 - V-73115 - SV-87767r1_rule
RMF Control
CM-6
Severity
Low
CCI
CCI-000366
Version
NET-SDN-025
Vuln IDs
  • V-73115
Rule IDs
  • SV-87767r1_rule
A multi-chassis configuration (i.e., vPC domain, MLAG, MCLAG, etc.) can be used to attach a hypervisor host to a pair of VXLAN-enabled switches. For example, a vPC consists of two vPC peer switches connected by a vPC peer link. A vPC domain is formed by the two switches; one switch is primary and the other is secondary. A switch can only be part of one vPC domain, and only two switches can make up a vPC domain. A vPC allows links that are physically connected to two different switches to appear as a single port channel to a third device, which can be another switch or a server that supports Link Aggregation Control Protocol (LACP) as defined in IEEE 802.1AX (formerly IEEE 802.3ad). With vPC deployment, the loopback interface that is acting as the source interface for the VTEP will use the secondary IP address to function as the anycast IP address if the hypervisor host is dual-attached through the vPC. When a host is single-attached (orphan port), the VXLAN-encapsulated traffic will be sent using the loopback’s primary address.
Checks: C-73249r1_chk

Review the VXLAN topology to determine if any hypervisor hosts are dual-homed to two VXLAN-enabled switches deployed as multi-chassis configuration (e.g., vPC domain, MLAG, MCLAG, etc.) to function as a single VTEP. For VXLAN-enabled switches deployed as a multi-chassis configuration, review the configuration to verify that a secondary IP address has been defined for the VTEP loopback interface. If a secondary IP address has not been configured for the VTEP, this is a finding.

Fix: F-79561r1_fix

Configure a secondary IP address for all VTEP loopback interfaces for VXLAN-enabled switches deployed as a multi-chassis configuration to function as a single VTEP for dual-homed attached hypervisor hosts.

Two or more edge gateways must be deployed connecting the network virtualization platform (NVP) and the physical network.
CM-6 - Low - CCI-000366 - V-73117 - SV-87769r1_rule
RMF Control
CM-6
Severity
Low
CCI
CCI-000366
Version
NET-SDN-027
Vuln IDs
  • V-73117
Rule IDs
  • SV-87769r1_rule
An edge gateway is deployed to allow north-south traffic to flow between the virtualized network and the physical network, including destinations outside of the data center or enclave boundaries. The gateway establishes routing adjacencies between the virtual routers and physical routers. The gateway can also filter the north-south traffic to enforce security policies for communication between the physical and virtual workloads. Deploying two or more edge gateways eliminates the risk of a single point of failure, thereby ensuring there is always reachability between virtual machines and the physical network infrastructure and reducing the risk of black-holing north-south traffic.
Checks: C-73251r1_chk

Review the network topology diagram for both the physical infrastructure and the NVP to determine if two or more edge gateways have been deployed between the virtual and physical networks. If two or more edge gateways connecting the NVP and the physical network have not been deployed, this is a finding. Note: This requirement is not applicable if hardware switches are deployed as VTEP devices that also function as gateways between VXLANs and between VXLAN and non-VXLAN infrastructures.

Fix: F-79563r1_fix

Deploy two or more edge gateways connecting the network virtualization platform and the physical network.

Virtual edge gateways must be deployed across multiple hypervisor hosts.
CM-6 - Low - CCI-000366 - V-73119 - SV-87771r1_rule
RMF Control
CM-6
Severity
Low
CCI
CCI-000366
Version
NET-SDN-028
Vuln IDs
  • V-73119
Rule IDs
  • SV-87771r1_rule
An edge gateway is deployed to allow north-south traffic to flow between the virtualized network and the physical network, including destinations outside of the data center or enclave boundaries. The gateway can also filter the north-south traffic to enforce security policies for communication between the physical and virtual workloads. If the edge gateways deployed as virtual machines are resident on the same host, the host becomes a single point of failure for all communication between the virtual workload and the physical network infrastructure. Deploying the edge gateways across multiple hypervisor hosts eliminates the risk of a single point of failure, thereby ensuring there is always reachability between virtual machines and the physical network infrastructure and reducing the risk of black-holing north-south traffic.
Checks: C-73253r1_chk

Review the network virtualization platform topology and the SDN manager to verify that each virtual edge gateway has been deployed across multiple hypervisor hosts. If each virtual edge gateway has not been deployed across multiple hypervisor hosts, this is a finding.

Fix: F-79565r1_fix

Deploy each virtual edge gateway across multiple hypervisor hosts.

The virtual edge gateways must be deployed with routing adjacencies established with two or more physical routers.
CM-6 - Low - CCI-000366 - V-73121 - SV-87773r1_rule
RMF Control
CM-6
Severity
Low
CCI
CCI-000366
Version
NET-SDN-029
Vuln IDs
  • V-73121
Rule IDs
  • SV-87773r1_rule
An edge gateway is deployed to allow north-south traffic to flow between the virtualized network and the physical network, including destinations outside of the data center or enclave boundaries. The gateway establishes routing adjacencies between the virtual routers and physical routers. The gateway can also filter the north-south traffic to enforce security policies for communication between the physical and virtual workloads. Implementing the edge gateway in either active/standby or equal-cost multipath (ECMP) mode ensures there is always a virtual router to forward north-south traffic, provided there is always a routing adjacency with a router in the physical network infrastructure. Having an adjacency with only one physical router creates a single point of failure regardless of the number of links deployed; there would be no connectivity between the virtual and physical workloads if a node failure occurred. Hence, it is imperative that each edge gateway is deployed with connectivity to two physical routers.
Checks: C-73255r1_chk

Review the network topology diagram for both the physical infrastructure and the network virtualization platform (NVP) to determine if the virtual edge gateways have routing adjacencies with two or more physical routers. In addition, verify that the router adjacencies are established by having the administrator enter the appropriate commands that will show the neighbor relationship between the edge gateway and upstream routers. If the virtual edge gateway does not have routing adjacencies established with two or more physical routers, this is a finding.

Fix: F-79567r1_fix

Configure the virtual edge gateways to have routing adjacencies established with two or more physical routers.