VMware NSX-T Data Center transport nodes

Hypervisor transport nodes are hypervisors that are prepared and configured for VMware NSX-T Data Center.

The VMware VDS provides network services to the virtual machines that run on those hypervisors. VMware NSX-T Data Center supports VMware ESXi and KVM hypervisors. The N-VDS implementation for KVM is based on Open vSwitch (OVS) and is platform independent; it can be ported to other hypervisors and serves as the foundation for implementing VMware NSX-T Data Center in other environments, such as cloud and containers. For the VMware NSX-T Data Center on VxBlock Systems design, Dell Technologies Sales Engineers deploy and support only VMware ESXi-based transport nodes. The VMware support organization supports other types of transport nodes.

Transport node topology 1

The VMware NSX-T Data Center for VxBlock System transport node topology 1 implementation uses the following criteria:

  • One VMware VDS uses vmnic0 and vmnic1 for VMware ESXi host functions, including port groups and VMkernel adapters for VMware ESXi management, VMware vSphere vMotion, and NFS.
  • One N-VDS uses vmnic2 and vmnic3 for workload VMs.

    The N-VDS carries East-West traffic and uses TEPs to create an overlay network.

  • Uplink teaming by source port on the N-VDS ensures load balancing.
  • An MTU value of 9000 on these vNICs allows for the overhead of GENEVE tunnel encapsulation.
  • TEP traffic requires VLAN tagging at the uplink profile because there is no VDS in front of the N-VDS. (A sketch of such an uplink profile follows this list.)
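
In NSX-T, the teaming policy, transport VLAN, and MTU from the preceding list are carried together in an uplink profile. The following is a minimal sketch that creates such a profile through the NSX-T management-plane REST API (POST /api/v1/host-switch-profiles); the manager address, credentials, VLAN ID, and uplink names are hypothetical placeholders, not VxBlock defaults. As a rough sizing check, GENEVE encapsulation adds about 54 bytes of outer headers (14-byte Ethernet + 4-byte 802.1Q + 20-byte IPv4 + 8-byte UDP + 8-byte GENEVE base, plus any options), so a 9000-byte MTU leaves ample headroom.

```python
# Minimal sketch: create an NSX-T uplink profile matching the criteria above.
# Manager FQDN, credentials, VLAN ID, and uplink names are hypothetical.
import requests

NSX_MANAGER = "nsx-mgr.example.local"   # hypothetical NSX-T Manager
AUTH = ("admin", "changeme")            # placeholder credentials

profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "tn-topology1-uplink-profile",
    "mtu": 9000,              # room for GENEVE encapsulation overhead
    "transport_vlan": 150,    # hypothetical TEP VLAN, tagged at the profile
    "teaming": {
        "policy": "LOADBALANCE_SRCID",  # source-port load balancing
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},  # vmnic2
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},  # vmnic3
        ],
    },
}

resp = requests.post(
    f"https://{NSX_MANAGER}/api/v1/host-switch-profiles",
    json=profile,
    auth=AUTH,
    verify=False,  # lab-only; use a trusted certificate in production
)
resp.raise_for_status()
print("Created uplink profile:", resp.json()["id"])
```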

The use of the VMware VDS and N-VDS separates the VMware ESXi host functions from VMware NSX-T Data Center traffic. This separation also makes troubleshooting and recovering a host easier if a failure occurs. The following figure illustrates the topology of this design:

In this design, the transport nodes are connected to FI A and FI B through all four vmnics.

The VLANs are added to the vNIC templates as follows:
  • The VLANs for Overlay and VLAN-backed segments are added to the templates for vNICs 2 and 3.
  • The VLANs for Management, vMotion, and NFS traffic are added to the templates for vNICs 0 and 1.
  • Non-NSX customer VLANs are not deployed on NSX transport nodes. However, participation in NSX can be defined at the cluster level, so some clusters carry non-NSX customer VLANs while others carry NSX VLAN-backed and overlay-backed segments. (The sketch after this list illustrates the resulting VLAN-to-template mapping.)
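
To make the mapping concrete, the sketch below models the VLAN-to-vNIC-template assignment as data; all VLAN IDs, names, and the helper function are hypothetical and purely illustrative, not VxBlock defaults.

```python
# Illustrative model of the topology 1 VLAN-to-vNIC-template mapping.
# All VLAN IDs and names are hypothetical placeholders.
VNIC_TEMPLATE_VLANS = {
    "vnic-0-1": {  # VMware VDS: ESXi host functions
        "esxi-mgmt": 100,
        "vmotion": 101,
        "nfs": 102,
    },
    "vnic-2-3": {  # N-VDS: NSX overlay and VLAN-backed segments
        "overlay-tep": 150,
        "vlan-segment-a": 200,
        "vlan-segment-b": 201,
    },
}

def trunked_vlans(template: str) -> list[int]:
    """Return the sorted VLAN IDs that a vNIC template must carry."""
    return sorted(VNIC_TEMPLATE_VLANS[template].values())

assert trunked_vlans("vnic-2-3") == [150, 200, 201]
```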

Transport node topology 2

The VMware NSX-T Data Center for VxBlock System transport node topology 2 implementation uses the following criteria:

  • In VMware vSphere 7.0, all port groups and VLANs reside on the VDS, including ESXi-mgmt, VMware vSphere vMotion, Uplink1, Uplink2, and Overlay.
  • Uplink teaming by source port on the VDS ensures load balancing.
  • An MTU value of 9000 on these vNICs allows for the overhead of GENEVE tunnel encapsulation.
  • TEP traffic requires VLAN tagging at the uplink profile. (A VDS configuration sketch follows this list.)
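
As an illustration of the VDS side of this topology, the pyVmomi sketch below creates a VDS with four uplinks and an MTU of 9000; the vCenter address, credentials, and object names are hypothetical placeholders, and per-port-group teaming configuration is omitted for brevity.

```python
# Minimal pyVmomi sketch: create a VDS with four uplinks and MTU 9000.
# vCenter address, credentials, and names are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only; validate certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Simplified lookup: first datacenter under the root folder.
    dc = next(e for e in content.rootFolder.childEntity
              if isinstance(e, vim.Datacenter))

    cfg = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    cfg.name = "vds-transport"  # hypothetical switch name
    cfg.maxMtu = 9000           # headroom for GENEVE encapsulation
    cfg.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
        uplinkPortName=["uplink1", "uplink2", "uplink3", "uplink4"])  # vmnic0-3

    spec = vim.DistributedVirtualSwitch.CreateSpec(configSpec=cfg)
    task = dc.networkFolder.CreateDVS_Task(spec)  # returns a vCenter Task
    print("Create task started:", task.info.key)
finally:
    Disconnect(si)
```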

The following figure illustrates the topology that is used for VMware vSphere 7.0 on the transport nodes:

In this design, the transport nodes connect to FI A and FI B through all four vmnics. The four-vNIC design aligns with the VxBlock System standard configuration for a VMware vSphere 7.0 compute host, allowing standardization of service profile templates across hosts with single and dual VIC configurations.

The overlay VLAN is added to the vNIC templates for vNICs 0, 1, 2, and 3, and to the trunk ports on the Cisco Nexus 9000 Series Switches (9K-A and 9K-B) that connect to the FIs for the overlay network. A scripted example of the switch-side change follows.
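
For the switch side, the sketch below uses the netmiko library to add the overlay VLAN to the FI-facing trunk ports on one of the Nexus switches; the hostname, credentials, interface names, and VLAN ID are hypothetical placeholders, and the same change would be applied to the second switch (9K-B).

```python
# Hedged sketch: trunk the overlay VLAN on the Nexus 9K ports facing the FIs.
# Hostname, credentials, interfaces, and VLAN ID are hypothetical.
from netmiko import ConnectHandler

OVERLAY_VLAN = 150                                 # hypothetical overlay VLAN
FI_FACING_PORTS = ["Ethernet1/1", "Ethernet1/2"]   # hypothetical uplinks to FI A/B

switch = ConnectHandler(
    device_type="cisco_nxos",
    host="9k-a.example.local",   # repeat the change on 9k-b
    username="admin",
    password="changeme",
)
for port in FI_FACING_PORTS:
    switch.send_config_set([
        f"interface {port}",
        "switchport mode trunk",
        f"switchport trunk allowed vlan add {OVERLAY_VLAN}",
    ])
switch.disconnect()
```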