Introduction & vSphere Requirements For NSX

Posted on Jan 17, 2020


To understand the actual benefits of NSX, let's first see what a traditional DC looks like.

Sometimes we also deploy the Collapsed Access layer topology, where the ToR switch becomes the Layer 3 switch, providing Layer 2 Ethernet connectivity to the workloads as well as default gateway services for them.

All ports configured as Layer 2 have STP functions disabled and are set to the forwarding state.

Designers also sometimes build the DC network as a Spine and Leaf architecture to provide equal-cost multipath support to DC workloads while removing the complexities of STP.

After going through this series of design changes, VMware provided the NSX solution, which virtualizes the access layer. To virtualize the access layer there are two requirements:

  • IPv4 connectivity among the ESXi servers.
  • Jumbo frame support, because with NSX the size of the Ethernet frame increases.
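As a quick sanity check of these two requirements, the sketch below uses pyVmomi to list every VMkernel interface with its IP address and configured MTU. The vCenter address and credentials are hypothetical placeholders, and the 1600-byte threshold is simply the commonly quoted minimum MTU for NSX (VXLAN) transport traffic:

# Sketch: list each host's VMkernel interfaces with their IP address and MTU.
# vCenter address and credentials below are placeholders for a lab environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    for vnic in host.config.network.vnic:   # VMkernel interfaces (vmk0, vmk1, ...)
        status = "OK" if vnic.spec.mtu >= 1600 else "too small for NSX transport"
        print(host.name, vnic.device, vnic.spec.ip.ipAddress, vnic.spec.mtu, status)

Disconnect(si)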

The datacenter diagram below shows how NSX fits into the access layer, where the ToR switches are L3 switches connected to the distribution layer switches via L3 links. NSX allows Layer 2 domains to extend across multiple racks separated by Layer 3 boundaries.

NSX & vSphere:

Before NSX is deployed, the following VMware vSphere components must be in place:

  • vCenter: NSX talks to vCenter to get access to the vSphere infrastructure and to push NSX-related configuration to the vSphere components.
  • ESXi Hosts & Clusters: NSX also installs some of its components in the kernel of the ESXi host so that it can provide NSX services to virtual machines. Each ESXi host must be a member of a cluster; NSX communication fails if an ESXi host is not in a cluster.
  • vSphere Standard Switch: ESXi hosts that participate in NSX, and NSX Edge, may need to talk to a vSphere Standard Switch; otherwise NSX does not require the vSphere Standard Switch.
  • vSphere Distributed Switch: In order to virtualize the access layer, all ESXi hosts in a cluster must be part of the same vSphere Distributed Switch, as illustrated in the sketch below.
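To illustrate that last point, the following sketch (same hypothetical vCenter connection as in the earlier example) walks every cluster and prints which vDS each host is attached to, so a host that is missing from the common vDS stands out immediately:

# Sketch: check that all hosts in each cluster participate in the same vDS.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
clusters = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True).view

for cluster in clusters:
    print("Cluster:", cluster.name)
    for host in cluster.host:
        # proxySwitch lists the vDS instances this host has joined
        dvs_names = [p.dvsName for p in host.config.network.proxySwitch]
        print("  ", host.name, "->", dvs_names if dvs_names else "NOT on any vDS")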

ESXi Network Connectivity:

As soon as the ESXi hypervisor is installed on bare metal, the ESXi operating system creates a logical interface called the VMkernel interface, on which the IP address of the management network is configured. This VMkernel interface is just like a logical SVI that gets an IP address and subnet mask.

An ESXi host can have multiple VMkernel ports that are used for different purposes, and each gets its own IP address, either from different subnets or from the same subnet. An ESXi host may have separate VMkernel interfaces to provide the following services:

  • ESXi host management
  • vMotion
  • IP Storage
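The sketch below (again with a placeholder vCenter connection) reads the host's virtual NIC manager information and prints which VMkernel interface is currently selected for each of these services:

# Sketch: show which VMkernel interface is enabled for each service type.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    print(host.name)
    for cfg in host.configManager.virtualNicManager.info.netConfig:
        # nicType is e.g. "management" or "vmotion"; selectedVnic holds the
        # VMkernel ports currently enabled for that service
        if cfg.selectedVnic:
            print("  ", cfg.nicType, "->", list(cfg.selectedVnic))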

These VMkernel ports are further connected to a virtual switch, which in turn has a path to the physical network. The physical NICs of the physical servers are called VMNICs, and these VMNICs are connected to the physical network via Layer 2 connections. Inside the ESXi host, the VMNICs are the uplinks of the virtual switch. A VMNIC can be assigned to only a single virtual switch, and the virtual switch that owns the VMNIC makes the decision about which VMNIC will be used for egress traffic from the ESXi host.
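To make the VMkernel-port-to-VMNIC path concrete, this sketch (same placeholder connection) prints, for each standard virtual switch on each host, the VMNICs that the switch owns as uplinks:

# Sketch: map each standard virtual switch to the VMNICs it owns as uplinks.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    # translate physical NIC keys into device names such as "vmnic0"
    pnic_by_key = {p.key: p.device for p in host.config.network.pnic}
    for vsw in host.config.network.vswitch:
        uplinks = [pnic_by_key.get(key, key) for key in vsw.pnic]
        print(host.name, vsw.name, "uplinks:", uplinks)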

The figure below illustrates what we have discussed so far:

vSphere Standard Switch:

As soon as ESXi is installed on the bare-metal server, a vSphere Standard Switch (vSS) is installed by default and runs in the kernel. Each ESXi host manages its own vSS and has ownership of its configuration. An ESXi host can have multiple vSS instances, and each virtual machine connects its vNIC to vSS virtual ports via a particular port group.

A port group is just like a VLAN, and multiple virtual machines can reside in a single port group. It is a logical grouping of ports. Each VMkernel port can be assigned to only one port group.
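As an illustration, the sketch below creates a port group on a host's vSS through the host's network system API. The host name, switch name, port group name, and VLAN ID are all hypothetical:

# Sketch: create a port group "Web-Tier" on VLAN 10 on vSwitch0 of one host.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(dnsName="esxi01.lab.local", vmSearch=False)

pg_spec = vim.host.PortGroup.Specification(
    name="Web-Tier",
    vlanId=10,                         # 0 = untagged, 1-4094 = tagged, 4095 = all VLANs
    vswitchName="vSwitch0",
    policy=vim.host.NetworkPolicy())   # inherit the vSwitch-level policies

host.configManager.networkSystem.AddPortGroup(portgrp=pg_spec)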

The figure below is a logical representation of the above concepts:

The vSS is a non-MAC learning switch that does not keep a traditional MAC table. It only knows about the MAC addresses configured in the vmx file of the virtual machines connected to the vSS or the VMkernel port’s MAC address. The vmx file is the instruction set that tells the ESXi host the configuration and features that need to be provided to the virtual machine when it powers on.
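Those MAC addresses can be read back through the API, since vCenter exposes what is recorded in the vmx file as part of the VM's virtual hardware. A sketch for a hypothetical VM named "web01":

# Sketch: list the vNIC MAC addresses recorded in a VM's configuration
# (the values that end up in its .vmx file). "web01" is a placeholder name.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view
vm = next(v for v in vms if v.name == "web01")

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualEthernetCard):
        # addressType is "generated", "manual" or "assigned"
        print(dev.deviceInfo.label, dev.macAddress, dev.addressType)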

If you select VLAN ID 4095 in any port group in the vSS, the vSS automatically enables 802.1Q trunking on the uplinks.

Another configuration that can be done in the vSS is load balancing. The load balancing configuration tells the vSS how to decide which uplink port to use to send BUM traffic to the physical network. An uplink port in the vSS maps to a single VMNIC. If the vSS has only a single uplink port, there is not much of a decision to make.
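The teaming policy currently in effect can be read back per vSwitch. In the sketch below the policy strings are the ones the vSphere API uses, for example "loadbalance_srcid" for route based on the originating virtual port:

# Sketch: print the NIC teaming (load balancing) policy of each vSS.
# Typical values: loadbalance_srcid, loadbalance_srcmac, loadbalance_ip,
# failover_explicit.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    for vsw in host.config.network.vswitch:
        policy = vsw.spec.policy
        teaming = policy.nicTeaming.policy if policy and policy.nicTeaming else None
        print(host.name, vsw.name, "teaming policy:", teaming)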

Broadcast, Unknown Unicast, and Multicast (BUM) traffic is also sent to all virtual machines in the same VLAN in the vSS.

vSphere Distributed Switches:

A vSphere Distributed Switch (vDS) is a feature-rich switch that is managed by vCenter and spans multiple ESXi hosts. vCenter configures the vDS and pushes a copy of the configuration down to each ESXi host that has been added to the vDS. A vCenter supports 128 vDS instances, and an ESXi host can be part of multiple vDS instances. If we want NSX to virtualize the access layer, it is important to deploy a vDS.

The vDS virtual ports are called distributed ports (dvPorts), and all virtual machines connect to dvPort groups. VMkernel interfaces may not need separate port groups; instead, the VMkernel interfaces and the VM vNICs can share the same port groups.

Uplink ports are called dvUplinks, and they are set up differently in the vDS. Each dvUplink connects to a single VMNIC in an ESXi host, but from vCenter multiple VMNICs can be added to the vDS.

When vCenter pushes the vDS configuration to the ESXi hosts, it only tells each ESXi host about the association of its own VMNICs with the dvUplinks.
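Putting the last two points together, the following sketch (placeholder connection again) shows, for each host joined to a vDS, which VMNIC is bound to which dvUplink port:

# Sketch: show the VMNIC-to-dvUplink association pushed down to each host.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    for proxy in host.config.network.proxySwitch:       # one entry per vDS joined
        print(host.name, "on vDS", proxy.dvsName)
        backing = proxy.spec.backing
        for pnic_spec in getattr(backing, "pnicSpec", []) or []:
            print("  ", pnic_spec.pnicDevice, "-> dvUplink port", pnic_spec.uplinkPortKey)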

