The NSX Controller handles the following control planes:
- Layer 2 control plane for logical switches and the distributed logical router control VM.
- Layer 3 control plane.
For the L2 control plane, the NSX Controller keeps the principal copy of three tables per logical switch:
- VTEP table: lists every VTEP that has at least one VM connected to the logical switch. Rule: one VTEP table per logical switch.
- MAC table: contains the MAC addresses of VMs connected to the logical switch, as well as the MAC addresses of physical end systems in the same broadcast domain.
- ARP table: contains the ARP entries of VMs connected to the logical switch, as well as the ARP entries of physical end systems in the same broadcast domain.
For Layer 3, the NSX Controller maintains the routing table for each distributed logical router, along with the list of all hosts running a copy of each distributed logical router.
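The three per-logical-switch tables can be sketched as plain data structures. This is an illustrative model, not NSX code; all names and values are made up for the example.

```python
# Illustrative sketch (not NSX code) of the per-logical-switch tables
# the controller maintains.
from dataclasses import dataclass, field

@dataclass
class LogicalSwitchTables:
    """One set of control-plane tables per logical switch (VNI)."""
    vni: int
    # VTEP table: VTEP IP -> host that has at least one VM on this switch
    vtep_table: dict = field(default_factory=dict)
    # MAC table: VM/physical MAC -> VTEP IP behind which it lives
    mac_table: dict = field(default_factory=dict)
    # ARP table: VM/physical IP -> MAC address
    arp_table: dict = field(default_factory=dict)

    def learn_vm(self, vm_ip, vm_mac, vtep_ip, host_id):
        """Record a VM appearing behind a given VTEP."""
        self.vtep_table[vtep_ip] = host_id
        self.mac_table[vm_mac] = vtep_ip
        self.arp_table[vm_ip] = vm_mac

tables = LogicalSwitchTables(vni=5001)
tables.learn_vm("10.0.0.5", "00:50:56:aa:bb:cc", "192.168.10.1", "esxi-01")
```

The point of the model is that all three tables are keyed per logical switch: learning one VM populates the VTEP, MAC, and ARP tables of that switch only.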
Deploying NSX Controller:
The NSX Controller is a virtual appliance deployed through NSX Manager. It must be deployed in the same vCenter with which the NSX Manager is associated. For redundancy, there should be three NSX Controllers per standalone NSX Manager.
NSX Controllers can be deployed on separate ESXi hosts, provided the following conditions are met:
- There is IP connectivity between each NSX Controller and NSX Manager over TCP 443.
- All three NSX Controllers have IP connectivity to each other over TCP 443.
- Each NSX Controller has IP connectivity to the VMkernel interface of every ESXi host over TCP 1234.
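The connectivity requirements above are simple TCP reachability checks, so they can be pre-verified with a short script. The hostnames here are placeholders for your own environment.

```python
# Sketch: verify the TCP reachability requirements listed above
# (controller <-> manager and controller <-> controller on 443,
# controller -> host VMkernel on 1234). Hostnames are placeholders.
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example checks (placeholder addresses):
# tcp_reachable("nsx-manager.lab.local", 443)
# tcp_reachable("esxi-01.lab.local", 1234)
```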
The controllers form a cluster automatically: the first NSX Controller deployed creates the NSX Controller cluster by itself, and each subsequently deployed controller joins that same cluster.
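In NSX-V, controller deployment goes through the NSX Manager REST API (a POST to `/api/2.0/vdn/controller`). The sketch below only builds a request body; the element names are a from-memory approximation and should be checked against the NSX API guide, and all IDs are placeholders.

```python
# Hedged sketch of an NSX-V controller deployment payload. Element names
# are approximations -- verify against the NSX vSphere API guide.

def controller_spec_xml(name, ip_pool_id, resource_pool_id, datastore_id,
                        network_id, password):
    """Build an XML body for a controller deployment request (all IDs placeholders)."""
    return (
        "<controllerSpec>"
        f"<name>{name}</name>"
        f"<ipPoolId>{ip_pool_id}</ipPoolId>"
        f"<resourcePoolId>{resource_pool_id}</resourcePoolId>"
        f"<datastoreId>{datastore_id}</datastoreId>"
        f"<networkId>{network_id}</networkId>"
        f"<password>{password}</password>"
        "</controllerSpec>"
    )

body = controller_spec_xml("controller-1", "ipaddresspool-1", "resgroup-1",
                           "datastore-1", "dvportgroup-1", "VMware1!VMware1!")
# The request itself would then be, roughly:
# requests.post("https://nsx-mgr/api/2.0/vdn/controller", data=body,
#               auth=("admin", "..."), headers={"Content-Type": "application/xml"})
```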
Prerequisites for NSX Controller:
- 4 vCPUs
- 4 GB vRAM with 2 GB reservation
- 20 GB HDD
- VM hardware version 10
NSX Controller Master & Recovery:
The L2 and L3 control planes are shared among all NSX Controllers. To determine which portion of the L2/L3 control plane each NSX Controller handles, the NSX Controller cluster elects an API provider master and L2/L3 NSX Controller masters.
API provider: receives the internal NSX API calls from NSX Manager.
L2 NSX master: assigns L2 control plane responsibility, on a per-logical-switch basis, to each NSX Controller in the cluster, including the master itself.
L3 NSX master: assigns L3 control plane responsibility, on a per-logical-router basis, to each NSX Controller in the cluster, including the master itself.
This distribution of L2 and L3 control plane responsibility across the NSX Controller cluster is called slicing. Once the responsibilities have been distributed, the master sends the distribution list to every NSX Controller, so that each controller in the cluster knows what the other controllers are responsible for: for each slice, one controller acts as master while the other two serve as backups.
If an NSX Controller goes down or stops responding, the L2 and L3 NSX Controller masters split the failed controller's responsibilities (logical switches and distributed logical routers) among the remaining live NSX Controllers.
If an NSX master itself fails, a new master is elected from the remaining controllers, and the newly elected master starts recovering the control plane of the affected logical switches or distributed logical routers.
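Slicing and failover redistribution can be illustrated with a toy assignment function. The round-robin scheme here is an assumption for illustration only; NSX's actual slicing algorithm is internal and not documented at this level.

```python
# Illustrative sketch of "slicing": the elected master assigns each logical
# switch/router to one controller and re-distributes slices on failure.
# Round-robin is an assumed placeholder for NSX's real algorithm.

def slice_responsibility(objects, controllers):
    """Assign each object (logical switch/router) to a controller, round-robin."""
    return {obj: controllers[i % len(controllers)]
            for i, obj in enumerate(objects)}

def redistribute_on_failure(assignment, failed, survivors):
    """Move the failed controller's slices onto the surviving controllers."""
    orphans = [obj for obj, ctrl in assignment.items() if ctrl == failed]
    new = {obj: ctrl for obj, ctrl in assignment.items() if ctrl != failed}
    new.update(slice_responsibility(orphans, survivors))
    return new

switches = [f"vni-{5000 + i}" for i in range(6)]
plan = slice_responsibility(switches, ["ctrl-1", "ctrl-2", "ctrl-3"])
plan = redistribute_on_failure(plan, "ctrl-2", ["ctrl-1", "ctrl-3"])
# After the failure, every logical switch is owned by ctrl-1 or ctrl-3.
```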
When deploying NSX, IP pools must be configured; these pools are responsible for the following:
- Supplying IP addresses to the NSX Controllers.
- Supplying IP addresses to ESXi hosts during NSX host preparation.
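Conceptually, the IP pool is just an allocator that hands out addresses from a configured range to controllers and prepared hosts. A minimal sketch, with a made-up range:

```python
# Illustrative sketch of an IP pool: sequential allocation from a
# configured start-end range (addresses here are placeholders).
import ipaddress

class IPPool:
    def __init__(self, start, end):
        s = int(ipaddress.IPv4Address(start))
        e = int(ipaddress.IPv4Address(end))
        self._free = [str(ipaddress.IPv4Address(i)) for i in range(s, e + 1)]
        self.allocated = {}

    def allocate(self, consumer):
        """Hand the next free address to a controller or ESXi host."""
        ip = self._free.pop(0)
        self.allocated[consumer] = ip
        return ip

pool = IPPool("192.168.110.201", "192.168.110.210")
pool.allocate("controller-1")  # first controller gets 192.168.110.201
```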
Host preparation is performed in the Host Preparation tab and involves the following steps:
- Deploy the virtual network and security services for the NSX Controllers.
- Install the NSX vSphere Installation Bundles (VIBs) on each ESXi host that will be part of the NSX domain. These VIBs give the ESXi host its NSX data plane and kernel security capabilities. Alternatively, the VIBs can be included in a custom image built with vSphere ESXi Image Builder.
In the Host Preparation tab, all ESXi host clusters configured in vCenter are listed. When you click Install, NSX Manager pushes the VIBs to every ESXi host in the selected cluster.
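After the push completes, you can confirm the VIBs on a host by inspecting the output of `esxcli software vib list`. The VIB names below (esx-vxlan, esx-vsip) are typical of NSX-V 6.x releases but vary by version, so treat them as assumptions; the parsing sketch just checks whether they appear in the listing.

```python
# Hedged sketch: check for NSX VIBs in `esxcli software vib list` output.
# Expected VIB names are an assumption (typical of NSX-V 6.x).

NSX_VIBS = {"esx-vxlan", "esx-vsip"}

def installed_nsx_vibs(esxcli_output):
    """Return which expected NSX VIBs appear in the esxcli listing."""
    names = {line.split()[0]
             for line in esxcli_output.splitlines() if line.strip()}
    return NSX_VIBS & names

sample = """esx-vxlan   6.0.0-0.0.2107100   VMware
esx-vsip    6.0.0-0.0.2107100   VMware
net-vmxnet3 1.1.3.0-3vmw        VMware"""
installed_nsx_vibs(sample)  # both NSX VIBs found in this sample
```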
Benefits of VIBs:
When the VIBs are installed on an ESXi host, the following modules are added, giving the host the corresponding (largely self-explanatory) capabilities:
- VXLAN Module
- Switch Security Module
- Routing Module
- Distributed Firewall
Host configuration will be covered in the NSX Premium labs section.
VNI Pools, Multicast Pools and Transport Zones:
VNI pools and multicast pools must be defined for NSX Manager to use, both locally and for Cross-vCenter NSX. The local VNI pool and the universal VNI pool must not overlap; likewise, the local multicast group range and the universal multicast group range must not overlap.
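The non-overlap rule is easy to validate with an interval check. The pool ranges below are made-up examples, not recommended values.

```python
# Illustrative sketch: validate that local and universal VNI pools
# (or multicast ranges) do not overlap. Ranges are placeholder examples.

def ranges_overlap(a, b):
    """True if inclusive ranges a=(start, end) and b=(start, end) intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

local_vni = (5000, 5999)
universal_vni = (900000, 909999)
ranges_overlap(local_vni, universal_vni)  # False -> valid configuration
```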
Global transport zone: a group of ESXi host clusters, under the same NSX domain, that are local to one vCenter.
Universal transport zone: a group of ESXi host clusters, under the same NSX domain, that are part of a Cross-vCenter NSX domain.
An ESXi host can belong to any number of global transport zones but to only one universal transport zone.
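That membership rule can be expressed as a small validation check. The transport zone names are placeholders.

```python
# Illustrative sketch of the membership rule: a host may join many global
# transport zones but at most one universal transport zone.

def validate_membership(global_tzs, universal_tzs):
    """Raise if a host is assigned more than one universal transport zone."""
    if len(universal_tzs) > 1:
        raise ValueError("a host may belong to only one universal transport zone")
    return {"global": list(global_tzs), "universal": list(universal_tzs)}

validate_membership(["tz-global-1", "tz-global-2"], ["tz-universal-1"])  # valid
```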