Nexus 5500 & vPC

Posted on Jan 24, 2020

The Cisco Nexus 5000 Series switches consist of the following models, grouped by generation:

Generation 1: The Nexus 5000 switches, which include the 5010 and 5020 models; both are now end-of-sale.

Generation 2: The Nexus 5500 switches, currently deployed as datacenter access-layer switches along with the Cisco Fabric Extender (FEX), which we will discuss a little later.

The Generation 2 Nexus 5500 family typically includes:

  • Nexus 5548P
  • Nexus 5548UP
  • Nexus 5596UP
  • Nexus 5596T

These switches have the following capabilities:

  • Up to 1152 ports in a single management domain using Cisco FEX architecture
  • Up to 96 unified ports
  • Layer 3 capability, added by inserting a separate Layer 3 module into the expansion slot

The diagram below highlights the first- and second-generation datacenter access switches:

Of the models above, the Nexus 5548P is now end-of-sale. Models with a UP suffix have unified ports, meaning they can carry Ethernet as well as storage (FC and FCoE) traffic.

Next-Generation Datacenter 5600 Switches:

  • Nexus 5600 switches are deployed to meet the needs of virtualized environments and cloud deployments.
  • They offer up to 2304 ports in a single management domain with Cisco FEX.
  • Capable of carrying FC, FCoE, and Ethernet traffic
  • Support VXLAN traffic and deployments
  • Provide integrated Layer 3 capability
  • Support Gigabit Ethernet, 10 Gigabit Ethernet, 40 Gigabit Ethernet, and 100 Gigabit Ethernet; native Fibre Channel; and Fibre Channel over Ethernet

Models in this portfolio include:

  • Cisco Nexus 56128P Switch
  • Cisco Nexus 5696Q Switch
  • Cisco Nexus 5672UP Switch
  • Cisco Nexus 5672UP-16G Switch
  • Cisco Nexus 5648Q Switch
  • Cisco Nexus 5624Q Switch

Nexus 5500 Switch Architecture

Below is the hardware description of the Nexus 5548P, 5548UP, and 5596UP models. The figure below outlines the hardware capabilities of the Nexus 5500 Series; each switch has one or three expansion slots, used to install additional expansion modules if more ports are required in the future.

Below is a pictorial view of the types of expansion slots available for the Nexus 5500 switch models:

Below is a short overview of the Nexus 5500 hardware:

The Nexus 5000/5500 uses a distributed forwarding architecture built around the Unified Port Controller (UPC). Each UPC connects eight physical ports and provides eight virtual queues per port for QoS and queuing purposes. All port-to-port traffic passes through the UPCs, which connect to a crossbar fabric that interconnects all the UPCs in the switch.

In the Nexus 5500 switches, the internal hardware consists of the following components:

  • A supervisor subsystem
  • A unified crossbar fabric
  • A set of unified port controllers

The VOQ (virtual output queue) function of the unified port controller in the 5500 models is discussed in the Nexus 5500 QoS section.

Ethernet Packet Forwarding on Nexus 5500:

The following steps are used to forward a packet through a Nexus 5500:

  1. When a packet arrives at an ingress port, the ingress UPC performs MAC decoding and byte synchronization.
  2. The Ethernet frame is then passed to the forwarding logic on the ingress UPC to determine the egress interface.
  3. Based on the QoS features and configuration, the frame is stored in an ingress queue, and a request is raised to the scheduler to transit the crossbar fabric.
  4. Once the scheduler grants permission, the frame crosses the fabric and reaches the egress UPC.
  5. On the egress UPC, the frame is stored in an egress queue and waits for its turn to transit to the physical port.
  6. Based on the QoS configuration and frame characteristics, the frame is dequeued from the egress queue and sent to the physical port, and finally toward its destination.

Nexus 5000/5500 Cut-Through Switching:

Nexus 5000/5500 switches use both cut-through and store-and-forward packet forwarding. Which method is used depends on the ingress and egress interface types.

The diagram below briefly explains both behaviors in the context of ingress and egress ports.

Nexus 5500 with L3 Module:

When an L3 module is installed in a Nexus 5500 switch, the crossbar fabric connects to the Layer 3 engine through two second-generation UPCs using a 16 x 10G internal port channel (iPC). Traffic is shared across these 16 internal iPorts. When using the L3 module, it is recommended to configure an L2/L3/L4 load-balancing method on the Nexus 5500 switch.

The figure below gives an idea of the iPC and iPorts.
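The recommended L2/L3/L4 load balancing mentioned above is set globally on the switch; a minimal sketch (the exact keyword set varies by NX-OS release, so check your platform's options):

```
! Hash on source/destination L4 ports (together with L2/L3 fields)
! so traffic spreads evenly across the 16 internal iPorts
port-channel load-balance ethernet source-dest-port
```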

Next Generation Datacenter 5600 Switch Architecture:

Below is the Nexus 5600 switch architecture.

The Nexus 5600 architecture also contains a Unified Port Controller (UPC).

Unified Port Controller

  • Supports a multimode MAC for 1/10/40G
  • Performs packet parsing and rewriting
  • Acts as the lookup engine for L2/L3/VXLAN/FabricPath/ACL/FCoE, etc.
  • Handles buffer management, queuing, and multicast support

Unified Crossbar Fabric

  • The crossbar fabric can operate in 10GE- or 40GE-optimized mode.
  • It provides a lossless fabric.

The following figure explains how the crossbar fabric connects the ingress and egress UPCs.

The figure below gives an overview of the control plane and data plane in the Nexus 5600 supervisor block.

Several expansion modules are available for the Nexus 5600 Series, each with different capabilities and features. Refer to the Cisco data sheets for details:

  • N5696-M12Q
  • N5696-M20UP
  • N5696-M4C

Nexus 2000 FEX Extender:

The Nexus 2000 Fabric Extender has the following features:

  • Provides connectivity to rack and blade servers in converged fabric deployments
  • Provides 10 Gigabit capability to access-layer servers
  • Acts as a remote line card and is always managed by its parent switch, such as a Nexus 5000, 5600, or 7000 Series switch
  • Is ideal for both LAN and SAN deployments, as well as for UCS Fabric Interconnects
  • Supports ToR and EoR deployments

For the third-generation datacenter switches, Cisco produced the Nexus 2300 Series. The following models were the most commonly used at the time of writing:

  • Cisco Nexus 2348TQ 10GE Fabric Extender
  • Cisco Nexus 2348TQ-E 10GE Fabric Extender
  • Cisco Nexus 2348UPQ 10GE Fabric Extender
  • Cisco Nexus 2332TQ 10GE Fabric Extender
  • Cisco Nexus 2248TP-E Fabric Extender
  • Cisco Nexus 2232PP 10GE Fabric Extender
  • Cisco Nexus 2232TM-E 10GE Fabric Extender
  • Cisco Nexus B22 Blade Fabric Extender

The following Nexus models are now end-of-sale:

  • Cisco Nexus 2248PQ 10GE Fabric Extender
  • Cisco Nexus 2248TP GE Fabric Extender
  • Cisco Nexus 2232TM 10GE Fabric Extender
  • Cisco Nexus 2224TP GE Fabric Extender
  • Cisco Nexus 2148T Fabric Extender

Fabric Extender Interfaces:

The following FEX interfaces are used for connectivity:

Fabric Interface (FIF): A single individual interface or port channel used to connect the FEX to the parent switch. At the time of writing, depending on the model, each FEX has 4 QSFP+ ports or 4/6/8 fabric ports, connected via Twinax, SFP+, or QSFP+ cabling.

Host Interface (HIF): The 8, 24, 32, or 48 Ethernet ports used to connect hosts such as servers, network, and security devices via SFP or SFP+. These interfaces can be used individually or as part of a port channel.

Logical Interface (LIF): A data structure that emulates an Ethernet interface in the parent switch, carrying properties such as VLAN membership, ACL labels, STP state, etc.

Virtual Interface (VIF): A logical entity inside the FEX that receives its configuration from the parent switch and is used to map frames to the switch LIF.
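To make the LIF concept concrete: FEX host interfaces appear on the parent switch as ordinary Ethernet interfaces whose first number is the FEX ID, and they are configured on the parent switch, never on the FEX itself. A minimal sketch (FEX ID 100 and VLAN 10 are illustrative values):

```
! HIF 1 of FEX 100, seen and configured on the parent switch as a LIF
interface ethernet 100/1/1
  switchport mode access
  switchport access vlan 10
```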

The mapping between a parent-switch LIF and a Fabric Extender VIF is called a Virtual Network Link (VN-Link). This mapping is defined by a special tag, inserted into every Ethernet frame that traverses these physical links, called the Virtual Network Tag (VN-Tag). The main objective of this tag is to differentiate frames received from (or to be sent to) different FEX host interfaces.

Below is a figure of the VN-Tag used for this purpose.

The VNTag is inserted between the source MAC address and the IEEE 802.1Q field of the original Ethernet frame. The VNTag fields are:

  • Ethertype: This field identifies a VNTag frame. IEEE reserved the value 0x8926 for Cisco VNTag.
  • Direction bit (d): A 0 indicates that the frame is traveling from the FEX to the parent switch. A 1 means that the frame is traveling from the parent switch to the FEX.
  • Pointer bit (p): A 1 indicates that a Vif_list_id is included in the tag. A 0 signals that a Dvif_id is included in the frame.
  • Virtual Interface List Identifier (Vif_list_id): This is a 14-bit value mapped to a list of host interfaces to which this frame must be forwarded.
  • Destination Virtual Interface Identifier (Dvif_id): This is a 12-bit value mapped to a single host interface to which an Ethernet frame will be forwarded.
  • Looped bit (l): This field indicates a multicast frame that was forwarded out the switch port and later received. In this case, the FEX checks the Svif_id and filters the frame from the corresponding port.
  • Reserved bit (r): This bit is reserved for future use.
  • Version (ver): This value is currently set to 0. It represents the version of the tag.
  • Source Virtual Interface Identifier (Svif_id): This is a 12-bit value mapped to the host interface that received this frame (if it is going from the FEX to the parent switch).

When an Ethernet frame is received on a host interface, the Fabric Extender adds a VNTag to the frame and forwards it on one of the fabric interfaces.

The parent switch recognizes the logical interface that sent the frame (through the Svif_id field), removes the tag, and forwards it according to its MAC address table.

In the other direction (when the parent switch receives a frame destined for a FEX host interface), the parent switch reads the frame's destination MAC address and finds a logical interface index in its MAC address table. The switch inserts the VNTag associated with that logical interface and forwards the frame to the correct FEX. On receiving the frame, the FEX recognizes the associated VIF (through the Dvif_id), removes the VNTag, and sends it out the mapped host interface.

Connecting a FEX to the Parent Switch:

The Nexus 2000 holds two copies of its system image:

  • Primary image: Stored in flash memory; can be upgraded.
  • Secondary image: Stored in boot flash memory and write-protected; used only when the primary image becomes corrupted.

Whenever a FEX connects to a Nexus 5000, a handshake process starts. This process verifies the image version, and the parent switch determines whether the FEX image should be upgraded. If an upgrade is necessary, the parent switch loads the image onto the Nexus 2000 and reloads it. The whole process takes about 8 minutes.

Now let's understand how a FEX is discovered by the parent switch, and how the management model then works on the FEX.

When there is an active connection between them, a Fabric Extender and its parent switch use periodic Satellite Discovery Protocol (SDP) messages to discover each other. After this formal introduction, the Fabric Extender sends a Satellite Registration Protocol (SRP) request to register itself with the parent switch.

  • The Fabric Extender automatically sends SDP messages as soon as it has an active fabric interface.
  • In these SDP messages, the Fabric Extender advertises the VLAN on which it expects to receive control commands, the SDP refresh interval (3 seconds), and its hardware information.
  • Once an interface on the parent Nexus switch is configured with switchport mode fex-fabric and fex associate, that interface starts to send SDP packets, and both devices discover each other.
  • After discovery is complete, the Fabric Extender sends an SRP Request message and waits for an SRP Response from the parent switch.
  • The registration process completes FEX detection on the parent switch; SDP messages continue to be exchanged after registration, and the FEX becomes operational from the parent switch.
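The discovery steps above assume a minimal FEX association configuration on the parent switch; a sketch (FEX ID 100 and the interface number are illustrative):

```
! Define the FEX instance on the parent switch
fex 100
  description RACK1-FEX100

! Turn a switch port into a fabric interface bound to that FEX
interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 100
```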

Once the FEX is discovered and operational, it follows a management model that lets the parent switch manage it over the fabric interfaces. After discovery, if the Fabric Extender has been correctly associated with the parent switch, the following operations are performed:

  • The switch checks the software image compatibility and upgrades the Fabric Extender if necessary.
  • The switch and Fabric Extender establish in-band IP connectivity with each other. The switch assigns an IP address in the range of loopback addresses (127.15.1.0/24) to the Fabric Extender to avoid conflicts with IP addresses that might be in use on the network.
  • The switch pushes the configuration data to the Fabric Extender. The Fabric Extender does not store any configuration locally.
  • The Fabric Extender updates the switch with its operational status. All Fabric Extender information is displayed using the switch commands for monitoring and troubleshooting.

Methods to connect FEX to Parent Switch:

There are two methods to connect a FEX to a parent switch:

  • Static Pinning
  • Dynamic Pinning

Static Pinning: In this method, the FEX fabric interfaces are individually connected to the parent switch, and the pinning max-links command is configured on the parent switch.

When the Cisco Nexus 2000 Series Fabric Extender is connected to the Cisco Nexus 5000 Series switch with two fabric links, as in the figure above, the ports can be divided as follows:

  • Pinning max-links 1: All 48 host ports use one fabric port only (the first port connected between the Cisco Nexus 5000 Series switch and the Cisco Nexus 2000 Series Fabric Extender).
  • Pinning max-links 2: The 48 ports are divided into two groups. The first 24 ports use the fabric link shown as a dotted line in the figure, and the remaining 24 ports use the fabric link shown as a solid line.
  • Pinning max-links 3: The 48 ports are divided into three groups. The first 16 ports use fabric link 1, the second 16 ports use fabric link 2, and the remaining ports stay shut down because there is no associated fabric link.
  • Pinning max-links 4: The 48 ports are divided into four groups. The first 12 ports use fabric link 1, the second 12 ports use fabric link 2, and the remaining ports stay shut down because there is no associated fabric link.

If the Cisco Nexus 2000 Series Fabric Extender is connected to the Cisco Nexus 5000 Series switch with four fabric links, the ports can be divided as shown in the figure.

Ports can be divided as follows:

  • Pinning max-links 1: All 48 host ports use one fabric port only (the first port connected between the Cisco Nexus 5000 Series switch and the Cisco Nexus 2000 Series Fabric Extender).
  • Pinning max-links 2: The 48 ports are divided into two groups. The first 24 ports use fabric link 1, shown as a dotted line in the figure, and the remaining 24 ports use fabric link 2, shown as a solid line.
  • Pinning max-links 3: The 48 ports are divided into three groups. The first 16 ports use fabric link 1, the second 16 ports use fabric link 2, and the remaining ports use fabric link 3.
  • Pinning max-links 4: The 48 ports are divided into four groups. The first 12 ports use fabric link 1, the second 12 ports use fabric link 2, the following 12 ports use fabric link 3, and the remaining 12 ports use fabric link 4.
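A static-pinning sketch for the two-fabric-link case described above (FEX ID 100 and the interface range are illustrative):

```
! Pin the 48 host ports across two individual fabric links
fex 100
  pinning max-links 2

! Two standalone (non-port-channel) fabric interfaces
interface ethernet 1/1-2
  switchport mode fex-fabric
  fex associate 100
```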

Dynamic Pinning: Dynamic pinning is configured by building a port channel on the links between the FEX and the parent switch. There are two methods of dynamic pinning:

  • Via Port-Channel
  • Via Active-Active vPC.

If you are using all the available Fabric Extender fabric links (four 10 Gigabit Ethernet links), traffic is distributed across all fabric links based on the port-channel load-balancing algorithm of your choice, which for the Fabric Extender module can hash on Layer 2 and Layer 3 information.

If one fabric link is lost, for example link 1 here, traffic is redistributed across the remaining three links.
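A dynamic-pinning sketch bundling all four fabric links into a single fabric port channel (FEX ID, port-channel number, and interface range are illustrative):

```
! Bundle the four fabric links into one port channel
interface ethernet 1/1-4
  switchport mode fex-fabric
  fex associate 100
  channel-group 100

! The port channel itself also carries the FEX association
interface port-channel 100
  switchport mode fex-fabric
  fex associate 100
```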

Active-Active FEX:

In Active-Active FEX, the FEX is configured in a vPC, and traffic is distributed according to the vPC traffic-flow method. The best example of Active-Active FEX is Enhanced vPC (EvPC), whose configuration is given in the EvPC lab section.

An overview of Active-Active FEX is given below:

Design Scenarios based on Nexus 7K, 5K and 2K:

Design Option 1: The FEX is connected to a Nexus 7K via the dynamic port-channel method, known as straight-through mode.

Design Option 2: The FEX is connected to a Nexus 5K via either straight-through mode or an active-active port channel.

