VXLAN Overlay Network Configuration
How to Configure VXLAN Overlay Network
- Configure the ml2_conf.ini file to use the VXLAN driver.
- Create a network named dclessons_VXLAN_VNI with segmentation ID 1010.
- Verify how the virtual interface information is used for the VXLAN tunnel on the compute and network nodes.
To use the VXLAN driver, SSH to the OpenStack controller node and open the ml2_conf.ini file in your favorite editor.
Modify the [ml2] section as follows:
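The exact contents of ml2_conf.ini vary by release; a minimal sketch of the relevant options, assuming the openvswitch mechanism driver and an example VNI range, looks like this:

```ini
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
# Example range only; it must cover any VNIs you plan to allocate
vni_ranges = 1001:2000
```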
In the [ovs] section, set local_ip to the VTEP IP address. OVS uses this IP to carry all the tunneled traffic. In our setup, the VTEP IP for the controller node is 10.0.0.1:
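Using the controller's VTEP IP from our setup, the setting looks like this:

```ini
[ovs]
local_ip = 10.0.0.1
```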
In the [agent] section, set tunnel_types to vxlan:
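That is:

```ini
[agent]
tunnel_types = vxlan
```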
Restart the Neutron server and Open vSwitch Agent on the controller and network node of our setup using the following commands:
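Service names differ between distributions; on an Ubuntu-based deployment the commands would look roughly like this:

```shell
# On the controller node
service neutron-server restart
# On the controller and network nodes
service neutron-plugin-openvswitch-agent restart
```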
When we configure the ML2 plugin to create the VXLAN network, the Neutron OVS L2 agent configures the local OVS instance on the compute and network nodes, because the Neutron L2 agent also runs on the compute and network nodes.
To create the VXLAN network, go to Admin | Networks.
Click + Create Network and provide the desired name. Set Provider Network Type to VXLAN and Segmentation ID to the required VNI:
Now associate the 184.108.40.206/24 subnet with this VXLAN network.
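If you prefer the CLI over Horizon, the same network and subnet can be created with the legacy neutron client (the subnet name here is an assumption for this sketch):

```shell
neutron net-create dclessons_VXLAN_VNI \
    --provider:network_type vxlan \
    --provider:segmentation_id 1010
neutron subnet-create --name dclessons_VXLAN_subnet \
    dclessons_VXLAN_VNI 184.108.40.206/24
```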
When a tenant launches a VM and attaches it to a virtual network, a virtual network interface is created on the compute node, which connects the VM to the OVS instance. We will identify the virtual network interface, which attaches a VM to the OVS instance on the VXLAN network.
We will also look at the OVS configuration, which makes the communication between the VM and other members on the virtual network possible.
Launch an instance: go to Project | Compute | Instances and click Launch Instance.
Use the following command to get the port mapping with its ID:
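With the legacy neutron client, one way to do this is to look up the network and create a port on it (the network name comes from the earlier step):

```shell
# Find the ID of our VXLAN network
neutron net-list
# Create a port on it and note the returned port ID for the next step
neutron port-create dclessons_VXLAN_VNI
```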
Now create the Nova VM with the following command:
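A sketch of the boot command; the flavor and image names here are assumptions, and <PORT_ID> is the port ID obtained in the previous step:

```shell
nova boot --flavor m1.tiny --image cirros \
    --nic port-id=<PORT_ID> dclessons_vm1
```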
Use the nova list command to check which VMs are running:
Now use neutron port-list to find the port ID of the virtual interface associated with the VM.
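For example:

```shell
neutron port-list
# In our setup, the port attached to the VM has the ID
# 6954c6de-7cab-4003-b0b1-cf4ed0aecf89
```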
Use the ovs-vsctl show command to look at the ports created on the OVS instance. The OVS port name is composed of part of the Neutron port ID. In the following listing, qvo6954c6de-7 is the port corresponding to our virtual interface ID of 6954c6de-7cab-4003-b0b1-cf4ed0aecf89. It is connected to the br-int bridge and is configured with tag: 8 to mark all packets entering this interface with VLAN 8:
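An abridged transcript of the relevant part of the output in our setup (other bridges and ports omitted):

```shell
ovs-vsctl show
#   Bridge br-int
#       Port "qvo6954c6de-7"
#           tag: 8
#           Interface "qvo6954c6de-7"
```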
Next, look at the flow configuration on the br-tun bridge of OVS, which sends packets to the other Hypervisors through the VXLAN tunnels, using the ovs-ofctl dump-flows br-tun command:
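An abridged sketch of the relevant flow; the exact table numbers and output port vary by setup, but the key actions are stripping the local VLAN tag and setting the tunnel key:

```shell
ovs-ofctl dump-flows br-tun
#  ... dl_vlan=8 actions=strip_vlan,set_tunnel:0x3f2,output:2
```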
We can see that the configuration strips the local VLAN 8 from packets going out of the compute node and adds a tunnel key (VNI) of 0x3f2. The VNI 0x3f2 is the hexadecimal equivalent of 1010, which was used to create the OpenStack network.
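The hexadecimal-to-decimal relationship is easy to confirm from the shell:

```shell
# Convert the tunnel key seen in the flow table to decimal
printf '%d\n' 0x3f2
# prints 1010
```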
When a VM is launched, OpenStack creates a virtual NIC interface and attaches it to the OVS instance on the Hypervisor through a Linux bridge. The OVS instance on the Hypervisor has two bridges: br-int, for communication within the Hypervisor, and br-tun, which is used to communicate with the other Hypervisors over the VXLAN tunnels. See the figure below.
The OVS bridge, br-int, uses VLANs to segregate the traffic in the Hypervisors. These VLANs are locally significant to the Hypervisor. Neutron allocates a unique VNI for every virtual network. For any packet leaving the Hypervisor, OVS replaces the VLAN tag with the VNI in the encapsulation header. OVS uses local_ip from the plugin configuration as the source VTEP IP for the VXLAN packet.