VXLAN Overlay Network Configuration

Posted on Jan 13, 2020

How to Configure VXLAN Overlay Network

Task:

  • Configure the ml2_conf.ini file to use VXLAN driver.
  • Configure a network dclessons_VXLAN_VNI with segment ID 1010.
  • Verify how Virtual interface information is used for VXLAN tunnel on compute node and Network node.

Solution:

To use the VXLAN type driver, SSH to the OpenStack controller node and open the ML2 configuration file in your favorite editor:

[root@localhost ~(keystone_admin)]# sudo nano /etc/neutron/plugins/ml2/ml2_conf.ini

Modify the following options in the [ml2] section:

[ml2]
...
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
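
Depending on the OpenStack release, you may also need to define the pool of VNIs available for tenant networks in the [ml2_type_vxlan] section. A minimal sketch, assuming a range that covers our segment ID 1010:

[ml2_type_vxlan]
vni_ranges = 1001:2000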

In the [ovs] section, set local_ip to the VTEP IP address. OVS uses this IP to carry all the tunneled traffic. In our setup, the VTEP IP for the controller node is 10.0.0.1; each compute and network node should set local_ip to its own VTEP address:

[ovs]
local_ip = 10.0.0.1

In the [agent] section, set tunnel_types to vxlan:

[agent]
tunnel_types = vxlan

Restart the Neutron server and the Open vSwitch agent on the controller and network nodes of our setup using the following commands:

[root@localhost ~(keystone_admin)]# sudo service neutron-server restart
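
The Open vSwitch agent must also be restarted so that it picks up the new tunnel settings; on an RDO/Packstack-style install such as this one, the service is typically named neutron-openvswitch-agent (verify the service name on your distribution):

[root@localhost ~(keystone_admin)]# sudo service neutron-openvswitch-agent restart

Once restarted, neutron agent-list should show the Open vSwitch agent as alive on each node.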

When we configure the ML2 plugin to create the VXLAN network, the Neutron OVS L2 agent is responsible for configuring the local OVS instance on each node; because the agent runs on the compute and network nodes as well as the controller, the OVS configuration is applied consistently across the setup.

To create the VXLAN network, go to Admin | Networks.

Click on + Create Network and provide the desired Name. Provider Network Type should be set to VXLAN and Segmentation ID should be set to the required VNI, which is 1010 in our case.
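
The same network can also be created from the CLI; admin credentials are required to set the provider attributes:

[root@localhost ~(keystone_admin)]# neutron net-create dclessons_VXLAN_VNI --provider:network_type vxlan --provider:segmentation_id 1010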


Now associate the 80.80.80.0/24 subnet with this VXLAN network.
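
The subnet association can also be done from the CLI; the subnet name below is illustrative:

[root@localhost ~(keystone_admin)]# neutron subnet-create --name dclessons_VXLAN_subnet dclessons_VXLAN_VNI 80.80.80.0/24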

When a tenant launches a VM and attaches it to a virtual network, a virtual network interface is created on the compute node, which connects the VM to the OVS instance. We will identify the virtual network interface, which attaches a VM to the OVS instance on the VXLAN network.

We will also look at the OVS configuration, which makes the communication between the VM and other members on the virtual network possible.

Launch an instance: go to Project | Compute | Instances and click on Launch Instance, or use the CLI as we do here. First, create a port on the VXLAN network:

[root@localhost ~(keystone_admin)]# neutron port-create --name VXLAN-PORT1 dclessons_VXLAN_VNI

Use the following command to get the port-to-ID mapping:

[root@localhost ~(keystone_admin)]# neutron port-list
6954c6de-7cab-4003-b0b1-cf4ed0aecf89  |  VXLAN-PORT1  |  fa:16:3e:ee:5b:ed  |  80.80.80.7

Now create the Nova VM with the following command:

[root@localhost ~(keystone_admin)]# nova boot --flavor m1.tiny --image cirros --nic port-id=6954c6de-7cab-4003-b0b1-cf4ed0aecf89 dclessons-VXLAN-VM1

Use the nova list command to check which VMs are running:
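
A sketch of the expected output, abridged; the ID, status, and other columns will differ in your environment:

[root@localhost ~(keystone_admin)]# nova list
| dclessons-VXLAN-VM1 | ACTIVE | dclessons_VXLAN_VNI=80.80.80.7 |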

Now use neutron port-list to find the port ID of the virtual interface associated with the VM:

[root@localhost ~(keystone_admin)]# neutron port-list | grep 80.80.80.7 | cut -f2 -d" "
6954c6de-7cab-4003-b0b1-cf4ed0aecf89

Use the ovs-vsctl show command to look at the ports created on the OVS instance. The OVS port name is composed of a part of the Neutron port ID. In the following listing, qvo6954c6de-7 is the port corresponding to our virtual interface ID of 6954c6de-7cab-4003-b0b1-cf4ed0aecf89. It is connected to the br-int bridge and is configured with tag: 8, which marks all packets entering this interface with VLAN 8:

[root@localhost ~(keystone_admin)]# sudo ovs-vsctl show
a928e612-3b05-46db-962c-e24e08ba8a3f
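
The relevant portion of the output looks like the following sketch (abridged; the exact port name suffix and VLAN tag depend on the deployment):

    Bridge br-int
        Port "qvo6954c6de-7"
            tag: 8
            Interface "qvo6954c6de-7"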

Now examine the flow configuration on the br-tun bridge, which sends packets to the other Hypervisors through the VXLAN tunnels, using the ovs-ofctl dump-flows br-tun command:

[root@localhost ~(keystone_admin)]# sudo ovs-ofctl dump-flows br-tun
cookie=0xa841d4c1b2ceb4c7, duration=1781.434s, table=4, n_packets=0, n_bytes=0, idle_age=1781, priority=1,tun_id=0x3f2 actions=mod_vlan_vid:8

Here we can see the mapping between the VXLAN tunnel key (VNI) and the local VLAN: this flow tags packets arriving with tunnel key 0x3f2 into the local VLAN 8, while a corresponding flow in the opposite direction strips the local VLAN 8 from packets leaving the compute node and adds the tunnel key 0x3f2. The VNI 0x3f2 is the hexadecimal equivalent of 1010, the segmentation ID that was used to create the OpenStack network.
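
You can confirm the conversion from the shell:

[root@localhost ~(keystone_admin)]# printf '%d\n' 0x3f2
1010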

When a VM is launched, OpenStack creates a virtual NIC and attaches it to the OVS instance on the Hypervisor through a Linux bridge. The OVS instance on the Hypervisor has two bridges: br-int, for communication within the Hypervisor, and br-tun, which is used to communicate with the other Hypervisors over the VXLAN tunnels. See the figure below.

The OVS bridge, br-int, uses VLANs to segregate the traffic in the Hypervisors. These VLANs are locally significant to the Hypervisor. Neutron allocates a unique VNI for every virtual network. For any packet leaving the Hypervisor, OVS replaces the VLAN tag with the VNI in the encapsulation header. OVS uses local_ip from the plugin configuration as the source VTEP IP for the VXLAN packet.
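
The VTEP addresses in use can be seen on the tunnel ports of br-tun. Below is an abridged, illustrative listing in which the remote VTEP IP 10.0.0.2 is an assumed second node in this lab:

[root@localhost ~(keystone_admin)]# sudo ovs-vsctl show
    Bridge br-tun
        Port "vxlan-0a000002"
            Interface "vxlan-0a000002"
                type: vxlan
                options: {in_key=flow, local_ip="10.0.0.1", out_key=flow, remote_ip="10.0.0.2"}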

