OpenStack Networking Services
Networking in OpenStack is designed so that each tenant can create its own network, manage multiple networks, connect networks to one another, access external networks, and deploy additional networking services.
In OpenStack, basic networking was originally provided by Nova, but all advanced networking features are provided by the OpenStack project called Neutron.
Here we will discuss the Neutron project in detail.
The Neutron project in OpenStack enables tenants to create their own networks and configure network elements such as subnets, routers, firewalls, load balancers, and ports.
The Neutron server contains an API server that receives networking service requests; this API server is usually configured on the controller node. Neutron is based on a plugin architecture that provides additional networking capabilities. These plugins reside on the controller node, where they can orchestrate resources directly by interacting with devices, or use agents to control the resources.
As soon as the API server receives a networking request, it passes the request to the associated plugin for further processing.
Agents are deployed on network or compute nodes; the network node provides the resources to implement services such as routing, firewalls, load balancing, and VPNs.
Hardware vendors can also implement their own plugins against Neutron's well-defined API to handle OpenStack API server requests.
Neutron Plugins: These plugins orchestrate resources directly by interacting with devices, or use agents to control the resources. There are two types of Neutron plugins.
Core Plugin: This plugin provides layer-2 connectivity for virtual machines as well as for network elements connecting to the network. Whenever there is a request to create a new virtual network or new ports, the API server calls the core plugin to proceed.
The core plugin manages the following Neutron resources:
- Networks: Represent a layer-2 domain.
- Ports: Represent endpoints on a virtual network.
- Subnets: Contain the layer-3 address block along with the defined gateway.
Service Plugin: A service plugin is used to configure higher-level networking services such as routing, firewalls, load balancing, and VPN services.
As an example, a service plugin creates the virtual router that provides connectivity between different networks; another example is the floating IP, which provides a NAT function to expose an internal VM to the external world.
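The router and floating IP workflow above can be sketched with the legacy neutron command line. The router, subnet, and network names here are illustrative, and an external network is assumed to already exist:

```shell
# Create a virtual router (service plugin resource)
neutron router-create router1

# Attach an existing tenant subnet to the router
neutron router-interface-add router1 subnet1

# Set the router's gateway to an existing external network
neutron router-gateway-set router1 external-net

# Allocate a floating IP from the external network for NAT
neutron floatingip-create external-net
```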
Agents: Agents are deployed on network and compute nodes. These agents talk to the Neutron server over a message bus. Neutron provides various types of agents to implement virtual networking services such as layer-2 connectivity, DHCP, and routing. Some of the Neutron agents are as follows:
- DHCP Agent: Provides DHCP services and runs on the network node.
- L3 Agent: Implements routing and NAT services.
- VPN Agent: Implements VPN services and is installed on network nodes.
- L2 Agent: Resides on compute and network nodes, and is used to connect VMs and other networking devices, such as virtual routers, to the layer-2 network. It also interacts with the core plugin to receive the networking configuration for each VM.
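The agents running in a deployment, along with the node each one lives on and its liveness status, can be listed from the controller with the legacy neutron CLI:

```shell
# List all registered Neutron agents (DHCP, L3, L2, etc.),
# their host, and whether they are alive
neutron agent-list
```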
Virtual Network Implementation Overview:
The Neutron core plugin is responsible for creating virtual networks and ports. Each virtual network, when created, is associated with a separate layer-2 domain. These virtual networks fall into two categories:
VLAN-Based Network: In this method, the core plugin creates a virtual network and attaches it to a static layer-2 VLAN, and all communication within this network is confined to it. The core plugin configures the VLAN on the virtual switches of the compute and network nodes and on all physical switches that connect them.
Tunnel-Based Network: When virtual network traffic must be isolated without relying on VLANs, a tunnel-based network is used. In this scenario, the network and compute nodes are connected by an IP fabric and use tunnel encapsulation (such as GRE or VXLAN). The inner packet (the L2 frame) is carried as the payload of an outer IP packet, and the tunnel endpoints encapsulate and decapsulate the L2 frames.
Consider a VM on a compute node connected to a virtual router on the network node. When VM1 sends a packet to another port on the virtual network, such as the router interface, the virtual switch on the compute node encapsulates the L2 frame inside an IP packet and sends it over the IP fabric to the network node where the destination virtual router resides. On receiving the IP packet, the virtual switch on the destination node removes the outer IP header to recover the L2 frame sent by the originating virtual machine. The frame is then delivered to the destination router port.
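The encapsulate/decapsulate round trip described above can be illustrated with a small conceptual sketch. This is not Neutron code: the outer header here is a toy stand-in for the real outer Ethernet/IP/UDP/VXLAN headers, which together add 50 bytes per frame for VXLAN:

```python
# VXLAN overhead: outer Ethernet (14) + IP (20) + UDP (8) + VXLAN (8) bytes
VXLAN_OVERHEAD = 14 + 20 + 8 + 8

def encapsulate(l2_frame: bytes, vni: int) -> bytes:
    """Wrap the VM's L2 frame in a toy 8-byte outer header carrying the
    VXLAN Network Identifier, mimicking what the source virtual switch does."""
    outer_header = b"OUTER" + vni.to_bytes(3, "big")  # 5 + 3 = 8 bytes
    return outer_header + l2_frame

def decapsulate(packet: bytes) -> bytes:
    """Strip the toy outer header, as the destination virtual switch does,
    recovering the original L2 frame."""
    return packet[8:]

# Round trip: the destination recovers exactly the frame the VM sent
frame = b"\x00\x11\x22\x33\x44\x55" + b"payload"
assert decapsulate(encapsulate(frame, vni=42)) == frame
```

The key property is that the fabric only ever routes the outer IP packet; the inner frame is opaque payload between the two tunnel endpoints.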
Virtual Switches: These are the switches used to connect virtual ports to the physical network. The compute and network nodes are installed with a virtual switch; as soon as a VM is created, it is connected to the virtual switch, and network elements such as virtual routers and DHCP servers are connected to the virtual switches on the network nodes. These virtual switches are in turn connected to physical switches through the NIC of the physical node.
The Neutron server supports two types of virtual switch: Linux Bridge and Open vSwitch (OVS). OVS provides various advanced networking options such as LACP and VXLAN/GRE tunneling, and it also supports programmability.
Different Network Types:
Neutron supports different network types in OpenStack.
- Tenant Network: A virtual network created by a tenant.
- Provider Network: Created by OpenStack operators and associated with an existing network in the data center.
- External Network: This network allows access to the Internet or the outside world.
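Provider and external networks can be created with the legacy neutron CLI. The physical network label, VLAN ID, and network names below are illustrative and must match the operator's data center setup:

```shell
# Map a virtual network onto an existing VLAN in the data center
neutron net-create provider-net --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 211

# Create an external network for Internet access
neutron net-create external-net --router:external True
```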
Neutron Subnets: A subnet defines the pool of IP addresses associated with a network. Neutron can also associate multiple subnets with a single network.
Creating Virtual Networks and Subnets: We can create a virtual network and its subnets with the following commands:
#neutron net-create network1
To create a subnet associated with this virtual network, use the subnet-create argument of the Neutron command line. To specify the gateway address, use the --gateway option. Use the --disable-dhcp option to disable the DHCP service for the subnet. It is also possible to set name servers for the subnet, up to five per subnet, using the --dns-nameserver option:
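A subnet-create invocation combining these options might look as follows; the CIDR, gateway, and DNS addresses are example values:

```shell
# Create a subnet on network1 with an explicit gateway and name server
neutron subnet-create network1 192.168.10.0/24 --name subnet1 \
    --gateway 192.168.10.1 \
    --dns-nameserver 8.8.8.8
```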
Network Port Connectivity Concepts:
In this section, let's understand how different ports are connected to a virtual switch.
Linux Bridge-based connection:
When a node uses Linux Bridge, the bridge provides basic layer-2 connectivity but does not itself support VLAN tagging or tunneling. To provide VLAN- and tunnel-based isolation, the Linux kernel creates sub-interfaces that perform the VLAN tagging or tunneling. The figure below shows how tagged sub-interfaces are created from the parent NIC of the compute and network nodes and how they are plugged into the bridges.
Here the Neutron agent creates a sub-interface and a Linux bridge per VLAN segment and connects them together. When a VM is connected to a network, it is attached to the correct virtual port of the corresponding Linux bridge.
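The bridges and sub-interfaces that the agent creates can be inspected on the node with standard Linux tools:

```shell
# List Linux bridges (named brq<network-id>) and their attached ports
brctl show

# Show interfaces, including VLAN sub-interfaces such as eth1.<vlan-id>
ip link show
```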
Inspecting such a node would show, for example, the tagged sub-interface eth1.211 and the corresponding Linux bridge brq7c2152c-c3 created by the layer-2 agent.
To enable Linux bridge-based networks, the ML2 plugin configuration file /etc/neutron/plugins/ml2/ml2_conf.ini should be updated as follows:
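A minimal sketch of such a configuration is shown below; the physical network label, VLAN range, and interface name are example values that must match the deployment (the interface mapping may also live in the L2 agent's own configuration file, depending on the release):

```ini
[ml2]
type_drivers = flat,vlan
tenant_network_types = vlan
mechanism_drivers = linuxbridge

[ml2_type_vlan]
# VLAN IDs 200-299 on physical network "physnet1" are available to tenants
network_vlan_ranges = physnet1:200:299

[linux_bridge]
# Map the physical network label to the node's NIC
physical_interface_mappings = physnet1:eth1
```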
OVS-Based Connection:
OVS uses multiple interconnected bridges to perform layer-2 isolation; three bridges connect the virtual machines to the physical network. We will start our discussion by first looking at VXLAN-based networks.
To configure the ML2 plugin to use VXLAN-based networks, update the ML2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini as follows:
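A minimal sketch of a VXLAN configuration follows; the VNI range and local endpoint address are example values (the `[ovs]` and `[agent]` options may also live in the OVS agent's own configuration file, depending on the release):

```ini
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
# VXLAN Network Identifiers available to tenant networks
vni_ranges = 1:1000

[ovs]
# This node's tunnel endpoint IP on the IP fabric
local_ip = 10.0.0.11

[agent]
tunnel_types = vxlan
```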
OVS introduces additional virtual interfaces inside the host identified as bridges and classified as follows:
- br-int: The br-int is the integration bridge. All virtual instances that use the networking service connect to this bridge. This includes virtual machines, routers, and DHCP servers.
- br-tun: In the case of tunnel-based networks, the br-ethX (provider bridge) is replaced with a tunnel bridge called br-tun. The br-tun bridge is configured to handle encapsulation and de-encapsulation of packets.
- br-ex: This bridge is connected to a physical interface that provides connectivity to the external world.
As soon as a VM is connected to a network via the integration bridge, the VLAN IDs used to connect the VM to the bridge are local to the compute node. These local VLANs are never exposed to the outside network.
The ovs-vsctl show command displays the OVS configuration; on such a node, the interfaces connected to the br-int bridge appear tagged with a local VLAN (for example, VLAN 1).
Further commands and references can be found in the OpenStack Networking Labs section.