Docker Networking Overview
Docker networking is built from three components:
- The Container Network Model (CNM): the design specification that defines the fundamental building blocks of Docker networking.
- Libnetwork: the real implementation of the CNM, used by Docker, just as TCP/IP is an implementation of the OSI model.
- Drivers: driver plugins that implement specific network topologies, such as VXLAN-based overlay networks.
The figure below illustrates these three components.
CNM (Container Network Model):
CNM (Container Network Model):
CNM is the framework that defines how networking should be designed in Docker. The model is divided into three building blocks:
- Sandbox: an isolated network stack inside a container, which includes Ethernet interfaces, DNS configuration, ports, and routing tables.
- Endpoint: a virtual network interface that connects a sandbox to a network.
- Network: a software implementation of an 802.1d bridge (a software switch) to which multiple endpoints connect in order to communicate with each other.
The figure below illustrates the CNM model.
Docker uses the CNM model to provide network connectivity to containers: a sandbox is placed inside each container, and the container's endpoints connect it to a software-based switch. The figure below shows two containers. The endpoints of Containers A and B that are attached to Network X can communicate with each other, whereas endpoints attached to different networks cannot communicate directly, because traffic between different networks requires a router.
Libnetwork is the real implementation of the CNM. It is open source and runs on multiple platforms, including Linux and Windows.
Libnetwork contains Docker's core networking code, through which it provides the following functions:
- Service discovery
- Ingress-based container load balancing
- Control plane and management plane functions
Drivers are responsible for the data plane of container networking. The figure below shows how drivers handle the data plane while libnetwork provides the control plane and management plane.
By default, Docker ships with several built-in drivers, known as local drivers. On the Linux platform they are bridge, overlay, and macvlan; on Windows they are nat, overlay, transparent, and l2bridge.
Single Host Bridge Network:
A single-host bridge network exists on a single Docker host and can only connect containers running on that host, via an 802.1d software bridge.
Docker creates this single-host bridge network with the bridge driver on Linux and with the nat driver on Windows.
The figure below shows two Docker hosts, each with a bridge network called dclessons_net. Even though the networks are identically named, they act as independent, isolated networks, so a container on Host A cannot communicate with a container on Host B.
When Docker is installed on a host, it gets a default single-host bridge network; on Linux hosts it is called bridge, and on Windows hosts it is called nat.
On a Linux host, the networks present on the Docker host can be listed with the following command.
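The original figure is not reproduced here; the listing command looks like this (output will vary by host):

```shell
# List all networks on this Docker host; the default bridge
# network appears with driver "bridge" and scope "local"
docker network ls
```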
To get more details about the bridge network, use the inspect command.
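As a sketch of that step:

```shell
# Show detailed configuration of the default bridge network:
# subnet, gateway, connected containers, and driver options
docker network inspect bridge
```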
The default bridge network on every Linux Docker host maps to an underlying Linux bridge in the kernel called docker0, which can be seen with the following command.
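A representative form of that command:

```shell
# Show the Linux bridges in the kernel; docker0 backs the
# default "bridge" network (requires the bridge-utils package)
brctl show

# Alternatively, with iproute2:
ip link show docker0
```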
The relationship between Docker's default bridge network and the docker0 bridge in the Linux kernel is shown below:
As soon as a container starts running, it is connected to the bridge network; the bridge network maps to the docker0 Linux bridge in the host kernel, which in turn maps to an Ethernet interface on the host via port mappings.
To create a single-host bridge network called dclessonslocal_net, use the following commands:
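The figure's commands would look roughly like this:

```shell
# Create a user-defined single-host bridge network
docker network create -d bridge dclessonslocal_net

# Confirm it appears alongside the default networks
docker network ls
```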
In the figure above, we use brctl show to list the Linux bridges currently running on the system. If this command is not available, install it with apt-get install bridge-utils.
The output shows the docker0 bridge as well as the bridge br-bb58d43.., which was just created for the network dclessonslocal_net.
Now let's create a container named dclessons1 from the alpine image and connect it to dclessonslocal_net, using the command shown in the figure below.
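A sketch of that command (the long-running sleep is just a way to keep the container alive for inspection):

```shell
# Run an alpine container attached to the user-defined bridge network
docker container run -d --name dclessons1 \
  --network dclessonslocal_net \
  alpine sleep 1d
```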
Now inspect dclessonslocal_net to see which IP address the container received. Here you will see that it got the address 172.20.0.2/16.
Now execute brctl show to see which bridge and veth port the container dclessons1 is connected to.
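A sketch of both verification steps (the bridge name and IP address will differ on your system):

```shell
# Confirm the container's IP address on the network
docker network inspect dclessonslocal_net

# The veth interface of dclessons1 appears under the br-... bridge
brctl show
```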
Let's create another container, dclessons2, connect it to the same network, and verify it with the command shown in the figure below. The command to create the container and attach it to the network is not repeated here, as it was covered in previous sections.
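For completeness, a sketch of creating the second container and verifying connectivity (name resolution works between containers on a user-defined bridge network):

```shell
# Create a second container on the same network
docker container run -d --name dclessons2 \
  --network dclessonslocal_net \
  alpine sleep 1d

# Verify connectivity between the two containers by name
docker container exec dclessons2 ping -c 3 dclessons1
```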
Now let's create another container that runs a web service, nginx. Use the command in the figure below to run the container with nginx and to verify it.
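A sketch, assuming the container name dclessonsweb and host port 8080 (the source does not specify either):

```shell
# Run nginx on the same network and publish container port 80
# to port 8080 on the host
docker container run -d --name dclessonsweb \
  --network dclessonslocal_net \
  -p 8080:80 nginx

# Verify the web service from the host
curl http://localhost:8080
```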
Multi-Host Overlay Network:
An overlay network spans a single network across multiple hosts, so that containers on different hosts can communicate with each other on the same network. On Linux hosts this uses the overlay driver.
Assume we have an existing physical network with two VLANs (VLAN 10 and VLAN 20), as shown below:
Now let's connect the Docker host to the physical network. Once connected, the topology looks like this:
Now create a container and connect it to VLAN 10; this can be done with the macvlan driver. When creating a macvlan network, we need to provide the following attributes:
- The range of IP addresses it can assign to containers
- The parent interface or sub-interface on the host
Use the command shown in the figure below. Once the container is running, it is connected to the 192.168.1.0/24 network and receives the IP address 192.168.1.2/24, which can be verified with the inspect command.
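A sketch of those commands; the gateway address and the sub-interface name eth0.10 are assumptions, so substitute your host's uplink interface:

```shell
# Create a macvlan network for VLAN 10 (gateway and parent
# interface are assumed values)
docker network create -d macvlan \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  -o parent=eth0.10 \
  dclessons_net

# Attach a container with the address mentioned in the text
docker container run -d --name dclessons4 \
  --network dclessons_net \
  --ip 192.168.1.2 \
  alpine sleep 1d

# Verify the assigned address
docker container inspect dclessons4 \
  --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
```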
The command above created the dclessons_net network on a sub-interface in VLAN 10 and connected the dclessons4 container to it, as shown in the figure below.
Now let's create another network, dclessons_net2, on VLAN 20, then create a container named dclessons5 and connect it to the new network. Use the command shown in the figure below.
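A sketch of that step; the subnet, gateway, and sub-interface name eth0.20 are assumed values:

```shell
# Create a second macvlan network on the VLAN 20 sub-interface
docker network create -d macvlan \
  --subnet 192.168.2.0/24 \
  --gateway 192.168.2.1 \
  -o parent=eth0.20 \
  dclessons_net2

# Connect a new container to it
docker container run -d --name dclessons5 \
  --network dclessons_net2 \
  alpine sleep 1d
```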
Once this is done, our Docker topology looks like the figure below:
Ingress Load Balancing:
Swarm supports two publishing modes that make services available outside the cluster:
- Ingress mode (default)
- Host Mode
In ingress mode, a published service is accessible from any node in the swarm, even from nodes that are not running a service replica. Ingress mode is the default, so every time you publish a service with -p or --publish, it uses this mode.
In host mode, a published service is accessible only from the nodes that run its replicas.
To publish a service in host mode, use the long form of --publish and add mode=host, as in the example below.
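A sketch of host-mode publishing; the service name, ports, and image are illustrative:

```shell
# Long-form --publish with mode=host: the port is opened only
# on nodes that actually run a replica
docker service create -d --name dclessons-web \
  --publish published=5000,target=80,mode=host \
  --replicas 1 \
  nginx
```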
Ingress mode uses a layer-4 routing mesh called the service mesh or swarm-mode service mesh. The figure below describes the basic traffic flow when an external request hits a service exposed in ingress mode.
In order to achieve that, use following command:
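The figure's command would look roughly like this, matching the service name, network, and ports described next (the nginx image is an assumption, as the source does not name one):

```shell
# Deploy dclessons6 on the overnet overlay network, publishing
# swarm port 5000 to container port 80 in ingress mode
docker service create -d --name dclessons6 \
  --network overnet \
  -p 5000:80 \
  nginx
```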
Let’s see how this command works and how traffic flows:
- The command deploys a service named dclessons6, which is connected to the network overnet and published on port 5000.
- Any traffic for this service that hits the ingress network on port 5000, via any node, is routed to dclessons6 on port 80.
- The dclessons6 service is deployed on Node 2, so all traffic trying to reach the service on port 5000 is ultimately routed to Node 2, which runs the replica.
Build a Docker Overlay Network in Swarm Mode:
Let's build a scenario in which two swarm nodes sit on two different networks, connected through an L3 device that provides reachability between them.
Now let's make Node 1 the manager and register Node 2 as its worker. Once that is done, create an overlay network named dclessons-overlay-net and verify it on both Docker hosts, Node 1 and Node 2, as shown in the figure below.
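A sketch of those steps; the advertise address is an assumed value, and the join token comes from the output of the init command:

```shell
# On Node 1: initialize the swarm (address is an assumption)
docker swarm init --advertise-addr 10.0.0.1

# On Node 2: join as a worker using the token printed above
# docker swarm join --token <worker-token> 10.0.0.1:2377

# On the manager: create the overlay network
docker network create -d overlay dclessons-overlay-net

# Verify on each node
docker network ls --filter name=dclessons-overlay-net
```

Note that on a worker node, the overlay network only becomes visible once a service task attached to it is scheduled there.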
Now let's create a service named dclessons5 with replicas=2 and attach it to the same overlay network. Once done, you will see that both Node 1 and Node 2 have a container for dclessons5 running, attached to the overlay network dclessons-overlay-net.
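A sketch of that service creation (the alpine image and sleep command are assumptions to keep the replicas running):

```shell
# Create the service with two replicas on the overlay network
docker service create -d --name dclessons5 \
  --replicas 2 \
  --network dclessons-overlay-net \
  alpine sleep 1d

# Confirm where the replicas were scheduled
docker service ps dclessons5
```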
Now inspect the overlay network to get the IP addresses and network details of the containers running on each node.
Verify that the containers exist on each swarm node.
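A sketch of both checks:

```shell
# On the manager: show the subnet and the attached containers
docker network inspect dclessons-overlay-net

# On each node: confirm the dclessons5 task container is running
docker container ls --filter name=dclessons5
```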
With this configuration, the network topology looks like the figure below: