Distributed Logical Firewall

Posted on Jan 17, 2020

The distributed firewall (DFW) is replicated among multiple hosts, in the same way the vSphere Distributed Switch or the logical router is replicated.

With the DFW, it is possible to deploy any multitier application with all tiers in the same Layer 2 broadcast domain, sharing the same subnet and the same default gateway. With the DFW there is no compromise in the reach of vMotion, and if you deploy the application in a logical switch, you don’t need to worry about STP. Security can also be enforced between any two VMs in the same Layer 2 broadcast domain.

The DFW provides the Layer 2, Layer 3, and Layer 4 stateful security to all virtual workloads running in NSX-prepared ESXi hosts, regardless of the virtual switch they connect to. A VM can be connected to a logical switch, a dvPortgroup, or a standard portgroup. If two VMs connect to the same standard portgroup, you can apply whatever security policy you want between the VMs, and the DFW enforces it.

The DFW kernel module inserts itself at slot 2 of the IOChain, which sits between the vNIC and the virtual switch. This means the DFW enforces firewall rules regardless of how the virtual machine connects to the network.

Consider a multitier application where the only allowed traffic is from the users to the web servers and from the web servers to the database servers. The DFW makes it possible for all tiers of the application to reside in the same Layer 2 broadcast domain and the same subnet.

The DFW is composed of firewall rules with source and destination addresses and Ethertypes or Layer 4 protocols, which are then applied to the individual vNIC of a virtual machine. The same DFW rule can be applied to a single vNIC in a VM, all vNICs in the same VM, or the vNICs of multiple VMs. If the DFW fails, it fails closed, blocking all traffic for the impacted vNIC.
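
To make the shape of a rule concrete, here is a minimal sketch in Python of the fields just described: source, destination, service (an Ethertype or a Layer 4 protocol and port), an action, and the vNICs the rule is applied to. The field names and values are hypothetical and for illustration only; this is not the NSX object model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DfwRule:
    """Illustrative model of a DFW rule; all field names are hypothetical."""
    name: str
    source: str            # e.g. an IP address, a security group, or "any"
    destination: str
    service: str           # an Ethertype (L2) or L4 protocol/port, e.g. "TCP/3306"
    action: str            # "allow", "block", or "reject"
    applied_to: List[str] = field(default_factory=list)  # target vNICs

# The same rule can target a single vNIC, every vNIC on one VM,
# or the vNICs of many VMs.
web_to_db = DfwRule(
    name="web-to-db",
    source="WEB_TIER",
    destination="DB_TIER",
    service="TCP/3306",
    action="allow",
    applied_to=["WEB_01-vnic0", "WEB_02-vnic0", "DB_01-vnic0"],
)
print(web_to_db.action)  # allow
```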

Traffic Flow Between Web Servers and Database Servers

Step 1: Web server WEB_02 sends some traffic to database server DB_01.

Step 2: Before the traffic reaches logical switch WEB – Buccaneers, WEB_02’s DFW checks the traffic against the firewall rules.

  1. WEB_02’s DFW notes the traffic is coming from the direction of the web server WEB_02.
  2. The DFW finds a matching rule allowing the traffic, and the traffic is forwarded to the network.

The first frame/packet in the flow is processed by the ESXi host in user space against the table containing the DFW rules, called the DFW Rule Table. If the traffic is allowed, the state of the active connection is recorded using the 5-tuple of Layer 3 source/destination addresses (IP addresses), the protocol, and the Layer 4 source/destination ports (if TCP or UDP), and future frames/packets in the flow are then processed in kernel space. This state table is called the DFW Connection Table. The memory used to record both the DFW rules and the firewall state is attached to the VM’s kernel overhead memory. This simple trick allows vMotion to happen without a dropped ping, since the DFW state for each of the VM’s vNICs moves with the VM.
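
The split between the two tables can be sketched as follows. This is a simplified illustration of the behavior described above, not NSX code: the first packet of a flow walks the ordered rule table; if allowed, its 5-tuple is recorded in a connection table, and later packets in the flow, in either direction, are allowed straight from that recorded state. The IP addresses, ports, and rule entries are made up for the example.

```python
from typing import Dict, List, Tuple

# A flow's 5-tuple: (src_ip, dst_ip, protocol, src_port, dst_port).
FiveTuple = Tuple[str, str, str, int, int]

# Ordered DFW Rule Table; "any" is a wildcard and first match wins.
# The last entry plays the role of the default rule.
RULES: List[dict] = [
    {"src": "172.16.10.12", "dst": "172.16.20.11",
     "proto": "TCP", "dport": 3306, "action": "allow"},   # web -> db
    {"src": "any", "dst": "any",
     "proto": "any", "dport": 0, "action": "block"},      # default rule
]

# DFW Connection Table: state of flows that were allowed.
CONNECTIONS: Dict[FiveTuple, str] = {}

def _matches(rule: dict, ft: FiveTuple) -> bool:
    src, dst, proto, _sport, dport = ft
    return (rule["src"] in ("any", src)
            and rule["dst"] in ("any", dst)
            and rule["proto"] in ("any", proto)
            and rule["dport"] in (0, dport))

def process_packet(ft: FiveTuple) -> str:
    # Fast path ("kernel space"): an existing entry for this flow,
    # in either direction, short-circuits rule evaluation.
    src, dst, proto, sport, dport = ft
    reverse = (dst, src, proto, dport, sport)
    if ft in CONNECTIONS or reverse in CONNECTIONS:
        return "allow"
    # Slow path ("user space"): the first packet walks the rule table.
    for rule in RULES:
        if _matches(rule, ft):
            if rule["action"] == "allow":
                CONNECTIONS[ft] = "established"   # record the flow state
            return rule["action"]
    return "block"   # unreachable: the default rule always matches

# WEB_02 opens a database connection to DB_01: the first packet hits
# the rule table; the reply is allowed purely from the connection table.
print(process_packet(("172.16.10.12", "172.16.20.11", "TCP", 40001, 3306)))  # allow
print(process_packet(("172.16.20.11", "172.16.10.12", "TCP", 3306, 40001)))  # allow
```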

The DFW supports Application Level Gateway (ALG) for the following applications: FTP, CIFS, Oracle TNS, MS-RPC, and SUN-RPC. ALG support allows the DFW to be aware that an application’s return traffic uses different ports from those used to initiate the session. The DFW adds the correct ports to the state tables for the return traffic.
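
As a rough illustration of what an ALG does, the sketch below parses an active-FTP PORT command and pre-authorizes the expected server-to-client data connection, the way the text describes the DFW adding the correct ports to the state tables. This is illustrative Python under simplified assumptions, not the NSX implementation.

```python
from typing import Set, Tuple

# Expected connections pre-opened by the ALG: (src_ip, dst_ip, proto, dst_port).
PINHOLES: Set[Tuple[str, str, str, int]] = set()

def ftp_alg_inspect(payload: str, server_ip: str) -> None:
    """Parse an active-FTP PORT command, e.g. 'PORT 172,16,10,12,156,65',
    and pre-authorize the server-to-client data connection."""
    if not payload.startswith("PORT "):
        return
    fields = payload[5:].strip().split(",")
    client_ip = ".".join(fields[:4])
    client_port = int(fields[4]) * 256 + int(fields[5])
    # The data channel will come back from the server to client_ip:client_port,
    # so the firewall adds it to its state ahead of the first data packet.
    PINHOLES.add((server_ip, client_ip, "TCP", client_port))

ftp_alg_inspect("PORT 172,16,10,12,156,65", server_ip="203.0.113.5")
print(PINHOLES)  # {('203.0.113.5', '172.16.10.12', 'TCP', 40001)}
```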

Step 3: Traffic egresses logical switch DB – Pirates toward the database server DB_01.

Step 4: Before the traffic reaches the database server, DB_01’s DFW checks the traffic against the firewall rules.

  1. DB_01’s DFW notes the traffic is coming from the direction of the network.
  2. The DFW finds a matching rule allowing the traffic, and the traffic is forwarded to DB_01.

The first frame/packet in the flow is processed by the ESXi host in user space, the flow state and allow decision are recorded, and future frames/packets in the flow are processed in kernel space.

Step 5: Database server DB_01 receives the traffic and responds.

Step 6: Before the traffic reaches logical switch DB – Pirates, DB_01’s DFW has an entry in the state table in kernel space for this flow and allows it. Traffic is now forwarded to the network.

Step 7: Logical switch WEB – Buccaneers forwards the traffic to WEB_02.

Step 8: Before the traffic makes it to WEB_02, WEB_02’s DFW processes the traffic using the earlier state entry it made in kernel space and forwards it to WEB_02.

Step 9: WEB_02 receives the traffic.

The ability of the DFW to enforce security granularly at this level, on a per-vNIC basis, is called microsegmentation. Microsegmentation is also leveraged to provide Layer 7 and other security services.

Each DFW processes traffic starting with the first firewall rule and going down. If the traffic matches a rule, the action in the rule is enforced against the traffic, and no more rules are checked; a sketch of this first-match evaluation appears below. If the action of the rule is to block, the traffic is dropped without any fanfare. Firewall rules are created in different ways, and they are enforced in the following order:

  1. Firewall rules created by users have the highest priority and are enforced from top to bottom.
  2. Firewall rules autogenerated by NSX Edges in support of NSX Edge Services.
  3. Firewall rules created by users in the NSX Edges.
  4. Firewall rules created by Service Composer.
  5. The default firewall rule. This rule is always present, and it is the last rule. It can’t be moved.

Of the five kinds of firewall rules in the preceding list, only numbers 1, 4, and 5 are DFW rules. Numbers 2 and 3 are NSX Edge firewall rules.

The default firewall rule is applied to all instances of the DFW. The default action is to Allow so as not to break any existing connectivity to VMs. You should consider changing the action to either Block or Reject after you have finalized your security plan.

There are two default DFW rules: one for Layer 2, and another for Layer 3 and Layer 4. All Layer 2 DFW rules are processed before any Layer 3 and Layer 4 DFW rules are applied.
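
Here is a compressed sketch of this evaluation order, under the assumption (not spelled out above) that a Layer 2 block is final while a Layer 2 allow hands the packet on to the Layer 3/Layer 4 rules. The rule lists are assumed to be pre-ordered as in the numbered list earlier, with the immovable default rule at the bottom; the rule and packet shapes are made up for illustration.

```python
from typing import List, Optional

def first_match(rules: List[dict], packet: dict) -> Optional[str]:
    """Return the action of the first rule the packet matches, else None.
    A field missing from a rule is treated as the wildcard "any"."""
    for rule in rules:
        if all(rule.get(k, "any") in ("any", v) for k, v in packet.items()):
            return rule["action"]
    return None

def evaluate(l2_rules: List[dict], l34_rules: List[dict], packet: dict) -> str:
    # All Layer 2 rules are processed first. A Layer 2 block is final;
    # a Layer 2 allow hands the packet on to the Layer 3/4 rules.
    if first_match(l2_rules, packet) == "block":
        return "block"
    # l34_rules is assumed pre-ordered as in the list above: user-created
    # rules, then Service Composer rules, then the default rule last.
    return first_match(l34_rules, packet) or "block"

l2 = [{"ethertype": "ARP", "action": "allow"},
      {"action": "allow"}]                                  # Layer 2 default rule
l34 = [{"dst": "172.16.20.11", "dport": 3306, "action": "allow"},
       {"action": "block"}]                                 # default rule set to Block
print(evaluate(l2, l34, {"ethertype": "IPv4", "dst": "172.16.20.11", "dport": 3306}))  # allow
```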

DFW Thresholds and Limits

The DFW does consume resources in the ESXi host, and by default the ESXi host monitors DFW resource utilization. There are three default thresholds; if a threshold is reached or crossed 20 consecutive times in any 200-second interval, an alarm is raised. The thresholds are

  • CPU: 100% of the total physical capacity of the ESXi host.
  • Memory: 100% of the DFW allocated memory in the ESXi host. Table 15-3 shows the allocated memory based on the total physical memory in the host.
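
The alarm condition, reached or crossed 20 consecutive times within any 200-second interval, can be sketched as a counter over periodic samples. This is hypothetical Python assuming the host samples utilization at a fixed interval; the actual monitoring mechanism is not described here.

```python
from typing import Optional

class ThresholdMonitor:
    """Alarm when a metric is at or above its threshold for 20 consecutive
    samples, all falling within a 200-second interval."""

    def __init__(self, threshold: float, required: int = 20,
                 window_seconds: float = 200.0):
        self.threshold = threshold
        self.required = required
        self.window = window_seconds
        self.streak = 0
        self.streak_start: Optional[float] = None

    def sample(self, value: float, now: float) -> bool:
        if value < self.threshold:
            self.streak, self.streak_start = 0, None    # streak broken
            return False
        if self.streak == 0:
            self.streak_start = now                     # streak begins
        self.streak += 1
        # Alarm only if the 20 consecutive crossings fit in the window.
        return (self.streak >= self.required
                and now - self.streak_start <= self.window)

# e.g. CPU pegged at 100%, sampled every 5 seconds: the 20th consecutive
# crossing arrives at t=95s, inside the 200-second window, so it alarms.
mon = ThresholdMonitor(threshold=100.0)
print(any(mon.sample(100.0, t) for t in range(0, 100, 5)))  # True
```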

