Christof VG

You don't need to come out of your comfort zone, if automation is in it!

Azure Virtual Datacenter - Part 3 - Firewall deployment

Read time: 10 minutes
Execution time: 10 minutes

Series overview

Introduction

We now have the foundation of our Virtual Datacenter in place. We created a central hub, meant to accommodate centralized services like firewalls, domain controllers, file servers, … . Spoke networks were created for specific workloads that need to be separated from other workloads for security or governance reasons. Another network was created that will be connected through a site-to-site IPsec tunnel to simulate an on-premises network. With all these networks in place, we are ready to implement the centralized firewalls that will inspect and control all east-west traffic (between the spokes and the on-premises network) and north-south traffic (between the internal networks and the internet).

The ARM templates for the deployment are available on my GitHub page, so I won’t put the full files here. We will, however, go deeper into certain parts of the ARM templates in this article where needed.

Prerequisites

If you want to deploy the firewalls as described in this article, I assume that you deployed the networks as described in the previous article. If you did not deploy the networks, the deployments will fail, since the templates depend on them. A quick way to verify that the networks exist is sketched below.
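
A minimal sketch to check the prerequisite virtual networks before deploying. The resource group and VNet names below are placeholders, not the exact names from the previous article, so replace them with the values you used:

# Verify that the hub and spoke virtual networks from the previous article exist.
# The resource group and VNet names are placeholders; use your own values.
$networks = @(
    @{ ResourceGroupName = "rg-hub-network";    Name = "vnet-hub"    },
    @{ ResourceGroupName = "rg-spoke1-network"; Name = "vnet-spoke1" },
    @{ ResourceGroupName = "rg-spoke2-network"; Name = "vnet-spoke2" }
)

foreach ($network in $networks) {
    try {
        $vnet = Get-AzureRmVirtualNetwork -ResourceGroupName $network.ResourceGroupName `
            -Name $network.Name -ErrorAction Stop
        Write-Output "Found $($vnet.Name) ($($vnet.AddressSpace.AddressPrefixes -join ', '))"
    }
    catch {
        Write-Warning "Virtual network $($network.Name) not found - deploy the networks first."
    }
}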

Firewalls

For this lab, we will deploy pfSense firewalls as NVAs (Network Virtual Appliances). These firewalls are very user-friendly and are perfect for learning networking with NVAs in Azure. In a production environment, I would recommend using the pfSense NVAs from the Azure Marketplace, as they are supported by Netgate. Here, we will deploy custom pfSense images, created in another blog series, to enable a high level of automation (the Marketplace pfSense images only have one NIC and PowerShell intervention is needed to add more NICs).

In these articles, I’ll show you how to create a pfSense image that can be used in Azure:

Load balancers

Since we will deploy more than one firewall, we need to load balance the traffic between both firewalls. As we saw in the design decisions in Part 1, we are using Standard load balancers. This newer type of load balancer has many advantages over the Basic load balancer, but the main reason we use it here is its support for HA ports. With HA ports, you can load balance all traffic instead of only a specific set of ports, which is all the Basic load balancer offers.

To load balance traffic on both the trusted and the untrusted side of the firewalls, we will use two separate load balancer instances. It is important to use the same SKU for both load balancers (Standard in this case). SKUs may not be mixed between different resource types either, so the public IP of the untrusted load balancer needs to be a Standard SKU public IP as well. A quick way to check the SKUs after deployment is sketched below.
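
A minimal sketch to verify the SKUs after deployment. The resource names are placeholders for the names in your parameter files:

# Check that the public IP and both load balancers all use the Standard SKU.
# The resource names below are placeholders; use the names from your parameter files.
$resourceGroupName = "rg-hub-firewalls"

$publicIp    = Get-AzureRmPublicIpAddress -ResourceGroupName $resourceGroupName -Name "<name of the untrusted public IP>"
$untrustedLb = Get-AzureRmLoadBalancer -ResourceGroupName $resourceGroupName -Name "<name of the untrusted load balancer>"
$trustedLb   = Get-AzureRmLoadBalancer -ResourceGroupName $resourceGroupName -Name "<name of the trusted load balancer>"

Write-Output ("Public IP SKU:    " + $publicIp.Sku.Name)
Write-Output ("Untrusted LB SKU: " + $untrustedLb.Sku.Name)
Write-Output ("Trusted LB SKU:   " + $trustedLb.Sku.Name)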

Availability Set

The firewalls will be placed in an availability set for high availability. Placing them in an availability set makes sure they are spread over different fault domains and update domains. Different fault domains means the firewalls run on hypervisors in different physical racks, with separate power, cooling and hardware. Different update domains means the virtual machines run on underlying hardware that won’t undergo planned maintenance or reboots at the same time. After deployment, you can verify this placement, as sketched below.
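
A minimal sketch to show the fault domain and update domain of each firewall VM after deployment, assuming placeholder names for the resource group and the firewall virtual machines:

# Show the fault domain and update domain of each firewall VM.
# The resource group and VM names are placeholders; use your own values.
$resourceGroupName = "rg-hub-firewalls"

foreach ($vmName in @("<name of firewall 1>", "<name of firewall 2>")) {
    $vmStatus = Get-AzureRmVM -ResourceGroupName $resourceGroupName -Name $vmName -Status
    Write-Output "$($vmName): fault domain $($vmStatus.PlatformFaultDomain), update domain $($vmStatus.PlatformUpdateDomain)"
}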

Deployment

For the deployment of the resources, we use ARM templates, describing the required resources. But before the templates can be deployed, Azure Resource Groups need to exist. In a later post, I will explain how to deploy the resources, including the resource groups.

The following resources are deployed with the ARM template under HUB firewalls:

Public IP address: The public IP address for the untrusted load balancer
Untrusted load balancer: The load balancer on the WAN side of the firewalls
Trusted load balancer: The load balancer on the LAN side of the firewalls
Availability set: The availability set for the firewall virtual machines
Untrusted NICs: The network interfaces on the WAN side of the firewalls
Trusted NICs: The network interfaces on the LAN side of the firewalls
Virtual machines: The firewall virtual machines
OS disks: The OS disks (imported from the VHD image) that need to be attached to the firewalls

The deployments of the NICs, the virtual machines and the OS disks use copy functions. This allows you to deploy multiple instances of a resource, based on a “number of instances” parameter.

Code snippet for the copy function:

"copy": {
"count": "[parameters('numberOfInstances')]",
"name": "vmCopy"
}

The untrusted load balancer has one load balancing rule, for TCP port 443 (HTTPS). This rule spreads the load on TCP port 443 across all back-end nodes.

"loadBalancingRules": [
{
"name": "[variables('untrustedLoadBalancerHttpsRuleName')]",
"properties": {
"frontendIPConfiguration": {
"id": "[variables('untrustedLoadBalancerFrontEndIPConfigId')]"
},
"backendAddressPool": {
"id": "[variables('untrustedLoadBalancerBackEndPoolId')]"
},
"protocol": "Tcp",
"frontendPort": 443,
"backendPort": 443,
"enableFloatingIP": false,
"idleTimeoutInMinutes": 5,
"probe": {
"id": "[variables('untrustedLoadBalancerProbeId')]"
}
}
}
]

A health probe also checks the status of TCP port 443. If the port is reachable, the node is considered “online”; when it does not answer, the node is considered “offline”.

"probes": [
{
"name": "[variables('untrustedLoadBalancerProbeName')]",
"properties": {
"protocol": "tcp",
"port": 443,
"intervalInSeconds": 5,
"numberOfProbes": 2
}
}
]

The trusted load balancer uses the same health probe (443 TCP), but the ports are different. Since we want to load balance all traffic, HA ports are used:

"loadBalancingRules": [
{
"name": "[variables('trustedLoadBalancerHARuleName')]",
"properties": {
"frontendIPConfiguration": {
"id": "[variables('trustedLoadBalancerFrontEndIPConfigId')]"
},
"backendAddressPool": {
"id": "[variables('trustedLoadBalancerBackEndPoolId')]"
},
"protocol": "All",
"frontendPort": 0,
"backendPort": 0,
"enableFloatingIP": false,
"idleTimeoutInMinutes": 5,
"probe": {
"id": "[variables('trustedLoadBalancerProbeId')]"
}
}
}
]

In this case, “All” is specified to select all protocols (TCP and UDP). A frontendPort and backendPort of 0 mean that all ports are selected.

An important setting on the network interfaces is IP forwarding. This allows an interface to forward traffic that is not destined for its own IP address.

"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Static",
"privateIPAddress": "[concat(parameters('trustedSubnetIpPrefixString'), copyIndex(int(parameters('trustedFirstIpHostString'))))]",
"subnet": {
"id": "[variables('trustedSubnetRef')]"
},
"loadBalancerBackendAddressPools": [
{
"id": "[variables('trustedLoadBalancerBackEndPoolId')]"
}
]
}
}
],
"enableIPForwarding": true
}

The managed disks are imported from the VHD file that was uploaded to blob storage in a previous post. The location is specified using the blob URI.

"properties": {
"osType": "Linux",
"creationData": {
"createOption": "Import",
"sourceUri": "[variables('pfSenseSourceUri')]"
},
"diskSizeGB": 30
}

In the firewall virtual machine resources, the disk created beforehand is attached.

"osDisk": {
"osType": "Linux",
"createOption": "Attach",
"caching": "ReadWrite",
"managedDisk": {
"id": "[resourceId('Microsoft.Compute/disks/', concat(parameters('firewallNamePrefix'), '-', copyIndex(1), '-os'))]"
}
}

Another important setting is to define a primary NIC, since the firewalls have more than one NIC.

"networkInterfaces": [
{
"properties": {
"primary": true
},
"id": "[resourceId('Microsoft.Network/networkInterfaces', concat(parameters('firewallNamePrefix'), '-', copyIndex(1), '-untrusted-nic'))]"
},
{
"properties": {
"primary": false
},
"id": "[resourceId('Microsoft.Network/networkInterfaces', concat(parameters('firewallNamePrefix'), '-', copyIndex(1), '-trusted-nic'))]"
}
]

We will also deploy a management virtual machine. The templates to deploy the management virtual machine can be found in the management folder of the GitHub repository.

This template doesn’t contain anything special. It is just a simple Windows VM, based on the 101-vm-simple-windows quickstart template.

For the deployment, we will use PowerShell.

# Create the resource groups for the resources
New-AzureRmResourceGroup -Name "rg-hub-firewalls" `
    -Location "westeurope"
New-AzureRmResourceGroup -Name "rg-hub-management" `
    -Location "westeurope"

New-AzureRmResourceGroupDeployment -ResourceGroupName "rg-hub-firewalls" `
    -TemplateFile "<path to the firewalls azuredeploy file>" `
    -TemplateParameterFile "<path to the firewalls azuredeploy parameters file>" `
    -pfSenseStorageAccountName "<name of the storage account where the image is stored>" `
    -Verbose
New-AzureRmResourceGroupDeployment -ResourceGroupName "rg-hub-management" `
    -TemplateFile "<path to the management azuredeploy file>" `
    -TemplateParameterFile "<path to the management azuredeploy parameters file>" `
    -adminPassword "<password for the admin account>" `
    -Verbose

Important notice

In both deployments, an extra parameter is specified: pfSenseStorageAccountName for the firewall deployment and adminPassword for the management deployment. The reason for these extra parameters is that we don’t want to store this sensitive information in the parameter files, so we provide it at runtime. A way to avoid typing the password in clear text is sketched below.
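
As a small, hedged refinement (not part of the templates on GitHub): if the adminPassword parameter of the management template is of type securestring, you can prompt for it instead of putting it on the command line. The file paths remain placeholders:

# Prompt for the admin password at runtime instead of typing it in clear text.
# This assumes the template's adminPassword parameter is of type securestring.
$adminPassword = Read-Host -Prompt "Admin password for the management VM" -AsSecureString

New-AzureRmResourceGroupDeployment -ResourceGroupName "rg-hub-management" `
    -TemplateFile "<path to the management azuredeploy file>" `
    -TemplateParameterFile "<path to the management azuredeploy parameters file>" `
    -adminPassword $adminPassword `
    -Verbose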

Configuration

After both the firewalls and the management virtual machine are deployed, we need to perform a few small but important extra actions.

The most important action is to change the admin password of the pfSense appliances, if you didn’t already change it in the image. They are reachable from the internet, so you don’t want a default password on the virtual appliances.

The next important action is the routing to 168.63.129.16. This is a special IP address used by Azure: it is the IP address of the Azure DNS service, but it is also used as the source IP address of the load balancer health probes. The words source IP address are very important here. Because Software Defined Networking is used, packets from this address can appear on any interface where they are needed, and that is exactly what happens with the load balancer health probes. For the untrusted load balancer there is no issue: 168.63.129.16 is treated as a public IP, so when the probe checks the WAN port of a firewall, the answer is routed back towards the internet and everything works. For the trusted side of the load balancers, it is a different story. Without extra configuration, the response to 168.63.129.16 on the trusted side will go through the firewall and be sent out to the internet that way. That is asymmetric routing, which won’t work.

To make this work, we need to perform these steps:

  • Create a gateway on the trusted side of both pfSense instances and point to the subnet gateway of the trusted subnet.
  • Create a static route on the trusted NIC for IP 168.63.129.16, pointing to the gateway created in the previous step.
  • Remove the static route for 168.63.129.16 on the untrusted NIC, which was created by DHCP.

For the configuration of these settings, we can use the management virtual machine. Connect to the public IP of the virtual machine using RDP (a quick way to look up that public IP is sketched after the list below). The IPs of the firewalls are (if you didn’t change the IP addressing in the parameter files):

  • 10.1.0.36 (Firewall 1)
  • 10.1.0.37 (Firewall 2)
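
A minimal sketch to look up the RDP endpoint of the management VM, assuming the resource group from the deployment above and a placeholder name for the public IP resource:

# Look up the public IP address of the management VM for the RDP connection.
# The public IP resource name is a placeholder; use the name from your template.
$managementPip = Get-AzureRmPublicIpAddress -ResourceGroupName "rg-hub-management" `
    -Name "<name of the management VM public IP resource>"
Write-Output ("Connect with RDP to: " + $managementPip.IpAddress)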

Create gateway on the trusted subnet

To create the gateway for the trusted subnet, navigate to System/Routing/Gateways:

Select the LAN interface for the gateway, give it a name and configure the IP address of the gateway of the subnet. In Azure, the first IP address of the subnet is the gateway. In this case it is 10.1.0.33. You cannot ping the gateway by design, so disable gateway monitoring and gateway monitoring action.

Apply the configuration.

Create the static route

Now, with the gateway in place, we can add the static route.

We want to create a static route to 168.63.129.16/32, point it to the gateway we created and give it a description.

Remove the default static route to 168.63.129.16 on the WAN port

First, we need to configure SSH in the advanced settings.

Now we can use PuTTY to connect to the LAN IPs of the pfSense instances. Log in to the machines using SSH and select option 8 in the menu to access a shell.

In the shell, enter the following command to delete the route:

route delete 168.63.129.16

With the route deleted, traffic from 168.63.129.16 arriving directly on the LAN port can be answered, and the back-end comes online, as you can see in the monitoring of the load balancer:
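
If you prefer checking this from PowerShell instead of the portal, here is a hedged sketch. The metric name DipAvailability (health probe status) and the Get-AzureRmMetric call are assumptions based on the Azure Monitor metrics of Standard load balancers and the AzureRM.Insights module; the load balancer name is a placeholder:

# Read the health probe status metric of the trusted load balancer.
# "DipAvailability" is assumed to be the health probe status metric of Standard load balancers.
$trustedLb = Get-AzureRmLoadBalancer -ResourceGroupName "rg-hub-firewalls" -Name "<name of the trusted load balancer>"
$probeMetric = Get-AzureRmMetric -ResourceId $trustedLb.Id -MetricName "DipAvailability"
$probeMetric.Data | Select-Object -Last 5 | Format-Table TimeStamp, Average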

Conclusion

Today, we did a lot of fun stuff! We created the firewalls with their corresponding load balancers. It would be great if I could deliver a fully automated deployment, but I really have no idea how to block the static routes injected by DHCP. Still, we have a nice solution, and in the next post, we will create the on-premises test environment.