
Setting up HA in Azure in a hub-and-spoke setup - but how?

HA Azure hub and spoke loadbalancer




#1 Anders Gregersen

Anders Gregersen
  • Members
  • 5 posts

Posted 04 March 2019 - 08:22 AM

Hi

 

I'm working on an HA setup in Azure in a hub-and-spoke architecture.

 

It consists of 3 virtual networks:

  • a VPN network (spoke) that connects to on-premises resources using Azure VPN (yeah, I know I could use the built-in Barracuda VPN, but that's for the future)
  • a firewall network (hub) that connects all the spokes and that all internet access goes through for Azure resources needing internet access
  • a Kubernetes network (spoke) that hosts Kubernetes
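
The peering between hub and spokes is plain VNET peering; a minimal sketch of one side of it with the azure-mgmt-network Python SDK (subscription, group, and VNET names are placeholders for my setup):

```python
# Sketch only: placeholder names. allow_forwarded_traffic lets the spoke
# accept traffic that the hub firewalls forward on behalf of other networks.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), SUB)

client.virtual_network_peerings.begin_create_or_update(
    "rg-hub", "vnet-hub", "hub-to-k8s",
    {
        "remote_virtual_network": {
            "id": f"/subscriptions/{SUB}/resourceGroups/rg-spoke-k8s"
                  "/providers/Microsoft.Network/virtualNetworks/vnet-k8s",
        },
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
    },
).result()
# The matching spoke-to-hub peering is created the same way from the spoke side.
```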

 

Normally, according to the documentation, I should create a routing table in the firewall network, and the HA setup would, through Cloud Integration, update the routing table with the active node's IP as the next hop of the 0.0.0.0/0 default route. But as soon as that is set up, no traffic goes to the internet.
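
For reference, the route table from that documentation step looks roughly like this, sketched with the azure-mgmt-network Python SDK (names and the node IP are placeholders):

```python
# Sketch only: placeholder group/table names and IP.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Default route in the hub; Cloud Integration is supposed to rewrite
# next_hop_ip_address to the active node's IP on failover.
client.route_tables.begin_create_or_update("rg-hub", "rt-firewall", {
    "location": "westeurope",
    "routes": [{
        "name": "default-to-active-fw",
        "address_prefix": "0.0.0.0/0",
        "next_hop_type": "VirtualAppliance",
        "next_hop_ip_address": "10.0.1.4",  # placeholder: active firewall IP
    }],
}).result()
```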

 

Also, creating a default route in the spokes doesn't seem to work with the Barracuda recommendation (or I might just not get it). Creating a default route in, say, the Kubernetes virtual network only supports a single IP as the target, so it's unclear how I would route traffic to the active node, especially since the HA setup only supports maintaining one routing table, and a routing table cannot, as far as I know, span multiple virtual networks.

 

It's also very unclear how the public IP gets assigned to the firewalls in an HA setup.

 

I've tried adding a public load balancer in front of the HA setup to make sure the same public IP is used, and internet access stops again. (I'm using the antivirus service for probing on 850/TCP, and that works fine once the Azure probe source address is allowed in a firewall rule.)
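
The probe itself is nothing special; a sketch of adding it to the existing public LB (LB name is a placeholder; the probe traffic comes from Azure's fixed 168.63.129.16 source, which is what the firewall rule has to allow):

```python
# Sketch: add a TCP/850 health probe to an existing public LB.
# Probe traffic originates from 168.63.129.16, which must be allowed
# in a firewall rule (as mentioned above).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

lb = client.load_balancers.get("rg-hub", "fw-public-lb")  # placeholder name
lb.probes = list(lb.probes or [])
lb.probes.append({
    "name": "cgf-av-probe",
    "protocol": "Tcp",
    "port": 850,               # antivirus service port used for probing
    "interval_in_seconds": 5,
    "number_of_probes": 2,
})
client.load_balancers.begin_create_or_update("rg-hub", "fw-public-lb", lb).result()
```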

Adding an internal load balancer as well, to overcome the routing problem of having just one target IP address in the spokes, works fine. But since internet access stops working when adding the public load balancer, it really doesn't matter.
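
For the spokes that part looks roughly like this (placeholder names; 10.0.1.100 stands in for the internal LB frontend in the hub):

```python
# Sketch: spoke route table whose default route targets the internal LB
# frontend in the peered hub VNET (10.0.1.100 is a placeholder), then
# attached to the spoke subnet.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rt = client.route_tables.begin_create_or_update("rg-spoke-k8s", "rt-k8s", {
    "location": "westeurope",
    "routes": [{
        "name": "default-via-ilb",
        "address_prefix": "0.0.0.0/0",
        "next_hop_type": "VirtualAppliance",  # the ILB frontend counts as an appliance hop
        "next_hop_ip_address": "10.0.1.100",  # placeholder: ILB frontend IP
    }],
}).result()

subnet = client.subnets.get("rg-spoke-k8s", "vnet-k8s", "workloads")
subnet.route_table = rt
client.subnets.begin_create_or_update("rg-spoke-k8s", "vnet-k8s", "workloads", subnet).result()
```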

 

The default gateway is the .1 in the firewall virtual network, and I'm not sure which DNS IP it's using.

 

I've created a similar setup using Windows and load balancing on HTTP, and that just works, so it's not the load balancer per se that's the problem.

 

Any advice or clues on how to get it working?

 

Cheers, 

Anders



#2 Michael Zoller

Michael Zoller
  • Barracuda Team Members
  • 209 posts

Posted 04 March 2019 - 08:49 AM

Troubleshooting this complex setup is probably more than a forum post can handle, but let me answer a couple of your questions, which may get you started in the right direction:

 

"Wrap" the HA cluster in Loadbalancers. Toward the internal resources (peered VNETs) set up a STD internal loadbalancer and use that as the target of the default route in the Azure route tables for the spoke VNETs. Use a loadbalancer to get a single public IP/hostname - this is optional however, if no loadbalancer is added the public IP associated to the active firewall is used for outgoing connections.

 

Using the Azure VPN gateway in a peered VNET as the connection back to on-prem probably makes the setup a lot more difficult than just configuring the same VPN tunnel in the CGF. You can also call support and work with them to get that set up; then you can cut over when it is up and running. That will make the routing of your spoke VNETs a lot easier, as everything is pushed through the firewalls.



#3 Anders Gregersen

Anders Gregersen
  • Members
  • 5 posts

Posted 04 March 2019 - 11:24 AM   Best Answer

Hi Michael

 

It's also not quite a good fit for a support case to begin with - so I thought this would be a good place to start. But if you recommend creating a support case, I'll be more than happy to do that instead of using the community, and it might help others with similar challenges.

 

I've done most of what you describe, but I need the public IP to be the same no matter which firewall responds. I'm not sure, but having a request come in on one public IP and go out on another just seems like a sure way to get into trouble - but is this a wrong assumption? And there will be several DNS entries for a lot of services, and they should all point to the same IP.

 

Regarding Azure VPN vs. Barracuda VPN: since the Barracuda licenses are VM-core-based in Azure, I would prefer to use the cores for firewall functionality instead of VPN, but I haven't done a cost/benefit analysis of it yet.

 

 

 




#4 Michael Zoller

Michael Zoller
  • Barracuda Team Members
  • 209 posts

Posted 11 March 2019 - 05:06 AM

Happy to do high-level help via the forum; for more detailed/hands-on help, support would be the better route.

 

You can create a setup that uses the external IP of the load balancer, or you can attach PIPs to each of the firewall VMs. If you do both, the firewall VM will use the PIP for outbound connections, but incoming connections via the load balancer are still possible. So if you need one public IP for outbound, remove the PIPs and just use the LB. The downside of this setup is that you'll need a client-to-site VPN to manage both firewalls via their internal IPs, since the passive firewall no longer has a public IP. If you want the best of both worlds, you can also set up PIPs and use Azure Traffic Manager with health checks to get a hostname that always points to the active firewall.
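
A rough sketch of that last option with the azure-mgmt-trafficmanager Python SDK (profile name, DNS names, and the monitored port are placeholders; the point is a TCP health check that only the active firewall answers):

```python
# Sketch only: placeholder names. The monitor port must be one that only
# the active firewall answers, so the health check tracks failover.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient

client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.profiles.create_or_update("rg-hub", "fw-active", {
    "location": "global",
    "traffic_routing_method": "Priority",  # fail over to fw-b only if fw-a is down
    "dns_config": {"relative_name": "fw-active", "ttl": 30},
    "monitor_config": {"protocol": "TCP", "port": 850},  # placeholder port
    "endpoints": [
        {"name": "fw-a",
         "type": "Microsoft.Network/trafficManagerProfiles/externalEndpoints",
         "target": "fw-a.westeurope.cloudapp.azure.com",  # placeholder PIP DNS name
         "priority": 1},
        {"name": "fw-b",
         "type": "Microsoft.Network/trafficManagerProfiles/externalEndpoints",
         "target": "fw-b.westeurope.cloudapp.azure.com",
         "priority": 2},
    ],
})
```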