It's also not quite a good fit for a support case to begin with - so I thought this would be a good place to start. But if you recommend creating a support case, I'll be more than happy to do that instead of using the community, and it might help others with similar challenges.
I've done most of what you describe, but I need the public IP to be the same no matter which firewall responds. I'm not certain, but having a request come in on one public IP and go out on another just seems like a sure way to get into trouble - or is that a wrong assumption? There will also be several DNS entries for a lot of services, and they should all point to the same IP.
Regarding Azure VPN vs. Barracuda VPN: since the Barracuda licenses are VM-core based in Azure, I would prefer to use the cores for firewall functionality rather than VPN, but I haven't done a cost/benefit analysis of it yet.
Troubleshooting this complex setup is probably more than a forum post can handle, but let me answer a couple of your questions that may get you started in the right direction:
"Wrap" the HA cluster in Loadbalancers. Toward the internal resources (peered VNETs) set up a STD internal loadbalancer and use that as the target of the default route in the Azure route tables for the spoke VNETs. Use a loadbalancer to get a single public IP/hostname - this is optional however, if no loadbalancer is added the public IP associated to the active firewall is used for outgoing connections.
Using the Azure VPN gateway in a peered VNET as the connection back to on-prem probably makes the setup a lot more difficult than just configuring the same VPN tunnel in the CGF. You can also call support and work with them to get that set up, then cut over when it is up and running. That will make the routing of your spoke VNETs a lot easier, as everything is pushed through the firewalls.