Contents
- Overview
- Scenario 1 – Traffic hair-pinning using ExpressRoute
- Scenario 2 – Build a virtual tunnel (SD-WAN or IPSec)
- Scenario 3 – vNet Peering and vHub connection coexistence
- Scenario 4 – Transit virtual network for decentralized vNets
- Conclusion
- Bonus
Overview
This article discusses different options for interconnecting a Hub and Spoke network with Virtual WAN in migration scenarios. The goal is to expand on additional options that can help customers migrate their existing Hub and Spoke topology to Azure Virtual WAN.
The comprehensive article Migrate to Azure Virtual WAN covers several considerations for the migration process; this article focuses only on the connectivity that facilitates it. It is therefore important to note that the interconnectivity options listed here are intended for short-term use, ensuring a temporary coexistence of both topologies while the workloads on the Spoke vNets are migrated, with the end goal of disconnecting both environments once the migration is completed.
This article mainly discusses scenarios with a Virtual WAN Secured Virtual Hub; exceptions are noted where applicable. The setup assumes the use of routing intent and route policies, replacing the previous approach of using route tables to secure Virtual Hubs. For more information, please consult: How to configure Virtual WAN Hub routing intent and routing policies.
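As a reference, the following minimal sketch shows how routing intent with both private and internet routing policies could be enabled using the azure-mgmt-network Python SDK. The resource group, hub, and firewall names are hypothetical placeholders, and the exact model and parameter names may vary by SDK/API version; the linked article remains the authoritative guidance.

```python
# Sketch: enable routing intent on a Secured vHub (azure-mgmt-network).
# All names and IDs below are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import RoutingIntent, RoutingPolicy

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

fw_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-vwan"
    "/providers/Microsoft.Network/azureFirewalls/vhub-fw"
)

# Send both private and internet-bound traffic through the hub firewall.
client.routing_intent.begin_create_or_update(
    resource_group_name="rg-vwan",
    virtual_hub_name="vhub-prod",
    routing_intent_name="defaultRoutingIntent",
    routing_intent_parameters=RoutingIntent(
        routing_policies=[
            RoutingPolicy(name="PrivateTraffic", destinations=["PrivateTraffic"], next_hop=fw_id),
            RoutingPolicy(name="PublicTraffic", destinations=["Internet"], next_hop=fw_id),
        ]
    ),
).result()
```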
Scenario 1 – Traffic hair-pinning using ExpressRoute circuits
To begin the migration, ensure that the target Virtual WAN Hub (vHub) includes all necessary components. If the existing hub vNet is equipped with firewalls, SD-WAN appliances, or VPN (Point-to-Site or Site-to-Site), confirm that these components are also present and correctly configured on the target Virtual WAN. Additionally, any migrated Spoke can optionally keep a vNet peering to the original Hub vNet if there are dependencies, such as shared services (DNS, Active Directory, and others). Make sure the peering configuration has Use Remote Gateways disabled, because once the vNet is connected to the vHub, the vHub connection itself takes over the remote gateway role.
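As an illustration, the retained peering can be created or updated with the azure-mgmt-network Python SDK while keeping Use Remote Gateways disabled. This is a minimal sketch; the resource names and the hub vNet ID are hypothetical placeholders.

```python
# Sketch: keep the spoke-to-old-hub peering with Use Remote Gateways disabled.
# Resource names and the hub vNet ID are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SubResource, VirtualNetworkPeering

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.virtual_network_peerings.begin_create_or_update(
    resource_group_name="rg-spokes",
    virtual_network_name="vnet-spoke1",
    virtual_network_peering_name="peer-to-old-hub",
    virtual_network_peering_parameters=VirtualNetworkPeering(
        remote_virtual_network=SubResource(id="<old-hub-vnet-resource-id>"),
        allow_virtual_network_access=True,
        allow_forwarded_traffic=True,   # needed for traffic forwarded by the hub firewall
        allow_gateway_transit=False,
        use_remote_gateways=False,      # the vHub connection takes the remote gateway role
    ),
).result()
```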
In this scenario, traffic between the Hub and Spoke environment and the Virtual WAN Hub is carried over an ExpressRoute circuit connected to both environments. When a single circuit is connected to both, routes are exchanged between the environments and traffic hairpins at the MSEE (Microsoft Enterprise Edge) routers.
This is similar to the approach described in the article Migrate to Azure Virtual WAN.
Connectivity flow:
| Source | Destination | Data Path |
| --- | --- | --- |
| Spoke vNet | Migrated Spoke vNets | 1. vNet Hub Firewall 2. vNet ExpressRoute Gateway 3. MSEE via hairpin 4. vHub ExpressRoute Gateway 5. vHub Firewall |
| Spoke vNet | Branches (VPN/SD-WAN) | 1. vNet Hub Firewall 2. vNet SD-WAN NVA or VPN Gateway |
| Spoke vNet | On-premises DC | 1. vNet Hub Firewall 2. ExpressRoute Gateway 3. ExpressRoute Circuit (MSEE) 4. Provider/Customer |
| Migrated vNet | Branches (VPN/SD-WAN) | 1. vHub Firewall 2. vHub SD-WAN NVA or VPN Gateway |
| Migrated vNet | On-premises DC | 1. vHub Firewall 2. vHub ExpressRoute Gateway 3. ExpressRoute Circuit (MSEE) 4. Provider/Customer |

Note: Return traffic is assumed to follow the same path and components.
Pros
- Traffic stays in the Microsoft Backbone and does not go over the Provider or Customer CPE.
- Built-in routing provided by the Azure platform (configurable; see Considerations).
Cons
- Expect higher latency. Traffic between the vNet Hub and the vHub crosses the MSEE routers, which sit outside the Azure region in a cloud exchange facility, adding latency proportional to the distance from the region.
- Single point of failure. Because the MSEE sits at the edge location, an outage at that site can impact communication. For redundancy, you can use a second MSEE at a different edge location within the same metro area, which also preserves lower latency; a second MSEE in a different metro area provides redundancy as well, although with increased latency.
Considerations
- A new feature has been introduced to block the MSEE hairpin. To enable this scenario, you need to activate Allow Traffic from remote Virtual WAN Networks (on the vNet Hub side) and Allow Traffic from non-Virtual WAN Networks (on the Virtual WAN Hub side); see the sketch below. For more details, refer to this article: Customisation controls for connectivity between Virtual Networks over ExpressRoute.
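The following is a minimal sketch of activating both toggles with the azure-mgmt-network Python SDK. The gateway names are hypothetical, and the property names (allow_virtual_wan_traffic, allow_non_virtual_wan_traffic) are assumptions based on recent API versions that expose these customization controls; verify them against the linked article for your SDK version.

```python
# Sketch: enable the two hairpin-related toggles (azure-mgmt-network).
# Gateway names are hypothetical; the property names are assumptions
# based on recent API versions exposing the customization controls.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# vNet Hub side: "Allow Traffic from remote Virtual WAN Networks".
gw = client.virtual_network_gateways.get("rg-hub", "ergw-hub")
gw.allow_virtual_wan_traffic = True  # assumed property name
client.virtual_network_gateways.begin_create_or_update("rg-hub", "ergw-hub", gw).result()

# Virtual WAN Hub side: "Allow Traffic from non-Virtual WAN Networks".
er_gw = client.express_route_gateways.get("rg-vwan", "ergw-vhub")
er_gw.allow_non_virtual_wan_traffic = True  # assumed property name
client.express_route_gateways.begin_create_or_update("rg-vwan", "ergw-vhub", er_gw).result()
```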
Scenario 2 – Build a virtual tunnel (SD-WAN or IPSec)
The same prerequisites for the target vHub apply to this option before beginning the migration. However, instead of using ExpressRoute transit, in this scenario you establish a direct virtual tunnel between the existing vNet Hub and the vHub to facilitate communication. There are several options for achieving this, including:
- Use the Azure native VPN Gateway on both the vNet Hub and the vHub for IPSec tunnels. Up to four tunnels can be created when the vNet Hub VPN Gateway is configured as Active/Active (vHub VPN Gateways are Active/Active by default). Customers can use either BGP or static routing with this option, but BGP is restricted: if the vNet VPN Gateway is the only gateway in the hub vNet, it can use a custom ASN other than 65515; if another gateway is present, such as ExpressRoute or Azure Route Server, the ASN must be set to 65515. Because the vHub VPN Gateway does not currently allow a custom ASN (65515 is the default), two gateways that both use ASN 65515 cannot establish BGP peering, so static routes are required in that case.
- Use a third-party NVA to establish SD-WAN connectivity or an IPSec tunnel between both sides.
With this option, you can use either static or BGP routing; BGP offers better integration with the vHub and less administrative effort. A minimal gateway sketch follows.
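The sketch below deploys the vNet Hub side VPN gateway as Active/Active with a custom ASN using the azure-mgmt-network Python SDK, assuming no ExpressRoute gateway or Route Server shares the vNet. The names, region, SKU, ASN (65010), and resource IDs are illustrative placeholders, not a definitive implementation.

```python
# Sketch: active/active VPN gateway in the hub vNet with a custom ASN.
# Valid only when no ExpressRoute gateway or Route Server shares the vNet;
# names, region, SKU, ASN, and resource IDs are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    BgpSettings, SubResource, VirtualNetworkGateway,
    VirtualNetworkGatewayIPConfiguration, VirtualNetworkGatewaySku,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

def ip_config(name: str, public_ip_id: str) -> VirtualNetworkGatewayIPConfiguration:
    # One IP configuration per gateway instance (two for active/active).
    return VirtualNetworkGatewayIPConfiguration(
        name=name,
        subnet=SubResource(id="<gateway-subnet-id>"),
        public_ip_address=SubResource(id=public_ip_id),
        private_ip_allocation_method="Dynamic",
    )

client.virtual_network_gateways.begin_create_or_update(
    resource_group_name="rg-hub",
    virtual_network_gateway_name="vpngw-hub",
    parameters=VirtualNetworkGateway(
        location="westeurope",
        gateway_type="Vpn",
        vpn_type="RouteBased",
        vpn_gateway_generation="Generation2",
        sku=VirtualNetworkGatewaySku(name="VpnGw2", tier="VpnGw2"),
        active_active=True,                   # enables the second tunnel endpoint
        enable_bgp=True,
        bgp_settings=BgpSettings(asn=65010),  # custom ASN; must differ from the vHub's 65515
        ip_configurations=[
            ip_config("ipcfg1", "<public-ip-1-id>"),
            ip_config("ipcfg2", "<public-ip-2-id>"),
        ],
    ),
).result()
```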
Connectivity flow:
| Source | Destination | Data Path |
| --- | --- | --- |
| Spoke vNet | Migrated Spoke vNets | 1. vNet Hub Firewall 2. vNet Hub SD-WAN NVA or VPN Gateway 3. vHub SD-WAN NVA or VPN Gateway 4. vHub Firewall |
| Spoke vNet | Branches (VPN/SD-WAN) | 1. vNet Hub Firewall 2. vNet Hub SD-WAN NVA or VPN Gateway |
| Spoke vNet | On-premises DC | 1. vNet Hub Firewall 2. ExpressRoute Gateway 3. ExpressRoute Circuit (MSEE) 4. Provider/Customer |
| Migrated vNet | Branches (VPN/SD-WAN) | 1. vHub Firewall 2. vHub SD-WAN NVA or VPN Gateway |
| Migrated vNet | On-premises DC | 1. vHub Firewall 2. vHub ExpressRoute Gateway 3. ExpressRoute Circuit (MSEE) 4. Provider/Customer |

Note: Return traffic is assumed to follow the same path and components.
Pros
- Traffic remains within the Microsoft backbone in the region, resulting in lower latency compared to Scenario 1.
Cons
- Administrative overhead when using static routes and managing extra network components.
- Cost of adding a new VPN Gateway or third-party NVA to build the virtual tunnel.
- Throughput may be limited by the type of virtual tunnel technology used. This limitation can be mitigated by adding multiple tunnels, which requires BGP + ECMP to balance traffic across them. Note that Azure supports up to eight tunnels, which is the maximum number of routes that can be programmed for the same prefix with different next hops (one per tunnel).
Scenario 3 – vNet Peering and vHub connection coexistence
In this scenario, spoke vNets originally connected to the vNet Hub are migrated to the vHub while maintaining their existing peering with the vNet Hub, but with the Use Remote Gateways configuration disabled. This allows the migrated vNets to retain connectivity with the source vNet Hub while also connecting to the vHub. The connection to the vHub takes the remote gateway role, which directs all traffic towards on-premises through the vHub.
To reach the spokes still connected via the vNet Hub, the migrated vNet needs a UDR with routes to the spoke vNet prefixes using the vNet Hub Firewall as the next hop. Use route summarization for contiguous prefixes, or enter specific prefixes if they are not contiguous. Additionally, keep gateway route propagation enabled in the UDR so migrated Spoke vNets can learn routes from the vHub (RFC 1918, the default route, or both).
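A minimal sketch of such a UDR with the azure-mgmt-network Python SDK follows, assuming a summarized spoke range of 10.1.0.0/16 and an old hub firewall private IP of 10.0.1.4 (both hypothetical).

```python
# Sketch: UDR for the migrated spoke's subnets (azure-mgmt-network).
# The summarized spoke range 10.1.0.0/16 and the old hub firewall IP
# 10.0.1.4 are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Route, RouteTable

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.route_tables.begin_create_or_update(
    resource_group_name="rg-spokes",
    route_table_name="rt-migrated-spoke",
    parameters=RouteTable(
        location="westeurope",
        disable_bgp_route_propagation=False,  # keep gateway route propagation enabled
        routes=[
            Route(
                name="to-old-hub-spokes",
                address_prefix="10.1.0.0/16",    # summarized remaining-spoke range
                next_hop_type="VirtualAppliance",
                next_hop_ip_address="10.0.1.4",  # old hub firewall private IP
            )
        ],
    ),
).result()
# The route table must then be associated with the migrated spoke's subnets.
```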
Connectivity flow:
| Source | Destination | Data Path |
| --- | --- | --- |
| Spoke vNet | Migrated Spoke vNets | 1. vNet Hub Firewall |
| Spoke vNet | Branches (VPN/SD-WAN) | 1. vNet Hub Firewall 2. vNet SD-WAN NVA or VPN Gateway |
| Spoke vNet | On-premises DC | 1. vNet Hub Firewall 2. ExpressRoute Gateway 3. ExpressRoute Circuit (MSEE) 4. Provider/Customer |
| Migrated vNet | Branches (VPN/SD-WAN) | 1. vHub Firewall 2. vHub SD-WAN NVA or VPN Gateway |
| Migrated vNet | On-premises DC | 1. vHub Firewall 2. vHub ExpressRoute Gateway 3. ExpressRoute Circuit (MSEE) 4. Provider/Customer |

Note: Return traffic is assumed to follow the same path and components.
Pros
- Traffic remains within the Microsoft backbone in the region, resulting in lower latency compared to Scenario 1.
- No throughput limitation imposed by virtual tunnels compared to Scenario 2; throughput is limited only by VM size.
Cons
- Administrative overhead of adjusting the UDR to reach the spoke vNets still connected over the vNet Hub.
Scenario 4 – Transit virtual network for decentralized vNets
This use case involves a decentralized virtual network model where each virtual network has its own ExpressRoute Gateway for connectivity to on-premises systems. Traffic between virtual networks is managed using virtual network peering, based on the customer's specific connectivity requirements. Because each virtual network has its own gateway, it cannot be connected directly to the virtual hub, since the vHub connection requires the Use Remote Gateways option to be enabled.
If the customer can tolerate the downtime associated with removing the ExpressRoute Gateway from the migrated vNet, they can establish a direct vNet connection to the vHub, thereby simplifying the solution. This process typically takes approximately 45 minutes, excluding the rollback procedure, which would require an additional 45 minutes, potentially making this approach prohibitive for most customers.
However, customers with existing Azure workloads often aim to minimize downtime. As illustrated in the diagram below, they can create a transit vNet equipped with a firewall or a Network Virtual Appliance (NVA) with routing capabilities. This configuration allows the migrated vNet to establish regular peering, thereby maintaining connectivity without significant disruption.
The solution illustrated in this section uses static route propagation at the vNet connection level towards the Transit vNet, which currently requires a non-secured Virtual WAN hub (support for static route propagation is on the Virtual WAN roadmap).
Alternatively, you can use BGP peering from the firewall or NVA to the vHub to program the migrated vNets' summary prefixes. For firewall implementations with BGP, it is recommended to leverage Next hop IP support for Virtual WAN, where traffic flows through a load balancer to ensure traffic symmetry. In that scenario, you can also use Secured vHubs.
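The sketch below creates the vHub BGP peering to the NVA with the azure-mgmt-network Python SDK. The ASN, peer IP, and connection ID are hypothetical placeholders, and the operations group and model names are assumptions based on recent SDK versions.

```python
# Sketch: BGP peering between the vHub router and the transit-vNet NVA.
# ASN, peer IP, and the connection ID are hypothetical; the operations
# group and model names are per recent azure-mgmt-network versions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import BgpConnection, SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.virtual_hub_bgp_connection.begin_create_or_update(
    resource_group_name="rg-vwan",
    virtual_hub_name="vhub-prod",
    connection_name="bgp-transit-nva",
    parameters=BgpConnection(
        peer_asn=65020,       # NVA's ASN (assumed)
        peer_ip="10.2.0.4",   # NVA IP in the transit vNet (assumed)
        hub_virtual_network_connection=SubResource(
            id="<transit-vnet-hub-connection-id>"
        ),
    ),
).result()
```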
The migration also requires adjusting the routes for the migrated vNet so that traffic flows to on-premises systems through the vHub. This includes using static routes at the connection from the Transit vNet to the vHub to advertise a summary route via the firewall in the Transit vNet, ensuring return traffic and proper advertisement to the on-premises environment (see the sketch after this list). Once the route configurations are established, the ExpressRoute connection can be removed. The customer can then proceed to Step 2, making the final adjustments to complete the full integration with the vHub by following these steps:
1. Remove the ExpressRoute Gateway.
2. Create the vNet connection to the vHub, which allows the migrated vNet prefix to be advertised to the vHub and leaked down to ExpressRoute. Once this step is completed, traffic should start to flow over the vNet connection to the vHub.
3. Remove the vNet peering to the Transit vNet.
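The static route described above can be sketched with the azure-mgmt-network Python SDK as follows, assuming a hypothetical summary prefix of 10.3.0.0/16 for the migrated vNets and a transit firewall/NVA IP of 10.2.0.4.

```python
# Sketch: Transit vNet's vHub connection with a static route that summarizes
# the migrated vNets behind the transit firewall/NVA. The summary prefix
# 10.3.0.0/16 and next-hop IP 10.2.0.4 are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    HubVirtualNetworkConnection, RoutingConfiguration,
    StaticRoute, SubResource, VnetRoute,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.hub_virtual_network_connections.begin_create_or_update(
    resource_group_name="rg-vwan",
    virtual_hub_name="vhub-prod",
    connection_name="conn-transit-vnet",
    hub_virtual_network_connection_parameters=HubVirtualNetworkConnection(
        remote_virtual_network=SubResource(id="<transit-vnet-resource-id>"),
        routing_configuration=RoutingConfiguration(
            vnet_routes=VnetRoute(
                static_routes=[
                    StaticRoute(
                        name="migrated-vnets-summary",
                        address_prefixes=["10.3.0.0/16"],
                        next_hop_ip_address="10.2.0.4",
                    )
                ]
            )
        ),
    ),
).result()
```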
Connectivity flow:
| Source | Destination | Data Path |
| --- | --- | --- |
| vNet1/vNet2 | Migrated Spoke vNets | 1. Direct vNet peering |
| vNet1/vNet2 | Branches (VPN/SD-WAN) | 1. ExpressRoute Gateway 2. ExpressRoute Circuit (MSEE) 3. Provider/Customer 4. VPN/SD-WAN |
| vNet1/vNet2 | On-premises DC | 1. ExpressRoute Gateway 2. ExpressRoute Circuit (MSEE) 3. Provider/Customer |
| Migrated vNet (Step 1) | Branches (VPN/SD-WAN) | 1. Transit Firewall 2. vHub SD-WAN NVA or VPN Gateway |
| Migrated vNet (Step 1) | On-premises DC | 1. Transit Firewall 2. vHub ExpressRoute Gateway 3. ExpressRoute Circuit (MSEE) 4. Provider/Customer |
| Migrated vNet (Step 2) | Branches (VPN/SD-WAN) | 1. vHub SD-WAN NVA or VPN Gateway |
| Migrated vNet (Step 2) | On-premises DC | 1. vHub ExpressRoute Gateway 2. ExpressRoute Circuit (MSEE) 3. Provider/Customer |

Note: Return traffic is assumed to follow the same path and components.
Pros
- Traffic remains on the Microsoft backbone, ensuring minimal latency.
- Avoids the throughput limits associated with the Scenario 2 solution (virtual tunnels).
Cons
- Administrative overhead associated with maintaining the additional transit virtual network, including user-defined route management and vHub vNet Connection static route configuration.
- Costs incurred from operating any supplementary firewalls (FWs) or network virtual appliances (NVAs) in the transit vNet.
Conclusion
This article outlined four strategies for migrating from Hub and Spoke networking to Azure Virtual WAN (ExpressRoute hair-pinning, VPN or SD-WAN virtual tunnels, vNet peering with vHub connections, and transit virtual networks for decentralized vNets), highlighting their pros, cons, and administrative considerations. Assess which approach best fits your needs by weighing each scenario's advantages and drawbacks.
Bonus
The diagrams in Excalidraw format related to this blog post are available in the following GitHub repository.