By Schneider Larbi, Specialist Solutions Architect at AWS

VMware Cloud on AWS allows customers to seamlessly extend their Layer 2 on-premises networks to the cloud. This is important because you can migrate virtual machines from on-premises Layer 2 networks to VMware Cloud on AWS without changing IP addresses.

Currently, there are two ways to implement network extensions to VMware Cloud on AWS: the standalone NSX Autonomous Edge appliance, and VMware HCX.

You are not required to have NSX on-premises to implement this. If you do have NSX-T on-premises, note that you cannot use the NSX-T edge automatically provisioned by NSX as the client side of an L2VPN (Layer 2 VPN) connecting to your Software Defined Data Center (SDDC), or your VMware Cloud on AWS environment.

In this post, I will explain architectural considerations around extending your on-premises networks to VMware Cloud on AWS. This will allow for hybrid cloud implementations or migration without the need to change your IP addresses.

VMware Cloud on AWS allows organizations to migrate to the cloud so they can evacuate data centers, implement disaster recovery (DR), and modernize applications.

Customers can do this using the same VMware infrastructure components they run on-premises: vCenter for management, ESXi as the hypervisor, NSX for networking, and vSAN for storage. Let’s discuss the options for implementing network extensions with VMware Cloud on AWS.

Establishing Hybrid Connectivity

One way to implement hybrid network connectivity to VMware Cloud on AWS is to use the Autonomous NSX Edge. This is a standalone appliance you deploy into your on-premises VMware cluster. It’s downloaded in OVF format and acts as the L2VPN client that extends your Layer 2 network domain to VMware Cloud on AWS.

Currently, the type of NSX used on VMware Cloud on AWS is NSX-T Data Center. It’s worth noting that only the Autonomous NSX Edge is compatible with NSX-T Data Center for VMware Cloud on AWS; if you download the NSX-V appliances instead, they will not work with NSX-T Data Center in the cloud.

Prior to setting up the Autonomous NSX Edge appliance, ensure you are able to initiate vMotion between the on-premises vCenter and the vCenter for VMware Cloud on AWS. You will need to configure Hybrid Linked Mode between the two vCenter Servers, which links your on-premises vCenter to VMware Cloud on AWS for easy management. Refer to the full checklist in the VMware documentation.

Ensure these prerequisites are met so you can initiate vMotion between your on-premises VMware environment and VMware Cloud on AWS without incurring unnecessary network downtime or incompatibility errors.

When you use the Autonomous NSX Edge to extend your Layer 2 network to VMware Cloud on AWS over AWS Direct Connect (DX) or the public internet, you can also initiate live vMotion of workload virtual machines connected to the extended networks from your on-premises environment to VMware Cloud on AWS.

The second way to implement hybrid networking, or to extend your network to VMware Cloud on AWS, is VMware HCX. This is an application mobility platform designed to simplify application migration, workload rebalancing, and business continuity across data centers and clouds.

When deployed on-premises and paired to the components in your VMware Cloud on AWS environment, VMware HCX allows you to extend your on-premises Layer 2 networks to VMware Cloud on AWS for migration purposes, or to maintain a hybrid cloud architecture.

To achieve hybrid network connectivity with the ability to extend your on-premises networks to VMware Cloud on AWS, you still need to establish connectivity between your on-premises environment and the VMware Cloud on AWS. Learn more about these options in this blog post: Connectivity Options for VMware Cloud on AWS SDDCs.

NSX Edge Appliance Architecture With VMC

For customers who already use AWS Direct Connect, it’s possible to configure the NSX Autonomous Edge over an existing DX connection to VMware Cloud on AWS.

Figure 1 – L2VPN over AWS Direct Connect.

In this design, an AWS Direct Connect Private Virtual Interface (Private VIF) is configured over the DX connection and terminates on a Virtual Private Gateway (VGW). This configuration allows networks from your on-premises environment to be advertised to VMware Cloud on AWS, specifically to your NSX Edge router, using the Border Gateway Protocol (BGP).

Ensure you do not use overlapping Autonomous System Numbers (ASNs) on-premises and in your VMware Cloud on AWS environment for the BGP session configuration; a different number must be used at each site.
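As a pre-deployment sanity check, the ASN rule above can be validated with a few lines of code. This is a minimal sketch with made-up example ASNs; the 64512–65534 bound is the standard 16-bit private ASN range.

```python
def validate_asns(onprem_asn, sddc_asn):
    """Raise ValueError unless both ASNs are private and distinct."""
    for name, asn in (("on-premises", onprem_asn), ("SDDC", sddc_asn)):
        # 64512-65534 is the 16-bit private ASN range.
        if not 64512 <= asn <= 65534:
            raise ValueError(f"{name} ASN {asn} is outside the private range 64512-65534")
    if onprem_asn == sddc_asn:
        raise ValueError("On-premises and SDDC ASNs must differ for the BGP session")

# Example values (assumptions, not from this post): distinct ASNs pass silently.
validate_asns(65001, 65002)
```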

When this underlying configuration is complete, you can now configure your L2VPN over the Direct Connect Private VIF to extend your on-premises networks to your VMware Cloud on AWS environment.

Make sure not to advertise the networks you want to extend using the NSX Autonomous Edge Appliance through BGP over the Direct Connect Private VIF.
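A quick way to verify this rule is to check every network you plan to stretch via L2VPN against the prefixes you advertise over BGP. The sketch below uses Python's standard `ipaddress` module; the prefixes shown are hypothetical examples, not values from this post.

```python
import ipaddress

def find_conflicts(advertised, extended):
    """Return the extended networks that overlap a BGP-advertised prefix.

    Any network returned here must be withdrawn from BGP before it is
    stretched with the NSX Autonomous Edge L2VPN.
    """
    adv = [ipaddress.ip_network(n) for n in advertised]
    return [e for e in extended
            if any(ipaddress.ip_network(e).overlaps(a) for a in adv)]

# Hypothetical example: 10.20.5.0/24 sits inside the advertised 10.20.0.0/16,
# so it conflicts; 192.168.50.0/24 is safe to extend.
conflicts = find_conflicts(["10.10.0.0/16", "10.20.0.0/16"],
                           ["192.168.50.0/24", "10.20.5.0/24"])
```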

The screenshot below shows how to configure the Autonomous Edge over an existing AWS Direct Connect connection.

Figure 2 – L2VPN configuration over AWS Direct Connect.

As displayed in Figure 2, you need to select Private IP as the termination point in VMware Cloud on AWS for the L2VPN configuration. This allows you to use the private or internal IP of the on-premises customer device for the Remote IP configuration for the L2VPN.

This Private IP can only be used over AWS Direct Connect; you cannot use it over an existing VPN connection.

This setting allows your L2VPN traffic to be sent over the AWS Direct Connect link to provide you with a stable, reliable, and low latency connectivity to VMware Cloud on AWS.

For customers who do not use AWS Direct Connect to establish connectivity between on-premises and VMware Cloud on AWS, you can still leverage the NSX Autonomous Standalone Edge to stretch desired networks to VMware Cloud on AWS.

Figure 3 – L2VPN over public internet.

The architecture in Figure 3 allows you to extend your Layer 2 networks using the NSX Autonomous Edge. Similar to the architecture in Figure 1, you deploy your appliance to the on-premises vSphere cluster and configure it to stretch your networks using L2VPN over the public internet to VMware Cloud on AWS.

With this design, it’s important to ensure you are not advertising any of the networks you intend to extend to VMware Cloud on AWS. The notable difference in this architecture is that your VPN terminates on the NSX Edge Router in your VMware Cloud on AWS environment.

This network extension is only applicable to your workload networks within your VMware cluster. If you wish to advertise your management network to VMware Cloud on AWS, you must use a separate VPN tunnel, as depicted in Figure 3.

To make this configuration work, you need to select the Public IP option from the Layer 2 configuration, as depicted below.

Figure 4 – L2VPN configuration over public internet.

VMware Cloud on AWS supports a single Layer 2 VPN tunnel between your on-premises installation and your SDDC; multiple networks can be extended within that tunnel. Follow the VMware documentation to deploy your NSX Autonomous Edge Appliance.

Additionally, an L2VPN using NSX Autonomous Standalone Edge version 2.5.1.0.0 can extend up to 100 of your on-premises networks to the cloud. Plan accordingly if you need to extend more networks than this.

Since the NSX Autonomous Edge is not a managed service, you are responsible for managing this appliance within your on-premises vSphere cluster.

You can utilize vSphere features such as High Availability (HA), Distributed Resource Scheduler (DRS), and Fault Tolerance (FT) to protect the appliance from outages. You can also leverage the built-in backup functionality to back up the appliance’s configuration and restore it when needed.

The autonomous edge appliance has a native Backup/Restore feature that allows you to back up the configuration file and store it away from the cluster. This allows you to quickly deploy a new appliance and restore configurations in a few minutes.

You can also leverage third-party backup solutions to protect your appliance.

Bear in mind that using this method to stretch your network from on-premises to VMware Cloud on AWS can introduce latency into your environment. Your gateway IP will still be on-premises, so ensure the latency is acceptable for your environment.

HCX Network Extension Deployment Considerations

You can extend up to eight networks per HCX Network Extension appliance. If you intend to extend more than eight networks, you’ll have to deploy additional appliances.
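The appliance count is a simple ceiling division. A minimal sketch (the function name is mine, not an HCX API):

```python
import math

def hcx_ne_appliances_needed(network_count, networks_per_appliance=8):
    """Each HCX Network Extension appliance handles up to eight extended networks."""
    return math.ceil(network_count / networks_per_appliance)

# e.g. extending 20 networks requires 3 Network Extension appliances
appliances = hcx_ne_appliances_needed(20)
```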

To properly architect this solution, it’s recommended you always review the configuration limits of the HCX components you deploy.

VMware provides detailed instructions on how to deploy HCX on-premises and pair it to your on-premises environment.

Depending on how much you want to scale HCX, you should plan to have enough compute, memory, and storage resources in your management cluster for the different HCX components that will be deployed on-premises.

Also, you are responsible for managing and maintaining the HCX components, both on-premises and within VMware Cloud on AWS. If you run into issues, you can request support from VMware.

HCX Network Extension Architecture with VMware Cloud on AWS

There are two versions of HCX. One is for on-premises use only and connects two on-premises environments; the other is the cloud version, which provides connectivity between on-premises and the cloud. I will focus on the cloud version.

HCX deploys components to VMware Cloud on AWS and to your on-premises cluster. These components are configured and paired together.

The HCX cloud version is an add-on to VMware Cloud on AWS. The components are managed in the cloud, and you manage the components that get deployed on-premises. Learn how to deploy HCX for VMware Cloud on AWS in this user guide.

HCX also supports AWS Direct Connect. When a DX connection with a Private VIF is configured between on-premises and VMware Cloud on AWS, HCX can be configured to extend your on-premises Layer 2 networks over that DX connection.

Figure 5 – HCX configuration over AWS Direct Connect.

To configure the architecture in Figure 5, you need to reserve IP addresses for your Service Mesh configuration from a network that is advertised over your underlying AWS Direct Connect connection to VMware Cloud on AWS. This configuration is done from the on-premises HCX Manager, and the IP addresses should come from the on-premises pool.

The next step is to log on to the HCX Manager in VMware Cloud on AWS. In the network profile section, VMware provides a network profile named directConnectNetwork1. It’s initially blank, so work with your network team to obtain a network range to be used for the HCX cloud component deployment over DX.

The IP range must not overlap with any other range on-premises or in VMware Cloud on AWS. Also, the range doesn’t have to be a /24; a smaller range may suffice, depending on how many appliances your HCX deployment model requires.
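The overlap requirement can be checked programmatically before you hand the range to the directConnectNetwork1 profile. A minimal sketch using the standard `ipaddress` module; all prefixes below are hypothetical examples:

```python
import ipaddress

def range_is_usable(candidate, existing):
    """True if the proposed range overlaps none of the networks already in use."""
    cand = ipaddress.ip_network(candidate)
    return not any(cand.overlaps(ipaddress.ip_network(n)) for n in existing)

# Hypothetical ranges for illustration only.
in_use = ["10.0.0.0/16",    # example on-premises supernet
          "10.72.0.0/23"]   # example SDDC management range

ok = range_is_usable("172.31.255.0/28", in_use)   # no overlap; a /28 gives 14 usable hosts
bad = range_is_usable("10.0.5.0/28", in_use)      # falls inside 10.0.0.0/16
```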

Once the network range is provided by your network team, edit the directConnectNetwork1 profile and enter that range. Once saved, the network range will automatically be advertised to on-premises over the Direct Connect Private VIF using BGP. Figure 6 below depicts this configuration from the HCX Manager for VMware Cloud on AWS.

Figure 6 – AWS Direct Connect configuration.

After configuring the AWS Direct Connect network profile from the HCX manager in the cloud, configure the on-premises Service Mesh from the HCX Manager. Select the Direct Connect Network Profile from the HCX manager on VMware Cloud on AWS. Then, pair that with your on-premises management network.

Figure 7 – Service Mesh configuration.

When HCX is configured to work over AWS Direct Connect, you can use the Network Extension feature within HCX to extend your Layer 2 broadcast domain from on-premises to VMware Cloud on AWS.

When the L2VPN extension is configured, it creates a VPN tunnel over DX to extend your Layer 2 on-premises networks to VMware Cloud on AWS.

The extension service supports between 4 and 6 Gbps of bandwidth for Layer 2 network extensions. This allows customers to keep the same IP and MAC addresses during a virtual machine migration.

HCX can also be configured over the public internet to stretch your Layer 2 networks on-premises to VMware Cloud on AWS using L2VPN. This is the default configuration for HCX. In this model, AWS Direct Connect is not required.

Figure 8 – HCX configuration over public internet.

Note that in Figure 8 above, a VPN is used to connect between on-premises and VMware Cloud on AWS to exchange routes.

To use HCX without AWS Direct Connect, the IP addresses reserved for the on-premises components of HCX must have internet access to be able to communicate with the public endpoint of the HCX components on VMware Cloud on AWS.

To complete the configuration, you need to request Public IPs on VMware Cloud on AWS. These IPs will be used for the HCX components in the cloud.

After requesting the Public IPs, edit the network profile named externalNetwork in HCX on VMware Cloud on AWS, enter the Public IPs, and save the configuration.

In the Service Mesh configuration from the HCX Manager on-premises, select externalNetwork from the list, as opposed to directConnectNetwork1 as depicted in Figure 7.

The minimum bandwidth requirement for HCX is 100 Mbps between source and destination. This may be enough to vMotion small virtual machines, but if you migrate large, memory- and CPU-intensive virtual machines over such a link, the vMotion operation could fail.
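To see why link speed matters, a back-of-the-envelope transfer-time estimate helps when planning migrations. This is a rough sketch; the 70% efficiency factor is my assumption for protocol and WAN overhead, not an HCX figure:

```python
def transfer_hours(vm_size_gb, link_mbps, efficiency=0.7):
    """Rough bulk-transfer time for a VM's data over a WAN link.

    efficiency discounts protocol/WAN overhead (assumed value; tune for
    your environment). Ignores HCX WAN Optimization, which can help.
    """
    effective_mbps = link_mbps * efficiency
    megabits = vm_size_gb * 8 * 1000          # GB -> megabits (decimal units)
    return (megabits / effective_mbps) / 3600  # seconds -> hours

# A 500 GB VM over a 100 Mbps link takes roughly 16 hours at 70% efficiency;
# the same VM over 1 Gbps takes under 2 hours.
slow = transfer_hours(500, 100)
fast = transfer_hours(500, 1000)
```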

If you’re using HCX to extend your networks for Layer 2 traffic in addition to vMotion and migration traffic, remember to have an acceptable bandwidth between on-premises and the cloud. You can also leverage the WAN Optimization capabilities within HCX. It’s common to see customers using 1Gbps bandwidth and above for network extension and migration operations.

For this reason, I strongly recommend the use of AWS Direct Connect or a high bandwidth internet speed in the architecture to ensure optimal user experience with HCX.

Network extensions allow customers to migrate entire virtual machines from on-premises to VMware Cloud on AWS without changing the IP addresses. For customers who choose to maintain hybrid network connectivity, you can leave the HCX L2VPN network extensions in place.

If you want to migrate everything to VMware Cloud on AWS, you can manually unextend all of the networks and convert them to routed networks in VMware Cloud on AWS. Before doing so, ensure the stretched networks are not being advertised from on-premises to VMware Cloud on AWS over a separate VPN tunnel or AWS Direct Connect.

Conclusion

In this post, I explained architectures that extend on-premises Layer 2 networks to VMware Cloud on AWS using HCX and the NSX Autonomous Standalone Edge, either via AWS Direct Connect or over the public internet.

Using any of these architectures makes it possible for you to establish true hybrid connectivity between your on-premises environment and VMware Cloud on AWS. It also allows you to migrate your virtual machine workloads without changing IP addresses.