You can configure and use the NSX Advanced Load Balancer in a vSAN stretched cluster in active/active mode.

NSX Advanced Load Balancer Components

The NSX Advanced Load Balancer includes the following components:
  • NSX Advanced Load Balancer Controller. The Controller is the single point of management and control that manages the lifecycle and configurations of the NSX Advanced Load Balancer Service Engines. It is typically deployed external to the Supervisor.
  • Avi Kubernetes Operator (AKO). The AKO watches Kubernetes resources and communicates with the Controller to request the corresponding load balancing services for Kubernetes services of type LoadBalancer, as illustrated in the sketch after this list.
  • NSX Advanced Load Balancer Service Engines. Service Engines are data plane VMs that implement the virtual services for the load balancer services requested by the Supervisor and Supervisor workloads. They are typically deployed external to the Supervisor and must be routable to the vSphere Namespace Network Distributed Virtual Port Groups on which the workloads reside. It only supports a single-replica deployment.
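As an illustration of the AKO workflow, the following minimal sketch requests a Kubernetes service of type LoadBalancer from a PowerShell session in which kubectl is already logged in to the Supervisor or a workload cluster. The Service name, namespace, selector label, and ports are hypothetical placeholders.

@"
apiVersion: v1
kind: Service
metadata:
  name: demo-app-lb          # hypothetical Service name
  namespace: demo-namespace  # hypothetical vSphere Namespace
spec:
  type: LoadBalancer         # AKO asks the Controller to realize this as a virtual service
  selector:
    app: demo-app            # hypothetical pod label
  ports:
  - port: 80
    targetPort: 8080
"@ | kubectl apply -f -      # the Controller then programs the Service Engines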

For the procedures to install and configure the NSX Advanced Load Balancer, see Installing and Configuring vSphere IaaS Control Plane.

Keep in mind the following considerations and limitations when planning to use the NSX Advanced Load Balancer:
Creating Service Engine Groups
Service Engines are created within a Service Engine Group. Each group acts as an isolation domain, as it defines how the services are sized, placed, and made highly available. vSphere IaaS control plane uses a Default-Group template to configure a Service Engine Group per Supervisor. Currently, the AKO is integrated with the Supervisor such that, when a new service of type LoadBalancer must be reconciled to a Service Engine, the NSX Advanced Load Balancer Controller automatically deploys Service Engines from the Default-Group.
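If you want to confirm which Service Engine Groups the Controller exposes, including the Default-Group, a sketch along the following lines can query the Controller REST API. It is an assumption-laden example: it assumes PowerShell 7 (for -SkipCertificateCheck), that your Controller release accepts HTTP basic authentication, that the X-Avi-Version value matches your release, and that the serviceenginegroup collection returns name, ha_mode, and max_se fields; the Controller FQDN is a placeholder.

$controller = 'avi-controller.example.local'        # hypothetical Controller FQDN
$cred       = Get-Credential                         # Controller admin credentials
$pair       = '{0}:{1}' -f $cred.UserName, $cred.GetNetworkCredential().Password
$basic      = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))

# List the Service Engine Groups; Default-Group is the template used by the Supervisor.
$response = Invoke-RestMethod -Uri "https://$controller/api/serviceenginegroup" `
    -Headers @{ Authorization = "Basic $basic"; 'X-Avi-Version' = '22.1.3' } `
    -SkipCertificateCheck
$response.results | Select-Object name, ha_mode, max_se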
Deploying the NSX Advanced Load Balancer Controller in HA mode
As the Controller is the single point of management and control, it is recommended to deploy it as a three-node cluster. Controller-level HA requires a quorum of nodes to be up: if one Controller node fails, the remaining two nodes stay active, but if two nodes fail, the entire cluster fails. There is no availability advantage in spreading the three Controller nodes across the two sites of a vSAN stretched cluster. The site tolerance remains the same in the following situations:
  • Site 1 has two nodes and site 2 has one node. If site 1 fails, the entire cluster fails; the cluster tolerates a site failure only if site 2 fails, so the probability of tolerating a site failure is 50%.
  • All three nodes are placed on the same site. The cluster tolerates a site failure only if the site with no Controller nodes fails, so the probability of tolerating a site failure is again 50%.

Placing all three Controller nodes in the same site also helps with latency, because the three Controllers constantly exchange information with one another and require a round-trip time of less than 20 milliseconds.
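As a quick sanity check of this requirement, you can measure the round-trip time between the Controller nodes, for example with the following sketch. It assumes PowerShell 7, where Test-Connection reports latency in milliseconds, and the peer IP addresses are hypothetical placeholders.

# Measure the average RTT from this Controller node to its two peers.
$peers = '192.168.10.12', '192.168.10.13'    # hypothetical peer Controller node IPs
foreach ($peer in $peers) {
    $avg = (Test-Connection -TargetName $peer -Count 10 |
            Measure-Object -Property Latency -Average).Average
    '{0}: average RTT {1} ms (requirement: less than 20 ms)' -f $peer, [math]::Round($avg, 1)
}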

Placement of the NSX Advanced Load Balancer Components in an Active/Active Deployment

NSX Advanced Load Balancer Controller
Deploy a set of three NSX Advanced Load Balancer Controllers as an HA cluster on the same site of the vSAN stretched cluster.

Generally, NSX Advanced Load Balancer Controllers are deployed outside the Supervisor or workload cluster, and they might not be deployed on a vSAN stretched cluster that is used only for workloads. However, you can deploy the NSX Advanced Load Balancer Controllers in a stretched vSAN topology.

Due to the Default-Group limitation, if multiple Supervisors share the same NSX Advanced Load Balancer Controller, the Controller reconciles the services from the same Default-Group Service Engine Group, which means that Service Engines are shared across the Supervisors. To avoid sharing Service Engines across Supervisors, you might need to deploy a distinct NSX Advanced Load Balancer Controller for each Supervisor. In this case, the NSX Advanced Load Balancer Controller might run alongside the workloads in the same vSAN stretched cluster where the Supervisor is running.

NSX Advanced Load Balancer Service Engines
Service Engines of the Default-Group can run either on the workload cluster or outside it. In either scenario, deploy the Service Engines evenly across site 1 and site 2 of the vSAN stretched cluster.

Host Affinity Rules for the NSX Advanced Load Balancer Components in an Active/Active Deployment

NSX Advanced Load Balancer Controller
Perform the following steps (a PowerCLI sketch follows the procedure):
  1. Create a VM group with the three Controllers. For example, AviControllerVmGroup.
  2. Create a host group with all the ESXi hosts of site 1. For example, HostGroup-A.
  3. Create a should VM-Host affinity rule between AviControllerVmGroup and HostGroup-A.
  4. If there are at least three ESXi hosts in each site, create a VM-VM anti-affinity rule between the three controller VMs. For more information, see the VCF Documentation.
    Note: Creating an anti-affinity rule when each site has fewer than three hosts can prevent the power-on of one or more Controllers.
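The following PowerCLI sketch shows one possible way to script steps 1 through 4. The vCenter Server, cluster, VM, and host names are hypothetical placeholders; the group and rule names mirror the examples in the steps.

# Connect to vCenter Server (hypothetical FQDN) and collect the objects.
Connect-VIServer -Server 'vcenter.example.local'
$cluster       = Get-Cluster -Name 'vsan-stretched-cluster'   # hypothetical cluster name
$controllerVMs = Get-VM -Name 'avi-controller-*'              # the three Controller VMs
$siteAHosts    = Get-VMHost -Name 'site1-esx-*'               # all ESXi hosts of site 1

# Step 1: VM group with the three Controllers.
$vmGroup = New-DrsClusterGroup -Name 'AviControllerVmGroup' -Cluster $cluster -VM $controllerVMs

# Step 2: host group with all the ESXi hosts of site 1.
$hostGroupA = New-DrsClusterGroup -Name 'HostGroup-A' -Cluster $cluster -VMHost $siteAHosts

# Step 3: "should" VM-Host affinity rule between the two groups.
New-DrsVMHostRule -Name 'AviController-ShouldRun-SiteA' -Cluster $cluster `
    -VMGroup $vmGroup -VMHostGroup $hostGroupA -Type ShouldRunOn

# Step 4: VM-VM anti-affinity rule, only if each site has at least three ESXi hosts.
New-DrsRule -Name 'AviController-AntiAffinity' -Cluster $cluster `
    -KeepTogether:$false -VM $controllerVMs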
NSX Advanced Load Balancer Service Engines
Perform the following steps (a PowerCLI sketch follows the procedure):
  1. Create a VM group for half of the Service Engine VMs of the Default-Group. For example, AviSEVmGroup-A.
  2. Create a VM group for the remaining Service Engine VMs of the Default-Group. For example, AviSEVmGroup-B.
  3. Create a host group with all the ESXi hosts of site 2. For example, HostGroup-B.
  4. Deploy the Service Engines to these groups as described in the Placement section.
  5. Create a should VM-Host affinity rule between AviSEVmGroup-A and HostGroup-A.
  6. Create a should VM-Host affinity rule between AviSEVmGroup-B and HostGroup-B.
  7. Create a VM-VM anti-affinity rule to place the Service Engines on different hosts.
    Note: If the number of Service Engines to be created is greater than the number of ESXi hosts, the anti-affinity rule might prevent VM placement, vMotion, and restart.
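The following PowerCLI sketch shows one possible way to script these rules. It assumes that the cluster and HostGroup-A from the Controller example already exist; the VM and host name patterns are hypothetical placeholders.

$cluster    = Get-Cluster -Name 'vsan-stretched-cluster'   # hypothetical cluster name
$seSiteA    = Get-VM -Name 'avi-se-a-*'                    # half of the Default-Group Service Engines
$seSiteB    = Get-VM -Name 'avi-se-b-*'                    # the remaining Service Engines
$siteBHosts = Get-VMHost -Name 'site2-esx-*'               # all ESXi hosts of site 2

# Steps 1 and 2: one VM group per half of the Service Engines.
$seGroupA = New-DrsClusterGroup -Name 'AviSEVmGroup-A' -Cluster $cluster -VM $seSiteA
$seGroupB = New-DrsClusterGroup -Name 'AviSEVmGroup-B' -Cluster $cluster -VM $seSiteB

# Step 3: host group with all the ESXi hosts of site 2.
$hostGroupB = New-DrsClusterGroup -Name 'HostGroup-B' -Cluster $cluster -VMHost $siteBHosts
# HostGroup-A was created in the Controller example.
$hostGroupA = Get-DrsClusterGroup -Cluster $cluster -Name 'HostGroup-A'

# Step 4 (deploying the Service Engines to the sites) is performed on the
# NSX Advanced Load Balancer side and is not scripted here.

# Steps 5 and 6: "should" VM-Host affinity rules pinning each half to one site.
New-DrsVMHostRule -Name 'AviSE-ShouldRun-SiteA' -Cluster $cluster `
    -VMGroup $seGroupA -VMHostGroup $hostGroupA -Type ShouldRunOn
New-DrsVMHostRule -Name 'AviSE-ShouldRun-SiteB' -Cluster $cluster `
    -VMGroup $seGroupB -VMHostGroup $hostGroupB -Type ShouldRunOn

# Step 7: VM-VM anti-affinity rule spreading the Service Engines across ESXi hosts.
New-DrsRule -Name 'AviSE-AntiAffinity' -Cluster $cluster `
    -KeepTogether:$false -VM (@($seSiteA) + @($seSiteB))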