Refer to this section for a conceptual description of the vSphere host and VM configuration for running the vSphere IaaS control plane on a vSAN stretched cluster topology in active/active deployment mode.
You can operate vSphere IaaS control plane components on a vSAN stretched cluster topology in active/active deployment mode. For details on VM Groups, Host Groups, and VM/Host Rules, refer to the vSphere Resource Management documentation.
Active/Active Deployment Mode
In active/active deployment mode, you balance Supervisor and TKG cluster node VMs across the two vSAN stretched cluster sites using vSphere Host Groups, VM Groups, and VM to Host Affinity Rules. Because both sites are active, VMs can be placed on either site as long as the grouping and balancing rules are respected.
- Host Groups
- In an active/active deployment, create two Host Groups, one for each site. Add participating ESXi hosts to each Host Group.
- Supervisor Control Plane VMs
- Supervisor control plane node VMs must be grouped. Use a VM to Host affinity rule to bind the Supervisor control plane VM Group to either the site 1 or the site 2 Host Group.
- TKG Service Cluster Control Plane VMs
- TKG Service cluster control plane VMs must be grouped. For each cluster, use a VM to Host affinity rule to bind the VM Group to either the site 1 or the site 2 Host Group. If there are multiple clusters, create a VM Group for each cluster control plane and bind each VM Group to a site Host Group in a balanced manner.
- TKG Service Worker Node VMs
- TKG Service cluster worker node VMs should be spread across the two sites. The recommended approach is to create two worker node VM Groups and use a VM to Host affinity rule to bind each VM Group to one of the site Host Groups. Use a round-robin approach to add worker node VMs to each worker VM Group so that worker nodes are distributed across the two sites in a balanced fashion. Ensure that worker nodes in the same node pool are distributed across the two sites. A scripted sketch of these groups and rules follows this list.
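The Host Groups, VM Groups, and VM to Host affinity rules described above are standard vSphere DRS constructs, so they can also be created programmatically. The following Python sketch uses the open-source pyVmomi library to create the two site Host Groups, a Supervisor control plane VM Group, and a should-run VM to Host affinity rule. It is a minimal illustration, not a definitive procedure: the vCenter Server address, credentials, cluster name, VM name prefix, and group and rule names are placeholders that you must adapt to your environment, and the host split shown is only an assumption about which hosts belong to which site.

```python
# Sketch: create site Host Groups, a control plane VM Group, and a
# VM to Host affinity rule with pyVmomi. All names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the vSAN stretched cluster by name (placeholder name).
cluster_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in cluster_view.view if c.name == "vsan-stretched-cluster")

# Split the ESXi hosts between the two sites (assumption: first half / second half).
hosts = list(cluster.host)
site1_hosts, site2_hosts = hosts[: len(hosts) // 2], hosts[len(hosts) // 2:]

# Collect the Supervisor control plane VMs (placeholder name prefix).
vm_view = content.viewManager.CreateContainerView(cluster, [vim.VirtualMachine], True)
cp_vms = [v for v in vm_view.view if v.name.startswith("SupervisorControlPlaneVM")]

spec = vim.cluster.ConfigSpecEx(
    groupSpec=[
        vim.cluster.GroupSpec(operation="add", info=vim.cluster.HostGroup(
            name="site-1-host-group", host=site1_hosts)),
        vim.cluster.GroupSpec(operation="add", info=vim.cluster.HostGroup(
            name="site-2-host-group", host=site2_hosts)),
        vim.cluster.GroupSpec(operation="add", info=vim.cluster.VmGroup(
            name="supervisor-cp-vm-group", vm=cp_vms)),
    ],
    rulesSpec=[
        vim.cluster.RuleSpec(operation="add", info=vim.cluster.VmHostRuleInfo(
            name="supervisor-cp-to-site-1",
            enabled=True,
            mandatory=False,          # "should run on", not "must run on"
            vmGroupName="supervisor-cp-vm-group",
            affineHostGroupName="site-1-host-group")),
    ],
)
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```

The same pattern applies to the TKG Service cluster control plane and worker node VM Groups: add further VmGroup entries and one VmHostRuleInfo per group, binding each group to a site Host Group as described above.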
Active/Active Deployment Example
- vSAN stretched cluster with 6 ESXi hosts
- Supervisor is deployed on a single vSphere Zone
- TKG cluster 1 is provisioned with 3 control plane nodes, 1 worker node pool, and 3 worker nodes
- TKG cluster 2 is provisioned with 3 control plane nodes, 1 worker node pool, and 2 worker nodes
- TKG cluster 3 is provisioned with 3 control plane nodes and 2 worker node pools: pool 1 has 3 worker nodes, pool 2 has 4 worker nodes
Site 1 | Site 2 |
---|---|
Host Group 1 with 3 ESXi hosts | Host Group 2 with 3 ESXi hosts |
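To illustrate the round-robin balancing recommended for worker nodes, the following plain Python sketch distributes the worker nodes of the example clusters into two site-aligned worker VM Groups, alternating within each node pool so that every pool spans both sites. The node names are hypothetical; actual node names are generated by the TKG Service.

```python
# Round-robin the example worker nodes across two site VM Groups.
# Pool names and node names are hypothetical illustrations of the example above.
pools = {
    "tkg-cluster-1-pool-1": 3,
    "tkg-cluster-2-pool-1": 2,
    "tkg-cluster-3-pool-1": 3,
    "tkg-cluster-3-pool-2": 4,
}

site_groups = {"site-1-worker-vm-group": [], "site-2-worker-vm-group": []}
sites = list(site_groups)

for pool, count in pools.items():
    for i in range(count):
        # Alternate sites within each pool so every pool spans both sites.
        site_groups[sites[i % 2]].append(f"{pool}-worker-{i}")

for group, members in site_groups.items():
    print(group, len(members), members)
```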
Default Host Affinity Rules for vSphere IaaS control plane Components
- Supervisor Control Plane VMs
- Supervisor control plane VMs have an anti-affinity relationship with each other and are placed on separate ESXi hosts. The system allows one Supervisor control plane VM per ESXi host, so a minimum of three ESXi hosts is required, with four recommended to accommodate upgrades.
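If you want to confirm this placement, you can list the cluster's DRS rules and the host each Supervisor control plane VM currently runs on. The following pyVmomi sketch assumes the same placeholder vCenter Server, cluster name, and "SupervisorControlPlaneVM" name prefix as the earlier example; the rule and VM names reported by your system may differ.

```python
# Sketch: list DRS rules and the current host of each Supervisor
# control plane VM to verify the default anti-affinity placement.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
cluster_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in cluster_view.view if c.name == "vsan-stretched-cluster")

# Affinity, anti-affinity, and VM to Host rules all appear in configurationEx.rule.
for rule in cluster.configurationEx.rule:
    print(type(rule).__name__, rule.name, "enabled:", rule.enabled)

# Show which ESXi host each Supervisor control plane VM currently runs on.
vm_view = content.viewManager.CreateContainerView(cluster, [vim.VirtualMachine], True)
for vm in vm_view.view:
    if vm.name.startswith("SupervisorControlPlaneVM"):
        print(vm.name, "->", vm.runtime.host.name)

Disconnect(si)
```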
Custom VM Groups and Rules Are Deleted on Update of vSphere IaaS control plane Components
On update of vCenter Server or Supervisor, the control plane VM Group and VM to Host Affinity Rule will be deleted. You will need to manually recreate the group and rule after the update completes.
On update of a TKG Service cluster, the VM Groups and VM to Host Affinity Rules you have created for control plane and worker nodes will be deleted. You will need to manually recreate the groups and rules after the update completes. Note that rolling updates of clusters can be initiated manually or automatically by the system. See Understanding the Rolling Update Model for TKG Clusters on Supervisor.
If you do not recreate the groups and rules after a system update, the behavior of vSphere IaaS control plane in a vSAN stretched cluster topology is undefined and not supported.
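Because updates remove the custom groups and rules, it can help to run a post-update check before recreating them. The sketch below, again using pyVmomi with the placeholder cluster, group, and rule names from the earlier examples, reports which expected groups and rules are missing from the cluster configuration; adjust the expected names to match what you actually created.

```python
# Sketch: after a vCenter Server, Supervisor, or TKG cluster update,
# report which custom VM/Host groups and rules need to be recreated.
# Group and rule names below are placeholders from the earlier examples.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

EXPECTED_GROUPS = {"site-1-host-group", "site-2-host-group",
                   "supervisor-cp-vm-group",
                   "site-1-worker-vm-group", "site-2-worker-vm-group"}
EXPECTED_RULES = {"supervisor-cp-to-site-1"}

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
cluster_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in cluster_view.view if c.name == "vsan-stretched-cluster")

existing_groups = {g.name for g in cluster.configurationEx.group}
existing_rules = {r.name for r in cluster.configurationEx.rule}

for name in sorted(EXPECTED_GROUPS - existing_groups):
    print("missing group, recreate:", name)
for name in sorted(EXPECTED_RULES - existing_rules):
    print("missing rule, recreate:", name)

Disconnect(si)
```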