Refer to these instructions to install and configure the cluster autoscaler package using the Tanzu CLI.
Requirements
- The minimum vSphere version is vSphere 8 U3, including vCenter and ESXi hosts
- The minimum TKr version is TKr 1.27.x for vSphere 8
- The minor version of the TKr and the minor version of the Cluster Autoscaler package must match
Configure the vSphere Namespace
Complete the following prerequisite tasks for provisioning a TKG cluster.
- Install or update your environment to vSphere 8 U3 and TKr 1.27.x for vSphere 8.
- Create or update a content library with the latest Tanzu Kubernetes releases. See Administering Kubernetes Releases for TKG Service Clusters.
- Create and configure a vSphere Namespace for hosting the TKG cluster. See Configuring vSphere Namespaces for Hosting TKG Service Clusters.
- Install the Kubernetes CLI Tools for vSphere.
The following example can be used to install the tools from the command line. For additional guidance, see Install the Kubernetes CLI Tools for vSphere.
wget https://SUPERVISOR-IP-or-FQDN/wcp/plugin/linux-amd64/vsphere-plugin.zip
unzip vsphere-plugin.zip
chmod +x bin/kubectl*
mv bin/kubectl* /usr/bin/
kubectl vsphere --help
rm ~/.kube/config
kubectl vsphere login --insecure-skip-tls-verify --server SUPERVISOR-IP-or-FQDN --tanzu-kubernetes-cluster-namespace VSPHERE-NAMESPACE --vsphere-username VSPHERE-USER
kubectl config use-context VSPHERE-NAMESPACE
- Run kubectl and kubectl vsphere to verify the installation.
Create a TKG Cluster with Autoscaler Annotations
Follow the instructions to create a TKG cluster. For additional guidance, see Workflow for Provisioning TKG Clusters Using Kubectl.
- Authenticate with Supervisor using kubectl.
kubectl vsphere login --server=SUPERVISOR-CONTROL-PLANE-IP-ADDRESS-or-FQDN --vsphere-username USERNAME
- Switch context to the target vSphere Namespace that will host the cluster.
kubectl config use-context tkgs-cluster-namespace
- List the VM classes that are available in the vSphere Namespace.
You can only use VM classes bound to the target vSphere Namespace. See Using VM Classes with TKG Service Clusters.
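One way to list them, sketched here on the assumption that the standard VirtualMachineClass resources exposed by vSphere with Tanzu are available in your Supervisor context (VM-CLASS-NAME is a placeholder for a class from the list):
kubectl get virtualmachineclass
kubectl describe virtualmachineclass VM-CLASS-NAME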
- List the available persistent volume storage classes.
kubectl describe namespace VSPHERE-NAMESPACE-NAME
The command returns details about the vSphere Namespace, including the storage class. The command kubectl describe storageclasses also returns the available storage classes, but requires vSphere administrator permissions.
- List the available Tanzu Kubernetes releases.
kubectl get tkr
This command returns the TKrs available in this vSphere Namespace and their compatibility. See Administering Kubernetes Releases for TKG Service Clusters.
- Use the information you have gleaned to craft a TKG cluster specification YAML file with the required cluster autoscaler configuration.
- Use the *-min-size and *-max-size annotations for the worker nodePools. In this example, 3 is the minimum and 5 is the maximum number of worker nodes that can be scaled; by default the cluster is created with 3 worker nodes.
- Use the matching minor version for the TKr and the autoscaler package.
- The cluster metadata.name and metadata.namespace values used here are consistent with the autoscaler package default values. If you change these values in the cluster spec, you will need to modify them in the autoscaler-data-values (see below).
#cc-autoscaler.yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: tkc
  namespace: cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.0.2.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
      - 198.51.100.0/12
  topology:
    class: tanzukubernetescluster
    controlPlane:
      metadata: {}
      replicas: 3
    variables:
    - name: storageClasses
      value:
      - wcpglobal-storage-profile
    - name: vmClass
      value: guaranteed-medium
    - name: storageClass
      value: wcpglobal-storage-profile
    #minor versions must match
    version: v1.27.11---vmware.1-fips.1-tkg.2
    workers:
      machineDeployments:
      - class: node-pool
        metadata:
          annotations:
            cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "3"
            cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
        name: np-1
- Apply the cluster specification.
kubectl apply -f cc-autoscaler.yaml
- Verify cluster creation.
kubectl get cluster,vm
- Verify cluster node version.
kubectl get node
Create the Package Repository on the TKG Cluster
- Install the Tanzu CLI.
See Install the Tanzu CLI for Use with TKG Service Clusters.
- Log in to the cluster.
rm ~/.kube/config
kubectl vsphere login --insecure-skip-tls-verify --server 192.168.0.2 --tanzu-kubernetes-cluster-namespace autoscaler --vsphere-username administrator@vsphere.local --tanzu-kubernetes-cluster-name cc
kubectl config use-context cc
- Create the package repository.
#Standard package repository URL might change depending on the required cluster autoscaler version
tanzu package repository add standard-repo --url projects.registry.vmware.com/tkg/packages/standard/repo:v2024.4.12 -n tkg-system
tanzu package available list -n tkg-system
tanzu package available get cluster-autoscaler.tanzu.vmware.com -n tkg-system
Install the Autoscaler Package
The cluster autoscaler package is installed in the kube-system namespace.
- Generate the default values.yaml using the Tanzu CLI command.
tanzu package available get cluster-autoscaler.tanzu.vmware.com/1.27.2+vmware.1-tkg.3 -n tkg-system --default-values-file-output values.yaml
- Update the values.yaml for the package installation.
arguments:
  ignoreDaemonsetsUtilization: true
  maxNodeProvisionTime: 15m
  maxNodesTotal: 0
  metricsPort: 8085
  scaleDownDelayAfterAdd: 10m
  scaleDownDelayAfterDelete: 10s
  scaleDownDelayAfterFailure: 3m
  scaleDownUnneededTime: 10m
clusterConfig:
  clusterName: "tkc"
  clusterNamespace: "cluster"
paused: false
- Install the cluster autoscaler package using the Tanzu CLI.
tanzu package install cluster-autoscaler-pkgi -n tkg-system --package cluster-autoscaler.tanzu.vmware.com --version 1.27.2+vmware.1-tkg.3 --values-file values.yaml
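To confirm that the package reconciled successfully, you can check the package install status and the autoscaler workload. This is a sketch; the exact deployment name created in kube-system may differ in your environment.
tanzu package installed get cluster-autoscaler-pkgi -n tkg-system
kubectl get deployments -n kube-system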
Test Cluster Autoscaling
To test cluster autoscaling, deploy an application, increase the number of replicas, and verify that additional worker nodes are scaled out to handle the additional load.
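The following is a minimal sketch of such a test. It assumes the worker nodes are small enough that 20 replicas, each requesting 1 CPU, cannot all be scheduled on the initial 3 nodes; the deployment name, image, and request sizes are illustrative only and not part of the autoscaler package.
#autoscaler-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: autoscaler-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: autoscaler-test
  template:
    metadata:
      labels:
        app: autoscaler-test
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: "1"
            memory: 512Mi
Apply the deployment, scale it up, and watch the node count grow toward the configured maximum of 5 worker nodes.
kubectl apply -f autoscaler-test.yaml
kubectl scale deployment autoscaler-test --replicas=20
kubectl get nodes -w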
Upgrade Autoscaled Cluster
To upgrade an autoscaled cluster, you must first pause the autoscaler package.
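One way to do this, sketched under the assumption that you installed the package with the values.yaml shown above, is to set paused to true in that file and update the installed package.
#in values.yaml, set paused: true, then update the package install
tanzu package installed update cluster-autoscaler-pkgi -n tkg-system --values-file values.yaml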