Refer to these instructions to install and configure the cluster autoscaler package using the Tanzu CLI.

Requirements

Adhere to the following requirements.
  • The minimum vSphere version is vSphere 8 U3, including vCenter and ESXi hosts
  • The minimum TKr version is TKr 1.27.x for vSphere 8
  • The minor version of the TKr and the minor version of the Cluster Autoscaler package must match
Note: There is a 1-to-1 relationship between the autoscaler package minor version and the TKr minor version. For example, if you are using TKr 1.27.11, you should install v1.27.2 of the autoscaler. If there is a version mismatch, package reconciliation will fail.

Configure the vSphere Namespace

Complete the following prerequisite tasks for provisioning a TKG cluster.

  1. Install or update your environment to vSphere 8 U3 and TKr 1.27.x for vSphere 8.
  2. Create or update a content library with the latest Tanzu Kubernetes releases. See Administering Kubernetes Releases for TKG Service Clusters.
  3. Create and configure a vSphere Namespace for hosting the TKG cluster. See Configuring vSphere Namespaces for Hosting TKG Service Clusters.
  4. Install the Kubernetes CLI Tools for vSphere.

    The following example can be used to install the tools from the command line. For additional guidance, see Install the Kubernetes CLI Tools for vSphere.

    wget https://SUPERVISOR-IP-or-FQDN/wcp/plugin/linux-amd64/vsphere-plugin.zip
    unzip vsphere-plugin.zip
    chmod +x bin/kubectl*
    mv bin/kubectl* /usr/bin/
    kubectl vsphere --help
    rm ~/.kube/config
    kubectl vsphere login --insecure-skip-tls-verify --server SUPERVISOR-IP-or-FQDN --tanzu-kubernetes-cluster-namespace VSPHERE-NAMESPACE --vsphere-username VSPHERE-USER
    kubectl config use-context VSPHERE-NAMESPACE
  5. Run kubectl and kubectl vsphere to verify the installation.

Create a TKG Cluster with Autoscaler Annotations

Follow the instructions to create a TKG cluster. For additional guidance, see Workflow for Provisioning TKG Clusters Using Kubectl.

To use the autoscaler, you must configure the cluster with the autoscaler annotations demonstrated in the cluster specification example provided here. Unlike regular cluster provisioning, you do not hard-code the number of worker node replicas. Kubernetes derives the replica count from the autoscaler minimum and maximum size annotations. Because this is a new cluster, the minimum size is used to create it. For more information, see https://cluster-api.sigs.k8s.io/tasks/automated-machine-management/autoscaling.
  1. Authenticate with Supervisor using kubectl.
    kubectl vsphere login --server=SUPERVISOR-CONTROL-PLANE-IP-ADDRESS-or-FQDN --vsphere-username USERNAME
  2. Switch context to the target vSphere Namespace that will host the cluster.
    kubectl config use-context tkgs-cluster-namespace
  3. List the VM classes that are available in the vSphere Namespace.
    kubectl get virtualmachineclass

    You can only use VM classes that are bound to the target vSphere Namespace. See Using VM Classes with TKG Service Clusters.

  4. List the available persistent volume storage classes.
    kubectl describe namespace VSPHERE-NAMESPACE-NAME

    The command returns details about the vSphere Namespace, including the storage class. The command kubectl describe storageclasses also returns available storage classes, but requires vSphere administrator permissions.

  5. List the available Tanzu Kubernetes releases.
    kubectl get tkr

    This command returns the TKrs available in this vSphere Namespace and their compatibility. See Administering Kubernetes Releases for TKG Service Clusters.

  6. Use the information you have gleaned to craft a TKG cluster specification YAML file with the required cluster autoscaler configuration.
    • Use the *-min-size and *-max-size annotations on the worker nodePools. In this example, 3 is the minimum and 5 is the maximum number of worker nodes that can be scaled. By default, the cluster is created with 3 worker nodes.
    • Use the matching minor version for the TKr and autoscaler package.
    • The cluster metadata.name and metadata.namespace values used are consistent with autoscaler package default values. If you change these values in the cluster spec, you will need to modify them in the autoscaler-data-values (see below).
    #cc-autoscaler.yaml
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: tkc
      namespace: cluster
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 192.0.2.0/16
        serviceDomain: cluster.local
        services:
          cidrBlocks:
          - 198.51.100.0/12
      topology:
        class: tanzukubernetescluster
        controlPlane:
          metadata: {}
          replicas: 3
        variables:
        - name: storageClasses
          value:
          - wcpglobal-storage-profile
        - name: vmClass
          value: guaranteed-medium
        - name: storageClass
          value: wcpglobal-storage-profile
        # minor versions must match
        version: v1.27.11---vmware.1-fips.1-tkg.2
        workers:
          machineDeployments:
          - class: node-pool
            metadata:
              annotations:
                cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "3"
                cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
            name: np-1
  7. Apply the cluster specification.
    kubectl apply -f cc-autoscaler.yaml
  8. Verify cluster creation.
    kubectl get cluster,vm
  9. Verify cluster node version.
    kubectl get node
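From the Supervisor context, you can also confirm that the autoscaled node pool was created at its minimum size. A minimal sketch, assuming the example cluster tkc in the vSphere Namespace cluster:

```shell
# Run from the vSphere Namespace context on Supervisor.
# The MachineDeployment for node pool np-1 should report 3 replicas,
# because a new cluster starts at the min-size annotation value.
kubectl get machinedeployments -n cluster
```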

Create the Package Repository on the TKG Cluster

Once the TKG cluster is provisioned, install the Tanzu CLI and set up the package repository.
  1. Install the Tanzu CLI.

    See Install the Tanzu CLI for Use with TKG Service Clusters.

  2. Log in to the cluster.
    rm ~/.kube/config
    kubectl vsphere login --insecure-skip-tls-verify --server 192.168.0.2 --tanzu-kubernetes-cluster-namespace autoscaler --vsphere-username administrator@vsphere.local --tanzu-kubernetes-cluster-name cc
    kubectl config use-context cc
  3. Create the package repository.
    #Standard package repository URL might change depending on the required cluster autoscaler version
    tanzu package repository add standard-repo --url projects.registry.vmware.com/tkg/packages/standard/repo:v2024.4.12 -n tkg-system
    tanzu package available list -n tkg-system
    tanzu package available get cluster-autoscaler.tanzu.vmware.com -n tkg-system
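Before installing the package, it is worth confirming that the repository reconciled successfully. A sketch using the standard Tanzu CLI status command:

```shell
# The repository should eventually report "Reconcile succeeded";
# until it does, the available package list may be empty
tanzu package repository get standard-repo -n tkg-system
```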

Install the Autoscaler Package

Install the cluster autoscaler package. The cluster autoscaler will be installed in the kube-system namespace.
  1. Generate the default values.yaml using the Tanzu CLI command.
    tanzu package available get cluster-autoscaler.tanzu.vmware.com/1.27.2+vmware.1-tkg.3 -n tkg-system --default-values-file-output values.yaml
  2. Update the values.yaml for the package installation. Ensure that clusterName and clusterNamespace match the metadata.name and metadata.namespace values in the cluster specification.
    arguments:
      ignoreDaemonsetsUtilization: true
      maxNodeProvisionTime: 15m
      maxNodesTotal: 0
      metricsPort: 8085
      scaleDownDelayAfterAdd: 10m
      scaleDownDelayAfterDelete: 10s
      scaleDownDelayAfterFailure: 3m
      scaleDownUnneededTime: 10m
    clusterConfig:
      clusterName: "tkc"
      clusterNamespace: "cluster"
    paused: false
  3. Install the cluster autoscaler package using the Tanzu CLI.
    tanzu package install cluster-autoscaler-pkgi -n tkg-system --package cluster-autoscaler.tanzu.vmware.com --version 1.27.2+vmware.1-tkg.3 --values-file values.yaml
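To confirm the installation, you can check the package reconciliation status and the autoscaler workload. A sketch; the grep pattern assumes the deployment name contains "autoscaler", which may vary by package version:

```shell
# The package install should report "Reconcile succeeded"
tanzu package installed get cluster-autoscaler-pkgi -n tkg-system

# The autoscaler runs as a deployment in kube-system
kubectl get deployments -n kube-system | grep -i autoscaler
```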

Test Cluster Autoscaling

To test cluster autoscaling, deploy an application, increase the number of replicas, and verify that additional worker nodes are scaled out to handle the additional load.

See Test Cluster Autoscaler.
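As a minimal sketch of such a test (the deployment name, image, and CPU request are illustrative assumptions, not values from the linked topic):

```shell
# Deploy a workload whose CPU request prevents all replicas from
# fitting on the initial 3 worker nodes
kubectl create deployment autoscaler-test --image=nginx
kubectl set resources deployment autoscaler-test --requests=cpu=500m

# Scale out; pending pods should trigger the autoscaler to add
# worker nodes, up to the max-size of 5
kubectl scale deployment autoscaler-test --replicas=30

# Watch worker nodes join, and later drain after you scale back down
kubectl get nodes -w
```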

Upgrade Autoscaled Cluster

To upgrade an autoscaled cluster, you must first pause the autoscaler package.

See Upgrade Autoscaled Cluster Using the Tanzu CLI.
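As a sketch of the pause step, assuming the values.yaml used during installation (see the linked topic for the full procedure):

```shell
# Edit values.yaml so that "paused: false" becomes "paused: true",
# then push the change to the installed package before upgrading
tanzu package installed update cluster-autoscaler-pkgi -n tkg-system --values-file values.yaml
```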