Refer to these instructions to provision a TKG cluster based on a custom ClusterClass. Note that these instructions are specific to vSphere 8 U2 and later environments.

Prerequisites

The procedure for provisioning a TKG cluster based on a custom ClusterClass is updated for the vSphere 8 U2 release.

Adhere to the following prerequisites.
  • vSphere 8 U2+ environment
  • Workload Management enabled
  • Supervisor configured
  • Ubuntu client with Kubernetes CLI Tools for vSphere installed
Attention: Custom ClusterClass is an experimental Kubernetes feature per the upstream Cluster API documentation. Due to the range of customizations available with custom ClusterClass, VMware cannot test or validate all possible customizations. Customers are responsible for testing, validating, and troubleshooting their custom ClusterClass clusters. Customers can open support tickets regarding their custom ClusterClass clusters; however, VMware support is limited to a best-effort basis and cannot guarantee resolution of every issue opened for custom ClusterClass clusters. Customers should be aware of these risks before deploying custom ClusterClass clusters in production environments.

High-Level Workflows

The high-level workflows are as follows.

To get started, complete the following workflow.
Step | Task | Instructions
1 | Create a custom ClusterClass by cloning the default ClusterClass. | 1: Create a Custom ClusterClass
2 | Provision a new TKG cluster based on the custom ClusterClass and verify that all cluster nodes come up properly. | 2: Create a TKG Cluster Based on the Custom ClusterClass
Refer to the following workflow to make changes to the custom ClusterClass and initiate a rolling update of custom ClusterClass cluster nodes.
Note: The operation demonstrated in the following workflow is an example of what you can do with a custom ClusterClass. Your use cases may differ, but the general workflow should be applicable.
Step | Task | Instructions
3 | SSH into one of the worker nodes to confirm that there are packages to be updated. | 3: Verify the Existence of Package Updates
4 | Update the custom ClusterClass with a new command that performs the updates. | 4: Update the Custom ClusterClass
5 | Confirm the rollout of new nodes with the updates already applied. | 5: Verify Rolling Update of Cluster Nodes

1: Create a Custom ClusterClass

In the first part, you create a custom ClusterClass named ccc (short for custom ClusterClass) by cloning the default ClusterClass, which is named tanzukubernetescluster.
Note: The custom ClusterClass name is user-defined. If you use a different name, adjust the instructions accordingly.
  1. Create and configure a vSphere Namespace named ccc-ns.

    Configure permissions, storage, content library, and VM classes. Refer to the documentation as needed.

    Note: The vSphere Namespace name is user-defined. If you use a different name, adjust the instructions accordingly.
  2. Log in to Supervisor.
    kubectl vsphere login --server=IP-ADDRESS --vsphere-username USER@vsphere.local
  3. Write the output of the default ClusterClass to a file named ccc.yaml.
    kubectl -n ccc-ns get clusterclass tanzukubernetescluster -o yaml > ccc.yaml
    Or, using the short resource name cc:
    kubectl -n ccc-ns get cc tanzukubernetescluster -o yaml > ccc.yaml
  4. Open the cloned ClusterClass file for editing.
    vim ccc.yaml
  5. Edit the file ccc.yaml. (A scripted alternative to steps 3 through 6 is shown after this procedure.)
    • Delete the line metadata.creationTimestamp
    • Delete the line metadata.generation
    • Delete the line metadata.resourceVersion
    • Delete the line metadata.uid
    • Change the metadata.name value from tanzukubernetescluster to ccc
    • Leave metadata.namespace value as-is: ccc-ns
    • Leave the metadata.annotations entry run.tanzu.vmware.com/resolve-tkr: "" as-is. This annotation is required for TKR resolution.
  6. Save and verify the changes.
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: ClusterClass
    metadata:
      annotations:
        run.tanzu.vmware.com/resolve-tkr: ""
      name: ccc
      namespace: ccc-ns
    spec:
    ...
  7. Create the custom ClusterClass object.
    kubectl apply -f ccc.yaml -n ccc-ns
    Expected result:
    clusterclass.cluster.x-k8s.io/ccc created
  8. List the custom ClusterClass.
    kubectl get cc -n ccc-ns
    Expected result:
    NAME                     AGE
    ccc                      3m14s
    tanzukubernetescluster   29m
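If you prefer to script steps 3 through 6, the following is a minimal sketch that produces an equivalent ccc.yaml in one pass. It assumes the yq v4 CLI (mikefarah) is installed on the Ubuntu client; adjust the namespace and names if yours differ.
    # Clone the default ClusterClass, strip the server-managed metadata fields, and rename it to ccc.
    kubectl -n ccc-ns get clusterclass tanzukubernetescluster -o yaml \
      | yq eval 'del(.metadata.creationTimestamp, .metadata.generation, .metadata.resourceVersion, .metadata.uid) | .metadata.name = "ccc"' - \
      > ccc.yaml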
    

2: Create a TKG Cluster Based on the Custom ClusterClass

Use the Cluster v1beta1 API to create a Cluster based on a ClusterClass.
  1. Construct the ccc-cluster.yaml manifest to provision the cluster.
    #ccc-cluster.yaml
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: ccc-cluster
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 192.0.2.0/16
        services:
          cidrBlocks:
          - 198.51.100.0/12
        serviceDomain: cluster.local
      topology:
        class: ccc
        version: v1.26.5---vmware.2-fips.1-tkg.1
        controlPlane:
          replicas: 1
        workers:
          machineDeployments:
            - class: node-pool
              name: tkgs-node-pool-1
              replicas: 1
        variables:
        - name: vmClass
          value: guaranteed-small
        - name: storageClass
          value: tkg-storage-profile
    Where:
    • The metadata.name value is the name of the cluster: ccc-cluster
    • The spec.topology.class value is the name of the custom ClusterClass: ccc
    • The spec.topology.version value is the TKR version
    • The spec.topology.variables.vmClass value is the name of a VM class associated with the vSphere Namespace: guaranteed-small
    • The spec.topology.variables.storageClass value is the name of the persistent storage class: tkg-storage-profile
    Commands to list the values available in your environment for the TKR version, VM class, and storage class are shown after this procedure.
    Note: For testing purposes, 1 replica is sufficient for the control plane and the worker node pool. In production, use 3 replicas for the control plane and at least 3 replicas for each worker node pool.
  2. Create the TKG cluster based on the custom ClusterClass.
    kubectl apply -f ccc-cluster.yaml -n ccc-ns
    Expected result:
    cluster.cluster.x-k8s.io/ccc-cluster created
  3. Verify cluster provisioning.
    Run the following command. Wait for all cluster nodes to come up properly.
    kubectl -n ccc-ns get cc,clusters,vsphereclusters,kcp,machinedeployment,machineset,machine,vspheremachine,virtualmachineservice
    Note: It will be helpful to run this command in a separate session so that you can monitor the rolling update progress in Step 5.
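Before constructing the manifest in step 1, you can check which values your environment provides for the TKR version, VM class, and storage class variables. The following queries are a minimal sketch; depending on your vSphere release, VM classes may be reported per namespace or cluster-wide, and listing storage classes may require sufficient permissions on the Supervisor.
    # List the TKR versions available to the Supervisor (used for spec.topology.version).
    kubectl get tkr
    # List the VM classes visible from the namespace (used for the vmClass variable).
    kubectl get virtualmachineclasses -n ccc-ns
    # List the storage classes (used for the storageClass variable).
    kubectl get storageclass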

3: Verify the Existence of Package Updates

SSH into one of the worker nodes to confirm that there are packages to be updated.
Note: The objective of this step is simply to confirm that there are packages to update, not to actually update them. They will be updated by the custom ClusterClass when new cluster nodes are rolled out (in the steps following this one). This step and the ones that follow are meant to be an example of what you can do with a custom ClusterClass.
  1. Run the following command to get the SSH secret.
    export CC=ccc-cluster && kubectl get secret -n ccc-ns ${CC}-ssh -o jsonpath={.data.ssh-privatekey} | base64 -d > ${CC}-ssh && chmod 600 ${CC}-ssh
  2. Run the following command to get the IP address of the worker node VM.
    kubectl -n ccc-ns get vm -o wide
    Note: If you deployed multiple worker nodes, pick one. Do not use a control plane node. (A filtering example is shown after this procedure.)
  3. Run the following command to SSH into the worker node VM.
    ssh -i ${CC}-ssh vmware-system-user@IP-ADDRESS-OF-WORKER-NODE
    For example:
    ssh -i ${CC}-ssh vmware-system-user@192.168.128.55
    Note: Enter "yes" to continue connecting.
    Expected result: After SSHing into the host, you should see the following message.
    tdnf update info not availble yet!
  4. Run the following commands and check for updates.
    sudo -i
    tdnf update
  5. At the prompt, enter "N" for no (do not update).
    Expected result:
    Operation aborted
    Note: The purpose here is simply to check for the existence of updates, not to initiate updates. You will initiate the updates by adding a command to the custom ClusterClass in the next section.
  6. Type "exit" to log out of the SSH session, then type "exit" again.
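If the namespace contains several VMs, you can narrow the listing from step 2 to the worker nodes by filtering on the node pool name. This sketch assumes the worker VM names include the machine deployment name tkgs-node-pool-1, which is the naming convention for TKG cluster nodes.
    # Show only the worker node VMs and their IP addresses.
    kubectl -n ccc-ns get vm -o wide | grep tkgs-node-pool-1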

4: Update the Custom ClusterClass

Update the custom ClusterClass with a new command that performs a tdnf update.
  1. Open the custom ClusterClass named ccc for editing.
    kubectl edit cc ccc -n ccc-ns
  2. Scroll down to the following section containing the postKubeadmCommands patches.
      - definitions:
        - jsonPatches:
          - op: add
            path: /spec/template/spec/kubeadmConfigSpec/postKubeadmCommands
            valueFrom:
              template: |
                - touch /root/kubeadm-complete
                - vmware-rpctool 'info-set guestinfo.kubeadm.phase complete'
                - vmware-rpctool 'info-set guestinfo.kubeadm.error ---'
          selector:
            apiVersion: controlplane.cluster.x-k8s.io/v1beta1
            kind: KubeadmControlPlaneTemplate
            matchResources:
              controlPlane: true
        - jsonPatches:
          - op: add
            path: /spec/template/spec/postKubeadmCommands
            valueFrom:
              template: |
                - touch /root/kubeadm-complete
                - vmware-rpctool 'info-set guestinfo.kubeadm.phase complete'
                - vmware-rpctool 'info-set guestinfo.kubeadm.error ---'
          selector:
            apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
            kind: KubeadmConfigTemplate
            matchResources:
              machineDeploymentClass:
                names:
                - node-pool
        name: controlPlanePostKubeadmCommandsSuccess
    
    Add the following command to both valueFrom.template fields.
    - tdnf update -y
    For example:
      - definitions:
        - jsonPatches:
          - op: add
            path: /spec/template/spec/kubeadmConfigSpec/postKubeadmCommands
            valueFrom:
              template: |
                - touch /root/kubeadm-complete
                - vmware-rpctool 'info-set guestinfo.kubeadm.phase complete'
                - vmware-rpctool 'info-set guestinfo.kubeadm.error ---'
                - tdnf update -y
          selector:
            apiVersion: controlplane.cluster.x-k8s.io/v1beta1
            kind: KubeadmControlPlaneTemplate
            matchResources:
              controlPlane: true
        - jsonPatches:
          - op: add
            path: /spec/template/spec/postKubeadmCommands
            valueFrom:
              template: |
                - touch /root/kubeadm-complete
                - vmware-rpctool 'info-set guestinfo.kubeadm.phase complete'
                - vmware-rpctool 'info-set guestinfo.kubeadm.error ---'
                - tdnf update -y
          selector:
            apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
            kind: KubeadmConfigTemplate
            matchResources:
              machineDeploymentClass:
                names:
                - node-pool
        name: controlPlanePostKubeadmCommandsSuccess
    
  3. Save the changes to the custom ClusterClass and close the editor.
    :wq
    Expected result:
    clusterclass.cluster.x-k8s.io/ccc edited
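Before watching for the rollout, you can optionally confirm that the edit was persisted by re-reading the stored object and counting occurrences of the added command; both valueFrom.template fields should contain it.
    # Expect a count of 2, one per patched template.
    kubectl -n ccc-ns get cc ccc -o yaml | grep -c "tdnf update -y"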

5: Verify Rolling Update of Cluster Nodes

Updating the custom ClusterClass triggers a rolling update of cluster nodes for clusters provisioned based on that ClusterClass. The new nodes come up with the above command already applied.
  1. Verify cluster provisioning by running the following command.
    Wait for all cluster nodes to come up properly.
    kubectl -n ccc-ns get cc,clusters,vsphereclusters,kcp,machinedeployment,machineset,machine,vspheremachine,virtualmachineservice
  2. You should see that new nodes, with new UUIDs, are deployed. (An optional watch command is shown after this procedure.)
  3. Run the following command to SSH into one of the new worker node VMs. Because the nodes are new VMs, get the current IP address again with kubectl -n ccc-ns get vm -o wide.
    ssh -i ${CC}-ssh vmware-system-user@IP-ADDRESS-OF-WORKER-NODE
    Expected result: After SSHing into the host, you should see the following message.
    tdnf update info not availble yet!
  4. Run the following commands.
    sudo -i
    tdnf update

    Expected result: You should see far fewer packages that need to be updated.

  5. At the prompt, enter "N" for no (do not update).
    Expected result:
    Operation aborted
  6. Run the following command to confirm that tdnf was run.
    cat /var/log/cloud-init-output.log | grep -i tdnf
  7. Type "exit" to log out of the SSH session, then type "exit" again.
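To follow the node replacement without repeatedly re-running the long command from step 1, you can optionally watch the Cluster API objects directly; this is a convenience, not part of the documented procedure.
    # Watch Machine objects as old nodes are deleted and new ones become Running.
    kubectl -n ccc-ns get machines -w
    # Or check the MachineDeployment and control plane rollout counters at a glance.
    kubectl -n ccc-ns get machinedeployments,kcp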

Maintaining a Custom ClusterClass

After upgrading the TKG Service to a new version, you must ensure that your custom ClusterClass, which was derived from the previous version's default ClusterClass, is updated with the changes to the default ClusterClass shipped with the new TKG Service version.

Use the following workflow to keep your custom ClusterClass in sync with the system-provided ClusterClass. Note that these instructions assume you have created an initial custom ClusterClass as described herein.
  1. Upgrade the TKG Service version.

    For example, upgrade from TKG Service v3.0 to v3.1.

    See Installing and Upgrading the TKG Service.

  2. Create a new custom ClusterClass by following the instructions herein.

    Manually version the new custom ClusterClass by appending the TKG Service version to its name, for example ccc-3.1.

  3. Add the custom patches and variables from the previous custom ClusterClass to the new custom ClusterClass.

    To do this, cat ccc.yaml and copy the custom patches and variables from it to ccc-3.1.yaml.

  4. Apply the new custom ClusterClass and wait for the reconciliation to succeed.
  5. Update TKG clusters using the previous custom ClusterClass to the new custom ClusterClass by editing the spec.topology.class field in the Cluster object, as shown in the example below.
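As a minimal sketch of the class change in step 5, you can apply a merge patch instead of editing the Cluster object interactively; ccc-3.1 here is the example name used above for the new custom ClusterClass.
    # Point the cluster at the new custom ClusterClass; if the underlying templates changed,
    # this triggers a rolling update of the cluster nodes.
    kubectl -n ccc-ns patch cluster ccc-cluster --type merge -p '{"spec":{"topology":{"class":"ccc-3.1"}}}'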

Unmanaged ClusterClass

For vSphere 8 U2+ there is an annotation that you can add to a custom ClusterClass if you do not want the TKG Controller to manage the custom ClusterClass. Be aware that if you add this annotation, you are responsible for manually creating all underlying Kubernetes objects, such as Certificates, Secrets, etc. Refer to the vSphere 8 U1 custom ClusterClass documentation for guidance on doing this.

The annotation is as follows:
Annotation Key | Value
run.tanzu.vmware.com/unmanaged-clusterclass | ""
Here is an example of how you would add the annotation to the custom ClusterClass named ccc:
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  annotations:
    run.tanzu.vmware.com/resolve-tkr: ""
    run.tanzu.vmware.com/unmanaged-clusterclass: ""
  name: ccc
  namespace: ccc-ns
spec:
...
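If the custom ClusterClass object already exists, you can also add the annotation from the command line; this is a sketch of the equivalent kubectl annotate call for the ccc example above.
    # Mark the custom ClusterClass named ccc as unmanaged by the TKG Controller.
    kubectl -n ccc-ns annotate clusterclass ccc run.tanzu.vmware.com/unmanaged-clusterclass=""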