Refer to these instructions to provision a TKG cluster based on a custom ClusterClass. Note that these instructions are specific to vSphere 8 U2 and later environments.
Prerequisites
The procedure for provisioning a TKG cluster based on a custom ClusterClass is updated for the vSphere 8 U2 release.
- vSphere 8 U2+ environment
- Workload Management enabled
- Supervisor configured
- Ubuntu client with Kubernetes CLI Tools for vSphere installed
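Before you begin, you can confirm that the client tooling is in place. A minimal sketch, assuming the Kubernetes CLI Tools for vSphere install the `kubectl` and `kubectl-vsphere` binaries on your PATH:

```bash
# Verify that both CLI binaries are installed and report the client version.
command -v kubectl kubectl-vsphere \
  || echo "Install the Kubernetes CLI Tools for vSphere first."
kubectl version --client
```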
High Level Workflows
The high-level workflow is as follows.

Step | Task | Instructions |
---|---|---|
1 | Create a custom ClusterClass by cloning the default ClusterClass. | 1: Create a Custom ClusterClass |
2 | Provision a new TKG cluster based on the custom ClusterClass and verify that all cluster nodes come up properly. | 2: Create a TKG Cluster Based on the Custom ClusterClass |
3 | SSH into one of the worker nodes to confirm that there are packages to be updated. | 3: Verify the Existence of Package Updates |
4 | Update the custom ClusterClass with a new command that performs the updates. | 4: Update the Custom ClusterClass |
5 | Confirm rollout of new nodes with the updates already applied. | 5: Verify Rolling Update of Cluster Nodes |
1: Create a Custom ClusterClass
Create a custom ClusterClass named `ccc` (an abbreviation for custom ClusterClass) by cloning the default ClusterClass, which is named `tanzukubernetescluster`.
- Create and configure a vSphere Namespace named `ccc-ns`.
  Configure permissions, storage, content library, and VM classes. Refer to the documentation as needed.
  Note: The vSphere Namespace name is user-defined. If you use a different name, adjust the instructions accordingly.
- Log in to Supervisor.
  ```bash
  kubectl vsphere login --server=IP-ADDRESS --vsphere-username USER@vsphere.local
  ```
- Write the output of the default ClusterClass to a file named `ccc.yaml`.
  ```bash
  kubectl -n ccc-ns get clusterclass tanzukubernetescluster -o yaml > ccc.yaml
  ```
  Or, the shortcut version:
  ```bash
  kubectl -n ccc-ns get cc tanzukubernetescluster -o yaml > ccc.yaml
  ```
- Open the cloned ClusterClass file for editing.
  ```bash
  vim ccc.yaml
  ```
- Edit the file `ccc.yaml` as follows. (To automate these edits, see the optional yq sketch after this procedure.)
  - Delete the line `metadata.creationTimestamp`.
  - Delete the line `metadata.generation`.
  - Delete the line `metadata.resourceVersion`.
  - Delete the line `metadata.uid`.
  - Change the `metadata.name` value from `tanzukubernetescluster` to `ccc`.
  - Leave the `metadata.namespace` value as-is: `ccc-ns`.
  - Leave the `metadata.annotations` value as-is for `run.tanzu.vmware.com/resolve-tkr: ""`. This annotation is required for TKR data resolution.
- Save and verify the changes.
  ```yaml
  apiVersion: cluster.x-k8s.io/v1beta1
  kind: ClusterClass
  metadata:
    annotations:
      run.tanzu.vmware.com/resolve-tkr: ""
    name: ccc
    namespace: ccc-ns
  spec:
  ...
  ```
- Create the custom ClusterClass object.
  ```bash
  kubectl apply -f ccc.yaml -n ccc-ns
  ```
  Expected result:
  ```
  clusterclass.cluster.x-k8s.io/ccc created
  ```
- List the custom ClusterClass.
  ```bash
  kubectl get cc -n ccc-ns
  ```
  Expected result:
  ```
  NAME                     AGE
  ccc                      3m14s
  tanzukubernetescluster   29m
  ```
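The metadata cleanup above can also be scripted. A minimal sketch, assuming the mikefarah yq v4 YAML processor is installed on the client (yq is not part of the documented toolset; the field list mirrors the manual edits above):

```bash
# Clone the default ClusterClass and strip the server-populated metadata
# fields in one pass, renaming the object to ccc.
kubectl -n ccc-ns get cc tanzukubernetescluster -o yaml \
  | yq 'del(.metadata.creationTimestamp,
            .metadata.generation,
            .metadata.resourceVersion,
            .metadata.uid)
        | .metadata.name = "ccc"' \
  > ccc.yaml
```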
2: Create a TKG Cluster Based on the Custom ClusterClass
- Construct the `ccc-cluster.yaml` manifest to provision the cluster.
  ```yaml
  #ccc-cluster.yaml
  apiVersion: cluster.x-k8s.io/v1beta1
  kind: Cluster
  metadata:
    name: ccc-cluster
  spec:
    clusterNetwork:
      pods:
        cidrBlocks:
        - 192.0.2.0/16
      services:
        cidrBlocks:
        - 198.51.100.0/12
      serviceDomain: cluster.local
    topology:
      class: ccc
      version: v1.26.5---vmware.2-fips.1-tkg.1
      controlPlane:
        replicas: 1
      workers:
        machineDeployments:
        - class: node-pool
          name: tkgs-node-pool-1
          replicas: 1
      variables:
      - name: vmClass
        value: guaranteed-small
      - name: storageClass
        value: tkg-storage-profile
  ```
  Where:
  - The `metadata.name` value is the name of the cluster: `ccc-cluster`.
  - The `spec.topology.class` value is the name of the custom ClusterClass: `ccc`.
  - The `spec.topology.version` value is the TKR version.
  - The `spec.topology.variables.storageClass` value is the name of the persistent storage class.

  Note: For testing purposes, 1 replica is sufficient for the control plane and the worker node pool. In production, use 3 replicas for the control plane and at least 3 replicas for each worker node pool.
- Create the TKG cluster based on the custom ClusterClass.
  ```bash
  kubectl apply -f ccc-cluster.yaml -n ccc-ns
  ```
  Expected result:
  ```
  cluster.cluster.x-k8s.io/ccc-cluster created
  ```
- Verify cluster provisioning.
  Run the following command and wait for all cluster nodes to come up properly. (For a scripted wait, see the polling sketch after this procedure.)
  ```bash
  kubectl -n ccc-ns get cc,clusters,vsphereclusters,kcp,machinedeployment,machineset,machine,vspheremachine,virtualmachineservice
  ```
  Note: It is helpful to run this command in a separate session so that you can monitor the rolling update progress in Step 5.
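If you prefer to script the wait rather than eyeballing the output, here is a minimal polling sketch. It relies on the standard Cluster API `status.phase` field reaching `Provisioned`; the loop itself is an assumption, not part of the documented procedure:

```bash
# Poll the Cluster object until Cluster API reports the Provisioned phase.
until [ "$(kubectl -n ccc-ns get cluster ccc-cluster -o jsonpath='{.status.phase}')" = "Provisioned" ]; do
  echo "Waiting for ccc-cluster to reach the Provisioned phase..."
  sleep 30
done
echo "ccc-cluster is provisioned."
```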
3: Verify the Existence of Package Updates
- Run the following command to get the SSH secret.
  ```bash
  export CC=ccc-cluster && kubectl get secret -n ccc-ns ${CC}-ssh -o jsonpath={.data.ssh-privatekey} | base64 -d > ${CC}-ssh && chmod 600 ${CC}-ssh
  ```
- Run the following command to get the IP address of the worker node VM.
  ```bash
  kubectl -n ccc-ns get vm -o wide
  ```
  Note: If you deployed multiple worker nodes, pick one. Do not use a control plane node.
- Run the following command to SSH into the worker node VM.
  ```bash
  ssh -i ${CC}-ssh vmware-system-user@IP-ADDRESS-OF-WORKER-NODE
  ```
  For example:
  ```bash
  ssh -i ${CC}-ssh vmware-system-user@192.168.128.55
  ```
  Note: Enter "yes" to continue connecting.
  Expected result: After SSHing into the host, you should see the following message.
  ```
  tdnf update info not availble yet!
  ```
- Run the following commands and check for updates. (For a non-interactive alternative, see the sketch after this procedure.)
  ```bash
  sudo -i
  tdnf update
  ```
- At the prompt, enter "N" for no (do not update).
  Expected result:
  ```
  Operation aborted
  ```
  Note: The purpose here is simply to check for the existence of updates, not to initiate them. You will initiate the updates by adding a command to the custom ClusterClass in the next section.
- Type "exit" to log out of the SSH session, then type "exit" again.
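If you only want to list pending updates without answering an interactive prompt, here is a minimal sketch run from the client. It assumes the Photon OS image on the node includes tdnf's dnf-style `check-update` subcommand:

```bash
# List available package updates on the worker node non-interactively;
# check-update reports pending updates without installing anything.
ssh -i ${CC}-ssh vmware-system-user@IP-ADDRESS-OF-WORKER-NODE \
  "sudo tdnf check-update"
```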
4: Update the Custom ClusterClass
- Open the custom ClusterClass named `ccc` for editing.
  ```bash
  kubectl edit cc ccc -n ccc-ns
  ```
- Scroll down to the following section with `postKubeadmCommands`.
  ```yaml
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/postKubeadmCommands
        valueFrom:
          template: |
            - touch /root/kubeadm-complete
            - vmware-rpctool 'info-set guestinfo.kubeadm.phase complete'
            - vmware-rpctool 'info-set guestinfo.kubeadm.error ---'
      selector:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlaneTemplate
        matchResources:
          controlPlane: true
    - jsonPatches:
      - op: add
        path: /spec/template/spec/postKubeadmCommands
        valueFrom:
          template: |
            - touch /root/kubeadm-complete
            - vmware-rpctool 'info-set guestinfo.kubeadm.phase complete'
            - vmware-rpctool 'info-set guestinfo.kubeadm.error ---'
      selector:
        apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
        kind: KubeadmConfigTemplate
        matchResources:
          machineDeploymentClass:
            names:
            - node-pool
    name: controlPlanePostKubeadmCommandsSuccess
  ```
  Add the following command to both `valueFrom.template` fields.
  ```yaml
  - tdnf update -y
  ```
  For example:
  ```yaml
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/postKubeadmCommands
        valueFrom:
          template: |
            - touch /root/kubeadm-complete
            - vmware-rpctool 'info-set guestinfo.kubeadm.phase complete'
            - vmware-rpctool 'info-set guestinfo.kubeadm.error ---'
            - tdnf update -y
      selector:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlaneTemplate
        matchResources:
          controlPlane: true
    - jsonPatches:
      - op: add
        path: /spec/template/spec/postKubeadmCommands
        valueFrom:
          template: |
            - touch /root/kubeadm-complete
            - vmware-rpctool 'info-set guestinfo.kubeadm.phase complete'
            - vmware-rpctool 'info-set guestinfo.kubeadm.error ---'
            - tdnf update -y
      selector:
        apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
        kind: KubeadmConfigTemplate
        matchResources:
          machineDeploymentClass:
            names:
            - node-pool
    name: controlPlanePostKubeadmCommandsSuccess
  ```
- Save the changes to the custom ClusterClass and close the editor.
  ```
  wq
  ```
  Expected result:
  ```
  clusterclass.cluster.x-k8s.io/ccc edited
  ```
  (To confirm that the new command was persisted, see the verification sketch after this procedure.)
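Before waiting on the rollout, you can confirm that the edit landed. A minimal sketch (this check is an optional aid, not part of the documented procedure):

```bash
# Print every line of the custom ClusterClass containing the new command;
# expect two matches, one per patched template.
kubectl -n ccc-ns get cc ccc -o yaml | grep -n "tdnf update -y"
```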
5: Verify Rolling Update of Cluster Nodes
- Verify cluster provisioning by running the following command.
  Wait for all cluster nodes to come up properly.
  ```bash
  kubectl -n ccc-ns get cc,clusters,vsphereclusters,kcp,machinedeployment,machineset,machine,vspheremachine,virtualmachineservice
  ```
  You should see that new nodes, with new UUIDs, are deployed. (For a live view of the machine replacement, see the watch sketch after this procedure.)
- Run the following command to SSH into the worker node VM.
  ```bash
  ssh -i ${CC}-ssh vmware-system-user@IP-ADDRESS-OF-WORKER-NODE
  ```
  Expected result: After SSHing into the host, you should see the following message.
  ```
  tdnf update info not availble yet!
  ```
- Run the following commands.
  ```bash
  sudo -i
  tdnf update
  ```
  Expected result: You should see far fewer packages that need to be updated.
- At the prompt, enter "N" for no (do not update).
  Expected result:
  ```
  Operation aborted
  ```
- Run the following command to confirm that tdnf was run.
  ```bash
  cat /var/log/cloud-init-output.log | grep -i tdnf
  ```
- Type "exit" to log out of the SSH session, then type "exit" again.
Maintaining a Custom ClusterClass
After you upgrade the TKG Service to a new version, your custom ClusterClass is still derived from the default ClusterClass of the previous TKG Service version. You must update it with the changes to the default ClusterClass that ships with the new TKG Service version.
- Upgrade the TKG Service version.
  For example, upgrade from TKG Service v3.0 to v3.1.
- Create a new custom ClusterClass by following the instructions herein.
  Manually version the new custom ClusterClass by appending the TKG Service version to its name, such as `ccc-3.1`.
- Add the custom patches and variables from the previous custom ClusterClass to the new custom ClusterClass.
  To do this, `cat ccc.yaml` and copy the custom patches and variables from it to `ccc-3.1.yaml`.
- Apply the new custom ClusterClass and wait for the reconciliation to succeed.
- Update TKG clusters that use the previous custom ClusterClass to the new custom ClusterClass by editing the `spec.topology.class` field in the Cluster object, as shown in the sketch after this list.
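Here is a minimal sketch of that last step, assuming the names used in this article (cluster `ccc-cluster`, new class `ccc-3.1`); a merge patch avoids an interactive editor session:

```bash
# Point the existing cluster at the new versioned custom ClusterClass;
# the TKG cluster then rolls its nodes onto the new class.
kubectl -n ccc-ns patch cluster ccc-cluster --type merge \
  -p '{"spec":{"topology":{"class":"ccc-3.1"}}}'
```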
Unmanaged ClusterClass
For vSphere 8 U2+ there is an annotation that you can add to a custom ClusterClass if you do not want the TKG Controller to manage the custom ClusterClass. Be aware that if you add this annotation, you are responsible for manually creating all underlying Kubernetes objects, such as Certificates, Secrets, etc. Refer to the vSphere 8 U1 custom ClusterClass documentation for guidance on doing this.
Annotation Key | Value |
---|---|
run.tanzu.vmware.com/unmanaged-clusterclass | "" |
For example, here is the annotation added to the custom ClusterClass named `ccc` (you can also apply it with `kubectl annotate`, as shown in the sketch after this example):
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  annotations:
    run.tanzu.vmware.com/resolve-tkr: ""
    run.tanzu.vmware.com/unmanaged-clusterclass: ""
  name: ccc
  namespace: ccc-ns
spec:
...
```
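As an alternative to editing the YAML, you can apply the annotation to an existing ClusterClass object with kubectl. A minimal sketch, assuming the `ccc` class from this article:

```bash
# Add the unmanaged annotation in place; an empty value is all the key needs.
kubectl -n ccc-ns annotate clusterclass ccc \
  run.tanzu.vmware.com/unmanaged-clusterclass=""
```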