Refer to these instructions to provision a TKG cluster based on a custom ClusterClass. Note that these instructions are specific to vSphere 8 U1 environments.
Prerequisites
The procedure for provisioning a TKG cluster based on a custom ClusterClass is available starting with the vSphere 8 U1 release. If you are using vSphere 8 U2, see v1beta1 Example: Cluster Based on a Custom ClusterClass (vSphere 8 U2 and Later Workflow).
- vSphere 8 U1 environment
- Workload Management enabled
- Supervisor configured
- Ubuntu client with Kubernetes CLI Tools for vSphere installed
Part 1: Create the Custom ClusterClass
The default ClusterClass is named tanzukubernetescluster. To create a custom ClusterClass, clone the default ClusterClass and rename it.
- Create a vSphere Namespace named custom-ns.
- Log in to Supervisor.
- Switch context to the vSphere Namespace named custom-ns.
- Get the default ClusterClass.
kubectl get clusterclass tanzukubernetescluster -o json
- Create a custom ClusterClass named custom-cc by cloning the default ClusterClass.
kubectl get clusterclass tanzukubernetescluster -o json | jq '.metadata.name="custom-cc"' | kubectl apply -f -
Expected result: clusterclass.cluster.x-k8s.io/custom-cc created
- Get the custom ClusterClass.
kubectl get clusterclass custom-cc -o json
If necessary, you can use less to view the custom ClusterClass.
kubectl get clusterclass custom-cc -o json | less
Note: Type "q" to exit less.
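The jq filter used in the clone step only rewrites .metadata.name and leaves the rest of the object intact. This can be sanity-checked offline on a toy JSON document (the sample object below is illustrative, not the real ClusterClass):

```shell
# The same rename filter applied to a minimal stand-in object:
# only .metadata.name changes; .spec is untouched.
echo '{"metadata":{"name":"tanzukubernetescluster"},"spec":{"workers":1}}' \
  | jq '.metadata.name="custom-cc"'
```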
Part 2: Create Required Supervisor Objects to Provision the TKG Cluster
- Create the issuer for the self-signed extensions certificate.
#self-signed-extensions-issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: self-signed-extensions-issuer
spec:
  selfSigned: {}
kubectl apply -f self-signed-extensions-issuer.yaml -n custom-ns
Expected result: issuer.cert-manager.io/self-signed-extensions-issuer created
- Create the secret for the extensions CA certificate.
#extensions-ca-certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ccc-cluster-extensions-ca
spec:
  commonName: kubernetes-extensions
  duration: 87600h0m0s
  isCA: true
  issuerRef:
    kind: Issuer
    name: self-signed-extensions-issuer
  secretName: ccc-cluster-extensions-ca
  usages:
  - digital signature
  - cert sign
  - crl sign
kubectl apply -f extensions-ca-certificate.yaml -n custom-ns
Expected result: certificate.cert-manager.io/ccc-cluster-extensions-ca created
- Create the issuer for the extensions CA certificate.
#extensions-ca-issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ccc-cluster-extensions-ca-issuer
spec:
  ca:
    secretName: ccc-cluster-extensions-ca
kubectl apply -f extensions-ca-issuer.yaml -n custom-ns
Expected result: issuer.cert-manager.io/ccc-cluster-extensions-ca-issuer created
- Create the secret for the auth service certificate.
#auth-svc-cert.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ccc-cluster-auth-svc-cert
spec:
  commonName: authsvc
  dnsNames:
  - authsvc
  - localhost
  - 127.0.0.1
  duration: 87600h0m0s
  issuerRef:
    kind: Issuer
    name: ccc-cluster-extensions-ca-issuer
  secretName: ccc-cluster-auth-svc-cert
  usages:
  - server auth
  - digital signature
kubectl apply -f auth-svc-cert.yaml -n custom-ns
Expected result: certificate.cert-manager.io/ccc-cluster-auth-svc-cert created
- Verify the creation of the issuers and certificates.
kubectl get issuers -n custom-ns
NAME                               READY   AGE
ccc-cluster-extensions-ca-issuer   True    2m57s
self-signed-extensions-issuer      True    14m
kubectl get certs -n custom-ns
NAME                        READY   SECRET                      AGE
ccc-cluster-auth-svc-cert   True    ccc-cluster-auth-svc-cert   34s
ccc-cluster-extensions-ca   True    ccc-cluster-extensions-ca   5m
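To see what cert-manager builds in this part, the same trust chain can be reproduced offline with openssl standing in for cert-manager: a self-signed CA (the extensions CA) that signs a server certificate (the auth service certificate). All file names below are illustrative, not the product's:

```shell
# Self-signed CA, mirroring ccc-cluster-extensions-ca (isCA: true).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=kubernetes-extensions" -days 3650
# CSR for the auth service, mirroring ccc-cluster-auth-svc-cert.
openssl req -newkey rsa:2048 -nodes -keyout svc.key -out svc.csr \
  -subj "/CN=authsvc"
# CA signs the service certificate.
openssl x509 -req -in svc.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out svc.crt -days 3650
# Confirm the chain: the service cert verifies against the CA.
openssl verify -CAfile ca.crt svc.crt
```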
Part 3: Create a TKG Cluster Based on the Custom ClusterClass
Variable | Description |
---|---|
vmClass | See Using VM Classes with TKG Service Clusters. |
storageClass | See Configure Persistent Storage for the vSphere Namespace. |
ntp | NTP server used to enable Supervisor. |
extensionCert | Auto-generated after the "extension CA certificate" was created in the previous section. |
clusterEncryptionConfigYaml | The steps below walk through the process of obtaining this value. |
- Create the encryption secret.
#encryption-secret.yaml
apiVersion: v1
data:
  key: all3dzZpODFmRmh6MVlJbUtQQktuN2ViQzREbDBQRHlxVk8yYXRxTW9QQT0=
kind: Secret
metadata:
  name: ccc-cluster-encryption
type: Opaque
kubectl apply -f encryption-secret.yaml -n custom-ns
Expected result: secret/ccc-cluster-encryption created
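The sample data.key above is a base64 string wrapped in a second layer of base64, since Secret data fields are themselves base64-encoded. A minimal sketch for generating your own key, assuming a Linux shell with coreutils and a 32-byte key as the sample value suggests; verify the expected key size against your environment:

```shell
# Generate a fresh 32-byte key and base64-encode it to form the key material.
raw_key=$(head -c 32 /dev/urandom | base64 | tr -d '\n')   # 44-char base64 key
# Encode once more, because Secret data fields are base64-encoded.
manifest_value=$(printf '%s' "$raw_key" | base64 | tr -d '\n')
echo "data.key: $manifest_value"
```

Paste the printed value into the data.key field of encryption-secret.yaml.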
- Gather the NTP server from Supervisor.
kubectl -n vmware-system-vmop get configmap vmoperator-network-config -o jsonpath={.data.ntpservers}
- Construct the cluster-with-ccc.yaml manifest to provision the cluster.
#cluster-with-ccc.yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: ccc-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 193.0.0.0/16
    serviceDomain: managedcluster1.local
    services:
      cidrBlocks:
      - 198.201.0.0/16
  topology:
    class: custom-cc
    version: v1.26.5---vmware.2-fips.1-tkg.1
    controlPlane:
      metadata: {}
      replicas: 3
    workers:
      machineDeployments:
      - class: node-pool
        metadata: {}
        name: node-pool-workers
        replicas: 3
    variables:
    - name: vmClass
      value: guaranteed-medium
    - name: storageClass
      value: tkg-storage-profile
    - name: ntp
      value: time.acme.com
    - name: extensionCert
      value:
        contentSecret:
          key: tls.crt
          name: ccc-cluster-extensions-ca
    - name: clusterEncryptionConfigYaml
      value: LS0tCm...Ht9Cg==
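Before applying the manifest, a quick offline check can confirm that every required topology variable is present. This is only a sketch; the heredoc stub stands in for your real cluster-with-ccc.yaml so the example is self-contained (skip the heredoc when you have a real file):

```shell
# Stub manifest (illustrative); with a real file, skip this heredoc.
cat > cluster-with-ccc.yaml <<'EOF'
    variables:
    - name: vmClass
    - name: storageClass
    - name: ntp
    - name: extensionCert
    - name: clusterEncryptionConfigYaml
EOF
# Fail loudly if any required variable is missing from the manifest.
for v in vmClass storageClass ntp extensionCert clusterEncryptionConfigYaml; do
  grep -q "name: $v" cluster-with-ccc.yaml || { echo "missing: $v"; exit 1; }
done
echo "all required variables present"
```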
In the cluster manifest, verify or update the following fields:

Parameter | Description |
---|---|
metadata.name | Name of the v1beta1 Cluster. |
spec.topology.class | Name of the custom ClusterClass. |
spec.topology.version | Tanzu Kubernetes release version. |
spec.topology.variables.storageClass.value | Storage policy attached to the vSphere Namespace where the cluster will be provisioned. |
spec.topology.variables.ntp.value | NTP server address. |
spec.topology.variables.extensionCert.value.contentSecret.name | Verify the secret name, such as ccc-cluster-extensions-ca. |
spec.topology.variables.clusterEncryptionConfigYaml.value | Populate with the data.key value from the ClusterEncryptionConfig secret. |

- Create the cluster based on the custom ClusterClass.
kubectl apply -f cluster-with-ccc.yaml -n custom-ns
Expected result: cluster.cluster.x-k8s.io/ccc-cluster created
Using the vSphere Client, verify that the cluster is created.
- Log in to the TKG cluster.
kubectl vsphere login --server=xxx.xxx.xxx.xxx --vsphere-username USERNAME@vsphere.local --tanzu-kubernetes-cluster-name ccc-cluster --tanzu-kubernetes-cluster-namespace custom-ns
Part 4: Create Required Supervisor Objects to Manage the TKG Cluster
Step | Description |
---|---|
Authentication | Gather authentication values and add them to a file named values.yaml. |
Base64 encode | Encode the values.yaml file into a base64 string. |
guest-cluster-auth-service-data-values.yaml | Add the encoded string to the guest-cluster-auth-service-data-values.yaml file downloaded from CCC_config_yamls.tar.gz before applying the file. |
GuestClusterAuthSvcDataValues secret | Finally, modify the Guest Cluster Bootstrap to reference the newly created GuestClusterAuthSvcDataValues secret. |
- Switch context to the vSphere Namespace where the cluster is provisioned.
kubectl config use-context custom-ns
- Get the authServicePublicKeys value.
kubectl -n vmware-system-capw get configmap vc-public-keys -o jsonpath="{.data.vsphere\.local\.json}"
Copy the result to a text file named values.yaml.
authServicePublicKeys: '{"issuer_url":"https://...SShrDw=="]}]}}'
- Get the cluster UID to update the authServicePublicKeys value.
kubectl get cluster -n custom-ns ccc-cluster -o yaml | grep uid
- In the authServicePublicKeys section of the values.yaml file, append the cluster UID to the "client_id" value.
Syntax: vmware-tes:vc:vns:k8s:clusterUID
For example: vmware-tes:vc:vns:k8s:7d95b50b-4fd4-4642-82a3-5dbfe87f499c
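Appending the UID can also be scripted. A hedged sketch with sed, where the UID and the file contents are sample data and GNU sed's in-place flag is assumed (on macOS, use sed -i ''):

```shell
# Sample UID and a minimal stand-in values.yaml (illustrative contents).
uid="7d95b50b-4fd4-4642-82a3-5dbfe87f499c"
printf '%s\n' 'authServicePublicKeys: '\''{"client_id":"vmware-tes:vc:vns:k8s"}'\''' > values.yaml
# Append the cluster UID to the client_id value in place.
sed -i "s|vmware-tes:vc:vns:k8s|vmware-tes:vc:vns:k8s:${uid}|" values.yaml
grep -o "vmware-tes:vc:vns:k8s:${uid}" values.yaml
```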
- Get the certificate value (replace ccc-cluster with the chosen cluster name).
kubectl -n custom-ns get secret ccc-cluster-auth-svc-cert -o jsonpath="{.data.tls\.crt}" | base64 -d
- Add the certificate to the values.yaml file.
Add the certificate contents beneath the authServicePublicKeys section.
Note: The certificate must be indented 4 spaces to avoid failure.
For example:
authServicePublicKeys: '{"issuer_url":"https://...SShrDw=="]}]}}'
certificate: |
    -----BEGIN CERTIFICATE-----
    MIIDPTCCAiWgAwIBAgIQMibGSjeuJelQoPxCof/+xzANBgkqhkiG9w0BAQsFADAg
    ...
    sESk/RDTB1UAvi8PD3zcbEKZuRxuo4IAJqFFbAabwULhjUo0UwT+dIJo1gLf5/ep
    VoIRJS7j6VT98WbKyZp5B4I=
    -----END CERTIFICATE-----
- Get the privateKey value.
kubectl -n custom-ns get secret ccc-cluster-auth-svc-cert -o jsonpath="{.data.tls\.key}"
- Verify your values.yaml file.
authServicePublicKeys: '{"issuer_url":"https://10.197.79.141/openidconnect/vsphere.local","client_id":"vmware-tes:vc:vns:k8s:7d95...499c",...SShrDw=="]}]}}'
certificate: |
    -----BEGIN CERTIFICATE-----
    MIIDPTCCAiWgAwIBAgIQWQyXAQDRMhgrGre8ysVN0DANBgkqhkiG9w0BAQsFADAg
    ...
    uJSBP49sF0nKz5nf7w+BdYE=
    -----END CERTIFICATE-----
privateKey: LS0tLS1CRUdJTi...VktLS0tLQo=
- Base64-encode the values.yaml file to produce the string for the guest-cluster-auth-service-data-values.yaml file.
base64 -i values.yaml -w 0
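Note that the command above mixes flag conventions: -w 0 (no line wrapping) is GNU coreutils, while -i as an input-file flag is the macOS form. A round-trip sketch using the Linux form, with sample file contents, to confirm the encoding is lossless:

```shell
# Sample values.yaml (illustrative contents).
printf 'authServicePublicKeys: sample\n' > values.yaml
# Linux/GNU encoding (on macOS: base64 -i values.yaml).
encoded=$(base64 -w 0 values.yaml)
# Round-trip: decoding the string must reproduce the original file exactly.
echo "$encoded" | base64 -d > values-check.yaml
cmp values.yaml values-check.yaml && echo "round-trip OK"
```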
- Create the guest-cluster-auth-service-data-values.yaml file.
Here is a template for the secret.
apiVersion: v1
data:
  values.yaml: YXV0a...ExRbz0K
kind: Secret
metadata:
  labels:
    tkg.tanzu.vmware.com/cluster-name: ccc-cluster
    tkg.tanzu.vmware.com/package-name: guest-cluster-auth-service.tanzu.vmware.com.1.3.0+tkg.2-vmware
  name: ccc-cluster-guest-cluster-auth-service-data-values
type: Opaque
Refer to the following table to populate the expected secret values.

Parameter | Value |
---|---|
data.values.yaml | Base64-encoded string of values.yaml |
metadata.labels.cluster-name | Name of the cluster, such as ccc-cluster |
metadata.labels.package-name | guest-cluster-auth-service.tanzu.vmware.com.version. To get this value, run kubectl get tkr v1.26.5---vmware.2-fips.1-tkg.1 -o yaml, changing the TKR version to the one you are using. |
metadata.name | Name of the secret, prefixed with the cluster name, such as ccc-cluster-guest-cluster-auth-service-data-values |
- Create the guest-cluster-auth-service-data-values.yaml secret.
kubectl apply -f guest-cluster-auth-service-data-values.yaml -n custom-ns
- Edit Cluster Bootstrap to reference the secret.
kubectl edit clusterbootstrap ccc-cluster -n custom-ns
- Add the following lines beneath the line guest-cluster-auth-service.tanzu.vmware.com.version:
valuesFrom:
  secretRef: ccc-cluster-guest-cluster-auth-service-data-values
For example:
spec:
  additionalPackages:
  - refName: guest-cluster-auth-service.tanzu.vmware.com.1.3.0+tkg.2-vmware
    valuesFrom:
      secretRef: ccc-cluster-guest-cluster-auth-service-data-values
- Save and quit to apply the clusterbootstrap modifications.
Part 5: Configure Pod Security
If you are using TKR versions 1.25 and later, configure pod security for the vSphere Namespace named custom-ns. See Configure PSA for TKR 1.25 and Later.
- Gather the TKG cluster kubeconfig.
kubectl -n custom-ns get secret ccc-cluster-kubeconfig -o jsonpath="{.data.value}" | base64 -d > ccc-cluster-kubeconfig
- Create the psp.yaml file.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: tanzu-system-kapp-ctrl-restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
  - ALL
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  readOnlyRootFilesystem: false
- Apply the pod security policy.
KUBECONFIG=ccc-cluster-kubeconfig kubectl apply -f psp.yaml
- Log in to the TKG cluster.
kubectl vsphere login --server=10.197.154.66 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify --tanzu-kubernetes-cluster-name ccc-cluster --tanzu-kubernetes-cluster-namespace custom-ns
- List the namespaces.
KUBECONFIG=ccc-cluster-kubeconfig kubectl get ns -A
NAME                           STATUS   AGE
default                        Active   13d
kube-node-lease                Active   13d
kube-public                    Active   13d
kube-system                    Active   13d
secretgen-controller           Active   13d
tkg-system                     Active   13d
vmware-system-antrea           Active   13d
vmware-system-cloud-provider   Active   13d
vmware-system-csi              Active   13d
vmware-system-tkg              Active   13d
Part 6: Synchronize vSphere SSO Roles with the Custom TKG Cluster
Rolebindings for vCenter Single Sign-On users created in the vSphere Namespace must be synchronized from Supervisor to the TKG cluster so that developers can manage cluster workloads. This is done by creating a file named sync-cluster-edit-rolebinding.yaml and then applying it to the TKG cluster using its KUBECONFIG.
- Gather existing rolebindings from Supervisor.
kubectl get rolebinding -n custom-ns -o yaml
- From the returned list of rolebinding objects, identify the ones with roleRef.name equal to "edit". For example:
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    creationTimestamp: "2023-08-25T18:44:45Z"
    name: ccc-cluster-8lr5x-ccm
    namespace: custom-ns
    ownerReferences:
    - apiVersion: vmware.infrastructure.cluster.x-k8s.io/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: ProviderServiceAccount
      name: ccc-cluster-8lr5x-ccm
      uid: b5fb9f01-9a55-4f69-8673-fadc49012994
    resourceVersion: "108766602"
    uid: eb93efd4-ae56-4d9f-a745-d2782885e7fb
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: ccc-cluster-8lr5x-ccm
  subjects:
  - kind: ServiceAccount
    name: ccc-cluster-8lr5x-ccm
    namespace: custom-ns
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    creationTimestamp: "2023-08-25T18:44:45Z"
    name: ccc-cluster-8lr5x-pvcsi
    namespace: custom-ns
    ownerReferences:
    - apiVersion: vmware.infrastructure.cluster.x-k8s.io/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: ProviderServiceAccount
      name: ccc-cluster-8lr5x-pvcsi
      uid: d9342f8f-13d2-496d-93cb-b24edfacb5c1
    resourceVersion: "108766608"
    uid: fd1820c7-7993-4299-abb7-bb67fb17f1fd
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: ccc-cluster-8lr5x-pvcsi
  subjects:
  - kind: ServiceAccount
    name: ccc-cluster-8lr5x-pvcsi
    namespace: custom-ns
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    creationTimestamp: "2023-08-25T16:58:06Z"
    labels:
      managedBy: vSphere
    name: wcp:custom-ns:group:vsphere.local:administrators
    namespace: custom-ns
    resourceVersion: "108714148"
    uid: d74a98c7-e7da-4d71-b1d5-deb60492d429
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: edit
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: sso:Administrators@vsphere.local
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    creationTimestamp: "2023-08-25T16:58:21Z"
    labels:
      managedBy: vSphere
    name: wcp:custom-ns:user:vsphere.local:administrator
    namespace: custom-ns
    resourceVersion: "108714283"
    uid: 07f7dbba-2670-4100-a59b-c09e4b2edd6b
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: edit
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: sso:Administrator@vsphere.local
kind: List
metadata:
  resourceVersion: ""
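The manual scan above can be automated. Assuming the rolebinding list is saved as JSON (for example with kubectl get rolebinding -n custom-ns -o json > rb.json), a jq filter can pick out the "edit" bindings; the inline sample data below stands in for real kubectl output:

```shell
# Sample List object (stands in for 'kubectl get rolebinding -o json' output).
cat > rb.json <<'EOF'
{"items":[
  {"metadata":{"name":"ccc-cluster-8lr5x-ccm"},"roleRef":{"name":"ccc-cluster-8lr5x-ccm"}},
  {"metadata":{"name":"wcp:custom-ns:group:vsphere.local:administrators"},"roleRef":{"name":"edit"}}
]}
EOF
# Print only the names of rolebindings whose roleRef.name is "edit".
jq -r '.items[] | select(.roleRef.name == "edit") | .metadata.name' rb.json
```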
- Create a file named sync-cluster-edit-rolebinding.yaml to add any extra rolebindings other than the default administrator@vsphere.local. For example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    run.tanzu.vmware.com/vmware-system-synced-from-supervisor: "yes"
  name: vmware-system-auth-sync-wcp:custom-ns:group:vsphere.local:administrators
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: sso:Administrators@vsphere.local
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    run.tanzu.vmware.com/vmware-system-synced-from-supervisor: "yes"
  name: vmware-system-auth-sync-wcp:custom-ns:group:SSODOMAIN.COM:testuser
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: sso:testuser@SSODOMAIN.COM
Note: In the metadata.name field, the user role is prepended with vmware-system-auth-sync- for all users. The metadata.name and subjects.name entries require modification for all non-default roles.
- Apply the sync-cluster-edit-rolebinding.yaml configuration to synchronize rolebindings.
KUBECONFIG=ccc-cluster-kubeconfig kubectl apply -f sync-cluster-edit-rolebinding.yaml