You can use the Tanzu CLI to deploy a management cluster to vSphere with a configuration that you specify in a YAML configuration file. To create a cluster configuration file, you can copy an existing configuration file for a previous deployment to vSphere and update it. Alternatively, you can create a file from scratch by using an empty template.
Important
For Tanzu Kubernetes Grid deployments to vSphere, VMware recommends that you use the vSphere IaaS control plane (formerly known as vSphere with Tanzu) Supervisor. Using TKG with a standalone management cluster is only recommended for the use cases listed in When to Use a Standalone Management Cluster in About TKG.
Tanzu Kubernetes Grid v2.5.x does not support the creation of standalone TKG management clusters on AWS and Azure. For more information, see End of Support for TKG Management and Workload Clusters on AWS and Azure in the VMware Tanzu Kubernetes Grid v2.5.x Release Notes.
When you deploy a management cluster from the CLI, you specify your cluster configuration file by using the --file option of the tanzu mc create command.
If you have previously deployed a management cluster by running tanzu mc create --ui, the ~/.config/tanzu/tkg/clusterconfigs directory contains management cluster configuration files with settings saved from each invocation of the installer interface. You can use these files as templates for cluster configuration files for new deployments. Alternatively, you can create management cluster configuration files from the template that is provided in this topic.
VMware recommends using a dedicated configuration file for each management cluster.
Important
- As described in Configuring the Management Cluster, environment variables override values from a cluster configuration file. To use all settings from a cluster configuration file, unset any conflicting environment variables before you deploy the management cluster from the CLI.
- Support for IPv6 addresses in Tanzu Kubernetes Grid is limited; see Deploy Clusters on IPv6. If you are not deploying to an IPv6-only networking environment, all IP address settings in your configuration files must be IPv4.
- Some parameters configure identical properties. For example, the SIZE property configures the same infrastructure settings as all of the control plane and worker node size and type properties for the different target platforms, but at a more general level. In such cases, avoid setting conflicting or redundant properties.
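As noted in the first item above, environment variables override configuration file values. For example, a quick way to check for and clear conflicting variables in your current shell before you run tanzu mc create (the variable names shown are only illustrations; clear whichever ones you have actually exported):
```
# List TKG-related variables currently exported in this shell
env | grep -E '^(TKG_|VSPHERE_|CLUSTER_|AVI_)'
# Unset any that conflict with values in your configuration file
unset VSPHERE_NETWORK VSPHERE_CONTROL_PLANE_ENDPOINT
```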
Before you can deploy a management cluster, you must make sure that your environment meets the requirements for the target platform. If you are deploying the cluster in an internet-restricted environment, you must also perform the steps in Prepare an Internet-Restricted Environment, which include setting TKG_CUSTOM_IMAGE_REPOSITORY as an environment variable.
Important
It is strongly recommended to use the Tanzu Kubernetes Grid installer interface rather than the CLI to deploy your first management cluster. When you deploy a management cluster by using the installer interface, it populates a cluster configuration file for the management cluster with the required parameters. You can use the created configuration file as a model for future deployments from the CLI to your target platform.
Make sure that you have met all of the requirements listed in Prepare to Deploy Management Clusters to vSphere.
To create a configuration file for a standalone management cluster by using a template:
1. In a text editor, open a new file with a .yaml extension and an appropriate name, for example, vsphere-mgmt-cluster-config.yaml. This will be your configuration file.
   If you have already deployed a management cluster from the installer interface, you can create the file in the default location for cluster configurations, ~/.config/tanzu/tkg/clusterconfigs.
2. Copy and paste the Management Cluster Configuration Template code into your configuration file.
3. Configure settings within the file by following the instructions in this topic. For information about all of the variables that the configuration file can include, see the Configuration File Variable Reference.
4. Save the file.
To use the configuration file from a previous deployment that you performed by using the installer interface, make a copy of the configuration file with a new name, open it in a text editor, and update the configuration. For information about how to update all of the settings, see the Configuration File Variable Reference.
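For example, a minimal sketch of reusing a previously generated file as the starting point for a new deployment (the generated file name shown is hypothetical):
```
cp ~/.config/tanzu/tkg/clusterconfigs/ag2ok6rfbv.yaml ~/.config/tanzu/tkg/clusterconfigs/vsphere-mgmt-cluster-config.yaml
```
You can then edit the copy and update the values that differ for the new management cluster, such as CLUSTER_NAME and VSPHERE_CONTROL_PLANE_ENDPOINT.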
The template below includes all of the options that are relevant to deploying management clusters on vSphere. You can copy this template and use it to deploy management clusters to vSphere.
Mandatory options are uncommented. Optional settings are commented out. Default values are included where applicable.
#! ---------------------------------------------------------------------
#! Basic cluster creation configuration
#! ---------------------------------------------------------------------
CLUSTER_NAME:
CLUSTER_PLAN: dev
INFRASTRUCTURE_PROVIDER: vsphere
# CLUSTER_API_SERVER_PORT: # For deployments without NSX Advanced Load Balancer
ENABLE_CEIP_PARTICIPATION: true
ENABLE_AUDIT_LOGGING: true
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
# CAPBK_BOOTSTRAP_TOKEN_TTL: 30m
#! ---------------------------------------------------------------------
#! vSphere configuration
#! ---------------------------------------------------------------------
VSPHERE_SERVER:
VSPHERE_USERNAME:
VSPHERE_PASSWORD:
VSPHERE_DATACENTER:
VSPHERE_RESOURCE_POOL:
VSPHERE_DATASTORE:
VSPHERE_FOLDER:
VSPHERE_NETWORK: VM Network
# VSPHERE_CONTROL_PLANE_ENDPOINT: # Required for Kube-Vip
# VSPHERE_CONTROL_PLANE_ENDPOINT_PORT: 6443
VIP_NETWORK_INTERFACE: "eth0"
# VSPHERE_TEMPLATE:
VSPHERE_SSH_AUTHORIZED_KEY:
# VSPHERE_STORAGE_POLICY_ID: ""
VSPHERE_TLS_THUMBPRINT:
VSPHERE_INSECURE: false
DEPLOY_TKG_ON_VSPHERE7: false
ENABLE_TKGS_ON_VSPHERE7: false
#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------
# SIZE:
# CONTROLPLANE_SIZE:
# WORKER_SIZE:
# OS_NAME: ""
# OS_VERSION: ""
# OS_ARCH: ""
# VSPHERE_NUM_CPUS: 2
# VSPHERE_DISK_GIB: 40
# VSPHERE_MEM_MIB: 4096
# VSPHERE_MTU:
# VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
# VSPHERE_CONTROL_PLANE_DISK_GIB: 40
# VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
# VSPHERE_WORKER_NUM_CPUS: 2
# VSPHERE_WORKER_DISK_GIB: 40
# VSPHERE_WORKER_MEM_MIB: 4096
#! ---------------------------------------------------------------------
#! VMware NSX specific configuration for enabling NSX routable pods
#! ---------------------------------------------------------------------
# NSXT_POD_ROUTING_ENABLED: false
# NSXT_ROUTER_PATH: ""
# NSXT_USERNAME: ""
# NSXT_PASSWORD: ""
# NSXT_MANAGER_HOST: ""
# NSXT_ALLOW_UNVERIFIED_SSL: false
# NSXT_REMOTE_AUTH: false
# NSXT_VMC_ACCESS_TOKEN: ""
# NSXT_VMC_AUTH_HOST: ""
# NSXT_CLIENT_CERT_KEY_DATA: ""
# NSXT_CLIENT_CERT_DATA: ""
# NSXT_ROOT_CA_DATA: ""
# NSXT_SECRET_NAME: "cloud-provider-vsphere-nsxt-credentials"
# NSXT_SECRET_NAMESPACE: "kube-system"
#! ---------------------------------------------------------------------
#! NSX Advanced Load Balancer configuration
#! ---------------------------------------------------------------------
AVI_ENABLE: false
AVI_CONTROL_PLANE_HA_PROVIDER: false
# AVI_NAMESPACE: "tkg-system-networking"
# AVI_DISABLE_INGRESS_CLASS: true
# AVI_AKO_IMAGE_PULL_POLICY: IfNotPresent
# AVI_ADMIN_CREDENTIAL_NAME: avi-controller-credentials
# AVI_CA_NAME: avi-controller-ca
# AVI_CONTROLLER:
# AVI_USERNAME: ""
# AVI_PASSWORD: ""
# AVI_CLOUD_NAME:
# AVI_SERVICE_ENGINE_GROUP:
# AVI_NSXT_T1LR: # Required for NSX ALB deployments on NSX Cloud.
# AVI_MANAGEMENT_CLUSTER_SERVICE_ENGINE_GROUP:
# AVI_DATA_NETWORK:
# AVI_DATA_NETWORK_CIDR:
# AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME:
# AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR:
# AVI_CA_DATA_B64: ""
# AVI_LABELS: ""
# AVI_DISABLE_STATIC_ROUTE_SYNC: true
# AVI_INGRESS_DEFAULT_INGRESS_CONTROLLER: false
# AVI_INGRESS_SHARD_VS_SIZE: ""
# AVI_INGRESS_SERVICE_TYPE: ""
# AVI_INGRESS_NODE_NETWORK_LIST: ""
#! ---------------------------------------------------------------------
#! Image repository configuration
#! ---------------------------------------------------------------------
# TKG_CUSTOM_IMAGE_REPOSITORY: ""
# TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: ""
#! ---------------------------------------------------------------------
#! Proxy configuration
#! ---------------------------------------------------------------------
# TKG_HTTP_PROXY: ""
# TKG_HTTPS_PROXY: ""
# TKG_NO_PROXY: ""
#! ---------------------------------------------------------------------
#! Machine Health Check configuration
#! ---------------------------------------------------------------------
ENABLE_MHC:
ENABLE_MHC_CONTROL_PLANE: true
ENABLE_MHC_WORKER_NODE: true
MHC_UNKNOWN_STATUS_TIMEOUT: 5m
MHC_FALSE_STATUS_TIMEOUT: 12m
#! ---------------------------------------------------------------------
#! Identity management configuration
#! ---------------------------------------------------------------------
IDENTITY_MANAGEMENT_TYPE: "none"
#! Settings for IDENTITY_MANAGEMENT_TYPE: "oidc"
# CERT_DURATION: 2160h
# CERT_RENEW_BEFORE: 360h
# OIDC_IDENTITY_PROVIDER_CLIENT_ID:
# OIDC_IDENTITY_PROVIDER_CLIENT_SECRET:
# OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups
# OIDC_IDENTITY_PROVIDER_ISSUER_URL:
# OIDC_IDENTITY_PROVIDER_SCOPES: "email,profile,groups,offline_access"
# OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email
#! The following two variables are used to configure Pinniped JWTAuthenticator for workload clusters
# SUPERVISOR_ISSUER_URL:
# SUPERVISOR_ISSUER_CA_BUNDLE_DATA:
#! Settings for IDENTITY_MANAGEMENT_TYPE: "ldap"
# LDAP_BIND_DN:
# LDAP_BIND_PASSWORD:
# LDAP_HOST:
# LDAP_USER_SEARCH_BASE_DN:
# LDAP_USER_SEARCH_FILTER:
# LDAP_USER_SEARCH_ID_ATTRIBUTE: dn
# LDAP_USER_SEARCH_NAME_ATTRIBUTE:
# LDAP_GROUP_SEARCH_BASE_DN:
# LDAP_GROUP_SEARCH_FILTER:
# LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: dn
# LDAP_GROUP_SEARCH_USER_ATTRIBUTE: dn
# LDAP_ROOT_CA_DATA_B64:
#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------
# ANTREA_NO_SNAT: true
# ANTREA_NODEPORTLOCAL: true
# ANTREA_NODEPORTLOCAL_ENABLED: true
# ANTREA_NODEPORTLOCAL_PORTRANGE: 61000-62000
# ANTREA_TRAFFIC_ENCAP_MODE: "encap"
# ANTREA_PROXY: true
# ANTREA_PROXY_ALL: false
# ANTREA_PROXY_LOAD_BALANCER_IPS: false
# ANTREA_PROXY_NODEPORT_ADDRS:
# ANTREA_PROXY_SKIP_SERVICES: ""
# ANTREA_POLICY: true
# ANTREA_TRACEFLOW: true
# ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: false
# ANTREA_ENABLE_USAGE_REPORTING: false
# ANTREA_EGRESS: true
# ANTREA_EGRESS_EXCEPT_CIDRS: ""
# ANTREA_FLOWEXPORTER: false
# ANTREA_FLOWEXPORTER_COLLECTOR_ADDRESS: "flow-aggregator.flow-aggregator.svc:4739:tls"
# ANTREA_FLOWEXPORTER_POLL_INTERVAL: "5s"
# ANTREA_FLOWEXPORTER_ACTIVE_TIMEOUT: "5s"
# ANTREA_FLOWEXPORTER_IDLE_TIMEOUT: "15s"
# ANTREA_IPAM: false
# ANTREA_KUBE_APISERVER_OVERRIDE: ""
# ANTREA_MULTICAST: false
# ANTREA_MULTICAST_INTERFACES: ""
# ANTREA_NETWORKPOLICY_STATS: true
# ANTREA_SERVICE_EXTERNALIP: true
# ANTREA_TRANSPORT_INTERFACE: ""
# ANTREA_TRANSPORT_INTERFACE_CIDRS: ""
Follow the instructions below to configure your management cluster deployment.
The basic management cluster creation settings define the infrastructure on which to deploy the management cluster and other basic settings.
- For CLUSTER_PLAN, specify whether you want to deploy a development cluster, which provides a single control plane node, or a production cluster, which provides a highly available management cluster with three control plane nodes. Specify dev or prod.
- For INFRASTRUCTURE_PROVIDER, specify vsphere.
- Optionally, deactivate participation in the VMware Customer Experience Improvement Program (CEIP) by setting ENABLE_CEIP_PARTICIPATION to false. For information about the CEIP, see Manage Participation in CEIP and https://www.vmware.com/solutions/security/trustvmware/ceip-products.
- Optionally, deactivate audit logging by setting ENABLE_AUDIT_LOGGING to false. For information about audit logging, see Audit Logging.
- Optionally, set custom CIDR ranges in CLUSTER_CIDR for the cluster pod network and SERVICE_CIDR for the cluster service network.
For example:
#! ---------------------------------------------------------------------
#! Basic cluster creation configuration
#! ---------------------------------------------------------------------
CLUSTER_NAME: vsphere-mgmt-cluster
CLUSTER_PLAN: dev
INFRASTRUCTURE_PROVIDER: vsphere
ENABLE_CEIP_PARTICIPATION: true
ENABLE_AUDIT_LOGGING: true
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
Provide information to allow Tanzu Kubernetes Grid to log in to vSphere and to designate the resources that Tanzu Kubernetes Grid can use.
- Update the VSPHERE_SERVER, VSPHERE_USERNAME, and VSPHERE_PASSWORD settings with the IP address or FQDN of the vCenter Server instance and the credentials to use to log in.
- Provide the full paths to the vSphere datacenter, resource pool, datastores, and folder in which to deploy the management cluster:
  - VSPHERE_DATACENTER: /<MY-DATACENTER>
  - VSPHERE_RESOURCE_POOL: /<MY-DATACENTER>/host/<CLUSTER>/Resources
  - VSPHERE_DATASTORE: /<MY-DATACENTER>/datastore/<MY-DATASTORE>
  - VSPHERE_FOLDER: /<MY-DATACENTER>/vm/<FOLDER>
- Depending on which control plane endpoint provider you use, set VSPHERE_CONTROL_PLANE_ENDPOINT or leave it blank:
  - Kube-Vip: Set VSPHERE_CONTROL_PLANE_ENDPOINT to a static virtual IP address for API requests to the cluster.
  - NSX Advanced Load Balancer: Leave VSPHERE_CONTROL_PLANE_ENDPOINT blank; NSX ALB assigns the VIP from its static IP pool.
- Specify a network and a network interface in VSPHERE_NETWORK and VIP_NETWORK_INTERFACE.
- Optionally, uncomment and update VSPHERE_TEMPLATE to specify the path to an OVA file if you are using multiple custom OVA images for the same Kubernetes version. Use the format /MY-DC/vm/MY-FOLDER-PATH/MY-IMAGE. For more information, see Deploy a Cluster with a Custom OVA Image in Creating and Managing TKG 2.5 Workload Clusters on vSphere with the Tanzu CLI.
- Provide your SSH public key in the VSPHERE_SSH_AUTHORIZED_KEY option. For information about how to obtain an SSH key, see Prepare to Deploy Management Clusters to vSphere.
- Provide the TLS thumbprint of the vCenter Server certificate in the VSPHERE_TLS_THUMBPRINT variable, or set VSPHERE_INSECURE: true to skip thumbprint verification.
- Optionally, uncomment VSPHERE_STORAGE_POLICY_ID and specify the name of a storage policy for the VMs, which you have configured on vCenter Server, for the management cluster to use.
For example:
#! ---------------------------------------------------------------------
#! vSphere configuration
#! ---------------------------------------------------------------------
VSPHERE_SERVER: 10.185.12.154
VSPHERE_USERNAME: tkg-user@vsphere.local
VSPHERE_PASSWORD: <encoded:QWRtaW4hMjM=>
VSPHERE_DATACENTER: /dc0
VSPHERE_RESOURCE_POOL: /dc0/host/cluster0/Resources/tanzu
VSPHERE_DATASTORE: /dc0/datastore/sharedVmfs-1
VSPHERE_FOLDER: /dc0/vm/tanzu
VSPHERE_NETWORK: "VM Network"
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.185.11.134
VIP_NETWORK_INTERFACE: "eth0"
VSPHERE_TEMPLATE: /dc0/vm/tanzu/my-image.ova
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAAB3[...]tyaw== user@example.com
VSPHERE_TLS_THUMBPRINT: 47:F5:83:8E:5D:36:[...]:72:5A:89:7D:29:E5:DA
VSPHERE_INSECURE: false
VSPHERE_STORAGE_POLICY_ID: "My storage policy"
The Tanzu CLI creates the individual nodes of management clusters and workload clusters according to settings that you provide in the configuration file. You can configure all node VMs to have the same predefined configurations, set different predefined configurations for control plane and worker nodes, or customize the configurations of the nodes. By using these settings, you can create clusters that have nodes with different configurations from the management cluster nodes. You can also create clusters in which the control plane nodes and worker nodes have different configurations.
The Tanzu CLI provides the following predefined configurations for cluster nodes:
- small: 2 CPUs, 4 GB memory, 20 GB disk
- medium: 2 CPUs, 8 GB memory, 40 GB disk
- large: 4 CPUs, 16 GB memory, 40 GB disk
- extra-large: 8 CPUs, 32 GB memory, 80 GB disk
To create a cluster in which all of the control plane and worker node VMs are the same size, specify the SIZE variable. If you set the SIZE variable, all nodes will be created with the configuration that you set.
SIZE: "large"
To create a cluster in which the control plane and worker node VMs are different sizes, specify the CONTROLPLANE_SIZE and WORKER_SIZE options.
CONTROLPLANE_SIZE: "medium"
WORKER_SIZE: "extra-large"
You can combine the CONTROLPLANE_SIZE and WORKER_SIZE options with the SIZE option. For example, if you specify SIZE: "large" with WORKER_SIZE: "extra-large", the control plane nodes will be set to large and worker nodes will be set to extra-large.
SIZE: "large"
WORKER_SIZE: "extra-large"
You can customize the configuration of the nodes rather than using the predefined configurations.
To use the same custom configuration for all nodes, specify the VSPHERE_NUM_CPUS, VSPHERE_DISK_GIB, and VSPHERE_MEM_MIB options.
VSPHERE_NUM_CPUS: 2
VSPHERE_DISK_GIB: 40
VSPHERE_MEM_MIB: 4096
To define different custom configurations for control plane nodes and worker nodes, specify the VSPHERE_CONTROL_PLANE_* and VSPHERE_WORKER_* options.
VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
VSPHERE_CONTROL_PLANE_DISK_GIB: 20
VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
VSPHERE_WORKER_NUM_CPUS: 4
VSPHERE_WORKER_DISK_GIB: 40
VSPHERE_WORKER_MEM_MIB: 4096
You can override these settings by using the SIZE, CONTROLPLANE_SIZE, and WORKER_SIZE options.
By default, all cluster nodes run Ubuntu v22.04. You can optionally deploy clusters that run Photon OS on their nodes. For the architecture, the default and only current choice is amd64. For the OS and version settings, see Node Configuration in the Configuration File Variable Reference.
For example:
#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------
OS_NAME: "photon"
OS_VERSION: "5"
OS_ARCH: "amd64"
If your vSphere environment uses NSX, you can configure it to implement routable, or NO_NAT, pods.
Note
NSX Routable Pods is an experimental feature in this release. Information about how to implement NSX Routable Pods will be added to this documentation soon.
#! ---------------------------------------------------------------------
#! NSX specific configuration for enabling NSX routable pods
#! ---------------------------------------------------------------------
# NSXT_POD_ROUTING_ENABLED: false
# NSXT_ROUTER_PATH: ""
# NSXT_USERNAME: ""
# NSXT_PASSWORD: ""
# NSXT_MANAGER_HOST: ""
# NSXT_ALLOW_UNVERIFIED_SSL: false
# NSXT_REMOTE_AUTH: false
# NSXT_VMC_ACCESS_TOKEN: ""
# NSXT_VMC_AUTH_HOST: ""
# NSXT_CLIENT_CERT_KEY_DATA: ""
# NSXT_CLIENT_CERT_DATA: ""
# NSXT_ROOT_CA_DATA: ""
# NSXT_SECRET_NAME: "cloud-provider-vsphere-nsxt-credentials"
# NSXT_SECRET_NAMESPACE: "kube-system"
To use NSX Advanced Load Balancer, you must first deploy it in your vSphere environment. See Install NSX Advanced Load Balancer. After deploying NSX Advanced Load Balancer, configure a vSphere management cluster to use the load balancer.
For example:
AVI_ENABLE: true
AVI_CONTROL_PLANE_HA_PROVIDER: true
AVI_NAMESPACE: "tkg-system-networking"
AVI_DISABLE_INGRESS_CLASS: true
AVI_AKO_IMAGE_PULL_POLICY: IfNotPresent
AVI_ADMIN_CREDENTIAL_NAME: avi-controller-credentials
AVI_CA_NAME: avi-controller-ca
AVI_CONTROLLER: 10.185.10.217
AVI_USERNAME: "admin"
AVI_PASSWORD: "<password>"
AVI_CLOUD_NAME: "Default-Cloud"
AVI_SERVICE_ENGINE_GROUP: "Default-Group"
AVI_NSXT_T1LR: ""
AVI_DATA_NETWORK: nsx-alb-dvswitch
AVI_DATA_NETWORK_CIDR: 10.185.0.0/20
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: ""
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: ""
AVI_CA_DATA_B64: LS0tLS1CRU[...]UtLS0tLQo=
AVI_LABELS: ""
AVI_DISABLE_STATIC_ROUTE_SYNC: true
AVI_INGRESS_DEFAULT_INGRESS_CONTROLLER: false
AVI_INGRESS_SHARD_VS_SIZE: ""
AVI_INGRESS_SERVICE_TYPE: ""
AVI_INGRESS_NODE_NETWORK_LIST: ""
By default, the management cluster and all workload clusters that it manages will use the load balancer. For information about how to configure the NSX Advanced Load Balancer variables, see NSX Advanced Load Balancer in the Configuration File Variable Reference.
You can use NSX ALB as the control plane endpoint provider in Tanzu Kubernetes Grid. The following table describes the differences between NSX ALB and Kube-Vip, which is the default control plane endpoint provider in Tanzu Kubernetes Grid.
|  | Kube-Vip | NSX ALB |
| --- | --- | --- |
| Sends Traffic to | Single control plane node | Multiple control plane nodes |
| Requires configuring endpoint VIP | Yes | No. Assigns VIP from the NSX ALB static IP pool |
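For example, minimal sketches of each option (the endpoint address is a placeholder):
With Kube-Vip as the control plane endpoint provider:
AVI_CONTROL_PLANE_HA_PROVIDER: false
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.185.11.134
With NSX ALB as the control plane endpoint provider:
AVI_ENABLE: true
AVI_CONTROL_PLANE_HA_PROVIDER: true
# VSPHERE_CONTROL_PLANE_ENDPOINT is left unset; NSX ALB assigns the VIP from its static IP pool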
If you are deploying the management cluster in an internet-restricted environment, uncomment and update the TKG_CUSTOM_IMAGE_REPOSITORY_* settings. You do not need to configure the private image registry settings if you set the TKG_CUSTOM_IMAGE_REPOSITORY_* variables by running the tanzu config set command, as described in Prepare an Internet-Restricted Environment. Environment variables set by running tanzu config set override values from a cluster configuration file.
For example:
#! ---------------------------------------------------------------------
#! Image repository configuration
#! ---------------------------------------------------------------------
TKG_CUSTOM_IMAGE_REPOSITORY: "custom-image-repository.io/yourproject"
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: "LS0t[...]tLS0tLQ=="
To optionally send outgoing HTTP(S) traffic from the management cluster to a proxy, for example, in an internet-restricted environment, uncomment and set the *_PROXY settings. You can choose to use one proxy for HTTP requests and another proxy for HTTPS requests or to use the same proxy for both HTTP and HTTPS requests. You cannot change the proxy after you deploy the cluster.
Note
Traffic from cluster VMs to vCenter cannot be proxied. In a proxied vSphere environment, you need to either set VSPHERE_INSECURE to true, or else add the vCenter IP address or hostname to the TKG_NO_PROXY list.
- TKG_HTTP_PROXY_ENABLED: Set this to true to configure a proxy.
- TKG_PROXY_CA_CERT: Set this to the proxy server's CA if its certificate is self-signed.
- TKG_HTTP_PROXY: This is the URL of the proxy that handles HTTP requests. To set the URL, use the format below:
  PROTOCOL://USERNAME:PASSWORD@FQDN-OR-IP:PORT
  Where:
  - PROTOCOL: This must be http.
  - USERNAME and PASSWORD: This is your HTTP proxy username and password. You must set USERNAME and PASSWORD if the proxy requires authentication. Note: When deploying management clusters with CLI, the following non-alphanumeric characters cannot be used in passwords: # ` ^ | / \ ? % ^ { [ ] }" < >.
  - FQDN-OR-IP: This is the FQDN or IP address of your HTTP proxy.
  - PORT: This is the port number that your HTTP proxy uses.
  For example, http://user:password@myproxy.com:1234.
- TKG_HTTPS_PROXY: This is the URL of the proxy that handles HTTPS requests. You can set TKG_HTTPS_PROXY to the same value as TKG_HTTP_PROXY or provide a different value. To set the value, use the URL format from the previous step, where:
  - PROTOCOL: This must be http.
  - USERNAME and PASSWORD: This is your HTTPS proxy username and password. You must set USERNAME and PASSWORD if the proxy requires authentication. Note: When deploying management clusters with CLI, the following non-alphanumeric characters cannot be used in passwords: # ` ^ | / \ ? % ^ { [ ] }" < >.
  - FQDN-OR-IP: This is the FQDN or IP address of your HTTPS proxy.
  - PORT: This is the port number that your HTTPS proxy uses.
  For example, http://user:password@myproxy.com:1234.
- TKG_NO_PROXY: This sets one or more comma-separated network CIDRs or hostnames that must bypass the HTTP(S) proxy, for example, to enable the management cluster to communicate directly with infrastructure that runs on the same network, behind the same proxy. Do not use spaces in the comma-separated list setting. For example, noproxy.yourdomain.com,192.168.0.0/24.
  This list must include:
  - The CIDR of VSPHERE_NETWORK, which includes the IP address of your control plane endpoint. If you set VSPHERE_CONTROL_PLANE_ENDPOINT to an FQDN, also add that FQDN to the TKG_NO_PROXY list.
  Internally, Tanzu Kubernetes Grid appends localhost, 127.0.0.1, the values of CLUSTER_CIDR and SERVICE_CIDR, .svc, and .svc.cluster.local to the value that you set in TKG_NO_PROXY. You must manually add the CIDR of VSPHERE_NETWORK, which includes the IP address of your control plane endpoint, to TKG_NO_PROXY. If you set VSPHERE_CONTROL_PLANE_ENDPOINT to an FQDN, add both the FQDN and VSPHERE_NETWORK to TKG_NO_PROXY.
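For example, assuming that your vSphere network is 192.168.110.0/24 and that vCenter Server is reachable at vcenter.example.com (both placeholder values), the resulting setting might look like this:
TKG_NO_PROXY: "noproxy.yourdomain.com,192.168.0.0/24,192.168.110.0/24,vcenter.example.com"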
Important
If the cluster VMs need to communicate with external services and infrastructure endpoints in your Tanzu Kubernetes Grid environment, ensure that those endpoints are reachable by the proxies that you set above or add them to TKG_NO_PROXY. Depending on your environment configuration, this may include, but is not limited to, your OIDC or LDAP server, Harbor, VMware NSX, and NSX Advanced Load Balancer.
For example:
#! ---------------------------------------------------------------------
#! Proxy configuration
#! ---------------------------------------------------------------------
TKG_HTTP_PROXY_ENABLED: true
TKG_PROXY_CA_CERT: "LS0t[...]tLS0tLQ=="
TKG_HTTP_PROXY: "http://myproxy.com:1234"
TKG_HTTPS_PROXY: "http://myproxy.com:1234"
TKG_NO_PROXY: "noproxy.yourdomain.com,192.168.0.0/24"
Optionally, update variables based on your deployment preferences and using the guidelines described in the Machine Health Checks section of Configuration File Variable Reference.
For example:
ENABLE_MHC:
ENABLE_MHC_CONTROL_PLANE: true
ENABLE_MHC_WORKER_NODE: true
MHC_MAX_UNHEALTHY_CONTROL_PLANE: 60%
MHC_MAX_UNHEALTHY_WORKER_NODE: 60%
MHC_UNKNOWN_STATUS_TIMEOUT: 10m
MHC_FALSE_STATUS_TIMEOUT: 20m
Set IDENTITY_MANAGEMENT_TYPE to ldap or oidc. Set it to none or omit it to deactivate identity management. It is strongly recommended to enable identity management for production deployments. For example:
IDENTITY_MANAGEMENT_TYPE: oidc
IDENTITY_MANAGEMENT_TYPE: ldap
To configure OIDC, update the variables below. For information about how to configure the variables, see Identity Providers - OIDC in the Configuration File Variable Reference.
For example:
OIDC_IDENTITY_PROVIDER_CLIENT_ID: 0oa2i[...]NKst4x7
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: 331!b70[...]60c_a10-72b4
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups
OIDC_IDENTITY_PROVIDER_ISSUER_URL: https://dev-[...].okta.com
OIDC_IDENTITY_PROVIDER_SCOPES: openid,groups,email,offline_access
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email
To configure LDAP, uncomment and update the LDAP_* variables with information about your LDAPS server. For information about how to configure the variables, see Identity Providers - LDAP in the Configuration File Variable Reference.
For example:
LDAP_BIND_DN: "cn=bind-user,ou=people,dc=example,dc=com"
LDAP_BIND_PASSWORD: "example-password"
LDAP_GROUP_SEARCH_BASE_DN: dc=example,dc=com
LDAP_GROUP_SEARCH_FILTER: &(objectClass=posixGroup)(memberUid={})
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: uid
LDAP_HOST: ldaps.example.com:636
LDAP_ROOT_CA_DATA_B64: ""
LDAP_USER_SEARCH_BASE_DN: ou=people,dc=example,dc=com
LDAP_USER_SEARCH_FILTER: &(objectClass=posixAccount)(uid={})
LDAP_USER_SEARCH_NAME_ATTRIBUTE: uid
By default, clusters that you deploy with the Tanzu CLI provide in-cluster container networking with the Antrea container network interface (CNI).
With ANTREA_* configuration variables, you can optionally deactivate Source Network Address Translation (SNAT) for pod traffic, implement hybrid, noEncap, or NetworkPolicyOnly traffic encapsulation modes, use proxies and network policies, and implement Traceflow.
Proxy Settings: Antrea proxy and related configurations determine which TKG components handle network traffic with service types ClusterIP, NodePort, and LoadBalancer originating from internal pods, internal nodes, and external clients:
- ANTREA_PROXY=true and ANTREA_PROXY_ALL=false (default): AntreaProxy handles ClusterIP traffic from pods, and kube-proxy handles all service traffic from nodes and external traffic, which has service type NodePort.
- ANTREA_PROXY=false: kube-proxy handles all service traffic from all sources; overrides settings for ANTREA_PROXY_ALL.
- ANTREA_PROXY_ALL=true: AntreaProxy handles all service traffic from all nodes and pods.
  - kube-proxy also redundantly handles all service traffic from nodes and hostNetwork pods, including Antrea components, typically before AntreaProxy does.
  - If kube-proxy has been removed, as described in Removing kube-proxy, AntreaProxy alone serves all traffic types from all sources.
- Because AntreaProxy provides ClusterIP services for traffic to the Kubernetes API server, it also connects to the server itself, so it is safer to give AntreaProxy its own address for kube-apiserver. To do so, set ANTREA_KUBE_APISERVER_OVERRIDE in the format CONTROL-PLANE-VIP:PORT, as shown in the sketch after this list. The address should be either maintained by kube-vip or a static IP for a control plane node.
- LoadBalancer service:
  - To use Antrea as the LoadBalancer solution, enable ANTREA_SERVICE_EXTERNALIP and define Antrea ExternalIPPool custom resources as described in Service of type LoadBalancer in the Antrea documentation.
  - kube-proxy cannot serve as a load balancer and needs a third-party load balancer solution for allocating and advertising LoadBalancer IP addresses.
For more information about Antrea, see the Antrea documentation.
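For example, a minimal sketch that gives AntreaProxy a dedicated address for the Kubernetes API server and enables the Antrea LoadBalancer support described above (the VIP is a placeholder):
ANTREA_PROXY: true
ANTREA_PROXY_ALL: true
ANTREA_KUBE_APISERVER_OVERRIDE: "10.185.11.134:6443"
ANTREA_SERVICE_EXTERNALIP: true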
To optionally configure these features on Antrea, uncomment and update the ANTREA_* variables. For example:
#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------
ANTREA_NO_SNAT: true
ANTREA_NODEPORTLOCAL: true
ANTREA_NODEPORTLOCAL_ENABLED: true
ANTREA_NODEPORTLOCAL_PORTRANGE: 61000-62000
ANTREA_TRAFFIC_ENCAP_MODE: "encap"
ANTREA_PROXY: true
ANTREA_PROXY_ALL: false
ANTREA_PROXY_LOAD_BALANCER_IPS: false
ANTREA_PROXY_NODEPORT_ADDRS:
ANTREA_PROXY_SKIP_SERVICES: ""
ANTREA_POLICY: true
ANTREA_TRACEFLOW: true
ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: false
ANTREA_ENABLE_USAGE_REPORTING: false
ANTREA_EGRESS: true
ANTREA_EGRESS_EXCEPT_CIDRS: ""
ANTREA_FLOWEXPORTER: false
ANTREA_FLOWEXPORTER_COLLECTOR_ADDRESS: "flow-aggregator.flow-aggregator.svc:4739:tls"
ANTREA_FLOWEXPORTER_POLL_INTERVAL: "5s"
ANTREA_FLOWEXPORTER_ACTIVE_TIMEOUT: "5s"
ANTREA_FLOWEXPORTER_IDLE_TIMEOUT: "15s"
ANTREA_IPAM: false
ANTREA_KUBE_APISERVER_OVERRIDE: ""
ANTREA_MULTICAST: false
ANTREA_MULTICAST_INTERFACES: ""
ANTREA_NETWORKPOLICY_STATS: true
ANTREA_SERVICE_EXTERNALIP: true
ANTREA_TRAFFIC_ENCRYPTION_MODE: none
ANTREA_TRANSPORT_INTERFACE: ""
ANTREA_TRANSPORT_INTERFACE_CIDRS: ""
To configure a management cluster that supports IPv6 in an IPv6 networking environment:
1. Prepare the environment as described in (Optional) Set Variables and Rules for IPv6.
2. Set the following variables in the configuration file for the management cluster:
   - Set TKG_IP_FAMILY to ipv6.
   - Set VSPHERE_CONTROL_PLANE_ENDPOINT to a static IPv6 address.
   - Optionally, set CLUSTER_CIDR and SERVICE_CIDR. These default to fd00:100:96::/48 and fd00:100:64::/108, respectively.
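For example, a minimal sketch of the IPv6 settings (the endpoint address is a placeholder):
TKG_IP_FAMILY: ipv6
VSPHERE_CONTROL_PLANE_ENDPOINT: "fd00:100:64:100::10"
CLUSTER_CIDR: "fd00:100:96::/48"
SERVICE_CIDR: "fd00:100:64::/108"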
You can configure a management or workload cluster that runs nodes in multiple availability zones (AZs) as described in Running Clusters Across Multiple Availability Zones.
Prerequisites:
- A vsphere-zones.yaml file containing Failure Domain and Deployment Zone object definitions for the AZs, created as described in Create FailureDomain and DeploymentZone Objects in Kubernetes. You pass this file to the --az-file option when you create the management cluster.
To configure a cluster with nodes deployed across multiple AZs:
- Set VSPHERE_REGION and VSPHERE_ZONE to the region and zone tag categories, k8s-region and k8s-zone.
- Set VSPHERE_AZ_0, VSPHERE_AZ_1, and VSPHERE_AZ_2 with the names of the VsphereDeploymentZone objects where the machines need to be deployed.
  - The VsphereDeploymentZone associated with VSPHERE_AZ_0 is the VSphereFailureDomain in which the machine deployment ending with md-0 gets deployed. Similarly, VSPHERE_AZ_1 is the VSphereFailureDomain in which the machine deployment ending with md-1 gets deployed, and VSPHERE_AZ_2 is the VSphereFailureDomain in which the machine deployment ending with md-2 gets deployed.
  - If any of these settings is not defined, the corresponding machine deployment is created without a VSphereFailureDomain.
- WORKER_MACHINE_COUNT sets the total number of workers for the cluster. The total number of workers is distributed in a round-robin fashion across the number of AZs specified.
- VSPHERE_AZ_CONTROL_PLANE_MATCHING_LABELS sets key/value selector labels for the AZs that cluster control plane nodes may deploy to.
  - Set this variable if VSPHERE_REGION and VSPHERE_ZONE are set.
  - The labels must exist in the VSphereDeploymentZone resources that you create.
  - For example: "region=us-west-1,environment=staging".
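For example, a minimal sketch of the multi-AZ settings (the zone names and labels are placeholders that must match the VsphereDeploymentZone objects you created):
VSPHERE_REGION: k8s-region
VSPHERE_ZONE: k8s-zone
VSPHERE_AZ_0: rack1
VSPHERE_AZ_1: rack2
VSPHERE_AZ_2: rack3
WORKER_MACHINE_COUNT: 3
VSPHERE_AZ_CONTROL_PLANE_MATCHING_LABELS: "region=us-west-1,environment=staging"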
For the full list of options that you must specify when deploying clusters to vSphere, see the Configuration File Variable Reference.
TKG supports in-cluster Node IPAM for standalone management clusters on vSphere and the class-based workload clusters that they manage. For more information and current limitations, see Node IPAM in Creating and Managing TKG 2.5 Workload Clusters on vSphere with the Tanzu CLI.
Note
This procedure describes how to create a new management cluster that uses Node IPAM directly. For information about how to update an existing management cluster to use Node IPAM, see Migrate an Existing Management Cluster from DHCP to Node IPAM.
You cannot deploy a new management cluster with Node IPAM directly from the installer interface; you must deploy it from a configuration file. You can create the configuration file by running the installer interface, clicking Review Configuration > Export Configuration, and then editing the generated configuration file as described below.
To deploy a new management cluster that uses in-cluster IPAM for its nodes:
As a prerequisite, gather IP addresses of nameservers to use for the cluster’s control plane and worker nodes. This is required because cluster nodes will no longer receive nameservers via DHCP to resolve names in vCenter.
Multiple Nameservers: For each nameserver setting you can specify a single nameserver, or two nameservers for redundancy. With two nameservers, a node VM normally uses the first nameserver listed in the setting, and only uses the second nameserver if the first one fails. One common setup is to specify multiple nameservers for all nodes, and set one as primary for worker nodes and the other as primary for control plane nodes.
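For example, a sketch of that common setup, with the same two nameservers listed in opposite orders so that each node type prefers a different primary (the addresses are placeholders):
CONTROL_PLANE_NODE_NAMESERVERS: "10.10.10.10,10.10.10.11"
WORKER_NODE_NAMESERVERS: "10.10.10.11,10.10.10.10"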
Edit the management cluster configuration file to include settings like the following, as described in the Node IPAM table in the Configuration File Variable Reference:
MANAGEMENT_NODE_IPAM_IP_POOL_GATEWAY: "10.10.10.1"
MANAGEMENT_NODE_IPAM_IP_POOL_ADDRESSES: "10.10.10.2-10.10.10.100,10.10.10.105"
MANAGEMENT_NODE_IPAM_IP_POOL_SUBNET_PREFIX: "24"
CONTROL_PLANE_NODE_NAMESERVERS: "10.10.10.10,10.10.10.11"
WORKER_NODE_NAMESERVERS: "10.10.10.10,10.10.10.11"
Where CONTROL_PLANE_NODE_NAMESERVERS and WORKER_NODE_NAMESERVERS are the addresses of the nameservers to use. For the *_NODE_NAMESERVERS settings, you can specify a single nameserver or two comma-separated nameservers.
Dual-Stack: To configure a management cluster with dual-stack Node IPAM networking, include MANAGEMENT_NODE_IPAM_* settings as described in Configure Node IPAM with Dual-Stack Networking.
To configure the maximum transmission unit (MTU) for management clusters and workload clusters on a vSphere Standard Switch, set the VSPHERE_MTU variable. Setting VSPHERE_MTU is applicable to both management clusters and workload clusters.
If not specified, the default vSphere node MTU is 1500. The maximum value is 9000. For information about MTUs, see About vSphere Networking in the vSphere 8 documentation.
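For example, to use jumbo frames on cluster nodes, assuming the underlying physical network and virtual switch are configured to support them:
VSPHERE_MTU: 9000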
If vSphere IaaS control plane Supervisor is enabled on the target vCenter Server instance, the installer states that you can use the TKG Service as the preferred way to run Kubernetes workloads, in which case you do not need a standalone management cluster. It presents a choice: you can either enable the Supervisor instead, or continue deploying the standalone management cluster. To proceed with the standalone management cluster deployment, set DEPLOY_TKG_ON_VSPHERE7 to true in the configuration file.
Run the tanzu mc create Command
After you have created or updated the cluster configuration file, you can deploy a management cluster by running the tanzu mc create --file CONFIG-FILE command, where CONFIG-FILE is the name of the configuration file. If your configuration file is the default ~/.config/tanzu/tkg/cluster-config.yaml, you can omit the --file option. If you would like to review the Kubernetes manifest that the tanzu mc create command will apply, you can optionally use the --dry-run flag to print the manifest without making changes. This invocation still runs the validation checks described below before generating the Kubernetes manifest.
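For example, a sketch of capturing the generated manifest for review before performing a real deployment (redirecting the output to a file is just one way to keep it):
```
tanzu mc create --file path/to/cluster-config-file.yaml --dry-run > mc-manifest.yaml
```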
Caution
The tanzu mc create command takes time to complete. While tanzu mc create is running, do not run additional invocations of tanzu mc create on the same bootstrap machine to deploy multiple management clusters, change context, or edit ~/.kube-tkg/config.
To deploy a management cluster, run the tanzu mc create command. For example:
tanzu mc create --file path/to/cluster-config-file.yaml
Multi-AZ: To run the management cluster or its workload clusters across multiple availability zones, either now or later, include the --az-file option with the vsphere-zones.yaml file described in Create FailureDomain and DeploymentZone Objects in Kubernetes:
tanzu mc create --file path/to/cluster-config-file.yaml --az-file path/to/vsphere-zones.yaml
When you run tanzu mc create, the command performs several validation checks before deploying the management cluster. It verifies that the target vSphere infrastructure meets the requirements for the deployment.
Multi-AZ Validation: If you are deploying the management cluster defining FailureDomains and DeploymentZones resources as described in Create FailureDomain and DeploymentZone Objects in Kubernetes, and then referencing them with the --az-file option in the tanzu mc create command, the cluster creation process by default performs the following additional checks:
- The VSphereFailureDomain, VSphereDeploymentZone, and related Kubernetes objects exist in vSphere, are accessible, and have host groups, VM groups, and tags that also exist.
- The MATCHING_LABELS configuration settings correspond to labels in the VSphereDeploymentZone objects.
- The VSphereDeploymentZone resources exist in vSphere.
To prevent the cluster creation process from verifying that vSphere zones and regions specified in the configuration all exist, are consistent, and are defined at the same level, set SKIP_MULTI_AZ_VERIFY to "true" in your local environment:
```
export SKIP_MULTI_AZ_VERIFY="true"
```
You cannot set this variable in the cluster configuration file.
A typical scenario for using SKIP_MULTI_AZ_VERIFY is when you are deploying a standalone management cluster that you will use to create workload clusters running across multiple AZs in the future, but the vSphere resources for the workload cluster AZs have not yet been set up.
If any of these conditions are not met, the tanzu mc create command fails.
When you run tanzu mc create, you can follow the progress of the deployment of the management cluster in the terminal. The first run of tanzu mc create takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap machine. Subsequent runs do not require this step, so are faster.
If tanzu mc create fails before the management cluster deploys, you should clean up artifacts on your bootstrap machine before you re-run tanzu mc create. See the Troubleshooting Management Cluster Issues topic for details. If the machine on which you run tanzu mc create shuts down or restarts before the local operations finish, the deployment will fail.
If the deployment succeeds, you see a confirmation message in the terminal:
Management cluster created! You can now create your first workload cluster by running tanzu cluster create [name] -f [file]
For information about what happened during the deployment of the management cluster, how to connect kubectl to the management cluster, how to create namespaces, and how to register the management cluster with Tanzu Mission Control, see Examine and Register a Newly-Deployed Standalone Management Cluster.