Prepare to Deploy Management Clusters to Microsoft Azure

This topic explains how to prepare Microsoft Azure for running Tanzu Kubernetes Grid.

If you are installing Tanzu Kubernetes Grid on Azure VMware Solution (AVS), you are installing to a vSphere environment. See Preparing Azure VMware Solution on Microsoft Azure in Prepare to Deploy Management Clusters to a VMware Cloud Environment to prepare your environment, and see Prepare to Deploy Management Clusters to vSphere to deploy management clusters.

For your convenience, a Preparation Checklist is available at the end of this page to ensure you are prepared to deploy a Tanzu Kubernetes Grid management cluster to Azure.

Important

Tanzu Kubernetes Grid v2.4.x is the last version of TKG that supports the creation of standalone TKG management clusters on Azure. The ability to create standalone TKG management clusters on Azure will be removed in the Tanzu Kubernetes Grid v2.5 release.

Going forward, VMware recommends that you use Tanzu Mission Control to create native Azure AKS clusters instead of creating new TKG management clusters on Azure. For information about how to create native Azure AKS clusters with Tanzu Mission Control, see Managing the Lifecycle of Azure AKS Clusters in the Tanzu Mission Control documentation.

For more information, see Deprecation of TKG Management and Workload Clusters on AWS and Azure in the VMware Tanzu Kubernetes Grid v2.4 Release Notes.

General Requirements

  • The Tanzu CLI installed locally. See Install the Tanzu CLI and Kubernetes CLI for Use with Standalone Management Clusters.
  • A Microsoft Azure account with the following:

    • Permissions required to create a service principal and assign the Owner role to it.
      For more information about the roles, see Azure built-in roles.
    • Sufficient VM core (vCPU) quotas for your clusters. A standard Azure account has a quota of 10 vCPU per region. The vCPU requirements depend on whether you use the prod or dev plan. To learn more about the plans, see Workload Cluster Plans. A scripted quota check appears after this list.
      Tanzu Kubernetes Grid clusters require 2 vCPU per node, which translates to:
    • Management cluster:
      • dev plan: 4 vCPU (1 main, 1 worker)
      • prod plan: 8 vCPU (3 main, 1 worker)
    • Each workload cluster:
      • dev plan: 4 vCPU (1 main, 1 worker)
      • prod plan: 12 vCPU (3 main, 3 worker)
    • For example, assuming a single management cluster and all clusters with the same plan:

      Plan   Workload Clusters   vCPU for Workload   vCPU for Management   Total vCPU
      Dev    1                   4                   4                     8
      Dev    5                   20                  4                     24
      Prod   1                   12                  8                     20
      Prod   5                   60                  8                     68

    • Sufficient public IP address quotas for your clusters, including the quota for Public IP Addresses - Standard, Public IP Addresses - Basic, and Static Public IP Addresses. A standard Azure account has a quota of 10 public IP addresses per region. Every Tanzu Kubernetes Grid cluster requires 2 Public IP addresses regardless of how many control plane nodes and worker nodes it has. For each Kubernetes Service object with type LoadBalancer, 1 Public IP address is required.

  • Traffic is allowed between your local bootstrap machine and the image repositories listed in the management cluster Bill of Materials (BoM) file, over port 443, for TCP.*
    • The BoM file is under ~/.config/tanzu/tkg/bom/, and its name includes the Tanzu Kubernetes Grid version. For example, tkg-bom-v2.4.1+vmware.1.yaml.
    • Run a DNS lookup on all imageRepository values to find their CNAMEs; a lookup example appears after this list.
  • (Optional) OpenSSL installed locally, to create a new key-pair or validate the download package thumbprint. See OpenSSL.
  • (Optional) A Virtual Network (VNet) with:

    • A subnet for the management cluster control plane node
    • A Network Security Group (NSG) in the cluster’s VNet resource group that is on the control plane subnet and has the following inbound security rules, to enable SSH and Kubernetes API server connections:
      • Allow TCP over port 22 for any source and destination
      • Allow TCP over port 6443 for any source and destination. Port 6443 is where the Kubernetes API is exposed on VMs in the clusters you create. To change this port for a management or a workload cluster, set the CLUSTER_API_SERVER_PORT variable when deploying the cluster.
    • A subnet for the management cluster worker nodes
    • An NSG for the management cluster worker nodes that is in the cluster’s VNet resource group and on the cluster’s worker node subnet

    If you do not use an existing VNet, the installation process creates a new one.

  • The Azure CLI installed locally. See Install the Azure CLI in the Microsoft Azure documentation.

  • If you will deploy services of type LoadBalancer to class-based workload clusters, configure a NAT gateway or other frontend as described in LoadBalancer services for class-based workload clusters on Azure need manual gateway or frontend configuration.

*Or see Prepare an Internet-Restricted Environment for installing without external network access.
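
The quota and image-repository checks described in the list above can be scripted with the Azure CLI and standard shell tools. The following is a minimal sketch; the region, BoM file name, and repository host are examples that you must adjust to your environment and Tanzu Kubernetes Grid version.

    # Check vCPU (core) and public IP address usage against quotas in your target region.
    az vm list-usage --location westus2 --output table | grep -i "Total Regional vCPUs"
    az network list-usages --location westus2 --output table | grep -i "Public IP Addresses"

    # List the image repositories referenced by the management cluster BoM file,
    # then resolve each host to find its CNAMEs.
    grep "imageRepository" ~/.config/tanzu/tkg/bom/tkg-bom-v2.4.1+vmware.1.yaml | sort -u
    nslookup projects.registry.vmware.com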

Management Cluster Sizing Examples

The examples below describe sizing for management clusters on Azure. Use this data as guidance to ensure your management cluster is scaled to handle the number of workload clusters that you plan to deploy. In each example, the Workload cluster VM size values list the VM sizes that were used for the clusters described under Can manage ….

Example 1

  • Management cluster plan: 3 control plane nodes and 3 worker nodes
  • Management cluster VM size:
    • Control plane nodes: Standard_D2s_v3 (CPU: 2; memory: 8 GB; SSD: 16 GB)
    • Worker nodes: Standard_D2s_v3 (CPU: 2; memory: 8 GB; SSD: 16 GB)
  • Can manage …:
    • 5 workload clusters, each cluster deployed with 3 control plane and 200 worker nodes; or
    • 10 workload clusters, each cluster deployed with 3 control plane and 50 worker nodes
  • Workload cluster VM size:
    • Control plane nodes: Standard_D2s_v3 (CPU: 2; memory: 8 GB; SSD: 16 GB)
    • Worker nodes: Standard_D2s_v3 (CPU: 2; memory: 8 GB; SSD: 16 GB)

Example 2

  • Management cluster plan: 3 control plane nodes and 3 worker nodes
  • Management cluster VM size:
    • Control plane nodes: Standard_D2s_v3 (CPU: 2; memory: 8 GB; SSD: 16 GB)
    • Worker nodes: Standard_D2s_v3 (CPU: 2; memory: 8 GB; SSD: 16 GB)
  • Can manage …: One workload cluster, deployed with 3 control plane and 250 worker nodes
  • Workload cluster VM size:
    • Control plane nodes: Standard_D4s_v3 (CPU: 4; memory: 16 GB; SSD: 32 GB)
    • Worker nodes: Standard_D2s_v3 (CPU: 2; memory: 8 GB; SSD: 16 GB)

Example 3

  • Management cluster plan: 3 control plane nodes and 3 worker nodes
  • Management cluster VM size:
    • Control plane nodes: Standard_D4s_v3 (CPU: 4; memory: 16 GB; SSD: 32 GB)
    • Worker nodes: Standard_D4s_v3 (CPU: 4; memory: 16 GB; SSD: 32 GB)
  • Can manage …: 199 workload clusters, each deployed with 3 control plane and 3 worker nodes
  • Workload cluster VM size:
    • Control plane nodes: Standard_D2s_v3 (CPU: 2; memory: 8 GB; SSD: 16 GB)
    • Worker nodes: Standard_D2s_v3 (CPU: 2; memory: 8 GB; SSD: 16 GB)

Create Azure NSGs for Existing VNet

Tanzu Kubernetes Grid management and workload clusters on Azure require two Network Security Groups (NSGs) to be defined on their VNet and in their VNet resource group:

  • An NSG named CLUSTER-NAME-controlplane-nsg and associated with the cluster’s control plane subnet
  • An NSG named CLUSTER-NAME-node-nsg and associated with the cluster’s worker node subnet

    Where CLUSTER-NAME is the name of the cluster.

    Caution

    Giving NSGs names that do not follow the format above may prevent deployment.

If you specify an existing VNet for the management cluster, you must create these NSGs as described in the General Requirements above. An existing VNet for a management cluster is specified with Select an existing VNet in the installer interface or AZURE_VNET_NAME in its configuration file.

If you do not specify an existing VNet for the cluster, the deployment process creates a new VNet and the required NSGs.

See the Microsoft Azure table in the Configuration File Variable Reference for how to configure the cluster’s VNet, resource groups, and subnets.
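
If you are using an existing VNet, you can create the two NSGs and their inbound rules with the Azure CLI. The following is a minimal sketch that assumes a cluster named my-cluster, a VNet named my-vnet in resource group my-vnet-rg, and subnets named my-cluster-controlplane-subnet and my-cluster-node-subnet; substitute your own names and tighten the source address ranges to match your security policy.

    # Control plane NSG: allow SSH (22) and the Kubernetes API server port (6443).
    # If you set CLUSTER_API_SERVER_PORT, allow that port instead of 6443.
    az network nsg create --resource-group my-vnet-rg --name my-cluster-controlplane-nsg
    az network nsg rule create --resource-group my-vnet-rg --nsg-name my-cluster-controlplane-nsg \
      --name allow-ssh --priority 100 --direction Inbound --access Allow --protocol Tcp \
      --destination-port-ranges 22
    az network nsg rule create --resource-group my-vnet-rg --nsg-name my-cluster-controlplane-nsg \
      --name allow-apiserver --priority 110 --direction Inbound --access Allow --protocol Tcp \
      --destination-port-ranges 6443

    # Worker node NSG.
    az network nsg create --resource-group my-vnet-rg --name my-cluster-node-nsg

    # Associate each NSG with its subnet.
    az network vnet subnet update --resource-group my-vnet-rg --vnet-name my-vnet \
      --name my-cluster-controlplane-subnet --network-security-group my-cluster-controlplane-nsg
    az network vnet subnet update --resource-group my-vnet-rg --vnet-name my-vnet \
      --name my-cluster-node-subnet --network-security-group my-cluster-node-nsg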

Register Tanzu Kubernetes Grid as an Azure Client App

Tanzu Kubernetes Grid manages Azure resources as a registered client application that accesses Azure through a service principal. To create the service principal and configure its access to Azure resources, you can use the az ad sp create-for-rbac command.

  1. Sign in to the Azure CLI by running az login.

  2. Create a service principal and assign the Owner role to it:

    az ad sp create-for-rbac --role "Owner" --name "APP-NAME" --scopes /subscriptions/SUBSCRIPTION-ID/resourceGroups/RESOURCE-GROUP
    az role assignment create --assignee APP-ID --role "Owner"
    

    Where:

    • APP-NAME is any name to give your service principal
    • SUBSCRIPTION-ID and RESOURCE-GROUP are your Azure subscription ID and VNet resource group
    • APP-ID is the appId value returned from az ad sp create-for-rbac

    For example, to create and assign the Owner role to a service principal named tkg:

    $ az ad sp create-for-rbac --role "Owner" --name "tkg" --scopes /subscriptions/c789uce3-aaaa-bbbb-cccc-a51b6b0gb405/resourceGroups/myrg
    Creating 'Owner' role assignment under scope '/subscriptions/c789uce3-aaaa-bbbb-cccc-a51b6b0gb405'
    The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli
    'name' property in the output is deprecated and will be removed in the future. Use 'appId' instead.
    {
     "appId": "c407cfd4-aaaa-bbbb-cccc-80af703eb0ed",
     "displayName": "tkg",
     "name": "c407cfd4-aaaa-bbbb-cccc-80af703eb0ed",
     "password": "R6yM_.aaaabbbbccccdddd111122223333",
     "tenant": "9c117323-aaaa-bbbb-cccc-9ee430723ba3"
    }
    $ az role assignment create --assignee c407cfd4-aaaa-bbbb-cccc-80af703eb0ed --role "Owner"
    

    Record the output. You will use this information in the Accept the Base Image License steps that follow and later when deploying a management cluster. For the full list of options that are supported by az ad sp create-for-rbac, see az ad sp create-for-rbac in the Azure documentation.
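
    When you deploy the management cluster, the values in this output map to the Azure credential settings in the cluster configuration file or the corresponding installer interface fields. The snippet below is a sketch of that mapping using the configuration variable names for Azure credentials; confirm them against the Microsoft Azure table in the Configuration File Variable Reference.

    # Example cluster configuration values, taken from the output above (placeholders).
    AZURE_CLIENT_ID: "c407cfd4-aaaa-bbbb-cccc-80af703eb0ed"        # appId
    AZURE_CLIENT_SECRET: "R6yM_.aaaabbbbccccdddd111122223333"      # password
    AZURE_TENANT_ID: "9c117323-aaaa-bbbb-cccc-9ee430723ba3"        # tenant
    AZURE_SUBSCRIPTION_ID: "c789uce3-aaaa-bbbb-cccc-a51b6b0gb405"  # your subscription ID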

Accept the Base Image License

To run management cluster VMs on Azure, accept the license for their base Kubernetes version and machine OS.

Run the az vm image terms accept command, specifying the --plan and your subscription ID.

In Tanzu Kubernetes Grid v2.4.1, the default cluster image --plan value is k8s-1dot27dot5-ubuntu-2004, based on Kubernetes version 1.27.5 and the machine OS, Ubuntu 20.04. Run the following command:

az vm image terms accept --publisher vmware-inc --offer tkg-capi-2022-06-24 --plan k8s-1dot27dot5-ubuntu-2004 --subscription AZURE_SUBSCRIPTION_ID

Where AZURE_SUBSCRIPTION_ID is your Azure subscription ID.

You must repeat this to accept the base image license for every version of Kubernetes or OS that you want to use when you deploy clusters, and every time that you upgrade to a new version of Tanzu Kubernetes Grid.
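
Before accepting terms, you can list the VMware-published base images and their plans to see which --plan values are available. This is a minimal sketch; the offer shown matches the command above, but check the release notes for the offer and plan names that apply to your Tanzu Kubernetes Grid version.

    # List VMware-published TKG base images and their plan URNs.
    az vm image list --publisher vmware-inc --offer tkg-capi-2022-06-24 --all --output table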

Create an SSH Key Pair (Optional)

You deploy management clusters from a machine referred to as the bootstrap machine, using the Tanzu CLI. To connect to Azure, the bootstrap machine must provide the public key part of an SSH key pair. If your bootstrap machine does not already have an SSH key pair, you can use a tool such as ssh-keygen to generate one.

  1. On your bootstrap machine, run the following ssh-keygen command.

    ssh-keygen -t rsa -b 4096 -C "email@example.com"
    
  2. At the prompt Enter file in which to save the key (/root/.ssh/id_rsa): press Enter to accept the default.

  3. Enter and repeat a passphrase for the key pair.
  4. Add the private key to the SSH agent running on your machine, and enter the passphrase you created in the previous step.

    ssh-add ~/.ssh/id_rsa
    
  5. Open the file ~/.ssh/id_rsa.pub in a text editor so that you can easily copy and paste it when you deploy a management cluster.
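
As an alternative to opening the file in a text editor, you can print the public key, or a base64-encoded single-line form of it, directly from the shell. This is a minimal sketch; whether a plain or base64-encoded key is expected depends on how you supply it (for example, configuration file variables such as AZURE_SSH_PUBLIC_KEY_B64 expect base64), so confirm against the Configuration File Variable Reference.

    # Print the public key.
    cat ~/.ssh/id_rsa.pub

    # Print a base64-encoded, single-line form of the public key.
    base64 < ~/.ssh/id_rsa.pub | tr -d '\n'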

Preparation Checklist

Use this checklist to make sure you are prepared to deploy a Tanzu Kubernetes Grid management cluster to Azure:

  • Tanzu CLI installed

  • Azure account

    • Log in to the Azure web portal at https://portal.azure.com.
  • Azure CLI installed

    • Run az version. The output should list the current version of the Azure CLI as listed in Install the Azure CLI, in the Microsoft Azure documentation.
  • Registered tkg app

    • In the Azure portal, select Active Directory > App Registrations > Owned applications and confirm that your tkg app is listed as configured in Register Tanzu Kubernetes Grid as an Azure Client App above, and with a current secret.
    • Alternatively, in the Azure CLI, run az ad sp show --id APP-ID, where APP-ID is the appId value that you recorded when you registered the service principal.
  • Base VM image license accepted

    • Run az vm image terms show --publisher vmware-inc --offer tkg-capi-2022-06-24 --plan k8s-1dot27dot5-ubuntu-2004. The output should contain "accepted": true.
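
The CLI items in this checklist can also be run together as a short verification script. This is a minimal sketch; APP-ID is a placeholder for the appId of the service principal that you created earlier.

    #!/usr/bin/env bash
    set -e

    # Azure CLI is installed and you are logged in to the intended subscription.
    az version
    az account show --output table

    # The tkg service principal exists. Replace APP-ID with your appId value.
    az ad sp show --id APP-ID --query displayName

    # The base image license has been accepted ("true" expected).
    az vm image terms show --publisher vmware-inc --offer tkg-capi-2022-06-24 \
      --plan k8s-1dot27dot5-ubuntu-2004 --query accepted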

What to Do Next

For production deployments, it is strongly recommended to enable identity management for your clusters:

  • For information about the preparatory steps to perform before you deploy a management cluster, see Obtain Your Identity Provider Details in Configure Identity Management.
  • For conceptual information about identity management and access control in Tanzu Kubernetes Grid, see About Identity and Access Management.

If you are using Tanzu Kubernetes Grid in an environment with an external internet connection, once you have set up identity management, you are ready to deploy management clusters to Azure.
