
Create a new Azure Cluster

Prerequisites

  • Before you begin, make sure you have created a Bootstrap cluster.

Name your cluster

  1. Give your cluster a unique name suitable for your environment.

  2. Set the environment variable:

    CODE
    export CLUSTER_NAME=azure-example

Tips and Tricks

Below are a few ways to customize your setup. If you prefer a basic setup, skip Tips and Tricks and proceed to the Create a new Azure Kubernetes cluster section.

  1. (Optional) To create a cluster name that is unique, use the following command:

CODE
export CLUSTER_NAME=azure-example-$(LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | fold -w 5 | head -n1)
echo $CLUSTER_NAME
CODE
azure-example-pf4a3

This command generates a different name on every run, so run it once and reuse the resulting value of ${CLUSTER_NAME}.

  2. (Optional) To use a custom Azure Image when creating your cluster, you must first build that image with Konvoy Image Builder (KIB), then reference it when generating the cluster objects. Recent DKP releases pass the image with a dedicated flag, shown here as --compute-gallery-id; confirm the exact flag for your version with dkp create cluster azure --help:

CODE
dkp create cluster azure --cluster-name=${CLUSTER_NAME} \
--compute-gallery-id="<image ID created by KIB>" \
--dry-run \
--output=yaml \
> ${CLUSTER_NAME}.yaml
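
If you have not built the image yet, a minimal KIB invocation looks roughly like the following sketch; the image definition path and credential variables are assumptions, so consult the Konvoy Image Builder documentation for your version:

CODE
konvoy-image build azure --client-id=${AZURE_CLIENT_ID} --tenant-id=${AZURE_TENANT_ID} images/azure/ubuntu-2004.yaml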

Create a new Azure Kubernetes cluster

Availability zones (AZs) are isolated locations within data center regions from which public cloud services originate and operate. Because all the nodes in a node pool are deployed in a single Availability Zone, you may wish to create additional node pools to ensure your cluster has nodes deployed in multiple Availability Zones.

By default, the control-plane Nodes will be created in 3 different zones. However, the default worker Nodes will reside in a single Availability Zone. You may create additional node pools in other Availability Zones with the dkp create nodepool command.
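
For example, here is a sketch of adding a worker pool in a different zone; the flag spelling and pool name are illustrative, so run dkp create nodepool azure --help to confirm what your DKP version supports:

CODE
dkp create nodepool azure --cluster-name=${CLUSTER_NAME} --availability-zone 2 ${CLUSTER_NAME}-md-1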

  1. Generate the Kubernetes cluster objects:

    CODE
    dkp create cluster azure --cluster-name=${CLUSTER_NAME} \
    --dry-run \
    --output=yaml \
    > ${CLUSTER_NAME}.yaml
    CODE
    Generating cluster resources
  2. (Optional) To configure the Control Plane and Worker nodes to use an HTTP proxy:

    CODE
    export CONTROL_PLANE_HTTP_PROXY=http://example.org:8080
    export CONTROL_PLANE_HTTPS_PROXY=http://example.org:8080
    export CONTROL_PLANE_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.cloudapp.azure.com"
    
    export WORKER_HTTP_PROXY=http://example.org:8080
    export WORKER_HTTPS_PROXY=http://example.org:8080
    export WORKER_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.cloudapp.azure.com"
    • Replace example.org,example.com,example.net with your internal addresses

    • localhost and 127.0.0.1 addresses should not use the proxy

    • 10.96.0.0/12 is the default Kubernetes service subnet

    • 192.168.0.0/16 is the default Kubernetes pod subnet

    • kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local is the internal Kubernetes kube-apiserver service

    • .svc,.svc.cluster,.svc.cluster.local are the internal Kubernetes service domain suffixes

    • 169.254.169.254 is the Azure metadata server

    • .cloudapp.azure.com is for the worker nodes to allow them to communicate directly to the kube-apiserver load balancer
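
    For example, to keep these defaults and add your own internal domains (corp.example.internal is a placeholder), prepend them to the lists before running the next step:

    CODE
    export CONTROL_PLANE_NO_PROXY="corp.example.internal,${CONTROL_PLANE_NO_PROXY}"
    export WORKER_NO_PROXY="corp.example.internal,${WORKER_NO_PROXY}"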

  3. (Optional) Create a Kubernetes cluster with the HTTP proxy configured. This step assumes you did not already generate the cluster objects in step 1:

    CODE
    dkp create cluster azure --cluster-name=${CLUSTER_NAME} \
    --control-plane-http-proxy="${CONTROL_PLANE_HTTP_PROXY}" \
    --control-plane-https-proxy="${CONTROL_PLANE_HTTPS_PROXY}" \
    --control-plane-no-proxy="${CONTROL_PLANE_NO_PROXY}" \
    --worker-http-proxy="${WORKER_HTTP_PROXY}" \
    --worker-https-proxy="${WORKER_HTTPS_PROXY}" \
    --worker-no-proxy="${WORKER_NO_PROXY}" \
    --dry-run \
    --output=yaml \
    > ${CLUSTER_NAME}.yaml
  4. Inspect or edit the cluster objects:

    NOTE: Familiarize yourself with Cluster API before editing the cluster objects as edits can prevent the cluster from deploying successfully.

    The objects are Custom Resources defined by Cluster API components, and they belong in three different categories:

    1. Cluster

      A Cluster object has references to the infrastructure-specific and control plane objects. Because this is an Azure cluster, there is an AzureCluster object that describes the infrastructure-specific cluster properties. Here, this means the Azure location, the Virtual Network and subnets, and the network security group rules required by the Pod network implementation.

    2. Control Plane

      A KubeadmControlPlane object describes the control plane, which is the group of machines that run the Kubernetes control plane components: the etcd distributed database, the API server, the core controllers, and the scheduler. The object describes the configuration for these components and also has a reference to an infrastructure-specific object that describes the properties of all control plane machines. Here, it references an AzureMachineTemplate object, which describes the instance type, the type of disk used, and the size of the disk, among other properties.

    3. Node Pool

      A Node Pool is a collection of machines with identical properties. For example, a cluster might have one Node Pool with large memory capacity and another Node Pool with GPU support. Each Node Pool is described by three objects: a MachineDeployment, an object that describes the configuration of Kubernetes components (for example, kubelet) deployed on each node pool machine, and an infrastructure-specific object that describes the properties of all node pool machines. Here, the MachineDeployment references a KubeadmConfigTemplate and an AzureMachineTemplate object, which describes the instance type, the type of disk used, and the size of the disk, among other properties.

    For in-depth documentation about the objects, read Concepts in the Cluster API Book.
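
    For a quick overview of the generated objects before you edit them, you can count the kind of each document in the file with standard shell tools:

    CODE
    grep '^kind:' ${CLUSTER_NAME}.yaml | sort | uniq -c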

  5. Modify the control plane audit log settings using the information on the Configuring the Control Plane page.

  6. Create the cluster from the objects:

    CODE
    kubectl create -f ${CLUSTER_NAME}.yaml
    CODE
    cluster.cluster.x-k8s.io/azure-example created
    azurecluster.infrastructure.cluster.x-k8s.io/azure-example created
    kubeadmcontrolplane.controlplane.cluster.x-k8s.io/azure-example-control-plane created
    azuremachinetemplate.infrastructure.cluster.x-k8s.io/azure-example-control-plane created
    secret/azure-example-etcd-encryption-config created
    machinedeployment.cluster.x-k8s.io/azure-example-md-0 created
    azuremachinetemplate.infrastructure.cluster.x-k8s.io/azure-example-md-0 created
    kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/azure-example-md-0 created
    clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-azure-example created
    configmap/calico-cni-installation-azure-example created
    configmap/tigera-operator-azure-example created
    clusterresourceset.addons.cluster.x-k8s.io/azure-disk-csi-azure-example created
    configmap/azure-disk-csi-azure-example created
    clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-azure-example created
    configmap/cluster-autoscaler-azure-example created
    clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-azure-example created
    configmap/node-feature-discovery-azure-example created
    clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-azure-example created
    configmap/nvidia-feature-discovery-azure-example created
  7. Wait for the cluster control-plane to be ready:

    CODE
    kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m
    CODE
    cluster.cluster.x-k8s.io/azure-example condition met
  8. After the objects are created on the API server, the Cluster API controllers reconcile them. They create infrastructure and machines. As they progress, they update the Status of each object. Konvoy provides a command to describe the current status of the cluster:

    CODE
    dkp describe cluster -c ${CLUSTER_NAME}
    CODE
    NAME                                                              READY  SEVERITY  REASON  SINCE  MESSAGE
    Cluster/azure-example                                             True                     3m4s
    ├─ClusterInfrastructure - AzureCluster/azure-example              True                     8m26s
    ├─ControlPlane - KubeadmControlPlane/azure-example-control-plane  True                     3m4s
    │ ├─Machine/azure-example-control-plane-l8j9r                     True                     3m9s
    │ ├─Machine/azure-example-control-plane-slprd                     True                     7m17s
    │ └─Machine/azure-example-control-plane-xhxxg                     True                     5m9s
    └─Workers
      └─MachineDeployment/azure-example-md-0                          True                     4m31s
        ├─Machine/azure-example-md-0-d67567c8b-2674r                  True                     5m19s
        ├─Machine/azure-example-md-0-d67567c8b-mbmhk                  True                     5m17s
        ├─Machine/azure-example-md-0-d67567c8b-pzg8k                  True                     5m17s
        └─Machine/azure-example-md-0-d67567c8b-z8km9                  True                     5m17s
  9. As they progress, the controllers also create Events. List the Events using this command:

    CODE
    kubectl get events | grep ${CLUSTER_NAME}

    For brevity, the example uses grep. It is also possible to use separate commands to get Events for specific objects; see the field-selector examples after the output below.

    CODE
    15m         Normal    AzureClusterObjectNotFound                  azurecluster                                          AzureCluster object default/azure-example not found
    15m         Normal    AzureManagedControlPlaneObjectNotFound      azuremanagedcontrolplane                              AzureManagedControlPlane object default/azure-example not found
    15m         Normal    AzureClusterObjectNotFound                  azurecluster                                          AzureCluster.infrastructure.cluster.x-k8s.io "azure-example" not found
    8m22s       Normal    SuccessfulSetNodeRef                        machine/azure-example-control-plane-bmc9b          azure-example-control-plane-fdvnm
    10m         Normal    Machine controller dependency not yet met   azuremachine/azure-example-control-plane-fdvnm     Machine Controller has not yet set OwnerRef
    12m         Normal    SuccessfulSetNodeRef                        machine/azure-example-control-plane-msftd          azure-example-control-plane-z9q45
    10m         Normal    SuccessfulSetNodeRef                        machine/azure-example-control-plane-nrvff          azure-example-control-plane-vmqwx
    12m         Normal    Machine controller dependency not yet met   azuremachine/azure-example-control-plane-vmqwx     Machine Controller has not yet set OwnerRef
    14m         Normal    Machine controller dependency not yet met   azuremachine/azure-example-control-plane-z9q45     Machine Controller has not yet set OwnerRef
    14m         Warning   VMIdentityNone                              azuremachinetemplate/azure-example-control-plane   You are using Service Principal authentication for Cloud Provider Azure which is less secure than Managed Identity. Your Service Principal credentials will be written to a file on the disk of each VM in order to be accessible by Cloud Provider. To learn more, see https://capz.sigs.k8s.io/topics/identities-use-cases.html#azure-host-identity
    12m         Warning   ControlPlaneUnhealthy                       kubeadmcontrolplane/azure-example-control-plane    Waiting for control plane to pass preflight checks to continue reconciliation: [machine azure-example-control-plane-msftd does not have APIServerPodHealthy condition, machine azure-example-control-plane-msftd does not have ControllerManagerPodHealthy condition, machine azure-example-control-plane-msftd does not have SchedulerPodHealthy condition, machine azure-example-control-plane-msftd does not have EtcdPodHealthy condition, machine azure-example-control-plane-msftd does not have EtcdMemberHealthy condition]
    11m         Warning   ControlPlaneUnhealthy                       kubeadmcontrolplane/azure-example-control-plane    Waiting for control plane to pass preflight checks to continue reconciliation: [machine azure-example-control-plane-nrvff does not have APIServerPodHealthy condition, machine azure-example-control-plane-nrvff does not have ControllerManagerPodHealthy condition, machine azure-example-control-plane-nrvff does not have SchedulerPodHealthy condition, machine azure-example-control-plane-nrvff does not have EtcdPodHealthy condition, machine azure-example-control-plane-nrvff does not have EtcdMemberHealthy condition]
    9m52s       Normal    SuccessfulSetNodeRef                        machine/azure-example-md-0-84bd8b5f5b-b8cnq        azure-example-md-0-bsc82
    9m53s       Normal    SuccessfulSetNodeRef                        machine/azure-example-md-0-84bd8b5f5b-j8ldg        azure-example-md-0-mjcbn
    9m52s       Normal    SuccessfulSetNodeRef                        machine/azure-example-md-0-84bd8b5f5b-lx89f        azure-example-md-0-pmq8f
    10m         Normal    SuccessfulSetNodeRef                        machine/azure-example-md-0-84bd8b5f5b-pcv7q        azure-example-md-0-vzprf
    15m         Normal    SuccessfulCreate                            machineset/azure-example-md-0-84bd8b5f5b           Created machine "azure-example-md-0-84bd8b5f5b-j8ldg"
    15m         Normal    SuccessfulCreate                            machineset/azure-example-md-0-84bd8b5f5b           Created machine "azure-example-md-0-84bd8b5f5b-lx89f"
    15m         Normal    SuccessfulCreate                            machineset/azure-example-md-0-84bd8b5f5b           Created machine "azure-example-md-0-84bd8b5f5b-pcv7q"
    15m         Normal    SuccessfulCreate                            machineset/azure-example-md-0-84bd8b5f5b           Created machine "azure-example-md-0-84bd8b5f5b-b8cnq"
    15m         Normal    Machine controller dependency not yet met   azuremachine/azure-example-md-0-bsc82              Machine Controller has not yet set OwnerRef
    15m         Normal    Machine controller dependency not yet met   azuremachine/azure-example-md-0-mjcbn              Machine Controller has not yet set OwnerRef
    15m         Normal    Machine controller dependency not yet met   azuremachine/azure-example-md-0-pmq8f              Machine Controller has not yet set OwnerRef
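
    To scope the Events to a specific object kind instead of filtering with grep, use the field-selector form:

    CODE
    kubectl get events --field-selector involvedObject.kind="AzureCluster"
    kubectl get events --field-selector involvedObject.kind="AzureMachine"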

Known Limitations

NOTE: Be aware of these limitations in the current release of Konvoy.

  • The Konvoy version used to create a bootstrap cluster must match the Konvoy version used to create a workload cluster.

  • Konvoy supports deploying one workload cluster.

  • Konvoy generates a set of objects for one Node Pool.

  • Konvoy does not validate edits to cluster objects.

Next, you can Explore the New Cluster.
