AKS Quick Start

Get started by installing a cluster with the default configuration settings on AKS.

This guide provides instructions for getting started with DKP to get your Kubernetes cluster up and running with basic configuration requirements on an Azure Kubernetes Service (AKS) public cloud instance. If you want to customize your AKS environment, see Install AKS Advanced.

Prerequisites

Before starting the DKP installation, verify that you have the dkp binary, kubectl, and the Azure CLI (az) installed, and a valid Azure account to log in with.

Configure AKS Prerequisites

Follow these steps:

  1. Log in to Azure:

    CODE
    az login

    CODE
    [
      {
        "cloudName": "AzureCloud",
        "homeTenantId": "a1234567-b132-1234-1a11-1234a5678b90",
        "id": "b1234567-abcd-11a1-a0a0-1234a5678b90",
        "isDefault": true,
        "managedByTenants": [],
        "name": "Mesosphere Developer Subscription",
        "state": "Enabled",
        "tenantId": "a1234567-b132-1234-1a11-1234a5678b90",
        "user": {
          "name": "user@azuremesosphere.onmicrosoft.com",
          "type": "user"
        }
      }
    ]
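
    If your account has access to multiple subscriptions, confirm or switch the active one before continuing. This is a standard Azure CLI check, not a DKP-specific step:

    CODE
    az account show -o table                      # verify the active subscription
    az account set --subscription <subscription>  # switch to another subscription if needed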
  2. Create an Azure Service Principal (SP) by running the following command:

NOTE: If a Service Principal with the same name already exists, this command rotates its password.

    CODE
    az ad sp create-for-rbac --role contributor --name "$(whoami)-konvoy" --scopes=/subscriptions/$(az account show --query id -o tsv)

    CODE
    {
      "appId": "7654321a-1a23-567b-b789-0987b6543a21",
      "displayName": "azure-cli-2021-03-09-23-17-06",
      "password": "Z79yVstq_E.R0R7RUUck718vEHSuyhAB0C",
      "tenant": "a1234567-b132-1234-1a11-1234a5678b90"
    }
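
    To confirm the Service Principal exists, you can list it by display name. This is an optional sanity check using a standard Azure CLI query:

    CODE
    az ad sp list --display-name "$(whoami)-konvoy" -o table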
  3. Set the required environment variables:

    CODE
    export AZURE_SUBSCRIPTION_ID="<id>"       # b1234567-abcd-11a1-a0a0-1234a5678b90
    export AZURE_TENANT_ID="<tenant>"         # a1234567-b132-1234-1a11-1234a5678b90
    export AZURE_CLIENT_ID="<appId>"          # 7654321a-1a23-567b-b789-0987b6543a21
    export AZURE_CLIENT_SECRET="<password>"   # Z79yVstq_E.R0R7RUUck718vEHSuyhAB0C
  4. Base64 encode the same environment variables:

    CODE
    export AZURE_SUBSCRIPTION_ID_B64="$(echo -n "${AZURE_SUBSCRIPTION_ID}" | base64 | tr -d '\n')"
    export AZURE_TENANT_ID_B64="$(echo -n "${AZURE_TENANT_ID}" | base64 | tr -d '\n')"
    export AZURE_CLIENT_ID_B64="$(echo -n "${AZURE_CLIENT_ID}" | base64 | tr -d '\n')"
    export AZURE_CLIENT_SECRET_B64="$(echo -n "${AZURE_CLIENT_SECRET}" | base64 | tr -d '\n')"
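
    You can spot-check the encoding by decoding one value back; the output should match the original variable exactly, with no trailing newline (GNU coreutils syntax shown; on macOS use base64 -D):

    CODE
    echo -n "${AZURE_CLIENT_ID_B64}" | base64 --decode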
  5. Check which Kubernetes versions are available in your region. When deploying with AKS, you must declare the Kubernetes version you wish to use. Run the following command, replacing <your-location> with the Azure region you are deploying to:

    CODE
    az aks get-versions -o table --location <your-location>
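
    If you only want the version strings from that output, a --query filter such as the following may help (the orchestrators path matches the CLI output schema at the time of writing and may differ in newer CLI versions):

    CODE
    az aks get-versions --location <your-location> --query "orchestrators[].orchestratorVersion" -o tsv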
  6. Set the version of Kubernetes you’ve chosen:

    NOTE: Using Kubernetes v1.23.x is recommended. The version listed in the command is an example.

    CODE
    export KUBERNETES_VERSION=1.23.5

Name your cluster

Give your cluster a unique name suitable for your environment. In AKS, it is critical that the name is unique, as no two clusters in the same AKS account can have the same name.

To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by passing the following flags to dkp create cluster:
--registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>

  1. Set the environment variable to be used throughout this documentation:

    CODE
    export CLUSTER_NAME=aks-example
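
    Because the name must be unique, one option is to include your username in it, for example:

    CODE
    export CLUSTER_NAME=aks-example-$(whoami)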
  2. Then, modify the dkp create cluster aks command to include the Kubernetes version you set:

    CODE
    dkp create cluster aks --cluster-name=${CLUSTER_NAME} --additional-tags=owner=$(whoami) --kubernetes-version=${KUBERNETES_VERSION}
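
    If you want to inspect the generated Cluster API resources before creating anything, your DKP version may support writing them to a file instead of applying them (flag availability varies by release):

    CODE
    dkp create cluster aks --cluster-name=${CLUSTER_NAME} --additional-tags=owner=$(whoami) --kubernetes-version=${KUBERNETES_VERSION} --dry-run -o yaml > ${CLUSTER_NAME}.yaml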

Create a new AKS Kubernetes cluster

  1. Create the Kubernetes cluster, including the Kubernetes version you set earlier:

    CODE
    dkp create cluster aks --cluster-name=${CLUSTER_NAME} --additional-tags=owner=$(whoami) --kubernetes-version=${KUBERNETES_VERSION}

    CODE
    Generating cluster resources
    cluster.cluster.x-k8s.io/aks-example created
    azuremanagedcontrolplane.infrastructure.cluster.x-k8s.io/aks-example created
    azuremanagedcluster.infrastructure.cluster.x-k8s.io/aks-example created
    machinepool.cluster.x-k8s.io/aks-example created
    azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/cp4n2bm created
    machinepool.cluster.x-k8s.io/aks-example-md-0 created
    azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/mpn6l25 created
    clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-aks-example created
    configmap/cluster-autoscaler-aks-example created
    clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-aks-example created
    configmap/node-feature-discovery-aks-example created
    clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-aks-example created
    configmap/nvidia-feature-discovery-aks-example created
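
    Provisioning takes several minutes. To monitor progress while Azure creates the underlying resources, you can describe the Cluster API objects, shown here as an optional check:

    CODE
    dkp describe cluster -c ${CLUSTER_NAME}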
  2. (Optional) Specify an authorized key file to have SSH access to the machines.

    The file must contain exactly one public key entry, in the standard OpenSSH authorized_keys format.

    You can use the .pub file that complements your private SSH key, for example the public key that matches your RSA private key:

    CODE
    --ssh-public-key-file=${HOME}/.ssh/id_rsa.pub

    The default username for SSH access is konvoy. To use your own username instead, pass:

    CODE
    --ssh-username=$(whoami)
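
    If you do not already have a key pair, you can generate one first with standard OpenSSH tooling (not DKP-specific); the path below assumes the default RSA key location:

    CODE
    ssh-keygen -t rsa -b 4096 -f ${HOME}/.ssh/id_rsa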
  3. Wait for the cluster control-plane to be ready:

    CODE
    kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m

    CODE
    cluster.cluster.x-k8s.io/aks-example condition met

NOTE: Pivoting is not supported in AKS.

Explore the new Kubernetes Cluster

Follow these steps:

  1. Fetch the kubeconfig file:

    CODE
    dkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
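
    Optionally, export the path so that kubectl picks the file up without repeating the --kubeconfig flag:

    CODE
    export KUBECONFIG=${PWD}/${CLUSTER_NAME}.conf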
  2. List the Nodes with the command:

    CODE
    kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes

    CODE
    NAME                              STATUS   ROLES   AGE   VERSION
    aks-cp4n2bm-57672902-vmss000000   Ready    agent   28m   v1.22.6
    aks-cp4n2bm-57672902-vmss000001   Ready    agent   29m   v1.22.6
    aks-cp4n2bm-57672902-vmss000002   Ready    agent   29m   v1.22.6
    aks-mpn6l25-57672902-vmss000000   Ready    agent   29m   v1.22.6
    aks-mpn6l25-57672902-vmss000001   Ready    agent   29m   v1.22.6
    aks-mpn6l25-57672902-vmss000002   Ready    agent   29m   v1.22.6
    aks-mpn6l25-57672902-vmss000003   Ready    agent   29m   v1.22.6

    NOTE: It may take a couple of minutes for the Status to move to Ready while calico-node pods are being deployed.

  3. List the Pods with the command:

    CODE
    kubectl --kubeconfig=${CLUSTER_NAME}.conf get pods -A

    CODE
    NAMESPACE                NAME                                             READY   STATUS     RESTARTS   AGE
    calico-system            calico-kube-controllers-78f65cd5dd-5m6t2         1/1     Running    0          28m
    calico-system            calico-node-27h8r                                1/1     Running    0          28m
    calico-system            calico-node-cn4zw                                1/1     Running    0          28m
    calico-system            calico-node-fgsqx                                1/1     Running    0          28m
    calico-system            calico-node-htr4f                                1/1     Running    0          28m
    calico-system            calico-node-l7skw                                1/1     Running    0          28m
    calico-system            calico-node-mn67v                                1/1     Running    0          28m
    calico-system            calico-node-z626n                                1/1     Running    0          28m
    calico-system            calico-typha-b6c9c78f4-hcnmd                     1/1     Running    0          28m
    calico-system            calico-typha-b6c9c78f4-pz52w                     1/1     Running    0          28m
    calico-system            calico-typha-b6c9c78f4-xknwt                     1/1     Running    0          28m
    kube-system              azure-ip-masq-agent-9hxsr                        1/1     Running    0          30m
    kube-system              azure-ip-masq-agent-bh5m6                        1/1     Running    0          31m
    kube-system              azure-ip-masq-agent-c6s4v                        1/1     Running    0          31m
    kube-system              azure-ip-masq-agent-gg77k                        1/1     Running    0          30m
    kube-system              azure-ip-masq-agent-k5sl8                        1/1     Running    0          31m
    kube-system              azure-ip-masq-agent-mmpsp                        1/1     Running    0          31m
    kube-system              azure-ip-masq-agent-z4n24                        1/1     Running    0          31m
    kube-system              cloud-node-manager-42shm                         1/1     Running    0          31m
    kube-system              cloud-node-manager-b9scr                         1/1     Running    0          30m
    kube-system              cloud-node-manager-ccmwl                         1/1     Running    0          31m
    kube-system              cloud-node-manager-csrml                         1/1     Running    0          31m
    kube-system              cloud-node-manager-gkv6x                         1/1     Running    0          31m
    kube-system              cloud-node-manager-ttxz7                         1/1     Running    0          30m
    kube-system              cloud-node-manager-twlh8                         1/1     Running    0          31m
    kube-system              cluster-autoscaler-68c759fbf6-cnkkp              0/1     Init:0/1   0          29m
    kube-system              coredns-845757d86-brpzs                          1/1     Running    0          33m
    kube-system              coredns-845757d86-nqmlc                          1/1     Running    0          31m
    kube-system              coredns-autoscaler-7d56cd888-8bc28               1/1     Running    0          33m
    kube-system              csi-azuredisk-node-4bl85                         3/3     Running    0          30m
    kube-system              csi-azuredisk-node-8dw5n                         3/3     Running    0          31m
    kube-system              csi-azuredisk-node-bg2kb                         3/3     Running    0          31m
    kube-system              csi-azuredisk-node-fr9bm                         3/3     Running    0          31m
    kube-system              csi-azuredisk-node-nm4k9                         3/3     Running    0          31m
    kube-system              csi-azuredisk-node-twvcv                         3/3     Running    0          31m
    kube-system              csi-azuredisk-node-wgds6                         3/3     Running    0          30m
    kube-system              csi-azurefile-node-5xv28                         3/3     Running    0          31m
    kube-system              csi-azurefile-node-9nl7n                         3/3     Running    0          31m
    kube-system              csi-azurefile-node-c6mn9                         3/3     Running    0          31m
    kube-system              csi-azurefile-node-q69zr                         3/3     Running    0          31m
    kube-system              csi-azurefile-node-q894n                         3/3     Running    0          31m
    kube-system              csi-azurefile-node-v2rmj                         3/3     Running    0          30m
    kube-system              csi-azurefile-node-wkgck                         3/3     Running    0          30m
    kube-system              kube-proxy-5kd77                                 1/1     Running    0          31m
    kube-system              kube-proxy-96jfn                                 1/1     Running    0          30m
    kube-system              kube-proxy-96pj6                                 1/1     Running    0          30m
    kube-system              kube-proxy-b8vzs                                 1/1     Running    0          31m
    kube-system              kube-proxy-fqnw4                                 1/1     Running    0          31m
    kube-system              kube-proxy-rvpp8                                 1/1     Running    0          31m
    kube-system              kube-proxy-sfqnm                                 1/1     Running    0          31m
    kube-system              metrics-server-6576d9ccf8-kfm5q                  1/1     Running    0          33m
    kube-system              tunnelfront-78777b4fd6-g84wp                     1/1     Running    0          27m
    node-feature-discovery   node-feature-discovery-master-84c67dcbb6-vgxfm   1/1     Running    0          29m
    node-feature-discovery   node-feature-discovery-worker-2htgg              1/1     Running    0          29m
    node-feature-discovery   node-feature-discovery-worker-5cpnt              1/1     Running    0          29m
    node-feature-discovery   node-feature-discovery-worker-6cjvb              1/1     Running    0          29m
    node-feature-discovery   node-feature-discovery-worker-jdmkj              1/1     Running    0          29m
    node-feature-discovery   node-feature-discovery-worker-ms749              1/1     Running    0          29m
    node-feature-discovery   node-feature-discovery-worker-p2z55              1/1     Running    0          29m
    node-feature-discovery   node-feature-discovery-worker-wnwfx              1/1     Running    0          29m
    tigera-operator          tigera-operator-74d785cb58-vbr4d                 1/1     Running    0          33m
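
    To quickly spot anything that is not yet healthy, you can filter out running pods with a standard kubectl field selector:

    CODE
    kubectl --kubeconfig=${CLUSTER_NAME}.conf get pods -A --field-selector=status.phase!=Running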

Install and Log in to the DKP UI

You can now proceed to install the DKP UI and applications. After installation, you can log in to the DKP UI to explore it.

Delete the Kubernetes Cluster and Cleanup your Environment

Follow these steps:

  1. Delete the provisioned Kubernetes cluster and wait a few minutes:

    CODE
    dkp delete cluster --cluster-name=${CLUSTER_NAME}

    CODE
    ✓ Deleting Services with type LoadBalancer for Cluster default/aks-example
    ✓ Deleting ClusterResourceSets for Cluster default/aks-example
    ✓ Deleting cluster resources
    ✓ Waiting for cluster to be fully deleted
    Deleted default/aks-example cluster
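
    To confirm the deletion, verify that no Cluster API cluster objects remain in the context you ran the delete from (this assumes the Cluster API CRDs are still installed there):

    CODE
    kubectl get clusters -A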