AKS Quick Start
Get started by installing a cluster with the default configuration settings on AKS.
This guide provides instructions for using DKP to get a Kubernetes cluster up and running with basic configuration on an Azure Kubernetes Service (AKS) public cloud instance. If you want to customize your AKS environment, see Install AKS Advanced.
DKP Prerequisites
Before starting the DKP installation, verify that you have:
An x86_64-based Linux or macOS machine with a supported version of the operating system.
The dkp binary for Linux or macOS, available on the Download DKP page.
Docker version 18.09.2 or later.
kubectl for interacting with the running cluster.
The Azure CLI.
A valid Azure account used to sign in to the Azure CLI.
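To confirm the tooling is in place, you can print each tool's version. This is an optional check; the dkp version subcommand is assumed here and may differ for your CLI release:
dkp version
docker --version
kubectl version --client
az version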
AKS Prerequisites
Follow these steps:
Log in to Azure:
az login
CODE[ { "cloudName": "AzureCloud", "homeTenantId": "a1234567-b132-1234-1a11-1234a5678b90", "id": "b1234567-abcd-11a1-a0a0-1234a5678b90", "isDefault": true, "managedByTenants": [], "name": "Mesosphere Developer Subscription", "state": "Enabled", "tenantId": "a1234567-b132-1234-1a11-1234a5678b90", "user": { "name": "user@azuremesosphere.onmicrosoft.com", "type": "user" } } ]
Create an Azure Service Principal (SP) by running the following command:
NOTE: If an SP with that name already exists, this command rotates its password.
az ad sp create-for-rbac --role contributor --name "$(whoami)-konvoy" --scopes=/subscriptions/$(az account show --query id -o tsv)
CODE{ "appId": "7654321a-1a23-567b-b789-0987b6543a21", "displayName": "azure-cli-2021-03-09-23-17-06", "password": "Z79yVstq_E.R0R7RUUck718vEHSuyhAB0C", "tenant": "a1234567-b132-1234-1a11-1234a5678b90" }
Set the required environment variables:
export AZURE_SUBSCRIPTION_ID="<id>"       # b1234567-abcd-11a1-a0a0-1234a5678b90
export AZURE_TENANT_ID="<tenant>"         # a1234567-b132-1234-1a11-1234a5678b90
export AZURE_CLIENT_ID="<appId>"          # 7654321a-1a23-567b-b789-0987b6543a21
export AZURE_CLIENT_SECRET="<password>"   # Z79yVstq_E.R0R7RUUck718vEHSuyhAB0C
Base64 encode the same environment variables:
export AZURE_SUBSCRIPTION_ID_B64="$(echo -n "${AZURE_SUBSCRIPTION_ID}" | base64 | tr -d '\n')"
export AZURE_TENANT_ID_B64="$(echo -n "${AZURE_TENANT_ID}" | base64 | tr -d '\n')"
export AZURE_CLIENT_ID_B64="$(echo -n "${AZURE_CLIENT_ID}" | base64 | tr -d '\n')"
export AZURE_CLIENT_SECRET_B64="$(echo -n "${AZURE_CLIENT_SECRET}" | base64 | tr -d '\n')"
Check which versions of Kubernetes are available in your region. When deploying with AKS, you must declare the version of Kubernetes you want to use. Run the following command, substituting <your-location> for the Azure region you are deploying to:
az aks get-versions -o table --location <your-location>
Set the version of Kubernetes you chose. The version in the following command is only an example:
export KUBERNETES_VERSION=1.24.3
Using Kubernetes v1.23.x is recommended for AKS. The versions available for this example, and their upgrade paths, were:
Kubernetes Version | Upgrades |
---|---|
1.24.6 | None available |
1.24.3 | 1.24.6 |
1.23.12 | 1.24.3, 1.24.6 |
1.23.8 | 1.23.12, 1.24.3, 1.24.6 |
1.22.15 | 1.23.8, 1.23.12 |
1.22.11 | 1.22.15, 1.23.8, 1.23.12 |
Name Your Cluster
Give your cluster a unique name suitable for your environment. In AKS, the name must be unique: no two clusters in the same AKS account can have the same name.
To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on dkp create cluster: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>. A sketch of the full command is shown below.
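For example, a create command that passes Docker Hub credentials might look like the following sketch. The cluster name and the <username> and <password> placeholders are illustrative; replace them with your own values:
dkp create cluster aks --cluster-name=aks-example \
  --registry-mirror-url=https://registry-1.docker.io \
  --registry-mirror-username=<username> \
  --registry-mirror-password=<password>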
Set the environment variable to be used throughout this documentation:
export CLUSTER_NAME=aks-example
Modify dkp create cluster aks to include the Kubernetes version, and execute it against the self-managed cluster:
KUBECONFIG=${SELF_MANAGED_AZ_CLUSTER}.yaml dkp create cluster aks --cluster-name=${CLUSTER_NAME} --additional-tags=owner=$(whoami) --kubernetes-version=${KUBERNETES_VERSION}
Create a New AKS Kubernetes Cluster
Create a Kubernetes cluster by running the following command:
dkp create cluster aks --cluster-name=${CLUSTER_NAME} --additional-tags=owner=$(whoami)
If your environment uses HTTP/HTTPS proxies, you must include the --http-proxy, --https-proxy, and --no-proxy flags and their related values in this command for it to be successful (see the sketch after this note). More information is available in Configure HTTP Proxy.
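For illustration, a create command behind a proxy might look like the following sketch; the proxy addresses and the no-proxy list are placeholders for your own environment:
dkp create cluster aks --cluster-name=${CLUSTER_NAME} --additional-tags=owner=$(whoami) \
  --http-proxy=http://proxy.example.com:3128 \
  --https-proxy=http://proxy.example.com:3128 \
  --no-proxy=127.0.0.1,localhost,.svc,.cluster.local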
The output of the create command resembles:
Generating cluster resources
cluster.cluster.x-k8s.io/aks-example created
azuremanagedcontrolplane.infrastructure.cluster.x-k8s.io/aks-example created
azuremanagedcluster.infrastructure.cluster.x-k8s.io/aks-example created
machinepool.cluster.x-k8s.io/aks-example created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/cp4n2bm created
machinepool.cluster.x-k8s.io/aks-example-md-0 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/mpn6l25 created
clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-aks-example created
configmap/cluster-autoscaler-aks-example created
clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-aks-example created
configmap/node-feature-discovery-aks-example created
clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-aks-example created
configmap/nvidia-feature-discovery-aks-example created
(Optional) Specify an authorized key file to have SSH access to the machines. The file must contain exactly one entry, as described in this manual. You can use the .pub file that complements your private SSH key, for example the public key that complements your RSA private key:
--ssh-public-key-file=${HOME}/.ssh/id_rsa.pub
The default username for SSH access is konvoy. To use your own username instead, for example:
--ssh-username=$(whoami)
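Appended to the create command, these SSH options might look like the following sketch; the key path and username are illustrative:
dkp create cluster aks --cluster-name=${CLUSTER_NAME} --additional-tags=owner=$(whoami) \
  --ssh-public-key-file=${HOME}/.ssh/id_rsa.pub \
  --ssh-username=$(whoami)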
Wait for the cluster control-plane to be ready:
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m
cluster.cluster.x-k8s.io/aks-example condition met
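While waiting, you can optionally inspect the Cluster API objects DKP created in the management cluster context (the same kubeconfig used for the wait command above); resource names such as the machine pool hashes will differ in your environment:
kubectl get clusters,machinepools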
Pivoting is not supported in AKS.
Explore the New Kubernetes Cluster
Follow these steps:
Fetch the kubeconfig file:
dkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
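If you prefer not to pass --kubeconfig on every command, you can optionally point kubectl at the new file for the current shell session (this assumes the file was written to your current directory):
export KUBECONFIG=${CLUSTER_NAME}.conf   # subsequent kubectl commands then target the new cluster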
List the Nodes with the command:
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes
NAME                              STATUS   ROLES   AGE   VERSION
aks-cp4n2bm-57672902-vmss000000   Ready    agent   28m   v1.22.6
aks-cp4n2bm-57672902-vmss000001   Ready    agent   29m   v1.22.6
aks-cp4n2bm-57672902-vmss000002   Ready    agent   29m   v1.22.6
aks-mpn6l25-57672902-vmss000000   Ready    agent   29m   v1.22.6
aks-mpn6l25-57672902-vmss000001   Ready    agent   29m   v1.22.6
aks-mpn6l25-57672902-vmss000002   Ready    agent   29m   v1.22.6
aks-mpn6l25-57672902-vmss000003   Ready    agent   29m   v1.22.6
NOTE: It may take a couple of minutes for the Status to move to Ready while the calico-node pods are being deployed.
List the Pods with the command:
kubectl --kubeconfig=${CLUSTER_NAME}.conf get pods -A
NAMESPACE                NAME                                              READY   STATUS     RESTARTS   AGE
calico-system            calico-kube-controllers-78f65cd5dd-5m6t2          1/1     Running    0          28m
calico-system            calico-node-27h8r                                 1/1     Running    0          28m
calico-system            calico-node-cn4zw                                 1/1     Running    0          28m
calico-system            calico-node-fgsqx                                 1/1     Running    0          28m
calico-system            calico-node-htr4f                                 1/1     Running    0          28m
calico-system            calico-node-l7skw                                 1/1     Running    0          28m
calico-system            calico-node-mn67v                                 1/1     Running    0          28m
calico-system            calico-node-z626n                                 1/1     Running    0          28m
calico-system            calico-typha-b6c9c78f4-hcnmd                      1/1     Running    0          28m
calico-system            calico-typha-b6c9c78f4-pz52w                      1/1     Running    0          28m
calico-system            calico-typha-b6c9c78f4-xknwt                      1/1     Running    0          28m
kube-system              azure-ip-masq-agent-9hxsr                         1/1     Running    0          30m
kube-system              azure-ip-masq-agent-bh5m6                         1/1     Running    0          31m
kube-system              azure-ip-masq-agent-c6s4v                         1/1     Running    0          31m
kube-system              azure-ip-masq-agent-gg77k                         1/1     Running    0          30m
kube-system              azure-ip-masq-agent-k5sl8                         1/1     Running    0          31m
kube-system              azure-ip-masq-agent-mmpsp                         1/1     Running    0          31m
kube-system              azure-ip-masq-agent-z4n24                         1/1     Running    0          31m
kube-system              cloud-node-manager-42shm                          1/1     Running    0          31m
kube-system              cloud-node-manager-b9scr                          1/1     Running    0          30m
kube-system              cloud-node-manager-ccmwl                          1/1     Running    0          31m
kube-system              cloud-node-manager-csrml                          1/1     Running    0          31m
kube-system              cloud-node-manager-gkv6x                          1/1     Running    0          31m
kube-system              cloud-node-manager-ttxz7                          1/1     Running    0          30m
kube-system              cloud-node-manager-twlh8                          1/1     Running    0          31m
kube-system              cluster-autoscaler-68c759fbf6-cnkkp               0/1     Init:0/1   0          29m
kube-system              coredns-845757d86-brpzs                           1/1     Running    0          33m
kube-system              coredns-845757d86-nqmlc                           1/1     Running    0          31m
kube-system              coredns-autoscaler-7d56cd888-8bc28                1/1     Running    0          33m
kube-system              csi-azuredisk-node-4bl85                          3/3     Running    0          30m
kube-system              csi-azuredisk-node-8dw5n                          3/3     Running    0          31m
kube-system              csi-azuredisk-node-bg2kb                          3/3     Running    0          31m
kube-system              csi-azuredisk-node-fr9bm                          3/3     Running    0          31m
kube-system              csi-azuredisk-node-nm4k9                          3/3     Running    0          31m
kube-system              csi-azuredisk-node-twvcv                          3/3     Running    0          31m
kube-system              csi-azuredisk-node-wgds6                          3/3     Running    0          30m
kube-system              csi-azurefile-node-5xv28                          3/3     Running    0          31m
kube-system              csi-azurefile-node-9nl7n                          3/3     Running    0          31m
kube-system              csi-azurefile-node-c6mn9                          3/3     Running    0          31m
kube-system              csi-azurefile-node-q69zr                          3/3     Running    0          31m
kube-system              csi-azurefile-node-q894n                          3/3     Running    0          31m
kube-system              csi-azurefile-node-v2rmj                          3/3     Running    0          30m
kube-system              csi-azurefile-node-wkgck                          3/3     Running    0          30m
kube-system              kube-proxy-5kd77                                  1/1     Running    0          31m
kube-system              kube-proxy-96jfn                                  1/1     Running    0          30m
kube-system              kube-proxy-96pj6                                  1/1     Running    0          30m
kube-system              kube-proxy-b8vzs                                  1/1     Running    0          31m
kube-system              kube-proxy-fqnw4                                  1/1     Running    0          31m
kube-system              kube-proxy-rvpp8                                  1/1     Running    0          31m
kube-system              kube-proxy-sfqnm                                  1/1     Running    0          31m
kube-system              metrics-server-6576d9ccf8-kfm5q                   1/1     Running    0          33m
kube-system              tunnelfront-78777b4fd6-g84wp                      1/1     Running    0          27m
node-feature-discovery   node-feature-discovery-master-84c67dcbb6-vgxfm    1/1     Running    0          29m
node-feature-discovery   node-feature-discovery-worker-2htgg               1/1     Running    0          29m
node-feature-discovery   node-feature-discovery-worker-5cpnt               1/1     Running    0          29m
node-feature-discovery   node-feature-discovery-worker-6cjvb               1/1     Running    0          29m
node-feature-discovery   node-feature-discovery-worker-jdmkj               1/1     Running    0          29m
node-feature-discovery   node-feature-discovery-worker-ms749               1/1     Running    0          29m
node-feature-discovery   node-feature-discovery-worker-p2z55               1/1     Running    0          29m
node-feature-discovery   node-feature-discovery-worker-wnwfx               1/1     Running    0          29m
tigera-operator          tigera-operator-74d785cb58-vbr4d                  1/1     Running    0          33m
Install and Log in to the UI
You can now install Kommander, which provides the UI and platform applications; a sketch of the installation is shown below. After installation, you will be able to log in to the UI to explore it.
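A minimal sketch of that installation, assuming the default Kommander configuration and the kubeconfig fetched in the previous section; consult the Kommander installation documentation for the exact flags your DKP version requires:
export KUBECONFIG=${CLUSTER_NAME}.conf
dkp install kommander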
Delete the Kubernetes Cluster and Clean Up Your Environment
Follow these steps:
Delete the provisioned Kubernetes cluster and wait a few minutes:
dkp delete cluster --cluster-name=${CLUSTER_NAME}
✓ Deleting Services with type LoadBalancer for Cluster default/aks-example
✓ Deleting ClusterResourceSets for Cluster default/aks-example
✓ Deleting cluster resources
✓ Waiting for cluster to be fully deleted
Deleted default/aks-example cluster
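Optionally, if the Azure Service Principal created for this walkthrough is no longer needed, you can delete it as well; skip this if the SP is shared with other clusters, and note that it assumes AZURE_CLIENT_ID is still set from earlier:
az ad sp delete --id "${AZURE_CLIENT_ID}"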