Attach AKS Cluster

Attach an existing AKS cluster
You can attach existing Kubernetes clusters to the Management Cluster. After attaching the cluster, you can use the UI to examine and manage this cluster. The following procedure shows how to attach an existing Azure Kubernetes Service (AKS) cluster.
Before you Begin
This procedure requires the following items and configurations:
A fully configured and running Azure AKS cluster with administrative privileges.
The current version of DKP Enterprise is installed on your cluster.
Ensure you have installed kubectl in your Management cluster.
This procedure assumes you have one or more existing Azure AKS clusters with administrative privileges. Refer to the Azure AKS documentation for setup and configuration information.
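As an optional preflight check (not part of the original prerequisites), you can confirm that kubectl is installed and can reach the Management cluster. The kubeconfig path below is the same placeholder used later in this procedure; substitute your own file name.

# Confirm kubectl is installed and print client and server versions
kubectl version --kubeconfig <Management_cluster_kubeconfig>.conf

# Confirm the Management cluster nodes are reachable
kubectl get nodes --kubeconfig <Management_cluster_kubeconfig>.conf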
Attach AKS Clusters
Ensure that the KUBECONFIG environment variable is set to the Management cluster's kubeconfig file before attaching, by running:
export KUBECONFIG=<Management_cluster_kubeconfig>.conf
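As a quick sanity check (an optional addition to the documented steps), you can confirm that the exported KUBECONFIG now targets the Management cluster rather than one of your AKS clusters:

# Show the context the exported kubeconfig currently points at
kubectl config current-context

# List the Management cluster nodes to confirm connectivity
kubectl get nodes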
Ensure you have access to your AKS clusters
Ensure you are connected to your AKS clusters. Enter the following commands for each of your clusters:
kubectl config get-contexts
kubectl config use-context <context for first AKS cluster>

Confirm kubectl can access the AKS cluster:

kubectl get nodes
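If an AKS cluster does not yet appear in your local kubeconfig, you can merge its credentials with the Azure CLI before re-running the commands above. This is an optional step that assumes the az CLI is installed and you are logged in; the resource group and cluster names are placeholders for your own values.

# Merge the AKS cluster credentials into your local kubeconfig
az aks get-credentials --resource-group <my-resource-group> --name <my-aks-cluster>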
Create a kubeconfig file for your AKS cluster
To get started, ensure you have kubectl set up and configured with ClusterAdmin for the cluster you want to connect to Kommander.
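One way to confirm you have that level of access before creating the service account is kubectl's built-in authorization check; a response of "yes" indicates your current credentials can perform any action in any namespace. This is a minimal optional check, not part of the original procedure.

# Returns "yes" if your current credentials allow every verb on every resource
kubectl auth can-i '*' '*' --all-namespaces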
Create the necessary service account:
kubectl -n kube-system create serviceaccount kommander-cluster-admin

Create a token secret for the service account:

kubectl -n kube-system create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: kommander-cluster-admin-sa-token
  annotations:
    kubernetes.io/service-account.name: kommander-cluster-admin
type: kubernetes.io/service-account-token
EOF
Verify that the service account token is ready by running this command:

kubectl -n kube-system get secret kommander-cluster-admin-sa-token -oyaml

Verify that the data.token field is populated. The output should be similar to this:

apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDR...
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVX...
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: kommander-cluster-admin
    kubernetes.io/service-account.uid: b62bc32e-b502-4654-921d-94a742e273a8
  creationTimestamp: "2022-08-19T13:36:42Z"
  name: kommander-cluster-admin-sa-token
  namespace: default
  resourceVersion: "8554"
  uid: 72c2a4f0-636d-4a70-9f1c-55a75f15e520
type: kubernetes.io/service-account-token
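If you prefer not to scan the full YAML output, a jsonpath query can confirm the token directly. This is an optional shortcut rather than part of the documented steps:

# Prints the decoded token if data.token is populated, nothing otherwise
kubectl -n kube-system get secret kommander-cluster-admin-sa-token \
  -o jsonpath='{.data.token}' | base64 --decode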
Configure the new service account for cluster-admin permissions:

cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kommander-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kommander-cluster-admin
  namespace: kube-system
EOF
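To confirm the binding took effect, you can ask the API server whether the new service account can act as cluster-admin by impersonating it; a "yes" response means the ClusterRoleBinding is in place. This is an optional check and assumes your user is allowed to impersonate service accounts.

# Returns "yes" if the service account now has cluster-admin rights
kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:kommander-cluster-admin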
Set up the following environment variables with the access data that is needed for producing a new kubeconfig file:

export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/kommander-cluster-admin-sa-token -o=go-template='{{.data.token}}' | base64 --decode)
export CURRENT_CONTEXT=$(kubectl config current-context)
export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
export CLUSTER_CA=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}')
export CLUSTER_SERVER=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')
Confirm these variables have been set correctly:

export -p | grep -E 'USER_TOKEN_VALUE|CURRENT_CONTEXT|CURRENT_CLUSTER|CLUSTER_CA|CLUSTER_SERVER'
Generate a kubeconfig file that uses the environment variable values from the previous step:

cat << EOF > kommander-cluster-admin-config
apiVersion: v1
kind: Config
current-context: ${CURRENT_CONTEXT}
contexts:
- name: ${CURRENT_CONTEXT}
  context:
    cluster: ${CURRENT_CONTEXT}
    user: kommander-cluster-admin
    namespace: kube-system
clusters:
- name: ${CURRENT_CONTEXT}
  cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_SERVER}
users:
- name: kommander-cluster-admin
  user:
    token: ${USER_TOKEN_VALUE}
EOF

This process produces a file in your current working directory called kommander-cluster-admin-config. The contents of this file are used in Kommander to attach the cluster.
Before importing this configuration, verify the kubeconfig file can access the cluster:

kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get all --all-namespaces
Finalize attaching your cluster from the UI
Now that you have the kubeconfig file, go to the DKP UI and follow these steps:
From the top menu bar, select your target workspace.
On the Dashboard page, select the Add Cluster option in the Actions dropdown menu at the top right.
Select Attach Cluster.
Select the No additional networking restrictions card. Alternatively, if you must use network restrictions, stop following these steps and see the instructions on the page Attach a cluster WITH network restrictions.
Upload the kubeconfig file you created in the previous section (or copy its contents) into the Cluster Configuration section.
The Cluster Name field automatically populates with the name of the cluster in the kubeconfig. You can edit this field with the name you want for your cluster.
Add labels to classify your cluster as needed.
Select Create to attach your cluster.
If a cluster has insufficient resources to deploy all the federated platform services, it will fail to stay attached in the DKP UI. If this happens, ensure your system has sufficient resources for all pods; the checks below offer a starting point.
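To diagnose this situation, you can look for pods that cannot be scheduled and review each node's allocated resources on the attached cluster. These are standard kubectl checks offered as a general starting point, not a DKP-specific diagnostic:

# List pods stuck in Pending (often a sign of insufficient CPU or memory)
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# Review the requests and limits already allocated on each node
kubectl describe nodes | grep -A 8 "Allocated resources"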
Related Information
For information on related topics or procedures, refer to the following: