Create a kubeconfig File for Attachment
Create a separate service account when attaching existing clusters (for example, Amazon EKS or Azure AKS clusters).
If you already have a kubeconfig file to attach your cluster, go directly to Attach a Cluster with no Networking Restrictions (UI) or Attach a Cluster with Networking Restrictions.
The kubeconfig files generated from existing clusters are not usable out of the box, because they call provisioner-specific CLI commands (such as aws commands) and rely on locally obtained authentication tokens that are not compatible with DKP. A separate service account also gives you a dedicated identity for all DKP operations.
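For illustration, a provisioner-generated kubeconfig typically contains an exec-based user entry like the following EKS-style sketch (the cluster name and ARN are placeholders, not values from this guide). The token is fetched by running the local aws CLI, which is why DKP cannot reuse such a file:

```yaml
# Illustrative user entry from an EKS-generated kubeconfig; exact fields vary by provider.
users:
- name: arn:aws:eks:us-west-2:111122223333:cluster/my-cluster   # placeholder ARN
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws                                              # requires local aws CLI and credentials
      args: ["eks", "get-token", "--cluster-name", "my-cluster"]
```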
To get started, ensure you have kubectl set up and configured with ClusterAdmin permissions for the cluster you want to connect to Kommander.
Create the necessary service account:
```sh
kubectl -n kube-system create serviceaccount kommander-cluster-admin
```
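Optionally, you can confirm the service account exists before continuing (a simple sanity check, not a required step):

```sh
kubectl -n kube-system get serviceaccount kommander-cluster-admin
```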
Create a token secret for the service account:

```sh
kubectl -n kube-system create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: kommander-cluster-admin-sa-token
  annotations:
    kubernetes.io/service-account.name: kommander-cluster-admin
type: kubernetes.io/service-account-token
EOF
```
Verify that the service account token is ready by running this command:

```sh
kubectl -n kube-system get secret kommander-cluster-admin-sa-token -oyaml
```
Verify that the data.token field is populated. The output should be similar to this:

```yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDR...
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVX...
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: kommander-cluster-admin
    kubernetes.io/service-account.uid: b62bc32e-b502-4654-921d-94a742e273a8
  creationTimestamp: "2022-08-19T13:36:42Z"
  name: kommander-cluster-admin-sa-token
  namespace: default
  resourceVersion: "8554"
  uid: 72c2a4f0-636d-4a70-9f1c-55a75f15e520
type: kubernetes.io/service-account-token
```
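If you only want to check the token itself rather than read the full YAML, a jsonpath query (shown here as an optional alternative) prints it directly; empty output means the token controller has not populated the secret yet:

```sh
kubectl -n kube-system get secret kommander-cluster-admin-sa-token \
  -o jsonpath='{.data.token}' | base64 --decode
```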
Configure the new service account for cluster-admin permissions:

```sh
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kommander-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kommander-cluster-admin
  namespace: kube-system
EOF
```
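As an optional sanity check, and assuming your current credentials permit impersonation, you can confirm the binding took effect with kubectl auth can-i:

```sh
# Ask the API server whether the new service account can perform any verb on any resource.
kubectl auth can-i '*' '*' \
  --as=system:serviceaccount:kube-system:kommander-cluster-admin
# Expected output: yes
```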
Set up the following environment variables with the access data needed to produce a new kubeconfig file:

```sh
export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/kommander-cluster-admin-sa-token -o=go-template='{{.data.token}}' | base64 --decode)
export CURRENT_CONTEXT=$(kubectl config current-context)
export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
export CLUSTER_CA=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}')
export CLUSTER_SERVER=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')
```
Confirm these variables have been set correctly:
```sh
export -p USER_TOKEN_VALUE CURRENT_CONTEXT CURRENT_CLUSTER CLUSTER_CA CLUSTER_SERVER
```
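If you want the shell to warn you when any value is missing, the following minimal sketch (assuming a bash shell; it is not part of the original flow) loops over the variables and flags empty ones:

```sh
# Warn about any required variable that is unset or empty (bash indirect expansion).
for var in USER_TOKEN_VALUE CURRENT_CONTEXT CURRENT_CLUSTER CLUSTER_CA CLUSTER_SERVER; do
  [ -n "${!var}" ] || echo "WARNING: $var is empty" >&2
done
```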
Generate a kubeconfig file that uses the environment variable values from the previous step:

```sh
cat << EOF > kommander-cluster-admin-config
apiVersion: v1
kind: Config
current-context: ${CURRENT_CONTEXT}
contexts:
- name: ${CURRENT_CONTEXT}
  context:
    cluster: ${CURRENT_CONTEXT}
    user: kommander-cluster-admin
    namespace: kube-system
clusters:
- name: ${CURRENT_CONTEXT}
  cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_SERVER}
users:
- name: kommander-cluster-admin
  user:
    token: ${USER_TOKEN_VALUE}
EOF
```
This process produces a file in your current working directory called kommander-cluster-admin-config. The contents of this file are used in Kommander to attach the cluster.
Before importing this configuration, verify that the kubeconfig file can access the cluster:

```sh
kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get all --all-namespaces
```
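If the full get all output is noisy, a narrower optional check is to confirm that the token in the new file actually grants cluster-admin rights:

```sh
# Authenticate with the generated kubeconfig and ask for blanket permissions.
kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config auth can-i '*' '*'
# Expected output: yes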
Next Step:

Use this kubeconfig to:

- Attach a cluster with no additional networking restrictions
- Attach a cluster that has networking restrictions
If a cluster has insufficient resources to deploy all the federated platform services, it will fail to stay attached in the DKP UI. If this happens, check whether any pods are failing to get the resources they require.
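One way to spot such pods (an illustrative example; the affected namespaces depend on which platform services you deploy) is to list anything stuck in Pending and inspect its scheduling events:

```sh
# List pods the scheduler could not place, often due to insufficient CPU or memory.
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# Inspect a specific pod's events for messages such as "Insufficient cpu".
kubectl describe pod <pod-name> -n <namespace>
```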