
Attach GKE Cluster

Attach an existing GKE cluster in DKP

You can attach existing Kubernetes clusters to the Management Cluster. After attaching a cluster, you can use the UI to examine and manage it. The following procedure shows how to attach an existing standard GKE cluster.

Before you begin

This procedure requires an existing, running GKE cluster on which you have administrator privileges, as well as local installations of the kubectl and gcloud command-line tools.
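
If you are not sure which clusters are available to you, you can list them with the gcloud CLI (add --project if the cluster lives outside your default project):

    gcloud container clusters list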

Attach GKE Clusters

Ensure you have access to your GKE clusters

  1. Ensure you are connected to your GKE clusters. Enter the following commands for each of your clusters (if a cluster is missing from your contexts, see the tip after this list):

    kubectl config get-contexts
    kubectl config use-context <context for first gcloud cluster>
  2. Confirm kubectl can access the GKE cluster.

    kubectl get nodes
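
If a cluster does not appear among your kubectl contexts, fetch its credentials with the gcloud CLI first; this writes a context for the cluster into your kubeconfig (the cluster name and zone below are placeholders for your own values):

    gcloud container clusters get-credentials <cluster name> --zone <zone>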

Configure a kubeconfig file

If you already have a kubeconfig file, SKIP this section.

To get started, ensure you have kubectl set up and configured with cluster-admin permissions for the cluster you want to attach to Kommander.

  1. Create the necessary service account:

    kubectl -n kube-system create serviceaccount kommander-cluster-admin
  2. Create a token secret for the service account (on Kubernetes 1.24 and later, token secrets are no longer generated automatically for service accounts):

    kubectl -n kube-system create -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: kommander-cluster-admin-sa-token
      annotations:
        kubernetes.io/service-account.name: kommander-cluster-admin
    type: kubernetes.io/service-account-token
    EOF
  3. Verify that the service account token is ready by running this command:

    kubectl -n kube-system get secret kommander-cluster-admin-sa-token -o yaml

    Verify that the data.token field is populated. The output should be similar to this:

    apiVersion: v1
    data:
      ca.crt: LS0tLS1CRUdJTiBDR...
      namespace: a3ViZS1zeXN0ZW0=
      token: ZXlKaGJHY2lPaUpTVX...
    kind: Secret
    metadata:
      annotations:
        kubernetes.io/service-account.name: kommander-cluster-admin
        kubernetes.io/service-account.uid: b62bc32e-b502-4654-921d-94a742e273a8
      creationTimestamp: "2022-08-19T13:36:42Z"
      name: kommander-cluster-admin-sa-token
      namespace: kube-system
      resourceVersion: "8554"
      uid: 72c2a4f0-636d-4a70-9f1c-55a75f15e520
    type: kubernetes.io/service-account-token
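
    Alternatively, you can print just the decoded token with a jsonpath query; a long JWT string in the output means the token is ready:

    kubectl -n kube-system get secret kommander-cluster-admin-sa-token -o jsonpath='{.data.token}' | base64 --decode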
  4. Configure the new service account for cluster-admin permissions:

    cat << EOF | kubectl apply -f -
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kommander-cluster-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: kommander-cluster-admin
      namespace: kube-system
    EOF
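
    You can optionally confirm that the binding took effect by impersonating the service account with kubectl's built-in authorization check; it should answer yes:

    kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:kommander-cluster-admin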
  5. Set up the following environment variables with the access data needed to produce a new kubeconfig file:

    export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/kommander-cluster-admin-sa-token -o=go-template='{{.data.token}}' | base64 --decode)
    export CURRENT_CONTEXT=$(kubectl config current-context)
    export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
    export CLUSTER_CA=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}')
    export CLUSTER_SERVER=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')
  6. Confirm these variables have been set correctly:

    export -p | grep -E 'USER_TOKEN_VALUE|CURRENT_CONTEXT|CURRENT_CLUSTER|CLUSTER_CA|CLUSTER_SERVER'
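
    Each variable should print a non-empty value. As a minimal sketch, the following loop (using bash indirect expansion) flags any variable that came back empty:

    for v in USER_TOKEN_VALUE CURRENT_CONTEXT CURRENT_CLUSTER CLUSTER_CA CLUSTER_SERVER; do
      [ -n "${!v}" ] || echo "$v is empty -- check the previous step"
    done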
  7. Generate a kubeconfig file that uses the environment variable values from the previous step:

    cat << EOF > kommander-cluster-admin-config
    apiVersion: v1
    kind: Config
    current-context: ${CURRENT_CONTEXT}
    contexts:
    - name: ${CURRENT_CONTEXT}
      context:
        cluster: ${CURRENT_CONTEXT}
        user: kommander-cluster-admin
        namespace: kube-system
    clusters:
    - name: ${CURRENT_CONTEXT}
      cluster:
        certificate-authority-data: ${CLUSTER_CA}
        server: ${CLUSTER_SERVER}
    users:
    - name: kommander-cluster-admin
      user:
        token: ${USER_TOKEN_VALUE}
    EOF
  8. This process produces a file in your current working directory called kommander-cluster-admin-config. The contents of this file are used in Kommander to attach the cluster.
    Before importing this configuration, verify the kubeconfig file can access the cluster:

    kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get all --all-namespaces

Attach the cluster

Now that you have a kubeconfig file, go to the DKP UI and follow these steps:

  1. Select your target workspace from the top menu bar.

  2. From the Dashboard page, select Add Cluster in the Actions dropdown menu at the top right.

  3. Select Attach Cluster.

  4. Select the No additional networking restrictions card.
    If your cluster requires network restrictions, stop here and follow the instructions on the page Attach a cluster WITH network restrictions instead.

  5. Upload the kubeconfig file you created in the previous section (or copy its contents) into the Cluster Configuration section.

  6. The Cluster Name field automatically populates with the name of the cluster from the kubeconfig. You can edit this field if you want a different name for your cluster.

  7. Add labels to classify your cluster as needed.

  8. Select Create to attach your cluster.

If a cluster does not have sufficient resources to deploy all of the federated platform services, it can fail to stay attached in the DKP UI. If this happens, ensure the cluster has enough free capacity to run all required pods.
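
One way to spot resource pressure is to list pods that are stuck outside the Running or Succeeded phases on the attached cluster (the kubeconfig path below assumes the file generated earlier in this procedure):

    kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded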

