Requirements for Attaching an Existing Cluster
Basic Requirements
To attach an existing cluster in the UI, the Application Management cluster must be able to reach the services and the api-server of the target cluster.
DKP does not support attachment of K3s clusters.
For clusters without networking restrictions, the attachment requirements depend on which DKP version you are using. Each version of DKP supports a specific range of Kubernetes versions, so you must ensure that the target cluster is running a compatible version.
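As a quick sanity check, you can confirm both reachability and the Kubernetes version of the target cluster from the Application Management cluster. A minimal sketch, where target-cluster.conf is a placeholder for the target cluster's kubeconfig:
# Confirm the target cluster's API server is reachable
kubectl --kubeconfig=target-cluster.conf cluster-info
# Show client and server versions; the server version must be in the supported range
kubectl --kubeconfig=target-cluster.conf version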
Create a Default StorageClass
To deploy many of the services on the attached cluster, there must be a default StorageClass configured. Run the following command on the cluster you want to attach:
kubectl get sc
The output should look similar to this. Note the (default) after the name:
NAME               PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
ebs-sc (default)   ebs.csi.aws.com   Delete          WaitForFirstConsumer   false                  41s
If the StorageClass is not set as default, add the following annotation to the StorageClass manifest:
annotations:
  storageclass.kubernetes.io/is-default-class: "true"
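Alternatively, you can set the annotation in place with kubectl patch. A minimal sketch, assuming the StorageClass is named ebs-sc as in the example output above:
# Mark the ebs-sc StorageClass as the cluster default
kubectl patch storageclass ebs-sc -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'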
Creating Projects and Workspaces
Before you attach clusters, you need to create one or more Workspaces, and we recommend that you also create Projects within your Workspaces. Workspaces give you a logical way to represent your teams and specific configurations. Projects let you define one or more clusters as a group to which Kommander pushes a common configuration. Grouping your existing clusters into Kommander Projects and Workspaces makes it easier to manage their platform services and resources, and supports monitoring and logging.
Do not attach a cluster in the "Management Cluster Workspace" workspace. This workspace is reserved for your Application Management cluster only.
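If you prefer to create a Workspace declaratively rather than through the UI, a manifest along the following lines can work. This is only a sketch: the apiVersion shown is an assumption and can vary between DKP releases, so confirm the actual group and version on your Application Management cluster first.
# Discover the Workspace CRD's group/version on the Application Management cluster
kubectl api-resources | grep -i workspace
# NOTE: apiVersion below is an assumption; use the value reported above
apiVersion: workspaces.kommander.mesosphere.io/v1alpha1
kind: Workspace
metadata:
  name: team-a-workspace   # hypothetical workspace name
Apply the manifest with kubectl apply -f against the Application Management cluster.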
Platform Application Requirements
In addition to the basic cluster requirements, the platform services you want DKP to manage on those clusters affect the total cluster requirements. The specific combination of platform applications determines the requirements for the cluster nodes and their resources (CPU, memory, and storage).
See this table of platform applications that DKP provides by default.
Attach Existing AWS, AKS, EKS, and GKE Clusters
Attaching an existing AWS cluster requires that the cluster be fully configured and running. You must create a separate service account when attaching existing AKS, EKS, or Google GKE Kubernetes clusters. This is necessary because the kubeconfig files generated by those clusters are not usable out of the box by Kommander: they invoke CLI commands, such as azure, aws, or gcloud, and rely on locally obtained authentication tokens. Having a separate service account also allows you to keep access to the cluster specific and isolated to Kommander.
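A minimal sketch of creating such a service account with kubectl; the kommander-cluster-admin name and the cluster-admin role binding are illustrative, not required values:
# Create a dedicated service account on the target cluster
kubectl create serviceaccount kommander-cluster-admin -n kube-system
# Grant it permissions broad enough for Kommander to manage platform services
kubectl create clusterrolebinding kommander-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kommander-cluster-admin
You would then generate a kubeconfig that authenticates with this service account's token and provide that kubeconfig when attaching the cluster.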
The suggested default cluster configuration includes a control plane pool containing three (3) m5.xlarge nodes and a worker pool containing four (4) m5.2xlarge nodes.
Consider the additional resource requirements for running the platform services you want DKP to manage, and ensure that your existing clusters comply.
To attach an existing EKS cluster, refer to the specific information in Attach Amazon EKS Cluster.
To attach an existing GKE cluster, refer to the specific information in Attach GKE Cluster.
Attach Clusters with an Existing cert-manager Installation
If you are attaching clusters that already have cert-manager installed, the cert-manager HelmRelease provided by DKP will fail to deploy because of the existing cert-manager installation. As long as the pre-existing cert-manager functions as expected, you can ignore this failure; it has no impact on the operation of the cluster.
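To confirm that the pre-existing cert-manager is healthy, and that the failed HelmRelease is therefore safe to ignore, you can check its pods and the HelmRelease status. A minimal sketch, assuming cert-manager runs in the conventional cert-manager namespace and that DKP's Flux-based HelmRelease CRDs are installed on the attached cluster:
# Verify the existing cert-manager pods are running
kubectl get pods -n cert-manager
# Inspect the status of HelmReleases deployed by DKP
kubectl get helmreleases -A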