vSphere FIPS Air-gapped: Create a Managed Cluster Using the DKP CLI

Creating an air-gapped vSphere FIPS Managed cluster with the DKP CLI assumes that you have already fulfilled all of the prerequisites and successfully created a vSphere Management cluster. Use this procedure to create a Managed vSphere cluster.
When creating Managed clusters, you do not need to create and move CAPI objects, or install the Kommander component. Those tasks are only done on Management clusters!
Make New Cluster Part of a Workspace
- If you have an existing Workspace, run this command to find its name:
  kubectl get workspace -A
  ⚠️ NOTE: If you need to create a new Workspace, follow the instructions to Create a New Workspace.
- When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable:
  export WORKSPACE_NAMESPACE=<workspace_namespace>
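Optionally, confirm that the namespace exists on the Management cluster before continuing. This is a quick sanity check, not a step required by this guide:

  kubectl get namespace ${WORKSPACE_NAMESPACE}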
Name your Cluster
- Give your cluster a unique name suitable for your environment. 
- Set the MANAGED_CLUSTER_NAME environment variable with the command:
  export MANAGED_CLUSTER_NAME=<my-managed-vsphere-cluster>
DKP uses the local static provisioner as the default storage provider. However, localvolumeprovisioner is not suitable for production use. You should use Kubernetes CSI-compatible storage that is suitable for production.
- You can choose from any of the storage options available for Kubernetes. To disable the default that Konvoy deploys, set the default StorageClass localvolumeprovisioner as non-default. Then set your newly created StorageClass to be the default by following the commands in the Kubernetes documentation topic Changing the Default Storage Class; a brief sketch follows below.
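For reference, a minimal sketch of the kubectl patch commands that documentation describes, assuming your production-ready StorageClass already exists and is named <my-storage-class> (a placeholder):

# Remove the default annotation from the Konvoy-deployed localvolumeprovisioner StorageClass.
kubectl patch storageclass localvolumeprovisioner \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'

# Mark your production-ready StorageClass (placeholder name) as the default.
kubectl patch storageclass <my-storage-class> \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'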
Create a New vSphere Kubernetes Cluster
The instructions below tell you how to create a cluster and have it automatically attach to the workspace you set above.
If you do not set a workspace, it will be created in the default workspace, and you need to take additional steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.
Use the following steps to create a new, air-gapped vSphere cluster.
- Configure your cluster to use an existing registry as a mirror when attempting to pull images: 
  IMPORTANT: The image must be created by Konvoy Image Builder so that it uses the registry mirror feature.

  export REGISTRY_URL=<https/http>://<registry-address>:<registry-port>
  export REGISTRY_CA=<path to the CA on the bastion>
  export REGISTRY_USERNAME=<username>
  export REGISTRY_PASSWORD=<password>

  - REGISTRY_URL: the address of an existing Docker registry accessible in the VPC. The new cluster nodes will be configured to use it as a mirror registry when pulling images.
  - REGISTRY_CA: (optional) the path on the bastion machine to the Docker registry CA. Konvoy will configure the cluster nodes to trust this CA. This value is only needed if the registry uses a self-signed certificate and the machine images are not already configured to trust this CA.
  - REGISTRY_USERNAME: (optional) set to a user that has pull access to this registry.
  - REGISTRY_PASSWORD: (optional) only required if a username is set.
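Optionally, before creating the cluster you can sanity-check that the mirror registry is reachable from the bastion. This sketch uses curl against the standard Docker Registry v2 API and is not a DKP command; it assumes the registry exposes the /v2/ endpoints:

  # Confirm the registry responds and that the CA and credentials are accepted.
  curl --cacert "${REGISTRY_CA}" -u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" \
    "${REGISTRY_URL}/v2/_catalog"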
 
- Create a Kubernetes cluster by copying the following command and substituting the valid values for your environment: 
dkp create cluster vsphere \
  --cluster-name ${MANAGED_CLUSTER_NAME} \
  --additional-tags=owner=$(whoami) \
  --namespace ${WORKSPACE_NAMESPACE} \
  --network <NETWORK_NAME> \
  --control-plane-endpoint-host <CONTROL_PLANE_IP> \
  --data-center <DATACENTER_NAME> \
  --data-store <DATASTORE_NAME> \
  --folder <FOLDER_NAME> \
  --server <VCENTER_API_SERVER_URL> \
  --ssh-public-key-file </path/to/key.pub> \
  --resource-pool <RESOURCE_POOL_NAME> \
  --vm-template konvoy-ova-vsphere-os-release-k8s_release-vsphere-timestamp \
  --virtual-ip-interface <ip_interface_name> \
  --extra-sans "127.0.0.1" \
  --registry-mirror-url=${REGISTRY_URL} \
  --registry-mirror-cacert=${REGISTRY_CA} \
  --registry-mirror-username=${REGISTRY_USERNAME} \
  --registry-mirror-password=${REGISTRY_PASSWORD} \
  --kubernetes-version=v1.27.11+fips.0 \
  --kubernetes-image-repository=docker.io/mesosphere \
  --etcd-image-repository=docker.io/mesosphere \
  --etcd-version=3.5.10+fips.0 \
  --kubeconfig=<management-cluster-kubeconfig-path>

If your environment uses HTTP/HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configuring an HTTP/HTTPS Proxy.
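As an illustration only, those flags take this general form when added to the dkp create cluster command above; the proxy address and bypass list are placeholders for your environment, not values defined in this guide:

  --http-proxy=http://<proxy-address>:<proxy-port> \
  --https-proxy=http://<proxy-address>:<proxy-port> \
  --no-proxy=<comma-separated-list-of-addresses-to-bypass> \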
Retrieve the kubeconfig and Explore the New vSphere Cluster
Follow these steps:
- Fetch the kubeconfig file with the command:
  dkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} --kubeconfig <management-cluster-kubeconfig-path> -n ${WORKSPACE_NAMESPACE} > ${MANAGED_CLUSTER_NAME}.conf
- List the nodes with the following command:
  kubectl --kubeconfig=${MANAGED_CLUSTER_NAME}.conf get nodes
- List the pods with the following command:
  NOTE: Wait for the Status to move to Ready while the calico-node pods are being deployed.
  kubectl --kubeconfig=${MANAGED_CLUSTER_NAME}.conf get pods -A

Manually Attach a DKP CLI Cluster to the Management Cluster
These steps are only applicable if you do not set a WORKSPACE_NAMESPACE when creating a cluster. If you already set a WORKSPACE_NAMESPACE, then you do not need to perform these steps since the cluster is already attached to the workspace.
Starting with DKP 2.6, when you create a Managed Cluster with the DKP CLI, it attaches automatically to the Management Cluster after a few moments.
However, if you do not set a workspace, the attached cluster will be created in the default workspace. To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions: 
- Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command:
  echo ${MANAGED_CLUSTER_NAME}
- Retrieve your kubeconfig from the cluster you created without setting a workspace:
  dkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} > ${MANAGED_CLUSTER_NAME}.conf
- You can now either attach the cluster to a workspace through the UI (as described earlier in the documentation on attaching a cluster to a workspace), or attach it to the desired workspace with the CLI using the steps below.
 NOTE: This is only necessary if you never set the workspace of your cluster upon creation.
- Retrieve the workspace where you want to attach the cluster:
  kubectl get workspaces -A
- Set the WORKSPACE_NAMESPACE environment variable:
  export WORKSPACE_NAMESPACE=<workspace-namespace>
- You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the kubeconfig secret value of your cluster:
  kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}{{ "\n"}}'
- This will return a lengthy value. Copy this entire string for a secret using the template below as a reference. Create a new attached-cluster-kubeconfig.yaml file:
  apiVersion: v1
  kind: Secret
  metadata:
    name: <your-managed-cluster-name>-kubeconfig
    labels:
      cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
  type: cluster.x-k8s.io/secret
  data:
    value: <value-you-copied-from-secret-above>
- Create this secret in the desired workspace:
  kubectl apply -f attached-cluster-kubeconfig.yaml --namespace ${WORKSPACE_NAMESPACE}
- Create this KommanderCluster object to attach the cluster to the workspace:

cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: ${MANAGED_CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  kubeconfigRef:
    name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  clusterRef:
    capiCluster:
      name: ${MANAGED_CLUSTER_NAME}
EOF
- You can now view this cluster in your Workspace in the UI, and you can confirm its status by running the command below. It may take a few minutes to reach "Joined" status:
  kubectl get kommanderclusters -A
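If you want to follow progress from the CLI until the cluster joins, standard kubectl watch and inspection options work here; nothing DKP-specific is assumed beyond the KommanderCluster object created above:

  # Watch the KommanderCluster objects until the new cluster reports a Joined status (Ctrl-C to stop).
  kubectl get kommanderclusters -n ${WORKSPACE_NAMESPACE} -w

  # Inspect the full object, including status conditions, if the cluster does not join as expected.
  kubectl get kommanderclusters ${MANAGED_CLUSTER_NAME} -n ${WORKSPACE_NAMESPACE} -o yaml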
If you have several Essential Clusters and want to turn one of them into a Managed Cluster to be centrally administered by a Management Cluster, refer to Platform Expansion: Convert a DKP Essential Cluster to a DKP Enterprise Managed Cluster.
If you have existing clusters or want to create other new clusters to attach, there are many ways to attach a cluster, each with its own requirements and restrictions. To see all the options, see Day 2 - Attach an Existing Kubernetes Cluster in the documentation.
At this point, you can create more clusters, perform other configuration tasks, or proceed to Day 2 Operations.
