vSphere FIPS Air-gapped: Create the Management Cluster
Deploying a Cluster in FIPS mode
To create a cluster in FIPS mode, you must inform the bootstrap controllers of the appropriate image repository and version tags of the official D2iQ FIPS builds of Kubernetes.
Supported FIPS Builds
Component | Repository | Version |
---|---|---|
Kubernetes | docker.io/mesosphere | v1.25.4+fips.0 |
etcd | docker.io/mesosphere | 3.5.5+fips.0 |
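These builds are selected through the corresponding `dkp create cluster` flags, which appear again in the full command later in this section:

```
# Excerpt: the FIPS-related flags passed to dkp create cluster (shown in context below).
--kubernetes-version=v1.25.4+fips.0 \
--kubernetes-image-repository=docker.io/mesosphere \
--etcd-version=3.5.5+fips.0 \
--etcd-image-repository=docker.io/mesosphere
```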
Name your Cluster
Give your cluster a unique name suitable for your environment.
Set the CLUSTER_NAME environment variable with the command:
```
export CLUSTER_NAME=<my-vsphere-cluster>
```
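If you need a name that is unlikely to collide across repeated runs, one option (an illustrative sketch, not a required step) is to append a random suffix:

```
# Illustrative only: generate a cluster name with a random five-character suffix.
export CLUSTER_NAME=my-vsphere-cluster-$(LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | fold -w 5 | head -n 1)
echo ${CLUSTER_NAME}
```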
DKP uses the vSphere CSI driver as the default storage provider. Use Kubernetes CSI-compatible storage that is suitable for production.
Create a New vSphere Kubernetes Cluster
Use the following steps to create a new, air-gapped vSphere cluster.
Configure your cluster to use an existing registry as a mirror when attempting to pull images:
IMPORTANT: The image must be created by Konvoy Image Builder in order to use the registry mirror feature.

```
export REGISTRY_URL=<https/http>://<registry-address>:<registry-port>
export REGISTRY_CA=<path to the CA on the bastion>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
```
REGISTRY_URL
: the address of an existing registry accessible in the VPC that the new cluster nodes will be configured to use as a mirror registry when pulling images.

REGISTRY_CA
: (optional) the path on the bastion machine to the registry CA. Konvoy will configure the cluster nodes to trust this CA. This value is only needed if the registry is using a self-signed certificate and the machine images are not already configured to trust this CA.

REGISTRY_USERNAME
: (optional) set to a user that has pull access to this registry.

REGISTRY_PASSWORD
: (optional) required only if a username is set.
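As a concrete sketch, for a registry served over HTTPS with a self-signed certificate, the variables might look like this (hypothetical values only; substitute your own registry address, CA path, and credentials):

```
# Hypothetical example values; replace with your own registry details.
export REGISTRY_URL=https://registry.example.internal:5000
export REGISTRY_CA=/home/user/registry-ca.crt
export REGISTRY_USERNAME=registry-user
export REGISTRY_PASSWORD='s3cr3t-password'
```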
Load the container image, using either the `docker` or `podman` command:

```
docker load -i konvoy-bootstrap-image-v2.5.2.tar
```

OR

```
podman load -i konvoy-bootstrap-image-v2.5.2.tar
```
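You can optionally confirm the image loaded before continuing (a quick sanity check; the exact repository and tag shown depend on the release):

```
# List the loaded bootstrap image; use "podman image ls" if you loaded with podman.
docker image ls | grep konvoy-bootstrap
```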
Create a Kubernetes cluster by copying the following command and substituting valid values for your environment:

```
dkp create cluster vsphere \
  --cluster-name ${CLUSTER_NAME} \
  --network <NETWORK_NAME> \
  --control-plane-endpoint-host <CONTROL_PLANE_IP> \
  --data-center <DATACENTER_NAME> \
  --data-store <DATASTORE_NAME> \
  --folder <FOLDER_NAME> \
  --server <VCENTER_API_SERVER_URL> \
  --ssh-public-key-file </path/to/key.pub> \
  --resource-pool <RESOURCE_POOL_NAME> \
  --vm-template konvoy-ova-vsphere-os-release-k8s_release-vsphere-timestamp \
  --virtual-ip-interface <ip_interface_name> \
  --extra-sans "127.0.0.1" \
  --registry-mirror-url=${REGISTRY_URL} \
  --registry-mirror-cacert=${REGISTRY_CA} \
  --registry-mirror-username=${REGISTRY_USERNAME} \
  --registry-mirror-password=${REGISTRY_PASSWORD} \
  --kubernetes-version=v1.25.4+fips.0 \
  --kubernetes-image-repository=docker.io/mesosphere \
  --etcd-image-repository=docker.io/mesosphere \
  --etcd-version=3.5.5+fips.0 \
  --self-managed
```
A self-managed cluster refers to one in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are managing.
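As a hedged sketch of how you might confirm the pivot succeeded once creation completes (assuming the `dkp get kubeconfig` subcommand from your DKP release and the default `capi-system` namespace):

```
# Sketch: fetch the new cluster's kubeconfig, then check that the CAPI
# controllers are now running on the cluster itself.
dkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
kubectl --kubeconfig ${CLUSTER_NAME}.conf get pods -n capi-system
```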
If your environment uses HTTP/HTTPS proxies, you must include the flags `--http-proxy`, `--https-proxy`, and `--no-proxy` and their related values in this command for it to be successful. More information is available in Configuring an HTTP/HTTPS Proxy.
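For example, the additional flags might look like the following (illustrative proxy addresses; the exact no-proxy list depends on your environment):

```
# Hypothetical proxy values; append these to the dkp create cluster command.
  --http-proxy http://proxy.example.internal:3128 \
  --https-proxy http://proxy.example.internal:3128 \
  --no-proxy 127.0.0.1,localhost,.svc,.cluster.local
```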
Cluster Verification
If you want to monitor or verify the installation of your clusters, refer to:
Verify your Cluster and DKP Installation.
As they progress, the controllers create Events, which you can also monitor using the command:
```
kubectl get events | grep ${CLUSTER_NAME}
```

For brevity, this example uses `grep`. You can also use separate commands to get Events for specific objects, such as `kubectl get events --field-selector involvedObject.kind="VSphereCluster"` and `kubectl get events --field-selector involvedObject.kind="VSphereMachine"`.
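To follow overall progress rather than individual Events, one option (assuming the Cluster API CRDs are present on the bootstrap cluster, which `dkp create cluster` sets up) is to watch the Cluster resource directly:

```
# Watch the CAPI Cluster object until its PHASE column reports Provisioned.
kubectl get clusters ${CLUSTER_NAME} --watch
```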