vSphere Quick Start
This guide provides instructions for getting started with DKP and standing up a Kubernetes cluster with basic configuration requirements in a vSphere environment. If you want to customize your vSphere environment, see vSphere Advanced Install.
DKP Prerequisites
Before using DKP to create a vSphere cluster, verify that you have:
An x86_64-based Linux® or macOS® machine.
The dkp binary, available on the Download DKP page.
The Konvoy Image Builder (KIB) image bundle for Linux or macOS.
Docker® version 18.09.2 or later installed. You must have Docker installed on the host where the DKP Konvoy CLI runs. For example, if you are installing Konvoy on your laptop, ensure the laptop has a supported version of Docker.
On macOS, Docker runs in a virtual machine. Configure this virtual machine with at least 8GB of memory.
kubectl 1.21.6 for interacting with the running cluster, installed on the host where the DKP Konvoy command line interface (CLI) runs.
A valid VMware vSphere account with credentials configured.
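As a quick sanity check, you can verify the tool versions on the host where the DKP CLI will run; the exact output format varies by version:
docker --version          # should report version 18.09.2 or later
kubectl version --client  # should report v1.21.6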
vSphere Prerequisites
Before installing, verify that your VMware vSphere Client environment meets the following basic requirements:
Access to a bastion VM or other network connected host.
You must be able to reach the vSphere API endpoint from where the Konvoy command line interface (CLI) runs.
vSphere account with credentials configured - this account must have Administrator privileges.
A Red Hat® subscription with user name and password for downloading DVD ISOs.
For air-gapped environments, prepare your environment using a bastion VM host template with access to a configured Docker registry.
Valid vSphere values for the following:
vCenter API server URL.
Datacenter name.
vCenter Cluster name that contains ESXi hosts for your cluster’s nodes.
Datastore name for the shared storage resource to be used for the VMs in the cluster.
Use of PersistentVolumes in your cluster depends on Cloud Native Storage (CNS), available in vSphere v6.7.x with Update 3 and later versions. CNS depends on this shared Datastore’s configuration.
Datastore URL from the datastore record for the shared datastore you want your cluster to use.
You need this URL value to ensure that the correct Datastore is used when DKP creates VMs for your cluster in vSphere. (See the govc sketch after this list for one way to retrieve it.)
Folder name.
Base template name, such as base-rhel-8 or base-rhel-7.
Name of a Virtual Network that has DHCP enabled, for both air-gapped and non-air-gapped environments.
Resource Pools - at least one resource pool is needed. Every host in the pool must have access to shared storage, such as NFS or vSAN, to make use of MachineDeployments and high-availability control planes.
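If you have the open source govc CLI installed, you can look up several of these values from the command line. This optional sketch assumes govc is installed and that the server, credential, and datastore values shown are examples you replace with your own:
export GOVC_URL='https://<vcenter-server>'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='<password>'
export GOVC_INSECURE=true             # only if vCenter uses a self-signed certificate
govc datastore.info vsanDatastore     # the URL field is the Datastore URL value described above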
The next step is:
For non-air-gapped environments, Create a Base OS image in vSphere.
For air-gapped environments, after you create and prepare a bastion VM, Create a Base Air-gapped OS VM Image. (Refer to your provider's site for details on bastion host setup for vSphere.)
Create directories for KIB and the DKP CLI
This command creates a directory for working with images created by KIB as well as a directory for running DKP commands:
mkdir kib && mkdir dkp
Get the needed D2iQ Software
Download and decompress KIB by running the following command:
cd kib
wget https://github.com/mesosphere/konvoy-image-builder/releases/download/v1.12.0/konvoy-image-bundle-v1.12.0_linux_amd64.tar.gz
tar -xvf konvoy-image-bundle-v1.12.0_linux_amd64.tar.gz
Use this link to Download and decompress DKP, then copy or move the dkp binary into the dkp subdirectory you created above.
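For example, on Linux the extraction might look like the following, run from the directory that contains the dkp/ subdirectory. The archive name is an assumption and depends on the DKP version you download, and the archive is assumed to contain a single dkp binary:
tar -xvf dkp_v2.X.X_linux_amd64.tar.gz -C dkp dkp   # extract the dkp binary into the dkp/ directory
./dkp/dkp version                                   # confirm the binary runs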
Create a folder and resource pool in vCenter for DKP cluster
To create a folder in vCenter, follow the steps below:
Right-click the datacenter
Select New Folder
Select Host and Cluster Folder
Name the folder "D2IQ"
To create a Resource Pool, follow the steps below:
Right-click the vCenter cluster you plan to use for DKP
Select New Resource Pool
Adjust values if you need to restrict resources for DKP
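If you have govc configured as described earlier, you can create the same folder and resource pool from the command line. The inventory paths below are examples assuming datacenter dc1 and cluster cluster1:
govc folder.create /dc1/host/D2IQ                     # host and cluster folder
govc pool.create /dc1/host/cluster1/Resources/D2IQ    # resource pool for DKP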
Build template using KIB
Run the following commands, replacing rhel-84.yaml with rhel-79.yaml if necessary, to create the compatible image:
cd ..
cd kib/images/ova
vi rhel-84.yaml
For troubleshooting image builds, see the solutions guide.
Adjust the Packer file for your vSphere cluster
Edit the Packer file to match your vSphere environment. For example:
---
download_images: true
build_name: "vsphere-rhel-84"
packer_builder_type: "vsphere"
guestinfo_datasource_slug: "https://raw.githubusercontent.com/vmware/cloud-init-vmware-guestinfo"
guestinfo_datasource_ref: "v1.4.0"
guestinfo_datasource_script: "{{guestinfo_datasource_slug}}/{{guestinfo_datasource_ref}}/install.sh"
packer:
  vcenter_server: "10.0.1.52"
  vsphere_username: "administrator@vsphere.local"
  vsphere_password: "Password"
  cluster: "cluster1"
  datacenter: "dc1"
  datastore: "vsanDatastore"
  folder: "d2iq"
  insecure_connection: "true"
  network: "VM Network"
  resource_pool: "D2IQ"
  template: "rhel-boot-8.4"
  vsphere_guest_os_type: "rhel8_64Guest"
  guest_os_type: "rhel8-64"
# goss params
distribution: "RHEL"
distribution_version: "8.4"
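In this example, insecure_connection is set to "true" because the sample vCenter endpoint presents a self-signed certificate; set it to "false" if your vCenter certificate is signed by a trusted CA. Replace the server, credential, and inventory values with the vSphere values you gathered in the prerequisites.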
Create overrides for Docker credentials
Navigate to the directory that contains your image and then create the override files:
cd ..
cd ..
To override the Docker credentials, create the overrides file:
vi overrides.yaml
image_registries_with_auth:
  - host: "registry-1.docker.io"
    username: "<dockerhub-user>"
    password: "<dockerhub-password>"
    auth: ""
    identityToken: ""
Build VM template using KIB
Run the following command to build your template:
./konvoy-image build images/ova/rhel-84.yaml --overrides overrides.yaml
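When the build completes, KIB prints the name of the template it created in vCenter; in this guide it has the form konvoy-ova-vsphere-rhel-84-<kubernetes_version>-<timestamp>. Note this name, because you pass it to the --vm-template flag when creating the cluster below.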
For more specific information regarding creating a vSphere Virtual Machine Template, you can refer to the KIB for vSphere or Advanced Configuration for vSphere sections of the documentation.
Create DKP Cluster on vSphere
Export your vSphere Environment Variables
Copy the set of exports below into a text editor so that you can modify them, and then paste them into the CLI terminal. It is recommended that you save this information for later reference. Export the values with these commands:
export VSPHERE_SERVER="<vcenter-server-ip-address>"
export VSPHERE_PASSWORD='<password>'
export VSPHERE_USERNAME="<administrator@vsphere.local>"
export DKP_CLUSTER_NAME=dkp
export VSPHERE_FOLDER="<folder_name>"
Retrieve the TLS thumbprint of your vCenter server (you pass this value to the --tls-thumb-print flag when creating the cluster) with:
openssl s_client -connect <vcenter_ip>:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin
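The output resembles the following (illustrative value only); the string after the equals sign is the thumbprint:
SHA1 Fingerprint=B0:4F:55:07:...:1C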
Build the Bootstrap Cluster
Execute these commands to create your bootstrap cluster:
cd ..
cd dkp
./dkp create bootstrap --with-aws-bootstrap-credentials=false
If your environment uses HTTP/HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configure HTTP Proxy.
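Before proceeding, you can optionally confirm that the Cluster API controllers are running in the bootstrap cluster. The namespace names below reflect a typical CAPI and CAPV layout and may vary by DKP version:
kubectl get pods -n capi-system
kubectl get pods -n capv-system   # the vSphere infrastructure provider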
Create the DKP cluster deployment YAML
If you are not using self-signed certificates, remove the --tls-thumb-print= flag from the command below before executing it:
dkp create cluster vsphere \
  --cluster-name=${DKP_CLUSTER_NAME} \
  --network="VM Network" \
  --control-plane-endpoint-host="<vip_for_api>" \
  --virtual-ip-interface="eth0" \
  --data-center="<dc1>" \
  --data-store="vsanDatastore" \
  --folder="${VSPHERE_FOLDER}" \
  --server="${VSPHERE_SERVER}" \
  --ssh-public-key-file=/root/.ssh/id_rsa.pub \
  --resource-pool="D2IQ" \
  --vm-template=konvoy-ova-vsphere-rhel-84-1.24.6-1649344885 \
  --tls-thumb-print="<tls_thumbprint>" \
  --dry-run -o yaml > ${DKP_CLUSTER_NAME}.yaml
If your environment uses HTTP/HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configure HTTP Proxy.
Create a New vSphere Kubernetes cluster
Create/deploy a cluster using the command below:
kubectl create -f ${DKP_CLUSTER_NAME}.yaml
If you wish to watch the cluster build, run the command below:
dkp describe cluster -c ${DKP_CLUSTER_NAME}
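You can also watch the individual machines being provisioned through the Cluster API resources in the bootstrap cluster; this is an optional check using standard kubectl against the CAPI custom resources:
kubectl get machines -w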
DKP can deploy MetalLB on vSphere. This is an advanced step, because it requires a range of IP addresses for MetalLB to manage, and it is not necessary for this Quick Start. However, know that it is an option, and see the Configure MetalLB for vSphere documentation if it applies to your configuration.
Pivot the Cluster Controllers and Create CAPI Controllers on Cluster
Perform the pivot with this command:
./dkp create capi-components --kubeconfig ${DKP_CLUSTER_NAME}.conf
If your environment uses HTTP/HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configure HTTP Proxy.
After the CAPI components are created, move the configuration to the new cluster
Move the configuration using the command below:
./dkp move --to-kubeconfig ${DKP_CLUSTER_NAME}.conf
You now have a Self-Managing Kubernetes Cluster deployed on vSphere.
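As an optional verification that the pivot succeeded, confirm that the CAPI controllers are now running on the new cluster itself:
kubectl get pods -n capi-system --kubeconfig ${DKP_CLUSTER_NAME}.conf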
Adjust the StorageClass to only allow PersistentVolumes (PVs) on a specific VMware datastore. First, delete the existing StorageClass:
kubectl delete sc vsphere-raw-block-sc --kubeconfig ${DKP_CLUSTER_NAME}.conf
Create a storage class YAML with the URL of the VMware datastore you want to use:
vi sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: vsphere-raw-block-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true
parameters:
  datastoreurl: "ds:///vmfs/volumes/vsan:5238a205736fdb1f-c71f7ec7a0353662/"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Apply the Storage class YAML to create a new default StorageClass:
kubectl apply -f sc.yaml --kubeconfig ${DKP_CLUSTER_NAME}.conf
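Verify that the new StorageClass exists and is marked as the default:
kubectl get sc --kubeconfig ${DKP_CLUSTER_NAME}.conf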
Kommander Deployment
Deploy the Kommander component to the DKP Cluster:
./dkp install kommander --kubeconfig ${DKP_CLUSTER_NAME}.conf
If you would like to watch the HelmReleases deploy, run the following command:
watch kubectl get hr -A --kubeconfig ${DKP_CLUSTER_NAME}.conf
Explore the New Kubernetes Cluster
The kubeconfig file is written to your local directory and you can now explore the cluster.
List the Nodes with the command:
kubectl --kubeconfig=${DKP_CLUSTER_NAME}.conf get nodes
Log in to the UI through Kommander
You can now log in to the UI to explore.
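The exact login flow depends on your DKP version. In recent releases you can open the dashboard and retrieve the generated credentials along these lines; treat this as a sketch and confirm the secret name against your version's documentation:
./dkp open dashboard --kubeconfig ${DKP_CLUSTER_NAME}.conf
kubectl -n kommander get secret dkp-credentials --kubeconfig ${DKP_CLUSTER_NAME}.conf \
  -o go-template='Username: {{.data.username|base64decode}}{{"\n"}}Password: {{.data.password|base64decode}}{{"\n"}}'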
Delete the Kubernetes Cluster and Cleanup your Environment
Follow these steps:
Delete the provisioned Kubernetes cluster and wait a few minutes:
dkp delete cluster \
--cluster-name=${DKP_CLUSTER_NAME} \
--kubeconfig=${DKP_CLUSTER_NAME}.conf \
--self-managed
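If the local bootstrap cluster is still running after the pivot, you can clean it up as well:
./dkp delete bootstrap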