
vSphere Quick Start

This guide provides instructions for getting started with DKP to get your Kubernetes cluster up and running with basic configuration requirements in a vSphere environment. If you want to customize your vSphere environment, see vSphere Advanced Install.

DKP Prerequisites

Before using DKP to create a vSphere cluster, verify that you have:

  • An x86_64-based Linux® or macOS® machine.

  • The dkp binaries and Konvoy Image Builder (KIB) image bundle for Linux or macOS.

  • Docker® version 18.09.2 or later installed. You must have Docker installed on the host where the DKP Konvoy CLI runs. For example, if you are installing Konvoy on your laptop, ensure the laptop has a supported version of Docker.

On macOS, Docker runs in a virtual machine. Configure this virtual machine with at least 8GB of memory.
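One quick way to confirm the installed Docker version meets the 18.09.2 minimum is a `sort -V` comparison. This is a sketch using an illustrative version string; on a real host, populate it from `docker version` as shown in the comment:

```shell
# Illustrative version value; on your host, populate it with:
#   DOCKER_VERSION="$(docker version --format '{{.Server.Version}}')"
DOCKER_VERSION="20.10.7"
MIN_VERSION="18.09.2"
# sort -V orders version strings; if the minimum sorts first, the check passes.
OLDEST="$(printf '%s\n' "$MIN_VERSION" "$DOCKER_VERSION" | sort -V | head -n1)"
if [ "$OLDEST" = "$MIN_VERSION" ]; then
  echo "Docker $DOCKER_VERSION meets the $MIN_VERSION minimum"
fi
```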

Configure vSphere Prerequisites

Before installing, verify that your VMware vSphere Client environment meets the following basic requirements:

  • Access to a bastion VM or other network connected host.

    • You must be able to reach the vSphere API endpoint from where the Konvoy command line interface (CLI) runs.

  • vSphere account with credentials configured - this account must have Administrator privileges.

  • A Red Hat® subscription with username and password for downloading DVD ISOs.

  • For air-gapped environments, a bastion VM host template with access to a configured Docker registry.

  • Valid vSphere values for the following:

    • vCenter API server URL.

    • Datacenter name.

    • vCenter Cluster name that contains ESXi hosts for your cluster’s nodes.

    • Datastore name for the shared storage resource to be used for the VMs in the cluster.

      • Use of PersistentVolumes in your cluster depends on Cloud Native Storage (CNS), available in vSphere v6.7.x with Update 3 and later versions. CNS depends on this shared Datastore’s configuration.

    • Datastore URL from the datastore record for the shared datastore you want your cluster to use.

      • You need this URL value to ensure that the correct Datastore is used when DKP creates VMs for your cluster in vSphere.

    • Folder name.

    • Base template name, such as base-rhel-8 or base-rhel-7.

    • Name of a Virtual Network that has DHCP enabled, for both air-gapped and non-air-gapped environments.

    • Resource Pools - at least one resource pool is needed.

      • Each host in the resource pool needs access to shared storage, such as NFS or vSAN, to make use of MachineDeployments and high-availability control planes.

The next step is:

Create directories for KIB and the DKP CLI

This command creates a directory for working with images created in KIB, as well as a directory for running DKP commands:

CODE
mkdir kib && mkdir dkp

Get the needed D2iQ Software

Download and decompress KIB by running the following command:

CODE
cd kib
wget https://github.com/mesosphere/konvoy-image-builder/releases/download/v1.19.14/konvoy-image-bundle-v1.19.14_linux_amd64.tar.gz
tar -xvf konvoy-image-bundle-v1.19.14_linux_amd64.tar.gz

Use this link to download and decompress DKP, then copy or move the dkp binary into the dkp subdirectory you created above.

Create a folder and resource pool in vCenter for DKP cluster

To create a folder in vCenter, follow these steps:

  1. Right-click the datacenter.

  2. Select New Folder.

  3. Select the Host and Cluster folder type.

  4. Name the folder "D2IQ".

To create a Resource Pool, follow these steps:

  1. Right-click the vCenter cluster you plan to use for DKP.

  2. Select New Resource Pool.

  3. Adjust the values if you need to restrict resources for DKP.

Prepare the image template for KIB

Run the following commands, replacing rhel-84.yaml with rhel-79.yaml if necessary, to open the compatible image definition for editing:

CODE
cd ..
cd kib/images/ova
vi rhel-84.yaml

For help troubleshooting an image build, see the solutions guide.

Adjust the packer file for your vSphere cluster

  • DKP recommends installing with the packages provided by the operating system package managers. Use the version that corresponds to the major version of your operating system.

  • Passwords should be generated, not static, and at least 20 characters long.

CODE
---
download_images: true
build_name: "vsphere-rhel-84"
packer_builder_type: "vsphere" 
guestinfo_datasource_slug: "https://raw.githubusercontent.com/vmware/cloud-init-vmware-guestinfo"
guestinfo_datasource_ref: "v1.4.0"
guestinfo_datasource_script: "{{guestinfo_datasource_slug}}/{{guestinfo_datasource_ref}}/install.sh"
packer:
  vcenter_server: "10.0.1.52"
  vsphere_username: "administrator@vsphere.local"
  vsphere_password: "Password"
  cluster: "cluster1"
  datacenter: "dc1"
  datastore: "vsanDatastore"
  folder: "d2iq"
  insecure_connection: "true"
  network: "VM Network"
  resource_pool: "D2IQ"
  template: "rhel-boot-8.4"
  vsphere_guest_os_type: "rhel8_64Guest"
  guest_os_type: "rhel8-64"
  # goss params
  distribution: "RHEL"
  distribution_version: "8.4"
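In line with the password guidance above, the vsphere_password value can be generated rather than hard-coded. A minimal sketch using openssl (the variable name is illustrative):

```shell
# Generate a random password; 24 bytes of entropy encodes to a 32-character
# base64 string, comfortably above the 20-character minimum.
VSPHERE_BUILD_PASSWORD="$(openssl rand -base64 24 | tr -d '\n')"
echo "generated a ${#VSPHERE_BUILD_PASSWORD}-character password"
```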

Create overrides for docker credentials

Navigate to the directory that contains your image and then create the override files:

CODE
cd ..
cd ..

To override the Docker credentials, create the overrides file:

CODE
vi overrides.yaml
image_registries_with_auth:
- host: "registry-1.docker.io"
  username: "<dockerhub-user>"
  password: "<dockerhub-password>"
  auth: ""
  identityToken: ""
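The empty auth field can normally stay empty when username and password are set. Where a combined token is needed, it is conventionally the base64 encoding of user:password; whether your registry setup requires it is configuration-specific. A sketch using the placeholder credentials from the overrides file:

```shell
# Placeholder credentials from the overrides file above -- substitute real values.
DOCKER_USER="<dockerhub-user>"
DOCKER_PASS="<dockerhub-password>"
# Conventionally, a combined registry auth token is base64 of "user:password".
AUTH_TOKEN="$(printf '%s:%s' "$DOCKER_USER" "$DOCKER_PASS" | base64)"
echo "$AUTH_TOKEN"
```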

Build VM template using KIB

Run the following command to build your template:

CODE
./konvoy-image build images/ova/rhel-84.yaml --overrides overrides.yaml

Create DKP cluster on vSphere

Export your vSphere Environment Variables

Copy the set of exports below into a text document so that you can modify them, then copy and paste them into the CLI terminal. Save this information for later reference.

CODE
export VSPHERE_SERVER="<vcenter-server-ip-address>"
export VSPHERE_PASSWORD='<password>'
export VSPHERE_USERNAME="<administrator@vsphere.local>"
export CLUSTER_NAME=dkp
export DKP_CLUSTER_NAME=${CLUSTER_NAME}
openssl s_client -connect <vcenter_ip>:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin
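The final openssl command prints a line such as `SHA1 Fingerprint=AA:BB:...`; the value after the equals sign is what the --tls-thumb-print flag expects later. A sketch of extracting it (the fingerprint shown is illustrative):

```shell
# Illustrative output line; in practice, pipe the openssl command's output here.
FINGERPRINT_LINE='SHA1 Fingerprint=F0:2E:AB:12:34:56:78:9A:BC:DE:F0:12:34:56:78:9A:BC:DE:F0:12'
# Strip everything up to and including '=' to keep only the thumbprint value.
export VSPHERE_TLS_THUMBPRINT="${FINGERPRINT_LINE#*=}"
echo "$VSPHERE_TLS_THUMBPRINT"
```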

Build the Bootstrap Cluster

CODE
cd ..
cd dkp 
./dkp create bootstrap --with-aws-bootstrap-credentials=false 

Create the DKP cluster deployment YAML

If you are not using self-signed certificates, remove the --tls-thumb-print= flag from the command below before running it:

CODE
dkp create cluster vsphere \
  --cluster-name="dkp" \
  --network="VM Network" \
  --control-plane-endpoint-host="<vip_for_api>" \
  --virtual-ip-interface="eth0" \
  --data-center="<dc1>" \
  --data-store="vsanDatastore" \
  --folder="${VSPHERE_FOLDER}" \
  --server="<vsphere_server_ip>" \
  --ssh-public-key-file=/root/.ssh/id_rsa.pub \
  --resource-pool="DKP" \
  --vm-template=konvoy-ova-vsphere-rhel-84-1.23.12-1649344885 \
  --tls-thumb-print="tls-thumbprint" \
  --dry-run -o yaml > ${DKP_CLUSTER_NAME}.yaml

Create a new vSphere Kubernetes cluster

Create/deploy a cluster using the command below:

CODE
kubectl create -f ${DKP_CLUSTER_NAME}.yaml

If you wish to watch the cluster build, run the command below:

CODE
dkp describe cluster -c ${DKP_CLUSTER_NAME}
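The ${DKP_CLUSTER_NAME}.conf kubeconfig used by the following steps is fetched from the bootstrap cluster once provisioning finishes. This is a sketch, assuming the dkp get kubeconfig subcommand of your DKP version; the fetch line is commented out because it needs a live cluster:

```shell
# The cluster name used throughout this guide.
DKP_CLUSTER_NAME="dkp"
# The later steps reference this kubeconfig file name.
KUBECONFIG_FILE="${DKP_CLUSTER_NAME}.conf"
# On the bootstrap host, uncomment to write out the workload cluster kubeconfig:
# ./dkp get kubeconfig -c "${DKP_CLUSTER_NAME}" > "${KUBECONFIG_FILE}"
echo "kubeconfig file: ${KUBECONFIG_FILE}"
```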

DKP can deploy MetalLB on vSphere. This is an advanced step because it requires defining the IP addresses that MetalLB manages, so it is not necessary for this Quick Start. However, know that it is an option, and see the Configure MetalLB for vSphere documentation if your configuration requires it.

Pivot the Cluster Controllers and Create CAPI Controllers on Cluster

CODE
./dkp create capi-components --kubeconfig ${DKP_CLUSTER_NAME}.conf

Once created, move the configuration to the new cluster using the command below:

CODE
./dkp move --to-kubeconfig ${DKP_CLUSTER_NAME}.conf

You now have a Self-Managing Kubernetes Cluster deployed on vSphere.

Adjust the Storage class to only allow PVs on a specific VMware datastore

CODE
kubectl delete sc vsphere-raw-block-sc --kubeconfig ${DKP_CLUSTER_NAME}.conf

Create a storage class YAML with the URL of the VMware datastore you want to use

CODE
vi sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: vsphere-raw-block-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true
parameters:
  datastoreurl: "ds:///vmfs/volumes/vsan:5238a205736fdb1f-c71f7ec7a0353662/" 
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
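The datastoreurl parameter is the datastore URL you recorded in the prerequisites; note the ds:// scheme and the trailing slash. A sketch that checks the expected shape before you template sc.yaml, using the URL from the example above:

```shell
# Datastore URL from the example StorageClass above; substitute your own.
DATASTORE_URL="ds:///vmfs/volumes/vsan:5238a205736fdb1f-c71f7ec7a0353662/"
# Expected shape per the example above: ds:// scheme plus a trailing slash.
case "$DATASTORE_URL" in
  ds:///vmfs/volumes/*/) URL_OK="yes" ;;
  *) URL_OK="no" ;;
esac
echo "datastore URL well-formed: ${URL_OK}"
```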

Apply the Storage class YAML to create a new default SC

CODE
kubectl apply -f sc.yaml --kubeconfig ${DKP_CLUSTER_NAME}.conf

Kommander Deployment

Deploy Kommander to the DKP Cluster

CODE
./dkp install kommander --kubeconfig ${DKP_CLUSTER_NAME}.conf

If you would like to watch the Helm releases deploy, run the following command:

CODE
watch kubectl get hr -A --kubeconfig ${DKP_CLUSTER_NAME}.conf

Explore the new Kubernetes cluster

The kubeconfig file is written to your local directory and you can now explore the cluster.

List the Nodes with the command:

CODE
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes

Log in to the DKP UI

You can now log in to the DKP UI to explore.

Delete the Kubernetes Cluster and Cleanup your Environment

Follow these steps:

Delete the provisioned Kubernetes cluster and wait a few minutes:

CODE
dkp delete cluster \
--cluster-name=${CLUSTER_NAME} \
--kubeconfig=${CLUSTER_NAME}.conf \
--self-managed