GCP Quick Start
This Quick Start guide provides simplified instructions for using DKP to get your Kubernetes cluster up and running with minimal configuration requirements on Google Cloud Platform (GCP). For customization during setup, see Advanced GCP Install.
Prerequisites
Before beginning a DKP installation, verify that you have:
An x86_64-based Linux or macOS machine with a supported version of the operating system.
The dkp binary on this machine, available on the Download DKP page.
Docker version 18.09.2 or later.
kubectl for interacting with the running cluster.
Install the GCP gcloud CLI by following the GCP install documentation.
GCP Prerequisites
If you are creating the bootstrap cluster on a non-GCP instance or on one that does not have the required editor role:

(Option 1) Create a service account using the following gcloud commands:

```bash
export GCP_PROJECT=<your GCP project ID>
export SERVICE_ACCOUNT_USER=<some new service account user>
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/.gcloud/credentials.json"

gcloud iam service-accounts create "$SERVICE_ACCOUNT_USER" --project=$GCP_PROJECT
gcloud projects add-iam-policy-binding $GCP_PROJECT \
  --member="serviceAccount:$SERVICE_ACCOUNT_USER@$GCP_PROJECT.iam.gserviceaccount.com" \
  --role=roles/editor
gcloud iam service-accounts keys create $GOOGLE_APPLICATION_CREDENTIALS \
  --iam-account="$SERVICE_ACCOUNT_USER@$GCP_PROJECT.iam.gserviceaccount.com"
```
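Both the --member and --iam-account flags above take the service account's email address, which GCP composes as `<user>@<project>.iam.gserviceaccount.com`. A small sketch of that composition (the `SA_EMAIL` variable and sample values are illustrative, not part of the DKP workflow):

```shell
#!/bin/sh
# Illustrative values only; substitute your own project ID and account name.
GCP_PROJECT=my-project
SERVICE_ACCOUNT_USER=dkp-bootstrap

# GCP service account emails follow <user>@<project>.iam.gserviceaccount.com;
# this is the value passed to --member and --iam-account above.
SA_EMAIL="$SERVICE_ACCOUNT_USER@$GCP_PROJECT.iam.gserviceaccount.com"
echo "$SA_EMAIL"   # dkp-bootstrap@my-project.iam.gserviceaccount.com
```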
(Option 2) Retrieve the credentials for an existing service account using the following gcloud commands:

```bash
export GCP_PROJECT=<your GCP project ID>
export SERVICE_ACCOUNT_USER=<existing service account user>
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/.gcloud/credentials.json"

gcloud iam service-accounts keys create $GOOGLE_APPLICATION_CREDENTIALS \
  --iam-account="$SERVICE_ACCOUNT_USER@$GCP_PROJECT.iam.gserviceaccount.com"
```
Export the static credentials that will be used to create the cluster:
```bash
export GCP_B64ENCODED_CREDENTIALS=$(base64 < "${GOOGLE_APPLICATION_CREDENTIALS}" | tr -d '\n')
```
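The exported value must be a single-line base64 string. A quick sanity check of the round-trip, using a throwaway file rather than real credentials (the /tmp paths are illustrative):

```shell
#!/bin/sh
# Dummy credentials file standing in for the real
# $GOOGLE_APPLICATION_CREDENTIALS (do not use real keys for this check).
printf '{"type": "service_account", "project_id": "demo"}\n' > /tmp/credentials.json

# Same encoding step as above: strip newlines so the value is one line.
GCP_B64ENCODED_CREDENTIALS=$(base64 < /tmp/credentials.json | tr -d '\n')

# Decoding must reproduce the original file byte-for-byte.
# (On older macOS the decode flag is -D rather than -d.)
printf '%s' "$GCP_B64ENCODED_CREDENTIALS" | base64 -d > /tmp/decoded.json
cmp /tmp/credentials.json /tmp/decoded.json && echo "round-trip OK"
```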
Export the GCP region where you want to deploy the cluster; the default is us-west1:

```bash
export GCP_REGION=us-west1
```
Create a New GCP Kubernetes Cluster
If you use these instructions to create a cluster on GCP with the DKP default settings (no edits to configuration files or additional flags), your cluster is deployed with three control plane nodes and four worker nodes in the default us-west1 region.
Follow these steps:
Create an image using Konvoy Image Builder (KIB) and export the image name:
```bash
export IMAGE_NAME=projects/$GCP_PROJECT/global/images/<image-name>
```
Give your cluster a name suitable for your environment:
```bash
export CLUSTER_NAME=gcp-example
```
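The cluster name becomes the name of Kubernetes resources (the Cluster object, MachineDeployments, the kubeconfig file), so it should be a valid RFC 1123 DNS label. A hedged pre-flight check you could run before creating the cluster (the check itself is not part of the DKP workflow):

```shell
#!/bin/sh
# A valid RFC 1123 DNS label: lowercase alphanumerics and '-', starting and
# ending with an alphanumeric character, at most 63 characters.
CLUSTER_NAME=gcp-example

if printf '%s' "$CLUSTER_NAME" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'; then
  echo "cluster name OK: $CLUSTER_NAME"
else
  echo "invalid cluster name: $CLUSTER_NAME" >&2
  exit 1
fi
```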
Create a Kubernetes cluster:

NOTE: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the dkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>

```bash
dkp create cluster gcp \
  --cluster-name=${CLUSTER_NAME} \
  --additional-tags=owner=$(whoami) \
  --with-gcp-bootstrap-credentials=true \
  --project=$GCP_PROJECT \
  --image=$IMAGE_NAME \
  --self-managed
```
If your environment uses HTTP/HTTPS proxies, you must include the --http-proxy, --https-proxy, and --no-proxy flags and their related values in this command for it to be successful. More information is available in Configure HTTP Proxy.

You should see output similar to this example:
```
Generating cluster resources
cluster.cluster.x-k8s.io/gcp-example created
gcpcluster.infrastructure.cluster.x-k8s.io/gcp-example created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/gcp-example-control-plane created
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gcp-example-control-plane created
secret/gcp-example-etcd-encryption-config created
machinedeployment.cluster.x-k8s.io/gcp-example-md-0 created
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gcp-example-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/gcp-example-md-0 created
clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-gcp-example created
configmap/calico-cni-installation-gcp-example created
configmap/tigera-operator-gcp-example created
clusterresourceset.addons.cluster.x-k8s.io/gcp-persistent-disk-gcp-example created
configmap/gcp-persistent-disk-csi-gcp-example created
clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-gcp-example created
configmap/cluster-autoscaler-gcp-example created
clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-gcp-example created
configmap/node-feature-discovery-gcp-example created
clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-gcp-example created
configmap/nvidia-feature-discovery-gcp-example created
```
As part of the underlying processing, the DKP CLI:
creates a bootstrap cluster
creates a workload cluster
moves CAPI controllers from the bootstrap cluster to the workload cluster, making it self-managed
deletes the bootstrap cluster
Explore the New Kubernetes Cluster
The kubeconfig file is written to your local directory, and you can now explore the cluster.
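Rather than passing --kubeconfig on every command, you can point kubectl at the file through the KUBECONFIG environment variable. A minimal sketch of the mechanics, using a stand-in kubeconfig (the real file written by dkp create cluster is ${CLUSTER_NAME}.conf, and its contents will differ):

```shell
#!/bin/sh
# Stand-in kubeconfig to show the mechanics; do not use this content
# against a real cluster.
CLUSTER_NAME=gcp-example
cat > "${CLUSTER_NAME}.conf" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: gcp-example
  cluster:
    server: https://127.0.0.1:6443
contexts:
- name: gcp-example-admin@gcp-example
  context:
    cluster: gcp-example
current-context: gcp-example-admin@gcp-example
EOF

# Point kubectl at the file for the rest of the session instead of
# passing --kubeconfig on every command:
KUBECONFIG="$(pwd)/${CLUSTER_NAME}.conf"
export KUBECONFIG
grep '^current-context:' "$KUBECONFIG"
# kubectl get nodes   # would now target the cluster named in the file
```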
Follow these steps:
List the Nodes:
```bash
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes
```
You should see output similar to this example:
```
NAME                              STATUS   ROLES                  AGE     VERSION
gcp-example-control-plane-9z77w   Ready    control-plane,master   4m44s   v1.24.6
gcp-example-control-plane-rtj9h   Ready    control-plane,master   104s    v1.24.6
gcp-example-control-plane-zbf9w   Ready    control-plane,master   3m23s   v1.24.6
gcp-example-md-0-88c46            Ready    <none>                 3m28s   v1.24.6
gcp-example-md-0-fp8s7            Ready    <none>                 3m28s   v1.24.6
gcp-example-md-0-qvnx7            Ready    <none>                 3m28s   v1.24.6
gcp-example-md-0-wjdrg            Ready    <none>                 3m27s   v1.24.6
```
List the Pods with the command:
```bash
kubectl --kubeconfig=${CLUSTER_NAME}.conf get pods -A
```
You should see output similar to this example:
```
NAMESPACE  NAME  READY  STATUS  RESTARTS  AGE
calico-system  calico-kube-controllers-577c696df9-v2nzv  1/1  Running  0  5m23s
calico-system  calico-node-4x5rk  1/1  Running  0  4m22s
calico-system  calico-node-cxsgc  1/1  Running  0  4m23s
calico-system  calico-node-dvlnm  1/1  Running  0  4m23s
calico-system  calico-node-h6nlt  1/1  Running  0  4m23s
calico-system  calico-node-jmkwq  1/1  Running  0  5m23s
calico-system  calico-node-tnf54  1/1  Running  0  4m18s
calico-system  calico-node-v6bwq  1/1  Running  0  2m39s
calico-system  calico-typha-6d8c94bfdf-dkfvq  1/1  Running  0  5m23s
calico-system  calico-typha-6d8c94bfdf-fdfn2  1/1  Running  0  3m43s
calico-system  calico-typha-6d8c94bfdf-kjgzj  1/1  Running  0  3m43s
capa-system  capa-controller-manager-6468bc488-w7nj9  1/1  Running  0  67s
capg-system  capg-controller-manager-5fb47f869b-6jgms  1/1  Running  0  53s
capi-kubeadm-bootstrap-system  capi-kubeadm-bootstrap-controller-manager-65ffc94457-7cjdn  1/1  Running  0  74s
capi-kubeadm-control-plane-system  capi-kubeadm-control-plane-controller-manager-bc7b688d4-vv8wg  1/1  Running  0  72s
capi-system  capi-controller-manager-dbfc7b49-dzvw8  1/1  Running  0  77s
cappp-system  cappp-controller-manager-8444d67568-rmms2  1/1  Running  0  59s
capv-system  capv-controller-manager-58b8ccf868-rbscn  1/1  Running  0  56s
capz-system  capz-controller-manager-6467f986d8-dnvj4  1/1  Running  0  62s
cert-manager  cert-manager-6888d6b69b-7b7m9  1/1  Running  0  91s
cert-manager  cert-manager-cainjector-76f7798c9-gnp8f  1/1  Running  0  91s
cert-manager  cert-manager-webhook-7d4b5d8484-gn5dr  1/1  Running  0  91s
gce-pd-csi-driver  csi-gce-pd-controller-5bd587fbfb-lrx29  5/5  Running  0  5m40s
gce-pd-csi-driver  csi-gce-pd-node-4cgd8  2/2  Running  0  4m22s
gce-pd-csi-driver  csi-gce-pd-node-5qsfk  2/2  Running  0  4m23s
gce-pd-csi-driver  csi-gce-pd-node-5w4bq  2/2  Running  0  4m18s
gce-pd-csi-driver  csi-gce-pd-node-fbdbw  2/2  Running  0  4m23s
gce-pd-csi-driver  csi-gce-pd-node-h82lx  2/2  Running  0  4m23s
gce-pd-csi-driver  csi-gce-pd-node-jzq58  2/2  Running  0  5m39s
gce-pd-csi-driver  csi-gce-pd-node-k6bz9  2/2  Running  0  2m39s
kube-system  cluster-autoscaler-7f695dc48f-v5kvh  1/1  Running  0  5m40s
kube-system  coredns-64897985d-hbkqd  1/1  Running  0  5m38s
kube-system  coredns-64897985d-m8g5j  1/1  Running  0  5m38s
kube-system  etcd-gcp-example-control-plane-9z77w  1/1  Running  0  5m32s
kube-system  etcd-gcp-example-control-plane-rtj9h  1/1  Running  0  2m37s
kube-system  etcd-gcp-example-control-plane-zbf9w  1/1  Running  0  4m17s
kube-system  kube-apiserver-gcp-example-control-plane-9z77w  1/1  Running  0  5m32s
kube-system  kube-apiserver-gcp-example-control-plane-rtj9h  1/1  Running  0  2m38s
kube-system  kube-apiserver-gcp-example-control-plane-zbf9w  1/1  Running  0  4m17s
kube-system  kube-controller-manager-gcp-example-control-plane-9z77w  1/1  Running  0  5m33s
kube-system  kube-controller-manager-gcp-example-control-plane-rtj9h  1/1  Running  0  2m37s
kube-system  kube-controller-manager-gcp-example-control-plane-zbf9w  1/1  Running  0  4m17s
kube-system  kube-proxy-bskz2  1/1  Running  0  4m18s
kube-system  kube-proxy-gdkn5  1/1  Running  0  4m23s
kube-system  kube-proxy-knvb9  1/1  Running  0  4m22s
kube-system  kube-proxy-tcj7r  1/1  Running  0  4m23s
kube-system  kube-proxy-thdpl  1/1  Running  0  5m38s
kube-system  kube-proxy-txxmb  1/1  Running  0  4m23s
kube-system  kube-proxy-vq6kv  1/1  Running  0  2m39s
kube-system  kube-scheduler-gcp-example-control-plane-9z77w  1/1  Running  0  5m33s
kube-system  kube-scheduler-gcp-example-control-plane-rtj9h  1/1  Running  0  2m37s
kube-system  kube-scheduler-gcp-example-control-plane-zbf9w  1/1  Running  0  4m17s
node-feature-discovery  node-feature-discovery-master-7d5985467-lh7dc  1/1  Running  0  5m40s
node-feature-discovery  node-feature-discovery-worker-5qtvg  1/1  Running  0  3m40s
node-feature-discovery  node-feature-discovery-worker-66rwx  1/1  Running  0  3m40s
node-feature-discovery  node-feature-discovery-worker-7h92d  1/1  Running  0  3m35s
node-feature-discovery  node-feature-discovery-worker-b4666  1/1  Running  0  3m40s
tigera-operator  tigera-operator-5f9bdc5c59-j9tnr  1/1  Running  0  5m38s
```
Install and Log in to the UI
You can now proceed to installing the Kommander UI and applications. After installation, you can log in to the UI to explore it.
Delete the Kubernetes Cluster and Clean up Your Environment
Delete the provisioned Kubernetes cluster and wait a few minutes:

```bash
dkp delete cluster \
  --cluster-name=${CLUSTER_NAME} \
  --with-gcp-bootstrap-credentials=true \
  --kubeconfig=${CLUSTER_NAME}.conf \
  --self-managed
```
You should see output similar to this example:
```
✓ Creating a bootstrap cluster
✓ Initializing new CAPI components
✓ Moving cluster resources

You can now view resources in the moved cluster by using the --kubeconfig flag with kubectl. For example: kubectl --kubeconfig=gcp-example-bootstrap.conf get nodes

✓ Waiting for cluster infrastructure to be ready
✓ Waiting for cluster control-planes to be ready
✓ Waiting for machines to be ready
✓ Deleting Services with type LoadBalancer for Cluster default/gcp-example
✓ Deleting ClusterResourceSets for Cluster default/gcp-example
✓ Deleting cluster resources
✓ Waiting for cluster to be fully deleted
Deleted default/gcp-example cluster
✓ Deleting bootstrap cluster
```