AWS Quick Start
This page provides instructions for getting started with DKP and getting a Kubernetes cluster up and running, with basic configuration, on Amazon Web Services (AWS) public cloud instances. If you want to customize your AWS environment, see Install AWS Advanced.
Prerequisites
Before starting the DKP installation, verify that you have:
An x86_64-based Linux or macOS machine with a supported version of the operating system.
The dkp binary for Linux or macOS.
Docker version 18.09.2 or later.
kubectl, for interacting with the running cluster.
A valid AWS account with credentials configured.
Configure AWS prerequisites
Follow these steps:
Follow the steps in IAM Policy Configuration.
Export the AWS region where you want to deploy the cluster:
export AWS_REGION=us-west-2
Export the AWS Profile with the credentials that you want to use to create the Kubernetes cluster:
export AWS_PROFILE=<profile>
Name your cluster.
Give your cluster a unique name suitable for your environment. In AWS, it is critical that the name be unique as no two clusters in the same AWS account can have the same name.
Set the environment variable to be used throughout this documentation:
export CLUSTER_NAME=aws-example
The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.
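The naming rules above can be checked before you run any dkp commands. The following helper function is hypothetical (it is not part of the dkp CLI); it is a minimal sketch that rejects names containing anything other than lowercase letters, digits, dots, and hyphens:

```shell
# Hypothetical helper (not part of the dkp CLI): checks a proposed cluster
# name against the rules above -- lowercase a-z, 0-9, '.', and '-' only.
validate_cluster_name() {
  case "$1" in
    *[!a-z0-9.-]*|"") return 1 ;;  # empty, or contains a disallowed character
    *) return 0 ;;
  esac
}

validate_cluster_name "aws-example" && echo "valid"
validate_cluster_name "AWS_Example" || echo "invalid: disallowed characters"
```

Note this only enforces the character rules stated here; Kubernetes applies further DNS-based naming constraints (for example, length limits), so consult the Kubernetes documentation for the full specification.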
Create a new AWS Kubernetes cluster
By default, the control-plane Nodes will be created in 3 different Availability Zones. However, the default worker Nodes will reside in a single Availability Zone. You may create additional node pools in other Availability Zones with the dkp create nodepool command.
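As a sketch, a node pool in another Availability Zone might be created as follows. The node pool name, zone, and flag names here are illustrative assumptions; verify them against dkp create nodepool aws --help for your DKP version before running:

```shell
# Sketch only -- flag names and values are assumptions; confirm against
# `dkp create nodepool aws --help` for your DKP version.
dkp create nodepool aws example-nodepool \
  --cluster-name=${CLUSTER_NAME} \
  --availability-zone=us-west-2c \
  --replicas=2
```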
Ensure your AWS credentials are up to date. If you are using Static Credentials, use the following command to refresh the credentials. Otherwise, proceed to step 1 below:
dkp update bootstrap credentials aws
If you use these instructions to create a cluster on AWS using the DKP default settings, without any edits to configuration files or additional flags, your cluster is deployed on an Ubuntu 20.04 operating system image with 3 control plane nodes and 4 worker nodes.
The default AWS image is not recommended for use in production. We suggest using DKP Image Builder to create a custom AMI to take advantage of enhanced cluster operations, and exploring the advanced AWS installation topics for more options.
Create a Kubernetes cluster:
NOTE: To avoid hitting Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the dkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>

dkp create cluster aws \
  --cluster-name=${CLUSTER_NAME} \
  --additional-tags=owner=$(whoami) \
  --with-aws-bootstrap-credentials=true \
  --self-managed
Verify your output is similar to the following:
✓ Creating a bootstrap cluster
✓ Initializing new CAPI components
Generating cluster resources
cluster.cluster.x-k8s.io/aws-example created
awscluster.infrastructure.cluster.x-k8s.io/aws-example created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/aws-example-control-plane created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/aws-example-control-plane created
secret/aws-example-etcd-encryption-config created
machinedeployment.cluster.x-k8s.io/aws-example-md-0 created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/aws-example-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/aws-example-md-0 created
clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-aws-example created
configmap/calico-cni-installation-aws-example created
configmap/tigera-operator-aws-example created
clusterresourceset.addons.cluster.x-k8s.io/aws-ebs-csi-aws-example created
configmap/aws-ebs-csi-aws-example created
clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-aws-example created
configmap/cluster-autoscaler-aws-example created
clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-aws-example created
configmap/node-feature-discovery-aws-example created
clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-aws-example created
configmap/nvidia-feature-discovery-aws-example created
✓ Waiting for cluster infrastructure to be ready
✓ Waiting for cluster control-planes to be ready
✓ Waiting for machines to be ready
✓ Initializing new CAPI components
✓ Moving cluster resources
You can now view resources in the moved cluster by using the --kubeconfig flag with kubectl.
For example: kubectl --kubeconfig=/aws-example.conf get nodes
✓ Deleting bootstrap cluster
Cluster default/aws-example kubeconfig was written to the filesystem.
You can now view resources in the new cluster by using the --kubeconfig flag with kubectl.
For example: kubectl --kubeconfig=aws-example.conf get nodes
As part of the underlying processing, the DKP CLI:
creates a bootstrap cluster
creates a workload cluster
moves CAPI controllers from the bootstrap cluster to the workload cluster, making it self-managed
deletes the bootstrap cluster
To understand how this process works step by step, you can follow the workflow in Install AWS Advanced.
Explore the new Kubernetes Cluster
The kubeconfig file is written to your local directory, and you can now explore the cluster.
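If you prefer not to pass --kubeconfig on every command, kubectl also honors the standard KUBECONFIG environment variable; pointing it at the new cluster's kubeconfig makes the flag unnecessary for the rest of the session:

```shell
# Point kubectl at the new cluster's kubeconfig so the --kubeconfig flag
# can be omitted on subsequent commands in this shell session.
export CLUSTER_NAME=aws-example   # as set earlier in this guide
export KUBECONFIG="${PWD}/${CLUSTER_NAME}.conf"
echo "${KUBECONFIG}"
# After this, `kubectl get nodes` is equivalent to
# `kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes`.
```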
List the Nodes with the command:
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes
Verify the output is similar to:
NAME                                         STATUS   ROLES                  AGE   VERSION
ip-10-0-108-63.us-west-2.compute.internal    Ready    <none>                 59m   v1.22.8
ip-10-0-115-181.us-west-2.compute.internal   Ready    <none>                 59m   v1.22.8
ip-10-0-118-159.us-west-2.compute.internal   Ready    <none>                 59m   v1.22.8
ip-10-0-122-136.us-west-2.compute.internal   Ready    control-plane,master   60m   v1.22.8
ip-10-0-122-6.us-west-2.compute.internal     Ready    <none>                 59m   v1.22.8
ip-10-0-154-239.us-west-2.compute.internal   Ready    control-plane,master   59m   v1.22.8
ip-10-0-199-233.us-west-2.compute.internal   Ready    control-plane,master   57m   v1.22.8
List the Pods with the command:
kubectl --kubeconfig=${CLUSTER_NAME}.conf get pods -A
Verify the output is similar to:
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
calico-system                       calico-typha-665d976df-rf7jg                                     1/1     Running   0          60m
capa-system                         capa-controller-manager-697b7df888-vhcbj                         2/2     Running   0          57m
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-67d8fc9688-5p65s       1/1     Running   0          57m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-846ff8b565-jqmhd   1/1     Running   0          57m
capi-system                         capi-controller-manager-865fddc84c-9g7bb                         1/1     Running   0          57m
cappp-system                        cappp-controller-manager-7859fbbb7f-xjh6k                        1/1     Running   0          56m
...
Install and Log in to the DKP UI
You can now proceed to installing the DKP UI and applications. After installation, you will be able to log in to the DKP UI to explore it.
Delete the Kubernetes Cluster and Cleanup your Environment
If you no longer need the cluster and want to delete it, you can do so using the DKP CLI.
Delete the provisioned Kubernetes cluster:
dkp delete cluster \
  --cluster-name=${CLUSTER_NAME} \
  --with-aws-bootstrap-credentials=true \
  --kubeconfig=${CLUSTER_NAME}.conf \
  --self-managed
Verify the output is similar to:
✓ Creating a bootstrap cluster
✓ Initializing new CAPI components
✓ Moving cluster resources
You can now view resources in the moved cluster by using the --kubeconfig flag with kubectl.
For example: kubectl --kubeconfig=aws-example-bootstrap.conf get nodes
✓ Waiting for cluster infrastructure to be ready
✓ Waiting for cluster control-planes to be ready
✓ Waiting for machines to be ready
✓ Deleting Services with type LoadBalancer for Cluster default/aws-example
✓ Deleting ClusterResourceSets for Cluster default/aws-example
✓ Deleting cluster resources
✓ Waiting for cluster to be fully deleted
Deleted default/aws-example cluster
✓ Deleting bootstrap cluster
As with create cluster, passing the --self-managed flag to the delete cluster command does the following:
Creates a bootstrap cluster.
Moves the CAPI controllers from the workload cluster back to the bootstrap cluster.
Deletes the workload cluster.
Deletes the bootstrap cluster.
To understand how this process works step by step, you can follow the workflow in Delete Cluster.