
AWS Quick Start

This page provides instructions for getting started with DKP and getting your Kubernetes cluster up and running, with basic configuration requirements, on Amazon Web Services (AWS) public cloud instances. If you want to customize your AWS environment, see Install AWS Advanced.

Prerequisites

Before starting the DKP installation, verify that you have completed the prerequisites below.

Configure AWS Prerequisites

Follow these steps:

  1. Follow the steps in IAM Policy Minimal Permissions to Create Clusters. This will provide the permissions to launch a DKP Cluster.

  2. Follow the steps in Cluster IAM Policies and Roles to provide the IAM configurations that the DKP Cluster will use.

  3. Export the AWS region where you want to deploy the cluster:

    CODE
    export AWS_REGION=us-west-2
  4. Export the AWS Profile with the credentials that you want to use to create the Kubernetes cluster:

    CODE
    export AWS_PROFILE=<profile>
  5. Name your cluster.

    Give your cluster a unique name suitable for your environment. In AWS, it is critical that the name be unique as no two clusters in the same AWS account can have the same name.

    Set the environment variable to be used throughout this documentation:

    CODE
    export CLUSTER_NAME=aws-example

The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation fails if the name has capital letters. See the Kubernetes documentation for more naming information.
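Before running the create command, you can sanity-check the exported name locally. This is a minimal sketch; the validation regex is an assumption derived from the naming rules above, not an official DKP check:

```shell
# Optional sanity check for the cluster name. The regex is an assumption
# based on the rules above (lowercase letters, digits, '.', and '-'),
# not DKP's own validation.
CLUSTER_NAME=${CLUSTER_NAME:-aws-example}
if [[ "${CLUSTER_NAME}" =~ ^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$ ]]; then
  echo "cluster name '${CLUSTER_NAME}' looks valid"
else
  echo "cluster name '${CLUSTER_NAME}' contains disallowed characters" >&2
fi
```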

Create a New AWS Kubernetes Cluster

By default, the control-plane Nodes will be created in 3 different zones. However, the default worker Nodes will reside in a single Availability Zone. You may create additional node pools in other Availability Zones with the dkp create nodepool command.

If you use these instructions to create a cluster on AWS using the DKP default settings, without any edits to configuration files or additional flags, your cluster is deployed on an Ubuntu 20.04 operating system image with 3 control plane nodes and 4 worker nodes.

The default AWS image is not recommended for use in production. We suggest using DKP Image Builder to create a custom AMI to take advantage of enhanced cluster operations, and to explore the advanced AWS installation topics for more options.

  1. Create a Kubernetes cluster:

    NOTE: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the dkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>

    CODE
    dkp create cluster aws \
    --cluster-name=${CLUSTER_NAME} \
    --additional-tags=owner=$(whoami) \
    --with-aws-bootstrap-credentials=true \
    --self-managed

    If your environment uses HTTP/HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configure HTTP Proxy.
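    For example, a proxied variant of the command above might look like the following; the proxy endpoints and exclusion list are placeholders for your environment, not recommended values:

```shell
# Illustrative only: substitute your own proxy endpoints and exclusions.
dkp create cluster aws \
  --cluster-name=${CLUSTER_NAME} \
  --additional-tags=owner=$(whoami) \
  --with-aws-bootstrap-credentials=true \
  --self-managed \
  --http-proxy=http://proxy.example.com:3128 \
  --https-proxy=http://proxy.example.com:3128 \
  --no-proxy=127.0.0.1,localhost,169.254.169.254,.svc,.cluster.local
```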

  2. Verify your output is similar to the following:

    CODE
     ✓ Creating a bootstrap cluster
     ✓ Initializing new CAPI components
    Generating cluster resources
    cluster.cluster.x-k8s.io/aws-example created
    awscluster.infrastructure.cluster.x-k8s.io/aws-example created
    kubeadmcontrolplane.controlplane.cluster.x-k8s.io/aws-example-control-plane created
    awsmachinetemplate.infrastructure.cluster.x-k8s.io/aws-example-control-plane created
    secret/aws-example-etcd-encryption-config created
    machinedeployment.cluster.x-k8s.io/aws-example-md-0 created
    awsmachinetemplate.infrastructure.cluster.x-k8s.io/aws-example-md-0 created
    kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/aws-example-md-0 created
    clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-aws-example created
    configmap/calico-cni-installation-aws-example created
    configmap/tigera-operator-aws-example created
    clusterresourceset.addons.cluster.x-k8s.io/aws-ebs-csi-aws-example created
    configmap/aws-ebs-csi-aws-example created
    clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-aws-example created
    configmap/cluster-autoscaler-aws-example created
    clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-aws-example created
    configmap/node-feature-discovery-aws-example created
    clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-aws-example created
    configmap/nvidia-feature-discovery-aws-example created
     ✓ Waiting for cluster infrastructure to be ready
     ✓ Waiting for cluster control-planes to be ready
     ✓ Waiting for machines to be ready
     ✓ Initializing new CAPI components
     ✓ Moving cluster resources
    You can now view resources in the moved cluster by using the --kubeconfig flag with kubectl. For example: kubectl --kubeconfig=/aws-example.conf get nodes
     ✓ Deleting bootstrap cluster
    
    Cluster default/aws-example kubeconfig was written to the filesystem.
    You can now view resources in the new cluster by using the --kubeconfig flag with kubectl.
    For example: kubectl --kubeconfig=aws-example.conf get nodes

    As part of the underlying processing, the DKP CLI:

    • creates a bootstrap cluster

    • creates a workload cluster

    • moves CAPI controllers from the bootstrap cluster to the workload cluster, making it self-managed

    • deletes the bootstrap cluster

    To understand how this process works step by step, you can follow the workflow in Install AWS Advanced.
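After creation finishes, you can also inspect the Cluster API objects backing the new self-managed cluster directly with kubectl. This sketch queries the resource kinds that appear in the creation output above:

```shell
# List the Cluster API resources (cluster, control plane, machine
# deployments) that were moved into the new self-managed cluster.
kubectl --kubeconfig=${CLUSTER_NAME}.conf \
  get clusters,kubeadmcontrolplanes,machinedeployments -A
```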

Explore the New Kubernetes Cluster

The kubeconfig file is written to your local directory and you can now explore the cluster.

  1. List the Nodes with the command:

    CODE
    kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes
  2. Verify the output is similar to:

    CODE
    NAME                                         STATUS   ROLES                  AGE   VERSION
    ip-10-0-108-63.us-west-2.compute.internal    Ready    <none>                 59m   v1.22.8
    ip-10-0-115-181.us-west-2.compute.internal   Ready    <none>                 59m   v1.22.8
    ip-10-0-118-159.us-west-2.compute.internal   Ready    <none>                 59m   v1.22.8
    ip-10-0-122-136.us-west-2.compute.internal   Ready    control-plane,master   60m   v1.22.8
    ip-10-0-122-6.us-west-2.compute.internal     Ready    <none>                 59m   v1.22.8
    ip-10-0-154-239.us-west-2.compute.internal   Ready    control-plane,master   59m   v1.22.8
    ip-10-0-199-233.us-west-2.compute.internal   Ready    control-plane,master   57m   v1.22.8
  3. List the Pods with the command:

    CODE
    kubectl --kubeconfig=${CLUSTER_NAME}.conf get pods -A
  4. Verify the output is similar to:

    CODE
    NAMESPACE                           NAME                                                                 READY   STATUS    RESTARTS   AGE
    calico-system                       calico-typha-665d976df-rf7jg                                         1/1     Running   0          60m
    capa-system                         capa-controller-manager-697b7df888-vhcbj                             2/2     Running   0          57m
    capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-67d8fc9688-5p65s           1/1     Running   0          57m
    capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-846ff8b565-jqmhd       1/1     Running   0          57m
    capi-system                         capi-controller-manager-865fddc84c-9g7bb                             1/1     Running   0          57m
    cappp-system                        cappp-controller-manager-7859fbbb7f-xjh6k                            1/1     Running   0          56m
    ...

Kommander Deployment

Deploy Kommander to the DKP Cluster:

CODE
dkp install kommander --kubeconfig ${CLUSTER_NAME}.conf

If you would like to watch the Helm releases deploy, run the following command:

CODE
watch kubectl get hr -A --kubeconfig ${CLUSTER_NAME}.conf


Log in to the UI through Kommander

You can now log in to the UI to explore.
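Depending on your DKP version, the CLI may be able to open the dashboard and print the generated login credentials for you. The command below is an assumption based on recent DKP releases; verify it against the documentation for your version:

```shell
# Hypothetical for your DKP version: open the Kommander dashboard and
# display the generated admin credentials.
dkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf
```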

Delete the Kubernetes Cluster and Clean Up Your Environment

Delete the provisioned Kubernetes cluster and wait a few minutes:

CODE
dkp delete cluster \
--cluster-name=${CLUSTER_NAME} \
--kubeconfig=${CLUSTER_NAME}.conf \
--with-aws-bootstrap-credentials=true \
--self-managed

Similar to create cluster, using the --self-managed flag with the delete cluster command:

  • Creates a bootstrap cluster.

  • Moves the CAPI controllers from the workload cluster back to the bootstrap cluster.

  • Deletes the workload cluster.

  • Deletes the bootstrap cluster.

To understand how this process works step by step, you can follow the workflow in Delete an AWS Cluster.
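After deletion completes, you can optionally confirm in AWS that no instances for the cluster remain. This sketch assumes the AWS CLI is configured and that the cluster's EC2 instances carry the standard kubernetes.io/cluster/<name> ownership tag:

```shell
# List any EC2 instances still tagged as belonging to the deleted cluster;
# an empty result indicates the machines were torn down.
aws ec2 describe-instances \
  --region "${AWS_REGION}" \
  --filters "Name=tag-key,Values=kubernetes.io/cluster/${CLUSTER_NAME}" \
            "Name=instance-state-name,Values=pending,running" \
  --query 'Reservations[].Instances[].InstanceId' \
  --output text
```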

If you want to customize your AWS environment, see Install AWS Advanced or Install AWS Air-gapped.
