EKS Quick Start
This guide describes how to use DKP to get a Kubernetes cluster up and running with basic configuration on an Amazon Elastic Kubernetes Service (EKS) public cloud instance. If you want to customize your EKS environment, see EKS Advanced Install.
DKP Prerequisites
Before you begin using DKP, you must have:
An x86_64-based Linux or macOS machine with a supported version of the operating system.
The dkp binary for Linux or macOS, available on the Download DKP page.
Docker version 18.09.2 or later installed.
kubectl for interacting with the running cluster.
On macOS, Docker runs in a virtual machine. Configure this virtual machine with at least 8GB of memory.
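To confirm these prerequisites quickly, you can check each tool from a terminal. This is only a minimal sketch; the exact version output depends on your installation:
# Confirm the dkp binary is on your PATH
dkp version
# Confirm Docker is version 18.09.2 or later
docker --version
# Confirm kubectl is installed
kubectl version --client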
AWS Prerequisites
Before you begin using DKP with AWS, you must have:
A valid AWS account with credentials configured.
Installation of aws-iam-authenticator. kubectl uses this binary to access your cluster, because Amazon EKS uses IAM to provide authentication to your Kubernetes cluster. A quick verification sketch follows this list.
An IAM policy configuration created.
Export the AWS region where you want to deploy the cluster:
export AWS_REGION=us-west-2
Export the AWS profile with the credentials you want to use to create the Kubernetes cluster:
export AWS_PROFILE=<profile>
See the AWS site for more information about AWS credentials.
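As a quick check of these AWS prerequisites, you can verify the tooling and credentials from a terminal. This is an illustrative sketch only; how you install aws-iam-authenticator depends on your platform:
# Confirm aws-iam-authenticator is installed and on your PATH
aws-iam-authenticator version
# Confirm the exported profile and region resolve to valid credentials
aws sts get-caller-identity --profile ${AWS_PROFILE} --region ${AWS_REGION}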
EKS for DKP 2.4.x is compatible with Kubernetes 1.23.x because EKS uses its own Kubernetes release cycle.
Configure EKS Prerequisites
Follow these steps:
Follow the steps in EKS Cluster IAM Policy and Roles Configuration.
Export the AWS region where you want to deploy the EKS cluster:
export AWS_REGION=us-west-2
Export the AWS Profile with the credentials that you want to use to create the EKS Kubernetes cluster:
export AWS_PROFILE=<profile>
Name Your Cluster
Give your cluster a unique name suitable for your environment. In EKS, the name must be unique, because no two clusters in the same EKS account can have the same name.
Set the environment variable to be used throughout this documentation:
export CLUSTER_NAME=eks-example
Follow these steps:
(Optional) To get a list of names in use in your EKS account, use the aws CLI tool. For example:
aws ec2 describe-vpcs --filter "Name=tag-key,Values=kubernetes.io/cluster" --query "Vpcs[*].Tags[?Key=='kubernetes.io/cluster'].Value | sort(@[*][0])"
"alex-eks-cluster-afe98", "sam-aws-cluster-8if9q"
(Optional) If you want to create a cluster name that matches the example above, use this command. It generates a unique name every time you run it, so use it with forethought.
export CLUSTER_NAME=eks-example-$(LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | fold -w 5 | head -n1)
echo $CLUSTER_NAME
eks-example-i05l6
Create a New EKS Kubernetes Cluster
Follow these steps:
Make sure your AWS credentials are up to date. If you are using user profiles, refresh the credentials with the following command before creating the cluster; otherwise, proceed to the next step.
dkp update bootstrap credentials aws
Create a Kubernetes cluster:
dkp create cluster eks --cluster-name=${CLUSTER_NAME} --additional-tags=owner=$(whoami)
If your environment uses HTTP/HTTPS proxies, you must include the --http-proxy, --https-proxy, and --no-proxy flags and their related values in this command for it to be successful; see the sketch after the sample output below. More information is available in Configure HTTP Proxy.
The output resembles:
Generating cluster resources
cluster.cluster.x-k8s.io/eks-example created
awsmanagedcontrolplane.controlplane.cluster.x-k8s.io/eks-example-control-plane created
machinedeployment.cluster.x-k8s.io/eks-example-md-0 created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/eks-example-md-0 created
eksconfigtemplate.bootstrap.cluster.x-k8s.io/eks-example-md-0 created
clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-eks-example created
configmap/calico-cni-installation-eks-example created
configmap/tigera-operator-eks-example created
clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-eks-example created
configmap/cluster-autoscaler-eks-example created
clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-eks-example created
configmap/node-feature-discovery-eks-example created
clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-eks-example created
configmap/nvidia-feature-discovery-eks-example created
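For reference, a create command for a proxied environment might look like the following sketch; the proxy endpoint and no-proxy list are placeholder values to replace with your own:
dkp create cluster eks --cluster-name=${CLUSTER_NAME} \
  --additional-tags=owner=$(whoami) \
  --http-proxy="http://proxy.example.com:3128" \
  --https-proxy="http://proxy.example.com:3128" \
  --no-proxy="127.0.0.1,localhost,.svc,.cluster.local"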
(Optional) Specify an authorized key file to have SSH access to the machines. The file must contain exactly one entry, as described in this manual. You can use the .pub file that complements your private SSH key. For example, use the public key that complements your RSA private key:
--ssh-public-key-file=${HOME}/.ssh/id_rsa.pub
The default username for SSH access is konvoy. To use your own username instead, set:
--ssh-username=$(whoami)
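Putting these optional flags together, a full create command with SSH access enabled might look like this sketch (the key path and username are examples):
dkp create cluster eks --cluster-name=${CLUSTER_NAME} \
  --additional-tags=owner=$(whoami) \
  --ssh-public-key-file=${HOME}/.ssh/id_rsa.pub \
  --ssh-username=$(whoami)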
Wait for the cluster control-plane to be ready:
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m
cluster.cluster.x-k8s.io/eks-example condition met
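If you want to watch provisioning while you wait, you can list the Cluster API resources created above directly on the bootstrap cluster. This is an optional check using standard kubectl commands:
# Show the cluster object and its worker MachineDeployment
kubectl get clusters,machinedeployments --all-namespaces
# Show the EKS control plane object created by the provider
kubectl get awsmanagedcontrolplane --all-namespaces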
Explore the New Kubernetes Cluster
Follow these steps:
Fetch the kubeconfig file:
dkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
List the Nodes with the command:
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-122-211.us-west-2.compute.internal   Ready    <none>   32s   v1.21.5-eks-9017834
ip-10-0-127-74.us-west-2.compute.internal    Ready    <none>   42s   v1.21.5-eks-9017834
ip-10-0-71-155.us-west-2.compute.internal    Ready    <none>   46s   v1.21.5-eks-9017834
ip-10-0-93-47.us-west-2.compute.internal     Ready    <none>   51s   v1.21.5-eks-9017834
NOTE: It may take a couple of minutes for the STATUS to move to Ready while the calico-node pods are being deployed.
List the Pods with the command:
kubectl --kubeconfig=${CLUSTER_NAME}.conf get pods -A
NAMESPACE                NAME                                             READY   STATUS     RESTARTS   AGE
calico-system            calico-kube-controllers-69845d4df5-sc9vq         1/1     Running    0          44s
calico-system            calico-node-5lppw                                1/1     Running    0          44s
calico-system            calico-node-dwbfj                                1/1     Running    0          44s
calico-system            calico-node-q6tg6                                1/1     Running    0          44s
calico-system            calico-node-rbm7c                                1/1     Running    0          44s
calico-system            calico-typha-68c68c96d-tcrxn                     1/1     Running    0          35s
calico-system            calico-typha-68c68c96d-xhrjv                     1/1     Running    0          44s
kube-system              aws-node-25bnt                                   1/1     Running    0          80s
kube-system              aws-node-dr4b7                                   1/1     Running    0          89s
kube-system              aws-node-mmn87                                   1/1     Running    0          70s
kube-system              aws-node-z6cdb                                   1/1     Running    0          84s
kube-system              cluster-autoscaler-68c759fbf6-zszxr              0/1     Init:0/1   0          9m50s
kube-system              coredns-85d5b4454c-n54rq                         1/1     Running    0          12m
kube-system              coredns-85d5b4454c-xzd9w                         1/1     Running    0          12m
kube-system              kube-proxy-4bhzp                                 1/1     Running    0          84s
kube-system              kube-proxy-5hkv9                                 1/1     Running    0          80s
kube-system              kube-proxy-g82d7                                 1/1     Running    0          70s
kube-system              kube-proxy-h2jv5                                 1/1     Running    0          89s
node-feature-discovery   node-feature-discovery-master-84c67dcbb6-s6874   1/1     Running    0          9m50s
node-feature-discovery   node-feature-discovery-worker-677hh              1/1     Running    0          69s
node-feature-discovery   node-feature-discovery-worker-fvjwz              1/1     Running    0          49s
node-feature-discovery   node-feature-discovery-worker-xcgvt              1/1     Running    0          64s
node-feature-discovery   node-feature-discovery-worker-zctnz              1/1     Running    0          60s
tigera-operator          tigera-operator-d499f5c8f-b56xn                  1/1     Running    1          9m47s
Install Kommander and Log in to the UI
You can now proceed to installing the Kommander UI and applications. After installation, you can log in to the UI to explore it.
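A minimal sketch of that next step, assuming the dkp install kommander subcommand in your DKP release and the kubeconfig retrieved in the previous section (see the Kommander installation documentation for the authoritative procedure):
# Point the CLI at the new EKS cluster, then install Kommander
export KUBECONFIG=${CLUSTER_NAME}.conf
dkp install kommander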
Delete the Kubernetes Cluster and Clean Up Your Environment
Delete the provisioned Kubernetes cluster and wait a few minutes:
dkp delete cluster --cluster-name=${CLUSTER_NAME}
✓ Deleting Services with type LoadBalancer for Cluster default/eks-example
✓ Deleting ClusterResourceSets for Cluster default/eks-example
✓ Deleting cluster resources
✓ Waiting for cluster to be fully deleted
Deleted default/eks-example cluster
Delete the kind Kubernetes cluster:
dkp delete bootstrap --kubeconfig $HOME/.kube/config
✓ Deleting bootstrap cluster
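As an optional final check that nothing was left behind in your AWS account, you can re-run the earlier VPC query and confirm that your cluster name no longer appears in the results:
aws ec2 describe-vpcs --filter "Name=tag-key,Values=kubernetes.io/cluster" --query "Vpcs[*].Tags[?Key=='kubernetes.io/cluster'].Value | sort(@[*][0])"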