Manage EKS Node Pools
Ensure that the KUBECONFIG environment variable points to the self-managed cluster by running export KUBECONFIG=${SELF_MANAGED_AWS_CLUSTER}.conf
Node pools are part of a cluster and let you manage a group of machines with common properties. New default clusters created by Konvoy contain one node pool of worker nodes that all share the same configuration.
You can create additional node pools for specialized hardware or other configurations. For example, if your workloads need maximum memory on some machines and minimal memory on others, you can create a new node pool with those specific resource requirements.
NOTE: Konvoy implements node pools using Cluster API MachineDeployments.
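Because node pools map to MachineDeployments, you can inspect the underlying objects directly with kubectl. This assumes your kubeconfig targets the management cluster and the default namespace used in the examples below:

kubectl get machinedeployments -n default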
Create a node pool
Availability zones (AZs) are isolated locations within data center regions from which public cloud services originate and operate. Because all the nodes in a node pool are deployed in a single Availability Zone, you may wish to create additional node pools to ensure your cluster has nodes deployed in multiple Availability Zones.
By default, the first Availability Zone in the region is used for the nodes in the node pool. To create the nodes in a different Availability Zone, set the appropriate --availability-zone flag.
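For example, to pin a new node pool to a specific zone (the zone name here is illustrative; substitute one from your cluster's region):

dkp create nodepool eks ${NODEPOOL_NAME} \
    --cluster-name=${CLUSTER_NAME} \
    --replicas=3 \
    --availability-zone=us-west-2b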
To create a new EKS node pool with 3 replicas, run:
dkp create nodepool eks ${NODEPOOL_NAME} \
--cluster-name=${CLUSTER_NAME} \
--replicas=3
The output should be similar to the following, indicating the node pool resources were created:
machinedeployment.cluster.x-k8s.io/example created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/example created
eksconfigtemplate.bootstrap.cluster.x-k8s.io/example created
✓ Creating default/example nodepool resources
Advanced users can use a combination of the --dry-run and --output=yaml flags to get a complete set of node pool objects to modify locally or store in version control.
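For example, the following writes the generated objects to a file that you can edit and check in (the file name is illustrative):

dkp create nodepool eks ${NODEPOOL_NAME} \
    --cluster-name=${CLUSTER_NAME} \
    --replicas=3 \
    --dry-run \
    --output=yaml > ${NODEPOOL_NAME}.yaml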
Scaling Up Node Pools
To scale up a node pool in a cluster, run:
dkp scale nodepools ${NODEPOOL_NAME} --replicas=5 --cluster-name=${CLUSTER_NAME}
Your output should be similar to this example, indicating the scaling is in progress:
✓ Scaling node pool example to 5 replicas
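Because node pools are implemented as Cluster API MachineDeployments (see the note above), you can in principle scale the underlying object directly with kubectl. This sketch assumes the MachineDeployment shares the node pool's name and lives in the default namespace:

kubectl scale machinedeployment ${NODEPOOL_NAME} --replicas=5 -n default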
After a few minutes, you can list the node pools to verify the change:
dkp get nodepools --cluster-name=${CLUSTER_NAME}
Your output should be similar to this example, with the number of DESIRED and READY replicas increased to 5:
NODEPOOL           DESIRED   READY   KUBERNETES VERSION
example            5         5       v1.23.6
eks-example-md-0   4         4       v1.23.6
Delete EKS Node Pools
Deleting a node pool deletes the Kubernetes nodes and the underlying infrastructure. All nodes are drained prior to deletion and the pods running on those nodes are rescheduled.
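While a deletion is in progress, you can watch the corresponding Machine objects being drained and removed, assuming kubectl is pointed at the management cluster:

kubectl get machines --watch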
To delete a node pool from a managed cluster, run:
dkp delete nodepool ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME}
Here, example (the value of ${NODEPOOL_NAME}) is the node pool to be deleted.
The expected output will be similar to the following example, indicating the node pool is being deleted:
✓ Deleting default/example nodepool resources
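To confirm the node pool is gone, list the remaining node pools:

dkp get nodepools --cluster-name=${CLUSTER_NAME}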
Attempting to delete a node pool that does not exist results in output similar to this example:
dkp delete nodepool ${CLUSTER_NAME}-md-invalid --cluster-name=${CLUSTER_NAME}
MachineDeployments or MachinePools.infrastructure.cluster.x-k8s.io "no MachineDeployments or MachinePools found for cluster eks-example" not found