
Scale AWS Node Pools

Scale node pools in a cluster

While you can run the Cluster Autoscaler, you can also scale your node pools up or down manually when you need more fine-grained control over your environment. For example, if you require 10 machines to run a process, you can manually set the scaling to run exactly those 10 machines. However, if you are also using the Cluster Autoscaler, you must stay within its minimum and maximum bounds.
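
Before scaling manually, it can help to check the node pool's current size. As a sketch, assuming the node pool is backed by a Cluster API MachineDeployment of the same name (as in the annotation commands later on this page), you can inspect it with kubectl:

CODE
kubectl --kubeconfig=${CLUSTER_NAME}.conf get machinedeployment ${NODEPOOL_NAME}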

Scaling Up Node Pools

To scale up a node pool in a cluster, run:

CODE
dkp scale nodepools ${NODEPOOL_NAME} --replicas=5 --cluster-name=${CLUSTER_NAME}

Your output should be similar to this example, indicating the scaling is in progress:

CODE
✓ Scaling node pool example to 5 replicas

After a few minutes, you can list the node pools to view the updated replica count:

CODE
dkp get nodepools --cluster-name=${CLUSTER_NAME} --kubeconfig=${CLUSTER_NAME}.conf

Your output should be similar to this example, with the number of DESIRED and READY replicas increased to 5:

CODE
NODEPOOL                           DESIRED               READY               KUBERNETES VERSION               
example                            5                     5                   v1.24.6                          
aws-example-md-0                   4                     4                   v1.24.6
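
If you want to watch the new machines being provisioned, you can also list the underlying Cluster API Machine objects. This is a sketch assuming the Machine objects are reachable with the same kubeconfig used above:

CODE
kubectl --kubeconfig=${CLUSTER_NAME}.conf get machines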

Scaling Down Node Pools

To scale down a node pool, run:

CODE
dkp scale nodepools ${NODEPOOL_NAME} --replicas=4 --cluster-name=${CLUSTER_NAME}

Your output should be similar to this example, indicating the scaling is in progress:

CODE
✓ Scaling node pool example to 4 replicas

After a few minutes, you can list the node pools using this command:

CODE
dkp get nodepools --cluster-name=${CLUSTER_NAME} --kubeconfig=${CLUSTER_NAME}.conf

Your output should be similar to this example, with the number of DESIRED and READY replicas decreased to 4:

CODE
NODEPOOL                           DESIRED               READY               KUBERNETES VERSION               
example                            4                     4                   v1.24.6                          
aws-example-md-0                   4                     4                   v1.24.6
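
To confirm that the corresponding node has actually been removed from the cluster, you can list the Kubernetes nodes directly. This sketch assumes ${CLUSTER_NAME}.conf is the kubeconfig of the cluster whose node pool you scaled:

CODE
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes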

In a default cluster, the nodes to delete are selected at random. This behavior is controlled by CAPI’s delete policy. However, when using the DKP CLI to scale down a node pool, it is also possible to specify the Kubernetes Nodes you want to delete.

To do this, set the --nodes-to-delete flag to a list of node names, as shown below. This adds the annotation cluster.x-k8s.io/delete-machine=yes to each matching Machine object, that is, each Machine whose status.NodeRef refers to one of the node names passed to --nodes-to-delete.

CODE
dkp scale nodepools ${NODEPOOL_NAME} --replicas=3 --nodes-to-delete=<> --cluster-name=${CLUSTER_NAME}

Your output should be similar to this example:

CODE
✓ Scaling node pool example to 3 replicas
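
To verify which Machines were marked for deletion, you can check for the annotation directly. This is a sketch, not DKP output, and assumes the Machine objects are reachable with the same kubeconfig used above:

CODE
kubectl --kubeconfig=${CLUSTER_NAME}.conf get machines \
  -o custom-columns='NAME:.metadata.name,DELETE:.metadata.annotations.cluster\.x-k8s\.io/delete-machine'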

Scaling Node Pools When Using Cluster Autoscaler

If you configured the Cluster Autoscaler for the demo-cluster-md-0 node pool, the value of --replicas must be within the minimum and maximum bounds.

For example, assuming you have these annotations:

CODE
kubectl --kubeconfig=${CLUSTER_NAME}.conf annotate machinedeployment ${NODEPOOL_NAME} cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size=2
kubectl --kubeconfig=${CLUSTER_NAME}.conf annotate machinedeployment ${NODEPOOL_NAME} cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size=6
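
You can read the bounds back to confirm the annotations were applied. This sketch assumes, as above, that the MachineDeployment is named after the node pool:

CODE
kubectl --kubeconfig=${CLUSTER_NAME}.conf get machinedeployment ${NODEPOOL_NAME} -o jsonpath='{.metadata.annotations}'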

Now try to scale the node pool to 7 replicas with the command:

CODE
dkp scale nodepools ${NODEPOOL_NAME} --replicas=7 -c demo-cluster

The command fails with an error similar to:

CODE
✗ Scaling node pool example to 7 replicas
failed to scale nodepool: scaling MachineDeployment is forbidden: desired replicas 7 is greater than the configured max size annotation cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: 6

Similarly, scaling down to fewer replicas than the configured min-size also returns an error.
