Create and Delete Pre-provisioned Node Pools
Node pools are part of a cluster and let you manage a group of machines with common properties. New default clusters created by Konvoy contain one node pool of worker nodes that share the same configuration.
You can create additional node pools for specialized hardware or other configurations. For example, if you want to tune your memory usage on a cluster where you need maximum memory for some machines and minimal memory on others, you could create a new node pool with those specific resource needs.
NOTE: Konvoy implements node pools using Cluster API MachineDeployments.
Create a Pre-provisioned node pool
Follow these steps:
Create an inventory object with the same name as the node pool you're creating, containing the details of the pre-provisioned machines that you want to add to it. For example, to create a node pool named `gpu-nodepool`, an inventory named `gpu-nodepool` must be present in the same namespace:

```yaml
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: ${MY_NODEPOOL_NAME}
spec:
  hosts:
    - address: ${IP_OF_NODE}
  sshConfig:
    port: 22
    user: ${SSH_USERNAME}
    privateKeyRef:
      name: ${NAME_OF_SSH_SECRET}
      namespace: ${NAMESPACE_OF_SSH_SECRET}
```
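As a concrete illustration, a filled-in inventory for a hypothetical `gpu-nodepool` might look like the following. The addresses, SSH user, and secret name are placeholders for this sketch, not values from your environment:

```yaml
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: gpu-nodepool        # must match the node pool name you will create
  namespace: default        # same namespace as the node pool
spec:
  hosts:
    - address: 10.0.0.21    # hypothetical machine IPs
    - address: 10.0.0.22
  sshConfig:
    port: 22
    user: konvoy            # hypothetical SSH user on the machines
    privateKeyRef:
      name: gpu-nodepool-ssh-key   # hypothetical secret holding the SSH private key
      namespace: default
```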
(Optional) If your pre-provisioned machines have overrides, you must create a secret that includes all of the overrides you want to provide in one file. Create an override secret using the instructions detailed on this page.
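As a sketch of what such a secret can look like, the manifest below assumes the overrides are provided under a single key (shown here as `overrides.yaml`); the secret name and contents are placeholders, and the exact keys your machines need are described in the overrides instructions referenced above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ${MY_OVERRIDE_SECRET}
  namespace: default
stringData:
  overrides.yaml: |
    # hypothetical override contents for your pre-provisioned machines
```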
Once the `PreprovisionedInventory` object and overrides are created, create a node pool:

```bash
dkp create nodepool preprovisioned -c ${MY_CLUSTER_NAME} ${MY_NODEPOOL_NAME} --override-secret-name ${MY_OVERRIDE_SECRET}
```
Advanced users can use a combination of the `--dry-run` and `--output=yaml` or `--output-directory=<existing-directory>` flags to get a complete set of node pool objects to modify locally or store in version control.
For more information about these and other flags, refer to the `dkp create nodepool` section of the documentation and select your provider.
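Putting the dry-run flags together, the workflow might look like the following sketch; the cluster name, node pool name, and secret name are hypothetical:

```shell
# Write the generated node pool objects to a local directory instead
# of applying them to the cluster (all names here are hypothetical).
mkdir -p manifests
dkp create nodepool preprovisioned -c my-cluster gpu-nodepool \
  --override-secret-name my-overrides \
  --dry-run --output-directory=manifests

# Review or edit the generated manifests, store them in version
# control, then apply them when ready:
kubectl apply -f manifests/
```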
Delete a node pool
Deleting a node pool deletes both the Kubernetes nodes and their underlying infrastructure. DKP drains all nodes prior to deletion and reschedules the pods running on those nodes.
To delete a node pool from a managed cluster, run the following command:

```bash
dkp delete nodepool ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME}
```
The expected output is similar to the following, indicating that the example node pool is being deleted:

```
INFO[2021-07-28T17:14:26-07:00] Running nodepool delete command Nodepool=example clusterName=d2iq-e2e-cluster-1 managementClusterKubeconfig= namespace=default src="nodepool/delete.go:80"
```
Deleting an invalid node pool results in output similar to the following:

```bash
dkp delete nodepool ${CLUSTER_NAME}-md-invalid --cluster-name=${CLUSTER_NAME}
```

```
INFO[2021-07-28T17:11:44-07:00] Running nodepool delete command Nodepool=demo-cluster-md-invalid clusterName=d2iq-e2e-cluster-1 managementClusterKubeconfig= namespace=default src="nodepool/delete.go:80"
Error: failed to get nodepool with name demo-cluster-md-invalid in namespace default : failed to get nodepool with name demo-cluster-md-invalid in namespace default : machinedeployments.cluster.x-k8s.io "demo-cluster-md-invalid" not found
```