
DKP Essential Upgrade

Upgrade your Konvoy environment within the DKP Essential License.

Prerequisites

Overview

To upgrade Konvoy for DKP Essential:

  1. Upgrade the Cluster API (CAPI) components.

  2. Upgrade the core addons.

  3. Upgrade the Kubernetes version.

If you have more than one Essential cluster, repeat all of these steps for each Essential cluster (management cluster).

  • For pre-provisioned air-gapped environments, you must run konvoy-image upload artifacts before beginning the Upgrade the CAPI Components section below.

  • For a full list of DKP Essential features, see DKP Essential.

Upgrade the CAPI components

New versions of DKP come pre-bundled with newer versions of CAPI, newer versions of infrastructure providers, or new infrastructure providers. When using a new version of the DKP CLI, upgrade all of these components first.

Ensure your dkp configuration references the management cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.
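
For example (the kubeconfig path below is illustrative):

CODE
# Point the dkp CLI at the management cluster; substitute your own kubeconfig path.
export KUBECONFIG=${HOME}/my-management-cluster.conf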

Run the following upgrade command for the CAPI components:

If you created the CAPI components using flags to specify values, use those same flags during the upgrade to preserve the existing values while setting any additional ones.

CODE
dkp upgrade capi-components

The command should output something similar to the following:

CODE
✓ Upgrading CAPI components
✓ Waiting for CAPI components to be upgraded
✓ Initializing new CAPI components
✓ Deleting Outdated Global ClusterResourceSets

If the upgrade fails, review the prerequisites section and ensure that you’ve followed the steps in the DKP upgrade overview.

Upgrade the Core Addons

To install the core addons, DKP relies on the ClusterResourceSet Cluster API feature. During the CAPI component upgrade, the previous set of outdated global ClusterResourceSets was deleted, because prior to DKP 2.4 some addons were installed using a global configuration. To support individual cluster upgrades, DKP 2.4 now installs all addons with a unique set of ClusterResourceSet objects and corresponding referenced resources, all named using the cluster's name as a suffix. For example: calico-cni-installation-my-aws-cluster.

If you modify any of the ClusterResourceSet definitions, these changes are not preserved when running the dkp upgrade addons command. You must use the --dry-run -o yaml options to save the new configuration to a file, and reapply the same changes upon each upgrade.
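
For example, to write the generated addon configuration to a file without applying it (the infrastructure, cluster name, and file name below are illustrative):

CODE
# Capture the generated addon resources so local modifications can be reapplied later.
dkp upgrade addons aws --cluster-name=${CLUSTER_NAME} --dry-run -o yaml > addons.yaml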

Your cluster comes preconfigured with a few different core addons that provide functionality to your cluster upon creation. These include: CSI, CNI, Cluster Autoscaler, and Node Feature Discovery. New versions of DKP may come pre-bundled with newer versions of these addons.

If you have more than one Essential cluster, ensure your dkp configuration references the management cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.

Perform the following steps to update your addons.

  1. If you have any additional managed clusters, you must upgrade the core addons and Kubernetes version for each one.

  2. Ensure your dkp configuration references the management cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.

  3. Upgrade the core addons in a cluster using the dkp upgrade addons command, specifying the cluster infrastructure (choose aws, azure, vsphere, eks, gcp, or preprovisioned) and the name of the cluster.

Additional Considerations for EKS
  • The Kubernetes version for EKS clusters supported on DKP 2.4 is now v1.23. As EKS has disabled support for in-tree EBS volume provisioning in favor of CSI Volumes, DKP automatically deploys a new EBS CSI driver to your EKS cluster when you run the dkp upgrade addons command.

  • Before running the dkp upgrade addons command when deploying EKS clusters, you must add the necessary IAM policy to your worker instances. If you are using the default IAM instance profile name, run the following command:
    aws iam attach-role-policy --role-name nodes.cluster-api-provider-aws.sigs.k8s.io --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
    If you have customized your AWSMachineTemplate to use a different instance profile, review and add the policy to that profile, as in the example below.
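
For example (the role name below is a placeholder for the role attached to your custom instance profile):

CODE
# Substitute the role used by your customized AWSMachineTemplate instance profile.
aws iam attach-role-policy --role-name <custom-worker-role-name> --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy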

 

Example commands for upgrading the core addons:

CODE
export CLUSTER_NAME=my-azure-cluster
dkp upgrade addons azure --cluster-name=${CLUSTER_NAME}

OR

CODE
export CLUSTER_NAME=my-aws-cluster
dkp upgrade addons aws --cluster-name=${CLUSTER_NAME}

The output for the AWS example should be similar to:

CODE
Generating addon resources
clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-my-aws-cluster upgraded
configmap/calico-cni-installation-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/tigera-operator-my-aws-cluster upgraded
configmap/tigera-operator-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/aws-ebs-csi-my-aws-cluster upgraded
configmap/aws-ebs-csi-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-my-aws-cluster upgraded
configmap/cluster-autoscaler-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-my-aws-cluster upgraded
configmap/node-feature-discovery-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-my-aws-cluster upgraded
configmap/nvidia-feature-discovery-my-aws-cluster upgraded

4. After the addons are upgraded, you must remove deprecated addons. In DKP version 2.4, the nvidia-feature-discovery addon is no longer shipped on new clusters; it is instead handled by the nvidia-gpu-operator Kommander component. To remove it, perform the following steps:

a. List all cluster resource sets by running:

CODE
kubectl get clusterresourcesets
CODE
NAME                                                         AGE
aws-ebs-csi-my-aws-cluster                                   7m46s
calico-cni-installation-my-aws-cluster                       7m46s
cluster-autoscaler-my-aws-cluster                            7m46s
node-feature-discovery-my-aws-cluster                        7m46s
nvidia-feature-discovery-my-aws-cluster                      7m46s

b. Delete the ClusterResourceSet for nvidia-feature-discovery by running:

CODE
kubectl delete clusterresourceset nvidia-feature-discovery-my-aws-cluster

c. Delete the ConfigMap that the ClusterResourceSet refers to, which is always named nvidia-feature-discovery-${CLUSTER_NAME}, by running:

CODE
kubectl delete configmap nvidia-feature-discovery-my-aws-cluster

d. Get the kubeconfig for the cluster by running:

CODE
dkp get kubeconfig -c my-aws-cluster >> my-aws-cluster.conf

e. Delete the corresponding daemonset on the remote cluster by running:

CODE
kubectl --kubeconfig=my-aws-cluster.conf delete daemonset nvidia-feature-discovery-gpu-feature-discovery -n node-feature-discovery
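
Optionally, verify that the DaemonSet has been removed (a generic check, not part of the original steps):

CODE
# Should no longer list the nvidia-feature-discovery DaemonSet.
kubectl --kubeconfig=my-aws-cluster.conf get daemonsets -n node-feature-discovery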

Now that you have finished updating your core addons, begin upgrading the Kubernetes version in the section below.

See also DKP upgrade addons

Upgrade the Kubernetes version

When upgrading the Kubernetes version of a cluster, first upgrade the control plane and then the node pools.

If you have any additional managed or attached clusters, you need to upgrade the core addons and Kubernetes version for each one.

  1. Build a new image if applicable.

    • If an AMI was specified when initially creating a cluster for AWS, you must build a new one with Konvoy Image Builder (see the example sketch after this list).

    • If an Azure Machine Image was specified for Azure, you must build a new one with Konvoy Image Builder.

    • If a vSphere template Image was specified for vSphere, you must build a new one with Konvoy Image Builder.
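
For example, a hypothetical Konvoy Image Builder invocation for AWS (the subcommand and image definition path vary by KIB version and OS; consult the Konvoy Image Builder documentation for the exact form):

CODE
# Illustrative only; substitute the image definition file for your OS.
konvoy-image build aws images/ami/<os-definition>.yaml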

  2. Upgrade the Kubernetes version of the control plane. Each cloud provider has its own command; the first example below is for AWS, and the commands for the other providers follow.
    NOTE: If you created your initial cluster with a custom AMI using the --ami flag, you must also set the --ami flag during the Kubernetes upgrade.

AWS
CODE
dkp update controlplane aws --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6
Azure
CODE
dkp update controlplane azure --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 --compute-gallery-id <Azure Compute Gallery built by KIB for Kubernetes v1.24.6>
vSphere
CODE
dkp update controlplane vsphere --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 --vm-template <vSphere template built by KIB for Kubernetes v1.24.6>
GCP
CODE
dkp update controlplane gcp --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 --image=projects/${GCP_PROJECT}/global/images/<GCP image built by KIB for Kubernetes v1.24.6>
Pre-provisioned
CODE
dkp update controlplane preprovisioned --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6
EKS
CODE
dkp update controlplane eks --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.23.12

Due to environment restrictions, EKS clusters display different Kubernetes patch version numbers upon upgrade completion, because EKS does not allow a patch version to be specified.

NOTE: Some advanced options are available for various providers, for example --ami and --instance-type on AWS. To see all the options for your particular provider, run dkp update controlplane aws|vsphere|preprovisioned|azure --help.

The output should be similar to the following, with the provider name corresponding to the CLI you executed:

CODE
Updating control plane resource controlplane.cluster.x-k8s.io/v1beta1, Kind=KubeadmControlPlane default/my-aws-cluster-control-plane
Waiting for control plane update to finish.
 ✓ Updating the control plane

If you are upgrading a FIPS cluster, a known bug prevents the kube-proxy DaemonSet from being upgraded automatically. After updating the control plane, execute the following command to finish the kube-proxy upgrade:

CODE
kubectl set image -n kube-system daemonset.v1.apps/kube-proxy kube-proxy=docker.io/mesosphere/kube-proxy:v1.24.4_fips.0

3. Upgrade the Kubernetes version of your node pools. Upgrading a node pool involves draining the existing nodes in the node pool and replacing them with new nodes. To ensure minimum downtime and maintain high availability of critical application workloads during the upgrade, we recommend deploying a Pod Disruption Budget for your critical applications, as in the sketch below. For more information, refer to the Update Cluster Nodepools documentation.
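
As a sketch, you can create a Pod Disruption Budget for a hypothetical application labeled app: critical-app (the name, namespace, labels, and kubeconfig path are all illustrative):

CODE
# Keep at least one replica of the critical app available while nodes drain.
kubectl --kubeconfig=my-aws-cluster.conf apply -f - <<EOF
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: critical-app-pdb
  namespace: default
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: critical-app
EOF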

a. First, get a list of all node pools available in your cluster by running the following command:

CODE
dkp get nodepool --cluster-name ${CLUSTER_NAME}

b. Select the nodepool you want to upgrade with the command below:

CODE
export NODEPOOL_NAME=my-nodepool

c. Then update the selected nodepool using the command below. The first example is for AWS; the commands for the other providers follow. Execute the update command for each of the node pools listed by the previous command.
NOTE: If you created your initial cluster with a custom AMI using the --ami flag, you must also set the --ami flag during the Kubernetes upgrade.

AWS
CODE
dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6
Azure
CODE
dkp update nodepool azure ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 --compute-gallery-id <Azure Compute Gallery built by KIB for Kubernetes v1.24.6>
vSphere
CODE
dkp update nodepool vsphere ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 --vm-template <vSphere template built by KIB for Kubernetes v1.24.6>
GCP
CODE
dkp update nodepool gcp ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 --image=projects/${GCP_PROJECT}/global/images/<GCP image built by KIB for Kubernetes v1.24.6>
Pre-provisioned
CODE
dkp update nodepool preprovisioned ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6
EKS
CODE
dkp update nodepool eks ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.23.12

The output should be similar to the following, with the name of the infrastructure provider shown accordingly:

CODE
Updating node pool resource cluster.x-k8s.io/v1beta1, Kind=MachineDeployment default/my-aws-cluster-my-nodepool
Waiting for node pool update to finish.
 ✓ Updating the my-aws-cluster-my-nodepool node pool

4. Repeat this step for each additional node pool.
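
When the updates finish, you can optionally confirm that every node reports the new Kubernetes version (a generic check using the workload cluster's kubeconfig; the file name is illustrative):

CODE
# Each node's VERSION column should show the upgraded Kubernetes version.
kubectl --kubeconfig=my-aws-cluster.conf get nodes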

Additional Considerations for upgrading a FIPS cluster:

To correctly upgrade the Kubernetes version of a FIPS cluster, run the following command instead:

CODE
dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6+fips.0 --ami=<ami-with-fips-id>

When all nodepools have been updated, your upgrade is complete. For the overall process for upgrading to the latest version of DKP, refer back to DKP Upgrade for more details.

