DKP Enterprise Upgrade
Upgrade your Konvoy environment within the DKP Enterprise license.
Prerequisites
Create an on-demand backup of your current configuration with Velero.
Follow the steps listed in the DKP upgrade overview.
Check which version of DKP you currently have installed by using the CLI command dkp version.
Ensure that all platform applications in the management cluster have been upgraded to avoid compatibility issues with the Kubernetes version included in this release. Platform applications are upgraded automatically when you upgrade Kommander, so upgrade Kommander before upgrading Konvoy.
For air-gapped environments: download the required bundles either from our support site or by using the CLI.
For Azure, set the required environment variables.
For AWS and EKS, set the required environment variables.
For vSphere, set the required environment variables. (A hedged sketch of the backup and environment-variable steps appears after this list.)
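The exact steps and variable names live on each provider's DKP page; the following is a hedged sketch only, using the standard Velero CLI, the standard AWS and Azure SDK credential variables, and the vSphere variables commonly used with Cluster API. The backup name and all values are placeholders:
# Hypothetical on-demand backup name:
velero backup create pre-upgrade-backup

# AWS / EKS (standard AWS SDK credential variables):
export AWS_ACCESS_KEY_ID=<key-id>
export AWS_SECRET_ACCESS_KEY=<secret-key>

# Azure (standard Azure service principal variables):
export AZURE_CLIENT_ID=<client-id>
export AZURE_CLIENT_SECRET=<client-secret>
export AZURE_TENANT_ID=<tenant-id>
export AZURE_SUBSCRIPTION_ID=<subscription-id>

# vSphere:
export VSPHERE_SERVER=<vcenter-address>
export VSPHERE_USERNAME=<username>
export VSPHERE_PASSWORD=<password>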
The following infrastructure environments are supported:
Amazon Web Services (AWS)
Microsoft Azure
Pre-provisioned environments
vSphere
EKS
Overview
To upgrade Konvoy for DKP Enterprise:
Upgrade the Cluster API (CAPI) components
Upgrade the core addons
Upgrade the Kubernetes version
Run all three steps on the management cluster (Kommander cluster) first. Then, run the second and third steps on additional managed clusters (Konvoy clusters), one cluster at a time, using the KUBECONFIG for the management cluster and specifying the name of the managed cluster(s) to upgrade.
For a full list of DKP Enterprise features, see DKP Enterprise.
For pre-provisioned air-gapped environments, you must run konvoy-image upload artifacts. You must maintain your attached clusters manually; review the documentation from your cloud provider for more information.
Upgrade the CAPI components
New versions of DKP come pre-bundled with newer versions of CAPI, newer versions of infrastructure providers, or new infrastructure providers. When using a new version of the DKP CLI, upgrade all of these components first.
If you are running on more than one management cluster (Kommander cluster), you must upgrade the CAPI components on each of these clusters.
Ensure your dkp configuration references the management cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.
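For example, assuming the management cluster's kubeconfig was saved to a file named management-cluster.conf (a hypothetical filename):
export KUBECONFIG=management-cluster.conf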
Run the following upgrade command for the CAPI components.
dkp upgrade capi-components
The output resembles the following:
✓ Upgrading CAPI components
✓ Waiting for CAPI components to be upgraded
✓ Initializing new CAPI components
✓ Deleting Outdated Global ClusterResourceSets
If the upgrade fails, review the prerequisites section and ensure that you’ve followed the steps in the DKP upgrade overview.
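If you need to dig further, one hedged starting point is to check that the CAPI controller deployments are healthy. The namespaces below follow upstream Cluster API conventions and are an assumption, not DKP-specific guidance:
# Upstream Cluster API controller namespaces (assumed):
kubectl get deployments -n capi-system
kubectl get deployments -n capi-kubeadm-bootstrap-system
kubectl get deployments -n capi-kubeadm-control-plane-system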
Upgrade the core addons
To install the core addons, DKP relies on the ClusterResourceSet Cluster API feature. During the CAPI components upgrade, we deleted the previous set of outdated global ClusterResourceSets because, prior to DKP 2.2, some addons were installed using a global configuration. In order to support individual cluster upgrades, DKP 2.2 now installs all addons with a unique set of ClusterResourceSet objects and corresponding referenced resources, all named using the cluster's name as a suffix. For example: calico-cni-installation-my-aws-cluster.
If you have modified any of the ClusterResourceSet definitions, these changes are not preserved when you run the dkp upgrade addons command. You must use the --dry-run -o yaml options to save the new configuration to a file, and then reapply the same changes upon each upgrade.
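For example, to save the generated addon resources for an AWS cluster to a file before re-applying your modifications (the output file name is arbitrary):
dkp upgrade addons aws --cluster-name=${CLUSTER_NAME} --dry-run -o yaml > addons.yaml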
Your cluster comes preconfigured with a few different core addons that provide functionality to your cluster upon creation. These include: CSI, CNI, Cluster Autoscaler, and Node Feature Discovery. New versions of DKP may come pre-bundled with newer versions of these addons. Perform the following steps to update these addons. If you have any additional managed clusters, you will need to upgrade the core addons and Kubernetes version for each one.
Ensure your dkp configuration references the management cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.
Upgrade the core addons in a cluster using the dkp upgrade addons command, specifying the cluster infrastructure (choose aws, azure, vsphere, eks, or preprovisioned) and the name of the cluster.
Examples:
export CLUSTER_NAME=my-azure-cluster
dkp upgrade addons azure --cluster-name=${CLUSTER_NAME}
OR
export CLUSTER_NAME=my-aws-cluster
dkp upgrade addons aws --cluster-name=${CLUSTER_NAME}
The output for the AWS example should be similar to:
Generating addon resources
clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-my-aws-cluster upgraded
configmap/calico-cni-installation-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/tigera-operator-my-aws-cluster upgraded
configmap/tigera-operator-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/aws-ebs-csi-my-aws-cluster upgraded
configmap/aws-ebs-csi-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-my-aws-cluster upgraded
configmap/cluster-autoscaler-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-my-aws-cluster upgraded
configmap/node-feature-discovery-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-my-aws-cluster upgraded
configmap/nvidia-feature-discovery-my-aws-cluster upgraded
Once complete, begin upgrading the Kubernetes version.
Upgrade the Kubernetes version
When upgrading the Kubernetes version of a cluster, first upgrade the control plane and then the node pools. If you have any additional managed or attached clusters, you will need to upgrade the core addons and Kubernetes version for each one.
Build a new image if applicable.
If an AMI was specified when initially creating a cluster for AWS, you must build a new one with Konvoy Image Builder.
If an Azure Machine Image was specified for Azure, you must build a new one with Konvoy Image Builder.
If a vSphere template image was specified for vSphere, you must build a new one with Konvoy Image Builder. (A hedged Konvoy Image Builder sketch follows this list.)
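As a hedged illustration only, a Konvoy Image Builder invocation takes an image manifest for your OS and provider; the manifest path below is a placeholder, so use the one from the Konvoy Image Builder documentation:
# Placeholder manifest path; substitute the manifest for your OS and provider:
konvoy-image build <path-to-image-manifest>.yaml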
Upgrade the Kubernetes version of the control plane. Each cloud provider has its own command. Below is the AWS example; select the drop-down menu next to your provider for the matching CLI command.
dkp update controlplane aws --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.23.12
The output should be similar to the below example, with the provider name corresponding to the one you entered in the command line above:
Updating control plane resource controlplane.cluster.x-k8s.io/v1beta1, Kind=KubeadmControlPlane default/my-aws-cluster-control-plane
Waiting for control plane update to finish.
✓ Updating the control plane
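To watch the control plane roll over to the new version, you can query the underlying Cluster API resource directly; KubeadmControlPlane is the upstream CAPI kind shown in the output above, and the default namespace matches that output:
# Shows the desired and current Kubernetes version of the control plane:
kubectl get kubeadmcontrolplane -n default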
If upgrading a FIPS cluster, note that there is a bug where the kube-proxy DaemonSet is not automatically upgraded. To upgrade it correctly, run the command shown below:
kubectl set image -n kube-system daemonset.v1.apps/kube-proxy kube-proxy=docker.io/mesosphere/kube-proxy:v1.23.12_fips.0
Upgrade the Kubernetes version of your node pools. Upgrading a node pool involves draining the existing nodes in the node pool and replacing them with new nodes. To ensure minimum downtime and maintain high availability of critical application workloads during the upgrade process, we recommend deploying a Pod Disruption Budget for your critical applications (a hedged example follows). For more information, refer to the Update Cluster Nodepools documentation.
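As a hedged sketch, a Pod Disruption Budget for a hypothetical critical application labeled app=critical-app could be created imperatively with kubectl; adjust the name, selector, and minimum availability to your workload:
# Keep at least one replica of the (hypothetical) critical app running during node drains:
kubectl create poddisruptionbudget critical-app-pdb --selector=app=critical-app --min-available=1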
First, get a list of all node pools available in your cluster by running the following command:
dkp get nodepool --cluster-name ${CLUSTER_NAME}
Select the nodepool you want to upgrade with the command below:
export NODEPOOL_NAME=my-nodepool
Then update the selected node pool using the command below. The first example shows the AWS command; select the drop-down menu for your provider for the correct command. Execute the update command for each of the node pools listed by the previous command.
dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.23.12
The output should be similar to the following, with the name of the infrastructure provider shown accordingly:
Updating node pool resource cluster.x-k8s.io/v1beta1, Kind=MachineDeployment default/my-aws-cluster-my-nodepool
Waiting for node pool update to finish.
✓ Updating the my-aws-cluster-my-nodepool node pool
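To watch the replacement machines come up, you can query the underlying Cluster API MachineDeployment; this is the upstream CAPI kind named in the output above, and the default namespace matches that output:
# Shows the rollout status of each node pool's MachineDeployment:
kubectl get machinedeployments -n default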
NOTE: Some advanced options are available for various providers, such as the AWS AMI and instance type flags: --ami, --instance-type. To see all the options for your particular provider, run dkp update controlplane aws|vsphere|preprovisioned|azure|eks --help.
Repeat this step for each additional node pool. Once all node pools have been updated, your upgrade is complete. For the overall process for upgrading to the latest version of DKP, refer back to DKP Upgrade.
Upgrade Managed Clusters
If you have managed clusters, follow these steps to upgrade each cluster:
Using the kubeconfig of your management cluster, list your clusters and note the name and workspace namespace of each cluster you want to upgrade:
kubectl get clusters -A
Set your cluster variable:
export CLUSTER_NAME=<your-managed-cluster-name>
Set your cluster's workspace variable:
export CLUSTER_WORKSPACE=<your-workspace-namespace>
Then, upgrade the core addons (replacing aws with the infrastructure provider you are using):
dkp upgrade addons aws --cluster-name=${CLUSTER_NAME} -n ${CLUSTER_WORKSPACE}
Check to see if you have any cluster resource sets that need to be cleaned up:
kubectl get clusterresourcesets -n ${CLUSTER_WORKSPACE}
Delete the ClusterResourceSet for nvidia-feature-discovery by running the following command, using nvidia-feature-discovery-${CLUSTER_NAME} as the name:
kubectl delete clusterresourceset nvidia-feature-discovery-${CLUSTER_NAME} -n ${CLUSTER_WORKSPACE}
Delete the ConfigMap that the ClusterResourceSet refers to by running the following command, ensuring you use nvidia-feature-discovery-${CLUSTER_NAME}. If there is no related ConfigMap, you can move on to the next step.
kubectl delete configmap nvidia-feature-discovery-${CLUSTER_NAME} -n ${CLUSTER_WORKSPACE}
Get the kubeconfig for the managed cluster by running:
dkp get kubeconfig -c ${CLUSTER_NAME} -n ${CLUSTER_WORKSPACE} >> ${CLUSTER_NAME}.conf
Delete the corresponding DaemonSet on the remote cluster by running the following command. If there is no related DaemonSet, you can move on to the next step.
kubectl --kubeconfig=${CLUSTER_NAME}.conf delete daemonset nvidia-feature-discovery-gpu-feature-discovery -n node-feature-discovery
Upgrade Kubernetes Version on a Managed Cluster
Complete the upgrade of your Kommander Management Cluster before upgrading any managed clusters. Then, after you have completed the previous steps for all managed clusters and updated the core addons, begin upgrading the Kubernetes version.
Use this command to start upgrading the Kubernetes version:
dkp update controlplane aws --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.23.12 -n ${CLUSTER_WORKSPACE}
Get a list of all node pools available in your cluster by running the following command:
dkp get nodepools -c ${CLUSTER_NAME} -n ${CLUSTER_WORKSPACE}
Select the node pool you want to upgrade:
export NODEPOOL_NAME=<my-nodepool>
Use this command to upgrade the node pools:
dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.23.12 -n ${CLUSTER_WORKSPACE}