DKP Essential Upgrade
Upgrade your Konvoy environment within the DKP Essential License.
Prerequisites
Create an on-demand backup of your current configuration with Velero.
Follow the steps listed in the DKP upgrade overview.
Check which version of DKP you currently have downloaded by using the CLI command dkp version.
Ensure that all platform applications in the management cluster are upgraded to avoid compatibility issues with the Kubernetes version included in this release. This is done automatically when upgrading Kommander, so ensure that you upgrade Kommander prior to upgrading Konvoy.
Set the appropriate environment variables:
For air-gapped: Download the required bundles either from our support site or by using the CLI.
For Azure, set the required environment variables.
For AWS, set the required environment variables.
For vSphere, set the required environment variables.
For GCP, set the required environment variables.
vSphere only: If you want to resize your disk, ensure you have reviewed Create a Base OS Image.
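For illustration, an AWS upgrade session might export variables along these lines. The names and values below are placeholders, not an authoritative list; consult the environment variable documentation linked above for your infrastructure.

```shell
# Placeholder values for an AWS upgrade session; substitute your own.
export AWS_REGION=us-west-2         # region hosting the management cluster (assumption)
export AWS_PROFILE=dkp-admin        # CLI profile with permission to upgrade (assumption)
export CLUSTER_NAME=my-aws-cluster  # the name the cluster was created with
```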
Overview
To upgrade Konvoy for DKP Essential:
Upgrade the Cluster API (CAPI) components.
Upgrade the core addons.
Upgrade the Kubernetes version.
If you have more than one Essential cluster, repeat all of these steps for each Essential cluster (management cluster).
For pre-provisioned air-gapped environments, you must run konvoy-image upload artifacts before beginning the Upgrade the CAPI Components section below.
For a full list of DKP Essential features, see DKP Essential.
Upgrade the CAPI components
New versions of DKP come pre-bundled with newer versions of CAPI, newer versions of infrastructure providers, or new infrastructure providers. When using a new version of the DKP CLI, upgrade all of these components first.
Ensure your dkp configuration references the management cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.
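A minimal sketch of both approaches follows; the kubeconfig path is a hypothetical example, not a path the product mandates.

```shell
# Option 1: export the variable for the whole session (path is an example).
export KUBECONFIG=${HOME}/.kube/management-cluster.conf

# Option 2: pass the file to a single command instead (shown as a comment):
# dkp upgrade capi-components --kubeconfig=${HOME}/.kube/management-cluster.conf
```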
Run the following upgrade command for the CAPI components:
If you created CAPI components using flags to specify values, use those same flags during the upgrade to preserve existing values while setting additional values.
Refer to dkp create cluster for descriptions of the --with-aws-bootstrap-credentials and --aws-service-endpoints flags.
For details on the HTTP settings, refer to Universal Configurations for all Infrastructures.
dkp upgrade capi-components
The command should output something similar to the following:
✓ Upgrading CAPI components
✓ Waiting for CAPI components to be upgraded
✓ Initializing new CAPI components
✓ Deleting Outdated Global ClusterResourceSets
If the upgrade fails, review the prerequisites section and ensure that you’ve followed the steps in the DKP upgrade overview.
Upgrade the Core Addons
To install the core addons, DKP relies on the ClusterResourceSet Cluster API feature. In the CAPI component upgrade, we deleted the previous set of outdated global ClusterResourceSets, because prior to DKP 2.4 some addons were installed using a global configuration. In order to support individual cluster upgrades, DKP 2.4 now installs all addons with a unique set of ClusterResourceSets and corresponding referenced resources, all named using the cluster's name as a suffix. For example: calico-cni-installation-my-aws-cluster.
If you modify any of the ClusterResourceSet definitions, these changes are not preserved when running the command dkp upgrade addons. You must use the --dry-run -o yaml options to save the new configuration to a file and reapply the same changes upon each upgrade.
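For example, the rendered configuration could be captured before applying the upgrade. The cluster name and output file name below are placeholders; the dkp invocation is shown as a comment so this sketch runs without the CLI installed.

```shell
export CLUSTER_NAME=my-aws-cluster  # placeholder cluster name
# Render the new addon resources to a file without applying them;
# drop --dry-run -o yaml and the redirection to perform the real upgrade:
# dkp upgrade addons aws --cluster-name=${CLUSTER_NAME} --dry-run -o yaml > addons-${CLUSTER_NAME}.yaml
```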
Your cluster comes preconfigured with a few different core addons that provide functionality to your cluster upon creation. These include: CSI, CNI, Cluster Autoscaler, and Node Feature Discovery. New versions of DKP may come pre-bundled with newer versions of these addons.
If you have more than one Essential cluster, ensure your dkp configuration references the management cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.
Perform the following steps to update your addons.
For any additional managed clusters, you will need to upgrade the core addons and Kubernetes version for each one.
1. Ensure your dkp configuration references the management cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.
2. Upgrade the core addons in a cluster using the dkp upgrade addons command, specifying the cluster infrastructure (choose aws, azure, vsphere, eks, gcp, or preprovisioned) and the name of the cluster.
3. Run the command for your infrastructure. Examples of upgrade core addons commands:
export CLUSTER_NAME=my-azure-cluster
dkp upgrade addons azure --cluster-name=${CLUSTER_NAME}
OR
export CLUSTER_NAME=my-aws-cluster
dkp upgrade addons aws --cluster-name=${CLUSTER_NAME}
The output for the AWS example should be similar to:
Generating addon resources
clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-my-aws-cluster upgraded
configmap/calico-cni-installation-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/tigera-operator-my-aws-cluster upgraded
configmap/tigera-operator-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/aws-ebs-csi-my-aws-cluster upgraded
configmap/aws-ebs-csi-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-my-aws-cluster upgraded
configmap/cluster-autoscaler-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-my-aws-cluster upgraded
configmap/node-feature-discovery-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-my-aws-cluster upgraded
configmap/nvidia-feature-discovery-my-aws-cluster upgraded
4. After addons are upgraded, you must remove deprecated addons. In DKP version 2.4, the nvidia-feature-discovery addon is no longer shipped on new clusters; it is instead handled by the nvidia-gpu-operator Kommander component. To perform the removal, execute the following steps:
a. List all cluster resource sets by running:
kubectl get clusterresourcesets
NAME AGE
aws-ebs-csi-my-aws-cluster 7m46s
calico-cni-installation-my-aws-cluster 7m46s
cluster-autoscaler-my-aws-cluster 7m46s
node-feature-discovery-my-aws-cluster 7m46s
nvidia-feature-discovery-my-aws-cluster 7m46s
b. Delete the ClusterResourceSet for nvidia-feature-discovery by running:
kubectl delete clusterresourceset nvidia-feature-discovery-my-aws-cluster
c. Delete the ConfigMap referenced by that ClusterResourceSet, which is always named nvidia-feature-discovery-${CLUSTER_NAME}, by running:
kubectl delete configmap nvidia-feature-discovery-my-aws-cluster
d. Get the kubeconfig for the cluster by running:
dkp get kubeconfig -c my-aws-cluster >> my-aws-cluster.conf
e. Delete the corresponding daemonset on the remote cluster by running:
kubectl --kubeconfig=my-aws-cluster.conf delete daemonset nvidia-feature-discovery-gpu-feature-discovery -n node-feature-discovery
Now that you have completed updating your core addons, begin upgrading the Kubernetes version in the section below.
See also DKP upgrade addons
Upgrade the Kubernetes version
When upgrading the Kubernetes version of a cluster, first upgrade the control plane and then the node pools.
If you have any additional managed or attached clusters, you need to upgrade the core addons and Kubernetes version for each one.
1. Build a new image, if applicable:
If an AMI was specified when initially creating a cluster for AWS, you must build a new one with Konvoy Image Builder.
If an Azure Machine Image was specified for Azure, you must build a new one with Konvoy Image Builder.
If a vSphere template image was specified for vSphere, you must build a new one with Konvoy Image Builder.
2. Upgrade the Kubernetes version of the control plane. Each cloud provider has its own command. The example below is for AWS; select the drop-down menu next to your provider for the correct CLI command.
NOTE: If you created your initial cluster with a custom AMI using the --ami flag, you must set the --ami flag during the Kubernetes upgrade.
dkp update controlplane aws --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6
NOTE: Some advanced options are available for various providers, for example the AWS AMI and instance type flags: --ami, --instance-type. To see all the options for your provider, run dkp update controlplane aws|vsphere|preprovisioned|azure --help.
The output should be similar to the following, with the provider name corresponding to the CLI command you executed:
Updating control plane resource controlplane.cluster.x-k8s.io/v1beta1, Kind=KubeadmControlPlane default/my-aws-cluster-control-plane
Waiting for control plane update to finish.
✓ Updating the control plane
If upgrading a FIPS cluster, be aware of a bug in the upgrade of the kube-proxy DaemonSet: it does not get automatically upgraded. After updating the control plane, execute the following command to finish the kube-proxy upgrade:
kubectl set image -n kube-system daemonset.v1.apps/kube-proxy kube-proxy=docker.io/mesosphere/kube-proxy:v1.24.4_fips.0
3. Upgrade the Kubernetes version of your node pools. Upgrading a node pool involves draining the existing nodes in the node pool and replacing them with new nodes. To ensure minimal downtime and maintain high availability of critical application workloads during the upgrade process, we recommend deploying a Pod Disruption Budget (see Disruptions) for your critical applications. For more information, refer to the Update Cluster Nodepools documentation.
a. First, get a list of all node pools available in your cluster by running the following command:
dkp get nodepool --cluster-name ${CLUSTER_NAME}
b. Select the nodepool you want to upgrade with the command below:
export NODEPOOL_NAME=my-nodepool
c. Then update the selected node pool using the command below. The first example shows the AWS command; select the drop-down menu for your provider for the correct command. Execute the update command for each of the node pools listed by the previous command.
NOTE: If you created your initial cluster with a custom AMI using the --ami flag, you must set the --ami flag during the Kubernetes upgrade.
dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6
The output should be similar to the following, with the name of the infrastructure provider shown accordingly:
Updating node pool resource cluster.x-k8s.io/v1beta1, Kind=MachineDeployment default/my-aws-cluster-my-nodepool
Waiting for node pool update to finish.
✓ Updating the my-aws-cluster-my-nodepool node pool
4. Repeat the previous step for each additional node pool.
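As recommended in step 3, a Pod Disruption Budget helps keep critical workloads available while node pools drain. A minimal sketch follows; the application label, budget name, and replica count are placeholders for your own workload, and the kubectl invocation is shown as a comment.

```shell
# Write a minimal PodDisruptionBudget manifest; the selector and
# minAvailable count are placeholders for your critical application.
cat > critical-app-pdb.yaml <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: critical-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: critical-app
EOF
# Apply it to the workload cluster before upgrading its node pools:
# kubectl --kubeconfig=my-aws-cluster.conf apply -f critical-app-pdb.yaml
```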
When all nodepools have been updated, your upgrade is complete. For the overall process for upgrading to the latest version of DKP, refer back to DKP Upgrade for more details.