Post Conversion Cleanup: Cluster Autoscaler Configuration
Follow the steps on this page to ensure that the Cluster Autoscaler can function properly after converting your DKP Essential cluster to a DKP Enterprise Managed cluster.
After converting your cluster from Essential to Enterprise, the Cluster Lifecycle Management responsibilities are moved to a single Management cluster.
The Cluster Autoscaler feature also depends on the same Cluster Lifecycle Management components. If you are using the Cluster Autoscaler feature in DKP, you must perform the following steps for this feature to continue to work correctly:
Run the following commands in the Management cluster. For general guidelines on how to set the context, refer to Provide Context for Commands with a kubeconfig File.
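For example, if you have saved the Management cluster's kubeconfig to a local file, you can export it for the remaining commands. The file path below is illustrative only; use the actual location of your Management cluster kubeconfig:

```
# Illustrative only: point kubectl at the Management cluster for the commands below.
export KUBECONFIG=${HOME}/management-cluster.conf

# Confirm you are talking to the intended cluster before applying any resources.
kubectl config current-context
```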
Set the following environment variables with your cluster's details:
```
export CLUSTER_NAME=<your-cluster-name>
export WORKSPACE_NAMESPACE=<your-workspace-namespace>
```
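If you are unsure of these values, one way to look them up is to list the Cluster API Cluster objects on the Management cluster; the cluster name appears in the NAME column and the workspace namespace in the NAMESPACE column. The export values below are hypothetical examples, not values from your environment:

```
# List Cluster API Cluster objects to find the cluster name and its workspace namespace.
kubectl get clusters --all-namespaces

export CLUSTER_NAME=my-attached-cluster          # hypothetical example value
export WORKSPACE_NAMESPACE=my-workspace-abcde    # hypothetical example value
```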
Apply the Cluster Autoscaler Deployment and supporting resources:

```
cat <<EOF | kubectl apply -f -
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: cluster-autoscaler-${CLUSTER_NAME}
  name: cluster-autoscaler-${CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler-${CLUSTER_NAME}
  template:
    metadata:
      labels:
        app: cluster-autoscaler-${CLUSTER_NAME}
    spec:
      containers:
      - args:
        - --cloud-provider=clusterapi
        - --node-group-auto-discovery=clusterapi:clusterName=${CLUSTER_NAME}
        - --kubeconfig=/workload-cluster/kubeconfig
        - --clusterapi-cloud-config-authoritative
        - -v5
        command:
        - /cluster-autoscaler
        image: us.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:v1.25.0
        name: cluster-autoscaler
        volumeMounts:
        - mountPath: /workload-cluster
          name: kubeconfig
          readOnly: true
      serviceAccountName: cluster-autoscaler-${CLUSTER_NAME}
      terminationGracePeriodSeconds: 10
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
      volumes:
      - name: kubeconfig
        secret:
          items:
          - key: value
            path: kubeconfig
          secretName: ${CLUSTER_NAME}-kubeconfig
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler-management-${CLUSTER_NAME}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler-management-${CLUSTER_NAME}
subjects:
- kind: ServiceAccount
  name: cluster-autoscaler-${CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-autoscaler-${CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler-management-${CLUSTER_NAME}
rules:
- apiGroups:
  - cluster.x-k8s.io
  resources:
  - machinedeployments
  - machinedeployments/scale
  - machines
  - machinesets
  verbs:
  - get
  - list
  - update
  - watch
EOF
```
Verify the output is similar to the following:
```
deployment.apps/cluster-autoscaler-<cluster-name> created
clusterrolebinding.rbac.authorization.k8s.io/cluster-autoscaler-management-<cluster-name> created
serviceaccount/cluster-autoscaler-<cluster-name> created
clusterrole.rbac.authorization.k8s.io/cluster-autoscaler-management-<cluster-name> created
```
To check that the status of the deployment has the expected AVAILABLE count of 1, run the following command and verify that the output is similar:

```
$ kubectl get deployment -n $WORKSPACE_NAMESPACE cluster-autoscaler-$CLUSTER_NAME
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
cluster-autoscaler-<cluster-name>   1/1     1            1           1m
```
Next Step
Post Conversion Cleanup: Clusters run on Different Cloud Platforms