Cluster-scoped Configuration for Existing AppDeployments
Enable and customize an application on a per-cluster basis
This topic describes how to enable cluster-scoped configuration of applications for existing `AppDeployments`. For more information on how to create `AppDeployments`, refer to the Application Deployment section.
When you enable an application for a workspace, you deploy this application to all clusters within that workspace. You can also choose to enable or customize an application only on certain clusters within a workspace. This functionality lets you use DKP in multi-cluster scenarios while still managing all clusters of a workspace from a single place.
Your DKP cluster comes bundled with a set of default application configurations. If you want to override the default configuration of your applications, you can define workspace `configOverrides` on top of the default workspace configuration. And if you want to further customize your workspace by enabling applications on a per-cluster basis or by defining per-cluster customizations, you can create and apply `clusterConfigOverrides`.
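As an illustrative sketch of how the two override levels fit together (the ConfigMap names below are hypothetical), a single `AppDeployment` can reference both a workspace-wide `configOverrides` ConfigMap and per-cluster `clusterConfigOverrides` entries:

```yaml
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: kube-prometheus-stack
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: kube-prometheus-stack-34.9.3
    kind: ClusterApp
  # Workspace-wide override, applied to every cluster in the workspace:
  configOverrides:
    name: kps-workspace-overrides
  # Per-cluster override, applied on top of the workspace configuration:
  clusterConfigOverrides:
    - configMapName: kps-cluster1-overrides
      clusterSelector:
        matchExpressions:
          - key: kommander.d2iq.io/cluster-name
            operator: In
            values:
              - attached-cluster1
```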
The cluster-scoped enablement and customization of applications is an Enterprise-only feature, which allows the configuration of all workspace Platform, DKP catalog and Custom Applications via the CLI in your managed and attached clusters regardless of your environment setup (networked or air-gapped). This capability is not provided for project applications.
Prerequisites
- Any application you wish to enable or customize at the cluster level first needs to be enabled at the workspace level via an `AppDeployment` (platform application deployment, workspace catalog application deployment).
- For custom configurations, you have created a `ConfigMap` with all the required `spec` fields for each customization you would like to add to an application in a cluster. You can apply a `ConfigMap` to several clusters, or create a `ConfigMap` for each cluster, but the `ConfigMap` object must exist in the Management cluster.
- Determine the name of the workspace where you wish to perform the deployments. You can use the `dkp get workspaces` command to see the list of workspace names and their corresponding namespaces.
- Set the `WORKSPACE_NAMESPACE` environment variable to the name of the workspace's namespace where the cluster is attached:

  ```bash
  export WORKSPACE_NAMESPACE=<workspace_namespace>
  ```

- Set the `WORKSPACE_NAME` environment variable to the name of the workspace where the cluster is attached:

  ```bash
  export WORKSPACE_NAME=<workspace_name>
  ```
Enable or Disable an App on a Per-cluster basis and Define a Custom Configuration
If you want to:

- enable or disable an application for certain clusters, or
- provide a custom configuration for your enabled applications,

edit the `AppDeployment` resource to include the new cluster-scoped configuration.
Enable an Application on a Per-cluster Basis for the First Time
When you enable an application on a workspace, it is deployed to all clusters in the workspace by default. If you would like to deploy it only to a subset of clusters when enabling it on a workspace for the first time, follow these steps:

1. Create an `AppDeployment` for your application, selecting a subset of clusters within the workspace to enable it on. You can use the `dkp get clusters --workspace ${WORKSPACE_NAME}` command to see the list of clusters in the workspace.

   ```bash
   dkp create appdeployment kube-prometheus-stack --app kube-prometheus-stack-34.9.3 --workspace ${WORKSPACE_NAME} --clusters attached-cluster1,attached-cluster2
   ```

2. Optional: Check the current status of the `AppDeployment` to see the names of the clusters where the application is currently enabled.
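One way to inspect that status (assuming standard kubectl access to the Management cluster) is to print the full `AppDeployment` resource:

```bash
kubectl get appdeployment kube-prometheus-stack -n ${WORKSPACE_NAMESPACE} -o yaml
```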
Enable or Disable an Application on a Per-cluster Basis After Enabling It at the Workspace Level
You can enable or disable applications at any time. Once you have enabled the application at the workspace level, the `spec.clusterSelector` field is populated.
For clusters that are newly attached to the workspace, all applications enabled for the workspace are automatically enabled on and deployed to the new clusters.
If you want to see on which clusters your application is currently deployed, refer to the Print and Review the Current State of your AppDeployment documentation.
Edit the `AppDeployment` YAML by adding or removing the names of the clusters where you want to enable your application in the `clusterSelector` section:

```yaml
cat <<EOF | kubectl apply -f -
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: kube-prometheus-stack
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: kube-prometheus-stack-34.9.3
    kind: ClusterApp
  clusterSelector:
    matchExpressions:
      - key: kommander.d2iq.io/cluster-name
        operator: In
        values:
          - attached-cluster1
          - attached-cluster3-new
EOF
```
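To confirm that the new selector was applied, you can print just the `clusterSelector` field, for example with kubectl's JSONPath output:

```bash
kubectl get appdeployment kube-prometheus-stack -n ${WORKSPACE_NAMESPACE} -o jsonpath='{.spec.clusterSelector}'
```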
Enable a Custom Configuration of an Application for a Cluster
You can customize the application separately for each cluster on which it is deployed. If you want to customize the application for a cluster that is not yet attached, refer to the instructions below, so that the application is deployed with the custom configuration on attachment.
To enable per-cluster customizations:
1. Reference the name of the `ConfigMap` to be applied per cluster in the `spec.clusterConfigOverrides` fields. In this example, three different customizations are specified in three different `ConfigMaps` for three different clusters in one workspace:

   ```yaml
   cat <<EOF | kubectl apply -f -
   apiVersion: apps.kommander.d2iq.io/v1alpha3
   kind: AppDeployment
   metadata:
     name: kube-prometheus-stack
     namespace: ${WORKSPACE_NAMESPACE}
   spec:
     appRef:
       name: kube-prometheus-stack-34.9.3
       kind: ClusterApp
     clusterSelector:
       matchExpressions:
         - key: kommander.d2iq.io/cluster-name
           operator: In
           values:
             - attached-cluster1
             - attached-cluster2
             - attached-cluster3-new
     clusterConfigOverrides:
       - configMapName: kps-cluster1-overrides
         clusterSelector:
           matchExpressions:
             - key: kommander.d2iq.io/cluster-name
               operator: In
               values:
                 - attached-cluster1
       - configMapName: kps-cluster2-overrides
         clusterSelector:
           matchExpressions:
             - key: kommander.d2iq.io/cluster-name
               operator: In
               values:
                 - attached-cluster2
       - configMapName: kps-cluster3-overrides
         clusterSelector:
           matchExpressions:
             - key: kommander.d2iq.io/cluster-name
               operator: In
               values:
                 - attached-cluster3-new
   EOF
   ```
2. If you have not done so yet, create the `ConfigMaps` referenced in each `clusterConfigOverrides` entry.

The changes are applied only if the YAML file has a valid syntax.

Set up only one cluster override `ConfigMap` per cluster. If several `ConfigMaps` are configured for a cluster, only one will be applied. Cluster override `ConfigMaps` must be created on the Management cluster.
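Before applying the `AppDeployment` above, you can verify that each referenced `ConfigMap` already exists in the workspace namespace on the Management cluster, for example:

```bash
kubectl get configmap kps-cluster1-overrides kps-cluster2-overrides kps-cluster3-overrides -n ${WORKSPACE_NAMESPACE}
```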
Enable a Custom Configuration of an Application for a Cluster at Attachment
You can customize the application configuration for a cluster prior to its attachment, so that the application is deployed with this custom configuration on attachment. This is preferable if you do not want to redeploy the application with an updated configuration after it has been initially installed, which may cause downtime.
To enable per-cluster customizations, follow these steps before attaching the cluster:
1. Set the `CLUSTER_NAME` environment variable to the name that you will give your to-be-attached cluster:

   ```bash
   export CLUSTER_NAME=<your_attached_cluster_name>
   ```

2. Reference the name of the `ConfigMap` you want to apply to this cluster in the `spec.clusterConfigOverrides` fields. You do not need to update the `spec.clusterSelector` field. In this example, the `kps-cluster1-overrides` customization is specified for `attached-cluster1`, and a different customization (in the `kps-your-attached-cluster-overrides` `ConfigMap`) for your to-be-attached cluster:

   ```yaml
   cat <<EOF | kubectl apply -f -
   apiVersion: apps.kommander.d2iq.io/v1alpha3
   kind: AppDeployment
   metadata:
     name: kube-prometheus-stack
     namespace: ${WORKSPACE_NAMESPACE}
   spec:
     appRef:
       name: kube-prometheus-stack-34.9.3
       kind: ClusterApp
     clusterSelector:
       matchExpressions:
         - key: kommander.d2iq.io/cluster-name
           operator: In
           values:
             - attached-cluster1
     clusterConfigOverrides:
       - configMapName: kps-cluster1-overrides
         clusterSelector:
           matchExpressions:
             - key: kommander.d2iq.io/cluster-name
               operator: In
               values:
                 - attached-cluster1
       - configMapName: kps-your-attached-cluster-overrides
         clusterSelector:
           matchExpressions:
             - key: kommander.d2iq.io/cluster-name
               operator: In
               values:
                 - ${CLUSTER_NAME}
   EOF
   ```
If you have not done so yet, create the ConfigMap referenced for your to-be-attached cluster.
The changes are applied only if the YAML file has a valid syntax.
Cluster override `ConfigMaps` must be created on the Management cluster.
Disable a Custom Configuration of an Application for a Cluster
Enabled customizations are defined in a `ConfigMap`, which, in turn, is referenced in the `spec.clusterConfigOverrides` object of your `AppDeployment`.
1. Review your current configuration to establish what you want to remove:

   ```bash
   kubectl get appdeployment -n ${WORKSPACE_NAMESPACE} kube-prometheus-stack -o yaml
   ```
The output looks similar to this:
```yaml
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: kube-prometheus-stack
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: kube-prometheus-stack-34.9.3
    kind: ClusterApp
  configOverrides:
    name: kube-prometheus-stack-overrides-attached
  clusterSelector:
    matchExpressions:
      - key: kommander.d2iq.io/cluster-name
        operator: In
        values:
          - attached-cluster1
          - attached-cluster2
  clusterConfigOverrides:
    - configMapName: kps-cluster1-overrides
      clusterSelector:
        matchExpressions:
          - key: kommander.d2iq.io/cluster-name
            operator: In
            values:
              - attached-cluster1
    - configMapName: kps-cluster2-overrides
      clusterSelector:
        matchExpressions:
          - key: kommander.d2iq.io/cluster-name
            operator: In
            values:
              - attached-cluster2
```
Here you can see that `kube-prometheus-stack` has been enabled for `attached-cluster1` and `attached-cluster2`. There is also a custom configuration for each of the clusters: `kps-cluster1-overrides` and `kps-cluster2-overrides`.

2. Edit the file by deleting the `configMapName` entry of the cluster for which you would like to delete the customization. This entry is located under `clusterConfigOverrides`:

   ```yaml
   cat <<EOF | kubectl apply -f -
   apiVersion: apps.kommander.d2iq.io/v1alpha3
   kind: AppDeployment
   metadata:
     name: kube-prometheus-stack
     namespace: ${WORKSPACE_NAMESPACE}
   spec:
     appRef:
       kind: ClusterApp
       name: kube-prometheus-stack-34.9.3
     configOverrides:
       name: kube-prometheus-stack-ws-overrides
     clusterSelector:
       matchExpressions:
         - key: kommander.d2iq.io/cluster-name
           operator: In
           values:
             - attached-cluster1
     clusterConfigOverrides:
       - configMapName: kps-cluster1-overrides
         clusterSelector:
           matchExpressions:
             - key: kommander.d2iq.io/cluster-name
               operator: In
               values:
                 - attached-cluster1
   EOF
   ```
Compare the outputs of steps one and two for a reference of how an entry is deleted.
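To verify that the customization was removed, you can print the remaining `clusterConfigOverrides` entries:

```bash
kubectl get appdeployment kube-prometheus-stack -n ${WORKSPACE_NAMESPACE} -o jsonpath='{.spec.clusterConfigOverrides}'
```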
Before deleting a `ConfigMap` that contains your customization, ensure you will NOT require it at a later time. It is not possible to restore a deleted `ConfigMap`.

If you choose to delete it, run:

```bash
kubectl delete configmap <name_configmap> -n ${WORKSPACE_NAMESPACE}
```

It is NOT possible to delete a `ConfigMap` that is being actively used and referenced in the `configOverrides` of any `AppDeployment`.
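One way to check whether a `ConfigMap` is still referenced before attempting to delete it is to search the `AppDeployments` in the workspace namespace for its name (a simple grep-based sketch; `<name_configmap>` is a placeholder):

```bash
kubectl get appdeployments -n ${WORKSPACE_NAMESPACE} -o yaml | grep "<name_configmap>"
```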
Verify the Applications and your Current Configuration
Refer to the Verify applications help to connect to the managed or attached cluster and check the status of the deployments.
If you want to know how the `AppDeployment` resource is currently configured, refer to the Print and Review the Current State of your AppDeployment section.