Deployment of Catalog Applications in Workspaces
Deploy applications to attached clusters using the CLI
This topic describes how to use the CLI to deploy a workspace catalog application to attached clusters within a workspace. To deploy an application to selected clusters within a workspace, refer to the Cluster-scoped Configuration section of the documentation.
Prerequisites
Before you begin, you must have:
A running cluster with Kommander installed. The cluster should be on a supported Kubernetes version for this release of DKP, and also compatible with the catalog application version you wish to install.
Completed the steps in the Attach an Existing Kubernetes Cluster section of the documentation.
Set the WORKSPACE_NAMESPACE environment variable to the namespace of the workspace that the attached cluster is in:
export WORKSPACE_NAMESPACE=<workspace_namespace>
After creating a GitRepository, use either the DKP UI or the CLI to enable your catalog applications. From within a workspace, you can enable applications to deploy, and then verify via the CLI that an application has deployed successfully.
Enable (and Customize) the Application using the DKP UI
Follow these steps to enable your catalog applications from the DKP UI:
Enterprise only: from the top menu bar, select your target workspace.
Select Applications from the sidebar menu to browse the available applications from your configured repositories.
Select the three-dot menu in the bottom-right corner of the desired application card, and then select Enable.
If available, select a version from the drop-down menu. The drop-down menu is only visible when more than one version is available.
(Optional) If you want to override the default configuration values, copy your customized values into the text editor under Configure Service or upload your YAML file that contains the values:
someField: someValue
Confirm the details are correct, and then select the Enable button.
For all applications, you must provide a display name and an ID. The ID is automatically generated from the display name unless you edit it directly, and it must comply with Kubernetes DNS subdomain name validation rules.
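The ID format described above can be checked with a small sketch. This is an illustration only, not part of DKP: it validates a string against the RFC 1123 DNS subdomain rules (lowercase alphanumerics, `-` and `.`, each label starting and ending with an alphanumeric, at most 253 characters total).

```python
import re

# One DNS-1123 label: lowercase alphanumerics and '-', with an
# alphanumeric first and last character.
LABEL = r"[a-z0-9]([a-z0-9-]*[a-z0-9])?"
# A subdomain is one or more labels joined by '.'.
SUBDOMAIN = re.compile(rf"^{LABEL}(\.{LABEL})*$")

def is_valid_id(app_id: str) -> bool:
    """Return True if app_id is a valid Kubernetes DNS subdomain name."""
    return len(app_id) <= 253 and SUBDOMAIN.match(app_id) is not None

print(is_valid_id("spark-operator"))   # True
print(is_valid_id("Spark_Operator"))   # False: uppercase and underscore
```

If the auto-generated ID is rejected, edit it so that it satisfies these rules before enabling the application.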
Alternately, you can use the CLI to enable your catalog applications.
Enable the Application using the CLI
See Workspace Catalog Applications for the list of available applications that you can deploy on the attached cluster.
Enable a supported application to deploy to your attached cluster with an AppDeployment resource. Within the AppDeployment, define the appRef to specify which App to enable:
cat <<EOF | kubectl apply -f -
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: spark-operator
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: spark-operator-1.1.17
    kind: App
EOF
The appRef.name must match the app name from the list of available catalog applications.
Create the resource in the workspace you just created, which instructs Kommander to deploy the AppDeployment to the KommanderClusters in the same workspace.
Enable an Application with a Custom Configuration using the CLI
Follow these steps:
Provide the name of a ConfigMap in the AppDeployment, which provides custom configuration on top of the default configuration:
cat <<EOF | kubectl apply -f -
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: spark-operator
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: spark-operator-1.1.17
    kind: App
  configOverrides:
    name: spark-operator-overrides
EOF
Create the ConfigMap with the name provided in the step above, with the custom configuration:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: ${WORKSPACE_NAMESPACE}
  name: spark-operator-overrides
data:
  values.yaml: |
    configInline:
      uiService:
        enable: false
EOF
Kommander waits for the ConfigMap to be present before deploying the AppDeployment to the managed or attached clusters.
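To illustrate how the custom configuration layers on top of the application's default values, here is a minimal deep-merge sketch. The default values shown are hypothetical, and this is not Kommander's actual implementation; the real merge is performed by Kommander and Helm when the values are coalesced.

```python
def deep_merge(defaults: dict, overrides: dict) -> dict:
    """Return defaults with overrides layered on top; nested dicts
    are merged recursively, scalars in overrides win."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical default values for the chart.
defaults = {"configInline": {"uiService": {"enable": True}, "replicaCount": 1}}
# The override from the ConfigMap above.
overrides = {"configInline": {"uiService": {"enable": False}}}

print(deep_merge(defaults, overrides))
# {'configInline': {'uiService': {'enable': False}, 'replicaCount': 1}}
```

Note that only the overridden key changes; sibling defaults such as `replicaCount` are preserved.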
Verify Applications
The applications are now enabled. Connect to the attached cluster and check the HelmReleases to verify the deployment:
kubectl get helmreleases -n ${WORKSPACE_NAMESPACE}
The output looks similar to this:
NAMESPACE NAME READY STATUS AGE
workspace-test-vjsfq spark-operator True Release reconciliation succeeded 7m3s