
DKP 2.6.2 Deprecations

The following items are deprecated or removed from this version of DKP.

Deprecated dkp push Subcommand and Flag

To improve consistency, simplify, and streamline DKP CLI commands, this version of DKP deprecates the dkp push image-bundle command in favor of dkp push bundle.

Deprecated subcommand and flag: dkp push image-bundle --image-bundle

New subcommand and flag: dkp push bundle --bundle

You can still run the dkp push image-bundle command with this version of DKP, but it will be removed in a future DKP version.

If you have scripts, GitHub Actions, or other cluster creation and upgrade automations, D2iQ recommends updating to the new subcommand and flag as soon as possible.
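As a sketch of such an update, the following shell snippet rewrites the deprecated subcommand and flag in an automation script. The script name deploy.sh and its contents are hypothetical examples, not part of DKP:

```shell
# Sketch: migrate a hypothetical automation script from the deprecated
# subcommand and flag to the new ones. "deploy.sh" and its contents are
# illustrative examples only.
set -eu
cat > deploy.sh <<'EOF'
dkp push image-bundle --image-bundle=container-images.tar --to-registry=registry.example.com
EOF
# Rewrite the subcommand first, then the flag.
sed -i.bak -e 's/dkp push image-bundle/dkp push bundle/' -e 's/--image-bundle/--bundle/' deploy.sh
cat deploy.sh
```

Reviewing the rewritten script before committing it (the .bak backup is kept for comparison) helps catch any occurrences sed did not anticipate.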

Catalog Applications Compatibility

Starting with DKP 2.6.2, the following versions of Catalog applications are deprecated:

  • kafka-operator-0.20.0

  • kafka-operator-0.20.2

  • spark-operator-1.1.6

  • zookeeper-operator-0.2.13

If you plan on upgrading to DKP 2.6.2, ensure that you upgrade these applications to the latest compatible version.

For more information, see Workspace DKP Catalog Applications - Compatibility.

Spark Operator to be Removed in DKP 2.7

Starting from DKP 2.6.2, D2iQ will discontinue support for all versions of Spark Operator in the DKP Catalog, because the upstream operator is no longer maintained. As a result, the Spark Operator will no longer be available for use in future DKP releases.

Spark Operator remains available in DKP 2.6.1 and previous versions.

For instructions on how to keep Spark on your cluster after upgrading, see the following section:

How to keep existing instances of Spark, unmanaged by DKP or Flux
  1. Set the WORKSPACE_NAMESPACE variable to the namespace where the spark-operator is deployed:

    CODE
    export WORKSPACE_NAMESPACE=<WORKSPACE_NAMESPACE>
  2. Set suspend: true on the cluster Kustomization
    NOTE: This needs to be reverted in the last step.

    CODE
    kubectl -n ${WORKSPACE_NAMESPACE} patch kustomization cluster --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
  3. Set suspend: true and prune: false on the spark-operator Kustomizations

    CODE
    export SPARK_OPERATOR_APPDEPLOYMENT_NAME=<spark operator AppDeployment name>
    export SPARK_OPERATOR_APPDEPLOYMENT_VERSION=<spark operator AppDeployment version>
    kubectl -n ${WORKSPACE_NAMESPACE} patch kustomization ${SPARK_OPERATOR_APPDEPLOYMENT_NAME} --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true},{"op": "replace", "path": "/spec/prune", "value": false}]'
    kubectl -n ${WORKSPACE_NAMESPACE} patch kustomization spark-operator-${SPARK_OPERATOR_APPDEPLOYMENT_VERSION}-defaults --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true},{"op": "replace", "path": "/spec/prune", "value": false}]'
  4. Update the dkp-catalog-applications GitRepository with the new version

  5. Unsuspend the cluster Kustomization
    NOTE: If the Kustomization is left suspended, Kommander will be unable to function properly.

    CODE
    kubectl -n ${WORKSPACE_NAMESPACE} patch kustomization cluster --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'
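After step 5, you may want to confirm that the cluster Kustomization is active again. A minimal sketch of such a check follows; it parses a sample JSON document in place of live output, since on a real cluster you would feed it the result of kubectl get instead:

```shell
# Sketch: confirm the cluster Kustomization is no longer suspended.
# On a live cluster, replace the sample document with the output of:
#   kubectl -n ${WORKSPACE_NAMESPACE} get kustomization cluster -o json
sample='{"spec":{"suspend":false,"prune":true}}'
# Extract the boolean value of .spec.suspend from the JSON document.
suspend=$(printf '%s' "$sample" | grep -o '"suspend":[a-z]*' | cut -d: -f2)
if [ "$suspend" = "false" ]; then
  echo "cluster Kustomization is active"
else
  echo "cluster Kustomization is still suspended" >&2
fi
```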

Remove Spark from the UI

Follow these steps to remove the spark-operator AppDeployment from the DKP UI:

  1. From the top menu bar, select your target workspace.

  2. Select Applications from the sidebar menu.

  3. Select the three-dot button in the bottom-right corner of the Spark application tile, and then select Uninstall.

  4. Select Save.

OR

Remove Spark with CLI

  1. Execute the following command to get the WORKSPACE_NAMESPACE of your workspace:

    CODE
    dkp get workspaces

    Then copy the value under the NAMESPACE column for your workspace.

  2. Export the WORKSPACE_NAMESPACE variable:

    CODE
    export WORKSPACE_NAMESPACE=<WORKSPACE_NAMESPACE>
  3. Run the following command to delete the spark-operator AppDeployment:

    CODE
    kubectl delete AppDeployment <spark operator appdeployment name> -n ${WORKSPACE_NAMESPACE}
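The namespace lookup in step 1 can also be scripted against the tabular output of dkp get workspaces. A sketch follows, using an illustrative sample table and workspace name (my-workspace), neither of which is part of DKP:

```shell
# Sketch: pick the NAMESPACE column for a given workspace from
# `dkp get workspaces`-style output. The table and names are illustrative.
sample='NAME                NAMESPACE                     AGE
default-workspace   kommander-default-workspace   10d
my-workspace        my-workspace-abc12            3d'
# awk splits each row on whitespace; $1 is NAME, $2 is NAMESPACE.
WORKSPACE_NAMESPACE=$(printf '%s\n' "$sample" | awk '$1 == "my-workspace" { print $2 }')
echo "$WORKSPACE_NAMESPACE"
```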

This results in the spark-operator Kustomizations being deleted, but the HelmRelease and default ConfigMap will remain on the cluster.

From here, you can continue to manage the spark-operator via the HelmRelease.

For more information on Spark, see Enterprise: Upgrade Workspace Catalog Applications in the documentation.
