Configure HTTP Proxy

Configure HTTP proxy for the Kommander cluster(s)

Kommander supports environments where access to the Internet is restricted, and must be made through an HTTP/HTTPS proxy.

In these environments, you must configure Kommander to use the HTTP/HTTPS proxy. In turn, Kommander configures all platform services to use the HTTP/HTTPS proxy.

NOTE: Kommander follows a common convention for using an HTTP proxy server. The convention is based on three environment variables, and is supported by many, though not all, applications.

  • HTTP_PROXY: the HTTP proxy server address
  • HTTPS_PROXY: the HTTPS proxy server address
  • NO_PROXY: a list of IPs and domain names that are not subject to proxy settings
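As a sketch of the convention, the three variables can be exported in a shell session before launching an application that honors them; the proxy address below is a hypothetical placeholder:

```shell
# Hypothetical proxy address; replace with your real proxy server.
export HTTP_PROXY="http://proxy.example.com:3128"
export HTTPS_PROXY="http://proxy.example.com:3128"
# Addresses that bypass the proxy; extend with your cluster-internal names.
export NO_PROXY="localhost,127.0.0.1,.svc,.svc.cluster.local"
```

Applications that follow the convention (curl, wget, many language runtimes) read these variables from the environment automatically; applications that do not must be configured through their own mechanisms.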


In the examples below, it is assumed that:

  1. The curl command-line tool is available on the host.
  2. The proxy server address is
  3. The proxy server address uses the http scheme.
  4. The proxy server can reach using HTTP or HTTPS.

Verify the cluster nodes can access the Internet through the proxy server

On each cluster node, run:

curl --proxy --head
curl --proxy --head

If the proxy is working for HTTP and HTTPS, respectively, the curl command returns a 200 OK HTTP response.
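This check can also be scripted across nodes. The following is a minimal sketch; the proxy and target addresses are hypothetical placeholders, and the helper only inspects the status line that curl --head prints:

```shell
# Hypothetical addresses; substitute your proxy server and a URL it can reach.
PROXY="http://proxy.example.com:3128"
TARGET_HTTP="http://example.com"
TARGET_HTTPS="https://example.com"

# is_ok: succeed if the first header line reports a 200 status
# (matches both "HTTP/1.1 200 OK" and "HTTP/2 200").
is_ok() {
  printf '%s\n' "$1" | head -n 1 | grep -Eq '^HTTP/[^ ]+ 200'
}

# On each node, capture the response headers and check them:
#   is_ok "$(curl --proxy "$PROXY" --head --silent "$TARGET_HTTP")"  && echo "HTTP OK"
#   is_ok "$(curl --proxy "$PROXY" --head --silent "$TARGET_HTTPS")" && echo "HTTPS OK"
```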

Enable Gatekeeper

Gatekeeper acts as a Kubernetes mutating webhook. You can use it to mutate Pod resources, injecting the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables.
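Conceptually, the effect of the mutation is equivalent to adding an env block like the following to each container of a newly created Pod in a labeled namespace; the addresses shown are hypothetical placeholders:

```yaml
# Illustration of the injected environment; the webhook adds this for you.
env:
- name: HTTP_PROXY
  value: "http://proxy.example.com:3128"   # hypothetical proxy address
- name: HTTPS_PROXY
  value: "http://proxy.example.com:3128"
- name: NO_PROXY
  value: "localhost,.svc,.svc.cluster,.svc.cluster.local"
```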

Kommander is installed with the DKP CLI.

  1. Create (if necessary) and update the Kommander installation configuration file. If one does not already exist, create it with the following command:

    ./dkp install kommander --init > install.yaml
  2. Append the apps section in install.yaml with the following values to enable Gatekeeper and configure it to add HTTP proxy settings to the pods.

    NOTE: Only pods created after this setting is applied are mutated, and only pods in namespaces carrying the "" label are affected.

        values: |
          disableMutation: false
          enablePodProxy: true
          noProxy: ",,,,,,localhost,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,.svc.cluster.local.,kubecost-prometheus-server.kommander,logging-operator-logging-fluentd.kommander.svc,"
          httpProxy: ""
          httpsProxy: ""
          excludeNamespacesFromProxy: []
          "": "pod-proxy"
  3. Create the kommander and kommander-flux namespaces (or the namespace where Kommander will be installed), and label them so that the Gatekeeper mutation is active in those namespaces:

    kubectl create namespace kommander
    kubectl label namespace kommander
    kubectl create namespace kommander-flux
    kubectl label namespace kommander-flux
  4. Install Kommander using the above configuration file:

    NOTE: To ensure Kommander is installed on the workload cluster, use the --kubeconfig=cluster_name.conf flag.

    ./dkp install kommander --installer-config ./install.yaml

Configure the Workspace (or Project) in which you want to use the proxy

To have Gatekeeper mutate the manifests, create the Workspace (or Project) with the following label:

labels: "pod-proxy"

You can add this label when creating the Workspace (or Project) in the UI, or by running the following command once the namespace has been created:

kubectl label namespace <NAMESPACE> ""

Configure attached clusters with proxy configuration

To ensure that Gatekeeper is deployed before everything else in an attached cluster, you must manually create the exact namespace of the workspace to which the cluster will be attached.

Execute the following command in the cluster before attaching it to the host cluster:

kubectl create namespace <NAMESPACE>

Then, to configure the pods in this namespace to use the proxy configuration, create the gatekeeper-overrides configmap described in the next section before attaching the cluster to the host cluster. You must also apply the label to the workspace when creating it, so that Gatekeeper deploys the webhook that mutates the pods with the proxy configuration.

Create Gatekeeper configmap in Workspace namespace

To configure Gatekeeper such that these environment variables are mutated in the pods, create the following configmap in the target Workspace:

export NAMESPACE=<workspace-namespace>
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: gatekeeper-overrides
  namespace: ${NAMESPACE}
data:
  values.yaml: |
    # enable mutations
    disableMutation: false
    enablePodProxy: true
    noProxy: ",,,,,,localhost,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,.svc.cluster.local.,kubecost-prometheus-server.kommander,logging-operator-logging-fluentd.kommander.svc,"
    httpProxy: ""
    httpsProxy: ""
    excludeNamespacesFromProxy: []
    "": "pod-proxy"
EOF

Set the httpProxy and httpsProxy values to the address of the HTTP and HTTPS proxy server, respectively. Set the noProxy value to the addresses that should be accessed directly rather than through the proxy.
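Most tools that honor these variables treat each noProxy entry either as an exact host/IP or as a domain suffix, though the exact matching rules vary between implementations. A rough POSIX-shell sketch of that suffix-matching convention (a hypothetical helper, for illustration only):

```shell
# no_proxy_match: succeed if $1 matches an entry in $NO_PROXY, using the
# common convention: exact match, or domain-suffix match for entries
# such as ".svc.cluster.local". Illustrative only; real tools differ.
no_proxy_match() {
  host=$1
  old_ifs=$IFS
  IFS=','
  for entry in $NO_PROXY; do
    [ -z "$entry" ] && continue          # skip empty list entries
    case "$host" in
      "$entry"|*"$entry") IFS=$old_ifs; return 0 ;;
    esac
  done
  IFS=$old_ifs
  return 1
}
```

This is why a single entry such as .svc.cluster.local covers every in-cluster Service name.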

IMPORTANT: Both the HTTP and HTTPS proxy server address must use the http scheme.

NOTE: To ensure that core components work correctly, always add these addresses to the noProxy:

  • Loopback addresses ( and localhost)
  • Kubernetes API Server addresses
  • Kubernetes Pod IPs (for example, ). This range comes from two places:
    • The Calico pod CIDR - Defaults to
    • The podSubnet configured in CAPI objects, which needs to match the Calico pod CIDR above - Defaults to (same as above)
  • Kubernetes Service addresses (for example,, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, kubernetes.default.svc.cluster.local, .svc, .svc.cluster, .svc.cluster.local, .svc.cluster.local.)
  • Auto-IP addresses,
In addition to the above, the following are needed when installing on AWS:
  • The default VPC CIDR range of
  • kube-apiserver internal/external ELB address

IMPORTANT: The NO_PROXY variable contains the Kubernetes Services CIDR. This example uses the default CIDR; if your cluster's CIDR is different, update the value in NO_PROXY.
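A sketch of assembling a NO_PROXY value from the components listed above. The CIDRs below are common upstream defaults (10.96.0.0/12 for Kubernetes Services, 192.168.0.0/16 for Calico pods) and are assumptions; verify them against your own cluster configuration before use:

```shell
# All values below are assumed defaults; confirm them for your cluster.
SERVICE_CIDR="10.96.0.0/12"    # common Kubernetes Services CIDR default
POD_CIDR="192.168.0.0/16"      # common Calico pod CIDR default
METADATA_IP="169.254.169.254"  # link-local instance metadata address

NO_PROXY="127.0.0.1,localhost,${SERVICE_CIDR},${POD_CIDR},${METADATA_IP}"
NO_PROXY="${NO_PROXY},kubernetes,kubernetes.default,kubernetes.default.svc"
NO_PROXY="${NO_PROXY},.svc,.svc.cluster,.svc.cluster.local"
export NO_PROXY
```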

LIMITATION: Depending on the order in which the Gatekeeper Deployment becomes Ready relative to other Deployments, not all core services are guaranteed to be mutated with the proxy environment variables; only user-deployed workloads are. If you need a core service to be mutated with your proxy environment variables, restart the AppDeployment for that core service. This behavior will be fixed in a future release of Kommander.

Configure your applications

In a default installation with Gatekeeper enabled, proxy environment variables are applied to all your pods automatically when you add the following label to your namespace:

"": "pod-proxy"

No further manual changes are required.

IMPORTANT: If Gatekeeper is not installed and you need to use an HTTP proxy, you must manually configure your applications as described further in this section.

Manually configure your application

Some applications follow the convention of HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables.

In this example, the environment variables are set for a container in a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # placeholder name
spec:
  containers:
  - name: example-container
    image: example-image   # placeholder image
    env:
    - name: HTTP_PROXY
      value: ""
    - name: HTTPS_PROXY
      value: ""
    - name: NO_PROXY
      value: ",localhost,,,,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,.svc.cluster.local."

See Define Environment Variables for a Container for more details.