Universal Configurations for all Infrastructure Providers
Several areas of DKP configuration are common to all infrastructure providers. This section describes some of these universal configurations and includes links to expanded information for these topics.
Use HTTP Proxy
Production environments often deny direct access to the Internet and instead provide an HTTP proxy. To configure DKP to use these proxies, set the standard environment variables in the configuration YAML file. The values must cover the environment where the bootstrap cluster is created, the location of the production cluster, and the address of the API load balancer that Kommander uses after the cluster is created with the Konvoy component.
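As a minimal sketch of the standard variable trio (the proxy address and no-proxy entries here are placeholders, not required values):
export http_proxy=http://proxy.example.org:3128
export https_proxy=http://proxy.example.org:3128
export no_proxy=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16
The sections below show where these values are supplied for the bootstrap cluster, the workload cluster, and the Kommander component.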
Bootstrap Proxy Settings
The bootstrap cluster is not part of the workload cluster. It may be located on a different network, so it may need different HTTP proxy variables. The workload cluster's API server does not exist yet in the bootstrap environment because it is created during cluster creation.
To create a bootstrap cluster in a proxy environment, you need to include the following flags:
--http-proxy <<http proxy list>>
--https-proxy <<https proxy list>>
--no-proxy <<no proxy list>>
Example:
dkp create bootstrap --http-proxy <<http proxy list>> --https-proxy <<https proxy list>> --no-proxy <<no proxy list>>
If an HTTP proxy is required, set the local http_proxy, https_proxy, and no_proxy environment variables. They are copied into the bootstrap cluster.
Create a bootstrap cluster:
dkp create bootstrap --kubeconfig $HOME/.kube/config \
  --http-proxy <string> \
  --https-proxy <string> \
  --no-proxy <string>
Example:
dkp create bootstrap --http-proxy 10.0.0.15:3128 \
  --https-proxy 10.0.0.15:3128 \
  --no-proxy 127.0.0.1,192.168.0.0/16,10.0.0.0/16,10.96.0.0/12,169.254.169.254,169.254.0.0/24,localhost,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,.svc.cluster.local.,kubecost-prometheus-server.kommander,logging-operator-logging-fluentd.kommander.svc.cluster.local,elb.amazonaws.com
Create CAPI Components with HTTP Proxy
Creating CAPI components for a DKP cluster from the command line requires HTTP/HTTPS proxy information, if your environment is proxied.
If you created a cluster without using the --self-managed flag, the cluster will not have any of the CAPI controllers or the cert-manager component. This means the cluster is managed from the context of the cluster from which it was created, such as the bootstrap cluster. However, you can transform the cluster into a self-managed cluster by running the following commands:
dkp create capi-components --kubeconfig=<newcluster>
dkp move --to-kubeconfig=<newcluster>
This combination of actions is sometimes called a pivot.
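For example, a minimal sketch of the pivot, assuming the new cluster's kubeconfig was written to a placeholder file named newcluster.conf:
dkp create capi-components --kubeconfig=newcluster.conf
dkp move --to-kubeconfig=newcluster.conf
Once the move completes, the new cluster hosts its own CAPI controllers and manages itself.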
When creating the CAPI components for a proxied environment using the DKP command line interface, you must include the following flags:
--http-proxy <<http proxy list>>
--https-proxy <<https proxy list>>
--no-proxy <<no proxy list>>
The following is an example of the dkp create capi-components command's syntax with the HTTP proxy settings included:
dkp create capi-components --http-proxy <<http proxy list>> --https-proxy <<https proxy list>> --no-proxy <<no proxy list>>
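For instance, with a placeholder proxy address (10.0.0.15:3128 and the no-proxy entries below are illustrative values, not requirements):
dkp create capi-components --http-proxy 10.0.0.15:3128 \
  --https-proxy 10.0.0.15:3128 \
  --no-proxy 127.0.0.1,192.168.0.0/16,10.96.0.0/12,localhost,.svc,.svc.cluster,.svc.cluster.local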
Configure and Create a Cluster with HTTP Proxy
During installation of the Konvoy component of DKP, the Control Plane and Worker nodes can be configured to use an HTTP proxy when creating a cluster. If you require HTTP proxy configurations, you can apply them during the create operation by adding the appropriate flags to the create cluster command example below:
Proxy configuration | Flag |
---|---|
HTTP proxy for control plane machines | --control-plane-http-proxy |
HTTPS proxy for control plane machines | --control-plane-https-proxy |
No Proxy list for control plane machines | --control-plane-no-proxy |
HTTP proxy for worker machines | --worker-http-proxy |
HTTPS proxy for worker machines | --worker-https-proxy |
No Proxy list for worker machines | --worker-no-proxy |
The same configuration needs to be applied to the custom machine images built with Konvoy Image Builder (KIB) by using the HTTP override file. For more information, refer to the Use Override Files with Konvoy Image Builder section of the documentation.
Example of how to configure the control plane and worker nodes to use an HTTP proxy:
export CONTROL_PLANE_HTTP_PROXY=http://example.org:8080
export CONTROL_PLANE_HTTPS_PROXY=http://example.org:8080
export CONTROL_PLANE_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.elb.amazonaws.com"
export WORKER_HTTP_PROXY=http://example.org:8080
export WORKER_HTTPS_PROXY=http://example.org:8080
export WORKER_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.elb.amazonaws.com"
HTTP proxy configuration considerations, to ensure the core components work correctly:
Replace example.org,example.com,example.net with your internal addresses.
localhost and 127.0.0.1 addresses should not use the proxy.
10.96.0.0/12 is the default Kubernetes service subnet.
192.168.0.0/16 is the default Kubernetes pod subnet.
kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local is the internal Kubernetes kube-apiserver service.
.svc,.svc.cluster,.svc.cluster.local are the internal Kubernetes services.
169.254.169.254 is the Auto-IP address for any cloud provider.
Example of creating a cluster using the HTTP proxy configured above:
Use your infrastructure provider name in line one from the choices listed:
dkp create cluster [aws, azure, gcp, preprovisioned, vsphere] \
--cluster-name ${CLUSTER_NAME} \
--control-plane-http-proxy="${CONTROL_PLANE_HTTP_PROXY}" \
--control-plane-https-proxy="${CONTROL_PLANE_HTTPS_PROXY}" \
--control-plane-no-proxy="${CONTROL_PLANE_NO_PROXY}" \
--worker-http-proxy="${WORKER_HTTP_PROXY}" \
--worker-https-proxy="${WORKER_HTTPS_PROXY}" \
--worker-no-proxy="${WORKER_NO_PROXY}"
If your environment uses HTTP/HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configure HTTP Proxy.
If you have an AMI with an /etc/environment file, add export $(xargs < /etc/environment) using preKubeadmCommands.
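A minimal sketch of how that could look in the KubeadmControlPlane portion of the generated cluster manifest (the surrounding fields are abbreviated here; adjust to match your own manifest):
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
      # Load the proxy variables baked into the AMI before kubeadm runs
      - export $(xargs < /etc/environment)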
HTTP Proxy for the Kommander Component of DKP
After the cluster created with the Konvoy component above is running, you will need to configure the NO_PROXY variable for each provider.
For example, in addition to the values above for AWS, the following settings are needed:
The default VPC CIDR range of 10.0.0.0/16
The kube-apiserver internal/external ELB address
The NO_PROXY variable contains the Kubernetes Services CIDR. This example uses the default CIDR, 10.96.0.0/12. If your cluster's CIDR is different, update the value in the NO_PROXY field.
Set the httpProxy and httpsProxy environment variables to the address of the HTTP and HTTPS proxy servers, respectively. Set the noProxy environment variable to the addresses that should be accessed directly, not through the proxy. For the Kommander component of DKP, see Configure HTTP Proxy in this document for more HTTP proxy information.
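As a minimal sketch, following the variable names used in this section and the AWS additions described above (the addresses are placeholders):
export httpProxy=http://example.org:8080
export httpsProxy=http://example.org:8080
export noProxy="10.0.0.0/16,10.96.0.0/12,192.168.0.0/16,localhost,127.0.0.1,169.254.169.254,.svc,.svc.cluster,.svc.cluster.local,.elb.amazonaws.com"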
Load Balancer
In a Kubernetes cluster, depending on the direction of traffic flow, there are two kinds of load balancing:
Internal load balancing for the traffic within a Kubernetes cluster
External load balancing for the traffic coming from outside the cluster
External Load Balancer
DKP includes a load balancing solution for the supported cloud infrastructure providers and for pre-provisioned environments. For more information, see Load Balancing for external traffic in DKP.
If you want to use a non-DKP load balancer (for example, as an alternative to MetalLB in pre-provisioned environments), DKP supports setting up an external load balancer.
When enabled, the external load balancer routes incoming traffic requests to a single point of entry in your cluster. Users and services can then access the DKP UI through an established IP or DNS address.
Select your Connection Mechanism
A virtual IP is the address that the client uses to connect to the service. A load balancer is the device that distributes client connections to the backend servers. Before you create a new DKP cluster, choose an external load balancer (LB) or virtual IP.
External load balancer
It is recommended that an external load balancer be the control plane endpoint. To distribute request load among the control plane machines, configure the load balancer to send requests to all the control plane machines. Configure the load balancer to send requests only to control plane machines that are responding to API requests.
Built-in virtual IP (option for Pre-provisioned or vSphere)
If an external load balancer is not available, use the built-in virtual IP. The virtual IP is not a load balancer; it does not distribute request load among the control plane machines. However, if the machine receiving requests does not respond to them, the virtual IP automatically moves to another machine.
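For example, on a pre-provisioned cluster the built-in virtual IP is requested at cluster creation time. A sketch, assuming an unused address 10.0.0.20 on interface eth0 (both placeholders; verify the flag names against your DKP version):
dkp create cluster preprovisioned \
  --cluster-name ${CLUSTER_NAME} \
  --control-plane-endpoint-host 10.0.0.20 \
  --virtual-ip-interface eth0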
Additional Configurations
More information regarding global configurations or customization of specific components can be found in the Additional Konvoy Configurations section of the documentation.