Pre-provisioned Create new Cluster

Create a Kubernetes cluster using the infrastructure definition

Once you’ve defined the infrastructure and control plane endpoints, follow these steps to create a new pre-provisioned cluster:

  1. With the inventory and the control plane endpoint defined, use the dkp binary to create a Konvoy cluster. The following command relies on the pre-provisioned Cluster API infrastructure provider to initialize the Kubernetes control plane and worker nodes on the hosts defined in the inventory.

    NOTE: When specifying the cluster-name, you must use the same cluster-name as used when defining your inventory objects.

    NOTE: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the dkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>.

    CODE
    dkp create cluster preprovisioned --cluster-name ${CLUSTER_NAME} --control-plane-endpoint-host <control plane endpoint host> --control-plane-endpoint-port <control plane endpoint port, if different than 6443>

    If your environment uses HTTP/HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configure HTTP Proxy.

    The output resembles:

    CODE
    Generating cluster resources
    cluster.cluster.x-k8s.io/preprovisioned-example created
    kubeadmcontrolplane.controlplane.cluster.x-k8s.io/preprovisioned-example-control-plane created
    preprovisionedcluster.infrastructure.cluster.konvoy.d2iq.io/preprovisioned-example created
    preprovisionedmachinetemplate.infrastructure.cluster.konvoy.d2iq.io/preprovisioned-example-control-plane created
    secret/preprovisioned-example-etcd-encryption-config created
    machinedeployment.cluster.x-k8s.io/preprovisioned-example-md-0 created
    preprovisionedmachinetemplate.infrastructure.cluster.konvoy.d2iq.io/preprovisioned-example-md-0 created
    kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/preprovisioned-example-md-0 created
    clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-preprovisioned-example created
    configmap/calico-cni-installation-preprovisioned-example created
    configmap/tigera-operator-preprovisioned-example created
    clusterresourceset.addons.cluster.x-k8s.io/local-volume-provisioner-preprovisioned-example created
    configmap/local-volume-provisioner-preprovisioned-example created
    clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-preprovisioned-example created
    configmap/node-feature-discovery-preprovisioned-example created
    clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-preprovisioned-example created
    configmap/nvidia-feature-discovery-preprovisioned-example created
    clusterresourceset.addons.cluster.x-k8s.io/metallb-preprovisioned-example created
    configmap/metallb-installation-preprovisioned-example created
  2. Use the wait command to monitor the cluster control-plane readiness:

    CODE
    kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=30m

    CODE
    cluster.cluster.x-k8s.io/preprovisioned-example condition met
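
    While the wait command runs, you can also list the underlying Cluster API objects on the bootstrap cluster to watch the control plane and worker machines come up. This is a minimal sketch using the standard Cluster API resource names shown in the output above; it assumes your current kubectl context points at the bootstrap cluster:

    CODE
    # List the control plane and machine deployments created for the new cluster
    kubectl get kubeadmcontrolplane,machinedeployments
    # Watch individual machines as they are provisioned
    kubectl get machines --watch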

3. Run the dkp create cluster command, including any of the optional flags described in the notes below that apply to your environment.

NOTE: (Optional) If you have overrides for your clusters, you must specify the secret as part of the create cluster command. If it is not specified, the overrides are not applied to your nodes.

CODE
dkp create cluster preprovisioned --cluster-name ${CLUSTER_NAME} --control-plane-endpoint-host <control plane endpoint host> --control-plane-endpoint-port <control plane endpoint port, if different than 6443> --override-secret-name=$CLUSTER_NAME-user-overrides
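
NOTE: The override secret referenced by --override-secret-name can be created from a local overrides.yaml file before you run the command above. The following is a minimal sketch; the contents of overrides.yaml depend on your environment and are described in the pre-provisioned override documentation, and the overrides.yaml key name is an assumption based on that workflow.

CODE
# Sketch: package a local overrides.yaml into the secret expected by --override-secret-name.
# The file contents (for example, registry or proxy settings) depend on your environment.
kubectl create secret generic ${CLUSTER_NAME}-user-overrides --from-file=overrides.yaml=overrides.yaml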

NOTE: If your cluster is air-gapped or you have a local Docker registry, you must provide additional arguments when creating the cluster.

CODE
export DOCKER_REGISTRY_URL="<https/http>://<registry-address>:<registry-port>"
export DOCKER_REGISTRY_CA="<path to the CA on the bastion>"
export DOCKER_REGISTRY_USERNAME="<username>"
export DOCKER_REGISTRY_PASSWORD="<password>"
  • DOCKER_REGISTRY_URL: the address of an existing Docker registry accessible in the VPC. The new cluster nodes are configured to use it as a mirror registry when pulling images.

  • DOCKER_REGISTRY_CA: (optional) the path on the bastion machine to the Docker registry CA. Konvoy will configure the cluster nodes to trust this CA. This value is only needed if the registry is using a self-signed certificate and the AMIs are not already configured to trust this CA.

  • DOCKER_REGISTRY_USERNAME: (optional) set to a user that has pull access to this registry.

  • DOCKER_REGISTRY_PASSWORD: (optional) required only when a username is set.

CODE
dkp create cluster preprovisioned --cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host <control plane endpoint host> \
--control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
--registry-mirror-url=${DOCKER_REGISTRY_URL} \
--registry-mirror-cacert=${DOCKER_REGISTRY_CA} \
--registry-mirror-username=${DOCKER_REGISTRY_USERNAME} \
--registry-mirror-password=${DOCKER_REGISTRY_PASSWORD}

NOTE: Depending on the cluster size, creating the cluster may take a few minutes.

4. After the cluster is created, use this command to get the Kubernetes kubeconfig for the new cluster and begin deploying workloads:

CODE
dkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
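
A quick way to confirm that the kubeconfig works and the nodes have joined is to list the nodes of the new cluster:

CODE
# Verify that the new cluster is reachable and its nodes become Ready
kubectl --kubeconfig ${CLUSTER_NAME}.conf get nodes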

Audit logs

To modify control plane audit log settings, use the information contained in the page Configuring the Control Plane.

Modify the Calico installation

Set the interface

Before exploring the new cluster, confirm that your Calico installation is correct. By default, Calico automatically detects the IP address to use for each node using the first-found method. This is not always appropriate for your particular nodes; in that case, you must modify Calico’s configuration to use a different method. An alternative is the interface method, which selects the address by the interface name you provide. Follow the steps outlined in this section to modify Calico’s configuration. In this example, all cluster nodes use ens192 as the interface name.

Get the pods running on your cluster with this command:

CODE
kubectl get pods -A --kubeconfig ${CLUSTER_NAME}.conf
CODE
NAMESPACE                NAME                                                                READY   STATUS            RESTARTS        AGE
calico-system            calico-kube-controllers-57fbd7bd59-vpn8b                            1/1     Running           0               16m
calico-system            calico-node-5tbvl                                                   1/1     Running           0               16m
calico-system            calico-node-nbdwd                                                   1/1     Running           0               4m40s
calico-system            calico-node-twl6b                                                   0/1     PodInitializing   0               9s
calico-system            calico-node-wktkh                                                   1/1     Running           0               5m35s
calico-system            calico-typha-54f46b998d-52pt2                                       1/1     Running           0               16m
calico-system            calico-typha-54f46b998d-9tzb8                                       1/1     Running           0               4m31s
default                  cuda-vectoradd                                                      0/1     Pending           0               0s
kube-system              coredns-78fcd69978-frwx4                                            1/1     Running           0               16m
kube-system              coredns-78fcd69978-kkf44                                            1/1     Running           0               16m
kube-system              etcd-ip-10-0-121-16.us-west-2.compute.internal                      0/1     Running           0               8s
kube-system              etcd-ip-10-0-46-17.us-west-2.compute.internal                       1/1     Running           1               16m
kube-system              etcd-ip-10-0-88-238.us-west-2.compute.internal                      1/1     Running           1               5m35s
kube-system              kube-apiserver-ip-10-0-121-16.us-west-2.compute.internal            0/1     Running           6               7s
kube-system              kube-apiserver-ip-10-0-46-17.us-west-2.compute.internal             1/1     Running           1               16m
kube-system              kube-apiserver-ip-10-0-88-238.us-west-2.compute.internal            1/1     Running           1               5m34s
kube-system              kube-controller-manager-ip-10-0-121-16.us-west-2.compute.internal   0/1     Running           0               7s
kube-system              kube-controller-manager-ip-10-0-46-17.us-west-2.compute.internal    1/1     Running           1 (5m25s ago)   15m
kube-system              kube-controller-manager-ip-10-0-88-238.us-west-2.compute.internal   1/1     Running           0               5m34s
kube-system              kube-proxy-gclmt                                                    1/1     Running           0               16m
kube-system              kube-proxy-gptd4                                                    1/1     Running           0               9s
kube-system              kube-proxy-mwkgl                                                    1/1     Running           0               4m40s
kube-system              kube-proxy-zcqxd                                                    1/1     Running           0               5m35s
kube-system              kube-scheduler-ip-10-0-121-16.us-west-2.compute.internal            0/1     Running           1               7s
kube-system              kube-scheduler-ip-10-0-46-17.us-west-2.compute.internal             1/1     Running           3 (5m25s ago)   16m
kube-system              kube-scheduler-ip-10-0-88-238.us-west-2.compute.internal            1/1     Running           1               5m34s
kube-system              local-volume-provisioner-2mv7z                                      1/1     Running           0               4m10s
kube-system              local-volume-provisioner-vdcrg                                      1/1     Running           0               4m53s
kube-system              local-volume-provisioner-wsjrt                                      1/1     Running           0               16m
node-feature-discovery   node-feature-discovery-master-84c67dcbb6-m78vr                      1/1     Running           0               16m
node-feature-discovery   node-feature-discovery-worker-vpvpl                                 1/1     Running           0               4m10s
tigera-operator          tigera-operator-d499f5c8f-79dc4                                     1/1     Running           1 (5m24s ago)   16m

If a calico-node pod is not ready on your cluster, you must edit the installation file.

To edit the installation file, run the command:

CODE
kubectl edit installation default --kubeconfig ${CLUSTER_NAME}.conf

Change the value for spec.calicoNetwork.nodeAddressAutodetectionV4 to interface: ens192, and save the file:

CODE
spec:
  calicoNetwork:
  ...
    nodeAddressAutodetectionV4:
      interface: ens192

After saving the file, you may need to delete the node feature discovery worker pod in the node-feature-discovery namespace if that pod has failed. After you delete it, Kubernetes replaces it as part of its normal reconciliation.
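
If you prefer not to edit the resource interactively, the same change can be applied with a merge patch. This is a sketch assuming the ens192 interface from the example above and that the default configuration uses the firstFound method (the patch clears it before setting the interface):

CODE
# Non-interactive alternative to kubectl edit: clear firstFound and set the interface
kubectl patch installation default --type=merge \
  --kubeconfig ${CLUSTER_NAME}.conf \
  -p '{"spec":{"calicoNetwork":{"nodeAddressAutodetectionV4":{"firstFound":null,"interface":"ens192"}}}}'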

Change the encapsulation type

Calico can leverage different network encapsulation methods to route traffic for your workloads. Encapsulation is useful when running on top of an underlying network that is not aware of workload IPs. Common examples include:

  • public cloud environments where you don’t own the hardware
  • AWS across VPC subnet boundaries
  • environments where you cannot peer Calico over BGP to the underlay or easily configure static routes

IPIP is the default encapsulation method.

To change the encapsulation, run the following command:

CODE
kubectl edit installation default --kubeconfig ${CLUSTER_NAME}.conf

Change the value for spec.calicoNetwork.ipPools[0].encapsulation

CODE
spec:
  calicoNetwork:
    ipPools:
    - encapsulation: VXLAN

The supported values are “IPIPCrossSubnet”, “IPIP”, “VXLAN”, “VXLANCrossSubnet”, and “None”.
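
After saving, you can read the value back to confirm the change took effect:

CODE
# Print the encapsulation mode recorded in the Calico Installation resource
kubectl get installation default --kubeconfig ${CLUSTER_NAME}.conf \
  -o jsonpath='{.spec.calicoNetwork.ipPools[0].encapsulation}'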

VXLAN

VXLAN is a tunneling protocol that encapsulates layer 2 Ethernet frames in UDP packets, enabling you to create virtualized layer 2 subnets that span layer 3 networks. It has a slightly larger header than IP-in-IP, which results in a slight reduction in performance compared to IP-in-IP.

IPIP

IP-in-IP is an IP tunneling protocol that encapsulates one IP packet in another IP packet. An outer packet header is added with the tunnel entry point and the tunnel exit point. The Calico implementation of this protocol uses BGP to determine the exit point, making this protocol unusable on networks that don’t pass BGP.

Be aware that switching encapsulation modes can cause disruption to in-progress connections. Plan accordingly.

Use the built-in Virtual IP

As explained in Define the Control Plane Endpoint, we recommend using an external load balancer for the control plane endpoint, but provide a built-in virtual IP when an external load balancer is not available. The built-in virtual IP uses the kube-vip project. To use the virtual IP, add these flags to the create cluster command:

Virtual IP Configuration

  • --virtual-ip-interface string: Network interface to use for the Virtual IP. Must exist on all control plane machines.

  • --control-plane-endpoint string: IPv4 address reserved for use by the cluster.

Virtual IP example

CODE
dkp create cluster preprovisioned \
    --cluster-name ${CLUSTER_NAME} \
    --control-plane-endpoint-host 196.168.1.10 \
    --virtual-ip-interface eth1
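
Once the control plane is up, a quick way to check that the virtual IP is answering is to probe the API server health endpoint on it. This sketch assumes the example address above and that anonymous access to the health endpoints is enabled, which is the Kubernetes default:

CODE
# The API server behind the virtual IP should report "ok" when healthy
curl -k https://196.168.1.10:6443/healthz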

Confirm that your Calico installation is correct.

Provision on the Flatcar Linux OS

When provisioning onto the Flatcar Container Linux distribution, you must instruct the bootstrap cluster to make some changes related to the installation paths. To accomplish this, add the --os-hint flatcar flag to the above create cluster command.

Flatcar Linux example

CODE
dkp create cluster preprovisioned \
    --cluster-name ${CLUSTER_NAME} \
    --os-hint flatcar

Confirm that your Calico installation is correct.

When provisioning DKP on Flatcar, DKP configures the cluster nodes to use Control Groups (cgroups) version 1. In versions prior to Flatcar 3033.2.4, a restart is required to apply the changes to the kernel. For more information, refer to the Flatcar documentation.
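
To confirm which cgroup version a node is actually running, a quick check on the node (assuming you have SSH access) is:

CODE
# On the node: tmpfs indicates cgroups v1, cgroup2fs indicates cgroups v2
stat -fc %T /sys/fs/cgroup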

Use an HTTP proxy

If you require HTTP proxy configurations, you can apply them during the create operation by adding the appropriate flags to the create cluster command:

Proxy configuration

  • --control-plane-http-proxy string: HTTP proxy for control plane machines

  • --control-plane-https-proxy string: HTTPS proxy for control plane machines

  • --control-plane-no-proxy strings: No Proxy list for control plane machines

  • --worker-http-proxy string: HTTP proxy for worker machines

  • --worker-https-proxy string: HTTPS proxy for worker machines

  • --worker-no-proxy strings: No Proxy list for worker machines

You must also add the same configuration as an override. For more information, refer to this documentation.

HTTP proxy example

To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the dkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>.

CODE
dkp create cluster preprovisioned \
    --cluster-name ${CLUSTER_NAME} \
    --control-plane-http-proxy http://proxy.example.com:8080 \
    --control-plane-https-proxy https://proxy.example.com:8080 \
    --control-plane-no-proxy "127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local" \
    --worker-http-proxy http://proxy.example.com:8080 \
    --worker-https-proxy https://proxy.example.com:8080 \
    --worker-no-proxy "127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local"

Confirm that your Calico installation is correct.

Use an alternative mirror

To apply Docker registry configurations during the create operation, add the appropriate flags to the create cluster command:

Docker registry configuration

  • --registry-mirror-cacert file: CA certificate chain to use while communicating with the registry mirror using TLS

  • --registry-mirror-url string: URL of a container registry to use as a mirror in the cluster

This is useful when using an internal registry and when Internet access is not available (air-gapped installations).

When the cluster is up and running, you can deploy and test workloads.
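
For example, a minimal smoke test deploys a small workload with kubectl; the nginx image here is only an illustration and must be reachable through your registry or mirror:

CODE
# Deploy a test workload and wait for it to roll out
kubectl --kubeconfig ${CLUSTER_NAME}.conf create deployment nginx-test --image=nginx
kubectl --kubeconfig ${CLUSTER_NAME}.conf rollout status deployment/nginx-test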

Alternative mirror example

CODE
dkp create cluster preprovisioned \
    --cluster-name ${CLUSTER_NAME} \
    --registry-mirror-cacert /tmp/registry.pem \
    --registry-mirror-url https://registry.example.com

Confirm that your Calico installation is correct.

Use alternate pod or service subnets

In Konvoy, the default pod subnet is 192.168.0.0/16, and the default service subnet is 10.96.0.0/12. If you wish to change these subnets, follow these steps:

  1. Generate the YAML manifests for the cluster using the --dry-run and -o yaml flags, along with the desired dkp create cluster command:

    CODE
    dkp create cluster preprovisioned --cluster-name ${CLUSTER_NAME} --control-plane-endpoint-host <control plane endpoint host> --control-plane-endpoint-port <control plane endpoint port, if different than 6443> --dry-run -o yaml > cluster.yaml
  2. To modify the service subnet, add or edit the spec.clusterNetwork.services.cidrBlocks field of the Cluster object:

    CODE
    kind: Cluster
    spec:
      clusterNetwork:
        services:
          cidrBlocks:
          - 10.0.0.0/18
  3. To modify the pod subnet, edit the Cluster and calico-cni ConfigMap resources:

    Cluster: Add or edit the spec.clusterNetwork.pods.cidrBlocks field:

    CODE
    kind: Cluster
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 172.16.0.0/16

    ConfigMap: Edit the data."custom-resources.yaml".spec.calicoNetwork.ipPools.cidr field with your desired pod subnet:

    CODE
    apiVersion: v1
    data:
      custom-resources.yaml: |
        apiVersion: operator.tigera.io/v1
        kind: Installation
        metadata:
          name: default
        spec:
          # Configures Calico networking.
          calicoNetwork:
            # Note: The ipPools section cannot be modified post-install.
            ipPools:
            - blockSize: 26
              cidr: 172.16.0.0/16
    kind: ConfigMap
    metadata:
      name: calico-cni-<cluster-name>

When you provision the cluster, the configured pod and service subnets will be applied.
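
The manifests generated above exist only in cluster.yaml at this point. A typical way to provision the cluster from them is to create the resources on the bootstrap cluster with kubectl, as sketched below (this assumes your current kubectl context points at the bootstrap cluster):

CODE
# Create the customized cluster resources on the bootstrap cluster
kubectl create -f cluster.yaml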

Confirm that your Calico installation is correct.

When you complete this procedure, move on to Pre-provisioned Make Cluster Self-managed to continue the process.
