Networking

Configure networking for Konvoy cluster

This section describes different networking components that come together to form a Konvoy networking stack. It assumes familiarity with Kubernetes networking.

Service

A Service is an API resource that defines a logical set of pods and a policy by which to access them, and provides an abstract way to expose applications as network services.

Kubernetes gives pods their own IP addresses and gives a set of pods a single DNS name. Services act as entry points that load-balance traffic across the pods. A selector determines the set of pods targeted by a Service.

For example, if you have a set of pods that each listen on TCP port 9191 and carry a label app=MyKonvoyApp, as configured in the following:

CODE
apiVersion: v1
kind: Service
metadata:
  name: my-konvoy-service
  namespace: default
spec:
  selector:
    app: MyKonvoyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9191

This specification creates a new Service object named my-konvoy-service, which targets TCP port 9191 on any pod with the app=MyKonvoyApp label.

Kubernetes assigns this Service an IP address, known as the cluster IP. In particular, kube-proxy implements a form of virtual IP for Services of types other than ExternalName.
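
For comparison, a Service of type ExternalName maps to a DNS name instead of a virtual IP, so kube-proxy does not proxy its traffic. A minimal sketch (the name my-external-service and the target api.example.com are illustrative):

CODE
apiVersion: v1
kind: Service
metadata:
  name: my-external-service
  namespace: default
spec:
  # No selector and no cluster IP; lookups return a CNAME record instead
  type: ExternalName
  externalName: api.example.com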

  • The name of a Service object must be a valid DNS label name.

  • A Service is not a Platform Service.

Service Topology

Service Topology is a mechanism in Kubernetes that routes traffic based on the node topology of the cluster. For example, you can configure a Service to route traffic to endpoints on specific nodes, or based on the region or availability zone of the node's location.

To enable this feature in your Kubernetes cluster, set the --feature-gates="ServiceTopology=true,EndpointSlice=true" flag on the cluster's components. After enabling it, you can control Service traffic routing by defining the topologyKeys field in the Service API object.
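
How you set this flag depends on how your cluster's components are managed. As a hedged illustration only, on a kubeadm-managed cluster the gates could be set through the kubeadm configuration (the ServiceTopology gate must be enabled for both kube-apiserver and kube-proxy):

CODE
# Sketch of a kubeadm configuration enabling the feature gates;
# adapt it to however your cluster manages component flags.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: "ServiceTopology=true,EndpointSlice=true"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  ServiceTopology: true
  EndpointSlice: true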

In the following example, a Service defines topologyKeys so that traffic is routed only to endpoints in the same zone:

CODE
apiVersion: v1
kind: Service
metadata:
  name: my-konvoy-service
  namespace: default
spec:
  selector:
    app: MyKonvoyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9191
  topologyKeys:
  - "topology.kubernetes.io/zone"

If none of the keys listed in topologyKeys matches an available endpoint, the traffic is rejected.
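
To avoid rejecting traffic when none of the preferred keys match, you can append the catch-all value "*" as the last entry. The following sketch prefers endpoints in the same zone, then the same region, and finally falls back to any endpoint in the cluster:

CODE
apiVersion: v1
kind: Service
metadata:
  name: my-konvoy-service
  namespace: default
spec:
  selector:
    app: MyKonvoyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9191
  topologyKeys:
  - "topology.kubernetes.io/zone"    # prefer endpoints in the same zone
  - "topology.kubernetes.io/region"  # then endpoints in the same region
  - "*"                              # otherwise, any endpoint in the cluster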

EndpointSlices

EndpointSlices are an API resource that provides a more scalable and manageable way to track network endpoints within a Kubernetes cluster. They allow network endpoints to be distributed across multiple resources, with a default limit of 100 endpoints per EndpointSlice.

An EndpointSlice contains references to a set of endpoints, and the control plane takes care of creating EndpointSlices for any Service that has a selector specified. These EndpointSlices include references to all the pods that match the Service selector.

Like Services, the name of an EndpointSlice object must be a valid DNS subdomain name.

Here is a sample EndpointSlice resource for the example Kubernetes Service defined above:

CODE
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: konvoy-endpoint-slice
  namespace: default
  labels:
    kubernetes.io/service-name: my-konvoy-service
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "192.168.126.168"
  conditions:
    ready: true
  hostname: ip-10-0-135-39.us-west-2.compute.internal
  topology:
    kubernetes.io/hostname: ip-10-0-135-39.us-west-2.compute.internal
    topology.kubernetes.io/zone: us-west2-b

DNS for Services and Pods

Every new Service object in Kubernetes is assigned a DNS name. The Kubernetes DNS component assigns DNS names to the pods and Services created on the cluster, and the kubelets configure containers so they can resolve these DNS names.

Continuing the previous examples, assume there is a Service named my-konvoy-service in the default namespace. A pod running in the default namespace can look up this Service by performing a DNS query for my-konvoy-service. A pod running in the kommander namespace can look it up with a DNS query for my-konvoy-service.default.

In general, a pod has the following DNS resolution:

CODE
pod-ip-address.namespace-name.pod.cluster-domain.example.

Similarly, a service has the following DNS resolution:

CODE
service-name.namespace-name.svc.cluster-domain.example.
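
To verify this resolution from inside the cluster, you can run a short-lived pod that queries the Service name. A minimal sketch, assuming the busybox image and the default cluster.local cluster domain:

CODE
apiVersion: v1
kind: Pod
metadata:
  name: dns-lookup-konvoy
  namespace: default
spec:
  restartPolicy: Never
  containers:
  - name: lookup
    image: busybox
    # Resolves to the cluster IP of my-konvoy-service in the default namespace
    command: ["nslookup", "my-konvoy-service.default.svc.cluster.local"]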

You can find additional information about all the possible record types and layouts in the Kubernetes documentation on DNS for Services and Pods.

Ingress

Ingress is an API resource that manages external access to the services in a cluster, typically over HTTP or HTTPS. It offers name-based virtual hosting, SSL termination, and load balancing when exposing HTTP/HTTPS routes from outside the cluster to services within it.

The traffic policies are controlled by rules that are part of the Ingress definition. Each rule defines the following details:

  • An optional host to which the rules apply.

  • A list of paths (routes), each of which has an associated backend defined by a Service name and a port name or number.

  • A backend, which is either a combination of a Service name and port, or a custom resource backend defined through a CRD. HTTP/HTTPS requests to the Ingress that match a rule's host and path are sent to that rule's backend.

An example of an Ingress specification is:

CODE
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: konvoy-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /path
        pathType: Prefix
        backend:
          service:
            name: my-konvoy-service
            port:
              number: 80

In Kommander, you can expose services to the outside world using Ingress objects.
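
An Ingress rule can also match on a host to provide the name-based virtual hosting mentioned above, so that multiple hostnames can share one entry point. A minimal sketch (the hostname konvoy.example.com is illustrative):

CODE
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: konvoy-vhost-ingress
  namespace: default
spec:
  rules:
  - host: konvoy.example.com   # only requests with this Host header match
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-konvoy-service
            port:
              number: 80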

Ingress Controllers

In contrast with the controllers in the Kubernetes control plane, Ingress controllers are not started automatically with a cluster, so you need to choose the Ingress controller that best suits your needs.

An Ingress controller must be deployed in a cluster for its Ingress definitions to take effect.

Kubernetes as a project currently supports and maintains the GCE and nginx Ingress controllers.

These are four of the best-known Ingress controllers:

  • HAProxy Ingress is a highly customizable, community-driven Ingress controller for HAProxy.

  • NGINX offers support and maintenance for the NGINX Ingress Controller for Kubernetes.

  • Traefik is a fully featured Ingress controller (Let’s Encrypt, secrets, HTTP/2, WebSocket) with commercial support.

  • Ambassador API Gateway is an Envoy-based Ingress controller with community and commercial support.

In Kommander, Traefik is deployed by default as a well-suited Ingress controller.

Network Policies

NetworkPolicy is an API resource that controls traffic flow at the IP address or port level (OSI layer 3 or 4). It enables defining constraints on how a pod communicates with various network entities, such as endpoints and services.

A pod can be restricted to talk to other network entities through a selection of the following identifiers:

  • Namespaces that are allowed access. Pods in namespaces that are not allowed cannot talk to the targeted pod.

  • IP blocks that are allowed, regardless of the node or IP address assigned to the targeted pod.

  • Other pods that are allowed.

An example of a NetworkPolicy specification is:

CODE
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-konvoy-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          app: MyKonvoyApp
    - podSelector:
        matchLabels:
          app: MyKonvoyApp
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978

As shown in the example, when defining a pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the pod(s).
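
A common baseline is a policy that selects every pod in a namespace and allows no ingress traffic, so that all inbound access must then be granted explicitly by other policies. A minimal sketch:

CODE
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}   # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress         # no ingress rules are listed, so all inbound traffic is denied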

Adding entries to Pod /etc/hosts with HostAliases

The Pod API resource definition has a hostAliases field that allows adding entries to the /etc/hosts file of the Pod’s containers. This provides a Pod-level override of hostname resolution when DNS and other options are not applicable.

For example, to resolve foo.node.local and bar.node.local to 127.0.0.1, and foo.node.remote and bar.node.remote to 10.1.2.3, configure hostAliases as follows:

CODE
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-konvoy-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.node.local"
    - "bar.node.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.node.remote"
    - "bar.node.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"