Attach Cluster Using Tunnel
Using the CLI to attach a Kubernetes Cluster through a Tunnel
Identify the management cluster endpoint
Obtain the hostname and CA certificate for the management cluster:
hostname=$(kubectl get service -n kommander kommander-traefik -o go-template='{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}')
b64ca_cert=$(kubectl get secret -n cert-manager kommander-ca -o=go-template='{{index .data "tls.crt"}}')
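Both values are used in later steps, so it is worth confirming they are non-empty before continuing. One minimal check, assuming openssl is available locally:
echo "hostname=${hostname}"
echo "${b64ca_cert}" | base64 -d | openssl x509 -noout -subject -enddate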
Specify a workspace namespace
Obtain the desired workspace namespace on the management cluster for the tunnel gateway:
namespace=$(kubectl get workspace default-workspace -o jsonpath="{.status.namespaceRef.name}")
Alternatively, if you wish to create a new workspace instead of using an existing workspace:
workspace=sample
namespace=${workspace}
cat > workspace.yaml <<EOF
apiVersion: workspaces.kommander.mesosphere.io/v1alpha1
kind: Workspace
metadata:
  annotations:
    kommander.mesosphere.io/display-name: ${workspace}
  name: ${workspace}
spec:
  namespaceName: ${namespace}
EOF
kubectl apply -f workspace.yaml
You can verify the workspace exists using:
kubectl get workspace ${workspace}
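The workspace's backing namespace should be created shortly afterwards; you can confirm it exists with:
kubectl get namespace ${namespace}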
Create a tunnel gateway
Create a tunnel gateway on the management cluster to listen for tunnel agents on remote clusters:
NOTE: Kommander uses Traefik 2 ingress, which requires strip-prefix middleware to be defined explicitly as a Kubernetes API object, as opposed to a simple annotation. Kommander provides a default middleware that supports creating tunnels only on the /dkp/tunnel URL prefix. This is selected with the extra annotation traefik.ingress.kubernetes.io/router.middlewares: kommander-stripprefixes-kubetunnel@kubernetescrd, as shown in the code sample that follows. If you want to expose a tunnel on a different URL prefix, you must manage your own middleware configuration.
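For reference, a custom strip-prefix middleware would look roughly like the following sketch, assuming the Traefik 2 CRDs shipped with Kommander (traefik.containo.us/v1alpha1); the middleware name and the /custom/tunnel prefix are placeholders. You would then reference it from the gateway's extraAnnotations as ${namespace}-stripprefix-custom-tunnel@kubernetescrd (that is, <middleware namespace>-<middleware name>@kubernetescrd) instead of the default middleware, and set the matching urlPathPrefix.
cat > middleware.yaml <<EOF
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  namespace: ${namespace}
  name: stripprefix-custom-tunnel
spec:
  stripPrefix:
    prefixes:
    - /custom/tunnel
EOF
kubectl apply -f middleware.yaml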
cacert_secret=kubetunnel-ca
gateway=sample-gateway
cat > gateway.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  namespace: ${namespace}
  name: ${cacert_secret}
data:
  ca.crt: ${b64ca_cert}
---
apiVersion: kubetunnel.d2iq.io/v1alpha1
kind: TunnelGateway
metadata:
  namespace: ${namespace}
  name: ${gateway}
spec:
  ingress:
    caSecretRef:
      namespace: ${namespace}
      name: ${cacert_secret}
    loadBalancer:
      hostname: ${hostname}
    urlPathPrefix: /dkp/tunnel
    extraAnnotations:
      kubernetes.io/ingress.class: kommander-traefik
      traefik.ingress.kubernetes.io/router.tls: "true"
      traefik.ingress.kubernetes.io/router.middlewares: kommander-stripprefixes-kubetunnel@kubernetescrd
EOF
kubectl apply -f gateway.yaml
You can verify the gateway exists using the command:
kubectl get tunnelgateway -n ${namespace} ${gateway}
Connecting a remote cluster
Create a tunnel connector
Create a tunnel connector on the management cluster for the remote cluster:
connector=sample-connector
cat > connector.yaml <<EOF
apiVersion: kubetunnel.d2iq.io/v1alpha1
kind: TunnelConnector
metadata:
  namespace: ${namespace}
  name: ${connector}
spec:
  gatewayRef:
    name: ${gateway}
EOF
kubectl apply -f connector.yaml
You can verify the connector exists using:
kubectl get tunnelconnector -n ${namespace} ${connector}
Wait for the tunnel connector to reach Listening
state and then export the agent manifest:
while [ "$(kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath="{.status.state}")" != "Listening" ]
do
sleep 5
done
manifest=$(kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath="{.status.tunnelAgent.manifestsRef.name}")
while [ -z ${manifest} ]
do
sleep 5
manifest=$(kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath="{.status.tunnelAgent.manifestsRef.name}")
done
kubectl get secret -n ${namespace} ${manifest} -o jsonpath='{.data.manifests\.yaml}' | base64 -d > manifest.yaml
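Optionally, inspect the exported manifest before moving it to the managed cluster, for example to list the resource kinds it will create:
grep -E '^kind:' manifest.yaml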
Create a network policy for the tunnel server
This step is optional, but improves security by restricting which remote hosts can connect to the tunnel.
Apply a network policy that restricts tunnel access to specific namespaces and IP blocks. The following example permits connections from pods running in the kube-federation-system and kommander
namespaces, from pods running in namespaces with a label kubetunnel.d2iq.io/networkpolicy
matching the tunnel connector name and namespace, and from remote clusters with IP addresses in the ranges 192.0.2.0 to 192.0.2.255 and 203.0.113.0 to 203.0.113.255:
cat > net.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: ${namespace}
  name: ${connector}-deny
  labels:
    kubetunnel.d2iq.io/tunnel-connector: ${connector}
    kubetunnel.d2iq.io/networkpolicy-type: "tunnel-server"
spec:
  podSelector:
    matchLabels:
      kubetunnel.d2iq.io/tunnel-connector: ${connector}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: ${namespace}
  name: ${connector}-allow
  labels:
    kubetunnel.d2iq.io/tunnel-connector: ${connector}
    kubetunnel.d2iq.io/networkpolicy-type: "tunnel-server"
spec:
  podSelector:
    matchLabels:
      kubetunnel.d2iq.io/tunnel-connector: ${connector}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: "kube-federation-system"
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: "kommander"
    - namespaceSelector:
        matchLabels:
          kubetunnel.d2iq.io/networkpolicy: ${connector}-${namespace}
    - ipBlock:
        cidr: 192.0.2.0/24
    - ipBlock:
        cidr: 203.0.113.0/24
EOF
kubectl apply -f net.yaml
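With this policy in place, pods in other namespaces on the management cluster can be granted access to the tunnel by labeling their namespace to match the third namespaceSelector above. For example, for a hypothetical namespace named my-clients:
kubectl label namespace my-clients kubetunnel.d2iq.io/networkpolicy=${connector}-${namespace}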
Set up the managed cluster
Copy the manifest.yaml
file to the managed cluster and deploy the tunnel agent:
kubectl apply --context managed -f manifest.yaml
You can check the status of the created pods using:
kubectl get pods --context managed -n kubetunnel
After a short time, expect to see a post-kubeconfig
pod that reaches Completed
state and a tunnel-agent
pod that stays in Running
state.
NAME READY STATUS RESTARTS AGE
post-kubeconfig-j2ghk 0/1 Completed 0 14m
tunnel-agent-f8d9f4cb4-thx8h 0/1 Running 0 14m
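If the tunnel-agent pod does not reach Running state, its logs are the first place to look. Assuming the Deployment is named tunnel-agent, as the pod name above suggests:
kubectl logs --context managed -n kubetunnel deployment/tunnel-agent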
Add the managed cluster into Kommander
On the management cluster, wait for the tunnel to be connected by the tunnel agent:
while [ "$(kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath="{.status.state}")" != "Connected" ]
do
sleep 5
done
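If the state does not reach Connected, you can query it directly while you investigate the agent on the managed cluster:
kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath='{.status.state}'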
Add the cluster into Kommander:
managed=private-cluster
display_name=${managed}
cat > kommander.yaml <<EOF
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  namespace: ${namespace}
  name: ${managed}
  annotations:
    kommander.mesosphere.io/display-name: ${display_name}
spec:
  clusterTunnelConnectorRef:
    name: ${connector}
EOF
kubectl apply -f kommander.yaml
Wait for the managed cluster to join the Kommander cluster:
while [ "$(kubectl get kommandercluster -n ${namespace} ${managed} -o jsonpath='{.status.phase}')" != "Joined" ]
do
sleep 5
done
kubefed=$(kubectl get kommandercluster -n ${namespace} ${managed} -o jsonpath="{.status.kubefedclusterRef.name}")
while [ -z "${kubefed}" ]
do
sleep 5
kubefed=$(kubectl get kommandercluster -n ${namespace} ${managed} -o jsonpath="{.status.kubefedclusterRef.name}")
done
kubectl wait --for=condition=ready --timeout=60s kubefedcluster -n kube-federation-system ${kubefed}
kubectl get kubefedcluster -n kube-federation-system ${kubefed}
Using a remote cluster
To access services running on the remote cluster from the management cluster, connect to the tunnel proxy.
You can use one of these three methods:
If the client program supports use of a kubeconfig file, use the managed cluster’s kubeconfig.
If the client program supports SOCKS5 proxies, use the proxy directly.
Otherwise, deploy a proxy server on the management cluster.
Managed cluster service
These sections require a service to be running on the managed cluster.
As an example, start the following service on the managed cluster:
service_namespace=test
service_name=webserver
service_port=8888
service_endpoint=${service_name}.${service_namespace}.svc.cluster.local:${service_port}
cat > nginx.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ${service_namespace}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ${service_namespace}
  name: nginx-deployment
  labels:
    app: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: ${service_namespace}
  name: ${service_name}
spec:
  selector:
    app: nginx-app
  type: ClusterIP
  ports:
  - targetPort: 80
    port: ${service_port}
EOF
kubectl apply -f nginx.yaml
kubectl rollout status deploy -n ${service_namespace} nginx-deployment
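Before involving the tunnel, you can confirm on the managed cluster that the Service exists and has ready endpoints:
kubectl get service -n ${service_namespace} ${service_name}
kubectl get endpoints -n ${service_namespace} ${service_name}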
On the managed cluster, a client Job can access this service using:
cat > curl.yaml <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  template:
    spec:
      containers:
      - name: curl
        image: curlimages/curl:7.76.0
        command: ["curl", "--silent", "--show-error", "http://${service_endpoint}"]
      restartPolicy: Never
  backoffLimit: 4
EOF
kubectl apply -f curl.yaml
kubectl wait --for=condition=complete job curl
podname=$(kubectl get pods --selector=job-name=curl --field-selector=status.phase=Succeeded -o jsonpath='{.items[0].metadata.name}')
kubectl logs ${podname}
The final command returns the default Nginx web page:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Use of kubeconfig file
This is primarily useful for running kubectl
commands on the management cluster to monitor the managed cluster.
On the management cluster, a kubeconfig
file for the managed cluster configured to use the tunnel proxy is available as a Secret. The Secret’s name can be identified using:
kubeconfig_secret=$(kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath='{.status.kubeconfigRef.name}')
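The Job below mounts this Secret directly. If you want to inspect the kubeconfig itself, you can extract it; the Secret stores it under a kubeconfig key, as implied by the KUBECONFIG path in the Job below. Note that the proxy endpoint embedded in the file is a cluster-internal address, so the kubeconfig is generally only usable from inside the management cluster:
kubectl get secret -n ${namespace} ${kubeconfig_secret} -o jsonpath='{.data.kubeconfig}' | base64 -d > managed-kubeconfig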
After setting service_namespace
and service_name
to match the service running on the managed cluster, run the following on the management cluster:
cat > get-service.yaml <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: get-service
spec:
  template:
    spec:
      containers:
      - name: kubectl
        image: bitnami/kubectl:1.19
        command: ["kubectl", "get", "service", "-n", "${service_namespace}", "${service_name}"]
        env:
        - name: KUBECONFIG
          value: /tmp/kubeconfig/kubeconfig
        volumeMounts:
        - name: kubeconfig
          mountPath: /tmp/kubeconfig
      volumes:
      - name: kubeconfig
        secret:
          secretName: "${kubeconfig_secret}"
      restartPolicy: Never
  backoffLimit: 4
EOF
kubectl apply -n ${namespace} -f get-service.yaml
kubectl wait --for=condition=complete --timeout=5m job -n ${namespace} get-service
podname=$(kubectl get pods -n ${namespace} --selector=job-name=get-service --field-selector=status.phase=Succeeded -o jsonpath='{.items[0].metadata.name}')
kubectl logs -n ${namespace} ${podname}
Direct use of SOCKS5 proxy
To use the SOCKS5 proxy directly, obtain the SOCKS5 proxy endpoint using:
proxy_service=$(kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath='{.status.tunnelServer.serviceRef.name}')
socks_proxy=$(kubectl get service -n ${namespace} "${proxy_service}" -o jsonpath='{.spec.clusterIP}{":"}{.spec.ports[?(@.name=="proxy")].port}')
Provide the value of ${socks_proxy}
as the SOCKS5 proxy to your client.
For example, since curl
supports SOCKS5 proxies, the managed service started above can be accessed from the management cluster by adding the SOCKS5 proxy to the curl
command. After setting service_endpoint
to the service endpoint, on the management cluster run:
cat > curl.yaml <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  template:
    spec:
      containers:
      - name: curl
        image: curlimages/curl:7.76.0
        command: ["curl", "--silent", "--show-error", "--socks5-hostname", "${socks_proxy}", "http://${service_endpoint}"]
      restartPolicy: Never
  backoffLimit: 4
EOF
kubectl apply -f curl.yaml
kubectl wait --for=condition=complete --timeout=5m job curl
podname=$(kubectl get pods --selector=job-name=curl --field-selector=status.phase=Succeeded -o jsonpath='{.items[0].metadata.name}')
kubectl logs ${podname}
The final command returns the same output as for the job on the managed cluster, demonstrating that the job on the management cluster accessed the service running on the managed cluster.
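If you prefer to experiment from a workstation instead of a Job, one possible approach (a sketch, not part of the documented workflow) is to forward the proxy Service to a local port and point curl at it; proxy is the Service port name used in the jsonpath query above:
kubectl port-forward -n ${namespace} service/${proxy_service} 1080:proxy &
sleep 2
curl --silent --show-error --socks5-hostname localhost:1080 "http://${service_endpoint}"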
Use of deployed proxy on management cluster
To deploy a proxy on the management cluster, obtain the SOCKS5 proxy endpoint using:
proxy_service=$(kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath='{.status.tunnelServer.serviceRef.name}')
socks_proxy=$(kubectl get service -n ${namespace} "${proxy_service}" -o jsonpath='{.spec.clusterIP}{":"}{.spec.ports[?(@.name=="proxy")].port}')
Provide the value of ${socks_proxy}
as the SOCKS5 proxy to a proxy deployed on the management cluster. After setting service_endpoint
to the service endpoint, on the management cluster run:
cat > nginx-proxy.yaml <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nginx-proxy-crt
spec:
  secretName: nginx-proxy-crt-secret
  dnsNames:
  - nginx-proxy-service.${namespace}.svc.cluster.local
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: kubernetes-ca
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-proxy
  labels:
    app: nginx-proxy-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-proxy-app
  template:
    metadata:
      labels:
        app: nginx-proxy-app
    spec:
      containers:
      - name: nginx-proxy
        image: mesosphere/ghostunnel:v1.5.3-server-backend-proxy
        args:
        - "server"
        - "--listen=:443"
        - "--target=${service_endpoint}"
        - "--cert=/etc/certs/tls.crt"
        - "--key=/etc/certs/tls.key"
        - "--cacert=/etc/certs/ca.crt"
        - "--unsafe-target"
        - "--disable-authentication"
        env:
        - name: ALL_PROXY
          value: socks5://${socks_proxy}
        ports:
        - containerPort: 443
        volumeMounts:
        - name: certs
          mountPath: /etc/certs
      volumes:
      - name: certs
        secret:
          secretName: nginx-proxy-crt-secret
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-proxy-service
spec:
  selector:
    app: nginx-proxy-app
  type: ClusterIP
  ports:
  - targetPort: 443
    port: 8765
EOF
kubectl apply -n ${namespace} -f nginx-proxy.yaml
kubectl rollout status deploy -n ${namespace} nginx-proxy
proxy_port=$(kubectl get service -n ${namespace} nginx-proxy-service -o jsonpath='{.spec.ports[0].port}')
Any client running on the management cluster can now access the service running on the managed cluster using the proxy service endpoint. Note in the following that the curl
job runs in the same namespace as the proxy, to provide access to the CA certificate secret.
cat > curl.yaml <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  template:
    spec:
      containers:
      - name: curl
        image: curlimages/curl:7.76.0
        command:
        - curl
        - --silent
        - --show-error
        - --cacert
        - /etc/certs/ca.crt
        - https://nginx-proxy-service.${namespace}.svc.cluster.local:${proxy_port}
        volumeMounts:
        - name: certs
          mountPath: /etc/certs
      volumes:
      - name: certs
        secret:
          secretName: nginx-proxy-crt-secret
      restartPolicy: Never
  backoffLimit: 4
EOF
kubectl apply -n ${namespace} -f curl.yaml
kubectl wait --for=condition=complete --timeout=5m job -n ${namespace} curl
podname=$(kubectl get pods -n ${namespace} --selector=job-name=curl --field-selector=status.phase=Succeeded -o jsonpath='{.items[0].metadata.name}')
kubectl logs -n ${namespace} ${podname}
The final command returns the same output as the job on the managed cluster, demonstrating that the job on the management cluster accessed the service running on the managed cluster.