Setup On-premise Kubernetes with Kubeadm, MetalLB, Traefik and Vagrant

Peter Gillich
5 min read · Dec 24, 2020


Google published Kubernetes as open source in 2014, except for the cash cow: the external connectivity (load balancer, ingress, DNS, etc.). Other K8s cloud providers follow this strategy, too: they use their own solutions, because it is a rational cost model for them and fits their infrastructure. If a company would like to set up an on-premise solution, it has to be built from open-source components and/or non-free products, for example Red Hat OpenShift.

Components

Depending on the expectations and requirements, the components below may be used:

  • External Load Balancer for K8s Services (type: LoadBalancer), for example: MetalLB, Porter
  • Ingress Controller (reverse proxy, HTTP router), for example: Nginx, Contour, HAProxy, Traefik
  • Cert Manager, for example: Let’s Encrypt, BuyPass
  • External DNS, for example: ExternalDNS
  • Persistent Volume, for example: NFS, GlusterFS

Setting up a cluster which uses the above components is a long journey, see an article series here: https://richard-nunez.medium.com/my-journey-to-kubernetes-on-bare-metal-93f5d347c06f .

My article explains how an on-premise K8s cluster can be installed by Vagrant on VMs with minimal user intervention. It covers an external load balancer (MetalLB) and an ingress controller (Traefik). If you are interested in a simpler K8s deployment, you can read my earlier article Setup lightweight Kubernetes with K3s. These two K8s clusters can run at the same time on a Linux bare-metal machine, so it is a good opportunity to compare a lightweight K8s development environment and a VM-based multi-node on-premise K8s deployment.

The steps below are integrated into a make-based environment, see my article Environment for comparing several on-premise Kubernetes distributions (K3s, KinD, kubeadm).

VirtualBox and libvirt/KVM hypervisors are supported. Even the weird VM-in-VM setup works: Windows 10 host → VirtualBox → Ubuntu 18.04 middle → KVM → Ubuntu 18.04/20.04 guests.

If this article looks too simple, you can try Kubernetes The Hard Way ;-)

MetalLB does not support all CNI plugins, see more details here: https://metallb.universe.tf/installation/network-addons/ .
In order to avoid adding new routes, the upper half of the VM network is allocated to the MetalLB address pool. The upper end of the MetalLB address pool is used for static IP addresses.
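
A minimal sketch of such a Layer 2 address pool, using the ConfigMap-based configuration of the MetalLB version available at the time of writing (the actual ranges are set by the install scripts):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # upper half of the 192.168.26.0/24 VM network; the highest addresses
      # (for example 192.168.26.253-254) are requested statically by services
      - 192.168.26.128-192.168.26.254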

Flannel was selected as the CNI plugin.
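
The relevant steps look roughly like the sketch below (the exact commands are executed by the Vagrant provisioning scripts; the manifest URL is the one commonly used at the time of writing and may have moved since):

# Flannel expects the 10.244.0.0/16 pod CIDR by default:
kubeadm init --pod-network-cidr=10.244.0.0/16
# deploy the Flannel DaemonSet:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml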

The Ephemeral Containers feature is enabled, so a network-capture container is easy to deploy, see my article Capturing container traffic on Kubernetes. This feature is in alpha phase, so please be patient ;-)
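
For example, an ephemeral debug container with network tools can be attached to a running pod roughly like this (kubectl 1.20+ uses kubectl debug, older versions kubectl alpha debug; the pod/container names and the debug image are only illustrative):

# EphemeralContainers must be enabled as a feature gate on the cluster
kubectl debug -it <pod-name> --image=nicolaka/netshoot --target=<container-name>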

A few Helm charts are used from https://charts.helm.sh/stable , which is deprecated.

The Ingress endpoint extensions/v1beta1 is already deprecated and will be unavailable in K8s v1.22+. Some Helm charts still use the deprecated endpoint instead of the new networking.k8s.io/v1.
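
For reference, the difference is mostly in the backend definition of the rules; a fragment-only comparison (not a complete manifest):

# extensions/v1beta1 (deprecated):
backend:
  serviceName: my-nginx
  servicePort: 80

# networking.k8s.io/v1 (used in the Ingress example below):
backend:
  service:
    name: my-nginx
    port:
      number: 80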

Traefik 2 does not support the full Ingress functionality (for example: path prefix stripping), moreover there is no newer 1.x image than 1.7.19 on Docker Hub. The result is similar to my earlier article Setup lightweight Kubernetes with K3s: a Traefik 1 Helm chart is used.
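
A Traefik 1.7 installation from the (deprecated) stable repository looks roughly like the sketch below; the namespace, the value names (dashboard.enabled, dashboard.domain, loadBalancerIP) and the addresses are assumptions based on the stable/traefik chart, the actual values are set by the install scripts:

helm repo add stable https://charts.helm.sh/stable
helm install traefik stable/traefik --namespace kube-system \
  --set dashboard.enabled=true \
  --set dashboard.domain=oam.cluster-01.company.com \
  --set loadBalancerIP=192.168.26.254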

Install

The cluster can be installed into VMs by Vagrant on an Ubuntu or Windows host, as described at https://github.com/pgillich/kubeadm-vagrant/tree/master/Ubuntu . If you are interested in what happens under the hood, you can read the files below:

  • install-ubuntu.sh: how the VM OS is installed
  • Vagrantfile: how the master and worker nodes are installed (the SCRIPT and MINION_SCRIPT provisioning scripts)
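
Bringing up the cluster itself is only a few Vagrant commands; a sketch, assuming the provisioning scripts make kubectl usable for the vagrant user (the VM names depend on the Vagrantfile):

cd kubeadm-vagrant/Ubuntu
vagrant up          # create and provision the master and worker VMs
vagrant status      # list the VMs and their state
# log in to the master VM (use the name reported by vagrant status):
vagrant ssh <master-vm-name> -c "kubectl get node -o wide"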

Operation

The Kubernetes Dashboard and the Traefik dashboard were installed, see the Install description above. Kubernetes Metrics (the Metrics Server) is also installed.
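
A few commands to check these components; the dashboard URL below assumes the upstream Dashboard deployment defaults, the actual ingress rules and names depend on the installation:

# the Metrics Server provides the data for kubectl top:
kubectl top node
kubectl top pod -A

# one simple way to reach the Kubernetes Dashboard is kubectl proxy:
kubectl proxy
# then open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/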

Usage

There are several ways to expose your own service externally. There is a good summary of the possibilities here: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0 .

The examples below expose a simple web server, which can be created by the following command:

kubectl create deployment --image nginx:latest my-nginx
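
Before exposing it, you can check that the Deployment is running (kubectl create deployment labels the pods with app=my-nginx):

kubectl get deployment my-nginx
kubectl get pod -l app=my-nginx -o wide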

NodePort

The NodePort type is a simple way to expose a service on all nodes without ingress, for example (on a random port from the 30000–32767 range):

kubectl expose deployment my-nginx --name my-nginx-nodeport --port=80 --type=NodePort

The nodePort can be set in the Service resource file, but kubectl expose deployment does not support it. Here is an example to set it to 30800 in the Service resource spec:

kubectl expose deployment my-nginx --name my-nginx-nodeport --port=80 --type=NodePort --dry-run=client -o yaml |
  kubectl patch -f - --type='json' --patch='[{"op": "add", "path": "/spec/ports/0/nodePort", "value":30800}]' --dry-run=client -o yaml |
  kubectl apply -f -
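
Alternatively, the same Service can be created declaratively; a sketch of the equivalent manifest:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nodeport
spec:
  type: NodePort
  selector:
    app: my-nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30800
EOF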

Checking:

kubectl get node -o wide
kubectl get service my-nginx-nodeport -o wide
# Use node INTERNAL-IP and service PORT(S):
curl -s 192.168.26.10:30800
curl -s 192.168.26.11:30800
curl -s 192.168.26.12:30800

LoadBalancer

LoadBalancer exposes the service on an external IP address from a configured address pool. MetalLB selects the lowest free IP address (if one is not specified) and creates a floating IP for it. Example for exposing the web service on a specified IP address:

kubectl expose deployment my-nginx --name my-nginx-loadbalancer --port=80 --type=LoadBalancer --load-balancer-ip=192.168.26.253
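
The declarative equivalent is a Service manifest with type LoadBalancer and the requested IP address; a sketch:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-loadbalancer
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.26.253
  selector:
    app: my-nginx
  ports:
  - port: 80
    targetPort: 80
EOF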

Checking:

kubectl get service my-nginx-loadbalancer -o wide
# Use service EXTERNAL-IP and PORT(S):
curl -s 192.168.26.253:80

Ingress

Ingress can provide several services on the same IP:PORT, distinguished by HTTP routing rules (for example: host name, path). The external host name is cluster-01.company.com, which can be added to /etc/hosts, for example:

192.168.26.254  oam.cluster-01.company.com cluster-01.company.com

Example for creating a service and exposing it on ingress:

kubectl expose deployment my-nginx --port=80

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mysite-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
    traefik.ingress.kubernetes.io/rule-type: "PathPrefixStrip"
spec:
  rules:
  - host: "cluster-01.company.com"
    http:
      paths:
      - path: /my-nginx
        pathType: Prefix
        backend:
          service:
            name: my-nginx
            port:
              number: 80
EOF

Checking:

curl -s http://cluster-01.company.com/my-nginx
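
If the host name is not added to /etc/hosts, the Host header can be set explicitly against the ingress external IP (192.168.26.254 in the example above):

curl -s -H "Host: cluster-01.company.com" http://192.168.26.254/my-nginx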

Summary

Three different methods were presented above for exposing a service. A summary of these services can be printed by the command below:

kubectl get service -o wide -l app=my-nginx

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE   SELECTOR
my-nginx                ClusterIP      10.102.86.1      <none>           80/TCP         50m   app=my-nginx
my-nginx-loadbalancer   LoadBalancer   10.103.244.121   192.168.26.253   80:30427/TCP   68m   app=my-nginx
my-nginx-nodeport       NodePort       10.107.191.14    <none>           80:30800/TCP   80m   app=my-nginx

References

Install steps are described at https://github.com/pgillich/kubeadm-vagrant/tree/master/Ubuntu . If you are interested in other deployments, you can read the articles below:
