Setup On-premise Kubernetes with Kubeadm, MetalLB, Traefik and Vagrant

Peter Gillich
5 min read · Dec 24, 2020


Google published Kubernetes as open source in 2014, except for a cash cow: the external connectivity (load balancer, ingress, DNS, etc.). Other K8s cloud providers follow this strategy, too. They use their own solutions because it is a rational cost model for them and fits their infrastructure. If a company would like to set up an on-premise cluster, this gap has to be filled with open-source solutions and/or with non-free products, for example: Red Hat OpenShift.


Depending on the expectations and requirements, the components below may be used:

  • External Load Balancer for K8s Services (type: LoadBalancer), for example: MetalLB, Porter
  • Ingress Controller (reverse proxy, HTTP router), for example: Nginx, Contour, HAProxy, Traefik
  • Cert Manager, for example: Let’s Encrypt, BuyPass
  • External DNS, for example: ExternalDNS
  • Persistent Volume, for example: NFS, GlusterFS

Setting up a cluster that uses the above components is a long journey; see an article series here: .

My article explains how an on-premise K8s cluster can be installed by Vagrant on VMs with minimal user intervention. It covers an external load balancer (MetalLB) and an Ingress controller (Traefik). If you are interested in a simpler K8s deployment, you can read my earlier article Setup lightweight Kubernetes with K3s. These two K8s clusters can run at the same time on a Linux bare-metal machine, so it is a good opportunity to compare a lightweight K8s development environment and a VM-based, multi-node, on-premise K8s deployment.

The steps below are integrated into a make-based environment; see my article Environment for comparing several on-premise Kubernetes distributions (K3s, KinD, kubeadm).

VirtualBox and libvirt/KVM hypervisors are supported. The weird VM-in-VM also works: Windows 10 host → VirtualBox → Ubuntu 18.04 middle → KVM → Ubuntu 18.04/20.04 guests.

If this article looks too simple, you can try Kubernetes The Hard Way ;-)

MetalLB does not support all CNI plugins; see more details here: .
In order to avoid adding new routes, the upper half of the VM network is allocated for the MetalLB address pool. The upper end of the MetalLB address pool is used for static IP addresses.
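As an illustration of such an address pool, here is a minimal sketch of a MetalLB layer 2 configuration in the ConfigMap style used at the time of writing (MetalLB v0.9.x). The 192.168.33.0/24 subnet and the pool range are assumptions; use the upper half of your actual VM network:

```yaml
# Hypothetical MetalLB layer2 configuration (ConfigMap style, MetalLB v0.9.x).
# The subnet 192.168.33.0/24 is an assumed Vagrant VM network.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.33.128-192.168.33.250   # upper half of the VM network
```

The addresses above the pool's upper end (here: .251-.254) remain free for static IP assignments.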

Flannel was selected as CNI plugin.

The Ephemeral Containers feature is enabled, so a network capture container is easy to deploy; see my article Capturing container traffic on Kubernetes. This feature is in alpha phase, so please be patient ;-)

A few Helm charts are used from , which is deprecated.

The Ingress endpoint extensions/v1beta1 is already deprecated and will be unavailable in K8s v1.22+. Some Helm charts still use the deprecated endpoint instead of the new networking.k8s.io/v1 API.
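For reference, a minimal sketch of the difference between the two API versions (the resource and service names are illustrative):

```yaml
# Deprecated form (unavailable in K8s v1.22+):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-nginx
spec:
  rules:
  - http:
      paths:
      - path: /my-nginx
        backend:
          serviceName: my-nginx
          servicePort: 80
---
# New form:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-nginx
spec:
  rules:
  - http:
      paths:
      - path: /my-nginx
        pathType: Prefix
        backend:
          service:
            name: my-nginx
            port:
              number: 80
```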

Traefik 2 does not support the full Ingress functionality (for example: path prefix stripping), and there is no 1.x image newer than 1.7.19 on Docker Hub. The result is similar to my earlier article Setup lightweight Kubernetes with K3s: a Traefik 1 Helm chart is used.


The cluster can be installed into VMs by Vagrant on an Ubuntu or Windows host, as described at . If you are interested in what happens under the hood, you can read the files below:

  • how the VM OS is installed
  • Vagrantfile: how the master and worker nodes are installed (SCRIPT and MINION_SCRIPT scripts)


The Kubernetes Dashboard and the Traefik Dashboard were installed; see the Install description above. Kubernetes Metrics is also installed.


There are several ways to expose a service externally. There is a good summary of the possibilities here: .

The examples below expose a simple web server, which can be created with the following command:

kubectl create deployment --image nginx:latest my-nginx


The NodePort type is a simple way to expose a service on all nodes without an ingress (on a random port from the range 30000–32767), for example:

kubectl expose deployment my-nginx --name my-nginx-nodeport --port=80 --type=NodePort

The nodePort can be set in the Service resource file, but kubectl expose deployment does not support it. Here is an example that sets it to 30800 in the Service resource spec:

kubectl expose deployment my-nginx --name my-nginx-nodeport --port=80 --type=NodePort \
    --dry-run=client -o yaml | \
  kubectl patch -f - --type='json' \
    --patch='[{"op": "add", "path": "/spec/ports/0/nodePort", "value":30800}]' \
    --dry-run=client -o yaml | \
  kubectl apply -f -
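Alternatively, the same Service can be written out as a resource file. A minimal sketch, assuming the deployment's pods carry the app=my-nginx label (the name and port mirror the commands above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nodeport
  labels:
    app: my-nginx
spec:
  type: NodePort
  selector:
    app: my-nginx       # assumed pod label from 'kubectl create deployment'
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30800     # fixed port instead of a random one from 30000-32767
```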


kubectl get node -o wide
kubectl get service my-nginx-nodeport -o wide

# Use node INTERNAL-IP and service PORT(S):
curl -s http://<node1-INTERNAL-IP>:<PORT>
curl -s http://<node2-INTERNAL-IP>:<PORT>
curl -s http://<node3-INTERNAL-IP>:<PORT>


LoadBalancer exposes the service on a public IP address from a specified address pool. MetalLB selects the lowest free IP address (if one is not specified) and creates a floating IP. Example of exposing the web service on a specific IP address:

kubectl expose deployment my-nginx --name my-nginx-loadbalancer --port=80 --type=LoadBalancer --load-balancer-ip=<ip-from-pool>
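The equivalent Service resource file is a short sketch; the loadBalancerIP value is an assumption — pick a free address from your MetalLB pool:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-loadbalancer
  labels:
    app: my-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.33.250   # assumed address from the MetalLB pool
  selector:
    app: my-nginx                  # assumed pod label from 'kubectl create deployment'
  ports:
  - port: 80
```

If loadBalancerIP is omitted, MetalLB assigns the lowest free address from the pool, as described above.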


kubectl get service my-nginx-loadbalancer -o wide

# Use service EXTERNAL-IP and PORT(S):
curl -s http://<EXTERNAL-IP>:<PORT>


Ingress can provide several services on the same IP:PORT, distinguished by HTTP routing rules (for example: host name, path). The external host name can be added to /etc/hosts.

Example for creating a service and exposing it on ingress:

kubectl expose deployment my-nginx --port=80

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mysite-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
    traefik.ingress.kubernetes.io/rule-type: "PathPrefixStrip"
spec:
  rules:
  - host: ""
    http:
      paths:
      - path: /my-nginx
        pathType: Prefix
        backend:
          service:
            name: my-nginx
            port:
              number: 80
EOF


curl -s http://<hostname>/my-nginx


Three different methods were presented above for exposing a service. A summary of these services can be printed with the command below:

kubectl get service -o wide -l app=my-nginx

NAME                    TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE   SELECTOR
my-nginx                ClusterIP      …            <none>        80/TCP         50m   app=my-nginx
my-nginx-loadbalancer   LoadBalancer   …            …             80:30427/TCP   68m   app=my-nginx
my-nginx-nodeport       NodePort       …            <none>        80:30800/TCP   80m   app=my-nginx


Install steps are described at . If you are interested in other deployments, you can read the articles below: