How Kubernetes reduces Azure IoT Edge cost
Microsoft made an IoT Edge solution integrated into the Azure ecosystem. The information collected from the sensors by the Azure IoT Edge device can be processed, stored, and served by Azure cloud services (event and message processing, databases, AI + machine learning, analytics, etc.). Nice, but somebody must pay the bill for the Microsoft services. There are several other IoT Edge solutions, but if the company has fallen in love with Microsoft, this one must be used.
Free link to this article: https://pgillich.medium.com/how-kubernetes-reduces-azure-iot-edge-cost-21cb8e35e3b9?source=friends_link&sk=b173340adf928efbafedd3f7459678de
The Azure bill can be reduced if our solution does as much of the work as possible outside of Azure, especially on the IoT Edge device. Microsoft gives us the opportunity for this, because Azure IoT Edge expects a relatively strong device, at least a Raspberry Pi or equivalent, which is not kept busy just collecting data from the sensors. So we have spare processing power to substitute a subset of the Azure cloud features.
The way to decrease the Azure cloud cost is to send only the most important information to the Azure cloud, by:
- preprocessing, reducing and filtering the collected data on the device (see the sketch after this list)
- storing the raw collected data on the IoT Edge device
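A minimal sketch of such a filtering module, assuming the Python azure-iot-device SDK; the input/output names, the threshold and the local storage call are made-up placeholders that depend on the actual routes and services:

```python
# Sketch of a filtering IoT Edge module (assumption: Python azure-iot-device SDK).
# Only readings above a threshold are forwarded to the Azure cloud via edgeHub;
# everything else stays on the device (e.g. written to a local database).
import json

from azure.iot.device import IoTHubModuleClient, Message

TEMPERATURE_THRESHOLD = 30.0  # hypothetical business rule


def store_locally(reading):
    pass  # placeholder: write to a Time Series DB, SQL DB or object storage on the device


def main():
    # Connect to the local edgeHub using the environment injected by IoT Edge.
    client = IoTHubModuleClient.create_from_edge_environment()
    client.connect()
    try:
        while True:
            # Wait for a message routed to the (hypothetical) "sensorInput" input.
            msg = client.receive_message_on_input("sensorInput")
            reading = json.loads(msg.data)

            store_locally(reading)  # keep the raw data on the IoT Edge device

            # Forward only the "important" readings via the (hypothetical) "cloudOutput" output.
            if reading.get("temperature", 0.0) > TEMPERATURE_THRESHOLD:
                client.send_message_to_output(Message(json.dumps(reading)), "cloudOutput")
    finally:
        client.disconnect()


if __name__ == "__main__":
    main()
```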
These kinds of services can run as microservices, for example:
- database (Time Series Database, SQL DB)
- file and/or object storage
- message broker
- backends (own and third-party, for example: Node-RED, Elasticsearch + Fluentd + Kibana, machine learning)
- frontends
- operation & maintenance (for example: Prometheus, Grafana)
A few of the above are heavyweight, for example EFK, video processing or machine learning. It's not a problem, because Azure IoT Edge also supports the stronger PC architecture, not only the Raspberry Pi.
Azure IoT Edge runs Docker containers (called modules), which is great, but it has a few limitations:
- very limited ("dumb") container configuration
- the maximum number of containers is 50 (it was 10 and 20 earlier, but was increased again)
- the workload cannot be distributed across multiple nodes
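To illustrate the first limitation: a module in the IoT Edge deployment manifest is configured with little more than a container image and a Docker createOptions JSON string. A sketch of such a module entry (the module name, image and port numbers are made up for illustration):

```python
# Sketch of how a single module is configured in an IoT Edge deployment manifest:
# essentially only the image and the Docker "createOptions" can be set.
import json

create_options = {
    "HostConfig": {
        "PortBindings": {"1880/tcp": [{"HostPort": "1880"}]},
        "Binds": ["/data/nodered:/data"],
    }
}

nodered_module = {
    "version": "1.0",
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
        "image": "nodered/node-red:latest",
        # createOptions is embedded as an escaped JSON string in the manifest
        "createOptions": json.dumps(create_options),
    },
}

print(json.dumps({"modules": {"nodered": nodered_module}}, indent=2))
```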
There are several possible solutions for the above issues:
- Sideloading, where a self-implemented module starts its own containers. This module can manage additional containers based on its module twin configuration, but we have to implement the container management ourselves (refresh, health check, restart, etc.); see the sketch after this list.
- Virtual Kubelet, which represents Azure IoT Edge as a virtual K8s node. Forget it.
- Azure IoT Edge on Kubernetes.
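As referenced above, a minimal sketch of the sideloading idea, assuming the Docker SDK for Python; the desired container list is a made-up placeholder for what would really come from the module twin:

```python
# Sketch of the "sideloading" approach (assumption: Docker SDK for Python, docker-py).
# A custom IoT Edge module receives the desired container list from its module twin
# and manages those containers itself: pull, start, health check, restart.
import time

import docker

# Hypothetical desired state; in a real module this would come from the module twin.
DESIRED_CONTAINERS = {
    "nodered": "nodered/node-red:latest",
    "grafana": "grafana/grafana:latest",
}


def reconcile(client):
    for name, image in DESIRED_CONTAINERS.items():
        try:
            container = client.containers.get(name)
            container.reload()
            if container.status != "running":
                container.restart()  # simplistic health check: restart if not running
        except docker.errors.NotFound:
            client.images.pull(image)
            client.containers.run(
                image, name=name, detach=True, restart_policy={"Name": "always"}
            )


def main():
    client = docker.from_env()  # talks to the Docker socket mounted into the module
    while True:
        reconcile(client)
        time.sleep(30)


if __name__ == "__main__":
    main()
```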
This article highlights Azure IoT Edge on Kubernetes. In this solution, edgeAgent runs in our own K8s cluster, downloads the workload manifest from the Azure cloud and creates a K8s deployment for each module. Finally, K8s starts the modules. The modules can use edgeHub to communicate with the Azure cloud.
So, we can have a solution which gives us an industry-standard container orchestrator (multi-tenancy, HA and scaling, RBAC) and is integrated with Azure IoT Edge. See more details in the official Microsoft documentation:
- https://microsoft.github.io/iotedge-k8s-doc/introduction.html
- https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-kubernetes
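As a quick sanity check of what edgeAgent created, the module deployments can be listed, for example with the official Kubernetes Python client (a sketch; the namespace name depends on the installation and is only an assumption here):

```python
# Sketch: list the deployments that edgeAgent created for the IoT Edge modules
# (assumption: official Kubernetes Python client; the namespace is installation-dependent).
from kubernetes import client, config


def list_iot_edge_modules(namespace="msiot-edge"):  # namespace name is an assumption
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    apps = client.AppsV1Api()
    for deployment in apps.list_namespaced_deployment(namespace).items:
        status = deployment.status
        print(f"{deployment.metadata.name}: "
              f"{status.ready_replicas or 0}/{deployment.spec.replicas} replicas ready")


if __name__ == "__main__":
    list_iot_edge_modules()
```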
Depending on the requirements, a different number of K8s nodes can be set up:
- Only a few IoT sensors have to be handled: the resource need is low, so a single-node K8s with K3s is enough, see my earlier article Setup lightweight Kubernetes with K3s
- More K8s nodes are needed for processing, see my earlier article Setup On-premise Kubernetes with Kubeadm, MetalLB, Traefik and Vagrant
- HA K8s (multiple masters)
If you would like to compare several Kubernetes deployments, you can read my article Environment for comparing several on-premise Kubernetes distributions (K3s, KinD, kubeadm)
Several examples of Azure IoT Edge on Kubernetes can be found on the Internet. The most detailed ones are at Daniel's Tech Blog, for the Raspberry Pi. The articles describe how the Raspberry Pi-related issues can be solved, how Azure Pipelines can be used for deploying our own container images, and how Helm can be used, which is needed for a production-grade deployment:
- Running a Kubernetes cluster with k3s on Raspbian
- Using an Azure Pipelines agent on a k3s Kubernetes cluster on Raspbian
- Installing Helm and Azure IoT Edge on a k3s Kubernetes cluster on Raspbian
Other example setups:
- https://www.chriswoolum.dev/k3s-cluster-on-raspberry-pi
- https://www.hackster.io/waltercoan/cluster-of-raspberry-pis-in-k8s-to-teach-azure-iot-central-c8b9b6
- https://github.com/gloveboxes/Raspberry-Pi-Kubernetes-Cluster
- https://kevinsaye.wordpress.com/2020/06/10/running-azure-iot-edge-on-kubernetes-on-real-host/
Azure IoT Edge on Kubernetes is in a preview state, but it looks useful for running services on-premise in order to decrease the OPEX.