Kubernetes Off the Cloud: Orchestration for IoT and Bare Metal

Kubernetes has taken the web services world by storm in recent years, but did you know you can get the same benefits of process orchestration and self-healing services on any device, from a Raspberry Pi connected to weather sensors to small, on-prem file servers?

For our example project, let's assume we have a small Linux computer in our office that needs to run a few services (say, a GraphQL server) and make them available to the network.

The folks at Canonical (the maintainers of Ubuntu Linux!) have just the tool for the job.

MicroK8s is a lightweight, fast, powerful, and zero-ops Kubernetes for developers, IoT, and edge. It is designed to run on any Linux, from a workstation to an IoT device, with the same consistent experience. We will cover the installation process, enabling necessary services and basic commands to interact with your MicroK8s cluster.

Installation

To begin, we install the microk8s snap package on every machine that should be part of the cluster.

sudo snap refresh
sudo snap install microk8s --classic

You can skip to the next section if you only run K8s on one machine.

Pick one machine to be your "controller"; it will house the control plane APIs for now. On that machine, run microk8s add-node. The output of this command includes a join command that you can run on any other machine to add it to the cluster. It will take the form microk8s join <ipaddress>:<port>/<key>

Once the other nodes have finished joining, you can verify that they were added by running microk8s kubectl get nodes from any machine.
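
Putting the whole clustering flow together, it looks roughly like this (the address, port, and key are placeholders; use the exact join command that your own add-node output prints):

# on the machine you picked as the controller
microk8s add-node

# on each additional machine, paste the join command printed by add-node
microk8s join <ipaddress>:<port>/<key>

# from any machine, confirm every node shows up and is Ready
microk8s kubectl get nodes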

Enable Snapshot Garbage Collection

On smaller machines and non-cloud-managed Kubernetes clusters, the node can run out of disk space and start causing problems. A common culprit is the cache of container images (stored by containerd as snapshots) that MicroK8s keeps around to make pod startup fast.

There is a very simple fix for this that I recommend applying when first setting up. Unfortunately, it is not covered in the official docs, and it took me many hours of troubleshooting to figure out initially.
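
If you want to check whether cached images are what is eating your disk, containerd's image list plus a quick df is usually enough of a diagnostic (the path below assumes the default MicroK8s snap location):

# list the container images containerd has cached on this node
microk8s ctr images ls

# see how much space is left on the filesystem holding MicroK8s data
df -h /var/snap/microk8s/common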

The kubelet is an agent that runs on each node in the cluster and is responsible for managing the pods scheduled to that node. In the kubelet's arguments file, we can add flags to enable garbage collection of old images.

You can access this config file with Vim: vim /var/snap/microk8s/current/args/kubelet

Add the following lines to the bottom of the file. You can also adjust the values if you want to make the garbage collection more or less aggressive.

--image-gc-high-threshold=70
--image-gc-low-threshold=60

After editing this file, you need to restart microk8s:

microk8s stop && microk8s start
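
If you want to double-check that the flags took effect, a minimal sanity check is to re-read the args file and, optionally, ask the kubelet for its effective configuration (replace <node-name> with a name from microk8s kubectl get nodes):

# confirm the two flags are at the bottom of the kubelet args file
cat /var/snap/microk8s/current/args/kubelet

# optional: the kubelet's configz endpoint reports the effective GC thresholds
microk8s kubectl get --raw "/api/v1/nodes/<node-name>/proxy/configz"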

Enabling Addons

One of the great things about MicroK8s is its built-in "addons" for standard Kubernetes functions. These are features that typically come preconfigured with cloud-hosted Kubernetes offerings but can be difficult to install on a self-hosted machine.

For this example, we will install the following:

cert-manager: This allows us to provision SSL certificates for our cluster. It is only necessary if you plan on exposing the cluster to the public web and attaching a domain to it.

ingress: This will allow us to define path-based routes to our various services. (/graphql will route us to our GraphQL server, for example.)

We will also install metrics-server and rbac, which provide CPU/RAM usage metrics and role-based access control, respectively.

microk8s enable cert-manager ingress metrics-server rbac
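
MicroK8s lists enabled addons as part of its status output, so a quick way to confirm all four came up is:

# wait for the cluster to be ready, then list enabled and disabled addons
microk8s status --wait-ready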

Deploying a Service

To deploy a basic web service, we will create deployment, service, and ingress objects. If you don't already know about the different Kubernetes objects, you can get up to speed with the official docs.

Deployment

The deployment manifest describes how our web service should be deployed; it handles things like which env values should be exposed to the container, which port should be exposed, and how many replicas should be created and maintained.

If we were going to deploy a simple nginx web server, the deployment manifest could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

Service

The Kubernetes service allows us to access the pods in the deployment no matter how many there are or how many times they get torn down and rebuilt. In this example, we have 3 nginx pods created by our deployment, but instead of accessing them directly via IP address, we can point a service at the deployment and it will automatically load balance between the available pods. (Notice that the app label in the deployment matches the selector on the service.)

apiVersion: v1
kind: Service
metadata:
  name: website-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
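
Once the manifests are applied (see the end of this section), a quick way to confirm the service's selector actually matches the deployment's pods is to look at its endpoints; you should see one IP per nginx replica:

# the ENDPOINTS column should list three pod IPs, one per replica
microk8s kubectl get endpoints website-service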

Ingress

Now that we have described a service that routes traffic internally, we must expose it to the outside world. We can do this with an Ingress object.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: http-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: website-service
            port:
              number: 80

The above ingress accepts traffic on port 80 at the path / and forwards it to the website-service we just described.

By changing the path, we change the address the outside world uses to reach our website. If we change it to /website, then to access our service we would need to enter http://<your address or domain>/website

To deploy all of these objects, you can run microk8s kubectl apply -f <filename>
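
For example, assuming you saved the three manifests above as deployment.yaml, service.yaml, and ingress.yaml (the filenames are up to you), you can apply them all at once and watch the pods come up:

# apply all three manifests
microk8s kubectl apply -f deployment.yaml -f service.yaml -f ingress.yaml

# watch the nginx pods start
microk8s kubectl get pods -w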

Once all the objects have deployed, you can access your website at your domain or IP address + whatever path you set in the ingress. (http://example.com/website)
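
A quick end-to-end check from any machine on the network (the address is a placeholder for your node's IP or domain):

# confirm the ingress exists and note its rules
microk8s kubectl get ingress http-ingress

# request the site through the ingress; you should get the default nginx welcome page
curl http://<your address or domain>/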

Kubernetes is a massive and complex topic, but hopefully, this gets you started on building more robust and fault-tolerant infrastructure for your apps!