This guide can be used to set up a vanilla/lightweight dual-stack Kubernetes (k3s) configuration on a Raspberry Pi. You can add more k3s nodes later to achieve high availability if needed. At the end, an IPv4/IPv6 Nextcloud instance is configured to test the dual-stack functionality.

Everything is mostly geared towards home-lab use, but can easily be adjusted to fit your needs.

This guide also focuses heavily on Cilium, an open source, cloud native solution for providing, securing, and observing network connectivity between workloads, fueled by the revolutionary kernel technology eBPF. If you just want to set up k3s with its default network layer, simply remove the following lines from k3s/config.yaml:

flannel-backend: none
disable-network-policy: true # Cilium will replace this
disable-kube-proxy: true
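
For reference, a minimal sketch of what the complete k3s/config.yaml could look like when Cilium replaces the default stack, combining the lines above with the dual-stack ranges used later in this guide (the exact file ships in the demo repository):

# /etc/rancher/k3s/config.yaml
flannel-backend: none
disable-network-policy: true # Cilium will replace this
disable-kube-proxy: true
cluster-cidr: 10.42.0.0/16,fd01::/48
service-cidr: 10.43.0.0/16,fd02::/112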

In this blog post I rely on the NGINX ingress controller, since it is one of the most mature and widely used ways to accept incoming traffic into your Kubernetes cluster. It could also be replaced with Cilium’s Ingress controller or the Gateway API; I will probably address those in a later blog post.

Disclaimer: all information/data is based on my own experience, upstream documentation and several other blog posts online. This is meant purely to be educational.

Requirements

  • Raspberry Pi 4 (min. 4 GB RAM; any other machine can be used for this if desired)
  • SD card + reader
  • Raspberry Pi imager (available as flatpak on Linux).
  • Raspberry Pi OS Lite (we want the base OS to be as lightweight as possible)
  • Kustomize binary
  • Kubectl binary
  • Cilium binary
  • Git binary

Optional requirements

  • M.2 SATA SSD for better reliability/performance than SD card (e.g. Kingston A400 120GB M.2 SSD)
  • USB M.2 adapter to flash images (e.g. ICY BOX IB-183M2; NOT suitable to boot from!)
  • Argon ONE M.2 Case for Raspberry Pi 4

  • Download the following script to manage the fan speeds at different temperatures:

     curl https://download.argon40.com/argon1.sh | bash
    
  • Up-to-date firmware, to have as few hardware issues as possible, applied from within Raspberry Pi OS:

     sudo apt update
     sudo apt upgrade -y
     sudo apt full-upgrade -y
     sudo apt install rpi-eeprom -y
     sudo rpi-eeprom-update
     sudo rpi-eeprom-update -d -a
     sudo reboot
    
  • Check release channel:

    cat /etc/default/rpi-eeprom-update
    
  • A router you have access to (plus, preferably, a DNS domain that you own where you can point A/AAAA records to your public IP)

Demo network details

  • 192.168.1.0/24 - Private IPv4 range for most of my devices in my home network, where I statically assign one IP to my Pi
  • 192.168.2.0/24 - Private IPv4 range specifically created to use as “external IP” range in my k3s setup
  • 2001:db8:8500:1::/64 - Static IPv6 range specifically created for my k3s setup
  • 86.85.198.211/32 - Public IPv4 address that rarely changes
  • 2001:db8:8500::/48 - Public IPv6 range I got from my ISP when I enabled IPv6 on the WAN port of my Unifi Dream Router
  • example.com - Fictitious domain I use in this demo, where I create the following wildcard records: A *.example.com -> 86.85.198.211, AAAA *.example.com -> 2001:db8:8500:1:cafe:5::

Raspberry Pi OS Lite

Raspberry Pi imager

Use the Raspberry Pi imager software to write Raspberry Pi OS Lite to your disk.

Configure a static IP + DNS server

Replace the bracketed placeholders in the box below with the correct information. The interface will be either wlan0 for WiFi or eth0 for Ethernet.

# /etc/dhcpcd.conf
interface [INTERFACE]
static routers=[ROUTER IP]
static domain_name_servers=[DNS IP]
static ip_address=[STATIC IP ADDRESS YOU WANT]/24
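
As a concrete illustration, here is what the file could look like for the demo network above, assuming the router and DNS server both sit on 192.168.1.1 (adjust to your own network):

# /etc/dhcpcd.conf
interface eth0
static routers=192.168.1.1             # assumed router address
static domain_name_servers=192.168.1.1 # assumed DNS address
static ip_address=192.168.1.211/24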

My Raspberry Pi got assigned a SLAAC IPv6 address (2001:db8:8500:1:c0d:807c:39bc:1f4b) from the static IPv6 network that I configured in my router.

Install the following packages

Firewalld provides firewall features by acting as a front-end for the Linux kernel’s netfilter framework via the iptables backend. I prefer it over raw iptables when it comes to usability.

sudo apt update
sudo apt install -y firewalld

Add preferred IPs/ranges to your firewalld configuration to control what can access your k3s cluster

e.g.:

sudo firewall-cmd --zone=public --add-source=192.168.1.0/24 --permanent
sudo firewall-cmd --add-port=6443/tcp --permanent
sudo firewall-cmd --add-port=443/tcp --permanent
sudo firewall-cmd --add-port=80/tcp --permanent
sudo firewall-cmd --reload

Cheat sheet - firewalld

Firewalld limits incoming traffic by default, but it is wise to only allow the necessary incoming/outgoing ports and protocols.

firewall-cmd --list-all                                             # List the default zone's configuration
firewall-cmd --reload                                               # Reload the configuration
firewall-cmd --permanent --direct --remove-rules ipv4 filter OUTPUT # Bulk-remove direct rules
firewall-cmd --direct --get-all-rules                               # Get all direct rules

k3s-setup

  • Avoid a failed setup on Raspberry Pi 3/4 models by enabling memory cgroups before bootstrapping k3s. Note that /boot/cmdline.txt must remain a single line, so append the parameters in place (using sed here instead of echo, which would add a second, ignored line):

     sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
     sudo reboot
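
    After the reboot you can optionally verify that the memory cgroup is enabled (a quick sanity check, not part of the original steps):

     cat /proc/cgroups | grep memory   # the "enabled" column (last) should be 1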
    
  • Install k3s:

     git clone git@github.com:jonasbartho/cilium-dual-stack-k3s-demo.git
     cd cilium-dual-stack-k3s-demo
     sudo mkdir -p /etc/rancher/k3s
     sudo cp k3s/k3s_config.yaml /etc/rancher/k3s/config.yaml
     curl -sfL https://get.k3s.io | sh -
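
    Once the installer finishes, you can optionally sanity-check the service and the node (not part of the original steps; the node will report NotReady until Cilium is installed further below):

     sudo systemctl status k3s --no-pager
     sudo k3s kubectl get nodes -o wide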
    
  • Dual-stack ranges for pods and services:

     cluster-cidr: 10.42.0.0/16,fd01::/48
     service-cidr: 10.43.0.0/16,fd02::/112
    

    Both IPv4 ranges are the defaults in k3s and are sufficient. I chose the same private IPv6 ranges that OpenShift/OKD uses in their dual-stack offering, but you can plug in your own IPv6 range from your home network if you prefer. Remember that Cilium needs some “extraConfig:” if you choose to go down that path. This is currently commented out in k3s/cilium/cilium-values.yaml.

  • Verify the dual-stack CIDR range on the node. The per-node mask defaults to /24 for IPv4 and /64 for IPv6; this can be overridden with the node-cidr-mask-size-ipv4 and node-cidr-mask-size-ipv6 parameters if you want to match the planned number of pods per node and the total node count.

     $ kubectl get node -o yaml | grep -A 24 spec
     spec:
       podCIDR: 10.42.0.0/24
       podCIDRs:
       - 10.42.0.0/24
       - fd01::/64
       providerID: k3s://pi-jonas
       taints:
       - effect: NoSchedule
         key: node.kubernetes.io/not-ready
    
  • Install the cilium CLI binary (v0.15.17 at the time of writing):

     cd ~
     wget https://github.com/cilium/cilium-cli/releases/download/v0.15.17/cilium-linux-arm64.tar.gz
     tar -xvf cilium-linux-arm64.tar.gz
     sudo mv cilium /usr/local/bin
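
    To confirm the binary landed on your PATH, you can print its version (the cluster-related part of the output may be empty until Cilium is installed):

     cilium version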
    
  • For the Cilium CLI to access the cluster in successive steps you will need to use the kubeconfig file stored at /etc/rancher/k3s/k3s.yaml by setting the KUBECONFIG environment variable:

     export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
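
    To make this setting survive new shell sessions, you could optionally persist it in your shell profile (a convenience, not part of the original steps):

     echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc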
    
  • Apply/install Cilium to your cluster:

     kustomize build k3s/cilium/ --enable-helm | kubectl apply -f -
    
  • Check that your Cilium installation was successful:

     $ KUBECONFIG=/etc/rancher/k3s/k3s.yaml cilium status
         /¯¯\
      /¯¯\__/¯¯\    Cilium:             OK
      \__/¯¯\__/    Operator:           OK
      /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
      \__/¯¯\__/    Hubble Relay:       disabled
         \__/       ClusterMesh:        disabled
    
     Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
     DaemonSet              cilium             Desired: 1, Ready: 1/1, Available: 1/1
     Containers:            cilium             Running: 1
                            cilium-operator    Running: 1
     Cluster Pods:          3/3 managed by Cilium
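
    Optionally, the Cilium CLI also ships a connectivity test that deploys temporary workloads to validate pod-to-pod and pod-to-world traffic. It is not part of the original steps and takes a while on a Pi:

     cilium connectivity test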
    
  • Remove the following file in case you re-install your cluster multiple times:

     sudo rm /etc/cni/net.d/05-cilium.conflist
    

Now it’s time to add Layer 2 mode functionality for load balancing. Cilium now provides this out of the box with L2 Announcements, a feature which makes services visible and reachable on the local area network. It is primarily intended for on-premises deployments within networks without BGP-based routing, such as office or campus networks. At the time of writing it is limited to IPv4.

The lacking IPv6 support has been flagged in this GitHub issue, so until it is added we will use MetalLB, since it can do dual-stack L2. In an ideal world, Cilium could be used for everything networking-related. Having MetalLB in the mix means some extra resources are consumed unnecessarily, but it is nevertheless an excellent load-balancer implementation.

Side note: all of this can also be solved with true BGP routing functionality via CiliumBGPPeeringPolicy, which offers IPv4/IPv6 if you have a router that supports BGP (I sadly do not have one). You could install the FRR package on your Pi to implement/manage the BGP protocol that way. This is material for a future blog post.. :)

Ref. https://docs.cilium.io/en/stable/network/lb-ipam/

Install MetalLB and L2-advertisement pool

  • Apply/install MetalLB to your cluster:

     kustomize build k3s/metallb/ | kubectl apply -f -
    
  • Apply/install L2-advertisement pool to your cluster:

     kustomize build k3s/metallb/crd | kubectl apply -f -

    The pool manifest applied here defines the dual-stack address ranges:

     spec:
       addresses:
         - 192.168.2.1-192.168.2.20
         - 2001:db8:8500:1:cafe:5::/112 # part of my static IPv6 range: 2001:db8:8500:1::/64
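
    For reference, a minimal sketch of what the complete pool and L2 advertisement resources could look like; the resource names here are assumptions, and the actual manifests live in k3s/metallb/crd:

     apiVersion: metallb.io/v1beta1
     kind: IPAddressPool
     metadata:
       name: dual-stack-pool    # assumed name
       namespace: metallb-system
     spec:
       addresses:
         - 192.168.2.1-192.168.2.20
         - 2001:db8:8500:1:cafe:5::/112
     ---
     apiVersion: metallb.io/v1beta1
     kind: L2Advertisement
     metadata:
       name: l2-advert          # assumed name
       namespace: metallb-system
     spec:
       ipAddressPools:
         - dual-stack-pool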
    

Now every service of type LoadBalancer automatically receives an available external IPv4/IPv6 address from your pool. The ingress controller service should suffice here, though, since all the other services in your cluster will probably run fine with service type “ClusterIP”.

Install the NGINX ingress controller

  • Apply/install NGINX ingress controller to your cluster:

     kustomize build k3s/ingress-nginx/ | kubectl apply -f -
    
  • Verify that your service has gotten an IPv4/IPv6 external IP:

     kubectl get svc -n ingress-nginx
     NAME                        TYPE          CLUSTER-IP     EXTERNAL-IP                                PORT(S)
     ingress-nginx-controller    LoadBalancer  fd02::748e     192.168.2.1,2001:db8:8500:1:cafe:5::       80:31042/TCP,443:31012/TCP
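
    Since no Ingress objects exist yet, you can smoke-test the controller by curling the external IPv4 address directly; NGINX should answer with a 404 (an optional check, not part of the original steps):

     curl -I http://192.168.2.1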
    

Install cert-manager and an issuer to issue certificates for your application

This part assumes that you have your own DNS domain. If not, you could go the self-signed route; you then need to manually distribute trust stores to the local clients in your home network to avoid TLS error messages.

  • Apply/install cert-manager to your cluster:

     kustomize build k3s/cert-manager/ | kubectl apply -f -
    
  • Apply/install an HTTP01 ClusterIssuer to your cluster. You can also use an Issuer, which is not cluster-wide and is bound to one specific namespace. I went for an HTTP01 challenge since my personal DNS provider currently does not have DNS01 support on the ARM64 architecture. The benefit of DNS01 is that you do not have to open ingress port 80 in your firewall. A sketch of what such a ClusterIssuer can look like follows after the command below.

     kustomize build k3s/cert-manager/crd | kubectl apply -f -
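
    For illustration, a minimal sketch of an ACME HTTP01 ClusterIssuer, assuming Let’s Encrypt and the NGINX ingress class; the name and email are placeholders, and the actual manifest lives in k3s/cert-manager/crd:

     apiVersion: cert-manager.io/v1
     kind: ClusterIssuer
     metadata:
       name: letsencrypt-prod          # assumed name
     spec:
       acme:
         server: https://acme-v02.api.letsencrypt.org/directory
         email: you@example.com        # placeholder
         privateKeySecretRef:
           name: letsencrypt-account-key
         solvers:
           - http01:
               ingress:
                 ingressClassName: nginx # recent cert-manager; older releases use "class"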
    

Install Nextcloud

  • Install Nextcloud with a dual-stack (IPv4/IPv6) service:

     kustomize build k3s/nextcloud/ --enable-helm | kubectl apply -f -
    
  • The important part for enabling IPv6 is the following patch defined in k3s/nextcloud/kustomization.yaml:

     patches:
       - patch: |-
           apiVersion: v1
           kind: Service
           metadata:
             name: nextcloud
           spec:
             ipFamilies:
             - IPv6
             - IPv4
             ipFamilyPolicy: PreferDualStack
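
    To verify the patch took effect, you can inspect the resulting service (an optional check; the namespace may differ depending on how the chart was installed):

     kubectl get svc nextcloud -o jsonpath='{.spec.ipFamilies}{"\n"}{.spec.clusterIPs}{"\n"}'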
    
  • Curl the following to test your dual-stack functionality:

     $ curl -Iv -6 https://nextcloud.example.com
     *   Trying [2001:db8:8500:1:cafe:5::]:443...
     * Connected to nextcloud.example.com (2001:db8:8500:1:cafe:5::) port 443 (#0)
     ...
     < HTTP/2 302
     $
     $ curl -Iv -4 https://nextcloud.example.com
     *   Trying 86.85.198.211:443...
     * Connected to nextcloud.example.com (86.85.198.211) port 443 (#0)
     ...
     < HTTP/2 302
    

Summary

From what I have seen so far, it looks like Cilium has the potential to become the de facto networking standard for all Kubernetes platforms. I am looking forward to seeing the progress this project makes and the effect it will have on the Kubernetes world. In combination with k3s, it makes the perfect k8s home lab for getting better at Kubernetes and Cilium.

Last-minute tips:

  1. If you want to manage your cluster from your workstation instead of doing everything directly on the Pi, you can do the following:

    ssh 192.168.1.211 "sudo cat /etc/rancher/k3s/k3s.yaml" | sed -e "s/127.0.0.1/192.168.1.211/g" > ~/.kube/k3s.yaml
    export KUBECONFIG=~/.kube/k3s.yaml
    kubectl cluster-info
    

    I prefer not to expose the k8s API to the world if I can help it. :)

  2. When doing a similar setup for a production environment, try using, for example, Argo CD to roll out your k8s deployments in a streamlined manner. It is a great GitOps continuous delivery tool for Kubernetes.

  3. As long as you are using a third-party network stack like Cilium, always test upgrades in a staging environment before you upgrade your production environment! (In a home-lab setup this does not matter.) If Cilium becomes the default network stack for your k8s variant in the near future, like for example OKD/OCP, there is less risk of things breaking when upgrading to a new minor version, since dependencies tend to be thoroughly tested by the upstream team before release.

  4. Keep everything in version control (i.e. Git)! Ad-hoc actions are the enemy in k8s clusters, since they can lead to stupid mistakes and loss of control.


Update

  • 2024-08-26: Updated the link to the cilium documentation for BGP control planes.

