These are essentially my notes on setting up a single-node Kubernetes cluster at home. Every time I set up an instance I have to dig through lots of posts, articles and documentation, much of it contradictory or out-of-date. Hopefully this distilled and much-abridged version will be helpful to someone else.

This follows the official kubeadm documentation for the initial cluster setup, and you should look there for more detail. I strongly recommend that you read about the various projects and verify that this information is still valid; things change pretty quickly. I’ve deliberately not included much detail in this post beyond what I did and the commands to use.


Kubernetes has some minimum hardware requirements, which kubeadm checks at install time. You can work around these, but it’s recommended to have at least:

  • 4GB RAM
  • 2 cores


Kubernetes is an open ended platform which lets you make a lot of decisions yourself. Unfortunately it also requires you to make them (see projects like k3s to avoid some of that). I decided to use:

  • Debian stable (stretch), because why would you ever not use Debian?
  • Docker as the container runtime. Next time I’ll try something else (CRI-O), but Docker is familiar so it’s easy to diagnose any issues.
  • Flannel for networking. No particular reason; it has a nice name.
  • No Helm. Helm is great, but it hides away details and I’m interested in details.
  • Nginx for ingress. There are a bunch of possibilities, but Nginx is the common choice and integrates nicely with lots of other things.
  • Let’s Encrypt certificates validated by Cloudflare DNS. I use Cloudflare already, so this was an easy choice. DNS validation allows for wildcards and for internal hosts.
  • Single node. There are a lot of cool things about Kubernetes that you don’t get with a single node, but what I’m setting up here is for home. You can easily add more nodes by following the instructions kubeadm gives you when it runs.

Enable net.bridge.bridge-nf-call-iptables

This is required by Flannel and possibly other networking options. You can read more in the Kubernetes documentation on network plugin requirements.

This was actually already set on my machine, but it doesn’t hurt to be explicit.

$ cat > /etc/sysctl.d/20-bridge-nf.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
Install Docker

Configure the Docker daemon before installing it; the systemd cgroup driver is what kubeadm expects.

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
apt-get update
apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
echo 'deb [arch=amd64] https://download.docker.com/linux/debian stretch stable' > /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install -y --no-install-recommends docker-ce

The --no-install-recommends will avoid pulling in stuff you don’t need, including the aufs DKMS package.
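A typo in daemon.json will stop the Docker daemon from starting, so it’s worth validating the file before (re)starting Docker. A quick check (my addition, not part of the original setup; it writes a copy to /tmp so the snippet is self-contained — point it at /etc/docker/daemon.json on the real host):

```shell
# Validate daemon.json contents; python3 -m json.tool fails loudly on bad JSON.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```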

Install Kubernetes components

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

The xenial in the APT source is correct. That’s the repository they seem to update, and these are Go binaries anyway so they’re self-contained.

Run kubeadm to set up the cluster

The --pod-network-cidr setting is required by Flannel, which I chose to use for pod networking.

kubeadm init --pod-network-cidr=10.244.0.0/16

That’s it. Neat, huh?

There is still a bunch of work to do to make the cluster actually useful. You can do most of the rest of this as a non-root user. Follow the instructions kubeadm gave you to copy the admin credentials to your regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

And as a handy extra tip, you’ll want completion:

source <(kubectl completion bash)
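That only lasts for the current shell. To make it stick, append the line to your shell rc file; a sketch (writing to a file in /tmp here so it’s safe to run as-is — use ~/.bashrc for real):

```shell
# Append kubectl completion to a bash rc file and confirm it's there.
rc=/tmp/demo-bashrc
echo 'source <(kubectl completion bash)' >> "$rc"
grep -q 'kubectl completion bash' "$rc" && echo "completion installed"
```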

Install Flannel for pod networking

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods --all-namespaces

You should see coredns pods come to life if all is well.

Untaint the master so you can run pods

kubectl taint nodes --all node-role.kubernetes.io/master-

At this point you can run pods and expose them with services. If that’s all you need, you’re done! Next I set up an nginx-ingress and cert-manager to allow for hostname-based HTTPS ingress with Let’s Encrypt certificates.

Set up nginx-ingress

I set up the nginx-ingress with host networking so I could expose my cluster’s services via ports 80/443 on the host. You can also expose it via a NodePort or LoadBalancer, but this worked well for my simple setup.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
$ cat > nginx-host-networking.yaml <<EOF
spec:
  template:
    spec:
      hostNetwork: true
EOF
$ kubectl -n ingress-nginx patch deployment nginx-ingress-controller --patch="$(<nginx-host-networking.yaml)"
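For comparison, the NodePort route would look roughly like this (a sketch only, not what I deployed — the selector labels assume the upstream manifests, so check the labels on your actual deployment):

```yaml
# Hypothetical NodePort Service exposing the ingress controller on
# <node-ip>:30080 (HTTP) and <node-ip>:30443 (HTTPS).
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443
```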

You can now create Ingresses to make your services accessible externally. If you expose them via HTTP they should just work. HTTPS will work too, but with a self-signed certificate.

Set up cert-manager

This is a bit fiddly, which isn’t helped by the fact that cert-manager’s documentation is somewhat out of date. Luckily, I read the source so you don’t have to!

The cert-manager tool is pretty neat. It lets you provision Let’s Encrypt certificates either manually (useful for a fallback wildcard certificate) or automatically based on annotations on your Ingress.
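The annotation-driven flow looks roughly like this on an Ingress (a sketch; the annotation name is from the cert-manager v0.x API used here, and the hostname and secret name are hypothetical):

```yaml
# Hypothetical Ingress fragment: cert-manager sees the annotation and
# provisions a certificate into the secret named under tls.
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-example-com-tls
```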

I have overridden the nameservers used for verifying dns01 records because I use split-horizon DNS.

$ kubectl create ns cert-manager
$ kubectl config set-context --current --namespace=cert-manager
$ kubectl apply -f cert-manager.yaml  # the manifest from the cert-manager releases page
$ cat > external-dns.yaml <<EOF
# Add dns01 recursive nameservers (substitute your own resolver IPs)
spec:
  template:
    spec:
      containers:
        - name: cert-manager
          args:
          - --cluster-resource-namespace=\$(POD_NAMESPACE)
          - --leader-election-namespace=\$(POD_NAMESPACE)
          - --dns01-recursive-nameservers=<your-ns-1>:53,<your-ns-2>:53
EOF
$ kubectl -n cert-manager patch deployment cert-manager --patch="$(<external-dns.yaml)"

Set up a ClusterIssuer to issue certificates

You’ll need your Cloudflare API key to put into a Kubernetes secret.

$ kubectl -n cert-manager create secret generic cloudflare-api-key --from-literal=api-key=XXXXX
$ cat > clusterissuer-cf.yaml <<EOF
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com  # your contact email for Let's Encrypt
    privateKeySecretRef:
      name: letsencrypt
    dns01:
      providers:
        - name: cf-dns
          cloudflare:
            email: you@example.com  # your Cloudflare account email
            apiKeySecretRef:
              name: cloudflare-api-key
              key: api-key
EOF
$ kubectl apply -f clusterissuer-cf.yaml

Create a wildcard certificate

I created a wildcard certificate in the ingress-nginx namespace so I can use it as the default.

$ cat > wildcard-cert.yaml <<EOF
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: wildcard-example-com
  namespace: ingress-nginx
spec:
  secretName: wildcard-example-com
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt
  commonName: '*.example.com'
  dnsNames:
    - '*.example.com'
    - 'example.com'
  acme:
    config:
      - dns01:
          provider: cf-dns
        domains:
          - '*.example.com'
          - 'example.com'
EOF
$ kubectl apply -f wildcard-cert.yaml

You can watch the progress of the certificate generation. Tab completion should find the names of your orders and challenges for you.

kubectl describe order ...
kubectl describe challenge ...

Set this as the default certificate for the ingress controller as follows:

kubectl -n ingress-nginx edit deployment nginx-ingress-controller

and add

- --default-ssl-certificate=$(POD_NAMESPACE)/wildcard-example-com

to the list of args for the nginx-ingress-controller container. You can do this with a patch if you prefer.
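The patch variant would look something like this (a sketch; the kubectl call itself is commented out since it needs a live cluster, and the validation step is just there to catch typos):

```shell
# JSON patch that appends --default-ssl-certificate to the controller's args.
# $(POD_NAMESPACE) is expanded by Kubernetes, not the shell, hence single quotes.
patch='[{"op": "add",
         "path": "/spec/template/spec/containers/0/args/-",
         "value": "--default-ssl-certificate=$(POD_NAMESPACE)/wildcard-example-com"}]'
# Sanity-check that the patch parses as JSON before using it:
echo "$patch" | python3 -m json.tool > /dev/null && echo "patch OK"
# kubectl -n ingress-nginx patch deployment nginx-ingress-controller --type=json -p "$patch"
```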

Try it out!

You should have DNS set up for, in this example, *.example.com pointing to your Kubernetes host.

Deploy a simple Nginx service to test things out.

$ kubectl create ns testland
$ kubectl config set-context --current --namespace=testland
$ kubectl create deployment nginx --image nginx
$ kubectl expose deployment nginx --port 80
$ cat > nginx-ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
  tls:
  - hosts:
    - nginx.example.com
EOF
$ kubectl apply -f nginx-ingress.yaml

You should be able to visit http://nginx.example.com in your browser and get redirected to HTTPS with a real, live certificate.

I’ll run through the process again to check for any errors and update this post.

James McDonald

Former Senior Systems Consultant at Redpill Linpro
