These are essentially my notes on setting up a single-node Kubernetes cluster at home. Every time I set up an instance I have to dig through lots of posts, articles and documentation, much of it contradictory or out-of-date. Hopefully this distilled and much-abridged version will be helpful to someone else.

This follows https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ for the initial cluster setup. You should look there for more detail. I strongly recommend that you read about the various projects and verify that this information is still valid. Things change pretty quickly. I’ve deliberately not included a lot of detail in the post beyond what I did and the commands to use.

Requirements

You can work around these, but it’s recommended to have at least:

  • 4GB RAM
  • 2 cores
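You can sanity-check a machine against these minimums from a shell; a quick sketch (reads /proc/meminfo and nproc, so Linux only):

```shell
# Quick check against the suggested minimums (Linux only).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
cores=$(nproc)
[ "$mem_kb" -ge $((4 * 1024 * 1024)) ] && echo "RAM: OK" || echo "RAM: below 4GB"
[ "$cores" -ge 2 ] && echo "cores: OK" || echo "cores: below 2"
```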

Decisions

Kubernetes is an open-ended platform which lets you make a lot of decisions yourself. Unfortunately, it also requires you to make them (see projects like k3s to avoid some of that). I decided to use:

  • Debian stable (stretch), because why would you ever not use Debian?
  • Docker as the container runtime. Next time I’ll try something else (CRI-O), but Docker is familiar so it’s easy to diagnose any issues.
  • Flannel for networking. No particular reason; it has a nice name.
  • No Helm. Helm is great, but it hides away details and I’m interested in details.
  • nginx for ingress. There are a bunch of possibilities, but nginx is the common choice and integrates nicely with lots of other things.
  • Let’s Encrypt certificates validated by Cloudflare DNS. I use Cloudflare already, so this was an easy choice. DNS validation allows for wildcards and for internal hosts.
  • Single node. There are a lot of cool things about Kubernetes that you don’t get with a single node, but what I’m setting up here is for home. You can easily add more nodes by following the instructions kubeadm gives you when it runs.

Enable net.bridge.bridge-nf-call-iptables

This is required by Flannel and possibly other networking options. You can read more at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network

This was actually already set on my machine, but it doesn’t hurt to be explicit.

cat > /etc/sysctl.d/20-bridge-nf.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
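To confirm the setting took effect, you can read the value back; a small sketch (the file only exists once the br_netfilter module is loaded):

```shell
# Read the current value; an absent file means br_netfilter isn't loaded yet.
f=/proc/sys/net/bridge/bridge-nf-call-iptables
if [ -r "$f" ]; then
  echo "bridge-nf-call-iptables = $(cat "$f")"   # want 1
else
  echo "br_netfilter not loaded; try: modprobe br_netfilter"
fi
```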

Install Docker with recommended settings

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
apt-get update
apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
echo 'deb [arch=amd64] https://download.docker.com/linux/debian stretch stable' > /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install -y --no-install-recommends docker-ce

The --no-install-recommends flag avoids pulling in stuff you don’t need, including the aufs DKMS package.

Install Kubernetes components

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

The xenial in the APT source is correct. That’s the repo they seem to update, and these are Go binaries anyway so they’re self-contained.

Run kubeadm to set up the cluster

The --pod-network-cidr setting is required by Flannel, which I chose to use for pod networking.

kubeadm init --pod-network-cidr=10.244.0.0/16

That’s it. Neat, huh?

There is still a bunch of work to do to make the cluster actually useful. You can do most of the rest of this as a non-root user. Follow the instructions kubeadm gave you to copy the admin credentials to your regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

And as a handy extra tip, you’ll want completion:

source <(kubectl completion bash)
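That only lasts for the current shell; to make it stick (assuming bash), append it to your ~/.bashrc:

```shell
# Persist kubectl completion across sessions (bash assumed).
echo 'source <(kubectl completion bash)' >> "$HOME/.bashrc"
```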

Install Flannel for pod networking

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
kubectl get pods --all-namespaces

You should see coredns pods come to life if all is well.

Untaint the master so you can run pods

kubectl taint nodes --all node-role.kubernetes.io/master-

At this point you can run pods and expose them with services. If that’s all you need, you’re done! Next I set up an nginx-ingress and cert-manager to allow for hostname-based HTTPS ingress with Let’s Encrypt certificates.

Set up nginx-ingress

I set up the nginx-ingress with host networking so I could expose my cluster’s services via ports 80/443 on the host. You can also expose it via a NodePort or LoadBalancer, but this worked well for my simple setup.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
cat > nginx-host-networking.yaml <<EOF
spec:
  template:
    spec:
      hostNetwork: true
EOF
kubectl -n ingress-nginx patch deployment nginx-ingress-controller --patch="$(<nginx-host-networking.yaml)"
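For reference, the NodePort route would look something like this manifest. This is a sketch only; the selector labels are assumptions based on the upstream mandatory.yaml of the time, and the nodePort values are arbitrary:

```yaml
# Hypothetical NodePort Service as an alternative to host networking.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
```

You’d then reach the ingress on ports 30080/30443 on the host rather than 80/443.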

You can now create Ingresses to make your services accessible externally. If you expose them via HTTP they should just work. HTTPS will work too, but with a self-signed certificate.

Set up cert-manager

This is a bit fiddly, not helped by the fact that cert-manager’s documentation is somewhat out of date. Luckily, I read the source so you don’t have to!

The cert-manager tool is pretty neat. It lets you provision Let’s Encrypt certificates either manually (useful for a fallback wildcard certificate) or automatically based on annotations on your Ingress.
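For the automatic path, that means adding annotations like these to an Ingress. A sketch using cert-manager 0.7-era annotation names; the issuer and provider names match the ClusterIssuer set up later in this post:

```yaml
# Sketch: ask cert-manager (0.7-era annotations) to issue a certificate
# for this Ingress; "letsencrypt" and "cf-dns" match the ClusterIssuer config.
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt
    certmanager.k8s.io/acme-challenge-type: dns01
    certmanager.k8s.io/acme-dns01-provider: cf-dns
```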

I have overridden the nameservers used for verifying dns01 records because I use split-horizon DNS.

kubectl create ns cert-manager
kubectl config set-context --current --namespace=cert-manager
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/cert-manager.yaml
cat > external-dns.yaml <<EOF
# Add dns01 recursive nameservers
spec:
  template:
    spec:
      containers:
        - name: cert-manager
          args:
          - --cluster-resource-namespace=\$(POD_NAMESPACE)
          - --leader-election-namespace=\$(POD_NAMESPACE)
          - --dns01-recursive-nameservers=1.1.1.1:53,8.8.8.8:53
EOF

kubectl -n cert-manager patch deployment cert-manager --patch="$(<external-dns.yaml)"

Set up a ClusterIssuer to issue certificates

You’ll need your Cloudflare API key to put into a Kubernetes secret.

kubectl -n cert-manager create secret generic cloudflare-api-key --from-literal=api-key=XXXXX
cat > clusterissuer-cf.yaml <<EOF
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com

    privateKeySecretRef:
      name: letsencrypt

    dns01:
      providers:
        - name: cf-dns
          cloudflare:
            email: you@example.com
            apiKeySecretRef:
              name: cloudflare-api-key
              key: api-key
EOF
kubectl apply -f clusterissuer-cf.yaml

Create a wildcard certificate

I created a wildcard certificate in the ingress-nginx namespace so I can use it as the default.

cat > wildcard-cert.yaml <<EOF
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: wildcard-example-com
  namespace: ingress-nginx
spec:
  secretName: wildcard-example-com
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt
  commonName: '*.example.com'
  dnsNames:
    - example.com
  acme:
    config:
      - dns01:
          provider: cf-dns
        domains:
          - '*.example.com'
          - 'example.com'
EOF
kubectl apply -f wildcard-cert.yaml

You can watch the progress of the certificate generation. Tab completion should find the names of your orders and challenges for you.

kubectl describe order ...
kubectl describe challenge ...

Set this as the default certificate for the ingress controller as follows:

kubectl -n ingress-nginx edit deployment nginx-ingress-controller

Add:

        - --default-ssl-certificate=$(POD_NAMESPACE)/wildcard-example-com

to the list of args to the nginx-ingress-controller container. You can do this with a patch if you prefer.
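If you’d rather patch than edit, a JSON patch can append the flag without restating the container’s other args. A sketch, assuming the controller container is at index 0 in the pod spec:

```shell
# A JSON patch "add" with a path ending in "/-" appends to a list,
# so the controller's existing args are left intact.
cat > default-cert-patch.json <<EOF
[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--default-ssl-certificate=\$(POD_NAMESPACE)/wildcard-example-com"
  }
]
EOF
# Then, against the cluster:
#   kubectl -n ingress-nginx patch deployment nginx-ingress-controller --type=json --patch "$(cat default-cert-patch.json)"
```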

Try it out!

You should have DNS set up for, in this example, *.example.com pointing to your Kubernetes host.

Deploy a simple nginx service to test things out.

kubectl create ns testland
kubectl config set-context --current --namespace=testland
kubectl create deployment nginx --image nginx
kubectl expose deployment nginx --port 80
cat > nginx-ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
  tls:
  - hosts:
    - nginx.example.com
EOF
kubectl apply -f nginx-ingress.yaml

You should be able to visit http://nginx.example.com/ in your browser and get redirected to HTTPS with a real, live certificate.

I’ll run through the process again to check for any errors and update this post.

James McDonald

Senior Systems Consultant at Redpill Linpro

James just recently started at Redpill Linpro. He's been working with FOSS since the late 90s, and tinkering with it for rather longer. His background is in system administration and architecture. He has recently been experimenting with beards.
