My simple Kubernetes cluster with K3s
22nd Nov 2019
Kubernetes, so hot right now. I caught the bug at a previous job and now I run a private git server (with Gogs), a little CI pipeline (with Drone CI), and a number of smaller projects on a single-node cluster.
The advantages? Instead of juggling a bunch of random nginx configs, folders of random projects, and hoping I don’t have to upgrade Rails because there’s just one site on there running Rails 2.4 or something; I get to deal with a completely different set of problems, but hey, at least they’re in containers.
This guide only describes my setup, and while I’ll keep it updated as I maintain my cluster it may be outdated or inaccurate in places. I have some experience with Kubernetes in production and I’m comfortable with the concepts and debugging the weird shit.
If you’re newer to k3s or frankly want a more comprehensive guide, refer to Cheap and local Kubernetes playground with K3s & Helm.
background
I used to run this on Google Kubernetes Engine on tiny (free tier) nodes but Google’s monitoring services (which run on each node) would consume enough memory to knock my services offline randomly. I don’t think GKE is worth the trouble if you’re keeping it cheap. So instead, I provisioned a small enough instance — mine is n1-standard-1 (1 vCPU, 3.75 GB memory) — and just consolidated all the services I run onto it.
installing k3s, the easiest k8s
k3s is a distribution of Kubernetes designed to run on tiny hardware like Raspberry Pis, ARM boards, and IoT devices. Perfect for small, cheap servers. Also, they make it SO EASY! Some apps weren’t bundled in early releases (Traefik, for one) but are built in now, so it’s worth doing it “their way” unless you’re particularly opinionated about your apps. It’s honestly great and getting better by the day.
You can install or upgrade with one simple command:

```shell
curl -sfL https://get.k3s.io | sh -
```

or read the incredibly easy “quick start” guide.
problem?
I had TLS issues connecting to my node, described in this issue. I fixed it by specifying `--tls-san 36.164.66.25` in the installation options, but you may not have this issue!
traefik
I prefer installing my own traefik via helm since the chart has great Let’s Encrypt support, so I pass the `--no-deploy traefik` installation option. I also skip servicelb since I don’t need it (single node, one IP, one port):

```shell
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --no-deploy servicelb --tls-san 36.164.66.25 --no-deploy traefik" sh -s -
```
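k3s bundles its own kubectl, so right after installing you can sanity-check that the node registered and that traefik and servicelb really were skipped (these commands assume you’re on the node itself):

```shell
# node should report Ready
sudo k3s kubectl get nodes
# no traefik or svclb pods should appear in kube-system
sudo k3s kubectl get pods --all-namespaces
```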
installing helm and charts
install helm on your local machine, then give tiller a service account to run as:

```shell
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
```
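Once tiller’s pod is up, you can confirm the helm client can reach it; both a client and a server version should print:

```shell
helm version
```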
add our traefik:

```shell
helm install stable/traefik --name ingress --namespace kube-system -f apps/traefik.values.yaml
```
I generated an API key in Cloudflare and saved it as a Kubernetes secret named `cloudflare`. ACME writes TXT records to my DNS in Cloudflare, then Let’s Encrypt issues us a certificate.
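A sketch of creating that secret, assuming the chart reads the Cloudflare credentials from the `CF_API_EMAIL`/`CF_API_KEY` keys (check your chart version’s README for the exact key names):

```shell
# email and key are placeholders; use the account the API key belongs to
kubectl -n kube-system create secret generic cloudflare \
  --from-literal=CF_API_EMAIL=my@email.com \
  --from-literal=CF_API_KEY=REDACTED
```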
```yaml
# apps/traefik.values.yaml
externalIP: "36.164.66.25"
serviceType: NodePort
# pulled from k3s's default chart
kubernetes:
  ingressEndpoint:
    useDefaultPublishedService: true
ssl:
  enabled: true
rbac:
  enabled: true
acme:
  enabled: true
  staging: false
  email: my@email.com
  challengeType: "dns-01"
  domains:
    enabled: true
    domainsList:
      - main: "mycluster.chillidonut.com"
  dnsProvider:
    name: 'cloudflare'
    existingSecretName: 'cloudflare'
```
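With traefik terminating TLS for the cluster domain, exposing a service is just a plain Ingress. A sketch with a hypothetical gogs service (API version and ports are assumptions; adjust for your app):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gogs
  annotations:
    # route through the traefik ingress controller we installed
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: mycluster.chillidonut.com
      http:
        paths:
          - backend:
              serviceName: gogs
              servicePort: 3000
```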
cert-manager
The latest version at the time, 0.7.0, changed a bunch of APIs, but 0.6.0 still worked for me. Since certificates are on track to becoming a first-class Kubernetes concept, newer versions of cert-manager may be better for new clusters.
```shell
helm install --name cert-manager --namespace kube-system --version v0.6.0 stable/cert-manager --set webhook.enabled=false
```
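cert-manager issues nothing until you define an Issuer. A minimal ClusterIssuer sketch against the 0.6.0 API group (`certmanager.k8s.io/v1alpha1`); the name and email are placeholders, and the field layout differs in newer versions:

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: my@email.com
    # secret where the ACME account private key is stored
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
```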
kubernetes-dashboard
```shell
helm install stable/kubernetes-dashboard --name kubernetes-dashboard --namespace kube-system --set rbac.create=True --set enableSkipLogin=True
```

get the login token from the dashboard’s service account secret (the name suffix is random), e.g.

```shell
kubectl -n kube-system describe secret kubernetes-dashboard-token-6rtfc
```
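A sketch of a one-liner that prints just the token without hunting for the exact secret name (assumes a single matching secret):

```shell
# grep for the randomly suffixed secret, then decode its token field
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secrets | grep kubernetes-dashboard-token | awk '{print $1}') \
  -o jsonpath="{.data.token}" | base64 --decode
```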
Once it’s up, tunnel into the kubernetes-dashboard container:
```shell
kubectl -n kube-system port-forward $(kubectl get pods -n kube-system -l "app=kubernetes-dashboard,release=kubernetes-dashboard" -o jsonpath="{.items[0].metadata.name}") 8443:8443
```

And use this token to log into the dash at https://127.0.0.1:8443/