(K3S - 3/8) Install and configure a Kubernetes cluster with k3s to self-host applications
This article is part of the series Build your very own self-hosting platform with Raspberry Pi and Kubernetes
- Introduction
- Install Raspbian Operating-System and prepare the system for Kubernetes
- Install and configure a Kubernetes cluster with k3s to self-host applications
- Deploy NextCloud on Kubernetes: The self-hosted Dropbox
- Self-host your Media Center On Kubernetes with Plex, Sonarr, Radarr, Transmission and Jackett
- Self-host Pi-Hole on Kubernetes and block ads and trackers at the network level
- Self-host your password manager with Bitwarden
- Deploy Prometheus and Grafana to monitor a Kubernetes cluster
Introduction
In the previous article, we prepared three machines (one master and two workers). In this article, we are going to learn how to install Kubernetes using k3s, a lightweight Kubernetes distribution suitable for ARM-based computers such as the Raspberry Pi. If you need any support with k3s, I recommend checking the official documentation as well as the GitHub repository.
Once the cluster is up and all the nodes are connected to each other, we will install some useful services such as:
- Helm: Package manager for Kubernetes
- MetalLB: Load-balancer implementation for bare metal Kubernetes clusters
- Nginx: Kubernetes Ingress Proxy
- Cert Manager: Native Kubernetes certificate management controller
- Kubernetes Dashboard: A web-based Kubernetes user interface
Install k3s server (master node)
In the first part of this article, we will install the Kubernetes master node, which acts as the orchestrator of the cluster.
1. Connect via ssh to the master node
```
$ ssh pi@192.168.0.22
```
2. Configure the following environment variables
The first line specifies the mode in which the k3s configuration file is written (required when not running commands as root), and the second line tells the k3s installer not to deploy its default load balancer (servicelb) and proxy (traefik); instead, we will manually install MetalLB as the load balancer and Nginx as the proxy, which are in my opinion better and more widely used.
```
$ export K3S_KUBECONFIG_MODE="644"
$ export INSTALL_K3S_EXEC=" --no-deploy servicelb --no-deploy traefik"
```
3. Run the installer
The next command simply downloads and executes the k3s installer. The installation will take into account the environment variables set just before.
```
$ curl -sfL https://get.k3s.io | sh -
```
4. Verify the status
The installer creates a systemd service which can be used to `stop`, `start`, `restart` and check the `status` of the k3s server running Kubernetes.
```
$ sudo systemctl status k3s
```
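For reference, the service can also be stopped, started or restarted with the usual systemctl sub-commands:
```
# Manage the k3s server service via systemd
$ sudo systemctl stop k3s
$ sudo systemctl start k3s
$ sudo systemctl restart k3s
```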
k3s also installed the Kubernetes command-line tool `kubectl`, so it is possible to start querying the cluster (composed, at this stage, of only one node - the master - and a few internal services used by Kubernetes).
- To get the details of the nodes
```
$ kubectl get nodes -o wide
```
- To get the details of all the services deployed
```
$ kubectl get pods -A -o wide
```
5. Save the access token
Each agent requires an access token to connect to the server. The token can be retrieved with the following command:
```
$ sudo cat /var/lib/rancher/k3s/server/node-token
```
Install k3s agent (worker nodes)
In this second part, we install the k3s agent on each worker node and connect it to the k3s server (master).
1. Connect via ssh to the worker node
```
$ ssh pi@<WORKER_IP>
```
2. Configure the following environment variables
The first line specifies the mode in which the k3s configuration file is written (required when not running commands as root), the second line provides the k3s server endpoint the agent needs to connect to, and the third line is the access token to the k3s server saved previously.
```
$ export K3S_KUBECONFIG_MODE="644"
$ export K3S_URL="https://192.168.0.22:6443"
$ export K3S_TOKEN="<TOKEN>"   # paste the node-token retrieved from the master
```
3. Run the installer
The next command simply downloads and executes the k3s installer. The installation will take into account the environment variables set just before and install the agent.
```
$ curl -sfL https://get.k3s.io | sh -
```
4. Verify the status
The installer creates a systemd service which can be used to `stop`, `start`, `restart` and check the `status` of the k3s agent running Kubernetes.
```
$ sudo systemctl status k3s-agent
```
k3s also installed `kubectl`, so it is possible to query the cluster and observe that all the nodes are registered and Ready.
```
$ kubectl get nodes -o wide
```
Connect remotely to the cluster
If you don’t want to connect via SSH to a node every time you need to query your cluster, it is possible to install `kubectl` (the Kubernetes command-line tool) on your local machine and control your cluster remotely.
1. Install kubectl on your local machine
Read the following page to learn how to install `kubectl` on Linux, MacOS or Windows.
2. Copy the k3s config file from the master node to your local machine
The command `scp` allows you to transfer files via SSH from/to a remote machine. We simply need to download the file /etc/rancher/k3s/k3s.yaml located on the master node to our local machine into ~/.kube/config.
```
$ scp pi@192.168.0.22:/etc/rancher/k3s/k3s.yaml ~/.kube/config
```
The file contains a localhost endpoint (127.0.0.1); we just need to replace it with the IP address of the master node (in my case 192.168.0.22).
```
$ sed -i '' 's/127\.0\.0\.1/192\.168\.0\.22/g' ~/.kube/config
```
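Note that the `-i ''` in-place flag above is the BSD/macOS sed syntax; on Linux (GNU sed), omit the empty-string argument:
```
# GNU sed (Linux) in-place edit
$ sed -i 's/127\.0\.0\.1/192\.168\.0\.22/g' ~/.kube/config
```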
3. Try using kubectl from your local machine
```
$ kubectl get nodes -o wide
```
Install Helm (version >= 3.x.y) - Kubernetes Package Manager
Helm is a package manager for Kubernetes. An application deployed on Kubernetes is usually composed of multiple config files (deployment, service, secret, ingress, etc.) which can be more or less complex and are generally the same for common applications.
Helm provides a solution to define, install and upgrade k8s applications based on config templates (called charts). A single config file (named values.yaml) is used to generate all the necessary k8s config files and to deploy them. The repository hub.helm.sh lists all the “official” charts available, but you can easily find unofficial charts online.
1. Install Helm command line tools on your local machine
Refer to the following page to install `helm` on your local machine. You must install Helm version >= 3.x.y.
Example for Linux:
- Download the package from GitHub
- Run `tar -zxvf helm-v3.<X>.<Y>-linux-amd64.tar.gz` (replace `3.<X>.<Y>` by the latest version)
- Execute `mv linux-amd64/helm /usr/local/bin/helm`
2. Check the version
Verify that you have Helm version 3.x installed.
```
$ helm version
```
3. Add the repository for official charts
Configure the repository stable (https://charts.helm.sh/stable) to access the official charts. Note that the former URL, https://kubernetes-charts.storage.googleapis.com, has been deprecated in favour of this one.
```
$ helm repo add stable https://charts.helm.sh/stable
```
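Once the repository is configured, you can browse it and inspect a chart’s default values (the values.yaml mentioned above) before installing anything; for example:
```
# List the charts available in the stable repository
$ helm search repo stable

# Print the default configuration values of the metallb chart
$ helm show values stable/metallb
```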
We can now install an application using `helm install <deployment_name> <chart_name> --namespace <namespace> --set <property_value_to_change>`, uninstall an application by running `helm uninstall <deployment_name> --namespace <namespace>`, and list the deployed applications with `helm list --namespace <namespace>`. I also recommend checking this page to learn more about how to use the Helm CLI.
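As an illustration, here is a sketch of that workflow (the release name demo-metallb and the namespace demo are arbitrary placeholders):
```
# Create a namespace to deploy into (Helm 3 does not create it for you)
$ kubectl create namespace demo

# Install the chart under the release name "demo-metallb"
$ helm install demo-metallb stable/metallb --namespace demo

# List the releases deployed in the namespace
$ helm list --namespace demo

# Remove the release and all the resources it created
$ helm uninstall demo-metallb --namespace demo
```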
Install MetalLB - Kubernetes Load Balancer
MetalLB is a load-balancer implementation for bare metal Kubernetes clusters. When configuring a Kubernetes service of type LoadBalancer, MetalLB will dedicate a virtual IP from an address-pool to be used as load balancer for an application.
To install MetalLB from Helm, you simply need to run the following `helm install` command with:
- `metallb`: the name to give to the deployment
- `stable/metallb`: the name of the chart
- `--namespace kube-system`: the namespace in which we want to deploy MetalLB
- `--set configInline...`: configures MetalLB in Layer 2 mode (see the documentation for more details). The IP range 192.168.0.240-192.168.0.250 is used to constitute a pool of virtual IP addresses.
```
$ helm repo add stable https://charts.helm.sh/stable
$ helm install metallb stable/metallb --namespace kube-system \
    --set configInline.address-pools[0].name=default \
    --set configInline.address-pools[0].protocol=layer2 \
    --set configInline.address-pools[0].addresses[0]=192.168.0.240-192.168.0.250
```
After a few seconds, you should observe the MetalLB components deployed under the kube-system namespace.
```
$ kubectl get pods -n kube-system -l app=metallb -o wide
```
All done! Now every time a Kubernetes service of type LoadBalancer is deployed, MetalLB will assign an IP from the pool to access the application.
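As a minimal sketch of what that looks like (the service name my-app and the ports are hypothetical placeholders), a plain Service of type LoadBalancer is enough; MetalLB fills in its EXTERNAL-IP from the pool:
```yaml
# Hypothetical service: MetalLB allocates an IP from
# 192.168.0.240-192.168.0.250 as its EXTERNAL-IP
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80         # port exposed on the virtual IP
      targetPort: 8080 # port the pods listen on
```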
Install Nginx - Web Proxy
Nginx is a recognized high-performance web server / reverse proxy. It can be used as Kubernetes Ingress to expose HTTP and HTTPS routes from outside the cluster to services within the cluster.
Similarly to MetalLB, we will use the stable/nginx-ingress Helm chart to install our proxy server. The only config change done here is disabling the `defaultBackend`, which isn’t required.
```
$ helm install nginx-ingress stable/nginx-ingress --namespace kube-system \
    --set defaultBackend.enabled=false
```
After a few seconds, you should observe the Nginx components deployed under the kube-system namespace.
```
$ kubectl get pods -n kube-system -l app=nginx-ingress -o wide
```
Interestingly, the Nginx service is deployed in LoadBalancer mode; you can observe that MetalLB allocated a virtual IP (column EXTERNAL-IP) to Nginx with the following command:
```
$ kubectl get services -n kube-system -l app=nginx-ingress -o wide
```
From your local machine, you can try to access Nginx externally via the LoadBalancer IP (in my case http://192.168.0.240). It should return the message “404 not found” because nothing is deployed yet.
Install cert-manager
Cert Manager is a set of Kubernetes tools used to automatically issue and manage x509 (TLS) certificates for the ingress (Nginx in our case) and consequently to secure all the HTTP routes via SSL with almost no configuration.
1. Install the CustomResourceDefinition
Install the CustomResourceDefinition resources.
```
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.16.0/cert-manager.crds.yaml
```
2. Configure the jetstack Helm repository
cert-manager Helm charts aren’t hosted by the official Helm hub; you need to configure a new repository named jetstack, which maintains those charts (here).
```
$ helm repo add jetstack https://charts.jetstack.io && helm repo update
```
3. Install cert-manager through Helm
Run the following command to install the cert-manager components under the kube-system namespace.
```
$ helm install cert-manager jetstack/cert-manager --namespace kube-system --version v0.16.0
```
Check that all three cert-manager components are running.
```
$ kubectl get pods -n kube-system -l app.kubernetes.io/instance=cert-manager -o wide
```
4. Configure the certificate issuers
We are now going to configure two certificate issuers from which signed x509 certificates can be obtained, such as Let’s Encrypt:
- letsencrypt-staging: will be used for testing purposes only
- letsencrypt-prod: will be used for production
Run the following commands (replace <EMAIL> with your email address).
```
$ cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: <EMAIL>
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
EOF
```
```
$ cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: <EMAIL>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
EOF
```
Once done, we should be able to automatically obtain a Let’s Encrypt certificate every time we configure an ingress with SSL.
5. Example: Configuration of an ingress with SSL
The following k8s config file allows access to the service <service_name> (port 80) from outside the cluster, with issuance of a certificate for the domain <domain>.
```yaml
## example.ingress.yml (replace <service_name> and <domain> accordingly)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - <domain>
      secretName: example-tls-cert  # cert-manager stores the certificate here
  rules:
    - host: <domain>
      http:
        paths:
          - path: /
            backend:
              serviceName: <service_name>
              servicePort: 80
```
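Configure an NFS Persistent Volume
Most of the applications we will deploy in the next articles need persistent storage. Assuming an NFS server is reachable on the local network (a NAS, for instance), each share can be declared to Kubernetes as a Persistent Volume.
1. Deploy the Persistent Volume
Here is a minimal sketch of such a Persistent Volume; the volume name, capacity, NFS server address and export path are placeholders to adapt to your setup.
```yaml
## example.nfs.persistentvolume.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs-volume
spec:
  capacity:
    storage: 1Gi              # placeholder capacity
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <NFS_SERVER_IP>   # IP of your NFS server / NAS
    path: <NFS_SHARE_PATH>    # directory exported by the NFS server
```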
```
$ kubectl apply -f example.nfs.persistentvolume.yml
```
2. Deploy the Persistent Volume Claim
Now we need to configure a Persistent Volume Claim, which maps a Persistent Volume to a Deployment or StatefulSet. Apply the following config file:
```yaml
## example.nfs.persistentvolumeclaim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-nfs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: example-nfs-volume   # binds to the Persistent Volume above
```
```
$ kubectl apply -f example.nfs.persistentvolumeclaim.yml
```
3. Check the result
You should be able to query the cluster to find our Persistent Volume and Persistent Volume Claim.
```
$ kubectl get pv
$ kubectl get pvc
```
This method will be used to declare a persistent storage volume for each of our applications. You can learn more about Persistent Volumes here.
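To make the mapping concrete, here is a hypothetical minimal Deployment (the name example-app, the image and the mount path are placeholders) mounting the claim declared above:
```yaml
## example.deployment.yml (hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx             # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data     # the NFS share appears here in the container
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: example-nfs-claim   # the claim created above
```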
Install Kubernetes Dashboard
Kubernetes Dashboard is a web-based Kubernetes user interface allowing operations similar to kubectl.
1. Install kubernetes-dashboard via the official “recommended” manifests file
Execute the following command, replacing <VERSION> with the latest version (see the release page).
Tested with version: v2.0.3
```
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/<VERSION>/aio/deploy/recommended.yaml
```
After a few seconds, you should see two pods running in the namespace kubernetes-dashboard.
```
$ kubectl get pods -n kubernetes-dashboard
```
2. Create an admin-user to connect to kubernetes-dashboard
As described in the wiki, create a new user named admin-user, grant this user admin permissions, and log in to the Dashboard using a bearer token tied to this user.
```
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
EOF
```
3. Retrieve the unique access token of admin-user
In order to authenticate on the dashboard web page, we need to provide a token which can be retrieved by executing the following command:
```
$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
```
Copy the token value
4. Create a secure channel to access the kubernetes-dashboard
To access kubernetes-dashboard from your local machine, you must create a secure channel to your Kubernetes cluster. Run the following command:
```
$ kubectl proxy
```
5. Connect to kubernetes-dashboard
Now that we have a secure channel, you can access kubernetes-dashboard via the following URL: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
Select “Token”, copy/paste the token previously retrieved and click on “Sign in”.
Well done! You now have access to a nice web interface to visualise and manage all the objects of your Kubernetes cluster (you can switch namespaces with the dropdown in the left menu).
Conclusion
In conclusion, we now have a ready-to-use Kubernetes cluster to self-host applications. In the next articles, we will learn how to deploy specific applications such as a media center with Plex, a self-hosted file-sharing solution similar to Dropbox, and more.
Teardown
If you want to completely uninstall Kubernetes from a machine:
1. Worker node
Connect to the worker node and run the following command:
```
$ sudo /usr/local/bin/k3s-agent-uninstall.sh
```
2. Master node
Connect to the master node and run the following command:
```
$ sudo /usr/local/bin/k3s-uninstall.sh
```
Known Issues
- cert-manager doesn’t issue a certificate, it could be a DNS problem: Cert Manager works! (Jim Nicholson)