K3s
K3s is a lightweight Kubernetes distribution. It bundles all the components of a Kubernetes cluster into a few small binaries.
Single-node setup
{ pkgs, ... }:
{
networking.firewall.allowedTCPPorts = [
6443 # k3s: required so that pods can reach the API server (running on port 6443 by default)
# 2379 # k3s, etcd clients: required if using a "High Availability Embedded etcd" configuration
# 2380 # k3s, etcd peers: required if using a "High Availability Embedded etcd" configuration
];
networking.firewall.allowedUDPPorts = [
# 8472 # k3s, flannel: required if using multi-node for inter-node networking
];
services.k3s.enable = true;
services.k3s.role = "server";
services.k3s.extraFlags = toString [
# "--kubelet-arg=v=4" # Optionally add additional args to k3s
];
environment.systemPackages = [ pkgs.k3s ];
}
After enabling the service, you can access your cluster through sudo k3s kubectl, e.g. sudo k3s kubectl cluster-info, or by using the generated kubeconfig file at /etc/rancher/k3s/k3s.yaml.
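If you prefer a standalone kubectl, a minimal sketch (assumptions: pkgs.kubectl is acceptable here, and you are fine with pointing KUBECONFIG at the generated file system-wide; note the file is only readable by root by default):
{ pkgs, ... }:
{
  # plain `kubectl` instead of `k3s kubectl`
  environment.systemPackages = [ pkgs.kubectl ];
  # point kubectl at the kubeconfig generated by k3s
  environment.sessionVariables.KUBECONFIG = "/etc/rancher/k3s/k3s.yaml";
}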
Multi-node setup
It is simple to create a cluster of multiple nodes in a highly available setup, where all nodes are part of the control plane and members of the embedded etcd cluster.
The first node is configured like this:
{
services.k3s = {
enable = true;
role = "server";
token = "<randomized common secret>";
clusterInit = true;
};
}
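Note that a token written literally like this ends up world-readable in the Nix store. If your NixOS version provides the services.k3s.tokenFile option, the secret can be read from a file at runtime instead; a sketch, where the path is hypothetical and must be provisioned by your own secrets machinery:
{
  services.k3s = {
    enable = true;
    role = "server";
    # hypothetical path; deploy this file outside the Nix store
    tokenFile = "/run/secrets/k3s-token";
    clusterInit = true;
  };
}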
Subsequent nodes can be added with a slightly different configuration:
{
services.k3s = {
enable = true;
role = "server";
token = "<randomized common secret>";
serverAddr = "https://<ip of first node>:6443";
}
For this to work you need to open the aforementioned API, etcd, and flannel ports in the firewall, as shown below. Note that it is recommended to use an odd number of nodes in such a cluster (see the etcd FAQ: https://etcd.io/docs/v3.3/faq/#why-an-odd-number-of-cluster-members).
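For example, on every node (the ports are the ones listed in the comments of the single-node example above):
{
  networking.firewall.allowedTCPPorts = [
    6443 # k3s API server
    2379 # etcd clients
    2380 # etcd peers
  ];
  networking.firewall.allowedUDPPorts = [
    8472 # flannel VXLAN inter-node networking
  ];
}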
Alternatively, see this real-world example: https://github.com/Mic92/doctor-cluster-config/tree/master/modules/k3s. You might want to ignore some parts of it, e.g. the monitoring, as they are specific to that setup. In that configuration, a K3s server imports modules/k3s/server.nix and an agent imports modules/k3s/agent.nix.
Tip: You might run into issues with coredns not being reachable from agent nodes. Right now, we disable the NixOS firewall altogether until we find a better solution.
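In configuration terms, that blunt workaround is:
{
  # disables the NixOS firewall entirely, as mentioned in the tip above
  networking.firewall.enable = false;
  # a possibly less drastic alternative (untested assumption): trust flannel's VXLAN interface instead
  # networking.firewall.trustedInterfaces = [ "flannel.1" ];
}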
ZFS support
K3s's built-in containerd does not support the ZFS snapshotter. However, it is possible to configure K3s to use an external containerd:
virtualisation.containerd = {
enable = true;
settings =
let
fullCNIPlugins = pkgs.buildEnv {
name = "full-cni";
      paths = with pkgs; [
cni-plugins
cni-plugin-flannel
];
};
in {
plugins."io.containerd.grpc.v1.cri".cni = {
bin_dir = "${fullCNIPlugins}/bin";
conf_dir = "/var/lib/rancher/k3s/agent/etc/cni/net.d/";
};
};
};
# TODO describe how to enable zfs snapshotter in containerd
services.k3s.extraFlags = toString [
"--container-runtime-endpoint unix:///run/containerd/containerd.sock"
];
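Regarding the TODO above, a hedged sketch: containerd's CRI plugin selects its snapshotter via the snapshotter key, so enabling ZFS should look roughly like this (untested; the ZFS snapshotter also expects /var/lib/containerd/io.containerd.snapshotter.v1.zfs to be a mounted ZFS dataset):
{
  virtualisation.containerd.settings.plugins."io.containerd.grpc.v1.cri".containerd.snapshotter = "zfs";
}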
Network policies
The current k3s derivation doesn't include the ipset package, which is required by the network policy controller.
Without it, k3s logs the following warning:
level=warning msg="Skipping network policy controller start, ipset unavailable: ipset utility not found"
There is an open pull request to fix this: https://github.com/NixOS/nixpkgs/pull/176520#pullrequestreview-1304593562. Until it is merged, the package can be added to k3s's path as follows:
systemd.services.k3s.path = [ pkgs.ipset ];
Troubleshooting
Raspberry Pi not working
If the k3s.service/k3s server does not start and logs the error FATA[0000] failed to find memory cgroup (v2), see this GitHub issue: https://github.com/k3s-io/k3s/issues/2067.
To fix the problem, add the following kernel parameters to your configuration.nix:
boot.kernelParams = [
"cgroup_enable=cpuset" "cgroup_memory=1" "cgroup_enable=memory"
];