K3s

<syntaxHighlight lang=nix>
{
  networking.firewall.allowedTCPPorts = [
    6443 # k3s: required so that pods can reach the API server (running on port 6443 by default)
    # 2379 # k3s, etcd clients: required if using a "High Availability Embedded etcd" configuration
    # 2380 # k3s, etcd peers: required if using a "High Availability Embedded etcd" configuration
  ];
  networking.firewall.allowedUDPPorts = [
    # 8472 # k3s, flannel: required if using multi-node for inter-node networking
  ];
  services.k3s.enable = true;
}
</syntaxHighlight>
== Multi-node setup ==


It is simple to create a cluster of multiple nodes in a highly available setup, where all nodes are part of the control plane and members of the embedded etcd cluster.
 
The first node is configured like this:
 
<syntaxHighlight lang=nix>
{
  services.k3s = {
    enable = true;
    role = "server";
    token = "<randomized common secret>";
    clusterInit = true;
  };
}
</syntaxHighlight>
 
Any subsequent node can be added with a slightly different config:
 
<syntaxHighlight lang=nix>
{
  services.k3s = {
    enable = true;
    role = "server";
    token = "<randomized common secret>";
    serverAddr = "https://<ip of first node>:6443";
  };
}
</syntaxHighlight>
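Hard-coding the shared token like this puts it into the world-readable Nix store. To avoid that, the NixOS module also has a <code>services.k3s.tokenFile</code> option that reads the secret from a file on the node instead. A minimal sketch, assuming a hypothetical secret deployed to <code>/run/secrets/k3s-token</code> (e.g. by sops-nix or agenix):

<syntaxHighlight lang=nix>
{
  services.k3s = {
    enable = true;
    role = "server";
    # hypothetical path; provision the secret with your preferred secrets tool
    tokenFile = "/run/secrets/k3s-token";
    serverAddr = "https://<ip of first node>:6443";
  };
}
</syntaxHighlight>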
 
For this to work, you need to open the aforementioned API, etcd, and flannel ports in the firewall on every node. Note that it is [https://etcd.io/docs/v3.3/faq/#why-an-odd-number-of-cluster-members recommended] to use an odd number of nodes in such a cluster.
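A sketch of the corresponding firewall settings on each node, using the port numbers from the comments above (adjust if you have changed the defaults):

<syntaxHighlight lang=nix>
{
  networking.firewall.allowedTCPPorts = [
    6443 # k3s API server
    2379 # etcd clients
    2380 # etcd peers
  ];
  networking.firewall.allowedUDPPorts = [
    8472 # flannel VXLAN, inter-node pod networking
  ];
}
</syntaxHighlight>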
 
Alternatively, see this [https://github.com/Mic92/doctor-cluster-config/tree/master/modules/k3s real-world example]. You might want to ignore some parts of it, e.g. the monitoring, as they are specific to that setup.
There, a K3s server needs to import <code>modules/k3s/server.nix</code> and an agent <code>modules/k3s/agent.nix</code>.
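If you also want worker-only nodes that are not part of the control plane, the same NixOS module can join them as agents. A minimal sketch, assuming the same shared token and the server address used above:

<syntaxHighlight lang=nix>
{
  services.k3s = {
    enable = true;
    role = "agent"; # joins as a worker only, not part of the control plane or etcd cluster
    token = "<randomized common secret>";
    serverAddr = "https://<ip of first node>:6443";
  };
}
</syntaxHighlight>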
Tip: You might run into issues with CoreDNS not being reachable from agent nodes. Right now, that configuration disables the NixOS firewall altogether until a better solution is found.