Kubernetes

1 Master and 1 Node

Assumptions:

  • Master and Node are on the same network (in this example 10.1.1.0/24)
  • IP of the Master: 10.1.1.2
  • IP of the first Node: 10.1.1.3

Caveats:

  • this was only tested on 20.09pre215024.e97dfe73bba (Nightingale) (unstable)
  • this is probably not best practice
    • for a production-grade cluster you shouldn't use easyCerts

Master

Add to your configuration.nix:

{ config, pkgs, ... }:
let
  kubeMasterIP = "10.1.1.2";
  kubeMasterHostname = "api.kube";
  kubeMasterAPIServerPort = 443;
in
{
  # resolve master hostname
  networking.extraHosts = "${kubeMasterIP} ${kubeMasterHostname}";

  # packages for administration tasks
  environment.systemPackages = with pkgs; [
    kompose
    kubectl
    kubernetes
  ];

  services.kubernetes = {
    roles = ["master" "node"];
    masterAddress = kubeMasterHostname;
    easyCerts = true;
    apiserver = {
      securePort = kubeMasterAPIServerPort;
      advertiseAddress = kubeMasterIP;
    };

    # needed if you use swap
    kubelet.extraOpts = "--fail-swap-on=false";
  };
}
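
If the NixOS firewall is enabled on the master, nodes may not be able to reach the API server; opening the port explicitly is a hedged addition on top of the config above (whether it is needed depends on your firewall setup):

  # assumption: the firewall is enabled and would otherwise block the API server port
  networking.firewall.allowedTCPPorts = [ kubeMasterAPIServerPort ];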

Apply your config (e.g. nixos-rebuild switch).

Link your kubeconfig to your home directory:

mkdir -p ~/.kube
ln -s /etc/kubernetes/cluster-admin.kubeconfig ~/.kube/config

Now, executing kubectl cluster-info should yield something like this:

Kubernetes master is running at https://10.1.1.2
CoreDNS is running at https://10.1.1.2/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Because the master also has the node role, kubectl get nodes should list it:

NAME       STATUS   ROLES    AGE   VERSION
direwolf   Ready    <none>   41m   v1.16.6-beta.0
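
If you want a quick smoke test at this point (optional, and not part of the original steps), schedule a pod and watch it come up:

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide
kubectl delete deployment nginx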

Node

Add to your configuration.nix:

{ config, pkgs, ... }:
let
  kubeMasterIP = "10.1.1.2";
  kubeMasterHostname = "api.kube";
  kubeMasterAPIServerPort = 443;
in
{
  # resolve master hostname
  networking.extraHosts = "${kubeMasterIP} ${kubeMasterHostname}";

  # packages for administration tasks
  environment.systemPackages = with pkgs; [
    kompose
    kubectl
    kubernetes
  ];

  services.kubernetes = {
    roles = ["node"];
    masterAddress = kubeMasterHostname;
    easyCerts = true;

    # point kubelet to kube-apiserver
    kubelet.kubeconfig.server = "https://${kubeMasterHostname}:${toString kubeMasterAPIServerPort}";

    # needed if you use swap
    kubelet.extraOpts = "--fail-swap-on=false";
  };
}

Apply your config (e.g. nixos-rebuild switch).
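
Before joining, you can check from the node that the API server is reachable at all; a minimal sketch, assuming the example hostname and port from above (even an HTTP 401/403 response means the connection itself works):

curl --insecure https://api.kube:443/healthz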

Following the approach used in the NixOS tests, make your node join the cluster:

# on the master, grab the apitoken
cat /var/lib/kubernetes/secrets/apitoken.secret

# on the node, join the cluster with
echo TOKEN | nixos-kubernetes-node-join
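
Assuming you have SSH access from the node to the master (an assumption, not required by the steps above), the two commands can be combined, since nixos-kubernetes-node-join reads the token on stdin:

ssh root@api.kube cat /var/lib/kubernetes/secrets/apitoken.secret | nixos-kubernetes-node-join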

After that, kubectl get nodes on the master should show the new node:

NAME       STATUS   ROLES    AGE    VERSION
direwolf   Ready    <none>   62m    v1.16.6-beta.0
drake      Ready    <none>   102m   v1.16.6-beta.0


N Masters (HA)

Debugging

systemctl status kubelet
systemctl status kube-apiserver
kubectl get nodes
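
If one of these units is failing, the journal usually has more detail than the status summary, for example:

journalctl -u kubelet -f
journalctl -u kube-apiserver --since "1 hour ago"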
