Kubernetes

1 Master and 1 Node

Assumptions:

  • Master and Node are on the same network (in this example 10.1.1.0/24)
  • IP of the Master: 10.1.1.2
  • IP of the first Node: 10.1.1.3

Caveats:

  • this was only tested on 20.09pre215024.e97dfe73bba (Nightingale) (unstable)
  • this is probably not best practice
    • for a production-grade cluster you shouldn't use easyCerts

Master

Add to your configuration.nix:

{ config, pkgs, ... }:
let
  kubeMasterIP = "10.1.1.2";
  kubeMasterHostname = "api.kube";
  kubeMasterAPIServerPort = 443;
in
{
  # resolve master hostname
  networking.extraHosts = "${kubeMasterIP} ${kubeMasterHostname}";

  # packages for administration tasks
  environment.systemPackages = with pkgs; [
    kompose
    kubectl
    kubernetes
  ];

  services.kubernetes = {
    roles = ["master" "node"];
    masterAddress = kubeMasterHostname;
    easyCerts = true;
    apiserver = {
      securePort = kubeMasterAPIServerPort;
      advertiseAddress = kubeMasterIP;
    };

    # needed if you use swap
    kubelet.extraOpts = "--fail-swap-on=false";
  };
}

Apply your config (e.g. nixos-rebuild switch).
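
After the rebuild, the relevant systemd units should come up on their own. As a quick sanity check (a sketch; the exact set of units depends on your NixOS version and the roles enabled above):

# on the master, the control-plane units should all report "active"
systemctl is-active etcd kube-apiserver kube-controller-manager kube-scheduler kubelet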

Link your kubeconfig to your home directory (create ~/.kube first, since ln fails if it does not exist):

mkdir -p ~/.kube
ln -s /etc/kubernetes/cluster-admin.kubeconfig ~/.kube/config
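
Alternatively, kubectl honors the KUBECONFIG environment variable, so you can skip the symlink:

export KUBECONFIG=/etc/kubernetes/cluster-admin.kubeconfig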

Now, executing kubectl cluster-info should yield something like this:

Kubernetes master is running at https://10.1.1.2
CoreDNS is running at https://10.1.1.2/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Since the master also runs the node role, kubectl get nodes should already list it:

NAME       STATUS   ROLES    AGE   VERSION
direwolf   Ready    <none>   41m   v1.16.6-beta.0
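
To confirm that the cluster can actually schedule workloads, a minimal smoke test helps (assuming the node can pull images from the internet; the deployment name nginx-test is arbitrary):

kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide   # the pod should reach Running once the image is pulled
kubectl delete deployment nginx-test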

Node

Add to your configuration.nix:

{ config, pkgs, ... }:
let
  kubeMasterIP = "10.1.1.2";
  kubeMasterHostname = "api.kube";
  kubeMasterAPIServerPort = 443;
in
{
  # resolve master hostname
  networking.extraHosts = "${kubeMasterIP} ${kubeMasterHostname}";

  # packages for administration tasks
  environment.systemPackages = with pkgs; [
    kompose
    kubectl
    kubernetes
  ];

  services.kubernetes = {
    roles = ["node"];
    masterAddress = kubeMasterHostname;
    easyCerts = true;

    # point kubelet to kube-apiserver
    kubelet.kubeconfig.server = "https://${kubeMasterHostname}:${toString kubeMasterAPIServerPort}";

    # needed if you use swap
    kubelet.extraOpts = "--fail-swap-on=false";
  };
}

Apply your config (e.g. nixos-rebuild switch).

Following the approach used in the NixOS e2e tests, make your node join the cluster:

# on the master, grab the apitoken
cat /var/lib/kubernetes/secrets/apitoken.secret

# on the node, join the node with
echo TOKEN | nixos-kubernetes-node-join
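
If the node has SSH access to the master, both steps collapse into one (a sketch; substitute your own user and master address):

ssh root@10.1.1.2 cat /var/lib/kubernetes/secrets/apitoken.secret | nixos-kubernetes-node-join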

After that, you should see your new node using kubectl get nodes:

NAME       STATUS   ROLES    AGE    VERSION
direwolf   Ready    <none>   62m    v1.16.6-beta.0
drake      Ready    <none>   102m   v1.16.6-beta.0


N Masters (HA)

Debugging

systemctl status kubelet
systemctl status kube-apiserver
kubectl get nodes
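
When a unit fails, its journal usually explains why (which units exist depends on the roles you enabled):

# follow kubelet logs on any machine
journalctl -u kubelet -f

# on the master: recent API server logs
journalctl -u kube-apiserver --since "10 min ago"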

Clean State

Sometimes it helps to reset all instances to a clean state; a consolidated sketch of these steps follows the list:

  • comment kubernetes-related code in configuration.nix
  • nixos-rebuild switch
  • clean up filesystem
    • rm -rf /var/lib/kubernetes/ /var/lib/etcd/ /var/lib/cfssl/ /var/lib/kubelet/
    • rm -rf /etc/kube-flannel/ /etc/kubernetes/
  • uncomment kubernetes-related code again
  • nixos-rebuild switch
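
Put together, the reset looks roughly like this on each instance (a sketch; the first rebuild runs with the Kubernetes options commented out):

# rebuild with the kubernetes-related config commented out
nixos-rebuild switch

# wipe the state the services left behind
rm -rf /var/lib/kubernetes/ /var/lib/etcd/ /var/lib/cfssl/ /var/lib/kubelet/
rm -rf /etc/kube-flannel/ /etc/kubernetes/

# restore the kubernetes-related config, then rebuild again
nixos-rebuild switch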

Tooling

Several community projects aim to make working with Kubernetes and Nix easier:

  • kubernix: simple setup of development clusters using Nix (https://github.com/saschagrunert/kubernix)
  • kube-nix (https://github.com/cmollekopf/kube-nix)

Sources

  • Issue #39327: kubernetes support is missing some documentation (https://github.com/NixOS/nixpkgs/issues/39327)
  • NixOS Discourse: using multiple nodes with latest unstable (https://discourse.nixos.org/t/kubernetes-using-multiple-nodes-with-latest-unstable/3936)
  • Kubernetes docs (https://kubernetes.io/docs/home/)
  • NixOS e2e kubernetes tests: node joining etc. (https://github.com/NixOS/nixpkgs/tree/master/nixos/tests/kubernetes)
  • IRC (2018-09): issues related to DNS (https://logs.nix.samueldr.com/nixos-kubernetes/2018-09-07)
  • IRC (2019-09): discussion about easyCerts and general setup (https://logs.nix.samueldr.com/nixos-kubernetes/2019-09-05)