Kubernetes
Revision as of 17:52, 29 February 2020
== 1 Master and 1 Node ==
Assumptions:
* Master and Node are on the same network (in this example <code>10.1.1.0/24</code>)
* IP of the Master: <code>10.1.1.2</code>
* IP of the first Node: <code>10.1.1.3</code>
Caveats:
* this was only tested on <code>20.09pre215024.e97dfe73bba (Nightingale)</code> (<code>unstable</code>)
* this is probably not best practice
** for a production-grade cluster you shouldn't use <code>easyCerts</code>
=== Master ===
Add to your <code>configuration.nix</code>:
<syntaxhighlight lang="nix">
{ config, pkgs, ... }:
let
  kubeMasterIP = "10.1.1.2";
  kubeMasterHostname = "api.kube";
  kubeMasterAPIServerPort = 443;
in
{
  # resolve master hostname
  networking.extraHosts = "${kubeMasterIP} ${kubeMasterHostname}";

  # packages for administration tasks
  environment.systemPackages = with pkgs; [
    kompose
    kubectl
    kubernetes
  ];

  services.kubernetes = {
    roles = ["master" "node"];
    apiserver = {
      securePort = kubeMasterAPIServerPort;
      advertiseAddress = kubeMasterIP;
    };
    masterAddress = kubeMasterHostname;
    easyCerts = true;
  };

  # needed if you use swap
  services.kubernetes.kubelet.extraOpts = "--fail-swap-on=false";
}
</syntaxhighlight>
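If the NixOS firewall is enabled on the master (the default), other machines can only reach kube-apiserver once its port is opened. A minimal sketch, assuming the default firewall and the port <code>443</code> used in this example:

<syntaxhighlight lang="nix">
{
  # sketch: let nodes on the network reach kube-apiserver;
  # assumes the default NixOS firewall is enabled and securePort = 443
  networking.firewall.allowedTCPPorts = [ 443 ];
}
</syntaxhighlight>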
Apply your config (e.g. <code>nixos-rebuild switch</code>).
Link your <code>kubeconfig</code> to your home directory:
<syntaxhighlight lang="bash">
# create ~/.kube first, otherwise the link cannot be created
mkdir -p ~/.kube
ln -s /etc/kubernetes/cluster-admin.kubeconfig ~/.kube/config
</syntaxhighlight>
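Alternatively (a sketch, not from the original setup), you can point <code>kubectl</code> at the admin kubeconfig via the standard <code>KUBECONFIG</code> environment variable instead of symlinking:

<syntaxhighlight lang="bash">
# point kubectl at the cluster-admin kubeconfig for this shell session
export KUBECONFIG=/etc/kubernetes/cluster-admin.kubeconfig
</syntaxhighlight>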
Now, executing <code>kubectl cluster-info</code> should yield something like this:
<syntaxhighlight>
Kubernetes master is running at https://10.1.1.2
CoreDNS is running at https://10.1.1.2/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
</syntaxhighlight>
You should also see that the master is itself a node, using <code>kubectl get nodes</code>:
<syntaxhighlight>
NAME       STATUS   ROLES    AGE   VERSION
direwolf   Ready    <none>   41m   v1.16.6-beta.0
</syntaxhighlight>
=== Node ===
Add to your <code>configuration.nix</code>:
<syntaxhighlight lang="nix">
{ config, pkgs, ... }:
let
  kubeMasterIP = "10.1.1.2";
  kubeMasterHostname = "api.kube";
  kubeMasterAPIServerPort = "443";
in
{
  # resolve master hostname
  networking.extraHosts = "${kubeMasterIP} ${kubeMasterHostname}";

  # packages for administration tasks
  environment.systemPackages = with pkgs; [
    kompose
    kubectl
    kubernetes
  ];

  services.kubernetes = {
    roles = ["node"];
    masterAddress = kubeMasterHostname;
    easyCerts = true;

    # point kubelet to kube-apiserver
    kubelet.kubeconfig.server = "https://${kubeMasterHostname}:${kubeMasterAPIServerPort}";

    # needed if you use swap
    kubelet.extraOpts = "--fail-swap-on=false";
  };
}
</syntaxhighlight>
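The master and node configurations above duplicate several settings (<code>extraHosts</code>, the admin packages, <code>masterAddress</code>, <code>easyCerts</code>, the swap workaround). A sketch of factoring them into a shared module, assuming a hypothetical file <code>kube-common.nix</code> that both machines add to their <code>imports</code>:

<syntaxhighlight lang="nix">
# kube-common.nix (hypothetical filename) -- imported by both master and node via
#   imports = [ ./kube-common.nix ];
{ config, pkgs, ... }:
{
  # resolve master hostname
  networking.extraHosts = "10.1.1.2 api.kube";

  # packages for administration tasks
  environment.systemPackages = with pkgs; [ kompose kubectl kubernetes ];

  services.kubernetes = {
    masterAddress = "api.kube";
    easyCerts = true;

    # needed if you use swap
    kubelet.extraOpts = "--fail-swap-on=false";
  };
}
</syntaxhighlight>

Each machine then only declares what differs: its <code>roles</code> and, on the master, the <code>apiserver</code> settings.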
Apply your config (e.g. <code>nixos-rebuild switch</code>).
According to the [https://github.com/NixOS/nixpkgs/blob/18ff53d7656636aa440b2f73d2da788b785e6a9c/nixos/tests/kubernetes/rbac.nix#L118 NixOS tests], make your Node join the cluster:
<syntaxhighlight lang="bash">
# on the master, grab the apitoken
cat /var/lib/kubernetes/secrets/apitoken.secret
# on the node, join the cluster with that token
echo TOKEN | nixos-kubernetes-node-join
# or, in one step from the node (assumes root SSH access to the master):
ssh root@10.1.1.2 cat /var/lib/kubernetes/secrets/apitoken.secret | nixos-kubernetes-node-join
</syntaxhighlight>
After that, you should see your new node using <code>kubectl get nodes</code>:
<syntaxhighlight>
NAME       STATUS   ROLES    AGE    VERSION
direwolf   Ready    <none>   62m    v1.16.6-beta.0
drake      Ready    <none>   102m   v1.16.6-beta.0
</syntaxhighlight>
== N Masters (HA) ==
{{expansion|How to set this up?}}
== Debugging ==
<syntaxhighlight lang="bash">
systemctl status kubelet
systemctl status kube-apiserver
# follow kubelet logs (useful when a node fails to join)
journalctl -u kubelet -f
kubectl get nodes
</syntaxhighlight>
== Sources ==
* Kubernetes docs
* NixOS e2e kubernetes tests: Node Joining etc.
* IRC (2018-09): issues related to DNS
* IRC (2019-09): discussion about <code>easyCerts</code> and general setup