Native NixOS containers
It is possible to configure native systemd-nspawn containers, which run NixOS and are configured and managed by NixOS using the containers directive.
Configuration
The following example creates a container called nextcloud running the web application Nextcloud. It will start automatically at boot and has its own private network subnet.
/etc/nixos/configuration.nix
networking.nat = {
  enable = true;
  internalInterfaces = ["ve-+"];
  externalInterface = "ens3";
  # Lazy IPv6 connectivity for the container
  enableIPv6 = true;
};

containers.nextcloud = {
  autoStart = true;
  privateNetwork = true;
  hostAddress = "192.168.100.10";
  localAddress = "192.168.100.11";
  hostAddress6 = "fc00::1";
  localAddress6 = "fc00::2";
  config = { config, pkgs, lib, ... }: {
    services.nextcloud = {
      enable = true;
      package = pkgs.nextcloud28;
      hostName = "localhost";
      config.adminpassFile = "${pkgs.writeText "adminpass" "test123"}"; # DON'T DO THIS IN PRODUCTION - the password file will be world-readable in the Nix Store!
    };

    system.stateVersion = "23.11";

    networking = {
      firewall = {
        enable = true;
        allowedTCPPorts = [ 80 ];
      };
      # Use systemd-resolved inside the container
      # Workaround for bug https://github.com/NixOS/nixpkgs/issues/162686
      useHostResolvConf = lib.mkForce false;
    };

    services.resolved.enable = true;
  };
};
In order to reach the web application from the host system, we have to open firewall port 80 inside the container and configure NAT on the host through networking.nat. The web service of the container will then be available at http://192.168.100.11.
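Once the container is up, this can be verified from the host:

# curl http://192.168.100.11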
Networking
By default, if privateNetwork is not set, the container shares the network with the host, enabling it to bind to any port on any interface. However, when privateNetwork is set to true, the container gets its own private virtual eth0 interface, with a corresponding ve-<container_name> interface on the host. This isolation is useful when you want the container to have a dedicated networking stack.
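As an illustration, a minimal sketch of a container that shares the host's network; the name webserver and the nginx service are placeholder choices:

containers.webserver = {
  autoStart = true;
  # privateNetwork is not set, so the container shares the host's
  # network and can bind directly to host ports
  config = { config, pkgs, ... }: {
    services.nginx.enable = true;
    system.stateVersion = "23.11";
  };
};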
NAT (Network Address Translation)
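With NAT, the host translates between the containers' private addresses and its external uplink. A minimal sketch mirroring the configuration example above; the external interface ens3, the container name mycontainer, and the addresses are placeholders to adjust for your setup:

networking.nat = {
  enable = true;
  internalInterfaces = [ "ve-+" ]; # matches the host-side interfaces of all containers
  externalInterface = "ens3";      # the host's uplink interface
};

containers.mycontainer = {
  privateNetwork = true;
  hostAddress = "192.168.100.10";
  localAddress = "192.168.100.11";
  config = { };
};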
Bridge
networking = {
  bridges.br0.interfaces = [ "enp0s31f6" ]; # Adjust to your physical interface

  # Option 1: Get the bridge IP with DHCP
  useDHCP = false;
  interfaces."br0".useDHCP = true;

  # Option 2: Set the bridge IP statically (use one option or the other)
  interfaces."br0".ipv4.addresses = [{
    address = "192.168.100.3";
    prefixLength = 24;
  }];
  defaultGateway = "192.168.100.1";
  nameservers = [ "192.168.100.1" ];
};

containers.<name> = {
  privateNetwork = true;
  hostBridge = "br0"; # Specify the bridge name
  localAddress = "192.168.100.5/24";
  config = { };
};
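With a bridge, the container appears as an ordinary host on the local network and gets an address from the same subnet as the host, whereas the NAT approach hides containers behind the host's own address.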
Usage
List containers
# machinectl list
Check the status of a container
# systemctl status container@nextcloud
Log in to the container
# nixos-container root-login nextcloud
Start or stop a container
# nixos-container start nextcloud
# nixos-container stop nextcloud
Destroy a container including its file system
# nixos-container destroy nextcloud
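Declarative containers are part of the host's configuration, so changes to them are applied by rebuilding the host:

# nixos-rebuild switch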
Further information is available in the NixOS manual.
Declarative Docker containers
Example config:
{ config, pkgs, ... }: {
  config.virtualisation.oci-containers.containers = {
    hackagecompare = {
      image = "chrissound/hackagecomparestats-webserver:latest";
      ports = [ "127.0.0.1:3010:3010" ];
      volumes = [
        "/root/hackagecompare/packageStatistics.json:/root/hackagecompare/packageStatistics.json"
      ];
      cmd = [ "--base-url" "\"/hackagecompare\"" ];
    };
  };
}
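The oci-containers module runs these containers through a configurable backend, which depending on the NixOS release may default to Podman rather than Docker. To select Docker explicitly (a sketch, assuming your release defaults differently):

virtualisation.oci-containers.backend = "docker";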
Troubleshooting
I have changed the host's channel and some services are no longer functional
Symptoms:
- Lost data in PostgreSQL database
- MySQL has changed the path where it creates the database
Solution
If you did not set the system.stateVersion option inside your declarative container configuration, it uses the default for the current channel. Your data might be safe if you did nothing in the meantime. Add the missing system.stateVersion to your container configuration, rebuild, and possibly stop and start the container.
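As a sketch, assuming the container's state was created on NixOS 23.05 (the name mycontainer and the version are placeholders):

containers.mycontainer = {
  config = { config, pkgs, ... }: {
    # Pin to the NixOS release this container's state was created with
    system.stateVersion = "23.05";
  };
};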