NixOS Containers


Native NixOS containers

It is possible to configure native systemd-nspawn containers that run NixOS and are configured and managed declaratively by the host's NixOS configuration via the containers option.

Configuration

The following example creates a container called nextcloud running the web application Nextcloud. It starts automatically at boot and has its own private network subnet.

/etc/nixos/configuration.nix
networking.nat = {
  enable = true;
  internalInterfaces = ["ve-+"];
  externalInterface = "ens3";
  # Lazy IPv6 connectivity for the container
  enableIPv6 = true;
};

containers.nextcloud = {
  autoStart = true;
  privateNetwork = true;
  hostAddress = "192.168.100.10";
  localAddress = "192.168.100.11";
  hostAddress6 = "fc00::1";
  localAddress6 = "fc00::2";
  config = { config, pkgs, lib, ... }: {

    services.nextcloud = {
      enable = true;
      package = pkgs.nextcloud28;
      hostName = "localhost";
      config.adminpassFile = "${pkgs.writeText "adminpass" "test123"}"; # DON'T DO THIS IN PRODUCTION - the password file will be world-readable in the Nix Store!
    };

    system.stateVersion = "24.11";

    networking = {
      firewall = {
        enable = true;
        allowedTCPPorts = [ 80 ];
      };
      # Use systemd-resolved inside the container
      # Workaround for bug https://github.com/NixOS/nixpkgs/issues/162686
      useHostResolvConf = lib.mkForce false;
    };
    
    services.resolved.enable = true;

  };
};

To reach the web application from the host system, port 80 is opened in the container's firewall and NAT is configured on the host through networking.nat, both shown above. The container's web service is then available at http://192.168.100.11.
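
Once the container is running, reachability can be checked from the host, for example with curl (Nextcloud may reject hostnames that are not in its trusted domains, so a plain request against the IP can return an error page rather than the application):

$ curl -I http://192.168.100.11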

Networking

By default (when privateNetwork is not set), the container shares the network stack with the host and can bind any port on any interface. When privateNetwork is set to true, the container instead gets its own virtual Ethernet pair: eth0 inside the container and ve-<container_name> on the host. This isolation is useful when the container should have a dedicated networking stack.
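
As a sketch, a minimal container that keeps the default shared networking needs no address configuration at all (webserver is a hypothetical name here):

containers.webserver = {
  autoStart = true;
  # privateNetwork defaults to false, so the container shares the
  # host's network stack: nginx here binds port 80 on the host itself.
  config = { pkgs, ... }: {
    services.nginx.enable = true;
    system.stateVersion = "24.11";
  };
};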

NAT (Network Address Translation)
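
With privateNetwork = true, the container can only reach beyond the host if the host translates its traffic. The networking.nat block from the configuration example above does exactly that; a minimal sketch (assuming ens3 is the host's uplink interface):

networking.nat = {
  enable = true;
  internalInterfaces = [ "ve-+" ]; # Matches every container veth interface
  externalInterface = "ens3";      # Adjust to your uplink interface
};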

Bridge

networking = {
  bridges.br0.interfaces = [ "enp0s31f6" ]; # Adjust to your physical interface

  # Option 1: get the bridge IP via DHCP
  useDHCP = false;
  interfaces."br0".useDHCP = true;

  # Option 2: set a static bridge IP instead (use either DHCP or static, not both)
  # interfaces."br0".ipv4.addresses = [{
  #   address = "192.168.100.3";
  #   prefixLength = 24;
  # }];
  # defaultGateway = "192.168.100.1";
  # nameservers = [ "192.168.100.1" ];
};

containers.<name> = {
  privateNetwork = true;
  hostBridge = "br0"; # Specify the bridge name
  localAddress = "192.168.100.5/24";
  config = { };
};
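
With a static localAddress on a bridge, the container does not get a default gateway or nameservers automatically; a sketch of the additions inside the container's config (assuming the router on this network is 192.168.100.1):

config = { ... }: {
  networking.defaultGateway = "192.168.100.1";
  networking.nameservers = [ "192.168.100.1" ];
};

If the bridged network runs a DHCP server, setting networking.interfaces.eth0.useDHCP = true inside the container is another option.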

Usage

List containers

# machinectl list

Check the status of a container

# systemctl status container@nextcloud

Log in to the container

# nixos-container root-login nextcloud

Start or stop a container

# nixos-container start nextcloud
# nixos-container stop nextcloud

Destroy a container including its file system

# nixos-container destroy nextcloud

Further information is available in the NixOS manual.

Declarative OCI containers (Docker/Podman)

Example config

{ config, pkgs, ... }:

{
  config.virtualisation.oci-containers.containers = {
    hackagecompare = {
      image = "chrissound/hackagecomparestats-webserver:latest";
      ports = ["127.0.0.1:3010:3010"];
      volumes = [
        "/root/hackagecompare/packageStatistics.json:/root/hackagecompare/packageStatistics.json"
      ];
      cmd = [
        "--base-url"
        "\"/hackagecompare\""
      ];
    };
  };
}

Usage

By default, NixOS uses Podman to run OCI containers. Note that Podman containers are per-user, so running commands with or without sudo can change the output.
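
The backend can also be set explicitly; for example, to run these containers with Docker instead (both values are accepted by the module):

virtualisation.oci-containers.backend = "docker";

Each declared container becomes a systemd service named after the backend, e.g. podman-hackagecompare.service, which can be inspected with systemctl status.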


List containers

# podman ps

Restart a container

# podman restart hackagecompare

List images

# podman images

Remove container

# podman rm hackagecompare

Remove image

# podman rmi c0d9a5f58afe

Update image

# podman pull chrissound/hackagecomparestats-webserver:latest

Run interactive shell in running container

# podman exec -ti $ContainerId /bin/sh

Troubleshooting

I have changed the host's channel and some services are no longer functional

Symptoms:

  • Lost data in PostgreSQL database
  • MySQL has changed the path where it creates its database

Solution

If no system.stateVersion is set inside the declarative container's configuration, it defaults to the value for the current channel, so changing the host's channel can silently change where stateful services keep their data. Your data is probably still intact if nothing has written to the new paths in the meantime. Add the missing system.stateVersion to the container's configuration, rebuild, and stop/start the container if necessary.
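
For example, if the container was created on NixOS 23.11, pin that release explicitly (a sketch; adjust the version to whatever release the container actually started on):

containers.nextcloud = {
  config = { ... }: {
    system.stateVersion = "23.11"; # the release the container was created with
  };
};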

See also