NixOS Containers

== Native NixOS containers ==

It is possible to configure native [https://wiki.archlinux.org/title/systemd-nspawn systemd-nspawn] containers, which run NixOS and are configured and managed by NixOS through the <code>containers</code> directive.

=== Configuration ===

The following example creates a container called <code>nextcloud</code> running the web application [[Nextcloud]]. It starts automatically at boot and has its own private network subnet.

{{file|/etc/nixos/configuration.nix|nix|<nowiki>
networking.nat = {
  enable = true;
  internalInterfaces = ["ve-+"];
  externalInterface = "ens3";
  # Lazy IPv6 connectivity for the container
  enableIPv6 = true;
};

containers.nextcloud = {
  autoStart = true;
  privateNetwork = true;
  hostAddress = "192.168.100.10";
  localAddress = "192.168.100.11";
  hostAddress6 = "fc00::1";
  localAddress6 = "fc00::2";
  config = { config, pkgs, lib, ... }: {

    services.nextcloud = {
      enable = true;
      package = pkgs.nextcloud28;
      hostName = "localhost";
      config.adminpassFile = "${pkgs.writeText "adminpass" "test123"}"; # DON'T DO THIS IN PRODUCTION - the password file will be world-readable in the Nix Store!
    };

    system.stateVersion = "23.11";

    networking = {
      firewall = {
        enable = true;
        allowedTCPPorts = [ 80 ];
      };
      # Use systemd-resolved inside the container
      # Workaround for bug https://github.com/NixOS/nixpkgs/issues/162686
      useHostResolvConf = lib.mkForce false;
    };
    
    services.resolved.enable = true;

  };
};
</nowiki>}}

In order to reach the web application from the host system, we have to open [[Firewall]] port 80 inside the container and configure NAT through <code>networking.nat</code> on the host. The container's web service will then be available at http://192.168.100.11
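
For example, once the container is up, you can verify reachability from the host (assuming the configuration above):
<syntaxhighlight lang="console">
$ curl http://192.168.100.11
</syntaxhighlight>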

==== Networking ====

By default, if <code>privateNetwork</code> is not set, the container shares the network with the host and can bind to any port on any interface. When <code>privateNetwork</code> is set to <code>true</code>, the container gets its own private virtual <code>eth0</code> interface, paired with a <code>ve-<container_name></code> interface on the host. This isolation is useful when you want the container to have its own dedicated networking stack.

'''NAT (Network Address Translation)'''
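
With NAT, container traffic leaves through the host's external interface, as in the full example at the top of this page. A minimal sketch (<code>mycontainer</code> is a placeholder name, and <code>ens3</code> should be adjusted to your uplink interface):
<syntaxhighlight lang="nix">
networking.nat = {
  enable = true;
  # Match the host side of all container veth pairs (ve-<name>)
  internalInterfaces = [ "ve-+" ];
  externalInterface = "ens3"; # Adjust to your uplink interface
};

containers.mycontainer = {
  privateNetwork = true;
  hostAddress = "192.168.100.10";  # Host end of the veth pair
  localAddress = "192.168.100.11"; # Container end
  config = { };
};
</syntaxhighlight>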

'''Bridge'''

<syntaxhighlight lang="nix">
networking = {
  bridges.br0.interfaces = [ "enp0s31f6" ]; # Adjust to your physical interface

  # Either get the bridge IP with DHCP ...
  useDHCP = false;
  interfaces."br0".useDHCP = true;

  # ... or set the bridge IP statically
  interfaces."br0".ipv4.addresses = [{
    address = "192.168.100.3";
    prefixLength = 24;
  }];
  defaultGateway = "192.168.100.1";
  nameservers = [ "192.168.100.1" ];
};

containers.<name> = {
  privateNetwork = true;
  hostBridge = "br0"; # Specify the bridge name
  localAddress = "192.168.100.5/24";
  config = { };
};
</syntaxhighlight>

=== Usage ===

List containers
<syntaxhighlight lang="console">
# machinectl list
</syntaxhighlight>

Checking the status of the container
<syntaxhighlight lang="console">
# systemctl status container@nextcloud
</syntaxhighlight>

Log in to the container
<syntaxhighlight lang="console">
# nixos-container root-login nextcloud
</syntaxhighlight>

Start or stop a container
<syntaxhighlight lang="console">
# nixos-container start nextcloud
# nixos-container stop nextcloud
</syntaxhighlight>

Destroy a container including its file system
<syntaxhighlight lang="console">
# nixos-container destroy nextcloud
</syntaxhighlight>
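
Declarative containers are part of the host configuration, so changes to a container's <code>config</code> are applied by rebuilding the host:
<syntaxhighlight lang="console">
# nixos-rebuild switch
</syntaxhighlight>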

Further information is available in the {{manual:nixos|sec=#ch-containers|chapter=NixOS manual}}.

== Declarative OCI containers (Docker/Podman) ==

=== Example config ===

<syntaxhighlight lang="nix">
{ config, pkgs, ... }:

{
  config.virtualisation.oci-containers.containers = {
    hackagecompare = {
      image = "chrissound/hackagecomparestats-webserver:latest";
      # Bind only on localhost: host port 3010 -> container port 3010
      ports = ["127.0.0.1:3010:3010"];
      # Bind-mount host data into the container
      volumes = [
        "/root/hackagecompare/packageStatistics.json:/root/hackagecompare/packageStatistics.json"
      ];
      # Extra arguments passed to the container entrypoint
      cmd = [
        "--base-url"
        "\"/hackagecompare\""
      ];
    };
  };
}
</syntaxhighlight>
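
Each entry is wrapped in a systemd unit named after the backend and the container, so the example above should be manageable as <code>podman-hackagecompare.service</code>:
<syntaxhighlight lang="console">
# systemctl status podman-hackagecompare.service
</syntaxhighlight>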

=== Usage ===

NixOS uses Podman to run OCI containers. Note that these are '''user-specific''', so running commands with or without sudo can change your output.
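
To pin the backend explicitly rather than relying on the default:
<syntaxhighlight lang="nix">
virtualisation.oci-containers.backend = "podman"; # or "docker"
</syntaxhighlight>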


List containers
<syntaxhighlight lang="console">
# podman ps
</syntaxhighlight>

Restart container
<syntaxhighlight lang="console">
# podman restart hackagecompare
</syntaxhighlight>

List images
<syntaxhighlight lang="console">
# podman images
</syntaxhighlight>

Remove container
<syntaxhighlight lang="console">
# podman rm hackagecompare
</syntaxhighlight>

Remove image
<syntaxhighlight lang="console">
# podman rmi c0d9a5f58afe
</syntaxhighlight>

Update image
<syntaxhighlight lang="console">
# podman pull chrissound/hackagecomparestats-webserver:latest
</syntaxhighlight>

Run an interactive shell in a running container
<syntaxhighlight lang="console">
# podman exec -ti $ContainerId /bin/sh
</syntaxhighlight>

== Troubleshooting ==

=== I have changed the host's channel and some services are no longer functional ===

Symptoms:

* Lost data in a PostgreSQL database
* MySQL has changed the path where it creates its databases

'''Solution'''

If you did not set a <code>system.stateVersion</code> option inside your declarative container configuration, it falls back to the channel's default, which silently changes when you switch the host's channel. Your data is probably still intact if nothing has modified it in the meantime. Add the missing <code>system.stateVersion</code>, set to the previous release, to your container configuration, rebuild, and possibly stop and start the container.
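
For the Nextcloud container from the example above, that means pinning the release the container was first created with (the version number here is illustrative):
<syntaxhighlight lang="nix">
containers.nextcloud.config = { config, pkgs, ... }: {
  # ... existing container configuration ...
  system.stateVersion = "23.11"; # The release the container was first created with
};
</syntaxhighlight>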

== See also ==

* [https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/virtualisation/nixos-containers.nix Nixpkgs - nixos-containers.nix]
* [https://nixcademy.com/2023/08/29/nixos-nspawn/ nixos-nspawn]
* [https://github.com/tfc/nspawn-nixos tfc/nspawn-nixos]
* MicroVMs as a more isolated alternative, e.g. with https://github.com/astro/microvm.nix

[[Category:Server]]
[[Category:NixOS]]
[[Category:Container]]