[https://zfsonlinux.org/ {{PAGENAME}}] ([[wikipedia:en:{{PAGENAME}}]]), also known as [https://openzfs.org/ OpenZFS] ([[wikipedia:en:OpenZFS]]), is a modern filesystem which is well supported on [[NixOS]].
[[category:filesystem]]
Besides the {{nixos:package|zfs}} package (''ZFS Filesystem Linux Kernel module'') itself, many other packages from the ZFS ecosystem are available.


ZFS integrates into NixOS via the {{nixos:option|boot.zfs}} and {{nixos:option|services.zfs}} options.


== Limitations ==


==== Latest Kernel compatible with ZFS ====
ZFS often does not support the latest Kernel versions. It is recommended to use an LTS Kernel version whenever possible; the NixOS default Kernel is generally suitable. See [[Linux kernel|Linux Kernel]] for more information about configuring a specific Kernel version.


If your config specifies a Kernel version that is not officially supported by upstream ZFS, the ZFS module will fail to evaluate with an error that the ZFS package is "broken". Since ZFS 2.3, upstream refuses to build against unsupported Kernels by default, regardless of whether Nixpkgs marks the package as broken (or ignores that marking).
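If you pin an LTS Kernel explicitly, a minimal sketch looks like this (<code>linuxPackages_6_6</code> is only an example attribute; substitute whichever LTS series Nixpkgs currently provides and ZFS supports):

<syntaxhighlight lang="nix">
{ pkgs, ... }:
{
  # Pin a specific kernel series instead of tracking the newest one.
  # linuxPackages_6_6 is an example; pick an LTS release supported by ZFS.
  boot.kernelPackages = pkgs.linuxPackages_6_6;
}
</syntaxhighlight>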


===== Selecting the latest ZFS-compatible Kernel =====
{{Warning|This will often result in the Kernel version going backwards as Kernel versions become end-of-life and are removed from Nixpkgs. If you need more control over the Kernel version due to hardware requirements, consider simply pinning a specific version rather than calculating it as below.}}
To use the latest ZFS-compatible Kernel currently available, the following configuration may be used.


<syntaxhighlight lang="nix">
{
  config,
  lib,
  pkgs,
  ...
}:
let
  zfsCompatibleKernelPackages = lib.filterAttrs (
    name: kernelPackages:
    (builtins.match "linux_[0-9]+_[0-9]+" name) != null
    && (builtins.tryEval kernelPackages).success
    && (!kernelPackages.${config.boot.zfs.package.kernelModuleAttribute}.meta.broken)
  ) pkgs.linuxKernel.packages;
  latestKernelPackage = lib.last (
    lib.sort (a: b: (lib.versionOlder a.kernel.version b.kernel.version)) (
      builtins.attrValues zfsCompatibleKernelPackages
    )
  );
in
{
  # Note this might jump back and forth as kernels are added or removed.
  boot.kernelPackages = latestKernelPackage;
}
</syntaxhighlight>


===== Using unstable, pre-release ZFS =====
{{Warning|Pre-release ZFS versions may be less well-tested, and may have critical bugs that may cause data loss.}}{{Warning|Running ZFS with a Kernel unsupported by upstream “is considered EXPERIMENTAL by the OpenZFS project. Even if it appears to build and run correctly, there may be bugs that can cause SERIOUS DATA LOSS.”}}
In some cases, a pre-release version of ZFS may be available that supports a newer Kernel. Use it with <code>boot.zfs.package = pkgs.zfs_unstable;</code>. Using zfs_unstable may allow the use of an unsupported Kernel; as warned above, [https://github.com/openzfs/zfs/blob/6a2f7b38442b42f4bc9a848f8de10fc792ce8d76/config/kernel.m4#L473-L487 upstream considers this experimental].
 
==== Partial support for swap on ZFS ====
 
ZFS does not support swapfiles; swap devices can be used instead. Additionally, hibernation is disabled by default due to a [https://github.com/NixOS/nixpkgs/pull/208037 high risk] of data corruption. Note that even if that pull request is merged, it does not fully mitigate the risk. If you wish to enable hibernation regardless, and have made sure that no swapfile on ZFS is used, set <code>boot.zfs.allowHibernation = true</code>.
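For example, a minimal sketch (only after verifying that no swapfile or swap zvol lives on ZFS):

<syntaxhighlight lang="nix">
{
  # Opt in to hibernation despite it being off by default for safety.
  # Only safe when swap is on a plain partition, not on ZFS.
  boot.zfs.allowHibernation = true;
}
</syntaxhighlight>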
 
==== Zpool not found ====
 
If NixOS fails to import the zpool on reboot, you may need to add <syntaxhighlight lang="nix" inline>boot.zfs.devNodes = "/dev/disk/by-path";</syntaxhighlight> or <syntaxhighlight lang="nix" inline>boot.zfs.devNodes = "/dev/disk/by-partuuid";</syntaxhighlight> to your configuration.nix file.
 
The differences can be tested by running <code>zpool import -d /dev/disk/by-id</code> when none of the pools are discovered, e.g. from a live ISO.
 
==== ZFS conflicting with systemd ====
 
ZFS itself mounts datasets with non-legacy mountpoints, while NixOS manages mounting through systemd. ZFS native mountpoints are not managed as part of the system configuration (but better support hibernation with a separate swap partition). This can lead to conflicts if the ZFS mount service is also enabled for the same datasets.
 
Disable the mount service with <code>systemd.services.zfs-mount.enable = false;</code>, or remove the <code>fileSystems</code> entries from hardware-configuration.nix. Alternatively, use legacy mountpoints (created with e.g. <code>zfs create -o mountpoint=legacy</code>); these must then be specified with <code>fileSystems."/mount/point" = {};</code> or via <code>nixos-generate-config</code>.
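A minimal sketch of the two remedies (the dataset <code>zpool/data</code> and mountpoint <code>/data</code> are placeholder names):

<syntaxhighlight lang="nix">
{
  # Remedy 1: stop the ZFS mount service from racing systemd mounts.
  systemd.services.zfs-mount.enable = false;

  # Remedy 2: with a legacy mountpoint (zfs create -o mountpoint=legacy),
  # declare the mount explicitly so systemd alone manages it.
  fileSystems."/data" = {
    device = "zpool/data";
    fsType = "zfs";
  };
}
</syntaxhighlight>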
 
== Guides ==
 
=== Root on ZFS with disko ===
 
[https://github.com/nix-community/disko/blob/master/example/zfs.nix disko] can partition disks declaratively and handle mount points at install time.
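A trimmed sketch of what such a layout can look like, loosely based on the linked example (the disk name <code>main</code>, the pool name <code>zroot</code>, and the device path are placeholders to adapt):

<syntaxhighlight lang="nix">
{
  disko.devices = {
    disk.main = {
      type = "disk";
      device = "/dev/disk/by-id/..."; # your disk
      content = {
        type = "gpt";
        partitions = {
          ESP = {
            size = "1G";
            type = "EF00";
            content = {
              type = "filesystem";
              format = "vfat";
              mountpoint = "/boot";
            };
          };
          zfs = {
            size = "100%";
            content = {
              type = "zfs";
              pool = "zroot";
            };
          };
        };
      };
    };
    zpool.zroot = {
      type = "zpool";
      rootFsOptions = {
        compression = "zstd";
        mountpoint = "none";
      };
      datasets.root = {
        type = "zfs_fs";
        options.mountpoint = "legacy";
        mountpoint = "/";
      };
    };
  };
}
</syntaxhighlight>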
 
Don't follow the Root on ZFS guide found in the OpenZFS documentation. It was abandoned and has not been updated in years; see the commit log of the openzfs-docs repository for details.
 
=== Simple NixOS ZFS on root installation ===
Start from here in the NixOS manual: [https://nixos.org/manual/nixos/stable/#sec-installation-manual].
Under manual partitioning [https://nixos.org/manual/nixos/stable/#sec-installation-manual-partitioning] do this instead:
 
==== Partition the disk ====
We need the following partitions:
 
* 1G for boot partition with "boot" as the partition label (also called name in some tools) and ef00 as partition code
* 4G for a swap partition with "swap" as the partition label and 8200 as partition code. We will encrypt this with a random secret on each boot.
* The rest of disk space for zfs with "root" as the partition label and 8300 as partition code (default code)
 
Reason for the swap partition: ZFS uses a caching mechanism that is different from the normal Linux cache infrastructure.
In low-memory situations, ZFS therefore might need a bit longer to free up memory from its cache. The swap partition will help with that.
 
Example with gdisk using <code>/dev/nvme0n1</code> as the device (use <code>lsblk</code> to find the device):


<syntaxhighlight lang="bash">
sudo gdisk /dev/nvme0n1
GPT fdisk (gdisk) version 1.0.10
...
# boot partition
Command (? for help): n
Partition number (1-128, default 1):
First sector (2048-1000215182, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-1000215182, default = 1000215175) or {+-}size{KMGTP}: +1G
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): ef00
Changed type of partition to 'EFI system partition'
 
# Swap partition
Command (? for help): n
Partition number (2-128, default 2):
First sector (2099200-1000215182, default = 2099200) or {+-}size{KMGTP}:
Last sector (2099200-1000215182, default = 1000215175) or {+-}size{KMGTP}: +4G
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): 8200
Changed type of partition to 'Linux swap'
 
# root partition
Command (? for help): n
Partition number (3-128, default 3):
First sector (10487808-1000215182, default = 10487808) or {+-}size{KMGTP}:
Last sector (10487808-1000215182, default = 1000215175) or {+-}size{KMGTP}:
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'
 
# write changes
Command (? for help): w
 
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
 
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/nvme0n1.
The operation has completed successfully.
</syntaxhighlight>
Final partition table (<code>fdisk -l /dev/nvme0n1</code>):
<syntaxhighlight lang=bash>
Number  Start (sector)    End (sector)  Size        Code  Name
   1            2048         2099199    1024.0 MiB  EF00  EFI system partition
   2         2099200        10487807    4.0 GiB     8200  Linux swap
   3        10487808      1000215175    471.9 GiB   8300  Linux filesystem
</syntaxhighlight>
 
'''Let's use variables from now on for simplicity.''' Get the device ID in <code>/dev/disk/by-id/</code> (using {{ic|blkid}}); in this case it is <code>nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O</code>.
 
<syntaxhighlight lang=bash>
BOOT=/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part1
SWAP=/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part2
DISK=/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part3
</syntaxhighlight>


{{note|It is often recommended to specify the drive using the device ID/UUID to prevent incorrect configuration, but it is also possible to use the device name (e.g. /dev/sda). See also: [[#Zpool created with bus-based disk names]], [https://wiki.archlinux.org/title/Persistent_block_device_naming Persistent block device naming - ArchWiki]}}


==== Make a ZFS pool with encryption and mount points ====
 
{{Note|zpool config can significantly affect performance (especially the ashift option) so you may want to do some research. The ZFS tuning cheatsheet or ArchWiki is a good place to start.}}


<syntaxhighlight lang="bash">
zpool create -O encryption=on -O keyformat=passphrase -O keylocation=prompt -O compression=zstd -O mountpoint=none -O xattr=sa -O acltype=posixacl -o ashift=12 zpool $DISK
# enter the password to decrypt the pool at boot
Enter new passphrase:
Re-enter new passphrase:


# Create datasets
zfs create zpool/root
zfs create zpool/nix
zfs create zpool/var
zfs create zpool/home


# Mount root
mkdir -p /mnt
mount -t zfs zpool/root /mnt -o zfsutil
 
# Mount nix, var, home
mkdir /mnt/nix /mnt/var /mnt/home
mount -t zfs zpool/nix /mnt/nix -o zfsutil
mount -t zfs zpool/var /mnt/var -o zfsutil
mount -t zfs zpool/home /mnt/home -o zfsutil
</syntaxhighlight>


Output from <syntaxhighlight lang="bash" inline>zpool status</syntaxhighlight>:
<syntaxhighlight>
zpool status
  pool: zpool
 state: ONLINE
...
config:

NAME                               STATE     READ WRITE CKSUM
zpool                              ONLINE       0     0     0
  nvme-eui.0025384b21406566-part2  ONLINE       0     0     0
</syntaxhighlight>
 
==== Format boot partition and enable swap ====
<syntaxhighlight lang="bash">
mkfs.fat -F 32 -n boot $BOOT
</syntaxhighlight>


<syntaxhighlight lang="bash">
mkswap -L swap $SWAP
swapon $SWAP
</syntaxhighlight>


==== Installation ====
<syntaxhighlight lang="bash">
# Mount boot
mkdir -p /mnt/boot
mount $BOOT /mnt/boot


# Generate the nixos config
nixos-generate-config --root /mnt
...
writing /mnt/etc/nixos/hardware-configuration.nix...
writing /mnt/etc/nixos/configuration.nix...
For more hardware-specific settings, see https://github.com/NixOS/nixos-hardware.
</syntaxhighlight>


Now edit the configuration.nix that was just created in <code>/mnt/etc/nixos/configuration.nix</code> and make sure it contains at least the following content.
 
{{file|/mnt/etc/nixos/configuration.nix|diff|3=
{
...
  # Boot loader config for configuration.nix:
  boot.loader.systemd-boot.enable = true;
 
  # for local disks that are not shared over the network, we don't need this to be random
  # without this, "ZFS requires networking.hostId to be set" will be raised
+  networking.hostId = "8425e349";
...
}
}}
 
Now check the hardware-configuration.nix in <code>/mnt/etc/nixos/hardware-configuration.nix</code> and add what's missing, e.g. <code>options = [ "zfsutil" ]</code> for all filesystems except boot, and <code>randomEncryption = true;</code> for the swap partition. Also change the generated swap device to the partition we created, e.g. <code>/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part2</code> in this case, and <code>/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part1</code> for boot.
 
{{file|/mnt/etc/nixos/configuration.nix|diff|3=
{
...
  fileSystems."/" = {
    device = "zpool/root";
    fsType = "zfs";
    # the zfsutil option is needed when mounting zfs datasets without "legacy" mountpoints
+    options = [ "zfsutil" ];
  };
 
  fileSystems."/nix" = {
    device = "zpool/nix";
    fsType = "zfs";
+    options = [ "zfsutil" ];
  };
 
  fileSystems."/var" = {
    device = "zpool/var";
    fsType = "zfs";
+    options = [ "zfsutil" ];
  };
 
  fileSystems."/home" = {
    device = "zpool/home";
    fsType = "zfs";
+    options = [ "zfsutil" ];
  };


  fileSystems."/boot" = {
    device = "/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part1";
    fsType = "vfat";
  };

  swapDevices = [{
+    device = "/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part2";
+    randomEncryption = true;
  }];
}
}}


Now you may install NixOS with <code>nixos-install</code>.

== Importing on boot ==


If you create a zpool, it will not be imported on the next boot unless you either add the zpool name to <syntaxhighlight lang="nix" inline>boot.zfs.extraPools</syntaxhighlight>:


<syntaxhighlight lang="nix">
## In /etc/nixos/configuration.nix:
boot.zfs.extraPools = [ "zpool_name" ];
</syntaxhighlight>


or if you are using legacy mountpoints, add a <syntaxhighlight lang="nix" inline>fileSystems</syntaxhighlight> entry and NixOS will automatically detect that the pool needs to be imported:
 


<syntaxhighlight lang="nix">
## In /etc/nixos/configuration.nix:
fileSystems."/mount/point" = {
  device = "zpool_name";
  fsType = "zfs";
};
</syntaxhighlight>


=== Zpool created with bus-based disk names ===
If you used bus-based disk names in the <syntaxhighlight inline>zpool create</syntaxhighlight> command, e.g., <syntaxhighlight inline>/dev/sda</syntaxhighlight>, NixOS may run into issues importing the pool if the names change. Even if the pool is able to be mounted (with <syntaxhighlight lang="nix" inline>boot.zfs.devNodes = "/dev/disk/by-partuuid";</syntaxhighlight> set), this may manifest as a <syntaxhighlight inline>FAULTED</syntaxhighlight> disk and a <syntaxhighlight inline>DEGRADED</syntaxhighlight> pool reported by <syntaxhighlight inline>zpool status</syntaxhighlight>. The fix is to re-import the pool using disk IDs:


<syntaxhighlight lang="console">
# zpool export zpool_name
# zpool import -d /dev/disk/by-id zpool_name
</syntaxhighlight>


The import setting is reflected in <syntaxhighlight inline="" lang="bash">/etc/zfs/zpool.cache</syntaxhighlight>, so it should persist through subsequent boots.  


=== Zpool created with disk IDs ===
If you used disk IDs to refer to disks in the <code>zpool create</code> command, e.g., <code>/dev/disk/by-id</code>, then NixOS may consistently fail to import the pool unless <code>boot.zfs.devNodes = "/dev/disk/by-id"</code> is also set.


== Mount datasets at boot ==
The zfs-mount service is enabled by default since NixOS 22.05.


To automatically mount a dataset at boot, you only need to set <code>canmount=on</code> and <code>mountpoint=/mount/point</code> on the respective datasets.
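For example (<code>zpool/data</code> and <code>/data</code> are placeholder names):

<syntaxhighlight lang="console">
# zfs set canmount=on zpool/data
# zfs set mountpoint=/data zpool/data
</syntaxhighlight>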


== Changing the Adaptive Replacement Cache size ==


To change the maximum size of the ARC to (for example) 12 GB, add this to your NixOS configuration:
<syntaxhighlight lang="nix">
boot.kernelParams = [ "zfs.zfs_arc_max=12884901888" ];
</syntaxhighlight>
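The value is in bytes: 12 GiB = 12 × 1024³ = 12884901888.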


== Tuning other parameters ==


To tune other attributes of ARC, L2ARC or of ZFS itself via runtime modprobe config, add this to your NixOS configuration (keys and values are examples only!):
<syntaxhighlight lang="nix">
boot.extraModprobeConfig = ''
  options zfs l2arc_noprefetch=0 l2arc_write_boost=33554432 l2arc_write_max=16777216 zfs_arc_max=2147483648
'';
</syntaxhighlight>


You can confirm whether the configuration was applied using commands such as <code>arc_summary</code> and <code>arcstat -a -s " "</code>.


== Automatic scrubbing ==


Regular scrubbing of ZFS pools is recommended and can be enabled in your NixOS configuration via:
<syntaxhighlight lang="nix">
services.zfs.autoScrub.enable = true;
</syntaxhighlight>


You can tweak the interval (defaults to once a week) and which pools should be scrubbed (defaults to all).
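For example, a minimal sketch (the pool name is a placeholder; <code>interval</code> takes a systemd calendar expression):

<syntaxhighlight lang="nix">
services.zfs.autoScrub = {
  enable = true;
  interval = "monthly";
  pools = [ "zpool" ]; # scrub only this pool instead of all
};
</syntaxhighlight>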
== Remote unlock ==
=== Unlock encrypted ZFS via SSH on boot ===


{{note|As of 22.05, rebuilding your config with the below directions may result in a situation where, if you want to revert the changes, you may need to do some pretty hairy nix-store manipulation to be able to successfully rebuild, see https://github.com/NixOS/nixpkgs/issues/101462#issuecomment-1172926129}}


In case you want to unlock a machine remotely (after an update), having an SSH service in initrd for the password prompt is handy:


<syntaxhighlight lang="nix">
boot = {
  initrd.network = {
    # This will use udhcp to get an ip address.
    # Make sure you have added the kernel module for your network driver to `boot.initrd.availableKernelModules`,
    # so your initrd can load it!
    # Static ip addresses might be configured using the ip argument in kernel command line:
    # https://www.kernel.org/doc/Documentation/filesystems/nfs/nfsroot.txt
    enable = true;
    ssh = {
      enable = true;
      # To prevent ssh clients from freaking out because a different host key is used,
      # a different port for ssh is useful (assuming the same host has also a regular sshd running)
      port = 2222;
      # hostKeys paths must be unquoted strings, otherwise you'll run into issues with boot.initrd.secrets
      # the keys are copied to initrd from the path specified; multiple keys can be set
      # you can generate any number of host keys using
      # `ssh-keygen -t ed25519 -N "" -f /path/to/ssh_host_ed25519_key`
      hostKeys = [ /path/to/ssh_host_ed25519_key ];
      # public ssh key used for login
      authorizedKeys = [ "ssh-rsa AAAA..." ];
    };
  };
};
</syntaxhighlight>
* In order to use DHCP in the initrd, NetworkManager must not be enabled and <syntaxhighlight lang="nix" inline>networking.useDHCP = true;</syntaxhighlight> must be set.
* If your network card isn't started, you'll need to add the corresponding Kernel module to the Kernel and initrd as well, e.g. <syntaxhighlight lang="nix">
boot.kernelModules = [ "r8169" ];
boot.initrd.kernelModules = [ "r8169" ];</syntaxhighlight>To know what kernel modules are needed, run <code>nix shell nixpkgs#pciutils --command lspci -v | grep -iA8 'network\|ethernet'</code> .


After that you can unlock your datasets using the following ssh command:


<syntaxhighlight>
ssh -p 2222 root@host "zpool import -a; zfs load-key -a && killall zfs"
</syntaxhighlight>


Alternatively, you could add the commands as <code>postCommands</code> to your configuration.nix; then you just have to ssh into the initrd:


<syntaxhighlight>
boot = {
  initrd.network = {
    postCommands = ''
      # Import all pools
      zpool import -a
      # Or import selected pools
      zpool import pool2
      zpool import pool3
      zpool import pool4
      # Add the load-key command to the .profile
      echo "zfs load-key -a; killall zfs" >> /root/.profile
    '';
  };
};
</syntaxhighlight>


After that you can unlock your datasets using the following ssh command:

<syntaxhighlight>
ssh -p 2222 root@host
</syntaxhighlight>


== Reservations ==


On ZFS, performance deteriorates significantly when more than 80% of the available space is used. To avoid this, reserve disk space beforehand.


To reserve space, create a new unused dataset that gets a guaranteed disk space of 10 GB.


<syntaxhighlight lang="console">
# zfs create -o refreservation=10G -o mountpoint=none zroot/reserved
</syntaxhighlight>


== Auto ZFS trimming ==
 
Set <syntaxhighlight lang="nix" inline>services.zfs.trim.enable = true;</syntaxhighlight>.
 
This will periodically run <code>zpool trim</code>. Note that this is different from the <code>autotrim</code> pool property. For further information, see the <code>zpool-trim</code> and <code>zpoolprops</code> man pages.
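The <code>autotrim</code> pool property, by contrast, is set on the pool itself, e.g. (pool name is a placeholder):

<syntaxhighlight lang="console">
# zpool set autotrim=on zpool
</syntaxhighlight>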


== Take snapshots automatically ==


See the {{nixos:option|services.zfs.autoSnapshot}} or {{nixos:option|services.sanoid}} sections in <code>man configuration.nix</code>.
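For example, a minimal sketch using <code>services.zfs.autoSnapshot</code> (the retention values shown are the module defaults; datasets are only snapshotted once their <code>com.sun:auto-snapshot</code> property is set to <code>true</code>, e.g. <code>zfs set com.sun:auto-snapshot=true zpool/home</code>):

<syntaxhighlight lang="nix">
services.zfs.autoSnapshot = {
  enable = true;
  frequent = 4; # 15-minute snapshots to keep
  hourly = 24;
  daily = 7;
  weekly = 4;
  monthly = 12;
};
</syntaxhighlight>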


== NFS share ==


With the <code>sharenfs</code> property, ZFS has built-in support for generating the <code>/etc/exports.d/zfs.exports</code> file, which in turn is processed by the NFS service automatically.


{{warning|If you are intending on defining an IPv6 subnet as part of your sharenfs rule, as of ZFS 2.0.6 (2021-09-23) please note that due to a bug in openzfs '''your rule will not correctly apply''', and may result in a security vulnerability (CVE-2013-20001). A fix has been implemented in the next yet-to-be-released upstream version - [https://github.com/openzfs/zfs/pull/11939 openzfs/zfs#11939]}}


To enable NFS share on a dataset, only two steps are needed:
First, enable [[NFS|NFS service]]:
<syntaxhighlight lang="nix">
services.nfs.server.enable = true;
</syntaxhighlight>
Only this line is needed. Configure the firewall if necessary, as described in the [[NFS]] article.


{{warning|<code>zfs share</code> and the <code>sharenfs</code> property do not work if the dataset's <code>mountpoint</code> is set to <code>legacy</code> (or <code>none</code>): <code>sharenfs</code> controls what is written into <code>/etc/exports</code>, and if ZFS does not know the mountpoint, as is the case with <code>legacy</code> or <code>none</code>, the generated exports entries would be wrong.}}


Then, set the <code>sharenfs</code> property:
<syntaxhighlight lang="console">
zfs set sharenfs="ro=192.168.1.0/24,all_squash,anonuid=70,anongid=70" rpool/myData
</syntaxhighlight>
For more options, see <code>man 5 exports</code>.


Todo: sharesmb property for Samba.


== Mail notifications (ZFS Event Daemon) ==


ZFS Event Daemon (zed) monitors events generated by the ZFS Kernel module and runs configured tasks. It can be configured to send an email when a pool scrub is finished or a disk has failed. [https://search.nixos.org/options?query=services.zfs.zed zed options]


First, we need to configure a mail transfer agent, the program that sends email:
<syntaxhighlight lang="nix">
{
  age.secrets.msmtp = {
    file = "${inputs.self.outPath}/secrets/msmtp.age";
  };


  # for zed enableMail, enable sendmailSetuidWrapper
  services.mail.sendmailSetuidWrapper.enable = true;


  programs.msmtp = {
    enable = true;
    setSendmail = true;
    defaults = {
      aliases = "/etc/aliases";
      port = 587;
      auth = "plain";
      tls = "on";
      tls_starttls = "on";
    };
    accounts = {
      default = {
        host = "smtp.mail.example.com";
        passwordeval = "cat ${config.age.secrets.msmtp.path}";
        user = "myname@example.com";
        from = "myname@example.com";
      };
    };
  };
}
</syntaxhighlight>


Then, configure an alias for the root account. With this alias configured, all mail sent to root, such as cron job results and failed sudo login events, will be redirected to the configured email account.


<syntaxhighlight lang="nix">
{
  environment.etc.aliases.text = ''
    root: admin@example.com
  '';
}
</syntaxhighlight>


Finally, enable zed mail notification:
<syntaxhighlight lang="nix">
{
  services.zfs.zed = {
    enableMail = true;
    settings = {
      ZED_EMAIL_ADDR = [ "root" ];
      # send notification if scrub succeeds
      ZED_NOTIFY_VERBOSE = true;
    };
  };
}
</syntaxhighlight>


You can now test this by performing a scrub:
<syntaxhighlight lang="console">
# zpool scrub $pool
</syntaxhighlight>




[[Category:Guide]]