ZFS

NixOS has native support for ZFS. It uses the code from the ZFS on Linux project (https://zfsonlinux.org/), including kernel modules and userspace utilities. The installation ISOs also come with ZFS.

Notes

  • The newest kernels might not be supported by ZFS yet. If you are running a kernel that is not yet officially supported by ZFS, the ZFS module will refuse to evaluate and shows up as broken. Use boot.kernelPackages = config.boot.zfs.package.latestCompatibleLinuxPackages; to pin the latest kernel that your ZFS version supports.
  • ZFS does not support swap. Hibernation must either be disabled with boot.kernelParams = [ "nohibernate" ];, or be backed by a separate, non-ZFS swap partition. Do not use a ZVol as a swap device, as it can deadlock under memory pressure (https://github.com/openzfs/zfs/issues/7734). Note also that ZFS lacks freeze/thaw support, so combining ZFS with hibernation may cause filesystem corruption (https://github.com/openzfs/zfs/issues/260).
  • By default, all ZFS pools available to the system are forcibly imported during boot, regardless of whether you had imported them before. Make sure no other system is accessing the pools at the same time, otherwise they will be corrupted. This behaviour can be disabled by setting boot.zfs.forceImportAll = false;.
  • If you create a zpool in the installer, make sure you run zpool export <pool name> after nixos-install, or else ZFS will fail to import the zpool when you reboot into the new system.
  • If you are running within a VM and NixOS fails to import the zpool on reboot, you may need to add boot.zfs.devNodes = "/dev/disk/by-path"; to your configuration.nix file.


Enable ZFS support

Common ZFS installation guides are now maintained on the OpenZFS Documentation website. See there for details; if an issue arises, submit an issue or pull request there.
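
In practice, enabling ZFS support in configuration.nix comes down to something like the following (a minimal sketch; the hostId value is a placeholder and must be a unique eight-digit hex value per machine, see the derivation under "Configure the NixOS system" below):

boot.supportedFilesystems = [ "zfs" ];
networking.hostId = "8425e349";  # placeholder: any unique eight-digit hex value

Then run nixos-rebuild switch to activate the configuration and load the ZFS kernel module.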

Root on ZFS

The Root on ZFS guide is now maintained on the OpenZFS Documentation website. See there for details; if an issue arises, submit an issue or pull request there.

Mount datasets at boot

The zfs-mount service is enabled by default as of NixOS 22.05.

To automatically mount a dataset at boot, you only need to set canmount=on and mountpoint=/mount/point on the respective datasets.
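
For example, assuming a hypothetical dataset tank/data that should appear at /data:

# zfs set canmount=on mountpoint=/data tank/data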

Changing the Adaptive Replacement Cache size

ZFS sizes the ARC adaptively according to its workload, so this parameter only approximately caps its maximum size. To change the maximum size of the ARC to (for example) 12 GB, add this to your NixOS configuration:

boot.kernelParams = [ "zfs.zfs_arc_max=12884901888" ];
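
For reference, the value is given in bytes: 12 GiB = 12 × 1024³ = 12884901888.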

Automatic scrubbing

Regular scrubbing of ZFS pools is recommended and can be enabled in your NixOS configuration via:

services.zfs.autoScrub.enable = true;

You can tweak the interval (defaults to once a week) and which pools should be scrubbed (defaults to all).
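
For example, to scrub only rpool once a month (illustrative values; interval takes a systemd calendar expression):

services.zfs.autoScrub = {
  enable = true;
  interval = "monthly";  # defaults to once a week
  pools = [ "rpool" ];   # defaults to all pools
};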

Reservations

Since ZFS is a copy-on-write filesystem, even deleting files requires free disk space. A completely full pool should therefore be avoided. Fortunately, it is possible to reserve disk space for datasets to prevent this.

To reserve space, create a new, unused dataset that is guaranteed 1 GB of disk space:

# zfs create -o refreservation=1G -o mountpoint=none zroot/reserved

where zroot should be replaced by your pool name. The dataset itself should not be used. If you do run out of space, you can shrink the reservation to reclaim enough disk space to clean up the other data in the pool:

# zfs set refreservation=none zroot/reserved

How to use the auto-snapshotting service

To auto-snapshot a ZFS filesystem or a ZVol, set its com.sun:auto-snapshot property to true, like this:

# zfs set com.sun:auto-snapshot=true <pool>/<fs>

(Note that by default this property will be inherited by all descendant datasets, but you can set their properties to false if you prefer.)

Then, to enable the auto-snapshot service, add this to your configuration.nix:

services.zfs.autoSnapshot.enable = true;

And finally, run nixos-rebuild switch to activate the new configuration!

By default, the auto-snapshot service will keep the latest four 15-minute, 24 hourly, 7 daily, 4 weekly and 12 monthly snapshots. You can globally override this configuration by setting the desired number of snapshots in your configuration.nix, like this:

services.zfs.autoSnapshot = {
  enable = true;
  frequent = 8; # keep the latest eight 15-minute snapshots (instead of four)
  monthly = 1;  # keep only one monthly snapshot (instead of twelve)
};

You can also disable a given type of snapshots on a per-dataset basis by setting a ZFS property, like this:

# zfs set com.sun:auto-snapshot:weekly=false <pool>/<fs>

This would disable only weekly snapshots on the given filesystem.

Installing NixOS on a ZFS root filesystem

Another guide titled "Encrypted ZFS mirror with mirrored boot on NixOS" is available at https://elis.nu/blog/2019/08/encrypted-zfs-mirror-with-mirrored-boot-on-nixos/.

OpenZFS document for NixOS Root on ZFS is also available: https://openzfs.github.io/openzfs-docs/Getting%20Started/NixOS/Root%20on%20ZFS.html

This guide is based on the above OpenZFS guide and the NixOS installation instructions in the NixOS manual.

Pool layout considerations

It is important to keep /nix and the rest of the filesystem in different sections of the dataset hierarchy, like this:

rpool/
      nixos/
            nix         mounted to /nix
      userdata/
            root        mounted to /
            home        mounted to /home
            ...

The names nixos/ and userdata/ can be changed, but it is important that they remain peers in the hierarchy.

ZFS can take consistent and atomic snapshots recursively down a dataset's hierarchy. Since the Nix store can always be rebuilt, most users will only want their data backed up, and don't mind reinstalling NixOS and then restoring that data. If this is sufficient, only snapshot and back up the userdata hierarchy. Users who want to be able to restore a service with only ZFS snapshots will want to snapshot the entire tree, at the significant expense of also snapshotting the Nix store.
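
For example, to take a consistent, atomic snapshot of the whole userdata hierarchy (the snapshot name is a placeholder):

# zfs snapshot -r rpool/userdata@backup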

Dataset properties

The following is a list of recommended dataset properties which have no drawbacks under regular use:

  • compression=lz4 (zstd for higher-end machines)
  • xattr=sa for Journald
  • acltype=posixacl also for Journald
  • relatime=on for reduced stress on SSDs
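
As a sketch, a dataset created with these recommended properties (pool and dataset names are placeholders):

# zfs create -o compression=lz4 -o xattr=sa -o acltype=posixacl -o relatime=on rpool/data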

The following is a list of dataset properties which are often useful, but do have drawbacks:

  • atime=off disables updating a file's access time when the file is read. This can result in significant performance gains, but might confuse some software like mailers.

Journald

Journald requires some properties for journalctl to work for non-root users. The dataset containing /var/log/journal (probably the / dataset for simple configurations) should be created with xattr=sa and acltype=posixacl.

For example:

# zpool create  -O xattr=sa -O acltype=posixacl rpool ...

or:

# zfs create -o xattr=sa -o acltype=posixacl rpool/root

If you have already created the dataset, these properties can be set later:

# zfs set xattr=sa acltype=posixacl rpool/root

Environment setup

For convenience, set a shell variable with the paths to your disk(s):

For multiple disks:

$ disk=(/dev/disk/by-id/foo /dev/disk/by-id/bar)

For a single disk:

$ disk=/dev/disk/by-id/foo

Partitioning the disks

# Multiple disks
for x in "${disk[@]}"; do
  sudo parted "$x" -- mklabel gpt
  sudo parted "$x" -- mkpart primary 512MiB -8GiB
  sudo parted "$x" -- mkpart primary linux-swap -8GiB 100%
  sudo parted "$x" -- mkpart ESP fat32 1MiB 512MiB
  sudo parted "$x" -- set 3 esp on

  sudo mkswap -L swap "${x}-part2"
  sudo mkfs.fat -F 32 -n EFI "${x}-part3"
done

# Single disk
sudo parted "$disk" -- mklabel gpt
sudo parted "$disk" -- mkpart primary 512MiB -8GiB
sudo parted "$disk" -- mkpart primary linux-swap -8GiB 100%
sudo parted "$disk" -- mkpart ESP fat32 1MiB 512MiB
sudo parted "$disk" -- set 3 esp on

sudo mkswap -L swap "${disk}-part2"
sudo mkfs.fat -F 32 -n EFI "${disk}-part3"

Laying out the filesystem hierarchy

Create the ZFS pool

sudo zpool create \
  -o ashift=12 \
  -o autotrim=on \
  -R /mnt \
  -O canmount=off \
  -O mountpoint=none \
  -O acltype=posixacl \
  -O compression=zstd \
  -O dnodesize=auto \
  -O normalization=formD \
  -O relatime=on \
  -O xattr=sa \
  -O encryption=aes-256-gcm \
  -O keylocation=prompt \
  -O keyformat=passphrase \
  rpool \
  mirror \
  "${disk[@]/%/-part1}"

For a single disk, remove mirror and specify just "${disk}-part1" as the device.

If you do not want the entire pool to be encrypted, remove the options encryption, keylocation and keyformat.
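
For illustration, the full unencrypted single-disk invocation would look like this (the same options as above, minus mirroring and the three encryption options):

sudo zpool create \
  -o ashift=12 \
  -o autotrim=on \
  -R /mnt \
  -O canmount=off \
  -O mountpoint=none \
  -O acltype=posixacl \
  -O compression=zstd \
  -O dnodesize=auto \
  -O normalization=formD \
  -O relatime=on \
  -O xattr=sa \
  rpool \
  "${disk}-part1"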

Create the ZFS datasets

As explained in the Reservations section above, ZFS needs free disk space even to delete files; reserve a little space up front to avoid a completely full pool:

# zfs create -o refreservation=1G -o mountpoint=none rpool/reserved

Create the datasets for the operating system. (Experienced ZFS users may wish to split up the OS datasets further.)

sudo zfs create -o canmount=on -o mountpoint=/ rpool/nixos
sudo zfs create rpool/nixos/nix

Create datasets for user home directories. If you opted to not encrypt the entire pool, you can encrypt just the userdata by specifying the same ZFS properties when creating rpool/userdata, and the child datasets will also be encrypted.

sudo zfs create -o canmount=off -o mountpoint=/ rpool/userdata
sudo zfs create -o canmount=on rpool/userdata/home
sudo zfs create -o canmount=on -o mountpoint=/root rpool/userdata/home/root
# Create child datasets of home for users' home directories.
sudo zfs create -o canmount=on rpool/userdata/home/alice
sudo zfs create -o canmount=on rpool/userdata/home/bob
sudo zfs create -o canmount=on rpool/userdata/home/...

Mount /boot

We are going to use the default NixOS bootloader systemd-boot, which can install to only one device. You will want to periodically rsync /mnt/boot to /mnt/boot2 so that you can always boot your system if either disk fails.

sudo mkdir /mnt/boot /mnt/boot2
sudo mount "${disk[0]}-part3" /mnt/boot
sudo mount "${disk[1]}-part3" /mnt/boot2

Or for single-disk systems:

sudo mkdir /mnt/boot
sudo mount "${disk}-part3" /mnt/boot

Configure the NixOS system

Generate the base NixOS configuration files.

# nixos-generate-config --root /mnt

Open /mnt/etc/nixos/configuration.nix in a text editor and change imports to include hardware-configuration-zfs.nix instead of the default hardware-configuration.nix. We will be editing this file later.

Now add the following block of code anywhere in configuration.nix (how you organise the file is up to you):

# ZFS boot settings.
boot.supportedFilesystems = [ "zfs" ];
boot.zfs.devNodes = "/dev/";

Now set networking.hostName and networking.hostId. The host ID must be an eight-digit hexadecimal value. You can derive it from /etc/machine-id by taking the first eight characters; from the hostname, by taking the first eight characters of the hostname's md5sum:

$ hostname | md5sum | head -c 8

or by taking eight hexadecimal characters from /dev/urandom:

$ tr -dc 0-9a-f < /dev/urandom | head -c 8
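
The results go into your configuration, e.g. (both values are placeholders):

networking.hostName = "nixos-zfs";
networking.hostId = "8425e349";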

Now add some ZFS maintenance settings:

# ZFS maintenance settings.
services.zfs.trim.enable = true;
services.zfs.autoScrub.enable = true;
services.zfs.autoScrub.pools = [ "rpool" ];

You may wish to also add services.zfs.autoSnapshot.enable = true; and set the ZFS property com.sun:auto-snapshot to true on rpool/userdata to have automatic snapshots. (See #How to use the auto-snapshotting service earlier on this page.)

Now open /mnt/etc/nixos/hardware-configuration-zfs.nix.

  • Add options = [ "zfsutil" ]; to every ZFS fileSystems block.
  • Add options = [ "X-mount.mkdir" ]; to fileSystems."/boot" and fileSystems."/boot2".
  • Replace swapDevices with the following, replacing DISK1 and DISK2 with the names of your disks.
swapDevices = [
  { device = "/dev/disk/by-id/DISK1-part2";
    randomEncryption = true;
  }
  { device = "/dev/disk/by-id/DISK2-part2";
    randomEncryption = true;
  }
];

For single-disk installs, remove the second entry of this array.
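
After these edits, a ZFS entry in hardware-configuration-zfs.nix might look like this (matching the rpool/nixos dataset created earlier):

fileSystems."/" =
  { device = "rpool/nixos";
    fsType = "zfs";
    options = [ "zfsutil" ];
  };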

Optional additional setup for encrypted ZFS

Unlock encrypted zfs via ssh on boot
Note: As of 22.05, rebuilding your config with the directions below may result in a situation where, if you want to revert the changes, you will need to do some fairly hairy nix-store manipulation to rebuild successfully; see https://github.com/NixOS/nixpkgs/issues/101462#issuecomment-1172926129

If you want to unlock a machine remotely (e.g. after an update), having an SSH service in the initrd for the password prompt is handy:

boot = {
  initrd.network = {
    # This will use udhcp to get an ip address.
    # Make sure you have added the kernel module for your network driver to `boot.initrd.availableKernelModules`, 
    # so your initrd can load it!
    # Static ip addresses might be configured using the ip argument in kernel command line:
    # https://www.kernel.org/doc/Documentation/filesystems/nfs/nfsroot.txt
    enable = true;
    ssh = {
      enable = true;
      # To prevent ssh clients from freaking out because a different host key is used,
      # a different port for ssh is useful (assuming the same host has also a regular sshd running)
      port = 2222; 
      # hostKeys paths must be unquoted strings, otherwise you'll run into issues with boot.initrd.secrets
      # the keys are copied to initrd from the path specified; multiple keys can be set
      # you can generate any number of host keys using 
      # `ssh-keygen -t ed25519 -N "" -f /path/to/ssh_host_ed25519_key`
      hostKeys = [ /path/to/ssh_host_rsa_key ];
      # public ssh key used for login
      authorizedKeys = [ "ssh-rsa AAAA..." ];
    };
    # this will automatically load the zfs password prompt on login
    # and kill the other prompt so boot can continue
    postCommands = ''
      cat <<EOF > /root/.profile
      if pgrep -x "zfs" > /dev/null
      then
        zfs load-key -a
        killall zfs
      else
        echo "zfs not running -- maybe the pool is taking some time to load for some unforseen reason."
      fi
      EOF
    '';
  };
};
  • In order to use DHCP in the initrd, NetworkManager must not be enabled and networking.useDHCP = true; must be set.
  • If your network card isn't started, you'll need to add the corresponding kernel module to the initrd as well, e.g. boot.initrd.kernelModules = [ "r8169" ];
Import and unlock multiple encrypted pools/datasets at boot

If you have not just one encrypted pool/dataset but several, and you want to import and unlock them at boot so that they can be auto-mounted via hardware-configuration.nix, you can simply amend the boot.initrd.network.postCommands option.

Unfortunately, an unlock key file stored on an encrypted ZFS dataset cannot be used directly, so the pools must use keyformat=passphrase and keylocation=prompt.

The following example builds on the remote unlocking with OpenSSH above, but additionally imports another pool and prompts for unlocking (either at the machine itself or when logging in remotely):

boot = {
  initrd.network = {
    enable = true;
    ssh = {
      enable = true;
      port = 2222; 
      hostKeys = [ /path/to/ssh_host_rsa_key ];
      authorizedKeys = [ "ssh-rsa AAAA..." ];
    };
    postCommands = ''
      zpool import tankXXX
      echo "zfs load-key -a; killall zfs" >> /root/.profile
    '';
  };
};

When you log in via SSH, or when you have physical access to the machine itself, you will be prompted to supply the unlock password for your zroot and tankXXX pools.

Install NixOS

# nixos-install --show-trace --root /mnt

--show-trace will show you exactly where things went wrong if nixos-install fails. To take advantage of all cores on your system, also specify --max-jobs n, replacing n with the number of cores on your machine.
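
For example, on a four-core machine (the job count is illustrative):

# nixos-install --show-trace --max-jobs 4 --root /mnt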

ZFS trim support for SSDs

ZFS 0.8 and later features TRIM support for SSDs.

How to use ZFS trimming

ZFS trimming works on one or more zpools and trims each SSD inside. There are two modes: a manually issued trim of a specified pool, and auto-trim of pools. The main difference is that auto-trim skips ranges it considers too small, while a manually issued trim trims all ranges.

To manually start trimming a zpool, run zpool trim tank. Since PR-65331 this can also be done periodically (by default once a week) by setting services.zfs.trim.enable = true;.

To set a pool for auto-trim run: zpool set autotrim=on tank

To check the status of the manual trim, you can just run zpool status -t

To see the effects of trimming, you can run zpool iostat -r and zpool iostat -w

To see whether auto-trimming works, run zpool iostat -r, note the results, and run it again later. The trim entries should change.

For further information read the PR description.



Mail notification for ZFS Event Daemon

ZFS Event Daemon (zed) monitors events generated by the ZFS kernel module and runs configured tasks. It can be configured to send an email when a pool scrub is finished or a disk has failed; see the zed(8) man page for its options.

Alternative 1: Rebuild ZFS with Mail Support

The zfs package can be rebuilt with mail features. However, please note that this will cause Nix to recompile the entire ZFS package on the computer, and again on every kernel update, which can be very time-consuming on lower-end NAS systems.

An alternative solution that does not involve recompilation can be found below.

The following override is needed as zfs is implicitly used in partition mounting:

nixpkgs.config.packageOverrides = pkgs: {
  zfsStable = pkgs.zfsStable.override { enableMail = true; };
};

A mail sender like msmtp or postfix is required.

A minimal, testable ZED configuration example:

services.zfs.zed.enableMail = true;
services.zfs.zed.settings = {
  ZED_EMAIL_ADDR = [ "root" ];
  ZED_NOTIFY_VERBOSE = true;
};

Above, ZED_EMAIL_ADDR is set to root, which most people will have an alias for in their mailer. You can change it to mail you directly: ZED_EMAIL_ADDR = [ "you@example.com" ];

ZED pulls in mailutils and runs mail by default, but you can override it with ZED_EMAIL_PROG. If using msmtp, you may need ZED_EMAIL_PROG = "${pkgs.msmtp}/bin/msmtp";.

You can customize the mail command with ZED_EMAIL_OPTS. For example, if your upstream mail server requires a certain FROM address: ZED_EMAIL_OPTS = "-r 'noreply@example.com' -s '@SUBJECT@' @ADDRESS@";

Alternative 2: Enable Mail Notification without Recompilation

First, we need to configure a mail transfer agent, the program that sends email:

{
  programs.msmtp = {
    enable = true;
    setSendmail = true;
    defaults = {
      aliases = "/etc/aliases";
      port = 465;
      tls_trust_file = "/etc/ssl/certs/ca-certificates.crt";
      tls = "on";
      auth = "login";
      tls_starttls = "off";
    };
    accounts = {
      default = {
        host = "mail.example.com";
        passwordeval = "cat /etc/emailpass.txt";
        user = "user@example.com";
        from = "user@example.com";
      };
    };
  };
}

Then, configure an alias for the root account. With this alias configured, all mail sent to root, such as cron job results and failed sudo login events, will be redirected to the configured email account.

tee -a /etc/aliases <<EOF
root: user@example.com
EOF

Finally, override the default zed settings with custom ones:

{
  services.zfs.zed.settings = {
    ZED_DEBUG_LOG = "/tmp/zed.debug.log";
    ZED_EMAIL_ADDR = [ "root" ];
    ZED_EMAIL_PROG = "${pkgs.msmtp}/bin/msmtp";
    ZED_EMAIL_OPTS = "@ADDRESS@";

    ZED_NOTIFY_INTERVAL_SECS = 3600;
    ZED_NOTIFY_VERBOSE = true;

    ZED_USE_ENCLOSURE_LEDS = true;
    ZED_SCRUB_AFTER_RESILVER = true;
  };
  # this option does not work; will return error
  services.zfs.zed.enableMail = false;
}

You can now test this by performing a scrub:

# zpool scrub $pool

Mount datasets without legacy mountpoint

Contrary to conventional wisdom, mountpoint=legacy is not required for mounting datasets. The trick is to use mount -t zfs -o zfsutil path/to/dataset /path/to/mountpoint.
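
For example, using the same placeholder names as the configuration below:

# mount -t zfs -o zfsutil tank_pool/data /tank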

Legacy mountpoints are also inconvenient in that the mounts cannot be natively handled by the zfs mount command, hence "legacy" in the name.

An example configuration for mounting a non-legacy dataset is the following:

{
  fileSystems."/tank" =
    { device = "tank_pool/data";
      fsType = "zfs";
      options = [ "zfsutil" ];
    };
}

An alternative is to set boot.zfs.extraPools = [ "pool_name" ];, which is recommended by the documentation if you have many ZFS filesystems.

NFS share

With the sharenfs property, ZFS has built-in support for generating the /etc/exports.d/zfs.exports file, which in turn is processed by the NFS service automatically.

Warning: If you intend to define an IPv6 subnet as part of your sharenfs rule, note that as of ZFS 2.0.6 (2021-09-23), due to a bug in OpenZFS, the rule will not apply correctly and may result in a security vulnerability (CVE-2013-20001). A fix has been implemented in the next yet-to-be-released upstream version - openzfs/zfs#11939

To enable NFS sharing on a dataset, only two steps are needed:

First, enable NFS service:

services.nfs.server.enable = true;

Only this line is needed. Configure the firewall if necessary, as described in the NFS article.
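
A minimal sketch for the firewall, assuming NFSv4 only (which uses TCP port 2049); adjust to your environment:

networking.firewall.allowedTCPPorts = [ 2049 ];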

Then, set the sharenfs property:

# zfs set sharenfs="ro=192.168.1.0/24,all_squash,anonuid=70,anongid=70" rpool/myData

For more options, see man 5 exports.

See also

This article describes how to set up encrypted ZFS on Hetzner: https://mazzo.li/posts/hetzner-zfs.html