ZFS

ZFS (wikipedia:en:ZFS) - also known as OpenZFS (wikipedia:en:OpenZFS) - is a modern filesystem which is well supported on NixOS.

Besides the zfs package (the ZFS Linux kernel module) [1] itself, many other packages from the ZFS ecosystem are available.

ZFS integrates into NixOS via the boot.zfs[2] and services.zfs[3] options.

Limitations

Latest kernel compatible with ZFS

Newer kernels might not be supported by ZFS yet. If you are running a kernel that is not officially supported by ZFS, the module will refuse to evaluate and show an error.

You can explicitly pin a newer kernel version, but note that this version may be dropped upstream and removed from nixpkgs before ZFS supports its successor. See Linux kernel for more information.

{
  boot.kernelPackages = pkgs.linuxPackages_latest;
  # OR
  boot.kernelPackages = pkgs.linuxPackages_6_6;
}

The following snippet automatically selects the latest kernel that is compatible with ZFS. Note that this can jump back to older kernel versions over time, because non-LTS kernels are eventually removed from nixpkgs and their newer replacements might not yet be supported by ZFS.

{
  lib,
  pkgs,
  config,
  ...
}:

let
  isUnstable = config.boot.zfs.package == pkgs.zfsUnstable;
  zfsCompatibleKernelPackages = lib.filterAttrs (
    name: kernelPackages:
    (builtins.match "linux_[0-9]+_[0-9]+" name) != null
    && (builtins.tryEval kernelPackages).success
    && (
      (!isUnstable && !kernelPackages.zfs.meta.broken)
      || (isUnstable && !kernelPackages.zfs_unstable.meta.broken)
    )
  ) pkgs.linuxKernel.packages;
  latestKernelPackage = lib.last (
    lib.sort (a: b: (lib.versionOlder a.kernel.version b.kernel.version)) (
      builtins.attrValues zfsCompatibleKernelPackages
    )
  );
in
{
  # Note this might jump back and forth as kernels get added or removed.
  boot.kernelPackages = latestKernelPackage;
}

Partial support for SWAP on ZFS

ZFS does not support swap files; swap devices can be used instead. Additionally, hibernation is disabled by default due to a high risk of data corruption (see https://github.com/NixOS/nixpkgs/pull/208037). Note that even if that pull request is merged, it does not fully mitigate the risk. If you wish to enable hibernation regardless, and have made sure that no swap on ZFS is used, set boot.zfs.allowHibernation = true.
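
A minimal sketch of such a setup, assuming swap lives on a dedicated non-ZFS partition (the device path below is an example, not a real disk ID):

  # swap on a separate, non-ZFS partition; device path is an example
  swapDevices = [{ device = "/dev/disk/by-id/example-disk-part2"; }];

  # only set this if you are certain no swap is placed on ZFS
  boot.zfs.allowHibernation = true;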

Zpool not found

If NixOS fails to import the zpool on reboot, you may need to add boot.zfs.devNodes = "/dev/disk/by-path"; or boot.zfs.devNodes = "/dev/disk/by-partuuid"; to your configuration.nix file.

The difference can be tested by running zpool import -d /dev/disk/by-id when none of the pools are discovered, e.g. from a live ISO.

declarative mounting of ZFS datasets

When using legacy mountpoints (created with e.g. zfs create -o mountpoint=legacy), mountpoints must be specified with fileSystems."/mount/point" = {};. ZFS native mountpoints are not managed as part of the system configuration, but they offer better support for hibernation with a separate swap partition. This can lead to conflicts if the ZFS mount service is also enabled for the same datasets. Disable it with systemd.services.zfs-mount.enable = false;.
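
A short sketch, assuming a hypothetical dataset zpool_name/data created with mountpoint=legacy:

  fileSystems."/data" = {
    device = "zpool_name/data";
    fsType = "zfs";
  };

If the ZFS mount service conflicts with such entries, it can be disabled with:

  systemd.services.zfs-mount.enable = false;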

Guides

OpenZFS Documentation for installing

Warning: This guide is not endorsed by NixOS, and some features like immutable root do not have upstream support and could break on updates. If an issue arises while following this guide, please consult the guide's support channels.

One guide for a NixOS installation with ZFS is maintained in the OpenZFS documentation (Getting Started for NixOS): https://openzfs.github.io/openzfs-docs/Getting%20Started/NixOS/

It is about:

  • enabling ZFS on an existing NixOS installation and
  • installing NixOS with Root on ZFS.

It is not about:

  • giving understandable, easy-to-follow instructions that stay close to the standard installation guide
  • integrating ZFS into your existing config


Simple NixOS ZFS on root installation

Start from here in the NixOS manual: https://nixos.org/manual/nixos/stable/#sec-installation-manual. Under manual partitioning (https://nixos.org/manual/nixos/stable/#sec-installation-manual-partitioning), do this instead:

Partition your disk with your favorite partition tool.

We need the following partitions:

  • 1G for the boot partition, with "boot" as the partition label (also called name in some tools) and ef00 as the partition code
  • 4G for a swap partition, with "swap" as the partition label and 8200 as the partition code. We will encrypt this with a random secret on each boot.
  • The rest of the disk space for ZFS, with "root" as the partition label and 8300 as the partition code (the default code)

Reason for the swap partition: ZFS uses a caching mechanism that is different from the normal Linux cache infrastructure. In low-memory situations, ZFS therefore might need a bit longer to free up memory from its cache. The swap partition helps with that.

Example with gdisk:

sudo gdisk /dev/nvme0n1
GPT fdisk (gdisk) version 1.0.10
...
# boot partition
Command (? for help): n
Partition number (1-128, default 1): 
First sector (2048-1000215182, default = 2048) or {+-}size{KMGTP}: 
Last sector (2048-1000215182, default = 1000215175) or {+-}size{KMGTP}: +1G
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): ef00
Changed type of partition to 'EFI system partition'

# Swap partition
Command (? for help): n
Partition number (2-128, default 2): 
First sector (2099200-1000215182, default = 2099200) or {+-}size{KMGTP}: 
Last sector (2099200-1000215182, default = 1000215175) or {+-}size{KMGTP}: +4G
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): 8200
Changed type of partition to 'Linux swap'

# root partition
Command (? for help): n
Partition number (3-128, default 3): 
First sector (10487808-1000215182, default = 10487808) or {+-}size{KMGTP}: 
Last sector (10487808-1000215182, default = 1000215175) or {+-}size{KMGTP}: 
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'

# write changes
Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/nvme0n1.
The operation has completed successfully.

Final partition table

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2099199   1024.0 MiB  EF00  EFI system partition
   2         2099200        10487807   4.0 GiB     8200  Linux swap
   3        10487808      1000215175   471.9 GiB   8300  Linux filesystem

Let's use variables from now on for simplicity. Get the device ID from /dev/disk/by-id/; in our case it is nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O.

BOOT=/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part1
SWAP=/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part2
DISK=/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part3

Make zfs pool with encryption and mount points:

Note: zpool configuration can significantly affect performance (especially the ashift option), so you may want to do some research. The ZFS tuning cheatsheet (https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/) or the ArchWiki (https://wiki.archlinux.org/title/ZFS#Storage_pools) is a good place to start.

zpool create -O encryption=on -O keyformat=passphrase -O keylocation=prompt -O compression=zstd -O mountpoint=none -O xattr=sa -O acltype=posixacl -o ashift=12 zpool $DISK
# enter the password to decrypt the pool at boot
Enter new passphrase:
Re-enter new passphrase:

# Create datasets
zfs create zpool/root
zfs create zpool/nix
zfs create zpool/var
zfs create zpool/home

mkdir -p /mnt
mount -t zfs zpool/root /mnt -o zfsutil
mkdir /mnt/nix /mnt/var /mnt/home

mount -t zfs zpool/nix /mnt/nix -o zfsutil
mount -t zfs zpool/var /mnt/var -o zfsutil
mount -t zfs zpool/home /mnt/home -o zfsutil

Output from zpool status:

zpool status
  pool: zpool
 state: ONLINE
...
config:

	NAME                               STATE     READ WRITE CKSUM
	zpool                              ONLINE       0     0     0
	  nvme-eui.0025384b21406566-part2  ONLINE       0     0     0

Format the boot partition with FAT as the filesystem

mkfs.fat -F 32 -n boot $BOOT

Enable swap

mkswap -L swap $SWAP
swapon $SWAP

Installation:

  1. Mount boot
mkdir -p /mnt/boot
mount $BOOT /mnt/boot

# Generate the nixos config
nixos-generate-config --root /mnt
...
writing /mnt/etc/nixos/hardware-configuration.nix...
writing /mnt/etc/nixos/configuration.nix...
For more hardware-specific settings, see https://github.com/NixOS/nixos-hardware.

Now edit the configuration.nix that was just created in /mnt/etc/nixos/configuration.nix and make sure it contains at least the following content.

{
...
  # Boot loader config for configuration.nix:
  boot.loader.systemd-boot.enable = true;

  # for local disks that are not shared over the network, we don't need this to be random
  networking.hostId = "8425e349";
...

Now check the hardware-configuration.nix in /mnt/etc/nixos/hardware-configuration.nix and add what's missing, e.g. options = [ "zfsutil" ] for all ZFS filesystems (but not /boot) and randomEncryption = true; for the swap partition. Also change the generated swap device to the partition we created, e.g. /dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part2 in this case, and /dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part1 for /boot.

...
  fileSystems."/" = { 
    device = "zpool/root";
    fsType = "zfs";
    # the zfsutil option is needed when mounting zfs datasets without "legacy" mountpoints
    options = [ "zfsutil" ];
  };

  fileSystems."/nix" = { 
    device = "zpool/nix";
    fsType = "zfs";
    options = [ "zfsutil" ];
  };

  fileSystems."/var" = { 
    device = "zpool/var";
    fsType = "zfs";
    options = [ "zfsutil" ];
  };

  fileSystems."/home" = {
    device = "zpool/home";
    fsType = "zfs";
    options = [ "zfsutil" ];
  };

  fileSystems."/boot" = { 
   device = "/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part1";
   fsType = "vfat";
  };

  swapDevices = [{
    device = "/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part2";
    randomEncryption = true;
  }];
}

Now you may install NixOS with nixos-install.
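
A minimal sketch of the final steps (nixos-install asks you to set a root password at the end unless told otherwise):

  nixos-install
  # set the root password when prompted, then reboot into the new system
  reboot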

Importing on boot

If you create a zpool, it will not be imported on the next boot unless you either add the zpool name to boot.zfs.extraPools:

## In /etc/nixos/configuration.nix:
boot.zfs.extraPools = [ "zpool_name" ];

or if you are using legacy mountpoints, add a fileSystems entry and NixOS will automatically detect that the pool needs to be imported:

## In /etc/nixos/configuration.nix:
fileSystems."/mount/point" = {
  device = "zpool_name";
  fsType = "zfs";
};

Zpool created with bus-based disk names

If you used bus-based disk names in the zpool create command, e.g., /dev/sda, NixOS may run into issues importing the pool if the names change. Even if the pool is able to be mounted (with boot.zfs.devNodes = "/dev/disk/by-partuuid"; set), this may manifest as a FAULTED disk and a DEGRADED pool reported by zpool status. The fix is to re-import the pool using disk IDs:

# zpool export zpool_name
# zpool import -d /dev/disk/by-id zpool_name

The import setting is reflected in /etc/zfs/zpool.cache, so it should persist through subsequent boots.

Zpool created with disk IDs

If you used disk IDs to refer to disks in the zpool create command, e.g., /dev/disk/by-id, then NixOS may consistently fail to import the pool unless boot.zfs.devNodes = "/dev/disk/by-id" is also set.

Mount datasets at boot

The zfs-mount service is enabled by default as of NixOS 22.05.

To automatically mount a dataset at boot, you only need to set canmount=on and mountpoint=/mount/point on the respective datasets.
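
For example (dataset name and mountpoint are examples):

  zfs set canmount=on zpool_name/data
  zfs set mountpoint=/data zpool_name/data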

Changing the Adaptive Replacement Cache size

To change the maximum size of the ARC to (for example) 12 GB, add this to your NixOS configuration:

boot.kernelParams = [ "zfs.zfs_arc_max=12884901888" ];

Tuning other parameters

To tune other attributes of ARC, L2ARC or of ZFS itself via runtime modprobe config, add this to your NixOS configuration (keys and values are examples only!):

    boot.extraModprobeConfig = ''
      options zfs l2arc_noprefetch=0 l2arc_write_boost=33554432 l2arc_write_max=16777216 zfs_arc_max=2147483648
    '';

You can confirm whether any specified configuration/tuning got applied via commands like arc_summary and arcstat -a -s " ".

Automatic scrubbing

Regular scrubbing of ZFS pools is recommended and can be enabled in your NixOS configuration via:

services.zfs.autoScrub.enable = true;

You can tweak the interval (defaults to once a week) and which pools should be scrubbed (defaults to all).
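A sketch with the schedule and pool selection set explicitly (the pool name is an example only; interval takes a systemd calendar expression):

services.zfs.autoScrub = {
  enable = true;
  interval = "monthly";
  pools = [ "rpool" ];
};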


Remote unlock

Unlock encrypted zfs via ssh on boot

Note: As of 22.05, if you rebuild your config with the directions below and later want to revert the changes, you may need to do some fairly involved nix-store manipulation to rebuild successfully; see https://github.com/NixOS/nixpkgs/issues/101462#issuecomment-1172926129

In case you want to unlock a machine remotely (after an update), having an SSH service in the initrd for the password prompt is handy:

boot = {
  initrd.network = {
    # This will use udhcp to get an ip address.
    # Make sure you have added the kernel module for your network driver to `boot.initrd.availableKernelModules`, 
    # so your initrd can load it!
    # Static ip addresses might be configured using the ip argument in kernel command line:
    # https://www.kernel.org/doc/Documentation/filesystems/nfs/nfsroot.txt
    enable = true;
    ssh = {
      enable = true;
      # To prevent ssh clients from freaking out because a different host key is used,
      # a different port for ssh is useful (assuming the same host has also a regular sshd running)
      port = 2222; 
      # hostKeys paths must be unquoted strings, otherwise you'll run into issues with boot.initrd.secrets
      # the keys are copied to initrd from the path specified; multiple keys can be set
      # you can generate any number of host keys using 
      # `ssh-keygen -t ed25519 -N "" -f /path/to/ssh_host_ed25519_key`
      hostKeys = [ /path/to/ssh_host_ed25519_key ];
      # public ssh key used for login
      authorizedKeys = [ "ssh-rsa AAAA..." ];
    };
  };
};
  • In order to use DHCP in the initrd, network manager must not be enabled and networking.useDHCP = true; must be set.
  • If your network card isn't started, you'll need to add the corresponding kernel module to the kernel and initrd as well, e.g.
    boot.kernelModules = [ "r8169" ];
    boot.initrd.kernelModules = [ "r8169" ];

After that you can unlock your datasets using the following ssh command:

ssh -p 2222 root@host "zpool import -a; zfs load-key -a && killall zfs"

Alternatively, you can add the commands as postCommands to your configuration.nix; then you only have to SSH into the initrd:

boot = {
  initrd.network = {
    postCommands = ''
    # Import all pools
    zpool import -a
    # Or import selected pools
    zpool import pool2
    zpool import pool3
    zpool import pool4
    # Add the load-key command to the .profile
    echo "zfs load-key -a; killall zfs" >> /root/.profile
    '';
  };
};

After that you can unlock your datasets using the following ssh command:

ssh -p 2222 root@host

Reservations

On ZFS, performance deteriorates significantly when more than 80% of the available space is used. To avoid this, reserve disk space beforehand.

To reserve space, create a new, unused dataset that is guaranteed 10 GB of disk space:

# zfs create -o refreservation=10G -o mountpoint=none zroot/reserved
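Should the pool ever fill up completely, the reservation can be released again to free space (zroot/reserved is the dataset created above):

# zfs set refreservation=none zroot/reserved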

Auto ZFS trimming

services.zfs.trim.enable = true;

This will periodically run zpool trim. Note that this is different from the autotrim pool property. For further information, see the zpool-trim and zpoolprops man pages.
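A sketch that also sets the schedule explicitly (interval takes a systemd calendar expression and defaults to weekly):

services.zfs.trim = {
  enable = true;
  interval = "weekly";
};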

Take snapshots automatically

See the services.sanoid section in man configuration.nix.
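A minimal sketch, assuming a dataset named zpool/home (the retention numbers are examples only):

services.sanoid = {
  enable = true;
  datasets."zpool/home" = {
    autosnap = true;
    autoprune = true;
    hourly = 24;
    daily = 7;
    monthly = 3;
  };
};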

NFS share

With the sharenfs property, ZFS has built-in support for generating the /etc/exports.d/zfs.exports file, which in turn is processed by the NFS service automatically.

Warning: If you intend to define an IPv6 subnet as part of your sharenfs rule, note that as of ZFS 2.0.6 (2021-09-23), due to a bug in OpenZFS, your rule will not apply correctly and may result in a security vulnerability (CVE-2013-20001). A fix has been implemented for the next, yet-to-be-released upstream version - openzfs/zfs#11939

To enable NFS share on a dataset, only two steps are needed:

First, enable NFS service:

services.nfs.server.enable = true;

Only this line is needed. Configure the firewall if necessary, as described in the NFS article.
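A minimal sketch of the firewall part, assuming NFSv4 on its default TCP port 2049 (NFSv3 needs additional ports; see the NFS article):

networking.firewall.allowedTCPPorts = [ 2049 ];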

Then, set sharenfs property:

# zfs set sharenfs="ro=192.168.1.0/24,all_squash,anonuid=70,anongid=70" rpool/myData

For more options, see man 5 exports.

Todo: sharesmb property for Samba.

Mail notification for ZFS Event Daemon

The ZFS Event Daemon (zed) monitors events generated by the ZFS kernel module and runs configured tasks. It can be configured to send an email when a pool scrub is finished or a disk has failed. See the services.zfs.zed options for details.

Alternative 1: Enable Mail Notification without Recompilation

First, we need to configure a mail transfer agent, the program that sends email:

{
  programs.msmtp = {
    enable = true;
    setSendmail = true;
    defaults = {
      aliases = "/etc/aliases";
      port = 465;
      tls_trust_file = "/etc/ssl/certs/ca-certificates.crt";
      tls = "on";
      auth = "login";
      tls_starttls = "off";
    };
    accounts = {
      default = {
        host = "mail.example.com";
        passwordeval = "cat /etc/emailpass.txt";
        user = "user@example.com";
        from = "user@example.com";
      };
    };
  };
}

Then, configure an alias for the root account. With this alias configured, all mail sent to root, such as cron job results and failed sudo login events, will be redirected to the configured email account.

tee -a /etc/aliases <<EOF
root: user@example.com
EOF

Finally, override the default zed settings with custom ones:

{
  services.zfs.zed.settings = {
    ZED_DEBUG_LOG = "/tmp/zed.debug.log";
    ZED_EMAIL_ADDR = [ "root" ];
    ZED_EMAIL_PROG = "${pkgs.msmtp}/bin/msmtp";
    ZED_EMAIL_OPTS = "@ADDRESS@";

    ZED_NOTIFY_INTERVAL_SECS = 3600;
    ZED_NOTIFY_VERBOSE = true;

    ZED_USE_ENCLOSURE_LEDS = true;
    ZED_SCRUB_AFTER_RESILVER = true;
  };
  # this option does not work here; setting it to true will return an error
  services.zfs.zed.enableMail = false;
}

You can now test this by performing a scrub:

# zpool scrub $pool

Alternative 2: Rebuild ZFS with Mail Support

The zfs package can be rebuilt with mail features. However, please note that this will cause Nix to recompile the entire ZFS package on the computer, and on every kernel update, which could be very time-consuming on lower-end NAS systems.

An alternative solution that does not involve recompilation can be found above.

The following override is needed as zfs is implicitly used in partition mounting:

nixpkgs.config.packageOverrides = pkgs: {
  zfsStable = pkgs.zfsStable.override { enableMail = true; };
};

A mail sender like msmtp or postfix is required.

A minimal, testable ZED configuration example:

services.zfs.zed.enableMail = true;
services.zfs.zed.settings = {
  ZED_EMAIL_ADDR = [ "root" ];
  ZED_NOTIFY_VERBOSE = true;
};

Above, ZED_EMAIL_ADDR is set to root, for which most people will have an alias in their mailer. You can change it to mail you directly: ZED_EMAIL_ADDR = [ "you@example.com" ];

ZED pulls in mailutils and runs mail by default, but you can override it with ZED_EMAIL_PROG. If using msmtp, you may need ZED_EMAIL_PROG = "${pkgs.msmtp}/bin/msmtp";.

You can customize the mail command with ZED_EMAIL_OPTS. For example, if your upstream mail server requires a certain FROM address: ZED_EMAIL_OPTS = "-r 'noreply@example.com' -s '@SUBJECT@' @ADDRESS@";