ZFS

[https://zfsonlinux.org/ {{PAGENAME}}] ([[wikipedia:en:{{PAGENAME}}]]), also known as [https://openzfs.org/ OpenZFS] ([[wikipedia:en:OpenZFS]]), is a modern filesystem which is well supported on [[NixOS]].
[[category:filesystem]]
Besides the {{nixos:package|zfs}} package (''ZFS Filesystem Linux Kernel module'') itself, many packages in the ZFS ecosystem are available.


ZFS integrates into NixOS via the {{nixos:option|boot.zfs}} and {{nixos:option|services.zfs}} options.
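
For example, a minimal sketch enabling ZFS support explicitly (the <code>hostId</code> value is an example; ZFS requires it to be set):

<syntaxhighlight lang="nix">
# Build the ZFS kernel module and install the userland tools
boot.supportedFilesystems = [ "zfs" ];
# A unique 32-bit identifier in hex, required by ZFS
networking.hostId = "8425e349";
</syntaxhighlight>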


== Limitations ==


==== Latest Kernel compatible with ZFS ====
ZFS often does not support the latest Kernel versions. It is recommended to use an LTS Kernel version whenever possible; the NixOS default Kernel is generally suitable. See [[Linux kernel|Linux Kernel]] for more information about configuring a specific Kernel version.


If your config specifies a Kernel version that is not officially supported by upstream ZFS, the ZFS module will fail to evaluate with an error that the ZFS package is "broken". Since version 2.3, upstream ZFS refuses to build against unsupported Kernels by default, regardless of whether the broken marking in Nixpkgs is overridden or ignored.


===== Selecting the latest ZFS-compatible Kernel =====
{{Warning|This will often result in the Kernel version going backwards as Kernel versions become end-of-life and are removed from Nixpkgs. If you need more control over the Kernel version due to hardware requirements, consider simply pinning a specific version rather than calculating it as below.}}
To use the latest ZFS-compatible Kernel currently available, the following configuration may be used.


<syntaxhighlight lang="nix">
{
  config,
  lib,
  pkgs,
  ...
}:
 
let
  zfsCompatibleKernelPackages = lib.filterAttrs (
    name: kernelPackages:
    (builtins.match "linux_[0-9]+_[0-9]+" name) != null
    && (builtins.tryEval kernelPackages).success
    && (!kernelPackages.${config.boot.zfs.package.kernelModuleAttribute}.meta.broken)
  ) pkgs.linuxKernel.packages;
  latestKernelPackage = lib.last (
    lib.sort (a: b: (lib.versionOlder a.kernel.version b.kernel.version)) (
      builtins.attrValues zfsCompatibleKernelPackages
    )
  );
in
{
  # Note this might jump back and forth as kernels are added or removed.
  boot.kernelPackages = latestKernelPackage;
}
</syntaxhighlight>
 
===== Using unstable, pre-release ZFS =====
{{Warning|Pre-release ZFS versions may be less well-tested, and may have critical bugs that may cause data loss.}}{{Warning|Running ZFS with a Kernel unsupported by upstream “is considered EXPERIMENTAL by the OpenZFS project. Even if it appears to build and run correctly, there may be bugs that can cause SERIOUS DATA LOSS.”}}
In some cases, a pre-release version of ZFS may be available that supports a newer Kernel. Use it with <code>boot.zfs.package = pkgs.zfs_unstable;</code>. Using zfs_unstable may allow the use of an unsupported Kernel; as warned above, [https://github.com/openzfs/zfs/blob/6a2f7b38442b42f4bc9a848f8de10fc792ce8d76/config/kernel.m4#L473-L487 upstream considers this experimental].
 
==== Partial support for swap on ZFS ====
 
ZFS does not support swapfiles; swap devices can be used instead. Additionally, hibernation is disabled by default due to a [https://github.com/NixOS/nixpkgs/pull/208037 high risk] of data corruption. Note that even if that pull request is merged, it does not fully mitigate the risk. If you wish to enable hibernation regardless, and have made sure that swapfiles on ZFS are not used, set <code>boot.zfs.allowHibernation = true;</code>.
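
A sketch of opting back in to hibernation, assuming swap lives on a separate non-ZFS partition (the label is an example):

<syntaxhighlight lang="nix">
# Only safe when no swap device resides on ZFS
boot.zfs.allowHibernation = true;
# Resume from the dedicated swap partition
boot.resumeDevice = "/dev/disk/by-label/swap";
</syntaxhighlight>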
 
==== Zpool not found ====
 
If NixOS fails to import the zpool on reboot, you may need to add <syntaxhighlight lang="nix" inline>boot.zfs.devNodes = "/dev/disk/by-path";</syntaxhighlight> or <syntaxhighlight lang="nix" inline>boot.zfs.devNodes = "/dev/disk/by-partuuid";</syntaxhighlight> to your configuration.nix file.
 
The alternatives can be tested by running <code>zpool import -d /dev/disk/by-id</code> (or another directory) in an environment where none of the pools are imported yet, e.g. a live ISO.
 
==== ZFS conflicting with systemd ====
 
ZFS itself manages the mounting of non-legacy ZFS filesystems, while NixOS manages mounting with systemd. ZFS-native mountpoints are not managed as part of the system configuration (but they support hibernation with a separate swap partition better). This can lead to conflicts if the ZFS mount service is also enabled for the same datasets.
 
Either disable the mount service with <code>systemd.services.zfs-mount.enable = false;</code> or remove the <code>fileSystems</code> entries in hardware-configuration.nix. Alternatively, use legacy mountpoints (created with e.g. <code>zfs create -o mountpoint=legacy</code>); these must be specified with <code>fileSystems."/mount/point" = {};</code> entries or generated with <code>nixos-generate-config</code>.
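
A sketch of the legacy-mountpoint route (the dataset name and mount point are examples):

<syntaxhighlight lang="console">
# zfs create -o mountpoint=legacy zpool/data
</syntaxhighlight>

<syntaxhighlight lang="nix">
# systemd now owns this mount, so it cannot conflict with zfs-mount.service
fileSystems."/data" = {
  device = "zpool/data";
  fsType = "zfs";
};
</syntaxhighlight>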
 
== Guides ==
 
=== OpenZFS Documentation for installing ===
{{warning|This guide is not endorsed by NixOS, and some features like immutable root do not have upstream support and could break on updates. If an issue arises while following this guide, please consult the guide's support channels.}}
 
One guide for a NixOS installation with ZFS is maintained at [https://openzfs.github.io/openzfs-docs/Getting%20Started/NixOS/ OpenZFS Documentation (''Getting Started'' for ''NixOS'')]
 
It is about:
* [https://openzfs.github.io/openzfs-docs/Getting%20Started/NixOS/index.html#installation Enabling ZFS on an existing NixOS installation]
* [https://openzfs.github.io/openzfs-docs/Getting%20Started/NixOS/#root-on-zfs (Installing NixOS with) Root on ZFS].
 
It is not about:
* Giving understandable, easy-to-follow instructions that stay close to the standard installation guide
* Integrating ZFS into your existing config
 
=== Simple NixOS ZFS on root installation ===
Start from here in the NixOS manual: [https://nixos.org/manual/nixos/stable/#sec-installation-manual].
Under manual partitioning [https://nixos.org/manual/nixos/stable/#sec-installation-manual-partitioning] do this instead:
 
==== Partition the disk ====
We need the following partitions:
 
* 1G for boot partition with "boot" as the partition label (also called name in some tools) and ef00 as partition code
* 4G for a swap partition with "swap" as the partition label and 8200 as partition code. We will encrypt this with a random secret on each boot.
* The rest of disk space for zfs with "root" as the partition label and 8300 as partition code (default code)
 
Reason for the swap partition: ZFS uses a caching mechanism (the ARC) that is separate from the normal Linux cache infrastructure, so in low-memory situations ZFS may take a bit longer to free up memory from its cache. The swap partition helps bridge such situations.
 
Example with gdisk using <code>/dev/nvme0n1</code> as the device (use <code>lsblk</code> to find the device):
 
<syntaxhighlight lang="bash">
sudo gdisk /dev/nvme0n1
GPT fdisk (gdisk) version 1.0.10
...
# boot partition
Command (? for help): n
Partition number (1-128, default 1):
First sector (2048-1000215182, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-1000215182, default = 1000215175) or {+-}size{KMGTP}: +1G
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): ef00
Changed type of partition to 'EFI system partition'


# Swap partition
Command (? for help): n
Partition number (2-128, default 2):
First sector (2099200-1000215182, default = 2099200) or {+-}size{KMGTP}:
Last sector (2099200-1000215182, default = 1000215175) or {+-}size{KMGTP}: +4G
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): 8200
Changed type of partition to 'Linux swap'


# root partition
Command (? for help): n
Partition number (3-128, default 3):
First sector (10487808-1000215182, default = 10487808) or {+-}size{KMGTP}:  
Last sector (10487808-1000215182, default = 1000215175) or {+-}size{KMGTP}:
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'


# write changes
Command (? for help): w
 
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!


Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/nvme0n1.
The operation has completed successfully.
</syntaxhighlight>
Final partition table (<code>gdisk -l /dev/nvme0n1</code>):
<syntaxhighlight lang="bash">
Number  Start (sector)    End (sector)  Size        Code  Name
   1            2048         2099199    1024.0 MiB  EF00  EFI system partition
   2         2099200        10487807    4.0 GiB     8200  Linux swap
   3        10487808      1000215175    471.9 GiB   8300  Linux filesystem
</syntaxhighlight>


'''Let's use variables from now on for simplicity.''' Get the device ID in <code>/dev/disk/by-id/</code> (using {{ic|blkid}}); in our case here it is <code>nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O</code>.


<syntaxhighlight lang="bash">
BOOT=/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part1
SWAP=/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part2
DISK=/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part3
</syntaxhighlight>


{{note|It is often recommended to specify the drive using the device ID/UUID to prevent incorrect configuration, but it is also possible to use the device name (e.g. /dev/sda). See also: [[#Zpool created with bus-based disk names]], [https://wiki.archlinux.org/title/Persistent_block_device_naming Persistent block device naming - ArchWiki]}}
 
==== Make a ZFS pool with encryption and mount points ====
 
{{Note|zpool config can significantly affect performance (especially the ashift option) so you may want to do some research. The ZFS tuning cheatsheet or ArchWiki is a good place to start.}}
 
<syntaxhighlight lang="bash">
zpool create -O encryption=on -O keyformat=passphrase -O keylocation=prompt -O compression=zstd -O mountpoint=none -O xattr=sa -O acltype=posixacl -o ashift=12 zpool $DISK
# enter the password to decrypt the pool at boot
Enter new passphrase:
Re-enter new passphrase:
 
# Create datasets
zfs create zpool/root
zfs create zpool/nix
zfs create zpool/var
zfs create zpool/home
 
# Mount root
mkdir -p /mnt
mount -t zfs zpool/root /mnt -o zfsutil


# Mount nix, var, home
mkdir /mnt/nix /mnt/var /mnt/home
mount -t zfs zpool/nix /mnt/nix -o zfsutil
mount -t zfs zpool/var /mnt/var -o zfsutil
mount -t zfs zpool/home /mnt/home -o zfsutil
</syntaxhighlight>


Output from <syntaxhighlight lang="bash" inline>zpool status</syntaxhighlight>:
<syntaxhighlight>
zpool status
  pool: zpool
 state: ONLINE
...
config:

NAME                               STATE   READ WRITE CKSUM
zpool                              ONLINE     0     0     0
  nvme-eui.0025384b21406566-part2  ONLINE     0     0     0
</syntaxhighlight>


==== Format boot partition and enable swap ====
<syntaxhighlight lang="bash">
mkfs.fat -F 32 -n boot $BOOT
</syntaxhighlight>


<syntaxhighlight lang="console">
<syntaxhighlight lang="bash">
# zfs create -o refreservation=1G -o mountpoint=none zroot/reserved
mkswap -L swap $SWAP
swapon $SWAP
</syntaxhighlight>
</syntaxhighlight>


==== Installation ====
<syntaxhighlight lang="bash">
# Mount boot
mkdir -p /mnt/boot
mount $BOOT /mnt/boot


<syntaxhighlight lang="console">
# Generate the nixos config
# zfs set refreservation=none zroot/reserved
nixos-generate-config --root /mnt
...
writing /mnt/etc/nixos/hardware-configuration.nix...
writing /mnt/etc/nixos/configuration.nix...
For more hardware-specific settings, see https://github.com/NixOS/nixos-hardware.
</syntaxhighlight>


Now edit the configuration.nix that was just created in <code>/mnt/etc/nixos/configuration.nix</code> and make sure to have at least the following content in it.
 
{{file|/mnt/etc/nixos/configuration.nix|diff|3=
{
...
  # Boot loader config for configuration.nix:
  boot.loader.systemd-boot.enable = true;
 
  # for local disks that are not shared over the network, we don't need this to be random
  # without this, "ZFS requires networking.hostId to be set" will be raised
+  networking.hostId = "8425e349";
...
}
}}
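
The <code>hostId</code> just needs to be a unique 32-bit value in hex; one way to derive it (suggested by the option's documentation) is from the machine id:

<syntaxhighlight lang="bash">
head -c 8 /etc/machine-id
</syntaxhighlight>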
 
Now check the hardware-configuration.nix in <code>/mnt/etc/nixos/hardware-configuration.nix</code> and add what's missing, e.g. <code>options = [ "zfsutil" ]</code> for all filesystems except boot, and <code>randomEncryption = true;</code> for the swap partition. Also change the generated swap device to the partition we created, e.g. <code>/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part2</code> in this case, and <code>/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part1</code> for boot.
 
{{file|/mnt/etc/nixos/hardware-configuration.nix|diff|3=
{
...
  fileSystems."/" = {
    device = "zpool/root";
    fsType = "zfs";
    # the zfsutil option is needed when mounting zfs datasets without "legacy" mountpoints
+    options = [ "zfsutil" ];
  };
 
  fileSystems."/nix" = {
    device = "zpool/nix";
    fsType = "zfs";
+    options = [ "zfsutil" ];
  };
 
  fileSystems."/var" = {
    device = "zpool/var";
    fsType = "zfs";
+    options = [ "zfsutil" ];
  };
 
  fileSystems."/home" = {
    device = "zpool/home";
    fsType = "zfs";
+    options = [ "zfsutil" ];
  };
 
  fileSystems."/boot" = {
  device = "/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part1";
  fsType = "vfat";
  };
 
  swapDevices = [{
+    device = "/dev/disk/by-id/nvme-SKHynix_HFS512GDE9X081N_FNB6N634510106K5O-part2";
+    randomEncryption = true;
  }];
}
}}
 
Now you may install NixOS with <code>nixos-install</code>.
 
== Importing on boot ==


If you create a zpool, it will not be imported on the next boot unless you either add the zpool name to <syntaxhighlight lang="nix" inline>boot.zfs.extraPools</syntaxhighlight>:

<syntaxhighlight lang="nix">
## In /etc/nixos/configuration.nix:
boot.zfs.extraPools = [ "zpool_name" ];
</syntaxhighlight>


or if you are using legacy mountpoints, add a <syntaxhighlight lang="nix" inline>fileSystems</syntaxhighlight> entry and NixOS will automatically detect that the pool needs to be imported:


<syntaxhighlight lang="nix">
## In /etc/nixos/configuration.nix:
fileSystems."/mount/point" = {
  device = "zpool_name";
  fsType = "zfs";
};
</syntaxhighlight>


=== Zpool created with bus-based disk names ===
If you used bus-based disk names in the <syntaxhighlight inline>zpool create</syntaxhighlight> command, e.g., <syntaxhighlight inline>/dev/sda</syntaxhighlight>, NixOS may run into issues importing the pool if the names change. Even if the pool can be mounted (with <syntaxhighlight lang="nix" inline>boot.zfs.devNodes = "/dev/disk/by-partuuid";</syntaxhighlight> set), this may manifest as a <syntaxhighlight inline>FAULTED</syntaxhighlight> disk and a <syntaxhighlight inline>DEGRADED</syntaxhighlight> pool reported by <syntaxhighlight inline>zpool status</syntaxhighlight>. The fix is to re-import the pool using disk IDs:


<syntaxhighlight>
# zpool export zpool_name
# zpool import -d /dev/disk/by-id zpool_name
</syntaxhighlight>


The import setting is reflected in <syntaxhighlight inline="" lang="bash">/etc/zfs/zpool.cache</syntaxhighlight>, so it should persist through subsequent boots.
=== Zpool created with disk IDs ===
If you used disk IDs to refer to disks in the <code>zpool create</code> command, e.g., <code>/dev/disk/by-id</code>, then NixOS may consistently fail to import the pool unless <code>boot.zfs.devNodes = "/dev/disk/by-id"</code> is also set.
== Mount datasets at boot ==
zfs-mount service is enabled by default on NixOS 22.05.
To automatically mount a dataset at boot, you only need to set <code>canmount=on</code> and <code>mountpoint=/mount/point</code> on the respective datasets.
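
For example, assuming a dataset <code>zpool/home</code> that should appear at <code>/home</code>:

<syntaxhighlight lang="console">
# zfs set canmount=on zpool/home
# zfs set mountpoint=/home zpool/home
</syntaxhighlight>
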
== Changing the Adaptive Replacement Cache size ==
To change the maximum size of the ARC to (for example) 12 GB, add this to your NixOS configuration:
<syntaxhighlight lang="nix">
<syntaxhighlight lang="nix">
services.zfs.autoSnapshot = {
boot.kernelParams = [ "zfs.zfs_arc_max=12884901888" ];
  enable = true;
  frequent = 8; # keep the latest eight 15-minute snapshots (instead of four)
  monthly = 1;  # keep only one monthly snapshot (instead of twelve)
};
</syntaxhighlight>
</syntaxhighlight>


== Tuning other parameters ==


<syntaxhighlight lang="console">
To tune other attributes of ARC, L2ARC or of ZFS itself via runtime modprobe config, add this to your NixOS configuration (keys and values are examples only!):
<syntaxhighlight lang="nix">
boot.extraModprobeConfig = ''
  options zfs l2arc_noprefetch=0 l2arc_write_boost=33554432 l2arc_write_max=16777216 zfs_arc_max=2147483648
'';
</syntaxhighlight>


You can confirm whether any specified configuration/tuning got applied via commands like <code>arc_summary</code> and <code>arcstat -a -s " "</code>.
 
== Automatic scrubbing ==
 
Regular scrubbing of ZFS pools is recommended and can be enabled in your NixOS configuration via:
<syntaxhighlight lang="nix">
services.zfs.autoScrub.enable = true;
</syntaxhighlight>


You can tweak the interval (defaults to once a week) and which pools should be scrubbed (defaults to all).
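
A sketch with both knobs set (values are examples; <code>interval</code> is a systemd calendar expression):

<syntaxhighlight lang="nix">
services.zfs.autoScrub = {
  enable = true;
  interval = "monthly"; # default is weekly
  pools = [ "zpool" ];  # empty list (the default) means all pools
};
</syntaxhighlight>
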
== Remote unlock ==
=== Unlock encrypted ZFS via SSH on boot ===


{{note|As of 22.05, rebuilding your config with the below directions may result in a situation where, if you want to revert the changes, you may need to do some pretty hairy nix-store manipulation to be able to successfully rebuild, see https://github.com/NixOS/nixpkgs/issues/101462#issuecomment-1172926129}}

<syntaxhighlight lang="nix">
boot = {
  initrd.network = {
    enable = true;
    ssh = {
      enable = true;
      port = 2222;
      hostKeys = [ /path/to/ssh_host_rsa_key ];
      authorizedKeys = [ "ssh-rsa AAAA..." ];
    };
    # this will automatically load the zfs password prompt on login
    # and kill the other prompt so boot can continue
    postCommands = ''
      cat <<EOF > /root/.profile
      if pgrep -x "zfs" > /dev/null
      then
        zfs load-key -a
        killall zfs
      else
        echo "zfs not running -- maybe the pool is taking some time to load for some unforseen reason."
      fi
      EOF
    '';
  };
};
</syntaxhighlight>
* In order to use DHCP in the initrd, network manager must not be enabled and <syntaxhighlight lang="nix" inline>networking.useDHCP = true;</syntaxhighlight> must be set.
* If your network card isn't started, you'll need to add the corresponding Kernel module to the Kernel and initrd as well, e.g. <syntaxhighlight lang="nix">
boot.kernelModules = [ "r8169" ];
boot.initrd.kernelModules = [ "r8169" ];</syntaxhighlight> To find out which Kernel modules are needed, run <code>nix shell nixpkgs#pciutils --command lspci -v | grep -iA8 'network\|ethernet'</code>.


After that you can unlock your datasets using the following ssh command:

<syntaxhighlight>
ssh -p 2222 root@host "zpool import -a; zfs load-key -a && killall zfs"
</syntaxhighlight>


Alternatively you could also add the commands as postCommands to your configuration.nix, then you just have to ssh into the initrd:

<syntaxhighlight lang="nix">
boot = {
  initrd.network = {
    enable = true;
    ssh = {
      enable = true;
      port = 2222;
      hostKeys = [ /path/to/ssh_host_rsa_key ];
      authorizedKeys = [ "ssh-rsa AAAA..." ];
    };
    postCommands = ''
    # Import all pools
      echo "zfs load-key -a; killall zfs" >> /root/.profile
    zpool import -a
    # Or import selected pools
    zpool import pool2
    zpool import pool3
    zpool import pool4
    # Add the load-key command to the .profile
    echo "zfs load-key -a; killall zfs" >> /root/.profile
    '';
  };
};
</syntaxhighlight>


After that you can unlock your datasets using the following ssh command:


<syntaxhighlight>
ssh -p 2222 root@host
</syntaxhighlight>

Note that an unlock key file stored on an encrypted ZFS dataset cannot be used directly to unlock that pool at boot; the pool must use <code>keyformat=passphrase</code> and <code>keylocation=prompt</code>.

== Reservations ==


On ZFS, performance will deteriorate significantly when more than 80% of the available space is used. To avoid this, reserve disk space beforehand.


To reserve space, create a new unused dataset that gets a guaranteed disk space of 10GB.


<syntaxhighlight lang="console">
# zfs create -o refreservation=10G -o mountpoint=none zroot/reserved
</syntaxhighlight>


Here <code>zroot</code> should be replaced by a dataset in your pool; the reserved dataset itself should not be used. If you ever run out of space, shrink the reservation with <code>zfs set refreservation=none zroot/reserved</code> to reclaim enough disk space to clean up other data on the pool.

== Auto ZFS trimming ==

Enable with <syntaxhighlight lang="nix" inline>services.zfs.trim.enable = true;</syntaxhighlight>.

This will periodically run <code>zpool trim</code>. Note that this is different from the <code>autotrim</code> pool property (set with <code>zpool set autotrim=on tank</code>): auto-trim skips ranges it considers too small, while a manually issued <code>zpool trim tank</code> trims all ranges. You can check the status of a manual trim with <code>zpool status -t</code>, and observe the effects of trimming with <code>zpool iostat -r</code> and <code>zpool iostat -w</code> (run it again later; the trim entries should change). For further information, see the <code>zpool-trim</code> and <code>zpoolprops</code> man pages.
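
A sketch overriding the schedule (<code>interval</code> is a systemd calendar expression; weekly is the default):

<syntaxhighlight lang="nix">
services.zfs.trim = {
  enable = true;
  interval = "monthly";
};
</syntaxhighlight>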


== Take snapshots automatically ==

See the {{nixos:option|services.sanoid}} section in <code>man configuration.nix</code>. Alternatively, the <code>services.zfs.autoSnapshot</code> module snapshots every dataset whose <code>com.sun:auto-snapshot</code> property is set to <code>true</code>; by default it keeps the latest four 15-minute, 24 hourly, 7 daily, 4 weekly and 12 monthly snapshots, and the property is inherited by all descendant datasets (set it to <code>false</code> on datasets you want to exclude).
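
A minimal <code>services.sanoid</code> sketch (dataset name and retention counts are examples):

<syntaxhighlight lang="nix">
services.sanoid = {
  enable = true;
  datasets."zpool/home" = {
    autosnap = true;   # take snapshots automatically
    autoprune = true;  # delete snapshots outside the retention policy
    hourly = 24;
    daily = 7;
    monthly = 3;
  };
};
</syntaxhighlight>

A <code>services.zfs.autoSnapshot</code> sketch, first marking a dataset for snapshotting and then tuning the retention counts:

<syntaxhighlight lang="console">
# zfs set com.sun:auto-snapshot=true zpool/home
</syntaxhighlight>

<syntaxhighlight lang="nix">
services.zfs.autoSnapshot = {
  enable = true;
  frequent = 8; # keep the latest eight 15-minute snapshots (instead of four)
  monthly = 1;  # keep only one monthly snapshot (instead of twelve)
};
</syntaxhighlight>

A given snapshot type can also be disabled per dataset, e.g. <code>zfs set com.sun:auto-snapshot:weekly=false zpool/home</code> disables only weekly snapshots on that filesystem.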


== NFS share ==


With the <code>sharenfs</code> property, ZFS has built-in support for generating the <code>/etc/exports.d/zfs.exports</code> file, which in turn is processed by the NFS service automatically.


{{warning|If you are intending on defining an IPv6 subnet as part of your sharenfs rule, as of ZFS 2.0.6 (2021-09-23) please note that due to a bug in openzfs '''your rule will not correctly apply''', and may result in a security vulnerability (CVE-2013-20001). A fix has been implemented in the next yet-to-be-released upstream version - [https://github.com/openzfs/zfs/pull/11939 openzfs/zfs#11939]}}


To enable NFS share on a dataset, only two steps are needed:
 


First, enable [[NFS|NFS service]]:
<syntaxhighlight lang="nix">
<syntaxhighlight lang="nix">
nixpkgs.config.packageOverrides = pkgs: {
services.nfs.server.enable = true;
  zfsStable = pkgs.zfsStable.override { enableMail = true; };
};
</syntaxhighlight>
</syntaxhighlight>
Only this line is needed. Configure the firewall if necessary, as described in the [[NFS]] article.


Then, set <code>sharenfs</code> property:
 
<syntaxhighlight lang="console">
zfs set sharenfs="ro=192.168.1.0/24,all_squash,anonuid=70,anongid=70" rpool/myData
</syntaxhighlight>
For more options, see <code>man 5 exports</code>.


Todo: sharesmb property for Samba.


== Mail notifications (ZFS Event Daemon) ==


ZFS Event Daemon (zed) monitors events generated by the ZFS Kernel module and runs configured tasks. It can be configured to send an email when a pool scrub is finished or a disk has failed. [https://search.nixos.org/options?query=services.zfs.zed zed options]


=== Option A: enable mail notifications without re-compilation ===


First, we need to configure a mail transfer agent, the program that sends email:

Then, configure an alias for root account. With this alias configured, all mails sent to root, such as cron job results and failed sudo login events, will be redirected to the configured email account.


<syntaxhighlight lang="bash">
<syntaxhighlight lang="nix">
tee -a /etc/aliases <<EOF
{
root: user@example.com
  environment.etc.aliases.text = ''
EOF
    root: you@example.com
  '';
}
</syntaxhighlight>
</syntaxhighlight>


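Finally, point ZED at the mail setup; a sketch (log path and notification values are illustrative and assume the msmtp configuration above):

<syntaxhighlight lang="nix">
services.zfs.zed.settings = {
  ZED_DEBUG_LOG = "/tmp/zed.debug.log";
  ZED_EMAIL_ADDR = [ "root" ];
  ZED_EMAIL_PROG = "${pkgs.msmtp}/bin/msmtp";
  ZED_EMAIL_OPTS = "@ADDRESS@";
  ZED_NOTIFY_INTERVAL_SECS = 3600;
  ZED_NOTIFY_VERBOSE = true;
};
# enableMail wraps the mailutils mail(1) program; it is not needed when
# ZED_EMAIL_PROG points at msmtp directly
services.zfs.zed.enableMail = false;
</syntaxhighlight>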


=== Option B: Rebuild ZFS with mail support ===
The <code>zfs</code> package can be rebuilt with mail features. However, please note that this will cause Nix to recompile the entire ZFS package on the computer, and on every Kernel update, which could be very time-consuming on lower-end NAS systems.


An alternative solution that does not involve recompilation can be found above.


The following override is needed as <code>zfs</code> is implicitly used in partition mounting:


<syntaxhighlight lang="nix">
nixpkgs.config.packageOverrides = pkgs: {
  zfsStable = pkgs.zfsStable.override { enableMail = true; };
};
</syntaxhighlight>


A mail sender like [[msmtp]] or [[postfix]] is required.

A minimal, testable ZED configuration example:
 
<syntaxhighlight lang="nix">
services.zfs.zed.enableMail = true;
services.zfs.zed.settings = {
  ZED_EMAIL_ADDR = [ "root" ];
  ZED_NOTIFY_VERBOSE = true;
};
</syntaxhighlight>
 
Above, <code>ZED_EMAIL_ADDR</code> is set to <code>root</code>, which most people will have an alias for in their mailer. You can change it to directly mail you: <code>ZED_EMAIL_ADDR = [ "you@example.com" ];</code>
 
ZED pulls in <code>mailutils</code> and runs <code>mail</code> by default, but you can override it with <code>ZED_EMAIL_PROG</code>. If using msmtp, you may need <code>ZED_EMAIL_PROG = "${pkgs.msmtp}/bin/msmtp";</code>.
 
You can customize the mail command with <code>ZED_EMAIL_OPTS</code>. For example, if your upstream mail server requires a certain FROM address: <code>ZED_EMAIL_OPTS = "-r 'noreply@example.com' -s '@SUBJECT@' @ADDRESS@";</code>
 
[[Category:Guide]]