[https://zfsonlinux.org/ {{PAGENAME}}] ([[wikipedia:en:{{PAGENAME}}]]) - also known as [https://openzfs.org/ OpenZFS] ([[wikipedia:en:OpenZFS]]) - is a modern filesystem[[category:filesystem]] which is well supported on [[NixOS]].

There are many packages related to [[{{PAGENAME}}]]. The ''zfs'' package provides the ''ZFS Filesystem Linux Kernel module'' itself,<ref>https://search.nixos.org/packages?channel=unstable&show=zfs&query=zfs</ref> and many other packages from the [[{{PAGENAME}}]] ecosystem are available as well.

[[{{PAGENAME}}]] integrates into NixOS via its [[module]] system. Examples:
* ''boot.zfs''<ref>https://search.nixos.org/options?channel=unstable&query=boot.zfs</ref>
* ''services.zfs''<ref>https://search.nixos.org/options?channel=unstable&query=services.zfs</ref>
== Limitations ==

==== latestCompatibleLinuxPackages of ZFS for boot.kernelPackages ====
Newest kernels might not be supported by ZFS yet. If you are running a newer kernel which is not yet officially supported by ZFS, the zfs module will refuse to evaluate and show up as ''broken''. Use <code>boot.kernelPackages = config.boot.zfs.package.latestCompatibleLinuxPackages;</code> to use the latest compatible kernel.
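
For example, a minimal sketch for <code>configuration.nix</code> (note that <code>config</code> has to be available as a module argument):

<syntaxhighlight lang="nix">
{ config, ... }:
{
  # Pin the kernel to the newest version that the current ZFS package supports.
  boot.kernelPackages = config.boot.zfs.package.latestCompatibleLinuxPackages;
}
</syntaxhighlight>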
==== Partial support for SWAP on ZFS ====

ZFS does not support swapfiles. Swap devices must be used instead. Additionally, hibernation is disabled by default due to a [https://github.com/NixOS/nixpkgs/pull/208037 high risk] of data corruption. Note that even if / after that pull request is merged, it does not fully mitigate the risk. If you wish to enable hibernation regardless, set <code>boot.zfs.allowHibernation = true</code>.
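
If you accept that risk and want hibernation anyway, a minimal sketch (the <code>boot.resumeDevice</code> path is illustrative and must match your own swap partition; resuming also requires swap whose contents survive a reboot, i.e. not one using <code>randomEncryption</code>):

<syntaxhighlight lang="nix">
{
  # Explicitly accept the data-corruption risk described above.
  boot.zfs.allowHibernation = true;
  # Resume from a dedicated swap partition (example path; adjust to your layout).
  boot.resumeDevice = "/dev/disk/by-partlabel/swap";
}
</syntaxhighlight>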
==== boot.zfs.devNodes ====

If NixOS fails to import the zpool on reboot, you may need to add <syntaxhighlight lang="nix" inline>boot.zfs.devNodes = "/dev/disk/by-path";</syntaxhighlight> or <syntaxhighlight lang="nix" inline>boot.zfs.devNodes = "/dev/disk/by-partuuid";</syntaxhighlight> to your configuration.nix file.

The differences can be tested by running <code>zpool import -d /dev/disk/by-id</code> when none of the pools are discovered, e.g. from a live ISO.
==== Declarative mounting of ZFS datasets ====

When using legacy mountpoints (created with e.g. <code>zfs create -o mountpoint=legacy</code>), the mountpoints must be specified with <code>fileSystems."/mount/point" = {};</code> entries. ZFS-native mountpoints are not managed as part of the system configuration, but they better support hibernation with a separate swap partition. Declarative management can lead to conflicts if the ZFS mount service is also enabled for the same datasets; disable it with <code>systemd.services.zfs-mount.enable = false;</code>.
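
A minimal sketch for one legacy-mounted dataset (the dataset and mountpoint names are placeholders):

<syntaxhighlight lang="nix">
{
  # Dataset created beforehand with: zfs create -o mountpoint=legacy zpool/data
  fileSystems."/data" = {
    device = "zpool/data";
    fsType = "zfs";
  };

  # Avoid conflicts between these declarative mounts and the ZFS mount service.
  systemd.services.zfs-mount.enable = false;
}
</syntaxhighlight>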
== Guides ==

==== '''OpenZFS Documentation for installing''' ====

{{warning|This guide is not endorsed by NixOS and some features like immutable root do not have upstream support and could break on updates. If an issue arises while following this guide, please consult the guide's support channels.}}

One guide for a NixOS installation with ZFS is maintained at [https://openzfs.github.io/openzfs-docs/Getting%20Started/NixOS/ OpenZFS Documentation (''Getting Started'' for ''NixOS'')]

It is about:
* [https://openzfs.github.io/openzfs-docs/Getting%20Started/NixOS/index.html#installation enabling ZFS on an existing NixOS installation] and
* [https://openzfs.github.io/openzfs-docs/Getting%20Started/NixOS/#root-on-zfs (installing NixOS with) Root on ZFS].

It is not about:
* Giving understandable, easy-to-follow instructions that stay close to the standard installation guide
* Integrating ZFS into your existing configuration
==== '''Simple NixOS ZFS installation''' ====

Start from here in the NixOS manual: [https://nixos.org/manual/nixos/stable/#sec-installation-manual].

Under manual partitioning [https://nixos.org/manual/nixos/stable/#sec-installation-manual-partitioning] do this instead:

'''Partition your disk with your favorite partition tool.'''

We need the following partitions:
* 1G for the boot partition with "boot" as the partition label (also called name in some tools) and EF00 as the partition code
* 10G for a swap partition with "swap" as the partition label and 8200 as the partition code. We will encrypt this with a random secret on each boot.
* The rest of the disk space for ZFS with "root" as the partition label and 8300 as the partition code (the default code)
Reason for the swap partition: ZFS uses a caching mechanism (the ARC) that is different from the normal Linux cache infrastructure. In low-memory situations, ZFS therefore might need a bit longer to free up memory from its cache. The swap partition will help with that.

Example output from <code>gdisk</code>:

<syntaxhighlight lang="bash">
sudo gdisk /dev/nvme0n1
GPT fdisk (gdisk) version 1.0.9.1
...
Command (? for help): p
Disk /dev/nvme0n1: 500118192 sectors, 238.5 GiB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): CA926E8C-47F6-416A-AD1A-C2190CF5D1F8
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 500118158
Partitions will be aligned on 2048-sector boundaries
Total free space is 2669 sectors (1.3 MiB)

Number  Start (sector)    End (sector)  Size        Code  Name
   1            2048         2099199    1024.0 MiB  EF00  boot
   2         2099200        23070719    10.0 GiB    8200  swap
   3        23070720       500117503    227.5 GiB   8300  root

Command (? for help):
</syntaxhighlight>

'''Make a ZFS pool with encryption and mount points:'''

'''Note:''' zpool configuration can significantly affect performance (especially the <code>ashift</code> option), so you may want to do some research. The [https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/ ZFS tuning cheatsheet] and the [https://wiki.archlinux.org/title/ZFS#Storage_pools ArchWiki] are good places to start. The pool goes on the "root" partition created above, here <code>/dev/nvme0n1p3</code>.

<syntaxhighlight lang="bash">
zpool create -O encryption=on -O keyformat=passphrase -O keylocation=prompt -O compression=zstd -O mountpoint=none -O xattr=sa -O acltype=posixacl -o ashift=12 zpool /dev/nvme0n1p3
zfs create zpool/root
zfs create zpool/nix
zfs create zpool/var
zfs create zpool/home

mkdir -p /mnt
mount -t zfs zpool/root /mnt -o zfsutil
mkdir /mnt/nix /mnt/var /mnt/home
mount -t zfs zpool/nix /mnt/nix -o zfsutil
mount -t zfs zpool/var /mnt/var -o zfsutil
mount -t zfs zpool/home /mnt/home -o zfsutil
</syntaxhighlight>

Output from <syntaxhighlight lang="bash" inline>zpool status</syntaxhighlight>:

<syntaxhighlight>
zpool status
  pool: zpool
 state: ONLINE
...
config:

        NAME                               STATE     READ WRITE CKSUM
        zpool                              ONLINE       0     0     0
          nvme-eui.0025384b21406566-part3  ONLINE       0     0     0
</syntaxhighlight>

'''Make a FAT filesystem on the boot partition:'''

<syntaxhighlight lang="bash">
mkfs.fat -F 32 -n boot /dev/nvme0n1p1
</syntaxhighlight>

'''Installation:'''

Install: [https://nixos.org/manual/nixos/stable/#sec-installation-manual-installing]

Jump to "2. UEFI systems"

<syntaxhighlight lang="bash">
mkdir -p /mnt/boot
mount /dev/disk/by-partlabel/boot /mnt/boot
</syntaxhighlight>

Jump to "4." ... <code>/mnt/etc/nixos/configuration.nix</code> ...

Continue from here and add this boot loader and filesystems config to your configuration.nix:
<syntaxhighlight lang="nix">
{
  # Boot loader config for configuration.nix:
  boot.loader.systemd-boot.enable = true;

  # For local disks that are not shared over the network, we don't need this to be random.
  networking.hostId = "8425e349";

  fileSystems."/" = {
    device = "zpool/root";
    fsType = "zfs";
    # The zfsutil option is needed when mounting ZFS datasets without "legacy" mountpoints.
    options = [ "zfsutil" ];
  };

  fileSystems."/nix" = {
    device = "zpool/nix";
    fsType = "zfs";
    options = [ "zfsutil" ];
  };

  fileSystems."/var" = {
    device = "zpool/var";
    fsType = "zfs";
    options = [ "zfsutil" ];
  };

  fileSystems."/home" = {
    device = "zpool/home";
    fsType = "zfs";
    options = [ "zfsutil" ];
  };

  fileSystems."/boot" = {
    device = "/dev/disk/by-partlabel/boot";
    fsType = "vfat";
  };

  swapDevices = [{
    device = "/dev/disk/by-partlabel/swap";
    randomEncryption = true;
  }];
}
</syntaxhighlight>
== Importing on boot ==

If you create a zpool, it will not be imported on the next boot unless you either add the zpool name to <syntaxhighlight lang="nix" inline>boot.zfs.extraPools</syntaxhighlight>:

<syntaxhighlight lang="nix">
# In /etc/nixos/configuration.nix:
boot.zfs.extraPools = [ "zpool_name" ];
</syntaxhighlight>

or if you are using legacy mountpoints, add a <syntaxhighlight lang="nix" inline>fileSystems</syntaxhighlight> entry and NixOS will automatically detect that the pool needs to be imported:

<syntaxhighlight lang="nix">
# In /etc/nixos/configuration.nix:
fileSystems."/mount/point" = {
  device = "zpool_name";
  fsType = "zfs";
};
</syntaxhighlight>
=== Zpool created with bus-based disk names ===

If you used bus-based disk names in the <syntaxhighlight inline>zpool create</syntaxhighlight> command, e.g., <syntaxhighlight inline>/dev/sda</syntaxhighlight>, NixOS may run into issues importing the pool if the names change. Even if the pool can be mounted (with <syntaxhighlight lang="nix" inline>boot.zfs.devNodes = "/dev/disk/by-partuuid";</syntaxhighlight> set), this may manifest as a <syntaxhighlight inline>FAULTED</syntaxhighlight> disk and a <syntaxhighlight inline>DEGRADED</syntaxhighlight> pool reported by <syntaxhighlight inline>zpool status</syntaxhighlight>. The fix is to re-import the pool using disk IDs:

<syntaxhighlight lang="console">
# zpool export zpool_name
# zpool import -d /dev/disk/by-id zpool_name
</syntaxhighlight>

The import setting is reflected in <code>/etc/zfs/zpool.cache</code>, so it should persist through subsequent boots.
=== Zpool created with disk IDs ===

If you used disk IDs to refer to disks in the <code>zpool create</code> command, e.g., <code>/dev/disk/by-id</code>, then NixOS may consistently fail to import the pool unless <code>boot.zfs.devNodes = "/dev/disk/by-id"</code> is also set.
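
For example:

<syntaxhighlight lang="nix">
{
  # Make stage 1 look for pool members under the same by-id paths used at pool creation.
  boot.zfs.devNodes = "/dev/disk/by-id";
}
</syntaxhighlight>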
== Mount datasets at boot ==

The zfs-mount service is enabled by default as of NixOS 22.05.

To automatically mount a dataset at boot, you only need to set <code>canmount=on</code> and <code>mountpoint=/mount/point</code> on the respective datasets.
== Changing the Adaptive Replacement Cache size ==

To change the maximum size of the ARC to (for example) 12 GB, add this to your NixOS configuration:

<syntaxhighlight lang="nix">
boot.kernelParams = [ "zfs.zfs_arc_max=12884901888" ];
</syntaxhighlight>

== Tuning other parameters ==

To tune other attributes of ARC, L2ARC or of ZFS itself via runtime modprobe config, add this to your NixOS configuration (keys and values are examples only!):

<syntaxhighlight lang="nix">
boot.extraModprobeConfig = ''
  options zfs l2arc_noprefetch=0 l2arc_write_boost=33554432 l2arc_write_max=16777216 zfs_arc_max=2147483648
'';
</syntaxhighlight>

You can confirm whether any specified configuration/tuning got applied via commands like <code>arc_summary</code> and <code>arcstat -a -s " "</code>.

== Automatic scrubbing ==

Regular scrubbing of ZFS pools is recommended and can be enabled in your NixOS configuration via:

<syntaxhighlight lang="nix">
services.zfs.autoScrub.enable = true;
</syntaxhighlight>

You can tweak the interval (defaults to once a week) and which pools should be scrubbed (defaults to all).
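
For example, a sketch using the <code>services.zfs.autoScrub</code> options (verify the exact option names in <code>man configuration.nix</code> for your release):

<syntaxhighlight lang="nix">
{
  services.zfs.autoScrub = {
    enable = true;
    # Scrub once a month instead of the weekly default (systemd calendar expression).
    interval = "monthly";
    # Restrict scrubbing to this pool; an empty list means all pools.
    pools = [ "zpool" ];
  };
}
</syntaxhighlight>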
== Remote unlock ==

=== Unlock encrypted ZFS via SSH on boot ===

{{note|As of 22.05, rebuilding your config with the below directions may result in a situation where, if you want to revert the changes, you may need to do some pretty hairy nix-store manipulation to be able to successfully rebuild, see https://github.com/NixOS/nixpkgs/issues/101462#issuecomment-1172926129}}

In case you want to unlock a machine remotely (after an update), having an ssh service in initrd for the password prompt is handy:
<syntaxhighlight lang="nix">
boot = {
  initrd.network = {
    # This will use udhcp to get an ip address.
    # Make sure you have added the kernel module for your network driver
    # to `boot.initrd.availableKernelModules`, so your initrd can load it!
    # Static ip addresses might be configured using the ip argument in kernel command line:
    # https://www.kernel.org/doc/Documentation/filesystems/nfs/nfsroot.txt
    enable = true;
    ssh = {
      enable = true;
      # To prevent ssh clients from freaking out because a different host key is used,
      # a different port for ssh is useful (assuming the same host has also a regular sshd running)
      port = 2222;
      # hostKeys paths must be unquoted strings, otherwise you'll run into issues with boot.initrd.secrets
      # the keys are copied to initrd from the path specified; multiple keys can be set
      # you can generate any number of host keys using
      # `ssh-keygen -t ed25519 -N "" -f /path/to/ssh_host_ed25519_key`
      hostKeys = [ /path/to/ssh_host_ed25519_key ];
      # public ssh key used for login
      authorizedKeys = [ "ssh-rsa AAAA..." ];
    };
  };
};
</syntaxhighlight>
* In order to use DHCP in the initrd, network manager must not be enabled and <syntaxhighlight lang="nix" inline>networking.useDHCP = true;</syntaxhighlight> must be set.
* If your network card isn't started, you'll need to add the corresponding kernel module to the kernel and initrd as well, e.g. <syntaxhighlight lang="nix">
boot.kernelModules = [ "r8169" ];
boot.initrd.kernelModules = [ "r8169" ];</syntaxhighlight>

After that you can unlock your datasets using the following ssh command:

<syntaxhighlight lang="bash">
ssh -p 2222 root@host "zpool import -a; zfs load-key -a && killall zfs"
</syntaxhighlight>
Alternatively you could also add the commands as postCommands to your configuration.nix, then you just have to ssh into the initrd:

<syntaxhighlight lang="nix">
boot = {
  initrd.network = {
    postCommands = ''
      # Import all pools
      zpool import -a
      # Or import selected pools
      zpool import pool2
      zpool import pool3
      zpool import pool4

      # Add the load-key command to the .profile
      echo "zfs load-key -a; killall zfs" >> /root/.profile
    '';
  };
};
</syntaxhighlight>

After that you can unlock your datasets by simply logging in via ssh; the command added to <code>/root/.profile</code> above will then prompt for the keys:

<syntaxhighlight lang="bash">
ssh -p 2222 root@host
</syntaxhighlight>
== Reservations ==

On ZFS, performance will deteriorate significantly when more than 80% of the available space is used. To avoid this, reserve disk space beforehand.

To reserve space, create a new unused dataset that gets a guaranteed disk space of 10GB:

<syntaxhighlight lang="console">
# zfs create -o refreservation=10G -o mountpoint=none zroot/reserved
</syntaxhighlight>
== Auto ZFS trimming ==

Set <syntaxhighlight lang="nix" inline>services.zfs.trim.enable = true;</syntaxhighlight>. This will periodically run <code>zpool trim</code>. Note that this is different from the <code>autotrim</code> pool property. For further information, see the <code>zpool-trim</code> and <code>zpoolprops</code> man pages.
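
A sketch of the corresponding NixOS configuration (the <code>interval</code> option name is taken from the <code>services.zfs.trim</code> module; check <code>man configuration.nix</code> for your release):

<syntaxhighlight lang="nix">
{
  services.zfs.trim = {
    enable = true;
    # How often the zpool-trim timer fires (systemd calendar expression); "weekly" is the default.
    interval = "weekly";
  };
}
</syntaxhighlight>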
== Take snapshots automatically ==

See the <code>services.sanoid</code> section in <code>man configuration.nix</code>.
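
A minimal sketch of a sanoid-based snapshot setup (the dataset name and retention values are illustrative; see the <code>services.sanoid</code> options for details):

<syntaxhighlight lang="nix">
{
  services.sanoid = {
    enable = true;
    datasets."zpool/home" = {
      # Keep a rolling window of snapshots and prune older ones automatically.
      autosnap = true;
      autoprune = true;
      hourly = 24;
      daily = 7;
      monthly = 3;
    };
  };
}
</syntaxhighlight>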
== NFS share ==

With the <code>sharenfs</code> property, ZFS has built-in support for generating the <code>/etc/exports.d/zfs.exports</code> file, which in turn is processed by the NFS service automatically.

{{warning|If you are intending on defining an IPv6 subnet as part of your sharenfs rule, as of ZFS 2.0.6 (2021-09-23) please note that due to a bug in openzfs '''your rule will not correctly apply''', and may result in a security vulnerability (CVE-2013-20001). A fix has been implemented in the next yet-to-be-released upstream version - [https://github.com/openzfs/zfs/pull/11939 openzfs/zfs#11939]}}
To enable NFS sharing on a dataset, only two steps are needed:

First, enable the [[NFS|NFS service]]:

<syntaxhighlight lang="nix">
services.nfs.server.enable = true;
</syntaxhighlight>

Only this line is needed. Configure the firewall if necessary, as described in the [[NFS]] article.

Then, set the <code>sharenfs</code> property:

<syntaxhighlight lang="console">
# zfs set sharenfs="ro=192.168.1.0/24,all_squash,anonuid=70,anongid=70" rpool/myData
</syntaxhighlight>

For more options, see <code>man 5 exports</code>.

Todo: <code>sharesmb</code> property for Samba.
== Mail notification for ZFS Event Daemon ==

The ZFS Event Daemon (zed) monitors events generated by the ZFS kernel module and runs configured tasks. It can be configured to send an email when a pool scrub is finished or a disk has failed. [https://search.nixos.org/options?query=services.zfs.zed zed options]

=== Alternative 1: Enable Mail Notification without Recompilation ===

First, we need to configure a mail transfer agent, the program that sends email:
<syntaxhighlight lang="nix">
{
  programs.msmtp = {
    enable = true;
    setSendmail = true;
    defaults = {
      aliases = "/etc/aliases";
      port = 465;
      tls_trust_file = "/etc/ssl/certs/ca-certificates.crt";
      tls = "on";
      auth = "login";
      tls_starttls = "off";
    };
    accounts = {
      default = {
        host = "mail.example.com";
        passwordeval = "cat /etc/emailpass.txt";
        user = "user@example.com";
        from = "user@example.com";
      };
    };
  };
}
</syntaxhighlight>

Then, configure an alias for the root account. With this alias configured, all mails sent to root, such as cron job results and failed sudo login events, will be redirected to the configured email account.

<syntaxhighlight lang="bash">
tee -a /etc/aliases <<EOF
root: user@example.com
EOF
</syntaxhighlight>

Finally, override the default zed settings with a custom one:

<syntaxhighlight lang="nix">
{
  services.zfs.zed.settings = {
    ZED_DEBUG_LOG = "/tmp/zed.debug.log";
    ZED_EMAIL_ADDR = [ "root" ];
    ZED_EMAIL_PROG = "${pkgs.msmtp}/bin/msmtp";
    ZED_EMAIL_OPTS = "@ADDRESS@";

    ZED_NOTIFY_INTERVAL_SECS = 3600;
    ZED_NOTIFY_VERBOSE = true;

    ZED_USE_ENCLOSURE_LEDS = true;
    ZED_SCRUB_AFTER_RESILVER = true;
  };

  # This option does not work; it will return an error.
  services.zfs.zed.enableMail = false;
}
</syntaxhighlight>

You can now test this by performing a scrub:

<syntaxhighlight lang="console">
# zpool scrub $pool
</syntaxhighlight>

=== Alternative 2: Rebuild ZFS with Mail Support ===

The <code>zfs</code> package can be rebuilt with mail features. However, please note that this will cause Nix to recompile the entire ZFS package on the computer, and on every kernel update, which could be very time-consuming on lower-end NAS systems.

An alternative solution that does not involve recompilation can be found above.

The following override is needed as <code>zfs</code> is implicitly used in partition mounting:
<syntaxhighlight lang="nix">
nixpkgs.config.packageOverrides = pkgs: {
  zfsStable = pkgs.zfsStable.override { enableMail = true; };
};
</syntaxhighlight>

A mail sender like [[msmtp]] or [[postfix]] is required.

A minimal, testable ZED configuration example:

<syntaxhighlight lang="nix">
services.zfs.zed.enableMail = true;
services.zfs.zed.settings = {
  ZED_EMAIL_ADDR = [ "root" ];
  ZED_NOTIFY_VERBOSE = true;
};
</syntaxhighlight>

Above, <code>ZED_EMAIL_ADDR</code> is set to <code>root</code>, which most people will have an alias for in their mailer. You can change it to mail you directly: <code>ZED_EMAIL_ADDR = [ "you@example.com" ];</code>

ZED pulls in <code>mailutils</code> and runs <code>mail</code> by default, but you can override it with <code>ZED_EMAIL_PROG</code>. If using msmtp, you may need <code>ZED_EMAIL_PROG = "${pkgs.msmtp}/bin/msmtp";</code>.

You can customize the mail command with <code>ZED_EMAIL_OPTS</code>. For example, if your upstream mail server requires a certain FROM address: <code>ZED_EMAIL_OPTS = "-r 'noreply@example.com' -s '@SUBJECT@' @ADDRESS@";</code>
[[Category:Guide]]