NFS
= Server =

== NFS share setup ==

=== Using bind mounts ===

Let's say that we've got one server-machine with 2 directories that we want to share: <code>/mnt/tomoyo</code> and <code>/mnt/kotomi</code>.

First, we have to create a dedicated directory from which our NFS server will access the data:
<syntaxhighlight lang="console">
$ mkdir /export
</syntaxhighlight>

You may need to change ownership of the <code>/export</code> directory to <code>nobody:nogroup</code>.
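If you manage the machine declaratively, the same directory and ownership can also be set up with NixOS's <code>systemd.tmpfiles.rules</code> option; a minimal sketch, using the path and ownership from the example above:

<syntaxhighlight lang="nix">
{
  # Create /export at boot and give it to nobody:nogroup,
  # instead of running mkdir and chown by hand.
  systemd.tmpfiles.rules = [
    "d /export 0755 nobody nogroup -"
  ];
}
</syntaxhighlight>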
Then we have to either move our already-existing directories inside <code>/export</code> (using <code>mv</code> from the command line) or bind-mount them there:
<syntaxhighlight lang="nix">
{
  fileSystems."/export/tomoyo" = {
    device = "/mnt/tomoyo";
    options = [ "bind" ];
  };

  fileSystems."/export/kotomi" = {
    device = "/mnt/kotomi";
    options = [ "bind" ];
  };
}
</syntaxhighlight>
=== Using btrfs subvolumes ===

If you are using Btrfs, instead of moving existing directories or bind-mounting them into <code>/export</code>, you can create dedicated subvolumes directly under <code>/export</code>. This avoids the need for additional bind mounts and makes snapshotting or quota management easier. See [[btrfs#Subvolumes]] for details on creating subvolumes.
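If <code>/export</code> does not already live on the Btrfs filesystem, a subvolume can also be mounted there explicitly. A minimal sketch, assuming a Btrfs filesystem labelled <code>data</code> containing a subvolume named <code>tomoyo</code> (both names are hypothetical, adjust them to your setup):

<syntaxhighlight lang="nix">
{
  # Mount the "tomoyo" subvolume directly at /export/tomoyo,
  # so no separate bind mount is needed.
  fileSystems."/export/tomoyo" = {
    device = "/dev/disk/by-label/data";
    fsType = "btrfs";
    options = [ "subvol=tomoyo" ];
  };
}
</syntaxhighlight>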
== NFS service configuration ==

Having the filesystem ready, we can proceed to configure the NFS server itself:
<syntaxhighlight lang="nix">
{
  services.nfs.server.enable = true;
  services.nfs.server.exports = ''
    /export 192.168.1.10(rw,fsid=0,no_subtree_check) 192.168.1.15(rw,fsid=0,no_subtree_check)
    /export/kotomi 192.168.1.10(rw,nohide,insecure,no_subtree_check) 192.168.1.15(rw,nohide,insecure,no_subtree_check)
    /export/tomoyo 192.168.1.10(rw,nohide,insecure,no_subtree_check) 192.168.1.15(rw,nohide,insecure,no_subtree_check)
  '';
}
</syntaxhighlight>
This configuration exposes all our shares to two local IPs; you can find more examples at [https://wiki.gentoo.org/wiki/NFSv4 Gentoo's wiki on NFS].

Other options are available on the [https://search.nixos.org/options?query=nfs NixOS options page] or via the <code>nixos-option</code> command.
{{note| If you are exporting a btrfs subvolume, it is recommended to use the fsid option with a unique id, e.g. <code>fsid{{=}}12345</code>. See the [https://btrfs.readthedocs.io/en/latest/Interoperability.html#nfs btrfs interoperability docs] for more info.}}
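For illustration, an export line for a Btrfs subvolume might then look like this (a sketch reusing the example export from above; <code>12345</code> is just a placeholder value that has to be unique per exported subvolume):

<syntaxhighlight lang="nix">
{
  services.nfs.server.exports = ''
    /export/tomoyo 192.168.1.10(rw,nohide,insecure,no_subtree_check,fsid=12345)
  '';
}
</syntaxhighlight>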
=== Firewall ===

If your server-machine has a firewall turned on (as NixOS does by default, for instance), don't forget to open appropriate ports; e.g. for NFSv4:

<syntaxhighlight lang="nix">
networking.firewall.allowedTCPPorts = [ 2049 ];
</syntaxhighlight>
Many clients only support NFSv3, which requires the server to have fixed ports:

<syntaxhighlight lang="nix">
services.nfs.server = {
  enable = true;
  # fixed rpc.statd port; for firewall
  lockdPort = 4001;
  mountdPort = 4002;
  statdPort = 4000;
  extraNfsdConfig = '''';
};

networking.firewall = {
  enable = true;
  # for NFSv3; view with `rpcinfo -p`
  allowedTCPPorts = [ 111 2049 4000 4001 4002 20048 ];
  allowedUDPPorts = [ 111 2049 4000 4001 4002 20048 ];
};
</syntaxhighlight>
= Client =

To ensure the client has the necessary utilities installed, add

<syntaxhighlight lang="nix">
boot.supportedFilesystems = [ "nfs" ];
</syntaxhighlight>

to your NixOS configuration (e.g. <code>configuration.nix</code>).
Continuing the server example, mounting the now-exposed <code>tomoyo</code> share on another box (on a client) is as simple as:

<syntaxhighlight lang="nix">
{
  fileSystems."/mnt/tomoyo" = {
    device = "server:/tomoyo";
    fsType = "nfs";
  };
}
</syntaxhighlight>
Replace "server" in the above device attribute with the IP address or DNS entry of the NFS server. Note that clients see exposed shares as if they were exposed at the root level - i.e. /export/foo
becomes /foo
(in the device
option). Other, regular fileSystems options apply.
== Specifying NFS version ==

You can specify the NFS version by adding the <code>nfsvers=</code> option:

<syntaxhighlight lang="nix">
{
  fileSystems."/mnt/tomoyo" = {
    # ...
    options = [ "nfsvers=4.2" ];
  };
}
</syntaxhighlight>
== Lazy-mounting ==

By default, all shares will be mounted as soon as your machine starts. Apart from sometimes being simply unwanted, this may also cause issues when your computer doesn't have a stable network connection or uses WiFi. You can fix this by telling systemd to mount your shares the first time they are ''accessed'' (instead of keeping them mounted at all times):

<syntaxhighlight lang="nix">
{
  fileSystems."/mnt/tomoyo" = {
    # ...
    options = [ "x-systemd.automount" "noauto" ];
  };
}
</syntaxhighlight>
== Auto-disconnecting ==

You can tell systemd to disconnect your NFS client from the NFS server when the directory has not been accessed for some time:

<syntaxhighlight lang="nix">
{
  fileSystems."/mnt/tomoyo" = {
    # ...
    options = [ "x-systemd.idle-timeout=600" ]; # disconnects after 10 minutes (i.e. 600 seconds)
  };
}
</syntaxhighlight>
== Using systemd.mounts and systemd.automounts ==

Here is an example with auto-disconnecting and lazy-mounting implemented, and the <code>noatime</code> mount option added.

Note that <code>wantedBy = [ "multi-user.target" ];</code> is required for the automount unit to start at boot.

Also note that <code>x-systemd</code> mount options are unneeded here, as they are a representation of systemd options in <code>fstab(5)</code> format. They get parsed and converted to unit files by <code>systemd-fstab-generator(8)</code>, as mentioned in <code>systemd.mount(5)</code>.
<syntaxhighlight lang="nix">
{
  services.rpcbind.enable = true; # needed for NFS

  systemd.mounts = [{
    type = "nfs";
    mountConfig = {
      Options = "noatime";
    };
    what = "server:/tomoyo";
    where = "/mnt/tomoyo";
  }];

  systemd.automounts = [{
    wantedBy = [ "multi-user.target" ];
    automountConfig = {
      TimeoutIdleSec = "600";
    };
    where = "/mnt/tomoyo";
  }];
}
</syntaxhighlight>
Multiple mounts with the exact same options can benefit from abstraction.
<syntaxhighlight lang="nix">
{
  services.rpcbind.enable = true; # needed for NFS

  systemd.mounts =
    let
      commonMountOptions = {
        type = "nfs";
        mountConfig = {
          Options = "noatime";
        };
      };
    in
    [
      (commonMountOptions // {
        what = "server:/tomoyo";
        where = "/mnt/tomoyo";
      })

      (commonMountOptions // {
        what = "server:/kotomi";
        where = "/mnt/kotomi";
      })
    ];

  systemd.automounts =
    let
      commonAutoMountOptions = {
        wantedBy = [ "multi-user.target" ];
        automountConfig = {
          TimeoutIdleSec = "600";
        };
      };
    in
    [
      (commonAutoMountOptions // { where = "/mnt/tomoyo"; })
      (commonAutoMountOptions // { where = "/mnt/kotomi"; })
    ];
}
</syntaxhighlight>
= Nix store on NFS =

In a single-user setup ('''not on NixOS''') the Nix store can also be exported over NFS (common in HPC clusters) to share packages over the network. The only requirement is to also pass <code>local_lock=flock</code> or <code>local_lock=all</code> as a mount option, so that Nix can take locks on modifications. Example entry in <code>fstab</code>:

<syntaxhighlight lang="text">
<host_or_ip>:/nix /nix nfs nofail,x-systemd.device-timeout=4,local_lock=all 0 0
</syntaxhighlight>

TODO: Why this? That seems extremely unsafe. This disables NFS locks (which apply to all NFS clients) and makes locks local, meaning a lock taken by one NFS client isn't seen by another, and both can take their locks. So this removes protection against concurrent writes, which Nix assumes.