__FORCETOC__
[[wikipedia:Network_File_System|NFS]] is a distributed filesystem protocol for accessing directories and files over a network.
 
= Server =
 
== NFS share setup ==
 
=== Using bind mounts ===
 
Let's say that we've got one server-machine with 2 directories that we want to share: <code>/mnt/tomoyo</code> and <code>/mnt/kotomi</code>.


First, we have to create a dedicated directory from which our NFS server will access the data:
<syntaxhighlight lang="nix">
{
  fileSystems."/export/tomoyo" = {
    device = "/mnt/tomoyo";
    options = [ "bind" ];
  };
  fileSystems."/export/kotomi" = {
    device = "/mnt/kotomi";
    options = [ "bind" ];
  };
}
</syntaxhighlight>
Refer to [[Filesystems#Bind mounts]] for more information on bind mounts.
=== Using btrfs subvolumes ===
If you are using Btrfs, instead of moving existing directories or bind-mounting them into <code>/export</code>, you can create dedicated subvolumes directly under <code>/export</code>. This avoids the need for additional bind mounts and makes snapshotting or quota management easier. See [[btrfs#Subvolumes]] for details on creating subvolumes.
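As a sketch of that approach (assuming <code>/export</code> already lives on a Btrfs filesystem), the subvolumes can be created directly in place:

<syntaxhighlight lang="console">
# btrfs subvolume create /export/tomoyo
# btrfs subvolume create /export/kotomi
# btrfs subvolume list /export
</syntaxhighlight>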
== NFS service configuration ==


Having the filesystem ready, we can proceed to configure the NFS server itself:


<syntaxhighlight lang="nix">
{
  services.nfs.server.enable = true;
  services.nfs.server.exports = ''
    /export         192.168.1.10(rw,fsid=0,no_subtree_check) 192.168.1.15(rw,fsid=0,no_subtree_check)
    /export/kotomi  192.168.1.10(rw,nohide,insecure,no_subtree_check) 192.168.1.15(rw,nohide,insecure,no_subtree_check)
    /export/tomoyo  192.168.1.10(rw,nohide,insecure,no_subtree_check) 192.168.1.15(rw,nohide,insecure,no_subtree_check)
  '';
}
</syntaxhighlight>
 
This configuration exposes all our shares to 2 local IPs; you can find more examples at [https://wiki.gentoo.org/wiki/NFSv4 Gentoo's wiki on NFS].
 
To list the current loaded exports, use: <code>exportfs -v</code>
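After changing the exports configuration, the running export table can also be resynchronized by hand, and a client can query which shares a server offers (the latter assumes the server runs <code>rpcbind</code>, which NFSv4-only setups may not; the IP below is a placeholder):

<syntaxhighlight lang="console">
# exportfs -ra               # re-export everything from the exports file
$ showmount -e 192.168.1.1   # list exports offered by the server
</syntaxhighlight>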


Other options are available on the [https://search.nixos.org/options?query=nfs NixOS option page] or via the <code>nixos-option</code> command.
{{note| If you are exporting a btrfs subvolume, it is recommended to use the fsid option with a unique id, e.g. <code>fsid{{=}}12345</code>. See the [https://btrfs.readthedocs.io/en/latest/Interoperability.html#nfs btrfs interoperability docs] for more info.}}
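For example, the <code>kotomi</code> export line from the configuration above might then look as follows (the id <code>12345</code> is an arbitrary placeholder; it merely has to be unique per exported subvolume):

<syntaxhighlight lang="nix">
services.nfs.server.exports = ''
  /export/kotomi  192.168.1.10(rw,nohide,insecure,no_subtree_check,fsid=12345)
'';
</syntaxhighlight>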


=== Firewall ===
If your server-machine has a firewall turned on (as NixOS does by default, for instance), don't forget to open appropriate ports; e.g. for NFSv4:
<syntaxhighlight lang="nix">
{
  networking.firewall.allowedTCPPorts = [ 2049 ];
}
</syntaxhighlight>


= Client =


To ensure the client has the necessary NFS utilities installed, add the following to your system configuration (for example, in <code>configuration.nix</code>).
 
<syntaxhighlight lang="nix">
  boot.supportedFilesystems = [ "nfs" ];
</syntaxhighlight>
 
NFS shares can be mounted on a client system using the standard <code>fileSystems</code> option. Continuing the server example, to mount the <code>tomoyo</code> share:


<syntaxhighlight lang="nix">
{
  fileSystems."/mnt/tomoyo" = {
    device = "server:/tomoyo";
    fsType = "nfs";
  };
}
</syntaxhighlight>
In the above configuration, replace "server" with the appropriate IP address or DNS entry of your NFS server. Other, regular [https://search.nixos.org/options?query=filesystems.%3Cname%3E filesystem options] apply.
 
{{note| On the client side, the exposed shares are as if they were exposed at the root level - i.e. <code>/export/foo</code> becomes <code>/foo</code> (in the <code>device</code> option) }}
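Before committing the mount to your configuration, it can be helpful to test it by hand (replace <code>server</code> with your server's address; the target directory must already exist):

<syntaxhighlight lang="console">
# mkdir -p /mnt/tomoyo
# mount -t nfs server:/tomoyo /mnt/tomoyo
# umount /mnt/tomoyo
</syntaxhighlight>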
 
== Specifying NFS version ==


You can specify the NFS version by adding the <code>"nfsvers="</code> option:

<syntaxhighlight lang="nix">
{
  fileSystems."/mnt/tomoyo" = {
    device = "server:/tomoyo";
    fsType = "nfs";
    options = [ "nfsvers=4.2" ];
  };
}
</syntaxhighlight>


== Lazy-mounting ==


By default, all shares will be mounted right when your machine starts. Apart from being simply unwanted sometimes, this may also cause issues when your computer doesn't have a stable network connection or uses WiFi; you can fix this by telling systemd to mount your shares the first time they are ''accessed'' (instead of keeping them mounted at all times):

<syntaxhighlight lang="nix">
{
  fileSystems."/mnt/tomoyo" = {
    device = "server:/tomoyo";
    fsType = "nfs";
    options = [ "x-systemd.automount" "noauto" ];
  };
}
</syntaxhighlight>
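With <code>x-systemd.automount</code> in place, systemd generates an automount unit for the path, which can be inspected as follows (unit names are derived from the mount point, so <code>/mnt/tomoyo</code> becomes <code>mnt-tomoyo.automount</code>):

<syntaxhighlight lang="console">
$ systemctl list-units --type=automount
$ systemctl status mnt-tomoyo.automount
</syntaxhighlight>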


== Auto-disconnecting ==


You can tell systemd to disconnect your NFS-client from the NFS-server when the directory has not been accessed for some time:

<syntaxhighlight lang="nix">
{
  fileSystems."/mnt/tomoyo" = {
    device = "server:/tomoyo";
    fsType = "nfs";
    options = [ "x-systemd.automount" "noauto" "x-systemd.idle-timeout=600" ];
  };
}
</syntaxhighlight>


== Using systemd.mounts and systemd.automounts ==


This section provides an alternative approach for users who prefer to manage mounts using dedicated systemd units. Here is an example with auto-disconnecting and lazy-mounting implemented, and the <code>noatime</code> mount option added.

Note that <code>wantedBy = [ "multi-user.target" ];</code> is required for the automount unit to start at boot.
<syntaxhighlight lang="nix">
{
  systemd.mounts = let commonMountOptions = {
    type = "nfs";
    mountConfig = {
      Options = "noatime";
    };
  };

  in

  [
    (commonMountOptions // {
      what = "server:/tomoyo";
      where = "/mnt/tomoyo";
    })

    (commonMountOptions // {
      what = "server:/kotomi";
      where = "/mnt/kotomi";
    })
  ];

  systemd.automounts = let commonAutoMountOptions = {
    wantedBy = [ "multi-user.target" ];
    automountConfig = {
      TimeoutIdleSec = "600";
    };
  };

  in

  [
    (commonAutoMountOptions // { where = "/mnt/tomoyo"; })
    (commonAutoMountOptions // { where = "/mnt/kotomi"; })
  ];
}
</syntaxhighlight>


= Nix store on NFS =


In a single-user setup ('''not on NixOS''') the Nix store can also be exported over NFS (common in HPC clusters) to share packages over the network. The only requirement is to also pass <code>local_lock=flock</code> or <code>local_lock=all</code> as a mount option to allow Nix to take locks on modifications. Example entry in <code>fstab</code>:

<syntaxhighlight lang="console"><host_or_ip>:/nix /nix nfs nofail,x-systemd.device-timeout=4,local_lock=all 0 0</syntaxhighlight>

'''TODO:''' Why this? That seems extremely unsafe. This disables NFS locks (which apply to all NFS clients) and makes locks ''local'', meaning a lock taken by one NFS client isn't seen by another, and both can take their locks. So this removes the protection against concurrent writes that Nix assumes.
[[Category:Filesystem]]
[[Category:Networking]]