__FORCETOC__
[[wikipedia:Network_File_System|NFS]] is a distributed filesystem protocol for accessing directories and files over a network.


= Server =


== NFS share setup ==


=== Using bind mounts ===
 
Let's say that we've got one server-machine with 2 directories that we want to share: <code>/mnt/tomoyo</code> and <code>/mnt/kotomi</code>.
 
First, we have to create a dedicated directory from which our NFS server will access the data:
 
<syntaxhighlight lang="console">
$ mkdir /export
</syntaxhighlight>
 
You may need to change ownership of the <code>/export</code> directory to <code>nobody:nogroup</code>.
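For example (assuming the directory was just created and is still owned by root):

<syntaxhighlight lang="console">
$ sudo chown nobody:nogroup /export
</syntaxhighlight>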
 
Then we have to either move our already-existing directories inside <code>/export</code> (using <code>mv</code> from the command line) or bind-mount them there:


<syntaxhighlight lang="nix">
{
  fileSystems."/export/tomoyo" = {
    device = "/mnt/tomoyo";
    options = [ "bind" ];
  };

  fileSystems."/export/kotomi" = {
    device = "/mnt/kotomi";
    options = [ "bind" ];
  };
}
</syntaxhighlight>


Refer to [[Filesystems#Bind mounts]] for more information on bind mounts.
 
=== Using btrfs subvolumes ===
 
If you are using Btrfs, instead of moving existing directories or bind-mounting them into <code>/export</code>, you can create dedicated subvolumes directly under <code>/export</code>. This avoids the need for additional bind mounts and makes snapshotting or quota management easier. See [[btrfs#Subvolumes]] for details on creating subvolumes.
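For example, to create subvolumes for the two shares from above (the paths are just illustrative):

<syntaxhighlight lang="console">
$ sudo btrfs subvolume create /export/tomoyo
$ sudo btrfs subvolume create /export/kotomi
</syntaxhighlight>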
 
== NFS service configuration ==
 
Having the filesystem ready, we can proceed to configure the NFS server itself:
 
<syntaxhighlight lang="nix">
{
  services.nfs.server.enable = true;
  services.nfs.server.exports = ''
    /export         192.168.1.10(rw,fsid=0,no_subtree_check) 192.168.1.15(rw,fsid=0,no_subtree_check)
    /export/kotomi  192.168.1.10(rw,nohide,insecure,no_subtree_check) 192.168.1.15(rw,nohide,insecure,no_subtree_check)
    /export/tomoyo  192.168.1.10(rw,nohide,insecure,no_subtree_check) 192.168.1.15(rw,nohide,insecure,no_subtree_check)
  '';
}
</syntaxhighlight>
 
This configuration exposes all our shares to 2 local IPs; you can find more examples at [https://wiki.gentoo.org/wiki/NFSv4 Gentoo's wiki on NFS].
 
To list the currently loaded exports, use: <code>exportfs -v</code>


Other options are available on the [https://search.nixos.org/options?query=nfs NixOS option page] or via the <code>nixos-option</code> command.
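For example, to inspect the current value of one of these options on a running system:

<syntaxhighlight lang="console">
$ nixos-option services.nfs.server.exports
</syntaxhighlight>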


{{note| If you are exporting a btrfs subvolume, it is recommended to use the fsid option with a unique id, e.g. <code>fsid{{=}}12345</code>. See the [https://btrfs.readthedocs.io/en/latest/Interoperability.html#nfs btrfs interoperability docs] for more info.}}
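For example (the id <code>12345</code> is arbitrary; it only needs to be unique among your exports):

<syntaxhighlight lang="nix">
{
  services.nfs.server.exports = ''
    /export/tomoyo 192.168.1.10(rw,nohide,insecure,no_subtree_check,fsid=12345)
  '';
}
</syntaxhighlight>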


=== Firewall ===


If your server-machine has a firewall turned on (as NixOS does by default, for instance), don't forget to open appropriate ports; e.g. for NFSv4:
<syntaxhighlight lang="nix">
networking.firewall.allowedTCPPorts = [ 2049 ];
</syntaxhighlight>


Many clients only support NFSv3, which requires the server to have fixed ports:
<syntaxhighlight lang="nix">
  services.nfs.server = {
    enable = true;
    # fixed ports for statd, lockd and mountd, so they can be opened in the firewall
    statdPort = 4000;
    lockdPort = 4001;
    mountdPort = 4002;
    extraNfsdConfig = '''';
  };
  networking.firewall = {
    enable = true;
    # ports needed for NFSv3; view the currently registered ports with `rpcinfo -p`
    allowedTCPPorts = [ 111 2049 4000 4001 4002 20048 ];
    allowedUDPPorts = [ 111 2049 4000 4001 4002 20048 ];
  };
</syntaxhighlight>


= Client =
 
To ensure the client has the necessary NFS utilities installed, add the following to your system configuration (for example, in <code>configuration.nix</code>):
 
<syntaxhighlight lang="nix">
  boot.supportedFilesystems = [ "nfs" ];
</syntaxhighlight>
 
NFS shares can be mounted on a client system using the standard <code>fileSystems</code> option. Continuing the server example, to mount the <code>tomoyo</code> share:
 
<syntaxhighlight lang="nix">
{
  fileSystems."/mnt/tomoyo" = {
    device = "server:/tomoyo";
    fsType = "nfs";
  };
}
</syntaxhighlight>
 
In the above configuration, replace "server" with the appropriate IP address or DNS entry of your NFS server. Other, regular [https://search.nixos.org/options?query=filesystems.%3Cname%3E filesystem options] apply.
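For example, with a server reachable at the placeholder address <code>192.168.1.1</code>:

<syntaxhighlight lang="nix">
  fileSystems."/mnt/tomoyo".device = "192.168.1.1:/tomoyo";
</syntaxhighlight>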


{{note| On the client side, the shares appear as if they were exported at the root level - i.e. <code>/export/foo</code> on the server becomes <code>/foo</code> in the <code>device</code> option. }}


== Specifying NFS version ==
You can specify the NFS version by adding the <code>nfsvers=</code> option:
<syntaxhighlight lang="nix">
{
  fileSystems."/mnt/tomoyo" = {
    # ...
    options = [ "nfsvers=4.2" ];
  };
}
</syntaxhighlight>
 
== Lazy-mounting ==
 
By default, all shares are mounted as soon as your machine starts. Besides being unwanted at times, this can also cause issues when your computer doesn't have a stable network connection or uses WiFi; you can fix this by telling systemd to mount your shares the first time they are ''accessed'' (instead of keeping them mounted at all times):
 
<syntaxhighlight lang="nix">
{
  fileSystems."/mnt/tomoyo" = {
    # ...
    options = [ "x-systemd.automount" "noauto" ];
  };
}
</syntaxhighlight>
 
== Auto-disconnecting ==
 
You can tell systemd to disconnect your NFS-client from the NFS-server when the directory has not been accessed for some time:
 
<syntaxhighlight lang="nix">
{
  fileSystems."/mnt/tomoyo" = {
    # ...
    options = [ "x-systemd.idle-timeout=600" ]; # disconnects after 10 minutes (i.e. 600 seconds)
  };
}
</syntaxhighlight>
 
== Using systemd.mounts and systemd.automounts ==
 
This section provides an alternative approach for users who prefer to manage mounts using dedicated systemd units. Here is an example with auto-disconnecting and lazy-mounting implemented, and the <code>noatime</code> mount option added.
 
Note that <code>wantedBy = [ "multi-user.target" ];</code> is required for the automount unit to start at boot.
 
Also note that <code>x-systemd</code> mount options are unneeded, as they are a representation of systemd options in <code>fstab(5)</code> format. They get parsed and converted to unit files by <code>systemd-fstab-generator(8)</code> as mentioned in <code>systemd.mount(5)</code>.
 
<syntaxhighlight lang="nix">
{
  services.rpcbind.enable = true; # needed for NFS
  systemd.mounts = [{
    type = "nfs";
    mountConfig = {
      Options = "noatime";
    };
    what = "server:/tomoyo";
    where = "/mnt/tomoyo";
  }];
 
  systemd.automounts = [{
    wantedBy = [ "multi-user.target" ];
    automountConfig = {
      TimeoutIdleSec = "600";
    };
    where = "/mnt/tomoyo";
  }];
}
</syntaxhighlight>
 
Multiple mounts with the exact same options can benefit from abstraction.
 
<syntaxhighlight lang="nix">
{
  services.rpcbind.enable = true; # needed for NFS
  systemd.mounts = let commonMountOptions = {
    type = "nfs";
    mountConfig = {
      Options = "noatime";
    };
  };
 
  in
 
  [
    (commonMountOptions // {
      what = "server:/tomoyo";
      where = "/mnt/tomoyo";
    })
 
    (commonMountOptions // {
      what = "server:/kotomi";
      where = "/mnt/kotomi";
    })
  ];
 
  systemd.automounts = let commonAutoMountOptions = {
    wantedBy = [ "multi-user.target" ];
    automountConfig = {
      TimeoutIdleSec = "600";
    };
  };
 
  in
 
  [
    (commonAutoMountOptions // { where = "/mnt/tomoyo"; })
    (commonAutoMountOptions // { where = "/mnt/kotomi"; })
  ];
}
</syntaxhighlight>


= Nix store on NFS =


In a single-user setup ('''not on NixOS''') the Nix store can also be exported over NFS (common in HPC clusters) to share packages over the network. The only requirement is to also pass <code>local_lock=flock</code> or <code>local_lock=all</code> as a mount option, to allow Nix to take locks on modifications. Example entry in <code>fstab</code>:


<syntaxhighlight lang="console"><host_or_ip>:/nix /nix nfs nofail,x-systemd.device-timeout=4,local_lock=all 0 0</syntaxhighlight>

'''TODO:''' Why this? That seems extremely unsafe. This disables NFS locks (which apply to all NFS clients) and makes locks ''local'', meaning a lock taken by one NFS client isn't seen by another, and both can take their locks. So this removes protection against concurrent writes, which Nix assumes.
[[Category:Filesystem]]
[[Category:Networking]]
