NixOS supports using NVIDIA GPUs for pure computing purposes, not just for graphics. For example, many users rely on NixOS for machine learning both locally and on cloud instances. These use cases are supported by the [https://github.com/orgs/NixOS/teams/cuda-maintainers @NixOS/cuda-maintainers team] on GitHub ([https://github.com/orgs/NixOS/projects/27 project board]). If you have an issue using your NVIDIA GPU for computing purposes, [https://github.com/NixOS/nixpkgs/issues/new/choose open an issue] on GitHub and tag <code>@NixOS/cuda-maintainers</code>.

{{tip|1='''Cache''': Using the [https://app.cachix.org/cache/nix-community nix-community cache] is recommended! It will save you valuable time and electrons. Getting set up should be as simple as <code>cachix use nix-community</code>. Click [[#Setting up CUDA Binary Cache|here]] for more details.}}

{{tip|1='''Data center GPUs''': Note that you may need to adjust your driver version to use "data center" GPUs like V100/A100s. See [https://discourse.nixos.org/t/how-to-use-nvidia-v100-a100-gpus/17754 this thread] for more info.}}
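For illustration, such a driver change goes through the <code>hardware.nvidia.package</code> option. This is only a hedged sketch: the exact attribute name (<code>dc_535</code> below) is an assumption and varies between nixpkgs revisions, so check which <code>nvidiaPackages</code> branches exist on your channel.

{{file|/etc/nixos/configuration.nix|nix|<nowiki>
# Sketch: select a data-center driver branch instead of the default desktop driver.
# The attribute name "dc_535" is an assumption; inspect
# config.boot.kernelPackages.nvidiaPackages in your nixpkgs revision to see
# which branches actually exist there.
hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.dc_535;
</nowiki>}}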
== <code>cudatoolkit</code>, <code>cudnn</code>, and related packages ==

{{outdated|scope=section|date=July 2024|reason=Note that these examples have been updated more recently (as of 2024-07-30). May not be the best solution. A better resource is likely the packaging CUDA sample code [https://github.com/NixOS/nixpkgs/tree/master/pkgs/development/cuda-modules/cutensor here].}}

The CUDA toolkit is available in a [https://search.nixos.org/packages?channel=unstable&from=0&size=50&buckets=%7B%22package_attr_set%22%3A%5B%22cudaPackages%22%5D%2C%22package_license_set%22%3A%5B%5D%2C%22package_maintainers_set%22%3A%5B%5D%2C%22package_platforms%22%3A%5B%5D%7D&sort=relevance&type=packages&query=cudatoolkit number of different versions]. Please use the latest major version. You can see where they're defined in nixpkgs [https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/cuda-modules/cudatoolkit/releases.nix here].

Several "CUDA-X" libraries are packaged as well (a short usage sketch follows this list). In particular,
* cuDNN is packaged [https://github.com/NixOS/nixpkgs/tree/master/pkgs/development/cuda-modules/cudnn here].
* cuTENSOR is packaged [https://github.com/NixOS/nixpkgs/tree/master/pkgs/development/cuda-modules/cutensor here].
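As a quick illustration (a minimal sketch, not taken from the packaging docs), these libraries are exposed through the <code>cudaPackages</code> set, so a development shell can simply list them:

<syntaxhighlight lang="nix">
# shell.nix -- minimal sketch; run with `nix-shell`.
# Assumes the default cudaPackages version is acceptable; pinned sets such as
# cudaPackages_12 can be substituted if a specific major version is required.
{ pkgs ? import <nixpkgs> { config.allowUnfree = true; } }:

pkgs.mkShell {
  buildInputs = with pkgs; [
    cudaPackages.cudatoolkit
    cudaPackages.cudnn
    cudaPackages.cutensor
  ];
}
</syntaxhighlight>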
There are several ways to set up a development environment using CUDA on NixOS:
* By making an FHS user env
<syntaxhighlight lang="nix" line="1" start="1">
# Run with `nix-shell cuda-fhs.nix`
{ pkgs ? import <nixpkgs> {} }:
let
  # Change according to the driver used: stable, beta
  nvidiaPackage = pkgs.linuxPackages.nvidiaPackages.stable;
in
(pkgs.buildFHSEnv {
  name = "cuda-env";
  targetPkgs = pkgs: with pkgs; [
    git
    gitRepo
    gnupg
    autoconf
    curl
    procps
    gnumake
    util-linux
    m4
    gperf
    unzip
    cudatoolkit
    nvidiaPackage
    libGLU libGL
    xorg.libXi xorg.libXmu freeglut
    xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib
    ncurses5
    stdenv.cc
    binutils
  ];
  multiPkgs = pkgs: with pkgs; [ zlib ];
  runScript = "bash";
  profile = ''
    export CUDA_PATH=${pkgs.cudatoolkit}
    # export LD_LIBRARY_PATH=${nvidiaPackage}/lib
    export EXTRA_LDFLAGS="-L/lib -L${nvidiaPackage}/lib"
    export EXTRA_CCFLAGS="-I/usr/include"
  '';
}).env
</syntaxhighlight>
* By making a nix-shell
<syntaxhighlight lang="nix" line="1" start="1">
# Run with `nix-shell cuda-shell.nix`
{ pkgs ? import <nixpkgs> {} }:
let
  nvidiaPackage = pkgs.linuxPackages.nvidiaPackages.stable;
in
pkgs.mkShell {
  name = "cuda-env-shell";
  buildInputs = with pkgs; [
    git gitRepo gnupg autoconf curl
    procps gnumake util-linux m4 gperf unzip
    cudatoolkit nvidiaPackage
    libGLU libGL
    xorg.libXi xorg.libXmu freeglut
    xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib
    ncurses5 stdenv.cc binutils
  ];
  shellHook = ''
    export CUDA_PATH=${pkgs.cudatoolkit}
    # export LD_LIBRARY_PATH=${nvidiaPackage}/lib:${pkgs.ncurses}/lib
    export EXTRA_LDFLAGS="-L/lib -L${nvidiaPackage}/lib"
    export EXTRA_CCFLAGS="-I/usr/include"
  '';
}
</syntaxhighlight>
* By making a flake.nix
<syntaxhighlight lang="nix" line="1" start="1">
# flake.nix, run with `nix develop`
{
  description = "CUDA development environment";

  outputs = {
    self,
    nixpkgs,
  }: let
    system = "x86_64-linux";
    pkgs = import nixpkgs {
      inherit system;
      config.allowUnfree = true;
      config.cudaSupport = true;
      config.cudaVersion = "12";
    };
    # Change according to the driver used: stable, beta
    nvidiaPackage = pkgs.linuxPackages.nvidiaPackages.stable;
  in {
    # alejandra is a nix formatter with a beautiful output
    formatter."${system}" = nixpkgs.legacyPackages.${system}.alejandra;
    devShells.${system}.default = pkgs.mkShell {
      buildInputs = with pkgs; [
        ffmpeg
        fmt.dev
        cudaPackages.cuda_cudart
        cudatoolkit
        nvidiaPackage
        cudaPackages.cudnn
        libGLU
        libGL
        xorg.libXi
        xorg.libXmu
        freeglut
        xorg.libXext
        xorg.libX11
        xorg.libXv
        xorg.libXrandr
        zlib
        ncurses
        stdenv.cc
        binutils
        uv
      ];
      shellHook = ''
        export LD_LIBRARY_PATH="${nvidiaPackage}/lib:$LD_LIBRARY_PATH"
        export CUDA_PATH=${pkgs.cudatoolkit}
        export EXTRA_LDFLAGS="-L/lib -L${nvidiaPackage}/lib"
        export EXTRA_CCFLAGS="-I/usr/include"
        export CMAKE_PREFIX_PATH="${pkgs.fmt.dev}:$CMAKE_PREFIX_PATH"
        export PKG_CONFIG_PATH="${pkgs.fmt.dev}/lib/pkgconfig:$PKG_CONFIG_PATH"
      '';
    };
  };
}
</syntaxhighlight>
== Setting up CUDA Binary Cache ==

The [https://nix-community.org/cache/ nix-community cache] contains pre-built CUDA packages. By adding it to your system, Nix will fetch these packages instead of building them, saving valuable time and processing power.

For more information, refer to the [[Binary Cache#Using a binary cache|Using a binary cache]] page.

{{warning|1=You need to rebuild your system at least once after adding the cache before it can be used.}}
=== NixOS ===

Add the cache to <code>substituters</code> and <code>trusted-public-keys</code> inside your system configuration:

{{file|/etc/nixos/configuration.nix|nix|<nowiki>
nix.settings = {
  substituters = [
    "https://nix-community.cachix.org"
  ];
  trusted-public-keys = [
    "nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
  ];
};
</nowiki>}}
=== Non-NixOS ===

If you have [https://www.cachix.org cachix] installed and set up, all you need to do is run:

<syntaxHighlight lang="console">
$ cachix use nix-community
</syntaxHighlight>

Otherwise, add <code>substituters</code> and <code>trusted-public-keys</code> to <code>/etc/nix/nix.conf</code>:

{{file|/etc/nix/nix.conf|nix|<nowiki>
trusted-public-keys = nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs=
trusted-substituters = https://nix-community.cachix.org
trusted-users = root @wheel
</nowiki>}}

If your user is in <code>trusted-users</code>, you can also add the cache in your home directory:

{{file|~/.config/nix/nix.conf|nix|<nowiki>
substituters = https://nix-community.cachix.org
</nowiki>}}
== Some things to keep in mind when setting up CUDA in NixOS ==

* Some GPUs, like the Tesla K80, don't work with the latest drivers, so you must set the option <code>hardware.nvidia.package</code> to a legacy driver from your selected kernel, for example <code>config.boot.kernelPackages.nvidia_x11_legacy470</code>. You can check which driver version your GPU supports on the NVIDIA driver download site.
* Even with the drivers correctly installed, some software, like Blender, may not see the CUDA GPU. Make sure your system configuration has the option <code>hardware.opengl.enable</code> (renamed to <code>hardware.graphics.enable</code> in newer NixOS releases) enabled.
* By default, software packaged in source code form has CUDA support disabled because of the unfree license. To solve this, you can enable builds with CUDA support through a nixpkgs-wide configuration (see the sketch after this list), or use binary-packaged CUDA-compatible software such as [https://github.com/edolstra/nix-warez/tree/master/blender blender-bin].
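As a rough sketch (assumptions noted in the comments), the points above map to system configuration along these lines:

{{file|/etc/nixos/configuration.nix|nix|<nowiki>
# Software such as Blender needs the graphics stack enabled to see the GPU.
# On newer NixOS releases this option is named hardware.graphics.enable.
hardware.opengl.enable = true;

# Only for GPUs that need a legacy driver branch (e.g. Tesla K80):
# hardware.nvidia.package = config.boot.kernelPackages.nvidia_x11_legacy470;

# Build packages from nixpkgs with CUDA support enabled (pulls in unfree bits):
nixpkgs.config = {
  allowUnfree = true;
  cudaSupport = true;
};
</nowiki>}}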
== CUDA under WSL ==

This (surprisingly) works just fine using nixpkgs 23.05, provided that you prefix the <code>LD_LIBRARY_PATH</code> in your interactive environment with the WSL library directory. For a nix-shell this looks like:

{{file|cuda-shell.nix|nix|<nowiki>
shellHook = ''
  export CUDA_PATH=${pkgs.cudatoolkit}
  export LD_LIBRARY_PATH=/usr/lib/wsl/lib:${pkgs.linuxPackages.nvidia_x11}/lib:${pkgs.ncurses5}/lib
  export EXTRA_LDFLAGS="-L/lib -L${pkgs.linuxPackages.nvidia_x11}/lib"
  export EXTRA_CCFLAGS="-I/usr/include"
'';
</nowiki>}}
== See also ==

* [https://github.com/NixOS/nixpkgs/issues/131608 eGPU with nvidia-docker on intel-xserver]
* [https://discourse.nixos.org/t/cuda-in-nixos-on-gcp-for-a-tesla-k80/ Tesla K80 based CUDA setup with Terraform on GCP]

[[Category:Server]]