CUDA
Latest revision as of 10:14, 19 November 2025
NixOS supports using NVIDIA GPUs for pure computing purposes, not just for graphics. For example, many users rely on NixOS for machine learning both locally and on cloud instances. These use cases are supported by the @NixOS/cuda-maintainers team on GitHub (project board). If you have an issue using your NVIDIA GPU for computing purposes, open an issue on GitHub and tag @NixOS/cuda-maintainers.
cudatoolkit, cudnn, and related packages
The CUDA toolkit is available in a number of different versions. Please use the latest major version. You can see where they're defined in nixpkgs here.
Several "CUDA-X" libraries are packaged as well; in particular, cudnn is available as cudaPackages.cudnn.
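As a sketch of pinning a major version, nixpkgs exposes versioned cudaPackages sets alongside the default one (the exact attribute names, such as cudaPackages_12 below, depend on your nixpkgs revision — check nixpkgs if it does not evaluate):

```nix
# shell.nix — assumes the versioned set cudaPackages_12 exists
# in your nixpkgs revision; CUDA packages require allowUnfree.
{ pkgs ? import <nixpkgs> { config.allowUnfree = true; } }:

pkgs.mkShell {
  buildInputs = with pkgs; [
    cudaPackages_12.cudatoolkit  # pinned CUDA 12.x toolkit
    cudaPackages_12.cudnn        # cuDNN built against the same CUDA release
  ];
}
```

Mixing libraries from different cudaPackages sets in one environment generally does not work; take cudatoolkit, cudnn, and friends from the same set.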
There are several ways to set up a development environment using CUDA on NixOS:
- By making an FHS user env
# cuda-fhs.nix, run with `nix-shell cuda-fhs.nix`
{ pkgs ? import <nixpkgs> {} }:
let
# Change according to the driver used: stable, beta
nvidiaPackage = pkgs.linuxPackages.nvidiaPackages.stable;
in
(pkgs.buildFHSEnv {
name = "cuda-env";
targetPkgs = pkgs: with pkgs; [
git
gitRepo
gnupg
autoconf
curl
procps
gnumake
util-linux
m4
gperf
unzip
cudatoolkit
nvidiaPackage
libGLU libGL
xorg.libXi xorg.libXmu freeglut
xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib
ncurses5
stdenv.cc
binutils
];
multiPkgs = pkgs: with pkgs; [ zlib ];
runScript = "bash";
profile = ''
export CUDA_PATH=${pkgs.cudatoolkit}
# export LD_LIBRARY_PATH=${nvidiaPackage}/lib
export EXTRA_LDFLAGS="-L/lib -L${nvidiaPackage}/lib"
export EXTRA_CCFLAGS="-I/usr/include"
'';
}).env
- By making a nix-shell
# cuda-shell.nix, run with `nix-shell cuda-shell.nix`
{ pkgs ? import <nixpkgs> {} }:
let
nvidiaPackage = pkgs.linuxPackages.nvidiaPackages.stable;
in
pkgs.mkShell {
name = "cuda-env-shell";
buildInputs = with pkgs; [
git gitRepo gnupg autoconf curl
procps gnumake util-linux m4 gperf unzip
cudatoolkit nvidiaPackage
libGLU libGL
xorg.libXi xorg.libXmu freeglut
xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib
ncurses5 stdenv.cc binutils
];
shellHook = ''
export CUDA_PATH=${pkgs.cudatoolkit}
# export LD_LIBRARY_PATH=${nvidiaPackage}/lib:${pkgs.ncurses}/lib
export EXTRA_LDFLAGS="-L/lib -L${nvidiaPackage}/lib"
export EXTRA_CCFLAGS="-I/usr/include"
'';
}
- By making a flake.nix
# flake.nix, run with `nix develop`
{
description = "CUDA development environment";
outputs = {
self,
nixpkgs,
}: let
system = "x86_64-linux";
pkgs = import nixpkgs {
inherit system;
config.allowUnfree = true;
config.cudaSupport = true;
config.cudaVersion = "12";
};
# Change according to the driver used: stable, beta
nvidiaPackage = pkgs.linuxPackages.nvidiaPackages.stable;
in {
# alejandra is a nix formatter with a beautiful output
formatter."${system}" = nixpkgs.legacyPackages.${system}.alejandra;
devShells.${system}.default = pkgs.mkShell {
buildInputs = with pkgs; [
ffmpeg
fmt.dev
cudaPackages.cuda_cudart
cudatoolkit
nvidiaPackage
cudaPackages.cudnn
libGLU
libGL
xorg.libXi
xorg.libXmu
freeglut
xorg.libXext
xorg.libX11
xorg.libXv
xorg.libXrandr
zlib
ncurses
stdenv.cc
binutils
uv
];
shellHook = ''
export LD_LIBRARY_PATH="${nvidiaPackage}/lib:$LD_LIBRARY_PATH"
export CUDA_PATH=${pkgs.cudatoolkit}
export EXTRA_LDFLAGS="-L/lib -L${nvidiaPackage}/lib"
export EXTRA_CCFLAGS="-I/usr/include"
export CMAKE_PREFIX_PATH="${pkgs.fmt.dev}:$CMAKE_PREFIX_PATH"
export PKG_CONFIG_PATH="${pkgs.fmt.dev}/lib/pkgconfig:$PKG_CONFIG_PATH"
'';
};
};
}
Setting up CUDA Binary Cache
The binary cache contains pre-built CUDA packages. By adding it to your system, Nix will fetch these packages instead of building them, saving valuable time and processing power.
For more information, refer to the Using a binary cache page.
NixOS
Add the cache to substituters and trusted-public-keys inside your system configuration:
nix.settings = {
substituters = [
"https://cache.nixos-cuda.org"
];
trusted-public-keys = [
"cache.nixos-cuda.org:74DUi4Ye579gUqzH4ziL9IyiJBlDpMRn9MBN8oNan9M="
];
};
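If your system configuration is flake-based, the same settings can also be declared in the flake itself via nixConfig, so collaborators get prompted to enable the cache when they first use the flake; a minimal sketch:

```nix
# flake.nix (fragment) — Nix asks for confirmation before
# trusting substituters advertised by a flake.
{
  nixConfig = {
    extra-substituters = [ "https://cache.nixos-cuda.org" ];
    extra-trusted-public-keys = [
      "cache.nixos-cuda.org:74DUi4Ye579gUqzH4ziL9IyiJBlDpMRn9MBN8oNan9M="
    ];
  };
  # inputs and outputs as usual
}
```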
Non-NixOS
On non-NixOS systems, add the cache to trusted-substituters and trusted-public-keys in /etc/nix/nix.conf:
trusted-public-keys = cache.nixos-cuda.org:74DUi4Ye579gUqzH4ziL9IyiJBlDpMRn9MBN8oNan9M=
trusted-substituters = https://cache.nixos-cuda.org
trusted-users = root @wheel
If your user is in trusted-users, you can also add the cache in your home directory:
trusted-public-keys = cache.nixos-cuda.org:74DUi4Ye579gUqzH4ziL9IyiJBlDpMRn9MBN8oNan9M=
trusted-substituters = https://cache.nixos-cuda.org
Some things to keep in mind when setting up CUDA in NixOS
- Some GPUs, like the Tesla K80, don't work with the latest drivers, so you must select an older driver via the hardware.nvidia.package option, taking the value from your selected kernel, for example config.boot.kernelPackages.nvidia_x11_legacy470. You can check which driver version your GPU supports on the NVIDIA driver download site.
- Even with the drivers correctly installed, some software, like Blender, may not see the CUDA GPU. Make sure your system configuration has the hardware.opengl.enable option enabled.
- By default, software packaged in source code form has CUDA support disabled because of the unfree license. To solve this, you can enable builds with CUDA support with a nixpkgs-wide configuration, or use binary-packaged CUDA-compatible software such as blender-bin.
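The nixpkgs-wide configuration mentioned in the last point can be sketched as follows in a NixOS configuration (the flake example earlier sets the same flags when importing nixpkgs; note this causes many packages to be rebuilt from source unless the binary cache above is enabled):

```nix
# /etc/nixos/configuration.nix (fragment)
{
  nixpkgs.config = {
    allowUnfree = true;  # CUDA is unfree software
    cudaSupport = true;  # build packages with CUDA support where available
  };
}
```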
CUDA under WSL
This (surprisingly) works just fine with nixpkgs 23.05, provided that you prefix LD_LIBRARY_PATH in your interactive environment with the WSL library directory. For a nix-shell this looks like:
shellHook = ''
export CUDA_PATH=${pkgs.cudatoolkit}
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:${pkgs.linuxPackages.nvidia_x11}/lib:${pkgs.ncurses5}/lib
export EXTRA_LDFLAGS="-L/lib -L${pkgs.linuxPackages.nvidia_x11}/lib"
export EXTRA_CCFLAGS="-I/usr/include"
'';