CUDA

NixOS supports using NVIDIA GPUs for pure computing purposes, not just for graphics. For example, many users rely on NixOS for machine learning, both locally and on cloud instances. These use cases are supported by the @NixOS/cuda-maintainers team on GitHub. If you have an issue using your NVIDIA GPU for computing purposes, open an issue on GitHub and tag @NixOS/cuda-maintainers.

Cache: Using the cuda-maintainers cache is recommended! It will save you valuable time and electrons. Getting set up should be as simple as cachix use cuda-maintainers.
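
On a NixOS system the same cache can also be wired in declaratively. The fragment below is a minimal sketch, assuming the standard <name>.cachix.org substituter convention; the public key is a placeholder that must be replaced with the key printed by cachix use cuda-maintainers.

# Hypothetical configuration.nix fragment: enable the cuda-maintainers cache
# declaratively instead of via the cachix CLI.
{
  nix.settings = {
    substituters = [ "https://cuda-maintainers.cachix.org" ];
    # Placeholder: use the real public key printed by `cachix use cuda-maintainers`.
    trusted-public-keys = [ "cuda-maintainers.cachix.org-1:<public-key>" ];
  };
}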

Data center GPUs: Note that you may need to adjust your driver version to use "data center" GPUs like V100/A100s. See this thread for more info.
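
As a sketch of what adjusting the driver looks like, the NixOS option hardware.nvidia.package selects the driver build. Which nvidiaPackages attribute suits a data-center card depends on your nixpkgs revision, so the stable attribute below is only a placeholder; check boot.kernelPackages.nvidiaPackages for alternatives.

# Hypothetical configuration.nix fragment: pin the NVIDIA driver variant
# explicitly. Swap `stable` for an attribute appropriate to your GPU.
{ config, ... }:
{
  hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.stable;
}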

cudatoolkit, cudnn, and related packages

The CUDA toolkit is available in a number of different versions. Please use the latest major version. You can see where they're defined in nixpkgs here.
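
For example, a shell.nix pinning a toolkit major version might look like the sketch below. The cudaPackages_11 attribute name is an assumption that varies across nixpkgs revisions (older channels exposed cudatoolkit_11_x style attributes instead), and CUDA is unfree, so allowUnfree must be enabled.

# A minimal sketch, not an authoritative recipe: pick a CUDA toolkit major
# version from one of the cudaPackages sets in nixpkgs.
{ pkgs ? import <nixpkgs> { config.allowUnfree = true; } }:
pkgs.mkShell {
  packages = [ pkgs.cudaPackages_11.cudatoolkit ];
}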

Several "CUDA-X" libraries are packages as well. In particular,

  • cuDNN is packaged here.
  • cuTENSOR is packaged here.
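
A sketch of pulling these into a development shell, assuming the cudaPackages attribute names used by current nixpkgs; taking cudnn and cutensor from the same set as the toolkit keeps all three on the same CUDA version.

# A minimal sketch: cuDNN and cuTENSOR from the same cudaPackages set as the
# toolkit, so all three target a consistent CUDA version.
{ pkgs ? import <nixpkgs> { config.allowUnfree = true; } }:
pkgs.mkShell {
  packages = with pkgs.cudaPackages; [ cudatoolkit cudnn cutensor ];
}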

Note that the examples below haven't been updated in a while (as of 2022-03-12) and may not be the best solution; the packaging of the CUDA sample code here is likely a better resource.

There are several possible ways to set up a development environment using CUDA on NixOS:

  • By making an FHS user env
cuda-fhs.nix
{ pkgs ? import <nixpkgs> {} }:

let
  # Build an FHS-compatible environment so tools that expect a conventional
  # Linux filesystem layout (e.g. /usr/lib) can find the CUDA libraries.
  fhs = pkgs.buildFHSUserEnv {
    name = "cuda-env";
    targetPkgs = pkgs: with pkgs; [
      git
      gitRepo
      gnupg
      autoconf
      curl
      procps
      gnumake
      utillinux
      m4
      gperf
      unzip
      cudatoolkit
      linuxPackages.nvidia_x11
      libGLU libGL
      xorg.libXi xorg.libXmu freeglut
      xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib
      ncurses5
      stdenv.cc
      binutils
    ];
    multiPkgs = pkgs: with pkgs; [ zlib ];
    runScript = "bash";
    profile = ''
      export CUDA_PATH=${pkgs.cudatoolkit}
      # export LD_LIBRARY_PATH=${pkgs.linuxPackages.nvidia_x11}/lib
      export EXTRA_LDFLAGS="-L/lib -L${pkgs.linuxPackages.nvidia_x11}/lib"
      export EXTRA_CCFLAGS="-I/usr/include"
    '';
  };
in pkgs.stdenv.mkDerivation {
  name = "cuda-env-shell";
  nativeBuildInputs = [ fhs ];
  # Entering the nix-shell execs straight into the FHS environment.
  shellHook = "exec cuda-env";
}
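
With this file saved as cuda-fhs.nix, running nix-shell cuda-fhs.nix drops you straight into the FHS environment (the shellHook execs cuda-env), where nvcc and the other tools are on PATH. This assumes the matching NVIDIA kernel module is already installed on the host.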


  • By making a nix-shell
cuda-shell.nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  name = "cuda-env-shell";
  buildInputs = with pkgs; [
    git gitRepo gnupg autoconf curl
    procps gnumake utillinux m4 gperf unzip
    cudatoolkit linuxPackages.nvidia_x11
    libGLU libGL
    xorg.libXi xorg.libXmu freeglut
    xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib
    ncurses5 stdenv.cc binutils
  ];
  # Point builds at the toolkit and driver libraries; EXTRA_LDFLAGS and
  # EXTRA_CCFLAGS are conventions honored by e.g. the CUDA sample Makefiles.
  shellHook = ''
    export CUDA_PATH=${pkgs.cudatoolkit}
    # export LD_LIBRARY_PATH=${pkgs.linuxPackages.nvidia_x11}/lib:${pkgs.ncurses5}/lib
    export EXTRA_LDFLAGS="-L/lib -L${pkgs.linuxPackages.nvidia_x11}/lib"
    export EXTRA_CCFLAGS="-I/usr/include"
  '';
}
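
As with the FHS variant, nix-shell cuda-shell.nix enters the environment. Since this is a plain nix-shell rather than an FHS env, prebuilt binaries that expect a conventional filesystem layout may still fail to find their libraries; prefer the FHS approach in that case.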

See also