NVIDIA
Card type
- MXM / output-providing card (shows as VGA Controller in lspci), i.e. the graphics card in a desktop computer or in some laptops
- muxless/non-MXM Optimus cards have no display outputs and show as 3D Controller in lspci output; these are seen in most modern consumer laptops
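To check which type you have, look at the GPU's device class in the lspci output; the following is an illustrative command (the exact controller names will vary per machine):
lspci | grep -E "VGA|3D"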
Non-optimus mode
You need an MXM card. Follow the NVIDIA Graphics Cards section of the official manual; in essence this amounts to enabling the proprietary driver, as sketched below.
In the case of a laptop you may also need to use a BIOS option to select which card drives the internal display.
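A minimal sketch of that configuration (see the manual for details such as legacy driver versions):
services.xserver.videoDrivers = [ "nvidia" ];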
Optimus
Mostly useful for laptops. There are currently two solutions available under NixOS:
Nvidia PRIME
The official solution by NVIDIA. Currently, reverse PRIME does not work. As a consequence, if you have a laptop configuration where the external display ports are wired only to the dedicated GPU, running in offload mode will not let you use those ports for external monitors. If you wish to use external monitors in that case, you have to use sync mode.
offload mode
Currently only in unstable, to be included in 20.09. In this mode the NVIDIA card is only activated on demand when you run a program with specific environment variables set, for example via a wrapper script like the following:
nvidia-offload
#!/usr/bin/env bash
export __NV_PRIME_RENDER_OFFLOAD=1
export __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0
export __GLX_VENDOR_LIBRARY_NAME=nvidia
export __VK_LAYER_NV_optimus=NVIDIA_only
exec -a "$0" "$@"
First, you need to enable the proprietary NVIDIA driver:
/etc/nixos/configuration.nix
{
services.xserver.videoDrivers = [ "nvidia" ];
...
}
Note that on certain laptops, or if you are using a custom kernel version, your NixOS system may have issues finding the primary display. In this case you should also specify modesetting in videoDrivers:
/etc/nixos/configuration.nix
{
services.xserver.videoDrivers = [ "modesetting" "nvidia" ];
...
}
Then you need to set up the bus IDs of the cards as shown below.
Note: the bus ID is important and needs to be formatted properly.
The NVIDIA driver expects the bus ID in decimal format; however, lspci shows bus IDs in hexadecimal.
You can convert the value by
- Converting each number to decimal. For values of 09 and below this simply means stripping the leading zero; values above 09 must actually be converted from hexadecimal to decimal.
- Replacing any full stops with colons.
- Prefixing the final value with "PCI:".
For example:
Output from lspci
09:1f.0 VGA compatible controller: NVIDIA Corporation Device 1f91 (rev a1)
Converted and correct format
PCI:9:31:0
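If you prefer not to convert by hand, the shell can do the hexadecimal-to-decimal conversion for you. A small sketch in bash, using the bus/device/function numbers from the lspci line above:
printf "PCI:%d:%d:%d\n" 0x09 0x1f 0x0
# prints PCI:9:31:0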
A possible configuration is shown below:
/etc/nixos/configuration.nix
{ pkgs, ... }:
let
nvidia-offload = pkgs.writeShellScriptBin "nvidia-offload" ''
export __NV_PRIME_RENDER_OFFLOAD=1
export __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0
export __GLX_VENDOR_LIBRARY_NAME=nvidia
export __VK_LAYER_NV_optimus=NVIDIA_only
exec -a "$0" "$@"
'';
in
{
environment.systemPackages = [ nvidia-offload ];
services.xserver.videoDrivers = [ "nvidia" ];
hardware.nvidia.prime.offload.enable = true;
hardware.nvidia.prime = {
# Bus ID of the Intel GPU. You can find it using lspci, either under 3D or VGA
intelBusId = "PCI:0:2:0";
# Bus ID of the NVIDIA GPU. You can find it using lspci, either under 3D or VGA
nvidiaBusId = "PCI:1:0:0";
};
}
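With this configuration in place, you can run individual programs on the NVIDIA GPU through the wrapper. For example (assuming glxinfo is available, e.g. from pkgs.glxinfo):
nvidia-offload glxinfo | grep "OpenGL renderer"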
sync mode
In this mode the NVIDIA card is turned on constantly, which has an impact on laptop battery life and longevity.
Possible issues:
- Hangs of applications after resume from suspend
- Wrong DPI calculation (in this case, provide the DPI manually):
services.xserver.dpi = 96;
- Black screen after system upgrade
- No video playback acceleration available (VA-API)
An example of a final configuration is shown below:
/etc/nixos/configuration.nix
{
services.xserver.videoDrivers = [ "modesetting" "nvidia" ];
hardware.nvidia.optimus_prime.enable = true;
# Bus ID of the NVIDIA GPU. You can find it using lspci, either under 3D or VGA
hardware.nvidia.optimus_prime.nvidiaBusId = "PCI:1:0:0";
# Bus ID of the Intel GPU. You can find it using lspci, either under 3D or VGA
hardware.nvidia.optimus_prime.intelBusId = "PCI:0:2:0";
}
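Note that on unstable (and therefore in 20.09) the optimus_prime options have been renamed under hardware.nvidia.prime; there the equivalent sync configuration is expected to look like this:
hardware.nvidia.prime.sync.enable = true;
hardware.nvidia.prime.nvidiaBusId = "PCI:1:0:0";
hardware.nvidia.prime.intelBusId = "PCI:0:2:0";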
Bumblebee
A deprecated solution; you should use offload mode instead.
Enable it with the option
hardware.bumblebee.enable = true;
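Programs are then run on the discrete GPU through Bumblebee's wrappers, for example (assuming the module puts the bumblebee package, which provides optirun and primusrun, in your PATH, and that glxinfo is installed):
optirun glxinfo | grep "OpenGL renderer"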
non-NixOS case
- The nixGL project provides a wrapper to use GL drivers outside of NixOS. You need to have the NVIDIA drivers installed through your distribution (for the kernel modules). Then supply the NVIDIA driver version used on the host system to nixGL.
CUDA
There are several possible ways to set up a development environment using CUDA on NixOS. This can be accomplished in the following ways:
- By making an FHS user env
cuda-fhs.nix
{ pkgs ? import <nixpkgs> {} }:
let fhs = pkgs.buildFHSUserEnv {
name = "cuda-env";
targetPkgs = pkgs: with pkgs;
[ git
gitRepo
gnupg
autoconf
curl
procps
gnumake
utillinux
m4
gperf
unzip
cudatoolkit
linuxPackages.nvidia_x11
libGLU_combined
xorg.libXi xorg.libXmu freeglut
xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib
ncurses5
stdenv.cc
binutils
];
multiPkgs = pkgs: with pkgs; [ zlib ];
runScript = "bash";
profile = ''
export CUDA_PATH=${pkgs.cudatoolkit}
# export LD_LIBRARY_PATH=${pkgs.linuxPackages.nvidia_x11}/lib
export EXTRA_LDFLAGS="-L/lib -L${pkgs.linuxPackages.nvidia_x11}/lib"
export EXTRA_CCFLAGS="-I/usr/include"
'';
};
in pkgs.stdenv.mkDerivation {
name = "cuda-env-shell";
nativeBuildInputs = [ fhs ];
shellHook = "exec cuda-env";
}
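Entering the environment is then a matter of running nix-shell on the file; the shellHook execs into the FHS environment (named cuda-env above), so you land directly in its bash. Assuming the file is saved as cuda-fhs.nix:
nix-shell cuda-fhs.nix
# now inside the FHS environment:
nvcc --version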
- By making a nix-shell
cuda-shell.nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.stdenv.mkDerivation {
name = "cuda-env-shell";
buildInputs = with pkgs;
[ git gitRepo gnupg autoconf curl
procps gnumake utillinux m4 gperf unzip
cudatoolkit linuxPackages.nvidia_x11
libGLU_combined
xorg.libXi xorg.libXmu freeglut
xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib
ncurses5 stdenv.cc binutils
];
shellHook = ''
export CUDA_PATH=${pkgs.cudatoolkit}
# export LD_LIBRARY_PATH=${pkgs.linuxPackages.nvidia_x11}/lib:${pkgs.ncurses5}/lib
export EXTRA_LDFLAGS="-L/lib -L${pkgs.linuxPackages.nvidia_x11}/lib"
export EXTRA_CCFLAGS="-I/usr/include"
'';
}
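Usage is analogous; for a quick one-off check you can also let nix-shell run a single command. A small usage sketch, assuming the file is saved as cuda-shell.nix:
nix-shell cuda-shell.nix --run "nvcc --version"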