Ollama

Ollama is an open-source framework designed to facilitate the deployment of large language models in local environments. It aims to simplify the complexities of running and managing these models, providing a seamless experience across different operating systems.

Setup

Add the following line to your system configuration:

services.ollama.enable = true;
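
The module runs Ollama as a systemd service. To check that the server is up, you can query its root endpoint, which reports that it is running (a minimal check, assuming the upstream default listen address of 127.0.0.1:11434):

$ curl http://127.0.0.1:11434
Ollama is running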

Configuration

Enable GPU acceleration for Nvidia graphics cards:

services.ollama = {
  enable = true;
  acceleration = "cuda";
};
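
For AMD graphics cards, the same option accepts "rocm" instead (see also the troubleshooting section below):

services.ollama = {
  enable = true;
  acceleration = "rocm";
};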

Usage

Download and run the Mistral model as an interactive prompt:

ollama run mistral

For further models, see the Ollama library.
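
Besides the interactive CLI, the running service also exposes an HTTP API. A minimal sketch, assuming the default listen address of 127.0.0.1:11434 and the mistral model from above:

$ curl http://127.0.0.1:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'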

Troubleshooting

AMD GPU with open source driver

In certain cases, Ollama might not allow your system to use GPU acceleration if it cannot verify that your GPU/driver combination is compatible.

However, you can attempt to force-enable GPU usage by overriding the LLVM target. [1]

You can get the LLVM target for your GPU from the Ollama logs, or like so:

$ nix-shell -p "rocmPackages.rocminfo" --run "rocminfo" | grep "gfx"
Name:                    gfx1031

In this example the LLVM target is "gfx1031", i.e. version "10.3.1". You can then override that value for Ollama:

services.ollama = {
  enable = true;
  acceleration = "rocm";
  environmentVariables = {
    HCC_AMDGPU_TARGET = "gfx1031"; # used to be necessary, but doesn't seem to anymore
  };
  rocmOverrideGfx = "10.3.1";
};
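
To verify that the override took effect, you can inspect the service logs for the reported gfx target (assuming the module's default unit name, ollama.service):

$ journalctl -u ollama.service | grep -i gfx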

If there are still errors, you can attempt to set a similar value that is listed here.