Ollama: Difference between revisions

m Add verbose flag hint
This has been changed so that it actually overrides the proper env vars. The last format shouldn't work, as the convention is to use 10.3.0 for other ROCm devices.
 
(4 intermediate revisions by one other user not shown)
Line 13:
<syntaxhighlight lang="nix">
services.ollama = {
  enable = true;
  # Optional: preload models, see https://ollama.com/library
  loadModels = [ "llama3.2:3b" "deepseek-r1:1.5b" ];
};
</syntaxhighlight>
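
Once the service is running, you can verify that the preloaded models are actually available. A minimal check, assuming the model names from the example above:
<syntaxhighlight lang="bash">
# List the models Ollama has available locally
$ ollama list
# Run a one-off prompt against one of the preloaded models
$ ollama run llama3.2:3b "Say hello"
</syntaxhighlight>
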
Line 87:

== Usage via web API ==
Other software can use the web API (by default at http://localhost:11434) to query Ollama. This works well, e.g., in IntelliJ IDEs with the "ProxyAI" and "Ollama Commit Summarizer" plugins.
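
For example, you can query the API directly with curl. A minimal sketch; the model name is an assumption and must match a model you have actually pulled:
<syntaxhighlight lang="bash">
# Request a completion; Ollama streams the response back as JSON lines
$ curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Why is the sky blue?"
}'
</syntaxhighlight>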


Alternatively, when "open-webui" is enabled, a web portal is available at http://localhost:8080/:
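A minimal sketch of the corresponding configuration, assuming the standard services.open-webui module:
<syntaxhighlight lang="nix">
services.open-webui.enable = true; # serves http://localhost:8080/ by default
</syntaxhighlight>
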
Line 95:

=== AMD GPU with open source driver ===

In certain cases, Ollama might not allow your system to use GPU acceleration if it cannot verify that your GPU/driver combination is compatible.

However, you can attempt to force-enable GPU usage by overriding the LLVM target.<ref>https://github.com/ollama/ollama/blob/main/docs/gpu.md#overrides</ref>
Line 102:

<syntaxhighlight lang="bash">
# classical
$ nix-shell -p "rocmPackages.rocminfo" --run "rocminfo" | grep "gfx"
  Name:                    gfx1031
# flakes
$ nix run nixpkgs#rocmPackages.rocminfo | grep "gfx"
  Name:                    gfx1031
</syntaxhighlight>

In this example the LLVM target is "gfx1031", i.e. version "10.3.1". You can then override that value for the Ollama systemd service, using the closest officially supported version (here "10.3.0"):
<syntaxhighlight lang="nix">
services.ollama = {
  environmentVariables = {
    HCC_AMDGPU_TARGET = "gfx1031"; # used to be necessary, but doesn't seem to be anymore
  };
  # results in environment variable "HSA_OVERRIDE_GFX_VERSION=10.3.0"
  rocmOverrideGfx = "10.3.0";
};
</syntaxhighlight>
or via an environment variable when launching the standalone binary:
<syntaxhighlight lang="bash">
HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve
</syntaxhighlight>
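
To confirm that the override reached the service, you can inspect the unit's environment. A minimal check, assuming the systemd unit is named "ollama":
<syntaxhighlight lang="bash">
# Show the environment variables systemd passes to the Ollama unit
$ systemctl show -p Environment ollama.service
Environment=HSA_OVERRIDE_GFX_VERSION=10.3.0 ...
</syntaxhighlight>
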
If there are still errors, you can attempt to set a similar value from those listed [https://github.com/ollama/ollama/blob/main/docs/gpu.md#overrides here].