Ollama
Download and run the Mistral model as an interactive prompt:<syntaxhighlight lang="bash">
ollama run mistral
</syntaxhighlight>For further models, see the [https://ollama.ai/library Ollama library].
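Besides the interactive prompt, the running Ollama server also answers requests over its local HTTP API (port 11434 by default). A minimal sketch of a one-off generation request with <code>curl</code>, assuming the <code>mistral</code> model has already been pulled:
<syntaxhighlight lang="bash">
# Ask the local Ollama server for a single, non-streamed completion
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
</syntaxhighlight>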
== Troubleshooting ==
=== AMD GPU with open source driver ===
In some cases Ollama will not enable GPU acceleration if it cannot verify that your GPU and driver are compatible.
However, you can attempt to force-enable GPU usage by overriding the LLVM target. <ref>https://github.com/ollama/ollama/blob/main/docs/gpu.md#overrides</ref>
You can find the LLVM target for your GPU in the Ollama logs, or query it like so:
<syntaxhighlight lang="bash">
$ nix-shell -p "rocmPackages.rocminfo" --run "rocminfo" | grep "gfx"
  Name: gfx1031
</syntaxhighlight>
In this example the LLVM target is "gfx1031", i.e. version "10.3.1". You can then override that value for Ollama:
<syntaxhighlight lang="nix">
services.ollama = {
  enable = true;
  acceleration = "rocm";
  environmentVariables = {
    HCC_AMDGPU_TARGET = "gfx1031"; # used to be necessary, but no longer seems to be
  };
  rocmOverrideGfx = "10.3.1";
};
</syntaxhighlight>
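After switching to the new configuration, you can check whether the override took effect by looking at the service logs. A sketch, assuming the NixOS module registers the service as <code>ollama.service</code>:
<syntaxhighlight lang="bash">
# Rebuild, then look for ROCm/gfx lines in Ollama's startup log
$ sudo nixos-rebuild switch
$ journalctl -u ollama.service | grep -i -E "rocm|gfx"
</syntaxhighlight>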
If there are still errors, you can attempt to set a similar value from those listed [https://github.com/ollama/ollama/blob/main/docs/gpu.md#overrides here].
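The overrides documentation linked above describes the same mechanism via ROCm's <code>HSA_OVERRIDE_GFX_VERSION</code> environment variable, so a candidate value can also be tried manually before committing it to your configuration. A sketch, assuming <code>ollama</code> is available in your shell:
<syntaxhighlight lang="bash">
# Try an override value for a single run of the Ollama server
$ HSA_OVERRIDE_GFX_VERSION="10.3.1" ollama serve
</syntaxhighlight>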
[[Category:Server]]