Ollama
Revision as of 10:23, 10 May 2024
Ollama is an open-source framework designed to facilitate the deployment of large language models on local environments. It aims to simplify the complexities involved in running and managing these models, providing a seamless experience for users across different operating systems.
Setup
Add the following line to your NixOS configuration:
services.ollama.enable = true;
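After changing the configuration, rebuild and activate the system so the service starts. A minimal sketch of the usual steps (assuming a standard NixOS install; the service name follows the module):

```shell
# Rebuild and activate the new system configuration
sudo nixos-rebuild switch

# Check that the Ollama service is running
systemctl status ollama
```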
Configuration
Enable GPU acceleration for Nvidia graphics cards:
services.ollama = {
enable = true;
acceleration = "cuda";
};
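The acceleration option also accepts a ROCm backend for AMD graphics cards (assuming your card is supported by ROCm):

```nix
services.ollama = {
  enable = true;
  # "rocm" selects the AMD ROCm backend instead of CUDA
  acceleration = "rocm";
};
```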
Usage
Download and run the Mistral LLM as an interactive prompt:
ollama run mistral
For further models, see the Ollama library (https://ollama.ai/library).
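Besides the interactive prompt, the Ollama service exposes a local HTTP API (by default on port 11434), which other programs can call. A minimal sketch in Python using only the standard library; it assumes the service is running locally and the mistral model has already been pulled:

```python
import json
import urllib.request

# Default local endpoint of the Ollama HTTP API
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks for a single JSON response
    # instead of a stream of partial tokens
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # POST the JSON payload and return the model's text response
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Assumes `ollama run mistral` (or `ollama pull mistral`) was run before
    print(generate("mistral", "Why is the sky blue?"))
```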