# Ollama - Installation & Usage Guide (2025)
Ollama lets you run large language models (LLMs) such as Llama 3, Mistral, or Gemma locally on your computer, with no cloud required.
To run SocKey, the system must have Ollama running with the mistral:7b-instruct model installed.
## 1. System Requirements
| Platform | Supported | Notes |
|---|---|---|
| macOS | Yes (native, Apple Silicon & Intel) | Recommended |
| Windows | Yes (via installer) | WSL not required |
Minimum recommended specs:
- 8 GB RAM (16+ GB for large models)
- At least 20 GB free disk space
- GPU optional (CPU-only mode supported)
## 2. Installation
### macOS / Linux
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
Or, on macOS, use Homebrew:
```bash
brew install ollama
```
Then start the Ollama service:
```bash
ollama serve
```
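If you want a quick sanity check that the service is reachable, one option is to query the local API. This is only a sketch; it assumes the default listen address of http://localhost:11434 and that `curl` is available.

```bash
# List installed models via the local Ollama API.
# A JSON response here means the service is up; assumes the default port 11434.
curl http://localhost:11434/api/tags
```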
### Windows
- Go to https://ollama.com/download
- Download and run the Windows installer (OllamaSetup.exe).
- After installation, Ollama starts automatically and runs in the background.
You can verify it's running by opening Command Prompt and typing:
```bash
ollama --version
```
## 3. Installing mistral:7b-instruct
Pull & Run in Terminal:
```bash
ollama run mistral:7b-instruct
```
- Downloads ~4.1 GB (Q4_0 quantized by default for speed/efficiency).
- Starts an interactive chat immediately: type at the `>>>` prompt. To call the model from code instead, see the API sketch below.
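If you prefer scripted calls over the interactive prompt, the sketch below sends a single request through Ollama's local HTTP API. It assumes the default endpoint (http://localhost:11434) and the mistral:7b-instruct model pulled above; the prompt text is only an example.

```bash
# Send one prompt to mistral:7b-instruct via the /api/generate endpoint.
# "stream": false returns a single JSON object instead of a streamed response.
curl http://localhost:11434/api/generate -d '{
  "model": "mistral:7b-instruct",
  "prompt": "Explain what a local LLM is in one sentence.",
  "stream": false
}'
```

Keeping `"stream": false` makes the response easier to parse in simple scripts; omit it if you want token-by-token streaming.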
## Quick Summary
| Task | Command |
|---|---|
| Install (mac/Linux) | `curl -fsSL https://ollama.com/install.sh \| sh` |
| Start service | `ollama serve` |
| Run model | `ollama run llama3` |
| List models | `ollama list` |
| Pull new model | `ollama pull mistral:7b-instruct` |
| Use API | `http://localhost:11434` (see example below) |
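As a companion to the "Use API" row above, here is a minimal chat-style request. It is only a sketch: it assumes the default port and the mistral:7b-instruct model installed in section 3, and the message content is purely illustrative.

```bash
# Minimal chat-style request against the local Ollama API (/api/chat).
# Assumes the default endpoint and the mistral:7b-instruct model from section 3.
curl http://localhost:11434/api/chat -d '{
  "model": "mistral:7b-instruct",
  "messages": [
    {"role": "user", "content": "Say hello in one short sentence."}
  ],
  "stream": false
}'
```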
