🧠 Ollama — Installation & Usage Guide (2025)
Ollama lets you run large language models (LLMs) like Llama 3, Mistral, or Gemma locally on your computer — no cloud required.
To run SocKey, the system must have Ollama running with the mistral:7b-instruct model installed.
🖥️ 1. System Requirements
| Platform | Supported | Notes |
|---|---|---|
| macOS | ✅ Native support (Apple Silicon & Intel) | Recommended |
| Windows | ✅ Supported (via installer) | WSL not required |
| Linux | ✅ Native support (install script) | |
Minimum recommended specs:
- 8 GB RAM (16+ GB for large models)
- At least 20 GB free disk space
- GPU optional (CPU-only mode supported)
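Before installing, you can quickly confirm a machine meets these specs from a terminal. A minimal sketch (the RAM command differs between Linux and macOS):

```shell
# Show free disk space for the current filesystem (models need ~20 GB)
df -h .

# Show total RAM: `free` on Linux, `sysctl hw.memsize` (bytes) on macOS
free -h 2>/dev/null || sysctl -n hw.memsize
```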
⚙️ 2. Installation
🧩 macOS
Download and run the installer from https://ollama.com/download, or use Homebrew:
brew install ollama
🐧 Linux
Install with the official script:
curl -fsSL https://ollama.com/install.sh | sh
Then start the Ollama service:
ollama serve
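Once the service is up, you can confirm it is reachable: by default Ollama listens on port 11434 and answers a plain GET with "Ollama is running".

```shell
# Probe the default Ollama port; prints "Ollama is running" when the
# service is up, or a fallback message when it is not
curl -s http://localhost:11434 || echo "Ollama is not reachable on port 11434"
```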
🪟 Windows
- Go to https://ollama.com/download
- Download and run the Windows installer (OllamaSetup.exe)
- After installation, Ollama starts automatically and runs in the background.
💡 You can verify it’s running by opening Command Prompt and typing:
ollama --version
🧰 3. Installing mistral:7b-instruct
Pull & Run in Terminal:
ollama run mistral:7b-instruct
- Downloads ~4.1 GB (Q4_0 quantized by default for speed/efficiency).
- Starts an interactive chat immediately at the >>> prompt; type away!
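Beyond the interactive prompt, `ollama run` also accepts a prompt as a command-line argument for one-shot, scriptable use (the prompt text below is just an example):

```shell
# Non-interactive: prints the model's reply to stdout and exits.
# Assumes mistral:7b-instruct has already been pulled.
ollama run mistral:7b-instruct "Explain quantization in one sentence."
```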
🧭 Quick Summary
| Task | Command |
|---|---|
| Install (Linux) | curl -fsSL https://ollama.com/install.sh \| sh |
| Start service | ollama serve |
| Run model | ollama run llama3 |
| List models | ollama list |
| Pull new model | ollama pull mistral:7b-instruct |
| Use API | http://localhost:11434 |
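The API entry in the table refers to Ollama's local REST endpoint. A minimal sketch of a direct call; with `"stream": false` the reply arrives as a single JSON object rather than a token stream:

```shell
# POST a generation request to the local Ollama server
curl -s http://localhost:11434/api/generate -d '{
  "model": "mistral:7b-instruct",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```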
