🧠 Ollama: Installation & Usage Guide (2025)

Ollama lets you run large language models (LLMs) like Llama 3, Mistral, or Gemma locally on your computer, with no cloud required.

To run SocKey, the system must have Ollama running with the mistral:7b-instruct model installed.
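Before launching SocKey, you can confirm both requirements from a terminal. A minimal pre-flight sketch, assuming Ollama's default setup (on Windows, swap grep for findstr):

ollama --version                          # confirms the Ollama CLI is installed
ollama list | grep mistral:7b-instruct    # confirms the server is reachable and the model is pulled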


🖥️ 1. System Requirements

| Platform | Supported | Notes |
| --- | --- | --- |
| macOS | ✅ Native support (Apple Silicon & Intel) | Recommended |
| Windows | ✅ Supported (via installer) | WSL not required |

Minimum recommended specs (a quick way to check yours follows the list):

  • 8 GB RAM (16+ GB for large models)
  • At least 20 GB free disk space
  • GPU optional (CPU-only mode supported)
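
Not sure whether your machine qualifies? A quick check from the terminal (macOS commands shown; Linux equivalents in the comments):

sysctl -n hw.memsize    # total RAM in bytes (Linux: free -h)
df -h ~                 # free disk space on your home volume (same command on Linux)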

βš™οΈ 2. Installation

🧩 macOS / Linux

On macOS, install via Homebrew (or download the app bundle from https://ollama.com/download):

brew install ollama

On Linux, use the official install script (note: the script itself runs on Linux only):

curl -fsSL https://ollama.com/install.sh | sh

Then start the Ollama service:

ollama serve
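
Once the service is up, it listens on port 11434 by default; the root endpoint returns a short status string you can use as a health check:

curl http://localhost:11434/
# Expected output: "Ollama is running"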


🪟 Windows

  1. Go to https://ollama.com/download
  2. Download and run the Windows installer (OllamaSetup.exe)
  3. After installation, Ollama will automatically start running in the background.

💡 You can verify it's running by opening Command Prompt and typing:

ollama --version
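
ollama --version confirms the CLI is installed. To check that the background server itself is reachable, you can also query the local API from Command Prompt (curl ships with Windows 10 and later; this assumes the default port):

curl http://localhost:11434/api/version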


🧰 3. Installing mistral:7b-instruct

Pull & run in a terminal:

ollama run mistral:7b-instruct

  • Downloads ~4.1 GB (Q4_0 quantized by default for speed/efficiency).
  • Starts a chat immediately at the >>> prompt; type away!
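
Once the model is in place, applications such as SocKey talk to it through Ollama's local REST API instead of the interactive prompt. A minimal sketch against the /api/generate endpoint, assuming the default port:

curl http://localhost:11434/api/generate -d '{
  "model": "mistral:7b-instruct",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'

With "stream": false the server returns a single JSON object whose response field contains the full completion; omit it and the reply arrives as a stream of JSON chunks.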

🧭 Quick Summary

| Task | Command |
| --- | --- |
| Install (macOS/Linux) | `curl -fsSL https://ollama.com/install.sh \| sh` |
| Start service | `ollama serve` |
| Run model | `ollama run llama3` |
| List models | `ollama list` |
| Pull new model | `ollama pull mistral:7b-instruct` |
| Use API | `http://localhost:11434` |
