🧠 Ollama — Installation & Usage Guide (2025)

Ollama lets you run large language models (LLMs) like Llama 3, Mistral, or Gemma locally on your computer — no cloud required.

To run SocKey, the system must have Ollama running with the mistral:7b-instruct model installed.


🖥️ 1. System Requirements

| Platform | Supported | Notes |
|----------|-----------|-------|
| macOS | ✅ Native support (Apple Silicon & Intel) | Recommended |
| Windows | ✅ Supported (via installer) | WSL not required |

Minimum recommended specs:

  • 8 GB RAM (16+ GB for large models)
  • At least 20 GB free disk space
  • GPU optional (CPU-only mode supported)
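Before installing, you can sanity-check free disk space from a macOS or Linux terminal (models are several GB each, so aim for the 20 GB suggested above):

```shell
# Show free disk space on the filesystem holding your home directory,
# where Ollama stores models (~/.ollama by default).
FREE=$(df -h "$HOME" | tail -1)
echo "$FREE"
```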

⚙️ 2. Installation

🧩 macOS

curl -fsSL https://ollama.com/install.sh | sh

Or use Homebrew:

brew install ollama

Then start the Ollama service:

ollama serve
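Once the service is running, its HTTP API answers on the default port 11434; the `/api/version` endpoint returns the installed version as JSON. A quick check:

```shell
# Query the local Ollama API; fall back to an error message if the
# server is not running so the script still exits cleanly.
VERSION=$(curl -fsS http://localhost:11434/api/version 2>/dev/null \
  || echo '{"error": "ollama server not reachable"}')
echo "$VERSION"
```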

🪟 Windows

  1. Go to https://ollama.com/download
  2. Download and run the Windows installer (OllamaSetup.exe)
  3. After installation, Ollama will automatically start running in the background.

💡 You can verify it’s running by opening Command Prompt and typing:

ollama --version


🧰 3. Installing mistral:7b-instruct

Pull & Run in Terminal:

ollama run mistral:7b-instruct
  • Downloads ~4.1 GB (Q4_0 quantized by default for speed/efficiency).
  • Starts an interactive chat immediately: type at the >>> prompt.
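Beyond the interactive chat, `ollama run` also accepts a prompt as a command-line argument for one-shot, scriptable use (assuming the model is already pulled and the server is running; the prompt below is just an example):

```shell
# One-shot generation: pass the prompt on the command line instead of
# entering the chat REPL. Guard for machines where ollama isn't installed.
PROMPT="Explain quantization in one sentence."
if command -v ollama >/dev/null 2>&1; then
  ollama run mistral:7b-instruct "$PROMPT"
else
  echo "ollama is not installed or not on PATH"
fi
```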

🧭 Quick Summary

| Task | Command |
|------|---------|
| Install (macOS/Linux) | `curl -fsSL https://ollama.com/install.sh \| sh` |
| Start service | `ollama serve` |
| Run model | `ollama run llama3` |
| List models | `ollama list` |
| Pull new model | `ollama pull mistral:7b-instruct` |
| Use API | `http://localhost:11434` |
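The API address above is Ollama's local REST endpoint. A minimal sketch of a non-streaming request with curl (model name and prompt are illustrative):

```shell
# Build a JSON request for Ollama's /api/generate endpoint.
# "stream": false asks for one complete JSON response instead of chunks.
REQUEST='{"model": "mistral:7b-instruct", "prompt": "Why is the sky blue?", "stream": false}'
RESPONSE=$(curl -fsS http://localhost:11434/api/generate -d "$REQUEST" 2>/dev/null \
  || echo '{"error": "server not reachable"}')
echo "$RESPONSE"
```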
