Ollama
by Ollama
Run open-source LLMs locally with a simple developer-first platform
Features
- Local-first model serving: download and run open-source models such as Llama 3, Gemma, Mistral, and Phi on your own hardware
- Command-line interface and REST API for programmatic access
- Function calling (tool calling) support for compatible models
- Structured output (JSON schema enforcement) for reliable programmatic parsing; see the REST API sketch after this list
- Multimodal support (text + images) and expanded image format compatibility
- Model management: install, list, remove, and configure models; Modelfiles for custom configurations
- OpenAI-compatible API layer for easier migration and integration (sketched after this list)
- New standalone desktop app for non-technical users (model browsing, chat, file-based chat)
- MCP (Model Context Protocol) server ecosystem support for using Ollama models in MCP-based agent frameworks
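A minimal sketch of querying the REST API with structured output, assuming Ollama is running on its default port (11434), a model named `llama3` has already been pulled, and your Ollama release is recent enough to accept a JSON schema in the chat API's `format` field:

```python
# Sketch: ask a local Ollama server for schema-constrained JSON over the REST API.
import json
import urllib.request

SCHEMA = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "country": {"type": "string"},
    },
    "required": ["city", "country"],
}

payload = {
    "model": "llama3",  # assumed model name; use whatever you have pulled
    "messages": [{"role": "user", "content": "Where is the Eiffel Tower? Reply in JSON."}],
    "format": SCHEMA,   # recent releases accept a JSON schema here to constrain output
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

# The assistant's content is a JSON string that conforms to SCHEMA.
answer = json.loads(reply["message"]["content"])
print(answer["city"], answer["country"])
```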
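A minimal sketch of the OpenAI-compatible layer, assuming the official `openai` Python client is installed; the API key is required by the client but ignored by Ollama, and the model name is whatever you have pulled locally:

```python
# Sketch: reuse the official OpenAI client against the local Ollama endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="llama3",  # assumed local model name
    messages=[{"role": "user", "content": "Summarize what Ollama does in one sentence."}],
)
print(resp.choices[0].message.content)
```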
Superpowers
- Privacy & offline operation: run LLMs without sending data to the cloud
- Developer ergonomics: a simple CLI plus Modelfiles make local deployment fast and scriptable (see the sketch after this list)
- Production readiness: health checks, logging, and performance optimizations for on-premises use
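A minimal sketch of that scriptability, assuming the `ollama` CLI is on your PATH; the model name, parameter, and system prompt are illustrative:

```python
# Sketch: write a Modelfile and register it as a new local model with `ollama create`.
import pathlib
import subprocess

modelfile = '''FROM llama3
PARAMETER temperature 0.3
SYSTEM """
You are a terse assistant that answers in one sentence.
"""
'''

pathlib.Path("Modelfile").write_text(modelfile)

# Builds a local model called "terse-llama" from the Modelfile above;
# afterwards it can be used like any pulled model, e.g. `ollama run terse-llama`.
subprocess.run(["ollama", "create", "terse-llama", "-f", "Modelfile"], check=True)
```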
Pricing
- Free open-source tooling
Quick usage examples
- Run a local model via the CLI (for example, `ollama run llama3`) and query it from any REST client
- Use function calling to integrate local models with tools and databases (sketched after this list)
- Deploy an MCP server so agent frameworks can use local Ollama models (sketched after this list)
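A minimal tool-calling sketch over the REST API, assuming a tool-capable model such as `llama3.1` is already pulled; the lookup function and its schema are illustrative stand-ins for your own tools or databases:

```python
# Sketch: let a local model decide to call a tool, then execute that tool yourself.
import json
import urllib.request

def get_population(city: str) -> int:
    """Stand-in for a real database or API lookup."""
    return {"paris": 2_100_000, "tokyo": 14_000_000}.get(city.lower(), 0)

tools = [{
    "type": "function",
    "function": {
        "name": "get_population",
        "description": "Look up the population of a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "llama3.1",  # assumed tool-capable model
    "messages": [{"role": "user", "content": "How many people live in Tokyo?"}],
    "tools": tools,
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    message = json.load(resp)["message"]

# If the model chose to call the tool, run it with the arguments the model produced.
for call in message.get("tool_calls", []):
    args = call["function"]["arguments"]
    print(get_population(**args))
```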
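A minimal MCP server sketch, assuming the `mcp` Python SDK (FastMCP) and the `ollama` Python client are installed; the server name, tool, and model are illustrative:

```python
# Sketch: expose a local Ollama model as a tool on an MCP server.
from mcp.server.fastmcp import FastMCP
import ollama

mcp = FastMCP("local-llm")

@mcp.tool()
def ask_local_model(prompt: str) -> str:
    """Answer a prompt with a locally hosted Ollama model."""
    reply = ollama.chat(
        model="llama3",  # assumed local model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply["message"]["content"]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for MCP clients to call
```

Launched over stdio, this server can be registered with any MCP-aware agent framework, which then reaches the local model through the `ask_local_model` tool.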
Sources
- Ollama official website and changelogs
- Community tutorials and MCP implementations