TLDR
Anthropic introduced the Model Context Protocol (MCP) in November 2024 to enable standardized AI-to-application integration. MCP provides a universal protocol for AI models to access tools, data sources, and capabilities across distributed systems. Within one year, it became the de facto standard, with nearly 2,000 servers in the MCP Registry and adoption by major organizations including Notion, Stripe, GitHub, Hugging Face, and Postman.
Overview
The Model Context Protocol is an open, client-server protocol that enables AI applications to securely connect with external data sources and tools. Instead of each AI agent embedding function code directly, MCP allows dynamic discovery and invocation of capabilities at runtime through a standardized JSON-RPC 2.0 interface.
Key Concepts
- Client-Server Architecture: AI applications (MCP hosts) connect to MCP servers through MCP clients
- Dynamic Capability Discovery: Tools, resources, and prompts are discovered at runtime via list/get methods
- Protocol-First Design: Standardized JSON-RPC 2.0 messaging works across multiple transport layers
- Separation of Concerns: Protocol layer is independent of transport mechanisms (stdio, HTTP)
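The concepts above come together in the initialization handshake, where client and server negotiate a protocol version and exchange capabilities. A minimal sketch of that first JSON-RPC 2.0 exchange, with illustrative names and capability values:

```python
import json

# Client -> server: initialize request (JSON-RPC 2.0).
# The protocol version, capabilities, and clientInfo values are illustrative.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {"sampling": {}, "elicitation": {}},
        "clientInfo": {"name": "example-host", "version": "0.1.0"},
    },
}

# Server -> client: initialize response declaring the server's capabilities.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2025-06-18",
        "capabilities": {"tools": {"listChanged": True}, "resources": {}},
        "serverInfo": {"name": "example-server", "version": "0.1.0"},
    },
}

print(json.dumps(initialize_request, indent=2))
```

After this exchange, each side knows which primitives the other supports and can proceed to discovery calls such as tools/list.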
Architecture
Core Components
1. MCP Host
The AI application (e.g., Claude Desktop, Claude Code) that coordinates multiple MCP client connections and orchestrates AI interactions.
2. MCP Client
Maintains a 1:1 connection with an MCP server, handles capability negotiation, and provides context to the host application.
3. MCP Server
A program that exposes tools, resources, and prompts to MCP clients. Servers can be local processes (stdio transport) or remote services (HTTP transport).
Two-Layer Design
Data Layer
Implements JSON-RPC 2.0 protocol defining:
- Lifecycle management (initialization, capability negotiation, termination)
- Server primitives (tools, resources, prompts)
- Client primitives (sampling, elicitation, logging)
- Real-time notifications for capability changes
Transport Layer
Manages communication channels:
- Stdio transport: Standard input/output for local processes (low overhead, OS-level process isolation)
- HTTP transport: HTTP POST with optional Server-Sent Events for remote servers, supporting OAuth 2.1, bearer tokens, and API keys
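Over stdio, messages are newline-delimited JSON objects. A minimal framing sketch (the helper names are this example's own, not SDK APIs):

```python
import json

def frame_message(message: dict) -> bytes:
    """Serialize a JSON-RPC message for the stdio transport:
    one JSON object per line, newline-delimited, no embedded newlines."""
    line = json.dumps(message, separators=(",", ":"))
    assert "\n" not in line  # embedded newlines would break framing
    return (line + "\n").encode("utf-8")

def parse_stream(data: bytes) -> list[dict]:
    """Split a stdio byte stream back into individual JSON-RPC messages."""
    return [json.loads(line) for line in data.decode("utf-8").splitlines() if line]

ping = {"jsonrpc": "2.0", "id": 7, "method": "ping"}
assert parse_stream(frame_message(ping)) == [ping]
```

The official SDKs handle this framing internally; the sketch only shows why the transport layer can be swapped without touching the data layer.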
Core Primitives
Server Primitives (What servers expose)
Tools
Executable functions that AI applications can invoke:
- File operations (read, write, search)
- API calls (REST, GraphQL)
- Database queries
- Code execution
- System commands
Discovery: tools/list → Execution: tools/call
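The discovery-then-execution flow looks like this on the wire. The tool name and schema below are illustrative, not part of any real server:

```python
# Discovery: the client asks the server which tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# A server's reply: each tool carries a name, description, and JSON Schema.
list_response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "tools": [
            {
                "name": "read_file",
                "description": "Read a file from the workspace",
                "inputSchema": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            }
        ]
    },
}

# Execution: invoke a discovered tool with arguments matching its schema.
call_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "read_file", "arguments": {"path": "README.md"}},
}
```

Because the schema arrives at runtime, the host can present any server's tools to the model without compile-time knowledge of them.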
Resources
Data sources providing contextual information:
- File contents
- Database records
- API responses
- Configuration data
- Documentation
Discovery: resources/list → Retrieval: resources/read
Key characteristic: Application-controlled access (client explicitly fetches data, not model-initiated)
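A sketch of the retrieval exchange, with an illustrative URI and payload. Note that the client, not the model, decides when to issue the read:

```python
# Retrieval is application-controlled: the client fetches the resource,
# then decides how to supply it to the model as context.
read_request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "resources/read",
    "params": {"uri": "file:///project/config.json"},
}

# The server returns resource contents keyed by URI and MIME type.
read_response = {
    "jsonrpc": "2.0",
    "id": 4,
    "result": {
        "contents": [
            {
                "uri": "file:///project/config.json",
                "mimeType": "application/json",
                "text": "{\"debug\": false}",
            }
        ]
    },
}
```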
Prompts
Reusable templates for structuring LLM interactions:
- System prompts
- Few-shot examples
- Task-specific instructions
- Conversation templates
Discovery: prompts/list → Retrieval: prompts/get
Client Primitives (What clients expose)
Sampling
Servers can request LLM completions from the client’s AI application via sampling/createMessage.
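This inverts the usual direction: the server issues the request, and the host's model produces the completion. A sketch of such a request (message text and token limit are illustrative):

```python
# Server -> client: ask the host application's model for a completion.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarize this diff."},
            }
        ],
        "maxTokens": 200,
    },
}
```

The host stays in control: it can review, modify, or reject the request before any model call is made.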
Elicitation
Request additional information from users through secure out-of-band flows (OAuth, credential collection)
Logging
Clients receive log messages from servers for debugging and monitoring
Protocol Features
Lifecycle Management
- Initialization: Client and server negotiate protocol version and capabilities
- Capability Exchange: Both parties declare supported features
- Operation: Tools/resources/prompts are discovered and used
- Termination: Clean connection shutdown
Real-Time Notifications
Servers push updates about capability changes:
- notifications/tools/list_changed
- notifications/resources/list_changed
- notifications/prompts/list_changed
Clients refresh capability lists to stay synchronized.
Security Model
- Transport-level security: OAuth 2.1, bearer tokens, API keys
- Process isolation: Stdio transport uses OS-level protection
- Authorization: Decoupled from protocol layer, implemented at transport level
- Elicitation: Secure credential collection through browser flows
Version History
2025-11-25 (First Anniversary Release)
Major Features:
- Tasks abstraction (SEP-1686): Track long-running operations across states (working, input_required, completed, failed, cancelled)
- Simplified OAuth: URL-based client registration using OAuth Client ID Metadata Documents
- Authorization Extensions: Machine-to-machine auth, Cross App Access for enterprise identity providers
- URL Mode Elicitation (SEP-1036): Secure out-of-band credential collection
- Sampling with Tools (SEP-1577): Server-side agentic loops with tool calling
- Developer experience improvements: standardized tool naming, decoupled RPC payloads
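The Tasks abstraction's five states can be sketched as an enum. Which states are terminal is this sketch's reading of the state names, not quoted spec text:

```python
from enum import Enum

# Task states named by the Tasks abstraction (SEP-1686).
class TaskState(Enum):
    WORKING = "working"
    INPUT_REQUIRED = "input_required"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELLED = "cancelled"

# Assumption: these three states end a task; working/input_required do not.
TERMINAL = {TaskState.COMPLETED, TaskState.FAILED, TaskState.CANCELLED}

def is_terminal(state: TaskState) -> bool:
    """True if the task has finished and will not change state again."""
    return state in TERMINAL

assert not is_terminal(TaskState.WORKING)
assert is_terminal(TaskState.CANCELLED)
```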
SDK Releases:
- TypeScript SDK v1.25.1 (December 2025)
- Python SDK v1.25.0 (December 2025)
2025-06-18
- Structured tool outputs
- OAuth-based authorization
- Elicitation for server-initiated user interactions
- Enhanced security best practices
2025-03
- Remote MCP server support improvements
- Some SDKs deprecated in favor of unified approach
2024-11 (Initial Release)
Anthropic introduced MCP to enable reusable function/API integration in distributed AI environments. Launched with Claude 3.5 Sonnet integration.
Community & Ecosystem
Governance
- 58 maintainers supporting 9 core/lead maintainers
- 2,900+ Discord contributors (100+ joining weekly)
- Specification Enhancement Proposals (SEPs) and Working Groups
- Community events: MCP Dev Summit, MCP Night, MCP Dev Days
Adoption
- MCP Registry: Nearly 2,000 server entries (407% growth since September 2025)
- Major Adopters: Notion, Stripe, GitHub, Hugging Face, Postman
- Clients: See pulsemcp.com for comprehensive list of MCP-capable applications
SDKs
- TypeScript: Official SDK for Node.js and browser environments
- Python: Official SDK with v2 development ongoing
- Community SDKs: Multiple languages supported by community
Use Cases
Multi-Agent Systems
- Coordinate multiple AI agents with shared tooling
- Enable task delegation and workflow orchestration
- Support long-running operations with Tasks abstraction
Enterprise Integration
- Connect AI to internal databases and APIs
- Implement SSO and enterprise identity controls
- Deploy remote MCP servers with OAuth security
Development Tools
- IDE integrations (e.g., Claude Code)
- Code analysis and refactoring
- Documentation generation
- Test automation
Data Access
- Database query interfaces
- File system operations
- Cloud storage integration
- API aggregation
Design Principles
- Simplicity: Prioritize real-world production deployments over theoretical completeness
- Extensibility: Composable architecture enables custom primitives and extensions
- Model Agnostic: Works with any LLM, no model-specific dependencies
- Backward Compatibility: New versions maintain compatibility with existing implementations
- Community-Driven: Evolution guided by Specification Enhancement Proposals (SEPs)
Sources
- One Year of MCP: November 2025 Spec Release | Model Context Protocol Blog
- MCP Specification 2025-11-25
- Architecture Overview - Model Context Protocol
- Update on the Next MCP Protocol Release | Model Context Protocol Blog
- TypeScript SDK Releases
- Python SDK Releases
- The Full MCP Blueprint: Building a Full-Fledged MCP Workflow
- Model Context Protocol (MCP): A Comprehensive Introduction for Developers
- What Is the Model Context Protocol (MCP) and How It Works
- MCP 2025-11-25 is here: async Tasks, better OAuth, extensions | WorkOS