⌬ OpenCode Github repo

GitHub - opencode-ai/opencode

A powerful terminal-based AI assistant for developers, providing intelligent coding assistance directly in your terminal.

Overview

OpenCode is a Go-based CLI application that brings AI assistance to your terminal. It provides a TUI (Terminal User Interface) for interacting with various AI models to help with coding tasks, debugging, and more.

Features

  • Interactive TUI: Built with Bubble Tea for a smooth terminal experience
  • Multiple AI Providers: Support for OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, Groq, Azure OpenAI, and OpenRouter
  • Session Management: Save and manage multiple conversation sessions
  • Tool Integration: AI can execute commands, search files, and modify code
  • Vim-like Editor: Integrated editor with text input capabilities
  • Persistent Storage: SQLite database for storing conversations and sessions
  • LSP Integration: Language Server Protocol support for code intelligence
  • File Change Tracking: Track and visualize file changes during sessions
  • External Editor Support: Open your preferred editor for composing messages
  • Named Arguments for Custom Commands: Create powerful custom commands with multiple named placeholders

There is also a fork under SST (sst/opencode), intended to ensure the project stays open source; it uses the domain opencode.ai:
GitHub - sst/opencode: AI coding agent, built for the terminal.

Summary of ‘opencode’ video notes, Sept 2025

What is OpenCode?

OpenCode is an open-source, terminal-first AI coding agent platform (a.k.a. “coding agent”) that runs locally and lets large language models (LLMs) interact with your codebase via tools. It’s model-agnostic (supports hosted and local providers) and designed for developer workflows: editing files, running commands, invoking language servers, running tests, and orchestrating agentic sub-workflows — all from a TUI (terminal UI) or other clients.

Core goals:

  • Put an AI-powered development loop directly in the terminal.
  • Be model- and provider-agnostic so developers can pick the best deployments for their needs.
  • Offer extensibility via local custom tools (more secure) and MCP-style integrations when needed.
  • Provide guardrails (LSP, tests, snapshots, permissions) to reduce hallucinations and unsafe edits.

Official resources:

  • OpenCode docs & quickstart: https://opencode.ai/ (project docs and examples)
  • GitHub: https://github.com/sst/opencode

High-level architecture

  • Client (TUI)
    • Terminal UI (written in Go). Lightweight, synchronous, designed for developer workflows.
    • Also possible to build other front-ends (web UI, mobile clients) that talk to the same server.
  • Server / Backend
    • JavaScript (Bun/Node) backend that exposes HTTP endpoints and manages sessions, tools, event bus, and AISDK interactions.
    • Embeds and spawns the TUI binary so terminal UX + server streaming work together.
  • AISDK / Model Layer
    • Provider-agnostic abstraction for calling models (supports hosted providers, OpenRouter, local endpoints).
    • Streams tokens and emits tool-call instructions for the backend to execute.
  • Event Bus / SSE
    • Backend publishes events (progress, tool results, permission requests) to an event bus.
    • Clients subscribe to an /event SSE endpoint to get live updates (see the sketch after this list).
  • Tools & MCPs
    • Built-in tools (read, edit, bash, run-test, git, LSP, todo, etc.) that the LLM can call.
    • Optional MCP (Model Context Protocol) servers or plugins to extend capabilities (search, web fetch, third-party APIs).
    • Custom local tools (in .opencode/tool/) let teams expose internal functionality securely.
  • Snapshot / Safety
    • Lightweight snapshotting (git-tree snapshots) before agent steps so the system can revert if needed.
    • Permission prompts for potentially dangerous operations (running arbitrary bash, network calls).
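
To make the event flow concrete, here is a minimal sketch (TypeScript, Node 18+ or Bun) of a client subscribing to the server's SSE stream. The port, the /event path, and the JSON payload shape are assumptions for illustration, not the documented API.

    // Minimal SSE subscriber sketch. Assumes a local OpenCode server that
    // streams JSON-encoded events over SSE; port, path, and payload shape
    // are illustrative, not the documented API.
    const SERVER = "http://localhost:4096";

    async function subscribe(): Promise<void> {
      const res = await fetch(`${SERVER}/event`, {
        headers: { Accept: "text/event-stream" },
      });
      if (!res.ok || !res.body) throw new Error(`SSE connect failed: ${res.status}`);

      const reader = res.body.pipeThrough(new TextDecoderStream()).getReader();
      let buffer = "";
      for (;;) {
        const { value, done } = await reader.read();
        if (done) break;
        buffer += value;
        // SSE frames are separated by a blank line; "data:" lines carry the payload.
        const frames = buffer.split("\n\n");
        buffer = frames.pop() ?? "";
        for (const frame of frames) {
          const data = frame
            .split("\n")
            .filter((line) => line.startsWith("data:"))
            .map((line) => line.slice(5).trim())
            .join("\n");
          if (data) console.log("event:", JSON.parse(data));
        }
      }
    }

    subscribe().catch(console.error);

Any other front-end (web UI, mobile client) would consume the same stream.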

Key concepts

  • Session: a conversation/context window for an agent. Sessions contain messages and incremental tool outputs.
  • Agent: a persona + system prompt + allowed tools + model settings.
    • Primary agents have edit permissions.
    • Sub-agents (spawned via the task tool) run in separate context windows for specialized subtasks (security review, testing, research).
  • Tools: structured abilities models can call (each has a description, JSON schema). Examples: read(file, offset, limit), edit(file, patch), bash(command), lsp(diagnostics), run_tests().
  • Slash commands: reusable command templates stored in .opencode/command/*.md or opencode.json. Triggered from the TUI with /name; they can accept args and reference files with @file for context injection.
  • Custom tools (local): small scripts or node/bun tools placed in .opencode/tool/ to be called by agents (keeps external network surface minimal).
  • LSP integration: language server runs per project; diagnostics are provided as guardrails to the model after edits.
  • Zen / curated providers: optional managed provider layer that offers tested, reliable deployments (curation, SLAs, negotiated pricing). Useful if you don’t want to manage provider quality.

Core built-in tools (typical set)

  • read(file, offset, limit): safe file reads (restricted to project directory).
  • edit(file, patch): make edits (formatted, then LSP diagnostics).
  • bash(cmd): run shell commands (with permission gating).
  • git: snapshot / inspect / revert changes.
  • lsp: language server diagnostics and hover/signature lookups (guardrails).
  • todo: manage per-session to-do lists.
  • test runner: run test suites and return results.
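
To show what a tool looks like structurally (a description the model sees, a JSON schema for arguments, an execute function, and the project-directory sandboxing mentioned above), here is a hedged TypeScript sketch of a read-style tool. The object shape is illustrative, not OpenCode's actual plugin API.

    // Illustrative read-style tool: description + JSON schema + execute, with a
    // simple project-directory sandbox. Not the exact OpenCode plugin API.
    import { promises as fs } from "node:fs";
    import path from "node:path";

    export default {
      name: "read",
      description: "Read a slice of a file inside the project directory.",
      schema: {
        type: "object",
        properties: {
          file: { type: "string" },
          offset: { type: "number" },
          limit: { type: "number" },
        },
        required: ["file"],
      },
      async execute(args: { file: string; offset?: number; limit?: number }) {
        const root = process.cwd();
        const target = path.resolve(root, args.file);
        // Guardrail: refuse paths that escape the working directory.
        if (!target.startsWith(root + path.sep)) throw new Error("path escapes project directory");
        const lines = (await fs.readFile(target, "utf8")).split("\n");
        const start = args.offset ?? 0;
        return lines.slice(start, start + (args.limit ?? lines.length)).join("\n");
      },
    };

The same shape (schema plus execute) is what the custom-tool example later in these notes builds on.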

Installation & quick start (example)

  1. Install (system dependent; follow the install instructions in the project docs)
  2. Configure model provider
    • Use Zen or OpenRouter, or configure your provider keys (Anthropic, OpenAI, Z.ai, local Ollama, etc.)
    • Example:
      • opencode auth login
      • select provider / paste API key
  3. Start OpenCode in a project
    • cd /path/to/project
    • opencode
    • TUI opens — run /help or press Tab to explore agents/commands.

Example workflows

  1. Feature development (typical coding agent loop)
    • Start session: describe feature or bug.
    • The agent reads relevant files (read tool), plans steps (plan agent), then the build agent edits files (edit tool).
    • After each edit: run LSP diagnostics, then run tests. If tests fail, agent iterates.
    • Backend snapshots each step and asks for permission for risky operations.
    • Final commit produced with a suggested commit message or custom commit slash command.
  2. Code review & security scan
    • Main agent spawns sub-agents: one for security analysis, another for style review (task tool).
    • Sub-agents run within isolated contexts and return summaries.
  3. Research / content creation (non-coding)
    • Agent uses MCP/Brave Search or web tools to gather sources, assembles a draft with references, and stores artifacts in repo.
    • Useful for docs, release notes, blog posts.
  4. Productivity automation
    • Slash commands + agents generate daily plans, summarize PRs, and auto-post commit summaries to Slack via a custom tool/webhook (a sketch of such a webhook tool follows this list).
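
For the last workflow, a local custom tool could post summaries to Slack through an incoming-webhook URL. The file name, exported object shape, and environment variable below are assumptions; only the standard Slack incoming-webhook POST is taken as given.

    // Hypothetical local tool (e.g. .opencode/tool/slack-summary.ts) that posts a
    // commit summary to a Slack incoming webhook. The object shape is
    // illustrative, not the exact OpenCode plugin API.
    const WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL ?? "";

    export default {
      name: "slack_summary",
      description: "Post a short commit summary to the team Slack channel.",
      schema: {
        type: "object",
        properties: { summary: { type: "string" } },
        required: ["summary"],
      },
      async execute({ summary }: { summary: string }) {
        const res = await fetch(WEBHOOK_URL, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ text: summary }),
        });
        if (!res.ok) throw new Error(`Slack webhook failed: ${res.status}`);
        return "posted";
      },
    };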

Slash commands & custom tools — practical examples

  • Slash command (project-level): .opencode/command/joker.md

    ---  
    agent: joker-agent  
    model: <your-model>  
    ---  
    You are a concise joke generator. Input: {{args}}.  
    
  • Custom tool (TypeScript example: .opencode/tool/joker.ts)

    // Illustrative custom tool; check the current OpenCode plugin docs for the
    // exact package name and helper signature.
    import { tool } from '@opencode/plugin';

    export default tool('joker', {
      // JSON schema describing the argument the model must supply.
      schema: { type: 'object', properties: { subject: { type: 'string' } } },
      async execute({ subject }: { subject: string }) {
        return `Joke about ${subject}: Why did the ${subject} cross the road? ...`;
      }
    });
  • AGENTS.md (agent definitions) — agents describe role and allowed tools; slash commands can reference agents.


Security, privacy, and operational notes

  • Local-first by design: running the TUI and backend locally helps keep code and secrets on-device.
  • Custom tools are safer than MCPs: local tools avoid adding external processes/network dependencies.
  • MCPs and external providers are useful but increase attack surface — require strict API key management and trust in provider deployments.
  • Permission prompts: dangerous actions (bash, network calls) require explicit user approval in TUI.
  • File sandboxing: read/edit tools restrict access to project working directory to reduce risk.
  • Snapshots and revert: snapshots before agent steps make it possible to roll back unwanted edits without polluting git history.

Best practices

  • Start small: pilot one workflow (e.g., research draft or simple bug fix) before broad rollout.
  • Use slash commands to codify repeatable prompts and share them with the team for consistency.
  • Prefer local custom tools for accessing internal APIs or systems; use MCPs only when necessary.
  • Use LSP and test runners as guardrails: require tests and run formatters after edits automatically.
  • Choose reliable model deployments: test providers under real workload; prefer curated providers (Zen) or robust hosts for production usage.
  • Rate-limiting: when using web search MCPs like Brave, account for rate limits by sleeping between queries or batching them (see the sketch after this list).
  • Observability: log agent runs and tool calls for auditability and debugging.
  • Team onboarding: include examples, recommended slash commands, and guidelines in repo .opencode/README for consistent adoption.
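
For the rate-limiting point, the simplest approach is to space out calls to the search tool. This is a generic TypeScript sketch; searchWeb is a placeholder for whatever MCP or web tool is in use, and the delay is illustrative rather than Brave's actual limit.

    // Generic rate-limiting sketch: sleep between web-search calls to stay under a
    // provider's requests-per-second limit. searchWeb() is a placeholder; the
    // 1100 ms delay is illustrative.
    const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

    async function searchAll(
      queries: string[],
      searchWeb: (q: string) => Promise<unknown>,
    ): Promise<unknown[]> {
      const results: unknown[] = [];
      for (const q of queries) {
        results.push(await searchWeb(q));
        await sleep(1100); // stay safely under ~1 request per second
      }
      return results;
    }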

Troubleshooting — common issues & fixes

  • Slash command edits not reflected: reload/restart OpenCode after changing .opencode/command files.
  • Model name / provider errors: check provider configuration and model name (typos are common).
  • Self-signed TLS / network errors: NODE_TLS_REJECT_UNAUTHORIZED=0 is a temporary dev-only workaround (do NOT use it in production); instead fix the CA chain or the provider endpoint.
  • Slow model responses: choose a faster model for interactive loops (e.g., K2, GLM coding plans) or use async workflows (spawn tasks and check later via mobile/web).
  • Tool errors / unexpected edits: check LSP diagnostics and revert snapshot, refine system prompt, add stricter tool schemas and safety checks.

Where OpenCode excels (pros)

  • Terminal-first, low-friction developer loop.
  • Model-agnostic: switch providers easily.
  • Powerful tool abstraction with real guardrails (LSP, tests, snapshots).
  • Extensible with local tools and agent workflows.
  • Great for prototyping and integrating AI into dev workflows.

Where caution is required (cons / limitations)

  • Reliability depends on chosen model/provider and deployment quality.
  • Security trade-offs when enabling MCPs or external plugins.
  • Some users hit model inconsistencies / random behavior; workflows and UX need to be designed to handle non-determinism.
  • Interactive workflows require picking fast/low-latency models for good UX.

Comparison: OpenCode vs Claude Code (high level)

  • OpenCode: open-source, terminal-native, model-agnostic, community extensible, designed for customization and self-hosting.
  • Claude Code: closed-source commercial offering integrated with Anthropic models; may offer a more frictionless managed experience for Anthropic customers.
  • Choice depends on priorities: control, extensibility, and multi-provider flexibility (OpenCode) vs turnkey managed experience for a specific provider (Claude Code).

Example checklist to onboard OpenCode in a team

  • Pick an initial low-risk workflow: e.g., documenting a feature, research drafts, or automating PR summaries.
  • Configure a curated model provider (Zen or tested OpenRouter deployments).
  • Create three shared slash commands: /context, /commit (pre-commit checks + a clean commit message), /doc-draft (a sample /commit sketch follows this checklist).
  • Add 1-2 custom tools for internal APIs (deployed as local tool scripts).
  • Configure LSPs for main languages + test runner hooks.
  • Train the team: how to use slash commands, how to approve permissions, how to revert snapshots.
  • Monitor usage and iterate on tool definitions, agent prompts, and provider selection.
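
As a starting point for the /commit command in the checklist, a project-level command file (e.g. .opencode/command/commit.md) could look like the sketch below. It follows the same frontmatter pattern as the joker example earlier in these notes; the agent and model values are placeholders.

    ---
    agent: build
    model: <your-model>
    ---
    Review the currently staged changes, run the pre-commit checks, and propose a
    concise, conventional commit message for: {{args}}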

Further reading

  • Examples & community videos: search for “OpenCode” tutorials and setup videos (many creators show provider setups, GLM/Zen demos, and slash-command patterns).
  • Benchmarks & model selection: prefer real-world codebase tests (GoCoder-style) over synthetic academic benchmarks.