Compound Engineering
A software development methodology in which each completed feature makes the next one easier to build, creating a self-improving development system that accelerates over time.
Core insight: Unlike traditional engineering where complexity accumulates, compound engineering inverts the dynamic—institutional knowledge compounds, making development faster with each iteration.
Core Principle
Traditional development follows this pattern:
Codebase grows → More complexity → More edge cases → Harder to build features
Result: Velocity decreases as system grows
Compound engineering inverts this:
Feature completed → Knowledge captured → Institutional knowledge accumulates
Result: Velocity increases as system grows (compounding effect)
How It Works: The Three-Phase Loop
Phase 1: Plan (80% of effort)
- Agent reads issues – Understand requirements clearly
- Research approaches – Analyze similar problems in codebase and web
- Synthesize plans – Create detailed implementation roadmap
- Human review – Validate approach before execution
Key: Most work is planning, not coding
Phase 2: Work (20% of effort)
- Agent writes code – Implements according to plan
- Agent creates tests – Comprehensive test coverage
- Agent iterates – Tests locally, fixes issues
Key: Execution is efficient because plan is thorough
Phase 3: Review (architecture-level, not line-by-line)
- Human reviews architecture – Not syntax-level details
- Human validates design decisions – System implications
- Human approves strategy – Not line-by-line code
Key: Humans focus on high-level decisions, not code review
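A minimal sketch of this three-phase loop is below. `run_agent` and `human_approves` are placeholders standing in for whatever agent tooling and review gate you actually use; nothing here is a specific tool's API.

```python
def run_agent(prompt: str) -> str:
    """Placeholder: call your agent of choice (CLI or API) and return its text output."""
    return f"<agent output for: {prompt[:60]}...>"

def human_approves(artifact: str) -> bool:
    """Placeholder for a human review gate (e.g. a PR approval)."""
    return input(f"Approve?\n{artifact}\n[y/N] ").strip().lower() == "y"

def compound_loop(issue: str) -> None:
    # Phase 1: Plan (most of the human-facing effort happens here)
    research = run_agent(f"Research prior art in the codebase and on the web for: {issue}")
    plan = run_agent(f"Write a step-by-step implementation plan.\nIssue: {issue}\nResearch: {research}")
    if not human_approves(plan):          # human validates the approach, not code
        return

    # Phase 2: Work (agent executes the approved plan, including tests)
    diff = run_agent(f"Implement this plan with tests; iterate locally until tests pass:\n{plan}")

    # Phase 3: Review (architecture and design decisions, not syntax)
    if human_approves(f"Design summary for: {issue}\n{diff}"):
        print("Ship it, then capture the lessons in the knowledge base.")
```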
The Compound Effect: Institutional Knowledge
What Gets Captured
Every completed feature becomes permanent institutional knowledge:
- Bug lessons – Every bug becomes documented reference
- Failed attempts – Why certain approaches didn’t work
- Best practices – Patterns that work well in this codebase
- Architecture insights – How components interact
- Performance patterns – What’s fast, what’s slow
How It Compounds
Feature 1 completed → Knowledge base grows
↓
Feature 2 planned → Agents reference Feature 1 lessons
→ Build on what worked before
→ Avoid what didn't
→ 20% faster development
↓
Feature 2 completed → Knowledge base doubles
↓
Feature 3 planned → More lessons to learn from
→ Even faster development
→ 30% faster than Feature 2
↓
Feature N planned → Exponentially faster (compounding)
Result: System accelerates, not decelerates
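The acceleration claimed above is easy to model: if each completed feature shaves some fraction off the next feature's build time, the cumulative speedup compounds geometrically. A rough sketch with an assumed 10% per-feature gain (the rate and baseline are illustrative, not measured):

```python
# Illustrative compounding model: each feature is built `gain` faster than the last.
baseline_days = 10.0   # hypothetical time to ship Feature 1
gain = 0.10            # assumed per-feature improvement from reused knowledge

days = baseline_days
for feature in range(1, 11):
    speedup = baseline_days / days
    print(f"Feature {feature:2d}: {days:5.2f} days  ({speedup:.2f}x baseline velocity)")
    days *= (1 - gain)   # the next feature benefits from everything learned so far
```

With these assumptions the tenth feature ships at roughly 2.5x baseline velocity; the point is the shape of the curve, not the exact numbers.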
Agent Orchestration
Rather than a single AI agent processing large prompts sequentially, multiple agents work in parallel on different aspects:
Feature request
↓
Research Agent
├─→ Analyze codebase
├─→ Study web resources
└─→ Create GitHub issue with implementation steps
↓
Frontend Agent  ─→ Build UI components    ─→ Test locally
Backend Agent   ─→ Build APIs & services  ─→ Test locally
Test Agent      ─→ Write tests            ─→ Test locally
Docs Agent      ─→ Update docs            ─→ Add examples
↓
Human Orchestrator
• Reviews architecture decisions
• Validates system design
• Approves implementation
• Identifies issues agents might miss
Key: Parallelization amplifies individual agent capability
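One way to realize the fan-out above is plain `asyncio`: the research agent produces a shared plan, then each specialist agent gets its own slice of the work as an independent task, and the human orchestrator only sees the merged results. The `run_agent` call is a placeholder; swap in whatever tooling you actually drive.

```python
import asyncio

async def run_agent(role: str, task: str) -> str:
    """Placeholder: invoke your agent tooling for one role (CLI, API, etc.)."""
    await asyncio.sleep(0)                 # stand-in for real agent latency
    return f"[{role}] done: {task}"

async def build_feature(issue: str) -> list[str]:
    # Research agent runs first and drafts the shared plan.
    plan = await run_agent("research", f"analyze codebase and draft steps for: {issue}")

    # Frontend, backend, tests, and docs then proceed in parallel off the same plan.
    results = await asyncio.gather(
        run_agent("frontend", f"build UI components per plan: {plan}"),
        run_agent("backend",  f"build APIs and services per plan: {plan}"),
        run_agent("tests",    f"write tests covering acceptance criteria: {plan}"),
        run_agent("docs",     f"update docs and examples per plan: {plan}"),
    )
    return results   # handed to the human orchestrator for architecture review

if __name__ == "__main__":
    print(asyncio.run(build_feature("add CSV export to reports page")))
```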
Real-World Example: Every
Company: Every (technology company)
Result: two engineers producing the output of a fifteen-person team
Typical Every Workflow
- Voice input – Team member speaks feature idea to Claude Code
- Research phase – Agent analyzes codebase and best practices
- Issue creation – Detailed GitHub issue with implementation steps
- Parallel execution – Multiple agents work simultaneously
- Architecture review – Human reviews design decisions
- Deployment – Completed features ship within hours
Impact
- Small teams maintaining complex, production-grade systems
- Serving thousands of users daily
- Shipping at 5-10x traditional velocity
Comparison: Compound Engineering vs Traditional
| Aspect | Traditional | Compound Engineering |
|---|---|---|
| Planning | Minimal (jump to coding) | 80% of effort |
| Coding | 60% of effort | 20% of effort |
| Testing | Late-stage gate | Continuous, comprehensive |
| Knowledge | Lost after shipping | Captured, reused |
| Team role | Code + test + review | Orchestrate + decide |
| Velocity trend | Decreases over time | Increases over time |
| Complexity impact | More code = slower | More code = more captured knowledge = faster |
Key Mechanisms
1. Plan-Driven Development
80% of value comes from good planning, not fast coding:
- Clear specifications prevent rework
- Research reveals existing solutions
- Agents execute well-defined plans faster
2. Institutional Knowledge Accumulation
Every completed feature adds to knowledge base:
- Code patterns agents can reference
- Problem-solution mappings
- Performance characteristics
- API contracts and behaviors
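What "captured knowledge" looks like in practice can be as simple as append-only structured records that future planning runs are pointed at. A minimal sketch, assuming a flat JSONL file as the store; the schema, field names, and path are made up for illustration:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import date
from pathlib import Path

LESSONS = Path("docs/lessons.jsonl")   # hypothetical knowledge-base location

@dataclass
class Lesson:
    feature: str
    what_worked: str
    what_failed: str
    pattern: str          # reusable code or architecture pattern
    recorded: str = field(default_factory=lambda: date.today().isoformat())

def capture(lesson: Lesson) -> None:
    """Append a lesson so future planning agents can read it as context."""
    LESSONS.parent.mkdir(parents=True, exist_ok=True)
    with LESSONS.open("a") as f:
        f.write(json.dumps(asdict(lesson)) + "\n")

capture(Lesson(
    feature="csv-export",
    what_worked="streaming rows instead of building the file in memory",
    what_failed="first attempt serialized the whole report at once and timed out",
    pattern="use the existing ReportStream helper for any bulk export",
))
```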
3. Parallel Agent Execution
Multiple agents working simultaneously:
- Frontend, backend, testing in parallel
- Documentation generated alongside code
- Agents learn from each other’s work
- Human orchestrators resolve conflicts
4. Minimal Code Review
Shift from detailed code review to architectural review:
- Agent code must execute (not optional)
- Tests must pass (acceptance threshold)
- Humans focus on strategic decisions
- Syntax errors eliminated automatically
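The "tests must pass before a human ever looks" rule is mechanical and easy to enforce. A small sketch of such a gate, assuming a pytest-based project; the same gate could just as easily live in CI:

```python
import subprocess
import sys

def acceptance_gate() -> bool:
    """Run the test suite; only green runs are surfaced for human architecture review."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    if result.returncode != 0:
        print("Agent output rejected: tests failing, send back for another iteration.")
        print(result.stdout[-2000:])   # tail of the failure output for the agent to read
        return False
    print("Tests green: ready for architectural review, not line-by-line review.")
    return True

if __name__ == "__main__":
    sys.exit(0 if acceptance_gate() else 1)
```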
Benefits
1. Accelerating Velocity
- Feature 1: Baseline (1.0x)
- Feature 2: ~20% faster than baseline (1.2x)
- Feature 3: ~30% faster than Feature 2 (~1.5x vs baseline)
- Feature 10: 2x+ baseline as the gains compound
2. Higher Quality
- Documented lessons prevent repeated mistakes
- Comprehensive testing built-in
- Consistent architecture patterns
- Better code organization
3. Small Team Scaling
- 2-3 engineers maintain complex systems
- Work that normally takes 15 people
- No need for massive teams
- Easier to coordinate and communicate
4. Knowledge Preservation
- Institutional knowledge doesn’t leave with departing engineers
- New team members access accumulated knowledge
- Faster onboarding (the learning system already exists)
- Knowledge continuously improves
Prerequisites for Success
1. Clear Specifications
- Vague requirements undermine agent planning
- Detailed user stories essential
- Acceptance criteria explicit
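The difference between a vague and an agent-usable specification is mostly about making acceptance criteria observable and checkable. A hypothetical contrast (the feature and criteria are invented for illustration):

```python
# Vague: an agent cannot plan, implement, or test against this.
vague_spec = "Make report exports better."

# Explicit: every criterion is observable, so agents can plan, build, and verify it.
explicit_spec = {
    "story": "As an analyst, I can export any report as CSV from the report page.",
    "acceptance_criteria": [
        "Export button appears on every report detail page",
        "Downloaded CSV contains the same rows and columns as the on-screen table",
        "Exports of 100k-row reports complete in under 30 seconds",
        "Failed exports show an error toast and are logged",
    ],
}
```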
2. Comprehensive Testing
- Tests must capture acceptance criteria
- Agents test extensively before human review
- Tests inform future development
3. Strong Version Control
- Git history becomes institutional knowledge
- Commits document decisions
- Easy to understand prior art
4. Good Documentation
- Agents learn from comments and docs
- Well-documented code accelerates understanding
- Future agents stand on shoulders of prior work
5. Capable AI Models
- Requires Claude 3.5 Sonnet or equivalent
- Long-horizon reasoning essential
- Multi-step task orchestration needed
Challenges & Limitations
1. Initial Setup Cost
Building initial knowledge base requires investment:
- First features aren’t much faster than traditional
- Planning phase has same overhead
- Compounding takes time to manifest
2. Specification Clarity
Weak specifications undermine acceleration:
- Agents can’t improve on vague requirements
- Planning phase takes longer if unclear
- Rework negates compound benefits
3. Team Coordination
Small teams work best with deep specialization:
- Large heterogeneous teams create friction
- Different workflows conflict
- Parallel execution harder to coordinate
4. Knowledge Staleness
Accumulated knowledge can become outdated:
- New frameworks/approaches replace old
- Need periodic refactoring
- False patterns can persist
Implementation Path
Stage 1: Foundation (Weeks 1-2)
- Set up agent orchestration framework
- Define planning process
- Establish git workflow
- Create initial knowledge base structure
Stage 2: Early Features (Weeks 3-8)
- Build first features with agents
- Capture lessons in documentation
- Establish code patterns
- Create reusable components
Stage 3: Compounding (Week 9+)
- Features build on prior knowledge
- Velocity visibly accelerates
- Knowledge base becomes competitive advantage
- Team becomes force multiplier
Tools & Frameworks
Primary Tools
- Claude Code – Primary agentic IDE
- GitHub – Knowledge base (issues, code, history)
- Git – Version control and history
- Markdown – Documentation
Frameworks
- Amplifier framework – Callback hooks, tool calling, flow control
- SpecKit – Specification management
- BMAD – Build, monitor, analyze, debug
- Every’s Compound Engineering Plugin – Specific workflow
Metrics & Measurement
Velocity Metrics
- Features per sprint (should increase)
- Bugs per feature (should decrease)
- Time from specification to shipping (should decrease)
- Line-level code review time (should approach zero)
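Most of these metrics fall out of data you already track (issue timestamps, release dates, post-release bug counts). A sketch of computing two of them from hypothetical feature records:

```python
from datetime import date
from statistics import mean

# Hypothetical shipped-feature log: (name, spec written, shipped, bugs found after release)
features = [
    ("csv-export",  date(2025, 1, 6),  date(2025, 1, 10), 3),
    ("saved-views", date(2025, 1, 13), date(2025, 1, 16), 2),
    ("audit-log",   date(2025, 1, 20), date(2025, 1, 22), 1),
]

spec_to_ship = [(shipped - spec).days for _, spec, shipped, _ in features]
bugs_per_feature = [bugs for *_, bugs in features]

print(f"Mean spec-to-ship time: {mean(spec_to_ship):.1f} days (should trend down)")
print(f"Mean bugs per feature:  {mean(bugs_per_feature):.1f} (should trend down)")
```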
Quality Metrics
- Test coverage (should increase)
- Production incidents (should decrease)
- Performance regressions (should decrease)
- User satisfaction (should increase)
Knowledge Metrics
- Documented lessons captured
- Code reuse across features
- New team member onboarding time
- Knowledge base growth rate
Organizational Implications
Team Structure
- Small, specialized teams (2-3 people)
- Deep domain expertise
- Isolated responsibilities
- Coordination through orchestration
Staffing
- More strategic roles (architects, product managers)
- Fewer implementation staff needed
- High-leverage orchestrators valued
- Knowledge management critical
Culture
- Embrace AI as collaborator
- Value planning over speed
- Document everything
- Celebrate acceleration, not just shipping
Relationship to Other Concepts
Compound Engineering focuses on:
- Knowledge accumulation
- Recursive improvement
- Small team scaling
- Institutional learning
Differs from:
- Software Factory – Enterprise scale, non-interactive, full automation
- Agentic Development – Mechanisms, not economics
- Digital Twin Universe – Testing infrastructure
Future Evolution
As systems improve, compound engineering will enable:
- Autonomous tool-building (agents create their own tools)
- Self-healing systems (agents fix issues proactively)
- Continuous learning (every production issue teaches system)
- Exponential scaling (compounding accelerates further)
The end state: Systems that improve faster than they grow in complexity.
Getting Started
- Pick a project – Ideally greenfield or well-scoped component
- Adopt planning-first mindset – Spend 80% on planning
- Start with Claude Code – Use orchestration workflow
- Document everything – Capture lessons for reuse
- Measure velocity – Track acceleration over time
- Iterate on process – Refine what works for your team
References
- Sam Schillace: “I Have Seen the Compounding Teams”
- Every: Compound engineering practitioners
- Claude Code: Primary tool for orchestration
- Compounding Teams: Full exploration of concept
- Software Factory: Related but different approach
Related Concepts
- Compounding Teams – Teams operating at compound engineering scale
- Software Factory – Enterprise-scale agentic development
- Agentic Development – Core mechanisms
- Context Graphs – How knowledge persists
- Sam Schillace – Key advocate and practitioner