Fabric
by Daniel Miessler
A modular framework for augmenting humans using AI through reusable prompt patterns and CLI workflows.
See https://github.com/danielmiessler/Fabric
Features
- Pattern library: reusable, versioned prompt templates (patterns) that solve specific human problems.
- CLI‑first workflow: run, combine, and automate patterns from the terminal for reproducible results (see the example after this list).
- Local and remote model support: integrates with local runtimes or hosted LLM providers.
- Sessions & dry‑run: record interactions, inspect prompts before execution, and replay sessions for auditability.
- Extensible connectors: lightweight adapters to call external tools, APIs, and data sources.
- Pattern studio (optional): web UI for pattern management, testing, and analysis.
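A minimal sketch of the CLI‑first workflow. The flags shown (--listpatterns, --pattern, --model) match recent Fabric releases but may differ in yours, so run fabric -h to confirm; the model name is illustrative:

    # List the patterns installed locally
    fabric --listpatterns

    # Pipe any text into a pattern; the result is written to stdout
    cat article.txt | fabric --pattern summarize

    # Pin a specific model for reproducible runs (model name illustrative)
    cat article.txt | fabric --pattern summarize --model gpt-4o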
Superpowers
Fabric’s special power is turning ad‑hoc LLM prompts into composable, shareable building blocks. Instead of one‑off chat prompts, Fabric patterns capture intent, constraints, and examples so teams can:
- Reuse and version prompt logic across projects.
- Build deterministic guardrails and validation around LLM outputs.
- Compose multi‑step pipelines (e.g., extract → summarize → generate) with auditable intermediate outputs, as shown below.
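Because each pattern reads stdin and writes stdout, a multi‑step pipeline is plain shell composition. A sketch using the built‑in extract_wisdom and summarize patterns (file names are illustrative):

    # Stage 1: extract, saving the intermediate output for auditing
    fabric --pattern extract_wisdom < transcript.txt > wisdom.md

    # Stage 2: summarize the audited intermediate result
    fabric --pattern summarize < wisdom.md > summary.md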
Who this is for
- Developers and prompt engineers who prefer CLI workflows and need reproducibility.
- Security, ops, and data teams that need predictable, scriptable LLM tasks integrated with CI/CD.
- Teams that want to share and govern prompt templates across an organization.
What you gain
- Consistent AI outputs via patterns instead of brittle, hand‑crafted prompts.
- Easier automation: pattern scripts can be embedded in pipelines, making LLMs first‑class in dev workflows.
- Better governance: dry‑runs, sessions, and pattern testing reduce surprises and improve safety (see the example below).
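A sketch of that governance workflow, assuming the --dry-run and --session flags present in recent Fabric builds (pattern and session names are illustrative; verify with fabric -h):

    # Inspect the fully rendered prompt without sending it to a model
    fabric --dry-run --pattern analyze_claims < report.txt

    # Record the exchange under a named session for later review
    fabric --session q3-audit --pattern analyze_claims < report.txt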
Pricing
Fabric is open source and free to use. Costs depend on your choice of LLM provider (local vs. hosted) and the infrastructure used to run inference or any hosted services for teams; see the example after this list. Typical cost factors:
- Local inference: hardware costs (GPUs, NVMe storage) and local runtime licenses, if any.
- Hosted inference: provider usage fees (per‑token or subscription pricing).
- Optional managed services or third‑party UIs if deployed for teams (varies by vendor).
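As a rough illustration of the local‑vs‑hosted trade‑off (model names are illustrative and depend on the providers you configure; fabric --listmodels shows what is available in your setup):

    # Hosted provider: per-token or subscription fees apply
    cat notes.txt | fabric --pattern summarize --model gpt-4o

    # Local runtime such as Ollama: no per-call fees, but you pay for hardware
    cat notes.txt | fabric --pattern summarize --model llama3:latest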