Daggr - AI Workflow DAG Builder

Overview

Daggr is an open-source Python library from the Gradio team for building AI workflows using Directed Acyclic Graphs (DAGs). It enables code-first workflow composition with automatic visual inspection of intermediate outputs, rerunnable steps, and built-in state persistence.

Repository: github.com/gradio-app/daggr
Blog Announcement: HuggingFace Blog - Daggr
Organization: Gradio / HuggingFace
License: Open-source
Status: Beta (as of Jan 2026)
Python Requirement: 3.10+

Problem It Solves

Workflow Pain Points

When chaining multiple AI models/APIs together, developers face:

  1. Debugging Hell: With a 10-step pipeline, a failure at step 5 means rerunning the entire pipeline to debug
  2. Lost Intermediate Data: No easy way to inspect what each step produced
  3. Manual Orchestration: Boilerplate code is needed to manage state, caching, and error handling
  4. No Version Control: Visual node editors can’t be version-controlled
  5. Integration Friction: Wiring together different models/APIs requires custom adapters
  6. Inflexible Pipelines: Swapping out models or running partial workflows is hard

Daggr’s Solution

Code-first DAG definition (version controllable)
Automatic visual canvas (auto-generated, not manually drawn)
Inspect any intermediate step (click to see output)
Rerun individual nodes (don’t rerun entire pipeline)
Seamless Gradio/Space integration (zero adapters needed)
State persistence (save progress, resume later)
Multiple node types (Gradio, Python functions, Inference API)

Core Concepts

Three Node Types

1. GradioNode
Calls a Gradio Space or locally-served app:

from daggr import GradioNode  
  
# Use any public Gradio Space as a node  
bg_remover = GradioNode(  
    space="merve/background-removal",  
    inputs=["image"],  
    outputs=["image"]  
)  
  
# Or run Space locally (auto-clones and launches)  
bg_remover = GradioNode(  
    space="merve/background-removal",  
    run_locally=True  # Clones Space, creates venv, runs locally  
)  
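
For context, calling a Space by hand goes through the gradio_client library; a GradioNode presumably automates the equivalent of the following sketch (endpoint name assumed; check the Space's "Use via API" page):

from gradio_client import Client, handle_file

# Manual equivalent of what a GradioNode wraps
client = Client("merve/background-removal")
result = client.predict(
    handle_file("photo.png"),   # upload a local file
    api_name="/predict"         # assumed endpoint name
)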

2. FnNode
Wraps a custom Python function:

from daggr import FnNode  
  
def downscale_image(image, scale=0.5):  
    """Helper function to resize image"""  
    return image.resize((int(image.width * scale),   
                         int(image.height * scale)))  
  
scaler = FnNode(downscale_image)  

3. InferenceNode
Calls models via HuggingFace Inference API:

from daggr import InferenceNode  
  
flux_generator = InferenceNode(  
    model_id="black-forest-labs/FLUX.2-klein-4B",  
    task="text-to-image",  
    inputs=["prompt"],  
    outputs=["image"]  
)  
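
An InferenceNode presumably wraps the Hub's InferenceClient; the manual equivalent looks roughly like this sketch (requires an HF token in your environment):

from huggingface_hub import InferenceClient

# Manual equivalent of the node above (returns a PIL.Image)
client = InferenceClient()
image = client.text_to_image(
    "an isometric 3D game asset, studio lighting",
    model="black-forest-labs/FLUX.2-klein-4B"
)
image.save("generated.png")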

Workflow Definition

Chain nodes together to build a DAG:

from daggr import Daggr  
  
# Create workflow  
app = Daggr()  
  
# Add nodes (automatically creates connections)  
step1 = app.add_node(bg_remover,   
                     inputs={"image": "input_image"})  
step2 = app.add_node(scaler,   
                     inputs={"image": step1.outputs["image"]})  
step3 = app.add_node(flux_generator,  
                     inputs={"prompt": "style_prompt"})  
step4 = app.add_node(trellis_3d,  # another GradioNode, defined like bg_remover  
                     inputs={"image": step2.outputs["image"]})  
  
# Launch visual canvas  
app.launch()  

Key Features

Code-First Approach

Define workflows in Python (not in a visual editor):

# Workflow is version-controllable code  
app = Daggr()  
step1 = app.add_node(model1, inputs={...})  
step2 = app.add_node(model2, inputs={"image": step1.outputs[...]})  

Advantages:

  • Git-friendly (diffs, branches, PRs)
  • IDEs provide autocomplete
  • Parametric (use variables, functions, loops; see the sketch after this list)
  • Testable
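
Because the DAG is plain Python, nodes can also be generated programmatically. A minimal sketch, assuming the add_node signature used elsewhere in this document:

from daggr import Daggr, FnNode

app = Daggr()

# One resize node per scale; s=s freezes the loop variable in each lambda
for s in (0.25, 0.5, 0.75):
    app.add_node(
        FnNode(lambda image, s=s: image.resize(
            (int(image.width * s), int(image.height * s)))),
        inputs={"image": "input_image"}
    )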

Automatic Visual Canvas

Running the code generates:

  • Interactive visual DAG
  • Input boxes for each node
  • Output display for each step
  • Canvas positions saved
  • State persisted locally

Run: python app.py  
→ Visual canvas at http://localhost:7860  
→ Automatic Gradio tunneling (public shareable URL)  

Inspect & Rerun Any Step

Instead of rerunning the entire pipeline:

  1. Run full workflow once
  2. See it fails at step 7
  3. Click step 7 → inspect output
  4. Modify input to step 7
  5. Rerun only step 7 onwards

No need to rerun steps 1-6.
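
The mechanics behind this are plain memoization. A hedged, linear-pipeline sketch of the idea (illustrative only, not daggr's internals):

# Stand-in steps; in daggr these would be nodes
steps = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
results = []

def run_from(k, initial):
    """Rerun steps k..end; steps before k keep their cached results."""
    del results[k:]                        # drop stale downstream results
    value = results[k - 1] if k else initial
    for fn in steps[k:]:
        value = fn(value)
        results.append(value)
    return value

run_from(0, initial=10)   # full run: results == [11, 22, 19]
run_from(2, initial=10)   # reruns only the last step, reusing results[1]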

Seamless Gradio Integration

Since Daggr is built by the Gradio team:

# Just reference the Space name - no adapters needed  
GradioNode(space="username/space-name")  
  
# Automatically fetches API schema, parameters, outputs  
# Works with public AND private Spaces  
# Falls back to remote API if local execution fails  
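
The schema discovery can be reproduced by hand with gradio_client; GradioNode presumably does the equivalent of:

from gradio_client import Client

client = Client("merve/background-removal")
client.view_api()                               # prints endpoints and parameters
schema = client.view_api(return_format="dict")  # same information as a dict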

Multiple Workspaces with “Sheets”

Maintain separate workflow runs/experiments:

# Same workflow, different inputs  
# (illustrative sketch; the sheets API is beta and may change)  
sheet1 = Workflow("experiment_1")  
sheet2 = Workflow("experiment_2")  
sheet3 = Workflow("experiment_3")  
  
# Each has its own state, cached results, etc.  

State Persistence

Automatically saves:

  • Workflow inputs and outputs
  • Canvas position (zoom, pan)
  • Cached results
  • Intermediate values

Resume later without re-running.
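
A hedged sketch of what resume-without-rerunning amounts to (daggr's on-disk format is internal; the file name and cached value here are stand-ins):

import json, pathlib

STATE = pathlib.Path("daggr_state.json")   # hypothetical file name

def load_state():
    return json.loads(STATE.read_text()) if STATE.exists() else {}

results = load_state()                     # restored on restart
if "bg_remover" not in results:            # only compute what is missing
    results["bg_remover"] = "output.png"   # stand-in for the real node output
STATE.write_text(json.dumps(results))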

Complete Example: Image → 3D Asset

Workflow Steps

Input Image  
    ↓  
[GradioNode] Background Removal  
    ↓  
[FnNode] Downscale Image  
    ↓  
[InferenceNode] Image to 3D Style (Flux)  
    ↓  
[GradioNode] 3D Generation (Trellis.2)  
    ↓  
3D Asset Output  

Code Implementation

from daggr import Daggr, GradioNode, FnNode, InferenceNode  
  
app = Daggr()  
  
# Step 1: Background Removal  
# (Run locally - clones the BiRefNet Space)  
bg_remover = app.add_node(  
    GradioNode(  
        space="merve/background-removal",  
        run_locally=True,  
        inputs=["image"],  
        outputs=["image"]  
    ),  
    inputs={"image": "input_image"}  
)  
  
# Step 2: Downscale for efficiency  
def downscale_image(image, scale=0.5):  
    return image.resize((  
        int(image.width * scale),   
        int(image.height * scale)  
    ))  
  
downscaler = app.add_node(  
    FnNode(downscale_image),  
    inputs={"image": bg_remover.outputs["image"]}  
)  
  
# Step 3: Image to 3D style with Flux  
# (image-to-image task so the node can consume the upstream image;  
# the exact task name is an assumption of this example)  
flux_3d = app.add_node(  
    InferenceNode(  
        model_id="black-forest-labs/FLUX.2-klein-4B",  
        task="image-to-image",  
        inputs=["image", "prompt"],  
        outputs=["image"]  
    ),  
    inputs={  
        "image": downscaler.outputs["image"],  
        "prompt": "Convert this to 3D asset style"  
    }  
)  
  
# Step 4: 3D Generation with Trellis.2  
# (consumes the restyled image, matching the diagram above)  
trellis_3d = app.add_node(  
    GradioNode(  
        space="JunzhanFeng/Trellis-2",  
        inputs=["image"],  
        outputs=["glb", "video"]  
    ),  
    inputs={"image": flux_3d.outputs["image"]}  
)  
  
# Launch  
if __name__ == "__main__":  
    app.launch()  

Result

  • Visual DAG showing all steps connected
  • Can inspect output at each stage
  • Can rerun just the Flux step if tweaking prompt
  • State automatically saved
  • Shareable URL for demo

Architecture

How It Works

┌─────────────────────────────────────────┐  
│  Python Code (DAG Definition)           │  
│  app.add_node(...)                      │  
└────────────────┬────────────────────────┘  
                 │  
                 ▼  
        ┌────────────────┐  
        │ DAG Compiler   │  
        │ (Infer schema) │  
        └────────┬───────┘  
                 │  
                 ▼  
    ┌────────────────────────────┐  
    │ Gradio Visual Canvas       │  
    │ • Node representations     │  
    │ • Input/output boxes       │  
    │ • Execution controls       │  
    └────────┬───────────────────┘  
             │  
      ┌──────┴──────┐  
      │             │  
      ▼             ▼  
  Local         Remote  
  Execution     API Calls  
  (FnNode)      (Gradio/Inference)  
      │             │  
      └──────┬──────┘  
             │  
             ▼  
    ┌──────────────────┐  
    │ State Store      │  
    │ • Results cache  │  
    │ • Input values   │  
    │ • Canvas state   │  
    └──────────────────┘  

Execution Model

Full run (all nodes from the start):

Input → Node1 → Node2 → Node3 → Output  
[fresh] [fresh] [fresh] [fresh]  

Rerun from Node2 (uses cached Node1 output):

Input → Node1 → Node2 → Node3 → Output  
[cached][fresh] [fresh] [fresh]  
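
For a branching DAG, the same idea needs a dependency order plus the set of downstream nodes to invalidate. A minimal sketch with the standard library's graphlib (again, not daggr's internals):

from graphlib import TopologicalSorter

# node -> nodes it depends on (matches the three-node chain above)
deps = {"Node1": set(), "Node2": {"Node1"}, "Node3": {"Node2"}}

def downstream(start):
    """start plus every node that transitively depends on it."""
    out = {start}
    for node in TopologicalSorter(deps).static_order():
        if deps[node] & out:
            out.add(node)
    return out

cache = {"Node1": "r1", "Node2": "r2", "Node3": "r3"}   # from a previous run
stale = downstream("Node2")                             # {"Node2", "Node3"}

for node in TopologicalSorter(deps).static_order():
    if node in stale or node not in cache:
        cache[node] = f"fresh {node}"   # stand-in for real execution
    # else: cached result reused as-is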

Fallback Logic

For GradioNode with run_locally=True:

1. Try to run Space locally  
   ↓ (Success) → Use local results  
     
2. If local run fails → Fallback to remote API gracefully  
   ↓ (Success) → Use remote results  
     
3. If both fail → Display error, allow retry  
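
In plain Python, that fallback reduces to nested exception handling; a hedged sketch (function names hypothetical):

def call_with_fallback(run_local, run_remote):
    try:
        return run_local()            # 1. locally cloned Space
    except Exception as local_err:
        try:
            return run_remote()       # 2. hosted remote API
        except Exception as remote_err:
            # 3. surface both errors so the canvas can offer a retry
            raise RuntimeError(f"local: {local_err}; remote: {remote_err}")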

Node Configuration Details

GradioNode

GradioNode(  
    space="username/space-name",     # Space identifier  
    api_name="/predict",              # Optional: API endpoint  
    run_locally=False,                # Clone and run locally?  
    inputs=["input_field1"],          # Expected input names  
    outputs=["output_field1"],        # Expected output names  
    fn_index=0                        # Optional: function index  
)  

FnNode

FnNode(  
    fn=my_function,                   # Callable Python function  
    inputs=["param1", "param2"],      # Parameter names  
    outputs=["result"]                # Return value name  
)  

InferenceNode

InferenceNode(  
    model_id="org/model",             # Model on HuggingFace Hub  
    task="text-to-image",             # Task type  
    inputs=["prompt"],                # Input field names  
    outputs=["image"]                 # Output field names  
)  

Deployment

Local Development

pip install daggr  
python app.py  
# → http://localhost:7860 (auto-opens)  

Public Demo (Instant)

# Daggr automatically creates shareable URL  
# See link in terminal output  
# Share with others for live demo  

Production Hosting (HuggingFace Spaces)

# 1. Create Space on HuggingFace  
# 2. Add requirements.txt with daggr  
# 3. Deploy app.py  
# 4. Runs as persistent Gradio app  
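
An assumed minimal layout for such a Space (untested sketch):

app.py            → the workflow script from above (ends with app.launch())  
requirements.txt  → contains the single line: daggr  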

Current Limitations

Beta status: APIs may change between versions
Data loss possible: State persistence not guaranteed during updates
No multi-user: Single-user workflow (no concurrent editing)
Limited orchestration: For production pipelines, consider Airflow/Prefect
GPU management: Manual for advanced scenarios

Use Cases

Development & Debugging

  • Rapid iteration on ML pipelines
  • Inspect intermediate outputs
  • Test different model combinations

Demo Creation

  • Combine multiple Spaces into single app
  • One-click public demo generation
  • Interactive exploration interface

Experimentation

  • Compare different models/configs
  • A/B test different approaches
  • Track results across experiments

Educational

  • Show how complex ML workflows work
  • Interactive tutorials
  • Demonstrate pipeline dependencies

Comparison with Alternatives

Tool                       Approach                      Best For
Daggr                      Code-first DAG, auto-visual   Rapid ML experimentation
Airflow                    Code-first DAG, production    Complex production pipelines
Prefect                    Code-first, modern            Cloud-native workflows
Node editors (n8n, Make)   Visual composition            Non-technical users
LangChain                  Agent/chain framework         LLM reasoning chains
Gradio                     UI component library          Single app interface

When to use Daggr:

  • Building ML/AI workflow demos
  • Chaining Gradio Spaces
  • Rapid iteration and debugging
  • Educational/exploratory work

Integration Ecosystem

Native Support

  • Gradio Spaces: Seamless, zero-configuration
  • HuggingFace Inference API: Direct integration
  • Python functions: Any callable
  • Custom APIs: Via FnNode wrappers (see the sketch below)
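
A hedged sketch of that last pattern: wrapping an arbitrary HTTP API in a function makes it a node (endpoint URL hypothetical):

import requests
from daggr import FnNode

def caption_via_api(image_url):
    resp = requests.post(
        "https://api.example.com/caption",   # hypothetical endpoint
        json={"url": image_url},
        timeout=30
    )
    resp.raise_for_status()
    return resp.json()["caption"]

captioner = FnNode(caption_via_api)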

Getting Started

Installation

pip install daggr  
# or  
uv pip install daggr  

3-Minute Example

from daggr import Daggr, GradioNode  
  
app = Daggr()  
  
# Use existing Spaces as workflow steps  
step1 = app.add_node(  
    GradioNode(space="merve/background-removal"),  
    inputs={"image": "input_image"}  
)  
  
app.launch()  

Key Advantages

Code + Visual: Get versioning + UI inspection
Zero Integration Overhead: Gradio Spaces work immediately
Debug-Friendly: Rerun individual steps, inspect outputs
State Persisted: Resume workflows, cache results
Lightweight: Minimal dependencies, easy to extend
Shareable URLs: Instant public demos via Gradio
Open Source: Community-driven development

Sources

  1. HuggingFace Blog - Introducing Daggr - https://huggingface.co/blog/daggr
  2. GitHub - Daggr - https://github.com/gradio-app/daggr
  3. HuggingFace Spaces - Featured Daggr Workflows - https://huggingface.co/collections/ysharma/daggr-hf-spaces