# LionAGI Starter: generate a typed TestPlan, dispatch steps, summarize results

This is a minimal starter example (pseudocode plus a few small, real code snippets) showing how to use LionAGI to:

  1. Turn a high-level test requirement into a typed TestPlan (Pydantic)
  2. Dispatch steps to a test-runner via an HTTP tool adapter
  3. Collect results and produce a structured TestResultSummary

Note: adapt imports and APIs to the LionAGI version used in your repo.

## Pydantic schemas

```python
from pydantic import BaseModel, Field
from typing import List

class TestStep(BaseModel):
    id: str
    description: str
    runner_selector: str  # e.g., "fleet:ubuntu-22.04 tags:webserver"
    command: str
    timeout_seconds: int = 300

class TestPlan(BaseModel):
    plan_id: str
    objective: str
    steps: List[TestStep]

class StepResult(BaseModel):
    id: str
    success: bool
    stdout: str | None = None
    stderr: str | None = None
    artifacts: List[str] = Field(default_factory=list)  # S3 URLs

class TestResultSummary(BaseModel):
    plan_id: str
    overall_success: bool
    step_results: List[StepResult]
    summary: str
```
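
As a quick sanity check, the schemas can be exercised directly. The snippet below assumes Pydantic v2 (model_validate / model_dump_json); the example values are placeholders.

```python
# Quick check that a plan payload parses into the typed TestPlan (Pydantic v2 API)
plan = TestPlan.model_validate({
    "plan_id": "plan-001",
    "objective": "Verify login flow on v2.1 for ubuntu webservers",
    "steps": [
        {
            "id": "step-1",
            "description": "Check that the login page returns HTTP 200",
            "runner_selector": "fleet:ubuntu-22.04 tags:webserver",
            "command": "curl -sf https://app.example.test/login -o /dev/null",
        }
    ],
})
print(plan.model_dump_json(indent=2))
```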
  
## LionAGI Branch pseudocode  
  
```python
# Pseudocode: adapt to the actual lionagi API in your installed version
from lionagi import Branch, ModelProvider

provider = ModelProvider("openai", api_key=...)
branch = Branch(
    provider=provider,
    system_prompt="You are a QA planner. Return TestPlan JSON strictly matching the TestPlan schema.",
)

# 1) generate a TestPlan from the objective
objective = "Verify login flow on v2.1 for ubuntu webservers"
response = branch.call("Generate a TestPlan for the following objective:\n" + objective, response_schema=TestPlan)
plan: TestPlan = response.parsed

# 2) dispatch each step via the 'dispatch_test' tool
for step in plan.steps:
    dispatch_payload = {"step_id": step.id, "runner_selector": step.runner_selector, "command": step.command}
    # 'dispatch_test' is a tool adapter that triggers a runner and returns an execution_id
    tool_result = branch.call_tool("dispatch_test", input=dispatch_payload)
    # record tool_result in the action log

# 3) poll or wait for runner callbacks; once results arrive, feed them back into the branch
#    (collected_results is filled by the polling/callback sketches below)
for result in collected_results:
    branch.call("Process step result", input=result)

# 4) ask the model to produce the final TestResultSummary
final = branch.call("Summarize the test run and produce a TestResultSummary JSON", response_schema=TestResultSummary)
summary: TestResultSummary = final.parsed
print(summary.model_dump_json())
```
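
Step 3 above relies on a collected_results list that the pseudocode never builds. One way to populate it is to poll each runner's status_url returned by the dispatch adapter; the status payload shape below ("status", "result") is an assumption about your runner, not a LionAGI or adapter guarantee.

```python
# Polling sketch: assumes the runner's status endpoint returns {"status": ..., "result": {...}}
import time

import requests

def poll_execution(status_url: str, timeout_seconds: int = 300, interval: float = 5.0) -> dict:
    """Poll a runner's status URL until the execution finishes or the timeout expires."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        body = requests.get(status_url, timeout=10).json()
        if body.get("status") in ("succeeded", "failed"):
            return body["result"]  # expected to carry the StepResult fields
        time.sleep(interval)
    raise TimeoutError(f"execution at {status_url} did not finish within {timeout_seconds}s")

# e.g. collected_results = [poll_execution(url) for url in status_urls]
```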

## Tool adapter: simple HTTP dispatch (Flask example)

```python
# A tiny runner adapter that LionAGI can call via HTTP
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

def choose_runner(runner_selector: str) -> str:
    """Resolve a selector to a runner base URL (round-robin, or query the FleetDM API)."""
    raise NotImplementedError("wire this to your runner inventory")

@app.route('/dispatch_test', methods=['POST'])
def dispatch_test():
    payload = request.json
    # pick a runner that matches the selector
    runner_url = choose_runner(payload['runner_selector'])
    res = requests.post(
        runner_url + '/run',
        json={"command": payload['command'], "timeout": payload.get('timeout_seconds', 300)},
        timeout=30,
    )
    # the runner returns execution_id and status_url (see the runner contract below)
    return jsonify(res.json())

if __name__ == '__main__':
    app.run(port=8080)
```
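
If your runners push results instead of being polled, the adapter can expose a matching /callback endpoint. The in-memory list below is for illustration only; the payload fields follow the runner contract in the next section.

```python
# Companion /callback endpoint (sketch): runners POST their final result here
collected_results: list[dict] = []  # in-memory store, illustration only

@app.route('/callback', methods=['POST'])
def callback():
    payload = request.json  # { execution_id, success, stdout, stderr, artifacts }
    collected_results.append(payload)
    return jsonify({"ok": True})
```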

## Runner contract (examples)

  • POST /run with { command, timeout }; the runner returns { execution_id, status_url }
  • The runner POSTs its final result to /callback with { execution_id, success, stdout, stderr, artifacts }
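
A minimal runner that satisfies this contract might look like the sketch below. It executes the command synchronously and posts the callback before replying, which is a simplification; the callback URL and the /status path are assumptions about your infrastructure.

```python
# Minimal runner sketch: executes the command, then POSTs the result to the adapter's /callback
import subprocess
import uuid

import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
CALLBACK_URL = "http://adapter:8080/callback"  # assumed adapter address

@app.route('/run', methods=['POST'])
def run():
    payload = request.json
    execution_id = str(uuid.uuid4())
    proc = subprocess.run(
        payload["command"], shell=True, capture_output=True, text=True,
        timeout=payload.get("timeout", 300),
    )
    requests.post(CALLBACK_URL, json={
        "execution_id": execution_id,
        "success": proc.returncode == 0,
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "artifacts": [],
    }, timeout=10)
    # status_url is illustrative; this sketch does not implement a /status endpoint
    return jsonify({"execution_id": execution_id, "status_url": f"/status/{execution_id}"})
```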

## Notes

  • Gate any destructive or write action behind allow_changes and explicit human approval.
  • Add idempotency keys when dispatching to avoid duplicate executions on retries (see the sketch after this list).
  • Attach logs/artifacts to S3 and reference them in the TestResultSummary.
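
A simple way to get idempotent dispatch is to derive a deterministic key from the plan and step and send it with every dispatch request, letting the adapter drop repeats. The header name and helper below are illustrative assumptions, not part of LionAGI or the adapter above.

```python
# Idempotent dispatch sketch: deterministic key per (plan_id, step_id), deduplicated by the adapter
import hashlib

import requests

def idempotency_key(plan_id: str, step_id: str) -> str:
    return hashlib.sha256(f"{plan_id}:{step_id}".encode()).hexdigest()

def dispatch_once(adapter_url: str, plan_id: str, step: dict) -> dict:
    key = idempotency_key(plan_id, step["step_id"])
    res = requests.post(
        f"{adapter_url}/dispatch_test",
        json=step,
        headers={"Idempotency-Key": key},  # adapter should return the original response for repeats
        timeout=30,
    )
    res.raise_for_status()
    return res.json()
```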

Status: DRAFT. This is a sample starter; adapt it to your LionAGI API version and infrastructure.