How goose Generates the Prompt for the LLM

1. Without Recipes (Basic Request)

  • The user provides a request (prompt) via CLI or Desktop interface.
  • Goose collects the user input and prepares a context with available tools (extensions) for the LLM.
  • The LLM receives:
    • The user’s request/prompt.
    • A list of available tools (extensions), with their parameters and capabilities.
  • The LLM may generate tool calls as part of its response; goose executes them and feeds the results back to the model (1)(2)(3).
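
The assembled context can be pictured as a single chat-completion request. A minimal sketch, using the common OpenAI-style tool schema for illustration (goose's internal types differ, and `developer__shell` is a hypothetical tool name):

```python
# Sketch of the context goose assembles for a basic (no-recipe) run.
# The field layout below follows the common OpenAI-style tool-calling
# schema; it is illustrative, not goose's actual internal representation.

def build_basic_context(user_prompt, tools):
    """Combine the user's prompt with the available tool definitions."""
    return {
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": tools,
    }

# A hypothetical tool exposed by an extension:
shell_tool = {
    "type": "function",
    "function": {
        "name": "developer__shell",
        "description": "Run a shell command",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}

context = build_basic_context("List the files in this repo", [shell_tool])
```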

2. With Recipes (No Sub-Recipes)

  • The user runs goose run --recipe <file> --params ....
  • Goose loads the recipe file, which may define:
    • instructions and/or prompt fields, which can include template variables.
    • Parameters (supplied by the user or falling back to declared defaults).
    • Required extensions.
  • Goose substitutes parameters into the template, forming the prompt.
  • The LLM receives:
    • The rendered prompt/instructions (with parameters filled).
    • The list of available extensions/tools.
  • The LLM uses this structured context to decide which tool(s) to call (4).
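
The substitution step can be sketched with a simple stand-in for goose's template engine (goose uses a full templating library; this just fills `{{ name }}` placeholders the way recipe files write them):

```python
import re

# Minimal stand-in for goose's recipe templating: fill {{ name }}
# placeholders from user-supplied parameters, falling back to defaults.
# Illustrative only; goose itself uses a real template engine.

def render_prompt(template, params, defaults=None):
    """Substitute {{ param }} placeholders to produce the final prompt."""
    merged = {**(defaults or {}), **params}

    def substitute(match):
        key = match.group(1)
        if key not in merged:
            raise KeyError(f"missing required parameter: {key}")
        return str(merged[key])

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

prompt = render_prompt(
    "Review the file {{ path }} and summarize it in {{ style }} style.",
    {"path": "src/main.rs"},
    defaults={"style": "concise"},
)
# prompt == "Review the file src/main.rs and summarize it in concise style."
```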

3. With Recipes and Sub-Recipes

  • The user runs a main recipe that references sub-recipes in its sub_recipes field.
  • Each sub-recipe is defined separately, with its own prompt/instructions and parameters.
  • When the main recipe runs:
    • Goose generates a tool for each sub-recipe.
    • The main recipe prompt can direct the LLM to use these sub-recipe tools in sequence or conditionally.
    • Parameters can be passed to sub-recipes either as fixed values or inherited from the main recipe.
    • Sub-recipes run in isolated sessions (no shared context/history with each other or the main recipe).
  • The LLM receives:
    • The main recipe prompt/instructions and available sub-recipe tools (as callable tools).
    • The list of extensions/tools configured for the main and sub-recipes.
  • The LLM may call sub-recipe tools as part of its workflow; goose then executes the sub-recipe in a new session and feeds its output back to the model (5)(4).
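
The "one tool per sub-recipe" idea can be sketched as follows; the `subrecipe__` prefix, field names, and string-typed parameters are assumptions for illustration, not goose's actual schema:

```python
# Sketch: each entry in the main recipe's sub_recipes field is surfaced
# to the LLM as a callable tool. Naming and typing here are illustrative.

def sub_recipe_to_tool(name, description, params):
    """Build a tool definition the LLM can call to run the sub-recipe."""
    return {
        "type": "function",
        "function": {
            "name": f"subrecipe__{name}",
            "description": description,
            "parameters": {
                "type": "object",
                # Assume string-valued parameters for simplicity.
                "properties": {p: {"type": "string"} for p in params},
                "required": list(params),
            },
        },
    }

tool = sub_recipe_to_tool("run_tests", "Run the project test suite", ["target"])
```

When the LLM calls `subrecipe__run_tests`, goose would spin up a fresh, isolated session for that sub-recipe and return its output as the tool result.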

When/How MCP Tools (Extensions) Are Included

  • At the start of each LLM request, goose compiles a list of all enabled extensions (MCP servers).
  • Each extension exposes one or more tools with defined parameters and actions.
  • This list is included in the context sent to the LLM for every step of the interaction.
  • The LLM is aware of which tools it can call, and can request tool executions as part of its reasoning loop (1)(2)(3)(4).
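
Compiling that per-request tool list can be sketched like this; the extension-name prefix and dict shapes are assumptions for illustration:

```python
# Sketch of flattening enabled extensions (MCP servers) into one tool list.
# Each extension contributes its tools; names are prefixed with the
# extension name to keep them unique. Illustrative, not goose's code.

def compile_tool_list(extensions):
    """Flatten every enabled extension's tools into one list for the LLM."""
    tools = []
    for ext in extensions:
        if not ext.get("enabled", True):
            continue  # disabled extensions are invisible to the LLM
        for tool in ext["tools"]:
            tools.append({**tool, "name": f"{ext['name']}__{tool['name']}"})
    return tools

tools = compile_tool_list([
    {"name": "developer", "enabled": True,
     "tools": [{"name": "shell"}, {"name": "text_editor"}]},
    {"name": "memory", "enabled": False, "tools": [{"name": "remember"}]},
])
# tools == [{"name": "developer__shell"}, {"name": "developer__text_editor"}]
```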

Mermaid Diagram

```mermaid
flowchart TD
    A[User Input - CLI/Desktop] --> B{Recipe Provided?}
    B -- No --> C[Prepare prompt from user input]
    B -- Yes --> D[Load recipe - parse instructions, prompt, params, extensions]
    D --> E{Sub-Recipes?}
    E -- No --> F[Substitute parameters into prompt]
    E -- Yes --> G[Register sub-recipe tools]
    G --> H[LLM can invoke sub-recipe tools]
    F --> I[Prepare context for LLM]
    H --> I
    C --> I
    I --> J[Attach list of available extensions/tools]
    J --> K[Send prompt + tools to LLM]
    K --> L{LLM generates tool calls?}
    L -- Yes --> M[Goose executes tool call - extension/sub-recipe]
    M --> N[Results returned to LLM]
    L -- No --> O[LLM generates final response]
    N --> O
    O --> P[Show output to user]
```
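
The tool-call loop at the bottom of the diagram can be sketched as a few lines of Python; `call_llm` and `execute_tool` are stand-ins for goose's provider and extension layers, not real APIs:

```python
# Minimal sketch of the diagram's tool-call loop: send context to the
# model, execute any tool calls it makes, and repeat until it answers
# without one. `call_llm` / `execute_tool` are illustrative stand-ins.

def agent_loop(messages, tools, call_llm, execute_tool, max_steps=10):
    """Run the LLM reasoning loop until a final (tool-free) response."""
    for _ in range(max_steps):
        reply = call_llm(messages, tools)
        messages.append(reply)
        if not reply.get("tool_calls"):
            return reply["content"]          # final answer shown to the user
        for call in reply["tool_calls"]:
            result = execute_tool(call)      # extension or sub-recipe tool
            messages.append({"role": "tool", "content": result})
    raise RuntimeError("tool-call loop exceeded max_steps")

# Scripted demo: the "model" asks for one tool call, then answers.
script = iter([
    {"role": "assistant", "content": None,
     "tool_calls": [{"name": "developer__shell", "args": {"command": "ls"}}]},
    {"role": "assistant", "content": "Repo contains 3 files.", "tool_calls": []},
])
answer = agent_loop(
    [{"role": "user", "content": "List the files"}],
    tools=[],
    call_llm=lambda msgs, tools: next(script),
    execute_tool=lambda call: "file_a file_b file_c",
)
# answer == "Repo contains 3 files."
```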

Summary Table

| Stage | With Recipes | With Sub-Recipes | Without Recipes |
| --- | --- | --- | --- |
| Prompt Source | Recipe file | Main + sub-recipe files | User input |
| Parameters | Defined in recipe | Inherited or fixed | N/A |
| Extensions Included | In recipe | Main + sub-recipe | Default/global |
| Sub-Recipe Tools | Not present | Available as tools | N/A |
| LLM Receives | Rendered prompt + tools | Main prompt + sub-recipe tools + extensions | User input + tools |
| Tool Calls Possible | Yes | Yes (including sub-recipes) | Yes |

References