Tool Pipelines: Eliminating LLM Round-Trips for Deterministic Tool Chains
In a standard ReAct loop, every tool call requires an LLM round-trip. The agent calls a search tool, receives results, reasons about them, calls a filter tool, receives filtered output, reasons again, calls a format tool, and so on. Each step costs tokens, adds latency, and requires the LLM to make a decision that is often trivial — the next step in the chain is predetermined.
For deterministic data transformation chains, the LLM adds no reasoning value between steps. It just passes the output of one tool as input to the next. The interesting question is whether you can collapse that chain into a single tool call.
The ToolPipeline Abstraction
AgentEnsemble provides ToolPipeline, which chains multiple tools into a single compound tool. The LLM calls it once; all steps execute sequentially without LLM round-trips between them.
```
// Standard ReAct loop (3 LLM round-trips for tool mediation):
LLM -> search_tool -> LLM -> filter_tool -> LLM -> format_tool -> LLM

// With ToolPipeline (0 extra round-trips):
LLM -> search_then_filter_then_format -> LLM
```

The simplest way to create one:
```java
ToolPipeline pipeline = ToolPipeline.of(
    new WebSearchTool(provider),
    new JsonParserTool(),
    FileWriteTool.of(outputPath));
// name: "web_search_then_json_parser_then_file_write"
```

Register it on a task like any other tool:
```java
var task = Task.builder()
    .description("Research AI trends and save the top result to disk")
    .expectedOutput("Confirmation that the result was saved")
    .tools(List.of(pipeline))
    .build();
```

Data Flow and Adapters
By default, ToolResult.getOutput() from step N is passed as the input to step N+1. This works when tool outputs are directly consumable by the next tool.
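The data flow is easy to picture as a fold over the step list. This is a minimal sketch of that threading, not AgentEnsemble's actual internals — tools are reduced to plain String-to-String functions here:

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class PipelineSketch {
    // Thread each step's output into the next step's input,
    // with no decision point (and no LLM call) between steps.
    static String run(String input, List<UnaryOperator<String>> steps) {
        String current = input;
        for (UnaryOperator<String> step : steps) {
            current = step.apply(current); // output of step N feeds step N+1
        }
        return current;
    }

    public static void main(String[] args) {
        String result = run("  42  ", List.of(
            String::trim,                                  // "parse"
            s -> String.valueOf(Integer.parseInt(s) * 2),  // "transform"
            s -> "result=" + s));                          // "format"
        System.out.println(result); // result=84
    }
}
```

The loop makes the cost model concrete: each iteration is a method call, not an inference request.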
When you need to reshape data between steps, attach an adapter:
```java
ToolPipeline pipeline = ToolPipeline.builder()
    .name("extract_and_calculate")
    .description("Extract a numeric field from JSON and apply a formula")
    .step(new JsonParserTool())
    .adapter(result -> result.getOutput() + " * 1.1")
    .step(new CalculatorTool())
    .build();
```

The adapter transforms the JsonParserTool output (e.g., "149.99") into a calculator expression ("149.99 * 1.1") before passing it to CalculatorTool. Adapters have full access to ToolResult, including getStructuredOutput() for typed payloads.
This is the key design decision: adapters are plain Java functions, not LLM calls. They handle the deterministic reshaping that the LLM would otherwise do at full inference cost.
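To make that point concrete, the adapter from the example above is just a Function applied between steps. This sketch uses a stand-in ToolResult record, not AgentEnsemble's real class:

```java
import java.util.function.Function;

// Stand-in for the real ToolResult; just enough to show the shape.
record ToolResult(String output) {
    String getOutput() { return output; }
}

public class AdapterSketch {
    public static void main(String[] args) {
        // The adapter runs as plain Java between steps: no tokens,
        // no latency beyond a method call.
        Function<ToolResult, String> adapter = r -> r.getOutput() + " * 1.1";
        String calculatorInput = adapter.apply(new ToolResult("149.99"));
        System.out.println(calculatorInput); // 149.99 * 1.1
    }
}
```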
Error Strategies
Pipelines support two error strategies:
FAIL_FAST (default) stops the pipeline on the first failed step and returns that failure to the LLM immediately. Subsequent steps are never executed.
CONTINUE_ON_FAILURE continues executing subsequent steps even when an intermediate step fails. The failed step’s error message is forwarded as input to the next step.
```java
ToolPipeline pipeline = ToolPipeline.builder()
    .name("resilient_pipeline")
    .description("Continues even when a step fails")
    .step(stepA)
    .step(stepB)
    .step(stepC)
    .errorStrategy(PipelineErrorStrategy.CONTINUE_ON_FAILURE)
    .build();
```

The choice between them depends on whether downstream steps can recover from upstream failures. For a search-then-save pipeline, FAIL_FAST makes sense — there is nothing to save if the search failed. For a multi-source aggregation, CONTINUE_ON_FAILURE lets the pipeline produce partial results.
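The behavioral difference between the two strategies can be sketched as a branch inside the execution loop. The names mirror the text; the code is an illustration under those assumptions, not AgentEnsemble's implementation:

```java
import java.util.List;
import java.util.function.UnaryOperator;

enum Strategy { FAIL_FAST, CONTINUE_ON_FAILURE }

public class ErrorStrategySketch {
    static String run(String input, List<UnaryOperator<String>> steps, Strategy strategy) {
        String current = input;
        for (UnaryOperator<String> step : steps) {
            try {
                current = step.apply(current);
            } catch (RuntimeException e) {
                if (strategy == Strategy.FAIL_FAST) {
                    // Stop immediately; this failure goes back to the LLM.
                    return "FAILED: " + e.getMessage();
                }
                // Forward the error message as the next step's input.
                current = "error: " + e.getMessage();
            }
        }
        return current;
    }

    public static void main(String[] args) {
        List<UnaryOperator<String>> steps = List.of(
            s -> s + " -> a",
            s -> { throw new IllegalStateException("step B failed"); },
            s -> s + " -> c");
        System.out.println(run("in", steps, Strategy.FAIL_FAST));           // FAILED: step B failed
        System.out.println(run("in", steps, Strategy.CONTINUE_ON_FAILURE)); // error: step B failed -> c
    }
}
```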
Approval Gates Within Pipelines
Steps inside a pipeline that require human approval will pause mid-pipeline, exactly as if they were standalone tools. The pipeline propagates the ensemble’s ReviewHandler to all nested steps automatically.
```java
ToolPipeline pipeline = ToolPipeline.of(
    new JsonParserTool(),
    FileWriteTool.builder(outputPath)
        .requireApproval(true)
        .build());
```

This means you can build pipelines that include a human checkpoint before a destructive operation (like writing to disk or calling an external API) without losing the token savings for the deterministic steps before the checkpoint.
Nesting and Composition
A ToolPipeline implements AgentTool, so it can be used as a step inside another pipeline:
```java
ToolPipeline inner = ToolPipeline.of("step_a", "desc", toolA, toolB);
ToolPipeline outer = ToolPipeline.of("outer", "desc", inner, toolC);
```

This lets you build reusable pipeline fragments and compose them into larger chains. Each pipeline records its own aggregate metrics (timing, success/failure counts) in addition to the per-step metrics from individual tools.
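Nesting works because this is the classic composite pattern: a pipeline satisfies the same interface as the tools it contains. A minimal sketch with stand-in types (AgentTool and Pipeline here are illustrations, not AgentEnsemble's real classes):

```java
import java.util.List;

// Stand-in single-method tool interface, so lambdas work as tools.
interface AgentTool {
    String execute(String input);
}

// A pipeline IS an AgentTool, so it can be a step in another pipeline.
class Pipeline implements AgentTool {
    private final List<AgentTool> steps;
    Pipeline(List<AgentTool> steps) { this.steps = steps; }
    public String execute(String input) {
        String current = input;
        for (AgentTool step : steps) current = step.execute(current);
        return current;
    }
}

public class NestingSketch {
    public static void main(String[] args) {
        AgentTool toolA = s -> s + ".a";
        AgentTool toolB = s -> s + ".b";
        AgentTool toolC = s -> s + ".c";
        Pipeline inner = new Pipeline(List.of(toolA, toolB));
        Pipeline outer = new Pipeline(List.of(inner, toolC)); // inner used as a step
        System.out.println(outer.execute("x")); // x.a.b.c
    }
}
```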
When to Use Pipelines vs. Separate Tools
The decision boundary is whether the LLM needs to reason between steps.
Use ToolPipeline when steps are deterministic and order-locked — the LLM should not skip or reorder them, and the data transformations between steps are mechanical. The full chain appears as one operation to the LLM.
Use separate tools when the LLM needs to decide which tool to call next based on intermediate results, or when intermediate results are useful for the LLM to see and reason about.
In practice, this means pipelines work well for data retrieval and transformation chains (search, parse, filter, write), while separate tools work better for exploratory workflows where the agent needs to adapt its approach based on what it finds.
The Broader Pattern
ToolPipeline is one instance of a broader design principle in AgentEnsemble: when something is deterministic, do not pay LLM inference costs for it. This same principle appears in deterministic-only orchestration (tasks that never call an LLM), typed tool inputs (schema validation without LLM intervention), and phase-level workflow grouping (execution order declared in code, not negotiated by the LLM).
The common thread is that the framework should handle mechanical work mechanically, and reserve LLM inference for decisions that actually require reasoning.
The full tool pipeline guide is in the documentation.
Curious whether you have seen tool chains where the boundary between “deterministic” and “needs reasoning” is ambiguous, and how you would draw that line.