# Workflows
A workflow defines how tasks are executed. AgentEnsemble supports three strategies: SEQUENTIAL, HIERARCHICAL, and PARALLEL.
As of v2.0.0, declaring a workflow is optional. When you omit .workflow(...), the framework infers the appropriate strategy from your task context declarations. See Workflow Inference below.
## Workflow Inference

When no `.workflow(...)` call is made on the builder, the framework automatically infers the best execution strategy at run time, before any tasks execute:
| Condition | Inferred workflow |
|---|---|
| No task has a context dependency on another task in the ensemble | SEQUENTIAL |
| At least one task declares a `context(...)` dependency on another ensemble task | PARALLEL (DAG-based) |
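The inference rule itself is simple enough to sketch. The following standalone snippet (using a hypothetical `TaskSpec` record as a stand-in for the framework's `Task`) mirrors the table above; it is illustrative only, not the framework's actual code:

```java
import java.util.List;

public class InferenceSketch {
    // Minimal stand-in for Task: just a context (dependency) list
    record TaskSpec(String description, List<TaskSpec> context) {}

    enum Workflow { SEQUENTIAL, PARALLEL }

    // SEQUENTIAL unless at least one task declares a context dependency
    static Workflow infer(List<TaskSpec> tasks) {
        boolean hasDeps = tasks.stream().anyMatch(t -> !t.context().isEmpty());
        return hasDeps ? Workflow.PARALLEL : Workflow.SEQUENTIAL;
    }

    public static void main(String[] args) {
        TaskSpec a = new TaskSpec("research", List.of());
        TaskSpec b = new TaskSpec("write", List.of(a)); // depends on a
        System.out.println(infer(List.of(a)));    // SEQUENTIAL
        System.out.println(infer(List.of(a, b))); // PARALLEL
    }
}
```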
This means the simplest use case requires no workflow configuration at all:
```java
// No .workflow() call -- sequential is inferred (no context deps)
EnsembleOutput output = Ensemble.builder()
    .chatLanguageModel(model)
    .task(researchTask)
    .task(writeTask)
    .build()
    .run();
```

And once you introduce context dependencies, parallelism is inferred automatically:
```java
var taskA = Task.builder()
    .description("Research AI trends")
    .expectedOutput("Report")
    .agent(researcher)
    .build();

var taskB = Task.builder()
    .description("Gather market data")
    .expectedOutput("Data")
    .agent(analyst)
    .build();

var taskC = Task.builder()
    .description("Synthesize findings")
    .expectedOutput("Combined report")
    .agent(writer)
    .context(List.of(taskA, taskB)) // declares deps -> PARALLEL inferred
    .build();

// No .workflow() -- PARALLEL is inferred because taskC declares context deps
EnsembleOutput output = Ensemble.builder()
    .task(taskA)
    .task(taskB)
    .task(taskC)
    .build()
    .run();
```

### Explicit Override

An explicit `.workflow(...)` call always takes precedence over inference:
```java
// Force sequential even if context deps exist
Ensemble.builder()
    .task(taskA)
    .task(taskB)
    .workflow(Workflow.SEQUENTIAL) // explicit override
    .build()
    .run();
```

### Context Ordering Validation

When the workflow is inferred as PARALLEL, context ordering validation is skipped: the DAG handles execution order regardless of task list position. When SEQUENTIAL is either inferred (no context deps) or explicitly set, context tasks must appear before their dependents in the list:
```java
// This would fail SEQUENTIAL ordering validation
// (secondTask declared before firstTask, but secondTask depends on firstTask)
Ensemble.builder()
    .task(secondTask) // violation: appears before firstTask
    .task(firstTask)
    .workflow(Workflow.SEQUENTIAL) // explicit SEQUENTIAL triggers ordering check
    .build()
    .run(); // throws ValidationException
```

## SEQUENTIAL

Tasks execute one after another in the order they are declared. Each task that declares context dependencies receives those prior outputs injected into its agent's prompt.
```java
Ensemble.builder()
    .agent(researcher)
    .agent(writer)
    .task(researchTask)
    .task(writeTask) // writeTask has context(List.of(researchTask))
    .workflow(Workflow.SEQUENTIAL)
    .build()
    .run();
```

Note: SEQUENTIAL is the effective default when no context dependencies are declared between tasks. You can omit `.workflow(Workflow.SEQUENTIAL)` and the framework will infer it.
### Execution Order

Tasks run in list order. When `.workflow(Workflow.SEQUENTIAL)` is set explicitly, the ensemble validates that context tasks always appear before the tasks that reference them. This validation happens before any task execution or LLM calls. If the ordering is violated, a ValidationException is thrown at run time during `run()`.
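The check itself amounts to an index comparison: every context task must sit earlier in the list than the task that references it. A standalone sketch (using a hypothetical `TaskSpec` stand-in and `IllegalStateException`; not the framework's actual validator):

```java
import java.util.List;

public class OrderingCheckSketch {
    // Minimal stand-in for Task: a description plus its context (dependency) list
    record TaskSpec(String description, List<TaskSpec> context) {}

    // Every context task must appear earlier in the list than its dependent
    static void validateSequentialOrdering(List<TaskSpec> tasks) {
        for (int i = 0; i < tasks.size(); i++) {
            for (TaskSpec dep : tasks.get(i).context()) {
                int depIndex = tasks.indexOf(dep);
                if (depIndex < 0 || depIndex >= i) {
                    throw new IllegalStateException(
                        "Task \"" + tasks.get(i).description()
                        + "\" references \"" + dep.description()
                        + "\", which does not appear earlier in the list");
                }
            }
        }
    }

    public static void main(String[] args) {
        TaskSpec first = new TaskSpec("research", List.of());
        TaskSpec second = new TaskSpec("write", List.of(first));
        validateSequentialOrdering(List.of(first, second)); // valid ordering
        try {
            validateSequentialOrdering(List.of(second, first)); // dependent listed first
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```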
### Context Injection

When a task has a non-empty context list, each referenced task's output is injected into the agent's user prompt as a "Context from prior tasks" section. The agent uses this to inform its response.
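The injected section has roughly this shape. A standalone sketch, assuming a hypothetical `TaskOutput` holder; the framework's exact formatting may differ:

```java
import java.util.List;

public class ContextInjectionSketch {
    // Hypothetical holder for a completed task's description and raw output
    record TaskOutput(String description, String raw) {}

    // Prepend prior task outputs to the user prompt as a labelled section
    static String injectContext(String userPrompt, List<TaskOutput> priorOutputs) {
        if (priorOutputs.isEmpty()) return userPrompt;
        StringBuilder sb = new StringBuilder("Context from prior tasks:\n");
        for (TaskOutput out : priorOutputs) {
            sb.append("- ").append(out.description())
              .append(":\n").append(out.raw()).append("\n");
        }
        return sb.append("\n").append(userPrompt).toString();
    }

    public static void main(String[] args) {
        String prompt = injectContext(
            "Synthesize the findings into a report.",
            List.of(new TaskOutput("Research AI trends",
                                   "Agent frameworks are converging on DAG execution.")));
        System.out.println(prompt);
    }
}
```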
### When to Use SEQUENTIAL

- You have a defined, linear pipeline (research -> write -> review)
- Task order is fixed and predictable
- Each task depends on the output of the task immediately before it
## HIERARCHICAL

A virtual Manager agent is automatically created at run time. The manager receives:
- A system prompt describing all worker agents and their roles/goals
- A user prompt listing all tasks to complete
The manager uses a `delegateTask` tool to assign tasks to workers. Workers execute and return their outputs as tool results. The manager synthesizes a final response.
```java
Ensemble.builder()
    .agent(researcher)
    .agent(writer)
    .agent(editor)
    .task(researchTask)
    .task(writeTask)
    .task(editTask)
    .workflow(Workflow.HIERARCHICAL)
    .managerLlm(gpt4Model)    // optional: dedicated LLM for the manager
    .managerMaxIterations(20) // optional: default is 20
    .build()
    .run();
```

### Manager Agent

The manager is a virtual, automatically configured agent with:

- role: `"Manager"`
- goal: `"Coordinate worker agents to complete all tasks and synthesize a comprehensive final result"`
- background: A generated description of all worker agents and their capabilities
- tools: The `delegateTask` tool
The manager is not included in the agents list — it is created internally.
### Manager LLM

If `managerLlm` is not set, the manager uses the first registered agent's LLM. For production use, it is recommended to provide a capable LLM (GPT-4o, Claude 3.5, etc.) as the manager:
```java
ChatModel powerfulModel = OpenAiChatModel.builder()
    .apiKey(System.getenv("OPENAI_API_KEY"))
    .modelName("gpt-4o")
    .build();

Ensemble.builder()
    .agents(...)
    .tasks(...)
    .workflow(Workflow.HIERARCHICAL)
    .managerLlm(powerfulModel)
    .build();
```

### Manager Max Iterations
The `managerMaxIterations` field limits how many delegation tool calls the manager can make before being forced to synthesize. The default is 20.
### Customizing the Manager Prompt

By default, the Manager agent receives a system prompt that lists all worker agents and a user prompt that lists all tasks to orchestrate. Both prompts are generated by `DefaultManagerPromptStrategy`, which provides the built-in behaviour.
To inject domain-specific context — such as organisational constraints, custom personas, or project-level metadata — without forking framework internals, provide a custom ManagerPromptStrategy:
```java
Ensemble.builder()
    .workflow(Workflow.HIERARCHICAL)
    .agent(researcher).agent(analyst).agent(writer)
    .task(researchTask).task(writeTask)
    .managerPromptStrategy(new ManagerPromptStrategy() {
        @Override
        public String buildSystemPrompt(ManagerPromptContext ctx) {
            // Extend the default system prompt with a domain constraint
            return DefaultManagerPromptStrategy.DEFAULT.buildSystemPrompt(ctx)
                + "\n\nAdditional constraint: always prefer the Analyst agent for any quantitative tasks.";
        }

        @Override
        public String buildUserPrompt(ManagerPromptContext ctx) {
            return DefaultManagerPromptStrategy.DEFAULT.buildUserPrompt(ctx);
        }
    })
    .build()
    .run();
```

`ManagerPromptContext` provides everything the strategy needs to build well-formed prompts:
| Field | Description |
|---|---|
| `agents` | All worker agents available for delegation |
| `tasks` | The tasks the manager must orchestrate |
| `previousOutputs` | Outputs from prior ensemble executions (context chaining) |
| `workflowDescription` | Optional ensemble-level description |
The `buildSystemPrompt()` result is used as the manager's background field (included in the system message). The `buildUserPrompt()` result is used as the manager's task description (the first user message).
This strategy is only exercised for HIERARCHICAL workflow; sequential and parallel workflows are unaffected.
### Output Structure

In the hierarchical workflow, `EnsembleOutput.getTaskOutputs()` contains:
- All worker outputs in delegation order
- The manager’s final synthesized output (last)
EnsembleOutput.getRaw() is the manager’s final synthesis.
### When to Use HIERARCHICAL

- You want the LLM to decide which agent handles each task
- The task-to-agent mapping is not obvious from the task descriptions
- You want the manager to re-order or combine tasks dynamically
- You are building a system where task routing should be AI-driven
Hierarchical constraints: Add HierarchicalConstraints to impose deterministic guardrails (required workers, allowed workers, per-worker caps, stage ordering) while preserving the LLM-directed nature of the workflow. See the Delegation Guide for full documentation.
## PARALLEL

Tasks execute concurrently using Java 21 virtual threads. The execution order is derived automatically from each task's context list, which acts as a dependency declaration. Tasks with no unmet dependencies start immediately; dependent tasks are unblocked as their prerequisites complete.
Critically, you do not mark tasks as “parallel” or “serial” explicitly. The framework determines maximum safe concurrency from the context declarations. Tasks with no mutual dependencies automatically run concurrently. Tasks with dependencies automatically serialize.
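The mechanics can be illustrated with a standalone sketch: each task runs on its own virtual thread and blocks on a `CompletableFuture` per declared dependency. This is illustrative only, not the framework's scheduler, and it assumes an acyclic dependency graph:

```java
import java.util.*;
import java.util.concurrent.*;

public class DagSchedulerSketch {
    // deps: taskId -> ids of tasks it depends on (its "context" list).
    // Assumes an acyclic graph; cycles would deadlock this sketch.
    static List<String> run(Map<String, List<String>> deps) {
        Map<String, CompletableFuture<Void>> done = new ConcurrentHashMap<>();
        List<String> completionOrder = Collections.synchronizedList(new ArrayList<>());
        deps.keySet().forEach(id -> done.put(id, new CompletableFuture<>()));
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (Map.Entry<String, List<String>> entry : deps.entrySet()) {
                exec.submit(() -> {
                    // Block until every declared dependency has completed
                    entry.getValue().forEach(dep -> done.get(dep).join());
                    completionOrder.add(entry.getKey()); // "execute" the task
                    done.get(entry.getKey()).complete(null);
                });
            }
        } // close() waits for all submitted tasks to finish
        return completionOrder;
    }

    public static void main(String[] args) {
        // A and B have no deps (run concurrently); C waits for both
        List<String> order = run(Map.of(
            "A", List.of(),
            "B", List.of(),
            "C", List.of("A", "B")));
        System.out.println(order); // C is always last
    }
}
```

A and B may finish in either order, but C is always last: it never starts before both of its dependency futures complete.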
```java
// These two have no dependencies -- they run in PARALLEL
var researchTask = Task.builder()
    .description("Research AI trends")
    .expectedOutput("Research report")
    .agent(researcher)
    .build();

var dataTask = Task.builder()
    .description("Gather market data")
    .expectedOutput("Market data")
    .agent(analyst)
    .build();

// This depends on BOTH above -- runs AFTER both complete
var synthesisTask = Task.builder()
    .description("Synthesize findings into a report")
    .expectedOutput("Combined report")
    .agent(writer)
    .context(List.of(researchTask, dataTask))
    .build();

// This depends on synthesis -- runs AFTER synthesis
var editTask = Task.builder()
    .description("Edit and polish the report")
    .expectedOutput("Final report")
    .agent(editor)
    .context(List.of(synthesisTask))
    .build();

EnsembleOutput output = Ensemble.builder()
    .agent(researcher).agent(analyst).agent(writer).agent(editor)
    .task(researchTask).task(dataTask).task(synthesisTask).task(editTask)
    .workflow(Workflow.PARALLEL)
    .build()
    .run();
```

Execution timeline:
```
[researchTask]----+
                  +--> [synthesisTask] --> [editTask]
[dataTask]--------+
```

### Task List Order Is Irrelevant

For the SEQUENTIAL workflow, tasks must be listed in dependency order (prerequisites first). For PARALLEL, the list order does not matter: the dependency graph determines execution order. This means a task can appear before its dependency in the list and still execute correctly:
```java
// SEQUENTIAL: this would fail validation (tb listed before ta)
// PARALLEL: this is valid -- tb waits for ta regardless of list position
Ensemble.builder()
    .task(tb) // tb depends on ta but is listed first
    .task(ta)
    .workflow(Workflow.PARALLEL)
    .build()
    .run();
```

### Diamond Dependency Pattern

Tasks with multiple dependencies and multiple dependents work naturally:
```
[A] ----+----> [C] ----+
        |              +--> [E]
[B] ----+----> [D] ----+
```

- A and B run in parallel (no deps)
- C depends on A+B, D depends on A+B — both start after A+B complete, and C and D run in parallel
- E depends on C+D — starts after both complete
Expressed in code:
```java
var e = Task.builder()
    .description("Final task")
    .expectedOutput("Final output")
    .agent(agent)
    .context(List.of(c, d)) // waits for both C and D
    .build();
```

### Error Handling

Configure how failures are handled via `parallelErrorStrategy`:
```java
Ensemble.builder()
    .tasks(...)
    .agents(...)
    .workflow(Workflow.PARALLEL)
    .parallelErrorStrategy(ParallelErrorStrategy.FAIL_FAST) // default
    // or: .parallelErrorStrategy(ParallelErrorStrategy.CONTINUE_ON_ERROR)
    .build();
```

FAIL_FAST (default): On the first task failure, no new unstarted tasks are submitted. Already-running tasks complete normally. A TaskExecutionException is thrown after all running tasks finish. Completed task outputs are preserved in the exception.
CONTINUE_ON_ERROR: When a task fails, independent tasks continue running. Tasks that depend on the failed task are skipped automatically. At the end, if any tasks failed, a ParallelExecutionException is thrown containing both the successful outputs and a map of failed task descriptions to their causes.
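The skip rule is transitive: anything that depends, directly or indirectly, on the failed task is skipped. A standalone sketch of that closure computation (not the framework's code):

```java
import java.util.*;

public class SkipPropagationSketch {
    // deps: taskId -> ids it depends on.
    // Returns every task that transitively depends on failedId.
    static Set<String> tasksSkippedBy(String failedId, Map<String, List<String>> deps) {
        Set<String> skipped = new LinkedHashSet<>();
        boolean changed = true;
        while (changed) { // fixed-point: keep adding dependents of skipped tasks
            changed = false;
            for (Map.Entry<String, List<String>> e : deps.entrySet()) {
                if (skipped.contains(e.getKey())) continue;
                for (String dep : e.getValue()) {
                    if (dep.equals(failedId) || skipped.contains(dep)) {
                        skipped.add(e.getKey());
                        changed = true;
                        break;
                    }
                }
            }
        }
        return skipped;
    }

    public static void main(String[] args) {
        // B fails; C depends on B, E depends on C -> C and E are skipped, A and D continue
        Map<String, List<String>> deps = Map.of(
            "A", List.of(), "B", List.of(),
            "C", List.of("B"), "D", List.of("A"), "E", List.of("C"));
        System.out.println(tasksSkippedBy("B", deps));
    }
}
```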
```java
try {
    EnsembleOutput output = ensemble.run();
} catch (TaskExecutionException e) {
    // FAIL_FAST: single failure that halted the run
    System.err.println("Failed task: " + e.getTaskDescription());
    System.out.println("Completed before failure: " + e.getCompletedTaskOutputs().size());
} catch (ParallelExecutionException e) {
    // CONTINUE_ON_ERROR: some succeeded, some failed
    System.out.println("Completed: " + e.getCompletedCount());
    System.err.println("Failed: " + e.getFailedCount());
    e.getFailedTaskCauses().forEach((desc, cause) ->
        System.err.println("  " + desc + ": " + cause.getMessage()));
    // Successful outputs are available:
    e.getCompletedTaskOutputs().forEach(out -> System.out.println(out.getRaw()));
}
```

### Thread Safety

Parallel workflow uses `Executors.newVirtualThreadPerTaskExecutor()` (a stable Java 21 API; no preview flags required). Virtual threads are lightweight and do not block OS threads during LLM HTTP calls.
AgentTool implementations: If the same tool instance is shared across multiple agents that run concurrently, it must be thread-safe. Alternatively, provide separate tool instances per agent.
LangChain4j ChatModel: Most LangChain4j model implementations are thread-safe (they use HTTP clients that support concurrent requests). Check your specific provider’s documentation if unsure.
MDC propagation: The ensemble’s MDC context (including ensemble.id) is captured before tasks are submitted and propagated to each virtual thread. Each thread also sets agent.role during its execution.
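The capture-then-install pattern can be sketched with a plain `ThreadLocal` map standing in for the slf4j MDC (illustrative only; the real implementation uses the logging framework's MDC API):

```java
import java.util.*;
import java.util.concurrent.*;

public class MdcPropagationSketch {
    // Stand-in for the slf4j MDC: a per-thread map of diagnostic context
    static final ThreadLocal<Map<String, String>> MDC =
        ThreadLocal.withInitial(HashMap::new);

    // Capture happens on the submitting thread; install happens inside the worker thread
    static String runTask(Map<String, String> captured, String role) {
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<String> f = exec.submit(() -> {
                MDC.get().putAll(captured);        // propagate ensemble-level context
                MDC.get().put("agent.role", role); // per-thread addition
                return MDC.get().get("ensemble.id") + "/" + MDC.get().get("agent.role");
            });
            return f.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        MDC.get().put("ensemble.id", "run-42");   // set before submission
        Map<String, String> captured = Map.copyOf(MDC.get());
        System.out.println(runTask(captured, "Writer")); // run-42/Writer
    }
}
```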
### Dynamic Agent Creation

Workflow.PARALLEL works equally well when agents and tasks are constructed programmatically at runtime. Because Agent and Task are plain immutable value objects, building them in a loop is identical to declaring them individually; the framework does not distinguish between the two. This is the recommended approach when the number of agents is not known at compile time:
```java
List<Agent> agents = new ArrayList<>();
List<Task> tasks = new ArrayList<>();

for (OrderItem item : order.getItems()) {
    Agent specialist = Agent.builder()
        .role(item.getDish() + " Specialist")
        .goal("Prepare " + item.getDish())
        .llm(model)
        .build();

    Task dishTask = Task.builder()
        .description("Prepare the recipe for " + item.getDish())
        .expectedOutput("Recipe with ingredients, steps, and timing")
        .agent(specialist)
        .build();

    agents.add(specialist);
    tasks.add(dishTask);
}

// Fan-in: single aggregation task depends on all specialist tasks
Agent headChef = Agent.builder()
    .role("Head Chef")
    .goal("Coordinate all dishes into a cohesive meal plan")
    .llm(model)
    .build();

Task mealPlan = Task.builder()
    .description("Create a coordinated meal service plan from all dish preparations.")
    .expectedOutput("Meal plan with serving order and timing.")
    .agent(headChef)
    .context(tasks) // depends on ALL specialist tasks
    .build();

// Assemble and run
Ensemble.EnsembleBuilder builder = Ensemble.builder()
    .workflow(Workflow.PARALLEL);

agents.forEach(builder::agent);
builder.agent(headChef);
tasks.forEach(builder::task);
builder.task(mealPlan);

EnsembleOutput output = builder.build().run();
```

Execution pattern:
```
[Specialist 1] ----+
[Specialist 2] ----+--> [Head Chef] --> Final Output
[Specialist N] ----+
```

Context size warning: Each specialist task's output is injected into the aggregation task's prompt. With a large number of specialists each producing verbose output, this context can approach or exceed the model's context window. For large N, use `outputType(RecordClass.class)` on each specialist task to produce compact structured JSON, or implement a tree-reduction pattern where outputs are aggregated in batches across multiple levels.
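A tree reduction can be sketched generically: aggregate outputs in batches of `k`, then aggregate the aggregates, until one output remains. In a real ensemble each call to `aggregate` would itself be an LLM task; here it is any function:

```java
import java.util.*;
import java.util.function.Function;

public class TreeReduceSketch {
    // Repeatedly aggregate in batches of `batchSize` until a single output remains.
    // Assumes a non-empty input list.
    static String treeReduce(List<String> outputs, int batchSize,
                             Function<List<String>, String> aggregate) {
        List<String> level = outputs;
        while (level.size() > 1) {
            List<String> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += batchSize) {
                next.add(aggregate.apply(
                    level.subList(i, Math.min(i + batchSize, level.size()))));
            }
            level = next;
        }
        return level.get(0);
    }

    public static void main(String[] args) {
        List<String> outputs = List.of("a", "b", "c", "d", "e");
        // Each "aggregation" just joins its batch; a real ensemble would run an LLM task
        String result = treeReduce(outputs, 2, batch -> "(" + String.join("+", batch) + ")");
        System.out.println(result); // (((a+b)+(c+d))+((e)))
    }
}
```

With batch size 2 and five outputs, the reduction takes three levels instead of a single prompt containing all five outputs at once.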
See the Dynamic Agent Creation example for a full working example with the kitchen scenario.
### When to Use PARALLEL

- Multiple independent tasks can run concurrently to reduce total wall-clock time
- Your pipeline has a natural DAG structure (some tasks depend on others, some are independent)
- You want to maximize throughput for LLM API calls
- The number of agents is not known at compile time (dynamic fan-out/fan-in)
## Choosing a Workflow

| Consideration | SEQUENTIAL | HIERARCHICAL | PARALLEL |
|---|---|---|---|
| Task order | Fixed, user-defined | Dynamic, manager-decided | DAG-driven, automatic |
| Routing logic | Explicit (agent per task) | Implicit (manager decides) | Explicit (agent per task) |
| LLM calls | N calls (one per task) | N+1 calls (tasks + manager) | N calls (one per task) |
| Predictability | High | Lower | High |
| Throughput | Serial | Serial (manager is bottleneck) | Maximum concurrency |
| Error handling | Stop on first failure | Manager decides | FAIL_FAST or CONTINUE_ON_ERROR |
## Workflow and Memory

All three workflows support all memory types. Memory context is shared across all agent executions within a single `run()` call.
In hierarchical workflow, the Manager agent itself does not participate in memory — only the worker agents do.
In the parallel workflow, `ShortTermMemory` is thread-safe (it uses `CopyOnWriteArrayList`). Concurrent task completions each record their output to short-term memory independently. Long-term memory implementations must also be thread-safe when used with PARALLEL.
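The thread-safety claim is easy to demonstrate in isolation: concurrent appends to a `CopyOnWriteArrayList` from many virtual threads lose nothing. A standalone sketch (not the framework's memory class):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentMemorySketch {
    // Minimal short-term memory: a thread-safe, append-only record of task outputs
    static int recordConcurrently(int taskCount) {
        List<String> memory = new CopyOnWriteArrayList<>();
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < taskCount; i++) {
                int id = i;
                exec.submit(() -> memory.add("output of task " + id));
            }
        } // close() waits for all submitted tasks to finish
        return memory.size();
    }

    public static void main(String[] args) {
        System.out.println(recordConcurrently(100)); // 100 -- no lost appends
    }
}
```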
See the Memory guide.
## Workflow and Delegation

All three workflows support agent-to-agent delegation when agents have `allowDelegation = true`. In the hierarchical workflow, worker agents can delegate to peer workers in addition to the manager's own delegation via `delegateTask`.
See the Delegation guide.