
11 - Configuration Reference

This document specifies all configurable settings, their defaults, hardcoded constants, and extension points.

All configuration is done through builder methods on domain objects. There are no configuration files or property files; the only setting with a system-property or environment-variable override is captureMode (see the Ensemble table below).

Agent settings:

| Setting | Type | Default | Required | Description |
|---|---|---|---|---|
| role | `String` | — | Yes | The agent's role/title. Used in prompts and logging. |
| goal | `String` | — | Yes | The agent's primary objective. Included in the system prompt. |
| background | `String` | `null` | No | Background context for the agent persona. Included in the system prompt if present. |
| tools | `List<Object>` | `List.of()` | No | Tools available to this agent. Each must be an `AgentTool` or a `@Tool`-annotated object. |
| llm | `ChatLanguageModel` | — | Yes | The LangChain4j model to use. |
| allowDelegation | `boolean` | `false` | No | Reserved for Phase 2. Whether the agent can delegate to other agents. |
| verbose | `boolean` | `false` | No | When true, elevates prompt/response logging to INFO level. |
| maxIterations | `int` | `25` | No | Maximum tool-call iterations before forcing a final answer. Must be > 0. |
| responseFormat | `String` | `""` | No | Extra formatting instructions appended to the system prompt. |
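To make the table concrete, here is a minimal sketch of a builder with the required settings and documented defaults above. This is illustrative only: the real `Agent` class is defined by the framework, and the stand-in names (`AgentSketch`, `Object` in place of `ChatLanguageModel`) are assumptions made so the sketch is self-contained.

```java
import java.util.List;
import java.util.Objects;

// Illustrative stand-in for the framework's Agent builder. Required settings
// (role, goal, llm) are validated at build time; optional settings carry the
// defaults documented in the table above.
public class AgentSketch {
    final String role;
    final String goal;
    final Object llm;              // stands in for ChatLanguageModel
    final List<Object> tools;
    final boolean verbose;
    final int maxIterations;

    private AgentSketch(Builder b) {
        this.role = Objects.requireNonNull(b.role, "role is required");
        this.goal = Objects.requireNonNull(b.goal, "goal is required");
        this.llm = Objects.requireNonNull(b.llm, "llm is required");
        this.tools = b.tools;
        this.verbose = b.verbose;
        this.maxIterations = b.maxIterations;
    }

    public static Builder builder() { return new Builder(); }

    public static class Builder {
        private String role;
        private String goal;
        private Object llm;
        private List<Object> tools = List.of(); // default: no tools
        private boolean verbose = false;        // default: quiet logging
        private int maxIterations = 25;         // default iteration cap

        public Builder role(String v) { this.role = v; return this; }
        public Builder goal(String v) { this.goal = v; return this; }
        public Builder llm(Object v) { this.llm = v; return this; }
        public Builder maxIterations(int v) {
            if (v <= 0) throw new IllegalArgumentException("maxIterations must be > 0");
            this.maxIterations = v;
            return this;
        }
        public AgentSketch build() { return new AgentSketch(this); }
    }
}
```

Leaving optional settings unset yields the documented defaults; omitting a required one fails fast at `build()`.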
Task settings:

| Setting | Type | Default | Required | Description |
|---|---|---|---|---|
| description | `String` | — | Yes | What the agent should do. Supports `{variable}` templates. |
| expectedOutput | `String` | — | Yes | What the output should look like. Supports templates. |
| agent | `Agent` | — | Yes | The agent assigned to this task. |
| context | `List<Task>` | `List.of()` | No | Tasks whose outputs feed into this task as context. |
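The `{variable}` templates mentioned above are filled from the ensemble's `inputs` map. The framework's actual `TemplateResolver` may behave differently; the following is one plausible, self-contained sketch of the documented substitution behavior, with the strict missing-variable error being an assumption.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of resolving {variable} placeholders in a task description
// against the ensemble's inputs map.
public class TemplateSketch {
    private static final Pattern VAR = Pattern.compile("\\{(\\w+)\\}");

    public static String resolve(String template, Map<String, String> inputs) {
        Matcher m = VAR.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = inputs.get(m.group(1));
            if (value == null) {
                // Assumption: unresolved variables are treated as errors.
                throw new IllegalArgumentException("No input for variable: " + m.group(1));
            }
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```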
Ensemble settings:

| Setting | Type | Default | Required | Description |
|---|---|---|---|---|
| agents | `List<Agent>` | — | Yes | All agents participating. Must not be empty. |
| tasks | `List<Task>` | — | Yes | All tasks to execute, in order. Must not be empty. |
| workflow | `Workflow` | `SEQUENTIAL` | No | How tasks are executed. |
| verbose | `boolean` | `false` | No | When true, elevates logging for all tasks/agents to INFO. |
| memory | `EnsembleMemory` | `null` | No | Memory configuration (short-term, long-term, entity). |
| maxDelegationDepth | `int` | `3` | No | Maximum peer-delegation depth. Must be > 0. |
| toolExecutor | `Executor` | virtual threads | No | Executor for parallel tool calls within a single LLM turn. |
| toolMetrics | `ToolMetrics` | `NoOpToolMetrics` | No | Metrics backend for tool execution timings. |
| listeners | `List<EnsembleListener>` | `List.of()` | No | Event listeners for task/tool/delegation lifecycle events. |
| inputs | `Map<String, String>` | `{}` | No | Template variable values applied to task descriptions. |
| hierarchicalConstraints | `HierarchicalConstraints` | `null` | No | Constraints for hierarchical workflow (required workers, caps). |
| delegationPolicies | `List<DelegationPolicy>` | `List.of()` | No | Custom policies evaluated before each delegation. |
| costConfiguration | `CostConfiguration` | `null` | No | Per-token cost rates for monetary cost estimation. |
| traceExporter | `ExecutionTraceExporter` | `null` | No | Called after each run with the complete execution trace. |
| captureMode | `CaptureMode` | `OFF` | No | Depth of data collection: OFF, STANDARD, or FULL. Can also be set via the `agentensemble.captureMode` system property or `AGENTENSEMBLE_CAPTURE_MODE` environment variable. |
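One plausible resolution order for capture mode, sketched below, is: an explicit builder value wins, then the system property, then the environment variable, falling back to `OFF`. The precedence shown is an assumption; the table above only documents that all three sources exist.

```java
// Sketch of resolving capture mode from the three documented sources.
// The precedence (builder > system property > env var > OFF) is assumed.
public class CaptureModeSketch {
    enum CaptureMode { OFF, STANDARD, FULL }

    static CaptureMode resolve(CaptureMode explicit) {
        if (explicit != null) return explicit;
        String v = System.getProperty("agentensemble.captureMode");
        if (v == null) v = System.getenv("AGENTENSEMBLE_CAPTURE_MODE");
        return v == null ? CaptureMode.OFF : CaptureMode.valueOf(v.trim().toUpperCase());
    }
}
```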
Workflow values:

| Value | Description | Status |
|---|---|---|
| `SEQUENTIAL` | Tasks execute one after another. Output from earlier tasks can feed as context to later tasks. | Implemented |
| `HIERARCHICAL` | A manager agent delegates tasks to worker agents. | Phase 2 |

These values are internal framework constants, not configurable by users. They are defined as private static final fields in the relevant classes.

| Constant | Value | Class | Rationale |
|---|---|---|---|
| `MAX_STOP_MESSAGES` | 3 | `AgentExecutor` | After 3 "please stop" messages, the agent is considered stuck and `MaxIterationsExceededException` is thrown. |
| `CONTEXT_LENGTH_WARN_THRESHOLD` | 10000 | `AgentPromptBuilder` | Log a WARN when context from a single task exceeds this character count. |
| `LOG_TRUNCATE_LENGTH` | 200 | `AgentExecutor` | Tool input/output and output previews are truncated to this length in INFO logs. |
| `MDC_DESCRIPTION_MAX_LENGTH` | 80 | `SequentialWorkflowExecutor` | Task description is truncated in MDC to keep diagnostic context concise. |
| `ERROR_TEMPLATE_MAX_LENGTH` | 100 | `TemplateResolver` | Template string is truncated in error messages. |
| `CHAT_MEMORY_MAX_MESSAGES` | 20 | `AgentExecutor` | Maximum messages retained in the agent's chat memory window during tool-use loops. |
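As an example of how these constants are applied, the truncation behavior behind `LOG_TRUNCATE_LENGTH` can be sketched as follows. The `"..."` suffix is an assumption; the framework may format previews differently.

```java
// Sketch of the documented truncation: tool inputs/outputs and output
// previews are cut to 200 characters in INFO logs.
public class LogTruncateSketch {
    static final int LOG_TRUNCATE_LENGTH = 200;

    static String preview(String s) {
        if (s == null || s.length() <= LOG_TRUNCATE_LENGTH) return s;
        return s.substring(0, LOG_TRUNCATE_LENGTH) + "...";
    }
}
```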

These constants represent sensible defaults that rarely need changing. Making them configurable would add API surface without proportional value. Users who need different values can:

  1. Open a GitHub issue requesting that the constant be made configurable
  2. Fork the framework and change the constant
  3. Wait for Phase 2+, when some of these may be exposed as configuration options if demand exists

The primary extension point. Users create tools in one of two ways:

  1. Implementing the AgentTool interface
  2. Creating classes with methods annotated with @dev.langchain4j.agent.tool.Tool

See 06-tool-system.md for details.
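For the first option, a tool implementation might look like the sketch below. The exact shape of the AgentTool interface is defined in 06-tool-system.md; the `name`/`description`/`execute(String)` signature used here is an assumption to illustrate the general pattern.

```java
// Assumed shape of the AgentTool interface, declared locally so the
// example is self-contained; the framework's real interface may differ.
interface AgentTool {
    String name();
    String description();
    String execute(String input);
}

// A trivial tool: counts the words in its input.
class WordCountTool implements AgentTool {
    @Override public String name() { return "word_count"; }
    @Override public String description() { return "Counts the words in the given text."; }
    @Override public String execute(String input) {
        if (input == null || input.isBlank()) return "0";
        return String.valueOf(input.trim().split("\\s+").length);
    }
}
```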

Users provide any ChatLanguageModel implementation from LangChain4j:

  • OpenAiChatModel (OpenAI / Azure OpenAI)
  • AnthropicChatModel (Anthropic Claude)
  • OllamaChatModel (Ollama / local models)
  • VertexAiGeminiChatModel (Google Vertex AI)
  • BedrockChatModel (Amazon Bedrock)
  • Custom implementations of the ChatLanguageModel interface

The WorkflowExecutor interface allows custom execution strategies:

```java
public interface WorkflowExecutor {
    EnsembleOutput execute(List<Task> resolvedTasks, boolean verbose);
}
```
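A custom strategy might look like the following sketch, which runs tasks in reverse order. `Task` and `EnsembleOutput` are minimal stand-ins defined locally so the example is self-contained, and the interface is repeated for the same reason; only the `execute(List<Task>, boolean)` shape comes from the framework.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for the framework's domain types.
record Task(String description) {}
record EnsembleOutput(List<String> results) {}

interface WorkflowExecutor {
    EnsembleOutput execute(List<Task> resolvedTasks, boolean verbose);
}

// Toy strategy: executes tasks last-to-first. A real implementation would
// invoke each task's agent instead of fabricating a result string.
class ReverseWorkflowExecutor implements WorkflowExecutor {
    @Override
    public EnsembleOutput execute(List<Task> resolvedTasks, boolean verbose) {
        List<String> results = new ArrayList<>();
        for (int i = resolvedTasks.size() - 1; i >= 0; i--) {
            Task task = resolvedTasks.get(i);
            if (verbose) System.out.println("Executing: " + task.description());
            results.add("done: " + task.description());
        }
        return new EnsembleOutput(results);
    }
}
```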

In Phase 1, only SequentialWorkflowExecutor is provided. In Phase 2, HierarchicalWorkflowExecutor and ParallelWorkflowExecutor will be added. Users could also implement custom strategies.

Deliberately not configurable:

| Feature | Why not |
|---|---|
| Prompt templates | Fixed in `AgentPromptBuilder`. Custom prompt strategies planned for Phase 2. |
| Tool execution timeout | Users should implement timeouts within their `AgentTool.execute()` method. |
| Retry behavior for LLM calls | LangChain4j model instances handle retries. Configure at the model level. |
| Logging format | Configured via the user's SLF4J implementation (Logback, Log4j2, etc.). |
| Memory / persistence | Phase 2 feature. |
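Since the table recommends implementing timeouts inside the tool itself, here is one way to do that: the tool wraps its own work in a `Future` and bounds the wait. The error-string convention on timeout is an assumption about how a tool would report failure to the agent.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch of a timeout implemented inside a tool's own execute() logic,
// as the "Tool execution timeout" row recommends.
class TimeoutToolSketch {
    static String executeWithTimeout(Callable<String> work, long timeoutMillis) {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        try {
            return ex.submit(work).get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "error: tool timed out after " + timeoutMillis + " ms";
        } catch (Exception e) {
            return "error: " + e.getMessage();
        } finally {
            ex.shutdownNow(); // interrupt any still-running work
        }
    }
}
```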