
TOON Context Format

AgentEnsemble can serialize structured data in LLM prompts using TOON (Token-Oriented Object Notation) instead of JSON, reducing token usage by 30-60%.


Every time AgentEnsemble sends context to an LLM — prior task outputs, tool results, memory entries — that data is serialized as text and counted against the model’s token limit. JSON is verbose: curly braces, quotes around every key, commas, colons, and brackets all consume tokens with no semantic value.

TOON is a compact, human-readable format designed specifically for LLM contexts. It combines YAML-like indentation with CSV-like tabular arrays:

JSON (47 tokens):

{"items":[{"sku":"A1","qty":2,"price":9.99},{"sku":"B2","qty":1,"price":14.5}]}

TOON (19 tokens):

items[2]{sku,qty,price}:
A1,2,9.99
B2,1,14.5

In multi-agent pipelines where structured data flows between tasks, the savings compound across every context injection and tool call iteration.
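The tabular rule above is simple enough to sketch in plain Java. The following is an illustrative encoder for uniform arrays only, not the JToon implementation (real TOON also handles nesting, quoting, and non-uniform data):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ToonSketch {

    // Encode a uniform list of rows as a TOON tabular array:
    // name[rowCount]{field1,field2,...}: followed by one CSV line per row.
    public static String encodeTable(String name, List<String> fields,
                                     List<Map<String, Object>> rows) {
        StringBuilder sb = new StringBuilder();
        sb.append(name)
          .append('[').append(rows.size()).append(']')
          .append('{').append(String.join(",", fields)).append("}:\n");
        for (Map<String, Object> row : rows) {
            // The field list drives column order, so map iteration order is irrelevant.
            sb.append(fields.stream()
                            .map(f -> String.valueOf(row.get(f)))
                            .collect(Collectors.joining(",")))
              .append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<Map<String, Object>> items = List.of(
                Map.of("sku", "A1", "qty", 2, "price", 9.99),
                Map.of("sku", "B2", "qty", 1, "price", 14.5));
        // Reproduces the items example above.
        System.out.print(encodeTable("items", List.of("sku", "qty", "price"), items));
    }
}
```

Running this prints exactly the TOON shown in the example above: the header line carries the array length and field names once, and each row contributes only its values.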


TOON support requires the JToon library on your runtime classpath. AgentEnsemble does not bundle it — you opt in by adding it to your project:

dependencies {
    implementation("dev.toonformat:jtoon:1.0.9")
}
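For Maven builds, the equivalent dependency (same coordinates as the Gradle snippet):

```xml
<dependency>
    <groupId>dev.toonformat</groupId>
    <artifactId>jtoon</artifactId>
    <version>1.0.9</version>
</dependency>
```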

Then enable TOON on the ensemble builder:

import net.agentensemble.*;
import net.agentensemble.ensemble.EnsembleOutput;
import net.agentensemble.format.ContextFormat;

EnsembleOutput result = Ensemble.builder()
        .chatLanguageModel(model)
        .contextFormat(ContextFormat.TOON)
        .task(researchTask)
        .task(writingTask)
        .build()
        .run();

That’s it. All structured data flowing to the LLM will now use TOON instead of JSON.


When contextFormat is set to TOON, the following data is serialized in TOON format:

| Data | Where | Impact |
|---|---|---|
| Context from prior tasks | User prompt (context section) | Medium — depends on task output size |
| Tool execution results | Tool result messages in conversation | High — tool results are often large JSON payloads |
| Memory entries | User prompt (memory sections) | Medium — structured memory content |
| Execution trace export | ExecutionTrace.toToon() | Low — export only, not sent to LLM |

ExecutionTrace provides TOON export methods alongside the existing JSON ones:

EnsembleOutput result = ensemble.run();
// JSON (always available)
result.getTrace().toJson(Path.of("trace.json"));
// TOON (requires JToon on classpath)
result.getTrace().toToon(Path.of("trace.toon"));
// As strings
String jsonTrace = result.getTrace().toJson();
String toonTrace = result.getTrace().toToon();

If you set contextFormat(ContextFormat.TOON) but JToon is not on the classpath, Ensemble.run() fails with an IllegalStateException before any LLM calls are made:

TOON context format requires the JToon library on the classpath.
Add to your build (check the version catalog or docs for the current version):
Gradle: implementation("dev.toonformat:jtoon")
Maven: <dependency><groupId>dev.toonformat</groupId><artifactId>jtoon</artifactId></dependency>

Similarly, calling ExecutionTrace.toToon() without JToon throws an IllegalStateException with the same dependency instructions.
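Internally, this kind of fail-fast guard is typically a reflective classpath probe. A minimal sketch of the pattern (the probed class name dev.toonformat.jtoon.Jtoon is a hypothetical stand-in; which class AgentEnsemble actually checks for is an internal detail):

```java
public class ClasspathProbe {

    /** Returns true if the named class can be loaded from the current classpath. */
    public static boolean isPresent(String className) {
        try {
            // initialize=false: check availability without running static initializers.
            Class.forName(className, false, ClasspathProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    /** Mirrors the fail-fast guard: throw before any LLM call is made. */
    public static void requireJToon() {
        if (!isPresent("dev.toonformat.jtoon.Jtoon")) { // hypothetical entry-point class
            throw new IllegalStateException(
                    "TOON context format requires the JToon library on the classpath.");
        }
    }

    public static void main(String[] args) {
        System.out.println("JToon on classpath: "
                + isPresent("dev.toonformat.jtoon.Jtoon"));
    }
}
```

Probing with Class.forName is why the error surfaces immediately at Ensemble.run() rather than mid-pipeline: the check costs one reflective lookup and avoids wasting paid LLM calls on a run that cannot serialize its context.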


| Scenario | Recommended Format |
|---|---|
| Long multi-agent pipelines with rich context | TOON — savings compound across tasks |
| Tool-heavy workflows (many ReAct iterations) | TOON — tool results dominate token usage |
| Structured output tasks | JSON schema stays in JSON regardless; context can be TOON |
| Debugging / human inspection of prompts | JSON — more familiar for most developers |
| Maximum model compatibility | JSON — all models handle JSON well; TOON is newer |
| Cost-sensitive production workloads | TOON — 30-60% fewer tokens means 30-60% lower cost |
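As a rough sanity check on the cost row above, you can compare the raw character counts of the two encodings from the earlier example (character count is only a proxy; real savings depend on the model's tokenizer):

```java
public class FormatSizeCheck {

    // The two encodings of the same payload from the example earlier in this page.
    static final String JSON =
            "{\"items\":[{\"sku\":\"A1\",\"qty\":2,\"price\":9.99},"
          + "{\"sku\":\"B2\",\"qty\":1,\"price\":14.5}]}";
    static final String TOON =
            "items[2]{sku,qty,price}:\nA1,2,9.99\nB2,1,14.5";

    public static void main(String[] args) {
        System.out.printf("JSON: %d chars, TOON: %d chars%n",
                JSON.length(), TOON.length());
    }
}
```

The TOON string is roughly half the size even before tokenization, because keys and structural punctuation appear once per array rather than once per element.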

import dev.langchain4j.model.openai.OpenAiChatModel;
import net.agentensemble.*;
import net.agentensemble.ensemble.EnsembleOutput;
import net.agentensemble.format.ContextFormat;

public class ToonFormatExample {
    public static void main(String[] args) {
        var model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        Task research = Task.builder()
                .description("Research the latest developments in {topic}")
                .expectedOutput("A structured summary with key findings and statistics")
                .build();

        Task analysis = Task.builder()
                .description("Analyze the research and identify the top 3 trends")
                .expectedOutput("A ranked list of trends with supporting evidence")
                .context(java.util.List.of(research))
                .build();

        Task report = Task.builder()
                .description("Write an executive summary of {topic} trends")
                .expectedOutput("A concise 500-word executive summary")
                .context(java.util.List.of(research, analysis))
                .build();

        EnsembleOutput result = Ensemble.builder()
                .chatLanguageModel(model)
                .contextFormat(ContextFormat.TOON) // 30-60% fewer tokens
                .task(research)
                .task(analysis)
                .task(report)
                .input("topic", "generative AI")
                .verbose(true)
                .build()
                .run();

        System.out.println(result.getRaw());

        // Export trace in TOON format (also compact)
        result.getTrace().toToon(java.nio.file.Path.of("trace.toon"));
    }
}

See the TOON Format example for a complete walkthrough.