API Quick Reference

skobeltsyn edited this page May 4, 2026 · 4 revisions

Compact tables covering every DSL function in Agents.KT. For in-depth explanations, follow the links to dedicated wiki articles.


Agent DSL

Defined inside `agent<IN, OUT>("name") { ... }`:

| Function | Signature | Description |
|---|---|---|
| `prompt` | `prompt(text: String)` | Sets the system prompt sent to the LLM before every agentic call. |
| `model` | `model { ... }` | Configures the LLM backend. See Model Config. |
| `budget` | `budget { ... }` | Sets the agentic loop turn limit. See Budget Config. |
| `tools` | `tools { ... }` | Registers callable tools the LLM can invoke. See Tools. |
| `skills` | `skills { ... }` | Declares the agent's capabilities. At least one skill must produce `OUT`. |
| `memory` | `memory(bank: MemoryBank)` | Attaches a `MemoryBank` and auto-injects the `memory_read`, `memory_write`, and `memory_search` tools. |
| `skillSelection` | `skillSelection { input -> "skillName" }` | Predicate-based skill routing. Runs before LLM routing. See Skill Selection. |
| `onToolUse` | `onToolUse { name, args, result -> }` | Callback fired after every action tool execution. |
| `onKnowledgeUsed` | `onKnowledgeUsed { name, content -> }` | Callback fired when the LLM fetches a knowledge entry. |
| `onSkillChosen` | `onSkillChosen { name -> }` | Callback fired when the agent selects a skill (any routing strategy). |
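
As a sketch, the functions above combine like this. The model name, prompt text, and skill body are illustrative, not part of the API:

```kotlin
// Illustrative agent wiring the DSL functions above together.
val summarizer = agent<String, String>("summarizer") {
    prompt("You are a concise summarizer.")
    model { ollama("qwen2.5:7b") }
    budget { maxTurns = 10 }
    skills {
        skill<String, String>("summarize", "Summarize the input text") {
            tools()  // agentic skill with no action tools
        }
    }
    onSkillChosen { name -> println("chose skill: $name") }
}
```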

Invocation

```kotlin
val agent = agent<String, String>("myAgent") { /* ... */ }
val result: String = agent("input")  // operator fun invoke(input: IN): OUT
```

Skills DSL

Defined inside `skills { ... }`:

| Function | Signature | Description |
|---|---|---|
| `skill` | `skill<IN, OUT>("name", "desc") { ... }` | Declares a skill inline within the agent. |
| `+` (unary plus) | `+preDefinedSkill` | Adds a pre-defined `Skill<IN, OUT>` instance to the agent. |
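
A minimal sketch of both forms; `translateSkill` is a hypothetical `Skill<String, String>` defined elsewhere:

```kotlin
skills {
    // Inline declaration.
    skill<String, String>("echo", "Return the input unchanged") {
        implementedBy { input -> input }
    }
    // Adding a pre-defined Skill instance via unary plus.
    +translateSkill
}
```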

Skill Configuration

Defined inside `skill<IN, OUT>("name", "desc") { ... }`:

| Function | Signature | Description |
|---|---|---|
| `implementedBy` | `implementedBy { input -> output }` | Marks the skill as pure Kotlin. No LLM involved. |
| `tools` (typed, canonical) | `tools(vararg handles: Tool<*, *>)` | Marks the skill as agentic. Pass typed handles captured from `tool(...)` for compile-time typo safety. (#1015–#1017) |
| `tools` (no args) | `tools()` | Marks the skill as agentic with no action tools — knowledge tools and memory tools are still available. |
| `tools` (string, deprecated) | `tools(vararg names: String)` | `@Deprecated(level = WARNING)`. Retained for built-in tools (`escalate`, `throwException`, `memory_*`) and runtime-discovered MCP tool names. |
| `knowledge` | `knowledge("key", "desc") { provider }` | Registers a lazy context provider. Loaded on demand in agentic skills, eagerly in non-agentic ones. |
| `transformOutput` | `transformOutput { llmString -> parsedType }` | Transforms the raw LLM text response into the skill's `OUT` type. |
| `llmDescription` | `llmDescription("override text")` | Overrides the auto-generated `toLlmDescription()` markdown. |
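
A sketch contrasting the two skill flavors. The `add` handle is assumed to have been captured from a `tool(...)` registration; the skill names and bodies are illustrative:

```kotlin
// Pure-Kotlin skill: no LLM involved.
skill<String, Int>("wordCount", "Count words in the input") {
    implementedBy { input -> input.trim().split(Regex("\\s+")).size }
}

// Agentic skill: typed tool handle, lazy knowledge, typed output.
skill<String, Double>("calculate", "Evaluate an arithmetic request") {
    tools(add)                                   // typed handle from tool(...)
    knowledge("units", "Unit conversion notes") { loadResource("docs/units.md") }
    transformOutput { llmString -> llmString.trim().toDouble() }
}
```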

Skill Introspection

| Method | Returns | Description |
|---|---|---|
| `skill.toLlmDescription()` | `String` | Auto-generated markdown: name, types, description, knowledge index. |
| `skill.toLlmContext()` | `String` | Full context: `toLlmDescription()` plus all knowledge content. |
| `skill.knowledgeTools()` | `List<KnowledgeTool>` | Knowledge entries as callable tools for lazy LLM loading. |
| `skill.execute(input)` | `OUT` | Directly executes the skill's `implementedBy` lambda. |
| `skill(input)` | `OUT` | Alias for `execute(input)` via `operator fun invoke`. |
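
For example, with `mySkill` standing in for any skill instance:

```kotlin
// Inspect the generated markdown, then execute directly (bypassing the agent loop).
println(mySkill.toLlmDescription())
val out = mySkill("some input")   // same as mySkill.execute("some input")
```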

Model Config

Defined inside `model { ... }`:

| Property/Function | Type | Default | Description |
|---|---|---|---|
| `ollama(name)` | `fun` | -- | Sets the model name and provider to Ollama. |
| `host` | `String` | `"localhost"` | Ollama server hostname. |
| `port` | `Int` | `11434` | Ollama server port. |
| `temperature` | `Double` | `0.7` | Sampling temperature. |
| `client` | `ModelClient?` | `null` | Custom `ModelClient` implementation (overrides Ollama). Used for testing. |

```kotlin
model { ollama("qwen2.5:7b"); host = "localhost"; port = 11434; temperature = 0.7 }
```

ModelClient Interface

```kotlin
fun interface ModelClient {
    fun chat(messages: List<LlmMessage>): LlmResponse
}
```

Create a mock for testing:

```kotlin
val mock = ModelClient { messages -> LlmResponse.Text("mocked response") }
model { ollama("any"); client = mock }
```

Budget Config

Defined inside `budget { ... }`:

| Property | Type | Default | Description |
|---|---|---|---|
| `maxTurns` | `Int` | `Int.MAX_VALUE` | Maximum agentic loop iterations. Throws `BudgetExceededException` when exceeded. |

```kotlin
budget { maxTurns = 10 }
```

Tools

Defined inside `tools { ... }`:

| Function | Signature | Description |
|---|---|---|
| `tool` | `tool("name", "desc") { args -> result }` | Registers a tool. `args` is `Map<String, Any?>`. The return value is sent back to the LLM. |

```kotlin
tools {
    tool("add", "Add two numbers. Args: a, b") { args ->
        (args["a"] as Number).toDouble() + (args["b"] as Number).toDouble()
    }
}
```

Auto-Injected Memory Tools

When `memory(bank)` is called, these tools are auto-registered:

| Tool Name | Arguments | Returns | Description |
|---|---|---|---|
| `memory_read` | -- | Full memory content | Reads the agent's memory bank. |
| `memory_write` | `content: String` | `"ok"` | Overwrites the agent's memory. |
| `memory_search` | `query: String` | Matching lines | Case-insensitive line search. |
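
A minimal sketch of attaching a bank (agent name and line cap are illustrative):

```kotlin
val bank = MemoryBank(maxLines = 200)

val assistant = agent<String, String>("assistant") {
    memory(bank)   // memory_read / memory_write / memory_search become available
    // ... prompt, model, skills ...
}
```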

Composition Operators

All composition results are callable via `operator fun invoke`. See Type Algebra for the full overload table.

| Operator | Syntax | Result Type | Description |
|---|---|---|---|
| Pipeline | `A then B` | `Pipeline<A.IN, B.OUT>` | Sequential execution. Requires `A.OUT == B.IN`. |
| Parallel | `A / B` | `Parallel<A.IN, A.OUT>` | Concurrent fan-out. Both agents must share `<IN, OUT>`. Returns `List<OUT>`. |
| Forum | `A * B` | `Forum<A.IN, B.OUT>` | Multi-agent deliberation. |
| Loop | `A.loop { out -> nextInput? }` | `Loop<A.IN, A.OUT>` | Feedback loop. The lambda returns `null` to stop, or an `IN` to continue. |
| Branch | `A.branch { on<V>() then handler }` | `Branch<A.IN, handler.OUT>` | Conditional routing on sealed type variants. |

```kotlin
import agents_engine.composition.pipeline.then
import agents_engine.composition.parallel.div
import agents_engine.composition.forum.times
import agents_engine.composition.loop.loop
import agents_engine.composition.branch.branch
```
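
A sketch of composition in use; `drafter` and `reviewer` are hypothetical `Agent<String, String>` instances:

```kotlin
// Sequential: drafter's output feeds reviewer's input.
val pipeline = drafter then reviewer
val reviewed: String = pipeline("write a haiku about Kotlin")

// Feedback loop: re-feed the output while it is too long; null stops the loop.
val refined = drafter.loop { out -> if (out.length > 500) out else null }
```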

Generation API

See Generable and Guide for detailed usage.

Annotations

| Annotation | Target | Description |
|---|---|---|
| `@Generable("desc")` | Class | Marks a data class or sealed interface as an LLM generation target. |
| `@Guide("desc")` | Constructor parameter, sealed subclass | Per-field or per-variant guidance for the LLM. |
| `@LlmDescription("text")` | Class | Overrides the auto-generated `toLlmDescription()` verbatim. |
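
For example, a generation target with per-field guidance (the class and guide texts are illustrative):

```kotlin
@Generable("A calendar event extracted from user text")
data class Event(
    @Guide("Short human-readable title") val title: String,
    @Guide("ISO-8601 date, e.g. 2026-05-04") val date: String,
)
```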

KClass Extension Functions

| Function | Returns | Description |
|---|---|---|
| `KClass<*>.toLlmDescription()` | `String` | Markdown description with fields, types, and `@Guide` texts. |
| `KClass<*>.jsonSchema()` | `String` | JSON Schema string for constrained decoding. |
| `KClass<*>.promptFragment()` | `String` | Natural-language prompt fragment with a JSON template. |
| `KClass<T>.fromLlmOutput(json)` | `T?` | Lenient deserialization. Handles markdown fences and trailing commas. Returns `null` on failure. |
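
A sketch using a hypothetical `@Generable` data class `Answer(val text: String)`:

```kotlin
// Schema for constrained decoding, then lenient parsing of a fenced LLM reply.
println(Answer::class.jsonSchema())
val parsed: Answer? = Answer::class.fromLlmOutput(
    """
    ```json
    {"text": "hi",}
    ```
    """  // markdown fence and trailing comma are tolerated; null on failure
)
```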

PartiallyGenerated

| Method | Description |
|---|---|
| `PartiallyGenerated.empty<T>()` | Creates an empty partial instance. |
| `partial.withField("name", value)` | Returns a new partial with the field set. |
| `partial.toComplete()` | Attempts full construction. Returns `T?`. |
| `partial.has("name")` | Checks whether a field has arrived. |
| `partial["name"]` | Gets a field value (or `null`). |
| `partial.arrivedFieldNames` | `Set<String>` of fields received so far. |
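
A sketch of assembling a partial as fields stream in, again assuming a hypothetical `Answer(val text: String)` target:

```kotlin
var partial = PartiallyGenerated.empty<Answer>()
partial = partial.withField("text", "hi")

if (partial.has("text")) println(partial["text"])
val complete: Answer? = partial.toComplete()  // null until all required fields arrive
```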

MemoryBank

See Agent Memory for detailed usage.

| Method | Signature | Description |
|---|---|---|
| Constructor | `MemoryBank(maxLines: Int = Int.MAX_VALUE)` | Creates a bank with an optional line cap. |
| `read` | `read(key: String): String` | Returns stored content, or an empty string. |
| `write` | `write(key: String, content: String)` | Writes content, auto-truncating to `maxLines`. |
| `entries` | `entries(): Map<String, String>` | Returns a snapshot of all key-value pairs. |
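
Direct usage outside an agent might look like this (key and content are illustrative):

```kotlin
val bank = MemoryBank(maxLines = 100)
bank.write("notes", "user prefers short answers")

val notes: String = bank.read("notes")          // "" if the key were absent
val all: Map<String, String> = bank.entries()   // snapshot of every key-value pair
```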

Resource Loading

Read agent prompts (or any static text) from classpath resources. See Best Practices: Load Long Prompts from Resources.

| Function | Signature | Description |
|---|---|---|
| `loadResource` | `loadResource(path: String): String` | Reads a UTF-8 classpath resource. Throws `IllegalArgumentException` if missing — fail-fast at agent construction. |
| `loadResourceOrNull` | `loadResourceOrNull(path: String): String?` | Same, but returns `null` when the resource is missing instead of throwing. |

A leading slash is tolerated: `prompts/x.md` and `/prompts/x.md` resolve to the same resource.

```kotlin
import agents_engine.core.loadResource

agent<String, String>("coder") {
    prompt(loadResource("prompts/coder.md"))
}
```

Swarm

Multi-agent JAR composition via ServiceLoader. See Swarm for the full mechanism.

| Symbol | Signature | Description |
|---|---|---|
| `AgentProvider` | `fun build(): Agent<*, *>` | SPI contract; each sibling JAR ships one implementation plus a `META-INF/services/agents_engine.runtime.AgentProvider` descriptor. |
| `Swarm.discover` | `discover(): List<Agent<*, *>>` | Walks the thread context classloader and builds every registered provider. |
| `Swarm.discover(loader)` | `discover(classLoader: ClassLoader): List<Agent<*, *>>` | Discovers from an explicit classloader (used in tests and multi-loader scenarios). |
| `Agent.absorb` | `Agent<*, *>.absorb(sibling: Agent<*, *>)` | Adds a tool named `sibling.name` whose executor invokes the sibling. Auto-enables the tool across every captain skill. |

```kotlin
import agents_engine.runtime.Swarm
import agents_engine.runtime.absorb

val captain = agent<String, String>("captain") { /* ... */ }
Swarm.discover()
    .filterNot { it.name == captain.name }
    .forEach { sibling -> captain.absorb(sibling) }
```

Each `absorb` call wraps the entire sibling agent (prompt, skills, knowledge, memory, and hooks) as a single tool callable from any of the captain's skills. The sibling must be an `Agent<String, *>`; typed-input siblings throw `IllegalArgumentException` at absorb time.


Error Recovery DSL

See Tool Error Recovery for detailed usage.

ToolError Types

| Type | Fields | Description |
|---|---|---|
| `ToolError.InvalidArgs` | `rawArgs`, `parseError`, `expectedSchema` | Malformed tool call arguments. |
| `ToolError.DeserializationError` | `rawValue`, `targetType`, `cause` | Type coercion failure. |
| `ToolError.ExecutionError` | `args`, `cause` | Runtime error during tool execution. |
| `ToolError.EscalationError` | `source`, `reason`, `severity`, `originalError`, `attempts` | The repair agent escalated the error. |

Severity Levels

`LOW` | `MEDIUM` | `HIGH` | `CRITICAL`

onError DSL

```kotlin
onError {
    invalidArgs { rawArgs, error ->
        fix { rawArgs.replace(",}", "}") }   // deterministic fix
        fix(agent = jsonFixer, retries = 3)  // LLM-driven repair
    }
    deserializationError { rawValue, error ->
        sanitize { rawValue.replace("\\", "/") }
    }
    executionError { cause ->
        retry(maxAttempts = 3, backoff = exponential())
    }
}
```

Tool-Level Defaults

```kotlin
tools {
    defaults {
        onError { invalidArgs { _, _ -> fix(agent = jsonFixer, retries = 3) } }
    }
    tool("write_file") { /* inherits defaults */ }
    tool("compile")    { onError { /* override */ } }
}
```

Built-in Repair Tools

| Tool | Description |
|---|---|
| `escalate()` | Soft failure -- the parent agent decides what to do. |
| `throwException()` | Hard failure -- propagates through the pipeline. |

LLM Message Types

| Type | Description |
|---|---|
| `LlmMessage(role, content, toolCalls?)` | A message in the conversation. Roles: `"system"`, `"user"`, `"assistant"`, `"tool"`. |
| `LlmResponse.Text(content)` | The LLM returned a text response. |
| `LlmResponse.ToolCalls(calls)` | The LLM returned one or more tool calls. |
| `ToolCall(name, arguments)` | A single tool invocation. `arguments` is `Map<String, Any?>`. |
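
As a sketch, these types combine in a scripted `ModelClient` mock (the tool name, arguments, and the `role` property access are assumed from the table above):

```kotlin
// First turn: request a tool call; after a tool message arrives, return text.
val mock = ModelClient { messages ->
    if (messages.none { it.role == "tool" })
        LlmResponse.ToolCalls(listOf(ToolCall("add", mapOf("a" to 1, "b" to 2))))
    else
        LlmResponse.Text("done")
}
```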

See also: Type Algebra | Glossary | Best Practices | Cookbook | Troubleshooting
