Fabricatio is a streamlined Python library for building LLM applications using an event-based agent structure. It leverages Rust for performance-critical tasks, Handlebars for templating, and PyO3 for Python bindings.
- Event-Driven Architecture: Robust task management through an EventEmitter pattern.
- LLM Integration & Templating: Seamless interaction with large language models and dynamic content generation.
- Async & Extensible: Fully asynchronous execution with easy extension via custom actions and workflows.
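The EventEmitter pattern behind this architecture can be illustrated with a minimal sketch (illustrative only, not fabricatio's actual classes): handlers subscribe to named events, and emitting an event runs every registered handler.

```python
from collections import defaultdict
from typing import Any, Callable


class EventEmitter:
    """Minimal emitter: handlers subscribe to event names and run on emit."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable) -> "EventEmitter":
        """Register a handler for an event name; chainable."""
        self._handlers[event].append(handler)
        return self

    def emit(self, event: str, payload: Any) -> list:
        """Run every handler registered for this event, collecting results."""
        return [handler(payload) for handler in self._handlers[event]]


emitter = EventEmitter()
emitter.subscribe("talk", lambda task: f"Hello {task}!")
print(emitter.emit("talk", "fabricatio"))  # ['Hello fabricatio!']
```

In fabricatio, workflows play the role of the handlers and tasks arrive as event payloads, which is what makes the system easy to extend with custom actions.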
- Add API support.
  - Define API types + REST route handlers + wire into axum server
  - Add CORS/error middleware + Python binding for server config
  - Integration tests + API docs
- Run as MCP server.
  - Feature flag + `McpServer` struct + tool registry + `tools/list`
  - stdio + HTTP transports + `tools/call` dispatch
  - Register Fabricatio tools as MCP tools + Python binding + tests
- Finalize the webui.
  - Chat interface + API client + WebSocket/SSE streaming
  - Config panel + agent status dashboard
  - Error handling + loading states + UX polish
- Add plugin system.
  - Plugin protocol + registry + lifecycle (load/unload)
  - Hook points in core lifecycle + entry-point discovery
  - Plugin config support + validation + tests
- Replace litellm with native Rust impl
- Port deprecated mock utils to `thryd` impl
- Port tests to new mock utils
- Sync documentation
- Router cache: support TTL and eviction
- Add worktree-based isolated development subpackage
- Add level-based context compression subpackage
  - Package skeleton + `CompressionLevel` enum + compression strategies
  - Async compression + Python bindings + tests
- TreeSetter-based ACE
  - tree-sitter dep + AST node types + tree edit operations (insert/replace/delete/move)
  - TreeSetter orchestrator + Python bindings + multi-language round-trip tests
- Self-extensible agent
  - Capability protocol + runtime registry + dynamic method injection on `Role`
  - Config-based discovery + hot-reload + tests
- Add more examples
  - Write missing examples (Structured Output, Extract, Improve)
  - Document undocumented examples + cross-link `use-cases.rst` + examples index
- Feed `ToolExecuter` exec results back to the LLM
  - Surface errors via `ApplicationError` + `ResultCollector.error()` + `last_error` template param
- Use the `stubgen` feature and `cfg_attr` to make stub generation opt-in for all mixed packages
- Use `Thryd` impl to move some requests to the Rust side
  - All core LLM operations already routed through `rust.router_usage`
- Add text-based skill system as a subpackage
  - Skill YAML/JSON schema + loader + directory scanner
  - Wire into `Role` + validation + example skill file + tests
- Port build workflow to `Justfile`
- `thryd::Router`: use a concurrency-safe impl
- Extract `Router` from `fabricatio-core` into a standalone `fabricatio-router` crate
- Replace parser with native Rust impl
- Better memory impl
- RAG package refactor: move rerank and embedding to `thryd`
  - Add reranker support in `thryd`
    - TEI as `Provider` in thryd (RerankerModel for OpenAI-compat: wontfix; OpenAI doesn't support rerankers)
    - Wire `rerank()` into the Router Python class + add `UseReranker` capability
- Add embedding and rerank mock support to `fabricatio-mock`
  - Add `add_or_update_dummy_embedding_model` and `add_or_update_dummy_reranker_model` to Router
  - Add `setup_dummy_embeddings`/`setup_dummy_reranks` + response builders in `fabricatio-mock`
  - Tests for embedding and rerank mock paths
- Replace `UseLLM` with native Rust impl
  - Fix the mock utils that are broken by the replacement
- Router support for `no_cache`
- Diff: use `Hashline` impl instead of `StringGrep`
  - Integrate `rho-hashline` crate + hash-based line anchoring in Rust
  - Add `compute_hash`, `format_hashes`, `parse_hashline_anchor`, `apply_*` functions
  - Add `Diff.format_with_hashes()` method + Python exports + 22 tests
- Add high-level `HashlineDiff` wrapper for the hashline API
  - `Diff` dataclass with anchor and line-number fields
  - `from_anchors()` and `from_line_range()` factory methods
  - `apply()` with line_range and pattern matching modes + tests
- Placeholder-based multi-agent edits
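The hash-based line anchoring mentioned in the diff items above can be sketched as follows (names like `compute_hash` and `apply_edit` are illustrative here, not the actual `rho-hashline` API): each line is identified by a short content hash, so an edit anchored to a hash can still be applied after unrelated edits shift line numbers.

```python
import hashlib


def compute_hash(line: str) -> str:
    """Short content hash identifying a line independently of its position."""
    return hashlib.sha1(line.encode("utf-8")).hexdigest()[:8]


def apply_edit(lines: list[str], anchor_hash: str, replacement: str) -> list[str]:
    """Replace the line whose hash matches the anchor, wherever it moved to."""
    for i, line in enumerate(lines):
        if compute_hash(line) == anchor_hash:
            return lines[:i] + [replacement] + lines[i + 1:]
    raise ValueError(f"anchor {anchor_hash} not found")


doc = ["alpha", "beta", "gamma"]
anchor = compute_hash("beta")  # anchor captured before the document changes
shifted = ["intro"] + doc      # line numbers shift, hashes do not
print(apply_edit(shifted, anchor, "BETA"))  # ['intro', 'alpha', 'BETA', 'gamma']
```

This is why hash anchors compose better across concurrent agents than raw line numbers: a stale line number silently edits the wrong line, while a stale hash fails loudly.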
```bash
# Install fabricatio with full capabilities.
pip install fabricatio[full]
# or with uv
uv add fabricatio[full]

# Install fabricatio with only rag and rule capabilities.
pip install fabricatio[rag,rule]
# or with uv
uv add fabricatio[rag,rule]
```
You can download the templates from the GitHub release manually and extract them to the working directory:

```bash
curl -L https://github.com/Whth/fabricatio/releases/download/v0.19.1/templates.tar.gz | tar -xz
```

Or you can use the `tdown` CLI bundled with fabricatio to achieve the same result:

```bash
tdown download --verbose -o ./
```

Note: `fabricatio` performs template discovery across multiple sources with filename-based identification. Template resolution follows a priority hierarchy where working-directory templates override templates located in `<ROAMING>/fabricatio/templates`.
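That priority lookup can be sketched like this (a simplified illustration, not fabricatio's actual resolver; the `.hbs` extension and directory names are assumptions): each candidate directory is checked in order, and the first filename match wins.

```python
from pathlib import Path
from typing import Optional


def resolve_template(name: str, search_dirs: list[Path]) -> Optional[Path]:
    """Return the first matching template file, honoring directory priority."""
    for directory in search_dirs:
        candidate = directory / f"{name}.hbs"
        if candidate.is_file():
            return candidate
    return None


# Working-directory templates are checked before the roaming store, so a
# local copy of a template shadows the globally installed one.
search_path = [
    Path.cwd() / "templates",                       # highest priority
    Path.home() / "fabricatio" / "templates",       # stand-in for <ROAMING>
]
```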
"""Example of a simple hello world program using fabricatio."""
from typing import Any
# Import necessary classes from the namespace package.
from fabricatio import Action, Event, Role, Task, WorkFlow, logger
# Create an action.
class Hello(Action):
"""Action that says hello."""
output_key: str = "task_output"
async def _execute(self, **_) -> Any:
ret = "Hello fabricatio!"
logger.info("executing talk action")
return ret
# Create the role and register the workflow.
(Role()
.subscribe(Event.quick_instantiate("talk"), WorkFlow(name="talk", steps=(Hello,)))
.dispatch())
# Make a task and delegate it to the workflow registered above.
assert Task(name="say hello").delegate_blocking("talk") == "Hello fabricatio!"For various usage scenarios, refer to the following examples:
- Simple Chat
- Structured Output
- Extraction
- Content Improvement
- Retrieval-Augmented Generation (RAG)
- Article Extraction
- Propose Task
- Code Review
- Write Outline
(For full example details, see Examples)
Fabricatio supports flexible configuration through multiple sources, with the following priority order:
Call Arguments > `./.env` > Environment Variables > `./fabricatio.toml` > `./pyproject.toml` >
`<ROAMING>/fabricatio/fabricatio.toml` > Builtin Defaults.
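Conceptually, each source is merged over the ones below it, so a key set in a higher-priority source shadows the same key from a lower one while unrelated keys pass through (a sketch of that layering, not fabricatio's internal loader):

```python
def merge_config(*sources: dict) -> dict:
    """Merge config dicts left-to-right, highest priority first.

    Nested dicts merge recursively; scalar values from a higher-priority
    source shadow the same key in lower-priority sources.
    """
    result: dict = {}
    for source in reversed(sources):  # apply lowest-priority source first
        for key, value in source.items():
            if isinstance(value, dict) and isinstance(result.get(key), dict):
                result[key] = merge_config(value, result[key])
            else:
                result[key] = value
    return result


call_args = {"llm": {"temperature": 0.2}}
toml_file = {"llm": {"temperature": 1.0, "top_p": 0.35}}
defaults = {"llm": {"stream": False}}

merged = merge_config(call_args, toml_file, defaults)
# temperature comes from call args, top_p from the toml file, stream from defaults
print(merged)
```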
Below are examples of configuration expressed in each supported format:
`.env`:

```dotenv
FABRICATIO_LLM__SEND_TO=openai/gpt-3.5-turbo
FABRICATIO_LLM__TEMPERATURE=1.0
FABRICATIO_LLM__TOP_P=0.35
FABRICATIO_LLM__STREAM=false
FABRICATIO_LLM__MAX_COMPLETION_TOKENS=8192
FABRICATIO_DEBUG__LOG_LEVEL=INFO
```

`fabricatio.toml`:

```toml
[debug]
log_level = "DEBUG"

[llm]
send_to = "base" # send req to `base` group by default
max_completion_tokens = 32000
stream = false
temperature = 1.0
top_p = 0.35

[routing]
providers = [
    { ptype = "OpenAICompatible", key = "sk-...", name = "mm", base_url = "https://api.example.com/v1/" }
]
completion_deployments = [
    { id = "mm/a-completion-model", group = 'base', tpm = 100_000, rpm = 1000 }
]
cache_database_path = "path/to/.cache.db"
```
`pyproject.toml`:

```toml
[tool.fabricatio.debug]
log_level = "DEBUG"

[tool.fabricatio.llm]
send_to = "base" # send req to `base` group by default
max_completion_tokens = 32000
stream = false
temperature = 1.0
top_p = 0.35

[tool.fabricatio.routing]
providers = [
    { ptype = "OpenAICompatible", key = "sk-...", name = "mm", base_url = "https://api.example.com/v1/" }
]
completion_deployments = [
    { id = "mm/a-completion-model", group = 'base', tpm = 100_000, rpm = 1000 }
]
cache_database_path = "path/to/.cache.db"
```
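The `FABRICATIO_LLM__TEMPERATURE`-style environment variables map onto the same nested structure: the `FABRICATIO_` prefix selects fabricatio's variables, and a double underscore separates nesting levels. A sketch of that naming convention (illustrative, not the actual loader, and values are left as strings rather than parsed):

```python
def env_to_config(environ: dict[str, str], prefix: str = "FABRICATIO_") -> dict:
    """Turn PREFIX_SECTION__KEY variables into {"section": {"key": value}}."""
    config: dict = {}
    for name, value in environ.items():
        if not name.startswith(prefix):
            continue  # ignore unrelated environment variables
        section, _, key = name[len(prefix):].partition("__")
        config.setdefault(section.lower(), {})[key.lower()] = value
    return config


env = {"FABRICATIO_LLM__TEMPERATURE": "1.0", "FABRICATIO_DEBUG__LOG_LEVEL": "INFO"}
print(env_to_config(env))
# {'llm': {'temperature': '1.0'}, 'debug': {'log_level': 'INFO'}}
```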
We welcome contributions from everyone! Before contributing, please read our Contributing Guide and Code of Conduct.
Fabricatio is licensed under the MIT License. See LICENSE for details.
Special thanks to the contributors and maintainers of: