Context / Problem

ChainWeaver's backlog decomposes the "compile-away-LLM-calls" vision into library-level building blocks: ChainAnalyzer (#77) discovers valid chains from schemas, ChainObserver (#78) detects runtime patterns, the LLM compiler (#28) proposes flows semantically, the description optimizer (#100) rewrites tool descriptions, and GovernanceManager (#13) gates promotion. But all of these are pull-based APIs — a developer must manually invoke them.
The original product vision describes a continuously-running service (conceptually "offline" relative to agent calls) that:
Watches for tool registry changes (new MCP tools, updated schemas)
Automatically runs schema compatibility analysis
Monitors agent tool call sequences for recurring patterns
Proposes compiled flows and description rewrites to a review queue
Presents approved flows as tools to the agent LLM — closing the loop
Without this service layer, users must manually orchestrate the analysis → scoring → proposal → governance → registration pipeline. The service transforms ChainWeaver from a library into a continuous optimization layer for MCP agent ecosystems.
Proposal
1. ChainWeaverService — the integration daemon
Create `chainweaver/service.py`:

```python
class ChainWeaverService:
    """
    Continuous analysis service that ties together the ChainWeaver
    optimization pipeline: analyze → observe → propose → govern → expose.
    Runs as a background process, separate from agent runtime.
    """

    def __init__(
        self,
        *,
        registry: FlowRegistry,
        executor: FlowExecutor,
        analyzer: ChainAnalyzer,                     # from #77
        observer: ChainObserver,                     # from #78
        governance: GovernanceManager,               # from #13
        llm_fn: Callable[[str], str] | None = None,  # for LLM-assisted proposals
        config: ServiceConfig = ServiceConfig(),
    ) -> None: ...

    def run(self) -> None:
        """Start the continuous analysis loop."""

    def stop(self) -> None:
        """Gracefully stop the service."""
```
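Under the hood, run() presumably alternates analysis cycles with a stop check. The following is a minimal runnable sketch of such a loop, with the pipeline stages stubbed out; the class name, _analysis_cycle, and the threading layout are illustrative assumptions, not ChainWeaver's actual implementation:

```python
import threading
import time


class ChainWeaverServiceSketch:
    """Illustrative run/stop loop; the real pipeline stages are stubbed out."""

    def __init__(self, interval_seconds: float = 0.01) -> None:
        self._interval = interval_seconds
        self._stop = threading.Event()
        self.cycles = 0  # number of completed analysis cycles

    def _analysis_cycle(self) -> None:
        # Placeholder for: poll registry -> analyze chains -> score traces
        # -> create proposals -> apply governance decisions.
        self.cycles += 1

    def run(self) -> None:
        """Loop until stop() is called, running one analysis cycle per tick."""
        while not self._stop.is_set():
            self._analysis_cycle()
            self._stop.wait(self._interval)  # returns early once stop() is called

    def stop(self) -> None:
        """Gracefully stop the loop at the next tick."""
        self._stop.set()


# Usage: run the loop in a background thread, then stop it gracefully.
svc = ChainWeaverServiceSketch()
t = threading.Thread(target=svc.run, daemon=True)
t.start()
time.sleep(0.1)
svc.stop()
t.join(timeout=1)
```

Using a threading.Event for both the stop flag and the inter-cycle sleep keeps shutdown latency bounded by one cycle rather than one full sleep interval.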
2. ServiceConfig — configurable triggers and thresholds
```python
@dataclass
class ServiceConfig:
    # Analysis triggers
    analyze_on_tool_change: bool = True          # Re-analyze when tools are added/updated
    analyze_interval_seconds: int | None = None  # Periodic re-analysis (None = disabled)

    # Observation thresholds
    min_trace_occurrences: int = 3      # Min pattern frequency before proposing
    min_determinism_score: float = 0.8  # Min confidence for auto-proposal

    # LLM compiler
    enable_llm_proposals: bool = False             # LLM-assisted proposals (opt-in)
    enable_description_optimization: bool = False  # Description rewrites (opt-in)

    # Governance
    auto_approve_deterministic: bool = False  # Auto-approve fully deterministic chains (dangerous, off by default)
    max_pending_proposals: int = 50           # Cap on the pending proposals queue
```
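To illustrate how the observation thresholds and queue cap might gate auto-proposals, here is a hedged sketch; should_propose is a hypothetical helper, not part of the proposed API:

```python
from dataclasses import dataclass


@dataclass
class ServiceConfig:
    # Subset of the proposed config relevant to proposal gating.
    min_trace_occurrences: int = 3
    min_determinism_score: float = 0.8
    max_pending_proposals: int = 50


def should_propose(occurrences: int, determinism: float,
                   pending: int, cfg: ServiceConfig) -> bool:
    """A pattern becomes a proposal only when it is frequent enough,
    deterministic enough, and the pending queue still has room."""
    return (occurrences >= cfg.min_trace_occurrences
            and determinism >= cfg.min_determinism_score
            and pending < cfg.max_pending_proposals)


cfg = ServiceConfig()
print(should_propose(5, 0.92, 10, cfg))  # True: passes all three gates
print(should_propose(2, 0.92, 10, cfg))  # False: pattern seen too rarely
print(should_propose(5, 0.75, 10, cfg))  # False: below determinism threshold
```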
3. Service pipeline stages

Each cycle runs the stages analyze → observe → propose → govern → expose; each stage's output feeds the next.

4. Event-driven trigger system

```python
class ServiceEvent(str, Enum):
    TOOL_REGISTERED = "tool_registered"
    TOOL_UPDATED = "tool_updated"  # Schema changed
    TRACE_RECORDED = "trace_recorded"
    ANALYSIS_COMPLETED = "analysis_completed"
    PROPOSAL_CREATED = "proposal_created"
    FLOW_PROMOTED = "flow_promoted"


class ChainWeaverService:
    def on_event(self, event: ServiceEvent, callback: Callable) -> None:
        """Register a callback for service events."""

    def emit(self, event: ServiceEvent, data: dict) -> None:
        """Emit an event to all registered callbacks."""
```
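One plausible implementation of this on_event/emit surface is a plain callback registry. The EventBus class below is an illustrative stand-in, not the proposed ChainWeaverService itself, and the enum is trimmed to two members for brevity:

```python
from collections import defaultdict
from enum import Enum
from typing import Callable


class ServiceEvent(str, Enum):
    PROPOSAL_CREATED = "proposal_created"
    FLOW_PROMOTED = "flow_promoted"


class EventBus:
    """Minimal callback registry matching the on_event/emit surface."""

    def __init__(self) -> None:
        self._callbacks: dict[ServiceEvent, list[Callable]] = defaultdict(list)

    def on_event(self, event: ServiceEvent, callback: Callable) -> None:
        self._callbacks[event].append(callback)

    def emit(self, event: ServiceEvent, data: dict) -> None:
        # Events with no listeners are silently ignored.
        for cb in self._callbacks[event]:
            cb(data)


bus = EventBus()
seen = []
bus.on_event(ServiceEvent.PROPOSAL_CREATED, lambda d: seen.append(d["id"]))
bus.emit(ServiceEvent.PROPOSAL_CREATED, {"id": "p-1"})
bus.emit(ServiceEvent.FLOW_PROMOTED, {"id": "f-9"})  # no listener; ignored
print(seen)  # ['p-1']
```

A production version would likely isolate callback errors so one failing subscriber (say, a Slack webhook) cannot break the analysis loop.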
5. Integration points

The service wires together ChainAnalyzer, ChainObserver, GovernanceManager, llm_propose_flows(), optimize_tool_descriptions(), VirtualTool.from_flow(), and FlowServer.

6. CLI integration

Add chainweaver service subcommands (extends #44):
```
chainweaver service start            # Start the service
chainweaver service status           # Show pending proposals, active flows, metrics
chainweaver service proposals        # List pending proposals with scores
chainweaver service approve <id>     # Approve a proposal
chainweaver service reject <id>      # Reject a proposal
chainweaver service metrics          # Show stats: flows promoted, LLM calls avoided, patterns detected
```
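A sketch of how these subcommands could be wired with stdlib argparse; the parser layout and option names are assumptions, since #44 may use a different CLI framework:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Nested subcommand structure: chainweaver service <action> [args]."""
    parser = argparse.ArgumentParser(prog="chainweaver")
    sub = parser.add_subparsers(dest="command", required=True)

    service = sub.add_parser("service").add_subparsers(dest="action", required=True)
    service.add_parser("start")
    service.add_parser("status")
    service.add_parser("proposals")
    service.add_parser("metrics")
    for action in ("approve", "reject"):
        service.add_parser(action).add_argument("id")  # proposal id
    return parser


# Usage: parse a sample invocation instead of reading sys.argv.
args = build_parser().parse_args(["service", "approve", "p-42"])
print(args.action, args.id)  # approve p-42
```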
7. Metrics and reporting
```python
@dataclass
class ServiceMetrics:
    tools_monitored: int
    traces_recorded: int
    patterns_detected: int
    flows_proposed: int
    flows_promoted: int
    total_llm_calls_avoided: int  # Across all executions of promoted flows
    estimated_cost_saved_usd: float
    estimated_latency_saved_ms: float
```
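One way total_llm_calls_avoided could be aggregated, under the simplifying assumption that each execution of an n-step compiled flow replaces roughly n LLM-mediated tool selections; PromotedFlow and llm_calls_avoided are hypothetical names, not part of the proposed API:

```python
from dataclasses import dataclass


@dataclass
class PromotedFlow:
    steps: int       # tool calls in the compiled flow
    executions: int  # times the flow ran in place of LLM orchestration


def llm_calls_avoided(flows: list[PromotedFlow]) -> int:
    """Sum, over promoted flows, of steps compiled away times executions."""
    return sum(f.steps * f.executions for f in flows)


print(llm_calls_avoided([PromotedFlow(3, 100), PromotedFlow(2, 40)]))  # 380
```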
Notes
This is the product-level manifestation of ChainWeaver's core value proposition: instead of manual flow authoring, the system continuously discovers and proposes optimizations.
The service transforms ChainWeaver from "a library you call" to "a layer that improves your agents automatically."
The total_llm_calls_avoided metric in ServiceMetrics is the single most compelling adoption argument — it directly quantifies value.
Consider a "dry-run mode" where the service runs analysis but only logs proposals without creating governance entries — useful for evaluation before commitment.
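A dry-run gate could be as small as a branch in the proposal handler. In this sketch, handle_proposal and its parameters are hypothetical names used only to illustrate the idea:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chainweaver.service")


def handle_proposal(proposal: dict, *, dry_run: bool, queue: list) -> None:
    """In dry-run mode, proposals are logged for evaluation but never
    enter the governance queue."""
    if dry_run:
        log.info("DRY RUN: would propose flow %s", proposal["name"])
        return
    queue.append(proposal)


queue: list = []
handle_proposal({"name": "fetch_then_summarize"}, dry_run=True, queue=queue)
print(len(queue))  # 0: nothing queued in dry-run mode
handle_proposal({"name": "fetch_then_summarize"}, dry_run=False, queue=queue)
print(len(queue))  # 1
```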
The event system enables future integrations: Slack notifications on proposals, CI hooks on promotions, dashboard updates on metrics changes.
Relevant Code Locations
- ChainAnalyzer → chainweaver/analyzer.py (Offline computation of valid tool combinations from schemas, #77)
- ChainObserver → chainweaver/observer.py (Implement runtime chain observer with auto-flow suggestion, #78)
- GovernanceManager → chainweaver/governance.py (Design opt-in governance workflow for promoting observed chains, #13)
- llm_propose_flows → chainweaver/compiler_llm.py (Add optional offline LLM-assisted flow compiler (build-time only), #28)
- optimize_tool_descriptions → chainweaver/optimizer.py (Add offline LLM-assisted tool description optimizer, #100)
- VirtualTool → chainweaver/virtual_tool.py (Implement Flow-to-VirtualTool adapter for tool-space reduction, #24)
- FlowServer → chainweaver/mcp/server.py (MCP Flow Server — expose compiled flows as MCP tools, #72)
- chainweaver/cli.py (Implement CLI entry point with inspect command, #44)

Acceptance Criteria
- ChainWeaverService class exists in chainweaver/service.py
- ServiceConfig controls triggers, thresholds, and feature flags
- Proposed flows are gated by GovernanceManager (never auto-registered without governance)
- ServiceMetrics tracks key value-prop numbers (patterns detected, flows promoted, LLM calls avoided)
- auto_approve_deterministic is False by default (safety)

Out of Scope
Dependencies
This is an integration issue that ties together many prior components:
- ChainAnalyzer
- ChainObserver
- GovernanceManager
- VirtualTool
- compiler_llm.py
- optimizer.py

Minimum viable service: requires only #77 (analyzer) + #13 (governance). The LLM compiler, observer, and description optimizer can be plugged in incrementally.