TinyClaw uses a file-based queue system to coordinate message processing across multiple channels and agents. This document explains how it works.
The queue system acts as a central coordinator between:
- Channel clients (Discord, Telegram, WhatsApp) - produce messages
- Queue processor - routes and processes messages
- AI providers (Claude, Codex) - generate responses
- Agents - isolated AI agents with different configs
```
┌─────────────────────────────────────────────────────────────┐
│                      Message Channels                       │
│         (Discord, Telegram, WhatsApp, Heartbeat)            │
└────────────────────┬────────────────────────────────────────┘
                     │ Write message.json
                     ↓
┌─────────────────────────────────────────────────────────────┐
│                   ~/.tinyclaw/queue/                        │
│                                                             │
│   incoming/          processing/         outgoing/          │
│   ├─ msg1.json  →    ├─ msg1.json   →    ├─ msg1.json       │
│   ├─ msg2.json       └─ msg2.json        └─ msg2.json       │
│   └─ msg3.json                                              │
│                                                             │
└────────────────────┬────────────────────────────────────────┘
                     │ Queue Processor
                     ↓
┌─────────────────────────────────────────────────────────────┐
│               Parallel Processing by Agent                  │
│                                                             │
│   Agent: coder      Agent: writer      Agent: assistant     │
│   ┌──────────┐      ┌──────────┐       ┌──────────┐         │
│   │ Message 1│      │ Message 1│       │ Message 1│         │
│   │ Message 2│ ...  │ Message 2│ ...   │ Message 2│ ...     │
│   │ Message 3│      │          │       │          │         │
│   └────┬─────┘      └────┬─────┘       └────┬─────┘         │
│        │                 │                  │               │
└────────┼─────────────────┼──────────────────┼───────────────┘
         ↓                 ↓                  ↓
    claude CLI        claude CLI         claude CLI
 (workspace/coder) (workspace/writer) (workspace/assistant)
```
```
~/.tinyclaw/
├── queue/
│   ├── incoming/          # New messages from channels
│   │   ├── msg_123456.json
│   │   └── msg_789012.json
│   ├── processing/        # Currently being processed
│   │   └── msg_123456.json
│   └── outgoing/          # Responses ready to send
│       └── msg_123456.json
├── logs/
│   ├── queue.log          # Queue processor logs
│   ├── discord.log        # Channel-specific logs
│   └── telegram.log
└── files/                 # Uploaded files from channels
    └── image_123.png
```
A channel client receives a message and writes it to `incoming/`:

```json
{
  "channel": "discord",
  "sender": "Alice",
  "senderId": "user_12345",
  "message": "@coder fix the authentication bug",
  "timestamp": 1707739200000,
  "messageId": "discord_msg_123",
  "files": ["/path/to/screenshot.png"]
}
```

Optional fields:

- `agent` - Pre-route to a specific agent (bypasses `@agent_id` parsing)
- `files` - Array of file paths uploaded with the message
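The write path can be sketched as a small channel-side helper. This is a minimal sketch, not TinyClaw's actual client code: the queue path is rebased onto a temp directory for the demo, and the write-temp-then-rename pattern is an assumption about how to keep writes atomic so the processor never reads a half-written file.

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Stand-in for ~/.tinyclaw/queue/incoming (hypothetical location for this demo).
const INCOMING = path.join(os.tmpdir(), 'tinyclaw-demo', 'queue', 'incoming');

interface QueueMessage {
  channel: string;
  sender: string;
  senderId: string;
  message: string;
  timestamp: number;
  messageId: string;
  agent?: string;   // optional: pre-route to a specific agent
  files?: string[]; // optional: uploaded file paths
}

// Write to a temp name, then rename: rename is atomic on one filesystem,
// so the queue processor never observes a partially written JSON file.
function enqueue(msg: QueueMessage): string {
  fs.mkdirSync(INCOMING, { recursive: true });
  const file = path.join(INCOMING, `msg_${msg.timestamp}.json`);
  fs.writeFileSync(`${file}.tmp`, JSON.stringify(msg, null, 2));
  fs.renameSync(`${file}.tmp`, file);
  return file;
}

const written = enqueue({
  channel: 'discord',
  sender: 'Alice',
  senderId: 'user_12345',
  message: '@coder fix the authentication bug',
  timestamp: 1707739200000,
  messageId: 'discord_msg_123',
});
```

Writing directly to the final filename would risk the 1-second poller reading a partial file; the rename makes each message appear all at once.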
The queue processor (runs every 1 second):

1. Scans `incoming/` for new messages
2. Sorts by timestamp (oldest first)
3. Determines the target agent:
   - Checks the `agent` field (if pre-routed)
   - Parses the `@agent_id` prefix from the message
   - Falls back to the `default` agent
4. Moves the message to `processing/` (atomic operation)
5. Routes it to the agent's promise chain (parallel processing)
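One tick of that loop can be sketched as follows. The directory layout mirrors the description above, but the paths, the `routeToAgent` callback, and the prefix regex are illustrative assumptions, not the real implementation.

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Stand-in for ~/.tinyclaw/queue (hypothetical location for this demo).
const QUEUE = path.join(os.tmpdir(), 'tinyclaw-tick-demo');

// One polling tick: scan incoming/, sort oldest-first, claim, route.
function tick(routeToAgent: (agent: string, msg: any) => void): void {
  const incoming = path.join(QUEUE, 'incoming');
  const processing = path.join(QUEUE, 'processing');

  const msgs = fs.readdirSync(incoming)
    .filter(f => f.endsWith('.json'))
    .map(f => ({ f, msg: JSON.parse(fs.readFileSync(path.join(incoming, f), 'utf8')) }))
    .sort((a, b) => a.msg.timestamp - b.msg.timestamp); // oldest first

  for (const { f, msg } of msgs) {
    // A pre-routed `agent` field wins; then the @agent_id prefix; then 'default'.
    const m = /^@(\w+)\s+/.exec(msg.message);
    const agent = msg.agent ?? (m ? m[1] : 'default');
    // rename() is atomic, so each message is claimed exactly once.
    fs.renameSync(path.join(incoming, f), path.join(processing, f));
    routeToAgent(agent, msg);
  }
}

// Demo: two messages arrive out of timestamp order.
fs.mkdirSync(path.join(QUEUE, 'incoming'), { recursive: true });
fs.mkdirSync(path.join(QUEUE, 'processing'), { recursive: true });
fs.writeFileSync(path.join(QUEUE, 'incoming', 'b.json'),
  JSON.stringify({ message: 'help me', timestamp: 2000 }));
fs.writeFileSync(path.join(QUEUE, 'incoming', 'a.json'),
  JSON.stringify({ message: '@coder fix bug', timestamp: 1000 }));

const routed: string[] = [];
tick(agent => routed.push(agent));
```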
Each agent has its own promise chain:
```
// Messages to the same agent = sequential (preserves conversation order)
agentChain: msg1 → msg2 → msg3

// Different agents = parallel (don't block each other)
@coder:     msg1 ──┐
@writer:    msg1 ──┼─→ All run concurrently
@assistant: msg1 ──┘
```

Per-agent isolation:

- Each agent runs in its own `working_directory`
- Separate conversation history (managed by the CLI)
- Independent reset flags
- Own configuration files (`.claude/`, `AGENTS.md`)
Claude (Anthropic):

```bash
cd ~/workspace/coder/
claude --dangerously-skip-permissions \
  --model claude-sonnet-4-5 \
  -c \
  -p "fix the authentication bug"
# -c continues the previous conversation
```

Codex (OpenAI):

```bash
cd ~/workspace/coder/
codex exec resume --last \
  --model gpt-5.3-codex \
  --skip-git-repo-check \
  --dangerously-bypass-approvals-and-sandbox \
  --json "fix the authentication bug"
```

After the AI responds, the queue processor writes the response to `outgoing/`:
```json
{
  "channel": "discord",
  "sender": "Alice",
  "message": "I've identified the issue in auth.ts:42...",
  "originalMessage": "@coder fix the authentication bug",
  "timestamp": 1707739205000,
  "messageId": "discord_msg_123",
  "agent": "coder",
  "files": ["/path/to/fix.patch"]
}
```

Channel clients poll `outgoing/` and:

- Read responses for their channel
- Send the message to the user
- Delete the JSON file
- Handle any file attachments
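The client side of that handoff might look like this sketch. The path and the `sendToUser` callback are illustrative stand-ins; the point is that each client filters for its own channel, so the Discord client never consumes a Telegram response.

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Stand-in for ~/.tinyclaw/queue/outgoing (hypothetical location for this demo).
const OUTGOING = path.join(os.tmpdir(), 'tinyclaw-out-demo');

// One poll: pick up responses for this channel only, deliver, then delete.
function pollOutgoing(
  channel: string,
  sendToUser: (text: string, files: string[]) => void
): number {
  let delivered = 0;
  for (const f of fs.readdirSync(OUTGOING).filter(f => f.endsWith('.json'))) {
    const full = path.join(OUTGOING, f);
    const resp = JSON.parse(fs.readFileSync(full, 'utf8'));
    if (resp.channel !== channel) continue;     // another client's response
    sendToUser(resp.message, resp.files ?? []); // deliver, with attachments
    fs.unlinkSync(full);                        // then remove from the queue
    delivered++;
  }
  return delivered;
}

// Demo: one discord response and one telegram response are waiting.
fs.mkdirSync(OUTGOING, { recursive: true });
fs.writeFileSync(path.join(OUTGOING, 'r1.json'),
  JSON.stringify({ channel: 'discord', message: 'done', files: [] }));
fs.writeFileSync(path.join(OUTGOING, 'r2.json'),
  JSON.stringify({ channel: 'telegram', message: 'ok' }));

const seen: string[] = [];
const n = pollOutgoing('discord', text => seen.push(text));
```

Deleting the file only after a successful send means a crashed client re-delivers rather than silently dropping a response.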
Each agent has its own promise chain that processes messages sequentially:

```typescript
const agentProcessingChains = new Map<string, Promise<void>>();

// When a message arrives for @coder:
const chain = agentProcessingChains.get('coder') || Promise.resolve();
const newChain = chain.then(() => processMessage(msg));
agentProcessingChains.set('coder', newChain);
```

Example: 3 messages sent simultaneously
Sequential (old):

```
@coder fix bug 1  [████████████████] 30s
@writer docs      [██████████] 20s
@assistant help   [████████] 15s
Total: 65 seconds
```

Parallel (new):

```
@coder fix bug 1  [████████████████] 30s
@writer docs      [██████████] 20s  ← concurrent!
@assistant help   [████████] 15s    ← concurrent!
Total: 30 seconds (2.2x faster!)
```
Messages to the same agent remain sequential:

```
@coder fix bug 1  [████] 10s
@coder fix bug 2  [████] 10s    ← waits for bug 1
@writer docs      [██████] 15s  ← parallel with both
```
This ensures:

- ✅ Conversation context is maintained
- ✅ The `-c` (continue) flag works correctly
- ✅ No race conditions within an agent
- ✅ Agents don't block each other
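These guarantees can be exercised with a self-contained sketch of the chaining pattern. The delays and labels are arbitrary, and `enqueue` is a simplified stand-in for the real per-agent routing:

```typescript
const chains = new Map<string, Promise<void>>();
const order: string[] = [];

const sleep = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

// Append work to an agent's chain: same agent = sequential,
// different agents = independent chains running concurrently.
function enqueue(agentId: string, label: string, ms: number): void {
  const prev = chains.get(agentId) ?? Promise.resolve();
  chains.set(agentId, prev.then(async () => {
    await sleep(ms);
    order.push(label);
  }));
}

const done = (async () => {
  enqueue('coder', 'coder-1', 100);  // starts immediately
  enqueue('coder', 'coder-2', 20);   // must wait for coder-1
  enqueue('writer', 'writer-1', 40); // runs alongside coder-1
  await Promise.all(chains.values());
})();
```

writer-1 finishes first (40ms) even though coder-2 only needs 20ms of work, because coder-2 waits out coder-1's 100ms to preserve conversation order.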
Use the `@agent_id` prefix:

```
User: @coder fix the login bug
→ Routes to agent "coder"
→ Message becomes: "fix the login bug"
```
Channel clients can pre-route:

```typescript
const queueData = {
  channel: 'discord',
  message: 'help me',
  agent: 'assistant'  // Pre-routed, no @prefix needed
};
```

Routing priority:

1. Check the `message.agent` field (if pre-routed)
2. Parse `@agent_id` from the message text
3. Look up the agent in `settings.agents`
4. Fall back to the `default` agent
5. If no default, use the first available agent
```
"@coder fix bug"         → agent: coder
"help me"                → agent: default
"@unknown test"          → agent: default (unknown agent)
"@assistant help"        → agent: assistant
pre-routed with agent=X  → agent: X
```
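That priority order can be captured in a small resolver. This is a sketch, with a hypothetical `agents` map standing in for `settings.agents`; the real implementation lives in `src/queue-processor.ts`.

```typescript
interface Resolved { agent: string; text: string; }

// Resolve in priority order: explicit field → @prefix → 'default' → first agent.
function resolveAgent(
  msg: { message: string; agent?: string },
  agents: Record<string, unknown>
): Resolved {
  if (msg.agent && agents[msg.agent]) {
    return { agent: msg.agent, text: msg.message };
  }
  const m = /^@(\w+)\s*/.exec(msg.message);
  if (m && agents[m[1]]) {
    // Strip the @prefix so the agent sees only the instruction.
    return { agent: m[1], text: msg.message.slice(m[0].length) };
  }
  // Unknown @mentions fall through to the default, text untouched.
  const fallback = 'default' in agents ? 'default' : Object.keys(agents)[0];
  return { agent: fallback, text: msg.message };
}

const agents = { default: {}, coder: {}, assistant: {} };
```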
If you mention multiple agents in one message:
```
User: "@coder @writer fix this bug and document it"
```

Result:

→ Returns a friendly message about upcoming agent-to-agent collaboration
→ No AI processing (saves tokens!)
→ Suggests sending separate messages to each agent
The easter egg message:

```
🚀 Agent-to-Agent Collaboration - Coming Soon!

You mentioned multiple agents: @coder, @writer

Right now, I can only route to one agent at a time. But we're working on something cool:

✨ Multi-Agent Coordination - Agents will be able to collaborate on complex tasks!
✨ Smart Routing - Send instructions to multiple agents at once!
✨ Agent Handoffs - One agent can delegate to another!

For now, please send separate messages to each agent:
• @coder [your message]
• @writer [your message]

Stay tuned for updates! 🎉
```

This prevents confusion and teases the upcoming feature.
A global reset creates `~/.tinyclaw/reset_flag`:

```bash
tinyclaw reset
```

The next message to any agent starts fresh (no `-c` flag).

A per-agent reset creates `~/workspace/{agent_id}/reset_flag`:

```bash
tinyclaw agent reset coder
# Or in chat:
@coder /reset
```

The next message to that agent starts fresh.
The queue processor checks for both flags before each message:

```typescript
const globalReset = fs.existsSync(RESET_FLAG);
const agentReset = fs.existsSync(`${agentDir}/reset_flag`);
if (globalReset || agentReset) {
  // Don't pass the -c flag to the CLI
  // Delete the flag files
}
```

Channels download files to `~/.tinyclaw/files/`:
```
User uploads: image.png
→ Saved as: ~/.tinyclaw/files/telegram_123_image.png
→ Message includes: [file: /absolute/path/to/image.png]
```

The AI can send files back:

```
AI response: "Here's the diagram [send_file: /path/to/diagram.png]"
```

→ Queue processor extracts the file path
→ Adds it to the `response.files` array
→ Channel client sends it as an attachment
→ The tag is stripped from the message text
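The extraction step could look like this sketch. The `[send_file: …]` syntax is taken from the example above; the regex and helper name are assumptions about how the real processor parses it.

```typescript
interface Extracted { text: string; files: string[]; }

// Pull every [send_file: /path] tag out of the AI's response text,
// collecting the paths and stripping the tags from the message.
function extractSendFiles(response: string): Extracted {
  const files: string[] = [];
  const text = response
    .replace(/\[send_file:\s*([^\]]+)\]/g, (_match, p: string) => {
      files.push(p.trim());
      return '';
    })
    .replace(/\s{2,}/g, ' ') // collapse gaps left by removed tags
    .trim();
  return { text, files };
}

const out = extractSendFiles(
  "Here's the diagram [send_file: /path/to/diagram.png]"
);
```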
If the agent is not found:

```
User: @unknown help
→ Routes to: default agent
→ Logs: WARNING - Agent 'unknown' not found, using 'default'
```
Errors are caught per agent:

```typescript
newChain.catch(error => {
  log('ERROR', `Error processing message for agent ${agentId}: ${error.message}`);
});
```

Failed messages:

- Don't block other agents
- Are logged to `queue.log`
- Produce no response file
- Cause the channel client to time out gracefully
Old messages left in `processing/` (crashed mid-process) are:

- Automatically picked up on restart
- Re-processed from scratch
- Treated as if the original had just been moved from `incoming/` again
- Sequential: 1 message per AI response time (~10-30s)
- Parallel: N agents × 1 message per response time
- 3 agents: ~3x throughput improvement
- Queue check: Every 1 second
- Agent routing: <1ms (file peek)
- Max latency: 1s + AI response time
Good for:
- ✅ Multiple independent agents
- ✅ High message volume
- ✅ Long AI response times
Limitations:
- ⚠️ File-based (not a database)
- ⚠️ Single queue processor instance
- ⚠️ All agents on the same machine
```bash
# See pending messages
ls ~/.tinyclaw/queue/incoming/

# See processing
ls ~/.tinyclaw/queue/processing/

# See responses waiting
ls ~/.tinyclaw/queue/outgoing/

# Watch queue logs
tail -f ~/.tinyclaw/logs/queue.log
```

Messages stuck in `incoming/`:
- Queue processor not running
- Check: `tinyclaw status`

Messages stuck in `processing/`:

- AI CLI crashed or hung
- Manual cleanup: `rm ~/.tinyclaw/queue/processing/*`
- Restart: `tinyclaw restart`

No responses generated:

- Check agent routing (wrong `@agent_id`?)
- Check that the AI CLI is installed (claude/codex)
- Check logs: `tail -f ~/.tinyclaw/logs/queue.log`

Agents not processing in parallel:

- Check the TypeScript build: `npm run build`
- Check the queue processor version in the logs
Replace file-based queue with:
- Redis (for multi-instance)
- Database (for persistence)
- Message broker (RabbitMQ, Kafka)
Key interface to maintain:

```typescript
interface QueueMessage {
  channel: string;
  sender: string;
  message: string;
  timestamp: number;
  messageId: string;
  agent?: string;
  files?: string[];
}
```

Currently, all agents run on the same machine.
Future: route agents to different machines:

```json
{
  "agents": {
    "coder": {
      "host": "worker1.local",
      "working_directory": "/agents/coder"
    },
    "writer": {
      "host": "worker2.local",
      "working_directory": "/agents/writer"
    }
  }
}
```

Add metrics:
- `messages_processed_total` (by agent)
- `processing_duration_seconds` (by agent)
- `queue_depth` (incoming/processing/outgoing)
- `agent_active_processing` (concurrent count)

See also:

- AGENTS.md - Agent configuration and management
- README.md - Main project documentation
- src/queue-processor.ts - Implementation