This project implements a modular multi-agent chat system for research, analysis, and memory retrieval. It features:
- Coordinator: Receives user queries, routes tasks, and synthesizes results.
- ResearchAgent: Retrieves knowledge from a mock knowledge base.
- AnalysisAgent: Summarizes, compares, and extracts insights from research results.
- MemoryAgent: Stores and retrieves structured knowledge, conversation, and agent state.
- MemoryLayer: Persists all records with metadata and supports vector-based retrieval.
See DEVELOPMENT_PHASES.md for a step-by-step breakdown of the development process.
Sequence:
- User submits a query.
- Coordinator logs and routes the query:
  - Simple: ResearchAgent → MemoryAgent (recall)
  - Complex: ResearchAgent → AnalysisAgent → MemoryAgent
  - Memory: MemoryAgent directly
- Agents process and store results with provenance and timestamps.
- Coordinator returns the answer and saves transcripts/results in outputs/.
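The routing step above can be sketched as a small classifier. This is an illustrative sketch only; the keyword lists and the `choose_route` function are assumptions, not the project's actual Coordinator API:

```python
# Hypothetical sketch of the Coordinator's routing decision.
# The keyword heuristics below are assumptions for illustration;
# the real classification logic may differ.
MEMORY_HINTS = ("earlier", "remember", "previously", "we discuss")
COMPLEX_HINTS = ("compare", "analyze", "summarize", "insight")

def choose_route(query: str) -> list[str]:
    """Return the ordered list of agents a query should visit."""
    q = query.lower()
    if any(h in q for h in MEMORY_HINTS):
        # Memory route: MemoryAgent handles recall directly.
        return ["MemoryAgent"]
    if any(h in q for h in COMPLEX_HINTS):
        # Complex route: research, then analysis, then store.
        return ["ResearchAgent", "AnalysisAgent", "MemoryAgent"]
    # Simple route: research, then store/recall.
    return ["ResearchAgent", "MemoryAgent"]
```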
Flow Diagram:

```mermaid
flowchart LR
    User --> Coordinator
    Coordinator --> ResearchAgent
    ResearchAgent --> AnalysisAgent
    AnalysisAgent --> MemoryAgent
    Coordinator --> MemoryAgent
    MemoryAgent --> Coordinator
```
Coordinator:
- Receives the user query.
- Decides that ResearchAgent should handle the task.
- Logs and outputs the result.

ResearchAgent:
- Looks up the query in a small mock knowledge base (data/knowledge_base.json).
- Returns a list of results with a confidence score.

MemoryLayer:
- Stores all results with timestamp, agent name, topic, and confidence.
- Provides minimal retrieval (will be expanded in future prototypes).
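A record stored by the MemoryLayer might look like the following. This is a minimal sketch based on the field list above; the class and method names are assumptions, not the project's actual implementation:

```python
# Illustrative sketch of MemoryLayer storage; field and method names
# are assumptions based on the README's description.
from datetime import datetime, timezone

class MemoryLayer:
    def __init__(self):
        self.records = []

    def store(self, agent: str, topic: str, result, confidence: float) -> dict:
        """Persist a result with timestamp, agent name, topic, and confidence."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "topic": topic,
            "result": result,
            "confidence": confidence,
        }
        self.records.append(record)
        return record

    def retrieve(self, topic: str) -> list:
        # Minimal retrieval: exact topic match (expanded in later prototypes).
        return [r for r in self.records if r["topic"] == topic]
```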
Confidence values are simulated and rule-based:
- 0.9 for exact keyword matches.
- 0.3 for fuzzy or partial matches.
Example:
Query: "what are the main types of neural networks?" → the topic "neural networks" is matched, but only fuzzily (extra words and punctuation). → Confidence returned: 0.3.
Limitation: Confidence does not reflect semantic similarity yet. This will be improved in Prototype 2.
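The two-tier scoring rule can be sketched as follows. The `confidence` function and its exact matching logic are assumptions for illustration; the real rule in ResearchAgent may differ:

```python
# Hypothetical sketch of the rule-based confidence scoring described above;
# the actual matching logic in ResearchAgent may differ.
import string

def confidence(query: str, topic: str) -> float:
    """Return 0.9 for an exact keyword match, 0.3 for a fuzzy/partial one."""
    q = query.lower().strip()
    if q == topic.lower():
        return 0.9                       # exact match
    # Strip punctuation, then check for a partial (substring) match.
    cleaned = q.translate(str.maketrans("", "", string.punctuation))
    if topic.lower() in cleaned:
        return 0.3                       # fuzzy/partial match
    return 0.0                           # no match
```

This reproduces the example above: the punctuated question only contains the topic as a substring, so it earns 0.3 rather than 0.9.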
Local Setup:

```shell
conda create -n multiagent-chat python=3.10 -y
conda activate multiagent-chat
pip install -r requirements.txt
```

Run interactive chat:

```shell
python run.py
```

Run scenario tests:

```shell
python run_scenarios.py
```

Docker:

```shell
docker build -t multiagent-chat .
docker run -it multiagent-chat
docker run -it multiagent-chat python run_scenarios.py
```

Docker Compose:

```shell
docker-compose build
docker-compose up
```

- This will run the service interactively (python run.py).
- The outputs/ folder is mounted for easy access to results on your host machine.
- To stop the service: docker-compose down

Project Structure:

```
multiagent-chat/
├── agents/
│   ├── research_agent.py
│   ├── analysis_agent.py
│   └── memory_agent.py
├── data/
│   └── knowledge_base.json
├── memory/
│   └── memory_layer.py
├── coordinator.py
├── run.py
├── run_scenarios.py
├── requirements.txt
├── README.md
└── outputs/   (sample outputs will be stored here)
```
✔ Coordinator routes query to ResearchAgent.
✔ ResearchAgent retrieves from mock knowledge base.
✔ MemoryLayer stores result.
✔ Scenario runner works for a simple query.
Limitations:
- Confidence scores are rule-based, not semantic.
- Only ResearchAgent is fully functional; AnalysisAgent and MemoryAgent are stubs.
- No real vector similarity search yet.
- Error handling is minimal.
- MemoryLayer provides three structured stores:
- Conversation records: logs all user/system messages with provenance and timestamps.
- Knowledge records: stores synthesized findings, analysis, and research results.
- Agent state records: tracks agent actions, tasks, and outcomes.
- Retrieval uses a TF-IDF-like vector search for relevance.
- All records include provenance, agent, and timestamp for traceability.
- MemoryAgent exposes explicit retrieval methods for each store.
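A TF-IDF-like retrieval over the stored records can be sketched as below. This is a minimal, self-contained illustration under assumed function names (`tfidf_vectors`, `cosine`, `search`); the project's actual MemoryLayer search may be implemented differently:

```python
# Hypothetical sketch of a TF-IDF-like vector search over stored records;
# function names and weighting details are assumptions for illustration.
import math
from collections import Counter

def tfidf_vectors(docs: list[str]) -> list[dict]:
    """Build a sparse TF-IDF vector (term -> weight) for each document."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log((1 + n) / (1 + df[t]))
                        for t, c in tf.items()})
    return vectors

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: list[str]) -> int:
    """Return the index of the most relevant stored document."""
    vecs = tfidf_vectors(docs + [query])   # vectorize corpus plus query together
    qvec = vecs[-1]
    scores = [cosine(qvec, v) for v in vecs[:-1]]
    return max(range(len(scores)), key=scores.__getitem__)
```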
Sample from outputs/simple_query.txt:

```
Prompt: What are the main types of neural networks?
Result:
{'result': "Summary for 'What are the main types of neural networks?': Moderate relevance in 1 topic(s): neural networks | Weak signals from 3 topic(s), may be less reliable."}
```

Sample from outputs/memory_test.txt:

```
Prompt: What did we discuss about neural networks earlier?
Result:
{'result': "(from memory) Previous discussion on 'What did we discuss about neural networks earlier?': [{'topic': 'neural networks', 'agent': 'Research+Analysis', 'provenance': 'Coordinator-details'}, ...]"}
```
All outputs are saved in the outputs/ folder for inspection.
- The current system uses rule-based agents and vector search.
- For future LLM integration:
- Add configuration in the Coordinator to select LLM models.
- Implement fallback logic if no relevant results are found.
This project was developed by Adil Sheraz.
For questions or collaboration, please contact via GitHub Issues or repository email.