Thank you for your interest in contributing to AgenticRAG! This project aims to provide a state-of-the-art, production-grade agentic RAG framework implementing the latest research from 2024-2025.
- Development Setup
- Project Structure
- Coding Standards
- Documentation Standards
- Testing & Benchmarking
- Pull Request Process
## Development Setup

We use `uv` for dependency management and virtual environments.
```bash
# Clone the repository
git clone https://github.com/heshamfs/agentic-rag.git
cd agentic-rag

# Create virtual environment and install dependencies
uv venv --python 3.12
source .venv/bin/activate  # or .venv\Scripts\activate on Windows

# Install in editable mode with dev dependencies
uv pip install -e ".[dev]"
```

Copy `.env.example` to `.env` and fill in your API keys:
```bash
cp .env.example .env
```

## Project Structure

```
src/agentic_rag/
├── agents/          # Multi-agent orchestration (Router, Retriever, Evaluator, Generator)
├── chunking/        # Chunking strategies (Semantic, Late, RAPTOR, Contextual)
├── embeddings/      # Embedding model integrations (Qwen3-Embedding)
├── generation/      # LLM provider clients (Claude, OpenAI, Gemini, Local)
├── graph/           # GraphRAG implementation (Entities, Relationships, Communities)
├── ingestion/       # Document loading and processing
├── pipeline/        # Pipeline orchestration and builder
├── retrieval/       # Retrieval strategies (Hybrid, HyDE, Multi-Query)
├── reranking/       # Reranking models (ColBERT, Cross-Encoders)
├── vectordb/        # Vector database clients (Qdrant)
├── evaluation/      # RAGAS metrics and Self-RAG reflection
└── observability/   # OpenTelemetry and tracing
```
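The `retrieval/` package above lists a Hybrid strategy, which combines rankings from dense (vector) and sparse (keyword) retrievers. A common fusion method is reciprocal rank fusion (RRF); the sketch below illustrates the idea only — the function name and signature are illustrative, not the project's actual API.

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked lists with RRF: score(d) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Documents seen by both retrievers rise to the top of the fused list.
dense = ["a", "b", "c"]   # hypothetical vector-search ranking
sparse = ["b", "c", "d"]  # hypothetical keyword-search ranking
fused = reciprocal_rank_fusion([dense, sparse])
print(fused)
```

The constant `k` (conventionally 60) damps the influence of any single list's top ranks, so agreement between retrievers matters more than position in either one.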
## Coding Standards

- Type Hints: All new code must use Python type hints.
- Async/Await: The pipeline is async-first. Use `async`/`await` for all I/O operations.
- Pydantic: Use Pydantic models for data structures and settings.
- Linting: We use `ruff` for linting and formatting:

  ```bash
  ruff check src/
  ruff format src/
  ```

- Type Checking: We use `mypy` for static type checking:

  ```bash
  mypy src/
  ```
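A minimal sketch of what these conventions look like together — fully typed, async-first I/O. The names (`RetrievalResult`, `retrieve`) are illustrative, not part of the codebase, and a plain dataclass stands in for a Pydantic model to keep the sketch dependency-free:

```python
import asyncio
from dataclasses import dataclass

# Stand-in for a Pydantic BaseModel; in project code, inherit from pydantic.BaseModel.
@dataclass
class RetrievalResult:
    doc_id: str
    score: float

async def retrieve(query: str, top_k: int = 5) -> list[RetrievalResult]:
    """Fetch the top-k results for a query (I/O simulated here)."""
    await asyncio.sleep(0)  # placeholder for an actual async vector-DB call
    return [
        RetrievalResult(doc_id=f"doc-{i}", score=1.0 - i / top_k)
        for i in range(top_k)
    ]

results = asyncio.run(retrieve("what is agentic RAG?", top_k=3))
print([r.doc_id for r in results])
```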
## Documentation Standards

- Docstrings: All public classes and methods must have Google-style docstrings.
- Type Information: Include type information in docstrings if it helps clarity, although type hints are preferred.
- README Updates: If you add a new feature, update the `README.md` and relevant files in `docs/`.
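For reference, a Google-style docstring on a hypothetical helper (the function itself is illustrative, not project code):

```python
def chunk_text(text: str, max_tokens: int = 512) -> list[str]:
    """Split text into chunks of at most ``max_tokens`` whitespace tokens.

    Args:
        text: The input document text.
        max_tokens: Maximum number of tokens per chunk.

    Returns:
        A list of chunk strings, in document order.

    Raises:
        ValueError: If ``max_tokens`` is not positive.
    """
    if max_tokens <= 0:
        raise ValueError("max_tokens must be positive")
    tokens = text.split()
    return [
        " ".join(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]
```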
## Testing & Benchmarking

Run the test suite with:

```bash
pytest
```

The benchmark system is critical for verifying performance gains.

```bash
# Run the core benchmark
python scripts/run_benchmark.py

# Run industry comparison (requires benchmark collection to be created first)
python scripts/run_comparison.py
```

## Pull Request Process

- Create a new branch for your feature or bugfix.
- Ensure all tests pass and linting is clean.
- Update documentation if necessary.
- Submit a PR with a clear description of the changes and their impact on performance (if applicable).
- Link any related issues.
By contributing, you agree that your contributions will be licensed under the project's MIT License.