# Contributing to effGen

Thank you for your interest in contributing to effGen! This guide will help you get started.
## Prerequisites

- Python 3.9+
- CUDA-compatible GPU (for integration tests)
- Git
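You can sanity-check the prerequisites before setting up (the minimum versions come from the list above; the GPU check only matters if you plan to run integration tests):

```shell
python3 --version   # expect Python 3.9 or newer
git --version
nvidia-smi || echo "No NVIDIA GPU detected (only needed for integration tests)"
```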
## Development Setup

```bash
# Clone the repository
git clone https://github.com/ctrl-gaurav/effGen.git
cd effGen

# Create a virtual environment
conda create -n effgen python=3.11 -y
conda activate effgen

# Install in development mode
pip install -e ".[dev]"

# Install pre-commit hooks
pip install pre-commit
pre-commit install
```

## Running Tests

```bash
# Unit tests (no GPU required)
pytest tests/unit/ -v --no-cov

# Integration tests (requires GPU)
CUDA_VISIBLE_DEVICES=0 pytest tests/integration/ -v -m gpu --no-cov

# Performance benchmarks
pytest tests/benchmarks/ -v --no-cov

# All tests with coverage
pytest tests/ -v
```

## Code Quality

We use the following tools to maintain code quality:
- Black for code formatting (line length: 100)
- isort for import sorting (profile: black)
- flake8 for linting
- mypy for type checking
- bandit for security linting
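These settings typically live in `pyproject.toml`; only the line length of 100 and the black isort profile come from the list above — the exact file and table layout shown here is an assumption, so check the repository's actual configuration:

```toml
[tool.black]
line-length = 100

[tool.isort]
profile = "black"
line_length = 100  # shown to match black; verify against the repo's config
```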
All these run automatically via pre-commit hooks. You can also run them manually:
```bash
black effgen/
isort effgen/
flake8 effgen/
mypy effgen/ --ignore-missing-imports
```

## Pull Request Process

- Fork the repository and create a feature branch from `main`
- Write tests for your changes (unit tests at minimum, integration tests if GPU-dependent)
- Update CHANGELOG.md with a summary of your changes
- Ensure all tests pass: `pytest tests/unit/ -v --no-cov`
- Submit your PR with a clear description of the changes
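GPU-dependent tests should carry the `gpu` marker so that `pytest -m gpu` selects them and plain unit runs can skip them. The marker name comes from this guide's test commands; how it is registered in the project's pytest configuration is an assumption, so check before relying on this sketch:

```python
import pytest

# Marked so `pytest -m gpu` selects this test and `-m "not gpu"` excludes it.
@pytest.mark.gpu
def test_tensor_add_on_gpu():
    torch = pytest.importorskip("torch")  # skip cleanly when torch is absent
    if not torch.cuda.is_available():
        pytest.skip("CUDA device required")
    x = torch.ones(2, 2, device="cuda")
    assert (x + x).sum().item() == 8.0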
### PR Checklist

- Tests added/updated
- CHANGELOG.md updated
- Code passes `black --check` and `isort --check`
- No new `TODO`/`FIXME` without a tracking issue
- Documentation updated if public API changed

## Reporting Bugs

When reporting bugs, please include:
- Python version (`python --version`)
- effGen version (`python -c "import effgen; print(effgen.__version__)"`)
- GPU info (if relevant): `nvidia-smi`
- Full error traceback
- Steps to reproduce
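A small helper can collect the environment details above in one go. This is a hypothetical convenience script, not part of effGen; it degrades gracefully when effgen or `nvidia-smi` is unavailable:

```python
import platform
import subprocess


def bug_report_info() -> str:
    """Gather Python, effGen, and GPU details for a bug report."""
    lines = [f"Python: {platform.python_version()}"]
    try:
        import effgen
        lines.append(f"effGen: {effgen.__version__}")
    except ImportError:
        lines.append("effGen: not installed")
    try:
        gpu = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, timeout=5,
        )
        lines.append(f"GPU: {gpu.stdout.strip() or 'unavailable'}")
    except (FileNotFoundError, subprocess.TimeoutExpired):
        lines.append("GPU: unavailable")
    return "\n".join(lines)


print(bug_report_info())
```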
## Project Structure

```
effgen/
├── core/        # Agent, AgentConfig, ReAct loop
├── models/      # Model backends (vLLM, Transformers, API adapters)
├── tools/       # Built-in tools and protocols (MCP, A2A, ACP)
├── memory/      # Short-term, long-term, vector memory
├── prompts/     # Template management and optimization
├── config/      # Configuration loading and validation
├── execution/   # Code execution sandboxing
├── gpu/         # GPU allocation and monitoring
└── utils/       # Logging, metrics, validators, health checks
```
## Design Principles

- Open Source First: all features must work without paid APIs
- SLM-Optimized: prompts and tools designed for 1B-7B parameter models
- Tools extend `BaseTool` with an async `_execute()` method
- The agent uses a ReAct loop: Thought → Action → Observation → repeat
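A custom tool might look like the sketch below. The `BaseTool` name and the async `_execute()` method come from the principles above; the stand-in base class here exists only so the example runs standalone, and the real import path and constructor signature are assumptions — check `effgen/tools/` before writing a contribution:

```python
import asyncio


# Stand-in for effgen's BaseTool so this sketch runs on its own.
# In a real contribution you would import the actual base class
# (import path assumed -- verify against the codebase).
class BaseTool:
    name: str = "base"

    async def _execute(self, **kwargs):
        raise NotImplementedError


class WordCountTool(BaseTool):
    """Counts words in a string -- a minimal async tool example."""

    name = "word_count"

    async def _execute(self, text: str = "") -> int:
        return len(text.split())


async def main():
    tool = WordCountTool()
    print(await tool._execute(text="effGen makes small models agentic"))  # prints 5


asyncio.run(main())
```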
## License

By contributing, you agree that your contributions will be licensed under the Apache License 2.0.