# Contributing to Ollama Arena

Thank you for your interest in contributing to Ollama Arena! This document provides guidelines and instructions for contributing.
## Table of Contents

- Code of Conduct
- Getting Started
- Development Setup
- How to Contribute
- Coding Standards
- Testing
- Pull Request Process
- Issue Guidelines
## Code of Conduct

- Be respectful and inclusive
- Welcome newcomers
- Focus on constructive feedback
- Help others learn and grow
Note: This project uses Ollama (© Ollama, Inc.) as the inference engine. Ensure you have Ollama installed and running locally.
## Getting Started

1. Fork the repository
2. Clone your fork:
   ```bash
   git clone https://github.com/YOUR_USERNAME/ollama-arena.git
   ```
3. Add upstream remote:
   ```bash
   git remote add upstream https://github.com/ORIGINAL/ollama-arena.git
   ```
4. Create a branch:
   ```bash
   git checkout -b feature/amazing-feature
   ```
## Development Setup

### Prerequisites

- Python 3.8+
- Ollama CLI installed
- Git for version control
```bash
# Clone repository
git clone https://github.com/yourusername/ollama-arena.git
cd ollama-arena

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # or .\.venv\Scripts\Activate.ps1 on Windows

# Install dependencies
pip install -r requirements.txt
pip install -r requirements-dev.txt

# Copy environment file
cp .env.example .env

# Run application
python run.py
```

To run with debug logging enabled:

```bash
export WEB_DEBUG=1
export LOG_LEVEL=DEBUG
python run.py
```

## How to Contribute

### Types of Contributions

- Bug Fixes: Fix identified bugs
- Features: Add new functionality
- Documentation: Improve docs
- Tests: Add or improve tests
- Refactoring: Improve code quality
- Performance: Optimize performance
### Finding Issues

- Check the Issues page
- Look for `good first issue` or `help wanted` labels
- Ask in discussions if you're unsure where to start
## Coding Standards

### Python Style

Follow PEP 8 with these tools:

```bash
# Format code
black app/ *.py

# Sort imports
isort app/ *.py

# Lint code
flake8 app/

# Type checking
mypy app/
```

### Python Guidelines

- Type hints: Add type hints to all functions
- Docstrings: Use Google-style docstrings
- Line length: Max 100 characters
- Imports: Group stdlib, third-party, local
- Naming:
  - `snake_case` for functions/variables
  - `PascalCase` for classes
  - `UPPER_CASE` for constants
Example:

```python
from typing import Dict


def calculate_metrics(
    tokens: int,
    duration_ns: int
) -> Dict[str, float]:
    """
    Calculate performance metrics for model response.

    Args:
        tokens: Number of tokens generated
        duration_ns: Duration in nanoseconds

    Returns:
        Dict with tokens_per_sec and duration_s

    Raises:
        ValueError: If duration is zero
    """
    if duration_ns == 0:
        raise ValueError("Duration cannot be zero")
    duration_s = duration_ns / 1e9
    tokens_per_sec = tokens / duration_s
    return {
        'tokens_per_sec': round(tokens_per_sec, 2),
        'duration_s': round(duration_s, 2)
    }
```

### JavaScript Style

- ES6+: Use modern JavaScript
- Const/Let: No `var`
- Arrow functions: Prefer arrow functions
- Comments: Explain complex logic
- Naming: camelCase for functions/variables
### CSS Style

- Variables: Use CSS custom properties
- BEM: Consider BEM naming for new components
- Dark mode: Always support dark mode
- Responsive: Mobile-first design
## Testing

### Running Tests

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=app --cov-report=html

# Run specific test
pytest tests/test_ollama_service.py

# Run with verbose output
pytest -v
```

### Writing Tests

```python
def test_list_models():
    """Test listing Ollama models."""
    service = OllamaService()
    models = service.list_models()
    assert isinstance(models, list)
    assert len(models) > 0
    assert all(isinstance(m, str) for m in models)
```

### Test Guidelines

- Aim for 80%+ coverage
- Test happy paths and error cases
- Mock external dependencies (Ollama)
- Test edge cases
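Mocking the Ollama dependency keeps unit tests fast and runnable without a local Ollama server. Below is a minimal sketch using `unittest.mock.patch.object`; the inline `OllamaService` stand-in and its `_request` helper are hypothetical names for illustration only — patch whatever method in the real service layer performs the HTTP call:

```python
from typing import Dict, List
from unittest.mock import patch


class OllamaService:
    """Hypothetical stand-in for the real service layer."""

    def _request(self, path: str) -> Dict:
        # In the real service this would hit the local Ollama API.
        raise RuntimeError("network call -- should be mocked in tests")

    def list_models(self) -> List[str]:
        data = self._request("/api/tags")
        return [m["name"] for m in data.get("models", [])]


def test_list_models_mocked():
    fake_response = {"models": [{"name": "llama3"}, {"name": "mistral"}]}
    # Patch the network helper so list_models() runs without Ollama.
    with patch.object(OllamaService, "_request", return_value=fake_response):
        models = OllamaService().list_models()
    assert models == ["llama3", "mistral"]
```

The key point is to patch one level below the method under test, so the parsing logic in `list_models()` is still exercised.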
## Pull Request Process

1. Update from upstream:
   ```bash
   git fetch upstream
   git rebase upstream/main
   ```
2. Run tests:
   ```bash
   pytest
   ```
3. Format code:
   ```bash
   black .
   isort .
   flake8 app/
   ```
4. Update documentation:
   - Update README if needed
   - Update API.md for API changes
   - Update CHANGELOG.md
5. Commit messages:
   ```
   feat: Add model comparison feature
   fix: Resolve timeout issue
   docs: Update installation guide
   test: Add tests for service layer
   refactor: Simplify error handling
   ```
6. Push to your fork:
   ```bash
   git push origin feature/amazing-feature
   ```
7. Create PR:
   - Use descriptive title
   - Reference related issues
   - Describe changes made
   - Add screenshots for UI changes
   - List breaking changes
8. PR template:
   ```markdown
   ## Description
   Brief description of changes

   ## Related Issues
   Fixes #123

   ## Changes Made
   - Added feature X
   - Fixed bug Y
   - Updated docs

   ## Testing
   - [ ] Added tests
   - [ ] All tests pass
   - [ ] Manual testing done

   ## Screenshots
   (if applicable)

   ## Breaking Changes
   (if any)
   ```
### Review Process

- Maintainer reviews code
- Address feedback
- Update PR as needed
- Once approved, maintainer merges
## Issue Guidelines

### Bug Reports

Use this template:

```markdown
**Describe the bug**
Clear description of the bug

**To Reproduce**
Steps to reproduce:
1. Go to '...'
2. Click on '...'
3. See error

**Expected behavior**
What you expected to happen

**Screenshots**
Add screenshots if applicable

**Environment:**
- OS: [e.g. Windows 11]
- Python version: [e.g. 3.11]
- Ollama version: [e.g. 0.1.20]

**Additional context**
Any other context
```

### Feature Requests

```markdown
**Is your feature related to a problem?**
Clear description of the problem

**Describe the solution**
Description of proposed solution

**Describe alternatives**
Alternative solutions considered

**Additional context**
Mockups, examples, etc.
```

## Branch Naming

- `feature/feature-name` - New features
- `fix/bug-description` - Bug fixes
- `docs/what-changed` - Documentation
- `refactor/what-changed` - Code refactoring
- `test/what-tested` - Test additions
## Documentation

- New features → Update README and API.md
- Config changes → Update .env.example
- API changes → Update API.md
- Breaking changes → Update CHANGELOG.md
- Setup changes → Update README
### Documentation Style

- Clear and concise
- Include examples
- Use code blocks
- Add screenshots for UI
- Keep up to date
## Recognition

Contributors will be:
- Listed in CONTRIBUTORS.md
- Mentioned in release notes
- Credited in CHANGELOG.md
## Questions?

- Open a Discussion
- Ask in existing issues
- Contact maintainers
## License

By contributing, you agree that your contributions will be licensed under the MIT License.
Thank you for contributing to Ollama Arena! 🎉