This document describes the testing setup for Funannotate2 and how to run the tests.
The Funannotate2 project uses pytest for testing. The tests are organized as follows:
- `tests/unit/`: Unit tests for individual functions and modules
  - `tests/unit/test_annotate_comprehensive.py`: Comprehensive tests for the annotate module
  - `tests/unit/test_search_comprehensive.py`: Comprehensive tests for the search module
  - `tests/unit/test_predict_comprehensive.py`: Comprehensive tests for the predict module
  - And many more...
- `tests/integration/`: Integration tests for the entire workflow
  - `tests/integration/test_funannotate_cli.py`: Tests for the command-line interface
  - `tests/integration/test_funannotate_workflow.py`: Tests for the workflow using the Python API
- `tests/functional/`: Functional tests for real-world scenarios
- `tests/data/`: Test data files used by the tests
To run the tests, you need to install pytest and the package in development mode:
```bash
# Install pytest and coverage tools
pip install pytest pytest-cov

# Install funannotate2 in development mode
pip install -e .

# Run all tests
pytest

# Run with coverage report
pytest --cov=funannotate2

# Generate HTML coverage report
python scripts/run_coverage.py

# Run only unit tests
pytest tests/unit/

# Run only integration tests
pytest tests/integration/
```

The tests cover a wide range of functionality, including:
- Core Modules:
  - `annotate.py`: Annotation pipeline
  - `predict.py`: Gene prediction
  - `search.py`: Sequence searching
  - `clean.py`: Genome cleaning
  - `compare.py`: Genome comparison
- Utility Functions:
  - `merge_coordinates`: Merging overlapping intervals
  - `naming_slug`: Generating slugs for species and strain combinations
  - `create_tmpdir`: Creating temporary directories
  - `readBlocks`: Reading blocks of text from a source
- FASTA Processing Functions:
  - `softwrap`: Wrapping sequences to a specified length
  - `countfasta`: Counting sequences in a FASTA file
  - `fasta2dict`: Converting a FASTA file to a dictionary
  - `annotate_fasta`: Annotating sequences in a FASTA file
  - `mergefasta`: Merging and dereplicating FASTA files
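As a concrete illustration of one of the utilities above, the interval-merging behavior described for `merge_coordinates` can be sketched with a minimal reference implementation. This is a simplified stand-in written for this document, not the actual funannotate2 function, whose signature and edge-case handling may differ:

```python
def merge_coordinates(intervals):
    """Merge overlapping (start, end) intervals.

    Simplified sketch of the behavior described above; not the real
    funannotate2 implementation.
    """
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps (or touches) the previous interval: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged


print(merge_coordinates([(10, 20), (15, 30), (40, 50)]))
# → [(10, 30), (40, 50)]
```

A unit test for such a function would assert on a handful of inputs like the one above, including the empty list and non-overlapping intervals.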
When adding new tests:
- Follow the naming conventions:
  - Test files: `test_*.py`
  - Test classes: `Test*`
  - Test methods: `test_*`
- Use fixtures from `conftest.py` when possible
- Add test data to the `tests/data/` directory when needed
- Run the tests to ensure they pass
- Use `pytest.mark.skipif` to skip tests that require external dependencies that might not be installed
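Putting these guidelines together, a new test file might look like the following sketch. The `softwrap_stub` helper and the `tbl2asn` external dependency are illustrative assumptions made for this example, not real funannotate2 code:

```python
# tests/unit/test_example.py -- hypothetical test module illustrating
# the naming conventions and skipif usage described above.
import shutil

import pytest


def softwrap_stub(seq, width=80):
    # Local stand-in for the function under test, so the sketch is
    # self-contained; a real test would import from funannotate2.
    return "\n".join(seq[i : i + width] for i in range(0, len(seq), width))


class TestSoftwrap:
    def test_wraps_at_width(self):
        assert softwrap_stub("ACGT" * 5, width=10) == "ACGTACGTAC\nGTACGTACGT"

    @pytest.mark.skipif(
        shutil.which("tbl2asn") is None,
        reason="requires the external tbl2asn binary",
    )
    def test_requires_external_tool(self, tmp_path):
        # tmp_path is a built-in pytest fixture; project-specific
        # fixtures would come from conftest.py instead.
        assert tmp_path.exists()
```

Because the file, class, and method names follow the `test_*.py` / `Test*` / `test_*` conventions, pytest discovers them automatically, and the `skipif` marker keeps the suite green on machines without the optional tool.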
The project includes a GitHub Actions workflow that runs the tests on push and pull requests. The workflow:
- Runs on multiple Python versions (3.8, 3.9, 3.10, 3.11)
- Installs the package and dependencies
- Runs the unit tests and generates a coverage report
- Runs the integration tests that don't require external dependencies
- Uploads the coverage report to Codecov
- Runs linting checks with flake8 and black
You can see the status of the tests on the GitHub Actions page and the coverage report on the Codecov page.
To run the tests with a specific Python version, you can use a virtual environment:
```bash
# Create a virtual environment with Python 3.9
python3.9 -m venv venv-py39
source venv-py39/bin/activate

# Install dependencies
pip install pytest pytest-cov
pip install -e .

# Run tests
pytest
```