feat(tests): add unit test infrastructure #20
arunsanna wants to merge 2 commits into GenAI-Security-Project:main from
Conversation
Fixes GenAI-Security-Project#15 - Add unit test infrastructure for the AIBOM Generator

Added:
- tests/ directory with pytest configuration
- conftest.py with mock HuggingFace API fixtures
- test_generator.py with 15 tests for AIBOMGenerator
- test_scoring.py with 7 tests for completeness scoring
- Sample fixtures for testing (sample_model_card.json, expected_aibom.json)
- pytest.ini configuration
- Test dependencies in requirements.txt (pytest, pytest-mock, pytest-cov)

Test coverage:
- AIBOM generation structure validation
- CycloneDX compliance checks
- PURL encoding (xfail until PR GenAI-Security-Project#18 merged)
- Model card extraction
- Error handling
- Model ID normalization
- Completeness scoring

All tests run offline using mocked HuggingFace API responses.

Results: 21 passed, 1 xfailed (expected)
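For context, a mocked-HuggingFace fixture of the kind described above might look roughly like this. This is a minimal sketch: the fixture name, attributes, and sample values are assumptions for illustration, not the PR's actual conftest.py.

```python
# Sketch of a conftest.py-style fixture for offline tests. All names
# here (mock_hf_api, modelId, cardData) are illustrative assumptions,
# not the PR's actual code.
from unittest.mock import MagicMock

import pytest


@pytest.fixture
def mock_hf_api():
    """Stand-in for the HuggingFace Hub client so no network calls occur."""
    model = MagicMock()
    model.modelId = "google-bert/bert-base-uncased"
    model.tags = ["transformers", "pytorch"]
    model.cardData = {"license": "apache-2.0", "datasets": ["bookcorpus"]}

    api = MagicMock()
    api.model_info.return_value = model
    return api
```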
Pull request overview
This PR adds comprehensive unit test infrastructure to the AIBOM Generator project, enabling offline testing with mocked HuggingFace API responses. The PR addresses issue #15 by implementing pytest-based testing with 22 tests covering core generator and scoring functionality.
Changes:
- Added test infrastructure with pytest fixtures for mocking HuggingFace API interactions
- Implemented 15 generator tests covering AIBOM structure, PURL encoding, model card extraction, and error handling (a sketch of one such test appears after this list)
- Implemented 7 scoring tests for completeness score validation
- Added test dependencies to requirements.txt (pytest, pytest-mock, pytest-cov, jsonschema)
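To make the first two bullets concrete, here is a hedged sketch of what one generator structure test could look like, reusing the hypothetical `mock_hf_api` fixture sketched earlier; `AIBOMGenerator`'s real constructor, method names, and import path may differ.

```python
# Illustrative only: AIBOMGenerator's real constructor and method
# signatures may differ from what is assumed here.
def test_aibom_has_cyclonedx_skeleton(mock_hf_api):
    from aibom_generator import AIBOMGenerator  # hypothetical import path

    generator = AIBOMGenerator(api=mock_hf_api)
    aibom = generator.generate_aibom("google-bert/bert-base-uncased")

    # bomFormat and specVersion are required top-level CycloneDX fields.
    assert aibom["bomFormat"] == "CycloneDX"
    assert "specVersion" in aibom
    assert isinstance(aibom.get("components"), list)
```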
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| `tests/__init__.py` | Empty test suite initialization file with comment |
| tests/conftest.py | Shared pytest fixtures providing mock HuggingFace API and model objects |
| tests/test_generator.py | Unit tests for AIBOMGenerator covering AIBOM structure, PURL, error handling, and normalization |
| tests/test_scoring.py | Unit tests for completeness scoring functionality (see the sketch after this table) |
| tests/fixtures/sample_model_card.json | Sample model metadata fixture for testing |
| tests/fixtures/expected_aibom.json | Expected AIBOM output structure for validation |
| pytest.ini | Pytest configuration file |
| requirements.txt | Added test dependencies |
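As referenced in the tests/test_scoring.py row above, a completeness-scoring test might be shaped like the following sketch; `score_completeness`, its import path, and its 0-100 scale are assumptions about the scoring API, not the PR's actual code.

```python
# Sketch only: score_completeness and its 0-100 scale are assumptions,
# not the PR's actual scoring API.
def test_score_increases_with_richer_metadata():
    from aibom_generator.scoring import score_completeness  # hypothetical

    sparse = {"name": "bert-base-uncased"}
    rich = {
        "name": "bert-base-uncased",
        "license": "apache-2.0",
        "datasets": ["bookcorpus"],
        "metrics": ["accuracy"],
    }

    # A model card with more populated fields should never score lower.
    assert 0 <= score_completeness(sparse) <= score_completeness(rich) <= 100
```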
Copilot Review Feedback Addressed ✅

Fixed all 5 inline comments:
- Remove unused variable `result` in test_generate_aibom_with_output_file
- Simplify xfail reason to just reference PR GenAI-Security-Project#18
- Remove unused `import pytest` from test_scoring.py
- Replace permissive `or` assertions with specific checks
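For illustration, the last bullet's change might look like this (hypothetical assertions, not the PR's actual diff):

```python
# Hypothetical before/after for the "permissive `or` assertions" fix.
aibom = {"bomFormat": "CycloneDX"}  # stand-in result for illustration

# Before: permissive -- the `or` lets the test pass even when the
# first, stricter condition fails.
assert aibom["bomFormat"] == "CycloneDX" or "bomFormat" in aibom

# After: specific -- pins down the exact expected value.
assert aibom["bomFormat"] == "CycloneDX"
```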
I also added tests for my feature -- I think this should probably be in a different requirements file, e.g. requirements-dev.txt, no? Ideally, it would be nice to use uv and pyproject.toml for even more granular control, but at the very least we shouldn't be installing dev deps on every pip install, right?
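One way to do that split, sketched against a hypothetical pyproject.toml (the `dev` group name is a common convention, not a requirement):

```toml
# pyproject.toml -- sketch of the optional-dependencies split the
# comment suggests; group name and pin style are assumptions.
[project.optional-dependencies]
dev = [
    "pytest",
    "pytest-mock",
    "pytest-cov",
    "jsonschema",
]
```

Consumers would then opt in with `pip install -e ".[dev]"`, while a plain install pulls only runtime dependencies.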
✅ Testing Completed - VERIFIED

Test Space: https://megamind1-aibom-pr20-unit-tests.hf.space

Test Results

Files Verified

Ready for merge. ✓
Status Update: Superseded by v0.2

The unit test infrastructure has been incorporated into the v0.2 branch architecture. Evidence:

This PR can be closed as the test infrastructure is already in v0.2.
Summary

Adds unit test infrastructure for the AIBOM Generator (fixes GenAI-Security-Project#15); all tests run offline against mocked HuggingFace API responses.
Changes
New Files
- `tests/__init__.py`
- `tests/conftest.py`
- `tests/test_generator.py`
- `tests/test_scoring.py`
- `tests/fixtures/sample_model_card.json`
- `tests/fixtures/expected_aibom.json`
- `pytest.ini`
Modified Files
- `requirements.txt` - Added test dependencies (pytest, pytest-mock, pytest-cov, jsonschema)

Test Coverage
*PURL encoding test marked as xfail until PR #18 is merged
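The xfail marking mentioned above might be shaped like this sketch; the helper name, import path, and expected PURL string are illustrative assumptions, not the PR's actual test.

```python
import pytest


# Hypothetical shape of the xfail-marked test; the helper and the
# expected PURL value are assumptions for illustration.
@pytest.mark.xfail(reason="PURL encoding fixed in PR GenAI-Security-Project#18")
def test_purl_encoding_of_namespaced_model_id():
    from aibom_generator import model_id_to_purl  # hypothetical helper

    purl = model_id_to_purl("google-bert/bert-base-uncased")
    assert purl == "pkg:huggingface/google-bert/bert-base-uncased"
```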
Test Results

21 passed, 1 xfailed (expected); all tests run offline with no network access.
How to Run Tests
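Assuming the standard pytest workflow implied by pytest.ini and requirements.txt: install the test dependencies with `pip install -r requirements.txt`, then run `pytest` (or `pytest -v` for per-test output) from the repository root. No network access should be needed, since the HuggingFace API is mocked.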
Benefits