
feat(tests): add unit test infrastructure #20

Open

arunsanna wants to merge 2 commits into GenAI-Security-Project:main from arunsanna:feature/unit-tests-issue-15

Conversation

@arunsanna

Summary

Changes

New Files

tests/
├── __init__.py
├── conftest.py              # Shared fixtures with mock HF API
├── test_generator.py        # 15 generator tests
├── test_scoring.py          # 7 scoring tests
├── fixtures/
│   ├── sample_model_card.json
│   └── expected_aibom.json
pytest.ini                   # pytest configuration
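The mock-HF-API fixtures in conftest.py might look roughly like the sketch below. This is illustrative only: the helper and fixture names (`make_mock_hf_api`, `mock_hf_api`) and the shape of the mocked `model_info()` return value are assumptions, not the PR's actual code.

```python
# Illustrative sketch of tests/conftest.py. Helper and fixture names
# (make_mock_hf_api, mock_hf_api) are assumptions, not the PR's code.
import json
from pathlib import Path
from unittest.mock import MagicMock

import pytest


def make_mock_hf_api(card_data):
    """Build a HfApi stand-in whose model_info() returns canned metadata."""
    api = MagicMock()
    info = MagicMock()
    info.cardData = card_data
    api.model_info.return_value = info
    return api


@pytest.fixture
def mock_hf_api():
    """Load the sample model card so tests never touch the network."""
    fixtures = Path(__file__).parent / "fixtures"
    card = json.loads((fixtures / "sample_model_card.json").read_text())
    return make_mock_hf_api(card)
```

Tests then take `mock_hf_api` as a parameter and exercise the generator entirely offline.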

Modified Files

  • requirements.txt - Added test dependencies (pytest, pytest-mock, pytest-cov, jsonschema)

Test Coverage

| Category | Tests | Status |
| --- | --- | --- |
| Generator initialization | 1 | ✅ |
| AIBOM structure | 6 | ✅ |
| PURL encoding | 2 | 1 ✅, 1 xfail* |
| Model card extraction | 2 | ✅ |
| Error handling | 2 | ✅ |
| Model ID normalization | 2 | ✅ |
| Completeness scoring | 5 | ✅ |
| Field validation | 2 | ✅ |

*PURL encoding test marked as xfail until PR #18 is merged
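Deferring a known-broken case this way looks roughly like the following (a sketch: `build_purl` and the test body are hypothetical, only the xfail marker mirrors the PR):

```python
import pytest


@pytest.mark.xfail(reason="PR #18", strict=False)
def test_purl_special_character_encoding():
    # build_purl is a hypothetical helper standing in for the generator's
    # PURL construction; special characters should be percent-encoded.
    purl = build_purl("org/model name")
    assert " " not in purl
```

With `strict=False`, the test reports `xfailed` while broken and `xpassed` once PR #18 lands, at which point the marker can be removed.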

Test Results

======================== 21 passed, 1 xfailed in 0.91s =========================

How to Run Tests

```shell
# In Docker
docker run --rm --entrypoint pytest aibom-test tests/ -v

# Or locally with pytest installed
pytest tests/ -v
```
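The pytest.ini itself is not shown in this thread; a minimal configuration for this layout might look like the following (an assumption, not the PR's actual file):

```ini
[pytest]
testpaths = tests
addopts = -ra
```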

Benefits

  • Catch regressions before deployment
  • Enable offline development/testing
  • Support for CI/CD integration
  • Easier onboarding for new contributors
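On the CI/CD point, a minimal GitHub Actions job could look like this (a sketch; the workflow path, action versions, and Python version are assumptions):

```yaml
# .github/workflows/tests.yml (illustrative)
name: tests
on: [push, pull_request]
jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest tests/ -v
```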

Fixes GenAI-Security-Project#15 - Add unit test infrastructure for the AIBOM Generator

Added:
- tests/ directory with pytest configuration
- conftest.py with mock HuggingFace API fixtures
- test_generator.py with 15 tests for AIBOMGenerator
- test_scoring.py with 7 tests for completeness scoring
- Sample fixtures for testing (sample_model_card.json, expected_aibom.json)
- pytest.ini configuration
- Test dependencies in requirements.txt (pytest, pytest-mock, pytest-cov)

Test coverage:
- AIBOM generation structure validation
- CycloneDX compliance checks
- PURL encoding (xfail until PR GenAI-Security-Project#18 merged)
- Model card extraction
- Error handling
- Model ID normalization
- Completeness scoring

All tests run offline using mocked HuggingFace API responses.

Results: 21 passed, 1 xfailed (expected)
Copilot AI review requested due to automatic review settings January 15, 2026 04:06

Copilot AI left a comment


Pull request overview

This PR adds comprehensive unit test infrastructure to the AIBOM Generator project, enabling offline testing with mocked HuggingFace API responses. The PR addresses issue #15 by implementing pytest-based testing with 22 tests covering core generator and scoring functionality.

Changes:

  • Added test infrastructure with pytest fixtures for mocking HuggingFace API interactions
  • Implemented 15 generator tests covering AIBOM structure, PURL encoding, model card extraction, and error handling
  • Implemented 7 scoring tests for completeness score validation
  • Added test dependencies to requirements.txt (pytest, pytest-mock, pytest-cov, jsonschema)

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| tests/__init__.py | Empty test suite initialization file with comment |
| tests/conftest.py | Shared pytest fixtures providing mock HuggingFace API and model objects |
| tests/test_generator.py | Unit tests for AIBOMGenerator covering AIBOM structure, PURL, error handling, and normalization |
| tests/test_scoring.py | Unit tests for completeness scoring functionality |
| tests/fixtures/sample_model_card.json | Sample model metadata fixture for testing |
| tests/fixtures/expected_aibom.json | Expected AIBOM output structure for validation |
| pytest.ini | Pytest configuration file |
| requirements.txt | Added test dependencies |


- Remove unused variable `result` in test_generate_aibom_with_output_file
- Simplify xfail reason to just reference PR GenAI-Security-Project#18
- Remove unused `import pytest` from test_scoring.py
- Replace permissive `or` assertions with specific checks
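The last point concerns assertions of the following shape (illustrative values; the field names here are assumptions, not the scoring module's actual keys):

```python
score = {"total_score": 78.5, "profile": "good"}  # hypothetical fields

# Permissive: passes as long as either side holds, hiding a wrong value
assert score["total_score"] > 0 or score["profile"] == "good"

# Specific: each expectation fails independently with a clear message
assert score["total_score"] > 0
assert score["profile"] == "good"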
@arunsanna
Author

Copilot Review Feedback Addressed ✅

Fixed all 5 inline comments:

| File | Line | Fix |
| --- | --- | --- |
| test_generator.py | 89 | Removed unused `result` variable |
| test_generator.py | 103 | Simplified xfail reason to "PR #18" |
| test_scoring.py | 9 | Removed unused `import pytest` |
| test_scoring.py | 66 | Replaced permissive `or` assertion with specific checks |
| test_scoring.py | 80 | Replaced permissive `or` assertion with score threshold check |


I also added tests for my feature -- I think these dependencies should probably live in a separate requirements file, e.g. requirements-dev.txt, no? Ideally, it would be nice to use uv and pyproject.toml for even more granular control, but at the very least we shouldn't install dev dependencies on every pip install, right?
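One way to do that is an optional-dependency group in pyproject.toml (a sketch; the project name and version are hypothetical):

```toml
[project]
name = "aibom-generator"   # hypothetical
version = "0.2.0"

[project.optional-dependencies]
dev = ["pytest", "pytest-mock", "pytest-cov", "jsonschema"]
```

Runtime installs then use plain `pip install .`, while contributors run `pip install -e ".[dev]"` (or `uv sync --extra dev`).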

@arunsanna
Author

✅ Testing Completed - VERIFIED

Test Space: https://megamind1-aibom-pr20-unit-tests.hf.space

Test Results

| Test | Result |
| --- | --- |
| App functionality | ✅ Working |
| Test files present | ✅ Confirmed |

Files Verified

  • tests/test_generator.py
  • tests/test_scoring.py
  • tests/test_validation.py
  • pytest.ini

Ready for merge.

@arunsanna
Author

Status Update: Superseded by v0.2

The unit test infrastructure has been incorporated into the v0.2 branch architecture.

Evidence: tests/ directory exists with:

  • test_validation.py
  • test_service.py
  • test_scoring.py
  • test_license_utils.py

This PR can be closed as the test infrastructure is already in v0.2.



Development

Successfully merging this pull request may close these issues.

Enhancement: Add unit test infrastructure

2 participants