
Bug: some failure tests are wrongly set up and not testing intended behavior #266

@crowecawcaw

Description


Describe the bug

Some unit tests that verify validation failures have mistakes in their setup (e.g. using the wrong model), but because each test only checks that a validation error is raised, it still passes. As a result, these tests do not actually verify the intended library behavior.

See this test, which mixes attributes and amounts:

def test_non_standard_attribute_capability_noncompliant_value_string(
    self, field: str, value: str, error_count: int
) -> None:
    # Test the constraints on an amount capability value within the
    # anyOf and allOf clauses raise validation errors when they're violated.
    # GIVEN
    data = {"name": "attr.custom", field: [value]}
    # WHEN
    with pytest.raises(ValidationError) as excinfo:
        _parse_model(model=AmountRequirementTemplate, obj=data)
    # THEN
    assert len(excinfo.value.errors()) == error_count, str(excinfo.value)

See this PR, which fixes a test that mixes the HostRequirements and AmountRequirements models: https://github.com/OpenJobDescription/openjd-model-for-python/pull/265/changes

Recommend establishing a new pattern for validation-failure tests that causes a test to fail if the wrong model is used, then applying that pattern across the codebase. See the PR as a possible solution.
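One possible shape for such a pattern (a minimal sketch only, not necessarily the approach taken in the linked PR; the helper name assert_validation_errors_on_field is hypothetical, and _parse_model is assumed to be the same helper the existing tests already use, import omitted) is to assert not just the number of errors but that every reported error points at the field under test, so a test accidentally set up against the wrong model fails on the unrelated errors it produces:

import pytest
from pydantic import ValidationError


def assert_validation_errors_on_field(*, model, data, field, expected_error_count):
    # Parse `data` against `model` and require that the expected number of
    # validation errors is raised, and that each error's location mentions
    # the field under test.
    with pytest.raises(ValidationError) as excinfo:
        _parse_model(model=model, obj=data)
    errors = excinfo.value.errors()
    assert len(errors) == expected_error_count, str(excinfo.value)
    # Errors that a mismatched model typically produces (e.g. missing required
    # fields) would not reference `field`, so the test would fail loudly
    # instead of passing by accident.
    assert all(field in error["loc"] for error in errors), str(excinfo.value)

Matching on specific error types or messages, rather than only on error locations, would make the check stronger still.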

Expected Behaviour

...

Current Behaviour

...

Reproduction Steps

...

Environment

...
