From b4b9147b59124ba0a523efa675085d8276c64ff6 Mon Sep 17 00:00:00 2001 From: rnetser Date: Wed, 26 Nov 2025 18:46:01 +0200 Subject: [PATCH 01/21] Add STD template and examples --- docs/SOFTWARE_TEST_DESCRIPTION.md | 455 ++++++++++++++++++ .../network/flat_overlay/test_flat_overlay.py | 114 ++++- .../online_resize/test_online_resize.py | 20 +- .../virt/cluster/vm_lifecycle/test_restart.py | 19 +- 4 files changed, 602 insertions(+), 6 deletions(-) create mode 100644 docs/SOFTWARE_TEST_DESCRIPTION.md diff --git a/docs/SOFTWARE_TEST_DESCRIPTION.md b/docs/SOFTWARE_TEST_DESCRIPTION.md new file mode 100644 index 0000000000..bf8569beb6 --- /dev/null +++ b/docs/SOFTWARE_TEST_DESCRIPTION.md @@ -0,0 +1,455 @@ +> *This document was created with the assistance of Claude (Anthropic).* +# Software Test Description + +## Overview + +### Test Descriptions as Code + +In this repository, **test descriptions are written as docstrings directly in the test code**. +This approach keeps documentation and implementation together, ensuring they stay synchronized and reducing the overhead of maintaining separate documentation. 
+ +Each test function includes a comprehensive docstring that serves as the STD, using the **Preconditions/Steps/Expected** format optimized for automation: +- **Preconditions**: Test setup requirements and state +- **Steps**: Numbered, discrete actions (each step maps to code) +- **Expected**: Natural language assertion (e.g., "VM is Running", "File does NOT exist") + +The STD format is particularly valuable for: +- **Design First**: Enables test design review before implementation effort +- **Quality Assurance**: Ensures tests are well-documented and can be understood by anyone on the team +- **Maintenance**: Makes it easier to update and maintain tests over time +- **Review**: Facilitates code review by clearly stating expected behavior + +--- + +## Development Workflow + +This project follows a **two-phase development workflow** that separates test design from test implementation: + +### Phase 1: Test Description PR (Design Phase) + +1. **Create test stubs with docstrings only**: + - Write the test function signature + - Add the complete STD docstring (Preconditions/Steps/Expected) + - Include a link to the approved STP (Software Test Plan) in the **module docstring** (top of the test file) + - Add applicable pytest markers (architecture markers, etc.) + - Leave the test body empty or with a `pass` statement + +2. **Submit PR for review**: + - The PR contains only the test descriptions (no automation code) + - Reviewers evaluate the test design, coverage, and clarity + - Discussions focus on *what* should be tested and *how* it should be validated + +3. **Approval and merge**: + - Once the test design is approved, merge the PR + - This establishes the test contract before implementation begins + +### Phase 2: Test Automation PR (Implementation Phase) + +1. **Implement the test automation**: + - Add the actual test code to the previously merged test stubs + - Create any required fixtures + - Implement helper functions as needed + +2. 
**Submit PR for review**: + - Reviewers verify the implementation matches the approved design + - Focus is on code quality, correctness, and adherence to the STD + +3. **Approval and merge**: + - Once implementation is verified, merge the automation + +### Benefits of This Workflow + +| Benefit | Description | +|---------|-------------| +| **Early Design Review** | Test design is reviewed before implementation effort is spent | +| **Clear Contracts** | The STD serves as a contract between design and implementation | +| **Reduced Rework** | Design issues are caught early, before automation is written | +| **Better Documentation** | Tests are always documented before they are implemented | +| **Easier Planning** | Test descriptions can be created during sprint planning | + + +--- + +## Automation-Friendly Syntax + +To enable consistent parsing and automation, use these conventions in docstrings: + +### Assertion Wording (Expected) + +Use clear, natural language that maps directly to assertions: + +| Wording Pattern | Maps To | +|-----------------|---------| +| `X equals Y` | `assert x == y` | +| `X does not equal Y` | `assert x != y` | +| `VM is "Running"` | `assert vm.status == Running` | +| `VM is not running` | `assert vm.status != Running` | +| `File exists` / `Resource x exists` | `assert exists(x)` | +| `File does not exist` / `Resource x does NOT exist` | `assert not exists(x)` | +| `X does not contain Y` | `assert y not in x` | +| `Ping succeeds` / `Operation succeeds` | `assert operation()` (no exception) | +| `Ping fails` / `Operation fails` | `assert` raises exception or returns failure | + +**Example:** +``` +Expected: + - VM is Running + - File content equals "data-before-snapshot" + - File /data/after.txt does NOT exist + - Ping fails with 100% packet loss +``` + +### Negative Test Indicator + +Mark tests that verify failure scenarios with `[NEGATIVE]` in the description: + +```python +def test_isolated_vms_cannot_communicate(): + """ + [NEGATIVE] Test 
that VMs on separate networks cannot ping each other.
+    ...
+```
+
+### Parametrization Hints
+
+When a test should run with multiple parameter combinations, add a `Parametrize:` section.
+
+
+### Markers Section
+
+When specific pytest markers are required, list them explicitly.
+
+---
+
+## STD Template
+
+**Key Principles:**
+- Each test should verify **ONE thing**
+- **Tests must be independent** - no test should depend on another test's outcome
+- If a test needs a precondition that could be another test's outcome, use a **fixture** to set it up
+- Related tests are grouped in a **test class**
+- **Shared preconditions** go in the class docstring
+- **Test-specific preconditions** (if any) go in the test docstring
+
+### Class-Level Template
+
+```python
+class Test<Feature>:
+    """
+    Tests for <feature>.
+
+    Markers:
+    - arm64
+    - gating
+
+    Parametrize:
+    - storage_class: [ocs-storagecluster-ceph-rbd, hostpath-csi]
+    - os_image: [rhel9, fedora]
+
+    Preconditions:
+    - <shared precondition>
+    - <shared precondition>
+
+    """
+
+    def test_<name>(self):
+        """
+        Test that <the one behavior this test verifies>.
+
+        Steps:
+        1. <action>
+
+        Expected:
+        - <expected result>
+        """
+```
+
+### Test-Level Template
+
+For standalone tests without related tests:
+
+```python
+def test_<name>():
+    """
+    Test that <the one behavior this test verifies>.
+
+    Markers:
+    - gating
+
+    Parametrize:
+    - os_image: [rhel9, fedora]
+
+    Preconditions:
+    - <precondition>
+    - <precondition>
+
+    Steps:
+    1. <action>
+
+    Expected:
+    - <expected result>
+    """
+```
+
+### Template Components
+
+| Component | Purpose | Guidelines |
+|-----------|---------|------------|
+| **Class Docstring** | Shared preconditions | Setup common to all tests |
+| **Brief Description** | One-line summary | Describe the ONE thing being verified; use `[NEGATIVE]` for failure tests |
+| **Preconditions** (test) | Test-specific setup | Only if this test has additional setup beyond the class |
+| **Steps** | Test action(s) | Minimal - just what's needed to get the result to verify |
+| **Expected** | ONE assertion | Use natural language that maps to assertions |
+| **Parametrize** | Matrix testing | Optional - list parameter combinations |
+| **Markers** | pytest markers | Optional - list required decorators |
+
+---
+
+## Best Practices
+
+### Writing Effective STDs
+
+1. **One Test = One Thing**: Each test should verify exactly one behavior.
+   - Good: `test_ping_succeeds`, `test_ping_fails_when_isolated`
+   - Bad: `test_ping_succeeds_and_fails_when_isolated`
+
+2. **Group Related Tests in Classes**: Use class docstring for shared preconditions.
+   - Good: Class `TestSnapshotRestore` with shared VM setup
+   - Bad: Standalone functions with repeated preconditions
+
+3. **Be Specific in Preconditions**: Describe the exact state required.
+   - Good: `- File path="/data/original.txt", content="test-data"`
+   - Bad: `- A file exists`
+
+4. **No Fixture Names in Phase 1**: Fixtures are implementation details.
+   - Good: `- Running Fedora virtual machine`
+   - Bad: `- Running Fedora VM (vm_to_restart fixture)`
+
+5. **Single Expected per Test**: One assertion = clear pass/fail.
+   - Good: `Expected: - Ping succeeds with 0% packet loss`
+   - Bad: `Expected: - Ping succeeds - VM remains running - No errors logged`
+
+6. **Tests Must Be Independent**: Tests should not depend on other tests.
+ - If a test needs a precondition that is another test's outcome, use a fixture + - Good: Fixture `migrated_vm` sets up a VM that has been migrated + - Bad: `test_migrate_vm` must run before `test_ssh_after_migration` + +### Common Patterns in This Project + +| Pattern | Description | Example | +|---------|-------------|---------| +| **Fixture-based Setup** | Use pytest fixtures for resource creation | `vm_to_restart`, `namespace` | +| **Matrix Testing** | Parameterize tests for multiple scenarios | `storage_class_matrix`, `run_strategy_matrix` | +| **Architecture Markers** | Indicate architecture compatibility | `@pytest.mark.arm64`, `@pytest.mark.s390x` | +| **Gating Tests** | Critical tests for CI/CD pipelines | `@pytest.mark.gating` | + +### STD Checklist + +#### Phase 1: Test Description PR + +- [ ] STP link in module docstring +- [ ] Tests grouped in class with shared preconditions +- [ ] Each test has: description, Preconditions, Steps, Expected +- [ ] Each test verifies ONE thing with ONE Expected +- [ ] Negative tests marked with `[NEGATIVE]` +- [ ] Test methods contain only `pass` + +#### Phase 2: Test Automation PR + +- [ ] Implementation matches approved STD +- [ ] Fixtures implement preconditions +- [ ] Assertions match Expected +- [ ] No changes to STD docstrings + +--- + +### Example 1: Group tests under a class + +```python +""" +VM Snapshot and Restore Tests + +STP Reference: https://example.com/stp/vm-snapshot-restore +""" + +import pytest + + +@pytest.mark.gating +class TestSnapshotRestore: + """ + Tests for VM snapshot restore functionality. 
+ + Preconditions: + - Running VM with a data disk + - File path="/data/original.txt", content="data-before-snapshot" + - Snapshot created from VM + - File path="/data/after.txt", content="post-snapshot" (written after snapshot) + - VM Restored from snapshot, running and SSH accessible + """ + + def test_preserves_original_file(self): + """ + Test that files created before a snapshot are preserved after restore. + + Steps: + 1. Read file /data/original.txt from the restored VM + + Expected: + - File content equals "data-before-snapshot" + """ + pass + + def test_removes_post_snapshot_file(self): + """ + Test that files created after a snapshot are removed after restore. + + Steps: + 1. Check if file /data/after.txt exists on the restored VM + + Expected: + - File /data/after.txt does NOT exist + """ + pass +``` + + +### Example 2: Tests with test-specific preconditions + + +```python +class TestVMLifecycle: + """ + Tests for VM lifecycle operations. + + Preconditions: + - VM Running latest Fedora virtual machine + """ + + def test_vm_restart_completes_successfully(self): + """ + Test that a VM can be restarted. + + Steps: + 1. Restart the running VM and wait for completion + + Expected: + - VM is "Running" + """ + pass + + def test_vm_stop_completes_successfully(self): + """ + Test that a VM can be stopped. + + Steps: + 1. Stop the running VM and wait for completion + + Expected: + - VM is "Stopped" + """ + pass + + def test_vm_start_after_stop(self): + """ + Test that a stopped VM can be started. + + Preconditions: + - VM is in stopped state + + Steps: + 1. 
Start the VM and wait for it to become running + + Expected: + - VM is "Running" and SSH accessible + """ + pass +``` + +--- + +### Example 3: Single Test (No Class Needed) + +When a test stands alone without related tests, a class is not required: + +```python +@pytest.mark.gating +@pytest.mark.ipv4 +def test_flat_overlay_ping_between_vms(): + """ + Test that VMs on the same flat overlay network can communicate. + + Preconditions: + - Flat overlay Network Attachment Definition created + - VM-A running and attached to flat overlay network + - VM-B running and attached to flat overlay network + + Steps: + 1. Get IPv4 address of VM-B + 2. Execute ping from VM-A to VM-B + + Expected: + - Ping succeeds with 0% packet loss + """ + pass +``` + +--- + +### Example 4: Negative Test + +Tests that verify failure scenarios use the `[NEGATIVE]` indicator: + +```python +@pytest.mark.ipv4 +def test_isolated_vms_cannot_communicate(): + """ + [NEGATIVE] Test that VMs on separate flat overlay networks cannot ping each other. + + Preconditions: + - NAD-1 flat overlay network created + - NAD-2 separate flat overlay network created + - VM-A running and attached to NAD-1 + - VM-B running and attached to NAD-2 + + Steps: + 1. Get IPv4 address of VM-B + 2. Execute ping from VM-A to VM-B + + Expected: + - Ping fails with 100% packet loss + """ + pass +``` + +--- + +### Example 5: Parametrized Test + +Tests that should run with multiple parameter combinations include a `Parametrize:` section: + +```python +@pytest.mark.gating +def test_online_disk_resize(): + """ + Test that a running VM's disk can be expanded. + + Parametrize: + - storage_class: [ocs-storagecluster-ceph-rbd, hostpath-csi] + + Preconditions: + - Storage class from parameter exists + - DataVolume with RHEL image using the storage class + - Running VM with the DataVolume as boot disk + + Steps: + 1. Expand PVC by 1Gi + 2. 
Wait for resize to complete inside VM + + Expected: + - Disk size inside VM is greater than original size + """ + pass +``` + +--- diff --git a/tests/network/flat_overlay/test_flat_overlay.py b/tests/network/flat_overlay/test_flat_overlay.py index a8fff0bbbe..05979a9a55 100644 --- a/tests/network/flat_overlay/test_flat_overlay.py +++ b/tests/network/flat_overlay/test_flat_overlay.py @@ -1,3 +1,7 @@ +""" +Flat Overlay Network Connectivity Tests +""" + import logging import pytest @@ -17,12 +21,39 @@ @pytest.mark.s390x class TestFlatOverlayConnectivity: + """ + Tests for flat overlay network connectivity between VMs. + + Markers: + - s390x + + Preconditions: + - Multi-network policy usage enabled + - Flat overlay Network Attachment Definition created + - VM-A running and attached to a flat overlay network + - VM-B running and attached to a flat overlay network + """ + @pytest.mark.gating @pytest.mark.ipv4 @pytest.mark.polarion("CNV-10158") # Not marked as `conformance`; requires NMState @pytest.mark.dependency(name="test_flat_overlay_basic_ping") def test_flat_overlay_basic_ping(self, flat_overlay_vma_vmb_nad, vma_flat_overlay, vmb_flat_overlay): + """ + Test that VMs on the same flat overlay network can communicate. + + Markers: + - gating + - ipv4 + + Steps: + 1. Get IPv4 address of VM-B + 2. Execute ping from VM-A to VM-B + + Expected: + - Ping succeeds with 0% packet loss + """ assert_ping_successful( src_vm=vma_flat_overlay, dst_ip=get_vmi_ip_v4_by_name(vm=vmb_flat_overlay, name=flat_overlay_vma_vmb_nad.name), @@ -37,9 +68,21 @@ def test_flat_overlay_separate_nads( vmb_flat_overlay_ip_address, vmd_flat_overlay_ip_address, ): - # This ping is needed even though it was tested in test_flat_overlay_basic_ping because an additional network - # (flat_overlay_vmc_vmd_nad) is now created. We want to make sure that the connectivity wasn't harmed by this - # addition. + """ + Test that adding a second flat overlay network does not break existing connectivity. 
+ + Preconditions: + - Second flat overlay NAD created (flat_overlay_vmc_vmd_nad) + - VM-C running and attached to a second flat overlay network + - VM-D running and attached to a second flat overlay network + + Steps: + 1. Execute ping from VM-A to VM-B (original network) + 2. Execute ping from VM-C to VM-D (new network) + + Expected: + - Both ping commands succeed with 0% packet loss + """ assert_ping_successful( src_vm=vma_flat_overlay, dst_ip=vmb_flat_overlay_ip_address, @@ -55,6 +98,19 @@ def test_flat_overlay_separate_nads_no_connectivity( vma_flat_overlay, vmd_flat_overlay_ip_address, ): + """ + [NEGATIVE] Test that VMs on separate flat overlay networks cannot communicate. + + Preconditions: + - VM-A attached to the first flat overlay network (NAD-1) + - VM-D attached to the second flat overlay network (NAD-2) + + Steps: + 1. Execute ping from VM-A to VM-D + + Expected: + - Ping fails with 100% packet loss + """ assert_no_ping( src_vm=vma_flat_overlay, dst_ip=vmd_flat_overlay_ip_address, @@ -68,6 +124,21 @@ def test_flat_overlay_connectivity_between_namespaces( vma_flat_overlay, vme_flat_overlay, ): + """ + Test that VMs in different namespaces can communicate via same-named NAD. + + Preconditions: + - NAD with identical name created in namespace-1 and namespace-2 + - VM-A running in namespace-1 attached to the NAD + - VM-E running in namespace-2 attached to the NAD + + Steps: + 1. Verify NAD names are identical in both namespaces + 2. Execute ping from VM-A to VM-E + + Expected: + - Ping succeeds with 0% packet loss + """ assert flat_overlay_vma_vmb_nad.name == flat_overlay_vme_nad.name, ( f"NAD names are not identical:\n first NAD's " f"name: {flat_overlay_vma_vmb_nad.name}, second NAD's name: " @@ -86,6 +157,21 @@ def test_flat_overlay_consistent_ip( ping_before_migration, migrated_vmc_flat_overlay, ): + """ + Test that VM retains its IP address after live migration. 
+ + Preconditions: + - VM-C running with a flat overlay network IP address + - VM-D running on a flat overlay network + - Ping from VM-D to VM-C succeeded before migration + - VM-C live migrated to another node + + Steps: + 1. Execute ping from VM-D to VM-C's original IP address + + Expected: + - Ping succeeds with 0% packet loss + """ assert_ping_successful( src_vm=vmd_flat_overlay, dst_ip=vmc_flat_overlay_ip_address, @@ -93,6 +179,15 @@ def test_flat_overlay_consistent_ip( class TestFlatOverlayJumboConnectivity: + """ + Tests for flat overlay network jumbo frame connectivity. + + Preconditions: + - Flat overlay NAD configured for jumbo frames + - VM-A running and attached to jumbo frame NAD + - VM-B running and attached to jumbo frame NAD + """ + @pytest.mark.polarion("CNV-10162") @pytest.mark.s390x def test_flat_l2_jumbo_frame_connectivity( @@ -102,6 +197,19 @@ def test_flat_l2_jumbo_frame_connectivity( vma_jumbo_flat_l2, vmb_jumbo_flat_l2, ): + """ + Test that VMs can communicate using jumbo frames on a flat overlay network. + + Markers: + - s390x + + Steps: + 1. Get IPv4 address of VM-B + 2. Execute ping from VM-A to VM-B with jumbo frame packet size + + Expected: + - Ping succeeds with 0% packet loss + """ assert_ping_successful( src_vm=vma_jumbo_flat_l2, packet_size=flat_l2_jumbo_frame_packet_size, diff --git a/tests/storage/online_resize/test_online_resize.py b/tests/storage/online_resize/test_online_resize.py index 21bfc2bb36..f79574a067 100644 --- a/tests/storage/online_resize/test_online_resize.py +++ b/tests/storage/online_resize/test_online_resize.py @@ -1,7 +1,7 @@ # -*- coding: utf-8 -*- """ -Online resize (PVC expanded while VM running) +Online Resize Tests - PVC Expansion While VM Running """ import logging @@ -42,7 +42,23 @@ def test_sequential_disk_expand( rhel_vm_for_online_resize, running_rhel_vm, ): - # Expand PVC and wait for resize 6 times + """ + Test that a running VM's disk can be expanded multiple times sequentially. 
+ + Markers: + - gating + + Preconditions: + - DataVolume with RHEL image + - VM using the DataVolume as boot disk + - VM is running + + Steps: + 1. Expand PVC by the smallest possible increment and wait for resize (repeat 6 times) + + Expected: + - All 6 resize operations complete successfully + """ for _ in range(6): with wait_for_resize(vm=rhel_vm_for_online_resize): expand_pvc(dv=rhel_dv_for_online_resize, size_change=SMALLEST_POSSIBLE_EXPAND) diff --git a/tests/virt/cluster/vm_lifecycle/test_restart.py b/tests/virt/cluster/vm_lifecycle/test_restart.py index 44bfb4990f..8c77eff90a 100644 --- a/tests/virt/cluster/vm_lifecycle/test_restart.py +++ b/tests/virt/cluster/vm_lifecycle/test_restart.py @@ -1,5 +1,5 @@ """ -Test VM restart +VM Lifecycle Tests - Restart Operations """ import logging @@ -33,6 +33,23 @@ def vm_to_restart(unprivileged_client, namespace): @pytest.mark.polarion("CNV-1497") def test_vm_restart(vm_to_restart): + """ + Test that a VM can complete a full restart cycle (restart, stop, start). + + Markers: + - arm64 + + Preconditions: + - Running Fedora virtual machine + + Steps: + 1. Restart the VM and wait for completion + 2. Stop the VM and wait for completion + 3. 
Start the VM and wait for it to become running + + Expected: + - VM is running and SSH accessible + """ LOGGER.info("VM is running: Restarting VM") vm_to_restart.restart(wait=True) LOGGER.info("VM is running: Stopping VM") From c90a8ac5b7f0ade138a65c022bb6f2d0100c8cf1 Mon Sep 17 00:00:00 2001 From: rnetser Date: Wed, 26 Nov 2025 18:53:13 +0200 Subject: [PATCH 02/21] fix layout and markers --- docs/SOFTWARE_TEST_DESCRIPTION.md | 98 +++++++++++++++++-------------- 1 file changed, 53 insertions(+), 45 deletions(-) diff --git a/docs/SOFTWARE_TEST_DESCRIPTION.md b/docs/SOFTWARE_TEST_DESCRIPTION.md index bf8569beb6..6974cedf7d 100644 --- a/docs/SOFTWARE_TEST_DESCRIPTION.md +++ b/docs/SOFTWARE_TEST_DESCRIPTION.md @@ -59,13 +59,13 @@ This project follows a **two-phase development workflow** that separates test de ### Benefits of This Workflow -| Benefit | Description | -|---------|-------------| -| **Early Design Review** | Test design is reviewed before implementation effort is spent | -| **Clear Contracts** | The STD serves as a contract between design and implementation | -| **Reduced Rework** | Design issues are caught early, before automation is written | -| **Better Documentation** | Tests are always documented before they are implemented | -| **Easier Planning** | Test descriptions can be created during sprint planning | +| Benefit | Description | +|--------------------------|----------------------------------------------------------------| +| **Early Design Review** | Test design is reviewed before implementation effort is spent | +| **Clear Contracts** | The STD serves as a contract between design and implementation | +| **Reduced Rework** | Design issues are caught early, before automation is written | +| **Better Documentation** | Tests are always documented before they are implemented | +| **Easier Planning** | Test descriptions can be created during sprint planning | --- @@ -76,19 +76,19 @@ To enable consistent parsing and automation, use these conventions in 
docstrings ### Assertion Wording (Expected) -Use clear, natural language that maps directly to assertions: +Use clear, natural language that maps directly to assertions, for example: -| Wording Pattern | Maps To | -|-----------------|---------| -| `X equals Y` | `assert x == y` | -| `X does not equal Y` | `assert x != y` | -| `VM is "Running"` | `assert vm.status == Running` | -| `VM is not running` | `assert vm.status != Running` | -| `File exists` / `Resource x exists` | `assert exists(x)` | -| `File does not exist` / `Resource x does NOT exist` | `assert not exists(x)` | -| `X does not contain Y` | `assert y not in x` | -| `Ping succeeds` / `Operation succeeds` | `assert operation()` (no exception) | -| `Ping fails` / `Operation fails` | `assert` raises exception or returns failure | +| Wording Pattern | Maps To | +|-----------------------------------------------------|----------------------------------------------| +| `X equals Y` | `assert x == y` | +| `X does not equal Y` | `assert x != y` | +| `VM is "Running"` | `assert vm.status == Running` | +| `VM is not running` | `assert vm.status != Running` | +| `File exists` / `Resource x exists` | `assert exists(x)` | +| `File does not exist` / `Resource x does NOT exist` | `assert not exists(x)` | +| `X does not contain Y` | `assert y not in x` | +| `Ping succeeds` / `Operation succeeds` | `assert operation()` (no exception) | +| `Ping fails` / `Operation fails` | `assert` raises exception or returns failure | **Example:** ``` @@ -107,7 +107,8 @@ Mark tests that verify failure scenarios with `[NEGATIVE]` in the description: def test_isolated_vms_cannot_communicate(): """ [NEGATIVE] Test that VMs on separate networks cannot ping each other. - ... 
+ """ + pass ``` ### Parametrization Hints @@ -162,6 +163,7 @@ class Test: Expected: - """ + pass ``` ### Test-Level Template @@ -189,19 +191,20 @@ def test_(): Expected: - """ + pass ``` ### Template Components -| Component | Purpose | Guidelines | -|-----------|---------|------------| -| **Class Docstring** | Shared preconditions | Setup common to all tests | -| **Brief Description** | One-line summary | Describe the ONE thing being verified; use `[NEGATIVE]` for failure tests | -| **Preconditions** (test) | Test-specific setup | Only if this test has additional setup beyond the class | -| **Steps** | Test action(s) | Minimal - just what's needed to get the result to verify | -| **Expected** | ONE assertion | Use natural language that maps to assertions | -| **Parametrize** | Matrix testing | Optional - list parameter combinations | -| **Markers** | pytest markers | Optional - list required decorators | +| Component | Purpose | Guidelines | +|--------------------------|----------------------|---------------------------------------------------------------------------| +| **Class Docstring** | Shared preconditions | Setup common to all tests | +| **Brief Description** | One-line summary | Describe the ONE thing being verified; use `[NEGATIVE]` for failure tests | +| **Preconditions** (test) | Test-specific setup | Only if this test has additional setup beyond the class | +| **Steps** | Test action(s) | Minimal - just what's needed to get the result to verify | +| **Expected** | ONE assertion | Use natural language that maps to assertions | +| **Parametrize** | Matrix testing | Optional - list parameter combinations | +| **Markers** | pytest markers | Optional - list required decorators | --- @@ -236,12 +239,12 @@ def test_(): ### Common Patterns in This Project -| Pattern | Description | Example | -|---------|-------------|---------| -| **Fixture-based Setup** | Use pytest fixtures for resource creation | `vm_to_restart`, `namespace` | -| **Matrix Testing** | 
Parameterize tests for multiple scenarios | `storage_class_matrix`, `run_strategy_matrix` | -| **Architecture Markers** | Indicate architecture compatibility | `@pytest.mark.arm64`, `@pytest.mark.s390x` | -| **Gating Tests** | Critical tests for CI/CD pipelines | `@pytest.mark.gating` | +| Pattern | Description | Example | +|--------------------------|-------------------------------------------|-----------------------------------------------| +| **Fixture-based Setup** | Use pytest fixtures for resource creation | `vm_to_restart`, `namespace` | +| **Matrix Testing** | Parameterize tests for multiple scenarios | `storage_class_matrix`, `run_strategy_matrix` | +| **Architecture Markers** | Indicate architecture compatibility | `@pytest.mark.arm64`, `@pytest.mark.s390x` | +| **Gating Tests** | Critical tests for CI/CD pipelines | `@pytest.mark.gating` | ### STD Checklist @@ -272,14 +275,13 @@ VM Snapshot and Restore Tests STP Reference: https://example.com/stp/vm-snapshot-restore """ -import pytest - - -@pytest.mark.gating class TestSnapshotRestore: """ Tests for VM snapshot restore functionality. + Markers: + - gating + Preconditions: - Running VM with a data disk - File path="/data/original.txt", content="data-before-snapshot" @@ -373,16 +375,18 @@ class TestVMLifecycle: When a test stands alone without related tests, a class is not required: ```python -@pytest.mark.gating -@pytest.mark.ipv4 def test_flat_overlay_ping_between_vms(): """ Test that VMs on the same flat overlay network can communicate. + Markers: + - ipv4 + - gating + Preconditions: - Flat overlay Network Attachment Definition created - - VM-A running and attached to flat overlay network - - VM-B running and attached to flat overlay network + - VM-A running and attached to a flat overlay network + - VM-B running and attached to a flat overlay network Steps: 1. 
Get IPv4 address of VM-B @@ -401,11 +405,13 @@ def test_flat_overlay_ping_between_vms(): Tests that verify failure scenarios use the `[NEGATIVE]` indicator: ```python -@pytest.mark.ipv4 def test_isolated_vms_cannot_communicate(): """ [NEGATIVE] Test that VMs on separate flat overlay networks cannot ping each other. + Markers: + - ipv4 + Preconditions: - NAD-1 flat overlay network created - NAD-2 separate flat overlay network created @@ -429,11 +435,13 @@ def test_isolated_vms_cannot_communicate(): Tests that should run with multiple parameter combinations include a `Parametrize:` section: ```python -@pytest.mark.gating def test_online_disk_resize(): """ Test that a running VM's disk can be expanded. + Markers: + - gating + Parametrize: - storage_class: [ocs-storagecluster-ceph-rbd, hostpath-csi] From 446e1d4724632208fd40560da7f4360d0b8d3383 Mon Sep 17 00:00:00 2001 From: rnetser Date: Sun, 25 Jan 2026 21:14:20 +0200 Subject: [PATCH 03/21] Add STD docstrings to test files Add Software Test Description (STD) docstrings following the Preconditions/Steps/Expected format to improve test documentation. 
Files updated: - tests/virt/cluster/vm_lifecycle/test_vm_run_strategy.py - tests/virt/node/migration_and_maintenance/test_node_maintenance.py - tests/network/bgp/test_bgp_connectivity.py - tests/network/flat_overlay/test_flat_overlay.py - tests/network/network_service/test_service_config_manifest.py - tests/network/network_service/test_service_config_virtctl.py --- tests/network/bgp/test_bgp_connectivity.py | 47 +++++++ .../network/flat_overlay/test_flat_overlay.py | 25 +++- .../test_service_config_manifest.py | 52 ++++++++ .../test_service_config_virtctl.py | 35 ++++++ .../vm_lifecycle/test_vm_run_strategy.py | 97 ++++++++++++++- .../test_node_maintenance.py | 115 ++++++++++++++++-- 6 files changed, 354 insertions(+), 17 deletions(-) diff --git a/tests/network/bgp/test_bgp_connectivity.py b/tests/network/bgp/test_bgp_connectivity.py index 8d73f4a489..5d33ddc2ba 100644 --- a/tests/network/bgp/test_bgp_connectivity.py +++ b/tests/network/bgp/test_bgp_connectivity.py @@ -1,3 +1,13 @@ +""" +BGP Connectivity Tests + +Tests for verifying connectivity between CUDN (Cluster User-Defined Network) VMs +and external networks using BGP routing. + +STP Reference: +# TODO: Add link to Polarion STP +""" + import pytest from libs.net.traffic_generator import is_tcp_connection @@ -17,6 +27,24 @@ @pytest.mark.polarion("CNV-12276") def test_connectivity_cudn_vm_and_external_network(tcp_server_cudn_vm, tcp_client_external_network): + """ + Test that CUDN VM can establish TCP connection with external network. + + Markers: + - bgp + - ipv4 + + Preconditions: + - BGP setup configured + - TCP server running on CUDN VM + - TCP client on external network + + Steps: + 1. 
Establish TCP connection from external client to CUDN VM server + + Expected: + - TCP connection succeeds + """ assert is_tcp_connection(server=tcp_server_cudn_vm, client=tcp_client_external_network) @@ -25,5 +53,24 @@ def test_connectivity_is_preserved_during_cudn_vm_migration( tcp_server_cudn_vm, tcp_client_external_network, ): + """ + Test that TCP connectivity is preserved after CUDN VM migration. + + Markers: + - bgp + - ipv4 + + Preconditions: + - BGP setup configured + - TCP server running on CUDN VM + - TCP client on external network + + Steps: + 1. Migrate CUDN VM + 2. Establish TCP connection + + Expected: + - TCP connection succeeds after migration + """ migrate_vm_and_verify(vm=tcp_server_cudn_vm.vm) assert is_tcp_connection(server=tcp_server_cudn_vm, client=tcp_client_external_network) diff --git a/tests/network/flat_overlay/test_flat_overlay.py b/tests/network/flat_overlay/test_flat_overlay.py index a7d920ead5..4980f132ec 100644 --- a/tests/network/flat_overlay/test_flat_overlay.py +++ b/tests/network/flat_overlay/test_flat_overlay.py @@ -1,5 +1,8 @@ """ Flat Overlay Network Connectivity Tests + +STP Reference: +# TODO: add STP """ import logging @@ -28,6 +31,7 @@ class TestFlatOverlayConnectivity: Markers: - s390x + - ipv4 Preconditions: - Multi-network policy usage enabled @@ -41,6 +45,18 @@ class TestFlatOverlayConnectivity: # Not marked as `conformance`; requires NMState @pytest.mark.dependency(name="test_flat_overlay_basic_ping") def test_flat_overlay_basic_ping(self, vma_flat_overlay, vmb_flat_overlay_ip_address): + """ + Test that VMs on the same flat overlay network can communicate. + + Markers: + - gating + + Steps: + Execute ping from VM-A to VM-B + + Expected: + - Ping succeeds with 0% packet loss + """ assert_ping_successful( src_vm=vma_flat_overlay, dst_ip=vmb_flat_overlay_ip_address, @@ -59,7 +75,7 @@ def test_flat_overlay_separate_nads( Test that adding a second flat overlay network does not break existing connectivity. 
Preconditions: - - Second flat overlay NAD created (flat_overlay_vmc_vmd_nad) + - Second flat overlay NAD created - VM-C running and attached to a second flat overlay network - VM-D running and attached to a second flat overlay network @@ -169,6 +185,10 @@ class TestFlatOverlayJumboConnectivity: """ Tests for flat overlay network jumbo frame connectivity. + Markers: + - jumbo_frame + - ipv4 + Preconditions: - Flat overlay NAD configured for jumbo frames - VM-A running and attached to jumbo frame NAD @@ -191,8 +211,7 @@ def test_flat_l2_jumbo_frame_connectivity( - s390x Steps: - 1. Get IPv4 address of VM-B - 2. Execute ping from VM-A to VM-B with jumbo frame packet size + Execute ping from VM-A to VM-B with jumbo frame packet size Expected: - Ping succeeds with 0% packet loss diff --git a/tests/network/network_service/test_service_config_manifest.py b/tests/network/network_service/test_service_config_manifest.py index d636b080b3..3c5df8a49d 100644 --- a/tests/network/network_service/test_service_config_manifest.py +++ b/tests/network/network_service/test_service_config_manifest.py @@ -1,3 +1,12 @@ +""" +Service Configuration via Manifest Tests + +Tests for service configuration using manifest-based approach. + +STP Reference: +TODO: add link +""" + import pytest from tests.network.network_service.libservice import SERVICE_IP_FAMILY_POLICY_SINGLE_STACK @@ -5,6 +14,16 @@ @pytest.mark.gating class TestServiceConfigurationViaManifest: + """ + Tests for configuring Kubernetes services via manifest. + + Markers: + - gating + + Preconditions: + - Running VM exposed with a service + """ + @pytest.mark.single_nic @pytest.mark.parametrize( "single_stack_service_ip_family, single_stack_service", @@ -20,6 +39,24 @@ def test_service_with_configured_ip_families( single_stack_service_ip_family, single_stack_service, ): + """ + Test that service is created with configured IP family. 
+ + Markers: + - single_nic + + Parametrize: + - ip_family: [IPv4, IPv6] + + Preconditions: + - Single stack service created with specified IP family + + Steps: + 1. Get ipFamilies from service spec + + Expected: + - Service has single IP family matching configuration + """ ip_families_in_svc = running_vm_for_exposure.custom_service.instance.spec.ipFamilies assert len(ip_families_in_svc) == 1 and ip_families_in_svc[0] == single_stack_service_ip_family, ( @@ -35,6 +72,21 @@ def test_service_with_default_ip_family_policy( self, running_vm_for_exposure, ): + """ + Test that service is created with default SingleStack IP family policy. + + Markers: + - single_nic + + Preconditions: + - Service created with default IP family policy + + Steps: + 1. Get ipFamilyPolicy from service spec + + Expected: + - Service ipFamilyPolicy is SingleStack + """ ip_family_policy = running_vm_for_exposure.custom_service.instance.spec.ipFamilyPolicy assert ip_family_policy == SERVICE_IP_FAMILY_POLICY_SINGLE_STACK, ( f"Service created with wrong default ipfamilyPolicy on VM {running_vm_for_exposure.name}: " diff --git a/tests/network/network_service/test_service_config_virtctl.py b/tests/network/network_service/test_service_config_virtctl.py index c542a24b72..5c1dbb555c 100644 --- a/tests/network/network_service/test_service_config_virtctl.py +++ b/tests/network/network_service/test_service_config_virtctl.py @@ -1,3 +1,12 @@ +""" +Service Configuration via virtctl Tests + +Tests for service configuration using virtctl expose command. + +STP Reference: +TODO: add link +""" + import pytest from tests.network.network_service.libservice import ( @@ -9,6 +18,14 @@ class TestServiceConfigurationViaVirtctl: + """ + Tests for configuring Kubernetes services via virtctl expose command. 
+
+    Preconditions:
+    - Running VM available for service exposure
+    - Dual-stack cluster configured
+    """
+
     @pytest.mark.parametrize(
         "virtctl_expose_service, expected_num_families_in_service, ip_family_policy",
         [
@@ -42,6 +59,24 @@ def test_virtctl_expose_services(
         dual_stack_cluster,
         ip_family_policy,
     ):
+        """
+        Test that virtctl expose creates service with correct IP family policy.
+
+        Markers:
+        - single_nic
+
+        Parametrize:
+        - ip_family_policy: [SingleStack, PreferDualStack, RequireDualStack]
+
+        Preconditions:
+        - Service created via virtctl expose with specified IP family policy
+
+        Steps:
+        1. Verify service IP family parameters
+
+        Expected:
+        - Service has correct number of IP families and IP family policy
+        """
         assert_svc_ip_params(
             svc=virtctl_expose_service,
             expected_num_families_in_service=expected_num_families_in_service,
diff --git a/tests/virt/cluster/vm_lifecycle/test_vm_run_strategy.py b/tests/virt/cluster/vm_lifecycle/test_vm_run_strategy.py
index ac6f0713ab..5a461d5f1e 100644
--- a/tests/virt/cluster/vm_lifecycle/test_vm_run_strategy.py
+++ b/tests/virt/cluster/vm_lifecycle/test_vm_run_strategy.py
@@ -1,5 +1,12 @@
-# Run strategies logic can be found under
-# https://kubevirt.io/user-guide/#/creation/run-strategies?id=run-strategies
+"""
+VM Run Strategy Tests
+
+Run strategies logic can be found under
+https://kubevirt.io/user-guide/#/creation/run-strategies?id=run-strategies
+
+STP Reference:
+# TODO: add link
+"""
 
 import logging
 import re
@@ -204,6 +211,23 @@ def shutdown_vm_guest_os(vm):
 @pytest.mark.s390x
 @pytest.mark.gating
 class TestRunStrategyBaseActions:
+    """
+    Tests for VM run strategy basic lifecycle actions.
+ + Markers: + - arm64 + - s390x + - gating + - post_upgrade + + Parametrize: + - vm_action: [start, restart, stop] + + Preconditions: + - Running RHEL VM + - VM configured with run strategy + """ + @pytest.mark.parametrize( "vm_action", [ @@ -218,6 +242,18 @@ def test_run_strategy_policy( matrix_updated_vm_run_strategy, vm_action, ): + """ + Test that VM action behaves correctly according to run strategy policy. + + Parametrize: + - vm_action: [start, restart, stop] + + Steps: + 1. Perform the VM action (start/restart/stop) + + Expected: + - VM status and run strategy match expected policy + """ LOGGER.info(f"Verify VM with run strategy {matrix_updated_vm_run_strategy} and VM action {vm_action}") verify_vm_action( vm=lifecycle_vm, @@ -232,6 +268,16 @@ def test_run_strategy_policy( indirect=True, ) class TestRunStrategyAdvancedActions: + """ + Tests for advanced VM run strategy behaviors. + + Markers: + - post_upgrade + + Preconditions: + - Running RHEL VM + """ + @pytest.mark.polarion("CNV-5054") def test_run_strategy_shutdown( self, @@ -240,6 +286,18 @@ def test_run_strategy_shutdown( matrix_updated_vm_run_strategy, start_vm_if_not_running, ): + """ + Test that guest OS shutdown behaves correctly per run strategy. + + Preconditions: + - VM is running with specific run strategy + + Steps: + 1. Shutdown VM from guest OS + + Expected: + - VMI and virt-launcher pod reach expected status per run strategy + """ vmi = lifecycle_vm.vmi launcher_pod = vmi.virt_launcher_pod run_strategy = matrix_updated_vm_run_strategy @@ -278,6 +336,22 @@ def test_run_strategy_shutdown( def test_run_strategy_pause_unpause_vmi( self, lifecycle_vm, request_updated_vm_run_strategy, start_vm_if_not_running ): + """ + Test that VMI can be paused and unpaused. + + Parametrize: + - run_strategy: [Manual, Always] + + Preconditions: + - VM is running with run strategy + + Steps: + 1. Pause VMI + 2. 
Unpause VMI + + Expected: + - VM is Running after unpause + """ LOGGER.info(f"Verify VMI pause/un-pause with runStrategy: {request_updated_vm_run_strategy}") pause_unpause_vmi_and_verify_status(vm=lifecycle_vm) @@ -299,4 +373,23 @@ def test_run_strategy_pause_unpause_vmi( ) @pytest.mark.rwx_default_storage def test_run_strategy_migrate_vm(self, lifecycle_vm, request_updated_vm_run_strategy, start_vm_if_not_running): + """ + Test that VM can be migrated. + + Markers: + - rwx_default_storage + + Parametrize: + - run_strategy: [Manual, Always] + + Preconditions: + - VM is running with run strategy + - RWX storage available + + Steps: + 1. Migrate VM + + Expected: + - VM is Running and run strategy unchanged + """ migrate_validate_run_strategy_vm(vm=lifecycle_vm, run_strategy=request_updated_vm_run_strategy) diff --git a/tests/virt/node/migration_and_maintenance/test_node_maintenance.py b/tests/virt/node/migration_and_maintenance/test_node_maintenance.py index 19e371386f..e53f3b71c9 100644 --- a/tests/virt/node/migration_and_maintenance/test_node_maintenance.py +++ b/tests/virt/node/migration_and_maintenance/test_node_maintenance.py @@ -1,5 +1,7 @@ """ Draining node by Node Maintenance Operator + +STP Reference: https://docs.openshift.com/container-platform/latest/nodes/nodes/nodes-nodes-working.html """ import logging @@ -105,6 +107,24 @@ def test_node_drain_using_console_fedora( admin_client, vm_container_disk_fedora, ): + """ + Test that Fedora container disk VM migrates successfully during node drain. + + Markers: + - post_upgrade + - rwx_default_storage + + Preconditions: + - Running Fedora container disk VM + - RWX storage available + + Steps: + 1. Start a process on VM + 2. 
Drain the node hosting the VM + + Expected: + - VM migrates successfully, process continues running + """ privileged_virt_launcher_pod = vm_container_disk_fedora.privileged_vmi.virt_launcher_pod drain_using_console(client=admin_client, source_node=privileged_virt_launcher_pod.node, vm=vm_container_disk_fedora) @@ -124,8 +144,36 @@ def test_node_drain_using_console_fedora( ) @pytest.mark.ibm_bare_metal class TestNodeMaintenanceRHEL: + """ + Tests for node maintenance operations with RHEL VM. + + Markers: + - ibm_bare_metal + - post_upgrade + - rwx_default_storage + + Parametrize: + - os_image: [RHEL_LATEST] + + Preconditions: + - Running RHEL VM from template + """ + @pytest.mark.polarion("CNV-2292") def test_node_drain_using_console_rhel(self, no_migration_job, vm_for_test_from_template_scope_class, admin_client): + """ + Test that RHEL VM migrates successfully during node drain. + + Preconditions: + - No existing migration job in namespace + + Steps: + 1. Start a process on VM + 2. Drain the node hosting the VM + + Expected: + - VM migrates successfully + """ vm = vm_for_test_from_template_scope_class drain_using_console(client=admin_client, source_node=vm.privileged_vmi.virt_launcher_pod.node, vm=vm) @@ -133,18 +181,18 @@ def test_node_drain_using_console_rhel(self, no_migration_job, vm_for_test_from_ def test_migration_when_multiple_nodes_unschedulable_using_console_rhel( self, no_migration_job, vm_for_test_from_template_scope_class, schedulable_nodes, admin_client ): - """Test VMI migration, when multiple nodes are unschedulable. - - In our BM or PSI setups, we mostly use only 3 worker nodes, - the OCS pods would need at-least 2 nodes up and running, to - avoid violation of the ceph pod's disruption budget. - Hence we simulating this case here, with Cordon 1 node and - Drain 1 node, instead of Draining 2 Worker nodes. - - 1. Start a VMI - 2. Cordon a Node, other than the current running VMI Node. - 3. Drain the Node, on which the VMI is present. - 4. 
Make sure the VMI is migrated to the other node. + """ + Test that VM migrates when multiple nodes are unschedulable. + + Preconditions: + - No existing migration job in namespace + + Steps: + 1. Cordon one node + 2. Drain the node hosting VM + + Expected: + - VM migrates to remaining available node """ vm = vm_for_test_from_template_scope_class cordon_nodes = node_filter(pod=vm.privileged_vmi.virt_launcher_pod, schedulable_nodes=schedulable_nodes) @@ -168,13 +216,56 @@ def test_migration_when_multiple_nodes_unschedulable_using_console_rhel( ) @pytest.mark.ibm_bare_metal class TestNodeCordonAndDrain: + """ + Tests for node cordon and drain operations with Windows VM. + + Markers: + - ibm_bare_metal + - special_infra + - high_resource_vm + - post_upgrade + - rwx_default_storage + + Parametrize: + - os_image: [WINDOWS_LATEST] + + Preconditions: + - Running Windows VM from template + """ + @pytest.mark.polarion("CNV-2048") def test_node_drain_template_windows(self, no_migration_job, vm_for_test_from_template_scope_class, admin_client): + """ + Test that Windows VM migrates during node drain with process preservation. + + Preconditions: + - No existing migration job in namespace + + Steps: + 1. Start process on Windows VM + 2. Drain node + + Expected: + - Process ID after migration equals process ID before migration + """ vm = vm_for_test_from_template_scope_class drain_using_console_windows(client=admin_client, source_node=vm.privileged_vmi.virt_launcher_pod.node, vm=vm) @pytest.mark.polarion("CNV-4906") def test_node_cordon_template_windows(self, no_migration_job, vm_for_test_from_template_scope_class, admin_client): + """ + [NEGATIVE] Test that cordoning a node does NOT trigger VM migration. + + Preconditions: + - No existing migration job in namespace + + Steps: + 1. Cordon the node hosting VM + 2. 
Wait for migration job
+
+        Expected:
+        - No migration job is created
+        """
         vm = vm_for_test_from_template_scope_class
         with node_mgmt_console(node=vm.privileged_vmi.virt_launcher_pod.node, node_mgmt="cordon"):
             with pytest.raises(TimeoutExpiredError):

From 928f05aeb96bf15bc0796a4e479fe0c16dc060b7 Mon Sep 17 00:00:00 2001
From: rnetser
Date: Mon, 26 Jan 2026 19:01:06 +0200
Subject: [PATCH 04/21] add instructions to avoid collection

---
 docs/SOFTWARE_TEST_DESCRIPTION.md             | 26 +++++++++++++++++++
 .../centos/test_centos_os_support.py          |  2 ++
 2 files changed, 28 insertions(+)

diff --git a/docs/SOFTWARE_TEST_DESCRIPTION.md b/docs/SOFTWARE_TEST_DESCRIPTION.md
index 6974cedf7d..bac65f8495 100644
--- a/docs/SOFTWARE_TEST_DESCRIPTION.md
+++ b/docs/SOFTWARE_TEST_DESCRIPTION.md
@@ -99,6 +99,24 @@ Expected:
 - Ping fails with 100% packet loss
 ```
 
+### Exclude new test stubs from pytest collection ([customizing test collection](https://doc.pytest.org/en/latest/example/pythoncollection.html#customizing-test-collection))
+
+To exclude new test classes from pytest collection, use:
+
+```python
+class TestClass:
+    __test__ = False
+```
+
+To exclude new tests from pytest collection, use:
+
+```python
+def test_abc():
+    ...
+
+test_abc.__test__ = False
+```
+
 ### Negative Test Indicator
 
 Mark tests that verify failure scenarios with `[NEGATIVE]` in the description:
@@ -109,6 +127,7 @@ def test_isolated_vms_cannot_communicate():
     """
     [NEGATIVE] Test that VMs on separate networks cannot ping each other.
""" pass +test_isolated_vms_cannot_communicate.__test__ = False ``` ### Parametrization Hints @@ -152,6 +171,7 @@ class Test: - """ + __test__ = False def test_(self): """ @@ -192,6 +212,7 @@ def test_(): - """ pass +test_.__test__ = False ``` ### Template Components @@ -289,6 +310,7 @@ class TestSnapshotRestore: - File path="/data/after.txt", content="post-snapshot" (written after snapshot) - VM Restored from snapshot, running and SSH accessible """ + __test__ = False def test_preserves_original_file(self): """ @@ -327,6 +349,7 @@ class TestVMLifecycle: Preconditions: - VM Running latest Fedora virtual machine """ + __test__ = False def test_vm_restart_completes_successfully(self): """ @@ -396,6 +419,7 @@ def test_flat_overlay_ping_between_vms(): - Ping succeeds with 0% packet loss """ pass +test_flat_overlay_ping_between_vms.__test__ = False ``` --- @@ -426,6 +450,7 @@ def test_isolated_vms_cannot_communicate(): - Ping fails with 100% packet loss """ pass +test_isolated_vms_cannot_communicate.__test__ = False ``` --- @@ -458,6 +483,7 @@ def test_online_disk_resize(): - Disk size inside VM is greater than original size """ pass +test_online_disk_resize.__test__ = False ``` --- diff --git a/tests/virt/cluster/common_templates/centos/test_centos_os_support.py b/tests/virt/cluster/common_templates/centos/test_centos_os_support.py index 30c3ea095e..1eeda1932d 100644 --- a/tests/virt/cluster/common_templates/centos/test_centos_os_support.py +++ b/tests/virt/cluster/common_templates/centos/test_centos_os_support.py @@ -33,6 +33,8 @@ @pytest.mark.s390x class TestCommonTemplatesCentos: + __test__ = False + @pytest.mark.dependency(name=f"{TESTS_CLASS_NAME}::create_vm") @pytest.mark.polarion("CNV-5337") def test_create_vm(self, matrix_centos_os_vm_from_template): From d87e121beba1d25758fd9590a99481c10b1a53be Mon Sep 17 00:00:00 2001 From: rnetser Date: Mon, 26 Jan 2026 19:41:07 +0200 Subject: [PATCH 05/21] add module level skip collecitopn --- 
docs/SOFTWARE_TEST_DESCRIPTION.md | 11 +++++++++++ tests/virt/node/general/test_container_disk_vm.py | 2 ++ 2 files changed, 13 insertions(+) diff --git a/docs/SOFTWARE_TEST_DESCRIPTION.md b/docs/SOFTWARE_TEST_DESCRIPTION.md index bac65f8495..518d44efe2 100644 --- a/docs/SOFTWARE_TEST_DESCRIPTION.md +++ b/docs/SOFTWARE_TEST_DESCRIPTION.md @@ -101,6 +101,17 @@ Expected: ### Exclude new test stubs from pytest collection [customizing-test-collection](https://doc.pytest.org/en/latest/example/pythoncollection.html#customizing-test-collection) +To exclude a whole new module from pytest collection, use: + +```python +# test_module_to_ignore.py +__test__ = False + +def test_abc(): + assert True # This test will not be collected or run + +``` + To exclude new test classes from pytest collection, use: ```python diff --git a/tests/virt/node/general/test_container_disk_vm.py b/tests/virt/node/general/test_container_disk_vm.py index e374b6a2cf..2b580bf625 100644 --- a/tests/virt/node/general/test_container_disk_vm.py +++ b/tests/virt/node/general/test_container_disk_vm.py @@ -2,6 +2,8 @@ from utilities.virt import VirtualMachineForTests, fedora_vm_body, running_vm +__test__ = False + @pytest.mark.arm64 @pytest.mark.smoke From 96a87b13e369e682ce261d4aad75e003d5094d3f Mon Sep 17 00:00:00 2001 From: rnetser Date: Mon, 26 Jan 2026 20:12:08 +0200 Subject: [PATCH 06/21] remove testing of pytest __test__ --- .../cluster/common_templates/centos/test_centos_os_support.py | 2 -- tests/virt/node/general/test_container_disk_vm.py | 2 -- 2 files changed, 4 deletions(-) diff --git a/tests/virt/cluster/common_templates/centos/test_centos_os_support.py b/tests/virt/cluster/common_templates/centos/test_centos_os_support.py index 1eeda1932d..30c3ea095e 100644 --- a/tests/virt/cluster/common_templates/centos/test_centos_os_support.py +++ b/tests/virt/cluster/common_templates/centos/test_centos_os_support.py @@ -33,8 +33,6 @@ @pytest.mark.s390x class TestCommonTemplatesCentos: - __test__ = False - 
@pytest.mark.dependency(name=f"{TESTS_CLASS_NAME}::create_vm")
     @pytest.mark.polarion("CNV-5337")
     def test_create_vm(self, matrix_centos_os_vm_from_template):
diff --git a/tests/virt/node/general/test_container_disk_vm.py b/tests/virt/node/general/test_container_disk_vm.py
index 2b580bf625..e374b6a2cf 100644
--- a/tests/virt/node/general/test_container_disk_vm.py
+++ b/tests/virt/node/general/test_container_disk_vm.py
@@ -2,8 +2,6 @@
 from utilities.virt import VirtualMachineForTests, fedora_vm_body, running_vm
 
-__test__ = False
-
 
 @pytest.mark.arm64
 @pytest.mark.smoke

From a8a30b94576b029496b677ab9f0d2064fc4b4647 Mon Sep 17 00:00:00 2001
From: rnetser
Date: Tue, 27 Jan 2026 11:58:11 +0200
Subject: [PATCH 07/21] add exception for __test__ = False in CLAUDE.md

---
 CLAUDE.md | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/CLAUDE.md b/CLAUDE.md
index 4ec21d26e4..dba8a036df 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -64,6 +64,15 @@ The "no defensive programming" rule has these five exceptions:
   For resource dependencies, use shared fixtures instead. **When using `@pytest.mark.dependency`, a comment explaining WHY the dependency exists is REQUIRED.**
- **ALWAYS use `@pytest.mark.usefixtures`** - REQUIRED when fixture return value is not used by test
 
+**`__test__ = False` Usage Rules:**
+
+- ✅ **ALLOWED for STD placeholder tests** - tests that contain ONLY:
+  - Docstrings describing expected behavior
+  - No actual implementation code (no assertions, no test logic)
+- ❌ **FORBIDDEN for implemented tests** - if a test has actual implementation code (assertions, test logic, setup/teardown), do NOT use `__test__ = False`
+
+**Rationale:** STD (Software Test Description) placeholder tests document what will be tested before implementation. These can use `__test__ = False` to prevent collection errors. Once a test has implementation code, `__test__ = False` must be removed.
+
 ### Fixture Guidelines (CRITICAL)
 
1.
**Single Action REQUIRED**: Fixtures MUST do ONE action only (single responsibility)

From 697c77c618ca4b53000bd31e372344d35849736d Mon Sep 17 00:00:00 2001
From: rnetser
Date: Tue, 27 Jan 2026 15:37:39 +0200
Subject: [PATCH 08/21] add a script to find tests that are not implemented

---
 scripts/std_placeholder_stats/__init__.py     |   0
 .../std_placeholder_stats.py                  | 366 ++++++++++++++++++
 2 files changed, 366 insertions(+)
 create mode 100644 scripts/std_placeholder_stats/__init__.py
 create mode 100644 scripts/std_placeholder_stats/std_placeholder_stats.py

diff --git a/scripts/std_placeholder_stats/__init__.py b/scripts/std_placeholder_stats/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/scripts/std_placeholder_stats/std_placeholder_stats.py b/scripts/std_placeholder_stats/std_placeholder_stats.py
new file mode 100644
index 0000000000..0edcf5904b
--- /dev/null
+++ b/scripts/std_placeholder_stats/std_placeholder_stats.py
@@ -0,0 +1,366 @@
+#!/usr/bin/env -S uv run python
+"""STD Placeholder Tests Statistics Generator.
+
+Scans the tests directory for STD (Software Test Description) placeholder tests
+that are not yet implemented. These are tests with `__test__ = False` that contain
+only docstrings describing expected behavior, without actual implementation code.
+ +Output: + - text: Human-readable summary to stdout (default) + - json: Machine-readable JSON output + +Usage: + uv run python scripts/std_placeholder_stats/std_placeholder_stats.py + uv run python scripts/std_placeholder_stats/std_placeholder_stats.py --tests-dir tests + uv run python scripts/std_placeholder_stats/std_placeholder_stats.py --output-format json + +Generated using Claude cli +""" + +from __future__ import annotations + +import ast +import json +from argparse import ArgumentParser, Namespace, RawDescriptionHelpFormatter +from pathlib import Path +from typing import Any + +from simple_logger.logger import get_logger + +LOGGER = get_logger(name=__name__) + + +def separator(symbol_: str, val: str | None = None) -> str: + """Create a separator line for terminal output. + + Args: + symbol_: The character to use for the separator. + val: Optional text to center in the separator. + + Returns: + Formatted separator string. + """ + terminal_width = 120 # Fixed width for consistent output + if not val: + return symbol_ * terminal_width + + sepa = int((terminal_width - len(val) - 2) // 2) + return f"{symbol_ * sepa} {val} {symbol_ * sepa}" + + +def module_has_test_false(module_tree: ast.Module) -> bool: + """Check if a module has `__test__ = False` assignment at top level. + + Args: + module_tree: AST module tree + + Returns: + True if the module has __test__ = False at top level, False otherwise + """ + for node in module_tree.body: + if isinstance(node, ast.Assign): + for target in node.targets: + if isinstance(target, ast.Name) and target.id == "__test__": + if isinstance(node.value, ast.Constant) and node.value.value is False: + return True + return False + + +def class_has_test_false(class_node: ast.ClassDef) -> bool: + """Check if a class has `__test__ = False` assignment in its body. 
+ + Args: + class_node: AST class definition node + + Returns: + True if the class has __test__ = False, False otherwise + """ + for stmt in class_node.body: + if isinstance(stmt, ast.Assign): + for target in stmt.targets: + if isinstance(target, ast.Name) and target.id == "__test__": + if isinstance(stmt.value, ast.Constant) and stmt.value.value is False: + return True + return False + + +def function_has_test_false(module_tree: ast.Module, function_name: str) -> bool: + """Check if a standalone function has `function_name.__test__ = False` assignment. + + Args: + module_tree: AST module tree + function_name: Name of the function to check + + Returns: + True if the function has __test__ = False assignment, False otherwise + """ + for node in module_tree.body: + if isinstance(node, ast.Assign): + for target in node.targets: + if isinstance(target, ast.Attribute): + if ( + isinstance(target.value, ast.Name) + and target.value.id == function_name + and target.attr == "__test__" + ): + if isinstance(node.value, ast.Constant) and node.value.value is False: + return True + return False + + +def method_has_test_false(class_node: ast.ClassDef, method_name: str) -> bool: + """Check if a method has `method_name.__test__ = False` assignment in the class body. 
+ + This detects patterns like: + class TestFoo: + def test_bar(self): + pass + test_bar.__test__ = False + + Args: + class_node: AST class definition node + method_name: Name of the method to check + + Returns: + True if the method has __test__ = False assignment in the class body, False otherwise + """ + for stmt in class_node.body: + if isinstance(stmt, ast.Assign): + for target in stmt.targets: + if isinstance(target, ast.Attribute): + if ( + isinstance(target.value, ast.Name) + and target.value.id == method_name + and target.attr == "__test__" + ): + if isinstance(stmt.value, ast.Constant) and stmt.value.value is False: + return True + return False + + +def get_test_methods_from_class(class_node: ast.ClassDef) -> list[str]: + """Extract formatted test method names from a class definition. + + Args: + class_node: AST class definition node + + Returns: + List of formatted test method names (prefixed with " - ") + """ + return [ + f" - {method.name}" + for method in class_node.body + if isinstance(method, ast.FunctionDef) and method.name.startswith("test_") + ] + + +def scan_placeholder_tests(tests_dir: Path) -> dict[str, list[str]]: + """Scan tests directory for STD placeholder tests. + + Args: + tests_dir: Path to the tests directory to scan. + + Returns: + Dictionary mapping file paths to lists of placeholder test entries. 
+ """ + placeholder_files: dict[str, list[str]] = {} + + for test_file in tests_dir.rglob("test_*.py"): + file_content = test_file.read_text(encoding="utf-8") + if "__test__ = False" not in file_content: + continue + + try: + tree = ast.parse(source=file_content) + except SyntaxError as exc: + LOGGER.warning(f"Failed to parse {test_file}: {exc}") + continue + + relative_path = str(test_file.relative_to(tests_dir.parent)) + + # Check if module has __test__ = False at top level + if module_has_test_false(module_tree=tree): + # Report ALL test classes and functions in this module + module_has_standalone_tests = False + + for node in tree.body: + if isinstance(node, ast.ClassDef): + placeholder_files.setdefault(relative_path, []).append(f"{relative_path}::{node.name}") + test_methods = get_test_methods_from_class(class_node=node) + if test_methods: + placeholder_files[relative_path].extend(test_methods) + + elif isinstance(node, ast.FunctionDef) and node.name.startswith("test_"): + # For standalone functions, add module path first if not already added + if not module_has_standalone_tests: + placeholder_files.setdefault(relative_path, []).append(relative_path) + module_has_standalone_tests = True + placeholder_files[relative_path].append(f" - {node.name}") + else: + # Check individual classes and functions for __test__ = False + for node in tree.body: + if isinstance(node, ast.ClassDef): + if class_has_test_false(class_node=node): + # Class-level __test__ = False: report class and all methods + placeholder_files.setdefault(relative_path, []).append(f"{relative_path}::{node.name}") + test_methods = get_test_methods_from_class(class_node=node) + if test_methods: + placeholder_files[relative_path].extend(test_methods) + else: + # Check each method for method.__test__ = False in class body + method_placeholders: list[str] = [] + for method in node.body: + if isinstance(method, ast.FunctionDef) and method.name.startswith("test_"): + if method_has_test_false(class_node=node, 
method_name=method.name): + method_placeholders.append(f" - {method.name}") + if method_placeholders: + placeholder_files.setdefault(relative_path, []).append(f"{relative_path}::{node.name}") + placeholder_files[relative_path].extend(method_placeholders) + + elif isinstance(node, ast.FunctionDef) and node.name.startswith("test_"): + if function_has_test_false(module_tree=tree, function_name=node.name): + # For standalone functions, add module path first if not already added + if relative_path not in placeholder_files: + placeholder_files[relative_path] = [relative_path] + elif relative_path not in placeholder_files[relative_path]: + placeholder_files[relative_path].insert(0, relative_path) + placeholder_files[relative_path].append(f" - {node.name}") + + return placeholder_files + + +def output_text(placeholder_files: dict[str, list[str]]) -> None: + """Output results in human-readable text format. + + Args: + placeholder_files: Dictionary mapping file paths to placeholder test entries. + """ + if not placeholder_files: + LOGGER.info("No STD placeholder tests found.") + return + + total_tests = 0 + total_files = len(placeholder_files) + + output_lines: list[str] = [] + output_lines.append(separator(symbol_="=")) + output_lines.append("STD PLACEHOLDER TESTS (not yet implemented)") + output_lines.append(separator(symbol_="=")) + output_lines.append("") + + for entries in placeholder_files.values(): + for entry in entries: + output_lines.append(entry) + if entry.startswith(" - "): + total_tests += 1 + + output_lines.append("") + output_lines.append(separator(symbol_="-")) + output_lines.append(f"Total: {total_tests} placeholder tests in {total_files} files") + output_lines.append(separator(symbol_="=")) + + for line in output_lines: + LOGGER.info(line) + + +def output_json(placeholder_files: dict[str, list[str]]) -> None: + """Output results in JSON format. + + Args: + placeholder_files: Dictionary mapping file paths to placeholder test entries. 
+ """ + total_tests = 0 + tests_by_file: dict[str, list[str]] = {} + + for file_path, entries in placeholder_files.items(): + tests: list[str] = [] + for entry in entries: + if entry.startswith(" - "): + tests.append(entry.strip().lstrip("- ")) + total_tests += 1 + if tests: + tests_by_file[file_path] = tests + + output: dict[str, Any] = { + "total_tests": total_tests, + "total_files": len(placeholder_files), + "files": tests_by_file, + } + + print(json.dumps(output, indent=2)) + + +def parse_args() -> Namespace: + """Parse command line arguments. + + Returns: + Parsed arguments namespace. + """ + parser = ArgumentParser( + description="STD Placeholder Tests Statistics Generator", + formatter_class=RawDescriptionHelpFormatter, + epilog=""" +Scans the tests directory for STD (Standard Test Design) placeholder tests. +These are tests marked with `__test__ = False` that contain only docstrings +describing expected behavior, without actual implementation code. + +Examples: + # Scan default tests directory with text output + uv run python scripts/std_placeholder_stats/std_placeholder_stats.py + + # Scan custom tests directory + uv run python scripts/std_placeholder_stats/std_placeholder_stats.py --tests-dir my_tests + + # Output as JSON + uv run python scripts/std_placeholder_stats/std_placeholder_stats.py --output-format json + """, + ) + parser.add_argument( + "--tests-dir", + type=Path, + default=Path("tests"), + help="The tests directory to scan (default: tests)", + ) + parser.add_argument( + "--output-format", + choices=["text", "json"], + default="text", + help="Output format: text (default) or json", + ) + return parser.parse_args() + + +def main() -> int: + """Main entry point for the STD placeholder stats generator. + + Returns: + Exit code: 0 on success, 1 on error. 
+ """ + args = parse_args() + + tests_dir = args.tests_dir + if not tests_dir.is_absolute(): + tests_dir = Path.cwd() / tests_dir + + if not tests_dir.exists(): + LOGGER.error(f"Tests directory does not exist: {tests_dir}") + return 1 + + if not tests_dir.is_dir(): + LOGGER.error(f"Path is not a directory: {tests_dir}") + return 1 + + LOGGER.info(f"Scanning tests directory: {tests_dir}") + + placeholder_files = scan_placeholder_tests(tests_dir=tests_dir) + + if args.output_format == "json": + output_json(placeholder_files=placeholder_files) + else: + output_text(placeholder_files=placeholder_files) + + return 0 + + +if __name__ == "__main__": + raise SystemExit(main()) From b07dc90603f1eda0a7e6f1a559eae9378485b35c Mon Sep 17 00:00:00 2001 From: rnetser Date: Tue, 27 Jan 2026 18:22:35 +0200 Subject: [PATCH 09/21] update doc to reflect changes in docsting --- docs/SOFTWARE_TEST_DESCRIPTION.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/SOFTWARE_TEST_DESCRIPTION.md b/docs/SOFTWARE_TEST_DESCRIPTION.md index 518d44efe2..961a4d9885 100644 --- a/docs/SOFTWARE_TEST_DESCRIPTION.md +++ b/docs/SOFTWARE_TEST_DESCRIPTION.md @@ -49,6 +49,8 @@ This project follows a **two-phase development workflow** that separates test de - Add the actual test code to the previously merged test stubs - Create any required fixtures - Implement helper functions as needed + - Remove `__test__ = False` from implemented tests + - If needed, update the test description. This change must be approved by the team's tech lead. 2. 
**Submit PR for review**: - Reviewers verify the implementation matches the approved design From aa22437a495acb4e600548f29253fd8e81059feb Mon Sep 17 00:00:00 2001 From: rnetser Date: Tue, 10 Feb 2026 17:17:22 +0200 Subject: [PATCH 10/21] remove example --- tests/network/bgp/test_bgp_connectivity.py | 47 ------- .../network/flat_overlay/test_flat_overlay.py | 119 +----------------- .../test_service_config_manifest.py | 52 -------- .../test_service_config_virtctl.py | 35 ------ .../online_resize/test_online_resize.py | 20 +-- .../virt/cluster/vm_lifecycle/test_restart.py | 19 +-- .../vm_lifecycle/test_vm_run_strategy.py | 97 +------------- 7 files changed, 8 insertions(+), 381 deletions(-) diff --git a/tests/network/bgp/test_bgp_connectivity.py b/tests/network/bgp/test_bgp_connectivity.py index 076f9b426d..c763db0e65 100644 --- a/tests/network/bgp/test_bgp_connectivity.py +++ b/tests/network/bgp/test_bgp_connectivity.py @@ -1,13 +1,3 @@ -""" -BGP Connectivity Tests - -Tests for verifying connectivity between CUDN (Cluster User-Defined Network) VMs -and external networks using BGP routing. - -STP Reference: -# TODO: Add link to Polarion STP -""" - import pytest from libs.net.traffic_generator import is_tcp_connection @@ -22,24 +12,6 @@ @pytest.mark.polarion("CNV-12276") def test_connectivity_cudn_vm_and_external_network(tcp_server_cudn_vm, tcp_client_external_network): - """ - Test that CUDN VM can establish TCP connection with external network. - - Markers: - - bgp - - ipv4 - - Preconditions: - - BGP setup configured - - TCP server running on CUDN VM - - TCP client on external network - - Steps: - 1. 
Establish TCP connection from external client to CUDN VM server - - Expected: - - TCP connection succeeds - """ assert is_tcp_connection(server=tcp_server_cudn_vm, client=tcp_client_external_network) @@ -48,24 +20,5 @@ def test_connectivity_is_preserved_during_cudn_vm_migration( tcp_server_cudn_vm, tcp_client_external_network, ): - """ - Test that TCP connectivity is preserved after CUDN VM migration. - - Markers: - - bgp - - ipv4 - - Preconditions: - - BGP setup configured - - TCP server running on CUDN VM - - TCP client on external network - - Steps: - 1. Migrate CUDN VM - 2. Establish TCP connection - - Expected: - - TCP connection succeeds after migration - """ migrate_vm_and_verify(vm=tcp_server_cudn_vm.vm) assert is_tcp_connection(server=tcp_server_cudn_vm, client=tcp_client_external_network) diff --git a/tests/network/flat_overlay/test_flat_overlay.py b/tests/network/flat_overlay/test_flat_overlay.py index 4980f132ec..3e14d46a33 100644 --- a/tests/network/flat_overlay/test_flat_overlay.py +++ b/tests/network/flat_overlay/test_flat_overlay.py @@ -1,10 +1,3 @@ -""" -Flat Overlay Network Connectivity Tests - -STP Reference: -# TODO: add STP -""" - import logging import pytest @@ -26,37 +19,11 @@ @pytest.mark.s390x class TestFlatOverlayConnectivity: - """ - Tests for flat overlay network connectivity between VMs. - - Markers: - - s390x - - ipv4 - - Preconditions: - - Multi-network policy usage enabled - - Flat overlay Network Attachment Definition created - - VM-A running and attached to a flat overlay network - - VM-B running and attached to a flat overlay network - """ - @pytest.mark.gating @pytest.mark.polarion("CNV-10158") # Not marked as `conformance`; requires NMState @pytest.mark.dependency(name="test_flat_overlay_basic_ping") def test_flat_overlay_basic_ping(self, vma_flat_overlay, vmb_flat_overlay_ip_address): - """ - Test that VMs on the same flat overlay network can communicate. 
- - Markers: - - gating - - Steps: - Execute ping from VM-A to VM-B - - Expected: - - Ping succeeds with 0% packet loss - """ assert_ping_successful( src_vm=vma_flat_overlay, dst_ip=vmb_flat_overlay_ip_address, @@ -71,21 +38,9 @@ def test_flat_overlay_separate_nads( vmb_flat_overlay_ip_address, vmd_flat_overlay_ip_address, ): - """ - Test that adding a second flat overlay network does not break existing connectivity. - - Preconditions: - - Second flat overlay NAD created - - VM-C running and attached to a second flat overlay network - - VM-D running and attached to a second flat overlay network - - Steps: - 1. Execute ping from VM-A to VM-B (original network) - 2. Execute ping from VM-C to VM-D (new network) - - Expected: - - Both ping commands succeed with 0% packet loss - """ + # This ping is needed even though it was tested in test_flat_overlay_basic_ping because an additional network + # (flat_overlay_vmc_vmd_nad) is now created. We want to make sure that the connectivity wasn't harmed by this + # addition. assert_ping_successful( src_vm=vma_flat_overlay, dst_ip=vmb_flat_overlay_ip_address, @@ -101,19 +56,6 @@ def test_flat_overlay_separate_nads_no_connectivity( vma_flat_overlay, vmd_flat_overlay_ip_address, ): - """ - [NEGATIVE] Test that VMs on separate flat overlay networks cannot communicate. - - Preconditions: - - VM-A attached to the first flat overlay network (NAD-1) - - VM-D attached to the second flat overlay network (NAD-2) - - Steps: - 1. Execute ping from VM-A to VM-D - - Expected: - - Ping fails with 100% packet loss - """ assert_no_ping( src_vm=vma_flat_overlay, dst_ip=vmd_flat_overlay_ip_address, @@ -127,21 +69,6 @@ def test_flat_overlay_connectivity_between_namespaces( vma_flat_overlay, vme_flat_overlay, ): - """ - Test that VMs in different namespaces can communicate via same-named NAD. 
- - Preconditions: - - NAD with identical name created in namespace-1 and namespace-2 - - VM-A running in namespace-1 attached to the NAD - - VM-E running in namespace-2 attached to the NAD - - Steps: - 1. Verify NAD names are identical in both namespaces - 2. Execute ping from VM-A to VM-E - - Expected: - - Ping succeeds with 0% packet loss - """ assert flat_overlay_vma_vmb_nad.name == flat_overlay_vme_nad.name, ( f"NAD names are not identical:\n first NAD's name: {flat_overlay_vma_vmb_nad.name}, " f"second NAD's name: {flat_overlay_vme_nad.name}" @@ -159,21 +86,6 @@ def test_flat_overlay_consistent_ip( ping_before_migration, migrated_vmc_flat_overlay, ): - """ - Test that VM retains its IP address after live migration. - - Preconditions: - - VM-C running with a flat overlay network IP address - - VM-D running on a flat overlay network - - Ping from VM-D to VM-C succeeded before migration - - VM-C live migrated to another node - - Steps: - 1. Execute ping from VM-D to VM-C's original IP address - - Expected: - - Ping succeeds with 0% packet loss - """ assert_ping_successful( src_vm=vmd_flat_overlay, dst_ip=vmc_flat_overlay_ip_address, @@ -182,19 +94,6 @@ def test_flat_overlay_consistent_ip( @pytest.mark.jumbo_frame class TestFlatOverlayJumboConnectivity: - """ - Tests for flat overlay network jumbo frame connectivity. - - Markers: - - jumbo_frame - - ipv4 - - Preconditions: - - Flat overlay NAD configured for jumbo frames - - VM-A running and attached to jumbo frame NAD - - VM-B running and attached to jumbo frame NAD - """ - @pytest.mark.polarion("CNV-10162") @pytest.mark.s390x def test_flat_l2_jumbo_frame_connectivity( @@ -204,18 +103,6 @@ def test_flat_l2_jumbo_frame_connectivity( vma_jumbo_flat_l2, vmb_jumbo_flat_l2, ): - """ - Test that VMs can communicate using jumbo frames on a flat overlay network. 
- - Markers: - - s390x - - Steps: - Execute ping from VM-A to VM-B with jumbo frame packet size - - Expected: - - Ping succeeds with 0% packet loss - """ assert_ping_successful( src_vm=vma_jumbo_flat_l2, packet_size=flat_l2_jumbo_frame_packet_size, diff --git a/tests/network/network_service/test_service_config_manifest.py b/tests/network/network_service/test_service_config_manifest.py index 3c5df8a49d..d636b080b3 100644 --- a/tests/network/network_service/test_service_config_manifest.py +++ b/tests/network/network_service/test_service_config_manifest.py @@ -1,12 +1,3 @@ -""" -Service Configuration via Manifest Tests - -Tests for service configuration using manifest-based approach. - -STP Reference: -TODO: add link -""" - import pytest from tests.network.network_service.libservice import SERVICE_IP_FAMILY_POLICY_SINGLE_STACK @@ -14,16 +5,6 @@ @pytest.mark.gating class TestServiceConfigurationViaManifest: - """ - Tests for configuring Kubernetes services via manifest. - - Markers: - - gating - - Preconditions: - - Running VM exposed with a service - """ - @pytest.mark.single_nic @pytest.mark.parametrize( "single_stack_service_ip_family, single_stack_service", @@ -39,24 +20,6 @@ def test_service_with_configured_ip_families( single_stack_service_ip_family, single_stack_service, ): - """ - Test that service is created with configured IP family. - - Markers: - - single_nic - - Parametrize: - - ip_family: [IPv4, IPv6] - - Preconditions: - - Single stack service created with specified IP family - - Steps: - 1. 
Get ipFamilies from service spec - - Expected: - - Service has single IP family matching configuration - """ ip_families_in_svc = running_vm_for_exposure.custom_service.instance.spec.ipFamilies assert len(ip_families_in_svc) == 1 and ip_families_in_svc[0] == single_stack_service_ip_family, ( @@ -72,21 +35,6 @@ def test_service_with_default_ip_family_policy( self, running_vm_for_exposure, ): - """ - Test that service is created with default SingleStack IP family policy. - - Markers: - - single_nic - - Preconditions: - - Service created with default IP family policy - - Steps: - 1. Get ipFamilyPolicy from service spec - - Expected: - - Service ipFamilyPolicy is SingleStack - """ ip_family_policy = running_vm_for_exposure.custom_service.instance.spec.ipFamilyPolicy assert ip_family_policy == SERVICE_IP_FAMILY_POLICY_SINGLE_STACK, ( f"Service created with wrong default ipfamilyPolicy on VM {running_vm_for_exposure.name}: " diff --git a/tests/network/network_service/test_service_config_virtctl.py b/tests/network/network_service/test_service_config_virtctl.py index 5c1dbb555c..c542a24b72 100644 --- a/tests/network/network_service/test_service_config_virtctl.py +++ b/tests/network/network_service/test_service_config_virtctl.py @@ -1,12 +1,3 @@ -""" -Service Configuration via virtctl Tests - -Tests for service configuration using virtctl expose command. - -STP Reference: -TODO: add link -""" - import pytest from tests.network.network_service.libservice import ( @@ -18,14 +9,6 @@ class TestServiceConfigurationViaVirtctl: - """ - Tests for configuring Kubernetes services via virtctl expose command. 
- - Preconditions: - - Running VM available for service exposure - - Dual-stack cluster configured - """ - @pytest.mark.parametrize( "virtctl_expose_service, expected_num_families_in_service, ip_family_policy", [ @@ -59,24 +42,6 @@ def test_virtctl_expose_services( dual_stack_cluster, ip_family_policy, ): - """ - Test that virtctl expose creates service with correct IP family policy. - - Markers: - - single_nic - - Parametrize: - - ip_family_policy: [SingleStack, PreferDualStack, RequireDualStack] - - Preconditions: - - Service created via virtctl expose with specified IP family policy - - Steps: - 1. Verify service IP family parameters - - Expected: - - Service has correct number of IP families and IP family policy - """ assert_svc_ip_params( svc=virtctl_expose_service, expected_num_families_in_service=expected_num_families_in_service, diff --git a/tests/storage/online_resize/test_online_resize.py b/tests/storage/online_resize/test_online_resize.py index b7fe9af0d1..c9a611fa46 100644 --- a/tests/storage/online_resize/test_online_resize.py +++ b/tests/storage/online_resize/test_online_resize.py @@ -1,7 +1,7 @@ # -*- coding: utf-8 -*- """ -Online Resize Tests - PVC Expansion While VM Running +Online resize (PVC expanded while VM running) """ import logging @@ -45,23 +45,7 @@ def test_sequential_disk_expand( rhel_vm_for_online_resize, running_rhel_vm, ): - """ - Test that a running VM's disk can be expanded multiple times sequentially. - - Markers: - - gating - - Preconditions: - - DataVolume with RHEL image - - VM using the DataVolume as boot disk - - VM is running - - Steps: - 1. 
Expand PVC by the smallest possible increment and wait for resize (repeat 6 times) - - Expected: - - All 6 resize operations complete successfully - """ + # Expand PVC and wait for resize 6 times for _ in range(6): with wait_for_resize(vm=rhel_vm_for_online_resize): expand_pvc(dv=rhel_dv_for_online_resize, size_change=SMALLEST_POSSIBLE_EXPAND) diff --git a/tests/virt/cluster/vm_lifecycle/test_restart.py b/tests/virt/cluster/vm_lifecycle/test_restart.py index e23fc4af65..8dd65ac1bf 100644 --- a/tests/virt/cluster/vm_lifecycle/test_restart.py +++ b/tests/virt/cluster/vm_lifecycle/test_restart.py @@ -1,5 +1,5 @@ """ -VM Lifecycle Tests - Restart Operations +Test VM restart """ import logging @@ -34,23 +34,6 @@ def vm_to_restart(unprivileged_client, namespace): @pytest.mark.s390x @pytest.mark.polarion("CNV-1497") def test_vm_restart(vm_to_restart): - """ - Test that a VM can complete a full restart cycle (restart, stop, start). - - Markers: - - arm64 - - Preconditions: - - Running Fedora virtual machine - - Steps: - 1. Restart the VM and wait for completion - 2. Stop the VM and wait for completion - 3. 
Start the VM and wait for it to become running - - Expected: - - VM is running and SSH accessible - """ LOGGER.info("VM is running: Restarting VM") vm_to_restart.restart(wait=True) LOGGER.info("VM is running: Stopping VM") diff --git a/tests/virt/cluster/vm_lifecycle/test_vm_run_strategy.py b/tests/virt/cluster/vm_lifecycle/test_vm_run_strategy.py index 5a461d5f1e..ac6f0713ab 100644 --- a/tests/virt/cluster/vm_lifecycle/test_vm_run_strategy.py +++ b/tests/virt/cluster/vm_lifecycle/test_vm_run_strategy.py @@ -1,12 +1,5 @@ -""" -VM Run Strategy Tests - -Run strategies logic can be found under -https://kubevirt.io/user-guide/#/creation/run-strategies?id=run-strategies - -STP Reference: -# TOOD: add link -""" +# Run strategies logic can be found under +# https://kubevirt.io/user-guide/#/creation/run-strategies?id=run-strategies import logging import re @@ -211,23 +204,6 @@ def shutdown_vm_guest_os(vm): @pytest.mark.s390x @pytest.mark.gating class TestRunStrategyBaseActions: - """ - Tests for VM run strategy basic lifecycle actions. - - Markers: - - arm64 - - s390x - - gating - - post_upgrade - - Parametrize: - - vm_action: [start, restart, stop] - - Preconditions: - - Running RHEL VM - - VM configured with run strategy - """ - @pytest.mark.parametrize( "vm_action", [ @@ -242,18 +218,6 @@ def test_run_strategy_policy( matrix_updated_vm_run_strategy, vm_action, ): - """ - Test that VM action behaves correctly according to run strategy policy. - - Parametrize: - - vm_action: [start, restart, stop] - - Steps: - 1. Perform the VM action (start/restart/stop) - - Expected: - - VM status and run strategy match expected policy - """ LOGGER.info(f"Verify VM with run strategy {matrix_updated_vm_run_strategy} and VM action {vm_action}") verify_vm_action( vm=lifecycle_vm, @@ -268,16 +232,6 @@ def test_run_strategy_policy( indirect=True, ) class TestRunStrategyAdvancedActions: - """ - Tests for advanced VM run strategy behaviors. 
- - Markers: - - post_upgrade - - Preconditions: - - Running RHEL VM - """ - @pytest.mark.polarion("CNV-5054") def test_run_strategy_shutdown( self, @@ -286,18 +240,6 @@ def test_run_strategy_shutdown( matrix_updated_vm_run_strategy, start_vm_if_not_running, ): - """ - Test that guest OS shutdown behaves correctly per run strategy. - - Preconditions: - - VM is running with specific run strategy - - Steps: - 1. Shutdown VM from guest OS - - Expected: - - VMI and virt-launcher pod reach expected status per run strategy - """ vmi = lifecycle_vm.vmi launcher_pod = vmi.virt_launcher_pod run_strategy = matrix_updated_vm_run_strategy @@ -336,22 +278,6 @@ def test_run_strategy_shutdown( def test_run_strategy_pause_unpause_vmi( self, lifecycle_vm, request_updated_vm_run_strategy, start_vm_if_not_running ): - """ - Test that VMI can be paused and unpaused. - - Parametrize: - - run_strategy: [Manual, Always] - - Preconditions: - - VM is running with run strategy - - Steps: - 1. Pause VMI - 2. Unpause VMI - - Expected: - - VM is Running after unpause - """ LOGGER.info(f"Verify VMI pause/un-pause with runStrategy: {request_updated_vm_run_strategy}") pause_unpause_vmi_and_verify_status(vm=lifecycle_vm) @@ -373,23 +299,4 @@ def test_run_strategy_pause_unpause_vmi( ) @pytest.mark.rwx_default_storage def test_run_strategy_migrate_vm(self, lifecycle_vm, request_updated_vm_run_strategy, start_vm_if_not_running): - """ - Test that VM can be migrated. - - Markers: - - rwx_default_storage - - Parametrize: - - run_strategy: [Manual, Always] - - Preconditions: - - VM is running with run strategy - - RWX storage available - - Steps: - 1. 
Migrate VM - - Expected: - - VM is Running and run strategy unchanged - """ migrate_validate_run_strategy_vm(vm=lifecycle_vm, run_strategy=request_updated_vm_run_strategy) From b069d4ce3252f52cc0cf54f7353e43d8e50f0a1a Mon Sep 17 00:00:00 2001 From: rnetser Date: Tue, 17 Feb 2026 11:07:12 +0200 Subject: [PATCH 11/21] tests: add unit tests for std_placeholder_stats script Add 26 unit tests covering all AST-based analysis functions and the directory scanner in std_placeholder_stats.py. Tests cover module-level, class-level, method-level, and function-level __test__ = False detection, as well as recursive directory scanning and edge cases. --- .flake8 | 1 + .../std_placeholder_stats/tests/__init__.py | 0 .../tests/test_std_placeholder_stats.py | 492 ++++++++++++++++++ 3 files changed, 493 insertions(+) create mode 100644 scripts/std_placeholder_stats/tests/__init__.py create mode 100644 scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py diff --git a/.flake8 b/.flake8 index 1672adefe5..4cc3b976cf 100644 --- a/.flake8 +++ b/.flake8 @@ -13,6 +13,7 @@ exclude = docs/*, .cache/* utilities/unittests/* + scripts/*/tests/* fcn_exclude_functions = Path, diff --git a/scripts/std_placeholder_stats/tests/__init__.py b/scripts/std_placeholder_stats/tests/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py b/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py new file mode 100644 index 0000000000..f0cdd8adda --- /dev/null +++ b/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py @@ -0,0 +1,492 @@ +"""Unit tests for STD Placeholder Stats Generator. + +Tests cover all public functions in std_placeholder_stats.py including +AST-based analysis functions and the directory scanner. 
+ +Generated using Claude cli +""" + +from __future__ import annotations + +import ast +from pathlib import Path + +import pytest + +from scripts.std_placeholder_stats.std_placeholder_stats import ( + class_has_test_false, + function_has_test_false, + get_test_methods_from_class, + method_has_test_false, + module_has_test_false, + scan_placeholder_tests, +) + +# --------------------------------------------------------------------------- +# Constants +# --------------------------------------------------------------------------- + +TEST_FALSE_MARKER = "__test__ = False" + +# --------------------------------------------------------------------------- +# Source code fragments for AST-based tests +# --------------------------------------------------------------------------- + +SOURCE_MODULE_TEST_FALSE = f"""\ +{TEST_FALSE_MARKER} + +class TestFoo: + def test_bar(self): + pass +""" + +SOURCE_NO_TEST_ASSIGNMENT = """\ +class TestFoo: + def test_bar(self): + pass +""" + +SOURCE_CLASS_TEST_FALSE = f"""\ +class TestFoo: + {TEST_FALSE_MARKER} + + def test_bar(self): + pass + + def test_baz(self): + pass +""" + +SOURCE_FUNCTION_TEST_FALSE = f"""\ +def test_standalone(): + pass + +test_standalone.{TEST_FALSE_MARKER} +""" + +SOURCE_FUNCTION_TEST_FALSE_DIFFERENT_NAME = f"""\ +def test_alpha(): + pass + +test_alpha.{TEST_FALSE_MARKER} + +def test_beta(): + pass +""" + +SOURCE_STANDALONE_FUNCTION = """\ +def test_standalone(): + pass +""" + +SOURCE_METHOD_TEST_FALSE = f"""\ +class TestFoo: + def test_alpha(self): + pass + + test_alpha.{TEST_FALSE_MARKER} + + def test_beta(self): + pass +""" + +SOURCE_TWO_METHODS = """\ +class TestFoo: + def test_alpha(self): + pass + + def test_beta(self): + pass +""" + +SOURCE_CLASS_WITH_MIXED_METHODS = f"""\ +class TestFoo: + {TEST_FALSE_MARKER} + + def __init__(self): + pass + + def helper_method(self): + pass + + def test_one(self): + pass + + def test_two(self): + pass + + def setup_method(self): + pass +""" + +SOURCE_CLASS_NO_TEST_METHODS = 
"""\ +class TestFoo: + def __init__(self): + pass + + def helper(self): + pass +""" + + +# --------------------------------------------------------------------------- +# Helper functions +# --------------------------------------------------------------------------- + + +def _get_first_class_node(source: str) -> ast.ClassDef: + """Parse source and return the first ClassDef node. + + Args: + source: Python source code containing a class definition. + + Returns: + The first ast.ClassDef found in the parsed source. + """ + tree = ast.parse(source=source) + for node in tree.body: + if isinstance(node, ast.ClassDef): + return node + raise ValueError("No class definition found in source") + + +def _create_test_file(directory: Path, filename: str, content: str) -> Path: + """Create a test file in the given directory. + + Args: + directory: Parent directory for the file. + filename: Name of the test file. + content: Python source content for the file. + + Returns: + Path to the created file. + """ + file_path = directory / filename + file_path.write_text(data=content, encoding="utf-8") + return file_path + + +# --------------------------------------------------------------------------- +# Fixtures +# --------------------------------------------------------------------------- + + +@pytest.fixture() +def tests_dir(tmp_path: Path) -> Path: + """Provide a temporary 'tests' directory for scan_placeholder_tests.""" + directory = tmp_path / "tests" + directory.mkdir() + return directory + + +# =========================================================================== +# Tests for module_has_test_false() +# =========================================================================== + + +class TestModuleHasTestFalse: + """Tests for the module_has_test_false() function.""" + + def test_returns_true_when_module_has_test_false(self) -> None: + """module_has_test_false() detects __test__ = False at module level.""" + tree = ast.parse(source=SOURCE_MODULE_TEST_FALSE) + assert 
module_has_test_false(module_tree=tree) is True + + def test_returns_false_when_no_test_assignment(self) -> None: + """module_has_test_false() returns False with no __test__ assignment.""" + tree = ast.parse(source=SOURCE_NO_TEST_ASSIGNMENT) + assert module_has_test_false(module_tree=tree) is False + + def test_ignores_class_level_test_false(self) -> None: + """module_has_test_false() ignores __test__ = False inside classes.""" + tree = ast.parse(source=SOURCE_CLASS_TEST_FALSE) + assert module_has_test_false(module_tree=tree) is False + + +# =========================================================================== +# Tests for class_has_test_false() +# =========================================================================== + + +class TestClassHasTestFalse: + """Tests for the class_has_test_false() function.""" + + def test_returns_true_when_class_has_test_false(self) -> None: + """class_has_test_false() detects __test__ = False in class body.""" + class_node = _get_first_class_node(source=SOURCE_CLASS_TEST_FALSE) + assert class_has_test_false(class_node=class_node) is True + + def test_returns_false_when_no_test_assignment(self) -> None: + """class_has_test_false() returns False with no __test__ assignment.""" + class_node = _get_first_class_node(source=SOURCE_NO_TEST_ASSIGNMENT) + assert class_has_test_false(class_node=class_node) is False + + def test_detects_test_false_in_class_with_mixed_methods(self) -> None: + """class_has_test_false() detects __test__ = False even with non-test methods present.""" + class_node = _get_first_class_node(source=SOURCE_CLASS_WITH_MIXED_METHODS) + assert class_has_test_false(class_node=class_node) is True + + +# =========================================================================== +# Tests for function_has_test_false() +# =========================================================================== + + +class TestFunctionHasTestFalse: + """Tests for the function_has_test_false() function.""" + + def 
test_returns_true_when_function_has_test_false(self) -> None: + """function_has_test_false() detects func.__test__ = False at module level.""" + tree = ast.parse(source=SOURCE_FUNCTION_TEST_FALSE) + assert function_has_test_false(module_tree=tree, function_name="test_standalone") is True + + def test_returns_false_for_non_matching_function_name(self) -> None: + """function_has_test_false() returns False for a different function name.""" + tree = ast.parse(source=SOURCE_FUNCTION_TEST_FALSE) + assert function_has_test_false(module_tree=tree, function_name="test_other") is False + + def test_returns_false_when_no_test_assignment_exists(self) -> None: + """function_has_test_false() returns False with no __test__ assignment.""" + tree = ast.parse(source=SOURCE_STANDALONE_FUNCTION) + assert function_has_test_false(module_tree=tree, function_name="test_standalone") is False + + def test_matches_correct_function_among_multiple(self) -> None: + """function_has_test_false() only matches the specific function name.""" + tree = ast.parse(source=SOURCE_FUNCTION_TEST_FALSE_DIFFERENT_NAME) + assert function_has_test_false(module_tree=tree, function_name="test_alpha") is True + assert function_has_test_false(module_tree=tree, function_name="test_beta") is False + + +# =========================================================================== +# Tests for method_has_test_false() +# =========================================================================== + + +class TestMethodHasTestFalse: + """Tests for the method_has_test_false() function.""" + + def test_returns_true_when_method_has_test_false(self) -> None: + """method_has_test_false() detects method.__test__ = False in class body.""" + class_node = _get_first_class_node(source=SOURCE_METHOD_TEST_FALSE) + assert method_has_test_false(class_node=class_node, method_name="test_alpha") is True + + def test_returns_false_for_non_matching_method_name(self) -> None: + """method_has_test_false() returns False for a different method 
name.""" + class_node = _get_first_class_node(source=SOURCE_METHOD_TEST_FALSE) + assert method_has_test_false(class_node=class_node, method_name="test_beta") is False + + def test_returns_false_when_no_test_assignment_exists(self) -> None: + """method_has_test_false() returns False with no __test__ assignment.""" + class_node = _get_first_class_node(source=SOURCE_TWO_METHODS) + assert method_has_test_false(class_node=class_node, method_name="test_alpha") is False + + +# =========================================================================== +# Tests for get_test_methods_from_class() +# =========================================================================== + + +class TestGetTestMethodsFromClass: + """Tests for the get_test_methods_from_class() function.""" + + def test_returns_test_methods_with_prefix(self) -> None: + """get_test_methods_from_class() returns test methods prefixed with ' - '.""" + class_node = _get_first_class_node(source=SOURCE_CLASS_TEST_FALSE) + result = get_test_methods_from_class(class_node=class_node) + assert result == [" - test_bar", " - test_baz"] + + def test_excludes_non_test_methods(self) -> None: + """get_test_methods_from_class() excludes helper methods, __init__, etc.""" + class_node = _get_first_class_node(source=SOURCE_CLASS_WITH_MIXED_METHODS) + result = get_test_methods_from_class(class_node=class_node) + assert result == [" - test_one", " - test_two"] + assert " - __init__" not in result + assert " - helper_method" not in result + assert " - setup_method" not in result + + def test_returns_empty_list_for_no_test_methods(self) -> None: + """get_test_methods_from_class() returns empty list when no test_ methods.""" + class_node = _get_first_class_node(source=SOURCE_CLASS_NO_TEST_METHODS) + result = get_test_methods_from_class(class_node=class_node) + assert result == [] + + +# =========================================================================== +# Tests for scan_placeholder_tests() +# 
=========================================================================== + + +class TestScanPlaceholderTests: + """Tests for the scan_placeholder_tests() function.""" + + def test_module_level_test_false_reports_all_classes_and_functions(self, tests_dir: Path) -> None: + """scan_placeholder_tests() reports all classes and functions when module has __test__ = False.""" + _create_test_file( + directory=tests_dir, + filename="test_example.py", + content=( + f"{TEST_FALSE_MARKER}\n\n" + "class TestFoo:\n" + " def test_bar(self):\n" + " pass\n\n" + "class TestBaz:\n" + " def test_qux(self):\n" + " pass\n" + ), + ) + + result = scan_placeholder_tests(tests_dir=tests_dir) + + assert "tests/test_example.py" in result + entries = result["tests/test_example.py"] + assert "tests/test_example.py::TestFoo" in entries + assert " - test_bar" in entries + assert "tests/test_example.py::TestBaz" in entries + assert " - test_qux" in entries + + def test_module_level_test_false_reports_standalone_functions(self, tests_dir: Path) -> None: + """scan_placeholder_tests() reports standalone test functions under module-level __test__ = False.""" + _create_test_file( + directory=tests_dir, + filename="test_funcs.py", + content=(f"{TEST_FALSE_MARKER}\n\ndef test_alpha():\n pass\n\ndef test_beta():\n pass\n"), + ) + + result = scan_placeholder_tests(tests_dir=tests_dir) + + assert "tests/test_funcs.py" in result + entries = result["tests/test_funcs.py"] + assert "tests/test_funcs.py" in entries + assert " - test_alpha" in entries + assert " - test_beta" in entries + + def test_class_level_test_false_reports_class_and_methods(self, tests_dir: Path) -> None: + """scan_placeholder_tests() reports class and its methods when class has __test__ = False.""" + _create_test_file( + directory=tests_dir, + filename="test_cls.py", + content=( + "class TestFoo:\n" + f" {TEST_FALSE_MARKER}\n\n" + " def test_bar(self):\n" + " pass\n\n" + " def test_baz(self):\n" + " pass\n" + ), + ) + + result = 
scan_placeholder_tests(tests_dir=tests_dir) + + assert "tests/test_cls.py" in result + entries = result["tests/test_cls.py"] + assert "tests/test_cls.py::TestFoo" in entries + assert " - test_bar" in entries + assert " - test_baz" in entries + + def test_method_level_test_false_reports_only_that_method(self, tests_dir: Path) -> None: + """scan_placeholder_tests() reports only the specific method with __test__ = False.""" + _create_test_file( + directory=tests_dir, + filename="test_meth.py", + content=( + "class TestFoo:\n" + " def test_alpha(self):\n" + " pass\n\n" + f" test_alpha.{TEST_FALSE_MARKER}\n\n" + " def test_beta(self):\n" + " pass\n" + ), + ) + + result = scan_placeholder_tests(tests_dir=tests_dir) + + assert "tests/test_meth.py" in result + entries = result["tests/test_meth.py"] + assert "tests/test_meth.py::TestFoo" in entries + assert " - test_alpha" in entries + assert " - test_beta" not in entries + + def test_function_level_test_false_reports_only_that_function(self, tests_dir: Path) -> None: + """scan_placeholder_tests() reports only the function with func.__test__ = False.""" + _create_test_file( + directory=tests_dir, + filename="test_func.py", + content=(f"def test_alpha():\n pass\n\ntest_alpha.{TEST_FALSE_MARKER}\n\ndef test_beta():\n pass\n"), + ) + + result = scan_placeholder_tests(tests_dir=tests_dir) + + assert "tests/test_func.py" in result + entries = result["tests/test_func.py"] + assert " - test_alpha" in entries + assert " - test_beta" not in entries + + def test_skips_files_without_test_false(self, tests_dir: Path) -> None: + """scan_placeholder_tests() skips files that do not contain __test__ = False.""" + _create_test_file( + directory=tests_dir, + filename="test_normal.py", + content=("class TestFoo:\n def test_bar(self):\n assert True\n"), + ) + + result = scan_placeholder_tests(tests_dir=tests_dir) + + assert result == {} + + def test_handles_syntax_errors_gracefully(self, tests_dir: Path) -> None: + """scan_placeholder_tests() 
logs warning and continues on syntax errors.""" + _create_test_file( + directory=tests_dir, + filename="test_broken.py", + content=f"{TEST_FALSE_MARKER}\n\ndef this is not valid python:\n", + ) + _create_test_file( + directory=tests_dir, + filename="test_valid.py", + content=(f"{TEST_FALSE_MARKER}\n\nclass TestGood:\n def test_pass(self):\n pass\n"), + ) + + result = scan_placeholder_tests(tests_dir=tests_dir) + + # Broken file should be skipped, valid file should be included + assert "tests/test_broken.py" not in result + assert "tests/test_valid.py" in result + + def test_returns_empty_dict_when_no_test_files(self, tests_dir: Path) -> None: + """scan_placeholder_tests() returns empty dict when no test files exist.""" + result = scan_placeholder_tests(tests_dir=tests_dir) + + assert result == {} + + def test_scans_subdirectories_recursively(self, tests_dir: Path) -> None: + """scan_placeholder_tests() finds test files in nested subdirectories.""" + sub_dir = tests_dir / "network" / "ipv6" + sub_dir.mkdir(parents=True) + _create_test_file( + directory=sub_dir, + filename="test_deep.py", + content=(f"{TEST_FALSE_MARKER}\n\nclass TestDeep:\n def test_nested(self):\n pass\n"), + ) + + result = scan_placeholder_tests(tests_dir=tests_dir) + + assert result, "Expected at least one entry from nested test file" + found_keys = list(result.keys()) + assert any("test_deep.py" in key for key in found_keys) + + def test_ignores_non_test_files(self, tests_dir: Path) -> None: + """scan_placeholder_tests() only processes files matching test_*.py pattern.""" + _create_test_file( + directory=tests_dir, + filename="conftest.py", + content=f"{TEST_FALSE_MARKER}\n\ndef fixture():\n pass\n", + ) + _create_test_file( + directory=tests_dir, + filename="helper.py", + content=f"{TEST_FALSE_MARKER}\n\ndef helper():\n pass\n", + ) + + result = scan_placeholder_tests(tests_dir=tests_dir) + + assert result == {} From 843958ff8506ef4c13cd5ea2c69c47047f5b5003 Mon Sep 17 00:00:00 2001 From: 
rnetser Date: Tue, 17 Feb 2026 20:01:17 +0200 Subject: [PATCH 12/21] fix: separate data from presentation in std_placeholder_stats - get_test_methods_from_class returns raw method names instead of formatted strings; callers handle display formatting - Replace lstrip("- ") with removeprefix("- ") to avoid stripping character sets instead of exact prefix - Update unit tests accordingly --- .../std_placeholder_stats/std_placeholder_stats.py | 12 ++++++------ .../tests/test_std_placeholder_stats.py | 14 +++++++------- 2 files changed, 13 insertions(+), 13 deletions(-) diff --git a/scripts/std_placeholder_stats/std_placeholder_stats.py b/scripts/std_placeholder_stats/std_placeholder_stats.py index 0edcf5904b..bb41a440c8 100644 --- a/scripts/std_placeholder_stats/std_placeholder_stats.py +++ b/scripts/std_placeholder_stats/std_placeholder_stats.py @@ -139,16 +139,16 @@ def test_bar(self): def get_test_methods_from_class(class_node: ast.ClassDef) -> list[str]: - """Extract formatted test method names from a class definition. + """Extract test method names from a class definition. Args: class_node: AST class definition node Returns: - List of formatted test method names (prefixed with " - ") + List of test method names. 
""" return [ - f" - {method.name}" + method.name for method in class_node.body if isinstance(method, ast.FunctionDef) and method.name.startswith("test_") ] @@ -188,7 +188,7 @@ def scan_placeholder_tests(tests_dir: Path) -> dict[str, list[str]]: placeholder_files.setdefault(relative_path, []).append(f"{relative_path}::{node.name}") test_methods = get_test_methods_from_class(class_node=node) if test_methods: - placeholder_files[relative_path].extend(test_methods) + placeholder_files[relative_path].extend(f" - {method}" for method in test_methods) elif isinstance(node, ast.FunctionDef) and node.name.startswith("test_"): # For standalone functions, add module path first if not already added @@ -205,7 +205,7 @@ def scan_placeholder_tests(tests_dir: Path) -> dict[str, list[str]]: placeholder_files.setdefault(relative_path, []).append(f"{relative_path}::{node.name}") test_methods = get_test_methods_from_class(class_node=node) if test_methods: - placeholder_files[relative_path].extend(test_methods) + placeholder_files[relative_path].extend(f" - {method}" for method in test_methods) else: # Check each method for method.__test__ = False in class body method_placeholders: list[str] = [] @@ -276,7 +276,7 @@ def output_json(placeholder_files: dict[str, list[str]]) -> None: tests: list[str] = [] for entry in entries: if entry.startswith(" - "): - tests.append(entry.strip().lstrip("- ")) + tests.append(entry.strip().removeprefix("- ")) total_tests += 1 if tests: tests_by_file[file_path] = tests diff --git a/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py b/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py index f0cdd8adda..d948e6377d 100644 --- a/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py +++ b/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py @@ -289,20 +289,20 @@ def test_returns_false_when_no_test_assignment_exists(self) -> None: class TestGetTestMethodsFromClass: """Tests for the 
get_test_methods_from_class() function.""" - def test_returns_test_methods_with_prefix(self) -> None: - """get_test_methods_from_class() returns test methods prefixed with ' - '.""" + def test_returns_raw_test_method_names(self) -> None: + """get_test_methods_from_class() returns raw test method names.""" class_node = _get_first_class_node(source=SOURCE_CLASS_TEST_FALSE) result = get_test_methods_from_class(class_node=class_node) - assert result == [" - test_bar", " - test_baz"] + assert result == ["test_bar", "test_baz"] def test_excludes_non_test_methods(self) -> None: """get_test_methods_from_class() excludes helper methods, __init__, etc.""" class_node = _get_first_class_node(source=SOURCE_CLASS_WITH_MIXED_METHODS) result = get_test_methods_from_class(class_node=class_node) - assert result == [" - test_one", " - test_two"] - assert " - __init__" not in result - assert " - helper_method" not in result - assert " - setup_method" not in result + assert result == ["test_one", "test_two"] + assert "__init__" not in result + assert "helper_method" not in result + assert "setup_method" not in result def test_returns_empty_list_for_no_test_methods(self) -> None: """get_test_methods_from_class() returns empty list when no test_ methods.""" From bac78d4393d2601a5c72ac89e1e7f60659f7ea34 Mon Sep 17 00:00:00 2001 From: rnetser Date: Tue, 17 Feb 2026 20:04:52 +0200 Subject: [PATCH 13/21] docs: address reviewer feedback on STD documentation - Add verification step to Phase 2 workflow - Change approval authority to "qe tech lead" - Rework test independence guidelines with incremental marker - Clarify single expected behavior allows multiple assertions - Remove unnecessary pass statements from examples - Update checklist to reference __test__ = False pattern --- docs/SOFTWARE_TEST_DESCRIPTION.md | 56 +++++++++++++++++++++---------- 1 file changed, 38 insertions(+), 18 deletions(-) diff --git a/docs/SOFTWARE_TEST_DESCRIPTION.md b/docs/SOFTWARE_TEST_DESCRIPTION.md index 
961a4d9885..743688770a 100644 --- a/docs/SOFTWARE_TEST_DESCRIPTION.md +++ b/docs/SOFTWARE_TEST_DESCRIPTION.md @@ -32,7 +32,7 @@ This project follows a **two-phase development workflow** that separates test de - Add the complete STD docstring (Preconditions/Steps/Expected) - Include a link to the approved STP (Software Test Plan) in the **module docstring** (top of the test file) - Add applicable pytest markers (architecture markers, etc.) - - Leave the test body empty or with a `pass` statement + - Add `__test__ = False` on placeholder test(s). For a single test, add `.__test__ = False` 2. **Submit PR for review**: - The PR contains only the test descriptions (no automation code) @@ -50,13 +50,13 @@ This project follows a **two-phase development workflow** that separates test de - Create any required fixtures - Implement helper functions as needed - Remove `__test__ = False` from implemented tests - - If needed, update the test description. This change must be approved by the team's tech lead. + - If needed, update the test description. This change must be approved by the team's QE tech lead. 2. **Submit PR for review**: - Reviewers verify the implementation matches the approved design - Focus is on code quality, correctness, and adherence to the STD -3. **Approval and merge**: +3. **Approval, verification and merge**: - Once implementation is verified, merge the automation ### Benefits of This Workflow @@ -139,7 +139,7 @@ def test_isolated_vms_cannot_communicate(): """ [NEGATIVE] Test that VMs on separate networks cannot ping each other. """ - pass + test_isolated_vms_cannot_communicate.__test__ = False ``` @@ -159,8 +159,9 @@ When specific pytest markers are required, list them explicitly.
**Key Principles:** - Each test should verify **ONE thing** - **Tests must be independent** - no test should depend on another test's outcome -- If a test needs a precondition that could be another test's outcome, use a **fixture** to set it up - Related tests are grouped in a **test class** + - If a test needs a precondition that could be another test's outcome, place the tests under the class in the required order + - Mention handling of early failures (i.e. "fail fast") - **Shared preconditions** go in the class docstring - **Test-specific preconditions** (if any) go in the test docstring @@ -196,7 +197,6 @@ class Test: Expected: - """ - pass ``` ### Test-Level Template @@ -224,7 +224,7 @@ def test_(): Expected: - """ - pass + test_.__test__ = False ``` @@ -262,15 +262,40 @@ test_.__test__ = False - Good: `- Running Fedora virtual machine` - Bad: `- Running Fedora VM (vm_to_restart fixture)` -5. **Single Expected per Test**: One assertion = clear pass/fail. +5. **Single Expected Behavior per Test**: One assertion means a clear pass/fail. - Good: `Expected: - Ping succeeds with 0% packet loss` - Bad: `Expected: - Ping succeeds - VM remains running - No errors logged` + - There may be **exceptions**, where multiple assertions are required to verify a **single** behavior. + - Example: `Expected: - VM reports valid IP address. Expected: - User can access VM via SSH` 6. **Tests Must Be Independent**: Tests should not depend on other tests. - - If a test needs a precondition that is another test's outcome, use a fixture + - Dependencies between tests mean that one test depends on the result of a previous test.
+ - If testing of a feature requires dependencies between tests, make sure that: - They are grouped under a class with shared preconditions - The `@pytest.mark.incremental` marker is used to mark tests as dependent on previous test results - Good: Fixture `migrated_vm` sets up a VM that has been migrated - Bad: `test_migrate_vm` must run before `test_ssh_after_migration` + Example: + + ```python + import pytest + + @pytest.mark.incremental + class TestVMSomeFeature: + + def test_vm_is_created(self): + """ + Test that a VM with feature 1 can be created + """ + + def test_vm_migration(self): # will be marked as xfailed if test_vm_is_created failed + """ + Test that a VM with feature 1 can be migrated + """ + + ``` + ### Common Patterns in This Project | Pattern | Description | Example | @@ -289,7 +314,7 @@ test_.__test__ = False - [ ] Each test has: description, Preconditions, Steps, Expected - [ ] Each test verifies ONE thing with ONE Expected - [ ] Negative tests marked with `[NEGATIVE]` -- [ ] Test methods contain only `pass` +- [ ] Test methods/classes/tests contain only `__test__ = False` #### Phase 2: Test Automation PR @@ -335,7 +360,6 @@ class TestSnapshotRestore: Expected: - File content equals "data-before-snapshot" """ - pass def test_removes_post_snapshot_file(self): """ @@ -347,7 +371,6 @@ class TestSnapshotRestore: Expected: - File /data/after.txt does NOT exist """ - pass ``` @@ -374,7 +397,6 @@ class TestVMLifecycle: Expected: - VM is "Running" """ - pass def test_vm_stop_completes_successfully(self): """ @@ -386,7 +408,6 @@ class TestVMLifecycle: Expected: - VM is "Stopped" """ - pass def test_vm_start_after_stop(self): """ @@ -401,7 +422,6 @@ class TestVMLifecycle: Expected: - VM is "Running" and SSH accessible """ - pass ``` --- @@ -431,7 +451,7 @@ def test_flat_overlay_ping_between_vms(): Expected: - Ping succeeds with 0% packet loss """ - pass + test_flat_overlay_ping_between_vms.__test__ = False ``` @@ -462,7 +482,7 @@ def
test_isolated_vms_cannot_communicate(): Expected: - Ping fails with 100% packet loss """ - pass + test_isolated_vms_cannot_communicate.__test__ = False ``` @@ -495,7 +515,7 @@ def test_online_disk_resize(): Expected: - Disk size inside VM is greater than original size """ - pass + test_online_disk_resize.__test__ = False ``` From c9dfaf6ab20df879cb2601f6b2830c55b0fdff20 Mon Sep 17 00:00:00 2001 From: rnetser Date: Thu, 19 Feb 2026 11:06:02 +0200 Subject: [PATCH 14/21] Address CodeRabbit review comments for STD placeholder stats - Extract _append_class_entries() helper to deduplicate class-entry logic - Fix total_files in JSON output to match actual file count - Remove redundant negative assertions in tests - Add text language specifier to docs code block - Replace hard tab with spaces in Example 5 markers --- docs/SOFTWARE_TEST_DESCRIPTION.md | 4 +- .../std_placeholder_stats.py | 42 +++++++++++++++---- .../tests/test_std_placeholder_stats.py | 3 -- 3 files changed, 35 insertions(+), 14 deletions(-) diff --git a/docs/SOFTWARE_TEST_DESCRIPTION.md b/docs/SOFTWARE_TEST_DESCRIPTION.md index 743688770a..6292df72d9 100644 --- a/docs/SOFTWARE_TEST_DESCRIPTION.md +++ b/docs/SOFTWARE_TEST_DESCRIPTION.md @@ -93,7 +93,7 @@ Use clear, natural language that maps directly to assertions, for example: | `Ping fails` / `Operation fails` | `assert` raises exception or returns failure | **Example:** -``` +```text Expected: - VM is Running - File content equals "data-before-snapshot" @@ -498,7 +498,7 @@ def test_online_disk_resize(): Test that a running VM's disk can be expanded. 
Markers: - - gating + - gating Parametrize: - storage_class: [ocs-storagecluster-ceph-rbd, hostpath-csi] diff --git a/scripts/std_placeholder_stats/std_placeholder_stats.py b/scripts/std_placeholder_stats/std_placeholder_stats.py index bb41a440c8..ad7d2d4f20 100644 --- a/scripts/std_placeholder_stats/std_placeholder_stats.py +++ b/scripts/std_placeholder_stats/std_placeholder_stats.py @@ -154,6 +154,28 @@ def get_test_methods_from_class(class_node: ast.ClassDef) -> list[str]: ] +def _append_class_entries( + placeholder_files: dict[str, list[str]], + relative_path: str, + class_node: ast.ClassDef, +) -> None: + """Append a class and its test methods to the placeholder files mapping. + + Adds the class entry in ``path::ClassName`` format and indented method + entries for every ``test_*`` method found in the class body. + + Args: + placeholder_files: Mapping of file paths to placeholder test entries + (modified in place). + relative_path: File path relative to the project root. + class_node: AST class definition node to extract entries from. + """ + placeholder_files.setdefault(relative_path, []).append(f"{relative_path}::{class_node.name}") + test_methods = get_test_methods_from_class(class_node=class_node) + if test_methods: + placeholder_files[relative_path].extend(f" - {method}" for method in test_methods) + + def scan_placeholder_tests(tests_dir: Path) -> dict[str, list[str]]: """Scan tests directory for STD placeholder tests. 
@@ -185,10 +207,11 @@ def scan_placeholder_tests(tests_dir: Path) -> dict[str, list[str]]: for node in tree.body: if isinstance(node, ast.ClassDef): - placeholder_files.setdefault(relative_path, []).append(f"{relative_path}::{node.name}") - test_methods = get_test_methods_from_class(class_node=node) - if test_methods: - placeholder_files[relative_path].extend(f" - {method}" for method in test_methods) + _append_class_entries( + placeholder_files=placeholder_files, + relative_path=relative_path, + class_node=node, + ) elif isinstance(node, ast.FunctionDef) and node.name.startswith("test_"): # For standalone functions, add module path first if not already added @@ -202,10 +225,11 @@ def scan_placeholder_tests(tests_dir: Path) -> dict[str, list[str]]: if isinstance(node, ast.ClassDef): if class_has_test_false(class_node=node): # Class-level __test__ = False: report class and all methods - placeholder_files.setdefault(relative_path, []).append(f"{relative_path}::{node.name}") - test_methods = get_test_methods_from_class(class_node=node) - if test_methods: - placeholder_files[relative_path].extend(f" - {method}" for method in test_methods) + _append_class_entries( + placeholder_files=placeholder_files, + relative_path=relative_path, + class_node=node, + ) else: # Check each method for method.__test__ = False in class body method_placeholders: list[str] = [] @@ -283,7 +307,7 @@ def output_json(placeholder_files: dict[str, list[str]]) -> None: output: dict[str, Any] = { "total_tests": total_tests, - "total_files": len(placeholder_files), + "total_files": len(tests_by_file), "files": tests_by_file, } diff --git a/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py b/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py index d948e6377d..ffc8047b15 100644 --- a/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py +++ b/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py @@ -300,9 +300,6 @@ def 
test_excludes_non_test_methods(self) -> None: class_node = _get_first_class_node(source=SOURCE_CLASS_WITH_MIXED_METHODS) result = get_test_methods_from_class(class_node=class_node) assert result == ["test_one", "test_two"] - assert "__init__" not in result - assert "helper_method" not in result - assert "setup_method" not in result def test_returns_empty_list_for_no_test_methods(self) -> None: """get_test_methods_from_class() returns empty list when no test_ methods.""" From b18c9d2b1b80bcb30ca58f5e9f942903db8ee9ef Mon Sep 17 00:00:00 2001 From: rnetser Date: Thu, 19 Feb 2026 20:28:53 +0200 Subject: [PATCH 15/21] Address CodeRabbit review: add output tests and assertion messages - Add tests for output_json structure and empty input - Add test for output_text total_files counting - Fix total_files in output_text to count only files with tests - Add descriptive failure messages to all membership assertions --- .../std_placeholder_stats.py | 6 +- .../tests/test_std_placeholder_stats.py | 146 +++++++++++++++--- 2 files changed, 129 insertions(+), 23 deletions(-) diff --git a/scripts/std_placeholder_stats/std_placeholder_stats.py b/scripts/std_placeholder_stats/std_placeholder_stats.py index ad7d2d4f20..32110715ca 100644 --- a/scripts/std_placeholder_stats/std_placeholder_stats.py +++ b/scripts/std_placeholder_stats/std_placeholder_stats.py @@ -264,7 +264,7 @@ def output_text(placeholder_files: dict[str, list[str]]) -> None: return total_tests = 0 - total_files = len(placeholder_files) + total_files = 0 output_lines: list[str] = [] output_lines.append(separator(symbol_="=")) @@ -273,10 +273,14 @@ def output_text(placeholder_files: dict[str, list[str]]) -> None: output_lines.append("") for entries in placeholder_files.values(): + has_tests = False for entry in entries: output_lines.append(entry) if entry.startswith(" - "): total_tests += 1 + has_tests = True + if has_tests: + total_files += 1 output_lines.append("") output_lines.append(separator(symbol_="-")) diff --git 
a/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py b/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py index ffc8047b15..5315887852 100644 --- a/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py +++ b/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py @@ -9,6 +9,8 @@ from __future__ import annotations import ast +import json +import logging from pathlib import Path import pytest @@ -19,6 +21,8 @@ get_test_methods_from_class, method_has_test_false, module_has_test_false, + output_json, + output_text, scan_placeholder_tests, ) @@ -334,12 +338,18 @@ def test_module_level_test_false_reports_all_classes_and_functions(self, tests_d result = scan_placeholder_tests(tests_dir=tests_dir) - assert "tests/test_example.py" in result + assert "tests/test_example.py" in result, ( + f"Expected key 'tests/test_example.py' in result, got keys: {list(result.keys())}" + ) entries = result["tests/test_example.py"] - assert "tests/test_example.py::TestFoo" in entries - assert " - test_bar" in entries - assert "tests/test_example.py::TestBaz" in entries - assert " - test_qux" in entries + assert "tests/test_example.py::TestFoo" in entries, ( + f"Expected 'tests/test_example.py::TestFoo' in entries, got: {entries}" + ) + assert " - test_bar" in entries, f"Expected ' - test_bar' in entries, got: {entries}" + assert "tests/test_example.py::TestBaz" in entries, ( + f"Expected 'tests/test_example.py::TestBaz' in entries, got: {entries}" + ) + assert " - test_qux" in entries, f"Expected ' - test_qux' in entries, got: {entries}" def test_module_level_test_false_reports_standalone_functions(self, tests_dir: Path) -> None: """scan_placeholder_tests() reports standalone test functions under module-level __test__ = False.""" @@ -351,11 +361,13 @@ def test_module_level_test_false_reports_standalone_functions(self, tests_dir: P result = scan_placeholder_tests(tests_dir=tests_dir) - assert "tests/test_funcs.py" in result + assert 
"tests/test_funcs.py" in result, ( + f"Expected key 'tests/test_funcs.py' in result, got keys: {list(result.keys())}" + ) entries = result["tests/test_funcs.py"] - assert "tests/test_funcs.py" in entries - assert " - test_alpha" in entries - assert " - test_beta" in entries + assert "tests/test_funcs.py" in entries, f"Expected 'tests/test_funcs.py' in entries, got: {entries}" + assert " - test_alpha" in entries, f"Expected ' - test_alpha' in entries, got: {entries}" + assert " - test_beta" in entries, f"Expected ' - test_beta' in entries, got: {entries}" def test_class_level_test_false_reports_class_and_methods(self, tests_dir: Path) -> None: """scan_placeholder_tests() reports class and its methods when class has __test__ = False.""" @@ -374,11 +386,15 @@ def test_class_level_test_false_reports_class_and_methods(self, tests_dir: Path) result = scan_placeholder_tests(tests_dir=tests_dir) - assert "tests/test_cls.py" in result + assert "tests/test_cls.py" in result, ( + f"Expected key 'tests/test_cls.py' in result, got keys: {list(result.keys())}" + ) entries = result["tests/test_cls.py"] - assert "tests/test_cls.py::TestFoo" in entries - assert " - test_bar" in entries - assert " - test_baz" in entries + assert "tests/test_cls.py::TestFoo" in entries, ( + f"Expected 'tests/test_cls.py::TestFoo' in entries, got: {entries}" + ) + assert " - test_bar" in entries, f"Expected ' - test_bar' in entries, got: {entries}" + assert " - test_baz" in entries, f"Expected ' - test_baz' in entries, got: {entries}" def test_method_level_test_false_reports_only_that_method(self, tests_dir: Path) -> None: """scan_placeholder_tests() reports only the specific method with __test__ = False.""" @@ -397,11 +413,15 @@ def test_method_level_test_false_reports_only_that_method(self, tests_dir: Path) result = scan_placeholder_tests(tests_dir=tests_dir) - assert "tests/test_meth.py" in result + assert "tests/test_meth.py" in result, ( + f"Expected key 'tests/test_meth.py' in result, got keys: 
{list(result.keys())}" + ) entries = result["tests/test_meth.py"] - assert "tests/test_meth.py::TestFoo" in entries - assert " - test_alpha" in entries - assert " - test_beta" not in entries + assert "tests/test_meth.py::TestFoo" in entries, ( + f"Expected 'tests/test_meth.py::TestFoo' in entries, got: {entries}" + ) + assert " - test_alpha" in entries, f"Expected ' - test_alpha' in entries, got: {entries}" + assert " - test_beta" not in entries, f"Unexpected ' - test_beta' found in entries: {entries}" def test_function_level_test_false_reports_only_that_function(self, tests_dir: Path) -> None: """scan_placeholder_tests() reports only the function with func.__test__ = False.""" @@ -413,10 +433,12 @@ def test_function_level_test_false_reports_only_that_function(self, tests_dir: P result = scan_placeholder_tests(tests_dir=tests_dir) - assert "tests/test_func.py" in result + assert "tests/test_func.py" in result, ( + f"Expected key 'tests/test_func.py' in result, got keys: {list(result.keys())}" + ) entries = result["tests/test_func.py"] - assert " - test_alpha" in entries - assert " - test_beta" not in entries + assert " - test_alpha" in entries, f"Expected ' - test_alpha' in entries, got: {entries}" + assert " - test_beta" not in entries, f"Unexpected ' - test_beta' found in entries: {entries}" def test_skips_files_without_test_false(self, tests_dir: Path) -> None: """scan_placeholder_tests() skips files that do not contain __test__ = False.""" @@ -446,8 +468,12 @@ def test_handles_syntax_errors_gracefully(self, tests_dir: Path) -> None: result = scan_placeholder_tests(tests_dir=tests_dir) # Broken file should be skipped, valid file should be included - assert "tests/test_broken.py" not in result - assert "tests/test_valid.py" in result + assert "tests/test_broken.py" not in result, ( + f"Unexpected key 'tests/test_broken.py' in result: {list(result.keys())}" + ) + assert "tests/test_valid.py" in result, ( + f"Expected key 'tests/test_valid.py' in result, got keys: 
{list(result.keys())}" + ) def test_returns_empty_dict_when_no_test_files(self, tests_dir: Path) -> None: """scan_placeholder_tests() returns empty dict when no test files exist.""" @@ -487,3 +513,79 @@ def test_ignores_non_test_files(self, tests_dir: Path) -> None: result = scan_placeholder_tests(tests_dir=tests_dir) assert result == {} + + +# =========================================================================== +# Tests for output_text() and output_json() +# =========================================================================== + + +class TestOutputFunctions: + """Tests for output_text() and output_json() functions.""" + + SAMPLE_PLACEHOLDER_FILES: dict[str, list[str]] = { + "tests/test_foo.py": [ + "tests/test_foo.py::TestFoo", + " - test_bar", + " - test_baz", + ], + "tests/test_standalone.py": [ + "tests/test_standalone.py", + " - test_alpha", + ], + } + + def test_output_json_structure(self, capsys: pytest.CaptureFixture[str]) -> None: + """output_json() produces valid JSON with correct totals and file entries.""" + output_json(placeholder_files=self.SAMPLE_PLACEHOLDER_FILES) + captured = capsys.readouterr() + result = json.loads(captured.out) + + assert result["total_tests"] == 3, f"Expected 3 total tests, got {result['total_tests']}" + assert result["total_files"] == 2, f"Expected 2 total files, got {result['total_files']}" + assert "tests/test_foo.py" in result["files"], ( + f"Missing tests/test_foo.py in files, got keys: {list(result['files'].keys())}" + ) + assert result["files"]["tests/test_foo.py"] == ["test_bar", "test_baz"], ( + f"Expected ['test_bar', 'test_baz'], got {result['files']['tests/test_foo.py']}" + ) + assert result["files"]["tests/test_standalone.py"] == ["test_alpha"], ( + f"Expected ['test_alpha'], got {result['files']['tests/test_standalone.py']}" + ) + + def test_output_json_empty_input(self, capsys: pytest.CaptureFixture[str]) -> None: + """output_json() produces correct JSON for empty input.""" + 
output_json(placeholder_files={}) + captured = capsys.readouterr() + result = json.loads(captured.out) + + assert result["total_tests"] == 0, f"Expected 0 total tests, got {result['total_tests']}" + assert result["total_files"] == 0, f"Expected 0 total files, got {result['total_files']}" + assert result["files"] == {}, f"Expected empty files dict, got: {result['files']}" + + def test_output_text_counts_only_files_with_tests(self) -> None: + """output_text() counts only files that have test entries in the total.""" + placeholder_files: dict[str, list[str]] = { + "tests/test_foo.py": [ + "tests/test_foo.py::TestFoo", + " - test_bar", + ], + "tests/test_empty.py": [ + "tests/test_empty.py::TestEmpty", + ], + } + handler = logging.Handler() + messages: list[str] = [] + handler.emit = lambda record: messages.append(record.getMessage()) + logger = logging.getLogger(name="scripts.std_placeholder_stats.std_placeholder_stats") + logger.addHandler(hdlr=handler) + try: + output_text(placeholder_files=placeholder_files) + finally: + logger.removeHandler(hdlr=handler) + + summary_line = [line for line in messages if "Total:" in line] + assert summary_line, f"Expected 'Total:' summary line in log output, got: {messages}" + assert "1 placeholder tests in 1 files" in summary_line[0], ( + f"Expected '1 placeholder tests in 1 files', got: {summary_line[0]}" + ) From 9bc38f55471eb698bd30947313c93ac3fcb5f7cf Mon Sep 17 00:00:00 2001 From: rnetser Date: Mon, 23 Feb 2026 15:51:55 +0200 Subject: [PATCH 16/21] fix: address CodeRabbit review comments on STD placeholder stats - Remove redundant int() cast on floor division result - Add comment explaining intentional SyntaxError catch skip - Use bool flag for standalone function header tracking - Add descriptive failure messages to test assertions - Add ClassVar annotation to mutable class attribute - Replace manual logging handler with pytest caplog fixture --- .../std_placeholder_stats.py | 12 ++++---- 
.../tests/test_std_placeholder_stats.py | 29 ++++++++++--------- 2 files changed, 21 insertions(+), 20 deletions(-) diff --git a/scripts/std_placeholder_stats/std_placeholder_stats.py b/scripts/std_placeholder_stats/std_placeholder_stats.py index 32110715ca..87c58034dc 100644 --- a/scripts/std_placeholder_stats/std_placeholder_stats.py +++ b/scripts/std_placeholder_stats/std_placeholder_stats.py @@ -44,7 +44,7 @@ def separator(symbol_: str, val: str | None = None) -> str: if not val: return symbol_ * terminal_width - sepa = int((terminal_width - len(val) - 2) // 2) + sepa = (terminal_width - len(val) - 2) // 2 return f"{symbol_ * sepa} {val} {symbol_ * sepa}" @@ -195,6 +195,7 @@ def scan_placeholder_tests(tests_dir: Path) -> dict[str, list[str]]: try: tree = ast.parse(source=file_content) except SyntaxError as exc: + # Intentionally skip unparseable files; warn so the user can investigate LOGGER.warning(f"Failed to parse {test_file}: {exc}") continue @@ -221,6 +222,7 @@ def scan_placeholder_tests(tests_dir: Path) -> dict[str, list[str]]: placeholder_files[relative_path].append(f" - {node.name}") else: # Check individual classes and functions for __test__ = False + has_standalone_header = False for node in tree.body: if isinstance(node, ast.ClassDef): if class_has_test_false(class_node=node): @@ -243,11 +245,9 @@ def scan_placeholder_tests(tests_dir: Path) -> dict[str, list[str]]: elif isinstance(node, ast.FunctionDef) and node.name.startswith("test_"): if function_has_test_false(module_tree=tree, function_name=node.name): - # For standalone functions, add module path first if not already added - if relative_path not in placeholder_files: - placeholder_files[relative_path] = [relative_path] - elif relative_path not in placeholder_files[relative_path]: - placeholder_files[relative_path].insert(0, relative_path) + if not has_standalone_header: + placeholder_files.setdefault(relative_path, []).insert(0, relative_path) + has_standalone_header = True 
placeholder_files[relative_path].append(f" - {node.name}") return placeholder_files diff --git a/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py b/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py index 5315887852..8d369014b6 100644 --- a/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py +++ b/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py @@ -12,6 +12,7 @@ import json import logging from pathlib import Path +from typing import ClassVar import pytest @@ -297,19 +298,19 @@ def test_returns_raw_test_method_names(self) -> None: """get_test_methods_from_class() returns raw test method names.""" class_node = _get_first_class_node(source=SOURCE_CLASS_TEST_FALSE) result = get_test_methods_from_class(class_node=class_node) - assert result == ["test_bar", "test_baz"] + assert result == ["test_bar", "test_baz"], f"Expected ['test_bar', 'test_baz'], got: {result}" def test_excludes_non_test_methods(self) -> None: """get_test_methods_from_class() excludes helper methods, __init__, etc.""" class_node = _get_first_class_node(source=SOURCE_CLASS_WITH_MIXED_METHODS) result = get_test_methods_from_class(class_node=class_node) - assert result == ["test_one", "test_two"] + assert result == ["test_one", "test_two"], f"Expected ['test_one', 'test_two'], got: {result}" def test_returns_empty_list_for_no_test_methods(self) -> None: """get_test_methods_from_class() returns empty list when no test_ methods.""" class_node = _get_first_class_node(source=SOURCE_CLASS_NO_TEST_METHODS) result = get_test_methods_from_class(class_node=class_node) - assert result == [] + assert result == [], f"Expected empty list, got: {result}" # =========================================================================== @@ -495,7 +496,9 @@ def test_scans_subdirectories_recursively(self, tests_dir: Path) -> None: assert result, "Expected at least one entry from nested test file" found_keys = list(result.keys()) - assert any("test_deep.py" in key 
for key in found_keys) + assert any("test_deep.py" in key for key in found_keys), ( + f"Expected a key containing 'test_deep.py' in results, got keys: {found_keys}" + ) def test_ignores_non_test_files(self, tests_dir: Path) -> None: """scan_placeholder_tests() only processes files matching test_*.py pattern.""" @@ -523,7 +526,7 @@ def test_ignores_non_test_files(self, tests_dir: Path) -> None: class TestOutputFunctions: """Tests for output_text() and output_json() functions.""" - SAMPLE_PLACEHOLDER_FILES: dict[str, list[str]] = { + SAMPLE_PLACEHOLDER_FILES: ClassVar[dict[str, list[str]]] = { "tests/test_foo.py": [ "tests/test_foo.py::TestFoo", " - test_bar", @@ -563,7 +566,7 @@ def test_output_json_empty_input(self, capsys: pytest.CaptureFixture[str]) -> No assert result["total_files"] == 0, f"Expected 0 total files, got {result['total_files']}" assert result["files"] == {}, f"Expected empty files dict, got: {result['files']}" - def test_output_text_counts_only_files_with_tests(self) -> None: + def test_output_text_counts_only_files_with_tests(self, caplog: pytest.LogCaptureFixture) -> None: """output_text() counts only files that have test entries in the total.""" placeholder_files: dict[str, list[str]] = { "tests/test_foo.py": [ @@ -574,18 +577,16 @@ def test_output_text_counts_only_files_with_tests(self) -> None: "tests/test_empty.py::TestEmpty", ], } - handler = logging.Handler() - messages: list[str] = [] - handler.emit = lambda record: messages.append(record.getMessage()) logger = logging.getLogger(name="scripts.std_placeholder_stats.std_placeholder_stats") - logger.addHandler(hdlr=handler) + logger.propagate = True try: - output_text(placeholder_files=placeholder_files) + with caplog.at_level(logging.INFO, logger="scripts.std_placeholder_stats.std_placeholder_stats"): + output_text(placeholder_files=placeholder_files) finally: - logger.removeHandler(hdlr=handler) + logger.propagate = False - summary_line = [line for line in messages if "Total:" in line] - 
assert summary_line, f"Expected 'Total:' summary line in log output, got: {messages}" + summary_line = [line for line in caplog.messages if "Total:" in line] + assert summary_line, f"Expected 'Total:' summary line in log output, got: {caplog.messages}" assert "1 placeholder tests in 1 files" in summary_line[0], ( f"Expected '1 placeholder tests in 1 files', got: {summary_line[0]}" ) From cb112ccc907e490673c703fe1ea49ffbca4f6ba8 Mon Sep 17 00:00:00 2001 From: rnetser Date: Thu, 26 Feb 2026 11:28:06 +0200 Subject: [PATCH 17/21] refactor: split std_placeholder_stats to separate branch --- .pre-commit-config.yaml | 1 + scripts/std_placeholder_stats/__init__.py | 0 .../std_placeholder_stats.py | 394 ------------ .../std_placeholder_stats/tests/__init__.py | 0 .../tests/test_std_placeholder_stats.py | 592 ------------------ 5 files changed, 1 insertion(+), 986 deletions(-) delete mode 100644 scripts/std_placeholder_stats/__init__.py delete mode 100644 scripts/std_placeholder_stats/std_placeholder_stats.py delete mode 100644 scripts/std_placeholder_stats/tests/__init__.py delete mode 100644 scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 492e22215e..d0a0e229ce 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -46,6 +46,7 @@ repos: - id: flake8 language_version: python3.14 args: [--config=.flake8] + exclude: "(utilities/unittests/|utilities/junit_ai_utils\\.py|scripts/.*/tests/)" additional_dependencies: [ "git+https://github.com/RedHatQE/flake8-plugins.git@v1.0.0", diff --git a/scripts/std_placeholder_stats/__init__.py b/scripts/std_placeholder_stats/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/scripts/std_placeholder_stats/std_placeholder_stats.py b/scripts/std_placeholder_stats/std_placeholder_stats.py deleted file mode 100644 index 87c58034dc..0000000000 --- a/scripts/std_placeholder_stats/std_placeholder_stats.py +++ /dev/null @@ 
-1,394 +0,0 @@ -#!/usr/bin/env -S uv run python -"""STD Placeholder Tests Statistics Generator. - -Scans the tests directory for STD (Standard Test Design) placeholder tests that -are not yet implemented. These are tests with `__test__ = False` that contain -only docstrings describing expected behavior, without actual implementation code. - -Output: - - text: Human-readable summary to stdout (default) - - json: Machine-readable JSON output - -Usage: - uv run python scripts/std_placeholder_stats/std_placeholder_stats.py - uv run python scripts/std_placeholder_stats/std_placeholder_stats.py --tests-dir tests - uv run python scripts/std_placeholder_stats/std_placeholder_stats.py --output-format json - -Generated using Claude cli -""" - -from __future__ import annotations - -import ast -import json -from argparse import ArgumentParser, Namespace, RawDescriptionHelpFormatter -from pathlib import Path -from typing import Any - -from simple_logger.logger import get_logger - -LOGGER = get_logger(name=__name__) - - -def separator(symbol_: str, val: str | None = None) -> str: - """Create a separator line for terminal output. - - Args: - symbol_: The character to use for the separator. - val: Optional text to center in the separator. - - Returns: - Formatted separator string. - """ - terminal_width = 120 # Fixed width for consistent output - if not val: - return symbol_ * terminal_width - - sepa = (terminal_width - len(val) - 2) // 2 - return f"{symbol_ * sepa} {val} {symbol_ * sepa}" - - -def module_has_test_false(module_tree: ast.Module) -> bool: - """Check if a module has `__test__ = False` assignment at top level. 
- - Args: - module_tree: AST module tree - - Returns: - True if the module has __test__ = False at top level, False otherwise - """ - for node in module_tree.body: - if isinstance(node, ast.Assign): - for target in node.targets: - if isinstance(target, ast.Name) and target.id == "__test__": - if isinstance(node.value, ast.Constant) and node.value.value is False: - return True - return False - - -def class_has_test_false(class_node: ast.ClassDef) -> bool: - """Check if a class has `__test__ = False` assignment in its body. - - Args: - class_node: AST class definition node - - Returns: - True if the class has __test__ = False, False otherwise - """ - for stmt in class_node.body: - if isinstance(stmt, ast.Assign): - for target in stmt.targets: - if isinstance(target, ast.Name) and target.id == "__test__": - if isinstance(stmt.value, ast.Constant) and stmt.value.value is False: - return True - return False - - -def function_has_test_false(module_tree: ast.Module, function_name: str) -> bool: - """Check if a standalone function has `function_name.__test__ = False` assignment. - - Args: - module_tree: AST module tree - function_name: Name of the function to check - - Returns: - True if the function has __test__ = False assignment, False otherwise - """ - for node in module_tree.body: - if isinstance(node, ast.Assign): - for target in node.targets: - if isinstance(target, ast.Attribute): - if ( - isinstance(target.value, ast.Name) - and target.value.id == function_name - and target.attr == "__test__" - ): - if isinstance(node.value, ast.Constant) and node.value.value is False: - return True - return False - - -def method_has_test_false(class_node: ast.ClassDef, method_name: str) -> bool: - """Check if a method has `method_name.__test__ = False` assignment in the class body. 
- - This detects patterns like: - class TestFoo: - def test_bar(self): - pass - test_bar.__test__ = False - - Args: - class_node: AST class definition node - method_name: Name of the method to check - - Returns: - True if the method has __test__ = False assignment in the class body, False otherwise - """ - for stmt in class_node.body: - if isinstance(stmt, ast.Assign): - for target in stmt.targets: - if isinstance(target, ast.Attribute): - if ( - isinstance(target.value, ast.Name) - and target.value.id == method_name - and target.attr == "__test__" - ): - if isinstance(stmt.value, ast.Constant) and stmt.value.value is False: - return True - return False - - -def get_test_methods_from_class(class_node: ast.ClassDef) -> list[str]: - """Extract test method names from a class definition. - - Args: - class_node: AST class definition node - - Returns: - List of test method names. - """ - return [ - method.name - for method in class_node.body - if isinstance(method, ast.FunctionDef) and method.name.startswith("test_") - ] - - -def _append_class_entries( - placeholder_files: dict[str, list[str]], - relative_path: str, - class_node: ast.ClassDef, -) -> None: - """Append a class and its test methods to the placeholder files mapping. - - Adds the class entry in ``path::ClassName`` format and indented method - entries for every ``test_*`` method found in the class body. - - Args: - placeholder_files: Mapping of file paths to placeholder test entries - (modified in place). - relative_path: File path relative to the project root. - class_node: AST class definition node to extract entries from. 
- """ - placeholder_files.setdefault(relative_path, []).append(f"{relative_path}::{class_node.name}") - test_methods = get_test_methods_from_class(class_node=class_node) - if test_methods: - placeholder_files[relative_path].extend(f" - {method}" for method in test_methods) - - -def scan_placeholder_tests(tests_dir: Path) -> dict[str, list[str]]: - """Scan tests directory for STD placeholder tests. - - Args: - tests_dir: Path to the tests directory to scan. - - Returns: - Dictionary mapping file paths to lists of placeholder test entries. - """ - placeholder_files: dict[str, list[str]] = {} - - for test_file in tests_dir.rglob("test_*.py"): - file_content = test_file.read_text(encoding="utf-8") - if "__test__ = False" not in file_content: - continue - - try: - tree = ast.parse(source=file_content) - except SyntaxError as exc: - # Intentionally skip unparseable files; warn so the user can investigate - LOGGER.warning(f"Failed to parse {test_file}: {exc}") - continue - - relative_path = str(test_file.relative_to(tests_dir.parent)) - - # Check if module has __test__ = False at top level - if module_has_test_false(module_tree=tree): - # Report ALL test classes and functions in this module - module_has_standalone_tests = False - - for node in tree.body: - if isinstance(node, ast.ClassDef): - _append_class_entries( - placeholder_files=placeholder_files, - relative_path=relative_path, - class_node=node, - ) - - elif isinstance(node, ast.FunctionDef) and node.name.startswith("test_"): - # For standalone functions, add module path first if not already added - if not module_has_standalone_tests: - placeholder_files.setdefault(relative_path, []).append(relative_path) - module_has_standalone_tests = True - placeholder_files[relative_path].append(f" - {node.name}") - else: - # Check individual classes and functions for __test__ = False - has_standalone_header = False - for node in tree.body: - if isinstance(node, ast.ClassDef): - if class_has_test_false(class_node=node): - # 
Class-level __test__ = False: report class and all methods - _append_class_entries( - placeholder_files=placeholder_files, - relative_path=relative_path, - class_node=node, - ) - else: - # Check each method for method.__test__ = False in class body - method_placeholders: list[str] = [] - for method in node.body: - if isinstance(method, ast.FunctionDef) and method.name.startswith("test_"): - if method_has_test_false(class_node=node, method_name=method.name): - method_placeholders.append(f" - {method.name}") - if method_placeholders: - placeholder_files.setdefault(relative_path, []).append(f"{relative_path}::{node.name}") - placeholder_files[relative_path].extend(method_placeholders) - - elif isinstance(node, ast.FunctionDef) and node.name.startswith("test_"): - if function_has_test_false(module_tree=tree, function_name=node.name): - if not has_standalone_header: - placeholder_files.setdefault(relative_path, []).insert(0, relative_path) - has_standalone_header = True - placeholder_files[relative_path].append(f" - {node.name}") - - return placeholder_files - - -def output_text(placeholder_files: dict[str, list[str]]) -> None: - """Output results in human-readable text format. - - Args: - placeholder_files: Dictionary mapping file paths to placeholder test entries. 
- """ - if not placeholder_files: - LOGGER.info("No STD placeholder tests found.") - return - - total_tests = 0 - total_files = 0 - - output_lines: list[str] = [] - output_lines.append(separator(symbol_="=")) - output_lines.append("STD PLACEHOLDER TESTS (not yet implemented)") - output_lines.append(separator(symbol_="=")) - output_lines.append("") - - for entries in placeholder_files.values(): - has_tests = False - for entry in entries: - output_lines.append(entry) - if entry.startswith(" - "): - total_tests += 1 - has_tests = True - if has_tests: - total_files += 1 - - output_lines.append("") - output_lines.append(separator(symbol_="-")) - output_lines.append(f"Total: {total_tests} placeholder tests in {total_files} files") - output_lines.append(separator(symbol_="=")) - - for line in output_lines: - LOGGER.info(line) - - -def output_json(placeholder_files: dict[str, list[str]]) -> None: - """Output results in JSON format. - - Args: - placeholder_files: Dictionary mapping file paths to placeholder test entries. - """ - total_tests = 0 - tests_by_file: dict[str, list[str]] = {} - - for file_path, entries in placeholder_files.items(): - tests: list[str] = [] - for entry in entries: - if entry.startswith(" - "): - tests.append(entry.strip().removeprefix("- ")) - total_tests += 1 - if tests: - tests_by_file[file_path] = tests - - output: dict[str, Any] = { - "total_tests": total_tests, - "total_files": len(tests_by_file), - "files": tests_by_file, - } - - print(json.dumps(output, indent=2)) - - -def parse_args() -> Namespace: - """Parse command line arguments. - - Returns: - Parsed arguments namespace. - """ - parser = ArgumentParser( - description="STD Placeholder Tests Statistics Generator", - formatter_class=RawDescriptionHelpFormatter, - epilog=""" -Scans the tests directory for STD (Standard Test Design) placeholder tests. 
-These are tests marked with `__test__ = False` that contain only docstrings -describing expected behavior, without actual implementation code. - -Examples: - # Scan default tests directory with text output - uv run python scripts/std_placeholder_stats/std_placeholder_stats.py - - # Scan custom tests directory - uv run python scripts/std_placeholder_stats/std_placeholder_stats.py --tests-dir my_tests - - # Output as JSON - uv run python scripts/std_placeholder_stats/std_placeholder_stats.py --output-format json - """, - ) - parser.add_argument( - "--tests-dir", - type=Path, - default=Path("tests"), - help="The tests directory to scan (default: tests)", - ) - parser.add_argument( - "--output-format", - choices=["text", "json"], - default="text", - help="Output format: text (default) or json", - ) - return parser.parse_args() - - -def main() -> int: - """Main entry point for the STD placeholder stats generator. - - Returns: - Exit code: 0 on success, 1 on error. - """ - args = parse_args() - - tests_dir = args.tests_dir - if not tests_dir.is_absolute(): - tests_dir = Path.cwd() / tests_dir - - if not tests_dir.exists(): - LOGGER.error(f"Tests directory does not exist: {tests_dir}") - return 1 - - if not tests_dir.is_dir(): - LOGGER.error(f"Path is not a directory: {tests_dir}") - return 1 - - LOGGER.info(f"Scanning tests directory: {tests_dir}") - - placeholder_files = scan_placeholder_tests(tests_dir=tests_dir) - - if args.output_format == "json": - output_json(placeholder_files=placeholder_files) - else: - output_text(placeholder_files=placeholder_files) - - return 0 - - -if __name__ == "__main__": - raise SystemExit(main()) diff --git a/scripts/std_placeholder_stats/tests/__init__.py b/scripts/std_placeholder_stats/tests/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py b/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py deleted file mode 100644 index 
8d369014b6..0000000000 --- a/scripts/std_placeholder_stats/tests/test_std_placeholder_stats.py +++ /dev/null @@ -1,592 +0,0 @@ -"""Unit tests for STD Placeholder Stats Generator. - -Tests cover all public functions in std_placeholder_stats.py including -AST-based analysis functions and the directory scanner. - -Generated using Claude cli -""" - -from __future__ import annotations - -import ast -import json -import logging -from pathlib import Path -from typing import ClassVar - -import pytest - -from scripts.std_placeholder_stats.std_placeholder_stats import ( - class_has_test_false, - function_has_test_false, - get_test_methods_from_class, - method_has_test_false, - module_has_test_false, - output_json, - output_text, - scan_placeholder_tests, -) - -# --------------------------------------------------------------------------- -# Constants -# --------------------------------------------------------------------------- - -TEST_FALSE_MARKER = "__test__ = False" - -# --------------------------------------------------------------------------- -# Source code fragments for AST-based tests -# --------------------------------------------------------------------------- - -SOURCE_MODULE_TEST_FALSE = f"""\ -{TEST_FALSE_MARKER} - -class TestFoo: - def test_bar(self): - pass -""" - -SOURCE_NO_TEST_ASSIGNMENT = """\ -class TestFoo: - def test_bar(self): - pass -""" - -SOURCE_CLASS_TEST_FALSE = f"""\ -class TestFoo: - {TEST_FALSE_MARKER} - - def test_bar(self): - pass - - def test_baz(self): - pass -""" - -SOURCE_FUNCTION_TEST_FALSE = f"""\ -def test_standalone(): - pass - -test_standalone.{TEST_FALSE_MARKER} -""" - -SOURCE_FUNCTION_TEST_FALSE_DIFFERENT_NAME = f"""\ -def test_alpha(): - pass - -test_alpha.{TEST_FALSE_MARKER} - -def test_beta(): - pass -""" - -SOURCE_STANDALONE_FUNCTION = """\ -def test_standalone(): - pass -""" - -SOURCE_METHOD_TEST_FALSE = f"""\ -class TestFoo: - def test_alpha(self): - pass - - test_alpha.{TEST_FALSE_MARKER} - - def test_beta(self): - pass -""" 
- -SOURCE_TWO_METHODS = """\ -class TestFoo: - def test_alpha(self): - pass - - def test_beta(self): - pass -""" - -SOURCE_CLASS_WITH_MIXED_METHODS = f"""\ -class TestFoo: - {TEST_FALSE_MARKER} - - def __init__(self): - pass - - def helper_method(self): - pass - - def test_one(self): - pass - - def test_two(self): - pass - - def setup_method(self): - pass -""" - -SOURCE_CLASS_NO_TEST_METHODS = """\ -class TestFoo: - def __init__(self): - pass - - def helper(self): - pass -""" - - -# --------------------------------------------------------------------------- -# Helper functions -# --------------------------------------------------------------------------- - - -def _get_first_class_node(source: str) -> ast.ClassDef: - """Parse source and return the first ClassDef node. - - Args: - source: Python source code containing a class definition. - - Returns: - The first ast.ClassDef found in the parsed source. - """ - tree = ast.parse(source=source) - for node in tree.body: - if isinstance(node, ast.ClassDef): - return node - raise ValueError("No class definition found in source") - - -def _create_test_file(directory: Path, filename: str, content: str) -> Path: - """Create a test file in the given directory. - - Args: - directory: Parent directory for the file. - filename: Name of the test file. - content: Python source content for the file. - - Returns: - Path to the created file. 
- """ - file_path = directory / filename - file_path.write_text(data=content, encoding="utf-8") - return file_path - - -# --------------------------------------------------------------------------- -# Fixtures -# --------------------------------------------------------------------------- - - -@pytest.fixture() -def tests_dir(tmp_path: Path) -> Path: - """Provide a temporary 'tests' directory for scan_placeholder_tests.""" - directory = tmp_path / "tests" - directory.mkdir() - return directory - - -# =========================================================================== -# Tests for module_has_test_false() -# =========================================================================== - - -class TestModuleHasTestFalse: - """Tests for the module_has_test_false() function.""" - - def test_returns_true_when_module_has_test_false(self) -> None: - """module_has_test_false() detects __test__ = False at module level.""" - tree = ast.parse(source=SOURCE_MODULE_TEST_FALSE) - assert module_has_test_false(module_tree=tree) is True - - def test_returns_false_when_no_test_assignment(self) -> None: - """module_has_test_false() returns False with no __test__ assignment.""" - tree = ast.parse(source=SOURCE_NO_TEST_ASSIGNMENT) - assert module_has_test_false(module_tree=tree) is False - - def test_ignores_class_level_test_false(self) -> None: - """module_has_test_false() ignores __test__ = False inside classes.""" - tree = ast.parse(source=SOURCE_CLASS_TEST_FALSE) - assert module_has_test_false(module_tree=tree) is False - - -# =========================================================================== -# Tests for class_has_test_false() -# =========================================================================== - - -class TestClassHasTestFalse: - """Tests for the class_has_test_false() function.""" - - def test_returns_true_when_class_has_test_false(self) -> None: - """class_has_test_false() detects __test__ = False in class body.""" - class_node = 
_get_first_class_node(source=SOURCE_CLASS_TEST_FALSE) - assert class_has_test_false(class_node=class_node) is True - - def test_returns_false_when_no_test_assignment(self) -> None: - """class_has_test_false() returns False with no __test__ assignment.""" - class_node = _get_first_class_node(source=SOURCE_NO_TEST_ASSIGNMENT) - assert class_has_test_false(class_node=class_node) is False - - def test_detects_test_false_in_class_with_mixed_methods(self) -> None: - """class_has_test_false() detects __test__ = False even with non-test methods present.""" - class_node = _get_first_class_node(source=SOURCE_CLASS_WITH_MIXED_METHODS) - assert class_has_test_false(class_node=class_node) is True - - -# =========================================================================== -# Tests for function_has_test_false() -# =========================================================================== - - -class TestFunctionHasTestFalse: - """Tests for the function_has_test_false() function.""" - - def test_returns_true_when_function_has_test_false(self) -> None: - """function_has_test_false() detects func.__test__ = False at module level.""" - tree = ast.parse(source=SOURCE_FUNCTION_TEST_FALSE) - assert function_has_test_false(module_tree=tree, function_name="test_standalone") is True - - def test_returns_false_for_non_matching_function_name(self) -> None: - """function_has_test_false() returns False for a different function name.""" - tree = ast.parse(source=SOURCE_FUNCTION_TEST_FALSE) - assert function_has_test_false(module_tree=tree, function_name="test_other") is False - - def test_returns_false_when_no_test_assignment_exists(self) -> None: - """function_has_test_false() returns False with no __test__ assignment.""" - tree = ast.parse(source=SOURCE_STANDALONE_FUNCTION) - assert function_has_test_false(module_tree=tree, function_name="test_standalone") is False - - def test_matches_correct_function_among_multiple(self) -> None: - """function_has_test_false() only matches the 
specific function name.""" - tree = ast.parse(source=SOURCE_FUNCTION_TEST_FALSE_DIFFERENT_NAME) - assert function_has_test_false(module_tree=tree, function_name="test_alpha") is True - assert function_has_test_false(module_tree=tree, function_name="test_beta") is False - - -# =========================================================================== -# Tests for method_has_test_false() -# =========================================================================== - - -class TestMethodHasTestFalse: - """Tests for the method_has_test_false() function.""" - - def test_returns_true_when_method_has_test_false(self) -> None: - """method_has_test_false() detects method.__test__ = False in class body.""" - class_node = _get_first_class_node(source=SOURCE_METHOD_TEST_FALSE) - assert method_has_test_false(class_node=class_node, method_name="test_alpha") is True - - def test_returns_false_for_non_matching_method_name(self) -> None: - """method_has_test_false() returns False for a different method name.""" - class_node = _get_first_class_node(source=SOURCE_METHOD_TEST_FALSE) - assert method_has_test_false(class_node=class_node, method_name="test_beta") is False - - def test_returns_false_when_no_test_assignment_exists(self) -> None: - """method_has_test_false() returns False with no __test__ assignment.""" - class_node = _get_first_class_node(source=SOURCE_TWO_METHODS) - assert method_has_test_false(class_node=class_node, method_name="test_alpha") is False - - -# =========================================================================== -# Tests for get_test_methods_from_class() -# =========================================================================== - - -class TestGetTestMethodsFromClass: - """Tests for the get_test_methods_from_class() function.""" - - def test_returns_raw_test_method_names(self) -> None: - """get_test_methods_from_class() returns raw test method names.""" - class_node = _get_first_class_node(source=SOURCE_CLASS_TEST_FALSE) - result = 
get_test_methods_from_class(class_node=class_node) - assert result == ["test_bar", "test_baz"], f"Expected ['test_bar', 'test_baz'], got: {result}" - - def test_excludes_non_test_methods(self) -> None: - """get_test_methods_from_class() excludes helper methods, __init__, etc.""" - class_node = _get_first_class_node(source=SOURCE_CLASS_WITH_MIXED_METHODS) - result = get_test_methods_from_class(class_node=class_node) - assert result == ["test_one", "test_two"], f"Expected ['test_one', 'test_two'], got: {result}" - - def test_returns_empty_list_for_no_test_methods(self) -> None: - """get_test_methods_from_class() returns empty list when no test_ methods.""" - class_node = _get_first_class_node(source=SOURCE_CLASS_NO_TEST_METHODS) - result = get_test_methods_from_class(class_node=class_node) - assert result == [], f"Expected empty list, got: {result}" - - -# =========================================================================== -# Tests for scan_placeholder_tests() -# =========================================================================== - - -class TestScanPlaceholderTests: - """Tests for the scan_placeholder_tests() function.""" - - def test_module_level_test_false_reports_all_classes_and_functions(self, tests_dir: Path) -> None: - """scan_placeholder_tests() reports all classes and functions when module has __test__ = False.""" - _create_test_file( - directory=tests_dir, - filename="test_example.py", - content=( - f"{TEST_FALSE_MARKER}\n\n" - "class TestFoo:\n" - " def test_bar(self):\n" - " pass\n\n" - "class TestBaz:\n" - " def test_qux(self):\n" - " pass\n" - ), - ) - - result = scan_placeholder_tests(tests_dir=tests_dir) - - assert "tests/test_example.py" in result, ( - f"Expected key 'tests/test_example.py' in result, got keys: {list(result.keys())}" - ) - entries = result["tests/test_example.py"] - assert "tests/test_example.py::TestFoo" in entries, ( - f"Expected 'tests/test_example.py::TestFoo' in entries, got: {entries}" - ) - assert " - test_bar" 
in entries, f"Expected ' - test_bar' in entries, got: {entries}" - assert "tests/test_example.py::TestBaz" in entries, ( - f"Expected 'tests/test_example.py::TestBaz' in entries, got: {entries}" - ) - assert " - test_qux" in entries, f"Expected ' - test_qux' in entries, got: {entries}" - - def test_module_level_test_false_reports_standalone_functions(self, tests_dir: Path) -> None: - """scan_placeholder_tests() reports standalone test functions under module-level __test__ = False.""" - _create_test_file( - directory=tests_dir, - filename="test_funcs.py", - content=(f"{TEST_FALSE_MARKER}\n\ndef test_alpha():\n pass\n\ndef test_beta():\n pass\n"), - ) - - result = scan_placeholder_tests(tests_dir=tests_dir) - - assert "tests/test_funcs.py" in result, ( - f"Expected key 'tests/test_funcs.py' in result, got keys: {list(result.keys())}" - ) - entries = result["tests/test_funcs.py"] - assert "tests/test_funcs.py" in entries, f"Expected 'tests/test_funcs.py' in entries, got: {entries}" - assert " - test_alpha" in entries, f"Expected ' - test_alpha' in entries, got: {entries}" - assert " - test_beta" in entries, f"Expected ' - test_beta' in entries, got: {entries}" - - def test_class_level_test_false_reports_class_and_methods(self, tests_dir: Path) -> None: - """scan_placeholder_tests() reports class and its methods when class has __test__ = False.""" - _create_test_file( - directory=tests_dir, - filename="test_cls.py", - content=( - "class TestFoo:\n" - f" {TEST_FALSE_MARKER}\n\n" - " def test_bar(self):\n" - " pass\n\n" - " def test_baz(self):\n" - " pass\n" - ), - ) - - result = scan_placeholder_tests(tests_dir=tests_dir) - - assert "tests/test_cls.py" in result, ( - f"Expected key 'tests/test_cls.py' in result, got keys: {list(result.keys())}" - ) - entries = result["tests/test_cls.py"] - assert "tests/test_cls.py::TestFoo" in entries, ( - f"Expected 'tests/test_cls.py::TestFoo' in entries, got: {entries}" - ) - assert " - test_bar" in entries, f"Expected ' - test_bar' 
in entries, got: {entries}" - assert " - test_baz" in entries, f"Expected ' - test_baz' in entries, got: {entries}" - - def test_method_level_test_false_reports_only_that_method(self, tests_dir: Path) -> None: - """scan_placeholder_tests() reports only the specific method with __test__ = False.""" - _create_test_file( - directory=tests_dir, - filename="test_meth.py", - content=( - "class TestFoo:\n" - " def test_alpha(self):\n" - " pass\n\n" - f" test_alpha.{TEST_FALSE_MARKER}\n\n" - " def test_beta(self):\n" - " pass\n" - ), - ) - - result = scan_placeholder_tests(tests_dir=tests_dir) - - assert "tests/test_meth.py" in result, ( - f"Expected key 'tests/test_meth.py' in result, got keys: {list(result.keys())}" - ) - entries = result["tests/test_meth.py"] - assert "tests/test_meth.py::TestFoo" in entries, ( - f"Expected 'tests/test_meth.py::TestFoo' in entries, got: {entries}" - ) - assert " - test_alpha" in entries, f"Expected ' - test_alpha' in entries, got: {entries}" - assert " - test_beta" not in entries, f"Unexpected ' - test_beta' found in entries: {entries}" - - def test_function_level_test_false_reports_only_that_function(self, tests_dir: Path) -> None: - """scan_placeholder_tests() reports only the function with func.__test__ = False.""" - _create_test_file( - directory=tests_dir, - filename="test_func.py", - content=(f"def test_alpha():\n pass\n\ntest_alpha.{TEST_FALSE_MARKER}\n\ndef test_beta():\n pass\n"), - ) - - result = scan_placeholder_tests(tests_dir=tests_dir) - - assert "tests/test_func.py" in result, ( - f"Expected key 'tests/test_func.py' in result, got keys: {list(result.keys())}" - ) - entries = result["tests/test_func.py"] - assert " - test_alpha" in entries, f"Expected ' - test_alpha' in entries, got: {entries}" - assert " - test_beta" not in entries, f"Unexpected ' - test_beta' found in entries: {entries}" - - def test_skips_files_without_test_false(self, tests_dir: Path) -> None: - """scan_placeholder_tests() skips files that do not 
contain __test__ = False.""" - _create_test_file( - directory=tests_dir, - filename="test_normal.py", - content=("class TestFoo:\n def test_bar(self):\n assert True\n"), - ) - - result = scan_placeholder_tests(tests_dir=tests_dir) - - assert result == {} - - def test_handles_syntax_errors_gracefully(self, tests_dir: Path) -> None: - """scan_placeholder_tests() logs warning and continues on syntax errors.""" - _create_test_file( - directory=tests_dir, - filename="test_broken.py", - content=f"{TEST_FALSE_MARKER}\n\ndef this is not valid python:\n", - ) - _create_test_file( - directory=tests_dir, - filename="test_valid.py", - content=(f"{TEST_FALSE_MARKER}\n\nclass TestGood:\n def test_pass(self):\n pass\n"), - ) - - result = scan_placeholder_tests(tests_dir=tests_dir) - - # Broken file should be skipped, valid file should be included - assert "tests/test_broken.py" not in result, ( - f"Unexpected key 'tests/test_broken.py' in result: {list(result.keys())}" - ) - assert "tests/test_valid.py" in result, ( - f"Expected key 'tests/test_valid.py' in result, got keys: {list(result.keys())}" - ) - - def test_returns_empty_dict_when_no_test_files(self, tests_dir: Path) -> None: - """scan_placeholder_tests() returns empty dict when no test files exist.""" - result = scan_placeholder_tests(tests_dir=tests_dir) - - assert result == {} - - def test_scans_subdirectories_recursively(self, tests_dir: Path) -> None: - """scan_placeholder_tests() finds test files in nested subdirectories.""" - sub_dir = tests_dir / "network" / "ipv6" - sub_dir.mkdir(parents=True) - _create_test_file( - directory=sub_dir, - filename="test_deep.py", - content=(f"{TEST_FALSE_MARKER}\n\nclass TestDeep:\n def test_nested(self):\n pass\n"), - ) - - result = scan_placeholder_tests(tests_dir=tests_dir) - - assert result, "Expected at least one entry from nested test file" - found_keys = list(result.keys()) - assert any("test_deep.py" in key for key in found_keys), ( - f"Expected a key containing 
'test_deep.py' in results, got keys: {found_keys}"
-        )
-
-    def test_ignores_non_test_files(self, tests_dir: Path) -> None:
-        """scan_placeholder_tests() only processes files matching test_*.py pattern."""
-        _create_test_file(
-            directory=tests_dir,
-            filename="conftest.py",
-            content=f"{TEST_FALSE_MARKER}\n\ndef fixture():\n    pass\n",
-        )
-        _create_test_file(
-            directory=tests_dir,
-            filename="helper.py",
-            content=f"{TEST_FALSE_MARKER}\n\ndef helper():\n    pass\n",
-        )
-
-        result = scan_placeholder_tests(tests_dir=tests_dir)
-
-        assert result == {}
-
-
-# ===========================================================================
-# Tests for output_text() and output_json()
-# ===========================================================================
-
-
-class TestOutputFunctions:
-    """Tests for output_text() and output_json() functions."""
-
-    SAMPLE_PLACEHOLDER_FILES: ClassVar[dict[str, list[str]]] = {
-        "tests/test_foo.py": [
-            "tests/test_foo.py::TestFoo",
-            " - test_bar",
-            " - test_baz",
-        ],
-        "tests/test_standalone.py": [
-            "tests/test_standalone.py",
-            " - test_alpha",
-        ],
-    }
-
-    def test_output_json_structure(self, capsys: pytest.CaptureFixture[str]) -> None:
-        """output_json() produces valid JSON with correct totals and file entries."""
-        output_json(placeholder_files=self.SAMPLE_PLACEHOLDER_FILES)
-        captured = capsys.readouterr()
-        result = json.loads(captured.out)
-
-        assert result["total_tests"] == 3, f"Expected 3 total tests, got {result['total_tests']}"
-        assert result["total_files"] == 2, f"Expected 2 total files, got {result['total_files']}"
-        assert "tests/test_foo.py" in result["files"], (
-            f"Missing tests/test_foo.py in files, got keys: {list(result['files'].keys())}"
-        )
-        assert result["files"]["tests/test_foo.py"] == ["test_bar", "test_baz"], (
-            f"Expected ['test_bar', 'test_baz'], got {result['files']['tests/test_foo.py']}"
-        )
-        assert result["files"]["tests/test_standalone.py"] == ["test_alpha"], (
-            f"Expected ['test_alpha'], got
{result['files']['tests/test_standalone.py']}"
-        )
-
-    def test_output_json_empty_input(self, capsys: pytest.CaptureFixture[str]) -> None:
-        """output_json() produces correct JSON for empty input."""
-        output_json(placeholder_files={})
-        captured = capsys.readouterr()
-        result = json.loads(captured.out)
-
-        assert result["total_tests"] == 0, f"Expected 0 total tests, got {result['total_tests']}"
-        assert result["total_files"] == 0, f"Expected 0 total files, got {result['total_files']}"
-        assert result["files"] == {}, f"Expected empty files dict, got: {result['files']}"
-
-    def test_output_text_counts_only_files_with_tests(self, caplog: pytest.LogCaptureFixture) -> None:
-        """output_text() counts only files that have test entries in the total."""
-        placeholder_files: dict[str, list[str]] = {
-            "tests/test_foo.py": [
-                "tests/test_foo.py::TestFoo",
-                " - test_bar",
-            ],
-            "tests/test_empty.py": [
-                "tests/test_empty.py::TestEmpty",
-            ],
-        }
-        logger = logging.getLogger(name="scripts.std_placeholder_stats.std_placeholder_stats")
-        logger.propagate = True
-        try:
-            with caplog.at_level(logging.INFO, logger="scripts.std_placeholder_stats.std_placeholder_stats"):
-                output_text(placeholder_files=placeholder_files)
-        finally:
-            logger.propagate = False
-
-        summary_line = [line for line in caplog.messages if "Total:" in line]
-        assert summary_line, f"Expected 'Total:' summary line in log output, got: {caplog.messages}"
-        assert "1 placeholder tests in 1 files" in summary_line[0], (
-            f"Expected '1 placeholder tests in 1 files', got: {summary_line[0]}"
-        )

From ce450eba68c3af1b74b3fc31f5a79fbe4a27172c Mon Sep 17 00:00:00 2001
From: rnetser
Date: Thu, 26 Feb 2026 11:37:55 +0200
Subject: [PATCH 18/21] split the tests to a separate pr

---
 .flake8                 | 1 -
 .pre-commit-config.yaml | 1 -
 2 files changed, 2 deletions(-)

diff --git a/.flake8 b/.flake8
index 1172212230..d0c34c4402 100644
--- a/.flake8
+++ b/.flake8
@@ -14,7 +14,6 @@ exclude =
     .cache/*,
     utilities/unittests/*,
    utilities/junit_ai_utils.py,
-    scripts/*/tests/*
 fcn_exclude_functions =
     Path,
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index d0a0e229ce..492e22215e 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -46,7 +46,6 @@ repos:
       - id: flake8
        language_version: python3.14
        args: [--config=.flake8]
-       exclude: "(utilities/unittests/|utilities/junit_ai_utils\\.py|scripts/.*/tests/)"
        additional_dependencies: [
          "git+https://github.com/RedHatQE/flake8-plugins.git@v1.0.0",

From d36e4f754728e64ac88fc0b432d43ebf373abb0a Mon Sep 17 00:00:00 2001
From: rnetser
Date: Mon, 2 Mar 2026 16:04:05 +0200
Subject: [PATCH 19/21] fix wrong value for __test__ for unimplemented tests

---
 docs/SOFTWARE_TEST_DESCRIPTION.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/SOFTWARE_TEST_DESCRIPTION.md b/docs/SOFTWARE_TEST_DESCRIPTION.md
index 6292df72d9..33bf1aff7b 100644
--- a/docs/SOFTWARE_TEST_DESCRIPTION.md
+++ b/docs/SOFTWARE_TEST_DESCRIPTION.md
@@ -32,7 +32,7 @@ This project follows a **two-phase development workflow** that separates test de
    - Add the complete STD docstring (Preconditions/Steps/Expected)
    - Include a link to the approved STP (Software Test Plan) in the **module docstring** (top of the test file)
    - Add applicable pytest markers (architecture markers, etc.)
-   - Add `__test__ = True` on implemented test(s). For a single test, add `.__test__ = True`
+   - Add `__test__ = False` on unimplemented test(s). For a single test, add `.__test__ = False`

 2.
**Submit PR for review**:
    - The PR contains only the test descriptions (no automation code)

From 056f7afeb643b7c5198c22510ff96cf970ab30b2 Mon Sep 17 00:00:00 2001
From: rnetser
Date: Tue, 3 Mar 2026 19:48:02 +0200
Subject: [PATCH 20/21] add Parameterize Testing pattern to common patterns table

---
 docs/SOFTWARE_TEST_DESCRIPTION.md | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/docs/SOFTWARE_TEST_DESCRIPTION.md b/docs/SOFTWARE_TEST_DESCRIPTION.md
index 33bf1aff7b..9ea5159246 100644
--- a/docs/SOFTWARE_TEST_DESCRIPTION.md
+++ b/docs/SOFTWARE_TEST_DESCRIPTION.md
@@ -50,7 +50,7 @@ This project follows a **two-phase development workflow** that separates test de
    - Create any required fixtures
    - Implement helper functions as needed
    - Remove `__test__ = False` from implemented tests
-   - If needed, update the test description. This change must be approved by the team's qe tech lead.
+   - If needed, update the test description. This change must be approved by the team's qe sig owner / lead.

 2. **Submit PR for review**:
    - Reviewers verify the implementation matches the approved design
@@ -265,7 +265,7 @@ test_.__test__ = False
 5. **Single Expected Behavior per Test**: One assertion: clear pass/fail.
    - Good: `Expected: - Ping succeeds with 0% packet loss`
    - Bad: `Expected: - Ping succeeds - VM remains running - No errors logged`
-   - The may be **exceptions**, where multiple assertions are required to verify a **single** behavior.
+   - There may be **exceptions**, where multiple assertions are required to verify a **single** behavior.
    - Example: `Expected: - VM reports valid IP address. Expected - User can access VM via SSH`

 6. **Tests Must Be Independent**: Tests should not depend on other tests.
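The Phase 1 stub conventions changed by these patches (STD docstring plus a `__test__ = False` marker on unimplemented tests) can be sketched as a minimal placeholder module. Everything below is a hypothetical illustration: the module, test name, and STP link placeholder are invented for this example and are not tests from the repository.

```python
"""
VM lifecycle tests: restart.

STP: <link to the approved Software Test Plan>
"""


def test_vm_restart_reaches_running():
    """
    Restarted VM returns to the Running state.

    Preconditions:
        - A VM exists and is Running

    Steps:
        1. Restart the VM
        2. Wait for the VM status to be reported

    Expected:
        - VM is Running
    """


# Phase 1 stub: hide from pytest collection until implemented.
# The Phase 2 automation PR fills in the body and removes this line.
test_vm_restart_reaches_running.__test__ = False
```

Because pytest honors the `__test__ = False` attribute at collection time, the stub is skipped entirely rather than reported as a passing empty test.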
@@ -298,12 +298,13 @@ test_.__test__ = False

 ### Common Patterns in This Project

-| Pattern                  | Description                               | Example                                       |
-|--------------------------|-------------------------------------------|-----------------------------------------------|
-| **Fixture-based Setup**  | Use pytest fixtures for resource creation | `vm_to_restart`, `namespace`                  |
-| **Matrix Testing**       | Parameterize tests for multiple scenarios | `storage_class_matrix`, `run_strategy_matrix` |
-| **Architecture Markers** | Indicate architecture compatibility       | `@pytest.mark.arm64`, `@pytest.mark.s390x`    |
-| **Gating Tests**         | Critical tests for CI/CD pipelines        | `@pytest.mark.gating`                         |
+| Pattern                    | Description                                          | Example                                       |
+|----------------------------|------------------------------------------------------|-----------------------------------------------|
+| **Fixture-based Setup**    | Use pytest fixtures for resource creation            | `vm_to_restart`, `namespace`                  |
+| **Parameterize Testing**   | Parametrize tests or fixtures for multiple scenarios | `@pytest.fixture(params=[...])`, `@pytest.mark.parametrize` |
+| **Matrix Testing**         | Dynamic parametrization for cluster-specific matrices (advanced) | `storage_class_matrix`, `run_strategy_matrix` |
+| **Architecture Markers**   | Indicate architecture compatibility                  | `@pytest.mark.arm64`, `@pytest.mark.s390x`    |
+| **Gating Tests**           | Critical tests for CI/CD pipelines                   | `@pytest.mark.gating`                         |

 ### STD Checklist

From bc3c3831c76f8acf953f4db8a4530925dbb444c6 Mon Sep 17 00:00:00 2001
From: rnetser
Date: Tue, 3 Mar 2026 19:54:22 +0200
Subject: [PATCH 21/21] update metrix wording

---
 docs/SOFTWARE_TEST_DESCRIPTION.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs/SOFTWARE_TEST_DESCRIPTION.md b/docs/SOFTWARE_TEST_DESCRIPTION.md
index 9ea5159246..7822d26b55 100644
--- a/docs/SOFTWARE_TEST_DESCRIPTION.md
+++ b/docs/SOFTWARE_TEST_DESCRIPTION.md
@@ -298,13 +298,13 @@ test_.__test__ = False

 ### Common Patterns in This Project

-| Pattern                    |
Description                                          | Example                                       |
-|----------------------------|------------------------------------------------------|-----------------------------------------------|
-| **Fixture-based Setup**    | Use pytest fixtures for resource creation            | `vm_to_restart`, `namespace`                  |
-| **Parameterize Testing**   | Parametrize tests or fixtures for multiple scenarios | `@pytest.fixture(params=[...])`, `@pytest.mark.parametrize` |
-| **Matrix Testing**         | Dynamic parametrization for cluster-specific matrices (advanced) | `storage_class_matrix`, `run_strategy_matrix` |
-| **Architecture Markers**   | Indicate architecture compatibility                  | `@pytest.mark.arm64`, `@pytest.mark.s390x`    |
-| **Gating Tests**           | Critical tests for CI/CD pipelines                   | `@pytest.mark.gating`                         |
+| Pattern                  | Description                                          | Example                                                      |
+|--------------------------|------------------------------------------------------|--------------------------------------------------------------|
+| **Fixture-based Setup**  | Use pytest fixtures for resource creation            | `vm_to_restart`, `namespace`                                 |
+| **Parameterize Testing** | Parametrize tests or fixtures for multiple scenarios | `@pytest.mark.parametrize("run_strategy", [Always, Manual])` |
+| **Matrix Testing**       | Advanced parametrization via dynamic fixtures        | `storage_class_matrix`, `run_strategy_matrix`                |
+| **Architecture Markers** | Indicate architecture compatibility                  | `@pytest.mark.arm64`, `@pytest.mark.s390x`                   |
+| **Gating Tests**         | Critical tests for CI/CD pipelines                   | `@pytest.mark.gating`                                        |

 ### STD Checklist