A Software Test Plan (STP) is a comprehensive document that serves as the overall roadmap for testing a feature or enhancement. It details the scope, approach, resources, and schedule for all testing activities.
The STP is a mandatory deliverable for formal QE sign-off and feature acceptance in OpenShift Virtualization.
- Ensures systematic coverage of all requirements (functional and non-functional)
- Provides clear visibility of test coverage, required resources, and identified risks
- Documents untestable aspects upfront with stakeholder agreement
- Aligns QE activities with the OpenShift Virtualization release cycle
- Ensures automation is merged for GA
- Creates a shared understanding between Dev, QE, and Product Management
- Explicitly identifies what can and cannot be tested
- Requires stakeholder sign-off on untestable functionalities
- Documents mitigation strategies for identified risks
The STP template includes the following main sections:
When a feature spans multiple SIGs, use a feature directory under the owning SIG. The parent STP defines the overall scope, and each participating SIG contributes a child STP that covers their specific test scope.
```
stps/<owning-sig>/<feature-name>/
├── stp.md           ← parent STP
├── <sig-name>.md    ← child STP per participating SIG
└── ...
```
Parent STP — defines Feature Overview, requirements, acceptance criteria, and overall coordination. Child STPs — define SIG-specific testing goals, scenarios, strategy, and risks. They reference the parent for shared context and do NOT duplicate it.
For single-SIG features, place the STP directly under stps/<sig>/ without a feature directory.
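In that case the layout reduces to a single file (the SIG and feature names below are placeholders):

```
stps/<sig>/
└── <feature-name>.md    ← single STP, no feature directory
```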
Purpose: Ensure QE understands the feature value and testability before planning begins.
Key Activities:
- Review feature requirements (KubeVirt and OCP enhancements)
- Confirm requirements are testable and unambiguous
- Verify acceptance criteria are clearly defined
- Ensure Non-Functional Requirements (NFRs) are covered:
  - Performance
  - Security
  - Usability
  - Monitoring (metrics/alerts)
  - Scalability
  - Portability
- Document known feature limitations — confirmed product constraints; each limitation requires per-item sign-off (name/date)
Critical Checkpoint: Developer Handoff/QE Kickoff meeting where Dev/Arch walks QE through design and architecture.
- Purpose: High-level overview of testing approach
- Scope of Testing: What will be tested (functional and non-functional)
- Document Conventions: Acronyms and definitions
Primary goals:
- Verify software meets all requirements
- Identify and report defects
- Ensure software performs as expected
- Assess overall quality and release readiness
QE perspective on testing priorities:
- User Stories: Key scenarios driving test priorities
- Testing Goals: Specific, measurable objectives (coverage targets, performance SLAs, etc.)
- Out of Scope: Items the product supports but QE chooses not to test — requires PM/Lead sign-off
- Test Limitations: Constraints imposed on QE that limit what can be tested (e.g., hardware unavailability); each limitation requires per-item sign-off (name/date)
Types of Testing:
- Functional Testing
- Automation Testing
- Performance Testing
- Security Testing
- Usability Testing
- Compatibility Testing
- Regression Testing
- Upgrade Testing
- Backward Compatibility Testing
Potential Areas:
- Dependencies on other components
- Monitoring requirements (metrics/alerts)
- Cross-integration impacts
- UI requirements
Virtualization-specific requirements:
- Cluster topology (bare metal, SNO, compact cluster, etc.)
- OCP & OpenShift Virtualization versions
- Hardware requirements (CPU virtualization, compute resources, special hardware)
- Storage configuration (ODF, LSO, CSI drivers)
- Network configuration (CNI, secondary networks, network plugins)
- Required operators
- Platform (bare metal, AWS, Azure, GCP, etc.)
- Special configurations (disconnected, proxy, FIPS)
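As an illustration, these details can be captured as a short checklist in the STP's environment section (all values below are hypothetical, not a recommended configuration):

```yaml
# Hypothetical environment requirements for one STP (illustrative only)
topology: bare metal, 3 control-plane + 3 worker nodes
versions:
  ocp: "4.y"
  virtualization: "4.y"
hardware: CPU virtualization extensions (Intel VT-x / AMD-V)
storage: ODF with the default CSI driver
network: OVN-Kubernetes, secondary network via a bridge CNI
platform: bare metal
required_operators: [OpenShift Virtualization, ODF]
special_configurations: [FIPS]
```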
Entry Criteria:
- Requirements and design documents approved and stable
- Test environment set up and configured
- Test cases reviewed and approved
Document risks and mitigation strategies with sign-off:
- Timeline/Schedule Risk
- Insufficient test coverage
- Unstable test environment
- Untestable aspects
- Resource constraints
- Dependencies
Note: Risks are uncertainties — things that might happen. Each risk requires a mitigation plan and stakeholder sign-off. Do not confuse risks with:
- Feature Limitations (Section I.2) — known product constraints, not uncertain
- Test Limitations (Section II.1) — known constraints on QE's ability to test
- Out of Scope (Section II.1) — deliberate decisions not to test
The STP template uses four distinct categories for constraints. Refer to the template for detailed guidance:
- Feature Limitations (Section I.2) — Product doesn't support it (e.g., "Feature only supports IPv4")
- Out of Scope (Section II.1) — QE chose not to test it (e.g., "Windows guests won't be tested")
- Test Limitations (Section II.1) — QE can't test it due to constraints (e.g., "No dual-stack cluster available")
- Risks (Section II.5) — Uncertain, might impact testing (e.g., "Cluster may not be ready in time")
Purpose: Link strategic plan to tactical test cases.
Key Components:
- Test Case Repository: Links to Jira, Polarion, or test management system
- Traceability Matrix: Maps requirements to high-level test scenarios
- Test Scenario Descriptions: Brief descriptions of key scenarios
- Traceability Maintenance: Process for keeping traceability current
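Traceability maintenance lends itself to a simple automated check: fail fast when a requirement has no mapped scenario. A minimal sketch, assuming requirement IDs and scenario names that are purely hypothetical:

```python
# Sketch: flag requirements with no mapped test scenario.
# All IDs and scenario names below are hypothetical examples.
REQUIREMENTS = {"CNV-101", "CNV-102", "CNV-103"}

TRACEABILITY = {
    "CNV-101": ["VM live migration succeeds under load"],
    "CNV-102": ["Hotplugged disk is visible in the guest"],
    "CNV-103": [],  # no scenario yet -> should be flagged
}

def uncovered(requirements, matrix):
    """Return requirement IDs that map to no test scenario."""
    return sorted(r for r in requirements if not matrix.get(r))

print(uncovered(REQUIREMENTS, TRACEABILITY))  # → ['CNV-103']
```

In practice the mapping would be pulled from the test management system (Jira, Polarion, etc.) rather than hard-coded, but the gating logic stays the same.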
Final Sign-off Checklist:
- Tier 1 / Tier 2 tests defined
- Automation merged (mandatory for GA)
- Tests running in release checklist jobs
- Documentation reviewed
- Feature sign-off by QE
Approvers:
- QE Lead
- Product Manager
- Development Lead
- QE owner reviews feature requirements and enhancements
- Confirms testability and value for customers
- Developer Handoff/QE Kickoff meeting
- QE owner writes the STP
- Documents scope, strategy, environment, risks
- Includes an explicit list of untestable aspects
- Stakeholders review the STP
- All parties agree to the risks associated with untestable aspects
- STP approved before test execution begins
- Implement test cases based on STP
- Track progress against testing goals
- Update traceability matrix
- Verify acceptance criteria were met
- Automation merged and running in CI
- QE formally signs off the feature
- STP must be written, reviewed, and approved
- Testing cannot begin until requirements/design approved
- Test automation must be merged for GA
- Features not landing prior to Code Freeze (CF) need an exception
- Start QE review immediately when the feature is defined
- Attend Developer Handoff/QE Kickoff meeting
- Identify untestable aspects early
- Document assumptions and dependencies explicitly
- Use traceability matrix to prove coverage
- Get stakeholder sign-off on non-goals
- Address both functional and non-functional requirements
- Plan for different test environments and configurations
- Include upgrade and backward compatibility testing
- Plan automation strategy upfront
- Identify which tests will be automated
- Ensure automation is merged before GA
- Document all risks, including untestable aspects
- Get explicit stakeholder agreement on risk acceptance
- Propose mitigation strategies
- Late STP Creation: STP must be approved before testing begins
- Unclear Scope: Non-goals must be explicitly documented with stakeholder sign-off
- Missing NFRs: Don't forget performance, security, monitoring requirements
- No Traceability: Must map requirements to test scenarios
- Automation Delays: Automation must be merged for GA; plan early
- Untested Assumptions: All untestable aspects must be explicitly documented