
CLID-515: claude: add slash command to generate test cases and testing approach based on PR. #1335

Status: Open

nidangavali wants to merge 1 commit into openshift:main from nidangavali:CLID-515

Conversation

@nidangavali commented Jan 3, 2026

Description

Add a /test-pr claude command to generate test cases and testing approaches based on the respective PR.

Github / Jira issue: CLID-515

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Code Improvements (Refactoring, Performance, CI upgrades, etc)
  • Internal repo assets (diagrams / docs on github repo)
  • This change requires a documentation update on openshift docs

How Has This Been Tested?

Run /test-pr <pr-url> when using claude.

Expected Outcome

  1. PR Change Analysis — Fetches the PR diff using git and summarizes:
    - Files modified
    - Scope of changes (new features, bug fixes, refactoring)
    - Code patterns (functions added/modified)
    - Whether test files are already included
  2. Interactive Test Type Selection — Asks you which test types are needed (multi-select):
    - Unit, Integration, E2E, Regression, Manual, Ad-hoc, or Exploratory
  3. Targeted Testing Guidance — For each selected test type, provides:
    - Unit Tests: Lists testable functions with file paths/line numbers, prioritization, edge cases, mock
    requirements, and a Go table-driven test template
    - Integration Tests: Identifies integration points, component interactions, and run commands
    - E2E Tests: Identifies affected user workflows (M2D, D2M, M2M) with prerequisites and commands
    - Regression Tests: Lists affected code paths and existing test suites to run
    - Manual Tests: Generates step-by-step test case checklists
    - Ad-hoc/Exploratory: Provides exploration guidelines and boundary conditions
  4. Detailed Steps per Function/Scenario — For each item you choose to test:
    - Test file location
    - Starter test template (Go code)
    - Specific go test run commands (with coverage, race detection)
    - Validation criteria
  5. Iterative Loop — After each function/scenario, asks what to do next: test another function, move to the
    next test type, or finish.

Key point: It does not write actual test code unless you explicitly ask — it provides templates, guidance, and
commands for you to execute.

openshift-ci-robot added the jira/valid-reference label (Indicates that this PR references a valid Jira ticket of any type.) on Jan 3, 2026
openshift-ci-robot commented Jan 3, 2026

@nidangavali: This pull request references CLID-515 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Description

Add a /pr-tests claude command to generate QE-focused test cases and testing approaches based on the respective PR.

Github / Jira issue:


How Has This Been Tested?

Run /pr-tests pr-url when using claude.

Expected Outcome

  • Comprehensive QE test scenario descriptions (integration, E2E, regression, manual)
  • Changed line numbers for precise change tracking (no code output)
  • Priority-ranked test scenarios (critical, high, medium priority)
  • Analysis of what changed, impact assessment, and risk evaluation
  • Edge cases and error conditions to verify
  • Test environment and configuration requirements

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

openshift-ci bot requested review from adolfo-ab and aguidirh on January 3, 2026 02:37
openshift-ci bot commented Jan 3, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: nidangavali
Once this PR has been reviewed and has the lgtm label, please assign r4f4 for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@nidangavali (Author)

/retest

openshift-ci-robot commented Jan 3, 2026

@nidangavali: This pull request references CLID-515 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

@adolfo-ab (Contributor) left a comment


I think we should avoid talking about "QE testing", since we are trying to bridge the gap between dev and QE, and instead talk only about testing from a unified point of view

@nidangavali (Author)

> I think we should avoid talking about "QE testing", since we are trying to bridge the gap between dev and QE, and instead talk only about testing from a unified point of view

Agreed, but I added the QE focus because of the areas the slash command mostly covers. I deliberately avoided unit tests for two reasons:

  1. Unit tests should be focused precisely on the code, which warrants a separate slash command for them.
  2. The output of the current slash command is already very detailed and long; adding unit-test output would also mean multiple test code snippets whenever a PR contains multiple changes.

This slash command mainly focuses on giving the testing approach, not code for automation or any code changes.

@aguidirh (Contributor) left a comment


Hi @nidangavali,

Thanks for submitting this PR; I added some suggestions. Feel free to ask questions about them.


### Why Use This?

- **QE-Focused Analysis** - Tailored for Quality Engineering.

Instead of "QE-Focused", which is not very specific, I would suggest naming the type of test this is focused on, since tests should be everyone's responsibility.


1. Authenticate GitHub CLI:
```bash
gh auth login
```

Since our repositories are public, gh is not necessary to check the PRs; git alone is enough to pull and diff the PR. This way we avoid adding an unneeded dependency to the slash command. Git is usually already installed on everyone's machine; gh is not necessarily.

### Output

```markdown
● 🧪 QE PR Testing Strategy Analysis
```

Instead of saying QE strategy, it would be nice if we could have strategies by test category.

For example, integration test strategy analysis, regression test strategy analysis, etc.

This way we make clear to the entire team what activities QE is doing today, so it is possible to get help from everyone because the steps are more transparent and easier to understand.

Comment on lines +236 to +420
Change Analysis

v2/internal/pkg/cli/executor.go

What Changed:
- Removed automatic deletion of the logs directory during initialization
- The setupLogsLevelAndDir() function no longer calls os.RemoveAll(o.LogsDir)
- Logs directory is now preserved between executions

Impact and Scope:
- CRITICAL behavioral change: Log persistence across multiple oc-mirror operations
- Affects all oc-mirror commands: mirror, delete, list, and any future commands
- Changes system behavior from "clean slate each run" to "accumulate logs over time"
- Impacts troubleshooting workflows (positive - historical logs retained)
- Impacts disk space management (logs accumulate until manually cleaned)
- May affect automated workflows expecting fresh logs directory

Dependencies and Integrations Affected:
- Log aggregation or monitoring systems expecting specific log patterns
- Automation scripts that parse logs or expect clean log directories
- Disk space management and cleanup procedures
- Backup/restore operations involving working directory
- Any tooling that relies on log file naming conventions

QE Testing Recommendations

Test Scenarios by Changed Files

Component: Log Management in oc-mirror Executor

Suggested Test Scenarios (QE Perspective):

1. Verify logs persistence across multiple mirror operations
- Run multiple mirror operations sequentially
- Verify logs from each operation are preserved and accessible
- Confirm logs don't overwrite or conflict with each other
2. Test logs accumulation over extended usage
- Perform 10+ consecutive oc-mirror operations (mirror, delete, list)
- Verify all log files are retained
- Check log directory structure and organization
3. Verify log directory creation on first run
- Execute oc-mirror on fresh working directory
- Confirm logs directory is created with correct permissions
- Verify logs are written successfully
4. Test behavior with pre-existing logs directory
- Execute oc-mirror with existing logs directory containing previous logs
- Verify existing logs are NOT deleted
- Verify new logs are added alongside old logs
5. Verify log file naming and uniqueness
- Run multiple operations in quick succession
- Confirm each operation generates unique log files
- Verify no log file overwrites or conflicts occur
6. Test disk space impact from log accumulation
- Run oc-mirror operations over extended period
- Monitor disk space usage in logs directory
- Verify system behavior when disk approaches capacity
7. Verify backward compatibility with existing workflows
- Test integration with existing automation/CI pipelines
- Verify log parsing tools still function correctly
- Confirm monitoring/alerting systems handle accumulated logs
8. Test log retention in error scenarios
- Trigger failures during oc-mirror operations
- Verify error logs are preserved for debugging
- Confirm logs from failed operations remain accessible
9. Verify multi-user/concurrent execution handling
- Run multiple oc-mirror instances with same working directory
- Verify log isolation or proper handling of concurrent writes
- Check for race conditions or corruption
10. Test log directory permissions and access
- Verify logs directory created with correct permissions (0755)
- Test read access to accumulated logs
- Verify behavior with restricted permissions

Edge Cases to Cover:
- Logs directory exists as a file instead of directory
- Logs directory is a symbolic link
- Working directory doesn't exist or is inaccessible
- Insufficient disk space for log accumulation
- Logs directory with thousands of existing files
- Concurrent writes to logs directory from multiple instances
- Logs directory with restricted permissions (read-only, no write)
- Very long running operations generating large log files
- System restart or crash during logging operation

Error Conditions to Handle:
- Permission denied when accessing logs directory
- Disk full during log write operation
- Invalid working directory path
- Corrupted existing logs directory
- Network filesystem latency/failures (if working dir on network storage)

Integration Points to Verify:
- Integration with oc-mirror mirror command
- Integration with oc-mirror delete command
- Integration with oc-mirror list command
- Integration with log level configuration (--log-level flag)
- Integration with working directory configuration
- Integration with CI/CD pipelines
- Integration with log monitoring tools
- Integration with backup/restore procedures

User Workflows Affected:
- Debugging failures by reviewing historical logs
- Monitoring oc-mirror operations over time
- Troubleshooting recurring issues using accumulated logs
- Disk space management and cleanup procedures
- Automated log collection for support cases
- Log rotation and archival processes

Test Strategy

Test Levels:
- Regression Testing (CRITICAL): Verify the bug fix - logs are NOT deleted
- Integration Testing (HIGH): Test with all oc-mirror commands (mirror, delete, list)
- E2E Testing (HIGH): Full user workflows from installation through multiple operations
- Manual Testing (MEDIUM): Exploratory testing of edge cases and error scenarios
- Performance Testing (MEDIUM): Log accumulation impact over extended usage

Coverage Goals:
- 100% coverage of all oc-mirror commands with the changed behavior
- Verify all supported platforms (Linux, macOS, Windows if applicable)
- Test both fresh installations and upgrades from previous versions
- Cover all error scenarios related to log directory operations

Priority Areas:
1. P0 - CRITICAL: Regression test - verify logs are preserved, not deleted
2. P0 - CRITICAL: Verify backward compatibility with existing workflows
3. P1 - HIGH: Multi-operation scenarios (3+ consecutive runs)
4. P1 - HIGH: Integration with all oc-mirror commands
5. P2 - MEDIUM: Disk space management and large-scale accumulation
6. P2 - MEDIUM: Concurrent execution scenarios
7. P3 - LOW: Edge cases with permissions and filesystem issues

Risk Assessment:
- HIGH RISK: Breaking existing automation expecting clean logs
- HIGH RISK: Disk space exhaustion from uncontrolled log growth
- MEDIUM RISK: Log file conflicts or overwrites in concurrent scenarios
- MEDIUM RISK: Performance degradation with thousands of log files
- LOW RISK: Backward compatibility issues with older oc-mirror versions

Test Scenario Descriptions

Additional scenarios to consider:

- "Verify end-to-end mirror workflow preserves logs from previous runs"
- "Test system behavior when logs directory reaches gigabytes of data"
- "Verify error messages guide users to retained logs for troubleshooting"
- "Test upgrade scenario from version that deleted logs to new version"
- "Verify log retention across system restarts and crashes"
- "Test integration with log rotation tools and practices"
- "Verify documentation and user guidance updated for new behavior"

QE Testing Approach Guidance

Test Environment Considerations

Required Test Environment Setup:
- Multiple test environments: fresh install, upgrade from previous version
- Different operating systems: RHEL, Fedora, Ubuntu, macOS
- Various storage configurations: local disk, NFS, cloud storage
- Different filesystem types: ext4, xfs, NTFS
- Sufficient disk space for log accumulation testing

Test Data Requirements:
- Sample image sets for mirror operations
- Pre-existing logs directory with various file counts (0, 10, 100, 1000 files)
- Test working directories with different permission configurations
- Large and small mirror operations to generate varying log sizes

External Dependencies and Integrations:
- GitHub CLI (gh) for PR analysis
- Container registries for mirror operations
- File system monitoring tools for disk space tracking
- Log parsing/analysis tools used in production

Configuration Needed for Testing:
- Multiple working directory configurations
- Different log level settings (debug, info, warning, error)
- Various oc-mirror command configurations
- CI/CD pipeline integration configurations

Files Not Requiring Tests

N/A - Only one source file was modified and it requires comprehensive QE testing due to the behavioral change in log management.


The change analysis from lines 236-419 could be simplified by test category: what needs to be tested in the integration category, what in the regression category, and so on. This way it is easier for everyone to reproduce.

openshift-ci-robot commented Feb 16, 2026

@nidangavali: This pull request references CLID-515 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

openshift-ci bot commented Feb 16, 2026

@nidangavali: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@nidangavali (Author)

@aguidirh @adolfo-ab can you PTAL when you get time? Thanks


Labels

jira/valid-reference Indicates that this PR references a valid Jira ticket of any type.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants