
Conversation

Collaborator

@superdav42 superdav42 commented Nov 15, 2025

Summary by CodeRabbit

  • Tests

    • Added CI performance benchmarking that runs baseline vs. current runs, generates comparison reports, uploads artifacts, and posts summaries to pull requests.
    • Added self-contained benchmark and comparator tools to produce machine-readable results for CI.
    • Improved unit and integration tests with better setup/teardown, cache flushing, and mocked data to reduce database dependencies and increase isolation.
  • Chores

    • Added baseline/current benchmark result files for CI visibility.

Contributor

coderabbitai bot commented Nov 15, 2025

Warning

Rate limit exceeded

@superdav42 has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 0 minutes and 34 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between c218952 and b2b7d4f.

📒 Files selected for processing (1)
  • docs/PERFORMANCE.md (1 hunks)

Walkthrough

Adds a performance benchmarking pipeline: a GitHub Actions workflow runs PHP-based benchmarks (baseline vs current), compares results with a PHP comparator, uploads artifacts, and posts PR comments with regression findings.

Changes

Cohort / File(s): Summary

  • GitHub Actions Workflow (.github/workflows/performance.yml): New workflow "Performance Tests" triggered on PRs and pushes to main. Sets up a MySQL 5.7 and PHP 8.1 container, installs dependencies for the PR and baseline branches, runs benchmarks, compares results, uploads artifacts, posts/updates PR comments, and writes a GitHub summary.
  • Benchmark scripts, primary (tests/performance-benchmark.php, tests/performance-comparator.php): Performance_Benchmark added: runs multiple benchmarks, collects timing/memory/db metrics, saves JSON. Performance_Comparator added: loads baseline/current JSON, computes % changes, classifies improvements/regressions/critical, generates Markdown and JSON reports, and can save/print reports. (A minimal measurement sketch follows this list.)
  • Simple benchmark, mocked (tests/simple-performance-test.php): Simple_Performance_Benchmark added: self-contained, mocks WP helpers, runs simulated benchmarks, collects metrics, saves JSON for CI.
  • Benchmark outputs / artifacts (baseline-test.json, current-test.json): New JSON result files containing metadata (timestamps, PHP/WordPress/plugin versions) and per-operation metrics (execution_time_ms, memory_usage_mb, peak_memory_mb, database_queries).
  • Test hygiene & unit/functional test fixes (tests/.../Customer_Test.php, tests/.../Sunrise_Test.php, tests/.../Dashboard_Taxes_Tab_Test.php, tests/functional/SSO_Functional_Test.php, tests/unit/SSO_Test.php, tests/WP_Ultimo/Limits/Customer_User_Role_Limits_Test.php): Various test updates: add setUp/tearDown lifecycle methods, flush caches, mock transients/filters to avoid DB calls, switch the base test class to WP_UnitTestCase, add deterministic user id usage, and ensure proper preconditions (e.g., wp_users.can structure) before actions.
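For context, per-operation metrics like execution_time_ms, memory_usage_mb, peak_memory_mb, and database_queries are typically captured by bracketing each operation with a start/end measurement pair. Below is a minimal sketch of that pattern, assuming WordPress's global $wpdb and standard PHP timing/memory functions; the actual Performance_Benchmark class may structure this differently.

// Minimal sketch of start/end measurement bracketing (assumed pattern, not the actual Performance_Benchmark implementation).
class Measurement_Sketch {

    private $start_time;
    private $start_memory;
    private $start_queries;
    public $results = [];

    public function start_measurement() {
        global $wpdb;
        $this->start_time    = microtime(true);
        $this->start_memory  = memory_get_usage(true);
        $this->start_queries = isset($wpdb) ? (int) $wpdb->num_queries : 0;
    }

    public function end_measurement($label) {
        global $wpdb;
        $this->results[$label] = [
            'execution_time_ms' => round((microtime(true) - $this->start_time) * 1000, 2),
            'memory_usage_mb'   => round((memory_get_usage(true) - $this->start_memory) / 1048576, 2),
            'peak_memory_mb'    => round(memory_get_peak_usage(true) / 1048576, 2),
            'database_queries'  => (isset($wpdb) ? (int) $wpdb->num_queries : 0) - $this->start_queries,
        ];
    }
}

Each benchmark method would call start_measurement(), run the operation under test, then end_measurement('operation_name'); the accumulated $results array is what gets serialized into the JSON result files listed above.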

Sequence Diagram(s)

sequenceDiagram
    participant GH as GitHub Actions
    participant Container as PHP Container
    participant MySQL as MySQL Service
    participant Baseline as Baseline Benchmark (script)
    participant Current as Current Benchmark (script)
    participant Comparator as Comparator (script)
    participant PR as GitHub PR

    GH->>Container: Checkout baseline branch
    Container->>MySQL: wait-for-ready
    Container->>Baseline: run tests -> baseline-results.json
    Container->>GH: upload baseline artifact

    GH->>Container: Checkout PR branch
    Container->>MySQL: wait-for-ready
    Container->>Current: run tests -> current-results.json
    Container->>GH: upload current artifact

    GH->>Comparator: load baseline + current
    Comparator->>Comparator: compute % changes & classify
    Comparator-->>GH: produce Markdown + JSON report

    GH->>PR: post or update comment with report
    GH->>GH: check for (critical) regressions -> fail/annotate
    GH-->>GH: write workflow summary

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Areas needing extra attention:
    • .github/workflows/performance.yml: conditional logic for PR vs push, artifact uploads, and PR-comment update behavior.
    • tests/performance-comparator.php and tests/performance-benchmark.php: ensure JSON schema compatibility and edge-case handling (missing keys, zero values, division by zero).
    • Test changes that switch base classes or add process annotations (Sunrise_Test, Dashboard_Taxes_Tab_Test, SSO tests): verify test runner compatibility and isolation flags.
    • Mocking of transients/filters in multiple tests: ensure mocks don't mask real failures.

Possibly related PRs

  • feat: Add more tests for SSO #211 — Overlaps changes to SSO tests (tests/functional/SSO_Functional_Test.php, tests/unit/SSO_Test.php) and related test setup/teardown adjustments.

Poem

🐰 In a burrow of code I hop and I play,
I time every hop, and note memory sway.
Baseline then current — I compare with delight,
I flag every lag in the soft morning light.
Hooray for reports that keep metrics bright! 🥕

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 13.46%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Title check (❓ Inconclusive): The title 'Add benchmark check' is vague and generic. While it relates to the changeset (adding performance benchmarking functionality), it doesn't clearly convey what the benchmark does or why it matters. Resolution: consider a more specific title such as 'Add performance benchmark workflow and comparison tools' or 'Implement GitHub Actions performance testing pipeline'.

✅ Passed checks (1 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

♻️ Duplicate comments (1)
baseline-test.json (1)

1-49: Critical: Malformed JSON structure will break the comparator.

Same issue as current-test.json: this file mixes console output with JSON, making it unparseable. The Performance_Comparator::load_results() method expects pure JSON but will receive status messages (lines 1-7) and a file path suffix (line 49).

See the comment on current-test.json for the fix approach.

🧹 Nitpick comments (6)
tests/performance-comparator.php (1)

141-145: Consider making the improvement threshold configurable.

The improvement threshold is hardcoded at -5% (line 141), while regression thresholds are configurable via class properties. Consider adding an $improvement_threshold property for consistency and flexibility.

Add a property and use it:

     private $critical_thresholds = [
         'execution_time_ms' => 30,
         'memory_usage_mb' => 40,
         'database_queries' => 25,
     ];
+    
+    private $improvement_threshold = -5; // 5% improvement threshold
     
     public function __construct($baseline_file, $current_file) {

Then reference it on line 141:

-            elseif ($change_percent < -5) { // 5% improvement threshold
+            elseif ($change_percent < $this->improvement_threshold) {
tests/performance-benchmark.php (1)

162-190: Consider adding error handling for individual benchmarks.

Currently, if any benchmark throws an exception, the entire suite stops and only that error is recorded. Consider wrapping each benchmark call in a try-catch block to allow the suite to continue and report results for successful benchmarks while recording failures individually.

Example approach:

public function run_all_benchmarks() {
    echo "Starting performance benchmarks...\n";
    
    $benchmarks = [
        'dashboard_loading' => 'benchmark_dashboard_loading',
        'checkout_process' => 'benchmark_checkout_process',
        // ... other benchmarks
    ];
    
    foreach ($benchmarks as $name => $method) {
        try {
            $this->$method();
            echo "" . ucwords(str_replace('_', ' ', $name)) . " completed\n";
        } catch (Exception $e) {
            $this->results[$name . '_error'] = $e->getMessage();
            echo "" . ucwords(str_replace('_', ' ', $name)) . " failed: " . $e->getMessage() . "\n";
        }
    }
    
    return $this->results;
}
tests/simple-performance-test.php (2)

109-118: Remove unnecessary mock assignments.

In the mock environment, $site_data (line 109) and $is_valid (line 118) are assigned but never used. Unlike the real benchmark where validation might inspect the data, here the operation is just usleep(500). These assignments add no value and can be removed.

Apply this diff:

     public function benchmark_site_creation() {
         $this->start_measurement();
         
-        // Test site creation data preparation
-        $site_data = [
-            'domain' => 'test-' . time() . '.example.com',
-            'title' => 'Test Site ' . time(),
-            'user_id' => 1,
-            'plan_id' => 1,
-        ];
-        
         // Validate site data (without actually creating)
         usleep(500); // Simulate validation
-        $is_valid = true;
         
         $this->end_measurement('site_creation_validation');
     }

11-13: Consider using the $plugin_file parameter in the mock.

The $plugin_file parameter is unused. Either remove it for simplicity, or use it to enable flexible mock version configuration (e.g., return ['Version' => basename($plugin_file, '.php')];).

Option 1 - Remove the parameter:

-function get_plugin_data($plugin_file) {
+function get_plugin_data() {
     return ['Version' => '2.4.7'];
 }

Option 2 - Use it for flexibility (if multiple mock versions might be needed):

 function get_plugin_data($plugin_file) {
-    return ['Version' => '2.4.7'];
+    $versions = [
+        'ultimate-multisite.php' => '2.4.7',
+        'default' => '2.4.7',
+    ];
+    $key = basename($plugin_file);
+    return ['Version' => $versions[$key] ?? $versions['default']];
 }
.github/workflows/performance.yml (2)

66-72: Add composer dependency caching to improve workflow performance.

Installing composer dependencies on every run is slow. GitHub Actions provides caching that can significantly speed up the workflow by caching the vendor directory between runs.

Add caching steps before installing dependencies:

      - name: Get Composer Cache Directory
        id: composer-cache
        run: |
          echo "dir=$(composer config cache-files-dir)" >> $GITHUB_OUTPUT
      
      - name: Cache Composer Dependencies
        uses: actions/cache@v3
        with:
          path: ${{ steps.composer-cache.outputs.dir }}
          key: ${{ runner.os }}-composer-${{ hashFiles('**/composer.lock') }}
          restore-keys: |
            ${{ runner.os }}-composer-
      
      - name: Install PHP Dependencies (PR Branch)
        run: |
          composer install --no-interaction --prefer-dist

163-178: Regression detection via grep is fragile.

Lines 166 and 173 use grep with regex patterns to detect regressions in the markdown report. If the report format changes (e.g., table structure, wording), this detection will break silently.

Consider a more robust approach:

  1. Have the comparator script exit with a non-zero code on regressions:
// In performance-comparator.php, after generating report:
if ($comparison['summary']['critical_regressions'] > 0) {
    exit(2); // Critical regressions
} elseif ($comparison['summary']['regressions'] > 0) {
    exit(1); // Regular regressions
}
  2. Update the workflow to check exit codes:
      - name: Compare Performance Results
        id: compare
        continue-on-error: true
        run: |
          # set +e so the comparator's non-zero exit does not abort the step before the code is recorded
          set +e
          php tests/performance-comparator.php baseline-results.json current-results.json > performance-report.md
          echo "comparison_exit_code=$?" >> $GITHUB_OUTPUT
      
      - name: Check for Performance Regressions
        run: |
          if [ "${{ steps.compare.outputs.comparison_exit_code }}" -eq "2" ]; then
            echo "🚨 Critical performance regressions detected!"
            exit 1
          elif [ "${{ steps.compare.outputs.comparison_exit_code }}" -eq "1" ]; then
            echo "⚠️ Performance regressions detected."
          else
            echo "✅ Performance tests passed!"
          fi
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a4d6ab1 and 98c4bd8.

📒 Files selected for processing (6)
  • .github/workflows/performance.yml (1 hunks)
  • baseline-test.json (1 hunks)
  • current-test.json (1 hunks)
  • tests/performance-benchmark.php (1 hunks)
  • tests/performance-comparator.php (1 hunks)
  • tests/simple-performance-test.php (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
tests/performance-comparator.php (2)
tests/performance-benchmark.php (1)
  • __construct (24-29)
tests/simple-performance-test.php (1)
  • __construct (48-53)
tests/performance-benchmark.php (4)
tests/simple-performance-test.php (16)
  • __construct (48-53)
  • get_bloginfo (15-17)
  • get_plugin_data (11-13)
  • start_measurement (55-62)
  • end_measurement (64-76)
  • benchmark_dashboard_loading (78-88)
  • wu_get_dashboard_statistics (19-21)
  • benchmark_checkout_process (90-103)
  • wu_get_plans (23-25)
  • wu_get_signup_fields (27-29)
  • benchmark_site_creation (105-121)
  • benchmark_membership_operations (123-135)
  • wu_get_memberships (31-33)
  • benchmark_api_endpoints (137-156)
  • wu_get_sites (35-37)
  • benchmark_database_queries (158-169)
inc/admin-pages/class-dashboard-admin-page.php (1)
  • Dashboard_Admin_Page (20-627)
inc/checkout/class-checkout.php (1)
  • Checkout (31-2445)
inc/class-api.php (1)
  • API (20-493)
🪛 Biome (2.1.2)
baseline-test.json

[error] lines 1-8: repeated parse errors ("String values must be double quoted", "unexpected character") caused by the console output that precedes the JSON payload.

[error] lines 9-49: End of file expected. Use an array for a sequence of values: [1, 2].

[error] line 49: repeated parse errors ("String values must be double quoted", "unexpected character /", "Minus must be followed by a digit", "End of file expected") caused by the trailing "Results saved to: ..." text after the closing brace.

current-test.json

[error] lines 1-8: repeated parse errors ("String values must be double quoted", "unexpected character") caused by the console output that precedes the JSON payload.

[error] lines 9-49: End of file expected. Use an array for a sequence of values: [1, 2].

[error] line 49: repeated parse errors ("String values must be double quoted", "unexpected character /", "Minus must be followed by a digit", "End of file expected") caused by the trailing "Results saved to: ..." text after the closing brace.

🪛 GitHub Actions: Code Quality
tests/performance-comparator.php

[error] 1-1: Class file names should be based on the class name with "class-" prepended. Expected class-performance-comparator.php, but found performance-comparator.php.

🪛 GitHub Check: Code Quality Checks
tests/performance-comparator.php

[warning] 52-52:
Array double arrow not aligned correctly; expected 9 space(s) between "'improvements'" and double arrow, but found 1.


[warning] 50-50:
Array double arrow not aligned correctly; expected 10 space(s) between "'regressions'" and double arrow, but found 1.


[warning] 49-49:
Array double arrow not aligned correctly; expected 10 space(s) between "'total_tests'" and double arrow, but found 1.


[warning] 36-36:
file_get_contents() is discouraged. Use wp_remote_get() for remote URLs instead.


[warning] 28-28:
Equals sign not aligned with surrounding assignments; expected 2 spaces but found 1 space


[warning] 23-23:
Array double arrow not aligned correctly; expected 2 space(s) between "'database_queries'" and double arrow, but found 1.


[warning] 22-22:
Array double arrow not aligned correctly; expected 3 space(s) between "'memory_usage_mb'" and double arrow, but found 1.


[warning] 17-17:
Array double arrow not aligned correctly; expected 2 space(s) between "'database_queries'" and double arrow, but found 1.


[warning] 16-16:
Array double arrow not aligned correctly; expected 3 space(s) between "'memory_usage_mb'" and double arrow, but found 1.


[failure] 15-15:
Tabs must be used to indent lines; spaces are not allowed


[failure] 14-14:
Tabs must be used to indent lines; spaces are not allowed


[failure] 13-13:
Tabs must be used to indent lines; spaces are not allowed


[failure] 12-12:
Tabs must be used to indent lines; spaces are not allowed


[failure] 11-11:
Whitespace found at end of line


[failure] 11-11:
Tabs must be used to indent lines; spaces are not allowed


[failure] 8-8:
There must be no blank lines after the class comment


[failure] 5-5:
Whitespace found at end of line


[failure] 1-1:
Missing file doc comment


[failure] 1-1:
Class file names should be based on the class name with "class-" prepended. Expected class-performance-comparator.php, but found performance-comparator.php.

🪛 PHPMD (2.15.0)
tests/performance-comparator.php

107-107: Avoid unused local variables such as '$threshold'. (undefined)

(UnusedLocalVariable)

tests/simple-performance-test.php

11-11: Avoid unused parameters such as '$plugin_file'. (undefined)

(UnusedFormalParameter)


85-85: Avoid unused local variables such as '$stats'. (undefined)

(UnusedLocalVariable)


97-97: Avoid unused local variables such as '$plans'. (undefined)

(UnusedLocalVariable)


100-100: Avoid unused local variables such as '$fields'. (undefined)

(UnusedLocalVariable)


109-109: Avoid unused local variables such as '$site_data'. (undefined)

(UnusedLocalVariable)


118-118: Avoid unused local variables such as '$is_valid'. (undefined)

(UnusedLocalVariable)

tests/performance-benchmark.php

67-67: Avoid unused local variables such as '$stats'. (undefined)

(UnusedLocalVariable)


76-76: Avoid unused local variables such as '$checkout'. (undefined)

(UnusedLocalVariable)


79-79: Avoid unused local variables such as '$plans'. (undefined)

(UnusedLocalVariable)


82-82: Avoid unused local variables such as '$fields'. (undefined)

(UnusedLocalVariable)


100-100: Avoid unused local variables such as '$is_valid'. (undefined)

(UnusedLocalVariable)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: cypress (8.1, chrome)
  • GitHub Check: cypress (8.2, chrome)
🔇 Additional comments (7)
tests/performance-comparator.php (1)

115-117: Good: Division by zero protection.

The guard clause correctly prevents division by zero when the baseline value is zero, avoiding potential runtime errors in the percentage calculation on line 119.

tests/performance-benchmark.php (2)

67-67: Note: Unused variables are intentional for measurement.

Static analysis flags variables like $stats, $checkout, $plans, $fields, and $is_valid as unused. However, these assignments are intentional—they trigger code execution paths (function calls, object instantiation) whose performance is being measured. This is not a defect.

Also applies to: 76-76, 79-79, 82-82, 100-100


16-16: No action required — bootstrap.php path is correct and verified to exist.

The verification confirms that tests/bootstrap.php exists at the location referenced in the code. The path resolution dirname(__DIR__) . '/tests/bootstrap.php' correctly points to the file from the repository root, making it accessible across all execution contexts.

tests/simple-performance-test.php (1)

85-85: Note: Unused variables are intentional for measurement consistency.

Similar to tests/performance-benchmark.php, variables like $stats, $plans, and $fields are assigned to trigger function calls whose performance is being measured. This maintains parity with the real benchmark suite and is not a defect.

Also applies to: 97-97, 100-100

.github/workflows/performance.yml (3)

42-47: Verify baseline branch handling for push events.

Line 45 uses ${{ github.base_ref }} which is only available for pull_request events. For push events to main, this will be empty, causing the baseline checkout to fail or checkout an unexpected ref.

Add conditional logic to handle push events:

       - name: Checkout Base Branch
         uses: actions/checkout@v4
         with:
-          ref: ${{ github.base_ref }}
+          ref: ${{ github.event_name == 'pull_request' && github.base_ref || 'main' }}
           path: baseline
           fetch-depth: 0

Or skip baseline comparison for push events:

       - name: Checkout Base Branch
         if: github.event_name == 'pull_request'
         uses: actions/checkout@v4
         with:
           ref: ${{ github.base_ref }}
           path: baseline
           fetch-depth: 0

105-105: Update script path after filename fix.

After renaming performance-comparator.php to class-performance-comparator.php (per the pipeline failure), update this line to reference the correct filename.

-          php tests/performance-comparator.php baseline-results.json current-results.json > performance-report.md
+          php tests/class-performance-comparator.php baseline-results.json current-results.json > performance-report.md

This change depends on fixing the filename issue in tests/performance-comparator.php.


74-83: Good: MySQL readiness check prevents race conditions.

The loop waiting for MySQL to be ready (with 30 retries and 2-second intervals) is good practice. It prevents test failures due to the MySQL service not being fully initialized, which is a common CI issue.

Comment on lines +90 to +101
      - name: Run Baseline Performance Tests
        run: |
          cd baseline
          php tests/performance-benchmark.php > ../baseline-results.json
          echo "Baseline results:"
          cat ../baseline-results.json | jq '.'

      - name: Run Current Branch Performance Tests
        run: |
          php tests/performance-benchmark.php > current-results.json
          echo "Current results:"
          cat current-results.json | jq '.'
Contributor


⚠️ Potential issue | 🔴 Critical

Critical: Redirecting stdout creates malformed JSON files.

Lines 93 and 99 redirect the entire script output to JSON files using >, which includes console echo statements alongside the JSON output. This creates the malformed files seen in current-test.json and baseline-test.json (status messages + JSON + "Results saved" text).

The benchmark scripts already save JSON via their save_results() methods. Either use those saved files, or capture only the JSON output.

Option 1 - Use the saved JSON files instead of stdout redirect:

       - name: Run Baseline Performance Tests
         run: |
           cd baseline
-          php tests/performance-benchmark.php > ../baseline-results.json
+          php tests/performance-benchmark.php
+          cp tests/performance-results-*.json ../baseline-results.json
           echo "Baseline results:"
           cat ../baseline-results.json | jq '.'
       
       - name: Run Current Branch Performance Tests
         run: |
-          php tests/performance-benchmark.php > current-results.json
+          php tests/performance-benchmark.php
+          cp tests/performance-results-*.json current-results.json
           echo "Current results:"
           cat current-results.json | jq '.'

Option 2 - Suppress console output and only output JSON:

Modify the benchmark scripts to only echo JSON when stdout is redirected, or add a --quiet flag.
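As a rough illustration of the --quiet idea, the entry point could route progress messages to STDERR and keep STDOUT as pure JSON. This is a sketch only, with a hypothetical log_progress() helper in place of the script's current echo calls:

// Hypothetical --quiet handling for the benchmark entry point (sketch, not the current implementation).
$quiet = in_array('--quiet', $argv ?? [], true);

// Human-readable progress goes to STDERR (or is suppressed), so STDOUT stays machine-readable.
function log_progress($message, $quiet) {
    if (!$quiet) {
        fwrite(STDERR, $message . PHP_EOL);
    }
}

log_progress('Starting performance benchmarks...', $quiet);

// ... run benchmarks and collect $results here ...
$results = []; // placeholder for the collected metrics

// Only the JSON payload is written to STDOUT, so redirecting stdout to a file yields valid JSON.
echo json_encode($results, JSON_PRETTY_PRINT) . PHP_EOL;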

Committable suggestion skipped: line range outside the PR's diff.

✓ API endpoints benchmark completed
✓ Database queries benchmark completed
{
"test_run": "2025-12.55 20:22:06",
Contributor


⚠️ Potential issue | 🟠 Major

Invalid timestamp format in baseline data.

The timestamp "2025-12.55 20:22:06" is malformed—month cannot be "12.55". This should follow the format "Y-m-d H:i:s" (e.g., "2025-11-15 20:22:06"). While this won't break JSON parsing if the file format is fixed, it will cause confusion in reports and may break date parsing logic downstream.

Apply this diff:

-    "test_run": "2025-12.55 20:22:06",
+    "test_run": "2025-11-15 20:22:06",
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    "test_run": "2025-12.55 20:22:06",
+    "test_run": "2025-11-15 20:22:06",
🧰 Tools
🪛 Biome (2.1.2)

[error] 8-9: String values must be double quoted.

(parse)


[error] 9-49: End of file expected

Use an array for a sequence of values: [1, 2]

(parse)

🤖 Prompt for AI Agents
In baseline-test.json around line 9, the "test_run" timestamp "2025-12.55
20:22:06" is malformed (month contains decimal); replace it with a valid "Y-m-d
H:i:s" timestamp (for example "2025-11-15 20:22:06" or the correct actual test
run datetime) so downstream date parsing uses a proper format.

Comment on lines +1 to +49
Starting simplified performance benchmarks...
✓ Dashboard loading benchmark completed
✓ Checkout process benchmark completed
✓ Site creation validation benchmark completed
✓ Membership operations benchmark completed
✓ API endpoints benchmark completed
✓ Database queries benchmark completed
{
"test_run": "2025-11-15 20:22:14",
"php_version": "8.4.12",
"wordpress_version": "6.4.3",
"plugin_version": "2.4.7",
"dashboard_loading": {
"execution_time_ms": 1.09,
"memory_usage_mb": 0,
"peak_memory_mb": 2,
"database_queries": 0
},
"checkout_process": {
"execution_time_ms": 0.6,
"memory_usage_mb": 0,
"peak_memory_mb": 2,
"database_queries": 0
},
"site_creation_validation": {
"execution_time_ms": 0.59,
"memory_usage_mb": 0,
"peak_memory_mb": 2,
"database_queries": 0
},
"membership_operations": {
"execution_time_ms": 0.01,
"memory_usage_mb": 0,
"peak_memory_mb": 2,
"database_queries": 0
},
"api_endpoints": {
"execution_time_ms": 0.89,
"memory_usage_mb": 0,
"peak_memory_mb": 2,
"database_queries": 0
},
"database_queries": {
"execution_time_ms": 0.26,
"memory_usage_mb": 0,
"peak_memory_mb": 2,
"database_queries": 5
}
}Results saved to: /home/dave/multisite/ultimate-multisite/tests/simple-performance-results-2025-11-15-20-22-14.json
Contributor


⚠️ Potential issue | 🔴 Critical

Critical: Malformed JSON structure will break the comparator.

This file contains mixed console output and JSON, making it unparseable as JSON. Lines 1-7 contain status messages, and line 49 concatenates the JSON closing brace with a file path message. The Performance_Comparator::load_results() method (lines 31-44 in tests/performance-comparator.php) uses json_decode() which will fail on this format.

These files should contain only the pure JSON object (lines 8-48), without the console output prefix or the "Results saved to:" suffix.

Apply this approach to fix:

  1. Strip lines 1-7 (console output)
  2. Remove the "Results saved to:" text from line 49, keeping only the closing }

Or ensure the benchmark scripts write pure JSON to these files, separate from console output.
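If the comparator has to tolerate such mixed files as a stopgap, load_results() could defensively extract the substring between the first { and the last } before decoding. The following is only a sketch of that workaround; emitting pure JSON in the first place remains the better fix:

// Sketch: tolerate console noise around the JSON payload when loading results.
function load_results_tolerant($file) {
    $raw = file_get_contents($file);

    if (false === $raw) {
        throw new RuntimeException("Cannot read results file: {$file}");
    }

    $start = strpos($raw, '{');
    $end   = strrpos($raw, '}');

    if (false === $start || false === $end || $end < $start) {
        throw new RuntimeException("No JSON object found in: {$file}");
    }

    $data = json_decode(substr($raw, $start, $end - $start + 1), true);

    if (null === $data) {
        throw new RuntimeException('Invalid JSON in results file: ' . json_last_error_msg());
    }

    return $data;
}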

🧰 Tools
🪛 Biome (2.1.2)

[error] lines 1-8: repeated parse errors ("String values must be double quoted", "unexpected character") caused by the console output that precedes the JSON payload.

[error] lines 9-49: End of file expected. Use an array for a sequence of values: [1, 2].

[error] line 49: repeated parse errors ("String values must be double quoted", "unexpected character /", "Minus must be followed by a digit", "End of file expected") caused by the trailing "Results saved to: ..." text after the closing brace.

🤖 Prompt for AI Agents
In current-test.json lines 1-49: the file mingles console output with JSON which
breaks json_decode; remove the leading status lines (lines 1-7) so the file
starts with the JSON object and strip the trailing "Results saved to: ..." text
on line 49 so the file ends exactly with the JSON closing brace; alternatively
modify the benchmark writer to emit only the pure JSON payload to the results
file and write console logs to stdout/stderr or a separate log file.

];

foreach ($tables as $table) {
$wpdb->get_var("SELECT COUNT(*) FROM $table LIMIT 1");
Contributor


⚠️ Potential issue | 🔴 Critical

Critical: SQL injection vulnerability in database benchmark.

The table name is directly interpolated into the SQL query without escaping, creating a potential SQL injection vulnerability. Even though $tables is hardcoded in this script, using $wpdb->prepare() or at least escaping with backticks is essential for security and to follow WordPress best practices.

Apply this diff:

         foreach ($tables as $table) {
-            $wpdb->get_var("SELECT COUNT(*) FROM $table LIMIT 1");
+            $wpdb->get_var($wpdb->prepare("SELECT COUNT(*) FROM %i LIMIT 1", $table));
         }

Or use backticks as a minimum:

-            $wpdb->get_var("SELECT COUNT(*) FROM $table LIMIT 1");
+            $wpdb->get_var("SELECT COUNT(*) FROM `$table` LIMIT 1");
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-            $wpdb->get_var("SELECT COUNT(*) FROM $table LIMIT 1");
+            $wpdb->get_var($wpdb->prepare("SELECT COUNT(*) FROM %i LIMIT 1", $table));

Suggested change
-            $wpdb->get_var("SELECT COUNT(*) FROM $table LIMIT 1");
+            $wpdb->get_var("SELECT COUNT(*) FROM `$table` LIMIT 1");
🤖 Prompt for AI Agents
In tests/performance-benchmark.php around line 156, the SQL query directly
interpolates the $table variable into the string, creating a potential SQL
injection; update the query to use $wpdb->prepare() with the %i identifier
placeholder (WordPress 6.2+) for the table name or, at minimum, wrap the table
name in backticks and escape it via esc_sql() so the final query is built
safely rather than concatenated directly.

@@ -0,0 +1,247 @@
<?php
Contributor


⚠️ Potential issue | 🔴 Critical

Fix the filename to match WordPress coding standards.

The pipeline is failing because WordPress coding standards require class files to be prefixed with class-. This file should be renamed from performance-comparator.php to class-performance-comparator.php.

Rename the file:

tests/performance-comparator.php → tests/class-performance-comparator.php

Then update references in the workflow file (.github/workflows/performance.yml line 105) and any documentation.

🧰 Tools
🪛 GitHub Actions: Code Quality

[error] 1-1: Class file names should be based on the class name with "class-" prepended. Expected class-performance-comparator.php, but found performance-comparator.php.

🪛 GitHub Check: Code Quality Checks

[failure] 1-1:
Missing file doc comment


[failure] 1-1:
Class file names should be based on the class name with "class-" prepended. Expected class-performance-comparator.php, but found performance-comparator.php.

🤖 Prompt for AI Agents
In tests/performance-comparator.php around line 1, the filename doesn't follow
WordPress coding standards which require class files to be prefixed with
"class-"; rename the file to tests/class-performance-comparator.php, update all
references to the old filename (notably .github/workflows/performance.yml at
line 105) and any documentation or tests that import or require the file so they
point to the new path, and verify CI passes after renaming.

Comment on lines +107 to +146
foreach ($this->thresholds as $metric => $threshold) {
    if (!isset($baseline[$metric]) || !isset($current[$metric])) {
        continue;
    }

    $baseline_value = $baseline[$metric];
    $current_value = $current[$metric];

    if ($baseline_value == 0) {
        continue; // Avoid division by zero
    }

    $change_percent = (($current_value - $baseline_value) / $baseline_value) * 100;

    $result['changes'][$metric] = [
        'baseline' => $baseline_value,
        'current' => $current_value,
        'change_percent' => round($change_percent, 2),
        'change_absolute' => $current_value - $baseline_value
    ];

    // Check for critical regression
    if ($change_percent > $this->critical_thresholds[$metric]) {
        $result['status'] = 'critical_regression';
        $result['issues'][] = "Critical: {$metric} increased by {$change_percent}% (threshold: {$this->critical_thresholds[$metric]}%)";
    }
    // Check for normal regression
    elseif ($change_percent > $this->thresholds[$metric]) {
        if ($result['status'] !== 'critical_regression') {
            $result['status'] = 'regression';
        }
        $result['issues'][] = "Warning: {$metric} increased by {$change_percent}% (threshold: {$this->thresholds[$metric]}%)";
    }
    // Check for improvement
    elseif ($change_percent < -5) { // 5% improvement threshold
        if ($result['status'] === 'no_change') {
            $result['status'] = 'improvement';
        }
    }
}
Contributor


⚠️ Potential issue | 🟡 Minor

Unused variable in foreach loop.

The variable $threshold on line 107 is declared but never used within the loop. The loop only needs the $metric key to access $this->thresholds[$metric] and $this->critical_thresholds[$metric].

Apply this diff:

-        foreach ($this->thresholds as $metric => $threshold) {
+        foreach ($this->thresholds as $metric => $_) {

Or use array_keys():

-        foreach ($this->thresholds as $metric => $threshold) {
+        foreach (array_keys($this->thresholds) as $metric) {
🧰 Tools
🪛 PHPMD (2.15.0)

107-107: Avoid unused local variables such as '$threshold'. (undefined)

(UnusedLocalVariable)

🤖 Prompt for AI Agents
In tests/performance-comparator.php around lines 107 to 146, the foreach
declares an unused $threshold variable; change the loop to only iterate keys
(e.g. foreach (array_keys($this->thresholds) as $metric) ) or replace $threshold
with a throwaway variable (foreach ($this->thresholds as $metric => $_) ) and
then use $metric to access $this->thresholds[$metric] and
$this->critical_thresholds[$metric]; no other logic changes required.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/WP_Ultimo/Tax/Dashboard_Taxes_Tab_Test.php (1)

1-1: Rename file to match coding standard and unblock Code Quality job

The Code Quality pipeline expects test filenames to be lowercase with hyphens. Please rename:

  • Dashboard_Taxes_Tab_Test.php
    to:
  • dashboard-taxes-tab-test.php

(or an equivalent lowercase-with-hyphens variant that matches your convention).

This should clear the [error] Filenames should be all lowercase with hyphens as word separators failure.

🧹 Nitpick comments (2)
tests/WP_Ultimo/Tax/Dashboard_Taxes_Tab_Test.php (2)

20-22: Verify that separate-process annotations are actually required

@runInSeparateProcess with @preserveGlobalState disabled is relatively heavy for a single test and can slow the suite or interact unexpectedly with the WordPress test bootstrap if not strictly needed.

If the only shared state here is the pre_site_transient_wu_tax_monthly_stats filter (already cleaned up in tearDown()), consider dropping these annotations unless there is a known isolation issue they fix.


41-44: Mock configuration: consider PHPUnit’s newer onlyMethods()/addMethods() APIs

Using:

$dashboard_admin_page = $this->getMockBuilder( Dashboard_Taxes_Tab::class )
	->disableOriginalConstructor()
	->setMethods( [ 'output' ] )
	->getMock();

achieves the intended behavior on older PHPUnit versions (only output() is mocked, register_scripts() runs real code), which is appropriate for this test.

However, setMethods() is deprecated in newer PHPUnit versions in favor of onlyMethods() / addMethods(). If your test stack is on (or moving to) a newer PHPUnit:

->onlyMethods( [ 'output' ] )

would be the forward-compatible replacement.

Please confirm against the PHPUnit version used in this project before changing it, as older versions may not support onlyMethods().

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 98c4bd8 and c218952.

📒 Files selected for processing (6)
  • tests/WP_Ultimo/Limits/Customer_User_Role_Limits_Test.php (1 hunks)
  • tests/WP_Ultimo/Models/Customer_Test.php (1 hunks)
  • tests/WP_Ultimo/Sunrise_Test.php (2 hunks)
  • tests/WP_Ultimo/Tax/Dashboard_Taxes_Tab_Test.php (1 hunks)
  • tests/functional/SSO_Functional_Test.php (3 hunks)
  • tests/unit/SSO_Test.php (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
tests/WP_Ultimo/Tax/Dashboard_Taxes_Tab_Test.php (3)
tests/WP_Ultimo/Limits/Customer_User_Role_Limits_Test.php (1)
  • setUp (17-41)
tests/functional/SSO_Functional_Test.php (2)
  • setUp (12-27)
  • tearDown (29-35)
tests/unit/SSO_Test.php (2)
  • setUp (11-17)
  • tearDown (19-25)
tests/WP_Ultimo/Models/Customer_Test.php (4)
tests/WP_Ultimo/Tax/Dashboard_Taxes_Tab_Test.php (1)
  • tearDown (13-16)
tests/functional/SSO_Functional_Test.php (1)
  • tearDown (29-35)
tests/unit/SSO_Test.php (1)
  • tearDown (19-25)
inc/models/class-customer.php (1)
  • Customer (28-910)
🪛 GitHub Actions: Code Quality
tests/WP_Ultimo/Tax/Dashboard_Taxes_Tab_Test.php

[error] 1-1: Filenames should be all lowercase with hyphens as word separators. Expected dashboard-taxes-tab-test.php, but found Dashboard_Taxes_Tab_Test.php.

🪛 GitHub Check: Code Quality Checks
tests/WP_Ultimo/Tax/Dashboard_Taxes_Tab_Test.php

[failure] 31-31:
Array keys must be surrounded by spaces unless they contain a string or an integer.


[failure] 13-13:
Missing doc comment for function tearDown()


[failure] 9-9:
Missing doc comment for function setUp()


[warning] 9-9:
Possible useless method overriding detected

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: cypress (8.2, chrome)
  • GitHub Check: cypress (8.1, chrome)
🔇 Additional comments (7)
tests/unit/SSO_Test.php (2)

13-14: LGTM - Good test isolation practice.

Flushing caches in setUp ensures a clean state for each test, improving test reliability and isolation.


58-59: LGTM - Appropriate for unit testing.

Using a fixed user ID is acceptable here since the transient is mocked. The test correctly validates that the session handler reads the transient value without requiring database interactions.

tests/functional/SSO_Functional_Test.php (4)

14-15: Good addition for test isolation.

Flushing the cache ensures a clean state before each test, which is essential for reliable benchmarking and preventing test pollution.


33-33: Proper test cleanup.

Correctly removes the network_home_url filter to prevent pollution between tests.


97-106: Mock appropriately avoids database overhead.

Mocking network_home_url to prevent DB calls aligns well with the performance benchmarking goals of this PR. The mock implementation is simple but sufficient for this test's needs.
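For reference, such a filter mock usually amounts to a closure returning a fixed URL. A sketch assuming a hypothetical test domain, not the exact code in this test:

// Sketch: mock network_home_url so the SSO test never touches the database.
$mock_network_home_url = function ($url) {
    return 'https://network.example.test/'; // hypothetical fixed value for the test
};

add_filter('network_home_url', $mock_network_home_url);

// ... exercise the SSO code under test ...

// Remove the filter in tearDown() so the mock does not leak into other tests.
remove_filter('network_home_url', $mock_network_home_url);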


92-95: Examined the full test implementation and tearDown method to verify whether separate-process execution is necessary.

Based on the evidence gathered, I can now provide a strong assessment. The web search confirms that the whole stack of registered actions and filters will be reset after each test via the shared tear_down() method in WordPress tests. Additionally, the WordPress test suite does a good job reducing your work when writing tests, as there are countless occasions where you don't need to clean up after yourself, though some tests show filter removal due to developers not being aware the suite already handles cleanup.

The script output showed that network_home_url is only modified in the single test method and no other tests in the class touch this filter. Combined with the fact that process isolation comes with a performance cost, these annotations appear to add unnecessary overhead in a performance-focused PR.

Consider removing @runInSeparateProcess and @preserveGlobalState disabled annotations.

The test only mocks a single filter (network_home_url) which WordPress automatically resets after each test. Process isolation adds performance overhead, contradicting this PR's benchmarking goals. The filter mock and tearDown cleanup should provide sufficient isolation without spawning separate processes.

tests/WP_Ultimo/Sunrise_Test.php (1)

11-11: LGTM! Proper alignment with WordPress testing conventions.

The change to extend WP_UnitTestCase instead of the generic TestCase is appropriate for WordPress plugin testing and provides access to WordPress-specific test utilities and fixtures needed for multisite operations.

Also applies to: 16-16

Contributor


⚠️ Potential issue | 🟡 Minor

Clean up the mocked filter in tearDown().

The pre_site_transient filter should be removed in tearDown() to prevent test pollution. Additionally, $_REQUEST['broker'] set at line 68 should also be cleaned up.

Apply this diff to add cleanup in the tearDown method:

 public function tearDown(): void {
 	// Remove any filters set in tests.
 	remove_all_filters('wu_sso_enabled');
 	remove_all_filters('wu_sso_get_url_path');
 	remove_all_filters('determine_current_user');
+	// Clean up mocked transient filters
+	remove_all_filters('pre_site_transient_sso-' . $_REQUEST['broker'] . '-' . get_current_blog_id());
+	// Clean up request globals
+	unset($_REQUEST['broker']);
 	parent::tearDown();
 }

Note: You may need to store the broker value as a class property if $_REQUEST['broker'] is not reliably accessible in tearDown.
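A sketch of that class-property approach, with illustrative names for the broker value and test method; the real test would set the property wherever it currently populates $_REQUEST['broker']:

// Sketch: remember the broker used in the test so tearDown() can clean up reliably.
class SSO_Cleanup_Sketch_Test extends WP_UnitTestCase {

    private $broker = '';

    public function test_broker_flow() {
        $this->broker       = 'broker-123'; // hypothetical value
        $_REQUEST['broker'] = $this->broker;

        // ... add the pre_site_transient filter and run assertions ...
    }

    public function tearDown(): void {
        if ('' !== $this->broker) {
            remove_all_filters('pre_site_transient_sso-' . $this->broker . '-' . get_current_blog_id());
            unset($_REQUEST['broker']);
        }

        parent::tearDown();
    }
}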

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In tests/unit/SSO_Test.php around lines 72 to 78, the test adds a
pre_site_transient filter and sets $_REQUEST['broker'] but does not remove/unset
them in tearDown, which can pollute other tests; update the test class by
storing the broker (and any site_id/user_id used to build the transient name) as
a class property when setting up the test, then in tearDown() remove the filter
added via remove_filter('pre_site_transient_sso-' . $this->broker . '-' .
$this->site_id, <the same callback or use has_filter to remove by priority>) and
unset $_REQUEST['broker'] (and any other global changes), ensuring the filter
removal uses the identical hook name and callback/priorities used when adding
it.

Comment on lines +102 to +115
// Ensure wp_users.can is properly set as an array before calling revoke_super_admin
// Clear all WordPress caches to ensure fresh option load
wp_cache_flush();

$wp_users_can = get_option('wp_users.can');
echo 'wp_users.can type: ' . gettype($wp_users_can) . PHP_EOL;
echo 'wp_users.can value: ';
var_export($wp_users_can);
echo PHP_EOL;

if (!is_array($wp_users_can)) {
update_option('wp_users.can', ['list_users' => true, 'promote_users' => true, 'remove_users' => true, 'edit_users' => true]);
}

Contributor


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

These changes appear unrelated to the PR objectives.

The PR is titled "Add benchmark check" and describes adding a performance benchmarking pipeline, but these changes modify test preconditions for user role limits. This suggests either:

  • These changes were committed to the wrong branch
  • This is a separate fix that should be in its own PR
  • The PR description is incorrect

Please clarify whether these test changes are intentional for this PR. If they're addressing a separate test failure, consider moving them to a dedicated PR with appropriate context.


🏁 Script executed:

#!/bin/bash
# Get PR metadata if available
if [ -d .git ]; then
  echo "=== Current branch and recent commits ==="
  git branch -v
  git log -1 --oneline
  echo ""
  echo "=== PR title/description from commit message ==="
  git log -1 --format=%B
  echo ""
  echo "=== List of changed files ==="
  git diff --name-only HEAD~1..HEAD 2>/dev/null || git diff --name-only HEAD~5..HEAD 2>/dev/null || echo "Cannot determine changed files"
fi

Length of output: 523


🏁 Script executed:

#!/bin/bash
# Inspect the full test file context around lines 102-115
if [ -f "tests/WP_Ultimo/Limits/Customer_User_Role_Limits_Test.php" ]; then
  echo "=== Test method containing lines 102-115 ==="
  sed -n '85,130p' "tests/WP_Ultimo/Limits/Customer_User_Role_Limits_Test.php"
fi

Length of output: 1797


🏁 Script executed:

#!/bin/bash
# Search for wp_users.can usage in codebase
rg -n "wp_users\.can" --type=php

Length of output: 672


🏁 Script executed:

#!/bin/bash
# Search for benchmarking/performance test related code in the PR changes
rg -n -i "benchmark|performance|speed|timing|profil" --type=php tests/

Length of output: 7133


🏁 Script executed:

#!/bin/bash
# Check for debug statements (echo, var_export, print_r) in test files
rg -n "(echo|var_export|print_r|var_dump)" "tests/WP_Ultimo/Limits/Customer_User_Role_Limits_Test.php"

Length of output: 160


Remove false premise about PR title; redirect focus to test code quality issues.

The PR is actually titled "Add some more tests" and does contain test additions—so the scope claim is incorrect. However, legitimate code quality concerns remain:

  1. Remove debug statements (lines 107–110): echo and var_export() should not appear in test code. Use test assertions or proper logging instead.

  2. Reconsider wp_cache_flush() (line 104): This heavy cache flush may be unnecessary or hiding an underlying issue. If caching is the root cause of test failures, that should be fixed directly rather than worked around.

  3. Clarify wp_users.can workaround (lines 106, 112–114): This option appears only in this test. Document why this precondition is necessary or remove if the underlying code should handle this state.

🤖 Prompt for AI Agents
In tests/WP_Ultimo/Limits/Customer_User_Role_Limits_Test.php around lines
102–115, remove the debug echo/var_export calls (lines 107–110) and replace them
with proper test setup or assertions; avoid using a global wp_cache_flush()
(line 104) — either remove it or replace with targeted cache invalidation (e.g.,
delete the specific option cache) or move cache handling into test
setUp/tearDown so tests don’t rely on a heavy global flush; finally, don’t leave
the wp_users.can workaround undocumented — either ensure the option is
deterministically set in the test setUp with update_option (so the test does not
depend on external state) or add a concise comment explaining why the option
must be seeded, and remove the conditional debug branching so the test is stable
and self-contained.
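
For illustration only, a minimal sketch of what the cleaned-up precondition could look like if the wp_users.can seeding is kept at all: the option is set deterministically in setUp() and only its own cache entry is dropped, so no debug output or global flush is needed (the 'options' cache group and the capability list are assumptions, not confirmed requirements of the code under test):

protected function setUp(): void {
	parent::setUp();

	// Drop only this option's cache entry instead of calling wp_cache_flush().
	wp_cache_delete('wp_users.can', 'options');

	// Seed the precondition deterministically so the test never branches on external state.
	update_option('wp_users.can', [
		'list_users'    => true,
		'promote_users' => true,
		'remove_users'  => true,
		'edit_users'    => true,
	]);
}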

if (function_exists('revoke_super_admin')) {
// Ensure wp_users.can is properly set as an array before calling revoke_super_admin
// Clear all WordPress caches to ensure fresh option load
wp_cache_flush();

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Verify the necessity of cache flushing and wp_users.can workaround.

This code appears to work around a test failure rather than address the root cause:

  1. wp_cache_flush() is a heavy operation that flushes ALL WordPress caches and may mask underlying bugs by clearing state that wouldn't normally be cleared.
  2. The wp_users.can option name doesn't appear to be a standard WordPress option. The initialization with specific capabilities seems arbitrary.

This pattern suggests the test environment or the code under test may have an issue that should be properly diagnosed and fixed.

Please investigate:

  1. Why is wp_users.can not an array in the test environment?
  2. Is wp_users.can a valid WordPress core option or a plugin-specific option?
  3. Whether the actual bug is in the production code, test setup, or WordPress test environment configuration?

Run the following script to search for references to this option in the codebase:

Also applies to: 112-114


🏁 Script executed:

#!/bin/bash
# Search for wp_users.can references in the codebase
rg -n -C3 "wp_users\.can" --type php

Length of output: 1847






🏁 Script executed:

#!/bin/bash
# Look for the test class setup/teardown methods and surrounding context
rg -n -B10 -A2 "function setUp|function tearDown|function setUpBeforeClass|function tearDownAfterClass" tests/WP_Ultimo/Limits/Customer_User_Role_Limits_Test.php

Length of output: 239


🏁 Script executed:

#!/bin/bash
# Search for revoke_super_admin usage/definition in codebase
rg -n "revoke_super_admin" --type php -C2

Length of output: 1102


🏁 Script executed:

#!/bin/bash
# Check if wp_users.can or similar options are set elsewhere in test files
rg -n "update_option.*wp_users|update_option.*can|get_option.*wp_users" --type php -C2

Length of output: 970


🌐 Web query:

WordPress wp_users.can option multisite capabilities

💡 Result:

Short answer: There is no built‑in database option or column named "wp_users.can". WordPress (including Multisite) stores capabilities/roles in wp_usermeta entries named with the blog/table prefix + "capabilities" (eg. "wp_2_capabilities"), and the WP_User class and helper functions map/merge those per‑site capability entries when checking permissions. Use functions like user_can_for_site() (or switch_to_blog() + user_can()) to test a user's capability on a particular site in a network. [1][2][3]

Sources

  • WP_User::_init_caps / get_role_caps (how WP builds a user's caps from the site-prefixed capabilities meta). [1]
  • Explanation of multisite storage: multiple meta_keys like "wp_#_capabilities" stored in wp_usermeta. [2]
  • user_can_for_site() reference (check capability for a user on a given site). [3]
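
To make that storage model concrete, a minimal sketch (the user ID, blog ID, and capability are placeholders; only core functions such as get_user_meta(), switch_to_blog(), and user_can() are used):

// Per-site capabilities live in usermeta, keyed by the site's table prefix.
global $wpdb;

$user_id = 1; // placeholder
$blog_id = 2; // placeholder

// Raw storage: a meta key such as "wp_2_capabilities" holding ['administrator' => true, ...].
$caps_meta_key = $wpdb->get_blog_prefix($blog_id) . 'capabilities';
$raw_caps      = get_user_meta($user_id, $caps_meta_key, true);

// Checking a capability on a specific site in the network.
switch_to_blog($blog_id);
$can_edit_users = user_can($user_id, 'edit_users');
restore_current_blog();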

Remove debug output and investigate the root cause of the revoke_super_admin failure.

This test code has multiple issues:

  1. Non-existent option: wp_users.can is not a standard WordPress option. Capabilities are stored in wp_usermeta with site-prefixed keys (e.g., wp_2_capabilities), not as options.

  2. Debug artifacts: Lines 107-109 contain echo and var_export() statements that should not be in test code.

  3. Unnecessary workaround: The wp_cache_flush() at line 104 and the conditional initialization at lines 112-114 appear to work around a deeper issue with how revoke_super_admin() is being called, rather than addressing the root cause.

Remove the debug output, the cache flush, and the option initialization. Then investigate why revoke_super_admin() requires this workaround—it likely indicates either:

  • The test environment lacks proper multisite setup
  • The production code under test has a bug when called in a test context
🤖 Prompt for AI Agents
In tests/WP_Ultimo/Limits/Customer_User_Role_Limits_Test.php around lines
104–114, remove the debug artifacts (echo/var_export), drop the wp_cache_flush()
call at line 104, and remove the non-standard option initialization for
"wp_users.can"; instead ensure capabilities are handled via usermeta or proper
role/capability helpers. After cleaning the test, investigate why
revoke_super_admin() needed the workaround by confirming the test bootstrap
provides a proper multisite environment (network setup, current_blog/site
constants, and super admin list) and by reproducing revoke_super_admin() in
isolation to determine whether the production code incorrectly depends on cache
or option state when run in tests; fix the test setup or adjust the production
code so revoke_super_admin() behaves correctly in the test context.
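
If it helps the investigation, a minimal sketch of exercising revoke_super_admin() in isolation inside a multisite-bootstrapped WP_UnitTestCase (the test name and assertions are illustrative, not part of the existing suite):

public function test_revoke_super_admin_in_isolation(): void {
	if (!is_multisite() || !function_exists('revoke_super_admin')) {
		$this->markTestSkipped('Requires a multisite test bootstrap.');
	}

	$user_id = self::factory()->user->create();

	grant_super_admin($user_id);
	$this->assertTrue(is_super_admin($user_id));

	revoke_super_admin($user_id);
	$this->assertFalse(is_super_admin($user_id));
}

If this passes without any option seeding or cache flushing, the workaround in the test under review is most likely masking a setup problem rather than a bug in revoke_super_admin() itself.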

Comment on lines +107 to +110
echo 'wp_users.can type: ' . gettype($wp_users_can) . PHP_EOL;
echo 'wp_users.can value: ';
var_export($wp_users_can);
echo PHP_EOL;

⚠️ Potential issue | 🔴 Critical

Remove debug statements before merging.

These echo and var_export statements pollute test output and should not be committed to the codebase. They appear to be temporary debugging code left in accidentally.

Apply this diff to remove the debug statements:

-		$wp_users_can = get_option('wp_users.can');
-		echo 'wp_users.can type: ' . gettype($wp_users_can) . PHP_EOL;
-		echo 'wp_users.can value: ';
-		var_export($wp_users_can);
-		echo PHP_EOL;
-		
+		$wp_users_can = get_option('wp_users.can');
+		
 		if (!is_array($wp_users_can)) {

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In tests/WP_Ultimo/Limits/Customer_User_Role_Limits_Test.php around lines 107 to
110, there are leftover debug statements (echo and var_export) that pollute test
output; remove those four lines so the test no longer prints debugging info,
ensuring the test output stays clean before merging.

Comment on lines +390 to +405
/**
* Tear down test environment.
*/
public function tearDown(): void {
// Clean up created customers
$customers = Customer::get_all();
if ($customers) {
foreach ($customers as $customer) {
if ($customer->get_id()) {
$customer->delete();
}
}
}

parent::tearDown();
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Overly broad cleanup may cause test interference.

The tearDown deletes ALL customers in the database via Customer::get_all(), not just those created by the current test. This could cause issues with parallel test execution or inadvertently delete pre-existing test data.

Additionally, most tests in this file don't call save() (only test_verification_key_functionality at line 325 does), so the database cleanup may be unnecessary since WP_UnitTestCase::tearDown() typically resets database state between tests.

Recommended approach:

Track created customers in an instance property and delete only those:

 class Customer_Test extends WP_UnitTestCase {
+
+	/**
+	 * Customers created during tests.
+	 *
+	 * @var Customer[]
+	 */
+	private $created_customers = [];
+
+	/**
+	 * Helper to create and track a customer.
+	 */
+	private function create_and_track_customer(): Customer {
+		$customer = new Customer();
+		$this->created_customers[] = $customer;
+		return $customer;
+	}

	/**
	 * Tear down test environment.
	 */
	public function tearDown(): void {
-		// Clean up created customers
-		$customers = Customer::get_all();
-		if ($customers) {
-			foreach ($customers as $customer) {
-				if ($customer->get_id()) {
-					$customer->delete();
-				}
+		// Clean up only customers created by this test
+		foreach ($this->created_customers as $customer) {
+			if ($customer->get_id()) {
+				$customer->delete();
			}
		}
-		
+		$this->created_customers = [];
+
		parent::tearDown();
	}

Then update test_verification_key_functionality (and any future tests that persist) to use $this->create_and_track_customer().

Verify that Customer::get_all() and Customer::delete() methods exist and behave as expected:


🏁 Script executed:

#!/bin/bash
# Verify Customer::get_all() static method exists
ast-grep --pattern 'class Customer {
  $$$
  static function get_all($$$) {
    $$$
  }
  $$$
}'

# Also check Base_Model for get_all() since Customer extends it
ast-grep --pattern 'class Base_Model {
  $$$
  static function get_all($$$) {
    $$$
  }
  $$$
}'

# Verify delete() instance method
ast-grep --pattern 'class Customer {
  $$$
  function delete($$$) {
    $$$
  }
  $$$
}'

# Also check Base_Model for delete()
ast-grep --pattern 'class Base_Model {
  $$$
  function delete($$$) {
    $$$
  }
  $$$
}'

# Search for any get_all usages to understand the pattern
rg -n --type=php -C3 '::get_all\(\)'

Length of output: 1007


🏁 Script executed:

#!/bin/bash
# Find Customer class definition file
fd -type f -name "*.php" -exec grep -l "^class Customer" {} \;

# Find Base_Model class definition file
fd -type f -name "*.php" -exec grep -l "^class Base_Model" {} \;

Length of output: 341


🏁 Script executed:

#!/bin/bash
# Search for get_all() method definition in PHP files
rg -n --type=php "function get_all\(" -A 10 | head -50

# Search for delete() method definition
rg -n --type=php "function delete\(" -A 5 | head -50

Length of output: 2618


🏁 Script executed:

#!/bin/bash
# Look in inc/ directory for Customer class
fd -path "inc/*" -type f -name "*.php" -exec grep -l "class Customer" {} \;

# Look in inc/ directory for Base_Model class
fd -path "inc/*" -type f -name "*.php" -exec grep -l "class Base_Model" {} \;

Length of output: 337


🏁 Script executed:

#!/bin/bash
# Find Customer class file and verify it extends Base_Model
rg -n "class Customer" -A 2 --type=php | head -20

# Confirm the inheritance chain
rg -n "extends Base_Model" --type=php

Length of output: 2036


🏁 Script executed:

#!/bin/bash
# Check if there are any Customer-specific overrides of get_all()
rg -n "class Customer" -A 50 --type=php | grep -E "(function get_all|class Customer)"

# Check Base_Model get_all() more thoroughly to understand query behavior
rg -n "function get_all" -B 5 -A 15 --type=php

Length of output: 40449


🏁 Script executed:

#!/bin/bash
# Verify that other tests in the file don't cleanup all records similarly
rg -n "::get_all\(\)" tests/ --type=php

# Check if there are other tearDown methods that show the pattern
rg -n "function tearDown" -A 10 tests/WP_Ultimo/Models/Customer_Test.php

Length of output: 362


Overly broad cleanup deletes all customers indiscriminately.

The tearDown() method calls Customer::get_all() which retrieves all customers in the database via the inherited Base_Model method that queries without filtering, then deletes every customer found. This violates test isolation—data created by other tests or test runs could be deleted, risking parallel test failures.

Additionally, only one test in this file explicitly calls save() (line 325), making the database cleanup largely unnecessary since WP_UnitTestCase::tearDown() typically resets database state between tests.

Recommended approach:

Track created customers in an instance property and delete only those:

 class Customer_Test extends WP_UnitTestCase {
+
+	/**
+	 * Customers created during tests.
+	 *
+	 * @var Customer[]
+	 */
+	private $created_customers = [];
+
+	/**
+	 * Helper to create and track a customer.
+	 */
+	private function create_and_track_customer(): Customer {
+		$customer = new Customer();
+		$this->created_customers[] = $customer;
+		return $customer;
+	}

	/**
	 * Tear down test environment.
	 */
	public function tearDown(): void {
-		// Clean up created customers
-		$customers = Customer::get_all();
-		if ($customers) {
-			foreach ($customers as $customer) {
-				if ($customer->get_id()) {
-					$customer->delete();
-				}
+		// Clean up only customers created by this test
+		foreach ($this->created_customers as $customer) {
+			if ($customer->get_id()) {
+				$customer->delete();
			}
		}
-		
+		$this->created_customers = [];
+
		parent::tearDown();
	}

Update test_verification_key_functionality (and any future tests that persist) to use $this->create_and_track_customer().

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In tests/WP_Ultimo/Models/Customer_Test.php around lines 390 to 405, the
tearDown() currently calls Customer::get_all() and deletes every customer which
is too broad and breaks test isolation; replace that global cleanup with an
instance-level tracking approach: add a private property (e.g.
$created_customers = []) and a helper method create_and_track_customer() that
saves a customer and pushes its id/object into the tracker; update tests that
call save() (like test_verification_key_functionality) to use the helper, and
modify tearDown() to only loop over and delete entries in $created_customers (or
skip deletion if the tracker is empty), then call parent::tearDown().
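
As a usage sketch of that tracking approach, assuming the create_and_track_customer() helper proposed above is added (the configuration and assertion are placeholders for whatever the existing test already does):

public function test_verification_key_functionality(): void {
	// Created through the helper so tearDown() only deletes what this test persisted.
	$customer = $this->create_and_track_customer();

	// ...configure the customer exactly as the current test does...
	$customer->save();

	$this->assertNotEmpty($customer->get_id());
}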

Comment on lines +198 to +200
// Skip this test as it requires complex WordPress multisite operations
// that involve blog switching and logging, which can fail in test environment
$this->markTestSkipped('Requires complex multisite setup with logging capabilities');

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Consider tracking re-enablement of these skipped tests.

While skipping tests that require complex multisite setup is reasonable, these tests cover important operations (manage_sunrise_updates and try_upgrade) that should ideally be validated. The skip messages indicate infrastructure limitations rather than unnecessary tests.

Consider:

  1. Adding a TODO comment or issue reference to track re-enabling these tests when proper multisite test infrastructure is available
  2. Documenting whether these operations are tested elsewhere (e.g., integration tests, manual QA)
  3. Exploring conditional test execution in a multisite-enabled CI environment

Can you clarify:

  • Are these tests blocking the benchmark pipeline, or is this refactoring preparatory work?
  • Is there alternative coverage for these multisite operations?

Also applies to: 207-209


🏁 Script executed:

#!/bin/bash
# Check if there are other tests covering manage_sunrise_updates or try_upgrade functionality
rg -n --type=php -C3 'manage_sunrise_updates|try_upgrade' -g '!tests/WP_Ultimo/Sunrise_Test.php'

Length of output: 1907


🏁 Script executed:

#!/bin/bash
# Find all test files in the repository
fd -e php -e xml -path '*/test*' | head -20

Length of output: 168


🏁 Script executed:

#!/bin/bash
# Look for phpunit configuration files
fd -e xml | grep -E '(phpunit|test)' | head -20

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Check for multisite or test group annotations in Sunrise_Test.php
rg -n '@group|WP_TESTS_MULTISITE|markTestSkipped' tests/WP_Ultimo/Sunrise_Test.php -B2 -A2

Length of output: 537


🏁 Script executed:

#!/bin/bash
# Search for test files that might test installer or WP_Ultimo
fd -type f -path '*/test*' -name '*.php' | xargs rg -l 'test.*installer|test.*wp.*ultimo|test.*sunrise' 2>/dev/null | head -10

Length of output: 203


🏁 Script executed:

#!/bin/bash
# Find all test files in the repository using corrected syntax
find tests -name '*.php' -type f 2>/dev/null | sort

Length of output: 1742


🏁 Script executed:

#!/bin/bash
# Look at the skipped test implementations to understand what they were testing
rg -n 'function test_manage_sunrise_updates_no_fatal_errors|function test_try_upgrade_no_fatal_errors' tests/WP_Ultimo/Sunrise_Test.php -A15

Length of output: 739


🏁 Script executed:

#!/bin/bash
# Search for any mentions of "manage_sunrise_updates" or "try_upgrade" in test files
find tests -name '*.php' -type f -exec grep -l 'manage_sunrise_updates\|try_upgrade' {} \; 2>/dev/null

Length of output: 32


🏁 Script executed:

#!/bin/bash
# Check if there are tests for WP_Ultimo class that might indirectly test manage_sunrise_updates
rg -n 'test.*WP_Ultimo|class.*WP_Ultimo.*Test' tests/ -l

Length of output: 24


🏁 Script executed:

#!/bin/bash
# Check for TODO, FIXME, or issue references in Sunrise_Test.php
rg -n 'TODO|FIXME|Issue|github.com|tracked' tests/WP_Ultimo/Sunrise_Test.php

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Check the entire Sunrise_Test.php file to see if there's multisite setup or configuration
cat tests/WP_Ultimo/Sunrise_Test.php | head -50

Length of output: 1147


🏁 Script executed:

#!/bin/bash
# Check if WP_Ultimo_Test.php tests the init method that calls manage_sunrise_updates
rg -n 'function test|init|manage_sunrise' tests/WP_Ultimo_Test.php -B2 -A5

Length of output: 1233


🏁 Script executed:

#!/bin/bash
# Look at the WP_Ultimo class init method to confirm it calls manage_sunrise_updates
rg -n 'public function init|manage_sunrise_updates' inc/class-wp-ultimo.php -B2 -A5

Length of output: 354


🏁 Script executed:

#!/bin/bash
# Check if there are any integration or bootstrap tests that might exercise these code paths
rg -n 'manage_sunrise|try_upgrade' tests/bootstrap.php tests/functional/ -B2 -A2 2>/dev/null

Length of output: 0


Add issue tracking and TODO comment for re-enabling skipped tests covering critical multisite operations.

Verification confirms these tests have NO alternative coverage. The functions manage_sunrise_updates() (called in WP_Ultimo::init()) and try_upgrade() (called in Core_Installer::_install_sunrise()) are critical production code paths with zero test coverage in the current state.

While skipping multisite-dependent tests is understandable for CI limitations, this requires explicit tracking:

  1. Add a TODO comment with a GitHub issue link in both skipped tests documenting the need to re-enable them when multisite test infrastructure is available
  2. Consider exploring conditional test execution or separate multisite CI configuration for full coverage
  3. Without tracking, this coverage debt may be forgotten indefinitely

Example:

$this->markTestSkipped('Requires multisite setup (see #XXXX); can run locally with multisite.xml');
🤖 Prompt for AI Agents
tests/WP_Ultimo/Sunrise_Test.php around lines 198-200: the skipped test
currently calls markTestSkipped without tracking metadata; add a TODO comment
above the skip referencing a GitHub issue (e.g. TODO: re-enable when multisite
CI available — see #XXXX) and change the markTestSkipped message to include that
same issue link and note how to run locally (e.g. can run locally with
multisite.xml); also locate the other skipped test(s) covering
manage_sunrise_updates()/try_upgrade() and apply the same TODO + issue-linked
skip message so this uncovered critical functionality is tracked for
re-enablement or CI configuration work.
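
One way to keep the coverage runnable where the environment allows it is an environment-conditional skip rather than an unconditional one (the issue number stays a placeholder; the body is a sketch, not the original test):

public function test_manage_sunrise_updates_no_fatal_errors(): void {
	if (!is_multisite()) {
		// TODO: re-enable unconditionally once multisite CI exists; see #XXXX.
		$this->markTestSkipped('Requires multisite (see #XXXX); run locally with a multisite PHPUnit config.');
	}

	// ...original assertions exercising manage_sunrise_updates()...
}

The same pattern applies to test_try_upgrade_no_fatal_errors(), so both skips stay tied to the same tracking issue.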

Comment on lines +14 to +16
remove_all_filters('pre_site_transient_wu_tax_monthly_stats');
parent::tearDown();
}

/**
* Test that register_scripts method registers the correct scripts.
*
* @runInSeparateProcess

⚠️ Potential issue | 🟡 Minor

setUp/tearDown lifecycle: behavior is fine, but standards/docs can be improved

  • tearDown() correctly removes the pre_site_transient_wu_tax_monthly_stats filter and then calls parent::tearDown(). That’s good isolation.
  • setUp() currently just forwards to parent::setUp() and adds no behavior, which static analysis flags as a “possible useless method overriding”.

To better align with the Code Quality hints:

  • Either remove setUp() until you actually need it, or add a short docblock explaining its purpose if you intend to extend it soon.
  • Add docblocks for both methods to satisfy the “Missing doc comment for function setUp()/tearDown()” sniffs, e.g.:
/**
 * Set up test environment.
 */
protected function setUp(): void {
	parent::setUp();
}

/**
 * Clean up filters and test environment.
 */
protected function tearDown(): void {
	remove_all_filters( 'pre_site_transient_wu_tax_monthly_stats' );
	parent::tearDown();
}
🧰 Tools
🪛 GitHub Check: Code Quality Checks

[failure] 13-13:
Missing doc comment for function tearDown()


[failure] 9-9:
Missing doc comment for function setUp()


[warning] 9-9:
Possible useless method overriding detected

🤖 Prompt for AI Agents
In tests/WP_Ultimo/Tax/Dashboard_Taxes_Tab_Test.php around lines 9 to 16, the
setUp() override is currently a no-op and both setUp()/tearDown() lack docblocks
which triggers static analysis warnings; either remove the empty setUp() method
entirely if not needed, or add a short docblock above it describing its intent
(e.g., "Set up test environment.") and also add a docblock above tearDown()
(e.g., "Clean up filters and test environment.") while keeping the existing
remove_all_filters call and parent::tearDown() behavior.


⚠️ Potential issue | 🟡 Minor

Mocking tax monthly stats via filter looks good; fix array key spacing to satisfy sniff

The approach of using pre_site_transient_wu_tax_monthly_stats to inject deterministic monthly tax data is sound, and combined with the tearDown() cleanup it should not leak state to other tests.

The Code Quality check, however, flags the array key spacing. The "Array keys must be surrounded by spaces…" sniff targets the variable key in $mock_tax_data[$i]: variable keys need a space inside each bracket ($mock_tax_data[ $i ]), while string and integer literal keys do not, for example:

-			function () {
-				$mock_tax_data = [];
-				for ($i = 1; $i <= 12; $i++) {
-					$mock_tax_data[$i] = [
-						'order_count' => 0,
-						'total_tax'   => 0,
-					];
-				}
-				return $mock_tax_data;
-			}
+			function () {
+				$mock_tax_data = [];
+
+				for ( $i = 1; $i <= 12; $i++ ) {
+					$mock_tax_data[ $i ] = [
+						'order_count' => 0,
+						'total_tax'   => 0,
+					];
+				}
+
+				return $mock_tax_data;
+			}

(Adjust spacing and blank lines to your project's standard; the change that actually satisfies the sniff is the whitespace inside $mock_tax_data[ $i ].)

You may also want to tweak the comment on Line 25 to more accurately describe what’s mocked (the site transient rather than wu_calculate_taxes_by_month directly), but that’s optional.

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 GitHub Check: Code Quality Checks

[failure] 31-31:
Array keys must be surrounded by spaces unless they contain a string or an integer.

🤖 Prompt for AI Agents
In tests/WP_Ultimo/Tax/Dashboard_Taxes_Tab_Test.php around lines 25 to 38, the
variable array key in the mocked site transient ($mock_tax_data[$i]) triggers the
"Array keys must be surrounded by spaces unless they contain a string or an
integer" sniff; add spaces inside the brackets ($mock_tax_data[ $i ]) while
leaving string-literal keys such as 'order_count' and 'total_tax' untouched, and
optionally update the comment on line 25 to say you're mocking the site transient
(pre_site_transient_wu_tax_monthly_stats) rather than the
wu_calculate_taxes_by_month function.
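
For completeness, a sketch of how that mock might be wired end to end, combining the closure from the diff above with the tearDown() cleanup shown earlier (the filter name comes from the test; the method bodies are otherwise illustrative):

protected function setUp(): void {
	parent::setUp();

	// Short-circuit get_site_transient('wu_tax_monthly_stats') with deterministic data.
	add_filter(
		'pre_site_transient_wu_tax_monthly_stats',
		function () {
			$mock_tax_data = [];

			for ($i = 1; $i <= 12; $i++) {
				$mock_tax_data[ $i ] = [
					'order_count' => 0,
					'total_tax'   => 0,
				];
			}

			return $mock_tax_data;
		}
	);
}

protected function tearDown(): void {
	remove_all_filters('pre_site_transient_wu_tax_monthly_stats');
	parent::tearDown();
}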
