
Add ui-test skill — adversarial UI testing with browse CLI#56

Open
shrey150 wants to merge 18 commits into main from shrey/ui-test-skill

Conversation

@shrey150 (Contributor) commented Mar 26, 2026

Summary

Adds a ui-test skill that uses the browse CLI to run AI-powered adversarial UI tests in a real browser. The agent analyzes git diffs to test only what changed (or explores the full app), checking functional correctness, accessibility, responsive layout, and UX heuristics.

Key features:

  • Diff-driven testing — git diff → targeted tests for changed pages/components
  • Exploratory testing — agent navigates freely to find bugs without a predefined suite
  • Parallel execution — coordinator agent fans out to sub-agents (up to 5), each with a 20-step hard cap
  • Structured assertions — STEP_PASS|id|evidence / STEP_FAIL|id|expected → actual with screenshot evidence
  • Local + remote — localhost uses local browser, deployed sites use Browserbase
  • Deterministic checks — axe-core a11y, console errors, broken images, overflow detection, form labels
  • Adversarial patterns — XSS injection, empty submit, rapid click, keyboard-only nav, focus traps
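As an illustration of the deterministic checks, the form-labels check reduces to a small in-page predicate. This is a hypothetical sketch of the rule (the skill's actual recipe lives in references/browser-recipes.md and runs via browse eval); it encodes the standard DOM labeling conventions: a native `labels` association, an `aria-label`, or an `aria-labelledby` reference.

```javascript
// Sketch of the form-label check: a control counts as labeled if it has
// an associated <label> (the native `labels` collection), an aria-label,
// or an aria-labelledby reference.
function isLabeled(el) {
  const attr = (name) => (el.getAttribute ? el.getAttribute(name) : null);
  return Boolean(
    (el.labels && el.labels.length > 0) ||
    attr('aria-label') ||
    attr('aria-labelledby')
  );
}

// Returns the controls that fail the check, for reporting in a STEP_FAIL.
function unlabeledControls(controls) {
  return controls.filter((el) => !isLabeled(el));
}
```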

Files

skills/ui-test/
├── SKILL.md                              # Skill definition
├── EXAMPLES.md                           # 9 worked examples with assertions
├── LICENSE.txt                           # MIT
├── README.md                             # Overview
├── rules/
│   └── ux-heuristics.md                 # 6 evaluation frameworks
└── references/
    ├── browser-recipes.md               # Deterministic check recipes
    ├── design-system.example.md         # Example design system template (copy to design-system.md)
    └── exploratory-testing.md           # Agent-driven QA guide

Test plan

  • Smoke tested diff-driven testing (component rendering, before/after snapshots)
  • Smoke tested adversarial patterns (empty submit, XSS, rapid click, keyboard-only)
  • Smoke tested axe-core accessibility audit
  • Smoke tested responsive screenshots + overflow/touch-target checks
  • Smoke tested console error capture
  • Smoke tested parallel execution with named Browserbase sessions
  • Found real bugs in test app (Escape not closing modals, undersized touch targets)
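The undersized-touch-target bug above is the kind of finding a one-line size check surfaces. A minimal sketch, assuming the common 44×44 CSS px minimum (Apple's guideline; the skill's actual threshold may differ):

```javascript
// Flag interactive elements whose bounding box is smaller than the
// minimum comfortable touch size (44x44 CSS px per Apple's HIG).
function isUndersizedTouchTarget(rect, minPx = 44) {
  return rect.width < minPx || rect.height < minPx;
}
```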

🤖 Generated with Claude Code


Note

Low Risk
Low risk because this PR only adds new documentation/skill content and does not modify runtime application logic. The main risk is maintenance/expectation mismatch if the documented browse/Browserbase behaviors change.

Overview
Introduces a new skills/ui-test skill that standardizes agentic UI QA in a real browser via the browse CLI, including diff-driven, exploratory, and parallel (multi-session) workflows with a strict before/after assertion protocol.

Adds extensive supporting docs: worked end-to-end examples (EXAMPLES.md), deterministic check recipes (axe-core, console errors, broken images, responsive/overflow/touch-target checks), and UX/a11y heuristic reference material, plus MIT licensing and install/usage guidance.

Written by Cursor Bugbot for commit 28e875a.

Builds on #52 with three key additions:

1. Local/remote mode selection — localhost uses local browser (no API key),
   deployed sites use Browserbase via cookie-sync for authenticated testing

2. Diff-driven testing — analyze git diff, generate targeted tests for what
   changed, execute with before/after snapshot comparison

3. Structured assertion protocol — STEP_PASS/STEP_FAIL markers with evidence,
   deterministic checks (axe-core, console errors, overflow detection), and
   adversarial testing patterns (XSS, empty submit, rapid click, keyboard-only)
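The overflow detection named in item 3 boils down to comparing scroll and client widths at each viewport. A hypothetical sketch of the comparison (the real recipe runs in-page via browse eval; the tolerance value is illustrative):

```javascript
// Horizontal overflow: content wider than the viewport makes scrollWidth
// exceed clientWidth. A small tolerance absorbs sub-pixel rounding
// differences between browsers.
function overflowsHorizontally({ scrollWidth, clientWidth }, tolerancePx = 1) {
  return scrollWidth - clientWidth > tolerancePx;
}
```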

Smoke-tested against a local Next.js app: found real bugs (Escape not closing
modals, undersized mobile touch targets) that confirmed the adversarial patterns
work. Fixed browse eval recipes (no top-level await, console capture on-page not
about:blank).
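The console-capture fix mentioned above (capture on the page under test, not about:blank) follows a standard wrapper pattern: record each call, then forward to the original so errors still reach devtools. A hypothetical sketch, with the page's window passed in as `target` so the logic is visible in isolation:

```javascript
// Wrap console.error on the page under test: record each call into
// target.__logs for later assertion, then forward to the original so
// errors still appear in devtools.
function installConsoleCapture(target) {
  target.__logs = [];
  const original = target.console.error;
  target.console.error = (...args) => {
    target.__logs.push(args.map(String).join(' '));
    original.apply(target.console, args); // preserve original behavior
  };
}
```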

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
shubh24 and others added 4 commits March 26, 2026 12:41
…ssions

Enables concurrent test execution by leveraging browse CLI's --session flag
to spin up independent Browserbase browsers per test group, with fan-out
via Agent tool and merged result reporting.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Documents how to add Bash(browse:*) to project or user settings
so users don't get prompted on every browse snapshot/click/eval.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…t figure it out

- Remove .ui-tests/suite.yml format and generation pipeline
- Replace Workflow B (8-step codebase analysis) with lightweight exploratory testing
- Simplify references/codebase-analysis.md to quick hints (framework detection, route finding)
- Remove example YAML suite file
- Update README to reflect no-artifacts philosophy
- Drop Write tool from allowed-tools (no files to generate)

The codegen/suite approach can ship as v2 later.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- XSS check: replace false-positive inline script count with input value check
- Console capture: preserve original console.error in Examples 6 snippets
- Form labels: use native i.labels API in browser-recipes.md (matches SKILL.md)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
shubh24 and others added 6 commits March 26, 2026 13:32
Also strengthens auto-select rule: localhost → browse env local,
deployed URLs → browse env remote, applied consistently across
all workflows including parallel sessions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove "remote only" restriction — named sessions work with local mode
- Add BROWSE_SESSION=* permission pattern to avoid approval fatigue on parallel runs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
If references/design-system.md exists, use it as ground truth.
Otherwise, screenshot 2-3 existing pages to establish baseline
patterns (spacing, radii, colors, typography) and compare the
changed page against them.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When a test step fails, the skill now instructs the agent to take a
screenshot and save it to .context/ui-test-screenshots/<step-id>.png,
referenced in the STEP_FAIL marker and final report so developers can
see exactly what went wrong.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Extracted from browserbase-brand-guidelines skill: colors, typography,
border radii, spacing grid, component patterns, and visual principles.
The ui-test skill checks changed pages against this when it exists.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Rewrite Budget & Limits section to use a coordinator/sub-agent model:
main agent plans and delegates, sub-agents do the actual testing with
a 20-step hard cap each. Wall clock target ~10 min for default runs.
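The coordinator/sub-agent budget described here can be sketched as a simple partitioning step. This is illustrative of the model only, not the skill's actual code; the defaults mirror the numbers above (up to 5 sub-agents, 20-step cap each):

```javascript
// Partition test groups across up to `maxAgents` sub-agents, each with a
// hard per-agent step cap. Round-robin keeps the load roughly even.
function planSubAgents(testGroups, maxAgents = 5, stepCap = 20) {
  const n = Math.min(maxAgents, testGroups.length);
  const agents = Array.from({ length: n }, () => ({ groups: [], stepCap }));
  testGroups.forEach((group, i) => agents[i % n].groups.push(group));
  return agents;
}
```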
@shrey150 shrey150 force-pushed the shrey/ui-test-skill branch from e563ad6 to dc1eedd on March 30, 2026 19:50
shubh24 and others added 2 commits March 30, 2026 14:12
Integrates two external frameworks into the testing skill:

- Judgement (Emil Kowalski + Josh Puckett + UI Wiki): 9 reference files
  covering animations, forms, touch/a11y, typography, polish, component
  design, marketing, performance, and 152 UI wiki rules. Adds deterministic
  eval checks for touch targets, iOS zoom, transition:all, z-index abuse,
  and form labels. Adds screenshot-based critique methodology.

- Luck (soleio): Assembly Theory meta-evaluation lens — 7 facets adapted
  to UI (solvency, gradient coupling, compatibility, niche construction,
  circulation, integration, path sensitivity) for "will this UI thrive?"

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Strip out the Craft Quality Judgement section, Luck Lens meta-evaluation,
and all references/judgement/ files. Keep the skill focused on functional
testing, accessibility, and UX heuristics.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@shubh24 shubh24 force-pushed the shrey/ui-test-skill branch from 0cfbc08 to d94fa1b on March 30, 2026 21:13
@shubh24 shubh24 changed the title [STG-NEW] Add ui-test skill for adversarial UI testing Add ui-test skill — adversarial UI testing with browse CLI Mar 30, 2026
shubh24 and others added 2 commits March 30, 2026 14:20
Rename design-system.md → design-system.example.md with instructions
for users to copy and fill in their own brand tokens. The skill reads
design-system.md (user-created), not the example.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add browse wait timeout 3000 after axe-core script injection (SKILL.md, browser-recipes.md)
- Fix form label check to include aria-label and aria-labelledby (SKILL.md)
- Fix focus ring detection to check box-shadow too, not just outline (browser-recipes.md)
- Fix window.__capturedErrors → window.__logs in Example 8 (EXAMPLES.md)
- design-system.md already fixed in prior commit (renamed to .example.md)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
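The focus-ring fix in this commit (check box-shadow in addition to outline) amounts to the following rule, sketched hypothetically against a computed-style object:

```javascript
// A focused element's ring is visible if it sets a real outline, or a
// box-shadow: many design systems suppress the default outline and draw
// the focus indicator as a shadow ring instead.
function hasVisibleFocusRing(style) {
  const outlineVisible =
    style.outlineStyle !== undefined &&
    style.outlineStyle !== 'none' &&
    parseFloat(style.outlineWidth || '0') > 0;
  const shadowVisible = Boolean(style.boxShadow && style.boxShadow !== 'none');
  return outlineVisible || shadowVisible;
}
```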
@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 2 potential issues.

shubh24 and others added 2 commits March 30, 2026 14:44
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add aria-labelledby to hasLabel check in browser-recipes.md
- Add browse wait timeout 3000 after axe-core injection in Examples 4 and 7
- hasFocus box-shadow check was already fixed in prior commit

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>