31 changes: 31 additions & 0 deletions RFC-by-stage/1-approved/standardised-accessibility-testing.md
---
Status: `Proposal` # Please do not change this.
Implementer: # It will be changed upon merging and as it moves through the RFC stages
---

# Standardised manual accessibility testing

## The issue to be solved

Web accessibility is too subjective. The WCAG success criteria are challenging to read, and agreeing on what constitutes a failure is hard: several people may each hold a slightly different view. Normative failures in WCAG (things which must pass) become conflated with accessibility best practice and with what people _feel_ is poor accessibility. Agreeing on what constitutes a failure is the first step towards testing in a consistent way and producing consistent results.

Manual accessibility testing is aimed at conveying facts. Tests should avoid relying on personal preference when identifying accessibility failures.

This article informs the approach:
https://www.tpgi.com/heading-off-confusion-when-do-headings-fail-wcag/

> WCAG techniques, such as H42: Using h1-h6 to identify headings and ARIA12: Using role=heading to identify headings, recommend that heading markup indicate the appropriate heading level for the content, but they don’t go so far as to define what’s “appropriate”—an issue that has been the subject of considerable discussion. **So although hierarchical heading structures reflect a best practice, skipping heading levels does not represent a WCAG failure.**
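The distinction the quote draws can be sketched in code. The following is an illustrative example only (not part of the RFC, and the function name is hypothetical): a check that reports skipped heading levels as best-practice warnings rather than WCAG failures, exactly because the quoted article says skipping levels is not a normative failure.

```python
def check_heading_levels(levels):
    """Return best-practice warnings for skipped heading levels.

    `levels` is the document's heading levels in order, e.g. [1, 2, 4].
    Per the TPGi article quoted above, skipping levels (e.g. h2 -> h4)
    is a best-practice issue, NOT a WCAG 2.1 failure, so these are
    reported as warnings rather than failures.
    """
    warnings = []
    for prev, curr in zip(levels, levels[1:]):
        if curr > prev + 1:  # level jumps by more than one
            warnings.append(
                f"Best practice: heading level skips from h{prev} to h{curr}"
            )
    return warnings


# h1 -> h2 -> h4 skips a level: a warning, but no WCAG failure is recorded
print(check_heading_levels([1, 2, 4]))
```

A master-list test built this way keeps the normative pass/fail column empty for heading skips, which is the kind of fact-based separation the RFC argues for.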

## A short description of the desired outcome or solution

The outcome is a community-driven master list of all 50 WCAG 2.1 AA success criteria, expanded into simple, easy-to-understand individual tests. A developer building a component would use the same tests from the same list as an accessibility specialist, accessibility tester, BA or anyone else involved in accessibility work. The tests remove personal opinions about what is or is not an accessibility failure and make it simpler to apply the minimum level of accessibility consistently.

The master list would invite community input, but not every request would be actioned. Requests would undergo a rigorous review and require justification that _a) it is a failure_ or _b) the test requires more detail_.

## Technical details

- 103 separate accessibility tests covering all 50 WCAG 2.1 AA success criteria; see https://canaxess.github.io/accessibility-resources/ for an example
- filterable by role or component
- Excel export functionality
- working from a common language that agrees what is a failure/pass according to WCAG
- making it easy for developers, UI, UX and anyone else involved to interrogate and understand
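The role/component filtering above might look something like the following minimal sketch. The field names, role names and sample data are assumptions for illustration; they are not part of the RFC or the linked resource.

```python
# Hypothetical master-list records tagged by component and role.
tests = [
    {"id": 1, "criterion": "1.3.1", "component": "heading", "roles": ["developer", "tester"]},
    {"id": 2, "criterion": "4.1.2", "component": "button", "roles": ["developer"]},
    {"id": 3, "criterion": "1.4.3", "component": "button", "roles": ["designer", "tester"]},
]


def filter_tests(tests, component=None, role=None):
    """Narrow the master list by component and/or role, as the RFC proposes."""
    result = tests
    if component:
        result = [t for t in result if t["component"] == component]
    if role:
        result = [t for t in result if role in t["roles"]]
    return result


# A tester reviewing a button component sees only the tests relevant to them.
print([t["id"] for t in filter_tests(tests, component="button", role="tester")])  # [3]
```

The same flat record shape would also export cleanly to an Excel sheet, one row per test.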