
A11M - Part II

  • Writer: Nir Horesh
  • Aug 18
  • 3 min read

One of the most common and frustrating discussions I had as Head of Accessibility at Wix was about automation. Management and stakeholders wanted it for three main reasons:

1. It's fast and can be integrated into any process.
2. It's decisive, giving a yes/no answer to "is it accessible?"
3. And because of this, we can create scores and make clear decisions: above score X is good, below it is not.

This makes total sense. It's how we handle almost every other quality domain. We measure and score performance, SEO, GDPR compliance, and general quality (bugs, resolution time). But it was my job to endlessly explain that accessibility is different.


The Human Factor Problem

Because accessibility is about human-machine interaction, not just machine processes, it's sometimes impossible to measure with automation alone—human involvement is needed. Until very recently, the saying was "the best tools in the market cover 30-40% of WCAG requirements" (referring to Axe, Wave, Lighthouse, ARC). We could cover some requirements with these tools, but then had to add extensive human testing to complete full audits.

Human testing has many problems, but two major ones stand out: it takes significant time, and if you let three accessibility experts test the same page, you'll get four different results and days of philosophical debates. When you ask "Well, is it accessible now?" they'll answer "well... it depends." In a world where we need both great user experience for people with disabilities and legal compliance, this answer is problematic.


Why Machines Struggled

The main issue is that we're making systems work for humans, and humans are more complicated than machines. Machines can't understand or test things that are sometimes trivial for humans.

Consider Success Criterion 1.4.1 Use of Colour: "Colour is not used as the only visual means of conveying information." It's not wrong to use red or green text - that's a legitimate design decision. However, you shouldn't only use red to indicate something is bad and green for good without additional indicators like icons or explanatory text. To automate this, a machine needs to identify coloured text, understand if the colour conveys information, look for other indicators, and only then decide if there's a problem. Easy for humans - hard for machines.
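To make that concrete, here's a rough TypeScript sketch of the kind of heuristic an automated 1.4.1 check could attempt. The function and the "extra indicator" checks are my own illustration, not the actual rule logic of Axe, WAVE, or any other tool; the point is that the most honest verdict a machine can give here is "needs human review", not pass/fail.

```typescript
// Illustrative heuristic only - not a real rule from Axe, WAVE, or Lighthouse.
// A checker can find red/green text, but it can't tell whether that colour is
// the ONLY thing conveying "good"/"bad", so the honest output is a review list.

const STATUS_COLOURS = ['rgb(255, 0, 0)', 'rgb(0, 128, 0)']; // pure red / green

function flagPossibleColourOnlyInfo(root: Document = document): HTMLElement[] {
  const needsReview: HTMLElement[] = [];

  for (const el of Array.from(root.querySelectorAll<HTMLElement>('body *'))) {
    const colour = getComputedStyle(el).color;
    const text = (el.textContent ?? '').trim();
    if (!text || !STATUS_COLOURS.includes(colour)) continue;

    // A machine can look for *some* extra indicators (an icon, an explicit
    // word like "error"), but it can't decide whether the colour itself is
    // meaningful. Anything it can't rule out goes to a human.
    const hasIcon = el.querySelector('svg, img, [class*="icon"]') !== null;
    const hasExplicitWord = /\b(error|invalid|success|ok|warning)\b/i.test(text);
    if (!hasIcon && !hasExplicitWord) needsReview.push(el);
  }
  return needsReview;
}

console.log(`${flagPossibleColourOnlyInfo().length} elements need human review`);
```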

Or Success Criterion 1.1.1 Non-text Content: Everyone knows we need alt text for images, and every accessibility tool can check if alt text exists. But is it good alt text? Does it follow the alt decision tree? Even if I write "asdklfjadskfjh" as alt text, all accessibility tools show it as "pass."
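Here's roughly what an automated alt check amounts to, again as an illustrative TypeScript sketch rather than any real tool's rule logic: it can confirm the attribute exists and catch a few obvious anti-patterns (a filename, "image of..."), but whether the text actually describes the image is invisible to it, so the gibberish passes.

```typescript
// Illustrative sketch of what a typical automated alt-text check can verify.
// It answers "is there an alt attribute?" and catches a few obvious
// anti-patterns - nothing about whether the text describes the image.

type AltResult = 'fail' | 'pass' | 'needs-human-review';

function checkAltText(img: HTMLImageElement): AltResult {
  const alt = img.getAttribute('alt');

  if (alt === null) return 'fail';                    // missing alt: detectable
  if (alt.trim() === '') return 'needs-human-review'; // decorative? only a human knows
  if (/\.(png|jpe?g|gif|svg|webp)$/i.test(alt)) return 'fail'; // filename as alt: detectable
  if (/^(image|photo|picture) of/i.test(alt)) return 'needs-human-review';

  // Everything else "passes" - including keyboard-mash gibberish. Whether the
  // text actually describes the image is beyond what string checks can see.
  return 'pass';
}

const img = document.createElement('img');
img.setAttribute('alt', 'asdklfjadskfjh');
console.log(checkAltText(img)); // "pass"
```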

The common thread for most issues is that they're about user experience, and there's no automation to measure UX. For humans, something feels good, comfortable, easy, confusing, or misleading - but it's not always easy to put into a formula.

[Image: the "distracted boyfriend" meme - A11Y teams (the boyfriend) looking away from manual A11Y tests (the girlfriend) toward AI audits (the woman in red), illustrating the shift from manual accessibility testing to AI-powered auditing.]

Enter AI

But the machines of yesterday are not the machines of today.

What AI brings is the ability to "understand" humans and learn from them. AI can automate accessibility audits like never before.

It can understand an image's context and content, know the rules of the alt decision tree, and write excellent alt text. It can understand the context in which a colour is used, determine whether it conveys information, and visually check for other indicators. It can infer a component's purpose from its appearance and position on the page and compare that with the HTML to find conflicts - not to mention the ease of creating transcripts and captions.
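To show what that could look like in practice, here's a minimal sketch of an AI-assisted alt-text audit. `askVisionModel` is a hypothetical stand-in for whatever multimodal model or API you actually use, and the prompt and response shape are my own assumptions; the idea is that the question changes from "is there alt text?" to "is this alt text any good, and if not, what should it be?"

```typescript
// `askVisionModel` is a hypothetical placeholder - wire it to whatever
// vision-capable model or API you actually use. The prompt and response
// shape below are assumptions for the sake of illustration.

interface AltAudit {
  adequate: boolean;    // does the existing alt text describe the image?
  decorative: boolean;  // should it be alt="" per the alt decision tree?
  suggestedAlt: string; // a better alternative text, if one is needed
}

async function askVisionModel(imageUrl: string, prompt: string): Promise<string> {
  // Placeholder: replace with a real call to your multimodal model of choice.
  throw new Error(`not wired up yet (image: ${imageUrl}, prompt length: ${prompt.length})`);
}

async function auditAltText(imageUrl: string, currentAlt: string): Promise<AltAudit> {
  const prompt = `You are an accessibility auditor. Look at the image and its
current alt text: "${currentAlt}".
Following the W3C alt decision tree, answer as JSON with the keys
"adequate" (boolean), "decorative" (boolean) and "suggestedAlt" (string).`;

  const answer = await askVisionModel(imageUrl, prompt);
  return JSON.parse(answer) as AltAudit;
}

// Usage (once askVisionModel is implemented):
// auditAltText('https://example.com/hero.png', 'asdklfjadskfjh')
//   .then(audit => console.log(audit.adequate, audit.suggestedAlt));
```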


The Scale Revolution

The bottom line: AI can not only help people with disabilities navigate inaccessible websites (as mentioned in Part I), but also dramatically help website creators, platforms, and auditors find and fix issues with unprecedented speed, quality, and scale.

We're moving from 30-40% automated coverage to potentially comprehensive accessibility testing that understands context, semantics, and user experience - bringing us closer to that elusive "yes/no" answer stakeholders have always wanted.

 
 
 
