By Noah Mashni, Director, Solutions Engineering at Level Access

From time to time, for various reasons, we at Level Access get asked by customers and potential customers which accessibility testing engine is “best.” By this, they usually mean which tool is going to catch “the most” accessibility barriers with the highest degree of reliability. I understand the question. If you’re just getting started setting up an accessibility practice, you may want help choosing among the many engines available, especially the free ones. Or, if you’ve been working in accessibility for a while, you may be trying to decide whether to stick with what you know or switch to another tool. The answer I give in this situation depends on how provocative I’m feeling that day, but it usually boils down to the same basic sentiment: it doesn’t matter.

As a solutions engineer, I see my job as helping organizations understand not just what individual “tools” they need in their accessibility arsenal, but which accessibility outcomes their team should be pursuing, and the type of holistic solution framework they’ll need to reach them. So, in this article, I’ll explain why teams should avoid spending time trying to quantify which accessibility testing engine provides the “most coverage,” and instead focus on the more important question: “What are we going to do with the results?”

Comparing testing engines’ coverage is a wild goose chase

If you aren’t familiar with the term, a testing engine is essentially a set of rules and checks, typically developed in JavaScript or Java, that can be executed automatically to test a digital experience for accessibility issues. These engines are commonly used in browser extensions, CI/CD integrations, and post-production scanning and monitoring. There are several accessibility testing engines on the market. Some are free, standalone tools, while others exist as part of a digital accessibility provider’s broader solution.
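
To make that concrete, here’s a minimal sketch of what “executing a set of rules and checks” looks like in practice, using the open-source axe-core engine as one example. It assumes axe-core has already been loaded on the page (for instance, by a browser extension or a test harness):

    // Minimal sketch: run every enabled rule against the current document.
    // Assumes the axe-core script is already loaded, so `axe` is a global.
    axe.run(document, { resultTypes: ['violations'] }).then((results) => {
      // Each violation is a failed rule plus the specific elements that failed it.
      results.violations.forEach((violation) => {
        console.log(`${violation.id} (${violation.impact}): ${violation.help}`);
        violation.nodes.forEach((node) => console.log('  at', node.target.join(' ')));
      });
    });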

If the goal of an accessibility practice, from an outcomes perspective, is eliminating accessibility barriers and maintaining an inclusive and usable digital experience, it makes sense that teams would want to choose an engine that will catch the most errors or has the “best” accessibility testing coverage. Unfortunately, as a buyer, trying to determine which testing engine has the best coverage is often a confusing, if not misleading, conversation. Here’s why:

  • Most testing engines have considerable overlap in terms of the kinds of accessibility barriers they are able to identify. For the most part, they all catch the most common accessibility bugs, such as insufficient color contrast, missing descriptive text on things like images, links, and buttons, and a wide variety of other bugs that are semantic or structural in nature.
  • Assessing and comparing the reliability of the testing data generated by different engines can be extremely complex. There is considerable interpretation involved in deciding which identified errors are “false positives” or “false negatives,” and, most of the time, whether a given result is truly a bug or a false positive depends heavily on the context of the digital experience. Because these judgments aren’t globally consistent, they make a poor metric for comparing the accuracy and reliability of data generated by different engines.

The Level Access team worked hard to develop our proprietary testing engine, Access Engine, making sure it could provide the most specific, detailed understanding possible of a digital experience’s accessibility issues when needed. We also made sure it produces accurate, actionable reporting by helping surface the most critical issues for teams to address and providing prescriptive instructions for how to remediate them. We happen to believe it’s an excellent tool.

But in reality, the “best” testing engine is the one your team will commit to using consistently. This is partly why, in the Level Access Platform, clients are free to use the testing engine of their choice from among the common options: Access Engine, axe-core, WAVE, and Equal Access. That way, teams that have built familiarity with a particular engine and its reporting formats aren’t locked out of the wider toolset and support our solution has to offer.

More isn’t always “more”

In my seven years of experience collaborating with teams of various sizes to develop and maintain accessibility practices, I’ve watched many organizations struggle to turn test results into concrete steps that improve digital inclusivity. Typically, teams find a tsunami of data overwhelming and demotivating. There’s no way to address every error at once, and the flood of data can make it hard to know how and where to prioritize remediation efforts. A testing engine might provide excellent “coverage,” testing against dozens, if not hundreds, of rules and criteria, but if teams aren’t supported in acting on the data it generates, then more accessibility testing coverage will not lead to fewer accessibility barriers (which in turn means it won’t lead to a more inclusive digital experience).

To advance accessibility efforts effectively, I often advise teams to actually reduce the number of rules being used in testing. This may mean excluding rules related to elements beyond developers’ control, such as color contrast dictated by brand guidelines, or semantic issues baked into fixed templates in content management systems. Contrast issues and semantic accessibility bugs inherent to templates are still crucial to capture, but reporting them alongside more focused, developer-specific findings doesn’t help outcomes if there is no clear action to be taken in that context. Automated accessibility testing should be decentralized and integrated throughout the digital life cycle, with a contextual focus on generating actionable data. This ensures reporting emphasizes outcomes (work to be done) over data generation, minimizing distractions and facilitating clear quality standards. Eventually, addressing specific barriers should become standard in acceptance criteria or product guidelines, allowing teams to gradually and sustainably expand their testing scope.
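
As a rough illustration of what that scoping can look like in configuration, here’s a minimal sketch using axe-core’s options. The tag filter and disabled rule below are placeholders; the right subset depends entirely on what your team actually controls:

    // Minimal sketch: narrow an automated scan to rules the team can act on.
    // The tag filter and disabled rule are illustrative placeholders.
    axe.run(document, {
      runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa'] }, // agreed rule set
      rules: {
        'color-contrast': { enabled: false } // e.g., contrast is owned by brand guidelines
      }
    }).then((results) => {
      console.log(`${results.violations.length} actionable violations found`);
    });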

Teams need tools to bridge the gap between findings and fixes

So, one might ask, if accessibility testing “coverage” isn’t a rational basis for comparison between testing engines, why would a team pay for a testing tool as part of an accessibility solution when there are comparable testing tools available for free? Simply put, an accessibility testing engine on its own is not an accessibility solution. Free, browser-based accessibility testing tools, like Google Lighthouse, are well suited to help a single person find and fix accessibility bugs in their local browser environment. However, when an organization relies on a free tool alone, there is no overarching reporting, no data trail demonstrating adoption or use of testing tools to support organizational compliance requirements, and no support for collaboration between users or across teams. The value in paying for access to a testing tool as part of a comprehensive licensed or enterprise solution comes not from an increase in accessibility testing coverage or volume of test results, but rather from the direction that the solution can provide about how and where to take action.
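
For instance, a developer might run a one-off local audit with Lighthouse’s Node API (the CLI works similarly). The snippet below is a minimal sketch, and it assumes the lighthouse and chrome-launcher npm packages are installed and a local build is running:

    // Minimal sketch: a single developer auditing a local build with Lighthouse.
    // Assumes the `lighthouse` and `chrome-launcher` packages are installed,
    // and that a local build is served at the URL below (a placeholder).
    import lighthouse from 'lighthouse';
    import * as chromeLauncher from 'chrome-launcher';

    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
    const { lhr } = await lighthouse('http://localhost:3000', {
      port: chrome.port,
      onlyCategories: ['accessibility'], // skip performance, SEO, etc.
    });

    console.log('Accessibility score:', lhr.categories.accessibility.score);
    await chrome.kill();

Useful as that is for the individual, nothing about it rolls up into the kind of organization-wide reporting or data trail described above.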

Organizations that want to truly make an impact on accessibility will need to rely on a solution that helps them act on the data and test results they generate. This solution might include:

  • Reporting / dashboards that clearly answer questions like:
    • How is my accessibility program doing?
    • How was my program doing 90 days ago?
    • What steps can be taken to progress my program so that we get to where we know we want to be 90 days from now?
    • Which of my teams are excelling and which are struggling?
    • Are there clear patterns in where gaps in accessibility performance seem to exist?
  • Simple and direct support for integrating accessibility practices into existing processes within the digital operation, for example:
    • Browser tools for fast and simple automated detection of accessibility barriers, allowing creators to find and fix bugs in their local environments while they code or author new content.
    • Public APIs or plugins offering low-lift options to add accessibility scans to CI/CD processes. Note: when you’re focused on outcomes rather than data generation, you’ll want not only the capability to add scans to a process, but also the ability to easily manage the resulting data so that a quality gate can be enforced (see the sketch after this list). This is what actually reduces the presence of bugs downstream and ultimately leads to more sustained gains in the overall accessibility of the digital experience.
    • Seamless integrations with project or task management workflows, either by supporting those functions natively or by syncing with task and ticket management systems, like Jira. This helps teams organize and communicate about the work to be done without manually entering or transferring key context.
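
To illustrate the quality-gate idea mentioned above, here’s a minimal sketch of a CI step that scans a preview build and fails the pipeline when blocking issues are found. It uses Playwright with the @axe-core/playwright integration as one example; the URL, tag filter, and severity threshold are all placeholders:

    // Minimal sketch: an accessibility quality gate in a CI pipeline.
    // Assumes `playwright` and `@axe-core/playwright` are installed;
    // the URL, tags, and threshold are illustrative placeholders.
    import { chromium } from 'playwright';
    import AxeBuilder from '@axe-core/playwright';

    const browser = await chromium.launch();
    const page = await browser.newPage();
    await page.goto(process.env.PREVIEW_URL ?? 'http://localhost:3000');

    const results = await new AxeBuilder({ page })
      .withTags(['wcag2a', 'wcag2aa']) // scope to the team's agreed rule set
      .analyze();

    const blocking = results.violations.filter(
      (v) => v.impact === 'critical' || v.impact === 'serious'
    );

    await browser.close();

    if (blocking.length > 0) {
      console.error(`${blocking.length} blocking accessibility violations found`);
      process.exit(1); // enforce the gate: the build fails, not just a report
    }
    console.log('Accessibility gate passed');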

With a comprehensive solution, teams move from testing to generate information, to testing to make an impact. These solutions make work simpler and collaboration easier for development teams, while also enabling key stakeholders in an organization to make informed decisions about goal setting and resource allocation, helping drive progress toward their desired accessibility outcomes.

Conclusion

Based on WebAIM’s annual report surveying the accessibility of the internet’s top one million home pages, roughly 96% of the most visited home pages online still contain WCAG failures, often dozens per page, which likely equate to accessibility barriers for many users. And most of them are issues that can be caught by automated testing using any of the popular accessibility testing engines mentioned in this article. Combine that with the reported number of axe-core downloads (currently around 15 million weekly downloads on npm), extrapolate to the availability of other free tools and rule libraries like equal-access and WAVE, and it’s clear there is no shortage of automated testing being performed out in the world, and yet:

  • 84% of the web’s top pages still have text contrast issues.
  • 58% still have missing alt text issues.
  • 46% have form inputs missing labels.

For me, it is very clear that more accessibility testing isn’t what’s going to make the internet a more accessible space. A testing engine alone can’t drive teams to act on what the data is telling them or support them through that action in a sustainable way. Ultimately, what most organizations need is clear direction about what steps to take and where to take them. To find those answers, teams need a holistic solution: one that drives them to act on focused test results by setting smart priorities based on relevant data, and to collaborate seamlessly to maintain momentum.

In my view, Access Engine is a great accessibility testing engine, but it is just one piece of what makes the Level Access Platform—and our overarching approach to accessibility—so effective. With simple, actionable reporting capabilities, features to enable organization-wide accessibility governance, integrations for seamless communication and task management, and more, we help teams go beyond “testing better” and make meaningful progress toward their compliance goals by creating more inclusive, accessible experiences for everyone. If that sounds like the progress your team is ready for, reach out to our team to start a conversation.

About Noah Mashni

Noah is the Director of Solutions Engineering at Level Access with more than 10 years of experience as a technologist and technical solutions expert. His expertise on strategic accessibility practices has been featured by outlets like Smashing and at accessibility conferences like CSUN.