
ADA Compliance: How Do You Test Full and Equal Access for a Website?

Written by: Timothy Stephen Springer

This is post #18 in the ADA Compliance Series, which aims to outline a structure for validating and justifying a claim of “ADA compliance” for a website or other digital system (a few notes and disclaimers on that).


To determine if a specific site provides full and equal access, we seek to answer two core questions:

  • Was the site built in a fashion that is reasonably likely to support the accessibility needs of people with disabilities? We answer this by looking at a broadly used technical standard for digital accessibility—typically WCAG 2.1 Level A and AA.
  • Does it work for a person with disabilities? This is a broader question that looks at the core, covered tasks the site provides and reviews whether people with disabilities can accomplish them. Basically, this is a question of, “can a person with a disability (who may use assistive technology or accessibility features) use the thing?”

Was the site built in a fashion that is reasonably likely to support the accessibility needs of people with disabilities?

This topic is defined in more detail in the “accessible formats” discussion of the next blog post in this series. Conceptually, though, this is about looking at how the website is actually coded and seeing if it’s created in a fashion that is supported by the assistive technologies and accessibility features people with disabilities commonly use. For our purposes in this post, this can be construed as technical conformance to WCAG or another common standard that meets the bar for effective communication. As of this writing, the most widely accepted standard here is WCAG 2.1 Level A and AA, and we’ll use that for the rest of this post. It’s worth noting, though, that we’re choosing to use that standard based on our due diligence on what is needed for people with disabilities. We aren’t being compelled to use it by a regulatory authority. If a better standard existed—one that resulted in full and equal access—we’d be able to use that.

The technical implementation requirements we evaluate here are, by far, the requirements people are most familiar with. You’ve heard a lot about WCAG and the fact that it’s broadly viewed as the de facto standard for accessibility. We’d wholly agree with that consensus view. To test conformance with WCAG, we apply the success criteria and the (aptly named) conformance requirements to a representative sample of pages from the site. At Level Access, we take each of those success criteria and break them down into best practices we test for compliance. As an example, Success Criterion 2.4.3, Focus Order, maps to a variety of specific best practices—or test requirements—for accessibility, which include the following:

  • Ensure the focus order of interactive elements on the page is meaningful
  • Ensure focus is moved according to user-initiated context changes
  • Ensure that dynamic focusable content is rendered in a focus order with the controls that change it
  • Ensure that when dialogs are activated focus moves appropriately
  • Ensure non-modal dialogs are rendered in line with the controls that activate them
  • Ensure that keyboard focus remains within modal dialogs
  • Ensure keyboard focus returns properly from dialogs
  • Ensure keyboard and programmatic focus moves to opened menus
  • Ensure keyboard focus returns properly when menus are closed
  • Ensure that when calendars are activated and deactivated focus moves appropriately

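The first best practice above—a meaningful focus order—can be partially approximated in code, since keyboard focus order follows well-defined `tabindex` semantics in HTML. The sketch below is a hypothetical helper (not any particular vendor's tool) that computes the tab order for a simplified page model and shows how a positive `tabindex` can push focus out of reading order:

```python
from typing import List, Tuple

def focus_order(elements: List[Tuple[str, int]]) -> List[str]:
    """Compute keyboard focus order per HTML tabindex semantics.

    `elements` lists (element_id, tabindex) pairs in document order.
    Positive tabindex values come first, sorted ascending (ties keep
    document order); tabindex 0 follows in document order; negative
    values are skipped (focusable only programmatically).
    """
    positive = sorted(
        (ti, pos, el) for pos, (el, ti) in enumerate(elements) if ti > 0
    )
    zero = [el for el, ti in elements if ti == 0]
    return [el for _, _, el in positive] + zero

# A page where positive tabindex values override document order:
page = [
    ("search", 2),        # appears first in the DOM...
    ("logo-link", 0),
    ("nav-home", 1),      # ...but receives keyboard focus first
    ("skip-target", -1),  # script-focusable only, not in the tab order
    ("main-cta", 0),
]
order = focus_order(page)
# If the computed order defies the visual reading order, that is a
# candidate "focus order is not meaningful" finding for manual review.
```

Even here, note the limit of automation: code can compute the order, but deciding whether that order is *meaningful* to a user still takes human judgment.
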
Each of those best practices, in turn, is classified based on how it is tested. Broadly speaking, a best practice can be tested either automatically or manually.

Automatic testing is the cheapest and most common form of testing but covers only a small fraction of the legal requirements. Automatic tests are those that can be run programmatically with a high degree of certainty, and they cover around 25–30% of applicable accessibility requirements; the rest need to be tested manually. In almost all cases, automatic tests don’t allow you to determine full compliance with the requirements. For example:

  • Automatic testing can test for the presence of alternative text, but not if the text is a meaningful replacement. The latter is the legal requirement.
  • Automatic testing can test for the presence of form field labels, but not if assistive technology users can fill out a form. The latter is the legal requirement.
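
The alternative text example can be made concrete. Here is a minimal sketch using Python's standard `html.parser` (a hypothetical checker, not any particular product's engine): it reliably flags a *missing* alt attribute, but it happily passes alt text that no human would call a meaningful replacement.

```python
from html.parser import HTMLParser

class AltPresenceCheck(HTMLParser):
    """Automatic test: flag <img> tags with no alt attribute at all."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                self.findings.append(f"missing alt: {attr_map.get('src', '?')}")

checker = AltPresenceCheck()
checker.feed('<img src="chart.png"><img src="logo.png" alt="IMG_0042.JPG">')
# Only the first image is flagged. The second passes the automatic test
# even though "IMG_0042.JPG" tells a screen reader user nothing; judging
# meaningfulness (the actual legal bar) remains a manual test.
```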

Bottom line – if someone tells you that you can buy an automatic-only tool to test for and/or automatically fix ADA compliance, at best they are poorly informed on the requirements of the standards. At worst, they are actively seeking to misrepresent whatever their thing does. Give them a polite “no, thank you” and move on to a qualified partner.

Manual testing relates to the issues that can be validated on a page-by-page (module-by-module in Level Access terms) basis, but can’t be validated across the entire application automatically. All requirements that don’t fit the automatic profile are by nature manual requirements. Level Access actively works to move tests from the manual category to the automatic category but without introducing false positives that clutter automatic findings with inaccuracies.

In terms of testing the technical requirements, we basically build a big test matrix and fill in results. On one axis of the matrix, we have the set of best practices that are relevant. For the standard audit we do – against WCAG 2.1 A and AA in the context of a website – we get a set of several hundred best practices we validate conformance to. On the other axis of the matrix, we have the individual pieces of the application we are testing. In the context of a web-based system, these are either web pages or portions of the overall page – common UI widgets or blocks used across the site. We then go through and validate that each component we are testing conforms to all the required best practices. This results in several thousand unit tests for each asset. These results are then prioritized based on context to provide a prescriptive set of recommendations to address conformance.
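
As a toy illustration of that matrix, consider the sketch below. The component and best-practice names are invented, and a real audit runs several hundred best practices per component rather than three, but the mechanics are the same: a cross product of components and best practices, one unit test per cell, and prioritization of whatever fails.

```python
from collections import defaultdict

# One axis: best practices under test (a real audit has hundreds).
best_practices = [
    "meaningful focus order",
    "focus trapped in modal dialogs",
    "focus returns from dialogs",
]
# Other axis: pages or shared UI components of the site.
components = ["home page", "login form", "checkout modal"]

# Recorded failures; every other cell in the matrix is a pass.
failures = {
    ("login form", "meaningful focus order"),
    ("checkout modal", "focus trapped in modal dialogs"),
    ("checkout modal", "focus returns from dialogs"),
}

# Fill the full matrix: one unit test per (component, best practice) pair.
matrix = {
    (c, bp): ("fail" if (c, bp) in failures else "pass")
    for c in components
    for bp in best_practices
}

# Prioritize remediation by the components with the most failures.
fail_counts = defaultdict(int)
for (c, _bp), outcome in matrix.items():
    if outcome == "fail":
        fail_counts[c] += 1
priority = sorted(fail_counts, key=fail_counts.get, reverse=True)
```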

Does it work for a person with disabilities?

This test is closest to the core legislative intent and to what we actually see requested as injunctive relief in most lawsuits. The functional requirements require a system to be usable by people with disabilities using current assistive technologies. If you think of the technical requirements as the trees, you can think of the functional requirements as the forest. The functional requirements are basically asking: as a whole, can this system be used by individuals with disabilities?

Broadly, under the ADA, these requirements can be applied to anyone with a disability, which is an exceptionally broad group. Practically, as it relates to information technology, we’ve seen a few discrete groups that are commonly called out:

  • People that are blind or visually impaired
  • People that are deaf or hard of hearing
  • People with limited or no color perception
  • People that cannot produce speech, or have difficulty doing so
  • People that lack fine motor control, have limited reach or strength, or otherwise have an impaired axis of motion
  • People that have difficulty focusing on content or are otherwise easily distracted
  • People that have difficulty understanding content
  • People with photosensitive epilepsy or other triggers that impact daily life activities

To test functional conformance we define a set of use cases for the application – basically core tasks (creating an account, sign in, purchasing a product or service, providing feedback, etc.) – and then execute those core tasks across a set of assistive technologies. The assistive technologies are used and operated by individuals with disabilities. The users report back on both the ability to execute individual use cases in the application as well as the overall usability of the application as a whole. The results are rated on a one to five Likert scale with certain ratings (1 and 2) representing a fail and the rest representing degrees of ease for the pass. The data is then crunched across all use cases and relevant user requirements to develop a data-driven view of the usability of the application by people with disabilities. We then compare that to the same testing results for people without disabilities and determine if, broadly, the accessibility is equivalent. That is, we determine if the issues found have a disparate impact on people with disabilities and are not issues that impact everyone equally.
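
A back-of-the-envelope sketch of that scoring model follows, with invented ratings and use-case names (a real analysis would weight and segment far more carefully). Ratings of 1 and 2 count as a fail; comparing pass rates across the two cohorts surfaces where the impact is disparate.

```python
from statistics import mean

# Hypothetical 1-5 Likert ratings per use case; 1 and 2 count as a fail.
at_users = {       # testers with disabilities using assistive technology
    "create account": [4, 5, 4],
    "sign in": [5, 4, 5],
    "purchase": [2, 1, 2],
}
baseline_users = { # testers without disabilities, same use cases
    "create account": [5, 5, 4],
    "sign in": [5, 5, 5],
    "purchase": [4, 5, 4],
}

def pass_rate(ratings):
    """Share of ratings at 3 or above across all use cases."""
    scores = [s for task_scores in ratings.values() for s in task_scores]
    return sum(s >= 3 for s in scores) / len(scores)

gap = pass_rate(baseline_users) - pass_rate(at_users)
worst_task = min(at_users, key=lambda task: mean(at_users[task]))
# The gap here is driven entirely by one task: a disparate impact on
# assistive technology users, not an issue that hits everyone equally.
```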

A key point of consideration in determining functional conformance is which specific group of people with disabilities, from the list above, we are focusing on during the testing. As a practical matter, the vast majority of digital accessibility lawsuits focus on the first group: people that are blind or visually impaired. Focusing on the needs of that group—from a functional testing perspective—vastly narrows the scope of inquiry and balances it with the enforcement risk. If the budget only allows for testing with a limited set of assistive technology, focusing on that group is practical from a variety of perspectives.

To be clear: that focus does not mean we aren’t accounting for the broader needs of people with other types of disabilities. Testing with a wide variety of assistive technologies, browsers, and users with disabilities is important, and it is a solution we provide. It simply means that, given budget limitations, it often makes sense to choose the combination of technologies that will maximize the testing effort’s ability to uncover issues and reduce risk from those filing complaints. In our approach, we focus on ensuring full and equal use of the goods and services by a wide range of people with disabilities by ensuring good technical conformance with a widely adopted technical standard (WCAG 2.1 Level A and AA). We then look across the types of disabilities and actively choose an area of further inquiry based on our well-informed view of the group that warrants it, to ensure their full and equal use of the goods and services. Such an approach, appropriately implemented, is consistent with the statutory and legal requirements, which require thought and reason to be applied on the part of the covered entity in implementing its accessibility approach.