As digital technology becomes an increasingly indispensable part of our daily lives, the weight falls on developers and designers to not only create a seamless online experience for users, but to ensure what they create is accessible. A website or platform that isn’t accessible to people with disabilities—a demographic that includes 1 in 4 U.S. adults—is not complete. Through accessibility testing, web, software, and product development teams can ensure that digital experiences are fully accessible to people with disabilities and compliant with accessibility laws and guidelines.

What is accessibility testing?

Accessibility testing is the practice of ensuring a website, platform, mobile app, or other digital experience is accessible to people with disabilities. Specifically, most accessibility testing efforts involve ensuring a digital experience meets the latest version of the Web Content Accessibility Guidelines (WCAG).

Accessibility testing of a website or app typically starts with automated testing, which gauges a digital experience’s overall accessibility by checking for several of the most common violations of WCAG standards. However, while a great start, automated testing often flags false positives and can produce hundreds of findings, which may be overwhelming.

This is where manual testing comes in. Manual accessibility testing involves extensive manual scrutiny of individual pages and is crucial to ensuring accessibility across all aspects of a website or other digital experience.

The most comprehensive and effective approach to accessibility testing combines automated and manual evaluation.

What is automated accessibility testing?

While automated testing lacks the nuance of human assessment, digital accessibility work is most efficient when manual evaluation builds on the results of automated tools.

There are many kinds of tools that perform automated tests on websites, instantly informing teams whether pages or screens contain violations of WCAG criteria. Although automated accessibility testing doesn’t account for all accessibility barriers on a site, it can surface the areas that may need the most attention during remediation. For example, automated testing may highlight an accessibility issue that’s present across multiple pages of your site, which, when fixed, could significantly improve your site’s overall accessibility health. Many digital accessibility solution providers offer automated accessibility checkers, such as WAVE (the Web Accessibility Evaluation Tool).
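To make the site-wide idea concrete, here is a minimal sketch of how an automated checker might count one common WCAG violation (images missing alt text) across several pages, so that a template-level issue stands out. It uses only Python’s standard library; the page names and markup are hypothetical, and a real tool checks many more rules:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Counts <img> tags that lack an alt attribute (a common WCAG 1.1.1 check)."""
    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; absence of "alt" is a violation
        if tag == "img" and "alt" not in dict(attrs):
            self.violations += 1

def scan_pages(pages):
    """Return {page_name: violation_count}, making site-wide issues visible."""
    report = {}
    for name, html in pages.items():
        checker = MissingAltChecker()
        checker.feed(html)
        report[name] = checker.violations
    return report

# Hypothetical pages sharing a template whose logo image has no alt text:
pages = {
    "home": '<img src="logo.png"><img src="hero.jpg" alt="Team photo">',
    "about": '<img src="logo.png"><p>About us</p>',
}
print(scan_pages(pages))  # {'home': 1, 'about': 1}
```

Because the same unlabeled logo appears on every page, fixing it once in the shared template clears a finding everywhere, which is exactly the kind of high-leverage remediation automated scans are good at revealing.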

What does automated accessibility testing software scan for?

Automated scans check for web accessibility issues across various categories:

  • Color contrast: evaluates the contrast between elements in the foreground and the background to ensure that content is readable for people with visual disabilities.
  • Navigation: checks whether the site navigation is consistent, verifying that links have descriptive text and are different from surrounding text.
  • ARIA labels and headings: ARIA labels give users of assistive technology information that isn’t conveyed by a site’s native HTML. A checker examines whether ARIA attributes are properly implemented and whether headings follow a logical hierarchy for screen reader users.
  • Labels and images: checks whether buttons are marked with labels and if images have alternative text for screen-reader users and other assistive technologies.
  • Keyboard accessibility: checks whether all interactive elements can be accessed and operated solely through a keyboard, to support usability for users who cannot use a mouse or other devices.
  • Audio and video: checks whether captioning or a transcript is provided.
  • Compatibility with assistive technology: checks whether digital content is compatible with assistive technologies, such as screen readers and voice recognition software.
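The color-contrast check in the list above is grounded in a published formula, which makes it especially well suited to automation. Here is a minimal Python sketch of the WCAG 2.x contrast-ratio calculation; the luminance constants come from the WCAG definition of relative luminance, while the sample colors are arbitrary:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 channels."""
    def linearize(channel):
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    """Ratio of lighter to darker luminance; WCAG AA requires >= 4.5:1 for body text."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0 — black on white
```

Black text on a white background yields the maximum possible ratio of 21:1, comfortably above the 4.5:1 AA threshold for normal-size text.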

Importantly, automated scans evaluate accessibility using a binary framework: are accessibility considerations accounted for, or are they missing? They may not detect issues stemming from the specific way these considerations are addressed. For example, an automated scanner can only tell whether images on a web page are accompanied by alt text, not whether that alt text gives users an accurate, equivalent understanding of the images present.
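The binary nature of these checks can be shown in a few lines. In this hedged sketch (Python stdlib only, with hypothetical markup), a simple presence check accepts alt text that a human reviewer would reject as meaningless:

```python
import re

def has_alt(img_tag: str) -> bool:
    """Binary check: does this <img> tag carry any alt attribute at all?"""
    return re.search(r'\balt\s*=', img_tag) is not None

# Both of the first two "pass" the automated check,
# but only the first is genuinely accessible:
print(has_alt('<img src="chart.png" alt="Q3 revenue rose 12% over Q2">'))  # True
print(has_alt('<img src="chart.png" alt="image_0042.png">'))               # True
print(has_alt('<img src="chart.png">'))                                    # False
```

Judging whether `alt="image_0042.png"` actually describes the chart is exactly the kind of question that requires a human reviewer.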

What is manual accessibility testing?

Unlike automated testing, manual accessibility testing involves human judgment and is performed entirely by trained accessibility experts. While manual testing can be more comprehensive than automated testing, it is most commonly used to surface accessibility issues within the core user flows of a digital experience: the specific paths that users take to complete core tasks on a digital property, such as making a purchase, booking an appointment, requesting a demo, or accessing important information.

By scrutinizing these pathways, organizations can streamline efforts and prioritize accessibility remediation where it matters most. Manual testing makes sure crucial tasks are easily achievable for all users and underscores an organization’s commitment to inclusivity.

Organizations can also have people with disabilities attempt to complete user flows on a website or other digital experience using assistive technologies such as screen readers. In this type of assessment, often called use case testing, the tester doesn’t evaluate an experience against WCAG but instead aims to successfully check out, create an account, or finish another core task the way a user would. Use case testing is crucial for identifying barriers that might otherwise go unnoticed.

For example, if a user is unable to type quickly on a keyboard due to their disability, they might get “timed out” when they’re trying to complete a purchase, causing them to lose all the information they’ve already inputted. If this is happening to potential customers with disabilities on a regular basis, it’s a problem organizations should prioritize remediating. Use case testing is the only way to catch this type of issue.

What are the principles of accessibility testing?

There are four main guiding principles of accessibility testing. These four principles were established by WCAG and are referred to as POUR:

  • Perceivable: Users can identify the interface elements of a digital experience.
  • Operable: Users can successfully use buttons and other interactive parts of a digital experience.
  • Understandable: Users can comprehend and remember how to use the interface.
  • Robust: Digital content can be interpreted reliably by a wide variety of users and types of assistive technologies.

POUR provides a structured framework to evaluate various aspects of a digital product’s accessibility. A digital experience that is perceivable, operable, understandable, and robust is better for everyone.

Why is accessibility testing necessary?

Organizations may think their websites are problem-free, but the only way to confirm this is through accurate and comprehensive accessibility testing. Accessibility barriers in a website, mobile app, or digital product not only restrict the reach of an organization’s services but also expose the organization to potential lawsuits citing non-compliance with the Americans with Disabilities Act (ADA), Section 508 of the Rehabilitation Act of 1973, or other global accessibility legislation.

In the end, accessibility is best achieved through a combination of automated and manual testing. While automated scans surface common accessibility issues across an entire digital platform, manual testing is crucial for identifying problems that can only be detected by a human.

And remember, maintaining an accessible website is an ongoing effort. Automated scans and manual accessibility testing should be integrated into the software or product development life cycle to ensure ongoing accessibility and compliance.