Accessibility user testing involves engaging people with disabilities, including users of assistive technologies, to navigate your website or app and provide feedback on their experiences. This process provides insight into how well your experience works for users in the real world.
User testing helps organizations increase the real-world impact of their accessibility programs. While automated scans and manual audits show you how well you’re meeting established standards, like the Web Content Accessibility Guidelines (WCAG), testing by end users reveals which issues affect real people as they try to complete meaningful tasks. It can also validate that accessibility improvements you’ve made actually work in practice.
Key insights
- Testing with users is essential to expose real-world issues affecting people with disabilities, including users of assistive technologies.
- For the most reliable results, this evaluation should be performed by professional accessibility testers who have disabilities.
- Participants should include multiple people from each group representing different types of accessibility needs.
- After testing is complete, ensure you have processes in place to prioritize and address any barriers identified.
Why testing with real users matters
Most accessibility programs rely heavily on automated testing tools for good reason. These tools are fast, scalable, and effective at identifying common issues. But they cannot detect all accessibility problems, meaning critical barriers may remain undiscovered without human testing.
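To make the scope of automated testing concrete, here is a minimal sketch of a scan using the open-source axe-core engine through its Playwright integration. It assumes a @playwright/test setup with @axe-core/playwright installed; the URL and tag filter are placeholders, not a recommendation for any specific configuration.

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no machine-detectable violations', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL

  // Run the axe-core ruleset against the rendered page,
  // limited here to WCAG 2.0 Level A and AA rules.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa'])
    .analyze();

  // Each violation reports a rule ID, impact level, and affected nodes.
  // But the engine can only flag machine-detectable issues (contrast,
  // missing alt text, missing labels); judgment calls need human review.
  expect(results.violations).toEqual([]);
});
```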
This gap contributes to a persistent industry challenge: 95% of websites still contain significant accessibility issues, according to the WebAIM Million report. Even digital products that technically “pass” automated checks can be inaccessible to people who use assistive technologies, like screen readers or keyboard navigation.
And the impact extends beyond usability, hitting many organizations where it hurts the most. A Nucleus Research study estimates that internet retailers alone lose $6.9 billion each year when users with disabilities abandon inaccessible digital experiences.
Manual audits and functional assistive technology (AT) testing, both performed by accessibility experts, go beyond automated scans. While manual audits deliver in-depth insight into how well digital experiences meet accessibility standards, functional AT testing evaluates how well core user flows (such as completing a purchase) work with assistive technology. But the only way to know for sure that you’re providing an equitable experience is to test with real users with disabilities.
The four types of accessibility testing
No single testing method catches everything. Effective accessibility programs combine all four approaches—automated scans, manual audits, functional AT testing, and testing with end users—for robust insight.
| Testing type | What it finds | Limitations |
| --- | --- | --- |
| Automated | Common violations of accessibility standards (e.g., color contrast issues, missing alt text, missing form labels) | Cannot detect all issues |
| Manual expert | All violations of accessibility standards (including keyboard navigation issues, improper focus order, and improper page structure) | Experts don’t use your site as real customers do |
| Functional AT testing | Obstacles, challenges, and/or barriers for assistive technology users in key user flows (e.g., a checkout flow) | Performed by one expert who is a native AT user; not comprehensive across different disabilities and user needs |
| User testing | Real-world obstacles, challenges, and/or barriers, such as confusing layouts or frustrating steps in core tasks | May require more resources and planning to set up |
Together, automated scanning and manual audits by accessibility professionals can identify where your digital experience falls short of meeting standards. But only testing by people with disabilities can tell you whether any fixes you’ve made have resulted in equitable real-world experiences.
Planning and conducting effective testing
The easiest and most reliable way to conduct testing with end users is to engage a third-party digital accessibility solution that offers these services. For example, Level Access provides testing by end users through our integration with Fable. Engaging a reputable third-party vendor is recommended because professionals bring the experience required to deliver comprehensive and accurate results. However, if you want to coordinate testing yourself, the following pointers will help you get started.
Recruiting the right participants
You don’t need to recruit a large number of users to conduct effective testing. What’s important is that you include multiple participants for each type of accessibility need, representing different types of assistive technologies. Engage people with various disabilities, including:
- Visual disabilities (screen reader or screen magnifier users)
- Hearing disabilities
- Motor or mobility disabilities
- Cognitive disabilities
- Learning disabilities
Many organizations work with specialized recruitment services or partner with disability organizations directly.
Defining tasks and success metrics
Effective testing starts with tasks that reflect users’ real-world goals. Focus on realistic scenarios that mirror how people actually engage with your digital offerings—for example, “Find and purchase a product under $50” or “Submit a support request.”
Measure success by whether participants can complete tasks, not by how quickly they do so. Capture qualitative insights, not just numbers, including where participants hesitate, what they find confusing, and how they feel during the process. Prioritize critical user flows; for example, completing a purchase on an e-commerce website or checking an account balance in a banking app.
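If you want to capture these results in a structured way, the sketch below shows one possible shape for session data, with a completion-rate calculation that deliberately ignores speed. All names are hypothetical and illustrative, not taken from any particular tool.

```typescript
// Hypothetical record of one participant's attempt at one task.
interface TaskResult {
  taskId: string;             // e.g., "purchase-under-50"
  participantId: string;
  assistiveTech: string;      // e.g., "NVDA + Chrome"
  completed: boolean;
  hesitationPoints: string[]; // where the participant paused or backtracked
  notes: string;              // confusion, frustration, emotional response
}

// Completion rate for a task across all participants, independent of speed.
function completionRate(results: TaskResult[], taskId: string): number {
  const attempts = results.filter((r) => r.taskId === taskId);
  if (attempts.length === 0) return 0;
  return attempts.filter((r) => r.completed).length / attempts.length;
}
```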
Remote vs. in-person sessions
Remote testing enables participants to use their own equipment (e.g., mobile device, computer, AT, accessibility features, and settings)—often producing the most natural and reliable results. It also expands access to a broader participant pool.
In-person testing can make observation easier, but it requires an accessible physical space and compatible technology. Regardless of format, ensure you allocate sufficient time to accommodate setup, configuration, logistics, and data collection.
Adapting standard usability testing practices
Proper testing should build on standard practices with a few key adjustments. Before testing commences:
- Ensure that all research materials and consent forms are accessible, so participants can understand and agree to participate independently.
- Verify that any remote testing software is fully compatible with assistive technology.
- Ask participants in advance about any accommodations they may need—such as sign language interpretation, captions, or alternative input devices—and provide those accommodations to ensure an inclusive and comfortable testing environment.
During testing, avoid speaking over screen readers or interrupting assistive technology output, as this can disrupt navigation and comprehension.
Allow participants to attempt tasks in their own way before stepping in. Carefully document how each participant navigates—whether through keyboard commands, gestures, or voice input—to provide essential context for designers and developers addressing the issues uncovered.
Analyzing and sharing findings
After testing is complete, you’ll need to make a plan for addressing any identified barriers. This involves determining which issues to tackle first, and ensuring teams are prepared to remediate them.
Prioritizing accessibility issues
Not all issues have the same impact on users. Prioritize findings based on how they affect task completion:
- High priority: Completely blocks someone from completing a task
- Medium priority: Allows completion but causes significant frustration
- Low priority: Creates minor inconvenience
Where possible, connect issues to relevant WCAG success criteria to support compliance and reporting needs. Clear prioritization streamlines your team’s remediation efforts, helping them focus on fixes that deliver the greatest improvements.
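As an illustration, the triage rules above are simple enough to encode directly. The sketch below is hypothetical (field and function names are ours, not from any particular tool), but it shows how findings could be ranked by their impact on task completion and tied to WCAG success criteria.

```typescript
type Priority = 'high' | 'medium' | 'low';

// Hypothetical record for a single finding from user testing.
interface Finding {
  description: string;
  blocksTask: boolean;        // participant could not complete the task
  causesFrustration: boolean; // completed, but with significant difficulty
  wcagCriteria?: string[];    // e.g., ["2.4.3 Focus Order"], for reporting
}

// Map impact on task completion to the three priority levels above.
function prioritize(finding: Finding): Priority {
  if (finding.blocksTask) return 'high';
  if (finding.causesFrustration) return 'medium';
  return 'low';
}

// Sort a backlog so blockers surface first.
const rank: Record<Priority, number> = { high: 0, medium: 1, low: 2 };
function sortFindings(findings: Finding[]): Finding[] {
  return [...findings].sort((a, b) => rank[prioritize(a)] - rank[prioritize(b)]);
}
```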
Sharing results with your team
Findings are most effective when digital teams can understand their impact firsthand. Short video clips of users encountering barriers often drive faster action than written summaries alone.
Make sure remediation tasks are assigned directly in existing workflow management tools—such as Jira, GitHub, or Figma—to ensure teams can easily review and address findings as part of their standard design and development processes.
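For teams that automate this handoff, a finding can be filed programmatically. The sketch below assumes Jira Cloud’s REST API (the v2 issue-creation endpoint); the domain, project key, and credentials are placeholders, and other trackers such as GitHub would use their own APIs.

```typescript
// Hedged sketch: create a Jira issue for one remediation task.
// Requires Node 18+ (global fetch). All identifiers are placeholders.
async function fileRemediationTask(summary: string, description: string) {
  const auth = Buffer.from('you@example.com:API_TOKEN').toString('base64');
  const response = await fetch(
    'https://your-domain.atlassian.net/rest/api/2/issue',
    {
      method: 'POST',
      headers: {
        Authorization: `Basic ${auth}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        fields: {
          project: { key: 'A11Y' },   // hypothetical project key
          summary,                     // e.g., "Checkout: focus trapped in modal"
          description,                 // link the session video clip here
          issuetype: { name: 'Bug' },
          labels: ['accessibility', 'user-testing'],
        },
      }),
    },
  );
  if (!response.ok) throw new Error(`Jira API error: ${response.status}`);
  return response.json(); // includes the new issue key
}
```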
Get real-world insight on digital accessibility
Testing with end users is the difference between meeting accessibility requirements and delivering experiences that work in real life. It ensures that accessibility efforts result in meaningful improvements for your audience—not just compliance on paper.
The most reliable way to get insight from end users is to partner with a trusted third-party expert, like Level Access. Reach out to our team to learn more about how our Fable integration can help you understand the real-world accessibility of your digital experiences.