
At Level Access, our team is always working to build a better Accessibility Management Platform (AMP), and as part of that, we have put together high-level accessibility testing tool requirements – conceptually, a broad definition of what ideal accessibility software should look like. It’s not so much a standard or specification as a set of principles for how people want accessibility testing tools to work. We are using it to guide development and to chart how to make accessibility testing infrastructures work best in modern development organizations.

Right now, we have this well developed internally and are looking for feedback from key customers and influencers in the market. As part of that, I wanted to post it to the blog in full, as it is both an interesting read and may surface some of the issues you have with the current generation of testing tools. From our perspective, the core question is: if we built something that provided this functionality, would it be useful to people in the market? Beyond that, we are also looking to see if there are other obvious requirements, market realities, or ways of doing things we should consider.

Any comments or thoughts you have are appreciated!


Automate Everything

The ideal accessibility testing solution is 100% automated and requires no user interaction apart from telling the tool what to test. While the nature of the compliance requirements (Section 508, WCAG 1.0, WCAG 2.0, client-specific) makes it impossible to test for everything automatically, everything that can be automated should be automated. Viewed from a different angle, accessibility requirements that must be validated manually pose the highest testing cost and tend to be the dominant cost in performing accessibility testing. If we can minimize those costs without negatively impacting the scope and quality of testing, then we have an easy win.

That leads us to the following principles:

  • Any test that can be automated should be automated. Even a high cost to automate something will likely be justified, as the cost savings will be amortized across the millions of tests performed across all AMP clients.
  • Any portion of a test that can be automated should be. Many tests fall into the category of “Guided Automatic” tests – tests where we can automatically determine which elements in a document are candidates for testing, but the direct test needs to be performed by a human. Proper use of guided automatic tests would drastically limit the number of manual tests that need to be performed.
  • If a manual test has been performed, we should apply the same result in similar situations in the future. This would allow us to automatically replay the results of any tests that have previously been completed and stored in AMP. As a basic example: if an image has been reviewed to ensure its alt text is a meaningful alternative, and we later encounter the same image with the same alt text, we need not review it again (see the sketch after this list).
  • Many manual tests are difficult to perform when looking at code but easy to perform with a proper view of the issue at hand. InFocus pioneered the concept of using previews, or special renderings of a page and element, to allow manual testing to be performed rapidly. The modern equivalent is the use of InFocus Toolbar preview modes to allow complex semantic and visual equivalence issues – like the proper use of headers – to be diagnosed quickly. These preview tests should be provided via whatever visual interface is readily available.
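
To make the guided-automatic and result-replay ideas concrete, here is a rough sketch of how the two could fit together for alt-text review. The fingerprinting scheme and the stored-verdict map are illustrative assumptions, not AMP’s actual API: the tool finds the candidate images automatically, and only the source/alt pairs with no previously stored human verdict are queued for manual review.

```typescript
// Hypothetical sketch of guided-automatic review with result replay;
// fingerprint() and the stored-verdict map are illustrative, not AMP APIs.

interface ImageCandidate {
  src: string;
  alt: string | null;
}

type Verdict = "pass" | "fail";

// Fingerprint a source/alt pair so an identical pair is never re-reviewed.
function fingerprint(c: ImageCandidate): string {
  return `${c.src}::${c.alt ?? ""}`;
}

// Guided-automatic step: the tool finds the candidates; a human judges them.
function findImageCandidates(doc: Document): ImageCandidate[] {
  return Array.from(doc.querySelectorAll("img")).map((img) => ({
    src: img.getAttribute("src") ?? "",
    alt: img.getAttribute("alt"),
  }));
}

// Replay step: drop any candidate whose fingerprint already has a verdict.
function queueForManualReview(
  candidates: ImageCandidate[],
  storedVerdicts: Map<string, Verdict>,
): ImageCandidate[] {
  return candidates.filter((c) => !storedVerdicts.has(fingerprint(c)));
}
```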

Testing Approaches

Across all our clients, Level Access has seen many different testing approaches for Section 508 compliance or WCAG conformance. These run the gamut from no testing at all to full, formal audits of IT systems on a per-release basis. Out of all of these, however, four general approaches recur across the marketplace:

  • Automatic Testing – The cheapest and most common approach to validation is pure automatic testing, generally performed with the aid of a spider. In this scenario you enter a URL, a spider is dispatched to gather pages, and each page it discovers is diagnosed.
  • Quick Test – The second type is what we generally refer to as a quick test. This uses automatic testing as a baseline but extends it with a limited set of manual tests – generally somewhere between ten and twenty in a basic checklist. The set of best practices can be chosen in any fashion but is generally selected based on the frequency and severity of violations. This gives you good hybrid coverage of critical accessibility issues at a cost significantly lower than a full audit.
  • AT Testing – AT testing focuses solely on testing in one or a few assistive technologies and performs no normative, rule-based testing on the application. This approach determines whether the system works in a specific technology, but limits results to the particular assistive technologies and versions, disability types, and application paths tested.
  • Audit – A formal audit for Section 508 compliance or WCAG conformance producing a VPAT or conformance statement (or both). This scenario generally conforms to a formal audit methodology – such as Level Access’s Unified Audit Methodology – and includes testing for the full set of normative issues as well as functional validation in specific assistive technologies. This approach provides all the information required to determine compliance with a given accessibility standard, but it is also the most costly and time-consuming.

Each of these testing approaches offers benefits and drawbacks at different points in the development life cycle. The issue, then, is not picking a single permanent approach but providing an easy way to pick the approach that offers the right tradeoff for the situation at hand. This ensures that, for the given situation, the most cost-effective testing approach can be executed, maximizing the risk-reduction/cost tradeoff.
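
One way that per-situation choice might look in software is a simple per-run configuration. The sketch below is purely illustrative – the approach names mirror the list above, and the field names and example values are assumptions, not an AMP API.

```typescript
// Hypothetical per-run configuration; the names mirror the four approaches above.
type TestingApproach = "automatic" | "quick-test" | "at-testing" | "audit";

interface TestRunConfig {
  approach: TestingApproach;
  startUrl: string;
  maxPages?: number;          // spider-driven approaches
  manualChecklist?: string[]; // quick test: the ten to twenty chosen best practices
  assistiveTech?: string[];   // AT testing: the specific technologies in scope
}

// The same site tested two ways: a cheap nightly sweep and a release-gate audit.
const nightly: TestRunConfig = {
  approach: "automatic",
  startUrl: "https://www.example.com",
  maxPages: 500,
};

const releaseGate: TestRunConfig = {
  approach: "audit",
  startUrl: "https://www.example.com",
};
```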

Browser Centric

An ideal interface for performing manual accessibility testing is the browser, which has become the de facto operating system for the user interface of most applications. Ideally, accessibility testing would be a matter of bringing up a page in a browser, pressing a button, and performing any required tests in a guided fashion. For example, to diagnose the Google home page we would navigate to www.google.com and press a “Diagnose” button; all automatic tests would run against the current page, and a basic wizard interface would be displayed to complete the remaining relevant manual tests. Upon completion of the testing cycle, the tester could store the results to a central server for distribution to whoever owns the relevant asset.
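
A minimal sketch of that “Diagnose” flow, written as a browser script: run what can be automated against the live DOM, collect candidates for the guided manual wizard, and post the combined results to a central endpoint. The endpoint URL and the result shape here are made up for illustration.

```typescript
// Hypothetical "Diagnose" flow for the current page; the endpoint and
// result shape are illustrative only.

interface DiagnosisResult {
  pageUrl: string;
  automaticViolations: string[];
  manualReviewQueue: string[];
}

function diagnoseCurrentPage(doc: Document): DiagnosisResult {
  const automaticViolations: string[] = [];
  const manualReviewQueue: string[] = [];

  // Fully automatic check: images with no alt attribute at all.
  doc.querySelectorAll("img:not([alt])").forEach((img) =>
    automaticViolations.push(`Missing alt attribute: ${img.getAttribute("src")}`),
  );

  // Guided-automatic check: images whose alt text a human should verify.
  doc.querySelectorAll("img[alt]").forEach((img) =>
    manualReviewQueue.push(`Verify alt text "${img.getAttribute("alt")}"`),
  );

  return {
    pageUrl: doc.location?.href ?? "",
    automaticViolations,
    manualReviewQueue,
  };
}

// Store results centrally so whoever owns the asset can see them.
async function storeResults(result: DiagnosisResult): Promise<void> {
  await fetch("https://amp.example.com/api/results", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(result),
  });
}
```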

Simple

One of the problems users run into with accessibility software is that too much information is available too soon in the system. While it is valuable to have access to over a thousand best practices, users do not want to see everything at once when they log in. In the same vein, while they appreciate the need to validate an application’s compliance with all relevant Section 508 requirements, in practice this may not be the testing method that makes the most sense for them at a given point in time.

Another reality of accessibility is that it tends to touch a large number of people with widely varying technical depth. For some users, a spider and HTML are relatively complex topics; for others, direct access to the source code and API for InFocus is par for the course. In practice, however, the user base skews toward the less technical – people looking to get in and out of the system as quickly as possible.

All of that means our focus is on (i) keeping the workflows in AMP as simple as possible and (ii) limiting the number of features exposed in the system. Historically, our focus at Level Access has been on broadly exposing features (more features = good). Going forward, our focus is on enabling the same core business tasks while removing or simplifying the activities required to complete them.

An up-or-down determination of compliance is good

Currently, AMP provides a variety of summary reports that allow you to slice and dice the compliance of a system in different fashions. Level Access was the first company in the market to provide percentage-based compliance reporting, and it remains a core part of the drill-down reporting experience we provide today.

What we have found over the last few years, however, is that customers are increasingly looking for a basic way to quickly determine the compliance of a system. Think of a red/yellow/green dashboard approach to compliance, where you can see at a glance whether an application is compliant, potentially in trouble, or not compliant. Ideally, these reports would be provided in real time for any assets that warrant it – a public web site, for example – so that you can quickly check on high-risk assets and make sure they are currently compliant.
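
As a concrete (and entirely hypothetical) illustration, the sketch below maps a percentage compliance score onto the three dashboard states. The 95/80 cut-offs are assumptions for the example; in practice the thresholds would be a policy decision, not something the tool hard-codes.

```typescript
// Hypothetical mapping from a compliance score to a dashboard state.
// The 95/80 thresholds are illustrative assumptions, not AMP's actual policy.
type DashboardState = "green" | "yellow" | "red";

function complianceState(scorePercent: number): DashboardState {
  if (scorePercent >= 95) return "green";  // compliant
  if (scorePercent >= 80) return "yellow"; // potentially in trouble
  return "red";                            // not compliant
}
```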

Learn as you go

While we at Level Access would love it if all of our clients completed accessible-developer or QA-engineer courses, in practice accessibility training occurs with widely varying degrees of formality, and it is rarely a prerequisite for deploying content to a site or checking in code. The implication is that the vast majority of users of AMP and our audit reports will have little prior knowledge of accessibility and little to no formal training in the subject. Our in-system workflow should therefore teach users what is required for conformance as they use the system, rather than expecting everyone to take full, formal training courses on accessibility.

Level Access pioneered this approach with the creation of Just-in-Time Learning in InFocus, where information about best practices was provided to developers in the context of the current violation, allowing people to learn as they go with live code examples. This was one of the things people always remarked on as a positive for InFocus, and it is an experience we should support across all the AMP clients.
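
To make the idea concrete, a violation record in such a system might carry its learning content inline, so the explanation and a live code example travel with the finding. The shape below is a sketch under that assumption, not InFocus’s actual data model.

```typescript
// Hypothetical violation record carrying just-in-time learning content.
interface ViolationWithLearning {
  bestPracticeId: string;   // e.g. an internal best-practice identifier
  element: string;          // the offending markup
  summary: string;          // what is wrong, in plain language
  whyItMatters: string;     // the accessibility impact, taught in context
  compliantExample: string; // a live code example of the fix
}

const example: ViolationWithLearning = {
  bestPracticeId: "img-alt-meaningful",
  element: `<img src="chart.png" alt="image">`,
  summary: "The alt text does not describe the image content.",
  whyItMatters:
    'Screen reader users hear only "image" and get none of the chart\'s information.',
  compliantExample: `<img src="chart.png" alt="Q3 sales rose 12% over Q2">`,
};
```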