At CSUN this year I had the pleasure of attending a talk titled “Ensuring Accessibility is Considered Throughout the Lifecycle” by Oleg Vasilyev and William Lawrence of AT&T. (Apologies, guys, I know I didn’t get the talk title quite right.) Oleg and William discussed AT&T’s Corporate Accessibility Technology Office (CATO) and the leading efforts of AT&T in the accessibility space. The talk provided a variety of great observations and notes – thanks to Oleg and William for delivering it. A few points that I found of interest:
Develop and manage a front-door process for accessibility. Make sure that all products and projects flowing through an organization’s development lifecycle are evaluated for accessibility and the relevant regulatory coverage at project start. The core idea here is that we want to screen for accessibility requirements at the beginning of a project. That allows planning around accessibility and proper scope development for budgets. AT&T built this process from the ground up for accessibility, and it is now positioned to serve as a benchmark for other compliance areas – a good example of accessibility leading development process innovation.
Have authority. CATO has the ability to issue go/no-go directives for any project that has not met the appropriate design gateways. So if you haven’t examined and implemented accessibility as appropriate, your project doesn’t launch.
Think about scope early. Dig into what areas of the project are likely to impact accessibility. Dig into who on the project is likely to impact accessibility. Plan your activities around that.
Integrate into the lifecycle. Instead of providing a blanket set of requirements, think about where we inject (and evaluate conformance to) those requirements in the process. If you are using Scrum, put accessibility into both the Definition of Done and the Definition of Ready.
Support the business units. At the end of the day, business units carry primary responsibility for accessibility, including development and testing. Those business units are trained to know when to ask for help and to pull in more resources or advice as needed. A purely centralized model dilutes business responsibility for accessibility, so balance a central expert group with a decentralized implementation model.
Get some standards. AT&T developed a unified set of enterprise-wide accessibility standards. These incorporate WCAG, ATAG, Section 255, Section 716, and other accessibility and user experience standards. Ultimately these also reflect the performance objectives of Sections 255 and 716 to provide access across a variety of user interfaces. The standards contain a suggested set of possible ways to achieve each standard, along with links to other solutions and more resources.
Have a test plan. The AT&T Accessibility Standards can be used to create a good test plan. Such a plan should have a clear set of accessibility testing methods that the production team can use to determine conformance with the standards and, ultimately, the underlying regulatory requirements. (Read: CVAA performance objectives). These test results then need to be fed back into the production process to improve the final product.
Have a good design library. A good design library (pattern library) should be a recurring set of solutions that solve common accessibility issues. In practice it often is a recurring set of problems that cause common accessibility issues. Including accessibility in the design library provides a cost-effective way to significantly speed work.
Vendor management. Require vendors to follow the same requirements as the internal teams. It doesn’t matter whether we build it in-house or pull it in from a vendor – make it accessible.
Outside of those lessons, Oleg and William talked about an interesting solution for CAPTCHA which balanced accessibility and security concerns. The solution was, basically, use it, but do so in a smarter fashion. The idea was that CAPTCHA – even with auditory alternatives – presents challenges for users with disabilities. AT&T realized that they could address many of these issues by simply deploying CAPTCHA more judiciously. In the discussed implementation, the system uses CAPTCHA, but only when it is likely to be needed. The system temporarily tracks each user’s IP. If it sees more than X requests from a particular IP in Y seconds, CAPTCHA is displayed. All tracked IPs are temporarily stored and cleared once every Z minutes. This provides CAPTCHA when needed, but balances that with a more compelling, and accessible, user experience. I liked the idea of weighing out that tradeoff and thinking through substantive new ways of balancing accessibility concerns against other business concerns.
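For the curious, the throttling idea described above can be sketched roughly like this. The class and parameter names here are my own illustration, not AT&T’s actual implementation; X, Y, and Z from the description map to max_requests, window_seconds, and flush_seconds.

```python
import time
from collections import defaultdict


class CaptchaGate:
    """Illustrative sketch: show a CAPTCHA only when an IP exceeds
    max_requests within a window_seconds sliding window; flush all
    tracked IPs every flush_seconds so nothing is stored long-term."""

    def __init__(self, max_requests=5, window_seconds=10, flush_seconds=300):
        self.max_requests = max_requests      # "X" requests
        self.window_seconds = window_seconds  # within "Y" seconds
        self.flush_seconds = flush_seconds    # clear all IPs every "Z"
        self.requests = defaultdict(list)     # ip -> recent request times
        self.last_flush = time.monotonic()

    def needs_captcha(self, ip, now=None):
        now = time.monotonic() if now is None else now
        # Periodically drop every tracked IP.
        if now - self.last_flush >= self.flush_seconds:
            self.requests.clear()
            self.last_flush = now
        # Keep only this IP's timestamps inside the sliding window,
        # record the current request, and check against the threshold.
        recent = [t for t in self.requests[ip] if now - t < self.window_seconds]
        recent.append(now)
        self.requests[ip] = recent
        return len(recent) > self.max_requests
```

In use, most visitors never see a CAPTCHA; only an IP hammering the endpoint trips the threshold, which is where the accessibility win comes from.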
Great presentation – see you guys both at CSUN 2016!