Written by: Rajan Nanavati

During our most recent webinar—The Future Of WCAG: Introduction to WCAG 3.0 with Chief Accessibility Officer Jonathan Avila—we weren’t able to answer all of the questions submitted to us during the allotted time.

We wanted to ensure every question received an answer, so below you’ll find the complete list of questions submitted during the webinar, with Jonathan’s responses inline.

Also, please note that WCAG 3.0 is still an unofficial, unpublished working draft, and the future guidelines are subject to change. The interpretations are our own and not official guidance from the W3C.

How will companies arrive at their random sampling of views/pages? How will those views/pages be selected to ensure that they are random? Will conformance levels of bronze, silver, gold be dated?

At this time no specific random sampling method has been defined. Random sampling is discussed in WCAG-EM 1.0; it generally excludes already-tested pages and requires that you start from a comprehensive list of in-scope pages, which may be generated from sources such as server logs, analytics, or site scans.
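As a purely illustrative sketch of the WCAG-EM-style approach described above (the function name and inventory source are assumptions, not anything defined in the draft), the sampling step might look like:

```python
import random

def sample_pages(inventory, already_tested, n, seed=None):
    """Randomly sample n not-yet-tested pages from a site inventory."""
    # The candidate pool excludes pages already in the structured sample,
    # mirroring the WCAG-EM guidance of not re-testing selected pages.
    pool = sorted(set(inventory) - set(already_tested))
    rng = random.Random(seed)
    return rng.sample(pool, min(n, len(pool)))

# Hypothetical inventory assembled from server logs or a site scan.
inventory = [f"/page/{i}" for i in range(100)]
already_tested = ["/page/0", "/page/1"]
extra = sample_pages(inventory, already_tested, 5, seed=42)
```

Passing a `seed` makes the selection reproducible for audit purposes while still being random with respect to the content of the pages.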

In regard to dating – yes – conformance claims will need to be dated based on when the claim is made. It is likely outside of the scope of the guidelines to mandate when a claim is too old to be reliable.

I would guess that each functional outcome may be mapped to several functional disability categories, is that correct?

Yes, as we understand it, most outcomes will be mapped to several functional disability categories.

Are there any plans to enforce the User Agent accessibility guidelines in a way that would produce more consistent behavior across browsers (for example, respecting ARIA, generating a consistent accessibility tree, or rendering focus outlines)?

Progress has been made in this area by browser vendors, and some vendors have worked together to tackle common issues, such as the Chrome and Edge teams collaborating on focus indication in Chromium. The guidance will be applicable, and the guidelines are written in a way that they could be extended to many different types of content. However, the guidelines are less likely to tackle platform-specific requirements.

Do you expect 3.0, like 2.1, to continue defining robustness as conformity to standards, rather than actual compatibility with ATs?

In terms of accessibility-supported techniques, the hope is to ensure content is accessible to users without requiring authors to work around browser and assistive technology bugs. Thus, parsing issues such as those covered under WCAG 2 SC 4.1.1 Parsing are likely not to impact a passing score.

What was the prevailing argument that moved 2.4.7 from AA to A?

In reality, most organizations require both A and AA, so this change likely has no practical impact. My understanding of the thought process is that you can’t effectively use the keyboard without a visual indication of focus, and since SC 2.1.1 Keyboard is Level A it makes sense for these to be at the same level.

So in the W3C Accessibility Guidelines (WCAG 3.0 “Silver”), are tests driven by functional business outcomes or accessibility outcomes? For example: a web page on an e-commerce site is difficult to read because of missing labels, but a user is still able to complete a purchase. Would this be considered accessible in WCAG 3.0?

The standards are not at a point yet to answer this situation specifically, but the goal is to allow some items that are not in the critical path to have a lesser impact on the score. So it would likely depend on whether these fields are in the path, how many issues there are, and their impact on a diverse set of users. This situation is one that will have to be considered carefully based on a number of factors that can be quantified to ensure measurability and repeatability.

In your expanded definition are you counting web forms as an authoring tool? For example, a job application form or web app?

Generally, anything that allows someone to create content that can be shared with other people, and that isn’t just plain text, is an authoring tool – at least that is how it is interpreted by other standards such as Section 508.

Still not sure what the difference is between “user task flow” and “complete process”.

The exact terminology is still being discussed, defined, and determined. I suspect that a complete process requires specific steps that must be done in a certain order while a path could be a smaller chunk and perhaps something that is not tied to a particular order or sequence of steps. More to come on specific terms and what they mean.

On the aspect of scope, it seems the thought is that we can pick and choose what is or is not required to be accessible, as long as the overall user flow chosen allows for completion of the business task. Is that a fair interpretation of the consideration?

Yes, at this time the thinking is that testers will be given flexibility to determine what is and is not in scope for a claim.

How do you define a “critical failure”?

Certain things, like the presence of flashing content above a certain threshold, would likely be a critical failure, as they can cause a seizure. Other issues might prevent access to the rest of the page, such as moving content that can’t be stopped.
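For illustration only, here is one way the WCAG 2.x “no more than three flashes in any one-second period” general threshold could be checked against a list of flash timestamps. The function name is an assumption, and WCAG 3.0’s critical-failure criteria are still a draft:

```python
def exceeds_flash_threshold(flash_times, max_flashes=3, window=1.0):
    """Return True if more than max_flashes occur within any sliding window.

    Based on the WCAG 2.x general flash threshold (three flashes per
    second); WCAG 3.0's critical-failure definition is not yet final.
    """
    times = sorted(flash_times)
    for i in range(len(times)):
        j = i
        # Count flashes that fall inside the window starting at times[i].
        while j < len(times) and times[j] - times[i] < window:
            j += 1
        if j - i > max_flashes:
            return True
    return False
```

Note that the real success criterion also involves flash area and red-flash thresholds, which a simple timestamp check like this does not capture.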

So, in testing, let’s say 92% pass but a keyboard issue blocks a user from completing a user flow. In the silver methodology is this still considered substantively accessible?

A keyboard blocker issue in a critical flow would likely cause a failure, because there would not be enough points for the functional outcomes to meet the functional needs of users across disability types who rely on the keyboard. However, all of these details and the exact scoring have not been agreed upon and validated across a large sample of sites and content types. So, at this time, these are good questions that still need to be vetted in practice against the current proposal.

Did you or are you planning to discuss accessibility for User Agents / devices like Neuralink (that have potential to bypass some disabilities)?

There are some technologies that, when combined with content, can allow users to meet the requirements (for example, touch accommodation support on iOS). Some guidelines are written in ways that allow for such mechanisms in user agents and platforms, while others are not. At this point it’s too early to speculate on this particular situation, but the issue in general is something the group is aware of.

Is the concept of “view” and “paths” meant to extend WCAG to native mobile apps? Is there an intention to apply the WCAG to native mobile app a11y?

Paths in this case are likely similar to user flows, although the exact definition has not been agreed upon. The goal is to allow the functional outcomes and categories to support scoring for many different types of content, including mobile apps. The W3C is focused on web technology, so while the guidelines can be applied to mobile, I suspect many of the tests and techniques provided will be for web technology. However, they could be extended to mobile apps more effectively than today.

One of the issues raised when developing WCAG 2.0 was this issue of using a metric to measure how “accessible” something is. At the time it was not clear how any metric would make sense. How will this WCAG 3 model work? What does 75% accessibility mean vs. 80% accessibility?

Any percentage would need to take into account the impact on critical user tasks/paths and the impact of the missing support on disability groups. So, depending on context and the users impacted, a given percentage may put you above or below the threshold when the score is rolled up and cross-referenced with these other factors. This is the nut the group is trying to crack in a way that is fair across disabilities and fair to the context of the site. It is a challenge, and much effort has gone into it by the Silver Task Force of the Accessibility Guidelines Working Group.
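Since no scoring formula has been published, the following is a purely hypothetical sketch of what such a roll-up could look like, with made-up field names (`percent`, `weight`, `critical_failure`) standing in for whatever factors the final guidelines define:

```python
def rolled_up_score(outcomes):
    """Combine per-outcome pass percentages into one overall score.

    Hypothetical illustration only: WCAG 3.0 scoring is not finalized.
    A critical failure on a critical path zeroes the claim outright,
    reflecting the idea that context can outweigh a raw percentage.
    """
    if any(o["critical_failure"] for o in outcomes):
        return 0.0
    total_weight = sum(o["weight"] for o in outcomes)
    return sum(o["percent"] * o["weight"] for o in outcomes) / total_weight

outcomes = [
    {"percent": 92.0, "weight": 2, "critical_failure": False},
    {"percent": 80.0, "weight": 1, "critical_failure": False},
]
score = rolled_up_score(outcomes)  # weighted average of the two outcomes
```

Under a model like this, “92% accessible” means little on its own: the same percentage could pass or fail depending on the weights assigned to each outcome and whether any critical path is blocked.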

If a company passes at the bronze level, but cannot pass at silver, how does the difference in conformance level apply to ensuring all content is perceivable, operable, understandable, or robust?

The bronze-level outcomes could be mapped to these general principles, as they are still relevant today. The silver and gold levels will likely give recognition to those who go beyond the minimum guidelines, address accessibility in their processes, and optimize the user experience of all people, including those with disabilities. For example, something that requires 20 tab stops to reach may technically be accessible but would not earn points for usability.