CSUN 2018 Takeaways: VIKF, AOM, and AblePlayer

Owen Edwards 04/12/18

Did you attend CSUN this year? If not, we can bring a little bit of CSUN to you! We asked our Levelers to share some of the interesting sessions they attended.

Save the Outlines – or visual indication of keyboard focus (VIKF)

One perennial problem in website accessibility is getting the site’s visual designers to accept that disabling “outline” (or the visual indication of keyboard focus) can be a total accessibility blocker for sighted keyboard-only users. (See OutlineNone for more details.)

Imagine a website that actively disabled your mouse cursor, so you couldn’t see it moving across the page. That’s what it’s like for a keyboard-only user when outline is disabled. The trouble is that when a sighted mouse user clicks a button, the control takes focus and the outline appears there too, which some designers don’t like. But the outline serves exactly one purpose, the visual indication of focus, and that is extremely important to sighted keyboard-only users. However distasteful some designers may find it, it is necessary.

This has been a big topic of discussion on the video.js open source video player (see, for example, https://github.com/videojs/video.js/pull/5027), in part because there are so many buttons for the user to interact with, and none of the buttons change page content or otherwise refresh the page in a way that would typically appear to hide focus.

Google’s YouTube implements an interesting workaround: the page watches for any keyboard interaction and enables outline only if one occurs; otherwise outline stays turned off. This moves towards giving the desired behavior for both sighted mouse users and sighted keyboard-only users.
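A minimal sketch of that heuristic is below. The class name, event wiring, and CSS rule are illustrative assumptions, not YouTube’s actual code:

```javascript
// Pure decision helper: should focus outlines be visible, given the most
// recent input modality seen on the page? (Illustrative, not YouTube's code.)
function shouldShowFocusOutline(lastEventType) {
  // Keyboard interaction turns outlines on; mouse/touch turns them back off.
  return lastEventType === "keydown";
}

// Browser wiring (guarded so the snippet also loads outside a browser).
// A stylesheet rule such as:
//   body:not(.user-is-tabbing) :focus { outline: none; }
// then hides the focus ring only while the user is mousing.
if (typeof document !== "undefined") {
  for (const type of ["keydown", "mousedown", "touchstart"]) {
    document.addEventListener(type, () => {
      document.body.classList.toggle("user-is-tabbing", shouldShowFocusOutline(type));
    }, true);
  }
}
```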

But are there users with other interaction strategies (either native or through assistive technology) which use simulated mouse or keyboard interaction, where this solution wouldn’t work? And how does mobile/touchscreen interaction map to all of this? (Note the parallel to on-hover tooltips, which have been used in the past and don’t always map well to a mobile/touchscreen environment.)

Google has proposed moving this into the browser via the CSS Selectors Level 4 :focus-visible pseudo-class; see https://github.com/wicg/focus-visible. It looks like a good fix for the underlying reason visual designers want outline: none, namely that they don’t like focus rings becoming visible when someone clicks with a mouse. But are there side effects for people with disabilities and for assistive technology users? And how will this be used in a progressive enhancement way? Many of our clients are required to support users who access websites using JAWS, and that means supporting Internet Explorer 11; IE11 will never be enhanced to support :focus-visible (and note that JAWS users aren’t the ones impacted by VIKF!).
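One hedged way to approach the progressive-enhancement question is to feature-detect the selector. The selector() syntax of CSS.supports is standard (CSS Conditional Rules), but the detection strategy below is my own sketch, not the focus-visible project’s actual code:

```javascript
// Detect support for the proposed :focus-visible selector. The checker is
// injectable so the logic can be exercised without a browser; in a page you
// would pass CSS.supports.bind(CSS). IE11 lacks CSS.supports entirely.
function supportsFocusVisible(cssSupports) {
  if (typeof cssSupports !== "function") return false;
  try {
    return cssSupports("selector(:focus-visible)") === true;
  } catch (e) {
    return false; // older engines reject the selector() syntax
  }
}

// A page could then load a script-based fallback only when needed, e.g.:
//   if (!supportsFocusVisible(window.CSS && CSS.supports.bind(CSS))) {
//     /* load a focus-visible polyfill */
//   }
```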

The Accessibility Object Model (AOM)

Alice Boxhall from Google gave a fascinating presentation on the concept of an “Accessibility Object Model”—an interface from JavaScript inside the browser to set the name, role, state and settings of objects in the DOM (Document Object Model).

There are several interesting use cases for the AOM:

  • allowing better dynamic construction of components (e.g. Web Components),
  • allowing JavaScript to detect whether the specific browser supports setting accessibility values (e.g., ARIA values) to certain settings, and
  • even potentially allowing a web page to know whether the user is using assistive technology.

The latter has always been a controversial topic, with many assistive technology users (typically users with disabilities) not wanting a website to be able to identify them as such. However, as Alice pointed out, it’s a privacy issue similar to a website asking to use your webcam and microphone for a video chat app, or (perhaps a better analogy) asking to use your location information (e.g., via GPS on a mobile device), and browsers already support a way to ask the user if they are willing to share that information with the website.

More information, including demos which make use of an experimental implementation of the AOM features in the Chrome browser, is available at https://github.com/WICG/aom and http://wicg.github.io/aom/demos/.
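The feature-detection use case can be sketched roughly as follows. The accessibleNode property is the experimental AOM surface proposed in the explainer (behind a flag in Chrome at the time); the helper function and fallback wiring are my own illustration:

```javascript
// Set accessible properties on an element, preferring the experimental AOM
// surface (el.accessibleNode, behind a flag in Chrome) and falling back to
// plain ARIA attributes. Returns which path was taken, so callers can tell
// what the browser supports.
function applyAccessibleProps(el, props) {
  if (el.accessibleNode) {
    Object.assign(el.accessibleNode, props); // e.g. { role: "slider", label: "Volume" }
    return "aom";
  }
  for (const [key, value] of Object.entries(props)) {
    // "label" maps to aria-label; "role" stays the bare role attribute.
    const attr = key === "role" ? "role" : "aria-" + key.toLowerCase();
    el.setAttribute(attr, String(value));
  }
  return "aria-fallback";
}
```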

Video player accessibility (based on AblePlayer updates)

Terrill Thompson from the University of Washington gave a presentation, “Media Player Accessibility: Insights from Interviews and Focus Groups,” on the results of interviews and focus groups conducted during CSUN 2017 about video accessibility in general, and AblePlayer specifically.

The full results will be published by CSUN as part of an upcoming research results publication, but the summary identified five key issues:

  1. Seeking to a new point in the media

Currently, AblePlayer’s seek controls allow users to seek by a certain percentage of the timeline (except for short videos), which is not obvious to users. Controls should disclose the seek interval (e.g., “Forward 10 seconds”).

Even with slider and skip-forward/skip-backward controls, users found they couldn’t easily navigate through the timeline. One suggestion is a “Jump to time” edit box that becomes visible somehow.
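One way to disclose the seek interval is to bake it into the control’s accessible name. This sketch is illustrative only; the step-size policy and function names are hypothetical, not AblePlayer’s actual code:

```javascript
// Pick a fixed seek step, shrinking it for very short clips, and surface
// the step in the control's accessible name so the interval is disclosed.
function seekStepSeconds(durationSeconds) {
  // Hypothetical policy: 10-second steps, or roughly a tenth of short clips.
  return durationSeconds < 60 ? Math.max(1, Math.round(durationSeconds / 10)) : 10;
}

// Build the button's accessible name, e.g. for aria-label.
function seekButtonLabel(direction, durationSeconds) {
  const step = seekStepSeconds(durationSeconds);
  return (direction === "forward" ? "Forward " : "Back ") + step + " seconds";
}
```

For a five-minute video, seekButtonLabel("forward", 300) yields “Forward 10 seconds”, so a screen reader user hears the interval before activating the control.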

  2. Audio Description Preferences (conventional recorded audio description versus text video description announced by the user’s screen reader)

The pros of text video description are that it is easy to produce, inexpensive, searchable, and readable (via the transcript). But one major issue that was pointed out is that if the screen reader is reading the text description and the user presses a key, the screen reader stops speaking while the video keeps playing, so the audio description accommodation breaks. Still, in the end, any audio description is better than none.
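Text video description of this kind is typically announced through an ARIA live region. A hedged sketch of the mechanism (the wiring and class name are assumptions, not AblePlayer’s internals), which also shows why a keypress interrupts it: the screen reader simply stops reading the live region, while playback continues:

```javascript
// Normalize a description cue before announcing it, so the live region
// reads cleanly (collapse whitespace from multi-line VTT cue text).
function formatDescriptionCue(cueText) {
  return cueText.replace(/\s+/g, " ").trim();
}

// Browser wiring (guarded): a visually hidden live region that the screen
// reader announces whenever a description cue becomes active.
if (typeof document !== "undefined") {
  const live = document.createElement("div");
  live.setAttribute("aria-live", "assertive"); // interrupts to read the description
  live.className = "visually-hidden"; // assumed CSS class that moves it off-screen
  document.body.appendChild(live);
  // On each active cue: live.textContent = formatDescriptionCue(cue.text);
}
```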

  3. Should there be a mechanism to allow non-sighted users to hear the captions/subtitles?

A solution to this issue could be easy to implement (like the text video description mechanism mentioned above), but it’s not clear if this feature should always be enabled, or if it should be user-selectable.

  4. Support for a separate sign-language video

AblePlayer can show a separate video window with sign language for what’s being spoken in the main video. This WCAG Level AAA feature is probably unique to AblePlayer; other players could potentially include a sign-language video “burned in” to the original video as a kind of picture-in-picture, but not the genuinely separate video that AblePlayer offers. Where present, users wanted maximum customizability of the sign-language window’s sizing and placement in relation to the main video.

  5. Quantity/visibility/order of video player controls

A common complaint was “too many buttons,” but there was no consensus on which ones are important and which could be hidden behind a “More…” drop-down.

I look forward to reading the full report when it is released later this year.

To view Level Access slide decks, please visit our Download 2018 CSUN Presentations page.

Owen Edwards is one of our Senior Accessibility Consultants. Stay tuned for more reports from CSUN 2018 from the Level Access team!
