
CSUN Educates on Machine Learning

Alistair Garrison 04/06/18

Did you attend CSUN this year? If not, we can bring a little bit of CSUN to you! We asked our Levelers to share some of the interesting sessions they attended.

Each year, CSUN provides a great opportunity to catch up with like-minded folks from around the globe who work in similar accessibility research fields.

And, 2018 did not disappoint…

This year I was very excited to give a new talk on Machine Learning (ML), titled Machine Learning: Where We’re At and Where We’re Going, which I’m glad to say was well attended. I was equally excited to find time between meetings to attend a couple of the ML-related presentations in this year’s CSUN schedule.

It’s an understatement to say I was terribly impressed with the Seeing AI app; I even went to the follow-up session the presenters (Anirudh Koul and Saqib Shaikh) gave in the AT hall.

For those who have not heard of it, Seeing AI is an iOS app that, for want of a better description, describes to the user, in the most natural way, the images, people, and objects on camera, and much, much more.

The technology which allows this natural description to occur is two-fold:

  • A neural network that has been trained to identify 225 different objects; and
  • A method that formulates a “bag of tags” into a natural sentence.

These two mechanisms are made available to the app via Microsoft’s Cognitive Services.
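For the curious, here is a minimal sketch of what that two-step pipeline can look like against the public Computer Vision API in Cognitive Services. The endpoint region, API version, and key below are placeholders based on Microsoft’s published analyze endpoint; Seeing AI’s internal pipeline may well differ.

```typescript
// Minimal sketch: asking Cognitive Services' Computer Vision API for a
// "bag of tags" plus a natural-sentence caption for one image.
// Region, API version, and key are assumptions / placeholders.
const ENDPOINT = "https://westus.api.cognitive.microsoft.com";
const KEY = "<your-subscription-key>";

interface AnalyzeResult {
  tags: { name: string; confidence: number }[]; // the "bag of tags"
  description: { captions: { text: string; confidence: number }[] };
}

async function describeImage(imageUrl: string): Promise<void> {
  const res = await fetch(
    `${ENDPOINT}/vision/v1.0/analyze?visualFeatures=Tags,Description`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url: imageUrl }),
    }
  );
  const result: AnalyzeResult = await res.json();

  // The raw tags identified by the trained network...
  console.log(result.tags.map((t) => t.name).join(", "));
  // ...and those tags formulated into a natural sentence.
  console.log(result.description.captions[0]?.text ?? "(no caption)");
}

describeImage("https://example.com/some-photo.jpg");
```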

The really interesting thing, at least for me, was hearing how the team improved recognition and lowered bias by training the neural net on images that contained the item to be identified in the most cluttered or confusing settings.

They had discovered that stock images of dollar bills (e.g., a dollar bill on a table) were simply too clear and clean to achieve the highest recognition rates, at least when compared with cluttered images: a dollar bill on a sleeping cat, a dollar bill in a bowl of ice, and so on.

In addition to the ML talks, I had an opportunity to see two more very useful presentations.

And, after hearing about the strides taken in the development of an industry-consistent Accessibility Object Model, I’m now cognizant of its potentially powerful future impact.

The Accessibility Object Model project aims to create a JavaScript API to allow developers to modify (and eventually explore) the accessibility tree for an HTML page.
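To give a flavour of the idea, here is a hypothetical sketch based on the first phase of the explainer as it stood at the time: writing accessible properties directly, instead of via ARIA attributes. The accessibleNode name and its properties come from that proposal and may change as the spec evolves.

```typescript
// Hypothetical sketch of AOM phase 1: setting accessibility-tree
// properties directly on an element, with an ARIA fallback for
// browsers that don't expose the (proposed) accessibleNode API.
const slider = document.querySelector("#volume") as HTMLElement & {
  accessibleNode?: { role: string; label: string; valueNow: number };
};

if (slider.accessibleNode) {
  // Write semantics straight into the accessibility tree...
  slider.accessibleNode.role = "slider";
  slider.accessibleNode.label = "Volume";
  slider.accessibleNode.valueNow = 5;
} else {
  // ...or fall back to the equivalent ARIA attributes.
  slider.setAttribute("role", "slider");
  slider.setAttribute("aria-label", "Volume");
  slider.setAttribute("aria-valuenow", "5");
}
```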

Development of the Model is led by personnel from Google, Apple, and Mozilla, including Google’s Alice Boxhall and Dominic Mazzoni (whom I’ve had the pleasure of knowing through his work on ChromeVox) and James Craig (Apple).

During their fascinating talk, I was especially interested to hear about the intent to enable web content (possibly behind increased permissions) to listen for events from assistive technology, as I could immediately think of many potential uses.
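As a thought experiment, here is a sketch of what that might look like under the event model the explainer proposed, assuming an event name like accessibleincrement; both the name and the permission model are proposals, not shipped APIs.

```typescript
// Hypothetical sketch of AOM's proposed input events from assistive
// technology: a screen-reader "increase" gesture on a custom slider
// arriving as a semantic event. The event name is an assumption taken
// from the explainer's proposal and is not a shipped API.
type AccessibleNode = EventTarget & { valueNow?: number };

const el = document.querySelector("#volume") as HTMLElement & {
  accessibleNode?: AccessibleNode;
};

let volume = 5;

el.accessibleNode?.addEventListener("accessibleincrement", () => {
  // Handle the AT gesture exactly as we would an ArrowUp keypress.
  volume = Math.min(volume + 1, 10);
  if (el.accessibleNode) el.accessibleNode.valueNow = volume;
});
```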

For more information on the Accessibility Object Model, see https://github.com/WICG/aom/blob/gh-pages/explainer.md

The point of CSUN, for me, is to have your horizons expanded and to come away with more avenues of research to explore than you arrived with; in that respect, CSUN 2018 did not fail. Even with the 24-hour flight home!

To view Level Access slide decks, please visit the Download 2018 CSUN Presentations page.

Alistair Garrison, Director of Accessibility Research for Level Access, lives in Scotland. Stay tuned for more reports from CSUN 2018 from the Level Access team!
