If Everything is Urgent, Nothing is Urgent


Written by: Timothy Stephen Springer

At CSUN 2015 I had the pleasure of attending many great sessions. As an Accessibility Consultant myself, a quote from the session titled “Targeting an Exceptional Online Shopping Experience” really resonated with me: “if everything is urgent, nothing is urgent.”

Accessibility Consultants for Target.com presented a mature testing and development approach to digital accessibility within an agile environment. The team showcased the importance and success of partnering with the organization’s user experience (UX) teams through support and consultation ranging from wireframe reviews and pattern libraries to assistive technology testing.

Providing a product team with a laundry list of accessibility defects found in any given audit is a valiant part of an Accessibility Consultant’s job. We are often the bearers of bad news after all of the hard work has been performed: design comps reviewed, pattern libraries referenced and training conducted. Like any other defect in information and communication technology, accessibility defects happen. It is equally vital that we prioritize these defects in a manner that supports agile development and delivery of a product, giving developers and project managers a prioritized list to work from. When all similar violations of a standard are weighted equally, a severe instance can end up in the backlog alongside trivial ones.

Severity, Noticeability, Tractability and Frequency

SSB BART Group’s Accessibility Management Platform (AMP) provides a rating system where we factor severity, noticeability, tractability and frequency when prioritizing violations mapped to standards and guidelines like WCAG 2.0.

  • Severity (S) – Severity is a measure of how large an impact on the user experience a violation of the best practice will have. Severity is inferred from user experience analysis and is ranked on a scale of one to ten where the rank represents the impact that a violation of the best practice would have on the user experience. A violation of a best practice with a severity of one (1) would have virtually no impact on the user experience, while a violation of a best practice with a severity of ten (10) would denote an insurmountable obstacle in the user experience.
  • Noticeability (N) – Noticeability is the likelihood that a given violation will be detected by users of a solution. Certain best practice violations are more easily detected than others, such as violations that can be detected with automated tools. Other violations, such as those that can only be detected through manual review techniques, are more difficult to find in a module. A violation with Noticeability of one (1) is virtually impossible to detect, while a violation with Noticeability of ten (10) would be easily detected in any automated or manual test. Violations that are more difficult to detect generally pose a lower overall risk for enforcement than violations which can be detected in a trivial fashion.
  • Tractability (T) – Tractability defines the estimated costs associated with ensuring that instances of a violation are fixed in accordance with the best practice. Tractability is a rough corollary to the number of hours of effort required to ensure compliance with a given best practice and the level of specificity of the fix. A violation with Tractability of one (1) is simple to fix, generally requiring few changes that are well defined. A violation with Tractability of ten (10) would relate to architecture level implementations within the solution and be impossible to fix without extensive changes to the solution as a whole.
  • Frequency (F) – Frequency is a measure of how often a particular violation occurs within a module. Frequency is calculated as the number of modules that exhibit a violation divided by the total number of modules, multiplied by ten. For example, if a violation of a given best practice occurred in 54% of all modules, the frequency would be 5.4. The frequency variable is calculated after automated testing of applicable technology platforms or after manual testing of platforms that cannot be automatically tested. (A sketch of how these four factors might combine into a single score follows this list.)
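To make these four factors concrete, here is a minimal sketch of how they could roll up into a single priority score. AMP’s actual scoring model is not spelled out in this post, so the equal-weight average, the function names and the sample values below are illustrative assumptions only.

```typescript
// Illustrative only: AMP's real scoring model is not documented in this post.
interface ViolationRating {
  severity: number;      // S: 1 (virtually no impact) .. 10 (insurmountable obstacle)
  noticeability: number; // N: 1 (virtually impossible to detect) .. 10 (trivially detected)
  tractability: number;  // T: 1 (simple, well-defined fix) .. 10 (architecture-level fix)
}

// Frequency (F) as described above: modules exhibiting the violation,
// divided by total modules, multiplied by ten.
function frequency(modulesWithViolation: number, totalModules: number): number {
  return (modulesWithViolation / totalModules) * 10;
}

// Hypothetical equal-weight rollup; a real system would tune these weights.
function priorityScore(rating: ViolationRating, f: number): number {
  return (rating.severity + rating.noticeability + rating.tractability + f) / 4;
}

// Example: a violation found in 54% of modules has F = 5.4, as in the text.
const f = frequency(54, 100); // 5.4
console.log(priorityScore({ severity: 8, noticeability: 9, tractability: 3 }, f)); // 6.35
```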

Our newest release of AMP (Winter 2015) gives testers the ability to override the default severity rating of an individual instance of a particular accessibility violation. This feature lets us escalate, or even de-escalate, the severity of a specific issue that would otherwise be lumped into a default severity category, even though it may be more detrimental to an end user’s access and pose more risk than all the others.

Below, I walk through two scenarios: one where this feature escalates an issue whose default severity is low, and one where it lowers the severity of an issue that is high severity by default.

Scenario 1 – Escalating Severity of a Single Violation Instance

To put this into perspective, let’s say a page violates the default language requirement (WCAG 2.0 3.1.1, Level A). Assuming lang=en was not set for content presented in the English language, the missing default language declaration is typically not a barrier to accessing the information with a screen reader. Therefore, our default severity rating for this particular scenario is scored a “1”.
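As a purely illustrative aside, a check for this condition might look like the sketch below. This is a hypothetical browser-side test of my own, not AMP’s detection logic.

```typescript
// Hypothetical check for WCAG 2.0 3.1.1: flag a document whose <html>
// element declares no default language. Not AMP's actual test logic.
function missingDefaultLanguage(doc: Document): boolean {
  const lang = doc.documentElement.getAttribute("lang");
  return lang === null || lang.trim() === "";
}

if (missingDefaultLanguage(document)) {
  console.warn("3.1.1: no default language declared on the <html> element.");
}
```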

 

Figure 1: The AMP best practice library describes the WCAG 2.0 requirement to set the default language in HTML, which carries a default severity rating of “1” (Low Severity).
 

Figure 2: The default severity rating of “1” (Low Severity) is conveyed for a reported violation instance in AMP.

Let’s say, however, that during our testing we identify that the English site presents a link to the Spanish version, and lang=en (English) is defined for the Spanish content. The severity of this specific violation instance can now be increased and prioritized over the other: here the user impact is far greater, because a screen reader like JAWS will attempt to render the Spanish content with English inflection.
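For illustration, remediating this higher-impact instance is as simple as declaring the correct language on the mislabeled document. The sketch below assumes the Spanish version of the page was served with lang="en" on its root element.

```typescript
// Hypothetical remediation, assuming the Spanish version of the page was
// served with lang="en" on its root element: declaring lang="es" lets a
// screen reader like JAWS switch to Spanish pronunciation.
const root = document.documentElement;
if (root.getAttribute("lang") === "en") {
  root.setAttribute("lang", "es"); // correct language for the Spanish content
}
```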

 

Figure 3: The severity is escalated from “1” (Low Severity) to “6” (Medium Severity) for a separate instance of the same WCAG violation, making this instance of the default language requirement more severe than the other instance in the same report.

Scenario 2 – Diminishing Severity of a Single Violation Instance

The previous scenario showed how two separate instances of the same WCAG violation can be prioritized differently by escalating the severity of one. Equally, AMP gives us the opportunity to de-escalate a violation found in an audit.

Let’s assume for Scenario 2 that a custom Close button appears in a dialog window and the button is wired up with only an onClick JavaScript event. This custom close button can be activated with the JAWS screen reader but not with the keyboard alone. We report that the sole use of device-dependent event handlers violates WCAG 2.0 2.1.1 (Level A). The default severity of this requirement is an “8” (High Severity).
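A hypothetical reconstruction of the pattern, together with a keyboard-accessible alternative, might look like the sketch below. The .dialog-close selector and the closeDialog helper are made-up names for illustration.

```typescript
// Hypothetical reconstruction: a custom close control wired up with only
// a mouse-dependent click handler (the reported 2.1.1 violation).
const closeControl = document.querySelector<HTMLElement>(".dialog-close"); // assumed selector
closeControl?.addEventListener("click", () => closeDialog());

// A keyboard-accessible version also joins the tab order and responds
// to Enter and Space, as a native <button> would.
closeControl?.setAttribute("role", "button");
closeControl?.setAttribute("tabindex", "0");
closeControl?.addEventListener("keydown", (event) => {
  if (event.key === "Enter" || event.key === " ") {
    event.preventDefault();
    closeDialog();
  }
});

// Stand-in for whatever actually dismisses the dialog in the real page.
function closeDialog(): void {
  document.querySelector(".dialog")?.setAttribute("hidden", "");
}
```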

 

Figure 4: The AMP best practice library describes the Section 508 and WCAG 2.0 requirement to avoid the sole use of device-dependent event handlers, which carries a default severity rating of “8” (High Severity).

However, in this scenario we discover and note in our audit report that the Escape key functions as an alternative to the close button and can be used with the keyboard to close the dialog window. Documenting this, we can lower the severity of this violation so other, more severe and noticeable violation instances from our audit report can be prioritized.
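The Escape-key alternative we noted might be wired up as in this sketch; again, the .dialog selector is an assumption for illustration.

```typescript
// Hypothetical version of the Escape-key alternative noted in the audit:
// a document-level handler that dismisses the open dialog.
document.addEventListener("keydown", (event) => {
  const openDialog = document.querySelector<HTMLElement>(".dialog:not([hidden])"); // assumed selector
  if (event.key === "Escape" && openDialog) {
    openDialog.setAttribute("hidden", "");
  }
});
```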

 

Figure 5: A note documents the keyboard alternative to the reported violation, and the severity is lowered from the default of “8” (High Severity) to “3” (Low Severity).

These types of scenarios (albeit extremely simplified above) typically present themselves during our audits. When testing is complete and we report to our clients, we differentiate the core issues from (potentially) hundreds of accessibility violations. By describing the user impact of the most severe issues and factoring that impact into the instance severity rating, we find that even the more nebulous requirements are understood more accurately and are typically scheduled for remediation sooner. Otherwise, how urgent can this “accessibility thing” be, right?
