
FirstBank’s Organizational Approach to Digital Accessibility – Webinar Q&A

Written by: Kim Phillips

This post shares Q&A from the Banking for Good: FirstBank’s Organizational Approach to Digital Accessibility webinar. FirstBank’s Software Delivery Manager Melissa Alamo joined Matt Wolfson, Financial Services Account Manager at Level Access, to talk about the bank’s multi-phased, multi-departmental approach to developing and rolling out its digital accessibility program.

Q: How did you select the membership of your digital accessibility committee?

Melissa: The roadmap that Level Access provided us started with a systems survey – essentially an inventory of all the applications owned by different departments in the organization. I used that as a guide when we were compiling the committee. As with anything in financial institutions, Compliance was an obvious choice, as was Human Resources – they’re very versed in accessibility, not just digital accessibility. Then there were the product owners from each of the affected applications, the business sponsors – that was always very important. And then of course myself and my director, for technology representation.

Q: Did you prioritize your accessibility work between public websites vs. sites that require a login and password? If so, what factors did you take into consideration?

Melissa: Because of the way our teams are broken up, we aligned by business line. That way we could equally prioritize the external-facing company website and our online banking application, which requires a login. Basically, our external company website is on one team, and online banking is my team, which is a separate team. They moved forward in parallel. Thankfully, we were already in a good position with our external company website – there wasn’t a lot of work that needed to be done on that side. But we did take that into consideration. We worked with our marketing team, which is the product owner for our main company website, so they could be aware – that was part of the training for them – and we took their advice into consideration. Our external-facing website gets a lot of attention, but we were lucky enough to work in parallel on all our efforts, and that has been helpful.

Matt: This is an interesting question for several reasons, and there is no right or wrong answer, by the way. It really boils down to how your organization looks at accessibility and the resources it has. If the approach is very much compliance first, we tend to find that organizations will start with public websites, because traditionally those carry the highest levels of risk. Companies that are more customer first when it comes to accessibility tend to start with the login and any authenticated experience. So, if you’re looking for a bit of shorthand: for those who are new to accessibility and feel like they have some really big gaps, it’s usually the public websites; for those who have been on the journey and do accessibility well, you want to think about those authenticated experiences. But I would be remiss if I didn’t mention that COVID-19 is changing a lot about how we live our lives, and the unfortunate reality is that your phone lines are backed up, your branches are closed or operating at very limited capacity, and your website really is the main vehicle for many of your customers to monitor and manage their money in a time filled with turmoil and volatility. So, if you are thinking about a short-term accessibility project, I would highly recommend you start thinking about your behind-authentication customer experience.

Q: How do you test those internal websites since they are private?

Matt: If internal websites can be rendered in a browser, you should be able to test them for both technical and functional compliance. Depending on how that content is delivered (for example, if you need credentials to get behind authentication, content is pulled in via iframes, etc.), that adds complexity to testing, but it should not prevent you from testing.
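For teams that automate part of that testing, a browser-driven scan can sign in the same way a user would and then run checks on the authenticated page. The snippet below is a minimal sketch, assuming Playwright and the open-source @axe-core/playwright package; the URL, selectors, and environment variables are placeholders for illustration, not part of FirstBank’s setup.

```typescript
// internal-site.a11y.spec.ts – hedged sketch: scan a page behind authentication
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('internal dashboard has no detectable WCAG A/AA violations', async ({ page }) => {
  // Placeholder login flow – replace the URL, selectors, and credentials
  // with your internal site's own.
  await page.goto('https://intranet.example.com/login');
  await page.fill('#username', process.env.TEST_USER ?? '');
  await page.fill('#password', process.env.TEST_PASS ?? '');
  await page.click('button[type="submit"]');
  await page.waitForURL('**/dashboard');

  // Run axe-core against the authenticated page, limited to WCAG 2.x A/AA rules.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21aa'])
    .analyze();

  expect(results.violations).toEqual([]);
});
```

Automated rules catch only a subset of issues, so a scan like this complements, rather than replaces, the manual and assistive technology testing discussed elsewhere in this Q&A.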

Compared with testing, remediation (the fixing of issues) is often harder across internal websites because your organization may not own the source code or have the ability to modify it directly. In those cases, you should work with the vendor to have those issues addressed. Organizations are not shielded from legal risk when they acquire inaccessible third-party content. Ensuring that accessibility criteria are part of the contract and vendor management process helps to mitigate these risks.

Q: How do you hold developers accountable for remediating issues found by QA? I’ve heard many excuses such as, “We designed XYZ in a certain way,” in response to bringing forth an accessibility concern. Is training the only way to get buy-in from developers?

Matt: This is often resolved by having proper policies defined at the corporate or department level. Clearly defining the organization’s or department’s stance on accessibility, and the standards it adheres to, is a strong step toward mitigating the issue described in the question. With proper policies in place, organizations generally have a clear route to take when accessibility conflicts with design choices, the security environment, and so on. Training most certainly helps, whether it’s driving sheer awareness of these issues or driving buy-in. However, training alone will not solve the issue described.

Q: What do you recommend when the development process has already started, but accessibility has to be embedded throughout the entire platform? How can we catch up or design a strategy to make sure we don’t overlook what has already been coded?

Melissa: We have a lot of production elements that have been out there for many years. Digital accessibility discussions need to be shifted as far to the left as possible in the SDLC, so it’s part of design. It’s honestly part of the user experience discussions you could be having, and it could be part of that user testing I spoke about. For things just finishing initial development, we use Jira and create a card to go back and make changes to address digital accessibility as much as possible. It’s a good idea to get things scanned while they’re still in development and make those changes before going to production. For elements already in production, we went with a strategy of re-skinning and upgrading the framework used in our front-end applications.
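One way to make that shift-left approach concrete is to scan components in the same unit tests developers already run, so issues surface before a Jira card is ever needed. The snippet below is a rough sketch, assuming a React codebase with Testing Library and the open-source jest-axe package; the Header component is a hypothetical example, not one of FirstBank’s.

```typescript
// header.a11y.test.tsx – hedged sketch: catch issues while still in development
import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';
import { Header } from './Header'; // hypothetical component under development

expect.extend(toHaveNoViolations);

test('Header renders with no detectable accessibility violations', async () => {
  const { container } = render(<Header />);

  // axe inspects the rendered DOM for issues automated rules can detect
  // (missing labels, ARIA misuse, missing alt text, and so on).
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```

A test like this covers only what automated rules can detect, but running it on every build keeps newly written code from reintroducing known issues.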

Q: What is the recommendation for testing sites from a representative sample vs. each and every page/feature/workflow? What was FirstBank’s approach when initially evaluating for accessibility?

Melissa: We definitely do a representative sample, especially for our external website. It’s templated, so if we test one page it’s pretty representative of a lot of other pages, especially per section.

Matt: For some broader context, that is the approach Level Access recommends as well. Often, you’ll have a lot of common elements on your pages, for instance a header or footer that’s relatively static across 100 pages. If you have a single issue in that header across 100 pages, you don’t really have 100 problems to solve. You have one problem to solve, and you apply the fix to that header, ideally at the template level, in a very scalable way.

Q: Did you use any kind of accessibility maturity model to help you drive policy, or assess where you were? General thoughts on maturity model scorecards?

Matt: No accessibility maturity model was formally launched during the policy-related work with FirstBank. However, Level Access did draw on elements of its Digital Accessibility Maturity Model to inform policy creation and iteration with FirstBank. Maturity model scorecards are a great tool to gauge where your organization is on the accessibility journey. They help ensure attention is dedicated to departments, functions, and areas of the business that can often overlook their role in driving accessible outcomes. However, for organizations at the early stages of their accessibility initiatives, a scorecard may not be as beneficial as spending the time to create internal processes and policies and to implement them.

Q: How do you decide the support baseline for browsers?

Matt: Content that conforms to the Web Content Accessibility Guidelines will be accessible in any modern browser configuration. When determining which browsers to use for your own testing, there are several variables to consider (see the sketch after this list), such as:

  • What is the most commonly used browser currently in the marketplace? WebAIM provides some great statistics for those looking to understand which browsers people who rely on screen reader technology tend to use.
  • If you use technical testing tools, what browsers do those tools support?
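If you standardize on an automated testing framework, the browser baseline can be written down as configuration so every scan runs against the same set. The sketch below is a hedged example using Playwright’s project configuration; the specific browsers shown are illustrative and should be aligned with your own usage data and the WebAIM survey results mentioned above.

```typescript
// playwright.config.ts – hedged sketch: encode the browser support baseline
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    // Illustrative baseline – substitute the browsers your customers and
    // assistive technology users actually rely on.
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});
```

Keeping the baseline in configuration also makes it easy to revisit when survey data or your own analytics shift.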

Q: What do your training programs look like? Do you ensure every staff member completes some degree of training?

Melissa: The training for developers and QA was provided by Level Access. The developer training was ARIA-specific, because a lot of my developers already had a good understanding of how someone uses a screen reader or other technologies, and basic knowledge of how they should be able to get through a user interface. Level Access provided us with the individual who actually came up with ARIA, which was fantastic. He also uses assistive technologies natively. The QA training was an overview of digital accessibility and assistive technologies and how they’re used. I think a lot of my QA folks came out of that training with some very wide eyes, like they had been enlightened.

The product training was from a local organization, which essentially brought in someone who went through our interfaces using assistive technology to demonstrate how they interact with them. Having someone show you what they go through to use those technologies is, I think, super important and really brings it home.

And then the organizational training that I talked about was a one-day workshop set up by HR which was open to the entire company.  Blind IT was the name of the company here in Denver that put it on.  It was essentially an experiential workshop to put you in the shoes of someone with a disability and push you to think outside the box.

Q: Can you comment on the recurrence and frequency of that training?

Melissa: For the developer training, I probably wouldn’t repeat it at this time. They tend to have the same folks on board and they’re all moving forward. If we did have a shuffle, I would probably consider doing the product training again at some point. Business sponsors are a big one for us that do transition out – at least they have in the past three years – so that would definitely be something I would consider doing again. We also contract with LinkedIn for a lot of online classes, and there’s my developer champion that I spoke of earlier. She’s outlined some digital accessibility LinkedIn courses that we provide to folks who might be joining, and if there are still some issues then we could definitely think about more training. I’d say as you get new people, and if there’s additional training that needs to be done, then repeat that training.

Q: Various business units regularly deploy content to a domain where QA testing and the dev team only cover sprint work, not the content from all of these other teams, and the site is largely inaccessible as a result. How would I work with the other business units to incorporate accessibility testing for their content?

Melissa: Well, we’re not quite set up that way, as I’ve mentioned. Everything flows through a software delivery team, and through technology, to get published. But we couldn’t be the only ones pushing digital accessibility. We had to build a partnership, educate our business sponsors, and help them understand. It’s about doing the education for them, honestly. That would be the first step, then handing it to them and saying, here is what needs to be done, and we need you to make this a priority. We have been able to do that with our business sponsors. That’s also part of the periodic auditing in production, because we do have multiple teams putting content out. They all have digital accessibility as a key piece that they should be talking about, or are talking about, in the design process, but sometimes things slip through the cracks. That’s totally possible, and that’s why we audit in production on a periodic basis – whether it’s biannual, quarterly, or yearly, however you want to work it. But training is the key piece, for folks who are going to be putting any kind of content out and for getting buy-in on what needs to happen from an organizational perspective.

Matt: It really emphasizes the idea of governance – the higher up you’re able to get that started, setting accessibility as an enterprise-wide vision and an issue to resolve, the more that should help you as well. And as Melissa mentioned, there’s a vast array of tools you can use to scan and point those different business units to what needs to improve, to ensure the overall health of that domain or property stays good.

Q: How do you suggest managing contract negotiations and annual vendor risk reviews with vendors who state that they will not rep and warrant compliance with certain parameters (such as WCAG 2.0 AA)?

Matt: The first thing I would say is that the organization (the purchaser) should evaluate the criticality of the vendor to its business. If the vendor offers a service that only it provides, the conversation looks vastly different than if other vendors provide similar products and services. If they are the sole provider, the conversation should be around revising contract language to include very specific procurement and contracting clauses that outline the expectations with respect to accessibility of third-party products and services. This is something Level Access can help clients with if support is needed. The third party should understand what the expectations are, and the purchaser and third party should discuss what can be done, what limitations exist, and so on. These limitations should be documented, and the two parties should discuss a roadmap and strategy for getting the third party up to the desired standards, along with the timing. Many third parties will ask for more money to fund the effort, and that needs to be handled on a case-by-case basis at the institution level. Generally, most purchasers will expect conformance, so I advise against paying extra for a third party to remediate a solution they are selling. The third-party product will be far more marketable and competitive once it is accessible, so that is a good argument for stopping the price gouging that can occur in such discussions. After the creation of a roadmap, I recommend frequent (monthly or quarterly) check-ins to track progress.

At the end of the day, non-conformance can hurt the third party just as much as the purchaser. We’ve seen third-party products and solutions cited in lawsuits, so the tone should be one of collaboration where possible. If the third party refuses to discuss the issue or refuses to become conformant, then the purchaser should elevate that risk to their legal, compliance, and risk management teams for formal acceptance and approval to proceed with a product that is not accessible.

If the third party is not the only provider of the goods or services, I often recommend looking at different sourcing options. That said, pulling software out of a client interface and replacing it with new software can be an extensive process, but if feasible, this should be evaluated as an option as well.

Q: For tools that offer a good way to get developers and QA started, is something like JAWS still relevant or desirable? Also, are there any valuable web-based helpers?

Matt: There’s a group called WebAIM that periodically publishes a report on the most popular assistive technologies. JAWS is the most popular and one of the longest-standing. Another one, which is free, is NVDA. I actually use that myself when I’m learning about other organizations. It’s a great tool given that it’s free, and a strong one at that. In terms of automatic tools, there’s an abundance of them out there. Again, I’ll stress that you should not rely on those automatic tools alone. You should still be conducting manual testing (a person examining for technical compliance) as well as use case testing with something like JAWS.

Regarding web-based helpers, on the surface, they can be really helpful if you’re referring to overlays and those badges and buttons that you might see. But only at first glance. We’ve heard from our own testers and customers that these types of tools often force individuals to have to engage with a site in ways they’re not used to. If you think about using JAWS, the screen reader technology that I mentioned, people have learned to operate that very efficiently and have certain key bindings and settings that they use. Those overlays can override that and actually lead to worse outcomes in some instances.

Melissa: You mentioned WebAIM. We have a group that determines our standards – basically which browsers, operating systems, and mobile devices we use when examining accessibility. We have conversations about this on a regular basis, to ensure that our testers are clear on which tools they should be using. We use the WebAIM tool, and that same organization publishes a survey that identifies some of the most used browsers and technologies. JAWS is our number one, which is reflected in the WebAIM survey results in terms of how widely it’s used. NVDA is another one we use for internal testing. If we have a vendor do the testing, we request that they use the same combination of browsers, operating systems, and so on that are widely used, so we are reaching the most folks possible and getting the most bang for our buck with our testing resources.

Q: What about internal websites? I never see the same level of accessibility inside my bank as I see on the external website.

Melissa: I think that all our teams are doing both internal and external applications, so they all have the knowledge and ability.  I think for us, when we started this journey, we prioritized our external application to address first. And then ultimately, another phase would be to come back into our internal applications to confirm that the work we’ve been doing since we started on the journey is being applied there as well.

Q: I’m a small business – a one-man shop. Do I need to get my website scanned and afterwards request a manual audit? After I get the findings report, will I then take that info to my website provider to make corrections?

Matt: It makes the most sense to find out first what your vendor can do in terms of accessibility. If the platform doesn’t fully support accessibility, or the vendor doesn’t have the skills, spending money and time on the audit first might not be as effective. If another platform is better, another template is more accessible for your site, or a vendor switch is needed, you would want to figure that out up front. Be careful when vendors say they can make the site accessible – you’ll want to verify that they actually have the skills to do that. One option is to perform some sort of spot check or automated testing up front to get a sense of the issues and discuss the approach with the vendor. Keep in mind that while some issues may only involve adding code behind the scenes, other changes might require a redesign or changes to plugins, libraries, or frameworks – in particular if you want to support WCAG 2.1 success criteria like reflow – unless your site is already fluid and responsive at common mobile breakpoints.