By Dr. Susanne Meyer, Employment Solutions Architect
This blog post is based on a presentation that my colleagues from LCI Tech (now known as Ablr) gave at this year’s virtual NFB conference. Alyssa Cheeseman and Shannon Garner are blind accessibility testers who do much of Ablr’s digital content testing. They were introduced by Quan Leysath, our Business Development Manager, and joined by several other team members for the Q&A portion of the presentation. Both the presentation and most of the questions we received focused on our claim that providers of digital accessibility services should involve blind testers in their testing. We do this, and we think it ought to be an industry standard for several reasons. The major reasons we presented are that blind testers:

- have first-hand experience of what it is like for a blind person to navigate websites;
- resist the urge to “overfix” websites;
- have stronger assistive technology skills;
- are intimately familiar with accessibility tools and standards that are difficult to master;
- have a unique perspective on the layout of websites; and
- can explain the reasons behind accessibility standards to developers, so that developers can take them into account on future projects.
Most of these points are self-evident, but their implications may not be immediately apparent. This is particularly true for the statement that blind testers have first-hand experience of navigating the web as blind users. In simple terms, this allows blind testers to do much more than just apply accessibility tools to websites and other applications: they bring their own understanding of what makes websites difficult, frustrating, or confusing to use. A webpage may meet every official standard on paper and still be confusing to use in practice. This can happen, for example, if the text fields in a form are arranged in a non-standard order, say, if the CVV field precedes the credit card number field on a payment site. Visually, this arrangement might be intuitive, but to a screen reader user, it can be off-putting or even disorienting. Another example concerns invalid information in a form field. If a user mistypes an email address, for instance, an error message generally pops up on the screen. Too often, screen readers do not read this message: the focus remains on the field, and the blind user waits for the form to load, which it never does. Accessibility standards are complex and difficult to apply, and it is not always clear what they dictate in these circumstances. But blind testers know first-hand how such an error affects the usability of the website for others like them. They can measure the user’s experience against their own, which makes them excellent judges of what needs to be fixed, and what does not.
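The silent-error problem described above has a standard remedy in modern markup. As a hedged sketch (the field and message here are invented for illustration, not taken from the presentation), an inline error can be surfaced to screen readers by marking it as a live region and linking it to its input:

```html
<!-- Hypothetical email field. role="alert" creates a live region,
     so screen readers announce the error the moment it appears;
     aria-describedby ties the message to the input so it is re-read
     when the field regains focus. -->
<label for="email">Email</label>
<input id="email" type="email" aria-invalid="true"
       aria-describedby="email-error">
<p id="email-error" role="alert">Please enter a valid email address.</p>
```

Without the live region and the `aria-describedby` association, the message is painted on screen but never spoken, which is exactly the waiting-for-a-page-that-never-loads experience our testers described.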
A separate but related reason for using blind accessibility testers is that they tend to resist the urge to “overfix”. Sighted testers commonly overcompensate, addressing accessibility issues to an extent that makes the website harder to use. For example, a sighted tester who has been told that some features on a website are not focusable with a screen reader might respond by making *every* sentence on the website focusable. This results in a frustrating experience for blind users, who now have to sort through all these potential points of focus to find what is actually relevant to them. Another common problem is that sighted testers overdescribe images, particularly decorative ones. While many blind users enjoy knowing whether there are images on a page and what they represent, things get frustrating when these descriptions go into so much detail that they overwhelm and detract from the message being conveyed. Sighted testers also tend to approve information that is repeated, perhaps in an alt text, and then again in a link. Visually, this is not a problem, but to the screen reader user, it is an unnecessary and annoying repetition. Blind testers, in general, are better judges of the level of descriptive detail that a blind user is likely to find helpful and comfortable.
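As an illustrative sketch (the markup is ours, not from the presentation), the three overfixes described above look something like this in code, alongside the quieter alternatives a blind tester would typically recommend:

```html
<!-- Overfix: forcing plain text into the tab order clutters navigation -->
<p tabindex="0">Shipping usually takes 3 to 5 business days.</p>
<!-- Better: leave non-interactive text out of the tab order -->
<p>Shipping usually takes 3 to 5 business days.</p>

<!-- Overfix: a decorative divider narrated in loving detail -->
<img src="divider.png" alt="An ornate golden filigree divider with scrollwork">
<!-- Better: empty alt tells screen readers to skip it entirely -->
<img src="divider.png" alt="">

<!-- Overfix: the alt text duplicates the link text, so a screen
     reader announces "Contact us" twice for one link -->
<a href="/contact"><img src="phone.png" alt="Contact us">Contact us</a>
<!-- Better: one announcement for the whole link -->
<a href="/contact"><img src="phone.png" alt="">Contact us</a>
```

In each pair, the “better” version is also the shorter one: good accessibility fixes tend to remove noise rather than add it.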
Furthermore, blind testers are generally much better assistive technology users than their sighted counterparts. They are either “native users”, in the same sense in which a person can be a native speaker of a language, or have come to use assistive technology as their primary mode of accessing information, not a secondary, ancillary mode as is the case for sighted testers. For sighted testers, JAWS and other screen readers, or ZoomText and other magnification programs, are only a few among an arsenal of tools and tests; for blind testers, they are literally the gateway to the website. Because they themselves rely on these tools, they are generally far more familiar with their intricacies, and much more comfortable using them. Blind testers get the most out of their assistive technology, and are thus likely to know exactly what it can and cannot do. Sighted testers may never come to know these tools in as much detail because the tools are not as central to their experience.
A fourth advantage of using blind testers is that many of the tools and standards involved in accessibility testing are intricate and difficult to master. One example of such a standard is ARIA, or Accessible Rich Internet Applications. ARIA requirements are complicated, and their implications are difficult to grasp well enough to apply them properly. But when ARIA is used incorrectly, the consequence is that information becomes hidden and inaccessible to assistive technology users. There is no theoretical reason that sighted testers should not be able to master these standards, and many do. But doing so requires a deep involvement with and understanding of accessibility and its ins and outs. Applying these standards is not something that can be done as an afterthought, and the fact that a first-hand understanding of accessibility issues is built into the blind tester’s daily experience makes it more likely that they properly anticipate and respect the dangers of misapplying them.
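A small, hedged example of the kind of misuse meant here (the markup is invented for illustration): `aria-hidden` is frequently applied one element too high, silently removing real content from the accessibility tree.

```html
<!-- Misuse: aria-hidden="true" on the container removes the link
     itself, not just the decorative arrow, from every screen reader -->
<nav aria-hidden="true">
  <a href="/products">&#9656; Products</a>
</nav>

<!-- Correct: hide only the decorative icon; the link stays exposed -->
<nav>
  <a href="/products"><span aria-hidden="true">&#9656;</span> Products</a>
</nav>
```

Both versions look identical in a browser, which is precisely why a tester who depends on the accessibility tree notices the difference immediately, while a purely visual review may not.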
Fifthly, blind testers have a unique perspective on the layout of webpages. Sighted users, including testers, approach websites in a whole-to-part manner: they visually take in the entire site first, notice the layout, and from there access its individual pages. By that point, they have already formed a preliminary impression of the overall content and layout of a website, and this impression can influence or even bias their accessibility assessment. Blind testers cannot rely on this kind of overall first impression or resort to visual “cheats”. Blind testers, just like other blind users, consume websites feature by feature: they take a trees-before-the-forest perspective rather than the sighted forest-before-the-trees approach. This can reveal structural problems with a site that are not apparent through visual inspection. Even if a website is laid out in a visually intuitive way, the content might not be coded in an order that the screen reader reproduces. Perhaps the heading structure shuffles tangential content to the top, or links can only be reached in a roundabout way by a screen reader. The blind tester will discover this even if the sighted tester misses it.
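A hedged sketch of the mismatch described above (the page is invented for illustration): stylesheets can place a sidebar visually after the main article while the source order, which is what a screen reader follows, puts it first.

```html
<!-- Source order: the sidebar comes first, so a screen reader reads
     "Related links" before the report itself, even though CSS
     (not shown) displays it off to the side of the main content -->
<h1>Quarterly Report</h1>
<aside>
  <h2>Related links</h2>
  <a href="/archive">Report archive</a>
</aside>
<main>
  <h2>Summary</h2>
  <p>The summary of the report goes here.</p>
</main>
```

A sighted reviewer sees an intuitive two-column layout; a blind tester hears the tangential material first and flags the reading order at once.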
The final reason that accessibility service providers should use blind testers is that they are uniquely positioned to communicate the principles of accessibility to web developers. It is one thing to learn about standards in theory; it is quite another to understand and truly absorb the reasons behind these standards, and the implications of ignoring them. While developers are required by law to respect accessibility requirements, these requirements remain a theoretical annoyance and a possible source of headaches and litigation if the reasons behind them are never brought into the picture. Any tester, sighted or blind, can share the standards with developers, but blind testers are generally better able to explain them and their consequences because they, as users themselves, are intimately familiar with their importance. Having blind testers communicate the case for accessibility gives blind people a voice, and gives them the chance to represent themselves as users. This gives life to the accessibility effort, and puts a face (or voice) to it. It makes testers and developers partners in the accessibility project, and it makes the blind experience, rather than a set of standards, the cornerstone of accessibility, just as it should be.
It is important to note that we do not deny that proper testing *also* requires sighted backup. There are certain accessibility violations that screen reader users simply cannot discover. For example, only a sighted tester can judge whether the content of a picture and its purported alt text actually match; our testers described occasions on which an image showed a fruit, but the alt text read the name of an animal. A tester has to be able to assess both the picture and the corresponding alt text. Blind testers also cannot discover features on a website that were coded so as to be invisible to a screen reader. None of this reflects on the skill of the tester; it is a function of the interaction between visual and screen reader content. Sighted testers, then, will always be required as backup in accessibility testing. As always, considering and consolidating a variety of perspectives, and the information they yield, leads to the most comprehensive assessment. Since access for all is our ultimate goal, this can only be a good thing. Nevertheless, blind testers *can* do the heavy lifting. And for the reasons listed above, they *should* do the heavy lifting.
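As a hedged sketch of the two limits just mentioned (the images and text are invented for illustration), both failure modes live in markup that only a sighted reviewer can fully check:

```html
<!-- A mismatch only a sighted tester can catch: the image shows an
     apple, but the alt text describes something else entirely -->
<img src="apple.jpg" alt="Photograph of a zebra">

<!-- Invisible to a screen reader: the promotion exists only as
     pixels inside the image, and the empty alt hides it completely -->
<img src="sale-banner.png" alt="">
```

A blind tester navigating this page would hear the wrong word in the first case and nothing at all in the second, which is why sighted verification of images against their alt text remains part of every thorough audit.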