
Why Automated Tools Aren’t Enough – You Need a Human

By Susanne Meyer, Employment Solutions Architect

Recently, an article made the rounds among accessibility professionals showing that it’s possible to build a website that passes all of the automated accessibility tests generally used in the field, yet remains completely inaccessible (read the full article here: https://www.matuzo.at/blog/building-the-most-inaccessible-site-possible-with-a-perfect-lighthouse-score/). This is because automated tools often overlook accessibility barriers that can only be found through manual testing. The article in question, though excellent, is highly technical and requires a good bit of coding knowledge.

In this blog post, I will try to make the same point in plain rather than technical language. In doing so, I hope to explain the problem to potential clients who know they need to make their websites accessible, but think that the accessibility tools built into many web applications make it unnecessary to hire an actual accessibility services provider to perform manual testing.

Automated testing does check content against the requirements of the Web Content Accessibility Guidelines, or WCAG. For example, Deque’s accessibility checker axe (which also powers Google’s Lighthouse checker, the software referenced in the original article) scans for WCAG 2.1 violations. As such, automated tools are an excellent starting point for further accessibility testing. However, automated accessibility testing tools are inherently context insensitive, and that creates a slew of problems that can only be found through manual testing.
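
To make the distinction concrete, here is a minimal sketch of what an automated check actually does, assuming a page tested with the axe-core engine (the same engine behind Lighthouse’s accessibility audit). The function name is mine and the snippet is illustrative only; the point is that every rule is evaluated against the markup alone, with no understanding of what the page means.

```ts
// Minimal sketch: running axe-core against the current page.
// Assumes the axe-core library is available in the page's context;
// axe.run() resolves with the rules that passed and the violations it found.
import axe from "axe-core";

async function runAutomatedCheck(): Promise<void> {
  const results = await axe.run(document);

  for (const violation of results.violations) {
    // Each violation maps back to a WCAG success criterion via its tags,
    // but the check itself is purely structural: "is there alt text?",
    // "does this input have a label?" - not "does the alt text make sense?".
    console.log(violation.id, violation.tags, violation.nodes.length);
  }
}
```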

For example, imagine that a picture on a website actually communicates content – say, a chart that shows the distribution of some resource. As long as the chart is tagged with some sort of “alt text” (alternative text) that a screen reader can read, an automated tool will pass the image. There is an image, and there is some description of it, and the box gets checked. However, what that alt text says might not be representative of the content that the image contributes to the website. Some content management systems automatically mark all images as decorative precisely to avoid an automated accessibility error. But unless the image is specifically reviewed and revised, it will remain identified as decorative. Since screen readers ignore decorative images altogether, a screen reader user will not only miss out on the information that’s communicated in the chart, but will never even know the chart is there.
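
A hypothetical snippet illustrates the gap for the chart example above. All three variants satisfy the automated rule that images must have alternative text or be explicitly marked decorative, but only the last one conveys the chart’s content; the file name and numbers are invented.

```ts
// Hypothetical markup for the chart example. File name and figures are made up.

// Marked decorative: screen readers skip it entirely, so the user never
// learns the chart exists - yet no automated rule is violated.
export const markedDecorative = `<img src="budget-chart.png" alt="">`;

// Has *some* alt text, so the automated check passes, but the text says
// nothing about what the chart actually shows.
export const unhelpfulAlt = `<img src="budget-chart.png" alt="chart">`;

// What a human reviewer would write after looking at the image.
export const meaningfulAlt = `<img src="budget-chart.png"
  alt="Pie chart: 60% of the budget goes to staffing, 25% to programs, 15% to overhead">`;
```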

Similarly, as long as there is some description of an image, an automated test cannot tell whether that description is actually representative of the image. An image description could literally consist of a string of gibberish – e.g. “laksjflkajsldkjfs” – and still pass an automated accessibility test, because automated tests are context blind. The same problem arises with videos that are closed captioned or audio-described, or have transcripts. An automated accessibility test will alert if these are missing altogether, but it cannot ascertain whether the description, captions, or transcript really convey what is communicated in the video. This sort of cross-referencing requires a conscious, context-sensitive mind – a human, in other words.
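
The same presence-versus-fidelity gap shows up with video. In the hypothetical markup below, an automated test can confirm that a captions track exists, but only a person watching the video can confirm that the caption file matches the spoken audio.

```ts
// Hypothetical video markup. An automated rule can verify that a
// <track kind="captions"> element is present...
export const videoWithCaptions = `
  <video src="welcome.mp4" controls>
    <track kind="captions" src="welcome-captions.vtt" srclang="en" label="English">
  </video>`;

// ...but whether welcome-captions.vtt really reflects what is said in
// welcome.mp4 - or contains placeholder text, or captions for a different
// video - can only be judged by a person watching with the captions on.
```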

But automated tests can miss accessibility issues beyond pictures and videos. For example, think about digital forms that a user can complete online. In a form, each fillable field needs to be correctly labeled with the desired input. For instance, if a field on the screen says “Current address”, it needs to be coded so that the screen reader says the same thing when the user focuses on it. But once again, as long as the field is tagged in some arbitrary way, an automated accessibility test will pass it. However, it is not uncommon for prompts to get mixed up and for fields to be mislabeled. In those cases, what the screen reader reads does not correspond to the prompt on the screen, and the user will consequently submit invalid information. Only a human tester who cross-references the visible prompt with the accessible name will pick up on the discrepancy.
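
Here is a hypothetical version of such a mislabeled field. The input has an accessible name, so an automated “every field must be labeled” check is satisfied, yet the name a screen reader announces contradicts the prompt a sighted user sees.

```ts
// Hypothetical form markup. The visible prompt says "Current address",
// but the programmatic label - what a screen reader announces - says
// "Phone number". An automated check only verifies that *an* accessible
// name exists; cross-referencing it against the visible prompt takes a human.
export const mislabeledField = `
  <p>Current address</p>
  <input type="text" name="address" aria-label="Phone number">`;
```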

In addition, digital forms in particular, and websites in general, contain many different buttons and links that a user can click to be taken to another feature of the page, or to another website altogether. Sometimes, in the process of coding, developers neglect to make some of these buttons “focusable”. Being focusable means that a button or link can be reached with keyboard strokes alone. Many people with disabilities, including people with visual impairments, cannot use a mouse and rely on keyboard input. A button that is not focusable remains inaccessible to them. Because the underlying markup still looks valid, an automated test typically won’t flag the problem; only a tester who actually tabs through the page will notice that the button can’t be reached.
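
A small sketch of the difference, using made-up markup: the first “button” works for mouse users but can never receive keyboard focus, while the second is a native button and is focusable by default.

```ts
// Hypothetical markup. A <div> with a click handler looks and behaves like
// a button for mouse users, but it is not in the keyboard tab order, so a
// keyboard or screen reader user can never reach or activate it.
export const fakeButton = `<div class="btn" onclick="save()">Save</div>`;

// A native <button> is focusable and keyboard-operable by default.
export const realButton = `<button type="button" onclick="save()">Save</button>`;

// (If a <div> must be used, it needs tabindex="0", role="button", and a
// keydown handler for Enter/Space - easy to forget, and easy for a human
// tester to catch simply by tabbing through the page.)
```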

Accessibility issues that automated accessibility tests tend to miss go beyond screen readers. Some of these concerns affect users with low vision – or even users with normal vision. Many websites place text on top of gradient backgrounds, busy pictures, or shaded colors. While automated tests can determine whether the contrast ratio between the text and a solid background color meets WCAG standards, they can’t determine whether the text is, in fact, easy to read for actual users. Cluttered backgrounds or shifting shading can be distracting and difficult to decipher, even in the absence of a violation. This is true even for users without disabilities – but it is especially true for people with ADHD or dyslexia.

The same goes for revolving banner ads and auto-playing video – constantly moving and changing content makes it extremely difficult for users with cognitive challenges to focus on their tasks. In addition, carousels and videos are often accompanied by soundtracks that start playing automatically, which makes it extremely hard for screen reader users to hear what their screen reader is simultaneously trying to tell them. Last but not least, automated carousels tend to kick the cursor – and with it, the focus of the screen reader – back to the top of the page. None of these issues would be flagged by an automated accessibility test. (For more information on the accessibility downsides of carousels, check out http://shouldiuseacarousel.com/)
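
To show what the automated check actually measures, here is a sketch of the WCAG contrast-ratio formula. It operates on exactly two solid colors; when text sits on top of a photo or a shifting gradient, there is no single background color to plug in, which is why readability over busy backgrounds still calls for human judgment.

```ts
// Sketch of the WCAG 2.x contrast-ratio formula (two solid colors only).

// Relative luminance of an sRGB color, per the WCAG definition.
function relativeLuminance(r: number, g: number, b: number): number {
  const linearize = (channel: number): number => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio between two colors: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}

// Dark gray text on white: roughly 12.6:1, comfortably above the 4.5:1
// minimum for normal-size text - but the formula says nothing about how
// readable that text is over a busy photo or animated background.
console.log(contrastRatio([51, 51, 51], [255, 255, 255]).toFixed(1));
```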

This list is by no means meant to be exhaustive. It is simply meant to give the non-expert an idea of why it’s not enough to run an automated tool on their site. Although such tests are important to ensure that basic WCAG requirements are met, they don’t ensure that a website is fully accessible to people with disabilities. Running such a test will reduce the scope, and therefore the cost, of the engagement with an accessibility services provider, but it will not eliminate the need. Only a human tester who is familiar with the common accessibility pain points can identify them. If this post has helped you come to the realization that you need help with the accessibility of your site, please contact us at the Ablr website.

I would like to close this post with a quote from Karl Groves – the self-proclaimed Accessibility Viking and a leading accessibility expert – who says in his seminal article “Automated Web Accessibility Testing Tools Are Not Judges” (https://karlgroves.com/2017/03/24/automated-web-accessibility-testing-tools-are-not-judges):

“This cannot be said often enough or loudly enough: There’s just too many things in Accessibility that are too subjective and too complex for a tool to test with enough accuracy to be considered a judgment of the system’s level of accessibility.”