Automated scans and manual evaluations are fundamentally different means of testing for digital accessibility. Together, they provide a comprehensive assessment of your digital assets' accessibility, but they don't return the exact same results. Here are the main reasons why:
Automated scans test code only. Manual evaluations test code, language, user flows, keyboard-only interactions, and interactions with assistive technologies.
Automated scans look for accessibility issues at the code level only. Because not all issues are visible or obvious at the code level, automated scans alone won’t capture all potential accessibility barriers or ensure your website is accessible to all.
Manual evaluations test all aspects of your digital property, from code to key user flows with assistive devices like screen readers. Because manual evaluations are performed by humans, some of whom have disabilities, they provide the most accurate assessment of how accessible your digital property actually is. Unlike computer automation, human testers can determine if a menu button works correctly or if an image contains sufficient and relevant alternative text. Testers also use assistive technologies to uncover issues that can't be identified by automated scans.
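To make this distinction concrete, here is a minimal sketch of a code-level check using axe-core, one widely used open-source scanning engine (chosen only as an example; the function name is illustrative, and the snippet assumes a browser context and a TypeScript setup with esModuleInterop):

```ts
import axe from "axe-core";

// Runs the automated engine against the current page and logs what it can
// detect purely from the code.
async function runAutomatedScan(): Promise<void> {
  const results = await axe.run(document);

  // Detectable in code: an <img> with no alt attribute surfaces under the
  // "image-alt" rule; a form input with no label surfaces under "label".
  for (const violation of results.violations) {
    console.log(`${violation.id}: ${violation.nodes.length} instance(s)`);
  }

  // Not detectable in code: whether alt="photo123.jpg" actually describes the
  // image, whether a menu button behaves correctly, or whether a checkout
  // flow works with a screen reader. Those judgments require a human tester.
}

runAutomatedScan();
```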
Automated scans test every page you input. Manual evaluations test only a representative sample set of pages.
Automated scans can test the code on every page you input and identify each and every instance of a potential issue. Manual evaluations focus on a representative sample set of pages, identifying issues in common elements (headers and footers, for example) only once. Because of these differences, the number of findings in your automated and manual results can vary greatly for the same digital property.
Findings are also displayed differently in automated and manual results: automated findings are grouped under the accessibility rule they may have violated, while manual findings are presented individually.
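As a rough illustration of why the numbers diverge, the two kinds of results could be modeled like this (the type and field names are invented for this sketch and don't come from any particular tool):

```ts
// Automated results: one entry per rule, with every offending element grouped
// under it. A missing-alt-text rule that fires on 40 product images is still
// a single grouped finding, just with 40 affected elements.
interface AutomatedFinding {
  ruleId: string;             // e.g. "image-alt"
  affectedElements: string[]; // selectors for every instance found on every scanned page
}

// Manual results: one entry per issue a tester writes up. A barrier in a
// shared header is recorded once, even though the header appears on every page.
interface ManualFinding {
  page: string;          // the sampled page where the issue was observed
  description: string;   // the tester's write-up of the barrier
  wcagCriterion: string; // e.g. "1.1.1 Non-text Content"
}
```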
Automated scans almost always produce false positives. Manual evaluations rarely do.
Automated scans flag every possible issue, so their results are likely to contain some false positives: findings that don't actually affect accessibility or user experience. Because human testers apply their own judgment when identifying accessibility issues, manual evaluations rarely include false positives.
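For example, here is a hypothetical pattern that often produces a false positive (the markup and class names are invented for illustration):

```ts
// An inactive "Submit" control styled with low-contrast text. WCAG's contrast
// requirement exempts inactive user interface components, but a scanner that
// doesn't recognize the custom `is-disabled` class may still flag the text.
// A human tester sees that the control is disabled and dismisses the finding.
const inactiveControl = `
  <button class="btn is-disabled" aria-disabled="true">
    Submit
  </button>
`;
```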
Summary of how automated scans and manual evaluations compare
This table summarizes the main differences between automated scans and manual evaluations that lead to different results.
| Criteria | Automated scans | Manual evaluations |
| --- | --- | --- |
| Who does the testing? | Computer automation | Humans, including individuals with disabilities |
| What elements of a digital property does it test? | Code only | Code, language, key user flows with assistive technologies, and keyboard-only interactions |
| How many pages are tested? | Every page you input | A representative sample set of pages |
| How often does it produce false positives? | Almost always | Rarely |