Scanning and manual testing are fundamentally different means of testing for digital accessibility. Together, they provide a comprehensive assessment of the digital accessibility of your assets. But they don’t return the exact same results. Here are the main reasons why:
Scanning tests code only. Manual testing tests code, language, user flows, keyboard-only interactions, and interactions with assistive technologies.
Scanning looks for accessibility issues at the code level only. Because not all issues are visible or obvious at the code level, scanning alone won’t capture all potential accessibility barriers or ensure your website is accessible to all.
Manual testing covers all aspects of your websites or apps, from code to key user flows with assistive technologies like screen readers. Because manual tests are performed by humans, some of whom have disabilities, they provide the most accurate assessment of how accessible your website or app actually is. Unlike computer automation, human testers can determine whether a menu button works correctly or whether an image's alternative text is sufficient and relevant. Testers also use assistive technologies to uncover issues that can't be identified by scanning.
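To make the distinction concrete, here is a minimal sketch of the kind of code-level check a scan performs (illustrative Python, not any vendor's actual scanner): it can detect an image with no alternative text at all, but it can't judge whether the alternative text that is present actually describes the image.

```python
from bs4 import BeautifulSoup

HTML = """
<img src="chart.png">
<img src="chart.png" alt="chart">
"""

def scan_images(html: str) -> list[str]:
    """Flag <img> elements with no alt attribute -- a purely code-level check."""
    findings = []
    for img in BeautifulSoup(html, "html.parser").find_all("img"):
        if img.get("alt") is None:
            findings.append(f"Missing alt attribute: {img}")
    return findings

for finding in scan_images(HTML):
    print(finding)

# Only the first image is flagged. The second passes the scan even though
# "chart" tells a screen-reader user almost nothing; judging whether alt text
# is sufficient and relevant requires a human tester.
```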
Scanning tests every page you input. Manual testing covers only a representative sample set of pages.
Scans can test the code on every page you input and identify each and every instance of a potential issue. Manual tests focus on a representative sample set of pages, identifying issues in common elements (such as headers and footers) only once. Because of these differences, the number of findings in your scan and manual results can vary greatly for the same website or app.
Findings are also displayed differently in manual and scan results: scan findings are grouped under the accessibility rule they may have violated, while manual findings are presented individually.
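As a rough illustration of the two display styles (a sketch with hypothetical field names, not any product's actual report format), the same underlying findings can be rolled up by rule or listed one by one:

```python
from collections import defaultdict

# Hypothetical raw findings; the field names are illustrative only.
findings = [
    {"rule": "image-alt", "page": "/home", "element": "<img src='logo.png'>"},
    {"rule": "image-alt", "page": "/about", "element": "<img src='team.png'>"},
    {"rule": "color-contrast", "page": "/home", "element": "<p class='hint'>"},
]

# Scan-style display: every instance grouped under the rule it may have violated.
by_rule = defaultdict(list)
for f in findings:
    by_rule[f["rule"]].append(f)
for rule, instances in by_rule.items():
    print(f"{rule}: {len(instances)} instance(s)")

# Manual-style display: each finding presented individually.
for f in findings:
    print(f"{f['page']}: {f['element']} ({f['rule']})")
```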
Scans almost always produce false positives. Manual tests rarely do.
Scans flag every possible issue, so they're likely to flag some false positives: findings that don't actually affect accessibility or user experience. Because human testers use their judgment to identify accessibility issues, manual tests rarely include false positives.
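For instance, a simplistic code-level rule can flag a form field as unlabeled even though it is accessibly named another way. A minimal sketch of such a false positive (the naive rule here is deliberately simplistic; mature scanners do account for `aria-label`):

```python
from bs4 import BeautifulSoup

HTML = '<input id="q" type="search" aria-label="Search the site">'

soup = BeautifulSoup(HTML, "html.parser")
for field in soup.find_all("input"):
    # Naive rule: every input must have a matching <label for="..."> element.
    has_label = soup.find("label", attrs={"for": field.get("id")}) is not None
    if not has_label:
        # False positive: the field is accessibly named via aria-label, which
        # this simplistic rule ignores. A human tester using a screen reader
        # would hear "Search the site" and pass the field.
        print(f"Flagged as unlabeled: {field}")
```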
Summary of how scans and manual tests compare
This table summarizes the main differences between scanning and manual testing that lead to different results.
| Criteria | Scans | Manual testing |
| --- | --- | --- |
| Who does the testing? | Computer automation | Humans, including individuals with disabilities |
| What elements of a digital property does it test? | Code only | Code, language, key user flows with assistive technologies, and keyboard-only interactions |
| How many pages are tested? | Every page you input | A representative sample set of pages |
| How often does it produce false positives? | Almost always | Rarely |