The accessibility health score (or scan score) is based on the automated scans you run in the platform. It’s not an indicator of the overall accessibility or WCAG conformance of your site (you’ll need manual testing for that), but it does show relative improvement from one scan to the next. Learn more about the score and how it’s calculated.
When it comes to overall accessibility, the score isn’t everything. But it should trend upwards as you make your websites more accessible. To improve your score, keep these guidelines in mind:
Address all open findings in a rule.
The score is calculated from rules, not individual findings. To improve your score, address every finding in a failed rule. For example, if you have 10 findings under the rule “Insufficient color contrast,” there are 10 separate instances of poor color contrast on the page, and you must fix or dismiss all 10 before the rule passes and the score improves.
Addressing scattered individual findings across different rules won’t improve the score.
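The rule-versus-finding distinction can be illustrated with a minimal sketch. The platform’s actual scoring formula isn’t shown here; the model below simply assumes a rule “passes” only when it has zero open findings, which is enough to show why partial fixes don’t move the needle.

```python
# Hypothetical sketch: a rule only passes once every one of its findings
# is fixed or dismissed, so partial fixes inside one rule don't change
# the score. The real platform's formula is not public; this only
# illustrates the rule-versus-finding distinction.

def health_score(rules: dict[str, int]) -> float:
    """rules maps rule name -> count of open findings.
    The score here is simply the share of rules with no open findings."""
    passed = sum(1 for open_count in rules.values() if open_count == 0)
    return round(100 * passed / len(rules), 1)

rules = {
    "Insufficient color contrast": 10,
    "Image missing alt text": 0,
    "Form field missing label": 0,
    "Empty link": 0,
}

print(health_score(rules))                # 75.0 (3 of 4 rules pass)

rules["Insufficient color contrast"] = 1  # 9 of 10 findings fixed
print(health_score(rules))                # still 75.0

rules["Insufficient color contrast"] = 0  # all 10 fixed or dismissed
print(health_score(rules))                # 100.0
```

Note how fixing 9 of the 10 contrast findings leaves the score exactly where it started; only clearing the whole rule changes it.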
Dismiss all false positives.
Scans can return false positives, or findings that don’t actually affect users with disabilities. Investigate your findings and dismiss all false positives to help clear out failed rules and improve your score.
You only need to dismiss a false positive once. The system will automatically dismiss the same finding in future scans.
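One common way scanning tools carry dismissals forward is to give each finding a stable fingerprint (rule plus location) and filter fingerprints you’ve already dismissed out of later results. The sketch below assumes that approach for illustration; it is not a description of this platform’s actual matching logic.

```python
# Hypothetical sketch of how a one-time dismissal can persist across
# scans: each finding gets a stable fingerprint (rule + page + selector),
# and any fingerprint dismissed once is dropped from future scan results.
import hashlib

def fingerprint(finding: dict) -> str:
    key = f'{finding["rule"]}|{finding["page"]}|{finding["selector"]}'
    return hashlib.sha256(key.encode()).hexdigest()

dismissed: set[str] = set()

def dismiss(finding: dict) -> None:
    dismissed.add(fingerprint(finding))

def open_findings(scan_results: list[dict]) -> list[dict]:
    """Drop any finding whose fingerprint was dismissed in a prior scan."""
    return [f for f in scan_results if fingerprint(f) not in dismissed]

false_positive = {"rule": "Empty link", "page": "/home", "selector": "a.logo"}
dismiss(false_positive)  # dismiss once...

next_scan = [
    false_positive,  # ...and the same finding reappears in the next scan
    {"rule": "Empty link", "page": "/home", "selector": "a.cart"},
]
print(len(open_findings(next_scan)))  # 1 -- the dismissal carried over
```

The same idea explains why a dismissal sticks only as long as the finding is recognizably “the same” from scan to scan.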
Recognize that scans capture only 25–30% of total accessibility issues.
Scans only capture accessibility issues detectable in code, which account for roughly 25–30% of total issues; manual tests capture everything. If you’re working from manual findings only, you’re likely fixing issues a scan wouldn’t catch in the first place. Those findings still need to be fixed, but because computer automation can’t detect them, they don’t factor into your score.
Fix every instance of a global finding.
Global findings are issues that occur on multiple pages of the same page type (for instance, on every product page, or in the header and footer). Manual results record a global finding once, while scans record every instance of it. Check the “Finding frequency” of a manual finding; if it’s global, make sure you fix it on every page or in every affected template.
Scan the exact pages included in manual test plans.
During a manual test, our testing team assesses the pages and flows included in your test plan. The test plan should be a representative sample of your digital property with at least one of each key page type included.
If you’re remediating manual results but running scans only on select pages or pages that aren’t included in your test plan, you likely won’t see the manual fixes reflected in your score. Scan the exact pages from your test plan for the best comparison.
Measure progress from one automated testing tool.
Every scan collects data from four different testing tools: Access Engine, equal-access, WAVE, and axe-core. If you check the same set of scan results with different tools, you’ll notice that each tool shows a different score.
Each testing tool has a unique library of rules that it tests against during a scan, so the results will never be identical between tools: you’ll see different numbers of findings and false positives. The tools also weight rules differently, so fixing the same finding has a different impact on each tool’s score. Use one tool consistently to get a meaningful trend from the accessibility health score.
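The effect of differing rule libraries and weights can be shown with a small sketch. The tool names, rules, and weights below are invented for illustration; they don’t reflect the actual rule sets of Access Engine, equal-access, WAVE, or axe-core.

```python
# Hypothetical sketch of why tools disagree on the same page: each tool
# checks its own rule library and weights rules differently, so identical
# page issues produce different scores. All rules/weights are invented.

page_failures = {"color-contrast", "missing-alt-text"}  # issues on the page

tools = {
    "tool-a": {"color-contrast": 3, "missing-alt-text": 5, "empty-link": 1},
    "tool-b": {"missing-alt-text": 10, "empty-link": 2},  # no contrast rule
}

def score(rule_weights: dict[str, int], failures: set[str]) -> float:
    """Percentage of a tool's total rule weight that passes on the page."""
    total = sum(rule_weights.values())
    failed = sum(w for rule, w in rule_weights.items() if rule in failures)
    return round(100 * (total - failed) / total, 1)

for tool, weights in tools.items():
    print(tool, score(weights, page_failures))
# tool-a 11.1
# tool-b 16.7
```

The same two page issues yield different scores because tool-b doesn’t test contrast at all and weights missing alt text more heavily, which is why comparing scores across tools is misleading.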
Still not improving?
Are you doing all of the above and still not seeing improvements? Request support and our team will investigate for you.