This document collects frequently asked questions about AMP Compliance Scores and explains the details and nuances of how those scores work in plain English.
On this page:
- Why is the Compliance Score for a given ruleset not the average of the compliance scores for each of its paragraphs / success criteria?
- Why is the score in the 'Total Compliance' widget not the average of the scores in the 'Compliance Health By Ruleset' widget?
- Why do some rulesets have more weight than others in the calculation of AMP Compliance Scores?
- How is my asset level Total Compliance score calculated?
- What does the little * next to a total compliance score mean?
- What is a Projected Compliance score?
- How does the 'level of testing completed' impact my compliance score?
- Why do the compliance graphics on the Overview page of my report show less than 100% compliance, even though when I go to the Modules page, I don't see any violations?
- How can my score for WCAG 2.0 AA be higher than my score for WCAG 2.0 A? Isn't WCAG 2.0 AA compliance reliant on WCAG 2.0 A?
Q: Why is the Compliance Score for a given ruleset not the average of the compliance scores for each of its paragraphs / success criteria?
The compliance score for a ruleset is calculated from the points earned against the total possible score for the ruleset, not by averaging the percentage scores of the individual paragraphs / success criteria. For example, WCAG 2.0 A has a total possible rule score of 250 points. If the points earned across all of its success criteria add up to 211, the compliance score is 211 / 250 = 84.4%, rounded down to 84%.
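To make the distinction concrete, here is a minimal Python sketch using hypothetical per-criterion point values (the actual point allocations are assigned within AMP, not by this example):

```python
# Hypothetical (points earned, points possible) for each success criterion in a ruleset.
# The real point allocations come from AMP; these numbers are only illustrative.
criteria = [
    (50, 50),    # fully compliant criterion
    (36, 60),    # partially compliant criterion
    (125, 140),  # mostly compliant criterion
]

earned = sum(e for e, _ in criteria)          # 211
possible = sum(p for _, p in criteria)        # 250
ruleset_score = int(earned / possible * 100)  # 84  (84.4% rounded down)

# A naive average of the per-criterion percentages gives a different number,
# which is why the ruleset score is not the average of the criterion scores.
naive_average = sum(e / p for e, p in criteria) / len(criteria) * 100  # ~83.1%
```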
Q: Why is the score in the 'Total Compliance' widget not the average of the scores in the 'Compliance Health By Ruleset' widget?
Total Compliance is calculated by adding together the points earned for each ruleset and dividing that sum by the combined total possible rule score of all the rulesets. It is not equivalent to the average of the scores in the 'Compliance Health by Ruleset' widget because each ruleset has its own total possible score, and those totals differ. As a result, rulesets with larger total possible scores carry more weight in the Total Compliance score.
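As an illustration of that weighting, here is a short Python sketch using made-up rule scores (the real earned and possible points per ruleset come from AMP):

```python
# Hypothetical (earned, possible) rule scores per ruleset; real values come from AMP.
rulesets = {
    "WCAG 2.0 A": (211, 250),
    "WCAG 2.0 AA": (95, 100),
    "Section 508": (40, 80),
}

# Total Compliance: sum of earned points divided by sum of total possible points.
earned = sum(e for e, _ in rulesets.values())    # 346
possible = sum(p for _, p in rulesets.values())  # 430
total_compliance = earned / possible * 100       # ~80.5%

# A simple average of the per-ruleset percentages ignores the differing
# total possible scores, so it produces a different (unweighted) number.
simple_average = sum(e / p for e, p in rulesets.values()) / len(rulesets) * 100  # ~76.5%
```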
Q: Why do some rulesets have more weight than others in the calculation of AMP Compliance Scores?
Different rulesets carry different weights based on their significance. Level Access' team of accessibility experts judges certain paragraphs / success criteria to have a greater or lesser impact on users with disabilities than others, based on an evaluation of the user experience issues caused when those requirements are violated.
Q: How is my asset level Total Compliance score calculated?
The total rule scores for each report are added together and divided by the sum of the total possible rule scores for those reports. This calculation includes only the most recent report created by each recurring AMP Test and excludes any report that has 0 modules.
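A minimal sketch of that roll-up, using hypothetical report data and illustrative field names (not AMP's data model):

```python
# Hypothetical reports on an asset; field names are illustrative only.
reports = [
    {"earned": 211, "possible": 250, "modules": 12, "latest_recurring": True},
    {"earned": 300, "possible": 400, "modules": 8,  "latest_recurring": True},
    {"earned": 150, "possible": 250, "modules": 0,  "latest_recurring": True},   # excluded: 0 modules
    {"earned": 100, "possible": 250, "modules": 10, "latest_recurring": False},  # excluded: superseded recurring run
]

included = [r for r in reports if r["modules"] > 0 and r["latest_recurring"]]
asset_total_compliance = (
    sum(r["earned"] for r in included) / sum(r["possible"] for r in included) * 100
)  # (211 + 300) / (250 + 400) = 511 / 650 ≈ 78.6%
```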
Q: What does the little * next to a total compliance score mean?
The asterisk means that the compliance score has been adjusted because manual testing has not been completed, or has only been partially completed. In that case an adjusted (projected) compliance score is being provided rather than the actual compliance score.
Q: What is a Projected Compliance score?
A projected compliance score adjusts the current compliance score to give a rough estimate of what the score might be once all manual testing has been carried out. The estimate is based on the average change observed when manual testing is completed, using all reports that have had manual testing completed as the sampling pool.
Q: How does the 'level of testing completed' impact my compliance score?
If manual testing has not been completed and projected compliance scores are turned on at the organization level, the compliance scores are adjusted based on a projection of the changes expected once manual testing is carried out. An asterisk (*) in the upper right-hand corner of the compliance score indicates that manual testing has not been fully completed. As more manual testing is completed, less of a delta is applied to the overall compliance score. Once all manual testing is marked as completed, the actual compliance score for the report is displayed.
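The exact projection model is internal to AMP, but as a rough sketch of the idea, assuming the projected delta shrinks linearly as manual testing is completed (an assumption of this example, not a documented formula):

```python
def projected_score(current_score, average_observed_delta, fraction_manual_complete):
    """Estimate the post-manual-testing score from the current score.

    average_observed_delta is the average change seen across reports that have
    completed manual testing; the linear scaling below is an assumption.
    """
    remaining = 1.0 - fraction_manual_complete
    return current_score + average_observed_delta * remaining

# Hypothetical figures: a report currently at 90% where completing manual
# testing has historically lowered scores by about 12 points on average.
projected_score(90, -12, 0.0)  # 78.0 - no manual testing completed
projected_score(90, -12, 0.5)  # 84.0 - half of manual testing completed
projected_score(90, -12, 1.0)  # 90.0 - all complete; the actual score is shown
```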
Q: Why do the compliance graphics on the Overview page of my report show less than 100% compliance, even though when I go to the Modules page, I don't see any violations?
This can happen for a couple of reasons. If projected compliance scores are turned on at the organization level and you have not fully completed manual testing, your score has been adjusted to reflect the average change observed after manual testing is carried out. Additionally, if there is a 'scores are refreshing' symbol in the 'Total Compliance' widget on the report dashboard, the scores are still updating and the current score may not yet reflect recent changes in the modules.
Q: How can my score for WCAG 2.0 AA be higher than my score for WCAG 2.0 A? Isn't WCAG 2.0 AA compliance reliant on WCAG 2.0 A?
This occurs because of our testing paradigm. Each standard maps to a given number of best practices, and each level is scored against the standards mapped to it. When we say that you have 85% for WCAG AA, we mean that you score 85% across the standards mapped to WCAG AA. That said, you can still be non-compliant with WCAG AA, because WCAG A must also be satisfied in order to be AA compliant.
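As an illustration with made-up numbers, each level's percentage is computed over the standards mapped to that level, so the AA figure can come out higher than the A figure even though AA conformance still requires Level A to be met:

```python
# Hypothetical (earned, possible) points for the standards mapped to each level.
wcag_a = (200, 250)   # standards mapped to WCAG 2.0 A
wcag_aa = (90, 100)   # standards mapped to WCAG 2.0 AA

score_a = wcag_a[0] / wcag_a[1] * 100     # 80.0%
score_aa = wcag_aa[0] / wcag_aa[1] * 100  # 90.0%
# The AA percentage is higher, yet WCAG 2.0 AA conformance still requires
# every Level A success criterion to be satisfied as well.
```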
Read more about the Compliance Score calculation.