VoiceOver, as a screen reader for both macOS and iOS, requires different testing approaches. If we are testing accessibility on a mobile app versus a web page on a macOS device, we would want to expand beyond testing simply with VoiceOver and incorporate testing with PC-based screen readers as well.

At scale that means different things to different people, so there may not be a one-size-fits-all model. My suggestion is generally to think about the QA process and where accessibility fits into it. It's generally not sufficient to identify a single assistive technology and say, "we'll do testing with VoiceOver, but we won't really do anything else beyond that." Testing should include screen readers, ideally performed by people who are end users of that assistive technology. Evaluate your QA process, come up with an accessibility testing procedure, and make VoiceOver a part of it, but not something you focus on exclusively.

You can't test everything; there isn't really a QA process that tests every single component on every page or in every app to the most rigorous degree. We want to test for blockers and barriers, anything that could cause users to fail or prevent them from completing a process, and we want to test across different assistive technologies on different devices. If we're thinking specifically about an iOS app, we can incorporate testing from the VoiceOver perspective across different iPhone devices as well as iPads, and test for any issues that might arise across different OS versions.

Beyond that, screen reader testing can definitely be a core part of a larger accessibility testing practice.
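Manual screen reader passes can be complemented by automated pre-checks that flag common blockers before a human tester ever launches VoiceOver. As a minimal sketch (not a substitute for testing with real assistive technology users), here is a Python standard-library scanner for one well-known blocker on web pages: `<img>` elements with no `alt` attribute; the function name and sample markup are illustrative, not from any particular project:

```python
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Flags <img> tags that lack an alt attribute -- a common
    screen reader blocker worth catching before manual testing."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []  # line numbers of offending <img> tags

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs parsed from the tag
        if tag == "img" and "alt" not in dict(attrs):
            line, _offset = self.getpos()
            self.missing_alt.append(line)


def find_missing_alt(html: str) -> list[int]:
    """Return the (1-based) line numbers of <img> tags without alt text."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt


sample = '<p>Hello</p>\n<img src="logo.png">\n<img src="gear.png" alt="Settings">'
print(find_missing_alt(sample))  # -> [2]: only the logo image lacks alt text
```

A check like this only catches a narrow class of mechanical issues; whether the alt text is actually meaningful, and whether flows are completable, still requires testing with the screen readers themselves.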