Who can use this feature?
- Organization administrators, workspace administrators, and workspace users.
- Available for Accelerate and Enterprise.
Scan internal environments or intranets using the Desktop Crawler App. To scan with the Desktop Crawler App, make sure you've downloaded and installed the application first.
The Desktop Crawler App can scan up to 100 pages with a crawl depth of 25.
On this page:
- Log in to the environment
- Generate an API key
- Log in to the Desktop Crawler App
- Scan with the Desktop Crawler App
- Understanding configuration settings for crawling
Log in to the environment
The Desktop Crawler App requires you to have access to the environment you want to scan. This means you must log in to that environment before logging in to the Desktop Crawler App.
Before logging in to the Desktop Crawler App:
- Log in to the environment you want to scan. Note that the browser you use must match the browser you select under Browser in Scan with the Desktop Crawler App.
Generate an API key
Each time you log in to the Desktop Crawler App, you need an API key. We recommend saving the API key somewhere like a password manager or downloading the .txt file, so you don't have to generate a new one each time you use the app.
Note:
- API keys are only valid for six months.
- API keys are user-specific.
- You can only access this API key from the platform at the time of generation. Be sure to save it immediately, as it won't be available in the platform later.
To generate an API key:
- From the Control hub, go to Tools and Integrations.
- Choose the API tab.
- Enter a name for the key and select Generate new API key.
- Copy or download the key.
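Because the key can't be retrieved from the platform later, it helps to have a consistent way to pull the saved key back out each time you log in to the app. The sketch below is one possible approach, assuming you saved the key in an environment variable or kept the downloaded .txt file; the variable name and file name are placeholders, not product defaults.

```python
import os
from pathlib import Path

def load_api_key(txt_path: str = "desktop-crawler-api-key.txt") -> str:
    """Return a saved Desktop Crawler App API key (illustrative helper only)."""
    key = os.environ.get("DCA_API_KEY")           # hypothetical variable name
    if not key and Path(txt_path).exists():
        key = Path(txt_path).read_text().strip()  # downloaded .txt key file
    if not key:
        raise RuntimeError("No saved API key found; generate one under Tools and Integrations.")
    return key
```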
Log in to the Desktop Crawler App
Once you've installed the Desktop Crawler App, you can log in to the app with your organization URL and API key.
To log in to the Desktop Crawler App:
- Open the app.
- Enter your organization URL. For example, https://ACME.hub.essentia11y.com/.
- Enter your API key.
You're ready to start scanning.
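As a purely illustrative aid (not part of the app), the snippet below sanity-checks that an organization URL matches the shape of the example above before you paste it in; the hub.essentia11y.com host is taken from that example and may differ for your organization.

```python
import re

def looks_like_org_url(url: str) -> bool:
    # Matches https://<organization>.hub.essentia11y.com/ (host taken from the example above).
    return re.fullmatch(r"https://[\w-]+\.hub\.essentia11y\.com/?", url) is not None

print(looks_like_org_url("https://ACME.hub.essentia11y.com/"))  # True
```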
Scan with the Desktop Crawler App
Note:
- The Desktop Crawler App is:
- Available for Chrome, Edge, and Firefox.
- Not available for Safari.
If you are on Windows, you must close Chrome before running a Desktop Crawler App scan that uses Chrome. If you attempt to run a scan while Chrome is open, the app informs you that Chrome must be closed, but it then stops functioning. If this happens, quit Chrome and restart the Desktop Crawler App; after that, you can run scans using Chrome.
To scan with the Desktop Crawler App:
- Select the workspace that contains the website you want to scan.
- Find the website you want to scan and select Create scan.
- Enter or select the:
- Browser. If you're on Windows and the default browser is supported, it will be selected automatically.
- Scan title.
- Website URL.
- Scan tag.
- Maximum number of pages.
- Crawl depth.
- Skip URL:
- # endings
- ? endings
- Select Run scan.
The application starts crawling and scanning pages using the specified settings. It discovers, scans, and submits results sequentially. Closing the application interrupts the scanning process, and only partial results will be available in the platform. Scanning stops when:
- All discovered possible URLs are scanned, or
- The maximum number of pages is reached.
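To make this behavior concrete, here is a minimal sketch of a crawl loop with the same rules: sequential scanning, a page limit, a crawl-depth limit, and the optional Skip URL filters. It illustrates the behavior described above under stated assumptions and is not the app's actual implementation; scan_page() and extract_links() are hypothetical placeholders, and the "# endings" and "? endings" options are interpreted here as skipping links that contain a fragment or query string.

```python
from collections import deque

def scan_page(url):            # placeholder: scan the page and submit results
    print("scanning", url)

def extract_links(url):        # placeholder: return the links discovered on the page
    return []

def crawl(start_url, max_pages=100, crawl_depth=25,
          skip_hash_endings=False, skip_query_endings=False):
    queue = deque([(start_url, 1)])        # (url, depth); the start page is level 1
    seen, scanned = {start_url}, 0
    while queue and scanned < max_pages:   # stop when the page limit is reached
        url, depth = queue.popleft()
        scan_page(url)
        scanned += 1
        if depth >= crawl_depth:           # at the depth limit: scan the page, but
            continue                       # collect no further links from it
        for link in extract_links(url):
            if not link.startswith(start_url):
                continue                   # stay under the start URL
            if skip_hash_endings and "#" in link:
                continue                   # assumed meaning of "# endings"
            if skip_query_endings and "?" in link:
                continue                   # assumed meaning of "? endings"
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))

crawl("https://www.mycompany.com/foo", max_pages=50, crawl_depth=5)
```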
Once the scan is complete, you can view the results in the platform or run a new scan.
Understanding configuration settings for crawling
Before you can run a scan, you need to specify values for the following settings:
Setting | Value
Browser | Browser to use for loading the pages to be scanned. Note: You can select only the browsers that are installed on your system. If you choose Chrome, you must quit that browser before running your scan.
Scan title | Name of the scan report that will be created. A time stamp is appended. Example: Edge Not Scan
Website URL | The page on which the spider will start (the start location). Example: www.mycompany.com/foo. Note: The scanner skips any links that jump to a page that doesn't start with that URL; in this example, www.mycompany.com/bar would not be scanned.
Scan tag | Scan tags help you categorize, filter, and find past scans. Refer to Scan tag best practices.
Maximum number of pages | The number of pages to scan or test. The maximum value is 100 pages. Example: 50
Crawl depth | The depth of a webpage in the website's hierarchical structure; it indicates how many sub-levels you'd like to test. Depth aligns with the number of slashes in a URL. The minimum is 1 and the maximum is 25. Example: 5. Note: If Crawl depth is set to 5 and the scanner reaches a page that took four links to reach, that page is already five levels deep. After testing that page, the scanner stops collecting links from it to stay within the Crawl depth limit and moves on to other pages in its queue.
Skip URL | # endings and ? endings. Both Skip options default to off, which is the same as the default for Level Access Platform scans and monitoring.
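If it helps to estimate a Crawl depth value before scanning, the sketch below counts path segments in a URL, following the "depth aligns with the number of slashes" description above. It is an approximation for planning only, assuming depth is read off the URL path; it is not the app's own calculation.

```python
from urllib.parse import urlparse

def path_depth(url: str) -> int:
    # Count path segments; each segment adds one level of depth.
    if "://" not in url:
        url = "https://" + url            # allow bare hostnames like the example
    segments = [s for s in urlparse(url).path.split("/") if s]
    return max(1, len(segments))          # the minimum Crawl depth is 1

print(path_depth("www.mycompany.com/foo"))      # 1
print(path_depth("www.mycompany.com/a/b/c/d"))  # 4
```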