Access all crawl tests
-
How can I see all crawl tests run in the history of the account? Also, can I get them sent to an email that isn't the primary one on the account?
Please advise as I need this historical data ASAP.
-
Just a note that we've discontinued the old Crawl Test tool and have launched an entirely new On-Demand Crawl tool based on our upgraded Site Crawl engine (launched last year). The new tool has an enhanced UI, entirely rebuilt back-end, full export capability, and will save your old crawls for up to 90 days.
We've written up a sample case study, or logged-in customers can go directly to On-Demand Crawl.
-
Hey Irene, thanks for contacting us!
If you are logged into the account that ran the crawl tests, you can navigate to the Crawl Test tool to view and download all reports that have been run. When a crawl test completes, a notification is sent to the account owner's email address; unfortunately, that is the only address that can receive these notices. Hopefully that helps!
Related Questions
-
Server blocking crawl bot due to DOS protection and MOZ Help team not responding
First of all, has anyone else not received a response from the help team? I've sent 4 emails, the oldest of which is a month old, and one of our most-used Moz features, On-Demand Crawl for finding broken links, doesn't work. It's really frustrating not to get a response when we're paying so much a month for a feature that doesn't work. OK, rant over; now on to the actual issue. On our crawls we're just getting 429 errors because our server has DOS protection and is blocking Moz's robot. I'm sure it will be as easy as whitelisting the robot's IP, but I can't get a response from Moz with the IP. Cheers, Fergus
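Not an official Moz answer, but while waiting on an IP list, a common workaround for 429s like this is to exempt the crawler's user agent from the server's rate limiting instead of whitelisting IPs. Moz's crawler identifies itself as rogerbot; the zone names and rates in this hypothetical nginx sketch are purely illustrative:

```nginx
# Map the request's user agent to a rate-limit key.
# An empty key means the request is NOT counted against the limit.
map $http_user_agent $limit_key {
    default      $binary_remote_addr;  # everyone else: limited per client IP
    ~*rogerbot   "";                   # Moz's crawler: exempt from the limit
}

limit_req_zone $limit_key zone=perip:10m rate=10r/s;

server {
    listen 80;
    location / {
        limit_req zone=perip burst=20;  # returns 429/503 only for non-exempt clients
    }
}
```

Note that user-agent strings can be spoofed, so this is looser than an IP allowlist; it is a stopgap sketch, not a security boundary.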
Feature Requests | JamesDavison -
Why won't my site crawl?
My error in the dashboard: **Moz was unable to crawl your site on Jul 23, 2020. **Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag. Update these tags to allow your page and the rest of your site to be crawled. If this error is found on any page on your site, it prevents our crawler (and some search engines) from crawling the rest of your site. Typically errors like this should be investigated and fixed by the site webmaster. I think I need to edit robots.txt. How do I fix that?
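For reference, a minimal robots.txt that explicitly allows Moz's crawler (rogerbot) while keeping restrictions for other bots could look like the sketch below; the `/admin/` path is only an illustrative placeholder:

```
# Allow Moz's crawler everywhere; an empty Disallow permits all paths.
User-agent: rogerbot
Disallow:

# Rules for all other crawlers (example path only)
User-agent: *
Disallow: /admin/
```

If the block comes from an X-Robots-Tag header or a meta robots tag rather than robots.txt, those have to be relaxed separately on the affected pages.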
Feature Requests | alixxf -
No access to the Analytics part of the website. Help?
I have accepted a seat on my manager's subscription to help look into our website, and the analytics part specifically, but every time I try to access either Keyword Explorer or any other analytics section of the site, there appears to be a broken link: it just shows a white screen with only the headers and footers intact.
Could someone please let me know how to resolve the issue?
Kind regards,
Olivia Houghton
Feature Requests | libbyh2019 -
Is there a way to take notes on a crawled URL?
I'm trying to figure out the best way to keep track of the different things I've done to work on a page (for example, adding a longer description, changing h2 wording, or adding a canonical URL). Is there a way to take notes for crawled URLs? If not, what do you use to accomplish this?
Feature Requests | TouchdownTech -
Moz Site Crawl - Ignore functionality request
I now understand that the Ignore option in the Moz Site Crawl tool will permanently remove an item from ever showing up in our Issues again. We want to use the issues list as a to-do checklist, with the ultimate goal of having no issues found, and would like to "temporarily ignore" an issue to see if it shows back up in future crawls. If we properly fix the issue, it shouldn't show back up. However, with the current Ignore function, if we ignore the issue it will never show back up, even if the issue is still a problem. At the same time, an issue could be a known issue that the end user never wants to see again, in which case it would be nice to keep the current "permanently ignore" option as well. Use the following imgur to see a mockup of my idea for your review. pzdfW
Feature Requests | StickyLife -
Crawl error : 804 https (SSL) error
Hi, I have a crawl error in my report: 804: HTTPS (SSL) error encountered when requesting page. I checked all pages and the database to fix wrong http URLs to https, but this one persists. Do you have any ideas how I can fix it? Thanks. Website: https://ilovemypopotin.fr/
Feature Requests | Sitiodev -
Crawl test limitation - ways to handle large sites?
Hello, I have a large site (120,000+ pages) and the crawl test is limited to 3,000 pages. I want to know if there is a way to usefully crawl a site of this size. Can I use a regular expression, for example? Thanks!
Feature Requests | CamiRojasE -
Crawl diagnostic errors due to query string
I'm seeing a large number of duplicate page titles, duplicate content, missing meta descriptions, etc. in my Crawl Diagnostics report due to URL query strings. These pages already have canonical tags, but I know canonical tags aren't considered in Moz's crawl diagnostics and therefore won't reduce the number of reported errors. Is there any way to configure Moz not to treat query-string variants as unique URLs? It's difficult to find a legitimate error among hundreds of these non-errors.
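As a stopgap while triaging an exported report, the kind of normalization being asked for can be sketched outside the tool: collapse query-string (and fragment) variants to a single key before counting duplicates. This is a hypothetical helper, not a Moz feature:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_key(url: str) -> str:
    """Collapse query-string and fragment variants of a URL to one key."""
    p = urlsplit(url)
    return urlunsplit((p.scheme, p.netloc, p.path, "", ""))

urls = [
    "https://example.com/shoes?color=red",
    "https://example.com/shoes?sort=price",
    "https://example.com/shoes",
]
# All three variants collapse to the same key, so they count as one page.
print({canonical_key(u) for u in urls})
# → {'https://example.com/shoes'}
```

Grouping an exported issues CSV by this key would let you spot genuine duplicates that aren't just query-string noise.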
Feature Requests | jmorehouse