Crawl test
-
Can anyone give me an idea of how to use the Moz crawl test results? I'm a little confused about how to read them. I have a lot of "no" values... I think this is good?
-
Hello!
What you will want to do is look at the crawl diagnostics in your campaign to get a better understanding of the errors. The CSV is useful to hand over to a web developer so they can address any issues. The report contains a mix of columns: some flag genuine problems, others simply record checks that passed. You can use the data shown in your campaign as a guideline.
A TRUE value means the error named in that column exists on the page; these are the ones to pay close attention to.
A FALSE value means the error does not exist. Most FALSE values are not surfaced in a campaign, which is why so many cells in the CSV show FALSE even though there is nothing to fix.
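If you would rather slice the CSV yourself before handing it over, here is a minimal sketch. It assumes Python 3 with pandas installed, a file named crawl_test.csv, and a URL column; those names are placeholders, not the exact export schema, so adjust them to match your file.

# Minimal sketch: list the error columns that actually contain TRUE values.
import pandas as pd

df = pd.read_csv("crawl_test.csv")

# Treat any column whose values are only TRUE/FALSE as an error flag.
flag_columns = [
    col for col in df.columns
    if col != "URL" and df[col].dropna().isin([True, False, "TRUE", "FALSE"]).all()
]

for col in flag_columns:
    flagged = df[df[col].isin([True, "TRUE"])]
    if len(flagged):
        print(f"{col}: {len(flagged)} pages flagged")
        print(flagged["URL"].head().tolist())

Anything this prints is worth passing to your developer; columns that never print contain only FALSE values.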
Hope this helps!
Related Questions
-
5xx Crawl Issues might not be issues at all. Help
Hi, I ran a crawl test on our website and it came back with 900 potential 5xx errors. When I started opening these links one by one, I could see they were actually working. So I exported the full list of 900, went to https://httpstatus.io/, pasted the links in batches of 100, and checked them there. They came back with status codes of 301 / 301 / 200, which I believe means they are okay. From what I've read, my programmer may need to check whether we are blocking the Moz bot, or slow the Moz bot down. I guess I'm wondering: if this is not done, is the site actually returning these 5xx errors when Google is crawling, or is it just showing 900 errors because of the Moz bot while things are actually okay? I know the simple answer is to get the programmer to fix the Moz bot issue to know for sure, but getting programmers to do things takes a lot of time, so I'm trying to get a better idea here. Thanks for your input.
Getting Started | Cfarcher1
-
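For reference, the bulk status check described in the question above can also be scripted locally. A minimal sketch, assuming Python 3 with the requests package installed and a urls.txt file containing one exported URL per line (both assumptions, not part of the original workflow):

# Minimal sketch: report the redirect chain and final status code for each URL.
import requests

with open("urls.txt") as fh:
    urls = [line.strip() for line in fh if line.strip()]

for url in urls:
    try:
        resp = requests.get(url, timeout=10, allow_redirects=True)
        hops = " -> ".join(str(r.status_code) for r in resp.history)
        chain = f"{hops} -> {resp.status_code}" if hops else str(resp.status_code)
        print(f"{chain}  {url}")
    except requests.RequestException as exc:
        print(f"ERROR  {url}  ({exc})")

If the codes come back clean here but the crawl report still shows 5xx, that would point toward the server throttling or blocking the Moz bot specifically, which is the scenario the report warns about.
-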
Crawling issue
Hi, I have to set up a campaign for a webshop. This webshop is a subdomain itself. First question: the two subfolders I need to track are /nl_BE and /fr_BE. What is the best way to handle this? Should I set up two different campaigns, one for each subfolder, or should I make one campaign and add tags to the keywords? Second question: it seems like Moz can't crawl enough pages. There are no disallows in the robots.txt. Should I try putting the following at the top of my robots.txt?
User-agent: rogerbot
Disallow:
Or is it because I want to crawl only a subdomain that it doesn't work? Thanks.
Getting Started | Mat_C
-
Why is Moz unable to crawl my site?
Was hoping someone could advise why Moz is unable to crawl my site at https://www.oceaniacruises.com. The error reads: "Moz was unable to crawl your site on Oct 5, 2017. Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag. Update these tags to allow your page and the rest of your site to be crawled. If this error is found on any page on your site, it prevents our crawler (and some search engines) from crawling the rest of your site. Typically errors like this should be investigated and fixed by the site webmaster." Any help would be appreciated. Thanks!
Getting Started | jbarinaga0
-
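For anyone debugging the same message, the three ban mechanisms it names (robots.txt, the X-Robots-Tag header, and the meta robots tag) can all be checked quickly. A minimal sketch, assuming Python 3 with the requests package installed and using the site above purely as an example:

# Minimal sketch: check the three ways a crawler can be banned from a page.
import urllib.robotparser
import requests

site = "https://www.oceaniacruises.com"

# 1) Does robots.txt allow rogerbot (Moz's crawler) to fetch the homepage?
rp = urllib.robotparser.RobotFileParser()
rp.set_url(site + "/robots.txt")
rp.read()
print("robots.txt allows rogerbot:", rp.can_fetch("rogerbot", site + "/"))

# 2) Does the page send an X-Robots-Tag header or include a meta robots tag?
resp = requests.get(site + "/", timeout=10)
print("X-Robots-Tag header:", resp.headers.get("X-Robots-Tag"))
print("meta robots tag present:", 'name="robots"' in resp.text.lower())
-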
Crawl rate
How often does Moz crawl my website? (I have a number of issues I believe I have fixed, and wondered if there is a way to manually request a re-crawl.) Thanks. Austin.
Getting Started | FuelDump0
-
After fixing Crawl Errors, how long does it take for Moz or Google to re-crawl a website?
Last night I found out through Moz that my robots.txt file was blocking any crawling of my website. I fixed the issue. Now do I just sit and wait?
Getting Started | cmc-interactive0
-
High Number of Crawl Errors for Blog
Hello All, We have been having an issue with very high crawl errors on websites that contain blogs. Here is a screenshot of one of the sites we are dealing with: http://cl.ly/image/0i2Q2O100p2v. Looking through the links that are turning up in the crawl errors, the majority of them (roughly 90%) are auto-generated by the blog's system. This includes category/tag links, archived links, etc. A few examples: http://www.mysite.com/2004/10/ http://www.mysite.com/2004/10/17/ http://www.mysite.com/tagname. As far as I know (please correct me if I'm wrong!), search engines will not penalize you for things like this appearing on auto-generated pages. Also, even if search engines did penalize you, I do not believe we can add a unique meta tag to auto-generated pages. Regardless, our client is very concerned about seeing this high number of errors in the reports, even though we have explained the situation to him. Would anyone have any suggestions on how to either 1) tell Moz to ignore these types of errors or 2) adjust the website so that these errors no longer appear in the reports? Thanks so much! Rebecca
Getting Started | Level2Designs0
-
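One pragmatic option for the reporting side of the question above is to filter the auto-generated URLs out of the exported CSV before it goes to the client. A minimal sketch, assuming Python 3 with pandas installed, an export named crawl_issues.csv, and a URL column; the file name, column name, and URL patterns are placeholders to adjust, not the actual export schema:

# Minimal sketch: drop date-archive and tag/category URLs from a crawl export.
import re
import pandas as pd

df = pd.read_csv("crawl_issues.csv")

# Matches /2004/10/, /2004/10/17/, /tag/..., /category/...; adjust to your site.
auto_generated = re.compile(r"/(\d{4}/\d{2}(/\d{2})?/?$|tag/|category/)")

filtered = df[~df["URL"].str.contains(auto_generated, na=False)]
filtered.to_csv("crawl_issues_filtered.csv", index=False)
print(f"Kept {len(filtered)} of {len(df)} rows")
-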
Crawl Diagnostics
Hello Experts, today I analyzed one of my websites with Moz and got an issue overview showing 212 issues in total, 37 of them high priority, all pointing to this same URL: http://blogname.blogspot.com/search?updated-max=2013-10-30T17:59:00%2B05:30&max-results=4&reverse-paginate=true. Can anyone help me figure out where this URL comes from and how to remove all of the high-priority errors? Also, if the website gets an A grade on-page, why is it not performing well in the search engines?
Getting Started | JulieWhite0
-
Moz Starter Crawl not happening
Hi, I added a new site 48+ hours ago and the starter crawl has not even begun collecting data. Any help would be appreciated. Cheers, Isaac
Getting Started | sodafizz0