Why do I have so many site crawl issues with my site?
-
Hi there,
Our website is www.mormonhub.com. We own many other websites as well. For some reason, this website has tens of thousands of Site Crawl issues. We have tried looking into what's causing the problem, but can't figure it out. We do use WordPress for our website. Any help would be greatly appreciated.
Thanks!
-
Will do. Thank you very much, Sam!
-
Hi there,
Sam from Moz's Help Team here!
Crawl errors can occur for a wide variety of reasons, so it's hard to isolate the cause without looking directly at the Site Crawl data within your campaign. Could you please pop an email over to help@moz.com and include the name of the affected campaign, so that we can look into exactly which errors are appearing and what may be causing them?
Looking forward to hearing from you!
Related Questions
-
Many Duplicate Content Flags
Not sure about you all, but I'm loving the new Moz site crawler. However, I noticed that it is identifying a huge number of pages as duplicate content. There are about 30,000 pages on this website; to keep the site scalable, we've had to build many templates. Additionally, a URL rule was lost, which caused a significant number of duplicate pages to be created. I am working through the Moz crawl tool to identify duplicate pages, but I'm noticing that many pages under "Affected Pages" are actually unique-content pages whose opening content happens to be duplicated. I read that Moz flags any pages with 90% or more overlapping content or code. My theory is that some templates are too similar, to the point that Moz reads them as duplicative. Has this happened to anyone else? In addition, if Moz is flagging these similar pages as duplicate content, can we surmise that Google's bots are having the same issue? We have seen ranking issues on the actual duplicate pages, but hadn't experienced issues across the unique pages; they are hyperlocal pages, so we are able to see rankings quite easily.
Moz Bar | HZseo
-
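The "90% or more overlapping content" idea mentioned above can be approximated with a simple shingle-and-Jaccard similarity check. This is a rough sketch of the general technique only; the word-level 3-gram shingles and the 0.9 threshold are illustrative assumptions, not Moz's actual algorithm.

```python
# Sketch of shingle-based near-duplicate detection, similar in spirit to
# (but not the same as) how crawlers flag pages whose content largely
# overlaps. Shingle size and threshold here are assumptions.

def shingles(text, size=3):
    """Return the set of overlapping word n-grams in `text`."""
    words = text.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def similarity(a, b, size=3):
    """Jaccard similarity of the two pages' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a, size), shingles(b, size)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def looks_duplicate(a, b, threshold=0.9):
    """Flag the pair when nearly all shingles overlap."""
    return similarity(a, b) >= threshold

# Two hyperlocal template pages that differ only in the place name.
page_a = "red widgets for sale in springfield call us today for a quote"
page_b = "red widgets for sale in shelbyville call us today for a quote"
```

Pages built from near-identical templates score high on this measure even when a human would call them distinct, which matches the behaviour described above.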
Odd crawl test issues
Hi all, first post, be gentle... Just signed up for Moz with the hope that it, and the learning, will help me improve my web traffic. I've managed to hit a bit of woe already with one of the sites we have added to the tool: I cannot get the crawl test to do any actual crawling. I've tried to add the domain three times now, but the initial crawl of a few pages (the automatic one when you add a domain to Pro) will not work for me. Instead of getting a list of problems with the site, I have a list of 18 pages where it says 'Error Code 902: Network Errors Prevented Crawler from Contacting Server'.
Being a little puzzled by this, I checked the site myself... no problems. I asked several people in different locations (and countries) to have a go, and no problems for them either. I ran the same site through the Raven Tools site auditor and got some results; it crawled a few thousand pages. I ran the site through Screaming Frog as the Googlebot user agent, and again no issues. I just tried Fetch as Googlebot in WMT and all was fine there. I'm very puzzled, then, as to why Moz is having issues with the site when everyone else is happy with it. I know the homepage takes 7 seconds to load (caching is off at the moment while we tweak the design), but all the other pages (according to Screaming Frog) take an average of 0.72 seconds to load. The site is a Magento one, so we have a lengthy robots.txt, but that is not causing problems for any of the other services. The robots.txt is below:
## Google Image Crawler Setup
User-agent: Googlebot-Image
Disallow:
## Crawlers Setup
User-agent: *
## Directories
Disallow: /ajax/
Disallow: /404/
Disallow: /app/
Disallow: /cgi-bin/
Disallow: /downloader/
Disallow: /errors/
Disallow: /includes/
#Disallow: /js/
#Disallow: /lib/
Disallow: /magento/
#Disallow: /media/
Disallow: /pkginfo/
Disallow: /report/
Disallow: /scripts/
Disallow: /shell/
Disallow: /skin/
Disallow: /stats/
Disallow: /var/
Disallow: /catalog/product
Disallow: /index.php/
Disallow: /catalog/product_compare/
Disallow: /catalog/category/view/
Disallow: /catalog/product/view/
Disallow: /catalogsearch/
#Disallow: /checkout/
Disallow: /control/
Disallow: /contacts/
Disallow: /customer/
Disallow: /customize/
Disallow: /newsletter/
Disallow: /poll/
Disallow: /review/
Disallow: /sendfriend/
Disallow: /tag/
Disallow: /wishlist/
Disallow: /catalog/product/gallery/
## Files
Disallow: /cron.php
Disallow: /cron.sh
Disallow: /error_log
Disallow: /install.php
Disallow: /LICENSE.html
Disallow: /LICENSE.txt
Disallow: /LICENSE_AFL.txt
Disallow: /STATUS.txt
## Paths (no clean URLs)
#Disallow: /.js$
#Disallow: /.css$
Disallow: /.php$
Disallow: /?SID=
## Pagination
Disallow: /?dir=
Disallow: /&dir=
Disallow: /?mode=
Disallow: /&mode=
Disallow: /?order=
Disallow: /&order=
Disallow: /?p=
Disallow: /&p=
If anyone has any suggestions then I would welcome them, be it with the tool or with my robots.txt. As a side note, I'm aware that we are blocking the individual product pages. There are too many products on the site at the moment (250k plus) with manufacturer default descriptions, so we have blocked them and are working on getting the category pages and guides listed. In time we will rewrite the most popular products and unblock them as we go.
Many thanks, Carl
Moz Bar | Arropa
-
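The 902 errors described above are network-level failures, so robots.txt is unlikely to be the cause; still, whether rules like these actually block a given URL can be checked offline with Python's standard urllib.robotparser. A minimal sketch, assuming a trimmed-down copy of the rules and invented example paths:

```python
import urllib.robotparser

# A trimmed-down version of the Magento robots.txt above, pasted as a
# string so the rules can be evaluated without touching the live site.
RULES = """\
User-agent: *
Disallow: /catalog/product/view/
Disallow: /catalogsearch/
Disallow: /customer/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

# Product-view pages fall under a Disallow rule for every user agent,
# while ordinary category or guide pages are not blocked.
blocked = rp.can_fetch("rogerbot", "http://example.com/catalog/product/view/id/123")
allowed = rp.can_fetch("rogerbot", "http://example.com/guides/widgets/")
```

This kind of check makes it easy to confirm that a crawler failure is not a robots.txt problem before digging into network or server issues.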
Crawl Diagnostics: How many pages (deep) will it crawl for dup content
Does anyone know how deep Crawl Diagnostics will crawl when searching for duplicate content? Will it crawl the entire site, or only a set number of pages? Thanks!
Moz Bar | tdawson09
-
URLS appearing twice in Moz crawl
I have asked this question before and got a Moz response, to which I replied, but there was no reply after that. Hi, we have noticed in our Moz crawl that URLs are appearing twice, like this:
http://www.recyclingbins.co.uk/about/
www.recyclingbins.co.uk/about/
We thought it might be a rel=canonical issue, as we can find the URLs but no URLs linking to those pages. Does anyone have any ideas? Thank you, Jon. I ran the crawl test and they were not there.
Moz Bar | imrubbish
-
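One quick way to rule rel=canonical in or out, as suspected above, is to pull the tag out of each page's HTML and compare. A minimal stdlib sketch; the sample HTML here is invented for illustration, not fetched from the real site:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag seen."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            if self.canonical is None:
                self.canonical = attrs.get("href")

# Invented sample page for illustration.
sample_html = """
<html><head>
<title>About Us</title>
<link rel="canonical" href="http://www.recyclingbins.co.uk/about/" />
</head><body>About us</body></html>
"""

finder = CanonicalFinder()
finder.feed(sample_html)
```

If both URL variants report the same canonical target, the duplicate rows in the crawl are more likely an internal-linking or redirect question than a canonical one.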
I got a 404 in the Crawl Test Tool Report
Hi, yesterday I ran a crawl on http://www.everlastinggarden.nl and I got a 404. Does anybody know why this happens?
# ----------------------------------------
# Crawl Test Tool Report | Moz, http://pro.seomoz.org/tools/crawl-test
# www.everlastinggarden.nl
# Report created: 15 Jul 18:34
# ----------------------------------------
URL,Time Crawled,Title Tag,Meta Description,HTTP Status Code,Referrer,Link Count,Content-Type Header,4XX (Client Error),5XX (Server Error),Title Missing or Empty,Duplicate Page Content,URLs with Duplicate Page Content (up to 5),Duplicate Page Title,URLs with Duplicate Title Tags (up to 5),Long URL,Overly-Dynamic URL,301 (Permanent Redirect),302 (Temporary Redirect),301/302 Target,Meta Refresh,Meta Refresh Target,Title Element Too Short,Title Element Too Long,Too Many On-Page Links,Missing Meta Description Tag,Search Engine blocked by robots.txt,Meta-robots Nofollow,Blocked by X-robots,X-Robots-Tag Header,Blocked by meta-robots,Meta Robots Tag,Rel Canonical,Rel-Canonical Target,Blocking All User Agents,Blocking Google,Blocking Yahoo,Blocking Bing,Internal Links,Linking Root Domains,External Links,Page Authority
http://www.everlastinggarden.nl,2014,404 : Received 404 (Not Found) error response for page.,Error attempting to request page
Best regards, Jos
Moz Bar | IMforYou
-
Moz Crawl Showing Duplicate Content But It's Not?!
Unfortunately I can't give out the URL, but here's the deal: I have two URLs which have completely different content on them but are being crawled as duplicate content. Any idea how that could happen? I'm not seeing any errors in WMT. Has anyone seen this before? Is duplicate content reported based on a percentage of the page content matching?
Moz Bar | Swarm-SEO
-
408 errors in crawl diagnostics
Best community,
The Crawl Diagnostics report of Moz gave our website a lot of 408 errors like the one below:
Title: 408 : Error
Meta Description: 408 Request Time-out
Meta Robots: Not present/empty
Meta Refresh: Not present/empty
The report has diagnosed a lot of these (around 320), even though we cannot reproduce the error ourselves. Two questions relating to this:
- Can you (the people of Moz) reproduce the errors manually?
- Is it possible that it is a bug in the Moz spider itself (too many spiders crawling at the same time)?
Moz Bar | arjen.koedam
-
Moz "Crawl Diagnostics" doesn't respect robots.txt
Hello, I've just had a new website crawled by the Moz bot. It's come back with thousands of errors, saying things like:
- Duplicate content
- Overly dynamic URLs
- Duplicate page titles
The duplicate content and URLs it has found are all blocked in robots.txt, so why am I seeing these errors? Here's an example of some of the robots.txt that blocks things like dynamic URLs and directories (which the Moz bot ignored):
Disallow: /?mode=
Disallow: /?limit=
Disallow: /?dir=
Disallow: /?p=*&
Disallow: /?SID=
Disallow: /reviews/
Disallow: /home/
Many thanks for any info on this issue.
Moz Bar | Vitalized