GWT false reporting, or does Googlebot have a weird crawling ability?
-
Hi, I hope someone can help me.
I have launched a new website and am trying hard to make everything perfect. I have been using Google Webmaster Tools (GWT) to ensure everything is as it should be, but the crawl errors being reported do not match my site. I mark them as fixed, then check again the next day, and the same or similar errors are reported again.
Example:
http://www.mydomain.com/category/article/ (this would be a correct structure for the site).
GWT reports:
http://www.mydomain.com/category/article/category/article/ returns a 404. (It does not exist, never has, and never will.) I have visited the pages listed as linking to this URL and they do not contain links of this form. I have checked the page source code, and all links from the given pages have the correct structure; I cannot see how this crawl path could be produced.
This happens across most of the site. I have a few hundred pages, all ending in a trailing slash, and most pages of the site are reported in this manner, making it look like I have close to 1,000 404 errors, even though I have not been able to replicate this crawl using many different methods.
The site is using an .htaccess file with redirects and a rewrite condition.
Rewrite Condition:
We need to redirect when there is no trailing slash:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !\.(html|shtml)$
RewriteCond %{REQUEST_URI} !(.*)/$
RewriteRule ^(.*)$ /$1/ [L,R=301]
The above condition forces the trailing slash on folders.
Then we are using redirects in this manner:
Redirect 301 /article.html http://www.domain.com/article/
In addition to the above, we had a development site while I was building the new site, at http://dev.slimandsave.co.uk. It had been spidered without my knowledge, and by the time I noticed it was too late. So when I put the site live, I left the development domain in place (http://dev.domain.com) and redirected it like so:
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteRule ^ - [E=protossl]
RewriteCond %{HTTPS} on
RewriteRule ^ - [E=protossl:s]
RewriteRule ^ http%{ENV:protossl}://www.domain.com%{REQUEST_URI} [L,R=301]
</IfModule>
Is there anything I have done that would cause this type of redirect 'loop'?
Any help greatly appreciated.
-
Yeah - do this!
-
Anyone have any thoughts on this?
-
Sorry, I should also add that the URL structure Google generates looks like this:
http://www.domain.com/category/article/
http://www.domain.com/category/article/same-category/differentarticle/
http://www.domain.com/category/article/same-category/another-different-article/
http://www.domain.com/category/article/another-different-category/differentarticle/
etc. It is as if it reaches a category article and then moves sideways, appending the new path onto the end of the current URL instead of replacing it.
-
It doesn't sound like GWT is falsely reporting. You may want to check your trailing-slash URL rewrite; what you are describing sounds like URLs are being generated incorrectly somewhere, and those incorrect URLs are what show up in GWT.
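One mechanism worth ruling out (only a guess, since nothing in your snippets shows it) is links written as relative paths without a leading slash. A crawler resolves those against the current directory, which produces exactly the sideways nesting you describe. A minimal sketch with Python's standard urljoin, using made-up URLs:
from urllib.parse import urljoin

base = "http://www.domain.com/category/article/"

# A relative href (no leading slash) resolves against the current
# directory, so the paths keep nesting:
print(urljoin(base, "same-category/different-article/"))
# -> http://www.domain.com/category/article/same-category/different-article/

# A root-relative href (leading slash) resolves against the domain root:
print(urljoin(base, "/same-category/different-article/"))
# -> http://www.domain.com/same-category/different-article/
If even one template emits its category links the first way, Googlebot will discover the nested URLs while each page's source still looks plausible at a glance.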
Your 301 looks OK. If the dev site was spidered and indexed, add the dev site to GWT and use the URL removal tool to remove it from the index; then remove the dev site and leave the redirect in place.
Related Questions
-
Site redesign makes Moz Site Crawl go haywire
I work for an agency. Recently, one of our clients decided to do a complete site redesign without giving us notice. Shortly after this happened, Moz Site Crawl reported a massive spike of issues, including but not limited to 4xx errors. However, in the weeks that followed, it seemed these 4xx errors would disappear and then a large number of new ones would appear afterward, which makes me think they're phantom errors (and looking at the referring URLs, I suspect as much because I can't find the offending URLs). Is there any reason why this would happen? Like, something wrong with the sitemap or robots.txt?
Technical SEO | YYSeanBrady
-
Crawl solutions for landing pages that don't contain a robots.txt file?
My site (www.nomader.com) is currently built on Instapage, which does not offer the ability to add a robots.txt file. I plan to migrate to a Shopify site in the coming months, but for now the Instapage site is my primary website. In the interim, would you suggest that I manually request a Google crawl through the search console tool? If so, how often? Any other suggestions for countering this Meta Noindex issue?
Technical SEO | Nomader
-
Google only crawling a small percentage of the sitemap
Hi, the company I work for has developed a new website for a customer; their URL is https://www.wideformatsolutions.co.uk. I've created a sitemap which has 25,555 URLs. I submitted this to Google around 4 weeks ago, and the most URLs it has ever crawled is 2,379. I've checked everything I can think of, including: speed of the website, canonical links, 404 errors, setting a preferred domain, duplicate content, robots.txt, .htaccess, and meta tags. I did read that Matt Cutts revealed in an interview with Eric Enge that the number of pages Google crawls is roughly proportional to your PageRank, but I'm sure it should crawl more than 2,000 pages. The website is based on OpenCart; if anyone has experienced anything like this, I would love to hear from you.
Technical SEO | chrissmithps
-
When choosing a GWT preferred domain, why does it ask for re-verification?
I'm trying to set a preferred domain in GWT. The site is verified via Google Analytics and a meta tag in the code, but GWT still asks: "Part of the process of setting a preferred domain is to verify that you own http://site.org/. Please verify http://site.org/." I've tried looking for an answer to no avail; am I missing anything?
Technical SEO | vmialik
-
Sitemap and crawl impact
If I have two links in the sitemap (for example, page1.html and page2.html) but the website contains more pages (page1.html, page2.html, and page3.html), is this a sign for Google not to crawl the other pages? I.e., will Google index page3.html? Assume that any page can be accessed.
Technical SEO | ditoroin
-
Google WMT continues reporting fixed 404s - why?
I work with a news site that had a heavy restructuring last spring. This involved removing many pages that were duplicates, tags, etc. Since then, we have taken very careful steps to remove all links coming into these deleted pages, but for some reason WMT continues to report them. By last August we had cleared over 10k 404s from our site, but this lasted only about 2 months before they started coming back. The "linked from" field gives no data, and other crawlers like SEOmoz aren't detecting any of these errors. The pages aren't in the sitemap, and I've confirmed that they're not really being linked from anywhere. Why do these pages keep coming back? Should I even bother removing them over and over again? Thanks -Juanita
Technical SEO | VoxxiVoxxi
-
301'ing Googlebot
I have a client that has been 301'ing Googlebot to the canonical page. This is because they have cart_id and session parameters in their URLs; this mainly happens when Googlebot comes in on a link containing these parameters, as they don't serve the parameters to Googlebot at all once it starts to crawl the site.
I am worried about cloaking; I wanted to know if anyone has any info on this.
I know that Google has said that detecting Googlebot's user agent and treating it differently is a problem.
Anybody had any experience with this? I would be glad to hear.
Technical SEO | AlanMosley
-
How to handle "Not found" crawl errors?
I'm using Google Webmaster Tools and can see "Not found" crawl errors. I have set up a custom 404 page for all broken links; you can see it here: http://www.vistastores.com/404. My question: will I need to set up 301 redirects for the broken links found in Google Webmaster Tools?
Technical SEO | CommercePundit