Impact of "restricted by robots" crawler error in WT
-
I have been wondering about this for a while now with regard to several of my sites. Webmaster Tools is giving me a list of pages that I have blocked in the robots.txt file. If I restrict Google from crawling them, how can Google consider their existence an error? In one case, I have even removed the URLs from the index.
Also, do you have any idea of the negative impact associated with these errors?
And how do you suggest I remedy the situation?
Thanks for the help
-
Google is just showing you a notice that says, in effect, "these are excluded - make sure you want them excluded." It is not passing judgement on whether or not they should be excluded. So, as long as they're excluded on purpose, no worries.
-
Hi Patrick,
That section is simply there to advise on any URLs that Google thinks may be wrongly excluded by the robots.txt file.
If the URLs are not wrongly excluded, don't worry about them showing in WMT - the report is there purely as an advisory.
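For example, if the pages being reported are blocked on purpose by a rule like the one below (the directory name is only an illustration, not taken from your site), then "restricted by robots.txt" entries are exactly what you should expect to see:

    User-agent: *
    Disallow: /private/

Google surfaces the list so you can catch a rule that accidentally blocks something important; if the exclusions are intentional, no remedy is needed.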
Good luck!
Related Questions
-
"Search Box Optimization"
A client of ours recently received an email from a random SEO "company" claiming they could increase website traffic using a technique known as "search box optimization". Essentially, they claim they can insert a company name into the autocomplete suggestions on Google. Clearly, this isn't a legitimate service - however, is it a well-known technique? Despite our recommendation not to move forward with it, the client is still very intrigued. Here is a video of a similar service: https://www.youtube.com/watch?v=zW2Fz6dy1_A
Technical SEO | McFaddenGavender
-
Robots.txt on refinements
In dealing with Panda, do you think it is a good idea to put all refinements for category pages in the robots.txt file? We already have a lot set to noindex, follow, but I am wondering if it would be better to address this from a crawl perspective, as the pages are probably thin, duplicate content to Google.
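As a rough sketch of the trade-off (assuming the refinements are driven by a query parameter - the parameter name below is hypothetical): the noindex, follow approach keeps the pages crawlable, e.g.

    <meta name="robots" content="noindex, follow">

whereas a robots.txt pattern such as

    User-agent: *
    Disallow: /*?refine=

stops Google from crawling those URLs at all - which also means it can no longer see the noindex tag on them.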
Technical SEO | Gordian
-
What is the difference between "Referring Pages" and "Total Backlinks" [on Ahrefs]?
I always thought they were essentially the same thing myself, but it appears there may be a difference? Anyone care to help me out? Cheers!
Technical SEO | Webrevolve
-
Best action to take for "error" URLs?
My site has many error URLs that Google Webmaster Tools has identified as pages without titles. These are URLs such as: www.site.com/page???1234
For these URLs, should I:
1. Add a canonical pointing to the correct page (the one actually being displayed at the error URL),
2. Add a 301 redirect to the correct URL, or
3. Block the pages in robots.txt?
Thanks!
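If option 1 fits (i.e. the parameterised URLs render the same content as the clean page), the tag would go in the <head> of each error URL and point at the clean version - a minimal sketch using the placeholder URL from the question:

    <link rel="canonical" href="http://www.site.com/page">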
Technical SEO | theLotter
-
Can anyone help me understand why Google is "Not Selecting" a large number of my webpages when crawling my site?
When looking through my Google Webmaster Tools, I clicked into the advanced settings under Index Status and was surprised to see that Google has marked around 90% of the pages on my site as "Not Selected" when crawling. Please take a look and offer any suggestions. www.luxuryhomehunt.com
Technical SEO | Jdubin
-
How many steps before a 301 redirect becomes a "bad thing"?
OK, so I am not going to worry now about being a purist with the .htaccess file; I can't seem to redirect the old pages without redirect errors (the project is an old WordPress site being replaced by a redesigned WP site). The new site has a new domain name, and none of the pages (except the blog posts) are the same. I installed the Simple 301 Redirects plugin on the old site and it's working (the Redirection plugin looks very promising too, but I got a warning it may not be compatible with the old, unsupported theme and older version of WP).
Now my question, using one of the redirect examples (and I need to know this for my client, who is an internet marketing consultant, so this is going to be very important to them): using Redirect Checker, I see that http://creativemindsearchmarketing.com/blog 301 redirects to http://www.creativemindsearchmarketing.com/blog, which then 301 redirects to the final permanent location of http://www.cmsearchmarketing.com/blog.
How is Google going to perceive this two-step process? And is there any way to get both the non-www old address and the www old address to redirect to the final permanent location without going through these two steps? Any help is much appreciated. _Cindy
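One way to collapse both old hostnames into a single hop is a rule in the old site's .htaccess along these lines (a minimal sketch, assuming Apache with mod_rewrite and that the mapping of old paths to new pages is handled separately):

    # Redirect both creativemindsearchmarketing.com and www.creativemindsearchmarketing.com
    # to the new domain in one 301, preserving the requested path
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^(www\.)?creativemindsearchmarketing\.com$ [NC]
    RewriteRule ^(.*)$ http://www.cmsearchmarketing.com/$1 [R=301,L]

With something like this in place, both versions of the old address answer with a single 301 straight to the final location instead of chaining through two hops.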
Technical SEO | CeCeBar
-
Crawl Errors
Okay, I was just in my Google Webmaster Tools looking at some of the stats. Google says I have 1,354 "not found" pages. Many of these URLs are bizarre and I don't know what they are; others I do recognize. What should I do about this, especially all the URLs I don't even recognize?
Technical SEO | azguy