How to remove URLs blocked by robots.txt from crawl diagnostics
-
I suddenly have a huge jump in the number of errors in crawl diagnostics, and it all seems to be down to a load of URLs that should be blocked by robots.txt. These have never appeared before. How do I remove them, and how do I stop them appearing again?
-
Hi Simon,
A noindex,follow meta tag sounds like the way to go.
Best to read this first... http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
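For reference, the tag is a single line in the <head> of each page you want kept out of the index while still letting crawlers follow its links (a minimal sketch):

<meta name="robots" content="noindex,follow">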
Hope this helps.
Justin
-
Related Questions
-
Robots.txt
I have a page used as a reference that lists 150 links to blog articles; I use it in a training area of my website. I now get warnings from Moz that it has too many links, so I decided to disallow this page in robots.txt. Below is what appears in the file:

Robots.txt file for http://www.boxtheorygold.com
User-agent: *
Disallow: /blog-links/

My understanding is that this simply has Google bypass the page and not crawl it. However, in Webmaster Tools I used the Fetch tool to check a couple of my blog articles. One returned the expected result; the other returned "access denied" due to robots.txt. Both blog articles are linked from the /blog-links/ reference page. Question: why does Google refuse to crawl the one article (using the Fetch tool) when it is not referenced at all in the robots.txt file? Why is access denied? Should I have used a noindex on this page instead of robots.txt? I am fearful that robots.txt may be blocking many of my blog articles. Please advise. Thanks,
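As a quick sanity check, rules like these can be tested offline with Python's built-in robotparser; this is a minimal sketch, and the article path in it is hypothetical:

from urllib.robotparser import RobotFileParser

# The rules exactly as they appear in the robots.txt above
rules = [
    "User-agent: *",
    "Disallow: /blog-links/",
]

rp = RobotFileParser()
rp.parse(rules)

# The reference page itself is blocked by the /blog-links/ prefix...
print(rp.can_fetch("*", "http://www.boxtheorygold.com/blog-links/"))        # False
# ...but an article under /blog/ does not match that prefix
print(rp.can_fetch("*", "http://www.boxtheorygold.com/blog/some-article"))  # True

If a real article URL comes back blocked in a test like this, the live file contains more rules than the two lines quoted above.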
Moz Pro | Ron
-
Change Crawl and rank report day?
Does anyone know if there is a way to get all of my account's campaigns crawled, with rank reports generated, on the same day?
Moz Pro | CDUBP
-
Overly Dynamic URL in vBulletin
I've got quite a few overly dynamic URLs reported, like this one: http://www.phplinkdirectory.com/forum/forumdisplay.php?s=4a07050d7e48e8bae86ef7880d9f91e8&f=13&order=desc&page=3 Does anyone know the quick fix for this problem?
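If the goal is simply to keep crawlers away from the session-ID variants, one option is a prefix rule in robots.txt; this is a sketch, and it assumes the s= parameter always comes first in the query string, as in the URL above:

User-agent: *
Disallow: /forum/forumdisplay.php?s=

The more durable fix is stopping vBulletin from appending session IDs for guests and crawlers (or adding rel=canonical to those pages), since blocking the URLs only hides the symptom.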
Moz Pro | dvduval
-
Does crawling help in optimisation?
The website is as it was last week; there has been no optimisation from my side for 10 days now. I was ranked 5 for my keyword (not much competition there). However, 2 days ago I registered at SEOmoz and created a campaign for my website with the keywords that were ranked 5 in search. Today I see that my rank has gone up to 2. I have not done any optimisation, nor have I created any backlinks, so how and why did I climb up? I just created a campaign and let SEOmoz crawl my website for 2 days. Am I to assume the SEOmoz crawl optimises a website? If that is the case, can I create a campaign, let it crawl my pages, climb up in searches, delete the campaign after a week, create it again, crawl pages, climb up, and so on? Please advise.
Moz Pro | wahin1
-
Crawl Diagnostics Warnings - Duplicate Content
Hi All, I am getting a lot of warnings about duplicate page content. The pages are normally 'tag' pages: I have news stories and blog posts tagged with multiple tags. Should I ask Google not to index the tag pages? Does it really affect my site? Thanks
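If the tag archives add no search value, one option is to keep crawlers out of them entirely; a robots.txt sketch, assuming the tags live under a /tag/ path (adjust to the blog's actual URL structure):

User-agent: *
Disallow: /tag/

A noindex,follow meta tag in the tag-page template, as in the answer above, is usually the safer route: robots.txt only blocks crawling, so tag pages that are already indexed can linger in results.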
Moz Pro | skehoe
-
Crawl Diagnostics Error Spike
With the last crawl update to one of my sites, there was a huge spike in errors reported. The errors jumped by 16,659, the majority of which fall under the duplicate title and duplicate content categories. When I look at the specific issues, it seems that the crawler is crawling a ton of blank pages on the site's blog through pagination. The odd thing is that the site has not been updated in a while, and prior to this crawl on Jun 4th there were no reports of these blank pages. Could this be an error on the crawler's side of things? Any suggestions on next steps would be greatly appreciated. I'm attaching an image of the error spike (Xovep.jpg).
Moz Pro | VanadiumInteractive
-
Blocking all robots except rogerbot
I'm in the process of working with a site under development and wish to run the SEOmoz crawl test before we launch it publicly. Unfortunately, rogerbot is reluctant to crawl the site. I've set my robots.txt to disallow all bots besides rogerbot. It currently looks like this:

User-agent: *
Disallow: /

User-agent: rogerbot
Disallow:

All pages within the site are meta tagged index,follow. The crawl report says: "Search Engine blocked by robots.txt: Yes". Am I missing something here?
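An empty Disallow: already means "allow everything" for that agent, so the file above should work in principle. A variant worth trying (a sketch, assuming rogerbot honours the Allow directive) separates the two groups with a blank line and makes the permission explicit:

User-agent: rogerbot
Allow: /

User-agent: *
Disallow: /

Some parsers are strict about blank-line separation between groups, so if the original file ran the two groups together, that alone could explain the block.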
Moz Pro | ignician
-
Canonical tags and SEOmoz crawls
Hi there. Recently, we've made some changes to http://www.gear-zone.co.uk/ to implement canonical tags on some dynamically generated pages to stop duplicate content issues. Previously, these were blocked with robots.txt. In Webmaster Tools everything looks great: pages crawled has shot up, and overall traffic and sales have seen a positive increase. However, the SEOmoz crawl report is now showing a huge increase in duplicate content issues. What I'd like to know is whether SEOmoz registers a canonical tag as preventing a piece of duplicate content, or just adds it to the notices report. That is, if I have 10 pages of duplicate content, all with correct canonical tags, will I still see 10 errors in the crawl, plus 10 notices showing a canonical has been found? Or should it be 0 duplicate content errors, but 10 notices of canonicals? I know it's a small point, but it could potentially make a big difference. Thanks!
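For reference, each dynamically generated variant points back to its preferred URL with a single link element in the <head>; the path here is hypothetical (a sketch):

<link rel="canonical" href="http://www.gear-zone.co.uk/example-category/" />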
Moz Pro | neooptic