Removal request for an entire catalog: can it be done without blocking in robots.txt?
-
A bunch of thin-content (catalog) pages were changed to "follow, noindex" a few weeks ago. The site has since been completely re-crawled, and the related cache shows that these pages were not indexed again, so that part looks good, I suppose.
But all of them are still in the main Google index and show up from time to time in the SERPs. Will they eventually disappear, or do we need to submit a removal request? The problem is that we really don't want to add these pages to robots.txt (they are passing link juice down to the product pages). Thanks!
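Since noindex only works if Google can actually crawl the page (which is exactly why robots.txt blocking would backfire here), a quick way to double-check the tag is still in place is to parse the page HTML for the meta robots directive. A minimal sketch using only the standard library; the sample markup is illustrative, not from the actual site:

```python
# A minimal sketch for verifying that a catalog page still carries the
# "noindex" meta robots directive (the sample markup below is illustrative;
# in practice you would fetch the live page HTML first).
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects directives from any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.directives += [d.strip().lower()
                                for d in (attrs.get("content") or "").split(",")]

def is_noindex(html):
    """True if the page asks engines not to index it via the meta robots tag."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives

sample = '<html><head><meta name="robots" content="follow, noindex"></head></html>'
print(is_noindex(sample))  # True: crawlable, links followed, but not indexed
```

Note this only checks the meta tag; the same directive can also arrive via an `X-Robots-Tag` response header, which this sketch does not inspect.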
-
If your intention is to keep the page as "follow", I would not submit a removal request for it, since that would wipe it off the map. Google needs to crawl your page to know you don't want it indexed.
As I said, Google has already re-crawled them and they have not been re-indexed. The last cache dates show that everything was correctly left out of the new indexation.
I guess my concern was how long these pages will stay in the index. This was done to make sure Panda would be able to recalculate the value of the site.
-
Things can take a while, sometimes many months, before a page is properly dropped from the index. Google also has a disclaimer that it may keep the page indexed for some time.
Also, make sure you take some follow-up steps after the removal is made to ensure it never comes back:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1663419
Related Questions
-
HREFLANG: language and geography without general language
Some developers always implement the hreflang for German (which should be "de") as "de-de", i.e. language German and country Germany. There is usually no other German version targeting the other German-speaking countries (mostly ch, at). So obviously the recommendation is to make it "de" and that's the end of it. But I kept wondering and couldn't find anything: IF there is only a more specialised hreflang, will Google use that when there is no default? Example: a search in de-at (or de-ch), where the result has the following hreflang versions: de-de; x-default (== en); en. Will Google serve the result for x-default or de-de?
Technical SEO | netzkern_AG
-
Removal of date archive pages on the blog
I'm currently building a site that has an archive of blog posts by month/year, but from a design perspective I would rather not have these on the new website. Is the correct practice to 301 these to the main blog index page, allow them to 404, or actually to keep them after all? Many thanks in advance, Andrew
Technical SEO | AndieF
-
Robots.txt anomaly
Hi, I'm monitoring a site that's had a new design relaunch and a new robots.txt added. Over the week since launch, Webmaster Tools has shown a steadily increasing number of blocked URLs (now at 14), yet the robots.txt file has only 12 lines with the disallow command. Could this be occurring because one line can refer to more than one page/URL? They all look like single URLs, for example:

Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes

etc. And is it normal for Webmaster Tools' reporting of robots.txt-blocked URLs to steadily increase in number over time, as opposed to being identified straight away? Thanks in advance for any help/advice/clarity on why this may be happening. Cheers, Dan

Technical SEO | Dan-Lawrence
-
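On the question above: yes, a single Disallow line is a path prefix, so it can match any number of URLs underneath it. The standard-library parser demonstrates this (the example URLs are made up):

```python
# A quick sketch showing why one Disallow line can account for many blocked
# URLs: each rule is a path prefix, and every URL under that path matches it.
# Uses only the standard library; the example URLs are hypothetical.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wp-content/plugins",
])

urls = [
    "https://example.com/wp-content/plugins/seo/seo.js",
    "https://example.com/wp-content/plugins/cache/style.css",
    "https://example.com/blog/post-1",
]
for url in urls:
    print(url, rp.can_fetch("*", url))
# Both plugin URLs are blocked by the single rule; the blog post is not.
```

So 12 Disallow lines reported against 14 blocked URLs is entirely consistent, and the count can keep climbing as the crawler discovers more URLs under those prefixes.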
Robots.txt Sitemap with Relative Path
Hi everyone, in robots.txt, can the sitemap be indicated with a relative path? I'm trying to roll out a robots.txt file to ~200 websites; they all have the same relative path for the sitemap, but each is hosted on its own domain. Basically, I'm trying to avoid needing to create 200 different robots.txt files just to change the domain. If I do need to do that, though, is there an easier way than just trudging through it?
Technical SEO | MRCSearch
-
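For what it's worth on the question above: the Sitemap directive is specified as a full (absolute) URL, so a relative path is unlikely to work reliably. One way to avoid hand-editing ~200 files is to generate each robots.txt from a single template; a small sketch with made-up domain names and paths:

```python
# A minimal sketch: generate a per-domain robots.txt from one template so
# only the absolute Sitemap URL differs. Domains and paths are hypothetical.
TEMPLATE = """User-agent: *
Disallow: /admin/

Sitemap: https://{domain}/sitemap.xml
"""

domains = ["site-a.example", "site-b.example"]  # in practice, ~200 entries

# One rendered robots.txt body per domain, ready to write to each host.
robots_files = {d: TEMPLATE.format(domain=d) for d in domains}
print(robots_files["site-a.example"])
```

The same idea works in whatever deployment tooling is already in place; the point is to template the one line that varies rather than maintain 200 divergent files.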
How can I tell Google, that a page has not changed?
Hello, we have a website with many thousands of pages. Some of them change frequently, some never. Our problem is that Googlebot is generating way too much traffic; half of our page views are generated by Googlebot. We would like to tell Googlebot to stop crawling pages that never change. This one, for instance: http://www.prinz.de/party/partybilder/bilder-party-pics,412598,9545978-1,VnPartypics.html As you can see, there is almost no content on the page and the picture will never change, so I am wondering if it makes sense to tell Google that there is no need to come back. The following header fields might be relevant. Currently our webserver answers with these headers:

Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0, public
Pragma: no-cache
Expires: Thu, 19 Nov 1981 08:52:00 GMT

Does Google honor these fields? Should we remove no-cache, must-revalidate, and Pragma: no-cache, and set Expires to, e.g., 30 days in the future? I also read that a webpage that has not changed should answer with 304 instead of 200. Does it make sense to implement that? Unfortunately, that would be quite hard for us. Maybe Google would then also spend more time on pages that actually changed, instead of wasting it on unchanged pages. Do you have any other suggestions for how we can reduce Googlebot's traffic on irrelevant pages? Thanks for your help, Cord

Technical SEO | bimp
-
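On the 304 idea in the question above: a conditional GET compares the client's If-Modified-Since header against the page's last change and returns 304 with no body when nothing is new. A minimal sketch of that decision logic, with illustrative timestamps (whether Googlebot sends the header for any given URL is not guaranteed):

```python
# A minimal sketch of conditional-GET handling: answer 304 instead of 200
# when the client's cached copy is still fresh. Timestamps are illustrative.
from email.utils import parsedate_to_datetime

def respond(last_modified, if_modified_since=None):
    """Return the status code for a GET, honoring If-Modified-Since."""
    if if_modified_since is not None:
        # If the client's copy is at least as new as the page, skip the body.
        if parsedate_to_datetime(if_modified_since) >= parsedate_to_datetime(last_modified):
            return 304  # Not Modified: no body sent, crawl budget saved
    return 200

page_last_modified = "Mon, 01 Oct 2012 10:00:00 GMT"
print(respond(page_last_modified))                                   # 200
print(respond(page_last_modified, "Tue, 02 Oct 2012 10:00:00 GMT"))  # 304
```

A real implementation would also send the `Last-Modified` header on 200 responses so clients have a date to echo back; the conflicting `no-cache, public` combination above is worth cleaning up either way.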
Can hidden backlinks ever be ok?
Hi all, I'm very new to SEO and still learning a lot. Is it considered a black-hat tactic to wrap a link in a DIV tag with display set to none (a hidden div), and what can the repercussions be? From what I've learnt so far, this is a very unethical thing to be doing, and the site hosting these links can end up being removed from the Google/Bing/etc. indexes completely. Is this true? The site hosting these links is a group/parent site for a brand, and each hidden link points to one of the child sites (similar sites, but different companies in different areas). Thanks in advance!
Technical SEO | gemcomp123
-
Can duplicate content be a reason to not have PageRank?
In a bigger project there are several domains that show the same content as the main site (there is a reason to have it like that). These duplicate-content domains are indexed and ranking in Google. But now I see that none of them show visible PageRank, despite each having its own unique backlinks. Do you know why those domains don't show PageRank? Can it really have something to do with the duplicate content?
Technical SEO | kenbrother
-
Robots.txt Syntax
Does the order of robots.txt directives matter in SEO? For example, are there potential problems with this format?

User-agent: *
Sitemap:
Disallow: /form.htm
Allow: /
Disallow: /cgnet_directory
Technical SEO | RodrigoStockebrand
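As a quick experiment on the order question above: the Python standard-library parser applies rules in file order (first match wins), while Google documents longest-match precedence, so the same file can behave differently across consumers. A sketch using the file from the question, minus the empty Sitemap: line:

```python
# A sketch probing rule order with the standard-library parser, which uses
# first-match semantics; Google instead applies the most specific (longest)
# matching rule. The file mirrors the one in the question above.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /form.htm",
    "Allow: /",
    "Disallow: /cgnet_directory",
])

# /form.htm hits its Disallow before the broad Allow, so it stays blocked.
print(rp.can_fetch("*", "https://example.com/form.htm"))          # False
# /cgnet_directory matches the earlier "Allow: /" first under first-match
# semantics, so this parser allows it, even though the later Disallow was
# presumably intended to block it (Google's longest-match rule would block it).
print(rp.can_fetch("*", "https://example.com/cgnet_directory/"))  # True
```

So yes, order can matter for some parsers: putting a broad `Allow: /` before more specific Disallow lines is exactly the kind of format that behaves inconsistently, and the safest fix is to list specific Disallow rules before any broad Allow.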