Meta NOINDEX... how long before Google drops dupe pages?
-
Hi,
I have a lot of near-duplicate content caused by URL parameters, so I have applied a meta robots noindex tag to those pages.
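For reference, the tag on each affected page is the standard robots noindex meta tag; a generic example (the markup below is illustrative, not copied from my site):

```html
<!-- placed in the <head> of each near-duplicate parameter URL -->
<meta name="robots" content="noindex, follow">
```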
How long will it take for this to take effect? It's been over a week now; I have done some removals with the GWT removal tool, but still no major drop in indexed pages.
Any ideas?
Thanks,
Ben
-
In his case, he only wants to get rid of some duplicate content.
I see what you mean, but if he is not in one of the situations listed in http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1269119, then it might be the best and fastest bet.
For me personally it has worked very well so far. Just don't rely on robots.txt alone; that won't help in the long run, since the removal tool has an expiration date of several months.
The downside of the removal tool is that same expiration date: if you change your mind, you will have trouble getting the pages back into the index.
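To make the robots.txt point concrete, the kind of rule I mean is sketched below (the parameter name is a made-up example). It only blocks crawling; it does not remove pages already in the index, and a page Googlebot can't fetch is a page whose noindex it can never see:

```text
# robots.txt sketch (hypothetical parameter): blocks crawling only
User-agent: *
Disallow: /*?sort=
```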
-
You know that I think you are the bee's knees, but I am going to have to disagree on this one. Even Google does not recommend using the removal tool for this application.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1269119
Still pals?
-
There are several things that you can do to get Google to crawl your site (or your new content) quicker and more often. You should be doing all of these, but in case you're not, here is the list.
-
Create a sitemap and submit it through Google Webmaster Tools
-
Install Google Analytics
-
Create social accounts/update your social accounts
-
Use Fetch as Google in Google Webmaster Tools
-
Update your content more often (to get Google to crawl your site more frequently).
-
Adjust the crawl rate in Google Webmaster Tools.
-
Check crawl errors in Google Webmaster Tools. Are there server-side errors (500s)?
I hope that helps!
-
-
Hi,
The best bet is the removal tool in GWT; this is the fastest way.
If your pages are static and Googlebot only visits them once a month, or once every 4-6 months, you will need to wait until Googlebot visits those pages again, notices the noindex, and drops them from the index.
I've seen cases take 6 months.
Anyway, you will probably see those pages drop step by step.
What you can try, although it is not very straightforward, is to build an XML sitemap containing only those files and submit it via GWMT. Sometimes Googlebot will decide something new has happened, visit those pages, see the noindex, and speed up the process. It doesn't always work: I've seen cases where it did nothing, and cases where it did.
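If it helps, that one-off sitemap is easy to script; here is a minimal sketch (the URLs and names are placeholders, not from the original question):

```python
# Hypothetical sketch: build a one-off XML sitemap listing only the
# noindexed parameter URLs, to nudge Googlebot into revisiting them.
from xml.sax.saxutils import escape

NOINDEXED_URLS = [
    "http://www.example.com/widgets?sort=price",
    "http://www.example.com/widgets?sort=name",
]

def build_sitemap(urls):
    # One <url> entry per page; escape() handles '&' in query strings.
    entries = "\n".join(
        f"  <url><loc>{escape(u)}</loc></url>" for u in urls
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )

if __name__ == "__main__":
    print(build_sitemap(NOINDEXED_URLS))
```

Save the output as something like noindexed-pages.xml and submit it in GWMT as its own sitemap, so the crawl stats for just these URLs are easy to watch.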
Again, the best bet will be the GWMT removal tool.
Cheers.
Related Questions
-
What referrer is shown in the HTTP request when the Google crawler visits a page?
Is it legitimate to show different content for HTTP requests with different referrers? Case A: a user views a page of the site with plenty of information about one brand and clicks a link on that page to see a product detail page for that brand; here I don't want to repeat information about the brand itself. Case B: a user lands directly on the product detail page by clicking a SERP result; in this case I would like to show them a few paragraphs about the brand. Is it bad? Does anyone have experience doing this? My main concern is the Google crawler. It should not be considered cloaking, because I am not differentiating on user agent (bot vs. non-bot). But when Google crawls the site, which referrer will it use? I have no idea; does anyone know? When going from one link to another on the website, does the Google crawler leave the referrer empty?
Intermediate & Advanced SEO | max.favilli0
-
Google is ranking the wrong page and I don't know why
I have an e-commerce store and, to make things easy, let's say I am selling shoes. There is a category named 'Shoes' and 3 products: 'Sport shoes', 'Hiking shoes' and 'Dancing shoes'. My problem: for the keyword 'shoes', Google is showing the product result 'Sport shoes'. This makes no sense from a user's perspective. (It's like searching for 'iPhone' and getting a result for 'iPhone 4s' instead of a general overview.) Now, the specifics of my category page, which I want Google to rank: it has more external links of higher quality, it has more internal links, it has much higher page authority, it has useful text to guide the user for the keyword, and it is a category instead of a product. Given all this, how can I signal to Google that this page makes sense to show in the SERPs? Hope you can help with this!
Intermediate & Advanced SEO | soralsokal0
-
Best way to keep Google and Bing from crawling my /en default English pages
Hi guys, I just transferred my old site to a new one and now have subfolder URLs. My default pages, on the front end and in the sitemap, don't show /en after www.mysite.com. The only translation I have is Spanish, where Google will crawl www.mysite.com/es. 1. In the Google and Bing SERPs, every URL that is crawled shows an extra "/en". I find that very weird, considering there is no physical /en in my URLs; when I click the link, it automatically redirects to the default, natural page (no /en). None of the canonical tags show /en either; ONLY the SERPs do. Should robots.txt be updated to "Disallow: /en"? 2. During the site transfer we altered some of the category URLs in our domain, so we've had a lot of 301 redirects. But when searching specific keywords in the SERPs, the #1 ranked URL is our old URL, which redirects to a 404 page, while our newly created URL shows up at #2 and goes to the correct page. Is there any way to tell Google to stop showing our old URLs in the SERPs? And would the "Fetch as Google" option in GWT be a good way to submit all of my URLs so the Google bots crawl only the right pages? Direct message me if you want real examples. Thank you so much!
Intermediate & Advanced SEO | Shawn1240
-
What sources to use to compile as comprehensive a list as possible of pages indexed in Google?
As part of a Panda recovery initiative, we are trying to get as comprehensive a list as possible of URLs currently indexed by Google. Using the site:domain.com operator, Google reports that approximately 21k pages are indexed; scraping the results, however, ends after listing 240 links. Are there any other sources we could use to make the list more comprehensive? To be clear, we are not looking for external crawlers like the SEOmoz crawl tool, but for sources that would confidently allow us to determine the list of URLs currently held in the Google index. Thank you. /Thomas
Intermediate & Advanced SEO | sp800
-
Robots.txt error message in Google Webmaster Tools from a later date than the page was cached; how is that possible?
I have error messages in Google Webmaster Tools stating that Googlebot encountered errors while attempting to access the robots.txt file. The last date this was reported was December 25, 2012 (Merry Christmas), but the last cache date was November 16, 2012 (http://webcache.googleusercontent.com/search?q=cache%3Awww.etundra.com/robots.txt&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a). How could I get this error if the page hasn't been cached since November 16, 2012?
Intermediate & Advanced SEO | eTundra0
-
Massive 40-50 page drop for primary keyphrase, no apparent reason, + map listing weirdness
Hi. I manage the site http://physiowinnipeg.com, which has had some interesting yo-yo effects lately, culminating in a 40-50 page drop as of last Friday. The phrase in question: physiotherapy winnipeg.
Background: The site has ranked on the first page for this keyphrase since about May. There was the occasional blip where the homepage would disappear from the rankings altogether for a day or two, then return to its regular listing. This would coincide with a sudden jump in the organic Places listings (our site would hit the top of the local map listings, but the homepage would vanish). Other pages would still appear for the target phrase, just not the homepage. About a month ago, the organic Places listing settled and the homepage permanently vanished. Other pages still ranked high, and we commanded many of the listings on the first 10 pages of Google, first page included. The homepage would still appear for some other searches. We had been affected by the Google Places to Google+ Local transition, so I was of the opinion that we needed to wait it out a bit and see whether it, like the other issues related to the transition, would work itself out. This time around it worked itself out again, just today, but we are now ranked slightly lower on the first page and our Google local listing has disappeared (again). Our other pages are still (currently) absent from the first few pages for this keyphrase. The main differences I noticed here, aside from the much longer timeframe of the drop, were that other pages disappeared as well, and that the homepage was actually found, between pages 38 and 50, depending on the day. According to SEOmoz and other tools, the site is doing pretty much everything right.
I should note, however, that there is a service we use for educational content called Patientsites that loads in the subdomain http://education.physiowinnipeg.com, and this subdomain has pulled many warnings in SEOmoz (around 10,000, actually, mostly long-URL related), as well as a few errors. I'm not sure if this is part of the problem, but I am considering having the content of the subdomain blocked via robots.txt and Webmaster Tools. Has anyone else experienced anything similar, or have any insight? This weirdness has gone on too long. Thanks! Bobby
Intermediate & Advanced SEO | PinnacleWpg0
-
How do I create a strategy to get rid of dupe content pages but still keep the SEO juice?
We have about 30,000 pages that are variations of "<product-type> prices / <type-of-thing> / <city> <state>". These pages are bringing us lots of free conversions, because somebody searching for this exact phrase for their city/state is pretty low-funnel. The problem we are running into is that the pages are showing up as duplicate content. One solution we discussed is to 301-redirect or canonical all the city-state pages back to just the <type-of-thing> level, and then create really solid unique content for the few hundred pages we would have at that point. My concern is this: I still want to rank for the city-state, because as I look through our best-converting search terms, they nearly always include the city-state, so the search is some variation of "<product-type> <type-of-thing> <city> <state>". One thing we thought about doing is dynamically changing the meta-data and headers to add the city-state info there. Are there other potential solutions to this?
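On the dynamic meta-data idea, templating unique titles and descriptions per city-state is straightforward; a minimal sketch (the product type, cities, and wording are all hypothetical placeholders, not the poster's actual data):

```python
# Hypothetical sketch: generate a unique <title> and meta description
# for each city-state variation of a "<type-of-thing>" page.
def build_meta(product_type, thing, city, state):
    title = f"{thing} {product_type} Prices in {city}, {state}"
    description = (
        f"Compare {product_type.lower()} prices for {thing.lower()} "
        f"in {city}, {state}. Get local quotes today."
    )
    return {"title": title, "description": description}

# Each city-state pair yields distinct meta-data from one template.
for city, state in [("Austin", "TX"), ("Denver", "CO")]:
    meta = build_meta("Insurance", "Auto", city, state)
    print(meta["title"])
```

The catch, as the poster suspects, is that a templated title alone is thin; the page body still needs enough unique content to avoid the same duplicate-content flag.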
Intermediate & Advanced SEO | editabletext0
-
Why does Google not show my e-commerce category page when many products share the same keywords in their titles?
I have found that Google removes the search listing of a category page on my site (e-commerce) when products within the category share the same keywords. I sell golf shirts and have a category called "Mens Golf Shirts". Within the category I have added many products, but when too many of the product names say "mens golf shirt", my category's link on Google gets removed. Before, I had products named "FUNKTION Mens Short Sleeve Golf Shirt Red / Black", but now I have had to change them to "FUNKTION Red / Black". I can understand that Google may see this as keyword stuffing, but how do I get around it to ensure that each product can still rank on Google for "mens golf shirt"?
Intermediate & Advanced SEO | funktiongolf0