Can't get auto-generated content de-indexed
-
Hello and thanks in advance for any help you can offer me!
Customgia.com, a costume jewelry e-commerce site, has two types of product pages - public pages that are internally linked, and private pages that are only reachable by visiting the URL directly. Every item on Customgia is created with an online design tool. Users can register for a free account and save the designs they create, even if they don't purchase them. Before saving a design, the user is required to enter a product name and choose "public" or "private" for it. The page title and product description are auto-generated.
Since launching in October '11, the catalog grew and grew as more users designed jewelry items. Most users chose to show their designs publicly, so the number of products in the store swelled to nearly 3000. I realized many of these designs were similar to each other, and occasionally exact duplicates. So over the past 8 months, I've made 2300 of these designs "private" - no longer accessible unless the designer logs into their account (though these pages can still be linked to directly).
When I realized that Google had indexed nearly all 3000 products, I entered URL removal requests in Webmaster Tools for the designs I had changed to "private", starting about 4 months ago. At the time, I did not have NOINDEX meta tags on these product pages (obviously a mistake), so it appears most of these pages were never removed from the index - or, if they were removed, they were added back in after the 90 days were up.
Of the 716 products currently showing (the ones I want Google to know about), 466 have unique, informative descriptions written by humans. The remaining 250 have auto-generated descriptions that read coherently but are somewhat similar to one another. I don't think these 250 descriptions are the big problem right now, but these product pages can be hidden if necessary.
I think the big problem is the 2000 product pages that are still in the Google index but shouldn't be. The following Google query tells me roughly how many product pages are in the index: site:Customgia.com inurl:shop-for
Ideally, it should return just over 716 results, but instead it returns 2650. Most of these extra 1900 or so product pages have bad product names and highly similar, auto-generated descriptions and page titles. I wish Google had never crawled them.
Last week, NOINDEX tags were added to all 1900 "private" designs, so currently the only product pages that should be indexed are the 716 showing on the site. Unfortunately, over the past ten days the number of product pages in the Google index hasn't changed.
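(For reference, the tag added to each private design page is the standard robots meta directive in the page's <head>. Here's a minimal sketch of the idea - the helper and flag names are placeholders, not our actual code:)

```python
# Minimal sketch - the helper name and is_private flag are placeholders,
# not Customgia's actual code. The tag itself is the standard robots
# noindex directive emitted in each private page's <head>.
def robots_meta_tag(is_private: bool) -> str:
    """Return the robots meta tag to emit for a product page."""
    if is_private:
        return '<meta name="robots" content="noindex">'
    return ""  # public pages need no tag; index/follow is the default
```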
One solution I initially thought might work is to re-enter the removal requests, because now, with the NOINDEX tags in place, the pages should be removed permanently. But I can't determine which product pages need removal, because Google doesn't let me see that deep into the search results. The removal request history shows "Expired" or "Removed", but these labels don't seem to correspond in any way to whether a page is currently indexed. Additionally, Google is unlikely to re-crawl these "private" pages because they are orphaned - no longer linked from any public pages of the site (and with no external links either).
Currently, Customgia.com averages 25 organic visits per month (branded and non-branded) and close to zero sales. Does anyone think de-indexing the entire site would be appropriate here? Starting with a clean slate and letting Google re-crawl and index only the public pages - would that be easier than battling with Webmaster Tools for months on end?
Back in August, I posted a similar problem that was solved using NOINDEX tags (de-indexing a different set of pages on Customgia): http://moz.com/community/q/does-this-site-have-a-duplicate-content-issue#reply_176813
Thanks for reading through all this!
-
I don't think there's any harm in submitting a new/full list, even if it duplicates past lists. The URLs haven't been removed, and you did fix the tags. This isn't like disavowing links - it's more of a technical issue. Worst case, it doesn't work, from what I've seen.
-
Thanks for helping me with this.
You are correct that all the product pages are in the same folder regardless of whether they are public or private, so unfortunately removing an entire folder isn't an option at this point.
When I go to Webmaster Tools and view past removal requests, each one shows as either "Expired" or "Removed". WMT only allows me to resubmit a removal request if the label is "Expired". Going back past 90 days, many are still labeled "Removed", but the further back I go, the more say "Expired". There are too many requests to determine whether each individual page is indexed, so I think our best bet is to re-submit every expired private-product-page removal request and then monitor removal. Does this make sense?
Back in August, a Moz crawl showed tons of duplicates for the designer pages (the pages where the user actually designs the jewelry). Using NOINDEX tags and removal requests (credit to Dr. Pete and Everett Sizemore), the number of designer pages in the index dropped from 5K to exactly 8 - so it worked.
Our XML sitemap is dynamic and doesn't list private product pages.
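(To illustrate how simple that filter is, here's a rough sketch of a dynamic sitemap that only emits public designs. The field names and slug handling are assumptions, not our production code - only the /shop-for/ URL pattern matches the real site:)

```python
# Rough sketch, not production code - field names ("slug", "is_public")
# are assumptions. The point is that private designs never make it
# into the sitemap output.
def build_sitemap(products) -> str:
    """Emit sitemap XML listing only public designs."""
    entries = "\n".join(
        f"  <url><loc>https://customgia.com/shop-for/{p['slug']}</loc></url>"
        for p in products
        if p["is_public"]
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + entries
        + "\n</urlset>"
    )
```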
-
It honestly sounds like you're on the right track - you do need to explicitly mark those (and META NOINDEX should be fine). Could you just request removal for all private pages? Worst case, Google removes some that aren't in the index, or attempts to. Since the public/private setting can be changed, you can't really put the private pages all in one folder (real or virtual) - that would make life easier, long-term, but probably isn't useful/appropriate for your case.
I'd also recommend having a clean XML sitemap with just the public entries (updated dynamically). That won't deindex the other pages, but it's one more cue Google can use. You want all of the signals you're sending to be consistent.
I agree with Doug, though - this is really tricky, because ideally you would want people to share these pages, and if you NOINDEX them you're losing out on that. My gut feeling is that, until your site is stronger, you probably can't support 3K near-duplicates (and counting). If you want to get sophisticated, though, you could NOINDEX dynamically - only noindexing products that have very little content or are obvious dupes. As people fill out or share a product, you could remove the NOINDEX.
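(Something like this very rough sketch is what I mean - the threshold and field names are placeholders, so adapt them to your own data:)

```python
# Very rough sketch of the "dynamic NOINDEX" idea - the threshold and
# field names are placeholders, not a tested recipe. Thin or duplicate
# designs get noindexed; once a product gains real content, the tag is lifted.
MIN_DESCRIPTION_CHARS = 200  # assumed cutoff, tune to your data

def should_noindex(product) -> bool:
    """Noindex thin or near-duplicate designs."""
    description = product.get("human_description", "")
    return len(description) < MIN_DESCRIPTION_CHARS or product.get("is_duplicate", False)
```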
-
Hi Doug,
Thanks for the quick response. I will do my best to answer each of your points.
In Webmaster Tools, under Index Status, it shows 1781 pages indexed, with a high of 6515 on June 2, 2013. Not sure that helps to clarify anything but it's another piece of Google data to consider.
We continually monitor WMT and Analytics. I'm addressing this issue specifically because search impressions on our product pages average fewer than 5 impressions/day despite continuous improvements over the last 12 months - keyword research, better page titles/product names, and longer, more informative descriptions. These 500 or so product pages are vastly better today than they were 12 months ago - but impressions have not improved at all.
Every design, public or private, has social/sharing buttons. As I mentioned above, these designs can all be linked to directly from any external website.
I think the category pages are sufficient. There is some fine-tuning that could be done in terms of how products are organized within categories but overall it's pretty solid and probably not an issue.
Our initial strategy was to attract long-tail traffic with user-generated content, but the problem is that most users gave their products personal, irrelevant (and possibly spammy) product names. There were other problems with the user-generated designs as well - like one user who designed 15 earrings that looked exactly the same except for one bead, which she changed to a different color for each design. Anyway, we left all these designs public for over 12 months - and as more and more designs were added to the site, organic search traffic actually fell.
-
I agree with Doug.
Create better category pages - make sure each product page is under a category.
The user-generated products are great and should be indexed.
-
Hey Richard,
First, note that the number of results displayed by that query is only an estimate, which gets refined the deeper you go into the search results. On page one, the counts tend to be wildly inaccurate.
If you go all the way to the end (page 13) and then repeat the process with omitted results included, you still get to page 13 and a total of 123 pages. (Somewhat better than the 2k+ results.)
This is less than the 716 pages you mention, so maybe you've got the opposite problem? What do you see if you check your Google Analytics and Webmaster Tools? Which pages are getting organic traffic from Google? Which pages are showing in the search results (Webmaster Tools, Impressions)?
What are the pages you want to appear in search and what are the keywords you're targeting?
My first thought is - if you're allowing people to design their own jewellery, are you also allowing them to easily share their creations on social media, etc.? Have you got embed codes so they can put their designs on their blogs? If not, I think you're missing a trick.
All of these individual items, designed by users, should be linking back to the relevant category pages (or other landing pages), increasing the authority of those pages. Make sure your category/landing pages have good, unique content that communicates both the value proposition and the products you've got available.
If you don't have these category pages, then it might be worth looking at your site architecture/hierarchy and think about creating them.
Your individual product pages might get long-tail traffic (and having lots of different variations, described in real people's own words, might actually work to your advantage here), but your category pages should be the ones targeting head terms.
I notice you've no-indexed and no-followed the product pages in question. This means that if these pages are shared, any inbound authority/link equity/link juice is simply being discarded. Are you sure you want to do that?
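(For clarity, the difference is one word in the directive. A quick sketch with the two variants as constants - "follow" lets equity pass through the page's outbound links even though the page itself stays out of the index, while "nofollow" throws it away:)

```python
# "noindex, follow" keeps the page out of the index but lets link equity
# flow through its outbound links; "noindex, nofollow" discards it.
NOINDEX_KEEP_EQUITY = '<meta name="robots" content="noindex, follow">'
NOINDEX_DROP_EQUITY = '<meta name="robots" content="noindex, nofollow">'
```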
I don't think you need to worry too much about Google's index at this point, and I certainly wouldn't consider deindexing the whole site.