Last Panda: removed a lot of duplicated content, but still no luck!
-
Hello here,
my website virtualsheetmusic.com has been hit several times by Panda since its inception back in February 2011, so about 5 weeks ago we decided to get rid of about 60,000 thin, almost-duplicate pages via noindex meta tags and canonical tags (we have not physically removed those pages from the site to return a 404, because our users may still search for those items on our own website). We expected this last Panda update (#25) to give us some traffic back... instead we lost an additional 10-12% of our Google traffic, and the hit now looks even more badly targeted.
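For clarity, the tags we used are the standard ones, roughly like this on each thin page (a generic sketch with a placeholder URL, not our exact markup):

```html
<!-- On each thin/near-duplicate page we want dropped from the index -->
<meta name="robots" content="noindex, follow">

<!-- Or, where a near-duplicate has a preferred version, a canonical pointing to it -->
<link rel="canonical" href="http://www.example.com/preferred-version.html">
```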
Let me say how disappointing this is after so much work!
I must admit that we still have many pages that may look like thin or duplicate content, and we are considering removing those too (even though those are actually bringing us sales from Google!), but I expected to recover a little from this last Panda refresh and improve our positions in the index. Instead, nothing: we have been hit again, and badly.
I am pretty desperate, and I am afraid I have lost my bearings here. I am particularly afraid that the removal of over 60,000 pages from the index via noindex meta tags has, for some unknown reason, been more damaging than beneficial.
What do you think? Is it just a matter of time? Am I on the right path? Should we just wait a little longer while we keep removing duplicate content (via noindex meta tags) and improving everything else as usual?
Thank you in advance for any thoughts.
-
Never mind, I have just found your site... thank you again!
-
Thank you very much, Marie, for your time and explanation; I appreciate it. Do you offer SEO consultation? Please let me know.
Thank you again!
-
The short answer is that this is not what the disavow tool was meant for, so no, I wouldn't use it. Affiliate links SHOULD be nofollowed, though. However, affiliate links won't cause you to be affected by Panda. Link-related issues are totally unrelated to Panda.
Unfortunately at this point though I'm going to bow out of taking this discussion any further due to time constraints. Q&A is a good place to get someone to take a quick look at your site, but if you've got lots of questions it may be worthwhile to pay a consultant to help out with your site's traffic drop issues.
-
Marie, I was thinking: do you think Google's new Disavow Links Tool could help me with my affiliates' inbound links? I mean, in case I am being damaged by that kind of link profile...
-
Yes, I think it will be easier to change our own content and ask them to add a canonical tag pointing to our page. Thanks!
-
Actually, you can see the subsequent pages still in the index; just enter this on Google:
site:virtualsheetmusic.com inurl:downloads/Indici/Guitar.html
and you will see what I mean. I see, though, that most of those pages were cached before I added the canonical tag, so I guess it is just a matter of time.
Am I correct? I mean, if a page has a canonical tag that points to a different page, it should NOT be in the index, right? Thank you for looking!
-
If there's duplicate content then you've either got to change yours, get theirs changed, or get them to use a rel-canonical tag pointing to your site or a noindex tag.
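For example (a sketch with placeholder URLs), the other site's copy of a page would carry one of these in its head:

```html
<!-- Cross-domain canonical on their duplicate page, pointing at your original -->
<link rel="canonical" href="http://www.yoursite.com/original-page.html">

<!-- Or a noindex tag, to keep their copy out of the index entirely -->
<meta name="robots" content="noindex">
```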
-
I just had a quick look, but I don't see any other versions of the page you listed in the index. If you just added the rel=prev and rel=next tags, they won't take effect until the pages are crawled, which could take up to a few weeks.
-
Sorry Marie, I forgot to answer your question about music2print.com: that's one of our affiliates! That's another issue we could be suffering from... how do you suggest we tackle possible duplicate content on affiliate sites? Thanks!
-
Yes EGOL, I understand that my only way forward is to really thicken and differentiate the pages with real, unique content. I will try that and keep you posted! Thank you for your help again.
-
Marie, look at the following page; it is the main (first) page of our guitar index:
http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html
Now, if you want to browse the guitar repertoire to the second page of the index, you click the page "2" or "next" link, right? And then the second page appears:
http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html?cp=2&lpg=20
And so on... well, those subsequent pages are the ones I was talking about: they have the rel=prev and rel=next tags together with a canonical tag that points to the main (first) index page, but many of those subsequent pages are still in the index. Shouldn't they disappear, leaving only the first page in the index?
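Roughly, the head of a subsequent page now contains something like this (paraphrased; the exact markup on the live pages may differ slightly):

```html
<!-- <head> of .../Guitar.html?cp=2&lpg=20 (second index page) -->
<link rel="prev" href="http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html">
<link rel="next" href="http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html?cp=3&lpg=20">
<link rel="canonical" href="http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html">
```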
As for what you wrote about how I can expect a recovery from Panda, it makes sense, and I really hope this new integration of Panda into the main algorithm will gradually speed things up. Thank you for your opinion on that.
I think my approach will be to first keep noindexing the pages that really don't bring any business, and in the meantime improve all the others one by one. Noindexing all pages and then releasing just the optimized ones one by one scares me too much!
-
Most of the content on my site is articles that are 500 to 5000 words and one to ten photos - all on a single page.
It was very easy for me to "noindex" the republished content and "noindex" the blog posts that were very short.
For a site that consists of pages where most of the content is thin and duplicated, a massive rewriting job is required, in my opinion. That is what I would do if I wanted to attempt to recover such a site.
I had to chop off my foot to save my ass.
-
I'm not sure that I'm following what you are saying. Which pages are in the index that you feel should not be because of their canonical tag?
You mentioned above that it sounds like it is "easy" to recover from Panda. I don't think that is true for most sites. Most likely in EGOL's case he had a site that had some fantastic content to go along with the duplicate and thin content. If there is good stuff there, then getting rid of the low quality stuff can sometimes be a quick fix. But, if you've got a site that consists almost completely of thin or duplicated content then it may not be so easy.
In my experience, when a site recovers from Panda, it does not happen gradually as the site gets cleaned up and improved. Rather, there is a sudden uptick when Panda refreshes provided that you have done enough work for Google to say that enough of your site is high quality. However, this may change now that Panda will be rolling out as part of the regular algorithm and not just every 4-6 weeks or so as before.
-
The academic year is coming to a close in the northern hemisphere. Hire a music scholar who is also a great writer and attack this. Or hire a writer who appreciates music. Better yet, hire one of each.
It is time to exert yourself.
-
Thank you Marie. Yes, the canonical should tell Google what you said, but I don't understand why the other pages (the subsequent index pages) are still in the index despite the canonical tag. Am I missing something?
About the thin content and how it affects the whole site, I have no more doubts; that's clear, and I will tackle it page by page. I am just wondering whether my presence on Google will improve little by little over time as I tackle the problem page by page, or whether my site's score will get better only once everything is clean and improved. Deindexing everything and rewriting with the best products first, as EGOL suggested, really scares me, since we live off the site and could end up making no money at all for too long.
-
Yes, I see. It's great to know you could recover fairly easily. I will keep working on the content then, even though I guess it is going to be a long road... thanks!
-
You have a canonical tag on that page which tells Google that this particular page is the version that you would like in the index. It is indeed in the index. But there's not much on the page of value.
EGOL explained well how Panda can affect an entire site. I look at it as a flag. If Google sees that you have a certain amount of duplicate, thin, or otherwise low quality content, then they put a flag on the entire site that says, "This site is generally low quality," and as such, the whole site has trouble ranking, even if there are some good pages in the midst of the low quality ones.
-
When you have a Panda problem it can damage rankings across your site.
I had a Panda problem with two sites.
One had some republished content and some very short blog posts. Rankings went down for the entire site. I noindexed them and the rankings came back in a few weeks.
The other site had hundreds of printable .pdfs that contained only an image and a few words. These were images using the .pdf format to control the scale of the printer. Rankings went down for the entire site. I noindexed the .pdfs and rankings came back in a few weeks.
In my opinion, your site needs a huge writing job.
-
Thank you EGOL for reinforcing what Marie said, but I still can't figure out why some of my best pages, with many reviews and unique content, dropped from the top rankings (from the 1st page to the 13th page) last November:
http://www.seomoz.org/q/what-can-do-to-put-these-pages-back-in-the-top-results
Thank you again.
-
Wow, thank you so much Marie for your extended reply and information, it is like gold for me!
Some thoughts about what you wrote:
For example, take this page:
http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html
There is almost no text on that page that is unique to that page. Why should it be in the search results? I did a search for the text on the top of the page and saw that it was repeated on thousands of your pages. The rest of the text is all from other pages as well. If there is nothing on this page that is unique and adds value, then it needs to be noindexed.
I actually used not to worry about subsequent pages in indexes such as the Guitar one, because I thought all Google needed was the new rel=prev and rel=next tags to figure out that the only important page was the first one; but then I got scared by Panda, and 5 weeks ago I put a canonical tag on the subsequent pages pointing to the main page. So, I don't understand why you still find the subsequent pages in the index... shouldn't the canonical tag help with that?
And I get it now more than before: we really need to make our product pages more unique and compelling, and we'll do that. Our best pages have many user reviews, but it looks like that's not enough... look at what I am discussing in this thread about our best product pages, with many unique user reviews on them:
http://www.seomoz.org/q/what-can-do-to-put-these-pages-back-in-the-top-results
Those pages dropped from page 1 to beyond page 10! Why?! Everything looks like nonsense if you look at the data and at how some thinner pages rank better than thicker ones. In other words, although what you write makes perfect sense to me and I will try to pursue it, when I analyze Google's results and my pages' rankings I cannot understand what Google wants from me (i.e., why is it penalizing my good pages?).
And so, my last question is: do you have any idea when I will begin to see some improvements? So far I haven't seen any good results from my last action of dropping over 50,000 thin pages from the index, which, I must say, is not very encouraging!
Thank you very much again.
-
I agree with Marie. The content is duplicate AND the content is very thin. Both Panda problems, on every page.
A complete authorship job is needed.
Every page needs to be 100% unique and substantive.
The comments that appear on some pages are the only content I saw that I would consider unique.
If I owned this site and was willing to make a big investment I would deindex everything and start rewriting with the best products first.
-
Hi Fabrizio. I have a few thoughts for you. In order to recover from Panda you really need to make your pages compelling. Think, for each page, "Would Google want to show this page to people who are searching for information?"
I still see that there is a lot of work to be done to recover. For example, take this page:
http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html
There is almost no text on that page that is unique to that page. Why should it be in the search results? I did a search for the text on the top of the page and saw that it was repeated on thousands of your pages. The rest of the text is all from other pages as well. If there is nothing on this page that is unique and adds value, then it needs to be noindexed.
Is music2print.com your site as well? I see that the pages redirect to your site, but they can't always have done that, because they are still listed in the Google index. If you had duplicate versions of the site, then this is a sure-fire way to get a Panda flag on your site. If you no longer want music2print.com in the index, then you can use the URL removal tool in WMT to get rid of it. With the 301 in place, eventually Google will figure it out, but it could take some time.
When I look at a product page such as http://www.virtualsheetmusic.com/score/JesuGu.html, the page is extremely thin. This is one of the difficulties with having a commerce site that sells products. In order for Google to want to display your products prominently in search, they need to see that there is something there that users will want to see that is better than other sites selling this product. When I search for "Jesu, Joy of Man's Desiring sheet music" I see that there are 136,000 results. Why would Google want to display yours to a user? Now, the argument that I usually get when I say this is that everyone else is doing the same thing. Sometimes it can be a mystery why Panda affects one site and not the next, and comparing won't get us anywhere.
So, what can you do for products like this? You need to make these pages SUPER useful. I like giving thinkgeek.com as an example. This site sells products that you can buy on other sites but they go above and beyond to describe the product in unique ways. As such, they rank well for their products.
Also, the way you have your pages set up with tabs is inviting a duplicate content issue as well. For example, these pages are all considered separate pages:
http://www.virtualsheetmusic.com/score/JesuGu.html
http://www.virtualsheetmusic.com/score/JesuGu.html?tab=pdf
http://www.virtualsheetmusic.com/score/JesuGu.html?tab=mp3
http://www.virtualsheetmusic.com/score/JesuGu.html?tab=midi
...and so on. But they are creating a duplicate content problem because they are almost identical to each other. EDIT: Actually, you are using the canonical tag correctly so this is not as big an issue. However, if the canonical tag on http://www.virtualsheetmusic.com/score/JesuGu.html?tab=pdf is pointing to http://www.virtualsheetmusic.com/score/JesuGu.html, you are saying to Google, "http://www.virtualsheetmusic.com/score/JesuGu.html" is the main version of this page and I want this page to appear in your index. The problem is that THIS page contains almost no valuable information that can't be found elsewhere and the majority of the page is templated material that is seen on every page of your site.
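In other words, each tab variant presumably carries something like this in its head (a sketch of the setup as you describe it):

```html
<!-- In the <head> of JesuGu.html?tab=pdf, ?tab=mp3, ?tab=midi, etc. -->
<link rel="canonical" href="http://www.virtualsheetmusic.com/score/JesuGu.html">
```

That correctly consolidates the variants into one URL, but it also nominates the thin base page as the version Google is asked to judge.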
Unfortunately there are a lot of issues here and I'm afraid that recovery from Panda is going to be very challenging.
If this were my site I would likely noindex EVERYTHING and then, one page at a time, work on creating the best page possible to put into the search results. You may start by looking at your analytics and finding out which pages were actually bringing in traffic at some time, and then rewrite those pages. You may need to be creative. You could write something about the history of the composition. Is there a story around it? Was it ever played for someone famous? Has anyone famous ever played it? If so, on what instrument? Is there anything unusual about the composition, such as the key or tempo? Can you embed a video of someone playing the composition?
It may sound ridiculous to do so much work for each item, but unless you can add value that can't be found elsewhere, then Panda is going to continue to keep your rankings down.