Last Panda: removed a lot of duplicated content but still no luck!
-
Hello here,
my website virtualsheetmusic.com has been hit several times by Panda since its inception back in February 2011. Five weeks ago we decided to get rid of about 60,000 thin, near-duplicate pages via noindex meta tags and canonical tags (we have not physically removed those pages from our site to return a 404, because our users may still search for those items on our own website), so we expected this last Panda update (#25) to give us some traffic back... instead we lost an additional 10-12% of our Google traffic, and now the site looks even more badly targeted.
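For reference, this is the kind of head markup we used to de-index those pages — a simplified sketch, and the canonical URL below is just a placeholder, not one of our real pages:

```html
<!-- In the <head> of each thin page we wanted out of the index: -->
<meta name="robots" content="noindex, follow">

<!-- ...or, on near-duplicate pages, a canonical tag pointing to the
     preferred version instead (placeholder URL for illustration): -->
<link rel="canonical" href="http://www.virtualsheetmusic.com/score/Example.html">
```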
Let me say how disappointing this is after so much work!
I must admit that we still have many pages that may look like thin or duplicate content, and we are considering removing those too (even though they are actually bringing us sales from Google!), but I expected this last Panda update to let us recover a little and improve our positions in the index. Instead, nothing: we have been hit again, and badly.
I am pretty desperate, and I am afraid I have lost my compass here. I am particularly afraid that removing over 60,000 pages from the index via noindex meta tags has, for some unknown reason, been more damaging than beneficial.
What do you think? Is it just a matter of time? Am I on the right path? Do we need to wait just a little bit more and keep removing (via noindex metatags) duplicate content and improve all the rest as usual?
Thank you in advance for any thoughts.
-
Never mind, I have just found your site... thank you again!
-
Thank you very much Marie for your time and explanation, I appreciated it. Do you offer SEO consultation? Please, let me know.
Thank you again!
-
The short answer is that this is not what the disavow tool was meant for, so no, I wouldn't use it. Affiliate links SHOULD be nofollowed, though. However, affiliate links won't cause you to be affected by Panda. Link-related issues are totally unrelated to Panda.
Unfortunately at this point though I'm going to bow out of taking this discussion any further due to time constraints. Q&A is a good place to get someone to take a quick look at your site, but if you've got lots of questions it may be worthwhile to pay a consultant to help out with your site's traffic drop issues.
-
Marie, I was thinking, do you think the new Google Disavow Links Tool could help me with my affiliates' inbound links? I mean, in case I am being damaged by that kind of link profile...
-
Yes, I think it will be easier to change our own content and ask them to add a canonical tag pointing to our page. Thanks!
-
Actually you can see the subsequent pages still in the index, just enter on Google:
site:virtualsheetmusic.com inurl:downloads/Indici/Guitar.html
and you will see what I mean. I see though that most of those pages have been cached before I put the canonical tag, so I guess it is just a matter of time.
Am I correct? I mean, if a page has a canonical tag that points to a different page, it should NOT be in the index, right? Thank you for looking!
-
If there's duplicate content then you've either got to change yours, get theirs changed, or get them to use a rel-canonical tag pointing to your site or a noindex tag.
-
I just had a quick look but I don't see any other versions of the page you listed in the index. If you just added the rel=prev and rel=next tags, they won't take effect until the pages are crawled, which could take up to a few weeks.
-
Sorry Marie, I forgot to answer your inquiry about music2print.com: that's one of our affiliates! That's another issue we could be suffering from... how do you suggest we tackle possible duplicate content from affiliates? Thanks!
-
Yes EGOL, I understand that my only way forward is to really thicken and differentiate the pages with real and unique content. I will try that and keep you posted! Thank you for your help again.
-
Marie, look at the following page; it is the main (first) page of our guitar index:
http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html
Now, if you want to browse the guitar repertoire to the second page of the index, you click the page "2" or "next" link right? And then the second page appears:
http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html?cp=2&lpg=20
And so on... well, those subsequent pages are the ones I was talking about: they have the rel=prev and rel=next tags together with a canonical tag that points to the main (first) index page, but many of those subsequent pages are still in the index. Shouldn't they disappear, with only the first page kept in the index?
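To be concrete, the head of page 2 currently contains tags along these lines (a simplified sketch of what I described, not the exact markup):

```html
<!-- <head> of .../Guitar.html?cp=2&lpg=20 (sketch) -->
<link rel="canonical" href="http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html">
<link rel="prev" href="http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html">
<link rel="next" href="http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html?cp=3&lpg=20">
```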
As for what you wrote about how I can expect a recovery from Panda, it makes sense, and I really hope this new integration of Panda into the main algorithm will gradually speed things up. Thank you for your opinion on that.
I think my approach will be to first keep noindexing those pages that really don't bring any business, and in the meantime improve all the others one by one. Noindexing all pages and releasing only the optimized ones one by one scares me too much!
-
Most of the content on my site is articles that are 500 to 5000 words and one to ten photos - all on a single page.
It was very easy for me to "noindex" the republished content and "noindex" the blog posts that were very short.
For a site that consists of pages where most of the content is thin and duplicated a massive rewriting job is required in my opinion. That is what I would do if I wanted to make an attempt at recovering such a site.
I had to chop off my foot to save my ass.
-
I'm not sure that I'm following what you are saying. Which pages are in the index that you feel should not be because of their canonical tag?
You mentioned above that it sounds like it is "easy" to recover from Panda. I don't think that is true for most sites. Most likely in EGOL's case he had a site that had some fantastic content to go along with the duplicate and thin content. If there is good stuff there, then getting rid of the low quality stuff can sometimes be a quick fix. But, if you've got a site that consists almost completely of thin or duplicated content then it may not be so easy.
In my experience, when a site recovers from Panda, it does not happen gradually as the site gets cleaned up and improved. Rather, there is a sudden uptick when Panda refreshes provided that you have done enough work for Google to say that enough of your site is high quality. However, this may change now that Panda will be rolling out as part of the regular algorithm and not just every 4-6 weeks or so as before.
-
The academic year is coming to a close in the northern hemisphere. Hire a music scholar who is also a great writer and attack this. Or hire a writer who appreciates music. Better yet, hire one of each.
It is time to exert yourself.
-
Thank you Marie, yes, the canonical should tell Google what you said, but I don't understand why the other pages (subsequent index pages) are still in the index despite the canonical tag. Am I missing something?
About the thin content and how that affect the whole site, I have no more doubts, that's clear and I will tackle that page by page. I am just wondering if my presence on Google is going to improve little by little over time while I tackle the problem page-by-page, or will my site score get better only when everything will be clean and improved? To deindex everything and start rewriting with the best products first. as EGOL suggested really scares me since we live with the site and we could ending up making no money at all for too long.
-
Yes, I see; it's great to know you could recover fairly easily. I will keep working on the content then, even though I guess it is going to be a long road... thanks!
-
You have a canonical tag on that page which tells Google that this particular page is the version that you would like in the index. It is indeed in the index. But there's not much on the page of value.
EGOL explained well how Panda can affect an entire site. I look at it as a flag. So, if Google sees that you have a certain amount of duplicate or thin or otherwise low quality content, then they put a flag on the entire site that says, "This site is generally low quality." and as such, the whole site has trouble ranking, even if there are some good pages in the midst of the low quality ones.
-
When you have a Panda problem it can damage rankings across your site.
I had a Panda problem with two sites.
One had some republished content and some very short blog posts. Rankings went down for the entire site. I noindexed them and the rankings came back in a few weeks.
The other site had hundreds of printable .pdfs that contained only an image and a few words. These were images using the .pdf format to control the scale of the printer. Rankings went down for the entire site. I noindexed the .pdfs and rankings came back in a few weeks.
In my opinion, your site needs a huge writing job.
-
Thank you EGOL for reinforcing what Marie said, but I still can't figure out why some of my best pages, with many reviews and unique content, dropped from the top rankings (from the 1st page to the 13th page) last November:
http://www.seomoz.org/q/what-can-do-to-put-these-pages-back-in-the-top-results
Thank you again.
-
Wow, thank you so much Marie for your extended reply and information, it is like gold for me!
Some thoughts about what you wrote:
For example, take this page:
http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html
There is almost no text on that page that is unique to that page. Why should it be in the search results? I did a search for the text on the top of the page and saw that it was repeated on thousands of your pages. The rest of the text is all from other pages as well. If there is nothing on this page that is unique and adds value, then it needs to be noindexed.
I actually used to not worry about subsequent pages in indexes such as the Guitar one, because I thought all Google needed was the new rel=prev and rel=next tags to figure out that the only important page was the first one, but then I got scared by Panda, and 5 weeks ago I put a canonical tag on the subsequent pages pointing to the main page. So, I don't understand why you still find the subsequent pages in the index... shouldn't the canonical tag help with that?
And I get it now more than before: we really need to make our product pages more unique and compelling, and we'll do that. Our best pages have many user reviews, but it looks like that's not enough... look at what I am discussing in this thread about our best product pages, the ones with many unique user reviews on them:
http://www.seomoz.org/q/what-can-do-to-put-these-pages-back-in-the-top-results
Those pages dropped from page 1 to beyond page 10! Why?! Everything looks like nonsense when you examine the data and see how some thinner pages rank better than thicker ones. In other words, even though what you write makes perfect sense to me and I will try to pursue it, when I analyze Google's results and my pages' rankings, I cannot understand what Google wants from me (i.e., why is it penalizing my good pages?).
And so, my last question is: do you have any idea when I will begin to see some improvements? So far I haven't seen any good results from my last action of dropping over 50,000 thin pages from the index, which, I must say, is not very encouraging!
Thank you very much again.
-
I agree with Marie. The content is duplicate AND the content is very thin. Both Panda problems, on every page.
A complete authorship job is needed.
Every page needs to be 100% unique and substantive.
Comments that appear on some pages are the only content that I saw that I would consider as unique.
If I owned this site and was willing to make a big investment I would deindex everything and start rewriting with the best products first.
-
Hi Fabrizo. I have a few thoughts for you. In order to recover from Panda you really need to make your pages compelling. Think, for each page, "Would Google want to show this page to people who are searching for information?"
I still see that there is a lot of work to be done to recover. For example, take this page:
http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html
There is almost no text on that page that is unique to that page. Why should it be in the search results? I did a search for the text on the top of the page and saw that it was repeated on thousands of your pages. The rest of the text is all from other pages as well. If there is nothing on this page that is unique and adds value, then it needs to be noindexed.
Is music2print.com your site as well? I see that the pages redirect to your site, but they mustn't have always done that because they are still listed in the Google index. If you had duplicate versions of the site then this is a sure-fire way to get a Panda flag on your site. If you no longer want music2print.com in the index then you can use the url removal tool in WMT to get rid of it. With the 301 in place, eventually Google will figure it out but it could take some time.
When I look at a product page such as http://www.virtualsheetmusic.com/score/JesuGu.html, the page is extremely thin. This is one of the difficulties with having a commerce site that sells products. In order for Google to want to display your products prominently in search, they need to see that there is something there that users will want to see that is better than other sites selling this product. When I search for "Jesu, Joy of Man's Desiring sheet music" I see that there are 136,000 results. Why would Google want to display yours to a user? Now, the argument that I usually get when I say this is that everyone else is doing the same thing. Sometimes it can be a mystery why Panda affects one site and not the next, and comparing won't get us anywhere.
So, what can you do for products like this? You need to make these pages SUPER useful. I like giving thinkgeek.com as an example. This site sells products that you can buy on other sites but they go above and beyond to describe the product in unique ways. As such, they rank well for their products.
Also, the way you have your pages set up with tabs is inviting a duplicate content issue as well. For example, these pages are all considered separate pages:
http://www.virtualsheetmusic.com/score/JesuGu.html
http://www.virtualsheetmusic.com/score/JesuGu.html?tab=pdf
http://www.virtualsheetmusic.com/score/JesuGu.html?tab=mp3
http://www.virtualsheetmusic.com/score/JesuGu.html?tab=midi
...and so on. But they are creating a duplicate content problem because they are almost identical to each other. EDIT: Actually, you are using the canonical tag correctly so this is not as big an issue. However, if the canonical tag on http://www.virtualsheetmusic.com/score/JesuGu.html?tab=pdf is pointing to http://www.virtualsheetmusic.com/score/JesuGu.html, you are saying to Google, "http://www.virtualsheetmusic.com/score/JesuGu.html" is the main version of this page and I want this page to appear in your index. The problem is that THIS page contains almost no valuable information that can't be found elsewhere and the majority of the page is templated material that is seen on every page of your site.
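In other words, each tab URL carries something like this in its head (a sketch of what you already have in place):

```html
<!-- <head> of JesuGu.html?tab=pdf, ?tab=mp3, ?tab=midi, etc. -->
<link rel="canonical" href="http://www.virtualsheetmusic.com/score/JesuGu.html">
```

That correctly consolidates the tab variants into one URL — but it means everything I said above about the main page's thin content applies to the one URL you've chosen to keep.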
Unfortunately there are a lot of issues here and I'm afraid that recovery from Panda is going to be very challenging.
If this were my site I would likely noindex EVERYTHING and then, one page at a time, work on creating the best page possible to put into the search results. You may start by looking at your analytics and finding out which pages were actually bringing in traffic at some time and then rewrite those pages. You may need to be creative. You could write something about the history of the composition. Is there a story around it? Was it ever played for someone famous? Has anyone famous ever played it? If so, on what instrument? Is there anything unusual about the composition such as the key or tempo? Can you embed a video of someone playing the composition?
It may sound ridiculous to do so much work for each item, but unless you can add value that can't be found elsewhere, then Panda is going to continue to keep your rankings down.