Recovery Steps For Panda 3.5 (Rel. Apr. 19, 2012)?
-
I'm asking people who have recovered from Panda to share what criteria they used - especially on sites that are not large-scale e-commerce sites.
My blog was hit by Panda 3.5. It has approximately 250 posts. Some of the posts are the most thorough available on their subject and regained traffic despite a Penguin mauling a few days after the Panda attack. (The site has probably regained 80% of the traffic it lost since Penguin hit, without any link removal or link building and with minimal new content.)
Bounce rate is 80% and average time on page is 2:00 min. (Even my most productive pages tend to have very high bounce rates BUT those pages maintain time on page in the 4 to 12 minute range.)
The Panda discussions I've read on these boards seem to focus on e-commerce sites with extremely thin content. I assume that Google views much of my content as "thin" too. But my site seems to need a pruning, rather than just combining the blue model, white model, and red model pages into one page like most of the e-commerce sites we've discussed.
So, I'm asking people who have recovered from Panda to share what criteria they used to decide whether to combine a page, prune a page, etc.
After I combine any series articles into one long post (driving time on page to nice levels), I plan to prune the remaining pages that have poor time on page and/or bounce rates. Regardless of the analytics, I plan to keep the "thin" pages that are essential for readers to understand the subject matter of the blog. (I'll work on fleshing out the content or producing videos for those pages.)
How deep should I prune on the first cut? 5%? 10%? Even more? Should I focus on the pages with the worst bounce rates, the worst time on page, or try some of both?
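To make that first cut concrete, here's the kind of ranking script I have in mind. A rough sketch, assuming an analytics CSV export with page, bounce_rate (percent), and avg_time_on_page (seconds) columns; the file name, column names, 10% cut, and scoring blend are all assumptions to tune, not recovery advice:

```python
import csv

PRUNE_FRACTION = 0.10  # first cut: worst 10% of pages (assumed, tune as needed)

def badness(row):
    # Blend both signals: a high bounce rate and a low time on page
    # both push a page toward the prune list.
    bounce = float(row["bounce_rate"])        # e.g. 80.0 for 80%
    seconds = float(row["avg_time_on_page"])  # e.g. 120.0 for 2:00
    return bounce - seconds / 10.0

with open("analytics_export.csv", newline="") as f:
    pages = list(csv.DictReader(f))

pages.sort(key=badness, reverse=True)
cut = max(1, int(len(pages) * PRUNE_FRACTION))

print(f"First-cut prune candidates ({cut} of {len(pages)} pages):")
for row in pages[:cut]:
    print(row["page"], row["bounce_rate"], row["avg_time_on_page"])
```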
If I post unique and informative video content (hosted on site using Wistia), what should I expect for a range of decrease in bounce rate?
Thanks for reading this long post.
-
Alan: Thanks for sharing your experience in such detail.
-
After almost 2 years of Panda destruction, and constant work on my site with no recovery whatsoever, I don't know if I have anything useful to contribute yet, so take this as some input.
Large site with over 2.2 million pages.
Deleted around 1.5 million pages.
Cleaned up all duplicate titles (removed or fixed).
Cleaned up all duplicate descriptions (removed or fixed).
Removed all problem pages (extremely short, damaged, or empty content).
Removed all pages with duplicate body content. (A sketch of automating this kind of duplicate detection from a crawl export follows the list.)
Now prevent the addition of any new duplicates, and if any slip past, fix them within 24 hours.
Also checked incoming links, discovered some problem sites pointing in, and had those links fixed or removed.
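For anyone attempting the same cleanup, the duplicate-title and duplicate-description passes are easy to automate from a crawl export. A minimal sketch, assuming a CSV with url, title, and meta_description columns; the file name and column names are assumptions, not any specific crawler's format:

```python
import csv
from collections import defaultdict

def duplicates(rows, field):
    """Group URLs by one field's value and keep only groups of 2 or more."""
    groups = defaultdict(list)
    for row in rows:
        key = row[field].strip().lower()
        if key:  # empty titles/descriptions are a separate problem
            groups[key].append(row["url"])
    return {k: v for k, v in groups.items() if len(v) > 1}

with open("crawl_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for field in ("title", "meta_description"):
    for value, urls in duplicates(rows, field).items():
        print(f"Duplicate {field} on {len(urls)} pages: {value!r}")
        for url in urls:
            print("   ", url)
```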
RESULT after almost 2 years? Zero improvement.
Almost ready to slash wrists, but about to try subdomaining first.
It would be funny if it weren't so sad.
Related Questions
-
We killed our SEO, but how come some of our keywords are still in the top 1-3
I am looking for the answer to this interesting question: 1. I have a static page with NO information on it; it is almost completely blank, with only a search box that does nothing. The information it contains is absolutely zero, but the page has a specific URL which matches the keyword I search for. 2. I have another page which is fully optimized (97% with the help of the on-page grader) for another specific keyword. In case 1, searching Google for a keyword that matches the page URL (www.domain.hu/keyword) puts me in the top 1-3 of the SERP, even though the page contains no information and the keyword is really frequent (Google AdWords says it has high competition). In case 2, the URL is exactly the same as the keyword, the page scores 97% on-page, and yet week by week it only moves up a little in the SERP. I have created a lot of unique content and made several changes to the page, with almost no position change. So the question is: WHY, in case 1, can I rank in the top 1-3 for a really hard keyword with no information (an empty static page), while in case 2 I cannot move up the list for a less frequent keyword even though I have done everything I could?
Intermediate & Advanced SEO | Neckermann
-
Rel="self" and what to do with it?
Hey there Mozzers, another question about a forum issue I encountered. When a forum thread has more than one page, as we all know, the best course of action is to use rel="next" and rel="prev" (or rel="previous"). But my forum automatically creates another line in the header called rel="self". What it does is simple. If I have three pages - http://www.example.com/article?story=abc1, http://www.example.com/article?story=abc2, and http://www.example.com/article?story=abc3 - then instead of each page simply referencing itself (the first page as http://www.example.com/article?story=abc1, the second as http://www.example.com/article?story=abc2, the third as http://www.example.com/article?story=abc3), it creates a URL by appending ?page=1 and names it rel="self". That actually gives back a duplicate page, because now instead of just http://www.example.com/article?story=abc1 I also have the same page at http://www.example.com/article?story=abc1?page=1. Do I even need rel="self"? I thought rel="next" and rel="prev" were enough. Should I change that?
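A quick way to surface the duplicate ?page=1 URLs described above is to fetch each thread page and inspect its link tags. A minimal sketch, assuming the third-party requests and beautifulsoup4 packages; the URLs are the question's examples and the flagging rule is illustrative, not a definitive audit:

```python
import requests
from bs4 import BeautifulSoup

def pagination_links(url):
    """Fetch a page and return its rel next/prev/self/canonical link targets."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    links = {}
    for tag in soup.find_all("link"):
        rel = " ".join(tag.get("rel", []))
        if rel in ("next", "prev", "previous", "self", "canonical"):
            links[rel] = tag.get("href")
    return links

thread_pages = [
    "http://www.example.com/article?story=abc1",
    "http://www.example.com/article?story=abc2",
    "http://www.example.com/article?story=abc3",
]

for url in thread_pages:
    links = pagination_links(url)
    # A rel="self" that points anywhere other than the URL itself means the
    # forum is minting a duplicate variant (e.g. ...?page=1) of the page.
    if links.get("self") and links["self"] != url:
        print(f"{url}: rel=self points to {links['self']} (possible duplicate)")
```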
Intermediate & Advanced SEO | Angelos_Savvaidis
-
Should I put rel=publisher on UGC?
My website has a main section that we call expert content, which we write ourselves. We also have a community subdomain which is all user-generated. We are a pretty big brand, and I am wondering: should the rel=publisher tag be used just for the www expert content, or should we also use it on the community UGC even though we don't directly write it?
Intermediate & Advanced SEO | MarloSchneider
-
Rel=canonical
I have seen that almost all of my website's pages supposedly need a rel=canonical tag. Something seems wrong here, since I have unique content on every page. It even shows the homepage as needing a rel=canonical, which doesn't make sense. Can anyone suggest anything, or should I just ignore those issues?
Intermediate & Advanced SEO | arcade88
-
Ecommerce Duplicate Product Descriptions across 3 websites
Hi, we are an e-commerce company that has our own domain but also sells the same products on eBay and Amazon. What is the feeling on the exact same descriptions being used on different platforms? Do they count as duplicate content? Will our domain be punished/penalised, since it does not have as much authority as eBay or Amazon? We have over 5,000 products with our own hand-written product descriptions. We want our website to be the main place / have priority over the above marketplaces. What's the best suggestion/solution? Thanks.
Intermediate & Advanced SEO | Roy1973
-
Panda Updates - robots.txt or noindex?
Hi, I have a site that I believe has been impacted by the recent Panda updates. Assuming that Google has crawled and indexed several thousand pages that are essentially the same, and the site has now passed the threshold to be picked out by the Panda update, what is the best way to proceed? Is it enough to block the pages from being crawled in the future using robots.txt, or would I need to remove the pages from the index using the meta noindex tag? Of course, if I block the URLs with robots.txt then Googlebot won't be able to access the pages in order to see the noindex tag. Does anyone have previous experience of doing something similar? Thanks very much.
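One way to sanity-check the interaction described above before deploying is to test every URL you plan to noindex against your robots.txt: if robots.txt blocks it, Googlebot will never see the noindex tag. A minimal sketch using Python's standard urllib.robotparser; the domain and URL list are placeholders:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # placeholder domain
rp.read()

# URLs you intend to deindex with a meta noindex tag (placeholders).
noindex_targets = [
    "https://www.example.com/thin-page-1",
    "https://www.example.com/thin-page-2",
]

for url in noindex_targets:
    # If robots.txt blocks the URL, Googlebot never fetches the page,
    # never sees the noindex tag, and the URL can linger in the index.
    if not rp.can_fetch("Googlebot", url):
        print(f"WARNING: {url} is blocked by robots.txt; "
              f"a noindex tag on it will not be seen.")
```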
Intermediate & Advanced SEO | ianmcintosh
-
Why specify robots instead of googlebot for a Panda affected site?
Daniweb is the poster child for sites that have recovered from Panda. I know one strategy she mentioned was de-indexing all of her tagged content, for example: http://www.daniweb.com/tags/database. Why do you think more Panda-affected sites aren't specifying 'googlebot' rather than 'robots', so that they keep their traffic from Bing & Yahoo?
Intermediate & Advanced SEO | nicole.healthline
-
Panda Prevention Plan (PPP)
Hi SEOmozzers, I'm planning to prepare for Panda by creating a checklist of SEO things to do to prevent a mass traffic loss. I would like to share these ideas with the SEOmoz community and staff in order to build a help resource for other marketers. Here are some ideas for content websites: the main one is to block duplicate content (robots.txt, noindex tag, or the canonical tag, according to the case), and the same goes for very low-quality content (questions/answers, forums) by inserting a canonical or a noindex on threads with few answers (a sketch of that rule follows).
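As a concrete illustration of that last idea, here is a minimal sketch of the thin-thread rule in Python; the 2-answer threshold and the helper function are hypothetical illustrations, not a known implementation:

```python
MIN_ANSWERS = 2  # hypothetical threshold for a "thin" thread

def robots_meta_tag(answer_count: int) -> str:
    """Return the robots meta tag to render for a forum thread page.

    Threads with too few answers get noindex,follow: they drop out of
    the index, but crawlers still follow their links.
    """
    if answer_count < MIN_ANSWERS:
        return '<meta name="robots" content="noindex,follow">'
    return '<meta name="robots" content="index,follow">'

# Usage: render the tag into each thread page's <head>.
print(robots_meta_tag(0))  # thin thread  -> noindex,follow
print(robots_meta_tag(5))  # healthy one  -> index,follow
```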
Intermediate & Advanced SEO | Palbertus