Recovery Steps For Panda 3.5 (Rel. Apr. 19, 2012)?
-
I'm asking people who have recovered from Panda to share what criteria they used - especially on sites that are not large-scale e-commerce sites.
Blog site hit by Panda 3.5. The blog has approximately 250 posts. Some of the posts are among the most thorough available on their subject, and those regained traffic despite a Penguin mauling a few days after the Panda attack. (The site has probably regained 80% of the traffic it lost since Penguin hit, with no link removal or link building and minimal new content.)
Bounce rate is 80% and average time on page is 2:00 min. (Even my most productive pages tend to have very high bounce rates BUT those pages maintain time on page in the 4 to 12 minute range.)
The Panda discussions I've read on these boards seem to focus on e-commerce sites with extremely thin content. I assume that Google views much of my content as "thin" too. But my site seems to need a pruning instead of just combining the blue model, white model, and red model all on one page like most of the e-commerce sites we've discussed.
So, I'm asking people who have recovered from Panda to share what criteria they used to decide whether to combine a page, prune a page, etc.
After I combine any series articles into one long post (which should drive time on page up to healthier levels), I plan to prune the remaining pages that have poor time on page and/or bounce rates. Regardless of the analytics, I plan to keep the "thin" pages that are essential for readers to understand the subject matter of the blog. (I'll work on fleshing out the content or producing videos for those pages; one markup option for holding them out of the index in the meantime is sketched below.)
How deep should I prune on the first cut? 5%? 10%? Even more? Should I focus on the pages with the worst bounce rates, the worst time on page, or try some of both?
If I post unique and informative video content (hosted on-site using Wistia), what should I expect for a range of decrease in bounce rate?
Thanks for reading this long post.
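For the thin-but-essential pages mentioned above, one option that came up often in Panda-era discussions (not something anyone in this thread confirms using) is a page-level robots meta tag that keeps the page available to readers but out of the index until its content is expanded. A minimal sketch, using a hypothetical glossary page as the example:

```html
<!-- Hypothetical thin-but-essential page, kept live for readers but held out
     of the index until its content is fleshed out. "noindex,follow" removes
     the page from search results while still letting crawlers follow its
     internal links. -->
<head>
  <title>Glossary: Key Terms Used on This Blog</title>
  <meta name="robots" content="noindex,follow">
</head>
```

The tag could simply be removed once a page has been expanded or had a video added.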
-
Alan: Thanks for sharing your experience in such detail.
-
After almost 2 years of Panda destruction and constant work on my site with no recovery whatsoever, I don't know if I have anything useful to contribute yet, so take this as some input.
Large site with over 2.2 million pages.
Deleted around 1.5 million pages.
Removed all duplicate titles (removed or fixed).
Removed all duplicate descriptions (removed or fixed).
Removed all problem pages (extra short, damaged content, empty).
Removed all pages with duplicate body content.
Prevented the addition of any new duplicates; any that slip past get fixed within 24 hours.
Also checked incoming links and discovered some problem sites pointing in - these were fixed or removed.
RESULT after almost 2 years? Zero improvement.
Almost ready to slash wrists, but about to try subdomaining first.
It would be funny if it weren't so sad.
Related Questions
-
If I put a piece of content on an external site, can I syndicate it to my site later using a rel=canonical link?
Could someone help me with a 'what if' scenario, please? What happens if I publish a piece of content on an external website, but then later decide to also put this content on my website? I want my website to rank first for this content, even though the original location for the content was the external website. Would it be okay for me to put a rel=canonical tag on the external website's content pointing to the copy on my website? Or would this be seen as manipulative?
Intermediate & Advanced SEO | | RG_SEO1 -
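For reference, the tag being asked about would sit in the head of the externally published copy and point at the version you want to rank; the URL below is a placeholder, not a real site. A minimal sketch:

```html
<!-- Placed in the <head> of the external site's copy of the article
     (placeholder URL). A cross-domain rel=canonical asks search engines to
     treat the copy on your own site as the preferred version; the external
     site has to be willing to add it. -->
<link rel="canonical" href="https://www.example.com/my-article/">
```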
Site dropped after recovery
Hi everybody! I've been working for http://www.newyoubootcamp.com for some time now. They came to me because they had dropped heavily for their main term, "boot camp". This turned out to be due to a manual penalty, caused in part by their forum being hacked as well as by some bad link building. Here's an example of the dodgy forum links - http://about1.typepad.com/blog/2014/04/tweetdeck-to-launch-as-html5-web-app-now-accepting-beta-testers.html. The anchor is "microsoft". They've all been 410'd now. We also cleaned up the other bad links as best we could and got through the manual penalty. The site then returned to #5 for "boot camps", below its pre-crash peak of #2, but OK. Over the past few weeks it has started to slide, though. I'm certain it is not down to a lack of quality links - this site has great PR and links from national newspapers and magazines. There have been a few on-site issues too, but nothing outrageous. I'm getting a bit stumped, though, and any fresh eyes would be much appreciated!
Intermediate & Advanced SEO | | Blink-SEO0 -
Rel=canonical
I have seen that almost all of my website's pages need a rel=canonical tag. It seems that something's wrong here, since I have unique content on every page. The report even flags the homepage as needing a rel=canonical, which doesn't make sense. Can anyone suggest anything, or should I just ignore those issues?
Intermediate & Advanced SEO | | arcade880 -
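Assuming the warning comes from a crawl report flagging pages that carry no canonical tag at all (the usual cause of this message), the common fix is a self-referencing canonical on each unique page; the URL below is a placeholder:

```html
<!-- On a page with unique content, the canonical simply points at that page's
     own preferred URL (placeholder shown). This is not an admission of
     duplicate content; it consolidates parameter or session-ID variants of
     the same page onto one URL. -->
<link rel="canonical" href="https://www.example.com/blue-widgets/">
```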
Rel=canonical on image pages
Hi, I'm working on a WordPress-hosted blog site. I recently did a "site:" search in Google for a specific article page to make sure it was getting crawled, and it returned three separate URLs in the search results. One was the article page, and the other two were the URLs that host the images found in the article. Would you suggest adding the rel=canonical tag to the pages that host the images so they point back to the article page they appear in? Or are they fine being left alone? Thank you!
Intermediate & Advanced SEO | | dbfrench0 -
Canonical VS Rel=Next & Rel=Prev for Paginated Pages
I run an ecommerce site that paginates product pages within Categories/Sub-Categories. Currently, products are not displayed in multiple categories but this will most likely happen as time goes on (in Clearance and Manufacturer Categories). I am unclear as to the proper implementation of Canonical tags and Rel=Next & Rel=Prev tags on paginated pages. I do not have a View All page to use as the Canonical URL so that is not an option. I want to avoid duplicate content issues down the road when products are displayed in multiple categories of the site and have Search Engines index paginated pages. My question is, should I use the Rel=Next & Rel=Prev tags on paginated pages as well as using Page One as the Canonical URL? Also, should I implement the Canonical tag on pages that are not yet paginated (only one page)?
Intermediate & Advanced SEO | | mj7750 -
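Without a view-all page, the pattern generally described for this setup is a self-referencing canonical on each paginated URL combined with rel=next/prev links along the series, rather than canonicalizing everything to page one. A minimal sketch for page 2 of a category, with placeholder URLs:

```html
<!-- <head> of page 2 in a paginated category (placeholder URLs). Each page
     canonicals to itself so the rel=next/prev chain stays intact; page 1
     would omit rel=prev and the final page would omit rel=next. A category
     with only one page simply carries its own self-referencing canonical
     and no next/prev tags. -->
<link rel="canonical" href="https://www.example.com/widgets/page/2/">
<link rel="prev" href="https://www.example.com/widgets/">
<link rel="next" href="https://www.example.com/widgets/page/3/">
```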
Can I rank for 3 keyword phrases using one long tail keyword phrase?
I am new to SEO and still getting my head around long tail searches. If I use a phrase such as "luxury towels uk", can I also rank on the same page for "luxury towels" and "towels uk"? In other words, ranking for 3 phrases using 1 long tail phrase.
Intermediate & Advanced SEO | | Towelsrus0 -
Rel=author, google plus, picture in Article page SERP
Hello, could someone explain the easiest way to use Google Plus and rel="author" to claim the articles written by us and get our picture beside them in the Google SERPs? Site: nlpca(dot)com
Intermediate & Advanced SEO | | BobGW0 -
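At the time this was asked, the usual setup was a rel=author link from each article to the author's Google+ profile, plus a link back from the profile's "Contributor to" section (Google has since retired authorship display in results). A minimal sketch with a placeholder profile ID:

```html
<!-- In the <head> of each article page: rel=author pointing at the author's
     Google+ profile (the numeric ID below is a placeholder). The profile's
     "Contributor to" section must link back to the site before the author
     photo can appear next to the listing. -->
<link rel="author" href="https://plus.google.com/112345678901234567890/posts">
```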
How does Google treat internal links with rel="nofollow"?
Today I was reading about nofollow on Wikipedia. The following statement is over my head and I am not able to understand it properly: "Google states that their engine takes "nofollow" literally and does not "follow" the link at all. However, experiments conducted by SEOs show conflicting results. These studies reveal that Google does follow the link, but does not index the linked-to page, unless it was in Google's index already for other reasons (such as other, non-nofollow links that point to the page)."
That passage is about external links - indexing the linked-to page and ranking it for the anchor text's keywords. I am aware of that part, and such a page may not show up as a relevant result for any keyword in Google web search. But what about internal links? I have set the rel="nofollow" attribute on a great many internal links. I found an archived blog post by Randfish on the same subject and read the following question there. Q [in 2007]: Does Google recommend the use of nofollow internally as a positive method for controlling the flow of internal link love? A: Yes - webmasters can feel free to use nofollow internally to help tell Googlebot which pages they want to receive link juice from other pages.
(Matt's precise words were: The nofollow attribute is just a mechanism that gives webmasters the ability to modify PageRank flow at link-level granularity. Plenty of other mechanisms would also work (e.g. a link through a page that is robots.txt'ed out), but nofollow on individual links is simpler for some folks to use. There's no stigma to using nofollow, even on your own internal links; for Google, nofollow'ed links are dropped out of our link graph; we don't even use such links for discovery. By the way, the nofollow meta tag does that same thing, but at a page level.)
Matt also gave an excellent answer to the following question [in 2011]. Q: Should internal links use rel="nofollow"? A: Matt said: "I don't know how to make it more concrete than that." I use nofollow for each internal link that points to an internal page that carries the meta name="robots" content="noindex" tag - why should I waste Googlebot's resources and those of my server if in the end the target must not be indexed? As far as I can tell, and for years now, this has not caused any problems at all. For internal page anchors (links with a hash mark in front, like "#top"), the answer is "no", of course.
I am still using nofollow attributes on my website. So, what is the current trend? Is it still appropriate to use the nofollow attribute on internal links?
Intermediate & Advanced SEO | | CommercePundit
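To make the two mechanisms discussed above concrete, here is a minimal sketch of a link-level nofollow on an internal link alongside the page-level robots meta tag on the target page; the URL and link text are placeholders:

```html
<!-- Link-level: on the referring page, a single internal link marked nofollow
     (placeholder URL). Per the quotes above, Google drops such links from its
     link graph entirely. -->
<a href="/members/login/" rel="nofollow">Log in</a>

<!-- Page-level: in the <head> of the target page, keep it out of the index
     while still allowing its outgoing links to be crawled. -->
<meta name="robots" content="noindex,follow">
```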