Panda Recovery - What is the best way to shrink your index and make Google aware?
-
We have been hit significantly with Panda and assume that our large index with some pages holding thin/duplicate content being the reason.
We have reduced our index size by 95% and have done significant content development on the remaining 5% pages.
For the old, removed pages, we have installed 410 responses (page no longer exists) and made sure that they are removed from the sitemap submitted to Google; however, after over a month we still see Google's spider returning to the same pages, and Webmaster Tools shows no indication that Google is shrinking our index size.
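For reference, here is roughly how we spot-check that the removed URLs really do answer with a hard 410 rather than a soft 404 or a 200 (a minimal Python sketch; the URLs shown are placeholders, not our real pages):

```python
# Spot-check that removed pages answer with a hard 410 (Gone).
# The URLs below are placeholders -- substitute your own removed pages.
import urllib.request
import urllib.error

REMOVED_URLS = [
    "http://www.example.com/old-page-1",
    "http://www.example.com/old-page-2",
]

def classify(status):
    """Map an HTTP status code to what it signals for deindexing."""
    if status == 410:
        return "gone"          # strongest removal signal
    if status == 404:
        return "not found"     # also removed, but treated as softer
    if status in (301, 302):
        return "redirect"      # consolidates, does not remove
    return "still serving"     # a 200 keeps the page in the index

def check(url):
    """Fetch a URL and classify its response status."""
    try:
        with urllib.request.urlopen(url) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as err:
        # urllib raises for 4xx/5xx; the code is on the exception.
        return classify(err.code)
```

Anything that comes back as "still serving" or "redirect" is a page Google has no reason to drop.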
Are there more effective and automated ways to make Google aware of a smaller index size, in hopes of a Panda recovery? Potentially using the robots.txt file, the GWT URL removal tool, etc.?
Thanks /sp80
-
Hi. I would be curious to know if anyone else has experienced something similar and recovered from Panda. How long did it take you? Did you manually remove the pages, set up 410s or 404s, or create 301s?
I've been working on a site for some time now which has lost a great deal of traffic since July 2013. Over the past 2 months, a process has been underway to manually remove the URLs from the index. The index has been cut in half, but is still not at what it was pre-penalty. About 20,000 more pages need to be evaluated for removal before it reaches the level it was at before the massive traffic drop.
Any recovery or insight would be helpful.
-
Hi Sp80 (and group),
It's been about six months since you posted your Panda recovery question. I'm curious if you implemented Kerry22's suggestions, and what results you've seen. I hope it's worked out for you.
We're also dealing with removing thousands of pages of thin content (through 410s, keeping links up and sitemaps, as per Kerry's suggestion). This was a very helpful discussion to read.
Thanks,
Tom
-
Hi kerry,
Your post gives me some hope. I was hit by Panda in Feb. 2011 and lost 85% of my Google traffic. I made many changes to my site -- page deletions, redirects, added content, etc. I got a bump of 25% in September 2011 but lost that and more afterward.
We have an e-commerce gift site with 6000 pages. Is your site an e-commerce site?
I have not found a recovery story from any sites like mine that were hit with that large a drop.
I hope your recovery story will apply to my situation.
-
Did Google process the 301s? In other words, are the old pages still in the index or not? If they processed the 301s eventually, you generally should be ok. If the old URLs seem stranded, then you might be best setting up the XML sitemap with those old URLs to just kick Google a little. I don't think I'd switch signals and move from a 301 to 404, unless the old pages are low quality, had bad links, etc.
Unfortunately, these things are very situational, so it can be hard to speak in generalities.
-
Hi Dr. Pete,
I know this is a late entry into this thread, but... what if we did all our content cutting in the wrong ways over the past year - is there something we could/should do now to correct for this? Our site was hit by Panda back in March 2012, and since then we've cut content several times. But we didn't use the good process you advocate - here's what we did when we cut pages:
1. We set up permanent 301 redirects for all of them immediately
2. Simultaneously, we always removed all links pointing to cut pages (we wanted to make sure users didn't get redirected all the time).
This is a far cry from what you recommend and what Kerry22 did to recover successfully. If you have some advice on the following questions, I'd definitely appreciate it:
- Is it possible Google still thinks we have this content on our site or intend to bring it back, and as a result we continue to suffer?
- If that is a possibility, then what can we do now (if anything) to correct the damage we did?
We're thinking about removing all of those 301s now, letting all cut content return 404s and making a separate sitemap of cut content to submit it to Google. Do you think it's too late or otherwise inadvisable for us to do this kind of thing?
Thanks in advance,
Eric
-
It might be worth exploring NOINDEX'ing the useful pages and 410'ing the non-useful ones, if only because sometimes a mix of signals is more palatable to Google. Any time you remove a swath of content with one method, it can trigger alarm bells. I'll be honest, though - these situations are almost always tricky and you almost always have to measure and adjust. I've never found a method that's right for all situations.
-
Thanks Pete,
I appreciate your input. In addition to the extra sitemap containing the known Google-indexed URLs we want deindexed, we have also reopened some crawl paths to these pages to see if that speeds things up.
This is an undertaking carried out across 30 international properties, so we will be able to experiment with measures on certain domains and see how they affect de-indexing speed, as we are tracking the numbers reported by Google daily.
I agree about the bad user experience of 410s as a dead end. We are mostly de-indexing as a means of recovering from Panda, but the content pages we are trying to deindex are actually still useful to users - just thin and partially duplicative in content. We have decided to still display the content when such a page is reached, but return a status code of 410. Alternatively, it seems we could just set the robots meta tag to noindex, but my feeling is that the 410 approach will lead to faster deindexing - would you agree?
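To make the setup concrete, the idea is roughly this (a bare-bones WSGI sketch for illustration only - the paths and markup are placeholders, and our production stack differs):

```python
# Serve the page content to users while signalling 410 Gone to crawlers.
# DEINDEXED_PATHS and the markup are hypothetical placeholders.
DEINDEXED_PATHS = {"/thin-page", "/duplicate-page"}

def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    body = b"<html><body>Still-useful page content</body></html>"
    if path in DEINDEXED_PATHS:
        # The visitor still sees the page; the status line tells
        # Google the URL is intentionally gone.
        status = "410 Gone"
    else:
        status = "200 OK"
    start_response(status, [("Content-Type", "text/html"),
                            ("Content-Length", str(len(body)))])
    return [body]
```

The user still gets the full page body; only the status line changes.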
Also, if you have any expertise to share on how to compile a more comprehensive list of URLs indexed by Google for a particular domain - other than scraping the web interface using the site:domain.com query approach (which only returns a small subset compared to the stated total number of indexed pages) - please let me know.
Thanks again /Thomas
-
If you want to completely remove these pages, I think Kerry22 is spot on. A 410 is about the fastest method we know of, and her points about leaving the crawl paths open are very important. I completely agree with leaving them in a stand-alone sitemap - that's good advice.
Saw your other answer, so I assume you don't want to 301 or canonical these pages. The only caveat I'd add is user value. Even if the pages have no links, make sure people aren't trying to visit them.
This can take time, especially at large scale, and a massive removal can look odd to Google. This doesn't generally result in a penalty or major problems, but it can cause short-term issues as Google re-evaluates the site.
The only way to speed it up: if the pages have a consistent URL parameter or folder structure, you may be able to do a mass removal in Google Webmaster Tools. This can be faster, but it's constrained to similar-looking URLs. In other words, there has to be a pattern. The benefit is that you can make the GWT request on top of the 410s, so that can sometimes help. Any massive change takes time, though, and often requires some course correction, I find.
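To see whether such a pattern exists, a rough first pass (a sketch - the URLs are invented for illustration) is to tally the removal candidates by their leading folder:

```python
# Group removal candidates by leading folder to see whether a single
# GWT directory-removal request could cover many of them at once.
# The example URLs are invented for illustration.
from collections import Counter
from urllib.parse import urlparse

def leading_folders(urls):
    """Count URLs per first path segment, e.g. '/old-products/'."""
    counts = Counter()
    for url in urls:
        parts = urlparse(url).path.strip("/").split("/")
        counts["/%s/" % parts[0] if parts[0] else "/"] += 1
    return counts

urls = [
    "http://www.example.com/old-products/widget-1",
    "http://www.example.com/old-products/widget-2",
    "http://www.example.com/blog/post-1",
]
```

If one folder covers most of the candidates, a single directory removal request in GWT may do a lot of the work.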
-
I think a second sitemap will be fine. I wouldn't add a page with just links, as that is the type of page Panda doesn't like.
Regarding sets of pages - we started by going into the search results and found a lot of content that shouldn't have been indexed.
We then looked manually at the content on subsets of pages and found pages that were thin and very similar to others (at the product level) and either made them more unique or removed them. Tools like this also help identify similar pages across products/categories http://www.copyscape.com/compare.php
It's only been 2 weeks, but it looks like we are pretty much 80% recovered and still improving - we're still looking at the numbers, and over Christmas and New Year traffic is obviously quiet. I think 100% recovery is dependent on too many variables, like whether you continue link building during your time fixing the site, losing links by removing pages, adding more pages, competitors gaining authority/rankings, etc.
-
Hey Kerry,
We added additional pages in April, which is also when our sites started seeing a decrease in rankings - so the timing adds up.
The drops starting in June have no clear cause for us - we started our de-indexation process at the start of December.
We are thinking of speeding up de-indexation exclusively through a second Google sitemap, as anything else would need to be a very artificial landing page with a high number of links at this point. Would you be concerned about exclusively using a sitemap rather than keeping the unwanted pages linked from your linking structure?
Further, I am interested in how you determined the set of pages you knew were part of the Google index and needed to be delisted. It appears the best way to do so is to scrape the Google search results returned for a domain and build up a list this way.
Did you recover completely to pre-Panda levels?
Best /Thomas
-
Hi
No problem, I am happy to help!
Yes, the graph declined sloooowly, but only once we started removing pages. This is half the problem - you have to wait for Google to find the changes. The waiting is frustrating, as you don't know if what you have done is right, but the steps I listed will help speed it up. We literally had to wait until none of the pages could be found in the index.
I see a big increase in your indexation from April to May 2012. When did you get hit and what happened over that month - did you add a lot of new pages/products? Are those drops in indexation from June to Dec 2012 you removing pages or did the drop just start to 'happen' and then you got hit?
-
Kerry,
Thank you for your amazing response to the deindexing question I had. It was incredibly well written and very easy to follow. Very happy to hear you were able to recover.
You make a really good point about allowing Google to still be able to reach the pages. When we started reviewing our site structure we also changed our linking structure, so while all the pages we no longer want in the index return a 410, they certainly aren't all discoverable. Our assumption was that Google will revisit them sooner or later given that they are part of the index, but I can definitely imagine that things would be sped up by compiling a dedicated sitemap.
A big question I have for you is how the index status graph in GWT adjusted for you over time. We started our restructuring at the start of January and can't see a difference yet: http://imgur.com/eKBJ0
Did your graph decline step by step?
Thanks again
-
Hi
We just recovered from Panda - it took us 6 months. The best way to do this is to 410 or 404 your pages, but don't remove the links. If you remove the links to those pages, then Google won't be able to find those pages and know that you have removed them.
Here are the steps you need to follow to get the changes indexed:
1. Remove the pages but leave the links to them on your site (we left these discreetly at the bottom of the pages they were on, so users wouldn't find them easily, but Google would). You will see Google slowly start to pick up the number of 404s/410s in Webmaster Tools - don't worry about so many 410s being picked up; it won't hurt you. Don't nofollow links, remove links, or block pages with robots.txt. You want Google to find your changes.
2. Revise your sitemaps - take the 410 pages out of the original sitemap and add them to a new separate sitemap and submit this in Webmaster Tools. Then you can see the true indexation rates of your current pages (gives you a good idea of how many are indexed vs not and if you still have issues). You can then also track the deindexation of your 410s separately - see how fast they are being deindexed - be patient, it takes time. We only recovered once they were all deindexed.
Our decision to use sitemaps as well as internal links was due to the fact that some deep pages are only crawled periodically and we wanted Google to find the changes quickly. This is useful: http://www.seomoz.org/blog/logic-meet-google-crawling-to-deindex
3. Then wait. If all your pages are removed and you are still affected by Panda, start looking for more duplicate content, and look with an objective view at your pages that still exist. You may be surprised by what you find. The process took us 6 months because we had to wait for Google to pick up our changes, and then revise, tweak, look for more to do, etc.
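For step 2, the stand-alone sitemap of removed URLs can be generated with a few lines (a sketch using Python's standard library; the URLs are placeholders for your own removed pages):

```python
# Build a stand-alone sitemap for the 410'd URLs so their
# deindexation can be tracked separately in Webmaster Tools.
# The URLs are placeholders -- replace with your real removed pages.
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    """Return sitemap XML (as a string) listing the given URLs."""
    ET.register_namespace("", SITEMAP_NS)  # no prefix, per the protocol
    urlset = ET.Element("{%s}urlset" % SITEMAP_NS)
    for url in urls:
        entry = ET.SubElement(urlset, "{%s}url" % SITEMAP_NS)
        loc = ET.SubElement(entry, "{%s}loc" % SITEMAP_NS)
        loc.text = url
    return ET.tostring(urlset, encoding="unicode")

removed = ["http://www.example.com/gone-1", "http://www.example.com/gone-2"]
```

Submit the resulting file in Webmaster Tools alongside, not instead of, your main sitemap.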
I will write a case study soon, but in the meantime hope this helps you! I know how frustrating it is.
PS. If you are losing link value from 410s, 410 first, recover from Panda, and then 301 the select pages that have links to get the link juice back. It will be faster that way.
-
Google has already been recrawling those pages for the last few months and keeps returning to the pages that return 410. We have very explicit logging configured.
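Since that logging is in place, a tally of Googlebot's hits on the 410'd pages looks roughly like this (a sketch assuming combined-log-format lines; the sample entries are invented):

```python
# Count Googlebot requests per (status, path) from access-log lines.
# Assumes Apache/nginx combined log format; sample lines are invented.
import re
from collections import Counter

LOG_RE = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def googlebot_hits(lines):
    """Return a Counter of (status, path) for Googlebot requests only."""
    hits = Counter()
    for line in lines:
        if "Googlebot" not in line:
            continue  # ignore other user agents
        m = LOG_RE.search(line)
        if m:
            hits[(m.group("status"), m.group("path"))] += 1
    return hits

sample = [
    '66.249.66.1 - - [10/Jan/2013:10:00:00 +0000] "GET /gone-page HTTP/1.1" 410 512 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [10/Jan/2013:10:05:00 +0000] "GET /live-page HTTP/1.1" 200 2048 "-" "Googlebot/2.1"',
    '10.0.0.5 - - [10/Jan/2013:10:06:00 +0000] "GET /gone-page HTTP/1.1" 410 512 "-" "Mozilla/5.0"',
]
```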
The Google URL removal tool is not an option due to the manual nature of the submission.
-
I think you need to wait for Google to recrawl these pages... however, you can use the Google URL removal tool in Webmaster Tools...
-
Thanks,
To be clear - my question is not looking for recovery proposals but for implementation advice around shrinking the Google index size. We are talking about a scale of tens of thousands of pages. /Thomas
-
What about this approach - I am assuming that you know the exact date when the rankings fell.
You need to compare the traffic from Google for each page. Find the pages that suffered the most. Either remove them [exactly what you are doing] or completely rewrite them, adding nice images, videos, etc. - in short, make them more interactive.
Now locate the pages that are not that much affected. You need to make slight changes to them. Do not remove these pages.
Now locate the pages that have not been affected at all. If those pages are content heavy, you need to produce some more pages with well-written content.
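The triage above can be sketched as a simple bucketing of pages by traffic drop (the thresholds and figures are illustrative, not a rule):

```python
# Bucket pages by how much Google traffic they lost after the hit date.
# The thresholds and traffic figures are illustrative only.
def triage(before, after, heavy=0.6, light=0.2):
    """Classify each page as 'rework', 'tweak', or 'leave'."""
    buckets = {"rework": [], "tweak": [], "leave": []}
    for page, visits in before.items():
        drop = 1 - after.get(page, 0) / visits if visits else 0
        if drop >= heavy:
            buckets["rework"].append(page)   # rewrite or remove
        elif drop >= light:
            buckets["tweak"].append(page)    # slight changes only
        else:
            buckets["leave"].append(page)    # keep; maybe add content
    return buckets

before = {"/a": 1000, "/b": 500, "/c": 300}   # visits before the hit
after = {"/a": 100, "/b": 350, "/c": 290}     # visits after the hit
```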
Hope that helps.
-
Correct, it is intentional. The removed links have no link juice. The hope, though, is that an explicit 410 is a clearer signal for Google to remove the pages from the index.
I have been reading warnings about implementing a significant volume of 301s, as it could be considered unnatural.
-
Just curious, is there any reason you did a 410 instead of a 301? I think most webmasters would set up 301 redirects to the most relevant remaining page for each of the pages that you removed. With a 410, you're effectively dropping any backlinks that might have existed to those pages.