Panda Recovery - What is the best way to shrink your index and make Google aware?
-
We have been hit significantly by Panda and assume the reason is our large index, with some pages holding thin/duplicate content.
We have reduced our index size by 95% and have done significant content development on the remaining 5% of pages.
For the old, removed pages, we have installed 410 responses (page does not exist any longer) and made sure they are removed from the sitemap submitted to Google. However, after more than a month we still see the Google spider returning to the same pages, and Webmaster Tools shows no indication that Google is shrinking our index.
Are there more effective and automated ways to make Google aware of a smaller index size, in the hope of a Panda recovery? Potentially using the robots.txt file, the GWT URL removal tool, etc.?
Thanks /sp80
-
Hi. I would be curious to know if anyone else has experienced something similar and recovered from Panda. How long did it take you? Did you manually remove the pages, set up 410s or 404s, or create 301s?
I've been working for some time now on a site which has lost a great deal of traffic since July 2013. Over the past two months, a process has been underway to manually remove URLs from the index. The index has been cut in half, but it is still not at what it was pre-penalty. There are about 20,000 more pages to evaluate for removal before it reaches the level it was at before the massive traffic drop.
Any recovery or insight would be helpful.
-
Hi Sp80 (and group),
It's been about six months since you posted your Panda recovery question. I'm curious if you implemented Kerry22's suggestions, and what results you've seen. I hope it's worked out for you.
We're also dealing with removing thousands of pages of thin content (through 410s, keeping links up and sitemaps, as per Kerry's suggestion). This was a very helpful discussion to read.
Thanks,
Tom
-
Hi kerry,
Your post gives me some hope. I was hit by Panda in Feb. 2011 and lost 85% of my Google traffic. I made many changes to my site -- page deletions, redirects, added content, etc. I got a bump of 25% in September 2011 but lost that and more afterward.
We have an e-commerce gift site with 6000 pages. Is your site an e-commerce site?
I have not found a recovery story from any site like mine that was hit with that large a drop.
I hope your recovery relates to my situation.
-
Did Google process the 301s? In other words, are the old pages still in the index or not? If they processed the 301s eventually, you generally should be OK. If the old URLs seem stranded, then you might be best off setting up an XML sitemap with those old URLs just to kick Google a little. I don't think I'd switch signals and move from a 301 to a 404, unless the old pages are low quality, had bad links, etc.
Unfortunately, these things are very situational, so it can be hard to speak in generalities.
-
Hi Dr. Pete,
I know this is a late entry into this thread, but... what if we did all our content cutting in the wrong ways over the past year - is there something we could/should do now to correct for this? Our site was hit by Panda back in March 2012, and since then we've cut content several times. But we didn't use the good process you advocate - here's what we did when we cut pages:
1. We set up permanent 301 redirects for all of them immediately
2. Simultaneously, we always removed all links pointing to cut pages (we wanted to make sure users didn't get redirected all the time).
This is a far cry from what you recommend and what Kerry22 did to recover successfully. If you have some advice on the following questions, I'd definitely appreciate it:
- Is it possible Google still thinks we have this content on our site, or that we intend to bring it back, and that as a result we continue to suffer?
- If that is a possibility, then what can we do now (if anything) to correct the damage we did?
We're thinking about removing all of those 301s now, letting all the cut content return 404s, and making a separate sitemap of the cut content to submit to Google. Do you think it's too late or otherwise inadvisable for us to do this kind of thing?
Thanks in advance,
Eric
-
It might be worth exploring NOINDEX'ing the useful pages and 410'ing the non-useful ones, if only because sometimes a mix of signals is more palatable to Google. Any time you remove a swath of content with one method, it can trigger alarm bells. I'll be honest, though - these situations are almost always tricky and you almost always have to measure and adjust. I've never found a method that's right for all situations.
-
Thanks Pete,
I appreciate your input. In addition to the separate sitemap containing the known Google-indexed URLs we want deindexed, we have also reopened some crawl paths to these pages to see if there is a speed-up.
This is an undertaking carried out across 30 international properties, so we will be able to experiment with different measures on certain domains and see how they affect de-indexing speed, as we are tracking the numbers reported by Google daily.
I agree about the bad user experience of 410s as a dead end. We are mostly de-indexing as a means of recovering from Panda, but the pages we are trying to deindex are actually still useful to users - just thin and partially duplicative in content. We have decided to still display the content when such a page is reached, but return a status code of 410. Alternatively, it seems we could just set the robots tag to noindex, but my feeling is that the 410 approach will lead to faster deindexing - would you agree?
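To illustrate the idea, here is a minimal sketch using Python/Flask purely as an example - the paths and the render_page helper are placeholders, not our actual code; the same logic applies to whatever stack actually serves the site:

```python
# Minimal sketch: visitors still see the page body, but the HTTP status code
# tells crawlers the URL is permanently gone. Paths and helper are examples.
from flask import Flask

app = Flask(__name__)

# Hypothetical set of URL paths we want deindexed.
DEINDEXED_PATHS = {"/old-thin-page", "/duplicate-product-variant"}


def render_page(path: str) -> str:
    # Placeholder for however the site normally builds its HTML.
    return f"<html><body><h1>Content for {path}</h1></body></html>"


@app.route("/<path:slug>")
def page(slug):
    path = "/" + slug
    html = render_page(path)
    if path in DEINDEXED_PATHS:
        # Content is unchanged for users; only the status code differs.
        # The softer alternative mentioned above would be to return 200 and
        # put <meta name="robots" content="noindex"> in the HTML instead.
        return html, 410
    return html
```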
Also, if you have any expertise to share on how to compile a more comprehensive list of URLs indexed by Google for a particular domain, other than scraping the web interface using the site:domain.com query approach (which only returns a small subset compared to the stated total number of indexed pages), please let me know.
Thanks again /Thomas
-
If you want to completely remove these pages, I think Kerry22 is spot on. A 410 is about the fastest method we know of, and her points about leaving the crawl paths open are very important. I completely agree with leaving them in a stand-alone sitemap - that's good advice.
Saw your other answer, so I assume you don't want to 301 or canonical these pages. The only caveat I'd add is user value. Even if the pages have no links, make sure people aren't trying to visit them.
This can take time, especially at large scale, and a massive removal can look odd to Google. This doesn't generally result in a penalty or major problems, but it can cause short-term issues as Google re-evaluates the site.
The only way to speed it up: if the pages have a consistent URL parameter or folder structure, you may be able to do a mass removal in Google Webmaster Tools. This can be faster, but it's constrained to similar-looking URLs. In other words, there has to be a pattern. The benefit is that you can make the GWT request on top of the 410s, so that can sometimes help. Any massive change takes time, though, and often requires some course correction, I find.
-
I think the second sitemap will be fine. I wouldn't add a page with just links, as that is the type of page Panda doesn't like.
Regarding sets of pages - we started by going into the search results and found a lot of content that shouldn't have been indexed.
We then looked manually at the content on subsets of pages, found pages that were thin and very similar to others (at the product level), and either made them more unique or removed them. Tools like this also help identify similar pages across products/categories: http://www.copyscape.com/compare.php
It's only been 2 weeks, but so far it looks like we have pretty much 80% recovered and are still improving - we're still watching the numbers, and over Christmas and New Year traffic is obviously quiet. I think 100% recovery depends on too many variables, like whether you continue link building during your time fixing the site, losing links by removing pages, adding more pages, competitors gaining authority/rankings, etc.
-
Hey Kerry,
Additional pages were added in April, which is also when our sites started seeing a decrease in rankings - so the timing adds up.
The drops starting in June have no clear root cause for us - we started our de-indexation process at the beginning of December.
We are thinking of speeding up the pickup of these changes exclusively through a second Google sitemap, as anything else would require a very artificial landing page with a high number of links at this point. Would you be concerned about relying exclusively on a sitemap rather than keeping the unwanted pages linked from the site's linking structure?
Further, I am interested in how you determined the set of pages you knew were part of the Google index and needed to be delisted. It appears the best way to do so is to scrape the Google search results returned for a domain and build up a list that way.
Did you recover completely to pre-Panda levels?
Best /Thomas
-
Hi
No problem, I am happy to help!
Yes, the graph declined sloooowly, but only when we started removing pages. This is half the problem - you have to wait for Google to find the changes. The waiting is frustrating, as you don't know if what you have done is right, but the stuff I listed will help speed it up. We literally had to wait until none of the pages could be found in the index.
I see a big increase in your indexation from April to May 2012. When did you get hit, and what happened over that month - did you add a lot of new pages/products? Are those drops in indexation from June to Dec 2012 from you removing pages, or did the drop just start to 'happen' and then you got hit?
-
Kerry,
Thank you for your amazing response to the deindexing question I had. It was incredibly well written and very easy to follow. Very happy to hear you were able to recover.
You make a really good point about allowing Google to still be able to reach the pages. When we started reviewing our site structure we also changed our linking structure, so while all the pages we no longer want in the index return a 410, they certainly aren't all discoverable. Our assumption was that Google would revisit them sooner or later, given that they are part of the index, but I can definitely imagine that things would be sped up by compiling a dedicated sitemap.
A big question I have for you is how the Index Status graph in GWT adjusted for you over time. We started our restructuring at the start of January and we can't see a difference yet: http://imgur.com/eKBJ0
Did your graph decline step by step?
Thanks again
-
Hi
We just recovered from Panda - it took us 6 months. The best way to do this is to 410 or 404 your pages, but don't remove the links to them. If you remove the links to those pages, Google won't be able to find them and know that you have removed them.
Here are the steps you need to follow to get the changes indexed:
1. Remove the pages but leave the links to them on your site (we left these discreetly at the bottom of the pages they were on, so users wouldn't find them easily, but Google would). You will see Google slowly start to pick up the number of 404s/410s in Webmaster Tools - don't worry about so many 410s being picked up; it won't hurt you. Don't nofollow the links, remove the links, or block the pages with robots.txt. You want Google to find your changes.
2. Revise your sitemaps - take the 410 pages out of the original sitemap, add them to a new, separate sitemap, and submit this in Webmaster Tools (a minimal example of such a sitemap is sketched after these steps). Then you can see the true indexation rates of your current pages (it gives you a good idea of how many are indexed vs. not, and whether you still have issues). You can also track the deindexation of your 410s separately and see how fast they are being deindexed - be patient, it takes time. We only recovered once they were all deindexed.
3. Use sitemaps as well as internal links. Our decision to do both was due to the fact that some deep pages are only crawled periodically, and we wanted Google to find the changes quickly. This is useful: http://www.seomoz.org/blog/logic-meet-google-crawling-to-deindex
4. Then wait. If all your pages are removed and you are still affected by Panda, start looking for more duplicate content, and look with an objective eye at the pages that still exist. You may be surprised by what you find. The process took us 6 months because we had to wait for Google to pick up our changes, and then revise, tweak, look for more to do, etc.
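As a rough illustration of the separate sitemap from step 2 (the example.com URLs are placeholders), it can be as simple as a standard sitemap file that lists only the removed pages and is submitted in Webmaster Tools alongside your main sitemap:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Stand-alone sitemap containing only the removed (410) URLs, so their
     deindexation can be tracked separately from the live pages.
     Example URLs only. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/old-thin-page</loc>
  </url>
  <url>
    <loc>http://www.example.com/removed-duplicate-product</loc>
  </url>
</urlset>
```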
I will write a case study soon, but in the meantime hope this helps you! I know how frustrating it is.
PS. If you are losing link value from 410s, 410 first, recover from Panda, and then 301 the select pages that have links to get the link juice back. It will be faster that way.
-
Google has already been recrawling those pages for the last few months and keeps returning to the pages that return 410. We have very explicit logging configured.
The Google URL removal tool is not an option due to the manual nature of the submission process.
-
I think you need to wait for Google to recrawl these pages... however, you can use the Google URL removal tool in Webmaster Tools.
-
Thanks,
To be clear - my question is not looking for recovery proposals but for implementation advice around shrinking the Google index size. We are talking about a scale of tens of thousands of pages. /Thomas
-
What about this approach - I am assuming that you know the exact date when the rankings fell.
You need to compare the traffic from Google for each page and find the pages that suffered the most (see the rough sketch below). Either remove them [exactly what you are doing] or completely rewrite them, adding nice images, videos, etc. - in short, make them more interactive.
Now locate the pages that were not affected as much. Make slight changes to them, but do not remove these pages.
Finally, locate the pages that were not affected at all. If those pages are content-heavy, produce some more pages with well-written content.
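As a rough sketch of that first comparison step (assuming you can export per-page Google traffic for equal periods before and after the drop - the file name and columns below are made up):

```python
# Rough sketch: bucket pages by how much Google traffic they lost after the
# hit date. The CSV file and its columns ("page", "visits_before",
# "visits_after") are hypothetical - use whatever your analytics export gives you.
import pandas as pd

df = pd.read_csv("organic_traffic_by_page.csv")

# Percentage change per page; pages with no prior traffic are ignored.
df = df[df["visits_before"] > 0].copy()
df["change"] = (df["visits_after"] - df["visits_before"]) / df["visits_before"]

# The -50% / -10% thresholds are arbitrary examples - adjust to your data.
hit_hard = df[df["change"] < -0.50]                                      # remove or rewrite
slightly_hit = df[df["change"].between(-0.50, -0.10, inclusive="left")]  # tweak, don't remove
unaffected = df[df["change"] >= -0.10]                                   # leave as-is

print(len(hit_hard), "pages hit hard;", len(slightly_hit), "slightly hit;",
      len(unaffected), "unaffected")
```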
Hope that helps.
-
Correct, it is intentional. The removed pages have no link juice. The hope, though, is that an explicit 410 is a clearer signal for Google to remove the pages from the index.
I have also been reading warnings about implementing a significant volume of 301s, as it could be considered unnatural.
-
Just curious, is there any reason you did a 410 instead of a 301? I think most webmasters would set up 301 redirects to the most relevant remaining page for each of the pages that you removed. With a 410, you're effectively dropping backlinks that might have existed to any of the pages that you had.