How to Clean Up the Google Index After a Website Has Been Hacked
-
We have a client whose website was hacked, and some troll created thousands of viagra pages, all of which were indexed by Google. See the screenshot for an example. The site has been cleaned up completely, but I wanted to know if anyone can weigh in on how we can clean up the Google index. Are there extra steps we should take? So far we have gone into Webmaster Tools and submitted a new sitemap.
-
As has been suggested, you can request the removal of pages in Google Webmaster Tools, and you should keep any WordPress site and its plugins up to date.
To add to this, you might want to look at something like Cloudflare as an extra layer to protect your client's site. We've been using it for a year now, and it's made a massive difference, both to performance and security.
-
It doesn't have to be so tedious. If you have a list of URLs, you can use the bulk removal extension for Chrome, found here: https://github.com/noitcudni/google-webmaster-tools-bulk-url-removal
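If you need to build that URL list first, one quick way is to filter a crawl or sitemap export down to the injected spam pages. A minimal sketch, assuming the hacked URLs share a recognizable pattern (the file name, patterns, and URLs below are hypothetical placeholders, not from the original post):

```python
# Build a deduplicated removal list from a crawl export (e.g. a Screaming
# Frog "all URLs" export, one URL per line).
SPAM_PATTERNS = ("viagra", "cialis")  # adjust to whatever the hack injected

def extract_spam_urls(lines, patterns=SPAM_PATTERNS):
    """Return sorted, deduplicated URLs that match any spam pattern."""
    spam = set()
    for line in lines:
        url = line.strip()
        if url and any(p in url.lower() for p in patterns):
            spam.add(url)
    return sorted(spam)

# Example usage with an inline stand-in for the export file:
crawl_export = [
    "http://example.com/about",
    "http://example.com/buy-viagra-online",
    "http://example.com/buy-viagra-online",  # duplicates are collapsed
]
for url in extract_spam_urls(crawl_export):
    print(url)
```

The resulting list can then be fed to the bulk removal extension.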
-
If you submitted a new sitemap, then you should be fine, but "in case of emergency" you can try listing the individual URLs in Google Webmaster Tools > Google Index > Remove URLs. It's a tedious process, but something else you can try.
Sucks, doesn't it? Wordfence or another WP security plugin is well worth the time to install and monitor. Good luck.
-
Hi,
Just to let you know, it's quite easy to find the domain of the site in question from the screenshot you provided.
As you've created a new sitemap, submitted it to Google, and removed all the pages so they return 404s, you should be fine. The pages will just drop out of the index in time. You might also want to add the address of your XML sitemap to your robots.txt file, as it's not there at present.
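For reference, advertising the sitemap is a one-line addition to robots.txt (example.com stands in for the actual domain and sitemap path):

```text
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
```

The Sitemap directive is independent of the User-agent blocks and can appear anywhere in the file.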
I crawled the site using Screaming Frog and couldn't see any dodgy-looking pages remaining. If you'd like to take an extra step, you could submit removal requests for the pages in Webmaster Tools, under Google Index > Remove URLs. To prevent this happening again, make sure you keep your WordPress installation and all plugins up to date.
Related Questions
-
Google tries to index non-existing language URLs. Why?
Hi, I am working for a SaaS client. He uses two different language versions on two different subdomains: de.domain.com/company for German and en.domain.com for English. Many thousands of URLs have been indexed correctly, but Google Search Console tries to index URLs that never existed and still don't exist: de.domain.com/en/company, en.domain.com/de/company, and thousands more using /en/ or /de/ in between. We never use this variant, and calling these URLs correctly brings up a 404 page (but with the wrong response code; we're fixing that 😉). But Google tries to index these kinds of URLs again and again, and I couldn't find any source for them. No website uses them as outgoing links, etc. We do see in our log files that a Screaming Frog installation and moz.com's Open Site Explorer tried to access them earlier. My question: how does Google come up with these? From where did they get URLs that (to our knowledge) never existed? Any ideas? Thanks 🙂
Technical SEO | TheHecksler
-
Page missing from Google index
Hi all, One of our most important pages seems to be missing from the Google index. A number of our collections pages (e.g., http://perfectlinens.com/collections/size-king) are thin, so we've included a canonical reference in all of them to the main collection page (http://perfectlinens.com/collections/all). However, I don't see the main collection page in any Google search result. When I search using "info:http://perfectlinens.com/collections/all", the page displayed is our homepage. Why is this happening? The main collection page has a rel=canonical reference to itself (auto-generated by Shopify, so I can't control that). Thanks!
Technical SEO | leo92
-
Lots of Pages Dropped Out of Google's Index?
Until yesterday, my website had about 1200 pages indexed in Google. I made lots of changes: removed low-quality content, rewrote passable content to make it better, wrote high-quality content, got lots of likes and shares on social networks, etc. Now this morning I see that, of 1252 pages submitted, only 691 are indexed. Is that a temporary situation related to the recent updates? Is anyone else seeing this? How should I interpret it?
Technical SEO | sbrault74
-
How to optimize for different Google search domains (google.de, google.ch)?
We use the German language and (.com) domains for our sites. I rank well in google.com, but not so well in google.de or google.ch, where my competitors rank much better. I checked most of their outbound links but got little information. Do links from .de domains, or links from sites hosted in Germany, help rankings in a specific Google search domain (google.de, google.ch)? Or are there other factors I've missed? Please help.
Technical SEO | sunvary
-
Indexing Issue
Hi, I am working on www.stjohnswaydentalpractice.co.uk. Google only seems to be indexing two of the pages when I search site:www.stjohnswaydentalpractice.co.uk. I have added the site to Webmaster Tools and created a new sitemap, which shows that only two of the pages have been submitted. Can anyone shed any light on why these pages are not being indexed? Thanks, Faye
Technical SEO | dentaldesign
-
CDN Being Crawled and Indexed by Google
I'm doing an SEO site audit, and I've discovered that the site uses a Content Delivery Network (CDN) that's being crawled and indexed by Google. Two subdomains from the CDN are being crawled and indexed, and a small number of organic search visitors have come through them, so in a small number of cases the CDN-based content is outranking the root domain. It's a huge duplicate content issue (tens of thousands of URLs being crawled). What's the best way to prevent the crawling and indexing of a CDN like this? Exclude it via robots.txt? Additionally, the use of relative canonical tags (instead of absolute ones) appears to be contributing to this problem: as I understand it, these canonical tags tell the search engines that each subdomain is the "home" of the content/URL. Thanks! Scott
Technical SEO | Scott-Thomas
-
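Since robots.txt applies per-host, blocking crawling of a CDN subdomain means serving a disallow-all robots.txt from the CDN hostname itself, while the root domain keeps its own permissive file. A sketch, with cdn1.example.com as a hypothetical CDN hostname:

```text
# Served at https://cdn1.example.com/robots.txt (hypothetical hostname);
# the root domain's robots.txt is unaffected.
User-agent: *
Disallow: /
```

Note that a robots.txt disallow stops crawling but does not by itself remove already-indexed URLs; absolute rel=canonical tags pointing at the root domain address that part.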
Why is Google not picking up my meta description?
Google is populating the description itself. How can I control these search snippets?
Technical SEO | greyniumseo
-
Google Website Optimizer
So if you are A/B testing two pages, index.html and indexB.html, shouldn't I nofollow indexB.html? It has all the same content, just a different design.
Technical SEO | tylerfraser