Spam posts indexed, what to do now?
-
Hi,
So we had a staffing problem last week and let some spam posts (cheap Nike jerseys etc.) slip through, and they also got indexed by Google. (We just checked and there are like 105 already indexed.)
Of course we have now removed all of these spam posts, but what is the best practice at this point? Are we supposed to do anything else to remove them from Google's index (maybe through Google Webmaster Tools)? We have already edited robots.txt to disallow those pages as a quick remedy.
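For reference, the quick robots.txt change we made looks roughly like this (the path prefix below is a placeholder, not our real URL structure):

    # Temporary block while we cleaned up; the path is illustrative only
    User-agent: *
    Disallow: /posts/cheap-nike-jerseys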
And finally, could this have done any harm? We were quite slow to notice and remove these posts; they were live for about 12 days.
thanks
-
Good to know
-
Hi,
Thanks for the comprehensive answer. We don't have any vulnerabilities; it was all my fault. I completely forgot that I had given administrative access to one of our former content managers, who had temporarily allowed anonymous users to post on that section of the site. Once he left, we forgot to revoke that permission and never really noticed those posts until today.
-
Haha, I just saw you said "all those links had auto-nofollow on them".
NO PROBLEM MAN! Rest easy! You cannot get penalized for nofollow links!
-
Thanks for the quick response. We're requesting URL removal for all those URLs now; I hope that takes care of it. No sign of a ranking drop at the moment. We're lucky those pages were automatically excluded from our sitemap.xml and all those links had auto-nofollow on them. Time to consider buying a service like Mollom, I guess.
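In case it helps anyone reading later: the auto-nofollow just means user-submitted links on our site are rendered roughly like this (the URL is made up), so they shouldn't pass any link equity:

    <!-- illustrative markup for a user-posted link on our site -->
    <a href="http://example.com/cheap-jerseys" rel="nofollow">cheap jerseys</a>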
-
Do you know how the spam posts were published on your site? Make sure the vulnerability is fixed so it doesn't happen again. Once the spam posts you found have been deleted from your site, you shouldn't have to do anything more, since they will fall out of Google's index on their own. Keep an eye on Google Webmaster Tools, though, to see if any more spam pages pop up on Google's radar, and then manually remove them.
Here is Google's official answer - http://support.google.com/webmasters/bin/answer.py?hl=en&answer=164734
When a page is updated or removed, it will automatically fall out of our search results. You don’t need to do anything to make this happen.
However, if you urgently need to remove content from Google's search results (for example, if you’ve already removed, updated, or blocked a page accidentally displaying confidential information like credit card numbers), you can request expedited removal of those URLs.
Our removal tools are intended for pages that urgently need to be removed—for example, if they contain confidential data that was accidentally exposed. Using the tools for other purposes may cause problems for your site.
Another Google resource if your site was actually hacked or compromised - http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1269119
To take your site "offline" after being hacked: if your site was hacked and you want to get rid of bad URLs that got indexed, use the URL removal tool to remove any new URLs that the hacker created (for example, http://www.example.com/buy-cheap-cialis-skq3w598.html). But we don't recommend removing your entire site, or removing URLs that you'll eventually want indexed. Instead, clean up the hacking and let us recrawl your site.
-
So someone was posting articles on your site that linked out to other sites, like paid links?
If you removed the posts, there's no need to block them in robots.txt; they no longer exist, so they won't get crawled anymore. Yes, definitely request removal in the WMT URL removal tool and get those pages out of Google's index ASAP.
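One extra option if you want Google to drop them faster: have the deleted URLs return a 410 Gone rather than a plain 404. A rough sketch for Apache mod_rewrite in .htaccess, assuming the spam posts shared a URL pattern (the pattern below is made up, so adjust it to whatever your spam URLs actually looked like):

    # Return 410 Gone for the deleted spam posts; the pattern is illustrative only
    RewriteEngine On
    RewriteRule ^posts/cheap-nike-jerseys- - [G,L]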
You're probably OK. Just keep your fingers crossed, keep an eye on rankings, and run a tight ship so that doesn't happen again; it's definitely something you can get penalized for. Good thing you caught it quickly.
EDIT: if you meant that you let spam comments get posted live/approved by the admin, then all you can do is remove the spammy posts and make sure your comment settings require admin approval before comments go live. No need to block in robots.txt or remove URLs in that case, but it doesn't hurt. Once the links are off your site, you should be fine.