Using 410 To Remove URLs Starting With Same Word
-
We had a spam injection a few months ago. We successfully cleaned up the site and resubmitted to Google. I recently received a notification showing a spike in 404 errors.
All of the URLS have a common word at the beginning injected via the spam:
sitename.com/mono
sitename.com/mono.php?buy-good-essays
sitename.com/mono.php?professional-paper-writer
There are about 100 total URLs with the same syntax, all with the word "mono" in them. Based on my research, it seems it would be best to serve a 410. I wanted to know what line of .htaccess code would do that in bulk for any URL that has the word "mono" after sitename.com/.
-
Martijn -
Thanks for your reply. I tried the code you provided; however, it still returned a 404 error. I was able to get the following to work properly - any drawbacks to doing it this way?
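# Return 410 Gone for any path beginning with "mono" (case-insensitive)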
RewriteRule ^mono(.*)$ - [NC,R=410,L]
The browser now shows the following any time the word "mono" appears immediately after "sitename.com/":
The requested resource
/mono.php
is no longer available on this server and there is no forwarding address. Please remove all references to this resource.
Additionally, a 410 Gone error was encountered while trying to use an ErrorDocument to handle the request.
-
Thanks for the detailed response. Yes, there are some negative-SEO backlinks to some of the URLs created during the spam injection. I've seen a few backlinks from other forum sites pointing to one of the spam-created URLs, which has hurt our rankings - for example, this URL created on our site:
sitename.com/mono.php?best-resume-writing-service-for-it-professionals
I was confused by the following in your response: "If you can serve the 410s on a custom 410 page which also gives the Meta no-index directive, that will be a very strong signal to Google indeed that those aren't proper pages or fit for indexation"
- Is that all done via the .htaccess file? Code? Or is the meta no-index directive done in the robots.txt?
- Custom 410 page? I've seen some 404 pages, but not custom 410 pages. Would that be similar to a new 404 page?
Thanks for your response.
-
There are so many ways to deal with this. If these were indeed spam URLs, someone may have attached negative-SEO links to them (to water down your site's ranking power). As such, redirecting these URLs back to their parents could pull spam metrics 'onto' your site, which would be really bad. I can see why you are thinking about using a 410 (Gone).
Using canonical tags to stop Google from indexing those bad parameter-based URLs could also be helpful. If you 'canonicalled' those addresses to their non-parameter-based parents, Google would usually drop those pages from its index (and, over time, crawl them less often). When a URL 'canonicals' to another, different page, it cites itself as non-canonical and thus gets de-indexed (usually - the canonical tag is only a hint, not a strict directive). Again though, canonical tags interrelate pages. If those spam URLs were backed by negative-SEO attacks, the usage of canonical tags would (again) be highly inadvisable (leaving your 410 suggestion as the better method).
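For illustration only (a sketch, using the sitename.com placeholder from above), the parameter-based URL would need to serve a tag like this in its head, pointing at its clean parent:
<link rel="canonical" href="https://sitename.com/mono.php">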
Google listens for wildcard rules in your robots.txt file, though it runs very simplified pattern matching (essentially just the "*" wildcard and the "$" end-of-URL anchor). In your robots.txt you could do something like:
User-agent: *
Disallow: /mono.php?*
That would cull Google's crawling of most of those URLs, but not necessarily the indexation. This would be something to do after Google has swallowed most of the 410s and 'got the message'. You shouldn't start out with this: if Google can't crawl those URLs, it won't see your 410s! Just remember this, so that when the issue is resolved you can smack this down and stop the attack from occurring again (or at least, it will be preemptively nullified).
Finally you have Meta "No-Index" tags. They don't stop Google from crawling a URL, but they will remove those URLs from Google's index. If you can serve the 410s on a custom 410 page which also gives the Meta no-index directive, that will be a very strong signal to Google indeed that those aren't proper pages or fit for indexation.
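To sketch that out (the file name /gone.html is just an example - use whatever fits your setup), the .htaccess side would be a single line:
ErrorDocument 410 /gone.html
and /gone.html itself would carry the no-index directive:
<!DOCTYPE html>
<html>
<head>
<meta name="robots" content="noindex">
<title>Gone</title>
</head>
<body>
<p>This page has been permanently removed and there is no forwarding address.</p>
</body>
</html>
Apache keeps the 410 status code when serving a local ErrorDocument, so Google still sees the 410 alongside the no-index tag. Just make sure the error page's own URL doesn't match your "mono" rule, otherwise you'll get the secondary "error while trying to use an ErrorDocument" message quoted earlier in this thread.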
So now we have a bit of an action plan:
- 410 the bad URLs alongside a Meta no-index directive served from the same URL
- Once Google has swallowed all that (may be some weeks or just over 1 month), back-plate it with robots.txt wildcards
With regards to your original question (sorry I took so long to get here), I'd use something like:
RedirectMatch 410 ^/mono\.php
I think .htaccess swallows proper regex. The backslash says "whatever character follows me, treat that character as a literal value and do not apply its general regex function" - it's the regex escape character. Note that RedirectMatch tests only the URL path (the query string isn't part of the match), so this one pattern catches all of the parameter variants. This would go in the .htaccess file at the root of your site, not in a subdir .htaccess file.
Please sandbox test my recommendation first. I'm really more of a technical data analyst than a developer!
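A quick way to sandbox test (assuming you have curl available) is to request one of the affected URLs and check the status line:
curl -I "https://sitename.com/mono.php?buy-good-essays"
The first line of the output should read "HTTP/1.1 410 Gone" once the rule is active.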
This document seems to suggest that a .htaccess file will properly swallow "\" as the escape character:
https://premium.wpmudev.org/forums/topic/htaccess-redirects-with-special-characters
Hope this helps!
-
Hi,
Have you also excluded these pages from the robots.txt file so you can make sure that they're also not being crawled?
The code for the redirect looks something like this:
RewriteEngine on
RewriteRule ^/mono* - [G,NC]
Martijn.