Hacked website - Dealing with 301 redirects and a large .htaccess file
-
One of my client's websites was recently hacked and I've been dealing with the after-effects. The website is now clean of malware and I have already appealed to Google about the malware issue. The current issue is dealing with the 20,000+ crawl errors - garbage links that were created by the hack.
How does one go about creating the 301 redirects needed for all those 404 crawl errors? I'm already noticing increased load times on the website due to a rather large .htaccess file with a couple of thousand 301 redirects in it already, and I fear the site's performance - and its SEO - will take a hit as a result.
-
This is the correct answer.
To expand on this slightly: just make sure none of the 404s are internal (i.e. there are no links on your own site pointing to one of these dodgy pages as a result of the hack) and you're all good.
Remove the entries from your .htaccess file so the server doesn't have to parse them on every request, and let any external links to dodgy pages 404. This sort of circumstance is exactly what 404s are made for!
The only site at risk of a ranking drop from these 404s is the one pointing to those dodgy pages - and who cares about your hackers' rankings?
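If the hacked URLs do share an obvious pattern and you'd rather answer them explicitly, one pattern rule costs almost nothing compared with thousands of one-off entries. A minimal sketch, assuming Apache's mod_alias and a made-up spam path:
```
# Hypothetical: one pattern rule instead of thousands of individual entries.
# "/cheap-pills/" is a made-up path - substitute whatever pattern the
# injected URLs actually share. 410 (Gone) marks them permanently removed.
RedirectMatch 410 ^/cheap-pills/
```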

-
So the robots.txt part could come at the end, but in my case it worked fine as described too.
-
Just a correction here. I agree with all the items in the list below, with one very, very, very, very, very important change.
DO NOT set the corrected URLs to Disallow in your robots.txt.
If you do not allow Google to crawl the pages, Google will never see that the links were removed or that the pages now return a 4xx. If you disallow all those pages, all the clean-up work you have done will go unseen by Google and will have been for naught.
If you later want to disallow those pages, that's fine - but you need to let Google see your clean-up work first.
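To make the sequencing concrete, here's a sketch of the eventual robots.txt entry (the directory name is hypothetical) - to be added only after Search Console shows the cleaned-up URLs returning 4xx and falling out of the index:
```
# robots.txt - add only AFTER Google has recrawled the cleaned-up URLs.
# Disallowing earlier hides your 404/410 responses from Googlebot.
User-agent: *
Disallow: /cheap-pills/
```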
-
Hi
I just finished a similar job.
What you should do:
- collect all the bad "pages" and the links pointing to them
- look for a pattern, such as a common injected directory
- serve a 410 (Gone) for those directories, not a 404 (see the .htaccess sketch below)
- set robots.txt to disallow those directories (but note the timing caveat above: let Google see the clean-up first)
- push all the pages and links to be recrawled
- request removal from the Google index
- done (you'll need to wait some time)
The important thing is to get rid of all the bad links pointing to those pages. If you do that, there'll be no further issues. Bear in mind, though, that this could be ongoing negative SEO. If you need help with that, PM me.
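For the 410 step, a minimal .htaccess sketch, assuming Apache's mod_rewrite and a made-up directory name:
```
# Serve 410 Gone for everything under an injected directory (name is
# hypothetical). The [G] flag is mod_rewrite's shorthand for a 410.
RewriteEngine On
RewriteRule ^injected-spam-dir/ - [G]
```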
Krzysztof
-
If they are garbage links, why are you redirecting them at all? Let them 404. Pages that aren't found do not, in and of themselves, lead to penalties.
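If you do let them 404, just make sure the server returns a real 404 status along with a helpful page. A one-line .htaccess sketch, assuming a /404.html exists on the site:
```
# Show a custom error page while still returning the 404 status code.
ErrorDocument 404 /404.html
```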
Related Questions
-
301 Redirects for Multiple Language Sites in htaccess File
Hi everyone, I have a site on a subdomain that has multiple languages set up at the domain level: https://mysite.site.com, https://mysite.site.fr, https://mysite.site.es, https://mysite.site.de, etc. We are migrating to a new subdomain and I am trying to create 301 redirects within the .htaccess file, but I am a bit lost on how to do this, as it seems you have to go from a relative URL to an absolute one - which would be fine if I were only doing this for the English site, but I'm not. It doesn't seem like I can go from an absolute URL to an absolute URL - but I could be wrong. I am new to editing the .htaccess file, so I could definitely use some advice here. Thanks.
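You can go from absolute URL to absolute URL by keying each rule off the requested hostname. A sketch, assuming Apache's mod_rewrite - the new hostnames are placeholders, and the condition/rule pair is repeated per ccTLD:
```
# Hypothetical: per-host 301s that preserve the request path.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^mysite\.site\.fr$ [NC]
RewriteRule ^(.*)$ https://newsub.site.fr/$1 [R=301,L]
RewriteCond %{HTTP_HOST} ^mysite\.site\.de$ [NC]
RewriteRule ^(.*)$ https://newsub.site.de/$1 [R=301,L]
```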
Intermediate & Advanced SEO | amberprata
-
301 redirect hops from non-https and www
It's best practice to minimize the number of 301 redirect hops - ideally only one. It's also best practice to 301 redirect (or at least rel=canonical) your non-https and/or your non-www (or www) URLs to the canonical protocol/subdomain. The simplest (and possibly most common) way to implement canonical protocol/subdomain redirects is through a load balancer, or before your app processes the request - both of which just blanket-301 to the canonical domain/protocol regardless of whether the path exists. In which case, you could have:
- Two hops, i.e. hop #1 http://example.com/foo to https://example.com/foo, hop #2 https://example.com/foo to https://example.com/bar.
- A 301 to a 404. Let's say https://example.com/dog never existed, but somebody for whatever reason linked to it (maybe a typo). If I request https://www.example.com/dog, the load balancer would 301 to a 404 page.
Either scenario should be fairly rare. However, you can't control how people link to you. Should I care about either scenario above? I could have my app check whether the page exists before forwarding, but that code could get complicated.
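One common way to avoid the first scenario is to test protocol and hostname together, so canonicalization costs at most one hop. A minimal sketch, assuming Apache's mod_rewrite with TLS terminated on the web server (behind a load balancer you would typically test X-Forwarded-Proto instead):
```
# Collapse protocol + www canonicalisation into a single 301 hop.
# example.com is a placeholder hostname.
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```
This doesn't address the second scenario - only the application (or a lookup at the edge) can know whether a path exists before redirecting.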
Intermediate & Advanced SEO | dsbud
-
Switching from HTTP to HTTPS: 301 redirect or keep both & rel canonical?
Hey Mozzers, I'll be moving several sites from HTTP to HTTPS in the coming weeks (same brand, multiple ccTLDs). We'll start with a low-traffic site and test it for 2-4 weeks to see the impact before rolling out across all 8 sites. Ideally, I'd like to simply 301 redirect the HTTP version of each page to the HTTPS version (to get that potential SEO rankings boost). However, I'm concerned about a potential drop in rankings, links and traffic. As an alternative to the 301 redirect approach, I'm considering keeping both sites live and accessible, and adding rel=canonical on the HTTPS pages pointing to HTTP, so that Google keeps the current pages/links indexed as they are today (in this case, HTTPS would be more for UX than for SEO). Has anyone tried the rel=canonical approach, and if so, what were the results? Do you recommend it? Also, for those who have implemented HTTPS, how long did it take Google to index the HTTPS pages in place of the older HTTP pages?
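For reference, the 301 route is usually only a few lines of .htaccess. A sketch, assuming Apache with TLS terminated on the same server:
```
# Blanket HTTP -> HTTPS 301, preserving hostname and path.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```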
Intermediate & Advanced SEO | Steven_Macdonald
-
I'm a newb: built a website with Wix, want to redirect it to a domain I own, but am reading that Wix is bad for this
Hi, I am building this site for my boss http://charlesfridmanpr.wix.com/real-estate and am still working on it. I'm getting close to the stage where I want to redirect it to the URL we want to use, but in reading these forums, I see that because all of the subpages (?) have a # in them, they will not be read or indexed by Google. I am very new to this, and while it may not look like it, the website has taken me quite a while to design. Is there a way to fix this? We want to appear high up for a non-competitive keyword. Thanks
Intermediate & Advanced SEO | Charlesfridmanpr
-
Is it a problem to use a 301 redirect to a 404 error page, instead of serving a 404 page directly?
We are building URLs dynamically with Apache rewrite. When we detect that a URL matches certain valid patterns, we serve a script which may then detect that the combination of parameters in the URL does not exist. If this happens, we produce a 301 redirect to another URL which serves a 404 error page. So my doubt is the following: do I have to worry about redirecting (301) to a 404 page rather than serving the 404 directly? Will this lead to the erroneous original URL staying in the Google index longer than if I served the 404 directly? Some context: it is a site with about 200,000 web pages, and we currently have 90,000 404 errors reported in Webmaster Tools (even though only 600 were detected last month).
Intermediate & Advanced SEO | lcourse
-
301 doesn't redirect a page that ends in %20, and others are redirected with ?q= appended
I have a product page whose URL ends in /product-name%20 that I'm trying to redirect in this way:
Redirect 301 /products/product-name%20 http://www.site.com/products/product-name
And it doesn't redirect at all. The others, those with %20, are being redirected to a URL that is a hybrid of old and new: http://www.site.com/products/product-name?q=old-url. I'm using the Drupal CMS, and it may be creating rules that counter my entries.
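One likely explanation, offered as a guess: Apache matches Redirect paths against the percent-decoded URL, so %20 is a literal space, and an unquoted path argument can't contain one. Two variants worth trying:
```
# %20 decodes to a trailing space, so quote the path ...
Redirect 301 "/products/product-name " http://www.site.com/products/product-name
# ... or match the whitespace with a regex instead:
RedirectMatch 301 "^/products/product-name\s$" http://www.site.com/products/product-name
```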
Intermediate & Advanced SEO | Brocberry
-
How to stop Google crawling after 301 redirect?
I removed all the pages from my old website and set up 301 redirects to the new website. I had verified the old website in Google Webmaster Tools with the HTML verification file, which lets me track data and the presence of its pages in Google search. I assumed Google would stop crawling and de-index all the pages after the 301 redirects, which I set up three months ago. Yet I can still see Googlebot activity on the old website in Google Webmaster Tools. How is this possible? How can Google still crawl removed pages?
Intermediate & Advanced SEO | CommercePundit
-
301 - should I redirect entire domain or page for page?
Hi, We recently enabled a 301 on our domain from our old website to our new website. On the advice of fellow Mozzers, we copied the old site exactly to the new domain, then did the 301 so that the sites are identical. The question is: should we be doing the 301 as a whole-domain redirect, i.e. www.oldsite.com is now www.newsite.com, or individually setting each page, i.e. www.oldsite.com/page1 is now www.newsite.com/page1, etc., for each page on our site? Remember that both old and new sites (for now) are identical copies. Also, we set the 301 about 5 days ago and have verified it's working, but haven't seen a single change in rank for either the old site or the new - is this because Google likely hasn't re-indexed yet? Thanks, Anthony
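When the two structures are identical, the options converge: one domain-wide rule that preserves the path is effectively a page-for-page redirect. A sketch, assuming Apache's mod_rewrite (hostnames are your placeholders):
```
# Map every old URL to the same path on the new domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?oldsite\.com$ [NC]
RewriteRule ^(.*)$ http://www.newsite.com/$1 [R=301,L]
```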
Intermediate & Advanced SEO | Grenadi