My site got hacked and now I have thousands of 404 pages and backlinks. Should I transfer my site to a new domain name and start again?
-
My site was hacked: thousands of pages that should not exist were created, and thousands of backlinks were pointed at them. Now those same pages and backlinks resolve to 404 pages. Is this why my site dropped out of Google, and why the SEO fixes I've made since have shown no progress?
-
It's not a big problem, and there is no need to start on a new domain. In fact, this is common with WordPress websites that are not updated regularly.
I would advise the following:
-
Try to identify how the hack took place. If it's WordPress, you can install the free Sucuri Security plugin, which will help identify any compromised files.
-
Clean up the website by deleting all the spam sub-folders and pages.
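If you have shell access, the triage for this cleanup step can be scripted. A minimal sketch, assuming the injected spam files are PHP; the function name, the 14-day window, and the obfuscation patterns are illustrative assumptions, not a definitive malware scanner:

```shell
# scan_for_hack_artifacts DIR -- list files that deserve a closer look.
scan_for_hack_artifacts() {
  dir="$1"
  # PHP files modified recently -- injected spam pages are usually new.
  find "$dir" -name '*.php' -mtime -14 -print
  # PHP files containing obfuscation patterns common in injected backdoors.
  grep -rlE --include='*.php' 'base64_decode|gzinflate|eval\(' "$dir"
}

# Example: scan_for_hack_artifacts /var/www/html
```

Review the flagged files by hand before deleting anything; legitimate plugins occasionally use these functions too.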
-
Update WordPress core, your theme, and all plugins (if this is a WordPress website).
-
A quick fix for all the 404s is to serve a 410 (Gone) response code for these URLs, which tells search engines the pages are permanently removed. This can be done through a number of free redirection plugins for WordPress. If it's not a WordPress site, there are other ways to get this done too.
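If the site runs on Apache, the 410s can also be served without a plugin via a few mod_alias rules in .htaccess. A minimal sketch; the spam folder names below are hypothetical examples, so substitute the patterns the hack actually created on your site:

```apache
# Return "410 Gone" for hacked spam paths (folder names are hypothetical
# examples -- replace them with the patterns found on your site).
RedirectMatch gone ^/cheap-pills/
RedirectMatch gone ^/replica-watches/
# A single known URL can also be retired directly:
Redirect gone /spam-page.html
```

The `gone` status takes no target URL; Apache answers those requests with 410 directly.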
Hope that helps.
-
-
Are you unable to retrieve your site from your server? Is your domain or site hacked?
-
It's not a problem to have 404 pages as long as they are not linked internally. A high number of pages pointing at your site, though, sounds like you may need to disavow those links. If search engines have known your site for long enough, a new domain with identical content will simply carry the problems over to the new domain (you can arguably see that happening with dejanseo switching, without redirects, to dejanmarketing).
Did Google detect the hack and ask you to resolve the problem?
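The disavow file mentioned above is just a plain-text list uploaded through Google Search Console's Disavow Links tool: one `domain:` directive or full URL per line, with `#` for comments. A minimal sketch; the domains below are hypothetical placeholders, not real spam sources:

```text
# Hypothetical example entries -- replace with the actual spam sources.
# A "domain:" line disavows every link from that domain.
domain:spam-links-example-1.com
domain:spam-links-example-2.net
# Individual URLs can also be disavowed:
http://blog-example.org/spammy-comment-page.html
```

Disavowing is a last resort for links you cannot get removed; it does not replace cleaning up the hacked pages themselves.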