How do I stop someone from hijacking my site?
-
I lost a lot of search engine rankings and it almost put me out of business. I worked a lot on the site in April, and in May my sales rebounded 37%+, almost back to where they typically are. I had blamed Panda, and when I saw another Panda update in May and my traffic declined again, that seemed to confirm it. Today I happened to check which pages were being indexed, and I also noticed that when I published an update this morning, it said the page "www.cheaptubes.com/index2.asp" exists on the destination server but not in the current site, and asked if I wanted to delete it. I said no at the time, but I am going to delete it right after this question. I went to Google and searched "site:cheaptubes.com". My website has about 50 pages, but Google shows about 2,160 pages of results at 10 results per page. The first four pages are mostly my pages, but I noticed that even on the first page, the 3rd result was for Nike shoes, and that isn't my content. Someone has hacked my site and put up over 21,000 pages! That must be the real reason behind my website problems, right? How do I stop this from happening again? Is this having the negative effect that I fear it is?
-
OK, thanks for that. I will try it. I know which pages need the 301 as opposed to the others; all the bad pages have index2.asp in their names. I have changed my FTP and login passwords.
Good to know the 404s don't count against me.
-
I would definitely make sure not to 301 redirect any of the bad pages. You might get penalized for that.
They won't count the 404s against you, but I would change those pages to return a 410 error instead. Then they will drop out of your GWT report and the Google index quicker.
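Since the asker says every injected URL contains index2.asp, the sort into "serve a 410" vs. "301 candidate" can be scripted. A minimal sketch, assuming a plain list of URLs such as a GWT crawl-error export; the sample URLs below are made up for illustration:

```python
# Split crawl-error URLs into injected pages (serve 410 Gone) and
# legitimate old pages (candidates for a 301 redirect).
# Assumption: every injected URL contains "index2.asp" in its path.

def classify_urls(urls):
    """Return (gone, redirect_candidates) lists."""
    gone, redirect_candidates = [], []
    for url in urls:
        if "index2.asp" in url.lower():
            gone.append(url)                  # injected page: return 410 Gone
        else:
            redirect_candidates.append(url)   # real old page: map to a 301
    return gone, redirect_candidates

urls = [
    "http://www.cheaptubes.com/index2.asp",
    "http://www.cheaptubes.com/index2.asp?id=nike-shoes",
    "http://www.cheaptubes.com/old-products.asp",
]
gone, keep = classify_urls(urls)
```

With roughly 1,950 errors and only 15 legitimate pages, a pattern check like this keeps the hand-built 301 list down to the 15 that matter.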
-
I went to GWT today, and now that I have deleted the 2 malicious pages, my site has 1,950 404 errors, of which only 15 are legitimate pages that I need to 301. I'm not sure how to handle this now. I read that Google doesn't count 404s or 410s against you, but 1,950 pages is a lot and I would like to deal with it. I thought about wiping out the site and uploading a new version, but I think it would have the same errors. Any advice?
http://www.cheaptubes.com/: Increase in not found errors June 20, 2014
Google detected a significant increase in the number of URLs that return a 404 (Page Not Found) error. Investigating these errors and fixing them where appropriate ensures that Google can successfully crawl your site's pages.
Recommended action
- Check the Crawl Errors page in Webmaster Tools.
- Check that pages on your site don't link to non-existent pages. (If another site lists a broken link to your site, Google may list that URL but you may not be able to fix the error.)
-
Thank you for your help
-
Interesting, it sounds like they had a long-term plan. Good thing you found out about it in time.
I think your plan sounds good. I would definitely get off FrontPage; it has been out of development for a while, so there are vulnerabilities that have not been patched in years.
You might look into a CMS like WordPress or concrete5 to make transitioning the site easier. Then you would only have to learn minimal HTML/CSS and could focus more on the content.
Good luck to ya.
-
Thank you so much. I changed my main login and FTP passwords today. I found that they had changed my email address to info@chatfieldstatepark.org and that the last name on my account was changed to Puraty. I suspect it might be Network Solutions who got hacked, but I don't know for sure. I updated and scanned my computer last night and it is virus/malware free.
I am on shared hosting. I need to get my site updated and off FrontPage, and then off Network Solutions. You helped me back in April when I was trying to get my rankings back, thanks. I have an interim plan:
1- I will use FileZilla for FTP; it shouldn't have the same vulnerabilities as FrontPage, since it can be updated.
2- My theme designer said the theme doesn't rely on FP extensions. With this in mind, I plan to download an HTML editor and hopefully do future editing from there.
I think this is my shortest path to getting off FrontPage. Does that make sense? I do realize I need to get from HTML2 to HTML5 or something more modern, but I'm trying to handle this while juggling my other responsibilities in an 80-hour work week.
-
Good that the site is clean. What Sucuri and programs like it do is analyze the site (your real site, not the made-up pages) for malicious code, which means all the public-facing files should be intact. Also, GWT is Google Webmaster Tools; if you passed the Sucuri check, you should be fine there.
Since it sounds like you are on shared or managed hosting, I would send a support email to your host and let them know about the issue. They might be able to see where someone got in and when it happened; it is worth a shot at least.
What platform is your site running on? Is it a CMS or a custom platform?
More than likely, the reason Moz never detected the pages is that it is a crawler. It starts with your home page and follows every link on your site; if the pages were "orphaned", as it sounds like they were, the crawler would never have picked them up.
-
Sucuri says my site is clean.
-
Thank you for your response. I had thought of changing my FTP password, but the hosting company's servers were down for maintenance. They blamed it all on me. I did delete the index2.asp page, and all the links are broken. I had deleted pages in April when cleaning up the site, and I think I deleted this one then without knowing it. I do have a restorable version, and I will pursue this tomorrow as well. I will check about the malware, thanks, as I wouldn't know where to do that. I have been having email problems for some time. I have had Moz scan my site every week (started in April) and it never detected 10,000 extra pages. I suspect I deleted it in April and they got back in. Maybe I should start selling Nike? : ) Is GWT Google Webmaster Tools?
-
Changing all of your passwords is the first thing you should do. Then you need to examine the server logs to see how they got in. I would check the FTP access logs first; hopefully you have logging turned on. In those logs, search for lines that did not come from your IP address. If you are on a static IP and have had the same IP for a while, it should be a lot easier: you are looking for the other IP address.
If you cannot find that the server was accessed from another IP address through FTP, the next option is to look at the code; there might be an exploit in your site that allows it. One thing I would do is look at the files that were added via FTP, since they carry a timestamp, and cross-reference that time and day with the FTP log. If there is activity at those times (remember, your server might not be set to your time zone), start looking through the site for a "connector" file. It would have been the first file they created; it is basically a bot file that can create files on your server. If you can find it, check its timestamp against the log.
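The log search above can be scripted. A minimal sketch, assuming the client IP is the first whitespace-separated field on each line (log formats vary by server, so adjust the parsing); the IPs and log lines below are placeholders:

```python
# Flag FTP log lines whose client IP is not in your known set.
# Assumption: the first whitespace-separated field is the client IP.

MY_IPS = {"203.0.113.7"}   # placeholder: your static IP(s)

def suspicious_lines(log_lines, known_ips=MY_IPS):
    """Return the log lines originating from an unrecognized IP."""
    hits = []
    for line in log_lines:
        fields = line.split()
        if fields and fields[0] not in known_ips:
            hits.append(line)
    return hits

log = [
    "203.0.113.7 2014-06-01 08:12:03 STOR /index.asp",
    "198.51.100.42 2014-06-02 03:44:19 STOR /index2.asp",  # not my IP
]
hits = suspicious_lines(log)
```

Flagged timestamps can then be compared against the creation times of the injected files, as described above.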
If you have a restorable version of the site, I would consider restoring it. I would also see if your site is labeled as having malware on it; you can use GWT and Sucuri to check that.
As for the possibility of this hurting your rankings: yes, definitely. I would get the issue cleaned up, see if there is a pattern in the file names so you can send them to a 404 page, and update the robots.txt file as well. I would also check GWT to see if you have a penalty. I would do all of this ASAP if I were you.
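If the injected URLs really do all share the index2.asp pattern, the robots.txt entry could be as small as this. A hypothetical fragment, not a drop-in rule; the exact path pattern would need to match the actual injected URLs:

```
# Hypothetical robots.txt fragment for the pattern described above
User-agent: *
Disallow: /index2.asp
```

One caveat: once a path is disallowed, Googlebot stops fetching it and so never sees the 404/410 status, which slows de-indexing. A common approach is to let the dead URLs be crawled and return 410 until they drop out, and only then add the block.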