Best Google Practice for Hacked Site: Shift Servers/IP or Disavow?
-
Hi -
Over the past few months, I've identified multiple sites which are linking into my site and creating fake pages (below is an example, and there are over 500K similar links from various sites). I've attempted to contact the hosting companies, etc. with little success. I was wondering what my best course of action might be at this point: A) switch servers (or IP address), B) use the Google Disavow tool, or C) both.
example: { http://aryafar.com/crossings/200-krsn-team-part19.html }
Thanks!!
-
Few things... make sure you have a sitemap that is always up to date and submitted to search engines - this will encourage them to view your content first and recognise it as belonging to your domain.
In addition to this, put links in your content to other parts of your site; if it gets scraped, it will probably be scraped with the links still in it, so anyone actually wanting real content can get through to you.
If there are thousands of links from the same domain coming to your site, disavow the base URL and also report that URL for spam (it's your copyright). In fact, if you notice a small site scraping you, do that after you've tried to contact them.
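For reference, Google's disavow file is plain text with one entry per line: a `domain:` line disavows every link from that domain, a bare URL disavows just that page, and lines starting with # are comments. A sketch with placeholder domains:

```text
# Scraper domains, contacted hosts with no response
domain:scraper-example-one.com
domain:scraper-example-two.net
# A single fake page rather than the whole domain
http://scraper-example-three.org/fake-page.html
```

You upload this file through the Disavow Links tool in Search Console; it applies per-property.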
If this still doesn't stop them, look at your logs, see where their crawlers are coming from, and block their IPs.
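If it helps, here's a minimal sketch of that log triage: tally requests per client IP so the heaviest hitters stand out. The log lines and function name below are made up for illustration; real access logs in Combined Log Format start each line with the client IP, which is all this relies on.

```python
# Sketch: count requests per client IP from access-log lines
# (Combined Log Format: the IP is the first whitespace-separated field).
from collections import Counter

def top_client_ips(log_lines, limit=5):
    """Return the most frequent client IPs as (ip, count) pairs."""
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if parts:
            counts[parts[0]] += 1
    return counts.most_common(limit)

# Fake sample lines for illustration:
sample = [
    '203.0.113.5 - - [10/Oct/2023:13:55:36 +0000] "GET /page HTTP/1.1" 200 512',
    '203.0.113.5 - - [10/Oct/2023:13:55:37 +0000] "GET /other HTTP/1.1" 200 512',
    '198.51.100.7 - - [10/Oct/2023:13:55:38 +0000] "GET /page HTTP/1.1" 200 512',
]
print(top_client_ips(sample))
```

Anything making thousands of requests a day that isn't a known search-engine bot is a candidate for blocking.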
On one of my old sites I blocked the whole of China at one point, because it was constantly being barraged by scrapers and people trying to guess account passwords.
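On Apache 2.4+, blocking a scraper's address range can look like the following sketch (the CIDR below is a documentation placeholder, not a real recommendation; you'd substitute the ranges from your own logs):

```apache
# Allow everyone except the offending address range
<RequireAll>
    Require all granted
    Require not ip 203.0.113.0/24
</RequireAll>
```

Blocked clients get a 403, which costs far less server time than serving them pages.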
Hope that helps
-
OK, so they're scraping much of your site, and then adding in their own garbage etc.
I wouldn't worry about the occasional instance of this, unless you do see a penalty. For the more egregious ones, where they're building a ton of links, I'd throw their domain in your disavow list.
-
Hi Michael -
Sorry for the confusion...My site is HHisland.com and sites like the example below are linking in and creating false pages...Most are adult sites, etc.
Thanks again -
Billy
-
Hi Billy,
I'm not sure exactly what's going on here. Is it YOUR site that's getting hacked, or is it other sites getting hacked and linking to you, and you're worried that the "bad neighborhood" links will hurt you?
Michael.
-
Related Questions
-
Confused on footer links (Which are best practices for footer links on other websites?)
Hello folks, we are an eCommerce web design and development company, and we give dofollow links to our website from every project we have completed, using specific keywords. The concern is that we are now seeing a huge amount of backlinks being generated from a single root domain for a particular keyword in Webmaster Tools. So what would be the best way to handle this? Should we give the links a nofollow attribute, or use our company logo with the link?
Technical SEO | CommercePundit
Google having trouble accessing my site
Hi, Google is having a problem accessing my site. Each day it brings up Access Denied errors, and when I checked what this means, I found the following:

Access denied errors: In general, Google discovers content by following links from one page to another. To crawl a page, Googlebot must be able to access it. If you're seeing unexpected Access Denied errors, it may be for the following reasons: Googlebot couldn't access a URL on your site because your site requires users to log in to view all or some of your content (tip: you can get around this by removing this requirement for the user-agent Googlebot). Your robots.txt file is blocking Google from accessing your whole site or individual URLs or directories; test that your robots.txt is working as expected - the Test robots.txt tool lets you see exactly how Googlebot will interpret the contents of your robots.txt file (the Google user-agent is Googlebot; see how to verify that a user-agent really is Googlebot). The Fetch as Google tool helps you understand exactly how your site appears to Googlebot, which can be very useful when troubleshooting problems with your site's content or discoverability in search results. Your server requires users to authenticate using a proxy, or your hosting provider may be blocking Google from accessing your site.

Now I have contacted my hosting company, who said there is not a problem but told me to read the following page: http://www.tmdhosting.com/kb/technical-questions/other/robots-txt-file-to-improve-the-way-search-bots-crawl/. I have read it, and as far as I can see I have my file set up right, as listed below. They said if I still have problems then I need to contact Google. Can anyone please give me advice on what to do? The errors are response code 403.

User-agent: *
Disallow: /administrator/
Disallow: /cache/
Disallow: /components/
Disallow: /includes/
Disallow: /installation/
Disallow: /language/
Disallow: /libraries/
Disallow: /media/
Disallow: /modules/
Disallow: /plugins/
Disallow: /templates/
Disallow: /tmp/
Disallow: /xmlrpc/
Technical SEO | ClaireH-184886
Best way to retain backlink value when moving site?
Hi all, I want to get some opinions on what the best practice is when transferring backlink value from an old site to a new one. On the old site, I currently have a product page, and this particular product has multiple models all listed on the one single page. On the new site, however, every model of this particular product has its own page. These product model pages would have relatively similar content apart from several key details which differentiate the models. Firstly, would you recommend this splitting of models of the same product onto different pages? If so, my initial thought is to 301 redirect the old product page to the new model page that is most popular, and add rel canonical tags to the other model pages. Would you consider this best practice? Or are there better ways I can do this to retain backlink value without also getting penalised for possible content duplication? Thanks! Jac - sent from my manager's account.
Technical SEO | RuchirP
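For what it's worth, the 301 part of the plan described above can be sketched in Apache config. All URLs here are invented placeholders; the real mapping depends on the site's actual paths:

```apache
# On the old site: 301 the old combined product page
# to the most popular model's page on the new site
Redirect 301 /products/widget https://www.newsite.example/products/widget-model-a
```

Each model page on the new site would then declare its preferred URL with a `<link rel="canonical" href="...">` element in its head.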
Link to Articles for news sites in Google SERPs
I'm trying to figure out why, when I search for "international news" or "world news", for example, some sites in the SERPs have links to news articles while others don't. For "international news", the results for Fox News and the New York Times have links to articles, while CNN (the top result) only has sitelinks. I would appreciate any theories on why this happens. Thanks.
Technical SEO | seoFan21
ECommerce: Best Practice for expired product pages
I'm optimizing a pet supplies site (http://www.qualipet.ch/) and have a question about the best practice for expired product pages. We have thousands of products, and hundreds of our offers only exist for a few months. Currently, when a product is no longer available, the site just returns a 404. Now I'm wondering what a better solution could be: 1. When a product disappears, a 301 redirect is established to the category page it was in (i.e. a leash would redirect to dog accessories). 2. After a product disappears, a customized 404 page appears, listing similar products (but the server returns a 404). I prefer solution 1, but am afraid that having hundreds of new redirects each month might look strange. But then again, returning lots of 404s to search engines is also not the best option. Do you know the best practice for large ecommerce sites that have hundreds or even thousands of products appearing/disappearing on a frequent basis? What should be done with those obsolete URLs?
Technical SEO | zeepartner
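As a sketch, option 1 above amounts to one Apache rule per expired product (the paths are invented for illustration; the real mapping would come from the product database):

```apache
# 301 an expired product URL to its former category page
Redirect 301 /shop/dog-leash-123 /shop/dog-accessories
```

Hundreds of such rules per month is not unusual for large catalogues; keeping them in a lookup (e.g. a RewriteMap) rather than individual directives keeps the config manageable.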
Best strategy for redirecting domain authority from an acquired site...?
Hi all, I'm an in-house SEO for a company that made several acquisitions last year, prior to my starting. I'm just now hearing about several loose-end websites that belong to companies that have been absorbed by us. The question is how best to approach the task of utilizing those sites' domain authority to our site's benefit. There is already a link to our homepage in the header of the site in question (our logo's right under theirs), so we're already getting some link juice. It looks like the whois information never changed. Here are the options I'm considering: 1. Blanket redirect (all of their pages there into our home page) - not ideal. 2. Targeted redirect (try to "connect the dots" between content pages with similar subjects/keyword relevance) - better than #1, but is it worth the extra effort? 3. More linking (add more strategically placed and keyword-optimized links back to our site) - also more work, but certainly doable if the consensus is to leave the site up. 4. Any other suggestions? Thanks for your help everyone!
Technical SEO | TGViaWest
Google and QnA sites
My website has a QnA section - a bit like this one, except it's not private to premium members. It is a page with a left column for category links, and it has a list of recently asked questions; each question is a link to view the full question and answers, etc. Does Google know this is a QnA? Or will it say: hey, there are far too many links on this page, tut tut. Is there anything I can do to help it understand what the page is?
Technical SEO | borderbound
Best practice canonical tags
I was wondering what the best practice is when using canonical tags: or 2:
Technical SEO | NEWCRAFT