How should I react to my site being "attacked" by bad links?
-
Hello,
We have never bought links or done manipulative link building. Meanwhile, someone recently (on the 15th of March) pointed lots of low-quality links at the top 5 websites ranking for my main keyword.
So far it has not affected my rankings at all. In fact, I don't think it will, because the attack wasn't massive enough. The targeted page had about 100 root domains pointing at it and is now up to roughly 400, all of which appeared in a single day. All of those links use the same anchor text: the keyword we're ranking for.
With those extra 300 root domains pointing at us, our domain as a whole went from 600 root domains to 900. The page targeted by the attack is not the homepage.
My plan was to basically do nothing, since I don't think it will affect our rankings in any way, but I wanted your opinions.
Thanks.
-
This happened to me, too. Google has been finding these links since last October, and I just keep adding the domains to my disavow list. My rank has slipped a bit (from 2 to 4), but it's hard to know if these links are the reason. Probably not.
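For anyone unfamiliar with the format, a disavow file is just a plain-text list of rules; the entries below are made-up examples, not real domains:

```
# Example disavow entries (made-up domains for illustration)
domain:spammy-directory-example.com
domain:junk-links-example.net
# Individual URLs can also be listed on their own
http://bad-neighborhood-example.org/links/page1.html
```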
When the links were first pointed at my site, Google actually moved me from 2nd to 1st place for my top keywords. The bad links seemed to give me a temporary boost of about a week before we settled back to 2nd.
Google HAS to be aware of this. Their silence on the issue is deafening. Cutts gave it a little bit of lip service, but there must be tens of millions of these junk links being added daily, judging from all of the people selling 100k bad links on Fiverr for five bucks.
So far I'm mostly annoyed by these bad links rather than hurt by them. This has really screwed up all the intense work I did to scrutinize and analyze my link profile.
A nice feature for SEOmoz would be to let us UPLOAD our DISAVOW list so we can get our reports with the junk scrubbed out. After all, if I tell Google to ignore these 1,000 links, and presuming they actually do ignore them, then it would be more useful to get SEOmoz reports with that data removed as well.
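Until a feature like that exists, a rough local workaround is to scrub an exported link report against the disavow file yourself. A minimal sketch, assuming a CSV export with a "URL" column and the standard disavow file format; the filenames are hypothetical:

```python
import csv
from urllib.parse import urlparse

# Collect disavowed domains from a standard-format disavow file
# (hypothetical filename).
disavowed = set()
with open("disavow.txt") as f:
    for line in f:
        line = line.strip()
        if line.startswith("domain:"):
            disavowed.add(line[len("domain:"):].lower())

def is_disavowed(url):
    """True if the URL's host matches a disavowed domain or a subdomain of one."""
    host = urlparse(url).netloc.lower()
    return host in disavowed or any(host.endswith("." + d) for d in disavowed)

# Copy the exported link report (assumed to have a "URL" column),
# dropping rows whose linking page sits on a disavowed domain.
with open("link_report.csv", newline="") as src, \
     open("link_report_scrubbed.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if not is_disavowed(row["URL"]):
            writer.writerow(row)
```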
-
I agree with Russ. If not now, you may eventually see the effect of this surge in links and root domains. 300 new root domains in such a short span is bad.
-
I would proactively disavow those links and let Google know what is going on. Google needs to know that Penguin has created a market for malicious negative SEO attacks.
Related Questions
-
Submitting Same Press Release Content to Multiple PR Sites - Good or Bad Practice?
I see some PR (press release) sites that distribute the same content to many different sites and give a source link at the end. Is that good SEO practice or bad? If it is good practice, how do Google Panda and other algorithms treat it?
Intermediate & Advanced SEO | KaranX
-
Blog On Subdomain - Do backlinks to the blog posts on Subdomain count as links for main site?
I want to put a blog on my site. The IT department is asking that I use a subdomain (myblog.mysite.com) instead of a subfolder (mysite.com/myblog). I am worried because it was my understanding that any links I get to my blog posts (if on a subdomain) will not count toward the main site, since search engines would treat it almost as a separate website. The main purpose of this blog is to attract backlinks, which is why I prefer the subfolder location. Can anyone tell me if I am thinking about this right? Another solution I am being offered is to use a reverse proxy. Thoughts? Thank you for your time.
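For context on that last option, a reverse proxy would let the blog live on the subdomain while being served at the subfolder URL. A minimal nginx sketch using the hostnames from the question; this is illustrative only, and the right setup depends on the stack:

```nginx
# Illustrative only: serve myblog.mysite.com content under mysite.com/myblog/
location /myblog/ {
    # The trailing slash on proxy_pass strips the /myblog/ prefix
    proxy_pass https://myblog.mysite.com/;
    proxy_set_header Host myblog.mysite.com;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```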
Intermediate & Advanced SEO | ecerbone
-
"noindex, follow" or "robots.txt" for thin content pages
Does anyone have any testing evidence on which is better to use for pages with thin content that are nevertheless important to keep on a website? I am referring to content shared across multiple websites (such as e-commerce, real estate, etc.). Imagine a website with 300 high-quality pages indexed and 5,000 thin product-type pages that would not generate relevant search traffic. The question is: does the interlinking value achieved by "noindex, follow" outweigh the negative of Google having to crawl all those "noindex" pages? With robots.txt, Google's crawling focuses on just the important pages that are indexed, which may give rankings a boost. Any experiments with insight into this would be great. I do get the story about "make the pages unique", "get customer reviews and comments", etc., but the above is the important question here.
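For readers comparing the two mechanisms being weighed here, they look like this; the path is a made-up example:

```html
<!-- Option A: in the <head> of each thin page. Google still crawls the
     page but drops it from the index; links on it can still be followed. -->
<meta name="robots" content="noindex, follow">
```

```
# Option B: robots.txt. Google stops crawling these URLs entirely,
# so nothing on them (links included) gets seen at all.
User-agent: *
Disallow: /thin-product-pages/
```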
Intermediate & Advanced SEO | khi5
-
Hreflang="x-default"
Hello all. This is my first question in the Moz Forum; hope I will get some concrete answers 🙂 I am looking for suggestions on implementing hreflang="x-default" properly on our site. Any previous experience or a link to a specific resource or example would be very helpful. I have found many examples of implementing the homepage hreflang, but nothing on non-homepage URLs within a site. The code I have in mind is for the homepage of /uk/. Here /en-INT/ is a global English site not targeted at any country, unlike en-MY, en-SG, en-AU, etc. Is this the correct approach? Now, in the case of non-homepage URLs, should the respective en-INT URL be the "x-default", or should the "x-default" not exist altogether? For example, would that be the correct coding? Many thanks. Avi
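The poster's own snippet is not reproduced here, but a generic illustration of the pattern being asked about, using the locales from the question and made-up URLs, looks like this; every URL in the set carries the full list, with x-default marking the fallback version:

```html
<!-- Illustrative markup only; placed on every URL in the alternate set -->
<link rel="alternate" hreflang="en-GB" href="https://www.example.com/uk/" />
<link rel="alternate" hreflang="en-MY" href="https://www.example.com/en-my/" />
<link rel="alternate" hreflang="en-SG" href="https://www.example.com/en-sg/" />
<link rel="alternate" hreflang="en-AU" href="https://www.example.com/en-au/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/en-int/" />
```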
Intermediate & Advanced SEO | Delonghi_Group
-
Site migration from non canonicalized site
Hi Mozzers, I'm working on a site migration from a non-canonicalized site and I am wondering about the best way to deal with that. Should I ask them to canonicalize prior to migration? Many thanks.
Intermediate & Advanced SEO | McTaggart
-
How important is a good "follow" / "no-follow" link ratio for SEO?
Is it very important to make sure most of the links pointing at your site are "follow" links? Is it problematic to post legitimate comments on blogs that include a link back to relevant content or posts on your site?
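For clarity, the distinction is a single attribute on the link markup, which most blog comment systems add automatically:

```html
<!-- A followed link (the default) -->
<a href="https://www.example.com/">relevant anchor text</a>
<!-- A nofollowed link, typical of blog comments -->
<a href="https://www.example.com/" rel="nofollow">relevant anchor text</a>
```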
Intermediate & Advanced SEO | BlueLinkERP
-
How to use my time: Make my site bigger or make link wheels?
I have a site which consists of about 500 pages. It's the biggest in its tiny niche, and I'm making a livin' out of it (it gets me clients), so this is important to me. I have access to tons and tons of non-copyrighted relevant texts. This text is not on the www, and thus would be unique to Google. Although the text is relevant, it's not really useful for my visitors. How do I use this text and get the most out of my time spent? 1. Making thousands of articles on my website, with internal linking to the "selling" keyword pages? 2. Using the text to make a lot of link wheels, eventually linking to my main site? Thanx a bunch! 😃 And if you have other suggestions I'd love to hear 'em out 😃
Intermediate & Advanced SEO | eirikte
-
"Duplicate" Page Titles and Content
Hi All, This is a rather lengthy one, so please bear with me! SEOmoz has recently crawled 10,000 webpages from my site, FrenchEntree, and has returned 8,000 errors of duplicate page content. The main reason I have so many is the directories I have on site. The site is broken down into 2 levels of hierarchy: "weblets" and "articles". A weblet is a landing page, and articles are created within these weblets. Weblets can hold any number of articles, 0 to 1,000,000 (in theory), and an article must be assigned to a weblet in order for it to work. Here's roughly how it looks in URL form: http://www.mysite.com/[weblet]/[articleID]/ Now, our directory results pages are weblets with standard content in the left and right hand columns, but the information in the middle column is pulled in from our directory database following a user query. This happens by adding the query string to the end of the URL. We have 3 main directory databases, but perhaps around 100 weblets promoting various 'canned' queries that users may want to navigate straight into. However, any one of the 100 directory-promoting weblets could return any query from the parent directory database with the correct query string. The problem with this method (as pointed out by the 8,000 errors) is that each possible permutation of search is considered to be its own URL, and therefore its own page. The example I will use is the first alphabetically, "Activity Holidays in France": http://www.frenchentree.com/activity-holidays-france/ - This link shows you a results weblet without the query at the end, and therefore only displays the left and right hand columns as populated. http://www.frenchentree.com/activity-holidays-france/home.asp?CategoryFilter= - This link shows you the same weblet with an 'open' query on the end, i.e. display all results from this database. Listings are displayed in the middle. There are around 500 different URL permutations for this weblet alone when you take into account the various categories and cities a user may want to search in. What I'd like to do is prevent SEOmoz (and therefore search engines) from counting each individual query permutation as a unique page, without harming the visibility that the directory results receive in SERPs. We often appear in the top 5 for quite competitive keywords and we'd like it to stay that way. I also wouldn't want the search engine results to only display (and therefore direct the user through to) an empty weblet via some sort of robot exclusion or canonical classification. Does anyone have advice on how best to remove the "duplication" problem whilst keeping the search visibility? All advice welcome. Thanks Matt
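One common pattern for exactly this situation, offered as a hedged suggestion rather than a tested fix for this site, is a canonical tag on every query-string permutation pointing at the clean weblet URL, so the variants consolidate without anything being blocked from crawling:

```html
<!-- Emitted in the <head> of every query-string permutation of the weblet -->
<link rel="canonical" href="http://www.frenchentree.com/activity-holidays-france/" />
```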
Intermediate & Advanced SEO | Horizon