Black SEO --> Attack
-
Hello there,
Happy New Year to everyone, and good luck this year.
I have a real problem here. I saw in Moz's link history that the "Total Linking Root Domains" count has somehow grown from an average of 30-40 to 240-340 and keeps growing. I guess somebody is playing a joke on me, because I did not buy any links :)) There are even .cn, Brazilian, and .jp links, and my store is in Romania.
How can I block these links? I'm afraid Google will penalise me for them instead. What should I do?
Thank you so much.
With respect,
Andrei -
Hello all, and thank you for the answers.
I disavowed the links (500 of them), but still no effect. Three months have passed. Nothing; I am still "banned", if I can put it that way.
Anyway, for unscrupulous players (SEOs) this sets a precedent: if you want to "kick out" the competition, just buy them 500-1,000 links and you should end up in 1st position. Google really needs to pay attention to this kind of thing. Really now :))
With respect,
Andrei -
If you had no part in creating them, Google says you should be fine to ignore them. But if you want to be sure, it's not a bad idea to comb through them and disavow at the domain level.
There is info in here that may help:
http://moz.com/blog/preparing-for-negative-seo
http://moz.com/blog/guide-to-googles-disavow-tool
You may also want to have a good look for malware if you are seeing a sudden increase in links to your site. Sometimes a Google search for site:yoursite.com viagra | cialis | loans can help.
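If you want to go a step beyond that site: search, below is a rough sketch of the same check run directly against your own pages, assuming the site publishes a standard sitemap.xml. The site URL and term list are placeholders, so treat it as a starting point rather than a finished tool:

```python
# Rough sketch: fetch your own sitemap and flag pages containing common spam terms,
# as a complement to the site: search suggested above. Assumes a standard sitemap.xml
# at the site root; the site URL and spam-term list below are placeholders.
import re
import urllib.request

SITE = "https://www.example.com"                # placeholder store URL
SPAM_TERMS = ["viagra", "cialis", "loans"]      # terms from the site: search above


def fetch(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="ignore")


def check_for_injected_spam() -> None:
    sitemap = fetch(f"{SITE}/sitemap.xml")
    # Naive <loc> extraction is enough for a sketch; a real crawl would parse the XML.
    urls = re.findall(r"<loc>(.*?)</loc>", sitemap)
    for url in urls:
        html = fetch(url).lower()
        hits = [term for term in SPAM_TERMS if term in html]
        if hits:
            print(f"Possible injected spam on {url}: {', '.join(hits)}")


if __name__ == "__main__":
    check_for_injected_spam()
```

Any page it flags is worth checking by hand, since injected spam is often hidden from normal visitors but still present in the raw HTML.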
-
Mornin', Andrei,
First of all, try to monitor the new backlinks to the website: where are they coming from, and what kind of links are pointing at your site? Are all of these links low quality?
I'd use Majestic for this (it identifies new links much faster than Moz), so you can export the raw data, analyze it, and create a disavow file for the low-quality links.
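For that last step, here is a minimal sketch of turning a cleaned-up list of bad domains into a disavow file. The input file name is just a placeholder for whatever you end up exporting and filtering; the domain: prefix is the syntax Google's disavow tool accepts for dropping every link from a root domain:

```python
# Minimal sketch: turn a filtered list of low-quality linking domains (one per line,
# e.g. pulled out of a Majestic or Moz export) into a disavow file for Google.
# The input file name is a placeholder; "domain:" entries disavow the whole root domain.
from datetime import date

with open("low-quality-domains.txt", encoding="utf-8") as f:
    domains = sorted({line.strip().lower() for line in f if line.strip()})

lines = [
    "# Disavow file generated from exported backlink data",
    f"# {len(domains)} domains, {date.today().isoformat()}",
]
lines += [f"domain:{d}" for d in domains]

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")

print(f"Wrote disavow.txt with {len(domains)} domain entries")
```

Upload the resulting disavow.txt through Google's disavow tool, and be conservative about what you include.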
Gr., Keszi
-
Hi Andrei
You should definitely disavow these links if you suspect they are bad or dodgy. A very thorough guide on how to do so can be found here: http://moz.com/blog/guide-to-googles-disavow-tool
This will help you avoid getting penalised by Google.
Regards
Related Questions
-
Why do expired domains still work for SEO?
Hi everyone, I've been doing an experiment for more than a year to see whether it's possible to buy expired domains. I know it's considered black hat, but like I said, I wanted to experiment; that is what SEO is about. What I did was buy domains that had just expired, immediately add content on a WP setup filled with content relevant to the expired domain, and then start building links from these domains to other relevant sites. (Here is a pretty good post on how to do it, and I did it in a similar way: http://searchenginewatch.com/article/2297718/How-to-Build-Links-Using-Expired-Domains ) This is nothing new, and SEOs have been doing it for a long time. There are a lot of rumors around the SEO world that domains become worthless after they expire. But after trying it out for more than a year with about 50 different expired domains, I can conclude that it DOES work, 100% of the time. Some of the domains are of course better than others, but I cannot see any sign that the expired domains or the sites I link to have been punished by Google. The sites I'm linking to rank great with ONLY those links 🙂 So, to the question: WHY does Google allow this? They should be able to see that a domain has expired, right? And if it's expired, why don't they just "delete" all the links to that domain after the expiry date? Google is well aware of this problem, so what is stopping them? Is there anyone here who knows how this works technically?
White Hat / Black Hat SEO | | Sir0 -
Do RSS feeds help SEO in 2013?
I have seen answers to this from back in 2012, but as we all know, things have changed in 2013. My question is: do RSS feeds help SEO in 2013? Or does Google see them as duplicate content? (I see that the Moz site has RSS...)
White Hat / Black Hat SEO | | Llanero0 -
Does traffic boost SEO/SERP rankings?
Hello, I know a guy who sells organic traffic, and I bought 10k from him. Will this help boost my Google SEO rankings? Screenshot attached. Thank you!
White Hat / Black Hat SEO | | 7liberty0 -
Search query for SEO Brisbane
Would love to get some opinions on the latest Penguin 2.0 update and how on earth the #1 result is ranked #1; very, very peculiar... http://www.google.com/search?gs_rn=14&gs_ri=psy-ab&pq=sila&cp=8&gs_id=10&xhr=t&q=seo+brisbane&pf=p&client=safari&rls=en&sclient=psy-ab&oq=seo+bris&gs_l=&pbx=1&bav=on.2,or.r_qf.&bvm=bv.47008514,d.aGc&biw=1300&bih=569 Any and all theories welcomed and appreciated. Thanks, Mike
White Hat / Black Hat SEO | | MichaelYork0 -
Dust.js Client-side JavaScript Templates & SEO
I work for a commerce company and our IT team is pushing to switch our JSP server-side templates over to client-side templates using a JavaScript library called Dust.js. Dust.js is a JavaScript client-side templating solution that takes the presentation layer away from the data layer. The problem with front-end solutions like this is that they are not SEO friendly, because all the content is served up with JavaScript. Dust.js has the ability to render your client-side content server-side if it detects Googlebot or a browser with JavaScript turned off, but I'm not sold on this being "safe". Read about LinkedIn switching over to Dust.js: http://engineering.linkedin.com/frontend/leaving-jsps-dust-moving-linkedin-dustjs-client-side-templates http://engineering.linkedin.com/frontend/client-side-templating-throwdown-mustache-handlebars-dustjs-and-more Explanation of this: "Dust.js server side support: if you have a client that can't execute JavaScript, such as a search engine crawler, a page must be rendered server side. Once written, the same dust.js template can be rendered not only in the browser, but also on the server using node.js or Rhino." Basically, what would happen on the back end of our site is that we would detect the user agent of all traffic and, once we found a search bot, serve our web pages server-side instead of client-side so the bots can index our site. Server-side and client-side will be identical content and there will be NO black-hat cloaking going on. The content will be identical. But this technique is cloaking, right? From Wikipedia: "Cloaking is a SEO technique in which the content presented to the search engine spider is different from that presented to the user's browser. This is done by delivering content based on the IP addresses or the User-Agent HTTP header of the user requesting the page. When a user is identified as a search engine spider, a server-side script delivers a different version of the web page, one that contains content not present on the visible page, or that is present but not searchable." Matt Cutts on cloaking: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=66355 Like I said, our content will be the same, but if you read the very last sentence from Wikipedia, it's the "present but not searchable" part that gets me. If our content is the same, are we cloaking? Should we be developing our site like this for ease of development and performance? Do you think client-side templates with server-side solutions are safe from getting us kicked out of search engines? Thank you in advance for ANY help with this!
White Hat / Black Hat SEO | | Bodybuilding.com0 -
Vendor Descriptions for SEO... Troublesome?
Howdy! I have been tossing this idea around in my head over the weekend and I cannot decide which answer is correct, so here I am! We are a retailer of products and are currently in the midst of redesigning our site -- not only the design but also the content. The issue we are facing is with product descriptions from our vendors. We are able to access the product descriptions/specs from their websites and use them on ours, but my worry is that we will get tagged for duplicate content. Other retailers (as well as the vendors) are using this content too, so I don't want this to have an adverse effect on our ranking. There are so many products that it would be a large feat to re-write unique content -- not to mention that the majority of the wording would be extremely similar. What have you seen in similar situations? Is it bad to use the descriptions? Or do we need to bite the bullet and do our best to re-write hundreds of product descriptions? Or is there a way to use the descriptions and tag them in a way that won't have Google penalize us? I originally thought that if we have enough other unique content on our site it shouldn't be as big of a deal, but then I realized how much of our site's structure is our actual products. Thanks in advance!
White Hat / Black Hat SEO | | jpretz0 -
What do you think are some of the least talked-about topics in SEO?
What do you think are some of the least talked-about topics in SEO? Do you think these topics need to be given more attention? Why do you think they've been ignored?
White Hat / Black Hat SEO | | TheOceanAgency0 -
White Papers! Are they still good for SEO?
Is publishing a white paper good for SEO? We are trying to decide whether or not to publish one for SEO purposes. If it won't help, we will spend the money on other things.
White Hat / Black Hat SEO | | AppleCapitalGroup0