Has Google Made Unnatural Link Building Easier?
-
I see lots of competitors and crappy sites ranking well for highly competitive keywords in the web hosting niche.
After analysing their backlinks, I noticed that most of them had only 1 or 2 backlinks to the page they wanted to rank. The anchor text is usually a slight variation of the targeted keyword.
Now suppose you are able to rank well for a handful of highly lucrative keywords using very few spammy links. That would mean that even if you got a Penguin penalty, cleaning up your link profile would take an hour at most.
I really have no intention of using this strategy, but it's frustrating to see spammy competitors outranking you with crappy sites and a handful of backlinks.
Your thoughts?
-
I don't think most of them (the spammers) buy Google ads.
-
I see this all the time in my niche and in almost every other niche. Google has become a useless engine unless you are looking for news or information, but the moment you decide to buy something, BAM, pure crap... Maybe because those sites are the ones that buy Google ads?
-
I also happen to see new websites coming in and out of the top 10 on a weekly basis for some competitive keywords. Lots of them are about 6 months old.
-
There's a guy I watch who just buys a sh*tload of spammy links and ranks high for about one or two months. When his website gets hit, he just buys another domain, puts back the exact same content (he doesn't even bother to change the website name in the image), spams the hell out of it, and now he's back at the top with a domain he bought on August 25!!!
That's right! He ranked a brand-new domain only three days after buying it! The website already has over 15k backlinks.
-
It seems that everyone is talking about this at the moment.
Google appears to have become so addicted to providing fresh content that it is ranking newer sites higher than those with the most authority. Why? I have no idea. Everyone is just taking the black hat route of making more, newer sites that can cheat the system for now, until Google sorts out its ideas and changes the algo.
I mean, look at what has happened this year with updates: every day there is an update. I think we need to hold tight for the storm to settle and wait for everything to stabilise. Google can't keep this up for long; the SERPs change every day!
Related Questions
-
Does "google selected canonical" pass link juice the same as "user selected canonical"?
We are in a bit of a tricky situation, since a key top-level page with lots of external links has been selected as a duplicate by Google. We do not have any canonical tag in place. Now, this is fine if Google passes the link juice towards the page they have selected as canonical (an identical top-level page), but does anyone know the answer to this question? Due to various reasons, we can't put a canonical tag in ourselves at this moment in time. So my question is: does a Google-selected canonical work the same way and pass link juice as a user-selected canonical? Thanks!
Technical SEO | | Lewald10 -
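A quick way to sanity-check this situation is to confirm which pages actually declare their own canonical before relying on Google's pick. Below is a minimal sketch, assuming the requests and beautifulsoup4 packages are installed and using a placeholder URL; it simply reports whether a page carries a user-selected rel="canonical" or leaves the choice to Google.
```python
import requests
from bs4 import BeautifulSoup

def get_user_canonical(url):
    # Fetch the page and look for a user-selected canonical link element.
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    link = soup.find("link", rel="canonical")
    # Return the declared canonical URL, or None if the page leaves the choice to Google.
    return link["href"] if link and link.has_attr("href") else None

# Placeholder URL; swap in the top-level page in question.
print(get_user_canonical("https://www.example.com/key-page/"))
```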
Should we Nofollow Social Links?
I've been asked whether we should nofollow all of our social links. Would this be a wise thing to do? I'm not getting a clear answer from search results and thought you guys would be the best to ask 🙂 Thanks in advance.
Technical SEO | | JH_OffLimits0 -
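If you do decide to nofollow them, one option instead of editing templates by hand is to rewrite the links programmatically. A minimal sketch, assuming the beautifulsoup4 package and an illustrative (not exhaustive) list of social domains:
```python
from bs4 import BeautifulSoup

# Illustrative list of social domains; extend to match your own footer/sidebar links.
SOCIAL_DOMAINS = ("facebook.com", "twitter.com", "plus.google.com")

def nofollow_social_links(html):
    # Parse the fragment and add rel="nofollow" to links pointing at social domains.
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.find_all("a", href=True):
        if any(domain in a["href"] for domain in SOCIAL_DOMAINS):
            a["rel"] = "nofollow"
    return str(soup)

print(nofollow_social_links('<a href="https://twitter.com/example">Follow us</a>'))
```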
Will Google Recrawl an Indexed URL Which is No Longer Internally Linked?
We accidentally introduced Google to our incomplete site. The end result: thousands of pages indexed which return nothing but a "Sorry, no results" page. I know there are many ways to go about this, but the sheer number of pages makes it frustrating. Ideally, in the interim, I'd love to 404 the offending pages and allow Google to recrawl them, realize they're dead, and begin removing them from the index. Unfortunately, we've removed the initial internal links that led to this premature indexation from our site. So my question is: will Google revisit these pages based on their own records (as in, "this page is indexed, let's go check it out again!"), or will they only revisit them by following the current site structure? We are signed up with WMT, if that helps.
Technical SEO | | kirmeliux0 -
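Before waiting on Google, it's worth confirming that the offending URLs really return a 404 status rather than a 200 with an error message (a soft 404). A minimal sketch using the requests package, with placeholder URLs:
```python
import requests

# Placeholder URLs standing in for the accidentally indexed "no results" pages.
urls = [
    "https://www.example.com/search/empty-term-1",
    "https://www.example.com/search/empty-term-2",
]

for url in urls:
    # A HEAD request is enough to see the status code a recrawl would get.
    status = requests.head(url, allow_redirects=True, timeout=10).status_code
    print(status, url)
```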
Google not pulling my favicon
Several sites use Google's favicon service to load favicons instead of loading them from the website itself. Our favicon is not being pulled from our site correctly; instead, it shows the default "world" image. https://plus.google.com/_/favicon?domain=www.example.com is the address used to pull a favicon. When I post on G+ or look at other sites that use that service to pull favicons, ours isn't displaying, even though it shows up in Chrome, Firefox, IE, etc., and we have the correct meta in all pages of our site. Any idea why this is happening? Or how to "ping" Google to update that?
Technical SEO | | FedeEinhorn0 -
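One way to narrow this down is to compare what the site serves at /favicon.ico with what Google's favicon endpoint (the address quoted above) returns for the domain. A minimal sketch, assuming the requests package and a placeholder domain:
```python
import requests

domain = "www.example.com"  # placeholder domain

own = requests.get(f"https://{domain}/favicon.ico", timeout=10)
google = requests.get(f"https://plus.google.com/_/favicon?domain={domain}", timeout=10)

# If the two payloads differ wildly in size or type, Google is likely still
# serving its default "world" icon rather than the site's own favicon.
print("Site favicon:  ", own.status_code, own.headers.get("Content-Type"), len(own.content), "bytes")
print("Google favicon:", google.status_code, google.headers.get("Content-Type"), len(google.content), "bytes")
```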
Google Plus 1
How can Google +1 votes affect your position in Google's search engine? On my website I have a Google +1 button and I have just 3 votes. If I buy, let's say, 100 votes, but slowly, over a month, will this affect my ranking positively?
Technical SEO | | prunarevic0 -
Which version of pages should I build links to?
I'm working on the site www.qualityauditor.co.uk which is built in Moonfruit. Moonfruit renders pages in Flash. Not ideal, I know, but it also automatically produces an HTML version of every page for those without Flash, Javascript and search engines. This HTML version is fairly well optimised for search engines, but sits on different URLs. For example, the page you're likely to see if browsing the site is at http://www.qualityauditor.co.uk/#/iso-9001-lead-auditor-course/4528742734 However, if you turn Javascript off you can see the HTML version of the page here: http://www.qualityauditor.co.uk/page/4528742734 Mostly, it's the last version of the URL which appears in the Google search results for a relevant query. But not always. Plus, in Google Webmaster Tools, fetching as Googlebot only shows page content for the first version of the URL. For the second version it returns an HTTP status code and a 302 redirect to the first version. I have two questions, really: Will these two versions of the page cause duplicate content issues? I suspect not, as the first version renders only in Flash. But will Google think the 302 redirect for people is cloaking? Which version of the URL should I be pointing new links to (bearing in mind the 302 redirect, which doesn't pass link juice)? The URLs which I see in my browser and which Google likes the look of when I 'fetch as Googlebot', or those Google shows in the search results? Thanks folks, much appreciated! Eamon
Technical SEO | | driftnetmedia0 -
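To see exactly what crawlers encounter on the /page/ URLs, it can help to print the redirect chain and status codes. A minimal sketch, assuming the requests package; the URL is the one from the question, and the User-Agent string is just an arbitrary placeholder:
```python
import requests

# URL taken from the question; the User-Agent string is an arbitrary placeholder.
url = "http://www.qualityauditor.co.uk/page/4528742734"
response = requests.get(url, allow_redirects=True, timeout=10,
                        headers={"User-Agent": "Mozilla/5.0 (compatible; example-bot)"})

# Print each hop in the redirect chain, then the final status and URL.
for hop in response.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
print(response.status_code, response.url)
```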
Does Google care how you write internal links?
I am changing ecommerce platforms. For my internal linking on the old site, there were a lot of old links written like this: http://www.domain.com/page-name But now I am writing links mostly like this: /page-name Will that make a difference to search engines? Is one easier than the other for them to interpret?
Technical SEO | | Hyrule0
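Crawlers resolve relative hrefs against the URL of the page they appear on, so both forms end up at the same absolute address. A minimal sketch using Python's standard library, with placeholder paths:
```python
from urllib.parse import urljoin

# Placeholder page URL; both href styles below resolve to the same absolute address.
page_url = "http://www.domain.com/category/widgets"

print(urljoin(page_url, "/page-name"))                        # http://www.domain.com/page-name
print(urljoin(page_url, "http://www.domain.com/page-name"))   # absolute hrefs are used as-is
```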