How Google could quickly fix the whole Links problem...
-
A Thursday morning brainstorm that hopefully an important Google manager will see...
Google could quickly end all the problems of link buying, spammy links, and negative SEO with one easy step: Only count the 100 best follow links to any domain.
Ignore all the nofollows and everything beyond the 100 best. They can choose what "best" means.
Suddenly links would be all about quality. Quantity would not matter. Fiverr links, comment links, and all the other mass-produced spam links would simply be ignored - unless that's all a domain had, in which case it would surely be stomped by any domain with 100 decent natural links.
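Purely to illustrate the proposal, here is a minimal sketch of the scoring rule being suggested. Nothing in it is an actual Google algorithm - the Link shape, the quality score, and the cap of 100 are all placeholders for whatever Google decides "best" means:

```typescript
// Sketch of the proposed rule: score a domain using only its 100 "best"
// followed links. The quality metric is a hypothetical placeholder.
interface Link {
  fromDomain: string;
  nofollow: boolean;
  quality: number; // hypothetical 0-100 quality score
}

function domainLinkScore(links: Link[], cap = 100): number {
  return links
    .filter((l) => !l.nofollow)            // ignore all nofollow links
    .sort((a, b) => b.quality - a.quality) // rank by quality, best first
    .slice(0, cap)                         // everything beyond the best 100 is ignored
    .reduce((sum, l) => sum + l.quality, 0);
}
```

Under a rule like this, a domain's 10,000th spam link contributes exactly nothing, which is the point of the proposal.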
Would it be an improvement over today's situation?
-
It's a cool thought, and it may very well be an improvement over today's situation. The major reservation that jumped out at me is that it would hurt fresh content. I'm guessing 100 was a number you just threw out there, but whatever the number, once a site hits that many quality links, it wouldn't need to produce anything else - at least not in terms of link building.
Also, different-sized sites would need different numbers. For example, 100 really high-quality links (DA 90 and above) would take my site a long time to get, but Moz, for example, probably already has that many. If you make the number really high - say, 10,000 DA 90+ links - it might take Moz a while to get there, but since neither I nor any of my competitors will ever reach that number, it's not much of a deterrent against getting Fiverr and comment links.
- Ruben
-
Hi Gregory,
I like your thinking, but Google would then need to rely heavily on other ranking factors to differentiate between websites whose top 100 links are equally strong. There are plenty of websites with equally fantastic, relevant, followed links in their top 100 - how would Google decide which one is the most relevant and authoritative?
Thinking about it, it could become quite problematic. A new startup entering an established market with a strong product or USP might be able to acquire 100+ high-quality press links and suddenly pull level with a competitor who's been in the game for years - perhaps even a heritage brand - and has the extensive portfolio of good-quality links to prove it.
I like the idea, but I don't think Google uses its other ranking signals enough yet to take such drastic action.
Don't you think?
Related Questions
-
Google Penguin 2.0 - Coming soon
There is an interesting article on SEW saying that Google is going to update Penguin to the next major version - http://bit.ly/15Vkr6O So what do you think - what should we expect? And also, are there updated webmaster guidelines available?
Industry News | ditoroin
-
Google Alert not working - anyone else have this problem?
I have a Google Alert that has stopped pulling in recent results even though a web search indicates that the pages are being indexed. None of the alert settings have changed. Anyone else have this happen recently and know how to remedy this problem?
Industry News | BostonWright
-
Google Cached "Text Only" version
Is there a way to test what a page would look like in Google's "Text Only" version before the page is indexed in Google? Is there a tool out there to help with this?
Industry News | activejunky1
-
Google Will Penalize Sites Repeatedly Accused Of Copyright Infringement
Has someone filed a large number of DMCA “takedown” requests against your site? If so, look out. That’s the latest penalty that may cause you to rank lower in Google’s search results. It joins other penalties such as “Panda” and “Penguin.” We’re dubbing it the “Emanuel Update” in honor of Hollywood mogul Ari Emanuel, who helped prompt it. Read more here: http://searchengineland.com/dmca-requests-now-used-in-googles-ranking-algorithm-130118 What do you guys think, Mozzers?
Industry News | Chenzo
-
Chrome blocked sites used by Google's Panda update
Google said its Panda update used Chrome users' blocked-site lists as a benchmark for what it now terms poor-quality content, and that the update effectively took about 85% of those blocked sites out of the search results. This got me thinking: it would be very nice to discover exactly which sites Google doesn't like. Does anyone know if there is an archive of what these sites might be? Or, if none exists, maybe people could share their Chrome blocked sites on here and we might get an idea?
Industry News | SpecialCase
-
What is the best method for getting pure JavaScript/AJAX pages indexed by Google for SEO?
I am in the process of researching this further, and wanted to share some of what I have found below. Anyone who can confirm or deny these assumptions or add some insight would be appreciated.
Option 1: If you're starting from scratch, a good approach is to build your site's structure and navigation using only HTML. Then, once you have the site's pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses. You can use Hijax to help AJAX and HTML links coexist, and meta nofollow tags and the like to prevent crawlers from accessing the JavaScript versions of the pages. In current practice, webmasters create a "parallel universe" of content: users of JavaScript-enabled browsers see content that is created dynamically, whereas users of non-JavaScript-enabled browsers, as well as crawlers, see content that is static and created offline. This "progressive enhancement" in the form of Hijax links is widely used.
Option 2: To make your AJAX application crawlable, your site needs to abide by a new agreement, which rests on the following. The site adopts the AJAX crawling scheme. For each URL that has dynamically produced content, your server provides an HTML snapshot, which is the content a user (with a browser) sees. Often, such URLs will be AJAX URLs, that is, URLs containing a hash fragment, for example www.example.com/index.html#key=value, where #key=value is the hash fragment. An HTML snapshot is all the content that appears on the page after the JavaScript has been executed. The search engine indexes the HTML snapshot and serves your original AJAX URLs in search results. To make this work, the application must use a specific syntax in its AJAX URLs (call them "pretty URLs"). The search engine crawler temporarily modifies each "pretty URL" into an "ugly URL" and requests that from your server. The request for an "ugly URL" tells the server that it should not return the regular web page it would give to a browser, but an HTML snapshot instead. When the crawler has obtained the content for the ugly URL, it indexes that content, then displays the original pretty URL in the search results. In other words, end users always see the pretty URL containing the hash fragment. (A sketch of this handshake follows at the end of this entry.) See more in the Getting Started Guide, and make sure you avoid this: http://www.google.com/support/webmasters/bin/answer.py?answer=66355
Here are a few example pages that are mostly JavaScript/AJAX: http://catchfree.com/listen-to-music#&tab=top-free-apps-tab and https://www.pivotaltracker.com/public_projects. To see what the spiders see: view-source:http://catchfree.com/listen-to-music#&tab=top-free-apps-tab
The best resources I have found regarding Google and JavaScript: http://code.google.com/web/ajaxcrawling/ (step-by-step instructions), http://www.google.com/support/webmasters/bin/answer.py?answer=81766, and http://www.seomoz.org/blog/how-to-allow-google-to-crawl-ajax-content
Some additional resources: http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html and http://www.google.com/support/webmasters/bin/answer.py?answer=35769
Industry News | webbroi
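To make the handshake concrete, here is a minimal server-side sketch in TypeScript with Express. The route and the renderSnapshot() helper are hypothetical stand-ins; the only part taken from the scheme described above is the _escaped_fragment_ convention, under which the crawler rewrites a pretty URL such as /index.html#!key=value into /index.html?_escaped_fragment_=key=value before requesting it:

```typescript
// Minimal sketch of the "ugly URL" handshake from the AJAX crawling scheme.
// Assumptions (not from the original post): an Express server and a
// hypothetical renderSnapshot() helper that returns pre-rendered HTML.
import express from "express";

const app = express();

// Hypothetical helper: produce the HTML snapshot for a given app state,
// e.g. by running the page in a headless browser or a server-side template.
async function renderSnapshot(state: string): Promise<string> {
  return `<html><body><h1>Snapshot for state: ${state}</h1></body></html>`;
}

app.get("/index.html", async (req, res) => {
  const fragment = req.query._escaped_fragment_;
  if (typeof fragment === "string") {
    // Crawler request (ugly URL): serve the fully rendered HTML snapshot.
    res.send(await renderSnapshot(fragment));
  } else {
    // Normal browser request (pretty URL): serve the regular AJAX shell page.
    res.sendFile("index.html", { root: "public" });
  }
});

app.listen(3000);
```

Note that the snapshot must match what a browser user would see after the JavaScript runs; serving different content to the crawler here would be cloaking, which the guidelines linked above warn against.
-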
Google International and National Algorithm
Hi guys, I have a question. Does anyone have experience with Google's national versus international ranking algorithms? For example, is the algorithm the same for Google.de (Germany) as for Google.com? A lot of tactics seem valid and effective on google.de but not on google.com. Do you have any idea? Please share your knowledge - we need your help!
Industry News | leadsprofi
-
Google's Anonymous data sharing "pool"
Is sharing this information good for my websites? And is it open information that anyone could hack into and see my sites' analytics? Bottom line: good thing or bad thing?
Industry News | smstv