Google Penguin 2.0 - Coming soon
-
There is an interesting article on SEW saying that Google is going to update Penguin to the next major version - http://bit.ly/15Vkr6O
So what do you think - what should we expect? And are updated webmaster guidelines available anywhere?
-
This kind of link building is a real trap. You have to pay every month for these links, and we all know that sooner or later Google figures out how to take down a network like this. Besides losing your money, you risk being penalized.
-
Actually... looking in more detail, it seems the 30% dip in traffic started on May 7 across all my sites. Not May 2, but May 7. Still more reason to believe it was an algorithm change, and not my SEO software change, that caused the dip.
-
I've been scratching my head because on May 1st I changed my most important site from Genesis SEO to Yoast's WordPress SEO, in order to have more control. I noticed my rankings fell about 30% from May 2 onwards.
This whole time I was thinking it was the change of SEO plugin. But... looking at two of my other sites that had no plugin change, they have fallen about 30% as well. It looks like there may indeed have been an algorithm update, and all along I thought my plugin changes had caused the problem.
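This is a sound way to reason about attribution: if sites that changed nothing dip on the same date as the site that did, an external algorithm update is the likelier cause than the plugin swap. A minimal sketch of that check, with made-up site names and visit counts (everything here is hypothetical):

```python
# Find the day on which each site's traffic first drops sharply versus its
# prior running average; a shared drop date across unrelated sites points
# to an algorithm update rather than a site-specific change.
def first_sharp_drop(daily_visits, threshold=0.25):
    """Return the index of the first day whose visits fall more than
    `threshold` below the average of all preceding days, or None."""
    for i in range(1, len(daily_visits)):
        baseline = sum(daily_visits[:i]) / i
        if daily_visits[i] < baseline * (1 - threshold):
            return i
    return None

# Hypothetical daily visit counts for three sites (index 0 = May 1).
sites = {
    "site-with-plugin-change": [1000, 990, 1010, 1005, 995, 1000, 700],
    "site-a-no-change":        [500, 510, 495, 505, 500, 498, 340],
    "site-b-no-change":        [200, 205, 198, 202, 199, 201, 135],
}
drop_days = {name: first_sharp_drop(v) for name, v in sites.items()}
# Here all three sites first dip on the same day, which argues for an
# external cause (an algorithm change) rather than the plugin change.
```

With real analytics exports the same comparison takes minutes and removes most of the guesswork.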
-
In some basic research I did a month ago, I found over 5,000 websites buying Sape links, and when checking rankings for, say, 100 of those, almost all held the first 1-3 spots for the keywords they were targeting, with a PageRank of 4+. It seems that Google can't take them down. Matt Cutts tweeted that they were working to take down a pretty huge Russian network (I guess it was Sape), but some of the sites I researched are still ranking in the first spots with PR 4+ while using only those damn links.
-
There are still plenty of people selling Sape links, and they still work. But since a large part of the network is made up of hacked websites, I think Google will target the people at the buying end of the link rather than the victim websites themselves, which is probably why it's taking time to crack down on it. It's a targeted penalty rather than an algorithm update to find and destroy it.
-
It seems that they already did: I've searched, and almost every page ranking for the keywords "SAPE link network" has a PR of 0.
-
I really hope they have found a way to take down that SAPE link network, which they were apparently working on.
I see hundreds of quality sites outranked by those link buyers, and all of the buyers' links trace back to Sape. HATE THAT!
Related Questions
-
What are these categories called and where do they come from?
Hey there! I'm seeing categories at the top of a search (ex: cam software). You can see a screenshot example here: https://cl.ly/1w3J2s0W0K43. Any idea what these are called and if/how they are influenced? Thanks.
Industry News | | SUCCESSagency1 -
Manual action penalty by Google
Hello, We have a big well-known brand - www.titanbet.com. The brand is well established and the site has been live for almost 4 years, ranking very well on some very strong KWs. We received a message from Google on Aug 29th saying "Google has detected a pattern of artificial or unnatural links pointing to your site" and that "Google has applied a manual spam action to titanbet.com/". In the 2 weeks since the penalty was received we saw some of our major KWs drop in rankings, BUT all brand-related KWs were still ranked 1st. Over the last weekend the penalty worsened and we no longer rank for any of the brand KWs (we find the site on the 5th page at best). Moreover, when searching in Google for a sentence from any of the pages on the site, we see other sites ahead of us in the SERPs.
Based on the message we originally received from Google, we have started cleaning up some of the bad links to the site. We found a lot of links from bad sites - some of them not indexed and probably penalized as well, some from affiliate websites, and some from automatic indexation websites based in China and Russia. We have started reaching out to some of these sites to try and have them remove our links.
We are also worried about duplication of our site. We have found that many other sites (mostly affiliate websites) have copied, and in some cases completely duplicated, our content, and Google for some reason has chosen to penalize us for this, although we have no control over these other sites. We have run Copyscape to try and figure out which pages are the most problematic, and we will try to rewrite the content on those pages. But what if the other sites copy us again? Any suggestions on the above would be appreciated as we try to understand why Google has penalized us. Thank you, Titan Bet Team
Industry News | | Tit0 -
Google number one search result looks drastically different in Firefox compared to Chrome
I just noticed today that some websites and brands look like this in Firefox only, while others, despite still being the number one result for their brand name, do not appear like this at all. Also, this does not happen in Chrome at all. Both images provided for comparison use the same Google Apps account, logged in. It would be nice if someone could shed some light on why this happens sporadically and what it takes to be distinguished like this for your own brand if you own the matching domain.com or whatever. Zz7ZkX5.png lpuwheo.png
Industry News | | Raydon0 -
If I have a Google+ Business page, do I need a Google Places page as well?
It seems like the two are redundant? Any official word on this? I'm fairly OCD about things being tidy and I don't want to split my reviews / shares / etc. between two profiles. Are they not the same thing? I searched for my company, and both my Plus business page and my Places page came up. I attached a SS of the situation. placesvplus.png
Industry News | | jonnyholt1 -
Is a big Penguin update on its way?
Just wondering whether these updates have anything to do with the next, and biggest, Penguin update - the Penguin refreshes and the launch of the Disavow option seem somewhat correlated. It appears that Google might be testing its algo, and the Disavow Links tool was launched so that website owners who feel their sites were wrongly affected can use it to be exonerated from the penalty. Any thoughts?
Industry News | | Debdulal0 -
Is Google Making Life Harder For Aggregators?
There have been a bunch of updates recently which have hurt aggregators: reducing the number of search results to 7 for branded search queries; the DMCA update, which penalises sites with trademark-related takedown requests against them; and at least 2 'domain diversity' updates, the most recent last week, which seek to reduce the ability of sites to dominate SERPs, e.g. a site which may have had 2 search results on page 1 now may have 1. Plus, it's commonly believed that Google favours big brands over smaller ones, e.g. Marriott over examplehotelaggregator.com. Is this a deliberate ploy against aggregators in favour of brands, i.e. does Google believe a brand site is a better search result than an aggregator? A brand site returned above an aggregator for a branded term may be seen by Google as a better fit, a better search result that should rank higher. But is that true? Consumers like to see unbiased reviews and the lowest prices, and those aren't always available on the brand site. Thoughts please.
Industry News | | AndyMacLean0 -
Searching for a keyword in the HTML source code of a website via Google
Is such a thing possible? Can we Google for a specific keyword that can be found in the source code of a website? Is there a search operator for this? Thanks in advance!
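For what it's worth, Google has no documented operator for searching raw page source - operators like intext: match the rendered text, not markup. If you only need to check a handful of known pages, one workaround is to fetch the source yourself. A minimal sketch using only the Python standard library (the URL and the analytics-ID pattern are just illustrative):

```python
import re
import urllib.request

def find_in_source(html: str, pattern: str) -> list:
    """Return all regex matches found in a raw HTML string."""
    return re.findall(pattern, html)

def page_source_matches(url: str, pattern: str) -> list:
    """Fetch a page and search its raw (unrendered) HTML for a pattern."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return find_in_source(html, pattern)

# Example: look for a Google Analytics tracking ID in a page's source.
# matches = page_source_matches("https://example.com/", r"UA-\d{4,10}-\d+")
```

For searching source code across the whole web rather than pages you already know, a regular search engine is the wrong tool - you would need a crawler of your own or a specialised service.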
Industry News | | merkal20050 -
What is the best method for getting pure JavaScript/AJAX pages indexed by Google for SEO?
I am in the process of researching this further and wanted to share some of what I have found below. Anyone who can confirm or deny these assumptions, or add some insight, would be appreciated.
Option 1: If you're starting from scratch, a good approach is to build your site's structure and navigation using only HTML. Then, once you have the site's pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses. You can use Hijax to help AJAX and HTML links coexist, and meta nofollow tags etc. to keep crawlers away from the JavaScript versions of the pages. Currently, webmasters create a "parallel universe" of content: users of JavaScript-enabled browsers see content that is created dynamically, whereas users of non-JavaScript-enabled browsers, as well as crawlers, see content that is static and created offline. In current practice, "progressive enhancement" in the form of Hijax links is often used.
Option 2:
Industry News | | webbroi
In order to make your AJAX application crawlable, your site needs to abide by a new agreement. This agreement rests on the following:
The site adopts the AJAX crawling scheme. For each URL that has dynamically produced content, your server provides an HTML snapshot, which is the content a user (with a browser) sees. Often, such URLs will be AJAX URLs, that is, URLs containing a hash fragment, for example www.example.com/index.html#key=value, where #key=value is the hash fragment. An HTML snapshot is all the content that appears on the page after the JavaScript has been executed.
The search engine indexes the HTML snapshot and serves your original AJAX URLs in search results. In order to make this work, the application must use a specific syntax in the AJAX URLs (let's call them "pretty URLs"; you'll see why in the following sections). The search engine crawler will temporarily modify these "pretty URLs" into "ugly URLs" and request those from your server. This request of an "ugly URL" indicates to the server that it should not return the regular web page it would give to a browser, but instead an HTML snapshot. When the crawler has obtained the content for the modified ugly URL, it indexes that content, then displays the original pretty URL in the search results. In other words, end users will always see the pretty URL containing a hash fragment. The following diagram summarizes the agreement:
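Under that scheme, pretty URLs used a #! ("hash-bang") fragment, and the crawler rewrote it into an _escaped_fragment_ query parameter before requesting the snapshot. A rough sketch of that rewrite, with illustrative URLs (note that Google has since deprecated this scheme in favour of directly rendering JavaScript):

```python
import urllib.parse

def pretty_to_ugly(url: str) -> str:
    """Map an AJAX 'pretty URL' (with a #! hash-bang fragment) to the
    'ugly URL' a crawler following the AJAX crawling scheme would request."""
    if "#!" not in url:
        return url  # Not an AJAX-crawlable URL under the scheme.
    base, fragment = url.split("#!", 1)
    # The fragment is percent-encoded (here leaving '=' and '&' intact for
    # readability) and passed as the _escaped_fragment_ query parameter.
    sep = "&" if "?" in base else "?"
    return base + sep + "_escaped_fragment_=" + urllib.parse.quote(fragment, safe="=&")

# pretty_to_ugly("http://www.example.com/index.html#!key=value")
# -> "http://www.example.com/index.html?_escaped_fragment_=key=value"
```

Your server sees the _escaped_fragment_ parameter, knows the request came from a crawler, and returns the pre-rendered HTML snapshot instead of the JavaScript shell.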
See more in the Getting Started Guide. Make sure you avoid this:
http://www.google.com/support/webmasters/bin/answer.py?answer=66355
Here are a few example pages that are mostly JavaScript/AJAX: http://catchfree.com/listen-to-music#&tab=top-free-apps-tab https://www.pivotaltracker.com/public_projects This is what the spiders see: view-source:http://catchfree.com/listen-to-music#&tab=top-free-apps-tab The best resource I have found regarding Google and JavaScript is http://code.google.com/web/ajaxcrawling/ - step-by-step instructions.
http://www.google.com/support/webmasters/bin/answer.py?answer=81766
http://www.seomoz.org/blog/how-to-allow-google-to-crawl-ajax-content
Some additional Resources: http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html
http://www.google.com/support/webmasters/bin/answer.py?answer=357690