Are WordPress sites being dinged by Google? I've read a few articles suggesting so.
-
I read a couple of "SEO"-related articles claiming that sites built in WordPress are going to be dinged by Google, because Google sees WordPress sites as simple to make and therefore more likely to be "spammy".
Is there any truth to this? Your thoughts?
I do give "thumbs up" and "best answer" marks and appreciate receiving thumbs up myself...
Thanks
-
I will have to try to find them. I saw them about a month ago and had been meaning to post about it. Do you see any out there?
-
"I read a couple of "SEO"-related articles claiming that sites built in WordPress are going to be dinged by Google, because Google sees WordPress sites as simple to make and therefore more likely to be "spammy"."
Oh man.... that's really funny.
IMO, Blogspot/Blogger and sites.google.com have tons more crap and spin than WordPress.
Combine the above with YouTube, and Google is probably the largest host of crap, spin, and infringement on the entire planet.
-
No, Google will not punish WordPress sites specifically. The Yoast SEO plugin for WordPress is a great way to help manage your WordPress SEO; if you have not already heard of it, I would check it out. The chap who made it is also full of excellent advice.
-
Thanks for your reply. I had the same thoughts but figured I would run it by my fellow experts here on SEOmoz.
-
Cheesy sometimes works; I just gave you one... Also, thank you for your response. Interesting info regarding the anchor-text footer links.
-
There's absolutely nothing wrong with having a WordPress site. They're popular, and as such there will be more WordPress sites that have penalties or are affected by algo changes, simply because more of them exist!
I do believe, however, that it is possible for poor WordPress structure to contribute to a Panda problem. Many WordPress sites have duplicate content because the same content can be found on:
example.com/post-about-green-widgets/
example.com/category/green-widgets/
example.com/author/example-author/
example.com/tag/green-widgets/
Matt Cutts has said in the past, however, that Google is pretty good at figuring out issues such as multiple pages being created by tag pages in WP. But I'm wondering whether this type of duplication, on top of other Panda issues, could be enough to push some sites into a situation where Panda filters them.
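For what it's worth, one common way to rein in that archive duplication (a rough sketch only; the function name below is just illustrative, and a plugin like Yoast SEO exposes similar archive settings) is to mark category, tag, and author archives noindex so that only the post URL itself gets indexed:

```php
<?php
// In the active theme's functions.php (or a small plugin): ask search
// engines not to index archive pages that duplicate the post's own URL,
// while still letting them follow the links those pages contain.
add_action( 'wp_head', 'example_noindex_duplicate_archives' );

function example_noindex_duplicate_archives() {
    if ( is_category() || is_tag() || is_author() ) {
        echo '<meta name="robots" content="noindex,follow">' . "\n";
    }
}
```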
Having a WordPress site will not cause you to get a Penguin issue, as Penguin is about the links TO the site. However, I think some sites that benefited from making free WordPress themes were affected by Penguin because of anchor-texted footer links.
I have several WordPress sites and will continue to build more.
(P.S. It's kind of cheesy to ask for thumbs up. It's not likely to make anyone give you one.)
-
Hi James
Nakul is spot on, of course. Just wondering whether the articles you read were speculating about WordPress as a CMS, or about sites specifically hosted on WordPress.com?
cheers
Michael
-
I highly doubt it...
WordPress powers roughly 54 percent of websites managed by a CMS, and approximately 15 percent of the top 1 million websites in the world.
-
WordPress is like any other CMS. The same spam problem exists with Google's own Blogspot. Spam is everywhere, and I don't think the search engines are trying to use "what CMS are you using" to decide whether you are spam.
I would not worry about getting penalized just because you decided to use WordPress as a CMS/blogging platform.
There are tons and tons of great, high-authority, ranking websites made with all kinds of CMS platforms, including WordPress. So I would say there's no merit to this.
Google's Matt Cutts' own blog is WordPress-powered, if that helps.
Related Questions
-
Should I buy an expired domain that has already been redirected to some other website? Can I use it or redirect it to my own site if I purchase it?
I was going to purchase an expired domain, but then I came to know that it has been redirected to some other website. I have two questions: if I purchase it, can I build a website around this domain, and will the Domain Authority remain the same? And can I redirect it to my own site, and will all the link juice flow to my site?
Industry News | | Kamranktk0 -
Yelp (recrawl Google/Bing)
If Google and Bing show an older version of a site's Yelp rating in the search results, what options are there to help ensure Google and Bing recrawl the Yelp page? Additionally, it appears third-party sites such as MapQuest show Yelp ratings and appear in Google search results; is it possible to request MapQuest to recrawl Yelp and then ask Google to recrawl MapQuest? Any advice would be much appreciated!
Industry News | | Mack_1 -
Google number one search result looks drastically different in Firefox compared to Chrome
I just noticed today that some websites and brands look like this in Firefox only, while others, despite still being the number one result for their brand name, do not appear like this at all. It also does not happen in Chrome at all. Both images provided for comparison use the same Google Apps account, logged in. It would be nice if someone could shed some light on why this happens sporadically and what it takes to be distinguished like this for your own brand if you own the matching domain.com or whatever. Zz7ZkX5.png lpuwheo.png
Industry News | | Raydon0 -
Did Google Search Just Get Crazy Local?
Hey All, I think it's a known fact at this point that when you're signed into a personal Google account while searching, the results are heavily oriented around keywords and phrases you have already searched for, as well as your account's perceived location; for instance, when I wanted to check one of my own web properties in SE listings, I would sign out, or it would likely appear first as a false reading. Today I noticed something very interesting: even when not signed in, Google's listings were giving precedence to locality. It was to a very extreme degree, as in when searching for "web design," a firm a mile away ranked higher than one 1.5 miles away, and such. It would seem that algos with this high a level of location sensitivity and preference would actually be a boon for the little guys, which is, I assume, why it was implemented. However, it brings up a couple of interesting questions for me.
1. How is this going to affect Moz (or any SE ranking platform, for that matter) reports? I assume that Google pulls locations from IP addresses; would it not simply pull the local results most relevant to the Moz servers' IP?
2. What can one do to rise above this aggressive level of location-based search? I mean, my site (which has a DA of 37 and a PA of 48) appears above sites like webdesign.org (DA of 82, PA of 85). Not that I'm complaining at the moment, but I could see this being a fairly big deal for larger firms looking to rank on a national level. What gives?
I'd love to get some opinions from the community here if anyone else has noticed this...
Industry News | | G2W1 -
How do you measure the impact of Google updates like Penguin 4?
I was having a conversation with a fellow SEO via Twitter and we were discussing how to measure algorithm updates. In the aftermath of Google Penguin 4, how do you determine the effects it has on your site or sites and your respective verticals?
Industry News | | Thos0030 -
Will Google ever begin penalising bad English/grammar with regard to rankings and SEO?
Considering Google seems to be on a great crusade with all their algorithm updates to raise the overall "quality" of content on the Internet, I'm a bit concerned by their seeming lack of action towards penalising sites that contain terrible English. I'm sure you've all noticed this when you attempt to do some proper research via Google and come across an article that "looks" to be what you're after, then you click through and realise it's obviously been either put together in a rush by someone not paying attention or putting much effort in, or outsourced for cheap labour to another country whose workers aren't (close to being) native speakers. It's getting really old trying to make sense of articles that have completely incorrect grammar, entirely missing words, verb tenses that don't make any sense, randomly over-extravagant adjectives thrown in just as padding, etc. No offense to all those from non-native-speaking countries who are attempting to make a few bucks online, but this for me is becoming a far bigger issue in terms of the "quality" of information online than some of the other search issues being given higher priority, and it just seems strange that Google has been so blasé about it up to this point, especially given that so many of these articles and pages are nothing more than outsourced filler for cheap traffic. I understand it's probably hard to code in something so advanced, but it would go a long way towards making the web a better place, in my opinion. Anyone else feeling the same way? Thoughts?
Industry News | | ExperienceOz1 -
Chrome blocked sites used by Google's Panda update
Google said the Panda update used Chrome users' blocked-site lists as a benchmark for what they now term poor-quality content, and that the update effectively took about 85% of those sites out of the search results. This got me thinking: it would be very nice to discover exactly which sites they don't like. Does anyone know if there is an archive of what these sites might be? Or, if none exists, maybe if people could share their Chrome blocked-site lists on here, we might get an idea?
Industry News | | SpecialCase0 -
What is the best method for getting pure JavaScript/AJAX pages indexed by Google for SEO?
I am in the process of researching this further and wanted to share some of what I have found below. Anyone who can confirm or deny these assumptions, or add some insight, would be appreciated.
Option 1: If you're starting from scratch, a good approach is to build your site's structure and navigation using only HTML. Then, once you have the site's pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses. You can use Hijax to help the AJAX and HTML links coexist, and meta nofollow tags etc. to keep the crawlers away from the JavaScript versions of the pages. Currently, webmasters create a "parallel universe" of content: users of JavaScript-enabled browsers see content that is created dynamically, whereas users of non-JavaScript-enabled browsers, as well as crawlers, see content that is static and created offline. In current practice, "progressive enhancement" in the form of Hijax links is often used.
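To sketch the server side of a Hijax setup (the parameter name and stand-in functions below are purely illustrative, not part of any standard): every link targets a full, crawlable HTML page, and script-enabled browsers re-request the same URL with an extra parameter to swap in just the content region.

```php
<?php
// Hijax, server side (sketch): the same URL serves either a complete HTML
// document (crawlers, no-JS browsers, direct visits) or just the content
// fragment (the XHR calls made by the hijacked links). The "partial"
// parameter name is illustrative only.

function render_article_html() {
    // Stand-in for a real content renderer.
    return '<article><h1>Green widgets</h1><p>Post body...</p></article>';
}

$content = render_article_html();

if ( isset( $_GET['partial'] ) ) {
    // Hijacked link: JavaScript asked for the content fragment only.
    echo $content;
} else {
    // Full page for crawlers, no-JS visitors, and direct requests.
    echo '<html><body>' . $content . '</body></html>';
}
```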
Option 2: To make your AJAX application crawlable, your site needs to abide by a new agreement, which rests on the following. The site adopts the AJAX crawling scheme: for each URL that has dynamically produced content, your server provides an HTML snapshot, which is the content a user (with a browser) sees. Often, such URLs will be AJAX URLs, that is, URLs containing a hash fragment, for example www.example.com/index.html#key=value, where #key=value is the hash fragment. An HTML snapshot is all the content that appears on the page after the JavaScript has been executed. The search engine indexes the HTML snapshot and serves your original AJAX URLs in search results. To make this work, the application must use a specific syntax in the AJAX URLs (call them "pretty URLs"). The search engine crawler temporarily modifies these pretty URLs into "ugly URLs" and requests those from your server. A request for an ugly URL tells the server not to return the regular web page it would give to a browser, but an HTML snapshot instead. When the crawler has obtained the content for the modified ugly URL, it indexes that content, then displays the original pretty URL in the search results. In other words, end users always see the pretty URL containing the hash fragment.
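To make Option 2 concrete, here is a rough sketch of that handshake from the server's side, assuming a PHP front controller; render_snapshot() is a hypothetical helper, and note that under Google's published scheme the pretty URLs actually use the #! "hashbang" form:

```php
<?php
// Google's AJAX crawling scheme, server side (sketch). The crawler
// rewrites a pretty URL like /index.html#!key=value into the ugly URL
// /index.html?_escaped_fragment_=key=value before requesting it.

function render_snapshot( $fragment ) {
    // Hypothetical: return the fully rendered HTML for the application
    // state named by the fragment, i.e. the page after JavaScript runs.
    return '<html><body><!-- prerendered state: '
        . htmlspecialchars( $fragment ) . ' --></body></html>';
}

if ( isset( $_GET['_escaped_fragment_'] ) ) {
    // Crawler request: serve the HTML snapshot, not the JS app.
    echo render_snapshot( $_GET['_escaped_fragment_'] ); // e.g. "key=value"
} else {
    // Normal browser request: serve the JavaScript application shell.
    // (A browser never sends the hash fragment to the server anyway.)
    readfile( 'app.html' );
}
```

Google then indexes the snapshot's content but displays the original pretty URL in the results, which is why end users never see the _escaped_fragment_ form.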
See more in the Getting Started Guide, and make sure you avoid this:
http://www.google.com/support/webmasters/bin/answer.py?answer=66355
Here are a few example pages that are mostly JavaScript/AJAX:
http://catchfree.com/listen-to-music#&tab=top-free-apps-tab
https://www.pivotaltracker.com/public_projects
This is what the spiders see: view-source:http://catchfree.com/listen-to-music#&tab=top-free-apps-tab
These are the best resources I have found regarding Google and JavaScript:
http://code.google.com/web/ajaxcrawling/ (step-by-step instructions)
http://www.google.com/support/webmasters/bin/answer.py?answer=81766
http://www.seomoz.org/blog/how-to-allow-google-to-crawl-ajax-content
Some additional resources:
http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html
http://www.google.com/support/webmasters/bin/answer.py?answer=357690
Industry News | | webbroi