How to find out if I have been penalized?
-
I launched a new website at the beginning of January this year and saw slowly increasing traffic coming from Google until the 20th of March, when suddenly there were no more visitors from the Google search engine. The only traffic left is from Google Images, social networks, and other search engines. Without visitors from Google search, our overall traffic is reduced by ~66%.
I can no longer easily find our website in Google's search results using terms for which we usually ranked quite well. Nevertheless, the website is still indexed, as I can find it using the "site:" search query. In Google Webmaster Tools there are no messages, and we have only done a bit of link building on website and blog directories (nothing excessive and nothing paid either).
Is there any way to find out if Google has penalized my website? I guess it has... and what would be the best thing to do right now?
The website is hellasholiday (dot) com
Thanks in advance for your ideas and suggestions.
-
I am not a fan of CMSs. I realize there are pros and cons, but when you try to do too much and be all things to all people, you tend to end up with a lot of compromises.
There is one other reason I don't like to use robots.txt: I remember Matt Cutts saying that it can be a spam signal, because they cannot see what you are hiding. Not that it will get you flagged by itself, but combined with other signals it can. If I remember correctly, he was talking about hiding malware in scripts blocked by robots.txt.
If you are interested, the best CMS for SEO I have found is Orchard CMS, but even that has some silly errors (it puts more than one H1 tag on some pages). It is still the best solution I have looked at, and it is more customizable via code.
-
After reading your post and all the linked articles you recommended, I understand the issue and have adapted the robots.txt accordingly, basically leaving only a single Disallow for the WordPress plugins. I hope this will help, but I suppose I will see in the next few days...
Now, regarding WordPress, I would suggest they adapt their documentation, as it is really misleading. I also think they should implement all these noindex meta tags natively where necessary, instead of requiring a plugin for that, but that is another story.
-
WordPress does many things that are not recommended, and blocking with robots.txt is not recommended either; what they are suggesting is an extreme measure to work around the software's problems. There are better ways to solve duplicate content without giving away your link juice.
Read the section "WordPress Robots.txt blocking Search results and Feeds" on this page: http://yoast.com/example-robots-txt-wordpress/
Plugins like Yoast, and WordPress itself, do not produce very good results. I have crawled many WordPress sites and they all have the same old problems, many of them caused by the Yoast plugin.
What Google is referring to in that link is keeping pages of little value out of their index; this is for their advantage, not yours.
It's quite simple: if you block a page, the links pointing to that page waste their link juice. If you don't block it, or at least allow following with a meta tag, you will get the link juice back.
See this article where Dr. Pete calls it an extreme measure; search the page for "robots.txt" and you will see many comments supporting my point: http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
See Dr. Pete's comments here: http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
-
I thought it would be of no use for Google to index and cache small icons, logos, and resized cached images which have no meaningful names. So now I have at least removed the Disallow for these, but for the WordPress blog I want to keep the Disallow rules as recommended by WordPress itself for SEO purposes, as documented here: http://codex.wordpress.org/Search_Engine_Optimization_for_WordPress#Robots.txt_Optimization (assuming they know what they are talking about).
Anyhow, I don't have the feeling this is really the reason why my website no longer shows up in Google's search results...
-
The question should be: why block them?
It's like cutting off your hand because you have a splinter.
If duplicate content is a problem, then you can (in order of preference) fix it, use a canonical tag, or use a noindex,follow meta tag, but not robots.txt.
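As a hedged sketch of those two options (the URL is a placeholder): the canonical tag goes in the head of the duplicate page and points at the preferred version, while the noindex,follow meta tag keeps a page out of the index but still lets its outgoing links pass link juice.

```html
<!-- On the duplicate page: tell search engines which URL is preferred
     (example.com is a placeholder domain) -->
<link rel="canonical" href="http://www.example.com/preferred-page/" />

<!-- Alternatively, keep the page out of the index while still letting
     its outgoing links be followed -->
<meta name="robots" content="noindex,follow" />
```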
-
Many thanks Alan for your answer!
Regarding the robots.txt: basically I just want to block/disallow some cached images and small icons/pictures on the website, as well as some parts of the associated WordPress blog, which is hosted on the same site. For the blog I am disallowing the admin pages, feeds, comments, trackbacks, theme files, etc. Here is the complete list, just in case:
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
Disallow: /trackback
Disallow: /feed
Disallow: /comments
Disallow: /category/*/*
Disallow: /*/trackback
Disallow: /*/feed
Disallow: /*/comments
Disallow: /?
Disallow: /*?
So maybe I should change my question to "what URLs should I disallow for a WordPress blog?"
Also where can I see all the pages which are blocked by my robots.txt file?
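Google Webmaster Tools shows blocked URLs under its crawler access reporting, but you can also test a list of URLs against your rules locally. Here is a minimal sketch using Python's standard-library robots.txt parser (the domain and URLs are placeholders; note that this parser only does simple prefix matching and does not understand Googlebot-style wildcard patterns such as /*?):

```python
from urllib.robotparser import RobotFileParser

# Simplified prefix-only rules (the stdlib parser ignores
# wildcard patterns like /*? that Googlebot supports)
robots_txt = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /trackback/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Placeholder URLs to check against the rules above
candidate_urls = [
    "http://example.com/wp-admin/options.php",
    "http://example.com/trackback/",
    "http://example.com/2012/03/my-post/",
]

for url in candidate_urls:
    status = "blocked" if not rp.can_fetch("*", url) else "allowed"
    print(status, url)
```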
-
You can ask for reconsideration from Google through Webmaster Tools. But since you have no warnings and you are still in the index, I doubt that you have been flagged manually; you may, however, have been hit algorithmically.
I noticed that you have blocked hundreds of pages with robots.txt; this has led to thousands of links pointing to pages that are not indexed, which means these links are pouring their link juice away into nothing.
You should not use robots.txt to block pages that are linked to; it's a waste of valuable link juice.
If you must de-index the pages, use a noindex,follow meta tag; this way you will get most of the link juice back through the pages' outgoing links.
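If editing every template is impractical, the same directive can also be sent as an HTTP header. As a hedged sketch (assuming Apache with mod_headers enabled; the file pattern is just an example), an X-Robots-Tag header can apply noindex,follow even to non-HTML resources that a meta tag cannot reach:

```apache
# .htaccess sketch: de-index PDF files while still letting their links
# count, instead of blocking them in robots.txt
<FilesMatch "\.pdf$">
    Header set X-Robots-Tag "noindex, follow"
</FilesMatch>
```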