Google Webmaster Tools: MESSAGE
-
Dear site owner or webmaster of http://www.enakliyat.com.tr/,
Some of your site's pages may be using techniques that do not comply with Google's Webmaster Guidelines.
In particular, some pages on your site appear to be low quality or shallow, providing users with little added value or unique content. Examples of this kind of page include thin affiliate pages, doorway pages, and automatically generated or copied content. For more information about unique and compelling content, visit http://www.google.com/support/webmasters/bin/answer.py?answer=66361.
We recommend that you make the necessary changes so that your site meets Google's quality guidelines. After making these changes, please submit your site for reconsideration in Google's search results.
If you have questions about how to resolve this problem, please see our Webmaster Help Forum for support.
Sincerely,
Google Search Quality Team
-
**After this message we found our low-quality pages and added those URLs to robots.txt. Other than that, what can we do?**
**Our site is a home-to-home moving listing portal. Consumers who want to move fill out a form so that moving companies can quote prices. We were generating listing page URLs from the title submitted by the customer.**
-
Generally, if you've been hit by a penalty, blocking pages with robots.txt isn't enough; you usually have to remove or improve those pages entirely.
As you can see, most of these pages are still in the index.
When looking at your site, I found many pages like this:
http://www.enakliyat.com.tr/hata.aspx?aspxerrorpath=/manisa-evden-eve-nakliyat-fiyatlari-37
I also found "thin" pages without much unique content. If these pages are valuable to your customers, you should consider updating them with fresh, unique content, then file a reconsideration request with Google to lift the penalty.
-
We have all read Google's guidelines, but we really have no idea where the problem is or what is causing it. If you visit our website, you will see that we have removed all of our duplicate page content and fixed the problems Google asked us to fix. We thought that was everything and the site was fixed, but when we ask Google to review the website again, it keeps sending us the same message.
-
If you just want to block them from the index, best practice is usually to use a robots meta tag rather than the robots.txt file. See here for a rundown of the options and why one is better than the other in various situations: http://moz.com/learn/seo/robotstxt. There is also a good post on duplicate/thin content here: http://moz.com/blog/fat-pandas-and-thin-content
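To illustrate the difference, here is a hedged sketch (the paths shown are placeholders, not the actual URLs on the site in question). A robots.txt rule blocks crawling, but URLs that are already indexed can stay in the index:

```
User-agent: *
Disallow: /example-listing-path/
```

A robots meta tag, by contrast, lets Google crawl the page and asks it to drop the page from the index while still following its links:

```html
<meta name="robots" content="noindex, follow">
```

Note that the meta tag only works if the page is crawlable, so blocking the same URL in robots.txt would prevent Google from ever seeing the noindex instruction.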
I would think that in your case you could get enough unique content onto those pages (at least the company pages) to make them more acceptable to Google. Off the top of my head, you could:
- Add information to the page meta and title tags so that each company's location is included and your brand is also mentioned
- Force the companies to add a proper profile!
- Add some of your own unique content about the company, the areas they service, their specialties, and so on
- Once you have better content, encourage users to leave comments, ratings, etc.
There are not that many company pages in your robots.txt, so with some brainstorming on the best way to phrase things and increase the unique content on those pages, you should be able to come up with a strategy that, with a bit of effort, complies with Google's guidelines and also makes the pages more user-friendly and generally more informative.
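As a rough sketch of the first suggestion above, a template for location-aware, branded titles might look like this (the field names, brand suffix, and sample company are all hypothetical, not taken from the actual site):

```python
# Hypothetical sketch: build a unique, location-aware <title> for a company page.
# The field names, the "YourBrand" suffix, and the sample data are assumptions.
def build_title(company):
    # Mention the top services, the company's city, and the site's brand.
    services = ", ".join(company["services"][:2])
    return f"{company['name']} - {services} in {company['city']} | YourBrand"

company = {
    "name": "Acme Nakliyat",
    "city": "Manisa",
    "services": ["evden eve nakliyat", "ofis tasimaciligi"],
}
print(build_title(company))
```

The same pattern extends naturally to meta descriptions, so that each company page carries at least some text no other page shares.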
-
You either have to comply with their guidelines or use a different method to grow your business, such as social media.
Related Questions
-
Domain not ranking in Google
https://www.buitenspeelgoed.nl/ is a domain acquired by our client. Previously this website was on http://www.buitenspeelgoed-keupink.nl. With the old domain they were ranking in the top 30 for 'buitenspeelgoed' in google.nl. With the new exact-match domain they haven't been ranking for months, even though the website is indexed, as you can see on http://1l1.be/nz. I don't know what to do anymore and need some advice. What we have already done over the last months:
- Made adjustments to the 301 redirects (these were originally set up wrong by the web designer)
- Optimized the homepage for 'buitenspeelgoed' (strangely, the Moz robot can't access the site)
- Checked the robots.txt to see if the website was blocked for Google
- Checked the meta robots to see if the website was blocked for Google
- Disavowed some spammy (old) links pointing to the old domain
- Used Search Console > Fetch as Google to check for malware of some kind (and to see if Google can access the site)
- Checked Search Console for manual spam actions (there are none)
- Checked for duplicate content by pasting some texts into Google to see if other results show up (not the case for most of the texts)
Please let me know what we can do.
-
Why is Google Webmaster Tools showing 404 Page Not Found Errors for web pages that don't have anything to do with my site?
I am currently working on a small site with approximately 50 web pages. In the crawl error section in WMT, Google has highlighted over 10,000 page-not-found errors for pages that have nothing to do with my site. Has anyone come across this before?
-
Google Indexing - what did I miss?
Hello, all SEOers~ I renewed my web site about 3 weeks ago, and in order to preserve SEO value as much as possible, I set up 301 redirects, an XML sitemap, and so on to minimize possible losses. The problem is that about a week after the site renewal, my team somehow made a mistake and removed all the 301 redirects. Now my old site URLs are all gone from Google's index and my new site is not getting indexed by Google. My traffic and rankings are also gone....OMG. I checked Google Webmaster Tools, but it didn't show any special message other than that Googlebot found an increase in 404 errors, which is obvious. I also used "Fetch as Googlebot" from Webmaster Tools to increase the chance of indexing, but it doesn't seem to be helping much. I am re-doing the 301 redirects today, but I am not sure it means anything anymore. Any advice or opinions? Thanks in advance~!
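For reference, a permanent redirect is typically a one-line server rule per URL (or a pattern rule). An Apache .htaccess sketch is shown below; the paths and domain are placeholders, and the syntax will differ on other servers such as nginx or IIS:

```
# .htaccess sketch (Apache): map an old URL to its new equivalent with a 301
Redirect 301 /old-page.html https://www.example.com/new-page.html
```

As long as the redirects are restored before Google recrawls the old URLs, the accumulated signals generally still transfer to the new pages.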
-
Can I have an http AND a https site on Google Webmaster tools
My website is https, but the default property configured in Google WMT was http and wasn't showing me any information because of that. I added an https property, but my question is: do I need to delete the original http property, or can I leave both?
-
Google is changing the title
Hi! Lately I have seen that Google has changed the page titles for some clients, not all; it's about 30% of them. For example, the title looks like this after the Google change: "Company name: SEO and Pay per click management". But on the page itself it looks like this: "SEO and Pay per click management - Company name". Does anyone know why?
-
CDN Being Crawled and Indexed by Google
I'm doing an SEO site audit, and I've discovered that the site uses a Content Delivery Network (CDN) that's being crawled and indexed by Google. Two subdomains from the CDN are being crawled and indexed, and a small number of organic search visitors have come through them. So in a small number of cases, the CDN-based content is outranking the root domain. It's a huge duplicate content issue (tens of thousands of URLs being crawled). What's the best way to prevent the crawling and indexing of a CDN like this? Exclude via robots.txt? Additionally, the use of relative canonical tags (instead of absolute) appears to be contributing to this problem, because these canonical tags are telling the search engines that each subdomain is the "home" of the content/URL. Thanks! Scott
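One hedged sketch of both fixes (the URLs are placeholders): an absolute canonical tag in the page HTML, plus a noindex response header served only from the CDN subdomains:

```html
<!-- Absolute canonical: points to the root domain even when the same
     HTML is served from a CDN subdomain (URL is illustrative) -->
<link rel="canonical" href="https://www.example.com/page/">
```

```
# HTTP response header on the CDN subdomains only; how to set it is
# server- and CDN-specific
X-Robots-Tag: noindex
```

Unlike a robots.txt block, the X-Robots-Tag approach lets Google crawl the duplicate URLs and actively remove them from the index.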
-
Tags showing up in Google
Yesterday a user pointed out to me that tags were being indexed in Google search results and that this was not a good idea. I went into my Yoast settings and checked "nofollow, index" in my taxonomies, but when checking the source code for nofollow, I found nothing. So instead, I went into robots.txt and disallowed /tag/. Is that OK, or is that a bad idea? The site is The Tech Block for anyone interested in looking.
-
Is this against Google's rules?
Hi, I want to know if this is against Google's rules. I am building a website which will have lots of different sections, and I want to know if you are allowed to have a new domain name pointing to one section of the site. For example, if I had a site with a domain name of manchester, and I wanted a section of the site to be called www.manchester.com/complimentaryhealth, would it be allowed, to help with traffic and to have a better domain name, to have a new domain name such as www.complimentaryhealth.com pointing to that section? Would love to hear your thoughts on this.