Robots.txt
-
Google Webmaster Tools says our website has low-quality pages, so we have created a robots.txt file and listed all the URLs that we want removed from Google's index.
Is this enough to solve the problem?
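For illustration, a robots.txt of the kind described might look like this (the two paths are taken from the listing URLs quoted later in this thread):

```text
User-agent: *
Disallow: /detaylar/ev-tasima-6503
Disallow: /detaylar/evden-eve-nakliyat-6471
```

One caveat worth knowing: Disallow only asks crawlers not to fetch a page. A URL that is already indexed, or that is linked from elsewhere, can still show up in search results, so robots.txt alone is not a guaranteed removal mechanism.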
-
Ah, it's difficult to see anything on the page because I can't read Turkish.
The only thing you should know is that every single page on a website should have unique content. So if two pages are exactly, or almost exactly, the same, then Google will think it's duplicate content.
-
Yeah that's definitely a duplicate content issue you're facing.
However, did you know that each of your pages has this little tag right at the top of it? <meta name="robots" content="noindex" />
...Seems like it's already done.
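For reference, the tag being referred to is the standard robots meta tag, which sits in each page's head section:

```html
<head>
  <!-- Asks search engines not to include this page in their index -->
  <meta name="robots" content="noindex" />
</head>
```

Unlike a robots.txt Disallow rule, a crawler has to be able to fetch the page in order to see this tag, so blocking the same URL in robots.txt can actually prevent the noindex from being honored.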
-
Thank you, Wesley.
Here are our pages, but the language is Turkish:
http://www.enakliyat.com.tr/detaylar/besiktas-basaksehir-ev-esyasi-tasinma-6495
http://www.enakliyat.com.tr/detaylar/ev-tasima-6503
http://www.enakliyat.com.tr/detaylar/evden-eve-nakliyat-6471
Our site is a home-to-home moving listing portal. Consumers who want to move fill out a form so that moving companies can quote prices. We were generating listing-page URLs from the title submitted by the customer. Unfortunately, we have now realized that many customers entered the same content.
-
Well, now I'm confused about the problem. If the issue is duplicate content, then the answer is definitely to block those pages with robots.txt and/or use a rel=canonical tag on each.
However, the Google notice you are referencing has nothing to do with duplicate content notices to my knowledge.
There is always a way to improve your content. Filling out a form auto-generates a page, per my understanding. Great. Have it auto-generate a better-looking page!
-my 2 cents. hope it's helpful.
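For reference, the rel=canonical tag mentioned above is a link element in the head of each duplicate page, pointing at the one version you want indexed (the URL below is one of the listing pages quoted elsewhere in the thread):

```html
<head>
  <!-- Points search engines at the preferred version of this content -->
  <link rel="canonical" href="http://www.enakliyat.com.tr/detaylar/ev-tasima-6503" />
</head>
```

Search engines then treat the tagged pages as copies of the canonical URL and consolidate ranking signals there.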
-
I agree with Jesse and Allen.
Of course, the problems in Google Webmaster Tools will disappear once you no-index those pages.
Low-quality pages aren't a good thing for visitors either. It's difficult to give you any advice other than the very broad advice: improve the quality of the pages.
If you could give us some links to let us know which website and which pages we're talking about, we could give you better advice on exactly how to improve those pages.
-
Iskender.
Our experience has been YES. Google does follow your robots.txt file and will skip indexing those pages. If they have flagged a problem, the problem will disappear.
My concern is, what is causing the "Low-quality" error message? In the long run, wouldn't it be better to correct the page to improve the quality? I look at each page as a way to qualify for a greater number of keywords, hence attracting more attention for your website.
We have had several pages flagged as duplicate content when we never wanted the duplicate page indexed anyway. Once we included the page in the robots.txt file, the flagged error disappeared.
-
Why not improve the pages, instead?
If Google says they are low quality, what makes you think any viewer will stick around? I'd bet the bounce rate is exceptionally high on those pages, maybe even site-wide.
Always remember to design pages for readers and not Google. If Google tells you your pages suck, they are probably just trying to help you and give you a hint that it's time to improve your site.
Related Questions
-
How to stop robots.txt restricting access to sitemap?
I'm working on a site right now and having an issue with the robots.txt file restricting access to the sitemap. With no web dev to help, I'm wondering how I can fix the issue myself. The robots.txt file shows:
User-agent: *
Disallow: /
and then Sitemap: with the correct sitemap link.
Technical SEO | Ad-Rank
-
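If the goal is to have the site crawled at all, the Disallow: / line is the actual problem, since it blocks every URL regardless of the sitemap reference. A minimal corrected file might look like this (the sitemap URL is a placeholder):

```text
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
```

An empty Disallow means "nothing is blocked", and the Sitemap line sits outside the user-agent blocks, so it applies to all crawlers.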
Are robots.txt wildcards still valid? If so, what is the proper syntax for setting this up?
I've got several URLs that I need to disallow in my robots.txt file. For example, I've got several documents that I don't want indexed, and filters that are getting flagged as duplicate content. Rather than typing in thousands of URLs, I was hoping that wildcards are still valid.
Technical SEO | mkhGT
-
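Google and Bing do still honor the * (match any characters) and $ (end of URL) wildcards, though they are extensions to the original robots.txt standard and other crawlers may ignore them. A sketch with made-up paths:

```text
User-agent: *
# Block every PDF document anywhere on the site
Disallow: /*.pdf$
# Block filter pages generated with query strings
Disallow: /*?filter=
```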
Have I constructed my robots.txt file correctly for sitemap autodiscovery?
Hi, here is my robots.txt:
User-agent: *
Sitemap: http://www.bedsite.co.uk/sitemaps/sitemap.xml
# Directories
Disallow: /sendfriend/
Disallow: /catalog/product_compare/
Disallow: /media/catalog/product/cache/
Disallow: /checkout/
Disallow: /categories/
Disallow: /blog/index.php/
Disallow: /catalogsearch/result/index/
Disallow: /links.html
I'm using Magento and want to make sure I have constructed my robots.txt file correctly for sitemap autodiscovery. Thanks!
Technical SEO | Bedsite
-
Robots.txt & Mobile Site
Background: our mobile site is on the same domain as our main site. We use a folder approach for our mobile site (abc.com/m/home.html). We redirect traffic to our mobile site via device detection, and redirection exists for only a handful of pages, i.e. most of our pages do not redirect the user to a mobile-equivalent page.
Issue: our mobile pages are being indexed in desktop Google searches.
Input required: how should we modify our robots.txt so that the desktop Google index does not index our mobile pages/URLs?
User-agent: Googlebot-Mobile
Disallow: /m
User-agent: YahooSeeker/M1A1-R2D2
Disallow: /m
User-agent: MSNBOT_Mobile
Disallow: /m
Many thanks
Technical SEO | CeeC-Blogger
-
How many times does robots.txt get visited by crawlers, especially Google?
Hi, do you know if there's any way to track how often the robots.txt file has been crawled? I know we can check when it was last downloaded in Webmaster Tools, but I actually want to know whether crawlers download it every time they visit any page on the site (e.g. hundreds of thousands of times every day), or less often. Thanks...
Technical SEO | linklater
-
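Webmaster Tools only shows the most recent download, but your own server access logs record every robots.txt fetch. A minimal sketch of counting them per user agent, assuming combined log format (the sample lines below are made up):

```python
from collections import Counter
import re

# Made-up sample lines in combined log format.
sample_log = [
    '66.249.66.1 - - [10/Oct/2023:13:55:36 +0000] "GET /robots.txt HTTP/1.1" 200 64 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [10/Oct/2023:13:55:37 +0000] "GET /index.html HTTP/1.1" 200 1043 "-" "Googlebot/2.1"',
    '157.55.39.5 - - [10/Oct/2023:14:01:02 +0000] "GET /robots.txt HTTP/1.1" 200 64 "-" "bingbot/2.0"',
]

def robots_fetch_counts(lines):
    """Count robots.txt requests per user agent."""
    counts = Counter()
    for line in lines:
        if '"GET /robots.txt' not in line:
            continue
        # The user agent is the last quoted field in combined log format.
        quoted = re.findall(r'"([^"]*)"', line)
        counts[quoted[-1] if quoted else "unknown"] += 1
    return counts

print(robots_fetch_counts(sample_log))
```

Run against a real log, this tells you directly how often (and by which bots) the file is fetched. In practice Google caches robots.txt and refetches it periodically, not before every single page request.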
Robots.txt question
What is this robots.txt telling the search engines?
User-agent: *
Disallow: /stats/
Technical SEO | DenverKelly
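A quick way to check what rules like these actually allow is Python's built-in robots.txt parser; the sketch below feeds it the two lines quoted in the question (example.com is a placeholder host):

```python
from urllib.robotparser import RobotFileParser

# The rules quoted in the question above.
robots_txt = """\
User-agent: *
Disallow: /stats/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Everything under /stats/ is blocked for all crawlers...
print(parser.can_fetch("Googlebot", "http://example.com/stats/report.html"))  # False
# ...and everything else is allowed.
print(parser.can_fetch("Googlebot", "http://example.com/index.html"))  # True
```

In plain terms, this file tells every crawler (User-agent: *) not to fetch any URL under the /stats/ directory, and leaves the rest of the site open.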