Robots.txt
-
Google Webmaster Tools says our website has low-quality pages, so we have created a robots.txt file and listed all the URLs that we want removed from Google's index.
Is this enough to solve the problem?
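For reference, a minimal sketch of the kind of robots.txt file we created (the listing paths below are hypothetical placeholders, not our real URLs):

User-agent: *
Disallow: /detaylar/example-duplicate-listing-1
Disallow: /detaylar/example-duplicate-listing-2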
-
Ah, it's difficult to see anything on the page because I can't read Turkish.
The one thing you should know is that every single page on a website should have unique content. So if two pages are exactly, or almost exactly, the same, then Google will treat them as duplicate content.
-
Yeah that's definitely a duplicate content issue you're facing.
However, did you know that each of your pages has this little tag right at the top of it? <meta name="robots" content="noindex" />
...Seems like it's already done.
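For anyone checking their own site, that tag sits inside the page's <head> element, along the lines of this sketch:

<head>
  <title>Example listing title</title>
  <meta name="robots" content="noindex" />
</head>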
-
Thank you, Wesley.
Here are our pages, but the language is Turkish:
http://www.enakliyat.com.tr/detaylar/besiktas-basaksehir-ev-esyasi-tasinma-6495
http://www.enakliyat.com.tr/detaylar/ev-tasima-6503
http://www.enakliyat.com.tr/detaylar/evden-eve-nakliyat-6471
Our site is a home-to-home moving listing portal. Consumers who want to move fill out a form so that moving companies can quote prices. We were generating the listing page URLs from the title submitted by the customer. Unfortunately, we have since realized that many customers entered the same content.
-
Well, now I'm confused about the problem. If the issue is duplicate content, then the answer is definitely to block those pages with robots.txt and/or use a rel=canonical tag on each.
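As a sketch, a rel=canonical tag on each duplicate would point Google at the single version you want indexed (the href below is a placeholder, not one of your real listing URLs):

<link rel="canonical" href="http://www.enakliyat.com.tr/detaylar/example-primary-listing" />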
However, to my knowledge, the Google notice you are referencing has nothing to do with duplicate content.
There is always a way to improve your content. Filling out a form auto-generates a page, per my understanding. Great. Have it auto-generate a better looking page!
-My 2 cents. Hope it's helpful.
-
I agree with Jesse and Allen.
Of course the problems in Google Webmaster Tools will disappear once you no-index those pages.
Low-quality pages aren't a good thing for visitors either. It's difficult to give you any advice other than the very broad one: improve the quality of the pages.
If you could give us some links to let us know which website and which pages we're talking about, then we could give you better advice on how exactly to improve them.
-
Iskender,
Our experience has been YES: Google does follow your robots.txt file and will skip indexing those pages. If they have flagged a problem, the problem will disappear.
My concern is: what is causing the "low-quality" error message? In the long run, wouldn't it be better to correct the pages to improve their quality? I look at each page as a way to qualify for a greater number of keywords, and hence attract more attention to your website.
We have had several pages flagged as duplicate content when we never wanted the duplicate page indexed anyway. Once we included the page in the robots.txt file, the flagged error disappeared.
-
Why not improve the pages instead?
If Google says they are low quality, what makes you think any visitor will stick around? I'd bet the bounce rate is exceptionally high on those pages, maybe even site-wide.
Always remember to design pages for readers, not for Google. If Google tells you your pages suck, it is probably just trying to help you and give you a hint that it's time to improve your site.
Related Questions
-
Website URL, Robots.txt and Google Search Console (www. vs non www.)
Technical SEO | Badiuzz
Hi Moz Community,
I would like to request your kind assistance on domain URLs: www. vs. non-www. Recently, my team moved to a new website where a 301 redirect has been set up.
Original URL: https://www.example.com.my/ (with www.)
New URL: https://example.com.my/ (without www.)
Our current robots.txt sitemap: https://www.example.com.my/sitemap.xml (with www.)
Our Google Search Console property: https://www.example.com.my/ (with www.)
Questions:
1. How should I standardize these so that the Google crawler can crawl my website effectively?
2. Do I have to change my website URLs back to the www. version, or do I just need to update my robots.txt?
3. How can I update my Google Search Console property to reflect the non-www. version? I cannot see the option in the dashboard.
4. Are there any other to-dos, such as canonicalization, or should I wait for Google to detect and change it automatically, especially in the GSC property?
Really appreciate your kind assistance. Thank you,
Badiuzz
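(A hedged sketch for anyone facing the same choice: if the non-www. version is the one you keep, an Apache .htaccess rule along these lines 301-redirects the www. host to it; adjust for your own server and domain.)

RewriteEngine On
# Redirect any www. host to the bare domain, keeping the requested path
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^ https://%1%{REQUEST_URI} [L,R=301]
-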
Robots.txt Disallow: / in Search Console
Technical SEO | RAN_SEO
Two days ago I found out through Search Console that my website's robots.txt had changed to:
User-agent: *
Disallow: /
When I check the robots.txt on the website itself, it looks fine; it only shows as blocked in Search Console (in the robots.txt tester). When I try Fetch as Google on the homepage, it is blocked. Any ideas why robots.txt would block my website? It was fine until the weekend. Before that, over the last 3 months, I had blocked resources on the website, and I brought pages back with Fetch as Google. Any ideas?
-
Robots.txt on refinements
Technical SEO | Gordian
In dealing with Panda, do you think it is a good idea to put all the refinements for category pages in the robots.txt file? We already have a lot of them set to noindex, follow, but I am wondering if it would be better to address this from a crawl perspective, as the pages are probably thin, duplicate content to Google.
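(A hedged sketch: if the refinement URLs share a recognizable pattern, robots.txt wildcards can keep crawlers out of them entirely; the parameter names below are hypothetical placeholders. Note that a crawler blocked from a page can no longer see that page's noindex tag.)

User-agent: *
Disallow: /*?sort=
Disallow: /*?filter=
-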
Robots.txt best practices & tips
Technical SEO | JonathanRolande
Hey, I was wondering if someone could give me some advice on whether I should block the robots.txt file from the average user (not from Googlebot, Yandex, etc.)? If so, how would I go about doing this? With .htaccess, I'm guessing, but I'm not an expert. What can people do with the information in the file? Maybe someone can give me some best practices? (I have a WordPress-based website.) Thanks in advance!
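(A hedged sketch under those assumptions: in Apache, mod_rewrite can return a 403 for robots.txt unless the user agent looks like a known crawler. User-agent strings are trivially spoofed, so this only deters casual viewers; the bot names below are illustrative, not a complete list.)

RewriteEngine On
# Deny robots.txt to anything that doesn't identify as a crawler
RewriteCond %{REQUEST_URI} ^/robots\.txt$
RewriteCond %{HTTP_USER_AGENT} !(googlebot|bingbot|yandex) [NC]
RewriteRule ^ - [F]
-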
Robots.txt query
Technical SEO | Karen_Dauncey
Quick question: if this appears in a client's robots.txt file, what does it mean? Disallow: /*/_/ Does it mean no pages can be indexed? I have checked and there are no pages in the index, but it's a new site too, so I'm not sure if this is the problem. Thanks, Karen
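(A hedged reading, given Google's wildcard syntax where * matches any sequence of characters: the rule blocks only URLs containing a /_/ segment after at least one other path segment, not the whole site.)

Disallow: /*/_/
# Blocked (hypothetical examples): /products/_/widget.html
# Not blocked: /products/widget.html, or the homepage
-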
Blocked by meta-robots but there is no robots file
Technical SEO | Twinbytes
OK, I'm a little frustrated here. I've waited a week for the next weekly index to take place after changing the privacy setting in a WordPress website so Google can index it, but I still got the same problem: blocked by meta-robots, noindex, nofollow. But I do not see a robots file anywhere, and the privacy setting in this WordPress site is set to allow search engines to index the site. The website is www.marketalert.ca. What am I missing here? Why can't I get the rest of the website indexed, and is there a faster way to test this rather than waiting another week just to find out it didn't work again?
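(A hedged pointer: "blocked by meta-robots" refers to a tag in the page's HTML head, not to a robots.txt file; WordPress's discourage-search-engines setting typically outputs something like the line below, so viewing the rendered page source is a faster test than waiting a week for a recrawl.)

<meta name='robots' content='noindex,nofollow' />
-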
SeoMoz robot is not able to crawl my website.
Technical SEO | ashish211
Hi, the SEOmoz robot crawls only two pages of my website. I contacted the SEOmoz team and they told me that the problem is caused by JavaScript use. What is the solution to this? Should I contact my web design company and ask them to remove the JavaScript code?
-
Robots.txt question
Technical SEO | seoug_2005
I want to block spiders from a specific part of the website (say, the abc folder). In robots.txt, I have to write: User-agent: * Disallow: /abc/ Do I have to include the trailing slash, or will this do: User-agent: * Disallow: /abc
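(A hedged note on the difference: the two forms are not equivalent, because robots.txt rules match URL paths by prefix.)

User-agent: *
# Blocks only URLs inside the folder, e.g. /abc/page.html
Disallow: /abc/
# Blocks everything whose path starts with /abc, including /abc.html and /abcde/
Disallow: /abc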