Should you use robots.txt to block pages on your site that have low-quality content or contribute very little, so that when Google crawls your site the best-performing content has a higher chance of being indexed?
-
I'm really not sure what best practice is for this.
-
Thank you for your answer John!
-
I would definitely not block these pages. You want to block as few pages as possible.
1. These pages can be used to strengthen internal linking by linking from them to your important pages.
2. Google crawls thousands of pages, so it will likely crawl all of your important and unimportant files anyway.
3. You can de-prioritize these pages in the XML sitemap, telling the spiders that there are more important pages to crawl (see the example below).
4. If these are similar pages, then use the URL parameter tool in Search Console to indicate a page might be a filtered version of a more important page.
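To illustrate point 3, here is a minimal sketch of how priority hints might look in an XML sitemap. The URLs and values are hypothetical, and search engines treat priority as a hint at most, not a directive:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/key-landing-page/</loc>
    <priority>0.9</priority>
  </url>
  <url>
    <loc>https://www.example.com/thin-content-page/</loc>
    <priority>0.1</priority>
  </url>
</urlset>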
-
Hi,
Yes, you can block such pages in robots.txt. I would also like to let you know that if you don't want some pages indexed, you can use a meta robots noindex tag instead. I would go for the noindex tag in your case.
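For reference, the noindex directive sits in the page's head section; a minimal sketch (the surrounding markup is just illustrative):
<head>
  <meta name="robots" content="noindex">
</head>
Keep in mind that Google has to be able to crawl the page to see this tag, so a page carrying noindex should not also be blocked in robots.txt.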
Hope this helps.
Thanks
-
Is it possible to beef up those lower quality pages with better content? If they are important main content pages I would imagine you would want to improve those pages.
However, if you were going to block them, I would recommend a noindex meta tag within the head of those pages.
Hope that helps some.
Related Questions
-
Using one robots.txt for two websites
I have two websites that are hosted in the same CMS. Rather than having two separate robots.txt files (one for each domain), my web agency has created one which lists the sitemaps for both websites, like this:
User-agent: *
Disallow:
Sitemap: https://www.siteA.org/sitemap
Sitemap: https://www.siteB.com/sitemap
Is this ok? I thought you needed one robots.txt per website which provides the URL for the sitemap. Will having both sitemap URLs listed in one robots.txt confuse the search engines?
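For comparison, the one-file-per-site setup the asker has in mind would look something like this, with each domain serving its own robots.txt that points at its own sitemap (siteA.org and siteB.com are the placeholders from the question):
# Served at https://www.siteA.org/robots.txt
User-agent: *
Disallow:
Sitemap: https://www.siteA.org/sitemap

# Served at https://www.siteB.com/robots.txt
User-agent: *
Disallow:
Sitemap: https://www.siteB.com/sitemap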
Technical SEO | ciehmoz
-
Should a login page for a payroll / timekeeping company be nofollowed or blocked in robots.txt?
I am managing a timekeeping/payroll company. My question is about the customer login page. Would this typically be nofollowed or blocked for robots?
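If the intent is simply to keep crawlers away from the login area, the usual mechanism is a robots.txt disallow rule; a hedged sketch, assuming the login page lives under a /login/ path (the path is an assumption, not something stated in the question):
User-agent: *
Disallow: /login/
Note that robots.txt controls crawling, not indexing: a disallowed URL can still show up in search results if other pages link to it, so a meta robots noindex tag is the safer choice if the goal is to keep the page out of the index entirely.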
Technical SEO | donsilvernail
-
What is the best way to show content in Listing Pages?
On an e-commerce product listing page there is always a question of how to show the content. As I understand it, we can show content in two different ways. 1. Show a little content and use a "Read more" link. In this case the message to Google is: here is the visible content, and the rest is hidden but available for visitors who click Read more. 2. Use a scroll bar, so the message to Google and to visitors is that the full content is available here and they just need to scroll down to read further. I want to know which method of showing content is best, and what the SEO impact is where there is a UI constraint, or whether both methods are fine without any SEO impact. Please share your suggestions.
Technical SEO | kathiravan
-
Google crawling but not indexing for no apparent reason
Client's site went secure about two months ago and chose the root domain as the rel canonical, so the site redirects to https://rootdomain.com (no "www"). The client is seeing the site recognized and indexed by Google about every 3-5 days, and then it drops out of the index until they request a "Fetch". They've been going through this annoying process for about 3 weeks now. Not sure if it's a server issue or a domain issue. They've done work to enhance .htaccess (i.e., the redirects) and robots.txt. If you've encountered this issue and have a recommendation, or a tech site or person to recommend, please let me know. Google search engine results are respectable. One option would be to do nothing, but then would SERPs start to fall without requesting a new Fetch? Thanks in advance, Alan
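For context on the kind of .htaccess redirect rules being described, here is a minimal Apache mod_rewrite sketch that forces HTTPS and drops the www prefix (rootdomain.com mirrors the placeholder used in the question; this is an illustration, not the client's actual configuration):
RewriteEngine On
# Send any non-HTTPS or www request to the canonical https://rootdomain.com/ URL
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} ^www\. [NC]
RewriteRule ^(.*)$ https://rootdomain.com/$1 [R=301,L]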
Technical SEO | alankoen123
-
What is the best way to stop a page being indexed?
What is the best way to stop a page being indexed? Is it to do it at a site level, with a robots.txt file in the root directory, or at a page level, with a meta robots noindex tag?
Technical SEO | cbarron
-
Dealing with high link juice/low value pages?
How do people deal with low value pages on sites which tend to pool PageRank and internal links? For example login pages, copyright and privacy notice pages, etc. I know Matt Cutts recently did a video saying don't worry about them, and in the past we all know various strategies like nofollow, etc. were effective, but no longer are. Are there any other tactics or techniques for dealing with these pages and leveraging them for SEO benefit? Maybe having internal links on these pages pointing to strategic pages to pass off some of the link juice?
Technical SEO | IrvCo_Interactive
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU), but what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear - we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | AndreVanKets
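For reference, the blanket block being asked about is a one-line robots.txt rule, sketched below; note, though, that Google's guidelines now explicitly recommend letting Googlebot crawl the CSS and JavaScript needed to render pages:
User-agent: *
Disallow: /js/
-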
Google Panda and ticketing sites: quality of content
Hi from Madrid! I am managing the marketing department of a ticketing site in Europe similar to Stubhub.com. We have thousands of events and, until now, we used templates for their descriptions. A lot of events share the same description with minor changes. They also have a lot of tickets on sale, and that is unique content that differs on each event page. Now the latest Google Panda update has hit Europe and I was wondering if it will affect us a lot. It's hard to tell for now, because we are in the middle of the summer and the volume of searches in our industry decreases a lot during this time of year. I know that ideally we should have unique descriptions, but that would need a lot of resources and they are not important for our users: they just want to know the venue, the time and the price of the tickets! Have you experienced anything related to the Google Panda update with a similar site or in another e-commerce industry? Thanks!
Technical SEO | jorgediaz