Should you use robots.txt to block pages on your site that don't have high-quality content or aren't contributing a great deal, so that when Google crawls your site the best-performing content has a higher chance of being indexed?
-
I'm really not sure what the best practice is for this.
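For reference, the sort of robots.txt rule I'm thinking about would look something like this (the paths here are just made-up examples):

```
User-agent: *
Disallow: /thin-content-pages/
Disallow: /old-press-releases/
```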
-
Thank you for your answer John!
-
I would definitely not block these pages. You want to block as few pages as possible.
1. These pages can be used to strengthen internal linking by pointing links from them to your important pages.
2. Google crawls thousands of pages, so it will most likely crawl all of your important and unimportant pages anyway.
3. You can de-prioritize these pages in the XML sitemap, telling the spiders that there are more important pages to crawl (see the sketch after this list).
4. If these are similar pages, use the URL parameter tool in Search Console to indicate that a page may be a filtered version of a more important page.
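A minimal sketch of what point 3 could look like in an XML sitemap; the URLs and priority values are purely hypothetical, and Google may treat the priority field as little more than a hint:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Important page: higher priority hint -->
  <url>
    <loc>https://www.example.com/key-landing-page/</loc>
    <priority>0.9</priority>
  </url>
  <!-- Thin page: lower priority hint (or simply leave it out of the sitemap) -->
  <url>
    <loc>https://www.example.com/thin-content-page/</loc>
    <priority>0.2</priority>
  </url>
</urlset>
```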
-
Hi,
Yes, you can block such pages in robots.txt. I would also like to let you know that if you don't want some pages indexed, you can use a meta robots noindex tag.
I would go for noindex in your case.
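A minimal sketch of that tag, assuming it goes in the <head> of each page you want kept out of the index (note the page must stay crawlable, i.e. not blocked in robots.txt, for Google to see it):

```html
<head>
  <!-- Asks compliant crawlers not to index this page, while still following its links -->
  <meta name="robots" content="noindex, follow">
</head>
```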
Hope this helps.
Thanks
-
Is it possible to beef up those lower-quality pages with better content? If they are important main content pages, I would imagine you would want to improve them.
However, if you were going to block them, I would recommend a noindex meta tag within the <head> of those pages instead.
Hope that helps some.
-
Related Questions
-
Can you use no-index to counter duplicate content across separate domains?
Hi Moz Community, I have a client who is splitting a sub-brand out from their company website to its own domain. They have lots of content around the theme, and they want to migrate most of that content to the new domain, but they also want to keep it on the main site, as the main site gets lots of traffic. My question: they want search traffic to go to the new site, but they also want to keep the best content on the original site so it can be found in the nav. If they no-index the identical content on the main site and index the content on the new site, will they still be penalised for duplicate content? Our advice has been to keep the thematic content on both sites but make the versions different enough that they are not considered duplicates; we routinely write the same blog post in 50 different ways for them. However, their Head of Web asked whether no-index is a route that means they don't need to pay for, and wait for, brand-new content. They are comfortable losing traffic until the new domain gets traction. In theory, if they are telling Google not to index or rank the main site's content, the new site shouldn't be penalised, but I'm not confident giving that advice as I've never been asked to do this before. Thoughts?
Technical SEO | Algorhythm_jT
-
Using one robots.txt for two websites
I have two websites that are hosted in the same CMS. Rather than having two separate robots.txt files (one for each domain), my web agency has created one which lists the sitemaps for both websites, like this:

User-agent: *
Disallow:
Sitemap: https://www.siteA.org/sitemap
Sitemap: https://www.siteB.com/sitemap

Is this ok? I thought you needed one robots.txt per website which provides the URL for the sitemap. Will having both sitemap URLs listed in one robots.txt confuse the search engines?
Technical SEO | ciehmoz
-
My client is using a mobile template for their local pages and Google Search Console is reporting thousands of duplicate titles/meta descriptions
So my client has 2,000+ different store locations. Each location has a standard desktop page, and my client opted for a corresponding mobile template for each location. Now Google Search Console is reporting thousands of duplicate titles/meta descriptions. However, this is only because the mobile template and the desktop store pages use the exact same title/meta description tags. Is Google penalizing my client for this? Would it be worth updating the mobile template title/meta description tags?
Technical SEO | RosemaryB
-
Can blocked URL parameters still be crawled and indexed by Google?
Hi guys, I have two questions, and one might be a dumb question, but here it goes. I just want to be sure that I understand: if I tell Webmaster Tools to ignore a URL parameter, will Google still index and rank my URL? Is it OK if I don't append the brand filter in the URL structure, and will I still rank for that brand? Thanks. PS: OK, 3 questions :)...
Technical SEO | catalinmoraru
-
Why is my site not indexing in Google?
In Google Webmaster Tools I updated my sitemap on March 6th. There are around 22,000 links, but Google fetched only 5,300 of them for a long time. I waited a month with no improvement in the Google index, so on April 6th we uploaded a new sitemap (1,200 links in total), but only 4 links are indexed in Google. Why is Google not indexing my URLs? Does this affect our ranking in the SERPs? How many links is it advisable to submit in a sitemap for a website?
Technical SEO | Rajesh.Chandran
-
Huge number of indexed pages with no content
Hi, We have accidentally had Google index lots of our pages with no useful content on them at all. The site in question is a directory site, where we have tags and we have cities. Some cities have suppliers for almost all the tags, but there are lots of cities where we have suppliers for only a handful of tags. The problem occurred when we created a page for each city listing the tags as links. Unfortunately, our programmer listed all the tags, so not only the ones where we have businesses offering their services, but all of them! We have 3,142 cities and 542 tags. I guess you can imagine the problem this caused! Now I know that Google might simply ignore these empty pages and not crawl them again, but when I check a city (city site:domain) with only 40 providers, I still have 1,050 pages indexed. (Yes, we have some issues between the 550 and the 1,050 as well, but first things first!) These pages might not be crawled again, but they will be clicked, they will cause bounces, and the whole user experience will be terrible. My idea is that I might use meta noindex for all of these empty pages and perhaps also 301 redirect the empty category pages directly to the main page of the given city. Can this work the way I imagine? Any better solution to cut this really bad nightmare short? Thank you in advance. Andras
Technical SEO | Dilbak
-
How do I get Google to index the right pages with the right keywords?
Hello, I notice that even though I have a sitemap, Google is indexing the wrong pages under the wrong keywords. As a result, the site is not as relevant and is not ranking properly.
Technical SEO | ursalesguru
-
How to add a disclaimer to a site but keep the content accessible to search robots?
Hi, I have a client with a site regulated by the UK FSA (Financial Services Authority). They have to display a disclaimer which visitors must accept before browsing. This is for real, not like the EU cookie compliance debacle 🙂 Currently the site 302 redirects anyone not already cookied (as having accepted) to a disclaimer page/form. Do you have any suggestions or examples of how to require acceptance while maintaining accessibility? I'm not sure just using a jQuery lightbox would meet the FSA's requirements, as it wouldn't be shown if JS was not enabled. Thanks, -Jason
Technical SEO | GroupM_APAC