Should you use robots.txt to block pages on your site that have low-quality content or aren't contributing a great deal, so that when Google crawls your site the best-performing content has a higher chance of being indexed?
-
I'm really not sure what best practice is for this.
-
Thank you for your answer John!
-
I would definitely not block these pages. You want to block as few pages as possible.
1. These pages can support your internal linking by pointing to your important pages.
2. Google crawls thousands of pages; it will likely crawl all of your pages, important and unimportant, anyway.
3. You can de-prioritize these pages in the XML sitemap, telling the spiders that there are more important pages to crawl.
4. If these are similar pages, then use the URL parameter tool in Search Console to indicate a page might be a filtered version of a more important page.
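On point 3, de-prioritizing in the XML sitemap just means giving those URLs a lower `<priority>` value. A minimal sketch with placeholder URLs (note that `priority` is only a hint to crawlers, not a directive):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Important page: signal that this should be crawled first -->
  <url>
    <loc>https://www.example.com/key-landing-page/</loc>
    <priority>1.0</priority>
  </url>
  <!-- Thin page: still listed, but de-prioritized -->
  <url>
    <loc>https://www.example.com/thin-page/</loc>
    <priority>0.1</priority>
  </url>
</urlset>
```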
-
Hi,
Yes, you can block such pages in robots.txt. I would also like to let you know that if you don't want some pages indexed, you can use a meta robots noindex tag. I would go for noindex in your case.
Hope this helps.
Thanks
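For reference, blocking a section via robots.txt looks like the sketch below (the paths are hypothetical examples). Keep in mind that Disallow only stops crawling; URLs that are already indexed can remain in the index without a snippet.

```txt
# robots.txt, served at https://www.example.com/robots.txt
User-agent: *
# Hypothetical thin-content sections
Disallow: /tag-archives/
Disallow: /search-results/
```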
-
Is it possible to beef up those lower quality pages with better content? If they are important main content pages I would imagine you would want to improve those pages.
However, if you were going to block them, I would recommend a noindex meta tag within the header of those pages.
Hope that helps some.
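The header tag in question is the meta robots tag. A minimal sketch of a page that can still be crawled (so link equity continues to flow through its links) but is kept out of the index:

```html
<head>
  <!-- Keep this page out of the index, but let crawlers follow its links -->
  <meta name="robots" content="noindex, follow">
</head>
```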