Should all pages on a site be included in either your sitemap or robots.txt?
-
I don't have a specific scenario here, but I'm just curious, as I fairly often come across sites that have, for example, 20,000 pages but only 1,000 in their sitemap. If they only want 1,000 of their URLs included in their sitemap and indexed, should the others be excluded using robots.txt or a page-level exclusion? Is there a point to having pages that are included in neither and leaving it up to Google to decide?
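For reference, the two exclusion mechanisms I'm comparing look roughly like this (the paths are hypothetical examples). A robots.txt rule is a crawl-level exclusion:

```
# robots.txt — Google won't fetch matching paths at all
User-agent: *
Disallow: /print/
Disallow: /internal-search/
```

A page-level exclusion instead lets the page be crawled but asks engines not to index it:

```html
<!-- placed in the <head> of the page to be excluded -->
<meta name="robots" content="noindex">
```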
-
Thanks guys!
-
You bet - Cheers!
-
Clever PHD,
You are correct. I have found that these little housekeeping issues like eliminating duplicate content really do make a big difference.
Ron
-
I think Ron's point was that if you have a bunch of duplicates, the dups are not "real" pages if you are only counting "real" pages. So if Google indexes your "real" pages plus the duplicate versions of them, you can have more pages indexed than you actually have. The issue then is that duplicate versions of the same page sit in Google's index, so which one will rank for a given key term? You could be competing against yourself. That is why it is so important to deal with crawl issues.
-
Thank you. Just curious, how would the number of pages indexed be higher than the number of actual pages?
-
I think you are looking at the number of pages indexed, which is generally higher than the number of pages on your website. There is a point to marking things up so that there is a noindex on any pages that you do not want indexed, as well as properly marking up the pages that you do specifically want indexed. It is really important that you eliminate duplicate pages. A common source of these duplicates is improper tags on the blog. Make sure that your tags are set up in a logical hierarchy, like your sitemap. This will assist the search engines when they re-index your pages.
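As an illustration of marking up the pages you do specifically want indexed: those are the ones that belong in your XML sitemap. A minimal sketch, with hypothetical URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- list only the canonical, index-worthy pages -->
  <url>
    <loc>https://www.example.com/</loc>
  </url>
  <url>
    <loc>https://www.example.com/widgets/blue-widget</loc>
  </url>
</urlset>
```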
Hope this helps,
Ron
-
You want to have as many pages in the index as possible, as long as they are high-quality pages with original content. If you publish quality original articles on a regular basis, you want all of those pages indexed. Yes, from a practical perspective you may only be able to focus on tweaking the SEO on a portion of them, but if you have good SEO processes in place as you produce those pages, they will rank long term for a broad range of terms and bring traffic.
If you have 20,000 pages because you have an online catalog with 345 different ways to sort the same set of results, or if you have keyword-search URLs, printer-friendly pages, or shopping-cart pages, you do not want those indexed. These pages are typically low-quality/thin-content pages and/or duplicates, and they do you no favors. You would want to use the noindex meta tag or a canonical where appropriate. The reality is that out of the 20,000 pages, probably only a subset are the "originals", so you don't want to waste Google's time crawling the rest.
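To make that concrete, a hedged sketch with hypothetical URLs: on a duplicate sort view, a canonical points back to the original, while cart or keyword-search pages simply get noindexed:

```html
<!-- on www.example.com/catalog?sort=price (a duplicate sort view) -->
<link rel="canonical" href="https://www.example.com/catalog">

<!-- on cart or keyword-search pages that should stay out of the index -->
<meta name="robots" content="noindex, follow">
```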
A good concept to look up here is Crawl Budget (or Crawl Optimization):
http://searchengineland.com/how-i-think-crawl-budget-works-sort-of-59768
Related Questions
-
Validated pages on GSC display 5x more pages than site:domain.com?
Hi Mozzers, when checking the Coverage report in GSC I am seeing over 649,000 valid pages (https://cl.ly/ae46ec25f494), but when performing site:domain.com I am only seeing 130,000 pages. Which one is more the source of truth, especially since I have checked some of these "valid" pages and noticed they're not even indexed?
Intermediate & Advanced SEO | Ty1986
-
Redirecting Pages During Site Migration
Hi everyone, we are changing a website's domain name. The site architecture will stay the same, but we are renaming some pages. How do we treat redirects? I read this on Search Engine Land: "The ideal way to set up your redirects is with a regex expression in the .htaccess file of your old site. The regex expression should simply swap out your domain name, or swap out HTTP for HTTPS if you are doing an SSL migration. For any pages where this isn't possible, you will need to set up an individual redirect. Make sure this doesn't create any conflicts with your regex and that it doesn't produce any redirect chains." Does the above mean we are able to set up a domain-wide regex redirect for the pages we are not renaming and then have individual 1:1 redirects for the renamed pages in the same .htaccess file? So have both? Will that conflict with the regex rule?
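For what it's worth, a minimal sketch of how both could coexist, assuming Apache with mod_rewrite and hypothetical domain/page names:

```apache
RewriteEngine On

# 1:1 redirects for renamed pages go FIRST so they win over the catch-all
RewriteRule ^old-page-name/?$ https://www.new-site.com/new-page-name/ [R=301,L]

# catch-all: swap the domain on everything else, keeping the path
RewriteRule ^(.*)$ https://www.new-site.com/$1 [R=301,L]
```

Because each rule carries the [L] flag, the first match wins; as long as the specific renames sit above the catch-all, the two shouldn't conflict.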
Intermediate & Advanced SEO | nhhernandez
-
Duplicate Page getting indexed and not the main page!
Main page: www.domain.com/service
Duplicate page: www.domain.com/products-handler.php/?cat=service
1. My page was getting indexed properly in 2015 as www.domain.com/service.
2. After a redesign in Aug 2016, a new URL pattern with the "products-handler" parameter surfaced for my pages.
3. One of my product landing pages had a 301 permanent redirect onto the "products-handler" page: the main page www.domain.com/service was redirecting to www.domain.com/products-handler.php/?cat=service.
4. This redirect remained in place until Nov 2016.
5. I took over the website in 2017; the main page was getting indexed and deindexed on and off.
6. This June, Google suddenly started indexing "domain.com/products-handler.php/?cat=service".
7. These "products-handler.php" pages were creating sitewide internal duplication, so I blocked them in robots.txt.
8. Then my main page (www.domain.com/service) dropped out of Google's index entirely.
Q1) What could be the possible reasons for the creation of these pages?
Q2) How could a 301 end up pointing from the main URL to the duplicate URL?
Q3) When I have submitted my main URL multiple times in Search Console, why doesn't it get indexed?
Q4) How can I make Google understand that these URLs are not my preferred URLs?
Q5) How can I permanently remove these products-handler.php URLs? (One possible approach is sketched below.) All suggestions and discussions are welcome. Thanks in advance! 🙂
Intermediate & Advanced SEO | Ishrat-Khan
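For Q5 above, one hedged sketch (assuming Apache, and assuming the robots.txt block is lifted first; while the URLs are blocked, Google can never see a redirect or noindex on them) would be to 301 the parameter pattern back to the clean URL so the duplicates consolidate:

```apache
RewriteEngine On

# e.g. /products-handler.php/?cat=service -> /service (hypothetical mapping)
RewriteCond %{QUERY_STRING} (?:^|&)cat=([^&]+)
RewriteRule ^products-handler\.php/?$ /%1? [R=301,L]
```
-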
Merging B2B site with B2C site
Hi, a mobile phone accessory client of ours has a retail site (B2C) and a trade site (B2B). The retail site does pretty well and ranks highly for a number of terms. The trade site doesn't really rank for anything, as they don't optimise it. They would like to merge the two sites and allow trade customers to log in and purchase goods in bulk for their business. If they were to merge the trade site into the already successful consumer site, what would be the best way of doing this, and what implications, if any, would it have on the organic visibility of the B2C site? Would it be possible to target retail and trade customers on one website? Cheers, Lewis
Intermediate & Advanced SEO | PeaSoupDigital
-
Google Panda question: category pages on e-commerce site
Dear mates, could you check this category page of our e-commerce site (http://tinyurl.com/zqjalng) and give me your opinion: is this a Panda-safe page or not? Currently I have it set to NOINDEX to prevent any Panda hit, but I'm in doubt. My question is, "Can I index this page again in peace?" Thank you, Clay
Intermediate & Advanced SEO | ClayRey
-
Can URLs blocked with robots.txt hurt your site?
We have about 20 testing environments blocked by robots.txt, and these environments contain duplicates of our indexed content. They are all blocked by robots.txt yet appear in Google's index as "blocked by robots.txt". Can they still count against us or hurt us? I know the best practice for removing them permanently would be to use the noindex tag, but I'm wondering whether they can still hurt us if we leave them the way they are.
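A hedged sketch of that noindex route: a robots.txt block stops Google from fetching the pages at all, so it never sees an on-page noindex. The block would have to come off, with a site-wide header taking over on each test host (assuming Apache with mod_headers enabled; placement, e.g. in the staging vhost config, is hypothetical):

```apache
# send a noindex header with every response on the staging host,
# so the duplicates drop out of the index once recrawled
Header set X-Robots-Tag "noindex, nofollow"
```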
Intermediate & Advanced SEO | nicole.healthline
-
Does the number of links on a page metric include repeated links?
Just wondering if the "number of links on the page" metric includes links that are repeated. So, if I had 100 links to one page, would this count as 100 links or as 1 link? If it's the former, does this mean more links to one page adds weight? Thanks
Intermediate & Advanced SEO | Cornwall
-
E-commerce Site - Filter Pages
Hi, we have a client with a fairly large e-commerce site that went live quite recently. The site is near enough fully indexed by Google, but one thing I've noticed is that filtered search-results pages are being indexed, all with duplicate page titles. Obviously this is an issue that needs to be looked at ASAP. My question is this: would we be better off tweaking site settings so that page titles are constructed from the filters (brand/price/size) and are therefore unique (and useful for searchers who are after a specific brand or size of a given item)? Or should we rel=canonical the filtered pages so that they are eventually dropped from the index (the safer of the two options)? Thanks in advance for your help!
Intermediate & Advanced SEO | jasarrow