Block in robots.txt instead of using canonical?
-
When I use a canonical tag for pages that are variations of the same page, it basically means that I don't want Google to index this page. But at the same time, spiders will go ahead and crawl the page. Isn't this a waste of my crawl budget? Wouldn't it be better to just disallow the page in robots.txt and let Google focus on crawling the pages that I do want indexed?
In other words, why should I ever use rel=canonical as opposed to simply disallowing in robots.txt?
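For illustration, the robots.txt approach I have in mind would look something like this (the paths here are just placeholders, not our real URLs):
User-agent: *
Disallow: /some-page/variation-1/
Disallow: /some-page/variation-2/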
-
With this info, I would go with robots.txt because, as you say, the crawl-budget saving outweighs any potential loss, given the use of the pages and the absence of links.
Thanks
-
Thanks Robert.
The pages that I'm talking about disallowing do not have rank or links. They are sub-pages of a profile page. If anything, the main page will be linked to, not the sub-pages.
Maybe I should have explained that I'm talking about a large site - around 400K pages. More than 1,000 new pages are created per week. That's why I am concerned about managing crawl budget. The pages that I'm referring to are not linked to anywhere on the site. Sure, Google can potentially get to them if someone decides to link to them on their own site, but this is unlikely and certainly won't happen on a large scale. So I'm not really concerned about losing PageRank on the main profile page if I disallow them. To be clear: we have many thousands of pages with content that we want to rank. The pages I'm talking about are not important in those terms.
So it's really a question of balance... if these pages (there are MANY of them) are included in the crawl (and in our sitemap), potentially it's a real waste of crawl budget. Doesn't this outweigh the minuscule, far-fetched potential loss?
I understand that Google designed rel=canonical for this scenario, but that does not mean that it's necessarily the best way to go considering the other options.
-
Thanks Takeshi.
Maybe I should have explained that I'm talking about a large site - around 400K pages. More than 1,000 new pages are created per week. That's why I am concerned about managing crawl budget. The pages that I'm referring to are not linked to anywhere on the site. Sure, Google can potentially get to them if someone decides to link to them on their own site, but this is unlikely (since it's a sub-page of the main profile page, which is where people would naturally link to) and certainly won't happen on a large scale. So I'm not really concerned about link-juice evaporation. According to AJ Kohn's post, it's not enough to see in Webmaster Tools that Google has indexed all the pages on our site. There is also the issue of how often pages are being crawled, which is what we are trying to optimize for.
So it's really a question of balance... if these pages (there are MANY of them) are included in the crawl (and in our sitemap), potentially it's a real waste of crawl budget. Doesn't this outweigh the minuscule, far-fetched potential loss?
Would love to hear your thoughts...
-
I would go with the canonicals. If there are any links going to these duplicate pages, the canonical will prevent the "link juice evaporation" that happens with links pointing at pages Google can see but can't crawl due to robots.txt. It's best to let Google just crawl the page and see the canonical, so that it understands it is a duplicate page.
Having canonicals on all your pages is good practice anyway, as it can prevent inadvertent duplicate content from things like query parameters.
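As a rough sketch (example.com and the path are placeholders), each variation would carry a tag like this in its <head>, pointing at the version you want indexed:
<link rel="canonical" href="https://www.example.com/profile/jane-doe/">
That way, a crawler landing on a variant like https://www.example.com/profile/jane-doe/?utm_source=newsletter still consolidates its signals onto the clean URL.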
Crawl budget can be of some concern if you're talking about a massive number of pages, but start by taking a look at Google Webmaster Tools and seeing how many of your pages are being crawled vs. the total number of pages on your site. As long as that ratio isn't small, you should be good. You can also earn more crawl budget by building up your domain authority through links.
-
I don't disagree at all, and I think AJ Kohn is a rock star. In SEO, I have learned over time that there are rarely absolutes like "always do this" or "never do that." I based my answer on how you posed the question.
If you read AJ's post, you will note that the rel=canonical issue comes up in the comments from others, not in the body of his post. Yes, if the page is superfluous, like a cart page or a contact page, use robots.txt to block the crawl. But if you have a page with rank, links, etc. that help your canonical page, how are you helping yourself by forgoing rel=canonical?
I think his bigger point was that you want to be aware and to understand that the number of times you are crawled is at least partially governed by PageRank, which is governed by all those other things we discussed. If you understand that and keep the crawl focused on your better pages, you help yourself.
Does that clarify a bit?
Best -
Hi, even if you use the robots.txt file to block these pages, Google can still pick up references to them from third-party websites and index the URLs from there. Such pages will not have a description snippet in the search results; instead, they will show text that reads:
A description of this result is not available because of this site's robots.txt.
So, to keep these pages out of the index entirely, you can use a page-level meta robots noindex tag instead of (not alongside) the robots.txt block; Google has to be able to crawl a page to see its meta tag, so blocking the same URL in robots.txt would hide the tag from the crawler. By the way, the robots.txt file can definitely save you some crawl budget. I don't think you should be worrying much about crawl budget, though, as long as your website is easy to crawl, with simple text-based internal links, fast servers, and so on.
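As a sketch, the page-level tag goes in the <head> of each page you want kept out of the index (again, only if the page remains crawlable):
<meta name="robots" content="noindex">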
Those are my two cents, my friend.
Best regards,
Devanur Rafi
-
Thanks for the response, Robert.
I have read lots of SEO advice on maximizing your "crawl budget" - making sure your internal link system is built well to send the bots to the right pages. According to my research, since bots only spend a certain amount of time on your site when they are crawling, it is important to do whatever you can to ensure that they don't "waste time" on pages that are not important for SEO. Just as one example, see this post from AJ Kohn.
Do you disagree with this whole approach?
-
Yair
I think that the canonical is the better option. I am unsure about your use of the term "crawl budget," in that there is no fixed number of times a page or a site will be crawled versus a second, similar site. I have a huge reference site that is crawled every couple of days, and I have small sites of ten pages that are crawled weekly or less. It is dependent on the traffic and the behaviors of that traffic (which would include the number of inbound links, etc.) and on things like you re-submitting your sitemap.
The canonical tag was created to clarify for the search engine which page you consider to be the relevant one. Go ahead and use it.
Best,
Robert