Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
-
I run a quality vertical search engine. About 6 months ago we had a problem with our sitemaps, which resulted in most of our pages getting tossed out of Google's index. As part of the response, we put a bunch of robots.txt restrictions in place on our search results to prevent Google from crawling through pagination links and other parameter-based variants of our results (sort order, etc.). The idea was to 'preserve crawl budget' and speed up the rate at which Google could get our millions of pages back into the index by focusing its attention/resources on the right pages.
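To illustrate the kind of rules we added (the parameter names below are simplified placeholders, not our exact file):
User-agent: *
Disallow: /*?page=
Disallow: /*?sort=
Disallow: /*?order=
The intent was that Googlebot would only spend its time on the first page of each result set and skip all the parameter variants.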
The pages are back in the index now (and have been for a while), and the restrictions have stayed in place since then. But in doing a little SEOMoz reading this morning, I started to wonder whether that approach may now be harming us...
http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Specifically, I'm concerned that a) we're blocking the flow of link juice, and b) by preventing Google from crawling the full depth of our search results (i.e. pages >1), we may be making our site look 'thin' when it isn't. With respect to b), we've been hit by Panda and have been implementing plenty of changes to improve engagement, eliminate inadvertently low-quality pages, etc., but we have yet to find 'the fix'...
Thoughts?
Kurus
-
I always advise people NOT to use robots.txt to block off pages - it isn't the best way to handle things. In your case, there are two options you could consider:
1. For variant pages (multiple parameter versions of the same page), use rel=canonical to consolidate strength in the original page and keep the variants out of the index.
2. This one is controversial and many may disagree - it depends on the situation - but allow crawling of the page while keeping it out of the index with "noindex, follow". That still passes any juice, but won't index pages you don't want in the SERPs. I normally do this for search result pages that get indexed... Quick examples of both options are below.
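For reference, both of those are one-line additions in the <head> of the variant/search result pages (example.com and the widgets URL are just placeholders here):
Option 1 - variant page pointing at the original:
<link rel="canonical" href="http://www.example.com/widgets" />
Option 2 - crawlable but kept out of the index:
<meta name="robots" content="noindex, follow" />
Unlike a robots.txt block, Google still fetches these pages and can follow the links on them, which is what keeps the juice flowing.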
-
Got disconnected by seomoz as I posted, so here is the short answer:
You were affected by Panda, so you may have pages with almost no content. Those pages may be the ones using up crawl budget, much more than the paginated results. Worry about those low-value pages and let Google handle the paginated results.
-
Baptiste,
Thanks for the feedback. Can you clarify what you mean by the following?
"On a side note, if you were impacted by Panda, I would strongly suggest to remove / disallow the empty pages on your site. This will give you more crawl budget for interesting content."
-
I would not dig too much into the crawl budget + pagination problem - Google knows what pagination is and will increase the crawl budget when necessary. On the 'thin' view of your site, I think you're right, and I would immediately allow pages > 1 to be crawled and indexed.
Be aware this may or may not have a big impact on your site; it depends on the navigation system (you may have a lot of paginated subsets).
What do site: queries tell you? Do you have all your items submitted in your sitemaps and indexed (see WMT)?
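For example, a quick check like this in Google (substitute your own domain and pagination parameter - 'page' here is just a guess) shows whether any of the paginated URLs are currently indexed:
site:yoursite.com inurl:page=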
On a side note, if you were impacted by Panda, I would strongly suggest removing / disallowing the empty pages on your site. This will give you more crawl budget for interesting content.
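A rough sketch of what that can look like: if the empty pages happen to share a URL pattern (the path below is purely hypothetical), a plain robots.txt rule keeps Googlebot from spending time on them:
User-agent: *
Disallow: /search/no-results
If they don't share a pattern, the alternative is to detect the zero-result case server-side and send a noindex for just those responses (for example the X-Robots-Tag: noindex HTTP header, or the equivalent meta tag). That won't save crawl budget the way a Disallow does, but it does keep the thin pages out of the index.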