Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
-
I run a quality vertical search engine. About 6 months ago we had a problem with our sitemaps, which resulted in most of our pages getting tossed out of Google's index. As part of the response, we put a bunch of robots.txt restrictions in place on our search results to prevent Google from crawling through pagination links and other parameter-based variants of our results (sort order, etc.). The idea was to 'preserve crawl budget' in order to speed up the rate at which Google could get our millions of pages back into the index by focusing attention/resources on the right pages.
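For illustration, the rules were roughly this shape (a sketch only - the parameter names here are hypothetical placeholders, not our actual ones):

User-agent: *
# Keep crawlers out of paginated result pages...
Disallow: /*?page=
Disallow: /*&page=
# ...and out of sort-order and similar parameter variants
Disallow: /*?sort=
Disallow: /*&sort=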
The pages are back in the index now (and have been for a while), and the restrictions have stayed in place since that time. But in doing a little SEOmoz reading this morning, I came to wonder whether that approach may now be harming us...
http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Specifically, I'm concerned that a) we're blocking the flow of link juice, and b) by preventing Google from crawling the full depth of our search results (i.e. pages >1), we may be making our site wrongfully look 'thin'. With respect to b), we've been hit by Panda and have been implementing plenty of changes to improve engagement, eliminate inadvertently low-quality pages, etc., but we have yet to find 'the fix'...
Thoughts?
Kurus
-
I always advise people NOT to use robots.txt to block off pages - it isn't the best way to handle things. In your case, there are two options you might consider:
1. For variant pages (multiple parameter versions of the same page), use rel=canonical to consolidate strength in the original page and keep the variants out of the index.
2. This one is controversial, and many may disagree, but it depends on the situation: allow crawling of the page, but don't allow indexing - 'noindex, follow' - which would still pass any juice but won't index pages that you don't want in the SERPs. I normally do this for search results pages that get indexed... (A minimal sketch of both options appears below.)
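A minimal sketch of both (example.com and the URLs are placeholders). For option 1, each variant page points at the original from its <head>:

<link rel="canonical" href="http://www.example.com/results" />

For option 2, a page you want crawled but kept out of the index carries:

<meta name="robots" content="noindex, follow" />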
-
Got disconnected by SEOmoz as I posted, so here is the short answer:
You were affected by Panda, so you may have pages with almost no content. These pages may be the ones using up crawl budget, much more than the paginated results. Worry about these low-value pages and let Google handle the paginated results.
-
Baptiste,
Thanks for the feedback. Can you clarify what you mean by the following?
"On a side note, if you were impacted by Panda, I would strongly suggest to remove / disallow the empty pages on your site. This will give you more crawl budget for interesting content."
-
I would not dig too much into the crawl budget + pagination problem - Google knows what pagination is and will increase the crawl budget when necessary. On the 'thin' view of your site, I think you're right, and I would immediately allow pages >1 to be indexed.
Be aware that this may or may not have a big impact on your site; it depends on the navigation system (you may have a lot of paginated subsets).
What do site: queries tell you? Do you have all your items submitted in your sitemaps and indexed (see WMT)?
On a side note, if you were impacted by Panda, I would strongly suggest to remove / disallow the empty pages on your site. This will give you more crawl budget for interesting content.
-
Related Questions
-
What do you add to your robots.txt on your ecommerce sites?
We're looking at expanding our robots.txt; we currently don't have the ability to noindex/nofollow. We're thinking about adding the following: Checkout, Basket. Then possibly: Price, Theme, Sortby, and other misc filters. What do you include?
Intermediate & Advanced SEO | ThomasHarvey
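A sketch of what such rules might look like (the paths and parameter names below are hypothetical - every platform structures these differently):

User-agent: *
# Cart and checkout flows
Disallow: /checkout
Disallow: /basket
# Faceted/filter parameters
Disallow: /*?price=
Disallow: /*?theme=
Disallow: /*?sortby=
-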
Google displaying a content box above the listing link for top ranking listing in SERPs
Hi, in the attached Google SERP example, the first listing below the paid search ads has a large box with a snippet of content from the relevant page, followed by the standard link. Does anyone know how you get Google to display a box like this in their SERPs? I checked the code on the page and there doesn't appear to be anything special about it, such as any schema markup; it uses standard list code. Does this only appear for particular types of content or sites, such as medical content in this case? Is the content more likely to appear for lists? Does it only appear for high-authority sites that Google has selected? We have a similar medical information based site and it would be great to try to get Google to display a similar box of content for some of our pages. Thanks. Damien
Intermediate & Advanced SEO | james.harris
-
'Nofollow' footer links from another site, are they 'bad' links?
Hi everyone,
one of my sites has about 1000 'nofollow' links from the footer of another of my sites. Are these in any way hurtful? Any help appreciated.
Intermediate & Advanced SEO | romanbond
-
Robots.txt, does it need preceding directory structure?
Do you need the entire preceding path in robots.txt for it to match? E.g. I know if I add Disallow: /fish to robots.txt it will block:
/fish
/fish.html
/fish/salmon.html
/fishheads
/fishheads/yummy.html
/fish.php?id=anything
But would it also block the following (examples taken from the Robots.txt Specifications)?
en/fish
en/fish.html
en/fish/salmon.html
en/fishheads
en/fishheads/yummy.html
en/fish.php?id=anything
I'm hoping it actually won't match; that way, writing this particular robots.txt will be much easier. Basically, I'm wanting to block many URLs that have BTS- in them, such as:
http://www.example.com/BTS-something
http://www.example.com/BTS-somethingelse
http://www.example.com/BTS-thingybob
But I have other pages that I do not want blocked, in subfolders that also have BTS- in them, such as:
http://www.example.com/somesubfolder/BTS-thingy
http://www.example.com/anothersubfolder/BTS-otherthingy
Thanks for listening.
Intermediate & Advanced SEO | Milian
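For what it's worth, a sketch of how this behaves under the robots.txt specification: Disallow paths are matched from the beginning of the URL path, so a rule that doesn't account for the leading directories won't apply to them.

User-agent: *
# Blocks /fish, /fish.html, /fish/salmon.html, /fishheads, /fish.php?id=anything
# Does NOT block /en/fish or /en/fishheads - matching is anchored at the start of the path
Disallow: /fish
# Likewise, blocks /BTS-something but not /somesubfolder/BTS-thingy
Disallow: /BTS-
-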
Block Level Link Juice
I need a better understanding of how links in different parts of the page pass juice. Much has been written about how footer links pass less juice than links in other parts of the page. The question I have is this: if a page has a hypothetical 1000 points of link juice and can pass on +/-800 points via links, and I have one and only one link in the footer to another page, does it pass the full 800 points? Or, since footers only pass a small fraction of link juice, does it pass, let's say, 80 points, with the other 720 points staying locked up on the page? This question is a hypothetical - I'm just trying to understand the relationships. I don't know if I've explained the question too well, but if someone could answer it, or point me in the right direction, I would appreciate it.
Intermediate & Advanced SEO | CsmBill
-
If I disallow an unfriendly URL via robots.txt, will its friendly counterpart still be indexed?
Our not-so-lovely CMS loves to render pages regardless of the URL structure, just as long as the page name itself is correct. For example, it will render the following as the same page:
example.com/123.html
example.com/dumb/123.html
example.com/really/dumb/duplicative/URL/123.html
To help combat this, we are creating mod rewrites with friendly URLs, so all of the above would simply render as example.com/123. I understand robots.txt respects the wildcard (*), so I was considering adding this to our robots.txt: Disallow: */123.html. If I move forward, will this block all of the potential permutations of the directories preceding 123.html yet not block our friendly example.com/123? Oh, and yes, we do use the canonical tag religiously - we're just mucking with the robots.txt as an added safety net.
Intermediate & Advanced SEO | mrwestern
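A sketch of what that rule could look like; this is an assumption about intent, and since Disallow paths conventionally begin with a slash, a leading /* form is safer than a bare */123.html:

User-agent: *
# Matches /123.html, /dumb/123.html, /really/dumb/duplicative/URL/123.html
# Does not match /123 (no '.html' suffix)
Disallow: /*123.html

One caution: /*123.html also matches URLs like /old123.html; pairing Disallow: /123.html with Disallow: /*/123.html would be the stricter variant.
-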
Oops!! My website links the most to me - I don't get it??
Today, I checked Google Webmaster Tools to answer the following question: who links the most to my website? I assumed that Google Webmaster Tools would give me a list of external websites where I have created my text links. But I can't understand it when I see my own website linking the most to me (4652??). I checked my other websites which are integrated in Google Webmaster Tools. They were also developed on the same platform, with the same internal linking structure, but I am not able to find a similar issue over there. That's why I am quite confused about Vista Store. How can I solve it? Does it really matter? "Open Site Explorer is my favorite and I always use it to get this done. But Google Webmaster Tools is also active & free, so why shouldn't I jump in too... 🙂"
Intermediate & Advanced SEO | CommercePundit
-
Does 302 pass link juice?
Hi! We have our content under two subdomains, one for the English language and one for Spanish. Depending on the language of the browser, there's a 302 redirecting to one of these subdomains. However, our main domain (which has no content) is receiving a lot of links - people would rather link to mydomain.com than to en.mydomain.com. Does the 302 pass any link juice? If so, to which subdomain? Thank you!
Intermediate & Advanced SEO | bodaclick
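For reference, a minimal sketch of the kind of language-based redirect described, assuming Apache mod_rewrite (the subdomains come from the question; everything else is illustrative):

RewriteEngine On
# Spanish-language browsers go to the Spanish subdomain...
RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
RewriteCond %{HTTP:Accept-Language} ^es [NC]
RewriteRule ^(.*)$ http://es.mydomain.com/$1 [R=302,L]
# ...everyone else goes to the English subdomain.
RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
RewriteRule ^(.*)$ http://en.mydomain.com/$1 [R=302,L]

Changing R=302 to R=301 would mark the redirect as permanent, which is the conventional signal for consolidating link equity at the destination.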