Adding non-important folders to disallow in robots.txt file
-
Hi all,
If we have many non-important folders, like /category/ in a blog, these will multiply the links. These are strictly for users who access them very rarely, not for bots. Can we add such folders to the disallow list in robots.txt to stop link juice passing through them, so internal linking will be minimised to an extent? Can we add any such paths or pages to the disallow list? Will this work purely on a technical level, or is there any risk of a penalty?
Thanks,
Satish
-
But as per the current SEO buzz, internal nofollow leads to a waste of link juice rather than preserving it. Moreover, some suggest not using nofollow internally at all.
-
This is a great resource for all things robots.txt related: [http://www.robotstxt.org/robotstxt.html](http://www.robotstxt.org/robotstxt.html)
-
Hi,
Yes, you can block those folders in robots.txt. You can also add rel="nofollow" to internal links pointing at them if you don't want to pass link juice.
[No Link Juice](https://www.example.com) Hope this helps. Thanks
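For reference, here is a minimal sketch of the kind of robots.txt rule being discussed, checked with Python's standard-library parser (the /category/ folder and the blog.example.com host are hypothetical stand-ins for your own site):

```python
import urllib.robotparser

# Hypothetical robots.txt blocking a low-value /category/ section
robots_lines = [
    "User-agent: *",
    "Disallow: /category/",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_lines)

# URLs under /category/ are blocked for compliant bots...
print(rp.can_fetch("*", "https://blog.example.com/category/archives/"))  # False
# ...while normal posts remain crawlable
print(rp.can_fetch("*", "https://blog.example.com/a-great-post/"))       # True
```

Bear in mind that Disallow stops compliant bots from crawling those URLs; it does not by itself remove already-indexed pages from the index.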
Related Questions
-
Google Adding / Manipulating Page Meta Titles?
We have a client who is experiencing some heavy Google modification of the title tags displayed in the search results: it is adding "- 0 Reviews" to an ecommerce site. Obviously a bad start. There were no instances of these keywords anywhere on any of these pages, in header tags or otherwise (on only a handful of the affected pages there was a single commented-out image with an alt tag of "0 reviews", but it was commented out and has since been removed). We have attempted to rewrite the title multiple times; Google will modify the title but still include the irrelevant addition. Has anyone ever experienced anything like this?
Algorithm Updates | | Spindle0 -
Google AMP (accelerated mobile pages), can it be used for non-Google news and Ecommerce Websites?
Mozzers, I've been doing a lot of research on Google's new Accelerated Mobile Pages (AMP): https://moz.com/blog/accelerated-mobile-pages-whiteboard-friday. From what I'm seeing, these AMP versions of websites are only for Google News-worthy websites such as the New York Times, Cosmopolitan, and the BuzzFeeds of the world. But what about ecommerce websites like eBay or Amazon? Will an AMP version of a "scotch tape" page via OfficeDepot work in the SERPs on non-Google News cards?
Algorithm Updates | | Shawn1240 -
W3C Validation: How Important is This to Ranking
Hi, I'm currently working with a developer who is trying to tell me that validation errors and warnings are of little to no importance to a website's position in the SERPs. In the past, whenever I've had a site that was experiencing problems ranking for keyword terms, this was one of the first places we'd look. Is this still a relatively important component in getting a site to rank?
Algorithm Updates | | maxcarnage2 -
Googlebot soon to be executing javascript - Should I change my robots.txt?
This question came to mind as I was pursuing an unrelated issue and reviewing a site's robots.txt file. Currently this is a line item in the file: Disallow: https://* According to a recent post on the Google Webmasters Central Blog ([http://googlewebmastercentral.blogspot.com/2014/05/understanding-web-pages-better.html](http://googlewebmastercentral.blogspot.com/2014/05/understanding-web-pages-better.html "Understanding Web Pages Better")), Googlebot is getting much closer to being able to properly render JavaScript. Pardon some ignorance on my part, because I am not a developer, but wouldn't this require Googlebot to be able to execute JavaScript? If so, I am concerned that disallowing Googlebot from the https:// versions of our pages could interfere with crawling and indexation, because as soon as an end user clicks the "checkout" button on our view-cart page, everything on the site flips to https://. If this were disallowed, would Googlebot stop crawling at that point and simply leave, because all pages were now https://? Or am I just waaayyyy overthinking it? ...wouldn't be the first time! Thanks all!
Algorithm Updates | | danatanseo0 -
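Whether a crawler executes JavaScript is a separate issue from how robots.txt rules are matched: under plain path-based matching, a scheme-qualified pattern like Disallow: https://* is compared against the URL path and so never matches anything. A quick sketch with Python's standard-library parser (which implements the original path-prefix standard, not Google's wildcard extensions; the /checkout rule and example.com URLs are hypothetical):

```python
import urllib.robotparser

rules = [
    "User-agent: *",
    "Disallow: https://*",   # scheme-qualified pattern: compared against the path, so it never matches
    "Disallow: /checkout",   # hypothetical path rule that actually blocks something
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

# The https:// rule does not block an https URL: only the path is matched
print(rp.can_fetch("Googlebot", "https://www.example.com/cart"))      # True
# The path rule blocks /checkout regardless of scheme
print(rp.can_fetch("Googlebot", "https://www.example.com/checkout"))  # False
```

If the intent really were to keep bots out of the checkout flow, a path rule like the second one is the usual approach rather than trying to disallow a whole scheme.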
How important is Social Media for building domain authority / Google rankings? Are there any cases?
I really would like to know if someone has tested the importance of social media for Google rankings.
Are there sites that have built authority only by doing good social media?
Of course, I know it is all about the mix (content, link building, social media, etc.), but how important is it?
I know many sites that rank well without any form of social media, but I do not know of any that do only social media and rank highly. I hope there are some good cases that give insight. P.S. I know it is becoming more and more important...
Algorithm Updates | | Seeders0 -
Does a KML file have to be indexed by Google?
I'm currently using the Yoast Local SEO plugin for WordPress to generate my KML file, which is linked to from the GeoSitemap. Check it out: http://www.holycitycatering.com/sitemap_index.xml. A competitor of mine just told me that this isn't correct and that the link to the KML should be a downloadable file that's indexed in Google. This is the opposite of what Yoast is saying: "He's wrong. 🙂 And the KML isn't a file, it's being rendered. You wouldn't want it to be indexed anyway, you just want Google to find the information in there." What is the best way to create a KML? Should it be indexed?
Algorithm Updates | | projectassistant1 -
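On the mechanics, a small sketch may help: a sitemap index simply lists the URLs of child sitemaps, and the KML data is referenced from the geo sitemap rather than living in the index itself. Parsing a hypothetical index (the example.com URLs are made up, and the file names only loosely mimic what Yoast generates) with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical sitemap index; the child geo sitemap is what would
# point at the rendered KML data.
sitemap_index = """
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap><loc>https://www.example.com/post-sitemap.xml</loc></sitemap>
  <sitemap><loc>https://www.example.com/geo-sitemap.xml</loc></sitemap>
</sitemapindex>
"""

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(sitemap_index)
# Collect the child sitemap URLs listed in the index
locs = [s.findtext("sm:loc", namespaces=ns) for s in root.findall("sm:sitemap", ns)]
print(locs)
```

As long as Google can fetch the index and follow it down to the geo data, nothing in this chain needs to appear as an indexed page in search results.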
Increased importance given to spammy/educational domains in SERPs!?
Hey guys, can anyone shed some light on these bizarre and confusing SERPs which Google seems to be producing following their latest update? For example, we have one client who targets "payday loans" and another targeting "IT services". Since the update, the former keyword brings back a host of spammy domains, while the latter seems to give all its focus to educational institutions like universities. This just seems utterly ludicrous, considering that if I'm searching for "IT services" I don't want the help desk of a local university; that's completely irrelevant, right? Can anyone provide some information on what seems to be going on? Thanks
Algorithm Updates | | Webrevolve0 -
Is it hurting my SEO ranking if robots.txt is forbidden?
robots.txt is forbidden: I have read up on what the robots.txt file does and how to configure it, but what happens if it cannot be accessed at all?
Algorithm Updates | | Assembla0
Algorithm Updates | | Assembla0
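The answer depends on how a given crawler treats a robots.txt it cannot fetch, and behaviour differs between crawlers (Googlebot's handling is documented separately and has changed over time). As one concrete, checkable reference point, Python's standard-library parser (urllib.robotparser) assumes everything is disallowed after a 401/403 on the robots.txt URL, and everything is allowed after other 4xx errors such as 404. A hypothetical helper mirroring that stdlib logic:

```python
def robots_fetch_policy(status_code: int) -> str:
    """Hypothetical helper mirroring how CPython's urllib.robotparser
    interprets HTTP errors when fetching robots.txt."""
    if status_code in (401, 403):
        return "disallow all"  # forbidden: assume every URL is off-limits
    if 400 <= status_code < 500:
        return "allow all"     # e.g. 404: assume nothing is blocked
    return "parse file"        # success: apply the rules in the file

print(robots_fetch_policy(403))  # disallow all
```

Under that interpretation, a forbidden robots.txt is worse than a missing one: a 403 can make a polite crawler skip the entire site, while a 404 leaves everything crawlable.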