Adding non-important folders to disallow in robots.txt file
-
Hi all,
If we have many non-important folders like /category/ in the blog, these will multiply the links. They are strictly for users, who access them very rarely, not for bots. Can we add such folders to the disallow list in robots.txt to stop link juice passing through them, so internal linking is minimised to an extent? Can we add any such paths or pages to the disallow list? Will this work purely technically, or is there any penalty?
Thanks,
Satish
-
But as per the current SEO buzz, internal nofollow leads to a waste of link juice and we cannot preserve it. Moreover, some suggest not using nofollow internally at all.
-
This is a great resource for all things robots.txt related: [http://www.robotstxt.org/robotstxt.html](http://www.robotstxt.org/robotstxt.html)
-
Hi,
Yes, you can block those paths in robots.txt. You can also use rel="nofollow" on the links if you don't want to pass link juice, e.g. [No Link Juice](https://www.example.com). Hope this helps. Thanks
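A minimal sketch of what that looks like in practice (the /category/ folder comes from the question above; the link URL is illustrative):

```
# robots.txt at the site root — asks compliant crawlers not to crawl the folder
User-agent: *
Disallow: /category/
```

For the nofollow option, the attribute goes on the individual link, e.g. `<a href="/category/archives/" rel="nofollow">Archives</a>`. Keep in mind that robots.txt only controls crawling, not indexing, so a disallowed URL can still appear in results if it is linked from elsewhere.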
Related Questions
-
Google's importance on usability issues in subdirectories or subdomains?
Hi Moz community, As different usability issues like page speed and mobile responsiveness play a key role in website rankings, I wonder how important the same factors are for subdirectory or subdomain pages. Must each and every page of a subdirectory or subdomain be optimised like the main website's pages? Does Google give them the same importance? Thanks
Algorithm Updates | vtmoz0
-
Have you ever seen a page get indexed from a website that is blocked by robots.txt?
Hi all, We use the robots.txt file and meta robots tags to block bots from crawling a website or individual pages. Mostly robots.txt is used for the whole website, with the expectation that none of the pages get indexed. But any page from the website can still be indexed by Google even when the site is blocked by robots.txt, because the crawler may find the page link somewhere on the internet, as stated in the last paragraph here. I wonder whether this is really how some web pages have got indexed. And if we use meta tags at page level, do we still need to block via the robots.txt file? Can we use both techniques at the same time? Thanks
Algorithm Updates | vtmoz0
-
Meta robots on every page rather than robots.txt for blocking crawlers? How will pages get indexed if we block crawlers?
Hi all, The suggestion to use the meta robots tag rather than the robots.txt file is to make sure pages do not get indexed if their hyperlinks are available anywhere on the internet. I don't understand how the pages would be indexed if the entire site is blocked. Even if page links are available, will Google really index those pages? One of our sites has been blocked via the robots.txt file, but its internal links have been available on the internet for years and have not been indexed. So technically the robots.txt file is quite enough, right? Please clarify and guide me if I'm wrong. Thanks
Algorithm Updates | vtmoz0
-
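A minimal sketch of the page-level alternative discussed in the question above (illustrative markup):

```html
<!-- Page-level directive: links may be followed, but engines are asked not to index this page -->
<meta name="robots" content="noindex, follow">
```

Note that the two techniques can conflict: if the same URL is disallowed in robots.txt, the crawler never fetches the page and so never sees the meta tag.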
Do backlinks to non-indexed subdomains / subdirectories get considered by Google as website backlinks and pass PageRank to the website?
Hi, If some noindexed pages on our website or subdomain get backlinks, will those backlinks pass PageRank / link juice to the website? Will they be considered backlinks to the website by Google? Here is a statement from Matt Cutts on this very question: Eric Enge: Can a NoIndex page accumulate PageRank? Matt Cutts: A NoIndex page can accumulate PageRank, because the links are still followed outwards from a NoIndex page. Thanks
Algorithm Updates | vtmoz0
-
Homepage title tag: "Keywords for robots" vs "Phrases for users"
Hi all, We keep hearing and reading articles saying that "Google is all about the user" and people suggesting we think only about users, not search engine bots. I have gone through the title tags of all our competitors' websites. Almost everybody directly targets primary and secondary keywords, and a few target even more. We have written a very good phrase as a definite title tag for users, beginning with a keyword. But we are not ranking well compared to less optimised or less backlinked websites. Two things to mention: our title tag is almost two years old, and it begins with the secondary keyword followed by the primary keyword, e.g. "seo google" is the secondary keyword and "seo" is the primary keyword. Do I need to focus only on the primary keyword to rank for it? Thanks
Algorithm Updates | vtmoz0
-
Personalization for non-logged-in users
My question is: how does Google personalize search results for non-logged-in users and incognito searches? I already know about location personalization (whether you're logged in or not) and autocomplete. But is personalization still at work in the other cases? Does Google, for example, keep track of your IP and then match suggestions that way? Additionally, any other resources would be great.
Algorithm Updates | PeterRota0
-
URL names not so important in future?
I read somewhere (hard to say where, with all the information about SEO and Google out there!) that in the future Google will put less importance on the URL name for ranking purposes. Any thoughts?
Algorithm Updates | Llanero0