Adding non-important folders to disallow in robots.txt file
-
Hi all,
If we have many non-important folders like /category/ in the blog, these multiply the number of internal links. They are strictly for users, who access them rarely, and not for bots. Can we add such folders to the disallow list in robots.txt to stop link juice passing through them, so that internal linking is minimised to an extent? Can we add any such paths or pages to the disallow list? Will this work purely on a technical level, or could it attract a penalty?
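To make it concrete, the kind of rule I have in mind would look something like the sketch below; /category/ is just an example path:

```
User-agent: *
# Keep compliant bots out of the rarely used folder
Disallow: /category/
```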
Thanks,
Satish
-
But as per the current SEO buzz, internal nofollow leads to wasted link juice rather than preserving it. Moreover, some suggest not using nofollow internally at all.
-
This is a great resource for all things robots.txt related: [http://www.robotstxt.org/robotstxt.html](http://www.robotstxt.org/robotstxt.html)
-
Hi,
Yes, you can block those folders in robots.txt. You can also use rel="nofollow" on the links if you don't want to pass link juice, e.g. [No Link Juice](https://www.example.com). Hope this helps. Thanks
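For clarity, a rough sketch of the nofollow markup described above; the URL and anchor text are placeholders matching the example link:

```html
<!-- Internal link that should not pass link juice -->
<a href="https://www.example.com/category/" rel="nofollow">No Link Juice</a>
```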
Related Questions
-
Folders or no folders in URL?
What's best for SEO: a folder or no folder? For example: https://domain.com/arizona-dentist/somecontent or just https://domain.com/somecontent. The website has 100+ pages with "dentist" within the content of the somecontent pages, as well as specific pages for /arizona-dentist/. Also, the breadcrumb for the somecontent page appears something like: Arizona Dentist > Some Content, and you can find the somecontent page from the Arizona Dentist page. I didn't include folders in the path because I did not want the URL to be too long. In terms of where it shows up in Google search results, it is within the top 3-4 on the first page when searching "Arizona dentist some content". The website is pretty organized even without subfolders because it was made using Umbraco. I am wondering if using folders will increase the SEO ranking, or if it really doesn't and could hurt if paths become too long, especially since it's not doing too badly in the search rankings right now. Thanks in advance for any help.
Algorithm Updates | bellezze0 -
Have you ever seen a page get indexed from a website that is blocked by robots.txt?
Hi all, We use the robots.txt file and meta robots tags to block bots from crawling a website or individual pages. Mostly robots.txt is used for the whole website, with the expectation that none of its pages get indexed. But there is a catch: a page can be indexed by Google even when the site is blocked in robots.txt, because the crawler may find a link to that page somewhere else on the internet, as stated in the last paragraph here. I wonder if this is really how some web pages have got indexed. And if we use meta tags at the page level, do we still need to block in robots.txt? Can we use both techniques at the same time? Thanks
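For reference, the page-level tag referred to above looks roughly like this (a sketch, not a specific page):

```html
<!-- Allow crawling of links, but do not index this page -->
<meta name="robots" content="noindex, follow">
```

Note that if the same page is also disallowed in robots.txt, Googlebot cannot fetch it and so never sees this tag; that is why a URL discovered through external links can still appear in the index, usually as a URL-only result. The two techniques can therefore conflict rather than reinforce each other when the goal is de-indexing.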
Algorithm Updates | vtmoz0 -
Best place to employ "branded" related keywords to gain SEO benefits and rank for "non-branded" keywords?
Hi all, I want to put this question straight with an example rather than confusing you with a scenario. Say there is a company called "vertigo", a tiles manufacturer. There are many search queries with thousands of searches, like "vertigo tiles life", "vertigo tiles for garden", "vertigo tiles dealers", "vertigo tiles for kitchen", etc. These kinds of pages will eventually have a tendency to rank for non-branded keywords like "tiles for garden", "tiles for kitchen", etc. So where should we employ these kinds of help/info pages: the main website or a sub-domain? Is it okay to have these pages on a sub-domain, with traffic getting diverted to the sub-domain? What if the same pages are on the main website? Will the main website see ranking improvements for non-branded keywords because of the landing pages on related topics? Thanks
Algorithm Updates | vtmoz0 -
What happens when a non-relevant topic gets more visitors?
Hi all, We have a sub-domain with user-generated content, such as forums. Mostly the content is all about our product. A few times spammy threads get posted, and we delete them regularly. I have noticed that a non-relevant thread has been posted, which is about a movie, but this page got hundreds of clicks. I just wonder: will this hurt us, being an off-topic movie-torrent thread, or help us, since it receives hundreds of visitors? Thanks
Algorithm Updates | vtmoz0 -
Log File Analyzer Only Showing Spoofed Bots and No Verified Bots
Question for you guys: After analyzing some crawl data in Search Console in the sitemap section, I noticed that Google consistently isn't indexing about 3/4 of the client sites I work on that all use the same content management system. I began to wonder if maybe Google (and others) have a hard time crawling certain parts of the sites consistently, as finding a pattern here could lead me to investigate whether there's a CMS problem. To research this, I started using a log file analyzer (Screaming Frog's version) for some of those clients. After loading the files, I noticed that none of the crawl activity logged by the servers is considered verified. I input one month's worth of log files, but when I switch the program to show only verified bots, all data disappears. Is it possible for a site not to have any search engines crawling it for a whole month? Given my experience, that seems unlikely, particularly since we've been submitting crawl requests. I know that doesn't guarantee a crawl, but it seems odd that it's never happening for any search engines across the board. Context that might be helpful: I did check technical settings, and the sites are crawlable. The sites do appear in search but seem to be losing organic search traffic. Thanks for any help you can provide!
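For context on what "verified" means in that report: log analysers generally verify a bot by reverse DNS, then confirm the hostname resolves back to the same IP. Below is a minimal sketch of that check (the IP is a placeholder you would pull from a log line); if every hit claiming to be Googlebot fails it, those hits are spoofed rather than missing.

```python
import socket

def is_verified_googlebot(ip: str) -> bool:
    """Reverse DNS must land in googlebot.com or google.com, and the
    hostname must resolve forward to the same IP (forward confirmation)."""
    try:
        host = socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]
    except socket.gaierror:
        return False

# Placeholder IP; in practice this comes from a log entry claiming to be Googlebot.
print(is_verified_googlebot("66.249.66.1"))
```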
Algorithm Updates | geodigitalmarketing0 -
Adding the link-masking directory to robots.txt?
Hey guys, Just want to know if you have any experience with this. Is it worthwhile blocking search engines from following the link-masking directory? What I mean by this is the directory that holds the link redirectors to an affiliate site, for example: mydomain.com/go/thislink goes to amazon.com/affiliatelink. I want to know if blocking the 'go' directory from getting crawled in robots.txt is a good idea or a bad idea. I am not using WordPress but rather a custom-built PHP site, so I need to manually decide on these things. I want to specifically know if this in any way violates guidelines for Google. It doesn't change the customer experience, because they know exactly where they will end up if they click on the link. Any advice would be much appreciated.
Algorithm Updates | irdeto0 -
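For the question above, the rule being weighed is a plain directory disallow; a minimal sketch using the directory named in the example:

```
User-agent: *
# Keep crawlers out of the affiliate-redirect directory
Disallow: /go/
```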
Categories where "freshness" is of importance
I know that within the past couple of months, Google has made algo updates so that freshness of content is used as more of an indicator of relevancy, and hence rankings. See: http://insidesearch.blogspot.com/2012/06/search-quality-highlights-39-changes.html I understand that freshness is important across the board, but it is obviously more of a factor for certain search terms. My question is: how can you determine if your product category (ecommerce) is one where freshness is becoming more of a factor? Is there any way to know which terms are considered to require fresher results? Any input is appreciated.
Algorithm Updates | inhouseseo1