Adding non-important folders to the disallow list in robots.txt
-
Hi all,
If we have many non-important folders like /category/ in a blog, these multiply the number of internal links. They are strictly for users, who access them only rarely, not for bots. Can we add such folders to the disallow list in robots.txt to stop link juice passing through them, so that internal linking is minimised to an extent? Can we add any such paths or pages to the disallow list? Is this going to work purely technically, or could it incur a penalty?
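For illustration, a rule disallowing such a folder would look like the sketch below (using the /category/ path above as a placeholder; adjust it to your own site structure):

```
User-agent: *
Disallow: /category/
```

One caveat worth stating plainly: Disallow stops compliant crawlers from fetching those URLs, but it does not stop link equity from flowing into the links that point at them, and blocked URLs can still be indexed if they are linked from elsewhere.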
Thanks,
Satish
-
But as per the current SEO buzz, internal nofollow leads to a waste of link juice; we cannot preserve it that way. Moreover, some suggest not using nofollow internally at all.
-
This is a great resource for all things robots.txt related: [http://www.robotstxt.org/robotstxt.html](http://www.robotstxt.org/robotstxt.html)
-
Hi,
Yes, you can block those folders in robots.txt. You can also use rel="nofollow" on individual links if you don't want them to pass link juice.
For example, a link like [No Link Juice](https://www.example.com) with rel="nofollow" applied. Hope this helps. Thanks
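As a minimal illustration of that suggestion (the URL and anchor text are placeholders, not from any real site), a nofollow internal link looks like this in HTML:

```html
<!-- Hypothetical internal link that should not pass link juice -->
<a href="https://www.example.com/category/archive" rel="nofollow">No Link Juice</a>
```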
Related Questions
-
Are Meta-descriptions important for blogs?
I am tasked with optimizing an existing site's SEO. I have added meta descriptions to all the menu pages; however, the site has a blog section with over 700 posts. How important are meta descriptions when it comes to a website's blog? Do I need to take the time to go through 700+ blog posts and create a unique meta description for each one?
Algorithm Updates | rburnett0
-
Folders or no folders in the URL?
What's best for SEO: a folder or no folder? For example: https://domain.com/arizona-dentist/somecontent or just https://domain.com/somecontent. The website has 100+ pages with "dentist" in the content of the somecontent pages, as well as specific pages under /arizona-dentist/. The breadcrumb for the somecontent page appears as follows: Arizona Dentist > Some Content, i.e. you can reach the somecontent page from the Arizona Dentist page. I didn't include folders in the path because I did not want the URL to be too long. In terms of where it shows up in Google search results, it is within the top 3-4 on the first page when searching for "Arizona dentist somecontent". The website is pretty organized even without subfolders because it was made using Umbraco. I am wondering if using folders will increase the SEO ranking, or whether it really doesn't matter and could even hurt if paths become too long, especially since it's not doing too badly in the search rankings right now. Thanks in advance for any help.
Algorithm Updates | bellezze0
-
Added a few paragraphs with header tags targeting a keyword and dropped immediately!
Hi all, Our website homepage doesn't contain much content associated with our primary keyword or product; it mostly explains our features. So we tried adding a section at the bottom of the homepage which explains our services, e.g. "what is SEO" and "how SEO helps business". We are trying to rank for a primary keyword like "seo" with this generic content, and we dropped immediately after the deployment. Any suggestions on why, and how to proceed? Thanks
Algorithm Updates | vtmoz0
-
Log File Analyzer Only Showing Spoofed Bots and No Verified Bots
Question for you guys: After analyzing some crawl data in Search Console in the sitemap section, I noticed that Google consistently isn't indexing about 3/4 of the client sites I work on that all use the same content management system. I began to wonder if maybe Google (and others) have a hard time crawling certain parts of the sites consistently, as finding a pattern here could lead me to investigate whether there's a CMS problem. To research this, I started using a log file analyzer (Screaming Frog's version) for some of those clients. After loading the files, I noticed that none of the crawl activity logged by the servers is considered verified. I input one month's worth of log files, but when I switch the program to show only verified bots, all data disappears. Is it possible for a site not to have any search engines crawling it for a whole month? Given my experience, that seems unlikely, particularly since we've been submitting crawl requests. I know that doesn't guarantee a crawl, but it seems odd that it's never happening for any search engines across the board. Context that might be helpful: I did check technical settings, and the sites are crawlable. The sites do appear in search but seem to be losing organic search traffic. Thanks for any help you can provide!
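For what it's worth, the standard way log analyzers mark a hit as "verified" (and the reason spoofed entries fail) is a reverse-DNS lookup followed by a confirming forward lookup. A minimal Python sketch of that check follows; the function names and suffix list are illustrative, not taken from Screaming Frog or any particular tool:

```python
import socket

# Hypothetical helper, illustrating how log analyzers verify crawler hits:
# reverse-DNS the IP, check the hostname suffix, then forward-resolve to
# confirm the hostname really maps back to that IP. The suffix list below
# covers Googlebot; other engines publish their own suffixes.
GOOGLEBOT_SUFFIXES = (".googlebot.com", ".google.com")

def hostname_matches(hostname: str, suffixes=GOOGLEBOT_SUFFIXES) -> bool:
    """Pure check: does a reverse-DNS hostname end in an official suffix?"""
    return hostname.rstrip(".").endswith(suffixes)

def verify_googlebot(ip: str) -> bool:
    """Full check for one log line's client IP (requires network access)."""
    try:
        hostname = socket.gethostbyaddr(ip)[0]  # reverse DNS
    except OSError:
        return False
    if not hostname_matches(hostname):
        return False  # spoofed user agent: host isn't Google's
    try:
        # forward-confirm: the hostname must resolve back to the same IP
        return ip in socket.gethostbyname_ex(hostname)[2]
    except OSError:
        return False
```

If an entire month of hits fails this check, the more likely explanation is that the logs you loaded don't contain the real crawler traffic, for example because a CDN, load balancer, or proxy sits in front of the site and the origin logs record internal IPs instead of the crawlers' addresses.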
Algorithm Updates | geodigitalmarketing0
-
Google AMP (Accelerated Mobile Pages): can it be used for non-Google News and e-commerce websites?
Mozzers, I've been doing a lot of research on Google's new Accelerated Mobile Pages (AMP): https://moz.com/blog/accelerated-mobile-pages-whiteboard-friday. From what I'm seeing, these AMP-version pages are only for Google News-worthy websites such as the New York Times, Cosmopolitan, and the BuzzFeeds of the world. But what about e-commerce websites like eBay or Amazon? Will an AMP version of a "scotch tape" page from OfficeDepot work in the SERPs on non-Google News cards?
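For background, AMP is wired up the same way regardless of site type: the regular page advertises its AMP counterpart, and the AMP page points back at the canonical version. A sketch with hypothetical URLs:

```html
<!-- On the regular (canonical) product page -->
<link rel="amphtml" href="https://www.example.com/products/scotch-tape/amp">

<!-- On the AMP version, pointing back -->
<link rel="canonical" href="https://www.example.com/products/scotch-tape">
```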
Algorithm Updates | Shawn1240
-
Adding the link masking directory to robots.txt?
Hey guys, just want to know if you have any experience with this. Is it worthwhile blocking search engines from following the link masking directory? (What I mean by this is the directory that holds the link redirectors to an affiliate site, for example: mydomain.com/go/thislink goes to amazon.com/affiliatelink.) I want to know whether blocking the 'go' directory from getting crawled in robots.txt is a good idea or a bad idea. I am not using WordPress but rather a custom-built PHP site, so I need to manually decide on these things. I specifically want to know if this in any way violates Google's guidelines. It doesn't change the user experience, because users know exactly where they will end up if they click on the link. Any advice would be much appreciated.
Algorithm Updates | irdeto0
-
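A minimal sketch of the rule being asked about in the 'go' directory question above (assuming the /go/ path from that example):

```
User-agent: *
Disallow: /go/
```

Blocking affiliate-redirect directories like this is common practice; Google has also recommended marking such links up at the link level (rel="nofollow", later rel="sponsored") as a complementary measure.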
Categories where "freshness" is of importance
I know that within the past couple of months, Google has made algo updates so that freshness of content is used as more of an indicator of relevancy, and hence rankings. See: http://insidesearch.blogspot.com/2012/06/search-quality-highlights-39-changes.html I understand that freshness is important across the board, but it is obviously more of a factor for certain search terms. My question is: how can you determine if your product category (e-commerce) is one where freshness is becoming more of a factor? Is there any way to know which terms are considered to require fresher results? Any input is appreciated.
Algorithm Updates | inhouseseo1
-
Google automatically adding company name to SERP titles
Maybe I've been living under a rock, but I was surprised to see that Google had algorithmically modified my page titles in the search results by adding the company name to the end of the (short) title. <title>About Us</title> became "About Us - Company Name". Interestingly, this wasn't consistent: sometimes it was "Company Name Limited" and sometimes just "Company Name". Anyone else notice this, or is it a recent change?
Algorithm Updates | DougRoberts0