SEOmoz bar: Nofollow and Robots.txt
-
Should the MozBar pick up 'nofollow' links that are handled in robots.txt?
The robots.txt blocks categories, but the category links still show as followed (green) when using the MozBar.
Thanks!
Holly
ETA: I'm assuming that "Disallow: /category/" in myblog.com's robots.txt is comparable to a nofollow tag on category links?
-
Thank you, Cyrus, for that great article link. As the article states near the end, it touches on a common problem for those of us who assume all the info at SEOmoz is accurate even though it may not be current (not only SEOmoz, to be fair). I've found several instances where even authorities change their minds, or Google changes it for them.
But anyway, it appears that using canonical or meta tags would be the better solution. Unfortunately, neither is possible in Squarespace. I had just about decided to change the robots.txt, get rid of the Disallow: /category/, and call it a day. But then I found an example where noindex was used in the robots.txt file of a Squarespace website (one specializing in SEM, among other things). Probably the "longest" robots.txt I've ever seen!
http://www.hunchfree.com/robots.txt
Would it be a good idea to use a noindex (follow) directive in the robots.txt for /category/ (if that's even possible), or should I just stick with my "call it a day" solution, at least where robots.txt is concerned?
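Something like the lines below are what I have in mind, just as a sketch of the question. From what I can tell, the Noindex line was never an official part of the robots.txt standard, so I honestly don't know whether the engines pay any attention to it:

# Hypothetical robots.txt sketch; the Noindex directive is unofficial
# and may simply be ignored by search engines.
User-agent: *
Disallow: /category/
Noindex: /category/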
BTW, I posted a similar question about the reasoning behind the robots.txt for Squarespace websites on the developers forum; nothing but crickets. Unless it's about design, things pretty much drop like a rock. Oh well.
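One more note for anyone reading along: by "canonical" above I mean the standard link element that would sit in the head of each category page and point at the preferred URL, something like the sketch below (the URL is made up). The problem is simply that Squarespace doesn't let me add it:

<!-- Sketch with a made-up URL: would go in the <head> of the duplicate
     (category) page, pointing at the preferred version of the content -->
<link rel="canonical" href="http://myblog.com/blog/my-post.html">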
-
As Phil pointed out, blocking a URL with robots.txt may keep search engines from crawling your pages, but that doesn't mean they won't index those pages. The meta robots NOINDEX, FOLLOW tag is a much better choice.
Highly recommend the following article that explains this in more detail:
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Unfortunately, Squarespace isn't all that flexible when it comes to meta tags. For the most part, Google is getting better at figuring this kind of duplicate content out, but it's best to address it when you can.
-
Thank you so much for the detailed reply. It's REALLY appreciated. The blog you are referring to is the Squarespace company's own blog; the Disallow on categories IS, however, on any site that uses their service. I've done a similar search with my personal blog on Squarespace, and a couple of categories still show up in the SERPs anyway. You can edit the robots file if you want, but you have to do a redirect since you don't have root access.
Unfortunately, we can't (at least I don't think we can) include a noindex meta tag on a page-by-page basis. You can use noindex in the robots.txt, though.
It also seems there would be a lot more of a duplicate content issue with tags than with categories, since tags are more granular.
The point of all this is that I'm creating new websites for some of our homeschool students and want to get the site architecture right from the start, including how we use tags and categories, with a balanced focus on usability as well as optimizing for search. These kids are super interested in the reasoning behind things, and their questions are tougher than any client's! Ha!
Again, thanks so much and take care,
Holly
-
Thanks for providing some more detail, Holly. I definitely think it's fine to leave it here, and I'm happy to help.
Some people like to prevent search engines from crawling category pages out of a fear of duplicate content. For example, say you have a post that's at this URL:
site.com/blog/chocolate-milk-is-great.html
and it's also the only post in the category "milk," which has its own URL (something like site.com/blog/category/milk),
then search engines see the exact same content (your blog post) on two different URLs. Since duplicate content is a big no-no, many people choose to prevent the engines from crawling category pages. That said, in my experience it's really up to you. Do you feel like your category pages will provide value to users? Would you like them to show up in search results? If so, then make sure you let Google crawl them.
If you DON'T want category pages to be indexed by Google, then I think there's a better choice than using robots.txt. Your best bet is applying the noindex, follow tag to these pages. This tag tells the engines NOT to index this page, but to follow all of the links on it. This is better than robots.txt because robots.txt won't always prevent your site from showing up in search results (that's another long story), but the noindex tag will.
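For reference, the tag itself is just the standard meta robots tag; it would sit in the head of each category page, something like this (assuming you can ever get at the templates):

<!-- In the <head> of each category page: keeps the page out of the index
     while still letting the engines follow the links on it -->
<meta name="robots" content="noindex, follow">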
If I'm not making sense at all then please just let me know :).
Lastly, from what I can see of your site and blog, it doesn't look like the category pages for your blog are actually in your robots.txt file. It's worth having someone double-check.
To check this myself, I just did a Google search for this URL:
http://blog.squarespace.com/blog/?category=Roadmap
And it showed up in Google right away. Looks like something isn't going according to plan. Don't worry though, that happens all of the time and it should be an easy fix.
-
I know I may wake up one morning and this will all click, but for now perhaps an example will help me get past this initial hurdle.
Squarespace disallows categories in the robots.txt, but using the MozBar I see that the category links are green.
So if I understand (partly, anyway), the Disallow in robots.txt keeps the bots from crawling those pages when they come knocking at my site. However, the category links in a blog post are still being crawled? Or what's the point?
I'm just trying to understand the reasoning behind disallowing categories and how that should impact the tagging and categorizing of blog posts.
Perhaps I should have started a new question? Or is it fine to leave it here?
-
The nofollow attribute and robots.txt file serve different purposes.
Nofollow Attribute
This attribute is used to tell search engines, "Don't follow this link" (or, as a meta tag, "Don't follow any links on this page"). It doesn't prevent pages from being indexed; it just prevents the search engines from following that link from that particular page.
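For example, a nofollowed link and the page-wide meta version look like this (example.com is just a placeholder):

<!-- nofollow on a single link -->
<a href="http://example.com/some-page" rel="nofollow">Some page</a>
<!-- nofollow applied to every link on the page, via the meta robots tag -->
<meta name="robots" content="nofollow">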
Robots.txt
This file contains a list of pages or directories that search engines should not crawl. Keep in mind that blocking crawling is not quite the same as keeping a page out of the index, as discussed above.
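A minimal robots.txt that blocks a category directory, for example, looks something like this (the path is illustrative):

# Applies to all crawlers that respect robots.txt
User-agent: *
# Ask them not to crawl anything under /category/
Disallow: /category/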
To read more about robots.txt check out this page: http://googleblog.blogspot.com/2007/01/controlling-how-search-engines-access.html
For more on Nofollow, check out this page: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=96569
Hope this helps!