SEOmoz bar: Nofollow and Robots.txt
-
Should the MozBar pick up "nofollow" on links that are handled in robots.txt?
The robots.txt blocks categories, but category links still show as followed (green) when using the MozBar.
Thanks!
Holly
ETA: I'm assuming that "Disallow: myblog.com/category/" is comparable to the nofollow tag on category links?
-
Thank you, Cyrus, for that great article link. As the article notes near the end, it touches on a common problem for those of us who assume all the info at SEOmoz is accurate even though it may not be current (not only SEOmoz, to be fair). I've found several instances where even the authorities have changed their minds, or Google has changed it for them.
Anyway, it appears that using canonical or meta tags would be the better solution. Unfortunately, neither is possible in Squarespace. I had just about decided to change the robots.txt, get rid of the Disallow: /category/ line, and call it a day. But then I found an example where noindex was used in the robots.txt file of a Squarespace website (one specializing in SEM, among other things). Probably the "longest" robots.txt I've ever seen!
http://www.hunchfree.com/robots.txt
Would it be a good idea to use noindex, FOLLOW in the robots.txt for /category/ (if that's even possible), or should I just stick with my "call it a day" solution, at least where robots.txt is concerned?
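(To show what I mean: the kind of robots.txt entry I'm asking about would look roughly like the lines below. The /category/ path is just an example, and from what I've read, Noindex in robots.txt has never been an official directive, so there's no guarantee the engines honor it.)
User-agent: *
Noindex: /category/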
BTW, I posted a similar question about the reasoning behind the robots.txt for Squarespace websites on the developers forum: nothing but crickets. Unless it's about design, things pretty much drop like a rock. Oh well.
-
As Phil pointed out, blocking a URL with robots.txt may keep search engines from crawling your pages, but that doesn't mean they won't index those pages. The meta robots NOINDEX, FOLLOW tag is a much better choice.
I highly recommend the following article, which explains this in more detail:
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Unfortunately, Squarespace isn't all that flexible when it comes to meta tags. For the most part, Google is getting better at figuring this kind of duplicate content out, but it's best to address it when you can.
-
Thank you so much for the detailed reply. It's REALLY appreciated. The blog you are referring to is the Squarespace company's own blog; the disallow on categories IS, however, present on any site that uses their service. But I've done a similar search for my personal blog on Squarespace, and a couple of categories still show up in the SERPs anyway. You can edit the robots file if you want, but you have to do a redirect because you don't have root access.
Unfortunately, we can't (at least I don't think we can) include meta tags for noindex on a page-by-page basis. You can use it in robots.txt, though.
It seems there would be far more duplicate content issues with tags than with categories, since tags are more granular.
The point of all this is that I'm creating new websites for some of our homeschool students and want to get it right from the start: the site architecture, and how we use tags and categories, with a balanced focus on usability as well as optimizing for search. These kids are super interested in the reasoning behind everything, and their questions are tougher than any client's! Ha!
Again, Thanks so much and take care,
Holly
-
Thanks for providing some more detail, Holly. I definitely think it's appropriate to leave the question here, and I'm happy to help.
Some people like to prevent search engines from crawling category pages out of a fear of duplicate content. For example, say you have a post that's at this URL:
site.com/blog/chocolate-milk-is-great.html
and it's also the only post in the category "milk," which lives at a category URL something like this:
site.com/blog/category/milk
then search engines see the exact same content (your blog post) on two different URLs. Since duplicate content is a big no-no, many people choose to prevent the engines from crawling category pages. In my experience, though, it's really up to you. Do you feel like your category pages will provide value to users? Would you like them to show up in search results? If so, then make sure you let Google crawl them.
If you DON'T want category pages to be indexed by Google, then I think there's a better choice than using robots.txt. Your best bet is applying the noindex, follow tag to these pages. This tag tells the engines NOT to index this page, but to follow all of the links on it. This is better than robots.txt because robots.txt won't always prevent your site from showing up in search results (that's another long story), but the noindex tag will.
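In case it helps, that tag is just one line that goes in the <head> of each category page, something like this (assuming your platform lets you edit those page templates):
<meta name="robots" content="noindex, follow">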
If I'm not making sense at all then please just let me know :).
Lastly, from what I can see on your site and blog, it doesn't look like the category pages for your blog are actually in your robots.txt file. It's worth having someone double-check that.
To check this myself, I just did a google search for this URL:
http://blog.squarespace.com/blog/?category=Roadmap
And it showed up in Google right away. Looks like something isn't going according to plan. Don't worry though, that happens all of the time and it should be an easy fix.
-
I know that one day I may wake up and this will all click, but for now, perhaps an example will help me get past this initial hurdle.
Squarespace disallows categories in the robots.txt, but using the MozBar I see that the category links are green (followed).
So if I understand (partly, anyway), the disallow in robots.txt keeps the bots from crawling those pages when they come knocking at my site. However, are the category links in a blog post still being crawled? Or what's the point?
I'm just trying to understand the reasoning behind disallowing categories and how that should impact the tagging and categorizing of blog posts.
Perhaps I should have started a new question? Or is it appropriate to leave it here?
-
The nofollow attribute and robots.txt file serve different purposes.
Nofollow Attribute
This attribute is used to tell search engines, "Don't follow this link", or even "Don't follow any links on this page." It doesn't prevent pages from being indexed, just prevents the search engines from following that link from that particular page.
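For example, a nofollowed link, and the page-level version that goes in the <head>, look roughly like this (the domain and path are just placeholders):
<a href="http://example.com/some-page" rel="nofollow">anchor text</a>
<meta name="robots" content="nofollow">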
Robots.txt
This file contains a list of pages or directories that search engine crawlers should not access (and, as a result, usually will not index).
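A minimal robots.txt that blocks a directory for all crawlers looks like this (the /category/ path is only an example):
User-agent: *
Disallow: /category/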
To read more about robots.txt, check out this page: http://googleblog.blogspot.com/2007/01/controlling-how-search-engines-access.html
For more on Nofollow, check out this page: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=96569
Hope this helps!