SEOmoz bar: Nofollow and Robots.txt
-
Should the Mozbar pick up "nofollow" links that are handled in robots.txt?
The robots.txt blocks categories, but the category links still show as followed (green) links when using the Mozbar.
Thanks!
Holly
ETA: I'm assuming that Disallow: myblog.com/category/ is comparable to the nofollow tag on category?
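For reference, the rule in question would be written something like this in the site's robots.txt (just a sketch; Disallow matches a URL path rather than a full URL, and the actual Squarespace file varies):
User-agent: *
Disallow: /category/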
-
Thank you, Cyrus, for that great article link. As that article states near the end, it touches on a common problem for those of us who assume all the info at SEOmoz is accurate even though it may not be current (not only SEOmoz, to be fair). I've found several instances where even the authorities change their minds, or Google changes things for them.
But anyways, it appears using canonical or meta tags would be the better solution. Unfortunately, neither is possible in Squarespace. I had just about decided to change the robots.txt, get rid of the Disallow: /category/, and call it a day. But then I found an example where noindex was used in the robots.txt file of a Squarespace website (one specializing in SEM, among other things). Probably the "longest" robots.txt I've ever seen!
http://www.hunchfree.com/robots.txt
Would it be a good idea to use noindex, follow in the robots.txt for /category/ (if that's even possible), or should I just stick with my "call it a day" solution, at least where robots.txt is concerned?
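For illustration, the kind of line I mean would look something like this in robots.txt (just a sketch; from what I can tell, Noindex: was never an officially documented robots.txt directive, and there is no "follow" directive in robots.txt at all):
User-agent: Googlebot
Noindex: /category/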
BTW, I posted a similar question about the reasoning behind the robots.txt for Squarespace websites on their developer forum; nothing but crickets. Unless it's about design, questions pretty much drop like a rock. Oh well.
-
As Phil pointed out, blocking a URL with robots.txt may keep search engines from crawling your pages, but that doesn't mean they won't index those pages. The meta robots noindex, follow tag is a much better choice.
Highly recommend the following article that explains this in more detail:
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Unfortunately, Squarespace isn't all that flexible when it comes to meta tags. For the most part, Google is getting better at figuring this kind of duplicate content out, but it's best to address it when you can.
-
Thank you so much for the detailed reply. It's REALLY appreciated. The blog you are referring to is the Squarespace company's own blog; the Disallow: /category/ rule is, however, in place on any site that uses their service. But I've done a similar search with my personal blog on Squarespace, and a couple of category pages still show up in the SERPs anyway. You can edit the robots.txt file if you want, but you have to do a redirect, since you don't have root access.
Unfortunately, we can't (at least I don't think we can) include meta noindex tags on a page-by-page basis. You can use noindex in robots.txt.
It seems there would be a lot more duplicate content issues with tags than with categories, since tags are more granular.
The point of all this is that I'm creating new websites for some of our homeschool students and want to get it right from the start: the site architecture, and how we use tags and categories, with a balanced focus on usability as well as optimizing for search. These kids are super interested in all the reasoning behind things, and their questions are tougher than any client's! Ha!
Again, thanks so much, and take care,
Holly
-
Thanks for providing some more detail, Holly. It's definitely fine to leave the question here, and I'm happy to help.
Some people like to prevent search engines from crawling category pages out of a fear of duplicate content. For example, say you have a post that's at this URL:
site.com/blog/chocolate-milk-is-great.html
and it's also the only post in the category "milk", so the same post also shows up at a category URL something like this:
site.com/blog/category/milk
Then search engines see the exact same content (your blog post) at two different URLs. Since duplicate content is a big no-no, many people choose to prevent the engines from crawling category pages. That said, in my experience, it's really up to you. Do you feel like your category pages will provide value to users? Would you like them to show up in search results? If so, then make sure you let Google crawl them.
If you DON'T want category pages to be indexed by Google, then I think there's a better choice than using robots.txt. Your best bet is applying the noindex, follow tag to these pages. This tag tells the engines NOT to index this page, but to follow all of the links on it. This is better than robots.txt because robots.txt won't always prevent your site from showing up in search results (that's another long story), but the noindex tag will.
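In case it helps, the tag itself is a single line in the page's <head>, something like this (a minimal example):
<meta name="robots" content="noindex, follow">
(Technically "follow" is the default, so noindex on its own does the same thing, but spelling it out makes the intent clear.)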
If I'm not making sense at all then please just let me know :).
Lastly, from what I can see on your site and blog, it doesn't look like the category pages for your blog are actually blocked in your robots.txt file, so it's worth double-checking.
To check this myself, I just did a Google search for this URL:
http://blog.squarespace.com/blog/?category=Roadmap
And it showed up in Google right away. Looks like something isn't going according to plan. Don't worry though, that happens all of the time and it should be an easy fix.
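If you want to double-check what a robots.txt file actually blocks, a tiny script like this works too (a rough sketch using Python's built-in urllib.robotparser; swap in your own domain and paths):
from urllib.robotparser import RobotFileParser
# Fetch and parse the site's live robots.txt
rp = RobotFileParser()
rp.set_url("http://blog.squarespace.com/robots.txt")
rp.read()
# True means crawlers may fetch the URL; False means robots.txt disallows it
print(rp.can_fetch("*", "http://blog.squarespace.com/blog/?category=Roadmap"))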
-
I know that one morning I may wake up and this will all click, but for now perhaps an example will help me get past this initial hurdle.
Squarespace disallows categories in the robots.txt, but using the Mozbar I see the category links are green (followed).
So if I understand (partly, anyway), the Disallow in robots.txt keeps the bots from crawling those pages when they come knocking at my site. But then are the category links in a blog post still being crawled, or what's the point?
I'm just trying to understand the reasoning behind disallowing categories and how that should impact the tagging and categorizing of blog posts.
Perhaps I should have started a new question? Or is it okay to leave it here...
-
The nofollow attribute and robots.txt file serve different purposes.
Nofollow Attribute
This attribute is used to tell search engines, "Don't follow this link", or even "Don't follow any links on this page." It doesn't prevent pages from being indexed, just prevents the search engines from following that link from that particular page.
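For illustration, the two forms look something like this in a page's HTML (minimal examples):
<a href="http://example.com/page" rel="nofollow">a single nofollowed link</a>
<meta name="robots" content="nofollow"> <!-- don't follow any links on this page -->
Neither one keeps the linked page out of the index; they only tell the engines not to follow the link(s).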
Robots.txt
This file contains a list of pages and directories that search engines should not access.
To read more about robots.txt check out this page: http://googleblog.blogspot.com/2007/01/controlling-how-search-engines-access.html
For more on Nofollow, check out this page: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=96569
Hope this helps!