Rel="Follow"? What the &#@? does that mean?
-
I've written a guest blog post for a site. In the link back to my site they've put a rel="follow" attribute. Is that valid HTML?
I've Googled it but the answers are inconclusive, to say the least.
-
I don't think so either, but you never know. It's a simple enough test to run to see whether Google recognizes a "follow" or "dofollow" value. The theory would be that if it's hardcoded on the link itself, it might override a page-level nofollow.
-
Hi, what I meant was whether I should be looking for robots.txt, or a robots meta tag at the top of the page, or somesuch.
-
Hi Irving
Thanks for the response but the issue of adding tags doesn't apply as it's not my site.
-
AFAIK, there is no way to "sneakily" no-follow a link. You no-follow a link by adding rel=nofollow. If rel=nofollow isn't there, the link is followed.
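For reference, the entire mechanism is one attribute on the anchor tag; a minimal sketch (example.com is a placeholder):

```html
<!-- A normal link: crawlers follow it and it can pass link equity -->
<a href="https://www.example.com/">My site</a>

<!-- A nofollowed link. rel="nofollow" is the only standard value here;
     rel="follow" and rel="dofollow" appear in no HTML spec or Google documentation. -->
<a href="https://www.example.com/" rel="nofollow">My site</a>
```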
-
Test it to see if for some reason it is recognized, just for fun.
If something on a site is nofollowed by default and it doesn't show up in the source code of the link itself (meaning it is declared in another piece of code), add a rel="follow" or rel="dofollow" attribute and see if it overrides the nofollow. You can check with a Firefox plugin that highlights nofollowed links for you (you should already have one installed if you're an SEO).
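The "declared in another piece of code" case is usually a page-level meta robots tag, which nofollows every link on the page while the individual anchor tags look perfectly normal, so it's worth checking the guest post's <head> as well:

```html
<!-- In the <head>: applies nofollow to ALL links on the page -->
<meta name="robots" content="nofollow">
<!-- The same directive can also arrive as an HTTP response header: X-Robots-Tag: nofollow -->
```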
-
The only other place I've seen that is in spam blog comments (as a desperate attempt to override the blog's default "no-follow")....
Yep, that's what I've read as well.
Now he's changed it to rel="dofollow" (no, me neither) -- which strikes me as even more gobbledegook.
Obviously I'm going to ask him to leave out the attribute altogether. But what other attributes should I be looking for on the page source (CTRL+U) to ensure he hasn't sneakily no-followed all the links on the page?
-
Googlebot does obey the rel="nofollow" attribute. As for rel="follow", I don't think so. The only other place I've seen that is in spam blog comments (as a desperate attempt to override the blog's default "nofollow")...
-
It's a way of controlling the link power from a site. They're passing on the link juice to you.
If you want the search engines to see that link on the external blog, then what they have done is a good thing. They could have also just left the attribute out altogether.
People can put rel="nofollow". This means "don't pass link juice". You could interpret it as a directive to the world that whilst you are providing the link to the site, you don't endorse it.
From Google:
"Nofollow" provides a way for webmasters to tell search engines "Don't follow links on this page" or "Don't follow this specific link."
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=96569
Related Questions
-
SEO advice on ecommerce URL structure where categories contain "/c/"
Technical SEO | hampgunn
Hi! We use Hybris as our platform and I would like input on which URL to choose. We must keep "/c/" before the actual category (the "c" stands for category), i.e. this current URL format will be shortened and cleaned:
https://www.granngarden.se/Sortiment/Husdjur/Hund/Hundfoder-%26-Hundmat/c/hundfoder
to either:
a. https://www.granngarden.se/husdjur/hund/hundfoder/c/hundfoder
b. https://www.granngarden.se/husdjur/hund/c/hundfoder
(hundfoder means dog food) The question is whether we should keep the duplicated category name (hundfoder) before the "/c/" or not. Will there be SEO disadvantages to removing the duplicate "hundfoder" before the "/c/"? I prefer the shorter version, of course, but I do not want to jeopardize any SEO rankings or send confusing signals to search engines or customers due to the "/c/" breaking up the URL breadcrumb. What do you guys say and prefer from the above alternatives? Thanks /Hampus
-
Link rel="prev" AND canonical
Technical SEO | AdenaSEO
Hi guys, When you have several tabs on your website with products, you can most likely navigate to page 2, 3, 4, etc. You can add the link rel="prev" and link rel="next" tags to make sure that one page gets indexed/ranked by Google, am I correct? However, this still means that all the pages can get indexed, right? For example, a webshop makes use of the link rel="prev" and rel="next" tags. In the Google results pages, though, all the separate tab pages are still visible/indexed:
http://www.domain.nl/watches/?tab=1
http://www.domain.nl/watches/?tab=24
http://www.domain.nl/watches/?tab=19
etc. Can we prevent this, and make sure only the main page gets indexed and ranked, by adding a canonical link on every 'tab page' pointing to the main page --> www.domain.nl/watches/ ? I hope I explained it well and I'm looking forward to hearing from you. Regards, Tom
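For illustration, the setup described would sit in the <head> of each tab page; here is a sketch for tab 2, using the placeholder domain from the question. One caveat: Google's pagination guidance at the time recommended a self-referencing canonical (or a canonical to a view-all page) alongside rel=prev/next, rather than pointing every tab at page 1:

```html
<!-- Hypothetical <head> of http://www.domain.nl/watches/?tab=2 -->
<link rel="prev" href="http://www.domain.nl/watches/?tab=1" />
<link rel="next" href="http://www.domain.nl/watches/?tab=3" />
<!-- The canonical proposed in the question, pointing each tab at the main page -->
<link rel="canonical" href="http://www.domain.nl/watches/" />
```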
-
What's our easiest, quickest "win" for page load speed?
Technical SEO | danatanseo
This is a follow-up question to an earlier thread located here: http://www.seomoz.org/q/we-just-fixed-a-meta-refresh-unified-our-link-profile-and-now-our-rankings-are-going-crazy
In that thread, Dr. Pete Meyers said "You'd really be better off getting all that script into external files." Our IT Director is willing to spend time working on this, but he believes it is a complicated process because each script must be evaluated to determine which ones are needed "pre" page load and which ones can be loaded "post."
Our IT Director went on to say that he believes the quickest "win" we could get would be to move the JavaScript for our SSL icon (in our site footer) to an internal page, and just link to that page from an image of the icon in the footer. He says this JavaScript, more than any other, slows our page down.
My question is in three parts:
1. How can I verify that this JavaScript is, indeed, a major culprit of our page load speed?
2. Is it possible that it is slow because so many styles have been applied to the surrounding area? In other words, if I stripped out the "Secured by" text and all the styles associated with it, could that affect the efficiency of the script?
3. Are there any negatives to moving that JavaScript to an interior landing page, leaving the icon as an image in the footer and linking to the new page?
Any thoughts, suggestions, comments, etc. are greatly appreciated! Dana
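As a rough sketch of the options being weighed (the seal vendor's URL is invented for the example). On the verification question: a waterfall view from WebPageTest or the browser dev tools' Network tab will show whether this script actually blocks rendering:

```html
<!-- Current pattern: a synchronous third-party script that blocks rendering -->
<script src="https://seal.ssl-vendor.example/seal.js"></script>

<!-- Option A: keep the script but load it without blocking (note: seal scripts
     that rely on document.write will break when deferred) -->
<script src="https://seal.ssl-vendor.example/seal.js" defer></script>

<!-- Option B (the IT Director's proposal): a static image in the footer,
     linking to an internal page that hosts the actual seal script -->
<a href="/ssl-certificate"><img src="/images/ssl-seal.png" alt="Secured by SSL"></a>
```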
-
Rel=author
Technical SEO | pauledwards
Hi everyone, I'm trying to understand the rel=author thing for content, and I need some clarification please. Firstly, do you only use it for content on your own site, or can you use it for a guest post you have done on another domain, linking to your author profile on your domain? Secondly, implementing it, I understand it's three links:
1. A link on the content where the blog post is, with rel=author, pointing to the author page on your domain.
2. A link from your domain's author page to your Google+ profile. This is rel=me.
3. A link on your Google+ profile back to your blog. If so, how do I do this? I only have options to edit my about page and add recommended links; there is no 'contributor' section. I am on a UK profile also. Any help really appreciated, thanks guys.
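For what it's worth, the three links listed above would look roughly like this; every URL and name here is a placeholder, not a real profile:

```html
<!-- 1. On the blog post (your site or the guest post's site): rel=author
     pointing at the author page on your own domain -->
<a href="https://www.example.com/author/yourname" rel="author">Your Name</a>

<!-- 2. On that author page: rel=me pointing at your Google+ profile -->
<a href="https://plus.google.com/100000000000000000000" rel="me">My Google+ profile</a>

<!-- 3. The link back from Google+ is not markup you write: it lives in your
     profile settings as a "Contributor to" entry (at the time, found under
     the Links section when editing the About page) -->
```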
-
International Websites: rel="alternate" hreflang="x"
Technical SEO | waynestock
Hi people, I keep on reading and reading, but I just don't get it... 😉 I mean this page: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=189077&topic=2370587&ctx=topic At the bottom of the page they say: Step 2: Use rel="alternate" hreflang="x". Update the HTML of each URL in the set by adding a set of rel="alternate" hreflang="x" link elements. Include a rel="alternate" hreflang="x" link for every URL in the set. This markup tells Google's algorithm to consider all of these pages as alternate versions of each other. OK! Each URL needs this markup. BUT: Do I need it exactly as written above, or do I have to put in the complete URL of each page? The next question is, what happens exactly in the SERPs when I do it like this (and also with Step 1, which I haven't copied here)? Google will display the "canonical" version of the page, but when a user from the US clicks, will he get to http://en-us.example.com/page.htm? I tried to find other sites which use this method, but I haven't found one. Can someone give me an example website? Thank you, thank you very much! André
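The markup sample appears to have been stripped from the quoted step above; per Google's documentation, it is a set of link elements in the <head> of every page in the set, each with a complete URL (using the example.com hostnames from the question):

```html
<link rel="alternate" hreflang="en" href="http://en.example.com/page.htm" />
<link rel="alternate" hreflang="en-us" href="http://en-us.example.com/page.htm" />
<link rel="alternate" hreflang="en-gb" href="http://en-gb.example.com/page.htm" />
```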
-
REL = canonical and web app
Technical SEO | waynekolenchuk
I started a web app campaign for a site that I recently finished. It had no errors or warnings, but issued rel=canonical notices for every page on the site. What does this mean?
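For context, a rel=canonical notice usually just means the crawler found a canonical tag on the page; notices are informational rather than errors. A self-referencing canonical, a common and harmless pattern, looks like this (the URL is a placeholder):

```html
<!-- In the <head>: declares this URL as the preferred version of the page -->
<link rel="canonical" href="https://www.example.com/current-page/" />
```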
-
Understanding No Follow
Technical SEO | impressem
We manage a couple of sites with hundreds of pages. Most of the sites have content that is not helpful as landing pages but obviously has relevant content related to our desired search terms. Some of the links go off site to another domain. I am trying to understand the issue of "link juice" and whether I gain it or lose it by putting the "nofollow" designation on some of the page links. Specifically, do I increase the value of my pages if I put nofollow tags on lower-tier links off of these pages? Here is a page in question - http://www.vahmarketing.com/product/ductless-hoods Is there a best practice or SEO rule for using "nofollow"? Thanks, Bob Nance
-
How do I use the robots.txt "disallow" command properly for folders I don't want indexed?
Technical SEO | SpringMountain
Today's sitemap webinar made me think about the disallow feature, which seems like the opposite of sitemaps, though it also seems both are kind of ignored in varying ways by the engines. I don't need help semantically; I got that part. I just can't seem to find a contemporary answer about what should be blocked using the robots.txt file.
For example, I have folders containing site comps for clients that I really don't want showing up in the SERPs. Is it better to not have these folders on the domain at all? There are also security issues I've heard of that make sense: simply look at a site's robots file to see what they are hiding. It makes it easier to hunt for files when attackers know the directory the files are contained in. Should I concern myself with this?
Another example is a folder I have for my XML sitemap generator. I imagine Google isn't going to try to index this or count it as content, so do I need to add folders like this to the disallow list?
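A minimal sketch for the two cases mentioned above; the folder names are illustrative placeholders, not the poster's actual paths. Two caveats: Disallow blocks crawling, not indexing (a disallowed URL can still appear in SERPs if other sites link to it), and robots.txt is publicly readable, so anything truly sensitive belongs behind authentication instead:

```
# robots.txt (served from the site root)
User-agent: *
Disallow: /client-comps/
Disallow: /sitemap-generator/

Sitemap: https://www.example.com/sitemap.xml
```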