Nofollow vs. dofollow - the how-to
-
Hi Guys,
Sorry if this is an amateur question. I noticed a few people talking about nofollow and dofollow backlinks, and I just wanted to ask: is there supposed to be some way to set your website up as nofollow or dofollow for backlinks? I noticed a few people saying to make sure that certain directories are nofollowed. I would like to know if I can set this up for my own site, as I'm a bit conscious and paranoid about others who might backlink to my site with lots of spam, negative SEO, etc.
Any insight into this would be much appreciated.
Thanks all
-
In short - do not concern yourself with negative SEO. Yes, it can happen, but if you monitor your site the way you already are - i.e. using Moz diagnostics to regularly crawl your backlinks etc. - you will identify spam links and can then go through the procedure to disavow them. So you have that covered.
However, you should appreciate that if someone creates a link for you, say in an editorial article, you generally want a followed link. I spend time for clients trying to turn nofollows into follows. Then you get the link juice and, hopefully, a bump in rankings.
Clear as mud? If not, let me know. Good question - you're on the right track.
-
Hi Edward,
I think you are a little confused about what this means, so let me try to explain.
Each link can be assigned an attribute called rel="nofollow". The person who owns the page containing the link controls this attribute, so you can control whether the links on your own website are nofollowed, but you have no control over the links other people point at your website.
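For example, the two variants look roughly like this in HTML (the URL is just a placeholder):

<!-- A normal "dofollow" link: no special attribute is needed -->
<a href="https://example.com/some-page/">Useful resource</a>

<!-- A nofollowed link: the rel attribute asks search engines not to pass PageRank or anchor text through it -->
<a href="https://example.com/some-page/" rel="nofollow">Useful resource</a>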
Generally speaking, you want your link profile to contain both, as a mix of followed and nofollowed links demonstrates a healthy, natural profile.
How does Google handle nofollowed links?
In general, we don't follow them. This means that Google does not transfer PageRank or anchor text across these links. Essentially, using nofollow causes us to drop the target links from our overall graph of the web. However, the target pages may still appear in our index if other sites link to them without using nofollow, or if the URLs are submitted to Google in a Sitemap. Also, it's important to note that other search engines may handle nofollow in slightly different ways.
Using nofollow on your own website
Using nofollow on links within your own website discourages Google from crawling certain pages. For example, if you have a "Login" or "Checkout" page, many people choose to nofollow links to it to stop Google crawling and indexing it. This stops a page whose content is normally fairly thin, due to its nature, from being indexed on your site. (Strictly speaking, a robots meta noindex tag on the page itself is the more reliable way to keep it out of the index; nofollow on the link mainly stops link equity being passed through it.)
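As a rough sketch (the /login and /checkout paths are made up for illustration), the internal links and the meta tag would look something like this:

<!-- Internal links marked nofollow so crawlers are asked not to follow them or pass equity -->
<a href="/login" rel="nofollow">Login</a>
<a href="/checkout" rel="nofollow">Checkout</a>

<!-- Placed in the <head> of the login/checkout page itself: the reliable way to keep it out of the index -->
<meta name="robots" content="noindex, nofollow">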
It is also sometimes used to deal with duplicate content: if you know a page is a duplicate of another but it is still needed, some people choose to nofollow links to it rather than use canonical tags and so on.
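For comparison, a canonical tag is just one line in the <head> of the duplicate page, pointing at the version you want indexed (the URL here is a placeholder):

<link rel="canonical" href="https://www.example.com/preferred-page/">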
In summary: you can't nofollow links that point to your website from external sites (unless you contact the person linking to you and they agree to do so). Your best defence against spammy links is to monitor your link profile, and when a link you don't like pops up, follow the normal channels to have it removed.
Nofollow on your own website should only be used to stop Google crawling and indexing certain links, and to control whether link juice is passed, as and when you need it. Note that Google still has a bot that may crawl through nofollowed links, but in general it will respect your wishes.
-
It's important to remember that a healthy link profile will be a mix of both dofollow and nofollow links. There is no rule of thumb that says which links should contain which attributes, but if you were in a generic directory, for example, you would want it to be nofollowed.
More often than not, any link that is given editorially is fine with whatever attribute it comes with. Nofollowed links are very useful; they just don't pass PageRank.
Google is pretty smart at detecting spam links and negative SEO, due to how these normally appear, so I wouldn't worry too much unless you have seen something specific that is concerning you. You are also able to handle any negative SEO or bad links by disavowing the links in Webmaster Tools.
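If you ever do need to disavow, the file you upload in Webmaster Tools is just a plain text file. A minimal sketch, with made-up domains and URLs:

# Lines beginning with # are comments and are ignored
# Disavow every link from an entire spammy domain
domain:spammy-directory-example.com
# Disavow one specific URL
http://another-spam-site-example.com/bad-page.html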
I hope this helps a little?
-Andy