Nofollow vs dofollow - the how-to
-
Hi Guys,
Sorry if this is an amateur question. I noticed a few people talking about nofollow and dofollow backlinks. Is there supposed to be some way to set your website up as nofollow or dofollow for backlinks? I noticed a few people saying to make sure that some directories are nofollow, and I would like to know if I can set this up for my own site, as I'm a bit conscious and paranoid about others who might backlink to my site with huge spam or negative SEO etc.
Any insight into this would be much appreciated
Thanks all
-
In short - do not concern yourself with negative SEO. Yes, it can happen - but if you monitor your site the way you are - i.e. using Moz diagnostics to regularly crawl backlinks etc. - you will identify spam links and can then go through the procedure to disavow them. So you have that covered.
However, you should appreciate that if someone creates a link for you - an editorial article, for example - you generally want a followed link. I spend time for clients trying to turn nofollows into follows. Then you get the link juice and, hopefully, the bump in rankings.
Clear as mud? If not, let me know. Good question - you're on the right track.
-
Hi Edward,
You are a little confused about what this means, I think, so let me try to explain.
Each link can be assigned an attribute called rel="nofollow". The person who owns the page containing the link controls this attribute, so you can control whether the links on your website are nofollow, but you have no control over the links other people point at your website.
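For example, the only visible difference between a followed and a nofollowed link is that attribute in the HTML (the URL below is just a placeholder):

<a href="https://www.example.com/">A normal, followed link</a>
<a href="https://www.example.com/" rel="nofollow">The same link, marked nofollow</a>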
Generally speaking, you want your link profile to contain both, as a mix of the two demonstrates a healthy, natural link profile.
How does Google handle nofollowed links?
In general, we don't follow them. This means that Google does not transfer PageRank or anchor text across these links. Essentially, using nofollow causes us to drop the target links from our overall graph of the web. However, the target pages may still appear in our index if other sites link to them without using nofollow, or if the URLs are submitted to Google in a Sitemap. Also, it's important to note that other search engines may handle nofollow in slightly different ways.
Using nofollow on your own website
The use of nofollow links on your own website, pointing to your own pages, stops Google crawling and indexing certain pages on your website. For example, if you had a "Login" or "Checkout" page, many people choose to nofollow links to it to stop Google crawling and indexing it. This stops a page with normally fairly poor content, due to its nature, from being indexed on your site.
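As a rough sketch (the page names are just examples), that usually means adding the attribute to the internal links pointing at those pages; if you really need the page kept out of the index, a meta robots noindex tag on the page itself is the stronger signal:

<a href="/checkout" rel="nofollow">Checkout</a>
<!-- and in the <head> of the checkout page itself, if you want it out of the index -->
<meta name="robots" content="noindex, nofollow">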
It is also used to try to prevent duplicate content issues: if you know a page is a duplicate of another but it is needed, rather than use canonical tags etc., some people choose to nofollow the links to it.
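For comparison, the canonical tag alternative looks like this - it goes in the <head> of the duplicate page and points at the version you want indexed (the URL is purely illustrative):

<link rel="canonical" href="https://www.example.com/preferred-page/">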
In summary: you can't nofollow links that point to your website from external sites (unless you contact the person sending the link and they agree to do so). Your best defence against spammy links is to monitor your link profile, and when a link pops up that you don't like, follow the normal channels to remove it.
Nofollow on your own website should only be used to stop Google crawling and indexing certain links, and to control where link juice passes, as and when you need it. Google still has a bot that crawls through nofollows, but in general it will respect your wishes.
-
It's important to remember that a healthy link profile will be a mix of both dofollow and nofollow links. There is no rule of thumb that says which links should contain which attributes, but if you were in a generic directory, for example, you would want it to be nofollowed.
More often than not, any link that is given editorially is fine with whatever attribute it comes with. Nofollowed links are very useful, but they just don't pass PageRank.
Google is pretty smart at detecting spam links and negative SEO though, due to how these normally appear, so I wouldn't worry too much, unless you have seen something that is concerning you? You are also able to handle any negative SEO or bad links through disavowing the links in Webmaster Tools.
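If you ever do need to disavow, it's just a plain text file uploaded through the disavow tool - one URL or domain per line, with lines starting with # treated as comments. The sites below are made up purely for illustration:

# spammy directory that refused to remove our link
domain:spammy-directory-example.com
# a single bad page rather than a whole domain
http://another-example-site.com/bad-page.html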
I hope this helps a little?
-Andy