Rel Noindex Nofollow tag vs meta noindex nofollow
-
Hi Mozzers
There's something I was pondering this morning and I would love to hear your opinion on it.
We had a bit of an issue on our client's website at the beginning of the year. I tried to work around it by using wildcards in my robots.txt, but because different search engines treat wildcards differently, it didn't work out so well and only some search engines understood what I was trying to do. So here goes:
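For reference, the kind of wildcard rule I was attempting looked something like the sketch below (the pattern shown is illustrative, not our actual file). Google and Bing support the * wildcard, but some other crawlers ignore it or read it literally:

    User-agent: *
    # Block any URL whose query string starts a ?filter parameter
    # (wildcard support varies by search engine)
    Disallow: /*?filter=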
A large number of URLs on the website carry a ?filter parameter pushed from the database. We use filters on the site to narrow down content so users can find what they are looking for much more easily, which results in database-driven ?filter URLs (those ugly &^% URLs we all hate so much).
What we're looking to do is implement nofollow/noindex on all the internal links pointing to the ?filter parameter URLs. However, my SEO sense tells me the noindex, nofollow should rather go in the meta robots tag of each individual ?filter parameter URL instead of on all the internal links pointing to them. Am I right in thinking this way? (The reason we want to put it on the internal links at the moment is that the development company states they don't have control over the metadata of these database-driven parameter URLs.)
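To make it concrete, the link-level change being proposed (at least the nofollow part of it) would look something like this on every internal link, with the URL and anchor text made up for illustration:

    <!-- internal link to a filtered view, with the nofollow hint added -->
    <a href="/products?filter=red" rel="nofollow">Red products</a>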
If I am not mistaken, noindex/nofollow on the internal links could be seen as PageRank sculpting, whereas an on-page meta robots noindex, nofollow is more of a command, like your robots.txt.
Has anyone tested this before, or does anyone have more insight into the finer details of noindex/nofollow?
PS: Canonical tags are also not doable at this point because we're still in the process of cleaning up all the parameter URLs, so roughly 70% of the URLs don't have an SEO-friendly URL yet to canonicalize to.
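Once the clean URLs do exist, my understanding is that the fix would be a single tag in the head of each filtered page along these lines (the target URL is purely illustrative):

    <link rel="canonical" href="https://www.example.com/products/red-widgets">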
Would love to hear your thoughts on this.
Thanks,
Chris Captivate.
-
I'm not a fan of doubling up, but only because it makes the results really hard to measure. If you implement both, you won't know which one worked, ultimately. I'm not sure it's actually harmful - it just can be hard to track.
If you're just trying to prevent future problems (and don't have any immediate issues), I'd probably pick one and give it a few weeks.
-
Hi Dr Pete
Thank you so much for your input, I really appreciate it. It's always fun learning something new.
I also don't prefer the engine-specific approach. However, could it hurt to implement both solutions?
Regards,
Chris Captivate.
-
A couple of options here. First off, though, there's really no rel="noindex" at the link level. You can "nofollow" a link, and that generally disrupts indexing, but it's not guaranteed. You're right that it can look like PR sculpting, although that's not a huge issue if your usage makes sense. In other words, if you're using rel=nofollow to keep the crawlers away from content with low search value, I generally think that's ok.
You could META noindex, nofollow the target pages, although then Google has to crawl those. The advantage is that I find the META Robots approach to be a bit more powerful.
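If the developers can get a tag into the head of those pages, it's one line per page, something like this minimal sketch. Note that the page has to stay crawlable (i.e. not blocked in robots.txt), or Google will never see the tag:

    <!-- in the <head> of each ?filter URL; the page must remain crawlable for this to be seen -->
    <meta name="robots" content="noindex, nofollow">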
The other option is to use parameter handling in Google Webmaster Tools (Bing has a similar function) to tell Google to ignore the "?filter" parameter. The purist in me doesn't love the engine-specific approach, but it's easier, you don't need to change the site itself, and it typically works fairly well.