Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
Does having a lot of pages with noindex and nofollow tags affect rankings?
-
We are an e-commerce marketplace for alternative fashion and home decor, with over 1,000 stores on the marketplace. In March 2018 we switched the website from HTTP to HTTPS, and at the same time added noindex and nofollow tags to the store about pages and store policies (mostly boilerplate content).
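For reference, the tags in question are the standard robots meta directives, i.e. something like this in the head of each store about and policies page:

```html
<!-- tells search engines: don't index this page, don't follow its links -->
<meta name="robots" content="noindex, nofollow">
```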
Our traffic dropped by 45% and has not recovered since, despite everything we have done.
I am wondering: could these tags be affecting our rankings?
-
Hi Gaston
Thank you for the detailed response and suggestions. I will follow up with my findings. On points 3 and 4, I think there is something there.
James
-
Hi James,
Great that you've checked out those items and there aren't errors.
I'll break my response into bullet points so it's easier to respond to:

1. I'm bugged that the traffic loss occurred in the same month as the HTTPS redirection. That strongly suggests you've either killed, redirected, or noindexed some pages that drove a lot of traffic.
2. It's also possible that you didn't deserve that much traffic, either because you ranked for searches where you weren't relevant or because Google didn't fully understand your site. That often happens when a migration takes place, as Google needs to recalculate and fully understand the new site.
3. If you still have the old HTTP Search Console property, I'd check as many keywords as possible (in some scalable way), trying to find which ones have fallen in the rankings.
4. When checking those keywords, compare the URLs that were ranking; there could be some changes.
5. And lastly, have you made sure that there aren't any indexation and/or crawlability issues? Check the raw number of indexable URLs and compare it with the number Search Console shows in the Index Coverage report.
Best wishes.
GR
-
Hi Gaston
Thank you for sharing your insights.
1. I have looked through all the pages and made sure we have not noindexed important pages.
2. The migration went well; no double redirects or duplicate content.
3. I looked through Google Search Console and fixed all the errors (mostly complaints about 404s caused by products that are out of stock or from vendors who leave the website).
4. A friend said he thinks our pages are over-optimized and that this could be the reason. We went ahead and tweaked all the pages that were driving traffic, but no change.
If you have a moment, here is our website: www.rebelsmarket.com. If anything stands out, please let me know. I appreciate your help.
James
-
Hi Joe
We have applied all the redirects carefully and tested them to make sure. We have no duplicate content.
The URL: www.rebelsmarket.com
Redirect to SSL: March 2018 (we started with the blog and then moved to the product pages)
We added the noindex and nofollow tags at the same time.
Thank you
James
-
Hi John
Sorry, I have been tied up with my travel schedule. Here is the website: www.rebelsmarket.com
Thank you for your help, John
-
Hi James,
Your issues lie elsewhere; did anything else happen during the update? My first thought is that the redirects were incorrectly applied.
- What's the URL?
- When was the HTTP > HTTPS redirect installed, and how?
- When were the noindex and nofollow tags added?
You're a month in, so you should be able to recover. Sharing the URL would be useful if you need any further assistance.
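For reference, a cleanly applied blanket redirect sends every HTTP URL to its HTTPS equivalent in a single hop. A sketch, assuming an Apache/.htaccess setup (adapt if you're on nginx or IIS):

```apache
# Single-hop 301 from HTTP to HTTPS, preserving host and path
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```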
-
Hey James - would you be comfortable sharing the URL? I can run some diagnostics on it to see what other issues could be the cause of the drop.
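For instance, one of the first things I'd verify is that each HTTP URL answers with a single 301 straight to its HTTPS twin. A healthy exchange looks roughly like this (illustrative only; the path is hypothetical):

```http
GET /some-product HTTP/1.1
Host: www.rebelsmarket.com

HTTP/1.1 301 Moved Permanently
Location: https://www.rebelsmarket.com/some-product
```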
Thanks!
John
-
Hi James,
I'm sorry to hear that you've lost over 45% of your traffic.
Absolutely not; having a lot of noindex and nofollow pages won't affect your rankings or your SEO strength.
On the other hand, a traffic drop could be related to many issues, among them:
- Algorithm changes; there has been a lot of movement this year
- You've noindexed some of your high-traffic pages
- Some part of the migration went wrong
- And the list could be endless
I'd start by checking Search Console; there you can spot which keywords and/or URLs are no longer ranking as high.
This sort of tutorial on analyzing a traffic drop might come in handy: How to Diagnose SEO Traffic Drops: 11 Questions to Answer - Moz Blog
Hope it helps.
Best of luck.
GR
Related Questions
-
Page rank and menus
Intermediate & Advanced SEO | AL123al
Hi, my client has a large website with a navigation of main categories. However, they also have a hamburger-type navigation in the top right; if you click it, it opens a massive menu with every category and page visible. Do you know if having a navigation like this bleeds PageRank? If all deep pages are visible from the hamburger navigation, PageRank is not being conserved for the main categories. If you click a main category in the main navigation (not the hamburger), you can see the sub-pages. I think this is the right structure, but the client has installed this huge menu to make it easier for people to see what there is. From a technical SEO standpoint, is this not bad?
-
Category Page as Shopping Aggregator Page
Intermediate & Advanced SEO | Alexcox6
Hi, I have been reviewing the info from Google on structured data for products (https://developers.google.com/search/docs/data-types/products) and started to ponder. Here is the scenario: you have a category page that lists 8 products, and each product shows an image, price, and review rating. As the individual product pages are already marked up, they display rich snippets in the SERPs.
I wonder how we get rich snippets for the category page itself. Google suggests a markup for shopping aggregator pages that list a single product along with information about different sellers offering that product, but nothing for categories. My question is this: can we use the shopping aggregator markup for category pages to achieve the coveted rich results (from and to price, average reviews)? Keen to hear from anyone who has had any thoughts on the matter or has already tried this.
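For what it's worth, a sketch of the aggregator-style markup in question dropped onto a category page; the names and numbers are hypothetical, and whether Google will actually show rich results for it at the category level is exactly the open question:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Blue Widgets",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.4",
    "reviewCount": "89"
  },
  "offers": {
    "@type": "AggregateOffer",
    "priceCurrency": "USD",
    "lowPrice": "19.99",
    "highPrice": "49.99",
    "offerCount": "8"
  }
}
</script>
```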
-
Fresh page versus old page climbing up the rankings
Intermediate & Advanced SEO | seoanalytics
Hello, I have noticed that if I publish a webpage Google has never seen, it ranks right away, usually in a decent position to start with (not great, but decent), around top 30 to 50, and then over the months it slowly climbs up the rankings. However, if my page has existed for, let's say, 3 years and I make changes to it, it takes much longer to climb up the rankings. Has anyone noticed that too, and why is that?
-
Is a 301 Redirect and a Canonical Tag on Uppercase to Lowercase Pages Correct?
Intermediate & Advanced SEO | ABK717
We have a medium-size site that lost more than 50% of its traffic in July 2013, just before the Panda rollout. After working with an SEO agency, we were advised to clean up various items, one of them being that the 10k+ URLs were all mixed case (i.e. www.example.com/Blue-Widget). A 301 redirect was set up thereafter, forcing all these URLs to a lowercase version (i.e. www.example.com/blue-widget). In addition, a canonical tag was placed on all of these pages in case any parameters or other characters were incorporated into a URL. I thought this was a good setup, but when running an SEO audit through a third-party tool, it shows me the massive number of 301 redirects, and now I wonder if there should only be a canonical without the redirect, or if it's okay to have tens of thousands of 301 redirects on the site. We have not recovered from the traffic loss yet, and we are wondering if it's really more of a technical problem than a Google penalty. Guidance and advice from those experienced in the industry is appreciated.
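As a sketch, using the question's own example URL: the 301 carries any uppercase request to the lowercase page, and a self-referencing canonical on that page mops up parameterized variants, so the two can reasonably coexist:

```html
<!-- in the <head> of https://www.example.com/blue-widget, the 301 target -->
<link rel="canonical" href="https://www.example.com/blue-widget">
```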
-
"noindex, follow" or "robots.txt" for thin content pages
Does anyone have any testing evidence what is better to use for pages with thin content, yet important pages to keep on a website? I am referring to content shared across multiple websites (such as e-commerce, real estate etc). Imagine a website with 300 high quality pages indexed and 5,000 thin product type pages, which are pages that would not generate relevant search traffic. Question goes: Does the interlinking value achieved by "noindex, follow" outweigh the negative of Google having to crawl all those "noindex" pages? With robots.txt one has Google's crawling focus on just the important pages that are indexed and that may give ranking a boost. Any experiments with insight to this would be great. I do get the story about "make the pages unique", "get customer reviews and comments" etc....but the above question is the important question here.
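For concreteness, the two mechanisms being weighed look like this (the path pattern is hypothetical):

```html
<!-- Option 1: Google crawls the page but keeps it out of the index;
     the internal links on it can still pass value -->
<meta name="robots" content="noindex, follow">
```

```
# Option 2 (robots.txt): matching pages are never fetched, saving crawl
# budget, but no interlinking value flows through them
User-agent: *
Disallow: /product/
```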
-
Canonical tag + HREFLANG vs NOINDEX: Redundant?
Intermediate & Advanced SEO | WMCA
Hi, we launched our new site back in Sept 2013 and, to control indexation, traffic, etc., we only allowed the search engines to index single-dimension pages, such as just category, brand, or collection, but never both, like category + brand, brand + collection, or collection + category. We are now opening indexing to double-faceted pages like category + brand, and the new tag structure would be: For any other facet we're including a "noindex, follow" meta tag. 1. My question is: if we're including a "noindex, follow" tag on select pages, do we need to include a canonical or hreflang tag at all? Should we include it either way for when we want to remove the "noindex"? 2. Is the x-default redundant? Thanks for any input. Cheers, WMCA
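As a generic sketch of how canonical and hreflang tags with an x-default sit together (URLs hypothetical, not the poster's actual structure):

```html
<link rel="canonical" href="https://example.com/en/category-brand/">
<link rel="alternate" hreflang="en" href="https://example.com/en/category-brand/">
<link rel="alternate" hreflang="fr" href="https://example.com/fr/category-brand/">
<!-- x-default: the version shown when no language/region matches -->
<link rel="alternate" hreflang="x-default" href="https://example.com/category-brand/">
```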
-
301 redirection pointing to noindexed pages
Intermediate & Advanced SEO | LosNomads
I have rather an unusual situation where a recently launched affiliate site does not have any unique content, as it's all syndicated content. For that reason, we are currently using the noindex, nofollow meta tags to keep the pages out of the search engines' index until we create unique content for them. The problem is that, due to a very tight timeframe with rebranding, we are looking at 301 redirecting (on a page-to-page basis) another high-authority legacy domain to this new site before we have had a chance to add unique content and remove the noindex, nofollow tags. I would assume that any link authority normally passed through the 301 would be lost in this scenario, but I'm uncertain of what the broader impact might be. Has anyone dealt with a similar scenario? I know this scenario is not ideal, and I would rather wait until the unique content is up and the noindex tags are removed before launching the 301 redirect of the legacy domain, but there are a number of competing priorities at play outside of SEO.
-
Meta NoIndex tag and Robots Disallow
Intermediate & Advanced SEO | bjs2010
Hi all, I hope you can spend some time answering my first of a few questions 🙂 We are running a Magento site, and the layered/faceted navigation nightmare has created thousands of duplicate URLs! Anyway, during my process of tackling the issue, I disallowed in robots.txt anything in the query string that was not a p (I allowed this for pagination). After checking some pages in Google, I did a site:www.mydomain.com/specificpage.html, and a few duplicates came up along with the original, with "There is no information about this page because it is blocked by robots.txt". So I had also added meta noindex, follow on all these duplicates, but I guess it wasn't being read because of robots.txt. So, coming to my question: did robots.txt block access to these pages? If so, were they already in the index, and after disallowing them with robots.txt, Googlebot could not read the meta noindex? Does meta noindex, follow on pages actually help Googlebot decide to remove those pages from the index? I thought robots.txt would stop and prevent indexation, but I've read this: "Noindex is a funny thing, it actually doesn't mean 'You can't index this', it means 'You can't show this in search results'. Robots.txt disallow means 'You can't index this' but it doesn't mean 'You can't show it in the search results'." I'm a bit confused about how to use these, both to prevent duplicate content in the first place and to help address duplicate content once it's already in the index. Thanks! B