Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies. More details here.
Does having a lot of pages with noindex and nofollow tags affect rankings?
-
We are an e-commerce marketplace for alternative fashion and home decor, with over 1,000 stores on the marketplace. Earlier this year, in March 2018, we switched the website from HTTP to HTTPS and also added noindex and nofollow tags to the store About pages and store policy pages (mostly boilerplate content).
Our traffic dropped by 45% and we have not recovered since.
I am wondering: could these tags be affecting our rankings?
-
Hi Gaston
Thank you for the detailed response and suggestions. I will follow up with my findings. Points 3 and 4: I think there is something there.
James
-
Hi James,
Great that you've checked those items and found no errors.
I'll break my response into bullet points so it's easier to respond to:
1- It bothers me that the traffic loss occurred in the same month as the HTTPS redirection.
That strongly suggests you either killed, redirected, or noindexed some pages that drove a lot of traffic.
2- It's also possible that you didn't deserve that much traffic, either because you were ranking for searches you weren't relevant to or because Google didn't fully understand your site. That often happens when a migration takes place, as Google needs to recalculate and fully understand the new site.
3- If you still have the old HTTP Search Console property, I'd check as many keywords as possible (in some scalable way), trying to find which ones have fallen in rankings.
4- When checking those keywords, compare the URLs that were ranking; there could be some changes.
5- And lastly, have you made sure there aren't any indexation and/or crawlability issues? Check the raw number of indexable URLs and compare it with the number that Search Console shows in the index coverage report.
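Point 5 can be scripted rather than eyeballed. Here's a minimal sketch, assuming Python with only the standard library, that flags whether a page's fetched HTML carries a noindex robots meta tag (fetching the pages themselves is left out, and the helper names are made up for illustration):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects directives from <meta name="robots" content="..."> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if (attrs.get("name") or "").lower() == "robots":
            content = attrs.get("content") or ""
            self.directives.extend(d.strip().lower() for d in content.split(","))

def is_indexable(html: str) -> bool:
    """True when the page carries no noindex directive in a robots meta tag."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" not in parser.directives
```

Run something like this over every URL in your crawl, count the pages reported as indexable, and compare that raw total with the index coverage report in Search Console.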
Best wishes.
GR -
Hi Gaston
Thank you for sharing your insights.
1. I have looked through all the pages and made sure we have not noindexed important pages
2. The migration went well; no double redirects or duplicate content.
3. I looked through Google Search Console and fixed all the errors (mostly complaints about 404 errors caused by products that are out of stock or from vendors who have left the website)
4. A friend said he thinks our pages are over-optimized, and that could be the reason. We went ahead and tweaked all the pages that were driving traffic, but no change.
If you have a moment, here is our website: www.rebelsmarket.com. If anything stands out, please let me know. I appreciate your help.
James
-
Hi Joe
We applied all the redirects carefully and tested them to make sure we have no duplicate content.
The url: www.rebelsmarket.com
Redirect to SSL: March 2018 (we started with the blog and then moved to products page)
We added the noindex and nofollow tags at the same time.
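For what it's worth, a migration like this can be re-verified at scale from the sitemap alone. A rough sketch, assuming Python with only the standard library and that the sitemap XML has already been downloaded (the function names are mine, for illustration):

```python
import xml.etree.ElementTree as ET

# Default namespace used by standard XML sitemaps.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text: str) -> list[str]:
    """Extract every <loc> value from a sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]

def http_leftovers(urls: list[str]) -> list[str]:
    """URLs still listed as plain HTTP after an HTTPS migration."""
    return [u for u in urls if u.startswith("http://")]
```

If `http_leftovers` comes back non-empty after a March migration, the sitemap is still advertising the old HTTP URLs to Google.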
Thank you
James
-
Hi John
Sorry, I have been tied up with my travel schedule. Here is the website: www.rebelsmarket.com
Thank you for your help John
-
Hi James,
Your issues lie elsewhere. Did anything else happen during the update? My first thought is that the redirects were incorrectly applied.
- What's the URL?
- When was the HTTP > HTTPS redirect installed, and how?
- When were the noindex and nofollow tags added?
You're a month in, so you should be able to recover. Sharing the URL would be useful if you need any further assistance.
-
Hey James - would you be comfortable sharing the URL? I can run some diagnostics on it to see what other issues could be the cause of the drop.
Thanks!
John
-
Hi James,
I'm sorry to hear that you've lost over 45% of your traffic.
Absolutely not: having a lot of noindex and nofollow pages won't affect your rankings or your SEO strength. On the other hand, a traffic drop could be related to many issues, including:
- Algorithm changes, there has been a lot of movement this year
- You've noindexed some of your high traffic pages
- Some part of the migration went wrong
- And the list could be endless.
I'd start by checking Search Console; there you can spot which keywords and/or URLs are no longer ranking as high.
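That Search Console comparison is easy to script once you have before and after exports of average positions. A minimal sketch, assuming Python and that the two exports have already been loaded into keyword-to-position dicts (the helper name and the "101 means fell out of the top 100" convention are mine):

```python
def biggest_rank_drops(before, after, top=10):
    """Keywords sorted by positions lost between two Search Console exports.

    Positions are 1-based; a keyword missing from `after` is treated as
    position 101, i.e. it fell out of the top 100.
    """
    drops = []
    for keyword, old_pos in before.items():
        new_pos = after.get(keyword, 101)
        if new_pos > old_pos:  # a larger position number means a lower ranking
            drops.append((keyword, new_pos - old_pos))
    drops.sort(key=lambda item: item[1], reverse=True)
    return drops[:top]
```

Feeding it the pre-migration export and a current one surfaces exactly the keywords (and their URLs) worth auditing first for a stray noindex.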
This sort of tutorial on analyzing a traffic drop might come in handy: How to Diagnose SEO Traffic Drops: 11 Questions to Answer - Moz Blog
Hope it helps.
Best luck.
GR