Site moved. Unable to index page: 'noindex' detected in robots meta tag?!
-
Hope someone can shed some light on this:
We moved our smaller site into our main site (different domains).
The smaller site that was moved: https://www.bluegreenrentals.com
Directory where the site was moved: https://www.bluegreenvacations.com/rentals
Each page from the old site was 301 redirected to the appropriate page under /rentals, but we are seeing a significant drop in rankings and traffic. I am also unable to request a change of address in Google Search Console (a separate issue that I can elaborate on).
Many of the new 301-redirect destination pages are not indexed. When I inspected one, I got this message:
Indexing allowed? No: 'noindex' detected in 'robots' meta tag
All pages are set to index/follow and there are no restrictions in robots.txt.
Here is an example URL: https://www.bluegreenvacations.com/rentals/resorts/colorado/innsbruck-aspen/
Can someone take a look and share an opinion on this issue? Thank you!
-
That's hugely likely to have had an impact. No-indexing pages before they were ready was a mistake, but the much bigger mistake was releasing the site early, before it was 'ready'. The site should only have been set live once ALL pages were ported over to the new environment.
Also, if all the pages weren't yet live on the staging environment, how could the person looking at staging / the old site have done all the 301 redirects properly?
When you no-index URLs you kill their SEO authority (dead). Often it never fully recovers and has to be rebuilt from scratch. In essence, a 301 to a no-indexed URL moves the SEO authority from the old page into 'nowhere' (cyber oblivion).
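A quick way to catch exactly this pattern in an audit is to flag every redirect whose destination carries noindex. A rough sketch, assuming you already have a redirect map and a set of noindexed URLs from a crawl (the example URLs below are hypothetical):

```python
def audit_redirects(redirect_map, noindex_urls):
    """Flag 301 targets that are noindexed: any authority redirected
    to them is effectively being sent into 'nowhere'."""
    problems = []
    for old_url, new_url in redirect_map.items():
        if new_url in noindex_urls:
            problems.append((old_url, new_url))
    return problems

# Hypothetical crawl output:
redirects = {"https://old.example/page-a": "https://new.example/page-a"}
noindexed = {"https://new.example/page-a"}
print(audit_redirects(redirects, noindexed))
# [('https://old.example/page-a', 'https://new.example/page-a')]
```

Anything the function returns is a redirect that is currently discarding authority and should be fixed before requesting re-indexation.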
The key lesson is: don't set a half-ready site live and finish development there. WAIT until you are ready, then perform your SEO / architectural / redirect maneuvering.
Even if you hadn't no-indexed those new URLs, Google checks whether the content on the old and new URLs is similar (think string similarity, in machine terms) before 'allowing' the SEO authority from the old URL to flow to the new one. If the content isn't basically the same, Google expects the pages to 'start over' and 're-prove themselves'. Why? Well, you tell me why a new page with different content should benefit from the links to an old URL that was different, when the webmasters who linked to that old URL may well not choose to link to the new one.
Even if you hadn't no-indexed those new URLs, because they were incomplete their content was probably placeholder content (radically different from the content of the old URLs on the old site). It's extremely likely that even without the noindex tags, the migration still would have fallen flat on its face.
In the end, your best course of action is to finish all the content, make sure the 301s are actually accurate (by the sounds of it, many of them won't be), lift the noindex tags, and request re-indexation. If you are very, very lucky, some of the SEO juice from the old URLs will still exist and the new URLs will get some shreds of authority through (which is better than nothing). In reality, though, the pooch is already screwed by this point.
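Before requesting re-indexation, it's worth verifying that every destination page has actually dropped the tag. A rough stdlib-only sketch for scanning fetched HTML (the sample markup is hypothetical, and a real audit would fetch each page first):

```python
import re

def meta_robots(html):
    """Collect directives from <meta name="robots"> tags in a page's HTML.
    Note: this simple regex only handles the name-before-content attribute
    order; a real audit should use a proper HTML parser."""
    directives = set()
    for m in re.finditer(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
        html, re.IGNORECASE,
    ):
        directives.update(d.strip().lower() for d in m.group(1).split(","))
    return directives

page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
print("noindex" in meta_robots(page))             # True
print("noindex" in meta_robots("<html></html>"))  # False
```

Run something like this over every redirect destination; any page still reporting noindex will keep failing inspection no matter how many times re-indexing is requested.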
-
Thank you for the quick reply.
Yes, that's right: the URLs and page design are from 2017. The site was old and neglected. We decided to give it a facelift, sunset the domain in a few months, and bring the site under our main site.
While pages were still in development (but already migrated from staging to the live site), we needed to protect them from accidental indexation, so we flagged every page "noindex, nofollow". Is it possible that Google crawled the pages back then, saw the noindex (as it was set at the time), and never returned? If that's the case, should I manually request indexing?
-
I love these kinds of questions. You have shared a moved page's URL; can you give us the URL it resided at before it was moved, which 'should' be redirecting now? That would massively help.
Edit: found this one:
https://www.bluegreenrentals.com/searchresults.aspx?s=CO&sl=COLORADO
(this is what the page apparently used to look like before it was redirected, but the image is a little old, from 2017; OP, can you confirm it looked like this directly prior to the redirect?)
... which 301 redirects to:
https://www.bluegreenvacations.com/rentals/resorts/colorado/innsbruck-aspen
... gonna carry on looking but this example of the full chain may help any other Mozzers looking to answer this Q
Suspected issue at this juncture, which could be wrong (not loads to go on right now): content dissimilarity between the old and new URLs, leading Google to deny the 301s.
FYI, info to help the OP: the noindex issue may relate more to this:
https://developers.google.com/search/reference/robots_meta_tag (noindex may be deployed in the HTML as a meta tag, but it can also be fired through the X-Robots-Tag HTTP header, which is another kettle of fish...)
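To illustrate the header case: a noindex sent via X-Robots-Tag won't appear anywhere in the page source, so a view-source check will miss it entirely. A small sketch, assuming you've already captured response headers (e.g. from `curl -I` or a crawler) into a dict:

```python
def header_noindex(headers):
    """True if an X-Robots-Tag HTTP header carries a noindex directive.
    This kind of noindex never shows up in the page's HTML source."""
    value = headers.get("X-Robots-Tag", "") or headers.get("x-robots-tag", "")
    return "noindex" in [d.strip().lower() for d in value.split(",")]

# Hypothetical captured headers:
print(header_noindex({"X-Robots-Tag": "noindex, nofollow"}))  # True
print(header_noindex({"Content-Type": "text/html"}))          # False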