Certain Product Pages Not Indexing
-
Hey All,
We discovered an issue where new product pages on our site were not getting indexed because a "noindex" tag was inadvertently being added to the <head> section when those pages were created.
We removed the noindex tag in late April, and some of the pages that had not previously been indexed are now showing up, but others are still not getting indexed. I'd appreciate some help understanding why that could be.
Here is an example of a page that was not in the index but is now showing after removal of noindex:
http://www.cloud9living.com/san-diego/gaslamp-quarter-food-tour
And here is an example of a page that is still not showing in the index:
http://www.cloud9living.com/atlanta/race-a-ferrari
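One quick sanity check before anything else (a sketch of my own, not something from this thread): fetch each affected URL and confirm the noindex directive is really gone, both from the `<meta name="robots">` tag and from the `X-Robots-Tag` response header, which is easy to overlook. The user-agent string and function names here are made up for illustration.

```python
# Audit sketch: report whether a noindex directive is present on a page,
# either in a <meta name="robots"> tag or in an X-Robots-Tag header.
from html.parser import HTMLParser
from urllib.request import Request, urlopen


class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives.append(a.get("content", "").lower())


def noindex_status(url):
    """Return (header_has_noindex, meta_has_noindex) for a URL."""
    req = Request(url, headers={"User-Agent": "noindex-audit/1.0"})
    with urlopen(req) as resp:
        header = (resp.headers.get("X-Robots-Tag") or "").lower()
        body = resp.read().decode("utf-8", errors="replace")
    parser = RobotsMetaParser()
    parser.feed(body)
    meta = any("noindex" in d for d in parser.directives)
    return "noindex" in header, meta
```

Running `noindex_status` over the full list of new product URLs would confirm whether the fix actually shipped everywhere, or whether some page templates still emit the tag.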
UPDATE: The above page is now showing after I manually submitted it in WMT. I had previously submitted another page about a month ago and it still wasn't indexed, so I thought manual submission was a dead end. However, it just so happens that the above URL also had its Page Title and H1 updated to something more specific and less duplicative, so I am now running a test to see whether that's the problem with these pages not indexing. Will update this soon.
Any suggestions? Thanks!
-
Significantly changing the Page Title and H1 is working. Second page now indexing after not indexing for some time. Probably shoulda thought of that a long time ago but that noindex tag sidetracked me!
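For anyone who wants to catch this proactively, here is a rough sketch (all URLs and titles below are hypothetical, and this is my own illustration rather than anything from the thread) of the kind of duplication check that would have flagged the issue: group pages whose Page Title/H1 pair repeats, since many near-identical titles across product pages can make them look duplicative to a crawler.

```python
# Duplication-check sketch: given each page's title and H1, group pages
# whose normalized title/H1 pair repeats across the site.
from collections import defaultdict


def find_duplicate_titles(pages):
    """pages: dict mapping URL -> (title, h1).
    Returns only the groups of 2+ URLs sharing a title/H1 pair."""
    groups = defaultdict(list)
    for url, (title, h1) in pages.items():
        key = (title.strip().lower(), h1.strip().lower())
        groups[key].append(url)
    return {k: v for k, v in groups.items() if len(v) > 1}


# Example with hypothetical pages:
pages = {
    "/atlanta/race-a-ferrari": ("Race a Ferrari", "Race a Ferrari"),
    "/miami/race-a-ferrari": ("Race a Ferrari", "Race a Ferrari"),
    "/san-diego/food-tour": ("Gaslamp Quarter Food Tour", "Food Tour"),
}
dupes = find_duplicate_titles(pages)
# The two "Race a Ferrari" pages share a title/H1 pair and get flagged.
```

In practice the titles and H1s would come from a crawl (the parser from a fetch script, or any crawler export) rather than a hand-built dict.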
-
It's hard to say, and I admit that I would have also expected them to reindex the pages by now.
A while back I was working with a client who had accidentally turned on noindex, nofollow site-wide in their WordPress SEO by Yoast plugin. I didn't catch it for a week, and after I turned it off it took an additional 3 weeks before a single page of the site was reindexed. Granted, this was a low-ranking site that probably wasn't high on Google's priority list, but it did take much longer than I had hoped to recover.
Unfortunately, I think you just have to wait it out. Keep doing what you're doing: creating new content, etc. Maybe if you build a new link to the page, Google will recrawl it sooner?
-
Good point. But why, then, would they remain unindexed a month after manual submission?
-
If the page was set to "noindex" for a long time, Google may have flagged the page as such and chosen to skip over it when it was crawling your site.
-
Also, indexation has historically happened very quickly on this site (less than 24 hours), so I think something else is afoot here. And it has been about 6 weeks now, which doesn't make sense for a site with this level of domain authority.
-
Hey Bradley,
Thanks for the response. Yes, I had manually fetched a few of these pages about a month back and that didn't change their indexation, so I thought it was a dead end. However, one I tried again this morning suddenly indexed, and it happened to also have had its Page Title and H1 tag changed to be significantly more unique than before, so I am wondering if that is the problem. I am currently running a test with another page that I had submitted a month ago without updating its Page Title/H1; I have now resubmitted it with those elements changed.
We'll see if that does the trick.
Will let you know.
-
You can ask Google to crawl your page: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1352276
Ask Google to crawl a page or site:
- On the Webmaster Tools Home page, click the site you want.
- On the Dashboard, under Health, click Fetch as Google.
- In the text box, type the path to the page you want to check.
- In the dropdown list, select Web. (You can select another type of page, but currently we only accept submissions for our Web Search index.)
- Click Fetch. Google will fetch the URL you requested. It may take up to 10 or 15 minutes for Fetch status to be updated.
- Once you see a Fetch status of "Successful", click Submit to Index, and then click one of the following:
  - To submit the individual URL to Google's index, select URL and click Submit. You can submit up to 500 URLs a week in this way.
  - To submit the URL and all pages linked from it, click URL and all linked pages. You can submit up to 10 of these requests a month.
It's probably just Google slowly making their way around to re-crawling these pages. I would fetch the page, and just wait a little while longer.