Page missing from Google index
-
Hi all,
One of our most important pages seems to be missing from the Google index.
A number of our collections pages (e.g., http://perfectlinens.com/collections/size-king) are thin, so we've included a canonical reference in all of them to the main collection page (http://perfectlinens.com/collections/all).
However, I don't see the main collection page in any Google search result. When I search using "info:http://perfectlinens.com/collections/all", the page displayed is our homepage. Why is this happening?
The main collection page has a rel=canonical reference to itself (auto-generated by Shopify, so I can't control that).
Thanks!
-
In general, for link value to transfer either through 301s or canonicals, the content of the pages needs to be nearly identical. See Cyrus' post for more. Also, canonicals are not always followed by Google; they are just a "hint", so it's unlikely you'll pass much value that way.
-
Dan, thanks for that response! I wasn't aware that our homepage had a canonical reference to our category page. On closer examination, I found that our category page in turn had a canonical reference back to our homepage. Messed up!
I've fixed that and have now resubmitted the page to Google via Search Console. Hopefully that will fix our issues.
Just one last question - why do you prefer noindex over canonical? If I had some backlinks to a thin category page (e.g., /collections/twin), wouldn't it be better to 'transfer' those benefits to our main category page (/collections/all) using canonical references?
Thanks again
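For what it's worth, a loop like the one described above (homepage canonicals to category page, category page canonicals back to homepage) is easy to catch programmatically once you have a map of each page's canonical target. A minimal sketch, using made-up paths purely for illustration:

```python
# Sketch: detect canonical loops/chains given a map of page URL -> canonical
# target. All URLs below are placeholders, not real site data.

def follow_canonical(canonicals, start, max_hops=10):
    """Follow rel=canonical targets from `start`; return (path, is_loop)."""
    path = [start]
    seen = {start}
    current = start
    for _ in range(max_hops):
        target = canonicals.get(current)
        if target is None or target == current:  # self-canonical: chain ends cleanly
            return path, False
        if target in seen:                       # revisited a URL: circular canonical
            path.append(target)
            return path, True
        path.append(target)
        seen.add(target)
        current = target
    return path, False                           # gave up after max_hops

# The circular setup described above, with hypothetical paths:
canonicals = {
    "/collections/all": "/",          # category page pointed at homepage
    "/": "/collections/all",          # homepage pointed back at category page
}
path, is_loop = follow_canonical(canonicals, "/collections/all")
print(path, is_loop)
```

Running this over a crawl of the site would flag any other page pairs stuck in the same kind of loop.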
-
Hello
Ahh ok, missed that detail.
I created a quick video for you ---> http://screencast.com/t/IKkEikyr
I think this is a bit of a complicated situation which will be tough to diagnose and fix in a Q&A thread. I would suggest cataloging the different settings of your site in a spreadsheet, as I show in the video.
Essentially, canonical settings are just "suggestions" for Google, not "directives", so Google will ignore them if it thinks they were set in error.
I would start by clearly defining the end result you want (what pages should be crawled, and what should be indexed) and work backwards from there to apply the right settings.
I would probably try to use noindex, robots.txt, etc. before resorting to a canonical.
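To sketch the spreadsheet idea: a small stdlib-only script can pull the rel=canonical and meta robots values out of each page's HTML, giving you one row per URL. This is only an illustration; the sample HTML below is made up, and the fetch-and-CSV step is left as a comment:

```python
# Sketch: extract the indexing-related settings from a page's HTML so each
# URL's canonical and meta robots values can be cataloged in a spreadsheet.

from html.parser import HTMLParser

class SettingsParser(HTMLParser):
    """Collect the rel=canonical href and meta robots content from a page."""
    def __init__(self):
        super().__init__()
        self.canonical = None
        self.robots = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)  # attr names arrive lowercased
        if tag == "link" and a.get("rel", "").lower() == "canonical":
            self.canonical = a.get("href")
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.robots = a.get("content")

def extract_settings(html):
    p = SettingsParser()
    p.feed(html)
    return {"canonical": p.canonical, "robots": p.robots}

# Made-up sample head section for demonstration:
sample = (
    '<head>'
    '<link rel="canonical" href="http://perfectlinens.com/collections/all">'
    '<meta name="robots" content="noindex, follow">'
    '</head>'
)
row = extract_settings(sample)
print(row)

# To build the actual spreadsheet, fetch each URL with urllib.request and
# append {"url": url, **extract_settings(body)} rows via csv.DictWriter.
```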
-
Hi Dan,
Thanks for your response. The page that you see when you type in our category page is, in fact, our home page. E.g., when I run info: or cache: on page A, the result shown is page B. Why is this happening if page A does not have a canonical reference or a redirect of any kind to page B?
Thanks.
-
FYI - to check if a page is indexed, try typing site:http://perfectlinens.com/collections/all into the Google search bar, or cache:http://perfectlinens.com/collections/all into your browser's address bar.
-
Hi There!
That page is in fact indexed and cached for me! Can you check again? And let me know?
-Dan
-
Patrick, thank you for your response.
1. The reason we're using canonical references on those pages is that they are almost identical copies of each other. In the future, we'll create unique content for them so they can stand on their own.
2. But the original question remains - why is the main page (http://perfectlinens.com/collections/all) missing from the Google index? It's been on the site for a long time, it's one of our most important pages, it's in our sitemap, and robots.txt is not blocking it.
Thank you for your other tips though - I appreciate them, and will put them on our to-do list.
-
Hi there
First, those pages (e.g., size-king) should have self-referencing canonicals, not canonicals pointing back to the "all" page. This is a potentially bad customer experience, and you could be missing out on a LOT of organic traffic if some of those pages are targeting high-volume, low-competition keywords / variations.
I would work on expanding the content on those product pages and implementing Schema markup. You have a lot of opportunities to implement these tags, which will also help your search visibility.
Lastly, depending on when you implemented these canonical tags and your sitemap, Google and other search engines could still be indexing those pages. When did you upload your sitemap / implement the canonical tags? Also, have you submitted your sitemaps to Google and Bing? I recommend you do so if you haven't!
And always make sure your robots.txt and meta tags aren't inadvertently blocking key pages from search! This is an often overlooked area in SEO!
But more than anything - work on the content for those product pages, give them self-referencing canonical tags, and add schema. It will make a world of difference!
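For illustration only (every product detail below is invented), a minimal Product JSON-LD block of the kind mentioned above can be generated like this; the resulting string goes inside a <script type="application/ld+json"> tag in the page head:

```python
# Sketch: build a minimal schema.org Product JSON-LD block.
# All product details are made-up placeholders, not real site data.

import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "King Size Sheet Set",                      # hypothetical product
    "url": "http://perfectlinens.com/collections/size-king",
    "offers": {
        "@type": "Offer",
        "priceCurrency": "USD",
        "price": "129.00",                              # hypothetical price
        "availability": "https://schema.org/InStock",
    },
}

snippet = json.dumps(product, indent=2)
print(snippet)
```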
Hope this helps! Good luck!
Patrick