Page missing from Google index
-
Hi all,
One of our most important pages seems to be missing from the Google index.
A number of our collection pages (e.g., http://perfectlinens.com/collections/size-king) are thin, so we've added a canonical reference on each of them pointing to the main collection page (http://perfectlinens.com/collections/all).
However, I don't see the main collection page in any Google search result. When I search using "info:http://perfectlinens.com/collections/all", the page displayed is our homepage. Why is this happening?
The main collection page has a rel=canonical reference to itself (auto-generated by Shopify, so I can't control it).
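For reference, here is roughly what the tags look like in our templates (paraphrased, not copied verbatim):

```html
<!-- On each thin collection page, e.g. /collections/size-king -->
<link rel="canonical" href="http://perfectlinens.com/collections/all" />

<!-- On the main collection page itself (auto-generated by Shopify) -->
<link rel="canonical" href="http://perfectlinens.com/collections/all" />
```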
Thanks!
-
In general, for link value to transfer through either 301s or canonicals, the content of the pages needs to be nearly identical. See Cyrus' post for more. And canonicals are not always followed by Google; they are just a "hint," so it's unlikely you'll pass much value that way.
-
Dan, thanks for that response! I wasn't aware that our homepage had a canonical reference to our category page. On closer examination, I found that our category page in turn had a canonical reference to our homepage. Messed up!
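In other words, the two pages were pointing at each other, roughly like this:

```html
<!-- Homepage <head>, before the fix -->
<link rel="canonical" href="http://perfectlinens.com/collections/all" />

<!-- Category page <head>, before the fix -->
<link rel="canonical" href="http://perfectlinens.com/" />
```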
I've fixed that and have now resubmitted the page to Google using Search Console. Hopefully that will fix our issues.
Just one last question - why do you prefer noindex over canonical? If I had some backlinks to a thin category page (e.g., /collections/twin), wouldn't it be better to 'transfer' those benefits to our main category page (/collections/all) using canonical references?
Thanks again
-
Hello
Ahh ok, missed that detail.
I created a quick video for you ---> http://screencast.com/t/IKkEikyr
I think this is a bit of a complicated situation, which will be tough to diagnose and fix in a Q&A thread. I would suggest cataloging the different settings of your site in a spreadsheet, like I show in the video.
Essentially, canonical settings are just "suggestions" for Google, not "directives," so Google will ignore them if it thinks they have been set in error.
I would start by clearly defining the end result you want (what pages should be crawled, and what should be indexed) and work backwards from there to apply the right settings.
I would probably try to use noindex, robots.txt, etc., before resorting to a canonical.
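For example, if you decide a thin page should stay out of the index but still be crawlable (so the links on it are followed), a meta robots tag along these lines would do it:

```html
<!-- In the <head> of the thin page: keep the page out of the
     index, but let crawlers follow the links on it -->
<meta name="robots" content="noindex, follow" />
```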
-
Hi Dan,
Thanks for your response. The page that you see when you look up our category page is, in fact, our home page. E.g., when I do info:page A or cache:page A, the result is for page B. Why is this happening if page A does not have a canonical reference or a redirect of any kind to page B?
Thanks.
-
FYI - to check if a page is indexed, try typing site:http://perfectlinens.com/collections/all into the Google search bar, or cache:http://perfectlinens.com/collections/all into your browser's address bar.
-
Hi There!
That page is, in fact, indexed and cached for me! Can you check again and let me know?
-Dan
-
Patrick, thank you for your response.
1. The reason we're using canonical references on those pages is that they are almost identical copies of each other. In the future, we'll create unique content for them so they can stand on their own.
2. But the original question remains - why is the main page (http://perfectlinens.com/collections/all) missing from the Google index? It's been on the site for a long time, it's one of our most important pages, it's in our sitemap, and robots.txt is not blocking it.
Thank you for your other tips though - I appreciate them, and will put them on our to-do list.
-
Hi there
First, those pages (size-king, etc.) should be canonicalized to themselves, not back to the "all" page. The current setup could make for a bad customer experience, and you could be missing out on a LOT of organic traffic if some of those product pages are targeting high-volume, low-competition keywords / variations.
I would work on expanding the content on those product pages and implementing Schema markup. You have a lot of opportunities to implement these tags, which will also help your search visibility.
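As a rough sketch, JSON-LD Product markup looks something like this (the name, image, and price below are placeholder values, not your real data):

```html
<!-- Minimal Product schema sketch; every value here is a placeholder -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "King Size Sheet Set",
  "image": "http://perfectlinens.com/images/king-sheet-set.jpg",
  "offers": {
    "@type": "Offer",
    "price": "99.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```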
Lastly, depending on when you implemented these canonical tags and your sitemap, Google and other search engines could still be indexing them. When did you upload your sitemap / implement the canonical tags? Also, have you submitted your sitemap to Google and Bing? I recommend you do so if you haven't!
And always make sure your robots.txt and meta tags aren't inadvertently blocking key pages from search! This is an often-overlooked area in SEO!
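For example, one stray line in robots.txt can do a lot of damage (a hypothetical mistake, not your actual file):

```
# Hypothetical robots.txt mistake: this single Disallow line would
# stop search engines from crawling ALL of your collection pages
User-agent: *
Disallow: /collections/
```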
But more than anything - work on the content for those product pages, canonical tag them to themselves, and add Schema. It will make a world of difference!
Hope this helps! Good luck!
Patrick