Google-selected canonical makes no sense
-
Howdy, fellow mozzers,
We have added a canonical tag to this page, https://www.dignitymemorial.com/obituaries/houston-tx/margot-schurig-8715369/share, pointing to https://www.dignitymemorial.com/obituaries/houston-tx/margot-schurig-8715369
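For reference, the tag on the /share page takes the standard form (assuming it sits in the page's head, as rel=canonical tags should):

```html
<head>
  <!-- Declares the main obituary URL as the preferred (canonical) version -->
  <link rel="canonical"
        href="https://www.dignitymemorial.com/obituaries/houston-tx/margot-schurig-8715369">
</head>
```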
When I check the page in Google Search Console, no issues are reported, and Google confirms that it was able to read the canonical URL correctly.
Yet it still chooses the page itself as the canonical, which doesn't make sense to me. (Here is a link to the screenshot: https://dmitrii-regexseo.tinytake.com/tt/MzU0Mjc0M18xMDY2MTc4Ng)
Has anyone dealt with this type of issue, and were you able to resolve it?
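As a side note, when debugging cases like this it helps to confirm exactly what canonical the served HTML declares (Search Console can lag behind). Here is a minimal sketch of a checker using only the Python standard library; `declared_canonical` is a hypothetical helper name, and it only handles the simple single-value `rel="canonical"` case:

```python
from html.parser import HTMLParser


class CanonicalParser(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag seen."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.canonical is None:
            attr_map = dict(attrs)
            # Simplification: assumes rel holds exactly "canonical",
            # not a space-separated list of values.
            if attr_map.get("rel") == "canonical":
                self.canonical = attr_map.get("href")


def declared_canonical(html: str):
    """Return the canonical URL declared in the HTML, or None."""
    parser = CanonicalParser()
    parser.feed(html)
    return parser.canonical
```

Feeding it the fetched source of the /share page would show whether the tag Google sees matches what you deployed.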
-
Thanks for the reply.
Yeah, that makes sense, and adding noindex was my recommendation too. I'm just curious how and why Google decided that our canonical wasn't worth honoring.
-
Oh wow, that's very insensitive of Google! What you have to understand is that most online content exists to sell products and drive revenue; to a large degree, that's the lens through which Google evaluates web pages.
If your page were commercial in nature (which it obviously is not), then Google would be making a semi-logical decision: it tries to skip users past the 'waffle and blurb' straight to the 'action point', where the user performs their only meaningful interaction with the page (in this case, a contact form).
For your site this is entirely inappropriate. To discourage Google from indexing the "/share" (contact form) URL, you could add a meta noindex tag to it, or block it in robots.txt. Robots.txt controls crawling (less relevant here), while meta noindex controls indexation. Note that unlike the canonical tag, which Google treats only as a hint, noindex and robots.txt rules are directives that Google does respect. Don't deploy both at once: if you add the robots.txt block first (stopping Google from crawling the URL), Google will never be able to crawl the page and 'find' the meta noindex directive.
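As a sketch, the two options look like this. The exact Disallow pattern below is an assumption; adjust it to however the share URLs are actually structured on the site, and remember to pick one option, not both:

```html
<!-- Option 1: meta noindex in the <head> of each /share page -->
<meta name="robots" content="noindex">
```

```
# Option 2: robots.txt block for all obituary share pages
User-agent: *
Disallow: /obituaries/*/share
```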
Remember: telling Google not to crawl or index one URL doesn't necessarily mean that your preferred URL will rank in its place.
Your other option is to re-code the site so that the contact form opens in a modal or slide-out panel on the main page. That way the contact form shares the same URL as the obituary itself, so there is only one URL for Google to crawl, index, and rank (it will have no choice).
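A minimal sketch of that approach (hypothetical markup; the real site would need its own form fields, styling, and submit handling). The form lives on the obituary page itself and is simply toggled open, so no separate /share URL exists:

```html
<!-- Button on the obituary page toggles the form instead of linking to /share -->
<button onclick="document.getElementById('share-form').style.display = 'block'">
  Share a memory
</button>

<div id="share-form" style="display: none">
  <!-- Posts back to the same obituary URL -->
  <form action="/obituaries/houston-tx/margot-schurig-8715369" method="post">
    <textarea name="memory"></textarea>
    <button type="submit">Send</button>
  </form>
</div>
```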
Sorry that you have encountered such a difficult issue, hope my advice helps somewhat