"Duplicate without user-selected canonical” - impact to SERPs
-
Hello, we are facing some issues on our project and we would like to get some advice.
Scenario
We run several websites (www.brandName.com, www.brandName.be, www.brandName.ch, etc.), all in French. All sites have nearly the same content and structure; only minor text differs (some headings, and phone numbers that vary by country). There are many good-quality pages, but again they are the same across all domains.

Goal
We want the local domains (be, ch, fr, etc.) to appear in SERPs and also to comply with Google's policies on local language variants and/or canonical links.

Current solution
Currently we don't use canonicals; instead we use rel="alternate" hreflang annotations with an x-default:

<link rel="alternate" hreflang="fr-BE" href="https://www.brandName.be/" />
<link rel="alternate" hreflang="fr-CA" href="https://www.brandName.ca/" />
<link rel="alternate" hreflang="fr-CH" href="https://www.brandName.ch/" />
<link rel="alternate" hreflang="fr-FR" href="https://www.brandName.fr/" />
<link rel="alternate" hreflang="fr-LU" href="https://www.brandName.lu/" />
<link rel="alternate" hreflang="x-default" href="https://www.brandName.com/" />
Issue
After Googlebot crawled the websites, we see a lot of "Duplicate without user-selected canonical" entries in the Coverage/Excluded report (Google Search Console) for most domains. When we inspect some of those URLs, we can see Google has decided the canonical points to (for example):

User-declared canonical: None
Google-selected canonical: …same page, but on a different domain

The strange thing is that even those URLs are on Google and can be found in SERPs.
Obviously Google doesn't know what to make of it. We noticed that many websites in the same scenario use a self-referencing canonical approach, which doesn't feel entirely "kosher" - we are afraid that if we use the same approach we could get penalized by Google.
Question: What do you suggest to fix the “Duplicate without user-selected canonical” in our scenario?
Any suggestions/ideas appreciated, thanks. Regards.
-
The issue of "Duplicate without user-selected canonical" refers to situations where there are multiple identical or very similar pages on a website, but a canonical tag has not been explicitly set to indicate which version should be considered the preferred or original version by search engines.
The impact of this issue on search engine results pages (SERPs) can be negative for several reasons:
Keyword Dilution: When search engines encounter multiple versions of the same or similar content, they might have a hard time determining which page to rank for a particular keyword. This can lead to keyword dilution, where the authority and relevance of the content are spread across multiple pages instead of being concentrated on a single page.
Page Selection Uncertainty: Without a canonical tag to guide search engines, they may choose to index and display a version of the page that is not the most relevant or valuable to users. This can result in users landing on less optimal pages from their search queries.
Ranking Competition: Duplicate content can cause internal competition between your own pages for rankings. Instead of consolidating ranking signals onto one page, they get divided among duplicates, potentially leading to lower overall rankings for all versions.
Crawling and Indexing Issues: Search engine bots may spend more time crawling and indexing duplicate content, which could lead to inefficient use of their resources. This might affect how often your new or updated content gets indexed.
To address the "Duplicate without user-selected canonical" issue and mitigate its impact on SERPs:
Implement Canonical Tags: Set up canonical tags on duplicate or similar pages to indicate the preferred version. This guides search engines to consolidate ranking signals and direct traffic to the correct page.
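As a minimal sketch (the domain and path are placeholders), a page declares its preferred version in its <head> like this. Worth noting for the cross-country scenario in this thread: Google's hreflang documentation pairs hreflang annotations with a self-referencing canonical on each local page, rather than canonicalizing every country's page to a single domain, since canonicalizing away a page conflicts with asking Google to index it for its country.

```html
<!-- Sketch: on https://www.brandName.be/some-page, a self-referencing
     canonical marks this URL as the preferred version for the .be site.
     A duplicate elsewhere would instead point its canonical at this URL. -->
<link rel="canonical" href="https://www.brandName.be/some-page" />
```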
301 Redirects: If possible, redirect duplicate pages to a single, canonical version using 301 redirects. This not only consolidates ranking signals but also ensures that users are directed to the most relevant content.
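A sketch of such a redirect in an Apache .htaccess file (both paths are placeholders; this uses the stock mod_alias directive):

```apacheconf
# Sketch: permanently redirect a duplicate URL to the canonical version
# so ranking signals consolidate on one page. Paths are placeholders.
Redirect 301 /old-duplicate-page /preferred-page
```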
Consolidate Content: Consider merging similar pages into a single, comprehensive page. This helps avoid duplication issues and improves the overall user experience.
Use Noindex Tags: If some duplicate pages are not crucial for SEO or user experience, you can add a noindex meta tag to prevent search engines from indexing those pages.
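A minimal sketch of the tag, placed in the <head> of the page that should stay out of the index (Google must still be able to crawl the page to see it, so don't also block it in robots.txt):

```html
<!-- Sketch: keep a low-value duplicate page out of the index while
     still allowing crawlers to fetch it and read this directive -->
<meta name="robots" content="noindex" />
```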
Monitor and Update: Regularly audit your website for duplicate content and ensure that new content is properly canonicalized to prevent future occurrences.
By addressing the "Duplicate without user-selected canonical" issue, you can help improve the clarity and accuracy of how your content appears in SERPs, potentially leading to better rankings and a more effective SEO strategy.
-
- Even if this error occurs, it doesn't mean Google ignores the pages - it can index them, and in our case they do appear in SERPs.
- The duplicate pages do carry value, in the sense that each has slight alterations for its local market - contact info, different pricing, etc. So 90% of a page is the same across the national domains, and only a small part differs.
-
@alex_pisa
The error "Duplicate without user-selected canonical” indicates that Google found duplicate URLs that are not canonicalized to a preferred version. Google didn't index these duplicate URLs and assigned a canonical version on its own.How to fix this issue
Should these pages even exist? If the answer is no, simply remove these pages and return an HTTP status code 410 (Gone).
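If the server is Apache, a sketch of returning 410 via .htaccess (the path is a placeholder; this uses the stock mod_alias directive):

```apacheconf
# Sketch: respond with 410 Gone for a deliberately removed page,
# a stronger removal signal than a plain 404. Path is a placeholder.
Redirect gone /removed-page
```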
If these pages have a purpose, then ask yourself whether they carry any value:
If yes, then canonicalize them to the preferred version of the URL. Need some inspiration on where to canonicalize to? See which URL Google finds most relevant using the URL Inspection tool. If Google is listing PDF files for your site, canonicalize them through the HTTP header.
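For the PDF case: a PDF has no <head> to put a link tag in, so the canonical is declared in the Link HTTP response header instead. A sketch for Apache with mod_headers enabled (the filename and URL are placeholders):

```apacheconf
# Sketch: declare a canonical for a PDF via the Link response header,
# since non-HTML files cannot carry a <link rel="canonical"> tag.
# Requires mod_headers; filename and URL are placeholders.
<Files "brochure.pdf">
  Header set Link '<https://www.example.com/brochure.pdf>; rel="canonical"'
</Files>
```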
-
If these pages don't carry any value, then make sure to apply a noindex directive, through either the meta robots tag or the X-Robots-Tag HTTP header.
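A sketch of the X-Robots-Tag variant on Apache with mod_headers enabled (the file pattern is a placeholder); the header form is handy for non-HTML files, where a meta robots tag isn't possible:

```apacheconf
# Sketch: send noindex in the X-Robots-Tag response header for file
# types that cannot carry a meta robots tag. Requires mod_headers;
# the extension pattern is a placeholder.
<FilesMatch "\.(pdf|docx?)$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```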
-