"Duplicate without user-selected canonical" - impact on SERPs
-
Hello, we are facing some issues on our project and we would like to get some advice.
Scenario
We run several websites (www.brandName.com, www.brandName.be, www.brandName.ch, etc.), all in French. All sites have nearly the same content and structure; only minor text differs (some headings, and phone numbers because of the different countries). There are many good-quality pages, but again, they are the same across all domains.
Goal
We want the local domains (be, ch, fr, etc.) to appear in SERPs and to comply with Google's policy on local language variants and/or canonical links.
Current solution
Currently we don't use canonicals; instead we use rel="alternate" hreflang annotations with an x-default:
<link rel="alternate" hreflang="fr-BE" href="https://www.brandName.be/" />
<link rel="alternate" hreflang="fr-CA" href="https://www.brandName.ca/" />
<link rel="alternate" hreflang="fr-CH" href="https://www.brandName.ch/" />
<link rel="alternate" hreflang="fr-FR" href="https://www.brandName.fr/" />
<link rel="alternate" hreflang="fr-LU" href="https://www.brandName.lu/" />
<link rel="alternate" hreflang="x-default" href="https://www.brandName.com/" />
Issue
After Googlebot crawled the websites, we see a lot of "Duplicate without user-selected canonical" in the Coverage/Excluded report (Google Search Console) for most domains. When we inspect some of those URLs, we can see that Google has decided on a canonical URL itself, for example:
User-declared canonical: None
Google-selected canonical: (the same page, but on a different domain)
The strange thing is that even those URLs are on Google and can be found in SERPs.
Obviously Google doesn't know what to make of it. We noticed that many websites in the same scenario use a self-referencing canonical approach, which is not really "kosher" - we are afraid that if we use the same approach we could get penalized by Google.
Question: What do you suggest to fix the “Duplicate without user-selected canonical” in our scenario?
Any suggestions/ideas appreciated, thanks. Regards.
-
The "Duplicate without user-selected canonical" status refers to situations where Google finds multiple identical or very similar pages, but no canonical tag has been explicitly set to indicate which version search engines should treat as the preferred or original one.
The impact of this issue on search engine results pages (SERPs) can be negative for several reasons:
Keyword Dilution: When search engines encounter multiple versions of the same or similar content, they may have a hard time determining which page to rank for a particular keyword. This can lead to keyword dilution, where the authority and relevance of the content are spread across multiple pages instead of being concentrated on a single page.
Page Selection Uncertainty: Without a canonical tag to guide search engines, they may choose to index and display a version of the page that is not the most relevant or valuable to users. This can result in users landing on less optimal pages from their search queries.
Ranking Competition: Duplicate content can cause internal competition between your own pages for rankings. Instead of consolidating ranking signals onto one page, they get divided among duplicates, potentially leading to lower overall rankings for all versions.
Crawling and Indexing Issues: Search engine bots may spend more time crawling and indexing duplicate content, which could lead to inefficient use of their resources. This might affect how often your new or updated content gets indexed.
To address the "Duplicate without user-selected canonical" issue and mitigate its impact on SERPs:
Implement Canonical Tags: Set up canonical tags on duplicate or similar pages to indicate the preferred version. This guides search engines to consolidate ranking signals and direct traffic to the correct page.
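As an illustration of this step (the URLs here are placeholders based on the domain names in the question, not the poster's actual pages), a cross-domain canonical on a duplicate page might look like:

```html
<!-- Placed in the <head> of the duplicate page, e.g. on brandName.be.
     The href (a hypothetical preferred version) tells Google which URL
     should receive the consolidated ranking signals. -->
<link rel="canonical" href="https://www.brandName.com/some-page" />
```

Note that rel="canonical" is a hint, not a directive: Google may still choose a different canonical if the two pages differ substantially.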
301 Redirects: If possible, redirect duplicate pages to a single, canonical version using 301 (permanent) redirects. This not only consolidates ranking signals but also ensures that users land on the most relevant content.
Consolidate Content: Consider merging similar pages into a single, comprehensive page. This helps avoid duplication issues and improves the overall user experience.
Use Noindex Tags: If some duplicate pages are not crucial for SEO or user experience, you can add a noindex meta tag to prevent search engines from indexing those pages.
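A minimal sketch of the meta robots approach; for non-HTML resources such as PDFs, the equivalent `X-Robots-Tag: noindex` HTTP response header can be used instead:

```html
<!-- In the <head> of a duplicate page you don't want indexed -->
<meta name="robots" content="noindex" />
```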
Monitor and Update: Regularly audit your website for duplicate content and ensure that new content is properly canonicalized to prevent future occurrences.
By addressing the "Duplicate without user-selected canonical" issue, you can help improve the clarity and accuracy of how your content appears in SERPs, potentially leading to better rankings and a more effective SEO strategy.
-
- Even if this error occurs, it doesn't mean Google ignores the pages; it can still index them, and in our case they appear in SERPs.
- The duplicate pages carry value in the sense that there is a slight alteration for the local market (contact info, different pricing, etc.). So about 90% of each page is the same across the national domains, and only a small part differs.
-
@alex_pisa
The error "Duplicate without user-selected canonical" indicates that Google found duplicate URLs that are not canonicalized to a preferred version. Google didn't index these duplicate URLs and assigned a canonical version on its own.
How to fix this issue
Should these pages even exist? If the answer to this is no, simply remove these pages and return a HTTP status code 410.
If these pages have a purpose, then ask yourself whether they carry any value:
-
If yes, then canonicalize them to the preferred version of the URL. Need some inspiration for where to canonicalize to? See which URL Google finds most relevant using the URL Inspection tool. If Google is listing PDF files for your site, canonicalize them through the HTTP header.
-
If these pages don't carry any value, then make sure to apply the noindex directive through the meta robots tag or the X-Robots-Tag HTTP header.
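For the multi-domain scenario in the original question, one common pattern (sketched here with the thread's placeholder domains; this is an illustration, not a guaranteed fix) is to keep the hreflang set and add a self-referencing canonical on each country site, so every version declares itself as its own preferred URL:

```html
<!-- Example <head> snippet for https://www.brandName.be/ :
     a self-referencing canonical plus the hreflang set from the question.
     Each other country domain would carry its own self-referencing
     canonical with the same hreflang annotations. -->
<link rel="canonical" href="https://www.brandName.be/" />
<link rel="alternate" hreflang="fr-BE" href="https://www.brandName.be/" />
<link rel="alternate" hreflang="fr-CA" href="https://www.brandName.ca/" />
<link rel="alternate" hreflang="fr-CH" href="https://www.brandName.ch/" />
<link rel="alternate" hreflang="fr-FR" href="https://www.brandName.fr/" />
<link rel="alternate" hreflang="fr-LU" href="https://www.brandName.lu/" />
<link rel="alternate" hreflang="x-default" href="https://www.brandName.com/" />
```

Because the canonical is only a hint, Google may still collapse near-identical pages onto one domain; the more the local content genuinely differs, the more likely each country version is kept.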
-