How does Googlebot see two identical rel canonicals?
-
Hi,
I have a website where all the original URLs have a rel canonical back to themselves. This is a kind of fail-safe: if a parameter is appended, the URL with the parameter gets a canonical back to the original URL.
For example, this URL: https://www.example.com/something/page/1/ has this canonical: https://www.example.com/something/page/1/ (the same URL, since it is an original URL).
This URL https://www.example.com/something/page/1/?parameter has this canonical https://www.example.com/something/page/1/ because, as I said before, parameterized URLs have a rel canonical back to their original URLs.
SO: https://www.example.com/something/page/1/?parameter and https://www.example.com/something/page/1/ both have the same canonical, which is https://www.example.com/something/page/1/
I'm telling you all this because when Rogerbot crawled my website, it reported duplicates. This happened because it read the canonical (https://www.example.com/something/page/1/) of the original URL (https://www.example.com/something/page/1/) and the canonical (https://www.example.com/something/page/1/) of the URL with the parameter (https://www.example.com/something/page/1/?parameter) and saw that both were pointing to the same canonical (https://www.example.com/something/page/1/).
So, I would like to know whether Googlebot treats canonicals the same way, because if it does, then I'm full of duplicates.
Thanks.
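The fail-safe setup described above can be sketched in Python; the URLs are the illustrative ones from the question, and the drop-the-query-string rule is an assumption about how the site generates its canonicals:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_for(url: str) -> str:
    """Derive the canonical URL by dropping the query string and fragment."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

clean = "https://www.example.com/something/page/1/"
with_param = "https://www.example.com/something/page/1/?parameter"

# Both the original URL and the parameterized URL resolve to the same canonical,
# which is the "fail safe" behaviour described above.
print(canonical_for(clean))       # https://www.example.com/something/page/1/
print(canonical_for(with_param))  # https://www.example.com/something/page/1/
```

If a crawler applies the same logic, the clean URL and the parameterized URL collapse into one canonical target, which is the intended outcome rather than a duplicate problem.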
-
It's not about the canonical, it's about crawl optimization. I know that the canonical URL saves the situation here; I am working in a fail-safe mode with respect to duplicates, and I want to believe that the canonical URL implementation on my website is better than good.
I just don't want bots spending time on pages that have nothing new to say and are canonicalized to the pages that hold the important content. That is why I configured the bot not to crawl those parameters in the URL Parameters tab in GWT, and, in time, to even drop those results.
-
I would think that you're going a little over the top with what is essentially the job of a canonical tag. You don't need to block robots from those pages, as the canonical tag will be telling robots that each one is a duplicate version. If the URLs have already been indexed, it will take time for them to drop off.
-
All the parameters are configured to "No URLs" in the Google Webmaster Tools URL Parameters tab. Check the image: http://prntscr.com/e9fs91
It's better to do it straight from Webmaster Tools than to disallow the parameters in robots.txt.
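For comparison, the robots.txt route mentioned here would look something like the fragment below; the pattern is illustrative, relying on the wildcard support Googlebot honours in Disallow rules:

```text
User-agent: *
# Block any URL that carries a query string (e.g. /something/page/1/?parameter)
Disallow: /*?
```

Unlike the URL Parameters setting, a robots.txt block also prevents the crawler from ever seeing the canonical tag on those pages, which is one reason the poster prefers the Webmaster Tools approach.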
Though I have a problem with that, because Google is indexing these parameters even when they are configured to "No URLs". See my post here: https://moz.com/community/q/web-master-tools-url-parameters
-
Hello,
Rogerbot struggles a bit with canonicals, last I checked. You've got the right setup: you want to stop parameters, and it's especially helpful for stopping people ranking pages on your site like /?this-site-sucks! Always remember that Rogerbot, like any other service, is only a guide to help you, not a 100% authoritative resource that will make you rank, so use it like a tool, not an authority.
TL;DR: your setup is all OK!
Related Questions
-
Duplicate content homepage - Google canonical 'N/A'?
Hi, I redesigned a client's website and launched it two weeks ago. Since then, I have 301 redirected all old URLs in Google's search results to their counterparts on the new site. However, none of the new pages are appearing in the search results, and even the homepage has disappeared. Only old site links are appearing (even though the old website has been taken down), and in GSC it states: "Page is not indexed: Duplicate, Google chose different canonical than user". However, when I try to understand how to fix the issue and see which URL it claims to be a duplicate of, it says: "Google-selected canonical: N/A". It says that the last crawl was only yesterday. How can I possibly fix it without knowing which page it says is the duplicate? Is this something that just takes time, or is it permanent? I would understand if it was just Google taking time to crawl and index the pages, but it seems adamant that it's not going to show any of them at all.
Technical SEO | | goliath910 -
New website's ranking dropped
Hi, I'm working on a brand new website. I didn't even start my link building yet, just added it to local directories. I slowly started getting my rankings onto the 3rd page of Google, then a few weeks ago my rankings fell for all the keywords, so now the website doesn't even rank on the 10th page. It's been like this for a few weeks now. Here's the website screenshot: http://screencast.com/t/wDWk8sxLw Thanks for your help
Technical SEO | | mezozcorp0 -
Rel=Canonical Help
The site in question is www.example.com/example. The client has added a rel=canonical tag to this page that points back at the page itself. In other words, instead of putting the tag on the pages that are not to be canonical and pointing them to this one, they are doing it backwards: each page carries a canonical tag naming its own URL. They have done this with thousands of pages. I know this is incorrect, but my question is, until the issue is resolved, are these tags hurting them at all just by being there?
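One way to audit what those tags actually say is to extract the canonical href and compare it to the page URL. A minimal sketch in Python using the stdlib html.parser (the URL and markup are illustrative):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Records the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if tag == "link" and attr.get("rel") == "canonical" and self.canonical is None:
            self.canonical = attr.get("href")

def is_self_referencing(page_url: str, page_html: str) -> bool:
    """True when the page's canonical tag points back at the page itself."""
    finder = CanonicalFinder()
    finder.feed(page_html)
    return finder.canonical == page_url

page = '<html><head><link rel="canonical" href="https://www.example.com/example"></head></html>'
print(is_self_referencing("https://www.example.com/example", page))  # True
```

Run against a sample of the thousands of affected pages, a check like this would confirm whether every canonical really is self-referencing before deciding how urgent the fix is.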
Technical SEO | | rock220 -
Best Practices for adding Dynamic URLs to an XML Sitemap
Hi Guys, I'm working on an ecommerce website where all the product pages use dynamic URLs (we also have a few static pages, but there is no issue with them). The products are updated on the site every couple of hours (because we sell out or a special offer expires), and as a result I keep seeing heaps of 404 errors in Google Webmaster Tools and am trying to avoid this (if possible). I have already created an XML sitemap for the static pages and am now looking at incorporating the dynamic product pages, but am not sure of the best approach. The URL structure for the products is as follows:
http://www.xyz.com/products/product1-is-really-cool
http://www.xyz.com/products/product2-is-even-cooler
http://www.xyz.com/products/product3-is-the-coolest
Here are 2 approaches I was considering:
1. Just include http://www.xyz.com/products/ within the same sitemap as the static URLs, so spiders have access to the folder the products are in and I don't have to create an automated sitemap for every product, OR
2. Create a separate automated sitemap that updates whenever a product is updated, with the change frequency set to hourly, so spiders always have as close to an up-to-date sitemap as possible when they crawl.
I look forward to hearing your thoughts, opinions, suggestions and/or previous experiences with this. Thanks heaps, LW
Technical SEO | | seekjobs0 -
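For what it's worth, the second approach can be automated with very little code. A minimal sketch in Python using the stdlib xml.etree, with the product URLs from the question and an hourly change frequency (a sketch only; lastmod and priority are omitted):

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls, changefreq="hourly"):
    """Serialize a list of URLs into sitemap XML with the given change frequency."""
    ET.register_namespace("", SITEMAP_NS)
    urlset = ET.Element(f"{{{SITEMAP_NS}}}urlset")
    for url in urls:
        entry = ET.SubElement(urlset, f"{{{SITEMAP_NS}}}url")
        ET.SubElement(entry, f"{{{SITEMAP_NS}}}loc").text = url
        ET.SubElement(entry, f"{{{SITEMAP_NS}}}changefreq").text = changefreq
    return '<?xml version="1.0" encoding="UTF-8"?>\n' + ET.tostring(urlset, encoding="unicode")

products = [
    "http://www.xyz.com/products/product1-is-really-cool",
    "http://www.xyz.com/products/product2-is-even-cooler",
    "http://www.xyz.com/products/product3-is-the-coolest",
]
print(build_sitemap(products))
```

Regenerating this file from the live product list whenever stock changes keeps the sitemap in step with the site, which addresses the 404 concern better than listing only the /products/ folder.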
Intuit's Homestead web developer
I used Intuit's Homestead to develop my website, and when I analyze my site on SEOmoz, I get duplicate page content between the site and the "index". Is this something to worry about, and can I fix it if it is? Thanks. Michael
Technical SEO | | thompsoncpa0 -
Rel=Canonical on a page with 302 redirection existing
Hi SEOMoz! Can I have the rel=canonical tag on a URL page that has a 302 redirection? Does this harm the search engine friendliness of a content page / website? Thanks! Steve
Technical SEO | | sjcbayona-412180 -
Replacing H1s with images
We host a few Japanese sites, and Japanese fonts tend to look a bit scruffy the larger they are. I was wondering if image replacement for the H1 is risky or not? E.g., in short, spiders see: Some header text optimized for SEO. Then in the CSS:
h1 {
    text-indent: -9999px;
}
h1.header_1 {
    background: url(/images/bg_h1.jpg) no-repeat 0 0;
}
We are considering this technique. I thought I should get some advice before potentially jeopardising anything, especially as we are dealing with one of the most important on-page elements. In my opinion any attempt to hide text could be seen as keyword stuffing; is it a case that in moderation it is acceptable? Cheers
Technical SEO | | -Al-0
Rel="canonical" and rewrite
Hi, I'm going to describe a scenario on one of my sites; I was wondering if someone could tell me the correct use of rel="canonical" here. Suppose I have a rewrite rule like this: RewriteRule ^Online-Games /main/index.php So, in the index file, do I set the rel="canonical" to Online-Games or to /main/index.php? Thanks.
Technical SEO | | webtarget0