How does Googlebot see two identical rel canonicals?
-
Hi,
I have a website where all the original URLs have a rel canonical pointing back to themselves. This is a kind of fail-safe: if a parameter appears, the URL with the parameter has a canonical back to the original URL.
For example, this URL: https://www.example.com/something/page/1/ has this canonical: https://www.example.com/something/page/1/ which is the same, since it's an original URL.
This URL https://www.example.com/something/page/1/?parameter has this canonical https://www.example.com/something/page/1/ because, as I said before, parameterized URLs have a rel canonical back to their original URLs.
So: https://www.example.com/something/page/1/?parameter and https://www.example.com/something/page/1/ both have the same canonical, which is https://www.example.com/something/page/1/.
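In markup terms, a minimal sketch of the setup described above (the example.com URLs are placeholders from the question) would be:

```html
<!-- In the <head> of the original URL: https://www.example.com/something/page/1/ -->
<!-- Self-referencing canonical: the page points at itself -->
<link rel="canonical" href="https://www.example.com/something/page/1/" />

<!-- In the <head> of the parameterized URL: https://www.example.com/something/page/1/?parameter -->
<!-- Canonical points back at the clean, parameter-free original -->
<link rel="canonical" href="https://www.example.com/something/page/1/" />
```

Both tags declare the same preferred URL, and Google's documentation treats self-referencing canonicals on the original pages as an acceptable pattern.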
I'm telling you all this because when Rogerbot crawled my website, it reported duplicates. This happened because it read the canonical (https://www.example.com/something/page/1/) of the original URL (https://www.example.com/something/page/1/) and the canonical (https://www.example.com/something/page/1/) of the URL with the parameter (https://www.example.com/something/page/1/?parameter) and saw that both point to the same canonical (https://www.example.com/something/page/1/).
So I would like to know whether Googlebot treats canonicals the same way, because if it does, then I'm full of duplicates.
Thanks.
-
It's not about the canonical, it's about crawl optimization. I know the canonical URL saves the situation here; I'm working in a fail-safe mode with respect to duplicates, and I'd like to think the canonical URL implementation on my website is more than solid.
I just don't want bots spending time on pages that have nothing of substance to say and are canonicalized to the pages that hold the important content. That is why I configured the bot not to crawl those parameters in the URL Parameters tab in GWT, and hopefully, over time, to drop those results altogether.
-
I would think you're going a little over the top with what is essentially the job of a canonical tag. You don't need to block robots from the pages, as the canonical tag will be telling robots that each is a duplicate version. If the URLs have already been indexed, it will take time for them to drop off.
-
All the parameters are set to 'No URLs' in the Google Webmaster Tools URL Parameters tab. Check the image: http://prntscr.com/e9fs91
It's a better setting to do this straight from Webmaster Tools than to disallow the parameters in robots.txt.
Though I do have a problem with that, because Google is indexing these parameters even when they're set to 'No URLs'. Check my post here: https://moz.com/community/q/web-master-tools-url-parameters
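For comparison, the robots.txt approach mentioned above would look roughly like this (a sketch only; `parameter` stands in for the real query parameter name):

```
# Hypothetical rule: block crawling of any URL containing the example query parameter
User-agent: *
Disallow: /*?parameter
```

Note that a URL disallowed in robots.txt can still end up indexed if other pages link to it, which is one reason the canonical-tag route is usually preferred over blocking.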
-
Hello,
Rogerbot struggled a bit with canonicals last I checked. You have the right setup: you want to stop parameters, and it's especially helpful for stopping people ranking pages on your site like /?this-site-sucks! Always remember that Rogerbot, or any other service, is only a guide to help you, not a 100% authoritative resource that will make you rank, so use them as a tool, not an authority.
TL;DR: your setup is all OK!