Review site using canonical tag in a puzzling way.
-
I've just been looking at a review site that is using the canonical tag in a way that seems very strange to me.
For example, they may have several pages of reviews of the same item - they use the canonical tag on pages 2/3/4 to point back at page 1 - and yet there is no duplication between the pages.
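For reference, the pattern being described would look something like this in the head of page 2 (the URLs here are hypothetical, not taken from the site in question):

```html
<!-- On a paginated review URL such as /widget-reviews/page/2 (hypothetical) -->
<link rel="canonical" href="https://example.com/widget-reviews/" />
```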
Any idea why they might be doing this?
-
Many thanks for your response, Andy - I agree, and I can't see any logical reason either. Out of interest, I took a look at TripAdvisor. Although they're not using the canonical tag oddly like the site I mention above, they were using the same title tag for each page of reviews of the same hotel - which, again, doesn't seem logical. Surely it would be better to simply append "page 1", "page 2" and so on to the title tag?
-
It sounds to me like they might be trying to follow some advice but got it a little wrong - or perhaps they were expecting duplication that never materialised, or their platform shipped configured that way out of the box.
There is no SEO benefit or trickery behind why they are doing this, though.
-Andy
Related Questions
-
Are rel=author and rel=publisher meta tags currently in use?
Hello, Do these meta tags have any current usage? <meta name="author" content="Author Name"><meta name="publisher" content="Publisher Name"> I have also seen this usage linking to a company's Google+ page. Thank you
Intermediate & Advanced SEO | srbello
-
Hreflang tag on links to alternate language site
Hey everyone! In the interest of trying to be brief, here's the situation in my favorite form of communication, bullet points!
- Client has two sites; one is in English and one is in Japanese
- Each site is a separate URL, no sub-domains or sub-pages
- Each main page on the English version of the site has a link to the homepage of the Japanese site
- Site has decent rankings overall, with room for improvement from page 2 to page 1
- No hreflang tags currently used in links to the Japanese version from the English version

Given that the site isn't really suffering for most rankings, would this be helpful to implement on the English version? Ideally, I'd like each link to be updated to point to the corresponding subject matter on the Japanese site, but in the interim it seems like identifying to Google that the page on the other side of the link is in a different language might be helpful to the user, and might help those rankings on page two creep a little higher to page one. Thanks for reading, I appreciate your time.
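For what it's worth, hreflang annotations normally live as link elements in the page head (or in sitemaps/HTTP headers) rather than as attributes on the anchor links themselves. A minimal sketch for the setup described, using hypothetical domains, might be:

```html
<!-- In the <head> of the English homepage (domains hypothetical) -->
<link rel="alternate" hreflang="en" href="https://example.com/" />
<link rel="alternate" hreflang="ja" href="https://example.jp/" />
<!-- The Japanese homepage would carry the same pair, so the
     annotation is reciprocal -->
```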
Intermediate & Advanced SEO | Etna
-
URL Parameters as a single solution vs Canonical tags
Hi all, We are running a classifieds platform in Spain (mercadonline.es) that has a lot of duplicate content. The majority of our duplicate content consists of URLs that contain site parameters. In other words, they are the result of multiple pages within the same subcategory that are sorted by different field names like price and type of ad. I believe if I assign the correct group of URLs to each parameter in Google Webmaster Tools, then a lot of these duplicate issues will be resolved. Still, a few questions remain:
1. Once I set e.g. the 'page' parameter and choose 'paginates' as a behaviour, do I let Googlebot decide whether to index these pages, or do I set them to 'no'? Since I told Google Webmaster Tools what type of URLs contain this parameter, it will know that these are relevant pages, yet not always completely different in content.
2. Other URLs that contain 'sortby' don't differ in content at all, so I set these to 'sorting' as behaviour and set them to 'no' for Google crawling.
3. What parameter can I use to assign this to 'search', i.e. the parameter that causes the URLs to contain an internal search string? Since this search parameter changes all the time depending on the user input, how can I choose the best one? I think I need 'specifies'?
4. Do I still need to assign canonical tags for all of these URLs after this process, or is setting parameters in my case an alternative solution to this problem?
I can send examples of the duplicates, but most of them contain 'page', 'descending', 'sortby' etc. values. Thank you for your help. Ivor
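As a point of comparison for question 4, a canonical tag on one of the sorted URLs described above might look like this (the path and parameter names are hypothetical, not taken from the actual site):

```html
<!-- On a sorted listing such as /anuncios?sortby=price&order=desc (hypothetical) -->
<!-- Canonical points back at the unparameterized category page -->
<link rel="canonical" href="https://www.mercadonline.es/anuncios" />
```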
Intermediate & Advanced SEO | ivordg
-
Backlinking 3 sites from same domain and backlinking main site too
Hello, we have 4 sites: 1 main site and 3 niche sites. All 3 niche sites have dofollow links to the main site from their home pages. We got a high-quality backlink, which all 3 niche sites received from the same domain. Is it worth adding a backlink from that domain to the main site too, despite the fact that the 3 sites have already received one and they all link to the main site? Many thanks
Intermediate & Advanced SEO | Modi
-
How do you prevent the mobile site becoming a duplicate of the full browser site?
We have a larger site with 100k+ pages. We need to create a mobile site which gets indexed in the mobile engines, but I am afraid that Googlebot will consider these pages duplicates of the normal site pages. I know I can block it in robots.txt, but I still need it to be indexed for mobile search engines, and I think Google has a mobile crawler as well. Feel free to give me any other tips that I should follow while trying to optimize the mobile version. Any help would be appreciated 🙂
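One common pattern for separate mobile URLs (rather than blocking them) is a bidirectional annotation: the desktop page declares its mobile alternate, and the mobile page canonicals back to the desktop version. A sketch with hypothetical URLs:

```html
<!-- On the desktop page, e.g. www.example.com/page (hypothetical) -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page" />

<!-- On the corresponding mobile page, e.g. m.example.com/page -->
<link rel="canonical" href="https://www.example.com/page" />
```

With this pairing the mobile pages remain crawlable, but the duplicate signals are consolidated onto the desktop URLs.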
Intermediate & Advanced SEO | pulseseo
-
Best way to find broken links on a large site?
I've tried using Xenu, but this is a bit time-consuming because it only tells you that the link isn't found and doesn't tell you which pages link to the 404'd page. Webmaster Tools seems a bit dated and unreliable; several of the links it lists as broken aren't. Does anyone have any other suggestions for compiling a list of broken links on a large site?
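One way to get the missing piece (which pages link to each broken URL) is to roll a small script: extract the links from each crawled page, build a reverse map of link to referring pages, then check each target. Here's a minimal sketch in Python - the function names and structure are my own, and in practice `is_alive` would issue a HEAD request and treat 404/410 responses as dead:

```python
# Sketch: map each broken link to the pages that reference it,
# which is the report Xenu doesn't give you. Names are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def find_broken_links(pages, is_alive):
    """pages: {page_url: html_source}; is_alive: callable url -> bool.
    Returns {broken_url: [page URLs that link to it]}."""
    referrers = {}
    for page_url, html in pages.items():
        parser = LinkExtractor(page_url)
        parser.feed(html)
        for link in parser.links:
            referrers.setdefault(link, []).append(page_url)
    # Keep only the targets that fail the liveness check
    return {url: srcs for url, srcs in referrers.items() if not is_alive(url)}
```

On a 100k-page site you'd feed this from a crawl queue and batch the liveness checks, but the referrer map is the part that answers the question.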
Intermediate & Advanced SEO | nicole.healthline
-
Examples of sites other than Hubpages that have used subdomains to recover from Panda?
Everyone knows subdomains worked for Hubpages to recover from Panda. Does anyone know of other examples of sites that have recovered from Panda using subdomains?
Intermediate & Advanced SEO | nicole.healthline
-
Use rel=canonical to save otherwise squandered link juice?
Oftentimes my site has content which I'm not really interested in having included in search engine results. Examples might be a "view cart" or "checkout" page, or old products in the catalog that are no longer available in our system. In the past, I'd blocked those pages from being indexed by using robots.txt or nofollowed links. However, it seems like there is potential link juice that's being lost by removing these from search engine indexes. What if, instead of keeping these pages out of the index completely, I use rel=canonical to reference the home page (http://www.mydomain.com) of the business? That way, even if the pages I don't care about accumulate a few links around the Internet, I'll be capturing the link juice behind the scenes without impacting the customer experience as they browse our site. Is there any downside of doing this, or am I missing any potential reasons why this wouldn't work as expected?
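Concretely, the idea described would put something like this in the head of the cart/checkout pages (paths hypothetical, using the placeholder domain from the question):

```html
<!-- On /cart or /checkout (hypothetical paths) -->
<link rel="canonical" href="http://www.mydomain.com/" />
```

One caveat worth flagging: rel=canonical is a hint rather than a directive, and search engines may ignore it when the two pages aren't near-duplicates, so a cart-to-homepage canonical is not guaranteed to be honored.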
Intermediate & Advanced SEO | cadenzajon