Is it good practice to use hreflang on pages that have canonicals?
-
I have a page in English that has both English & Spanish translations on it. It is pulled in from a page generated on another site and I am not able to adjust the CSS to display only one language.
Until I can fix this, I have made the English page the canonical for both. Do I still want to use hreflang for English & Spanish pages?
What if I do not have a Spanish page at all? I assume (from what I've read) that I should not use hreflang on the English page. Is this correct?
Thank you in advance.
-
For a proper hreflang implementation, each page's canonical has to point to itself, and each page has to reference, via hreflang, the other pages that carry the same content in a different language. Otherwise, the implementation is incorrect.
-
Just to confirm: I need to remove hreflang from site pages whose canonical points to another page (and/or another website)? Thanks for taking the time to answer.
-
For hreflang to be implemented properly, the English and Spanish pages have to reference each other via hreflang, AND each page's canonical has to point to itself.
Having either page point its canonical somewhere else will break the implementation.
-
Only for the pages (English & Spanish) on my site.
-
So if I understand this properly, you have a page on your site that has its canonical pointing to a page on another site, and you want to set up hreflang for both pages? Or only for the page on your website?
-
I understand. I was asking about the situation where the canonical points to a page not on my website.
-
To answer your last question: yes, you don't need to implement hreflang if you only have an English page.
However, if you have an English and a Spanish page for the same content, then you'll need to implement hreflang on both and have the canonical of each page point to itself. This is an important part of the hreflang implementation, and it's where we see a lot of implementation errors. Having the canonical of both pages point to the English version is wrong.
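To make this concrete, here's a minimal sketch of what the head of each page could look like (the example.com URLs are placeholders, not your actual pages): each page canonicalizes to itself, and both pages carry the same reciprocal hreflang annotations.

```html
<!-- English page: https://example.com/en/page (hypothetical URL) -->
<link rel="canonical" href="https://example.com/en/page" />
<link rel="alternate" hreflang="en" href="https://example.com/en/page" />
<link rel="alternate" hreflang="es" href="https://example.com/es/page" />

<!-- Spanish page: https://example.com/es/page (hypothetical URL) -->
<link rel="canonical" href="https://example.com/es/page" />
<link rel="alternate" hreflang="en" href="https://example.com/en/page" />
<link rel="alternate" hreflang="es" href="https://example.com/es/page" />
```

Note that the hreflang block is identical on both pages; only the canonical differs, and it always points to the page it sits on.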
You can read more about hreflang in Moz's documentation.