Hreflang-targeted website using the root directory's description & title
-
Hi there,
Recently I applied the hreflang tags like so:
Unfortunately, the Australian site uses the same description and title as the US site (which was initially the root directory). Am I doing something wrong?
Would appreciate any response, thanks!
-
Hi,
You could use this tool to check whether the tags are properly implemented: http://flang.dejanseo.com.au/
rgds,
Dirk
-
Hi Oliver,
Take a look at the following articles from Moz:
Getting hreflang Right: Examples and Insights for International SEO
You can add a title to your hreflang attribute, e.g.:
<link rel="alternate" hreflang="es-mx" href="http://mexico.cnn.com" title="CNN Mexico" type="text/html" />
I would also note that your tags don't have a " />" at the end; add that. Also, switch the order of your href="" and hreflang="" attributes, if only for best practice.
If for some reason these changes don't fix it, check that your Australian site actually has its own meta description and title in place; the hreflang tags themselves won't rewrite those for you.
Here are a couple more resources for you:
Hreflang Tag Best Practices (Moz)
Using the Correct Hreflang Tag: A New Generator Tool (Moz)
Let me know if these help! Good luck!
Related Questions
-
International Blog Structure & Hreflang Tags
Hi all, I'm running an international website across 5 regions using a correct hreflang setup. A problem I think I have is that my blog structure is not standardized and also uses hreflang tags for each blog article. This has naturally caused Google to index each of the pages across each region, meaning a massive number of pages are being crawled. I know hreflang solves any issues with duplication penalties, but I have another question: if I have legacy blog articles that are considered low quality by Google, does that count against my site once, or multiple times for each region the blog is replicated across? I'm not sure if hreflang is something that would tell Google this. For example, if I have these low quality blog posts:
blog/en-us/low-quality-article-1
blog/en-gb/low-quality-article-1
blog/en-ca/low-quality-article-1
Do you think Google counts this as 3 low quality articles, or just 1 if hreflang is correctly implemented? Any insights would be great, because I'm considering culling the international setup of the blog articles and using just /blog across each region.
Intermediate & Advanced SEO | MattBassos
-
Why can some websites rank for keywords they don't have on the page?
Hello guys, yesterday I used SEMrush to search for the keyword "branding agency" and look at the SERP. Liquidagency ranks 5th on the first page. So I went to their homepage but saw no exact keyword "branding agency" anywhere, not even in the page source. I also didn't see "branding agency" as a top anchor text in the external links to the page (from the SEMrush report). I am an SEO newbie; can someone explain this to me, please? Thank you.
Intermediate & Advanced SEO | Raymondlee
-
Disavow Experts: Here's one for ya ....
Not sure how to handle this one, simply because there are SO MANY. I want to be careful not to do something stupid. Just a quick 3-minute video explanation: https://youtu.be/bVHUWTGH21E I'm interested in several opinions, so even if someone has already replied, please still chime in. Thanks.
Intermediate & Advanced SEO | HLTalk
-
How should I handle URLs created by an internal search engine?
Hi, I'm aware that internal search result URLs (www.example.co.uk/catalogsearch/result/?q=searchterm) should ideally be blocked using the robots.txt file. Unfortunately the damage has already been done, and a large number of internal search result URLs have already been created and indexed by Google. I have double-checked, and these pages only account for approximately 1.5% of traffic per month. Is there a way I can remove the internal search URLs that have already been indexed and then stop this from happening in the future? I presume the last part would be to disallow /catalogsearch/ in the robots.txt file, along the lines of the sketch below. Thanks
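A minimal robots.txt sketch of that rule (assuming /catalogsearch/ is the only internal search path on the site):
# stop compliant crawlers from fetching internal search result pages
User-agent: *
Disallow: /catalogsearch/
Note that a disallow only stops future crawling; URLs already in the index still need a noindex meta tag, or a removal request in Search Console, before the block goes in, otherwise they can linger in the index.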
Intermediate & Advanced SEO | GrappleAgency
-
Removing content from Google's Indexes
Hello Mozers, my client asked a very good question today. I didn't know the answer, hence this question. When you submit a 'Removing content for legal reasons' report (https://support.google.com/legal/contact/lr_legalother?product=websearch), will the person(s) owning the website containing this inflammatory content receive any communication from Google? My clients have already had the offending URL removed by a court order, which was sent to the offending company. However, now the site has been relocated and the same content is glaring out at them (and their potential clients), with the title "Solicitors from Hell + Brand name" immediately under their SERPs entry. I'm going to follow the advice of the forum and try to get the URL removed via Google's report system, as well as the rearguard action of increasing my clients' SERPs entries via social + content. However, I need to be able to firmly tell my clients the implications of submitting a report. They are worried that if they rock the boat, this URL (with open access for reporting of complaints) will simply get more inflammatory! By rocking the boat, I mean Google informing the owners of this "Solicitors from Hell" site that they have been reported for hosting defamatory content. I'm hoping that Google wouldn't inform such a site, and that the only indicator would be an absence of visits. Is this the case, or am I being too optimistic?
Intermediate & Advanced SEO | catherine-279388
-
What NAP format do I use if the USPS can't even find my client's address?
My client has a site already listed on Google+ Local under "5208 N 1st St". He has some other NAPs, e.g., YellowPages, under "5208 N First Street". The USPS finds neither of these, nor any variation that I can possibly think of! Which is better? Do I just take the one that Google has accepted and make all the others match it as best I can? And doesn't it matter that the USPS doesn't even recognize the address? Or no? Local SEO wizards, thanks in advance for your guidance!
Intermediate & Advanced SEO | rayvensoft
-
Google does not target my website properly!
Hello everyone, my website www.pentrucadouri.ro, despite being a .ro with Romanian content and hosted in Romania, appears to google.ro as an English-targeted website. Google sees the internal pages as Romanian but the main page as English. In order to change this, I added: Also, a few days ago I uploaded a geositemap and submitted it to Google. Do you have suggestions? The website ranks 2nd for "cosuri cadou" on google.com and 3rd on Bing, but only 11th on google.ro. Thanks!
Intermediate & Advanced SEO | VertiStudio