How to remove an international URL from the Google US index / hreflang help
-
Hi Moz Community,
This is a weird/confusing question, so I'll try my best to explain. The company I work for also has an Australian retail website. When you do a site:ourbrand.com search, the second result that pops up is au.brand.com, which redirects to the actual brand.com.au website.
The Australian site owner removed this redirect per my boss's request, and now it leads to an unavailable webpage.
I'm confused as to the best approach. Is there a way to noindex the au.brand.com URL from US-based searches? My only problem is that the au.brand.com URL is ranking higher than all of the actual US-based sub-category pages when using a site: search.
Is this an appropriate place for an hreflang tag? Let me know how I can help clarify the issue.
Thanks,
-Reed
-
Hi Sheena, sorry I didn't respond sooner; I wasn't receiving any notifications.
Thank you very much for your answer, though. It was extremely helpful and confirmed that what I was thinking was correct, with some added guidance from you.
I didn't think taking away the 301 was the best approach, but from my boss's standpoint, he sees it as them getting clicks that shouldn't be theirs. I just have to do my best to explain why it's better for the long term.
The hreflang is in place, and I think the best approach would be to consolidate the international ccTLDs into the .com domain.
Thanks again, very helpful.
-Reed
-
I'm working on a very similar scenario, where .com.au pages are ranking in Google US and .com pages are ranking in Google AU (above .com.au pages).
We are moving forward with the hreflang attribute, since it was specifically introduced to help search engines serve the correct language or regional URL to searchers. In helping search engines index and serve the localized version of your content, hreflang also helps prevent duplicate content issues by telling Google that each potential "duplicate" is actually an alternate version intended for users in a different language or region. We see this as a short-term solution, as we plan to eventually consolidate the ccTLDs into the .com site.
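To make that concrete, here is a minimal sketch of what the on-page annotations could look like in the head of each page pair. The URLs below (brand.com / brand.com.au and the /widgets/ path) are placeholders for illustration, not the actual sites discussed here:

<!-- On https://www.brand.com/widgets/ (US version) -->
<link rel="alternate" hreflang="en-us" href="https://www.brand.com/widgets/" />
<link rel="alternate" hreflang="en-au" href="https://www.brand.com.au/widgets/" />
<link rel="alternate" hreflang="x-default" href="https://www.brand.com/widgets/" />

<!-- On https://www.brand.com.au/widgets/ (AU version), the same set is repeated -->
<link rel="alternate" hreflang="en-us" href="https://www.brand.com/widgets/" />
<link rel="alternate" hreflang="en-au" href="https://www.brand.com.au/widgets/" />
<link rel="alternate" hreflang="x-default" href="https://www.brand.com/widgets/" />

Note that the annotations need to be reciprocal: each page in the set lists every alternate, including itself, and the optional x-default entry points to the page you want served when no specific language/region matches.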
Here are some international SEO / hreflang resources that might help:
- https://support.google.com/webmasters/answer/189077?hl=en
- http://moz.com/blog/hreflang-behaviour-insights
- http://moz.com/blog/the-international-seo-checklist
- Anything from Aleyda Solis and/or Gianluca Fiorelli
- http://moz.com/blog/using-the-correct-hreflang-tag-a-new-generator-tool
- http://www.themediaflow.com/tool_hreflang.php
Also, since the AU subdomain pages were ranking well, I probably would have left the redirect in place rather than let it go to a 404. Then I would focus on mapping out the equivalent pages between the .com and .com.au sites. This is a very tedious project, but the last two links I shared above really help move things along once you have all the URL equivalents mapped out.
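For what it's worth, once the URL equivalents are mapped, the hreflang annotations can also be delivered through an XML sitemap instead of on-page tags, which some of the tools linked above can generate for you. A rough sketch, again using placeholder URLs rather than the actual sites:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <!-- One <url> entry per page, each listing every regional alternate (including itself) -->
  <url>
    <loc>https://www.brand.com/widgets/</loc>
    <xhtml:link rel="alternate" hreflang="en-us" href="https://www.brand.com/widgets/" />
    <xhtml:link rel="alternate" hreflang="en-au" href="https://www.brand.com.au/widgets/" />
  </url>
  <url>
    <loc>https://www.brand.com.au/widgets/</loc>
    <xhtml:link rel="alternate" hreflang="en-us" href="https://www.brand.com/widgets/" />
    <xhtml:link rel="alternate" hreflang="en-au" href="https://www.brand.com.au/widgets/" />
  </url>
</urlset>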
I hope this helps!