Google not returning an international version of the page
-
I run a website that duplicates some content across international editions. These are differentiated by country codes, e.g.:
/uk/folder/article1/
/au/folder/article1/
The UK version is considered the origin of the content. We currently use hreflang to differentiate the editions; however, there is no actual regional or language variation between the content on these pages.
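For reference, when hreflang is set up this way each edition should carry reciprocal annotations pointing at every edition, including itself. Below is a minimal, standard-library Python sketch for spot-checking that; the domain, paths, and the language codes en-gb/en-au are placeholders and assumptions, not details from the original post.

```python
# Hedged sketch: spot-check that both editions expose reciprocal hreflang
# link elements. Domain, paths, and language codes are placeholders.
from html.parser import HTMLParser
from urllib.request import urlopen

EDITIONS = {
    "en-gb": "https://example.com/uk/folder/article1/",
    "en-au": "https://example.com/au/folder/article1/",
}

class HreflangCollector(HTMLParser):
    """Collects <link rel="alternate" hreflang="..." href="..."> entries."""
    def __init__(self):
        super().__init__()
        self.found = {}  # hreflang value -> href

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "alternate" and "hreflang" in a:
            self.found[a["hreflang"]] = a.get("href")

for lang, url in EDITIONS.items():
    collector = HreflangCollector()
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    collector.feed(html)
    missing = [code for code in EDITIONS if code not in collector.found]
    print(f"{lang} ({url}): found {sorted(collector.found)}; missing {missing or 'none'}")
```

If either edition is missing an entry for the other (or for itself), Google may consolidate or index the pair differently than intended.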
Recently, the UK version of a specific article has been indexed by Google, as I can reach it via a keyword search. However, when I try to search for it via:
site:domain.com/uk/folder/article1/
then it does not appear, while the AU version does. Identical articles in the same folder are not having this issue. There are no errors within Webmaster Tools, and I have recently re-fetched the specific URL. Additionally, when checking for internal links to the UK and AU editions of the article, I see internal links for the AU edition but none for the UK edition.
The main reason this is problematic is that the article no longer appears in internal site search on the UK edition of the site.
How can I find out why Google returns no result when the exact URL is entered as a site: query, yet the page does come up for a keyword search?
-
Let me get this right before I start answering.
You have one article with an issue (just one), with AU and UK editions and hreflang implemented; it is indexed for keyword searches but not showing for site: searches in Google. A few questions:
- Have you geo-targeted the subfolders for AU and UK?
- Which version of Google (i.e. which country domain) are you using to check indexation?
- When you say you checked internal links for both, how did you do that? (One way to cross-check this yourself is sketched below.)
- Your own site search (searching on your own site) does not return the UK article? Or do you mean that the site: search is not bringing up the UK article?
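To make the internal-links question concrete: one rough way to cross-check what a link tool reports is to fetch a handful of pages that should link to the article and count anchors pointing at each edition. This is only a sketch; it assumes the requests library is installed, and every URL below is a placeholder rather than the poster's real site.

```python
# Rough sketch: count internal links to each edition of the article on a few
# hub/category pages. Assumes `requests` is installed; URLs are placeholders.
import re
import requests

TARGETS = {
    "UK": "/uk/folder/article1/",
    "AU": "/au/folder/article1/",
}
PAGES_TO_CHECK = [
    "https://example.com/uk/folder/",   # UK category page
    "https://example.com/au/folder/",   # AU category page
]

HREF_RE = re.compile(r'href=["\']([^"\']+)["\']', re.IGNORECASE)

counts = {label: 0 for label in TARGETS}
for page in PAGES_TO_CHECK:
    html = requests.get(page, timeout=10).text
    for href in HREF_RE.findall(html):
        for label, path in TARGETS.items():
            if path in href:
                counts[label] += 1

# e.g. {'UK': 0, 'AU': 3} would confirm the UK edition has lost its internal links
print(counts)
```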
Related Questions
-
I have two robots.txt files for the www and non-www versions. Will that be a problem?
There are two robots.txt files: one for the www version and another for the non-www version, though I have moved to the non-www version.
Technical SEO | ramb0
-
Google Ignoring region settings on contact pages
Hi All, I've got an issue with multi-region contact pages. For example, Google favors the UAE and other regions' contact pages for French-region searches, when I only want /fr/contact. I've used a Rel-con and set up the website to point to the correct regions.
Technical SEO | WattbikeSEO0
-
Stuck trying to deindex pages from Google
Hi there, we had developers put a lot of spammy markup in one of our websites. We tried many ways to deindex the affected pages by fixing the markup and requesting recrawls. However, some of the URLs that carried the spammy markup were incorrect URLs that redirected to the right version (e.g. the same URL with or without a trailing slash), so while all the regular URLs are now updated and clean, the redirected URLs can't be found in crawls, weren't updated, and couldn't get the spam removed. They still show up in the SERP. I tried deindexing those spammed pages by making them no-index in the robots.txt file. This seemed to be working for about a week, and now they show up again in the SERP. Can you help us get rid of these spammy URLs? edit?usp=sharing
Technical SEO | Ruchy0
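Worth noting when reading the question above: noindex is a meta tag or X-Robots-Tag header rather than a robots.txt directive, and a robots.txt block can prevent Google from ever seeing it. For the URLs that "can't be found in crawls," it may help to verify where each one actually ends up after its redirect and whether any noindex signal is reachable there. A hedged sketch follows; it assumes the requests library, and the URL is a placeholder, not one of the poster's pages.

```python
# Hedged sketch: follow a redirected URL and report the final status plus any
# noindex signals. Assumes `requests`; the URL below is a placeholder.
import requests

old_url = "https://example.com/folder/article"   # old variant without trailing slash

resp = requests.get(old_url, timeout=10, allow_redirects=True)
print("Final URL:", resp.url)
print("Status:", resp.status_code)
print("X-Robots-Tag header:", resp.headers.get("X-Robots-Tag", "none"))
# Crude string check for a meta robots noindex in the final page's HTML.
print("'noindex' found in HTML:", "noindex" in resp.text.lower())
```
-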
Will a robots.txt 'disallow' of a directory keep Google from seeing 301 redirects for pages/files within the directory?
Hi - I have a client that had thousands of dynamic PHP pages indexed by Google that shouldn't have been. He has since blocked these PHP pages via a robots.txt disallow. Unfortunately, many of those PHP pages were linked to by high-quality sites multiple times (instead of the static URLs) before he put up the PHP 'disallow'. If we create 301 redirects for some of these PHP URLs that are still showing high-value backlinks and send them to the correct static URLs, will Google even see these 301 redirects and pass link value to the proper static URLs? Or will the robots.txt keep Google away, so we lose all these high-quality backlinks? I guess the same question applies if we use the canonical tag instead of the 301: will the robots.txt keep Google from seeing the canonical tags on the PHP pages? Thanks very much, V
Technical SEO | Voodak0
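To illustrate the crawl mechanics being asked about: a compliant crawler checks robots.txt before requesting a URL, so a disallowed URL is never fetched and any 301 or canonical tag on it goes unseen. Here is a minimal sketch using Python's standard-library robot parser; the directory name and URL are placeholders, not the poster's real site.

```python
# Minimal sketch of why a robots.txt Disallow hides redirects from crawlers.
# The directory and URL are placeholders, not the poster's real site.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Equivalent to a live robots.txt containing:
#   User-agent: *
#   Disallow: /dynamic-php/
rp.parse([
    "User-agent: *",
    "Disallow: /dynamic-php/",
])

blocked_url = "https://example.com/dynamic-php/old-page.php"
if not rp.can_fetch("Googlebot", blocked_url):
    # A compliant crawler stops here: it never requests the URL, so it never
    # sees the 301 (or any canonical tag in the HTML) and no link value moves.
    print("Blocked by robots.txt - the 301 on this URL will not be crawled.")
```
-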
Internal link structure: find out if there are any internal links to this page
When I use this URL in Open Site Explorer it says that there are no internal links:
http://goo.gl/d2s6tJ
Page Authority is also 1; it should be higher if there are any internal links to it, right? But I am very sure there are links to this URL on my website, for example on this URL:
http://goo.gl/ucixRH
How certain can I be of this? Because if I can be very certain, then we have an internal link-structure problem on our entire site, I believe.
Technical SEO | wilcoXXL
-
Google Places Page Changes
We had a client (a dentist) hire another marketing firm (without our knowledge), and due to some Google page changes that firm made, their website lost a #1 ranking, was disassociated from its Places page, and was placed at result #10, below all the local results. We quickly made some changes and were able to bring them up to #2 within a few days and restore their Google page after about a week, but the tracking/forwarding phone number the marketing company was using still shows up on the page, despite attempts to contact Google by updating the business in Places management and by submitting the phone number as incorrect while providing the correct one. And because the client fired that marketing company, the phone number will no longer be active in a few days. Of course this is very important for a dental office. Has anyone else had problems with the speed of updating Google Places/Plus pages for businesses? What's the most efficient way to make changes like this?
Technical SEO | tvinson0
-
What to do when you want the category page and landing page to be the same thing?
I'm working on structuring some of my content better and I have a dilemma. I'm using WordPress and I have a main category called "Therapy." Under Therapy I want a few subcategories, such as "physical therapy," "speech therapy," and "occupational therapy," to separate the content. The URL would end up being mysite/speech-therapy. However, those are also phrases I want to create a landing page for, so I'd like to have a page like mysite.com/speech-therapy that I could optimize to help people searching for those terms find some of the most helpful content on our site. I know I can't have two URLs that are the same, but I'm hoping someone can give me some feedback on the best way to go about this. Thanks.
Technical SEO | NoahsDad0
-
How can I tell Google that a page has not changed?
Hello, we have a website with many thousands of pages. Some of them change frequently, some never. Our problem is that Googlebot is generating way too much traffic; half of our page views are generated by Googlebot. We would like to tell Googlebot to stop crawling pages that never change. This one, for instance: http://www.prinz.de/party/partybilder/bilder-party-pics,412598,9545978-1,VnPartypics.html As you can see, there is almost no content on the page and the picture will never change, so I am wondering if it makes sense to tell Google that there is no need to come back. The following header fields might be relevant. Currently our web server answers with these headers:
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0, public
Pragma: no-cache
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Does Google honor these fields? Should we remove no-cache, must-revalidate and Pragma: no-cache, and set Expires to e.g. 30 days in the future? I also read that a web page that has not changed should answer with 304 instead of 200. Does it make sense to implement that? Unfortunately, that would be quite hard for us. Maybe Google would then also spend more time on pages that actually changed, instead of wasting it on unchanged pages. Do you have any other suggestions for how we can reduce Googlebot's traffic on irrelevant pages? Thanks for your help, Cord
Technical SEO | bimp
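On the 304 question above: a server that supports conditional requests lets a client (or crawler) revalidate with If-Modified-Since / If-None-Match and answer 304 Not Modified with an empty body. A small client-side sketch for testing whether a given page already supports this; it assumes the requests library, and the URL is a placeholder rather than the poster's real gallery page.

```python
# Hedged sketch: test whether a URL answers 304 to a conditional request.
# Assumes `requests`; the URL is a placeholder, not the poster's real page.
import requests

url = "https://example.com/party/partybilder/unchanging-gallery.html"

first = requests.get(url, timeout=10)
conditional_headers = {}
if "Last-Modified" in first.headers:
    conditional_headers["If-Modified-Since"] = first.headers["Last-Modified"]
if "ETag" in first.headers:
    conditional_headers["If-None-Match"] = first.headers["ETag"]

second = requests.get(url, headers=conditional_headers, timeout=10)
# 304 on the second request means the server confirmed "not modified" without
# resending the body, which is what lets crawlers skip re-downloading the page.
print(first.status_code, second.status_code)
```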