Page disappears from search results when Google geographic location is close to offline physical location
-
If you use Google to search georgefox.edu for "doctor of business administration", the first search result is http://www.georgefox.edu/business/dba/ - I'll refer to this page as the DBA homepage from here on. The second result is http://www.georgefox.edu/offices/sfs/grad/tuition/business/dba/ - I'll refer to this page as the DBA program costs page from here on.
Search: https://www.google.com/search?q=doctor+of+business+administration+site%3Ageorgefox.edu
This appears to hold true no matter what your geographic location is set to on Google.
George Fox University is located in Newberg, Oregon. If you search for "doctor of business administration" with your geographic location set to a location beyond a certain distance away from Newberg, Oregon, the first georgefox.edu result is the DBA homepage.
Set your location on Google to Redmond, Oregon
Search: https://www.google.com/search?q=doctor+of+business+administration
But, if you set your location a little closer to home, the DBA homepage disappears from the top 50 search results on Google.
Set your location on Google to Newberg, Oregon
Search: https://www.google.com/search?q=doctor+of+business+administration
Now the first georgefox.edu page to appear in the search results is the DBA program costs page.
Here are the locations I have tested so far:
First georgefox.edu search result is the DBA homepage
- Redmond, OR
- Eugene, OR
- Boise, ID
- New York, NY
- Seattle, WA
First georgefox.edu search result is the DBA program costs page
- Newberg, OR
- Portland, OR
- Salem, OR
- Gresham, OR
- Corvallis, OR
It appears that if your location is set to within a certain distance of Newberg, OR, the DBA homepage is being pushed out of the search results for some reason.
Can anyone verify these results? Does anyone have any idea why this is happening?
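If anyone wants to repeat the check across more locations, here is a rough Python sketch of how I'd tabulate the results. It assumes you've set each location manually on Google and saved the results page as a local HTML file (the serps/ folder and the file names are just placeholders, not an actual tool); it simply reports which of the two URLs appears first in each saved page.

```python
from pathlib import Path

DBA_HOME = "georgefox.edu/business/dba"
DBA_COSTS = "georgefox.edu/offices/sfs/grad/tuition/business/dba"

def first_georgefox_hit(html: str) -> str:
    """Report which of the two georgefox.edu pages shows up first in a saved results page."""
    home = html.find(DBA_HOME)
    costs = html.find(DBA_COSTS)
    if home == -1 and costs == -1:
        return "neither page in the results"
    if costs == -1 or (home != -1 and home < costs):
        return "DBA homepage"
    return "DBA program costs page"

# Assumes one saved results page per location, e.g. serps/newberg-or.html
for path in sorted(Path("serps").glob("*.html")):
    print(f"{path.stem}: {first_georgefox_hit(path.read_text(errors='ignore'))}")
```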
-
Hi RCF,
Here is what I see:
Searching for 'doctor of business administration' with my location set to Redmond, Oregon, http://www.georgefox.edu/business/dba/ is coming up #8 organically.
Doing this same search, but with my location set to Newberg, Oregon, I see http://www.georgefox.edu/offices/sfs/grad/tuition/business/dba/ coming up #2 organically.
It looks like my exact rankings may not be identical to yours, but my searches do appear to confirm that the two different pages surface depending on which of the two locations is set.
Unfortunately, I agree with Moosa that it's very hard to pin down a precise reason why Google would favor your tuition page over your home page when a user's location is set to the town in which the institution is located. I do have a suggestion, though. Why not try putting Newberg, OR in the title tag of the page you'd like to rank highest for Newberg-located searchers, mentioning it in the H1 tag, and mentioning it more than once in the copy? I'd also recommend putting the complete NAP (name, address, phone) in the body copy of the page, though I see it's already in the footer. Perhaps by increasing the optimization for Newberg on this page, you'll strengthen the page's chances of ranking the way you want it to. Just a suggestion.
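If it helps, here's a minimal sketch (not a Moz tool, just an illustration using the requests and BeautifulSoup libraries) of how you could spot-check whether the DBA homepage carries those Newberg signals once you've made the changes: the city in the title tag, the city in the H1, and the NAP details somewhere in the page text. The NAP fragments below are placeholders, so swap in your real name, address, and phone number.

```python
import requests
from bs4 import BeautifulSoup

URL = "http://www.georgefox.edu/business/dba/"
CITY = "Newberg"
# Placeholder NAP fragments -- replace with the real name, address, and phone number
NAP_PARTS = ["George Fox University", "Newberg, OR", "503"]

resp = requests.get(URL, timeout=30)
soup = BeautifulSoup(resp.text, "html.parser")

# Pull the title, the first H1, and the full visible text of the page
title = soup.title.get_text(" ", strip=True) if soup.title else ""
h1 = soup.h1.get_text(" ", strip=True) if soup.h1 else ""
body = soup.get_text(" ", strip=True)

print(f"'{CITY}' in <title>: {CITY.lower() in title.lower()}")
print(f"'{CITY}' in first <h1>: {CITY.lower() in h1.lower()}")
for part in NAP_PARTS:
    print(f"'{part}' in page text: {part.lower() in body.lower()}")
```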
-
Yes, that's the only vaguely plausible explanation we've come up with too, though it's not very satisfying and it's impossible to prove.
Thanks for the suggestion, Moosa!
-
I believe no one can say exactly what is going on here; one can only answer based on experiments and on what has happened to them in the past!
Google has said recently that it may lower your rankings if people click through to your website, bounce back within a few seconds, and select another result. If we apply that rule to local results, it would mean that if people searching from a certain location find a page irrelevant, Google will probably rank it lower for that location, while the same page, if it performs well for searchers in another location, will rank higher there.
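Just to illustrate the idea (this is purely a toy model of the hypothesis, not how Google actually works, and every number in it is made up): if searchers from one location keep bouncing straight back from a result while searchers elsewhere stay on it, a location-aware engine could demote the page for the first location only.

```python
from dataclasses import dataclass

@dataclass
class LocalSignal:
    location: str
    clicks: int
    quick_bounces: int  # clicks where the user returned to the results within seconds

    def pogo_rate(self) -> float:
        return self.quick_bounces / self.clicks if self.clicks else 0.0

def adjusted_score(base_score: float, signal: LocalSignal, penalty: float = 0.5) -> float:
    """Demote the base relevance score in proportion to the local pogo-stick rate."""
    return base_score * (1 - penalty * signal.pogo_rate())

# Made-up numbers purely for illustration
signals = [
    LocalSignal("Newberg, OR", clicks=100, quick_bounces=60),
    LocalSignal("Redmond, OR", clicks=100, quick_bounces=10),
]
for s in signals:
    print(f"{s.location}: score {adjusted_score(10.0, s):.2f} (pogo rate {s.pogo_rate():.0%})")
```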
I could be completely wrong, but this is just one of a thousand possibilities.
Hope this helps!