Product Subdomain Outranking "Marketing" Domains
-
Hello, Moz community!
I have been puzzling over what to do for a client. Here is the challenge.
The client's "product"/welcome page lives at
This page allows the visitor to select the country/informational site they want, or to log in to their subdomain/install of the product.
Google is choosing this www.client.com URL as the main result for client brand searches.
In a perfect world, searchers in the US would be served the client's US version of the information/marketing site, which lives at https://client.com/us, and likewise for other country-level content (each also living in its own country directory).
It's a brand-new client, we've done geo-targeting within Search Console, and I'm kind of scared to rock the boat by de-indexing this www.client.com welcome screen.
Any thoughts, ideas, or potential solutions are much appreciated.
Thanks!
-
Thanks! Such a great answer.
-
You are very right to be worried about rocking that particular boat. If you de-index a page, you basically nullify its SEO authority. Since the page you would be nullifying is a homepage-level URL (you gave the example www.client.com), this would basically be SEOicide.
Most other pages on your site probably get most of their SEO authority and ranking power from your homepage, directly or indirectly (e.g. homepage linking to a sub-page, vs homepage linking to a category, which then links to a sub-page).
This is because it's almost certain that your homepage is the URL which has gained the most links from across the web. People are lazy; they just pick the shortest URL when linking. I'm not saying you don't have good deep links, just that most of the good ones are probably hitting the homepage.
So if you nullify the homepage's right to hold SEO authority, what happens to everything underneath it? Are you imagining an avalanche right now? That's right: this would be one of the worst possible ideas in the universe. Write it down, print it out, and burn it.
Search Console-level geo-targeting is for whole properties, not pages or (usually, though there can be exceptions) sections; you know that, right? What it does is tell Google which country you want the website (the whole property you have selected) to rank in. It basically stops that property from ranking well globally and gives minor boosts in the selected location. If you just took your homepage-level property and told Google it's US-only now, prepare to kiss most of your other traffic goodbye (a hard lesson). If you were semi-smart and added /us/ as a separate property, and only set the geo-targeting to US for that property, breathe a sigh of relief. It likely won't solve your issue, but it won't be a complete catastrophe either (phew!)
Really, the only decent tool you have to direct Google to rank individual web pages for regions and/or languages is the hreflang tag. These tags tell Google: "hey, you landed on me and I'm a valid page, but if you want to see versions of me in other languages, go to these other URLs through my hreflang links". Hreflangs only work if they are mutually agreed: both pages contain mirrored hreflangs pointing to each other, and neither page gives multiple URLs for a single language/location combination (or for a language or location in isolation).
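To make that concrete, here's a minimal sketch of what mutually-mirrored hreflang annotations could look like in the `<head>` of each page (the US/UK URLs and the x-default choice are hypothetical, not taken from the client's actual setup):

```html
<!-- On https://client.com/us/ -->
<link rel="alternate" hreflang="en-us" href="https://client.com/us/" />
<link rel="alternate" hreflang="en-gb" href="https://client.com/uk/" />
<link rel="alternate" hreflang="x-default" href="https://www.client.com/" />

<!-- On https://client.com/uk/ : the identical, mirrored set, so both pages agree -->
<link rel="alternate" hreflang="en-us" href="https://client.com/us/" />
<link rel="alternate" hreflang="en-gb" href="https://client.com/uk/" />
<link rel="alternate" hreflang="x-default" href="https://www.client.com/" />
```

Note that each page includes a self-referencing entry, and each language/region pair maps to exactly one URL; break either of those rules and Google will typically ignore the whole set.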
The problem is, even if you do everything right, Google really has to believe "yes, this other page is another version of exactly the same page I'm looking at right now". Google can do things like take the main content of both URLs, put each into a single string, then check the string similarity of the two content strings to find a 'percentage' of content similarity. Well, that's how I check content similarity; Google does something similar, but probably infinitely more elegant and clever. In the hreflang case, translation of the strings is probably also applied before comparison.
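As a rough illustration of that kind of check (emphatically not Google's actual algorithm), here's how you might estimate content similarity between two pages using only Python's standard library; the page texts are invented examples:

```python
from difflib import SequenceMatcher


def content_similarity(text_a: str, text_b: str) -> float:
    """Rough similarity between two pages' main content, as a 0-1 ratio."""
    # Normalise whitespace and case so layout differences don't dominate
    a = " ".join(text_a.lower().split())
    b = " ".join(text_b.lower().split())
    return SequenceMatcher(None, a, b).ratio()


# Hypothetical main-content extracts from two country versions of a page
page_us = "Download and log in to your Client account. Select your country to continue."
page_uk = "Download and log in to your Client account. Choose your country to continue."

score = content_similarity(page_us, page_uk)
```

Two near-identical country versions score close to 1.0, while a functional login page compared against a thin marketing page would score far lower, which is the gap the answer above is warning about.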
If Google's mechanical mind thinks the pages are very different, then it will simply ignore the hreflang (just as Google will not pass SEO authority through a 301 redirect if the contents of the old and new pages are highly dissimilar in machine terms).
This is a fail-safe Google has to stop people from moving high rankings on 'useful' or 'proven' (via hyperlinks) URLs onto less useful or less proven pages (which, by Google's logic, should have to re-prove their worth if the content is very different). Remember, what a human thinks is similar is irrelevant here. You need to focus on what a machine would find similar (those can be VERY different things).
So even if you do it all properly and use hreflangs, since the nature of the pages is very different (one is functional, helping users navigate, log in, and download something, which is very useful; the other is sales-y, and marketing content is usually thin), it's unlikely that Google will swallow your intended URL serves.
You'd be better off making the homepage include some marketing elements, and making the marketing URLs include some of the functional elements. If both pages do both things well and are essentially the same, then hreflangs might actually start to work.
If you want to keep the marketing URLs pure sell, fine, but then they will only be useful as paid-traffic landing pages (e.g. from Google Ads, Pinterest Ads, or Facebook Ads), where you can connect your ad to the advertorial (marketing) URLs. People expect ads to land on marketing-centric pages; they don't expect (or necessarily want) that from regular web searches. The channel (SEO) is called 'organic' for a reason!