Product Subdomain Outranking "Marketing" Domains
-
Hello, Moz community!
I have been puzzling about what to do for a client. Here is the challenge.
The client's "product"/welcome page lives at www.client.com. This page allows the visitor to select the country/informational site they want, OR to log in to their subdomain/install of the product.
Google is choosing this www.client.com URL as the main result for client brand searches.
In a perfect world, searchers in the US would be served the client's US version of the information/marketing site, which lives at https://client.com/us, and likewise for other countries (each country's content also lives in its own directory).
It's a brand-new client, we've already set up geo-targeting in Search Console, and I'm a bit scared to rock the boat by de-indexing this www.client.com welcome screen.
Any thoughts, ideas, or potential solutions are much appreciated.
THANKS!
-
Thanks! Such a great answer.
-
You are very right to be worried about rocking that particular boat. If you de-index a page, you essentially nullify its SEO authority. Since the page you would be nullifying is a homepage-level URL (you gave the example www.client.com), this would basically be SEOicide.
Most other pages on your site probably get most of their SEO authority and ranking power from your homepage, either directly or indirectly (e.g. homepage linking to a sub-page, vs. homepage linking to a category which then links to a sub-page).
This is because it's almost certain that your homepage is the URL which has gained the most links from across the web. People are lazy; they just pick the shortest URL when linking. I'm not saying you don't have good deep links, just that most of the good ones are probably hitting the homepage.
So if you nullify the homepage's right to hold SEO authority, what happens to everything underneath it? Are you imagining an avalanche right now? That's right: this would be one of the worst possible ideas in the universe. Write it down, print it out, and burn it.
Search Console geo-targeting is for whole properties, not individual pages or (usually, though there can be exceptions) sections. You know that, right? What it does is tell Google which country you want the whole selected property to rank in. It essentially stops that property from ranking well globally and gives it a minor boost in the selected location. If you took your homepage-level property and told Google it's US-only, prepare to kiss most of your other traffic goodbye (a hard lesson). If you were semi-smart and added /us/ as a separate property, and only set geo-targeting to the US for that property, breathe a sigh of relief: it likely won't solve your issue, but it won't be a complete catastrophe either (phew!).
Really, the only decent tool you have to direct Google to rank individual web pages for regions and/or languages is the hreflang tag. These tags tell Google: "hey, you landed on me and I'm a valid page, but if you want versions of me for other languages or locations, go to these other URLs via my hreflang links." Hreflangs only work if they are mutually agreed: both pages contain mirrored hreflang tags pointing to each other, and neither page lists multiple URLs for a single language/location combination (or for a language or location in isolation).
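The "mutually agreed" requirement can be sketched as a quick reciprocity check. This is a minimal illustration with hypothetical URLs, not Google's actual validation logic:

```python
# Sketch of hreflang reciprocity validation.
# Each page maps hreflang codes to the URL that should serve that locale.
# URLs and locales here are hypothetical examples.

pages = {
    "https://client.com/us": {
        "en-us": "https://client.com/us",   # self-referencing entry
        "en-gb": "https://client.com/uk",
    },
    "https://client.com/uk": {
        "en-us": "https://client.com/us",
        "en-gb": "https://client.com/uk",
    },
}

def hreflang_errors(pages):
    """Return a list of problems: pages whose hreflang targets don't link back."""
    errors = []
    for url, tags in pages.items():
        # One URL per language/location combination is enforced by the
        # dict structure itself; here we check the return links.
        for locale, target in tags.items():
            if target == url:
                continue  # self-reference is fine
            return_tags = pages.get(target, {})
            # The target page must point back at this page somewhere
            # in its own hreflang set, or Google ignores the pair.
            if url not in return_tags.values():
                errors.append(f"{target} has no return hreflang to {url}")
    return errors

print(hreflang_errors(pages))  # [] means the annotations are reciprocal
```

If any page drops its return tag, the check flags the pair, which mirrors how Google silently discards non-reciprocal hreflang annotations.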
The problem is, even if you do everything right, Google really has to believe: "yes, this other page is another version of exactly the same page I'm looking at right now." Google can do things like take the main content of both URLs, reduce each to a single string, and then compute a string-similarity score between the two strings to get a 'percentage' of content similarity. Well, that's how I check content similarity; Google does something similar, but probably infinitely more elegant and clever. In the case of hreflangs, translation is probably also applied before comparing.
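That rough similarity check can be sketched with Python's standard library. This illustrates the commenter's own method, not Google's; the sample strings are invented:

```python
from difflib import SequenceMatcher

def content_similarity(a: str, b: str) -> float:
    """Return a 0-100 'percentage' similarity of two content strings."""
    return SequenceMatcher(None, a, b).ratio() * 100

# Hypothetical main-content extracts from the two page types discussed above.
functional = "Log in to your account or download the installer for your region."
marketing = "Discover why thousands of teams choose our product. Start a free trial today."

score = content_similarity(functional, marketing)
print(f"{score:.0f}% similar")  # a low score suggests a machine would treat these as different pages
```

Identical strings score 100; the functional vs. marketing copy above scores low, which is exactly the situation where an hreflang pair is likely to be ignored.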
If Google's mechanical mind thinks the pages are very different, then it will simply ignore the hreflang (just as Google will not pass SEO authority through a 301 redirect if the contents of the old and new pages are highly dissimilar in machine terms).
This is a fail-safe Google has to stop people from moving high rankings on 'useful' or 'proven' (via hyperlinks) URLs onto less useful or less proven pages (which, by Google's logic, should have to re-prove their worth if the content is very different). Remember, what a human thinks is similar is irrelevant here; you need to focus on what a machine would find similar (and those can be VERY different things).
So even if you do it all properly and use hreflangs, since the nature of the pages is very different (one is functional and helps users navigate, log in, and download something, which is very useful; the other is pure sell, and marketing content is usually thin), it's unlikely that Google will swallow your intended URL mapping.
You'd be better off making the homepage include some marketing elements and making the marketing URLs include some of the functional elements. If both pages do both things well and are essentially the same, then hreflangs might actually start to work
If you want to keep the marketing URLs pure sell, fine, but they will only be useful as paid-traffic landing pages (e.g. from Google Ads, Pinterest Ads, or Facebook Ads), where you can point your ad at the advertorial (marketing) URLs. People expect ads to land on marketing-centric pages; they don't expect (or necessarily want) that for regular web searches. The channel (SEO) is called 'organic' for a reason!