Subdomain vs Subdirectory - does the content make a difference?
-
So I've read through all of the answers that suggest using a subdirectory is the best way to approach this - you rank more quickly and have all of your content on one site. BUT what if you're looking to move into a totally new market that your current site/content isn't in any way relevant to?
Some examples are supermarkets such as Tesco, who seem to use a mix of methods: http://www.tesco.com/groceries/, http://www.clothingattesco.com/, and http://www.tesco.com/bank/, which links out from their main site to http://www.tescobank.com/, etc. Sainsburys (http://www.sainsburys.co.uk/) uses subdomains instead - their grocery offering, bank, clothes, phones, etc. are each split onto their own subdomain.
If you have a product that is totally new to your brand and different from all the products on your current site, does this change the answer to subdirectory vs subdomain?
Would be great to hear your expert opinions on this.
Thanks
-
For the subdomain vs. domain issue:
From an SEO perspective, a subdomain is less favorable. From a user perspective: try explaining the domain zoekmachinemarketing.stramark.nl to my father. How are you going to explain that there should not be a www. in front of it? How are you going to explain that he has to go not just to stramark.nl, but to that specific subdomain, because it has a different offer?
I think young people can adapt somewhat better, but they are very used to not having to think. They just search from the address bar and take the top result.
-
I agree with what John Cross said here - multiple domains mean more work. If there is a business case to justify that increase in work, then the decision is easier. If there isn't enough of a business case to justify the work, then from an SEO standpoint you should probably keep it on the same domain to get the new content ranking more quickly.
Along with SEO considerations, though, there are a few other ways to break down this question...
First, what are the user expectations? Yes, the products are different and not highly related, but are the customers different? In the Tesco example, would people who are interested in groceries also be interested in banking? Or, put another way, would people who are interested in groceries (but not in banking) be offended to see that this company also offers banking services? If the users are interconnected or are (at minimum) not put off by the variety of products, then why not have everything on one domain? That way you get the strong SEO benefit of using sub-directories. This isn't always a cheap investment though, as it requires a strong architecture to keep the directories and content types/voices distinct, but it's totally doable and a good solution from an SEO standpoint.
Second, I'd look at this from a brand perspective. Is this all the same company delivering these goods? Is it all Tesco or Sainsburys? If it is the same brand name, then why not have everything live on one authoritative domain name (assuming you aren't going to chase away customers by showing the breadth of products offered)? Google is an example of this - look at the wide variety of services they offer: mail, analytics, Drive, G+, search, etc. It is all Google, even though they offer a wide range of products to a diverse range of customers. Now, if New Product A is a different brand and a really different thing from anything else being done by the company (in Google's case, Android), then that likely justifies a separate domain and a larger business investment (not just for SEO, but for design and other types of marketing too).
Finally, I think you do need to look at this technically. Chances are that Tesco Bank has to live on a different domain simply because of security considerations. Sometimes technology limitations have to dictate what we do with SEO. If those limitations are great enough, then we may have to do the work to create two distinct domains and get both of them earning rankings and traffic. In that case, the business/technical needs justify the work required.
Hope that helps!
-
To optimize SEO outcomes, the short answer is: use your current domain.
However, a counter-argument could be that you own an exact-match domain for your keywords, which might push you toward a new URL. Or you have a big marketing budget, or maybe you just want a clean start because of Pigeon or Panda issues plaguing the current site.
That said, using Tesco & Sainsbury as examples, both have one thing in common: big wallets. They would have planned multi-million dollar marketing campaigns around the new products/URLs, so they can drive backlinks. If the company is a monster, with a massive marketing spend for the launch, you may well decide a new brand and URL are in order.
I am old school. A brand new domain means starting from scratch - no history and no backlinks - which is a far harder task, but certainly not unachievable. Still, I would steer away from it. Personally, I believe you should try to limit new domains, as in practice it roughly doubles your required SEO output: you have to review two lots of Google Analytics and Webmaster Tools data each day. So just to stay level, you need to work extra hours each week with a new domain.
These are my views, but there is plenty of info on Moz heading the other way.
-
Related Questions
-
Can you use multiple rel alternate tags for different device subdomains?
When redirecting from desktop to mobile with a separate URL structure, you need to have a rel alternate / rel canonical handshake to define the relationship between the pages. But if you have different subdomains for different mobile devices, can you add more than one rel alternate tag on the desktop page? E.g. if site.com is redirecting to iphone.site.com, m.site.com, and android.site.com?
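To make the handshake concrete, here is a minimal sketch of the documented desktop/mobile pattern, using the example URLs from the question. Note that the alternate's media attribute is a CSS media query, and media queries distinguish screen characteristics rather than operating systems - so there is no documented way to declare separate iPhone and Android alternates this way; device-specific subdomains are usually chosen server-side instead:

    <!-- On the desktop page, e.g. http://site.com/page : -->
    <link rel="alternate" media="only screen and (max-width: 640px)"
          href="http://m.site.com/page">

    <!-- On the mobile page, e.g. http://m.site.com/page : -->
    <link rel="canonical" href="http://site.com/page">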
-
.ac.uk subdomain vs .co.uk domain
I'd be grateful if I could check my thinking... I've agreed to give some quick advice to a non-profit organisation who are in the process of moving their website from an .ac.uk subdomain to a .co.uk domain. They believe that their SEO can be improved considerably by making this migration. From my experience, I don't see how this could be the case. Does the unique domain in itself offer enough ranking benefit to justify this approach? The subdomain is on a very high-authority domain with many pre-existing links, which makes me even more nervous about this approach. Does anyone have any opinions on this that they could share, please? I'm guessing that it is possible to migrate safely and that there might be branding advantages, but from an actual SEO point of view there is not that much benefit? It looks like most of their current traffic is branded traffic.
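For what it's worth, if they do proceed, the mechanics of migrating safely are well understood: page-level 301 redirects from every old URL to its new equivalent. A minimal Apache sketch, with hypothetical hostnames standing in for theirs:

    # .htaccess on the old .ac.uk subdomain: permanently redirect
    # every path to the same path on the new .co.uk domain.
    RewriteEngine On
    RewriteRule ^(.*)$ https://www.example.co.uk/$1 [R=301,L]

Even with clean redirects, though, walking away from a high-authority host with many pre-existing links is exactly the risk described above - redirects are generally thought to pass most, not all, of the equity.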
-
Is This Considered Duplicate Content?
My site has entered SEO hell and I am not sure how to fix it. Up until 18 months ago I had tremendous success on Google and Bing, and now my website appears below my Facebook page for the term "Direct Mail Raleigh." What makes it even more frustrating is that my competitors have done no SEO and they are dominating this keyword. I thought that the issue was due to harmful inbound links, and two months ago I disavowed ones that were clearly spam. Somehow my site has actually gone down!
I have a blog that I have updated infrequently, and I do not know if I am getting punished for duplicate content. On Google Webmaster Tools it says I have 279 crawled and indexed pages. Yesterday when I ran the Moz crawl check I was amazed to find 1150 different webpages on my site. Despite the fact that they do not appear in Webmaster Tools, I have three different versions of the same content because of the way the WordPress blog was set up: "http://www.marketplace-solutions.com/report/part2leadershi/", "http://www.marketplace-solutions.com/report/page/91/" and "http://www.marketplace-solutions.com/report/category/competent-leadership/page/3/"
What does not make sense to me is why Google only indexed 279 webpages AND why Moz did not identify these three webpages as duplicate content with the Crawl Test tool. Does anyone have any ideas? Would it be as easy as creating a massive robots.txt file and just putting 2 of the 3 URL patterns in that file? Thank you for your help.
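In case it helps to see it concretely: blocking the paginated and category archives via robots.txt is only a few lines, not a massive file. A minimal sketch, using the /report/ paths from the question above (adjust to the actual structure):

    # robots.txt - block crawling of the paginated and category
    # archive duplicates, leaving the original posts crawlable.
    User-agent: *
    Disallow: /report/page/
    Disallow: /report/category/

One caveat: robots.txt stops crawling, not indexing, so pages already in the index can linger there. A meta robots "noindex, follow" on the archive pages (or a canonical tag pointing at the original post) is often the more reliable fix for this kind of WordPress duplication.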
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similar to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings to find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day. We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.
We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages:
- Super easy to implement.
- Conserves crawl budget for large sites.
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would leave 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
Noindex advantages:
- Does prevent vehicle details pages from being indexed.
- Allows ALL pages to be crawled (advantage?).
Noindex disadvantages:
- Difficult to implement: the vehicle details pages are served via Ajax, so there is no page head in which to place a meta tag. The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex header based on querystring variables, similar to this stackoverflow solution (a sketch follows below this question). This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindex pages. I say "forces" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt.
Hash (#) URL advantages:
- By using hash (#) URLs for the links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links. Best of both worlds: the crawl budget isn't overtaxed by thousands of noindex pages, and the internal links that got the robots.txt-disallowed pages indexed are gone.
- Accomplishes the same thing as "nofollowing" these links, but without looking like PageRank sculpting (?).
- Does not require complex Apache stuff.
Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt - the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that.
If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallowal in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed - it could easily get stuck or lost, it seems like a waste of resources, and in some shadowy way it feels bad for SEO.
My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and it conserves crawl budget while keeping the vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links structured like this.
Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
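For reference, a minimal sketch of the X-Robots-Tag approach discussed above, assuming Apache with mod_rewrite and mod_headers, and assuming a hypothetical querystring parameter (vehicle_id) that identifies detail-page requests - the parameter name is invented for the example, not the plugin's actual one:

    # Flag requests for vehicle detail pages by querystring variable...
    <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{QUERY_STRING} (^|&)vehicle_id= [NC]
        RewriteRule .* - [E=VEHICLE_DETAIL:1]
    </IfModule>
    # ...and send a noindex header only on the flagged requests.
    <IfModule mod_headers.c>
        Header set X-Robots-Tag "noindex, follow" env=VEHICLE_DETAIL
    </IfModule>

(In some .htaccess setups the variable arrives prefixed as REDIRECT_VEHICLE_DETAIL after internal rewrites, so both names may need checking.) As the question notes, this only works on hosts that allow these directives, and it cannot be combined with a robots.txt disallow on the same URLs, since Googlebot must be allowed to fetch the page to see the header.
-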
Is legacy duplicate content an issue?
I am looking for some proof, or at least evidence, as to whether or not sites are being hurt by duplicate content. The situation is that there were 4 content-rich newspaper/magazine-style sites that were basically just reskins of each other [ a tactic used under a previous regime 😉 ]. The least busy of the sites has since been discontinued & 301d to one of the others, but the traffic on the discontinued site was so low as to be lost in noise, so it is unclear if that was any benefit. For the last ~2 years all the sites have had unique content going up, but the archives of articles that appear on all 3 remaining sites are still there. Now I would like to know whether to redirect, remove, or rewrite the content, but it is a big decision - the number of duplicate articles? 263,114! Is there a chance this is hurting one or more of the sites? Is there any way to prove it, short of actually doing the work?
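A middle path between redirecting and rewriting 263,114 articles is the cross-domain rel=canonical, which Google supports precisely for content duplicated across sites. A sketch, with placeholder domains standing in for the three sites:

    <!-- In the head of each duplicate article on the two secondary
         sites, point at the chosen master copy: -->
    <link rel="canonical" href="http://site-a.example/articles/some-article/">

That consolidates the duplicate signals onto one copy without breaking any existing URLs, and it can be rolled out (and measured) on a subset of the archive before committing to all of it - which partially answers the "any way to prove it short of doing the work" question.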
-
The use of subdomains to improve SEO?
A client's website provides a number of trade services and has a page for each service - for example carpentry, electrician, plumbing, etc. Currently these pages are found at URLs like domain.co.uk/bathrooms/bathrooms.html. I am trying to optimise each page better, as they are competing with other sites who, for example, sell bathrooms rather than offer bathroom installers or plumbers. As part of the on-page optimisation I plan to change the page names and directory structure. I had an idea to split the website down into subdomains for its various sections, i.e. for all their services:
1.) Create a subdomain such as http://plumber.domain.co.uk
2.) Upload the relevant content (in this example, the plumbing page) to the subdomain location
3.) Correct all the links to absolute URLs for each subdomain
Will this help target better use of keywords in the URL in terms of SEO efforts? Hope it makes sense. Thanks, Darren
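One practical note if you go the subdomain route: you would also want permanent redirects from the old directory URLs to the new subdomain locations, so existing rankings and links carry over. A minimal Apache sketch - the service paths and subdomain names here are invented for the example:

    # .htaccess on domain.co.uk: 301 each old service directory
    # to its new subdomain equivalent.
    RewriteEngine On
    RewriteRule ^plumbing/(.*)$ http://plumber.domain.co.uk/$1 [R=301,L]
    RewriteRule ^bathrooms/(.*)$ http://bathrooms.domain.co.uk/$1 [R=301,L]

Bear in mind that each subdomain may be treated as a separate site for link-equity purposes, which is a common argument for subdirectories (e.g. domain.co.uk/plumbing/) over subdomains.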
-
.co vs .com
Hello Mozzers. Question - does it make a big difference having a .co vs a .com? I am trying to get a URL with the actual keywords in it - for example, blackboots.com. I see that the .com is taken but the .co is available; is it a good idea to buy it? Also, what about hyphens in URLs - do they hurt or help if you actually have the keywords in the URL? Thanks much - you rock, V
-
Serving different content based on IP location
I have a city-centric website. For the sake of simplicity, say I only have 2 cities - City A and City B. Depending on a user's IP address, they will either get City A or City B. Users can change their location through JavaScript on the pages, but there is no cross-linking between cities. By this, I mean that unless you can read or execute JavaScript, there is no way for you to get from City A to City B. My concern is this: Googlebot comes to my site, and we serve them up City A. How does City B get discovered if Googlebot doesn't read JavaScript? We have an XML sitemap plus plenty of backlinks to City B. Is this sufficient? Should I provide a static link to City B (and vice versa) on the homepage for crawling purposes?
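On that last question: yes, a plain crawlable link is the simplest insurance, since Googlebot discovers pages chiefly by following ordinary href links and generally crawls from US IP addresses - meaning an IP-based default can hide one city from it entirely. A minimal sketch (the URLs are invented for the example):

    <!-- Static, crawlable links - e.g. in the footer - alongside
         the JavaScript location switcher: -->
    <a href="http://www.example.com/city-a/">City A</a>
    <a href="http://www.example.com/city-b/">City B</a>

An XML sitemap helps discovery, but internal links are what let Googlebot understand how City B fits into the site's structure.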