Server Vs Authority
-
Deciding whether to go for a subdirectory or ccTLD structure.
So the tradeoff would be a single server location (which can affect local rankings if the server is outside the target country) vs. better passing of link authority.
Which factor is more important?
-
ccTLDs and subdirectories are both valid, but it is preferable to use a subdirectory instead of a ccTLD.
By the way, duplicate content can be a huge issue in that case, whether you go for a ccTLD, a subdirectory, or even a different domain.
I would highly recommend using a subdirectory with unique text in it! Or use a different language (the same content in a different language won't be a problem!)
-
Thanks Rebekah,
It's all in the same language with very similar content. Wouldn't duplicate content be an issue for both strategies?
-
Will the content be in a different language? If so, I would recommend the sub-directory. If not, the ccTLD. It's personal preference - Google has said you could use either - but I can see a lot of duplicate content issues if the sub-directory is in the same language.
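For the same-language, multi-country case being discussed, hreflang annotations are the usual way to signal that country versions are regional alternates rather than duplicates. A minimal sketch, assuming a hypothetical example.com with /uk/ and /us/ subdirectories (the domain, paths, and locale codes are placeholders, not from this thread):

```html
<!-- In the <head> of https://www.example.com/uk/services/ (hypothetical URL) -->
<!-- Each country version carries the same set: a self-referencing canonical plus an hreflang entry
     for itself, every alternate, and an x-default fallback -->
<link rel="canonical" href="https://www.example.com/uk/services/" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/services/" />
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/services/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/services/" />
```

The same annotations work across ccTLDs too, so hreflang doesn't settle the subdirectory vs ccTLD choice by itself; it mainly reduces the duplicate-content risk either way.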
Related Questions
-
New domain wipes out domain authority
A client wanted to change their domain name, which we have now done. The site content itself is exactly the same. We put 301 redirects in place so that Google searchers would be sent from the old site to the new one. However, Moz then said that it couldn't crawl the old domain because of the redirects and advised creating a brand new campaign for the new domain. We have done this, but now Moz says that the Domain Authority of the new site is 2 (it was 14 on the old domain). Specifics are:
old domain: https://ryemeadcleaning.co.uk
new domain: https://ryemeadgroup.co.uk
So basically it seems like we're starting again from scratch with the new domain and all the SEO from the old domain has been lost? Have we done it wrong?
Technical SEO | mfrgolfgti
WMT "Index Status" vs Google search site:mydomain.com
Hi - I'm working for a client with a manual penalty. In their WMT account they have 2 pages indexed. If I search for "site:myclientsdomain.com" I get 175 results, which is about right. I'm not sure what to make of the 2 indexed pages - any thoughts would be very appreciated.
Technical SEO | JohnBolyard
-
Easy Question: regarding noindex meta tag vs robots.txt
This seems like a dumb question, but I'm not sure what the answer is. I have an ecommerce client who has a couple of subdirectories, "gallery" and "blog". Neither directory gets a lot of traffic or really turns into many conversions, so I want to remove the pages so they don't drain my page rank from more important pages. Does this sound like a good idea? I was thinking of either disallowing the folders via the robots.txt file, adding a "noindex" tag, 301 redirecting, or deleting them. Can you help me determine which is best?

**DEINDEX:** As I understand it, the noindex meta tag is going to allow the robots to still crawl the pages, but they won't be indexed. The supposed good news is that it still allows link juice to be passed through. This seems like a bad thing to me because I don't want to waste my link juice passing to these pages. The idea is to keep my page rank from being diluted on these pages. A similar question: if page rank is finite, does Google still treat these pages as part of the site even if it's not indexing them? If I do deindex these pages, I think there are quite a few internal links to them. Even though these pages are deindexed, they still exist, so it's not as if the site would return a 404, right?

**ROBOTS.TXT:** As I understand it, this will keep the robots from crawling the page, so it won't be indexed and the link juice won't pass. I don't want to waste the page rank of links pointing to these pages, so is this a bad option?

**301 redirect:** What if I just 301 redirect all these pages back to the homepage? Is this an easy answer? Part of the problem with this solution is that I'm not sure if it's permanent, but even more important is that currently 80% of the site is made up of blog and gallery pages, and I think it would be strange to have the vast majority of the site 301 redirecting to the home page. What do you think?

**DELETE PAGES:** Maybe I could just delete all the pages. This will keep the pages from taking link juice and will deindex them, but I think there are quite a few internal links to these pages. How would you find all the internal links that point to these pages? There are hundreds of them.
Technical SEO | Santaur
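As a rough sketch of the deindex option discussed above (the /gallery/ and /blog/ paths are hypothetical), the meta robots tag goes in the head of each page to be dropped; note it only works if the page is not also blocked in robots.txt, since a blocked page can never have the tag crawled and seen:

```html
<!-- In the <head> of each /gallery/ or /blog/ page to drop from the index (hypothetical paths) -->
<!-- "noindex" asks Google to remove the page from its index; "follow" asks crawlers to keep
     following the links on the page, so it isn't a crawl dead end -->
<meta name="robots" content="noindex, follow" />
```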
-
Categories in Places Vs Local
Say you are listed with both Google Places and Google Local. Places still allows custom categories, while Local limits you to preset categories. Which is the better strategy: to build service pages following the custom services available in Places, or to build out service pages following the (allowed) preset categories in Local?
Technical SEO | waynekolenchuk
-
Authorship Markup worth it for "invisible" authors
Greetings everyone!

Background: I help run multiple continuing education sites for Allied Health professionals. Our editors do a great job of getting some of the best authors in their respective fields to come onto the site and present webinars, and we publish articles around those presentations. I would love to be able to use the rel=author tag on these sites, as the authors we use help to improve our credibility when a user is on the site, and I would like to take advantage of this in the SERPs. The issue is that while most of these authors are leaders in their respective fields and have published in many academic publications, they are not on Facebook or Twitter, let alone Google+. Also, they are probably not interested in setting up a G+ profile. They are "famous" and well published within their fields, yet they are somewhat "invisible" on the web. We are looking to implement author bios on our site and could then use the rel=author tag internally, which seems like a good first step. The question is then around linking out with rel=me to any profiles (FB, Twitter, G+), and, as I mentioned above, those online profiles are pretty scarce.

Question / Discussion:
1. Is it worth it to set up all the authorship markup to internal bios on a site when many of the authors are "invisible" on G+, Twitter, FB, etc., so that I will be limited in how I can link rel=me to those profiles?
2. If a Google+ profile is not available for an author, what do you prefer to link to? Would you say FB over Twitter because FB has more users, or if an author has both profiles but uses Twitter more often, would you link to the Twitter profile instead?
3. Many of these authors work at a university and have a bio page on the university website; would it be worth linking to that profile? How do you judge the "best" place to link to if there is no Google+ profile?

Thanks!
Technical SEO | CleverPhD
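A sketch of the two-step pattern mentioned in the question - rel=author from the article to an on-site bio, then rel=me from the bio to whatever external profiles actually exist. The author name, paths, and profile URLs below are invented for illustration:

```html
<!-- On the article page: byline pointing to the on-site author bio (hypothetical author and path) -->
<p>Presented by <a rel="author" href="/authors/jane-doe/">Jane Doe</a></p>

<!-- On /authors/jane-doe/: rel="me" links only to profiles the author really has,
     e.g. a university faculty page or a Twitter account; omit networks they don't use -->
<a rel="me" href="https://www.example.edu/faculty/jane-doe">Faculty profile</a>
<a rel="me" href="https://twitter.com/janedoe_slp">Twitter</a>
```

Without a Google+ profile at the end of the chain, this generally won't produce the authorship rich snippet in the SERPs, but the markup is in place if a profile is added later.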
-
Rel=author: Which Google+ profile do I use (personal profiles or profiles set up under company email domain)?
Since our organization uses Google Business Apps, everyone in our org has a Google account under our company's domain name. When Google+ came out, a lot of our employees set up two separate Google+ accounts (one under their work email address and one under their personal email address). Some people use one account more than the other. I'm about to set up rel=author on our blog, but I'm not sure which profiles to link to: the personal account, the business account, or the account the individual uses the most?
Technical SEO | janrain
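Whichever single profile is chosen, the blog then links each post to it with rel=author, and that Google+ profile lists the blog under "Contributor to" so the link is reciprocal. A minimal sketch - the profile ID below is made up for illustration:

```html
<!-- In the <head> of each post by this author; the Google+ profile ID is invented for illustration -->
<link rel="author" href="https://plus.google.com/112233445566778899000" />
<!-- A visible byline link does the same job:
     <a rel="author" href="https://plus.google.com/112233445566778899000">Author Name</a> -->
```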
-
Internal search: rel=canonical vs noindex vs robots.txt
Hi everyone, I have a website with a lot of internal search results pages indexed. I'm not asking if they should be indexed or not - I know they should not, according to Google's guidelines - and they create a bunch of duplicate pages, so I want to solve this problem. The thing is, if I noindex them, the site is going to lose a non-negligible chunk of traffic: nearly 13% according to Google Analytics!

I thought of blocking them in robots.txt. This solution would not keep them out of the index, but the pages appearing in Google SERPs would then look empty (no title, no description), so their CTR would plummet and I would lose a bit of traffic too.

The last idea I had was to use a rel=canonical tag pointing to the original search page (which is empty, without results), but it would probably have the same effect as noindexing them, wouldn't it? (I've never tried, so I'm not sure.)

Of course I did some research on the subject, but each of my findings recommended only one of the 3 methods! One even recommended a noindex + robots.txt block, which is stupid because the noindex would then be useless. Is there somebody who can tell me which option is the best to keep this traffic? Thanks a million
Technical SEO | JohannCR
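If the rel=canonical route gets tested, the tag on a parameterized results page would look roughly like the sketch below (the domain and query parameter are hypothetical). Worth noting that canonical is only a hint, and Google tends to ignore it when the pages aren't near-duplicates, which is partly why it often ends up behaving like the noindex option:

```html
<!-- In the <head> of https://www.example.com/search?q=blue+widgets (hypothetical internal search URL) -->
<!-- Consolidates signals onto the base search page; the noindex alternative discussed above would be
     <meta name="robots" content="noindex, follow"> on each results page instead -->
<link rel="canonical" href="https://www.example.com/search" />
```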