Sub Domain rel=canonical to Main Domain
-
Just a quick one: I have the following example scenario.
Main Domain: http://www.test.com
Sub Domain: http://sub.test.com
What I am wondering is whether I can add a rel=canonical on the sub domain pointing to the main domain. I don't want to de-index the whole sub domain; only a few pages are duplicated from the main site.
Is it easier to de-index the individual sub domain pages, or to add a rel=canonical back to the main domain?
Much appreciated
Joseph
-
Canonicalizing a page on http://sub.test.com to its duplicate on http://www.test.com tells Google that those two individual pages are the same and that it should only index the http://www.test.com version; it won't affect other pages on the subdomain.
If you only have a few subdomain pages that are duplicates of root domain pages, you can simply use canonicals to indicate which pages you want indexed. You won't need to noindex the duplicate ones; they will fall out of Google's index naturally once Google sees which pages are the preferred, canonical versions.
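For illustration, using the example domains from the question (the specific path /duplicate-page is hypothetical), the duplicated page on the subdomain would carry a canonical tag in its head pointing at the preferred main-domain URL:

```html
<!-- In the <head> of http://sub.test.com/duplicate-page -->
<!-- Tells Google the main-domain URL is the preferred version to index -->
<link rel="canonical" href="http://www.test.com/duplicate-page" />
```

Only the pages carrying this tag are affected; the rest of the subdomain continues to be indexed normally.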
-
Hi there,
A rel=canonical tag will not by itself de-index those pages.
To de-index them, add a meta robots tag with noindex (note: it is a meta tag, not a "rel=robots" attribute). You can also keep the rel=canonical, or just 301 redirect the page. Of course, do this on the selected duplicate pages only.
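As a sketch using the question's example domains (the path is hypothetical), the meta robots tag would sit in the head of each duplicate subdomain page you want removed from the index:

```html
<!-- In the <head> of http://sub.test.com/duplicate-page -->
<!-- Asks search engines not to index this page; its links are still followed -->
<meta name="robots" content="noindex, follow" />
```

Alternatively, a 301 redirect from the subdomain page to its main-domain equivalent both de-indexes the duplicate and passes its link equity to the preferred URL.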
Hope it helps.
GR.