How best to deal with www.home.com and www.home.com/index.html
-
Firstly, this is for an .asp site - and all my usual ways of fixing this (e.g. via htaccess) don't seem to work.
I'm working on a site which has www.home.com and www.home.com/index.html - both URLs resolve to the same page/content.
If I simply drop a rel canonical into the page, will this solve my dupe content woes?
The canonical tag would then appear in both www.home.com and www.home.com/index.html cases.
If the above is OK, which version should I be going with - www.home.com or www.home.com/index.html?
Thanks in advance folks,
James @ Creatomatic -
A 301 redirect would be the preferred method; if that's not available, then use the canonical option.
There's a great SEOMoz blog post on this. Here is a quote pertaining to your issue:
"Multiple Versions of the Homepage
This is another common mistake. Potentially a homepage URL could be accessed through the following means, depending on how it has been built -
http://seomoz.org
http://www.seomoz.org/home.html
http://www.seomoz.org/index.html
If the homepage can be accessed via these types of URLs, they should 301 to the correct URL, which in this case would be www.seomoz.org.
Quick caveat - the only exception would be if these multiple versions of the homepage served a unique purpose, such as being shown to users who are logged in or have cookies dropped. In this case, you'd be better to use rel=canonical instead of a 301."
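Since this is an .asp site where an htaccess rule isn't an option, the rough IIS equivalent would be a rewrite rule in web.config - just a sketch, assuming IIS 7 or later with the URL Rewrite module installed and that www.home.com/ is the version you want to keep:

<!-- hypothetical web.config fragment: permanently redirect /index.html to the root URL -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Homepage to root" stopProcessing="true">
          <match url="^index\.html$" />
          <action type="Redirect" url="/" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

If URL Rewrite isn't available, and the homepage is actually served by an ASP script rather than a static .html file, the same 301 can also be sent from the page itself via Response.Status = "301 Moved Permanently" plus a Location header.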
Hope it helps.
-
Many thanks Istvan - I'll give that a try.
-
Hi James,
If you have the rel=canonical tag on both of the pages, it should do the trick.
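For example, something like this in the <head> of both versions - just a sketch, assuming www.home.com/ is the version you want to keep:

<link rel="canonical" href="http://www.home.com/" />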
Or another possible solution would be to create a 301 redirect for www.home.com/index.html -> www.home.com/
Beware: if you use both the redirect and the canonical, make sure they point to the same URL.
I have seen infinite loops created for search engines where the canonical pointed to the trailing-slash ("/") version while the 301 redirected to the non-slash version.
I hope this helps.
greetings,
Istvan
Related Questions
-
Website URL, Robots.txt and Google Search Console (www. vs non www.)
Technical SEO | Badiuzz
Hi MOZ Community,
I would like to request your kind assistance on domain URLs - www. vs non-www. Recently, my team moved to a new website where a 301 redirection has been done.
Original URL: https://www.example.com.my/ (with www.)
New URL: https://example.com.my/ (without www.)
Our current robots.txt sitemap: https://www.example.com.my/sitemap.xml (with www.)
Our Google Search Console property: https://www.example.com.my/ (with www.)
Questions:
1. How/Should I standardize these so that Google crawler can effectively crawl my website?
2. Do I have to change back my website URLs to (with www.) or I just need to update my robots.txt?
3. How can I update my Google Search Console property to reflect accordingly (without www.), because I cannot see the options in the dashboard.
4. Are there any to-dos needed, such as canonicalization, or should I wait for Google to automatically detect and change it, especially in the GSC property? Really appreciate your kind assistance. Thank you,
Badiuzz
-
Pages not indexed
Technical SEO | conversal
Hey everyone,
Despite doing the necessary checks, we have this problem that only a part of the sitemap is indexed, and we don't understand why this indexation doesn't want to take place. For a client we have several projects on the website with several subpages, but only a few of these subpages are indexed. Each project has 5 to 6 subpages, and they all should be indexed.
Project: https://www.brody.be/nl/nieuwbouwprojecten/nieuwbouw-eeklo/te-koop-eeklo/
Mainly sub-elements of the page are indexed: https://www.google.be/search?source=hp&ei=gZT1Wv2ANouX6ASC5K-4Bw&q=site%3Abrody.be%2Fnl%2Fnieuwbouwprojecten%2Fnieuwbouw-eeklo%2F&oq=site%3Abrody.be%2Fnl%2Fnieuwbouwprojecten%2Fnieuwbouw-eeklo%2F&gs_l=psy-ab.3...30.11088.0.11726.16.13.1.0.0.0.170.1112.8j3.11.0....0...1c.1.64.psy-ab..4.6.693.0..0j0i131k1.0.p6DjqM3iJY0
Do you have any idea what is going wrong here?
Thanks for your advice! Frederik
Digital marketeer at Conversal
-
Best Web-site Structure/ SEO Strategy for an online travel agency?
Dear Experts! I need your help with pointing me in the right direction. So far I have found scattered tips around the Internet, but it's hard to make a full picture with all these bits and pieces of information without professional advice.
My primary goal is to understand how I should build my online travel agency website's (https://qualistay.com) structure, so that I target my keywords on the correct pages and do not create duplicate content. In my particular case I have very similar properties in similar locations in Tenerife. Many of them are located in the same villa or apartment complex, thus it is very hard to come up with a unique description for each of them, not to mention amenities and pricing blocks, which are standard and almost identical (I don't know if Google sees this as duplicate content).
From what I have read so far, it's better to target archive pages rather than every single property. At the moment my archive pages are: all properties (includes all property types and locations), and a page for each location (includes all property types). Does it make sense adding archive pages by property type in addition to, or instead of, the location ones if I, for instance, target separate keywords like 'villas costa adeje' and 'apartments costa adeje'? At the moment, the title of the respective archive page, "Properties to rent in costa adeje: villas, apartments", in principle targets both keywords...
Does using the same keyword in a single property listing cannibalize the ranking of the archive page it links back to? Or not, unless Google specifically identifies this as duplicate content (which one can see in Google Search Console under HTML Improvements) and/or the archive page has more incoming links than a single property?
If targeting only archive pages, how should I optimize them in such a way that they stay user-friendly? I have created (though not yet fully optimized) descriptions for each archive page just below the main header, but I have them partially hidden (collapsible) using JS in order to keep visitors' focus on the properties. I know that Google does not rank hidden content high, at least at the moment, but since there is a new Mobile First algorithm coming up in the near future, they promise not to punish mobile sites for collapsible content and will use the mobile version to rate the desktop one. Does this mean I should not worry about hidden content anymore, or should I move the description to the bottom of the page and make it fully visible?
Your feedback will be highly appreciated! Thank you! Dmitry
Technical SEO | qualistay
-
Best SEO service/process to harness the power of quality backlinks?
What/who would you recommend for those looking for a strategy around realizing the benefits of high-quality backlinks? We have tons of earned links from DA 90+ sites, but we don't think we are realizing the full benefit due to on-site issues. We have scraper sites outranking us. Would it be a technical on-page audit? Any guidance appreciated.
Technical SEO | loveit
-
<sub> & <sup> tags, any SEO issues?
Hi - the content on our corporate website is pretty technical, and we include chemical element codes in the text that users would search on (like SO2, CO2, etc.). A lot of times our engineers request that we list the codes correctly, with a <sub> on the last number. Question - does adding this code into the keyword affect SEO? The code would look like SO<sub>2</sub>. Thanks.
Technical SEO | Jenny1
-
Help! www and non-www URLs are driving me mad!
Sorry folks, I'm a very recently joined member, and after a five-year gap in creating websites, I've decided to get back into the saddle and start again. Boy, how things have changed! I'm soaking up all sorts of information from everywhere I can to get up to date with these changes, but I've come across this www vs non-www problem in a big way. I realise there are already posts in here about this, but each time I read them, my mind seems to slip into some sort of loop that does not get anywhere.
Basically, I think Google has indexed most of my pages as non-www, and only a handful as www's. I have opened two accounts in Google Webmaster Tools for both www and non-www, and declared my preference for both accordingly. That was two days ago.
As unprofessional as it may sound, I use Serif Web Plus X6, simply because it did the job six years ago, and it's all I know until I find and teach myself something better. My question is this - I can only create one page in X6, and yet there are two versions indexed in Google (although not all of them). I can only amend the one page that exists in X6, so how do I canonicalize two pages when there's only the one version I have access to amending? Or am I missing the point??? I hope that made sense?!
I wouldn't mind, but I specified that I didn't want the site to be indexed yet with 'no follow', as it's nowhere near finished, but for some reason (probably due to placing Adsense ads on there) Google went ahead and indexed it anyway! The site is either http://www.cushioncutengagementringsstore.com or http://cushioncutengagementringsstore.com, depending on how you look at it! Any light you can shed on this would be gratefully received! Thanks. Cem.
Technical SEO | ConwyWebDesign
-
OK to block /js/ folder using robots.txt?
Technical SEO | AndreVanKets
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU), but what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear - we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
-
Follow up from http://www.seomoz.org/qa/discuss/52837/google-analytics
Ben, I have a follow-up question from our previous discussion at http://www.seomoz.org/qa/discuss/52837/google-analytics. To summarize, to implement what we need, we need to do three things:
1. Add GA code to the Darden page:
_gaq.push(['_setAccount', 'UA-12345-1']);
_gaq.push(['_setAllowLinker', true]);
_gaq.push(['_setDomainName', '.darden.virginia.edu']);
_gaq.push(['_setAllowHash', false]);
_gaq.push(['_trackPageview']);
2. Change links on the Darden page to look like http://www.darden.virginia.edu/web/MBA-for-Executives/, and change the "Apply Now" link from <a href="https://darden-admissions.symplicity.com/applicant">Apply Now</a> into <a href="https://darden-admissions.symplicity.com/applicant" onclick="_gaq.push(['_link', 'https://darden-admissions.symplicity.com/applicant']); return false;">Apply Now</a>.
3. Have Symplicity add this code:
_gaq.push(['_setAccount', 'UA-12345-1']);
_gaq.push(['_setAllowLinker', true]);
_gaq.push(['_setDomainName', '.symplicity.com']);
_gaq.push(['_setAllowHash', false]);
_gaq.push(['_trackPageview']);
Due to our CMS system, it does not allow the user to add onClick to the link, so we CANNOT do part 2. What will be the result if we have only 1) and 3) implemented? Will the data still be fed to GA account 'UA-12345-1'? If not, how can we get cross-domain tracking if we cannot change the link code? Nick
Technical SEO | Darden