Duplicate Homepage: www.mysite.com/ and www.mysite.com/default.aspx
-
Hi,
I have a question regarding our client's site, http://www.outsolve-hr.com/, which runs on ASP.NET.
Google has indexed both www.outsolve-hr.com/ and www.outsolve-hr.com/default.aspx, creating a duplicate content issue. We have added a rel=canonical tag pointing to www.outsolve-hr.com/ to the default.aspx page. Now, because www.outsolve-hr.com/ and www.outsolve-hr.com/default.aspx are the same page on the backend, the tag also appears on http://www.outsolve-hr.com/ when I view the source of the page loaded in a browser. Is this a problem? We cannot do a 301 redirect from www.outsolve-hr.com/default.aspx to www.outsolve-hr.com/, because on the backend they are the same page and the redirect would create an infinite loop.
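For reference, the tag we added in the <head> of default.aspx points at the root URL and looks roughly like this:

    <!-- canonical tag on default.aspx, pointing at the preferred root URL -->
    <link rel="canonical" href="http://www.outsolve-hr.com/" />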
So my question is two-fold:
1. Will Google penalize the site for having the rel=canonical tag on the actual homepage, i.e. the canonical URL itself?
2. Is rel="canonical" the best solution to fix the duplicate homepage issue on ASP.NET?
And lastly, if Google has not indexed duplicate pages, such as https://www.outsolve-hr.com/DEFAULT.aspx, is it a problem that they exist?
Thanks in advance for your knowledge and assistance.
Amy
-
The basic steps outlined below should work. CAVEAT: I'm a Linux/Apache person, so I don't know the specific implementation details for a Windows/.NET environment (but I believe it IS doable - hopefully someone else can verify or expand on that).
1) Copy the default index (default.aspx) to a new name (example: mydefault.aspx)
2) Make mydefault.aspx the directory index for the root directory (sketched below)
3) Modify the original default.aspx so that all it contains is a redirect to http://www.outsolve-hr.com/ (also sketched below)
4) NOW default.aspx can be 301'd safely
5) Don't ever build any links to "mydefault.aspx" or you'll just re-create the problem!
Existing links to default.aspx would then be resolved safely, and it's highly unlikely new ones to mydefault.aspx will appear.
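I can't test this myself, so treat the following as a sketch for a .NET person to verify rather than a known-good config. My understanding is that on IIS 7+ step 2 can be done in web.config and steps 3-4 by making default.aspx nothing but a permanent redirect (on older IIS the default document is set in the site properties instead):

    <!-- step 2 sketch: in web.config under <configuration> (IIS 7+), make mydefault.aspx the directory index -->
    <system.webServer>
      <defaultDocument>
        <files>
          <clear />
          <add value="mydefault.aspx" />
        </files>
      </defaultDocument>
    </system.webServer>

    <%-- steps 3-4 sketch: default.aspx contains nothing but a 301 to the root URL --%>
    <%@ Page Language="C#" %>
    <script runat="server">
        protected void Page_Load(object sender, System.EventArgs e)
        {
            // No loop: the root URL is now served by mydefault.aspx, not this file.
            // On .NET 4.0+, Response.RedirectPermanent("http://www.outsolve-hr.com/") does the same thing.
            Response.StatusCode = 301;
            Response.AddHeader("Location", "http://www.outsolve-hr.com/");
            Response.End();
        }
    </script>

Either way, the key point is that only default.aspx issues the 301, so the root URL itself never redirects.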
Probably worth looking into, since a 301 is more powerful - rel=canonical is viewed by some search engines as more of a "suggestion."
-
To all your questions: you did exactly the right thing. I run ASP.NET sites too, so I totally understand the problem.
I add a noindex to all https pages, FWIW.
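In case it's useful, the noindex I mean is just the standard robots meta tag on the https versions of the pages (or the equivalent X-Robots-Tag HTTP header), something like:

    <!-- on the https duplicates only: keep them out of the index -->
    <meta name="robots" content="noindex" />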
Related Questions
- What should I name my Wordpress homepage?
- PR / News stories across multiple sites - is it still duplicate content?
- Is it needed to use http:// or not?
- Duplicate title/content errors for blog archives
- How to redirect my .com/blog to my server folder /blog?
- Robots.txt to disallow /index.php/ path
- Duplicate content with same URL?
- Follow up from http://www.seomoz.org/qa/discuss/52837/google-analytics