Duplicate Content Errors
-
OK, old fat-client developer, new at SEO, so I apologize if this is obvious.
I have 4 errors in one of my campaigns. Two are duplicate content and two are duplicate title.
Here is the duplicate title error:

Rare Currency And Old Paper Money Values and Information.
http://www.antiquebanknotes.com/
http://www.antiquebanknotes.com/Default.aspx

So, my question is... What do I need to do to make this right? They are the same page. In my page load for default.aspx I have this:
this.Title = "Rare Currency And Old Paper Money Values and Information.";
And it occurs only once...
-
They offer IIS7 but I didn't see any point to it...
Now it's dawning on me I may need to reconsider.
-
I don't think it works on IIS6.
There is your problem.
You can fix your www problem in code (see the tutorials), but fixing the default page is not so easy, as in ASP.NET you cannot detect the difference between domain.com and domain.com/default.aspx.
The best thing you can do is make sure all your internal links point to domain.com and not domain.com/default.aspx. Any external links pointing to default.aspx will be wasted, but since no internal links point to default.aspx, you are unlikely to get any more external links pointing to it.
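For example (a hypothetical sketch, the control name is mine), if your master page builds its home link in code-behind, point it at the application root rather than the file:

using System;

// Hypothetical master-page code-behind: send the home link to the site
// root so internal links never reference Default.aspx directly.
public partial class SiteMaster : System.Web.UI.MasterPage
{
    // "HomeLink" is assumed to be an <asp:HyperLink runat="server"> in the markup.
    protected void Page_Load(object sender, EventArgs e)
    {
        HomeLink.NavigateUrl = ResolveUrl("~/"); // renders "/" for a root-deployed site
    }
}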
GoDaddy must be using old servers.
-
It's IIS6 hosted on GoDaddy. I will see if I can get my IIS restarted.
-
Can't see anything wrong with the web.config.
I should have asked before: are you using IIS7?
If so, try to debug with this article: http://blogs.iis.net/ruslany/archive/2008/10/30/debug-and-troubleshoot-rewrite-rules-easily.aspx
I have the same code as you on many sites, all working fine.
You may need to restart IIS?
-
Nor is the non-www redirected to the www.
The code I gave you should be placed inside the configuration tag, i.e. within <configuration></configuration>, and you should only have one <system.webServer> tag.
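To be explicit about the nesting, here is a minimal skeleton (rule bodies omitted, as a guide only):

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- both rules go here, inside this single rules tag -->
      </rules>
    </rewrite>
  </system.webServer>
</configuration>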
If you want, you can send me your web.config and I'll have a look.
-
OK, I have that in place... if I am testing right, the default.aspx redirect isn't working. Maybe it's a hosting issue?
-
What this line is saying is: if the host is not www.antiquebanknotes.com, then redirect (note the negate="true"):
<add input="{HTTP_HOST}" pattern="^www.antiquebanknotes.com$" negate="true" />
What you should have is:

<system.webServer>
  <rewrite>
    <rules>
      <rule name="CanonicalHostNameRule1">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^www.antiquebanknotes.com$" negate="true" />
        </conditions>
        <action type="Redirect" url="http://www.antiquebanknotes.com/{R:1}" />
      </rule>
      <rule name="Default Page" enabled="true" stopProcessing="true">
        <match url="^default.aspx$" />
        <conditions logicalGrouping="MatchAll">
          <add input="{REQUEST_METHOD}" pattern="GET" />
        </conditions>
        <action type="Redirect" url="/" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>

Try that, let me know when you have it in place and I will test it for you.
-
I thought this pattern:

pattern="^antiquebanknotes.com$"

was OK for me because I am looking to take AntiqueBanknotes.com and make it into www.AntiqueBankNotes.com, etc.
I think my hostname redirect is working OK... it's the default.aspx redirect that isn't working. I have it in the same rules section.
Look ok?
<rule name="Default Page" enabled="true" stopprocessing="true"><match url="^default.aspx$"><conditions logicalgrouping="MatchAll"><add input="{REQUEST_METHOD}" pattern="GET"></add></conditions>
<action type="Redirect" url="/"></action></match></rule> -
You have an error above, you should have:

pattern="^www.antiquebanknotes.com$"

not:

pattern="^antiquebanknotes\.com$"

The other rule does work, I just tested it.
When adding it, place it straight under the last rule inside the same rules tag; there should only be one rules tag.
-
Sorry, I forgot to add that I put that rule in before, but it doesn't seem to be working for me. Not sure why, but sadly it's time for my day job, so I will take a look tonight.
-
Add the rule on this page to it:
http://perthseocompany.com.au/seo/tutorials/how-to-fix-canonical-issues-involving-the-default-page
It should work, no worries.
-
Ah, I see... I have added the snippets to the web.config for the default page and the www vs non-www pages. It seems like the www issue is working correctly. I will have to work on the default issue, as it seems to ignore my rule.
This is what I have done on that score.
<rule name="CanonicalHostNameRule1"><match url="(.*)"><conditions><add input="{HTTP_HOST}" pattern="^antiquebanknotes.com$" negate="true"></add></conditions>
<action type="Redirect" url="<a href=" http:="" www.antiquebanknotes.com="" {r:1"="">http://www.AntiqueBanknotes.com/{R:1}" />
</action></match></rule>Thanks for all your help!
-
Using IIS writes to the web.config file for you; the tutorials show you the code it writes, so you can just copy it into the web.config.
There are also some tutorials on how to make an HttpModule that works much like the global.asax file but intercepts the request before it reaches the website; much better performance.
Also, I checked your non-www and it resolved to this:
http://www.antiquebanknotes.com/antiquebanknotes/default.aspx
I notice above you don't have "" on the first URL, is that just a typo?
This tutorial uses the HttpModule; it resolves the other way, from www to non-www, but you can alter it:
http://perthseocompany.com.au/seo/tutorials/using-ihttpmodule-c-sharp
Or you can just edit the web.config as I suggested above.
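For illustration, here is a rough sketch of that HttpModule approach, already altered to send non-www to www (class and handler names are mine, and it assumes the site lives at the domain root; the tutorial's actual code will differ):

using System;
using System.Web;

// Sketch only: 301 any request on the bare domain over to the www host
// before the request ever reaches a page.
public class CanonicalHostModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += OnBeginRequest;
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;
        if (string.Equals(context.Request.Url.Host, "antiquebanknotes.com",
                          StringComparison.OrdinalIgnoreCase))
        {
            context.Response.Clear();
            context.Response.Status = "301 Moved Permanently";
            context.Response.AddHeader("Location",
                "http://www.antiquebanknotes.com" + context.Request.RawUrl);
            context.Response.End();
        }
    }

    public void Dispose() { }
}

You would still need to register it under <system.web><httpModules> in the web.config for IIS6 to load it.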
-
Hi Alan,

I appreciate the links, and now I know what the problem is... sadly, though, I am not able to access IIS as this is running on GoDaddy... I guess I will have to implement something in the global.asax file? I have this code in the global.asax to handle the www and non-www page issue:

string request = HttpContext.Current.Request.Url.ToString().ToLower();
if (request.Contains("http://antiquebanknotes.com"))
{
    HttpContext.Current.Response.Clear();
    HttpContext.Current.Response.Status = "301 Moved Permanently";
    HttpContext.Current.Response.AddHeader("Location", request.Replace (http://antiquebanknotes.com", "http://www.antiquebanknotes.com));
}
-
They are seen as two different pages to search engines and it is splitting your rank.
Here is a tutorial just for this problem:
http://perthseocompany.com.au/seo/tutorials/how-to-fix-canonical-issues-involving-the-default-page
You may want to have a look at this one also, for your www and non-www pages:
http://perthseocompany.com.au/seo/tutorials/how-to-fix-canonical-domain-name-issues
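To put it concretely, the aim of both fixes is that every duplicate address answers with a single 301 to the one page you want indexed, roughly:

http://antiquebanknotes.com/                  -> 301 -> http://www.antiquebanknotes.com/
http://www.antiquebanknotes.com/Default.aspx  -> 301 -> http://www.antiquebanknotes.com/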
Related Questions
-
WordPress Duplicate Content Caused By Categories
Hello, We have a WordPress blog that has around 250 categories. Due to our platform we have a hierarchy structure for 3 separate stores, for example iPhone > Apps > Books. Placing a blog post in the Books category automatically places it into the iPhone and iPhone/Apps categories, causing 3 instances of any blog post in this category. Is this an issue? I have seen 2 schools of thought on categories: 1) index, follow and 2) noindex, follow. I know some of our categories get indexed, but with so many, maybe it is better to noindex them. We also considered reducing our categories to 10 to 12 and using tags to provide the indexed site navigation, as follows: Reviews (category); iPhone Book App, iPhone App Store (tags). But this seems a little redundant? Anyone want to take this on? Thank you, Mike
Technical SEO | crazymikesapps
-
Duplicate title/content errors for blog archives
Hi All, Would love some help; fairly new at SEO and using SEOMoz, I've looked through the forums and have just managed to confuse myself. I have a customer with a lot of duplicate page title/content errors in SEOMoz. It's an Umbraco CMS and a lot of the errors appear to be blog archives and pagination, i.e. http://example.com/blog http://example.com/blog/ http://example.com/blog/?page=1 http://example.com/blog/?page=2 and then also http://example.com/blog/2011/08 http://example.com/blog/2011/08?page=1 http://example.com/blog/2011/08?page=2 http://example.com/blog/2011/08?page=3 (empty page) http://example.com/blog/2011/08?page=4 (empty page). This continues for different years, months, and blog entries, and creates hundreds of errors. What's the best way to handle this for the SEOMoz report and the search engines? Should I rel=canonical the /blog page? I think this would probably affect the SEO of all the blog entries. Use robots.txt? Sitemaps? URL parameters in the search engines? Appreciate any assistance/recommendations. Thanks in advance, Ian
Technical SEO | iragless
-
Duplicate Content for Multiple Instances of the Same Product?
Hi again! We're set to launch a new inventory-based site for a chain of car dealers with various locations across the midwest. Here's our issue: The different branches have overlap in the products that they sell, and each branch is adamant that their inventory comes up uniquely in site search. We don't want the site to get penalized for duplicate content; however, we don't want to implement a link rel=canonical because each product should carry the same weight in search. We've talked about having a basic URL for these product descriptions, and each instance of the inventory would be canonicalized to this main product, but it doesn't really make sense for the site structure to do this. Do you have any tips on how to ensure that these products (same description, new product from manufacturer) won't be penalized as duplicate content?
Technical SEO | newwhy
-
How do I deal with Duplicate content?
Hi, I'm trying SEOMoz and it's saying that I've got loads of duplicate content. We provide phone numbers for cities all over the world, so we have pages like this: https://www.keshercommunications.com/Romaniavoipnumbers.html https://www.keshercommunications.com/Icelandvoipnumbers.html etc., one for every country. The question is, how do I create pages for each one without them showing up as duplicate content? Each page is generated by the server, but it's impossible to write unique text for each one. Also, the competition seem to have done the same, but Google is listing all their pages when you search for 'DID Numbers'. Look for DIDWW or MyDivert.
Technical SEO | DanFromUK
-
Duplicate Content Vs No Content
Hello! A question that has been thrown around a lot at our company has been "Is duplicate content better than no content?". We operate a range of online Flash game sites, most of which pull their games from a feed, which includes the game description. We have unique content written on the home page of the website, but aside from that, the game descriptions are the only text content on the website. We have been hit by both Panda and Penguin, and are in the process of trying to recover from both. In this effort we are trying to decide whether to remove or keep the game descriptions. I figured the best way to settle the issue would be to ask here. I understand the best solution would be to replace the descriptions with unique content; however, that is a massive task when you've got thousands of games. So if you have to choose between duplicate or no content, which is better for SEO? Thanks!
Technical SEO | Ryan_Phillips
-
Thin/Duplicate Content
Hi Guys, So here's the deal: my team and I just acquired a new site using some questionable tactics. Only about 5% of the entire site is actually written by humans; the rest of the 40k+ pages (increasing by 1-2k auto-generated pages a day) are all auto-generated, thin content. I'm trying to convince the powers that be that we cannot continue to do this. Now I'm aware of the issue, but my question is: what is the best way to deal with this? Should I noindex these pages at the directory level? Should I 301 them to the most relevant section where actual valuable content exists? So far it doesn't seem like Google has caught on to this yet, and I want to fix the issue while not raising any more red flags in the process. Thanks!
Technical SEO | DPASeo
-
WordPress Duplicate Content Issues
Everyone knows that WordPress has some duplicate content issues with tags, archive pages, category pages, etc... My question is, how do you handle these issues? Is the smart strategy to use the robots meta tag and add nofollow/noindex to category pages, archive pages, tag pages, etc.? By doing this, are you missing out on the additional internal links to your important pages from your category pages and tag pages? I hope this makes sense. Regards, Bill
Technical SEO | wparlaman
-
URL Duplicate Content Issues (Website Transition)
Hey guys, I just transitioned my website and I have a question. I have built up all the link juice around my old URL styles. To give you some clarity: My old CMS rendered links like this: www.example.com/sweatbands My new CMS renders links like this: www.example.com/sweatbands/ My new CMS's auto-sitemap also generates them with the slash on the end. Also, throughout the website the CMS links to them with the slash at the end, and I link to them without the slash (because it's what I am used to). I have the canonical without the slash. Should I just 301 to the version with the slash before Google crawls again? I'm worried that I'll lose all the trust and ranking I built up for the one without the slash. I rank very high for certain keywords and some pages house a large portion of our traffic. What a mess! Help! 🙂
Technical SEO | Hyrule