How do I fix a duplicate content error with a top level domain?
-
Hi,
I'm getting a duplicate content error from the SEOmoz crawler due to an issue with trailing slashes.
It's flagging www.milengo.com and www.milengo.com/ as having duplicate page titles. However, I'm fairly sure this has already been fixed in the .htaccess file: if you type in the domain with a trailing slash, it automatically redirects to the version without one, so this shouldn't be an issue.
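For reference, the usual .htaccess rule for this kind of redirect looks something like the following (a generic sketch of the standard approach, not necessarily the exact rule in our file):

```apache
RewriteEngine On
# 301 any URL that ends in a slash to the same URL without it.
# Skip real directories so directory indexes keep working. The root URL
# is excluded automatically: after the per-directory prefix is stripped,
# its path is empty, so the (.+) pattern never matches it.
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)/$ /$1 [R=301,L]
```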
I'm stuck here. Any ideas?
Thanks.
Rob
-
Hi Rob,
Couple of things:
1. You can check whether you have a proper 301 in place using a tool like URI Valet. For example, here's a report for the SEOmoz blog. If you include a trailing slash, it automatically redirects to remove it:
http://urivalet.com/?http://www.seomoz.org/blog/#Report
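If you'd rather check this yourself, a few lines of Python can fetch a URL without following redirects and show the status code and Location header (a quick sketch using only the standard library; the function names are my own):

```python
import http.client
from urllib.parse import urlsplit

def first_response(url):
    """Fetch url WITHOUT following redirects; return (status, location)."""
    parts = urlsplit(url)
    Conn = (http.client.HTTPSConnection if parts.scheme == "https"
            else http.client.HTTPConnection)
    conn = Conn(parts.netloc, timeout=10)
    path = (parts.path or "/") + ("?" + parts.query if parts.query else "")
    conn.request("GET", path, headers={"User-Agent": "redirect-check/1.0"})
    resp = conn.getresponse()
    return resp.status, resp.getheader("Location")

def slashless(url):
    """The URL with any single trailing slash removed."""
    return url[:-1] if url.endswith("/") else url
```

Calling `first_response` on an inner URL that has a trailing slash should return a 301 and a Location equal to `slashless(url)` if the redirect is in place. (For the bare homepage the request path is always "/", so the two homepage variants are literally the same HTTP request, which is part of why this particular flag is usually harmless.)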
2. An even better way to address this is with canonical tags. A proper canonical on the homepage (pointing to the version without the slash) will address all duplicate content errors, and may also cover other versions of your URL you didn't anticipate.
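On the page itself, this is a single line in the `<head>` (shown here with the question's domain as the target):

```html
<head>
  <!-- Tells crawlers that every variant of this URL should be
       treated as http://www.milengo.com -->
  <link rel="canonical" href="http://www.milengo.com" />
</head>
```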
3. Although SEOmoz will flag these as errors (because technically they are), in reality a trailing slash on your homepage isn't a big deal. Google has gotten pretty good at figuring these URLs out. That said, it's still important to address other areas of duplicate content on your site.
Hope this helps. Best of luck!
-
Go into Google Webmaster Tools and check your preferred domain setting (with www or without).
Related Questions
-
403 error but page is fine??
Hi, on my report I'm getting a 4xx error. When I look into it, it says there is a critical 403 error on this page: https://gaspipes.co.uk/contact-us/. I can get to the page and see it fine, but I have no idea why it's showing a 403 error or how to fix it. This is the only page the error comes up on. Is there anything I can check/do to get this resolved? Thanks
Moz Pro | JU-Mark
-
Account Error
Hey, I have a Moz account free trial for 30 days. Whenever I analyze a website about machines, https://lattemachinehub.com/, I get an error. Please help me solve this problem.
Moz Pro | alihamughal693
-
Duplicate content : what are best solutions
Hello, I am a beginner with websites. I got my first report, and it says there is some duplicate content. I would like to know: what can I do to solve that issue?
Moz Pro | Dieumerci
-
Duplicate content issue
I'm getting duplicate content warnings from Moz for various slideshows on my posts and pages in WordPress. It seems that when I create a slideshow it exists as its own page, and as these have no text, Moz sees them as duplicates. For example, Moz says http://www.weddingphotojournalist.co.uk/?gallery_page=slideshow&pp_gallery_id=1331991312 is a duplicate of http://www.weddingphotojournalist.co.uk/?gallery_page=slideshow&pp_gallery_id=1000144730. The second of those two slideshows is on this page, http://www.weddingphotojournalist.co.uk/menorca-wedding/, but it also exists as the standalone page above. How can I avoid these being seen as duplicate content?
Moz Pro | simonatkinsphoto
-
Moz crawl duplicate pages issues
Hi, according to the Moz crawl of my website, I have in the region of 800 pages which are considered internal duplicates. I'm a little puzzled by this, even more so as some of the pages it lists as duplicates of one another are not. For example, the Moz crawler considers page B to be a duplicate of page A in the URLs below. (Not sure on the live-link policy, so I've put a space in the URLs to 'unlive' them.)
Page A: http:// nuchic.co.uk/index.php/jeans/straight-jeans.html?manufacturer=3751
Page B: http:// nuchic.co.uk/index.php/catalog/category/view/s/accessories/id/92/?cat=97&manufacturer=3603
One is a filter page for Curvety Jeans and the other a filter page for Charles Clinkard Accessories. The page titles are different and the page content is different, so I've no idea why these would be considered duplicates. Thin, maybe, but not duplicate. Likewise, pages B and C are considered duplicates of page A in the following:
Page A: http:// nuchic.co.uk/index.php/bags.html?dir=desc&manufacturer=4050&order=price
Page B: http:// nuchic.co.uk/index.php/catalog/category/view/s/purses/id/98/?manufacturer=4001
Page C: http:// nuchic.co.uk/index.php/coats/waistcoats.html?manufacturer=4053
Again, these are product filter pages which the crawler would have found using the site's filtering system, but again I cannot find what makes pages B and C duplicates of A. Page A is a filtered result for Great Plains Bags (filtered from the general bags collection), page B is the filtered results for Chic Look Purses from the Purses section, and page C is the filtered results for Apricot Waistcoats from the Waistcoat section. I'm keen to fix the duplicate content errors on the site before it goes properly live at the end of this month (that's why anyone kind enough to check the links will see a few design issues with the site), but in order to fix the problem I first need to work out what it is, and I can't in this case.
Can anyone else see how these pages could be considered duplicates of each other? Checking I've not gone mad! Thanks, Carl
Moz Pro | daedriccarl
-
Site Crawl Error
In the Moz crawl errors, this message appears: MOST COMMON ISSUES: 1. Search Engine Blocked by robots.txt (Error Code 612: Error response for robots.txt). I asked the help staff, but they crawled again and nothing changed. There is only a robots.XML (not .txt) in the root of my webpage, and it contains:
User-agent: *
Allow: /
Allow: /sitemap.htm
Can anyone please help me? Thank you.
Moz Pro | nopsts
-
Tool for scanning the content of the canonical tag
Hey all, a question for you: what is your favorite tool/method for scanning a website for specific tags? Specifically (as my situation dictates now), canonical tags? I am looking for a tool that is flexible, hopefully free, and highly customizable (for instance, one where you can specify the tag to look for). I like the concept of using Google Docs with the importXML feature, but as you can only use 50 of those commands at a time it is very limiting (http://www.distilled.co.uk/blog/seo/how-to-build-agile-seo-tools-using-google-docs/). I do have a campaign set up using the tools, which is great! But I need something that returns a response faster and can get data from more than 10,000 links. Our CMS unfortunately puts out some odd canonical tags depending on how a page is rendered, and I am trying to catch them quickly before they get indexed and cause problems. Eventually I would also like to be able to scan for other specific tags, hence the customizability concern. If we have to write a VB script to get it into Excel, I suppose we can do that. Cheers, Josh
Moz Pro | prima-253509
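As a rough sketch of the roll-your-own route (the class and function names below are my own, not from any particular tool): Python's standard-library HTML parser can pull canonical tags out of a page, and the results for a URL list can be written to CSV, which Excel opens directly, so no VB script is needed:

```python
from html.parser import HTMLParser

class CanonicalParser(HTMLParser):
    """Collect the href of every <link rel="canonical"> in a document."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs, names lowercased
        if tag == "link":
            a = dict(attrs)
            if a.get("rel", "").lower() == "canonical" and "href" in a:
                self.canonicals.append(a["href"])

def find_canonicals(html):
    """Return all canonical hrefs found in an HTML string."""
    parser = CanonicalParser()
    parser.feed(html)
    return parser.canonicals
```

To scan a site, fetch each URL (for example with `urllib.request`) and write one `url, canonical` row per page to a CSV; pages where `find_canonicals` returns an empty list or an unexpected value are the ones to inspect.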