Testing for duplicate content and title tags
-
Hi there,
I have been getting both Duplicate Page Content and Duplicate Title warnings on my crawl diagnostics report for one of my campaigns. I did my research and implemented the preferred domain setting in Webmaster Tools. This did not resolve the crawl diagnostics warnings, and upon further research I discovered the preferred domain setting is only honored by Google, not by other bots like Roger. My only issue was that when I ran an SEOmoz crawl test on the same domain, I saw none of the duplicate content or title warnings, yet they still appear on my crawl diagnostics report. I have now implemented a fix in my .htaccess file to 301 redirect to the www. domain. I want to check whether it has worked, but since the crawl test did not show the issue last time, I don't think I can rely on that. Can you help please?
Thanks,
Claire
-
Thanks Joseph. Very helpful.
-
Hello Claire,
I really don't think you are going to see those errors show up on the next crawl.
Just on another note after seeing your URL...
I would also make all of the links in the site full absolute URLs.
I see the links as:
`<li><a href="/our-services/diabetes-support/get-started/">Diabetes Support</a></li>`
I would add the **http://www.** domain in front of all those links.
-
Thanks Jared, that's awesome.
-
You can always check by testing in your browser, but the best way is to check the header response to make sure the server is sending the proper response (a 301) - your landing pages look good (see below). I use Live HTTP Headers, a Firefox plugin; here's what it tells you:
http://pharmacy777.com.au/our-pharmacies/applecross-village/
GET /our-pharmacies/applecross-village/ HTTP/1.1
Host: pharmacy777.com.au
User-Agent: Mozilla/5.0 (Windows NT 6.0; rv:15.0) Gecko/20100101 Firefox/15.0.1
HTTP/1.1 301 Moved Permanently
Date: Thu, 04 Oct 2012 03:23:17 GMT
Server: Apache/2.2.22 (Ubuntu)
Location: http://www.pharmacy777.com.au/our-pharmacies/applecross-village/

So the redirect is working. The only thing I noticed was that the home page instantly switched to www and didn't even return a 301, so it appears you may have implemented a redirect there outside of .htaccess.
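If you want to sanity-check a header dump like the one above without eyeballing it, a small script can do it. This is a minimal sketch (standard library only, with a hypothetical `parse_redirect` helper) that pulls the status code and Location out of a raw response block such as the Live HTTP Headers output:

```python
# Minimal sketch: parse a raw HTTP response header block (like the
# Live HTTP Headers output above) and confirm a 301 pointing at www.
def parse_redirect(raw_headers: str):
    """Return (status_code, location) from a raw header block."""
    lines = [l.strip() for l in raw_headers.strip().splitlines() if l.strip()]
    # Status line looks like: "HTTP/1.1 301 Moved Permanently"
    status = int(lines[0].split()[1])
    location = None
    for line in lines[1:]:
        name, _, value = line.partition(":")
        if name.lower() == "location":
            location = value.strip()
    return status, location

raw = """HTTP/1.1 301 Moved Permanently
Date: Thu, 04 Oct 2012 03:23:17 GMT
Server: Apache/2.2.22 (Ubuntu)
Location: http://www.pharmacy777.com.au/our-pharmacies/applecross-village/"""

status, location = parse_redirect(raw)
print(status, location.startswith("http://www."))  # 301 True
```

The same check applies to any page on the site: you want a 301 (not a 302) and a Location header on the www hostname.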
If your report is still showing duplicates, make sure that it's not the trailing slash. Your URLs can currently be loaded both ways:
http://www.pharmacy777.com.au/our-pharmacies/applecross/
http://www.pharmacy777.com.au/our-pharmacies/applecross
The best way to find out if the SEOmoz report is counting these as dupes is to export the crawl report to CSV (top right of the crawl report). Then go all the way to the far-right column, called 'duplicate pages', and sort it alphabetically. This column shows you all of the duplicate URLs for each particular URL row. Lots of times you can find little 'surprises' here - that CSV report is priceless!
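You can also hunt for trailing-slash and non-www duplicates in that export programmatically. A rough sketch (the single `URL` column name is an assumption, not the actual export schema): group the crawled URLs by a normalized form and flag any group with more than one member.

```python
# Sketch: group URLs from a crawl export by a normalized form
# (lowercased, www forced, trailing slash stripped) so that
# trailing-slash / non-www variants of the same page collapse together.
import csv
import io
from collections import defaultdict
from urllib.parse import urlsplit

def normalize(url: str) -> str:
    parts = urlsplit(url.strip().lower())
    host = parts.netloc
    if not host.startswith("www."):
        host = "www." + host
    path = parts.path.rstrip("/") or "/"
    return f"{parts.scheme}://{host}{path}"

def find_dupe_groups(rows):
    groups = defaultdict(list)
    for row in rows:
        groups[normalize(row["URL"])].append(row["URL"])
    return {k: v for k, v in groups.items() if len(v) > 1}

# Tiny stand-in for the exported CSV.
sample = io.StringIO(
    "URL\n"
    "http://www.pharmacy777.com.au/our-pharmacies/applecross/\n"
    "http://www.pharmacy777.com.au/our-pharmacies/applecross\n"
    "http://pharmacy777.com.au/our-pharmacies/applecross\n"
    "http://www.pharmacy777.com.au/our-services/\n"
)
dupes = find_dupe_groups(csv.DictReader(sample))
print(len(dupes))  # 1 (the three applecross variants collapse together)
```

Anything this surfaces should then be resolved with a redirect or canonical tag so only one version is crawlable.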
-
Hi Joseph,
Yes, I have done this test and it appears to be working. I just want to be sure I'm not going to be faced with a load of warnings when my next crawl runs on the weekend, because when I implemented the Webmaster Tools preferred domain setting, this is what happened:
- implemented the preferred domain
- ran a crawl test, which looked to have resolved the issue, but then
- 4 days later the scheduled crawl report ran, and all the errors were still present.
Luckily I told the client I had to wait for the report results and didn't claim it was resolved just because the crawl test looked OK! This time I've run the crawl test and done the manual test you suggested, but I want to be able to feed back to the client today if I can (confidently), and I no longer trust the crawl test alone.
Thanks very much for your answer, it's always good to have someone validate your own approach.
Cheers,
Claire
-
-
-
If I understand correctly, you want to see if your redirect has fixed your duplicate content issue.
Right now you still get the error...
I would simply type the URL in the address bar, with and without the www., and see if both versions load.
If the redirect is working, then only one should show up; the other should redirect immediately.
If both show up, then your .htaccess code may have a mistake.
Hope that helps.
-
Update: reporting can be historic, so you are probably looking at results from an older crawl.
-
Hi Claire - we need the URL of your site to check the headers on the 301 redirect!
Fixing this via .htaccess, as you say you did, is definitely a good approach. When I take on a new client, it's on the campaign startup checklist, and it works well. Make sure there aren't any other issues, like the infamous trailing slash, causing duplication. If you provide the URL, a quick check can be made.
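For reference, the non-www to www redirect is usually done with a mod_rewrite block like the following. This is a common pattern, not necessarily the exact rule Claire used, and it assumes Apache with mod_rewrite enabled; `example.com` is a placeholder for your own domain:

```apache
# Force the www hostname with a single 301 redirect.
# Assumes mod_rewrite is available; replace example.com with your domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The `R=301` flag is what makes the redirect permanent; without it Apache defaults to a 302, which would not consolidate the duplicate URLs.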