SEOMoz Crawl Diagnostic indicates duplicate page content for home page?
-
My first SEOMoz Crawl Diagnostic report for my website indicates duplicate page content for my home page. It lists the home page's Page Title and URL twice.
How do I go about diagnosing this?
Is the problem related to the following code in my .htaccess file? (The purpose of the code is to redirect any non-"www" backlink referrals to the "www" version of the domain.)
RewriteCond %{HTTP_HOST} ^whatever\.com$ [NC]
RewriteRule ^(.*)$ http://www.whatever.com/$1 [L,R=301]
Should I get rid of the "http" reference in the second line?
Related to this, there is a notice under "Crawl Notices Found" -- "301 Permanent Redirect" -- which shows my home page as "http://whatever.com" and shows the redirect address as "http://http://www.whatever.com/".
I'm guessing this problem is again related to the redirect code I'm using.
Also...
The report also indicates duplicate content for links that have different parameters added to the URL, e.g. http://www.whatever.com?marker=Blah Blah&markerzoom=13
If I set up a canonical reference for the page, will this fix this?
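(For reference, the kind of canonical tag I have in mind is a single line in the page's head section; whatever.com here is just my placeholder domain:)

```html
<head>
  <!-- Points all parameterized variants of the home page
       back to the one preferred URL -->
  <link rel="canonical" href="http://www.whatever.com/" />
</head>
```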
Thank you.
-
I contacted the help desk as instructed and was told:
"I took a look at the campaign and it looks like our crawler can't parse the 301 redirect you have in place on the main page. The reason for this is the redirect in place, adds two https when rogerbot tries to crawl through it. Roger can’t parse the redirect as is, but it can identify it (as it did in your notice’s report on the crawl diagnostics page). This isn't a problem for browsers since they are made to ignore redirects of this nature. Crawlers on the other hand have a strict code to follow and can't follow redirects like that. When I load up your site [mywebsite.com] right now, it redirects to www.[mywebsite].com. Try creating a new campaign under the domain you are redirecting to, this should clear any issues up."
I did that, and after the new crawl it looked like it worked. However, I then set up another campaign for another website I manage, being sure to use the "www" in front of the domain, and got the same problem again: the home page appears twice as duplicate content.
So I'm back to my primary question: what is the definitive redirect code for converting a non-"www" request to a "www" request? The redirect code quoted in my first post is in use on all of my sites.
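For reference, the full block I would expect to be the standard Apache form is below (whatever.com is a placeholder domain; in a real file each site would use its own host):

```apache
RewriteEngine On
# Match only the bare domain; the dot is escaped so it is not a wildcard,
# and the trailing anchor keeps e.g. whatever.com.example from matching
RewriteCond %{HTTP_HOST} ^whatever\.com$ [NC]
# Permanently (301) redirect to the www host, preserving the requested path
RewriteRule ^(.*)$ http://www.whatever.com/$1 [L,R=301]
```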
-
Hi Perry,
In your browser, are you seeing things redirect to the double http? I think there's a bug in the crawl tools that's causing some false errors right now. Before you go work on the redirect file, could you send an email to help@seomoz.org to first make sure we're not the ones that messed up?
Thanks!