Ok, I have a strange duplicate content issue.
-
Just ran Moz against my site and about fell off my chair: over 1k pages are being flagged as having duplicate content. That's about every page on the site. I downloaded the CSV file and things got even more confusing: the URLs under "URL" and "Duplicate Page Content" are exactly the same. Am I missing something? I would expect the URL under "Duplicate Page Content" to be a different URL, pointing to the page with the exact same content. Any help would be appreciated; traffic is tanking and I need to get this figured out.
Thanks,
-
http://moz.com/learn/seo/redirection can help. You're specifically looking for the "Redirecting Canonical Hostnames" section.
Briefly, Google considers the following to be different pages, even if they all serve exactly the same content. In other words, every one of these URLs can lead to that single piece of content:
example.com
example.com/
example.com/index.html
www.example.com
www.example.com/
www.example.com/index.html
Hope that helps!
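As a reference, here is a minimal sketch of the sort of .htaccess rules that collapse the index.html variants above into the bare directory URL. It assumes an Apache server with mod_rewrite enabled and uses example.com purely as a placeholder; adapt the hostname and scheme to your own site.
# Sketch only: assumes Apache with mod_rewrite; example.com is a placeholder.
RewriteEngine On
# THE_REQUEST holds the original request line, so internal DirectoryIndex
# rewrites of / to /index.html won't re-trigger this rule and loop.
RewriteCond %{THE_REQUEST} /index\.html [NC]
# 301 any .../index.html request back to the folder URL so only one
# version of each page gets indexed.
RewriteRule ^(.*?)index\.html$ /$1 [R=301,L]
The hostname half of the problem (example.com vs. www.example.com) is handled the same way; a rule for that is sketched under the reply further down.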
-
I need a hand, can you point me to the right resource?
-
I took a quick look at your account. The URL and Duplicate URL are ALMOST the same, but if you look closely, you'll see that you've got both non-www and www in there. A simple redirect at the server side to your preferred version will take care of this quite nicely. If you need a hand with that, let us know and we'll help you with the right htaccess file.
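For anyone reading along, a minimal sketch of the kind of htaccess rule being offered here. It assumes Apache with mod_rewrite, www as the preferred hostname, and example.com as a stand-in for the real domain (use https in the target if the site runs on it).
# Sketch only: 301 the bare hostname to the www version; example.com is a placeholder.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
On nginx or IIS the same fix is a server-level 301 from the non-preferred hostname to the preferred one; the key point is that every hostname variant ends up resolving to a single canonical host.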
Related Questions
-
Crawl Issue Question
Hey guys, I have run the crawl on my WordPress site and Moz reports a "Critical crawl issue" for a broken link (404 error): mydomain.com/%25s. I can't find such a link anywhere, and I have run the website through several other tools that scan for broken links, with no such result.
This link doesn't exist on my site at all and I don't know where Moz got it from. I have made changes to my site and recrawled several times, but the specific error persists. Does anyone have any ideas?
Moz Bar | K.Net
-
How can the crawler not access my robots.txt file but report 0 crawler issues?
So I'm getting this error: "Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster." (https://www.evernote.com/l/ADOmJ5AG3A1OPZZ2wr_ETiU2dDrejywnZ8k)
However, Moz is saying I have 0 crawler issues. Have I hit an edge case? What can I do to rectify this situation? I'm looking at my robots.txt file here: http://www.dateideas.net/robots.txt, but I don't see anything that would specifically get in the way. I'm trying to build a helpful resource on this domain and getting zero organic traffic, and I have a sinking suspicion this might be the main culprit. I appreciate your help! Thanks! 🙂
Moz Bar | will_l
-
Duplicate content found in scan
On June 8th we ran a Moz Crawl on our site. We found 144 pages that were flagged with duplicate content.
Again on June 13th we ran another Moz crawl on our site and found 137 pages flagged with duplicate content. Then one final scan on June 22nd found 161 pages with duplicate content. After comparing the three scans, I see that, without making any changes, pages that were not flagged as duplicate content are now being flagged as duplicate content, while at the same time pages that were originally flagged as duplicate content are no longer showing up with duplicate content. I could understand if we had made some changes to these pages, but no changes were made. For example, on the 8th this page was flagged as duplicate content: https://www.stickylife.com/star-magnet
On the 13th and 22nd it was not flagged as duplicate content, but no changes were made to that page. For reference, it was flagged as duplicate content with the following page: https://www.stickylife.com/baseball-glove-magnet. This page was also not changed or altered between these dates. In addition, when Moz scans our site through our campaign every Friday, the results do not match what we see when we do a manual scan: Moz's weekly scan reveals only 14 pages with duplicate content, as opposed to the numbers you see above. Why such inconsistencies in the Moz scans?
Moz Bar | StickyLife
-
Site Crawl report shows strange duplicate pages
Beginning in early Feb, we got a big bump in duplicate pages, and the URLs of the pages are very odd. Example URL:
http://firstname.lastname@website.com/dir/page.php
is reported as a duplicate of http://website.com/dir/page.php. I checked through the site, the nginx conf files, and referring pages, and could not find what is prefixing the pages with 'http://firstname.lastname@'. Any ideas? The person whose name is 'Firstname Lastname' is stumped as well. Thanks.
Moz Bar | Neo4j
-
Odd crawl test issues
Hi all, first post, be gentle... Just signed up for Moz with the hope that it, and the learning, will help me improve my web traffic. I've already managed to run into a bit of woe with one of the sites we have added to the tool: I cannot get the crawl test to do any actual crawling. I've tried to add the domain three times now, but the initial crawl of a few pages (the automatic one when you add a domain to Pro) will not work for me. Instead of getting a list of problems with the site, I get a list of 18 pages where it says 'Error Code 902: Network Errors Prevented Crawler from Contacting Server'.
Being a little puzzled by this, I checked the site myself... no problems. I asked several people in different locations (and countries) to have a go, and no problems for them either. I ran the same site through the Raven Tools site auditor and got some results; it crawled a few thousand pages. I ran the site through Screaming Frog with the Googlebot user agent, and again no issues. I just tried Fetch as Googlebot in WMT and all was fine there. I'm very puzzled, then, as to why Moz is having issues with the site when everyone else is happy with it. I know the homepage takes 7 seconds to load - caching is off at the moment while we tweak the design - but all the other pages (according to Screaming Frog) take an average of 0.72 seconds to load. The site is a Magento one, so we have a lengthy robots.txt, but that is not causing problems for any of the other services. The robots.txt is below.
# Google Image Crawler Setup
User-agent: Googlebot-Image
Disallow:
# Crawlers Setup
User-agent: *
# Directories
Disallow: /ajax/
Disallow: /404/
Disallow: /app/
Disallow: /cgi-bin/
Disallow: /downloader/
Disallow: /errors/
Disallow: /includes/
#Disallow: /js/
#Disallow: /lib/
Disallow: /magento/
#Disallow: /media/
Disallow: /pkginfo/
Disallow: /report/
Disallow: /scripts/
Disallow: /shell/
Disallow: /skin/
Disallow: /stats/
Disallow: /var/
Disallow: /catalog/product
Disallow: /index.php/
Disallow: /catalog/product_compare/
Disallow: /catalog/category/view/
Disallow: /catalog/product/view/
Disallow: /catalogsearch/
#Disallow: /checkout/
Disallow: /control/
Disallow: /contacts/
Disallow: /customer/
Disallow: /customize/
Disallow: /newsletter/
Disallow: /poll/
Disallow: /review/
Disallow: /sendfriend/
Disallow: /tag/
Disallow: /wishlist/
Disallow: /catalog/product/gallery/
# Files
Disallow: /cron.php
Disallow: /cron.sh
Disallow: /error_log
Disallow: /install.php
Disallow: /LICENSE.html
Disallow: /LICENSE.txt
Disallow: /LICENSE_AFL.txt
Disallow: /STATUS.txt
# Paths (no clean URLs)
#Disallow: /.js$
#Disallow: /.css$
Disallow: /.php$
Disallow: /?SID=
# Pagination
Disallow: /?dir=
Disallow: /&dir=
Disallow: /?mode=
Disallow: /&mode=
Disallow: /?order=
Disallow: /&order=
Disallow: /?p=
Disallow: /&p=
If anyone has any suggestions then please, I would welcome them, be it with the tool or my robots.txt. As a side note, I'm aware that we are blocking the individual product pages. There are too many products on the site at the moment (250k plus) with manufacturer default descriptions, so we have blocked them and are working on getting the category pages and guides listed. In time we will rewrite the most popular products and unblock them as we go. Many thanks, Carl
Moz Bar | Arropa
-
FollowerWonk Issue?
I used Followerwonk to do a comparison of the followers of my brand and two competitors. The question I have is that Retweets, Contacts, and URL tweets are broken into percentages all equaling 100%, but how are these calculated to begin with? Is each of the three categories compared to Social Authority, or are they compared to each other? Basically, I ran it on my brand and it says there are 0% retweets, when I know there are at least 6 retweets done by the brand.
Moz Bar | YeslerB2B_Geek
-
OnPage Reports - Duplicate titles and meta descriptions
Hi Moz, I know you guys changed your interface a while back, but I have a question about the new reports. On the old interface, I used to rely on a report that would run automatically when I created a new account, letting me know where the duplicate titles and meta descriptions were across an entire site. Where can I find this report on the new interface? Thanks, Carla
Moz Bar | Carla_Dawson
-
Duplicate Page Content Report on Moz
Hi, I am just wondering about the accuracy of this report - does it pick up all of the duplicate on-page content, or is there a limit? We have an ecommerce store with a lot of copied-and-pasted descriptions, and I'm wondering whether there is a limit on how much the Moz crawler picks up. In other words, once we fix what Moz has detected, will more be detected because the report is limited to displaying, say, up to 200? Hope you understand what I mean. Thanks
Moz Bar | bjs2010