What is Considered Duplicate Content by Crawlers?
-
I am asking this because I have a couple of site audit tools that I use to crawl a site I work on every week, and they are showing duplicate content issues (which I know this site has a lot of), but some of what is flagged as duplicate content makes no sense.
For example, the following URLs were grouped together as duplicate content:

https://www.firefold.com/contact-us
https://www.firefold.com/sale
How are these pages duplicate content? I am confused about what site audit tools consider duplicate content.
Just FYI, this is data from Moz crawl diagnostics, but the SEMrush site auditor is giving me the same type of results.
Any help would be greatly appreciated.
Ryan
-
Yeah, I just started working on this site. I haven't used Moz Analytics much, so I just wanted to see how their crawler handles pages.
And yes, I agree, there are a lot of BIG BIG BIG issues with this site.
I've got a large workload over the next few months, haha.
-
I would add that there's no text on any of those three pages - any "text" one would see there is actually just embedded in an image - which is a huge issue for a number of reasons:
- Search engines see that there's no text - a big no-no.
- You're getting practically no SEO value from the content that would otherwise be there, even though there isn't much of it.
- The page is heavier this way, which makes load times slower.
I want to clarify that there are many bigger issues with these pages - but since your question concerns only duplicate content, I'll leave all of that out for the time being. To summarize: Google, Yahoo, and Bing are just seeing the same banners, sidebars, etc., and then some images in the body of your pages. Hence, duplicate content.
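To see this for yourself, you can check how much text a crawler can actually extract from a page. Here's a rough sketch using Python's standard-library HTML parser (the example markup is hypothetical, not taken from the site in question):

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects the text a crawler can actually 'read': tag contents,
    minus script/style blocks. Text baked into images is invisible."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0  # >0 while inside <script> or <style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())


def visible_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)


# A page whose "copy" lives only in an image gives a crawler nothing to index:
visible_text("<body><img src='hero-text.png' alt=''></body>")  # -> ""
# ...while real markup text is recovered:
visible_text("<p>Contact us today!</p>")  # -> "Contact us today!"
```

If this returns an empty (or near-empty) string for your pages, that's what the search engines are indexing.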
-
Thanks for that information.
It makes sense when I look at the data and the pages from that perspective.
-
Hi Ryan!
Our crawler will flag pages that have at least 90% similarity across the entire source code of the page, not just the body.
The way to interpret the report is that the contact-us page has 35 duplicates, so "gabe" and "sale" are not duplicates of each other in this section; each is only a duplicate of "contact-us". Those URLs may appear with their own duplicate groups further down in the report.
While the pages do not appear similar on the front end, the issue is likely the amount of JavaScript code on those pages.
Our crawler cannot read JavaScript, so we are likely only able to see the template of the page. Other search tools are probably seeing the same thing, as the pages return 79% similarity using this tool: http://www.freebulkseotools.com/similar-page-checker-tool.php
I can't provide much insight from a dev perspective but hope this helps!