Duplicate page report
-
We exported a CSV of our crawl diagnostics related to duplicate URLs, after waiting five days with no response on how Rogerbot can be made to filter them.
My IT lead tells me the spreadsheet is labeled "duplicate URLs", and that is, quite literally, all the spreadsheet is showing.
Rogerbot seems to think that a database ID number is the only valid part of a URL. To replicate: just filter the spreadsheet for any number you see on the page. For example, filtering for 1793 gives us the following result:
URL
http://truthbook.com/faq/dsp_viewFAQ.cfm?faqID=1793
http://truthbook.com/index.cfm?linkID=1793
http://truthbook.com/index.cfm?linkID=1793&pf=true
http://www.truthbook.com/blogs/dsp_viewBlogEntry.cfm?blogentryID=1793
http://www.truthbook.com/index.cfm?linkID=1793
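For anyone who wants to reproduce the filter outside a spreadsheet app, the same check can be sketched in a few lines of Python. The column name "URL" and the sample rows below are assumptions based on the export excerpt above, not the actual Moz CSV layout:

```python
import csv
import io

# Sample rows mimicking the crawl-diagnostics CSV export (header name assumed).
sample_csv = """URL
http://truthbook.com/faq/dsp_viewFAQ.cfm?faqID=1793
http://truthbook.com/index.cfm?linkID=1793
http://truthbook.com/index.cfm?linkID=1793&pf=true
http://truthbook.com/index.cfm?linkID=42
"""

def urls_containing_id(csv_text, id_number):
    """Return every URL whose query string carries the given ID value."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["URL"] for row in reader if f"={id_number}" in row["URL"]]

matches = urls_containing_id(sample_csv, 1793)
print(len(matches))  # 3: the three sample rows sharing ID 1793
```

Filtering a real export would just mean reading the file with `csv.DictReader` instead of the inline sample.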
There are a couple of problems with the above:
1. It gives the www result, as well as the non-www result.
2. It is seeing the print version (&pf=true) as a duplicate, but those pages are blocked from Google via a noindex tag.
3. It treats different sections of the website (faq / blogs / pages) that share the same ID number as the same thing.
In short: this particular report tells us nothing at all.
I am trying to get a perspective from someone at SEOMoz on whether he is reading the result correctly or there is something he is missing.
Please help. Jim
-
Hi Jim!
Thanks for the question. One thing we should clarify before we move forward is that the Pro app doesn't actually report on duplicate URLs, but we do report when we find duplicate title tags or content.
Duplicate titles just refer to when we find the same title tag on more than one page. In one example from your diagnostics, we're reporting that the title tag 'Truthbook Religious News' is being used on multiple pages (http://screencast.com/t/GYCKNfAoj).
Duplicate content is content in the source code of your pages that is identical or nearly identical, which would cause the pages to compete against each other for rankings. To fix either of these you have several options:
- Set up a 301 redirect to have the pages you would consider duplicate redirect to the main page.
- Change the content/title tags enough that they won't be considered duplicates.
- Canonicalize the content you would consider duplicates.
Most developers will go for the latter two options so that the pages will still be reachable by visitors. You can find out more about how to implement these in our Help Hub.
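As a rough illustration of what canonicalization accomplishes for URLs like the ones in Jim's spreadsheet, here is a sketch (not Moz's or truthbook.com's actual implementation; treating pf as the print-version flag is an assumption drawn from the examples above) of how the www and print variants collapse into a single canonical URL:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def canonicalize(url):
    """Collapse www/non-www hosts and drop the print flag (pf) from the query."""
    parts = urlsplit(url)
    host = parts.netloc.removeprefix("www.")
    # Keep real query parameters, drop only the assumed print-version flag.
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query) if k != "pf"])
    return urlunsplit((parts.scheme, host, parts.path, query, ""))

variants = [
    "http://truthbook.com/index.cfm?linkID=1793",
    "http://truthbook.com/index.cfm?linkID=1793&pf=true",
    "http://www.truthbook.com/index.cfm?linkID=1793",
]
print({canonicalize(u) for u in variants})  # all three collapse to one URL
```

A rel="canonical" tag on each variant pointing at that single URL tells the search engines the same thing, while visitors can still reach every variant.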
To answer your other questions:
1 - At the time of the crawl, we were able to reach subdomain pages from other pages on your site. The subdomains were also resolving separately, but they seem to be redirecting to your root domain now, so your next crawl should reflect this.
2 - Running a curl for the print versions of your pages, I see "nofollow" tags related to the embedded Wikipedia links (http://screencast.com/t/reYjeLLPvWG3) in the doc, but I'm not finding any "noindex" tags (http://screencast.com/t/DsXMZInngSzH). This would be why you're seeing us crawl those pages.
3 - As I mentioned above, our crawler looks for similarities in the source code of pages when reporting on duplicate content. Since no one knows exactly how similar content needs to be for the search engines to consider it duplicate, we err on the side of caution and recommend best practices when reporting it. Using one of the methods mentioned above and detailed in our Help Hub should resolve this for you.
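If it helps, whether a noindex directive is actually present in a page's source can be checked programmatically rather than by eyeballing curl output. A minimal sketch using the standard library (the sample markup below is invented for illustration; in practice you would feed the parser the HTML fetched from the page):

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collect the content attribute of every <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append((attrs.get("content") or "").lower())

# Invented sample page standing in for a fetched print version.
sample_page = """<html><head>
<title>Print version</title>
<meta name="robots" content="noindex, follow">
</head><body>...</body></html>"""

finder = RobotsMetaFinder()
finder.feed(sample_page)
print(any("noindex" in d for d in finder.directives))  # True
```

Note this only covers the meta tag; a noindex delivered via the X-Robots-Tag HTTP header would need a separate check of the response headers.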
Let me know if you have any other questions!
Best,
Sam
Moz Helpster
Related Questions
-
403 error but page is fine??
Hi, on my report I'm getting a 4xx error. When I look into it, it says the error is critical for a 403 on this page: https://gaspipes.co.uk/contact-us/. I can get to the page and see it fine, but have no idea why it's showing a 403 error or how to fix it. This is the only page the error comes up on. Is there anything I can check or do to get this resolved? Thanks
Moz Pro | JU-Mark
Moz is treating my pages as duplicate content but the pages have different content in reality
Attached here is a screenshot of links with duplicate content. Here are some of the links from the screenshot: http://federalland.ph/construction_updates/paseo-de-roces-as-of-october-2015 http://federalland.ph/construction_updates/sixsenses-residences-tower-2-as-of-october-2015/ http://federalland.ph/construction_updates/sixsenses-residences-tower-3-as-of-october-2015 The links that I have placed here have different content, so I don't know why they are treated as duplicates.
Moz Pro | clestcruz
Links being reported in Webmaster Tools
Hi, are the Total Links To Your Site, as reported in GWT, purely external inbound links? Since these counts are usually, as far as I can tell, much higher than any other link reporting tool's (and hence, I presume, more accurate), why don't services such as Moz include this in reporting? I know it's just a total number, and link quality is what's important, not quantity, but I would have thought it interesting to show alongside the link quality info that is already reported, since most backlink reporting tools do show a total, but always one much lower than that reported in GWT (I think). All Best, Dan
Moz Pro | Dan-Lawrence
Why does page report still have "Too Many Links" alert?
I'm a new Moz Pro user and see that the on-page optimization report includes a "too many links" warning and references the "100 links" guideline that Google no longer endorses: http://www.youtube.com/watch?feature=player_embedded&v=l6g5hoBYlf0 Too many links is still an issue in terms of passing link juice, but not much else; Matt Cutts has stated that it is fine to have many internal links in this regard. Please correct me if I am mistaken.
Moz Pro | Shannon-PayLoadz
Help me understand why all pages are not being tracked by the Moz tool for on-page optimization reports
The On-page Optimization report in the Moz tool is not tracking all the pages on my website. I know this because it isn't showing a ranking for every page. Is there a particular reason why this is happening? It is important for me to know details of all pages; otherwise I don't get a comprehensive picture of what's going on in SEO.
Moz Pro | | jslusser0 -
How to solve duplicate page title & content error
I got a lot of errors. Duplicate page title - 5000: here the result page is the same and the content is also the same; the title differs only by the page number in the meta title. Title missing error: in the SEOmoz report I got empty values for title, meta description, meta robots, and meta refresh, but if I check the link that got the error, it shows all meta tags. We have added all meta tags to our site, so I don't know why I got the title missing error. 404 error: in this report, if I click the link that got the error, it goes to the main page of our site, but the URL differs. E.g. the error link is www.example.com/buy/requirement-2-0-inmumbai-property and it automatically goes to the www.example.com page. Let me know how to solve these issues.
Moz Pro | Rajesh.Chandran
Roger keeps telling me my canonical pages are duplicates
I've got a site that's brand spanking new that I'm trying to get the error count down to zero on, and I'm basically there except for this odd problem. Roger got into the site like a naughty puppy a bit too early, before I'd put the canonical tags in, so there were a couple thousand 'duplicate content' errors. I put canonicals in (programmatically, so they appear on every page) and waited a week and sure enough 99% of them went away. However, there's about 50 that are still lingering, and I'm not sure why they're being detected as such. It's an ecommerce site, and the duplicates are being detected on the product page, but why these 50? (there's hundreds of other products that aren't being detected). The URLs that are 'duplicates' look like this according to the crawl report: http://www.site.com/Product-1.aspx http://www.site.com/product-1.aspx And so on. Canonicals are in place, and have been for weeks, and as I said there's hundreds of other pages just like this not having this problem, so I'm finding it odd that these ones won't go away. All I can think of is that Roger is somehow caching stuff from previous crawls? According to the crawl report these duplicates were discovered '1 day ago' but that simply doesn't make sense. It's not a matter of messing up one or two pages on my part either; we made this site to be dynamically generated, and all of the SEO stuff (canonical, etc.) is applied to every single page regardless of what's on it. If anyone can give some insight I'd appreciate it!
Moz Pro | icecarats
Pro Report Card Question
I am getting this warning in the Pro Report Card. My site is http://www.myfairytalebooks.com, which seems to have Children's and not Childrens in the title. Is it my site that needs to HTML-encode the title, or is Report Card slightly broken? Exact Keyword Usage in Page Title (Easy fix). Page title: "Personalized Childrens Books Kids Music CDs Baby Books & Gifts." Explanation: Search engines consider the title element to be the most important place to identify keywords and associate the page with a topic and/or set of terms. SEOmoz's correlation research has also shown that rankings are heavily influenced by keyword usage in the title tag. Recommendation: Employ the keyword in the page title, preferably as the first words in the element.
Moz Pro | DineshMistry