I want to create a report of only the duplicate content pages as a CSV file so I can create a script to canonicalize them.
-
I want to create a report of only the duplicate content pages as a CSV file, so I can create a script to canonicalize them. I'd get something like:
http://example.com/page1, http://example.com/page2, http://example.com/page3, http://example.com/page4,
Because right now I have to open each one under "Issue: Duplicate Page Content", and this takes a lot of time.
The same goes for duplicate page titles.
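Something like this rough sketch is what I have in mind (assuming the export has "URL" and "duplicate_page_content" columns, as mentioned in the answers, and that the duplicate URLs inside that field are comma-separated; the filenames are made up):

```python
import csv

# Rough sketch: write one CSV row per duplicate group.
# Assumptions: "URL" and "duplicate_page_content" column names, and
# comma-separated URLs inside the duplicate_page_content field.
with open("crawl_diagnostics.csv", newline="", encoding="utf-8") as src, \
     open("duplicate_report.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    for row in csv.DictReader(src):
        dupes = (row.get("duplicate_page_content") or "").strip()
        if dupes:
            # One line per group: the page itself plus all of its duplicates.
            writer.writerow([row["URL"]] + [u.strip() for u in dupes.split(",")])
```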
-
Hi nvs.nim,
could you tell me what you did differently? I also get an empty AF column.
-
Thanks! Because Excel didn't separate the fields right, I didn't have column AF. But I've got it now! Thanks a lot!
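For anyone else hitting this, a quick way to check whether the file parsed into the right columns is a minimal sketch like the one below (Python's csv module copes with quoted fields that a naive import can mis-split; the filename is made up). Column AF is index 31, counting from 0:

```python
import csv

# Print the header row with its column indexes so you can verify that
# duplicate_page_content landed where expected (index 31 = column AF here).
with open("crawl_diagnostics.csv", newline="", encoding="utf-8") as f:
    header = next(csv.reader(f))

for index, name in enumerate(header):
    print(index, name)
```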
-
Josh is right - when you export as CSV there should be a column in the spreadsheet called duplicate_page_content. This column contains all the URLs that are considered duplicates.
-
Yes it does - in column AF there is a list of Duplicate Page Content URLs.
-
It doesn't tell me which other pages are identical, only that identical pages exist.
-
Well, SEOmoz Pro does do it! Just go to Crawl Diagnostics -> Duplicate Page Content, then use Export as CSV at the top right!
-
Related Questions
-
Large site with content silos - best practice for deep indexing silo content
Thanks in advance for any advice/links/discussion. This honestly might be a scenario where we need to do some A/B testing.

We have a massive (5 million) content silo that is the basis for our long-tail search strategy. Organic search traffic hits our individual "product" pages, and we've divided our silo with a parent category and then secondarily with a field (so we can cross-link to other content silos using the same parent/field categorizations). We don't anticipate, nor expect, top-level category pages to receive organic traffic - most people are searching for the individual/specific product (long tail). We're not trying to rank or get traffic for searches of all products in "category X"; others are competing and spending a lot in that area (head). The intent of the site structure/taxonomy is to more easily enable bots/crawlers to get deeper into our content silos. We've built the pages for humans, but included link structure/taxonomy to assist crawlers.

So here's my question on best practices: how to handle categories with 1,000+ pages of pagination. With our most popular product categories, there might be hundreds of thousands of products in one category. My top-level hub page for a category looks like www.mysite/categoryA, and the page shows 50 products and then pagination from 1-1000+. Currently we're using rel=next for pagination, and for pages like www.mysite/categoryA?page=6 we make the page reference itself as canonical (not the first/top page www.mysite/categoryA).

Our goal is deep crawl/indexation of our silo. I use ScreamingFrog and the SEOmoz campaign crawl to sample (the site takes a week+ to fully crawl), and with each of these tools it "looks" like crawlers have gotten a bit "bogged down" in large categories with tons of pagination. For example, rather than crawl multiple categories or fields to reach multiple product pages, some bots will hit all 1,000 (rel=next) pages of a single category. I don't want to waste crawl budget going through 1,000 pages of a single category versus discovering/crawling more categories.

I can't seem to find a consensus on how to approach the issue. I can't have a page that lists "all" - there's just too much, so we're going to need pagination. I'm not worried about category pagination pages cannibalizing traffic, as I don't expect any (should I make pages 2-1,000 noindex and canonically reference the main/first page in the category?). Should I worry about crawlers going deep in the pagination of one category versus getting to more top-level categories? Thanks!
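For reference, the pagination head tags described above would come out something like this sketch (Python is used only to show the pattern; the URL scheme is the one from the question):

```python
def pagination_head_tags(base_url: str, page: int, last_page: int) -> str:
    """Sketch of the scheme described above: every paginated page
    canonicalizes to itself and declares its rel=prev/next neighbours."""
    url = base_url if page == 1 else f"{base_url}?page={page}"
    tags = [f'<link rel="canonical" href="{url}">']
    if page > 1:
        prev = base_url if page == 2 else f"{base_url}?page={page - 1}"
        tags.append(f'<link rel="prev" href="{prev}">')
    if page < last_page:
        tags.append(f'<link rel="next" href="{base_url}?page={page + 1}">')
    return "\n".join(tags)

# e.g. print(pagination_head_tags("http://www.mysite/categoryA", 6, 1000))
```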
Moz Pro | | DrewProZ1 -
Canonical tag on webstore products to avoid Duplicate Page Content?
Hi, I would like an opinion on how we are planning to solve the Duplicate Page Content issue that Moz Pro is showing us. Moz Pro is flagging a lot of pages with duplicate content as a high-priority issue. The problem is mainly with products that have very few differences between them, e.g. pink bike model X and red bike model X. So we decided to implement a canonical tag on these products: the pink bike model X will now have a canonical pointing to the red bike model X. So hopefully we will rank higher with our red bike model X, and our pink bike model X will disappear from the index. Am I right? Is it good practice, given that we will lose the long-tail rankings? I checked each canonical in Search Console, and we see extremely few searches for "pink bike model X"; most searches are for "bike model X". Thank you in advance for your opinion. Isabelle
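To illustrate the mapping being proposed (a hypothetical sketch; the paths are invented), each colour variant points its canonical at the chosen main variant, while the main variant canonicalizes to itself:

```python
# Hypothetical variant-to-canonical mapping; real paths would come
# from the store's product catalogue.
CANONICAL_VARIANT = {
    "/bikes/model-x-pink": "/bikes/model-x-red",
    "/bikes/model-x-blue": "/bikes/model-x-red",
}

def canonical_for(path: str) -> str:
    """Return the canonical path for a product page; pages with no entry
    in the mapping (e.g. the red model X itself) self-canonicalize."""
    return CANONICAL_VARIANT.get(path, path)
```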
Moz Pro | | isabelledylag0 -
Duplicate content nightmare
Hey Moz Community, I ran a crawl test and there is a lot of duplicate content, but I cannot work out why. It seems that when I publish a post, secondary URLs are created depending on some tags and categories. Or at least, that is what it looks like. I don't know why this is happening, nor do I know if I need to do anything about it. Help? Please.
Moz Pro | | MobileDay0 -
Broken CSV files?
Hi, I have been downloading files from SEOmoz for quite a while and I keep getting broken CSV files. On my Mac, I wrote a script that cleaned them up, but I cannot (don't know how to) do this on the Windows side. The problem is that some of the record lines get broken up, mainly in the "Anchor Text" field (maybe the anchor text in question contained a return). This of course messes up my file when I open it in Excel. Is there a way to get/open these files without having them messed up like this? P.S. I am also seeing problems with text encoding; I would like to see the anchor text as it is displayed on the referring site rather than its HTML/Unicode/UTF-8 code value. TIA, Michel
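For what it's worth, a cross-platform clean-up sketch (assuming the broken fields are quoted, so Python's csv module can reassemble records with embedded returns; html.unescape handles entity-encoded anchor text; the filenames are made up):

```python
import csv
import html

# Re-read the export with the csv module (which handles quoted fields
# containing returns), collapse embedded newlines, and decode HTML
# entities in every field before writing a clean copy.
with open("links_export.csv", newline="", encoding="utf-8") as src, \
     open("links_clean.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        row = [html.unescape(field.replace("\r", " ").replace("\n", " "))
               for field in row]
        writer.writerow(row)
```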
Moz Pro | | analysestc0 -
Finding the source of duplicate content URLs
We have a website that displays a number of products. The products have variations (sizes) and unfortunately every size has its own URL (for now, anyway). Needless to say, this causes duplicate content issues. (And of course, we are looking to change the URLs for our site as soon as possible.) However, even though these duplicate URLs exist, you should not be able to land on them by navigating through the site. In theory, the site should always display the link to the smallest size. It seems that there is a flaw in our system somewhere, as these links are now found in our campaign here on SEOmoz. My question: is there any way to find the crawl path that led to the URLs that shouldn't have been found, so we can locate the problem?
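One way to locate such a crawl path yourself is a small breadth-first crawler that records which page first linked to each URL; a rough sketch below (it uses the third-party requests and beautifulsoup4 packages; max_pages and the timeout are arbitrary):

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def find_crawl_path(start_url, target_url, max_pages=500):
    """BFS from start_url, remembering which page first linked to each URL,
    then reconstruct the link chain that reaches target_url."""
    parent = {start_url: None}
    queue = deque([start_url])
    domain = urlparse(start_url).netloc
    while queue and len(parent) < max_pages:
        url = queue.popleft()
        if url == target_url:
            path = []
            while url is not None:           # walk back up the parent chain
                path.append(url)
                url = parent[url]
            return list(reversed(path))
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        if "text/html" not in resp.headers.get("Content-Type", ""):
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            if urlparse(link).netloc == domain and link not in parent:
                parent[link] = url           # first page that linked here
                queue.append(link)
    return None  # target not reached within max_pages
```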
Moz Pro | | DocdataCommerce0 -
Will a canonical tag get rid of duplicate page title errors?
I have a directory on my website, paginated in groups of 10. On page 2 of the results, the title tag is the same as on the first page, as it is on the 3rd page and so on. This is giving me duplicate page title errors. If I use rel=canonical tags on the subsequent pages and point them at the first page of my results, will my duplicate page title warnings go away? Thanks.
Moz Pro | | fourthdimensioninc0 -
My crawl diagnostics are showing 2 duplicate content and duplicate title errors.
First of all, hi - my name is Jason and I've just joined - how are you all doing? My first question, then: when I view where these errors are occurring, it says www.mydomain.co.uk and www.mydomain.co.uk/index.html. Isn't this the same page? I have looked in my root folder and only index.html exists.
Moz Pro | | JasonHegarty0 -
Crawl Diagnostics bringing 20k+ errors as duplicate content due to session IDs
I signed up for the trial version of SEOmoz today just to check it out, as I have decided I'm going to do my own SEO rather than outsource it (I've been let down a few times!). So far I like the look of things and have a feeling I am going to learn a lot and get results. However, I have just stumbled on something. After SEOmoz does its crawl diagnostics run on the site (www.deviltronics.com), it is showing 20,000+ errors. From what I can see, almost 99% of these are duplicate content errors due to session IDs, so I am not sure what to do! I have done a "site:www.deviltronics.com" on Google and this certainly doesn't pick up the session IDs/duplicate content. So could this just be an issue with the SEOmoz bot? If so, how can I get SEOmoz to ignore these on the crawl? Can I get my developer to add some code somewhere? Help will be much appreciated. Asif
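If it really is session IDs, one common developer-side fix is a self-referencing canonical that strips the session parameter; a minimal sketch (the parameter names are hypothetical - use whatever the store actually appends):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical session parameter names; adjust to the store's actual ones.
SESSION_PARAMS = {"sid", "sessionid", "phpsessid"}

def canonical_url(url: str) -> str:
    """Strip session-ID query parameters so every session variant of a page
    maps to one canonical URL (e.g. for a rel=canonical tag)."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in SESSION_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))
```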
Moz Pro | | blagger0