Duplicate URLs
-
A Moz campaign I ran reported that my client's site has some 47,000+ duplicate pages and titles. I was wondering how I could possibly set that many 301 redirects, but a Moz help engineer said it has a lot to do with session IDs. See this set of duplicate URLs:
http://www.lumberliquidators.com/ll/c/engineered-hardwood-flooring (clearly the main URL for the page)
http://www.lumberliquidators.com/ll/c/engineered-hardwood-flooring?PIPELINE_SESSION_ID=0ac00a2e0ad53eb90cb0b0304d178fc1
http://www.lumberliquidators.com/ll/c/engineered-hardwood-flooring?PIPELINE_SESSION_ID=0ac3039d0ad4af2720b3ccd2238547ab
http://www.lumberliquidators.com/ll/c/engineered-hardwood-flooring?PIPELINE_SESSION_ID=0ac071ed0ad4af292684b0746931158f
To a crawler, that looks like 4 different pages, when it's clear that they're actually all different URLs for the same page. I was wondering if some of you, maybe with experience in site architecture, have insight into how to address this issue?
Thanks
Alan
-
Hm, have you looked into rel canonical?
If those are all standalone pages, you will have to redirect them if they are no longer active or if they can be replaced by the original page.
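If the pages can stay live, a canonical tag in the `<head>` of every session-ID variant tells crawlers which URL is the real one. A sketch using the URLs from the question:

```html
<!-- In the <head> of every variant of the page, including the clean URL itself -->
<link rel="canonical" href="http://www.lumberliquidators.com/ll/c/engineered-hardwood-flooring" />
```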
Andy is correct: those pages likely are not 'created' with intent.
You should look at what is causing this issue and start there. Otherwise, you are going to be redirecting till the cows come home.
If you do decide to go through 301s, you may want to take a step back and look at the folder structure of the entire domain. /ll/ is a folder but not a page, nor is /ll/c/.
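That said, you wouldn't set 47,000 redirects by hand - one pattern rule can catch every session-ID URL at once. A sketch assuming an Apache server with mod_rewrite (the rule and its caveat are illustrative, not taken from the site's actual config):

```apache
# 301 any request whose query string contains PIPELINE_SESSION_ID
# to the same path with the query string stripped.
RewriteEngine On
RewriteCond %{QUERY_STRING} PIPELINE_SESSION_ID= [NC]
RewriteRule ^(.*)$ /$1? [R=301,L]
# Caveat: the trailing "?" drops the ENTIRE query string, so don't use
# this as-is if other parameters (e.g. pagination, sorting) must survive.
```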
Good Luck, Alan!
-
A quick way would be to disallow crawling of any URL whose query string starts with PIPELINE. That will stop Google from crawling them (already-indexed copies may linger until they drop out of the index). You can do this by adding the following to your robots.txt file...
Disallow: /*?PIPELINE
However, you want to get to the root cause, which will be something to do with the system generating these. Ideally, this needs to be fixed.
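To sanity-check the scale of the problem, you can normalize a crawl export and count how many unique pages are really there. A minimal Python sketch (the parameter name comes from the URLs above; the rest is illustrative):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Query parameters that create duplicate URLs without changing page content.
JUNK_PARAMS = frozenset({"PIPELINE_SESSION_ID"})

def canonicalize(url):
    """Return url with known session parameters stripped from the query string."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in JUNK_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

urls = [
    "http://www.lumberliquidators.com/ll/c/engineered-hardwood-flooring",
    "http://www.lumberliquidators.com/ll/c/engineered-hardwood-flooring"
    "?PIPELINE_SESSION_ID=0ac00a2e0ad53eb90cb0b0304d178fc1",
    "http://www.lumberliquidators.com/ll/c/engineered-hardwood-flooring"
    "?PIPELINE_SESSION_ID=0ac3039d0ad4af2720b3ccd2238547ab",
]
print(len({canonicalize(u) for u in urls}))  # the three URLs collapse to 1
```

Running this over the full 47,000-URL export would tell you how many distinct pages actually exist once session IDs are ignored.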
-Andy
Related Questions
-
Pages with Duplicate Page Content
Moz is showing many of my URLs as duplicates. I added a canonical tag to all the pages, but it is still showing them as duplicate pages. These are the URLs: https://www.crystalizeonline.com/brands/ravenscroft-crystal/material/non-lead/page/2.html https://www.crystalizeonline.com/brands/ravenscroft-crystal/material/non-lead/page/2/sort-by/price/sort-direction/desc.html https://www.crystalizeonline.com/brands/ravenscroft-crystal/material/non-lead/page/2/sort-by/price/sort-direction/asc.html There are a lot of pages like this. How can I get rid of all these issues?
Moz Pro | crystalize
-
Moz analytics telling me I have duplicate content issues - how to fix this?
Hey guys, okay, I ran Moz Analytics and it shows I have 199 issues, 38 of them priority Duplicate Page Content issues. I began looking into the URLs and noticed a common theme: they all point to my blog pages (my blog uses WordPress), and every URL has "tag" in it. Here are 3 examples I found; all of them take me to a blank page: Does anyone know the solution to fixing this? I read the article on duplicate content covering 301 redirects and rel=canonical tags - I'm wondering if those would apply in this case? However, I find it confusing that these pages go to a blank page. https://www.zenory.com.au/blog/tag/dysfunctional-relationships/ https://www.zenory.com.au/blog/tag/change/ https://www.zenory.com.au/blog/tag/intuitive/ Appreciate some assistance.
Moz Pro | edward-may
-
How to Avoid Duplicate Page Content errors when using Wordpress Categories & Tags?
I get a lot of duplicate page errors on my crawl diagnostics reports from 'categories' and 'tags' on my WordPress sites. The post is one link, and then the content is 'duplicated' on the 'category' or 'tag' archive it is added to. Should I exclude the tags and categories from my sitemap, or are these issues not that important? Thanks for your help, Stacey
Moz Pro | skehoe
-
Yahoo Store Beginner with "duplicate content" errors. Can I pay for support? $$$
Hi. I have a Yahoo store that seems to have many errors. We built the site for utility, knowing NOTHING about SEO. We just started with Moz and would love to PAY someone to help get us past the beginning stages. Is there someone familiar with the Yahoo! Store format who can charge us, perhaps in hourly blocks, to walk us through possible solutions to issues? One issue we are having: our subsections contain items that are the endpoints, and I know of no way to label the sections as anything but an "item". I'm wondering if this might be causing the "duplicate" error, because a specific item is listed both in the section and on its own page. Please help! Thom 888-567-5194
Moz Pro | TITOJAX
-
Is the Number of Duplicate Pages Reduced After Adding a Canonical Ref to the Dupe Versions?
Hi. Is the number of duplicate pages reported in a duplicate-page-content error report reduced on subsequent crawls if you have resolved the duplicate content problem by adding the canonical tag to the duplicate versions (referencing the original page), like it would be (I think/presume) if you were solving the problem via a 301 redirect? Cheers, Dan
Moz Pro | Dan-Lawrence
-
Looking for a tool that can pull OSE stats for a bulk amount of URLs
I know that people have developed in-house tools with the OSE API that can analyze thousands of URLs and pull metrics like PA, inbound links, etc. I need to analyze about 80k URLs and sort them by authority, and I was hoping that someone could point me to a tool that can do this or let me use theirs. I'm willing to pay for access. We could build it in-house - I imagine it would be pretty easy - but our IT resources are stretched too thin right now.
Moz Pro | Business.com
-
We were unable to grade that page. We received a response code of 301. URL content not parseable
I am using the SEOmoz web app for SEO on my site and have run into this issue. Please see the attached file, which has a screenshot of the error. I am running an on-page scan for the following URL: http://www.racquetsource.com/squash-racquets-s/95.htm When I run the scan I receive the following error: "We were unable to grade that page. We received a response code of 301. URL content not parseable." This page worked previously. I have tried to verify my 301 redirects but am unable to resolve this error. I can perform other on-page scans and they work fine. Is this a known problem with this tool? I have verified that I don't have one defined. Any help would be appreciated.
Moz Pro | GeoffBatterham
-
Any tools for scraping blogroll URLs from sites?
This question is entirely in the whitehat realm... Let's say you've encountered a great blog with a strong blogroll of 40 sites. The 40-site blogroll is interesting to you for any number of reasons, from link-building targets to simply subscribing in your feed reader. Right now, it's tedious to extract the URLs from the site. There are some "save all links" tools, but they are messy. Are there any good tools that will a) let you grab the blogroll (only) of any site as a list of URLs (yeah, OK, it might not be perfect, since some sites call it "sites I like", etc.) and b) do the same, but export as OPML so you can subscribe? Thanks! Scott
Moz Pro | scottclark