Locating Duplicate Pages
-
Hi,
Our website consists of approximately 15,000 pages; however, according to our Google Webmaster Tools account, Google has around 26,000 pages for us in its index.
I have run through half a dozen sitemap generators and they all discover only the 15,000 pages that we know about. I have also gone through the site thoroughly to find any sections where we might inadvertently be generating duplicate pages, without success.
It has been over six months since we made any structural changes (at which point we set up 301 redirects to the new locations), so I'd like to think that the majority of these old pages have been removed from the Google index. Additionally, the number of pages in the index doesn't appear to be going down by any discernible amount week on week.
I'm fairly certain it's nothing to worry about; however, for my own peace of mind, I'd like to confirm that the additional 11,000 pages are just old results that will eventually disappear from the index and that we're not generating any duplicate content.
Unfortunately there doesn't appear to be a way to download a list of the 26,000 pages that Google has indexed so that I can compare it against our sitemap. I know about site:domain.com, of course, but this only returns the first 1,000 results, all of which check out fine.
Does anybody know of any methods or tools we could use to identify these 11,000 extra pages in the Google index, so we can confirm that they're just old pages which haven't fallen out of the index yet and aren't going to cause us a problem?
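For what it's worth, the sitemap comparison I have in mind would look something like the sketch below: given our sitemap.xml and any other list of URLs we can obtain (say, Googlebot hits pulled from the server access logs, or the first 1,000 site: results), flag the URLs Google knows about that the sitemap doesn't. The file names are placeholders, not real exports.

```python
# Hedged sketch: compare sitemap <loc> entries against any other URL list
# you can get hold of. "googlebot_urls.txt" is a hypothetical export
# (e.g. URLs grepped from access logs by user-agent), not a real file.
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(path):
    """Return the set of <loc> URLs from a standard sitemap.xml file."""
    tree = ET.parse(path)
    return {loc.text.strip() for loc in tree.iter(SITEMAP_NS + "loc")}

def extra_urls(sitemap_path, crawled_path):
    """URLs in the crawled list that the sitemap does not account for."""
    known = sitemap_urls(sitemap_path)
    with open(crawled_path) as f:
        crawled = {line.strip() for line in f if line.strip()}
    return crawled - known

# e.g. extra = extra_urls("sitemap.xml", "googlebot_urls.txt")
```

Anything this turns up would at least be a concrete list of candidate duplicates to investigate, rather than an opaque count in GWMT.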
Thanks guys!
-
It's cool. Sorry, the point I was making is that irrespective of what you search for, the page returned is http://www.refreshcartridges.co.uk/advanced_search_result.php (with nothing after the .php), and as such the search results page couldn't spawn multiple pages that could be indexed by Google.
-
Hmm, I'm not too knowledgeable about PHP pages. Sorry!
-
Sorry, I'm not sure what happened to that bit.ly address - The actual address of the website is www.refreshcartridges.co.uk.
Ah, I see what you mean about the search results now; however, this hopefully shouldn't be an issue, as for security reasons (our web guy said something about injections) the URL returned, irrespective of what is searched for, is http://www.refreshcartridges.co.uk/advanced_search_result.php
Thanks again!
-
I can't get that link to work.
What I said before still applies with physical input (this is what I assumed when I said it).
For example, a user inputs the words "snakes and dogs" and clicks search. The new URL is "www.yoursite.com/search?q=snakes+and+dogs". All these weird URL pages need noindex meta tags, or Google will flag them as duplicate content because, for example, this page and the result for "dogs and snakes" generate almost the same page.
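If it helps, here's a rough way to check whether a given page already carries that directive. This is only a sketch: it parses HTML you've already fetched by whatever means, and simply looks for a robots meta tag containing "noindex".

```python
# Hedged sketch: detect a <meta name="robots" content="...noindex...">
# tag in a page's HTML. Fetching the page itself is left to you.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content attribute of any <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        if tag == "meta" and d.get("name", "").lower() == "robots":
            self.directives.append(d.get("content", "").lower())

def has_noindex(html):
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in c for c in parser.directives)

# has_noindex('<meta name="robots" content="noindex,follow">')  # True
```

Running that against a handful of your internal search-results URLs would tell you quickly whether they're eligible for indexing.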
Does that make sense?
It is in Google's Webmaster Guidelines that you should noindex these pages.
-
Many thanks for your input on this. I have actually looked at this through the HTML Improvements section of GWMT; however, I am showing only a few dozen duplicated titles/descriptions, and this is simply due to some product categories being almost identical (for example, HP Deskjet 500 and HP Deskjet 500+).
-
Many thanks for your response. Our site is an eCommerce site that doesn't employ tags as such, and our categories are all accounted for in the 15,000-page figure.
-
We did have this at the beginning of the year, when we used ?dispmode=grid and ?dispmode=list to change the way our results were displayed. This has since been rectified by completely removing the option; any instance of dispmode present in the URL now forces a 301 to the correct master page. There are still a few hundred instances of dispmode present in the Google index, but 99% of them have fallen out now.
I have checked and double checked and we don't seem to have any issues like this at present.
-
I'm not certain this is the case, as our search engine requires physical input in order to yield a result. I don't know if it helps, but the URL is http://bit.ly/4Cogchww if you fancy taking a look.
-
Thanks for your reply. Indeed, our website does force www. if someone attempts to navigate to us without the www. prefix.
-
Hi Chris,
Google Webmaster Tools has a feature that helps identify duplicate HTML, and maybe you can use that to see if the 11,000 pages are duplicates. If they are, I am assuming they should have duplicate title tags, etc., which the tool may discover.
-
Have you checked for instances where a page parameter is being seen as another version of the same page? One of the sites I work for had an issue a few months back where every instance of a product page was being flagged as duplicate content because of an oversight. We had one of our coders write a clause so that every time a page loaded with a parameter such as ?color=72, it would canonicalize to the page minus the parameter. This decreased our duplicate content warnings quickly and effectively.
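The core of that fix can be sketched in a few lines: strip the display-only query parameters so every variant resolves to one master URL, then emit that value in the page's canonical link tag. The parameter names below are illustrative (our actual clause was written in the site's own server-side code); list whichever parameters your site genuinely uses for display only.

```python
# Hedged sketch of the canonicalization step: drop display-only query
# parameters (the ?color= example from above, dispmode, etc.) while
# keeping parameters that actually change the content.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

DISPLAY_ONLY_PARAMS = {"color", "dispmode"}  # assumed; adjust to your site

def canonical_url(url):
    """Return the URL with display-only parameters removed."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in DISPLAY_ONLY_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

# canonical_url("http://example.com/p.php?id=5&color=72")
#   -> "http://example.com/p.php?id=5"
```

The page template then outputs the result in a `<link rel="canonical" href="...">` tag in the head, so Google folds all the parameter variants into the master page.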
-
It could be that your tags and categories are considered individual pages and are therefore creating their own permalinks, e.g. http://www.example.com/keyword, http://www.example.com/tag/keyword and http://www.example.com/category/keyword. Another approach would be to check the sitemaps you have in Webmaster Tools and compare those against each other. Just a suggestion.
-
Does your website force 'www.'?
Both yourdomain.com and www.yourdomain.com are treated as separate sites and can have different pages spidered.
-
Be sure to try different combinations of 'site:www.domain.com' and 'site:domain.com'; they can yield different results.
Sounds to me like you probably have an internal search engine that is generating search results pages based on the search term, and each different results page is a piece of duplicate content.