SEOmoz duplicate page result: true or false?
-
SEOmoz says I have six (6) duplicate pages.
The duplicate content checker says zero (0).
On the physical computer that hosts the website, the page exists as one file. The casing of the filename is irrelevant to the host machine; it wouldn't allow two files with the same name in the same directory anyway.
To reinforce this point, you can access said file by camel-casing the URI in any fashion (e.g. http://www.agi-automation.com/Pneumatic-grippers.htm). This does not bring up a different file each time; the server merely processes the URI as case-less and pulls the file by its name.
What is happening in the example given is that some sort of indexer is being used to create a "dummy" reference of all the site files. Since the indexer doesn't have file access to the server, it does this by link crawling instead of reading files. It is the crawler that is assuming the different casings of the pages are in fact different files. Perhaps there is a setting in the indexer to ignore casing.
So the indexer thinks these are two different pages when they really aren't. This makes all of the other points moot, though they would certainly be relevant in the case of an actual duplicated page.
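To make the distinction concrete, here is a small illustration (my own PHP sketch, not SEOmoz's actual crawler code): a crawler that keys its index on the raw URL counts two pages, while one that case-folds URLs first counts one.

    <?php
    // Two casings of the same physical file, as a link crawler would collect them.
    $crawled = [
        "http://www.agi-automation.com/Pneumatic-grippers.htm",
        "http://www.agi-automation.com/pneumatic-grippers.htm",
    ];

    // Case-sensitive de-duplication: the two casings look like distinct pages.
    $caseSensitive = array_unique($crawled);

    // Case-insensitive de-duplication: fold case before comparing.
    $caseFolded = array_unique(array_map('strtolower', $crawled));

    echo count($caseSensitive) . " page(s) seen case-sensitively\n";   // prints 2
    echo count($caseFolded)    . " page(s) seen case-insensitively\n"; // prints 1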
Page                                                  Page Authority  Linking Root Domains
http://www.agi-automation.com/                        43              82
http://www.agi-automation.com/index.html              25              2
http://www.agi-automation.com/Linear-escapements.htm  21              1
www.agi-automation.com/linear-escapements.htm         16              1
http://www.agi-automation.com/Pneumatic-grippers.htm  30              3
http://www.agi-automation.com/pneumatic-grippers.htm  16              1
The duplicate content tool runs the following checks:
- www and non-www header response (a rough sketch of this check follows the list);
- Google cache check;
- Similarity check;
- Default page check;
- 404 header response;
- PageRank dispersion check (i.e. whether the www and non-www versions have different PR).
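The first check is easy to approximate yourself. The sketch below is my own guess at the kind of request the tool makes, not its actual code; if one hostname 301-redirects to the other, the two status lines will differ.

    <?php
    // Fetch the initial status line for the www and non-www hosts.
    // get_headers() puts the first response's status line at index 0.
    foreach (["http://www.agi-automation.com/", "http://agi-automation.com/"] as $url) {
        $headers = get_headers($url);
        echo $url . " -> " . $headers[0] . "\n"; // e.g. "HTTP/1.1 200 OK" or "HTTP/1.1 301 Moved Permanently"
    }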
-
I always think there are two questions to answer in cases like this:
1. Are the search engines seeing duplicate content?
2. Could the search engines see duplicate content?
The tools are useful for quickly highlighting potential problems, but you really want to roll your sleeves up and look for yourself. I'll use the Pneumatic Grippers page as an example:
The title of that page is "Pneumatic, Grippers, Rotary Actuators, Linear Actuator, Robotics", so I'll do a search for:
intitle:"Pneumatic, Grippers, Rotary Actuators, Linear Actuator, Robotics"
That will bring up everything in Google with that page title, and there is just the one result. Good! With regard to that page at least, it seems Google is only indexing one URL.
The URL that Google has indexed is http://www.agi-automation.com/Pneumatic-grippers.htm. As you say, changing the case doesn't affect which page loads, so an odd casing (say http://www.agi-automation.com/PneuMAtic-grippers.htm) getting linked and crawled could cause a problem.
What you need to prevent that is a rel=canonical in the source (I checked; you don't have one). That tells the search engine what the correct address is. Just ensure you have something like the following in your head section:
    <link rel="canonical" href="http://www.agi-automation.com/Pneumatic-grippers.htm" />
There is another way to do it, namely 301-redirecting the "wrong" versions of URLs to the canonical one (a rough sketch follows), but rel=canonical looks like the right choice for your site.
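For completeness, the redirect approach could look something like this hypothetical PHP sketch. The $canonicalPath value is a placeholder, query strings are ignored for simplicity, and a static .htm site would more likely do this in the server configuration rather than in PHP:

    <?php
    // 301-redirect any non-canonical casing of this page's URL to the one true address.
    $canonicalPath = "/Pneumatic-grippers.htm"; // placeholder: this page's canonical path

    if ($_SERVER['REQUEST_URI'] !== $canonicalPath) {
        header("HTTP/1.1 301 Moved Permanently");
        header("Location: http://www.agi-automation.com" . $canonicalPath);
        exit();
    }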
What I would say, though, is this: if the search engines aren't picking up duplicate copies, don't panic too much over this. It would be good to have, but it is only a big issue if duplicate pages are actually being indexed. If they aren't, it is good insurance.
I hope that helps.
Related Questions
-
Is there a bulk way to remove over 600 old, nonexistent pages from the Google search results?
When I search on Google, site:alexanders.co.nz still shows over 900 results. There are over 600 nonexistent pages, and the 404/410 errors aren't working. The only way I can think to do it is manually in Search Console using the "Remove URLs" tool, but that is going to take ages. Any idea how I can take down all those zombie pages from the search results?
Intermediate & Advanced SEO | Alexanders1
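One mechanism often suggested in this situation is to make sure the retired URLs return an explicit "410 Gone", which search engines generally treat as a stronger removal signal than a 404. A minimal PHP sketch of that idea follows; the $goneUrls list and the routing assumption are mine, not from the question.

    <?php
    // Serve "410 Gone" for retired URLs. $goneUrls is a hypothetical list;
    // at 600+ pages you would load it from a file or database instead.
    $goneUrls = ["/old-page-1.html", "/old-page-2.html"];

    $path = parse_url($_SERVER["REQUEST_URI"], PHP_URL_PATH);
    if (in_array($path, $goneUrls, true)) {
        header("HTTP/1.1 410 Gone");
        exit();
    }
-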
Category Page as Shopping Aggregator Page
Hi, I have been reviewing the info from Google on structured data for products (https://developers.google.com/search/docs/data-types/products) and started to ponder. Here is the scenario: you have a category page that lists 8 products, and each product shows an image, price, and review rating. As the individual product pages are already marked up, they display rich snippets in the SERPs. I wonder how we get rich snippets for the category page. Google suggests a markup for shopping aggregator pages that list a single product, along with information about different sellers offering that product, but nothing for categories. My question is this: can we use the shopping aggregator markup for category pages to achieve the coveted rich results (from and to price, average reviews)? Keen to hear from anyone who has thoughts on the matter or has already tried this.
Intermediate & Advanced SEO | Alexcox6
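For reference, the aggregator markup Google documents for a single product centres on an AggregateOffer. The sketch below is my own hypothetical example (the product name, image, and numbers are made up) emitting that markup from PHP; whether Google honours it on a multi-product category page is exactly the open question here.

    <?php
    // Build the schema.org Product + AggregateOffer structure for one product.
    $jsonLd = [
        "@context" => "https://schema.org",
        "@type"    => "Product",
        "name"     => "Example Widget",                     // hypothetical product
        "image"    => "https://www.example.com/widget.jpg", // hypothetical image
        "aggregateRating" => [
            "@type"       => "AggregateRating",
            "ratingValue" => "4.4",
            "reviewCount" => "89",
        ],
        "offers" => [
            "@type"         => "AggregateOffer",
            "lowPrice"      => "119.99",
            "highPrice"     => "199.99",
            "priceCurrency" => "USD",
            "offerCount"    => "8",
        ],
    ];

    // Emit it as a JSON-LD script tag in the page head.
    echo '<script type="application/ld+json">'
        . json_encode($jsonLd, JSON_UNESCAPED_SLASHES | JSON_PRETTY_PRINT)
        . '</script>';
-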
Duplicate content on product pages
Hi, We are considering the impact of delivering content directly on product pages. If the products were manufactured in a specific way and it's the same process across 100 other products, you might want to tell your readers about it. If you believe the product page is the best place to deliver this information to your readers, then you could potentially be creating mass content duplication. Especially as the storytelling of the product could equate to 60% of the page content, this could really flag as duplication. Our options would appear to be:
1. Add the content as a link on each product page to one centralised URL instead, and risk taking users away from the product page (not going to help with conversion rate or the designers' plans).
2. Put the content behind some JavaScript which requires interaction, hopefully deterring the search engine from crawling the content (doesn't fit the designers' plans, and users have to interact, which is a big ask).
3. Assign one product as a canonical and risk the other products not appearing for relevant searches.
4. Leave the copy as crawlable and risk being marked down or de-indexed for duplicated content.
It seems the search engines do not offer a way for us to serve this great content to our readers without being at risk of going against guidelines or the search engines not being able to crawl it. How would you suggest a site should go about this for optimal results?
Intermediate & Advanced SEO | FashionLux2
-
Pages with rel "next"/"prev" still crawling as duplicate?
Howdy! I have a site that is being flagged for "duplicate content pages" when it is really just pagination. The rel next/prev is in place and done correctly, but Roger Bot and Google are both showing duplicate content and duplicate page titles & metas respectively. The only thing I can think of is that we have a canonical pointing back at the URL you are on, e.g.:
    <link rel="canonical" href="…/collections/all?page=15" />
We do not have a view-all option right now and would not feel comfortable recommending it given the speed implications and the size of their catalog. Any experience or recommendations here? Something to be worried about?
Intermediate & Advanced SEO | paul-bold0
-
Joomla duplicate page content fix for mailto component?
Hi, I am currently working on my site and have the following duplicate page content issues:
My Uni Essays http://www.myuniessays.co.uk/component/mailto/?tmpl=component&template=it_university&link=2631849e33
My Uni Essays http://www.myuniessays.co.uk/component/mailto/?tmpl=component&template=it_university&link=2edd30f8c6
This happens 15 times. Any ideas on how to fix this, please? Thank you
Intermediate & Advanced SEO | grays01800
-
Redirecting thin content city pages to the state page, 404s or 301s?
I have a large number of thin-content city-level pages (possibly 20,000+) that I recently removed from a site. Currently, I have it set up to send a 404 header when any of these removed city-level pages are accessed. But I'm not sending the visitor (or search engine) to a site-wide 404 page. Instead, I'm using PHP to redirect the visitor to the corresponding state-level page for that removed city-level page. Something like:

    <?php
    if ($cityPageWasRemoved) { // pseudocode condition from the original question
        header("HTTP/1.0 404 Not Found");
        // Note: PHP replaces the status with a 302 when it sends a Location
        // header unless a 3xx (or 201) status is already set, so the 404
        // above is unlikely to survive this call.
        header("Location: http://example.com/state-level-page");
        exit();
    }

Is it problematic to send a 404 header and still redirect to a category-level page like this? By doing this, I'm sending any visitors to removed pages to the next most relevant page. Does it make more sense to 301 all the removed city-level pages to the state-level page? Also, these removed city-level pages collectively have little to no inbound links from other sites. I suspect that any inbound links to these removed pages are from low-quality scraper-type sites anyway. Thanks in advance!
Intermediate & Advanced SEO | rriot
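If the answer turns out to be 301s, a minimal sketch of that variant might be the following. The $cityToState mapping is a hypothetical placeholder, not something from the question:

    <?php
    // Map each removed city page to its state page and 301 there.
    $cityToState = ["/city/springfield" => "/state/illinois"]; // hypothetical mapping

    $path = parse_url($_SERVER["REQUEST_URI"], PHP_URL_PATH);
    if (isset($cityToState[$path])) {
        header("HTTP/1.1 301 Moved Permanently");
        header("Location: http://example.com" . $cityToState[$path]);
        exit();
    }
-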
Create different pages with keyword variations vs. add keyword variations to one page
For searches involving keywords like "lessons", "courses", and "classes", I frequently see pages in the top rankings which do not contain the search term in the title tag, despite these terms being quite competitive. It seems that when searching for "classes", Google detects that pages about "courses" may be just as relevant. What do you recommend? Option 1: creating 10 pages optimized for 10 different keyword variations, each with a significant amount of unique content. Or option 2: one page, dropping the 10 keyword variations throughout the body and headlines. Given that the keywords are all synonyms and the website already has high domain authority in the niche. Thanks
Intermediate & Advanced SEO | lcourse0
-
Local results vs normal results
Hi everyone, I am currently working on the website of a friend who owns a French spa treatment company. I have been working on it for the past 6 months, mostly on optimizing the page titles and on link building. So far the results are great in terms of normal results: if you type most of the keywords plus the city name, the website is very well positioned, if not top positioned. My only problem is that in the local results (Google Maps), nothing has improved at all. For most of the keywords where the website ranks 1st in normal results, it doesn't appear at all in local results. This is confusing: you would think Google considers the website relevant to the subject according to the normal results, yet it doesn't show it in a local context. The website is clearly located in the city (thanks to the page titles, and there's a Google Map on a specific page dedicated to its location). The company has a Google Places page and has had positive customer reviews on different trusted websites for more than a year now (the website is 2 years old). For the past 2 months I have focused my link building on local websites (directories and specialized sites). The results kept improving in normal results, but still no improvement at all in local ones. As far as I know, there are no mistakes such as multiple addresses for the same business. Everything seems to be done by the rules. I am not sure what more I can do. The competitors do not seem to be working much on their SEO, and in terms of linking (according to the pretty good SEOmoz tools), they have up to 10 times fewer (good) links than us. Maybe you guys have some advice on how I can manage this situation? I'm kind of lost here 😞 Thanks a lot for your help, appreciate it. Cheers,
Raphael
Intermediate & Advanced SEO | Pureshore