Why do I get duplicate pages, with the site referencing the capitalized version of the URL vs. the lowercase version (e.g. www.agi-automation.com/Pneumatic-grippers.htm)?
-
Can I use the rel=canonical tag to fix this?
-
I'm not a pro when it comes to technical server setups, so maybe Keri can jump in with some better knowledge.
It seems to me like you have everything set up correctly on your server, and it looks like Google currently has only one version of the page in question indexed.
Your site navigation menu points to the capitalized version of the URL, but somewhere on your site there must be a link that points to the lowercase version. That would explain how SEOmoz found the duplication when crawling your site, and if SEOmoz can find it, so can Google.
I still think you should use the rel=canonical attribute just to be safe. Again, I'm not that great at technical stuff. Sorry I couldn't be of more help here.
Tim
-
Hi Keri and Tim,
Thanks for your responses. This is what the IT team has found. Let me know your thoughts:
On the physical machine that hosts the website, the page exists as a single file. The casing of the file name is irrelevant to the host machine; it wouldn't allow two files with the same name (differing only in case) in the same directory.
To reinforce this point, you can access that file by camel-casing the URI in any fashion (e.g. http://www.agi-automation.com/Linear-EscapeMents.htm). This does not bring up a different file each time; the server simply treats the URI as case-insensitive and pulls the file by its name.
What is happening in the example given is that some sort of indexer is being used to create a "dummy" reference of all the site files. Since the indexer doesn't have file access to the server, it does this by crawling links instead of reading files. It is the crawler that is assuming the different casings of the URL are in fact different files. Perhaps there is a setting in the indexer to ignore casing.
So the indexer thinks these are two different pages when they really aren't. This makes all of the other points moot, though they would certainly be relevant in the case of an actual duplicated page.
-
Excellent points, Keri. I hadn't thought about either of those issues. Using a redirect is definitely the best way to go.
-
I'd vote for doing the rewrite to the lowercase version (a sketch of the rewrite follows the list below). This gives you a couple of added benefits:
-
If people copy and paste the URL from their browser and then link to it, all the links go to the same place.
-
Your analytics based on your URLs will be more accurate. Instead of seeing:
urla.htm 70 visits
urlb.htm 60 visits
urlB.htm 30 visits
You'll see:
urlb.htm 90 visits
urla.htm 70 visits
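Here's a minimal sketch of what a sitewide lowercase rewrite might look like, assuming the site runs on Apache with mod_rewrite enabled (the "lowercase" label is just an illustrative name for the built-in tolower map):

```apache
# In the virtual host / server config (RewriteMap is not allowed in .htaccess).
# Any requested path containing an uppercase letter is mapped to its
# lowercase form and sent there with a permanent (301) redirect.
RewriteEngine On
RewriteMap lowercase int:tolower
RewriteCond %{REQUEST_URI} [A-Z]
RewriteRule ^(.*)$ ${lowercase:$1} [R=301,L]
```

If the site runs on IIS instead, the URL Rewrite module offers a similar "Enforce lowercase URLs" rule template.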
-
The problem is that search engines view these URLs as two separate pages, so both pages get indexed and you run into duplication issues.
Yes, using rel=canonical is a good way to handle this. I would suggest using the lowercase version as your canonical page, so you would place this bit of HTML on both pages:
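For example (assuming the lowercase URL is the one you want treated as canonical), something along these lines would go in the head of both versions:

```html
<link rel="canonical" href="http://www.agi-automation.com/pneumatic-grippers.htm" />
```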
The other option is to create a 301 redirect from the caps version to the lowercase version. This would ensure that anyone arriving at the page (including search engine bots) would end up being directed to the lowercase version.
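As a rough illustration, a single-page redirect of this kind could be set up in an Apache .htaccess file like this (assuming Apache with mod_alias; on other servers the equivalent rule would differ):

```apache
# Permanently (301) redirect the capitalized URL to the lowercase version.
Redirect 301 /Pneumatic-grippers.htm http://www.agi-automation.com/pneumatic-grippers.htm
```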