URL Mixed Cases and Duplicate Content
-
Hi There,
I have a question for you. I am working on a website where any letter of the URL path can be typed in lower or upper case and the server will still return a 200 code.
Examples:
www.examples.com/page1/product
www.examples.com/paGe1/Product
www.examples.com/PagE1/prOdUcT
www.examples.com/pAge1/prODUCt
and so on…
Although I cannot find evidence of backlinks pointing to my page with mixed-case URLs, should I redirect or rel=canonical all the possible case combinations to the lowercase version in order to prevent duplicate content? And if so, do you have any advice on how to complete such a massive job?
Thanks a lot
-
Thank you guys! Very helpful answers. Take care
-
Hi,
Technically this is duplicate content, but it is most likely something search engines know to ignore (especially if no links point to the upper-case / incorrect URLs). If it particularly bothers you, place a canonical tag in the page's <head> so that www.examples.com/pAGe/PROducT references www.examples.com/page/product in its <link rel="canonical"> tag. Either that or a 301.
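As a rough illustration (using the hypothetical URLs from the question), the tag would look something like this:

```html
<!-- Served on every case variant of the URL; always points at the lowercase version -->
<link rel="canonical" href="http://www.examples.com/page1/product" />
```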
Cheers.
-
Have a look around at many other sites; they all do the same. Do a search for something in Google and then change the case of anything after the domain. Microsoft is the same. I am sure there are examples that will 404 these requests, but that would be a server-side setting.
I really wouldn't worry about it, though. Moz is the same as well: upper or lower case will give you the same page. In the last 15 years of SEO, I have never come across this as an issue.
-Andy
-
It makes more sense to create a server-level rule that 301-redirects any mixed-case URL to its lowercase equivalent, so that if a URL is accessed in anything other than lowercase, it redirects to the lowercase URL.
This should save you some effort in the long run.
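On Apache, for instance, this is commonly done with a RewriteMap. A minimal sketch, assuming you can edit the main server or virtual host config (RewriteMap cannot be declared in .htaccess):

```apache
# Define an internal map that lowercases whatever is passed to it
RewriteMap lowercase int:tolower

RewriteEngine On
# If the requested path contains any uppercase letter...
RewriteCond %{REQUEST_URI} [A-Z]
# ...issue a permanent (301) redirect to the lowercased path
RewriteRule ^(.*)$ ${lowercase:$1} [R=301,L]
```

Note that nginx has no built-in lowercase filter, so the equivalent rule there is usually written with the embedded Perl or Lua module.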
-
Related Questions
-
Duplicate Content and Subdirectories
Hi there and thank you in advance for your help! I'm seeking guidance on how to structure a resources directory (white papers, webinars, etc.) while avoiding duplicate content penalties. If you go to /resources on our site, there is a filter function. If you filter for webinars, the URL becomes /resources/?type=webinar. We didn't want that dynamic URL to be the primary URL for webinars, so we created a new page with the URL /resources/webinar that lists all of our webinars and includes a featured webinar up top. However, the same webinar titles now appear on both the /resources page and the /resources/webinar page. Will that cause duplicate content issues? P.S. Not sure if it matters, but we also changed the URLs for the individual resource pages to include the resource type. For example, one of our webinar URLs is /resources/webinar/forecasting-your-revenue. Thank you!
Technical SEO | SAIM_Marketing
-
Is it possible to deindex old URLs that contain duplicate content?
Our client is a recruitment agency and their website used to contain a substantial amount of duplicate content, as many of the listed job descriptions were repeated and recycled. As a result, their rankings rarely progress beyond page 2 on Google. Although they have started using more unique content for each listing, it appears that old job listing pages are still indexed, so our assumption is that Google is holding down the rankings due to the amount of duplicate content present (one tool returned a score of 43% duplicate content across the website). Looking at other recruitment websites, it appears that they block the actual job listings via the robots.txt file. Would blocking the job listing pages from being indexed, either by robots.txt or by a noindex tag, reduce the negative impact of the duplicate content, but also remove any link juice coming to those pages? In addition, expired job listing URLs stay live, which is likely increasing the overall duplicate content. Would it be worth removing these pages and setting up 404s, given that any links to these pages would be lost? If these pages are removed, is it possible to permanently deindex these URLs? Any help is greatly appreciated!
Technical SEO | ClickHub-Harry
-
Duplicate page content
Hi, I am getting a duplicate content error in SEOmoz on one of my websites. It shows:
http://www.exampledomain.co.uk
http://www.exampledomain.co.uk/
http://www.exampledomain.co.uk/index.html
How can I fix this? Thanks, Darren
Technical SEO | Bristolweb
-
API for testing duplicate content
Does anyone know a service, API, or PHP library to compare two (or more) pages and return their similarity (level-3 shingles)? An API would be greatly preferred.
Technical SEO | Sebes
-
I have a ton of "duplicate content" and "duplicate title" errors on my website. Solutions?
Hi, and thanks in advance. I have a Jomsocial site with 1000 users. It is highly customized, and as a result of the customization some of the pages have 5 or more different types of URLs pointing to the same page. Google has indexed 16,000 links already and the crawl report shows a lot of duplicated content. These links are important for some of the functionality, are dynamically created, and will continue growing. My developers offered to create rules in the robots file so a big part of these links don't get indexed, but Google Webmaster Tools says the following: "Google no longer recommends blocking crawler access to duplicate content on your website, whether with a robots.txt file or other methods. If search engines can't crawl pages with duplicate content, they can't automatically detect that these URLs point to the same content and will therefore effectively have to treat them as separate, unique pages. A better solution is to allow search engines to crawl these URLs, but mark them as duplicates by using the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. In cases where duplicate content leads to us crawling too much of your website, you can also adjust the crawl rate setting in Webmaster Tools." Here is an example of the links:
http://anxietysocialnet.com/profile/edit-profile/salocharly
http://anxietysocialnet.com/salocharly/profile
http://anxietysocialnet.com/profile/preferences/salocharly
http://anxietysocialnet.com/profile/salocharly
http://anxietysocialnet.com/profile/privacy/salocharly
http://anxietysocialnet.com/profile/edit-details/salocharly
http://anxietysocialnet.com/profile/change-profile-picture/salocharly
So the question is: is this really that bad? What are my options? Is it really a good solution to set rules in robots so big chunks of the site don't get indexed? Is there any other way I can resolve this? Thanks again! Salo
Technical SEO | Salocharly
-
Duplicate content, how to solve?
I have about 400 duplicate content errors on my SEOmoz dashboard, but I have no idea how to solve this. I have 2 main scenarios of duplication on my site. Scenario 1: http://www.theprinterdepo.com/catalogsearch/advanced/result/?name=64MB+SDRAM+DIMM+MEMORY+MODULE&sku=&price%5Bfrom%5D=&price%5Bto%5D=&category= shows 3 products with the same title but different product models; as you can note, they have the same price as well. Some printers use a different memory module, so I just can't delete 2 of the products. Scenario 2 (toners): http://www.theprinterdepo.com/brother-high-capacity-black-toner-cartridge-compatible-73 and http://www.theprinterdepo.com/brother-high-capacity-black-toner-cartridge-compatible-75. In this scenario, the products have a different title but the same price. Again, the 2 products are different. Thank you
Technical SEO | levalencia1
-
Duplicate content across multiple domains
I have come across a situation where we have discovered duplicate content between multiple domains. We have access to each domain and within the past 2 weeks have added a 301 redirect to redirect each page dynamically to the proper page on the desired domain. My question relates to the removal of these pages. There are thousands of these duplicate pages. I have gone back and looked at a number of the cached pages in Google and found that they are roughly 30 days old or older. Will these pages ever get removed from Google's index? Will the 301 redirect even be read by Google and the request redirected to the proper domain and page? If so, when will that happen? Are we better off submitting a full site removal request for the sites that carry the duplicate content at this point? These smaller sites do bring traffic on their own, but I'd rather not wait 3 months for the content to be removed, since my assumption is that this content is competing with the main site. I suppose another option would be to include a no-cache meta tag for these pages. Any thoughts or comments would be appreciated.
Technical SEO | jmsobe
-
Why are my pages getting duplicate content errors?
Studying the Duplicate Page Content report reveals that all (or many) of my pages are getting flagged as having duplicate content because the crawler thinks there are two versions of the same page: http://www.mapsalive.com/Features/audio.aspx and http://www.mapsalive.com/Features/Audio.aspx. The only difference is the capitalization. We don't have two versions of the page, so I don't understand what I'm missing or how to correct this. Anyone have any thoughts on what to look for?
Technical SEO | jkenyon