Duplicate title tags caused by uppercase and lowercase versions of URLs
-
Hi
GWT is reporting lots of duplicate titles for a client's new site.
Mostly these are due to two different versions of each URL: one with words starting with an uppercase character and the other all lowercase.
The client's dev says this is something to do with the Windows server and is fine!
Is this correct, or should I be telling them to 301 redirect all uppercase versions to the lowercase ones (since lowercase is better practice), which will deal with the reported duplicate titles?
All Best
Dan
-
Hi Tom
Sorry to be a pain, but can you please confirm, re my latest reply to your answer: does your advice still stand if the client is on a Windows server (since I don't think those have .htaccess files - or do/can they)?
And if not, is there a similar course of action that can be taken?
Cheers
Dan
-
Many thanks, Raymond!
All Best
Dan
-
That's great, Tom - thanks a lot for your advice!
The client's on a Windows server; am I right in thinking they don't have .htaccess files, since those are an Apache thing?
If so, is the principle/code the same, just done in a different file? If so, any idea what that file is? Or am I mistaken, and does Windows have .htaccess files after all?
Cheers
Dan
-
Hi Dan
I wouldn't want to leave anything to doubt and would prefer to have 1 version of each URL available.
Fortunately, a fairly simple solution can be put in place in your .htaccess file. As always, please back up and test before trying any implementation - I can't tell you how many times I've made a simple mistake in the .htaccess file that causes big problems!
Anyway, the code you'd want to enter at the top of the file is:

RewriteEngine On
# Define a map using Apache's built-in tolower function.
# Note: RewriteMap is only valid in the server or virtual host config,
# not in .htaccess itself - define it there, and the rules below
# (which can live in .htaccess) can then reference it.
RewriteMap lc int:tolower
# If the requested path contains any uppercase letter...
RewriteCond %{REQUEST_URI} [A-Z]
# ...301 it to the all-lowercase version of the same path.
RewriteRule (.*) ${lc:$1} [R=301,L]

That code will rewrite any URL containing uppercase letters to the same URL using only lowercase.
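To illustrate exactly what that mapping does, here's a standalone Python sketch (not part of the server setup, purely an illustration): only the path component is lowercased, while the host and query string pass through untouched, which matches how the rewrite rule behaves since %{REQUEST_URI} excludes the query string.

```python
from urllib.parse import urlsplit, urlunsplit

def lowercase_path(url):
    # Lowercase only the path component; leave host, query, and fragment alone.
    parts = urlsplit(url)
    return urlunsplit(parts._replace(path=parts.path.lower()))

print(lowercase_path("http://example.com/About-Us/Team?Ref=ABC"))
# http://example.com/about-us/team?Ref=ABC
```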
Redirects are quicker and more reliable than canonical tags in my experience, and this doesn't take long to implement, so it's best not to leave anything to chance.
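If the site is on IIS (the usual Windows Server setup), the .htaccess approach above doesn't apply, but Microsoft's URL Rewrite module offers the same behaviour via web.config and its built-in ToLower function. A hedged sketch (the rule name and placement are illustrative, and it assumes the URL Rewrite module is installed - again, test on a staging copy first):

```xml
<!-- Hypothetical web.config fragment: 301 any URL containing uppercase
     letters to its all-lowercase equivalent. -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="LowercaseRedirect" stopProcessing="true">
          <!-- ignoreCase must be false so [A-Z] matches only real uppercase -->
          <match url="[A-Z]" ignoreCase="false" />
          <action type="Redirect" url="{ToLower:{URL}}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```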
Hope this helps.
-
GWT sometimes takes a while to register the change. This actually just happened on one of my client's pages: a bug in the developer's code produced 30,000 duplicate title and description errors. It was fixed about two months ago, and while the errors have dropped to 2,000 or so, the count still isn't at zero, so it does take some time depending on how often Google crawls the site.
If the developer says they did it, I would recommend the first thing you do is simply check that they did. Go to one of the pages in question - both the capitalised version and the lowercase one - and view source on both to check that the rel="canonical" tag is there and is correct. Programmers don't always know much about SEO, so they may be implementing it incorrectly.
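One quick way to do that check beyond eyeballing the source is a small script. A hedged Python sketch using only the standard library (the HTML strings here are placeholders standing in for the real page source of each case variant): pull the canonical href out of each page and confirm both variants declare the same one.

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.canonical is None:
            a = dict(attrs)
            if (a.get("rel") or "").lower() == "canonical":
                self.canonical = a.get("href")

def find_canonical(html_text):
    parser = CanonicalFinder()
    parser.feed(html_text)
    return parser.canonical

# Placeholder HTML for the uppercase and lowercase URL variants.
upper = '<head><title>Page</title><link rel="canonical" href="http://example.com/page"></head>'
lower = '<head><title>Page</title><link rel="canonical" href="http://example.com/page"></head>'
print(find_canonical(upper) == find_canonical(lower))  # True
```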
-
Thanks Raymond
The developers are saying they have done this.
Does that mean that if it's still showing in GWT, it hasn't actually been done?
Cheers
Dan
-
As a simpler solution, why don't you just add a rel="canonical" tag to each page, specifying the version you want? That should take care of any duplicate content issues.
I hope this helps.
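For example, both case variants of a page would carry the same tag in their head section, pointing at the preferred (lowercase) version - the URL here is a placeholder:

```html
<!-- Served on both /About-Us and /about-us -->
<link rel="canonical" href="http://www.example.com/about-us" />
```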