Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Lowercase vs. Uppercase Canonical Tags?
-
Hi Moz, I was hoping someone could help shed some light on an issue I'm having with URL structure and the canonical tag.
The company I work for is a distributor of electrical products, and our e-commerce site is structured so that our URLs (specifically, our product detail page URLs) include a portion (the part number) that is all uppercase (e.g. buy/OEL-Worldwide-Industries/AFW-PG-10-10).
The issue is that we only recently added a canonical tag to all of our product detail pages, and the programmer who worked on the project set every canonical tag in lowercase instead of uppercase. Now, in GWT, I'm seeing between 20,000 and 25,000 "duplicate title tags" and "duplicate descriptions".
Is this an issue? Could it be resolved by simply changing the canonical tags to reflect the uppercase URLs? I'm not too well versed in canonical tags and would love a little insight.
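To illustrate the mismatch (with a placeholder domain, since I've left ours out), the page served at the uppercase URL currently carries a canonical pointing at the lowercase version, something like this:

```html
<!-- Page served at (placeholder domain): https://www.example.com/buy/OEL-Worldwide-Industries/AFW-PG-10-10 -->
<!-- Canonical tag as it stands, pointing at the lowercase variant: -->
<link rel="canonical" href="https://www.example.com/buy/oel-worldwide-industries/afw-pg-10-10" />
```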
Thanks!
-
Thanks for the feedback, Federico! That actually helps a lot and also confirms what our programmer has just done (changing all the canonical tags to the uppercase URLs). I guess now we'll play the waiting game and see if Google reduces the number of duplicates after its next crawl.
Thanks again!
-
That should be an easy fix for your programmer. If your internal links point to pages with uppercase letters in them, then the canonical tags should use the uppercase URLs as well. Almost always, the uppercase and lowercase versions load the same content, because the rewrite rules use the URL to look up the product in a database that doesn't distinguish uppercase from lowercase by default (in MySQL you can force a query to be case-sensitive, but that would actually be harder than just changing the way the programmer built the canonical tags). You should also redirect the duplicate pages to the original ones: if the originals have uppercase letters, then the lowercase versions should redirect to the uppercase ones (once the canonical tags are properly set).
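As a rough sketch (with a placeholder domain, since the full URLs weren't shared), the fix is simply to make the canonical tag match the casing of the URL you actually link to internally:

```html
<!-- Served at (placeholder domain): https://www.example.com/buy/OEL-Worldwide-Industries/AFW-PG-10-10 -->
<link rel="canonical" href="https://www.example.com/buy/OEL-Worldwide-Industries/AFW-PG-10-10" />
<!-- Any lowercase or mixed-case variant of this URL should then 301 to the uppercase version above. -->
```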
From my own personal point of view, I've always preferred lowercase URLs. If you go that route there's a little more coding to do, but you end up with every URL in lowercase (for some reason almost every CMS automatically converts uppercase letters to lowercase in page URLs, as WordPress does).
Hope that helps!
Related Questions
-
Rel=Canonical Vs. 301 for blog articles
Over the last few years, my company has acquired numerous different companies -- some of which had themselves been acquired before that. Some of the acquired products were living on their previous company's parent site rather than having their own site dedicated to the product. The decision has been made that each product will have its own site moving forward. Since the product pages, blog articles and resource center landing pages (e.g. whitepaper LPs) were living on the parent site, I'm struggling with the decision to 301 vs. rel=canonical those pages (with the new site being self-canonicalized). I'm leaning toward take-down and 301, since rel=canonicals are simply suggestions to Google and a new domain can use all the help it can get to start ranking. Are there any cons to doing so?
Intermediate & Advanced SEO | mfcb
-
Dealing with non-canonical http vs https?
We're working on a complete rebuild of a client's site. The existing version of the site is in WordPress, and I've noticed that the site is accessible via both http and https. The new version of the site will have mostly or entirely different URLs. It seems that both the http and https versions of a page will resolve, but all of the rel-canonical tags I've seen point to the https version. Sometimes image tags and stylesheets are https, sometimes they aren't. There are both http and https pages in Google's index. Having looked at other community posts about http/https, I've gathered the following: http and https are treated like two different domains; http and https versions need to be verified in Google Webmaster Tools separately; set up the preferred domain properly; and rel-canonicals and internal links should have matching protocols. My thought is that we will do a .htaccess that redirects old URLs, regardless of protocol, to the new pages on one protocol. I would probably let the .css and image files from the current site 404. When we develop and launch the new site, does it make sense for everything to be forced to https? Are there any particular SEO issues that I should be aware of for a scenario like this? Thanks!
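To make the "matching protocols" point concrete (placeholder domain and paths, purely for illustration), the https version of a page would reference everything over https:

```html
<!-- On https://www.example.com/new-page/ (hypothetical URL) -->
<link rel="canonical" href="https://www.example.com/new-page/" />
<link rel="stylesheet" href="https://www.example.com/assets/styles.css" />
<a href="https://www.example.com/another-page/">Another page</a>
```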
Intermediate & Advanced SEO | GOODSIR
-
Schema Tag and RDF Microdata for Breadcrumbs
Can someone help analyse whether the Schema tag and RDF microdata are correct? - http://www.mycarhelpline.com/index.php?option=com_newcar&view=product&Itemid=2&id=106&vid=361 - http://www.mycarhelpline.com/index.php?option=com_newcar&view=product&Itemid=2&id=22&vid=6 The reason I'm asking is that many sites have reported that although the rich snippet shows, the RDF microdata does not actually show up on search engines. Many thanks
Intermediate & Advanced SEO | Modi
-
Rel Canonical on Home Page
I have a client who says they can't implement a 301 on their home page. They have two different URLs for their home page that are both live and do not redirect. I know the best solution would be to redirect one to the main URL, but they say this isn't possible, so they implemented the rel canonical instead. Is this the second-best solution for them if they can't redirect? Will the link juice be passed through the rel canonical? Thanks!
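In other words (hypothetical URLs, since I can't share the client's), both home page variants now carry the same canonical pointing at the preferred URL:

```html
<!-- Served on both https://www.example.com/ and https://www.example.com/default.aspx (hypothetical second home page URL) -->
<link rel="canonical" href="https://www.example.com/" />
```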
Intermediate & Advanced SEO | AlightAnalytics
-
Transactional vs Informative Search
I have a page that is ranking quite well (page 1) for the plural of a keyword, but it is only ranking on page 3 for the singular keyword. For more than a year I have been working on on-page and off-page optimization to improve the ranking for the singular term, without success. Google is treating the two terms almost the same: when you search for one term, the other term is also marked in bold and the results are very similar. The big difference between the two terms, in my opinion, is that one is more of an informational search and the other is more of a transactional search. Now I would be curious to know which factors Google could use to understand whether a search and a website are more transactional or informational, apart from mentions of things like: Buy now, Shop, Special offer, etc. Any ideas?
Intermediate & Advanced SEO | SimCaffe
-
SEO vs 301
I have a website about "download of games" and I'm planning to open one about "games online". I know that "games online" is super hard to get good rankings for, so I'm thinking of doing a 301 from my "download games" website to my new website. Do you think that is a good strategy?
Intermediate & Advanced SEO | nafera2
-
Not using a robot command meta tag
Hi SEOmoz peeps. I was doing some research on robot commands and found a couple of major sites that are not using them. If you check out the code for these: http://www.amazon.com http://www.zappos.com http://www.zappos.com/product/7787787/color/92100 http://www.altrec.com/ you will not find a meta robots command line. Of course you need the line for any noindex, nofollow, or noarchive pages. However, for pages you want crawled and indexed, is there any benefit to not having the line at all? Thanks!
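For reference, the line I mean is the standard meta robots tag; leaving it out is equivalent to the default:

```html
<!-- Default behaviour when the tag is omitted: -->
<meta name="robots" content="index, follow">
<!-- The tag is only strictly needed for non-default directives, e.g.: -->
<meta name="robots" content="noindex, nofollow, noarchive">
```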
Intermediate & Advanced SEO | STPseo
-
Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
I run a quality vertical search engine. About 6 months ago we had a problem with our sitemaps, which resulted in most of our pages getting tossed out of Google's index. As part of the response, we put a bunch of robots.txt restrictions in place in our search results to prevent Google from crawling through pagination links and other parameter-based variants of our results (sort order, etc.). The idea was to 'preserve crawl budget' in order to speed the rate at which Google could get our millions of pages back in the index by focusing attention/resources on the right pages. The pages are back in the index now (and have been for a while), and the restrictions have stayed in place since that time. But, in doing a little SEOmoz reading this morning, I came to wonder whether that approach may now be harming us...
http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Specifically, I'm concerned that a) we're blocking the flow of link juice, and b) by preventing Google from crawling the full depth of our search results (i.e. pages >1), we may be making our site wrongfully look 'thin'. With respect to b), we've been hit by Panda and have been implementing plenty of changes to improve engagement, eliminate inadvertently low-quality pages, etc., but we have yet to find 'the fix'... Thoughts? Kurus
Intermediate & Advanced SEO | kurus