Upper and lower case spelling = dupe content?
-
Hi All,
I've been looking at my Crawl Diagnostics Summary and working on getting my site's errors down as low as possible.
One thing I'm noticing is that the "Other URLs" column shows a lot of 1s. When I click one of those numbers, it shows me the same URL but with an uppercase category title.
For example, it appears that it's telling me these two URLs are considered duplicate content:
Is that right? Does Google care about upper- and lowercase spelling?
-
Thanks guys! This is a huge help. I'll get it taken care of.
-
URLs are case sensitive after the TLD, so these would appear to Google to be duplicate content. Theoretically, they could be two different pages. Ideally, you should 301 redirect all of one form to the other. So if you're using lowercase /category across your site, you would want to 301 all the /Category URLs to /category.
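To illustrate the "case sensitive after the TLD" point, here's a minimal sketch of how a crawler might compare two URLs. Per the URL standard, the scheme and host are case-insensitive, but the path is not, so /Category and /category are technically distinct resources. The function name and example URLs are just for illustration:

```python
from urllib.parse import urlsplit

def same_resource(url_a: str, url_b: str) -> bool:
    """Compare two URLs the way a crawler would: scheme and host
    are case-insensitive, but the path (after the TLD) is not."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    return (a.scheme.lower() == b.scheme.lower()
            and a.netloc.lower() == b.netloc.lower()
            and a.path == b.path          # path compared case-sensitively
            and a.query == b.query)

# The host may vary in case, but the path may not:
same_resource("http://Example.com/category", "http://example.com/category")  # True
same_resource("http://example.com/Category", "http://example.com/category")  # False
```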
Some sites correct for capitalization in URLs, and some don't. Do you have internal links using both forms of "category"? If so, you should standardize them on one form, since 301s don't pass all of your link juice, so you'd create a bit of a leak.
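If you end up handling the redirect in application code rather than server config, the logic is simple. This is a hypothetical sketch that assumes all of your canonical paths are lowercase; on Apache or nginx you'd typically do the same thing with a rewrite rule instead:

```python
def lowercase_redirect(path: str):
    """Return a (status, location) pair for a 301 redirect when the
    requested path contains uppercase letters; None when the path is
    already canonical and can be served as-is."""
    canonical = path.lower()
    if path != canonical:
        return (301, canonical)
    return None

lowercase_redirect("/Category")  # (301, '/category')
lowercase_redirect("/category")  # None (already canonical)
```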
-
Hi Shawn
URLs are case sensitive, so in the example in your question, yes, you have duplicate pages there.
Because "Category" is spelt with both an uppercase and a lowercase C, you will have two identical pages, which is not good for either search or user experience.
For confirmation that sticking with lowercase URLs is an absolute must, see point 10 of "11 Best Practices for URLs", an SEOmoz blog post by Rand Fishkin that, despite its age, is still extremely valid today.
So it's highly recommended that you 301 redirect any URL containing uppercase letters to its all-lowercase counterpart.
Regards
Simon