Penalization for Duplicate URLs with %29 or "/"
-
Hi there -
Some of our dynamically generated product URLs are somehow showing up in SEOmoz as two different URLs even though they are the same page - one with an encoded %28 and one with a literal "(".
e.g.,
http://www.company.com/ProductX-%28-etc/
http://www.company.com/ProductX-(-etc/
Also, some of the URLs are duplicated with a "/" at the end of them.
Does Google penalize us for these duplicate URLs? Should we add canonical tags to all of them?
Finally, our development team is claiming that they are not generating these pages, and that they are being generated from facebook/pinterest/etc. which doesn't make a whole lot of sense to me. Is that right?
Thanks!
-
Canonical tags should drastically help with this. The % is being generated because the URL is being encoded and contains a "(". Have each of your product pages contain its own canonical tag with the URL you want indexed. Not sure which URL to use? Check your internal links and see how your site is linking to your product pages. Presumably it's:
http://www.company.com/ProductX-(-etc/
or
http://www.company.com/ProductX-(-etc
Add this URL as your canonical and the search engines will understand which page is the 'real' page. This will solve both problems from an SEO standpoint. If you want to actually stop the site from serving these variants, you can strip trailing slashes and normalize the encoding with .htaccess rules.
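For example, a minimal .htaccess sketch for the trailing-slash half of the problem (this assumes Apache with mod_rewrite enabled, and that you want the slash-less form as canonical; adapt the rule if you prefer trailing slashes):

```apache
# Assumes Apache with mod_rewrite enabled (hypothetical sketch - test on a staging site first).
RewriteEngine On

# 301-redirect any URL ending in a trailing slash to the slash-less version,
# skipping real directories so directory indexes keep working.
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [R=301,L]
```

Note that Apache decodes %28 back to "(" before rewrite rules run, so the encoded and literal forms usually resolve to the same resource; the duplication comes from how external links are written, which is exactly what the canonical tag addresses.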
-
In short, your site software should completely control all links generated on your site. If you hand code a site using .NET, Cold Fusion, HTML/CSS/PHP, etc. you are in complete control over your links. If you use a CMS or other software such as WordPress, Magento, etc. then the software creates the URLs for you. In either case a skilled developer should be able to offer you options.
As for format, I recommend using a standard convention for your URLs. We like to have all categories end with a trailing slash and all web pages end without one. For example: www.mysite.com/cars/ or www.mysite.com/cars/2010-ford-mustang.
Whatever choice you make, enforce it throughout your site. You can also use the canonical tag to help control issues where a page may be offered under multiple URLs, but the best choice would be solving the root issue.
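The canonical tag itself is a single line in the page's <head>; every variant of the page should emit the same href (the product URL below is taken from the question and is illustrative):

```html
<!-- Placed in the <head> of every variant of the page; the href is the
     one URL you want indexed, in the exact form your site links to it. -->
<link rel="canonical" href="http://www.company.com/ProductX-(-etc" />
```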
**Our development team is claiming that they are not generating these pages, and that they are being generated from facebook/pinterest/etc. which doesn't make a whole lot of sense to me.**
Without looking at your website and an example URL of this issue, it is not possible to offer a definitive answer. I have never encountered this issue.