Rel=canonical to rewritten or original URL?
-
We're working with a large number of duplicate pages caused by different views of our products, and we're rewriting URLs for the most-linked page in each set. Should rel=canonical point to the rewritten URL or the actual URL? And is there a way to see what the rewritten URL is within the crawl data?
I was taking the approach of rewriting only the base version of each page and then using rel=canonical on the duplicate pages. Can anyone recommend a better or cleaner approach? I haven't seen many articles on retail SEO for sites stuck with a less-than-optimized CMS.
Thanks!
-
We are using actual rewrites and then using rel=canonical for all sub pages.
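For illustration, a hedged sketch of what that looks like in practice (the URL is hypothetical): every sorted, filtered, or otherwise duplicated view of a product page carries the same tag in its <head>, pointing at the clean rewritten URL.

    <link rel="canonical" href="http://www.example.com/widgets/red-widget" />

So /widgets/red-widget?view=grid and /widgets/red-widget?sort=price both declare /widgets/red-widget as the preferred version.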
-
If you are rewriting URLs in the web server software (such as Apache's mod_rewrite), the rewritten URL is never exposed to users or search engines; the rewrite is purely internal to the server. Could you clarify whether you are actually rewriting URLs, or redirecting them to the canonical version?
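To illustrate the difference, a minimal .htaccess sketch (assuming Apache with mod_rewrite enabled; the paths are hypothetical):

    RewriteEngine On

    # Internal rewrite: visitors and crawlers request /red-widget and
    # Apache silently serves product.php?id=42. No new URL is exposed.
    RewriteRule ^red-widget$ /product.php?id=42 [L]

    # External 301 redirect: a request for the old URL is sent to
    # /red-widget, which is the URL search engines then index.
    # (Use one approach or the other for a given pair of URLs; enabling
    # both as written would loop.)
    # RewriteRule ^product\.php$ /red-widget [L,R=301]

Only with the redirect does the address bar, and therefore the search engine's index, ever change; with the internal rewrite, the target URL stays invisible.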
Related Questions
-
URL Question: Is there any value for ecomm sites in having a reverse "breadcrumb" in the URL?
Wondering if there is any value for e-comm sites in featuring a reverse breadcrumb-like structure in the URL. For example: https://www.grainger.com/category/anchor-bolts/anchors/fasteners/ecatalog/N-8j5?ssf=3&ssf=3 where we have reverse categorization happening, with /level2-sub-cat/level1-sub-cat/category in the reverse order of the actual location on the site. Category: Fasteners; Sub-Cat (level 1): Anchors; Sub-Cat (level 2): Anchor Bolts.
Technical SEO | ROI_DNA
-
URL Structure
I'm going through the process of redesigning our website, and the URL structure was brought up. We currently have our URLs structured as domain.com/keyword. Some people think setting URLs up as domain.com/directory/keyword makes more sense from both a user's perspective and a search engine's perspective. With our directories labeled services, solutions, and clients, I see no value in adding them: the extra directory dilutes the keyword and moves it further from the domain. Are there situations where adding a directory before the page in the URL makes sense? If anyone has data showing the difference between the two, that'd be great! Thanks, Brian
Technical SEO | PrasoonGoel
-
Redirecting Canonical Hostnames
Hi, I want to redirect every URL on "site.com" to "www.site.com". I read the Moz redirection article and concluded that this would be the best approach:

    RewriteCond %{HTTP_HOST} !^www.seomoz.org [NC]
    RewriteRule (.*) http://www.seomoz.org/$1 [L,R=301]

But I received this error: "Internal Server Error. The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log." I tried this rewrite too:

    RewriteCond %{HTTP_HOST} !^www. [NC]
    RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [L,R=301]

It worked, but it only redirects the bare domain "site.com" and not all the sub-pages like "site.com/fr/example.php" to "www.site.com". Why doesn't it work properly? It seems like it should be easy. Could it be a hosting problem? Is there another way to do it?
Technical SEO | bigrat95
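For reference, a commonly used non-www to www pattern (a sketch assuming Apache with mod_rewrite enabled, placed in the site root's .htaccess) that preserves the full request path:

    RewriteEngine On

    # Match any hostname that does not start with "www." (the dot is
    # escaped so it matches a literal period).
    RewriteCond %{HTTP_HOST} !^www\. [NC]

    # 301 to the same host with "www." prefixed; $1 carries the path,
    # so site.com/fr/example.php -> www.site.com/fr/example.php
    RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [L,R=301]

If mod_rewrite is not loaded on the server, the directives themselves trigger a 500 Internal Server Error, which may explain the first error above; that would indeed be a hosting configuration issue.
-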
Special characters in URL
Will a registered trademark symbol within a URL be bad? I know some special characters are unsafe (#, >, etc.) but cannot find anything that mentions the registered trademark symbol. Thanks!
Technical SEO | bonnierSEO
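For context: under RFC 3986, characters outside the unreserved ASCII set should be percent-encoded as UTF-8 bytes before appearing in a URL. The registered trademark symbol ® (U+00AE) is two bytes in UTF-8, so a product path containing it would normally be served in encoded form, as in this hypothetical example:

    https://www.example.com/products/acme%C2%AE-widget
-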
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good.

This is the chain of events: the site was migrated to the new platform following best practice, as far as I can attest. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between relaunch on 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified. URL structure and URIs were maintained 100% (which may be a problem, now). Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expanding the report, the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I. Run, not walk, to Google and do some fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI). Checked Bing and it has indexed each root URL once, as it should.

Situation now: the site no longer uses the ?ref= parameter, although of course some external backlinks still use it. This was intentional and happened when we migrated. I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it is at over 1,000 (another wtf moment). I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML site-map page. The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist and won't drop them from the index, but will instead apply a dupe content penalty. Or maybe call us a spam farm. Who knows.

Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug submission form 😄) include: A) robots.txt-ing the ?ref= URLs, but to me this says "you can't see these pages", not "these pages don't exist", so isn't correct. B) Hand-removing the URLs from the index through a page removal request per indexed URL. C) Applying a 301 to each indexed URL (hello Bing dirty-sitemap penalty). D) Posting on SEOmoz because I genuinely can't understand this.

Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why and can't think of the best way to correct the situation. Do you? 🙂

Edited to add: as of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There are no messages explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
Technical SEO | Tinhat
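If option C is the route taken, it need not be done per indexed URL: a single pattern-based rule can 301 every ?ref= variant to its clean counterpart. A sketch, assuming Apache with mod_rewrite (other stacks would need the equivalent):

    RewriteEngine On

    # Only fire when the query string contains a ref= parameter.
    RewriteCond %{QUERY_STRING} (^|&)ref= [NC]

    # Redirect to the same path; the trailing "?" discards the query
    # string (note this also drops any other parameters on the URL).
    RewriteRule ^(.*)$ /$1? [L,R=301]
-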
Canonical: Is this a problem?
Hi!!
I am running a small WordPress website and I am a little confused about the Rel Canonical notices in the crawl diagnostics. I have SEO by Yoast and have fixed the canonical URL for my pages, but I still receive the notices. Should I be worried, or are they just informing me that everything is OK?
Technical SEO | petrospan
-
Rel=Canonical Header Location
Hello, I've been trying to get our rel=canonical issues sorted out. A fellow named Ayaz very kindly pointed out that I'm putting the code into the WYSIWYG editor, which might not be the best place for it. We are using Drupal 6. Where do I insert the code?

    <head>
      <link rel="canonical" href="http://www.example.com/blog/my-awesome-blog-post">
    </head>

Thanks!
Technical SEO | OTSEO
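In Drupal 6 the tag has to end up in the rendered page <head>, which the WYSIWYG body field never reaches. One untested sketch, assuming a small custom module (the module name and URL are hypothetical; drupal_set_html_head() is the Drupal 6 function for appending markup to the head):

    <?php
    // mymodule.module (hypothetical): runs while the page template
    // variables are prepared, after Drupal builds the default head.
    function mymodule_preprocess_page(&$variables) {
      $canonical = 'http://www.example.com/blog/my-awesome-blog-post';
      drupal_set_html_head('<link rel="canonical" href="' . $canonical . '" />');
      // Refresh the $head variable so page.tpl.php prints the new tag.
      $variables['head'] = drupal_get_html_head();
    }

Alternatively, the tag can be hard-coded directly in the theme's page.tpl.php inside the <head> element, though then the href would need to be computed per page.
-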
How to find original URLs after hosting company added canonical URLs, URL rewrites, and duplicate content
We recently changed hosting companies for our ecommerce website. The hosting company added functionality that caused duplicate content and/or mirrored pages to appear in the search engines, and to fix the problem it created both canonical URLs and URL rewrites. Now we have page A (the original page with all the link juice) and page B (the new page with no link juice or SEO value). Both pages have the same content at different URLs. I understand that a canonical URL is the way to tell search engines which page is preferred in cases of duplicate content and mirrored pages: it says that page B is a copy of page A, but page A is the one to index. The problem we now face is that the hosting company made page A a copy of page B, rather than the other way around, so the search engines are now prioritizing the newly created pages over the originals. I believe the solution is to reverse this, making page B (the new page) a copy of page A (the original) and putting the original URL as the canonical URL on the duplicate pages. The problem is that, with all the rewrites and changes in functionality, I no longer know which URLs have the backlinks that carry the SEO value. If I can find the backlinks to an original page, I can work out its original web address. My question is: how can I search the web for backlinks in such a way that I can figure out which URL they point to, so that I can make that URL the canonical URL for all the new duplicate pages?
Technical SEO | CABLES