Hundreds of versions of the same page. Is rel=canonical the solution?
-
Hi,
I am currently working with an eCommerce site that has a goofy setup for its contact form.
Basically, there are hundreds of "contact us" pages that look exactly the same but have different URLs, which are used to help the store owner determine which product the user contacted them about. So almost every product has its own "contact us" URL.
The obvious solution is to do away with this setup, but if that is not an option, would a rel=canonical tag pointing back to the actual "contact us" page be a possible solution? Or is the canonical tag only used to resolve www vs. non-www duplicates?
Thanks!
-
I understand that you have some parameters in the URL, since you need them to identify the product.
Just point a canonical at the generic contact-us page and you're done.
Be sure those parameters are not carried over to other pages. To that end, you're better off setting up a self-referencing canonical on all pages so you avoid any duplicate-content issues in the future.
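For example, a minimal sketch of what that looks like (the domain and "product" parameter are made up for illustration). Every parameterized variant carries the same tag pointing at the one generic page:

<!-- Served in the head of every variant, e.g. http://www.example.com/contact-us?product=4711 -->
<link rel="canonical" href="http://www.example.com/contact-us" />

The engines then treat all of the per-product URLs as copies of /contact-us and consolidate their signals there.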
-
Yup, use the canonical tag to tell the search engines which URL should be designated as the "correct" page to index. Here is some more information from Google - http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html
Related Questions
-
Subdomain initials vs. full city names for a multi-city subdomain site?
Helping with a multi-city non-profit magazine/news blog. Subdomain options: sf.domain.com, ny.domain.com, la.domain.com vs. sanfrancisco.domain.com, newyork.domain.com, ... As cities are added, some, for example seoul.domain.com, won't have recognizable initials the way NYC does. For branding, recognition, SEO benefit, and so on, what have you used and why? Thanks
Industry News | vmialik
-
Indexing "Without WWW" while it is already redirected to the "WWW" version
Hi Guys My websites are being indexed without "WWW" while the 'http://abc.com' is redirected to 'http://www.abc.com'. Now what I believe is that the URL encoded in my website files are written as 'http://abc.com' rather than 'http://www.abc.com' And since now Google has removed the "Set Preferred Domain" option from the Webmaster Tools, I can't set the preferred version of the URL. Oh & Some pages are indexed with "WWW" & Some are indexed "without WWW" Now I think that it's not an issue, but a lot of people have been saying that this may hurt the rankings.. Some comments/tips would be really appreciated
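One common approach while the redirect stays in place (a sketch reusing the asker's placeholder domain; "some-page" is hypothetical) is to make the on-page signals agree with the redirect: point a canonical at the www URL and write internal links against the www host.

<!-- In the head of each page: -->
<link rel="canonical" href="http://www.abc.com/some-page" />
<!-- Internal links written absolute to the preferred host: -->
<a href="http://www.abc.com/some-page">Some page</a>

That way the crawler sees one consistent preferred version even without the old "Set Preferred Domain" setting.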
Industry News | kasiddiqi
-
Yelp: legit, or is it wearing Pradas with a black hat?
Should I stop recommending that clients be on Yelp? http://finance.yahoo.com/news/yelps-newest-weapon-against-fake-100101689.html
Industry News | Chenzo
-
Google Changes Up The Search Results Page
Hi guys, As you know, Google has made changes to the search results page. I have two points to discuss here: 1. Are we going to see more ads in the left sidebar in the future? 2. Will it also affect the CTR of the top three ads in the SERP? Waiting for your opinions. Reference: http://www.webpronews.com/google-changes-up-the-search-results-page-2012-11
Industry News | SanketPatel
-
Is a canonical to itself a link juice leak?
Duane Forrester from Bing said that you should not have a canonical pointing back to the same page, as it confuses Bingbot:
"A lot of websites have rel=canonicals in place as placeholders within their page code. It's best to leave them blank rather than point them at themselves. Pointing a rel=canonical at the page it is installed in essentially tells us 'this page is a copy of itself. Please pass any value from itself to itself.' No need for that."
He also stated that a canonical is much like a 301, except that it does not physically move the user to the canonical page. This leads me to think that having such a tag may leak link juice ("please pass any value from itself to itself").
Google has stated that Googlebot can handle such a tag, but this still does not mean that it is not leaking link juice.
Industry News | AlanMosley
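For concreteness, a self-referencing canonical is simply the page naming its own clean URL (the URL below is hypothetical). The usual argument for it is not passing value to itself, but consolidating stray copies, e.g. tracking-parameter variants, onto the clean URL:

<!-- On http://www.example.com/widgets, and therefore also served on variants
     such as http://www.example.com/widgets?utm_source=newsletter: -->
<link rel="canonical" href="http://www.example.com/widgets" />
-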
Google+ profiles and rel=author: an extensive question
A bit of a mammoth question for discussion here: with the launch of Google+ profiles, coupled with the ability to link and verify authorship using rel=me back to a Google+ profile, a few questions about long-term use and impact.
As an individual, I can have a Google+ profile and add links to author pages where I am featured. If rel=me is used back to my G+ profile, Google can recognise me as the writer. No problem with that.
However, if I write for a variety of different sites and produce a variety of different content, site owners could arguably become reluctant to link back or accredit me with the rel=me tag, on account that I might be writing for a competitor, for example, or producing other content in a totally different vertical that is irrelevant.
Additionally, if I write for a company as an employee and the rel=me tag is linked to my G+ profile, my profile (I would assume) gains strength from the fact that my work is cited through the link. Even if no link juice is passed, my profile link is going to appear in the search results on a query that matches something I have written, and hence possibly drain some "company traffic" to my profile. If I were then to leave the employment of that company and begin writing for a direct competitor, is my profile still benefiting from the old company content I have written?
Given that Google is not allowing pseudonyms or ghost-writer profiles, where do we stand with respect to outsourced content? For example: a company has news written for it by a news supplier (each writer has a name, obviously), but the writers don't have, or don't want to create, a G+ profile to link to. Is it a case of waiting for Google to come up with company profiles, or using a ghost name and running the gauntlet on G+?
Lastly, and I suppose the bottom line, as a website owner/company director/SEO: is adding rel=me links to all your writers' profiles (given that some might only write one or two articles, and staff will inevitably come and go) an overall positive for SEO, or a SERP nightmare if a writer moves on to another company? In essence, are site owners just improving the writer's profile rather than gaining very much?
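For reference, the basic markup under discussion looks like this (a sketch; the profile ID, name, and URLs are made up). The article links to the writer's Google+ profile with rel=author, and the profile links back, which is what completes the verification loop:

<!-- On the article page, in the head or as a visible byline link: -->
<link rel="author" href="https://plus.google.com/112233445566778899001" />
<a href="https://plus.google.com/112233445566778899001" rel="author">Jane Writer</a>
<!-- On a page the writer controls, pointing at her own profile: -->
<a href="https://plus.google.com/112233445566778899001" rel="me">My Google+ profile</a>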
Industry News | IPINGlobal54
-
What is the best method for getting pure JavaScript/AJAX pages indexed by Google for SEO?
I am in the process of researching this further and wanted to share some of what I have found below. Anyone who can confirm or deny these assumptions, or add some insight, would be appreciated.
Option 1: If you're starting from scratch, a good approach is to build your site's structure and navigation using only HTML. Then, once you have the site's pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses. You can use Hijax to help AJAX and HTML links coexist, and meta nofollow tags etc. to prevent crawlers from accessing the JavaScript versions of a page. Currently, webmasters create a "parallel universe" of content: users of JavaScript-enabled browsers see content that is created dynamically, whereas users of non-JavaScript-enabled browsers, as well as crawlers, see content that is static and created offline. In current practice, "progressive enhancement" in the form of Hijax links is often used.
Option 2: To make your AJAX application crawlable, your site needs to abide by a new agreement, which rests on the following. The site adopts the AJAX crawling scheme. For each URL that has dynamically produced content, your server provides an HTML snapshot, which is the content a user (with a browser) sees. Often, such URLs will be AJAX URLs, that is, URLs containing a hash fragment, for example www.example.com/index.html#key=value, where #key=value is the hash fragment. An HTML snapshot is all the content that appears on the page after the JavaScript has been executed. The search engine indexes the HTML snapshot and serves your original AJAX URLs in search results. To make this work, the application must use a specific syntax in the AJAX URLs (call them "pretty URLs"). The search engine crawler will temporarily modify these pretty URLs into "ugly URLs" and request those from your server. A request for an ugly URL tells the server that it should not return the regular web page it would give to a browser, but an HTML snapshot instead. When the crawler has obtained the content for the ugly URL, it indexes that content, then displays the original pretty URL in the search results. In other words, end users will always see the pretty URL containing the hash fragment. See Google's Getting Started Guide for more.
Make sure you avoid this: http://www.google.com/support/webmasters/bin/answer.py?answer=66355
Here are a couple of example pages that are mostly JavaScript/AJAX: http://catchfree.com/listen-to-music#&tab=top-free-apps-tab and https://www.pivotaltracker.com/public_projects. This is what the spiders see: view-source:http://catchfree.com/listen-to-music#&tab=top-free-apps-tab
The best resources I have found regarding Google and JavaScript (step-by-step instructions):
http://code.google.com/web/ajaxcrawling/
http://www.google.com/support/webmasters/bin/answer.py?answer=81766
http://www.seomoz.org/blog/how-to-allow-google-to-crawl-ajax-content
Some additional resources:
http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html
http://www.google.com/support/webmasters/bin/answer.py?answer=35769
Industry News | webbroi
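To make Option 2 concrete, here is a sketch of the scheme's URL handshake (all URLs hypothetical):

<!-- Pretty URL the user sees and that appears in search results:
     http://www.example.com/index.html#!key=value -->
<!-- Ugly URL the crawler actually requests from the server:
     http://www.example.com/index.html?_escaped_fragment_=key=value -->
<!-- The server answers the ugly URL with a static HTML snapshot of the executed page. -->

<!-- A page without a hash fragment opts in by adding this to its head: -->
<meta name="fragment" content="!">
-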
I'm looking for solid internet usage / traffic data.
Hi there, In a week or two I'll give my first internet marketing presentation to a local business club. I'll walk them through the basics of what is happening online and how both B2B and B2C businesses can use their online assets as marketing tools. I want some basic statistical data to back up my story, like: current web usage distribution (social/search/media etc.), growth of mobile vs. desktop, search engine market share (Google, Bing, Yahoo, MSN), and online retail numbers. To top it all off, I would also like to be able to show both worldwide data and data for The Netherlands. What is my best bet for some serious data mining? ;) Thanks in advance for helping me out!
Industry News | rickdronkers