How to use rel="canonical" for our website
-
What is the best way to use rel="canonical" for our website www.ofertasdeemail.com.br, so we can say goodbye to duplicated pages?
I appreciate any help. I also hope to contribute to the SEOmoz community.
Sincerely,
Amador Goncalves -
Yeah, I'm with Mike - these are prone to cause you some real trouble. Given how many there probably are and how often they change/rotate, I'd strongly suggest using rel=canonical or otherwise keeping the alternate offers out of the index.
They may be necessary for users, but these pages don't all need to be in the index. By trying to rank for every single one, you risk harming your more important rankings. Honestly, as Mike said, Google can't really tell these pages apart except for the URLs, so I suspect even the long-tail ranking benefits are nearly zero.
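For reference, a minimal sketch of the no-indexing option - a standard robots meta tag, nothing specific to this site. Placed in the <head> of each alternate offer page, it keeps the page out of the index while the "follow" part still lets link equity flow through its links:
<!-- in the <head> of each alternate offer page -->
<meta name="robots" content="noindex, follow">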
-
All offers on our website open through AJAX.
What is the best way to use the canonical tag in this case?
-
Ah... Googlebot can't see the changes that happen on the page when you click the different offers, so each page looks almost exactly the same and reads like thin content. In that case I'd suggest adding more written content to pages like http://www.ofertasdeemail.com.br/desconto/submarino/ so they look different from pages like http://www.ofertasdeemail.com.br/desconto/submarino/so-o-cartao-submarino-indica-as-melhores-ofertas-para-voce-10733.html, and then adding a canonical tag to show that the offers are a subset of http://www.ofertasdeemail.com.br/desconto/submarino/
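As a sketch of that canonical tag, using the Submarino URLs above, each individual offer page would carry something like:
<!-- in the <head> of each individual offer page -->
<link rel="canonical" href="http://www.ofertasdeemail.com.br/desconto/submarino/">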
The only problem with this solution is that the individual offers won't really rank for anything in the SERPs and will likely be replaced by the primary page in the index (assuming Google follows your canonical signal). So it may not be a perfect solution for your needs, but it could alleviate the problems associated with duplicate and thin content.
-
Thanks Mike,
All pages, including the offers pages, need to exist.
When I enter a store's offers page, it shows a list of the store's latest offers (users can use the offers selector on the left side of the website, and after clicking, the offers appear on the right side).
For example, the Submarino Store:
http://www.ofertasdeemail.com.br/desconto/submarino
Please do a test, and help us.
Thanks
-
First, determine whether those duplicate content pages need to exist or whether your users would be better served by another page. If a page doesn't need to exist, then you may want to consider a 301 redirect to the better page. If a page is an exact replica of another page, you need to ask yourself "Why do we have it?" If it's only a duplicate because of thin content, then you might want to consider adding more relevant content to the individual pages to better differentiate them.
If the duplicate page needs to stay for whatever reason, then you can consider adding a canonical tag pointing to the primary page. Some of the cases where canonicals have worked best on the sites I work on have involved parameters, e.g. example.com/product and example.com/product?model=4 are basically the same page, but they each serve a purpose. In this case, example.com/product?model=4 is a subset of the one without a parameter and was given a canonical tag pointing to the primary page.
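A sketch of that setup, using the example.com URLs above:
<!-- in the <head> of example.com/product?model=4 -->
<link rel="canonical" href="http://example.com/product">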
Canonical tags are a signal, not a directive, though... which means the search engines may choose to listen to them or ignore them as they see fit.
I apologize if any of that seems confusing. Here's a link to the SEOmoz guide on canonicals: http://www.seomoz.org/learn-seo/canonicalization and a blog post on the subject: http://www.seomoz.org/blog/canonical-url-tag-the-most-important-advancement-in-seo-practices-since-sitemaps