Should you use the canonicalization tag when the content isn't exactly a duplicate?
-
We have a site that pulls data from different sources with unique URLs onto a main page, and we are thinking about using the canonicalization tag to keep those source pages from being indexed and to pass any authority to the main page. But this isn't really what canonicalization is supposed to be used for, so I'm unsure whether it's the right move.
To give some more detail: we manage a site that has pages for individual golf courses. On each golf course page, in addition to other general information, we have sections that show "related articles" and "course reviews".
We may only show 4 or 5 of each on a given course page, but we have hundreds of related articles and reviews for each course. So below "related articles" on the course page we have a "see more articles" link that takes the user to a new page: simply an aggregate page that houses all of the article or review content related to that course.
Since we would rather have the overall course page rank in SERPs than the page that lists these articles, we are considering canonicalizing the aggregate articles page up to the course page.
But, as I said earlier, this isn't really what the canonicalization tag is intended for, so I'm hesitant.
Has anyone else run across something like this before? What do you think?
-
Hey Fuel Interactive --
To answer your specific question, I would use a meta robots noindex, follow tag instead of the canonical. It doesn't pass authority, but it is the correct usage: it removes the page from ranking consideration and gives the overall course page less competition.
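To make that concrete, here's a minimal sketch of the tag in the aggregate page's head (nothing site-specific here, just the standard directive):

```html
<!-- In the <head> of the "see more articles" aggregate page -->
<!-- noindex drops the page from ranking consideration; follow still lets its links be crawled -->
<meta name="robots" content="noindex, follow">
```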
Another question for you: is the article listing page actually ranking higher than the overall course page, or is that just a worry? If it's only a worry, I recommend testing things out first so you don't optimize prematurely. Just a thought.
Hope that helps -- Andrew
Related Questions
-
Using hreflang tags properly.
On my site "example.com" I have set up the following in the header: The problem is that the tags are universal across the site, so every page has the same tags, which obviously leads to "no return tag" errors. E.g., the page www.example.ca/testing.html still has the site-wide tags, not tags with "testing.html" in them. How bad is this? Does it matter?
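For anyone hitting the same issue: hreflang annotations must be page-specific and reciprocal, not site-wide. A hypothetical sketch of what www.example.ca/testing.html should carry (the language codes and the .com/.ca pairing are assumptions based on the question):

```html
<!-- In the <head> of http://www.example.ca/testing.html -->
<!-- Each page references its own alternates, and those alternates must point back -->
<link rel="alternate" hreflang="en-ca" href="http://www.example.ca/testing.html" />
<link rel="alternate" hreflang="en-us" href="http://www.example.com/testing.html" />
```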
Technical SEO | absoauto
-
Duplicate title tags caused by upper-case and lower-case versions of URLs
Hi, GWT is reporting lots of duplicate titles for a client's new site. Mainly these are due to two different versions of each URL: one with words starting with an upper-case character and the other all lower case. The client's dev is saying this has something to do with the Windows server and is OK! Is this correct, or should I be telling them to remove and then 301 redirect all upper-case versions to the lower case (since lower case is better practice), and will that deal with the reported dupe titles? All Best, Dan
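If the site were on Apache rather than the Windows/IIS stack the dev mentions, the lowercase 301 is usually done with a RewriteMap, roughly like this (the RewriteMap directive must live in the main server or vhost config, not .htaccess; IIS's URL Rewrite module offers an equivalent built-in ToLower function):

```apache
# httpd.conf / vhost config (RewriteMap is not permitted in .htaccess)
RewriteEngine On
RewriteMap lowercase int:tolower

# If the requested path contains any uppercase letter, 301 it to the lowercase version
RewriteCond %{REQUEST_URI} [A-Z]
RewriteRule (.*) ${lowercase:$1} [R=301,L]
```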
Technical SEO | Dan-Lawrence
-
Google insists robots.txt is blocking... but it isn't.
I recently launched a new website. During development, I'd enabled the option in WordPress to prevent search engines from indexing the site. When the site went public (over 24 hours ago), I cleared that option. At that point, I added a specific robots.txt file that only disallows a couple of directories. You can view the robots.txt at http://photogeardeals.com/robots.txt. Google (via Webmaster Tools) insists that my robots.txt file contains a "Disallow: /" on line 2 and that it's preventing Google from indexing the site and preventing me from submitting a sitemap. These errors show up both in the sitemap section of Webmaster Tools and in the Blocked URLs section. Bing's webmaster tools are able to read the site and sitemap just fine. Any idea why Google insists I'm disallowing everything even after telling it to re-fetch?
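For comparison, a robots.txt that blocks only a couple of directories should look something like this (the directory names here are invented, not taken from the live file). If Google still reports a "Disallow: /" on line 2, it is most likely reading a cached copy of the file WordPress served while the "discourage search engines" option was on:

```text
# Well-formed robots.txt blocking only two directories (hypothetical names)
User-agent: *
Disallow: /dev-files/
Disallow: /test-uploads/
```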
Technical SEO | ahockley
-
Hiding Duplicate Content Using JavaScript
We have an e-commerce site selling books. Besides basic information on the books, we have content for "About the book", "Editorial Reviews", "About the author", etc. But the content in all these sections is duplicated and available on all sites selling similar books. Our questions are: 1. Should we worry about the content being duplicate? 2. If yes, would it be a good idea to hide this duplicate content using JavaScript or an iframe?
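For reference only, not an endorsement: the iframe approach being asked about would mean serving the boilerplate sections from their own URLs, keeping those URLs out of the index, and embedding them, along these hypothetical lines:

```html
<!-- On the book's product page: the duplicated copy is embedded from a separate URL -->
<!-- /snippets/ would then be blocked in robots.txt or served with a noindex directive -->
<iframe src="/snippets/book-12345-editorial-reviews.html"
        title="Editorial reviews" width="100%" height="400"></iframe>
```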
Technical SEO | CyrilWilson
-
Duplicate Content on SEO Pages
I'm trying to create a bunch of content pages, and I want to know if the shortcut I took is going to penalize me for duplicate content. Some background: we are an airport ground transportation search engine (www.mozio.com), and we constructed several airport transportation pages listing the providers in a particular area. The problem is that in a given region, several of the same providers often serve the same places. For instance, NYAS serves both JFK and LGA, and SuperShuttle serves ~200 airports, which means nearly every airport's page has the SuperShuttle box. All the provider info is stored in a database with tags for the airports each provider serves, and we dynamically create each page from it. Good examples: http://www.mozio.com/lga_airport_transportation/ http://www.mozio.com/jfk_airport_transportation/ http://www.mozio.com/ewr_airport_transportation/ All three of those pages have a lot in common. I'm not sure, but the pages started out ranking decently, and as I added more and more of them their overall efficacy went down. Does what I've done qualify as "duplicate content", and would I be better off getting rid of some of the pages or somehow consolidating the info into a master page? Thanks!
Technical SEO | moziodavid
-
Would duplicate listings affect a client's ranking if they used the same address?
Lots of duplication on directory listings using similar or the same address, just different company names... like so-and-so carpet cleaning, and another listing with so-and-so janitorial services. Now my client went from ranking around 3-4 to not even in the top 50 within a week. Would duplication cause this sudden drop? There's not a lot of competition for this client's keyword (janitorial services nh); would a competitor that recently optimized a site cause this sudden drop? The client does need to optimize for this keyword, and they do need to clean up this duplication. (Unfortunately this drop happened the first of March; I provided the audit and recommendations and am still awaiting the thumbs up to continue with implementation.) Did Google make a change, possibly find these discrepancies within the listings, and suddenly drop this client's ranking? And then there's Google Places: the client usually ranks #1 in Google Places with up to 12 excellent reviews, so they are still getting a good spot on the first page. The very odd thing, though, is that Google is still saying they need to re-verify their Google Places listing. I really would like to know how a Google Places account could still need verification and yet rank so well within Google Places results. Because of great reviews? Any ideas here, too? _Cindy
Technical SEO | CeCeBar
-
Thin/Duplicate Content
Hi guys, So here's the deal: my team and I just acquired a new site that has used some questionable tactics. Only about 5% of the entire site is actually written by humans; the rest of the 40k+ pages (increasing by 1-2k auto-generated pages a day) are all autogen, thin content. I'm trying to convince the powers that be that we cannot continue to do this. Now, I'm aware of the issue, but my question is: what is the best way to deal with it? Should I noindex these pages at the directory level? Should I 301 them to the most relevant section where actual valuable content exists? So far it doesn't seem like Google has caught on to this, and I want to fix the issue without raising any more red flags in the process. Thanks!
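On the "noindex at the directory level" option: the usual mechanism is an X-Robots-Tag response header rather than editing every page. A sketch for Apache 2.4 with mod_headers, assuming the auto-generated pages live under a path like /autogen/ (that path is an assumption, not something from the question):

```apache
# Apache 2.4+ with mod_headers: noindex everything under /autogen/
<If "%{REQUEST_URI} =~ m#^/autogen/#">
    Header set X-Robots-Tag "noindex, follow"
</If>
```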
Technical SEO | DPASeo
-
Duplicate Content from Google URL Builder
Hello to the SEOmoz community! I am new to SEOmoz, SEO implementation, and the community, and I recently set up a campaign on one of the sites I manage. I was surprised at the amount of duplicate content that showed up as errors, and when I took a deeper look, the majority of errors were caused by pages on the root domain that I had put through the Google Analytics URL Builder. After this, I went into Webmaster Tools and changed the parameter handling to ignore all of the tags the URL Builder adds to the end of the URL. SEOmoz recently recrawled my site, and the errors caused by the URL Builder are still being shown as duplicates. Any suggestions on what to do?
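Beyond parameter handling in Webmaster Tools, the usual belt-and-braces fix is a canonical tag on each page pointing at the URL without the campaign parameters, e.g. (the URL here is hypothetical):

```html
<!-- Served on http://example.com/page/?utm_source=newsletter&utm_medium=email -->
<!-- Points crawlers at the clean URL so the tagged variants consolidate -->
<link rel="canonical" href="http://example.com/page/" />
```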
Technical SEO | joshuaopinion