Duplicate page titles
-
Hi,
I have a Joomla 2.5 site and I use category blogs. So I have a "reviews" page: all the reviews are shown on this page, and there are about 15 pages of them. In my SEOmoz crawl results I get 71 "duplicate title" errors!
How can I reduce this? I don't know how to display all the reviews properly other than what I have accomplished with the category blog.
Patrick
-
PS: I found this today, which seems to be a pretty good rundown of the pros and cons of various pagination strategies: http://www.ayima.com/seo-knowledge/conquering-pagination-guide.html. Keep in mind that it doesn't take into account how limited Joomla is in this area.
-
Patrick, can you post the site? If you don't feel comfortable doing that, you can send it to me in a private message. Please include the specific reviews page/section you mention. I'd like to have a look at how pagination is being handled, whether any rel canonical tags are being used, etc.
Without seeing the site, the best advice I can give is somewhat general:
-
Have the page number included in the title tag and meta description if possible (e.g. "Page # - TITLE"), as Mike Davis recommends below. This can probably be accomplished with the following extension (http://extensions.joomla.org/extensions/site-management/seo-a-metadata/title-management/14747), though I have not used that specific extension personally.
-
Implement rel next/prev if possible, though I have not seen any good extensions to accomplish this. The unique titles and descriptions should be fine for now if you can't implement rel next/prev.
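For reference, rel next/prev is just a pair of link tags in the head of each paginated page, pointing at the previous and next pages in the sequence. The URLs below are placeholders, not Patrick's actual site structure:

```html
<!-- On page 2 of a paginated review listing (placeholder URLs) -->
<head>
  <link rel="prev" href="http://www.example.com/reviews?start=0" />
  <link rel="next" href="http://www.example.com/reviews?start=20" />
</head>
```

The first page omits rel="prev" and the last page omits rel="next", so search engines can see the whole set as one sequence.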
-
Once you have implemented either of the above solutions, you can ignore that SEOmoz warning next time.
Good luck!
-
-
Wow, that is annoying! A simple solution would be to use an SEO page title module/widget (sorry, I'm not familiar with Joomla, but I'm sure they have one) and add page numbers.
Example: ... Reviews Page One, ... Reviews Page Two, etc.
I had an old site with this same problem on a photo gallery and it solved that issue.
I hope that helps!
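The page-numbering idea above can be sketched in a few lines of plain Python (hypothetical titles; any pagination-aware title module would do something equivalent):

```python
def page_title(base_title, page_number):
    """Build a unique <title> per paginated page.

    Page 1 keeps the base title; later pages get a "Page N" suffix,
    so crawlers no longer see the same title on every page of the set.
    """
    if page_number <= 1:
        return base_title
    return f"{base_title} - Page {page_number}"
```

Applied to the example above, page 1 stays "Reviews" while page 2 becomes "Reviews - Page 2", and so on.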
Related Questions
-
www.xyz.com vs. xyz.com creating duplicate pages
I just put my site into Moz Analytics. The crawl results say I have duplicate content. When I look at the pages, it is because one page is www.xyz.com and the duplicate is xyz.com. What causes this, and how can it be fixed? I'm not a developer, so be kind and speak a language I can understand. Thanks for your help 🙂
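The usual fix is a site-wide 301 redirect from one hostname to the other. A minimal sketch, assuming the site runs on Apache with mod_rewrite available (the domain is the placeholder from the question):

```apache
# .htaccess: 301-redirect the bare hostname to the www version
# (swap the two hostnames if you prefer the non-www form)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^xyz\.com$ [NC]
RewriteRule ^(.*)$ http://www.xyz.com/$1 [R=301,L]
```

If redirects aren't an option, a rel="canonical" tag pointing at the preferred hostname on every page achieves a similar consolidation.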
Technical SEO | Britewave0
-
Best way to deal with over 1000 pages of duplicate content?
Hi,
Using the Moz tools I have over 1,000 pages of duplicate content, which is a bit of an issue! 95% of the issues arise from our news and news archive, as it has been going for some time now. We upload around five full articles a day. Each article has a standalone page, but it can only be reached through a master archive. The master archive sits in a top-level section of the site and shows snippets of the articles; if a user clicks a snippet, it takes them to the full article. When a news article is added, the snippets move onto the next page, and they move through the pages as new articles are added. The problem is that the standalone articles can only be reached via the snippet on the master page, and Google is flagging this as duplicate content because the snippet duplicates the article.
What is the best way to solve this issue? From what I have read, using a meta noindex seems to be the answer (not that I know what that is). From what I have read, you can only use a canonical tag on a page-by-page basis, so that's going to take too long.
Thanks
Ben
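For reference, the "Meta NoIndex" mentioned in the question is a single tag placed in the head of the archive pages (not the standalone articles), telling search engines to skip those pages while still following their links:

```html
<!-- On the master archive / snippet pages only -->
<meta name="robots" content="noindex, follow" />
```

The "follow" part matters here: it lets crawlers still reach the standalone articles through the snippet links, so only the duplicated snippet pages drop out of the index.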
Technical SEO | benjmoz0
-
Inconsistent page titles in SERPs
I encountered a strange phenomenon lately and I'd like to hear if you have any idea what's causing it. For the past couple of weeks I've seen some of our Google rankings getting unstable. While looking for a cause, I found that for some pages, Google results display a different page title than the actual meta title of the page.
Examples:
http://www.atexopleiding.nl
Meta title: Atex cursus opleider met ruim 40 jaar ervaring - Atexopleiding.nl
Title in SERP: Atexopleiding.nl: Atex cursus opleider met ruim 40 jaar ervaring
http://www.reedbusinessopleidingen.nl/opleidingen/veiligheid/veiligheidskunde
Meta title: Opleiding Veiligheidskunde, MBO & HBO - Reed Business Opleidingen
Title in SERP: Veiligheidskunde - Reed Business Opleidingen
http://www.pbna.com/vca-examens/
Meta title: Behaal uw VCA diploma bij de grootste van Nederland - PBNA
Title in SERP: VCA Examens – PBNA
I've looked in the source code and fetched some pages as Googlebot in WMT, but the title shown in the SERP doesn't even exist in the source code. Now I suspect this might have something to do with the "cookiewall" implemented on our sites. Here's why:
- The cookiewall was implemented at the end of January.
- The problem didn't exist until recently, though I can't pinpoint an exact date.
- The problem exists on rbo.nl, atexopleiding.nl and pbna.com, the latter running on the Silverstripe CMS instead of WordPress. This rules out CMS-specific causes.
- The image preview in the SERPs of many pages shows the cookie alert overlay.
However, I'm not able to technically prove that the cookie script causes this, and I'd like to rule out any other obvious causes before I "blame it on the cookies" :). What do you think?
Technical SEO | RBO0
-
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest...
We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat-map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good.
This is the chain of events:
- Site migrated to the new platform following best practice, as far as I can attest to. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between the relaunch on 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
- URL structure and URIs were maintained 100% (which may be a problem, now).
- Yesterday I discovered 200-ish "duplicate meta titles" and "duplicate meta descriptions" in GWMT. Uh oh, thought I. Expanding the report, the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I. Run, not walk, to Google and do some fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI). Checked Bing, and it has indexed each root URL once, as it should.
Situation now:
- The site no longer uses the ?ref= parameter, although of course some external backlinks that use it still exist. This was intentional and happened when we migrated.
- I "reset" the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it is at over 1,000 (another wtf moment).
- I also resubmitted the XML sitemap and fetched 5 "hub" pages as Google, including the homepage and the HTML site-map page.
The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a dupe content penalty. Or maybe call us a spam farm. Who knows.
Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug submission form 😄) include:
A) robots.txt-ing the ?ref= URLs, but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct.
B) Hand-removing the URLs from the index through a page removal request per indexed URL.
C) Applying a 301 to each indexed URL (hello Bing dirty-sitemap penalty).
D) Posting on SEOMoz, because I genuinely can't understand this. Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why and can't think of the best way to correct the situation. Do you? 🙂
Edited to add: As of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There's no message explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what "reset" does, or why there's no "remove" option).
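For reference, the canonical tag under discussion is a link tag in the head that names the preferred URL; served on the root page and on every ?ref= variant, it should consolidate them (URL taken from the question's own example):

```html
<!-- Served on /apple-phones.html and on every ?ref= variant of it -->
<link rel="canonical" href="http://www.three-clearance.co.uk/apple-phones.html" />
```

Since the canonical is being ignored here, a blanket 301 from any URL carrying the obsolete ?ref= parameter to its clean equivalent would be the more forceful alternative to option C's per-URL approach.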
Technical SEO | Tinhat0
-
Duplicate pages problem
The Moz report shows that I have 600 duplicate pages. How can I locate the problem, and how can I fix it?
Technical SEO | Joseph-Green-SEO0
-
When Is It Good To Redirect Pages on Your Site to Another Page?
Suppose you have a page on your site that discusses a topic similar to another page but targets a different keyword phrase. The page has medium-quality content, no inbound links, and attracts little traffic. Should you 301-redirect the page to a stronger page?
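If a redirect is the right call, a single-page 301 is one line of server config. A minimal sketch, assuming Apache and hypothetical paths:

```apache
# .htaccess: permanently redirect the weaker page to the stronger one
Redirect 301 /weak-page http://www.example.com/strong-page
```

The 301 passes most of the old page's signals to the target, which is why it is usually preferred over deleting the page or leaving two similar pages competing for the same query.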
Technical SEO | ProjectLabs1
-
How to handle this specific duplicate title issue
Part of my website is a directory of companies. Some of the companies have many locations in the same city. For these listings, the titles and URLs look like this:
1. Company ABC - Miami, FL
http://www.website.com/florida/miami/company-abc-10001
2. Company ABC - Miami, FL
http://www.website.com/florida/miami/company-abc-10002
What is the best way to fix this problem? Thank you
Technical SEO | Boxes0
-
Duplicate Page Content Lists the same page twice?
When checking my crawl diagnostics this morning, I see that I have the "duplicate page content" error. It lists the exact same URL twice, though, and I don't understand how to fix this. It's also listed under duplicate page title:
Personal Assistant | Virtual Assistant | Charlotte, NC
http://charlottepersonalassistant.com/110
Personal Assistant | Virtual Assistant | Charlotte, NC
http://charlottepersonalassistant.com/110
Does this have anything to do with a 301 redirect here? Why does it have http:// twice? Thanks all!
| http://www.charlottepersonalassistant.com/ | http://http://charlottepersonalassistant.com/ |
Technical SEO | eidna220