Duplicate titles Question
-
Hi everyone,
I have around 1,000 duplicate titles and meta descriptions. The problem was that my home page was paginated, and the different pages all had the same title.
For example:
/index.php/site/articles/should_you_eat_protein_every_2-3_hours_for_muscle_growth/
/index.php/site/articles/should_you_eat_protein_every_2-3_hours_for_muscle_growth/N12/
/index.php/site/articles/should_you_eat_protein_every_2-3_hours_for_muscle_growth/N1444/
/index.php/site/articles/should_you_eat_protein_every_2-3_hours_for_muscle_growth/N1448/
/index.php/site/articles/should_you_eat_protein_every_2-3_hours_for_muscle_growth/N1448/P6/
/index.php/site/articles/should_you_eat_protein_every_2-3_hours_for_muscle_growth/N1452/
I have 172 copies of the same page! So I took all the pagination off my home page and just added a 'click for more' link; when visitors click it, it takes them to the category page. So my question is: will Google slowly start de-indexing or deleting these duplicate titles and pages now that I have removed them from my website? (Just so you know, I have added a canonical link and am figuring out how to add page numbers to the meta titles and meta description tags for categories with pages.)
-
I'm not entirely clear what the nature of the non-paginated pages is, but the canonical tag is probably a decent solution here. You may actually want to leave the crawl paths to those URLs open for a bit - Google won't process the canonicals unless they crawl the URLs. See my recent post on that subject:
http://www.seomoz.org/blog/logic-meet-google-crawling-to-deindex
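For reference, the canonical tag on each duplicate variant would point back to the main article URL. A minimal sketch, using the article URL from the question above (adjust the domain and path to your actual setup):

```html
<!-- In the <head> of each duplicate/paginated variant, e.g. .../N1448/P6/ -->
<link rel="canonical"
      href="http://www.exercisebiology.com/index.php/site/articles/should_you_eat_protein_every_2-3_hours_for_muscle_growth/" />
```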
Pagination is a bit tougher. You've got a few options:
(1) META NOINDEX pages 2+ (tends to be pretty effective, but depends on the nature of the pages).
(2) Use rel="prev" and rel="next". This is tough to implement, but it's what Google recommends. If the pagination isn't massive-scale, it works reasonably well.
(3) If the pagination is controlled by URL parameters, indicate them in Google Webmaster Tools. I've had mixed luck with this, and your examples wouldn't work ("/P6" isn't going to come up as a traditional URL parameter).
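A hedged sketch of options (1) and (2) as they might appear in the head of page 2 of a series (the example.com URLs are hypothetical placeholders, not your actual paths):

```html
<!-- Option 1: keep pages 2+ out of the index, but still let crawlers follow links -->
<meta name="robots" content="noindex, follow">

<!-- Option 2: declare the series to Google; this goes on /category/page2/ -->
<link rel="prev" href="http://www.example.com/category/">
<link rel="next" href="http://www.example.com/category/page3/">
```

Option 1's tags go only on pages 2 and up, never on page 1; option 2's prev/next pair has to be kept consistent across every page in the series, which is why it's fiddly to implement.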
-
In my category page, I have the links of the article and short summary of it. For example, check this page: http://www.exercisebiology.com/index.php/site/training/
So if I understand you correctly, I shouldn't use the canonical tag for the home page and category pages, since the content changes as I post new articles, right?
-
Do not use the canonical tag unless the pages are 100% duplicates.
If you are talking about duplicate content from your blog posts showing up on category pages, perhaps you can edit your blog settings to only show snippets of each post instead of the entire thing.
-
Thank you very much, Anthony.
I have pages on my home page and category pages that obviously need pagination so that every article doesn't show up on one long page. How do I deal with those?
1) Just keep them the same and add a rel=canonical? Or, since each page has different content (different article titles), leave out the rel=canonical?
-
Yes. Google will stop indexing most of the duplicate pages since you added the canonical tag.
Figure out why there are so many duplicate pages and try to eliminate that problem. I'm guessing your comment about 'click more, it takes them to the category page' addressed this issue. It's best to stop a problem before it starts.
These pages may fall slowly out of Google's index. They will likely leave Webmaster Tools at an even slower rate. If this doesn't seem to work out for you, another option would be to 301 every page ending with growth/N* back to the canonical version.
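If you go the 301 route, here's a minimal sketch assuming an Apache server with mod_rewrite; the pattern is illustrative only and should be tested against your actual URL structure before deploying:

```apache
# .htaccess: permanently redirect any .../articles/<slug>/N123/ variant
# (with an optional /P6/-style suffix) back to the clean article URL
RewriteEngine On
RewriteRule ^(index\.php/site/articles/[^/]+)/N\d+(/P\d+)?/?$ /$1/ [R=301,L]
```

Note that matching path info after index.php can behave differently depending on server configuration, so verify the rule with a few of the N*/P* URLs from your crawl report before relying on it site-wide.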