How do I eliminate duplicate page titles?
-
Almost... I repeat, almost all of my duplicate page titles show up because the page is being seen twice in the crawl. How do I prevent this?
| URL | Page title |
| --- | --- |
| www.ensoplastics.com/ContactUs/ContactUs.html | Contact ENSO Plastics |
| ensoplastics.com/ContactUs/ContactUs.html | Contact ENSO Plastics |
This is what's in the CSV... there are many more just like this. How do I cut out all of these duplicate URLs?
-
Thank you for the follow-up, Dr. Pete!
-
I don't see anything wrong with your home-page canonical. We usually suggest pointing the home-page to the root:
...and not including the filename (just for the home-page), but that's not necessary. You link internally to "index.html", so what you have is fine, and keeps it consistent. I think the error is only happening because our crawler is trying to view the "/" version and sees the canonical to "index.html" (so, they look different).
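In case it helps to see it concretely, here is a sketch of what that root-pointing version would look like (assuming your domain; this is illustrative, not your actual markup):

```html
<!-- In the <head> of index.html: point the home-page canonical at the root -->
<!-- rather than at /index.html (a sketch, adjust to taste) -->
<link rel="canonical" href="http://www.ensoplastics.com/" />
```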
-
So if I want the www page to be the one that shows up in Google, what exactly do I put in the head of the ContactUs.html page? As you can see, when I put this in the head I get the critical error from SEOmoz. So this fix just isn't making sense to me right now. If I take it back out, the critical error is gone, but then I get the message that I should add the canonical to the page.
<dl>
<dt>Canonical URL</dt>
<dd>"http://www.ensoplastics.com/ContactUs/ContactUs.html"</dd>
<dt>Explanation</dt>
<dd>If the canonical tag is pointing to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. Make sure you're targeting the right page (if this isn't it, you can reset the target above) and then change the canonical tag to reference that URL.</dd>
<dt>Recommendation</dt>
<dd>We check to make sure that IF you use canonical URL tags, it points to the right page. If the canonical tag points to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. If you've not made this page the rel=canonical target, change the reference to this URL. NOTE: For pages not employing canonical URL tags, this factor does not apply.</dd>
</dl>
-
So, just to be clear - these are the same physical page. Google sees the two URLs as being two different pages, but in terms of actual physical documents on your server, there's only one (ContactUs.html). So, you just need the one canonical tag per page.
If you have any dynamic (database/code-driven) pages, then be careful and make sure that the canonical tag is being created dynamically to match the correct page. You don't want to end up with canonical tags pointing to the wrong pages.
-
I'd follow the advice to 301 or canonical, but it doesn't hurt to also declare a canonical version in Google Webmaster Tools - it's under "Site Configuration" > "Settings". You still need to canonicalize, but it's one additional signal to Google (and it's easy).
-
The index page's canonical should point to the index page, and a content page's canonical to that same content page (it is there just to eliminate the duplicate issue). Don't point a canonical at a page that is not its duplicate.
-
Once I add it in and crawl the page I end up with a critical error... so something is not right.
Appropriate Use of Rel Canonical
Moderate fix
<dl>
<dt>Canonical URL</dt>
<dd>"http://www.ensoplastics.com/index.html"</dd>
<dt>Explanation</dt>
<dd>If the canonical tag is pointing to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. Make sure you're targeting the right page (if this isn't it, you can reset the target above) and then change the canonical tag to reference that URL.</dd>
<dt>Recommendation</dt>
<dd>We check to make sure that IF you use canonical URL tags, it points to the right page. If the canonical tag points to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. If you've not made this page the rel=canonical target, change the reference to this URL. NOTE: For pages not employing canonical URL tags, this factor does not apply.</dd>
</dl>
-
ok, good luck!
-
Ok I will put the canonical in the head of the html files and see what happens.
-
You have one page that can be reached from two or even more paths. We are talking about these paths; they are what create the duplicate content. It is like having a duplicate of your site on the non-www version of your domain. With the canonical, you tell search engines which path (which URL for the .html file) you are optimizing for.
-
This actually does not solve the problem. I have only one index.html file, so how in the world do I access a page that does not exist in my hierarchy? For example, if I have the following two pages, there is really only one instance of that page whose head I can edit with an HTML file; it is not like there are actually two HTML pages, one for each URL. So in this case, am I just stuck creating redirects for each instance where this occurs?
www.ensoplastics.com/ContactUs/ContactUs.html
ensoplastics.com/ContactUs/ContactUs.html
-
You just insert into each HTML file a canonical that points to the www URL, and that should sort it out.
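As a sketch (assuming you want the www version to win, per your earlier post), the tag in ContactUs.html would look like:

```html
<!-- Inside the <head> of ContactUs.html -->
<!-- Both www and non-www requests serve this same file, so both will -->
<!-- carry this tag, telling engines the www URL is the one to index. -->
<link rel="canonical" href="http://www.ensoplastics.com/ContactUs/ContactUs.html" />
```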
-
So then in the page without the www I should insert this into the head and do the same for all other pages?
-
Hi again
So basically canonicals are better.
And here is why you get this: when robots crawl your website, they see the following pages as different:

www.example.com
example.com
www.example.com/index.html
example.com/index.html

And we could continue with variants. Canonicals tell search engines that these pages are the same, and that they should be handled as one page.

So if you put a canonical tag into the index file pointing at www.example.com, you will get the following result:

www.example.com (no matter which URL the search engine visits, it will treat it as the canonical link)

This is also good for links, because people might link to you as example.com or example.com/index, etc. If you insert the canonical, you focus all those links on one URL.
Hope it helped,
Istvan
-
Which is better? And I am also interested in knowing why this happens.
-
Hi,
These duplicate URLs can be resolved easily in two ways:
1. A 301 from non-www to www, or vice versa.
2. A canonical to one of the URLs.
This way you will focus all the link juice on only one page, giving it more power.
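If you take the 301 route and the site runs on Apache with mod_rewrite (an assumption; check with your host), a minimal .htaccess sketch would be:

```apache
# Permanently (301) redirect all non-www requests to the www host
RewriteEngine On
RewriteCond %{HTTP_HOST} ^ensoplastics\.com$ [NC]
RewriteRule ^(.*)$ http://www.ensoplastics.com/$1 [R=301,L]
```

Either way, pick one host (www or non-www) and use it consistently in your internal links.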
I hope it helped,
Istvan