How do I eliminate duplicate page titles?
-
Almost (and I repeat, almost) all of my duplicate page titles show up because the same page is being seen twice in the crawl. How do I prevent this?
| www.ensoplastics.com/ContactUs/ContactUs.html | Contact ENSO Plastics |
| ensoplastics.com/ContactUs/ContactUs.html | Contact ENSO Plastics |
This is from the CSV...there are many more just like it. How do I cut out all of these duplicate URLs?
-
Thank you for the follow-up, Dr. Pete!
-
I don't see anything wrong with your home-page canonical. We usually suggest pointing the home-page to the root:
...and not including the filename (just for the home-page), but that's not necessary. You link internally to "index.html", so what you have is fine, and keeps it consistent. I think the error is only happening because our crawler is trying to view the "/" version and sees the canonical to "index.html" (so, they look different).
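Spelled out, the two options look like this (a sketch using the domain from this thread; only one of these should actually appear in the page, site-wide and consistently):

```html
<!-- Root form (the usual suggestion for the home page): -->
<link rel="canonical" href="http://www.ensoplastics.com/" />
<!-- Filename form (what the site uses now; also fine, since it links internally to index.html): -->
<link rel="canonical" href="http://www.ensoplastics.com/index.html" />
```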
-
So if I want the www. version to be the one that shows up in Google, what exactly do I put in the head of the ContactUs.html page? As you can see, when I put this in the head I get the critical error from SEOmoz. So this fix just isn't making sense to me right now. If I take it back out, the critical error is gone, but then I get the message that I should add the canonical to the page.
<dl>
<dt>Canonical URL</dt>
<dd>"http://www.ensoplastics.com/ContactUs/ContactUs.html"</dd>
<dt>Explanation</dt>
<dd>If the canonical tag is pointing to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. Make sure you're targeting the right page (if this isn't it, you can reset the target above) and then change the canonical tag to reference that URL.</dd>
<dt>Recommendation</dt>
<dd>We check to make sure that IF you use canonical URL tags, it points to the right page. If the canonical tag points to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. If you've not made this page the rel=canonical target, change the reference to this URL. NOTE: For pages not employing canonical URL tags, this factor does not apply.</dd>
</dl>
-
So, just to be clear - these are the same physical page. Google sees the two URLs as being two different pages, but in terms of actual physical documents on your server, there's only one (ContactUs.html). So, you just need the one canonical tag per page.
If you have any dynamic (database/code-driven) pages, be careful and make sure that the canonical tag is being created dynamically to match the correct page. You don't want to end up with canonical tags pointing to the wrong pages.
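One way to keep a dynamically generated canonical from drifting out of sync with its page is to compute it from a single normalization function that every template calls. A minimal Python sketch of that idea (the www host and the index.html convention come from this thread; the function names and the query-string handling are illustrative assumptions, not the site's actual code):

```python
from urllib.parse import urlsplit, urlunsplit

CANONICAL_HOST = "www.ensoplastics.com"  # preferred host, per this thread

def canonical_url(requested_url):
    """Normalize any variant of a page's URL to its one canonical form."""
    scheme, _host, path, _query, _frag = urlsplit(requested_url)
    if path in ("", "/"):
        path = "/index.html"  # the site links internally to index.html
    # Query strings are dropped here; real dynamic pages may need to keep
    # or whitelist the parameters that actually change the content.
    return urlunsplit((scheme or "http", CANONICAL_HOST, path, "", ""))

def canonical_tag(requested_url):
    """The <link> element to emit in the page's <head>."""
    return '<link rel="canonical" href="%s" />' % canonical_url(requested_url)
```

Every template then emits `canonical_tag(...)` for the URL it was reached at, so the non-www and www variants both declare the same www URL.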
-
I'd follow the advice to 301 or canonical, but it doesn't hurt to also declare a canonical version in Google Webmaster Tools - it's under "Site Configuration" > "Settings". You still need to canonicalize, but it's one additional signal to Google (and it's easy).
-
The index page's canonical should point to the index page, and each content page's canonical to that same content page (it is just for eliminating the duplicate issue).
Don't put a canonical that points to a page that is not its duplicate.
-
Once I add it in and crawl the page I end up with a critical error... so something is not right.
Appropriate Use of Rel Canonical
Moderate fix
<dl>
<dt>Canonical URL</dt>
<dd>"http://www.ensoplastics.com/index.html"</dd>
<dt>Explanation</dt>
<dd>If the canonical tag is pointing to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. Make sure you're targeting the right page (if this isn't it, you can reset the target above) and then change the canonical tag to reference that URL.</dd>
<dt>Recommendation</dt>
<dd>We check to make sure that IF you use canonical URL tags, it points to the right page. If the canonical tag points to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. If you've not made this page the rel=canonical target, change the reference to this URL. NOTE: For pages not employing canonical URL tags, this factor does not apply.</dd>
</dl>
-
ok, good luck!
-
OK, I will put the canonical in the head of the HTML files and see what happens.
-
You have one page that can be reached from two or even more paths. We are talking about those paths; they are what create the duplicate content.
It is like having your whole site duplicated on the non-www version of your domain.
With the canonical, you tell search engines which path, which .html file, you are optimizing for.
-
This actually does not solve the problem. I have only one index.html file, so how in the world do I access a page that does not exist in my hierarchy? For example, if I have the following two URLs, there is really only one instance of that page whose head I can edit; it is not like there are actually two HTML files, one for each URL. So in this case, am I just stuck creating redirects for each instance where this occurs?
| www.ensoplastics.com/ContactUs/ContactUs.html |
| ensoplastics.com/ContactUs/ContactUs.html |
-
You just insert into each HTML file the canonical that points to the www URL, and it should work itself out.
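In concrete terms, a self-referencing, www-pointing tag for the page in question would look like this (a sketch using the URL from this thread):

```html
<!-- Inside the <head> of ContactUs/ContactUs.html -->
<link rel="canonical" href="http://www.ensoplastics.com/ContactUs/ContactUs.html" />
```

Whether the crawler arrives via the www or non-www URL, it finds the same tag naming the www version.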
-
So then in the page without the www I should insert this into the head and do the same for all other pages?
-
Hi again
So basically canonicals are better.
And here is why you get this: when robots crawl your website, they see URL variants like example.com, www.example.com, and www.example.com/index.html as different pages. And we could continue with variants. Canonicals tell search engines that these pages are the same, and that they should be handled as one page.
So if you put a canonical tag into the index file, you will have the following result:
www.example.com (no matter which URL the search engine visits, it will treat it as the canonical link)
This is also good for links, because people might link to you as example.com or example.com/index, etc. Then if you insert the canonical, you focus all those links on one URL.
Hope it helped,
Istvan
-
Which is better? And I am also interested in knowing why this happens.
-
Hi,
These duplicate URLs can easily be resolved in two ways:
1. A 301 redirect from non-www to www, or vice versa.
2. A canonical tag pointing to one of the URLs.
This way you will focus all the link juice on only one page. More power!
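For option 1, a common .htaccess sketch for the non-www to www 301 (assuming the site runs Apache with mod_rewrite enabled; swap in your own domain):

```apache
RewriteEngine On
RewriteCond %{HTTP_HOST} ^ensoplastics\.com$ [NC]
RewriteRule ^(.*)$ http://www.ensoplastics.com/$1 [R=301,L]
```

This sends every non-www request to its www equivalent with a permanent redirect, so crawlers only ever see one version.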
I hope it helped,
Istvan