Is there an easier way from the server to prevent duplicate page content?
-
I know that using either a 301 or 302 will fix the problem of duplicate page content. My question is: is there an easier way of preventing duplicate page content when it's an issue with the URL? For example:
URL: http://example.com
My guess would be like it says here, that it's a setting issue with the server.
If anyone has some pointers on how to prevent this from occurring, it would be greatly appreciated.
-
I have seen tons of duplicate content errors in the SEOmoz report. The pages that I have are the same, but the sidebar ads and other elements are dynamic based on the store they are coming from, so we send the store name as a query string: http://www.appymall.com/apps/numberland-learn-numbers-with-montessori%20&store=Appy-back-2-school. If you look at the source code, we defined the canonical URL. The system is still calling these duplicates. Can you help address this issue? What are we doing wrong? I did check the on-page keyword tool and it has a green check after
Canonical URL Tag Usage
-
Thanks. I did the server-level change, it works great, the pages are having no problems resolving canonically, and the changes have been accounted for in Google and Bing's webmaster data since the 24th. Only, one other thing also happened at that same time: my site lurched downward another notch.
This is what usually happens when I do something that's been recommended by SEOs.
-
I've never done one of these yet so I will Google how to do it. I'm waiting to find out the type of server it is.
-
Yes, a 301-redirect is almost always a server-level directive. It's not a tag or HTML element. You can create them with code (in the header of the page), but that's typically harder and only for special cases.
-
Okay, so the code variant will rely on the type of server?
-
If that's the case Dr. Pete, that saves me from having to add the tag to 51 pages. I already have one on the homepage. Thank you.
-
As long as the tactic you use returns a proper 301, no one method is really better than any other. Ryan's approach works perfectly well for Apache-hosted sites.
-
In most cases, I don't find sitewide canonical tags to really be necessary, but if they're done right, they can't hurt. The trick is that people often screw them up (and bad canonicals can be really bad). I do like one on the home-page, because it sweeps up all the weird variants that are so common for home pages.
-
Robert, check this article out re: FrontPage and .htaccess.
FrontPage is an HTML editor that helps you build a site. Apache is a server that the site can run on. It sounds like you have both.
You'll want to edit the .htaccess file in the root folder of your website, wherever the file for your homepage sits.
-
Make sure you have a space after the second quotation:
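The original snippet was stripped out of this post, but a canonical tag with that space in place would look something like this (example.com is a placeholder):

```html
<!-- Note the space after the second (closing) quotation mark, before the slash -->
<link rel="canonical" href="http://www.example.com/" />
```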
-
Thank you for expounding on this issue; I thought it fit.
-
Dr. Peter, thank you for clarifying this. I do see the R=301 now, but I didn't see it before.
That's what I figured. Is there a preferred 301 code to use?
Yes, I will be sure to use it internally as well. I can see where that would be a mess. Thank you again for sharing your expertise.
-
I'm still getting a bad canonical problem, even with every page having a rel="canonical". It even shows up in SEOmoz's stats, with it indexing 300+ pages when there are only 180-odd. Trouble is, the .htaccess file says "FrontPage", not Apache. Would your .htaccess thingy for Apache work there? And is it the .htaccess that's in the URL's folder with the rest of the site's regular files, or one that's in a prev. folder?
-
Hi Dr. Pete, would correcting the current issue with a 301 and adding the rel=canonical tags to each page be the best option? My thought being that any future duplicate content issues that may occur (not caused by this issue) would be avoided.
-
Sorry, I just read this again; the 301 will fix the URL issue sitewide.
-
Expounding is what I do. Other people use different words for it...
-
Just to clarify, the rewrite that Ryan is proposing IS a 301-redirect (see the "R=301") - it's just one way to implement it. Done right, it can be used sitewide.
It's perfectly viable to also use canonicals (and I definitely think they're great to have on the home-page, for example), but I think the 301 is more standard practice here. It's best for search crawlers AND visitors to see your canonical URL (www vs. non-www, whichever you choose). That leads people to link to the "proper" version, bookmark it, promote it on social, etc.
Make sure, too, to use the canonical version internally. It's amazing how often people 301-redirect to "www." but then link to the non-www version internally, or vice versa. Consistent signals are important.
-
Thank you again SEOKeith, I understand what has to be done. I just wanted to make sure I was clear on what needed to be done. Yes, the rel canonical tag will reflect whatever page I'm adding it to. Since I didn't get the errors for it, I never added it to my other sites; so now I have to do it for all of them. Fun...
-
I recommend you update each page; note that the rel canonical tag will be different for each page. And 50 pages should take you less than 15 mins.
-
SEOKeith, the problem is sitewide, all 52 pages. I was hoping to solve the problem on the server and avoid coding each page. But from what I'm gathering, even if I use the 301 redirect, I should still add the rel="canonical" on each page to avoid scraping. This tells the search engines that this is the only page to index and crawl.
Lol, sorry I didn't recognize the acronym. Yes, I have a site that runs on Wordpress and one that runs on Joomla. The one that I'm having issues with is not on a CMS, though.
-
Brian, it's the same thing, just a different method; both are 301s.
No, it would not cover the issue sitewide, only for the home page.
CMS = Content Management System (an example would be Wordpress or Drupal).
You should still do the rel="canonical" sitewide (on each page).
Does that all make sense?
-
Thank you SEOKeith, what would be the difference between using a 301 in the .htaccess versus the code Ryan suggested (the <IfModule mod_rewrite.c> block)?
Also, if I use the 301 redirect in the .htaccess, would it cover this issue sitewide?
Okay, so the space needs to be there.
No, I don't use a CMS.
-
Brian, if you 301 example.com to www.example.com, that will get rid of the duplicate URL issue server-side (this will resolve your current duplicate content issue).
Additionally, I recommend you add the rel=canonical; it will prevent other potential duplicate content issues that may arise and is considered good practice to implement.
The tag looks correct; note the space after the domain in quotes:
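The tag itself was stripped from this post; it would look roughly like this, with example.com standing in for the actual domain:

```html
<!-- Note the space after the quoted domain, before the closing slash -->
<link rel="canonical" href="http://www.example.com/" />
```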
Are you using a CMS?
-
Thank you SEOKeith! I definitely want to make sure I don't use any bad practices to fix this issue, and thank you for clarifying that about the code.
So if I apply the code and fix the issue from the server by making the URL www.example.com, I would then add the rel=canonical tag to prevent scraping.
Would this be the correct URL to put in the tag?
-
The rewrite rule above is not bad practice; it will fix the issue with your URLs.
However, it is good practice to additionally use the rel=canonical tag on your site to prevent any other duplicate content issues.
In short, the rel=canonical tag tells Google which URL you wish to use, preventing Google from thinking you have duplicate content if multiple URLs exist for the same page.
-
Thank you Keri, that's what I'm thinking but I want to make sure. Thank you for messaging Dr. Pete, I hope maybe he can expound on this.
-
Generally, if you can fix it with code, that tends to be a bit better than the canonical tag, from my understanding. I've emailed Dr. Pete and asked him to contribute to this thread as well, as he's an expert on canonical tags.
-
Thanks a lot guys, this is some great information. Let me get this straight.
Is solving this issue with the code below a bad practice?
<IfModule mod_rewrite.c>
  RewriteEngine on
  RewriteCond %{HTTP_HOST} !^www\. [NC]
  RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
</IfModule>
If it's not a bad practice and I implement the code to stop the issue, you are saying I should still use a rel=canonical tag to prevent scraping?
-
You should set up the correct canonicalization rewrites at the server level with IIS or .htaccess (not sure which one you have). If you know what type of server you are on, then you can find all the correct rewrites (www vs. non-www, lowercase, trailing slash, etc.).
For example, here is a great post if you have IIS: http://www.seomoz.org/blog/what-every-seo-should-know-about-iis
And you should also use rel=canonical tags.
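As a sketch of one of the other rewrites mentioned above (not code from this thread), a trailing-slash rule on Apache with mod_rewrite might look like this:

```apacheconf
RewriteEngine on
# Only rewrite if the request is not a real directory
RewriteCond %{REQUEST_FILENAME} !-d
# Strip a single trailing slash and issue a permanent (301) redirect
RewriteRule ^(.*)/$ /$1 [L,R=301]
```

The `!-d` condition matters: real directories legitimately use a trailing slash, so redirecting those would cause a loop with Apache's own DirectorySlash behavior.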
-
I always use rel="canonical"
-
Rel canonical is considered a best practice in SEO, so you should just always include it in your pages, even if they're the only copy of the content you know of. It will help prevent any scrapers from stealing your content down the road.
And re: you're sorta right. Technically speaking, what we're doing with that .htaccess code is 301-redirecting every URL, either to the www or non-www version. So say you go with my method: anyone going to http://example.com just gets 301'd over to http://www.example.com.
-
Thank you Ryan, that is exactly what I expected the problem to be but really couldn't figure out how to address it or solve it. You explained it very well and I appreciate the suggested code to use as well. I should be able to figure it out from here.
Thank you again!
-
Thank you Brent and kjay. Take a look at Ryan's answer; I think that is what I was shooting for. If I can eliminate the problem of an ambiguous URL at the server level, then I will not need rel="canonical" or a 301/302. What do you guys think?
-
Personally I would do the following:
- Set rel="canonical" as Brent says below
- 301 redirect to the preferred URL, so if you are using www.example.com, redirect example.com to it; that way, if anyone points links at example.com, "most" of the juice will pass over (this will probably fix the issue you have posted about)
- Set the preferred URL in Google Webmaster Tools
If you are using a CMS like Wordpress, rel="canonical" will probably already be taken care of for your website; you can check this by viewing the source or using SEOmoz's on-page keyword optimization tool.
-
Actually, in cases like your example above, it's more an issue of an ambiguous URL than actual duplicate content.
The thing to do in the example above is to choose which version of your site (with www or without) you want to always use, and then set your server accordingly. In Apache this means using your .htaccess file.
If you decide to always display www (my preferred way) then this should be in your .htaccess:
<IfModule mod_rewrite.c>
  RewriteEngine on
  RewriteCond %{HTTP_HOST} !^www\. [NC]
  RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
</IfModule>
If you want your URLs to not use www:
<IfModule mod_rewrite.c>
  RewriteEngine on
  RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
  RewriteRule ^ http://%1%{REQUEST_URI} [L,R=301]
</IfModule>
-
You should definitely setup your site Canonicalization, and you should also utilize rel=canonical tags to help distinguish which page is the actual page.
For example, if you want to identify that www.example.com is the correct url, then you would use the following:
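The example tag itself is missing from this post; for www.example.com it would be something along these lines, placed in the page's <head>:

```html
<link rel="canonical" href="http://www.example.com/" />
```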