Is there an easier way from the server to prevent duplicate page content?
-
I know that using either a 301 or a 302 will fix the problem of duplicate page content. My question is: is there an easier way of preventing duplicate page content when it's an issue with the URL? For example:
URL: http://example.com
My guess would be, like it says here, that it's a settings issue with the server.
If anyone has some pointers on how to prevent this from occurring, it would be greatly appreciated.
-
I have seen tons of duplicate content errors in the SEOmoz report. The pages I have are the same, but the sidebar ads and other elements are dynamic based on the store they are coming from, so we send the store name as a query string: http://www.appymall.com/apps/numberland-learn-numbers-with-montessori%20&store=Appy-back-2-school. If you look at the source code, we defined the canonical URL, but the system is still calling these duplicates. Can you help address this issue? What are we doing wrong? I did check the on-page keyword tool and it has a green check after
"Canonical URL Tag Usage"
-
Thanks. I made the server-level change and it works great; the pages are resolving canonically with no problems, and the changes have been reflected in Google's and Bing's webmaster data since the 24th. Only, one other thing also happened at that same time: my site lurched downward another notch.
This is what usually happens when I do something that's been recommended by SEOs.
-
I've never done one of these yet so I will Google how to do it. I'm waiting to find out the type of server it is.
-
Yes, a 301-redirect is almost always a server-level directive. It's not a tag or HTML element. You can create one with code (in the header of the page), but that's typically harder and only for special cases.
-
Okay, so the code variant will depend on the type of server?
-
If that's the case, Dr. Pete, that saves me from having to add the tag to 51 pages. I already have one on the homepage. Thank you.
-
As long as the tactic you use returns a proper 301, no one method is really better than any other. Ryan's approach works perfectly well for Apache-hosted sites.
-
In most cases, I don't find sitewide canonical tags to really be necessary, but if they're done right, they can't hurt. The trick is that people often screw them up (and bad canonicals can be really bad). I do like one on the home-page, because it sweeps up all the weird variants that are so common for home pages.
-
Robert, check this article out re: FrontPage and .htaccess.
FrontPage is an HTML editor that helps you build a site. Apache is a server that the site can run on. It sounds like you have both.
You'll want to edit the .htaccess file in the root folder of your website, wherever the file for your homepage sits.
-
Make sure you have a space after the second quotation:
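<link rel="canonical" href="http://www.example.com/" />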
-
Thank you for expounding on this issue; I thought it fit.
-
Dr. Peter, thank you for clarifying this. I do see the R=301 now, but I didn't see it before.
That's what I figured. Is there a preferred 301 code to use?
Yes, I will be sure to use it internally as well. I can see where that would be a mess. Thank you again for sharing your expertise.
-
I'm still getting a bad canonical problem, even with every page having a rel="canonical". It even shows up in SEOmoz's stats, with it indexing 300+ pages when there are only 180-odd. Trouble is, the .htaccess file says "FrontPage", not Apache. Would your .htaccess thingy for Apache work there? And is it the .htaccess that's in the URL's folder with the rest of the site's regular files, or one in a parent folder?
-
Hi Dr. Pete, would correcting the current issue with a 301 and adding rel=canonical tags to each page be the best option? My thinking is that any future duplicate content issues (not caused by this issue) would then be avoided.
-
Sorry, I just read this again: the 301 will fix the URL issue sitewide.
-
Expounding is what I do. Other people use different words for it...
-
Just to clarify, the rewrite that Ryan is proposing IS a 301-redirect (see the "R=301") - it's just one way to implement it. Done right, it can be used sitewide.
It's perfectly viable to also use canonicals (and I definitely think they're great to have on the home-page, for example), but I think the 301 is more standard practice here. It's best for search crawlers AND visitors to see your canonical URL (www vs. non-www, whichever you choose). That leads people to link to the "proper" version, bookmark it, promote it on social, etc.
Make sure, too, to use the canonical version internally. It's amazing how often people 301-redirect to "www." but then link to the non-www version internally, or vice versa. Consistent signals are important.
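For instance (using a made-up page name), an internal link should point at the canonical host:
<a href="http://www.example.com/about.html">About</a>
rather than:
<a href="http://example.com/about.html">About</a>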
-
Thank you again SEOKeith, I understand what has to be done. I just wanted to make sure I was clear on what needed to be done. Yes, the rel canonical tag will reflect whatever page I'm adding it to. Since I didn't get the errors for it, I never added it to my other sites, so now I have to do it for all of them. Fun...
-
I recommend you update each page; note the rel canonical tag will be different for each page. And 50 pages should take you less than 15 minutes.
-
SEOKeith, the problem is sitewide, all 52 pages. I was hoping to solve the problem on the server and avoid coding each page. But from what I'm gathering, even if I use the 301 redirect I should still add rel="canonical" on each page to avoid scraping. This tells the search engines that this page is the only page to index and crawl.
Lol, sorry, I didn't recognize the acronym. Yes, I have a site that runs on Wordpress and one that runs on Joomla. The one I'm having issues with is not on a CMS, though.
-
Brian, it's the same thing, just a different method; both are 301s.
No, it would not cover the issue sitewide, only the home page.
CMS = Content Management System (an example would be Wordpress or Drupal).
You should still do the rel="canonical" site wide (on each page).
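Each page's tag should point to that page's own preferred URL; for example, a hypothetical contact page would carry:
<link rel="canonical" href="http://www.example.com/contact.html" />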
Does that all make sense?
-
Thank you SEOKeith, what would be the difference between using a 301 in the .htaccess versus the code Ryan suggested (the <IfModule mod_rewrite.c> rewrite block)?
Also, if I use the 301 redirect in the .htaccess, would it cover this issue sitewide?
Okay so the space needs to be there.
No, I don't use a CMS.
-
Brian, if you 301 example.com to www.example.com, that will get rid of the duplicate URL issue server-side (this will resolve your current duplicate content issue).
Additionally, I recommend you add the rel=canonical tag; it will prevent other potential duplicate content issues that may arise and is considered good practice to implement.
The tag looks correct; note the space after the domain in quotes:
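<link rel="canonical" href="http://www.example.com/" />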
Are you using a CMS?
-
Thank you SEOKeith! I definitely want to make sure I don't use any bad practices to fix this issue, and thank you for clarifying that about the code.
So if I apply the code and fix the issue from the server by making the URL www.example.com, I would then add the rel=canonical tag to prevent scraping.
Would this be the correct URL to put in the tag?
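<link rel="canonical" href="http://www.example.com/" />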
-
The rewrite rule above is not bad practice; it will fix the issue with your URLs.
However, it is good practice to additionally use the rel=canonical tag on your site to prevent any other duplicate content issues.
In short, the rel=canonical tag tells Google which URL you wish to use, preventing Google from thinking you have duplicate content if multiple URLs exist for the same page.
-
Thank you Keri, that's what I'm thinking, but I want to make sure. Thank you for messaging Dr. Pete; I hope maybe he can expound on this.
-
Generally, if you can fix it with code, that tends to be a bit better than the canonical tag, from my understanding. I've emailed Dr. Pete and asked him to contribute to this thread as well, as he's an expert on canonical tags.
-
Thanks a lot, guys, this is some great information. Let me get this straight.
Is solving this issue with the code below a bad practice?
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
</IfModule>
If it's not a bad practice and I implement the code to stop the issue, you are saying I should still use a rel=canonical tag to prevent scraping?
-
You should set up the correct canonicalization rewrites at the server level with IIS or .htaccess (not sure which one you have). If you know what type of server you are on, then you can find all the correct rewrites (www vs. non-www, lowercase, trailing slash, etc.).
For example, here is a great post if you have IIS. http://www.seomoz.org/blog/what-every-seo-should-know-about-iis
And you should also use rel=canonical tags.
-
I always use rel="canonical"
-
Rel canonical is considered a best practice in SEO, so you should just always include it in your pages, even if they're the only copy of the content you know of. It will help prevent any scrapers from stealing your content down the road.
And regarding your question: you're sorta right. Technically speaking, what we're doing with that htaccess code is 301 redirecting every URL, either to the www or non-www version. So say you go with my method, anyone going to http://example.com just gets 301'd over to http://www.example.com.
-
Thank you Ryan, that is exactly what I expected the problem to be but really couldn't figure out how to address it or solve it. You explained it very well and I appreciate the suggested code to use as well. I should be able to figure it out from here.
Thank you again!
-
Thank you Brent and kjay. Take a look at Ryan's answer; I think that is what I was shooting for. If I can eliminate the problem of an ambiguous URL at the server level, then I will not need rel="canonical" or a 301/302. What do you guys think?
-
Personally I would do the following:
- Set rel="canonical" as Brent says below
- 301 redirect to the preferred URL: if you are using www.example.com, redirect example.com to it; that way, if anyone points links at example.com, "most" of the juice will pass over (this will probably fix the issue you have posted about)
- Set the preferred URL in Google Webmaster Tools
If you are using a CMS like Wordpress, rel="canonical" will probably already be taken care of for your website; you can check this by viewing the source or using SEOmoz's on-page keyword optimization tool.
-
Actually, in cases like your example above, it's more an issue of an ambiguous URL than actual duplicate content.
The thing to do is to choose which version of your site you want to always use (with www or without), and then set your server accordingly. In Apache this means using your .htaccess file.
If you decide to always display www (my preferred way), then this should be in your .htaccess:
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
</IfModule>
If you want your URLs to not use www:
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^ http://%1%{REQUEST_URI} [L,R=301]
</IfModule>
-
You should definitely set up your site canonicalization, and you should also utilize rel=canonical tags to help distinguish which page is the actual page.
For example, if you want to identify that www.example.com is the correct URL, then you would use the following:
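<link rel="canonical" href="http://www.example.com/" />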