How can I prevent duplicate content between www.page.com/ and www.page.com?
-
SEOmoz's recent crawl showed me that I had an error for duplicate content and duplicate page titles.
The problem is that it found the same page twice because one URL has a trailing '/' on the end and the other doesn't,
e.g. www.page.com/ vs. www.page.com.
My question is: do I need to be concerned about this? And is there anything I should put in my .htaccess file to prevent it from happening?
Thanks!
Karl -
Thanks a lot for your quick replies. I have it sorted now.
-
Yes, you should address this problem. Put something like this in your .htaccess:
Options +FollowSymLinks
RewriteEngine On

# REDIRECT to canonical url
RewriteCond %{HTTP_HOST} ^mysite\.com [NC]
RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]
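That handles the www vs. non-www duplicate. If what SEOmoz flagged is the trailing slash itself (www.page.com/ vs. www.page.com), you can add a second rule block underneath. Roughly like this, as a sketch assuming Apache with mod_rewrite, so test it before going live:

# Strip a trailing slash from anything that is not a real file or directory
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)/$ /$1 [R=301,L]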
-
Hi there,
This should solve your problem:
# removes trailing slash (the !-f and !-d conditions skip real files and directories)
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{HTTP_HOST} !^\.localhost$ [NC]
RewriteRule ^(.+)/$ http://%{HTTP_HOST}$1 [R=301,L]
If you wish to redirect the other way, from non-trailing slash to trailing slash, let me know and I'll send you the .htaccess for that.
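As a rough starting point (just a sketch on the same Apache/mod_rewrite assumptions), the reverse rule would be along these lines:

# add a trailing slash to URLs that are not real files and don't already end in one
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*[^/])$ http://%{HTTP_HOST}/$1/ [R=301,L]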
Kind regards
Bojan