Home Page .index.htm and .com Duplicate Page Content/Title
-
I have been whittling away at the duplicate content on my clients' sites, thanks to SEOmoz's Pro report, and have been getting pushback from the account manager at register.com (the site was built there and the owner doesn't want to move it). He says these are the exact same page and that he can't access one to redirect it to the other. Any suggestions?
The SEOmoz report flags duplicate content on both of these URLs:
Durango Mountain Biking | Durango Mountain Resort - Cascade Village
http://www.cascadevillagehotel.com/index.htm
Durango Mountain Biking | Durango Mountain Resort - Cascade Village
http://www.cascadevillagehotel.com/
Your help is greatly appreciated!
Sheryl
-
Totally helpful, thank you!
-
A relatively painless way (if .htaccess is too hard for your contact to implement) is to use rel=canonical to point to the URL you want, since Google and Bing will (eventually) pick up your canonical tag.
So, for http://www.cascadevillagehotel.com/index.htm you could add a tag like the one below into the head section of your index.htm file:
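For example (a sketch - assuming http://www.cascadevillagehotel.com/ is the version you want indexed):
<link rel="canonical" href="http://www.cascadevillagehotel.com/" />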
You should also make sure that any links to the home page refer to http://www.cascadevillagehotel.com rather than http://www.cascadevillagehotel.com/index.htm
See http://support.google.com/webmasters/bin/answer.py?hl=en&answer=139394 for more info on rel=canonical.
Having said that, a 301 redirect is probably the best way to solve the problem.
BTW, I'm assuming it is an Apache server and so uses .htaccess - IIS can be a bit more tricky (see http://www.seomoz.org/blog/what-every-seo-should-know-about-iis#chaining for IIS redirects)
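If it does turn out to be IIS 7 or later with the URL Rewrite module installed, a roughly equivalent rule might look something like the sketch below (this would sit inside the <system.webServer> section of web.config - the rule name is just a placeholder, and it's worth testing on a staging copy first):
<rewrite>
  <rules>
    <!-- Permanently (301) redirect /index.htm to the site root -->
    <rule name="Redirect index.htm to root" stopProcessing="true">
      <match url="^index\.htm$" />
      <action type="Redirect" url="http://www.cascadevillagehotel.com/" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>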
You could use something like this in the .htaccess file (always make a backup copy of the .htaccess file before saving changes, in case something goes wrong - a typo, etc.):
# BACK UP THE EXISTING .htaccess FIRST!
RewriteEngine On
RewriteBase /
# You may need to uncomment the next line, depending on your host
#Options +FollowSymlinks
# Add www to non-www URLs - you may not need these two lines
RewriteCond %{HTTP_HOST} ^cascadevillagehotel\.com$ [NC]
RewriteRule ^(.*)$ http://www.cascadevillagehotel.com/$1 [L,R=301]
# The following redirect handles index.htm (assumes the default page is /)
Redirect 301 /index.htm http://www.cascadevillagehotel.com/
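Once the rules are saved, a quick way to confirm the redirect is behaving (assuming you have curl handy) is to request the old URL and check the response headers:
curl -I http://www.cascadevillagehotel.com/index.htm
# You should see something like:
# HTTP/1.1 301 Moved Permanently
# Location: http://www.cascadevillagehotel.com/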
-
It's really important that you add a permanent 301 redirect from http://www.cascadevillagehotel.com/index.htm pointing to http://www.cascadevillagehotel.com (as mentioned by Marisa). Otherwise the SEO value can be split between the two URLs, making it harder for you to get the site ranking.
At the moment the homepage is accessible via 4 URL versions:
http://www.cascadevillagehotel.com/
http://www.cascadevillagehotel.com/index.htm
http://cascadevillagehotel.com/
http://cascadevillagehotel.com/index.htm
Also, when you send the request, ask for a 301 redirect to be placed on the non-www URL version pointing to its respective www URL version as well - you'd want this done for every page, e.g. http://cascadevillagehotel.com/hotel should redirect to http://www.cascadevillagehotel.com/hotel
Note - I often have the same discussion with web developers. From their point of view it is the same page, and I understand that, but you just need to explain that Google treats it as multiple versions (do some research on canonicalization).
-
No, I don't - I just send an email to the account manager at register.com telling him exactly what needs to be done. It is very time-consuming, but this is the way the business owner wants it done for now.
-
Do you have access to the root directory to set up a 301 redirect in the .htaccess file?