Duplicate Content for index.html
-
In the Crawl Diagnostics Summary, it says that I have two pages with duplicate content: my home page at www.mywebsite.com and the same page at www.mywebsite.com/index.html.
I read in a Dreamweaver tutorial that you should name your home page "index.html" so that www.mywebsite.com automatically serves index.html to the user. Is this a bug in SEOmoz's crawler, or is it a real problem with my site?
Thank you,
Dan
-
The code should definitely go into the .htaccess file in the website's root directory. However, .htaccess can be finicky; a few days ago I ran into a similar issue with a client's website, and I was able to remedy it with a variation of the code:
# Index redirect
# The THE_REQUEST condition matches the literal request line from the client,
# which prevents a redirect loop when DirectoryIndex serves index.* internally.
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^/]+/)index\.(php|html|htm|asp)\ HTTP/
RewriteRule ^([^/]+/)index\.(php|html|htm|asp)$ http://yoursite.com/$1 [R=301,L]
If you give me the URL for the site I will take a look at it and let you know what would be feasible.
-
Hi Daniel, can you share with us the URL of your site? We can take a look at it and give you a more precise answer that way. Thanks!
-
I eventually figured out that your method was a 301 redirect, and I definitely broke my site trying to use the code you posted... haha. It's okay though; I just removed the code and it went back to normal. At first, I was editing the .htaccess file in the public_html folder, which wasn't working. Then I tried the root folder for the site (I created the .htaccess file since it did not exist). Neither of those worked. (I am using Bluehost, so I do not think that I have root access, and I am not sure if it is a Linux server or not.)
If there is an easy way to explain what I am doing wrong, please do so. Otherwise, I will use the canonical tag.
Thanks for everything!
-
@Dan
Thanks for your reply. It seems like there are lots of different ways to solve this problem. I just watched this video on Matt Cutts' blog where he discusses his preference for 301 redirects over the rel=canonical tag.
Where would you say your solution fits in?
Sorry about the delay in this response; I didn't realize right away that you were asking me a question. Placing the code from my previous answer causes a 301 permanent redirect to the original URL. That's what the
[R=301,L]
portion of the code specifies: R=301 issues a redirect with a 301 (permanent) status code, and L makes it the last rule processed. After reviewing the Matt Cutts video, I realize I should have asked whether you were operating on a Linux server that you had root access to. We actually utilize both redirects and canonical tags, since that was recommended by the on-page optimization reports. Heck, Google uses them too, I would assume because it's easier for the user to be referred to a single page URL. Obviously, though, if you don't have server-level access and are not familiar with .htaccess (you can accidentally break your site), then the canonical solution is appropriate.
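A quick way to confirm the redirect is working, assuming you have curl available, is to request the index.html URL and inspect the response headers:
curl -I http://www.yoursite.com/index.html
# A working 301 redirect responds with something like:
# HTTP/1.1 301 Moved Permanently
# Location: http://www.yoursite.com/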
-
Josh,
Thanks for your reply. It seems like there are lots of different ways to solve this problem. I just watched this video on Matt Cutts' blog where he discusses his preference for 301 redirects over the rel=canonical tag.
Where would you say your solution fits in?
Thanks,
Dan
-
I use the link rel="canonical" tag on all my home pages, pointing to http://www.yoursite.com/.
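In the page <head>, that tag looks like this (a minimal example with a placeholder URL):
<link rel="canonical" href="http://www.yoursite.com/" />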
-
Oddly enough, I just recently answered this question. The SEOmoz crawler is correct: without a redirect, both versions of the page are accessible in your browser.
To resolve this issue, redirect index.html to the root URL by placing the following code into the .htaccess file in your root directory:
Options +FollowSymlinks
RewriteEngine on

# Index rewrite: redirect index.* requests to the clean directory URL
RewriteRule ^index\.(htm|html|php) http://www.yoursite.com/ [R=301,L]
RewriteRule ^(.*)/index\.(htm|html|php) http://www.yoursite.com/$1/ [R=301,L]
You can also do the same with the index file in any subdirectories you create, by placing a .htaccess file into those subdirectories and using variations of the above code; see the sketch below. This is how you create nice, tight URLs, without the duplicate content issue, that look like http://www.semclix.com/design/business/
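For example, a .htaccess file placed inside /design/business/ might look like the following (a sketch assuming your host allows per-directory .htaccess overrides; adjust the domain and path to your own site):
RewriteEngine on
# Only redirect when the client literally requested index.*, so the
# internal DirectoryIndex lookup does not trigger a redirect loop
RewriteCond %{THE_REQUEST} \ /design/business/index\.(htm|html|php)
RewriteRule ^index\.(htm|html|php)$ http://www.semclix.com/design/business/ [R=301,L]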
-
It is a problem which you need to fix. You need to canonicalize your pages.
Those are all various URLs which most likely lead to the same web page. I say "most likely" because these URLs can actually lead to different pages.
You need to tell crawlers and search engines how you organize your site. There are several ways to achieve canonicalization. The method I prefer is to add the following line of code to the <head> of each page:
<link rel="canonical" href="http://www.mywebsite.com/" />
The URL provided should be the preferred URL for your page.