Duplicate Content From Indexing of Non-File-Extension Page
-
Google has somehow indexed a page of mine without the .html extension, so they indexed www.samplepage.com/page. I am now showing duplicate content, because Google also sees www.samplepage.com/page.html. How can I force Google (or Bing, or whoever) to only index the page with the .html extension? I know people say not to use the file extension on pages, but I want to, so please, anybody... HELP!!!
-
Yeah, I looked further into the URL removal, but I guess technically I did not meet the criteria... and honestly I am fearful of other potential implications of removal... I guess I will just have to wait for the 301 to kick in. I just can't believe there is not a simple .htaccess snippet to make all URLs show the .html extension. It is simple enough to implement the reverse and have the extension dropped... I mean... good lord...
Thanks for all your help though, Mike. I truly appreciate the efforts!
-
LAME! You may just want to let the 301 redirect you have in place take its course, or remove the URL from Google's index, since it was added by mistake anyway.
Mike
-
Nope. .....good lord....
-
Nope.
-
If that does not work, give this a whirl:
# Skip URLs that already contain a file extension (a dot followed by 3-4 alphanumeric characters)
RewriteCond %{REQUEST_URI} !\.[a-zA-Z0-9]{3,4}
# Skip URLs that end with a slash
RewriteCond %{REQUEST_URI} !/$
RewriteRule ^(.*)$ $1.html
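If it still errors or loops, a slightly more defensive variant (just an untested sketch, assuming Apache with mod_rewrite enabled) rewrites only when the matching .html file actually exists on disk:
RewriteEngine On
# Skip URLs that already carry a file extension or end in a slash
RewriteCond %{REQUEST_URI} !\.[a-zA-Z0-9]{3,4}$
RewriteCond %{REQUEST_URI} !/$
# Only rewrite when the corresponding .html file really exists
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI}.html -f
RewriteRule ^(.*)$ $1.html [L]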
-
Try:
# Skip real directories; 301-redirect extensionless URLs to their .html versions
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^.]*[^./])$ /$1.html [R=301,L]
-
That caused the same "500 Internal Server Error" .......
-
Try my code without all the other redirects, and see if it works. If it does, then add back the other redirects one by one until everything works.
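If the 500 keeps coming back, the Apache error log will usually name the exact directive it choked on. On Apache 2.2 you can also enable rewrite logging while you debug (assuming you have access to the main server config; these two directives are not allowed in .htaccess, and the log path is just an example):
RewriteLog /var/log/apache2/rewrite.log
RewriteLogLevel 3
On Apache 2.4 the equivalent is "LogLevel alert rewrite:trace3".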
-
Oh, and my site auditor is seeing it as a directory with a file in it??? Ugghhh....
-
Nope. Didn't work. I am seriously about to lose my mind with this....
-
Maybe give this a whirl:
# If the URL does not contain a period or end with a slash
# (note the escaped dot — unescaped, "." matches any character and the condition never passes)
RewriteCond %{REQUEST_URI} !(\.|/$)
# append .html to the requested URL
RewriteRule (.*) /$1.html [L]
-
I get a server error when I do this? Sooo confused... Here are the .htaccess changes I made. FYI, I have temporarily removed the code you told me to put in there so the site's not down. I attached the server error screenshot too...
Options +FollowSymlinks
RewriteEngine On
RewriteCond %{REQUEST_URI} ! .html$
RewriteCond %{REQUEST_URI} ! /$
RewriteRule ^(.*)$ $1.html
RewriteCond %{HTTP_HOST} ^hanneganconstructionllc.com [NC]
RewriteRule ^(.*)$ http://hanneganremodeling.com/$1 [L,R=301]
RewriteCond %{HTTP_HOST} ^www.hanneganconstructionllc.com [NC]
RewriteRule ^(.*)$ http://hanneganremodeling.com/$1 [L,R=301]
RewriteCond %{HTTP_HOST} ^hremodeling.com [NC]
RewriteRule ^(.*)$ http://hanneganremodeling.com/$1 [L,R=301]
RewriteCond %{HTTP_HOST} ^www.hremodeling.com [NC]
RewriteRule ^(.*)$ http://hanneganremodeling.com/$1 [L,R=301]
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index.html\ HTTP/
RewriteRule ^index.html$ http://www.hanneganremodeling.com/ [R=301,L]
RewriteBase /
RewriteCond %{HTTP_HOST} ^hanneganremodeling.com$ [NC]
RewriteRule ^(.*)$ http://www.hanneganremodeling.com/$1 [R=301,L]
-
You repeat this code a few times; maybe that's the problem? Pretty sure you only need it once:
RewriteEngine On
Options +FollowSymlinks
RewriteBase /
The line:
RewriteEngine On
also only needs to be included once in an .htaccess file. You may want to remove all the other instances.
Try adding this code at the very top, after the first "RewriteEngine On" (note: no space after the ! and the dot escaped — a stray space there is a syntax error and will throw a 500):
RewriteCond %{REQUEST_URI} !\.html$
RewriteCond %{REQUEST_URI} !/$
RewriteRule ^(.*)$ $1.html
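Once it's de-duplicated, the whole file might end up looking something like this. This is only a rough sketch reassembled from the rules you pasted (untested, so keep a backup); note that it sends the old domains straight to the www hostname so visitors don't hop through two redirects:
Options +FollowSymlinks
RewriteEngine On
RewriteBase /
# Old domains -> new www domain
RewriteCond %{HTTP_HOST} ^(www\.)?hanneganconstructionllc\.com$ [NC]
RewriteRule ^(.*)$ http://www.hanneganremodeling.com/$1 [L,R=301]
RewriteCond %{HTTP_HOST} ^(www\.)?hremodeling\.com$ [NC]
RewriteRule ^(.*)$ http://www.hanneganremodeling.com/$1 [L,R=301]
# Force www on the main domain
RewriteCond %{HTTP_HOST} ^hanneganremodeling\.com$ [NC]
RewriteRule ^(.*)$ http://www.hanneganremodeling.com/$1 [L,R=301]
# Canonicalize /index.html to /
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html\ HTTP/
RewriteRule ^index\.html$ http://www.hanneganremodeling.com/ [L,R=301]
# Append .html to extensionless, non-directory URLs
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !\.html$
RewriteCond %{REQUEST_URI} !/$
RewriteRule ^(.*)$ /$1.html [L,R=301]
-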
Thanks Mike, you are awesome! I actually was thinking of doing that, but I was concerned that it might have some larger implications?
I also just resubmitted a sitemap, so hopefully that "might" speed up the crawl process...
Thanks again!
-
"I accidentally manually submitted the url to google and manually in submitted it to index and that when this issue began...."
It sounds like you accidently added this URL to the index. You can follow the procedure outlined below to request Google remove the specific URL from the index:
https://support.google.com/webmasters/bin/answer.py?hl=en&answer=59819
I checked your site's structure using Screaming Frog, and it does not appear that you are linking to any non-.html versions. If I run a scan starting from one of your non-.html pages, it appears that the page only links to itself.
Since you have the 301 redirect in place, you can choose to wait it out and Google should correct things eventually; otherwise, requesting Google remove the URL is a faster... PERMANENT process.
Good luck.
Mike
-
No, it's not WordPress; it was created with Dreamweaver. I didn't make sample and sample.html the same page, but Google is treating them that way... I have implemented the 301, so I guess I just have to wait for a crawl.
-
Thank you very much for your input! When I implement what you suggested into my .htaccess, I get a "500 Internal Server Error"? Maybe it would help if I list what I currently have in my .htaccess. I had to redirect some old domains, plus canonical www redirects and a default index.html redirect... I hope this helps; I am at my wit's end... I also attached a screenshot of the webmaster warning... THANKS!!!
Options +FollowSymlinks
RewriteEngine On
RewriteCond %{HTTP_HOST} ^hanneganconstructionllc.com [NC]
RewriteRule ^(.*)$ http://hanneganremodeling.com/$1 [L,R=301]
RewriteCond %{HTTP_HOST} ^www.hanneganconstructionllc.com [NC]
RewriteRule ^(.*)$ http://hanneganremodeling.com/$1 [L,R=301]
RewriteCond %{HTTP_HOST} ^hremodeling.com [NC]
RewriteRule ^(.*)$ http://hanneganremodeling.com/$1 [L,R=301]
RewriteCond %{HTTP_HOST} ^www.hremodeling.com [NC]
RewriteRule ^(.*)$ http://hanneganremodeling.com/$1 [L,R=301]
RewriteEngine on
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index.html\ HTTP/
RewriteRule ^index.html$ http://www.hanneganremodeling.com/ [R=301,L]
RewriteEngine On
Options +FollowSymlinks
RewriteBase /
RewriteCond %{HTTP_HOST} ^hanneganremodeling.com$ [NC]
RewriteRule ^(.*)$ http://www.hanneganremodeling.com/$1 [R=301,L]
Options +FollowSymLinks
RewriteEngine On
RewriteBase /
-
Is this a WordPress-based site? What CMS are you using? How were you able to get domain.com/sample and domain.com/sample.html to be the same page? Either way, the canonical tag is the correct solution in this case. There's no need for a 301, and if you do 301 redirects, you are not really fixing the issue caused by your CMS.
I would therefore strongly advise using the canonical tag. That's its intended use.
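For reference, the tag itself is a single line inside each page's <head>, pointing at the .html version you want indexed (the URL here is just illustrative):
<link rel="canonical" href="http://www.hanneganremodeling.com/sample.html" />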
-
A canonical tag won't physically redirect you when you visit the page; it just lets the search engines know which is the right page to index.
If you want to actually redirect using .htaccess, try using this code:
RewriteEngine On
# Skip URLs that already end in .html or a slash, then 301 everything else to its .html version
RewriteCond %{REQUEST_URI} !\.html$
RewriteCond %{REQUEST_URI} !/$
RewriteRule ^(.*)$ $1.html [R=301,L]
-
I tried the canonical, and when I enter the URL without the .html, it doesn't resolve to the URL with the .html extension. I tried an .htaccess redirect... I am stumped; I can't get it to redirect automatically to the .html. I accidentally submitted the URL to Google manually and submitted it to the index, and that's when this issue began...
-
Add a canonical tag to your header so that Google/Bing know which version of your page they should be indexing.
You can also try looking into where the link to the non-.html page is coming from. If it's an internal link, just change it so that Google doesn't continue to crawl it.