What is the correct htaccess code for Canonicalization?
-
I've been working on a client's site and put up the following, but when I check back on SEOmoz I have over 3,000 errors and notices, and it's been crawling a silly number of pages that don't exist!!
ErrorDocument 404 /404.html
Options +FollowSymLinks
DirectoryIndex index.html
RewriteEngine On
RewriteBase /
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index.html\ HTTP/
RewriteRule ^index.html$ http://hiperformanceautocentres.co.uk/ [R=301,L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.html [L]
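The last three lines are worth a close look: they rewrite any request that isn't an existing file or directory to /index.html, so URLs that don't exist still answer with a 200 instead of reaching the 404 page, which is one likely reason a crawler keeps finding pages that aren't really there. A minimal sketch of the same file without that catch-all, so genuinely missing URLs fall through to the ErrorDocument (only do this if nothing on the site relies on the catch-all):
ErrorDocument 404 /404.html
Options +FollowSymLinks
DirectoryIndex index.html
RewriteEngine On
RewriteBase /
# Strip index.html from the homepage with a 301 so only the / version gets indexed
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html\ HTTP/
RewriteRule ^index\.html$ http://hiperformanceautocentres.co.uk/ [R=301,L]
# No catch-all: requests for files that don't exist now return the 404 page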
-
It would be a good starting place for sites that are created in a similar way.
-
Should this basically be the htaccess starting point for every website that I create going forward?
-
That's great, thanks for that Chris.
-
This basically says: change anything ending in index.html (or index.htm) to end in / instead, using a 301 redirect.
<code>RewriteCond %{THE_REQUEST} ^.*\/index\.html?\ HTTP/</code>
<code>RewriteRule ^(.*)index\.html?$ "/$1" [R=301,L]</code>
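To make the back-reference concrete, here's a quick sketch of how that pair behaves (the paths are made up for illustration, and it assumes RewriteEngine On is already set further up the file):
# Requests like these get a 301 to the trailing-slash version:
#   /index.html          ->  /
#   /services/index.html ->  /services/
#   /contact/index.htm   ->  /contact/
# $1 re-inserts whatever (.*) captured in front of index.html
RewriteCond %{THE_REQUEST} ^.*/index\.html?\ HTTP/
RewriteRule ^(.*)index\.html?$ /$1 [R=301,L]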
This says redirect anything that starts http://domain...... to http://www.domain...... instead.
<code>RewriteCond %{HTTP_HOST} ^hiperformanceautocentres.co.uk [NC]</code>
<code>RewriteRule ^(.*)$ http://www.hiperformanceautocentres.co.uk/$1 [L,R=301]</code>
-
Okay, then you want:
ErrorDocument 404 /404.html
Options +FollowSymLinks
DirectoryIndex index.html
<code>RewriteEngine on</code>
<code>RewriteCond %{THE_REQUEST} ^.*/index.html?\ HTTP/</code>
<code>RewriteRule ^(.*)index\.html?$ "/$1" [R=301,L]</code>
<code>RewriteCond %{HTTP_HOST} ^hiperformanceautocentres.co.uk [NC]</code>
<code>RewriteRule ^(.*)$ http://www.hiperformanceautocentres.co.uk/$1 [L,R=301]</code>
-
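For anyone who wants the detail, here is that same block again with a comment above each line (the comments are added here as a gloss; the directives themselves are unchanged):
# Serve /404.html when a page genuinely doesn't exist
ErrorDocument 404 /404.html
# Let Apache follow symbolic links (mod_rewrite in .htaccess needs FollowSymLinks or SymLinksIfOwnerMatch)
Options +FollowSymLinks
# Serve index.html when a bare directory URL is requested
DirectoryIndex index.html
# Switch mod_rewrite on
RewriteEngine on
# Match the raw request line when the browser asked for .../index.html or .../index.htm
RewriteCond %{THE_REQUEST} ^.*/index.html?\ HTTP/
# Capture whatever sits in front of index.html and 301 to it, e.g. /about/index.html -> /about/
RewriteRule ^(.*)index\.html?$ "/$1" [R=301,L]
# Only when the host is the bare (non-www) domain, case-insensitive...
RewriteCond %{HTTP_HOST} ^hiperformanceautocentres.co.uk [NC]
# ...301 to the www host, keeping the requested path
RewriteRule ^(.*)$ http://www.hiperformanceautocentres.co.uk/$1 [L,R=301]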
Oops - guess I've knackered this page with that code!!
Could you explain what all the code means in detail? I just copied and pasted the original!!
-
You haven't redirected www and non-www, so you need to add:
RewriteCond %{HTTP_HOST} ^hiperformanceautocentres.co.uk [NC]
RewriteRule ^(.*)$ http://www.hiperformanceautocentres.co.uk/$1 [L,R=301]
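If you ever want the bare domain to be the canonical version instead, the same pattern simply flips (a sketch using this site's domain; swap in your own):
RewriteCond %{HTTP_HOST} ^www\.hiperformanceautocentres\.co\.uk [NC]
RewriteRule ^(.*)$ http://hiperformanceautocentres.co.uk/$1 [L,R=301]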
What other errors are you getting? 3,000 seems like a lot!