403 forbidden error website
-
Hi Mozzers,
I have a question about a new website from a new customer: http://www.eindexamensite.nl/.
There is a 403 forbidden error on it, and I can't find what the problem is.
I have checked on: http://gsitecrawler.com/tools/Server-Status.aspx
result:
URL=http://www.eindexamensite.nl/ **Result code: 403 (Forbidden / Forbidden)**
When I delete the .htaccess from the server there is a 200 OK :-). So it is in the .htaccess.
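(For anyone who wants to reproduce this kind of status check without the web tool, here is a minimal Python sketch using only the standard library; the function name is mine, and the commented URL is just the one from the question:)

```python
import urllib.error
import urllib.request


def get_status(url: str) -> int:
    """Return the HTTP status code for a URL, including 4xx/5xx codes."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        # urlopen raises on 4xx/5xx; the status code is on the exception
        return err.code

# e.g. get_status("http://www.eindexamensite.nl/")
# would return 403 while the broken .htaccess is in place, 200 without it
```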
.htaccess code:

ErrorDocument 404 /error.html
RewriteEngine On
RewriteRule ^home$ / [L]
RewriteRule ^typo3$ - [L]
RewriteRule ^typo3/.*$ - [L]
RewriteRule ^uploads/.*$ - [L]
RewriteRule ^fileadmin/.*$ - [L]
RewriteRule ^typo3conf/.*$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-l
RewriteRule .* index.php

# Start rewrites for static file caching
RewriteRule ^(typo3|typo3temp|typo3conf|t3lib|tslib|fileadmin|uploads|screens|showpic.php)/ - [L]
RewriteRule ^home$ / [L]

# Don't pull *.xml, *.css etc. from the cache
RewriteCond %{REQUEST_FILENAME} !^.*\.xml$
RewriteCond %{REQUEST_FILENAME} !^.*\.css$
RewriteCond %{REQUEST_FILENAME} !^.*\.php$

# Check for Ctrl-Shift reload
RewriteCond %{HTTP:Pragma} !no-cache
RewriteCond %{HTTP:Cache-Control} !no-cache

# NO backend user is logged in
RewriteCond %{HTTP_COOKIE} !be_typo_user [NC]

# NO frontend user is logged in
RewriteCond %{HTTP_COOKIE} !nc_staticfilecache [NC]

# We only redirect GET requests
RewriteCond %{REQUEST_METHOD} GET

# We only redirect URIs without query strings
RewriteCond %{QUERY_STRING} ^$

# We only redirect if a cache file actually exists
RewriteCond %{DOCUMENT_ROOT}/typo3temp/tx_ncstaticfilecache/%{HTTP_HOST}/%{REQUEST_URI}/index.html -f
RewriteRule .* typo3temp/tx_ncstaticfilecache/%{HTTP_HOST}/%{REQUEST_URI}/index.html [L]
# End static file caching

DirectoryIndex index.html
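(To illustrate how the chain of RewriteCond lines before the final cache rule works: every condition must hold before the rule fires, because consecutive RewriteCond entries AND together by default. Here is a hypothetical Python model of that decision; the function and parameter names are my own, and the Pragma/Cache-Control reload checks are omitted for brevity:)

```python
import os


def serve_from_static_cache(method, query_string, cookies, doc_root, host, uri):
    """Model the RewriteCond chain: all conditions must pass (they AND together)."""
    cache_file = os.path.join(
        doc_root, "typo3temp", "tx_ncstaticfilecache",
        host, uri.lstrip("/"), "index.html",
    )
    return (
        method == "GET"                          # we only redirect GET requests
        and query_string == ""                   # ...and URIs without query strings
        and "be_typo_user" not in cookies        # NO backend user is logged in
        and "nc_staticfilecache" not in cookies  # NO frontend user is logged in
        and os.path.isfile(cache_file)           # a cache file actually exists
    )
```

If any single condition fails, the whole chain fails and the request falls through to index.php instead of the static cache.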
CMS is typo3.
Any ideas?
Thanks!
Maarten -
Hi everyone,
I know this thread hasn't been active for a while, but I'm looking for an answer to a similar issue. Our infrastructure team had problems a few weeks ago and was routing bot traffic to a slave server. This obviously flagged up 403 errors in Webmaster Tools.
Having removed the traffic diversion, our site hasn't been reindexed in the three weeks since serving Googlebot a 403 response. Does anyone have experience of Google delaying reindexing of a site after it served a 403?
Thanks
-
Hi Alan,
OK, we'll start cutting down the .htaccess. I'll keep you posted.
Thanks!
-
Thanks Anthony!
That's the strange thing: the website is working, yet there is still a 403.
We will check the chmod status.
-
Hello Maarten
Those RewriteCond entries are cumulative, and it looks like there are missing commands.
Who edited that file last, and what did they change?
The way conditionals work is: you set one or more conditions, then add a command, then a line break. You can add more than one condition, and together they act as an AND.
This file has what looks like too many conditions and not enough commands, but it could be OK.
Try adding a blank line between the Rule entries and the Cond entries (but not between the Cond entries and the Rule they belong to).
Here is how to test anything like this: save a copy of the .htaccess, then start editing it. Delete everything below "Start rewrites" and see if that fixes it. If it does, the problem is below; if not, the problem is above. Keep cutting the file in half, or adding half back, until you discover the problem line.
It is harder with all those conditionals; I suspect the lower block is the problem.
So remove those Cond entries from the bottom up.
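(The halving approach above is a binary search over the file. A hypothetical Python sketch of the idea, with names of my own; in practice `is_broken` would mean "upload this prefix of the file and see whether the 403 comes back":)

```python
def find_bad_line(lines, is_broken):
    """Binary-search a config file for the first line that triggers the error.

    Assumes a single offending line, and that any prefix containing it
    reproduces the problem. That is not always true for paired directives
    (a Cond separated from its Rule), so always keep a backup of the
    real .htaccess before cutting it up.
    """
    lo, hi = 0, len(lines)          # invariant: lines[:lo] is fine, lines[:hi] is broken
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_broken(lines[:mid]):  # first half already reproduces the 403
            hi = mid
        else:                       # error must be in the second half
            lo = mid
    return lo                       # index of the offending line
```

Roughly log2(N) uploads instead of N, which matters with a file this size.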
-
Follow up:
I'm not seeing any errors when visiting the site (http://www.eindexamensite.nl/). It seems to be working perfectly. Could it be something client-side with your caching or system time?
-
Hi Maarten,
I'm not extremely familiar with .htaccess or the TYPO3 CMS, but the issue could simply be the result of misconfigured file permissions on a specific directory or path.
I'd check the permissions on all of the paths affected by the .htaccess and make sure they're readable and executable (755).
This could explain why you get the 200 status w/o the .htaccess but the 403 error with it.
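(A quick way to audit permissions like this, sketched in Python with my own function name; the baseline it checks for is the common shared-hosting convention of 755 for directories and 644 for files:)

```python
import os
import stat


def find_unreadable_paths(root):
    """List paths the web server likely cannot serve: files that are not
    world-readable, and directories that are not world-readable and
    world-executable. Any of these can surface as a 403."""
    problems = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames:
            p = os.path.join(dirpath, name)
            mode = os.stat(p).st_mode
            if not (mode & stat.S_IROTH and mode & stat.S_IXOTH):
                problems.append(p)
        for name in filenames:
            p = os.path.join(dirpath, name)
            if not os.stat(p).st_mode & stat.S_IROTH:
                problems.append(p)
    return problems
```

Anything it reports is a candidate for `chmod 755` (directories) or `chmod 644` (files).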
Good luck!
Related Questions
-
Website crawl error
Hi all, When I try to crawl a website, I get the following error message: "java.lang.IllegalArgumentException: Illegal cookie name". So far, I have found the following explanation: the errors indicate that one of the web servers within the same cookie domain as the server is setting a cookie for your domain with the name "path", as well as another cookie with the name "domain". Does anyone have experience with this problem, know what it means, and know how to solve it? Thanks in advance! Jens
Technical SEO | WeAreDigital_BE
-
Migrating micro site into existing website
My company is planning to migrate an existing (ecommerce) micro site - which sits on its own domain - into their main ecommerce site. This means that the content will be moved from www.microdomain.co.uk to www.maindomain.com/category. Some products already exist on the main domain. The micro site is fairly small with just over 400 pages - I am planning to map each URL to the new URL (exact corresponding page) and create 301 redirects for each. Where any additional content does not exist yet on the existing main domain, we will create it and 301 redirect to it. The micro site currently ranks fairly well for some keywords - being such a specialised micro site, (some of) the keywords also form part of the domain name; however, they won't on the main page, although they may form part of the URL (category). As an example (using a made-up URL), our micro site www.bread-sticks.co.uk ranks on page 1 for the keyword bread sticks - we don't just sell bread sticks on www.bread-sticks.co.uk but also rolls and bread, though; bread sticks is one category of very closely related categories. Say our main domain is www.supermarket.co.uk (selling a wide range of food / drink products). The micro site will be moving to www.supermarket.co.uk/baked-products/ - which is a category. Within that category, there are sub-categories (i.e. bread sticks, rolls and bread), which will sit under www.supermarket.co.uk/bread-sticks/ etc. What would be the best way of ensuring that our main domain takes over the rankings from our micro site, given that it will be sitting on our main domain as a category (one of many)? Can we expect www.supermarket.co.uk/baked-products/ or www.supermarket.co.uk/bread-sticks/ to replace www.bread-sticks.co.uk in the rankings simply by 301 redirecting? Thanks for your help!
Technical SEO | ViviCa1
-
Product Code Error in Volusion
I started working with about 800+ 404 errors in September after we migrated our site to Volusion 13. There is a recurring 404 error that I can't trace inside our source code or in our sitemap. I don't know what is causing this error, so I have no way of knowing how to fix it. Tech support at Volusion has been less than helpful, so any feedback would be appreciated. http://www.apelectric.com/Generac-6438-Guardian-Series-11kW-p/{1} The error seemingly starts with the product code. The addendum at the end of the URL, "p/", should be followed by the product code; in this example, 6438. Instead, the code is being automatically populated with %7B1%7D. Has anyone else had this issue with Volusion, or does this look familiar on any other platform?
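(One small clue here: %7B1%7D is simply the percent-encoded form of the literal text {1}, which suggests an unsubstituted template placeholder in the URL generation rather than a real product code. Reading {1} as a template variable is my interpretation, not something Volusion documents. A quick standard-library check:)

```python
from urllib.parse import quote, unquote

# "{" and "}" are not valid raw URL characters, so they get percent-encoded
assert quote("{1}", safe="") == "%7B1%7D"
assert unquote("%7B1%7D") == "{1}"
```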
Technical SEO | MonicaOConnor
-
403 error
Hey guys, I know that a 403 is not a terrible thing, but is it worthwhile fixing? If so, what is the best way to approach it? Cheers
Technical SEO | Adamshowbiz
-
Massive Increase in 404 Errors in GWT
Last June, we transitioned our site to the Magento platform. When we did so, we naturally got an increase in 404 errors for URLs that were not redirected (for a variety of reasons: we hadn't carried the product for years, Google no longer got the same string when it did a "search" on the site, etc.). We knew these would be there and were completely fine with them. We also got many 404s due to the way Magento had implemented their site map (putting in products that were not visible to customers, including all the different file paths to get to a product even though we use a flat structure, etc.). These were frustrating, but we did custom work on the site map and let Google resolve those many, many 404s on its own. Sure enough, a few months went by and GWT started to clear out the 404s. All the poor, nonexistent links from the site map and missing links from the old site started disappearing from the crawl notices, and we slowly went from some 20k 404s to 4k 404s. Still a lot, but we were getting there. Then, in the last 2 weeks, all of those links started showing up again in GWT, reporting as 404s. Now we have 38k 404s (way more than ever reported). I confirmed that these bad links are not showing up in our site map, and I'm really not sure how Google found these again. I know, in general, these 404s don't hurt our site. But it just seems so odd. Is there any chance Google bots just randomly crawled a big ol' list of outdated links it hadn't tried for a while? And does anyone have any advice for clearing them out?
Technical SEO | Marketing.SCG
-
403 forbidden, are these a problem?
Hi, I have just run a crawl test on Screaming Frog and it is showing quite a few 403 Forbidden status codes. We are showing none of these in Webmaster Tools; is this an issue?
Technical SEO | jtay123
-
404 errors and what to do
Hi, I am fairly new to the whole SEO thing and am still getting a bit confused as to what to do to sort things out. I've checked the help pages but I cannot seem to find the issue. I've just signed up, so my site was crawled for the first time and came up with more than 1,000 404 errors. I checked a couple of the links via the report I downloaded, and it does indeed show a 404 error, but when I check the pages all seems to work fine. I did find one issue where an image, if clicked on twice, was pointing to a URL with 'title= at the end. Now I have tried to get rid of that but couldn't find anything wrong. I'm a bit lost as to where to start!
Technical SEO | junglefrog
-
4xx Client Error
I have 2 pages showing as errors in my Crawl Diagnostics, but I have no idea where these pages have come from; they don't exist on my site. I have done a site-wide search for them and they don't appear to be referenced or linked to from anywhere on my site, so where is SEOmoz pulling this info from? The two links are: http://www.adgenerator.co.uk/acessibility.asp http://www.adgenerator.co.uk/reseller-application.asp
Technical SEO | IPIM