403 forbidden error website
-
Hi Mozzers,
I have a question about a new website for a new customer: http://www.eindexamensite.nl/.
There is a 403 Forbidden error on it, and I can't find what the problem is.
I checked the site with http://gsitecrawler.com/tools/Server-Status.aspx, with this result:
URL=http://www.eindexamensite.nl/ **Result code: 403 (Forbidden / Forbidden)**
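If you have shell access, you can also check the response code directly with a quick curl request (any URL can be substituted):

# -I sends a HEAD request and prints only the response headers, including the status line
curl -I http://www.eindexamensite.nl/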
When I delete the .htaccess from the server, the site returns a 200 OK :-), so the problem is in the .htaccess.
.htaccess code:

ErrorDocument 404 /error.html

RewriteEngine On
RewriteRule ^home$ / [L]
RewriteRule ^typo3$ - [L]
RewriteRule ^typo3/.*$ - [L]
RewriteRule ^uploads/.*$ - [L]
RewriteRule ^fileadmin/.*$ - [L]
RewriteRule ^typo3conf/.*$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-l
RewriteRule .* index.php

# Start rewrites for Static file caching
RewriteRule ^(typo3|typo3temp|typo3conf|t3lib|tslib|fileadmin|uploads|screens|showpic.php)/ - [L]
RewriteRule ^home$ / [L]

# Don't pull *.xml, *.css etc. from the cache
RewriteCond %{REQUEST_FILENAME} !^.*.xml$
RewriteCond %{REQUEST_FILENAME} !^.*.css$
RewriteCond %{REQUEST_FILENAME} !^.*.php$

# Check for Ctrl Shift reload
RewriteCond %{HTTP:Pragma} !no-cache
RewriteCond %{HTTP:Cache-Control} !no-cache

# NO backend user is logged in
RewriteCond %{HTTP_COOKIE} !be_typo_user [NC]

# NO frontend user is logged in
RewriteCond %{HTTP_COOKIE} !nc_staticfilecache [NC]

# We only redirect GET requests
RewriteCond %{REQUEST_METHOD} GET

# We only redirect URIs without query strings
RewriteCond %{QUERY_STRING} ^$

# We only redirect if a cache file actually exists
RewriteCond %{DOCUMENT_ROOT}/typo3temp/tx_ncstaticfilecache/%{HTTP_HOST}/%{REQUEST_URI}/index.html -f
RewriteRule .* typo3temp/tx_ncstaticfilecache/%{HTTP_HOST}/%{REQUEST_URI}/index.html [L]
# End static file caching

DirectoryIndex index.html
The CMS is TYPO3.
Any ideas?
Thanks!
Maarten

-
Hi everyone,
I know this thread hasn't been active for a while, but I'm looking for an answer to a similar issue. Our infrastructure team had issues a few weeks ago and was routing bot traffic to a slave server, which unsurprisingly flagged up 403 errors in Webmaster Tools.
We have since removed the traffic diversion, but our site hasn't been reindexed in the three weeks since serving Googlebot a 403 response. Does anyone have experience of Google delaying reindexing of a site after a 403 response?
Thanks
-
Hi Alan,
OK, we'll start cutting the .htaccess. I'll keep you posted.
Thanks!
-
Thanks Anthony!
That's the strange thing: the website is working, but it still returns a 403.
We will check the chmod status.
-
Hello Maarten
Those RewriteCond entries are cumulative and it looks like there are missing commands.
Who edited that file last, and what did they change?
The way conditionals work: you set one or more conditions, then the command (rule) they apply to, then a line break. You can add more than one condition in a row, and together they act as an AND.
This file has what looks like too many conditions and not enough commands, but it could be OK.
Try adding a blank line between each rule and the next block of Cond entries (but not between the Cond entries and the Rule they belong to).
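As a minimal sketch (generic patterns, not taken from this site's file), a condition block reads like this:

# Both conditions must match (logical AND), and they apply only
# to the single RewriteRule immediately below them
RewriteCond %{REQUEST_METHOD} GET
RewriteCond %{QUERY_STRING} ^$
RewriteRule ^example$ /example.html [L]

# This next rule has no conditions of its own
RewriteRule ^other$ /other.html [L]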
Here is how to test anything like this: save a copy of the .htaccess, then start editing it. Delete everything below the "# Start rewrites" comment and see if that fixes it. If it does, the problem is below that point; if not, it is above. Keep cutting the file in half (or adding halves back) until you find the problem line.
It is harder with all those conditionals; I suspect the lower block is the problem, so remove those Cond entries from the bottom up.
-
Follow up:
I'm not seeing any errors when visiting the site (http://www.eindexamensite.nl/). It seems to be working perfectly. Could it be something client-side with your caching or system time?
-
Hi Maarten,
I'm not extremely familiar with .htaccess or the TYPO3 CMS, but the issue may simply be a result of misconfigured file permissions on a specific directory or path.
I'd check the permissions on all of the paths affected by the .htaccess and make sure they're readable and executable (755).
This could explain why you get the 200 status without the .htaccess but the 403 error with it.
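A rough sketch of how to check and correct that from the shell (the document-root path below is only an example, not the site's actual path):

# Show the current permissions on the web root and the TYPO3 temp directory
ls -ld /var/www/eindexamensite.nl /var/www/eindexamensite.nl/typo3temp

# Directories should typically be 755 and files 644
find /var/www/eindexamensite.nl -type d -exec chmod 755 {} \;
find /var/www/eindexamensite.nl -type f -exec chmod 644 {} \;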
Good luck!