403 Forbidden error on website
-
Hi Mozzers,
I have a question about a new website from a new customer, http://www.eindexamensite.nl/.
It returns a 403 Forbidden error, and I can't find the cause.
I checked it with http://gsitecrawler.com/tools/Server-Status.aspx, which reports:
URL=http://www.eindexamensite.nl/ **Result code: 403 (Forbidden / Forbidden)**
When I delete the .htaccess from the server, the site returns a 200 OK :-), so the problem is in the .htaccess.
.htaccess code:

```apache
ErrorDocument 404 /error.html
RewriteEngine On
RewriteRule ^home$ / [L]
RewriteRule ^typo3$ - [L]
RewriteRule ^typo3/.*$ - [L]
RewriteRule ^uploads/.*$ - [L]
RewriteRule ^fileadmin/.*$ - [L]
RewriteRule ^typo3conf/.*$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-l
RewriteRule .* index.php

# Start rewrites for static file caching
RewriteRule ^(typo3|typo3temp|typo3conf|t3lib|tslib|fileadmin|uploads|screens|showpic.php)/ - [L]
RewriteRule ^home$ / [L]

# Don't pull *.xml, *.css etc. from the cache
RewriteCond %{REQUEST_FILENAME} !^.*\.xml$
RewriteCond %{REQUEST_FILENAME} !^.*\.css$
RewriteCond %{REQUEST_FILENAME} !^.*\.php$

# Check for Ctrl-Shift reload
RewriteCond %{HTTP:Pragma} !no-cache
RewriteCond %{HTTP:Cache-Control} !no-cache

# NO backend user is logged in
RewriteCond %{HTTP_COOKIE} !be_typo_user [NC]

# NO frontend user is logged in
RewriteCond %{HTTP_COOKIE} !nc_staticfilecache [NC]

# We only redirect GET requests
RewriteCond %{REQUEST_METHOD} GET

# We only redirect URIs without query strings
RewriteCond %{QUERY_STRING} ^$

# We only redirect if a cache file actually exists
RewriteCond %{DOCUMENT_ROOT}/typo3temp/tx_ncstaticfilecache/%{HTTP_HOST}/%{REQUEST_URI}/index.html -f
RewriteRule .* typo3temp/tx_ncstaticfilecache/%{HTTP_HOST}/%{REQUEST_URI}/index.html [L]
# End static file caching

DirectoryIndex index.html
```
The CMS is TYPO3.
Any ideas?
Thanks!
Maarten -
Hi everyone,
I know this thread hasn't been active for a while, but I'm looking for an answer to a similar issue. Our infrastructure team had problems a few weeks ago and routed bot traffic to a slave server, which predictably flagged 403 errors in Webmaster Tools.
We have since removed the traffic diversion, but the site hasn't been reindexed in the three weeks since it served Googlebot a 403 response. Does anyone have experience of Google delaying reindexing of a site after a 403 response?
Thanks
-
Hi Alan,
OK, we'll start cutting the .htaccess down. I'll keep you posted.
Thanks!
-
Thanks Anthony!
That's the strange thing: the website is working, yet it still returns a 403.
We will check the chmod status.
-
Hello Maarten
Those RewriteCond entries are cumulative, and it looks like some commands are missing.
Who edited that file last, and what did they change?
The way conditionals work: you set one or more conditions, then the command (rule) they apply to, then a line break. You can add more than one condition, and together they act as an AND, but they bind only to the first RewriteRule that follows them.
This file has what look like too many conditions and not enough commands, but it could be OK.
Try adding a blank line between separate rule entries and the next set of Cond entries (but not between the Cond entries and the rule they belong to).
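As a minimal sketch of that grouping, using directives from the file above (the blank line is only for readability; what binds conditions to a rule is that the RewriteRule comes directly after its RewriteCond lines):

```apache
# These three conditions apply only to the RewriteRule immediately below
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-l
RewriteRule .* index.php

# A standalone rule with no conditions; the blank line above keeps the
# blocks visually separate so stray conditions are easy to spot
RewriteRule ^home$ / [L]
```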
Here is how to test anything like this: save a copy of the .htaccess, then start editing it. Delete everything below # Start rewrites for static file caching and see if that fixes it. If it does, the problem is below that line; if not, it is above. Keep cutting the file in half (or adding half back) until you discover the problem line.
It is harder with all those conditionals; I suspect the lower block is the problem, so remove those Cond entries from the bottom up.
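For example, the first cut might leave only the top half of the file posted above (a sketch of the test, not a final config); if this version returns 200 OK, the offending line is in the static-file-caching block that was removed:

```apache
ErrorDocument 404 /error.html
RewriteEngine On
RewriteRule ^home$ / [L]
RewriteRule ^typo3$ - [L]
RewriteRule ^typo3/.*$ - [L]
RewriteRule ^uploads/.*$ - [L]
RewriteRule ^fileadmin/.*$ - [L]
RewriteRule ^typo3conf/.*$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-l
RewriteRule .* index.php
# Everything from "# Start rewrites for static file caching" down
# has been deleted for this test
```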
-
Follow up:
I'm not seeing any errors when visiting the site (http://www.eindexamensite.nl/). It seems to be working perfectly. Could it be something client-side with your caching or system time?
-
Hi Maarten,
I'm not extremely familiar with .htaccess or the TYPO3 CMS, but the issue could simply be misconfigured file permissions on a specific directory or path.
I'd check the permissions on all of the paths affected by the .htaccess and make sure they're readable and executable (755).
This could explain why you get the 200 status without the .htaccess but the 403 error with it.
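One more possibility worth ruling out (an assumption on my part, since I haven't seen your server config): in per-directory context, mod_rewrite needs FollowSymLinks (or SymLinksIfOwnerMatch) enabled, and without it Apache answers rewritten requests with exactly this kind of 403. A one-line test at the top of the .htaccess:

```apache
# Test whether missing FollowSymLinks is causing the 403.
# Note: if the host's AllowOverride doesn't permit Options, this line
# itself will produce a 500 error; remove it again in that case.
Options +FollowSymLinks
```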
Good luck!