Moz & Xenu Link Sleuth unable to crawl a website (403 error)
-
It could be that I am missing something really obvious; however, we are getting the following error when we try to use the Moz tool on a client website. (I have read through a few posts on 403 errors, but none appear to describe the same problem as this.)
Moz Result
Title: 403 : Error
Meta Description: 403 Forbidden
Meta Robots: Not present/empty
Meta Refresh: Not present/empty
Xenu Link Sleuth Result
Broken links, ordered by link:
error code: 403 (forbidden request), linked from page(s):
Thanks in advance!
-
Hey Liam,
Thanks for following up. Unfortunately, we use thousands of dynamic IPs through Amazon Web Services to run our crawler and the IP would change from crawl to crawl. We don't even have a set range for the IPs we use through AWS.
As for throttling, we don't have a set throttle. We try to space out the server hits enough to not bring down the server, but then hit the server as often as necessary in order to crawl the full site or crawl limit in a reasonable amount of time. We try to find a balance between hitting the site too hard and having extremely long crawl times. If the devs are worried about how often we hit the server, they can add a crawl delay of 10 to the robots.txt to throttle the crawler. We will respect that delay.
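For reference, the crawl delay Chiaryn describes is a two-line robots.txt addition. This is a sketch of what the devs would add (the user-agent token `rogerbot` is Moz's crawler, named elsewhere in this thread; the value `10` is the delay in seconds that Chiaryn suggests):

```text
User-agent: rogerbot
Crawl-delay: 10
```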
If the devs use Moz, as well, they would also be getting a 403 on their crawl because the server is blocking our user agent specifically. The server would give the same status code regardless of who has set up the campaign.
I'm sorry this information isn't more specific. Please let me know if you need any other assistance.
Chiaryn
-
Hi Chiaryn
The saga continues... this is the response my client got back from the developers - please could you let me have the answers to their two questions?
Apparently, as part of their 'SAF' (?) protocols, if the IT director sees a big spike in 3rd-party products trawling the site, he will block them! They did say that they use Moz too. What they've asked me to get from Moz is:
- Moz IP address/range
- Level of throttling they will use
I would question why, if THEY USE MOZ themselves, they would need these answers - but if I go back with that I will be going around in circles. Any chance of letting me know the answer(s)?
Thanks in advance.
Liam
-
Awesome - thank you.
Kind Regards
Liam
-
Hey There,
The robots.txt shouldn't really affect 403s; you would actually get a "blocked by robots.txt" error if that was the cause. Your server is basically telling us that we are not authorized to access your site. I agree with Mat that we are most likely being blocked in the .htaccess file. It may be that your server is flagging our crawler and Xenu's crawler as rogue crawlers or something along those lines. I ran a test on your URL using a non-existent crawler name, "Rogerbot" with a capital R, and got a 200 status code back; but when I ran the test with our real crawler, "rogerbot" with a lowercase r, I got the 403 error (http://screencast.com/t/Sv9cozvY2f01). This tells me that the server is specifically blocking our crawler, but not all crawlers in general.
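To make the capital-R test concrete, here is a minimal sketch of the kind of case-sensitive deny rule the server appears to be applying. Everything here is an assumption for illustration (the actual deny list and server logic are unknown); it only shows why an exact-case match lets "Rogerbot" through while blocking "rogerbot":

```python
# Hypothetical deny list -- the real server config is not visible to us.
BLOCKED_AGENTS = ["rogerbot", "Xenu Link Sleuth"]

def status_for(user_agent: str) -> int:
    """Return the status code a case-sensitive deny rule would send."""
    for blocked in BLOCKED_AGENTS:
        if blocked in user_agent:  # substring match, case-sensitive
            return 403  # forbidden: agent is on the deny list
    return 200  # allowed through

print(status_for("rogerbot"))  # real Moz crawler: blocked (403)
print(status_for("Rogerbot"))  # capital R, not on the list: allowed (200)
```

The same logic explains why Screaming Frog gets through: its user-agent string simply isn't on the list.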
I hope this helps. Let me know if you have any other questions.
Chiaryn
Help Team Ninja
-
Hi Mat
Thanks for the reply - robots.txt file is as follows:
## The following are infinitely deep trees
User-agent: *
Disallow: /cgi-bin
Disallow: /cms/events
Disallow: /cms/latest
Disallow: /cms/cookieprivacy
Disallow: /cms/help
Disallow: /site/services/megamenu/
Disallow: /site/mobile/
I can't get access to the .htaccess file at present (we're not the developers). Anyone else have any thoughts? Weirdly, I can get Screaming Frog info back on the site :-/
-
403s are tricky to diagnose because they, by their very nature, don't tell you much. They're sort of the server equivalent of just shouting "NO!".
You say Moz & Xenu are receiving the 403. I assume that it loads properly from a browser.
I'd start by looking at the .htaccess. Any odd deny statements in there? It could be that an IP range or user agent is blocked. Some people like to block common crawlers (not calling Roger names there). Check the robots.txt whilst you are there, although that shouldn't really return a 403.
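For illustration, a deny rule of the kind Mat means could look like the following Apache 2.4 .htaccess fragment. This is a hypothetical sketch, not the site's actual config (the matched strings and env-var name are assumptions); note that `BrowserMatch` is case-sensitive, which would fit the lowercase-only blocking behaviour discussed in this thread:

```apache
# Hypothetical .htaccess fragment: tag unwanted user agents...
BrowserMatch "rogerbot" bad_bot
BrowserMatch "Xenu" bad_bot

# ...then refuse them with a 403 while allowing everyone else.
<RequireAll>
    Require all granted
    Require not env bad_bot
</RequireAll>
```

If rules like these exist, removing or loosening them (or whitelisting the crawler's user agent) would clear the 403 without touching robots.txt.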