Moz & Xenu Link Sleuth unable to crawl a website (403 error)
-
It could be that I'm missing something really obvious, but we are getting the following error when we try to use the Moz tool on a client website. (I have read through a few posts on 403 errors, but none appear to describe the same problem as this.)
Moz result:
- Title: 403 : Error
- Meta Description: 403 Forbidden
- Meta Robots: Not present/empty
- Meta Refresh: Not present/empty
Xenu Link Sleuth result:
- Broken links, ordered by link: error code: 403 (forbidden request), linked from page(s):

Thanks in advance!
-
Hey Liam,
Thanks for following up. Unfortunately, we use thousands of dynamic IPs through Amazon Web Services to run our crawler and the IP would change from crawl to crawl. We don't even have a set range for the IPs we use through AWS.
As for throttling, we don't have a set throttle. We try to space out the server hits enough to not bring down the server, but then hit the server as often as necessary to crawl the full site (or reach the crawl limit) in a reasonable amount of time. We try to find a balance between hitting the site too hard and having extremely long crawl times. If the devs are worried about how often we hit the server, they can add a Crawl-delay of 10 to the robots.txt to throttle the crawler. We will respect that delay.
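As a sketch of what that directive looks like and how a crawler reads it back (using Python's standard-library robotparser; the rules and second user-agent name here are illustrative, not the client's actual file):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt that adds a 10-second crawl delay for Moz's crawler.
robots_txt = """\
User-agent: rogerbot
Crawl-delay: 10

User-agent: *
Disallow: /cgi-bin
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# rogerbot sees the 10-second delay; agents matching only "*" see none.
print(parser.crawl_delay("rogerbot"))       # 10
print(parser.crawl_delay("SomeOtherBot"))   # None
```

A crawler that honors Crawl-delay would wait that many seconds between requests, which addresses the devs' throttling concern without blocking the crawl outright.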
If the devs use Moz, as well, they would also be getting a 403 on their crawl because the server is blocking our user agent specifically. The server would give the same status code regardless of who has set up the campaign.
I'm sorry this information isn't more specific. Please let me know if you need any other assistance.
Chiaryn
-
Hi Chiaryn
The saga continues... This is the response my client got back from the developers. Could you please let me have the answers to their two questions?
Apparently, as part of their 'SAF' (?) protocols, if the IT director sees a big spike in 3rd-party products trawling the site, he will block them! They did say that they use Moz too. What they've asked me to get from Moz is:
- Moz IP address/range
- Level of throttling they will use
I would question why, if they use Moz themselves, they would need these answers, but if I go back with that I'll just be going around in circles. Any chance of letting me know the answer(s)?
Thanks in advance.
Liam
-
Awesome - thank you.
Kind Regards
Liam
-
Hey There,
The robots.txt shouldn't really affect 403s; you would actually get a "blocked by robots.txt" error if that were the cause. Your server is basically telling us that we are not authorized to access your site. I agree with Mat that we are most likely being blocked in the .htaccess file. It may be that your server is flagging our crawler and Xenu's crawler as troll crawlers or something along those lines. I ran a test on your URL using a non-existent crawler, Rogerbot with a capital R, and got a 200 status code back, but when I ran the test with our real crawler, rogerbot with a lowercase r, I got the 403 error (http://screencast.com/t/Sv9cozvY2f01). This tells me the server is specifically blocking our crawler, not all crawlers in general.
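That capital-R/lowercase-r comparison can be reproduced with a few lines of Python; this is a hypothetical sketch (the URL is a placeholder and the helper name is my own), not Moz's actual tooling:

```python
import urllib.request
import urllib.error

def fetch_status(url, user_agent):
    """Return the HTTP status code the server gives this user agent."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 403 if this agent is blocked

# Comparing a made-up agent against the real crawler name reveals
# user-agent-specific blocking, e.g.:
#   fetch_status("https://example.com/", "Rogerbot")  # capital R
#   fetch_status("https://example.com/", "rogerbot")  # lowercase r
```

If the two calls return different status codes for the same URL, the server is filtering on the User-Agent string rather than rejecting all crawlers.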
I hope this helps. Let me know if you have any other questions.
Chiaryn
Help Team Ninja
-
Hi Mat
Thanks for the reply - robots.txt file is as follows:
## The following are infinitely deep trees
User-agent: *
Disallow: /cgi-bin
Disallow: /cms/events
Disallow: /cms/latest
Disallow: /cms/cookieprivacy
Disallow: /cms/help
Disallow: /site/services/megamenu/
Disallow: /site/mobile/

I can't get access to the .htaccess file at present (we're not the developers). Anyone else have any thoughts? Weirdly, I can get Screaming Frog info back on the site :-/
-
403s are tricky to diagnose because they, by their very nature, don't tell you much. They're sort of the server equivalent of just shouting "NO!".
You say Moz & Xenu are receiving the 403. I assume that it loads properly from a browser.
I'd start by looking at the .htaccess. Any odd deny statements in there? It could be that an IP range or user agent is blocked; some people like to block common crawlers (not calling Roger names there). Check the robots.txt whilst you are there, although that shouldn't really return a 403.
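For reference, a user-agent block in .htaccess often looks something like this (a hypothetical Apache mod_rewrite snippet, not the site's actual file):

```apache
# Hypothetical rule: refuse any request whose User-Agent contains "rogerbot".
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} rogerbot
RewriteRule .* - [F,L]
```

The [F] flag makes Apache answer 403 Forbidden, matching the symptom here, and mod_rewrite's pattern match is case-sensitive unless [NC] is added, so a rule like this would let "Rogerbot" through while refusing "rogerbot".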