Moz & Xenu Link Sleuth unable to crawl a website (403 error)
-
It could be that I am missing something really obvious, but we are getting the following error when we try to use the Moz tool on a client website. (I have read through a few posts on 403 errors, but none that appear to describe the same problem as this.)
Moz Result
Title: 403 : Error
Meta Description: 403 Forbidden
Meta Robots: Not present/empty
Meta Refresh: Not present/empty
Xenu Link Sleuth Result
Broken links, ordered by link:
error code: 403 (forbidden request), linked from page(s):
Thanks in advance!
-
Hey Liam,
Thanks for following up. Unfortunately, we use thousands of dynamic IPs through Amazon Web Services to run our crawler and the IP would change from crawl to crawl. We don't even have a set range for the IPs we use through AWS.
As for throttling, we don't have a set throttle. We try to space out the server hits enough not to bring down the server, but hit it as often as necessary to crawl the full site (or reach the crawl limit) in a reasonable amount of time. We try to find a balance between hitting the site too hard and having extremely long crawl times. If the devs are worried about how often we hit the server, they can add a crawl delay of 10 to the robots.txt to throttle the crawler, and we will respect that delay.
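For reference, a crawl delay aimed specifically at Moz's crawler would look something like this in robots.txt (rogerbot is the user agent name mentioned elsewhere in this thread):

    User-agent: rogerbot
    Crawl-delay: 10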
If the devs use Moz as well, they would also be getting a 403 on their crawl, because the server is blocking our user agent specifically. The server would give the same status code regardless of who set up the campaign.
I'm sorry this information isn't more specific. Please let me know if you need any other assistance.
Chiaryn
-
Hi Chiaryn
The saga continues... This is the response my client got back from the developers. Please could you let me have the answers to the two questions below?
Apparently, as part of their 'SAF' (?) protocols, if the IT director sees a big spike in 3rd-party products trawling the site, he will block them! They did say that they use Moz too. What they've asked me to get from Moz is:
- Moz IP address/range
- Level of throttling they will use
I would question why, if THEY USE MOZ themselves, they would need these answers, but if I go back with that I will be going around in circles. Any chance of letting me know the answer(s)?
Thanks in advance.
Liam
-
Awesome - thank you.
Kind Regards
Liam
-
Hey There,
The robots.txt shouldn't really affect 403s; you would actually get a "blocked by robots.txt" error if that were the cause. Your server is basically telling us that we are not authorized to access your site. I agree with Mat that we are most likely being blocked in the .htaccess file. It may be that your server is flagging our crawler and Xenu's crawler as troll crawlers or something along those lines. I ran a test on your URL using a non-existent crawler, Rogerbot with a capital R, and got a 200 status code back, but when I ran the test with our real crawler, rogerbot with a lowercase r, I got the 403 error (http://screencast.com/t/Sv9cozvY2f01). This tells me that the server is specifically blocking our crawler, not all crawlers in general.
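For anyone who wants to reproduce that kind of check themselves, here is a minimal Python sketch (not Moz's actual test): the URL is a placeholder, and whether the two user agents get different responses depends on how the server matches the string.

    # Minimal sketch: request the same URL with two different User-Agent
    # strings and compare the status codes the server returns.
    import urllib.error
    import urllib.request

    URL = "http://www.example.com/"  # placeholder for the affected site

    def status_for(user_agent):
        req = urllib.request.Request(URL, headers={"User-Agent": user_agent})
        try:
            return urllib.request.urlopen(req).getcode()
        except urllib.error.HTTPError as e:
            return e.code

    print("Rogerbot (capital R):", status_for("Rogerbot"))  # made-up crawler name
    print("rogerbot (lowercase):", status_for("rogerbot"))  # Moz's real crawler name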
I hope this helps. Let me know if you have any other questions.
Chiaryn
Help Team Ninja
-
Hi Mat
Thanks for the reply - robots.txt file is as follows:
## The following are infinitely deep trees
User-agent: *
Disallow: /cgi-bin
Disallow: /cms/events
Disallow: /cms/latest
Disallow: /cms/cookieprivacy
Disallow: /cms/help
Disallow: /site/services/megamenu/
Disallow: /site/mobile/
I can't get access to the .htaccess file at present (we're not the developers). Anyone else got any thoughts? Weirdly, I can still get Screaming Frog info back on the site :-/
-
403s are tricky to diagnose because they, by their very nature, don't tell you much. They're sort of the server equivalent of just shouting "NO!".
You say Moz & Xenu are receiving the 403. I assume the site loads properly in a browser.
I'd start by looking at the .htaccess. Any odd deny statements in there? It could be that an IP range or user agent is blocked; some people like to block common crawlers (not calling Roger names there). Check the robots.txt whilst you are there, although that shouldn't really return a 403.
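To illustrate, a hypothetical .htaccess rule like the following (Apache mod_rewrite; not necessarily what the client's server is actually doing) would return a 403 to any request whose User-Agent contains "rogerbot" while leaving ordinary browsers alone:

    # Hypothetical example: deny any request whose User-Agent contains "rogerbot".
    # The [F] flag makes Apache answer with 403 Forbidden.
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} rogerbot
    RewriteRule .* - [F,L]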