Moz crawler is not able to crawl my website
-
Hello All,
I'm facing an issue with the Moz crawler. Every time it crawls my website, I get an error message saying: "**Moz was unable to crawl your site on Sep 13, 2017.** Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster."
We changed the robots.txt file and checked it, but the issue is still not resolved.
URL: https://www.khadination.shop/robots.txt
Do let me know what went wrong and what needs to be done.
Any suggestion is appreciated.
Thank you.
-
Hi there! Tawny from Moz's Help Team here!
I think I can help you figure out what's going on with your robots.txt file. First things first: we're not starting at the robots.txt URL you list. Our crawler always starts from your Campaign URL, and it can't start at an HTTPS URL, so it begins at the HTTP version and crawls from there. That means the robots.txt file we're having trouble accessing is khadination.shop/robots.txt.
I ran a couple of tests, and it looks like this robots.txt file might be inaccessible from AWS (Amazon Web Services). When I tried to curl your robots.txt file from AWS I got a 302 temporary redirect error (https://www.screencast.com/t/jy4MkDZQNbQ), and when I ran it through hurl.it, which also runs on AWS, it returned an internal server error (https://www.screencast.com/t/mawknIyaMn).
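If you want to double-check this from your end, you can run a quick test with curl from a machine outside your own network (an AWS instance is ideal, since that's where we're seeing the failure). These commands are just an illustrative sketch of that kind of check, not the exact ones I ran:

```
# Request only the headers so you can see the status code (you want a 200, not a 302 or a 5xx)
curl -I http://khadination.shop/robots.txt

# Follow any redirects and print each hop, to see where a 302 is sending the crawler
curl -sIL http://khadination.shop/robots.txt

# Fetch the file while identifying as Moz's crawler (rogerbot), in case the server treats bots differently
curl -A "rogerbot" http://khadination.shop/robots.txt
```

If those requests never reach a 200 with the file contents, that's the same thing our crawler is running into.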
One more thing — it looks like you have a wildcard character ( * ) for the user-agent as the first line in this robots.txt file. Best practices indicate that you should put all your specific user-agent disallow commands before a wildcard user-agent; otherwise those specific crawlers will stop reading your robots.txt file after the wildcard user-agent line, since they'll assume that those rules apply to them.
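For example, a layout along these lines keeps each specific crawler's group ahead of the wildcard group (the user-agents and paths below are just placeholders to show the ordering, not rules you should copy):

```
# Specific crawlers first, each with their own rules
User-agent: rogerbot
Disallow: /private/

User-agent: Googlebot
Disallow: /search/

# Wildcard group last, covering every other crawler
User-agent: *
Disallow: /admin/
```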
I think if you fix up those things, we should be able to access your robots.txt and crawl your site!
If you still have questions or run into more trouble, shoot us a note at help@moz.com and we'll do everything we can to help you sort everything out.