Moz crawler is not able to crawl my website
-
Hello All,
I'm facing an issue with the Moz crawler. Every time it crawls my website, I get an error message saying: "**Moz was unable to crawl your site on Sep 13, 2017.** Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster."
We changed the robots.txt file and checked it, but the issue is still not resolved.
URL: https://www.khadination.shop/robots.txt
Do let me know what went wrong and what needs to be done.
Any suggestions are appreciated.
Thank you.
-
Hi there! Tawny from Moz's Help Team here!
I think I can help you figure out what's going on with your robots.txt file. First things first: we're not starting at the robots.txt URL you listed. Our crawler always starts from your Campaign URL, and because it can't start at an HTTPS URL, it begins at the HTTP version and crawls from there. So the robots.txt file we're having trouble accessing is khadination.shop/robots.txt.
I ran a couple of tests, and it looks like this robots.txt file might be inaccessible from AWS (Amazon Web Services). When I tried to curl your robots.txt file from AWS I got a 302 temporary redirect error (https://www.screencast.com/t/jy4MkDZQNbQ), and when I ran it through hurl.it, which also runs on AWS, it returned an internal server error (https://www.screencast.com/t/mawknIyaMn).
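If you'd like to reproduce that check yourself, here's a rough sketch of one way to do it in Python (it assumes the third-party `requests` library is installed; the URLs are just the two versions discussed in this thread):

```python
# Minimal sketch: request each robots.txt URL without following redirects,
# so a 302 or a 500 shows up directly in the status code.
# Assumes the third-party `requests` library is installed (pip install requests).
import requests

URLS = [
    "http://khadination.shop/robots.txt",       # where Moz's crawler starts (HTTP)
    "https://www.khadination.shop/robots.txt",  # the URL posted in the question
]

for url in URLS:
    try:
        response = requests.get(url, allow_redirects=False, timeout=10)
        # Anything other than 200 here (e.g. 302 or 500) suggests crawlers
        # may not be able to read the file reliably.
        print(f"{url} -> {response.status_code}")
    except requests.RequestException as exc:
        print(f"{url} -> request failed: {exc}")
```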
One more thing — it looks like you have a wildcard character ( * ) for the user-agent as the first line in this robots.txt file. Best practices indicate that you should put all your specific user-agent disallow commands before a wildcard user-agent; otherwise those specific crawlers will stop reading your robots.txt file after the wildcard user-agent line, since they'll assume that those rules apply to them.
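For reference, a minimal robots.txt laid out that way might look something like this (rogerbot is Moz's crawler; the Disallow paths are only placeholders):

```
# Rules for specific crawlers come first ...
User-agent: rogerbot
Disallow: /private/

# ... and the wildcard group for every other crawler comes last.
User-agent: *
Disallow: /tmp/
```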
I think if you fix up those things, we should be able to access your robots.txt and crawl your site!
If you still have questions or run into more trouble, shoot us a note at help@moz.com and we'll do everything we can to help you sort everything out.
Related Questions
-
Access all crawl tests
How can I see all crawl tests run in the history of the account? Also, can I get them sent to an email address that isn't the primary one on the account? Please advise, as I need this historical data ASAP.
Feature Requests | Brafton-Marketing
-
MOZ Site Crawl - Ignore functionality question
Quick question about the ignore feature found in the Moz Site Crawl. We've made some changes to pages containing errors found by the Moz Site Crawl. These changes should have resolved the issues, but we're not sure about the "Ignore" feature and don't want to use it without first understanding what happens when we do. Will it clear the item from the current list until the next Site Crawl takes place, and if Roger finds the issue again, will it relist the error? Or will it clear the item from the list permanently, even if it hasn't been properly corrected?
Feature Requests | StickyLife
-
Does Moz offer a site auditor tool to embed on your website?
A similar service to http://mysiteauditor.com. We just want to embed a tool on our website that allows visitors to enter their URL and have a report emailed to them.
Feature Requests | WebMarkets
-
MOZ should add a toxic link checker tool to their incredible arsenal of SEO tools. Seems like a no-brainer.
Checking the health of backlinks, especially on new accounts, is crucial. Moz wants to be a one-stop SEO tool shop, and I think this would certainly go a long way toward cementing that.
Feature Requests | wearehappymedia
-
Moz Local - Does it do the work for you?
Good morning everyone, I have a quick question regarding Moz Local... I understand what it does, its purpose, and how great it is... My question is this: if you sign up for Moz Local, does it actually send all of the proper and necessary info to each of the aggregators or sites that it lists, or does it simply tell me what information I need to send them, leaving me to provide this info to each one myself? For example, does it say something like "Inconsistent info found on Yelp, please do this to correct it," OR does it actually make the corrections and send them to Yelp for me? Thanks
Feature Requests | Prime85
-
"Posted by" links on Moz - Broken
I wasn't sure which category to place this in, as Support doesn't cover Q&A or the Moz site in general, so I dropped it under other research tools, which the Q&A kind of is 🙂 Now I am not sure if it is just me and you have already rectified the issue, but whenever I click on the "posted by" links to a user, I get a page not found error. The links in question can be seen in my two grabs and affect all posted-by links on the Q&A section of Moz. A simple trailing slash after .com "/" will do the trick 🙂 https://moz.comusers/view/636129 - BROKEN
https://moz.com/users/view/636129 - FIXED
Feature Requests | TimHolmes
-
Crawl diagnostic errors due to query string
I'm seeing a large number of duplicate page titles, duplicate content, missing meta descriptions, etc. in my Crawl Diagnostics report due to URLs' query strings. These pages already have canonical tags, but I know canonical tags aren't considered in Moz's crawl diagnostic reports and therefore won't reduce the number of reported errors. Is there any way to configure Moz not to treat query-string variants as unique URLs? It's difficult to find a legitimate error among hundreds of these non-errors.
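For context, the canonical tags on those query-string variants look something like this (the URLs here are hypothetical placeholders):

```html
<!-- Hypothetical example: on a query-string variant such as
     https://www.example.com/products?sort=price, the canonical tag
     points back to the clean URL. -->
<link rel="canonical" href="https://www.example.com/products" />
```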
Feature Requests | jmorehouse
-
Can Moz reports data be exported to Excel?
Hi there, The Moz reports are exported to PDF at the moment. Is there a way to export the data to Excel? Thanks
Feature Requests | Jacky.Bizcover