Moz crawler is not able to crawl my website
-
Hi, I need help regarding a "Moz Can't Crawl Your Site" notice. I'm also sharing a screenshot of the message: "Moz was unable to crawl your site on Mar 26, 2022. Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster."
My robots.txt is also fine; I checked it.
Here is my website https://whiskcreative.com.au
Please just check it as soon as possible.
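A quick way to sanity-check the robots.txt side of this is to fetch it directly and ask whether it blocks Moz's crawler. Here is a minimal sketch, assuming Python with only the standard library and assuming Moz's crawler identifies itself as "rogerbot":

```python
# Minimal sketch: check that robots.txt is reachable and that it does not
# block the Moz crawler. Assumes the Moz user agent is "rogerbot" and uses
# the site from the question; adjust both as needed.
import urllib.request
import urllib.robotparser

SITE = "https://whiskcreative.com.au"  # site from the question
ROBOTS_URL = SITE + "/robots.txt"
MOZ_USER_AGENT = "rogerbot"            # assumed Moz crawler user agent

# 1. Is robots.txt reachable (HTTP 200 rather than a 4xx/5xx error)?
#    urlopen raises HTTPError if the server returns an error status.
with urllib.request.urlopen(ROBOTS_URL, timeout=10) as resp:
    print("robots.txt HTTP status:", resp.status)

# 2. Does robots.txt actually allow the Moz crawler to fetch pages?
parser = urllib.robotparser.RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()
print("rogerbot allowed on homepage:", parser.can_fetch(MOZ_USER_AGENT, SITE + "/"))
```

A 200 response with rogerbot allowed would suggest the problem lies elsewhere, for example a temporary outage or server-level blocking, as described in the answer below.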
@jasontorney Hi, I had the same problem after moving several of my websites to a Virtual Private Server that had enhanced security features.
One of these features was specifically to stop the Moz bot from crawling websites, and the hosting engineers advised they had done this because it was particularly aggressive in nature.
In my VPS control panel I found the switch that allows me to disable bot blocking, and I occasionally do this if I'm grading a page with Moz, but the advice from hosting support was to otherwise leave it active to protect the websites from attack (which means I don't get feedback from Moz crawls).
If you check with your hosting company you may find that they have a similar bot blocker configured for security purposes.
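If you suspect that kind of server-level blocking, one rough check is to request the same page with a normal browser user agent and with a crawler user agent and compare the responses. A minimal sketch, again assuming Python's standard library and "rogerbot" as the Moz user agent (a blocker that filters by IP or behaviour rather than user agent will not show up this way):

```python
# Minimal sketch: compare how the server answers a browser-like request
# versus a crawler-like request. A big difference in status codes often
# points to a server-level bot blocker. "rogerbot" is an assumed Moz UA.
import urllib.error
import urllib.request

URL = "https://whiskcreative.com.au/"  # page from the question; adjust as needed

USER_AGENTS = {
    "browser":     "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "moz crawler": "rogerbot",
}

for label, ua in USER_AGENTS.items():
    request = urllib.request.Request(URL, headers={"User-Agent": ua})
    try:
        with urllib.request.urlopen(request, timeout=10) as resp:
            print(f"{label:12s} -> HTTP {resp.status}")
    except urllib.error.HTTPError as err:
        print(f"{label:12s} -> HTTP {err.code} (blocked or error)")
    except urllib.error.URLError as err:
        print(f"{label:12s} -> request failed: {err.reason}")
```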
Related Questions
-
Unsolved The Moz.com bot is overloading my server
Unsolved MOZ Crawler Stalled
The MOZ crawl has been waiting since April 2nd. What's the deal? I am not able to "recrawl", so I have been stuck in limbo.
Product Support | WebMarkets
Unsolved Why isn't there an update yet!
Crawler issues on subdomain - Need resolving?
Hey guys, I'm fairly new to the world of SEO and have a ton of crawler issues with a friend's website I'm doing some work on. After Moz did a site crawl I'm getting loads of errors (a total of 100+ across critical crawler, content and metadata issues). Most of these are due to broken social links on a subdomain - so my question is: do I need to resolve all of the errors even if they are on a sub-domain? Will it affect the primary website? Thanks, Jack
Technical SEO | Jack1166
Any SEO benefits of adding a Glossary to our website?
Hi all, I manage a website for a software company. Many terms can be quite tricky, so it would be nice to add a Glossary page. Beyond that, I have 2 questions: 1. What would be the SEO benefits? 2. How would you suggest we implement this glossary so we can get as much SEO benefit as possible (for example, how would we link to it, and where would we place the glossary in terms of the sitemap, etc.)? Any advice appreciated! Katarina
Technical SEO | Katarina-Borovska
Using Moz to weed out bad backlinks
How do you use Open Site Explorer to weed out bad backlinks in your profile, and then how do you remove them if you cannot contact the various webmasters?
Technical SEO | marketing-man1990
SEOMoz Crawler vs Googlebot Question
I read somewhere that SEOMoz's crawler marks a page in its Crawl Diagnostics as duplicate content if it doesn't have more than 5% unique content. (I can't find that statistic anywhere on SEOMoz to confirm, though.) We are an eCommerce site, so many of our pages share the same sidebar, header, and footer links. The pages flagged by SEOMoz as duplicates have these same links, but they have unique URLs and category names. Because they're not actual duplicates of each other, canonical tags aren't the answer. Also, because inventory might automatically come back in stock, we can't use 301 redirects on these "duplicate" pages. It seems like it's the sidebar, header, and footer links that are causing these pages to be flagged as duplicates. Does the SEOMoz crawler mimic the way Googlebot works? Also, is Googlebot smart enough not to count the sidebar and header/footer links when looking for duplicate content?
Technical SEO | ElDude
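For the SEOMoz-vs-Googlebot question above, here is a minimal sketch of what a "percentage of unique content" style check could look like, in plain Python with the standard library. It is purely illustrative - it is not how the Moz crawler or Googlebot actually measure duplication, and the page text is made up:

```python
# Minimal sketch: estimate how similar two pages' text is, in the spirit of
# a "less than 5% unique content" duplicate check. Illustrative only.
import difflib

def similarity(text_a: str, text_b: str) -> float:
    """Return a 0..1 similarity ratio between two blobs of page text."""
    return difflib.SequenceMatcher(None, text_a.split(), text_b.split()).ratio()

# Two hypothetical category pages that share the same header/sidebar/footer
# text and differ only in a short unique block (all of this text is made up).
shared_chrome = "home shop brands contact free delivery returns newsletter signup " * 20
page_a = shared_chrome + "red widgets in stock from 9.99"
page_b = shared_chrome + "blue widgets back ordered until june"

ratio = similarity(page_a, page_b)
print(f"similarity between the two pages: {ratio:.1%}")
if ratio > 0.95:  # i.e. less than ~5% of the page text is unique
    print("these pages would be flagged as near-duplicates under a 95% threshold")
```

Pages that differ only in a short unique block score very high similarity, which is why shared sidebars, headers and footers can push otherwise distinct pages over a duplicate-content threshold.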
Magento - Google Webmaster Crawl Errors
Hi guys, I started my free trial - very impressed - and just thought I'd ask a question or two while I can. I've set up the website for http://www.worldofbooks.com (a large bookseller in the UK) using Magento. I'm getting a huge number of not-found crawl errors (27,808), which I think is due to URL rewrites. All the errors are in this format (non search-friendly): http://www.worldofbooks.com/search_inventory.php?search_text=&category=&tag=Ure&gift_code=&dd_sort_by=price_desc&dd_records_per_page=40&dd_page_number=1 as opposed to this format (the re-written URL): http://www.worldofbooks.com/arts-books/history-of-art-design-styles/the-art-book-by-phaidon.html This doesn't seem to really be affecting our rankings; we targeted 'cheap books' and 'bargain books' heavily and we're up to 2nd for Cheap Books and 3rd for Bargain Books. So my questions are: is this large number of crawl errors cause for concern, or is it something that will work itself out? And secondly, if it is cause for concern, will it be affecting our rankings negatively in any way, and what could we do to resolve this issue? Any pointers in the right direction much appreciated. If you need any more clarification regarding any points I've raised, just let me know. Benjamin Edwards
Technical SEO | Benj25
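For the Magento question above, one common way to deal with legacy parameterised URLs like that is to keep crawlers away from them in robots.txt while leaving the rewritten URLs crawlable. The sketch below, in plain Python with the standard library, tests a candidate Disallow rule against both URL styles before it goes live; the rule itself is a suggestion, not something confirmed in the thread:

```python
# Minimal sketch: verify that a candidate robots.txt rule blocks the legacy
# search_inventory.php URLs but still allows the rewritten product URLs.
# The Disallow rule is a suggested pattern, not taken from the original thread.
import urllib.robotparser

candidate_robots_txt = """\
User-agent: *
Disallow: /search_inventory.php
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(candidate_robots_txt.splitlines())

legacy_url = ("http://www.worldofbooks.com/search_inventory.php"
              "?search_text=&category=&tag=Ure&gift_code=&dd_sort_by=price_desc"
              "&dd_records_per_page=40&dd_page_number=1")
rewritten_url = ("http://www.worldofbooks.com/arts-books/"
                 "history-of-art-design-styles/the-art-book-by-phaidon.html")

print("legacy URL crawlable:   ", parser.can_fetch("*", legacy_url))     # expect False
print("rewritten URL crawlable:", parser.can_fetch("*", rewritten_url))  # expect True
```

Blocking the crawl only keeps bots off the legacy URLs; the links that produce them would still be worth fixing or redirecting at the source.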