Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will remain viewable - we have locked both new posts and new replies. More details here.
Unsolved: The Moz.com bot is overloading my server
-
How can I solve it?
-
Maybe a crawl delay will help.
-
@paulavervo
Hi,
We do! The best way to chat with us is via our contact form or direct email. We also have chat within Moz Pro.
Please contact us via help@moz.com or https://moz.com/help/contact
We will be happy to help.
Cheers,
Kerry.
-
Does the Moz team even monitor this forum?
-
If the Moz.com bot is overloading your server, there are several steps you can take to manage and mitigate the issue effectively. First, you can adjust the crawl rate in your robots.txt file by specifying a crawl delay for the Moz bots, using the directives User-agent: rogerbot and User-agent: dotbot, each followed by Crawl-delay: 10, to make the bot wait 10 seconds between requests. If this does not suffice, you can temporarily block the bots by disallowing them in your robots.txt file (see the sketch below).
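A minimal sketch of what that robots.txt could look like (the 10-second value is just the figure used above, and Crawl-delay is a non-standard directive that Moz's crawlers honour but many other bots ignore):

```
# Ask Moz's crawlers to wait 10 seconds between requests
User-agent: rogerbot
Crawl-delay: 10

User-agent: dotbot
Crawl-delay: 10

# To block them temporarily instead, replace each Crawl-delay line with:
# Disallow: /
```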
Additionally, it's a good idea to contact Moz's support team to explain the issue, as they may be able to adjust the crawl rate for your site. Implementing server-side rate limiting is another effective strategy: for Apache servers you can add rules to your .htaccess file that return a 429 Too Many Requests status code to the Moz bots, while for Nginx servers you can set up rate limiting in your configuration file to control the number of requests per second from a single user or IP address (see the sketches below).

Monitoring your server's performance and log files can help identify specific patterns or peak times, allowing you to fine-tune your settings. Furthermore, using a Content Delivery Network (CDN) can help distribute the load by caching content and serving it from multiple locations, reducing the direct impact on your server caused by crawlers. By taking these steps, you can manage the load from the Moz.com bot and maintain your server's stability and responsiveness.
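Two hedged sketches of that server-side rate limiting follow; the user-agent patterns, limits, and placeholder names are illustrative assumptions, so test them before deploying. For Apache (assuming 2.4+ with mod_rewrite available), an .htaccess rule can answer Moz's crawlers with a 429:

```
# .htaccess sketch: return 429 Too Many Requests to Moz's crawlers
<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} (rogerbot|dotbot) [NC]
    RewriteRule .* - [R=429,L]
</IfModule>
```

For Nginx, a sketch that keys a limit_req zone off the User-Agent header, so only matching crawlers are throttled (these directives belong in the http block; excess requests receive a 429):

```
# nginx sketch: throttle only Moz's crawlers, one request per second per IP
map $http_user_agent $moz_bot {
    default              "";                    # empty key = not rate limited
    ~*(rogerbot|dotbot)  $binary_remote_addr;   # limit matching bots per IP
}

limit_req_zone $moz_bot zone=mozbots:10m rate=1r/s;
limit_req_status 429;

server {
    listen 80;
    server_name example.com;   # illustrative placeholder

    location / {
        limit_req zone=mozbots burst=5 nodelay;
        # ...the rest of your normal site configuration...
    }
}
```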
Related Questions
-
Rogerbot directives in robots.txt
I feel like I spend a lot of time marking false positives in my reports to ignore. Can I prevent Rogerbot from crawling pages I don't care about with robots.txt directives? For example, I have some page types with meta noindex, and it reports these to me. Theoretically, I could block Rogerbot from these with a robots.txt directive and not have to deal with the false positives.
Reporting & Analytics | | awilliams_kingston0 -
Moz crawler is not able to crawl my website
Hi, I need help regarding "Moz Can't Crawl Your Site". I am also sharing a screenshot showing that Moz was unable to crawl my site on Mar 26, 2022: "Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster."
Technical SEO | | JasonTorney
My robots.txt is also OK; I checked it.
Here is my website https://whiskcreative.com.au
Please check it as soon as possible.0 -
Robots.txt file issues on Shopify server
We have repeated issues with one of our ecommerce sites not being crawled. We receive the following message: Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster. Read our troubleshooting guide. Are you aware of an issue with robots.txt on the Shopify servers? It is happening at least twice a month so it is quite an issue.
Moz Pro | | A_Q0 -
Meta Tag Descriptions not being found in Moz Crawls
Hey guys, I have been managing a few websites and have input them into Moz for crawl reports, etc. For a while I have noticed that we were getting a gratuitous amount of errors for missing meta tags; it was numbering in the 200's. The sites were in place before I got here, and for a lot of the older posts no one had even attempted to include tags, links on the page, or anything. As they are all WordPress sites and they all already had the Yoast/WordPress SEO plug-in installed, I decided I would go through each post and media file one at a time and update their meta tags via the plug-in. I personally did this, so I know that I added and saved each one; however, the Moz crawl reports continue to show that we are missing roughly 200 meta tags. I've seen a huge drop-off in 404 errors and the like since I went through and double-checked everything on the sites, yet the meta tag errors persist. Is it the case that Moz is not recognizing the tags when it crawls because I used the Yoast plugin? Or would you say that the plugin is the issue and I should find another way to add meta tags to the pages and posts on the site? My main concern is that, if Moz is having issues crawling the sites, is Google also seeing the same thing? The URLs include:
Moz Pro | | MOZ.info
sundancevacationsblog.com
sundancevacationsnews.com
sundancevacationscharities.com Any help would be appreciated!0 -
Why are Moz and Google search volumes so different?
A search term in MOZ shows the monthly search volume to be 49K. In Google, the same term shows the search volume at only 1300 monthly searches. Which do I trust? Thanks, Don
Moz Pro | | rcman0 -
In the alt tag of an image, can we use a #hashtag or domain.com? Is that good for SEO, or is it not allowed?
Some Google Search results show an article title containing a hashtag with a keyword in it, and when tweeting those articles, a title with a hashtag works very well for getting traffic to the blog. Also, can we use a hashtag inside the alt attribute, or our domain name with .com in it, like Google.com or #Google?
Moz Pro | | Esaky0 -
Domain.com and domain.com/index.html duplicate content in reports even with rewrite on
I have a site that was recently hit by the Google penguin update and dropped a page back. When running the site through seomoz tools, I keep getting duplicate content in the reports for domain.com and domain.com/index.html, even though I have a 301 rewrite condition. When I test the site, domain.com/index.html redirects to domain.com for all directories and root. I don't understand how my index page can still get flagged as duplicate content. I also have a redirect from domain.com to www.domain.com. Is there anything else I need to do or add to my htaccess file? Appreciate any clarification on this.
Moz Pro | | anthonytjm0 -
Seomoz Spider/Bot Details
Hi all, our website identifies a list of search engine spiders so that it does not show them session IDs when they come to crawl, preventing the search engines from thinking there is duplicate content all over the place. The Seomoz crawl has brought up over 20k crawl errors on the dashboard due to session IDs. Could someone please give the details for the Seomoz bot so that we can add it to the list on the website, so that when it does come to crawl it won't be shown session IDs and generate all these crawl errors? Thanks
Moz Pro | | blagger1