What happens if I go over my Mozscape API free limits?
-
Hello,
I just started using the free version of Mozscape and I fully understand there are limits and charges under this category. However, to avoid any costly surprises, I'd like to know:
What happens when I get near my usage limit?
What happens when I just hit the limit?
What happens when I go past the limit?
Along with these questions: is there any alert system, such as an email, to let me know when I'm getting close to the limit?
-
Hey There,
Happy to help answer your questions regarding our API. Our Free API Access will not lead to any charges unless you reach out to us and ask for additional rows to be added to your account. So there should be zero surprise charges from the API product.
**What happens when I get near my usage limit?** You can monitor your usage here: https://moz.com/products/mozscape/usage
**What happens when I just hit the limit?** Once you pull 25k rows for the month, your account will be suspended until the following month.
**What happens when I go past the limit?** Unless you reach out and ask for additional rows, your account will be suspended until the following month, when you will get another 25k rows to use. Let me know if you have any other questions about the API.
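If you'd like to keep an eye on row consumption programmatically rather than only via the usage page, here is a minimal Python sketch. It assumes the Moz Links API `url_metrics` endpoint with HTTP Basic auth (access ID and secret key), and that one returned target counts as roughly one row against the quota; the endpoint URL, field names, and that counting rule should be verified against the current API docs, and the credentials are placeholders.

```python
import requests

# Placeholder credentials -- replace with your own Moz API access ID and secret key.
ACCESS_ID = "mozscape-xxxxxxxxxx"
SECRET_KEY = "your-secret-key"

# Free-tier budget described above: 25,000 rows per month.
MONTHLY_ROW_BUDGET = 25_000

# Endpoint shape assumed from the Moz Links API docs; verify before relying on it.
URL_METRICS_ENDPOINT = "https://lsapi.seomoz.com/v2/url_metrics"

rows_used_this_month = 0  # persist this counter (file, DB) in real use


def fetch_metrics(targets):
    """Fetch URL metrics for a batch of targets, refusing to exceed the monthly budget."""
    global rows_used_this_month

    # Assumption: each target returned counts as one row against the quota.
    if rows_used_this_month + len(targets) > MONTHLY_ROW_BUDGET:
        raise RuntimeError("This request would exceed the 25k-row monthly budget; stopping.")

    response = requests.post(
        URL_METRICS_ENDPOINT,
        auth=(ACCESS_ID, SECRET_KEY),
        json={"targets": targets},
        timeout=30,
    )
    response.raise_for_status()

    results = response.json().get("results", [])
    rows_used_this_month += len(results)
    return results


if __name__ == "__main__":
    for result in fetch_metrics(["moz.com", "example.com"]):
        print(result)
    print(f"Rows used so far: {rows_used_this_month} / {MONTHLY_ROW_BUDGET}")
```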
Ian W.
Related Questions
-
Unsolved: Does Anyone use DashThis API with MOZ?
Hi Team,
Looking to share MOZ API credentials with DashThis. They provide an all-in-one reporting solution for all aspects of digital marketing.
Does anyone use DashThis?
API | SkyrailCairns
API v1 still gets new data?
Hello! Do the v1 API endpoints provide fresh data, or do I need to use the v2 endpoints for fresh data? According to the v1 API docs "This guide outlines the endpoints for now archived Mozscape API endpoints." Does this mean that the v1 API only serves archived data? Thanks!
API | peterkovacs
Moz rank tracker API
Hi, we're trying to make a PHP script to get keyword rank values for a certain country and URL, like we do in Rank Tracker under the Moz Pro tools. Is there any reference I can use to build one myself?
API | Moreleads
The difference between API value and screen value
When I actually check the two parameters (PA and DA values) on screen, they often differ from those I receive from your API. Why does this happen?
API | orange002
September's Mozscape Update Broke; We're Building a New Index
Hey gang, I hate to write to you all again with more bad news, but such is life. Our big data team produced an index this week but, upon analysis, found that our crawlers had encountered a massive number of non-200 URLs, which meant this index was not only smaller, but also weirdly biased. PA and DA scores were way off, coverage of the right URLs went haywire, and the metrics we use to gauge quality told us this index simply was not good enough to launch. Thus, we're in the process of rebuilding an index as fast as possible, but this takes, at minimum, 19-20 days, and may take as long as 30 days.

This sucks. There's no excuse. We need to do better and we owe all of you and all of the folks who use Mozscape better, more reliable updates. I'm embarrassed and so is the team. We all want to deliver the best product, but continue to find problems we didn't account for, and have to go back and build systems in our software to look for them.

In the spirit of transparency (not as an excuse), the problem appears to be a large number of new subdomains that found their way into our crawlers and exposed us to issues fetching robots.txt files that timed out and stalled our crawlers. In addition, some new portions of the link graph we crawled exposed us to websites/pages that we need to find ways to exclude, as these abuse our metrics for prioritizing crawls (aka PageRank, much like Google, but they're obviously much more sophisticated and experienced with this) and bias us to junky stuff which keeps us from getting to the good stuff we need. We have dozens of ideas to fix this, and we've managed to fix problems like this in the past (prior issues like .cn domains overwhelming our index, link wheels and webspam holes, etc. plagued us and have been addressed, but every couple of indices it seems we face a new challenge like this).

Our biggest issue is one of monitoring and processing times. We don't see what's in a web index until it's finished processing, which means we don't know if we're building a good index until it's done. It's a lot of work to re-build the processing system so there can be visibility at checkpoints, but that appears to be necessary right now. Unfortunately, it takes time away from building the new, realtime version of our index (which is what we really want to finish and launch!). Such is the frustration of trying to tweak an old system while simultaneously working on a new, better one. Tradeoffs have to be made.

For now, we're prioritizing fixing the old Mozscape system, getting a new index out as soon as possible, and then working to improve visibility and our crawl rules. I'm happy to answer any and all questions, and you have my deep, regretful apologies for once again letting you down. We will continue to do everything in our power to improve and fix these ongoing problems.
API | randfish
Have Questions about the Jan. 27th Mozscape Index Update? Get Answers Here!
Howdy y'all. I wanted to give a brief update (not quite worthy of a blog post, but more than would fit in a tweet) about the latest Mozscape index update. On January 27th, we released our largest web index ever, with 285 Billion unique URLs, and 1.25 Trillion links. Our previous index was also a record at 217 Billion pages, but this one is another 30% bigger. That's all good news - it means more links that you're seeking are likely to be in this index, and link counts, on average, will go up. There are two oddities about this index, however, that I should share:

The first is that we broke one particular view of data - 301'ing links sorted by Page Authority doesn't work in this index, so we've defaulted to sorting 301s by Domain Authority. That should be fixed in the next index, and from our analytics, doesn't appear to be a hugely popular view, so it shouldn't affect many folks (you can always export to CSV and re-sort by PA in Excel if you need, too - note that if you have more than 10K links, OSE will only export the first 10K, so if you need more data, check out the API).

The second is that we crawled a massively more diverse set of root domains than ever before. Whereas our previous index topped out at 192 million root domains, this latest one has 362 million (almost 1.9X as many unique, new domains we haven't crawled before). This means that DA and PA scores may fluctuate more than usual, as link diversity is a big part of those calculations and we've crawled a much larger swath of the deep, dark corners of the web (and non-US/non-.com domains, too). It also means that, for many of the big, more important sites on the web, we are crawling a little less deeply than we have in the past (the index grew by ~31% while the root domains grew by ~88%). Often, those deep pages on large sites do more internal than external linking, so this might not have a big impact, but it could depend on your field/niche and where your links come from.

As always, my best suggestion is to make sure to compare your link data against your competition - that's a great way to see how relative changes are occurring and whether, generally speaking, you're losing or gaining ground in your field. If you have specific questions, feel free to leave them and I'll do my best to answer in a timely fashion. Thanks much!

p.s. You can always find information about our index updates here.
API | randfish
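The post above notes that OSE exports only the first 10K links and points at the API for anything larger. Below is a hedged Python sketch of paging through link data; it assumes the newer Moz Links API `links` endpoint with `next_token` pagination (which postdates the original post), so the endpoint URL, parameter names, and response fields should be checked against the current documentation, and the credentials are placeholders.

```python
import requests

ACCESS_ID = "mozscape-xxxxxxxxxx"   # placeholder credentials
SECRET_KEY = "your-secret-key"

# Endpoint and parameter names assumed from the Moz Links API; verify against the docs.
LINKS_ENDPOINT = "https://lsapi.seomoz.com/v2/links"


def iter_links(target, page_size=50):
    """Yield link records for a target, following next_token pagination past the 10K export cap."""
    next_token = None
    while True:
        payload = {"target": target, "target_scope": "page", "limit": page_size}
        if next_token:
            payload["next_token"] = next_token

        response = requests.post(
            LINKS_ENDPOINT,
            auth=(ACCESS_ID, SECRET_KEY),
            json=payload,
            timeout=30,
        )
        response.raise_for_status()
        data = response.json()

        for link in data.get("results", []):
            yield link

        next_token = data.get("next_token")
        if not next_token:
            break


if __name__ == "__main__":
    for i, link in enumerate(iter_links("moz.com/blog")):
        print(link)
        if i >= 99:  # stop after 100 records for the demo; remove to fetch everything
            break
```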
How to pull metrics from API for a top 100 list
I'm working on a top 100 list for my industry and need a way to pull DA for the list of blogs and then sort them in descending order, at least until I can pull other metrics (Alexa and follower count) as well. Right now I am doing the above manually and was trying to hunt for a way for the table to update on a set date. Any help/guidance is appreciated.
API | ArfanB
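For the DA-pulling and sorting step described in the question above, here is a minimal Python sketch. It batches the blog list through the Moz Links API `url_metrics` endpoint and sorts by Domain Authority descending; the endpoint URL, the `domain_authority` and `page` field names, and the 50-targets-per-request batch size are assumptions to confirm against the current docs, and the credentials and blog list are placeholders. Scheduling the script (for example via cron) would cover the "update on a set date" part.

```python
import requests

ACCESS_ID = "mozscape-xxxxxxxxxx"   # placeholder credentials
SECRET_KEY = "your-secret-key"
URL_METRICS_ENDPOINT = "https://lsapi.seomoz.com/v2/url_metrics"  # assumed endpoint

BLOGS = ["blog-one.example", "blog-two.example", "blog-three.example"]  # your top-100 candidates


def domain_authority_table(domains, batch_size=50):
    """Return (domain, DA) pairs sorted by Domain Authority, highest first."""
    rows = []
    for start in range(0, len(domains), batch_size):
        batch = domains[start:start + batch_size]
        response = requests.post(
            URL_METRICS_ENDPOINT,
            auth=(ACCESS_ID, SECRET_KEY),
            json={"targets": batch},
            timeout=30,
        )
        response.raise_for_status()
        for item in response.json().get("results", []):
            # Field names assumed; adjust to whatever the API actually returns.
            rows.append((item.get("page"), item.get("domain_authority", 0)))
    return sorted(rows, key=lambda row: row[1], reverse=True)


if __name__ == "__main__":
    for rank, (domain, da) in enumerate(domain_authority_table(BLOGS), start=1):
        print(f"{rank:3d}. {domain}  DA={da}")
```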