Mozscape API Updates (Non-updates!) - becoming a joke!
-
This is the third month in succession that the Mozscape index has been delayed. My clients and I are losing patience with this, as I'm sure many others are.
Just what do you suppose we tell clients waiting for that data? We have incomplete and sometimes skewed metrics to report on, and delays that keep getting extended, with nothing but the usual 'we are working on it' and 'bear with us'.
It's becoming obvious you fudged the index update back in January (see the discussion here, with some kind of explanation finally from Rand: https://moz.com/community/q/is-everybody-seeing-da-pa-drops-after-last-moz-api-update), and it seems you have been fumbling around ever since trying to fix it, with data all over the place, shifting DA scores, and missing links in campaign data.
Your developers should be working around the clock to fix this, because this is a big part of what you're selling in your service, and as SEOs and marketers we rely on that data for client retention and satisfaction. Will you refund us all if we lose clients over this? I don't think so!
With reports already sent out at the beginning of the month with incomplete data, I told clients the index would refresh on April 10th, as stated on the API updates page, only to see it fudged again on the day of release, with the index rolled back to the previous one. So once again I have to tell clients there will be more delays, with no certainty that the index will even refresh when you say it will. It's becoming a joke, really!
-
Hey Matt - I can get into some of the nitty gritty details on this.
Basically - we've been having trouble of all kinds with Mozscape, and while our team has indeed been working around the clock, the reality is that it's an old, clunky, hard-to-understand system that needs to be replaced entirely. That work is also going on, but as you might imagine, has a separate team on it, which means the Mozscape team's bandwidth is split.
Mozscape has crawling trouble - we've had issues with our own crawler design, specifically with spam that fooled our crawlers (it's designed to fool Google, obviously, but has caught us, too) and biased our index. We also had an issue where code that helped us recrawl important pages was commented out, and other issues (along with a couple of longtime engineering departures) kept that invisible to us for a good few months (even with it fixed, it will take an index or two to get back to normal). We've had other issues with hardware and bandwidth restrictions, with team changes, with unintentionally excluding important sites and important pages on sites due to erroneous changes on our end, and with robots.txt interpretation mistakes. You name it. It's been pretty frustrating because it's never a single issue coming up again and again, but rather new issues each time.
The team currently on the Mozscape project is relatively new -- we had almost complete turnover on that team in the last year (a combination of voluntary and not), so there's a lot of ramp-up: trying to understand what things do, fixing old problems, and so on. I'm sure as an engineer you're familiar with those types of challenges, especially when the documentation isn't pristine.
IMO - those are crappy excuses. We should be better. We will be better. I don't provide them to pardon our shitty quality the last few months, but rather because you said you wanted detail, and I do love transparency.
I think we're going to have a tough slog until the new index system comes out (likely this Fall). I'm keeping my fingers crossed that we can repair each new problem and that few others arise, but the past 6 months have made me wary of overpromising and under-delivering.
BTW - it is true that the ML model means there's lots of DA flux as the goal is to be as accurate as possible with Google's changes, so if we see a site with certain types of inputs matching patterns of sites that don't rank as well, that DA will drop. Given that Google's rankings fluctuate all the time, that our crawlers fluctuate a lot (more than they should, as noted above), and that the link graph changes constantly, a lot of flux in DA is to be expected. That said, the new model will have DA refreshed daily, rather than monthly, and will also have history, as well as a way to dig in and see what inputs are big in DA and how those have changed. I think all of that will help make these shifts vastly more transparent, even if they continue to be high (which they should so long as Google's own flux is high).
One thing I am working on with the team - a different kind of score, called something like "domain visibility" or "rankings visibility" that tracks how visible a site's pages are in a large set of Google rankings. I think that score might be more what clients are seeking in terms of their overall performance in Google, vs. their performance in the link graph and how their links might be counted/correlated with higher/lower rankings.
-
Hi Lisa, we've also experienced many of the same frustrations Greg mentions, but I mainly wanted to respond to your comments about not being able to compare Domain Authority over time. Given that this metric is positioned as measuring a website's overall ability to rank, it shouldn't be unexpected that people want to see how their score evolves over time. Even in your own software, the change in Domain Authority since the last update appears as one of the most prominent items on the Dashboard, and you also show a graph charting how it changes over time. My question is: since you clearly understand that customers want to compare how Domain Authority evolves over time on a consistent scale, why not at least attempt to normalize it? I am also a machine learning engineer, so the explanation "this is based upon machine learning so it will fluctuate unpredictably" makes no sense to me. You could normalize your inputs based upon certain characteristics of the population, or you could use a representative basket of websites to normalize the outputs. From what I've seen, even just normalizing based on the size of your index hugely improves the consistency. It wouldn't need to be perfect to be a huge improvement over the current situation.
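A minimal sketch of the basket idea described above (the site names, scores, and target scale are invented for illustration; this is not Moz's actual pipeline): rescale each update's raw scores so that a fixed basket of reference sites keeps a constant mean and spread across updates.

```python
# Hypothetical output-normalization sketch; sites and scores are invented.
def normalize_scores(raw_scores, basket, target_mean=50.0, target_std=15.0):
    """Rescale one index update's raw scores so that a fixed basket
    of reference sites keeps a constant mean and spread across updates."""
    basket_vals = [raw_scores[site] for site in basket]
    mean = sum(basket_vals) / len(basket_vals)
    var = sum((v - mean) ** 2 for v in basket_vals) / len(basket_vals)
    std = var ** 0.5
    if std == 0:
        std = 1.0  # degenerate basket; avoid dividing by zero
    return {site: target_mean + (score - mean) / std * target_std
            for site, score in raw_scores.items()}
```

Run against each monthly index, the basket pins the scale, so a drop in a normalized score reflects movement relative to the reference sites rather than a change in index size.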
-
Thanks for your reply, Lisa.
Please understand my point is not to get a discount or refund, or to cancel my subscription. It's to air my grievances (in a place where they can be understood in context, more than a tweet can) and to make you aware of how this looks for us, the marketers caught in the middle.
I would far prefer you and the teams there to be able to fix the issues you're having and be on time with the updates, than migrate away from Moz - that's the last thing I want.
I'm aware the other tools and services are functioning fine, but with DA and link data missing there's a huge gap in our reporting to clients, and the frequent delays don't do anything for confidence.
Thank you for the detailed reply - which, shared more widely (as a product support blog post or an email update to customers), could have gone a long way toward calming our nerves, rather than relying solely on the Mozscape API update page saying 'we are having some problems with launching the new index'. That kind of transparency is, after all, part of the TAGFEE ethos that has made Moz a success.
I sincerely hope you can iron out the problems there, so we can all be confident in the data we are reporting on (and in when index refreshes and everything else are scheduled to happen).
Greg
-
Hi Greg,
My name is Lisa and I'm a member of the support team here at Moz.
I want to thank you for getting in touch here and on Twitter to tell us about your concerns and the impact that these failures have on you as a customer and on your clients.
You're right - the last few updates have sucked. They've not been up to the standard that we hope to provide as a company and they haven't been what we want for you or your clients.
And you're right again when you say that our apologies and "we're working on it" messages haven't been enough to give you confidence that we really are doing the best we can to be better.
I apologise for the issues you've had with your reports and their incomplete data. I wish I could give you some assurances, but no decisions have been made yet and I don't want to make you any false promises.
Behind the scenes, the engineering teams have been doing a lot of work to build a better and more reliable crawler, shore up the infrastructure used to store and return the index data and proactively work to spot problems and prevent them from reaching our customers.
Some of this work has caused problems of its own - changing the infrastructure, for instance, made the index fail to upload correctly several times at the end of February - and some of it has not yet been completed.
You mentioned that we 'fudged the index update back in January' and to some extent that is true. We collected a lot of data from sites that were junk and had no value. Since then, we've been reviewing which sites are included in the index and working to strike a balance so that valuable sites are crawled but spam sites and subdomains are blocked. A lot of this work must be done manually.
Another problem we have, with Domain Authority in particular, is that we know there's a mismatch between how it works and how our customers apply it.
Although it is a score between 0 and 100, Domain Authority is a relative metric. It depends on the ranking model data we have (about which types of sites are likely to rank) and the data we've collected in the index, not just for one site but for all of them. A change to the link data we've collected for your site, or for other sites, or a change to the ranking model can dramatically affect the Domain Authority score of any site. This data should not be considered in a vacuum but used comparatively.
Domain Authority scores have always been and will always be expected to fluctuate. The best way to use a site's Domain Authority is to compare it to competitor sites and measure the distance between them. Using it as a standard score out of 100 is likely to cause anger and frustration when it drops, even though drops and rises are both part of the nature of Domain Authority.
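As an illustration of that comparative use (the site names and scores below are invented, not real Moz data), tracking the gap to a competitor average can stay stable even while absolute DA moves with index-wide flux:

```python
# Hypothetical illustration of comparative DA use; all numbers invented.
updates = [
    {"client.com": 42.0, "rival-a.com": 48.0, "rival-b.com": 51.0},
    {"client.com": 38.0, "rival-a.com": 44.0, "rival-b.com": 46.0},
]
for scores in updates:
    rivals = [v for site, v in scores.items() if site != "client.com"]
    gap = scores["client.com"] - sum(rivals) / len(rivals)
    print(f"gap to competitor average: {gap:+.1f}")  # -7.5, then -7.0
```

Here the client's absolute DA drops four points between updates, but the gap to the competitor average barely moves - the kind of signal that survives index-wide fluctuation.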
Everyone here is invested in making the index better and we all want to make you and your clients happy. We would love to provide round-the-clock coverage to solve these problems, but that isn't possible for us. We have a small engineering team based in Seattle and we use their time as efficiently as possible to allow them to do their best work.
We do feel that our tools have value beyond the link index and Domain Authority data - that's why we offer this for free using MozBar, Open Site Explorer and our API - but I would understand if you feel that the tools are not meeting your needs while there are problems and delays with this data. I'd be happy to help you cancel your subscription and offer a refund on your last payment if this is the case - just reach out to me via help@moz.com.
Related Questions
-
Unsolved Mozscape API subscription
We have questions regarding our subscription and the plan we are on. We are more interested in the Mozscape API than in the features we currently have access to. Will you let us know how we can change? Is there someone we can chat with? Thanks,
API | | PatientPop
Naveen
naveen.sarabu@patientpop.com
MOZ API - Search Visibility
Hello there, We are looking to see if we can recreate the Search Visibility report through the API. Which API parameters and metrics should we use to recreate the report? Also, we want to distinguish between desktop and mobile traffic - is this possible with the API? Which type of account do we need for these functionalities: the free account or a paid account? We are currently using a free account. Also, can we access historical data with the API? For instance, we want to determine Domain Authority from October 2016 onward. Is this possible? Thanks in advance! Kind regards, Bart Minten
API | | thomas.deruiter
"403 Forbidden" is returned when calling the API
Hello (nice to meet you).
API | | yamayamax
I have a question about the URL Metrics API - please forgive me, my English is not very good. I'm quite stuck at the moment.
When I call the API from a PHP program like the one below, the result is "403 Forbidden".
Because the data is displayed when I visit the request URL directly, I suspect the problem is in the PHP.
Any advice about the cause would be greatly appreciated. Thank you.
code
<?php
$accessID = "<removed for privacy>";
$secretKey = "<removed for privacy>";
$expires = time() + 300;
$SignInStr = $accessID . "\n" . $expires;
$binarySignature = hash_hmac('sha1', $SignInStr, $secretKey, true);
$SafeSignature = urlencode(base64_encode($binarySignature));
$objURL = "http://www.google.com";
// Bug fixed: the original assigned this bitmask to $flags, leaving
// $cols undefined below and sending an empty Cols parameter.
$cols = "103079217188";
$reqUrl = "http://lsapi.seomoz.com/linkscape/url-metrics/".urlencode($objURL)."?Cols=".$cols."&AccessID=".$accessID."&Expires=".$expires."&Signature=".$SafeSignature;
$opts = array(CURLOPT_RETURNTRANSFER => true);
$curlhandle = curl_init($reqUrl);
curl_setopt_array($curlhandle, $opts);
$content = curl_exec($curlhandle);
curl_close($curlhandle);
$resObj = json_decode($content); // decode the JSON object and fetch results
echo $reqUrl . "\n";
echo "Domain Authority : " . $resObj->{'pda'} . "\n";
echo "Page Authority : " . $resObj->{'upa'} . "\n";
?>
------------------------------------------------------------------------------------
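For anyone debugging similar calls, the same signing scheme (an Expires timestamp plus an HMAC-SHA1 of "AccessID\nExpires", base64-encoded and URL-escaped) can be sketched in Python. The endpoint and Cols bitmask are taken from the PHP snippet above; the credential values are placeholders:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def signed_url(access_id, secret_key, target_url,
               cols="103079217188", ttl=300):
    """Build a signed Mozscape url-metrics request URL using the
    AccessID-newline-Expires HMAC-SHA1 scheme from the PHP snippet."""
    expires = int(time.time()) + ttl
    to_sign = f"{access_id}\n{expires}".encode()
    sig = hmac.new(secret_key.encode(), to_sign, hashlib.sha1).digest()
    safe_sig = quote(base64.b64encode(sig))  # URL-escape the base64 signature
    return ("http://lsapi.seomoz.com/linkscape/url-metrics/"
            + quote(target_url, safe="")  # escape the whole target URL
            + f"?Cols={cols}&AccessID={access_id}"
              f"&Expires={expires}&Signature={safe_sig}")
```

Comparing the URL this produces against the one the PHP builds is a quick way to spot a malformed parameter (such as an empty Cols) before blaming the credentials.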
New Entry Level Mozscape API Plan
With the changes coming to our free Mozscape access, and with many of you asking for a lower-priced API tier, we will now be offering an Entry Level plan. This plan will be priced at $250 per month for 120,000 rows of data, with overage at $20 per 10,000 rows. Here are some more details:
API | | IanWatson
Entry Level Mozscape API Access
Price: $250 per month
Overages: $20 per additional 10,000 rows
Included calls: URL Metrics, Links, Top Pages, Anchor Text
Rows per month: 120,000
Rate limit: 200 requests per second
The new Entry Level plan is not yet up on our Pricing Page, but if you are interested, reach out to me directly and I can help you get set up. Ian Watson - IanW@Moz.com
The difference between API values and screen values
When I check the two metrics (PA and DA) in the app, the values often differ from those I receive from your API. Why does this happen?
API | | orange002
Pulling large amounts of data from the Moz API
Hi, I'm looking to pull large amounts of data from the Moz and SEMrush APIs. I have been using the SeoTools add-in for Excel to extract data, but Excel is slow, sometimes crashes, and is not very reliable. Can anyone recommend other tools I can use to pull huge amounts of data? Any suggestions would be highly appreciated! Cheers, RM
API | | MBASydney
Does anyone know if there is a tool out there built with the Moz API where you can feed in a large list of URLs and get back Domain Authority?
Does anyone know if there is a tool built with the Moz API where you can feed in a large list of URLs and get back Domain Authority? Has anyone used the API for something like this before? Thanks! Max
API | | Porch