SEOmoz API not working with Scrapebox
-
I want to import SEOmoz data for a list of URLs I have, using Scrapebox.
I added my credentials as the API requires, but I'm getting error 401 as the status for all my links.
Any idea why, and what I should be doing?
-
Thanks!
Figured out the problem was on my end
-
Hey Sara,
Thanks for the question. Unfortunately, this is something you'd need to take up with Scrapebox. Your API key is set up just fine with the Pro rate limit of 1 request every 5 seconds, so these authentication issues are going to be due to improperly formed requests by the software.
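For anyone debugging the same 401s outside Scrapebox: a correctly formed request signs the AccessID plus an expiration timestamp with your secret key (HMAC-SHA1, base64-encoded, then URL-encoded), per the Mozscape authentication scheme. A minimal sketch, with placeholder credentials:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote_plus

def moz_signature(access_id: str, secret_key: str, expires: int) -> str:
    # The string to sign is the AccessID and the expiration timestamp,
    # separated by a newline.
    string_to_sign = f"{access_id}\n{expires}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    # The binary digest is base64-encoded, then percent-encoded so it is
    # safe to place in a query string.
    return quote_plus(base64.b64encode(digest).decode())

expires = int(time.time()) + 300  # signature valid for 5 minutes
sig = moz_signature("member-xxxxxx", "your-secret-key", expires)
```

A request with a signature built any other way (or with an expired timestamp) gets a 401 back.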
I also see that you're getting heavily throttled, so you may want to check whether you have any control over how fast it makes requests.
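If the tool does expose a delay setting, the Pro limit above works out to one request every five seconds. Here is a client-side throttle sketch, assuming you control the request loop yourself (`fetch` below is a hypothetical request function, not a real API):

```python
import time

class Throttle:
    """Client-side limiter: allow at most one request per `interval` seconds."""

    def __init__(self, interval: float):
        self.interval = interval
        self.last = 0.0

    def wait(self):
        # Sleep just long enough that at least `interval` seconds separate
        # consecutive calls.
        delay = self.interval - (time.monotonic() - self.last)
        if delay > 0:
            time.sleep(delay)
        self.last = time.monotonic()

throttle = Throttle(5.0)  # Pro tier: 1 request every 5 seconds
# for url in urls:
#     throttle.wait()
#     response = fetch(url)  # hypothetical request function
```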
I hope that helps. It looks like Scrapebox just has a form to fill out for their support here: http://www.scrapebox.com/contact-us
Cheers,
Joel.
Related Questions
-
Domain_to_domain api call syntax
I am trying to construct an API call that lists the top-authority link from each domain. This page_to_page call works: http://lsapi.seomoz.com/linkscape/links/www.jbwebanalytics.com?SourceCols=4&TargetCols=4&Scope=page_to_page&Limit=400&AccessID=membexxx&Expires=xxx&Signature=xxx This domain_to_domain call does not: http://lsapi.seomoz.com/linkscape/links/www.jbwebanalytics.com?&Scope=domain_to_domain&Limit=400&AccessID=member-xxx1&Expires=xxx&Signature=xxx I'm plumb out of ideas. Anyone know why? The ideal output is the single highest-authority link from each linking domain. Thanks! Brian
Moz Pro | VISISEEKINC
-
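One visible difference between the two URLs in the question is that the failing one drops the SourceCols/TargetCols fields and starts its query string with a stray `?&`. A hedged sketch of building the domain_to_domain call with the same column flags that worked for page_to_page (all credential values are placeholders):

```python
from urllib.parse import urlencode

base = "http://lsapi.seomoz.com/linkscape/links/www.jbwebanalytics.com"
params = {
    "SourceCols": 4,                 # same bit flags that worked for page_to_page
    "TargetCols": 4,
    "Scope": "domain_to_domain",
    "Sort": "domain_authority",      # highest-authority links first
    "Limit": 400,
    "AccessID": "member-xxxxxx",     # placeholder
    "Expires": 1234567890,           # placeholder
    "Signature": "xxx",              # placeholder
}
# urlencode builds a clean query string with no stray "?&" prefix.
url = f"{base}?{urlencode(params)}"
```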
Does SEOmoz recognize duplicated URLs blocked by robots.txt?
Hi there, just a newbie question... I found some duplicated URLs in the SEOmoz crawl diagnostics reports that should not be there; they are meant to be blocked by the robots.txt file. Here is an example URL (Joomla + VirtueMart structure): http://www.domain.com/component/users/?view=registration And here is the blocking rule in the robots.txt file: User-agent: * Disallow: /components/ My questions: Will this kind of duplicated-URL error be removed from the error list automatically in the future? Should I keep track of which errors should not really be in the error list? What is the best way to handle this kind of error? Thanks and best regards, Franky
Moz Pro | Viada
-
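Worth noting: the example URL uses the singular /component/ path while the rule disallows the plural /components/, so the rule may not match that URL at all. You can test robots.txt rules locally before relying on them, for example with Python's standard-library robotparser:

```python
from urllib.robotparser import RobotFileParser

# The rules from the question, parsed from a list of lines.
rules = [
    "User-agent: *",
    "Disallow: /components/",
]
rp = RobotFileParser()
rp.parse(rules)

# The Joomla URL uses the singular /component/ path, so the plural
# /components/ rule does not match it and crawlers may still fetch it.
blocked = not rp.can_fetch(
    "*", "http://www.domain.com/component/users/?view=registration"
)
```

Here `blocked` comes out False, which would explain why those URLs keep appearing in crawl reports.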
Can 2 people from our company use the SEOMoz toolbar?
Hello, I've got SEOmoz Pro. Can two of us use the search toolbar at once, or do we need to pay twice? Thanks! Bob
Moz Pro | BobGW
-
About the Links API
I'm Japanese, so I'm sorry for my poor English. I have a question about the API: the 'links' request below is returned as unauthorized. http://lsapi.seomoz.com/linkscape/links/domain/blog?Scope=page_to_page&Sort=domain_authority&AccessID=xxx&Expires=xxx&Signature=xxx Are the request parameters OK?
Moz Pro | flaminGoGo
-
Getting SEOMoz reports to ignore certain parameters
I want the SEOmoz reports to ignore duplicate content caused by link-specific parameters being added to URLs (the same page is reachable from different pages, with marker parameters identifying the source page appended to the URL). I can get Google and Bing Webmaster Tools to ignore parameters I specify; I need the SEOmoz tools to do the same!
Moz Pro | SEO-Enlighten
-
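While waiting on crawler-side support for this, one workaround is to normalize the URLs yourself before comparing crawl exports. A sketch that strips a chosen set of marker parameters (the parameter names are hypothetical examples):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def strip_params(url: str, ignored: set) -> str:
    """Drop marker/tracking parameters so URL variants normalize to one page."""
    parts = urlparse(url)
    kept = [(k, v)
            for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in ignored]
    return urlunparse(parts._replace(query=urlencode(kept)))

# Two "different" URLs collapse to the same page once the marker is removed.
normalized = strip_params("http://example.com/page?utm_source=nav&id=5",
                          {"utm_source"})
```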
Hello, I am new to SEO. Does SEOmoz have a to-do list, or can you send me one?
I just need to know what steps I should take to improve my site. My website URL is: http://nuscopemed.com Thanks
Moz Pro | NinaGraham
-
SEOmoz Crawl CSV in Excel: already split by semicolon. Is this Excel's fault or SEOmoz's?
If, for example, a page title contains a ë, the .csv created by the SEOmoz Crawl Test is already split into columns at that point, even though I haven't used Excel's Text to Columns yet. When I try to do the latter, Excel warns me that I'm overwriting non-empty cells, which of course is something I would rather not do, since that would make me lose valuable data. My question is: is this caused by opening the .csv in Excel, or earlier in the process, when the .csv is created?
Moz Pro | DeptAgency
-
Canonical tags and SEOmoz crawls
Hi there. Recently, we've made some changes to http://www.gear-zone.co.uk/ to implement canonical tags on some dynamically generated pages to stop duplicate content issues; previously, these were blocked with robots.txt. In Webmaster Tools, everything looks great: pages crawled has shot up, and overall traffic and sales have seen a positive increase. However, the SEOmoz crawl report is now showing a huge increase in duplicate content issues. What I'd like to know is whether SEOmoz registers a canonical tag as preventing a piece of duplicate content, or just adds it to the notices report. That is, if I have 10 pages of duplicate content, all with correct canonical tags, will I still see 10 errors in the crawl but also 10 notices showing a canonical has been found? Or should it be 0 duplicate content errors, but 10 notices of canonicals? I know it's a small point, but it could potentially make a big difference. Thanks!
Moz Pro | neooptic
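Whichever way a crawl report counts them, you can sanity-check the numbers yourself by grouping crawled URLs by their canonical target: variants that share a canonical are one logical page, not duplicates of each other. A small illustration with made-up gear-zone URLs:

```python
from collections import defaultdict

# Hypothetical crawl output: (page URL, canonical URL found in its <head>)
crawl = [
    ("http://www.gear-zone.co.uk/widgets?sort=price", "http://www.gear-zone.co.uk/widgets"),
    ("http://www.gear-zone.co.uk/widgets?sort=name",  "http://www.gear-zone.co.uk/widgets"),
    ("http://www.gear-zone.co.uk/widgets",            "http://www.gear-zone.co.uk/widgets"),
]

groups = defaultdict(list)
for page, canonical in crawl:
    groups[canonical].append(page)

# Three crawled URLs, but only one logical page once canonicals are honored.
unique_pages = len(groups)
```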