Do Moz's crawlers use _escaped_fragment_ to inspect pages on a single-page application?
-
I just got started, but I'm getting a 902 error code on some pages, with a message saying there might be an outage on my site.
That's certainly not the case, so I'm wondering if the crawlers actually respect and use the _escaped_fragment_ query parameter.
Thanks, David.
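For context, the mechanism being asked about is the (now-deprecated) AJAX crawling scheme, which maps hash-bang URLs onto an _escaped_fragment_ query parameter. A minimal sketch of that mapping, purely as an illustration (the example.com URL is hypothetical):

```typescript
// Map a hash-bang SPA URL to its _escaped_fragment_ equivalent, per the
// (now-deprecated) AJAX crawling scheme referenced in the question.
function toEscapedFragmentUrl(url: string): string {
  const [base, fragment = ""] = url.split("#!");
  const separator = base.includes("?") ? "&" : "?";
  return `${base}${separator}_escaped_fragment_=${encodeURIComponent(fragment)}`;
}

// "https://example.com/#!/products" -> "https://example.com/?_escaped_fragment_=%2Fproducts"
console.log(toEscapedFragmentUrl("https://example.com/#!/products"));
```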
-
David, yes, you can serve that as a solution as well, and the User-Agent string will be rogerbot. Sorry for any confusion this caused on your end. Once you get this up and running, Roger will crawl like there's no tomorrow!
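To illustrate that approach, here is a minimal sketch of serving a prerendered snapshot to crawlers based on the User-Agent string, assuming an Express-style Node server; the snapshots/ and dist/ paths are hypothetical, not anything Moz prescribes:

```typescript
import express, { Request, Response, NextFunction } from "express";
import path from "path";

const app = express();

// Bots we want to serve the prerendered snapshot to.
// "rogerbot" is Moz's crawler; the other entries are common examples.
const BOT_PATTERN = /rogerbot|googlebot|bingbot/i;

// If the User-Agent looks like a crawler, serve the static, prerendered
// HTML snapshot instead of the JavaScript-driven single-page app.
app.use((req: Request, res: Response, next: NextFunction) => {
  const userAgent = req.get("User-Agent") ?? "";
  if (BOT_PATTERN.test(userAgent)) {
    // Hypothetical location of the prerendered snapshot for this route.
    const snapshot = path.join(__dirname, "snapshots", "index.html");
    return res.sendFile(snapshot);
  }
  next();
});

// Everyone else gets the normal single-page application shell.
app.use(express.static(path.join(__dirname, "dist")));

app.listen(3000, () => console.log("listening on :3000"));
```

As long as the snapshot mirrors what users see, serving it to crawlers is generally treated as prerendering rather than cloaking.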
-
Hi James,
I can also serve the pre-rendered, static version of the website to Moz's bots. In order to detect the bots, I'll have to scan the User-Agent string.
Is it safe and sufficient to look for "rogerbot" in the User-Agent string?
Thanks, David.
-
Hi CareerDean,
At this time we do not support _escaped_fragment_ for crawling websites, and our crawler, rogerbot, will simply follow anchor tag links. I was able to take a look at your site, and rogerbot will definitely have some trouble crawling anywhere here due to the lack of <a href> links.
What we have typically suggested in the past is using an HTML link at the bottom of the page leading to a basic site map so that Roger can navigate through. This will allow you to keep the same look and feel and just add one link to enable crawlers.
Please let me know if you need anything else, and feel free to reach us at help@moz.com with any further questions. Also feel free to ask any follow-ups here as well.
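As a concrete illustration of the sitemap-link suggestion above, here is a minimal build-time sketch that injects one plain, crawlable <a href> link into the SPA shell (it must be in the served HTML, since rogerbot won't execute JavaScript); the dist/index.html and /sitemap.html paths are hypothetical:

```typescript
import { readFileSync, writeFileSync } from "fs";

// Inject one ordinary <a href> link into the SPA shell at build time, so
// crawlers that only follow anchor tags (like rogerbot) can reach a basic
// HTML sitemap without executing any JavaScript.
function injectSitemapLink(shellPath: string): void {
  const sitemapFooter =
    '<footer><a href="/sitemap.html">Site map</a></footer>';
  const html = readFileSync(shellPath, "utf8");
  if (html.includes('href="/sitemap.html"')) return; // already injected
  writeFileSync(shellPath, html.replace("</body>", `${sitemapFooter}</body>`));
}

injectSitemapLink("dist/index.html");
```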
Related Questions
-
What is the metric to check link state and link type for the Moz API?
Kindly do not share the website URL related to URL and link metrics; instead, please mention the correct metric for link state and link type. Thanks!
-
I'm only seeing 3 results for DA/PA, is this a limitation?
I'm checking the DA/PA of the top 10 URLs on Google for a keyword, but I only get DA/PA results for 3 of the domains. I know the other domains in the list would have results. Is this a limitation of the API, or should I be able to get DA/PA for more than 3 domains at a time? Would it help if I checked one at a time instead of a larger set?
-
Is there any API from Foursquare that allows feeding business listings directly into it? Moz is doing the same, so there must be some API.
Hi, I have seen that "Moz Local" has a "Foursquare" option. I'm just curious whether there is any tie-up between Foursquare and Moz, because Moz must be using Foursquare's APIs to feed its business listings into the Foursquare database. When I searched for Foursquare APIs for directly feeding business listings into it, I found nothing, so I have this question. Any help would be appreciated.
-
Bulk Page Authority Tracking
Hi, is there a way in Moz to identify your Page Authority by landing page, possibly by crawling the site and providing this in bulk, so you don't have to go through and check each page? I want to track how my Page Authority for certain pages moves over time. Thank you.
-
September's Mozscape Update Broke; We're Building a New Index
Hey gang, I hate to write to you all again with more bad news, but such is life. Our big data team produced an index this week but, upon analysis, found that our crawlers had encountered a massive number of non-200 URLs, which meant this index was not only smaller, but also weirdly biased. PA and DA scores were way off, coverage of the right URLs went haywire, and the metrics we use to gauge quality told us this index simply was not good enough to launch. Thus, we're in the process of rebuilding an index as fast as possible, but this takes, at minimum, 19-20 days, and may take as long as 30 days.

This sucks. There's no excuse. We need to do better, and we owe all of you and all of the folks who use Mozscape better, more reliable updates. I'm embarrassed and so is the team. We all want to deliver the best product, but we continue to find problems we didn't account for and have to go back and build systems in our software to look for them.

In the spirit of transparency (not as an excuse), the problem appears to be a large number of new subdomains that found their way into our crawlers and exposed us to issues fetching robots.txt files that timed out and stalled our crawlers. In addition, some new portions of the link graph we crawled exposed us to websites/pages that we need to find ways to exclude, as these abuse our metrics for prioritizing crawls (aka PageRank, much like Google, but they're obviously much more sophisticated and experienced with this) and bias us toward junky stuff, which keeps us from getting to the good stuff we need. We have dozens of ideas to fix this, and we've managed to fix problems like this in the past (prior issues like .cn domains overwhelming our index, link wheels, and webspam holes plagued us and have been addressed, but every couple of indices it seems we face a new challenge like this).

Our biggest issue is one of monitoring and processing times. We don't see what's in a web index until it's finished processing, which means we don't know if we're building a good index until it's done. It's a lot of work to rebuild the processing system so there can be visibility at checkpoints, but that appears to be necessary right now. Unfortunately, it takes time away from building the new, realtime version of our index (which is what we really want to finish and launch!). Such is the frustration of trying to tweak an old system while simultaneously working on a new, better one. Tradeoffs have to be made.

For now, we're prioritizing fixing the old Mozscape system, getting a new index out as soon as possible, and then working to improve visibility and our crawl rules. I'm happy to answer any and all questions, and you have my deep, regretful apologies for once again letting you down. We will continue to do everything in our power to improve and fix these ongoing problems.
-
Is Moz Pro down?
This morning, just before a meeting with a client, I cannot access Moz Pro. Here is the page I get:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>analytics.moz.com</BucketName><RequestId>DC68738B494D30A1</RequestId><HostId>r13H1pVq04vKGcKkAD9AUCTRXNdZAhUOULNwv4/TB74e0utcat2mV3PT7dXOtnuG</HostId></Error>
What can I do to access my information?
-
SEOmoz API request problem [401 error_message: Your authentication failed]
Hello team, I have a Moz Pro account. I'm getting the following error for a SEOmoz call using your API: {"status":"401","error_message":"Your authentication failed. Check your authentication details and try again. For more information on signed authentication, see: http://apiwiki.moz.com/signed-authentication"} The link that we are using is:
http://lsapi.seomoz.com/linkscape/url-metrics/ The following bits are requested during the call: 68719476736, 34359738368, 32, 2048, 16384. The error is intermittent; it comes and goes. There were no issues with results 3 days ago, and then I suddenly started getting this error. Could you please investigate it and let me know the cause of the issue and how to correct it? It would be great if you can provide a support email ID for an immediate response. Thanks.
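For anyone hitting the same 401, here is a rough sketch of one way to build a signed url-metrics request, assuming the HMAC-SHA1 signed-authentication scheme described at the apiwiki link in the error message; the access ID and secret key below are placeholders.

```typescript
import { createHmac } from "crypto";

// Placeholders; substitute your own Mozscape API credentials.
const ACCESS_ID = "member-xxxxxxxx";
const SECRET_KEY = "your-secret-key";

// Build a signed url-metrics request URL (base64 HMAC-SHA1 over
// "accessID\nexpires", passed alongside AccessID and Expires as query params).
function signedUrlMetricsRequest(targetUrl: string, cols: number): string {
  const expires = Math.floor(Date.now() / 1000) + 300; // signature valid ~5 minutes
  const signature = createHmac("sha1", SECRET_KEY)
    .update(`${ACCESS_ID}\n${expires}`)
    .digest("base64");

  const params = new URLSearchParams({
    Cols: String(cols),
    AccessID: ACCESS_ID,
    Expires: String(expires),
    Signature: signature,
  });

  return (
    "http://lsapi.seomoz.com/linkscape/url-metrics/" +
    encodeURIComponent(targetUrl) +
    "?" +
    params.toString()
  );
}

// The bit flags from the question above, combined into one Cols value.
const cols = 68719476736 + 34359738368 + 32 + 2048 + 16384;
console.log(signedUrlMetricsRequest("moz.com", cols));
```
-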
MOZ API Developer
Hi Moz fans, can you recommend someone who can develop an app/tool similar to HubSpot's Marketing Grader (http://marketing.grader.com/) using the Moz API? I appreciate all your help, guys. Thanks, Tony