Do Moz's crawlers use _escaped_fragment_ to inspect pages on a single-page application?
-
I just got started, but got a 902 error code on some pages, with a message saying there might be an outage on my site.
That's certainly not the case, so I'm wondering if the crawlers actually respect and use the _escaped_fragment_ query parameter.
Thanks, David.
-
David, yes, you can serve that as a solution as well, and the User-Agent string will be rogerbot. Sorry for any confusion this caused on your end. Once you get this up and running, Roger will crawl like there's no tomorrow!
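For anyone wiring this up, here is a minimal sketch of one way to do the User-Agent check, assuming a Python/Flask front end; the snapshots/ layout and the extra crawler tokens are assumptions, not anything Moz prescribes:

```python
# Sketch: serve a pre-rendered snapshot when the User-Agent looks like a
# crawler such as rogerbot. Assumes a Flask app and a snapshots/ directory
# of pre-rendered static HTML (both hypothetical).
from flask import Flask, request, send_file

app = Flask(__name__)

# User-Agent substrings treated as crawlers; rogerbot is Moz's crawler,
# the other entries are assumptions you can adjust.
CRAWLER_TOKENS = ("rogerbot", "googlebot", "bingbot")

def is_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent contains a known crawler token."""
    ua = (user_agent or "").lower()
    return any(token in ua for token in CRAWLER_TOKENS)

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def serve(path):
    if is_crawler(request.headers.get("User-Agent", "")):
        # Hypothetical layout: one pre-rendered HTML file per route.
        return send_file(f"snapshots/{path or 'index'}.html")
    # Regular visitors get the JavaScript single-page application shell.
    return send_file("static/index.html")
```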
-
Hi James,
I can also serve the pre-rendered, static version of the website to Moz's bots. In order to detect the bots, I'll have to scan the User-Agent string.
Is it safe (and sufficient) to just look for "rogerbot" in the User-Agent string?
Thanks, David.
-
Hi CareerDean,
At this time we are not supporting _escaped_fragment_ for crawling websites; our crawler, rogerbot, will simply follow anchor tag links. I was able to take a look at your site, and rogerbot will definitely have some trouble crawling anywhere here due to the lack of a-href links.
What we have typically suggested in the past is using an HTML link at the bottom of the page leading to a basic site map so that Roger can navigate through. This will allow you to keep the same look and feel and just add one link to enable crawlers.
Please let me know if you need anything else, and feel free to reach us at help@moz.com with any further questions. You're also welcome to ask any follow-ups here.
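As an illustration of that suggestion (not an official Moz snippet), a small script along these lines could generate the basic sitemap page that the single-page app then links to from its footer; the routes and file name are assumptions:

```python
# Sketch: generate a plain-HTML sitemap page that crawlers can follow,
# since the single-page app itself exposes no <a href> links.
# The route list and output path are assumptions for illustration.
from html import escape

ROUTES = [
    ("/", "Home"),
    ("/about", "About"),
    ("/products", "Products"),
]

def build_sitemap_html(routes) -> str:
    """Return a minimal HTML page with one plain link per route."""
    items = "\n".join(
        f'    <li><a href="{escape(path)}">{escape(title)}</a></li>'
        for path, title in routes
    )
    return (
        "<!DOCTYPE html>\n<html>\n<body>\n  <ul>\n"
        f"{items}\n"
        "  </ul>\n</body>\n</html>\n"
    )

# Write the page; the SPA's footer then only needs a single link to /sitemap.html.
with open("sitemap.html", "w", encoding="utf-8") as handle:
    handle.write(build_sitemap_html(ROUTES))
```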
-
Related Questions
-
Are Moz-specific ids available in response sets from V2 calls?
I'm assuming that Moz assigns a unique internal id to each page, subdomain, and root_domain. When making v2/links requests, are these Moz-specific ids available? Currently, when gathering backlinks for a client we're generating our own ids, and maintaining uniqueness is costly time-wise.
    "results": [{
        "source": {
            "page": "news.energysage.com/pros-cons-electric-water-heaters/",
            "subdomain": "news.energysage.com",
            "root_domain": "energysage.com",
            ...
        "target": {
            "page": "mainlineplumbing.net.au/",
            "subdomain": "mainlineplumbing.net.au",
            "root_domain": "mainlineplumbing.net.au",
API | | StevePoul0
Sitemaps and Indexed Pages
Hi guys, I created an XML sitemap and submitted it for my client last month. Since then, the developer of the site has also been messing around with a few things. I've noticed on my Moz site crawl that indexed pages have dropped significantly. Before I put my foot in it, I need to figure out whether submitting the sitemap caused this: can a sitemap reduce the number of pages indexed? Thanks, David.
API | | Slumberjac0
-
Frequency of Moz page authority updates?
I have some new pages on my site, and Moz gives them a very low PA score. Are these scores updated monthly or quarterly? I'm not sure how frequently to check back for updated scoring.
API | | AndrewMicek0
-
January’s Mozscape Index Release Date has Been Pushed Back to Jan. 29th
A new year brings new challenges. Unfortunately for all of us, one of those challenges manifested itself as a hardware issue within one of the Mozscape disc drives. Our team's attempts to recover the data from the faulty drive only led to finding corrupted files within the Index. Due to this issue we had to push the January Mozscape Index release date back to the 29th. This is not at all how we anticipated starting 2016; however, hardware failures like this are an occasional reality, and we don't expect them to be a repeated hurdle moving forward. Our Big Data team has the new index processing, and everything is looking great for the January 29th update. We never enjoy delivering bad news to our faithful community and are doing everything in our power to lessen these occurrences. Reach out with any questions or concerns.
API | | IanWatson2
-
Lost many links and keyword ranks since the Moz index update
Hi all, I came back from a week off work today to find my site has gone from 681 external inbound links to 202. With this, my Domain Authority, MozTrust, and MozRank have all also taken a slip. Compounding this, I am seeing a slip in most of my keyword rankings. If I try to use Open Site Explorer to explore my links and see what's going on, I get the message "It looks like we haven't discovered link data for this site or URL." If I check the just-discovered links like it suggests, I get "It looks like there's no Just-Discovered Links data for this URL yet." I know these features worked before the index update, as I used them. Is this all attributable to the Moz index issues that have been noted, or could something have happened to my site? Since I started 2 months ago I have made many changes, including: updating the sitemap, which was 4 years out of date and included 400 broken URLs; removing blank pages and other useless webpages that contained no content (left by the previous administrator); editing a few pages' content from keyword-spammy stuff to nicely written and relevant content; and fixing URL rewrites that created loops and unreachable product pages. All these changes should be for the better, but the latest readings have me a little worried. Thanks.
API | | ATP0
-
Can we get access to Moz's Rank Tracker via the API?
I'd like to be able to pull the results from Rank Tracker into my own application. Can I access it via an API? I don't see it anywhere in the Moz documentation, which is usually a clear answer. If not, how do you suggest automating the inclusion of this data without, for example, being blacklisted?
API | | MB070
-
Pulling large amounts of data from the Moz API
Hi, I'm looking to pull large amounts of data from the Moz and SEMrush APIs. I have been using the SeoTools add-on for Excel to extract data, but Excel is slow, sometimes crashes, and is not very reliable. Can anyone recommend other tools I can use to pull huge amounts of data? Any suggestions would be highly appreciated! Cheers, RM
API | | MBASydney0
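One option worth sketching is to skip the spreadsheet layer entirely and pull from the HTTP API in a small script, streaming results to disk in batches. The endpoint URL, parameter names, and auth header below are placeholders, not the real Moz or SEMrush API signatures; substitute the values from the API docs you use:

```python
# Rough sketch of batched API pulls from a script instead of Excel.
import csv
import time

import requests

API_ENDPOINT = "https://api.example.com/links"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                          # placeholder credential
BATCH_SIZE = 50                                   # rows requested per call

def fetch_all(target_url: str):
    """Yield result rows page by page, pausing between calls to respect rate limits."""
    offset = 0
    while True:
        response = requests.get(
            API_ENDPOINT,
            params={"target": target_url, "limit": BATCH_SIZE, "offset": offset},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        response.raise_for_status()
        rows = response.json().get("results", [])
        if not rows:
            break
        yield from rows
        offset += BATCH_SIZE
        time.sleep(1)  # simple rate limiting between requests

# Stream results straight to CSV so nothing has to fit in a spreadsheet at once.
with open("backlinks.csv", "w", newline="") as handle:
    writer = None
    for row in fetch_all("example.com"):
        if writer is None:
            writer = csv.DictWriter(handle, fieldnames=sorted(row), extrasaction="ignore")
            writer.writeheader()
        writer.writerow(row)
```
-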
On-Page Reports showing old URLs
While taking a look at our site's on-page reports, I noticed some of our keywords with very old URLs that haven't existed for close to a year. How do I make sure Moz's keyword ranking is finding the correct page, and make sure I'm not getting graded on keywords/URLs that don't exist any more or have been 301'd to new URLs? Is there a way to clean these out? My on-page reports say I have 62 reports for only a total of 34 keywords in rankings. As you can see from the image, most of the URLs for "tax folder" have now been 301'd to not include /product or /category, but Moz is still showing them with the old URL structure. BTW our site is minespress.com (attached image: 2KdGcPL.png)
API | | smines0