Angular.js + Crawlers
-
I am working with a site that recently deployed Angular.js. From an SEO standpoint it's a little trickier than we expected. We have deployed a couple of updates to render pages for the bots, but we're not seeing changes in the Moz weekly reports.
When it comes to Angular.js, will the Moz bots read and access the site the same way as the other major search engines? I'm trying to figure out whether our deployments are working or whether something is off in the Moz reports.
Thanks.
-
I am using Prerender to cache and serve static pages to crawler user agents, but Moz is not able to crawl my website (http://www.exambazaar.com/), so it has a domain authority of 1/100. I have been in touch with Prerender support to find a fix, and have also added dotbot to the list of crawler agents on top of Prerender's default list, which includes rogerbot. Do you have any suggestions to fix this?
List: https://github.com/prerender/prerender-node/commit/5e9044e3f5c7a3bad536d86d26666c0d868bdfff
Adding dotbot:
prerender.crawlerUserAgents.push('dotbot');
-
Within Prerender you are able to determine which user agents will receive the HTML snapshot, and this is where you can add rogerbot. That allows Moz to crawl the site as if it were Google and receive the HTML snapshot version.
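As a rough illustration, with the prerender-node Express middleware that looks something like the sketch below (the token is a placeholder, and depending on your version of the middleware rogerbot and dotbot may already be in the default crawlerUserAgents list):
var express = require('express');
var prerender = require('prerender-node');

var app = express();

// Add Moz's crawlers to the list of user agents that receive the HTML snapshot.
prerender.crawlerUserAgents.push('rogerbot'); // Moz campaign crawler
prerender.crawlerUserAgents.push('dotbot');   // Moz link-index crawler

// 'YOUR_PRERENDER_TOKEN' is a placeholder for your prerender.io token.
app.use(prerender.set('prerenderToken', 'YOUR_PRERENDER_TOKEN'));

// ...the rest of the app (static files, routes, etc.) goes here.
app.listen(3000);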
Additionally, you can always use the Fetch as Google function within Webmaster Tools to see exactly what is being presented and indexed.
-
With the current direction of web development, this is something that needs to be addressed. Google has already confirmed that it is in fact crawling JavaScript-based sites.
Reference:
http://ng-learn.org/2014/05/SEO-Google-crawl-JavaScript/
https://support.google.com/webmasters/answer/174992?hl=en
The solution in this case is an HTML snapshot. You could roll your own, but there are services like https://prerender.io/ that can do it for you.
This doesn't quite help the case for the Moz bot; maybe the HTML snapshots do work here, but I haven't tested it yet. Either way, JavaScript is becoming more and more dominant as a language for building websites. I hope Moz recognizes this, because this toolset is awesome and I'd love to continue using it.
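If you do roll your own, the basic pattern is to watch for the _escaped_fragment_ parameter that crawlers send under Google's AJAX crawling scheme and serve a pre-generated static file instead of the JavaScript shell. A rough Node/Express sketch, assuming the snapshots have already been generated (for example with PhantomJS) into a hypothetical snapshots directory:
var express = require('express');
var path = require('path');

var app = express();
var SNAPSHOT_DIR = path.join(__dirname, 'snapshots'); // pre-rendered HTML lives here

// Serve a static HTML snapshot when a crawler requests ?_escaped_fragment_=...
app.use(function (req, res, next) {
  if (req.query._escaped_fragment_ === undefined) return next();
  // Hashbang URLs arrive as ?_escaped_fragment_=/some/path;
  // pushState URLs arrive with an empty value, so fall back to the request path.
  var fragment = req.query._escaped_fragment_ || req.path;
  var file = path.join(SNAPSHOT_DIR, fragment, 'index.html');
  res.sendFile(file, function (err) {
    if (err) next(); // no snapshot on disk, fall through to the normal app
  });
});

// Everyone else gets the normal Angular single-page shell.
app.use(express.static(path.join(__dirname, 'public')));

app.listen(3000);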
-
Is there still no update on this from Moz?
A number of sites I work on are using AngularJS with pushState. Is there a way to point the Moz bot to the escaped-fragment static pages?
-
Static rendering is not cloaking; it's a very common practice that Google actually recommends. The issue with AngularJS is that everything is rendered by JavaScript, so if you were to look at the source code, all the pages would look the same. In fact, the Moz bot treats this as every page being duplicate content.
https://developers.google.com/webmasters/ajax-crawling/docs/html-snapshot
It would be nice to see the Moz bot act more like Googlebot.
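For context, the AngularJS pushState setup being discussed looks roughly like this; the module name, routes, and template paths below are made up for illustration:
// A sketch of a pushState ("html5 mode") AngularJS 1.x app. Angular 1.3+ also
// expects a <base href="/"> tag in index.html when html5Mode is enabled, and
// pages should include <meta name="fragment" content="!"> so crawlers that
// follow the AJAX crawling scheme request the ?_escaped_fragment_= snapshot URL.
angular.module('exampleApp', ['ngRoute'])
  .config(['$locationProvider', '$routeProvider', function ($locationProvider, $routeProvider) {
    $locationProvider.html5Mode(true); // clean /path URLs instead of #!/path
    $routeProvider
      .when('/products', { templateUrl: '/partials/products.html' })
      .when('/about', { templateUrl: '/partials/about.html' })
      .otherwise({ redirectTo: '/products' });
  }]);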
-
What do you mean by "We have deployed a couple updates to render pages for the bots"? That sounds like cloaking.
-
Hello, Josh
Currently our crawlers do not process any kind of JavaScript found on pages (including pages created with Angular.js). I don't know if the major search engines have this restriction or not.
For Moz's crawlers, this means that links created through AJAX or other JavaScript will not be picked up. Links appearing in static content, including those within <noscript> tags, should be noticed and indexed.
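As a hypothetical illustration (the routes and file names here are made up), one way to expose links that don't depend on JavaScript is to include plain anchors, for example inside a <noscript> block, in the static HTML shell the server returns:
var express = require('express');
var app = express();

// A static HTML shell: Angular takes over in browsers, but a crawler that does
// not run JavaScript can still see the plain <a> links inside <noscript>.
var shell = [
  '<!doctype html>',
  '<html ng-app="exampleApp">',
  '<head><title>Example</title></head>',
  '<body>',
  '  <div ng-view></div>',
  '  <noscript>',
  '    <a href="/products">Products</a>',
  '    <a href="/about">About us</a>',
  '  </noscript>',
  '  <script src="/js/app.js"></script>',
  '</body>',
  '</html>'
].join('\n');

app.get('*', function (req, res) {
  res.send(shell);
});

app.listen(3000);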
Be aware that even if you've already made changes exposing links in the page's static content, it can take up to a week for the campaign crawl to catch up. Hopefully that answered your questions! Let us know if you have any more.
Related Questions
-
Crawler errors or page load time? Which affects SEO more?
Hello, I have a page with a forum, and at the moment the Moz report says it has 15.1k issues such as URL too long, meta noindex, title too long, etc. But this page also has a really slow load time of 11 seconds. I know I need to fix all those errors (I'm working on it), but what is more important for SEO: the page load time or those types of errors, like duplicate titles? Thank you!
-
MOZ Crawler
Hi, how much time will it take the Moz crawler to crawl my entire site? In 24 hours it crawled only 500 pages; isn't that too slow? My website has almost 50k pages.
-
Crawlers reporting upper-case URL versions although these have been 301'd to lower case!?
Hi, I have an e-commerce client whose dev platform is on a Windows server. Their product pages have been auto-named after the product title, with the first letter of each word in upper case, which has carried over into upper-case instances in the URLs too. I asked them to set up 301 redirects from all URLs with upper-case instances to the lower-case versions, which they say they have done. However, I'm still seeing URLs with upper-case instances showing up in Webmaster Tools and Moz crawl reports, yet when I copy and paste them into a browser they do redirect to, and resolve at, the lower-case version. It's also the upper-case versions that are reported in the Google cache! So how come Webmaster Tools, Moz, etc. are reporting the upper-case versions? Surely if redirected, it should be the lower-case versions. All the best, Dan
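As an aside, on a Windows/IIS stack this kind of redirect would typically be an IIS URL Rewrite rule; purely as an illustration of the logic, an equivalent Node/Express sketch (the route shown is hypothetical) might look like:
var express = require('express');
var app = express();

// 301-redirect any URL containing upper-case characters to its lower-case version.
app.use(function (req, res, next) {
  var lower = req.path.toLowerCase();
  if (req.path !== lower) {
    var query = req.originalUrl.slice(req.path.length); // keep any query string
    return res.redirect(301, lower + query);
  }
  next();
});

// Hypothetical lower-case canonical route.
app.get('/products/blue-widget', function (req, res) {
  res.send('Product page');
});

app.listen(3000);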
-
Why does the SEOmoz crawler not see my snapshot?
I have a web app that uses AngularJS and the content is all dynamic (an SPA). I have generated snapshots for the pages and wrote a rule to redirect (301) to the snapshot when escaped_fragment is found in the URL. E.g. http://plure.com/#!/imoveis/venda/rj/rio-de-janeiro Request: http://plure.com/?escaped_fragment=/imoveis/venda/rj/rio-de-janeiro is redirected to: http://plure.com/snapshots/imoveis/venda/rj/rio-de-janeiro/ The snapshot is a headless page generated by PhantomJS. Even following the guideline (https://developers.google.com/webmasters/ajax-crawling/docs/specification) I still can't get my pages crawled, and in SEOmoz I can only see the first page crawled, with no dynamic content on it. Am I doing something wrong? Is SEOmoz supposed to fetch the snapshot based on the same rules as Googlebot, or does SEOmoz not get snapshots?
-
Reset Crawler
Hello, does anyone know how to reset the crawler? We recently uploaded our new website and deleted the current campaign, but it seems the crawler is caching our old website's data rather than the new site's, so every time we try to create a new campaign with the same details it just pulls everything from the cache. Thanks
-
Drop in number of Pages crawled by Moz crawler
What would cause a sudden drop in the number of pages crawled/accessed by the Moz crawler? The site has about 600 pages of content. We have multiple campaigns set up in our Pro account to track different keyword campaigns, but all for the same domain. Some show 600+ pages accessed, while others only access 7 pages for the same domain. What could be causing these issues?
-
SEOMOZ Crawler unicode bug
For the last couple of weeks the SEOmoz crawler has crawled my homepage only and gets a 4xx error for most of the URLs. The crawler has no issues with English URLs, only with the Unicode (Hebrew) ones. This is what I see in the CSV export for the crawl (one sample): http://www.funstuff.co.il/׳ž׳¡׳™׳‘׳×-׳¨׳•׳•׳§׳•׳× 404 text/html; charset=utf-8 You can see that the URL is gibberish. Please help.
-
MOZ Crawler only crawling one page per campaign
We set up some new campaigns, and now for the last two weekly crawls, the crawler is only accessing one page per campaign. Any ideas why this is happening? PS - two weeks back we did "upgrade" the account. Could this have been an issue?