Angular.js + Crawlers
-
I am working with a site that recently deployed Angular.js. From an SEO standpoint it's a little trickier than we expected. We have deployed a couple of updates to render pages for the bots, but we're not seeing any changes in the Moz weekly reports.
When it comes to Angular.js, will the Moz bots read/access the site the same way the other major engines do? I'm trying to figure out whether our deployments are working or whether something is off in the Moz reports.
Thanks.
-
I am using Prerender to cache and serve static pages to crawler agents, but Moz is not able to crawl my website (http://www.exambazaar.com/), so it has a domain authority of 1/100. I have been in touch with Prerender support to find a fix, and I have also added dotbot to the crawler user agent list, in addition to Prerender's default list, which includes rogerbot. Do you have any suggestions to fix this?
List: https://github.com/prerender/prerender-node/commit/5e9044e3f5c7a3bad536d86d26666c0d868bdfff
Adding dotbot:
prerender.crawlerUserAgents.push('dotbot');
-
Within Prerender you can determine which user agents will receive the HTML snapshot, and it is here that you can add rogerbot. This allows Moz to crawl the site as if it were Google and receive the HTML snapshot version.
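For reference, here is a minimal sketch of that setup in an Express app using the prerender-node middleware. The token and port are placeholders, and rogerbot already appears in the default crawlerUserAgents list in current versions of the middleware, so pushing it is only a safeguard for older versions.

// A minimal sketch: Express + prerender-node, with Moz's crawlers
// explicitly added to the user agents that get the HTML snapshot.
var express = require('express');
var prerender = require('prerender-node');

var app = express();

// Ensure rogerbot (Moz campaign crawler) and dotbot (Moz link index crawler)
// receive the snapshot rather than the raw Angular shell.
prerender.crawlerUserAgents.push('rogerbot');
prerender.crawlerUserAgents.push('dotbot');

// 'YOUR_PRERENDER_TOKEN' is a placeholder for your prerender.io token.
app.use(prerender.set('prerenderToken', 'YOUR_PRERENDER_TOKEN'));

app.use(express.static('public'));

app.listen(3000);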
Additionally, you can always use the Fetch as Google function in Webmaster Tools to see exactly what is being presented/indexed.
-
With the current direction of web development, this is something that needs to be addressed. Google has already confirmed that it is in fact crawling JavaScript-based sites.
Reference:
http://ng-learn.org/2014/05/SEO-Google-crawl-JavaScript/
https://support.google.com/webmasters/answer/174992?hl=en
The solution in this case is an HTML snapshot. You could roll your own, but there are services like https://prerender.io/ that can do it for you.
This doesn't quite help the case for the Moz bot. Maybe the HTML snapshots do work here; I haven't tested it yet. Either way, JavaScript is becoming a more and more dominant way to build websites. I hope Moz recognizes this, because this toolset is awesome and I'd love to continue using it.
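For anyone who would rather roll their own snapshots than use a service, the usual approach is a headless browser script run against each route. Below is a rough PhantomJS sketch, assuming an arbitrary five-second wait for Angular to finish rendering.

// save-snapshot.js -- load a route in PhantomJS, wait for Angular to render,
// then write the resulting HTML to a static snapshot file.
// Usage: phantomjs save-snapshot.js http://example.com/#!/some/route out.html
var system = require('system');
var fs = require('fs');
var page = require('webpage').create();

var url = system.args[1];
var outFile = system.args[2];

page.open(url, function (status) {
  if (status !== 'success') {
    console.log('Failed to load ' + url);
    phantom.exit(1);
  } else {
    // Give Angular time to fetch data and render the view (arbitrary delay).
    setTimeout(function () {
      fs.write(outFile, page.content, 'w');
      phantom.exit(0);
    }, 5000);
  }
});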
-
Is there still no update on this from Moz?
A number of sites I work on use AngularJS with pushState. Is there a way to point the Moz bot to the escaped-fragment static pages?
-
Static rendering is not cloaking. It's a very common practice that Google actually recommends. The issue with AngularJS is that everything is code-based: if you looked at the source, all the pages would look the same. In fact, the Moz bot sees every page as duplicate content.
https://developers.google.com/webmasters/ajax-crawling/docs/html-snapshot
It would be nice to see the Moz bot act more like Googlebot.
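To the question above about pointing the crawler at the escaped-fragment pages: if you host your own snapshots, the mapping described in that Google doc can be a small piece of middleware that rewrites _escaped_fragment_ requests to the static snapshot files. Here is a hedged Express sketch, where the snapshots/ directory and file naming are just assumptions for the example.

// Sketch of the _escaped_fragment_ mapping from Google's AJAX crawling spec:
// crawlers request ?_escaped_fragment_=/some/route and receive the
// pre-rendered static HTML; normal visitors get the Angular app.
var express = require('express');
var path = require('path');

var app = express();

app.use(function (req, res, next) {
  var fragment = req.query._escaped_fragment_;
  if (fragment === undefined) {
    return next(); // not a crawler request, serve the Angular app as usual
  }
  // "/products/shoes" -> snapshots/products/shoes.html (sanitize this in production)
  var name = (fragment === '' || fragment === '/') ? 'index' : fragment.replace(/^\//, '');
  res.sendFile(path.join(__dirname, 'snapshots', name + '.html'));
});

app.use(express.static('public'));

app.listen(3000);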
-
What do you mean by "We have deployed a couple updates to render pages for the bots"? That sounds like cloaking.
-
Hello Josh,
Currently our crawlers do not process any kind of JavaScript found on pages (including pages created with Angular.js). I don't know whether the major search engines have this restriction or not.
For Moz's crawlers, this means that links created through AJAX or other JavaScript will not be picked up. Links appearing in static content, including those within <noscript> tags, should be noticed and indexed. Be aware that even if you've already made changes exposing links in the page's static content, it can take up to a week for the campaign crawl to catch up.
Hopefully that answered your questions! Let us know if you have any more.