What's the best way to test an Angular JS-heavy page for SEO?
-
Hi Moz community,
Our tech team has recently decided to try switching our product pages to be JavaScript-dependent; this includes rendering links, product descriptions, and elements like breadcrumbs in JS. Given my concerns, they will create a proof of concept with a few product pages in a QA environment so I can test the SEO implications of these changes. They plan to use Angular 5 with client-side rendering and no prerendering. I suggested Angular Universal, but they said the lift was too great, so we're testing to see whether this approach works.
I've read many of the articles in this guide to all things SEO and JS, and I'm fairly confident I understand when a site relies on JS and how to troubleshoot to make sure everything is getting crawled and indexed.
https://sitebulb.com/resources/guides/javascript-seo-resources/
However, I'm not sure I'll be able to test the QA pages, since they aren't indexable and live behind a login. I will be able to crawl the pages using Screaming Frog, but that generally reflects what a crawler should be able to crawl, not what Googlebot will actually be able to crawl and index.
Any thoughts on this? Is the concern valid?
Thanks!
-
Hi Zack,
I think your concern here is valid (your render with Screaming Frog or any other client is unlikely to be precisely representative of what Googlebot will see/index). That said, I'm not sure there's much you can do to eliminate this knowledge gap for your QA process.
For instance, while we've seen JS rendering time out around the ~5s mark when using the "Fetch & Render as Googlebot" functionality in Search Console (see slide 25 of Max Prin's slide deck here), there's no confirmation that this time limit represents Googlebot's behavior in the wild.
Additionally, we know that Googlebot crawls with limited JS support. For instance, my colleague Tom Anthony found that when a script generates a random number, Googlebot's random() function is deterministic (it returns a predictable sequence), so it's clear they've modified the headless version of Chrome they use to conserve computational resources in this way. We can only assume they've taken other cost-saving steps as well. None of this behavior is baked into Screaming Frog or any other crawling tool.
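If you want to probe for that kind of modified behavior yourself, one quick test is to write a few random values into the DOM and compare what a normal browser shows against the rendered HTML Google reports for the page. Here's a minimal sketch, assuming nothing beyond standard browser APIs; the element ID is just an arbitrary hook for the comparison, not anything Google looks for:

```ts
// Minimal diagnostic sketch: write a few Math.random() values into the DOM.
// Load the page in a normal browser, then compare against the rendered HTML
// Google reports (e.g. via Fetch & Render / URL Inspection). If Google's
// values repeat across separate fetches, its renderer's PRNG is deterministic.
const samples: number[] = Array.from({ length: 5 }, () => Math.random());

const probe = document.createElement('pre');
probe.id = 'random-probe'; // arbitrary hook to locate the output when comparing
probe.textContent = samples.map((n) => n.toFixed(10)).join('\n');
document.body.appendChild(probe);
```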
We have seen that with a 5s rendering timeout set in Screaming Frog, the rendered result is pretty close to what the "Fetch & Render as Googlebot" functionality shows. And given the ubiquity of JS-driven content on the web today, provided links and content are rendered into the DOM quickly (well ahead of that 5s mark), we've seen Google render and index JS content fairly reliably.
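If you'd rather approximate that budget outside Screaming Frog, you can render the QA pages in headless Chrome yourself with a hard ~5s deadline and inspect whatever made it into the DOM by then. Here's a rough sketch using Puppeteer (one tool for driving headless Chrome, not anything Google endorses); the URL is a placeholder, authentication for your QA login is omitted, and the 5s figure is just the observed timeout discussed above, not a documented Googlebot limit:

```ts
import puppeteer from 'puppeteer';

// Rough sketch: render a page in headless Chrome with a ~5s budget and
// return whatever HTML made it into the DOM by the deadline.
async function renderWithBudget(url: string): Promise<string> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // networkidle0 waits for the page to settle; the 5s timeout mirrors the
  // cutoff observed in Fetch & Render. If the deadline hits, keep whatever
  // has rendered so far rather than failing the whole check.
  await page
    .goto(url, { waitUntil: 'networkidle0', timeout: 5000 })
    .catch(() => undefined);
  const html = await page.content(); // serialized post-JS DOM
  await browser.close();
  return html;
}

// Example: diff this output against the raw page source to see which links
// and content only exist after JS runs.
renderWithBudget('https://qa.example.com/product/123').then((html) =>
  console.log(html.length),
);
```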
The ideal would be for your dev team to code these pages to degrade gracefully, so that even with JS support totally disabled, navigation and content elements are still rendered (they should be delivered in the page source and then enhanced with JS, if possible). A sketch of what that enhancement layer might look like follows below.
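In this sketch, the breadcrumb links are assumed to already exist in the server-delivered HTML (e.g. a `<nav id="breadcrumbs">` block containing plain `<a href>` elements), and JS only upgrades them, so a client with no JS support still sees working navigation. The element ID and the history hand-off are illustrative assumptions, not your team's actual markup or router:

```ts
// Progressive-enhancement sketch: the breadcrumb links are assumed to be
// present in the raw page source already; this script only upgrades them
// to client-side navigation. With JS disabled, the plain <a href> links
// still work and remain visible to any crawler.
document
  .querySelectorAll<HTMLAnchorElement>('#breadcrumbs a')
  .forEach((link) => {
    link.addEventListener('click', (event) => {
      event.preventDefault();
      // Hand navigation to the History API; a real app would trigger its
      // router here instead of a full page load.
      history.pushState({}, '', link.href);
    });
  });
```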
Failing that, the best you're likely to achieve here is reasonable confidence that Googlebot can crawl, render, and index these pages; there will still be some risk when you publish them to production.
Hope this helps somewhat - best of luck!
Thanks,
Mike