Is there any proof that Google can crawl PWAs correctly yet?
-
At the end of 2018 we rolled out our agency website as a PWA. At the time, Google used headless Chrome 41 to render our website. Although every source at the time announced that it 'should work', we experienced the opposite. As a solution we implemented a server-side rendering fallback, so that we did not suffer any negative effects.
We are now more than a year on. Does anyone have evidence that Google can actually render and correctly interpret client-side PWAs?
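For context, here is a minimal sketch of the kind of server-side rendering fallback we mean, assuming an Express server in front of the PWA; the bot regex and the renderToHtml helper are illustrative stand-ins, not our production code.

```ts
import express from "express";

const app = express();

// Very rough bot detection; a real setup would use a maintained UA list.
const BOT_UA = /googlebot|bingbot|yandexbot|baiduspider/i;

// Hypothetical stand-in for a real prerenderer, e.g. a cache of HTML
// snapshots produced by headless Chrome at build time.
async function renderToHtml(path: string): Promise<string> {
  return `<html><body><h1>Prerendered ${path}</h1></body></html>`;
}

app.get("*", async (req, res) => {
  if (BOT_UA.test(req.get("user-agent") ?? "")) {
    // Crawlers receive fully rendered HTML, so indexing never depends
    // on whether they execute our JavaScript.
    res.send(await renderToHtml(req.path));
  } else {
    // Regular visitors get the normal client-side PWA shell.
    res.sendFile("index.html", { root: "dist" });
  }
});

app.listen(3000);
```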
-
OK, so I found some new information. The article mentioned in this Reddit thread suggests that Google is now indexing with an up-to-date version of Googlebot:
"With Googlebot running on the most recent version of Chrome and JavaScript content being indexed faster than ever, it’s apparent that Google is getting better at indexing JavaScript." Thread: https://www.reddit.com/r/javascript/comments/e4uku1/javascript_indexing_delays_are_still_an_issue_for/
Related Questions
-
I'm doing a crawl analysis for a website and finding all these duplicate URLs with "null" being added to them, and I have no clue what could be causing this.
Does anyone know what could be causing this? Our dev team thinks it's caused by mobile pages they created a while ago, but it is adding thousands of additional URLs to the crawl report, and they are being indexed by Google. They don't see it as a priority, but I believe these could be very harmful to our site. Examples from the URL strings:
uruguay-argentina-chilenullnull/days
rainforests-volcanoes-wildlifenullnull/reviews
of-eastern-europenullnullnullnull/hotels
Web Design | | julianne.amann | 0
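One plausible mechanism for those "null" fragments: a template concatenates a value that is null or undefined into the URL path. A sketch of the bug under that assumption; all names here are hypothetical, not taken from the actual site.

```ts
// Hypothetical data shape; the real site's templates are unknown.
interface Tour {
  slug: string;
  regionSuffix: string | null; // null when no suffix was set
}

function tourUrl(tour: Tour): string {
  // BUG: template literals stringify null as the literal text "null",
  // so a missing suffix silently becomes part of the URL.
  return `/${tour.slug}${tour.regionSuffix}${tour.regionSuffix}/days`;
}

console.log(tourUrl({ slug: "uruguay-argentina-chile", regionSuffix: null }));
// -> "/uruguay-argentina-chilenullnull/days"
```
-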
Does stock art photo attribution negatively impact SEO by leaking Google PageRank?
Greetings: Companies such as Shutterstock often require that buyers place credit attribution on the web pages where photos bought from them appear. Shutterstock requests that credit attribution links such as this be added: Songquan Deng / Shutterstock.com. Do these links negatively impact SEO? Or do search engines view them as a positive? Thanks,
Alan
Web Design | | Kingalan1 | 0
-
Can't figure out what's going on with these strange 403s
The last crawl found a good number of 404s and I can't figure out what's going on. I don't even recognize these links. What's really strange is that a few of them have a fairly decent Page Authority. For instance, this one has a PA of 55: http://noahsdad.com/?path=http%3A%2F%2Feurosystems.it%2Fconf_commerciale%2Fimages%2Fdd.gif%3F%3F There are several more like this one; it seems most of these new ones have "Feurosystems" in the link... I have no idea what that is. Just curious what you guys think is going on, why these are 404ing, and how to fix it. Thanks.
Edit: I took the "%" out of the links and I get this: http://noahsdad.com/?path=http3A2F2Feurosystems.it2Fconf_commerciale2Fimages2Fdd.gif3F3F which takes me to a page on my site. I have no idea what's going on, or what that link is. Hoping someone can chime in, because this is strange.
Another edit: I just checked Google Webmaster Tools and it looks like these errors are actually 403s, and they all started around March 21st. I have no idea what happened on March 21st to start causing all of these errors, but I'd sure like to get it fixed. 🙂
Web Design | | NoahsDad | 0
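A quick way to see what those odd links actually are is to decode the path query parameter; the sketch below uses the exact URL from the question.

```ts
// The "path" query parameter is percent-encoded (%3A = ":", %2F = "/").
// Decoding it reveals a full external URL, which often indicates someone
// probing the site for a remote-file-inclusion style vulnerability; a 403
// would then be the server refusing such requests.
const suspicious =
  "http://noahsdad.com/?path=http%3A%2F%2Feurosystems.it%2Fconf_commerciale%2Fimages%2Fdd.gif%3F%3F";

// searchParams.get() returns the parameter already percent-decoded.
const path = new URL(suspicious).searchParams.get("path");
console.log(path);
// -> "http://eurosystems.it/conf_commerciale/images/dd.gif??"
```
-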
The SEOmoz crawl report shows duplicate content and a duplicate title for these two URLs: http://freightmonster.com/ and http://freightmonster.com/index.html. How do I fix this?
What page is served at http://freightmonster.com/ if it is not index.html? Should I do a redirect from the index page to something more descriptive?
Web Design | | FreightBoy | 1
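One common fix, sketched here on the assumption that the two URLs should be consolidated: permanently redirect /index.html to /. The Express code is only illustrative; on Apache or nginx a one-line rewrite does the same.

```ts
import express from "express";

const app = express();

// A 301 tells crawlers the two URLs are one page, consolidating the
// duplicate content and duplicate title onto the canonical "/" URL.
app.get("/index.html", (_req, res) => {
  res.redirect(301, "/");
});

app.get("/", (_req, res) => {
  res.sendFile("index.html", { root: "public" });
});

app.listen(3000);
```
-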
Old links in Google, new website affecting SEO?
Hi guys, I launched my website in October and it has already been indexed by Google. Now I'm going to launch my redesign, which comes with a new structure, content, links, etc. So the question is: do I have to resubmit my website to Google to get rid of the old links? Open Site Explorer shows links to my forum, which was spammed with p* stuff that has been indexed as well. The forum is off now. I want to use SEOmoz to track my new website, but I guess this could be hard, as the old links etc. will be shown as well. Is there any tool to let Google know about my changes? Does it affect my SEO in any way? Thank you for your help. Nick
Web Design | | NickITW | 0
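On the dead forum links specifically, one hedged option is to serve 410 Gone for the removed section, so crawlers drop the spammed URLs faster than they would with plain 404s. The /forum path is an assumption about the old structure.

```ts
import express from "express";

const app = express();

// ASSUMPTION: the old forum lived under /forum. 410 Gone is an explicit
// "removed on purpose" signal, which search engines tend to act on more
// quickly than a plain 404.
app.use("/forum", (_req, res) => {
  res.status(410).send("This forum has been removed.");
});

app.listen(3000);
```
-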
Why is our site not being indexed by Google, and not showing on a crawl test?
On a site we developed, whose .com domain is forwarded to the .net domain, we stopped getting crawled by Google on about the 20th of February. Now, when we try to run a crawl test on either URL, we get: "There was an error fetching this page. Error description: For some reason the page returned did not describe itself as an html page. It could be possible that the url is serving an image, rss feed, pdf, or xml file of some sort. The crawl tool does not currently report metrics on this type of data." Our other sites are fine, and this one was too up to that date. The only thing we could think of was to remove noodp and noydir today. The site is on the WordPress CMS.
Web Design | | RobertFisher | 0
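The error text points at the Content-Type header: the crawler is saying the response does not identify itself as HTML. A small sketch to check what the server actually sends, assuming Node 18+; the URL is a placeholder for the .net domain:

```ts
// An HTML page should report something like "text/html; charset=UTF-8".
// A missing or wrong value (e.g. application/octet-stream), possibly
// introduced by the .com -> .net forwarding, would explain the crawler
// refusing to parse the page.
async function checkContentType(url: string) {
  const res = await fetch(url, { redirect: "follow" });
  console.log(res.status, res.headers.get("content-type"));
}

checkContentType("https://example.net/"); // placeholder for the .net domain
```
-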
Crawl Budget vs. Canonical
Got a debate raging here and I figured I'd ask for opinions. We have our websites structured as site/category/product. This is fine for URL keywords, etc. We also use this structure for breadcrumbs. The problem is that there are multiple categories into which a product fits, so "product" could also live at:
site/cat1/product
site/cat2/product
site/cat3/product
Obviously this produces duplicate content. There's no reason why each product couldn't live under one URL, but it would take some time and effort to do so (time we don't necessarily have). As such, we're applying the canonical band-aid and calling it good. My problem is that I think this will still kill our crawl budget (this is not an insignificant number of pages we're talking about). In some cases the duplicate pages are bloating a site by 500%. So what say you all? Do we simply set canonicals and call it good, or do we need to take the crawl budget into account and actually remove the duplicate pages? Or am I totally off base, and canonical solves the crawl budget issue as well?
Web Design | | Highland | 0