What is a "good" dwell time?
-
I know there isn't any official documentation from Google about the exact number of seconds a user should spend on a site, but does anyone have any case studies that look at what might be a good "dwell time" to shoot for?
We're looking at integrating an exact time-on-site threshold into our Google Analytics metrics to count as a 'non-bounce'. So, for example, if a user spends 45 seconds on an article, we wouldn't count it as a bounce, since the reader likely read through all the content.
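Roughly, we're imagining something like this minimal sketch, assuming the standard analytics.js `ga` command queue is loaded on the page; the 45-second threshold and the event names are just placeholders:

```typescript
// Minimal "adjusted bounce rate" sketch (threshold and labels are placeholders).
// In analytics.js, event hits count as interactions by default, so a single
// event fired after 45 seconds stops the visit from being recorded as a bounce.
declare const ga: (command: string, hitType: string, ...fields: unknown[]) => void;

window.setTimeout(() => {
  ga("send", "event", "Engagement", "time-on-page", "45s dwell");
}, 45000);
```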
-
I have not seen any studies indicating such a thing.
(My guess is that dwell time is such a strong signal of relevance that Google would never release that info, but I could be totally wrong.)
An idea to improve UX: if you have a page with two paragraphs of text, take the average time it takes ten people in your office to read it and set your 'bounce' threshold accordingly. Then you'll know whether people are actually reading it.
If you have a page with 2,000 words, average that reading time, and so on.
If visitors bounce sooner than that, edit the text until the visitor average meets your office average. That would equal relevance, right?
-
The answer to this really depends on the page content, Micelleh. A simple page with a clear call to action could give a user exactly what they want within a few seconds before they leave. A 350-word page might mean 45 seconds, but a 1,500-word page might need 2 minutes to prove a user actually got value.
At best, if you insist on a value, have several users work through a good number of your pages, record their on-page time, and create a site-specific average from that.
However, you might be even better off using events for this, instead of something as nebulous as dwell time.
You could add event tracking for how far down the page a user scrolls, so that if they scroll more than half a page (for example), an "interaction" event triggers. Interaction events affect bounce calculations the same way another pageview would (without inflating your pageview metrics), so a single-page visit that scrolled at least halfway down the page would no longer be recorded as a bounce.
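Here's a minimal sketch of that scroll trigger, again assuming the standard analytics.js `ga` queue; the 50% threshold and event names are just placeholders:

```typescript
// Hypothetical scroll-depth trigger: fires a single GA event the first
// time the visitor scrolls past 50% of the page height.
declare const ga: (command: string, hitType: string, ...fields: unknown[]) => void;

let fired = false;

window.addEventListener("scroll", () => {
  if (fired) return;
  const scrolled = window.scrollY + window.innerHeight;
  if (scrolled / document.documentElement.scrollHeight >= 0.5) {
    fired = true;
    // One interaction event hit is enough to negate the bounce without
    // adding to the pageview count.
    ga("send", "event", "Engagement", "scroll-depth", "50%");
  }
});
```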
You could also create interaction events for whatever you consider appropriate for your site, such as PDF downloads, form submissions, email sends, or video views, to negate what would otherwise be counted as a bounce.
The biggest benefit of this events-based approach is that it is vastly more accurate: it tracks visitors' actual actions, rather than assuming that a given dwell time meant a valuable interaction. (For example, we all know that the habit of opening multiple tabs at once for sequential reading significantly inflates time on page for many users.)
Perhaps that idea would work better for what you're trying to accomplish?
Paul