Split test content experiment - Why can't GA identify a winner here?
-
I have been running a content experiment for a short while now and GA has just ended it saying it cannot determine a winner.
Looking at the images (links below), without any form of analysis I can already see a pattern of greater success in Variation 1. It ended with a 93% probability of outperforming the original, yet the content experiment ended with no winner. Does this mean the 95% confidence threshold I set should've been lowered?
Ultimately I'm going to choose this as my winner, but why didn't GA declare it the winner? Is there something I am missing?
Image 1 - Showing e-commerce performance (objective of split test was transactions)
Image 2 - Showing conversions (same split test, same objective, just different report)
Your thoughts and comments will be appreciated.
-
Hey Emeka,
Short Answer:
You're correct. Effectively, what Google is saying here is that it doesn't have enough statistical confidence to definitively tell you that the variation is outperforming the original at a 95% confidence level, but it does at a 93.8% confidence level.
Quick Note: 95% is the lowest setting in GA Experiments.
Long Answer:
The math behind this statistical significance calculation is:
Full credit to vwo.com for their A/B Testing Significance Calculator & doing all the work here.
Link to Image One - This is simply the data for the Control vs Variation, along with the Conversion Rate & Standard Error for each
- Conversion Rate is: Conversions/Sessions.
- Standard Error is: √(Conversion Rate*(1-Conversion Rate)/Sessions) (see the quick sketch below)
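To make the arithmetic concrete, here's a minimal Python sketch of those two formulas. The session and conversion counts are hypothetical placeholders, not the actual figures from the screenshots above.

```python
import math

# Hypothetical counts -- not the actual numbers from the experiment screenshots.
control_sessions, control_conversions = 1000, 100
variation_sessions, variation_conversions = 1000, 122

def conversion_rate(conversions, sessions):
    # Conversion Rate = Conversions / Sessions
    return conversions / sessions

def standard_error(rate, sessions):
    # Standard Error = sqrt(rate * (1 - rate) / sessions)
    return math.sqrt(rate * (1 - rate) / sessions)

control_rate = conversion_rate(control_conversions, control_sessions)
variation_rate = conversion_rate(variation_conversions, variation_sessions)

print(control_rate, standard_error(control_rate, control_sessions))        # 0.100, ~0.0095
print(variation_rate, standard_error(variation_rate, variation_sessions))  # 0.122, ~0.0103
```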
Link to Image Two - Confidence Levels, Z-score, & P-value
To find if something is truly significant at a specific confidence level, we need to calculate the Z-score, then use that value to find the P-value, and from there we can determine the confidence level.
- Z-score is: (Control Conversion Rate-Variation Conversion Rate)/√((Control Standard Error^2)+(Variation Standard Error^2))
- For the P-value, we take the cumulative probability of the Z-score under a normal distribution with a mean of 0 and a standard deviation of 1. The easiest way to do this is to use an online tool; here's a link to your specific example. (There's also a quick sketch below.)
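If you'd rather script this step than use an online tool, here's a rough Python sketch of the Z-score and P-value calculation, continuing the hypothetical numbers from the earlier snippet (the rates and standard errors are those rounded outputs, not your real experiment data).

```python
import math

# Rounded rates and standard errors carried over from the hypothetical example above.
control_rate, control_se = 0.100, 0.0095
variation_rate, variation_se = 0.122, 0.0103

# Z-score: difference in conversion rates over the combined standard error.
z_score = (control_rate - variation_rate) / math.sqrt(control_se**2 + variation_se**2)

def normal_cdf(x):
    # Cumulative probability of x under a normal distribution with mean 0 and
    # standard deviation 1, computed via the error function (no external libraries).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

p_value = normal_cdf(z_score)
print(z_score, p_value)  # roughly -1.57 and 0.058 with these placeholder numbers
```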
Finally, we take the Confidence % expressed as a decimal (e.g. 0.90, 0.95, 0.99) and 1 minus that value (e.g. 0.10, 0.05, 0.01).
If the P-value is greater than the Confidence % or less than 1 minus the Confidence %, then the result is significant; otherwise it is not. Let me explain that using our example:
At 95% confidence, our P-value needs to be <0.05 or >0.95. Since our P-value is 0.05799, it doesn't meet either of those requirements, and as such it is not significant at that confidence level.
I know that's a lot of math, but this is why Google Experiments is saying that the result is not statistically significant.
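As a sanity check, here's that same decision rule as a small Python snippet, using the roughly 0.058 P-value from the hypothetical numbers above (again, placeholders rather than your real data):

```python
# P-value from the hypothetical example above (yours was 0.05799).
p_value = 0.058

def is_significant(p_value, confidence):
    # Significant if the P-value is below (1 - confidence) or above confidence itself.
    return p_value < (1 - confidence) or p_value > confidence

print(is_significant(p_value, 0.95))  # False -> no winner declared at a 95% threshold
print(is_significant(p_value, 0.90))  # True  -> would clear a 90% threshold
```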
Hope this helps! Let me know if you have any further questions on this!
Trenton