Split test content experiment - Why can't GA identify a winner here?
-
I have been running a content experiment for a short while now and GA has just ended it saying it cannot determine a winner.
Looking at the images (links below), even without any formal analysis I can already see a pattern of greater success in Variation 1. It ended with a 93% probability of outperforming the original, yet the content experiment ended with no winner. Does this mean the 95% confidence threshold I set should have been lowered?
Ultimately I'm going to choose this as my winner, but why didn't GA declare it the winner? Is there something I am missing?
Image 1 - Showing e-commerce performance (objective of split test was transactions)
Image 2 - Showing conversions (same split test, same objective, just different report)
Your thoughts and comments will be appreciated.
-
Hey Emeka,
Short Answer:
You're correct. Effectively what Google is saying here is that it doesn't have enough statistical confidence to definitively tell you that the variation is outperforming the original at a 95% confidence level, but it does at a 93.8% confidence level.
Quick Note: 95% is the lowest setting in GA Experiments.
Long Answer:
The math behind this statistical significance calculation is:
Full credit to vwo.com for their A/B Testing Significance Calculator & doing all the work here.
Link to Image One - This is simply the data for the Control vs the Variation, along with the Conversion Rate and Standard Error for each
- Conversion Rate is: Conversions/Sessions
- Standard Error is: √(Conversion Rate*(1-Conversion Rate)/Sessions)
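If you'd rather reproduce those two calculations in code instead of a spreadsheet, here's a minimal Python sketch of this step - note that the transaction and session counts below are made-up placeholders, not the actual figures from your screenshots:

```python
import math

# Sketch of the Conversion Rate / Standard Error step described above.
# The transaction and session counts are placeholders, not real experiment data.

def conversion_stats(transactions, sessions):
    rate = transactions / sessions                     # Conversion Rate = Conversions / Sessions
    std_err = math.sqrt(rate * (1 - rate) / sessions)  # Standard Error = sqrt(CR * (1 - CR) / Sessions)
    return rate, std_err

control_rate, control_se = conversion_stats(transactions=50, sessions=1000)
variation_rate, variation_se = conversion_stats(transactions=70, sessions=1000)

print(f"Control:   CR={control_rate:.4f}, SE={control_se:.4f}")
print(f"Variation: CR={variation_rate:.4f}, SE={variation_se:.4f}")
```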
Link to Image Two - Confidence Levels, Z-score, & P-value
To find out whether something is truly significant at a specific confidence level, we need to calculate the Z-score, then use that value to find the P-value, and from there we can determine significance.
- Z-score is: (Control Conversion Rate-Variation Conversion Rate)/√((Control Standard Error^2)+(Variation Standard Error^2))
- For the P-value, we take the cumulative normal distribution of the Z-score, with a mean of 0 and a standard deviation of 1. The easiest way to do this is with an online tool; here's a link to your specific example.
Finally, we take the Confidence % expressed as a decimal (e.g. 0.90, 0.95, 0.99) and 1 minus those values (e.g. 0.10, 0.05, 0.01).
If the P-value is greater than the Confidence % or less than 1 minus the Confidence %, then the result is significant; otherwise it is not. Let me explain that using our example:
At 95% confidence, our P-value needs to be <0.05 or >0.95. Since our P-value is 0.05799, it doesn't meet either requirement, so the result is not significant at that confidence level.
I know that's a lot of math, but this is why Google Experiments is saying that the result is not statistically significant.
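In case it's useful, here's a rough Python sketch of that Z-score / P-value check, using SciPy's normal CDF in place of the online lookup tool. The rates and standard errors plugged in at the bottom are placeholders rather than your actual experiment data:

```python
import math
from scipy.stats import norm  # SciPy's normal CDF stands in for the online lookup tool

# Sketch of the Z-score / P-value / confidence check described above.
# The inputs at the bottom are placeholders, not the real experiment figures.

def significance(control_rate, control_se, variation_rate, variation_se, confidence=0.95):
    # Z-score = (Control CR - Variation CR) / sqrt(Control SE^2 + Variation SE^2)
    z = (control_rate - variation_rate) / math.sqrt(control_se ** 2 + variation_se ** 2)
    # P-value = cumulative normal distribution of the Z-score (mean 0, standard deviation 1)
    p = norm.cdf(z)
    # Significant only if the P-value falls below (1 - confidence) or above the confidence level,
    # i.e. <0.05 or >0.95 at the 95% setting
    return p, (p < 1 - confidence or p > confidence)

p_value, significant = significance(0.050, 0.0069, 0.070, 0.0081)
print(f"P-value: {p_value:.5f}, significant at 95%: {significant}")
```

Run the same check with your actual numbers and the P-value lands at roughly 0.058, which sits inside the 0.05-0.95 band and therefore comes back as not significant - exactly what GA is reporting.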
Hope this helps! Let me know if you have any further questions on this!
Trenton