A/B Testing Calculator: Calculate your statistical significance
What is statistical significance?
Statistical significance in A/B testing concerns whether the difference between the control version and the test version in your experiment is real, rather than the product of random chance or measurement error.
This is judged against a chosen confidence level. A confidence level of 98% means you can be 98% confident that any observed difference between your control version and the test version is real, and not a fluke of sampling.
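In practice, this check compares the test's p-value against alpha, which is one minus the chosen confidence level. A minimal sketch of that decision rule (the function name and defaults here are illustrative, not part of any particular calculator):

```python
def is_significant(p_value, confidence=0.98):
    """Declare the difference real when the p-value falls below
    alpha = 1 - confidence (e.g. 0.02 at 98% confidence)."""
    alpha = 1 - confidence
    return p_value < alpha

# A p-value of 0.015 clears the 98% confidence bar; 0.03 does not.
print(is_significant(0.015), is_significant(0.03))
```

Lowering the confidence level (say, to 95%) makes the same p-value easier to clear, at the cost of a higher chance of declaring a fluke significant.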
Why is statistical significance used?
Statistical significance comes in handy when a business wants to measure how a change to its product or service (the experiment) affects its business. Did the change have a positive or negative effect, and if so, why?
Statistical significance is used to ensure that the conclusions you draw fall within the margin of error you have deemed acceptable (set by your confidence level), so that your final results are not driven by random noise.
Calculating statistical significance
The first stage in A/B testing (or in establishing statistical significance) is to formulate a hypothesis. There is a null hypothesis (H0, the 0 read as 'nought') and an alternative hypothesis. Typically the null hypothesis states that there is no relationship between the variables you are comparing; the alternative hypothesis asserts that a relationship exists and that the "test" change has an effect.
In A/B testing, many kinds of changes can serve as the basis of a hypothesis: adding a button to a website or app, or changing the UI, layout, or color scheme, and then testing whether the change affects conversion rates by showing some users the unmodified (control) version.
A z-score measures how far the observed difference lies from what the null hypothesis predicts, in units of standard error, and is used to test your null hypothesis.
A p-value is the probability of observing a difference at least this large if the null hypothesis were true; the smaller the p-value, the stronger the evidence against the null hypothesis.
In A/B testing you must also decide whether to conduct a one-tailed or a two-tailed test. A one-tailed test only detects an effect in the direction predicted by your alternative hypothesis. A two-tailed test also accounts for the possibility that your change has the opposite (negative) effect, making it the safer approach.
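The steps above can be sketched as a two-proportion z-test, the standard calculation behind this kind of A/B significance check. This is a minimal, self-contained illustration (the function names and example numbers are invented for the sketch), using the error function for the normal CDF:

```python
import math

def norm_cdf(z):
    # Standard normal cumulative distribution via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def ab_test_z(conv_a, n_a, conv_b, n_b, two_tailed=True):
    """Two-proportion z-test for an A/B experiment.
    conv_*: number of conversions; n_*: number of visitors."""
    p_a = conv_a / n_a                            # control conversion rate
    p_b = conv_b / n_b                            # variant conversion rate
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    if two_tailed:
        p_value = 2 * (1 - norm_cdf(abs(z)))      # effect in either direction
    else:
        p_value = 1 - norm_cdf(z)                 # improvement only
    return z, p_value

# 200/2000 conversions on control vs 250/2000 on the variant
z, p = ab_test_z(200, 2000, 250, 2000)
print(round(z, 2), round(p, 4))
```

With these example numbers the two-tailed p-value comes in around 0.012, which would clear a 95% confidence bar but not a 99% one, illustrating how the chosen confidence level decides the verdict.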
Improving your results
Although A/B testing is an excellent technique for trying out changes and updates to your product, you need to go about conducting it the right way. There are a few techniques and guidelines you can keep in mind while conducting A/B tests.
Increasing sample size
The more people who take part in your tests, the more accurate your insights will be. In A/B testing, this means running your tests for a longer period of time, giving more people the chance to encounter the control and test versions.
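You can also estimate up front how many visitors each variant needs before the test is worth reading. A rough sketch using the standard normal-approximation sample-size formula for a two-proportion test (the function name is illustrative, and the z-values are hard-coded for common settings rather than computed):

```python
import math

def sample_size_per_variant(base_rate, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect an absolute
    lift of `mde` over `base_rate`, two-tailed, via the normal
    approximation. z-values are tabulated for common alpha/power."""
    z_alpha = {0.05: 1.96, 0.02: 2.33, 0.01: 2.576}[alpha]
    z_power = {0.8: 0.84, 0.9: 1.28}[power]
    p1, p2 = base_rate, base_rate + mde
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / mde ** 2
    return math.ceil(n)

# Detecting a lift from a 10% to a 12% conversion rate
print(sample_size_per_variant(0.10, 0.02))
```

For this example the answer lands near 3,800 visitors per variant, which is why small lifts on low-traffic pages can take weeks of data to confirm.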
Artificially direct traffic
You can add more links to your test pages on your social media and on your website to direct more traffic to them, allowing for a stress test of sorts.
Try significant changes
You can try making bigger changes to your product and testing their shock value on your users. This doesn't mean a simple change to your color palette; try getting users to engage with your services in an entirely new manner.
Don’t assume anything
Even if one variant performs better than the other, that does not necessarily mean users prefer it. Pair A/B testing with online surveys to gain a deeper understanding of your users and of whether your A/B tests have yielded actionable insights.
Read More About AB Testing