A/B Testing Overview
Whether you're new to Userpilot or an experienced user, you may be wondering how best to measure your goal conversions, or how to measure the impact of a change to your Userpilot experiences.
Userpilot now makes it easy to test experiences against a control group (Userpilot vs nothing). With the help of our new A/B testing experimentation tool, your team can now make more informed decisions and measure the impact that experiences are having on your growth goals with higher accuracy.
Example use cases
- Understand the impact of showing an experience on a given goal to a specific segment versus showing nothing at all.
- Determine which version of an experience drives better engagement and more usage of a given event in your application.
- Measure the effectiveness of experiences in increasing your adoption goals and retention rates.
How it works
Userpilot will evenly split the users who are eligible to enter the experiment and automatically assign them at random to two groups: Group A and Group B.
The sample group (A) is the holdback/control group – not eligible to see the experience. The sample group (B) is the variant group – eligible to see the experience.
Note: Any user who doesn't match the selected targeting and triggering settings of the experience will be considered ineligible to enter the experiment.
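To make the split concrete, here is a minimal sketch of one common way such an assignment can work. This is an illustration only, not Userpilot's actual implementation; the function names and the hash-based scheme are assumptions. Hashing the user/experiment pair makes the 50/50 assignment stable, so the same user always lands in the same group:

```python
import hashlib

def assign_group(user_id: str, experiment_id: str) -> str:
    """Deterministically assign an eligible user to Group A (control)
    or Group B (variant) with a 50/50 split.

    Illustrative sketch only -- not Userpilot's actual algorithm.
    """
    # Hash the user/experiment pair so the assignment is stable
    # across sessions and devices.
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    # Even hash -> control (A), odd hash -> variant (B).
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Because the hash is deterministic, a user who re-enters the experiment is never silently moved between groups, which would otherwise contaminate the results.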
After the experiment has completed, the results will favor one group over the other, or return a neutral result when there is no significant difference between groups A and B.
- Group A (out/control) is favored – the experience has little or no positive effect on the goal conversion.
- Group B (in/variant) is favored – the experience has a noticeable positive effect on the goal conversion.
Things to keep in mind
- You will not be able to run an experiment without setting a goal first.
- You will not be able to run an experiment if the frequency setting of the experience is set to 'show every time' or 'trigger manually'.
- Users who have achieved the given goal prior to the start of the experiment will automatically be excluded from the experiment.
- Allow experiments to run until there are enough participants for the result to be considered 'statistically significant'.
- Avoid running experiments on segments with small audience sizes.
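To see why sample size matters, here is a rough sketch of how a result like "Group B is favored" can be checked for statistical significance using a standard two-proportion z-test. The function names and the 1.96 threshold (roughly a 95% confidence level) are assumptions for illustration, not part of Userpilot's product:

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate across both groups.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    # Standard error of the difference under the null hypothesis.
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def verdict(conv_a: int, n_a: int, conv_b: int, n_b: int,
            z_crit: float = 1.96) -> str:
    """Translate the z statistic into the three possible outcomes."""
    z = two_proportion_z(conv_a, n_a, conv_b, n_b)
    if z > z_crit:
        return "B favored"
    if z < -z_crit:
        return "A favored"
    return "neutral"
```

With small audiences the standard error stays large, so even a visible gap between the groups comes back "neutral"; that is why experiments should run until enough participants have entered.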
If you'd like to send general feedback or suggestions for this feature, please reach out to us at firstname.lastname@example.org.