Experimentation Overview


Whether you're new to Userpilot or an experienced user, you're most likely interested in how to effectively assess goal conversions or measure the impact of changes made to your Userpilot flows.

Userpilot makes this possible with several convenient methods for testing flows against control groups and against other flows. These experimentation tools empower your team to make well-informed choices and assess more precisely how flows influence your growth objectives.

If you signed up after May 30, 2023, this feature is only available on the Growth and Enterprise plans. If you wish to know more about upgrading your plan, please reach out to support@userpilot.co

You can access the experimentation feature by navigating to the 'Flows' tab under the 'Engagement' section.

Types of Experiments:

  1. Controlled A/B Test: Test a flow against a control group (Userpilot Flow vs. nothing)
  2. Head-to-Head A/B Test: Test two versions of a flow (Flow vs. another flow)
  3. Controlled Multivariate Test: Test two flows across three groups (Control, Group B, Group C)

Controlled A/B Test

In a Controlled A/B Test, you compare the performance of a single flow (Flow A) against a control group (no flow), aiming to determine if Flow A is effective in achieving its goal.


To get started, specify the goal you wish to measure. Then, set the experiment's duration. You can end it when a result has been determined, typically once users are split evenly, with 50% having seen the flow and the other 50% having seen nothing. Alternatively, you can run the experiment for a specific number of users, with a minimum duration of 48 hours.

Keep in mind that you will not be able to run an experiment if the flow's frequency is set to 'show every time' or 'trigger manually'.
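Conceptually, a controlled A/B test splits eligible users evenly and deterministically between the flow and the control group. Below is a minimal TypeScript sketch of such a 50/50 split, assuming a stable user ID; the hash and function names are hypothetical, and Userpilot performs the actual assignment for you.

    // Deterministic 50/50 bucketing for a controlled A/B test.
    // Illustrative only: Userpilot assigns groups for you; the hash
    // below is a hypothetical stand-in for a stable assignment.
    function hashUserId(userId: string): number {
      let h = 0;
      for (const ch of userId) {
        h = (h * 31 + ch.charCodeAt(0)) >>> 0; // 32-bit rolling hash
      }
      return h;
    }

    function assignGroup(userId: string): "flow" | "control" {
      // Even split: half the users see Flow A, half see nothing.
      return hashUserId(userId) % 2 === 0 ? "flow" : "control";
    }

    console.log(assignGroup("user-42")); // stable per user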


Use Case Example for Controlled A/B Test:

Let's say you're running an e-commerce web app, and you want to test the effectiveness of a new onboarding flow that guides users through the checkout process. You can show Flow A to one group of users (Group A) while showing no flow to the control group (Group B). By comparing the conversion rates of Group A and Group B, you can assess whether Flow A significantly improves checkout completion.
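Userpilot reports the experiment's results for you, but if you wanted to sanity-check a comparison like this yourself, a standard two-proportion z-test over the two groups' counts is one way to do it. The counts below are hypothetical:

    // Two-proportion z-test: does Group A's conversion rate differ
    // significantly from the control group's? Counts are made up.
    function twoProportionZ(
      convA: number, totalA: number,
      convB: number, totalB: number,
    ): number {
      const pA = convA / totalA;
      const pB = convB / totalB;
      const pooled = (convA + convB) / (totalA + totalB);
      const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
      return (pA - pB) / se;
    }

    // 120 of 500 users who saw the flow converted, vs. 90 of 500 controls.
    const z = twoProportionZ(120, 500, 90, 500);
    console.log(`z = ${z.toFixed(2)}, significant: ${Math.abs(z) > 1.96}`);
    // |z| > 1.96 corresponds to p < 0.05 (two-tailed); here z is about 2.33.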


Head-to-Head A/B Test

In a Head-to-Head A/B Test, you directly compare two different flows (Flow A and Flow B) without a control group. This is done to determine which version of the two flows yields better results. To ensure a fair comparison, both flows must share the same trigger settings and target the same user segment.


To begin, place the first flow in Group A and the second flow in Group B. Next, specify the goal you want to track to evaluate their effectiveness.

Then, set the experiment's duration. You can end it when a result has been determined, typically when 50% of users have triggered Group A's flow, and the other 50% have triggered Group B's flow. Alternatively, you can run the experiment for a set number of users, with a minimum duration of 48 hours.

Make sure that the settings for both flows match; otherwise, you will not be able to create the experiment.
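In other words, the experiment can only be created when the two flows are interchangeable from the user's point of view. A small sketch of that precondition, using hypothetical setting names rather than Userpilot's actual fields:

    // A head-to-head test requires both flows to share trigger settings
    // and target segment. Field names here are hypothetical.
    interface FlowSettings {
      frequency: string;     // e.g. "once"
      targetSegment: string; // e.g. "new-users"
    }

    function canRunHeadToHead(a: FlowSettings, b: FlowSettings): boolean {
      return a.frequency === b.frequency && a.targetSegment === b.targetSegment;
    }

    const videoTutorial = { frequency: "once", targetSegment: "new-users" };
    const stepByStepGuide = { frequency: "once", targetSegment: "new-users" };
    console.log(canRunHeadToHead(videoTutorial, stepByStepGuide)); // true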


Use Case Example for Head-to-Head A/B Test:

Suppose you have a web app, and you want to find out which onboarding process is more effective: a video tutorial (Flow A) or a step-by-step guide (Flow B). You would show Flow A to one group of users and Flow B to another group, making sure they are both new users. By comparing user engagement, user retention, or conversion rates between the two groups, you can decide which onboarding method is better.


Controlled Multivariate Test

In a Controlled Multivariate Test, you compare the performance of multiple flows (Flow A, Flow B, Flow C, etc.) against a control group (no flow). The key difference here is that these flows can have different trigger settings and target different user segments, allowing you to test more than one variable simultaneously.


To start, add the first flow to Group B and the second flow to Group C; the control group sees no flow. Then, specify the goal you want to track to determine which flow is more effective.

Next, set the experiment's duration. You can end it when a result has been determined, typically once users in the matching segment are split roughly evenly, with about a third having seen the flow in Group B, about a third having seen the flow in Group C, and about a third having triggered no flow. Alternatively, you can run the experiment for a specific number of users, with a minimum duration of 48 hours.
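The two-way bucketing idea from the controlled A/B test extends naturally to three groups. Again a hypothetical sketch, not Userpilot's implementation:

    // Deterministic three-way split for a controlled multivariate test:
    // control (no flow), Group B (first flow), Group C (second flow).
    function hashUserId(userId: string): number {
      let h = 0;
      for (const ch of userId) {
        h = (h * 31 + ch.charCodeAt(0)) >>> 0; // 32-bit rolling hash
      }
      return h;
    }

    type Variant = "control" | "groupB" | "groupC";

    function assignVariant(userId: string): Variant {
      const bucket = hashUserId(userId) % 3; // roughly a third per group
      return bucket === 0 ? "control" : bucket === 1 ? "groupB" : "groupC";
    }

    console.log(assignVariant("user-42"));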

Use Case Example for Controlled Multivariate Test:

Consider a SaaS platform offering multiple subscription plans (Basic, Pro, Premium). You want to optimize the user journey for each plan. In this scenario, you can create multiple onboarding flows tailored to each plan and show them to different user segments accordingly. By comparing user behavior and conversion rates between the flows and a control group that sees no flow, you can determine which onboarding approach works best for each subscription plan.
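Once the experiment ends, picking a winner amounts to comparing each flow's conversion rate against the control's. A small illustration over hypothetical exported counts:

    // Rank flows by conversion rate and keep only those that beat the
    // control group. All counts are hypothetical.
    const groups = [
      { name: "control", conversions: 80, users: 400 },
      { name: "basic-onboarding", conversions: 110, users: 400 },
      { name: "pro-onboarding", conversions: 95, users: 400 },
    ];

    const controlRate = groups[0].conversions / groups[0].users;
    const winners = groups
      .slice(1)
      .map(g => ({ name: g.name, rate: g.conversions / g.users }))
      .filter(g => g.rate > controlRate)
      .sort((a, b) => b.rate - a.rate);

    console.log(winners.length > 0
      ? `Best flow: ${winners[0].name} (${(winners[0].rate * 100).toFixed(1)}%)`
      : "No flow beat the control");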



In essence, these A/B testing methods help you fine-tune your user engagement strategies by comparing different flows or variations to see which one performs best in achieving your specific goals. Each method provides a different level of control and insight into user behavior, allowing you to make data-driven decisions to improve user experiences.



If you have any questions or concerns, please reach out to us at support@userpilot.co
