There are lots of things that can go wrong when you try to create A/B tests.
Making your hypothesis clear helps align the team toward the goal.
Your hypothesis should be short and to the point. It should not run to a couple of pages.
You can use the following format for writing an A/B test hypothesis:
“By making change X, we expect result Y.”
An A/B test has two groups: control and variant.
Users should be assigned to the control and variant groups at random, without any specific rule.
As part of the experiment, you need to specify how traffic is split between the control and the variant.
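As a rough illustration, here is a minimal Python sketch of random but sticky assignment with a configurable traffic split. The function name, experiment name, and weights are hypothetical; in practice an experimentation platform usually handles this for you.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, weights: dict[str, float]) -> str:
    """Return the group name ('control', 'variant', ...) for this user."""
    # Hash the experiment name together with the user id so the same user
    # always lands in the same group for the whole experiment.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a number in [0, 1] for a uniform split.
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    cumulative = 0.0
    for name, weight in weights.items():
        cumulative += weight
        if bucket <= cumulative:
            return name
    return list(weights)[-1]  # guard against floating-point rounding

# 50/50 split between control and variant; adding a third entry to the
# weights dictionary would turn this into an A/B/C test.
print(assign_variant("user-123", "new-checkout-flow", {"control": 0.5, "variant": 0.5}))
```

Because the assignment is derived from a hash rather than a coin flip at request time, it stays random across users but consistent for any single user, and the weights make the traffic split explicit.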
A/B experiments need a defined start and end date (keep seasonality in mind).
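To decide how long that window should be, you can work backwards from the sample size the test needs. Below is a rough sketch using the standard two-proportion sample-size formula; the baseline rate, expected lift, daily traffic, and function name are all made-up illustrations, not recommendations.

```python
from math import ceil, sqrt
from scipy.stats import norm

def required_days(baseline_rate: float, expected_rate: float,
                  daily_users_per_group: int,
                  alpha: float = 0.05, power: float = 0.80) -> int:
    """Days needed to reach the per-group sample size for a two-sided two-proportion test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (baseline_rate + expected_rate) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(baseline_rate * (1 - baseline_rate)
                                 + expected_rate * (1 - expected_rate))) ** 2
    n_per_group = numerator / (expected_rate - baseline_rate) ** 2
    return ceil(n_per_group / daily_users_per_group)

# e.g. 5% baseline conversion, hoping to detect a lift to 6%,
# with 600 users per group per day -> about two weeks of traffic.
print(required_days(0.05, 0.06, 600))
```

Planning the duration this way, rather than stopping as soon as the numbers look good, also reduces the temptation to end the test early on a lucky day.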
In most cases you will have a control group and one variant. But at times you might want to test more than one variant, so you can run an A/B/C test, where B and C are two variants, to find the winner!
Users behave differently across platforms and devices, so you will have to run separate tests for each to prove your hypothesis. What works on desktop might not work on mobile, and what works for Android users might not work for iOS users. You cannot extrapolate the results from one platform and claim they will hold for another.
“In the context of AB testing experiments, statistical significance is how likely it is that the difference between your experiment’s control version and test version isn’t due to error or random chance.”
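To make that concrete, the sketch below runs a chi-square test on made-up conversion counts from a control and a variant. The counts, variable names, and the conventional 0.05 threshold are illustrative assumptions, not part of the quoted definition.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical experiment results; in practice these come from your experiment logs.
control = {"conversions": 410, "users": 8200}
variant = {"conversions": 496, "users": 8150}

# 2x2 contingency table of converted vs. not converted for each group.
table = np.array([
    [control["conversions"], control["users"] - control["conversions"]],
    [variant["conversions"], variant["users"] - variant["conversions"]],
])
chi2, p_value, _, _ = chi2_contingency(table)

print(f"control rate: {control['conversions'] / control['users']:.2%}")
print(f"variant rate: {variant['conversions'] / variant['users']:.2%}")
# A p-value below the chosen threshold suggests the difference is unlikely
# to be explained by random chance alone.
print(f"p-value: {p_value:.4f}  significant at 0.05: {p_value < 0.05}")
```

The same kind of check works for any metric you can express as counts per group; the point is to decide the threshold before the test starts, not after seeing the results.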
The A/B experiment should not improve its target metric at the cost of hurting other important metrics.