What is A/B Testing?

A/B testing (also called split testing) is a controlled experiment where two variants of a product experience -- a control (A) and a variation (B) -- are shown to different user segments simultaneously to determine which version performs better against a defined metric. It is one of the most reliable methods for making data-driven product decisions.
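Splitting users into segments is usually done with deterministic hashing rather than random draws, so a given user sees the same variant on every visit. A minimal sketch (the function name, experiment key, and 50/50 split are illustrative assumptions, not a specific product's API):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into control (A) or variation (B).

    Hashing user_id together with the experiment name keeps each user's
    assignment stable across sessions and independent across experiments.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to uniform [0, 1]
    return "A" if bucket < split else "B"

print(assign_variant("user-123", "checkout-redesign"))
```

Because the assignment is a pure function of user ID and experiment name, no assignment table needs to be stored, and two concurrent experiments bucket users independently.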

Why It Matters for Product Managers

A/B testing removes guesswork from product decisions. Instead of debating whether a new design, feature, or copy change will improve outcomes, PMs can run an experiment and let user behavior provide the answer. This builds a culture of evidence over opinion and helps teams ship improvements with confidence.

A/B tests also help PMs quantify impact. When you can show that a change increased conversion by 12% with statistical significance, it becomes much easier to justify investment in similar initiatives.

How to Run a Good A/B Test

A well-designed A/B test requires a clear hypothesis, a single primary metric, a sample size large enough to detect the expected effect with adequate statistical power, and enough runtime to account for day-of-week and seasonal effects. Common pitfalls include testing too many variables at once, ending tests too early, and ignoring secondary metrics that reveal unintended consequences.
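The required sample size can be estimated up front from the baseline rate and the minimum effect you want to detect. A rough sketch using the standard normal-approximation formula for comparing two proportions (the function name and the example baseline of 20% are assumptions for illustration):

```python
import math

def sample_size_per_group(p_control: float, mde: float,
                          z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate users needed per group for a two-proportion test.

    mde is the absolute minimum detectable effect (e.g. 0.02 = 2 points).
    Defaults correspond to two-sided alpha = 0.05 and 80% power.
    """
    p_variant = p_control + mde
    p_bar = (p_control + p_variant) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_control * (1 - p_control)
                                      + p_variant * (1 - p_variant))) ** 2
    return math.ceil(numerator / mde ** 2)

# e.g. 20% baseline conversion, detecting a 2-point absolute lift
print(sample_size_per_group(0.20, 0.02))
```

Note the quadratic penalty: halving the detectable effect roughly quadruples the required sample, which is why tests chasing small lifts must run longer.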

Practical Example

A PM hypothesizes that simplifying the checkout page will increase completion rates. They run an A/B test where 50% of users see the existing three-step checkout (control) and 50% see a new single-page checkout (variation). After two weeks with 10,000 users per group, the single-page version shows a 9% higher completion rate, and the difference is statistically significant at the 95% confidence level.
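The significance claim in this example can be checked with a two-proportion z-test. The completion counts below are hypothetical (the source does not give a baseline rate; a 30% control rate with a 9% relative lift is assumed for illustration):

```python
import math

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> tuple:
    """Two-sided two-proportion z-test with a pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 3,000/10,000 completions (control) vs 3,270/10,000 (variation)
z, p = two_proportion_z(3000, 10000, 3270, 10000)
print(f"z = {z:.2f}, p = {p:.5f}")
```

With these assumed counts the p-value falls well below 0.05, consistent with the example's conclusion that the lift is significant at the 95% level.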

Related prompt: A/B Test Planning Document