A/B Test

A controlled experiment comparing two versions to determine which performs better.

Why It Matters

A/B testing replaces opinions with data. Instead of guessing whether a red or green button converts better, you test both and let customer behavior decide. Even small wins (5-10% improvements) compound across thousands of visitors into significant revenue gains.

Practical Example

Scenario

A jewelry brand tests two product page layouts: A (current) vs B (larger images, simplified description). They split traffic 50/50 for 3 weeks.
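To make a 50/50 split reliable, variant assignment should be deterministic so a returning visitor always sees the same layout. A minimal sketch of hash-based bucketing, assuming a string user ID (the experiment name and function are illustrative):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "pdp-layout") -> str:
    """Deterministically bucket a user into variant A or B (50/50 split)."""
    # Hash the experiment name together with the user ID so assignments are
    # stable across sessions and independent across concurrent experiments.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # uniform value in 0-99
    return "A" if bucket < 50 else "B"

print(assign_variant("user-42"))  # the same user always gets the same variant
```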

Calculation

Variant A converts at 2.8%; Variant B converts at 3.4%. The store gets 50,000 monthly visitors with a $120 average order value (AOV).

Result

Variant B wins with 95% confidence. Rolling it out adds roughly 300 extra conversions per month (50,000 visitors × 0.6 percentage points), worth $36,000 per month at a $120 AOV, or about $432,000 a year, from one test.
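The 95% confidence figure is typically checked with a two-proportion z-test. A sketch of that calculation, assuming roughly 25,000 visitors per variant over the three-week test (700 vs. 850 conversions; the counts are illustrative):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Assumed counts: 2.8% and 3.4% of ~25,000 visitors per variant
z, p = two_proportion_z_test(conv_a=700, n_a=25_000, conv_b=850, n_b=25_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ≈ 3.87, p < 0.001: significant at 95%
```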

Pro Tips

  1. Test one element at a time to understand what caused the change
  2. Run tests until you reach statistical significance (95% confidence minimum)
  3. Calculate required sample size before testing; most tests need 1,000+ conversions per variant (see the sketch after this list)
  4. Document all tests and results to build institutional knowledge
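For the sample-size check in tip 3, the standard two-proportion power formula gives a quick estimate. A sketch assuming 95% confidence and 80% power, with an illustrative baseline rate and minimum detectable effect:

```python
from math import ceil

Z_ALPHA = 1.96  # 95% confidence, two-sided
Z_BETA = 0.84   # 80% power

def sample_size_per_variant(baseline: float, mde_rel: float) -> int:
    """Approximate visitors needed per variant to detect a relative lift.

    baseline: current conversion rate, e.g. 0.028
    mde_rel:  minimum relative lift to detect, e.g. 0.20 for +20%
    """
    p1 = baseline
    p2 = baseline * (1 + mde_rel)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (Z_ALPHA + Z_BETA) ** 2 * variance / (p2 - p1) ** 2
    return ceil(n)

# Illustrative: detecting a 20% relative lift on a 2.8% baseline
print(sample_size_per_variant(0.028, 0.20))  # ≈ 14,900 visitors per variant
```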

Common Mistakes to Avoid

  • Ending tests too early, before reaching statistical significance
  • Testing too many elements at once, making it impossible to know what worked
  • Not considering segment differences: a winner overall may lose for mobile users
