
In the world of A/B testing, precision and statistical rigor are essential to ensure that our experiments deliver meaningful and actionable results. One of the most critical parameters in designing an effective experiment is the Minimum Detectable Effect (MDE). Understanding what MDE is, how it works, and why it matters can make the difference between a successful data-driven decision and a misleading one.
What is Minimum Detectable Effect?
The Minimum Detectable Effect (MDE) represents the smallest difference between a control group and a variant that an experiment can reliably detect as statistically significant.
In simpler terms, it’s the smallest change in your key metric (such as conversion rate, click-through rate, or average order value) that your test can identify with confidence — given your chosen sample size, significance level, and statistical power.
If the real effect is smaller than the MDE, the test is unlikely to detect it, even if it truly exists.
How Does It Work?
To understand how MDE works, let’s start by looking at the components that influence it. MDE is mathematically connected to sample size, statistical power, significance level (α), and data variability (σ).
The basic idea is this:
A smaller MDE means you can detect tiny differences between variants, but it requires a larger sample size. Conversely, a larger MDE means you can detect only big differences, but you’ll need fewer samples.
Formally, the relationship can be expressed as follows:

MDE = (z(1−α/2) + z(power)) × σ / √n
Where:
- MDE = Minimum Detectable Effect
- z(1−α/2) = critical z-score for the chosen confidence level
- z(power) = z-score corresponding to desired statistical power
- σ = standard deviation (data variability)
- n = sample size per group
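As a minimal sketch of this relationship in Python (assuming scipy is available; the helper name `mde` and its default arguments are our own choices, not part of any standard library):

```python
from scipy.stats import norm

def mde(sigma, n, alpha=0.05, power=0.80):
    """Smallest effect detectable with n samples per group (simplified form above)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical z for a two-sided test
    z_power = norm.ppf(power)          # z-score for the desired power
    return (z_alpha + z_power) * sigma / n ** 0.5
```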
Main Components of MDE
Let’s break down the main components that influence MDE:
1. Significance Level (α)
The significance level represents the probability of rejecting the null hypothesis when it is actually true (a Type I error).
A common value is α = 0.05, which corresponds to a 95% confidence level.
Lowering α (for more stringent tests) increases the z-score, making the MDE larger unless you also increase your sample size.
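You can check the critical z-scores behind common settings directly (again assuming scipy):

```python
from scipy.stats import norm

print(norm.ppf(1 - 0.05 / 2))  # two-sided alpha = 0.05 -> ~1.96
print(norm.ppf(1 - 0.01 / 2))  # two-sided alpha = 0.01 -> ~2.58
```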
2. Statistical Power (1−β)
Power is the probability of correctly rejecting the null hypothesis when there truly is an effect (avoiding a Type II error).
Commonly, power is set to 0.8 (80%) or 0.9 (90%).
Higher power makes your test more sensitive — but also demands more participants for the same MDE.
3. Variability (σ)
The standard deviation (σ) of your data reflects how much individual observations vary from the mean.
High variability makes differences harder to detect: for a fixed sample size the MDE grows, and for a fixed MDE the required sample size grows.
For example, conversion rates with wide daily fluctuations will require a larger sample to confidently detect a small change.
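One practical way to get a value for σ is to estimate it from historical data; the daily rates below are purely hypothetical:

```python
import statistics

# Hypothetical daily conversion rates from a pre-test period
daily_rates = [0.048, 0.052, 0.047, 0.055, 0.050]
sigma = statistics.stdev(daily_rates)  # sample standard deviation
print(f"estimated sigma: {sigma:.4f}")
```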
4. Sample Size (n)
The sample size per group is one of the most controllable factors in experiment design.
Larger samples provide more statistical precision and allow for smaller detectable effects (lower MDE).
However, larger samples also mean longer test durations and higher operational costs.
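Inverting the same relationship gives the sample size needed to reach a target MDE. A sketch under the same simplified formula (the `required_n` helper is illustrative):

```python
import math
from scipy.stats import norm

def required_n(sigma, target_mde, alpha=0.05, power=0.80):
    """Sample size per group needed to detect target_mde."""
    z_total = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil((z_total * sigma / target_mde) ** 2)
```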
Example Calculation
Let’s assume we are running an A/B test on a website with the following parameters:
- Baseline conversion rate = 5%
- Desired power = 80%
- Significance level (α) = 0.05
- Standard deviation (σ) = 0.02
- Sample size (per group) = 10,000
Plugging these values into the MDE equation:

MDE = (1.96 + 0.84) × 0.02 / √10,000 = 2.80 × 0.02 / 100 = 0.00056

This means our test can detect improvements of at least 0.056 percentage points in conversion rate (i.e., from the 5% baseline to about 5.06%) with the given parameters.
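Running the earlier `mde` sketch with these numbers reproduces the result:

```python
print(mde(sigma=0.02, n=10_000))  # ~0.00056, i.e. 0.056 percentage points
```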
Why is MDE Important?
MDE is fundamental to experimental design because it connects business expectations with statistical feasibility.
- It ensures your experiment is neither underpowered nor wasteful.
- It helps you balance test sensitivity and resource allocation.
- It prevents false assumptions about the test’s ability to detect meaningful effects.
- It informs stakeholders about what level of improvement is measurable and realistic.
In practice, if your expected effect size is smaller than the calculated MDE, you may need to increase your sample size or extend the test duration to achieve reliable results.
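As a quick illustration of that trade-off (with hypothetical traffic numbers), the test duration follows directly from the required sample size:

```python
import math

daily_visitors_per_group = 1_500  # hypothetical traffic allocated to each variant
n_needed = required_n(sigma=0.02, target_mde=0.0005)  # from the earlier sketch
days = math.ceil(n_needed / daily_visitors_per_group)
print(f"{n_needed} users per group -> about {days} days")
```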
Integrating MDE into Your A/B Testing Process
When planning A/B tests, always define the MDE upfront — alongside your confidence level, power, and test duration.
Most modern experimentation platforms allow you to input these parameters and will automatically calculate the required sample size.
A good practice is to:
- Estimate your baseline metric and expected improvement.
- Compute the MDE using the formulas above.
- Adjust your test duration or audience accordingly.
- Validate assumptions post-test to ensure the MDE was realistic.
Conclusion
The Minimum Detectable Effect (MDE) is the cornerstone of statistically sound A/B testing.
By understanding and applying MDE correctly, you can design experiments that are both efficient and credible — ensuring that the insights you draw truly reflect meaningful improvements in your product or business.







