Type I & II Errors
Understand statistical errors and test power
What You'll Learn
- Type I and Type II errors
- Statistical power
- Alpha and beta
- Trade-offs
Type I Error (False Positive)
What: Rejecting a true null hypothesis
Example: Concluding a drug works when it doesn't
Probability: α (alpha), the significance level (usually 0.05)
Control: Set a lower α (but this increases the Type II error rate)
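A quick way to see α in action: simulate many experiments where H0 is really true and count how often the test rejects anyway. The sketch below uses an assumed setup (a one-sample t-test with NumPy/SciPy), purely for illustration; the observed rejection rate should land near the chosen α of 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_sims, n = 10_000, 30

false_positives = 0
for _ in range(n_sims):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)   # H0 is true: the mean really is 0
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:
        false_positives += 1                           # rejected a true H0: a Type I error

print(f"Observed Type I error rate: {false_positives / n_sims:.3f}")  # should be close to 0.05
```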
Type II Error (False Negative)
What: Failing to reject a false null hypothesis
Example: Concluding a drug doesn't work when it does
Probability: β (beta)
Control: Increase the sample size or increase α
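The flip side: simulate experiments where the effect is real but modest and count how often the test fails to detect it. Again a sketch under assumed conditions (one-sample t-test, illustrative true effect of 0.3 standard deviations); the miss rate is an estimate of β.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, true_effect, n = 0.05, 0.3, 30
n_sims = 10_000

misses = 0
for _ in range(n_sims):
    sample = rng.normal(loc=true_effect, scale=1.0, size=n)  # H0 is false: the effect is real
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value >= alpha:
        misses += 1                                           # failed to detect it: a Type II error

beta = misses / n_sims
print(f"Estimated beta: {beta:.2f}, power: {1 - beta:.2f}")
```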
Decision Table

Truth: H0 True
- Reject H0: Type I Error (α)
- Don't reject: Correct
Truth: H0 False
- Reject H0: Correct (Power)
- Don't reject: Type II Error (β)
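The whole table can be reproduced empirically: run the same simulated test under a true H0 and under a false H0 (the mean shift of 0.5 below is an illustrative assumption) and tally how often each cell occurs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, n, n_sims = 0.05, 30, 5_000

def rejection_rate(true_mean):
    """Fraction of simulated one-sample t-tests that reject H0: mean = 0."""
    rejections = 0
    for _ in range(n_sims):
        sample = rng.normal(loc=true_mean, scale=1.0, size=n)
        if stats.ttest_1samp(sample, popmean=0.0).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

type_i = rejection_rate(0.0)   # H0 true: any rejection is a Type I error
power = rejection_rate(0.5)    # H0 false: a rejection is a correct detection

print(f"H0 true : reject {type_i:.2f} (Type I),  don't reject {1 - type_i:.2f} (correct)")
print(f"H0 false: reject {power:.2f} (power),   don't reject {1 - power:.2f} (Type II)")
```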
Statistical Power

Power = 1 - β
What it means: Probability of detecting a true effect
Typical target: 80% power (β = 0.20)
Increasing power:
- Larger sample size
- Larger effect size
- Higher α (at the cost of more Type I errors)
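A rough sketch of those three levers, using statsmodels' analytic power functions for a two-sample t-test (the effect sizes and sample sizes below are illustrative assumptions, not recommendations):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power rises with sample size (holding effect size and alpha fixed)...
for n in (20, 50, 100):
    p = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"n={n:3d}, d=0.5, alpha=0.05 -> power={p:.2f}")

# ...and with a larger effect size or a looser alpha (more Type I risk).
print(f"n= 50, d=0.8, alpha=0.05 -> power={analysis.power(effect_size=0.8, nobs1=50, alpha=0.05):.2f}")
print(f"n= 50, d=0.5, alpha=0.10 -> power={analysis.power(effect_size=0.5, nobs1=50, alpha=0.10):.2f}")
```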
Alpha Level
Common values:
- 0.05 (5%) - most common
- 0.01 (1%) - more strict
- 0.10 (10%) - more lenient
Choosing α: Depends on the cost of each type of error
Medical testing: Low α (avoid false positives)
Exploratory research: A higher α is acceptable
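To make the trade-off concrete, here is a small sketch (same assumed two-sample t-test design: d = 0.5, 50 per group) showing how a stricter α pushes β up:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.01, 0.05, 0.10):
    power = analysis.power(effect_size=0.5, nobs1=50, alpha=alpha)
    print(f"alpha={alpha:.2f} -> beta={1 - power:.2f}")  # stricter alpha, larger beta
```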
Sample Size and Power
Power analysis: Determine the sample size n needed for the desired power
Inputs:
- Effect size
- Alpha
- Desired power
Output: Required sample size
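A minimal power-analysis sketch with statsmodels, assuming a two-sample t-test, a Cohen's d of 0.5, α = 0.05, and a target power of 0.80:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size of a two-sample t-test.
n_required = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Required sample size per group: {n_required:.1f}")  # roughly 64 per group
```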
Real-World Implications
Medical trials:
- Type I: Approve a bad drug
- Type II: Reject a good drug
A/B testing:
- Type I: Launch a bad variant
- Type II: Miss a good variant
Quality control:
- Type I: Reject a good product
- Type II: Accept a bad product
Practice Exercise
Scenario: Testing if new feature increases conversions
Which is worse here:
- A Type I error?
- A Type II error?
Consider the cost of each.
Next Steps
Learn about Correlation!
Tip: There is no perfect test; there is always a trade-off between Type I and Type II errors!