The Effect-Size Benchmark That Matters Most: Education Interventions Often Fail
Abstract:
It is a healthy exercise to debate the merits of using effect-size benchmarks to interpret research findings. However, these debates obscure a more central insight that emerges from empirical distributions of effect-size estimates in the literature: Efforts to improve education often fail to move the needle. I find that 36% of effect sizes from randomized controlled trials of education interventions with standardized achievement outcomes are less than 0.05 SD. Publication bias surely masks many more failed efforts from our view. Recognizing the frequency of these failures should be at the core of any approach to interpreting the policy relevance of effect sizes. We can aim high without dismissing as trivial those effect sizes that represent more incremental improvements.
Citation:
Kraft, M. A. (2023). The Effect-Size Benchmark That Matters Most: Education Interventions Often Fail. Educational Researcher, 52(3), 183-187.