(Guest post by Greg Forster)
Today the U.S. Dept. of Education released the fourth annual report on the random-assignment evaluation of the DC voucher program, including academic results for the first two years of the program’s existence. As with last year’s report, across the whole population the voucher students had higher academic outcomes than the control group, but the positive results fell just short of the conventional threshold for statistical significance. This means that while the voucher students did in fact have higher test scores, we cannot be confident at the conventional 95 percent level that their higher scores are due to vouchers rather than a statistical fluke. This year it was the reading results that came closest, reaching 91 percent confidence. The study also finds statistically significant positive results for three subgroups, which together comprise 88 percent of the voucher population.
Since the previous year’s results were also not statistically significant, this update of the study doesn’t change the balance of the research on school choice. As before, there are a total of ten random-assignment studies of school vouchers, all ten of which found that the voucher students had higher academic achievement — eight with statistically significant positive results and two without.
In other words, school vouchers remain better supported by high-quality scientific evidence than any other education policy. If you reject vouchers because this study is only 91 percent confident that they produce academic improvements, you have no empirical grounds for supporting any other policy, since every other policy is far less well supported by the evidence.
In a few minutes you’ll be able to see the Friedman Foundation’s response to the DC study, including details and citations on all ten random-assignment studies of vouchers, here.