(Guest Post by Matthew Ladner)
The hidden highlight from the Evaluation of the DC Opportunity Scholarship Program: Impacts After Two Years report is buried in the Appendix, pp. E-1 to E-2:
Applying IV analytic methods to the experimental data from the evaluation, we find a statistically significant relationship between enrollment in a private school in year 2 and the following outcomes for groups of students and parents (table E-1):
• Reading achievement for students who applied from non-SINI schools; that is, among students from non-SINI schools, those who were enrolled in private school in year 2 scored 10.73 scale score points higher (ES = .30)^2 than those who were not in private school in year 2.
• Reading achievement for students who applied with relatively higher academic performance; the difference between those who were and were not attending private schools in year 2 was 8.36 scale score points (ES = .24).
• Parents’ perceptions of danger at their child’s school, with those whose children were enrolled in private schools in year 2 reporting 1.53 fewer areas of concern (ES = -.45) than those with children in the public schools.
• Parental satisfaction with schooling, such that, for example, parents are 20 percentage points more likely to give their child’s school a grade of A or B if the child was in a private school in year 2.
• Satisfaction with school for students who applied to the OSP from a SINI school; for example, they were 23 percentage points more likely to give their current school a grade of A or B if it was a private school.
I’m trying to figure out why the impact of actually using the voucher isn’t the focus of this study, and is instead relegated to an appendix. All the “mixed” results measure the impact of having been offered a scholarship, whether or not the student actually used it.
I’m going to walk way out on a limb here and predict that the impact on test scores of being offered but not using a voucher will be indistinguishable from zero. If this were a medical study, it would be as if we offered a drug to the treatment group, some patients chose not to take it, and we then measured the drug’s impact by pooling those who took it with those who didn’t. Holding the pill bottle can’t be presumed to have the same effect as taking the pills.
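The dilution is easy to see in a toy simulation. The numbers below are hypothetical, not the OSP data: a randomized trial where only part of the offered group actually takes up the treatment, so the offer-based (intent-to-treat) comparison understates the effect of actual use, while rescaling by the take-up rate (the Wald/IV approach used in the report’s appendix) recovers it.

```python
import random

random.seed(0)

N = 100_000          # students per arm (hypothetical)
TAKE_UP = 0.6        # fraction of the offered group that uses the voucher
TRUE_EFFECT = 10.0   # scale-score gain from actually using it (hypothetical)

# Control arm: baseline scores, no one treated.
control = [random.gauss(500, 30) for _ in range(N)]

# Offered arm: each student uses the voucher with probability TAKE_UP;
# only actual users receive the treatment effect.
used = [random.random() < TAKE_UP for _ in range(N)]
offered = [random.gauss(500, 30) + (TRUE_EFFECT if u else 0.0) for u in used]

def mean(xs):
    return sum(xs) / len(xs)

# Intent-to-treat: compares everyone who was offered against the control
# group, so non-users dilute the estimate toward TRUE_EFFECT * TAKE_UP.
itt = mean(offered) - mean(control)

# Wald/IV estimate: divide the ITT effect by the take-up rate to recover
# the effect of actually using the voucher.
iv = itt / mean(used)

print(f"ITT estimate: {itt:.2f}")  # near TRUE_EFFECT * TAKE_UP, about 6
print(f"IV estimate:  {iv:.2f}")   # near TRUE_EFFECT, about 10
```

The same arithmetic runs in reverse in the report’s main tables: with imperfect take-up, the students actually in private schools must post gains large enough to drag the whole offered group, users and non-users alike, past the control group.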
We’ve all been told that exercise is good for our health. Should we judge the effectiveness of exercise on health outcomes by what happens to those who actually exercise, or by the results for everyone who has been told it is good for them?
This shortcoming is corrected in the appendix, but that analysis is getting very little attention. The evaluation itself acknowledges the problem; on page 24 it reads:
Children in the treatment group who never used the OSP scholarship offered to them, or who did not use the scholarship consistently, could have remained in or transferred to a public charter school or traditional DC public school, or enrolled in a non-OSP-participating private school.
So in the report’s main discussion, the kids actually attending private schools have to make gains big enough to make up for the fact that many “treatment” kids are actually back in DCPS. As it turns out, several subsets of students do make such gains, but that’s not the point. The point is that we ought to be primarily concerned with whether actual utilization of the program improves education outcomes, and with the systemic effects of the program. We should indeed study who actually uses this program, who chooses not to, and why (very important information). But the offer-based analysis belongs in the appendix and the usage analysis in the body of the report, not the other way around.
Receiving an offer of a school voucher doesn’t constitute much of an education intervention, and it seems painfully obvious that the discussion around this report is conflating the impact of voucher offers with that of voucher use. The impact of voucher use is clear and positive.