(Guest Post by Jason Richwine)
The new experimental evaluation of the Louisiana voucher program poses a challenge to school-choice advocates like me. How do we explain the voucher students’ negative test outcomes – including a massive 0.4 SD drop in math scores – when evaluations in other cities showed neutral to mildly positive effects? Supporters have quickly coalesced around the explanation that Louisiana imposed regulations so suffocating that only the worst private schools participated in the voucher program.
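For a sense of scale: assuming roughly normally distributed scores, a 0.4 SD decline would move a student from the 50th percentile down to about the 34th. A quick back-of-the-envelope check:

```python
from statistics import NormalDist

# A student at the 50th percentile sits at z = 0; a 0.4 SD drop
# puts them at z = -0.4. Convert back to a percentile rank.
new_percentile = NormalDist().cdf(-0.4) * 100
print(round(new_percentile, 1))  # roughly 34.5
```

That is an enormous one-year movement by the standards of education research, where intervention effects of 0.1–0.2 SD are typically considered substantial.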
I find that explanation unsatisfactory for a few reasons. First, it feels like a post-hoc excuse. Yes, choice advocates have been warning about burdensome regulation for years, including in Louisiana, but how many predicted that the state’s voucher system would go down in flames because of it? The magnitude of the score declines must surprise even the most vociferous critics of regulation.
More importantly, if the participating private schools are so bad – and other people apparently knew they were bad, given the declining enrollments – then why did the voucher recipients choose them? Did the parents fail to research their options? Do they not value academics much at all? Blaming the results on an unusually bad set of private schools is tempting, but it creates the new problem of having to explain why parents made such dubious choices.
Personally, I do not find it plausible that school quality alone could have so much impact, especially in one year. The traits that students bring with them to school – natural abilities, resilience, family support networks – generally explain much more of the variance in student achievement than school quality. Only the absolute worst schools could have such deleterious effects, but there is no indication that the Louisiana voucher schools were the bottom of the barrel. Even if the one third of private schools that participated really were the worst third in the state, we are still talking about schools that are below average – not uniformly awful.
In trying to reconcile Louisiana with the successful experiments in DC, Milwaukee, Charlotte, etc., I suggest exploring other explanations. In particular, how well did the private schools align their curricula with the demands of the state tests? Maybe the private schools were simply teaching different material rather than teaching the state’s curriculum badly. Also worth examining is whether the randomization, which was conducted within a complicated set of priority levels for admission, was carried out appropriately. Another issue is how schools adapt after the first year of statewide implementation. And, remember, it is not uncommon for studies to change significantly from the working paper phase to publication. So let’s be patient. Explaining this anomalous study will require more research.
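To make concrete what randomization within priority levels means, here is a minimal sketch of a stratified lottery. The priority levels, seat counts, and applicant names are hypothetical illustrations, not Louisiana's actual procedure; the point is that the lottery runs separately inside each stratum, so treatment and control students are only comparable within the same priority level.

```python
import random

def stratified_lottery(applicants, seats_per_level, seed=0):
    """Run a separate lottery within each priority level.

    applicants: list of (name, priority_level) tuples
    seats_per_level: dict mapping priority level -> seats available
    Returns the list of lottery winners.
    """
    rng = random.Random(seed)
    # Group applicants by priority level (the strata).
    by_level = {}
    for name, level in applicants:
        by_level.setdefault(level, []).append(name)
    # Shuffle and award seats independently within each stratum.
    winners = []
    for level, pool in sorted(by_level.items()):
        rng.shuffle(pool)
        winners.extend(pool[: seats_per_level.get(level, 0)])
    return winners

# Hypothetical example: two priority levels, one seat in each.
applicants = [("Ana", 1), ("Ben", 1), ("Cal", 2), ("Dee", 2)]
print(stratified_lottery(applicants, {1: 1, 2: 1}))
```

If the analysis fails to account for these strata – say, by comparing winners from one priority level to losers from another – the "experimental" comparison is no longer apples to apples, which is why it is worth checking.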