Setting the Record Straight on Educational Choice

(Guest Post by Jason Bedrick)

Over the last couple of weeks, the New York Times has cranked its effort to discredit educational choice policies up to 11. The formula is simple: downplay the positive findings from all the previous gold-standard research and focus instead on more recent studies from Louisiana, Ohio, and Indiana — two of which are not random-assignment studies and one of which hasn’t even been released (not to mention the likelihood that overregulation is hampering Louisiana’s voucher program). Sadly, this distorted narrative is spreading, but some are pushing back. Yesterday, Paul DiPerna of EdChoice and Neal McCluskey of the Cato Institute each provided essential context for understanding the research on school choice.

In Education Next, DiPerna wrote:

Contrary to recent editorials in some major U.S. newspapers, the empirical research on school choice programs is far more positive than not. Summaries of the effects of multiple programs generally show positive effects, as does a meta-analysis of gold-standard experimental research on school choice by Shakeel, Anderson, and Wolf (2016). Participating students usually show modest improvements in reading or math test scores, or both. Annual gains are relatively small but cumulative over time. Graduation and college attendance rates are substantially higher for choice students compared to peers. Programs are almost always associated with improved test scores in affected public schools. They also save money. Those savings can be used to increase per-pupil spending in local school districts. Studies also consistently show that programs increase parent satisfaction, racial integration and civic outcomes.

It’s true that recent studies have reported some initial negative effects on choice students’ test scores. The most sobering come from the rigorous, experimental evaluation of the Louisiana Scholarship Program (LSP). The LSP has a different, much more restrictive regulatory framework for private schools than other choice programs. The negative results in math should be monitored, but it’s important to note that the evaluation is only in its second of seven planned years.

Broad perspective and context are essential. Negative initial findings in one or two locations, based solely on one performance metric, should not halt the creation or expansion of school choice programs in other parts of the country. Generalizing those findings across states is problematic because education is sensitive to state and local cultural, political, governmental and economic conditions. The many places where we have observed significant positive results from choice programs swamp the few where we have seen negative findings. We need to consider the complete research base and not disproportionately emphasize the most recent studies.

McCluskey also turned a gimlet eye on the studies that found negative impacts on test scores:

First, the vast majority of random-assignment studies of private school voucher programs—the “gold-standard” research method that even controls for unobserved factors like parental motivation—have found choice producing equivalent or superior academic results, usually for a fraction of what is spent on public schools. Pointing at three, as we shall see, very limited studies does not substantially change that track record.

Let’s look at the studies Carey highlighted: one on Louisiana’s voucher program, one on Ohio, and one on Indiana. Make that two studies: Carey cited Indiana findings without providing a link to, or title of, the research, and he did not identify the researchers. The Times did the same in their editorial. Why? Because the Indiana research has not been published. What Carey perhaps drew on was a piece by Mark Dynarski at the Brookings Institution. And what was that based on? Apparently, a 2015 academic conference presentation by R. Joseph Waddington and Mark Berends, who at the time were in the midst of analyzing Indiana’s program and who have not yet published their findings.

Next there is Ohio’s voucher program. The good news is that the research has been published, indeed by the choice-favoring Thomas B. Fordham Institute. And it does indicate that what the researchers were able to study revealed a negative effect on standardized tests. But Carey omitted two important aspects of the study. One, it found that choice had a modestly positive effect on public schools, spurring them to improve. Perhaps more important, because the research design was something called “regression discontinuity,” it was limited in what it was able to reliably determine. Basically, that design looks at performance clustered around some eligibility cut-off—in this case, public schools that just made or missed the performance level below which students became eligible for vouchers—so the analysis could not tell us about a whole lot of kids. Wrote the researchers: “We can only identify with relative confidence the estimated effects…for those students who had been attending the highest-performing EdChoice-eligible public schools and not those who would have been attending lower-performing public schools.”

That is a big limit.

Finally, we come to the Louisiana study, which was random-assignment. Frankly, its negative findings are not new information. The report came out over a year ago, and we at Cato have written and talked about it extensively. And there are huge caveats to the findings, including that the program’s heavy regulations—e.g., participating schools must give state tests to voucher recipients and become part of a state accountability system—likely encouraged many of the better private schools to stay out. There are also competing private choice programs in the Pelican State. In addition, the rules requiring participating private schools to administer state tests are new, and there is a good chance that participating institutions were still transitioning. Indeed, as Carey noted, the study showed private school outcomes improving from the first year to the second. That could well indicate that the schools are adjusting to the change. And as in Ohio, there was evidence that the program spurred some improvements in public schools.

Both blog posts are worth reading in full, but the main point is this: the research literature is generally positive. The few negative findings are disconcerting and should cause education reformers to think critically about policy design, but the literature still generally finds that students exercising school choice tend to perform as well as or better than their district school peers, they’re more likely to graduate high school and enroll in college, they’re less likely to be involved in crime, and all these positive effects come at a much lower cost per pupil to the taxpayer. Additionally, the overwhelming majority of studies find that choice programs have a modest but statistically significant positive effect on the performance of district schools.

Educational choice remains a win-win solution.
