Modest Programs Produce Modest Results . . . Duh.

HT perfect stranger @ FR

By Greg Forster & Jay Greene

Edwize is touting a new “meta-analysis” by Cecilia Rouse and Lisa Barrow claiming that existing voucher programs produce only modest gains in student learning.

Edwize quotes the National Center for the Study of Privatization in Education (NCSPE), which sponsored the paper and is handling its media push, describing the paper as a “comprehensive review of all the evaluations done on education voucher schemes in the United States.”

But the paper itself says something different: “we present a summary of selected findings…” (emphasis added). Given Edwize’s recent accusations about cherry-picking research, which are repeated in his post on the Rouse/Barrow paper, we thought he’d be more sensitive to the difference between a comprehensive review of the research and a review that merely presents selected findings. (By contrast, the reviews we listed here are comprehensive.)

Even more important, the Rouse/Barrow paper provides no information on the criteria the authors used to decide which voucher experiments, and which analyses of those experiments, to include among the “selected findings” they present, and which to exclude from their review. The paper includes participant effect studies from Milwaukee, Cleveland, DC, and New York City, but not very similar studies conducted on programs in Dayton or Charlotte. In New York it includes analyses by Mayer, Howell and Peterson, as well as Krueger and Zhu, but not by Barnard, et al. The paper includes systemic effect analyses from Milwaukee and Florida, but excludes analyses by Howell and Peterson as well as by Greene and Winters.

Clearly this paper is not intended to be, and indeed it does not even profess to be, a comprehensive review.

But even with its odd and unexplained selection of studies to include and exclude, Rouse and Barrow’s paper nevertheless finds generally positive results. They identified 7 statistically significant positive participant effects and 4 significant negative participant effects (all of which come from one study: Belfield’s analysis of Cleveland, which is non-experimental and therefore lower in scientific quality than the studies finding positive results for vouchers). In total, 16 of the 26 point estimates they report for participant effects are positive.

On systemic effects, they report 15 significant positive effects and no significant negative effects. Of the 20 point estimates, 16 are positive.

And yet they conclude that the evidence “is at best mixed.” If this were research on cancer therapies, mostly positive and often significant findings like these would never be described as “at best mixed.” At the very least, we would call them encouraging.

Moreover, the paper is not, and doesn’t claim to be, a “meta-analysis.” That term doesn’t even appear anywhere in the paper. It’s really just a research review, as the first sentence of the abstract clearly states (“In this article, we review the empirical evidence on…”). It looks like the term “meta-analysis,” like the phrase “comprehensive review,” was introduced by the NCSPE’s publicity materials.

What’s the difference? A meta-analysis performs an original analysis that draws together the data and/or findings of multiple previous studies, identified by a comprehensive review of the literature. Meta-analyses vary from simple (counting up the number of studies that find X and the number that find Y) to complex (using statistical methods to aggregate data or compare findings across studies), but what they all have in common is that they present new factual knowledge. A research review produces no new factual knowledge; its “conclusions” are just somebody’s opinion.
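To make that spectrum concrete, here is a minimal sketch in Python of both ends of it: a simple vote count and a fixed-effect, inverse-variance pooling of effect sizes. Everything in it is hypothetical – the effect sizes, standard errors, and study count are invented purely for illustration and have nothing to do with the Rouse/Barrow paper or any actual voucher study.

```python
# A minimal, hypothetical sketch of meta-analysis, from simple to complex.
# All numbers are invented for illustration; none come from any real study.
import math

# Each hypothetical study reports an effect size and its standard error.
studies = [(0.15, 0.06), (0.08, 0.10), (0.22, 0.09), (-0.03, 0.12)]

# The "simple" end of the spectrum: just count positive vs. negative findings.
positive = sum(1 for effect, _ in studies if effect > 0)
print(f"Vote count: {positive} of {len(studies)} point estimates are positive")

# The "complex" end: weight each study by the inverse of its variance (1/se^2),
# so more precise studies count for more in the pooled estimate.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# Either way, the output is new factual knowledge: a number no single
# study reported, derived from all of them together.
print(f"Pooled effect: {pooled:.3f} (standard error {pooled_se:.3f})")
print(f"z-statistic: {pooled / pooled_se:.2f}")
```

The point of the sketch is the final comment: whether a meta-analysis counts votes or pools variances, it ends with a quantity that was not in any of the input studies. A research review ends with a paragraph of judgment.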

There’s nothing wrong with researchers having opinions, as we have argued many times. It’s essential. But it’s even more essential to maintain a clear distinction between what is a fact and what is somebody’s opinion. Voucher opponents, as the saying goes, are entitled to their own opinions but not their own facts. (Judging by the way they conduct themselves, this may be news to some of them – for example, see Greg Anrig’s claims in the comment thread here.)

By falsely puffing this highly selective research review into a meta-analysis, NCSPE will deceive some people – especially journalists, who these days are often familiar with terms like “meta-analysis” and know what they mean, even if NCSPE doesn’t – into thinking that an original analysis has been performed and new factual knowledge is being contributed, when in fact this is just a repetition of the same statement of opinion that voucher opponents have been offering for years.

(We don’t blame Edwize for repeating NCSPE’s falsehood; there’s no shame in a layman not knowing the proper meaning of the technical terms used by scholars.)

And what about the merits of the opinion itself? The paper’s major claim, that the benefits of existing voucher programs are modest, is exactly what we have been saying for years. For example, in this study one of us wrote that “the benefits of school choice identified by these studies are sometimes moderate in size—not surprising, given that existing school choice programs are restricted to small numbers of students and limited to disadvantaged populations, hindering their ability to create a true marketplace that would produce dramatic innovation.”

And there’s the real rub. Existing programs are modest in size and scope. They are also modest in impact. Thank you, Captain Obvious.

The research review argues that because existing programs have a modest impact, we should be pessimistic about the potential of vouchers to improve education dramatically either for the students who use them or in public schools (although the review does acknowledge the extraordinary consensus in the empirical research showing that vouchers do improve public schools).

But why should we be pessimistic that a dramatic program would have a dramatic impact on the grounds that modest programs have a modest impact?

One of us recently offered a “modest proposal” that we try some major pilot programs for the unions’ big-spending B.B. approach and for universal vouchers (as opposed to the modest voucher programs we have now), and see which one works. He wrote: “Better designed and better funded voucher programs could give us a much better look at vouchers’ full effects. Existing programs have vouchers that are worth significantly less than per pupil spending in public schools, have caps on enrollments, and at least partially immunize public schools from the financial effects of competition. If we see positive results from such limited voucher programs, what might happen if we tried broader, bolder ones and carefully studied the results?”

Has Edwize managed to respond to that proposal yet? If he has, we haven’t seen it. Come on – if you’re really as confident as you profess to be that your policies are backed up by the empirical research and ours are not, what are you so afraid of?

And while we’re calling him out, here’s another challenge: in the random-assignment research on vouchers, the point gains identified for vouchers over periods of four years or less are generally either the same size as or larger than the point gains identified over four years for reduced class sizes in the Tennessee STAR experiment. Will Edwize say what he thinks of the relative size of the benefits identified from existing voucher programs and class size reduction in the empirical research?
