The Way of the Future: The Guide on the Side?

January 1, 2009

(Guest Post by Matthew Ladner)

Over at VoxEU, Lisa Barrow, Elizabeth Debraggio and Cecilia Rouse present a random-assignment study showing that computer-aided math instruction led to significantly higher scores for participating students.

We need more research on this, but it seems to support the notion that the schools of tomorrow may look very different from those of today. I'll wager that mixed models of technology-delivered instruction, in which a smaller number of highly skilled teachers serve as "guides on the side" rather than "sages on the stage," will ultimately become more prevalent.

Of course, that’s only a guess, but ultimately greater experimentation with different delivery methods will point new ways forward.


Modest Programs Produce Modest Results . . . Duh.

September 3, 2008

HT perfect stranger @ FR

By Greg Forster & Jay Greene

Edwize is touting a new “meta-analysis” by Cecilia Rouse and Lisa Barrow claiming that existing voucher programs produce only modest gains in student learning.

Edwize quotes the National Center for the Study of Privatization in Education (NCSPE), which sponsored the paper and is handling its media push, describing the paper as a “comprehensive review of all the evaluations done on education voucher schemes in the United States.”

But the paper itself says something different: “we present a summary of selected findings…” (emphasis added). Given Edwize’s recent accusations about cherry-picking research, which are repeated in his post on the Rouse/Barrow paper, we thought he’d be more sensitive to the difference between a comprehensive review of the research and a review that merely presents selected findings. (By contrast, the reviews we listed here are comprehensive.)

Even more important, the Rouse/Barrow paper provides no information on the criteria they used to decide which voucher experiments, and which analyses of those experiments, to include among the “selected findings” they present, and which to exclude from their review. The paper includes participant effect studies from Milwaukee, Cleveland, DC, and New York City, but does not include very similar studies conducted on programs in Dayton or Charlotte. In New York it includes analyses by Mayer, Howell and Peterson, as well as Krueger and Zhu, but not by Barnard et al. The paper includes systemic effect analyses from Milwaukee and Florida, but excludes analyses by Howell and Peterson as well as by Greene and Winters.

Clearly this paper is not intended to be, and indeed it does not even profess to be, a comprehensive review.

But even with its odd and unexplained selection of studies to include and exclude, Rouse and Barrow’s paper nevertheless finds generally positive results. They identified 7 statistically significant positive participant effects and 4 significant negative participant effects (all of which come from one study: Belfield’s analysis of Cleveland, which is non-experimental and therefore lower in scientific quality than the studies finding positive results for vouchers). In total, 16 of the 26 point estimates they report for participant effects are positive.

On systemic effects, they report 15 significant positive effects and no significant negative effects. Of the 20 point estimates, 16 are positive.

And yet they conclude that the evidence “is at best mixed.” If this were research on therapies for curing cancer, the mostly positive and often significant findings they identified would never be described as “at best mixed.” We would say they were encouraging at the very least.
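To put rough numbers on that intuition (our own back-of-the-envelope illustration, not a calculation from the Rouse/Barrow paper): if vouchers had no effect at all, then with conventional two-sided 5% tests only about 2.5% of point estimates should come up significantly positive by chance. The sketch below runs a simple binomial sign test on the tallies quoted above.

```python
from scipy.stats import binomtest

# Tallies quoted above from the Rouse/Barrow review:
# participant effects: 7 of 26 point estimates significantly positive;
# systemic effects: 15 of 20 significantly positive, none negative.
# Under a "no effect" null with two-sided 5% tests, only ~2.5% of
# estimates should land in the significantly-positive tail by chance.
participant = binomtest(7, n=26, p=0.025, alternative="greater")
systemic = binomtest(15, n=20, p=0.025, alternative="greater")

print(f"participant effects: P(7+ of 26 by chance)  = {participant.pvalue:.1e}")
print(f"systemic effects:    P(15+ of 20 by chance) = {systemic.pvalue:.1e}")
```

Estimates drawn from the same program and sample are not independent, so this overstates the precision; take it as intuition-building arithmetic rather than formal inference. Even so, these are not the tallies one would expect from a literature that is "at best mixed."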

Moreover, the paper is not, and doesn’t claim to be, a “meta-analysis.” That term doesn’t even appear anywhere in the paper. It’s really just a research review, as the first sentence of the abstract clearly states (“In this article, we review the empirical evidence on…”). It looks like the term “meta-analysis,” like the phrase “comprehensive review,” was introduced by the NCSPE’s publicity materials.

What’s the difference? A meta-analysis performs an original analysis that draws together the data and/or findings of multiple previous studies, identified by a comprehensive review of the literature. Meta-analyses vary from the simple (counting up the number of studies that find X and the number that find Y) to the complex (using statistical methods to aggregate data or compare findings across studies), but what they all have in common is that they produce new factual knowledge. A research review produces no new factual knowledge; its “conclusions” are just somebody’s opinion.
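To make the distinction concrete, here is a minimal sketch of the "complex" end of that spectrum: a standard fixed-effect (inverse-variance) pooling of study estimates. The numbers below are invented for illustration and have nothing to do with the voucher studies discussed here.

```python
import numpy as np

def fixed_effect_pool(effects, std_errors):
    """Pool study estimates using inverse-variance (fixed-effect) weights.

    Each study is weighted by 1/SE^2, so more precisely estimated
    studies count for more in the pooled estimate.
    """
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Hypothetical effect sizes (in standard deviations) and standard errors.
effects = [0.15, 0.08, -0.02, 0.20]
std_errors = [0.06, 0.05, 0.09, 0.10]

estimate, se = fixed_effect_pool(effects, std_errors)
print(f"pooled effect = {estimate:.3f} (SE = {se:.3f})")
```

The particular numbers don't matter; the operation does. A meta-analysis produces a new estimate with its own standard error, which is new factual knowledge; a narrative review only comments on estimates that already exist.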

There’s nothing wrong with researchers having opinions, as we have argued many times. It’s essential. But it’s even more essential to maintain a clear distinction between what is a fact and what is somebody’s opinion. Voucher opponents, as the saying goes, are entitled to their own opinions but not their own facts. (Judging by the way they conduct themselves, this may be news to some of them – for example, see Greg Anrig’s claims in the comment thread here.)

By falsely puffing this highly selective research review into a meta-analysis, NCSPE will deceive some people – especially journalists, who these days are often familiar with terms like “meta-analysis” and know what they mean, even if NCSPE doesn’t – into thinking that an original analysis has been performed and new factual knowledge is being contributed, when in fact this is just a repetition of the same statement of opinion that voucher opponents have been offering for years.

(We don’t blame Edwize for repeating NCSPE’s falsehood; there’s no shame in a layman not knowing the proper meaning of the technical terms used by scholars.)

And what about the merits of the opinion itself? The paper’s major claim, that the benefits of existing voucher programs are modest, is exactly what we have been saying for years. For example, in this study one of us wrote that “the benefits of school choice identified by these studies are sometimes moderate in size—not surprising, given that existing school choice programs are restricted to small numbers of students and limited to disadvantaged populations, hindering their ability to create a true marketplace that would produce dramatic innovation.”

And there’s the real rub. Existing programs are modest in size and scope. They are also modest in impact. Thank you, Captain Obvious.

The research review argues that because existing programs have a modest impact, we should be pessimistic about the potential of vouchers to improve education dramatically either for the students who use them or in public schools (although the review does acknowledge the extraordinary consensus in the empirical research showing that vouchers do improve public schools).

But why should we be pessimistic that a dramatic program would have a dramatic impact on the grounds that modest programs have a modest impact?

One of us recently offered a “modest proposal” that we try some major pilot programs for the unions’ big-spending B.B. approach and for universal vouchers (as opposed to the modest voucher programs we have now), and see which one works. He wrote: “Better designed and better funded voucher programs could give us a much better look at vouchers’ full effects. Existing programs have vouchers that are worth significantly less than per-pupil spending in public schools, have caps on enrollments, and at least partially immunize public schools from the financial effects of competition. If we see positive results from such limited voucher programs, what might happen if we could try broader, bolder ones and carefully study the results?”

Has Edwize managed to respond to that proposal yet? If he has, we haven’t seen it. Come on – if you’re really as confident as you profess to be that your policies are backed up by the empirical research and ours are not, what are you so afraid of?

And while we’re calling him out, here’s another challenge: in the random-assignment research on vouchers, the point gains identified for vouchers over periods of four years or less are generally either the same size as or larger than the point gains identified over four years for reduced class sizes in the Tennessee STAR experiment. Will Edwize say what he thinks of the relative size of the benefits identified from existing voucher programs and class size reduction in the empirical research?


Voucher Effects on Participants

August 21, 2008

(This is an update of a post I originally wrote on August 21.  I’ve included the new DC voucher findings.)

Here is what I believe is a complete (no cherry-picking) list of analyses that take advantage of random-assignment experiments to measure the effect of vouchers on participants. As I’ve previously written, 9 of the 10 analyses show significant, positive effects for at least some subgroups of students.

All of them have been published in peer reviewed journals or were subject to outside peer review by the federal government.

Four of the 10 studies are independent replications of earlier analyses. Cowen replicates Greene (2001). Rouse replicates Greene, Peterson, and Du. Barnard et al. replicate Peterson and Howell, as do Krueger and Zhu. All of these independent replications (except for Krueger and Zhu) confirm the basic findings of the original analyses by also finding positive effects.

Anyone interested in a more complete discussion of these 10 analyses, and in why it is important to focus on the random-assignment studies, should read Patrick Wolf’s article in the BYU Law Review, which has been reproduced here.

I’m eager to hear how Leo Casey and Eduwonkette, who’ve accused me of cherry-picking the evidence, respond.

  • These 6 studies conclude that all groups of student participants experienced reading or math achievement gains and/or increased likelihood of graduating from high school as a result of vouchers:

Cowen, Joshua M. 2008. “School Choice as a Latent Variable: Estimating the ‘Complier Average Causal Effect’ of Vouchers in Charlotte.” Policy Studies Journal 36(2).

Greene, Jay P. 2001. “Vouchers in Charlotte.” Education Matters 1(2): 55-60.

Greene, Jay P., Paul E. Peterson, and Jiangtao Du. 1999. “Effectiveness of School Choice: The Milwaukee Experiment.” Education and Urban Society 31 (January): 190-213.

Howell, William G., Patrick J. Wolf, David E. Campbell, and Paul E. Peterson. 2002. “School Vouchers and Academic Performance: Results from Three Randomized Field Trials.” Journal of Policy Analysis and Management 21 (April): 191-217. (Washington, DC: Gains for all participants, almost all of whom were African Americans)

Rouse, Cecilia E. 1998. “Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program.” The Quarterly Journal of Economics 113(2): 553-602.

Wolf, Patrick, Babette Gutmann, Michael Puma, Brian Kisida, Lou Rizzo, Nada Eissa, and Marsha Silverberg. March 2009. Evaluation of the DC Opportunity Scholarship Program: Impacts After Three Years. U.S. Department of Education, Institute of Education Sciences. Washington, DC: U.S. Government Printing Office. (In the fourth-year report the sample size shrank, so the positive achievement effect barely missed a strict threshold for statistical significance (p < .06, just missing the bar of p < .05). But that report was able for the first time to measure the effect of vouchers on the likelihood that students would graduate from high school, and it found that vouchers significantly boosted graduation rates. As Paul Peterson points out, this suggests that vouchers boosted both achievement and graduation rates in the fourth year. Read the fourth-year evaluation here.)

  • These 3 studies conclude that at least one important subgroup of student participants experienced achievement gains from the voucher and no subgroup of students was harmed:

Barnard, John, Constantine E. Frangakis, Jennifer L. Hill, and Donald B. Rubin. 2003. “Principal Stratification Approach to Broken Randomized Experiments: A Case Study of School Choice Vouchers in New York City.” Journal of the American Statistical Association 98(462): 299-323. (Gains for African Americans)

Howell, William G., Patrick J. Wolf, David E. Campbell, and Paul E. Peterson. 2002. “School Vouchers and Academic Performance: Results from Three Randomized Field Trials.” Journal of Policy Analysis and Management 21 (April): 191-217. (Dayton, Ohio: Gains for African Americans)

Peterson, Paul E., and William G. Howell. 2004. “Efficiency, Bias, and Classification Schemes: A Response to Alan B. Krueger and Pei Zhu.” American Behavioral Scientist 47(5): 699-717. (New York City: Gains for African Americans)

  • This 1 study concludes that no subgroup of student participants experienced achievement gains from the voucher:

Krueger, Alan B., and Pei Zhu. 2004. “Another Look at the New York City School Voucher Experiment.” American Behavioral Scientist 47(5): 658-698.

(Update: For a review of systemic effect research — how expanded competition affects achievement in traditional public schools — see here.)