Palin’s Palein’ on Education

October 3, 2008

(Guest post by Greg Forster)

I didn’t bother watching the debate, but from the comments around the web it looks like my prediction that there would be nothing really worth watching was accurate.

Cruising through the commentary, though, I came across this from Mickey Kaus:

Palin sounded like she was campaigning in Iowa for the teachers’ union vote when she talked about education. We need to spend more money. Pay teachers more. States need more “flexibility” in No Child Left Behind (“flexibility” to ignore it). I didn’t hear an actual single conservative principle, or even neoliberal principle. Pathetic.

So much for all that talk about how the McCain staff was overcoaching her. It’s remarkable – yet few seem to be remarking upon it, which is also remarkable – that Barack Obama is more of an education reformer than Palin. (At least, on paper he is. In practice they’re probably both about the same, which you can take as a compliment to Obama or as a criticism of Palin according to preference.) At any rate, her approach to education is pretty hard to square with McCain’s.

The lack of attention to this rather glaring contradiction, even by Palin detractors (and McCain/Palin detractors) who presumably have a motive to pay attention to it, shows just how irrelevant education has become as a national issue, at least for this cycle. Remember how big education was in 2000?

Good thing real reforms like school choice are winning big at the state level. The movement was wise not to bother showing up in DC for the big NCLB hullabaloo eight years ago. Now it's not tied to NCLB or, more generally, to the fortunes of education as a federal issue. I’ve heard some conservatives bash NCLB because it lacks serious choice components. But NCLB was never about choice. It seems clear that the choice components in Bush’s original proposal were only there to be given away as bargaining chips. The important question is, where would the school choice movement be now if it had tied itself to NCLB?


Special Ed Vouchers in NRO

September 9, 2008

I have a piece this morning on National Review Online about special education vouchers. 

Governor Palin said in her convention speech that she was going to be an advocate for special-needs kids in the White House.  I discuss what she should be an advocate for — special ed vouchers.


Modest Programs Produce Modest Results . . . Duh.

September 3, 2008

HT perfect stranger @ FR

By Greg Forster & Jay Greene

Edwize is touting a new “meta-analysis” by Cecilia Rouse and Lisa Barrow claiming that existing voucher programs produce only modest gains in student learning.

Edwize quotes the National Center for the Study of Privatization in Education (NCSPE), which sponsored the paper and is handling its media push, describing the paper as a “comprehensive review of all the evaluations done on education voucher schemes in the United States.”

But the paper itself says something different: “we present a summary of selected findings…” (emphasis added). Given Edwize’s recent accusations about cherry-picking research, which are repeated in his post on the Rouse/Barrow paper, we thought he’d be more sensitive to the difference between a comprehensive review of the research and a review that merely presents selected findings. (By contrast, the reviews we listed here are comprehensive.)

Even more important, the Rouse/Barrow paper provides no information on the criteria used to decide which voucher experiments, and which analyses of those experiments, to include among the “selected findings” it presents, and which to exclude from the review. The paper includes participant effect studies from Milwaukee, Cleveland, DC, and New York City, but does not include very similar studies conducted on programs in Dayton or Charlotte. In New York it includes analyses by Mayer, Howell and Peterson, as well as Krueger and Zhu, but not by Barnard, et al. The paper includes systemic effect analyses from Milwaukee and Florida, but excludes analyses by Howell and Peterson as well as by Greene and Winters.

Clearly this paper is not intended to be, and indeed it does not even profess to be, a comprehensive review.

But even with its odd and unexplained selection of studies to include and exclude, Rouse and Barrow’s paper nevertheless finds generally positive results. They identified 7 statistically significant positive participant effects and 4 significant negative participant effects (all of which come from one study: Belfield’s analysis of Cleveland, which is non-experimental and therefore lower in scientific quality than the studies finding positive results for vouchers). In total, 16 of the 26 point estimates they report for participant effects are positive.

On systemic effects, they report 15 significant positive effects and no significant negative effects. Of the 20 point estimates, 16 are positive.

And yet they conclude that the evidence “is at best mixed.” If this were research on therapies for curing cancer, the mostly positive and often significant findings they identified would never be described as “at best mixed.” We would say they were encouraging at the very least.

Moreover, the paper is not, and doesn’t claim to be, a “meta-analysis.” That term doesn’t even appear anywhere in the paper. It’s really just a research review, as the first sentence of the abstract clearly states (“In this article, we review the empirical evidence on…”). It looks like the term “meta-analysis,” like the phrase “comprehensive review,” was introduced by the NCSPE’s publicity materials.

What’s the difference? A meta-analysis performs an original analysis drawing together the data and/or findings of multiple previous studies, identified by a comprehensive review of the literature. Meta-analyses vary from simple (counting up the number of studies that find X and the number that find Y) to complex (using statistical methods to aggregate data or compare findings across studies), but what they all have in common is that they produce new factual knowledge. A research review produces no new factual knowledge; its “conclusions” are just somebody’s opinion.
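The simplest form of meta-analysis mentioned above, sometimes called “vote counting,” can be sketched in a few lines. This is a hypothetical illustration only; the study labels and numbers below are invented and are not taken from the Rouse/Barrow paper or any other source:

```python
# A minimal sketch of "vote counting" meta-analysis: tallying the direction
# and statistical significance of findings across studies. All study labels
# and estimates below are invented for illustration.
from collections import Counter

# Each entry: (study label, point estimate, statistically significant?)
findings = [
    ("Study A", 0.15, True),
    ("Study B", 0.08, False),
    ("Study C", -0.05, False),
    ("Study D", 0.22, True),
]

def vote_count(findings):
    """Count positive vs. negative point estimates, and how many of each
    direction reach statistical significance."""
    tally = Counter()
    for _, estimate, significant in findings:
        direction = "positive" if estimate > 0 else "negative"
        tally[direction] += 1
        if significant:
            tally[f"significant {direction}"] += 1
    return tally

print(vote_count(findings))
```

Even this crude tally generates a new fact about the literature as a whole (how many findings point which way), which is what distinguishes it from a review that merely narrates selected results.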

There’s nothing wrong with researchers having opinions, as we have argued many times. It’s essential. But it’s even more essential to maintain a clear distinction between what is a fact and what is somebody’s opinion. Voucher opponents, as the saying goes, are entitled to their own opinions but not their own facts. (Judging by the way they conduct themselves, this may be news to some of them – for example, see Greg Anrig’s claims in the comment thread here.)

By falsely puffing this highly selective research review into a meta-analysis, NCSPE will deceive some people – especially journalists, who these days are often familiar with terms like “meta-analysis” and know what they mean, even if NCSPE doesn’t – into thinking that an original analysis has been performed and new factual knowledge is being contributed, when in fact this is just a repetition of the same statement of opinion that voucher opponents have been offering for years.

(We don’t blame Edwize for repeating NCSPE’s falsehood; there’s no shame in a layman not knowing the proper meaning of the technical terms used by scholars.)

And what about the merits of the opinion itself? The paper’s major claim, that the benefits of existing voucher programs are modest, is exactly what we have been saying for years. For example, in this study one of us wrote that “the benefits of school choice identified by these studies are sometimes moderate in size—not surprising, given that existing school choice programs are restricted to small numbers of students and limited to disadvantaged populations, hindering their ability to create a true marketplace that would produce dramatic innovation.”

And there’s the real rub. Existing programs are modest in size and scope. They are also modest in impact. Thank you, Captain Obvious.

The research review argues that because existing programs have a modest impact, we should be pessimistic about the potential of vouchers to improve education dramatically either for the students who use them or in public schools (although the review does acknowledge the extraordinary consensus in the empirical research showing that vouchers do improve public schools).

But why should we be pessimistic that a dramatic program would have a dramatic impact on the grounds that modest programs have a modest impact?

One of us recently offered a “modest proposal” that we try some major pilot programs for the unions’ big-spending B.B. approach and for universal vouchers (as opposed to the modest voucher programs we have now), and see which one works. He wrote: “Better designed and better funded voucher programs could give us a much better look at vouchers’ full effects. Existing programs have vouchers that are worth significantly less than per pupil spending in public schools, have caps on enrollments, and at least partially immunize public schools from the financial effects of competition. If we see positive results from such limited voucher programs, what might happen if we could try broader, bolder ones and carefully study the results?”

Has Edwize managed to respond to that proposal yet? If he has, we haven’t seen it. Come on – if you’re really as confident as you profess to be that your policies are backed up by the empirical research and ours are not, what are you so afraid of?

And while we’re calling him out, here’s another challenge: in the random-assignment research on vouchers, the point gains identified for vouchers over periods of four years or less are generally either the same size as or larger than the point gains identified over four years for reduced class sizes in the Tennessee STAR experiment. Will Edwize say what he thinks of the relative size of the benefits identified from existing voucher programs and class size reduction in the empirical research?


Systemic Effects of Vouchers

August 25, 2008

In an earlier post I listed all analyses of the effects of U.S. vouchers on program participants using random-assignment experiments.  Those studies tell us about what happens to the academic achievement of students who receive vouchers.  But we all recognize that expanding choice and competition with vouchers may also have significant effects on students who remain in traditional public schools.  Here is a brief summary of the research on that question.

In general, the evidence on systemic effects (how expanding choice and competition affects the performance of traditional public schools) has more methodological limitations than participant effects studies.  We haven’t been able to randomly assign school districts to increased competition, so we have more serious problems with drawing causal inferences.  Even devising accurate measures of the extent of competition has been problematic.  That being said, the findings on systemic effects, like those on participant effects, are generally positive and almost never negative.

Even in the absence of choice programs traditional public schools are exposed to some amount of competition.  They may compete with public schools in other districts or with nearby private schools.  A relatively large number of studies have examined this naturally occurring variation in competition.  To avoid being accused of cherry-picking this evidence I’ll rely on the review of that literature conducted by Henry Levin and Clive Belfield.  Here is the abstract of their review, in full:

“This article systematically reviews U.S. evidence from cross-sectional research on educational outcomes when schools must compete with each other. Competition typically is measured by using either the Herfindahl Index or the enrollment rate at an alternative school choice. Outcomes are academic test scores, graduation/attainment, expenditures/efficiency, teacher quality, students’ post-school wages, and local housing prices. The sampling strategy identified more than 41 relevant empirical studies. A sizable majority report beneficial effects of competition, and many report statistically significant correlations. For each study, the effect size of an increase of competition by one standard deviation is reported. The positive gains from competition are modest in scope with respect to realistic changes in levels of competition. The review also notes several methodological challenges and recommends caution in reasoning from point estimates to public policy.”

There have also been a number of studies that have examined the effect of expanding competition or the threat of competition on public schools from voucher programs in Milwaukee and Florida.  Here are all of the major studies of systemic effects of which I am aware from voucher programs in the US:

Milwaukee

Martin Carnoy, et al., “Vouchers and Public School Performance,” Economic Policy Institute, October 2007;

Rajashri Chakrabarti, “Can Increasing Private School Participation and Monetary Loss in a Voucher Program Affect Public School Performance? Evidence from Milwaukee,” Federal Reserve Bank of New York, 2007; (forthcoming in the Journal of Public Economics)

Caroline Minter Hoxby, “The Rising Tide,” Education Next, Winter 2001;

Jay P. Greene and Ryan H. Marsh, “The Effect of Milwaukee’s Parental Choice Program on Student Achievement in Milwaukee Public Schools,” School Choice Demonstration Project Report, March 2009.

Florida

Rajashri Chakrabarti, “Vouchers, Public School Response and the Role of Incentives: Evidence from Florida,” Federal Reserve Bank of New York Staff Report, Number 306, October 2007;

Jay P. Greene and Marcus A. Winters, “Competition Passes the Test,” Education Next, Summer 2004;

Cecilia Elena Rouse, Jane Hannaway, Dan Goldhaber, and David Figlio, “Feeling the Heat: How Low Performing Schools Respond to Voucher and Accountability Pressure,” CALDER Working Paper 13, Urban Institute, November 2007;

Martin West and Paul Peterson, “The Efficacy of Choice Threats Within School Accountability Systems,” Harvard PEPG Working Paper 05-01, March 23, 2005; (subsequently published in The Economic Journal, March, 2006)

Jay P. Greene and Marcus A. Winters, “The Effect of Special Education Vouchers on Public School Achievement: Evidence From Florida’s McKay Scholarship Program”  Manhattan Institute, Civic Report Number 52, April 2008. (looks only at voucher program for disabled students)

Cassandra Hart and David Figlio, “Does Competition Improve Public Schools?” Education Next, Winter, 2011.

Every one of these 10 studies finds positive systemic effects.  It is important to note that Rouse, et al are ambiguous as to whether they attribute the improvements observed to competition or to the stigma of Florida’s accountability system.  The other five Florida studies perform analyses that support the conclusion that the gains were from competitive pressure rather than simply from stigma.

Also, Carnoy, et al confirm Chakrabarti’s finding that Milwaukee public schools improved as the voucher program expanded, but they emphasize that those gains did not continue to increase as the program expanded further (nor did they disappear).  They find this lack of continued improvement worrisome and believe it undermines confidence in the initial positive response to competition that they and others have observed.  This, along with other analyses using different measures of competition that produce null results, leads them to conclude that the overall effect is null — even though they do confirm Chakrabarti’s finding of a positive effect.

I would also add that Greg Forster and I have a study of systemic effects in Milwaukee and Greg has a new study of systemic effects from the voucher program in Ohio.  And Greg also has a neat study that shows that schools previously threatened with voucher competition slipped after Florida’s Supreme Court struck down the voucher provision.  All of these studies also show positive systemic effects, but since they have not undergone external review and since I do not want to overstate the evidence, I’ve left them out of the above list of studies.  People who, after reading them, have confidence in these three studies should add them to the list of studies on systemic effects.

The bottom line is that none of the studies of systemic effects from voucher programs finds negative effects on student achievement in public schools from voucher competition.  The bulk of the evidence, both from studies of voucher programs and from variation in existing competition among public schools, supports the conclusion that expanding competition improves student achievement.

(Updated 3/3/11 to include the new Florida study)


Voucher Effects on Participants

August 21, 2008

(This is an update of a post I originally wrote on August 21.  I’ve included the new DC voucher findings.)

Here is what I believe is a complete (no cherry-picking) list of analyses taking advantage of random-assignment experiments of the effect of vouchers on participants.  As I’ve previously written, 9 of the 10 analyses show significant, positive effects for at least some subgroups of students.

All of them have been published in peer reviewed journals or were subject to outside peer review by the federal government.

Four of the 10 studies are independent replications of earlier analyses.  Cowen replicates Greene, 2001.  Rouse replicates Greene, Peterson, and Du.  Barnard, et al replicate Peterson and Howell.  And Krueger and Zhu also replicate Peterson and Howell.  All of these independent replications (except for Krueger and Zhu) confirm the basic findings of the original analyses by also finding positive effects.

Anyone interested in a more complete discussion of these 10 analyses and why it is important to focus on the random-assignment studies should read Patrick Wolf’s article in the BYU Law Review that has been reproduced here.

I’m eager to hear how Leo Casey and Eduwonkette, who’ve accused me of cherry-picking the evidence, respond.

  • These 6 studies conclude that all groups of student participants experienced reading or math achievement gains and/or increased likelihood of graduating from high school as a result of vouchers:

Cowen, Joshua M.  2008. “School Choice as a Latent Variable: Estimating the ‘Complier Average Causal Effect’ of Vouchers in Charlotte.” Policy Studies Journal 36 (2).

Greene, Jay P. 2001. “Vouchers in Charlotte,” Education Matters 1 (2):55-60.

Greene, Jay P., Paul E. Peterson, and Jiangtao Du. 1999. “Effectiveness of School Choice: The Milwaukee Experiment.” Education and Urban Society, 31, January, pp. 190-213.

Howell, William G., Patrick J. Wolf, David E. Campbell, and Paul E. Peterson. 2002. “School Vouchers and Academic Performance:  Results from Three Randomized Field Trials.” Journal of Policy Analysis and Management, 21, April, pp. 191-217. (Washington, DC: Gains for all participants, almost all were African Americans)

Rouse, Cecilia E. 1998. “Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program,” The Quarterly Journal of Economics, 113(2): 553-602.

Wolf, Patrick, Babette Gutmann, Michael Puma, Brian Kisida, Lou Rizzo, Nada Eissa, and Marsha Silverberg. March 2009.  Evaluation of the DC Opportunity Scholarship Program: Impacts After Three Years. U.S. Department of Education, Institute of Education Sciences. Washington, DC: U.S. Government Printing Office. (In the fourth year report the sample size shrank so that the positive achievement effect barely missed a strict threshold for statistical significance — p < .06, just missing the bar of p < .05.  But this new report was able for the first time to measure the effect of vouchers on the likelihood that students would graduate high school.  As it turns out, vouchers significantly boosted high school graduation rates.  As Paul Peterson points out, this suggests that vouchers boosted both achievement and graduation rates in the 4th year.  Read the 4th year evaluation here.)

  • These 3 studies conclude that at least one important sub-group of student participants experienced achievement gains from the voucher and no subgroup of students was harmed:

Barnard, John, Constantine E. Frangakis, Jennifer L. Hill, and Donald B. Rubin. 2003. “Principal Stratification Approach to Broken Randomized Experiments: A Case Study of School Choice Vouchers in New York City,” Journal of the American Statistical Association 98 (462):299–323. (Gains for African Americans)

Howell, William G., Patrick J. Wolf, David E. Campbell, and Paul E. Peterson. 2002. “School Vouchers and Academic Performance:  Results from Three Randomized Field Trials.” Journal of Policy Analysis and Management, 21, April, pp. 191-217. (Dayton, Ohio: Gains for African Americans)

Peterson, Paul E., and William G. Howell. 2004. “Efficiency, Bias, and Classification Schemes: A Response to Alan B. Krueger and Pei Zhu.” American Behavioral Scientist, 47(5): 699-717.  (New York City: Gains for African Americans)

  • This 1 study concludes that no sub-group of student participants experienced achievement gains from the voucher:

Krueger, Alan B., and Pei Zhu. 2004. “Another Look at the New York City School Voucher Experiment,” The American Behavioral Scientist 47 (5):658–698.

(Update: For a review of systemic effect research — how expanded competition affects achievement in traditional public schools — see here.)


Yet Another Study Finds Vouchers Improve Public Schools

August 21, 2008

(Guest post by Greg Forster)

The Friedman Foundation has just released my new study showing that Ohio’s EdChoice voucher program had a positive impact on academic outcomes in public schools. I’m told that it has generated a number of news hits, though the only reporter to interview me so far was the author of this piece in the Columbus Dispatch. When she interviewed me I thought she was hostile, because her questions put me a little off balance, but the article is perfectly fair. I guess if the reporter is doing her job right, the interviewees ought to feel like they were being challenged. The final product is what counts.

The positive results that I found from the EdChoice program were substantial but not revolutionary. That’s not surprising, given that 1) failing-schools vouchers aren’t the optimum way to structure voucher programs in the first place, and 2) the data were from the program’s first year, when it was smaller and more restricted than it is now.

It’s too early to be sure, but among the large body of empirical studies consistently showing that vouchers improve public schools, a pattern seems to be emerging that voucher programs have a bigger impact on public schools when they’re larger, more universal, and have fewer obstacles to parental participation. That’s worth watching and studying further as opportunities arise.


False Claims of Cherry Picking are the Pits

August 20, 2008

Leo Casey over at Edwize is urging me to join the “United Cherry Pickers” union because he thinks I’ve cherry picked the evidence on vouchers in a previous post.  This sounds like a great deal if my dues, like those from AFT and NEA members, can contribute to paying for skyboxes for Leo and his buddies at the Democratic National Convention to make up for the convention’s shortfall of $10 million.  Where do I sign up?

Making a charge of cherry picking is easy.  Substantiating it requires, well, uhm, evidence.  Evidence isn’t exactly Leo Casey’s strong suit.

I said that there have been 10 analyses of random assignment voucher experiments.  I said that 9 of those 10 analyses show significant, positive effects (at least for some subgroups).  If I am cherry picking, which random assignment analyses am I leaving out? 

Leo Casey then asserts: “Serious research conducted by respected scholars without an ideological axe to grind has consistently found every major voucher experiment in the United States wanting. John Witte’s and Cecilia Rouse’s definitive analyses of the Milwaukee voucher program and the Indiana University studies of the Cleveland voucher program have shown no meaningful educational performance advantage for students in those two high profile, large scale voucher programs.”

Neither Witte nor the IU studies analyzed random-assignment experiments, making it harder to have confidence in their results, which is why I focus on the 10 analyses using the gold-standard approach. 

Rouse’s study did examine a random-assignment experiment, but Casey mischaracterizes her findings.  She writes: “I find that students in the Milwaukee Parental Choice Program had faster math score gains than, but similar reading score gains to, the comparison groups. The results appear robust to data imputations and sample attrition, although these deficiencies of the data should be kept in mind when interpreting the results.”   Remember, Casey falsely claims that she finds “no meaningful educational performance advantage for students.”

Casey also mischaracterizes my citation of Belfield and Levin’s findings: “[He even cites research that is not on the subject of vouchers: Hank Levin will be most surprised to learn that his research ‘supports’ vouchers.]” 

Since I actually bothered to quote Belfield and Levin’s findings about the effects of expanding choice and competition, I don’t think Hank Levin will be the least bit surprised to read what he wrote.  I’ll repeat the quotation here so that no one is shocked: “A sizable majority of these studies report beneficial effects of competition across all outcomes… The above evidence shows reasonably consistent evidence of a link between competition (choice) and education quality. Increased competition and higher educational quality are positively correlated.”

If Leo Casey is going to make the charge of cherry picking and improperly citing evidence, he has to deliver proof of those charges.  To the contrary, the facts indicate that Casey is the one cherry picking and improperly citing research.

Is there a union for playing fast and loose with the truth?  Maybe Leo Casey should join it.  Oh, I forgot.  He’s already a member of the AFT.

(Links added)


The Vitamin C of Education

August 20, 2008

Earlier this week I made my Modest Proposal for B.B. (Broader, Bolder or is it Buying Bananas?).  I noted that Randi Weingarten denounced vouchers as a waste of time despite considerable evidence supporting them, while she embraced the B.B. idea of community schools despite there being absolutely no evidence to support the claim that public schools could improve achievement by expanding their mission to include a host of social services.

Given the lack of evidence for B.B. I generously : ) offered to support a series of large pilot studies of the community schools approach, if Weingarten, Leo Casey, and the B.B. crowd would agree to a similar series of large pilot voucher programs as a way of learning more about both reform strategies.  No word yet but perhaps their internet is broken (just try unplugging it and plugging it back in).

Shital Shah from the Coalition for Community Schools, however, sent me a nice note with a link to a report claiming to contain the evidence supporting their approach.  After reviewing the report I still see virtually no evidence to give us confidence that public schools can increase student achievement by offering everything from legal assistance to health care.

In Appendix B the report lists 21 studies of the community school approach.  Seven of them have no student achievement outcomes.  Seven examine student test scores but only make pre/post comparisons without any control group.  And another seven have comparison groups but none employ random assignment, regression discontinuity, or another rigorous research design.  Four of those seven just compare achievement at schools using the B.B. approach to city or statewide averages.  And of the seven studies with some kind of control group, two find null effects, and another finds null effects in math, with effects in reading appearing only among schools with “high implementation” of the approach.  The quality (and quantity) of the evidence supporting community schools is no greater than what we could find to support the healing power of crystals.

I understand why Randi Weingarten or Leo Casey would be pushing the educational equivalent of crystal healing.  Their job is to advocate for the interests of their union, not to make fair and reasonable assessments of research claims.  If schools expand their mission to include providing health care and other social services just think of all of the dues-paying nurses and social workers they could add to their rolls.

The greater mystery is why normally tough-minded and rigorous researchers, like Jim Heckman and Diane Ravitch, would sign on to this approach entirely lacking empirical support.  Heckman won the Nobel Prize for Economics for crying out loud.  But then again Linus Pauling won the Nobel Prize for Chemistry and later became a public advocate for mega doses of vitamin C to cure cancer, another intervention completely unsupported by rigorous evidence.

I’ll repeat that I am not against trying the B.B. community school approach with large pilot programs that are carefully studied.  I just can’t see why normally smart people would fully endorse untested approaches while ignoring other interventions, like expanding choice and competition in education, which have considerably more supporting evidence.

(edited for typos)


A Modest Proposal for B.B.

August 18, 2008

The advocates of B.B. (Broader, Bolder; or is it Bigger Budgets? or is it Bloated Behemoth?) have yet to muster the evidence to support widespread implementation of their vision to expand the mission of schools to include health care, legal assistance, and other social services. They do present background papers showing that children who suffer from social problems fare worse academically, but they have not shown that public schools are capable of addressing those social problems and increasing student learning.

And if you dare to question whether there is evidence about the effectiveness of public schools providing social services in order to raise achievement, you are accused of being opposed to “better social and economic environments for children.” Right. And if you question the effectiveness of central economic planning are you also then opposed to a better economy? And if you question the effectiveness of an untested drug therapy are you then opposed to quality health-care?

To help the B.B. crowd generate the evidence one would need before pursuing a reform agenda on a large-scale, I have a modest proposal. How about if we have a dozen large-scale, well-funded pilot programs of the “community school” concept advocated by B.B.? And, at the same time let’s have a dozen large-scale, well-funded pilot voucher programs. We’ll carefully evaluate the effects of both to learn about whether one, the other, or both are things that we should try on an even larger scale.

I’m all for trying out new ideas and carefully evaluating the results. I can’t imagine why the backers of B.B. wouldn’t want to do the same. So as soon as Larry Mishel at the union-funded Economic Policy Institute, Randi Weingarten of the AFT, and Leo Casey of the AFT’s blog, Edwize, endorse my modest proposal, we’ll all get behind the idea of trying new approaches and studying their effects — “community schools” and vouchers.

Wait, my psychic powers are picking something up. I expect that some might say we’ve already tried vouchers and they haven’t worked. In fact, Randi Weingarten just wrote something very much like that when she declared in the NY Daily News that vouchers “have not been shown by any credible research to improve student achievement.” Let’s leave aside that there have been 10 random assignment evaluations (the gold-standard in research) of voucher programs and 9 show significant positive effects, at least for certain sub-groups of students. And let’s leave aside that 3 of those analyses are independent replications of earlier studies that confirm the basic positive findings of the original analyses (and 1 replication does not). And let’s leave aside that 6 of those 10 studies have been published in peer-reviewed journals (including the QJE, the Journal of the American Statistical Association, and the Policy Studies Journal), three in a Brookings book, and one in a federal government report (even if Chris Lubienski somehow denies that any of this constitutes real peer-review). And let’s leave aside that there have been more than 200 analyses of the effects of expanding choice and competition, which Clive Belfield and Henry Levin reviewed and concluded: “A sizable majority of these studies report beneficial effects of competition across all outcomes… The above evidence shows reasonably consistent evidence of a link between competition (choice) and education quality. Increased competition and higher educational quality are positively correlated.”

Let’s leave all of that aside and ask Randi Weingarten how many random-assignment studies of the community school concept she has. Uhm, none. How many evaluations of community schools, period? Uhm, still none. But that doesn’t stop her from drawing the definitive conclusion: “Through partnerships with universities, nonprofit groups and other organizations, community schools provide the learning conditions and resources that support effective instruction and bring crucial services to an entire community.” How does she know?

But I’m eager to help her and all of us learn about community schools if she is willing to do the same to learn about vouchers. Better designed and better funded voucher programs could give us a much better look at vouchers’ full effects. Existing programs have vouchers that are worth significantly less than per pupil spending in public schools, have caps on enrollments, and at least partially immunize public schools from the financial effects of competition. If we see positive results from such limited voucher programs, what might happen if we could try broader, bolder ones and carefully study the results?

And if community schools really deliver all that is being promised, great, let’s do that too. But if our goal is to do what works, why not give both ideas a real try?

(Link added)


Best. Choice. Argument. Ever.

August 6, 2008

Brilliant.

(HT, Stuart Buck and Lydia McGrew at http://www.whatswrongwiththeworld.net/2008/08/great_video_clip_on_government.html#comments )