Special Ed Vouchers in NRO

September 9, 2008

I have a piece this morning on National Review Online about special education vouchers. 

Governor Palin said in her convention speech that she was going to be an advocate for special-needs kids in the White House.  I discuss what she should be an advocate for — special ed vouchers.


PJM on Merit Pay in D.C.

September 8, 2008

(Guest post by Greg Forster)

Today, Pajamas Media carries my column on Michelle Rhee’s push for a limited, voluntary merit pay system in Washington D.C.:

To see how much has changed, just consider the amazing fact that about one out of every three public-school students in D.C. attends a charter school — government-owned but non-unionized, privately operated, and (most important of all) chosen by parents — instead of a regular public school. “We lost 6,000 students last year,” says Parker, referring to the number of students who moved from regular schools to charters. Six thousand students is over 13% of the city’s remaining enrollment in regular public schools — in one year.

Rhee isn’t the force behind charter schools or vouchers in D.C. She’s in charge of the regular public system. But the same widespread mandate for reform that made charters and vouchers successful has allowed Rhee to succeed with reforms like closing schools that were only there to create patronage jobs, introducing curriculum innovation, and taking on the unbelievable amount of bureaucratic waste in the system. And as vouchers and charters have sent a message that the system can’t take students for granted any more, the pressure for reform has only increased — strengthening Rhee’s hand.

By coincidence, the Washington Post’s Marc Fisher has a column today emphasizing how the explosion of charter schools in D.C. was decisive in bringing the unions to the bargaining table, even on the issue of reforming the structure of teacher pay. Just as competition from globalization forced the private sector unions to start the long, slow process of giving up the ridiculous extravagances that they won from management in the 1960s and 1970s, thus rescuing the American economy from disaster, now competition in schooling is forcing the teachers’ unions to start the same process of giving up their own ridiculous extravagances – the biggest of all being a system of hiring, firing and pay that bears no serious relationship to job performance.


Modest Programs Produce Modest Results . . . Duh.

September 3, 2008

HT perfect stranger @ FR

By Greg Forster & Jay Greene

Edwize is touting a new “meta-analysis” by Cecilia Rouse and Lisa Barrow claiming that existing voucher programs produce only modest gains in student learning.

Edwize quotes the National Center for the Study of Privatization in Education (NCSPE), which sponsored the paper and is handling its media push, describing the paper as a “comprehensive review of all the evaluations done on education voucher schemes in the United States.”

But the paper itself says something different: “we present a summary of selected findings…” (emphasis added). Given Edwize’s recent accusations about cherry-picking research, which are repeated in his post on the Rouse/Barrow paper, we thought he’d be more sensitive to the difference between a comprehensive review of the research and a review that merely presents selected findings. (By contrast, the reviews we listed here are comprehensive.)

Even more important, the Rouse/Barrow paper provides no information on the criteria they used to decide which voucher experiments, and which analyses of those experiments, to include among the “selected findings” they present, and which to exclude from their review. The paper includes participant effect studies from Milwaukee, Cleveland, DC, and New York City, but does not include very similar studies conducted on programs in Dayton or Charlotte. In New York it includes analyses by Mayer, Howell and Peterson, as well as Krueger and Zhu, but not by Barnard, et al. The paper includes systemic effect analyses from Milwaukee and Florida, but excludes analyses by Howell and Peterson as well as by Greene and Winters.

Clearly this paper is not intended to be, and indeed it does not even profess to be, a comprehensive review.

But even with its odd and unexplained selection of studies to include and exclude, Rouse and Barrow’s paper nevertheless finds generally positive results. They identified 7 statistically significant positive participant effects and 4 significant negative participant effects (all of which come from one study: Belfield’s analysis of Cleveland, which is non-experimental and therefore lower in scientific quality than the studies finding positive results for vouchers). In total, 16 of the 26 point estimates they report for participant effects are positive.

On systemic effects, they report 15 significant positive effects and no significant negative effects. Of the 20 point estimates, 16 are positive.

And yet they conclude that the evidence “is at best mixed.” If this were research on therapies for curing cancer, the mostly positive and often significant findings they identified would never be described as “at best mixed.” We would say they were encouraging at the very least.

Moreover, the paper is not, and doesn’t claim to be, a “meta-analysis.” That term doesn’t even appear anywhere in the paper. It’s really just a research review, as the first sentence of the abstract clearly states (“In this article, we review the empirical evidence on…”). It looks like the term “meta-analysis,” like the phrase “comprehensive review,” was introduced by the NCSPE’s publicity materials.

What’s the difference? A meta-analysis performs an original analysis drawing together the data and/or findings of multiple previous studies, identified by a comprehensive review of the literature. The “conclusions” of a research review are just somebody’s opinion. Meta-analyses vary from simple (counting up the number of studies that find X and the number that find Y) to complex (using statistical methods to aggregate data or compare findings across studies). But what they all have in common is that they present new factual knowledge. A research review produces no new factual knowledge; it just states opinions.
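To make the distinction concrete, here is a minimal sketch of what the “complex” end of meta-analysis actually does: pooling effect estimates from multiple studies with inverse-variance (fixed-effect) weighting, so that more precise studies count for more. The effect sizes and standard errors below are hypothetical illustrations, not the Rouse/Barrow data.

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Pool study effect estimates via inverse-variance (fixed-effect) weighting.

    Each study is weighted by 1 / SE^2, so more precise studies
    contribute more to the pooled estimate. Returns the pooled
    effect and its standard error.
    """
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies: effect sizes (in standard deviations)
# and their standard errors.
pooled, se = fixed_effect_meta([0.10, 0.05, 0.20], [0.04, 0.06, 0.08])
```

Even the “simple” vote-counting version (tallying how many studies find positive versus negative effects) produces a new factual summary of the literature, which is what distinguishes both forms from a narrative review that merely states conclusions.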

There’s nothing wrong with researchers having opinions, as we have argued many times. It’s essential. But it’s even more essential to maintain a clear distinction between what is a fact and what is somebody’s opinion. Voucher opponents, as the saying goes, are entitled to their own opinions but not their own facts. (Judging by the way they conduct themselves, this may be news to some of them – for example, see Greg Anrig’s claims in the comment thread here.)

By falsely puffing this highly selective research review into a meta-analysis, NCSPE will deceive some people – especially journalists, who these days are often familiar with terms like “meta-analysis” and know what they mean, even if NCSPE doesn’t – into thinking that an original analysis has been performed and new factual knowledge is being contributed, when in fact this is just a repetition of the same statement of opinion that voucher opponents have been offering for years.

(We don’t blame Edwize for repeating NCSPE’s falsehood; there’s no shame in a layman not knowing the proper meaning of the technical terms used by scholars.)

And what about the merits of the opinion itself? The paper’s major claim, that the benefits of existing voucher programs are modest, is exactly what we have been saying for years. For example, in this study one of us wrote that “the benefits of school choice identified by these studies are sometimes moderate in size—not surprising, given that existing school choice programs are restricted to small numbers of students and limited to disadvantaged populations, hindering their ability to create a true marketplace that would produce dramatic innovation.”

And there’s the real rub. Existing programs are modest in size and scope. They are also modest in impact. Thank you, Captain Obvious.

The research review argues that because existing programs have a modest impact, we should be pessimistic about the potential of vouchers to improve education dramatically either for the students who use them or in public schools (although the review does acknowledge the extraordinary consensus in the empirical research showing that vouchers do improve public schools).

But why should we be pessimistic that a dramatic program would have a dramatic impact on the grounds that modest programs have a modest impact?

One of us recently offered a “modest proposal” that we try some major pilot programs for the unions’ big-spending B.B. approach and for universal vouchers (as opposed to the modest voucher programs we have now), and see which one works. He wrote: “Better designed and better funded voucher programs could give us a much better look at vouchers’ full effects. Existing programs have vouchers that are worth significantly less than per pupil spending in public schools, have caps on enrollments, and at least partially immunize public schools from the financial effects of competition. If we see positive results from such limited voucher programs, what might happen if we could try broader, bolder ones and carefully studied the results?”

Has Edwize managed to respond to that proposal yet? If he has, we haven’t seen it. Come on – if you’re really as confident as you profess to be that your policies are backed up by the empirical research and ours are not, what are you so afraid of?

And while we’re calling him out, here’s another challenge: in the random-assignment research on vouchers, the point gains identified for vouchers over periods of four years or less are generally either the same size as or larger than the point gains identified over four years for reduced class sizes in the Tennessee STAR experiment. Will Edwize say what he thinks of the relative size of the benefits identified from existing voucher programs and class size reduction in the empirical research?


The Meta-List: An Incomplete List of Complete Lists

August 27, 2008

“The Treachery of Images,” René Magritte, 1928-29 (“This is not a pipe.”)

(Guest post by Greg Forster)

Jay posted two “complete lists” of voucher research this week, and a number of people seem to have found them helpful. Jay and I have both spent a lot of time circulating these lists for years (they change over time, of course, as new research gets done). We keep on thinking we’ve circulated these lists so much that there can’t be much use in circulating them further, yet we keep on finding more people who say, “Wow, I’ve never seen anything like this before, this is really helpful!”

Well, if people found those two lists helpful, maybe they’d like to see some of the other lists that have been compiled. So here’s a meta-list: a list of complete lists of research.

Of course, this is not a complete list of the complete lists. If anyone wants to add more in the comment section, that will help make this page even more useful. And I’ll come back and update the list as needed, so that this page will remain a useful resource for people looking for all the research on vouchers.

Though no doubt others will think that my list of complete lists isn’t nearly complete enough. I hope they’ll compile their own lists of complete lists – the more the merrier. And when there are enough lists of complete lists out there, we’ll need to make a list of them, so that people can keep track of them all . . .

Of course, these lists are all “complete to my knowledge.” There may always be a study lurking out there that hasn’t been noticed – although on the voucher issue that’s a somewhat more remote possibility than it is with other issues.

Last year I made an effort to summarize all the research on all the issues relating to vouchers in this study. The sections covering random-assignment studies of voucher participants and studies of how vouchers affect public schools are now out of date, but the report will point you to a bunch of other studies on issues that don’t have enough of a body of research – or have too much of a body of research – to generate a “complete list.” For example, you’ll find a discussion of the evidence on questions like the fiscal impact of voucher programs, and whether vouchers provide all students with access to schooling.

On those last two subjects – fiscal impacts and whether the private school sector provides broad, inclusive access to schooling for all students – the Friedman Foundation offers handy guides (here and here) and references to the research issues (here and here).

And finally, here is a meta-list that will point you to a bunch of complete lists of research on issues related to vouchers. Personally, I’ve found this resource to be the most helpful of all.

NOTE: This post is edited as needed to keep it up to date.


Systemic Effects of Vouchers

August 25, 2008

In an earlier post I listed all analyses of the effects of U.S. vouchers on program participants using random-assignment experiments.  Those studies tell us about what happens to the academic achievement of students who receive vouchers.  But we all recognize that expanding choice and competition with vouchers may also have significant effects on students who remain in traditional public schools.  Here is a brief summary of the research on that question.

In general, the evidence on systemic effects (how expanding choice and competition affects the performance of traditional public schools) has more methodological limitations than participant effects studies.  We haven’t been able to randomly assign school districts to increased competition, so we have more serious problems with drawing causal inferences.  Even devising accurate measures of the extent of competition has been problematic.  That being said, the findings on systemic effects, like those on participant effects, are generally positive and almost never negative.

Even in the absence of choice programs traditional public schools are exposed to some amount of competition.  They may compete with public schools in other districts or with nearby private schools.  A relatively large number of studies have examined this naturally occurring variation in competition.  To avoid being accused of cherry-picking this evidence I’ll rely on the review of that literature conducted by Henry Levin and Clive Belfield.  Here is the abstract of their review, in full:

“This article systematically reviews U.S. evidence from cross-sectional research on educational outcomes when schools must compete with each other. Competition typically is measured by using either the Herfindahl Index or the enrollment rate at an alternative school choice. Outcomes are academic test scores, graduation/attainment, expenditures/efficiency, teacher quality, students’ post-school wages, and local housing prices. The sampling strategy identified more than 41 relevant empirical studies. A sizable majority report beneficial effects of competition, and many report statistically significant correlations. For each study, the effect size of an increase of competition by one standard deviation is reported. The positive gains from competition are modest in scope with respect to realistic changes in levels of competition. The review also notes several methodological challenges and recommends caution in reasoning from point estimates to public policy.”
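For readers unfamiliar with the Herfindahl Index mentioned in the abstract, it is a standard concentration measure: the sum of squared market shares, here computed over school enrollments. A value near 1 means one school dominates (little competition); lower values mean enrollment is spread across more schools. This is an illustrative sketch with made-up enrollment numbers, not a calculation from any of the studies reviewed.

```python
def herfindahl_index(enrollments):
    """Compute the Herfindahl Index from a list of school enrollments.

    Each school's market share is its enrollment divided by total
    enrollment; the index is the sum of squared shares.
    """
    total = sum(enrollments)
    shares = [e / total for e in enrollments]
    return sum(s ** 2 for s in shares)

# A market dominated by one large school vs. an evenly split one.
concentrated = herfindahl_index([900, 50, 50])        # high concentration
competitive = herfindahl_index([250, 250, 250, 250])  # evenly split: 0.25
```

Studies in this literature typically treat a lower index (more dispersed enrollment, hence more competition) as the explanatory variable and test its correlation with the outcomes listed in the abstract.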

There have also been a number of studies that have examined the effect of expanding competition or the threat of competition on public schools from voucher programs in Milwaukee and Florida.  Here are all of the major studies of systemic effects of which I am aware from voucher programs in the US:

Milwaukee

Martin Carnoy, et al “Vouchers and Public School Performance,” Economic Policy Institute, October 2007;

Rajashri Chakrabarti, “Can Increasing Private School Participation and Monetary Loss in a Voucher Program Affect Public School Performance? Evidence from Milwaukee,” Federal Reserve Bank of New York, 2007; (forthcoming in the Journal of Public Economics)

Caroline Minter Hoxby, “The Rising Tide,” Education Next, Winter 2001;

Jay P. Greene and Ryan H. Marsh, “The Effect of Milwaukee’s Parental Choice Program on Student Achievement in Milwaukee Public Schools,” School Choice Demonstration Project Report, March 2009.

Florida

Rajashri Chakrabarti, “Vouchers, Public School Response and the Role of Incentives: Evidence from Florida,” Federal Reserve Bank of New York Staff Report, Number 306, October 2007;

Jay P. Greene and Marcus A. Winters, “Competition Passes the Test,” Education Next, Summer 2004;

Cecilia Elena Rouse, Jane Hannaway, Dan Goldhaber, and David Figlio, “Feeling the Heat: How Low Performing Schools Respond to Voucher and Accountability Pressure,” CALDER Working Paper 13, Urban Institute, November 2007;

Martin West and Paul Peterson, “The Efficacy of Choice Threats Within School Accountability Systems,” Harvard PEPG Working Paper 05-01, March 23, 2005; (subsequently published in The Economic Journal, March, 2006)

Jay P. Greene and Marcus A. Winters, “The Effect of Special Education Vouchers on Public School Achievement: Evidence From Florida’s McKay Scholarship Program”  Manhattan Institute, Civic Report Number 52, April 2008. (looks only at voucher program for disabled students)

Cassandra Hart and David Figlio, “Does Competition Improve Public Schools?” Education Next, Winter, 2011.

Every one of these 10 studies finds positive systemic effects.  It is important to note that Rouse, et al are ambiguous as to whether they attribute the improvements observed to competition or to the stigma of Florida’s accountability system.  The other five Florida studies perform analyses that support the conclusion that the gains were from competitive pressure rather than simply from stigma.

Also Carnoy, et al confirm Chakrabarti’s finding that Milwaukee public schools improved as the voucher program expanded, but they emphasize that those gains did not continue to increase as the program expanded further (nor did those gains disappear).  They find this lack of continued improvement worrisome and believe that it undermines confidence one could have in the initial positive reaction from competition that they and others have observed.  This and other analyses using different measures of competition with null results lead them to conclude that overall there is a null effect  — even though they do confirm Chakrabarti’s finding of a positive effect.

I would also add that Greg Forster and I have a study of systemic effects in Milwaukee and Greg has a new study of systemic effects from the voucher program in Ohio.  And Greg also has a neat study that shows that schools previously threatened with voucher competition slipped after Florida’s Supreme Court struck down the voucher provision.  All of these studies also show positive systemic effects, but since they have not undergone external review and since I do not want to overstate the evidence, I’ve left them out of the above list of studies.  People who, after reading them, have confidence in these three studies should add them to the list of studies on systemic effects.

The bottom line is that none of the studies of systemic effects from voucher programs finds negative effects on student achievement in public schools from voucher competition.  The bulk of the evidence, both from studies of voucher programs and from variation in existing competition among public schools, supports the conclusion that expanding competition improves student achievement.

(Updated 3/3/11 to include the new Florida study)


Demography Is Not Destiny

August 22, 2008

(Guest Post by Matthew Ladner)

The Pacific Research Institute has put out a new study co-authored by PRI Senior Fellow Vicki Murray and some guy from Arizona comparing trends in academic achievement in California to those in Florida. Among the findings: Florida’s Hispanic students outscore the statewide average for all students in California on NAEP’s 4th Grade Reading Exam. Also, Florida’s Free and Reduced lunch eligible Hispanics outscore the statewide average for all students in California. After a decade of strong improvement in Florida, Florida’s African-American students are within striking distance of the statewide average for all students in California, and have already exceeded the statewide averages for all students in Louisiana and Mississippi.

Oh, and Florida’s free or reduced lunch eligible students attending inner city schools outscore the statewide average for all California students.

The point of all of this is not to bash California public schools, but instead to show just how much entirely plausible room for improvement exists. The question isn’t whether disadvantaged kids can learn. Yes they can! The question is whether we adults can get our acts together for the kids.


Voucher Effects on Participants

August 21, 2008

(This is an update of a post I originally wrote on August 21.  I’ve included the new DC voucher findings.)

Here is what I believe is a complete (no cherry-picking) list of analyses taking advantage of random-assignment experiments of the effect of vouchers on participants.  As I’ve previously written, 9 of the 10 analyses show significant, positive effects for at least some subgroups of students.

All of them have been published in peer reviewed journals or were subject to outside peer review by the federal government.

Four of the 10 studies are independent replications of earlier analyses.  Cowen replicates Greene, 2001.  Rouse replicates Greene, Peterson, and Du.  Barnard, et al replicate Peterson and Howell.  And Krueger and Zhu also replicate Peterson and Howell.  All of these independent replications (except for Krueger and Zhu) confirm the basic findings of the original analyses by also finding positive effects.

Anyone interested in a more complete discussion of these 10 analyses and why it is important to focus on the random-assignment studies, should read Patrick Wolf’s article in the BYU Law Review that has been reproduced here.

I’m eager to hear how Leo Casey and Eduwonkette, who’ve accused me of cherry-picking the evidence, respond.

  • These 6 studies conclude that all groups of student participants experienced reading or math achievement gains and/or increased likelihood of graduating from high school as a result of vouchers:

Cowen, Joshua M.  2008. “School Choice as a Latent Variable: Estimating the ‘Complier Average Causal Effect’ of Vouchers in Charlotte.” Policy Studies Journal 36 (2).

Greene, Jay P. 2001. “Vouchers in Charlotte,” Education Matters 1 (2):55-60.

Greene, Jay P., Paul E. Peterson, and Jiangtao Du. 1999. “Effectiveness of School Choice: The Milwaukee Experiment.” Education and Urban Society, 31, January, pp. 190-213.

Howell, William G., Patrick J. Wolf, David E. Campbell, and Paul E. Peterson. 2002. “School Vouchers and Academic Performance:  Results from Three Randomized Field Trials.” Journal of Policy Analysis and Management, 21, April, pp. 191-217. (Washington, DC: Gains for all participants, almost all were African Americans)

Rouse, Cecilia E. 1998. “Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program,” The Quarterly Journal of Economics, 113(2): 553-602.

Wolf, Patrick, Babette Gutmann, Michael Puma, Brian Kisida, Lou Rizzo, Nada Eissa, and Marsha Silverberg. March 2009.  Evaluation of the DC Opportunity Scholarship Program: Impacts After Three Years. U.S. Department of Education, Institute of Education Sciences. Washington, DC: U.S. Government Printing Office. (In the fourth year report the sample size shrank so that the positive achievement effect barely missed meeting a strict threshold for statistical significance — p < .06, just missing the bar of p < .05.  But this new report was able for the first time to measure the effect of vouchers on the likelihood that students would graduate high school.  As it turns out, vouchers significantly boosted high school graduation rates.  As Paul Peterson points out, this suggests that vouchers boosted both achievement and graduation rates in the 4th year.  Read the 4th year evaluation here.)

  • These 3 studies conclude that at least one important sub-group of student participants experienced achievement gains from the voucher and no subgroup of students was harmed:

Barnard, John, Constantine E. Frangakis, Jennifer L. Hill, and Donald B. Rubin. 2003. “Principal Stratification Approach to Broken Randomized Experiments: A Case Study of School Choice Vouchers in New York City,” Journal of the American Statistical Association 98 (462):299–323. (Gains for African Americans)

Howell, William G., Patrick J. Wolf, David E. Campbell, and Paul E. Peterson. 2002. “School Vouchers and Academic Performance:  Results from Three Randomized Field Trials.” Journal of Policy Analysis and Management, 21, April, pp. 191-217. (Dayton, Ohio: Gains for African Americans)

Peterson, Paul E., and William G. Howell. 2004. “Efficiency, Bias, and Classification Schemes: A Response to Alan B. Krueger and Pei Zhu.” American Behavioral Scientist, 47(5): 699-717.  (New York City: Gains for African Americans)

This 1 study concludes that no sub-group of student participants experienced achievement gains from the voucher:

Krueger, Alan B., and Pei Zhu. 2004. “Another Look at the New York City School Voucher Experiment,” The American Behavioral Scientist 47 (5):658–698.

(Update: For a review of systemic effect research — how expanded competition affects achievement in traditional public schools — see here.)


Yet Another Study Finds Vouchers Improve Public Schools

August 21, 2008

(Guest post by Greg Forster)

The Friedman Foundation has just released my new study showing that Ohio’s EdChoice voucher program had a positive impact on academic outcomes in public schools. I’m told that it has generated a number of news hits, though the only reporter to interview me so far was the author of this piece in the Columbus Dispatch. When she interviewed me I thought she was hostile, because her questions put me a little off balance, but the article is perfectly fair. I guess if the reporter is doing her job right, the interviewees ought to feel like they were being challenged. The final product is what counts.

The positive results that I found from the EdChoice program were substantial but not revolutionary. That’s not surprising, given that 1) failing-schools vouchers aren’t the optimum way to structure voucher programs in the first place, and 2) the data were from the program’s first year, when it was smaller and more restricted than it is now.

It’s too early to be sure, but among the large body of empirical studies consistently showing that vouchers improve public schools, a pattern seems to be emerging that voucher programs have a bigger impact on public schools when they’re larger, more universal, and have fewer obstacles to parental participation. That’s worth watching and studying further as opportunities arise.


False Claims of Cherry Picking are the Pits

August 20, 2008

Leo Casey over at Edwize is urging me to join the “United Cherry Pickers” union because he thinks I’ve cherry picked the evidence on vouchers in a previous post.  This sounds like a great deal if my dues, like those from AFT and NEA members, can contribute to paying for skyboxes for Leo and his buddies at the Democratic National Convention to make up for the convention’s shortfall of $10 million.  Where do I sign up?

Making a charge of cherry picking is easy.  Substantiating it requires, well, uhm, evidence.  Evidence isn’t exactly Leo Casey’s strong suit.

I said that there have been 10 analyses of random assignment voucher experiments.  I said that 9 of those 10 analyses show significant, positive effects (at least for some subgroups).  If I am cherry picking, which random assignment analyses am I leaving out? 

Leo Casey then asserts: “Serious research conducted by respected scholars without an ideological axe to grind has consistently found every major voucher experiment in the United States wanting. John Witte’s and Cecilia Rouse’s definitive analyses of the Milwaukee voucher program and the Indiana University studies of the Cleveland voucher program have shown no meaningful educational performance advantage for students in those two high profile, large scale voucher programs.”

Neither Witte nor the IU studies analyzed random-assignment experiments, making it harder to have confidence in their results, which is why I focus on the 10 analyses using the gold-standard approach. 

Rouse’s study did examine a random-assignment experiment, but Casey mischaracterizes her findings.  She writes: “I find that students in the Milwaukee Parental Choice Program had faster math score gains than, but similar reading score gains to, the comparison groups. The results appear robust to data imputations and sample attrition, although these deficiencies of the data should be kept in mind when interpreting the results.”   Remember, Casey falsely claims that she finds “no meaningful educational performance advantage for students.”

Casey also mischaracterizes my citation of Belfield and Levin’s findings: “[He even cites research that is not on the subject of vouchers: Hank Levin will be most surprised to learn that his research ‘supports’ vouchers.]” 

Since I actually bothered to quote Belfield and Levin’s findings about the effects of expanding choice and competition, I don’t think Hank Levin will be the least bit surprised to read what he wrote.  I’ll repeat the quotation here so that no one is shocked: “A sizable majority of these studies report beneficial effects of competition across all outcomes… The above evidence shows reasonably consistent evidence of a link between competition (choice) and education quality. Increased competition and higher educational quality are positively correlated.”

If Leo Casey is going to make the charge of cherry picking and improperly citing evidence, he has to deliver proof of those charges.  To the contrary, the facts indicate that Casey is the one cherry picking and improperly citing research.

Is there a union for playing fast and loose with the truth?  Maybe Leo Casey should join it.  Oh, I forgot.  He’s already a member of the AFT.

(Links added)


The Vitamin C of Education

August 20, 2008

Earlier this week I made my Modest Proposal for B.B. (Broader, Bolder or is it Buying Bananas?).  I noted that Randi Weingarten denounced vouchers as a waste of time despite considerable evidence supporting them, while she embraced the B.B. idea of community schools despite there being absolutely no evidence to support the claim that public schools could improve achievement by expanding their mission to include a host of social services.

Given the lack of evidence for B.B. I generously : ) offered to support a series of large pilot studies of the community schools approach, if Weingarten, Leo Casey, and the B.B. crowd would agree to a similar series of large pilot voucher programs as a way of learning more about both reform strategies.  No word yet but perhaps their internet is broken (just try unplugging it and plugging it back in).

Shital Shah from the Coalition for Community Schools, however, sent me a nice note with a link to a report claiming to contain the evidence supporting their approach.  After reviewing the report I still see virtually no evidence to give us confidence that public schools can increase student achievement by offering everything from legal assistance to health care.

In Appendix B the report lists 21 studies of the community school approach.  Seven of them have no student achievement outcomes.  Seven examine student test scores but only make pre/post comparisons without any control group.  And another seven have comparison groups but none employ random assignment, regression discontinuity, or another rigorous research design.  Four of those seven just compare achievement at schools using the B.B. approach to city or statewide averages.  And of the seven studies with some kind of control group, two find null effects, and another finds null effects in math, with reading gains only among schools with “high implementation” of the approach.  The quality (and quantity) of the evidence supporting community schools is no greater than what we could find to support the healing power of crystals.

I understand why Randi Weingarten or Leo Casey would be pushing the educational equivalent of crystal healing.  Their job is to advocate for the interests of their union, not to make fair and reasonable assessments of research claims.  If schools expand their mission to include providing health care and other social services just think of all of the dues-paying nurses and social workers they could add to their rolls.

The greater mystery is why normally tough-minded and rigorous researchers, like Jim Heckman and Diane Ravitch, would sign on to this approach entirely lacking empirical support.  Heckman won the Nobel Prize for Economics for crying out loud.  But then again Linus Pauling won the Nobel Prize for Chemistry and later became a public advocate for mega doses of vitamin C to cure cancer, another intervention completely unsupported by rigorous evidence.

I’ll repeat that I am not against trying the B.B. community school approach with large pilot programs that are carefully studied.  I just can’t see why normally smart people would fully endorse untested approaches while ignoring other interventions, like expanding choice and competition in education, which have considerably more supporting evidence.

(edited for typos)