Yet Another Study Finds Vouchers Improve Public Schools

August 21, 2008

(Guest post by Greg Forster)

The Friedman Foundation has just released my new study showing that Ohio’s EdChoice voucher program had a positive impact on academic outcomes in public schools. I’m told it has generated a number of news hits, though the only reporter to interview me so far was the author of this piece in the Columbus Dispatch. During the interview I thought she was hostile, because her questions put me a little off balance, but the article is perfectly fair. I suppose if a reporter is doing her job right, the interviewee ought to feel challenged. The final product is what counts.

The positive results I found from the EdChoice program were substantial but not revolutionary. That’s not surprising, given that 1) failing-schools vouchers aren’t the optimal way to structure a voucher program in the first place, and 2) the data were from the program’s first year, when it was smaller and more restricted than it is now.

It’s too early to be sure, but across the large body of empirical studies consistently showing that vouchers improve public schools, a pattern seems to be emerging: voucher programs have a bigger impact on public schools when they are larger, closer to universal, and pose fewer obstacles to parental participation. That’s worth watching and studying further as opportunities arise.


Vouchers: Evidence and Ideology

May 8, 2008

(Guest post by Greg Forster)

 

Lately, Robert Enlow and I at the Friedman Foundation for Educational Choice have had to spend a lot of time responding to the erroneous claims Sol Stern has been making about school choice. I honestly hate to be going up against Sol Stern right at the moment when he’s doing important work in other areas. America owes Stern a debt for doing the basic journalistic work on Bill Ayers that most journalists covering the presidential race didn’t seem interested in doing.

 

But what can we do? We didn’t choose this fight. If Stern is going to make a bunch of false claims about school choice, it’s our responsibility to make sure people have access to the facts and the evidence that show he’s wrong.

 

That’s why Enlow and I have focused primarily on using data and evidence to demonstrate that Stern’s claims are directly contrary to the known facts. It’s been interesting to see how Stern and his defenders are responding.

 

I’ve been saddened at how little effort Stern and his many defenders are devoting to seriously addressing the evidence we present. For example, all the studies of the effects of vouchers on public schools that were conducted outside the city of Milwaukee have been completely ignored both by Stern and by every one of his defenders I’ve seen so far. Does evidence outside Milwaukee not count for some reason? Since most of the studies on this subject have been outside Milwaukee, this arbitrary focus on Milwaukee is hard to swallow.

 

And what about the studies in Milwaukee? All of them had positive findings: vouchers improve public schools. Unfortunately, Stern and his defenders fail to engage with these studies seriously.

 

Stern had argued in his original article that school choice doesn’t improve public schools, on the grounds that the aggregate performance of schools in Milwaukee is still bad. His critics pointed out that a large body of high-quality empirical research finds that vouchers have a positive effect on public schools, both in Milwaukee and elsewhere. If Milwaukee schools are still bad, that doesn’t prove vouchers aren’t helping; and since the research says they do help, the obvious conclusion to reach – if we are going to be guided by the data – is that other factors are dragging down Milwaukee school performance at the same time vouchers are pulling it upward.

 

If an asthma patient starts using medicine, and at the same time takes up smoking, his overall health may not improve. But that doesn’t mean the medicine is no good. I also think that there may be a “neighborhood effect” in Milwaukee, since eligibility for the program isn’t spread evenly over the whole city.

 

There’s new research forthcoming in Milwaukee that I hope will shed more light on the particular reasons the city’s aggregate performance hasn’t improved while vouchers have exerted a positive influence on it. The important point is that all the science on this subject (with one exception, in D.C., which I’ve been careful to take note of when discussing the evidence) finds in favor of vouchers.

 

In Stern’s follow-up defense of his original article, his “response,” if you can call it that, is to repeat his original point – that the aggregate performance of Milwaukee schools citywide is still generally bad.

 

He disguises his failure to respond to his critics’ argument by making a big deal out of dates. He says that all the studies in Milwaukee are at least six years old (which is actually not very old by the standards of education research), and then provides some more recent data on the citywide aggregate performance of Milwaukee schools. But this obviously has nothing to do with the question; Stern’s critics agree that the aggregate data show Milwaukee schools are still bad. The question is whether vouchers exert a positive or negative effect. Aggregate data are irrelevant; only causal studies can address the question.
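The distinction between raw aggregates and causal estimates can be made concrete with a toy example. The sketch below uses entirely invented numbers – it is my illustration, not any of the cited studies, and bears no relation to actual Milwaukee data. It builds a synthetic dataset in which vouchers genuinely help while a separate citywide shock drags every school down; the aggregate average falls anyway, yet a simple regression that controls for the shock still recovers the positive voucher effect.

```python
# Illustrative sketch (all numbers synthetic): why a raw aggregate trend
# cannot answer the causal question, while a regression that controls
# for other factors can.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
year = rng.integers(0, 2, n)      # 0 = earlier year, 1 = later year
exposed = rng.integers(0, 2, n)   # 1 = school faces voucher competition

# True data-generating process: voucher exposure helps (+3 points), but a
# citywide shock (-6 points) drags every school down in the later year.
score = 50 + 3 * exposed - 6 * year + rng.normal(0, 2, n)

# Aggregate view: average scores still fall, so the program "looks" useless.
agg_change = score[year == 1].mean() - score[year == 0].mean()   # negative

# Causal view: regress score on exposure while controlling for the year
# shock; the exposure coefficient is roughly +3 despite the falling average.
X = np.column_stack([np.ones(n), exposed, year])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
effect = beta[1]
```

This is the asthma-patient situation in miniature: the medicine is working even though overall health declines.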

 

Of course it’s easy to produce more up-to-date data if you’re not going to use scientific methods to distinguish the influence of different factors and ensure the accuracy of your analysis. If you don’t care about all that science stuff, there’s no need to wait for studies to be conducted; last year’s raw data will do fine.

 

Weak as this is, at least it talks about the evidence. The response to our use of facts and evidence has overwhelmingly been to accuse school choice supporters of ideological closed-mindedness. Although we are appealing to facts and evidence, we are accused of being unwilling to confront the facts and evidence – accused by people who themselves do not engage with the facts and evidence to which we appeal.

 

Stern, for example, complains at length that “school choice had become a secular faith, requiring enforced discipline” and “unity through an enforced code of silence.” Apparently when we demonstrate that his assertions are factually false, we are enforcing silence upon him. (We’ve been so successful in silencing Stern that he is now a darling of the New York Times. If he thinks this is silence, he should get his hearing checked.)

 

Similarly, when Stern’s claims received uncritical coverage from Daniel Casse in the Weekly Standard, Enlow and Neal McCluskey wrote in to correct the record. Casse responded by claiming, erroneously, that Stern had already addressed their arguments in his rebuttal.

 

Casse also repeated, in an abbreviated form, Stern’s non-response on the subject of the empirical studies in Milwaukee – and in so doing he changed it from a non-response to an error. He erroneously claims that Stern responded to our studies by citing the “most recent studies.” But Stern cites no studies; he just cites raw data. It’s not a study until you conduct a statistical analysis to distinguish the influence of particular factors (like vouchers) from the raw aggregate results – kind of like the analyses conducted in the studies that we cite and that Stern and Casse dismiss without serious discussion.

 

Casse then praised Stern’s article because “it dealt with the facts on the ground” and accused school choice supporters of “reciting the school choice catechism.”

 

Greg Anrig, in this Washington Monthly article, actually manages to broach the subject of the scientific quality of one of the Milwaukee studies. Unfortunately, he doesn’t cite any of the other research, in Milwaukee or elsewhere, examining the effect of vouchers on public schools. So if you read his article without knowing the facts, you’ll think that one Milwaukee study is the only study that ever found that vouchers improve public schools, when in fact there’s a large body of consistently positive research on the question.

 

Moreover, Anrig’s analysis of the one Milwaukee study he does cite is superficial. He points out that the results in that study may be attributable to the worst students leaving the public schools. Leave aside that this is unlikely to be the case at all, much less to account for the entire positive effect the study found. The more important point is that numerous other studies of this question use methods that allow researchers to check whether this is driving the results. Guess what they find.

 

Though he ignores all but one of the studies cited by school choice supporters, shuffling all the rest offstage lest his audience become aware of the large body of research with positive findings on vouchers, Anrig cites other studies that he depicts as refuting the case for vouchers. Like Stern’s citation of the raw data in Milwaukee, these other studies in fact are methodologically unable to examine the only question that counts – what was the specific impact of vouchers, as distinct from the raw aggregate results? (I’m currently putting together a full-length response to Anrig’s article that will go over the specifics on these studies, but if you follow education research you already know about them – the notoriously tarnished HLM study of NAEP scores, the even more notoriously bogus WPRI fiasco, etc.)

 

But Anrig, like his predecessors, is primarily interested not in the quality of the evidence but in the motives of school choice supporters. He spends most of his time tracing the sinister influence of the Bradley Foundation and painting voucher supporters as right-wing ideologues.

 

And these are the more respectable versions of the argument. In the comment sections here on Jay P. Greene’s Blog, Pajamas Media, and Joanne Jacobs’s site, much the same argument is put in a cruder form: you can’t trust studies that find school choice works, because after all, they’re conducted by researchers who think that school choice works.

 

(Some of these commenters also seem to be confused about the provenance and data sources of these studies. I linked to copies of the studies stored in the Friedman Foundation’s research database, but that doesn’t make them Friedman Foundation studies. As I stated, they were conducted at Harvard, Princeton, etc. And at one point I linked to an ELS study I did last year that also contained an extensive review of the existing research on school choice, but that doesn’t mean all the previous studies on school choice were ELS studies.)

 

What is one to make of all this? The more facts and evidence we provide, the more we’re accused of ignoring the facts and evidence – by people who themselves fail to address the facts and evidence we provide.

 

I’m tempted to say that there’s a word for that sort of behavior. And there may be some merit in that explanation, though of course I have no way of knowing. But I also think there’s something else going on as well.

 

One prominent blogger put it succinctly to me over e-mail. The gist of his challenge was something like: “Why don’t you just admit that all this evidence and data is just for show, and you really support school choice for ideological reasons?”

 

I think this expresses an idea that many people have – that there is “evidence” over here and then there is “ideology” over there, and the two exist in hermetically sealed containers and can never have any contact with one another. (Perhaps this tendency is part of the long-term damage wrought by Max Weber’s misuse of the fact/value distinction, but that’s a question for another time.)

 

On this view, if you know that somebody has a strong ideology, you have him “pegged” and can dismiss any evidence he brings in support of his position as a mere epiphenomenon. The evidence is a distraction from your real task, which is to identify and reveal the pernicious influence of his ideology on his thinking. Hence the widespread assumption that when a school choice supporter brings facts and evidence, there is no need to trouble yourself addressing all that stuff. Why bother? The point is that he’s an ideologue; the facts are irrelevant.

 

But, as I explained to the blogger who issued that challenge, evidence and ideology are not hermetically sealed. Ideology includes policy preferences, but those policy preferences are always grounded in a set of expectations about the way the world works. In fact, I would say that an “ideology” is better defined as a set of expectations about how the world works than as a set of policy preferences. (That would help explain, for example, why we still speak of differences between “liberal” and “conservative” viewpoints even on issues like immigration where there are a lot of liberals and conservatives on both sides.) And our expectations about how the world works are subject to verification or falsification by evidence.

 

So, for example, I hold an ideology that says (broadly speaking) that freedom makes social institutions work better. That’s one of the more important reasons I support school choice – because I want schools (all schools, public and private) to get better, and I have an expectation that when educational freedom is increased, schools will improve. My ideology is subject to empirical verification. If school choice programs do in fact make public schools better – as the empirical studies consistently show they do – then that is evidence that supports my ideology.

 

Even the one study that has ever shown that vouchers didn’t improve public schools, the one in D.C., also confirms my ideology. The D.C. program gives cash bribes to the public school system to compensate for lost students, thus undermining the competitive incentives that would otherwise improve public schools – so the absence of a positive voucher impact is just what my ideology would predict.

 

Other evidence may also be relevant to the truth or falsehood of my ideology, of course. The point is that evidence is relevant, and truth or falsehood is the issue that matters.

 

Now, as I’ve already sort of obliquely indicated, my view that freedom makes things work better is not the only reason I support school choice. But it is one of the more important reasons. So, if you somehow proved to me that freedom doesn’t make social institutions work better, I wouldn’t immediately disavow school choice, since there are other reasons besides that to support it. However, I would have significantly less reason to support it than I did before.

 

If we really think that evidence has nothing to do with ideology, I don’t see how we avoid the conclusion that people’s beliefs have nothing to do with truth or falsehood – ultimately, that all human thought is irrational. Bottom line, you aren’t entitled to ignore your opponent’s evidence, or dismiss it as tainted because it is cited by your opponent.

 

UPDATE: See this complete list of all the empirical research on vouchers.

 

Edited for typos


Surprise! What Researchers Don’t Know about Florida’s Vouchers

April 21, 2008

(Guest post by Greg Forster)

 

Florida’s A+ program, with its famous voucher component, has been studied to death. Everybody finds that the A+ program has produced major improvements in failing public schools, and among those who have tried to separate the effect of the vouchers from other possible impacts of the program, everybody finds that the vouchers have a positive impact. At this point our understanding of the impact of A+ vouchers ought to be pretty well-formed.

 

But guess what? None of the big empirical studies on the A+ program has looked at the program’s impact after 2002-03. That was the year in which large numbers of students became eligible for vouchers for the first time, so it’s natural that a lot of research would be done on the impact of the program in that year. Still, you would think somebody out there would be interested in finding out, say, whether the program continued to produce gains in subsequent years. In particular, you’d think people would be interested in finding out whether the program produced gains in 2006-07, the first school year after the Florida Supreme Court struck down the voucher program in a decision that quickly became notorious for its numerous false assumptions, internal inconsistencies, factually inaccurate assertions and logical fallacies.

 

Yet as far as I can tell, nobody has done any research on the impact of the A+ program after 2002-03. Oh, there’s a study that tracked the schools that were voucher-eligible in 2002-03 to see whether the gains made in those schools were sustained over time. But that gives us no information about whether the A+ program continued to produce improvements in other schools that were designated as failing in later years. For some reason, nobody seems to have looked at the crucial question of how vouchers impacted Florida public schools after 2002-03.

 

[format=shameless self-promotion]

 

That is, until now! I recently conducted a study that examines the impact of Florida’s A+ program separately in every school year from 2001-02 through 2006-07. I found that the program produced moderate gains in failing Florida public schools in 2001-02, before large numbers of students were eligible for vouchers; big gains in 2002-03, when large numbers of students first became eligible for vouchers; significantly smaller but still healthy gains from 2003-04 through 2005-06, when artificial obstacles to participation blocked many parents from using the vouchers; and only moderate gains (smaller even than the ones in 2001-02) after the vouchers were removed in 2006-07.

 

[end format=shameless self-promotion]

 

It seems to me that this is even stronger evidence than was provided by previous studies that the public school gains from the A+ program were largely driven by the healthy competitive incentives provided by vouchers. The A+ program did not undergo significant changes from year to year between 2001-02 and 2006-07 that would explain the dramatic swings in the size of the effect – except for the vouchers. In each year, the positive effects of the A+ program track the status of vouchers in the program. If the improvements in failing public schools are not primarily from vouchers, what’s the alternative explanation for these results?


Obviously the most newsworthy finding is that the A+ program is producing much smaller gains now that the vouchers are gone. But we should also look more closely at the finding that the program produced smaller (though still quite substantial) gains in 2003-04 through 2005-06 than it did in 2002-03.

 

As I have indicated, I think the most plausible explanation is the reduced participation rates for vouchers during those years, attributable to the many unnecessary obstacles that were placed in the path of parents wishing to use the vouchers. (These obstacles are detailed in the study; I won’t summarize them here so that your curiosity will drive you to go read the study.) While the mere presence of a voucher program might be expected to produce at least some gains – except where voucher competition is undermined by perverse incentives arising from bribery built into the program, as in the D.C. voucher – it appears that public schools may be more responsive to programs with higher participation levels.

 

There’s a lot that could be said about this, but the thing that jumps to my mind is this: if participation rates do drive greater improvements in public schools, we can reasonably expect that once we have universal vouchers, the public school gains will be dramatically larger than anything we’re getting from the restricted voucher programs we have now.

 

One more question that deserves to be raised: how come nobody else bothered to look at the impact of the A+ program after 2002-03 until now? We should have known a long time ago that the huge improvements we saw in that year got smaller in subsequent years.

 

It might, for example, have caused Rajashri Chakrabarti to modify her conclusion in this study that failing-schools vouchers can be expected to produce bigger improvements in public schools than broader vouchers. In this context it is relevant to point out that many of the obstacles that blocked Florida parents from using the vouchers arose from the failing-schools design of the program. Chakrabarti does great work, but the failing-schools model introduces a lot of problems that will generally keep participation levels low even when the program isn’t being actively sabotaged by the state department of education. If participation levels do affect the magnitude of the public school benefit from vouchers, then the failing-schools model isn’t so promising after all.

 

So why didn’t we know this? I don’t know, but I’ll offer a plausible (and conveniently non-falsifiable) theory. The latest statistical fad is regression discontinuity, and if you’re going to do regression discontinuity in Florida, 2002-03 is the year to do it. And everybody wants to do regression discontinuity these days. It’s cutting-edge; it’s the avant-garde. It’s like smearing a picture of the Virgin Mary with elephant dung – except with math.
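For readers unfamiliar with the technique: a regression discontinuity design compares schools just on either side of an eligibility cutoff and reads the treatment effect off the jump in outcomes at the threshold. Here is a minimal synthetic sketch of the idea – the cutoff, effect size, and data are all invented for illustration, not drawn from Florida.

```python
# Toy regression-discontinuity sketch (synthetic data): schools below a
# "failing" cutoff face voucher pressure, and the effect is estimated as
# the jump in outcomes at the cutoff.
import numpy as np

rng = np.random.default_rng(1)
n = 2000

grade = rng.uniform(-10, 10, n)        # running variable, cutoff at 0
eligible = (grade < 0).astype(float)   # below the cutoff -> voucher pressure

# Synthetic outcome: smooth trend in grade, plus a +4 jump for eligible schools.
gain = 1.0 + 0.3 * grade + 4.0 * eligible + rng.normal(0, 1, n)

# Local linear fit within a bandwidth on each side of the cutoff,
# then compare the two fitted values at grade = 0.
bw = 5.0
lo = (grade >= -bw) & (grade < 0)
hi = (grade >= 0) & (grade <= bw)
b_lo = np.polyfit(grade[lo], gain[lo], 1)   # [slope, intercept]
b_hi = np.polyfit(grade[hi], gain[hi], 1)
jump = np.polyval(b_lo, 0.0) - np.polyval(b_hi, 0.0)   # recovers roughly +4
```

The design is clean precisely because it leans on the sharp cutoff – which is why 2002-03, when large numbers of schools first crossed the eligibility threshold, is the natural year to apply it.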

 

You see the problem? It’s like the old joke about the guy who drops his keys in one place but looks for them in another place because the light is better there. I think the stats profession is constantly in danger of neglecting good research on urgent questions simply because it doesn’t use the latest popular technique.

 

I don’t want to overstate the case. Obviously the studies that look at the impact of the A+ program in 2002-03 are producing real and very valuable knowledge, unlike the guy looking for his keys under the street lamp (to say nothing of the elephant dung). But is that the only knowledge worth having?

 

(Edited to fix a typo and a link.)