(Guest post by Greg Forster)
Florida’s A+ program, with its famous voucher component, has been studied to death. Everybody finds that the program has produced major improvements in failing public schools, and among those who have tried to separate the effect of the vouchers from the program’s other possible impacts, everybody finds that the vouchers have a positive impact. At this point our understanding of the impact of A+ vouchers ought to be pretty well-formed.
But guess what? None of the big empirical studies on the A+ program has looked at the program’s impact after 2002-03. That was the year in which large numbers of students became eligible for vouchers for the first time, so it’s natural that a lot of research would focus on the impact of the program in that year. Still, you would think somebody out there would be interested in finding out, say, whether the program continued to produce gains in subsequent years. In particular, you’d think people would be interested in finding out whether the program produced gains in 2006-07, the first school year after the Florida Supreme Court struck down the voucher program in a decision that quickly became notorious for its numerous false assumptions, internal inconsistencies, factually inaccurate assertions, and logical fallacies.
Yet as far as I can tell, nobody has done any research on the impact of the A+ program after 2002-03. Oh, there’s a study that tracked the schools that were voucher-eligible in 2002-03 to see whether the gains made in those schools were sustained over time. But that gives us no information about whether the A+ program continued to produce improvements in other schools that were designated as failing in later years. For some reason, nobody seems to have looked at the crucial question of how vouchers impacted Florida public schools after 2002-03.
That is, until now! I recently conducted a study that examines the impact of Florida’s A+ program separately in every school year from 2001-02 through 2006-07. I found that the program produced moderate gains in failing Florida public schools in 2001-02, before large numbers of students were eligible for vouchers; big gains in 2002-03, when large numbers of students first became eligible for vouchers; significantly smaller but still healthy gains from 2003-04 through 2005-06, when artificial obstacles to participation blocked many parents from using the vouchers; and only moderate gains (smaller even than the ones in 2001-02) after the vouchers were removed in 2006-07.
[end format=shameless self-promotion]
It seems to me that this is even stronger evidence than previous studies provided that the public school gains from the A+ program were largely driven by the healthy competitive incentives provided by vouchers. The A+ program did not undergo significant changes from year to year between 2001-02 and 2006-07 that would explain the dramatic swings in the size of the effect – except for the vouchers. In each year, the positive effects of the A+ program track the status of vouchers in the program. If the improvements in failing public schools were not primarily driven by vouchers, what’s the alternative explanation for these results?
Obviously the most newsworthy finding is that the A+ program is producing much smaller gains now that the vouchers are gone. But we should also look more closely at the finding that the program produced smaller (though still quite substantial) gains in 2003-04 through 2005-06 than it did in 2002-03.
As I have indicated, I think the most plausible explanation is the reduced participation rates for vouchers during those years, attributable to the many unnecessary obstacles that were placed in the path of parents wishing to use the vouchers. (These obstacles are detailed in the study; I won’t summarize them here so that your curiosity will drive you to go read the study.) While the mere presence of a voucher program might be expected to produce at least some gains – except where voucher competition is undermined by perverse incentives arising from bribery built into the program, as in the D.C. voucher – it appears that public schools may be more responsive to programs with higher participation levels.
There’s a lot that could be said about this, but the thing that jumps to my mind is this: if participation rates do drive greater improvements in public schools, we can reasonably expect that once we have universal vouchers, the public school gains will be dramatically larger than anything we’re getting from the restricted voucher programs we have now.
One more question that deserves to be raised: how come nobody else bothered to look at the impact of the A+ program after 2002-03 until now? We should have known a long time ago that the huge improvements we saw in that year got smaller in subsequent years.
It might, for example, have caused Rajashri Chakrabarti to modify her conclusion in this study that failing-schools vouchers can be expected to produce bigger improvements in public schools than broader vouchers. In this context it is relevant to point out that many of the obstacles that blocked Florida parents from using the vouchers arose from the failing-schools design of the program. Chakrabarti does great work, but the failing-schools model introduces a lot of problems that will generally keep participation levels low even when the program isn’t being actively sabotaged by the state department of education. If participation levels do affect the magnitude of the public school benefit from vouchers, then the failing-schools model isn’t so promising after all.
So why didn’t we know this? I don’t know, but I’ll offer a plausible (and conveniently non-falsifiable) theory. The latest statistical fad is regression discontinuity, and if you’re going to do regression discontinuity in Florida, 2002-03 is the year to do it. And everybody wants to do regression discontinuity these days. It’s cutting-edge; it’s the avant-garde. It’s like smearing a picture of the Virgin Mary with elephant dung – except with math.
You see the problem? It’s like the old joke about the guy who drops his keys in one place but looks for them in another place because the light is better there. I think the stats profession is constantly in danger of neglecting good research on urgent questions simply because it doesn’t use the latest popular technique.
I don’t want to overstate the case. Obviously the studies that look at the impact of the A+ program in 2002-03 are producing real and very valuable knowledge, unlike the guy looking for his keys under the street lamp (to say nothing of the elephant dung). But is that the only knowledge worth having?