Voucher Effects on Participants

(This is an update of a post I originally wrote on August 21.  I’ve included the new DC voucher findings.)

Here is what I believe is a complete (no cherry-picking) list of analyses taking advantage of random-assignment experiments of the effect of vouchers on participants.  As I’ve previously written, 9 of the 10 analyses show significant, positive effects for at least some subgroups of students.

All of them have been published in peer reviewed journals or were subject to outside peer review by the federal government.

Four of the 10 studies are independent replications of earlier analyses.  Cowen replicates Greene, 2001.  Rouse replicates Greene, Peterson, and Du.  Barnard, et al replicate Peterson and Howell.  And Krueger and Zhu also replicate Peterson and Howell.  All of these independent replications (except for Krueger and Zhu) confirm the basic findings of the original analyses by also finding positive effects.

Anyone interested in a more complete discussion of these 10 analyses, and in why it is important to focus on the random-assignment studies, should read Patrick Wolf’s article in the BYU Law Review, which has been reproduced here.

I’m eager to hear how Leo Casey and Eduwonkette, who’ve accused me of cherry-picking the evidence, respond.

  • These 6 studies conclude that all groups of student participants experienced reading or math achievement gains and/or increased likelihood of graduating from high school as a result of vouchers:

Cowen, Joshua M.  2008. “School Choice as a Latent Variable: Estimating the ‘Complier Average Causal Effect’ of Vouchers in Charlotte.” Policy Studies Journal 36 (2).

Greene, Jay P. 2001. “Vouchers in Charlotte,” Education Matters 1 (2):55-60.

Greene, Jay P., Paul E. Peterson, and Jiangtao Du. 1999. “Effectiveness of School Choice: The Milwaukee Experiment.” Education and Urban Society, 31, January, pp. 190-213.

Howell, William G., Patrick J. Wolf, David E. Campbell, and Paul E. Peterson. 2002. “School Vouchers and Academic Performance:  Results from Three Randomized Field Trials.” Journal of Policy Analysis and Management, 21, April, pp. 191-217. (Washington, DC: Gains for all participants, almost all were African Americans)

Rouse, Cecilia E. 1998. “Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program,” The Quarterly Journal of Economics, 113(2): 553-602.

Wolf, Patrick, Babette Gutmann, Michael Puma, Brian Kisida, Lou Rizzo, Nada Eissa, and Marsha Silverberg. March 2009.  Evaluation of the DC Opportunity Scholarship Program: Impacts After Three Years. U.S. Department of Education, Institute of Education Sciences. Washington, DC: U.S. Government Printing Office. (In the fourth-year report the sample shrank, so the positive achievement effect barely missed a strict threshold for statistical significance — p < .06, just missing the bar of p < .05.  But this new report was able for the first time to measure the effect of vouchers on the likelihood that students would graduate from high school.  As it turns out, vouchers significantly boosted high school graduation rates.  As Paul Peterson points out, this suggests that vouchers boosted both achievement and graduation rates in the 4th year.  Read the 4th year evaluation here.)

  • These 3 studies conclude that at least one important sub-group of student participants experienced achievement gains from the voucher and no subgroup of students was harmed:

Barnard, John, Constantine E. Frangakis, Jennifer L. Hill, and Donald B. Rubin. 2003. “Principal Stratification Approach to Broken Randomized Experiments: A Case Study of School Choice Vouchers in New York City,” Journal of the American Statistical Association 98 (462):299–323. (Gains for African Americans)

Howell, William G., Patrick J. Wolf, David E. Campbell, and Paul E. Peterson. 2002. “School Vouchers and Academic Performance:  Results from Three Randomized Field Trials.” Journal of Policy Analysis and Management, 21, April, pp. 191-217. (Dayton, Ohio: Gains for African Americans)

Peterson, Paul E., and William G. Howell. 2004. “Efficiency, Bias, and Classification Schemes: A Response to Alan B. Krueger and Pei Zhu.” American Behavioral Scientist, 47(5): 699-717.  (New York City: Gains for African Americans)

  • This 1 study concludes that no sub-group of student participants experienced achievement gains from the voucher:

Krueger, Alan B., and Pei Zhu. 2004. “Another Look at the New York City School Voucher Experiment,” The American Behavioral Scientist 47 (5):658–698.

(Update: For a review of systemic effect research — how expanded competition affects achievement in traditional public schools — see here.)


28 Responses to Voucher Effects on Participants

  1. Greg Forster says:

    It’s also worth mentioning that the Krueger and Zhu study has been discredited. They cherry-picked (if you’ll pardon the expression) their method to ensure that the positive results for vouchers wouldn’t achieve statistical significance. That was established pretty convincingly not only by Howell and Peterson’s devastating response in Ed Next, but also by Caroline Hoxby’s observations in an NBER paper on their manipulation of the definition of race. Krueger and Zhu use a definition of race that is not currently used by the Census, NCES, or anyone else I know of, and that doesn’t accurately reflect the way children really identify themselves by race; they also applied it selectively to only some of the students in the data set, not all of them.

    See here:

    http://www.hoover.org/publications/ednext/3288426.html

    http://www.utahtaxpayers.org/email_campaign/taxing%20times/post.economics.harvard.e.pdf

    Not that this or anything else will be enough to convince the closed-minded.

  2. Patrick says:

    Wow, thanks, I’ve bookmarked this page for future reference.

  3. Patrick says:

    Btw, is Krueger the same guy who worked with Card on trying to show that raising the minimum wage does not increase unemployment, work that was thrashed by economists for being so horrific and sloppy in its methodology?

  4. Greg Forster says:

    I’m afraid I don’t know. Krueger is an economist at Princeton and he’s done research on a ton of different issues, but I’ve only followed his work on education.

  5. Ryan Marsh says:

    According to his CV he did both this and the minimum wage book. I’ve read some of that book (I’m about to start grad school in economics) and was curious about some of it; I had no idea that there was pushback on it. Do you have any links to that stuff? I’d like to see the other side of the story.

    They seem to have some push-back against education reform in the Princeton econ department.

  6. Stuart Buck says:

    Yes, same Krueger. Oddly enough, his co-author David Card has a new study showing the benefits of education competition. See http://stuartbuck.blogspot.com/2008/08/new-paper-on-competition-in-education.html

  7. Stuart Buck says:

    What about Joshua Angrist’s work on vouchers in Colombia? http://econ-www.mit.edu/faculty/angrist/data/angbetkre06

  8. Thanks for the additions, Stuart, but I was restricting my list to US studies of participant effects. Card’s study is about Canada and Angrist’s is about Colombia. And Card’s is about systemic effects (how vouchers affect the performance of traditional public schools). I’ll address the evidence on systemic effects in a future post.

  9. Stuart Buck says:

    Thanks for the post. I need to bookmark for future use whenever the subject of vouchers comes up and people repeat that same old line that “there’s no evidence,” etc.

  10. This is all very interesting, but I question the point of controlled studies. The point seems to be that random-assignment strategies eliminate the problem of sample self-selection bias, but why is this a problem? Consider an analogy: suppose a zookeeper decides to investigate the effects of dietary regime on the health of his animals. Where once the animals got universal omnivore food, he now gives 1/4 of his felids alfalfa pellets, 1/4 of them mangoes and guavas, 1/4 of them omnivore food, and 1/4 of them meat. Similarly, 1/4 of the pigs get fruit, 1/4 get alfalfa pellets and grain, 1/4 get omnivore food, and 1/4 get meat. Likewise for the antelope and the spider monkeys.
    In aggregate, ALL the animals will get sick. 3/4 of the felids die. 3/4 of the antelope die. 3/4 of the monkeys die, and 1/4 of the pigs (those on the alfalfa diet) die.

From a policy-maker’s point of view, the important issue is not whether private schools out-perform government schools in the education of students who want out (voucher applicants), but whether choice systems as a whole perform better than systems which do not feature choice.

    What happens if you let the animals eat what they want?

  11. Ryan Marsh says:

    There are other things that random assignment does as well. Say everyone in a city gets choice and scores start rising. It’s because of the choice, right? Simple, right? Not really. There may be other things going on that we can’t take into account. What if choice occurs at the same time as a push for new standards, or a new superintendent comes to the district? There’s no way to tell what may have caused the scores to rise. In order to establish the kind of causal claims that policy debates really need, you have to have two groups of people: those exposed to a policy and those not. After that, random assignment is the gold standard because it allows you to deal with other things as well, such as sample selection issues.

    This is why most of the listed studies deal primarily with (and Jay and Patrick Wolf report primarily the results of) what are known as ITT estimates: intention to treat. Kids are randomly assigned the choice of attending a private school, and researchers test whether having that choice leads to better results (not whether the schools they could attend are better than the schools they could leave). In many ways, it is doing what you want, while providing a way to make causal claims about the results.
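The ITT logic described above can be illustrated with a tiny simulation (an editorial sketch with made-up numbers; the `motivation` variable and all effect sizes are hypothetical, not drawn from any of the studies listed): comparing outcomes by lottery assignment estimates the causal effect of the *offer*, while comparing voucher users to non-users is contaminated by self-selection.

```python
import random

random.seed(0)

# Hypothetical voucher-lottery simulation (illustrative numbers only).
# z = 1 if the student wins the lottery (is offered a voucher).
n = 10_000
students = []
for _ in range(n):
    motivation = random.gauss(0, 1)             # unobserved trait
    z = random.randint(0, 1)                    # random assignment
    uses_voucher = (z == 1 and motivation > 0)  # winners self-select into use
    effect = 5 if uses_voucher else 0           # assumed true effect of use
    score = 50 + 10 * motivation + effect + random.gauss(0, 5)
    students.append((z, uses_voucher, score))

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# ITT: compare everyone offered a voucher to everyone not offered one,
# regardless of whether the offer was used. Because z is randomized,
# this is an unbiased estimate of the effect of the offer (roughly 2.5
# here, since about half the winners comply).
itt = (mean(s[2] for s in students if s[0] == 1)
       - mean(s[2] for s in students if s[0] == 0))

# A naive users-vs-non-users comparison is inflated by self-selection
# on the unmeasured motivation variable.
naive = (mean(s[2] for s in students if s[1])
         - mean(s[2] for s in students if not s[1]))

print(f"ITT estimate:   {itt:.2f}")
print(f"Naive estimate: {naive:.2f}")
```

In this sketch the naive comparison roughly triples the true offer effect, because motivated students both use vouchers and score higher anyway.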

  12. Greg Forster says:

    For more on how ITT models allow studies of voucher programs to address these concerns, see:

    http://jaypgreene.com/2008/06/19/what-does-the-red-pill-do-if-i-dont-take-it/

    …including the interesting comment thread.

  13. Ryan,

    I don’t dispute what you say, but it does not address my point. Yes, a rise in overall system performance following enactment of a choice program would not demonstrate that choice works, since you have one treatment and no control. The way to address this is to compare entire systems (choice and no choice), not students.

    Consider: a choice system might enhance overall system performance even if students in your random-assignment “winner” category (they got vouchers) did worse than students in the “loser” class. Suppose the people who want out aren’t interested in the academic orientation of government schools. Suppose these students disrupt classes. Suppose peer effects cause them to perform better in government schools (while dragging everyone else’s scores down). Suppose some independent, voucher-accepting schools teach masonry, carpentry, art, auto shop, and culinary arts, and teach only the Math necessary for those programs, while the (now tranquil) government schools teach Real Analysis, Combinatorics, Group Theory, and other useless stuff. Measure system performance with standardized tests of Math. Choice works even though voucher winners (in trade schools) do worse than voucher losers (stuck in college-prep schools).

  14. Greg Forster says:

    Well, for whatever it’s worth, that hypothetical isn’t what’s actually happening. People who apply for vouchers overwhelmingly say it’s because they want better academics. Voucher users have fewer behavior problems after they enter voucher programs than they had before (no doubt because private schools are actually allowed to enforce their discipline policies rather than making paper hats out of them). Etc.

  15. Ryan Marsh says:

    The problem with evaluating whole systems is that we can’t identify control groups as good as randomly assigned ones. The closest situation I can come up with for doing what you suggest would be to use the Twin Cities: give one a choice program and withhold it from the other. This might let us be confident that the kids don’t differ on observables.

    The problem, though, is that families made choices to live in certain places and not others, and those choices may relate to their children’s education. We still have the selection problem; it just now involves things we can’t measure instead of things we can, so we can’t even test whether the “random assignment” was successful. This is what random assignment does: it lets us be confident that even the things we can’t measure are probably randomly distributed across treatment and control groups, so even that potential problem doesn’t exist for our results.
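This point about unmeasured traits can be sketched in a few lines (a hypothetical simulation, not data from any real city): a coin-flip lottery balances even a trait we never observe, while residential self-selection leaves a large gap between the groups.

```python
import random

random.seed(1)

# Hypothetical sketch: an unmeasured family trait under two designs.
trait = [random.gauss(0, 1) for _ in range(20_000)]

# Design A: a lottery flips a fair coin for each family.
lottery = [random.randint(0, 1) for _ in trait]

# Design B: families "choose the city with the program" more often
# when the unmeasured trait is high (residential self-selection):
# 80% choose it when trait > 0, only 50% otherwise.
chooses_program_city = [random.random() < 0.5 + 0.3 * (t > 0) for t in trait]

def group_gap(assign):
    """Difference in mean trait between the two groups an assignment creates."""
    a = [t for t, g in zip(trait, assign) if g]
    b = [t for t, g in zip(trait, assign) if not g]
    return sum(a) / len(a) - sum(b) / len(b)

print(f"lottery gap:        {group_gap(lottery):+.3f}")               # near zero
print(f"self-selection gap: {group_gap(chooses_program_city):+.3f}")  # sizable
```

The lottery gap shrinks toward zero as the sample grows; the self-selection gap does not, which is exactly why a twin-cities comparison can’t substitute for randomization.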

  16. Patrick says:

    I don’t have anything off the top of my head about Krueger’s research on the minimum wage but you can easily Google it since it is widely used as evidence that raising the minimum wage doesn’t increase unemployment. As such, criticism is all over the place.

    I believe their methodology failed because of selection bias. 1) They chose to study wages in wealthy states, where a rise in the national minimum wage may only reflect the already prevailing wage for low-income workers in the state. Thus, even though the law mandated a raise in the minimum wage, people were already paid that much, so no real raise actually occurred. And 2) they basically phoned managers in (I forget the industry, maybe burger joints) to ask them about the number of employees before and after the minimum wage increase. This fails to take into account any stores that may have closed as a result, since it only compares stores existing after the increase with stores before the increase.

    There are probably other complaints but that is what you get when you challenge a century’s worth of economic consensus. Krueger is a smart guy, but he seems to do a lot of boneheaded research from what I hear.

  17. Patrick says:

    Kirkpatrick, the answer to your problem is that you have given the animals a limited choice of mostly bad options, while ruling out the possibility of giving the animals an unlimited choice of good, moderate, or possibly bad options (universal school choice).

    As the cream rises to the top, people figure out what works and what does not. And the right answer is not to manage the exceptions as the rule (i.e., don’t manage everyone as if they will always fail).

  18. I agree with Malcolm that we should care quite a bit about systemic effects. I’ll be posting the research on that soon.

  19. Brian Kisida says:

    Coincidentally, I just read a well-designed study by Angrist and Krueger on compulsory schooling and educational attainment. They had a really novel approach to control for the selection and endogeneity issues that most attainment studies have.

    I would also like to see the rigorous research that has been done on this zoo food issue. I had no idea that antelope can’t live on mangoes and guavas, but apparently pigs can? But alfalfa kills pigs? And 3/4 of the monkeys die? Really? You would think mangoes and guavas would be perfect for them.

    As far as the analogy goes (Malcolm), these studies do let the “animals eat what they want.” Animals that are malnourished are identified. Some of these animals graze on whatever they normally eat, and some randomly have a bucket of mangoes or alfalfa offered to them. And then the animals eat whatever they want. Compliance is voluntary. I think you suggest that selection bias is a good thing because you think some sort of voluntary choice needs to be part of the analysis. Well, it is. The only thing that is randomly assigned is the set of options the treatment and control groups have. What they do with those options is up to them. Good evaluations of choice should not be thought of as evaluations of alfalfa or mango diets; they should be thought of as evaluations of “choice.”

  20. Let me be clear: I support policies which give individual parents the power to determine which institution shall receive the K-12 subsidy that taxpayers allot to their (the parents’) children. With some reasonable assumptions, measures of student performance in these random-assignment lotteries support the argument for parent control. Unless these assumptions are made explicit, however, measures of student performance are weak arguments, for the reason I gave: students in voucher-accepting schools could do worse and vouchers could still be good policy. Also, students in voucher-accepting schools could systematically do better than lottery losers and vouchers might still lower overall system performance.

    Brian, the monkeys sicken on the grazer, omnivore, and carnivore diets.

    Patrick, I agree that a wide range of options is better than the current US system. At some point, however, that wide range of choice sabotages attempts to measure the effects of schooling. Consider the conclusion to my post “What’s a Linear Differential Operator, Anyway?”. According to one of the chapters in Vouchers and the Provision of Social Services, standardized Math test performance is lower and standardized Language test performance is higher in countries which subsidize parent control but DO NOT require schools to assess students’ Math performance than in countries which subsidize parent control and DO require schools to assess Math performance. I infer that Math matters more to Education Department bureaucrats and to politicians than it does to parents.

  21. The intended reference was Steuerle, Reischauer, et al., __Vouchers and the Provision of Public Services__ (not “Social Services”) (Brookings).

  22. Greg Forster says:

    If the BB people have their way, soon it really will be “social services.”

