(Guest Post by Matthew Ladner)
Over at Eduwonk, Andy describes the gains among D.C. Opportunity Scholarships as “modest” and says he doesn’t think this evaluation will change many minds.
On this blog, I’ve previously complained about what I viewed as an inappropriately high bar: the evaluation’s focus on an Intention to Treat (ITT) model. Some of you disagree, but in my view the question most people want answered is whether the kids who actually used a voucher improved their performance or not. The second-year evaluation found that the answer was yes.
Because some kids won the voucher lottery but then didn’t find a spot in a private school, under this high-bar evaluation they stayed in the experimental group. Other kids who lost the lottery but wound up attending private school anyway stayed in the control group.
So the kids who actually received and used a voucher had to make gains large enough to drag the whole group, non-users included, over the threshold of statistical significance.
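The dilution at work here can be seen in a minimal toy simulation. All numbers below (sample size, true gain, non-compliance and crossover rates) are invented for illustration and are not from the evaluation; the point is just that the ITT contrast — winners vs. losers as randomized — mechanically understates the gain among actual voucher users.

```python
import random

random.seed(0)

# Invented parameters for the sketch (not from the D.C. report).
N = 10_000
TRUE_GAIN = 5.0        # assumed score gain for kids who use a voucher
NONCOMPLIANCE = 0.25   # lottery winners who never use the voucher
CROSSOVER = 0.10       # lottery losers who attend private school anyway

def outcome(uses_voucher: bool) -> float:
    base = random.gauss(100, 15)  # baseline test score
    return base + (TRUE_GAIN if uses_voucher else 0.0)

# ITT keeps kids in the group they were randomized into, whether or
# not they actually used (or dodged) the voucher.
winners = [outcome(random.random() > NONCOMPLIANCE) for _ in range(N)]
losers = [outcome(random.random() < CROSSOVER) for _ in range(N)]

itt_effect = sum(winners) / N - sum(losers) / N
print(f"ITT estimate: {itt_effect:.2f} vs. true gain for users: {TRUE_GAIN}")
```

With these assumed rates the ITT estimate shrinks toward roughly 65% of the true user gain, which is why clearing significance under ITT is a demanding test.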
I’ll be damned if they didn’t do it in the third year of the program. Modest? You can’t possibly be serious.
Andy doesn’t think that evidence is going to sway anyone. Really? Why did the President say:
“Secretary Duncan will use only one test when deciding what ideas to support with your precious tax dollars: It’s not whether an idea is liberal or conservative, but whether it works.”
Why did Senator Durbin say “Allowing the program to continue through end of next school year (2009–2010) will give Congress a chance to examine all the evidence to determine whether or not this program works.”
Why did Senator Feinstein say “Why should the poor child not have the same access as the wealthy child does? That is all he is asking for. He is saying let’s try it for 5 years, and then let’s compare progress and let’s see if this model can work for these District youngsters.”
Senator Feinstein went on “I have gotten a lot of flak because I am supporting it. And guess what. I do not care. I have finally reached the stage in my career, I do not care. I am going to do what I sincerely believe is right. I have spent the time. I have gone to the schools, I have seen what works, I have seen what does not work. Believe it or not, I have always been sort of a political figure for the streets as opposed to the policy wonks. I know different things work on the streets that often do not work on the bookshelves. So we will see.”
Indeed we will, and now we have seen. Senator Feinstein should be applauded for her courage. It’s too bad she didn’t get to see this report before Congress voted to require reauthorization.
Perhaps Andy thinks that evidence won’t change minds because of this letter sent by the NEA demanding that Congress kill the DC program. Perhaps Feinstein’s courage really is in short supply.
There are 1,700 kids who just surmounted a very high bar and really hope that this is not the case.
Obviously you’re right that Andy shouldn’t cynically dismiss empirical evidence the way he does. But you seem to think he’s also wrong in thinking that courage and honesty are rare in DC. On that question, isn’t the empirical evidence on his side?
And for the record, I support the ITT model as scientifically superior, but Matt is right that this model sets the bar high, so when the bar gets set high and the program clears it anyway, it should get due credit.
I just want to praise your use of oi vey.
It’s an empirical question and we are going to find out one way or another!
[…] a round-up of reax on the news from Jay P. Greene and Matthew Ladner. […]
I don’t think this is a fair conclusion – the report explicitly shows results from both groups separately in all instances except for the first page summary (as is pretty standard procedure in any scientific study with many subgroups).
As to your specific point: “there was a statistically significant impact on reading achievement of 4.5 scale score points (effect size (ES) = .13) from the offer of a scholarship and 5.3 scale score points (ES = .15) from the use of a scholarship” – a minor difference of less than half a month. As for math, “There was no statistically significant impact on math achievement, overall (ES = .03) from the offer of a scholarship nor from the use of a scholarship.” It’s clear that results are very similar for both groups, and no one is “dragging” anyone up statistically.
In fact, the study actually should be comparing those who got the scholarship with those who used it – if you were looking at the effectiveness of a scholarship given to students who scored highly on their SATs, you would naturally compare its effect against those high SAT scorers who didn’t take the scholarship. If you compare this way (pg. 30) the results are only marginal, and some fail multiple-test correction.
The most important finding, however, is this: “There were no statistically significant reading (ES = .05) or math (ES = .01) achievement impacts for the high-priority subgroup of students who had attended a SINI public school.” With the most important and at-risk group showing no improvements from the voucher program, how can you honestly call the program a success (or better yet, “nothing short of phenomenal,” as you do over at NRO)? To me, it seems that the program helps gifted students who would excel anyway, and offers nothing for the students who actually attend bad schools. Moreover, it seems to me that math improvement would be a better measure of teaching methods than reading (which essentially improves if you just give students better books), and in that respect the program failed entirely.
1) The comparison you cite is between students who previously did or did not go to predominantly poor schools. It is not between students who are and are not poor. Very big difference.
2) With any positive result, you can always break it down and find some identifiable subgroup of students where the benefits don’t achieve statistical significance. If you refuse to call a program successful whenever that’s possible, you’ll never find a successful program.
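The subgroup point above is partly mechanical: slicing the sample shrinks each group, which inflates standard errors, so an identical effect size can be significant overall yet non-significant within a subgroup. A hedged sketch with invented group sizes (the 0.13 effect size echoes the reading result, but the sample counts are assumptions for illustration):

```python
import math

# Toy illustration: the SAME effect size can be statistically
# significant in the full sample but not in a smaller subgroup.
EFFECT = 0.13  # effect in standard-deviation units (as in the reading result)
SD = 1.0

def z_stat(effect: float, n_per_group: int) -> float:
    # Standard error of a difference between two group means.
    se = SD * math.sqrt(2 / n_per_group)
    return effect / se

z_full = z_stat(EFFECT, 1000)  # assumed full-sample size per arm
z_sub = z_stat(EFFECT, 150)    # assumed subgroup size per arm

# z above 1.96 is significant at the 5% level (two-sided).
print(f"full sample z = {z_full:.2f}, subgroup z = {z_sub:.2f}")
```

With these assumed sizes the full sample clears the 1.96 cutoff while the subgroup does not, even though the underlying effect is identical.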
I think you’re dodging, Greg. The goal of the program, as stated in the analysis, was primarily to improve the achievement of kids from poor-performing schools by putting them into private schools – in this the program failed unequivocally. This wasn’t some measly subgroup or outlier (for that, I could have cited the finding that the scholarship students end up reading less for fun) but the main group the program was set up for.
Does the program have some positive effects? Absolutely, and I agree that it is too early in the study to cancel it. But this is not the silver bullet that voucher supporters purport it to be. Moreover, the cost analyses of these programs have been infantile – how much would it cost if the vast majority of students decided to follow vouchers? What about the cost of shutting down old schools or building new ones as students move from district to district on a whim?
1. Certain subgroups haven’t achieved statistically significant gains YET. Remember: the gains are incremental over time. There is no reason to expect all the various subgroups to clear the very high bar of statistical significance at the same time.
2. Vouchers are not a magic bullet, nor do they make you better looking or grant superpowers. They do help kids learn to read better and produce more satisfied parents, at a fraction of the per-pupil cost in DCPS.
[…] secretary, the head of the Senate subcommittee overseeing the program, and a host of others all promised that they would evaluate vouchers guided solely by evidence. […]