Wolf v. Ravitch/Welner on the Effects of School Choice

(Guest Post By Jason Bedrick)

Is school choice effective at improving measurable student outcomes?

That question has been at the center of a heated debate between Patrick Wolf of the University of Arkansas and Diane Ravitch, former U.S. Assistant Secretary of Education, along with one of her supporters. The controversy began when Ravitch attempted to critique Wolf’s studies of voucher programs in Milwaukee and Washington, D.C.

After questioning Wolf’s credibility, Ravitch made three main empirical claims, all of which are misleading or outright false:

1) Wolf’s own evaluations have not “shown any test score advantage for students who get vouchers, whether in DC or Milwaukee.” The private schools participating in the voucher program do not outperform public schools on state tests. The only dispute is “whether voucher students are doing the same or worse than their peers in public schools.”

2) The attrition rate in Wolf’s Milwaukee study was 75%, so the results only concern the 25% of students who remained in the program.

3) Wolf’s study doesn’t track the students who left the voucher program. (“But what about the 75% who dropped out and/or returned to [the public school system]?  No one knows.”)

Wolf then rebutted those claims:

1) Ravitch ignores the finding that vouchers had a strong positive impact on high school graduation rates. Moreover, there was evidence of academic gains among the voucher students:

The executive summary of the final report in our longitudinal achievement study of the Milwaukee voucher program states:  “The primary finding that emerges from these analyses is that, for the 2010-11 school year, the students in the [voucher] sample exhibit larger growth from the base year of 2006 in reading achievement than the matched [public school] sample.” Regarding the achievement impacts of the DC program, Ravitch quotes my own words that there was no conclusive evidence that the DC voucher program increased student achievement.  That achievement finding was in contrast to attainment, which clearly improved as a result of the program.  The uncertainty surrounding the achievement effects of the DC voucher program is because we set the high standard of 95% confidence to judge a voucher benefit as “statistically significant”, and we could only be 94% confident that the final-year reading gains from the DC program were statistically significant.

2) The attrition rate in the Milwaukee study was actually 56%, not 75%. Ravitch was relying on a third party’s critique of the study (to which she linked) that cited the wrong figure, rather than reading the study herself. Moreover, the results regarding the higher attainment of voucher students are drawn from the graduation rate for all students who initially participated in the voucher program in the 9th grade in the fall of 2006, not just those who remained in the program.

3) Wolf’s team used data from the National Student Clearinghouse to track these students into college.

Ravitch responded by hyperventilating about Wolf’s supposed “vitriol” (he had the temerity to point out that she is not a statistician, that she did not understand the methods she was critiquing, and that she was relying on incorrect secondary sources) and by posting a response from Kevin Welner of the University of Colorado at Boulder, who heads the National Education Policy Center (NEPC), the organization that released the critique of Wolf’s study upon which Ravitch had relied.

Welner didn’t even attempt to defend Ravitch’s erroneous first and third claims, but he took issue with Wolf’s rebuttal of her second claim. Welner defended the integrity of his organization’s critique by pointing out that when his team read Wolf’s study, it contained the “75% attrition” figure, and that the number was only corrected a few weeks later. In his view, they shouldn’t be faulted for not knowing about the update. As Welner wrote, “Nobody had thought to go back and see whether Wolf or his colleagues had changed important numbers in the SCDP report.”

That would be a fair point, except that they did know about the change. As Wolf pointed out, page four of the NEPC critique contains the following sentence: “Notably, more than half the students (56%) in the MPCP 9th grade sample were not in the MPCP four years later.” In other words, the author of the NEPC critique had seen the corrected report but failed to update other parts of his critique accordingly. This is certainly not the smoking gun Welner thought it was.

Ravitch replied, again demonstrating her misunderstanding of intention-to-treat analysis (“And, I dunno, but 56% still looks like a huge attrition rate”) and leaving the heavy lifting to Welner. Welner’s main argument is that Wolf should have “been honest with his readers the first time around, instead of implying ignorance or wrongdoing as a cheap way to score some points against Diane Ravitch and (to a lesser extent) NEPC.” Welner would have had a point if Wolf’s initial response had been directed at NEPC rather than at Ravitch, but Wolf’s point was that Ravitch was holding herself out as an expert when she had never read the primary source material she was criticizing. Instead, she relied on a secondary source that cited two contradictory figures. She either didn’t notice or intentionally chose what she thought was the more damning of the two figures, though, again, the figure doesn’t matter for the purposes of an intention-to-treat analysis.
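For readers unfamiliar with the term, here is a minimal sketch of the distinction, in Python and with entirely made-up numbers that have nothing to do with Wolf’s data: an intention-to-treat analysis keeps every student who started in the voucher group in the comparison, so the share who later left the program, whether 56% or 75%, doesn’t change who gets counted.

```python
# Minimal illustration of intention-to-treat (ITT) accounting.
# All numbers below are hypothetical; they are NOT drawn from the Milwaukee
# study. The point is only that ITT keeps every initially enrolled student
# in the denominator, so later attrition doesn't shrink the comparison group.

# Hypothetical cohort: students who started in the voucher group in 9th grade.
# Each entry is (stayed_in_program, graduated).
voucher_cohort = [
    (True, True), (True, True), (False, True), (False, False),
    (False, True), (True, False), (False, True), (False, False),
]

# Intention-to-treat: everyone who was originally in the voucher group
# counts, whether or not they later left the program.
itt_rate = sum(graduated for _, graduated in voucher_cohort) / len(voucher_cohort)

# A completers-only rate, by contrast, drops the leavers -- this is the
# quantity the attrition objection implicitly has in mind.
stayers = [graduated for stayed, graduated in voucher_cohort if stayed]
completer_rate = sum(stayers) / len(stayers)

print(f"ITT graduation rate (all who started):    {itt_rate:.0%}")
print(f"Completers-only rate (stayed in program): {completer_rate:.0%}")
```

Under this kind of accounting, a high attrition rate bears on how one interprets the treatment (many students received only part of it), not on whether the original cohort is represented in the results.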

We all make mistakes. Wolf’s team made a mistake in their report and corrected it within a few weeks. Welner has stated that his team will correct the NEPC report now that the error has come to their attention a year later. Ravitch should also correct her erroneous assertions regarding the results and methodology of the studies.

