Wolf and McShane in NRO

February 1, 2013

(Guest Post by Matthew Ladner)

A few years ago, a rookie quarterback named Michael Bishop was brought into a game to throw a last-second desperation bomb before the end of the half. It was his first pass as an NFL player, and against the odds it resulted in a long touchdown. Commenting on the pass for ESPN, Chris Berman said something to the effect of "Completion rate: 100%. Pass-to-touchdown ratio: also 100%. QB Rating = INFINITY!!!!!"

This came to mind when reading this great piece by Wolf and McShane in NRO, which argues that had Congress redirected money from the bloated and ineffectual DCPS to the Opportunity Scholarship Program, the cost of the program would have been nothing and the benefits substantial, meaning ROI = INFINITY!!!

!!!BOOOOOOOOOOOOOOOOOOOOOOOOOM!!

[Note: This is based on their peer reviewed article that is in the current issue of Education Finance and Policy.]


Charter or District in Milwaukee?

May 14, 2012

(Guest Post by Matthew Ladner)

Last year John Witte, Pat Wolf, Alicia Dean and Devin Carlson found evidence of significantly stronger academic gains for charter school students over district students in Milwaukee using the state data. This got me wondering what the 2011 Trial Urban NAEP scores would look like between MPS and Milwaukee charter schools. Now, mind you that this chart doesn't control for much, only comparing FRL-eligible students in the charters and the district. That's okay with me, as Witte, Wolf, Dean and Carlson have admirably performed that task on three years of data, with a promise of a fourth year in their 2012 report. Also, there is always at least a bit of sampling error with NAEP, yadda yadda et cetera.

Do the NAEP tests tell the same broad story as the Witte et al. study? Judge for yourself:

Those look like differences likely to survive the introduction of a whole bunch of control variables.


More on Milwaukee School Choice Research Results

March 5, 2012

I wrote last week about the release of the final research results from Milwaukee’s school choice program.  On Sunday the Milwaukee Journal Sentinel devoted its entire editorial page to a discussion of those results.  Check out the succinct summary of the findings by Patrick Wolf and John Witte.

Also be sure to check out the response from the head of the teachers union, Bob Peterson.  His rebuttal consists of noting that many students switch sectors, moving from choice to traditional public schools as well as in the opposite direction.  He thinks that this undermines the validity of Wolf and Witte's graduation rate analysis, but he fails to understand that the researchers used an intention-to-treat approach that attributes outcomes to students' original selection of sector regardless of their switching.  And on the special education claim he simply reiterates the Department of Public Instruction's (DPI) faulty effort to equate the percentage of students who are entitled to accommodations on the state test with the percentage of students who have disabilities.
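For readers unfamiliar with the method, here is a minimal sketch of how an intention-to-treat comparison works, on made-up data (an illustration of the general technique, not the authors' actual analysis):

```python
# Sketch of an intention-to-treat (ITT) comparison: outcomes are
# attributed to each student's ORIGINAL sector, so later switching
# between sectors cannot bias the comparison.  All data hypothetical.

students = [
    # (original_sector, current_sector, graduated)
    ("choice", "choice", True),
    ("choice", "public", True),    # switched out, still counted as "choice"
    ("choice", "choice", False),
    ("public", "public", False),
    ("public", "choice", True),    # switched in, still counted as "public"
    ("public", "public", False),
]

def itt_grad_rate(sector):
    """Graduation rate by ORIGINAL sector assignment, ignoring switches."""
    group = [s for s in students if s[0] == sector]
    return sum(s[2] for s in group) / len(group)

print(f"choice ITT grad rate: {itt_grad_rate('choice'):.2f}")
print(f"public ITT grad rate: {itt_grad_rate('public'):.2f}")
```

Because each student stays in the group he started in, Peterson's observation that students switch sectors does nothing to undermine the comparison.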

For more on how DPI understated the rate of disabilities in the Milwaukee choice program by between 400% and 900%, check out the new article Wolf, Fleming, and Witte just published in Education Next.  It's not only an excellent piece of research detective work on how DPI arrived at such an erroneous claim, but it is also a useful warning to anyone who thinks that government-issued claims provide the authoritative answer on research questions.  Government agencies, like DPI, can lie and distort as much as or more than any special interest group.  They just do it with your tax dollars and in your name.


New Milwaukee Choice Results

February 27, 2012

My colleague at the University of Arkansas, Patrick Wolf, along with John Witte at the University of Wisconsin and a team of researchers have released their final round of reports on the Milwaukee school choice program.  You can read the press release here and find the full set of reports here.

They find that access to a private school with a voucher in Milwaukee significantly increases the probability that students will graduate from high school:

“Our clearest positive finding is that the Choice Program boosts the rates at which students graduate from high school, enroll in a four-year college, and persist in college,” said John Witte, professor of political science and public affairs at the University of Wisconsin-Madison. “Since educational attainment is linked to positive life outcomes such as higher lifetime earnings and lower rates of incarceration, this is a very encouraging result of the program.”

They also find that “when similar students in the voucher program and in Milwaukee Public Schools were compared, the achievement growth of students in the voucher program was higher in reading but similar in math.”  Unfortunately, the testing conditions changed during the study because the private school testing went from being low stakes to high stakes, making it difficult to draw strong conclusions about the effects of the program on test scores.

In addition, it should be remembered that the design of the Milwaukee study is a matched comparison, which is less rigorous than random assignment.  Of the 10 more convincing random-assignment analyses that have been conducted, 9 find significant, positive effects and the tenth finds null effects.  You can find a summary and links to all of them here.
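As a rough illustration of what a matched-comparison design does: each voucher student is paired with the public-school student who looks most similar at baseline, and growth is compared across the pairs. The numbers below are hypothetical, and real studies (including Milwaukee's) match on many covariates, not just one test score:

```python
# Sketch of a matched-comparison design: pair each voucher student
# with the public-school student closest on a baseline test score,
# then compare score gains across the matched pairs.
# Hypothetical (baseline, followup) score pairs.

voucher = [(520, 560), (480, 510), (600, 615)]
public  = [(525, 540), (475, 500), (590, 610), (700, 720)]

def nearest_match(baseline, pool):
    """Greedy nearest-neighbor match on the baseline score."""
    return min(pool, key=lambda s: abs(s[0] - baseline))

gain_diffs = []
for base, follow in voucher:
    m_base, m_follow = nearest_match(base, public)
    # voucher student's gain minus matched public student's gain
    gain_diffs.append((follow - base) - (m_follow - m_base))

print(f"mean gain difference (voucher - matched public): "
      f"{sum(gain_diffs) / len(gain_diffs):.1f}")
```

The weakness relative to random assignment is that matching can only balance the characteristics you observe; unmeasured differences between the groups can still bias the result.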

Perhaps the most interesting part of the new Milwaukee results is the report on special education rates in the choice program.  As it turns out, Wisconsin's Department of Public Instruction grossly understated the percentage of students in the choice program who have disabilities.  Some reporters and policymakers act as if the Department of Public Instruction's reports are reliable and insightful because it is a government agency, while the reports of university professors are distorted and misleading.  Read this report on special education rates and I think you'll learn a lot about how politically biased government agencies like the Department of Public Instruction can be.


MPS Takes “Standing in the Schoolhouse Door” to a Whole New Level

May 31, 2011

(Guest post by Greg Forster)

Over the weekend, John Witte and Pat Wolf had a compelling article in the Milwaukee Journal Sentinel summarizing the real (as opposed to media-reported) results of the Milwaukee voucher program research being conducted by the School Choice Demonstration Project.

And then they dropped a bomb:

Recently, our research team conducted site visits to high schools in Milwaukee to examine any innovative things they are doing to educate disadvantaged children. The private high schools of the choice program graciously opened their doors to us and allowed us full access to their schools. Although several MPS principals urged us to come see their schools as well, the central administration at MPS prohibited us having any further contact with those schools as they considered our request for visits. We have not heard from them in weeks.

Our report on the private schools we visited, which will offer a series of best practices regarding student dropout prevention, will be released this fall. Should MPS choose to open the doors of their high schools to us, we will be able to learn from their approaches as well. [ea]

MPS opposition to vouchers takes standing in the schoolhouse door to a whole new level.


Patrick Wolf Testifies on DC Vouchers

February 16, 2011

Watch my colleague, Patrick Wolf, tell it like it is on DC vouchers to the U.S. Senate.

And you can read his testimony here.


What Doesn’t Work Clearinghouse

October 4, 2010

The U.S. Department of Education’s “What Works Clearinghouse” (WWC) is supposed to adjudicate the scientific validity of competing education research claims so that policymakers, reporters, practitioners, and others don’t have to strain their brains to do it themselves.  It would be much smarter for folks to exert the mental energy themselves rather than trust a government-operated truth committee to sort things out for them.

WWC makes mistakes, is subject to political manipulation, and applies arbitrary standards.  In short, what WWC says is not The Truth.  WWC is not necessarily less reliable than any other source that claims to adjudicate The Truth for you.  Everyone may make mistakes, distort results, and apply arbitrary standards.  The problem is that WWC has the official endorsement of the U.S. Department of Education, so many people fail to take their findings with the same grain of salt that they would apply to the findings of any other self-appointed truth committee.  And with the possibility that government money may be conditioned on WWC endorsement, WWC's shortcomings are potentially more dangerous.

I could provide numerous examples of WWC’s mistakes, political manipulation, and arbitrariness, but for the brevity of a blog post let me illustrate my point with just a few.

First, WWC was sloppy and lazy in its recent finding that the Milwaukee voucher evaluation, led by my colleagues Pat Wolf and John Witte, failed to meet “WWC evidence standards” because “the authors do not provide evidence that the subsamples of voucher recipients and public school comparison students analyzed in this study were initially equivalent in math and reading achievement.” WWC justifies their conclusion with a helpful footnote that explains: “At the time of publication, the WWC had contacted the corresponding author for additional information regarding the equivalence of the analysis samples at baseline and no response had been received.”

But if WWC had actually bothered to read the Milwaukee reports they would have found the evidence of equivalence they were looking for.  The Milwaukee voucher evaluation that Pat and John are leading has a matched-sample research design.  In fact, the research team produced an entire report whose purpose was to demonstrate that the matching had worked and produced comparable samples. In addition, in the 3rd Year report the researchers devoted an entire section (see appendix B) to documenting the continuing equivalence of the matched samples despite some attrition of students over time.
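The baseline-equivalence evidence WWC claimed was missing amounts to a simple statistical check: do the matched samples start out with indistinguishable scores? A minimal version of such a check, on hypothetical data, looks like this:

```python
# Minimal baseline-equivalence check: a Welch two-sample t-statistic
# on baseline reading scores for the two matched samples.  A small |t|
# indicates the samples are comparable at baseline.  Data hypothetical.
from statistics import mean, variance

choice_baseline = [512, 498, 530, 505, 521, 489, 517, 503]
public_baseline = [508, 501, 526, 510, 518, 493, 515, 499]

def welch_t(a, b):
    """Welch t-statistic: mean difference over its standard error."""
    se_sq = variance(a) / len(a) + variance(b) / len(b)
    return (mean(a) - mean(b)) / se_sq ** 0.5

t = welch_t(choice_baseline, public_baseline)
print(f"Welch t-statistic: {t:.3f}")
```

This is the kind of evidence the researchers documented at length in the baseline report and appendix B of the 3rd Year report; WWC only had to read it.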

Rather than reading the reports and examining the evidence on the comparability of the matched samples, WWC decided that the best way to determine whether the research met their standards for sample equivalence was to email John Witte and ask him.  I guess it’s all that hard work that justifies the multi-million dollar contract Mathematica receives from the U.S. Department of Education to run WWC.

As it turns out, Witte was traveling when WWC sent him the email.  When he returned he deleted their request along with a bunch of other emails without examining it closely.  But WWC took Witte’s non-response as confirmation that there was no evidence demonstrating the equivalence of the matched samples.  WWC couldn’t be bothered to contact any of the several co-authors.  They just went for their negative conclusion without further reading, thought, or effort.

I can’t prove it (and I’m sure my thought-process would not meet WWC standards), but I’ll bet that if the subject of the study was not vouchers, WWC would have been sure to read the reports closely and make extra efforts to contact co-authors before dismissing the research as failing to meet their standards.  But voucher researchers have grown accustomed to double-standards when others assess their research.  It’s just amazingly ironic to see the federally-sponsored entity charged with maintaining consistent and high standards fall so easily into their own double-standard.

Another example — I served on a WWC panel regarding school turnarounds a few years ago.  We were charged with assessing the research on how to successfully turn around a failing school.  We quickly discovered that there was no research that met WWC's standards on that question.  I suggested that we simply report that there is no rigorous evidence on this topic.  The staff rejected that suggestion, emphasizing that the Department of Education needed to have some evidence on effective turnaround strategies.

I have no idea why the political needs of the Department should have affected the truth committee in assessing the research, but it did.  We were told to look at non-rigorous research, including case-studies, anecdotes, and our own experience to do our best in identifying promising strategies.  It was strange — there were very tight criteria for what met WWC standards, but there were effectively no standards when it came to less rigorous research.  We just had to use our professional judgment.

We ended up endorsing some turnaround strategies (I can’t even remember what they were) but we did so based on virtually no evidence.  And this was all fine as long as we said that the conclusions were not based on research that met WWC standards.  I still don’t know what would have been wrong with simply saying that research doesn’t have much to tell us about effective turnaround strategies, but I guess that’s not the way truth committees work.  Truth committees have to provide the truth even when it is false.

The heart of the problem is that science has never depended on government-run truth committees to make progress.  It is simply not possible for the government to adjudicate the truth on disputed topics because the temptation to manipulate the answer or simply to make sloppy and lazy mistakes is all too great.  This is not a problem that is particular to the Obama Administration or to Mathematica.  My second example was from the Bush Administration when WWC was run by AIR.

The hard reality is that you can never fully rely on any authority to adjudicate the truth for you.  Yes, conflicting claims can be confusing.  Yes, it would be wonderfully convenient if someone just sorted it all out for us.  But once we give someone else the power to decide the truth on our behalf, we are prey to whatever distortions or mistakes they may make.  And since self-interest introduces distortions and the tendency to make mistakes, the government is a particularly untrustworthy entity to rely upon when it comes to government policy.

Science has always made progress by people sorting through the mess of competing, often technical, claims.  When official truth committees have intervened, it has almost always hindered scientific progress.  Remember that it was the official truth committee that determined that Galileo was wrong.  Truth committees have taken positions on evolution, global warming, and a host of other controversial topics.  It simply doesn't help.

We have no alternative to sorting through the evidence and trying to figure these things out ourselves.  We may rely upon the expertise of others in helping us sort out competing claims, but we should always do so with caution, since those experts may be mistaken or even deceptive.  But when the government starts weighing in as an expert, it speaks with far too much authority and can be much more coercive.  A What Works Clearinghouse simply doesn’t work.


Pat Wolf In Ed Next

August 20, 2009

Pat Wolf has an article summarizing and clarifying the latest evidence from the official evaluation of the D.C. voucher program newly posted at Education Next.

The part that struck me the most was how strong the DC voucher results are compared to the results of all of the other rigorous evaluations sponsored by the U.S. Department of Education:

The achievement results from the D.C. voucher evaluation are also striking when compared to the results from other experimental evaluations of education policies. The National Center for Education Evaluation and Regional Assistance (NCEE) at the IES has sponsored and overseen 11 studies that are RCTs, including the OSP evaluation. Only 3 of the 11 education interventions tested, when subjected to such a rigorous evaluation, have demonstrated statistically significant achievement impacts overall in either reading or math. The reading impact of the D.C. voucher program is the largest achievement impact yet reported in an RCT evaluation overseen by the NCEE. A second program was found to increase reading outcomes by about 40 percent less than the reading gain from the DC OSP. The third intervention was reported to have boosted math achievement by less than half the amount of the reading gain from the D.C. voucher program. Of the remaining eight NCEE-sponsored RCTs, six of them found no statistically significant achievement impacts overall and the other two showed a mix of no impacts and actual achievement losses from their programs.


Mistaken AJC Voucher Editorial Held Accountable

February 9, 2009

One of the great things about these here inter-web thingies is their ability to hold newspapers accountable when they make mistakes.  And the editorial by Maureen Downey that the Atlanta Journal Constitution ran last week on vouchers was very much mistaken.  In it Downey claimed "in the handful of states that have conducted experiments with vouchers, the results contradict claims of improvement by Johnson and other voucher advocates… Yet, in return for zero impact, Johnson proposes to dismantle public education in Georgia."  She also described "vouchers as a threat to the bedrock American belief that public education is critical to the health of the democracy and should not be sacrificed to political agendas."

To support her overwrought claims she cites a newspaper article on Ohio's voucher program, studies of the voucher programs in DC and Milwaukee conducted by my colleague Pat Wolf, and a review of the literature by Barrow and Rouse.  Unfortunately, she cites all of them selectively or misinterprets their findings as showing "zero impact."  Fortunately, Pat Wolf noticed her incorrect interpretation of his work and sent a letter, which the AJC ran today.

But letters are limited in length and less salient than the editorials they attempt to correct.  In the old days when newspapers were the only game in town, it was very difficult to hold newspapers accountable for editorials that were factually inaccurate.  They might have run letters, like the one Pat Wolf submitted, but they wouldn’t even have to do that if they didn’t want to.

With the inter-webs we not only have Pat Wolf's letter in the AJC, we can also circulate it by posting it on blogs, like I just did.  And we can add additional material, for which there would have been no space in the letters section.  So let me add that here is a complete list of random-assignment studies of the effects of vouchers on students who use them.  Here is a summary of the effect of vouchers on the public school system.  And here is random-assignment research on the effect of charter schools on participants.  And if she thinks choice destroys democracy, here is a review of that literature showing that she is mistaken about that as well.

If Maureen Downey and the Atlanta Journal Constitution want to say that evidence shows “zero impact” from vouchers, then they have to explain away all of this evidence.  And if they don’t want to justify their claims in the pages of their paper, we can hold them accountable on the web.

(edited for typos)


Voucher Effects on Participants

August 21, 2008

(This is an update of a post I originally wrote on August 21.  I’ve included the new DC voucher findings.)

Here is what I believe is a complete (no cherry-picking) list of analyses taking advantage of random-assignment experiments of the effect of vouchers on participants.  As I’ve previously written, 9 of the 10 analyses show significant, positive effects for at least some subgroups of students.

All of them have been published in peer reviewed journals or were subject to outside peer review by the federal government.

Four of the 10 studies are independent replications of earlier analyses.  Cowen replicates Greene, 2001.  Rouse replicates Greene, Peterson, and Du.  Barnard et al. replicate Peterson and Howell.  And Krueger and Zhu also replicate Peterson and Howell.  All of these independent replications (except for Krueger and Zhu) confirm the basic findings of the original analyses by also finding positive effects.

Anyone interested in a more complete discussion of these 10 analyses and why it is important to focus on the random-assignment studies, should read Patrick Wolf’s article in the BYU Law Review that has been reproduced here.

I’m eager to hear how Leo Casey and Eduwonkette, who’ve accused me of cherry-picking the evidence, respond.

  • These 6 studies conclude that all groups of student participants experienced reading or math achievement gains and/or increased likelihood of graduating from high school as a result of vouchers:

Cowen, Joshua M.  2008. “School Choice as a Latent Variable: Estimating the ‘Complier Average Causal Effect’ of Vouchers in Charlotte.” Policy Studies Journal 36 (2).

Greene, Jay P. 2001. “Vouchers in Charlotte,” Education Matters 1 (2):55-60.

Greene, Jay P., Paul E. Peterson, and Jiangtao Du. 1999. “Effectiveness of School Choice: The Milwaukee Experiment.” Education and Urban Society, 31, January, pp. 190-213.

Howell, William G., Patrick J. Wolf, David E. Campbell, and Paul E. Peterson. 2002. “School Vouchers and Academic Performance:  Results from Three Randomized Field Trials.” Journal of Policy Analysis and Management, 21, April, pp. 191-217. (Washington, DC: Gains for all participants, almost all were African Americans)

Rouse, Cecilia E. 1998. “Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program,” The Quarterly Journal of Economics, 113(2): 553-602.

Wolf, Patrick, Babette Gutmann, Michael Puma, Brian Kisida, Lou Rizzo, Nada Eissa, and Marsha Silverberg. March 2009.  Evaluation of the DC Opportunity Scholarship Program: Impacts After Three Years. U.S. Department of Education, Institute of Education Sciences. Washington, DC: U.S. Government Printing Office. (In the fourth year report the sample size shrank, so the positive achievement effect barely missed a strict threshold for statistical significance: p < .06, just missing the bar of p < .05.  But this new report was able for the first time to measure the effect of vouchers on the likelihood that students would graduate high school.  As it turns out, vouchers significantly boosted high school graduation rates.  As Paul Peterson points out, this suggests that vouchers boosted both achievement and graduation rates in the 4th year.  Read the 4th year evaluation here.)

  • These 3 studies conclude that at least one important sub-group of student participants experienced achievement gains from the voucher and no subgroup of students was harmed:

Barnard, John, Constantine E. Frangakis, Jennifer L. Hill, and Donald B. Rubin. 2003. “Principal Stratification Approach to Broken Randomized Experiments: A Case Study of School Choice Vouchers in New York City,” Journal of the American Statistical Association 98 (462):299–323. (Gains for African Americans)

Howell, William G., Patrick J. Wolf, David E. Campbell, and Paul E. Peterson. 2002. “School Vouchers and Academic Performance:  Results from Three Randomized Field Trials.” Journal of Policy Analysis and Management, 21, April, pp. 191-217. (Dayton, Ohio: Gains for African Americans)

Peterson, Paul E., and William G. Howell. 2004. “Efficiency, Bias, and Classification Schemes: A Response to Alan B. Krueger and Pei Zhu.” American Behavioral Scientist, 47(5): 699-717.  (New York City: Gains for African Americans)

This 1 study concludes that no sub-group of student participants experienced achievement gains from the voucher:

Krueger, Alan B., and Pei Zhu. 2004. “Another Look at the New York City School Voucher Experiment,” The American Behavioral Scientist 47 (5):658–698.

(Update: For a review of systemic-effect research, that is, how expanded competition affects achievement in traditional public schools, see here.)

