Hitt, McShane and Wolf Meta-Analysis leads to a call for humility

March 19, 2018

(Guest Post by Matthew Ladner)

My favorite quote from Hitt, McShane and Wolf’s new study:

Even with these caveats in mind, the policy implications from this analysis are clear. The most obvious implication is that policymakers need to be much more humble in what they believe that test scores tell them about the performance of schools of choice. Test scores are not giving us the whole picture. Insofar as test scores are used to make determinations in "portfolio" governance structures or are used to close (or expand) schools, policymakers might be making errors. This is not to say that test scores should be wholly discarded. Rather, test scores should be put in context and should not automatically occupy a privileged place over parental demand and satisfaction as short-term measures of school choice success or failure.

P.S. Letting parents take the lead on which schools expand and/or close can work out fine on the types of tests schools have almost no ability/incentive to game:

The implications of this meta-analysis of the research literature could stretch far beyond the choice sector in time. If test scores continue to show a weak and inconsistent relationship with long-term outcomes, broad rethinking will be required. Let’s see what happens next.


Saying Goodbye to a Flying Dutchman

April 3, 2016

(Guest post by Patrick J. Wolf)

Prolific Dutch education sociologist Jaap Dronkers died of a stroke on Wednesday at the age of 70. Jaap (pronounced "yahp") was the "James Coleman" of European education research and a supporter of parental school choice for the simple reason that his exhaustive research demonstrated that it benefits families and the broader society. More importantly, Jaap was a considerate colleague and a dear friend. He will be greatly missed.

Among Jaap's many accomplishments was his pioneering of the use of "league tables" to measure school performance in the countries of Europe. He then conducted a series of sophisticated school- and student-level analyses of demographic and achievement data to determine which types of schools were delivering value-added to students in terms of both test-score gains and civic outcomes. Jaap's conclusions, published in important peer-reviewed journal articles such as here and here, were that elite private schools produce better student outcomes because they surround students with advantaged peers. Religious private schools participating in voucher-type programs, on the other hand, deliver positive value-added to students, net of peer effects, in terms of both achievement and civic outcomes.

I first met Jaap at a conference in London that I co-organized with Princeton political theorist Stephen Macedo in 2003. The conference produced a book, co-edited by the two of us and published by the Brookings Institution, called Educating Citizens: International Perspectives on School Choice and Civic Values. Steve and I had brought an A-list of scholars from the U.S., including Dave Campbell, Rick Garnett, Charlie Glenn, William Galston, Bruno Manno, and John Witte. At the opening reception, Charlie, who is better networked among international education scholars than any American I know, pulled Steve and me aside and said, "This is a very strong group of European scholars." Even so, Jaap Dronkers stood out from the rest. He was the only participant in the project whom we permitted to author multiple chapters in our book: one about how the Dutch education system manages school choice to promote civic values and another about how European religious schools tend to have a positive impact on student cognitive outcomes while equaling government-run public schools in generating civic outcomes. Jaap concluded that second chapter with the statement, in his typical clear but scholarly tone, "Not enough is known about the effects of school choice in Europe, but what is known is generally comforting." (p. 308)

In 2009 Jaap turned the tables and invited me to attend a conference in Geneva on school choice and educational equity. He then recruited my conference paper, on what the DC Choice achievement effects suggest for social justice, for review and eventual publication in the special issue of Educational Research and Evaluation that he co-edited in the wake of the event. Jaap was a skillful and demanding editor, even while operating in his fourth language of English (his primary languages were Dutch, German, and French). At that time he was Professor of Social Stratification and Inequality at the highly prestigious European University Institute in Florence, Italy.

In 2014 Jaap traveled to Florida with his wife, Tonny, to deliver a keynote address at the Second International School Choice & Reform Academic Conference. Far from simply beaming in, speaking, and beaming out, as keynoters so often do, Jaap hung around to attend all of the panels, asking piercing questions and breaking bread with new American friends he had made. He returned for the third edition of the conference to present fascinating work on Islamic schools in the Netherlands that was subsequently published.

Jaap Dronkers was a five-tool social scientist.  He had a firm grasp of theory, refined empirical analysis skills, strong writing ability in four languages, solid speaking skills, and a delightful sense of humor.  I’ll always remember how Jaap introduced me to his European research colleagues.  “Patrick,” Jaap would say, “is from the U.S. where he can actually run experiments!”  One of my junior colleagues, Brian Kisida, upon meeting Jaap at an international conference in Belgium, simply said, “That is one cool dude.”  Indeed he was.

Rest in peace, Jaap Dronkers, European education researcher extraordinaire, supporter of parental school choice, citizen of the world and friend to many.  We know so much more about how to improve the education of children because of you.


Let’s Search for Sweet Spots, but with modesty please

June 5, 2014

(Guest Post by Matthew Ladner)

I have a number of friends who have either helped develop or signed on to a Statement of Principles regarding a three-sector reform strategy and what they view as a desirable level of state oversight of private choice programs. This post will work better for you if you go and read the document first.

The needle starts to scratch across the vinyl for me at:

Even with the expanded choice to the private sector, they also have produced modest results.

This has become a mantra in recent years, but I believe that this statement reflects an incomplete understanding of the research results, and specifically a lack of understanding regarding our random assignment studies of voucher programs. The basic takeaway from the random assignment studies, in my view, is as follows: the test score impacts are modest but often statistically significant within the three-year window during which we can reliably study them.

So the Milwaukee Parental Choice Program offered $6,400 vouchers to very low-income inner-city parents whose other options were a district spending $14,000 per child and/or charter schools spending somewhere in between. We have several random assignment studies of the test score impacts that find that the experimental group basically stays on grade level (a triumph for poor inner-city children) whereas the control group declines year by year. You get to watch this process for about three years before the random assignment breaks down on you.

What happens to test scores after Year 3? No one knows for sure; these studies fall apart over time. We do know things, however, about what happens regarding high school graduation, college attendance, college persistence, and so on. Borrowing a slide that Pat Wolf presented at the Alliance for School Choice conference:

[Slide: educational attainment impacts of the choice program, from Pat Wolf's presentation]

So basically you are less likely to graduate in five years (first red column) because you are more likely to graduate on time, and less likely to graduate from a two-year college (second red column) because you are more likely to be attending a four-year college. The blue columns are all positive impacts from having been a choice student.

Now if you are determined to cling to the "modest" camp by saying that you wish these impacts were even larger, well, I do too. I also wish that Chuck Norris' tears really did cure cancer. At this point it might be appropriate to ask just how much of a positive impact we should reasonably expect from a program giving profoundly disadvantaged children a $6,400 coupon. Although we don't know what happens after a few years of random assignment study, those graduation figures are ultimately far more important than 6th grade math scores.

Being far more likely to graduate from high school and college for less than half the money sounds like a triumph to me, albeit one that we could and should hope to improve upon through more robust program designs. The standard here should not be to expect MPCP to transform every last profoundly disadvantaged inner-city child into a Dean's List Ivy Leaguer. Rather, in judging the impact of MPCP we should look at it on a return-per-dollar-invested basis. Looked at appropriately through this ROI lens, it is clear that the return on MPCP has been quite good, and that we should be looking for ways to get even more of it.
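
To make the ROI framing concrete, here is a back-of-the-envelope sketch in Python. The per-pupil figures come from the discussion above; the graduation rates are hypothetical placeholders, not the actual slide figures:

```python
# Back-of-the-envelope ROI comparison for the Milwaukee Parental Choice
# Program (MPCP). Per-pupil costs are from the post above; the graduation
# rates are HYPOTHETICAL placeholders, not the actual study figures.

voucher_cost = 6_400    # MPCP voucher amount per student (from the post)
district_cost = 14_000  # Milwaukee district spending per student (from the post)

grad_rate_choice = 0.76    # hypothetical on-time graduation rate, choice students
grad_rate_district = 0.69  # hypothetical rate for comparable district students

# Cost per graduate: dollars spent divided by the share who graduate.
cost_per_grad_choice = voucher_cost / grad_rate_choice
cost_per_grad_district = district_cost / grad_rate_district

print(f"Cost per graduate (choice):   ${cost_per_grad_choice:,.0f}")
print(f"Cost per graduate (district): ${cost_per_grad_district:,.0f}")
print(f"District/choice cost ratio:   {cost_per_grad_district / cost_per_grad_choice:.1f}x")
```

Even with deliberately modest assumed effects, the cost-per-graduate gap is large, which is the whole point of judging the program on a return-per-dollar basis.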

Then I got to this statement:

We know that smart accountability measures can ensure that public money and young lives are not invested in low-performing private schools.

The statement offers no evidence to support this claim, and moreover the claim itself dodges the more important question of the costs and benefits of regulation. Is it possible for "smart" accountability to keep young lives out of low-performing private schools? Sure, it's possible. Smart training could ensure that I go from being a 46-year-old policy wonk to heavyweight champion of the world. I mean, it is possible, right? Is it also possible, even highly likely, for the whole enterprise to go south on you in a variety of different ways? Yep, that's very possible too.

Who is going to administer these smart accountability measures, and who will administer them a few years later? What about 25 years from now? How often will these people do something they think is smart that proves to be otherwise? Unless we want the Federal Reserve to administer these programs, how long will it be until politics subverts the process of "smart" technocratic policymaking? And as with the Fed, technocratic mistakes may prove quite costly.

Even well-intentioned efforts at “smart accountability” could easily backfire.  Let’s take Louisiana as an example.  Louisiana policymakers decided to grade all their schools A-F based upon a state accountability test tied to the state academic standards, and then decided to create a mechanism to remove low-performing schools from eligibility to take new students.  This probably sounds clever at a Georgetown cocktail party, but in Louisiana two-thirds of the state’s private schools have decided to stay out of the program, denying thousands of seats to low-income children attending relatively poor performing public schools in one of the lowest performing states in the union.

Ooops.

Let's take things a step further. Is it possible that the one-third of Louisiana private schools that chose to participate in the program had a selection bias towards being more on the financially desperate side than those that decided to stay out? I have no data to show that this in fact happened, but who would be surprised if it did? The correlation between financial desperation and academic ineptitude often proves strong. In such a case the initial impact of the regulatory regime might be precisely the opposite of what was intended, with many higher-performing schools choosing to keep their distance. Worse still, it might create an incentive for private schools to engage in the same sort of gaming strategies that have been common in states with rising state test scores but flat NAEP scores: teaching to test items rather than to standards (Arizona is waving hello!). Finally, of course, it is no triumph if the schools do actually teach the state standards, because the whole idea of a choice program is to provide, well, meaningfully varying choices for parents. If you want state tests and standards in Louisiana, you already have thousands of options available to you in the form of district and charter schools.

At the end of the day, policymakers must make decisions about where to draw the line in such matters. We have no right or wrong answers here, only preferences. Personally, I believe that choice programs should provide academic transparency to the public in ways designed to have the lightest possible touch on the curricular independence of schools. I'm willing to sacrifice some level of private school participation in return for transparency. Preferences will vary, and we will learn things along the way through variation between programs. What I think I have learned, however, is that Arizona's transparency-light programs represent a costly obstacle to building broad support, and that the Louisiana and Indiana model has far too many private schools saying "thanks but no thanks."

To my friends who crafted and signed on to this statement I say only that we should continue the dialogue and gather more information. I don't believe in regulation-free programs, nor do I expect or desire to pass any, so I agree with you to a degree. I do, however, strongly suspect that many of you are underestimating the costs of regulation and overestimating the capacity of technocratic regimes.


Wolf and McShane in NRO

February 1, 2013

(Guest Post by Matthew Ladner)

A few years ago, a rookie quarterback named Michael Bishop was brought into a game to throw a last-second desperation bomb before the end of the half. It was his first pass as an NFL player, and against the odds it resulted in a long touchdown. Commenting on the pass for ESPN, Chris Berman said something to the effect of "Completion rate: 100%. Pass-to-touchdown ratio: also 100%. QB Rating = INFINITY!!!!!"

This came to mind when reading this great piece by Wolf and McShane in NRO, which argues that had Congress redirected money from the bloated and ineffectual DCPS to the Opportunity Scholarship Program, the cost of the program would have been nothing and the benefits substantial, meaning ROI = INFINITY!!!

!!!BOOOOOOOOOOOOOOOOOOOOOOOOOM!!

[Note: This is based on their peer reviewed article that is in the current issue of Education Finance and Policy.]


Charter or District in Milwaukee?

May 14, 2012

(Guest Post by Matthew Ladner)

Last year John Witte, Pat Wolf, Alicia Dean and Devin Carlson found evidence of significantly stronger academic gains for charter school students over district students in Milwaukee using the state data. This got me wondering what the 2011 Trial Urban NAEP scores would look like between MPS and Milwaukee charter schools. Now, mind you, this chart doesn't control for much, only comparing FRL-eligible students in the charters and the district. That's okay with me, as Witte, Wolf, Dean and Carlson have admirably performed that task on three years of data, with a promise of a fourth year in a 2012 report. Also, there is always at least a bit of sampling error with NAEP, yadda yadda etcetera.

Do the NAEP tests tell the same broad story as the Witte et al. study? Judge for yourself:

Those look like differences likely to survive the introduction of a whole bunch of control variables.


More on Milwaukee School Choice Research Results

March 5, 2012

I wrote last week about the release of the final research results from Milwaukee’s school choice program.  On Sunday the Milwaukee Journal Sentinel devoted its entire editorial page to a discussion of those results.  Check out the succinct summary of the findings by Patrick Wolf and John Witte.

Also be sure to check out the response from the head of the teachers union, Bob Peterson. His rebuttal consists of noting that many students switch sectors, moving from choice to traditional public schools as well as in the opposite direction. He thinks that this undermines the validity of Wolf and Witte's graduation rate analysis, but he fails to understand that the researchers used an intention-to-treat approach that attributes outcomes to students' original selection of sector regardless of their switching. And on the special education claim he simply reiterates the Department of Public Instruction's (DPI) faulty effort to equate the percentage of students who are entitled to accommodations on the state test with the percentage of students who have disabilities.
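
For readers unfamiliar with the method, here is a minimal Python sketch of intention-to-treat attribution, using invented toy records; it shows why sector-switching does not undermine the comparison:

```python
# Minimal illustration of an intention-to-treat (ITT) analysis with toy data.
# Each student is attributed to the sector they ORIGINALLY selected, even if
# they later switched. All records here are invented for illustration only.

students = [
    # (original_sector, final_sector, graduated)
    ("choice", "choice", True),
    ("choice", "public", True),   # switched out, still counted as "choice"
    ("choice", "public", False),
    ("public", "public", False),
    ("public", "choice", True),   # switched in, still counted as "public"
    ("public", "public", True),
]

def itt_grad_rate(data, sector):
    """Graduation rate grouped by ORIGINAL sector selection (the ITT estimand)."""
    group = [grad for orig, _final, grad in data if orig == sector]
    return sum(group) / len(group)

print(f"ITT graduation rate, choice: {itt_grad_rate(students, 'choice'):.2f}")
print(f"ITT graduation rate, public: {itt_grad_rate(students, 'public'):.2f}")
```

Because every student stays in the group defined by their original selection, later switching cannot contaminate the group assignment, which is exactly the property Peterson's rebuttal overlooks.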

For more on how DPI understated the rate of disabilities in the Milwaukee choice program by between 400% and 900%, check out the new article Wolf, Fleming, and Witte just published in Education Next. It's not only an excellent piece of research detective work on how DPI arrived at such an erroneous claim, but it is also a useful warning to anyone who thinks that government-issued claims provide the authoritative answer on research questions. Government agencies, like DPI, can lie and distort as much as or more than any special interest group. They just do it with your tax dollars and in your name.


New Milwaukee Choice Results

February 27, 2012

My colleague at the University of Arkansas, Patrick Wolf, along with John Witte at the University of Wisconsin and a team of researchers have released their final round of reports on the Milwaukee school choice program.  You can read the press release here and find the full set of reports here.

They find that access to a private school with a voucher in Milwaukee significantly increases the probability that students will graduate from high school:

“Our clearest positive finding is that the Choice Program boosts the rates at which students graduate from high school, enroll in a four-year college, and persist in college,” said John Witte, professor of political science and public affairs at the University of Wisconsin-Madison. “Since educational attainment is linked to positive life outcomes such as higher lifetime earnings and lower rates of incarceration, this is a very encouraging result of the program.”

They also find that “when similar students in the voucher program and in Milwaukee Public Schools were compared, the achievement growth of students in the voucher program was higher in reading but similar in math.”  Unfortunately, the testing conditions changed during the study because the private school testing went from being low stakes to high stakes, making it difficult to draw strong conclusions about the effects of the program on test scores.

In addition, it should be remembered that the design of the Milwaukee study is a matched comparison, which is less rigorous than random assignment. The more convincing random-assignment analyses are significant and positive in 9 of the 10 that have been conducted, with the tenth having null effects. You can find a summary and links to all of them here.

Perhaps the most interesting part of the new Milwaukee results is the report on special education rates in the choice program. As it turns out, Wisconsin's Department of Public Instruction grossly understated the percentage of students in the choice program who have disabilities. Some reporters and policymakers act as if the Department of Public Instruction's reports are reliable and insightful because it is a government agency, while the reports of university professors are distorted and misleading. Read this report on special education rates and I think you'll learn a lot about how politically biased government agencies like the Department of Public Instruction can be.


MPS Takes “Standing in the Schoolhouse Door” to a Whole New Level

May 31, 2011

(Guest post by Greg Forster)

Over the weekend, John Witte and Pat Wolf had a compelling article in the Milwaukee Journal Sentinel summarizing the real (as opposed to media-reported) results of the Milwaukee voucher program research being conducted by the School Choice Demonstration Project.

And then they dropped a bomb:

Recently, our research team conducted site visits to high schools in Milwaukee to examine any innovative things they are doing to educate disadvantaged children. The private high schools of the choice program graciously opened their doors to us and allowed us full access to their schools. Although several MPS principals urged us to come see their schools as well, the central administration at MPS prohibited us having any further contact with those schools as they considered our request for visits. We have not heard from them in weeks.

Our report on the private schools we visited, which will offer a series of best practices regarding student dropout prevention, will be released this fall. Should MPS choose to open the doors of their high schools to us, we will be able to learn from their approaches as well. [ea]

MPS opposition to vouchers takes standing in the schoolhouse door to a whole new level.


Patrick Wolf Testifies on DC Vouchers

February 16, 2011

Watch my colleague, Patrick Wolf, tell it like it is on DC vouchers to the U.S. Senate.

And you can read his testimony here.


What Doesn’t Work Clearinghouse

October 4, 2010

The U.S. Department of Education’s “What Works Clearinghouse” (WWC) is supposed to adjudicate the scientific validity of competing education research claims so that policymakers, reporters, practitioners, and others don’t have to strain their brains to do it themselves.  It would be much smarter for folks to exert the mental energy themselves rather than trust a government-operated truth committee to sort things out for them.

WWC makes mistakes, is subject to political manipulation, and applies arbitrary standards. In short, what WWC says is not The Truth. WWC is not necessarily less reliable than any other source that claims to adjudicate The Truth for you. Everyone may make mistakes, distort results, and apply arbitrary standards. The problem is that WWC has the official endorsement of the U.S. Department of Education, so many people fail to take its findings with the same grain of salt that they would apply to the findings of any other self-appointed truth committee. And with the possibility that government money may be conditioned on WWC endorsement, WWC's shortcomings are potentially more dangerous.

I could provide numerous examples of WWC’s mistakes, political manipulation, and arbitrariness, but for the brevity of a blog post let me illustrate my point with just a few.

First, WWC was sloppy and lazy in its recent finding that the Milwaukee voucher evaluation, led by my colleagues Pat Wolf and John Witte, failed to meet “WWC evidence standards” because “the authors do not provide evidence that the subsamples of voucher recipients and public school comparison students analyzed in this study were initially equivalent in math and reading achievement.” WWC justifies their conclusion with a helpful footnote that explains: “At the time of publication, the WWC had contacted the corresponding author for additional information regarding the equivalence of the analysis samples at baseline and no response had been received.”

But if WWC had actually bothered to read the Milwaukee reports they would have found the evidence of equivalence they were looking for.  The Milwaukee voucher evaluation that Pat and John are leading has a matched-sample research design.  In fact, the research team produced an entire report whose purpose was to demonstrate that the matching had worked and produced comparable samples. In addition, in the 3rd Year report the researchers devoted an entire section (see appendix B) to documenting the continuing equivalence of the matched samples despite some attrition of students over time.
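
For the curious, here is a minimal Python sketch of the kind of baseline-equivalence check such reports document. The scores are randomly generated stand-ins, and the 0.25 standard deviation cutoff is a common rule of thumb, not necessarily the exact standard the study used:

```python
# Sketch of a baseline-equivalence check for matched samples: compare the
# groups on pre-treatment test scores. The scores below are randomly
# generated stand-ins, NOT the actual Milwaukee study data.

import random

random.seed(0)
voucher_baseline = [random.gauss(500, 50) for _ in range(200)]     # hypothetical scale scores
comparison_baseline = [random.gauss(502, 50) for _ in range(200)]  # hypothetical matched sample

def mean(xs):
    return sum(xs) / len(xs)

def pooled_sd(a, b):
    """Pooled standard deviation of two samples."""
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    na, nb = len(a), len(b)
    return (((na - 1) * var(a) + (nb - 1) * var(b)) / (na + nb - 2)) ** 0.5

# Standardized mean difference at baseline; a common rule of thumb treats
# |SMD| below 0.25 standard deviations as acceptable balance.
smd = (mean(voucher_baseline) - mean(comparison_baseline)) / pooled_sd(
    voucher_baseline, comparison_baseline)
print(f"Standardized mean difference at baseline: {smd:.3f}")
print(f"Within the 0.25 SD rule of thumb? {abs(smd) < 0.25}")
```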

Rather than reading the reports and examining the evidence on the comparability of the matched samples, WWC decided that the best way to determine whether the research met their standards for sample equivalence was to email John Witte and ask him.  I guess it’s all that hard work that justifies the multi-million dollar contract Mathematica receives from the U.S. Department of Education to run WWC.

As it turns out, Witte was traveling when WWC sent him the email.  When he returned he deleted their request along with a bunch of other emails without examining it closely.  But WWC took Witte’s non-response as confirmation that there was no evidence demonstrating the equivalence of the matched samples.  WWC couldn’t be bothered to contact any of the several co-authors.  They just went for their negative conclusion without further reading, thought, or effort.

I can't prove it (and I'm sure my thought process would not meet WWC standards), but I'll bet that if the subject of the study were not vouchers, WWC would have been sure to read the reports closely and make extra efforts to contact co-authors before dismissing the research as failing to meet their standards. But voucher researchers have grown accustomed to double standards when others assess their research. It's just amazingly ironic to see the federally sponsored entity charged with maintaining consistent and high standards fall so easily into its own double standard.

Another example: I served on a WWC panel regarding school turnarounds a few years ago. We were charged with assessing the research on how to successfully turn around a failing school. We quickly discovered that there was no research that met WWC's standards on that question. I suggested that we simply report that there is no rigorous evidence on this topic. The staff rejected that suggestion, emphasizing that the Department of Education needed to have some evidence on effective turnaround strategies.

I have no idea why the political needs of the Department should have affected the truth committee in assessing the research, but they did. We were told to look at non-rigorous research, including case studies, anecdotes, and our own experience, to do our best in identifying promising strategies. It was strange: there were very tight criteria for what met WWC standards, but there were effectively no standards when it came to less rigorous research. We just had to use our professional judgment.

We ended up endorsing some turnaround strategies (I can’t even remember what they were) but we did so based on virtually no evidence.  And this was all fine as long as we said that the conclusions were not based on research that met WWC standards.  I still don’t know what would have been wrong with simply saying that research doesn’t have much to tell us about effective turnaround strategies, but I guess that’s not the way truth committees work.  Truth committees have to provide the truth even when it is false.

The heart of the problem is that science has never depended on government-run truth committees to make progress.  It is simply not possible for the government to adjudicate the truth on disputed topics because the temptation to manipulate the answer or simply to make sloppy and lazy mistakes is all too great.  This is not a problem that is particular to the Obama Administration or to Mathematica.  My second example was from the Bush Administration when WWC was run by AIR.

The hard reality is that you can never fully rely on any authority to adjudicate the truth for you.  Yes, conflicting claims can be confusing.  Yes, it would be wonderfully convenient if someone just sorted it all out for us.  But once we give someone else the power to decide the truth on our behalf, we are prey to whatever distortions or mistakes they may make.  And since self-interest introduces distortions and the tendency to make mistakes, the government is a particularly untrustworthy entity to rely upon when it comes to government policy.

Science has always made progress by people sorting through the mess of competing, often technical, claims.  When official truth committees have intervened, it has almost always hindered scientific progress.  Remember that  it was the official truth committee that determined that Galileo was wrong.  Truth committees have taken positions on evolution, global warming, and a host of other controversial topics.  It simply doesn’t help.

We have no alternative to sorting through the evidence and trying to figure these things out ourselves.  We may rely upon the expertise of others in helping us sort out competing claims, but we should always do so with caution, since those experts may be mistaken or even deceptive.  But when the government starts weighing in as an expert, it speaks with far too much authority and can be much more coercive.  A What Works Clearinghouse simply doesn’t work.

