What Doesn’t Work Clearinghouse

October 4, 2010

The U.S. Department of Education’s “What Works Clearinghouse” (WWC) is supposed to adjudicate the scientific validity of competing education research claims so that policymakers, reporters, practitioners, and others don’t have to strain their brains to do it themselves.  It would be much smarter for folks to exert that mental energy themselves than to trust a government-operated truth committee to sort things out for them.

WWC makes mistakes, is subject to political manipulation, and applies arbitrary standards.  In short, what WWC says is not The Truth.  WWC is not necessarily less reliable than any other source that claims to adjudicate The Truth for you.  Everyone may make mistakes, distort results, and apply arbitrary standards.  The problem is that WWC has the official endorsement of the U.S. Department of Education, so many people fail to take its findings with the same grain of salt they would apply to the findings of any other self-appointed truth committee.  And with the possibility that government money may be conditioned on WWC endorsement, WWC’s shortcomings are potentially more dangerous.

I could provide numerous examples of WWC’s mistakes, political manipulation, and arbitrariness, but in the interest of keeping this blog post brief, let me illustrate my point with just a few.

First, WWC was sloppy and lazy in its recent finding that the Milwaukee voucher evaluation, led by my colleagues Pat Wolf and John Witte, failed to meet “WWC evidence standards” because “the authors do not provide evidence that the subsamples of voucher recipients and public school comparison students analyzed in this study were initially equivalent in math and reading achievement.” WWC justifies their conclusion with a helpful footnote that explains: “At the time of publication, the WWC had contacted the corresponding author for additional information regarding the equivalence of the analysis samples at baseline and no response had been received.”

But if WWC had actually bothered to read the Milwaukee reports they would have found the evidence of equivalence they were looking for.  The Milwaukee voucher evaluation that Pat and John are leading has a matched-sample research design.  In fact, the research team produced an entire report whose purpose was to demonstrate that the matching had worked and produced comparable samples. In addition, in the 3rd Year report the researchers devoted an entire section (see appendix B) to documenting the continuing equivalence of the matched samples despite some attrition of students over time.

Rather than reading the reports and examining the evidence on the comparability of the matched samples, WWC decided that the best way to determine whether the research met their standards for sample equivalence was to email John Witte and ask him.  I guess it’s all that hard work that justifies the multi-million dollar contract Mathematica receives from the U.S. Department of Education to run WWC.

As it turns out, Witte was traveling when WWC sent him the email.  When he returned, he deleted their request along with a bunch of other emails without examining it closely.  But WWC took Witte’s non-response as confirmation that there was no evidence demonstrating the equivalence of the matched samples.  WWC couldn’t be bothered to contact any of the several co-authors.  They just jumped to their negative conclusion without further reading, thought, or effort.

I can’t prove it (and I’m sure my thought process would not meet WWC standards), but I’ll bet that if the subject of the study were not vouchers, WWC would have been sure to read the reports closely and make extra efforts to contact co-authors before dismissing the research as failing to meet their standards.  But voucher researchers have grown accustomed to double standards when others assess their research.  It’s just amazingly ironic to see the federally sponsored entity charged with maintaining consistent and high standards fall so easily into a double standard of its own.

Another example — I served on a WWC panel regarding school turnarounds a few years ago.  We were charged with assessing the research on how to successfully turn around a failing school.  We quickly discovered that there was no research on that question that met WWC’s standards.  I suggested that we simply report that there is no rigorous evidence on this topic.  The staff rejected that suggestion, emphasizing that the Department of Education needed to have some evidence on effective turnaround strategies.

I have no idea why the political needs of the Department should have affected the truth committee’s assessment of the research, but they did.  We were told to look at non-rigorous research, including case studies, anecdotes, and our own experience, and to do our best to identify promising strategies.  It was strange — there were very tight criteria for what met WWC standards, but there were effectively no standards when it came to less rigorous research.  We just had to use our professional judgment.

We ended up endorsing some turnaround strategies (I can’t even remember what they were) but we did so based on virtually no evidence.  And this was all fine as long as we said that the conclusions were not based on research that met WWC standards.  I still don’t know what would have been wrong with simply saying that research doesn’t have much to tell us about effective turnaround strategies, but I guess that’s not the way truth committees work.  Truth committees have to provide the truth even when it is false.

The heart of the problem is that science has never depended on government-run truth committees to make progress.  It is simply not possible for the government to adjudicate the truth on disputed topics because the temptation to manipulate the answer or simply to make sloppy and lazy mistakes is all too great.  This is not a problem that is particular to the Obama Administration or to Mathematica.  My second example was from the Bush Administration when WWC was run by AIR.

The hard reality is that you can never fully rely on any authority to adjudicate the truth for you.  Yes, conflicting claims can be confusing.  Yes, it would be wonderfully convenient if someone just sorted it all out for us.  But once we give someone else the power to decide the truth on our behalf, we are prey to whatever distortions or mistakes they may make.  And since self-interest introduces distortions and the tendency to make mistakes, the government is a particularly untrustworthy entity to rely upon when it comes to government policy.

Science has always made progress by people sorting through the mess of competing, often technical, claims.  When official truth committees have intervened, they have almost always hindered scientific progress.  Remember that it was the official truth committee that determined that Galileo was wrong.  Truth committees have taken positions on evolution, global warming, and a host of other controversial topics.  It simply doesn’t help.

We have no alternative to sorting through the evidence and trying to figure these things out ourselves.  We may rely upon the expertise of others in helping us sort out competing claims, but we should always do so with caution, since those experts may be mistaken or even deceptive.  But when the government starts weighing in as an expert, it speaks with far too much authority and can be much more coercive.  A What Works Clearinghouse simply doesn’t work.


Feds And Research Shouldn’t Mix

March 2, 2010

 

As head of a department that has received, and may wish to continue receiving, federal research funds, I say this completely against my own self-interest:  the federal government should not be in the business of conducting or funding education policy research.  The federal government should facilitate research by greatly expanding the availability of individual student data sets stripped of identifying information.  But the federal government is particularly badly positioned to conduct or fund analyses based on those data.

The reasons for keeping the federal government out of education policy research should be obvious to everyone not blinded by the desire to keep eating at the trough.  First, the federal government develops and advocates for particular education policies, so it has a conflict of interest in evaluating those policies.  Even when those evaluations are outsourced to supposedly independent evaluators, they are never truly independent.  The evaluation business is a repeat-play game, so everyone understands that they cannot alienate powerful political forces too much without risking future evaluation dollars.  The safe thing to conclude in those circumstances is that the evidence on a policy’s effectiveness is unclear and that further research is needed, which, not surprisingly, is what many federally funded evaluations find.

Unfortunately, political influence in education policy research is often more direct and explicit than the implicit distortions of a repeat-play game.  Every federally funded evaluation with which I am familiar has been subject to at least some subtle political influence.

I can’t mention most without breaking confidences, but I can briefly describe my own experience with a What Works Clearinghouse (WWC) panel on which I served (which was managed by a different firm than the one that currently manages WWC).  On that panel we were supposed to identify what was known from the research literature about how to turn around failing schools.  As we quickly discovered, there was virtually nothing known from rigorous research on how to successfully turn around failing schools.  I suggested that we should simply report that as our finding — nothing is known.  But we were told that the Department of Education wouldn’t like that and that we had to say something about how to turn around schools.  I asked on what basis we would draw those conclusions and was told that we should rely on our “professional judgment” informed by personal experience and non-rigorous research.  So, we dutifully produced a report that was much more of a political document than a scientific one.  We didn’t know anything from science about how to turn around schools, but we crafted a political answer to satisfy political needs.

In addition to being politically influenced, federally funded research is almost always overly expensive.  Federal education policy research costs many times more than it has to.  There are several federal evaluations where the cost of the evaluation rivals the annual cost of the program being evaluated.

Beyond being politically distorted and cost-inefficient, a whole lot of federally funded research is really awful.  In particular, I am thinking of the work of the federally funded regional research labs.  For every useful study or review they release, there must be hundreds that are dreck.  The regional labs are so bad that the Department of Education has been trying to eliminate them from its budget for years.  But members of Congress want the pork, so they keep the regional labs alive.

Being politically distorted, cost-inefficient, and often of low quality is not a good combination.  Let’s get the feds out of the research business.  They can still play the critically important role of providing data sets to the research community, but they should not be funding evaluations or research summaries.  We need the feds to help with data because privacy laws are too great a barrier for individual researchers.  But once basic data is available, the cost of analyzing it should be quite low — just the time of the researchers and some computer equipment, perhaps supplemented with additional field data collection.  And if there is no “official” evaluation or “official” summary of the research literature, the research community is free to examine the evidence and draw its own conclusions.  Yes, there will be disagreement and messiness, but the world is uncertain and messy.  Freedom is uncertain and messy.  The solution is not to privilege overpriced, often lousy, politically driven, federally funded work.

(edited for typos)