Education policy research is not really a scientific enterprise. If it were, the field would be equally open to research of comparable rigor regardless of its findings. That is simply not the case. Research with preferred findings is more easily published in leading journals and embraced by scholars than research supporting less favored results.
There are countless examples of this, but here is one to illustrate the point…
The Journal of Policy Analysis and Management, a top journal in our field, has just published an analysis of vouchers in Indiana based on a matching research design. Although matching is normally intended to produce treatment and comparison groups that are nearly identical on observed characteristics, in this study the treatment group differed significantly from the control group on the pre-treatment measure of math performance. Specifically, the treatment group had significantly higher math scores. And the one negative effect the study observed was on math test scores, an effect roughly comparable in magnitude to the treatment group's pre-treatment advantage in math. So, basically, the treatment group reverted to about the same math scores as the control group once treatment began. This negative effect, which was really the equalizing of the matched groups, was detected the first time students enrolled in a private school and did not grow in magnitude as students persisted in private school. One might think that if private schools really harmed math scores, that harm would compound over time, but it did not.
These results certainly deserve publication and ought to inform the school choice policy debate, despite the obvious limitation that the matching design failed to make the groups comparable on the one outcome measure for which a negative effect was observed. But while the article is worthy of publication and discussion, it is questionable whether it deserves a place in one of the field's top journals, and even more doubtful that it should be given as much credence as some folks in the field seem willing to give it.
Corey DeAngelis and Pat Wolf have a similar school choice study based on a matching research design with similar imperfections. It examines whether students enrolled in the Milwaukee voucher program were more likely to be accused or convicted of a crime in later years than comparable students who had attended Milwaukee's public schools. Students in the treatment group were matched to public school students on a number of observable characteristics, including the neighborhood in which they lived. Despite that matching effort, the treatment and control groups were significantly different: the treatment group had higher reading scores and was more likely to be female. Unlike in the JPAM study, however, neither of these variables was the same as the outcome for which effects were observed. Controlling for observable student and parental characteristics, students who had enrolled in Milwaukee's voucher program were significantly less likely to be accused of a crime in later years.
The defects of Corey and Pat's study are similar to those of the JPAM study. It also uses a matching research design, and as I have said many times before, I don't think we should have much confidence in matching designs to produce causal inferences. And like the other study, Corey and Pat's matching fails to produce treatment and control groups that are similar on all observed characteristics. But unlike the other study, Corey and Pat's research is not being published in JPAM. In fact, JPAM desk rejected it, deeming it unworthy even of being sent out for review. A number of other journals did the same, and the authors are now struggling to get it published anywhere. I'm convinced that if only they had found that vouchers increased criminal behavior, their piece would already be in print in a respected journal. But because they found a positive result for vouchers, the bar is higher, and editors and reviewers can rightly note the defects in the study to justify rejection.
All research has limitations that might be invoked to support rejection or overlooked to support publication. The double standard applied when judging voucher studies with favorable or unfavorable findings is a function of political bias, and it is an indication that our field is much less scientific than we would like to imagine.
It’s a shame that education policy researchers are largely uninterested in this problem of political bias. Despite considerable energy devoted to promoting many dimensions of diversity within our field, there is virtually no effort to promote ideological diversity. My department has a few researchers who would describe themselves as conservatives (and we have also had two faculty members who described themselves as socialists), but I suspect most departments have no self-described conservatives, and others have no more than one or two.
It is interesting to note that, although our department has six endowed chair holders, half of whom have Harvard doctorates and all of whom have impressive research records, none of us has ever been asked to serve on the editorial board of any journal (excluding the Journal of School Choice, which my colleague Bob Maranto edits). We’ve tried to play a part in governing our profession, but because we are branded (sometimes incorrectly) as conservatives, we have been shunned. The composition of editorial boards shapes who reviews submissions, which shapes what is published in those journals, which in turn shapes what people in the field imagine the research consensus to be on various issues.
There are consequences to this political bias in our field. First, the scientific quality of research is harmed by a growing groupthink that fails to critically examine the key assumptions, methods, and implications of much of the work being produced. Second, research in the field has diminished credibility and policy influence because outsiders increasingly view the field as more ideological than scientific. Some of the leading people in our field regularly take to Twitter to deride policymakers and the public for failing to heed what they believe research has to say. But why should policymakers obey “science” when it is produced by an increasingly insular group of researchers who may mistake their political agenda for science? Third, frustrated conservatives are likely to give up trying to be accepted by the dominant professional associations and journals and instead build their own parallel institutions. The American Bar Association drove out conservatives, who built the Federalist Society, which now seems to be more effective than the “mainstream” organization at exercising policy influence.
I don’t expect this piece to alter that state of affairs. Leading scholars in our field seem quite adept at defending their prior convictions, sometimes in remarkably unscholarly ways on social media, rather than critically examining their own beliefs and behaviors. As far as I’m concerned, they can rail away, but they will be left with the kind of nasty, unscientific, and irrelevant field they seem determined to build.