Political Bias in Education Policy Research

Education policy research is not really a scientific enterprise. If it were, the field would be equally open to research of equal rigor regardless of its findings. That is simply not the case. Research with preferred findings is more easily published in leading journals and embraced by scholars than research supporting less favored results.

There are countless examples of this, but here is one to illustrate the point…

The Journal of Policy Analysis and Management (JPAM), a top journal in our field, has just published an analysis of vouchers in Indiana based on a matching research design. Although matching is normally intended to produce treatment and comparison groups that are nearly identical on observed characteristics, in this study the treatment group differed significantly from the control group on its pre-treatment measure of math performance. Specifically, the treatment group had significantly higher baseline math scores. And the one negative effect the study observed was on math test scores, an effect roughly comparable in magnitude to the treatment group's pre-treatment advantage in math. So, basically, the treatment group reverted to having about the same math scores as the control group once treatment began. This negative effect, which was really the equalizing of the matched groups, was detected the first time students enrolled in a private school and did not grow in magnitude as students persisted in private school. One might think that if private schools really harmed math achievement, that harm would compound over time, but it did not.
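
To see why this matters, consider the standard diagnostic for a matched sample: the standardized mean difference (SMD) on each covariate between the matched groups. Here is a minimal sketch in Python, with hypothetical column names and made-up numbers rather than the study's actual data. A pre-treatment math gap like the one described above would surface as a large SMD on the baseline math score, a sign that the match failed on that covariate.

```python
import numpy as np
import pandas as pd

def standardized_mean_difference(treated: pd.Series, control: pd.Series) -> float:
    """SMD between matched groups; values above roughly 0.1 are
    commonly read as meaningful imbalance on that covariate."""
    pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
    return (treated.mean() - control.mean()) / pooled_sd

# Hypothetical matched sample: one row per student.
matched = pd.DataFrame({
    "voucher":       [1, 1, 1, 1, 0, 0, 0, 0],
    "math_baseline": [0.45, 0.30, 0.55, 0.40, 0.10, 0.05, 0.20, 0.15],
    "read_baseline": [0.20, 0.25, 0.15, 0.30, 0.22, 0.18, 0.28, 0.24],
})

for covariate in ["math_baseline", "read_baseline"]:
    smd = standardized_mean_difference(
        matched.loc[matched["voucher"] == 1, covariate],
        matched.loc[matched["voucher"] == 0, covariate],
    )
    print(f"{covariate}: SMD = {smd:.2f}")
```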

These results certainly deserve publication and ought to inform the school choice policy debate, despite the obvious limitation that the matching design failed to make the groups comparable on the one measure for which a negative effect was observed. While worthy of publication and discussion, it is questionable whether the article deserved a place in one of the field's top journals, and even more doubtful that it should be given as much credence as some folks in the field seem willing to give it.

Corey DeAngelis and Pat Wolf have a similar school choice study based on a matching research design with similar imperfections. It examines whether students enrolled in the Milwaukee voucher program were more likely to be accused or convicted of a crime in later years than comparable students who had attended Milwaukee's public schools. Students in the treatment group were matched to public school students on a number of observable characteristics, including the neighborhood in which they lived. Despite that matching effort, the treatment and control groups differed significantly: the treatment group had higher reading scores and was more likely to be female. Unlike in the JPAM study, neither of these variables was the outcome for which effects were observed. Controlling for observable student and parental characteristics, students who had enrolled in Milwaukee's voucher program were significantly less likely to be accused of a crime in later years.
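
For readers outside the methods weeds, here is a minimal sketch of what one-to-one matching of the general kind described above can look like. The column names are hypothetical and this is not the authors' actual procedure; the point is that covariates left out of the matching criteria, like sex in this sketch, are exactly the ones that can end up imbalanced in the matched sample.

```python
import pandas as pd

def match_students(treated: pd.DataFrame, pool: pd.DataFrame) -> pd.DataFrame:
    """One-to-one matching: exact on neighborhood, nearest neighbor on a
    baseline score. Covariates not used here (e.g., sex, reading score)
    are not guaranteed to be balanced in the resulting sample."""
    pairs, used = [], set()
    for _, t in treated.iterrows():
        # Exact match on neighborhood, restricted to unmatched students.
        candidates = pool[(pool["neighborhood"] == t["neighborhood"])
                          & (~pool.index.isin(used))]
        if candidates.empty:
            continue  # no comparison student available; case is dropped
        # Nearest neighbor on the baseline score.
        best = (candidates["baseline_score"] - t["baseline_score"]).abs().idxmin()
        used.add(best)
        pairs.append((t["student_id"], pool.loc[best, "student_id"]))
    return pd.DataFrame(pairs, columns=["treated_id", "control_id"])
```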

The defects of Corey and Pat's study are similar to those of the JPAM study. It also uses a matching research design, and as I have said many times before, I don't think we should have much confidence in matching designs to produce causal inferences. And like the other study, Corey and Pat's matching fails to produce treatment and control groups that are similar on all observed characteristics. But unlike the other study, Corey and Pat's research is not being published in JPAM. In fact, JPAM desk rejected their study, deeming it unworthy even of being sent out for review. A number of other journals did the same, and they are now struggling to get it published anywhere. I'm convinced that if only they had found that vouchers increased criminal behavior, their piece would already be in print in a respected journal. But because they found a positive result for vouchers, the bar is higher, and editors and reviewers can rightly point to the defects in the study to justify rejection.

All research has limitations that might be invoked to justify rejection or overlooked to permit publication. The double standard applied to voucher studies with favorable or unfavorable findings is a function of political bias and an indication that our field is much less scientific than we would like to imagine.

It’s a shame that education policy researchers are largely uninterested in this problem of political bias. Despite considerable energy devoted to promoting many dimensions of diversity within our field, there is virtually no effort to promote ideological diversity. My department has a few researchers who would describe themselves as conservatives (we have also had two faculty members who described themselves as socialists), but I suspect most departments have no self-described conservatives, and others have no more than one or two.

It is interesting to note that despite having a department with six endowed chair holders, half of whom have Harvard doctorates and all of whom have impressive research records, none of us has ever been asked to serve on the editorial board of any journal (excluding the Journal of School Choice, which my colleague Bob Maranto edits). We’ve tried to play a part in governing our profession, but because we are branded (sometimes incorrectly) as conservatives, we have been shunned. The composition of editorial boards shapes who reviews submissions, which shapes what is published in those journals, which shapes what people in the field imagine the research consensus to be on various issues.

There are consequences to this political bias in our field. First, the scientific quality of research is harmed by an increasing groupthink that fails to critically examine the key assumptions, methods, and implications of much of the work being produced. Second, research in the field has diminished credibility and policy influence because others increasingly view the field as more ideological and less scientific. Some of the leading people in our field regularly take to Twitter to deride policymakers and the public for failing to heed what they believe research has to say. But why should policymakers obey “science” when it is produced by an increasingly insular group of researchers who may confuse their political agenda for science? Third, frustrated conservatives are likely to give up trying to be accepted by the dominant professional associations and journals and instead build their own parallel institutions. The American Bar Association drove out conservatives, who built the Federalist Society, which now seems more effective at exercising policy influence than the “mainstream” organization.

I don’t expect this piece to alter the state of affairs. Leading scholars in our field seem quite adept at defending their prior convictions, sometimes in remarkably unscholarly ways on social media, rather than critically examining their own beliefs and behaviors. As far as I’m concerned, they can rail away, but they will be left with the kind of nasty, unscientific, and irrelevant field they seem determined to build.

10 Responses to Political Bias in Education Policy Research

  1. Emmett says:

    What you are saying is certainly true to varying degrees with lower-tier education journals where scholars from colleges of education have more influence. JPAM, however, has been pretty consistent across the years at having a higher bar for methodological rigor. I can’t explain how this article passed that bar. It may have something to do with the fact that nothing we do is really “blind” in this field, and reviewers/editors knew the authors and gave them a gift. By most established criteria, this article should not have made it in, especially when articles with more rigorous designs on the same topic are plentiful (I think a case can be made to lower the bar for less-rigorous designs in areas where we have fewer studies).

    Contrary to the political bias claim, a similar example comes from 2016, when JPAM published the Sass, Zimmer, Gill, and Booker study on charter schools and attainment, which found positive results using a similarly weak research design. There, too, the reputation of the authors may have helped, though one could also argue that in their case the research question and data were more novel.

  2. George Mitchell says:

    An important message.

  3. bogdan karol says:

    This is a misleading summary of these two studies. The JPAM study controls for past math scores in addition to the other matching variables. This means the approach does not require past math scores to be balanced after controlling for the other variables.

    Unlike the JPAM study, the DeAngelis and Wolf study does not include a control for a past measure of the main outcome (there is no past measure of crime). Matching approaches are viewed as more reliable when a past outcome control is included, so most researchers probably trust the JPAM results more. There are scientific reasons these studies might be treated differently, so why jump to political bias?

    • But you fail to emphasize that the JPAM study failed to match successfully on the pre-treatment measure of the outcome. So how is it a superior methodology to match on an outcome measure when you fail to actually achieve balance on that variable?

      • bogdan karol says:

        The JPAM study controls for the lagged outcome measure. Look at their equation 1. You are missing the fact that the article uses a combination of exact matching on some covariates and regression control for other covariates, including past test scores. It is not relevant that the matching fails to balance past scores, because the matching alone is not the research design. (A sketch of this matching-plus-regression setup appears after the comments below.)

      • It is also worth noting that the original version of the JPAM manuscript successfully matched the treatment and control groups on pre-treatment math scores and had a completely different finding: the initial negative result for math became insignificant by year 4, and reading became significant and positive by year 4. So the results changed dramatically during the review process, in ways that made the match less good and the results more negative.

        My point, again, is that both the JPAM and Corey and Pat’s study had significant (although not identical) imperfections. One was desk rejected and the other published.

      • Greg Forster says:

        The difference between desk rejection and publication should not hinge on fine subtleties. That’s what the review process is for.

      • bogdan karol says:

        This is not a fine subtlety. The JPAM study is able to control for the initial value of the outcome and the DeAngelis and Wolf study is not. This is a major difference in study design and editors must make desk rejection decisions on the basis of such issues all the time.

        Jay’s willingness to ignore the scientific issues and assume political bias is ironically a perfect example of the politicization of the field he claims to be upset about.

      • Isn’t the failure to match successfully on the pre-treatment measure of the key outcome also a reason one might desk reject a paper, or at least decide not to publish it? And wouldn’t the lack of robustness of the result to slightly different matching strategies also be a reason not to publish?

        Again, my point is that political bias can creep into decision-making (without some conspiracy or explicit bias) because there is subjective judgment at every step about whether or not to tolerate imperfections in the methodology.

  4. Robert C. Enlow says:

    Bogdan, the only point I would make is that the initial draft of the paper included other methodologies that were rigorous and showed positive gains. After the review process, those methods and their results were eliminated, and a brand new methodology (one that some could argue is flawed) was used, which showed negatives.

    I am for publishing ALL the methods. You should ask JPAM why they changed everything from the draft to the published paper.
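
To make the design question running through the thread above concrete, here is a minimal sketch of matching followed by regression control for the lagged outcome, as bogdan karol describes it. The variable names and numbers are hypothetical, not the actual JPAM specification. The design's logic is that the treatment coefficient is estimated after adjusting for baseline math, so exact balance on baseline math at the matching step is not strictly required; the counter-argument in the thread is that the match's failure on that very covariate still weakens the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical matched sample (illustrative values only).
matched_sample = pd.DataFrame({
    "voucher":       [1, 1, 1, 1, 0, 0, 0, 0],
    "math_now":      [0.30, 0.20, 0.40, 0.25, 0.15, 0.05, 0.25, 0.10],
    "math_baseline": [0.45, 0.30, 0.55, 0.40, 0.10, 0.05, 0.20, 0.15],
    "female":        [1, 0, 1, 0, 1, 1, 0, 0],
})

# Regress the current outcome on treatment status plus the lagged
# outcome and other covariates on the matched sample; the coefficient
# on `voucher` is the treatment estimate net of baseline math.
model = smf.ols("math_now ~ voucher + math_baseline + female",
                data=matched_sample).fit()
print(model.params["voucher"])
```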
