When people can’t argue the facts, they argue peer review. That’s been my experience whenever I’ve released non-peer-reviewed reports. Without peer review, folks wonder, how can we know whether to trust these results?
The reality is that even with peer review, people still need to wonder whether to trust results. Peer review is by definition irresponsible — by which I mean that the reviewers bear no responsibility. Because they are anonymous, reviewers offer their opinions on the merit of research without any meaningful consequence to themselves. Many reviewers do a laudable job, but nothing stops them from using their reviews to advance findings they prefer and block findings they dislike, regardless of the true merit of the work. Peer review is often little more than the anonymous committee vote of a panel composed of some mix of competitors and allies. It is about as reliable as the Miss Congeniality vote at a beauty contest. Do we really think she’s the nicest contestant, or did the other contestants, voting anonymously, have ulterior motives for burying her with faint praise?
The true test of research quality is replication. Science doesn’t determine the truth by having an anonymous committee vote on what is true. Science identifies the truth by replicating past experiments and applying them to new situations to see whether the results continue to hold up.
I’m pleased to say that several pieces of my work have been successfully replicated. By successful replication I mean that the basic findings are upheld. Replicators almost always make new and different choices about how to handle data or run an analysis. The question is whether the same basic conclusion is found even when those different choices are made.
The evaluation I did with Paul Peterson and Jiangtao Du of the Milwaukee voucher experiment was successfully replicated by Cecilia Rouse. The evaluation I did of the Charlotte voucher program was successfully replicated by Josh Cowen. My study of Florida’s A+ voucher and accountability program was successfully replicated three times — by Raj Chakrabarti; Rouse, et al.; and West and Peterson. And my graduation rate work has been successfully replicated by Rob Warren and Chris Swanson.
The interesting thing is that every one of my studies above was initially released without peer review. And every one of them was attacked as unreliable because it had not been peer reviewed. When they were all later published in peer-reviewed journals (except the grad rate work) and successfully replicated, I don’t remember ever hearing anyone retract their accusations of unreliability.
(edited for typos)