(Guest post by Greg Forster)
Don’t worry, this post is definitely not a continuation of the recent big dustup about 1) whether it’s naughty for scholars to provide journalists with accurate information about their work; and 2) whether it’s naughty for anonymous bloggers to argue that scholars’ motives are relevant to their credibility, but bloggers’ motives aren’t relevant to theirs (which reminds me of Pat Moynihan’s quip about the Supreme Court cases, since overturned, holding that government can’t subsidize private school books but can subsidize classroom equipment such as maps; Moynihan asked, “What about atlases?” – books of maps? What about scholars who are bloggers? Or bloggers who write about scholarly studies? Once you start legitimizing ad hominem arguments, where do you stop?).
But I would like to expand on a comment that Eduwonk made during said dustup, which deserves more attention and has significance well beyond the issues that were at stake in that squabble. The comment got lost in the exchange because it was somewhat tangential to the main points of contention.
He wrote:
Not infrequently newspapers get snookered on research and most consumers of this information lack the technical skills to evaluate much of the work for themselves. As education research has become more quantitative — a good thing — it’s also become less accessible and there is, I’d argue, more an asymmetry to the information market out there than a fully functioning marketplace of ideas right now. In terms of remedies there is no substitute for smart consumption of information and research, but we’re not there yet as a field.
We are living in the first golden age of education research, brought on by the advent of systematic data collection, which every other field of human endeavor began undertaking a long time ago but which education is only getting around to now because it has been shielded from pressure to improve thanks to its protected government monopoly. Given the explosion of new information that’s becoming available, educating journalists about quantitative research is a huge problem. Jay is right that there is a marketplace of ideas. There really can’t help but be one; the idea some people seem to have that we can forbid people who own information from spreading it around as much as they want is silly. But just because there’s a market doesn’t mean there’s a perfect market, and Eduwonk is right that markets require informed consumers to function well. The current state of methodological ignorance among journalists does hinder the marketplace of ideas from functioning as well as it should. (I’ll bet Jay would agree.)
As it happens, the same subject came up this morning in a completely different context, as my co-workers and I struggled to figure out the best way to present the findings of an empirical study we’re coming out with so that journalists will be able to follow them. And I wasn’t there, but I hear this topic also came up at a bloggers’ panel at the recent conference of the Education Writers’ Association.
Here at the Friedman Foundation, this has been a topic of great importance to us for some time, since exposing the bad and even bogus research that’s used to justify the status quo is one of our perennial challenges. We took a stab at composing a journalist’s guide to research methods. It went over well when we first distributed it (at last year’s EWA, if memory serves). But it’s necessarily very basic stuff.
Eduwonk is also right about journalists having been snookered by lousy research, and I think that has had both good and bad effects. The good news is that I’ve noticed a clear trend toward greater care in reporting the results of studies (not at propaganda factories like the New York Times, of course, but at serious newspapers). In particular, we’re seeing journalists talk about studies in the context of previous studies that have looked at the same question. Of course, we have a long way to go. But we’re on the way up.
On the bad side, however, I have also noticed a greater reluctance to cover studies at all. Part of that is no doubt due to the increase in volume. I’m young, but even I can remember the heady days of 2003, when any serious empirical study on the effects of a controversial education policy (vouchers, charters, high-stakes testing) would get at least some coverage. Now it’s different, and (to echo Eduwonk) that’s a good thing. But I think it’s extremely unlikely that this is the only factor at work. Junk science has poisoned the well for serious research. No doubt that was part of its intended purpose (although of course the motives of those who produce it have no relevance to its scientific merits or lack thereof).
My hope is that journalists will soon realize they’re getting left behind if they don’t learn how to cover the research accurately. Their job is to go where the news is. If the news is in quantitative research – and that is in fact where a lot of it is – then they’ll have to learn how to get there.
Also, the changing media landscape will help. The old idea that journalists must be neutral stenographers with Olympian detachment from all the issues they cover is an artifact of the mid-20th-century role of the media as oligarchic gatekeeper, and is rapidly dying out. As “news” increasingly includes coverage by people who are actively engaged in a field, even as advocates, we can expect the news to be increasingly provided by people with greater amounts of specialized knowledge. (By the way, the old idea of the scholar as detached Olympian stenographer is equally an artifact of vanished circumstances, and will probably be the next thing to go; see the Our Challenge to You statement on the inside cover of any empirical study published by the Friedman Foundation for our views on the relationship between advocacy and scholarship.)
An optimistic view, yes – but since my optimism on other subjects has been triumphantly vindicated over the past year, even when the conventional wisdom said to head for the hills, I think I’ll let it ride.