It’s time we drive a stake through the heart of “professional judgment” methodologies in education. Unfortunately, the method has come back from the grave in the most recent Fordham report on regulating vouchers, in which an expert panel was asked about the best regulatory framework for voucher programs.
The methodology was previously known for its use in school funding adequacy lawsuits. In those cases a group of educators and experts was gathered to determine the amount of spending that is required to produce an adequate education. Not surprisingly, their professional judgment was always that we need to spend billions and billions (use Carl Sagan voice) more than we spend now. In the most famous use of the professional judgment method, an expert panel convinced the state courts to order the addition of $15 billion to the New York City school system — that’s an extra $15,000 per student.
And advocates for school construction have relied on professional judgment methodologies to argue that we need $127 billion in additional spending to get school facilities in adequate shape. And who could forget the JPGB professional judgment study that determined that this blog needs a spaceship, pony, martinis, cigars, and junkets to Vegas to do an adequate job?
Of course, the main problem with the professional judgment method is that it more closely resembles a political process than a scientific one. Asking involved parties to recommend solutions may inspire haggling, coalition-building, and grandstanding, but it doesn’t produce truth. If we really wanted to know the best regulatory framework, shouldn’t we empirically examine the relationship between regulation and the outcomes we desire?
Rather than engaging in the hard work of collecting and examining empirical evidence, beltway organizations seem to prefer gathering panels of experts and asking them what they think. Even worse, the answers depend heavily on which experts are asked and what questions they are asked.
For example, does high-stakes pressure lead schools to sacrifice learning in some academic subjects to improve results in the subjects with stakes attached? The Center on Education Policy employed a variant of the professional judgment method by surveying school district officials and asking them whether this was happening. It found that 62% of districts reported an increase in time devoted to high-stakes subjects and 44% reported a decrease for other subjects, so CEP concluded that high-stakes testing was narrowing the curriculum. But the GAO surveyed teachers and found that 90% reported no change in time spent on the low-stakes subject of art; about 4% reported an increase in focus on art and 7% reported a decrease. So the GAO, also employing the professional judgment method, got a very different answer than CEP did. Obviously, which experts you ask and what you ask them make an enormous difference.
Besides, if we really wanted to know whether high stakes narrow the curriculum, shouldn’t we try to measure the outcome directly rather than ask people what they think? Marcus Winters and I did just that by studying whether high stakes in Florida hurt achievement in the low-stakes subject of science. We found no negative effect on science achievement from raising the stakes on math and reading. Schools under pressure to improve their math and reading results improved their science results as well.
Even if you aren’t convinced by our study, it is clear that this is a better way to get at policy questions than the professional judgment method. Stop organizing committees of selected “experts” and start analyzing actual outcomes.