The Professional Judgment Un-Dead

March 25, 2009

It’s time to drive a stake through the heart of “professional judgment” methodologies in education. Unfortunately, the method has risen from the grave in the most recent Fordham report on regulating vouchers, in which an expert panel was asked to recommend the best regulatory framework for voucher programs.

The methodology was previously known for its use in school funding adequacy lawsuits. In those cases, a panel of educators and experts was convened to determine how much spending is required to produce an adequate education. Not surprisingly, their professional judgment was always that we need to spend billions and billions (use Carl Sagan voice) more than we spend now. In the most famous application of the professional judgment method, an expert panel convinced the state courts to order an additional $15 billion for the New York City school system, which works out to an extra $15,000 per student given the system’s roughly one million students.

Advocates for school construction have likewise relied on professional judgment methodologies to argue that we need $127 billion in additional spending to get school facilities into adequate shape. And who could forget the JPGB professional judgment study that determined that this blog needs a spaceship, pony, martinis, cigars, and junkets to Vegas to do an adequate job?

Of course, the main problem with the professional judgment method is that it more closely resembles a political process than a scientific one. Asking involved parties to recommend solutions may inspire haggling, coalition-building, and grandstanding, but it doesn’t produce truth. If we really wanted to know the best regulatory framework, shouldn’t we empirically examine the relationship between regulation and the outcomes we desire?

Rather than engage in the hard work of collecting and examining empirical evidence, it seems to be popular among Beltway organizations to gather panels of experts and ask them what they think. Even worse, the answers depend heavily on which experts are asked and what questions they are asked.

For example, do high stakes pressure schools to sacrifice instruction in certain academic subjects in order to improve results in the subjects that carry stakes? The Center on Education Policy employed a variant of the professional judgment method by surveying school district officials and asking them whether this was happening. It found that 62% of districts reported an increase in instructional time for high-stakes subjects and 44% reported a decrease for other subjects, so CEP concluded that high-stakes testing was narrowing the curriculum. But the GAO surveyed teachers and found that 90% reported no change in time spent on the low-stakes subject of art; about 4% reported an increase in focus on art and 7% reported a decrease. So the GAO, also employing the professional judgment method, got a very different answer than CEP did. Obviously, which experts you ask and what you ask them make an enormous difference.
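
To make the contrast concrete, here is a minimal sketch in Python that lays the two sets of responses side by side. The percentages are the ones quoted above; the labels and data structures are my own illustration, not either organization’s actual survey instrument.

```python
# Side-by-side view of the two "professional judgment" surveys cited
# above. Percentages are as quoted in the post; labels are illustrative.

cep_districts = {  # CEP surveyed school district officials
    "an increase in time for high-stakes subjects": 62,
    "a decrease in time for other subjects": 44,
}

gao_teachers = {  # GAO surveyed teachers, asking specifically about art
    "no change in time spent on art": 90,
    "an increase in focus on art": 4,
    "a decrease in focus on art": 7,
}

print("CEP (district officials):")
for response, pct in cep_districts.items():
    print(f"  {pct}% reported {response}")

print("GAO (teachers, art only):")
for response, pct in gao_teachers.items():
    print(f"  {pct}% reported {response}")

# The same school systems can produce both summaries: one reads as broad
# curriculum narrowing, the other as almost no change. The conclusion
# turns on whom you ask and what you ask them.
```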

Besides, if we really wanted to know whether high stakes narrow the curriculum, shouldn’t we try to measure the outcome directly rather than ask people what they think? Marcus Winters and I did this by studying whether high-stakes testing in Florida depressed achievement in the low-stakes subject of science. We found no negative effect on science achievement from raising the stakes on math and reading. Schools that were under pressure to improve their math and reading results also improved their science results.
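
For readers who want to see what “measuring the outcome directly” looks like in practice, here is a minimal sketch in Python using simulated school-level data. It is not the actual design of our Florida study; the variable names, the simple difference-in-means comparison, and the data are all hypothetical, meant only to illustrate estimating an effect from observed outcomes rather than from opinions.

```python
# Illustrative sketch: estimate the effect of accountability pressure on
# science achievement from observed outcomes. Simulated data only; this
# is NOT the specification used in the actual Florida study.
import numpy as np

rng = np.random.default_rng(0)
n_schools = 500

# Hypothetical indicator: 1 if a school faced high-stakes pressure to
# raise its math and reading results, 0 otherwise.
pressure = rng.integers(0, 2, size=n_schools)

# Hypothetical year-over-year change in each school's science scores.
# The simulated "true" effect of pressure on science is set to zero.
science_gain = rng.normal(loc=0.0, scale=5.0, size=n_schools)

# Simple difference in mean science gains between the two groups.
diff = science_gain[pressure == 1].mean() - science_gain[pressure == 0].mean()
print(f"Mean science gain, pressured minus non-pressured: {diff:.2f} points")

# A real analysis would add controls (prior achievement, demographics)
# and a credible identification strategy, but the logic is the same:
# look at what actually happened to the low-stakes subject instead of
# polling people about what they believe happened.
```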

Even if you aren’t convinced by our study, this is clearly a better way to get at policy questions than the professional judgment method. Stop organizing committees of selected “experts” and start analyzing actual outcomes.


New Study Release Tomorrow

July 7, 2008

Keep your eyes peeled for tomorrow’s release by the Manhattan Institute of a new study on the effect of high-stakes testing on achievement in low-stakes subjects. The study, led by Marcus Winters and co-authored by Julie Trivitt and me, examines whether gains in math and reading on Florida’s standardized tests come at the expense of science. Because there are meaningful consequences for performance in math and reading, but not for the rest of the curriculum, many people have worried that schools would improve their math and reading results by skimping on science and other subjects.

These concerns are not coming just from the usual critics of school accountability. Even accountability advocates have expressed second thoughts. For example, Chester Finn writes in National Review Online: “Do the likely benefits exceed the ever clearer costs? Boosting skill levels and closing learning gaps are praiseworthy societal goals. But even if we were surer that NCLB would attain them, plenty of people — parents, teachers, lawmakers, and interest groups — are alarmed by the price. I don’t refer primarily to dollars. (They’re in dispute, too, with most Democrats wrongly insisting that they’re insufficient.) I refer to things like a narrowing curriculum that sacrifices history, art, and literature on the altar of reading and math skills…”

Diane Ravitch has similarly stepped on the high-stakes brakes, expressing concern about the crowding out of other academic subjects and activities: “a new organization called Common Core was launched on February 26 at a press conference in Washington, D.C., to advocate on behalf of the subjects that are neglected by the federal No Child Left Behind legislation and by pending STEM legislation. These subjects include history, literature, the sciences, the arts, geography, civics, even recess (although recess is not a subject, it is a necessary break in the school day that seems to be shrinking or disappearing in some districts). I serve as co-chair of CC with Toni Cortese, executive vice-president of the American Federation of Teachers.”

To find out whether these concerns are supported by the empirical evidence from Florida, check the Manhattan Institute’s website tomorrow to see the study.

