The Professional Judgment Un-Dead

It’s time we drive a stake through the heart of “professional judgment” methodologies in education. Unfortunately, the method has come back from the grave in the most recent Fordham report on regulating vouchers, in which an expert panel was asked to identify the best regulatory framework for voucher programs.

The methodology was previously known for its use in school funding adequacy lawsuits. In those cases, a group of educators and experts was gathered to determine the amount of spending required to produce an adequate education. Not surprisingly, their professional judgment was always that we need to spend billions and billions (use Carl Sagan voice) more than we spend now. In the most famous use of the professional judgment method, an expert panel convinced the state courts to order an additional $15 billion for the New York City school system — that’s an extra $15,000 per student.

And advocates for school construction have relied on professional judgment methodologies to argue that we need $127 billion in additional spending to get school facilities in adequate shape.  And who could forget the JPGB professional judgment study that determined that this blog needs a spaceship, pony, martinis, cigars, and junkets to Vegas to do an adequate job?

Of course, the main problem with the professional judgment method is that it more closely resembles a political process than a scientific one. Asking interested parties to recommend solutions may inspire haggling, coalition-building, and grandstanding, but it doesn’t produce truth. If we really wanted to know the best regulatory framework, shouldn’t we empirically examine the relationship between regulation and the outcomes we desire?

Rather than engage in the hard work of collecting or examining empirical evidence, Beltway organizations seem to prefer gathering panels of experts and asking them what they think. Even worse, the answers depend heavily on which experts are asked and what questions they are asked.

For example, does high-stakes testing pressure schools to sacrifice learning in some academic subjects in order to improve results in the subjects that carry stakes? The Center on Education Policy employed a variant of the professional judgment method by surveying school district officials and asking them whether this was happening. It found that 62% of districts reported an increase in time spent on high-stakes subjects and 44% reported a decrease in time spent on other subjects, so CEP concluded that high-stakes testing was narrowing the curriculum. But the GAO surveyed teachers and found that 90% reported no change in time spent on the low-stakes subject of art; about 4% reported an increase in focus on art and 7% reported a decrease. So the GAO, also employing the professional judgment method, gets a very different answer than CEP. Obviously, which experts you ask and what you ask them make an enormous difference.

Besides, if we really wanted to know whether high stakes narrow the curriculum, shouldn’t we try to measure the outcome directly rather than ask people what they think? Marcus Winters and I did this by studying whether high stakes in Florida negatively affected achievement in the low-stakes subject of science. We found no negative effect on science achievement from raising the stakes on math and reading. Schools that were under pressure to improve math and reading results also improved their science results.

Even if you aren’t convinced by our study, it is clear that this is a better way to get at policy questions than the professional judgment method. Stop organizing committees of selected “experts” and start analyzing actual outcomes.

5 Responses to The Professional Judgment Un-Dead

  1. Patrick says:

    And their professional opinion that schools need more money and states need to spend more on construction has nothing to do with the fact that they stand to gain financially from more spending.

Is it a logical fallacy to point that out?

  2. Greg Forster says:

    If you want empirical information on regulations in school choice programs, here’s a handy review of the regulations in each of the nation’s 24 school choice programs.

  3. Geoff says:

It makes sense to look at outcomes in order to measure whether or not high stakes have a negative effect on achievement in low-stakes subjects, but that is not necessarily directly correlated with a widening or narrowing of the low-stakes curriculum.

    And it makes sense that science scores would improve with a high stakes focus on math and reading, since science depends deeply on reading and math. But when it comes to subjects like art and music, I think it is much harder to measure academic outcomes in a meaningful way. At the end of the day, a kid might still really suck at singing, but I would argue it was still worthwhile to expose her to it.

The great thing about school choice is that I think it can transcend these debates. Giving parents a greater say in their child’s education is a fundamental restructuring of the organization of education in America, and it encompasses much more than raw academic outcomes. Parents are able to choose schools not only based on their location or high math and reading scores, but also because: they like the other courses that are offered; their children have good relationships with the teachers; they agree with the worldview through which the school operates (even a supposedly neutral public school operates under a worldview); the school is safer, etc.

So, while I agree that academic outcomes are important to consider when deciding upon the merits of a particular education policy, other factors must be considered as well, such as how the policy affects (for better or worse) the relationships among the parent, child, school, community, and state. Data is very important, but it is not sufficient for settling all the issues in a public policy debate.

  4. matthewladner says:

    Jay-

    Crafting transparency measures into choice programs does not strike me as something that lends itself to data crunching.

  5. […] issue of what really comprises an “adequate” funding amount for education. Based on the “professional judgment” of what would make a school administrator’s life easier? Based on some imaginary formula of […]