Rick Hess has a thoughtful post today on last week’s dust-up over whether choice schools should be required to take state tests. Rick is generally sympathetic with the arguments I was making but raises two objections.
First, Rick worries about whether I (and others) are being consistent in opposing testing requirements for choice schools while having “long slammed districts and promoted school choice by pointing to reading and math scores.” He continues, “I’ve got a lot of sympathy for those who feel like Greene’s position constitutes something of a bait-and-switch, with choice advocates changing the rules when it suits them.”
Second, Rick sees an inconsistency: I suspect that test prep and manipulation are largely responsible for the test score improvements Milwaukee choice schools posted after they were required to take high-stakes tests, yet I interpret research from Florida as showing that schools made exceptional test score gains when faced with the prospect of vouchers being offered to their students if scores did not improve. Why would I believe the former is an artifact of test prep, but not the latter?
Let me deal first with Rick’s second objection because it is easier and quicker to address. I was concerned about whether test prep and manipulation were responsible for the exceptional gains made by low-graded schools that faced the prospect of voucher competition if their results did not improve. So, Marcus Winters and I examined results from the Stanford-9, a nationally normed low-stakes test, as well as the state’s high-stakes FCAT, to see if the results were similar. Here is what we wrote:
Schools are not held accountable for their students’ performance on the Stanford-9. As a result, they have little incentive to manipulate the results by “teaching to the test” or through outright cheating. Thus, if gains are witnessed on both the FCAT and the Stanford-9, we can be reasonably confident that the gains reflect genuine improvements in student learning.
The results were similar, showing exceptional gains on both the high-stakes and low-stakes exams, which gave me confidence that the improvements in Florida were real. In Milwaukee we have no similar check on whether learning gains were real after high-stakes testing requirements were imposed. In the absence of a low-stakes check, I’m highly skeptical that choice schools suddenly improved in quality once they were required to administer, with high stakes attached, the same tests that students in the study had been taking all along with lower results.
Rick’s first point — essentially, that I am being hypocritical in opposing testing for choice schools but not for traditional public schools — requires a more complicated response. I would be happy opposing state testing requirements for all schools, choice and traditional public alike, if those schools had some reasonable mechanism for accountability. Choice schools are accountable without testing requirements because parents can choose whether or not to send their children (and the resources that follow those students) to them. If choice schools are not accomplishing what parents want, they have difficulty attracting and retaining students and resources.
Most traditional public schools, however, have no meaningful system of accountability. They receive students and resources regardless of whether they are accomplishing what families want. If schools are not held accountable by choice, then they have to be held accountable through some other mechanism. One way to produce this accountability is to require that they administer state tests and meet certain performance benchmarks. This type of top-down accountability is far less efficient and comprehensive than choice accountability, but it may have to do in the absence of choice. If, however, charter, private, and Tiebout choice were to expand to the point where no school was guaranteed students and revenues regardless of performance, then I’d be fine with getting rid of all testing requirements.
Of course, there would still be plenty of information about schools, because most schools in choice systems voluntarily administer tests and report results. They simply choose their own tests, just as they choose their own standards, curriculum, and pedagogy. And since tests capture only a tiny portion of what most schools are trying to accomplish, parents would collect information on these other outcomes of education just as consumers collect information on the quality of other complicated services their children receive, including summer camp, piano lessons, and babysitters. We don’t have state-required testing — or any testing at all — for most of these services, so parents rely on reputation, word of mouth, direct observation, and other techniques to collect information and make choices. No system is perfect and people will make mistakes, but I’d rather have parents make their own mistakes than have bureaucrats impose mistakes upon them.
This skepticism about state testing does represent a shift in my thinking that has been underway for a few years now. I’m sure someone could dig up an old quote from me embracing top-down accountability in a way that I would not do now. But the evidence and experience I have accumulated over the last several years have made me much less enamored of state testing. I’m convinced that state tests are highly imprecise, very limited in what they cover, subject to test prep and manipulation, unable to capture the diversity of school goals and circumstances, and seldom used to make intelligent decisions about improving schools. Simply put, I am no longer a supporter of top-down school accountability regimes. But until we have expanded choice further, I see no practical alternative to continuing state testing for schools not subject to meaningful choice accountability.