This Time It Counts


(Guest post by Patrick Wolf)

My friend Adam Emerson at the Fordham Foundation is championing the combination of high-stakes test-based accountability and parental school choice recently adopted by Louisiana, Indiana, and Wisconsin as “sunshine and school vouchers.”

His reasoning is that the free educational choices of parents alone are insufficient to ensure that choice-based reforms benefit the public by generating actual improvements in student learning.  He cites a study that my research team recently completed of the Milwaukee Parental Choice Program (MPCP), where high-stakes test-based accountability was added as a requirement to the long-running voucher program in 2010-11, and the achievement scores of voucher students surged relative to the comparison public school students that year.

Now, like most researchers, I’m vain.  I like it when people cite my research in policy debates.  That’s why I do it – to speak truth to power.  But let’s not get too far ahead of ourselves here.  Ours is one study of what happened in one year for one school choice program that switched from low-stakes testing to high-stakes testing.  As we point out in the report, it is entirely possible that the surge in the test scores of the voucher students was a “one-off” due to a greater focus of the voucher schools on test preparation and test-taking strategies that year.  In other words, by taking the standardized testing seriously in that final year, the schools simply may have produced a truer measure of students’ actual (better) performance all along, not necessarily a signal that students actually learned a lot more in the one year under the new accountability regime.

If we had had another year to examine the trend in scores in our study, we might have been able to distinguish a possible test-prep bump from genuinely higher rates of learning due to accountability.  Our research mandate ended in 2010-11, sadly, and we had to leave it there: a finding that is enticing and suggestive but hardly conclusive.

What about the encouraging trend that lower-performing schools in the MPCP are being closed down?  Adam mentions that as well and attributes it to the stricter accountability regulations on the program.  That phenomenon of Schumpeterian “creative destruction” pre-dated the accountability changes in the choice program, however, and appears to have been caused mainly by low enrollments in low-performing choice schools, as parents “voted with their feet” against such institutional failure.  Sure, the new high-stakes testing and public reporting requirements might accelerate the creative destruction of low-performing choice schools in Milwaukee, but that remains to be seen.

I like sunshine – I live in Arkansas, after all.  I also like program evaluations enabled by student testing, since it pays my bills.  But I also like liberty and appreciate the innovation that I’ve seen in some schools of choice that eschew our testing-focused political culture.  This is all to say that the issue is one of reasonable and debatable tradeoffs, not absolutes.  Mainly, it would be helpful to see more studies, like mine, that shed light on what is gained and what might be lost when high-stakes testing is added to the choice mix.

6 Responses to This Time It Counts

  1. TeacherJoeInLosAngeles says:

    May I offer two thoughts about the test results that I find many do not know? Forgive me if you are aware of these. Take it as a compliment that I trust you will understand what I am talking about.
    1) Always remember that the bell curve’s laws of probability imply that about 1 out of 6 results will fall more than one standard deviation above the mean and 1 out of 6 below it. Few look at the stats showing the value of a standard deviation for any one test, but usually it is high enough that many fools (a.k.a. politicians, journalists, and “I’m not good with numbers” educator-administrators) do not realize that jumps or drops in results may be due to nothing more than these chance laws.
    2) These same fools usually compare year-over-year figures for different students. When I had a smart principal, I was able to show him that the important numbers to compare were 2012’s 6th grade results with 2013’s 7th grade results, because that compares the changes in the same students. WAY too often the fools compare this year’s 6th grade to last year’s 6th grade, ignoring the most important variable.
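
    [Editor’s note: the chance-fluctuation point in item (1) can be sketched numerically. The sketch below simulates scores on a hypothetical mean-500, SD-100 test scale; the scale and sample size are illustrative assumptions, not figures from the post or the Milwaukee study.]

```python
import random
import statistics

# Illustrative sketch: under a bell curve (normal distribution), roughly
# 1 in 6 scores land more than one standard deviation above the mean,
# and roughly 1 in 6 land more than one SD below it, purely by chance.
random.seed(0)
scores = [random.gauss(500, 100) for _ in range(100_000)]  # hypothetical test scale

mean = statistics.mean(scores)
sd = statistics.stdev(scores)
above = sum(s > mean + sd for s in scores) / len(scores)
below = sum(s < mean - sd for s in scores) / len(scores)

print(f"share above +1 SD: {above:.3f}")  # roughly 0.16, i.e. about 1 in 6
print(f"share below -1 SD: {below:.3f}")
```

    So a school whose scores jump from the mean to one SD above it has moved only as far as about a sixth of schools would land by chance in any given year, which is why single-year swings deserve skepticism.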

    • Greg Forster says:

      Please note the author of this post is Patrick Wolf (see the byline at the beginning). I just clicked the button to post his text.

    • jean sanders says:

      You are on track here if the students are the same group; if some new ones have moved in or some have left, the comparison won’t work (high mobility of homeless children, etc.). Also, if students have been absent more than 10 days, they have missed your curriculum. Greg, I would like to discuss this if you have time; you are on the right track.

  2. […] Wolf, director of the research team that conducted the study, has now responded, explaining that his team’s results do not […]

  3. […] “may boost student achievement.” Problematically, one of the authors of that study has already publicly cautioned against drawing this conclusion, noting that his finding is “enticing and suggestive but hardly […]

  4. […] but one of the study’s authors, Dr. Patrick Wolf of the University of Arkansas, has previously cautioned the Fordham Institute against reading too much into that finding, calling it “enticing and […]
