(Guest Post by Matthew Ladner)
The 2017 NAEP is due to be released in a few weeks, so I thought it would be a good time to review a brief history of where we’ve been. The above table lists all of the available math cohort gains by jurisdiction for the entire period in which all states have been giving NAEP. These cohort gains are calculated by subtracting a cohort’s 4th grade scores from the same cohort’s 8th grade scores four years later. NAEP math and reading tests were specifically scaled and timed to allow such comparisons.
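For readers who want to see the arithmetic spelled out, here is a minimal sketch of the cohort-gain calculation. The scores below are made-up placeholders for illustration only, not actual NAEP results:

```python
# Cohort gain: a jurisdiction's 8th grade NAEP math score minus the same
# cohort's 4th grade score from four years earlier.
# NOTE: these numbers are illustrative placeholders, NOT real NAEP data.
grade4_2011 = {"State A": 235, "State B": 247}
grade8_2015 = {"State A": 283, "State B": 283}

def cohort_gains(g4, g8):
    """Return the 8th-grade-minus-4th-grade gain for each jurisdiction."""
    return {state: g8[state] - g4[state] for state in g4}

gains = cohort_gains(grade4_2011, grade8_2015)

# The AVERAGE row in the table is a simple unweighted mean across states.
average_gain = sum(gains.values()) / len(gains)
```

The average here is deliberately unweighted, matching the simple between-states average in the table rather than a student-population-weighted figure.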
Now…just a minute…stop staring at your state’s results and pay attention…oh okay fine go stare at your state’s results and then come back.
Right, now that you are done with that, allow me to draw your attention to the AVERAGE row at the bottom. This is a simple average across states, and it appears to be in slow but steady decline. Notice, for instance, Maryland’s transformation from a reform super-hero to a state that appeared to forget to teach mathematics to kids in 6th grade. Notice also that the top gainer for the 2009-2013 and 2011-2015 periods (Arizona) would not have topped the list in the golden age of 2003-2007. Arizona comes out on top in recent years because it remained consistently pretty high while other states declined.
It should be noted that factors other than the quality of instruction could be at play here. For instance, inclusion rates for students with disabilities and English language learners may have varied over time, creating the appearance of a decline. To test this, the table below runs the same math cohort gains, but this time only for general education students:
Overall the story does not change a great deal: we still see a declining trend, and Maryland forgot to teach math to its general ed students about as much as to everyone else. I will also note that Arizona owes its status as the math gains champ for 2009-13 and 2011-15 to gains among special education and/or ELL students, which, as someone who worked on choice programs for special needs students in Arizona for a decade and a half, warms my heart:
My guess is that reformers picked the low-hanging fruit of education reform in the early aughts. The introduction of standards and testing in the early days seems to have produced a bump in achievement. Over time, however, this effect may be fading. Political Science 101 teaches that organized interests defeat diffuse interests 99 times out of 100, so the ability of states to employ a cat o’ nine tails and whip schools into improvement has limits. Dozens of decisions taken daily in the musty basements of state departments of education, and obscure measures voted on by state boards of education, can slowly but surely defang and/or subvert state accountability systems.
If there are two things that the organized employee interests of adults working in schools are expert at, they are passive resistance and bureaucratic infighting. In my book, much of the reform crowd chose its battleground unwisely, electing to fight opponents on ground where it has little chance to prevail. Things fall apart; the center cannot hold.
Mike Petrilli recently, and correctly in my opinion, noted that the 2017 NAEP would be a pretty definitive test of the efficacy of the Obama-era projects: promoting Common Core, teacher evaluation, and student discipline reform. Top-down directives have a funny way of not working out, even backfiring. Let’s see what happens next.
It would be enormously valuable if we could look at earlier scores; we could test your hypothesis that NCLB and contemporary efforts produced gains. But of course we don’t have the data because it was NCLB that required all states to NAEPify. That was one really valuable thing we got out of NCLB, though on the whole the law has been a failure (which I say as a former supporter).
Some observers would claim it worked early and then we wimped out. This interpretation, however, tends to disregard Political Science 101, in my opinion.
REAL accountability has never been tried!