The Devil’s in the Implementation

What went wrong with Reading First?  Don’t blame the evaluation.  Its regression discontinuity design approximates a random assignment experiment — the gold standard of research designs.  By comparing schools just above and just below the rating cutoff that determined funding, it allows us to know with confidence the effect of Reading First on the marginal adopter’s reading achievement.  We can’t assess the effect of Reading First on the first adopters or on those who were rated as most in need, but a broadly useful program should have effects beyond those most eager or most desperate.  Reid Lyon is correct in noting that the evaluation did not address everything that we want to know.  And it is always possible that the program needs more time to show results.  But so far we have a null result.
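
Since the argument leans on this design, here is a minimal sketch of the logic in Python.  Everything in it is invented for illustration (the need ratings, the cutoff, the sample); it is not the actual evaluation, only a toy showing why a cutoff-based comparison recovers the program’s effect for schools at the margin.

    import numpy as np

    # Toy regression discontinuity (RD) sketch. All numbers here are
    # invented for illustration; this is not the Reading First data.
    # Treatment is assigned by a cutoff on a need rating, so comparing
    # schools just above and just below the cutoff estimates the
    # program's effect for the marginal adopter.

    rng = np.random.default_rng(0)
    n = 2000
    need = rng.uniform(0, 100, n)     # hypothetical need rating per school
    cutoff = 60
    treated = need >= cutoff          # needier schools receive the program

    true_effect = 0.0                 # a null effect, as the evaluation found
    score = 60 - 0.2 * need + true_effect * treated + rng.normal(0, 5, n)

    # Local comparison in a narrow window around the cutoff: regress the
    # outcome on a constant, the centered rating, and a treatment dummy.
    window = np.abs(need - cutoff) < 5
    X = np.column_stack([
        np.ones(window.sum()),
        need[window] - cutoff,
        treated[window].astype(float),
    ])
    beta, *_ = np.linalg.lstsq(X, score[window], rcond=None)
    print(f"estimated effect at the cutoff: {beta[2]:.2f}")  # close to zero

Because treatment flips exactly at the cutoff, schools just on either side are nearly identical in everything but the program itself, which is what makes the design approximate random assignment.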

We’re left with two possible explanations.  Either Reading First is conceptually mistaken or it was improperly implemented.  We have good reason to believe that it is the latter.  The science behind Reading First is pretty solid.  A greater emphasis on phonics seems to have a particularly beneficial effect on students from disadvantaged backgrounds. 

Reading First is probably the right idea, but, as with almost every instructional reform, the devil is in the implementation.  The problem is that educators have few incentives to embrace and properly apply new instructional ideas.  It’s not that educators are uninterested in improving instruction.  The problem is that they have often developed approaches from their own experience and training that they think work, and they are very skeptical of the latest great thing thrown their way.  Any theory of reform that is based on the assumption that educators are eagerly awaiting being informed of what works and will gladly do it once they are told is incredibly naive.

Even if we could identify the right techniques, the difficulty lies in getting educators to adopt them and implement them properly.  This is so difficult because teachers face no meaningful consequences whether they implement an instructional reform properly or not.  And since most teachers have developed routines with which they are comfortable and that they believe are effective, getting them to do something else without any real carrots or sticks is like getting children to eat spinach merely by suggesting it.  You can tell them that it’s really good for them, but they’d rather stick with the familiar mac and cheese.

The evaluation helps confirm that the problem was in implementation.  The differences between the treatment and control groups in time spent on phonics were very small, and the treatment group was doing far less than the program had planned.  Similar problems have plagued other instructional reforms.  For example, see Mathematica’s evaluation of technology in the classroom, where the treatment group’s usage of the technology was only marginally greater than the control group’s.  Or see SRI’s evaluation of Following the Leaders, where the treatment group similarly barely used the intervention.  It should come as no surprise that the medicine doesn’t work if people won’t take their pills.

The solution that is usually offered when educators fail to implement an instructional reform is to improve professional development, so that teachers learn how wonderful the intervention is and why and how they should use it.  Call it education disease — the solution to all problems is more education.  It’s an infinite regress.

Instead, the obvious solution is to address the incentives that educators have to adopt and properly implement effective instructional reforms.  Either the direct incentives of accountability, with real consequences for teachers (like merit pay or job security), or the indirect incentives of market-based reforms (like school choice) would sharpen educators’ efforts in this regard.

This is why instructional reforms and incentive reforms have to go hand in hand.  Educators need effective ideas about what to do, and they need the proper incentives to adopt and implement those ideas.  That’s also why pitting instructional reform against incentive reform makes no sense.  We need both.

2 Responses to The Devil’s in the Implementation

  1. Greg Forster says:

    There’s one point Jay makes here that I think could use some amplification, and that relates to some earlier discussion on the posts here this week on the role of science. The reason it’s naive to think that “educators are eagerly awaiting being informed of what works and will gladly do it once they are told” is not because educators don’t care what’s effective. As Jay says, it’s because educators already have practices “that they believe are effective.”

    I think that helps explain why people think the solution is always more education for the teachers. It’s an information problem.

    However, as the game theorists are always eager to tell us, information problems are a lot more difficult to solve than they appear to be. Information does not “want to be free.” It has a cost. And I’m not just talking about the time you have to spend in teacher training classes.

    The fact is, these teachers have been subject to tons of “professional development,” and most of it appears to have been wasted time. There appears to be no positive impact from teacher training the way it’s done now. So when you come along with your scientifically based approach to reading, you look to them like just the latest useless fad. Sure, they’ll sit through the classes if you make them. But after decades of useless fads, why would they bother to take you seriously?

    In other words, the biggest cost to acquiring better information is the cost of figuring out which information is reliable. And given the track record – decades of useless teacher training and a good solid century of educational fads (see Diane Ravitch’s book) that mostly came to nothing – they have good reason to believe that the cost of figuring out which information is good will not be worth paying.

    So what do you do? Change their incentives so that the price is worth paying.

  2. […] There was a huge dust-up in Washington recently over the program known as Reading First, when the official evaluation came out and found that it made no difference to outcomes. Some of the program’s defenders […]
