(Guest post by Greg Forster)
The Fordham Institute’s Gadfly, in an item signed by Janie Scull, picks up and recirculates the Gates Foundation’s drillandkillaphobic error – but with a subtle twist (see if you can catch it – I didn’t when I first read it):
Second, teachers who, according to their students, “teach to the test” do not produce the highest value-added scores for said students; rather, instructors who help their students understand math concepts and reading comprehension yield the highest scores.
Gates was originally pushing the line that the study found a negative relationship between “teach to the test” or “drill and kill” and outcomes. Not only is there nothing in the study to support that, but the study actually finds the opposite.
Fordham is now slightly changing the claim so that it appears to say test prep is bad for students, without actually saying that. Read that sentence very carefully. Now read it again, and this time bear the following in mind: the study found a positive correlation between test prep and outcomes. Now, does this look like an honest characterization of the study to you?
I’ll admit that they had me fooled. I originally put up a version of this blog post saying that they were recirculating Gates’s error. Then I reread it and pulled that down. They’re not just recirculating the error, they’re weaseling it up to see if they can circulate it in a way that will pass muster. I haven’t seen such word-twisting since I watched the president of the United States explain “it depends on what the meaning of ‘is’ is.”
“I did not have a statistically significant relationship with that variable.”
Despite her efforts to remain technically correct, Scull does make an erroneous claim. She directly attributes the phrase “teach to the test,” in quotes, to students. In fact, as Jay pointed out, the phrase “teach to the test” and similar phobic phrases such as “drill and kill” do not appear in the study. The study found a positive relationship between “test prep” and outcomes.
This is worse than the New York Times and Los Angeles Times reporting the original error. Those papers simply picked up what Gates told them and reported that Gates said it. Sure, in a perfect world reporters would always check these things with independent researchers – but it’s not a hanging offense. (However comically hysterical some of them might get when they get called on it.)
The Fordham Institute is, or at least claims to be, an independent voice. And the Gadfly item did not attribute its claim to Gates, as the newspapers did. The Gadfly item states its partly erroneous, partly weaseled-up claim simply as a fact. That lends the intellectual prestige of the Fordham Institute to both the error and the Clintonian weaseling.
Jay has said before, and I agree, that Fordham can take huge piles of money from Gates without losing its integrity.
That’s why I have full confidence Scull and Fordham will be running a correction of this erroneous item.
As I wrote earlier this week, human beings are not Daleks, so test prep and similar activities can’t be the be-all and end-all, but the fear of test prep has so far been much more destructive than its overemphasis. If we don’t get past drillandkillaphobia, we’ll never fix education.
[This post has been edited since it was first published, as indicated in the text.]
Just part of the century-long war on academic knowledge as too related to social class and background.
Instead of knowledge we get nebulous learning tasks under Common Core that all students can be said to achieve, and thus all can be magically declared “College and Career Ready”.
Did anyone else read the Pearson report? If you have to come up with new and creative ways to measure something, it’s too nebulous to be of much marketable value.
I suspect the outcome of socializing knowledge and skills in the US to a level almost everyone can master will have a similar effect on the future American economy as collectivizing farmland had in the Ukraine.
One led to no grain and mass starvation. The other will lead to no real skills, achieved at great public expense.
Well, I agree with you all the way except for the part about “coming up with new and creative ways to measure things.” Obviously if the impetus to devise new measurements comes from our desire to get the politically convenient result, that’s bad. It’s intellectual prostitution. But surely if the motives are right, “coming up with new and creative ways to measure things” is integral to the core function of science?
Greg,
The report makes it clear that these new assessments are not testing knowledge but rather what “students should understand and be able to do”. Apparently we are supposed to have forgotten that Outcomes-Based Learning also went by “performance-based assessments”.
New names, old ideas.
To me, now that they officially have Pearson reading through the standards and saying:
“Without that clear vision of the test and its parameters, it is not possible to develop stimuli and items or build tests that will fairly and reliably measure student learning,” it sounds like they now agree with the Pioneer Institute’s concerns over vagueness.
If the output is genuinely what is needed to truly succeed in college (I never liked the “C work at a community college” definition in the undisclosed small print) or the workplace, it should be relatively easy to measure.
The need for creativity and technology and performance tasks just to get a measure should be a huge warning sign that the “outcomes” being assessed under Common Core are not what businesses, students, parents, or taxpayers need.
If you have the correlation coefficient, can’t you predict the change in the dependent variable for every 1-unit increase in the independent variable?
Or was it how much the independent variable changes for every 1-unit increase in the dependent variable…
You had it the first time, with one caveat: the number that does that job is the regression coefficient (the slope), not the correlation coefficient, which is unitless. The regression coefficient tells you how many units of change you get in the dependent variable for each 1-unit change in the independent variable; a coefficient of 0.5, for example, would mean half a unit of change in the dependent variable for each additional unit of the independent variable. (That’s why the dependent variable is called “dependent” – because it changes in response to changes in the independent variables.)
However, we have to be extremely cautious in relying on coefficient sizes. A high level of statistical significance tells you only that you’re very sure the relationship exists and runs in the given direction; it does not tell you that you’re very sure you got the correct size of the coefficient. In other words, statistical significance tells you that you’re very sure 1) the coefficient is not zero, and 2) you got the coefficient sign right (positive or negative), but that’s it.
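To make that distinction concrete, here is a minimal sketch in Python using statsmodels (the data are simulated and every number is my own illustration, not anything from the Gates study): it fits a simple regression whose true slope is known and shows that the slope can be unambiguously significant even though the estimate of its size is quite loose.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: the true slope is 2.0, but the noise is large,
# so the *size* of the estimated slope is uncertain.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=5.0, size=200)

# Ordinary least squares: y = intercept + slope * x
X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

print(fit.params)      # point estimates: [intercept, slope]
print(fit.pvalues)     # the slope's p-value is tiny: clearly nonzero, sign is right
print(fit.conf_int())  # ...but its 95% confidence interval is wide
```

On a typical run the slope comes back significant at any conventional level, so you can be confident it is positive and nonzero, yet its 95% confidence interval spans roughly 1.3 to 2.7: good grounds for trusting the direction of the relationship, much weaker grounds for trusting the point estimate of its size.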