The Dead End of Scientific Progressivism

January 18, 2011

In Education Myths I argued that we needed to rely on science rather than our direct experience to identify effective policies.  Our eyes can mislead us, while scientific evidence has the systematic rigor to guide us more accurately.

That’s true, but I am now more aware of the opposite failing — believing that we can resolve all policy disputes and identify the “right way” to educate all children solely by relying on science.  Science has its limits.  Science cannot adjudicate among the competing values that might attract us to one educational approach over another.  Science usually tells us about outcomes for the typical or average student and cannot easily tell us about what is most effective for individual students with diverse needs.  Science is slow and uncertain, while policy and practice decisions have to be made right now whether a consensus of scientific evidence exists or not.  We should rely on science when we can but we also need to be humble about what science can and can’t address.

I was thinking about this while reflecting on the Gates Foundation’s Measures of Effective Teaching (MET) project.  The project is an ambitious $45 million enterprise to improve the stability of value-added measures while identifying effective practices that contribute to higher value-added performance.  These are worthy goals.  The project intends to advance them by administering two standardized tests to students in eight different school systems, surveying the students, and videotaping classroom lessons.

The idea is to see if combining information from the tests, survey, and classroom observations could produce more stable measures of teacher contributions to learning than is possible by just using the state test.  And since they are observing classrooms and surveying students, they can also identify certain teacher practices and techniques that might be associated with greater improvement.  The Gates folks are using science to improve the measures of student progress and to identify what makes a more effective teacher.

This is a great use of science, but there are limits to what we can expect.  When identifying practices that are more effective, we have to remember that this is just more effective for the typical student.  Different practices may be more effective for different students.  In principle science could help address this also, but even this study, with 3,000 teachers, is not nearly large enough to produce a fine-grained analysis of what kind of approach is most effective for many different kinds of kids.
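To make the averaging problem concrete, here is a toy sketch (with invented numbers, nothing from the actual study) of how a practice that “works” on average can still be the wrong choice for a large group of students:

```python
# A toy sketch with invented numbers (not the Gates data) of how an average
# effect can hide very different effects for different kinds of kids.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Suppose half of students respond well to some practice (+0.4 on the
# outcome) and half respond badly (-0.2). A study sees only the average.
responds_well = rng.random(n) < 0.5
true_effect = np.where(responds_well, 0.4, -0.2)
outcome = true_effect + rng.normal(scale=1.0, size=n)

print(outcome.mean())                  # roughly +0.1: "the practice works"
print(outcome[responds_well].mean())   # roughly +0.4
print(outcome[~responds_well].mean())  # roughly -0.2: it backfires here
```

A study that reports only the first number would recommend the practice for everyone, including the students for whom it backfires.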

My fear is that the researchers, their foundation backers, and, most importantly, the policymaker and educator consumers of the research are insensitive to these limitations of science.  I fear that the project will identify the “right” way to teach and that it will then be used to enforce that right way on everyone, even though it is highly likely that there are different “right” ways for different kids.

We already have a taste of this from the preliminary report that Gates issued last month.  Following its release, Vicki Phillips, the head of education at the Gates Foundation, told the New York Times: “Teaching to the test makes your students do worse on the tests.”  Science had produced its answer — teachers should stop teaching to the test, stop drill and kill, and stop test prep (terms that the Gates officials and reporters used interchangeably).

Unfortunately, Vicki Phillips misread her own Foundation’s report.  On p. 34 the correlation between test prep and value-added is positive, not negative.  If the study shows any relationship between test prep and student progress, it is that test prep contributes to higher value-added.  Let’s leave aside the fact that these were simply a series of pairwise correlations and not the sort of multivariate analysis you would expect if you were really trying to identify effective teaching practices.  Vicki Phillips was just plain wrong in what she said.  Even worse, despite having the error pointed out, neither the Gates Foundation nor the New York Times has considered it worthwhile to post a public correction.  Science says what I say it says.
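To see why the pairwise-versus-multivariate distinction matters, consider a minimal simulation (again with invented data, not the Gates data).  When some unobserved quality drives several practices at once, every practice can show a positive pairwise correlation with value-added even though a joint regression would assign the credit very differently:

```python
# A minimal simulation (invented data) of why pairwise correlations are not
# a substitute for a multivariate analysis: an unobserved "general
# effectiveness" drives both practices, so both correlate positively with
# value-added, but only one of them actually matters.
import numpy as np

rng = np.random.default_rng(1)
n = 3000  # roughly the number of teachers in the project

effectiveness = rng.normal(size=n)                      # unobserved
classroom_control = effectiveness + rng.normal(size=n)  # practice that matters
test_prep = effectiveness + rng.normal(size=n)          # practice that doesn't
value_added = 0.5 * classroom_control + rng.normal(size=n)

# Pairwise correlations: both come out positive.
print(np.corrcoef(test_prep, value_added)[0, 1])
print(np.corrcoef(classroom_control, value_added)[0, 1])

# A joint regression shows test prep contributes nothing once classroom
# control is accounted for.
X = np.column_stack([np.ones(n), classroom_control, test_prep])
coefs = np.linalg.lstsq(X, value_added, rcond=None)[0]
print(coefs)  # intercept ~0, control ~0.5, test prep ~0
```

None of that subtlety survived in the public messaging.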

That “science says what I say it says” attitude is the greatest danger of a lack of humility in the application of science to public policy.  Science can be corrupted so that it simply becomes a shield disguising the policy preferences of those in authority.  How many times have you heard a school official justify a particular policy by saying that it is supported by research when in fact no such research exists?  This (mis)use of science is a way for authority figures to tell their critics, “shut up!”

But even if the Gates report had conducted multivariate analyses on effective teaching practices and even if Vicki Phillips could accurately describe the results of those analyses, the Gates project of using science to identify the “best” practices is doomed to failure.  The very nature of education is that different techniques are more effective in different kinds of situations for different kinds of kids.  Science can identify the best approach for the average student but it cannot identify the best approach for each individual student.  And if students are highly varied in their needs, which I believe they are, this is a major limitation.

But as the Gates Foundation pushes national standards with new national tests, it seems inclined to impose the “best” practices that science identifies on all students.  The combination of Gates building a national infrastructure for driving educator behavior while launching a gigantic scientific effort to identify the best practices is worrisome.

There is nothing wrong with using science to inform local practice.  But science needs markets to keep it honest.  If competing educators can be informed by science, then they can pick among competing claims about what science tells us.  And they can learn from their experience whether the practices that are recommended for the typical student by science work in the particular circumstances in which they are operating.

But if the science of best educator practice is combined with a national infrastructure of standards and testing, then local actors cannot adjudicate among competing claims about what science says.  What the central authorities decide science says will be infused in the national standards and tests and all must adhere to that vision if they wish to excel along these centralized criteria.  Even if the central authority completely misunderstands what science has to say, we will all have to accept that interpretation.

I don’t mean to be overly alarmist.  Gates has a lot of sensible people working for them and there are many barriers remaining before we fully implement national standards and testing.  My concern is that the Gates Foundation is being informed by an incorrect theory of reform.  Reform does not come from science identifying the right thing to do and then a centralized authority imposing that right thing on everyone.  Progress comes from decentralized decision-makers having the freedom and motivation to choose among competing claims about what is right according to science.

(edited for typos)


Drill and Kill Kerfuffle

December 16, 2010

The reactions of New York Times reporter Sam Dillon and LA Times reporter Jason Felch to my post on Monday about erroneous claims in their coverage of a new Gates report could not have been more different.  Felch said he would look into the issue, discovered that the claimed negative relationship between test prep and value-added was inaccurate, and is now working on a correction with his editors.

Sam Dillon took a very different tack.  His reaction was to accuse me of “suggesting on the internet that I had misinterpreted an interview, and then you repeated the same thing about the Los Angeles Times. That was just a sloppy and irresponsible error.”  I’m not sure how Dillon jumps to this thin-skinned defensiveness when I clearly said I did not know where the error was made: “I don’t know whether something got lost in the translation between the researchers and Gates education chief, Vicki Phillips, or between her and Sam Dillon at the New York Times, but the article contains a false claim that needs to be corrected before it is used to push changes in education policy and practice.”

But more importantly, Dillon failed to check the accuracy of the disputed claim with independent experts.  Instead, he simply reconfirmed the claim with Gates officials: “For your information, I contacted the Gates Foundation after our correspondence and asked them if I had misquoted or in any way misinterpreted either Vicki Phillips, or their report on their research. They said, ‘absolutely not, you got it exactly right.'”

He went on to call my efforts to correct the claim “pathetic, sloppy, and lazy, and by the way an insult.”  I guess Dillon thinks that being a reporter for the New York Times means never having to say you’re sorry — or consult independent experts to resolve a disputed claim.

If Dillon wasn’t going to check with independent experts, I decided that I should — just to make sure that I was right in saying that the claims in the NYT and LAT coverage were unsupported by the findings in the Gates report.

Just to review, here is what Dillon wrote in the New York Times: “One notable early finding, Ms. Phillips said, is that teachers who incessantly drill their students to prepare for standardized tests tend to have lower value-added learning gains than those who simply work their way methodically through the key concepts of literacy and mathematics.”  And here is what Jason Felch wrote in the LA Times: “But the study found that teachers whose students said they ‘taught to the test’ were, on average, lower performers on value-added measures than their peers, not higher.”  And the correlations in the Gates report between student reports of test prep and value-added on standardized tests were all positive: “We spend a lot of time in this class practicing for the state test.” (ρ=0.195), “I have learned a lot this year about the state test.” (ρ=0.143), “Getting ready for the state test takes a lot of time in our class.” (ρ=0.103).  The report does not actually contain items that specifically mention “drill,” “work their way methodically through the key concepts of literacy and mathematics,” or “taught to the test,” but I believe the reporters (and perhaps Gates officials) are referencing the test prep items with these phrases.

I sent links to the coverage and the Gates report to a half-dozen leading economists to ask if the claims mentioned above were supported by the findings.  The following reply from Jacob Vigdor, an economist at Duke, was fairly representative of what they said even if it was a bit more direct than most:

I looked carefully at the report and come to the same conclusion as you: these correlations are positive, not negative.  The NYT and LAT reports are both plainly inconsistent with what is written in the report.  A more accurate statement would be along the lines of “test preparation activities appear to be less important determinants of value added than [caring teachers, teacher control in the classroom, etc].”  But even this statement is subject to the caveat that pairwise correlations don’t definitively prove the importance of one factor over another.  Maybe the reporters are describing some other analysis that was not in the report (e.g., regression results that the investigators know about but do not appear in print), but even in that case they aren’t really getting the story right.  Even in that scenario, the best conclusion (given positive pairwise correlations and a hypothetically negative regression coefficient) would be that teachers who possess all these positive characteristics tend to emphasize test preparation as well.

Put another way, it’s always good to have a caring teacher who is in control of the classroom, makes learning fun, and demands a lot of her students.  Among the teachers who share these characteristics, the best ones (in terms of value added) appear to also emphasize preparation for standardized tests.  I say “appear” because one would need a full-fledged multivariate regression analysis, and not pairwise correlations, to determine this definitively.

Another leading economist, who preferred not to be named, wrote: “I looked back over the report and I think you are absolutely right!”  I’m working on getting permission to quote others, but you get the idea.

In addition to confirming that a positive correlation for the test prep items means they are associated with higher value-added, not lower, several of these leading economists emphasized the inappropriateness of comparing correlations to draw conclusions about whether test prep contributes to value-added any more or less than other teacher practices observed by students.  They noted that any such comparison would require a multivariate analysis, not just a series of pairwise correlations.  And they also noted that any causal claim about the relative effectiveness of test prep would require some effort to address the endogeneity of which teachers engage in more test prep.

As David Figlio, an economist at Northwestern University, put it:

You’re certainly correct here.  A positive pairwise correlation means that these behaviors are associated with higher performance on standardized tests, not lower performance.  The only way that it could be an accurate statement that test prep is causing worse outcomes would be if there was a negative coefficient on test prep in a head-to-head competition in a regression model — though even then, one would have to worry about endogeneity: maybe teachers with worse-performing students focus more on test prep, or maybe lower-performing students perceive test prep to be more oppressive (of course, this could go the other way as well.)  But that was not the purpose or intent of the report.  The report does not present this as a head-to-head comparison, but rather to take a first look at the correlates between practice measures and classroom performance.
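Figlio’s endogeneity caveat is easy to demonstrate with a quick simulation (invented numbers, a purely hypothetical scenario): if teachers facing tougher classrooms assign more test prep, a naive regression can return a negative coefficient even when test prep’s true effect is positive:

```python
# A sketch of the endogeneity worry (invented data): test prep here has a
# genuinely positive effect (+0.2), but teachers in tougher classrooms do
# more of it, so a naive regression of value-added on test prep turns up a
# negative slope anyway.
import numpy as np

rng = np.random.default_rng(2)
n = 3000

difficulty = rng.normal(size=n)                          # unobserved
test_prep = 0.8 * difficulty + rng.normal(scale=0.5, size=n)
value_added = 0.2 * test_prep - difficulty + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), test_prep])
slope = np.linalg.lstsq(X, value_added, rcond=None)[0][1]
print(slope)  # negative, despite the positive causal effect
```

A table of pairwise correlations cannot rule a story like this in or out, which is exactly why the economists were so careful in their language.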

There was no reason for this issue to have developed into the controversy that it has.  The coverage contains obvious errors that should have been corrected quickly and clearly, just as Jason Felch is doing.  Tom Kane, Vicki Phillips, and other folks at Gates should have immediately issued a clarification as soon as they were alerted to the error, which was on Monday.

And while I did not know where the error occurred when I wrote the blog post on Monday, the indications now are that there was a miscommunication between the technical people who wrote the report and non-technical folks at Gates, like Vicki Phillips and the PR staff.  In other words, Sam Dillon can relax, since the mistake appears to have originated within Gates (although Dillon’s subsequent defensiveness, name-calling, and failure to check with independent experts hardly bring credit to the profession of journalism).

The sooner Gates issues a public correction, the sooner we can move beyond this dispute over what is actually a sidebar in their report and focus instead on the enormously interesting project on which they’ve embarked to improve measures of teacher effectiveness.  An apology from Sam Dillon would also be nice, but I’m not holding my breath.



False Claim on Drill & Kill

December 13, 2010

The Gates Foundation is funding a $45 million project to improve measures of teacher effectiveness.  As part of that project, researchers are collecting information from two standardized tests as well as surveys administered to students and classroom observations captured by video cameras in the classrooms.  It’s a big project.

The initial round of results was reported last week, with information from the student survey and standardized tests.  In particular, the report described the relationship between classroom practices, as observed by students, and value-added on the standardized tests.

The New York Times reported on these findings Friday and repeated the following strong claim:

But now some 20 states are overhauling their evaluation systems, and many policymakers involved in those efforts have been asking the Gates Foundation for suggestions on what measures of teacher effectiveness to use, said Vicki L. Phillips, a director of education at the foundation.

One notable early finding, Ms. Phillips said, is that teachers who incessantly drill their students to prepare for standardized tests tend to have lower value-added learning gains than those who simply work their way methodically through the key concepts of literacy and mathematics. (emphasis added)

I looked through the report for evidence that supported this claim and could not find it.  Instead, the report actually shows a positive correlation between student reports of “test prep” and value added on standardized tests, not a negative correlation as the statement above suggests.  (See for example Appendix 1 on p. 34.)

The statement “We spend a lot of time in this class practicing for [the state test]” has a correlation of 0.195 with the value-added math results.  That is about the same relationship as “My teacher asks questions to be sure we are following along when s/he is teaching,” which is 0.198.  And both are positive.

It’s true that the correlation for “Getting ready for [the state test] takes a lot of time in our class” is weaker (0.103) than those for other items, but it is still positive.  That just means that test prep may contribute less to value added than other practices, but it does not support the claim that “teachers who incessantly drill their students to prepare for standardized tests tend to have lower value-added learning gains…”

In fact, on page 24, the report clearly says that the relationship between test prep and value-added on standardized tests is weaker than other observed practices, but does not claim that the relationship is negative:

The five questions with the strongest pair-wise correlation with teacher value-added were: “Students in this class treat the teacher with respect.” (ρ=0.317), “My classmates behave the way my teacher wants them to.” (ρ=0.286), “Our class stays busy and doesn’t waste time.” (ρ=0.284), “In this class, we learn a lot almost every day.” (ρ=0.273), “In this class, we learn to correct our mistakes.” (ρ=0.264)  These questions were part of the “control” and “challenge” indices. We also asked students about the amount of test preparation they did in the class. Ironically, reported test preparation was among the weakest predictors of gains on the state tests: “We spend a lot of time in this class practicing for the state test.” (ρ=0.195), “I have learned a lot this year about the state test.” (ρ=0.143), “Getting ready for the state test takes a lot of time in our class.” (ρ=0.103)
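For readers unused to these numbers, here is a sketch of what a small positive correlation like ρ=0.103 looks like (invented data; the report’s actual estimation is far more elaborate).  The association is weak and noisy, but it runs in the positive direction:

```python
# What a small positive correlation like 0.103 means in practice (invented
# data): per-teacher survey averages against value-added estimates. Weak
# and noisy, but the association is POSITIVE, not negative.
import numpy as np

rng = np.random.default_rng(3)
n = 3000

test_prep_rating = rng.normal(size=n)   # per-teacher student-survey average
value_added = 0.1 * test_prep_rating + rng.normal(size=n)

print(np.corrcoef(test_prep_rating, value_added)[0, 1])  # about +0.1
```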

I don’t know whether something got lost in the translation between the researchers and Gates education chief, Vicki Phillips, or between her and Sam Dillon at the New York Times, but the article contains a false claim that needs to be corrected before it is used to push changes in education policy and practice.

UPDATE –

The LA Times coverage of the report contains a similar misinterpretation: “But the study found that teachers whose students said they ‘taught to the test’ were, on average, lower performers on value-added measures than their peers, not higher.”

Try this thought experiment with another observed practice to illustrate my point about how the results are being misreported…  The correlation between student observations that “My teacher seems to know if something is bothering me” and value added was 0.153, which was less than the 0.195 correlation for “We spend a lot of time in this class practicing for [the state test].”  According to the interpretation in the NYT and LA Times, it would be correct to say “teachers who care about student problems tend to have lower value-added learning gains than those who spend a lot of time on test prep.”

Of course, that’s not true.  Teachers caring about what is bothering students is positively associated with value added, just as test prep is.  It is just that the caring item is a little less strongly related than the test prep item.  Caring does not have a negative effect just because its correlation is lower than those of other observed behaviors.

(edited for typos)

