Eduresponses to Edubloggers

July 10, 2008

My recent posts on the release of our new study on the effects of high-stakes testing in Florida, and posts here and here on the appropriateness of releasing it before it has appeared in a scholarly journal, have produced a number of reactions.  Let me briefly note and respond to some of those reactions.

First, Eduwonkette, who started this all, has oddly not responded.  This is strange because I caught her in a glaring contradiction: she asserts that the credibility of the source of information is an important part of assessing the truth of a claim, yet her anonymity prevents everyone from assessing her credibility.  I would prefer that she resolve this contradiction by agreeing with my earlier defense of her anonymity: that the truth of a claim does not depend on who makes it.  But she has to resolve it one way or the other: either she ends her anonymity or she drops the argument that we should assess the source when determining truth.

But apparently she doesn’t have to do anything.  Whose reputation suffers if she refuses to be consistent?  Her anonymity is producing just the sort of irresponsibility that Andy Rotherham warned about in the NY Sun and that I acknowledged even as I defended her.  The only reputation that is getting soiled is that of Education Week for agreeing to host her blog anonymously.  If she doesn’t resolve her double standard by either revising her argument or dropping her anonymity, Education Week should stop hosting her.  They shouldn’t lend their reputation to someone who will tarnish it.

Mike Petrilli over at Flypaper praises our new study on high-stakes testing but takes issue with our referencing, in the “pre-release spin,” comments by Chester Finn and Diane Ravitch about how high stakes are narrowing the curriculum.  I agree with him that this study is not “the last word on the ‘narrowing of the curriculum.’”  But to the extent that it shows that another part of the curriculum (science) benefits when stakes are applied only to math and reading, it alleviates the concerns Checker and Diane have expressed.

As we fully acknowledge in the study, we don’t have evidence on what happens to history, art, or other parts of the curriculum.  And we only have evidence from Florida, so we don’t know if there are different effects in other states.  But the evidence that high stakes in math and reading contribute to learning in science should make us less convinced that all low stakes subjects are harmed.  Perhaps school-wide reforms that flow from high stakes in math and reading produce improvements across the curriculum.  Perhaps improved basic skills in literacy and numeracy have spill-over benefits in history, art, and everything else as students can more effectively read their art texts and analyze data in history.

Andy Rotherham at Eduwonk laments that what I describe as our “caveat emptor market of ideas” doesn’t work very well.  I agree with him that people make plenty of mistakes.  But I also agree with him that “in terms of remedies there is no substitute for smart consumption of information and research…”  There is no Truth Committee that will figure everything out for us.  And any process of reviewing claims before release will make its own errors and will come at some expense of delay.  Think Tank West has added some useful points on this issue.

Sherman Dorn, who rarely has a kind word for me, says: “Jay Greene (one of the Manhattan Institute report’s authors and a key part of the think tank’s stable of writers) replied with probably the best argument against eduwonkette (or any blogger) in favor of using PR firms for unvetted research: as with blogs, publicizing unvetted reports involves a tradeoff between review and publishing speed, a tradeoff that reporters and other readers are aware of.”  He goes on to have a very lengthy discussion of the issue, but I was hypnotized by his rare praise, so I haven’t yet had a chance to take in everything else he said.


Eduwonkette and Eduwonk Aren’t Edumarried?

July 8, 2008

The New York Sun had a nice profile yesterday of Eduwonkette.  Well, it’s not exactly a profile because Eduwonkette writes anonymously.  In the article some folks complain that her anonymity is a problem: “A co-director of the Education Sector think tank, Andrew Rotherham, suggested on his blog Eduwonk that Eduwonkette might be unfairly pretending to be unbiased because she has ‘skin in the game… It’s this issue of you got all this information to readers, without a vital piece of information for them to put it in context.'”

I think Andy’s mistaken on this. (Did they have some kind of edu-break-up?)  The issue is not who Eduwonkette is, but whether she is right or not.  Knowing who she is does not make her evidence or arguments any more or less compelling.  I wish we all spent a whole lot less time analyzing people’s motives and a whole lot more time on their evidence and arguments.

The only major problem with anonymity is a lack of responsibility for being wrong.  There is a reputational price for making bad arguments or getting the evidence wrong that Eduwonkette avoids paying professionally, although she does pay a price in the reputation of the Eduwonkette brand.

Speaking of being wrong, Eduwonkette knocks the study Marcus Winters, Julie Trivitt, and I released today through the Manhattan Institute.  She complains: “It may be an elegantly executed study, or it may be a terrible study. The trouble is that based on the embargoed version released to the press, on which many a news article will appear today, it’s impossible to tell. There is a technical appendix, but that wasn’t provided up front to the press with the glossy embargoed study. Though the embargo has been lifted now and the report is publicly available, the technical appendix is not.”

This isn’t correct.  Embargoed copies of the study were provided to reporters upon request.  If they requested the technical report, they could get that as well.  Both were available to reporters well in advance so that they could take time to read them and circulate them to other experts before writing a story.  Both the study and the technical report were made publicly available today (although there seems to be a glitch with the link to the technical report that should be fixed within hours).  The technical report can be found here.

And while we are on the subject of Eduwonkette being wrong, her attacks on test-based promotion policies are overdone.  The Jacob and Lefgren paper does raise concerns, but there is more positive evidence from the experience in Florida.  As I wrote in a previous post: “In a study I did with Marcus Winters that was published in Education Finance and Policy, we found that retained students significantly outperformed their comparable peers over the next two years.  In another study we published in the Economics of Education Review, we found that schools were not effective at identifying which students should be exempted from this test-based promotion policy and appeared to discriminate in applying these exemptions.  That is, white students were more likely to be exempted by school officials in Florida from being retained, but those students suffered academically by being exempted.”

Our results may actually be consistent with what Jacob and Lefgren find.  We find academic benefits for students retained in third grade.  They find: “that grade retention leads to a modest increase in the probability of dropping out for older students, but has no significant effect on younger students.”  It could be that test-based promotion is more beneficial when done with younger students.  It could also be that the policy has positive effects on achievement with some cost to graduation. 

And particularly severe problems with the integrity of test results used for promotion decisions in Chicago may limit our ability to generalize from Chicago’s experience.  In Chicago it may have been easier to move retained students forward by cheating on the next test than by actually teaching them the basic skills they need to succeed in the next grade.

Besides, I’m sure that Eduwonkette wouldn’t put too much stock in Jacob and Lefgren’s non-peer-reviewed paper released straight to the public.  I’m sure she would be consistent in her view: “By the time the study’s main findings already have been widely disseminated, some sucker with expertise in regression discontinuity may find a mistake while combing through that appendix, one that could alter the results of the study. But the news cycle will have moved on by then. Good luck interesting a reporter in that story… So as much as I like to kvetch about peer review and the pain and suffering it inflicts, it makes educational research better. It catches many problems and errors before studies go prime time, even if it doesn’t always work perfectly.”

Or do these standards only apply to studies whose findings she doesn’t like?  If Eduwonkette isn’t careful, she might get a reputation.

