Lies, Damned Lies, and NYT Statistics

January 31, 2017


(Guest Post by Jason Bedrick)

Earlier this month, Max Eden and I showed how three separate data sets employing three different methodologies all reached the same conclusion: Detroit’s charter schools are significantly outperforming Detroit’s district schools.

So how did the New York Times come to paint such a different narrative?

That’s the question Eden tackles at The Seventy-Four this week, and it isn’t pretty.

First, NYT reporter Kate Zernike rejected the findings from a credible center-right think tank purely for political reasons. In an email conversation with Eden, she argued that the Mackinac Center is “a partisan group that is pro–school choice and anti-[Detroit Public Schools],” as though that had a bearing on whether its data were accurate.

Second, she demonstrated little familiarity with either the data source she rejected or the one upon which she relied. She claimed Mackinac “only” used graduation rates as its basis of comparison, but that’s completely false. She also thought that Excellent Schools Detroit (ESD) — her preferred data source — adjusted their data for demographics, but they didn’t. Mackinac did.

Far more egregious is how she portrayed the ESD data. Eden painstakingly takes readers through her calculations, but the short story is this: in calculating the average performance of Detroit’s district schools, she inappropriately excluded the district schools that were so low performing that the state intervened and took over. She also inappropriately included selective-admission magnet schools that require students to maintain a certain GPA and pass a test to gain entrance — something charters and traditional district schools cannot do. She also compared a weighted average for the supposed “district” school performance against the median charter performance. Eden concludes:

If that sounds silly, it’s because comparing an average to a median is statistical nonsense. The “apples to oranges” metaphor is apt but insufficient here. Essentially, Zernike took a basket of apples, pulled out the rotten ones, kept the genetically modified ones, made statistically weighted applesauce, and plopped that applesauce in the middle of a row of organic oranges. Then she drew a false conclusion that’s become central to the case against Betsy DeVos’s nomination for secretary of education.
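To see how those choices can flip a result, here is a minimal sketch with purely hypothetical proficiency numbers (none of them come from the ESD data): a straight comparison of open-enrollment schools favors charters, but excluding the lowest-performing district schools, keeping the selective magnets, and pitting an enrollment-weighted district mean against the charter median reverses it.

```python
# Purely hypothetical proficiency rates (percent) -- none of these figures
# come from the Excellent Schools Detroit data; they only illustrate the
# procedure described above.
open_enrollment_district = [3, 4, 5, 6, 7, 8]    # imagine 3 and 4 are the state-takeover schools
selective_magnets = [35, 40]                     # admission requires a GPA and an entrance exam
charters = [9, 10, 11, 12, 13, 14, 15, 16, 17]

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def weighted_mean(xs, ws):
    return sum(x * w for x, w in zip(xs, ws)) / sum(ws)

# Like-for-like comparison: open-enrollment district schools vs. charters.
print(mean(open_enrollment_district), mean(charters))          # 5.5 vs. 13.0 -- charters ahead

# The procedure described above: drop the takeover schools, keep the selective
# magnets, weight by enrollment (magnets are large), and compare that weighted
# mean to the charter *median*.
kept_district = open_enrollment_district[2:] + selective_magnets   # [5, 6, 7, 8, 35, 40]
enrollments = [200, 200, 200, 200, 800, 800]
print(weighted_mean(kept_district, enrollments), median(charters))  # ~27.2 vs. 13 -- the comparison flips
```

The point is not that these numbers describe Detroit, only that the procedure itself can manufacture the gap Zernike reported.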

Eden also took Zernike to task for digging in her heels over her demonstrably false claim that “Ms. DeVos pushed back on any regulation as too much regulation.” As Eden details — and several others have detailed previously — DeVos has supported all sorts of regulations on choice programs. Indeed, I wish DeVos were as libertarian as Zernike portrays her, but the record indicates otherwise. As Eden notes, Zernike should have known better:

In a Detroit News op-ed, to which [Zernike’s] article later links, DeVos called for two additional regulations: A–F school accountability grades and default closure for failing schools, both charter and district. She certainly pushed back on some regulations as too much. But the bill that passed included the additional accountability regulations for which she advocated. In fact, the final legislation boosted Michigan’s accountability score on the National Alliance of Charter School Authorizers index.

Zernike, sadly, still refuses to acknowledge these glaring errors. Instead, in response to criticism, she has tried moving the goalposts and hoping no one would notice. Indeed, she's even repeating the claim that Detroit's charter sector “is no one's model” even though I have repeatedly pointed out to her that the 2015 CREDO study called Detroit's charter sector — wait for it — a “model to other communities.” As I've noted before, I think that's overstated, but you can't seriously claim that “no one” thinks Detroit is a model when, in fact, the most wide-ranging study of charter schools, conducted by a research center at one of the most respected universities in the world, used that very word to describe Detroit's charters.

Zernike has her narrative and she’s sticking to it, facts be damned. Moreover, this isn’t the first time Zernike has let her narrative get ahead of her reporting (for example, see pages 33-37 here for a long list of “errors of omission and commission” in her highly flawed reporting on a voucher study by Harvard’s Paul Peterson).

What's particularly frustrating is that she claims to be an objective, bias-free journalist (“[I] don't really have an opinion”) when it is obvious from her reporting (or her Twitter feed) that she's a dyed-in-the-wool liberal. Now, there's nothing wrong with that. Pretty much everyone has a worldview, especially those who spend a good deal of their time thinking about issues related to public policy. The problem isn't having a worldview; it's not admitting it, and therefore not taking steps to make sure that it doesn't cloud your judgment (or your reporting). As Jonah Goldberg wrote recently:

Reporters routinely call experts they already agree with knowing that their “takes” will line up with what the reporter believes. Sometimes this is lazy or deadline-driven hackery. But more often, it’s not. And that shouldn’t surprise us. Smart liberal reporters are probably inclined to think that smart liberal experts are right when they say things the smart liberal reporters already agree with.

For these and similar reasons, liberal ideas and interpretations of the facts sail through while inconvenient facts and conservative interpretations send up ideological red flags. Think of editors like security guards at a military base. They tend to wave through the people they know and the folks with right ID badges. But when a stranger shows up, or if someone lacks the right credential, then the guards feel like they have to do their job. This is the basic modus operandi for places like Vox, which seek to explain not the facts or the news, but why liberals are right about the facts and the news. […]

And you know what, the same thing is true for conservative journalists, because it’s true of people… The distinction is that there aren’t a great number of conservative journalists, certainly not in print, who don’t openly admit their biases to the reader. There are literally thousands of mainstream journalists, editors, and producers who insist that they are objective — and who actually believe it. And that leaves out the fact that liberalism is besotted with the idea that liberals aren’t ideological at all in the first place, which makes it even harder for them to recognize their ideological biases.

All journalists have is their credibility. Keeping it requires admitting errors when necessary. It should be clear to everyone that Zernike botched her reporting of the data on Detroit’s charter schools and misrepresented DeVos’s views on regulations — significant errors that have had a real impact on the narrative surrounding a cabinet pick shortly before her confirmation hearings and vote.

A responsible and credible news organization would correct the record.


Grasping at Straws Over Detroit’s Charter Schools

July 1, 2016


(Guest Post by Jason Bedrick)

Following the exposure of all the errors, distortions, and key omissions in the recent NYT hatchet job on charter schools, the new line from the reporter and her teacher union allies is that the CREDO data is current only through 2011-12, but the charter cap was lifted starting in the 2012-13 school year. So sure, charters may have been outperforming district schools before “opening the floodgates,” but now the supposed “free market” (which, for the record, has no price mechanism, no free entry and exit, and lots of regulations regarding school mission, admission standards, testing, etc.) is letting in all sorts of bad actors.

But is there any hard evidence for this? Charter critics point to several anecdotes, but as Jay noted earlier, the plural of anecdote is not data. They’re simply grasping at straws.

Until CREDO updates its report or some other group tries to replicate it, we won't have accurate apples-to-apples comparisons, so we can't conclusively reject or accept that hypothesis. But what data we do have cast doubt on it.

According to the Mackinac Center’s “2014 Michigan Public High School Context and Performance Report Card,” which used data through 2013, Michigan’s charter schools are punching above their weight: “Though charter schools make up just 11 percent of the schools ranked on this report card, they represent 35 percent of the top 20 ranked schools.” Two of the top 10 high schools in the state were charter schools in Detroit. The study awarded an “A” or “B” to four of the 14 Detroit charter high schools, while only two received an “F.” By contrast, 12 of 14 non-selective Detroit district schools received an “F.”

Results from their 2015 Elementary & Middle School Report Card are more mixed, but charters still come out slightly better.

The Great Lakes Education Project also broke down the 2015 M-STEP proficiency results and found that Detroit's charter schools (which must have open enrollment) outperformed Detroit's open-enrollment district schools, although they lagged behind Detroit's selective-enrollment district schools (and, frankly, none of the sectors have particularly stellar performance). Again, this is not an apples-to-apples comparison, so we should be cautious in interpreting these data, but they certainly don't lend support to the notion that the charter sector is particularly troubled.


Detroit’s open-enrollment charters outperform open-enrollment district schools.

Moreover, as shown in this infographic that GLEP put together, Detroit’s charters are over-represented among the top-performing schools and outperform Detroit’s district schools on average:

18 of the top 25 schools in Detroit are charter schools

22 of the bottom 25 schools in Detroit are DPS schools

Charter average: 14.6%

DPS average: 9.0%

Charters are 62% more proficient than DPS

71 charter schools (79%) perform ABOVE the DPS average and 19 charter schools (21%) perform BELOW the DPS average.

20 DPS schools (30%) perform ABOVE the DPS average and 46 DPS schools (70%) perform BELOW the DPS average.

12 DPS schools (18%) perform ABOVE the charter average and 54 DPS schools (82%) perform BELOW the charter average.

40 charter schools (44%) perform ABOVE the charter average and 50 charter schools (56%) perform BELOW the charter average.

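As a quick sanity check, the "62 percent" line follows directly from the two averages GLEP lists; here is a minimal sketch using only those two published figures:

```python
# Quick arithmetic check of the infographic's relative-proficiency claim,
# using only the two averages listed above (per GLEP).
charter_avg = 14.6   # percent proficient
dps_avg = 9.0        # percent proficient

relative_advantage = (charter_avg - dps_avg) / dps_avg
print(f"{relative_advantage:.0%}")   # 62% -- matches the "62% more proficient" figure
```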
To reiterate yet again, these are not apples-to-apples comparisons. For that we will need another carefully matched comparison, like the CREDO studies, or (better yet) a random-assignment study. But until then, charter critics should be more circumspect in their allegations. Certainly there is plenty of room for improvement in both Detroit’s charter and district schools. But the charter critics have not presented any hard evidence that Detroit’s charter sector is particularly troubled, or that the increased choice and competition is at fault for the poor performance in either sector (especially since Detroit’s district schools have been seriously troubled for decades).

Neither Detroit’s charter schools nor their district schools are above criticism. But critics should put their criticism in its proper context — and be sure to bring evidence.


El Paso Cheating Scandal

October 15, 2012

(Guest post by Greg Forster)

One guy who isn’t going to be nominated for this year’s Al Copeland award is Lorenzo Garcia, disgraced ex-superintendent of El Paso schools. He’s at the center of the latest major cheating scandal connected to NCLB. From the New York Times:

Students identified as low-performing were transferred to charter schools, discouraged from enrolling in school or were visited at home by truant officers and told not to go to school on the test day. For some, credits were deleted from transcripts or grades were changed from passing to failing or from failing to passing so they could be reclassified as freshmen or juniors…

In 2008, Linda Hernandez-Romero’s daughter repeated her freshman year at Bowie High School after administrators told her she was not allowed to return as a sophomore. Ms. Hernandez-Romero said administrators told her that her daughter was not doing well academically and was not likely to perform well on the test.

Ms. Hernandez-Romero protested the decision, but she said her daughter never followed through with her education, never received a diploma or a G.E.D. and now, at age 21, has three children, is jobless and survives on welfare.

“Her decisions have been very negative after this,” her mother said. “She always tells me: ‘Mom, I got kicked out of school because I wasn’t smart. I guess I’m not, Mom, look at me.’ There’s not a way of expressing how bad it feels, because it’s so bad. Seeing one of your children fail and knowing that it was not all her doing is worse.” [ea]

Accountability systems don't work when those being held accountable perceive the system as political and illegitimate. Schools need these systems, but they're not going to work as long as education is a government monopoly. More on that here and here.

Via Bill Evers


Dumb Headline Conceals Smart Story

September 5, 2012

(Guest Post by Matthew Ladner)

A fascinating and revealing NYT story on the impact of charter schools in Harlem is well worth reading despite the utterly absurd headline: School Choice Is No Cure-All, Harlem Finds.

So apparently the straw-man argument generator in the headline writer’s head told him or her that a few charter schools would cure all of Harlem’s problems. I doubt that anyone else did.

Reading the actual story leads one to the conclusion that while there have been difficulties and growing pains, Harlem’s experience with charter schools has been quite positive. The most serious problem pointed to in the article, in fact, is the need for more charter schools.

The NYT story deals with perceived difficulties in school grading, so the combination of A-F school grades and parental choice should sound familiar. How has this been working out for NYC's low-income Black students? Someday reporters will learn to use the NAEP Data Explorer and actual evidence to sort through contending clouds of anecdotal fog, but in the meantime I can help out.

Did the Klein reforms cure all of the education problems of Harlem? Certainly not. They also strangely failed to cure cancer, restore sight to the blind, or erase the painful memories of having shelled out money to see Indiana Jones and the Kingdom of the Crystal Skull.

They have, however, produced hard-fought gains for disadvantaged students. Rather than wringing its hands, the New York Times should be calling for the logical next steps in reform.


False Claim on Drill & Kill

December 13, 2010

The Gates Foundation is funding a $45 million project to improve measures of teacher effectiveness.  As part of that project, researchers are collecting information from two standardized tests as well as surveys administered to students and classroom observations captured by video cameras in the classrooms.  It’s a big project.

The initial round of results was reported last week with information from the student survey and standardized tests. In particular, the report described the relationship between classroom practices, as observed by students, and value-added on the standardized tests.

The New York Times reported on these findings Friday and repeated the following strong claim:

But now some 20 states are overhauling their evaluation systems, and many policymakers involved in those efforts have been asking the Gates Foundation for suggestions on what measures of teacher effectiveness to use, said Vicki L. Phillips, a director of education at the foundation.

One notable early finding, Ms. Phillips said, is that teachers who incessantly drill their students to prepare for standardized tests tend to have lower value-added learning gains than those who simply work their way methodically through the key concepts of literacy and mathematics. (emphasis added)

I looked through the report for evidence that supported this claim and could not find it.  Instead, the report actually shows a positive correlation between student reports of “test prep” and value added on standardized tests, not a negative correlation as the statement above suggests.  (See for example Appendix 1 on p. 34.)

The statement “We spend a lot of time in this class practicing for [the state test]” has a correlation of  0.195 with the value added math results.  That is about the same relationship as “My teacher asks questions to be sure we are following along when s/he is teaching,” which is 0.198.  And both are positive.

It’s true that the correlation for “Getting ready for [the state test] takes a lot of time in our class” is weaker (0.103) than other items, but it is still positive.  That just means that test prep may contribute less to value added than other practices, but it does not support the claim that  “teachers who incessantly drill their students to prepare for standardized tests tend to have lower value-added learning gains…”

In fact, on page 24, the report clearly says that the relationship between test prep and value-added on standardized tests is weaker than other observed practices, but does not claim that the relationship is negative:

The five questions with the strongest pair-wise correlation with teacher value-added were: “Students in this class treat the teacher with respect.” (ρ=0.317), “My classmates behave the way my teacher wants them to.”(ρ=0.286), “Our class stays busy and doesn’t waste time.” (ρ=0.284), “In this class, we learn a lot almost every day.”(ρ=0.273), “In this class, we learn to correct our mistakes.” (ρ=0.264) These questions were part of the “control” and “challenge” indices. We also asked students about the amount of test preparation they did in the class. Ironically, reported test preparation was among the weakest predictors of gains on the state tests: “We spend a lot of time in this class practicing for the state test.” (ρ=0.195), “I have learned a lot this year about the state test.” (ρ=0.143), “Getting ready for the state test takes a lot of time in our class.” ( ρ=0.103)

I don't know whether something got lost in the translation between the researchers and the Gates education chief, Vicki Phillips, or between her and Sam Dillon at the New York Times, but the article contains a false claim that needs to be corrected before it is used to push changes in education policy and practice.

UPDATE —

The LA Times coverage of the report contains a similar misinterpretation: “But the study found that teachers whose students said they “taught to the test” were, on average, lower performers on value-added measures than their peers, not higher.”

Try this thought experiment with another observed practice to illustrate my point about how the results are being misreported. The correlation between student observations that “My teacher seems to know if something is bothering me” and value added was 0.153, which was less than the 0.195 correlation for “We spend a lot of time in this class practicing for [the state test].” According to the interpretation in the NYT and LA Times, it would be correct to say “teachers who care about student problems tend to have lower value-added learning gains than those who spend a lot of time on test prep.”

Of course, that’s not true.  Teachers caring about what is bothering students is positively associated with value added just as test prep is.  It is just that teachers caring is a little less strongly related than test prep.  Caring does not have a negative effect just because the correlation is lower than other observed behaviors.
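To make the interpretive point concrete, here is a minimal simulation sketch (simulated data, not the MET data) in which two classroom practices are both positively related to value-added, one at about ρ = 0.2 and the other at about ρ = 0.1, roughly the magnitudes reported for the test-prep items:

```python
# Simulated illustration (not the MET data): a weaker positive correlation
# is still positive, not negative.
import random

random.seed(0)
n = 5000

# Latent teacher effectiveness drives value-added gains.
effectiveness = [random.gauss(0, 1) for _ in range(n)]
value_added = [e + random.gauss(0, 1) for e in effectiveness]

# Two practices, both positively tied to effectiveness, one more strongly.
practice_strong = [0.30 * e + random.gauss(0, 1) for e in effectiveness]  # rho ~ 0.2
practice_weak = [0.15 * e + random.gauss(0, 1) for e in effectiveness]    # rho ~ 0.1

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson(practice_strong, value_added), 3))  # about 0.2 -- positive
print(round(pearson(practice_weak, value_added), 3))    # about 0.1 -- positive, just weaker
```

Both estimates come out positive; ranking them tells you which practice is the stronger predictor, not that the weaker one lowers gains, which is exactly the mistake in the "drill and kill" claim.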

(edited for typos)


It Took So Long Because They Were Learning It in the Wrong Style

September 7, 2010

(Guest post by Greg Forster)

I had to laugh when I saw this New York Times story. They’ve discovered that the existence of multiple “learning styles” has no sound basis in empirical evidence:

Take the notion that children have specific learning styles, that some are “visual learners” and others are auditory; some are “left-brain” students, others “right-brain.” In a recent review of the relevant research, published in the journal Psychological Science in the Public Interest, a team of psychologists found almost zero support for such ideas. “The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing,” the researchers concluded.

Wow, those daring journalists at the Times and scientists at Psychological Science in the Public Interest aren’t afraid to buck the conventional wisdom!

Imagine how daring they’d have been if they’d been reading Education Next . . . in 2004?

(Admittedly, the Ed Next article is framed in terms of “multiple intelligences” rather than “learning styles,” but when you come right down to it, “multiple intelligences” was just the fashionable early-aughts buzzword for the same cluster of fallacies that goes by “learning styles.”)

HT Joanne Jacobs


Room for Debate on Teacher Assessment at the NYT

September 6, 2010

(Guest Post by Matthew Ladner)

Lance and Marcus enter a bar brawl over at the NYT on value added assessment. Watch out for the guy holding the pool stick upside down!


The John Stuart Mill approach to Health Care Reform

October 21, 2009

(Guest Post by Matthew Ladner)

JSM once noted that if government would simply require an education, it might save itself the trouble of providing one. He could have added the trouble of trying to provide one at enormous cost, but let's not quibble over details.

This was the approach of the Romney reform in MA, but that reform ignored the fundamental problem with our system: third-party payers create a powerful incentive to ignore costs.

If you don’t believe it, give me an unlimited line of credit with your money at a Vegas casino and watch me transform into a gambling fiend.

The New York Times published an important piece suggesting a brilliant compromise: the government should mandate insurance, but only catastrophic insurance.

This would introduce supply and demand back into most of the health care market, which is precisely what is needed in order to curtail costs and thus prevent the continuing loss of coverage (which is a symptom, not the disease).

Government policy (both in the tax code and from Medicare and Medicaid) is directly responsible for the out-of-control costs we have experienced. Having quasi-socialized the health care system without gaining monopoly power to dictate terms to health professionals, politicians have created a culture of "anything goes" in health care.

Paul Tsongas said it best: "America is the only country that pretends that death is optional."

The government has, in essence, created a health care culture that rejects the defining feature of a government-run plan: bureaucratically rationed care. Notice the scrambling to pretend that there are "no death panels" in the plan kicking around Congress. This is, of course, meaningless: if there are no death panels now, there soon will be under a new name. Eurocare is all about having bureaucrats make cost/benefit decisions about health care. They withhold treatment from 78-year-old men with prostate cancer so they can spend their limited resources on prenatal care.

Forget about arguing the ethics of Canadacare: after decades of anything goes, Americans won't go for it. If the Democrats pass it anyway, they are likely to rue the day. Put in death panels = driving off a cliff. Expanding coverage without rationing and death panels = faster fiscal suicide.

We’re caught in a trap…can’t walk out!

It seems to me, then, that the sort of catastrophic mandate/increased out-of-pocket expenses/health savings account approach outlined in the Times article is far more sensible than the fiscal/political suicide pact currently under discussion.

Munchausen-by-proxy syndrome in health care might have been great fun for the politicians while it lasted, but with a $1.4 trillion deficit this year, we can no longer afford it.


The Unions Have Lost Nick Kristof

October 15, 2009

(Guest Post by Matthew Ladner)

Read it and weep, K-12 reactionaries.

P.S.

Somewhere, John Rawls is smiling.


Over There (But Not Over Here)

September 21, 2009

Several years ago I was part of a delegation sent by the U.S. Department of Education to a conference in China on private education.  The U.S. Dept of Ed believed that encouraging the expansion of private education in China would help promote democracy.  Apparently, they thought private schools were good for democratic values over there, but not over here. 

I was reminded of that experience while reading a recent New York Times article about severe problems with education in South Africa.  The piece states:

Despite sharp increases in education spending since apartheid ended, South African children consistently score at or near rock bottom on international achievement tests, even measured against far poorer African countries. This bodes ill for South Africa’s ability to compete in a globalized economy, or to fill its yawning demand for skilled workers. And the wrenching achievement gap between black and white students persists.

Sound familiar?

And what does the NYT tell us is a central part of the problem:

The teachers’ union too often protected its members at the expense of pupils, critics say. “We have the highest level of teacher unionization in the world, but their focus is on rights, not responsibilities,” Mamphela Ramphele, former vice chancellor of the University of Cape Town, said in a recent speech.

I see.  Teacher unions over there = bad, while over here = good.  Sometimes you have to get people outside of their vested set of domestic interests to see how they really think the world works.

