A Guide for the Perplexed — A Review of Rigorous Charter Research

(Guest Post by Collin Hitt)

So you say charter schools don’t work. That’s an empirical claim. It needs to be backed up by evidence. Here’s a helpful guide to the most rigorous research available. Once you’ve tackled this material, you’ll be in position to prove your point.

As you probably know, the gold standard method of research in social science is called random assignment. Charter schools are particularly well-suited for random assignment evaluations, since they’re usually required by law to admit students by lottery. The lotteries are fair to families – that’s why they’re put in place. But they also allow researchers to make fair comparisons between students who win or lose lotteries to attend charter schools.
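To make that logic concrete, here is a minimal simulation sketch of a lottery-based comparison. Everything in it is made up (the applicant count, the 0.2 sd effect, the variable names); it only illustrates why a random lottery offer lets researchers read a simple difference in mean scores as a causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical applicant pool: every student entered the same charter
# lottery, so winners and losers are comparable on average by construction.
n_applicants = 2000
prior_score = rng.normal(0, 1, n_applicants)         # standardized prior achievement
won_lottery = rng.permutation(n_applicants) < 1000   # random offer of a charter seat

# Follow-up test score: identical process for both groups, plus a made-up
# 0.2 sd boost for lottery winners (purely illustrative, not a real finding).
followup = 0.5 * prior_score + rng.normal(0, 1, n_applicants) + 0.2 * won_lottery

# Because the offer was random, the winner-loser gap in mean standardized
# scores estimates the effect of winning the lottery, in sd units.
effect_sd = (followup[won_lottery].mean() - followup[~won_lottery].mean()) / followup.std()
print(f"estimated effect of a lottery offer: {effect_sd:.2f} sd")
```

No fancy statistics are needed: randomization does the work of equalizing the two groups before the charter school ever touches them.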

To date, nine lottery-based evaluations of charter schools have been released. Let’s go through them, starting with the earliest work.

The first random assignment study of charter schools was released in 2004 by Caroline Hoxby and Jonah Rockoff. It focused on Chicago International Charter School. After three years, charter students had significantly higher reading scores, equal to 3.3 to 4.2 points on a 100-point ranking scale. Gains were even stronger for younger students.

That same year, the University of California San Diego released a study of the Preuss charter school located on the university’s campus. Test scores for charter students appeared unchanged, but the school improved college-going rates by 23 percent: 90 percent of Preuss juniors were headed to four-year colleges.

So the first two random-assignment studies of charter schools won’t help your point. They find gains for charter schools. But those studies are becoming dated; most of the national charter boom has occurred since they were published. Also, the San Diego study employs few statistical controls. So these studies don’t disprove your point either. Let’s review the newer stuff.

In 2010, Harvard’s Will Dobbie and Roland Fryer released a study of the Harlem Promise Academy. Entering kindergartners experienced large gains by the third grade, equal to 0.58 standard deviations (sd) in reading and 0.49 sd in math, sufficient to eliminate the black-white achievement gap. Students who entered Harlem Promise Academy in early middle school saw smaller gains that nevertheless, by the eighth grade, closed the achievement gap in math and cut it in half in reading.
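If standard deviation units feel abstract, here is a rough back-of-the-envelope conversion into percentile terms, assuming roughly normal score distributions. The numbers are just the Promise Academy effect sizes cited above, not new results.

```python
from math import erf, sqrt

def normal_cdf(d: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(d / sqrt(2.0)))

# A student at the 50th percentile who gains d standard deviations
# lands at roughly the normal_cdf(d) percentile.
for d in (0.49, 0.58):   # the Promise Academy effect sizes cited above
    print(f"a {d:.2f} sd gain moves a median student from percentile 50 "
          f"to about percentile {normal_cdf(d) * 100:.0f}")
```

Gains of that size push a typical student from the middle of the pack to roughly the 70th percentile, which is why researchers describe them as large.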

Later in 2010, researchers from MIT, Harvard and Michigan released a study of KIPP Academy in Lynn, Massachusetts, part of the national Knowledge Is Power Program (KIPP) charter network. After a single year in the school, students saw achievement gains of 0.12 sd in English and 0.35 sd in math.

And earlier this year, researchers from Yale and Brown released a study of an unnamed charter network in an anonymous school district. There were no visible math gains for charter students, but they did see awfully big reading gains of 0.35 sd and writing gains of 0.79 sd.

Charter advocates will point to these studies to try to prove you wrong, since these charter schools are definitely working. In turn, you could attempt to discredit the statistical math of the authors above. (Good luck.) Or you could make a more obvious point: these studies together look only at five charter operators. There are hundreds of charter operators across the country. The researchers could be cherry-picking – studying schools that they suspected beforehand were high-performing.

Larger random-assignment studies could address these issues by looking at a wider range of charter schools. Luckily, we’ve got four studies that do just that, all of them fairly recent.

The first is a 2009 study led by Caroline Hoxby. It examines practically every charter school in New York City. For every year students were enrolled in a charter school, they saw 0.04 sd gains in reading and 0.09 sd gains in math. The findings here are similar to the middle school gains that Dobbie and Fryer found at Harlem Promise Academy, though the citywide charter gains are clearly smaller than the Promise Academy’s extraordinary gains for kindergartners.

Later that year, a citywide study of Boston charter middle and high schools found that charters produced “extraordinarily large” gains, according to the authors, who were based at Duke, Harvard, MIT and Michigan. After only one year, Boston’s charter high schools produced gains of 0.16 sd in reading and 0.19 sd in math. Charter middle schools in the city produced similar reading gains of 0.17 sd and a remarkable 0.54 sd in math.

In 2010, the US Department of Education released the first nationwide random-assignment study of charter middle schools. It contained two useful findings. Charter schools in affluent areas produced lower results than neighboring schools, which makes some sense: charter schools in the suburbs are competing with higher-quality schools than those found in the inner cities. Charter schools in urban areas, enrolling a large percentage of poor students, posted significant gains in math, equal to 0.18 sd over two years.

In 2011, the team behind the 2009 study of Boston charter schools presented findings from a statewide evaluation of Massachusetts charter middle and high schools. Overall, results were positive. As with the USDOE study of middle schools, they found that charter schools in non-urban areas produced no positive gains. On the other hand, schools located in urban areas produced middle school gains of 0.12 sd in English and 0.33 sd in math and high school gains of 0.33 sd in English and 0.39 sd in math. These gains almost perfectly mirror the findings at KIPP Lynn, which is one of many schools included in the statewide sample.

Harlem Promise Academy and KIPP Lynn, then, produced results fairly similar to those of other charter schools nearby. Any allegation of cherry-picking in the studies of those two schools will need to be dropped.

Taken together, these studies have remarkably similar findings: urban charter schools are producing significant gains in reading or math, or both. Suburban charter schools perform less well. You could cite this fact, but frankly it is a minor concern in the battle to close the racial achievement gap in American education.

You could make a methodological point: lottery studies don’t tell us about students who never participated in lotteries. In other words, what about students who never signed up for charter schools, who don’t have charter schools in the area, or who signed up for a charter school that didn’t need to run a lottery? Some researchers use less-rigorous “observational” methods to answer these questions.

Indeed, many of the studies above include secondary observational analyses to test the validity of this very argument. They construct comparison groups of similar non-charter students who, for whatever reason, never entered the lotteries. Those secondary analyses broadly confirm the main random-assignment findings.
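To see why random-assignment estimates still carry more weight than even carefully built observational comparisons, here is a toy simulation. It is entirely fabricated and not drawn from any study above: an unobserved trait, labeled "motivation" here, drives both who applies to a charter school and later test scores. The lottery contrast recovers the true effect, while a naive attenders-versus-everyone-else comparison overstates it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_students = 5000

# Made-up selection story: an unobserved trait raises both the chance of
# applying to a charter school and later test scores.
motivation = rng.normal(0, 1, n_students)
applied = rng.random(n_students) < 1 / (1 + np.exp(-motivation))
attends = applied & (rng.random(n_students) < 0.5)   # fair lottery among applicants

true_effect = 0.2   # made-up charter effect, in test-score units
score = 0.4 * motivation + true_effect * attends + rng.normal(0, 1, n_students)

# Lottery contrast: winners vs. losers, restricted to applicants.
lottery_est = score[attends].mean() - score[applied & ~attends].mean()

# Naive observational contrast: charter students vs. everyone else,
# ignoring who chose to apply in the first place.
naive_est = score[attends].mean() - score[~attends].mean()

print(f"true effect: {true_effect:.2f}")
print(f"lottery-based estimate: {lottery_est:.2f}")
print(f"naive observational estimate: {naive_est:.2f}")
```

In this toy setup the lottery estimate lands near the true effect while the naive comparison runs noticeably higher; that selection problem is exactly what the secondary observational checks in these studies are built to probe.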

Altogether, the best research tells a consistent story: charter schools are working. In order to find much evidence to the contrary, you’ll need to dig into third- or fourth-tier research. And you’ll need to invent a justification to ignore the random assignment literature, though you probably shouldn’t bother. Relying solely on third-rate research simply says that you were never interested in evidence in the first place.

9 Responses to A Guide for the Perplexed — A Review of Rigorous Charter Research

  1. […] There’s so much to discuss on these issues, including the extent to which truly randomized experiments can actually shed light on how interventions might play out in other settings or at scale. But I’ll stick to a much narrower focus in this post, and that is, just how randomized is randomized? Most recently, this question came to mind after reading this post addressing “experimental” vs. “non-experimental” studies of charter schools by Matt DiCarlo at Shanker blog, and this post over at Jay P. Greene’s blog on RIGOROUS charter research (meaning experimental, or randomized). […]

  2. Student gains are inversely correlated with achievement level. The higher the level of achievement of a given student, the lower his/her gains. Looking at aggregate gains without taking prior achievement level into account may lead to unjustified conclusions. For example, if suburban charter schools are disproportionately attracting frustrated high-achieving students who are not well served by neighborhood schools, their “average” gains may in fact be high for students at that achievement level. Similarly, if urban charters are attracting a disproportionate percentage of very low-achieving students, the gains of their students may in fact be average. Did the studies you cite take this into account?

  3. An econ says:

    This reads like cherry-picking. Next it will be ‘urban charter schools run by organisation X in state Y’.

    Here’s a better review of the literature (within a paper about method), which concludes that because of the causal issues the jury has to remain out:

    Click to access Morgan_and_Winship_2012.pdf

  4. […] Blog, February 27, 2013. For a summary of even more studies about charter school effectiveness, see “A Guide for the Perplexed — A Review of Rigorous Charter Research”, Collin Hitt, Jay P. Greene’s Blog, December 17, […]

  5. […] I haven’t been able to find an apples-to-apples comparison.) There’s also a lot of research suggesting some promising (but not necessarily conclusive) results and innovations created by school […]

  6. […] By the end of 2012, at least nine lottery-based studies of charter schools had been released, and you can find a nice summary of that research in this article from the Jay Greene Blog. […]


  7. […] almost certainly not up to the task. Observational charter studies using limited controls simply do not produce results that are consistent with studies using more advanced […]
