NAEP Gains by Spending Trend: Some States Are the Harlem Globetrotters, Others the Washington Generals

February 24, 2016

(Guest Post by Matthew Ladner)

Notice the large number of states on the negative side of zero on the spending bar. In the immortal words of Lee Ving, it's already started.

Winning

Once again, let me note with insufferable state pride that Arizona is your ROI champion, doing things that no one would have thought possible.

Go majority-minority, cut funding, and improve scores? No problem: nothing but net!

So it is tough to pick a Washington Generals, but I've narrowed mine down to Alaska, New York, and Wyoming.



Harlem Kids Go To College: Another Positive Charter School Study

August 5, 2013

(Guest Post by Collin Hitt)

Harlem Promise Academy is a charter middle school, part of the Harlem Children’s Zone. Previous studies from Will Dobbie and Roland Fryer have found big test score gains. A new paper by the Harvard research pair finds that the school had large impacts on college attendance, even larger than the previous gains in test scores would have indicated. From their new paper:

Attending the Promise Academy increases the probability of enrolling in college by 24.2 (9.7) percentage points, an 84 percent increase. In Appendix Table 2, we show that lottery winners are also 21.3 (5.9) percentage points more likely to attend a four-year college and 7.2 (2.3) percentage points less likely to attend a two-year college.

The charter school not only increases the likelihood that its students will attend college, but it also increases the quality of the colleges that they attend. Harlem Promise Academy is widely regarded as an exceptional case because it is part of the larger HCZ neighborhood experiment, which includes "wrap-around" social services meant to address issues of poverty. So Dobbie and Fryer collected lottery records at three other charter schools across the country that don't feature HCZ-style community services, including Noble Network in Chicago, a personal favorite of mine. They found similar college enrollment gains.

They also tested whether the Promise Academy had an impact on lifestyle choices. Charter enrollment appeared to lower teen pregnancy rates by 71 percent and, for boys, drove the observed incarceration rate to almost zero.

They close with what I think is a crucial point for the academic community and the education reform movement to understand:

The education reform movement is based, in part, on two important assumptions: (1) high quality schools can increase test scores, and (2) the well-known relationship between test scores and adult outcomes is causal. We have good evidence that the first assumption holds (Angrist et al. 2010, Abdulkadiroglu et al. 2011, Dobbie and Fryer 2011a). This paper presents the first pieces of evidence that the second assumption may not only be true, but that the cross-sectional correlation between test scores and adult outcomes may understate the true impact of a high quality school, suggesting that high quality schools change more than cognitive ability. Importantly, the return on investment for high-performing charter schools could be much larger than that implied by the short-run test score increases.

As discussed on this blog, there is now a litany of gold-standard studies of charter schools that find test score gains. Perhaps these studies provide only a glimpse of the benefits to come. We don’t know yet, which is why Dobbie and Fryer do what every smart researcher does – they call for more research.

For now, we can say one thing: ANOTHER random-assignment, gold-standard study finds impressive gains for charter schools. What is that now, thirteen? It’s actually getting hard to keep track.

P.S. There's another intriguing finding. Alums of Harlem Promise Academy were given a survey that included Duckworth's "Grit Scale," which asked them to self-report their persistence, focus, and work ethic. The charter school alums scored far lower than the comparison group. This suggests that the self-reported Grit Scale may be a poor measure of actual grit: the students whose observed outcomes showed the most persistence rated themselves lowest on it.


My Apple is Bigger

August 29, 2017

(Guest post by Patrick J. Wolf)

New York City public charters have been much in the news of late (see here & here) for hitting it out of Yankee Stadium on student achievement.  When Judge-ing (sorry, couldn’t resist) The Big Apple’s charter school sector, add this little fact to the case:  charters out-slug their peers at a lower cost.

That is the conclusion of my latest study of charter school funding inequity, co-authored with Larry D. Maloney.  It is fun to study New York City, in part because of great potential for wordplay but also because the place is so darn big that you can disaggregate results by borough and still have district v. charter comparisons informed by large samples.  So, “start spreading the news…”

There are over 1,000,000 public school children in The Big Apple.  Seven percent of them attended charter schools during Fiscal Year 2014, the focus of our study.  Cash revenue to charter schools averaged $15,983 per pupil while payments to district-run schools averaged a much more generous $26,560 per pupil.  "You just wait a New York Minute, Mr. Henley," you might caution, "The New York City Department of Education actually provides in-kind services to students in charter schools that represent a funding resource not accounted for in your cash calculations."  You would be right.  After factoring in the cash value of such in-kind services, charter schools receive a mere $4,888 less in per-pupil funding than district schools (Figure 3).  New York City charter schools are outperforming the City's district schools at about 81 cents on the dollar.
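For readers who want to see how those two figures fit together, here is a quick back-of-the-envelope check. The per-pupil totals that include in-kind services appear in the report's Figure 3 rather than in the text above, so the totals below are only what the $4,888 gap and the "81 cents on the dollar" figure jointly imply; they are illustrative, not numbers quoted from the study.

```python
# Back-of-the-envelope check of the two figures in the post (a sketch; the
# exact per-pupil totals including in-kind services live in the report's
# Figure 3, so these are implied values only).
gap = 4_888    # per-pupil funding gap after valuing in-kind services
ratio = 0.81   # charter funding as an approximate share of district funding

district_total = gap / (1 - ratio)    # implied district per-pupil total
charter_total = district_total - gap  # implied charter per-pupil total

print(f"Implied district per-pupil funding: ${district_total:,.0f}")
print(f"Implied charter per-pupil funding:  ${charter_total:,.0f}")
# Roughly $25,700 vs. $20,800, consistent with "81 cents on the dollar."
```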

I’ll admit that my figure isn’t nearly as MoMA-worthy as Matt’s post-modernist depiction of the Arizona school districts that refuse to accept students through inter-district choice, but it makes a crucial point.  Even accounting for the value of everything contributed in support of charter schools in New York City, district schools still get more money per student.

Critics of our prior charter school funding studies (available here and here) have claimed that we are making Big-Apple-to-Big-Orange comparisons, since district schools provide more extensive educational services to students than charters.  Our accounting for in-kind district services to charters fully addresses that argument.  After factoring in the value of co-located facilities, transportation, meals, special education services, health services, textbooks, software, etc., all of which are provided to charters in New York City so that the scope of their services is equal to that of district schools, the charters still receive less funding.  We even examined school spending patterns, in addition to funding patterns, and the story is the same.

Surely the student populations in district schools are needier than those in charter schools, thereby justifying the funding gap, right?  Actually no.  The population of charter school students in New York City contains a higher percentage of free- and reduced-price lunch kids than the population of district school students (Figure 4).

The percentage of students with disabilities is only slightly higher in district schools versus charter schools, 18.2% compared to 15.9%.  That means that districts enroll 21,342 “extra” students with disabilities compared to charters.  For the special education enrollment gap favoring districts to explain the entire funding gap favoring districts, each “extra” student with a disability in the district sector would have to cost an additional $214,376 above the cost of educating a student in general education.  It is simply implausible that the slight gap in special education enrollments explains the substantial gap in funding between district and charter schools in New York City.
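A rough recomputation shows where a number like $214,376 comes from. It assumes district enrollment is about 93 percent of the "over 1,000,000" students mentioned earlier; with the exact enrollment counts used in the study the figure works out to the reported $214,376, so treat this as an approximation of the logic, not the study's own calculation.

```python
# Rough check of the "$214,376 per extra student with a disability" claim.
# Enrollment is approximated from the post ("over 1,000,000 students,
# 7 percent in charters"), so the result is close to, not equal to, the
# study's figure.
district_enrollment = 0.93 * 1_000_000  # approximate district enrollment
per_pupil_gap = 4_888                   # post-in-kind funding gap per pupil
extra_swd = 21_342                      # "extra" students with disabilities

total_extra_funding = per_pupil_gap * district_enrollment
cost_per_extra_swd = total_extra_funding / extra_swd
print(f"${cost_per_extra_swd:,.0f} per extra student with a disability")
# About $213,000 with these rounded inputs, close to the reported $214,376.
```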

Like rookie sensation Aaron Judge, this report has lots of hits besides just the home runs described above, so check it out.  In sum, New York City has made a major commitment to provide material support to students in its public charter schools.  Still, inexplicable funding inequities persist depending simply on whether a child is in a charter or a district school.  Larry and I think this case study provides yet another Reason to support weighted student funding with full portability (see what I did there?).  Switching to such a simple and equitable method for funding all public school students definitely would put us in a "New York State of Mind."


If You Mostly Care About Test Scores, Private School Choice Is Not For You

April 28, 2017

If you mostly care about test scores, private school choice is not for you.  The vast majority of randomized control trials (RCTs) of private school choice show significant, positive test score effects for at least some subgroups of students, but some of those gains have been modest and other effects have been null.  And now we have two RCTs, in Louisiana and DC, showing significant test score declines for some subgroups and in some subjects.  The Louisiana decline is large and across-the-board, but the significant, negative effect in the new DC study appears to be "driven entirely by students in elementary grades not previously in a needs-improvement school."

People will quibble over why these new DC results showed at least a partial decline.  They will note that the prior RCT of DC vouchers showed significant test score gains after three years (although the p value rose to .06 in year four even as the positive estimate remained).  They will note that vouchers in DC are worth almost 1/3 as much as the per pupil funding received by DC’s traditional public schools and almost half as much as DC’s charter schools.  Imagine how they might do if they received comparable resources (and yes resources can matter if there are proper incentives to use resources productively).  They will note that almost half of the control group attended charter schools, so to a large degree this study is a comparison of how students do in vouchers relative to charters.

But these quibbles largely miss the point — the benefits of private school choice are clearly evident in long-term outcomes, not near-term test scores.  In the same DC program that just produced disappointing test score effects, using a voucher raised high school graduation rates by 21 percentage points.  Similarly, private school choice programs in Milwaukee and New York City were less impressive in their test score effects than in later educational attainment, where private school students in both cities were significantly more likely to enroll in college.

But if what you really care about is raising test scores, you’d be pushing no-excuses charter schools.  Rigorous evaluations, like the one in Boston, show huge test score gains for students randomly assigned to no-excuses charter schools.  You don’t even have to have school choice to produce these gains.  The same team of researchers showed that schools converted into no-excuses charters as part of a turnaround effort produced similarly big gains for students who were already there and did nothing to choose it.  The lesson that a fair number of foundations and policymakers draw is that we don’t need this messy and controversial choice stuff.  They believe that they have discovered the correct school model — it’s a no excuses charter — and all we need to do is get as many disadvantaged kids into these kinds of schools as we can, with or without them choosing it.

Unfortunately, no excuses charters don't seem to produce long-term benefits that are commensurate with their huge test score gains.  The Boston no excuses charter study, for example, shows no increase in high school graduation rates and no increase in post-secondary enrollment despite large increases in test scores.  It's true that students from those schools who did enroll in post-secondary schooling were more likely to go to a four-year than a two-year college, but it is unclear whether that is a desirable outcome: it may be a mismatch for their needs, and this more nuanced effect is hardly commensurate with the giant test score gains.

This same disconnect between test scores and later life outcomes exists in several rigorously conducted studies of charter schools, including those of  the Harlem Promise Academy, KIPP, High Tech High, SEED boarding charter schools, and no excuses charters in Texas.  While of course we would generally like to see both test score gains and improved later life outcomes, the thing we really care about is the later life outcomes.  And the near-term test scores appear not to be very good proxies for later life outcomes.

So, what should we think about these new test results from DC vouchers, showing some declines for students after one year in the program?  We already know from rigorous research that the program improves later life outcomes, so I don’t think we should be particularly troubled by these test results.  It may be that control group students are in schools that will fare as well or better on test score measures.  But we should remember that 42% of that control group are in the types of charter schools that other research has shown can produce giant test score gains without yielding much in later life outcomes.  And we know that treatment group students are in a program that has previously demonstrated large advantages in later life outcomes.

I understand that many reporters, foundations, and policymakers act like they mostly care about test scores and these new results from DC have them all aflutter.  But if people could only step back for a second and consider what we are really trying to accomplish in education, the evidence is clearly supportive of private school choice in DC and elsewhere.

(edited to correct error noted in comments)


Evidence for the Disconnect Between Changing Test Scores and Changing Later Life Outcomes

November 5, 2016

Over the last few years I have developed a deeper skepticism about relying on test scores for accountability purposes.  I think tests have very limited potential in guiding distant policymakers, regulators, portfolio managers, foundation officials, and other policy elites in identifying with confidence which schools are good or bad, ought to be opened, expanded, or closed, and which programs are working or failing.  The problem, as I've pointed out in several pieces now, is that in using tests for these purposes we are assuming that if we can change test scores, we will change later outcomes in life.  We don't really care about test scores per se; we care about them because we think they are near-term proxies for later life outcomes that we really do care about — like graduating from high school, going to college, getting a job, earning a good living, staying out of jail, etc…

But what if changing test scores does not regularly correspond with changing life outcomes?  What if schools can do things to change scores without actually changing lives?  What evidence do we actually have to support the assumption that changing test scores is a reliable indicator of changing later life outcomes?

This concern is similar to issues that have arisen in other fields about the reliability of near-term indicators as proxies for later life outcomes.  For example, as one of my colleagues noted to me, there are medicines that are able to lower cholesterol levels but do not reduce — or even may increase — mortality from heart disease.  It’s important that we think carefully about whether we are making the same type of mistake in education.

If increasing test scores is a good indicator of improving later life outcomes, we should see roughly the same direction and magnitude in changes of scores and later outcomes in most rigorously identified studies.  We do not.  I’m not saying we never see a connection between changing test scores and changing later life outcomes (e.g. Chetty, et al); I’m just saying that we do not regularly see that relationship.  For an indicator to be reliable, it should yield accurate predictions nearly all, or at least most, of the time.

To illustrate the unreliability of test score changes, I'm going to focus on rigorously identified research on school choice programs where we have later life outcomes.  We could find plenty of examples of this disconnect in other policy interventions, such as pre-school programs, but I am focusing on school choice because I know this literature best.  The fact that we can find a disconnect between test score changes and later life outcomes in any literature, let alone in several, should undermine our confidence in test scores as a reliable indicator.

I should also emphasize that by looking at rigorous research I am rigging things in favor of test scores.  If we explored the most common use of test scores — examining the level of proficiency — there are no credible researchers who believe that is a reliable indicator of school or program quality.  Even measures of growth in test scores or VAM are not rigorously identified indicators of school or program quality as they do not reveal what the growth would have been in the absence of that school or program.  So, I think almost every credible researcher would agree that the vast majority of ways in which test scores are used by policymakers, regulators, portfolio managers, foundation officials, and other policy elites cannot be reliable indicators of the ability of schools or programs to improve later life outcomes.

With the evidence below I am exploring the largely imaginary scenario in which test score changes can be attributed to schools or programs with confidence.  Even then, the direction and magnitude of changing test scores does not regularly correspond with changing later life outcomes.  I've identified 10 rigorously designed studies of charter and private school choice programs with later life outcomes.  I've listed them below with a brief description of their findings and hyperlinks so you can read the results for yourself.

Notice any patterns? Other than the general disconnect between test scores and later life outcomes (in both directions), I notice that the No Excuses charter model that is currently the darling of the ed reform movement, and that New York Times columnists have declared the only type of "Schools that Work," tends not to fare nearly as well in later outcomes as it does on test scores.  Meanwhile the unfashionable private choice schools and Mom and Pop charters seem to do much better on later life outcomes than at changing test scores.  I don't highlight this pattern as proof that we should shy away from No Excuses charters.  I only mention it to suggest ways in which over-relying on test scores and declaring with confidence that we know what works and what doesn't can lead to big policy mistakes.

Here are the 10 studies:

  1. Boston charters (Angrist, et al, 2014) – Huge test score gains, no increase in HS grad rate or postsecondary attendance; a shift from two-year to four-year colleges
  2. Harlem Promise Academy (Dobbie and Fryer, 2014) – Same as Boston charters
  3. KIPP (Tuttle, et al, 2015) – Large test score gains, no or small effect on HS grad rate, depending on analysis used
  4. High Tech High (Beauregard, 2015) – Widely praised for improving test scores, no increase in college enrollment
  5. SEED Boarding Charter (Unterman, et al, 2016) – same as Boston charters
  6. TX No Excuses charters (Dobbie and Fryer, 2016) – Increase test scores and college enrollment, but no effect on earnings
  7. Florida charters (Booker, et al, 2014) – No test score gains but large increase in HS grad rate, college attendance, and earnings
  8. DC vouchers (Wolf, et al, 2013) – Little or no test score gain but large increase in HS grad rate
  9. Milwaukee vouchers (Cowen, et al, 2013) – same as DC
  10. New York vouchers (Chingos and Peterson, 2013) – modest test score gain, larger college enrollment improvement

The Over-Confidence of Portfolio Management

October 25, 2015

I've been having a debate over the last few weeks with Neerav Kingsland and others about the dangers of a high-regulation approach to school choice.  (You can see my posts on this so far here, here, here, here, here, and here).  I know this seems like a lot of posts on one topic, but as one grad student observed, it took more than 100 pieces about the obvious error in government-reported high school graduation rates before people fully acknowledged the error and took significant steps to correct it.  Let's hope convincing ed reform foundations and advocates to scale way back on their infatuation with heavy regulation does not require the same effort as moving more obstinate and dim-witted government officials.

One heavy-handed regulatory approach that is particularly worrisome is the strategy of portfolio management.  Under this strategy, a portfolio manager, harbor master, or some other regulator actively manages the set of school options that are available to families by closing those believed to be sub-par and expanding or replicating those that are thought to be more effective.  This approach is being implemented in New Orleans and the city appears to be experiencing significant gains in achievement tests, so Neerav and others are puzzled as to why I don’t support it.

I’ve tried to express my reasons for opposing portfolio management in several ways.  I tried mocking it: “If education reform could be accomplished simply by identifying and closing bad schools while expanding good ones, everything could be fixed already without any need for school choice.  We would just issue regulations to forbid bad schools and to mandate good ones.  See?  Problem solved.”  That clearly didn’t work because folks like NOLA advocate Josh McCarty replied: “moving the left end of the performance curve to the right through regs has gotten more kids in higher perf schools.”

So, let me try again.  A portfolio manager can only move “the left end of the performance curve” if the regulator can reliably identify which schools are likely to harm students’ long-term outcomes and which ones are likely to improve them.  If you don’t really know whether schools are on the left or right end of some curve of quality, closing schools just limits options without improving long-term outcomes.  But backers of portfolio management are not lacking in confidence.  They have achievement test results, so they think they know which are the good and bad schools.

Unfortunately, they are suffering from over-confidence.  Achievement tests are useful but they are not nearly strong enough predictors of later life outcomes to empower a portfolio manager to close a significant number of schools because he or she “knows” that those schools are “bad.”  In fact, the research I reviewed on rigorous evaluations of long-term outcomes from choice programs suggests that using test scores to decide whether a bunch of schools should be closed or expanded would lead to significant Type 1 and Type 2 errors.  That is, in their effort to close bad schools, portfolio managers may very well close schools with lower test performance that actually improve high school graduation, college-attendance, and lifetime earnings.  And they may expand or replicate schools that have high test performance but do little to improve these later life outcomes.

If there had been an active portfolio manager of Florida charter schools, he or she would have closed a bunch of charter schools that were doing a great job of improving students' later life outcomes.  As Booker, et al's research shows, relying solely on test scores to distinguish good from bad schools would lead to serious errors by an active portfolio manager:

The substantial positive impacts of charter high schools on attainment and earnings are especially striking, given that charter schools in the same jurisdictions have not been shown to have large positive impacts on students’ test scores (Sass, 2006; Zimmer et al., 2012)…. Positive impacts on long-term attainment outcomes and earnings are, of course, more consequential than outcomes on test scores in school. It is possible that charter schools’ full long-term impacts on their students have been underestimated by studies that examine only test scores. More broadly, the findings suggest that the research examining the efficacy of educational programs should examine a broader array of outcomes than just student achievement. (pp. 27-8)

Conversely, foundations and portfolio managers are pouring more resources into certain types of schools with strong test performance that are failing to show much benefit for students' long-term outcomes.  As Angrist, et al, Dobbie and Fryer, and Tuttle, et al show, a bunch of charter schools with large achievement test gains, including Boston "no-excuses" schools, Harlem Promise Academy, and KIPP, have produced little or nothing in terms of high school graduation and college-attendance rates.

Portfolio management guided solely by test scores would seriously harm students by unwittingly closing a bunch of successful schools, like those Booker, et al studied in Florida, while expanding and pouring more resources into ones with less impressive long-term results, like those studied by Angrist, et al, Dobbie and Fryer, and Tuttle, et al.

Matt Barnum challenged me on Twitter to describe what evidence would persuade me to support portfolio management.  At a minimum I would want to see that portfolio managers have reliable tools for predicting long-term outcomes for students, so that they would know which choice schools should be closed and which should be expanded or replicated.  The evidence I've reviewed here, and in more detail in this prior post, suggests that they do not have a reliable tool, and so the entire theory of portfolio management falls apart. I'm not making the strawman argument that test scores are useless or that no school should ever be closed by regulators.  I'm just arguing that portfolio management requires confidence in the predictive power of achievement tests that is not even close to being warranted by the evidence.

But what about the impressive achievement gains that Doug Harris and his colleagues find are being produced in New Orleans?  Let's keep in mind that many reforms have been implemented in New Orleans at the same time.  Even if we were confident that the test score gains in New Orleans are not being driven by changes in the student population following Katrina (and Doug and his colleagues are doing their best with constrained data and research design to show that), and even if these test score gains translate into higher high school graduation and college attendance rates (which Doug and his colleagues have not yet been able to examine), we still would have no idea whether portfolio management and other heavy regulation in NOLA helped, hurt, or made no difference in producing these results.  In fact, the evidence from the 7 rigorous studies on school choice programs with long-term outcomes suggests that portfolio management and other heavy regulations are neither necessary nor desirable for producing long-term gains for students.

Neerav, Matt Barnum, and Josh McCarty have suggested that I am making overly broad claims not consistent with the evidence.  I think the opposite is true.  I've carefully cited and quoted the relevant research and drawn the obvious conclusion — active portfolio management based on achievement tests is likely to make harmful errors and unnecessarily restrict options.  In fact, it seems to me that the burden is on supporters of portfolio management to demonstrate that they are able to reliably distinguish between schools with good and bad long-term outcomes.  If you are going to go around telling families that they can't choose a certain school because it is bad for them, you had darn better be confident that it really is bad.


Does regulation improve the political prospects for choice?

October 6, 2015

In this series of posts against the high-regulation approach to school choice, I have demonstrated that performance accountability is not typical of government programs and that heavy regulation drives away quality supply, hurting rather than protecting the students these regulations are meant to help.  If high regulation is not the norm and does not help children, supporters of this approach might still favor it if they think it has certain political advantages.

For those interested in private school choice, two political advantages are claimed: 1) High-regulation addresses some  objections, winning votes among skeptics to improve the political prospects of passing and sustaining those programs; 2) High-regulation protects private school choice programs from the political damage caused by scandals and embarrassing outcomes.

Neither of these arguments is supported by experience.  Conceding regulatory measures to skeptics and opponents has hardly changed a single vote.  Backers of the Milwaukee voucher program thought they would get relief from legislative opposition if they accepted more burdensome regulation.  No votes have changed as a result and the program remains as precarious as ever.  Nor has regulation protected programs from scandal.  Judging from the steady stream of news reports about teachers in traditional public schools sleeping with students, it appears that no amount of background checks or government oversight can eliminate rare but regular instances of misconduct.  I’m not arguing against a reasonable and light regulatory framework, I’m just suggesting that higher levels of regulation provide little or no additional political protection.  Determined opponents can always find scandals to exploit and cannot be appeased with anything short of preserving the traditional public system.

I’m actually more worried that key backers of school choice are starting to abandon private school choice and focus all of their energies on charters.  High-regulation is the norm in charter programs.  You don’t have to worry about charter schools refusing to participate in a heavily regulated program since they have no alternatives.  And charters seem to be flourishing.  Charter programs exist in more states with more schools serving more students than do private choice programs.  Many important backers of school choice seem to believe that charters are also getting better results.  As Neerav Kingsland of the Arnold Foundation tweeted yesterday: “why is it the over-regulated charter sector that has had the most breakthroughs with low income students?”

Unfortunately, Neerav is mistaken.  Charters are not producing better results than private school choice.  High-regulation comes with a cost to quality.  Let’s consider rigorous evidence on how charter and private school choice affect educational attainment.  For reasons I will discuss at greater length in the next post, I think attainment is a more meaningful indicator of long-term benefits than achievement test results.  I’m aware of 4 rigorous studies of the effect of charter schools on attainment.  The general pattern among them is that programs producing large gains in achievement test outcomes are producing little or no increase in educational attainment.

Angrist, et al examined Boston charter schools and found significant benefits for charter students on MCAS, SAT, and AP performance.  On attainment they write:

Does charter attendance also increase high school graduation rates? Perhaps surprisingly given the gains in test score graduation requirements reported in Table 4, the estimates in Table 7 suggest not. In fact, charter attendance reduces the likelihood a student graduates on time by 12.5 percentage points, a statistically significant effect. This negative estimate falls to zero when the outcome is graduation within five years of 9th-grade entry. (p. 15)

Nor are results much better for attending college: "While the estimated effect of charter attendance on college attendance is positive, it is not large enough to generate a statistically significant finding." (p. 16)  Angrist, et al do find a significant shift of students from attending two-year to four-year colleges, but we don't know whether that shift represents a positive development until we see whether they complete their degrees.  Shifting students to four-year colleges for which they are ill-suited and from which they drop out does them no favor.

Dobbie and Fryer examine the results of a single charter school in Harlem, the Promise Academy.  Like Angrist, et al, they find large achievement test gains but little benefit for attainment.  Dobbie and Fryer find a higher high school graduation rate within 4 years of the start of 9th grade, but it disappears by 6 years. (p. 18)  College attendance benefits are also fleeting: "Similar to the results for high school graduation, however, control students eventually catch up and make the treatment effects on college enrollment insignificant."  Dobbie and Fryer similarly find a shift toward four-year colleges, but again this result is ambiguous. Four-year colleges should help students obtain more schooling, but they report "The number of total semesters enrolled in college between lottery winners and lottery losers is small and statistically insignificant." (p. 19)

Tuttle, et al’s recent evaluation of KIPP charter schools also finds large achievement test gains for charter students but little or no attainment benefit.  Tuttle and her team at Mathematica make two types of comparisons to assess the progress of KIPP high school students.  In one they find: “For new entrants to KIPP high schools, we also examine the probability of graduating within four years of entry. We find that this group of KIPP high schools did not significantly affect four-year graduation rates among new entrants.” (p. 36)  When they examine students who continued from KIPP middle schools into KIPP high schools, they find a small but statistically significant drop in the rate at which students drop out — about 2 percentage points. (p. 39)

Booker, et al examine charter schools in Chicago and Florida and find significant benefits in educational attainment as well as higher earnings later in the workforce — at least for Florida charter students.  They write: "In Florida, the charter high school students show a consistent advantage in absolute terms of 8 to 11 percentage points from high school graduation through a second year of college enrollment." (p. 22)  On later earnings they find: "Charter high school attendance is associated with an increase in maximum annual earnings for students between ages 23 and 25 of $2,347—or about 12.7 percent higher earnings than for comparable students who attended a charter middle school but matriculated to a traditional high school."

Before the high-regulation folks get too excited about the Booker, et al results as vindication of their approach, they should note that these charter schools did not produce impressive achievement test results.  Booker, et al write:

The substantial positive impacts of charter high schools on attainment and earnings are especially striking, given that charter schools in the same jurisdictions have not been shown to have large positive impacts on students’ test scores (Sass, 2006; Zimmer et al., 2012)…. Positive impacts on long-term attainment outcomes and earnings are, of course, more consequential than outcomes on test scores in school. It is possible that charter schools’ full long-term impacts on their students have been underestimated by studies that examine only test scores. More broadly, the findings suggest that the research examining the efficacy of educational programs should examine a broader array of outcomes than just student achievement. (pp. 27-8)

In the high-regulation approach, these charter schools might well be identified as the "bad" schools for failing to improve test scores, and yet they are the ones that produce long-term success for their students.  A portfolio manager or harbor master might kick these schools out of the program or restrict their growth for failing to produce achievement gains.

Let's briefly review the results from the three rigorous examinations of the effect of private school choice on educational attainment.  Unlike the charter research, they all show significant benefits for attainment.  Wolf, et al examined the federally funded DC voucher program.  They found little benefit for voucher students on achievement tests, but those students enjoyed a 21 percentage point increase in the rate at which they graduated high school.  Cowen, et al examined the publicly funded voucher program in Milwaukee and found a 5 to 7 percentage point increase in the rate at which voucher students attended college.  And Peterson and Chingos examined a privately funded voucher program in New York City and found that African-American voucher recipients experienced a 9 percentage point increase in attending college.  There was no significant benefit for Hispanic students.

If the high-regulation folks wanted to ditch private school choice to go all-in on charters, they would be making a horrible mistake.  The evidence suggests private school choice is producing stronger long-term results.  In addition, among charter schools, the kinds of schools that high-regulation folks like the most are the ones producing weaker long-term outcomes.  Focusing only on charters making the biggest achievement score gains would miss those charters with more modest achievement results but truly impressive attainment outcomes.  Charter schools offer the illusion of getting the benefits of choice without too much of the messiness of markets.  As it turns out, central planning among charter schools is no easier than central planning among traditional public schools.

In addition to losing quality if key choice backers were to support charters to the exclusion of private school choice, there are obvious political advantages to backing both types of choice.  Private school choice has helped make the world safe for charters by taking more of the political heat.  We wouldn’t have the same expanding charter sector were it not for the credible threat of even more private school choice.  And the choice movement would be wise to spread its bets across a variety of approaches to expanding school choice.  No one knows the ideal political strategy or regulatory scheme, so having a variety of different approaches allows us to learn about how these different methods for expanding choice are doing.  We need choice among choice.


Florida Charter Schools: Show me the money!

January 16, 2014

(Guest Post by Collin Hitt)

There’s mounting evidence that charter schools decrease dropout rates, increase college attendance rates and improve the quality of colleges that college-bound students attend. But so what if these kids go to college? Do they actually graduate? And if charter schools really have lasting impacts, shouldn’t charter schools actually have an impact on how much money students earn? A new working paper examines these questions and the answer, in a word, is yes.

Kevin Booker, Tim Sass, Brian Gill and Ron Zimmer have now extended their previous research on charter high schools. (Jay wrote about their research and their clever research design a few years back.) They look at students in Chicago and Florida who attend charter schools in eighth grade, some of whom go on to attend charter high schools and some of whom go on to attend district-run high schools.

They find that students who attend charter high schools are more likely to graduate high school, attend college and persist in college. Such findings are extremely important. But the paper is truly novel in that it also examines the labor market outcomes for students. From the study:

In Florida, we also examine data on the subsequent earnings of students in our analytic sample, at a point after they could have earned college degrees. Charter high school attendance is associated with an increase in maximum annual earnings for students between ages 23 and 25 of $2,347—or about 12.7 percent higher earnings than for comparable students who attended a charter middle school but matriculated to a traditional high school.

Two years ago, the front page of the New York Times carried a story reporting that teachers can have lasting impacts on students' earnings in adulthood, citing groundbreaking work by Jonah Rockoff, Raj Chetty and John Friedman. For a single school year, a one standard deviation increase in teacher quality – as measured by a teacher's value-added impact on test scores – increased a student's annual earnings at age 28 by $182. Compare that to the impact of attending a charter high school in Florida: a $2,347 increase in annual earnings by age 25. Using Rockoff, Chetty and Friedman's estimate, that's equivalent to a student experiencing a one standard deviation increase in teacher quality every year from kindergarten through twelfth grade.
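That equivalence comes straight from dividing the two quoted figures. A quick sketch of the arithmetic, using only the numbers cited above and ignoring that the two earnings gains are measured at different ages:

```python
# Comparing the Florida charter high school earnings effect to the
# Rockoff-Chetty-Friedman teacher value-added effect (figures as quoted above).
charter_hs_earnings_gain = 2_347  # annual earnings gain at ages 23-25 (Florida)
teacher_sd_earnings_gain = 182    # annual gain at age 28 from one school year
                                  # with a teacher one standard deviation better

years_equivalent = charter_hs_earnings_gain / teacher_sd_earnings_gain
print(f"{years_equivalent:.1f} teacher-years")  # about 12.9, roughly K through 12
```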

So these findings stand out. Moreover, Booker and colleagues close the paper with a key observation. In Florida, as in other school choice research, a paradox became apparent. The improvements in long-term outcomes were in no way predicted by earlier research on test score impacts.

The substantial positive impacts of charter high schools on attainment and earnings are especially striking, given that charter schools in the same jurisdictions have not been shown to have large positive impacts on students’ test scores (Sass, 2006; Zimmer et al., 2012)…

 Positive impacts on long-term attainment outcomes and earnings are, of course, more consequential than outcomes on test scores in school. It is possible that charter schools’ full long-term impacts on their students have been underestimated by studies that examine only test scores. More broadly, the findings suggest that the research examining the efficacy of educational programs should examine a broader array of outcomes than just student achievement.

This, I can promise, will be a recurrent theme in school choice research in the coming years. Recall this passage from Will Dobbie and Roland Fryer's research on the Harlem Promise Academy, where they found large gains in college attendance:

 “…the cross-sectional correlation between test scores and adult outcomes may understate the true impact of a high quality school, suggesting that high quality schools change more than cognitive ability. Importantly, the return on investment for high-performing charter schools could be much larger than that implied by the short-run test score increases.”

Test scores are supposed to be an indicator of how kids will fare later in life. Now we have another piece of school choice research finding that test scores missed the true positive impact that schools (and choice) had on kids. Something to think about if you’re going to argue that schools of choice should be held more accountable to state tests.


Boston Charter Schools Can’t Lose: Another Random Assignment No-Doubter

October 28, 2013

(Guest Post by Collin Hitt)

Another gold standard, random assignment study has found that Boston charter schools are producing large test score gains. Yesterday the Boston Foundation released the newest installment of its studies of Beantown charters. It updates results from previous studies and finds continued, large test score gains for charter middle and high schools. From the study:

Since 2009, the middle school charter yearly gains for math are 0.23σ compared to 0.26σ overall and the gains for ELA are 0.15σ compared to 0.14σ overall. The comparison for charter high schools is similar. In recent years, the high school charter gains for math are 0.38σ compared to 0.35σ overall and the gains for ELA are 0.33σ compared to 0.27σ overall.

You will notice that these are yearly gains. The authors show that results are almost always stronger for poor and minority students, as well as English language learners. This kind of progress, for students of color, could easily eliminate the racial achievement gap over the course of middle and high school.
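Here is a rough illustration of why, assuming the yearly effects quoted above simply accumulate over three years of middle school and four years of high school. The benchmark used for the racial achievement gap (roughly 0.8 to 1.0 standard deviations) is a commonly cited approximation, not a figure from the Boston Foundation report.

```python
# Cumulative charter effects implied by the yearly gains quoted above,
# under the simple (and generous) assumption that they add up year over year.
middle_school_years, high_school_years = 3, 4  # grades 6-8 and 9-12

math_gain = middle_school_years * 0.23 + high_school_years * 0.38
ela_gain = middle_school_years * 0.15 + high_school_years * 0.33

print(f"Cumulative math gain: {math_gain:.2f} sd")  # about 2.2 sd
print(f"Cumulative ELA gain:  {ela_gain:.2f} sd")   # about 1.8 sd
# Both exceed a black-white achievement gap of roughly 0.8-1.0 sd
# (a commonly cited approximation, not a number from this report).
```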

The report also looks at the question of whether charter schools effectively push out low-performing students. The authors find that charter middle schools are significantly less likely than other public schools to see their students transfer elsewhere. In high school, charter students were for a time more likely to transfer out, but that trend has completely vanished since a state policy change regarding charter enrollment rules in 2010, and since that change the test score results of charter high schools have improved.

So, we have another random assignment study finding gains for charter school students. We have another study dispelling the myth that charter schools push out their students.

Soon we should expect a retraction from all the people who’ve made evidence-free claims to the contrary. Right?


Brilliant New Measure of Non-Cognitive Skills

August 12, 2013


My student, Collin Hitt, and colleague, Julie Trivitt, have an amazing paper on how we can efficiently measure an important non-cognitive skill that is strongly predictive of later life outcomes.  A growing number of researchers have come to realize that lifetime success is partially a function of traditional academic achievement (cognitive skills) and partially a function of what are called non-cognitive skills, such as hard work, self-discipline, determination, etc…  Schools may play a central role in conveying both types of skills, but for the most part we have only been collecting information on cognitive skills in the form of standardized test results.  The main difficulty in expanding the types of measures we collect to include non-cognitive skills is that we have not developed efficient mechanisms for doing so.

Hitt and Trivitt have taken an enormous step toward solving this problem.  They have discovered that student non-response on surveys (not answering questions or saying they don't know) is an excellent measure of non-cognitive skills that are strongly predictive of later life outcomes.  In particular, they examined survey response rates from the National Longitudinal Study of Youth (NLSY) given to students ages 13 to 17 in 1997.  The number of items that students answered was predictive of the highest level of education students attained by 2010, controlling for a host of factors including measures of their cognitive ability.  If students care enough to answer questions on a survey, they are more likely to care enough to pursue their education further.

They then examined another data set to see if they found the same relationship.  They did.  The number of items that students in Milwaukee answered in a survey when they were in 9th grade was predictive of whether they graduated high school and went to college later, controlling for their academic achievement and other factors.
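For readers who want a concrete picture of the analytic idea, here is a minimal sketch, using simulated data rather than the NLSY or Milwaukee files, of the kind of model Hitt and Trivitt estimate: later attainment regressed on the count of answered survey items, with a control for cognitive ability. The variable names and data below are illustrative assumptions, not the authors' code or data.

```python
# A minimal sketch (not the authors' code) of the Hitt-Trivitt idea:
# does the number of survey items a student answers predict later
# attainment after controlling for measured cognitive ability?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
cognitive = rng.normal(size=n)   # e.g., a test-score composite
effort = rng.normal(size=n)      # unobserved conscientiousness / "caring enough"
items_answered = np.clip(
    np.round(40 + 5 * effort + rng.normal(scale=3, size=n)), 0, 50
)                                # count of survey items the student answered
logit_p = -0.5 + 0.8 * cognitive + 0.6 * effort
attained = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # e.g., enrolled in college

df = pd.DataFrame({"attained": attained,
                   "items_answered": items_answered,
                   "cognitive": cognitive})

# If item response proxies the non-cognitive trait, its coefficient stays
# positive and significant even with the cognitive-ability control included.
model = smf.logit("attained ~ items_answered + cognitive", data=df).fit()
print(model.summary())
```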

If this holds up when examined with multiple data sets, it will be an amazing breakthrough for researchers.  We will finally have a fairly easy-to-obtain measure of an important non-cognitive skill that is predictive of later life success.

When studying voucher or other school choice programs, for example, we have observed modest test score benefits for participants, but fairly large attainment benefits.  This suggests that school choice has larger effects on non-cognitive skills, but up until now we haven’t been able to observe these non-cognitive benefits without waiting nearly a decade to see if students graduate high school and go on to college.  With the Hitt and Trivitt measure, we will have an early warning indicator of whether students are acquiring non-cognitive skills and are more likely to have higher attainment later.

I am not suggesting that the Hitt and Trivitt measure can be used in an accountability system, since it is certain not to work once high stakes are attached.  But for research purposes it could be incredibly useful.

Developing an accurate and efficient measure of non-cognitive skills is especially important because one commonly considered measure, the self-reported "grit scale" developed by Angela Duckworth, may not be holding up very well.  In the recent Dobbie and Fryer evaluation of the Harlem Promise Academy, it actually appears that the Duckworth scale was a contrary indicator of later life success.  That is, students who rated themselves higher on the grit scale were less likely to succeed.  We have also tried the Duckworth scale in an experiment and found that it was uncorrelated with other, behavioral measures of non-cognitive skills, such as time devoted to a challenging task and delayed gratification.  But the self-reported grit scale was related to a student self-assessment of honesty, suggesting that the Duckworth scale may really measure how highly students will rate themselves rather than actual grit or other non-cognitive skills.

Of course, the Hitt and Trivitt measure requires a lot more testing and field research, but it is one of the more exciting recent developments in education research.