
I’ve been having a debate over the last few weeks with Neerav Kingsland and others about the dangers of a high-regulation approach to school choice. (You can see my posts on this so far here, here, here, here, here, and here.) I know this seems like a lot of posts on one topic, but as one grad student observed, it took more than 100 pieces about the obvious error in government-reported high school graduation rates before people fully acknowledged the error and took significant steps to correct it. Let’s hope convincing ed reform foundations and advocates to scale way back on their infatuation with heavy regulation does not require the same effort as moving more obstinate and dim-witted government officials.
One heavy-handed regulatory approach that is particularly worrisome is the strategy of portfolio management. Under this strategy, a portfolio manager, harbor master, or some other regulator actively manages the set of school options that are available to families by closing those believed to be sub-par and expanding or replicating those that are thought to be more effective. This approach is being implemented in New Orleans, and the city appears to be experiencing significant gains on achievement tests, so Neerav and others are puzzled as to why I don’t support it.
I’ve tried to express my reasons for opposing portfolio management in several ways. I tried mocking it: “If education reform could be accomplished simply by identifying and closing bad schools while expanding good ones, everything could be fixed already without any need for school choice. We would just issue regulations to forbid bad schools and to mandate good ones. See? Problem solved.” That clearly didn’t work because folks like NOLA advocate Josh McCarty replied: “moving the left end of the performance curve to the right through regs has gotten more kids in higher perf schools.”
So, let me try again. A portfolio manager can only move “the left end of the performance curve” if the regulator can reliably identify which schools are likely to harm students’ long-term outcomes and which ones are likely to improve them. If you don’t really know whether schools are on the left or right end of some curve of quality, closing schools just limits options without improving long-term outcomes. But backers of portfolio management are not lacking in confidence. They have achievement test results, so they think they know which are the good and bad schools.
Unfortunately, they are suffering from over-confidence. Achievement tests are useful, but they are not nearly strong enough predictors of later life outcomes to empower a portfolio manager to close a significant number of schools because he or she “knows” that those schools are “bad.” In fact, the research I reviewed on rigorous evaluations of long-term outcomes from choice programs suggests that using test scores to decide whether a bunch of schools should be closed or expanded would lead to significant Type 1 and Type 2 errors. That is, in their effort to close bad schools, portfolio managers may very well close schools with lower test performance that actually improve high school graduation, college attendance, and lifetime earnings. And they may expand or replicate schools that have high test performance but do little to improve these later life outcomes.
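To make the point concrete, here is a small back-of-the-envelope simulation (mine, not drawn from any of the studies cited in this post). It assumes, purely for illustration, that test-score results correlate with true long-term impact at 0.3 and that a portfolio manager closes the bottom 20 percent of schools by test results; both numbers are made up for the sketch, not estimates from the research.

```python
# Illustrative sketch only: how a noisy test-score signal misclassifies schools
# when a regulator closes the bottom 20% by test results. The 0.3 correlation
# between the test signal and true long-term impact is an assumed value.
import numpy as np

rng = np.random.default_rng(0)
n_schools = 1000
correlation = 0.3  # assumed link between test-score signal and long-term impact

# True long-term impact of each school (standardized) and the correlated,
# noisy test-score signal that the portfolio manager actually observes.
true_impact = rng.standard_normal(n_schools)
noise = rng.standard_normal(n_schools)
test_signal = correlation * true_impact + np.sqrt(1 - correlation**2) * noise

# The manager closes the bottom 20% of schools ranked by the test signal.
closed = test_signal <= np.quantile(test_signal, 0.20)

# "Truly harmful" schools: the bottom 20% by actual long-term impact.
truly_bad = true_impact <= np.quantile(true_impact, 0.20)

# One kind of error: closing schools that are not truly harmful.
wrongly_closed = np.mean(closed & ~truly_bad) / np.mean(closed)
# The other kind: leaving truly harmful schools open.
wrongly_kept_open = np.mean(~closed & truly_bad) / np.mean(truly_bad)

print(f"Share of closed schools that were not truly harmful: {wrongly_closed:.0%}")
print(f"Share of truly harmful schools left open:            {wrongly_kept_open:.0%}")
```

Under those assumed numbers, most of the schools closed in this toy exercise are not in the truly harmful tail, and most of the truly harmful schools stay open, which is exactly the Type 1 and Type 2 problem described above. The stronger the true link between test scores and later life outcomes, the smaller those error rates get; the whole question is whether that link is strong enough.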
If there had been an active portfolio manager of Florida charter schools, he or she would have closed a bunch of charter schools that were doing a great job of improving students’ later life outcomes. As Booker et al.’s research shows, relying solely on test scores to distinguish good from bad schools would lead to serious errors by an active portfolio manager:
The substantial positive impacts of charter high schools on attainment and earnings are especially striking, given that charter schools in the same jurisdictions have not been shown to have large positive impacts on students’ test scores (Sass, 2006; Zimmer et al., 2012)…. Positive impacts on long-term attainment outcomes and earnings are, of course, more consequential than outcomes on test scores in school. It is possible that charter schools’ full long-term impacts on their students have been underestimated by studies that examine only test scores. More broadly, the findings suggest that the research examining the efficacy of educational programs should examine a broader array of outcomes than just student achievement. (pp. 27-8)
Conversely, foundations and portfolio managers are pouring more resources into certain types of schools with strong test performance that are failing to show much benefit for students’ long-term outcomes. As Angrist et al., Dobbie and Fryer, and Tuttle et al. show, a bunch of charter schools with large achievement test gains, including Boston “no-excuses” schools, the Harlem Promise Academy, and KIPP, have produced little or nothing in terms of high school graduation and college attendance rates.
Portfolio management guided solely by test scores would seriously harm students by unwittingly closing a bunch of successful schools, like those Booker et al. studied in Florida, while expanding and pouring more resources into ones with less impressive long-term results, like those studied by Angrist et al., Dobbie and Fryer, and Tuttle et al.
Matt Barnum challenged me on Twitter to describe what evidence would persuade me to support portfolio management. At a minimum, I would want to see that portfolio managers have reliable tools for predicting students’ long-term outcomes, so that they know which choice schools should be closed and which should be expanded or replicated. The evidence I’ve reviewed here, and in more detail in this prior post, suggests that they do not have such a tool, and so the entire theory of portfolio management falls apart. I’m not making the straw man argument that test scores are useless or that no school should ever be closed by regulators. I’m just arguing that portfolio management requires a confidence in the predictive power of achievement tests that is not even close to being warranted by the evidence.
But what about the impressive achievement gains that Doug Harris and his colleagues find are being produced in New Orleans? Let’s keep in mind that many reforms have been implemented in New Orleans at the same time. Even if we were confident that the test score gains in New Orleans are not being driven by changes in the student population following Katrina (and Doug and his colleagues are doing their best with constrained data and research design to show that), and even if these test score gains translate into higher high school graduation and college attendance rates (which Doug and his colleagues have not yet been able to examine), we still would have no idea whether portfolio management and other heavy regulation in NOLA helped, hurt, or made no difference in producing these results. In fact, the evidence from the seven rigorous studies of school choice programs with long-term outcomes suggests that portfolio management and other heavy regulations are neither necessary nor desirable for producing long-term gains for students.
Neerav, Matt Barnum, and Josh McCarty have suggested that I am making overly broad claims that are not consistent with the evidence. I think the opposite is true. I’ve carefully cited and quoted the relevant research and drawn the obvious conclusion: active portfolio management based on achievement tests is likely to make harmful errors and unnecessarily restrict options. In fact, it seems to me that the burden is on supporters of portfolio management to demonstrate that they can reliably distinguish between schools with good and bad long-term outcomes. If you are going to go around telling families that they can’t choose a certain school because it is bad for them, you had darn better be confident that it really is bad.
Posted by Jay P. Greene 