(Guest Post by Jason Bedrick)
There’s plenty to quibble with in Mark Dynarski’s post at EdNext this morning, but his claims about over-regulation theory are downright odd:
Some commenters have concluded that the negative effects in Louisiana were the consequence of the program being ‘over-regulated.’ [6] But the conclusion that the Louisiana program was overregulated relies on unstated premises that private schools that agreed to participate were academically inferior to ones that did not agree but would have if the state did not impose requirements, or that regulation itself impairs academic achievement. Evidence of either is noticeably lacking in the argument.
Far from being an “unstated premise,” the notion that “private schools that agreed to participate were academically inferior to ones that did not agree but would have if the state did not impose requirements” is the explicit argument that I was making in the EdNext post to which he links.
And far from “lacking evidence,” I spelled it out. First, private schools in highly regulated Louisiana were much less likely to accept voucher students than private schools in states with less regulated school choice programs:
Due to the LSP’s high regulatory burden, two-thirds of Louisiana private schools do not accept voucher students. In an American Enterprise Institute survey of private schools, 79 percent of Louisiana school leaders reported that concerns about program regulations played a deciding factor in their decision not to accept LSP students, including 64 percent who listed this as a major factor. In particular, 71 percent worried about the effect on their school’s admissions policies, including 45 percent who stated that this played a major role in their decision. In addition, 54 percent expressed concerns about administering the state test, including 34 percent who said it played a major role in their decision. Other areas of great concern included paperwork and the effect on the schools’ character or identity.
By contrast, the same survey found substantially lower levels of concerns about school choice regulations among school leaders in Indiana and Florida, where the regulatory burdens are considerably lower. While both states limit their vouchers and tax-credit scholarships to low-income students, they do not otherwise restrict admissions criteria, nor do they prevent schools from charging full tuition. Like Louisiana, Indiana requires schools to administer the state test to voucher students, whereas Florida allows schools to choose among many nationally norm-referenced tests.
Unsurprisingly, Florida has the highest level of private school participation among the three states (about two-thirds), followed by Indiana (about half), and Louisiana (one-third). Moreover, Florida schools are the most likely to plan to increase the number of choice students they enroll, while Louisiana schools are the most likely to decrease that number.
Second, there was “suggestive but not conclusive” evidence (as I wrote) that the private schools that did participate were lower performing than those that chose not to:
Low rates of private school participation would not be so troubling if they reflected the decisions of high-performing schools to accept voucher students while the regulations kept low-performing schools away, as proponents of the regulations had desired. However, the regulations may have had the opposite of their intended effect, as Professor Jay P. Greene of the University of Arkansas recently cautioned:
The only schools who are willing to do whatever the state tells them they must do are the schools that are most desperate for money. […] If you don’t have enough kids in your private school and your finances are in bad shape, you’re in danger of closing — probably because you’re not very good — then you’re willing to do whatever the state says.
Indeed, Greene’s concern is borne out by the data. According to the NBER study, “LSP schools open in both 2000 and 2012 experienced an average enrollment loss of 13 percent over this time period, while other private schools grew 3 percent on average.” The authors note that this “indicat[es] that the LSP may attract private schools struggling to maintain enrollment,” and they conclude that these results “suggest caution in the design of voucher systems aimed at expanding school choice for disadvantaged students.”
And the recent study by Wolf, DeAngelis, and Sude lends further support to the Over-regulation Theory:
Our results largely confirm our hypothesis that higher tuition levels and larger cohort enrollments, conditions normally associated with high quality schools, identify schools that are less likely to participate in voucher programs. We also find a consistent negative relationship between Great Schools Review score and the school participation decision, indicating lower quality schools have a higher tendency of participating in voucher programs in all three states, however the coefficients are not significantly different from zero. State fixed effects reveal private schools in D.C. and Louisiana, the two states that have higher regulatory burdens, are less likely to participate in voucher programs.
The evidence is still merely suggestive, not conclusive, but it’s the best evidence we have. Dynarski might not be persuaded by it, but he can’t ignore that it exists.
Dynarski does a fair amount of sleight of hand in this piece. He also says my argument about the disconnect between test scores and later life outcomes “discounts recent research showing that test scores improvements related to effective teachers were correlated with gains in adult labor-market outcomes.” I don’t “discount” that evidence. I acknowledge it, but note that it is only one study, that it is at odds with many other studies, and that it has its own limitations. Dynarski does not rebut that argument or provide any additional evidence that should make one more confident in that one study.
Dynarski does reasonably ask how confident we should be in evidence before using it for policy purposes. When it comes to using test scores to close schools or deny parents access to programs, I think the standard should be similar to that used in medical diagnostic testing. If a test is not a consistent predictor of disease, we shouldn’t rely on it for diagnostic purposes. But when it comes to assessing parental choice programs, I think the default should be on the other side. If there is some evidence of benefit, we shouldn’t deny parents choices.
Science does not tell us how to frame the null hypothesis. That is a reflection of our values, priorities, and a priori theories. I understand that reasonable people may disagree, but all of us have to choose how to frame the null hypothesis and I am just being explicit about how I would do so.
Also, it’s silly to average the same cohort across years. These aren’t independent observations.
Imagine you were trying to figure out how far people could run after taking a Vouchera pill. There are a few random-assignment studies, so to conduct a meta-analysis you average the effects across the studies. The studies find that Vouchera runners are slower in the first couple of hours, but by the third hour they catch up and they’re on track to possibly move ahead after four hours.
Then Dynarski comes along and he averages the distance the cohorts of runners covered after one hour, two hours, and three hours. He then concludes that, on average, Vouchera runners are slower.
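To make the point concrete, here is a minimal sketch with made-up numbers; the effect sizes below are purely illustrative and are not taken from any of the voucher studies:

```python
# Hypothetical effect estimates for the SAME cohort of "Vouchera" runners,
# measured after hour 1, hour 2, and hour 3. These are not independent
# observations -- each number describes the same group at a later point.
hourly_effects = [-0.20, -0.05, 0.10]  # illustrative effect sizes only

# Averaging the cohort across time points (the approach being criticized)
pooled_average = sum(hourly_effects) / len(hourly_effects)

# Looking at the trajectory instead: where does the cohort end up?
final_effect = hourly_effects[-1]

print(f"Average across hours: {pooled_average:+.2f}")  # -0.05, "slower on average"
print(f"Effect at hour three: {final_effect:+.2f}")    # +0.10, caught up and ahead
```

The average comes out negative even though the trajectory ends positive, which is the sense in which pooling non-independent, time-ordered observations from the same cohort can mislead.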
“Vouchera” A+
Jason, I’m so disappointed in you. In responding to the charge that you don’t state your premise or provide evidence to support it, you shamelessly cherry-pick the parts of your post in which you do state your premise and provide evidence to support it. What about all the other parts of your post that don’t do those things? Huh? I think someone in this exchange owes the other an apology, don’t you?
Someone certainly does.
Fortunately for him, being a part of the in-crowd means never having to say you’re sorry. Just ask our friends in LA or parts of the charter sector.