This is the question raised by a new study by C. Kirabo Jackson, Rucker C. Johnson and Claudia Persico in Education Next. Jackson, et al claim to have up-ended decades of school finance research by finding a link between school spending and improved student outcomes. After reading that article and an earlier, more detailed version posted on the NBER website, I find nothing to persuade me to abandon the long-standing and well-established finding that simply providing schools with more resources does not improve student outcomes.
Let’s remember how well-established this finding is by noting that Eric Hanushek conducted a comprehensive review of the literature and concluded:
…the research indicates little consistent relationship between resources to schools and student achievement. Much of the research considers how resources affect student achievement as measured by standardized test scores. These scores are strongly related to individual incomes and to national economic performance, making them a good proxy for longer run economic impacts. But, the evidence – whether from aggregate school outcomes, econometric investigations, or a variety of experimental or quasiexperimental approaches – suggests that pure resource policies that do not change incentives are unlikely to be effective. (p. 866)
Jackson, et al acknowledge that past research has failed to find a link between school resources and student outcomes:
Coleman found that variation in school resources (as measured by per-pupil spending and student-to-teacher ratios) was unrelated to variation in student achievement on standardized tests. In the decades following the release of the Coleman Report, the effect of school spending on student academic performance was studied extensively, and Coleman’s conclusion was widely upheld.
But they believe that past research was flawed in two important respects. First, test scores may be a weak indicator of later-life success, so it would be better to look at stronger measures, like educational attainment, employment, and earnings. Second, they believe that past studies of school spending may suffer from an endogeneity problem. That is, extra money has tended to go to schools facing challenges. The failure to find a link between more resources and better achievement may be because schools on a weaker future trajectory are the ones more likely to get more money. So, the causal arrow may be pointing in the wrong direction: weak performance may be causing more resources rather than more resources improving performance.
Jackson, et al solve the first issue by focusing on longer-term student outcomes, like educational attainment and earnings. They claim to have a solution to the second problem by finding a type of spending increase that is unrelated to the expected trajectory of school performance. Court-ordered spending, they say, is exogenous, while regular legislative increases in spending are endogenous.
The surprising findings of the Jackson, et al article hinge entirely on this claim that court-ordered spending is exogenous. Looking at attainment and earnings by itself does not produce a different result from past research that has focused on test scores. The thing that allows Jackson, et al to find that spending is linked to better student outcomes is the fact that they do not examine actual spending increases. Instead, they predict changes in spending based on court orders and use that predicted spending in place of the actual spending.
This instrumental variables technique, however, only works if the instrument is in fact exogenous. That is, court-ordered spending has to be unrelated to the future trajectory of school performance. Given how critical this point is to the entire article, you might think Jackson, et al would spend a fair amount of energy justifying the exogeneity of court-ordered spending. They do not.
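The stakes of that exogeneity assumption can be seen in a small simulation. The sketch below uses purely illustrative numbers (nothing from the Jackson, et al data): a hidden "district disadvantage" factor raises spending and lowers outcomes, so a naive regression of outcomes on spending is biased. A truly exogenous instrument recovers the assumed causal effect via the standard Wald/two-stage least squares formula, while an instrument that itself tracks disadvantage, which is exactly the worry about court-ordered spending, is biased much like the naive regression.

```python
import numpy as np

# Hypothetical simulation, not the Jackson et al. data. An unobserved
# "disadvantage" factor pushes spending up and outcomes down, creating
# the endogeneity problem described above.
rng = np.random.default_rng(0)
n = 200_000
true_effect = 0.1                      # assumed causal effect of spending

disadvantage = rng.normal(size=n)      # unobserved confounder
z_exog = rng.normal(size=n)            # instrument unrelated to disadvantage
z_endog = z_exog + 0.8 * disadvantage  # "instrument" that tracks disadvantage

spending = disadvantage + z_exog + rng.normal(size=n)
outcome = true_effect * spending - disadvantage + rng.normal(size=n)

def slope_ols(x, y):
    # Naive regression slope: cov(x, y) / var(x)
    c = np.cov(x, y)
    return c[0, 1] / c[0, 0]

def slope_iv(z, x, y):
    # Wald / 2SLS estimator with one instrument: cov(z, y) / cov(z, x)
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"OLS estimate:        {slope_ols(spending, outcome):+.3f}")
print(f"IV, exogenous z:     {slope_iv(z_exog, spending, outcome):+.3f}")
print(f"IV, endogenous z:    {slope_iv(z_endog, spending, outcome):+.3f}")
```

Only the estimate using the exogenous instrument lands near the assumed effect of 0.1; the other two are pulled well away from it by the confounder. The method stands or falls with the instrument.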
It is completely mysterious to me why we should believe that court-ordered spending differs from legislatively-originated spending in the likelihood that it is linked to the expected future trajectory of school performance. That is, schools facing challenges are just as likely to get extra money whether the spending originates in the courts or in the legislature. If we are concerned that the causal arrow is pointing in the wrong direction, in that weak performance causes more money rather than the other way around, we should have that concern just as much whether the motivation for the money came from the court or the legislature.
Jackson, et al do not make a proper case for the exogeneity of court-ordered spending other than to describe it as a “shock” to school spending. But there is nothing more shocking about spending that originates in the courts than in the legislature. Court cases take years to be developed, decided, and appealed. And then the rulings have to be implemented by legislative action. The timing of court-ordered spending is no more surprising to schools than regular legislative spending. Nor is the amount of the spending change necessarily more dramatic than changes originating in legislatures. The passage of ESEA and its re-authorizations infused large amounts of money into schools.
Jackson, et al need to convince us that court-ordered spending is exogenous to get their unusual result. If they just used conventional methods, they would confirm the widespread finding that extra money does not improve outcomes. As they describe it:
We confirm that our approach generates significantly different results than those that use observed increases in school spending, by comparing our results to those we would have obtained had we used actual rather than predicted increases as our measure of changes in district spending. For all outcomes, the results based simply on observed increases in school spending are orders of magnitude smaller than our estimates based on predicted SFR-induced spending increases, and most are statistically insignificant.
But Jackson, et al fail to justify the claim on which their entire article depends, that court-ordered spending is exogenous; nor does such a claim seem plausible.
But even if you were somehow to believe that court-ordered spending is exogenous, it would still be unwise to jump to the conclusion that we now know money matters and should open the resource spigots to K-12 education. First, the past research Hanushek reviewed includes studies that do not suffer from either of the concerns raised by Jackson, et al. That is, some of those studies examine later-life outcomes for students, not just test scores, and some rely on experimental methods that pose no problem of causation. Why should we disregard those studies in favor of this one new study, even if we were to ignore the concerns I’ve raised above?
Second, Jackson, et al are examining the effect of court-ordered spending in the 1970s when spending levels in real terms were much lower and variation in spending across districts within states was much higher. It’s quite a leap to think that more money now would have the same effect as then. To my surprise, Bruce Baker made this same point in response to the Jackson, et al article in comments to Education Week:
“[E]xploring such [far-apart] outcomes, while a fun academic exercise, is of limited use for informing policy,” he wrote in an email to Education Week. “Among other things, these are changes that occurred under very different conditions than today.”
Mr. Baker also agreed with the researchers’ caveat that similar changes might have a much smaller effect if introduced today, in part because total school funding nationwide increased by 175 percent over 43 years, from an average of $4,612 per student in 1967 to about $12,772 per student in 2010, as measured in 2012 dollars.
So does school spending matter after all? I think the answer is still clearly “no.”