Does School Spending Matter After All?

This is the question raised by a new study by C. Kirabo Jackson, Rucker C. Johnson and Claudia Persico in Education Next.  Jackson, et al claim to have up-ended decades of school finance research by finding a link between school spending and improved student outcomes.  After reading that article and an earlier, more detailed version posted on the NBER web site, I find nothing to persuade me to abandon the long-standing and well-established finding that simply providing schools with more resources does not improve student outcomes.

Let’s remember how well-established this finding is by noting that Eric Hanushek conducted a comprehensive review of the literature and concluded:

…the research indicates little consistent relationship between resources to schools and student achievement. Much of the research considers how resources affect student achievement as measured by standardized test scores. These scores are strongly related to individual incomes and to national economic performance, making them a good proxy for longer run economic impacts. But, the evidence – whether from aggregate school outcomes, econometric investigations, or a variety of experimental or quasiexperimental approaches – suggests that pure resource policies that do not change incentives are unlikely to be effective. (p. 866)

Jackson, et al acknowledge that past research has failed to find a link between school resources and student outcomes:

Coleman found that variation in school resources (as measured by per-pupil spending and student-to-teacher ratios) was unrelated to variation in student achievement on standardized tests. In the decades following the release of the Coleman Report, the effect of school spending on student academic performance was studied extensively, and Coleman’s conclusion was widely upheld.

But they believe that past research was flawed in two important respects.  First, test scores may be a weak indicator of later-life success, so it would be better to look at stronger measures, like educational attainment, employment, and earnings.  Second, they believe that past studies of school spending may suffer from an endogeneity problem.  That is, extra money has tended to go to schools facing challenges.  The failure to find a link between more resources and better achievement may be because schools with a weaker future trajectory are the ones more likely to get more money.  So, the causal arrow may be going in the wrong direction.  Weak performance may be causing more resources rather than more resources causing weak performance.
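
To make the endogeneity worry concrete, here is a minimal simulation — my own toy illustration with made-up numbers, not the authors’ data or model — in which extra money genuinely helps, but flows disproportionately to struggling districts, so a naive regression of outcomes on observed spending finds little or no effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000                  # hypothetical school districts
true_effect = 0.3           # assumed true payoff to an extra unit of spending

disadvantage = rng.normal(0, 1, n)  # unobserved district disadvantage
# Compensatory allocation: struggling districts receive more money
spending = 1.0 + 0.8 * disadvantage + rng.normal(0, 0.5, n)
outcome = true_effect * spending - 1.0 * disadvantage + rng.normal(0, 0.5, n)

# Naive OLS of outcomes on observed spending (with an intercept)
X = np.column_stack([np.ones(n), spending])
beta = np.linalg.lstsq(X, outcome, rcond=None)[0]
print(f"true effect: {true_effect:.2f}   naive OLS estimate: {beta[1]:.2f}")
# The estimate is pushed toward (or below) zero because spending is correlated
# with the omitted disadvantage term -- the pattern described above.
```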

Jackson, et al solve the first issue by focusing on longer-term student outcomes, like educational attainment and earnings.  They claim to have a solution to the second problem by finding a type of spending increase that is unrelated to the expected trajectory of school performance.  Court-ordered spending, they say, is exogenous, while regular legislative increases in spending are endogenous.

The surprising findings of the Jackson, et al article hinge entirely on this claim that court-ordered spending is exogenous.  Looking at attainment and earnings by itself does not produce a different result than past research that has focused on test scores.  What allows Jackson, et al to find that spending is linked to better student outcomes is that they do not examine actual spending increases.  Instead, they predict changes in spending based on court orders and use that predicted spending in place of the actual spending.

This instrumental variable technique, however, only works if the instrument is in fact exogenous.  That is, court-ordered spending has to be unrelated to the future trajectory of school performance.  Given how critical this point is to the entire article, you might think Jackson, et al would spend a fair amount of energy justifying the exogeneity of court-ordered spending.  They do not.
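
The mechanics can be sketched as a two-stage exercise.  The sketch below — again a toy illustration of my own, with a made-up “court_order” instrument and made-up numbers, not the authors’ actual specification — continues the simulation above: spending is first predicted from the court-order instrument, and the outcome is then regressed on the predicted rather than the actual spending.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
true_effect = 0.3

def two_stage_estimate(instrument_tilt):
    """Toy 2SLS: if instrument_tilt != 0, court orders track district disadvantage."""
    disadvantage = rng.normal(0, 1, n)
    court_order = (instrument_tilt * disadvantage + rng.normal(0, 1, n) > 0).astype(float)
    spending = 1.0 + 0.8 * disadvantage + 0.5 * court_order + rng.normal(0, 0.5, n)
    outcome = true_effect * spending - 1.0 * disadvantage + rng.normal(0, 0.5, n)

    # Stage 1: predict spending from the court-order instrument
    Z = np.column_stack([np.ones(n), court_order])
    spending_hat = Z @ np.linalg.lstsq(Z, spending, rcond=None)[0]
    # Stage 2: regress the outcome on predicted, not actual, spending
    X = np.column_stack([np.ones(n), spending_hat])
    return np.linalg.lstsq(X, outcome, rcond=None)[0][1]

print(f"true effect: {true_effect:.2f}")
print(f"2SLS, court orders unrelated to disadvantage: {two_stage_estimate(0.0):.2f}")
print(f"2SLS, court orders target weak districts:     {two_stage_estimate(1.0):.2f}")
```

In this toy setup the two-stage estimate lands near the true effect only when court orders are unrelated to the districts’ underlying disadvantage; when the instrument itself tracks that disadvantage, the estimate is off again.  The sketch says nothing about which way a non-exogenous instrument would bias the published estimates; it only illustrates that the whole exercise stands or falls on the exogeneity claim.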

It is completely mysterious to me why we should believe that court-ordered spending differs from legislatively-originated spending in the likelihood that it is linked to the expected future trajectory of school performance.  That is, schools facing challenges are just as likely to get extra money whether the spending originates in the courts or in the legislature.  If we are concerned that the causal arrow is going in the wrong direction, in that weak performance causes more money rather than the other way around, we should have that concern just as much whether the motivation for the money came from the court or the legislature.

Jackson, et al do not make a proper case for the exogeneity of court-ordered spending other than to describe it as a “shock” to school spending.  But there is nothing more shocking about spending that originates in the courts than in the legislature.  Court cases take years to be developed, decided, and appealed.  And then they have to be implemented by legislative action.  The timing of court-ordered spending is no more surprising to schools than that of regular legislative spending.  Nor are court-ordered spending changes necessarily more dramatic than those originating in legislatures.  The passage of ESEA and its re-authorizations infused large amounts of money into schools.

Jackson, et al need to convince us that court-ordered spending is exogenous to get their unusual result.  If they just used conventional methods, they would confirm the widespread finding that extra money does not improve outcomes.  As they describe it:

We confirm that our approach generates significantly different results than those that use observed increases in school spending, by comparing our results to those we would have obtained had we used actual rather than predicted increases as our measure of changes in district spending. For all outcomes, the results based simply on observed increases in school spending are orders of magnitude smaller than our estimates based on predicted SFR-induced spending increases, and most are statistically insignificant.

But Jackson, et al fail to justify the claim on which their entire article depends, that court-ordered spending is exogenous, nor does such a claim seem plausible.

But even if you were to somehow believe that court-ordered spending is exogenous, it would still be unwise to jump to the conclusion that we now know money matters and should open the resource spigots to K-12 education.  First, the past research Hanushek reviewed includes studies that do not suffer from either of the concerns raised by Jackson, et al.  That is, some of those studies examine later-life outcomes for students and not just test scores, and some rely on experimental methods, which do not suffer from the causation problem.  Why should we disregard those studies in favor of this one new study, even if we were to ignore the concerns I’ve raised above?

Second, Jackson, et al are examining the effect of court-ordered spending in the 1970s when spending levels in real terms were much lower and variation in spending across districts within states was much higher.  It’s quite a leap to think that more money now would have the same effect as then.  To my surprise, Bruce Baker made this same point in response to the Jackson, et al article in comments to Education Week:

“[E]xploring such [far-apart] outcomes, while a fun academic exercise, is of limited use for informing policy,” he wrote in an email to Education Week. “Among other things, these are changes that occurred under very different conditions than today.”

Mr. Baker also disagreed with the researchers’ caveat that similar changes might have a much smaller effect if introduced today, in part because total school funding nationwide increased by 175 percent over 43 years, from an average of $4,612 per student in 1967 to about $12,772 per student in 2010, as measured in 2012 dollars.

So does school spending matter after all?  I think the answer is still clearly “no.”

12 Responses to Does School Spending Matter After All?

  1. Greg Forster says:

    I am shocked – shocked! – to discover that courts throwing money at failing schools is going on in here!

  2. matthewladner says:

    Taken at face value, this finding would have profound implications far outside of the spending debate, which makes it all the more important to see it replicated outside of a highly stylized statistical model.

  3. Kirabo Jackson says:

    COME ON Jay…..you are turning a blind eye to the flaws in the existing research. Also, what are these experimental studies you speak of?

    • Thanks, Bo, for the comment and for a thought-provoking article. As to the experimental studies, I’m referencing Hanushek’s review which he says relies on studies that include “a variety of experimental or quasiexperimental approaches.” You might want to contact him for a complete list, but I see studies using experimental methods described on pp. 895-6 and 899-901 in that review.

      • Kirabo Jackson says:

        Hi Jay,
        I did look at his review after reading your post. It turns out that he is referring to experimental evidence on class size (i.e. Project Star). There is no experimental evidence (or even quasi-experimental evidence) in that review (or that I know of) showing no relationship between spending and outcomes. Simply put, the methods used in the many existing national studies are unlikely to reflect causal relationships. At the very least, (even if one does not like my study with Rucker and Claudia) one must concede this point. Also note that Hanushek only looks at studies written prior to 1995 (i.e. no studies written in the past 20 years).

        The time frame is important because there are some recent studies (some are based on individual states rather than national data) based on credible research designs showing positive effects on test scores and even college-going.

        I believe that a more recent review of the literature that gave higher weight to those studies with better methods would come to the conclusion that money does matter. Also, to be clear, our results do not imply that one should “throw money at the problem” but rather demonstrate that in very recent history money has mattered (and quite a bit).

        I thought you and your readers might want to know.
        Bo

      • Greg Forster says:

        Another thing I’m sure our readers would like to know is whether you have any response to Jay’s point that judicially imposed spending changes are neither exogenous nor “shocks” and thus your study has little value.

        Edit: And if you really believe a better lit review would show that money does matter, your time would be far better employed conducting a better lit review as opposed to a single new study based on dubious methodological assumptions.

      • Kirabo says:

        Hi Greg,
        If I thought you would actually listen to my response, I would take my time and go through it carefully for you…But I don’t.

        To keep it short, all social science research is based on identifying assumptions (Jay knows this, and Rick knows this, and any social scientist worth their salt knows this). As such, the question is whether our assumptions are plausible. And more importantly for this exchange, whether they are more plausible than those in the existing studies (most reasonable persons would agree that they are).

        If you want further details on it, I invite you to READ THE PAPER instead of taking Jay’s word. You have a mind of your own…I invite you to use it.

  4. markdynarski says:

    The instrumental variables technique is a case in which the textbook example works perfectly and the empirical ones never do. The assumptions always seem implausible in practice.

    When school spending is being discussed, I can rarely discern what people think more money will be spent on, but the margins matter. I think more spending for the ‘professional accountability’ that Brian Gill discusses in his recent essay would be an easy sell to the public. More spending to pay the same teachers more for the same work would be a hard sell. Class-size reductions are somewhere in between, but the evidence on their effects is mixed.

    • Greg Forster says:

      I have to say I have always felt the same doubts about instrumental variables. It’s one of those things where everybody does it, so it gains spurious additional plausibility from common use.

  5. […] This begs an obvious question: What exactly is the “right way” to examine the school funding issue? According to Kirabo et al., previous studies were limited by two major design decisions: The use of standardized test scores as dependent variables (or outcomes), and a focus on school funding as it is traditionally applied. Dr. Jay Greene nicely summarizes both the problems and the authors’ solutions to them in a blog post: […]

  6. Dylan Wiliam says:

    What depresses me about this exchange is that people seem to be defending extreme positions—that extra money will have zero effect on student achievement, and that extra money will always produce increased student achievement. Both of these positions seem absurd to me. Extra money just “thrown at” education will almost certainly have some impact on student achievement, even if it is only to make teaching one of the best paid graduate jobs, so that we have highly skilled graduates clamoring for jobs as teachers. The point is that just throwing money at education always results in poor value for money, even if it does increase student achievement.

    What I want to know is how much increased student achievement (measured in, say, standard deviations) we can get for an extra expenditure of $1000 per student per year.
