Systematic Review of the Disconnect Between Test Scores and Later Life Outcomes

Today AEI released a systematic review by Collin Hitt, Mike McShane, and Pat Wolf on the relationship between changes in test scores and changes in later educational attainment in rigorous studies of school choice programs.  I’ve been writing and talking about this for some time now, inspired to a large degree by informal conversations with the authors of this new report.  Now they have made the point more systematically.

They examined every study of school choice programs that reported both test score and attainment effects, consisting of “39 unique impact estimates across studies of more than 20 programs.”  They then examined whether the direction and significance of those programs’ estimated effects on test scores are consistent with the direction and significance of their effects on attainment.  They are not.

They find: “Across the studies we examine, there is no significant or meaningful association between school choice impacts on math scores and high school graduation or college attendance. Nor are ELA impacts meaningfully associated with high school graduation rates. Under some tests, the relationship between ELA impacts and college attendance are significant—but the relationship is weak in magnitude, and the sample of studies is far narrower for college attainment than for high school graduation.”
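To make the kind of comparison the review describes a bit more concrete, here is a minimal sketch of checking whether program-level test score impacts track program-level attainment impacts.  The impact estimates are invented purely for illustration (they are not the review’s data), and a simple Pearson correlation plus a sign-agreement check stand in for the report’s actual statistical tests.

```python
# Illustrative sketch only: the impact estimates below are made up, not taken
# from the Hitt/McShane/Wolf review. It shows the kind of cross-study check
# the review describes: do program-level test score impacts line up with
# program-level attainment impacts?
import numpy as np
from scipy.stats import pearsonr

# Hypothetical standardized impact estimates for a handful of programs
math_impacts = np.array([0.12, -0.05, 0.20, 0.01, -0.10, 0.08])  # test score effects (SD units)
grad_impacts = np.array([0.03,  0.06, -0.01, 0.09,  0.05, 0.00])  # high school graduation effects

# Correlation between the two sets of program-level impacts
r, p = pearsonr(math_impacts, grad_impacts)
print(f"Pearson r = {r:.2f}, p = {p:.2f}")

# Simpler check: how often do the two impacts even point the same way?
same_sign = np.mean(np.sign(math_impacts) == np.sign(grad_impacts))
print(f"Share of programs where impacts share a sign: {same_sign:.0%}")
```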

Keep in mind that the policy-relevant question is not whether individual changes in test scores are correlated with individual changes in attainment.  There is some research that has found this relationship (see, for example, Chetty et al.), but a surprising number of studies find no relationship, or only a weak one, between individual gains on these near-term and later-term measures of success.  In any case, none of those studies directly addresses the policy-relevant question of whether aggregate test score changes at the school or program level are predictive of aggregate changes in attainment.

If we are going to judge schools or programs as good or bad based on changes in test scores, then those aggregate measures (not individual results) should be predictive of later success.  The fact that they are not, at least when judging school choice programs and schools, suggests that there is something fundamentally wrong with how we have approached public regulation (wrongly called “accountability”) of those programs.  You can’t regulate the quality of schools and programs if you can’t predict their quality.

3 Responses to Systematic Review of the Disconnect Between Test Scores and Later Life Outcomes

  1. Michael J. Norton says:

    The proliferation of standardized testing does accurately predict two future successes.

    The future income of the testing company owners.

    And the kickbacks to those who impose the mandated testing.

  2. Greg Forster says:

    To avoid confusion, it should be noted that the absence of a connection between, for example, test score gains from choice and high school graduation gains from choice does not call into question the fact that choice does in fact produce (modest) test score gains and also does produce (immodest) graduation gains. The question here is whether the two are related.

    That they seem not to be related implies a major paradigm shift in our field that will be tons of fun to see unfolding.

    It will be interesting to see future work on the question of correlation at the individual level. While you’re right that this is not the immediate policy question, it is indirectly relevant in that the new paradigm will need to develop a general causal model, and to do that, we need to get a sense of just how deep the rabbit hole goes.

  3. […] because of the breadth and depth of the information that it provides.  Yet, some scholars point out that there is a weak relationship between test scores and post-secondary education and employment […]
