In a fascinating new study, Robert W. Fairlie and Peter Riley Bahr examine the effects of an experiment in which community college students were assigned by lottery either to receive a free computer or not. Comparing these randomly assigned treatment and control groups, the researchers found that computer skills rose among students who were given computers, but those skills did not translate into higher college enrollment, employment, or earnings for the treatment group.
These results are particularly important because many politicians have treated improving computer skills as the key to improving educational outcomes. In Arkansas, the main education policy initiative championed by the governor is a law that requires all public schools to offer computer science classes. Texas has adopted a similar policy. Leaving aside the obvious practical concerns, such as whether schools have or can develop staff qualified to teach computer science, this new research raises questions about the aim of these policies. How important is increasing computer skills for the vast majority of students? No one doubts that most workers have to use computers, but many students may already possess the skills they need, and it seems doubtful that raising average computer skills would lead to significant changes in employment outcomes, even assuming we can improve those skills in a meaningful way.
The new study is also incredibly useful in that it reminds us how important it is to rely on randomized experiments rather than studies that use matching or controls for observables. The authors conclude:
Importantly, our null effect estimates from the random experiment differ substantially from those found from an analysis of CPS data, raising concerns about the potential for selection bias in non-experimental estimates of returns. Estimates from regressions with detailed controls, nearest-neighbor models, and propensity score models all indicate large, positive, and statistically significant relationships between computer ownership and earnings and employment, in sharp contrast to the null effects of our experiment. It may be that non-experimental estimates overstate the labor market returns to computer skills.
It is simply false that matching studies are just as good, or almost as good, as randomized experiments. Sometimes a matching study and an RCT give the same result, but that could simply be because selection happened not to bias the result in that particular case, or because you got lucky. Sometimes a coin flip will also give you the same result. Theoretically, we know that selection bias is a serious concern, which means that we can never have strong confidence in research designs that assume selection issues don't exist.
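To see how selection bias can manufacture a large "return" where the true causal effect is zero, consider a toy simulation (entirely hypothetical numbers, not the study's data). Unobserved ability raises both earnings and the chance of owning a computer, so a naive observational comparison shows a big earnings gap even though the computer itself does nothing; random assignment removes the gap.

```python
import math
import random
import statistics

random.seed(0)
N = 100_000  # people per scenario

def earnings_gap(randomized):
    """Mean earnings of computer owners minus non-owners.

    The true causal effect of the computer on earnings is zero by
    construction; any gap comes from who selects into ownership.
    """
    treated, control = [], []
    for _ in range(N):
        ability = random.gauss(0, 1)  # unobserved by the researcher
        if randomized:
            # lottery: ownership is independent of ability
            owns_computer = random.random() < 0.5
        else:
            # self-selection: higher ability -> more likely to own one
            owns_computer = random.random() < 1 / (1 + math.exp(-ability))
        # earnings depend on ability and noise, NOT on the computer
        earnings = 30_000 + 5_000 * ability + random.gauss(0, 2_000)
        (treated if owns_computer else control).append(earnings)
    return statistics.mean(treated) - statistics.mean(control)

print(f"observational gap: {earnings_gap(randomized=False):9,.0f}")
print(f"experimental gap:  {earnings_gap(randomized=True):9,.0f}")
```

The observational comparison shows a gap of several thousand dollars purely from selection, while the randomized version hovers near zero. Controlling for observables only helps if the observables capture everything driving selection; here the confounder (ability) is unobserved, which is exactly the situation the quoted passage warns about.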