Regulating School Choice: The Debate Continues

March 9, 2016


(Guest Post by Jason Bedrick)

Last week, the Cato Institute held a policy forum on school choice regulations (video here). Two of our panelists, Dr. Patrick Wolf and Dr. Douglas Harris, were part of a team that authored one of the recent studies finding that Louisiana’s voucher program had a negative impact on participating students’ test scores. Why that was the case – especially given the nearly unanimously positive previous findings – was the main topic of our discussion. Wolf and I argued that there is reason to believe that the voucher program’s regulations might have played a role in causing the negative results, while Harris and Michael Petrilli of the Fordham Institute pointed to other factors.

The debate continued after the forum, including a blog post in which Harris raises four “problems” with my arguments. I respond to his criticisms below.

The Infamous Education Productivity Chart

Problem #1: Trying to discredit traditional public schools by placing test score trends and expenditure changes on one graph. These graphs have been floating around for years. They purport to show that spending has increased much faster than expenditures [sic], but it’s obvious that these comparisons make no sense. The two things are on different scales. Bedrick tried to solve this problem by putting everything in percentage terms, but this only gives the appearance of a common scale, not the reality. You simply can’t talk about test scores in terms of percentage changes.

The more reasonable question is this: Have we gotten as much from this spending as we could have? This one we can actually answer and I think libertarians and I would probably agree: No, we could be doing much better than we are with current spending. But let’s be clear about what we can and cannot say with these data.

Harris offers a reasonable objection to the late, great Andrew Coulson’s infamous chart (shown below). Coulson already addressed critics of his chart at length, but Harris is correct that the test scores and expenditures do not really have a common scale. That said, the most important test of a visual representation of data is whether the story it tells is accurate. In this case, it is, as even Harris seems to agree. Adjusted for inflation, spending per pupil in public schools has nearly tripled in the last four decades while the performance of 17-year-olds on the NAEP has been flat.

U.S. Education Spending and Productivity

Producing a similar chart with data from the scores of younger students on the NAEP would be misleading because the scale would mask their improvement. But for 17-year-olds, whose performance has been flat on the NAEP and the SAT, the story the chart tells is accurate.

Voucher Regulations Are Keeping Private Schools Away

Problem #2: Repeating arguments that have already been refuted. Bedrick’s presentation repeated arguments about the Louisiana voucher case that I already refuted in a prior post. Neither the NBER study nor the survey by Pat Wolf and his colleagues provide compelling evidence that existing regulations are driving out potentially more effective private schools in the Louisiana voucher program, which was a big focus of the panel.

Here Harris attacks a claim I did not make. He is correct that there is no compelling evidence that regulations are driving out higher-quality private schools, but no one claimed that there was. Rather, I have repeatedly argued that the evidence was “suggestive but not conclusive” and speculated in my presentation that “if the enrollment trends are a rough proxy [for quality], though we can’t prove this, then it would suggest that the higher-quality schools chose not to participate” while lower-quality schools did.

Moreover, what Harris claims he refuted he actually merely disputed – and not very persuasively. In the previous post he mentions, he minimized the role that regulation played in driving away private schools:

As I wrote previously, the study he cites, by Patrick Wolf and colleagues, actually says that what private schools nationally most want changed is the voucher’s dollar value. In Louisiana, the authors reported that “the top concern was possible future regulations, followed by concerns about the amount of paperwork and reports. When asked about their concerns relating to student testing requirements, a number of school leaders expressed a strong preference for nationally normed tests” (italics added). These quotes give a very different impression that [sic] Bedrick states. The supposedly burdensome current regulations seem like less of a concern than funding levels and future additional regulations–and no voucher policy can ever insure against future changes in policy.

Actually, the results give a very different impression than Harris states. The quote Harris cites from the report concerns the participating schools, but the question at hand is why the nonparticipating schools opted out of the voucher program. Future regulations were still the top concern for nonparticipating schools, but current regulations were also major concerns. Indeed, the study found that 9 of the 11 concerns that a majority of nonparticipating private schools said played a role in their decision not to participate in the voucher program related to current regulations, particularly around admissions and the state test.


Source: “Views from Private Schools,” by Brian Kisida, Patrick J. Wolf, and Evan Rhinesmith, American Enterprise Institute (page 19)

Nearly all of the nonparticipating schools’ top concerns related to the voucher program’s ban on private schools using their own admissions criteria (concerns 2, 3, 5, 7, 8 and 11) or requiring schools to administer the state test (concerns 6, 9, 10, and possibly 7 again). It is clear that these regulations played a significant role in keeping private schools away from the voucher program. The open question is whether the regulations were more likely to drive away higher-quality private schools. I explained why that might be the case, but I have never once claimed that we know it is the case.

Market vs. Government Regulations in Education

Problem #3: Saying that unregulated free markets are good in education because they have been shown to work in other non-education markets. […] For example, the education market suffers from perhaps the worst information problem of any market–many complex hard-to-measure outcomes most of which consumers (parents) cannot directly observe even after they’ve chosen a school for their child. Also, since students can realistically only attend schools near their homes, and there are economies of scale in running schools, that means there will generally be few practical options (unless you happen to live in a large city with great public transportation–very rare in the U.S.). And the transaction costs are very high to switch schools. And there are equity considerations. And … I could go on.

Harris claims that a free market in education wouldn’t work because education is uniquely different from other markets. However, the challenges he lists – information asymmetry, difficulty measuring intangible outcomes, difficulties providing options in rural areas, transaction costs for switching schools – aren’t unique to K-12 education at all. Moreover, there is no such thing as an “unregulated” free market because market forces regulate. As I describe below, while not perfect, these market forces are better suited than the government to address the challenges Harris raises.

Information asymmetry and hard-to-measure/intangible outcomes:

Parents need information in order to select quality education providers for their children. But are government regulations necessary to provide that information? Harris has provided zero evidence that they are, but there is much evidence to the contrary. Here the disparity between K-12 and higher education is instructive. Compared to K-12, colleges and universities operate in a relatively free market. Certainly, there are massive public subsidies, but they are mostly attached to students, and colleges have maintained meaningful independence. Even Pell vouchers do not require colleges to administer particular tests or set a single standard that all colleges must follow.

So how do families determine if a college is a good fit or not? There are three primary mechanisms they use: expert reviews, user reviews, and private certification.

The first category includes the numerous organizations that rate colleges, including U.S. News & World Report, the Princeton Review, Forbes, the Economist, and numerous others like them. These are similar to the sorts of expert reviews, like Consumer Reports, that consumers regularly consult when buying cars, computers, or electronics, or even when hiring lawyers – all industries where the non-expert consumer faces a significant information asymmetry problem.

The second category includes the dozens of websites that allow current students and alumni to rate and review their schools. These are similar to Yelp, Urban Spoon, and numerous other platforms for end-users to describe their personal experience with a given product or service.

Finally, there are numerous national and regional accreditation agencies that certify that colleges meet a certain standard, similar to Underwriters Laboratories for consumer goods. This last category used to be private and voluntary, although now it is de facto mandatory because accreditation is needed to get access to federal funds.

None of these are perfect, but then again, neither are government regulations. Moreover, the market-based regulators have at least four major advantages over the government. First, they provide more comprehensive information about all those hard-to-measure and intangible outcomes that Harris was concerned about. State regulators tend to measure only narrow and more objective outcomes, like standardized test scores in math and English or graduation rates. By contrast, the expert and user reviews consider return-on-investment, campus life, how much time students spend studying, teaching quality, professor accessibility, career services assistance, financial aid, science lab facilities, study abroad options, and much more.

Second, the diversity of options means parents and students can better identify the best fit for them. As Malcolm Gladwell observed, different people give different weights to different criteria. A family’s preferences might align better with the Forbes rankings than the U.S. News rankings, for example. Alternatively, perhaps no single expert reviewer captures a particular family’s preferences, in which case they’re still better off consulting several different reviews and then coming to their own conclusion. A single government-imposed standard would only make sense if there was a single best way to provide (or at least measure) education, we knew what it was, and there was a high degree of certainty that the government would actually implement it well. However, that is not the case.

Third, a plethora of private certifiers and expert and user reviews are less likely to create systemic perverse incentives than a single, government standard. As it is, the hegemony of U.S. News & World Report’s rankings created perverse incentives for colleges to focus on inputs rather than outputs, monkey around with class sizes, send applications to students who didn’t qualify to increase their “selectivity” rating, etc. If the government imposed a single standard and then rewarded or punished schools based on their performance according to that standard, the perverse incentives would be exponentially worse. The solution here is more competing standards, not a single standard.

Fourth, as Dr. Howard Baetjer Jr. describes in a recent edition of Cato Journal, whereas “government regulations have to be designed based on the limited, centralized knowledge of legislators and bureaucrats, the standards imposed by market forces are free to evolve through a constant process of evaluation and adjustment based on the dispersed knowledge, values, and judgment of everyone operating in the marketplace.” As Baetjer describes, the incentives to provide superior standards are better aligned in the market than for the government:

Incentives and accountability also play a central role in the superiority of regulation by market forces. First, government regulatory agencies face no competition from alternative suppliers of quality and safety assurance, because the regulated have no right of exit from government regulation: they cannot choose a better supplier of regulation, even if they want to. Second, government regulators are paid out of tax revenue, so their budget, job security, and status have little to do with the quality of the “service” they provide. Third, the public can only hold regulators to account indirectly, via the votes they cast in legislative elections, and such accountability is so distant as to be almost entirely ineffectual. These factors add up to a very weak set of incentives for government regulators to do a good job. Where market forces regulate, by contrast, both goods and service providers and quality-assurance enterprises must continuously prove their value to consumers if they are to be successful. In this way, regulation by market forces is itself regulated by market forces; it is spontaneously self-improving, without the need for a central, organizing authority.

In K-12, there are many fewer private certifiers, expert reviewers, or websites for user reviews, despite a significantly larger number of students and schools. Why? Well, first of all, the vast majority of students attend their assigned district school. To the extent that those schools’ outcomes are measured, it’s by the state. In other words, the government is crowding out private regulators. Even still, there is a small but growing number of organizations like GreatSchools, Private School Review, School Digger, and Niche that are providing parents with the information they desire.

Options in rural areas:

First, it should be noted that, as James Tooley has amply documented, private schools regularly operate – and outperform their government-run counterparts – even in the most remote and impoverished areas in the world, including those areas that lack basic sanitation or electricity, let alone public transportation. (For that matter, even the numerous urban slums where Tooley found a plethora of private schools for the poor lack the “great public transportation” that Harris claims is necessary for a vibrant education market.) Moreover, to the extent rural areas do, indeed, present challenges to providing education, such challenges are far from unique. Providers of other goods and services also must contend with reduced economies of scale, transportation issues, etc.

That said, innovations in communication and transportation mean these obstacles are less difficult to overcome than ever before. Blended learning and course access are already expanding educational opportunities for students in rural areas, and the rise of “tiny schools” and emerging ride-sharing operations like Shuddle (“Uber for kids”) may soon expand those opportunities even further. These innovations are more likely to be adopted in a free-market system than a highly government-regulated one.

Test Scores Matter But Parents Should Decide 

Problem #4: Using all this evidence in support of the free market argument, but then concluding that the evidence is irrelevant. For libertarians, free market economics is mainly a matter of philosophy. They believe individuals should be free to make choices almost regardless of the consequences. In that case, it’s true, as Bedrick acknowledged, that the evidence is irrelevant. But in that case, you can’t then proceed to argue that we should avoid regulation because it hasn’t worked in other sectors, especially when those sectors have greater prospects for free market benefits (see problem #3 above). And it’s not clear why we should spend a whole panel talking about evidence if, in the end, you are going to conclude that the evidence doesn’t matter.

Once again, Harris misconstrues what I actually said. In response to a question from Petrilli regarding whether I would support “kicking schools out of the [voucher] program” if they performed badly on the state test, I answered:

No, because I don’t think it’s a wise move to eliminate a school that parents chose, which may be their least bad option. We don’t know why a parent chose that school. Maybe their kid was being bullied at their local public school. Maybe their local public school that they were assigned to was not as good. Maybe there was a crime problem or a drug problem.

We’re never going to have a perfect system. Libertarians are not under the illusion that all private schools are good and all public schools are bad… Given the fact that we’ll never have a perfect system, what sort of mechanism is more likely to produce a wide diversity of options, and foster quality and innovation? We believe that the market – free choice among parents and schools having the ability to operate as they see best – has proven over and over again in a variety of industries to have better outcomes than Mike Petrilli sitting in an office deciding what quality is… as opposed to what individual parents think [quality] is.

Harris then responded by claiming that I was saying the evidence was “irrelevant,” to which I replied:

It’s irrelevant in terms of how we should design the policy, in terms of whether we should kick [schools] out or not, but I think it’s very important that we know how well these programs are working. Test scores do measure something. They are important. They’re not everything, but I think they’re a pretty decent proxy for quality…

In other words, yes, test scores matter. But they are far from the only things that matter. Test scores should be one of many factors that inform parents so that they can make the final decision about what’s best for their children, rather than having the government eliminate what might well be their least bad option based on a single performance measure.

I am grateful that Dr. Harris took the time both to attend our policy forum and to continue the debate on his blog afterward. I look forward to continued dialogue regarding our shared goal of expanding educational opportunity for all children.

(Cross-posted at Cato-at-Liberty.)


Overregulation Is All You Need: A Response to Paul Bruno

February 29, 2016


(Guest Post by Jason Bedrick)

Over at the Brookings Institution’s education blog, Paul Bruno offers a thoughtful critique of Overregulation Theory (OT), the idea that government regulations on school choice programs can undermine their positive effects. Bruno argues that although OT is “one of the most plausible explanations” of the negative results that two studies of Louisiana’s voucher program recently found, it is not “entirely consistent with the available evidence” and “does not by itself explain substantial negative effects from vouchers.”

I agree with Bruno–and have stated repeatedly–that the studies’ findings do not conclusively prove OT. That said, I believe both that OT is consistent with the available evidence and that it could explain the substantial negative effects (though I think it’s likely there are other factors at play as well). I’ll explain why below, but first, a shameless plug:

On Friday, March 4th at noon, the Cato Institute will be hosting a debate over the impact of regulations on school choice programs featuring Patrick Wolf, Douglas Harris, Michael Petrilli, and yours truly, moderated by Neal McCluskey. If you’re in the D.C. area, please RSVP at this link and join us! Come for the policy discussion, stay for the sponsored lunch!

Is the evidence consistent with Overregulation Theory?

Bruno notes that the differences in enrollment trends between participating and non-participating private schools are consistent with OT. Participating schools had been experiencing declining enrollment in the decade before the voucher program was enacted, whereas non-participating schools had slightly increasing enrollment on average. This is consistent with OT’s prediction that better schools (which were able to maintain their enrollment or grow) would be more likely to eschew the vouchers due to the significant regulatory burden, while the lower-performing schools (which were losing students) were more desperate for students and funding, and were therefore more willing to jump through the voucher program’s regulatory hoops. However, Bruno calls this evidence into question:

For one thing, the authors of the Louisiana study specifically check to see if learning outcomes vary significantly between schools experiencing greater or lesser prior enrollment declines, and find that they do not. (Bedrick acknowledges this, but doubts there was enough variation in the enrollment trends of participating schools to identify differences.)

We should be skeptical of the explanatory value of the study’s enrollment check. There is no good reason to assume that the correlation between enrollment trends and performance among the small sample of participating schools (which had significantly negative growth, on average) is the same as among all private schools in the state. Making such an assumption is like a blind man holding onto the trunk of an elephant and assuming that he’s holding a snake.

The study does not show the variation in enrollment trends among the participating and non-participating schools, but we could imagine a scenario where the enrollment trend among participating schools ranged, say, from -25% to +5% while the range at non-participating schools was -5% to +25%. As shown in the following charts (which use hypothetical data), there may be a strong correlation between enrollment trends and outcomes among the entire population, while there is little correlation in the subset of participating schools.

Enrollment Growth and Performance, Participating Private Schools (Hypothetical)


Enrollment Growth and Performance, All Private Schools (Hypothetical)


In short, looking at the relationship between enrollment growth and performance in the narrow subset of participating schools doesn’t necessarily tell us anything about the relationship between enrollment growth and performance generally. Hence the study’s “check” that Bruno cites does not provide evidence against OT.
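The range-restriction point can be illustrated with a short simulation. This is a minimal sketch using made-up numbers chosen only to demonstrate the statistics; none of the parameters come from the Louisiana data:

```python
# Simulating how restricting a sample's range attenuates an observed
# correlation (all figures are hypothetical, for illustration only).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of private schools: enrollment growth (%) is
# a noisy signal of quality, so the two correlate strongly overall.
n = 1000
quality = rng.normal(0, 1, n)
growth = 10 * quality + rng.normal(0, 5, n)  # percentage points

overall_r = np.corrcoef(growth, quality)[0, 1]

# "Participating" schools: only those with declining enrollment opt in,
# mimicking a program that mostly attracts schools losing students.
mask = growth < -5
subset_r = np.corrcoef(growth[mask], quality[mask])[0, 1]

print(f"correlation, all schools:          {overall_r:.2f}")
print(f"correlation, participating subset: {subset_r:.2f}")
# The subset correlation is noticeably weaker: truncating the range of
# growth weakens the observed growth-quality relationship, so a null
# result within the subset says little about the full population.
```

This is the classic restriction-of-range effect: a strong population-wide relationship can look flat inside a truncated subsample.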

Is there evidence that regulations improve performance?

Bruno also cites evidence that regulations can have a positive impact on student outcomes:

Joshua Cowen of Michigan State University also points out that there is previous evidence of positive effects from accountability rules on voucher program outcomes in other states (though regulations may differ in Louisiana).

The Cowen article considers the impact of high-stakes testing imposed on the Milwaukee voucher program during a multi-year study of that program. The “results indicate substantial growth for voucher students in the first high-stakes testing year, particularly in mathematics, and for students with higher levels of earlier academic achievement.” But is this strong evidence that regulations improve performance? One of the authors of both the original Milwaukee study and the cited article–JPG all-star, Patrick Wolf–cautions against over-interpreting these results:

Ours is one study of what happened in one year for one school choice program that switched from low-stakes testing to high-stakes testing.  As we point out in the report, it is entirely possible that the surge in the test scores of the voucher students was a “one-off” due to a greater focus of the voucher schools on test preparation and test-taking strategies that year.  In other words, by taking the standardized testing seriously in that final year, the schools simply may have produced a truer measure of student’s actual (better) performance all along, not necessarily a signal that they actually learned a lot more in the one year under the new accountability regime.

If we had had another year to examine the trend in scores in our study we might have been able to tease out a possible test-prep bump from an effect of actually higher rates of learning due to accountability.  Our research mandate ended in 2010-11, sadly, and we had to leave it there – a finding that is enticing and suggestive but hardly conclusive.

It’s certainly possible that the high-stakes test improved actual learning. But it’s also possible–and, I would argue, more probable–that changing the stakes just meant that the schools responded to the new incentive by focusing more on test-taking strategies to boost their scores.

For that matter, even if it were true that the regulations actually improved student learning, that does not contradict Overregulation Theory. Both advocates and skeptics of the regulations believe that schools respond to incentives. Those of us who are concerned about the impact of the regulations don’t believe that they can’t improve performance. Rather, our concern is that regulations imposed from above are less effective at improving performance than the incentives created by direct accountability to parents in a robust market in education, and may have adverse unintended consequences.

To explain: We’re concerned that regulations forbidding the use of a school’s preferred admissions standards or requiring the state test (which is aligned to the state curriculum) might drive away better-performing schools, leaving parents to choose only among the lower-performing schools. We’re concerned that price controls will inhibit growth, providing schools with an incentive only to fill empty seats rather than to scale up. We’re concerned that mandatory state tests will inhibit innovation and induce conformity. None of these concerns rule out the possibility (or, indeed, the likelihood) that over time, requiring private schools to administer the state test and report the results and/or face sanctions based on test performance will improve the participating schools’ performance on that test.

Again: we agree that schools respond to incentives. We just think the results of top-down incentives are likely to be inferior to the results of bottom-up choice and competition, which have proved to be powerful tools in so many other fields for spurring innovation and improving quality.

Can Overregulation Theory alone explain the negative results in Louisiana?

Finally, Bruno questions whether OT alone explains the Louisiana results:

[E]ven if regulation prevented all but the worst private schools from participating, this would explain why students did not benefit from transferring into them, but not why students would transfer into them in the first place.

So Overregulation Theory might be part of the story in explaining negative voucher effects in Louisiana, but it is not by itself sufficient. To explain the results we see in the study, it is necessary to tell an additional story about why families would sort into these apparently inferior schools.

Bruno offers a few possible stories–that parents select schools “that provide unobserved benefits,” that the voucher program “induced families to select inferior schools,” or that parents merely “assume any private school must be superior to their available public schools”–but any of these can be consistent with OT.  Indeed, the second story Bruno offers is practically an extension of OT: if the voucher regulations truncate supply so that it is dominated by low-quality schools, and the government gives false assurances that they have vetted those schools, then it is likely that we will see parents lured into choosing inferior schools.

That’s not to say that there are no other factors causing the negative results. It’s likely that there are. (I find Douglas Harris’s argument that the private schools’ curricula did not align with the state test in the first year particularly compelling, though I don’t think it entirely explains the magnitude of the negative results.) We just don’t have any compelling evidence that OT is wrong, and OT can suffice to explain the negative results.

I will conclude as I began: expressing agreement. I concur with Bruno’s assessment that “it is likely that the existing evidence will not allow us to fully adjudicate between competing hypotheses.” Indeed, it’s likely that future evidence won’t be conclusive either (it rarely is), but I hope that further research will shed more light on this important question. Bruno concludes by calling for greater efforts to “understand how families determine where their children will be educated,” noting that by understanding how and why parents might make “sub-optimal — or even harmful” decisions will help “maximize the benefits of school choice while mitigating its risks.” These are noble goals and I share Bruno’s desire to pursue them. I just hope that policymakers will approach what we learn with a spirit of humility about what they can accomplish.

Let a Thousand Magnolias Bloom: ESA Enrollment in Mississippi

February 5, 2016


(Guest Post by Jason Bedrick)

Citing low enrollment and bogus “research” that excludes the mountain of random-assignment studies, one anti-choice group says Mississippi’s education savings account program for students with special needs is a “failure.”

Of the more than 50,000 children with special needs in Mississippi public schools, 251 were qualified and approved to receive vouchers. Of those, only 107 appear to have used them, .0018 of one percent [sic] of Mississippi’s children with special needs.

The research claim clearly doesn’t hold water (unsurprisingly, the only gold standard study they cite is the recent one from Louisiana) but what about the low enrollment? Is this a program that parents don’t really want? Or perhaps there just aren’t enough private school seats for parents?

First, it’s pretty rich that a group that opposes educational choice cites low enrollment as a reason it is “failing.” If enrollment were high, do you think they would see that as a sign of success?

Second, the ESA program is still in its first year. As Empower Mississippi demonstrates in this helpful chart, programs that start small can grow significantly over time:


As Empower Mississippi notes, detractors were probably quick to declare Florida’s McKay scholarships a “failure” when only two students used them in the first year, but after experiencing 1,505,100% growth in the next decade and a half, I doubt anyone is making that case anymore.
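That growth figure checks out arithmetically. Here is a back-of-the-envelope sketch; the starting enrollment and the percentage come from the post, while the implied final count is my own calculation:

```python
# Back-of-the-envelope check of the McKay scholarship growth figure.
initial = 2             # students in the program's first year
growth_pct = 1_505_100  # cumulative growth over ~15 years, in percent

# Cumulative percentage growth: final = initial * (1 + growth/100)
final = initial * (1 + growth_pct / 100)
print(f"implied enrollment after growth: {final:,.0f}")
# 2 students growing 1,505,100% implies roughly 30,104 students.
```

In other words, the eye-popping percentage corresponds to an enrollment on the order of 30,000 students, which is why nobody calls McKay a failure anymore.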

That said, detractors might be right that there aren’t enough private school seats right now. However, one of the purposes of educational choice is to expand the market. Greater demand should spark greater supply, if the price is right. Unfortunately, that’s a big “if.” The Magnolia State’s ESAs are currently funded at only $6,500 per year, tied to the state’s base student cost rather than to the cost of educating students with special needs, as Arizona’s ESA funding is.

If Mississippi lawmakers want to see greater supply in private school seats for students with special needs — and empower parents to use the ESAs to tailor their child’s education using tutors, online courses, educational therapy, etc. — then they should make sure that the ESAs are adequately funded.

[UPDATE: Grant Callen of Empower Mississippi wrote to let me know that I got one very important detail wrong: the image I used originally was of a Japanese Magnolia, not the North American Magnolia that is Mississippi’s state flower. I stand corrected!]

New Report: Turn and Face the Strain

February 4, 2016

Turn and Face the Strain

(Guest Post by Matthew Ladner)

Excel in Ed and the Friedman Foundation have co-released a study on state age demographics authored by yours truly. The title reflects a couple of different things. First, I dig me some Bowie. Second, people are generally aware of the looming crisis in age demography we face, but they primarily frame it as a federal issue. With 10,000 baby boomers reaching retirement age every day between now and 2030 (by which point all of them will have reached it), it certainly does represent a federal issue: trillions of dollars of unfunded liabilities in Social Security and Medicare, etc. But the federal issue is not the only issue…

State policymakers must turn and face the strain that changing age demography will put on state government in the form of Medicaid, public pensions, a drag on economic growth, and, in many states, a growing K-12 population. Spoiler alert: all states have it bad, and some have it far worse than others.

The Baby Boom generation has already started retiring, and will be sending their grandchildren off to school. The United States Census Bureau projects the percentage of working-age people to shrink in every state, meaning fewer people in the prime earning (and thus taxpaying) years to support a growing number of seniors and youth. All states will be getting older, with only a handful of states projected to have a smaller elderly population share in 2030 than Florida had in 2010. Many states also face large projected youth population increases. With Medicaid currently constituting 23 percent of the average state budget and education approximately half, a fierce battle looms between the need for health spending and the need for education spending, with fewer working-age people to foot the bill.

A great many of the working-age population of 2030, by the way, are sitting in American classrooms right now. According to NAEP, around a third of them can read proficiently. While a broad and difficult rethinking of the provision of vital public services will prove necessary, especially in areas such as health, pensions, and immigration, the most urgent need is to improve both the effectiveness and the cost-effectiveness of the K-12 system.

Most of the K-12 debate ultimately boils down to whether or not to change the status quo. The status quo, however, is going to change us whether we like it or not.

There’s more over on the EdFly blog; let me know what you think.

The 123s of the ABCs

February 3, 2015


(Guest post by Greg Forster)

We are now up to an astonishing 51 school choice programs in 24 states plus DC. We are one state short of having private school choice in half the states. Who wants to put us over the top?

Check out all the latest stats on all these programs in the 2015 edition of The ABCs of School Choice, just released from Friedman.

Fun With Peer Review

December 9, 2014

[PHD Comics cartoon]

(Guest post by Greg Forster)

I may have to revise my opinion of Vox; they seem to have taken an interest in the weaknesses of the peer review system. Of course there are a lot of responsible peer-reviewed journals and, well, peers. But there are a lot of the other kind as well, and we are long past the point where simply having gone through something called “peer review” ought to count for anything.

One story details how unscrupulous researchers can manipulate journals, including – amazingly – posing as their own reviewers. In highly specialized fields, journal editors may not know who the appropriate reviewers would be, so they rely – apparently uncritically in some cases – on the “recommended reviewers” supplied by the article authors. Who in some cases are simply the authors themselves using another email address. One scientist used 130 email accounts to create a vast, self-validating “peer review and citation ring”; 60 papers were recently retracted after a 14-month investigation uncovered the fraud. A total of at least 110 articles have been pulled in the last two years due to this type of fraud.


Figure 1 from the article “Get Me Off Your Fucking Mailing List”

Accepted for publication by the highly reputable International Journal of Advanced Computer Technology

But the other story is a lot better. It details how some journals now survive not by selling subscriptions or getting institutional support, but by charging a fee to publish your paper. They are apparently known as “predatory journals” because they spam the email universe looking for gullible (or, presumably, unscrupulous) people looking to break into publication. “Article mills” (after the analogous “diploma mills”) would seem a more appropriate name.

As you can see above, the “peer review” process becomes somewhat lax in these cases. One pair of scientists slapped together the above-referenced article and began submitting it to peer review spammers. They were amused to discover that one journal accepted it for publication. Another journal not only accepted but published an article (consisting of nonsense text) by Maggie Simpson and Edna Krabappel, and it now sends the authors regular demands that they pay their $459 bill.

But it’s not just spam scammers – peer review controls are easy to get past even at some highly reputable publishers.

Report Card on American Education Released Today

October 29, 2014

(Guest Post by Matthew Ladner)

The 19th Edition of ALEC’s Report Card on American Education: Ranking State Performance, Progress and Reform coauthored by yours truly and Dave Myslinski hit the presses today. Lots of good stuff in this year’s model, including an update of state rankings, a review of the first decade of universal NAEP participation, and a chapter focused on comparing the results of large urban districts.

So going up to the 30,000-foot level and back down: international results show that the United States is world-class in spending per pupil, not so much in learning per pupil, and that our results for Black and Hispanic students are closer to those of Mexico than of South Korea, despite the fact that Mexico has a far larger poverty problem and spends a small fraction of what the United States spends.

The United States is making progress, but only an average amount of progress, so we aren’t going to be catching up much at the current pace. When you break down American results by state, you find that some states are pushing the national average cart, while others are riding in the cart. Which ones? Glad you asked:

[Map: states with statistically significant gains on all four regular NAEP exams, 2003–2013]


The states in blue made statistically significant gains on all four regular NAEP tests (4th and 8th grade reading and math) between 2003 and 2013. Of the 21 states pulling that feat off, 14 are located in either the West or the South. The Midwest (excepting MN), the Great Plains, the Mid-Atlantic, New York, and Texas didn’t carry their weight on improvement during this period, to varying degrees; in general, math gains were easier to come by than reading gains, and 4th grade improvement came easier than 8th grade. Michigan was the only state to make no significant progress on any of the four regular NAEP exams, a trend I hope it will reverse soon. All other states made progress on one or more of the exams. Note also that this map shows only improvement: few if any of the darkened states have internationally competitive scores, and the few that come close tend to hold the good end of the stick on various achievement gaps.

So on the one hand, American education outcomes have never been higher than on the 2013 NAEP. On the other hand, no one yet has any cause for celebration. When we have any states that approach an Asian/European level of bang for the buck in learning outcomes, we’ll let you know, but thus far, not so much.

In Chapter 4 of the Report Card we take a close look at the Trial Urban District Assessment (TUDA) NAEP and apply the same “general education low-income” student comparison that we use in the states to improve comparability. Low-income general ed kids were seven times more likely to reach the Proficient level in 4th grade reading in Miami (the top-performing district) than in Detroit (the lowest-performing). Mind you, even in Miami the odds of scoring Proficient are only a little better than 1 in 3, so there are many miles to go. Looking at both 4th and 8th grade reading, Miami, New York City, Hillsborough County, FL (Tampa), and Boston cluster near the top of the ratings. The District of Columbia does not (yet) rate near the top, but its progress on NAEP since the mid-1990s is nothing short of remarkable. A large percentage of District students attend charter schools these days, and those charter schools show not only higher scores but also faster improvement than district schools, which are themselves improving.

In any case, slide on down to the following link if you want to see how your state is doing.

Indiana State page