My Apple is Bigger

August 29, 2017

(Guest post by Patrick J. Wolf)

New York City public charters have been much in the news of late (see here & here) for hitting it out of Yankee Stadium on student achievement.  When Judge-ing (sorry, couldn’t resist) The Big Apple’s charter school sector, add this little fact to the case:  charters out-slug their peers at a lower cost.

That is the conclusion of my latest study of charter school funding inequity, co-authored with Larry D. Maloney.  It is fun to study New York City, in part because of great potential for wordplay but also because the place is so darn big that you can disaggregate results by borough and still have district v. charter comparisons informed by large samples.  So, “start spreading the news…”

There are over 1,000,000 public school children in The Big Apple.  Seven percent of them attended charter schools during Fiscal Year 2014, the focus of our study.  Cash revenue to charter schools averaged $15,983 per-pupil while payments to district-run schools averaged a much more generous $26,560 per-pupil.  “You just wait a New York Minute Mr. Henley,” you might caution, “The New York City Department of Education actually provides in-kind services to students in charter schools that represent a funding resource not accounted for in your cash calculations.”  You would be right.  After factoring in the cash value of such in-kind services, charter schools receive a mere $4,888 less in per-pupil funding than district schools (Figure 3).  New York City charter schools are outperforming the City’s district schools at about 81 cents on the dollar.
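The cents-on-the-dollar figure follows directly from the two totals above. Here is a quick back-of-the-envelope sketch in Python; note that the charter per-pupil total including in-kind services is inferred here from the stated $4,888 gap rather than read off the report's tables:

```python
# Back-of-the-envelope check of the per-pupil funding figures quoted above.
district_per_pupil = 26_560          # district-run schools, per-pupil
funding_gap = 4_888                  # gap remaining after in-kind services
charter_per_pupil = district_per_pupil - funding_gap  # inferred: 21,672

cents_on_dollar = 100 * charter_per_pupil / district_per_pupil
print(round(cents_on_dollar, 1))     # → 81.6, roughly the 81 cents quoted
```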

I’ll admit that my figure isn’t nearly as MoMA-worthy as Matt’s post-modernist depiction of the Arizona school districts that refuse to accept students through inter-district choice, but it makes a crucial point.  Even accounting for the value of everything contributed in support of charter schools in New York City, district schools still get more money per student.

Critics of our prior charter school funding studies (available here and here) have claimed that we are making Big-Apple-to-Big-Orange comparisons, since district schools provide more extensive educational services to students than charters.  Our accounting for in-kind district services to charters fully addresses that argument.  After factoring in the value of co-located facilities, transportation, meals, special education services, health services, textbooks, software, etc., all of which are provided to charters in New York City so that the scope of their services is equal to that of district schools, the charters still receive less funding.  We even examined school spending patterns, in addition to funding patterns, and the story is the same.

Surely the student populations in district schools are needier than those in charter schools, thereby justifying the funding gap, right?  Actually no.  The population of charter school students in New York City contains a higher percentage of free- and reduced-price lunch kids than the population of district school students (Figure 4).

The percentage of students with disabilities is only slightly higher in district schools versus charter schools, 18.2% compared to 15.9%.  That means that districts enroll 21,342 “extra” students with disabilities compared to charters.  For the special education enrollment gap favoring districts to explain the entire funding gap favoring districts, each “extra” student with a disability in the district sector would have to cost an additional $214,376 above the cost of educating a student in general education.  It is simply implausible that the slight gap in special education enrollments explains the substantial gap in funding between district and charter schools in New York City.
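The $214,376 figure can be roughly reconstructed from the numbers above. In the sketch below, the district enrollment of about 936,000 is my own assumption (roughly 93 percent of the city's 1,000,000-plus public school students), not a figure taken from the report:

```python
# Rough reconstruction of the $214,376 figure; NOT the report's exact math.
district_enrollment = 936_000   # ASSUMED: ~93% of 1,000,000+ students
per_pupil_gap = 4_888           # district minus charter, incl. in-kind
extra_swd = 21_342              # "extra" students with disabilities

# The district sector's total funding advantage, spread across the
# "extra" students with disabilities it enrolls:
total_gap = per_pupil_gap * district_enrollment       # ~$4.58 billion
cost_per_extra_swd = total_gap / extra_swd
print(round(cost_per_extra_swd))  # ≈ 214,000, near the report's $214,376
```

Even under generous assumptions, each "extra" student with a disability would need to cost several multiples of total per-pupil funding to close the gap, which is the implausibility the paragraph above points out.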

Like rookie sensation Aaron Judge, this report has lots of hits besides just the homeruns described above, so check it out.  In sum, New York City has made a major commitment to provide material support to students in its public charter schools.  Still, inexplicable funding inequities persist depending simply on whether a child is in a charter or a district school.  Larry and I think this case study provides yet another Reason to support weighted student funding with full portability (see what I did there?).  Switching to such a simple and equitable method for funding all public school students definitely would put us in a “New York State of Mind.”


CREDO Is Not the Gold Standard

August 28, 2017

CREDO has produced a slew of studies comparing test score outcomes for students in charter and traditional public schools.  Those studies have come to dominate public policy and foundation discussions about charter schools and are sometimes thought to be the highest quality studies on charter effects.  They are not.

We actually have more than a dozen random-assignment studies of charter school achievement effects.  For a summary of what those gold-standard studies find, see this systematic review by Cheng, Hitt, Kisida, and Mills (or if you have difficulty with the pay-wall you can find an earlier working paper here).

CREDO’s research design is not gold standard.  It’s not even silver.  Maybe it’s Formica.  It would be understandable for you to be confused and think CREDO was gold standard given how much people in policy circles talk about that research as opposed to the set of gold-standard random-assignment experiments.  And you might be further confused by the language CREDO uses when they describe their research design as comparing “virtual twins.”

CREDO’s methodology does not compare twins, virtual or otherwise.  All they are doing is comparing students who are similar on a limited set of observable characteristics — race, age, gender, and prior achievement scores.  “Matching” students on those observable characteristics is just as prone to selection bias as any other observational study that controls statistically for a handful of observed characteristics when comparing students who choose to be in different school sectors.  That is, students who choose to attend charter schools are very likely to be different from those who choose to remain in traditional public schools in ways that are not captured by their race, age, gender, and prior test score.  In particular, their desire to switch to a different kind of school may well be associated with developments in their life that might affect the future trajectory of their test scores.  In short, school choice is prone to bias from selection in observational studies like CREDO.
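To make the point concrete, here is a toy illustration (invented data and field names, and a deliberate simplification of CREDO's actual procedure): two students can agree on every matched observable and still differ on an unobserved trait that drives their future scores.

```python
# Toy sketch of matching on observables; invented data, not CREDO's code.
students = [
    {"sector": "charter",  "race": "black", "gender": "F", "age": 10,
     "prior_score": 650, "motivated_to_switch": True},   # unobserved trait
    {"sector": "district", "race": "black", "gender": "F", "age": 10,
     "prior_score": 650, "motivated_to_switch": False},  # unobserved trait
]

OBSERVED = ("race", "gender", "age", "prior_score")

def observably_identical(a, b):
    """True when two students match on every OBSERVED characteristic."""
    return all(a[k] == b[k] for k in OBSERVED)

a, b = students
print(observably_identical(a, b))   # True: matched as "virtual twins"
print(a["motivated_to_switch"] == b["motivated_to_switch"])  # False
```

A matching method would pair these two students as "twins," yet any effect of motivation on later achievement would be misattributed to the school sector.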

CREDO overstates the strength of their methodology by referring to their approach as one that compares “virtual twins.”  They say: “a ‘virtual twin’ was constructed for each closure student by drawing on the available records of students with identical traits and identical or very similar baseline test scores.” (p. 3)  It is probably unintentional, but this description gives the false impression that they are comparing “identical” students in different sectors.  In reality they are only comparing students who are similar on a handful of observed characteristics.  Ladner and I may both have beards, enjoy a malt beverage, and be interested in school choice, but that does not make us “twins,” nor would it be reasonable to describe us as having “identical traits.”

Unlike CREDO, gold-standard random assignment studies are not subject to selection bias because only chance distinguishes between whether students are in charter or traditional public schools.  On average, the students being compared in randomized control trials (RCTs) are truly identical on all observed and unobserved characteristics.  They really are virtual twins.

Backers of CREDO can point to the fact that the CREDO methodology has produced results that are similar to experimental studies in a few locations and claim that selection bias must therefore not be an important problem.  This is a faulty conclusion.  Finding that CREDO’s observational method and randomized control trials sometimes produce similar results only proves that selection did not bias the results in those cases.  In other cases charters may attract students who are very different in their future achievement trajectory and RCTs would produce results that are very different from an observational study.  Online charters are likely a clear example of where this selection bias would be severe.

The problem is compounded by the fact that policymakers and foundation officials are too eager to use CREDO results for a number of reasons that have nothing to do with the quality of the methodology.  Sometimes they want to use CREDO because it supports their preferred policy conclusions.  They also have a strong preference for studies that name the city or state they are considering.

It’s as if Jonas Salk proved that the polio vaccine works in an RCT, but policymakers and foundation officials want to know if it prevents polio in New Orleans or Detroit.  Rather than rely on a lower quality research design that mentions their town, policymakers and foundation officials should focus on the highest quality charter randomized control trials, of which we have more than a dozen.  If that evidence shows the polio vaccine to work, then they should assume it also works in their town.


Monday Links

August 14, 2017

(Guest Post by Matthew Ladner)

Andy Smarick turns in an interesting article on the Structure of Scientific Revolutions and education reform in National Affairs. It’s a great piece, although I believe it would profit from acknowledging the near-catastrophic flaws in school district democracy, including often extremely low voter turnout rates and the resulting opportunities for regulatory capture.

Kate Walsh on K-12 progress that is going unnoticed. Before this charter-school college-success meme gets entirely out of hand, I want to suggest that we nail down the comparisons between control and treatment groups on long-term success before going to town on this. We do have reason to suspect that those applying to charters are different from those who don’t.

Interesting article on Why Education is the Hardest Sector to Automate by Raya Bidshahri.

Yours truly on Arizona Horizon debating ESAs with SoS leader Beth Lewis.



Christian Theology and Sending Your Kids to Public School

August 11, 2017


(Guest post by Greg Forster)

OCPA’s Perspective carries my latest, on the question of whether sending your children to public schools should be considered mandatory, forbidden or neither under Christian theology.

First I respond to a Presbyterian pastor who says Christians should feel obligated to send their kids to public schools, because hundreds of years ago churches created the schools that public schools were later modeled on:

Moore seems not to have asked why, if churches were so aggressive in creating schools, government stepped in and used its power of taxation to drive churches (mostly) out of the schooling business. One reason was because the church schools taught people to think for themselves and live independently. As industrialists grew more powerful in the 19th century, they enlisted government to create a new school system whose products would provide more submissive and narrow-minded cogs for the factory machine. (Big business and big government colluding to destroy freedom and oppress the poor—it all seems so familiar, almost as if we’ve seen it somewhere before.)

However, the more important reason was religious. Massachusetts created the first government school monopoly in America in the 1830s, partly out of submission to exploitative industrialists, but also because the Unitarian Boston Brahmins were horrified at the unreconstructed Calvinism of the Massachusetts countryside. The new public schools indoctrinated students in a religion of good works without the cross, tearing up the cultural roots of puritanism. (One would think this history would be of interest to a Presbyterian pastor.)

The tool created to destroy Calvinism in Massachusetts was soon deployed nationwide in hopes of destroying Catholicism…

Then I respond to an Anglican priest who says any school that isn’t explicitly Christian must be treated as atheistic and avoided:

It is true, as Fowler argues, that there is no such thing as religious neutrality. All human beings have some kind of cosmic worldview, and everything we do presupposes that worldview. Even to assert “2+2=4” presupposes the view that the human mind is rational, and its spontaneous intuitions about the relationships between numbers are sound (and, for that matter, that other minds exist and I can communicate meaningfully with them). All of these premises are religious, or at least metaphysical.

However, while there is no such thing as religious neutrality, there is religious ambiguity. To assert “2+2=4” is not a religiously neutral statement, but it does not settle all religious questions, either. A Christian, a Muslim, and a Hindu have three very different metaphysical accounts of what the human mind is, yet each can square “2+2=4” with their view.

Thus, a school that is not explicitly Christian need not become explicitly atheist. It need not even become implicitly atheist. It may be implicitly Christian! Or it may be a shared possession, offering an educational discourse that accommodates a diverse population without resolving their religious differences.

I argue we are neither compelled nor forbidden by Christian theology in the matter of sending children to public school:

Parents should evaluate local schools, public and private, and select the one that aligns best with their views and goals. My wife and I send our daughter to the local government school, because we are more satisfied with it on all counts—including spiritually—than the available alternatives. Others, facing other circumstances, may legitimately make other choices.

We live under neither a Babylonian captivity to public schools nor an Egyptian exodus from them. We live in the freedom of Pentecost, sent out into the nations to live in the tension of being in the world but not of it:

The problem with both Babylon and Exodus as social models for Christianity today is that they both come from the Old Testament, before Christ’s coming. With the Great Commission, Christ has sent his people out into the world; he wants disciples of every nation. Where the Jews were called out of Egypt, we are called into Egypt—our road is an eisodus, not an exodus. But we are not called to live passively, like the exiles in Babylon, merely marking time in a foreign land. The church has a mission to build godly ways of life wherever we go, and that means we can’t simply conform to the world around us or bunker down in Christian ghettos.

Whatever your faith and whatever you think of my theology, I welcome your feedback – and I promise not to lie about your views and then fire you!


More Chutzpah

August 7, 2017

In my last post I described how Atila Abdulkadiroglu, Parag Pathak, and Christopher Walters (APW) released an evaluation of the 1st year results from Louisiana’s voucher program through NBER dated December 2015 (although actually released in January 2016) that failed to cite or provide appropriate credit to earlier conference presentations by Jonathan Mills and Patrick Wolf and a dissertation by Mills.  In response APW issued a statement that raises many issues, but fails to address the heart of the matter. Before replying to some of those extraneous issues, let’s focus on the key questions:

  1. Did Jon and Pat conduct analyses, write papers, and present findings to the academic community of the same program using the same basic methodology and data as APW prior to their December 2015 NBER paper?
  2. Were APW aware of this prior work?
  3. Did APW fail to cite and give appropriate credit to that prior work in their December 2015 NBER paper?

If you read their statement closely you will see that APW do not deny the existence of prior work, do not deny being aware of that work, and do not deny failing to cite it. Let’s take a look at what they did say.

The No Prior Working Papers Claim

APW do not deny that Jon and Pat had multiple conference papers, starting with one to the Association for Public Policy Analysis and Management on November 8, 2013 followed by 8 more during 2014 and 2015.  Instead they focus on whether there were prior working papers: “Mills and Wolf released a working paper in February 2016. This is the first working paper by the Arkansas team that we are aware of.” Christopher Walters reiterates this point in a comment on the blog: “We would have cited a public working paper had we known of one in December 2015.”

There are two problems with this no-prior-working-paper argument.  First, a working paper is not the only format of prior academic work that requires citation.  There is no exemption to the scholarly obligation to credit prior work if that work is in the form of conference papers and presentations.  In fact, the American Economic Association helpfully provides guidance on the appropriate way to cite “Lectures and Papers Presented at Meetings.”

Second, there is a very good reason why Jon and Pat did not have a working paper for APW to cite and APW are almost certainly aware of that reason: The Louisiana Department of Education (LDE) asked Jon and Pat not to widely disseminate their findings until after the second year of results were complete. LDE had made the same request of APW, who similarly complied with that request before deciding that it would take them too long to finish the 2nd year analyses.  As APW describe it in their statement: “We postponed the release of our paper because the LDE promised us additional data in exchange for delaying public disclosure of our results. We released the paper when we judged that this data was not forthcoming.”

While Jon and Pat did not widely publicize the results of their 1st year evaluation by issuing a working paper online, they did not keep their findings a secret.  They presented their results at numerous conferences and in a dissertation, and they solicited feedback and suggestions from colleagues (including from Abdulkadiroglu).  And they took that time to resolve missing data issues, complete the 2nd year analyses, and release both 1st and 2nd year results in a working paper only a month after APW released their 1st year results.

By contrast, APW operated in a different manner.  As far as I can tell they did not present their work at any meeting of a professional association prior to December 2015.  They did not disclose to Pat and Jon that they were planning to conduct or were already conducting their own study, even when Pat and Jon discussed the research with Abdulkadiroglu.  They also did not take the time to work out data issues with LDE and wait for the 2nd year of results before releasing their NBER report. And the failure of that report to cite and credit the work produced by Jon and Pat cannot be excused because Jon and Pat didn’t put a working paper online.

The Data Sharing Agreement Claim

APW’s statement does not deny being aware of prior work by Jon and Pat.  In Walters’ comment on the blog, however, he does say “We were not aware of the Mills dissertation chapter …”  But Abdulkadiroglu, not Walters, was at the lunch with Jon and Pat in June 2015.  The statement they issued together does not deny that Abdulkadiroglu was told about Jon’s dissertation work at that lunch.  Instead it denies that Jon’s dissertation was the origin of APW’s work: “This conversation centered on the use of school assignment mechanisms for program evaluation. The conversation occurred more than two years after we signed our data agreement to evaluate the LSP using lotteries, so it was clearly not the inspiration for our work.”

Whatever inspired their work does not obviate their scholarly obligation to cite and give appropriate credit to academic work on the same program that had been produced prior to their December 2015 NBER report. The existence and date of APW’s data sharing agreement with LDE is irrelevant to whether they failed to cite and give appropriate credit to previously produced work.

But in a comment on the blog, Walters offers a novel interpretation of what “prior work” means: “Your team’s research is not ‘prior work.’ As shown by the date on the data agreement provided in our response, the two projects were in progress simultaneously.” By prior work I mean research findings that had been produced and shared with the academic community before December 2015.  The fact that both research teams had data sharing agreements does not erase the fact that Jon and Pat had produced, presented, and published (in a dissertation) results prior to the December 2015 NBER report.

In addition, it should be noted that Walters’ description that “the two projects were in progress simultaneously” is very different from APW’s characterization of Jon and Pat’s work as a “followup analysis” in the footnote they added to their NBER report after Jon and Pat released their February 2016 working paper with the 1st and 2nd year results. It should be further noted that even that amended paper does not provide a proper citation because Jon and Pat’s work is missing from the reference section.

Even though the existence and date of the data sharing agreements is irrelevant to the scholarly obligation to cite prior work, the suggestion that APW had an agreement that pre-dated Jon and Pat’s by two years is incorrect.  Jon and Pat started negotiating a data sharing agreement with LDE during the fall of 2012, around the same time as APW, and concluded that agreement on January 8, 2013, two months after APW.  Jon and Pat were completely unaware of the existence of APW’s data sharing agreement or that APW were pursuing a similar line of research with the same data.

The data sharing agreements are further irrelevant because they do not establish when the research teams actually started the research in earnest.  We know that Jon and Pat had started their research during 2013 because they presented preliminary findings at the APPAM conference in November 2013. The earliest we know APW had written-up results, according to reporting by The 74 Million, was in October 2015, when they presented them to LDE.  And when Jon and Pat discussed their research with Abdulkadiroglu in June 2015, it is unclear whether he failed to disclose that they were working on a similar study because APW had not actually started that work yet.

The You Didn’t Cite the Dissertation Either Claim

Finally, the APW statement seems to suggest that they are somehow absolved of the responsibility of citing Jonathan Mills’ dissertation because Jon and Pat also failed to cite that dissertation in some of their subsequent papers, including the February 2016 working paper with 1st and 2nd year results. First, it is an entirely different thing to deny oneself credit than to deny someone else credit.  Whether Jon and Pat made an error in failing to credit all of their own prior work does not excuse APW in making that error about someone else’s work.  This is especially the case because Jon and Pat’s failure to cite Jon’s dissertation does not mislead the reader about who had been the first to produce these results.  Second, Jon and Pat’s February 2016 working paper did cite a series of their own prior conference papers from 2014 and 2015, so they did clearly establish that they had been presenting results on the Louisiana program well before December 2015. APW might have noticed those references and acknowledged that record of prior work when they added the footnote in an update to their December 2015 NBER paper mentioning Jon and Pat’s February 2016 working paper.


Raising this issue is certainly not pleasant.  In fact, it’s downright nerve-wracking and I can completely understand why Jon was reluctant to press this matter earlier.  But I think credit is being given to other researchers for being the first to produce an evaluation of achievement effects from the Louisiana voucher program that properly belongs to Jon and Pat.  I actually tried to resolve this amicably with Parag Pathak when he came to give a lecture at the University of Arkansas in 2016.  Parag had been invited prior to the December 2015 NBER report and his visit was uncomfortable.  It would have been more uncomfortable if either Jon or Pat were there, but Jon had already left for a post-doc at Tulane and Pat had another obligation.  Contrary to the APW statement, I did raise the bruised feelings over credit and data access with Parag and suggested that he might smooth things over with Jon and Pat and perhaps regain access to LDE data if he were to suggest collaboration on some future project with them.  I suggested at the very least he should call Pat and talk to him.  He never did.


Florida Scholarship Program Serves 100,000+ Students

August 7, 2017

Image: “Rally in Tally” to support school choice on March 24, 2011.

(Guest Post by Jason Bedrick)

I still frequently see the canard that “poor families can’t benefit from school choice programs.”

Oh really? Tell that to the 100,000+ kids from low-income families in Florida receiving tax-credit scholarships through Step Up for Students. Average family income: $24,000.

Also, contra the false “minorities don’t benefit” narrative, about seven in 10 scholarship students are black or Hispanic.



The Chutzpah of Abdulkadiroglu, Pathak, and Walters

August 4, 2017

Chutzpah is jokingly defined as murdering one’s parents and then complaining about being an orphan. Atila Abdulkadiroglu, Parag Pathak, and Christopher Walters (hereafter APW or the MIT team) sure show some chutzpah when complaining in a recent article about not having continued access to data regarding the Louisiana Scholarship Program (LSP). While I don’t know for sure why they were denied continued access to data, I believe that it is related to their rush to release 1st year results from their evaluation. Why they were rushing is an incredibly depressing story about how status and power in our field contribute to academic abuse and dishonesty – a story the reporter who wrote the article entirely missed.

It is not widely known or acknowledged, but the original analysis of 1st year results from the LSP was conducted by Jonathan Mills, then a doctoral student, along with his advisor, Patrick Wolf, at the University of Arkansas.  They presented those findings at academic conferences 8 times during 2014 and 2015, and the results were contained in Jon’s dissertation, published in July 2015. APW were at some of those conferences.  Atila actually had lunch with Jon and Pat at a conference in June of 2015, during which they discussed that study.  Atila never indicated that he was conducting or planning to conduct a similar study.  He offered to help and they sent him some materials.  He never responded with help, but he did move forward with his own study with the MIT team without informing Pat or Jon.

APW released their own study as an NBER report in December 2015.  Nowhere in that report did they acknowledge or cite Jon and Pat’s earlier work, of which they were almost certainly aware, having discussed it with them. Nor did APW acknowledge that their study was essentially a replication of Jon and Pat’s earlier study. The research designs were nearly identical.  The data were almost the same.  The only difference was that Jon and Pat had a more complete data set and as a result reported more negative results.

That’s right.  Jon and Pat had more negative results.  They released those results along with the negative 2nd year results in February 2016.  So the fact that Jon and Pat continued getting access to LA data while APW did not does not appear to have anything to do with reporting negative results.  It seems to be related to the fact that APW were rushing to release results.  They didn’t take the time like Jon and Pat did to solve missing data issues.  Instead they were determined to move fast to get their results out first.

Why did it matter that they be first?  By being first to release, and by not citing Jon’s work, they could act as if theirs were the original analysis rather than a replication.  Top econ journals tend not to be as interested in replications of a grad student’s dissertation.

Failing to credit and cite earlier work is a form of academic fraud.  I have not come forward earlier with this story because Jon was entering the academic job market and did not want to get on the wrong side of high-status and powerful people in the field.  Pat and I, as his advisers, deferred to his wishes and remained quiet.  Now that Jon has a secure job (with us) and a news article wrongly implies that APW were denied access because (presumably unlike us) they wouldn’t withhold negative results, I felt compelled to tell this story.  It’s an ugly one.

UPDATE: Pat Wolf checked his records and found that he also had a discussion at a conference in April 2015 with Atila regarding the Louisiana evaluation that he and Jon were doing. The materials he sent, however, were following that conversation, not following the June conversation as Pat had earlier remembered, and those materials were not directly related to the study.  In any event, it is clear from multiple conversations and multiple conference presentations that APW were aware of the existence of prior research.

2nd Update:  APW have a statement here.  My response to it is here.

(Edited for typos and to add links)




USDOE Rediscovers Federalism

August 2, 2017

(Guest Post by Jason Bedrick)

Last month, as detailed here, the U.S. Department of Education rejected Delaware’s ESSA plan for being insufficiently “ambitious.” You see, Delaware was merely attempting to do something that no state had ever accomplished before.

The administration’s actions flew in the face of their frequently stated commitment to federalism. Now, however, it seems they have reversed course:

After some serious drama, U.S. Secretary of Education Betsy DeVos on Tuesday gave Delaware the green light for its Every Student Succeeds Act plan.

You read that right. Delaware, aka the state whose Feedback Shook the World, is the first state to get the all-clear to proceed on ESSA.

What drama are we talking about? Here’s some quick background: DeVos had been hitting the local control theme hard in speeches since taking office. But her team’s response to the submitted plan from Delaware, one of the first states to get ESSA plan feedback from the Trump education department, seemed out of line with that rhetoric.

The department questioned the ambitiousness of the First State’s student achievement goals and criticized the state for wanting to use Advanced Placement tests to gauge college and career readiness. (The department said this was a no-go because the tests and courses aren’t available in every school.)

That got many important people pretty upset, including Sen. Lamar Alexander, R-Tenn., the Senate education chairman and an ESSA architect. Chris Minnich, the executive director of the Council of Chief State School Officers, also said he was disappointed. Both said that DeVos’ team had essentially overstepped the bounds of the law.

In a press release, Secretary DeVos noted that she believed Delaware’s plan “adhered to the law” but she stopped short of recommitting her department to the principle of federalism:

“Delaware has always been a state of firsts, so it should be no surprise that theirs was both the first state plan submitted and the first approved under ESSA,” said Secretary DeVos.

“My criteria for approval is clear: does the state’s plan adhere to the law? Delaware demonstrated their plan does, and so I am happy to approve it. I hope it will give the students, families and educators in the state a strong foundation for a great education.

“Throughout the process, Delaware’s leaders have been terrific partners. I want to thank Gov. Carney, Secretary of Education Bunting and State Board President Loftus for their work and collaboration on putting forth a plan that embraces ESSA’s spirit of flexibility and creative thinking.”

All in all, this is a positive development. Nevertheless, this episode should serve to remind education reformers that even an administration that talks the federalist talk doesn’t necessarily always walk the walk. Those who respect subsidiarity and value local control — especially those who understand that the most local form of control is in the hands of parents — have good reason to be wary about giving the feds any power.


School Choice and Segregation in Perspective

July 31, 2017


(Guest post by Greg Forster)

OCPA’s Perspective carries my new article on school choice and segregation. I wrote it before the recent silliness from our friends at CAP, so my expectation that the recent increase in focus on this issue would only continue is holding up well so far:

The accusation that school choice will increase ethnic segregation in schools, after a long period on the rhetorical back-burner (during the age of test-score obsessions), has suddenly returned to the forefront of public debate. That’s no surprise, given rising levels of ethnic tension and polarization.

Last time the other side tried to make hay out of this, it failed, largely because the empirical evidence we have on this question is in favor of choice:

In fact, that body of research is the reason it’s been a while since we heard much talk about segregation in the debate over school choice. I remember hearing this talking point much more in the early 2000s, when fewer of these studies had been done. As the evidence piled up, the talking point went away.

In the new article I go into some of the politics of schools and ethnic segregation, arguing that while “most parents aren’t racist” is one possible explanation of the evidence for choice, you can also believe race is an important factor in school selection and still believe school choice will reduce ethnic segregation as compared with the status quo:

To whatever extent parents are racist, consciously or unconsciously, the government monopoly system is perfectly designed to cater to those racist preferences. Segregation flourishes under the government monopoly, both because schools are tied to ZIP codes and because power brokers draw the attendance lines.

So the strongest argument for choice is not “parents aren’t racist.” It’s “under the government monopoly, segregation happens by default, regardless of what parents prefer; only school choice creates the opportunity for integration.”

Check it out and let me know what you think!


Wonk Action Shot!

July 28, 2017


(Guest Post by Matthew Ladner)

Shot from Denver of some of my favorite folks…