EdWize’s Racial Libel

September 28, 2009


(Guest post by Greg Forster)

On EdWize, Jonathan Gyurko finds himself forced to acknowledge that Caroline Hoxby’s recent blockbuster study is good news for charter schools. He then starts desperately groping for any excuse he can find to neutralize the good news.

Most of his claims will be familiar to those who have seen the teachers’ unions try to spin away gold-standard empirical evidence that their positions are wrong. We’ve read all these cue cards before.

But one of his claims deserves more attention. Like many before him, Gyurko tries his hand at racial demagoguery to make parental choice seem like a scary throwback to Jim Crow:

Such a dramatically-presented conclusion is sure to feature prominently in charter advocates’ efforts to expand the number of charter schools across the city and state. And if it’s true, then why shouldn’t we? The answer actually depends on how policymakers weigh the goal of improved student achievement against other worthy goals, such as greater educational equity and meaningful diversity. And on these other objectives, nagging questions dog the charter sector.

For example, Hoxby finds that 92 percent of charter students are black or Hispanic, compared to 72 percent in district schools and concludes that “the existence of charter schools in the city therefore leaves the traditional public schools less black, more white, and more Asian.” Such racial segregation is consistent with research on charter schools in other states including North Carolina, Texas and elsewhere.

Although this statistic is likely to be a function of charter schools’ location in largely black and Hispanic neighborhoods, Hoxby also reports that fewer white students are applying to the charters; although 14 percent of residents in the charter school neighborhoods are white non-Hispanic, only 4 percent are applying.

There are two claims made here:

1) If the citywide aggregate population of all charter school students is more heavily minority than the citywide aggregate population of district school students, charters must be increasing segregation.

2) If charter school applicants who live near the charter schools are disproportionately minority, charters must be increasing segregation.

Both claims are transparently bogus.

On the first claim: citywide aggregate figures tell us nothing whatsoever about the impact charters are having on segregation, for the simple reason that aggregate figures cannot measure segregation in any context, even aside from the whole charter question.

Imagine for a moment that New York is made up of 50% green children and 50% purple children. Let’s look at two scenarios:

Perfect segregation scenario: All the green children go to fully segregated schools made up exclusively of green children, and all the purple children go to fully segregated schools made up exclusively of purple children.

Perfect integration scenario: All children attend perfectly integrated schools made up of half green children and half purple children.

Now, let’s take a look at the citywide aggregate figures we would get under these two scenarios.

Perfect segregation scenario: Citywide aggregate 50% green, 50% purple.

Perfect integration scenario: Citywide aggregate 50% green, 50% purple.

You see? Aggregate figures are intrinsically incapable of providing any information about school segregation. To find out whether schools are segregated, you must look at the individual schools.
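The point can be made concrete with a few lines of code. Here is a minimal sketch (not from the original post) using the standard index of dissimilarity, a common measure of school segregation: two hypothetical districts with identical citywide aggregates score at opposite ends of the segregation scale.

```python
def dissimilarity(schools):
    """Index of dissimilarity: 0 = perfectly integrated, 1 = fully segregated.
    `schools` is a list of (green_count, purple_count) enrollment pairs."""
    total_g = sum(g for g, p in schools)
    total_p = sum(p for g, p in schools)
    return 0.5 * sum(abs(g / total_g - p / total_p) for g, p in schools)

def aggregate_share(schools):
    """Citywide share of green students across all schools."""
    g = sum(g for g, p in schools)
    p = sum(p for g, p in schools)
    return g / (g + p)

# Perfect segregation: each school enrolls only one group.
segregated = [(100, 0), (0, 100)]
# Perfect integration: each school mirrors the city as a whole.
integrated = [(50, 50), (50, 50)]

print(aggregate_share(segregated), aggregate_share(integrated))  # 0.5 0.5
print(dissimilarity(segregated), dissimilarity(integrated))      # 1.0 0.0
```

Both districts report the same 50/50 citywide aggregate, yet one scores 1.0 (total segregation) and the other 0.0, which is why any inference about segregation has to start from school-level data.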

Let’s apply that principle to the real world. Hoxby finds that the citywide aggregate population of district school students is 72% minority. But does that mean every individual school is 72% minority? Of course not. You could very well have all the white children going to perfectly segregated, exclusively all-white schools, all the black children going to perfectly segregated, exclusively all-black schools, all the Hispanic children going to perfectly segregated, exclusively all-Hispanic schools, etc., and the citywide aggregate figure would remain unchanged.

And, in fact, the reality on the ground is a lot closer to that dystopian hypothetical than it is to the utopian scenario of ideal racial balance. But Gyurko’s argument relies on the unspoken assumption that the reality on the ground in district schools is utopian.

Meanwhile, the citywide aggregate for charter schools is 92%. As with district schools, the aggregate figure tells us nothing about the actual racial balance in any individual school. Supposing for a moment that New York’s district schools are very heavily segregated – which they are – it is quite possible that the actual charter schools on the ground are better integrated than the district schools even though their aggregate population figure is disproportionately minority.

And, in fact, given that the primary cause of school segregation is housing segregation, the fact that charters can break down neighborhood barriers and draw students from other neighborhoods with different demographics makes it highly likely that they are, in fact, better integrated. That’s the reality in voucher programs, where the empirical evidence unanimously shows parent choice improves integration.

But at any rate, the data to which Gyurko appeals don’t tell us either way.

Once the essential sham behind the first claim is exposed, the second claim is much easier to refute. What counts is not how the local applicant pool differs from the local resident population, but how the final makeup of each charter school differs from the final makeup of each district school. Once the process of parents making choices is completed, are the individual charter schools more segregated? This datum tells us nothing about that.

Ironically, Gyurko’s argument on this second claim really implies that he wants charter schools to represent the racial balance of their local neighborhoods. That would imply endless racial segregation, given that neighborhoods are so racially homogeneous. Any serious attempt to break down racial segregation in schools must begin by acknowledging that schools representing their neighborhoods is the problem.

That’s why hyper-arrogant courts forced us to go through the disastrous failed experiment with forced busing. That was a terrible idea, just like anything that robs parents of their freedom. But at least those tyrannical judges understood the source of the problem correctly.

If parents want to send their children to their local neighborhood schools, they should be allowed. But anything we do that forces them to send their children to school locally is – among so many other evils – going to increase racial segregation. Assigning students to schools by ZIP code is not only educationally bankrupt, it’s racially poisonous.

Why JPGB Beats Edwize

December 11, 2008


  Edwize is a blog by Leo Casey that is sponsored by the United Federation of Teachers (UFT), the New York affiliate of the American Federation of Teachers.  The UFT has tens of millions of dollars at its disposal and thousands upon thousands of members.  Jay P. Greene’s Blog (JPGB) by contrast has a $25 registration fee for the domain name and a couple of laptops. 

Despite this huge disparity in resources, JPGB has a significantly larger audience than does Edwize.  According to Technorati JPGB has an authority rating of 95 while Edwize has a rating of 74.  An authority rating measures how many other blogs link to a given blog during the last 180 days, which is meant to capture how much influence a blog has in the blogosphere.  In addition, each post on JPGB generates about 4 or 5 comments, on average, while posts on Edwize generate about 1 or 2 comments, on average.  Fewer comments suggest fewer readers and/or material on which people do not care to comment. 

None of these measures is perfect, but it is clear that JPGB beats Edwize.  Why?

The primary challenge for Edwize is that it has to tout teacher union views on education issues.  And those views are mostly junky.  So, Edwize suffers because it takes significantly more resources to interest people in crappy ideas than in sensible ones. 

In case you doubt that the unions have to push junky ideas, ask yourself whether it is sensible to have a system of education in which students are mostly assigned to schools based on where they live; where teachers are almost never fired, no matter how incompetent they are; where teachers are paid almost entirely based on how many years they’ve been around rather than on how well they do their job; where teachers are required to be certified even though there is little to no evidence that certification is associated with quality; and where all teachers are paid the same regardless of subject, even though we know that the skills required for expertise on certain subjects have much greater value in the market than other subjects.

The mental gymnastics required to sustain the union world view have a much greater “degree of difficulty” than those required for the views regularly expressed on JPGB.  And the resources required to generate support for these union views are enormous.  You need millions of people financially benefiting from these policies to volunteer as campaign workers.  You need millions of dollars in union dues for campaign contributions.  You need a large team of paid staff in every state and in Washington, DC.  It takes an army and a fortune for the unions to hold their ground.

This not only helps explain why JPGB beats Edwize, but also why reformers are able to beat the unions in the policy arena.  It’s true that the unions win most of the time.  But given their enormous advantage in resources, it is amazing that the unions ever lose.  The reason that the unions lose as often as they do is that their policy positions are much more difficult to defend intellectually.

So, we should feel sorry for Leo Casey and his union comrades.  They may have a lot more money and a lot more people, but they constantly have to defend obviously dumb ideas.

(edited for clarity and to add photo)

Modest Programs Produce Modest Results . . . Duh.

September 3, 2008

HT perfect stranger @ FR

By Greg Forster & Jay Greene

Edwize is touting a new “meta-analysis” by Cecilia Rouse and Lisa Barrow claiming that existing voucher programs produce only modest gains in student learning.

Edwize quotes the National Center for the Study of Privatization in Education (NCSPE), which sponsored the paper and is handling its media push, describing the paper as a “comprehensive review of all the evaluations done on education voucher schemes in the United States.”

But the paper itself says something different: “we present a summary of selected findings…” (emphasis added). Given EdWize’s recent accusations about cherry-picking research, which are repeated in his post on the Rouse/Barrow paper, we thought he’d be more sensitive to the difference between a comprehensive review of the research and a review that merely presents selected findings. (By contrast, the reviews we listed here are comprehensive.)

Even more important, the Rouse/Barrow paper provides no information on the criteria they used to decide which voucher experiments, and which analyses of those experiments, to include among the “selected findings” they present, and which to exclude from their review. The paper includes participant effect studies from Milwaukee, Cleveland, DC, and New York City, but does not include very similar studies conducted on programs in Dayton or Charlotte. In New York it includes analyses by Mayer, Howell and Peterson, as well as Krueger and Zhu, but not by Barnard, et al. The paper includes systemic effect analyses from Milwaukee and Florida, but excludes analyses by Howell and Peterson as well as by Greene and Winters.

Clearly this paper is not intended to be, and indeed it does not even profess to be, a comprehensive review.

But even with its odd and unexplained selection of studies to include and exclude, Rouse and Barrow’s paper nevertheless finds generally positive results. They identified 7 statistically significant positive participant effects and 4 significant negative participant effects (all of which come from one study: Belfield’s analysis of Cleveland, which is non-experimental and therefore lower in scientific quality than the studies finding positive results for vouchers). In total, 16 of the 26 point estimates they report for participant effects are positive.

On systemic effects, they report 15 significant positive effects and no significant negative effects. Of the 20 point estimates, 16 are positive.

And yet they conclude that the evidence “is at best mixed.” If this were research on therapies for curing cancer, the mostly positive and often significant findings they identified would never be described as “at best mixed.” We would say they were encouraging at the very least.

Moreover, the paper is not, and doesn’t claim to be, a “meta-analysis.” That term doesn’t even appear anywhere in the paper. It’s really just a research review, as the first sentence of the abstract clearly states (“In this article, we review the empirical evidence on…”). It looks like the term “meta-analysis,” like the phrase “comprehensive review,” was introduced by the NCSPE’s publicity materials.

What’s the difference? A meta-analysis performs an original analysis drawing together the data and/or findings of multiple previous studies, identified by a comprehensive review of the literature. Meta-analyses vary from simple (counting up the number of studies that find X and the number that find Y) to complex (using statistical methods to aggregate data or compare findings across studies), but what they all have in common is that they present new factual knowledge. A research review produces no new factual knowledge; its “conclusions” are just somebody’s opinion.
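The simple end of that spectrum can be sketched in a few lines. The effect estimates and standard errors below are invented for illustration (they are not from the voucher literature); the sketch shows both a vote count and a fixed-effect inverse-variance pooling of the kind a real meta-analysis might perform.

```python
import math

# Hypothetical (invented) study results: (effect estimate, standard error).
studies = [(0.10, 0.05), (0.04, 0.03), (-0.02, 0.06), (0.08, 0.04)]

# Simple meta-analysis: count how many point estimates are positive.
positive = sum(1 for est, se in studies if est > 0)
print(positive, "of", len(studies), "estimates are positive")  # 3 of 4

# Complex meta-analysis: fixed-effect inverse-variance pooling,
# weighting each study by 1 / se^2 so precise studies count for more.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"pooled effect = {pooled:.3f} (se = {pooled_se:.3f})")
```

Either calculation yields a new number that appeared in no single study, which is what separates a meta-analysis from a review that merely restates conclusions.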

There’s nothing wrong with researchers having opinions, as we have argued many times. It’s essential. But it’s even more essential to maintain a clear distinction between what is a fact and what is somebody’s opinion. Voucher opponents, as the saying goes, are entitled to their own opinions but not their own facts. (Judging by the way they conduct themselves, this may be news to some of them – for example, see Greg Anrig’s claims in the comment thread here.)

By falsely puffing this highly selective research review into a meta-analysis, NCSPE will deceive some people – especially journalists, who these days are often familiar with terms like “meta-analysis” and know what they mean, even if NCSPE doesn’t – into thinking that an original analysis has been performed and new factual knowledge is being contributed, when in fact this is just a repetition of the same statement of opinion that voucher opponents have been offering for years.

(We don’t blame Edwize for repeating NCSPE’s falsehood; there’s no shame in a layman not knowing the proper meaning of the technical terms used by scholars.)

And what about the merits of the opinion itself? The paper’s major claim, that the benefits of existing voucher programs are modest, is exactly what we have been saying for years. For example, in this study one of us wrote that “the benefits of school choice identified by these studies are sometimes moderate in size—not surprising, given that existing school choice programs are restricted to small numbers of students and limited to disadvantaged populations, hindering their ability to create a true marketplace that would produce dramatic innovation.”

And there’s the real rub. Existing programs are modest in size and scope. They are also modest in impact. Thank you, Captain Obvious.

The research review argues that because existing programs have a modest impact, we should be pessimistic about the potential of vouchers to improve education dramatically either for the students who use them or in public schools (although the review does acknowledge the extraordinary consensus in the empirical research showing that vouchers do improve public schools).

But why should we be pessimistic that a dramatic program would have a dramatic impact on grounds that modest programs have a modest impact?

One of us recently offered a “modest proposal” that we try some major pilot programs for the unions’ big-spending B.B. approach and for universal vouchers (as opposed to the modest voucher programs we have now), and see which one works. He wrote: “Better designed and better funded voucher programs could give us a much better look at vouchers’ full effects. Existing programs have vouchers that are worth significantly less than per pupil spending in public schools, have caps on enrollments, and at least partially immunize public schools from the financial effects of competition. If we see positive results from such limited voucher programs, what might happen if we tried broader, bolder ones and carefully studied the results?”

Has Edwize managed to respond to that proposal yet? If he has, we haven’t seen it. Come on – if you’re really as confident as you profess to be that your policies are backed up by the empirical research and ours are not, what are you so afraid of?

And while we’re calling him out, here’s another challenge: in the random-assignment research on vouchers, the point gains identified for vouchers over periods of four years or less are generally either the same size as or larger than the point gains identified over four years for reduced class sizes in the Tennessee STAR experiment. Will Edwize say what he thinks of the relative size of the benefits identified from existing voucher programs and class size reduction in the empirical research?

A Modest Proposal for B.B.

August 18, 2008

The advocates of B.B. (Broader, Bolder; or is it Bigger Budgets? or is it Bloated Behemoth?) have yet to muster the evidence to support widespread implementation of their vision to expand the mission of schools to include health care, legal assistance, and other social services. They do present background papers showing that children who suffer from social problems fare worse academically, but they have not shown that public schools are capable of addressing those social problems and increasing student learning.

And if you dare to question whether there is evidence about the effectiveness of public schools providing social services in order to raise achievement, you are accused of being opposed to “better social and economic environments for children.” Right. And if you question the effectiveness of central economic planning are you also then opposed to a better economy? And if you question the effectiveness of an untested drug therapy are you then opposed to quality health-care?

To help the B.B. crowd generate the evidence one would need before pursuing a reform agenda on a large-scale, I have a modest proposal. How about if we have a dozen large-scale, well-funded pilot programs of the “community school” concept advocated by B.B.? And, at the same time let’s have a dozen large-scale, well-funded pilot voucher programs. We’ll carefully evaluate the effects of both to learn about whether one, the other, or both are things that we should try on an even larger scale.

I’m all for trying out new ideas and carefully evaluating the results. I can’t imagine why the backers of B.B. wouldn’t want to do the same. So as soon as Larry Mishel at the union-funded Economic Policy Institute, Randi Weingarten of the AFT, and Leo Casey of the AFT’s blog, Edwize, endorse my modest proposal, we’ll all get behind the idea of trying new approaches and studying their effects — “community schools” and vouchers.

Wait, my psychic powers are picking something up. I expect that some might say we’ve already tried vouchers and they haven’t worked. In fact, Randi Weingarten just wrote something very much like that when she declared in the NY Daily News that vouchers “have not been shown by any credible research to improve student achievement.” Let’s leave aside that there have been 10 random assignment evaluations (the gold-standard in research) of voucher programs and 9 show significant positive effects, at least for certain sub-groups of students. And let’s leave aside that 3 of those analyses are independent replications of earlier studies that confirm the basic positive findings of the original analyses (and 1 replication does not). And let’s leave aside that 6 of those 10 studies have been published in peer-reviewed journals (including the QJE, the Journal of the American Statistical Association, and the Journal of Policy Studies), three in a Brookings book, and one in a federal government report (even if Chris Lubienski somehow denies that any of this constitutes real peer-review). And let’s leave aside that there have been more than 200 analyses of the effects of expanding choice and competition, which Clive Belfield and Henry Levin reviewed and concluded: “A sizable majority of these studies report beneficial effects of competition across all outcomes… The above evidence shows reasonably consistent evidence of a link between competition (choice) and education quality. Increased competition and higher educational quality are positively correlated.”

Let’s leave all of that aside and ask Randi Weingarten how many random-assignment studies of the community school concept she has. Uhm, none. How many evaluations of community schools, period? Uhm, still none. But that doesn’t stop her from drawing the definitive conclusion: “Through partnerships with universities, nonprofit groups and other organizations, community schools provide the learning conditions and resources that support effective instruction and bring crucial services to an entire community.” How does she know?

But I’m eager to help her and all of us learn about community schools if she is willing to do the same to learn about vouchers. Better designed and better funded voucher programs could give us a much better look at vouchers’ full effects. Existing programs have vouchers that are worth significantly less than per pupil spending in public schools, have caps on enrollments, and at least partially immunize public schools from the financial effects of competition. If we see positive results from such limited voucher programs, what might happen if we tried broader, bolder ones and carefully studied the results?

And if community schools really deliver all that is being promised, great, let’s do that too. But if our goal is to do what works, why not give both ideas a real try?

(Link added)

Blog Rankings

July 14, 2008

This blog is not yet three months old but I am pleased to report that it is off to a good start.  According to Technorati’s rankings, JayPGreene.com is attracting more readers than the American Federation of Teachers’ blog, Edwize, more than Diane Ravitch and Deborah Meier’s Bridging Differences, hosted by Education Week, more than the Reason Foundation’s Out of Control, and the Center for Education Reform’s Edspresso.  It significantly trails the educouple of Eduwonk and Eduwonkette as well as Cato at Liberty (although that’s not primarily an education blog).  Flypaper, which started about the same time as this blog, is also off to a good start.  The Queen of education blogs seems to be Joanne Jacobs.

Here are the Technorati rankings (as of this morning) of education sites that seem to share some of the same audience as this blog.  By no means is this a comprehensive list of education blogs.  And I have no idea how reliable or meaningful Technorati’s rankings really are.  I’d continue blogging no matter what the rankings were because it’s fun.  I imagine the same is true of most others.

  1. Cato at Liberty          3,662
  2. Joanne Jacobs            3,709
  3. Eduwonkette             27,419
  4. Eduwonk                 30,876
  5. Flypaper                95,943
  6. Jay P. Greene          104,227
  7. Bridging Differences   107,924
  8. D-Ed Reckoning         107,924
  9. AFT’s Edwize           116,227
  10. Edspresso             123,039
  11. Out of Control        123,039
  12. Core Knowledge        127,851
  13. Sherman Dorn          151,703
  14. EdBizBuzz             184,730
