Acting White

November 22, 2010

Stuart Buck, a University of Arkansas graduate student and author of the well-reviewed book Acting White, suggests that high academic achievement for African-American students is hindered by negative social pressure from peers.

Now Dan Willingham reviews a new study on the subject:

It used a sample of over 13,000 students, averaging about 15 years old. Social acceptance was measured with a simple four-question interview that asked whether they felt socially accepted, and the frequency with which they felt lonely, felt disliked, or felt people were unfriendly to them.

The study took measures at two time points and examined the change in social acceptance across the year. The question of interest is whether students’ academic achievement (measured as grade point average) at Time 1 was related to the change in social acceptance over the course of the year.

For White, Latino, and Asian students, it was—positively. That is, the higher a student’s GPA was at Time 1, the more likely it was that his or her social acceptance would increase during the coming year. It was not a big effect, but it was present.

For African American and Native American students the opposite was true. A higher GPA predicted *lower* social acceptance during the following year. This effect was stronger than the positive effect for the other ethnic groups.

Thus, it seemed that the simpler version of the “acting white” hypothesis was supported.
But the story turned out to be a bit more complicated.

Further analyses showed that there was a social penalty for high achieving African Americans *only* at schools with a small percentage of black students. The cost was not present at high-achieving schools with mostly African-American students, or at any low-achieving schools.

At the same time, there was never a social benefit for academic achievement, as there was for White, Latino, and Asian students.

These more fine-grained analyses were not possible for the Native American students, because the sample was too small.

So what are we to make of the “acting white” phenomenon?

A single study is never definitive, but this study indicates that academic success is not universally taken by African American adolescents as a sign of rejecting African American culture. It is specific to particular contexts and is plausibly a response to discrimination.

Sounds like this mostly supports Stuart’s argument, but I’m curious to hear what he thinks.


Merit Pay Bust

September 22, 2010

For some time now I have expressed disillusionment with merit pay as an ed reform strategy. In a paper Stuart Buck and I produced last spring for a Harvard conference on performance incentives we wrote:

All of this leads us to measured skepticism about the merit of merit pay, unless coupled with other reforms such as competition between schools. After all, merit pay boils down to an attempt to recreate a market system within a tightly controlled state monopoly. This is an objective fraught with peril. Even if wise and benevolent state actors manage to get the incentives right at a particular moment in time in a particular place, their actions can always be undone by immediate successors. Those successors may well be more influenced by the powerful special interests that want to block merit pay, loosen the standards, or even to call a system “merit pay” while rewarding behavior that has no relation to actual achievement.

Now we have additional reasons for skepticism.  A well-designed random-assignment experiment led by Vanderbilt’s Matt Springer found:

While the general trend in middle school mathematics performance was upward over the period of the project, students of teachers randomly assigned to the treatment group (eligible for bonuses) did not outperform students whose teachers were assigned to the control group (not eligible for bonuses).

Keep in mind that this experiment only tests whether financial incentives increase teacher motivation, resulting in higher student achievement.  It does not address whether merit pay might change the composition of the teacher labor force, attracting and retaining more effective teachers.

Still, color me even more skeptical about the promise of merit pay as an ed reform strategy. It may well be that the current crop of teachers believes they are already doing their best, so offering them money for trying harder doesn’t result in a significant change in effort. And given the political and organizational barriers to merit pay, I hold out little hope that a well-designed program can be sustained long enough to affect the composition of the teacher labor market.

In the last week, I hope ed reformers have learned that we can’t really improve the school system by maintaining the same centralized system while trying to sneak a reformer into the control room (a la Michelle Rhee). And I also hope we’ve learned that we can’t tinker with the incentives within that same centralized system (a la merit pay). The key to effective reform is decentralization of control via school choice, including charters, vouchers, tax credits, weighted student funding, etc…
(edited for typos)

Ravitch is Wrong Week, Day #1

April 5, 2010

Diane Ravitch’s new book “The Death and Life of the Great American School System: How Testing and Choice Are Undermining Education” has been burning up the charts. Ravitch has been ubiquitous, writing op-eds in support of her book, doing lectures and interviews all over the place, and being reviewed in all sorts of high-profile venues.

As an overall matter, the book says little, if anything, that is actually new on the subjects of testing and choice. What Ravitch is really selling with this book is the story of her personal and ideological conversion. Not so long ago, she was writing articles like “In Defense of Testing,” or “The Right Thing: Why Liberals Should Be Pro-Choice,” a lengthy article in The New Republic that remains one of the most passionate and eloquent defenses of school choice and vouchers in particular. Now she seems to be a diehard opponent of these things. But she’s not saying anything that other diehard opponents haven’t already said countless times.

The book does score a few points in critiquing the charter school movement (e.g., charter schools have an unfair advantage in competing with Catholic schools in the inner cities, and charter test results haven’t been as promising as might have been expected), or in critiquing testing and accountability (e.g., states have been watering down their standards, as shown by wide discrepancies between NAEP and state tests).

But these few good points are outweighed by the bad arguments and leaps of illogic that permeate much of the book. The book’s faults fall into five general categories, each of which will be the subject of a blog post this week:

  1. Ignoring or selectively citing scholarly literature;
  2. Misinterpreting the scholarly literature that she does cite;
  3. Caricaturing her opponents in terms of strawman arguments, rather than taking the best arguments head-on;
  4. Tendering logical fallacies; and
  5. Engaging in a double standard, such as holding a disfavored position to a high burden of proof while blithely accepting more problematic evidence that supports one’s own position (or not looking for evidence at all).

IGNORING SCHOLARLY LITERATURE

An endemic problem with Ravitch’s book is the tendency to cite only one or two studies on a disputed empirical question as if that settled the matter, while ignoring other (often better) studies that undermine or refute her claims.

For example, Ravitch claims that vouchers don’t pressure traditional public school systems to improve (pp. 129-32), even though the scholarly consensus is precisely the opposite. Ravitch also highlights a couple of studies that failed to find achievement gains from vouchers, but ignores the fact that “9 of the 10 [random assignment studies] show significant, positive effects for at least some subgroups of students.”

One of the most egregious examples arises from Ravitch’s repetitive claim that charter schools tap into the most “motivated” students. This claim appears practically every time Ravitch mentions charter schools. See, e.g., p. 145 (“charter schools are havens for the motivated”); p. 156 (“A lottery for admission tends to eliminate unmotivated students”); p. 212 (“two-tiered system in urban districts, with charter schools for motivated students and public schools for all those left behind”); p. 220 (“Charter schools in urban centers will enroll the motivated children of the poor, while the regular public schools will become schools of last resort for those who never applied or were rejected.”); p. 227 (“Our schools cannot improve if charter schools siphon away the most motivated students”).

Notably, Ravitch doesn’t highlight any actual evidence for this claim. She treats it as definitionally true (“by definition, only the most motivated families apply for a slot,” p. 135). But that is wrong: The only thing that could be true by definition here is that parents who sign up their children for charter schools are the most motivated to sign up their children for charter schools, which is a trivial observation (and one that probably isn’t true anyway: some motivated parents might easily fail to hear about a charter school opportunity, while other parents might sign up on a whim).

But that’s not the “motivation” that Ravitch means. What Ravitch tries to imply — and what she lacks any evidence for — is that charter schools all over the country are over-enrolling those students who are the most motivated to succeed academically. That’s the only thing that could possibly lead to an unfair charter school advantage. To be sure, there are undoubtedly some charter students who are among the most academically well-prepared and who are leaving the public school to seek a greener pasture elsewhere. But Ravitch has zero evidence that these children are in the majority.

Nor would such a contention be consistent with the actual evidence, which Ravitch doesn’t bother to investigate (having presumed to settle the motivation issue “by definition”). In fact, a recent paper by Zimmer et al. analyzed data “from states that encompass about 45 percent of all charter schools in the nation.” They found: “Students transferring to charter schools had prior achievement levels that were generally similar to or lower than those of their [traditional public school] peers. And transfers had surprisingly little effect on racial distributions across the sites.” Similarly, Booker, Zimmer, and Buddin (2005) found that in California and Texas — both huge charter states — students who transferred to charter schools had lower test scores than their peers at public schools.

Given this evidence, it is more plausible to suspect that many charter school entrants have been struggling to get by in the public school, and they (or their parents) are “motivated” only in the sense that they’re trying to find something that might work. It’s hard to see how that sort of motivation would create an unfair advantage on the part of charter schools, as Ravitch wants the reader to believe.

There are numerous other examples of Ravitch ignoring scholarly literature that she finds inconvenient:

1. Ravitch focuses on a few studies about whether charter schools increase test scores. Leaving aside the fact that this is completely incoherent (given that Ravitch’s whole point elsewhere is that test scores shouldn’t be used to tell us the worth of a school), Ravitch ignores the recent study showing that charter schools increased the likelihood that a student will graduate and go to college. These are worthy goals.

2. Ravitch cites Walt Haney’s study asserting that “dramatic gains in Texas on its state tests” were a myth. (p. 96). But she ignores the Toenjes/Dworkin article contending that Haney’s article was biased and unreliable.

3. Ravitch attacks NCLB for failing to bring about its intended goal: improved test scores. For this argument, she relies on snapshots of NAEP scores during the 2000s. (pp. 109-10). But one looks in vain for Ravitch to cite Hanushek and Raymond’s paper noting that it is “not possible to investigate the impact of NCLB directly” — that is, it is not possible to do exactly what Ravitch purported to do. This is because “the majority of states had already instituted some sort of accountability system by the time the federal law took effect . . . 39 states did so by 2000.”

Hanushek and Raymond went on to find that “the introduction of accountability systems into a state tends to lead to larger achievement growth than would have occurred without accountability. The analysis, however, indicates that just reporting results has minimal impact on student performance and that the force of accountability comes from attaching consequences such as monetary awards or takeover threats to school performance. This finding supports the contested provisions of NCLB that impose sanctions on failing schools.” This finding is similar to Carnoy and Loeb 2002 (another paper left uncited by Ravitch), who found that “students in high-accountability states averaged significantly greater gains on the NAEP 8th-grade math test than students in states with little or no state measures to improve student performance.”


Critical Thinking About Critical Thinking

January 27, 2009


Fayetteville Public Schools have been hypnotized by Tony Wagner’s The Global Achievement Gap.  They’ve bought 2,000 copies, which they’ve distributed to administrators, teachers, and members of the community.  They’ve organized three public discussions of the book.  They are bringing in Wagner himself.  And they’ve indicated that they would like to use this book as a guide for planning a new high school and other changes.

My colleague, Sandra Stotsky, applies her critical thinking skills in today’s Northwest Arkansas Times to Wagner’s call for more emphasis on “21st Century Skills,” like critical thinking, adaptability, and creativity, and less emphasis on subject content:

“Who can argue against teaching students ‘agility and adaptability’ or how to ‘ask good questions?’ Yet these ‘skills’ are largely unsupported by actual scientific research. Wagner presents nothing to justify his list except glib language and a virtually endless string of anecdotes about his conversations with high-tech CEOs.

Even where Wagner does use research, it’s not clear that we can trust what he reports as fact. On page 92, to discredit attempts to increase the number of high school students studying algebra and advanced mathematics courses, he refers to a ‘study’ of MIT graduates that he claims found only a few mentioning anything ‘more than arithmetic, statistics and probability’ as useful to their work. Curious, I checked out the ‘study’ using the URL provided in an endnote for Chapter 3. It consisted of 17, yes 17, MIT graduates, and, according to my count, 11 of the 17 explicitly mentioned linear algebra, trig, proofs and/or calculus, or other advanced mathematics courses as vital to their work – exactly the opposite of what Wagner reports! Perhaps exposure to higher mathematics is not the worst problem facing American students!

Similarly, while I agree with Wagner that too many public schools fail to teach ‘effective oral and written communication,’ I am utterly puzzled by his contention that teachers’ obsessions with teaching grammar, test-prep and teaching to ‘the test’ are the problem. Really? Which English teachers? A lot of parents would kill to get their children into a classroom where they knew the teacher cared about grammar, or at least was brave enough to try to teach conventional sentence structure and language usage.

As for too much testing in schools, another of his complaints, Wagner again cites no relevant research. On the other hand my colleague Gary Ritter finds that here in Arkansas public schools the most tested students – those in grades five and seven – spend only 1 percent of total instructional time being tested, probably less time than spent in class parties or on field trips. And without testing, how can we figure out what our students know, and which programs successfully teach them?

Wagner’s book is engaging and sometimes points to real defects in American schools. Yet it fails to use research objectively to ascertain what is truly happening in America’s 90,000 public schools. Moreover, like all too many education ‘reformers’ Wagner is simply hostile to academic content. Wagner does not seem to care if students can read and write grammatically, do math or know something about science and history – real subjects that schools can teach and policy-makers can measure.

Unfortunately, Wagner dismisses measurable academic content while embracing buzzwords like ‘adaptability’ and ‘curiosity,’ which no one could possibly be against, but also which no one could possibly measure. Do we really care if our students are curious and adaptable if they cannot read and write their own names?”

I have my own op-ed on Wagner pending at another local paper.  Meanwhile my colleague Stuart Buck has an excellent blog post on a related topic — Alfie Kohn’s attack on Core Knowledge.  Even worse, Stuart notes, Kohn accuses people who disagree with him of having bad intentions and not just being mistaken.

It is puzzling how this entire industry of education consultants, including Wagner, Kohn, Kozol, and Gardner, manages to attract such large followings with such weak arguments.


Quality Counts Lacks Quality

January 9, 2009

(Guest post by Stuart Buck)

Education Week has released its annual report “Quality Counts,” which ranks all fifty states’ education systems along several dimensions, such as school finance, achievement, accountability, and the like.  You can find detailed statistics for any given state on an interactive map, and you can generate a table comparing the states of your choosing.

This Quality Counts report gets a huge amount of attention, as can be seen from the hundreds of results in a search of Google News.

But the Quality Counts report suffers from two glaring flaws. In fact, the report reminds me of the old joke (I can’t remember who to credit for this) of a beggar sitting on the streets of New York, with a sign reading, “Wars, 2; Legs Lost, 1; Wives Who Left Me, 2; Children, 3; Lost Jobs, 2. TOTAL: 10.” Well, obviously, the number “10” doesn’t represent ten of anything.

So what’s wrong with the Quality Counts report?

First, the “School Finance” measure has two basic components: equity and spending.  Equity refers to several measures that look at whether a state’s districts get relatively equal funding.  Fair enough, although there’s a decent argument that impoverished districts might need higher spending to attract better personnel.  But then part of the “School Finance” measure is based on per-pupil spending, as well as the percentage of a state’s taxable resources dedicated to education. 

The problem here is that it doesn’t make sense to reward a state with a higher grade just for spending more, in and of itself. Indeed, the “spending” measure ends up getting averaged with the measure for “K-12 Achievement.” This means that, in theory, a state with high spending and low achievement — thus combining incompetence and extravagance — could get an overall score equal to a state with low spending and high achievement. But if a state manages to get high achievement with low spending, that means, all else equal, it has a more efficient and productive education system — which ought to earn it a higher grade, not the same one.

Second, an even worse problem lies in the “Chance for Success” measure. This ranking is supposed to tell us about the chances that people in a given state have of succeeding. There are numerous components to the “Chance for Success” measure, including percent of students above 200% of the poverty line, percent of students with college-educated parents, percent of children whose parents speak English, and more. Not surprisingly, the richer and more privileged states like Massachusetts, New Jersey, and Connecticut do quite well on this measure, while states like Arkansas, Mississippi, and New Mexico are near the bottom.

What makes no sense whatsoever is that a high score on the “Chance for Success” measure is averaged together with all the other items — including K-12 Achievement — to produce each state’s final score. You can see this for yourself: Pick your home state here, and then take the simple average of all six measures (Chance for Success; Standards, Assessment & Accountability; K-12 Achievement; Transitions & Alignment; School Finance; and Teaching Profession), and that average will be the state’s overall final score.
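To make the arithmetic concrete, here is a minimal sketch in Python of the scoring method just described — the overall grade as a simple average of the six category scores. Both states and every number below are entirely hypothetical, invented only to illustrate how the averaging can reward privilege rather than performance.

```python
# Minimal sketch of the Quality Counts scoring method described above:
# the overall score is just the simple average of the six category scores.
# Both "states" and all scores below are hypothetical, for illustration only.

CATEGORIES = [
    "Chance for Success",
    "Standards, Assessment & Accountability",
    "K-12 Achievement",
    "Transitions & Alignment",
    "School Finance",
    "Teaching Profession",
]

def overall_score(category_scores):
    """Return the simple average of the six category scores (0-100 scale)."""
    return sum(category_scores[c] for c in CATEGORIES) / len(CATEGORIES)

# Hypothetical State A: a poor, disadvantaged population (low "Chance for
# Success") but excellent K-12 achievement on modest spending.
state_a = {
    "Chance for Success": 65,
    "Standards, Assessment & Accountability": 85,
    "K-12 Achievement": 95,
    "Transitions & Alignment": 85,
    "School Finance": 70,
    "Teaching Profession": 80,
}

# Hypothetical State B: a rich, privileged population (high "Chance for
# Success") and high spending, but mediocre achievement.
state_b = {
    "Chance for Success": 95,
    "Standards, Assessment & Accountability": 85,
    "K-12 Achievement": 65,
    "Transitions & Alignment": 85,
    "School Finance": 90,
    "Teaching Profession": 80,
}

print(overall_score(state_a))  # 80.0
print(overall_score(state_b))  # 83.3 -- the low-achieving but privileged
                               # state ends up with the higher overall score
```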

In other words, imagine a state that managed to produce A-level achievement even though its population was poor and disadvantaged (and thus got a lower grade on the “Chance for Success” measure). Under any rational grading system, we should give that state the highest possible rating. But the Quality Counts method would actually downgrade the state for having too many poor children. By the same token, Quality Counts would upgrade a poor-achieving state that happened to have a privileged and rich student population, even though that state’s education system would obviously be far more incompetent and inefficient. If anything, the “Chance for Success” ranking should be counted inversely as compared to all the other measures of a state’s education system.


Best. Choice. Argument. Ever.

August 6, 2008

 

Brilliant.

 

(HT, Stuart Buck and Lydia McGrew at http://www.whatswrongwiththeworld.net/2008/08/great_video_clip_on_government.html#comments )


Arkansas Blogs Increase 200%

August 6, 2008

Well, not really.  But I’ve come across two relatively new Arkansas-based blogs (at least they are new to me).  One is The Arkansas Project, written by David Kinkade, Freeman Hunt, and Dan Greenberg.  Greenberg is a state representative who shares my interest in the naming of public buildings.  The other is the eponymous blog, Freeman Hunt.  And speaking of the symbolic power of names, Freeman Hunt appears to be her real name. 

They join the extremely high quality blog written by Stuart Buck at The Buck Stops Here.

