Some States are Serious about K-12 Reform, Others Shirley

June 19, 2013

(Guest Post by Matthew Ladner)

John Chubb and Constance Clark have a very interesting new study out from Education Sector called The New State Achievement Gap: How NCLB Waivers Could Make it Worse or Better.

Chubb and Clark examine NAEP data and find that states are diverging into leaders and laggards. In the relative blink of an eye between 2003 and 2011, they found that the gap between the performance of students in the best- and worst-performing states grew to 60 percent of the size of the White-Black achievement gap on the combined NAEP exams (4th/8th grade reading and math).

Part of what has happened here is that the White-Black gap shrank a bit. It remains sickeningly large, however; keep in mind that 10 points roughly equates to a grade level’s worth of average progress on NAEP, so 105 points across four tests is quite disgusting. The state achievement gap, meanwhile, grew steadily.

Chubb and Clark’s paper would have benefited from an examination of the gory details about how some states are playing fast and loose with NAEP inclusion standards for special needs students and English language learners, especially in the case of Maryland and Kentucky. These details do not, however, take away from the broad point: some states are improving and some are getting left behind.

The study gets even more interesting as the authors compare the NCLB waivers, accountability systems, and standards choices of states with strong and weak NAEP gain performances. Included among these is a comparison between Florida and South Carolina. The referee needs to step in and wrap up Maryland before it pummels West Virginia to death. “Self-reflection” for teacher evaluation, Mountaineers? Surely you can’t be serious…

In a not-quite-elliptical fashion, Chubb and Clark note that states with a recent history of weak NAEP gains cluster around unconvincing NCLB waiver promises and membership in the Smarter Balanced Assessment Consortium. I’m shocked…

Chubb and Clark have turned in a very interesting piece; go read it.


Momma Ain’t Happy

May 9, 2013

(Guest post by Greg Forster)

My colleagues at the Friedman Foundation have released a big new survey of mothers of school-age kids. And let me tell you, momma ain’t happy:

  • 61% of school moms say education’s on the wrong track; just 32% say it’s on the right track.
  • Watch out, Common Core test consortia: 79% of school moms rate the federal government’s handling of education as fair or poor; only 17% rate it good or excellent.
  • 82% of school moms gave an A or B to their local private schools, compared to 43% for public schools. (Momma ain’t unhappy enough!)

The study also surveyed non-moms, so you can compare and contrast. Unsurprisingly, the differences aren’t large – because if momma ain’t happy…


We Win Pop Culture! Also, a Podcast on Win-Win

May 2, 2013

Sci-Fi fest poster

(Guest post by Greg Forster)

In a major news development, today the Heartland Institute described JPGB as a “widely read education reform-pop culture blog.” After all these years of struggling for recognition as a major voice in the pop culture world, at long last our toil and struggle have been vindicated.

Oh, and they have this podcast I did on the Win-Win report showing that the research consistently supports school choice. If you’re, you know, into that kind of thing.

Win-Win 3.0 chart

In case you forgot what that column of zeros on the right looks like, here it is again.


Third Edition of “Win-Win” Adds a Third Win

April 17, 2013

Win-Win 3.0 cover

(Guest post by Greg Forster)

This morning, the Friedman Foundation releases the third edition of my biennial report summarizing the empirical research on school choice. As in previous years, I survey all the available studies on academic effects – both for students who use school choice and for public schools. Hence the title “A Win-Win Solution” – school choice is a win for both those who use it and those who don’t.

New in this edition of the report, I also survey the impact of school choice on the democratic polity in three dimensions: fiscal impact on taxpayers, racial segregation, and civic values and practices (such as tolerance for the rights of others). Guess what it shows? School choice is not just win-win, it’s actually win-win-win. It not only benefits choosing families and non-choosing families; it also benefits everyone else through fiscal savings and the strengthening of social and civic bonds.

Here’s the most important part of the report – that unbroken column of zeros on the right remains as impressive as it ever was. Do please read the rest if you’d like to know more!

Win-Win 3.0 chart


Wolf v. Ravitch/Welner on the Effects of School Choice

April 8, 2013

(Guest Post by Jason Bedrick)

Is school choice effective at improving measurable student outcomes?

That question has been at the center of a heated debate between Patrick Wolf of the University of Arkansas on one side and Diane Ravitch, former U.S. Assistant Secretary of Education, along with one of her supporters, on the other. The controversy began when Ravitch attempted to critique Wolf’s studies of voucher programs in Milwaukee and Washington, D.C.

After questioning Wolf’s credibility, Ravitch made three main empirical claims, all of which are misleading or outright false:

1) Wolf’s own evaluations have not “shown any test score advantage for students who get vouchers, whether in DC or Milwaukee.” The private schools participating in the voucher program do not outperform public schools on state tests. The only dispute is “whether voucher students are doing the same or worse than their peers in public schools.”

2) The attrition rate in Wolf’s Milwaukee study was 75%, so the results only concern the 25% of students who remained in the program.

3) Wolf’s study doesn’t track the students who left the voucher program. (“But what about the 75% who dropped out and/or returned to [the public school system]?  No one knows.”)

Wolf then rebutted those claims:

1) Ravitch ignores the finding that vouchers had a strong positive impact on high school graduation rates. Moreover, there was evidence of academic gains among the voucher students:

The executive summary of the final report in our longitudinal achievement study of the Milwaukee voucher program states:  “The primary finding that emerges from these analyses is that, for the 2010-11 school year, the students in the [voucher] sample exhibit larger growth from the base year of 2006 in reading achievement than the matched [public school] sample.” Regarding the achievement impacts of the DC program, Ravitch quotes my own words that there was no conclusive evidence that the DC voucher program increased student achievement.  That achievement finding was in contrast to attainment, which clearly improved as a result of the program.  The uncertainty surrounding the achievement effects of the DC voucher program is because we set the high standard of 95% confidence to judge a voucher benefit as “statistically significant”, and we could only be 94% confident that the final-year reading gains from the DC program were statistically significant.

2) The attrition rate in the Milwaukee study was actually 56%, not 75%. Ravitch was relying on a third party’s critique of the study (to which Ravitch linked) that had the wrong figure, rather than reading the study herself. Moreover, the results regarding the higher attainment of voucher students are drawn from the graduation rate for all students who initially participated in the voucher program in the 9th grade in the fall of 2006, not just those who remained in the program.

3) Wolf’s team used data from the National Clearinghouse of College Enrollment to track these students into college.

Ravitch responded by hyperventilating about Wolf’s supposed “vitriol” (he had the temerity to point out that she’s not a statistician, that she didn’t understand the methods she was critiquing, and that she was relying on incorrect secondary sources) and by posting a response from Kevin Welner of the University of Colorado at Boulder, who heads the National Education Policy Center (NEPC), which released the critique of Wolf’s study upon which Ravitch had relied.

Welner didn’t even attempt to defend Ravitch’s erroneous first and third claims, but took issue with Wolf’s rebuttal of her second claim. Welner defended the integrity of his organization’s critique by pointing out that when its authors read Wolf’s study, it had contained the “75% attrition” figure, and that the number was only updated a few weeks later; they shouldn’t be faulted for not knowing about the update. As Welner wrote, “Nobody had thought to go back and see whether Wolf or his colleagues had changed important numbers in the SCDP report.”

That would be a fair point, except for the fact that they did know about the change. As Wolf pointed out, page four of the NEPC critique contains the following sentence: “Notably, more than half the students (56%) in the MPCP 9th grade sample were not in the MPCP four years later.” In other words, the author of the NEPC critique had seen the corrected report but failed to update parts of his critique. This is certainly not the smoking gun Welner thought it was.

Ravitch replied, again demonstrating her misunderstanding of intention-to-treat analysis (“And, I dunno, but 56% still looks like a huge attrition rate”) and leaving the heavy lifting to Welner. Welner’s main argument is that Wolf should have “been honest with his readers the first time around, instead of implying ignorance or wrongdoing as a cheap way to score some points against Diane Ravitch and (to a lesser extent) NEPC.” Welner would have had a point if Wolf’s initial response had been to NEPC and not Ravitch, but Wolf’s point was that Ravitch was holding herself out as an expert when she had never read the primary source material that she was criticizing. Instead, she relied on a secondary source that cited two contradictory figures. She either didn’t notice or intentionally chose what she thought was the more damning of the two figures. Again, though, the figure doesn’t matter for the purposes of an intention-to-treat study.
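To make the intention-to-treat point concrete, here is a minimal sketch in Python with entirely made-up numbers (none of these figures come from the Milwaukee study): in an intention-to-treat comparison, every student who started out in the voucher group stays in the voucher column of the analysis whether or not he later leaves the program, so the attrition rate does not change who gets counted.

```python
# Minimal sketch with made-up numbers (not data from the Milwaukee study) of an
# intention-to-treat (ITT) comparison: students are grouped by their *initial*
# assignment, so later attrition does not remove anyone from the analysis.

# Hypothetical cohort: (started_in_voucher_program, stayed_in_program, graduated)
students = [
    (True,  True,  True),
    (True,  False, True),    # left the program, but still counted as "voucher"
    (True,  False, False),
    (True,  True,  True),
    (False, False, True),
    (False, False, False),
    (False, False, False),
    (False, False, True),
]

def grad_rate(group):
    return sum(1 for s in group if s[2]) / len(group)

# ITT estimate: compare graduation rates by initial assignment.
voucher_itt = [s for s in students if s[0]]
control = [s for s in students if not s[0]]
print("ITT voucher grad rate:", grad_rate(voucher_itt))  # 0.75 (3 of 4)
print("Control grad rate:    ", grad_rate(control))      # 0.50 (2 of 4)

# A completers-only comparison, by contrast, drops the students who left; that
# is the analysis an attrition rate would bias, and (per Wolf) it is not the
# analysis his team reported.
voucher_completers = [s for s in students if s[0] and s[1]]
print("Completers-only rate: ", grad_rate(voucher_completers))  # 1.00 (2 of 2)
```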

We all make mistakes. Wolf’s team made a mistake in their report and corrected it within a few weeks. Welner has stated that his team will correct the NEPC report now that their error has come to their attention a year later. Ravitch should also correct her erroneous assertions regarding the results and methodology of the studies.

(Edited for typo)


Sports and Academics: Coleman vs. Coleman

February 5, 2013

Nerdiness vs. Athleticism

The path-breaking sociologist James Coleman was not a fan of high school sports.  He thought the culture of athletic prowess swamped the culture of academic success.  Schools, he argued, should get rid of sports and channel that competitive spirit into interscholastic academic contests, like Quiz Bowl.

But James Coleman also believed that the enhanced social capital produced by church attendance was key to the success of Catholic schools.  The adults would get together at church, share information about their kids and school, and thus be better positioned to work together to improve their school academically.  The adult culture of academic success could prevail more easily if the adults were better connected with each other by seeing each other on a regular basis at church.

But maybe high school sports are the secular equivalent of church.  Perhaps Friday night football is an event, like church, that gathers parents, allows them to share information about their kids and school, and more effectively work together to improve their school.

So which James Coleman is right?  Is it the one who fears athletic success subordinating academic success or the one who thinks social capital is the key to school improvement?

Dan Bowen and I decided to examine this issue with an analysis of Ohio high schools.  We looked at whether high schools that give greater priority to athletic success do so at the expense of academic success.  The results of our analysis are in the current issue of the Journal of Research in Education.

We found that high schools that devote more energy to athletic success also tend to produce more academic success.  In particular, we looked at whether high schools with a higher winning percentage in sports also had higher test scores as well as higher rates of educational attainment.  We also looked at whether high schools that offered more sports and had a larger share of their student body participating in sports also tended to have higher test scores and higher attainment.

Using several different specifications, we found that higher rates of athletic success and participation were associated with schools having higher overall test scores and higher educational attainment, controlling for observed school inputs.  For example, we found:

With regard to attainment, a 10 percentage point increase in a school’s overall winning percentage is associated with a 1.3 percentage point improvement in its CPI, which is an estimate of its high school graduation rate.

We also looked at whether schools that offered more opportunities to participate in sports had different rates of attainment:

When we only examine winter sports, an increase of one sport improves CPI by 0.01, which would be a 1 percentage point increase in the high school graduation rate. For the winter, the addition of 10 students directly participating in sports is associated with a 0.015 improvement in CPI, or a 1.5% increase in high school graduation rate.

In addition to attainment, we also looked at achievement on state tests:

We observe similar positive and statistically significant relationships between the success and participation in high school sports and student achievement as measured by the Ohio standardized test results. A 10 percentage point increase in overall winning percentage is associated with a 0.25 percentage point increase in the number of students at or above academic proficiency. (See Table 4) When we examine the effect of winning percentage in each sport separately, once again winning in football has the largest effect. Girls’ basketball also remains positive and statistically significant (at p < 0.10), but boys’ basketball is not statistically distinguishable from a null effect.

Lastly, we looked at the effect of participation rates in Ohio high schools on overall student achievement:

As for participation and achievement, the addition of one sport increases the number of students at or above academic proficiency by 0.2 of a percentage point. The addition of 10 students directly participating in a sports team improves the proportion of students at or above proficient by 0.4 of a percentage point. Both of these results are statistically significant at p < 0.01. (See Table 5) When examining just the winter season, adding one winter sport increases the percentage of students performing proficiently by 0.4 of a percentage point, while an additional 10 students able to directly participate in sports during the winter season relates to a 0.6 percentage point increase in students at or above proficiency. (See Table 5)
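For readers who want the unit conversions spelled out, here is a minimal Python sketch. The marginal effects are the ones quoted above, and treating CPI as a 0-1 estimate of the graduation rate follows the excerpts; the particular increments (a 10-point winning bump, one extra winter sport) are just illustrative inputs.

```python
# Back-of-the-envelope sketch of the unit conversions in the quoted findings.
# CPI is read, as in the excerpts above, as an estimate of the graduation rate
# on a 0-1 scale, so a 0.01 change in CPI corresponds to 1 percentage point of
# graduation rate. The marginal effects come from the excerpts; the "+10" and
# "+1" changes are illustrative.

def outcome_change(delta_predictor, effect_per_unit):
    """Linear approximation: change in outcome = change in predictor * marginal effect."""
    return delta_predictor * effect_per_unit

# 10-point gain in winning percentage -> 0.013 CPI -> 1.3 graduation-rate points
cpi_from_winning = outcome_change(10, 0.0013)

# One additional winter sport -> 0.01 CPI -> 1 graduation-rate point
cpi_from_sport = outcome_change(1, 0.01)

print(f"+10 pts winning pct: +{cpi_from_winning * 100:.1f} graduation-rate points")
print(f"+1 winter sport:     +{cpi_from_sport * 100:.1f} graduation-rate points")
```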

It is a common refrain among advocates for education reform that athletics “have assumed an unhealthy priority in our high schools.”  But these advocates rarely offer data to support their view.  Instead, they rely on stereotypes about dumb jocks, anecdotes, and painful personal memories as their proof.

Our data suggest that this claim that high school athletic success comes at the expense of academic success is mistaken. Of course, we cannot make causal claims based on our analyses about the relationship between sports and achievement.  It’s possible that schools that are more effective at winning in sports and expanding participation are also the kinds of schools that can produce academic success.  But the evidence we have gathered at least suggests that any trade-offs between sports and achievement would have to be subtle and small, if they exist at all.  Descriptively, it is clear that high schools that devote more energy to sports also produce higher test scores and higher graduation rates.

I guess James Coleman was right — er, I mean, the James Coleman who focused on social capital, not the other one who feared the culture of athletic competition.

[Updated for clarity and to correct typos]


Head Start Revealed

January 14, 2013

Despite the obvious effort to delay and conceal the disappointing results from the official, high-quality evaluation of Head Start, the Wall Street Journal shines a light on the issue in today’s editorial.  DC’s manipulating scumbags might want to take note that efforts to hide negative research might just draw more attention.  It’s comforting to see that the world may sometimes look more like Dostoevsky’s Crime and Punishment than Woody Allen’s Crimes and Misdemeanors.

The Journal reveals that Head Start supporters have not only ignored the latest study but are also trying to sneak an extra $100 million for Head Start into the relief package for victims of Hurricane Sandy.  The editorial also notes that the most recent disappointing Head Start result is just the latest in a string of studies failing to find benefits from the program, despite a cumulative expenditure of more than $180 billion.

And then the Journal finishes with this:

The Department of Health and Human Services released the results of the most recent Head Start evaluation on the Friday before Christmas. Once again, the research showed that cognitive gains didn’t last. By third grade, you can’t tell Head Start alumni from their non-Head Start peers.

President Obama has said that education policy should be driven not by ideology but by “what works,” though we have to wonder given his Administration’s history of slow-walking the release of information that doesn’t align with its agenda.

In 2009, the Administration sat on a positive performance review of the Washington, D.C., school voucher program, which it opposes. The Congressionally mandated Head Start evaluation put out last month was more than a year late, is dated October 2012 and was released only after Republican Senator Tom Coburn and Congressman John Kline sent a letter to HHS Secretary Kathleen Sebelius requesting its release along with an explanation for the delay. Now we know what was taking so long.

Like so many programs directed at the poor, Head Start is well-intentioned, and that’s enough for self-congratulatory progressives to keep throwing money at it despite the outcomes. But misleading low-income parents about the efficacy of a program is cruel and wastes taxpayer dollars at a time when the country is running trillion-dollar deficits.

A government that cared about results would change or end Head Start, but instead Congress will use the political cover of disaster relief to throw more good money after proven bad policy.

[UPDATE: And here is a good follow-up op-ed on the study by Lindsey Burke on the Fox News web site.]

