Sports and Academics: Coleman vs. Coleman

February 5, 2013

Nerdiness vs. Athleticism

The path-breaking sociologist James Coleman was not a fan of high school sports.  He thought the culture of athletic prowess swamped the culture of academic success.  In his view, schools should get rid of sports and channel that competitive spirit into interscholastic academic contests, like Quiz Bowl.

But James Coleman also believed that the enhanced social capital produced by church attendance was key to the success of Catholic schools.  The adults would get together at church, share information about their kids and school, and thus be better positioned to work together to improve their school academically.  The adult culture of academic success could prevail more easily if the adults were better connected with each other by seeing each other on a regular basis at church.

But maybe high school sports are the secular equivalent of church.  Perhaps Friday night football is an event, like church, that gathers parents, allows them to share information about their kids and school, and more effectively work together to improve their school.

So which James Coleman is right?  Is it the one who fears athletic success subordinating academic success or the one who thinks social capital is the key to school improvement?

Dan Bowen and I decided to examine this issue with an analysis of Ohio high schools.  We looked at whether high schools that give greater priority to athletic success do so at the expense of academic success.  The results of our analysis are in the current issue of the Journal of Research in Education.

We found that high schools that devote more energy to athletic success also tend to produce more academic success.  In particular, we looked at whether high schools with a higher winning percentage in sports also had higher test scores as well as higher rates of educational attainment.  We also looked at whether high schools that offered more sports and had a larger share of their student body participating in sports also tended to have higher test scores and higher attainment.

Using several different specifications, we found that higher rates of athletic success and participation were associated with higher overall test scores and higher educational attainment, controlling for observed school inputs.  For example, we found:

With regard to attainment, a 10 percentage point increase in a school’s overall winning percentage is associated with a 1.3 percentage point improvement in its CPI, which is an estimate of its high school graduation rate.

We also looked at whether schools that offered more opportunities to participate in sports had different rates of attainment:

When we only examine winter sports, an increase of one sport improves CPI by 0.01, which would be a 1 percentage point increase in the high school graduation rate. For the winter, the addition of 10 students directly participating in sports is associated with a 0.015 improvement in CPI, or a 1.5 percentage point increase in the high school graduation rate.
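Since CPI sits on a 0-to-1 scale, the coefficients quoted above translate into percentage points of graduation rate by a simple rescaling.  A minimal sketch of that arithmetic (the helper name is mine, not the paper's):

```python
# CPI (an estimate of the high school graduation rate) runs from 0 to 1,
# so a change in CPI converts to percentage points by multiplying by 100.
def cpi_change_to_percentage_points(delta_cpi):
    """Convert a change on the 0-1 CPI scale to percentage points."""
    return delta_cpi * 100

# The two winter-season effects quoted above:
one_more_sport = cpi_change_to_percentage_points(0.01)       # one additional winter sport
ten_more_players = cpi_change_to_percentage_points(0.015)    # ten additional participants
print(one_more_sport, ten_more_players)
```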

In addition to attainment, we also looked at achievement on state tests:

We observe similar positive and statistically significant relationships between the success and participation in high school sports and student achievement as measured by the Ohio standardized test results. A 10 percentage point increase in overall winning percentage is associated with a 0.25 percentage point increase in the number of students at or above academic proficiency. (See Table 4) When we examine the effect of winning percentage in each sport separately, once again winning in football has the largest effect. Girls’ basketball also remains positive and statistically significant (at p < 0.10), but boys’ basketball is not statistically distinguishable from a null effect.

Lastly, we looked at the effect of participation rates in Ohio high schools on overall student achievement:

As for participation and achievement, the addition of one sport increases the number of students at or above academic proficiency by 0.2 of a percentage point. The addition of 10 students directly participating in a sports team improves the proportion of students at or above proficient by 0.4 of a percentage point. Both of these results are statistically significant at p < 0.01. (See Table 5) When examining just the winter season, adding one winter sport increases the percentage of students performing proficiently by 0.4 of a percentage point, while an additional 10 students able to directly participate in sports during the winter season relates to a 0.6 percentage point increase in students at or above proficiency. (See Table 5)

It is a common refrain among advocates for education reform that athletics “have assumed an unhealthy priority in our high schools.”  But these advocates rarely offer data to support their view.  Instead, they rely on stereotypes about dumb jocks, anecdotes, and painful personal memories as their proof.

Our data suggest that the claim that high school athletic success comes at the expense of academic success is mistaken. Of course, we cannot make causal claims about the relationship between sports and achievement based on our analyses.  It’s possible that schools that are more effective at winning in sports and expanding participation are also the kinds of schools that can produce academic success.  But the evidence we have gathered at least suggests that any trade-offs between sports and achievement would have to be subtle and small, if they exist at all.  Descriptively, it is clear that high schools that devote more energy to sports also produce higher test scores and higher graduation rates.

I guess James Coleman was right — er, I mean, the James Coleman who focused on social capital, not the other one who feared the culture of athletic competition.

[Updated for clarity and to correct typos]

Head Start Revealed

January 14, 2013

Despite the obvious effort to delay and conceal the disappointing results from the official and high-quality evaluation of Head Start, the Wall Street Journal shines a light on the issue in today’s editorial.  DC’s manipulating scumbags might want to take note that efforts to hide negative research might just draw more attention.  It’s comforting to see that the world may sometimes look more like Dostoevsky’s Crime and Punishment than Woody Allen’s Crimes and Misdemeanors.

The Journal reveals that Head Start supporters have not only ignored the latest study but are also trying to sneak an extra $100 million for Head Start into the relief package for victims of Hurricane Sandy.  It also notes that the most recent disappointing Head Start result is just the latest in a string of studies failing to find benefits from the program, despite a cumulative expenditure of more than $180 billion.

And then the Journal finishes with this:

The Department of Health and Human Services released the results of the most recent Head Start evaluation on the Friday before Christmas. Once again, the research showed that cognitive gains didn’t last. By third grade, you can’t tell Head Start alumni from their non-Head Start peers.

President Obama has said that education policy should be driven not by ideology but by “what works,” though we have to wonder given his Administration’s history of slow-walking the release of information that doesn’t align with its agenda.

In 2009, the Administration sat on a positive performance review of the Washington, D.C., school voucher program, which it opposes. The Congressionally mandated Head Start evaluation put out last month was more than a year late, is dated October 2012 and was released only after Republican Senator Tom Coburn and Congressman John Kline sent a letter to HHS Secretary Kathleen Sebelius requesting its release along with an explanation for the delay. Now we know what was taking so long.

Like so many programs directed at the poor, Head Start is well-intentioned, and that’s enough for self-congratulatory progressives to keep throwing money at it despite the outcomes. But misleading low-income parents about the efficacy of a program is cruel and wastes taxpayer dollars at a time when the country is running trillion-dollar deficits.

A government that cared about results would change or end Head Start, but instead Congress will use the political cover of disaster relief to throw more good money after proven bad policy.

[UPDATE: And here is a good follow-up op-ed on the study by Lindsey Burke on the Fox News web site.]

What Success Would Have Looked Like

January 10, 2013

Yesterday I described the Gates Foundation’s Measuring Effective Teachers (MET) project as “an expensive flop.”  To grasp just what a flop the project was, it’s important to consider what success would have looked like.  If the project had produced what Gates was hoping for, it would have found that classroom observations were strong, independent predictors of other measures of effective teaching, like student test score gains.  Even better, Gates hoped that the combination of classroom observations, student surveys, and previous test score gains would be a much better predictor of future test score gains (or of future classroom observations) than any one of those measures alone.  Unfortunately, MET failed to find anything like this.

If MET had found classroom observations to be strong predictors of other indicators of effective teaching and if the combination of measures were a significantly better predictor than any one measure alone, then Gates could have offered evidence for the merits of a particular mixing formula or range of mixing formulas for evaluating teachers.  That evidence could have been used to good effect to shape teacher evaluation systems in Chicago, LA, and everywhere else.

They also could have genuinely reassured teachers anxious about the use of test score gains in teacher evaluations.  MET could have allayed those concerns by telling teachers that test score gains produce information that is generally similar to what is learned from well-conducted classroom observations, so there is no reason to oppose one and support the other.  What’s more, significantly improved predictive power from a mixture of classroom observations with test score gains could have made the case for why we need both.

MET was also supposed to have helped us adjudicate among several commonly used rubrics for classroom observations so that we would have solid evidence for preferring one approach over another.  Because MET found that classroom observations in general are barely related to other indicators of teacher effectiveness, the study told us almost nothing about the criteria we should use in classroom observations.

In addition, the classroom observation study was supposed to help us identify the essential components of effective teaching.  That knowledge could have informed improved teacher training and professional development.  But because MET was a flop (because classroom observations barely correlate with other indicators of teacher effectiveness and fail to improve the predictive power of a combined measure), we haven’t learned much of anything about the practices that are associated with effective teaching.  If we can’t connect classroom observations with effective teaching in general, we certainly can’t say much about which particular aspects of the observed teaching contributed most to effective teaching.

Just so you know that I’m not falsely attributing to MET these goals that failed to be realized, look at this 2011 Wall Street Journal interview of Bill Gates by Jason Riley.  You’ll clearly see that Bill Gates was hoping that MET would do what I described above.  It failed to do so.  Here is what the interview revealed about the goals of MET:

Of late, the foundation has been working on a personnel system that can reliably measure teacher effectiveness. Teachers have long been shown to influence students’ education more than any other school factor, including class size and per-pupil spending. So the objective is to determine scientifically what a good instructor does.

“We all know that there are these exemplars who can take the toughest students, and they’ll teach them two-and-a-half years of math in a single year,” he says. “Well, I’m enough of a scientist to want to say, ‘What is it about a great teacher? Is it their ability to calm down the classroom or to make the subject interesting? Do they give good problems and understand confusion? Are they good with kids who are behind? Are they good with kids who are ahead?’

“I watched the movies. I saw ‘To Sir, With Love,'” he chuckles, recounting the 1967 classic in which Sidney Poitier plays an idealistic teacher who wins over students at a roughhouse London school. “But they didn’t really explain what he was doing right. I can’t create a personnel system where I say, ‘Go watch this movie and be like him.'”

Instead, the Gates Foundation’s five-year, $335-million project examines whether aspects of effective teaching—classroom management, clear objectives, diagnosing and correcting common student errors—can be systematically measured. The effort involves collecting and studying videos of more than 13,000 lessons taught by 3,000 elementary school teachers in seven urban school districts.

“We’re taking these tapes and we’re looking at how quickly a class gets focused on the subject, how engaged the kids are, who’s wiggling their feet, who’s looking away,” says Mr. Gates. The researchers are also asking students what works in the classroom and trying to determine the usefulness of their feedback.

Mr. Gates hopes that the project earns buy-in from teachers, which he describes as key to long-term reform. “Our dream is that in the sample districts, a high percentage of the teachers determine that this made them better at their jobs.” He’s aware, though, that he’ll have a tough sell with teachers unions, which give lip service to more-stringent teacher evaluations but prefer existing pay and promotion schemes based on seniority—even though they often end up matching the least experienced teachers with the most challenging students.

The final MET reports produced virtually nothing that addressed these stated goals.  But in Orwellian fashion, the Gates folks have declared the project to be a great success.  I never expected MET to work because I suspect that effective teaching is too heterogeneous to be captured well by a single formula.  There is no recipe for effective teaching because kids and their needs are too varied, teachers and their abilities are too varied, and the proper matching of student needs and teacher abilities can be accomplished in many different ways.  But this is just my suspicion.  I can’t blame the Gates Foundation for trying to discover the secret sauce of effective teaching, but I can blame them for refusing to admit that they failed to find it.  Even worse, I blame them for distorting, exaggerating, and spinning what they did find.

(edited for typos)

Understanding the Gates Foundation’s Measuring Effective Teachers Project

January 9, 2013

If I were running a school I’d probably want to evaluate teachers using a mixture of student test score gains, classroom observations, and feedback from parents, students, and other staff.  But I recognize that different schools have different missions and styles that can best be assessed using different methods.  I wouldn’t want to impose on all schools in a state or the nation a single, mechanistic system for evaluating teachers since that is likely to be a one size fits none solution.  There is no single best way to evaluate teachers, just like there is no single best way to educate students.

But the folks at the Gates Foundation, afflicted with PLDD, don’t see things this way.  They’ve been working with politicians in Illinois, Los Angeles, and elsewhere to centrally impose teacher evaluation systems, but they’ve encountered stiff resistance.  In particular, they’ve noticed that teachers and others have expressed strong reservations about any evaluation system that relies too heavily on student test scores.

So the folks at Gates have been trying to scientifically validate a teacher evaluation system that involves a mix of test score gains, classroom observations, and student surveys so that they can overcome resistance to centrally imposed, mechanistic evaluation systems.  If they can reduce reliance on test scores in that system while still carrying the endorsement of “science,” the Gates folks imagine that politicians, educators, and others will all embrace the Gates central planning fantasy.

Let’s leave aside for the moment the political reality, demonstrated recently in Chicago and Los Angeles, that teachers are likely to fiercely resist any centrally imposed, mechanistic evaluation system regardless of the extent to which it relies on test scores.  The Gates folks want to put on their lab coats and throw the authority of science behind a particular approach to teacher evaluation.  If you oppose it you might as well deny global warming.  Science has spoken.

So it is no accident that the release of the third and final round of reports from the Gates Foundation’s Measuring Effective Teachers project was greeted with the following headline in the Washington Post: “Gates Foundation study: We’ve figured out what makes a good teacher,”  or this similarly humble claim in the Denver Post: “Denver schools, Gates foundation identify what makes effective teacher.”  This is the reaction that the Gates Foundation was going for — we’ve used science to discover the correct formula for evaluating teachers.  And by implication, we now know how to train and improve teachers by using the scientifically validated methods of teaching.

The only problem is that things didn’t work out as the Gates folks had planned.  Classroom observations make virtually no independent contribution to the predictive power of a teacher evaluation system.  You have to dig to find this, but it’s right there in Table 1 on page 10 of one of the technical reports released yesterday.  In a regression to predict student test score gains using out-of-sample test score gains for the same teacher, student survey results, and classroom observations, there is virtually no relationship between test score gains and either classroom observations or student survey results.  In only 3 of the 8 models presented is there any statistically significant relationship between either classroom observations or student surveys and test score gains (I’m excluding the 2 instances where they report p < .1 as statistically significant).  And in all 8 models the point estimates suggest that a standard deviation improvement in classroom observation or student survey results is associated with less than a .1 standard deviation increase in test score gains.

Not surprisingly, a composite teacher evaluation measure that mixes classroom observations and student survey results with test score gains is generally no better and sometimes much worse at predicting out of sample test score gains.  The Gates folks trumpet the finding that the combined measures are more “reliable” but that only means that they are less variable, not any more predictive.

But “the best mix,” according to the “policy and practitioner brief,” is “a composite with weights between 33 percent and 50 percent assigned to state test scores.”  How do they know this is the “best mix”?  It generally isn’t any better at predicting test score gains.  And collecting the classroom observations involves enormous expense and hassle.  To get the measure as “reliable” as they did without sacrificing too much predictive power, the Gates team had to have each teacher observed at least four times by at least two different coders, including one coder from outside the school.  Observing 3.2 million public school teachers for four hours each, by staff compensated at $40 per hour, would cost more than $500 million each year.  The Gates people also had to train the observers for at least 17 hours, and even after that had to throw out almost a quarter of those observers as unreliable.  Doing all of this might cost about $1 billion each year.
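The observation-cost claim is easy to check with back-of-the-envelope arithmetic.  A quick sketch under the assumptions stated above (3.2 million teachers, four observation hours each, $40-per-hour staff):

```python
# Back-of-envelope check of the classroom observation cost estimate.
teachers = 3_200_000       # public school teachers nationwide
hours_per_teacher = 4      # at least four observations, roughly an hour each
hourly_rate = 40           # dollars per hour of observer time

annual_cost = teachers * hours_per_teacher * hourly_rate
print(f"${annual_cost:,}")  # $512,000,000 -- i.e., "more than $500 million each year"
```

Training (17+ hours per observer), re-training the quarter of observers who fail reliability checks, and administrative overhead are what push the author's estimate toward $1 billion.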

And what would we get for this billion?  Well, we might get more consistent teacher evaluation scores, but we’d get basically no improvement in the identification of effective teachers.  And that’s the “best mix?”  Best for what?  It’s best for the political packaging of a centrally imposed, mechanistic teacher evaluation system, which is what this is all really about.  Vicki Phillips, who heads the Gates education efforts, captured in this comment what I think they are really going for with a composite evaluation score:

Combining all three measures into a properly weighted index, however, produced a result “teachers can trust,” said Vicki Phillips, a director in the education program at the Gates Foundation.

It’ll cost a fortune, it doesn’t improve the identification of effective teachers, but we need to do it to overcome resistance from teachers and others.  Not only will this not work, but in spinning the research as they have, the Gates Foundation is clearly distorting the straightforward interpretation of their findings: a mechanistic system of classroom observation provides virtually nothing for its enormous cost and hassle.  Oh, and this is the case when no stakes were attached to the classroom observations.  Once we attach all of this to pay or continued employment, their classroom observation system will only get worse.

I should add that if classroom observations aren’t useful as predictors, they also can’t be used effectively for diagnostic purposes.  An earlier promise of this project was that it would figure out which teacher evaluation rubrics were best and which sub-components of those rubrics were most predictive of effective teaching.  But that clearly hasn’t panned out.  In the new reports I can’t find anything about the diagnostic potential of classroom observations, which is not surprising since those observations are not predictive.

So, rather than having “figured out what makes a good teacher” the Gates Foundation has learned very little in this project about effective teaching practices.  The project was an expensive flop.  Let’s not compound the error by adopting this expensive flop as the basis for centrally imposed, mechanistic teacher evaluation systems nationwide.

(Edited for typos and to add links.  To see a follow-up post, click here.)

Head Start Manipulating Scumbags

December 20, 2012

I’ve heard that the latest round of results from the federal evaluation of Head Start is due to be released tomorrow afternoon.  And my psychic powers tell me that the results will show no lasting benefit from Head Start, just like the two previous rounds of results.

You heard that right — the federal government is releasing results that the administration dislikes on a Friday afternoon just before Christmas.  They might as well put the results on display in a locked filing cabinet in a disused lavatory behind the sign that says “beware of the leopard.”

Why is the Department of Health and Human Services burying this study just like they delayed, buried, or distorted the previous ones?  Well, because the study is an extremely rigorous and comprehensive evaluation, involving random assignment of a representative sample of all Head Start students nationwide, that I expect will find no enduring benefits from this program that politicians, pundits, and other dimwits constantly want to expand and fund.  Anyone who casts doubt on think tank research should cast a critical eye toward gross manipulations and abuse of research that are perpetrated by the federal government.

I should repeat that the researchers have done an excellent job evaluating Head Start in this case.  It is the bureaucratic class at the Department of Health and Human Services who have cynically manipulated, delayed, and misreported this research.  The pending report is already several years overdue, and its contents have been in hand for a long time.  The decision to release it on the Friday afternoon before Christmas is completely calculated.

I don’t know your names, but I’m going to invest a little energy in tracking down who is responsible for this cynical abuse of research.  If there were any reporters worth their salt left out there, they would bother to expose you, but I guess that job has now been passed to bloggers and enterprising individuals.  When I do find your names, I will post them so folks can know who the scumbags are who think they can manipulate the policy community by delaying, burying, or misreporting research.  And then, when you get hired by that DC think tank, advocacy organization, or other waste of space, we’ll be able to remember who you are and assign no credibility to what you have to say.  These kinds of dastardly acts by public servants should not be cost free, and if I have any say in the matter, they will not be in this case.

Florida Crushes the Ball on Progress in International Reading Literacy Study

December 11, 2012

(Guest Post by Matthew Ladner)

The 2011 TIMSS and PIRLS results came out today in a variety of subjects. This time a handful of states were brave enough to volunteer for a pullout of their results. Here are the results on 4th grade reading:


Here are the pullouts:


You got it: Florida students notched the second highest score in the world. Even above (gasp!) Finland.

Late for a meeting. More later, but for now:


And You Thought Administrative Bloat in Higher Ed Was Bad…

October 24, 2012

When Brian Kisida, Jonathan Mills, and I released our study of administrative bloat in higher education through the Goldwater Institute, we thought it was bad that universities had increased their hiring of administrators (professional staff who are not faculty) at twice the rate of faculty.

I now realize that the perpetrators of waste in higher ed are mere amateurs.  The administrative bloat pros can be found in K-12 education.  According to a new report from the Friedman Foundation released today, student enrollment has increased 96% since 1950, but the growth in “administrators and other non-teaching staff [was] a staggering 702 percent.”

The report provides results state by state, highlighting the growth in staffing in recent years.  Even in the few states where enrollment has declined, staffing levels have grown dramatically.  Check it out.

Charters v. Private Schools: Urban and Suburban Differences

August 28, 2012

(Guest post by Greg Forster)

Cato has new research out from Richard Buddin, examining where charter schools draw their students from. Adam Schaeffer offers a summary, emphasizing the dangers of charter schools: “On average, charter schools may marginally improve the public education system, but in the process they are wreaking havoc on private education.”

I agree with the basic premise: charters don’t fix the underlying injustice of government monopolizing education by providing “free” (i.e. free at the point of service, paid for by taxpayers) education, driving everyone else out of the education sector. As Jay and I have argued before, vouchers make the world safe for charters; that implies you can view charters as a response by the government to protect its monopoly against the disruptive threat of voucher legislation.

But what interests me more are the urban/suburban and elementary/secondary breakdowns of these data. It appears that charters are only substantially cutting into private schools in “highly urban” areas. In the suburbs, the charter school option is framed much more in terms of boutique specialty alternatives (schools for the arts, classical education, etc.) rather than “your school sucks, here’s one that works.” If you’d asked me, I would have guessed that would also cut heavily into the private school market – it would appeal to parents of high means who are looking for something out of the ordinary for their children, and that demographic would be most likely to already be in private schools. Yet the data show otherwise; apparently the families choosing boutique suburban charters weren’t much impressed with their private school options. And what’s up with this weird distribution on the elementary/secondary axis? Apparently public middle schools really stink in urban/suburban border areas.


Blinding Us with Science

August 15, 2012

(Guest post by Greg Forster)

Jay’s proposed reforms to the way Gates handles science are relevant far beyond the Gates Foundation, and foundations generally. He’s helping us think about how to wrestle with a deeper problem.

Public policy arguments need an authority to which they can appeal. The percentage of the population that is both willing and able to absorb all the necessary information to make a responsible decision without relying on pretty sweeping appeals to authority is very small. And even for us wonks, you can’t reduce the role of authority to zero; life doesn’t work that way. (Economists call this “the information problem.”)

So it’s normal, natural and right for public policy arguments to make some appeals to authority. The problem is that increasingly, our culture has no widely recognized authorities other than science. When there are many potential loci of authority, there is less pressure to corrupt them. If the science doesn’t back your view, you can appeal to other sources of authority. Where there is only one authoritative platform, there’s no alternative but to seize it.

As I once wrote:

Say that you favor a given approach – in education, in politics, in culture – because it is best suited to the nature of the human person, or because it best embodies the principles and historic self-understanding of the American people, and you will struggle even to get a hearing. But if you say that “the science” supports your view, the world will fall at your feet.

Of course, this means powerful interest groups rush in to seize hold of “science,” to trumpet whatever suits their preferences, downplay its limitations, and delegitimize any contrary evidence. If they succeed – which they don’t always, but they do often enough – “the science” quickly ceases to be science at all. That’s why “scientific” tyrannies like the Soviet Union had to put so many real scientists in jail – or in the ground.

We need other sources of wisdom and knowledge – and hence of authority, because those who are recognized as having wisdom and knowledge will be treated as sources of authority – besides science. As Jay has written:

Science has its limits.  Science cannot adjudicate among the competing values that might attract us to one educational approach over another.  Science usually tells us about outcomes for the typical or average student and cannot easily tell us about what is most effective for individual students with diverse needs.  Science is slow and uncertain, while policy and practice decisions have to be made right now whether a consensus of scientific evidence exists or not.  We should rely on science when we can but we also need to be humble about what science can and can’t address…

My fear is that the researchers, their foundation-backers, and most-importantly, the policymaker and educator consumers of the research are insensitive to these limitations of science.  I fear that the project will identify the “right” way to teach and then it will be used to enforce that right way on everyone, even though it is highly likely that there are different “right” ways for different kids…

Science can be corrupted so that it simply becomes a shield disguising the policy preferences of those in authority.  How many times have you heard a school official justify a particular policy by saying that it is supported by research when in fact no such research exists?  This (mis)use of science is a way for authority figures to tell their critics, “shut up!”

To summarize the whole point, our group of school choice researchers put it well (false humility aside) in our Education Week op-ed earlier this year:

Finally, we fear that political pressure is leading people on both sides of the issue to demand things from “science” that science is not, by its nature, able to provide. The temptation of technocracy—the idea that scientists can provide authoritative answers to public questions—is dangerous to democracy and science itself. Public debates should be based on norms, logic, and evidence drawn from beyond just the scientific sphere.

What can we do about it? Beyond building in checks and balances to ensure that science isn’t being abused, we can make a deliberate effort to appeal to non-scientific sources of wisdom. There’s nothing unscientific about relying on “norms, logic, and evidence drawn from beyond just the scientific sphere.” In Pride and Prejudice, Caroline Bingley comments that it would be more rational if there were more conversation and less dancing at balls; her brother replies that this would indeed be “much more rational, I dare say, but much less like a ball.” Likewise, it might be more scientific if our civic discourse appealed to nothing but science, but it would be much less like civic discourse.

For a good example of what I mean, check out Freedom and School Choice in American Education. When it came out, I commented on how it showed the diverse values that had led the authors to support school choice:

What’s particularly valuable about this book, I think, is how it gives expression to the very different paths by which people come to hold educational freedom as an aspiration, and then connects those aspirational paths to the practical issues that face the movement in the short term. Jay comes to educational freedom with an emphasis on accountability and control; against the Amy Gutmanns of the world who want to set up educational professionals as authority figures to whom parents must defer, Jay wants to put parents back in charge of education. Matt comes to educational freedom with an emphasis on alleviating unjustified inequalities; against the aristocrats and social Darwinists of the world who aren’t bothered by the existence of unjustified inequalities, Matt wants social systems to maximize the growth of opportunities for those least likely to have access to them. And I come to educational freedom with an emphasis on the historical process of expanding human capacities, especially as embodied in America’s entrepreneurial culture; against all forms of complacency, I want America to continue leading the world in inventing ever better ways of realizing the full capacities of humanity. And each of the other contributors has his or her own aspirational path.

Individual liberty; the lifting up of the poor and the marginalized; the American experiment in enterprise culture. These are fine things worth fighting for, and they would remain so no matter what the science says.

President Bush Discusses Global Report Card

July 19, 2012

Last fall Josh McGee and I developed the Global Report Card (GRC) for the George W. Bush Institute. The GRC is a tool that allows people to compare the level of academic achievement in virtually every school district in the United States to the average for their state, the country, and a comparison group of 25 industrialized countries.

Above is a new interview with President Bush in which he discusses the Global Report Card (it’s around minute 25).

The Global Report Card received a fair amount of coverage when it was released, but keep an eye out for an updated and improved version this upcoming fall.  The results of the GRC are consistent with other international comparisons, including a series of pieces by Eric Hanushek, Paul Peterson, and Ludger Woessmann (the most recent of which can be found here).  But the GRC goes a step further by allowing comparisons to be made at the school district level.  GRC 2.0 will also have some new features and comparisons that people might find useful.
