Phony Numbers

September 1, 2009

A chronic problem with centralized accountability systems is that they require accurate information from the agent that is being held accountable.  But because people don’t like to squeeze vises on their own hands, they are often tempted to slip out of the vise by fudging the numbers.  And because the centralized authority is often reluctant to squeeze the vise anyway, preferring the happy story that schools are reforming but never reformed, obvious fudging of the numbers is tolerated.

I’ve documented this problem when it comes to graduation rates, which have often been misreported to avoid political embarrassment and accountability sanctions. 

Now David Muhlhausen, Don Soifer, and Dan Lips over at Heritage (with help from Jonetta Rose Barras at the Washington Examiner) have uncovered a new type of phony numbers — school crime and safety information. 

The Heritage report used Freedom of Information requests to the D.C. police to find reports of violence and criminal activity at DC schools.  The prevalence of violence and criminal activity is shocking and helps explain why students may be so eager to get vouchers for private schools or switch to charter schools.

But if you look at the officially reported numbers that D.C. schools report to the U.S. Department of Education in the “Indicators of School Crime and Safety,” as they are required to do by our centralized accountability law, you’d get a completely different (and almost certainly misleading) picture.

According to the Heritage report based on FOI requests of police records, there were 860 violent incidents at D.C. public schools during the 2007-08 school year, including 1 murder, 41 sex offenses, and 608 assaults.  But according to the office that submits the official D.C. crime and safety information to the U.S. Dept of Ed, there were only 40 violent crimes during that same period.  What happened to the other 820 that were reported to the police?

The difference between the crime and safety numbers reported for accountability purposes and those discovered through FOI requests to the police is huge.  They differ by more than a factor of 20!

I have to confess that stories like this shake my confidence in our ability to improve public schools through centralized accountability systems.

CORRECTION — I wrote “vice” when I meant “vise.”  That’s a great Freudian slip.


No More Revenge of the Nerds

August 31, 2009

According to the Wall Street Journal, Texas high school students can now receive additional course credit toward graduation for participation in athletics. 

Even before the Texas Board of Education and Texas legislature made this change, courses related to participation in high school sports could count for as many as 2 of the 26 courses required for graduation.  Now they can count for as many as 4 of the 26 required courses.

Advocates of the change “have been complaining for years that students weren’t getting credit for all their athletics courses. They argued that there was no comparable limit on marching band or ROTC military-training classes, which can earn students four years of credit.”

Detractors of the change complained: “There are only so many hours in a school day… This really equates to two less academic credits a student will then be taking.”

Of course students should be required to take a rigorous set of core academic classes, but the question is what they should be allowed to do to satisfy elective requirements.  Is football less academically beneficial than band or ROTC?

Many education pundits have a decidedly anti-athletics bias.  Perhaps it was those years of wedgies and romantic failure with the cheerleading squad, but whatever the cause, high school sports rarely receive a kind word from education reformers of all stripes (except maybe referee stripes).

To be sure, high school sports can divert time, energy, and money from core academic pursuits, but rigorous research suggests that athletics tend to be associated with academic and lifelong success.  For example, Eric Eide and Nick Ronan report in the Economics of Education Review that: “Using height as an instrument for participation, we find evidence that sports participation has a negative effect on the educational attainment of white male student athletes, a positive effect on the educational attainment and earnings of black male student athletes, and a positive effect on the educational attainment of white female student athletes. We find no effect of participation on the educational attainment or earnings of Hispanic males or black and Hispanic females.”

In the Review of Economics and Statistics, John Barron, Bradley Ewing, and Glen Waddell find, “There is a clear direct link for men between athletic participation and both additional formal education and wages.”  They use data from the National Longitudinal Study of Youth and the National Longitudinal Study of the High School Class of 1972, and employ multiple models to estimate the relationship between participating in high school sports and educational attainment and earnings later in people’s lives. 

For the most part, the Barron et al. analysis supports the conclusion that high school sports select people who are likely to be successful later in life, rather than causing them to be successful later in life: “Higher-ability individuals or individuals with a reduced preference for leisure are more likely to choose to participate in athletic events. In such cases, athletic participation can be viewed as a signal of individuals with higher ability or greater ‘work ethic’ or industriousness. The resulting higher educational attainment and improved labor market outcomes that are linked to athletic participation then simply become a reflection of the inherent capabilities of more able or industrious individuals.”

But Barron et al. are not completely convinced that the link between high school sports and later success is purely a selection effect for industriousness, since they do not detect a similar relationship for other extra-curricular activities: “However, we do find across both data sets that athletic participation is distinct from participation in other extracurricular activities in terms of its link to wages. This one finding does suggest that athletic participation may in fact serve as a training activity.”

If sports are associated with later success in life while band is not, it’s not clear why we would want to give more academic credit for band than sports.  And if sports particularly help black male students stay in school, there’s even more reason to allow athletics to count as an elective course.  


The Return of the Bogus “Excellence” Complaint

August 20, 2009

(Guest post by Greg Forster)

Fordham’s Fun Fact Friday feature, now in its sixth week, is a weekly one-minute video production that takes some fact about the education system and presents it using an interesting or unusual visual. The creators have been pretty consistently clever in coming up with ways to make obscure facts visually intuitive.

Unfortunately, the facts themselves are not always so cleverly selected. When Fordham picks an important fact to visualize, such as the gap between spending and achievement growth or international comparisons of student-teacher ratios, the results are, well, superawesome. But when it chooses, say, a comparison of the US education budget with the GDP of some smaller countries, the visual presentation is still clever, but the result is kind of pointless. Is anyone really impressed by the point that a huge country like the US spends more on education than the GDP of, say, Indonesia? What does that prove? Some kind of argument or point was needed.

Last week they missed again. They decided to resurrect Fordham’s complaint from last year (dissected here and here) claiming that accountability systems make our schools more “equal” but less “excellent” because they create incentives for schools to increase the amount of attention they pay to low achievers, reducing the amount of attention they pay to high achievers. Never mind the fact that – according to Fordham in the very same report – the low achievers are benefiting from this diversion and the high achievers don’t seem to be losing any ground.

That would seem to me to be pretty clear evidence that schools were devoting too much attention to high achievers – perhaps because their parents are more likely to be influential – and that the incentives created by accountability were educationally healthy because they forced schools to focus their attention where they could create more improvement.

It’s obviously possible that in the long run accountability could push this too far and become counterproductive by focusing too much attention on low achievers at the expense of high achievers. That’s an argument for improving the design of accountability systems to preclude that result. But so far, on Fordham’s own evidence, we don’t seem to be having that problem.


Schoolhouses, Courthouses, and Statehouses

August 9, 2009

The new book from Rick Hanushek and Alfred Lindseth, Schoolhouses, Courthouses, and Statehouses, is a remarkably comprehensive and accessible review of K-12 education reform strategies.  It’s a must-read for education policymakers, advocates, and students — at both the graduate and undergraduate levels.  Even experienced researchers will find this to be an essential reference, given its broad sweep and extensive citations.

The book basically makes four arguments.  First, it establishes how important K-12 educational achievement really is to economic success and how far we are lagging our economic competitors in this area.  Second, it demonstrates the dominance and utter failure of input-oriented reform strategies, including across-the-board spending increases and class-size reductions.  Third, it describes how the court system has perpetuated failed input-reform strategies after having bought intellectually dishonest methods of calculating how much spending schools really need.  And fourth, it makes the case for reform strategies that involve “performance-based funding,” including merit pay, accountability systems, and choice.

None of these arguments is original to this book.  But to the extent that others have made these arguments, they have drawn heavily on Rick Hanushek’s research.  In this book you get to hear it directly from the source and you get to hear it all so persuasively and completely.

If I have any complaint about the book, it is that the authors are too restrained in their criticisms of the methods by which adequate school spending has been determined and the “researchers” who have developed and profited from those methods.  These fraudulent analyses have justified court decisions ordering billions of dollars to be taken from taxpayers and blown ineffectively in schools.  And the quacks promoting these methods have made millions of dollars in consulting fees in the process.

Those methods include the “professional judgment approach,” which essentially consists of gathering a group of educators and asking them how much money they think they would need to provide an “adequate” education.  Naturally, they need flying saucers, ponies, and a laser tag arena to ensure an adequate education.

Another method is the “evidence-based approach,” which selectively reads the research literature to identify what it claims are effective educational practices.  It then sums the cost of those practices while paying no attention to how many are really necessary for an adequate education or whether any of them are really cost-effective.

There is also the “successful schools approach,” which looks at how much money a typical successful school spends and calls for all schools to spend at least that much.  This of course ignores the fact that many successful schools spend less than the typical amount and are still successful.  One would have thought it impossible for them to be successful with less money than that deemed necessary to succeed. 

And lastly, there is the “cost-function approach.”  This approach takes the conventional finding that higher spending, controlling for other factors, has little to no relationship with student achievement, and then turns that finding on its head.  It does this by switching the dependent variable from student achievement to cost.  The question then becomes: how much does each unit of achievement contribute to school costs?  But switching the dependent variable does nothing to change the lack of relationship between spending and achievement.  If you hide behind enough statistical mumbo-jumbo, you can hope that the courts won’t notice that there is still virtually no relationship between spending and achievement, controlling for other factors.
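The point that flipping a regression around cannot manufacture a relationship can be illustrated with a toy simulation (all numbers here are invented; this is a sketch of the statistical logic, not of any actual adequacy study).  In a simple two-variable regression, R² is just the squared correlation, which does not care which variable you call dependent:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
achievement = rng.normal(size=n)
# Spending nearly unrelated to achievement (tiny true effect, lots of noise).
spending = 0.02 * achievement + rng.normal(size=n)

def r_squared(x, y):
    # In a one-variable regression, R^2 is just the squared correlation.
    return np.corrcoef(x, y)[0, 1] ** 2

r2_forward = r_squared(spending, achievement)   # achievement regressed on spending
r2_reversed = r_squared(achievement, spending)  # the "cost-function" direction

print(r2_forward, r2_reversed)  # identical, and tiny either way
```

Multivariate cost functions are more elaborate, of course, but the same weak underlying spending-achievement link is still what drives the results.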

The Hanushek and Lindseth book lays all of this out (see especially chapter 7), but they are remarkably restrained in denouncing these approaches and the people who cynically profit from them.  I don’t think we should be so restrained.  The promoters of this snake oil are often university professors with sterling national reputations.  They’ve cashed in those reputations to market obviously flawed methods.  We shouldn’t let them do this without paying a significant price in their reputation.

The University of Southern California’s Larry Picus and the University of Wisconsin’s Allan Odden are both past presidents of the well-respected American Education Finance Association.  They shouldn’t be able to sell the “evidence-based approach” to 5 states for somewhere around $3 million without people pointing and laughing when they show up at conferences.

I know that Rick Hanushek and Alfred Lindseth are too professional and scholarly to call these folks frauds, but I’m not sure what else one could honestly call them.  Rick comes close in his Education Next article on these school funding adequacy consultants, entitled “The Confidence Men.”  But in this book, perhaps with the tempering influence of his co-author, he adopts a more restrained tone.  Perhaps this is all for the best, because the book maintains the kind of scholarly temperament that strengthens its persuasiveness to those who would be more skeptical.

This has been a great year for education reform books.  Schoolhouses, Courthouses, and Statehouses joins Terry Moe and John Chubb’s Liberating Learning, released earlier this summer, as members of the canon of essential education reform works.


The National Standards Sausage-Making

June 9, 2009

Every decade or so we have to debate the desirability of adopting national standards for education.  People tend to be in favor of them when they imagine that they are the ones writing the standards.  But when everyone gets into the sausage-making that characterizes policy formulation, it generally becomes clear that no one is going to get what they want out of national standards.  What’s worse is that the resulting mess would be imposed on everyone.  There’d be no more laboratory of the states, just uniform banality.

Of course, some people always hope that they’ll somehow manage to sneak their preferred vision into place without having to go through the meat grinder.  That’s what is happening now with the National Governors Association effort at “voluntary” national standards.  In a process completely lacking in transparency and open debate, some are rushing to announce a national standards fait accompli.

My colleague Sandra Stotsky tells us what’s what:

“If another country wanted other countries to respect its educational system and the reforms it was trying to make, who would it choose to lead such an important professional project as the development of its national standards in mathematics and in the language of its educational system itself?  In any other country in the world, one would expect a distinguished mathematician at the college level to be asked to chair the mathematics standards-writing committee–someone who commands the respect of the mathematics profession (and obviously is an expert on mathematics).   For the language standards-writing committee, one would likewise expect an eminent scholar in a college-level department–someone whose command of the language and understanding of the texts that inform the development of this language could not be questioned.   If the National Governors Association and the Council of Chief State School Officers had thought about national pride (and national need) as well as academic/educational expertise, then all of us would respect the Common Core Initiative and look forward with eagerness to the drafts the NGA and CCSSO have promised to make public in July.

 These two organizations could have followed, for example, the exemplary procedures followed by the National Mathematics Advisory Panel, on which I had the privilege to serve.  The Panel was chaired by the former president of one of the major universities in the country, all Panel members were identified at the outset, their qualifications were made known to the public, their procedures were open to the public and taped as well, and the final product was hammered out in public, after dozens of reviewers provided critical comments.

 But instead of choosing nationally known scholars to chair and staff these committees–to assure us of the integrity and quality of the product–the NGA and the CCSSO have, for reasons best known to themselves,  treated the initiative as a private game of their own.  The NGA and the CCSSO haven’t even bothered to inform the public who is chairing these committees, who is on them, why they were chosen, what their credentials are, and why we should have any confidence whatsoever in what they come up with. 

 One person has announced on his own to the press and to a state department of education that he is chairing the mathematics standards-writing committee. He has not been contradicted by anyone at NGA or CCSSO, so we must assume he’s for real.  It turns out he is an English major with no academic degrees in mathematics whatsoever.  No one has yet announced on his/her own that he/she is chairing the English standards-writing committee.   One wag has already wondered whether this person might be a mathematics major with no academic degrees in English.  But it’s possible the sad joke in mathematics is not being repeated in English. 

 This country deserved better for a project of such national importance.”

Sandy Kress added these words of wisdom (pardon the capitalization since this was a comment on a post at Eduwonk):

“i suspect after the good feelings wear off, other governors and chiefs will begin to ask whether they can or should consider new standards at this time. once they learn about how hard it is to write new standards, they will ask even more questions. when we get to the controversies around whole language vs. phonics, they will ask more questions still. then comes computation vs. concepts. then comes all the many questions that arise once you get below the level of 30,000 feet. then – God forbid – you might even get to the place where you might possibly find the new standards under consideration to be no better than (or even possibly worse than) the standards you have! could it be that the tradeoffs that happen nationally will be the same as those that occur in the states? could the same interest groups intervene? could this nice dream be interrupted by the demons that bedevil state standard setting? could these interests be the problem as much as variation? oh no, could it be there’s no santa… no, i won’t go there.

and, oh yes, what about performance standards? if we ever get to detailed precise standards in each grade for reading and math, do the participants agree to common performance standards? if they don’t, who’s kidding whom? the real problem today is not so much that some states have vastly higher standards than other states; it’s more that their performance standards are greatly different. have the states, or will the states, commit to making those the same? if not, this will be utterly fruitless.

listen – DO NOT GET ME WRONG – i’m all for higher, fewer, clearer standards. i’ve spent a lot of time working on improving texas’ standards over the past 20 years. i’ve spent a lot of time with the hunt institute pushing more common standards. this is indeed the right thing to do.

but this process is going to be much more difficult than some think. it won’t happen overnight, nor should it. and there will remain great variation at the end of the day. it is utterly naive and/or foolish to expect states to jump track from their current gameplans, particularly where they’re reasonably well thought out.

be prepared for states to recognize this “the morning after.” texas just recognized it before “the drinking began.”

also be prepared to realize that a better approach might be for one or more of these organizations to begin by recruiting the best and the brightest and actually doing the hard work of developing a few sets of model standards and then shopping them to the states, with the political support of those who rightly want high, common standards as well as perhaps some incentives from the feds to take these steps.” 

(edited for typos)


“Forward” our Motto?

April 29, 2009

(Guest Post by Matthew Ladner)

The MacIver Institute, Wisconsin’s new think-tank, released a report today by yours truly comparing the NAEP scores of Wisconsin and Florida. Let’s just say that UW-Madison would have probably fared better against the national champion Florida Gators in football last year.

[Figure: NAEP 4th grade reading, free and reduced-price lunch students, Wisconsin vs. Florida]

Florida spends considerably less per student than Wisconsin and has a considerably more challenging student profile.  Despite that, Florida surpassed Wisconsin overall on 4th grade reading in 2007 (although within the margin of error).

Most impressively, this gain was driven by much larger gains among traditionally underperforming student groups. The figure above shows the progress among free and reduced-price lunch kids in Florida and Wisconsin. In 1998, Florida’s low-income students were an average of 13 points behind their peers in Wisconsin. By 2007, however, they had raced 8 points ahead.

Among African American students, Florida and Wisconsin once shared space near the bottom in reading achievement. Wisconsin is still there. Florida’s African American students now outscore their peers in Wisconsin by 17 points.

 

[Figure: NAEP reading scores, African American students, Wisconsin vs. Florida]

One finds the same pattern among children with disabilities. In 1998, Wisconsin students with disabilities scored 18 points higher than those in Florida. In 2007, it was 4 points lower.

[Figure: NAEP reading scores, students with disabilities, Wisconsin vs. Florida]

The problem isn’t that Wisconsin’s scores are low; it is that they are flat.  When the Fordham Foundation found that Wisconsin had the lowest NCLB standards in the country, it hinted that the state had not been vigorous in pursuit of broad K-12 reform.

Wisconsin of course was a trailblazer in parental choice with the Milwaukee Parental Choice Program. The learner, however, has surpassed the master, with two statewide parental choice programs: one for low-income children and one for children with disabilities. If anyone can explain why a low-income child in Milwaukee deserves an opportunity to attend a private school, but a similar child in Racine does not, I’d love to hear it. Florida also has a stronger charter school law.

Rather than sporting the lowest NCLB standards in the country, Florida doggedly pursued top-down accountability with the FCAT, grading schools A to F, and creating real consequences for school failure.

Florida embraced genuine alternative teacher certification; Wisconsin has not.

I am open to correction by my Cheesehead friends, but my distant view from the far-away desert leads me to wonder if Wisconsin may have become complacent when it comes to education reform, coasting on its demographics and avoiding the tough calls and controversy necessary to improve public schools.

If so, perhaps inspiration can be drawn from the state song:

On, Wisconsin! On, Wisconsin!
Grand old Badger State!
We, thy loyal sons and daughters,
Hail thee, good and great.
On, Wisconsin! On, Wisconsin!
Champion of the right,
“Forward”, our motto,
God will give thee might!

Time will tell whether progressive Wisconsin will take this lying down. Will “Forward” or “comfortably stalled” be a better fitting motto for Wisconsin in the next decade?


The Professional Judgment Un-Dead

March 25, 2009

It’s time we drive a stake through the heart of “professional judgment” methodologies in education.  Unfortunately, the method has come back from the grave in the most recent Fordham report on regulating vouchers in which an expert panel was asked about the best regulatory framework for voucher programs.

The methodology was previously known for its use in school funding adequacy lawsuits.  In those cases a group of educators and experts was gathered to determine the amount of spending that is required to produce an adequate education.  Not surprisingly, their professional judgment was always that we need to spend billions and billions (use Carl Sagan voice) more than we spend now.  In the most famous use of the professional judgment method, an expert panel convinced the state courts to order the addition of $15 billion to the New York City school system — that’s an extra $15,000 per student.

And advocates for school construction have relied on professional judgment methodologies to argue that we need $127 billion in additional spending to get school facilities in adequate shape.  And who could forget the JPGB professional judgment study that determined that this blog needs a spaceship, pony, martinis, cigars, and junkets to Vegas to do an adequate job?

Of course, the main problem with the professional judgment method is that it more closely resembles a political rather than a scientific process.  Asking involved parties to recommend solutions may inspire haggling, coalition-building, and grandstanding, but it doesn’t produce truth.  If we really wanted to know the best regulatory framework, shouldn’t we empirically examine the relationship between regulation and outcomes that we desire? 

Rather than engage in the hard work of collecting or examining empirical evidence, it seems to be popular among beltway organizations to gather panels of experts and ask them what they think.  Even worse, the answers depend heavily on which experts are asked and what the questions are. 

For example, do high stakes pressure schools to sacrifice the learning of certain academic subjects to improve results in others with high stakes attached?  The Center on Education Policy employed a variant of the professional judgment method by surveying school district officials to ask them if this was happening.  They found that 62% of districts reported an increase in time spent on high-stakes subjects and 44% reported a decrease in time spent on other subjects, so CEP concluded that high stakes were narrowing the curriculum.  But the GAO surveyed teachers and found that 90% reported that there had not been a change in time spent on the low-stakes subject of art.  About 4% reported an increase in focus on art and 7% reported a decrease.  So the GAO, also employing the professional judgment method, gets a very different answer than CEP.  Obviously, which experts you ask and what you ask them make an enormous difference.

Besides, if we really wanted to know about whether high stakes narrow the curriculum, shouldn’t we try to measure the outcome directly rather than ask people what they think?  Marcus Winters and I did this by studying whether high stakes in Florida negatively impinged on achievement in the low-stakes subject of science.  We found no negative effect on science achievement from raising the stakes on math and reading.  Schools that were under pressure to improve math and reading results also improved their science results.

Even if you aren’t convinced by our study, it is clear that this is a better way to get at policy questions than by using the professional judgment method.  Stop organizing committees of selected “experts” and start analyzing actual outcomes.


Add a Little Salt

March 20, 2009

(Guest Post by Jonathan Butcher)

Last week, a South Carolina education blog called “The Voice for School Choice” posted links to an article on the worst schools in the U.S.  South Carolinians should be particularly irked with the article because 11 SC schools made the top 25.  All is not what it seems, though; below is a touch of salt to be added to the results of this article (“25 Worst Performing Public Schools in the U.S.”).  At issue is neither the intelligence of the authors nor their ability; however, they make very strong claims as to the significance of their findings, and readers should be aware of the foundation on which the authors rest these claims regarding student achievement.

“Worst Schools” was composed by a website called “Neighborhood Scout” and published on a financial blog operated by AOL called “WalletPop.”  Neighborhood Scout specializes in “nationwide relocation software, retail site selection, and real estate investment advertising.”  They are not an academic department at a university nor a policy research institution, and their founders do not have backgrounds in education or education policy research.  The founders’ specialty is geography, computer mapping and web design (there is no evidence that the authors are different from those described on Neighborhood Scout’s web page).

Neighborhood Scout created their own methodology for the “Worst Schools” article.  They subtracted the percentage of students who “passed” NAEP in a particular state (I am assuming they mean students who scored at proficient or above—though it could mean basic or above) from the “average percentage” of students in the same state who scored at the proficient or advanced level on the state’s mandatory test.  Their objective was to find schools in states where there is a large difference between the percentage of students proficient on a state test and the percent proficient on NAEP in order to make judgments about the difficulty (or lack thereof) of a state test.  The article does not compare similar student populations—as does NAEP—or at the least this methodology section does not indicate such disaggregation.
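As best I can reconstruct it from that methodology description, the ranking rests on a single per-state subtraction.  Here is a minimal sketch with entirely made-up percentages (the state names and numbers are hypothetical, used only to show the arithmetic):

```python
# Hypothetical percentages for illustration only -- not actual NAEP or state data.
state_test_proficient = {"StateA": 78.0, "StateB": 55.0}  # % proficient on the state's own test
naep_proficient = {"StateA": 30.0, "StateB": 33.0}        # % proficient or above on NAEP

def inflation_gap(state):
    # A large positive gap suggests the state's own test is much easier than NAEP;
    # the "Worst Schools" article appears to use this gap to discount state test results.
    return state_test_proficient[state] - naep_proficient[state]

for s in state_test_proficient:
    print(s, inflation_gap(s))  # StateA 48.0, StateB 22.0
```

Note what the subtraction cannot capture: it compares different student populations and different proficiency cut-points, which is precisely the objection about disaggregation raised above.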

Of note is that the study gives no indication of being peer-reviewed, and peer review is a robustness check even among research reports not submitted to journals.  In addition, the study is a snapshot of test scores.  It does not take into account improvement over time or student population changes, or compare scores to some baseline indicator.  For example, in the past three years, 6th graders at W.A. Perry (one of the SC schools in the bottom 25) have gone from 48% meeting or exceeding state standards in math to 66%.  They are still below the state average, but more students are meeting or exceeding state standards now than three years ago.  Similar results can be found in English/Language Arts.

Admittedly, W.A. Perry’s 6th graders’ scores are below the state average; however, they are making progress.  My aim is not to defend schools that may be low-performing, but a snapshot of a school’s test scores at one point in time does not a failing school make.  NCLB agrees with me, as a school must be in need of improvement for three years before significant intervention takes place.

Additionally, the article gives no indication of the student populations served at these schools.  For example, Milwaukee Spectrum School (#25) has a total population of 90 at-risk students who had a record of truancy at other schools.  The school is often a last stop for students ready to drop out of high school altogether.  Of course the school is struggling; it is intended to serve struggling students.

In the article, different grades are represented for each school.  For example, high schools are not compared to high schools, but to elementary, middle, and high schools.  This presents a problem because the trend in NAEP (generally) is that more elementary students score proficient than middle school students, and more middle school students score proficient than high school students (this is true across subjects).    

Further, scores are not reported for every grade in every subject.  So a high school with low-scoring 11th graders may sit on the “Worst Schools” list right before a middle school that has low-scoring 8th graders but a class of 6th graders with scores closer to the state average.

In the end, of course, readers will decide if this list of worst performing schools is convincing.  However, before sinking your teeth in, take the article with a grain of salt.

The No Stats All Star

February 19, 2009

(Guest Post by Matthew Ladner)

Michael Lewis strikes again with a must-read article about Shane Battier, the greatest professional basketball player you’ve never heard of, because all he does is help his team win games.  The article is Moneyball for the NBA, but with several twists, most prominently some very nasty individual-versus-team dynamics. In short, in baseball you essentially can’t aggrandize yourself as a player without also helping your team. If you are getting on base, you are padding your stats and helping your team win.

Not so in basketball, where you can get paid millions for padding your individual stats whether or not you help your team win games. One example raised in the article: NBA players don’t like to heave the ball at the end of the half or the game because it lowers their shooting percentage. In short, basketball is fraught with perverse incentives, making it much more like most of real life than baseball is. The would-be sabermetricians of the NBA have only begun to sort through this quandary.

Battier provides Lewis the perfect lens into this world, as a player who simultaneously has statistics that stink and is one of the most valuable players in the league.

Is there an education angle here? Yes indeed. Battier is what business guys call a “white space” employee. The term refers to the space between boxes on an organizational chart. A white space employee is someone who does whatever it takes to achieve organizational goals and makes the organization work much better as a whole.

As we move into the era of value-added analysis for teacher merit pay, this article provides much food for thought. School leaders must think carefully about what they reward, including how white space behavior is rewarded. Rewards should not be based solely on individual learning gains; reaching schoolwide goals should also be strongly rewarded. Otherwise my incentive as a math teacher will be to assign six hours of math homework a night, and to hell with everyone else (see Iverson, Allen).

Schools are more complex social organizations than basketball teams, so education sabermetricians have a great deal of work ahead of them. The good news, however, is that it can’t be hard to improve on a system that generally rewards teachers only for length of service and for often meaningless certifications and degrees.

There’s no reward for being a white space player OR a superstar in the current system of teacher compensation, just for being an old player. Imagine an NBA compensation system in which Larry Bird was still riding the pine and getting paid more money than LeBron, Kobe, or Battier. Hall of Fame = National Board Certified, but you no longer want Bird in the game if you want to win.

You wouldn’t need to be Bill James to figure out how to make such a system much more effective. Figuring out the right way to reward all the little invisible things that someone like Shane Battier does to make his team win, well, that’s trickier.  Overall, though, we have nowhere to go but up. Remember, both LeBron and Battier are multi-millionaires, while their equivalents in the teaching world have all too often left the profession in frustration or gone into administration.


Now She Tells Us

February 18, 2009

(Guest post by Greg Forster)

Randi Weingarten explained this week that, contrary to the outrageous slander that the unions are against education reform, she’s actually in favor of having the federal government create rigorous national academic standards for public schools, and will remain in favor of it as long as the Democrats are in power. (I’m paraphrasing.)

She writes: “Should fate, as determined by a student’s Zip code, dictate how much algebra he or she is taught?”

So the AFT now endorses the principle that a child’s education should not be determined by Zip code? When did that happen?

And if a child’s Zip code shouldn’t determine how much algebra he or she is taught, why should that determination be made in Washington instead? Apparently the amount of algebra you learn should be determined not by your Zip code, but by your international dialing code.

At least with Zip codes, some families can exercise school choice by moving to a different neighborhood. Yes, it’s an unfair system, since not all families are equally mobile; apparently Weingarten thinks the fair thing to do is to take away the freedom now enjoyed by some parents, so that there will be an equality of unfreedom.

Here we see the real modus operandi of the Left: achieve equality by leveling downward.