Is Ed Reform Tripping with a Testing High?

August 27, 2014

Marty West and colleagues have an incredibly important study described in Education Next this week.  It’s based on a piece published earlier this year in Psychological Science, a leading psychology journal, but the Ed Next version is probably easier for ed reform folks to access and grasp.

At its heart, the study applies well-established concepts from cognitive psychology to the field of education policy, with potentially unsettling results.  Intelligence, or cognitive ability, can be divided into two types: crystallized knowledge and fluid cognitive skills.  Crystallized knowledge is all of the stuff you know — facts, math formulae, vocabulary, and so on.  Fluid cognitive skills are the ability to think quickly, keep things in memory, and solve new problems.  The two are closely connected, but the distinction between them matters.

West and his colleagues collected data from more than 1,300 8th graders in Boston, including students at some of the city’s famously high-performing charter schools, to see how those schools affected both types of cognitive ability.  The bottom line is that schools believed to be high-performing are dramatically improving students’ crystallized knowledge, as measured by standardized tests, but have basically no effect on fluid cognitive skills.  That is, Boston’s successful charter schools appear to be able to get students to know more stuff but do not improve their ability to think quickly, keep things in memory, or solve new problems.

Perhaps we should be happy with the test score gains and untroubled by the lack of improvement in fluid cognitive skills.  Chetty et al. suggest that test score gains are predictive of later success in life, so who cares about those other skills?  Maybe.  But maybe the students in Chetty’s data experienced improvements in both crystallized knowledge and fluid skills, and he only had measures of the former.  It could still be the case that both types were essential for their later success.

There are worrisome signs that graduates from schools like KIPP are struggling in college despite impressive test score improvement in K-12.  Perhaps the mismatch between improved crystallized knowledge and stagnant fluid skills cannot produce sustained success.  Perhaps these products of successful ed reform know more of the high school curriculum but are unable to do things, like think quickly and solve new problems, that are important for later life accomplishment.  E. D. Hirsch and his followers have been convinced that gains in crystallized knowledge would translate into improved fluid skills, but that appears not to be the case — at least not in these model charter schools in Boston.

If fluid skills really matter, ed reform is in a serious pickle.  First, our reliance on standardized tests means we measure crystallized knowledge almost exclusively.  If anything, we appear to be increasingly emphasizing (and measuring) crystallized knowledge to the exclusion of fluid skills.  So even when we manage to produce test score gains, we are all the more likely to be neglecting fluid skills.

Second, no one really knows how to improve fluid skills in a school setting.  There are some laboratory experiments that have successfully altered fluid skills, but those effects are often fleeting and have never been replicated in a school environment.  It’s taken us decades to devise some effective strategies for improving test scores.  It may take us decades more to devise strategies for schools to affect fluid skills, even if we start caring about and measuring those outcomes.

Educational success probably requires addressing student needs and abilities on multiple dimensions.  Large, technocratic systems built around standardized test results have a hard time focusing on more than the single dimension of test scores.  The research by West et al. suggests that not all dimensions of academic progress move in sync.  The unattended dimension of fluid skills may spoil progress on the attended dimension of crystallized knowledge by undermining later-life success.


Chicago Teachers Union President Declares Jay Was Right

August 15, 2014

(Guest post by James Shuls)

It wasn’t long ago that Jay and Marcus Winters asked the question, “How much are public school teachers paid?” Rather than compare the annual salary of teachers and workers in other professions, Jay and Marcus compared salary based on how many hours and weeks the workers actually put in on the job. Not surprisingly, public school teachers fared well when their relatively short work year was factored into the equation.

Of course, Jay and Marcus’ analysis was roundly criticized. You simply cannot claim that teachers are decently paid. The audacity!

Now it seems, an unlikely ally has taken up the Jay and Marcus mantle on teacher pay – Karen Lewis, the president of the Chicago Teachers Union.

Recently, it was announced that Lewis is considering a run for Mayor of Chicago. As with any political race, this led to a closer examination of Lewis’ finances. The Chicago Sun-Times reports Lewis makes more than $200,000 in combined compensation from the Chicago Teachers Union and the Illinois Federation of Teachers, where she serves as executive vice president. Here’s the good part:

When she first ran for CTU president four years ago, Lewis promised not to make more than the highest-paid teacher.

“How can you criticize [the CPS CEO] for making $230,000 a year during these hard times if you’re making so much more than your members?” she told the Chicago Reader then.

Chicago Public Schools’ payroll records show no teacher makes as much as Lewis’ $136,890 CTU base salary.

In an interview Tuesday, Lewis said she didn’t break her promise not to make more as union president than Chicago’s highest-paid teacher makes, saying her CTU salary is for working the full year, rather than a 39-week school year. (emphasis mine)

What Lewis is saying is that teachers in Chicago are making the equivalent of $136,890 or more.  They just work fewer weeks.  Now, where have I heard that before?
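Lewis’s defense invites a quick back-of-the-envelope check (a sketch: the $136,890, 39-week, and full-year figures come from the Sun-Times report; the proration arithmetic and variable names are mine):

```python
# Prorate Lewis's full-year CTU base salary down to a 39-week
# teacher work year, to see what teacher salary it is "equivalent" to.
ctu_base_salary = 136_890   # Lewis's full-year CTU base salary, in dollars
full_year_weeks = 52
school_year_weeks = 39      # the "39-week school year" Lewis cites

weekly_rate = ctu_base_salary / full_year_weeks
school_year_equivalent = weekly_rate * school_year_weeks

print(f"Weekly rate: ${weekly_rate:,.2f}")
print(f"39-week equivalent salary: ${school_year_equivalent:,.2f}")
```

By Lewis’s own logic, a teacher paid about $102,668 for 39 weeks earns the same weekly rate as her $136,890 full-year salary, which is exactly the hours-and-weeks adjustment Jay and Marcus made.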

It’s almost as if Karen Lewis is saying…

James V. Shuls is an assistant professor of educational leadership and policy studies at the University of Missouri – St. Louis and a Distinguished Fellow of Education Policy at the Show-Me Institute. Follow on Twitter @Shulsie


Work Hard. Do Your Research. Does KIPP steal the best students?

August 8, 2014

(Guest Post by Collin Hitt)

KIPP schools demand a certain kind of student – a student who is willing to put in long hours and put up with very strict rules.  KIPP has been shown to substantially increase student test scores.  But critics argue that the culture at KIPP has major effects on recruitment and retention.  KIPP schools attract better students and are more likely to weed out low-performing students, the argument goes.  If this is true, KIPP students who persist in school are more likely to have a high-achieving peer group – and the effects of simply being in that peer group are really what explain any positive effects of the school.  A new study from Mathematica destroys this critique.

At its core, the critique of KIPP is a restatement of larger questions facing the charter school sector.  Do charter schools cream the best students from nearby schools?  And, compared to surrounding schools, are the lowest performing students at charter schools the most likely to leave?  Two rigorous studies reviewed here at JPGB answer with an unequivocal “no.”  But KIPP is a crucial case.  The average charter school might take all the students it can find and do anything to keep them.  But surely if anybody engineers the makeup of its student body, it would be a school like KIPP, right?

So, does KIPP cream the best students (or at least better-than-average students) from nearby schools?  The following chart shows that it clearly does not.

Entering KIPP students perform the same as or worse than students in surrounding schools.  But does KIPP then make exceptional efforts to push low performers back into surrounding schools?  Again, clearly not.

Students transfer out of KIPP schools at the same rate as out of surrounding schools.  And the students who transfer perform the same on standardized tests.  So the only way KIPP might create a measurably different peer group is through the quality of the students in later grades who replace those who transfer out.  In this respect, the students who later transfer into KIPP are higher performers on average than students who transfer into district schools, according to the Mathematica study.  But this, of all the ways to create a higher-performing peer group, is the least likely to have any meaningful impact on the performance of students who enter KIPP early on.  The high-performing peer group wouldn’t even be formed until those students’ time at KIPP was almost over.

With their typical class, the Mathematica authors give their critics a charitable hearing, in fact constructing the strongest possible case for the peer-effect hypothesis. So, do peer effects explain KIPP’s impact on test scores? From the Mathematica study itself:

“One way to estimate the possible size of peer effects at KIPP is to combine our findings with other research on how peers’ prior scores affect student achievement. Unfortunately, published estimates of the effect of peer ability on student achievement range widely, from close to zero to nearly half a standard deviation impact for each standard deviation of difference in peer achievement. Even if the largest estimates of peer effects are correct, however, the improvement in peers’ prior test scores would appear to benefit KIPP students’ achievement only by about 0.07 to 0.09 standard deviations after four years at KIPP. KIPP’s cumulative impacts in middle school are three times that size, so even the largest estimates of the size of peer effects suggest that they are unlikely to explain more than one-third of the cumulative KIPP impact.

“Moreover, the best available evidence shows that KIPP produces large impacts on students in their first year at a KIPP school—before late-entering students could possibly have any effect. Consequently, the true peer effect resulting from late entrants is likely to be substantially below the back-of-the-envelope estimate of 0.07 to 0.09 standard deviations.”
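The bound in the quoted passage is simple arithmetic and can be checked directly (a sketch using only the numbers reported in the quote; the variable names are mine):

```python
# Reproduce the study's bound: even the largest back-of-the-envelope
# peer effect (0.09 SD) against a cumulative KIPP middle-school impact
# described as "three times that size."
peer_effect_max = 0.09
kipp_cumulative_impact = 3 * peer_effect_max

max_share_from_peers = peer_effect_max / kipp_cumulative_impact
print(f"Cumulative KIPP impact: {kipp_cumulative_impact:.2f} SD")
print(f"Largest share explainable by peers: {max_share_from_peers:.0%}")
```

Even granting the critics their most favorable assumptions, at least two-thirds of KIPP’s cumulative impact remains unexplained by peers.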

The peer-group critique of KIPP essentially says this: anybody could get KIPP’s results if they had KIPP’s students. This simply isn’t true. KIPP is getting better results because of the work being done by teachers and staff. Rather than wonder, if only other schools could have students like KIPP’s, perhaps we should wonder why other schools don’t have adults like KIPP’s. (And, for that matter, why don’t other think tanks have scholars like Mathematica’s?)


Welcome Back to School!

August 8, 2014

Lorie Ann Hill

I know it’s early August but in some areas of the country they are already heading back to school.  And in Wagoner, Oklahoma we have a report of a special education teacher who was arrested on the first day back “after she showed up at school under the influence of alcohol and without her pants.”

Andy Rotherham dryly observed: “I blame Common Core.”

But hasn’t Oklahoma committed to withdrawing from Common Core?  Maybe he means that he blames the absence of Common Core.  I, as a believer in incentives, blame the relatively low cost of booze and the relatively high cost of pants. Or something like that.


Keeping Score in the Greene-Polikoff Wager

August 7, 2014
The unraveling of Common Core makes this flop the most obviously ill-conceived and doomed-to-fail reform effort since the Annenberg Foundation threw $500 million away in the 1990s.

Morgan responded:

At last count, 1 state out of 45 has repealed the standards.

So we agreed to make a wager:

In ten years, on April 14, 2024, I bet Morgan that fewer than half the states will be in Common Core.  We defined being in Common Core as “shared standards with shared high stakes tests, even if split between 2 tests.”  Given the 50 states plus DC (51 jurisdictions in all), Morgan wins if 26 or more have shared standards and high stakes tests and I win if the number is 25 or less.  The loser has to buy the winner a beer (or other beverage).

It hasn’t even been four months, but I thought it might be useful to report the current score on our bet.  With the withdrawal of Iowa this week from the Smarter Balanced testing group, only 26 states plan to use one of the two national tests to assess their students during the 2014-15 school year.  It’s true that 35 states remain part of the two testing consortia, and some of the 9 states that have delayed implementation of the common tests may begin using one of them in the next few years.  But it’s safe to say that several of those 9 delayed-start states will never follow through.  And some of the 26 states actually using a common test in 2015 are already making noises about withdrawing.  See, for example, reports coming out of Wisconsin and South Carolina.

If the states currently using a common test that drop it outnumber the delayed states that follow through by even one, I will have won the wager.  And we have more than 9 years to see that happen.  Mmmmm.  I’m thinking that a nice Belgian ale would be a delicious prize for victory.
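The scorekeeping can be sketched in a few lines (the counts and the win condition come from the post; the helper function and its names are mine):

```python
# Tally the Greene-Polikoff wager using the counts in the post:
# 26 states currently using a common test, threshold of 26 for Morgan.
THRESHOLD = 26   # Morgan wins at 26 or more; Jay wins at 25 or fewer
using_now = 26   # states using a common test in 2014-15

def wager_winner(currently_using, late_joiners, dropouts):
    """Winner if `late_joiners` delayed states follow through and
    `dropouts` currently participating states quit."""
    final_count = currently_using + late_joiners - dropouts
    return "Morgan" if final_count >= THRESHOLD else "Jay"

# One more dropout than late joiner is enough to tip the bet to Jay:
print(wager_winner(using_now, late_joiners=0, dropouts=1))
print(wager_winner(using_now, late_joiners=2, dropouts=3))
```

At the current count the margin is exactly zero, so any net loss of a single state settles the bet in Jay’s favor.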

School Breakfast Research

August 6, 2014

Education is dominated by “do-gooderism.”  Everybody wants to help children.  But sometimes the desire to help is seen as sufficient proof that one is actually helping.  And too often little thought is given to how trying to help might do some harm.

Which brings us to school breakfast programs.  Who could be against helping kids by making sure they start their day with a healthy breakfast provided in school?  Well, it is possible that those programs don’t do much good.  And it is possible they do some harm.

Last month Diane Whitmore Schanzenbach and Mary Zaki of Northwestern University released their analysis of a randomized experiment in which access to free school breakfast was expanded.  You can read the abstract and full report on the National Bureau of Economic Research web site.  Schools that offer free breakfast often have low participation rates.  So, to learn how to increase participation, 70 matched pairs (or triplets) of schools participated in an experiment in which they could offer either universal free school breakfast (free to every student, regardless of individual eligibility for subsidized meals) or breakfast in the classroom (BIC), in which all students are served breakfast in their classroom at the start of the school day.

Schanzenbach and Zaki conclude:

We find both policies increase the take-up rate of school breakfast, though much of this reflects shifting breakfast consumption from home to school or consumption of multiple breakfasts and relatively little of the increase is from students gaining access to breakfast. We find little evidence of overall improvements in child 24-hour nutritional intake, health, behavior or achievement, with some evidence of health and behavior improvements among specific subpopulations.

Providing breakfast in the classroom, not surprisingly, has a very large effect on whether students participate in the breakfast program because it’s given to every student in the classroom.  Pretty much the only way you could not participate is by not being in school.  Universal breakfast has a more modest effect on increasing participation in the program (about 10 percentage points) because students have to arrive early to get the breakfast.

So if the goal of the program is to have people participate in the program, BIC is a huge success and universal breakfast is a modest success.  But if the point is to increase the amount or quality of calories students consume or to alter their behavior or learning in school, these programs don’t seem to be effective.  Universal breakfast does not even seem to have an effect on whether students eat breakfast or not.  It only shifts whether students eat breakfast at home or at school.  BIC does increase whether students eat breakfast (or have two breakfasts), but has no effect on total caloric intake.  Students just shift their eating so that they have fewer calories at other meals.

But when students eat might affect their health, behavior, and learning outcomes, so the researchers looked at whether the BIC program helped by increasing the likelihood that students would have breakfast even if those calories were offset by a reduction in eating at other times.  Unfortunately it didn’t.  They conclude: “The BIC treatment does not statistically significantly improve any outcome.”

So, expanding access to school breakfast does not seem to have any meaningful benefits.  Where’s the harm?  Leaving aside the cost to taxpayers, the greatest potential harm of these programs is that they alter the relationship between families and their schools by displacing the family’s traditional role of feeding its own children.  Doing so may make families and students feel more dependent on the government and may lead school teachers and administrators to view families and students as generally incompetent.  The state becomes the new Daddy and the parents become children incapable of providing for themselves or their own children.

This all makes me think of the new book by Jason Riley of the Wall Street Journal: Please Stop Helping Us.  We need to hold in check our desire to do good by remembering first to do no harm.


Abracadabra

July 31, 2014

Ancient mystics believed that one could have the magical power to create reality simply by uttering certain words.  This is the origin of “magical words” like abracadabra, which means “I create as I speak” in Aramaic.  But the belief in using magical words to create reality continues to this day, and not just among cheesy stage illusionists.  The Gates Foundation and their various grant recipients have “in a series of strategy sessions in recent months… concluded they’re losing the broader public debate [over Common Core] — and need to devise better PR.”

Common Core supporters haven’t considered the possibility that their political strategy is flawed because they are trying to impose a top-down reform on a hostile and well-organized opposition of teachers and affluent parents.  Nope.  It must be that they just aren’t using the right words.  In particular, they think they need to shift from talking so much about “facts” and “evidence” and start using more “emotional” words.  If only they say the right words, people’s interests will change and the opposition will melt.  Abracadabra!

This faith in magical words is a symptom of a larger disease.  Education reformers have invested way too much in people who do almost nothing except craft political messages.  They try to coin just the right soundbite to fit in their dozens of daily tweets.  But they don’t just repeat these soundbites on Twitter, they use this “messaging” at policy conferences, in essays, and in conversations with each other.  They have put so much energy into perfecting the Twitter-bite that they can no longer think in any way other than in short bursts of spin.  It is rotting their brains.

Unfortunately, I think the rot starts at the top.  The Gates Foundation not only funds a large amount of this messaging nonsense, but engages in this type of slogan-speak themselves.  I’ve been reviewing their own descriptions of the purposes of their grants and have found poetry, like “to support organizations in a strategic visioning engagement to develop their innovative professional development theory of action and implementation strategies” or “to bring together a coalition of thought leaders, policy-makers, consultants and practitioners as part of the Global Education Leaders’ Program (GELP) and support them through a convening.”  Ugh.  

Here on JPGB we’ve been warning about the abuse of the English language in education reform for a while now.  And Rick Hess has joined the party, alerting readers to common phrases that should raise alarms with your BS-detector.  As Orwell understood, the problem with slogan-speak is not just that it muddles debates by obscuring the substance of what people are really saying.  And the problem is also not limited to the fact that degrading policy discourse with this gibberish undermines the credibility of future attempts at serious policy discussion.

The worst problem of slogan-speak may be that it is distorting the thinking of the ed reformers themselves.  They are usually completely sincere when they spout it.  They believe it.  And so their analysis of education reform issues is stunted and superficial.  They can’t think through an issue much further than how it sounds in a Twitter post.  And perhaps this is why they are doubling down on a top-down standards reform that has no political logic to it.  They just can’t think it through.  So, when it runs into trouble they revert to what they know — more messaging.

