I’d like to continue my review of Michael Lewis’ new book, The Undoing Project, on the collaboration between Daniel Kahneman and Amos Tversky, by speculating about what advice Kahneman and Tversky might offer education foundations.
Foundations face a particular challenge in detecting and correcting errors in their own thinking. Because many people want things from foundations, especially their money, foundations are frequently organized so as to limit communication with outsiders. They generally don’t open their doors, phone lines, and emails to whoever might want to suggest or ask something of them, for fear of being overwhelmed. So they typically hide behind a series of locked doors, don’t make their phone numbers or email addresses readily available, and insist that all applications follow a specific format, be submitted at a particular time, and address pre-determined issues.
Insulating themselves from external influence is understandable, but it creates real problems if they ever hope to detect when they are mistaken and need to make changes. The limited communication that does make it through to foundations tends to re-affirm whatever they are already doing. Prospective grantees don’t like to tell foundations that they are mistaken and need to change course because that makes getting funded unlikely. Instead, foundations tend to hear that their farts smell like roses. To make matters worse, many foundations are run in very top-down ways, which discourage questioning and self-criticism.
The Undoing Project presents a very similar situation having to do with airline pilot errors. Lewis describes a case in which a commercial airline was experiencing too many accidents caused by pilot error. The accidents were not fatal, but they were costly and dangerous — things like planes landing at the wrong airport. So the airline approached Amos Tversky and asked for help in improving their training so as to minimize these pilot errors. They wanted Tversky to design a pilot training method that would make sure pilots had the information and skills to avoid errors.
Tversky told the airline that they were pursuing the wrong goal. Pilots are going to make errors, and no amount of information or training would stop them from committing those mistakes. We often assume that our errors are always caused by ignorance, but Tversky told them this was not true. The deeper problem is that once we have a mental model of the world, we tend to downplay or ignore information that is inconsistent with that model and play up facts that support it. If a pilot thinks he is landing at the right airport, he distorts the available information to confirm that he is landing in Fort Lauderdale rather than nearby Palm Beach, even when that is incorrect. The problem is not a lack of information, but our tendency to fit information into our preconceived beliefs.
Tversky’s advice was to change the cockpit culture to facilitate questioning and self-criticism. At the time, cockpits were very hierarchical, based on the belief that co-pilots needed to implement the pilot’s orders quickly and without question, lest delay and doubt promote indecision and disorder. The airline implemented Tversky’s suggestions and changed its training to encourage co-pilots to doubt and question and pilots to be more receptive to that criticism. The result was a marked decline in accidents caused by pilot error. Apparently we aren’t very good at detecting our own errors, but we are more likely to do so if others are encouraged to point them out.
So what might Tversky suggest to education foundations? I think he’d recognize that they have exceptional difficulty detecting their own errors and need intentional, institutional arrangements to address that problem. In particular, he might suggest that they divide their staff into a Team A and a Team B. Each team would work on a different theory of change — theories that are not necessarily at odds with each other but are also not identical. For example, one team might focus on promoting school choice and another on promoting test-based accountability. Or one team might promote tax credits and the other ESAs. The idea behind dividing staff into somewhat competing teams is that each then has an incentive to point out shortcomings in the other’s approach. Dividing into Team A and Team B could be a useful check on the all-too-common problem of groupthink.
Another potential solution is to hire two or three internal devil’s advocates whose job is to question the assumptions and evidence relied upon by foundation staff. To protect those devil’s advocates, it is probably best to have them report directly to the board rather than to the people they are questioning.
Whatever the particular arrangements, the point is that education foundations should strive to promote an internal culture of doubt and self-criticism if they wish to catch and correct their own errors and avoid groupthink. One foundation that I think has taken steps in this direction is the Arnold Foundation. They hold internal seminars in which they invite outside speakers to offer critiques of their work. Neerav Kingsland, who heads education efforts at Arnold, is also unusually accessible on blogs and Twitter for critical discussion. I don’t always agree with Neerav, but I am impressed by his openness to dissent.
The collaboration between Kahneman and Tversky was itself an example of the importance of tough criticism within a joint effort. Like the airline pilots, they developed habits of challenging each other, which made their work together better than anything either could have produced individually.
‘The Undoing Project’ is a worthwhile book for foundations and other organizations to ponder. The power of self-affirming heuristics is undeniable.
In ‘Thinking, Fast and Slow,’ Daniel Kahneman wrote about how, even when he had evidence that the test he had devised for identifying effective IDF officers did not predict well (in real combat, lots of low-scoring officers did great and lots of high-scoring officers did not), he still wanted to use it. He fell victim to the same heuristics he and Amos had written about, and he even knew he was a victim.
If I may offer some blunt observations based on inside experience:
1) Who watches the watchmen? Outside consultants hired to evaluate whether your farts really do smell like roses are, if anything, even more likely to tell you your farts smell like roses than current or potential grantees. The grantees are usually restrained in their sycophancy by the need to establish and maintain some kind of credibility. The consultant always has credibility because he was hired to serve as your BS detector. Moreover, while the grantees have an independent value proposition to the foundation and are just adding sycophancy as a sort of bonus add-on, the consultant’s job really depends 100% on whether the people who hire him like him. And thus, ironically, the consultant whose sole job is to call BS on sycophancy is by far the most strongly incentivized toward sycophancy. Having him report to the board rather than the staff does little to nothing to mitigate this; the board hires the staff and approves their projects, so they’re ego-invested in the staff and their work.
2) Building communication walls is a waste of time. For a while, my email and desk phone were publicly listed on the website, and I got close to zero bad inquiries. The few I got were easily responded to with a polite refusal. (If telling people “no” bothers you, you have no business working at a foundation.)
3) YMMV, but I quickly became highly sensitive to, and disgusted with, sycophancy. Nothing moved me to Nope Out on a potential grantee or disregard a consultant’s opinion more rapidly. I’ll admit I got rolled a few times, but overall, I didn’t reward sycophancy.
4) What this leaves you with, I think, is the multiple-teams approach, which is a good idea.
5) Much more than that, though, I’m convinced no foundation should allow itself to exist for more than ten years. The dysfunctions inherent in the enterprise are largely a function of time, and that’s a variable you can control. Pick a goal, hire a team, and tell them they have ten years to develop a strategy and build the networks and institutions that will carry it out.
I agree that the Team A/Team B approach is probably much better. I had envisioned the devil’s advocates as employees, not consultants, but either way they may be too easily captured. And you are right that reporting directly to the board provides little protection unless the board really wants internal criticism. Then again, if they really don’t want to hear criticism, there may be no institutional arrangement that could fix this.
“There is no institutional arrangement that could fix this” is basically my conclusion here. With one exception: a shorter time limit.
A short time horizon reduces the odds of capture but also prevents learning from failure and adapting.
For the big foundations, there’s an interesting question of Team A/B versus, say, Teams A through H (8 teams).
I wonder: would that depend on your belief, at the outset, about your chances of success (using whatever metrics define success)?
I.e., if you think the chance of any one team succeeding is in the 50% range, having 2 makes sense.
“Hey, we have 2 big ideas. Small high schools. Measuring teachers. Good chance one will make a big difference for kids.”
But if you think (at the outset) that the chances of any one team succeeding are in the 10% range, then would you want to behave more like a VC?
Have a basket of bets, assume most will fail, tell each team that at the outset.
I wonder if two teams each with a 10% chance of success still leaves much of the self-criticism problem intact.
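To put rough numbers on that intuition: if you assume each team is an independent bet with the same chance p of success, then the probability that at least one of n teams succeeds is 1 - (1 - p)^n. Here is a minimal sketch of that arithmetic; the probabilities and team counts are just illustrative assumptions, not anything from the foundations discussed above.

# Chance that at least one of n independent teams/bets succeeds,
# when each has probability p of success (illustrative assumption).
def at_least_one_success(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for p, n in [(0.5, 2), (0.1, 2), (0.1, 8)]:
    print(f"p={p:.0%}, teams={n}: {at_least_one_success(p, n):.0%}")

# p=50%, teams=2: 75%
# p=10%, teams=2: 19%
# p=10%, teams=8: 57%

On those assumptions, two 50% bets give roughly a 75% chance of at least one win, while two 10% bets give only about 19%; it takes something closer to the eight-team basket to get past a coin flip. That says nothing, of course, about the separate self-criticism benefit of having more than one team.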
I don’t think the number of teams should be driven primarily by the odds of success. The motivation for multiple teams is to help keep everyone honest so that you can detect failure and change course. So you might have as many teams as you have good ideas to pursue and resources to support, but certainly more than one.