

Moral Tribes:

Emotion, Reason, and the Gap Between Us and Them

By Joshua Greene

Just Babies:

The Origins of Good and Evil

By Paul Bloom





(Atlantic)

In 1999, Joshua Greene - then a philosophy graduate student at Princeton, now a psychology professor at Harvard - had a very fertile idea. He took a pretty well-known philosophical thought experiment and infused it with technology in a way that turned it into a very well-known philosophical thought experiment - easily the best-known, most-pondered such mental exercise of our time. In the process, he raised doubts, in inescapably vivid form, about the rationality of human moral judgment.

The thought experiment - called the trolley problem - has over the past few years gotten enough attention to be approaching 'needs no introduction' status. But it's not quite there, so: An out-of-control trolley is headed for five people who will surely die unless you pull a lever that diverts it onto a track where it will instead kill one person. Would you - should you - pull the lever?

Now rewind the tape and suppose that you could avert the five deaths not by pulling a lever, but by pushing a very large man off a footbridge and onto the track, where his body would slow the trolley to a halt just in time to save everyone - except, of course, him. Would you do that? And, if you say yes the first time and no the second (as many people do), what's your rationale? Isn't it a one-for-five swap either way?

Greene's inspiration was to do brain scans of people while they thought about the trolley problem. The results suggested that people who refused to save five lives by pushing an innocent bystander to his death were swayed by emotional parts of their brains, whereas people who chose the more utilitarian solution - keep as many people alive as possible - showed more activity in parts of the brain associated with logical thought.

If you put Greene's findings in general form - human 'reasoning' is sometimes more about gut feeling than about logic - they are part of a wave of behavioral-science research that in recent years has raised doubts about how much trust your brain deserves. The best-seller lists have featured such books as Predictably Irrational, by the Duke psychologist Dan Ariely, and Thinking, Fast and Slow, in which the Princeton psychologist and Nobel laureate Daniel Kahneman covers acres of research into humanity's logical ineptitude.

But there's a difference between this work and Greene's work. Ariely and Kahneman spend lots of time in their books on financial and other mundane decisions, whereas Greene focuses on moral matters. It's one thing to say 'Isn't it crazy that you'll drive 10 miles to save $50 on a $100 purchase but not to save $50 on a $500 purchase?' It's another thing to say 'Isn't it crazy that you'll dutifully kill a guy by pulling a lever but refuse on principle to give him a nudge that leads to the same outcome?' The first question is about self-help. The second question is about something more.

How much more? To judge by Greene's new book, a whole lot more. It's called Moral Tribes: Emotion, Reason, and the Gap Between Us and Them - and, in case the title alone doesn't convince you that the stakes are high, Greene writes that his book is about 'the central tragedy of modern life.' He's not alone in thinking this is high-gravitas stuff. The Yale psychologist Paul Bloom, who also studies the biological basis of morality, has a new book called Just Babies, about the emergence of moral inclinations in infants and toddlers. He's chosen the subtitle The Origins of Good and Evil.

I have a fairly robust immune response to book-marketing hype, but in this case it's showing no signs of activity. The well-documented human knack for bigotry, conflict, and atrocity must have something to do with the human mind, and relevant parts of the mind are indeed coming into focus - not just thanks to the revolution in brain scanning, or even advances in neuroscience more broadly, but also thanks to clever psychology experiments and a clearer understanding of the evolutionary forces that shaped human nature. Maybe we're approaching a point where we can actually harness this knowledge, make radical progress in how we treat one another, and become a species worthy of the title Homo sapiens.

Both Bloom and Greene evince concern for the human predicament; both authors would like to do something about it; and both have ideas about what that something should be. But only Greene's book brings the word messianic to mind. His concern is emphatic, his diagnosis precise, and his plan of action very, very ambitious. The salvation of humankind is possible, but it's going to take concerted effort. Greene offers readers 'the motivation and opportunity to join forces with like-minded others,' the chance to support what he calls a 'metamorality.' And as this metamorality spreads, we can expect to solve problems on both the domestic and international fronts, bringing reason to discussions about abortion and gay rights; calming tensions between India and Pakistan, Israel and Palestine; and so on.

I like ambition! And I'm fine with a bit of messianism, because the intense tribalism we're seeing, domestically and internationally, does suggest that we may be approaching a point of true planetary peril. I agree with Greene that the situation calls for dramatic action - action on a scale that could bring a kind of transformation of human consciousness. But I think the transformation Greene has in mind, though appealing, isn't the really urgent one. I think his diagnosis gives short shrift to the part of human psychology that has most condemned us to conflict.

But don't give up hope. Lying within Greene's rich, sprawling book are the elements of an alternative diagnosis that I think is more on-target and more promising. Salvation may not be at hand, but a path to it is discernible, even if you have to squint to see it.

Greene's diagnosis is, at its foundation, Darwinian: the impulses and inclinations that shape moral discourse are, by and large, legacies of natural selection, rooted in our genes. Specifically, many of them are with us today because they helped our ancestors realize the benefits of cooperation. As a result, people are pretty good at getting along with one another, and at supporting the basic ethical rules that keep societies humming.

Anyone who doubts that basic moral impulses are innate will have Paul Bloom's book to contend with. He synthesizes research - much of it done by him and his wife, Karen Wynn - demonstrating that an array of morally relevant inclinations shows up in infants and toddlers. His list of natural moral endowments includes 'some capacity to distinguish between kind and cruel actions,' as well as 'empathy and compassion - suffering at the pain of those around us and the wish to make this pain go away.' Bloom's work has also documented 'a rudimentary sense of justice - a desire to see good actions rewarded and bad actions punished.'

So if we're such moral animals, why all the strife? Joshua Greene's answer is appealingly simple. He says the problem is that we were designed to get along together in a particular context - relatively small hunter-gatherer societies. So our brains are good at reconciling us to groups we're part of, but they're less good at getting groups to make compromises with one another. 'Morality did not evolve to promote universal cooperation,' he writes.

Greene seems to think this wouldn't be such a big problem if we were still living in the Stone Age, back when sparse population meant that groups didn't bump into one another much - and when, anyway, a neighboring village might share your language and your culture and maybe even include some of your kin. But the modern world smushes groups together, and to further complicate things, they have different values. Greene writes:

Many Muslims believe that no one - Muslim or otherwise - should be allowed to produce visual images of the Prophet Muhammad. Some Jews believe that Jews are God's chosen people and that the Jews have a divine right to the land of Israel. Many American Christians believe that the Ten Commandments should be displayed in public buildings and that all Americans should pledge allegiance to 'one nation under God.'

This fact - that different groups view life 'from very different moral perspectives' - is what Greene calls the 'Tragedy of Commonsense Morality.' He opens his book with a parable in which different tribes subscribing to different values can't get along and says, 'They fight not because they are fundamentally selfish but because they have incompatible visions of what a moral society should be.'

If this diversity of moral codes is indeed the big problem, one solution suggests itself: get rid of the diversity. We need 'a common currency, a unified system for weighing values,' Greene writes. 'What we lack, I think, is a coherent global moral philosophy, one that can resolve disagreements among competing moral tribes.' He is proposing nothing less than the moral equivalent of Esperanto.

One question you confront if you're arguing for a single planetary moral philosophy: Which moral philosophy should we use? Greene humbly nominates his own. Actually, that's a cheap shot. It's true that Greene is a utilitarian - believing (to oversimplify a bit) that what's moral is what maximizes overall human happiness. And it's true that utilitarianism is his candidate for the global metamorality. But he didn't make the choice impulsively, and there's a pretty good case for it.

For starters, there are those trolley-problem brain scans. Recall that the people who opted for the utilitarian solution were less under the sway of the emotional parts of their brain than the people who resisted it. And isn't emotion something we generally try to avoid when conflicting groups are hammering out an understanding they can live with?

The reason isn't just that emotions can flare out of control. If groups are going to talk out their differences, they have to be able to, well, talk about them. And if the foundation of a moral intuition is just a feeling, there's not much to talk about. This point was driven home by the psychologist Jonathan Haidt in an influential 2001 paper called 'The Emotional Dog and Its Rational Tail' (which approvingly cited Greene's then-new trolley-problem research). In arguing that our moral beliefs are grounded in feeling more than reason, Haidt documented 'moral dumbfounding' - the difficulty people may have in explaining why exactly they believe that, say, homosexuality is wrong.

If everyone were a utilitarian, dumbfoundedness wouldn't be a problem. No one would say things like 'I don't know, two guys having sex just seems ... icky!' Rather, the different tribes would argue about which moral arrangements would create the most happiness. Sure, the arguments would get complicated, but at least they would rest ultimately on a single value everyone agrees is valuable: happiness.

You may ask: How do you get devout Christians, Jews, and Muslims to abandon their religiously based value systems? And you may doubt that a book by a Harvard professor is going to do the trick. But Greene realizes that this isn't a summer-vacation project, and anyway, I don't think practicality is the core problem with his plan. I think the big problem comes earlier, in the diagnosis phase, and it's a problem common in analyses of the world's conflicts: overestimating the role played by divergent values.


Consider one of Greene's examples, cited above. It's true that some Jews believe God reserved the West Bank for them. But it's also true that this belief - and religious belief in general - has had very little to do with the Israeli-Palestinian conflict. Zionism was a largely secular movement, featuring minimal God talk, and the Arab resistance to it wasn't particularly religious. After the Palestinian Sirhan Sirhan, enraged by Robert F. Kennedy's pledge to send arms to Israel, assassinated Kennedy, he said he had done it for 'my country,' not 'my religion.' And, anyway, his religion was the same as Kennedy's: Christianity.

The Israeli-Palestinian conflict is at its root a conflict between two peoples who think they're entitled to the same piece of land. When they argue about this, they don't generally posit different ethical principles of land ownership. They posit different versions of history - different views on how many Arabs were living in Palestine before 1948, on who started the fighting that resulted in many Arabs leaving the area, on which side did which horrible things to the other side first, and so on. It's not clear why these arguments over facts would change if both groups were godless utilitarians.

In fact, Greene's own book suggests they wouldn't. Notwithstanding its central argument, it includes lots of evidence that often the source of human conflict isn't different moral systems but rather a kind of naturally unbalanced perspective. He cites a study in which Israelis and Arabs watched the same media coverage of the 1982 Beirut massacre and both groups concluded that the coverage was biased against their side. Any suspicion that this discrepancy was grounded in distinctive Jewish or Arab or Muslim values is deflated by another finding he cites, from the classic 1954 study in which Princeton and Dartmouth students, after watching a particularly rough Princeton-Dartmouth football game, reached sharply different conclusions about which side had played dirtier. Was the problem here a yawning gap between the value systems prevailing at Princeton and Dartmouth in the 1950s? Maybe a mint-julep-versus-gin-and-tonic thing?

No, the problem was that both groups consisted of human beings. As such, they suffered from a deep bias - a tendency to overestimate their team's virtue, magnify their grievances, and do the reverse with their rivals. This bias seems to have been built into our species by natural selection - at least, that's the consensus among evolutionary psychologists.

Such biases seem to be, in part, a legacy of all the zero-sum games that got played during human evolution: two clans fighting over some finite resource, two people competing for the same mate or for social status.

Ironically, this means that some self-serving biases are rooted in what Greene presents as the good news about our evolutionary past: that people are designed by natural selection to extract the benefits of cooperating - successfully playing non-zero-sum games - with other people. After all, even many non-zero-sum games have a zero-sum component. Yes, you and your three fellow hunter-gatherers will all be better off if you cooperate on a hunt to kill an animal none of you could kill individually. But when it comes time to divvy up the meat, it's better to get 30 percent than 25 percent. So we're naturally good bargainers - with ready access to facts that support our worthiness and less-ready access to facts that don't; we seem designed to believe in our entitlement. In one study, collaborators on jointly authored academic papers were asked what fraction of the team's accomplishment they were responsible for. On the average four-person team, the sum of the claimed credit was 140 percent.

If real-world examples of our self-serving biases seem hard to find, that's because they're supposed to be hard to find; the whole idea is that we're not aware of the information these biases exclude. So, for example, if you're like the average American, here's a fact you don't know: in 1953, the United States sponsored a coup in Iran, overthrowing a democratically elected government and installing a brutally repressive regime that ruled for decades. Iranians, on the other hand, are very aware of this, which helps explain why, to this day, many of them are gravely suspicious of American intentions. It also helps explain the 1979 takeover of the U.S. Embassy in Tehran - an event that many Americans no doubt chalk up to unfathomable religious zealotry. This is the way the brain works: you forget your sins (or never recognize them in the first place) and remember your grievances.

As a result, the antagonisms confronting you may seem mysterious, and you may be tempted to attribute them to an alien value system. Indeed, this temptation may itself be part of our built-in equipment for making our rivals' positions seem groundless. In any event, viewing values as deeply causal, as Greene and so many others do, seems to be deeply human. It's also unfortunate, because, time and again, that belief keeps us from addressing the actual issues that underlie conflict.

There's a fine line between studying moral psychology and studying plain old psychology. Many of these self-serving biases fall under the broader rubric of confirmation bias - a tendency to notice facts consistent with your thesis and overlook facts that contradict it. Confirmation bias is generally called a cognitive bias, not a moral bias, and it gets discussed in Daniel Kahneman's book, along with our other funny intellectual quirks. But cognitive biases can have moral consequences just as surely as trolley-car intuitions do.

What makes the moral stakes especially high is the way these biases interact with a feature of psychology that is more obviously moral in nature. Namely: the sense of justice - the intuition that good deeds should be rewarded and bad deeds should be punished. A sense of justice sounds like a fine thing. Rewarding good behavior increases its frequency, and the threat of punishment discourages bad behavior. But this assumes impartial judgment - that the punishment will go to those who did the bad things and the rewards will go to those who did the good things. And, as we've just seen, our judgments are designed not to be impartial.

When you combine judgment that's naturally biased with the belief that wrongdoers deserve to suffer, you wind up with situations like two people sharing the conviction that the other one deserves to suffer. Or two groups sharing that conviction. And the rest is history. Rwanda's Hutus and Tutsis, thanks to their common humanity, shared the intuition that bad people should suffer; they just disagreed - thanks to their common humanity - about which group was bad.

Greene and Bloom, and lots of other scholars, believe the sense of justice to be a legacy of natural selection, and the logic is straightforward. For starters, though extracting the benefits of cooperation involves things like making overtures to help someone (since maybe that person will help you down the road), it also means following up selectively - reciprocating kindnesses extended to you but not continuing to help those who don't help you. It may even mean punishing those who have abused your trust by, say, feigning friendship only to desert you once they've reaped the benefits of your generosity. So the impulses governing cooperation range from the gratitude that cements friendships to the sort of righteous indignation that fuels violence.

This polarity may be the origin of what eventually evolved into a full-fledged sense of justice, evident in people from an early age. Bloom shows us 1-year-olds who feel that a puppet that doesn't play nice with other puppets should be punished - in fact, they'll personally do the punishing, by taking the puppet's treats away. It sounds sweet, in that context - hard to believe that this same impulse, when fused with natural cognitive biases, can lead to genocide.

Greene doesn't neglect these sorts of impulses and biases. Indeed, he explains the dark side of our cooperative machinery and of human nature generally. This all feeds into his belief that 'our brains are wired for tribalism.' But he doesn't seem to reflect on the import of that observation. If indeed we're wired for tribalism, then maybe much of the problem has less to do with differing moral visions than with the simple fact that my tribe is my tribe and your tribe is your tribe. Both Greene and Bloom cite studies in which people were randomly divided into two groups and immediately favored members of their own group in allocating resources - even when they knew the assignment was random.

Of course, for things to get really nasty, you need more than just the existence of two groups. The most common explosive additive is the perception that relations between the groups are zero-sum - that one group's win is the other group's loss. In a classic 1950s study mentioned by Bloom (a study that couldn't be performed today, given prevailing ethical strictures), experimenters created deep hostility between two groups of boys at summer camp by pitting them against each other in a series of zero-sum games. The rift was mended by putting the boys in non-zero-sum situations - giving them a common peril, such as a disruption in the water supply, that they could best confront together.

When you're in zero-sum mode and derogating your rival group, any of its values that seem different from yours may share in the derogation. Meanwhile, you'll point to your own tribe's distinctive, and clearly superior, values as a way of shoring up its solidarity. So outsiders may assume there's a big argument over values. But that doesn't mean values are the root of the problem.

The question of how large a role differing value systems play in human conflict hovers over some of the world's most salient tensions. Many Americans see Muslim terrorists as motivated by an alien 'jihadist' ideology that compels militants to either kill infidels or bring them under the banner of Islam. But what the 'jihadists' actually say when justifying their attacks has pretty much nothing to do with bringing Sharia law to America. It's about the perception that America is at war with Islam.

Just look at the best-known terrorist bombers and would-be terrorist bombers who have targeted the United States since 9/11: the Boston Marathon bombers, the would-be 'underwear bomber,' the would-be Times Square bomber, and the would-be New York subway bombers. All have explicitly cited as their motivating grievance one or more of the following: the wars in Iraq and Afghanistan, drone strikes in various Muslim countries, and American support for Israeli policies toward Palestinians.

There's no big difference over ethical principle here. Americans and jihadists agree that if you're attacked, retaliation is justified (an extension of the sense of justice, and a belief for which you could mount a plausible utilitarian rationale, if forced). The disagreement is over the facts of the case - whether America has launched a war on Islam. And so it is with most of the world's gravest conflicts. The problem isn't the lack of, as Greene puts it, a 'moral language that members of all tribes can speak.' Retributive justice, for better or worse, is a moral language spoken around the world - but it is paired with a stubborn and lethal bias about who should be on the receiving end of the retribution.

None of this is to deny the existence of genuine disputes over values, or to say that such disputes never matter. Certainly domestic politics features explicit debates over values, and here - in the realm of abortion and gay rights - Greene's argument may hold more water. I'm sure there would be less homophobia if people were driven more by pure reason - not just because they would sidestep scriptural injunctions, but also because they would transcend gut reactions against forms of sex that they themselves don't find enticing.

But even in the domestic arena, the fact that people fight over values doesn't mean values are the prime mover. The conflicts may draw at least as much energy from prior intertribal tensions - including a sense that your group is threatened by another group, so that the game is zero-sum. Two decades ago, then-Vice President Dan Quayle paired criticism of abortion and gay parenting with a warning about the cultural elite who were said to be foisting such values on ordinary Americans. That was smart, because once you're convinced that an enemy group has contempt for your values, your values will seem, more than ever, worth fighting for.

And you'll do what it takes to fight for them. But you'd do that even if - as in Greene's ideal world - the ground rules confined your weapons to utilitarian argument. Indeed, debates in the public square over things like gay rights are already pretty utilitarian. (Is having gay parents good for children?) Yet the debates remain hard to resolve - in part because utilitarian arguments can be so creatively hypothetical. (Will the logic behind gay marriage lead to an acceptance of polygamy - and what effect would that have on human happiness?) It's enough to make you wonder how much conflict a moral lingua franca would really prevent.

Greene's evangelical mission, like many evangelical missions, is rooted in a not-very-flattering view of human nature. Namely: some of our deepest moral intuitions are gut feelings that are with us for no more lofty a reason than that they helped our ancestors protect themselves and spread their genes. Even the emotional aversion to pushing the guy onto the trolley track (an aversion so deep that utilitarians overcome it only with effort) isn't here because natural selection frowned on pushing people to their death per se. Greene speculates, rather, that we are averse to conspicuously harming people because in the hunter-gatherer environment of our evolution, that would have invited retaliation from the victims or their kin or friends. Killing someone remotely, by pulling a lever, doesn't trigger the same aversion as low-tech, obvious assault.


In a sense, then, people who obey their moral intuitions and refrain from pushing the man to his death are just choosing to cause five deaths they won't be blamed for rather than one death they would be blamed for. Not a profile in moral courage! So you can see why Greene would like to recruit us to a moral philosophy that doesn't rest on a bunch of emotional intuitions. It holds out the hope of helping us transcend natural selection's amoral agenda.

But however dark the view of human nature that inspired this mission, I fear it's not dark enough. If Greene thinks that getting people to couch their moral arguments in a highly reasonable language will make them highly reasonable, I think he's underestimating the cleverness and ruthlessness with which our inner animals pursue natural selection's agenda. We seem designed to twist moral discourse - whatever language it's framed in - to selfish or tribal ends, and to remain conveniently unaware of the twisting.

So maybe the first step toward salvation is to become more self-aware. Greene certainly thinks more self-awareness would help. In addition to singing the praises of his global metamorality, he encourages the cultivation of a kind of metacognitive skill. This would depend on understanding how our minds work and could help us decide more wisely - presumably not just by showing us the transcendent virtue of utilitarianism, but by making us aware of the biases that routinely afflict judgment. But I think he fails to recognize just how crucial thoroughgoing metacognition is to the whole project - and how much good it alone would do, with or without a global moral philosophy.

Which leads to a question: Um, how exactly do you do metacognition? Well, you could start by pondering all the evidence that your brain is an embarrassingly misleading device. Self-doubt can be the first step to moral improvement. But our biases are so subtle, alluring, and persistent that converting a wave of doubt into enduring wisdom takes work. The most-impressive cases of bias neutralization I'm aware of involve people who have spent ungodly amounts of time - several hours a day for many years - in meditative practices that make them more aware of the workings of their minds. These people seem much less emotion-driven, much less wrapped up in themselves, and much less judgmental than, say, I am. (And brain scans of these highly adept meditators have found low levels of activity in brain networks associated with self-regarding thought.)

Happily for those of us who can't spare several hours a day, more-modest progress can be made by pursuing this mindfulness meditation in smaller doses. And this practice exploits a strategy that is common in successful evangelical missions: using self-help as bait. Loosening the grip of your emotions can make you happier, and for many meditators that's the big draw. The fact that emotionally driven and subtly self-centered moral judgments loosen their hold on you as well seems almost like a side effect.

Full disclosure: I practice (in moderation) this sort of meditation, so the previous paragraph was a sermon from my tribe. But other tribes have their own paths toward the goal of transcending selfish biases. I don't just mean that there are edifying meditative traditions in all major religions (though there are), or that very old scriptures say things like 'Why do you see the speck in your neighbor's eye, but do not notice the log in your own eye?' (though they do). I also mean that within various kinds of tribes - religions, nations, political parties - there are people urging their compatriots to see things from a less tribal perspective, to put themselves in the shoes of the other, to explore the non-zero-sum prospect of accommodation. These voices can prevail without Greene's recommended overhaul of tribal value systems, and if they do prevail, the distinctive tribal values that seem so explosively divisive now will start to seem less so.

This sort of global metacognitive revolution, even with the assistance it's getting from science, may be a long shot, and it certainly will be a long, hard slog. But nourishing the seeds of enlightenment indigenous to the world's tribes is a better bet than trying to convert all the tribes to utilitarianism - both more likely to succeed, and more effective if it does.

Don't get me wrong: Some of my best friends are utilitarians. In fact, the utilitarian tribe is another tribe I belong to. I even once spent a chapter of a book defending utilitarianism as the best available moral philosophy. I just don't think it's the key to salvation.

It's tempting for us utilitarians to look around at people with more emotionally rooted value systems and think that these primitive worldviews are what stand in the way of progress. But if psychology tells us anything, it is to be suspicious of the intuition that the other guys are the problem and we're not.
