Rationality, as it turns out, far from being the defining quality of humans, is very often used by humans only post hoc, to justify decisions we have already made based on unconscious, irrational processes. In fact, rationality may actually have evolved, not so much to help us make effective plans in the physical world, as to help us out socially, by “explaining” our decisions to our peers. This, at any rate, is one of the key lessons from Jonathan Haidt’s book The Righteous Mind: Why Good People Are Divided by Politics and Religion. At least, that is a main “take-away” for me, one that fits in well with some of the other reading I’ve been doing over the past couple of years, including Gerd Gigerenzer’s Rationality for Mortals: How People Cope with Uncertainty and the Hilary Kornblith-edited anthology Naturalizing Epistemology. That said, as a person who makes my living as an engineer, I may have more respect than psychologist Haidt for the role of rationality in helping us deal with our non-social reality.
This particular argument about rationality, Haidt documents well with experimental data (although the Gigerenzer book offers a caution, in a chapter specifically on how statistical analysis is misused in most published psychology research). Some of Haidt’s other themes are also very interesting, but raise more issues for me; at least, I have more questions about them.
Haidt’s main interest in the book is moral psychology, including how moral systems actually function in our (conscious and unconscious) minds, and how and why we evolved moral systems in the first place. (The evolution of moral systems by natural selection is counter-intuitive to a naïve, “red in tooth and claw” view of nature and “survival of the fittest”.) Haidt’s scientific work is mostly descriptive; i.e., he is trying to find out how moral psychology actually works, not how it ought to work. His main foray into the prescriptive, in this part of the book, is to advise us that we should try to understand how our minds (including our moral systems) actually work, and not shrink away from the knowledge, even if it makes us uncomfortable. This is a prescription I heartily endorse. Later, he veers into some prescriptions I think less of, as I discuss below.
A main element in his taxonomy of moral systems is that they center around a number of separate “modules”, which function unconsciously and provide the automatic moral reactions that our reason later tries to justify. He identifies six of these: Care/harm, Liberty/oppression, Fairness/cheating, Loyalty/betrayal, Authority/subversion, and Sanctity/degradation. He defines them all, and suggests reasons why it might have been beneficial for our ancestors, long ago, to evolve such centers. He and his colleagues identified these modules experimentally (by questionnaires, interviews, and laboratory experiments). When he gets around to discussing politics, he says liberals care mostly about the care/harm module, a little less about liberty/oppression and fairness/cheating, and not much at all about the other three. Conservatives, on the other hand, care equally about all six, although they interpret some of them (care/harm and liberty/oppression) somewhat differently. This makes it possible for conservatives to appeal to more people than liberals can*, since they can reach them on more levels. It can also make it very hard for one group to understand the other. For instance, in interpreting some family law situation, liberals may focus strictly on the wellbeing of the child (care/harm), which conservatives care about, too; but conservatives also care about parental rights (authority/subversion), state interference (liberty/oppression), and perhaps other factors, which liberals tend to think are irrelevant. He also says that his research has found that conservatives are better at predicting what liberals will think about a situation than the other way around, because the conservatives share the liberals’ concerns but add to them, while the liberals don’t see the conservatives’ other concerns at all.
Haidt’s assumption of six discrete modules bugs me. For instance, it is pretty well established that people tend to accept traditional authority as long as it is associated with responsibility, i.e., as long as the authority figure carries out a traditionally determined social role seen as having benefits for the community as a whole. It is not necessary for there to be any particular proportionality (let alone equality) in the distribution of benefits between the authority and the people. But if the traditional responsibilities are seen as being ignored, then people are more willing to rebel against the authority. (See, for example, Barrington Moore, Injustice: The Social Bases of Obedience and Revolt.) So, is there a seventh module, responsibility/irresponsibility? Or is this subsumed under loyalty/betrayal, or fairness/cheating? Or both? Or all three? For that matter, couldn’t loyalty/betrayal just be a special case of fairness/cheating? And are liberty/oppression and authority/subversion really cognitively independent? At the very least, there seems to be a lot of overlap between these categories.
Ultimately, answering these questions may require a detailed mapping of the moral modules onto the brain/somatic structures that implement or create them. If this were done, then (on no basis but my gut feeling) I conjecture that:
- There would be more than six modules, and
- Boundaries between modules would be fuzzy. There would be physical overlap between the structures implementing different modules, with some sub-structures involved in two, three, or many different modules.
I suspect that it is difficult to structure the type of research Haidt was doing so as to be free of question-begging. You have to make decisions, consciously or unconsciously, as to what types of traits you are going to look for, and you are more likely to find the traits you identified than to stumble across ones you never thought of (although Haidt does describe how discrepancies in early results led him to expand his taxonomy from an original five modules to the final six).
A significant amount of Haidt’s book is spent discussing group selection and its role in evolution. Many of his observations on this topic I have come across elsewhere. One that I have not is his belief that there can be, and has been, significant pro-social genetic evolution in historic times (or at least from late prehistoric, but definitely human, times). Essentially, he believes that culturally determined group behaviors (such as religious practices) can create selective pressures that weed out individuals less willing to conform to group norms. This he sees as a form of actual group selection, because groups whose members were selected to cohere more closely passed on more genes than groups whose members were not.
I have no doubt that some kind of group selection for moral, cooperative behavior did occur (see previous blog posts, linked to at the end of this one). I’m inclined to be dubious about Haidt’s very late time frame, though. He justifies it by comparison with intentional selection (by breeders) of domestic species, which can cause significant genetic change in a small number of generations. But I think selective pressures in a rich, cultural environment are much more diffuse than in a controlled breeding situation. He does give a nod to the time-frame difference, and admits that the genetic response he envisions would happen much more slowly than in domestic breeding. But if the cultural pressures are sufficiently diffuse, they will not outweigh chance, and selection will not occur at all. There are just too many ways to escape conforming in most social situations. So I think the level of late, culturally induced genetic change that Haidt envisions is unlikely.
Haidt’s ideas on group selection are intimately connected with his ideas on religion, on social organization generally, and on politics. It seemed to me that once he entered these areas, his scientist’s insistence on evidence tended to weaken. He became much more willing to assert things without offering any (or at least, much) proof.
Haidt clearly wants to believe that religion is a good thing. He is critical of the so-called New Atheists and their tendency to see religious belief as the root of all evil. I tend to be critical of the New Atheists, also, but I think the social utility of religion is much less than Haidt thinks it is. Haidt makes a strong case for the utility of religion early in human history (or prehistory) as something that helped improve the coherence of groups. I think group cohesion, partly helped by shared religion, almost certainly helped human groups to survive in early times, and to outperform (and perhaps out-breed) groups that were less cohesive. (Dennett, Dawkins, and other New Atheists use these facts in constructing a theory of meme-level selection, which Haidt thinks feeds back into genetic-level change.)
Haidt makes a much less compelling case for the value of close group bonding in modern times, in a globalized world with awesome weaponry of mass destruction. He claims that having a vast network of small, closely bonded groups (including religious ones) makes, in and of itself, for a safer and more stable overall society. He offers essentially no evidence for this claim. In the interest of convincing us, he over-emphasizes the positives and downplays the negatives. By implication, we should think of groups like U.U. Social Action Committees or the League of Women Voters, and not Daesh, the Westboro Baptist Church, or the KKK.
He discusses some research on the survival of small insular groups, which he believes supports his position. Groups like communes and religious communities are found to survive longer (as groups) if (1) they demand costly sacrifices of their members, and (2) they “sacralize” the sacrifices (i.e., refer them to God’s will). Surely, it is not obvious that this is a net good, at least for the members of the group. What actual benefits does their membership in the group provide that outweigh what they are asked to give up? Haidt doesn’t even raise that question, let alone try to answer it. He seems to take for granted that survival of the group is a sufficient value, in and of itself. It seems to me, on the other hand, that the members might well be better off if the group dissolved and they were re-absorbed into the mainstream. (Survival of the group is not a precondition for survival of the members, after all.) The study Haidt cites seems to support Dawkins’s idea of religion as a parasitic meme at least as well as, if not better than, Haidt’s idea of religion as a positive force. Another comparison that comes to mind is with drug addiction (i.e., a community of willing addicts).
Thinking of group survival as a “good” makes me wonder about morality in general: is survival, pure and simple, a sufficient justificatory basis for morality? Or do other things matter? How about knowledge and truth for their own sakes, and not just instrumentally?
These questions matter because, when it comes to religion, Haidt does not seem to be a believer. The impression I got is that he is an atheist. But religion, as a social phenomenon, appeals to him deeply, and he wants to believe it is good and should be promoted. But if religious beliefs are all wrong about matters of fact, do religion’s appeal (to some people) and its (supposed) social utility justify our supporting and encouraging it? This seems to skirt perilously close to Plato’s Noble Lie. There is something elitist (and ignoble), it seems to me, about promoting beliefs you personally think are false because of their social utility. (Tolerating them may be another matter.)
When Haidt turns to political analysis, he leaves the scientific method even further in the dust. He wants to prove that each of the different political constellations (which he identifies as “liberal”, “libertarian”, and “conservative”) has its own share of truth; therefore, we should learn to broaden our moral views, so that we can effectively listen to each other. He gives examples. For libertarianism, he gives a story entitled “Markets Are Miraculous”. And a “story” is exactly what it is: an entirely made-up scenario about an imaginary insurance scheme that proves exactly nothing.
Haidt’s approach to politics invites comparison with George Lakoff. Lakoff has also been very concerned that people improve their understanding of the latest scientific theories of how cognition functions, and apply them to political work. Like Haidt, he believes that conservatives tend to do a better job of this, here and now, than liberals do, and this concerns him. But Lakoff unabashedly sides with liberals, and his interest is clearly that liberals should learn to better apply science in support of liberal moral views (although I know a lot of people who fail to understand that this is what he is saying). Haidt, on the other hand, strikes me as a former liberal who has become more conservative with age. He wants liberals to compromise on their principles, and he offers some fairly lame arguments for why they should do so.
Grains of salt are required. But the insights into the psychological structure of rationality and morality that Haidt’s work provides make his book worth reading.
*I use the word “liberal” in this essay in the conventional, modern American way, because that is the way Haidt uses it, and not in the classical way still more in favor, I think, in Europe, which is closer to what Americans call “libertarian”.