Tuesday, November 22, 2016
One of these species of replicators is our own. And we have evolved so many purposes, goals, desires. Some time ago, long before we were human, we discovered that we could accomplish some of those goals better if we worked together – and we developed community, solidarity. But others among us love to place their personal goals above the goals of the rest – to dominate the community in their selfish greed.
I celebrate the solidarity of our human species that lets us, alone in an empty universe, work together for our common, noble goals, and overcome the powerful few who would set their selfish, private goals above the good of the community, and I celebrate the human, compassionate spirit that gives us the strength to fight, and overcome, the forces that would oppress us.
Saturday, May 14, 2016
If we say, “I have a good reason for being here, today,” and “The reason you slipped is because there is ice on the path,” we are using the word “reason” to describe two very different concepts. Both deal with cause-and-effect. My reason for being here (whatever it may have been) caused me to come, and the ice on the path caused you to slip. But one sentence involves a purpose. I had goals in mind, and in pursuit of those, I used my ability for mental analysis (another sense of the word “reason”), and made the decision to come. But no sentient being (presumably) had a goal in placing the ice on the path, let alone in causing you to slip thereon. In that case, the word “reason” refers to an explanation only, and not to a purpose.
Interestingly, the dictionary does not distinguish these two senses of “reason” well. At least, the Webster’s edition that I consulted gives as the first definition “a statement offered in explanation or justification,” which, it seems to me, specifically conflates the two concepts – a justification involves a purpose; an explanation need not.
The distinction between these two concepts is very important when considering the workings of evolution by natural selection. There are reasons that certain traits are selected (preserved), in the sense that there are explanations for their selection, but they were not selected “for a reason”. Evolution has no purpose in selecting them. Evolution is not, in fact, an agent that can conceive a purpose. There is no end goal for the process of evolution. In Aristotelean terms, evolution has efficient causes; it does not have final ones. Evolution is a mechanical process - as mechanical as the process of fusion in a star producing heat and light. Everything accomplished by natural selection has a reason why it happens (explanation); it does not happen for a reason (purpose).
It is hard to keep your head wrapped around this distinction. We are so used to conflating the two concepts. I do not think our use of the same word for both is the reason (explanation) for the conflation. I think the conceptual conflation is the reason for the multiple meanings of “reason”. For us, in our daily lives, so many things are explained, at least partially, by the reasons that people have for doing the things they do. It is natural for us to think of chains of events in terms of purposes. It is natural for us, also, to project agency onto things that do not actually have it (onto events that do not actually have a sentient agent behind them). This is what the philosopher Daniel Dennett has called our “hyperactive agent detector”. Being a Darwinian, he of course posits explanations – reasons – for how we might have evolved such a characteristic by natural selection.
Now waitaminnit. Suppose we evolved a hyperactive agent detector, as Dennett suggests, because it was safer to err on the side of imagining a non-existent agent, for instance imagining a tiger when we hear the wind in the grass, than to possibly ignore a real agent (by assuming it is the wind when it is really a tiger). The consequences (cost) of one class of error are greater than those of the other. But don’t we have a purpose in evading the tiger? Isn’t evading tigers a goal of ours? So doesn’t purpose enter into the process of natural selection? Don’t we have a reason for evolving that trait, as well as there being a reason why we did?
The answer is that, while yes, we may have a purpose in evading that tiger, no, that purpose does not influence the process of natural selection. If I think I hear a tiger, I may well decide I want to get away; I conceive a purpose – to escape the tiger. This may cause me to climb a tree. (Well, a really skinny tree. Tigers can climb trees, I think, but they’re heavier than I am.) My purpose may be intimately involved in my actually escaping the tiger. But it had nothing to do with my possessing that nagging anxiety that asked, “What is that sound? Was that a tiger?” If Dennett’s hypothesis is correct (I suspect it is), then proto-people who possessed a hyperactive agent detector were more likely to survive, and have offspring, than proto-people who did not. The fact that they may have DESIRED to survive and have offspring was not causally efficacious in the natural selection process. In fact, the desires to survive and to have offspring (or at least, to have sex) THEMSELVES evolved, by natural selection, by exactly the same purposeless but explicable process: proto-creatures that didn’t have such desires (or didn’t act as if they did, anyway) tended not to have descendants.
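Dennett’s asymmetric-cost argument can be made concrete with a little decision theory. The sketch below is my own illustration, not anything from Dennett, and all the numbers in it are hypothetical. It compares the expected cost per ambiguous rustle for a “jumpy” detector that frequently imagines tigers against a “calm” detector that rarely false-alarms:

```python
# Illustrative sketch (not from Dennett): why a "hyperactive" agent detector
# can be favored when error costs are asymmetric. All numbers are hypothetical.

def expected_cost(p_tiger, p_detect_given_tiger, p_detect_given_wind,
                  cost_eaten, cost_fleeing):
    """Expected cost per ambiguous rustle for a detector with the
    given hit rate and false-alarm rate."""
    p_wind = 1.0 - p_tiger
    # Missing a real tiger -> possibly eaten; a false alarm -> a wasted sprint.
    cost_misses = p_tiger * (1.0 - p_detect_given_tiger) * cost_eaten
    cost_false_alarms = p_wind * p_detect_given_wind * cost_fleeing
    # Correctly detecting a tiger still costs a sprint up the skinny tree.
    cost_true_alarms = p_tiger * p_detect_given_tiger * cost_fleeing
    return cost_misses + cost_false_alarms + cost_true_alarms

# A jumpy detector that "hears tigers" in half of all wind gusts...
jumpy = expected_cost(p_tiger=0.01, p_detect_given_tiger=0.99,
                      p_detect_given_wind=0.50,
                      cost_eaten=1000.0, cost_fleeing=1.0)
# ...versus a calm detector that almost never false-alarms,
# but misses half the real tigers.
calm = expected_cost(p_tiger=0.01, p_detect_given_tiger=0.50,
                     p_detect_given_wind=0.01,
                     cost_eaten=1000.0, cost_fleeing=1.0)

print(f"jumpy: {jumpy:.3f}, calm: {calm:.3f}")
```

Because one miss can be fatal while a false alarm only wastes a sprint, the jumpy detector has the lower expected cost despite being wrong far more often – and lower average cost, not correct belief, is all natural selection “cares” about.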
It is not necessarily true that purpose – the ordinary purposes of people like us – can NEVER have a role in natural selection. Darwin, after all, modeled the idea of natural selection after the kind of selection practiced by (for instance) human stock breeders, who allow only animals with certain desirable traits to reproduce. Humans are in nature, and stock breeders are an important element in the environment of their animals, just as predators are in the environments we collectively call “the wild”. Calling one form of selection “natural” and the other not is at least a little bit an artificial distinction that comes from a definition of “nature” as “everything but humans”. But we humans and our cultures also constitute a very, very important part of our OWN environment. Artificial selection can produce significant phenotypical changes in a very small number of generations – geological time scales are not required. Mightn’t humans also have evolved to conform to our own cultural expectations, not just culturally (by learned/taught behavior), but also biologically, by natural selection, i.e., by genetic change? Jonathan Haidt thinks so (and cites other researchers). To some extent, don’t theories of group selection and the evolution of morals depend on something like this? At the very least, it seems that such a hypothesis, if true, would strongly support certain group selection theories.
But the idea that humans and their purposes form an important part of the environment in which natural selection of humans takes place is very different than saying that evolution itself has a purpose, or that any specific trait arises “for a purpose”. As easy as it is to slip into that kind of language, that just ain’t the way it works. From time to time, new traits arise. The ones that work well stick around, simply because they work well. The ones that don’t work so well, go extinct. If everything seems to fit in the end, it’s because the things that fit are the ones that are still around. For now.
Monday, March 21, 2016
Saturday, February 20, 2016
On the other hand, I need to add something like "pragmatic radical", because I know that wishing don't make it so, and I do not believe in letting the best be the enemy of the good (or even of the "better than now"). Morally, it is better to do something to make things better, for at least some people, than to do nothing. Pretending to a high moral stance from "keeping your hands clean" is not actually better, morally, than being "compromised" by accepting, and working for, limited reforms. Life is short and (to quote a prominent liberal), in the long run we are all dead.
This leaves out a lot. Feminism, anti-racism... or, better, pro-POC... "embracing" LGBTQ (a phrase I learned recently from a friend, and like so much better than "ally"). I like to think of socialism as not limited to the economic realm. Socialism is democracy, taken to its ultimate conclusion. Socialism is the end of all exploitation, all preferential advantage or privilege of one group over another, for whatever reason, from whatever source.
Sunday, January 10, 2016
Rationality, as it turns out, far from being the defining quality of humans, is very often used by humans only post hoc, to justify decisions we have already made based on unconscious, irrational processes. In fact, rationality may actually have evolved, not so much to help us make effective plans in the physical world, as to help us out socially, by “explaining” our decisions to our peers. This, at any rate, is one of the key lessons from Jonathan Haidt’s book The Righteous Mind: Why Good People are Divided by Politics and Religion. At least, that is a main “take-away” for me, which fits in well with some of the other reading I’ve been doing over the past couple of years, including Gerd Gigerenzer’s Rationality for Mortals: How People Cope with Uncertainty, and the Hilary Kornblith anthology Naturalizing Epistemology. Although, as someone who makes a living as an engineer, I may have more respect than psychologist Haidt for the role of rationality in helping us deal with our non-social reality.
This particular argument about rationality, Haidt documents well with experimental data (although the Gigerenzer book offers a caution in its chapter specifically on how statistical analysis is misused in most published psychology research). Some of Haidt’s other themes are also very interesting, but raise more issues for me; at least, I have more questions about them.
Haidt’s main interest in the book is moral psychology, including how moral systems actually function in our (conscious and unconscious) minds, and how and why we evolved moral systems in the first place. (The evolution of moral systems by natural selection is counter-intuitive to a naïve, “red in tooth and claw” view of nature and “survival of the fittest”.) Haidt’s scientific work is mostly descriptive; i.e., he is trying to find out how moral psychology actually works, not how it ought to work. His main foray into the prescriptive, in this part of the book, is to advise us that we should try to understand how our minds (including our moral systems) actually work, and not shrink away from the knowledge, even if it makes us uncomfortable. This is a prescription I heartily endorse. Later, he veers into some prescriptions I think less of, as I discuss below.
A main element in his taxonomy of moral systems is that they center around a number of separate “modules”, which function unconsciously, and provide the automatic moral reactions that our reason later tries to justify. He identifies six of these: Care/harm, Liberty/oppression, Fairness/cheating, Loyalty/betrayal, Authority/subversion, and Sanctity/degradation. He defines them all, and suggests reasons why it might have been beneficial for our ancestors, long ago, to evolve such centers. He and his colleagues identified these modules experimentally (by questionnaires, interviews, and laboratory experiments). When he gets around to discussing politics, he says liberals care mostly about the care/harm module, a little less about liberty/oppression and fairness/cheating, and not much at all about the other three. Conservatives, on the other hand, care equally about all six, although they interpret some of them (care/harm and liberty/oppression) somewhat differently. This makes it possible for conservatives to appeal to more people than liberals*, since they can reach them on more levels. It can also make it very hard for one group to understand the other. For instance, in interpreting some family law situation, liberals may focus strictly on the wellbeing of the child (care/harm), which conservatives care about, too, but conservatives also care about parental rights (authority/subversion), state interference (liberty/oppression), and perhaps others; liberals tend to think these other things are irrelevant. He also says that his research has found that conservatives are better at predicting what liberals will think about a situation than the other way around, because the conservatives share the liberals’ concerns, but add to them, while the liberals don’t see the conservatives’ other concerns, at all.
Haidt’s assumption of six discrete modules bugs me. For instance, it is pretty well established that people tend to accept traditional authority as long as it is associated with responsibility, i.e., as long as the authority figure carries out a traditionally determined social role seen as having benefits for the community as a whole. It is not necessary for there to be any particular proportionality (let alone equality) in the distribution of benefits between the authority and the people. But if the traditional responsibilities are seen as being ignored, then people are more willing to rebel against the authority. (See, for example, Barrington Moore, Injustice: The Social Bases of Obedience and Revolt.) So, is there a seventh module, responsibility/irresponsibility? Or is this subsumed under loyalty/betrayal, or fairness/cheating? Or both? Or all three? For that matter, couldn’t loyalty/betrayal just be a special case of fairness/cheating? And are liberty/oppression and authority/subversion really cognitively independent? At the very best, there seems to be a lot of overlap between these categories.
Ultimately answering these questions may require a detailed mapping of the moral modules on the brain/somatic structures that implement or create them. If this were done, then (for no particular reason but my gut feeling), I conjecture that:
- There would be more than six modules, and,
- Boundaries between modules would be fuzzy. There would be physical overlap between the structures implementing different modules, with some sub-structures involved in two, three, many different modules.
I suspect that it is difficult to structure the type of research Haidt was doing so as to be free of question-begging. You have to make decisions, consciously or unconsciously, as to what types of traits you are going to look for, and you are more likely to find the traits you identified than to stumble across ones you never thought of (although Haidt does describe how discrepancies in early results led him to expand his taxonomy from an original five modules to the final six).
A significant amount of Haidt’s book is spent discussing group selection and its role in evolution. Many of his observations on this topic I have come across elsewhere. One that I have not is his belief that there can be, and has been, significant pro-social genetic evolution in historic times (or at least from late prehistoric, but definitely human, times). Essentially, he believes that culturally determined group behaviors (such as religious practices) can create selective pressures that weed out individuals less willing to conform to group norms. This he sees as a form of actual group selection, because groups whose members were selected to cohere more closely passed on more genes than groups whose members were not.
I have no doubt that some kind of group selection for moral, cooperative behavior did occur (see previous blog posts, linked to at the end of this one). I’m inclined to be dubious about Haidt’s very late time frame, though. He justifies it by comparison with intentional selection (by breeders) of domestic species, which can cause significant genetic change in a small number of generations. But I think selective pressures in a rich, cultural environment are much more diffuse than in a controlled breeding situation. He does give a nod to the time-frame difference, and admits that the genetic response he envisions would happen much more slowly than in domestic breeding. But if the cultural pressures are sufficiently diffuse, they will not outweigh chance, and selection will not occur, at all. I think there are just too many ways to escape conforming, in most social situations. I think the level of late, culturally induced genetic change that Haidt envisions is unlikely.
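My worry about diffuse pressure can be illustrated with a toy Wright-Fisher simulation – my own sketch, not anything from Haidt’s book, and every parameter in it is made up. It estimates how often a hypothetical pro-conformity gene variant, starting at 10% frequency in a small population, spreads to everyone under weak, diffuse selection versus strong, breeder-like selection:

```python
import random

def fixation_rate(n_pop, s, p0, trials=400, max_gens=2000, seed=1):
    """Fraction of trials in which the favored variant reaches 100%
    frequency under Wright-Fisher drift plus selection coefficient s."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        p = p0
        for _ in range(max_gens):
            # Selection nudges the expected frequency upward...
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
            # ...but a finite population resamples around it (drift).
            count = sum(1 for _ in range(n_pop) if rng.random() < p_sel)
            p = count / n_pop
            if p in (0.0, 1.0):  # variant lost, or fixed
                break
        if p == 1.0:
            fixed += 1
    return fixed / trials

diffuse = fixation_rate(n_pop=100, s=0.001, p0=0.1)  # weak cultural pressure
breeder = fixation_rate(n_pop=100, s=0.2, p0=0.1)    # controlled breeding
print(f"diffuse: {diffuse:.2f}, breeder-like: {breeder:.2f}")
```

With these (hypothetical) parameters, the weakly selected variant fixes only slightly more often than pure chance would dictate (chance alone gives its starting frequency, 0.1), while breeder-strength selection fixes it nearly every time. When pressure is diffuse enough, drift swamps selection – which is the intuition behind my skepticism about Haidt’s time frame.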
Haidt’s ideas on group selection are intimately connected with his ideas on religion, on social organization generally, and on politics. It seemed to me that once he entered these areas, his scientist’s insistence on evidence tended to weaken. He became much more willing to assert things without offering any (or at least much) proof.
Haidt clearly wants to believe that religion is a good thing. He is critical of the so-called New Atheists and their tendency to see religious belief as the root of all evil. I tend to be critical of the New Atheists, also, but I think the social utility of religion is much less than Haidt thinks it is. Haidt makes a strong case for the utility of religion early in human history (or prehistory) as something that helped improve the coherence of groups. I think group cohesion, partly helped by shared religion, almost certainly helped human groups to survive in early times, and to outperform (and perhaps out-breed) groups that were less cohesive. (Dennett, Dawkins, and other New Atheists use these facts in constructing a theory of meme-level selection, which Haidt thinks feeds back into genetic-level change.)
Haidt makes a much less compelling case for the value of close group bonding in modern times, in a globalized world, with awesome weaponry of mass destruction. He makes a claim that having a vast network of small, closely bonded groups (including religious ones), makes, in-and-of-itself, for a safer and more stable overall society. He offers essentially no evidence for this claim. In the interest of convincing us, he over-emphasizes the positives, and downplays the negatives. By implication, we should think of groups like U.U. Social Action Committees, or the League of Women Voters, and not Daesh, Westboro Baptist Church, or the KKK.
He discusses some research on the survival of small insular groups, which he believes supports his position. Groups like communes and religious communities are found to survive longer (as groups) if, (1) they demand costly sacrifices of their members, and (2) they “sacralize” the sacrifices (i.e., refer them to God’s will). Surely, it is not obvious that this is a net good, at least for the members of the group. What actual benefits does their membership in the group provide that outweigh what they are asked to give up? Haidt doesn’t even raise that question, let alone try to answer it. He seems to take for granted that survival of the group is a sufficient value, in-and-of itself. It seems to me, on the other hand, that the members might well be better off if the group dissolved, and they were re-absorbed back into the mainstream. (Survival of the group is not a precondition for survival of the members, after all.) The study Haidt cites seems to support Dawkins’s idea of religion as a parasitic meme at least as well, if not better, than Haidt’s idea of religion as a positive force. Another comparison that comes to mind is with drug addiction (i.e., a community of willing addicts).
Thinking of group survival as a “good” makes me think, re. morality, in general: is survival, pure and simple, a sufficient justificatory basis for morality? Or do other things matter? How about knowledge and truth for their own sakes, and not just instrumentally?
Because, when it comes to religion, Haidt does not seem to be a believer. The impression I got is that he is an atheist. But religion, as a social phenomenon, appeals to him deeply, and he wants to believe it is good, and should be promoted. But if religious beliefs are all wrong about matters of fact, do religion’s appeal (to some people) and its (supposed) social utility justify our supporting and encouraging it? This seems to skirt perilously close to Plato’s Noble Lie. There is something elitist (and ignoble), it seems to me, about promoting beliefs you personally think are false, because of their social utility. (Tolerating them may be another matter.)
When Haidt turns to political analysis, he leaves the scientific method even further in the dust. He wants to prove that each of the different political constellations (which he identifies as “liberal”, “libertarian”, and “conservative”) has its own share of truth; therefore, we should learn to broaden our moral views, so that we can effectively listen to each other. He gives examples. For libertarianism, he gives a story entitled “Markets Are Miraculous”. And a “story” is exactly what it is: an entirely fictional, made-up scenario about an imaginary insurance scheme that proves exactly nothing.
Haidt’s approach to politics invites comparison with George Lakoff. Lakoff has also been very concerned that people improve their understanding of the latest scientific theories of how cognition functions, and apply them to political work, and, like Haidt, he believes that conservatives tend to do a better job of this, here and now, than liberals do, and this concerns him. But Lakoff unabashedly sides with liberals, and his interest is clearly that liberals should learn to better apply science in support of liberal moral views (although I know a lot of people who fail to understand that this is what he is saying). Haidt, on the other hand, strikes me as a former liberal who has become more conservative with age. He wants liberals to compromise on their principles, and he offers some fairly lame arguments for why they should do so.
Grains of salt are required. But the insights into the psychological structure of rationality and morality that Haidt’s work provides make his book worth reading.
*I use the word “liberal” in this essay in the conventional, modern American way, because that is the way Haidt uses it, and not in the classical way still more in favor, I think, in Europe, which is closer to what Americans call “libertarian”.
Saturday, November 21, 2015
But there's another reason I think is even more important. Wealth, ALL wealth, ultimately derives from profits - i.e., from something like a rent for the use of your property - and not from labor. This is true even if a high income is disguised as salary. NOBODY's labor is "worth" more than 200 times the median. Ultra-high CEO salaries come from their ability to control property (their own, and that of shareholders).
And profit is essentially a privatized tax - a tax that we pay on every good and most services that we consume, but that goes strictly to the benefit of a small class of individuals. (Small in percentage population terms; rather large in absolute numbers.) Profit represents the owning class blackmailing us by threatening to withhold the means of production, which would mean that we could not, with our labor, produce anything at all. And the more that wealth becomes concentrated, the stronger the stranglehold that property has on us, and the more they are able to squeeze out.
What progressive tax rates do, in a capitalist society that refuses to simply expropriate private owners, is to reclaim part of that private tax, for the public benefit. More progressive tax rates actually represent a LOWERING of the private tax that is capitalist profit.
And that is why progressive taxes are fair.