Saturday, December 26, 2009

Altruism and human evolution

The song "Easy to Be Hard" from the 1967 musical "Hair" posed the question: "How can people be so heartless?" Ever since Darwin, though, biologists have struggled with the opposite question: "How can people be so nice?" If traits arise through intense competition between living organisms in a war of all against all, how does a tendency to altruism arise in a population? What benefit does altruist behavior have that it contributes to the survival and reproductive success of the altruist?

How we define "altruism" is crucial to the form that the explanation will take, although the underlying facts, of course, are not changed by the definition. There is no question but that a heritable, genetic trait, in order to be selected for by natural selection, must provide some differential reproductive advantage to the gene complex coding for that trait. So if we define "altruism" as describing traits that benefit ONLY others, and confer NO adaptive advantage on either the organism bearing them or the gene complex coding for them, then necessarily genes are selfish, and altruism cannot arise by natural selection. If such "altruistic" behavior is in fact not merely neutral, but places the bearer at a selective disadvantage, then it cannot long survive, even if it arises by chance.

But this is a trivial result of the nature of selection, and a particular definition of "altruistic", and leaves a lot of real-life facts unexplained. Nature is full of examples of behavior that SEEMS altruistic, in the sense that it seems to be for the benefit of others, and of immediate cost to the organism exhibiting the behavior. Simply stating that there must be some hidden benefit to the organism, or at least to some of the gene complexes which code for the nature of the organism, does not explain how each particular behavior could develop.

So, rather than sacrifice a perfectly good word to preserve a trivial result (and make do, in consequence, with cumbersome phrases like "seemingly altruistic"), I prefer to define "altruistic behavior" simply as behavior that seems to benefit primarily others, at an immediate net cost to the altruistic individual. This definition of altruistic defines "fuzzier" classes of behaviors and organisms than the alternate definition, but classes that have the decided advantage that the constituent behaviors and organisms actually exist.

Elliott Sober’s book The Nature of Selection makes a strong connection between the evolution of altruistic behavior and the process of "group selection". Group selection is a controversial concept among biologists, part of what Sober refers to as the "unit of selection" debate. The question is: on what entities or "units" does natural selection work? The oldest tradition, starting with Darwin, focuses on the organism (or phenotype). Individual organisms vary in traits, and have different rates of survival and reproduction deriving from those traits, causing certain types of organisms, with certain combinations of traits, to be "selected" as better adapted to their environment, and others to die out.

Other candidates for the unit of selection exist. Many biologists have strongly argued for the individual gene as the only proper unit to be considered. Others extend this to complexes of genes. More recently, some theorists have argued for selection at the level of the group or "deme", or even at the species level. I won’t go into detail on all this. Sober argues strongly for a multi-level view; i.e., that any of these may be, from time to time, the level at which selection operates, and the particulars must be carefully examined in each case. In particular, he argues that the "genes only" school arises as a trivial result of the fact that the mathematics in each case can be reduced to calculations based on the averaged relative fitness of each gene. But to really understand the process, he believes, you have to look at causality – at what level are the actual natural forces shaping evolution being applied?

The way Sober defines "group selection", there must be a trait or property definable only at the group level which confers a selective advantage on individual organisms. There is actually some flexibility in whether the "benchmark" of selection is the individual organism, a particular gene or gene complex, or the group, but focusing on the organism will do for now. The crucial feature is that the property that confers the selective advantage must be definable only at the group level. For instance, predators may avoid large individuals, but show even more aversion to groups of large animals; in that case, being a member of a group of organisms with a large average size will confer a selective advantage on each organism in the group, irrespective of its own, individual size. There may be other selective effects in operation. For example, if smaller size happens to confer some selective advantage on organisms competing with other organisms within the group, then there will be two countervailing evolutionary forces at work, and the result is indeterminate (in the mathematical sense that one must look to other factors for a resolution), but this does not alter the fact that there is a group selective force at work.

Altruism seems, at least in many cases, to be a situation where group selection and individual selection act at cross purposes. Since altruists help other group members, there is a significant selective advantage to being a member of a group containing a large number of altruists. But this advantage accrues to selfish individuals, as well as to altruistic ones. Since selfish individuals gain the selective advantages of group membership without the personal costs, they may have a higher "fitness" value for intra-group competition. Which one wins out – whether selfish organisms or altruistic ones will predominate or "go to fixation" (one trait driving the other into extinction) – will depend on the relative strength of the two forces. The existence of individual (organism-level) selection as a countervailing force leads to an inherent instability in the fitness of altruism based on group selection, absent other forces.
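
To make the tug-of-war concrete, here is a deliberately crude toy simulation (mine, not Sober’s; every parameter is invented). Each individual’s fitness is boosted by every altruist in its group, altruists alone pay a cost, and offspring are dealt into fresh groups at random each generation. Run this way – with random reassortment destroying the between-group variation that group selection feeds on – the selfish type typically drives altruism down, which is one way of seeing the instability just described.

```python
# Deliberately crude toy of two-level selection; every parameter is
# invented, and this is a sketch of the intuition, not a real
# population-genetic model.
import random

GROUPS, GROUP_SIZE, GENERATIONS = 100, 20, 50
BENEFIT = 0.08   # fitness boost each member gets per altruist in its group
COST = 0.50      # fitness cost paid by each altruist

def run(initial_freq=0.5):
    pop = [[random.random() < initial_freq for _ in range(GROUP_SIZE)]
           for _ in range(GROUPS)]
    for _ in range(GENERATIONS):
        traits, fitnesses = [], []
        for group in pop:
            n_altruists = sum(group)
            for is_altruist in group:
                # the group-level benefit accrues to everyone, selfish included;
                # only altruists pay the individual cost
                f = 1.0 + BENEFIT * n_altruists - (COST if is_altruist else 0.0)
                traits.append(is_altruist)
                fitnesses.append(max(f, 0.0))
        # reproduce in proportion to fitness, then reshuffle into new groups
        offspring = random.choices(traits, weights=fitnesses, k=GROUPS * GROUP_SIZE)
        pop = [offspring[i:i + GROUP_SIZE]
               for i in range(0, GROUPS * GROUP_SIZE, GROUP_SIZE)]
    return sum(map(sum, pop)) / (GROUPS * GROUP_SIZE)

print("final altruist frequency:", run())
```

Any mechanism that keeps altruists clumped together – kin groups, assortative grouping, or the policing discussed further on – shifts that balance back toward altruism.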

I am mostly concerned with the evolution of altruistic behavior within primate species, and in particular within our own. I think group selection, as Sober defines it, is a large factor in this, but I think other factors can be identified, reinforcing what otherwise might seem to be an inadequately strong effect. One such factor is kin selection. Kin selection is a more powerful force than group selection, because an individual’s genes benefit more directly from the individual’s actions. This effect is strongest when I act for the benefit of my (biological) offspring. Even if I sacrifice my life for my children, this may well preserve my genotype much more effectively than had I failed to take the risk. The force of kin selection remains strong when I act for the benefit of siblings – who share, on average, half of my genetic material – and diminishes through first cousins, second cousins, etc. I suspect that altruistic behavior may have originated, very early in the evolution of primate lineages, as a generalization of kin-selective behavior. Once it evolved, the group-selection advantages – the increase in fitness experienced by each individual member of a cooperative group – would have exercised at least a weak selective effect.
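
The standard formalization of this kin-selection intuition – not spelled out above, but due to W. D. Hamilton – is "Hamilton’s rule": an altruistic act is favored by selection when

r ⋅ b > c

where c is the fitness cost to the actor, b the fitness benefit to the recipient, and r the coefficient of relatedness: 1/2 for offspring and full siblings, 1/8 for first cousins, 1/32 for second cousins. The shrinking r captures exactly the "decreasing force" just described.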

I think sexual selection (mate choice – selection by females of preferred traits in sexual partners) may also have played a part in the evolution of certain types of altruistic behavior, particularly in hominid lineages. Humans are unique in that our infants require a great deal of care – more care, and for a longer period, than other primates, possibly more than any other animal. Stephen Jay Gould has connected this to the neotenous nature of our species – we retain as adults features that are juvenile in other apes; for instance, we continue to learn at a higher rate through much of our lives. But we are also born at a less developed stage. This may be in part because of our large brain size – a more fully developed brain and skull would be too large to pass through the birth canal.

For whatever reason, the result has been that human infants require much care, over a long period. This has led to other evolutionary changes; for example, it very likely led to the "always on" nature of human female sexuality. Females of most other primate species, unlike human women, are interested in sex, and sexually interesting to males, only during the period in which they are fertile. By making sexuality a constant, hominid females could attract males to serve as ongoing helpmates – if not actually helping much with child rearing tasks (then or now), at least supplementing the female’s efforts in other areas, such as providing food and protection while the female was engaged with the child. This would have led to a tendency on the part of females to select for at least certain types of altruistic behavior, those that could fall under the rubric of being a "good provider". Constant sexuality, and the resulting pair-bonding, also led to another revolution – awareness of paternity – thereby extending the possible scope of kin selection.

Another factor that I think had a large effect on the evolution of altruistic behavior – not only in human lineages but also in other primates, at least those most closely related to us, such as gorillas and chimpanzees – is something we might call police action. Part of primate altruism involves cooperation and sharing – but it is not strictly necessary that these benefits be equally distributed between altruists and selfish individuals. Groups of individuals can choose NOT to share with individuals they don’t feel to be deserving. Groups can also band together to limit the power of otherwise dominant individuals who are perceived as abusing their power, and in extreme cases can even drive a selfish individual from the overall group. The evolution of policing behaviors would strongly reinforce the group-selective benefits of altruism, by lessening the selective advantage enjoyed by selfish individuals within the group, and the combination of traits would thus tend to be much more stable than altruism alone.

I’m no population geneticist, and I’m not going to be able to put together mathematical models to demonstrate all this, but it seems to me that these factors – kin selection, group selection, sexual selection and the evolution of cooperating behavior complexes (policing) – together make up a set of forces powerful enough to ensure the evolution by natural selection of human altruism. The details I’ve laid out may not be quite right, but something like this history must have occurred – because, indisputably, human altruism exists.

The lyric from "Hair" teaches us one important lesson: how natural all this seems to us. Because (saving a few who have imbibed too much Libertarian philosophy), we all tend to respond in the way the lyric suggests – we are surprised and startled by "heartlessness". By and large, in our day-to-day activities, we humans are more likely to go out of our way to be nice to each other than to be mean. Granted, exemplary occurrences of altruism are remarkable, and inspire awe and admiration, but it is meanness that shocks us, nags at our consciousness, and leaves us with the conviction that something must be done.

Sunday, December 20, 2009

Sober's causation

I have almost finished reading Elliott Sober's The Nature of Selection. It is a complex book, which inspired many marginal notes and a number of journal entries, and which I am sure I will need to come back to more than once to fully appreciate. With some temerity, perhaps, I have decided to address a couple of issues related to this book in my next couple of posts on this blog. This week, I am going to reflect on an idea about causality that Sober puts forth. Next week, hopefully, I will revisit the evolution of altruism, which I have discussed before (9/28/09).

Sober's causality claim relates specifically to what he calls "population level" causality, as distinguished from "individual level" causality. An example he uses to illustrate this difference: suppose a golfer is trying to sink a putt, and after he hits the ball, a squirrel runs by and kicks it. Improbably, the ball deflects off some obstruction, but sinks in the hole anyway. From an individual level, we would wish to say that the squirrel's kick caused the ball to sink in the hole, because it started a chain of events that resulted in the ball sinking. But from a population level, we would not wish to say that "squirrel kicks sink balls," because we are convinced that, usually, this would not happen.

On the population level, Sober first points out, non-controversially, that causality is not implied by correlation. My own favorite example illustrating this truism is the theory that fire fighters cause damage at fires. It is observed that damage at fires is positively correlated with the number of fire fighters that arrive at the scene. If correlation is taken as proof of causation, the conclusion is that fire fighters cause damage. In fact, of course, the number of fire fighters and the amount of damage are correlated because they have a common background cause - the intensity of the fire.

Sober believes that by strengthening the criteria, it is possible to derive a probability-based definition of population-level causation. The rule he argues for is this: an event x is a (positive) causal factor of an event y if the probability of y given x is greater than or equal to the probability of y given (not-x) under all possible background conditions, and the inequality is a strict inequality in at least one condition. In other words (or, rather, in symbols):

x CF y <=> ∀z (P(y|x ⋅ z) ≥ P(y|!x ⋅ z)) ⋅ ∃z (P(y|x ⋅ z) > P(y|!x ⋅ z))

Here I am defining the relation "CF" given by "x is a causal factor of y". I will be using the dot operator both for the logical "and" between propositions and the probabilistic conjunction of events, and also using ! for a logical "not" and for negating an event (the event !x is the event that x does not occur). This should be contextually unambiguous. Hopefully your browser will display the symbols properly - if not, you may need to change your character set to "UTF-8". I've tested it with both Internet Explorer 7 and Firefox 3.5.6. (Firefox, annoyingly, sticks extra space above and below all equations, leading to very ugly page renderings.)

The reason for the universal quantifier on the first part of the above expression is interesting. Sober argues that it is not enough for a cause to increase the likelihood of an event in most circumstances, but decrease it in others (even in a minority of cases). If you allow any negative cases, he argues, the causality claim reduces to P(y|x) > P(y|!x), which simply represents correlation. So to have a definition of causality stronger than mere correlation, the first event must raise the probability of the second event, or be neutral, in all circumstances, and must positively raise it (strict inequality) in at least one.
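
For concreteness, here is a small computational sketch of the test over a toy finite probability space (the joint distribution is invented; only the shape of the check matters):

```python
# A finite check of Sober's "causal factor" condition:
#   x CF y  iff  P(y | x, z) >= P(y | !x, z) for EVERY background z,
#   with strict inequality for at least one z.
# The joint distribution below is invented purely to exercise the test.

# P(x, z, y) over binary events; probabilities sum to 1
joint = {
    (1, 0, 1): 0.10, (1, 0, 0): 0.10,
    (1, 1, 1): 0.20, (1, 1, 0): 0.05,
    (0, 0, 1): 0.10, (0, 0, 0): 0.15,
    (0, 1, 1): 0.10, (0, 1, 0): 0.20,
}

def p_y_given(x, z):
    """P(y=1 | x, z), or None if the conditioning event has probability zero
    (the undefined conditionals Sober worries about)."""
    pxz = sum(p for (xx, zz, _), p in joint.items() if (xx, zz) == (x, z))
    if pxz == 0:
        return None
    return sum(p for (xx, zz, yy), p in joint.items()
               if (xx, zz, yy) == (x, z, 1)) / pxz

def is_causal_factor():
    weak_in_all, strict_in_one = True, False
    for z in (0, 1):                      # every background condition
        with_x, without_x = p_y_given(1, z), p_y_given(0, z)
        if with_x is None or without_x is None:
            continue                      # a fuller treatment must exclude such z
        if with_x < without_x:
            weak_in_all = False           # x lowers P(y) here: disqualified
        if with_x > without_x:
            strict_in_one = True
    return weak_in_all and strict_in_one

print("x CF y:", is_causal_factor())      # True for this particular toy distribution
```

In this particular distribution, x raises the probability of y under both backgrounds, so the test passes; lower one of the "with x" probabilities below its "without x" counterpart and the whole claim fails, exactly as the universal quantifier demands.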

Sober raises some issues with the above formulation regarding the causal independence of the background conditions from the factor being examined. Specifically, he requires that the "background events" (z) not be "causally relevant" (either positively or negatively) to the proposed cause being investigated (x). If they are, this leads to undefined conditional probabilities of the form P(y|x ⋅ !x). Conceptually, this represents a form of double counting. Recasting this requirement in quantificational form gives something like:

x CF y <=> ∀z (z ∈ B => P(y|x ⋅ z) ≥ P(y|!x ⋅ z)) ⋅ ∃z (z ∈ B ⋅ (P(y|x ⋅ z) > P(y|!x ⋅ z)))

B = {z: !(x CF z) ⋅ !(z CF x) ⋅ !(x CF !z) ⋅ !(z CF !x) ⋅ (z ≠ x) ⋅ (z ≠ y)}

The last two inequalities are necessary to avoid undefined conditional probabilities in P(y|!x ⋅ x), and because the strict inequality P(y|x ⋅ y) > P(y|!x ⋅ y) is always false.

The set notation above is kind of nasty, because it carries the "free variables" x and y outside of the expression. But we can eliminate this notation by expanding the independence criterion in place (although it gets a little unwieldy):

x CF y <=> ∀z ((!(x CF z) ⋅ !(z CF x) ⋅ !(x CF !z) ⋅ !(z CF !x) ⋅ (z ≠ x) ⋅ (z ≠ y)) => P(y|x ⋅ z) ≥ P(y|!x ⋅ z))
⋅ ∃z ((!(x CF z) ⋅ !(z CF x) ⋅ !(x CF !z) ⋅ !(z CF !x) ⋅ (z ≠ x) ⋅ (z ≠ y)) ⋅ (P(y|x ⋅ z) > P(y|!x ⋅ z)))

Now that certainly looks circular. I hesitate only because I'm not 100% certain that it is impossible to iteratively expand the "CF" terms, at least in a finite universe. I took a stab at it in a universe containing only 4 events {x, y, B1, B2}, but the problem quickly exceeded my limited powers of symbolic manipulation. But, anyway, I think the definition is circular.

I don't know why I even bother to fret over it, since Sober himself admits that it is a circular definition – but he argues that it is a useful definition, anyway. He references a 1979 Noûs article by N. Cartwright, which supposedly goes into this in more depth. It would be interesting to read that, but I have no easy way of tracking it down.

I am not going to dispute that a definition can be conceptually useful, even if circular, outside certain strictly formal contexts. But I think we need to ask in each case where the circularity comes from, and why it is necessary and/or useful. In this case, I have a suspicion that it is because we have an underlying, intuitive definition of causation that has nothing to do with the definition that is being attempted here. This is also reflected in my sense that this idea is only useful if we "prune" it somehow, as suggested by my reference to a "finite universe" above. For another example of pruning, I think we are only interested in background conditions that have some causal effect themselves – totally neutral conditions are not interesting. In other words:

B = {z: !(x CF z) ⋅ !(z CF x) ⋅ !(x CF !z) ⋅ !(z CF !x) ⋅ (z ≠ x) ⋅ (z ≠ y) ⋅ (z CF y)}

But how do we prune the universe of possible events, other than by applying some other, a priori, theory of causation? And in that case, how is Sober's causation test any different from just using correlation as an empirical test of the a priori theory?

I'd be the first to admit that my reasoning above is a little mushy. But a specific example, I think, shows that Sober's definition doesn't quite jibe with our intuitive ideas about causation, and may not ultimately be satisfactory as a definition of population-level causation.

Imagine a rectangular pool table, with the long axis oriented north-south. From time to time, billiard balls are introduced approximately on the 1/3 line (the imaginary line dividing the southern 1/3 of the table from the northern 2/3). Some of these balls will be struck with a cue. The horizontal angle with which the cue strikes the ball is normally distributed such that 90% of the variation is within ±70 degrees of the mean, which is to the north. There is friction in the table, and spin (variation in the incident angle of the cue with respect to the radial angles from the center of the balls to the point of impact), and the impulse imparted by the cue is finite, so that a ball may strike a side wall, or other obstruction, and come to rest before hitting the north wall. As balls accumulate, they may strike, or be struck by, other balls. Impacts are approximately elastic (with frictional/damping losses). Additionally, a number of bumpers are introduced between the 1/3 line and the north wall. The precise position of the bumpers is varied, from time to time. Usually, when a ball strikes a bumper, it will be a glancing blow, and the ball will continue in a generally northerly direction; however, occasionally the impact will be square enough that the ball will rebound to the south. This rebound may be sufficient to carry the ball south of the 1/3 line (e.g., if the obstruction is close to the line).

Finally, the entire table will be lifted and tilted, from time to time (but infrequently), either to north or to south, with the conditional probability of a southward tilt, given that a tilt occurs, equal to 50%. The tilt is of finite (temporal) duration - i.e., there is a probability greater than zero but less than 1 that a ball on the table will reach the south wall during a south tilt. Note that any ball which has ended up south of the 1/3 line due to a rebound has a greater probability of touching the south wall during a southward tilt than it did when it was first introduced into the game.

When a ball strikes either the north or south wall, it is removed from the game. Its probability of striking the other wall, at that point, is zero.

It is hard to say that, in this game, the impact of the cue is not a "causal factor" in increasing the percentage of the ball population that touches the north wall, even though there are some members of "B" (combinations of bumper location, ball position, cue angle, other factors) under which the impact will, in fact, REDUCE the probability of reaching the north wall. But by Sober's definition of causality we would need to make exactly that claim.
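
To see the conflict numerically, here is a stripped-down abstraction of the table (three discrete background conditions stand in for the continuum of bumper/ball/tilt configurations, and every probability is invented): the strike raises the chance of reaching the north wall on average, yet lowers it in the "square-on bumper" background, so Sober's condition fails.

```python
# Stripped-down abstraction of the pool-table thought experiment.
# Three backgrounds stand in for the continuum; all probabilities invented.

# P(ball eventually reaches the north wall | cue strike?, background)
p_north = {
    # background:       (with strike, without strike -- only tilts move the ball)
    "clear path":       (0.90, 0.30),
    "glancing bumper":  (0.75, 0.30),
    "square-on bumper": (0.15, 0.30),  # square rebound carries it south of the 1/3 line
}
bg_prob = {"clear path": 0.6, "glancing bumper": 0.3, "square-on bumper": 0.1}

# Averaged over backgrounds, the strike plainly "helps":
avg_strike = sum(bg_prob[b] * p_north[b][0] for b in p_north)     # = 0.78
avg_no_strike = sum(bg_prob[b] * p_north[b][1] for b in p_north)  # = 0.30
print(f"P(north | strike) = {avg_strike:.2f}, P(north | no strike) = {avg_no_strike:.2f}")

# Sober's condition demands no background in which the strike LOWERS the probability:
is_causal_factor = (all(s >= n for s, n in p_north.values())
                    and any(s > n for s, n in p_north.values()))
print("causal factor by Sober's definition:", is_causal_factor)   # False
```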

I find myself inclined to throw out Sober's definition, or rather, to view it only as a sort of statistical test of some underlying idea of causality. I'm inclined, further, to view population-level causation as just an aggregation of individual causation-events (including those that have actually occurred, plus hypotheticals). "Squirrel kicks cause balls to sink sometimes, but most of the time they don't." So squirrel kicks are not considered a "cause" of successful putts, at the population level.

I don't think this has a negative effect on Sober's substantive arguments about the nature of selection. His arguments about "units of selection", for example, depend on the distinction between "selection of" some kind of entity and "selection for" some specific quality or trait. The question is, at what level does the cause of the selection operate? One (admittedly artificial) example he uses for illustration is to postulate several groups of otherwise similar organisms which are homogeneous within each group with respect to some quality - say tallness - but vary between groups. Suppose some predator differentially picks off the shorter organisms. Is this an example of group selection (the predator is selecting organisms from groups of short organisms), or individual selection (the predator selects shorter organisms)? The question cannot be resolved strictly by looking at results, because in either case there is selection of the same organisms. One needs to look at the cause - what trait is being selected for? Does the predator simply favor shorter animals? Or does it avoid tall groups of animals? A test of the causal assumptions would be to examine what would happen if a shorter organism happened to be found in a taller group. Would it be subject to the same level of predation as if it were in a group of short organisms? In that case, the individual selection model would be supported. Or would it have the same security as its taller group members? In that case, a group selection mechanism is indicated. Of course, this test might be empirically impossible, if this were a real-life example, but it illustrates the role that a concept of causality plays in determining the unit of selection. I see no way, though, in which this argument depends on Sober's specific formulation of his law for population-level causality.


P.S. I am not 100% certain how I feel about the explanatory necessity of the concept of "causation". For instance, if I say "an object subject to a given force F will experience an acceleration proportional to its mass", does it add anything useful to the explanation to say "the force causes the object to accelerate"? The idea of cause is important to Sober, and he bases a lot of his "units of selection" arguments on the concept. I have no dogmatic objection to this, but I can't help but wonder if the concept of causation isn't somehow reducible.

Saturday, December 5, 2009

Episteme

I have two fundamental epistemological premises: that the evidence of my experience is the best available (really only available) data I have for learning about the world, and that most people who study, think, speak and write about the world are not intentionally lying. These seem to be pragmatically a minimal set. I don’t see how one can practically set forth on the project of learning without them.

Note that the use of the words “most” and “intentionally” in the second premise imply two corollaries: some people are lying, and some people may be unintentionally stating mistruths. (In fact, I might argue that we are all unintentionally stating mistruths to a greater or lesser extent, but that would be more of a theorem than a premise.) Also, saying experience is the “best available” data doesn’t imply that it yields infallible insight.

The two premises do not form a complete (i.e., sufficient) set. All they really say is that I can trust what I experience, and what people tell me about what they experienced (including second or third hand, etc. reports) – but with a grain or two of salt. They don’t say anything about how to come up with that grain of salt, or to know how many grains to apply. They don’t, in other words, tell me how to distinguish the veracity of conclusions I draw from these sources, or how to distinguish between competing theories. They don’t specify any rules of inference, at all.

I’m afraid all I can say about making distinctions is, “It’s ad hoc.” I am no Descartes, to offer a single unified answer to the question of how to distinguish true ideas from false ones. Certainly, I do not believe that because I can hold some idea “clearly and distinctly” that it must be true (although it might suggest truthfulness prima facie). Instead, it’s more a matter of how well an idea “fits in” with the body of ideas I have constructed, over time, from the same evidence. “Consistency”, in a word. But how do I decide if an idea is consistent? Certainly not by the law of the excluded middle. I am quite convinced that it is possible for a thing to be both A and not A. The clearest examples come from human emotions: do I want to spend a month’s vacation in Venice this year, even though the press of work before and after will be terrible, it will cost a lot of money, my Italian is rusty, and I will have to find a house-sitter and/or worry about my pets and everything else in my house? I do, but I don’t. Fuzzy logic may offer better (if inherently less certain) models. But I am convinced that real antinomies can be supported, as matters of fact (at least as humans perceive fact), in the real world as well. Or at least, I’m not convinced that they can’t.

Ad hoc. I know it when I see it. Maybe. More or less. (I do, but I don’t.) Kind of like Descartes, perhaps, except I substitute “vague and fuzzy” for “clear and distinct”?

This could be depressing, if, like many philosophers past, I desired the nature of my mind (or soul) to approach some ideal of perfection – to make me like a god. But I don’t believe in gods. Rather than being depressed at failing to approach a fictional divinity, I prefer to celebrate the humanness of it all. Because this messy, ad hoc, but often very effective process of distinction is the stuff of life, after all, and quintessentially human, if only because humans, by and large, do it exceptionally well. Not that we do it infallibly – there are a lot of people in the world who are dead certain of things about which I am certain they are dead wrong. But by and large, in the billions of tiny, everyday distinctions and decisions we make over the course of our lives, we do mostly pretty well.

We do this, of course, because we’ve been programmed that way by natural selection. Our brains have evolved to do a job, and they do it rather well (just as flies fly very well, and frogs do an excellent job of catching them). We have certain decision-making processes built into our equipment. By studying our thinking in a natural-scientific sort of way, it is possible to get clues as to what they are. Philosophers who have tried to set rules of thought start, I think, with some biological rule and then codify it – so clarity and distinctness counts, biologically, as evidence, and so, on some level, does the excluded middle. But we can’t stop there. We move on to fuzzy logic, paradigmatic categories... and who knows how far beyond?

My guess is that, as in most things, the brain works by having a bunch of rules, without any necessary regard as to whether they are consistent in any a priori theoretical sense. Different rules are stimulated by a particular experience, others suppressed, memory of past experience and feedback loops are brought into play, until the system “settles” in some state (“settles” is a relative term for a system that is constantly in motion), and we feel that this “makes sense” or doesn’t. This is what I mean by “consistent” with the rest of my body of knowledge. It is, in fact, the biological basis of, in a sense the definition of, “consistency”. The rules exist because, at some time in the past, they have been found helpful in negotiating the world – they have been empirically proven. They may be “hard coded” rules, proven in the dim historical past of our heritage, but, again like most things in the brain, the “hard coded” rules can be modified, and new rules created, by our individual experience. And such learned rules may be passed on to subsequent generations via the “Lamarckian” evolutionary process represented by our culture and its systems of education.

Thinking in this natural historical way about distinction and rules of inference, etc., may not “prove” validity, in the sense that philosophers have traditionally sought such proofs. But it may give pretty damn’ good evidence of empirical functionality. And, I would argue that this empirical, matter-of-fact kind of “proof” is most suitable to our real-life existence as human beings in a material world, even if it fails for some fictional existence as souls aspiring to a divine one. If philosophy is the pursuit of the “good” and if good must be good for something, then this is the kind of knowledge and truth that is “good for humans”.

Sunday, November 22, 2009

Laplace’s demon

Reading the discussion of “Deterministic and Stochastic Processes”, in Elliott Sober’s book, The Nature of Selection, has me musing on Laplace’s demon. The brilliant mathematician Pierre-Simon Laplace (1749–1827) was convinced (despite his own pioneering work in probability) that the world was at root deterministic. His famous expression of this, as cited (in translation) in Sober, is:

“Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it – an intelligence sufficiently vast to submit these data to analysis – it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain, and the future, and the past, would be present to its eyes.”

This hypothetical vast intelligence has come to be known as “Laplace’s demon”.

Is Laplace’s demon, i.e., some being which could know everything there is to know about the universe, even theoretically possible? (In a philosophical sort of definition of “theoretically”.) To address that question, we have to address at least a little of the question “What is knowledge?” That’s rather a big question for an amateur philosopher, but I’ll see what I can do.

First of all, whatever we mean by “knowledge”, it seems clear that it is not synonymous with “representation”. If we were to imagine an infinitely “true” mirror that perfectly reflected the incident light, or a computer disk copier that made perfect copies of one disk onto another, we would not say that the mirror, or the second disk had “knowledge”. What we call “knowledge” involves, I believe, a process of abstraction and interpretation. Abstraction involves making a representation of part of the world – the part we think relevant for our analysis. Interpretation is adding meaning. (I will not, at the moment, try to give a meaning for “meaning” – it is just whatever we add to a representation in order to possess knowledge. For that matter, I won’t discuss the nature of “representation”, either.) If I know a fox is in my chicken coop, I do not know exactly how many kilograms the fox weighs, or exactly where each chicken is relative to the position of the fox, but I know if I don’t get out there, fast, I am going to lose some chickens. Knowledge therefore involves both subtraction and addition. We subtract data that we believe irrelevant to our analysis (abstraction), and we add meaning by a process of interpretation.

So what is “exact” knowledge – that is knowledge that could “comprehend all the forces by which nature is animated and the respective situation of the beings who compose it”? It would seem to require a complete representation, leaving nothing out – i.e., perfect representation plus meaning, instead of abstraction plus meaning. It is therefore a purely additive process. But, as I discussed in “Creating the world” (11/5/09), a representation must be represented somewhere. Whatever mind or computing device is doing the knowing must have at least as many data storage locations as there are data to be represented – and in fact, must have more, since exact knowledge is additive. But if exact knowledge is to leave nothing out – if it is to incorporate every possible thing that could have any influence whatsoever, on the thing known, then it seems the knower must also know itself, and this leads to an infinite regress.
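
A back-of-envelope way to put the regress (my own formalization, not Sober’s or Laplace’s): let U be the number of independent facts constituting the universe, and s the number of storage locations the knower devotes to its representation. Exact knowledge, being additive, requires

s > U

while the knower and its storage, being themselves part of the universe, require

U ≥ s

and the two together yield s > U ≥ s – a contradiction, which is the regress in one line.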

Maybe we could try a different formulation of “exact” knowledge. Thinking of the way “infinity” is usually represented in mathematics, we might try to define exact knowledge by some “limit” formulation, saying that, however much data we have already represented about the thing to be known, we can always if necessary represent more, so that, without ever claiming to have represented everything, we can always represent “as much as we please”. This theory seems to require that the thing to be known is in some sense “small” with respect to the knower, but it stops short of requiring infinite regress. But is it good enough for Laplace’s demon? It seems that with this definition of “exact knowledge” all we are saying is that we can make the probability that we have missed some important piece of information arbitrarily small. There always remains some non-zero probability that some important fact we haven’t considered can come crashing in and invalidate our model. It seems that Laplace’s demon is qualitatively in the same relation to determinism vs. uncertainty as the rest of us – it just has a really big brain, so it can know a lot more.
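
In quasi-mathematical dress (again my gloss, not Laplace’s), the “limit” formulation amounts to:

∀ε > 0 ∃R: P(some relevant fact is missing from representation R) < ε

but for no fixed R does that probability reach zero; the uncertainty can be made small, never eliminated.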

Another approach is to assume two separate universes – this approach might appeal to someone who still clings to some form of Cartesian dualism. The knower exists in a universe that is not part of the universe in which the known resides. But in order for this to escape the problem of regress, it must be impossible for the knowing universe to affect the known – the knower must be a pure observer, only – otherwise, the knower must still know itself in order to know all possible influences. We must also posit some form of one-way communication in which information can pass from our universe into the other without even the information-bearing entities themselves being in any way touched or affected.

It seems that even if such a knower-in-a-separate-universe were to exist, this could be of absolutely no interest to beings in our universe. This separate-universe theory is of the same sort as the theory of pure, philosophical solipsism – the theory that only I exist, and everything else that seems to exist is merely a phantasm in my own mind. Each theory is completely untestable, as is inescapably implied by its own hypotheses. Beyond stating such a theory, and noting its inherent untestability, not much of interest can be said.

The above questions seem to render the thought experiment of Laplace’s demon useless as an argument for the determinacy of the universe. Does this imply that the universe is not determinate (i.e. that it is inherently stochastic)? Or could it be that it is determinate, but that this determinacy is unknowable? Personally, I find it hard to render the concept of “unknowably determinate” coherent, but perhaps some philosopher cleverer than I can do so.

Note that I’m not even discussing the possible implications of quantum mechanics. Quantum mechanics holds, of course, that there is inherent uncertainty at the level of the most fundamental constituents of the universe. (Quantum mechanics, by the way, although cast in the most abstruse mathematics – math well beyond my feeble capabilities – is at root an empirically based theory, developed not from abstract philosophical considerations, but in an attempt to explain some otherwise extremely intractable experimental data.) Quantum mechanics is sometimes – although not necessarily – held to imply that some “built-in” level of uncertainty exists at the macroscopic level, also.

Note that the above discussion is not about human limitations, or whether “exact knowledge” is possible for a human mind. The infinite regress problem does not say “no human brain can hold all this information”; it asks “how could the universe contain complete knowledge of itself?” Similarly, the “knowledge as a limit” idea is not about the capabilities of human intelligence – in fact, I would argue that no human mind could even come close to getting “as close as we please” (in this quasi-mathematical sense) to exact knowledge about any real-world problem. Rather, this is an argument about “theoretical” possibility, as I say above. I admit, I’m not really sure exactly what such a “theoretical” possibility means, except that if something is not “theoretically” possible, then it darn tootin’ is not a practical possibility, either.

Postscript: After writing this essay, I happened to look up the Wikipedia article on Laplace’s demon (http://en.wikipedia.org/wiki/Laplace's_demon). Some of the objections I make in this essay were covered therein (if somewhat more tersely), and physical arguments, including quantum mechanical, were gone into more deeply. The Wikipedia article did not mention the “knowledge as a limit” idea, nor the fact that if the demon were in an alternate universe, some form of one-way (and only one-way) communication between universes would be necessary.

Thursday, November 5, 2009

Creating the world

Could there be an algorithm for the creation of the world? An algorithm in the sense of a sufficiently complete description of (dynamical) initial conditions (matter, energy, various derivatives thereof) that it would accurately predict the actual course of development of the whole universe? Where would such an algorithm reside? It would seem that it could not reside within those original conditions. A complete description of the state of anything seems to require at least as many data points as there are objects in (or attributes of, or whatever) the thing to be described. It therefore cannot be contained within the thing described without generating an infinite regress (or an infinite expansion, more like).

Or can a part completely describe the whole? Can some rule-based description sufficiently describe something, even something of which it is a part, to allow perfect prediction? Doesn’t the absence of a complete description, in the sense above – a complete catalog of initial conditions – imply some necessary uncertainty as to outcome?

If the algorithm did not exist in the initial conditions, could it exist in some later evolution? Could the world evolve in such a way that it could eventually contain a complete description of the way it was at some former time? Doesn’t this imply that the future world has more stuff in it than the former one did? Doesn’t this defy conservation of matter/energy? But doesn’t every instant of the world contain stuff that the former didn’t? Because at every instant matter and energy are arranged differently than they were before. Isn’t this structure “stuff” in some sense? An object? A thing? A collection of things? (Even, potentially, an unlimited collection of things, in the sense that some observer might interpret the same structure in different ways, for the purpose of different analyses.) The law of conservation of matter and energy says only that there can be no net increase or decrease in the total quantity of matter/energy – it says nothing about “stuff” or “things”, per se. New things, in the sense above, are created and destroyed by (rule-conforming) changes in the state of the world’s matter and energy all the time. Can these changes in state create an expansion of the total amount of “stuff” in a way that could include a complete description of a former state without requiring an infinite expansion?

The answer to my original question may be “no”. I rather suspect that the structure-stuff cannot generate the kinds of things that could record sufficient data points to completely describe a former state of the matter/energy, unchanging in total quantity, of whose current state it is the structure. Which implies that the best we can even theoretically hope for in terms of world-generating algorithms is a rule based algorithm, to be applied to an incompletely specified set of initial conditions, which could create many different worlds, including, possibly (purely by chance) our own, or a complete algorithm, including initial conditions, of a much, much smaller universe.

But it sure is an interesting question to wonder about, in any case.

Saturday, October 24, 2009

The house I live in

A few weeks ago, some section I was reading in Richard Rorty’s “Philosophy and the Mirror of Nature” impelled me to attempt a moment of pure introspection, turning off any conscious thought insofar as possible, and just trying to be aware of my immediate impressions – sense impressions, and random passing thoughts viewed as an observer rather than as agent. This is not the first time I have tried such a thing. For some reason, on this occasion, the thought occurred to me that I do not directly perceive my “self”. This led me to the conclusion that I infer myself. On further reflection, I speculated that humans, as infants, learn to infer the existence of themselves by comparison with the role others play as agents of actions (causes of effects) in the infant’s environment. They see other effects, with a “hole” in the middle (no agent evident), and infer that they themselves exist, as people like the others, in order to fill the hole. The effects inferred to be caused by this “self” are associated with feelings, desires, motivations, so they infer similar feeling, motivated “selves” associated with the other agents, as well.

I offer this as of interest mainly because of the immediacy and specificity of the intuition. I make no claim to originality – if nothing else, I am reminded of the motivating insight of D. E. Harding’s Zen-inspired book “On Having No Head”, as well as a few barely remembered passages on the construction of the ego in Freud’s “Civilization and Its Discontents”. I have long accepted the idea that our understanding of ourselves, in the sense of who we are, is constructed and reconstructed over the course of our lifetimes through social interaction and other life experience. And I certainly don’t offer the above speculations as a developed theory. They were the result of a few moments of introspection and reflection – what they mainly suggested to me was the need to do more research into other people’s views on the development of the self.

But I haven’t been able to avoid (or postpone) thinking more about this, because the nature of the self has been an important issue in several books I’ve been reading. It came up in Jean Grimshaw’s book, in her critique of some of Sartre’s ideas, it was important (from very different viewpoints) in MacIntyre’s book, and in Rorty’s, and it is important in Chapter 3 of the book I am reading now: “The Future of Democratic Equality”, by my friend Joe Schwartz, in which he critiques the ability of post-structuralist ideas, including the “fictive” nature of the self, to serve as a basis for building the concepts and institutions necessary to sustain democracy.

I guess I’ll have to read some existentialists and post-structuralists to get a first-hand understanding of their ideas. In the meantime, if I may be indulged in an argument from second-hand sources, it SEEMS to me that a false dichotomy is being drawn; i.e., if a “self” is not some natural, monist, indivisible, unchanging core of our being, then it must instead be “fictive” and “unstable”. Why? An automobile is a constructed artifact, but this doesn’t make it a phantasm, nor does the fact that it would fall apart if all the bolts were removed make it unstable.

I may not know much about the “self”, but I know a lot about houses. I built houses during my teens as a “carpenter’s helper”, working both with framing crews and finish crews; I’ve designed the structure (and restructuring) of many houses in my professional career as an engineer; and I’ve overseen at least four major remodeling projects in my role as a homeowner. To most of us who haven’t had these experiences, houses often seem the epitome of the solid, concrete, and stable. The British even have an expression, “Safe as houses”, which sums this up perfectly. But I, and others in the trade (or other “post-remodelist” homeowners) know differently.

Houses are “fuzzy” things, with uncertain boundaries, and they are in a constant state of flux. Our definition of “house” can change contextually: does it include the furniture? the outbuildings? the stove, sink, refrigerator? The house itself continually changes: we move furniture in or out, put up new curtains, paint the walls new colors. Left to itself, a house will sag, settle, decay. Termites eat the sills and other supports. A house not built properly is especially vulnerable: absent a few nails or ceiling ties, the walls can spread under the base of the rafters, causing the ceiling to crack and the ridge to sag. I’ve been involved in a few projects where houses in which this had happened needed to be pulled back together and resecured.

Houses are, of course, initially constructed, and this construction is the result of a social “conversation”. The owner may have his ideas, more or less well articulated; the architect has hers; as does the contractor, and for that matter each of the many individual construction workers (carpenters, plumbers, electricians, painters...). There is no unity in these differing conceptions (despite the ambition of the architect), and each makes its own contribution to the outcome. The “final” product is massively unpredictable in its details as they will stand at the moment of “completion” (an arbitrary moment in time, perhaps defined by the Building Inspector making a final sign-off on the permit form). And the house immediately begins to change, under the actions of the kinds of forces described above, as well as from the grander plans of the occupants, who may decide they need a new baby’s bedroom, home office, or kitchen.

Despite all of this, none of us, even those of us who are well acquainted with these processes, would refer to a built house as “fictive”. Nor, except in extreme (dare I say “psychotic”?) cases, would we refer to it as “unstable”. “House” remains a concept, and houses remain things, that we would rather not do without.

So my “self” may be something that is formed and reformed continually throughout my life, by my social interactions (including those with powerful and/or repressive institutions), and by other things. It may be difficult for me to specify with precision, at any given time, just exactly what my “self” is, or what it contains. It may even be that my sense of agency is in some way illusory, because I can’t help doing what I do because of who I am, and who I am has been (and is being) constructed by forces that are beyond my control. Still, it’s a useful thing, this “self”, and it seems to have at least a certain pragmatic, dynamic stability (even if I can’t precisely define the state to which it “returns” after a “disturbance” – which is an engineer’s definition of “stability”).

So, for the moment, at least, I find I have no more desire to give up my "self" (either as a concept or an artifact) than I have to make my home permanently under the stars.

References: In the course of the above, I referred (yet again) to Jean Grimshaw’s “Philosophy and Feminist Thinking”, as well as Alasdair MacIntyre’s “After Virtue”, and to Richard Rorty’s “Philosophy and the Mirror of Nature”, Sigmund Freud’s “Civilization and Its Discontents”, D. E. Harding’s “On Having No Head”, and, last but not least, Joseph M. Schwartz’s “The Future of Democratic Equality”. It’s also clear, if I am to get a better understanding of various ideas of the self, that I am going to have to read some Sartre, Foucault, and Derrida, as well as some more up-to-date books on the psychology of ego formation. (I’m open to suggestions...)

Saturday, October 17, 2009

Rights and the Common Good

I was recently interviewed, in my capacity as a member of the National Political Committee of Democratic Socialists of America, for a monthly radio program called “The Socrates Exchange”, which airs on New Hampshire Public Radio (www.nhpr.org for schedule). The question I was asked to discuss was “Are individual rights more important than the common good?” The show’s producers told me they intended to interview a socialist and a libertarian on this, then edit the interviews into a kind of radio dialogue or debate. Since I had to order my thoughts for this interview, anyway, I thought the topic would make a good blog post for this week.

My first thought was that I was being asked, in a sense, to argue in the “language” of my libertarian opponent. Not that socialists aren’t profoundly interested in human rights. Over the past 200 years, socialists have probably fought more to expand human rights than any other single group of people – struggling to expand the franchise in the 19th Century, fighting for Civil Rights in the U.S. in the mid 20th Century, fighting for labor rights such as fair pay for a day’s work, and decent working conditions, in both Centuries, and more. But the concept of rights is not really central to a socialist analysis, in the way it is to libertarians. A socialist analysis is a dynamic analysis – we are interested in processes and forces. We are, thus, concerned with the political and economic forces which might prevent people from enjoying their rights; we are concerned with the way people organize themselves to win their rights; and we are concerned with the social and democratic process by which those rights are defined or constructed in the first place.

This matter of social construction particularly distinguishes a socialist from a libertarian view on “rights”, I think. A libertarian takes rights as prior to social construction – if not to society itself (cf. Robert Nozick, “Anarchy, State, and Utopia”). In fact, libertarians take one particular right – the presumptive right of property – as prior to everything else, deriving all the rest of their political and social philosophy, and their ideal conception of the state, from this.

This priority of rights makes no historical or ethnographic sense. Our ancestors were social animals before they were humans; the idea that so complicated a conception as a right to property could arise, in a form any human society would recognize as such, in an animal perhaps less mentally developed than a macaque, strains credulity. And the complex and varied systems of individual and collective property and use that anthropologists have found in different human groups, and even the sharing behavior ethologists have found in other primates, are simply incompatible with the kind of theoretical primacy libertarians like Nozick place on a simple “mine” and “yours” concept of property.

To a socialist, at least to this socialist, all imaginable rights, even the very concept of “property”, itself, are socially constructed. One thing that this suggests is a special importance for those rights which enable people to participate fully in the political processes by which all rights (including these) will be constituted, in other words, for democratic rights: the right of free association, right of free speech, right to vote, etc. Another thing that is suggested is that, unlike some imagined a priori right, socially constituted rights will be inherently contextual, and limited in scope – I may have the right to bear arms, but if I bear them so carelessly as to cause another’s death, I am guilty of manslaughter. I may have the right to free speech, but I may not maliciously cry “fire” in a crowded theater.

In fact, if rights are socially constructed, and in a democratic society, the very idea that individual rights could be set up against the common good appears as a false dichotomy. When people define a right socially, democratically, acting in solidarity, they are defining it precisely with a view to furthering “the common good”, according to their best lights. There may be a debate about how far to extend the right so as to best effect the common good, but the idea that the right should be extended beyond that point, to imperil the common good, is nonsense.

Of course, in a non-democratic, unsolidaristic society, an elite group may try to define “rights” for itself over against the rights of some other group – i.e., to define some special privilege which they should have, and others not (hence the “divine rights” of nobility and kings). But from a socialist point of view, when the attempt is made to define some elite privilege as a right, to the detriment of “the common good”, then it is incumbent on the non-elites to challenge the legitimacy of that right.

In other words, from a socialist point of view, “individual rights” which are contrary to the “common good” are not “rights” but “wrongs”.

With regard to property, in particular, socialists not only disagree with libertarians on how the concepts of “property” and “property rights” are constituted, we disagree on how wealth is produced. Libertarians tend to focus on an individual’s role in production as legitimizing their subsequent monopolization of part of the product. Socialists understand that all property is socially produced. The idea that it is possible to uniquely and precisely determine the contribution of one person to the total output is ludicrous. None of us makes it “on our own” in this life. If nothing else, we are raised, nurtured, educated by others. Our role in the production of goods and services is deeply embedded in a social matrix: we work on committees and task groups, process materials produced by somebody else, etc. (Perhaps we should exclude the fanatical few who go out of their way to live as “survivalists”. But even the most committed, hardy mountain man probably uses steel traps made by workers in a factory somewhere.)

Since wealth is socially produced, socialists believe that society should make collective decisions about how to use and distribute it. This doesn't mean society "owns" the shirt on your back. Socialists make a distinction between personal property and productive property, or capital. The ability to live a decent life requires a certain amount of non-interference by others in the basic day-to-day decisions affecting your life, and this implies a de facto “right” to “enjoy your personal property”. But accumulations of property – capital – are a productive resource that affects the lives of tens, hundreds, thousands, even millions of people. Decisions as to the use of that kind of property should be made collectively, with the fullest democratic participation possible of all people who will be affected.

In a sense, socialism, the kind of socialism that Democratic Socialists of America espouses, is simply deep democracy. We believe that all important decisions affecting our lives should be made with the full participation of all who are affected by them. The selfish will of the few should never be allowed to trump the human need of the many.

References: Robert Nozick’s “Anarchy, State, and Utopia” is a worthy polemic on libertarian theory by a serious (even if, in this case, completely wrong) philosopher. Frans de Waal’s “Good Natured”, which I’ve also referenced in a couple of other posts, has a good discussion of primate sharing behavior. “Understanding Capitalism” by Samuel Bowles, Richard Edwards and Frank Roosevelt has (mostly in sidebars) some interesting ethnographic information on differing property relations. It also discusses some of the other themes above, and is an excellent basic textbook on classical economics written from a democratic socialist point of view.

Monday, October 12, 2009

Political economy

In the 18th and early 19th Centuries, people studied what they called “political economy” when they studied what we now call “economics”. I’ve occasionally heard Left activists bemoan this change in terminology as if some sense of the interconnectivity of politics and economics was lost along the way. Regarding the terms themselves, though, I believe this lament is historically incorrect. The term “political economy” was used in the sense that we might say “public economy”, that is, the economy of the Commonwealth, or “polis”, with the intention to distinguish it from the more domestic sense that the bare word “economy” would have carried in the 18th century, that is, the economics of households, or what we might nowadays refer to as “home economics”, or, with more dignity, “household management”. This was the sense that “oikonomia” had for the Greeks. The use of the two-word term did not imply any insight that the power relations of “politics” were somehow inseparable from the market and production relations of “economics”.

Still, something is lost in trying to analyze “the market” as if it could be divorced from the other power relationships in society. Marxists, of course, have never believed in making this division, but the liberal economic tradition makes it central. One who broke from the mold was John Kenneth Galbraith, whose central trilogy of The Affluent Society, The New Industrial State, and Economics and the Public Purpose I have worked my way through over the past few years, along with the excellent biography by Parker. One of Galbraith’s central ideas is that economics and politics are inseparable “in the wild”, and the academic separation of the two disciplines is therefore artificial, and causes analysis to deviate from reality.

Many of the details of Galbraith’s economic analyses were seriously flawed. These flaws have been analyzed in detail by many who are more learned than I, and I won’t try to reproduce the critique, except to say that my sense is that he was too in love with his own ideas, too inclined to spin them out as logical exercises to see where they would lead, and not inclined enough to the more demanding task of empirically testing them against facts. Still, despite flaws in detail, many of his key insights, including that on economics and politics, were and are profound.

There are many reasons why economics should not be separated from politics. One is that wealth is readily translatable into other kinds of power, and not only by means which most people would consider illegitimate or “corrupt” (such as bribing officials). Money buys access to politicians, it buys media coverage (either directly, or just because what the rich and powerful do and say is interesting and “newsworthy”), it buys research to “prove” your point of view (and it pays to bury the research results when they aren’t “right”). Money buys respect. Galbraith often commented ironically that nothing produces a semblance of intelligence and perspicacity so much as the possession of wealth. Even Adam Smith commented on the “natural” deference which we offer to the “opulent”.

Differential wealth also influences outcomes in the market place. Worker and owner don’t meet as equals in the labor markets when waiting a few more weeks to strike a bargain means the owner puts off buying a new yacht, but the worker can’t put food on her family’s table. And you don’t need to accept Galbraith’s entire theory of “the planning system” in all its detail to agree that an economy where major areas of production are controlled by a few large corporations, which spend huge amounts on advertising to manipulate consumer opinion regarding products that are, otherwise, functionally nearly indistinguishable, and which arguably add almost nothing to the quality of human life, is very different from an economy in which a large number of small firms compete to provide easily understood products in response to autonomous customer demand.

Nor is the answer as simple as breaking up big economic groupings in favor of the small. Markets may, as we’ve been told, create wealth – they also create unequal wealth. Even from a perfectly fair “starting position”, in which all participants had equal initial wealth and talent, pure chance would result from time to time in some participants temporarily having more wealth than others, and since differential wealth brings the power to manipulate, the imbalance would tend to be accentuated. And, of course, a perfectly fair starting position could never be achieved in real life. (Just see Marx’s chapters on Primitive Accumulation, in Capital, Vol. 1.)
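To make the “pure chance” point concrete, here is a toy simulation of my own (every number in it is invented for illustration): give everyone identical wealth, let randomly chosen pairs stake a small sum on a fair coin flip, and watch the distribution spread.

```python
import random

# Toy model: everyone starts with equal wealth; each round, two randomly
# chosen participants stake a small sum on a fair coin flip. Nobody has
# any edge, yet the wealth distribution spreads out over time.
def simulate(n_agents=100, rounds=100_000, stake=1.0, start=100.0, seed=42):
    random.seed(seed)
    wealth = [start] * n_agents
    for _ in range(rounds):
        a, b = random.sample(range(n_agents), 2)
        bet = min(stake, wealth[a], wealth[b])  # nobody can go below zero
        if random.random() < 0.5:
            wealth[a] += bet
            wealth[b] -= bet
        else:
            wealth[a] -= bet
            wealth[b] += bet
    return sorted(wealth)

w = simulate()
print("poorest:", round(w[0]), "median:", round(w[len(w) // 2]), "richest:", round(w[-1]))
```

Perfectly fair rules, identical starting positions, and still, after enough rounds, some players hold several times the wealth of others. And in real life, unlike in the toy, the temporary winners get to spend their lead on tilting the rules.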

Also, Galbraith was certainly correct that the complexities of producing the goods and services demanded by our modern technologies require a certain amount of large-scale coordination, even if he missed the “right sizing” and “outsourcing” trends that radically altered his “technostructure”-dominated mega firms.

The bottom line is that an economy is not a place where impersonal forces work themselves out in a way that automatically tends to justice. It is a complex arena where people strive for advantage, where advantage translates into power, and power leads to more power. The only intellectual stance that makes sense is to accept that the economy is inherently linked to the political, and not to strive for an artificial separation.

Fortunately, it now seems more common in academia to decry, at least as lip service, the artificiality of boundaries between disciplines than it was when Galbraith was doing most of his writing, and at least some of today’s classically trained economists recognize the importance of linking power relations to economic outcomes. One of them, in fact, is John Kenneth’s son, James K. Galbraith. I haven’t read as much of his work yet, but he seems just as bright as his father (if not quite the wordsmith – and certainly not possessed of the same level of ego), and he is much more amenable to subjecting his ideas to the discipline of empirical testing.

So there is hope that even the liberal tradition may someday join with the Marxist in recognizing that the economic is political, after all.

Saturday, October 3, 2009

Keeping the lid on

This week, I’m moving from the sublime to the ridiculous, or so it may seem. But somehow, in here, there’s an underlying question which is no less puzzling to me than the “deeper” or more “philosophical” ones I’ve been playing with in my last few posts. So this week, I’m going to riff on coffee cup lids.

I bought a cup of coffee, this afternoon. The coffee was much too hot to drink when I got it. I put the cup in the cup holder of my car, and drove around for a while. The lid of the coffee cup had a hole in it, to sip coffee through. In the moving car, the coffee sloshed through the hole. By the time I got around to drinking my coffee, there was coffee all over my car, the side of the cup was wet, and it was impossible to drink without dripping coffee all over myself. So I waited until I got home, and transferred the coffee to another cup. By then, it was cold.

I first started buying coffee “to go”, I guess, in the mid-1960’s. In those days, nobody had heard of lids with sippy holes in them. A lid was just a lid. If it was a “good” lid (tight, and not too flimsy), it would keep the coffee in the cup until you got to where you were going. A tiny amount might escape from the pinhole in the top which was there to let steam out, but this was insignificant – it was only an issue, really, when the server placed the napkins on top of the cup before handing it to you, because the napkin would end up with a wet, brown spot on it. (This is still a problem!)

In those days, there was no built-in provision, at all, for drinking your coffee through the lid. The implied assumption of the lid makers was that you would keep the lid on until you got where you were going, take the lid off, and drink the coffee. Inveterate travelling coffee drinkers, like myself, learned to tear a little triangle out of the lid to sip through. This was sometimes a little difficult to do, if the lid was a really good one (i.e., tough, thick plastic). Carrying a pocket knife, as I always did, and do, helped.

So somewhere around the mid-1970’s, I guess, someone came up with the idea of putting a perforated section in the lid, which would tear easily. At first, they would just tear out (like the old, self-made triangles), and you would throw the little piece away. Later, somebody got environmentally conscious, and the piece would stay attached – you would just fold it back, and tuck it under another part of the lid. Now to an old-timer like myself (I was in my 20’s!), this all seemed a bit effete. And I did have a legitimate beef that sometimes the tear-out that was provided was smaller than I wanted, and I had to tear it wider to drink comfortably. Still, both these changes were genuinely good ideas, and represented real improvements in the functional utility of the coffee cup lid.

Then, I don’t remember when – the 80’s? 90’s? – someone decided we should get rid of the tear off, and just put a permanent hole in the lid to sip through. Unlike lids with the old tear-back openings, these new lids would never be coffee-tight, which meant (and means!) that the coffee would start spilling out from the moment you bought it. It burns your fingers. It spots your clothes. It messes up your car. Yet, somehow, this new, and it seems to me, unambiguously inferior lid design has become ubiquitous, almost completely displacing the older, and better designs. How is something like this possible?

Does any coffee drinker actually like these lids better? Is the act of tearing back a little perforated section really so difficult for some people (or, actually, for most people) that dealing with sloshing, dripping, spilling coffee seems a small price to pay? I know at least somebody agrees with me, because I once visited a coffee shop where they had small plastic lip-shaped stickers which they stuck over the holes, which you could then prize off when you were ready to drink the coffee. (Boy! I hoped that trick would catch on. Unfortunately, it has not.)

I suspect most people just don’t think about it. They just take whatever lid they’re offered, and deal with the consequences without much reflection. (They’re not persistent wonderers, I guess.)

Still, this really raises, in my mind, a question about progress. The universe, taken in the large, is not teleological, as Aristotle thought it was. It does not have a purpose, or a preferred direction toward which it tends. (At least not in the way that most humans would understand the meaning of “purpose”.) Life also, writ large, as for example in the process of natural selection, is not teleological. Natural selection “improves” the degree to which species are adapted because less well adapted organisms are differentially less successful in passing on their genes than better adapted ones, not because of some pre-ordained goal inherent in natural selection, itself. (I think the word for this false semblance of teleology is “teleonomy”.)

But individual organisms are teleological beings. We have goals, short and long term, toward which we direct our efforts, and in general those goals involve improving our lives, according to our own lights. Human beings strive socially to carry out their goals, and human culture, which preserves the record of past successes and failures, and which encodes values (mainstream and marginal) and goal-concepts (collectively agreed upon or dissident), ought to sustain some degree of telos as well. Human cultural evolution, aka human history, ought to show some degree of something we could call progress.

So how can a really bad idea end up almost totally supplanting a good one? I wish I knew!

Monday, September 28, 2009

Moral knowledge and moral choice

Last week, I discussed a parallel I see between counterfactuals and moral argument that goes against using the empirical unprovability of moral argument as proof for an extreme relativistic moral stance. I ended with the question, “What could be a basis for making non-arbitrary decisions in moral arguments?”

I wish I had a firm answer for this. Maybe someday I will. One valiant attempt to wrestle with this problem is by Alasdair MacIntyre in his book, “After Virtue”. MacIntyre is concerned with this problem in a form similar to that identified by Jean Grimshaw in Chapter 8 of “Philosophy and Feminist Thinking” (see my 8/30 post) – the tension between morality as knowledge and morality as choice. Classical Greek philosophers such as Plato and Aristotle believed that the answer to the question “What is the right way for a person to live?” could be found by reason – they in essence believed that morality was a matter of knowledge about the world. Modern philosophers, however, have tended to emphasize the role of “choice” in moral behavior, and to question the degree to which reason can answer fundamental moral questions at all. (See the nice, brief discussion in Grimshaw.) Absolute relativism is an extreme form of morality as choice. MacIntyre comes down squarely in the “classical” camp of morality as knowledge.

MacIntyre is concerned with an analysis of the virtues, taken as acquired human qualities which tend to enable us to achieve the “good” in our lives. He defines the virtues through a three stage process. First, he places them in the context of what he calls a “practice”, a cooperative activity which has “goods” internal to the activity. For instance, the game of chess has internal “goods” defined in terms of strategy and skill and other attributes of good play, which are fully knowable, and can be experienced, only by people who have committed themselves in a certain way to the playing of the game. Chess may also bring a player external goods – for instance, winning at chess may bring social praise, prize money, etc. – but, whereas the external goods may perhaps be achieved by cheating in some way (violating the virtues inherent in the practice of chess), the internal goods cannot. So “the virtues” are tentatively defined as qualities which help sustain practices.

In the second phase of his account of the virtues, MacIntyre places them in what he calls “the unity of a narrative embodied in a single life”. He starts with an interesting and compelling argument for the centrality of narrative to our understanding of our lives. He argues that, rather than the life of a person being conceivable as a series of actions or events, which we may choose to assemble into a narrative as a sort of literary exercise (or concerning which we may deny the veracity or authenticity of constructing any such narratives), in fact we can understand the concepts of “action” and even “person” only as abstracted elements of some narrative. From this, he claims it follows that a single life has a sort of narrative unity, and that to ask “what is good for me?” is the same as asking what sorts of things will lead to developing or discovering that unity. This leads him to define the virtues as, in part, those qualities which will help us in our quest for the answer to that question, and in part those qualities (whether the same or a more inclusive list) which will help us realize those goods once we have identified them.

Finally, MacIntyre argues that the narrative unity of a life is comprehensible only in the context of a tradition, which he defines flexibly as “an historically extended, socially embodied argument, and an argument in part precisely about the goods which constitute that tradition.” Note in particular that a tradition so defined is not an inflexible body of practices that must be conservatively defended against change from any source.

After the third stage in the analysis, then, the virtues stand defined as those acquired qualities which (1) sustain us in practices, (2) help us find the narrative unity in our own lives, and (3) sustain the vital tradition in which we live.

Well, I can’t do justice to a complex book, especially one which, after one full and one partial reading, I still only imperfectly understand. There is much that rings true (or at least partially, or potentially, true...) Other parts fail to convince. The concept of a practice, and the partial definition of the virtues therein, seems useful. Also, I find myself agreeing with MacIntyre in placing narrative at the center of our understanding of (at least) our own lives and our social world; however, it seems to me that every person, rather than being something abstracted from a single narrative, is something at the nexus of an interlocking web of narratives, not just subjectively (because I am aware of things that you are not), but essentially, in that any narrative, as an act of analysis, necessarily abstracts from reality, and any abstraction necessarily leaves some things out. Like the abstraction inherent in ostension, discussed in my 9/5 post, the abstraction inherent in narrative is by no means arbitrary. It is constrained by what is “really there” in experience, but also by our immediate goals in analysis. To understand certain things (why I play guitar) we abstract certain things from experience. To understand others (why my sister likes red) we abstract others. I may have a role in each narrative – but to include every possible fact in experience in which I might conceivably play a role would be an “account”, if you could call it that, far too incoherent to be called a narrative, or, probably, comprehended at all. (Would it stop short of the entirety of the universe?) It certainly would be a poor candidate for “the narrative unity of a life”. So how do we decide WHICH narrative the virtues are supposed to support the development of? (To use the sort of language with which Churchill might not have put up.)

Similarly for tradition. The definition of a tradition as an ongoing argument is one I like; it frees tradition, at least in principle, from being a cage. But if a tradition is an argument, then which side of the argument should the virtues support? Why do we speak of “tradition”, at all? Would it be better to use the plural, “traditions”?

So I am not sure that MacIntyre has removed the arbitrary from his account of the virtues, and placed them on the plane of the knowable, rather than as objects of choice. I’m not sure he hasn’t, either. Maybe the virtues can be defined exactly as those qualities that allow us to navigate the choice of narratives, and decide which of the available candidates is the most central or important for our lives. (Maybe there really is one, and only one, that is best, and not many different but equally good.) Maybe the virtues are exactly those qualities which keep the central argument of a tradition alive and vital, allow it to adapt to changing circumstances, and keep it from degrading into arbitrary authoritarianism, or dying out all together. Maybe it is possible to determine which virtues would facilitate these processes, without begging the question of what the outcome of the process ought to be. Maybe this is even exactly what MacIntyre meant.

At any rate, as I said, a valiant attempt, and a book well worth reading.

Another possible basis for making non-relativistic arguments for moral principles, which is not incompatible with MacIntyre’s (I think), is by appealing to arguments about the evolutionary basis of what we call moral behavior, as I discussed a few weeks ago. Of course, any such argument runs into Hume’s predicament of deriving “ought” from “is”. The fact that certain traits have evolved in us by natural selection does not necessarily mean that we “ought” to give expression to them, or culturally reinforce them. In fact, the idea that we “ought not” formed a large part of moral thinking, when evolved traits were understood exclusively under earlier, “tooth-and-claw” understandings of natural selection. However, if humans have evolved traits by natural selection which tend to produce behaviors which, if practiced, tend to promote the adaptive success and posterity of the human species (both in the past and in the future), then there is a strong argument that those behaviors constitute a part of what is “good for a man” (in MacIntyre’s non-P.C. language; I will continue to try to use “human” or “human being” or other non-gendered language in similar contexts).

Any argument for morality by natural selection must depend on the idea that society – the social group – is the number one weapon in Homo sapiens’ arsenal of adaptive strategies. So moral behavior is that which enhances the well-being and stability of the group, and its ability to provide nurturance, support and safety to the men, women and children therein. This does not exclude egoistic as well as altruistic behavior – de Waal, in “Good Natured”, points out that without sustaining our own lives, we cannot lend succor to anyone else. But it does call for a balancing act between self- and other-directed behavior, as too much of either can be destructive of the whole. De Waal offers convincing examples and arguments that the rudiments of this balance are clearly discernible in apes and monkeys, and possibly in other social species.

In fact, my “natural historical” conception of morality (8/30 post) is exactly that of a constant, and irreducibly contextual, balance between divergent and possibly even contradictory claims.

Maybe morality, like MacIntyre’s traditions, must be seen as an ongoing argument – an argument amongst ourselves, on matters of general principles, and an argument within ourselves, in every case of practical application. Maybe the best we can hope to do is to clarify what are and are not the proper terms of debate.

References: The books that I referred to in this post are Jean Grimshaw, “Philosophy and Feminist Thinking”, Alasdair MacIntyre, “After Virtue”, and Frans de Waal, “Good Natured: The Origins of Right and Wrong in Humans and Other Animals”. (See also de Waal’s earlier “Chimpanzee Politics”.)

Monday, September 21, 2009

Counterfactuals and moral relativity

In my August 30 blog post, I segued from a discussion of Jean Grimshaw’s book “Philosophy and Feminist Thinking,” into a discussion of what I called a “natural historic” view of morals. I promised to return to the topic, which I am doing with this post, although not, I’m sorry to say, to the particular questions I had promised to address. (Someday...)

The fundamental difficulty in moral philosophy is to find some normative basis for behavior that has some claim to universality, but does not seem arbitrary. We are not comfortable with an absolute relativism which says moral choices are simply a matter of personal choice, and there is no “objective” basis for privileging one person’s, or culture’s, choices over another’s. But an authoritative absolutism, which says these particular moral premises are right for all people and for all time, is not very satisfactory, either. Or rather, many people are in fact quite happy with the authoritarian approach, but the authoritarians never seem to agree on a set of premises. So the question is, if we reject absolute relativism, how do we come up with a rational means of evaluating competing claims? And how can we justify that means, without simply elevating the relativism to another level?

Before I tackle the above “big question”, though, I want to address one particular argument for absolute relativism – the argument that since no particular moral stance can be proven, there is simply no option but to view moral behavior as a purely personal matter of preference, choice, or taste.

I think there is an interesting parallel between moral arguments and certain kinds of argument from counterfactual hypotheses. Arguments from counterfactual hypotheses can be of different kinds. For instance, I can take a rubber ball from my desk drawer, hold it out at arm’s length for a moment, then put it back in the drawer and say to you, “If I had released that ball, it would have struck the floor and bounced. It would have bounced several times, but the altitude of each bounce would have been less than that of the previous bounce.” As an example of a very different kind of counterfactual argument, I could say, “If the South had won the Civil War, slavery would have been abolished anyway, within 25 years.”

The first of these two counterfactuals describes a possible experiment within the context of a well-understood physical theory (Newtonian mechanics). The theory is so well developed that I can present a precisely detailed argument, including mathematical equations, connecting my prediction firmly to the basic laws of the theory, an argument which nobody who understands the theory would care to deny. Finally, and most importantly, I can if I wish demonstrate the truth of my prediction by actually performing a similar experiment; I can take out the ball again, and drop it. Our acceptance of the theory, and of the underlying general theory of knowledge through experimental science, are so great that having seen the experiment done once, we probably accept immediately that it would have worked the same way in the first (untried) example. If not, I can repeat the experiment over and over until the most confirmed philosophical skeptic begs me to quit.
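For the curious, the Newtonian argument is compact enough to sketch. Idealize the ball with a constant coefficient of restitution e, where 0 < e < 1 (a standard textbook simplification; I haven’t measured my actual desk-drawer ball). Then the prediction of ever-lower bounces falls right out:

```latex
% Dropped from height h_0, the ball reaches the floor at speed
% v_0 = \sqrt{2 g h_0}, by conservation of energy. Each impact rescales
% the rebound speed by e, so after the n-th bounce
\[
  v_n = e^{\,n} v_0, \qquad
  h_n = \frac{v_n^{2}}{2g} = e^{2n}\, h_0 .
\]
% Since 0 < e < 1, the bounce heights h_0 > h_1 > h_2 > \dots decrease
% geometrically, which is exactly the counterfactual prediction.
```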

The second counterfactual argument in my example takes place within theoretical contexts (history, political science...) that are not as precisely defined as Newtonian mechanics. Most importantly, experiments of the kind I described above cannot be done. This is not to say that historical sciences cannot be empirical, but experiment in historical sciences is based on making predictions about patterns of unknown facts, and then seeking out and examining the facts to see if they conform to the predicted pattern. Experiments that go directly to arguments that “if this had happened, then that would have also happened”, as applied to specific cases, can never be done. This means that historical counterfactuals of this kind can never be proven or disproven with the degree of certainty that would apply to simpler, physical problems. Does this mean, as has been proposed on a similar basis for moral arguments, that such counterfactuals are purely matters of opinion – of personal taste – and there is no right or wrong to them?

The idea that historical counterfactuals are purely matters of taste flies in the face of common sense. For instance, if I say slavery would have ended within 25 years after the Civil War, if the South had won, you might reply by saying, “Slavery would have ended, but it could have taken 50 to 100 years.” Many people might think that your prediction was more likely than mine. (Note: I personally have no opinion on this. My counterfactual example is such on more than one level!) Let’s take a more extreme argument: “If the South had won the Civil War, they would have immediately freed the slaves and given them the vote.” Very few people, if any, would believe that this argument was true. So, in fact, we do believe that there is a sort of truth and falsehood to counterfactual arguments. This is because we live in (and believe we live in) a rule-ordered universe, and we believe those rules may be used as a basis for prediction. The intellectual process of making a counterfactual prediction is no different from that of making a future prediction; it is just that in one case the experiment may sometimes be carried out, and in the other it cannot.

The parallel between counterfactuals and moral argument suggests that the mere fact that moral arguments cannot be answered on a firm, empirical basis of the sort possible with Newtonian physics is not sufficient to prove that they are reducible to nothing more than personal taste and preference. On the other hand, it doesn’t prove that there is a better basis for them, either. In the counterfactual case, I found a parallel between the process of making counterfactual predictions and making (sometimes or somewhat testable) future predictions. What could be a basis for making non-arbitrary decisions in moral arguments?

I have some thoughts toward an answer (certainly not the temerity to say I have an answer!) But when I wrote it all down, the outcome, at some 2,400 words, was, I thought, much too long for a single blog post. So I’ll continue this next week. (Gives me more time to tinker, in any case.)

Sunday, September 13, 2009

A world for mind

Well, I finished the De Waal book, but I still haven’t been able to get back to the morality thing – I’m not too good at week-spanning posts, I guess. So I’m rehashing something else from my journal, somewhat rewritten for your benefit. If you’re out there...

Any world in which a mind could evolve by natural selection must have at least three characteristics: it must have stuff in it, the stuff must be lumpy, and the lumpiness must be orderly. “Stuff” is obvious – a world without anything in it would be no world worthy of the name. “Lumpiness” is the quality by which mind can make distinctions. (Plato proved in the Parmenides – if I can trust Cornford’s wonderful interpretation – that perfectly homogeneous stuff is indistinguishable from nothing at all.)

“Order” is that quality whereby mind can create useful rules about stuff. This is important, because making rules is what makes a mind useful, and usefulness is what makes natural selection preserve it. A mind could perhaps arise by chance in a chaotic world, but since there would be nothing for it to make rules about – nothing could be generalized – it would have no predictive ability, it could not enhance the reproductive success, or even the life experience, of the organism. It would be no good at all.

I don’t know how many specific conditions on the nature of stuff and its organization (order) are required to have a sufficient (as well as necessary) set of conditions for the evolution of some sort of mind. I suspect fewer than most people might think. I think we must make some sort of posit regarding the interaction of stuff, e.g., that inferences about important qualities of stuff (as they affect the organism) can be made more accurately with information about proximate conditions than distant ones. Alternatively, this might serve as a definition of “proximate” and “distant”. The spatio-temporal variation of stuff must be such that most of the time predictions based on experience don’t become invalid before the organism has a chance to benefit from them. (“Experience” could here be defined as one particular set of interactions with proximate conditions.)

What is needed for natural selection is a world in which some lumps of stuff can interact with other lumps in such a way as to create near-perfect replicas of themselves. Self replication amidst random generation of other things and random destruction of all things leads to an increasing population of self-replicators. Changes (mutations) that enhance the efficiency of self replication increase the rate of population growth. Harsh circumstances or competition with other self-replicators may enhance the importance of some useful mutations, causing some forms of self-replicators to die out, while other populations continue to increase – et voila! Natural selection.
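A cartoon version of this process fits in a few lines of code (a toy of my own devising, with every parameter invented for illustration). Each “lump” is reduced to a single number, its probability of copying itself in a given generation; copies occasionally mutate, and random destruction keeps the population bounded:

```python
import random

# Minimal caricature of selection among self-replicators. Each "lump" is
# just a replication probability. Every generation, each survivor may
# copy itself (the copy occasionally mutating), and random destruction
# trims the population back to a fixed ceiling.
def evolve(generations=200, capacity=500, mutation_rate=0.05, seed=1):
    random.seed(seed)
    population = [0.1] * 10  # start with a few poor replicators
    for _ in range(generations):
        offspring = []
        for p in population:
            if random.random() < p:  # successful self-copy
                child = p
                if random.random() < mutation_rate:  # imperfect copy
                    child = min(0.99, max(0.01, p + random.uniform(-0.05, 0.05)))
                offspring.append(child)
        population += offspring
        # Random destruction: shuffle, then cull back to the ceiling,
        # without regard to fitness.
        random.shuffle(population)
        population = population[:capacity]
    return population

pop = evolve()
print("mean replication rate:", round(sum(pop) / len(pop), 2))
```

Nothing in the program “tries” to improve anything: the culling is blind, and the mutations are random. Yet the average replication rate drifts upward from the initial 0.1, simply because better copiers come to make up more of the census.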

Mind (following Dewey) is obviously adaptive, at least for motile self-replicators, who actively seek to influence, and therefore must have some ability to predict, the spatio-temporal distribution of stuff in their environment. So given the above conditions, and perhaps a few more I haven’t thought of, it seems to me that mind has a fighting chance to evolve.

The whole point of this exercise (which principles of good writing might have had me put at the beginning of the post, and not near the end – but I’m feeling contrary, today) is to address the supposed mystery of how, of all the possible universes that might have existed, did it come to pass that the actual universe (assuming there is only one) is one in which such apparently frail things as humans and human intelligence could survive and prosper? The supposed intractability of this question is apparently of great comfort to theists, who supply their own preferred answer.

Now God, of course, is really no answer at all (why, of all of the possible gods, did we get one who would decide to create a universe in which...? etc.), and the supposed statistical improbability is really irrelevant (if we didn’t have a universe in which mind could evolve, we wouldn’t be here to comment on it – the probability of an event that is known to have happened is 100%). But all that aside, I just don’t think that evolution of mind is all that hard to credit. A world without stuff in it, and lumpy stuff, at that, would be something we hardly could accept as being a world (or universe) at all. In a purely chaotic world, anything at all could be, but nothing at all would last, so no mind, but there’s no particular reason to believe that pure chaos is any more “likely” than a world with SOME sort of order. And in a world with any kind of order, it seems that some sort of experience could be derived from interactions among “proximate” lumps of stuff, which would have at least limited utility in making predictions about further experience. So all that is necessary is for some lumps of stuff to have qualities that allow them to self replicate.
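To put that probability point in symbols (a trivial bit of conditioning, but it disarms the statistical hand-waving):

```latex
\[
  P(\text{mind-permitting universe} \mid \text{minds exist to pose the question}) = 1 .
\]
```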

Okay, I’ve probably missed a few practical requirements (at least). And certainly the whole state of affairs is marvelous and wonderful (two words which are perhaps literally synonymous). But if a “miracle” is some occurrence that we just can’t rationally explain, then, folks, I just don’t see the existence of a world with mind in it as a thing miraculous.

Saturday, September 5, 2009

Ostension, abstraction and ambiguity

I wanted to continue with the topic of morals, this week, but the press of work in my “day job” didn’t leave me time to think through some things I wanted to talk about. So I’m posting this piece, which I wrote spontaneously in my journal a couple of days ago. The only connection is that I was reading the De Waal book, Good Natured, that I mentioned last week, and I was thinking about the evolutionary origins of human cognition. Anyway, hopefully I’ll get back to morals, next week. By then I should have finished De Waal, at least.

Bertrand Russell, among others, has pointed out the limits to definition. Words being defined in terms of other words, eventually one reaches the point where further verbal definition is possible only by permitting circularity. In formal languages, such as mathematical theories, the solution is to leave some terms undefined in the subject language, relying instead on definitions in some background language or “meta-language”.

In natural languages, the equivalent of meta-language definition is definition by ostension. One points to an object and says, “This is a table.” Or (to use W. V. Quine’s favorite example), one points to a furry animal and says, “Rabbit.”

What Russell doesn’t really point out, at least in my readings thus far, is the degree to which definition by ostension involves a process of abstraction. Quine makes much of the ambiguities involved – but this is a different, although related, point.

To identify a table, you need to determine the boundaries of “table”. You need to determine which particular parts of experience you are isolating (mentally) to define them as “table”. Your audience needs to perceive things similarly for communication by ostension to be meaningful. Especially to identify “table” as a general term, or identify a class “table”, you both need a similar functional/pragmatic relationship to a table object, otherwise the general term makes no sense. This is different than distinguishing between a rabbit and an undivided collection of rabbit parts (to consider Quine’s example). This is distinguishing between the rabbit and the ground it hops around on. An intelligent ant might find it impossible to distinguish between ground and rabbits by ostension, and might, in fact, find each term to be an incomprehensible generalization, rather like a human might react to a class containing jelly beans and squid.

The point is that the simple act of ostension, the most basic form of definition, involves a process of abstraction, and is therefore partly a function of the cognitive apparatus of the communicants. The cognitive apparatus in turn evolved into what it is as a function of the way the thinking organism interacts with its environment. This means, of course, that the cognitive component to ostensive definition, while it is “subjective”, is in no sense arbitrary – except, perhaps, at the margins of utility. It also means that since conspecifics share so much of the ways in which they functionally interact with their environment, members of the same species can generally go quite far in communicating meaning by ostension. Quinean ambiguities can be seen in context, here: there really is little functional difference, to a human, between a rabbit and an unseparated, reasonably complete, set of rabbit parts – hence the translational ambiguity that Quine finds.

References: Quine used his rabbit example frequently, most thoroughly, if I remember correctly, in Word and Object, but also in the essay “Ontological Relativity”. I’m damned if I can remember where I read Russell on definitions. (It was longer ago.) Quine, by the way, must be rolling over in his grave at my loose correlation of the notions of “class” and “general term”.

Sunday, August 30, 2009

A natural history of morals

I’ve been reading a great book, “Philosophy and Feminist Thinking”, by Jean Grimshaw, which I picked up serendipitously at Back Pages Books, in Waltham MA (http://www.backpagesbooks.com/).

As is appropriate for such a broad title, Ms. Grimshaw covers a lot of area, especially for such a short book. She hooked me in an initial section in which, in the course of discussing what it might mean to think of philosophy as “gendered”, she showed, by a very original argument, that, for instance, Kant held views on the nature of women which, to a sexism-aware reader, seem, in the context of his general theory of moral behavior, to relegate women to second-class humanity; however, the views on women, in Kant’s case, could be thrown away, and the general theory would not need to be changed. On the other hand, Aristotle’s teleological theory of natural history led him to see rational thinking as the most characteristic quality, and therefore the most appropriate end or goal, of human beings, because rational thinking derives from language, which is the one quality he saw as being uniquely human. The fact that women have language, but that he believes women not to be rational in the same way as men, thus creates a contradiction in his philosophy; but to eliminate the contradiction by admitting women (or for that matter, slaves) to be fully rational, would undermine parts of his moral and political philosophy which required the good life to be supported by the labor of women and slaves, in order for the full rational nature of humanity to find expression. Thus, Aristotle’s misogyny is integral to his philosophy, and his philosophy is more clearly “gendered” than Kant’s.

In another question she raises in the book, the question of what it might (or might not) mean to speak of women as having a “nature” distinct from men, or of “women’s ethics or values” as being distinct from “men’s”, she brings up some ideas I wish she had developed more fully. She mentions that many of the values which are often seen as being particularly women’s values – caring, attentiveness to relationships, alertness to the feelings of others – are actually behaviors that may be quite practical for survival to a person living powerlessly under the domination of others. A hyper-keen alertness to the feelings and moods of others, for example, is often a characteristic of people who grew up in an abusive environment (my example, not hers). I wish she had explored the political implications of this observation a little more – in particular, does this mean that some of these virtues might eventually dissolve, if we won a more egalitarian world? I hope not!

But what I really want to talk about in this essay, because it gibes in interesting ways with some thinking I’ve been doing, is a little theory of moral behavior that she just sort of casually tosses out in Chapter 7. She is discussing the question of “abstract” vs. “concrete” reasoning, and ideas that “men’s morality” is based on rules and principles, while “women’s morality” is contextual and specific. She points out that, besides being pretty imprecise as to what “women’s moral reasoning” really is, this argument dissolves rather readily into vague mysticism about women’s “intuitive” mental processes. She proposes as an alternative a way of looking at moral behavior that is based on distinguishing between “rules” and “principles”. The definition she uses is that “rules” simply direct behavior: “Do not kill.” “Principles”, on the other hand, direct you to take certain things into consideration: “Consider whether your actions will harm another.” Then, to use an example from her book, a person might hold one rule: “Do not sleep with someone to whom you are not married,” and two principles: “Consider whether your actions will condone immoral behavior,” and “Consider whether your behavior will stand in the way of maintaining caring and relationships.” A person who chooses to maintain a close relationship with a daughter who was breaking the rule about sex and marriage is thus not seen as behaving in an unprincipled way, but as prioritizing one principle over the other, in a case in which the two led to contradictory behavior.

I think this is a fascinating, and quite compelling analysis. It is also quite close to a theory of moral behavior I’ve been kicking around, which I tend to refer to as my “natural historic” view of morality. (The name implies that this is a theory or hypothesis about what moral behavior in humans is “naturally like”, and not a normative or prescriptive theory, per se.) My natural historical view argues that human morality naturally takes the form of a collection of simple “rules” for behavior, which are not necessarily mutually consistent. (These “rules” in my theory thus play the role of both “rules” and “principles” in Grimshaw’s.) Social or other environmental circumstances have the effect of stimulating or reinforcing some rules, while suppressing others. Different aspects of the particular environmental context may stimulate contradictory rules. The rules, themselves, become part of the stimulus in a feedback mechanism: a rule, once stimulated or “fired”, may serve to have a suppressing or stimulating effect on others. Eventually, some rule (or some reasonably consistent set of rules) wins out, and the person takes moral action. (Of course, gridlock in the form of an inability to come to a decision may also win out.)
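Since all this feedback talk may sound hand-wavy, here is a toy rendering of the idea in code, using Grimshaw’s mother-and-daughter example from above. Every rule name, weight, and number is invented purely for illustration: features of the situation give each rule an initial activation, the rules then stimulate or suppress one another for a while, and whichever rule settles highest “fires” as the moral decision.

```python
# Toy sketch of the rule-feedback mechanism. All rules, weights, and
# numbers are invented for illustration; nothing here is calibrated.

STIMULUS = {  # how strongly each feature of the situation fires each rule
    "no_sex_outside_marriage":       {"rule_broken": 1.0},
    "do_not_condone_immorality":     {"rule_broken": 0.8},
    "maintain_caring_relationships": {"daughter_involved": 1.2},
}

FEEDBACK = {  # (source, target): positive stimulates, negative suppresses
    ("maintain_caring_relationships", "do_not_condone_immorality"): -0.8,
    ("maintain_caring_relationships", "no_sex_outside_marriage"):   -0.5,
    ("do_not_condone_immorality", "maintain_caring_relationships"): -0.3,
}

def decide(features, steps=20, rate=0.2):
    # Initial activation: sum the stimuli from the features present.
    act = {rule: sum(w for f, w in fires.items() if f in features)
           for rule, fires in STIMULUS.items()}
    for _ in range(steps):  # let the mutual feedback settle
        new = dict(act)
        for (src, dst), w in FEEDBACK.items():
            new[dst] += rate * w * act[src]
        act = {r: max(0.0, a) for r, a in new.items()}  # no negative firing
    return max(act, key=act.get)

print(decide({"rule_broken", "daughter_involved"}))
# prints: maintain_caring_relationships -- with these made-up weights,
# the caring principle wins out, as with Grimshaw's mother.
```

Gridlock, in this picture, would just be two rules settling at nearly the same activation, with nothing to break the tie.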

This view is consistent with a view of mind that I’ve been developing, under the influence of books like Kosslyn/Koenig “Wet Mind”, Patricia Churchland’s “Neurophilosophy” and George Lakoff’s “Women, Fire and Dangerous Things.” It is also consistent with a growing sense that I have that logical consistency, while certainly important, is grossly over-rated in most traditional philosophy, especially where it bears on the actual behavior of real human beings. (Lakoff’s book is particularly helpful, in this.) Another contributing factor in my thinking about this has come from primate ethological studies such as Jane Goodall’s “In the Shadow of Man”, and Frans de Waal’s “Chimpanzee Politics”. (De Waal’s “Good Natured: The Origins of Right and Wrong in Humans and Other Animals” is right at the top of my “to be read” pile.)

Since I’m billing this as a “natural historical” theory, I should provide some ideas on how my hypotheses might be empirically tested. I have, in fact, had some thoughts about this, and about the original source(s) of the rules (in our genes, and/or imbued by socialization), but I think this is a long enough post for now...