Sunday, August 30, 2009

A natural history of morals

I’ve been reading a great book, “Philosophy and Feminist Thinking”, by Jean Grimshaw, which I picked up serendipitously at Back Pages Books, in Waltham, MA (http://www.backpagesbooks.com/).

As is appropriate for such a broad title, Ms. Grimshaw covers a lot of ground, especially for such a short book. She hooked me with an initial section in which, in the course of discussing what it might mean to think of philosophy as “gendered”, she shows, by a very original argument, that Kant, for instance, held views on the nature of women which, to a sexism-aware reader, seem in the context of his general theory of moral behavior to relegate women to second-class humanity; in Kant’s case, however, the views on women could be thrown away, and the general theory would not need to be changed. Aristotle, on the other hand, was led by his teleological theory of natural history to see rational thinking as the most characteristic quality, and therefore the most appropriate end or goal, of human beings, because rational thinking derives from language, which is the one quality he saw as uniquely human. The fact that women have language, but that he believed women not to be rational in the same way as men, thus creates a contradiction in his philosophy; but to eliminate the contradiction by admitting women (or, for that matter, slaves) to be fully rational would undermine parts of his moral and political philosophy, which required the good life to be supported by the labor of women and slaves in order for the full rational nature of humanity to find expression. Thus Aristotle’s misogyny is integral to his philosophy, and his philosophy is more clearly “gendered” than Kant’s.

Another question she raises in the book – what it might (or might not) mean to speak of women as having a “nature” distinct from men’s, or of “women’s ethics or values” as being distinct from “men’s” – brings up some ideas I wish she had developed more fully. She mentions that many of the values which are often seen as being particularly women’s values – caring, attentiveness to relationships, alertness to the feelings of others – are actually behaviors that may be quite practical for survival for a person living powerlessly under the domination of others. A hyper-keen alertness to the feelings and moods of others, for example, is often a characteristic of people who grew up in an abusive environment (my example, not hers). I wish she had explored the political implications of this observation a little more – in particular, does it mean that some of these virtues might eventually dissolve if we won a more egalitarian world? I hope not!

But what I really want to talk about in this essay, because it gibes in interesting ways with some thinking I’ve been doing, is a little theory of moral behavior that she just sort of casually tosses out in Chapter 7. She is discussing the question of “abstract” vs. “concrete” reasoning, and the idea that “men’s morality” is based on rules and principles, while “women’s morality” is contextual and specific. She points out that, besides being pretty imprecise as to what “women’s moral reasoning” really is, this argument dissolves rather readily into vague mysticism about women’s “intuitive” mental processes. She proposes as an alternative a way of looking at moral behavior that is based on distinguishing between “rules” and “principles”. The definition she uses is that “rules” simply direct behavior: “Do not kill.” “Principles”, on the other hand, direct you to take certain things into consideration: “Consider whether your actions will harm another.” Then, to use an example from her book, a person might hold one rule, “Do not sleep with someone to whom you are not married,” and two principles, “Consider whether your actions will condone immoral behavior,” and “Consider whether your behavior will stand in the way of maintaining caring and relationships.” A person who chooses to maintain a close relationship with a daughter who is breaking the rule about sex and marriage is thus not seen as behaving in an unprincipled way, but as prioritizing one principle over the other, in a case in which the two lead to contradictory behavior.

I think this is a fascinating and quite compelling analysis. It is also quite close to a theory of moral behavior I’ve been kicking around, which I tend to refer to as my “natural historical” view of morality. (The name implies that this is a theory or hypothesis about what moral behavior in humans is “naturally like”, and not a normative or prescriptive theory, per se.) My natural historical view argues that human morality naturally takes the form of a collection of simple “rules” for behavior, which are not necessarily mutually consistent. (These “rules” in my theory thus play the role of both “rules” and “principles” in Grimshaw’s.) Social or other environmental circumstances have the effect of stimulating or reinforcing some rules, while suppressing others. Different aspects of the particular environmental context may stimulate contradictory rules. The rules themselves become part of the stimulus in a feedback mechanism: a rule, once stimulated or “fired”, may have a suppressing or stimulating effect on others. Eventually, some rule (or some reasonably consistent set of rules) wins out, and the person takes moral action. (Of course, gridlock in the form of an inability to come to a decision may also win out.)
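To make the mechanism concrete, here is a toy sketch of my own (nothing from Grimshaw’s book; the rules, situations, and weights are invented purely for illustration). It treats the rules as a little activation network: situational features stimulate or suppress each rule, fired rules feed back on one another, and whichever rule is most strongly activated once the network settles is the one acted on – or, if nothing wins decisively, gridlock.

```python
# Toy sketch (illustrative only): moral "rules" as a small activation network.
# Environmental features stimulate or suppress rules; fired rules feed back on
# one another; the most activated rule "wins", or gridlock results.

RULES = ["do not lie", "protect a friend from harm", "obey authority"]

# How strongly each situational feature stimulates (+) or suppresses (-) each rule.
STIMULUS = {
    "friend is in danger": {"protect a friend from harm": 1.0, "obey authority": -0.3},
    "authority asks a direct question": {"do not lie": 0.6, "obey authority": 0.8},
}

# Feedback between rules once they start to fire (weights are made up).
FEEDBACK = {
    ("protect a friend from harm", "do not lie"): -0.7,  # protecting may suppress honesty
    ("obey authority", "do not lie"): 0.4,
}

def decide(situation, steps=20, threshold=0.5, margin=0.1):
    activation = {rule: 0.0 for rule in RULES}
    for _ in range(steps):
        new = dict(activation)
        for feature in situation:
            for rule, weight in STIMULUS.get(feature, {}).items():
                new[rule] += weight
        for (src, dst), weight in FEEDBACK.items():
            new[dst] += weight * max(activation[src], 0.0)
        # damp the activations so the loop settles instead of blowing up
        activation = {rule: 0.8 * value for rule, value in new.items()}
    ranked = sorted(activation.items(), key=lambda kv: -kv[1])
    best, runner_up = ranked[0], ranked[1]
    if best[1] < threshold or best[1] - runner_up[1] < margin:
        return "gridlock"   # no rule wins decisively; no decision is reached
    return best[0]          # the rule that gets acted on

print(decide({"friend is in danger", "authority asks a direct question"}))
```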

This view is consistent with a view of mind that I’ve been developing, under the influence of books like Kosslyn and Koenig’s “Wet Mind”, Patricia Churchland’s “Neurophilosophy” and George Lakoff’s “Women, Fire, and Dangerous Things.” It is also consistent with a growing sense that I have that logical consistency, while certainly important, is grossly over-rated in most traditional philosophy, especially where it bears on the actual behavior of real human beings. (Lakoff’s book is particularly helpful in this.) Another contributing factor in my thinking about this has come from primate ethological studies such as Jane Goodall’s “In the Shadow of Man” and Frans de Waal’s “Chimpanzee Politics”. (De Waal’s “Good Natured: The Origins of Right and Wrong in Humans and Other Animals” is right at the top of my “to be read” pile.)

Since I’m billing this as a “natural historical” theory, I should provide some ideas on how my hypotheses might be empirically tested. I have, in fact, had some thoughts about this, and about the original source(s) of the rules (in our genes, and/or imbued by socialization), but I think this is a long enough post for now...

Sunday, August 23, 2009

Password plethora

If you’re like me, you have a multitude of login ID’s for a wide array of different computer systems and web sites. I think I have over 100, including some commercial web sites where I purchased something once and may never again. Obviously, committing this number of strong passwords and login ID’s to memory is impossible (at least for me!). So there’s a tendency to repeat passwords, never change passwords, and write passwords down, ALL of which are security weaknesses. To make matters worse, a lot of those sites store sensitive personal or credit card information – and online credit card theft and identity theft are growth industries.

My own system is to use a small number of base passwords, which I modify from site to site. The passwords themselves are strong passwords. (I won’t tell you my specific strategy for making them strong. There’s a lot of advice on creating strong passwords available online.) I record all my login data in a list on my computer – but I don’t record the actual password. Instead, I record a code indicating which base password was used, and indicating the modifications that were made. The strong password makes it difficult for someone to guess my password, even with the aid of password-cracking software, and the coded list makes it at least difficult for someone to figure out the passwords, even if they steal my list.
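As a minimal sketch of what I mean (the field names and “modification” codes here are invented for illustration; my actual scheme is different, and stays in my head), the list records which base password was used and how it was tweaked, never the password itself:

```python
# Minimal sketch (invented fields and codes): a login list that records WHICH
# base password was used and HOW it was modified, but never the password itself.
# The base passwords behind "b1", "b2", ... live only in the owner's memory.

entries = [
    {"site": "example-shop.com", "login": "jdoe", "base": "b1", "mod": "suffix:first3"},
    {"site": "example-bank.com", "login": "jdoe", "base": "b2", "mod": "prefix:last2"},
]

def lookup(site):
    """Return the reminder codes for a site -- not the password."""
    for entry in entries:
        if entry["site"] == site:
            return f'{entry["site"]}: login={entry["login"]}, base={entry["base"]}, mod={entry["mod"]}'
    return f"{site}: no entry"

print(lookup("example-shop.com"))
```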

Unfortunately, the strongest passwords and the most secret-agenty codebooks are still vulnerable to keyloggers – programs that lie in stealth on a computer and record the keystrokes of the password. The only defenses against these are anti-malware software and frequent password changes. Using anti-malware software is a no-brainer. But changing an entire password infrastructure and updating dozens upon dozens of logins is still a daunting and impractical task.

What is needed is for the whole identity-check process to be moved out of the responsibility of individual sites, and centralized in some way. Then, instead of recording its own password and ID information for every user, each site would just ask you for some sort of identity token, which it could verify through some central, secure site. Personal information could be passed from the central site to the commercial site on a “one use” basis; the central site would have a flexible system that let you specify which information a site was authorized to access. Each user would now have a small number of login ID’s at the central site (perhaps as few as one), and the use of strong passwords, memorized instead of committed to paper, and changed frequently, would be a lot less problematic.
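A toy sketch of the kind of exchange I have in mind follows (the names and data structures are my own invention, not any real protocol; existing schemes like OpenID work roughly along these lines). The commercial site never sees a password; it receives a one-time token from you and asks the central site to redeem it, getting back only the fields you have authorized for that site:

```python
# Toy sketch (invented names, not a real protocol): a commercial site never sees
# your password. It only receives a one-time token from you and asks the central
# identity provider to verify it and release the fields you have authorized.

import secrets

AUTHORIZED = {
    # what each site is allowed to learn about you, configured at the central site
    "example-shop.com": {"name", "shipping_address"},
    "example-forum.net": {"nickname"},
}

PROFILE = {"name": "J. Doe", "shipping_address": "123 Main St", "nickname": "jd"}

ISSUED_TOKENS = {}  # token -> site it was issued for (good for one use only)

def issue_token(site):
    """Central site: hand the user a one-time token to present to `site`."""
    token = secrets.token_hex(16)
    ISSUED_TOKENS[token] = site
    return token

def verify(token, site):
    """Central site: called by the commercial site to redeem the token."""
    if ISSUED_TOKENS.pop(token, None) != site:   # unknown, reused, or wrong site
        return None
    allowed = AUTHORIZED.get(site, set())
    return {k: v for k, v in PROFILE.items() if k in allowed}

token = issue_token("example-shop.com")
print(verify(token, "example-shop.com"))   # {'name': ..., 'shipping_address': ...}
print(verify(token, "example-shop.com"))   # None: the token was good for one use only
```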

Attempts have been made to set up systems like this: Microsoft’s Passport system comes to mind, as does Google’s cross-system login that works for sites as diverse as Google Maps and Blogger. PayPal also implements some of this. But these systems have not been broad enough, or well enough received. (Distrust of the motives and disinterestedness of the parties setting them up, as well as of their commitment to security, has probably been a factor in this.) Much of the infrastructure for a centralized login system exists, I think, in the digital certificate/digital signature industry. But it seems it will take a revolution for this to replace the plethora of individual logins that currently exists.

If anybody does try to set up a central login system, I hope they will keep in mind that people need to be able to have different ID’s. This may not be obvious at first, or it may seem that multiple identities would only be necessary for nefarious purposes. But think about it. We all play different roles in our lives, and it is very useful to be able to keep these separate. Work and personal life are obvious examples. Some people may also work two or more jobs, or run two or more businesses. People may have hobbies or volunteer activities they want to keep distinct. It is not hard to imagine a person having a work ID for her job, one for her moonlighting, one for her activities as a soccer coach, one for her music hobby, one for managing her portfolio, and one for general personal business. That is six “identities” and six login ID/password combinations, which is a bit much to manage – but a lot less than 50 or 100!
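Sketching that out (again, the structure and names here are invented for illustration), a single account at the central site might hold several distinct identities, each with its own profile, and you would choose which one to present when a site asks for a token:

```python
# Sketch (invented structure): one central account holding several distinct
# identities. A site only ever sees the identity you choose to present to it.

account = {
    "identities": {
        "work":     {"name": "J. Doe",  "email": "jdoe@dayjob.example"},
        "soccer":   {"name": "Coach J", "email": "coach@club.example"},
        "personal": {"name": "J. Doe",  "email": "jd@home.example"},
    }
}

def present(identity, site):
    """Pick which 'you' a given site gets to see."""
    profile = account["identities"][identity]
    return f"{site} sees: {profile}"

print(present("soccer", "league-signup.example"))
```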

Managing multiple identities would be easier (thinking even further ahead) if computer operating systems made switching users easier and faster. The epitome of a fast switch was found in old Unix text shell systems. You could just type “su”, a login id, and a password, and instantly you were someone else, until you logged out or typed “su” again. (The command “su” stood for “substitute user”, although it was so often used to switch to the all-powerful system user “root” that it became common to refer to it as the “superuser” command.) When you used the “su” command, nothing else changed. You were still in the same directory, working with the same files, on the same project, as you were before. Minimal additional system resources were used up. Compare this with “switching users” under Windows XP. (I am a Vista Resista’.) You choose “Log Off”, and wait, “Switch Users”, and wait (and wait), then log in again (and wait), and you wake up in an entirely new environment, with no easy way to navigate back to where you were before, AND it seems (at least to me) that the other user running in the background eats up a huge chunk of system resources, which can slow performance to nearly a crawl. Windows has something a little more like the “su” command in the “Run As” option, but this is obscure (I always have to hunt for it), and is restricted to one instance of one program at a time. (Hmmm... I’ve never tried it with Outlook. That might be an interesting way to temporarily change your default email address and folder structure. Probably doesn’t work.)

So anyway, I dream on. The future is coming... eventually. But will it be bright, or more of the same muddled murk?

Saturday, August 15, 2009

Longevity and obsolescence

Every once in a while I hold some utilitarian object in my hand and marvel at how long I’ve had it. The most recent example was a humble nail clipper. In our age, to have an artifact in continuous use for a decade or two is remarkable. My kids are amazed if something is older than a year or two.

This was not always so. In older, less productive societies than our own, useful objects were handed down for generations, and it would not be uncommon for an artisan to be using tools that had belonged to his or her grandparent. But our society is SO productive, and for some reason, rather than taking the rewards of our productivity in increased leisure, we’ve chosen to use that productivity to produce throwaway junk.

One person to recognize this tendency was that controversial and problematic economist, the late John Kenneth Galbraith. He made it, in fact, one of the centerpieces of his work, starting with The Affluent Society, and constructed an elaborate and interesting, if not ultimately convincing, system to explain it. But I find that the easiest way to understand our obsession with the new, and our readiness to discard the old, is to go back beyond Galbraith, and find an explanation in Marxian terms.

The driving force of a capitalist economy is the desire of people with a little property to become wealthy, and the desire of those who are wealthy to become wealthier still. The wealthy derive their wealth from profits – essentially a tax on the labor of those who work. In order to maximize this tax, the working hours of the laboring population must be maximized. What better way to do this, in a world in which technology has exploded productive capacity, than by establishing a cult of the “new”? Admittedly, this cult of newness builds on the natural curiosity of humans and other primates, but it also serves the interests of the owning classes very well; and can anybody argue that corporate advertising campaigns don’t do EVERYTHING in their power to reinforce and develop it?

Of course planned obsolescence, in which products are built to fall apart after a short period of time, helps too. Late capitalism carefully defends itself against any natural-selection-type processes of the market, which might discriminate against such ephemeral products. It is generally impossible to distinguish, at time of purchase, between an object that will soon fall apart and one that will not, and constant changes of brand and style protect the producer from the revenge of the customer when it comes time for replacement.

So this is where our dependence on “free enterprise” has led us. Instead of using our marvelous technology to produce (as it well could!) durable and well-functioning things that serve us for generations, free us from much toil, and greatly enhance our well-being, our monkey instincts are manipulated by the greedy to force us to work endlessly in the creation of mountains of garbage, partly to satisfy our own love of novelty, but mostly to serve the relentless drive for wealth and luxury on the part of someone else.

Monday, August 10, 2009

Democratic choice

One often encounters an attitude, in writings or discussions about society, that being a (“small-d”) democrat requires adopting a “take people as they are” philosophy; i.e., an attitude that faults any questioning of people’s pre-existing attitudes, practices, desires or beliefs. Interestingly, this attitude has distinctive versions on both the right and left. On the right, it appears in the context of market theory, specifically in the idea that outcomes from free market exchanges are not only economically more efficient (itself a challengeable claim), but are inherently more democratic than decisions made politically. On the left, it takes the form of cultural relativism – the idea that every society has the right to decide its own way of life, even if this involves practices (like female circumcision or stoning rape victims to death) that we may find morally reprehensible by our own lights. The strong form of this attitude would claim that any attempt to educate others as to our own values is propaganda, imperialism and attempted culture-cide.

I would argue that both of these points of view are not, in fact, democratic, but actually anti-democratic, even anti-political, as well as elitist and degrading. I would argue, further, that they are in fact contrary to the way most people would evaluate their own decision-making process.

The market-theory version of the “take-’em-as-they-are” argument privileges small scale, individual decisions made by people acting in isolation, over those which the same people would make after collective deliberation, discussion and debate. Specifically, the individual decisions are considered “free”, and collective decision making considered to be in some way coerced. Some factors in real-life decision making are either ignored, or explicitly assumed away under such a rubric as “enlightened self-interest”. In particular, it dismisses or denies the fact that isolated individual decisions are likely to consider only a limited range of factors directly and obviously related to the immediate choice, whereas collective deliberation allows the opportunity to introduce broader perspectives, show how the decisions of one might have an unforeseen impact on others, and consider how each seemingly small decision contributes to broader, social outcomes.

One reason for this attitude in market theory, I am convinced, is that market theorists want to find a “once-for-all”, “magic bullet”-style solution to social problems. Democratic theory, in favoring collective decision making, does not offer this – it only offers a process: deliberate, discuss (perhaps argue), decide – which includes no guarantee that the final decision will be a good one. How much more appealing the idea of an impersonal mechanism (the market) that, if left alone (if we only don’t THINK too much), will automatically “get it right”. But this attitude, as it distrusts ANY form of collective decision making in favor of isolated, individual choice and impersonal mechanisms, is not only anti-democratic but actually anti-political. It is also degrading, in that it implies that people’s worst, most unreflective, selfish and egoistic sides are somehow more “them” than their considered judgments. Further, it is often associated with a (degrading) assumption that people are more likely than not to mess up if they try to make a conscious decision, which in conservative tradition goes hand-in-hand with efforts to limit popular participation in political governance, preferring to restrict it to an educated (usually wealthy) elite.

The “take-’em-as-they-are” theory is also contrary to the way most people would actually think about their decision making. If you were to say to most people, “The decisions you make when you think about a question, collect whatever information you can on it, listen to other people, and discuss it with your friends and colleagues, are likely to be better decisions than the snap judgments you make without doing these things,” this would not be a particularly controversial thesis.

One can also argue – not without pitfalls, but hard to refute in principle – that even the individual market decisions people make AFTER they have discussed, deliberated, educated themselves, or been better-educated, even constrained, as the result of collective decisions, will tend to result in greater INDIVIDUAL satisfaction in the outcomes than would have resulted from those ill-considered, a priori ones. This argument, if accepted, eliminates any conceivable utility-theoretical basis for preferring the less-considered to the more-considered choice.

“Common sense” decision making models also contradict the extreme multi-cultural leftist argument for non-interference. Most people have a reasonable amount of confidence in their own ability to weigh the evidence and make their own decisions. If you say, “You shouldn’t listen to other people, because they will lead you down a garden path, and convince you of things against your better judgment, and you will come to regret it later,” people may say, “Yes, that could happen,” but by and large they will trust their own ability to separate the wheat from the chaff, and will not feel they need to do so by shutting their ears. (An exception, of course, is people with some pathological reason to deny reality, such as those who secretly fear their own position of privilege is unsupportable, and don’t want to listen to arguments that might turn this fear into an inescapable certainty.) Also, while people tend to be conservative about their own cultural values, they are not necessarily opposed totally and across the board to the idea of trying something new.

There are, of course, real difficulties in distinguishing between education and propaganda, and especially, there are real problems for democracy when “education” is presented within a political context marked by a pre-existing power disparity between the “educator” and the student. But to say that this means all attempts at education should be eschewed is itself elitist and anti-democratic – and self-contradictory. It makes no sense to say we should take people just as they are, and not try to educate them, because their opinions about how best to live their own lives should be trusted, and at the same time say they would not have sufficient wit to judge for themselves the pros and cons of contrary opinions we might express. People who adopt this attitude are expressing a basic distrust in political process, in favor of a head-in-the-sand, leave-well-enough-alone approach.

There are real moral problems, too, with adopting even mild forms of cultural relativism when the practices are extreme – slavery or genocide may be obvious examples, or, for that matter, stoning of rape victims. In these cases, despite theoretical pitfalls and slippery-slope arguments, it is morally appropriate to move beyond education and persuasion to outright coercion – if possible. Practical constraints here raise their ugly head, and the historical record of imposing morality at the point of a sword (besides consisting mostly of efforts later historians would not necessarily consider moral) has not been notably marked by practical success. In many cases, it may arguably only make matters worse. But I digress...

The imperfections of our “really existing” democracies do present problems with the idea of privileging political decision making over “invisible hand” market mechanisms, or in trying to promote culture-change by education and persuasion (let alone through coercion). The level of participation in public decision making in our societies is very low. Decisions are made by a tiny subset of the population, some elected by the people, others almost invisibly appointed to run agencies, or hired by those appointed, with broad latitude in how they implement the general directives that are set by the elected representatives. Public debate is largely moderated by privately (and elitely) owned news media, and the debaters mostly do not ultimately have a vote in the final decision making, which is often done by people far removed from any kind of democratic accountability whatsoever over their individual actions. The vast majority of the people only passively follow the debate, if at all, in the newspapers or (more likely) TV news stories, and limit their actual participation in the process to pulling the lever every 2–4 years. In the US, a small majority of those eligible to vote usually don’t even do that much.

But anyway, that’s a topic for another post…

Monday, August 3, 2009

Epiphanies

From time to time in my life, I have experienced epiphanies, of varying degrees of profundity. For instance, there was the time in my daughter Laura's high school algebra class, on parent day, when I was racing to solve the problem on the board faster than the kids could, and I realized that I really like mathematics. Up until that point, my conscious relationship to mathematics centered on the idea that I wasn't very good at it. I think there were two main reasons for this: 1) I had high school teachers who were not very good at math, and passed on their confusion and sense of difficulty to me, and 2) as an ill-prepared freshman MIT student, I was surrounded with people who were SO good at math that I developed a survival strategy based on doing the minimum necessary to get by, and deprecated both my own ability and my interest in doing more. Not until some 20 years later could I realize the amount of pleasure that mathematical reasoning could bring me. Note that nothing material had changed. I'm still not "good" at math, when measured on the MIT/Ivy League/graduate-math/physics/engineering student scale. I can't prove many theorems - but I can FOLLOW a lot more than I can prove (provided I start at the right level). AND I can derive significant enjoyment from doing so.

But the biggest epiphany in my life was when I realized that I have always been a philosopher. I mean “always” in the sense that my whole life, since my earliest memories, I have been challenging the meanings of the world, asking questions of adults, in my childhood, that they couldn’t answer, and drawing connections between answers that they found surprising; and I mean “philosopher” in the literal sense of a person who loves wisdom, at least to the extent that wisdom is represented by knowledge and truth, and who loves them to the extent of being willing to make sacrifices in their pursuit – especially, in childhood, sacrifices of social acceptance; but more, I feel, with Plato and Aristotle (once I finally got around to reading them), that pursuing these things is my greatest pleasure, and my truest calling – the thing that makes me feel, more than any other, both truly alive and truly me.

Similarly to my epiphany regarding mathematics, this epiphany about philosophy doesn't necessarily mean that I am a GOOD philosopher. Undoubtedly my ideas have often been naive and sophomoric; undoubtedly, because everybody's are, at some time in their life, but in my case, autodidact that I have become, it is more than likely that significant naiveté still remains. But that can't stop me from trying; can't, in the normative sense, because the endeavor is a noble one, but even more keenly in the empirical/natural historical sense. I can't stop trying, because this is who I am, who I have always been, and who I will always be until death or the ravages of age, injury or degenerative disease render my mind no longer capable of functioning. What IS a variable is whether I write, addressing my thoughts to a public audience, or just proceed as I have been, and impose them only on my wife and occasionally, a fortunate or unfortunate friend. If I had to go out in the world and find an editor and a publisher, this would probably be a foregone conclusion. But blogging is so easy...

So, I begin. I dump my thoughts to probably no one, but potentially the world. But whether I meet with approval, opprobrium, or simple apathy from the blog-following public, one thing I cannot stop doing is thinking, reasoning, exploring - in short, being a philosopher - because that is who I am. So I adopt a tagline from a tale by SF writer Edgar Pangborn: "Still I persist in wondering..."