Last week, I discussed a parallel I see between counterfactuals and moral argument, one that tells against using the empirical unprovability of moral claims as proof for an extreme relativistic moral stance. I ended with the question, “What could be a basis for making non-arbitrary decisions in moral arguments?”
I wish I had a firm answer for this. Maybe someday I will. One valiant attempt to wrestle with this problem is Alasdair MacIntyre’s book, “After Virtue”. MacIntyre is concerned with this problem in a form similar to that identified by Jean Grimshaw in Chapter 8 of “Philosophy and Feminist Thinking” (see my 8/30 post) – the tension between morality as knowledge and morality as choice. Classical Greek philosophers such as Plato and Aristotle believed that the answer to the question “What is the right way for a person to live?” could be found by reason – in essence, they believed that morality was a matter of knowledge about the world. Modern philosophers, however, have tended to emphasize the role of “choice” in moral behavior, and to question the degree to which reason can answer fundamental moral questions at all. (See the nice, brief discussion in Grimshaw.) Absolute relativism is an extreme form of morality as choice. MacIntyre comes down squarely in the “classical” camp of morality as knowledge.
MacIntyre is concerned with an analysis of the virtues, taken as acquired human qualities which tend to enable us to achieve the “good” in our lives. He defines the virtues through a three stage process. First, he places them in the context of what he calls a “practice”, a cooperative activity which has “goods” internal to the activity. For instance, the game of chess has internal “goods” defined in terms of strategy and skill and other attributes of good play, which are fully knowable, and can be experienced, only by people who have committed themselves in a certain way to the playing of the game. Chess may also bring a player external goods – for instance, winning at chess may bring social praise, prize money, etc. – but, whereas the external goods may perhaps be achieved by cheating in some way (violating the virtues inherent in the practice of chess), the internal goods cannot. So “the virtues” are tentatively defined as qualities which help sustain practices.
In the second phase of his account of the virtues, MacIntyre places them in what he calls “the unity of a narrative embodied in a single life”. He starts with an interesting and compelling argument for the centrality of narrative to our understanding of our lives. He argues that, rather than the life of a person being conceivable as a series of actions or events, which we may choose to assemble into a narrative as a sort of literary exercise (or concerning which we may deny the veracity or authenticity of constructing any such narratives), in fact we can understand the concepts of “action” and even “person” only as abstracted elements of some narrative. From this, he claims it follows that a single life has a sort of narrative unity, and that to ask “what is good for me?” is the same as asking what sorts of things will lead to developing or discovering that unity. This leads him to define the virtues as, in part, those qualities which will help us in our quest for the answer to that question, and in part those qualities (whether the same or a more inclusive list) which will help us realize them once identified.
Finally, MacIntyre argues that the narrative unity of a life is comprehensible only in the context of a tradition, which he defines flexibly as “an historically extended, socially embodied argument, and an argument in part precisely about the goods which constitute that tradition.” Note in particular that a tradition so defined is not an inflexible body of practices that must be conservatively defended against change from any source.
After the third stage in the analysis, then, the virtues stand defined as those acquired qualities which (1) sustain us in practices, (2) help us find the narrative unity in our own lives, and (3) sustain the vital tradition in which we live.
Well, I can’t do justice to a complex book, especially one which, after one full and one partial reading, I still only imperfectly understand. There is much that rings true (or at least partially, or potentially, true). Other parts fail to convince. The concept of a practice, and the partial definition of the virtues therein, seem useful. I also find myself agreeing with MacIntyre insofar as he places narrative at the center of our understanding of (at least) our own lives and our social world; however, it seems to me that every person, rather than being something abstracted from a single narrative, is something at the nexus of an interlocking web of narratives – not just subjectively (because I am aware of things that you are not), but essentially, in that any narrative, as an act of analysis, necessarily abstracts from reality, and any abstraction necessarily leaves some things out. Like the abstraction inherent in ostension, discussed in my 9/5 post, the abstraction inherent in narrative is by no means arbitrary. It is constrained by what is “really there” in experience, but also by our immediate goals in analysis. To understand certain things (why I play guitar) we abstract certain things from experience. To understand others (why my sister likes red) we abstract others. I may have a role in each narrative – but to include every possible fact in experience in which I might conceivably play a role would be an “account”, if you could call it that, far too incoherent to be called a narrative, or, probably, to be comprehended at all. (Would it stop short of the entirety of the universe?) It certainly would be a poor candidate for “the narrative unity of a life”. So how do we decide WHICH narrative the virtues are supposed to support the development of? (To use the sort of language up with which Churchill might not have put.)
Similarly for tradition. The definition of a tradition as an ongoing argument is one I like, which frees tradition, at least in principle, from being a cage. But if a tradition is an argument, then which side of the argument should the virtues support? Why do we speak of “tradition”, at all? Would it be better to use the plural, “traditions”?
So I am not sure that MacIntyre has removed the arbitrary from his account of the virtues, and placed them on the plane of the knowable, rather than as objects of choice. I’m not sure he hasn’t, either. Maybe the virtues can be defined exactly as those qualities that allow us to navigate the choice of narratives, and decide which of the available candidates is the most central or important for our lives. (Maybe there really is one, and only one, that is best, and not many different but equally good.) Maybe the virtues are exactly those qualities which keep the central argument of a tradition alive and vital, allow it to adapt to changing circumstances, and keep it from degrading into arbitrary authoritarianism, or dying out altogether. Maybe it is possible to determine which virtues would facilitate these processes, without begging the question of what the outcome of the process ought to be. Maybe this is even exactly what MacIntyre meant.
At any rate, as I said, a valiant attempt, and a book well worth reading.
Another possible basis for making non-relativistic arguments for moral principles, which is not incompatible with MacIntyre’s (I think), is to appeal to arguments about the evolutionary basis of what we call moral behavior, as I discussed a few weeks ago. Of course, any such argument runs into Hume’s predicament of deriving “ought” from “is”. The fact that certain traits have evolved in us by natural selection does not necessarily mean that we “ought” to give expression to them, or culturally reinforce them. In fact, the idea that we “ought not” formed a large part of moral thinking when evolved traits were understood exclusively under earlier, “tooth-and-claw” understandings of natural selection. However, if humans have evolved traits by natural selection which tend to produce behaviors which, if practiced, tend to promote the adaptive success and posterity of the human species (both in the past and in the future), then there is a strong argument that those behaviors constitute a part of what is “good for a man” (in MacIntyre’s non-P.C. language; I will continue to try to use “human”, “human being”, or other non-gendered language in similar contexts).
Any argument for morality by natural selection must depend on the idea that society – the social group – is the number-one weapon in Homo sapiens’ arsenal of adaptive strategies. So moral behavior is that which enhances the well-being and stability of the group, and its ability to provide nurturance, support and safety to the men, women and children therein. This does not exclude egoistic as well as altruistic behavior – de Waal, in “Good Natured”, points out that without sustaining our own lives, we cannot lend succor to anyone else. But it does call for a balancing act between self- and other-directed behavior, as too much of either can be destructive of the whole. De Waal offers convincing examples and arguments that the rudiments of this balance are clearly discernible in apes and monkeys, and possibly in other social species.
In fact, my “natural historical” conception of morality (8/30 post) is exactly that of a constant, and irreducibly contextual, balance between divergent and possibly even contradictory claims.
Maybe morality, like MacIntyre’s traditions, must be seen as an ongoing argument – an argument amongst ourselves, on matters of general principles, and an argument within ourselves, in every case of practical application. Maybe the best we can hope to do is to clarify what are and are not the proper terms of debate.
References: The books that I referred to in this post are Jean Grimshaw, “Philosophy and Feminist Thinking”, Alasdair MacIntyre, “After Virtue”, and Frans de Waal, “Good Natured: The Origins of Right and Wrong in Humans and Other Animals”. (See also de Waal’s earlier “Chimpanzee Politics”.)
Monday, September 28, 2009
Monday, September 21, 2009
Counterfactuals and moral relativity
In my August 30 blog post, I segued from a discussion of Jean Grimshaw’s book “Philosophy and Feminist Thinking,” into a discussion of what I called a “natural historic” view of morals. I promised to return to the topic, which I am doing with this post, although not, I’m sorry to say, to the particular questions I had promised to address. (Someday...)
The fundamental difficulty in moral philosophy is to find some normative basis for behavior that has some claim to universality, but does not seem arbitrary. We are not comfortable with an absolute relativism which says moral choices are simply a matter of personal choice, and there is no “objective” basis for privileging one person’s, or culture’s, choices over another’s. But an authoritative absolutism, which says these particular moral premises are right for all people and for all time, is not very satisfactory, either. Or rather, many people are in fact quite happy with the authoritarian approach, but the authoritarians never seem to agree on a set of premises. So the question is, if we reject absolute relativism, how do we come up with a rational means of evaluating competing claims? And how can we justify that means, without simply elevating the relativism to another level?
Before I tackle the above “big question”, though, I want to address one particular argument for absolute relativism – the argument that since no particular moral stance can be proven, there is simply no option but to view moral behavior as a purely personal matter of preference, choice, or taste.
I think there is an interesting parallel between moral arguments and certain kinds of argument from counterfactual hypotheses. Arguments from counterfactual hypotheses can be of different kinds. For instance, I can take a rubber ball from my desk drawer, hold it out at arm’s length for a moment, then put it back in the drawer and say to you, “If I had released that ball, it would have struck the floor and bounced. It would have bounced several times, but the altitude of each bounce would have been less than that of the previous bounce.” As an example of a very different kind of counterfactual argument, I could say, “If the South had won the Civil War, slavery would have been abolished anyway, within 25 years.”
The first of these two counterfactuals describes a possible experiment within the context of a well-understood physical theory (Newtonian mechanics). The theory is so well developed that I can present a precisely detailed argument, including mathematical equations, connecting my prediction firmly to the basic laws of the theory, an argument which nobody who understands the theory would care to deny. Finally, and most importantly, I can if I wish demonstrate the truth of my prediction by actually performing a similar experiment; I can take out the ball again, and drop it. Our acceptance of the theory, and of the underlying general theory of knowledge through experimental science, are so great that having seen the experiment done once, we probably accept immediately that it would have worked the same way in the first (untried) example. If not, I can repeat the experiment over and over until the most confirmed philosophical skeptic begs me to quit.
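As an aside, the first counterfactual is so firmly theory-bound that the prediction can even be run in a few lines of code. This is my own toy sketch, not anything from the post itself; the coefficient of restitution `e` is an assumed parameter, and the only physics invoked is the standard model of a partially elastic bounce:

```python
# Toy illustration (mine, with an assumed coefficient of restitution e):
# the rebound speed is e times the impact speed, and peak height scales
# with speed squared, so each bounce reaches e**2 of the previous height.
# That is why every bounce must be lower than the one before it.

def bounce_heights(h0, e=0.8, n=5):
    """Peak heights of the first n bounces of a ball dropped from height h0."""
    heights = []
    h = h0
    for _ in range(n):
        h *= e ** 2  # each rebound peaks at e^2 of the prior peak
        heights.append(h)
    return heights

print(bounce_heights(1.0, e=0.8, n=4))
```

The point is not the particular numbers but the structure: given the theory, the truth conditions of the counterfactual are completely determined in advance.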
The second counterfactual argument in my example takes place within theoretical contexts (history, political science...) that are not as precisely defined as Newtonian mechanics. Most importantly, experiments of the kind I described above cannot be done. This is not to say that historical sciences cannot be empirical, but experiment in historical sciences is based on making predictions about patterns of unknown facts, and then seeking out and examining the facts to see if they conform to the predicted pattern. Experiments that go directly to arguments that “if this had happened, then that would have also happened”, as applied to specific cases, can never be done. This means that historical counterfactuals of this kind can never be proven or disproven with the degree of certainty that would apply to simpler, physical problems. Does this mean, as has been proposed on a similar basis for moral arguments, that such counterfactuals are purely matters of opinion – of personal taste – and there is no right or wrong to them?
The claim that historical counterfactuals are purely matters of taste flies in the face of common sense. For instance, if I say slavery would have ended within 25 years after the Civil War, if the South had won, you might reply by saying, “Slavery would have ended, but it could have taken 50 to 100 years.” Many people might think that your prediction was more likely than mine. (Note: I personally have no opinion on this. My counterfactual example is such on more than one level!) Let’s take a more extreme argument: “If the South had won the Civil War, they would have immediately freed the slaves and given them the vote.” Very few people, if any, would believe that this argument was true. So, in fact, we do believe that there is a sort of truth and falsehood to counterfactual arguments. This is because we live in (and believe we live in) a rule-ordered universe, and we believe those rules may be used as a basis for prediction. The intellectual process of making a counterfactual prediction is no different from that of making a future prediction; it is just that in one case the experiment may sometimes be carried out, and in the other it never can be.
The parallel between counterfactuals and moral argument suggests that the mere fact that moral arguments cannot be answered on a firm, empirical basis of the sort possible with Newtonian physics is not sufficient to prove that they are reducible to nothing more than personal taste and preference. On the other hand, it doesn’t prove that there is a better basis for them, either. In the counterfactual case, I found a parallel between the process of making counterfactual predictions and making (sometimes or somewhat testable) future predictions. What could be a basis for making non-arbitrary decisions in moral arguments?
I have some thoughts toward an answer (certainly not the temerity to say I have an answer!). But when I wrote it all down, it came to some 2,400 words – much too long, I thought, for a single blog post. So I’ll continue this next week. (Gives me more time to tinker, in any case.)
Sunday, September 13, 2009
A world for mind
Well, I finished the De Waal book, but I still haven’t been able to get back to the morality thing – I’m not too good at week-spanning posts, I guess. So I’m rehashing something else from my journal, somewhat rewritten for your benefit. If you’re out there...
Any world in which a mind could evolve by natural selection must have at least three characteristics: it must have stuff in it, the stuff must be lumpy, and the lumpiness must be orderly. “Stuff” is obvious – a world without anything in it would be no world worthy of the name. “Lumpiness” is the quality by which mind can make distinctions. (Plato proved in the Parmenides – if I can trust Cornford’s wonderful interpretation – that perfectly homogeneous stuff is indistinguishable from nothing at all.)
“Order” is that quality whereby mind can create useful rules about stuff. This is important, because making rules is what makes a mind useful, and usefulness is what makes natural selection preserve it. A mind could perhaps arise by chance in a chaotic world, but since there would be nothing for it to make rules about – nothing could be generalized – it would have no predictive ability, it could not enhance the reproductive success, or even the life experience, of the organism. It would be no good at all.
I don’t know how many specific conditions on the nature of stuff and its organization (order) are required to have a sufficient (as well as necessary) set of conditions for the evolution of some sort of mind. I suspect fewer than most people might think. I think we must make some sort of posit regarding the interaction of stuff, e.g., that inferences about important qualities of stuff (as they affect the organism) can be made more accurately with information about proximate conditions than with information about distant ones. Alternatively, this might serve as a definition of “proximate” and “distant”. The spatio-temporal variation of stuff must be such that most of the time predictions based on experience don’t become invalid before the organism has a chance to benefit from them. (“Experience” could here be defined as one particular set of interactions with proximate conditions.)
What is needed for natural selection is a world in which some lumps of stuff can interact with other lumps in such a way as to create near-perfect replicas of themselves. Self-replication amidst random generation of other things and random destruction of all things leads to an increasing population of self-replicators. Changes (mutations) that enhance the efficiency of self-replication increase the rate of population growth. Harsh circumstances or competition with other self-replicators may enhance the importance of some useful mutations, causing some forms of self-replicators to die out, while other populations continue to increase – et voilà! Natural selection.
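That process description is essentially an algorithm, and it can be run. Here is a toy simulation of my own devising (all the parameters – starting efficiency, population cap, mutation size – are arbitrary assumptions, not anything from the post): each individual is nothing but a replication-efficiency trait, children mutate slightly, and destruction is purely random.

```python
import random

def simulate(generations=200, cap=500, seed=1):
    """Toy natural selection: an individual is just a replication-efficiency
    trait in [0, 1]. Replication succeeds with that probability, children
    mutate slightly, and random destruction culls the population to `cap`."""
    rng = random.Random(seed)
    population = [0.1] * 20  # start with 20 low-efficiency replicators
    for _ in range(generations):
        offspring = []
        for eff in population:
            if rng.random() < eff:  # replication succeeds with probability eff
                child = min(1.0, max(0.0, eff + rng.gauss(0, 0.02)))  # mutation
                offspring.append(child)
        population += offspring
        rng.shuffle(population)        # destruction is random, not targeted...
        population = population[:cap]  # ...yet efficient replicators still win
    return population

pop = simulate()
print(f"mean efficiency after selection: {sum(pop) / len(pop):.2f}")
```

Even though nothing in the culling step prefers one individual over another, the mean efficiency drifts upward over the generations: differential replication alone is enough.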
Mind (following Dewey) is obviously adaptive, at least for motile self-replicators, who actively seek to influence, and therefore must have some ability to predict, the spatio-temporal distribution of stuff in their environment. So given the above conditions, and perhaps a few more I haven’t thought of, it seems to me that mind has a fighting chance to evolve.
The whole point of this exercise (which principles of good writing might have had me put at the beginning of the post, and not near the end – but I’m feeling contrary today) is to address the supposed mystery of how, of all the possible universes that might have existed, it came to pass that the actual universe (assuming there is only one) is one in which such apparently frail things as humans and human intelligence could survive and prosper. The supposed intractability of this question is apparently of great comfort to theists, who supply their own preferred answer.
Now God, of course, is really no answer at all (why, of all of the possible gods, did we get one who would decide to create a universe in which...? etc.), and the supposed statistical improbability is really irrelevant (if we didn’t have a universe in which mind could evolve, we wouldn’t be here to comment on it – the probability of an event that is known to have happened is 100%). But all that aside, I just don’t think that the evolution of mind is all that hard to credit. A world without stuff in it, and lumpy stuff at that, would be something we could hardly accept as being a world (or universe) at all. In a purely chaotic world, anything at all could be, but nothing at all would last, so no mind; but there’s no particular reason to believe that pure chaos is any more “likely” than a world with SOME sort of order. And in a world with any kind of order, it seems that some sort of experience could be derived from interactions among “proximate” lumps of stuff, which would have at least limited utility in making predictions about further experience. So all that is necessary is for some lumps of stuff to have qualities that allow them to self-replicate.
Okay, I’ve probably missed a few practical requirements (at least). And certainly the whole state of affairs is marvelous and wonderful (two words which are perhaps literally synonymous). But if a “miracle” is some occurrence that we just can’t rationally explain, then, folks, I just don’t see the existence of a world with mind in it as a thing miraculous.
Saturday, September 5, 2009
Ostension, abstraction and ambiguity
I wanted to continue with the topic of morals, this week, but the press of work in my “day job” didn’t leave me time to think through some things I wanted to talk about. So I’m posting this piece, which I wrote spontaneously in my journal a couple of days ago. The only connection is that I was reading the De Waal book, Good Natured, that I mentioned last week, and I was thinking about the evolutionary origins of human cognition. Anyway, hopefully I’ll get back to morals, next week. By then I should have finished De Waal, at least.
Bertrand Russell, among others, has pointed out the limits to definition. Words being defined in terms of other words, eventually one reaches the point where further verbal definition is possible only by permitting circularity. In formal languages, such as mathematical theories, the solution is to leave some terms undefined in the subject language, relying instead on definitions in some background language or “meta-language”.
In natural languages, the equivalent of meta-language definition is definition by ostension. One points to an object and says, “This is a table.” Or (to use W. V. Quine’s favorite example), one points to a furry animal and says, “Rabbit.”
What Russell doesn’t really point out, at least in my readings thus far, is the degree to which definition by ostension involves a process of abstraction. Quine makes much of the ambiguities involved – but this is a different, although related, point.
To identify a table, you need to determine the boundaries of “table”. You need to determine which particular parts of experience you are isolating (mentally) to define them as “table”. Your audience needs to perceive things similarly for communication by ostension to be meaningful. Especially to identify “table” as a general term, or identify a class “table”, you both need a similar functional/pragmatic relationship to a table object, otherwise the general term makes no sense. This is different than distinguishing between a rabbit and an undivided collection of rabbit parts (to consider Quine’s example). This is distinguishing between the rabbit and the ground it hops around on. An intelligent ant might find it impossible to distinguish between ground and rabbits by ostension, and might, in fact, find each term to be an incomprehensible generalization, rather like a human might react to a class containing jelly beans and squid.
The point is that the simple act of ostension, the most basic form of definition, involves a process of abstraction, and is therefore partly a function of the cognitive apparatus of the communicants. The cognitive apparatus in turn evolved into what it is as a function of the way the thinking organism interacts with its environment. This means, of course, that the cognitive component to ostensive definition, while it is “subjective”, is in no sense arbitrary – except, perhaps, at the margins of utility. It also means that since conspecifics share so much of the ways in which they functionally interact with their environment, members of the same species can generally go quite far in communicating meaning by ostension. Quinean ambiguities can be seen in context, here: there really is little functional difference, to a human, between a rabbit and an unseparated, reasonably complete, set of rabbit parts – hence the translational ambiguity that Quine finds.
References: Quine used his rabbit example frequently, most thoroughly, if I remember correctly, in Word and Object, but also in the essay “Ontological Relativity”. I’m damned if I can remember where I read Russell on definitions. (It was longer ago.) Quine, by the way, must be rolling over in his grave at my loose correlation of the notions of “class” and “general term”.