Showing posts with label philosophy. Show all posts

Thursday, March 4, 2021

Meaning and Meaning

 The conflation of the concepts of "reason" as an explanation and "reason" as a purpose has done much mischief in philosophy. Similarly with "meaning" as an interpretation and "meaning" as a purpose or intent.

If I am a hunter tracking prey, its spoor has meaning to me. I can interpret it, adding something to my store of knowledge that will help me track its path. But nothing purposed for that meaning to exist. If I speak to you meaningfully, though, there is a meaning that is purposed (mine) and a meaning that is only interpreted (yours). In this case, your interpretation hopefully adds something to your store of knowledge that will help you track my intention (purpose).

Wednesday, October 28, 2015

Assumptions

As far as practicable, I would like to have no unexamined assumptions. Of course, there are practical limits. How do I know I'm not a brain in a vat? I don't. It is the nature of brain-in-a-vat arguments that you do not know they are false, ex hypothesi. All you can really say is, "I don't care. I am going to go forward, developing my ideas, ignoring that 'possibility.'" If it turned out that I was just a brain in a vat, and not a real live boy, that would be sad, but then, ultimately, not much sadder than being an epiphenomenal consciousness arising in a meat-sack, pursuing my self-conceived goals for a brief period of time before dissolving into a stinking pile of mush. So be that as it may. In the meantime, I've got things to do...

Saturday, October 12, 2013

Pondering Truth

When we speak of “truth” do we describe some single entity or quality in the world?  Or is “truth” hypostasized to simplify a complex of relationships between the inner and outer worlds that we do not (cannot?) understand?

Is “truth” an operational concept?  There is no operational difference “now” between a justified belief and a justified true belief (a.k.a. “knowledge”, at least in many philosophers’ systems).  I make the same choices, take the same actions either way.  But there may be an effect on the outcome (or not).  Is “truth” operational in evaluating outcomes, and therefore, perhaps improving the reasons for future beliefs?

Even in evaluating the outcome of past choices, I still don’t “possess truth”.  I only formulate, and attempt to justify, further beliefs (e.g., beliefs about my past beliefs).  In Dewey’s terminology from Experience and Nature, truth does not seem to be something we can “have directly”.

There is a state of the world, and there is a state of my mind (or yours), which is, itself, part of the state of the world.  The state of my mind includes a simplified, impressionistic “image” (in some neural/synaptic medium) of the state of the world – to the extent it is accessible to my imagination.  In that image, the state of my mind figures under such rubrics as “reasons” and “beliefs”.  Based on my state of mind (at any given time), I will make certain choices, and take certain actions.  Partly because of such actions, at some future time the state of the world, and the state of my mind, will be different.  By comparing the current state of my mind (and especially its world-image) to my memory of its prior state, I make judgments about the truth of my previously held beliefs.  I formulate (reasoned) beliefs about my prior beliefs.

Is “truth”, then, just something we invent to explain our relative satisfaction or dissatisfaction with the outcomes of our past endeavors?  The trouble with that idea is that our feeling of satisfaction may be connected to aspects of the outcome which some mythical unbiased observer would be unwilling to call “truth”.  For instance, a racist may desire to join an organization of like-minded individuals.  His belief that African Americans are genetically inferior to “Aryans” may help him in that endeavor, and thereby lead him to be satisfied, but we would not want to call his belief “true” on such grounds.  Then there is the growing body of psychological research indicating that our brains have evolved so as to readily adopt certain beliefs which are adaptive, but not (necessarily) “true”.  An example is the “confirmation bias”, whereby our minds tend disproportionately to accept data that confirms what we want to believe, and reject data that disconfirms it.  Another example is the “hyperactive agent detection” that Daniel Dennett discusses in Breaking the Spell (among other places).  Other examples can be found in various papers in Naturalizing Epistemology (Hilary Kornblith, ed.).

It seems there is something actual about the relationship between our mental states and the world state that we are trying to capture with the concept “truth”, which is related to, but not simply reducible to, our degree of satisfaction with outcomes.  I admit, at this point in my life, to still wondering (persistently) about exactly what it is.

Postscript on Confirmation.  As with many of the musings on Persistent Wondering, this one pretty much starts “where I am” and doesn’t make much of an effort to relate to an audience that may not be starting at the same place.  I apologize for that… but after all, I am portraying myself as a “wonderer”, and making no claims to be a teacher.  (Lame excuse.)

In this essay in particular, though, it seems to me that many people may wonder why I would feel – at all – that “truth” is not directly accessible.  In many (most?) of our everyday interactions with the world, confirmation of our beliefs is direct and immediate, and seems incontrovertible.  I believe I left my keys on the kitchen counter.  I go downstairs – I either find them there, or I do not.

Other “facts” are not so easily confirmed or disconfirmed, though.  There are the challenges of philosophical skepticism.  How do I know I am not dreaming?  Or hallucinating?  Or a disembodied brain kept alive in a vat, with my neural inputs manipulated by alien scientists?  Then there is the question of “modeling”.  Complex physical or social systems cannot be grasped by our minds in their complete and detailed totality.  We need to abstract from them, simplify them, in order to understand them.  Do concepts like race, class, culture, the national income “truly” conform to some real world objects, and if so exactly what and how?  How do we indubitably confirm or disconfirm them?  Theoretical physics provides examples, also.  Do the objects of modern theoretical physics – quarks, bosons, photons – “really” exist, or are they just a convenient (not necessarily unique) way of mathematizing experimental results?  Are the relatively abstract and indirect confirmations of physics experiments really of the same class as our confirming (by looking) that our keys are on the counter?

But really, ALL of our knowledge involves some such modeling (abstraction and analysis).  All the objects we conceive involve some level of abstraction – focusing on certain aspects of experience and ignoring others.  Something of this is suggested by Heraclitus’s statement thousands of years ago that “You can never step into the same river twice.”  What exactly is a river?  Is it the specific water molecules?  But they start out in a glacier and end up in the ocean.  Is it the banks?  But they shift with time as soil particles are removed and deposited.  Is it some abstract (fractal?) pattern that encompasses changes over time?  What is the “truth” of the matter?

Our mental states (beliefs and so on) consist in synaptic patterns, roughly, stable-yet-changing patterns of chemical interactions between neurons.  The state of the world consists in the interplay of forces amongst patterned matter and energy, extending strongly or weakly between the various points of the entire universe.  It is not clear that some unique and transparent correspondence can be established between those two things and unambiguously labeled “truth”.  On a conceptual, theoretical level, the question of the truth of our beliefs, their confirmation or disconfirmation, is not at all a trivial one.  Although on the pragmatic level of day-to-day actions, it very often is.

Sunday, September 1, 2013

Truth, knowledge, skepticism, and stuff…



A standard philosophical definition of knowledge goes as follows:

(I know A) <=> (I have a justified belief that A) and (A is true)

As I write this essay, I’m in the middle of reading Robert Nozick’s discussion of knowledge and skepticism in Philosophical Explanations.  Nozick uses a very specific concept of “truth tracking” which leads to some very interesting results, but is essentially (it seems to me) just a particular approach to defining justification, one that leads to a coherent (I think), but sometimes quite peculiar conception of knowledge.  I find myself working around a rather different rejection of philosophical skepticism.  What follow are musings, not intended to represent a complete expression of a developed idea.

If some particular skeptical scenario (SK: SK => not-A) were true, then A would be false, and I would not know A.  However, A, so not-SK, and I do know A.  The skeptic’s objection fails, because SK is not true.
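
Schematically, the argument in that sentence is a bare modus tollens (this is just the paragraph above set out in steps, nothing added):

```latex
\begin{align*}
&\text{(1) } SK \rightarrow \lnot A && \text{the skeptical scenario entails that } A \text{ is false} \\
&\text{(2) } A && \text{premise} \\
&\text{(3) } \lnot SK && \text{from (1) and (2), by modus tollens}
\end{align*}
```

The skeptic will of course ask what entitles me to premise (2); everything turns on whether that premise can be justified, which is where the discussion of justification goes next.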

Truth is outside my direct experience.  It is not something I have direct access to.  To use Dewey’s language from Experience and Nature, it is not something I “have directly”.  All that I have directly is justification (evidence, inference rules, trusted sources, etc.).  There is no way to operationalize truth; I can only operationalize justification.  One reason that theories of knowledge remain so problematic is that knowledge, as distinct from justification and belief, has exactly zero influence on our behavior at time t = now().  We try to justify our beliefs, and to act on justified beliefs.  Later, we may come to believe that our prior beliefs were not (properly) justified, i.e., were false.  But at any given moment (outside of certain contexts of philosophical inquiry) the question, “Are my beliefs true, or merely justified?” is never a useful one.  The question, “Is this belief justified?”, however, is (always?) an important one.  Operationally, the question of “truth” always devolves to:  from perspective B (perhaps a later one), do the beliefs held to be justified from perspective A still appear justified?

There is no way to operationalize (general, philosophical) skepticism at all.  Skepticism places all logical possibilities on an equal footing.  All available evidence is discounted, and there is no way to distinguish one possibility from another, to prefer one over the other.  Skepticism cannot give any positive plan of action.

Philosophical skepticism of the sort I am referring to (what Hume referred to as Pyrrhonism) must not be confused with critical evaluation of our methods of justification.  Such evaluation is completely operational, and it is essential to ensuring that those methods track truth as closely as possible.

P.S.  Nozick, I think, expresses an insight similar to mine above about truth and experience when he says (p. 232 in my paperback copy) “We have said that knowledge is a real connection of belief to the world, [which we call] tracking, and… this view is external to the viewpoint of the knower, as compared to traditional treatments, [though] it does treat the method he uses [to track truth] from the inside, to the extent it is guided by internal cues and appearances.”

P.P.S.  Anticipating certain gleeful but misguided reactions to my rejection of “skepticism”, I want to point out a deviation between a common vernacular use of the word “skeptical” and philosophical skepticism, to wit, the phrase: “Skeptical about God.”  Disbelief in God (or even just doubt) is usually based on a belief that you can trust the evidence to lead you to true conclusions about the world.  Doubt about God stems from observing the lack of positive evidence, and disbelief from observing that the available evidence is incompatible with the God hypothesis.  These methods of justifying belief are exactly antithetical to philosophical skepticism, which holds that no amount of evidence is ever sufficient to justify a belief.  Theists, in fact, often (mis-)apply a skeptical argument when they claim that “You can’t prove a negative result,” not realizing that, if true, this argument merely puts their God on exactly the same footing as being a brain in a vat manipulated by alien scientists in the Alpha Centauri system.

P.P.P.S.  (9/7/2013)  I need to correct my statement that Nozick's concept of "tracking" is just a special case of justification.  (He is at pains to distinguish them on p. 267, by which time I was in a position to realize what he meant.)  Tracking, like truth, is external:  our methods really do track truth.  Justification is internal: we believe our methods track truth, and are therefore reliable or justified.

Saturday, June 22, 2013

Choice and Determination



If we choose, and our choosing is not pure, unanalyzable magic, then there must be some mechanism, however complicated, by which we make the choice.  Logically, it seems that either this mechanism must be “deterministic” in the sense that if the mechanism is in exactly the same state, and receives exactly the same inputs*, then it will decide in exactly the same way; or there is some purely random component (e.g. quantum uncertainty) that affects the choice.

Neither of these options “sounds like” free will.  But if this is not free will, and free will is not simply a meaningless concept, or at least irrelevant to the choices we actually can and do make, then there must exist some entity whose choices are being constrained by this process.  That is, there must be some entity that is not “free”.  But if some entity is unfree, what entity is it?  If “you” are not exercising “free will” in these conditions, who are “you”?

We, the material, biologically constructed entities that we are, do make choices, and our choices are made in an effort to further the ends that we have conceived for ourselves.  This much is manifest.  It seems meaningless to me to say that we do not have free will in these choices based on some highly abstract argument that somehow we will never make the choices we don’t end up making.  What does it even mean for us to somehow be “able” to make the choice we do not, in fact, end up making?  I don’t see how that idea can be rendered coherent, except by reference to physical/mechanical potentialities, and epistemic possibilities that are fully compatible with the “deterministic” mechanisms of choice referred to above.    

In any case, other than these organic mechanisms, which make choices in the manner that we, in fact, make them, there is no entity here that can be “unfree”.

Of course, our choices are constrained, by the materials and opportunities available in our environment, by our abilities, physical and cognitive, by our histories, including things that have happened to us and previous choices we have made.  And of course the ends and goals that we conceive are formed by similar internal and external factors.  But for this to mean that our choices, from among the options that seem epistemically possible to us at the given moment, are “unfree”, there would have to be some self, other than the selves so constructed by history, biology, and prior choice, whose freedom was constrained, whose will was being denied, by the choices the biological “we” are making.  I cannot, for the life of me, conceive of what that other self might be.

I conclude that to say that people do NOT have free will makes no sense.  All that is left is to either accept that the way we actually choose represents free will, or to take the question of free will as an ultimately meaningless “pseudo-question”.  But at this point the choice is purely “semantic” in the colloquial sense of “just a matter of words”, and it makes no real, philosophical difference which wording you pick.

*The conditions of the mechanism being in “exactly the same” state and receiving “exactly the same” inputs  are considered only to focus on the logical argument.  They are both, in practical reasoning, impossible.  The universe is far too complex for it ever to be put back in the same state twice, and as for our “mechanism”, it is altered by every experience, and therefore can never be in the same state twice.  Even settling for “highly similar” is dubious.  Most significant human choices are, neurologically, a chain (actually a highly “parallel” web) of micro decisions, each of which potentially alters decisions further down the line, possibly in a binary manner (on/off – no shades of gray).  The end result, it seems to me, must show sensitive dependence on initial conditions, whether the component steps do, or not.  (They may.)  Thus even “similar” inputs to a brain in a “similar” state can lead to widely different outcomes.
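
The sensitive-dependence claim in this footnote can be illustrated numerically.  The logistic map below is a standard toy chaotic system – not, of course, a model of neural micro-decisions – offered only as a sketch of how “similar” initial states can lead to widely different outcomes:

```python
# Toy illustration of sensitive dependence on initial conditions,
# using the logistic map x -> r*x*(1-x) with r = 4 (a chaotic regime).

def logistic_step(x, r=4.0):
    return r * x * (1.0 - x)

def trajectory(x0, steps=50):
    """Iterate the map from x0, returning the full sequence of states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1]))
    return xs

# Two "highly similar" initial states, differing by one part in ten billion.
a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)

# The gap between the trajectories grows from negligible to order one.
gap = [abs(x - y) for x, y in zip(a, b)]
```

The initial gap of 10^-10 is roughly doubled at each step on average, so within a few dozen iterations the two trajectories bear no resemblance to one another – which is the point of the footnote: even near-identical inputs to a near-identical mechanism need not yield similar outcomes.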


Note:  I wrote most of the above text while I was reading Ch. 2 in Daniel C. Dennett’s book “Elbow Room: The Varieties of Free Will Worth Wanting”, a book which I have subsequently completed.  I think my views on this, as on many other topics, are broadly compatible with Prof. Dennett’s.

Sunday, February 10, 2013

Similarity and Natural Kinds

Preface:  I’ve been reading “Naturalizing Epistemology” by Hilary Kornblith, Ed., which reprints a couple of papers by W.V.O. Quine, “Epistemology Naturalized” and “Natural Kinds”, along with papers by others that relate to questions raised by Quine in those essays, or raised by others in response to those essays.  The present post was inspired by ideas in the first few essays in that book.

“Similarity”, I think, arises from replaceability.  Two objects are similar to the degree to which you could replace one with the other without salient change in the state-of-affairs under analysis.  Obviously “salient” and even “change” are terms in need of further analysis.

Perceptually, we have “hardwired” (innate) and “softwired” (learned) structures in our brains (primarily – with some functionality distributed elsewhere in our somas) to identify different kinds of similarity.  Hardwired structures include things like edge detectors and angle detectors.  Softwired structures are built up from such.  Questions that can be asked include:  How would such features have come to evolve?  Why are they so apparently efficacious?  Just how reliably efficacious are they?

The nature of organisms suggests categorization of environmental factors on the basis of their effects on the organism.  Some factors encourage growth or reproduction.  Some limit these or produce death or extinction.  With complex organisms, some factors have these effects on a part of the organism, only, and may have very different effects (or little effect, at all) on other parts – stimulating production of this hormone, while retarding production of that, for example.  Replaceability of these factors with respect to their organic effects is the basis of categorization, and the source of similarity.  The salience of change can be suitably quantified in each case (e.g., rate of cell reproduction or hormone production).

Since they derive from our interactions with the world, our categories and sense of similarity relate closely to the world.  But there is still an element of subjectivity.  All categories are relative to an analysis.  Some categories may be relatively stable across all conceivable analyses we may do.  These we tend to characterize as “natural kinds”.  But even natural kinds such as species can have fuzzy edges for certain analyses.  (Exactly where, on the evolutionary tree, does a particular species begin?)  An intelligence arising from a very different history (a silicon-based life form, say, or a spontaneously evolving “artificial” intelligence) might cleave the world very differently than we do.  But in some categories (mineralogy, biological lineages) if it made an analysis at all, there would probably be considerable correspondence with our own.  Natural kinds are not meaningless, nor arbitrary, but neither are they perfectly precise.

Wednesday, December 29, 2010

Questioning “Cogito”

It seems to me that Rene Descartes’ famous “Cogito ergo sum” begged the question.  His premise “I think” depends on his conclusion, “I exist”.   

Certainly the stream of impressions he adduces in support of his conclusion demonstrates something – but what?  A more appropriate conclusion would be along the lines of “Experience manifests, therefore something is,” or, more succinctly, and tautologically (this is not a bad thing): “Experience is, therefore something is.”  Or, to once more in this space quote the words (if not, in this case, the meaning) of the X-Files’ Fox Mulder:  “There is something out there.”

To get to the “I exist” actually requires a number of additional premises, first of all that this experience means something – that it is rule-based – and possibly even that it is connected to something we might call a “real world”.  Descartes makes this connection only after establishing “ergo sum”, and then he makes it only by reference to a perfect and benevolent God – a premise that seems as improbable to me as it seemed certain to Descartes.  My own preferred premise for the orderliness of Experience is the anti-skeptical one:  “If I don’t assume some rule-based nature to Experience, then there is no point in going on further with analysis”.  This is a weaker premise than Descartes’, but it has the decided advantage of being not false.  There is also the empirical argument, that I can test the rule-based nature of Experience experimentally, and therefore demonstrate it.  While testability is important, its importance is sometimes overestimated, since I cannot possibly subject every needed rule about Experience to my own personal set of empirical tests.

As for the “real world”, well that’s really definitional, isn’t it?  I.e., a term of convenience more than a necessary premise.  If I can predictably alter Experience by action, and if I can generalize the rules for doing so such that some person on the other side of the planet whom I’ve never met can apply them to some experience I’ve never had, and also alter Experience to conform to the predictions, then it may be useful to introduce the idea of a “real world”, but this concept can hardly be constructed so as to be testable (hence meaningful) beyond the definition just given.

Of course, as I have implied elsewhere in this blog (“Starting from Zero”, May 30, 2010), I do not really accept the Cartesian method of applying radical doubt to build up a whole body of knowledge about the world from a bare minimum of assumptions, let alone “clear and distinct” (hence immediate and unquestionable) impressions.  We always come to any problem with a whole theoretical structure, which we can twist, bend and otherwise alter piecemeal, but which we can never throw out en bloc to start afresh.   

Nevertheless, it can be fun, and sometimes instructive, to play with the Cartesian approach as with a game.  And then, developmentally, we do, at some point in our lives, sort of start to build a world view from experience upon a base of the biologically given.  So it seems perhaps some form of the Cartesian approach may have a certain psycho-historical relevance.  But this is only more or less.  Given that the biochemical environment in the womb begins to affect development even before the brain begins to develop, there are no clear limits between “experientially derived” and “biologically given”.  It’s a problem with inherently fuzzy boundaries.  It is worth noting, too, that the psychological “program” or world view starts to develop long before the wiring of the biological “machine” – the adult brain – is complete.  Starting from zero remains problematic – in this case, because it’s hard to define the zero point.

Considering the developmental problem, I have written, before (“The house I live in”, October 24, 2009) that I think our concept of “I” or self is not inborn, but is learned from experience, and not at a particularly early point, either, which also contradicts the primacy of Descartes’ “ergo sum”.

Another way it is interesting to contrast Descartes’ “Cogito...” with modern thinking is in the nature of the self, itself.  Descartes, of course, assumed some privileged level of granularity associated with the self.  As a Christian of his age, he could hardly have done otherwise – the individual self was the level at which the soul was allocated, the soul was essentially indivisible, and was in a sense synonymous with the self, so clearly an individual was a discrete, and in an essential sense indivisible unit.  A person could colloquially be of “two minds” about something, but that didn’t disturb the essential unity – one of those two opinions had to be the “true” one (at least in the sense of “true to oneself”), and the other false.

Over the course of the twentieth century, the unitary nature of the self came under attack philosophically, psychologically and neurologically.  One good book that includes a good analysis of the nature of the self is Daniel Dennett’s Consciousness Explained.  The impression I drew from that book (consistent with others I’ve read, see note on references) is of a brain consisting of numerous centers, each evolved perhaps for one primary purpose, but co-opted to other uses, in other networks, by the opportunistic nature of evolution (see “Adaptation? Or Carpe Diem?”, November 30, 2010), with a conscious sense of self perhaps developing only rather late as a sort of supervisor circuit riding on top of this loose, collaborative network of parts.  In this analysis, to be of “two minds” about something may literally mean that two different parts of your brain produce differing explanations or predictions, or desire different goals.  There is no meaningful sense in which one of these is more “true” to your “essential nature” than the other.

But as I said in “The house I live in”, the constructed and non-atomic nature of my self doesn’t make it a useless concept.  As the wag said, and insofar as my day-to-day experience goes, I think I am, therefore I am, I think.

Useful references, cited or otherwise: Rene Descartes, Meditations on First Philosophy and Discourse on the Method of Rightly Conducting the Reason, and Seeking Truth in the Sciences, ch. 4; Gary Hatfield, Routledge Philosophy Guidebook to Descartes and the Meditations; Daniel C. Dennett Consciousness Explained; Stephen M. Kosslyn & Olivier Koenig, Wet Mind: The New Cognitive Neuroscience; Patricia Smith Churchland, Neurophilosophy: Toward a Unified Science of the Mind/Brain; Joseph M. Schwartz, The Future of Democratic Equality; John Dewey, Experience and Nature.

Saturday, August 21, 2010

The Mulder Argument

The essence of mystical/magical/religious thinking, that which distinguishes it from truly philosophical thinking, may well be something that could be termed “the Mulder Argument” (after fictional FBI agent Fox Mulder on the television show “The X-Files”).  This argument is given in four words: “I want to believe.”  I’ve run across this argument in many forms.  Once, when I had backed a theist who thought she could prove the existence of her God completely into a corner, she escaped by saying, “I just can’t accept a world like that.”  Others, asked to provide any sort of justification for their claims fall back on, “I just know.”  Philosophical thinking, on the other hand, must be much more ruthless than this.  If philosophy is “love of wisdom”, then you must love it wherever it leads you.  If you don’t like the direction, learn to live with it.  I summed this up to myself, years ago, in an epiphany which I worded as: “My desires are not probative.”

Since it is very central to my view of the world that all important traits of organisms can be explained by evolution by natural selection, I have had some concerns over the question:  “How could such a false argument become so ubiquitous in the human species?”  Isn’t a trait counter-adaptive if it consistently leads one to believe things that are false?  I think I’ve finally come at an answer to these questions, and come at it through another puzzle, specifically, through trying to reconcile my own principle of the non-evidentiary nature of desire with a premise given at various places in Marx and Gramsci, viz. the philosophy of praxis, which states quite the opposite, i.e., that reality is made by humans, especially acting socially, and that in this sense, the desire, or more precisely the will, is in fact probative.

I think it all boils down to taking the correct domain for each argument.  Reality is not something that is taken passively by organisms, but is something dynamically modified by them in the course of their existence.  This is a very important insight.  Organisms alter their “environment” just as much as the environment causes modifications in organisms.  In this sense, for a thinking organism like a human, believing in something can often be instrumental to its being so.  But there are limitations on this principle.  If I believe that I will change the government, then perhaps I can.  But if I believe I can literally move mountains with my mind, or with prayer, then that will not happen.  I will have to add a liberal quantity of muscle, time and the application of earth-moving equipment to my faith, if I want to bring this result about. 

Human minds have evolved to be capable of rational thinking, but it is by no means assured that all thinking that goes on therein will be rational.  If I believe that I will change the government, or find food, or win a fight, or find a wife, then perhaps I will.  If I believe that I cannot, then certainly I will not.  It would be more rational to believe that “maybe I can, if I try hard enough,” rather than simply “I will,” but the more rational belief is more complex, and more susceptible to doubt, and may therefore be inferior to the simple (if less correct) version in many matters of practical success or survival, i.e., matters directly susceptible to natural selection.  This would be enough for evolution to select for a tendency to find your desires probative.  In order to be equally efficacious, in many practical life matters, as the irrational belief, the rational belief would have to be combined with an iron determination.  Iron logic and iron determination may be a more admirable combination than sloppy thinking and blind faith, but it is perhaps a more difficult combination for random mutation to generate.  As for the cases in which blind faith leads to an incorrect judgment, well, believing in God or not believing may in general have far less effect on the existence or non-existence of your posterity than whether or not you believe you will successfully woo your wife.

Note that I am not saying that there is a “gene” for the Mulder Argument.  I am just saying that a predilection for believing what you want to be true, however complexly coded in genes, memes and individual experience, may in general lead to some differential reproductive success.

But none of this makes blind faith a rational argument for deducing the nature of things as they are, as opposed to (within a limited domain) things as they will be.  The will to believe probably yields even more success if it is tempered with a realistic appraisal of when to apply it and when not to (and, just possibly, when it really makes no difference one way or the other).  This is essentially what Gramsci was trying to get at with his oft-cited “pessimism of the intellect; optimism of the will.”  Despite the theological overtones, this same insight is reflected in a quotation popular within another tradition I’ve had some exposure to:  “Oh Lord, give me the serenity to accept the things I cannot change [including the evidence of your own non-existence!], the courage to change the things I can, and the wisdom to know the difference.”

Tuesday, August 10, 2010

Creating absurdity

Gramsci, in an offhand comment in Americanism and Fordism (Selections from the Prison Notebooks, p. 279), speaks of modern society creating “absurd positions”.

The idea that society can create an “absurd position” is an interesting one, with some (to me) less-than-obvious philosophical implications.  The concept of absurdity represents a disjunction between an observed condition and reason.  Can non-social reality, for instance “create” an absurd condition?  If it does, the presumption is that the flaw is in the reasoning, not in the underlying reality, since the purpose of reason is to understand reality, and the presence of a disjunction indicates that reason has failed, in some way, to do so.

With social reality, however, the situation is different.  This is because social reality itself involves an element of reason.  Social reality is the result of the individual and collective strivings of men and women (as well as their interactions with, and constraints on them from, the non-social world).  Their strivings reflect their goals, aspirations and beliefs, and their (individual and collective) reasoning with respect to these.  These goals and beliefs and this reasoning can include contradictions both within an individual element, and between elements of the aggregate.  The aggregate, or more properly Gestalt, is “modern society”.  When looking at the Gestalt as if from outside (really, of course, from one particular “inside” vantage point), some of these contradictions become manifest, resulting in the perception of an “absurd position”.

Of course, this perception of absurdity is itself relative to a particular analysis, involving a particular set of goals, beliefs and reasoning, which may contain its own errors and contradictions.  Locating the source of an apparent absurdity, therefore, whether in the “social position” or in the observer’s reasoning, is a non-trivial problem.  I do not wish to introduce the kind of relativism that says there can be no truth to the matter – that all is just a “matter of opinion”.  But developing a definitive, irrefutable argument in support of one’s interpretation – an argument that can successfully “take on all comers” – is probably impossible.

That doesn’t mean we shouldn’t try, of course.  Striving to develop such arguments, and convincing as broad a swathe of society as possible, is part of one of the fundamental problems of politics, the struggle for hegemony.

Sunday, May 30, 2010

Starting from Zero

I’ve never read Hegel, but I recently reread Russell’s chapter on Hegel in A History of Western Philosophy as background to some early Marx we were reading for the Democratic Socialists of America Boston local’s reading group. Per Lord R, Hegel starts from the idea that “the Absolute” is “pure being”, deduces that this means it has no qualities, and finds this self-contradictory because to have no qualities is to be nothing. This is the antithesis – “The Absolute is nothing” – and leads to the first synthesis: “The Absolute is Becoming.”

What struck me is that Hegel thinks he is starting from zero, using only one premise – that reality cannot be self-contradictory – but of course he really is applying many premises: that there is something that may reasonably be thought of as “absolute reality”; that “pure being” is a meaningful concept, which entails having no attributes; that to have no attributes (or qualities) is to be nothing; that the union of being and being nothing is becoming.

It seems there is a basic flaw in a philosophy which tries to start from zero, or from some very small set of premises, building up from these by synthesis; to wit, that we cannot do it. We always, in fact, have to start with a lot. Descartes’ radical doubt leads, perhaps, to “cogito ergo sum” (more precisely, perhaps, “...something is”), but to go further, he needs to introduce more premises, “undoubting” certain things. He selects premises that purport to justify the undoubting – clear and distinct ideas, and the goodness of God – but this is basically B.S., not justified by his method, as stated.

Really, what we do, always, is to start with everything we know, or think we know, and start juggling it about. We apply reasoning (rules of inference) that feel right to us and (perhaps) we try to codify these into a system of logic. We apply various tests in doing this. What tests depends on the person and what he or she brings to the process. Examples are empirical tests (how well do these premises and this logic predict future experience?) and faith-based tests (do these premises and this logic lead to conclusions that conflict with “revealed” scripture?) We can apply tests, at any time, to some of our a priori beliefs about facts, and/or to our rules of inference, but each test involves accepting other facts and rules, at least temporarily and contingently. We can never test the whole shebang.

This process is irreducibly ad hoc and messy, which is no doubt what has led many philosophers to reject it, and to try and find a purer and simpler alternative. But they were deluding themselves.

The actual process “works”, more or less, because our sensual and reasoning faculties have evolved by natural selection. If they failed badly at the task of extracting meaning from the universe, we would have become extinct, or at least these faculties would have. But this is no guarantor of “ultimate knowledge”, whatever that is. Aspects of our reasoning system may be selectively neutral, such that there is no effective, general difference in survival rates from reasoning one way or the other. These aspects, then, would not be fine-tuned by natural selection. Like blue eyes and brown eyes, both may survive indefinitely in the species. I suspect this may be particularly true of the ways we construct abstract structures of meaning to explain our experience to ourselves.

Different ways of knowing can also serve the same person better or worse under different conditions. Something like what I called “faith-based” learning is certainly the most satisfactory way of learning for a child, who will learn much too slowly if she must test every new factoid empirically, rather than trusting the wisdom imparted by her elders. In fact, realistically most of what we learn throughout all our lives, we learn from others whom we trust, either because of their social position as teachers, authors, or what-have-you, or simply because we believe they have no reason to lie. In complex situations, we have to apply other tests – such as when we find that the “experts” don’t agree with each other. And empirical experience is the ultimate arbiter, to which faith, for any reasonable person, must bow. Galileo’s telescope trumps the Pope.

Saturday, December 5, 2009

Episteme

I have two fundamental epistemological premises: that the evidence of my experience is the best available (really only available) data I have for learning about the world, and that most people who study, think, speak and write about the world are not intentionally lying. These seem to be pragmatically a minimal set. I don’t see how one can practically set forth on the project of learning without them.

Note that the use of the words “most” and “intentionally” in the second premise imply two corollaries: some people are lying, and some people may be unintentionally stating mistruths. (In fact, I might argue that we are all unintentionally stating mistruths to a greater or lesser extent, but that would be more of a theorem than a premise.) Also, saying experience is the “best available” data doesn’t imply that it yields infallible insight.

The two premises do not form a complete (i.e., sufficient) set. All they really say is that I can trust what I experience, and what people tell me about what they experienced (including second- or third-hand reports, etc.) – but with a grain or two of salt. They don’t say anything about how to come up with that grain of salt, or to know how many grains to apply. They don’t, in other words, tell me how to assess the veracity of conclusions I draw from these sources, or how to distinguish between competing theories. They don’t specify any rules of inference, at all.

I’m afraid all I can say about making distinctions is, “It’s ad hoc.” I am no Descartes, to offer a single unified answer to the question of how to distinguish true ideas from false ones. Certainly, I do not believe that because I can hold some idea “clearly and distinctly” that it must be true (although it might suggest truthfulness prima facie). Instead, it’s more a matter of how well does an idea “fit in” with the other body of ideas I have constructed, over time, from the same evidence. “Consistency”, in a word. But how do I decide if an idea is consistent? Certainly not by the law of the excluded middle. I am quite convinced that it is possible for a thing to be both A and not A. The clearest examples come from human emotions: do I want to spend a month’s vacation in Venice this year, even though the press of work before and after will be terrible, it will cost a lot of money, my Italian is rusty, and I will have to find a house-sitter and/or worry about my pets and everything else in my house? I do, but I don’t. Fuzzy logic may offer better (if inherently less certain) models. But I am convinced that real antinomies can also be supported, as matters of fact (at least as humans perceive fact), in the real world. Or at least, I’m not convinced that they can’t.
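The “I do, but I don’t” intuition maps quite naturally onto fuzzy logic, where a proposition holds to a degree between 0 and 1, and “A and not-A” need not come out false. Here is a toy sketch of that idea – the particular degree of wanting is invented purely for illustration, and the operators are the standard (Zadeh) min/complement ones, not any deeper model of emotion:

```python
# Toy fuzzy-logic illustration of "I do, but I don't."
# Degrees of truth run from 0.0 (false) to 1.0 (true).

def f_not(a: float) -> float:
    # Fuzzy complement: degree to which the proposition fails to hold.
    return 1.0 - a

def f_and(a: float, b: float) -> float:
    # Standard (Zadeh) fuzzy conjunction: take the minimum degree.
    return min(a, b)

# Invented value: how strongly I want the month in Venice.
want_venice = 0.7

# In classical logic, "want AND not-want" is always false.
# In fuzzy logic it can be substantially true:
ambivalence = f_and(want_venice, f_not(want_venice))
print(round(ambivalence, 2))  # 0.3 -- "I do, but I don't"
```

Note that the ambivalence is greatest when the desire sits at exactly 0.5, and vanishes only at the classical endpoints 0 and 1 – which matches the intuition that whole-hearted wants don’t feel contradictory.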

Ad hoc. I know it when I see it. Maybe. More or less. (I do, but I don’t.) Kind of like Descartes, perhaps, except I substitute “vague and fuzzy” for “clear and distinct”?

This could be depressing, if, like many philosophers past, I desired the nature of my mind (or soul) to approach some ideal of perfection – to make me like a god. But I don’t believe in gods. Rather than being depressed at failing to approach a fictional divinity, I prefer to celebrate the humanness of it all. Because this messy, ad hoc, but often very effective process of distinction is the stuff of life, after all, and quintessentially human, if only because humans, by and large, do it exceptionally well. Not that we do it infallibly – there are a lot of people in the world who are dead certain of things about which I am certain they are dead wrong. But by and large, in the billions of tiny, everyday distinctions and decisions we make over the course of our lives, we do mostly pretty well.

We do this, of course, because we’ve been programmed that way by natural selection. Our brains have evolved to do a job, and they do it rather well (just as flies fly very well, and frogs do an excellent job of catching them). We have certain decision-making processes built into our equipment. By studying our thinking in a natural scientific sort of way, it is possible to get clues as to what they are. Philosophers who have tried to set rules of thought, I think, start with some biological rule and then codify it – so clarity and distinctness counts, biologically, as evidence, and so, on some level, does the excluded middle. But we can’t stop there. We move on to fuzzy logic, paradigmatic categories... and who knows how far beyond?

My guess is that, as in most things, the brain works by having a bunch of rules, without any necessary regard as to whether they are consistent in any a priori theoretical sense. Different rules are stimulated by a particular experience, others suppressed, memory of past experience and feedback loops are brought into play, until the system “settles” in some state (“settles” is a relative term for a system that is constantly in motion), and we feel that this “makes sense” or doesn’t. This is what I mean by “consistent” with the rest of my body of knowledge. It is, in fact, the biological basis of, in a sense the definition of, “consistency”. The rules exist because, at some time in the past, they have been found helpful in negotiating the world – they have been empirically proven. They may be “hard coded” rules, proven in the dim historical past of our heritage, but, again like most things in the brain, the “hard coded” rules can be modified, and new rules created, by our individual experience. And such learned rules may be passed on to subsequent generations via the “Lamarckian” evolutionary process represented by our culture and its systems of education.

Thinking in this natural historical way about distinction and rules of inference, etc., may not “prove” validity, in the sense that philosophers have traditionally sought such proofs. But it may give pretty damn’ good evidence of empirical functionality. And, I would argue that this empirical, matter-of-fact kind of “proof” is most suitable to our real-life existence as human beings in a material world, even if it fails for some fictional existence as souls aspiring to a divine one. If philosophy is the pursuit of the “good” and if good must be good for something, then this is the kind of knowledge and truth that is “good for humans”.

Saturday, October 24, 2009

The house I live in

A few weeks ago, some section I was reading in Richard Rorty’s “Philosophy and the Mirror of Nature” impelled me to attempt a moment of pure introspection, turning off any conscious thought in so far as possible, and just trying to be aware of my immediate impressions – sense impressions, and random passing thoughts viewed as an observer rather than as agent. This is not the first time I have tried such a thing. For some reason, on this occasion, the thought occurred to me that I do not directly perceive my “self”. This led me to the conclusion that I infer myself. On further reflection, I speculated that humans, as infants, learn to infer the existence of themselves by comparison with the role others play as agents of actions (causes of effects) in the infant’s environment. They see other effects, with a “hole” in the middle (no agent evident), and infer that they exist, as people like others, in order to fill the hole. The effects inferred to be caused by this “self” are associated with feelings, desires, motivations, so they infer similar feeling, motivated “selves” associated with the other agents, as well.

I offer this as of interest mainly because of the immediacy and specificity of the intuition. I make no claim to originality – if nothing else, I am reminded of the motivating insight in D. E. Harding’s Zen book “On Having No Head”, as well as a few barely remembered passages on the construction of the ego in Freud’s “Civilization and its Discontents”. I have long accepted the idea that our understanding of ourselves, in the sense of who we are, is constructed and reconstructed over the course of our lifetimes through social interaction and other life experience. And I certainly don’t offer the above speculations as a developed theory. They were the result of a few moments of introspection and reflection – what they mainly suggested to me was the need to do more research into other people’s views on the development of the self.

But I haven’t been able to avoid (or postpone) thinking more about this, because the nature of the self has been an important issue in several books I’ve been reading. It came up in Jean Grimshaw’s book, in her critique of some of Sartre’s ideas, it was important (from very different viewpoints) in MacIntyre’s book, and in Rorty’s, and it is important in Chapter 3 of the book I am reading now: “The Future of Democratic Equality”, by my friend Joe Schwartz, in which he critiques the ability of post-structuralist ideas, including the “fictive” nature of the self, to serve as a basis for building the concepts and institutions necessary to sustain democracy.

I guess I’ll have to read some existentialists and post-structuralists to get a first-hand understanding of their ideas. In the meantime, if I may be indulged in an argument from second-hand sources, it SEEMS to me that a false dichotomy is being drawn; i.e., if a “self” is not some natural, monist, indivisible, unchanging core of our being, then it must instead be “fictive” and “unstable”. Why? An automobile is a constructed artifact, but this doesn’t make it a phantasm, nor does the fact that it would fall apart if all the bolts were removed make it unstable.

I may not know much about the “self”, but I know a lot about houses. I built houses during my teens as a “carpenter’s helper”, working both with framing crews and finish crews; I’ve designed the structure (and restructuring) of many houses in my professional career as an engineer; and I’ve overseen at least four major remodeling projects in my role as a homeowner. To most of us who haven’t had these experiences, houses often seem the epitome of the solid, concrete, and stable. The British even have an expression, “Safe as houses”, which sums this up perfectly. But I, and others in the trade (or other “post-remodelist” homeowners) know differently.

Houses are “fuzzy” things, with uncertain boundaries, and they are in a constant state of flux. Our definition of “house” can change contextually: does it include the furniture? the outbuildings? the stove, sink, refrigerator? The house itself continually changes: we move furniture in or out, put up new curtains, paint the walls new colors. Left to itself, a house will sag, settle, decay. Termites eat the sills and other supports. A house not built properly is especially vulnerable: absent a few nails or ceiling ties, the walls can spread under the base of the rafters, causing the ceiling to crack and the ridge to sag. I’ve been involved in a few projects where houses in which this had happened needed to be pulled back together and resecured.

Houses are of course, initially constructed, and this construction is the result of a social “conversation”. The owner may have his ideas, more or less well articulated; the architect has hers; as does the contractor, and for that matter each of the many individual construction workers (carpenters, plumbers, electricians, painters...) There is no unity in these differing conceptions (despite the ambition of the architect), and each makes its own contribution to the outcome. The “final” product is massively unpredictable in its details as they will stand at the moment of “completion” (an arbitrary moment in time perhaps defined by the Building Inspector making a final sign-off on the permit form). And the house immediately begins to change, under the actions of the kinds of forces described above, as well as from the grander plans of the occupants, who may decide they need a new baby’s bedroom, home office, or kitchen.

Despite all of this, none of us, even we who are well acquainted with these processes, would refer to a built house as “fictive”. Nor, except in extreme (dare I say “psychotic”?) cases would we refer to it as “unstable”. “House” remains a concept, and houses remain things, that we would rather not do without.

So my “self” may be something that is formed and reformed continually throughout my life, by my social interactions (including those with powerful and/or repressive institutions), and by other things. It may be difficult for me to specify with precision, at any given time, just exactly what my “self” is, or what it contains. It may even be that my sense of agency is in some way illusory, because I can’t help doing what I do because of who I am, and who I am has been (and is being) constructed by forces that are beyond my control. Still, it’s a useful thing, this “self”, and it seems to have at least a certain pragmatic, dynamic stability (even if I can’t precisely define the state to which it “returns” after a “disturbance” – which is an engineer’s definition of “stability”).

So, for the moment, at least, I find I have no more desire to give up my "self" (either as a concept or an artifact) than I have to make my home permanently under the stars.

References: In the course of the above, I referred (yet again) to Jean Grimshaw’s “Philosophy and Feminist Thinking”, as well as Alasdair MacIntyre’s “After Virtue”, and to: Richard Rorty’s “Philosophy and the Mirror of Nature”, Sigmund Freud “Civilization and Its Discontents”, D. E. Harding “On Having No Head”, and last but not least Joseph M. Schwartz “The Future of Democratic Equality”. It’s also clear, if I am to get a better understanding of various ideas of the self, that I am going to have to read some Sartre, Foucault, and Derrida, as well as some more up-to-date books on the psychology of ego formation. (I’m open to suggestions...)

Sunday, August 30, 2009

A natural history of morals

I’ve been reading a great book, “Philosophy and Feminist Thinking”, by Jean Grimshaw, which I picked up serendipitously at Back Pages Books, in Waltham MA (http://www.backpagesbooks.com/).

As is appropriate for such a broad title, Ms. Grimshaw covers a lot of area, especially for such a short book. She hooked me in an initial section discussing what it might mean to think of philosophy as “gendered”. There she showed, by a very original argument, that Kant, for instance, held views on the nature of women which, read in the context of his general theory of moral behavior, seem to relegate women to second-class humanity; in Kant’s case, however, the views on women could be thrown away, and the general theory would not need to be changed. On the other hand, Aristotle’s teleological theory of natural history led him to see rational thinking as the most characteristic quality, and therefore the most appropriate end or goal, of human beings, because rational thinking derives from language, which is the one quality he saw as being uniquely human. The fact that women have language, but that he believed women not to be rational in the same way as men, thus creates a contradiction in his philosophy; but to eliminate the contradiction by admitting women (or, for that matter, slaves) to be fully rational would undermine parts of his moral and political philosophy, which required the good life to be supported by the labor of women and slaves in order for the full rational nature of humanity to find expression. Thus, Aristotle’s misogyny is integral to his philosophy, and his philosophy is more clearly “gendered” than Kant’s.

In another question she raises in the book, the question of what it might (or might not) mean to speak of women as having a “nature” distinct from men, or of “women’s ethics or values” as being distinct from “men’s”, she brings up some ideas I wish she had developed more fully. She mentions that many of the values which are often seen as being particularly women’s values – caring, attentiveness to relationships, alertness to the feelings of others – are actually behaviors that may be quite practical for survival to a person living powerlessly under the domination of others. A hyper-keen alertness to the feelings and moods of others, for example, is often a characteristic of people who grew up in an abusive environment (my example, not hers). I wish she had explored the political implications of this observation a little more – in particular, does this mean that some of these virtues might eventually dissolve, if we won a more egalitarian world? I hope not!

But what I really want to talk about in this essay, because it gibes in interesting ways with some thinking I’ve been doing, is a little theory of moral behavior that she just sort of casually tosses out in Chapter 7. She is discussing the question of “abstract” vs. “concrete” reasoning, and ideas that “men’s morality” is based on rules and principles, while “women’s morality” is contextual and specific. She points out that, besides being pretty imprecise as to what “women’s moral reasoning” really is, this argument dissolves rather readily into vague mysticism about women’s “intuitive” mental processes. She proposes as an alternative a way of looking at moral behavior that is based on distinguishing between “rules” and “principles”. The definition she uses is that “rules” simply direct behavior: “Do not kill.” “Principles”, on the other hand, direct you to take certain things into consideration: “Consider whether your actions will harm another.” Then, to use an example from her book, a person might hold one rule: “Do not sleep with someone to whom you are not married,” and two principles: “Consider whether your actions will condone immoral behavior,” and “Consider whether your behavior will stand in the way of maintaining caring and relationships.” A person who chooses to maintain a close relationship to a daughter who was breaking the rule about sex and marriage is thus not seen as behaving in an unprincipled way, but as prioritizing one principle over the other, in a case in which the two led to contradictory behavior.

I think this is a fascinating, and quite compelling analysis. It is also quite close to a theory of moral behavior I’ve been kicking around, which I tend to refer to as my “natural historic” view of morality. (The name implies that this is a theory or hypothesis about what moral behavior in humans is “naturally like”, and not a normative or prescriptive theory, per se.) My natural historical view argues that human morality naturally takes the form of a collection of simple “rules” for behavior, which are not necessarily mutually consistent. (These “rules” in my theory thus play the role of both “rules” and “principles” in Grimshaw’s.) Social or other environmental circumstances have the effect of stimulating or reinforcing some rules, while suppressing others. Different aspects of the particular environmental context may stimulate contradictory rules. The rules, themselves, become part of the stimulus in a feedback mechanism: a rule, once stimulated or “fired”, may serve to have a suppressing or stimulating effect on others. Eventually, some rule (or some reasonably consistent set of rules) wins out, and the person takes moral action. (Of course, gridlock in the form of an inability to come to a decision may also win out.)
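The settling process described above can be caricatured in a few lines of code. Everything here is invented for the sake of illustration – the two rules (borrowed from Grimshaw’s mother-and-daughter example), the inhibition weights, the stimulus levels, and the update scheme are a cartoon, not a serious cognitive model. Each rule has an activation; context stimulates some rules; rules excite or inhibit one another through a feedback loop; and iteration continues until the activations stop changing much, at which point the strongest survivor “wins”:

```python
# A toy "settling" network of moral rules. All rules, weights, and
# stimulus values are invented for illustration; this is a caricature
# of the feedback process described in the text, not a cognitive model.

RULES = ["condemn_rule_breaking", "maintain_caring_relationship"]

# Links: how strongly one rule's activation excites (+) or inhibits (-)
# another. Here the two rules mutually inhibit, since they prescribe
# contradictory behavior toward the daughter.
links = {
    ("condemn_rule_breaking", "maintain_caring_relationship"): -0.5,
    ("maintain_caring_relationship", "condemn_rule_breaking"): -0.5,
}

def settle(stimulus, steps=50, rate=0.2):
    """Iterate until activations settle; return the final activations."""
    act = dict(stimulus)  # activations start at the stimulus levels
    for _ in range(steps):
        for r in RULES:
            # Net input = constant external stimulus + feedback from peers.
            net = stimulus[r] + sum(
                w * act[src] for (src, dst), w in links.items() if dst == r
            )
            # Nudge the activation toward its (clamped) net input.
            act[r] += rate * (max(0.0, min(1.0, net)) - act[r])
    return act

# Context: strong attachment to the daughter, a weaker pull to condemn.
final = settle({"condemn_rule_breaking": 0.4,
                "maintain_caring_relationship": 0.9})
winner = max(final, key=final.get)
print(winner)  # maintain_caring_relationship
```

With these (invented) numbers the caring rule suppresses its rival almost to zero and the system settles on maintaining the relationship; raise the condemnation stimulus enough and the outcome flips – or, with closely balanced inputs, the network can hover indecisively, which is the “gridlock” case noted above.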

This view is consistent with a view of mind that I’ve been developing, under the influence of books like Kosslyn/Koenig “Wet Mind”, Patricia Churchland’s “Neurophilosophy” and George Lakoff’s “Women, Fire and Dangerous Things.” It is also consistent with a growing sense that I have that logical consistency, while certainly important, is grossly over-rated in most traditional philosophy, especially where it bears on the actual behavior of real human beings. (Lakoff’s book is particularly helpful, in this.) Another contributing factor in my thinking about this has come from primate ethological studies such as Jane Goodall’s “In the Shadow of Man”, and Frans de Waal’s “Chimpanzee Politics”. (De Waal’s “Good Natured: The Origins of Right and Wrong in Humans and Other Animals” is right at the top of my “to be read” pile.)

Since I’m billing this as a “natural historical” theory, I should provide some ideas on how my hypotheses might be empirically tested. I have, in fact, had some thoughts about this, and about the original source(s) of the rules (in our genes, and/or imbued by socialization), but I think this is a long enough post for now...