Saturday, December 5, 2009

Episteme

I have two fundamental epistemological premises: that the evidence of my experience is the best available (really, the only available) data I have for learning about the world, and that most people who study, think, speak and write about the world are not intentionally lying. These seem, pragmatically, to be a minimal set. I don’t see how one can practically set forth on the project of learning without them.

Note that the use of the words “most” and “intentionally” in the second premise implies two corollaries: some people are lying, and some people may be unintentionally stating mistruths. (In fact, I might argue that we all unintentionally state mistruths to a greater or lesser extent, but that would be more of a theorem than a premise.) Also, saying experience is the “best available” data doesn’t imply that it yields infallible insight.

The two premises do not form a complete (i.e., sufficient) set. All they really say is that I can trust what I experience, and what people tell me about what they experienced (including second- or third-hand reports, and so on) – but with a grain or two of salt. They don’t say anything about how to come up with that grain of salt, or how to know how many grains to apply. They don’t, in other words, tell me how to judge the veracity of conclusions I draw from these sources, or how to distinguish between competing theories. They don’t specify any rules of inference at all.

I’m afraid all I can say about making distinctions is, “It’s ad hoc.” I am no Descartes, to offer a single unified answer to the question of how to distinguish true ideas from false ones. Certainly, I do not believe that because I can hold some idea “clearly and distinctly” it must be true (although it might suggest truthfulness prima facie). Instead, it’s more a matter of how well an idea “fits in” with the body of other ideas I have constructed, over time, from the same evidence. “Consistency”, in a word. But how do I decide if an idea is consistent? Certainly not by the law of the excluded middle. I am quite convinced that it is possible for a thing to be both A and not A. The clearest examples come from human emotions: do I want to spend a month’s vacation in Venice this year, even though the press of work before and after will be terrible, it will cost a lot of money, my Italian is rusty, and I will have to find a house-sitter and/or worry about my pets and everything else in my house? I do, but I don’t. Fuzzy logic may offer better (if inherently less certain) models. But I am convinced that real antinomies can also be supported, as matters of fact (at least as humans perceive fact), in the real world as well. Or at least, I’m not convinced that they can’t.
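
To make the fuzzy-logic point concrete, here is a minimal sketch in Python – the membership value is made up and the min/max operators are simply the standard textbook choice, so this is illustration, not argument. In classical two-valued logic, “A and not A” is always false; with degrees of truth it can be partly true, which is about as close as formal machinery gets to “I do, but I don’t.”

    def f_and(a, b):      # fuzzy conjunction: the standard "min" operator
        return min(a, b)

    def f_not(a):         # fuzzy negation: 1 - a
        return 1.0 - a

    want_venice = 0.7     # hypothetical degree to which I want the month in Venice

    # Classically, "want AND NOT want" would be plain false.
    # Here it comes out 0.3 -- partly true: "I do, but I don't."
    print(f_and(want_venice, f_not(want_venice)))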

Ad hoc. I know it when I see it. Maybe. More or less. (I do, but I don’t.) Kind of like Descartes, perhaps, except I substitute “vague and fuzzy” for “clear and distinct”?

This could be depressing if, like many philosophers past, I desired the nature of my mind (or soul) to approach some ideal of perfection – to make me like a god. But I don’t believe in gods. Rather than being depressed at failing to approach a fictional divinity, I prefer to celebrate the humanness of it all, because this messy, ad hoc, but often very effective process of distinction is the stuff of life, after all, and quintessentially human, if only because humans, by and large, do it exceptionally well. Not that we do it infallibly – there are a lot of people in the world who are dead certain of things about which I am certain they are dead wrong. But by and large, in the billions of tiny, everyday distinctions and decisions we make over the course of our lives, we do mostly pretty well.

We do this, of course, because we’ve been programmed that way by natural selection. Our brains have evolved to do a job, and they do it rather well (just as flies fly very well, and frogs do an excellent job of catching them). We have certain decision-making processes built into our equipment. By studying our thinking in a natural-scientific sort of way, it is possible to get clues as to what they are. Philosophers who have tried to set down rules of thought, I think, start with some biological rule and then codify it – so clarity and distinctness counts, biologically, as evidence, and so, on some level, does the excluded middle. But we can’t stop there. We move on to fuzzy logic, paradigmatic categories... and who knows how far beyond?

My guess is that, as in most things, the brain works by having a bunch of rules, without any necessary regard as to whether they are consistent in any a priori theoretical sense. Different rules are stimulated by a particular experience, others suppressed, memory of past experience and feedback loops are brought into play, until the system “settles” in some state (“settles” is a relative term for a system that is constantly in motion), and we feel that this “makes sense” or doesn’t. This is what I mean by “consistent” with the rest of my body of knowledge. It is, in fact, the biological basis of, in a sense the definition of, “consistency”. The rules exist because, at some time in the past, they have been found helpful in negotiating the world – they have been empirically proven. They may be “hard coded” rules, proven in the dim historical past of our heritage, but, again like most things in the brain, the “hard coded” rules can be modified, and new rules created, by our individual experience. And such learned rules may be passed on to subsequent generations via the “Lamarckian” evolutionary process represented by our culture and its systems of education.
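
What might “settling” look like mechanically? Here is a toy sketch, again in Python, and again entirely hypothetical – the rules, the links between them, and the numbers are invented for illustration, not drawn from any actual model of the brain. Each “rule” has an activation; an experience stimulates some of them; the links let rules excite or suppress one another; and the loop runs until the activations stop changing much, which is the point at which the system has “settled” and the idea either “makes sense” or doesn’t.

    import math

    rules = ["trust-my-eyes", "trust-the-report", "suspect-a-lie"]

    # links[i][j]: how rule j's activation pushes on rule i
    # (positive = excite, negative = suppress) -- invented numbers.
    links = [
        [ 0.0,  0.5, -0.6],
        [ 0.5,  0.0, -0.8],
        [-0.4, -0.7,  0.0],
    ]

    stimulus = [0.9, 0.2, 0.1]      # what this particular experience stirs up
    activation = stimulus[:]        # start from the raw stimulus

    def squash(x):                  # keep activations between 0 and 1
        return 1.0 / (1.0 + math.exp(-x))

    for step in range(100):
        new = [squash(stimulus[i] +
                      sum(links[i][j] * activation[j] for j in range(len(rules))))
               for i in range(len(rules))]
        if max(abs(n - a) for n, a in zip(new, activation)) < 1e-6:
            break                   # the system has "settled"
        activation = new

    print(dict(zip(rules, (round(a, 3) for a in activation))))

The point is not the particular numbers but the shape of the process: local pushes and pulls, iterated over and over, with no global consistency check imposed from above.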

Thinking in this natural-historical way about distinction, rules of inference, and so on may not “prove” validity, in the sense that philosophers have traditionally sought such proofs. But it may give pretty damn good evidence of empirical functionality. And I would argue that this empirical, matter-of-fact kind of “proof” is most suitable to our real-life existence as human beings in a material world, even if it fails for some fictional existence as souls aspiring to a divine one. If philosophy is the pursuit of the “good”, and if good must be good for something, then this is the kind of knowledge and truth that is “good for humans”.
