Wednesday, February 5, 2014

Government control



One “common sense” argument put forth against socialism, in the (limited) sense of direct government control over the economy, is that the government cannot be trusted to invest wisely.  It may keep non-viable enterprises going, for example, pouring in money drawn from better-performing portions of the economy, in order to win votes.

As a prima facie argument, this strikes me as silly.  After all, private capitalists cannot be trusted to invest wisely, either, and to the extent that they do, they invest for their own benefit, not ours, and the public weal follows as a side effect, if at all.  It’s not that I don’t believe in the existence of the “invisible hand”.  I just think its reach only extends so far.  Maintaining jobs in a money-losing business that produces a useful commodity may actually be a perfectly reasonable public investment in some circumstances, but not one that would ever appeal to the private capitalist.

On a more detailed, implementation-focused level, of course the anti-socialist objection is not entirely ridiculous.  If we have publicly managed investments, we do want them to be managed well, and managing investments well is not a trivially easy job.  We also need to protect ourselves from outright corruption.  Addressing these issues is partly a matter of management, of checks and balances, of trying to design a system as free as possible from irrelevant, systematic distortions.  But to get a well-balanced, robust plan of public investment that takes all direct and indirect costs and benefits into account, I think we would need a much more participatory planning process.   (Note that taking ALL direct and indirect costs and benefits into account is something that capitalist enterprise management NEVER does.)

Can we get a suitable democratic planning process within our current models of government?  I do not believe in the blanket statement, asserted by some on the left, that “you cannot tear down the master’s house using the master’s tools.”  But some of the master’s tools are in fact rather specifically engineered to serve the master’s purposes, and are not so readily adapted to the needs and purposes of the folk.  In my mind, our really-existing systems of representative government are examples of such tools.

Representative governments originated, in Europe, as a way for kings to appease the aristocracy, and eventually the richer bourgeoisie, without ceding too much central control.  Some constitutions, of course, were constructed as a conscious break with the rule of kings; however, the one such “revolutionary” constitution I know something about – our own in the U.S. – was consciously created to keep hoi polloi at arm’s length, and guarantee effective control by the richer classes.  Obviously, there have been modifications in these constitutions over the centuries, most of which (I think) have been in the direction of greater democratization, but there still is a long way to go.  Most ordinary folks participate in democracy only by spending ten minutes voting every 2-4 years (if they even do that).  “Debate” takes place between politicians and pundits, primarily in the commercial news media, and is, to most people, at best a spectator sport.  People’s own discussions tend to take place amongst small groups of their own friends, and can influence the broader process of policy formulation only to the extent those people happen to be queried through the passive (and eminently manipulable) process of polling.

A truly democratic process would have people involved in interactive debate, with both themselves and their opponents as participants; a debate taking place in a public forum that fed directly into the decision-making process.  Listening to arguments pro and con, and being forced to make a meaningful decision afterwards, focuses the intelligence, I believe, like almost nothing else.

The punditry would assure us that people just don’t want to be that involved in their own government.  Maybe that’s only because they’ve never had the chance.

Friday, January 3, 2014

Isocrates and us, or “plus ça change, plus c'est la même chose”



 “When I was a boy, being rich was considered so secure and honorable that almost everyone pretended he owned more property than he actually did possess, because he wanted to enjoy the prestige it gave.  Now, on the other hand, one has to defend oneself against being rich as if it were the worst of crimes… for it has become far more dangerous to give the impression of being well-to-do than to commit open crime; criminals are let off altogether or given trivial punishments, but the rich are ruined utterly.  More men have been deprived of their property than have paid the penalty of their misdeeds.”

This was the Greek orator Isocrates, writing, I guess, in the early 4th century BC.  The thing is, if de Ste. Croix is correct in his analysis in The Class Struggle in the Ancient Greek World, Isocrates wrote these words during a time when, in fact, economic inequality was growing in much of the Greek world.  The demos was on the defensive; the oligarchic sector was (correctly) realizing that their interests were better served by supporting outside imperialists like Philip II of Macedon (and his successors), who would permit them free rein to squeeze the local peasantry, as long as they eschewed outright political ambition beyond the local level, but who would be very suspicious of the potential for a popular uprising by hoi polloi.  And, in fact, first under the Macedonian kings, and later under the Romans, the last vestiges of democracy were stamped out, opening the world to more and more vicious exploitation of the poor and middling by the uber-rich, until finally Diocletian and the emperors who followed him basically enserfed the entire population below the economic and political elite.  Economic, and political, inequality in the late Roman Empire reached a level that we, in our still relatively open societies, can barely conceive.

The thing that bothers me about the Isocrates quote I opened with is that it seems so modern.  We again find ourselves subject to “poor me” complaints from the rich, fuming, for example, that they pay the lion’s share of income taxes (but taking as a natural right their claim to an even more disproportionate share of the fruits of economic production), and decrying the slender benefits allotted to the “undeserving” poor; calling for us to be tough on crime (while we imprison, in the U.S., more of our population than any other country in the world, and more black men than were slaves before the Civil War), and for tax cuts for the “creators” of (mostly non-existent) McJobs.  So, while financial markets soar post-depression, we cut unemployment benefits and food stamps in time for Christmas, and produce movies in which criticism of the excesses of the corrupt ruling class is so muted that members of that class can cheer at screenings, while critics on the left complain, in effect, that the director is praising with faint “damns”.

And I think:  It’s been 2400 years.  Why are we still fighting the same fight?

Thursday, October 31, 2013

Right and Wrong



These thoughts were engendered by some readings in Robert Nozick’s book Philosophical Explanations, in particular the discussion near the end of Ch. 5, Part III on “Deontology and Teleology”, and the preceding sections on the structure of moral rules.  I do not think it is necessary to read or have read Nozick’s book in order to follow my meditations, though.

If you perform certain types of bad acts, even for a “good purpose”, you lessen the amount of good that you, and perhaps others, may do in the future.  If you torture a known (even admitted) terrorist in order to thwart his plans and save innocent lives, you become the sort of person who will more easily torture in the future, perhaps sometimes on mere suspicion, and hence, eventually, torture an innocent.  Also, you fill your victim’s relatives and friends with resentment, anger, and hate, lessening the good they will do in the future, and making it more likely that they will do wrong.  This doesn’t mean, necessarily, that one may never do a wrong act to accomplish a good end, but it is a factor to be weighed.  I think this connects, to some extent, the deontological (rules-based) and teleological (ends-based) views of morals, and helps to avoid some of the worst “ends justify the means” abuses of vulgar forms of the latter.

Nozick’s ideas of the foundation of ethics have, I think, some serious flaws.  The deepest flaw, to me, is his assumption (which he never really tries to justify?) of some Platonic realm of value, right, and wrong that transcends and has no necessary relation to (at least is not in any way derived from) human ends.  I reject this view, and hope to set down some alternative speculations in some detail, in the future.

On a more technical level, his discussion of the structure of moral rules gives food for thought, but his elaborate formulation (which even he does not try to complete) is far too complex for actual application.  Surely an analysis of morality must consider the “computability” of the resulting formulas – the possibility that the answers could actually be reached by real people in “real time” – otherwise it is asking us to be better than we possibly can be.

Perhaps deontological rules are best seen as heuristics (“rules of thumb”).  Heuristics are designed for quick computations that give good (not necessarily optimal) results in many (hopefully “most”) situations.  A set of heuristic rules does not necessarily need to be internally (logically) consistent.  Judgment applies in deciding which heuristic to apply, or even whether to apply the heuristics at all, rather than opt for some more “precise” formula such as a careful, weighted analysis of the long(er)-term moral benefits and costs.
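To make the computational analogy concrete, here is a minimal toy sketch in Python – entirely my own illustration, with invented rules, categories, and weights, not anything Nozick proposes – of the difference between applying a cheap heuristic rule and grinding through a fuller weighted analysis of projected costs and benefits:

```python
# Toy contrast between a moral heuristic ("rule of thumb") and a weighted analysis.
# The rule, effect categories, and weights are invented purely for illustration.

def heuristic_verdict(action):
    # Rule of thumb: never lie. Cheap to evaluate, usually good, not guaranteed optimal.
    return "wrong" if action.get("involves_lying") else "permissible"

def weighted_verdict(action, weights):
    # "Precise" alternative: sum weighted projections of longer-term benefits and costs.
    # Slower, data-hungry, and only as reliable as its (guessed) inputs.
    score = sum(weights.get(k, 0.0) * v
                for k, v in action.get("projected_effects", {}).items())
    return "permissible" if score >= 0 else "wrong"

action = {
    "involves_lying": True,
    "projected_effects": {"harm_prevented": 2.0, "trust_eroded": -3.0},
}
weights = {"harm_prevented": 1.0, "trust_eroded": 1.0}

print(heuristic_verdict(action))          # fast answer from the rule of thumb
print(weighted_verdict(action, weights))  # the "careful" calculation, with all its fragility
```

The point of the sketch is only this: the heuristic answers instantly from one fact, while the “careful” formula demands projections and weights we rarely possess, which is exactly why computability matters to any account of what morality can ask of us.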

Wednesday, October 30, 2013

Pondering Truth – Discussion (or extended rant)

Carla, on Facebook, asked a question that led me to elaborate at length on my 10/12 post “Pondering Truth”.

“Are you asking,” she asked, “If I don't like beets, and you do like beets, is there any truth about the taste of beets in my experience of the taste of beets?”

My response: Most simple, practical questions can be resolved pretty easily by adjusting the semantics. For instance, we can say: "'David likes beets' is true; 'Carla likes beets' is not true; and, 'Beets taste good' is a subjective statement that only reflects the tastes of the speaker, and there is no objective truth in the matter."

Even this can get complicated, though. Suppose your biochemistry fluctuates so that sometimes you would like beets if you tried them, but most of the time you wouldn't. You've tried them only seldom (because you "know" you don't like them), and the probabilities worked out so that every time you did you were in your "don't like beets" state. Or suppose you don't like beets because of some terrible beet experience in your childhood, and with the right therapeutic breakthrough, you would come to really love them. Exactly what is the truth of the statement "Carla doesn't like beets", then?

You could come up with semantic tweaks to express these thoughts, but if we have to drill down to that level in every utterance in order to reach the truth of it, exactly what is "truth"? Can we EVER really say that we've drilled down far enough to reach the absolute bottom of it?

Then there's also the question of the difference between "truth" and the data that justifies a belief. I believe I have money in the bank. I confirm that by checking my balance, or asking the teller. Suppose the teller lies to me, and has embezzled my money? Suppose he has embezzled some of my money but left a lot of it. I have no direct connection to the "truth" but only to data that (I believe) confirms it. I could go on for a long time acting as though all of my money were there. If I keep depositing more money, and spending less than I deposit, I could conceivably NEVER discover the embezzlement. Truth, it seems, has the POTENTIAL for operational impact but, unlike data and belief, does not NECESSARILY have any operational impact. Doesn't this seem weird? Shouldn't TRUTH somehow be inherently MORE important than "mere" belief?

Money, actually, is an interesting example because it turns out that I "have money" ONLY because everybody involved believes I do, which is really kind of strange, isn't it?

I should point out, relative to the "beets" example, that I am comfortable saying something like "Carla doesn't like beets". I do NOT feel that you ACTUALLY have to drill down through all the actual or potential details of complexity to say something meaningful, useful, or (yes) true. I just don't feel that I know how to understand or express exactly what this quality or relation we call "truth" truly is.

Mathematicized science gives us the notion of "true within a context". I can describe the trajectory of an object in a way that is "true within Newtonian theory" and that may adequately describe the actual trajectory of the actual object for whatever present purpose I have. But for a different object and trajectory, I may need a description that is "true within the theory of General Relativity", and the description that would be "true within Newtonian theory" may be totally inadequate for my purpose. And General Relativity may not be the ultimate end of the progression, either. This is a relatively precise concept of "truth", but it doesn't necessarily help us, for example, in trying to decide whether a given theory is "true".
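As a concrete illustration of the contrast I have in mind (a standard textbook example I am supplying here, not something from the original exchange): the Newtonian description of a planet's motion is perfectly adequate in the weak-field, slow-motion regime, but it misses the slow advance of Mercury's perihelion, which General Relativity accounts for. Roughly:

```latex
% Newtonian description: gravitational acceleration toward the Sun (mass M), distance r
a_{\mathrm{N}} = \frac{GM}{r^{2}}

% General-relativistic correction that Newtonian theory cannot supply:
% perihelion advance per orbit, with a = semi-major axis, e = eccentricity, c = speed of light
\Delta\varphi = \frac{6\pi GM}{c^{2}\, a\,(1 - e^{2})}
```

Each description is "true within" its own theory; only the second is adequate if my purpose is to predict Mercury's orbit over a century.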

Saturday, October 12, 2013

Pondering Truth

When we speak of “truth” do we describe some single entity or quality in the world?  Or is “truth” hypostasized to simplify a complex of relationships between the inner and outer worlds that we do not (cannot?) understand?

Is “truth” an operational concept?  There is no operational difference “now” between a justified belief and a justified true belief (a.k.a. “knowledge”, at least in many philosophers’ systems).  I make the same choices, take the same actions either way.  But there may be an effect on the outcome (or not).  Is “truth” operational in evaluating outcomes, and therefore, perhaps improving the reasons for future beliefs?

Even in evaluating the outcome of past choices, I still don’t “possess truth”.  I only formulate, and attempt to justify, further beliefs (e.g., beliefs about my past beliefs).  In Dewey’s terminology from Experience and Nature, truth does not seem to be something we can “have directly”.

There is a state of the world, and there is a state of my mind (or yours), which is, itself, part of the state of the world.  The state of my mind includes a simplified, impressionistic “image” (in some neural/synaptic medium) of the state of the world – to the extent it is accessible to my imagination.  In that image, the state of my mind figures under such rubrics as “reasons” and “beliefs”.  Based on my state of mind (at any given time), I will make certain choices, and take certain actions.  Partly because of such actions, at some future time the state of the world, and the state of my mind, will be different.  By comparing the current state of my mind (and especially its world-image) to my memory of its prior state, I make judgments about the truth of my previously held beliefs.  I formulate (reasoned) beliefs about my prior beliefs.

Is “truth”, then, just something we invent to explain our relative satisfaction or dissatisfaction with the outcomes of our past endeavors?  The trouble with that idea is that our feeling of satisfaction may be connected to aspects of the outcome which some mythical unbiased observer would be unwilling to call “truth”.  For instance, a racist may desire to join an organization of like-minded individuals.  His belief that African Americans are genetically inferior to “Aryans” may help him in that endeavor, and thereby lead him to be satisfied, but we would not want to call his belief “true” on such grounds.  Then there is the growing body of psychological research indicating that our brains have evolved so as to readily adopt certain beliefs which are adaptive, but not (necessarily) “true”.  An example is the “confirmation bias”, whereby our minds tend to disproportionately accept data that confirms what we want to believe, and reject data that disconfirms it.  Another example is the “hyperactive agent detection” that Daniel Dennett discusses in Breaking the Spell (among other places).  Other examples can be found in various papers in Naturalizing Epistemology (Hilary Kornblith, ed.).

It seems there is something actual about the relationship between our mental states and the world state that we are trying to capture with the concept “truth”, which is related to, but not simply reducible to, our degree of satisfaction with outcomes.  I admit, at this point in my life, to still wondering (persistently) about exactly what it is.

Postscript on Confirmation.  As with many of the musings on Persistent Wondering, this one pretty much starts “where I am” and doesn’t make much of an effort to relate to an audience that may not be starting at the same place.  I apologize for that… but after all, I am portraying myself as a “wonderer”, and making no claims to be a teacher.  (Lame excuse.)

In this essay in particular, though, it seems to me that many people may wonder why I would feel – at all – that “truth” is not directly accessible.  In many (most?) of our everyday interactions with the world, confirmation of our beliefs is direct and immediate, and seems incontrovertible.  I believe I left my keys on the kitchen counter.  I go downstairs – I either find them there, or I do not.

Other “facts” are not so easily confirmed or disconfirmed, though.  There are the challenges of philosophical skepticism.  How do I know I am not dreaming?  Or hallucinating?  Or a disembodied brain kept alive in a vat, with my neural inputs manipulated by alien scientists?  Then there is the question of “modeling”.  Complex physical or social systems cannot be grasped by our minds in their complete and detailed totality.  We need to abstract from them, simplify them, in order to understand them.  Do concepts like race, class, culture, or the national income “truly” correspond to some real-world objects, and if so, exactly what and how?  How do we indubitably confirm or disconfirm them?  Theoretical physics also provides examples.  Do the objects of modern theoretical physics – quarks, bosons, photons – “really” exist, or are they just a convenient (not necessarily unique) way of mathematizing experimental results?  Are the relatively abstract and indirect confirmations of physics experiments really of the same class as our confirming (by looking) that our keys are on the counter?

But really, ALL of our knowledge involves some such modeling (abstraction and analysis).  All the objects we conceive involve some level of abstraction – focusing on certain aspects of experience and ignoring others.  Something of this is suggested by Heraclitus’s statement thousands of years ago that “You can never step into the same river twice.”  What exactly is a river?  Is it the specific water molecules?  But they start out in a glacier and end up in the ocean.  Is it the banks?  But they shift with time as soil particles are removed and deposited.  Is it some abstract (fractal?) pattern that encompasses changes over time?  What is the “truth” of the matter?

Our mental states (beliefs and so on) consist in synaptic patterns, roughly, stable-yet-changing patterns of chemical interactions between neurons.  The state of the world consists in the interplay of forces amongst patterned matter and energy, extending strongly or weakly between the various points of the entire universe.  It is not clear that some unique and transparent correspondence can be established between those two things and unambiguously labeled “truth”.  On a conceptual, theoretical level, the question of the truth of our beliefs, their confirmation or disconfirmation, is not at all a trivial one.  Although on the pragmatic level of day-to-day actions, it very often is.

Sunday, September 1, 2013

Truth, knowledge, skepticism, and stuff…



A standard philosophical definition of knowledge goes as follows:

(I know A) <=> (I have a justified belief that A) and (A is true)

As I write this essay, I’m in the middle of reading Robert Nozick’s discussion of knowledge and skepticism in Philosophical Explanations.  Nozick uses a very specific concept of “truth tracking” which leads to some very interesting results, but is essentially (it seems to me) just a particular approach to defining justification, one that leads to a coherent (I think), but sometimes quite peculiar conception of knowledge.  I find myself working around a rather different rejection of philosophical skepticism.  What follow are musings, not intended to represent a complete expression of a developed idea.

If some particular skeptical scenario SK (one for which SK implies not-A) were true, then A would be false, and I would not know A.  However, A; so not-SK, and I do know A.  The skeptic’s objection fails, because SK is not true.
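Schematically (my own formalization of the paragraph above, not Nozick's), the move is simply modus tollens run against the skeptic's own conditional:

```latex
% The skeptic grants: if the skeptical scenario held, A would be false
SK \rightarrow \neg A

% The common-sense premise: A
A

% Therefore, by modus tollens, the skeptical scenario does not obtain
\therefore\ \neg SK
```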

Truth is outside my direct experience.  It is not something I have direct access to.  To use Dewey’s language from Experience and Nature, it is not something I “have directly”.  All that I have directly is justification (evidence, inference rules, trusted sources, etc.)  There is no way to operationalize truth; I can only operationalize justification.  One reason that theories of knowledge remain so problematic is that knowledge, vs. justification and belief, has exactly zero influence on our behavior at time t = now().  We try to justify our beliefs, and to act on justified beliefs.  Later, we may come to believe that our prior beliefs were not (properly) justified, i.e., were false.  But at any given moment (outside of certain contexts of philosophical inquiry) the question, “Are my beliefs true, or merely justified?” is never a useful one.  The question, “Is this belief justified?”, however, is (always?) an important one.  Operationally, the question of “truth” always devolves to:  from perspective B (perhaps a later one), do the beliefs held to be justified from perspective A still appear justified?

There is no way to operationalize (general, philosophical) skepticism at all.  Skepticism places all logical possibilities on an equal footing.  All available evidence is discounted, and there is no way to distinguish one possibility from another, to prefer one over the other.  Skepticism cannot give any positive plan of action.

Philosophical skepticism of the sort I am referring to (what Hume referred to as Pyrrhonism) must not be confused with a critical evaluation of our methods of justification.  Critical evaluation of our methods of justification is completely operational, and is essential to ensuring that they track truth as closely as possible.

P.S.  Nozick, I think, expresses an insight similar to mine above about truth and experience when he says (p. 232 in my paperback copy) “We have said that knowledge is a real connection of belief to the world, [which we call] tracking, and… this view is external to the viewpoint of the knower, as compared to traditional treatments, [though] it does treat the method he uses [to track truth] from the inside, to the extent it is guided by internal cues and appearances.”

P.P.S.  Anticipating certain gleeful but misguided reactions to my rejection of “skepticism”, I want to point out a deviation between a common vernacular use of the word “skeptical” and philosophical skepticism, to wit, the phrase: “Skeptical about God.”  Disbelief in God (or even just doubt) is usually based on a belief that you can trust the evidence to lead you to true conclusions about the world.  Doubt about God stems from observing the lack of positive evidence, and disbelief from observing that the available evidence is incompatible with the God hypothesis.  These methods of justifying belief are exactly antithetical to philosophical skepticism, which holds that no amount of evidence is ever sufficient to justify a belief.  Theists, in fact, often (mis-)apply a skeptical argument when they claim that “You can’t prove a negative result,” not realizing that, if true, this argument merely puts their God on exactly the same footing as being a brain in a vat manipulated by alien scientists in the Alpha Centauri system.

P.P.P.S.  (9/7/2013)  I need to correct my statement that Nozick's concept of "tracking" is just a special case of justification.  (He is at pains to distinguish them on p. 267, by which time I was in a position to realize what he meant.)  Tracking, like truth, is external:  our methods really do track truth.  Justification is internal: we believe our methods track truth, and are therefore reliable or justified.