Reading the discussion of “Deterministic and Stochastic Processes” in Elliott Sober’s book The Nature of Selection has me musing on Laplace’s demon. The brilliant French mathematician Pierre-Simon Laplace (1749–1827) was convinced (despite his own pioneering work in probability) that the world was at root deterministic. His famous expression of this, as cited (in translation) in Sober, is:
“Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it – an intelligence sufficiently vast to submit these data to analysis – it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain, and the future, and the past, would be present to its eyes.”
This hypothetical vast intelligence has come to be known as “Laplace’s demon”.
Is Laplace’s demon, i.e., some being which could know everything there is to know about the universe, even theoretically possible? (In a philosophical sort of definition of “theoretically”.) To address that question, we have to address at least a little of the question “What is knowledge?” That’s rather a big question for an amateur philosopher, but I’ll see what I can do.
First of all, whatever we mean by “knowledge”, it seems clear that it is not synonymous with “representation”. If we were to imagine an infinitely “true” mirror that perfectly reflected the incident light, or a computer disk copier that made perfect copies of one disk onto another, we would not say that the mirror, or the second disk had “knowledge”. What we call “knowledge” involves, I believe, a process of abstraction and interpretation. Abstraction involves making a representation of part of the world – the part we think relevant for our analysis. Interpretation is adding meaning. (I will not, at the moment, try to give a meaning for “meaning” – it is just whatever we add to a representation in order to possess knowledge. For that matter, I won’t discuss the nature of “representation”, either.) If I know a fox is in my chicken coop, I do not know exactly how many kilograms the fox weighs, or exactly where each chicken is relative to the position of the fox, but I know if I don’t get out there, fast, I am going to lose some chickens. Knowledge therefore involves both subtraction and addition. We subtract data that we believe irrelevant to our analysis (abstraction), and we add meaning by a process of interpretation.
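To make the subtraction-plus-addition picture concrete, here is a toy sketch in Python (entirely my own invention – the field names and the “danger” rule are illustrative, not anything from Sober):

```python
# Toy model of "knowledge = abstraction + interpretation".
# The full world state holds far more detail than we need.
world_state = {
    "fox_position": (3.2, 1.7),      # details we will deliberately discard
    "fox_mass_kg": 5.84,
    "chicken_positions": [(1.0, 2.0), (2.5, 0.4), (0.3, 3.1)],
    "fox_in_coop": True,
    "weather": "drizzle",
}

def abstraction(state):
    """Subtract: keep only the part of the world relevant to this analysis."""
    return {"fox_in_coop": state["fox_in_coop"]}

def interpretation(abstract):
    """Add: attach meaning to the bare representation."""
    if abstract["fox_in_coop"]:
        return "About to lose chickens -- get out there, fast."
    return "Chickens safe, for now."

print(interpretation(abstraction(world_state)))
```

The mirror and the disk copier skip both steps: they neither discard the irrelevant nor add the consequence.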
So what is “exact” knowledge – that is, knowledge that could “comprehend all the forces by which nature is animated and the respective situation of the beings who compose it”? It would seem to require a complete representation, leaving nothing out – i.e., perfect representation plus meaning, instead of abstraction plus meaning. It is therefore a purely additive process. But, as I discussed in “Creating the world” (11/5/09), a representation must be represented somewhere. Whatever mind or computing device is doing the knowing must have at least as many data storage locations as there are data to be represented – and in fact must have more, since exact knowledge is additive. But if exact knowledge is to leave nothing out – if it is to incorporate every possible thing that could have any influence whatsoever on the thing known – then it seems the knower must also know itself, and this leads to an infinite regress.
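To put the regress in back-of-envelope form (a toy counting argument of my own, not anything from Sober or Laplace): suppose the universe comprises N atomic facts, of which E ≥ 1 lie outside the knower and S are the states of the knower’s own storage cells, one fact per cell. Exact knowledge then demands

$$N = E + S, \qquad S \ge N \;\Longrightarrow\; E \le 0,$$

contradicting E ≥ 1. And since exact knowledge is additive, we would really need S > N, which is worse still. A knower built from the universe’s own stuff cannot exactly represent the whole, itself included.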
Maybe we could try a different formulation of “exact” knowledge. Thinking of the way “infinity” is usually represented in mathematics, we might try to define exact knowledge by some “limit” formulation, saying that, however much data we have already represented about the thing to be known, we can always, if necessary, represent more, so that, without ever claiming to have represented everything, we can always represent “as much as we please”. This formulation seems to require that the thing to be known is in some sense “small” with respect to the knower, but it stops short of requiring infinite regress. But is it good enough for Laplace’s demon? It seems that with this definition of “exact knowledge”, all we are saying is that we can make the probability that we have missed some important piece of information arbitrarily small. There always remains some non-zero probability that some important fact we haven’t considered can come crashing in and invalidate our model. It seems that Laplace’s demon stands in qualitatively the same relation to determinism vs. uncertainty as the rest of us – it just has a really big brain, so it can know a lot more.
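In quasi-mathematical dress (my formalization, not Laplace’s): the limit version says that for every tolerance ε > 0 the demon can assemble a representation R with

$$P(\text{some relevant fact is unrepresented} \mid R) < \varepsilon,$$

whereas the original, exact version demands a single R for which that probability is zero. The limit version never delivers the zero; every fixed R leaves a residue of risk.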
Another approach is to assume two separate universes – this approach might appeal to someone who still clings to some form of Cartesian dualism. The knower exists in a universe that is not part of the universe in which the known resides. But in order for this to escape the problem of regress, it must be impossible for the knowing universe to affect the known – the knower must be a pure observer, only – otherwise, the knower must still know itself in order to know all possible influences. We must also posit some form of one-way communication in which information can pass from our universe into the other without the information-bearing entities themselves being in any way touched or affected.
It seems that even if such a knower-in-a-separate-universe were to exist, it could be of absolutely no interest to beings in our universe. This separate-universe theory is of the same sort as the theory of pure, philosophical solipsism – the theory that only I exist, and everything else that seems to exist is merely a phantasm in my own mind. Each theory is completely untestable, as is inescapably implied by its own hypotheses. Beyond stating such a theory, and noting its inherent untestability, not much of interest can be said.
The above questions seem to render the thought experiment of Laplace’s demon useless as an argument for the determinacy of the universe. Does this imply that the universe is not determinate (i.e. that it is inherently stochastic)? Or could it be that it is determinate, but that this determinacy is unknowable? Personally, I find it hard to render the concept of “unknowably determinate” coherent, but perhaps some philosopher cleverer than I can do so.
Note that I’m not even discussing the possible implications of quantum mechanics. Quantum mechanics holds, of course, that there is inherent uncertainty at the level of the most fundamental constituents of the universe. (Quantum mechanics, by the way, although cast in the most abstruse mathematics – math well beyond my feeble capabilities – is at root an empirically based theory, developed not from abstract philosophical considerations, but in an attempt to explain some otherwise extremely intractable experimental data.) Quantum mechanics is sometimes – although not necessarily – held to imply that some “built-in” level of uncertainty exists at the macroscopic level, also.
Note that the above discussion is not about human limitations, or whether “exact knowledge” is possible to a human mind. The infinite regress problem does not say “no human brain can hold all this information”; it asks “how could the universe contain complete knowledge of itself?” Similarly, the “knowledge as a limit” idea is not about capabilities of human intelligence – in fact, I would argue that no human mind could even come close to getting “as close as we please” (in this quasi-mathematical sense) to exact knowledge about any real world problem. Rather, this is an argument about “theoretical” possibility, as I say above. I admit, I’m not really sure exactly what such a “theoretical” possibility means, except that if something is not “theoretically” possible, then it darn tootin’ is not a practical possibility, either.
Postscript: After writing this essay, I happened to look up the Wikipedia article on Laplace’s demon (http://en.wikipedia.org/wiki/Laplace's_demon). Some of the objections I make in this essay were covered therein (if somewhat more tersely), and the physical arguments, including the quantum-mechanical ones, were gone into more deeply. The Wikipedia article did not mention the “knowledge as a limit” idea, nor the fact that if the demon were in an alternate universe, some form of one-way (and only one-way) communication between universes would be necessary.
Thursday, November 5, 2009
Creating the world
Could there be an algorithm for the creation of the world? An algorithm in the sense of a description of (dynamical) initial conditions – matter, energy, various derivatives thereof – sufficiently complete that it would accurately predict the actual course of development of the whole universe? Where would such an algorithm reside? It would seem that it could not reside within those original conditions. A complete description of the state of anything seems to require at least as many data points as there are objects in (or attributes of, or whatever) the thing to be described. It therefore cannot be contained within the thing described without generating an infinite regress (or an infinite expansion, more like).
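The regress is easy to watch in miniature. Here is a toy sketch in Python (my own construction; the “universe” is just a list of strings): each time the universe tries to store a complete description of itself, the act of storing changes it, so the stored description immediately falls short.

```python
# Toy universe: a list of "cells". We try to make it contain a complete
# description of itself by appending that description as a new cell.
# Each attempt changes the universe, so the stored description is
# instantly stale -- the infinite regress (or expansion) in miniature.

universe = ["fox", "chicken", "coop"]  # everything that isn't description

for step in range(5):
    description = repr(universe)   # complete description of the CURRENT state
    universe.append(description)   # store it inside the universe...
    # ...which makes it a description of the PREVIOUS state only:
    print(f"step {step}: {len(universe)} cells, "
          f"stored description covers {len(universe) - 1}")
```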
Or can a part completely describe the whole? Can some rule-based description sufficiently describe something, even something of which it is a part, to allow perfect prediction? Doesn’t the absence of a complete description, in the sense above – a complete catalog of initial conditions – imply some necessary uncertainty as to outcome?
If the algorithm did not exist in the initial conditions, could it exist in some later evolution? Could the world evolve in such a way that it could eventually contain a complete description of the way it was at some former time? Doesn’t this imply that the future world has more stuff in it than the former one did? Doesn’t this defy conservation of matter/energy? But doesn’t every instant of the world contain stuff that the former didn’t? Because at every instant matter and energy are arranged differently than they were before. Isn’t this structure “stuff” in some sense? An object? A thing? A collection of things? (Even, potentially, an unlimited collection of things, in the sense that some observer might interpret the same structure in different ways, for the purpose of different analyses.) The law of conservation of matter and energy says only that there can be no net increase or decrease in the total quantity of matter/energy – it says nothing about “stuff” or “things” per se. New things, in the sense above, are created and destroyed by (rule-conforming) changes in the state of the world’s matter and energy all the time. Can these changes in state create an expansion of the total amount of “stuff” in a way that could include a complete description of a former state without requiring an infinite expansion?
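A back-of-envelope way to see both halves of this (a toy framing of my own, treating the world as N two-state cells – emphatically not a physics claim): the arrangement-stuff vastly outruns the matter-stuff, yet an exact record of one former arrangement still consumes the entire present arrangement.

```python
# N two-state cells of "matter", fixed in quantity. The number of possible
# arrangements (the structure-stuff) is 2**N, astronomically more than N.
# But recording ONE former arrangement exactly takes all N bits: the present
# state would have to be nothing but a record of the past, with no cells
# left over to be anything else.

N = 64
print(f"cells of matter/energy:  {N}")
print(f"possible arrangements:   {2 ** N}")
print(f"bits needed to record one former arrangement exactly: {N}")
```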
The answer to my original question may be “no”. I rather suspect that the structure-stuff cannot generate the kinds of things that could record sufficient data points to completely describe a former state of the matter/energy, unchanging in total quantity, of whose current state it is the structure. Which implies that the best we can even theoretically hope for in terms of world-generating algorithms is either a rule-based algorithm, to be applied to an incompletely specified set of initial conditions, which could create many different worlds – including, possibly (purely by chance), our own – or a complete algorithm, including initial conditions, of a much, much smaller universe.
But it sure is an interesting question to wonder about, in any case.
Labels: algorithms, metaphysics, ontology, physics