But even if we do fail to understand ourselves, there need not be any Gödelian
"twist" behind it; it could be simply an accident of fate that our brains are too weak to
understand themselves. Think of the lowly giraffe, for instance, whose brain is obviously
far below the level required for self-understanding - yet it is remarkably similar to our own
brain. In fact, the brains of giraffes, elephants, baboons - even the brains of tortoises or
unknown beings who are far smarter than we are - probably all operate on basically the
same set of principles. Giraffes may lie far below the threshold of intelligence necessary
to understand how those principles fit together to produce the qualities of mind; humans
may lie closer to that threshold - perhaps just barely below it, perhaps even above it. The
point is that there may be no fundamental (i.e., Gödelian) reason why those qualities are
incomprehensible; they may be completely clear to more intelligent beings.
Undecidability Is Inseparable from a High-Level Viewpoint
Barring this pessimistic notion of the accidental inexplicability of the brain, what insights
might Gödel's proof offer us about explanations of our minds/brains? Gödel's proof
offers the notion that a high-level view of a system may contain explanatory power which
simply is absent on the lower levels. By this I mean the following. Suppose someone
gave you G, Gödel's undecidable string, as a string of TNT. Also suppose you knew
nothing of Gödel-numbering. The question you are supposed to answer is: "Why isn't
this string a theorem of TNT?" Now you are used to such questions; for instance, if you
had been asked that question about S0=0, you would have a ready explanation: "Its
negation, ~S0=0, is a theorem." This, together with your knowledge that TNT is
consistent, provides an explanation of why the given string is a nontheorem. This is what
I call an explanation "on the TNT-level". Notice how different it is from the explanation
of why MU is not a theorem of the MIU-system: the former comes from the M-mode, the
latter only from the I-mode.
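To make the shape of that TNT-level explanation fully explicit, it can be summarized schematically as follows (using the turnstile for TNT-theoremhood; this is only a compact restatement of the argument just given, not notation drawn from the text):

```latex
% TNT-level explanation of why S0=0 is a nontheorem:
% its negation is a theorem, and a consistent system never yields
% both a string and its negation.
\vdash_{\mathrm{TNT}} \sim S0{=}0
\quad\text{and}\quad
\mathrm{TNT}\ \text{is consistent}
\;\;\Longrightarrow\;\;
\nvdash_{\mathrm{TNT}} S0{=}0
```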
Now what about G? The TNT-level explanation which worked for S0=0 does not
work for G, because ~G is not a theorem. The person who has no overview of TNT will
be baffled as to why he can't make G according to the rules, because as an arithmetical
proposition, it apparently has nothing wrong with it. In fact, when G is turned into a
universally quantified string, every instance gotten from G by substituting numerals for
the variables can be derived. The only way to explain G's nontheoremhood is to discover
the notion of Gödel-numbering and view TNT on an entirely different level. It is not that
it is just difficult and complicated to write out the explanation on the TNT-level; it is
impossible. Such an explanation simply does not exist. There is, on the high level, a kind
of explanatory power which simply is lacking, in principle, on the TNT-level. G's
nontheoremhood is, so to speak, an intrinsically high-level fact. It is my suspicion that
this is the case for all undecidable propositions; that is to say: every undecidable
proposition is actually a Gödel sentence, asserting its own nontheoremhood in some
system via some code.
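Since everything here turns on the idea of viewing TNT through a code, a minimal sketch may help. The Python fragment below uses an arbitrary, made-up symbol-to-digit assignment (it is not the codon table used elsewhere in the book) to show how a Gödel numbering turns a TNT string into a single natural number, so that talk about strings can be recast as talk about numbers:

```python
# A toy Goedel numbering: assign each TNT symbol a fixed two-digit code and
# encode a string by concatenating the codes of its symbols.  The particular
# assignment below is arbitrary and chosen only for illustration.

SYMBOL_CODES = {
    "0": "66", "S": "12", "=": "11", "+": "36", "*": "23",
    "~": "62", "(": "32", ")": "33", "a": "26", "'": "40",
}

def goedel_number(tnt_string: str) -> int:
    """Map a TNT string to a single natural number, symbol by symbol."""
    return int("".join(SYMBOL_CODES[symbol] for symbol in tnt_string))

# The nontheorem S0=0 and its negation, the theorem ~S0=0:
print(goedel_number("S0=0"))    # 12661166
print(goedel_number("~S0=0"))   # 6212661166
```

Once some such code is fixed, "being a theorem of TNT" corresponds to an arithmetical property of a string's number, and it is precisely that level-crossing which any explanation of G's nontheoremhood must exploit.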
Consciousness as an Intrinsically High-Level Phenomenon
Looked at this way, Gödel's proof suggests - though by no means does it prove! - that there
could be some high-level way of viewing the mind/brain, involving concepts which do
not appear on lower levels, and that this level might have explanatory power that does not
exist - not even in principle - on lower levels. It would mean that some facts could be
explained on the high level quite easily, but not on lower levels at all. No matter how
long and cumbersome a low-level statement were made, it would not explain the
phenomena in question. It is the analogue to the fact that, if you make derivation after
derivation in TNT, no matter how long and cumbersome you make them, you will never
come up with one for G - despite the fact that on a higher level, you can see that G is true.
What might such high-level concepts be? It has been proposed for eons, by
various holistically or "soulistically" inclined scientists and humanists, that consciousness
is a phenomenon that escapes explanation in terms of brain-components; so here is a
candidate, at least. There is also the ever-puzzling notion of free will. So perhaps these
qualities could be "emergent" in the sense of requiring explanations which cannot be
furnished by the physiology alone. But it is important to realize that if we are being
guided by Gödel's proof in making such bold hypotheses, we must carry the
analogy through thoroughly. In particular, it is vital to recall that G's nontheoremhood
does have an explanation - it is not a total mystery! The explanation hinges on
understanding not just one level at a time, but the way in which one level mirrors its
metalevel, and the consequences of this mirroring. If our analogy is to hold, then,
"emergent" phenomena would become explicable in terms of a relationship between
different levels in mental systems.
Strange Loops as the Crux of Consciousness
My belief is that the explanations of "emergent" phenomena in our brains - for instance,
ideas, hopes, images, analogies, and finally consciousness and free will - are based on a
kind of Strange Loop, an interaction between levels in which the top level reaches back
down towards the bottom level and influences it, while at the same time being itself
determined by the bottom level. In other words, a self-reinforcing "resonance" between
different levels - quite like the Henkin sentence which, by merely asserting its own
provability, actually becomes provable. The self comes into being at the moment it has
the power to reflect itself.
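For readers who want the two self-referential sentences side by side, the contrast can be written compactly in the standard notation of provability logic, with Prov_TNT a provability predicate for TNT and corner quotes denoting Gödel numbering (a schematic summary only, not a derivation carried out in the text):

```latex
% Goedel sentence: asserts its own nontheoremhood; true but not provable in TNT.
G \;\longleftrightarrow\; \lnot\,\mathrm{Prov}_{\mathrm{TNT}}\!\left(\ulcorner G \urcorner\right)

% Henkin sentence: asserts its own provability, and thereby becomes provable
% (a standard result, Loeb's theorem, not spelled out in the text).
H \;\longleftrightarrow\; \mathrm{Prov}_{\mathrm{TNT}}\!\left(\ulcorner H \urcorner\right)
```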
This should not be taken as an antireductionist position. It just implies that a
reductionistic explanation of a mind, in order to be comprehensible, must bring in "soft"
concepts such as levels, mappings, and meanings. In principle, I have no doubt that a
totally reductionistic but incomprehensible explanation of the brain exists; the problem is
how to translate it into a language we ourselves can fathom. Surely we don't want a
description in terms of positions and momenta of particles; we want a description which
relates neural activity to "signals" (intermediate-level phenomena) - and which relates
signals, in turn, to "symbols" and "subsystems", including the presumed-to-exist
"self-symbol". This act of translation from low-level physical hardware to high-level
psychological software is analogous to the translation of number-theoretical statements
into metamathematical statements. Recall that the level-crossing which takes place at this
exact translation point is what creates Gödel's incompleteness and the self-proving
character of Henkin's sentence. I postulate that a similar level-crossing is what creates our
nearly unanalyzable feelings of self.
In order to deal with the full richness of the brain/mind system, we will have to be
able to slip between levels comfortably. Moreover, we will have to admit various types of
"causality": ways in which an event at one level of description can "cause" events at other
levels to happen. Sometimes event A will be said to "cause" event B simply for the