variety. There are other programs which have randomizing devices that will give some
variety, but not out of any deep desire. Such programs could be reset, with the internal
random number generator restored to the state it was in the first time, and once again the
same game would ensue. Then there are other programs which do learn from their mistakes,
and change their strategy depending on the outcome of a game. Such programs would not play
the same game twice in a row. Of course, you could also turn the clock back by wiping out
all the changes in memory which represent learning, just as you could reset the
random number generator, but that hardly seems like a friendly thing to do. Besides, is
there any reason to suspect that you would be able to change any of your own past
decisions if every last detail (and that includes your brain, of course) were reset to the
way it was the first time around?
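To make this kind of reset concrete, here is a minimal sketch in Python, using the standard random module as a hypothetical stand-in for such a program's randomizing device (the function name is illustrative, not anything from the text): restoring the generator to its original seed replays the identical sequence of "choices".

```python
import random

def play_moves(seed, n=5):
    # A stand-in "program" whose variety comes only from its random device.
    rng = random.Random(seed)      # reset the generator's internal state
    return [rng.choice(["L", "R"]) for _ in range(n)]

# Restoring the original seed replays the identical "game", move for move.
assert play_moves(seed=42) == play_moves(seed=42)
```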
But let us return to the question of whether "choice" is an applicable term here. If
programs are just "fancy marbles rolling down fancy hills", do they make choices, or not?
Of course the answer must be a subjective one, but I would say that pretty much the same
considerations apply here as to the marble. However, I would have to add that the appeal
of using the word "choice", even if it is only a convenient and evocative shorthand,
becomes quite strong. The fact that a chess program looks ahead down the various
possible bifurcating paths, quite unlike a rolling marble, makes it seem much more like
an animate being than a square-root-of-2 program. However, there is still no deep
self-awareness here, and no sense of free will.
Now let us go on to imagine a robot which has a repertoire of symbols. This robot
is placed in a T-maze. However, instead of going for the reward, it is preprogrammed to
go left whenever the next digit of the square root of 2 is even, and to go right whenever it
is odd. Now this robot is capable of modeling the situation in its symbols, so it can watch
itself making choices. Each time the T is approached, if you were to address to the robot
the question, "Do you know which way you're going to turn this time?" it would have to
answer, "No". Then in order to progress, it would activate its "decider" subroutine, which
calculates the next digit of the square root of 2, and the decision is taken. However, the
internal mechanism of the decider is unknown to the robot; it is represented in the robot's
symbols merely as a black box which puts out "left"s and "right"s by some mysterious
and seemingly random rule. Unless the robot's symbols are capable of picking up the
hidden heartbeat of the square root of 2, beating in the L's and R's, it will stay baffled by
the "choices" which it is making. Now does this robot make choices? Put yourself in that
position. If you were trapped inside a marble rolling down a hill and were powerless to
affect its path, yet could observe it with all your human intellect, would you feel that the
marble's path involved choices? Of course not. Unless your mind is affecting the
outcome, it makes no difference that the symbols are present.
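As an aside, this robot's "decider" subroutine is easy to make concrete. Here is a minimal sketch in Python (the function names are mine, not from the text): each turn is dictated by the parity of the next decimal digit of the square root of 2, a rule that looks random to anyone who cannot see inside the black box.

```python
from math import isqrt

def sqrt2_digit(n):
    """n-th decimal digit of sqrt(2) (n = 0 gives the leading 1),
    computed via the integer square root of 2 * 10**(2*n)."""
    return isqrt(2 * 10 ** (2 * n)) % 10

def decide(n):
    # The robot's black-box decider: left on an even digit, right on an odd one.
    return "L" if sqrt2_digit(n) % 2 == 0 else "R"

# sqrt(2) = 1.41421356...  ->  digits 4,1,4,2,1,3,5,6 give L R L L R R R L
print([decide(n) for n in range(1, 9)])
```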
So now we make a modification in our robot: we allow its symbols, including its
self-symbol, to affect the decision that is taken. Now here is an example of a program running
fully under physical law, which seems to get at the essence of choice much more deeply
than the previous examples did. When the robot's own chunked concept of itself enters
the scene, we begin to identify with the robot, for it sounds like the kind of thing we do. It
is no longer like the calculation of the square root of 2, where no symbols seem to be
monitoring the decisions taken. To be sure, if we were to look at the robot's program on a
very local level, it would look quite like the square-root program. Step after step is
executed, and in the end "left" or "right" is the output. But on a high level we can see the
fact that symbols are being used to model the situation and to affect the decision. That
radically affects our way of thinking about the program. At this stage, meaning has
entered the picture: the same kind of meaning as we manipulate with our own minds.
A Gödel Vortex Where All Levels Cross
Now if some outside agent suggests 'L' as the next choice to the robot, the suggestion
will be picked up and channeled into the swirling mass of interacting symbols. There, it
will be sucked inexorably into interaction with the self-symbol, like a rowboat being
pulled into a whirlpool. That is the vortex of the system, where all levels cross. Here, the
'L' encounters a Tangled Hierarchy of symbols and is passed up and down the levels. The
self-symbol is incapable of monitoring all its internal processes, and so when the actual
decision emerges ('L' or 'R', or something outside the system), the system will not be able
to say where it came from. Unlike a standard chess program, which does not monitor itself
and consequently has no ideas about where its moves come from, this program does
monitor itself and does have ideas about its ideas, but it cannot monitor its own processes
in complete detail, and therefore has a sort of intuitive sense of its workings, without full
understanding. From this balance between self-knowledge and self-ignorance comes the
feeling of free will.
Think, for instance, of a writer who is trying to convey certain ideas which to him
are contained in mental images. He isn't quite sure how those images fit together in his
mind, and he experiments around, expressing things first one way and then another, and
finally settles on some version. But does he know where it all came from? Only in a
vague sense. Much of the source, like an iceberg, is deep underwater, unseen, and he
knows that. Or think of a music composition program, something we discussed earlier,
asking when we would feel comfortable in calling it the composer rather than the tool of
a human composer. Probably we would feel comfortable when self-knowledge in terms
of symbols exists inside the program, and when the program has this delicate balance
between self-knowledge and self-ignorance. It is irrelevant whether the system is running
deterministically; what makes us call it a "choice maker" is whether we can identify with
a high-level description of the process which takes place when the
program runs. On a low (machine language) level, the program looks like any other
program; on a high (chunked) level, qualities such as "will", "intuition", "creativity", and
"consciousness" can emerge.
The important idea is that this "vortex" of self is responsible for the tangledness,
for the Gödelian-ness, of the mental processes. People have said to me on occasion, "This
stuff with self-reference and so on is very amusing and enjoyable, but do you really think
there is anything serious to it?" I certainly do. I think it will eventually turn out to be at
the core of AI, and the focus of all attempts to understand how human minds work. And
that is why Gödel is so deeply woven into the fabric of my book.
An Escher Vortex Where All Levels Cross
A strikingly beautiful, and yet at the same time disturbingly grotesque, illustration of the
cyclonic "eye" of a Tangled Hierarchy is given to us by Escher in his Print Gallery (Fig.
142). What we see is a picture gallery where a young man is standing, looking at a
picture of a ship in the harbor of a small town, perhaps a Maltese town, to guess from the
architecture, with its little turrets, occasional cupolas, and flat stone roofs, upon one of