Title: Incompleteness
Incompleteness
- Suppose L is a logic and H(T,x) is a statement in L expressing that Turing machine T halts on input x.
- Thus H(T,x) is true if and only if T halts on input x.
- Recall that L is sound and effective. So:
  - If H(T,x) is provable in L then it is true, so T halts on input x.
  - If ¬H(T,x) is provable in L then H(T,x) is false, so T does not halt on input x.
- There is a Turing machine M such that M halts on input i if and only if ¬H(Ti,i) is provable in L.
- M can be constructed from a theorem prover for L (a sketch follows below).
- Let j be such that M is Tj.
- Claim: Tj loops on input j but ¬H(Tj,j) is not provable in L. (Incompleteness.)
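To make the construction concrete, here is a minimal Python sketch of how M could work; theorems_of_L() is a hypothetical placeholder for an enumeration of the theorems of L, which exists because L is effective, and not_H(i) stands for the formula ¬H(Ti,i).

```python
# Minimal illustrative sketch of the diagonal machine M.
# Assumption: theorems_of_L() is a hypothetical generator yielding every
# theorem of L in some effective order -- it exists because L is effective.

def not_H(i):
    # Placeholder encoding of the formula "not H(T_i, i)" as a string.
    return f"not H(T_{i}, {i})"

def theorems_of_L():
    """Hypothetical enumeration of all theorems of L."""
    raise NotImplementedError

def M(i):
    """Halt exactly when 'not H(T_i, i)' is provable in L; otherwise run forever."""
    target = not_H(i)
    for theorem in theorems_of_L():
        if theorem == target:
            return  # halt: a proof of not H(T_i, i) was found
    # If no proof is ever produced, the loop above never terminates.
```

M is itself a Turing machine, so M = Tj for some index j, and running M on its own index j gives the self-referential case analysed next.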
- Proof: If Tj halts on input j then by definition of M, ¬H(Tj,j) is provable in L. Since L is sound, Tj then loops on input j. Contradiction.
- Thus Tj loops on input j. By definition of M this implies that ¬H(Tj,j) is not provable in L.
- Thus ¬H(Tj,j) is true but not provable in L.
- The formula ¬H(Tj,j) asserts its own unprovability.
- If ¬H(Tj,j) is true then Tj does not halt on input j.
- This means that ¬H(Tj,j) is not provable in L.
- So if ¬H(Tj,j) is true, it is unprovable.
- If ¬H(Tj,j) is provable, L is unsound.
- Let S be the sentence "This sentence is false."
- This is self-referential, like ¬H(Tj,j).
- If S is true it is false.
- If S is false it is true.
- Either way, HL (human logic, discussed below) is contradictory unless this is not considered a valid sentence of HL.
- But which sentences, then, does HL allow?
- Similar to Russell's paradox.
- Let X be {S | S ∉ S}.
- Then is X ∈ X?
- If so, X ∉ X.
- If not, X ∈ X.
- Either way, set theory is contradictory.
- So set theory was modified to disallow this set.
Constructing H(T,x)
- Legend:
  - tape(i,t): symbol in tape cell i at time t.
  - state(t): state at time t.
  - scan(t): position of the cell scanned at time t.
  - Transition function of T for (state, symbol):
    - ns(state, symbol): new state.
    - wr(state, symbol): symbol written.
    - dir(state, symbol): direction moved; -1, 0, or 1.
- Let A be the conjunction of these formulae (|x| is the length of x and x_i its i-th symbol):
  - (0 ≤ i < |x|) → tape(i,0) = x_i
  - (i < 0 ∨ i ≥ |x|) → tape(i,0) = blank
  - state(0) = s (the start state of T)
  - scan(0) = 0
  - ∀t ∀i (scan(t) ≠ i → tape(i,t+1) = tape(i,t))
  - ∀t (state(t+1) = ns(state(t), tape(scan(t),t)))
  - ∀t (scan(t+1) = scan(t) + dir(state(t), tape(scan(t),t)))
  - ∀t (tape(scan(t),t+1) = wr(state(t), tape(scan(t),t)))
- Assume ns, wr, and dir are defined in A too.
- Let Hn(T,x) be A → (state(n) ∈ F)
  - F is the set of halting states of T.
  - N is the natural numbers 0, 1, 2, 3, …
  - Means that T halts in n steps on input x.
- Let H(T,x) be ∃n ∈ N Hn(T,x)
  - Means that T halts on input x.
  - H(T,x) is true if T halts on input x.
  - H(T,x) is false if T loops on input x.
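To see that A, and hence H(T,x), really can be produced mechanically, here is a small Python sketch that prints the conjuncts of A for a given machine; the dictionary format used for the transition table is an assumption made for this illustration.

```python
# Print the conjuncts of A for a Turing machine and input x.
# delta maps (state, symbol) -> (new state, written symbol, direction).
def formula_A(x, start_state, delta, blank="B"):
    parts = []
    # Initial tape contents: the input occupies cells 0 .. len(x)-1.
    parts += [f"tape({i},0) = {sym}" for i, sym in enumerate(x)]
    parts.append(f"(i < 0 or i >= {len(x)}) -> tape(i,0) = {blank}")
    parts.append(f"state(0) = {start_state}")
    parts.append("scan(0) = 0")
    # The four axioms describing one step of the computation, as on this slide.
    parts.append("forall t,i (scan(t) != i -> tape(i,t+1) = tape(i,t))")
    parts.append("forall t (state(t+1) = ns(state(t), tape(scan(t),t)))")
    parts.append("forall t (scan(t+1) = scan(t) + dir(state(t), tape(scan(t),t)))")
    parts.append("forall t (tape(scan(t),t+1) = wr(state(t), tape(scan(t),t)))")
    # Definitions of ns, wr, dir read off the transition table.
    for (q, a), (q2, w, d) in delta.items():
        parts.append(f"ns({q},{a}) = {q2}  and  wr({q},{a}) = {w}  and  dir({q},{a}) = {d}")
    return "  and\n".join(parts)
```

Hn(T,x) is then the implication A → (state(n) ∈ F), and H(T,x) simply quantifies n over N; nothing beyond this mechanical translation is needed.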
- Note that H(T,x) can be constructed from T.
- Thus if j is known, ¬H(Tj,j) can be constructed. This formula is true but not provable in L.
- j can be obtained from a Turing machine M that on input i tests if ¬H(Ti,i) is provable in L.
- ¬H(Ti,i) can be constructed as above.
- From L a theorem prover for L can be constructed, and M can be found.
- Thus one can easily construct ¬H(Tj,j).
- But ¬H(Tj,j) is true but not provable in L.
- Constructing H(T,x) for a particular T and x:
- Consider this TM, with rows listing (state, symbol read, new state, symbol written, move):
  - pass, 1, pass, 1, R
  - pass, 0, pass, 1, R
  - pass, B, del1, B, L
  - del1, 1, del2, B, L
  - del2, 1, stop, B, R
- Tape: B111011B. Start state: pass. (A simulation of this machine follows below.)
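As a check of the run, here is a short Python simulation of this machine; states and symbols are plain strings, the tape is a dictionary indexed by cell (matching tape(i,t)), and as an assumption of this sketch the head is started on the first 1 of the input so that the pass state sweeps to the right.

```python
# Simulate the five-rule machine above on the tape B111011.
delta = {  # (state, symbol) -> (new state, written symbol, move): R = +1, L = -1
    ("pass", "1"): ("pass", "1", +1),
    ("pass", "0"): ("pass", "1", +1),
    ("pass", "B"): ("del1", "B", -1),
    ("del1", "1"): ("del2", "B", -1),
    ("del2", "1"): ("stop", "B", +1),
}

def run(x, head, start="pass", blank="B", max_steps=1000):
    tape = {i: s for i, s in enumerate(x)}          # tape(i,0)
    state, scan = start, head                       # state(0), scan(0)
    for t in range(max_steps):
        if state == "stop":
            return t, tape                          # halted: state(t) = stop
        sym = tape.get(scan, blank)
        state, tape[scan], move = delta[(state, sym)]
        scan += move
    raise RuntimeError("no halt within max_steps")

steps, tape = run("B111011", head=1)                # head starts on the first 1
print(steps)                                        # 9: the machine reaches stop
print("".join(tape.get(i, "B") for i in range(0, 8)))   # final tape contents
```

In this run the machine reaches the stop state after 9 steps, so the corresponding Hn(T,x), and hence H(T,x), is true.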
- Let A be the conjunction of these formulae:
  - tape(0,0) = B ∧ tape(1,0) = 1 ∧ tape(2,0) = 1 ∧ tape(3,0) = 1 ∧ tape(4,0) = 0 ∧ tape(5,0) = 1 ∧ tape(6,0) = 1 (because x = B111011)
  - (i < 0 ∨ i ≥ 7) → tape(i,0) = B
  - state(0) = pass
  - scan(0) = 0
  - ∀t ∀i (scan(t) ≠ i → tape(i,t+1) = tape(i,t))
  - ∀t (state(t+1) = ns(state(t), tape(scan(t),t)))
  - ∀t (scan(t+1) = scan(t) + dir(state(t), tape(scan(t),t)))
- ∀t (tape(scan(t),t+1) = wr(state(t), tape(scan(t),t)))
- ns(pass,1) = pass ∧ wr(pass,1) = 1 ∧ dir(pass,1) = 1
- ns(pass,0) = pass ∧ wr(pass,0) = 1 ∧ dir(pass,0) = 1
- ns(pass,B) = del1 ∧ wr(pass,B) = B ∧ dir(pass,B) = -1
- ns(del1,1) = del2 ∧ wr(del1,1) = B ∧ dir(del1,1) = -1
- ns(del2,1) = stop ∧ wr(del2,1) = B ∧ dir(del2,1) = 1
- Let Hn(T,x) be A → (state(n) = stop).
- For this T and x, H(T,x) is true and T halts on input x.
- In general ¬H(Tj,j) is true but not provable in L.
- Paradox: How do we know that this formula is true when L does not know it?
- What logical system are we in when we think this way?
- We seem to have the ability to get outside of any logical system. Thus we must not reason using a fixed logical system.
- Lucas argued on this basis that human thought processes cannot be simulated by Turing machines.
- Let HL be human logic.
- If we know HL then we can construct a formula X that is true but not provable in HL.
- But we just proved X in HL; contradiction.
- Possible solutions:
  - We can never know HL.
  - HL is not sound.
  - HL is not effective. (We surpass Turing machines in thought.)
  - We can never know that HL is sound.
- All of these seem strange.
- Don't we even know how we think?
- Can't we think logically?
- Can we really exceed Turing machines in reasoning power?
- Is our thought process really so subtle that we cannot know it is sound?
- In fact HL does not need to contain all of human logic.
- HL just needs enough logic to be able to show that ¬H(Tj,j) is true but not provable in L.
- This restricted HL is known to us and very likely free from error.
- Either this restricted HL goes beyond Turing machines or cannot be shown (in HL) to be sound for the formula ¬H(Tj,j).
- Pick Peano arithmetic as the restricted HL.
- What does the restricted HL need to derive to obtain the contradiction?
- It needs to show that Tj halts on input j if and only if Tj constructs a proof that Tj does not halt on input j.
- It needs to show that if there is a proof that Tj does not halt on input j then in fact Tj does not halt on input j.
- The first part is straightforward.
- The second part is the problem.
- Thus no sufficiently powerful logic L can derive the statement "(X is provable in L) implies X".
- In fact this only has to be derivable for X of the form ¬H(T,x).
- Any sufficiently powerful finite logic that can do this is inconsistent (contradictory).
- But humans use the fact "If X is provable in L then X" all the time.
- Does this mean our thought processes cannot be captured by any logical system?
- If L is a logic such that L can derive the statement "(X is provable in L) implies X", then either L is inconsistent (can derive a contradiction) or L cannot be represented by a Turing machine (L is not effective).
- We can try to strengthen L.
- Let L′ be L with the statement "∀T ∀x ((¬H(T,x) is provable in L) implies ¬H(T,x))" added.
- In L′ one cannot derive the statement "∀T ∀x ((¬H(T,x) is provable in L′) implies ¬H(T,x))"!
- We can also add the statement ¬H(Tj,j) to L.
- But then for the new L, j will be different and ¬H(Tj,j) will not be derivable for the new j!
- For a logic L let G(L) be a statement which permits ¬H(Tj,j) to be derived. G(L) may be the statement ¬H(Tj,j) itself, or G(L) may be the statement "∀T ∀x ((¬H(T,x) is provable in L) implies ¬H(T,x))".
- Then in L + G(L) one can derive ¬H(Tj,j). But one cannot derive G(L + G(L))!
- Define Li by L0 = L and Li+1 = Li + G(Li).
- Let Lω be the limit (union) of all these logics.
- In Lω one still cannot derive G(Lω)!
- Then one can construct Lω + G(Lω), et cetera. Related to ordinals.
- The process never stops.
- A staircase to infinity.
- An attempt to reach the inexpressible.
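As a small illustration, the finite stages L0, L1, L2, … of this tower can be generated mechanically once G(·) is available; godel_sentence below is a hypothetical placeholder for the construction of G(L) described above.

```python
# Sketch of the tower L_0 = L, L_{i+1} = L_i + G(L_i).
# A "logic" is modelled here simply as a frozenset of axiom strings.

def godel_sentence(axioms):
    """Hypothetical placeholder for G(L): the statement that is true but
    unprovable for the logic axiomatised by `axioms`."""
    return "G(" + ", ".join(sorted(axioms)) + ")"

def tower(base_axioms, steps):
    """Yield L_0, L_1, ..., L_steps."""
    logic = frozenset(base_axioms)
    for _ in range(steps + 1):
        yield logic
        logic = logic | {godel_sentence(logic)}

for i, L_i in enumerate(tower({"PA"}, 3)):
    print(f"L_{i}: {len(L_i)} axioms")
# The union of all finite stages is L_omega; the point of the slide is that
# even L_omega (if effectively presented) has its own unprovable G(L_omega).
```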
- We can advance farther and farther, but there is always an infinite distance between us and ultimate understanding.
- Can a Turing machine construct the same sequence of logics, starting with some sound logic such as Peano arithmetic?
- What would it mean if it could?
- There is a shortcut to obtain a logic to solve the halting problem, but it has a drawback.
- Let L′ be L with all true statements of the form ¬H(T,x) added.
- Then L′ can solve the halting problem.
- If L is Peano arithmetic (assuming it is consistent), L′ is consistent too.
- But L′ is not effective. L′ does not have a theorem prover.
- No finite logic can solve the halting problem.
- The question remains: Do humans surpass Turing machines?
- The process of strengthening Peano arithmetic using Gödel's theorem can be made more formal.
- Let PA be Peano arithmetic.
- Let HL be the logic permitting us to apply Gödel's theorem repeatedly to PA and obtain PA, PA + G(PA), and other stronger logics from PA.
- Write HL ⇒ L to mean that logic L can be obtained from PA by such strengthening.
- Thus HL ⇒ PA.
- Also, if HL ⇒ L then HL ⇒ L + G(L).
- Also, if HL ⇒ Li for all i and the sequence Li is effectively computable, then HL ⇒ ∪i Li.
- Make HL into a logic -- say one can prove formula ¬H(T,x) in HL if for some logic L, HL ⇒ L and ¬H(T,x) can be proved in L.
- Suppose HL ⇒ HL. Then HL ⇒ HL + G(HL). Thus G(HL) is provable in HL. But this is a contradiction.
- Suppose it is not true that HL ⇒ HL. This seems strange -- if HL has a finite axiomatization then we should be able to write it down and reason about it.
- Because PA is (very likely) sound, HL is also very likely sound.
- If HL ⇒ HL does not hold, then HL does not have a finite axiom system and we surpass Turing machines.
- This is a more concrete version of Lucas' paradox.
- The essence of consciousness -- awareness of ourselves. The ability to view our own thought processes as objects from the outside. To infer HL ⇒ HL.
- The ability to formalize any logical system we use.
- Can a computer with sensory input be self-aware in this sense?
- Could it be that a computer with sensory input and the ability to see and think about itself surpasses Turing machines?
- Note that such sensing devices would be able not only to observe the machine M but also the sensing devices themselves.
Lucas' view
- Goedel's theorem must apply to cybernetical
machines, because it is of the essence of being a
machine, that it should be a concrete
instantiation of a formal system. It follows that
given any machine which is consistent and capable
of doing simple arithmetic, there is a formula
which it is incapable of producing as being
true---i.e., the formula is unprovable-in-the-system--but which we can see to be true. It follows
that no machine can be a complete or adequate
model of the mind, that minds are essentially
different from machines.
Lucas' view (cont'd)
- Goedel's theorem seems to me to prove that Mechanism is false, that is, that minds cannot be explained as machines. So also has it seemed to many other people: almost every mathematical logician I have put the matter to has confessed to similar thoughts, but has felt reluctant to commit himself definitely until he could see the whole argument set out, with all objections fully stated and properly met. This I attempt to do.
- For every machine there is a truth which it
cannot produce as being true, but which a mind
can. This shows that a machine cannot be a
complete and adequate model of the mind. It
cannot do everything that a mind can do, since
however much it can do, there is always something
which it cannot do, and a mind can.
- The paradoxes of consciousness arise because a
conscious being can be aware of itself, as well
as of other things, and yet cannot really be
construed as being divisible into parts. It means
that a conscious being can deal with Goedelian
questions in a way in which a machine cannot,
because a conscious being can both consider
itself and its performance and yet not be other
than that which did the performance.
- A machine can be made in a manner of speaking to
"consider" its own performance, but it cannot
take this "into account" without thereby becoming
a different machine, namely the old machine with
a "new part" added. But it is inherent in our
idea of a conscious mind that it can reflect upon
itself and criticize its own performances, and no
extra part is required to do this: it is already
complete, and has no Achilles' heel.
- We can even begin to see how there could be room
for morality, without its being necessary to
abolish or even to circumscribe the province of
science. Our argument has set no limits to
scientific enquiry; it will still be possible to
investigate the working of the brain. It will
still be possible to produce mechanical models of
the mind.
- Only, now we can see that no mechanical model will be completely adequate, nor any explanations in purely mechanist terms. We can produce models and explanations, and they will be illuminating; but, however far they go, there will always remain more to be said. There is no arbitrary bound to scientific enquiry; but no scientific enquiry can ever exhaust the infinite variety of the human mind.
Lucas' view
- The arguments I put forward in "Minds, Machines
and Goedel" and then in Freedom of the Will have
been much attacked. Although I put them forward
with what I hope was becoming modesty and a
certain degree of tentativeness, many of the
replies have been lacking in either courtesy or
caution. I must have touched a raw nerve.
- In recent years I have been less zealous to
defend myself, and often miss articles
altogether. There may be some new decisive
objection I have altogether overlooked. But the
objections I have come across so far seem far
from decisive.
Encyclopedia article
- Roger Penrose claims that this (alleged)
difference between "what can be mechanically
proven" and "what can be seen to be true by
humans" shows that human intelligence is not
mechanical in nature. This view is not widely
accepted, because as stated by Marvin Minsky,
human intelligence is capable of error and of
understanding statements which are in fact
inconsistent or false.
- However, Marvin Minsky has reported that Kurt
Gödel told him personally that he believed that
human beings had an intuitive, not just
computational, way of arriving at truth and that
therefore his theorem did not limit what can be
known to be true by humans.
Penrose's view
- The famous "incompleteness" theorem of Gödel
(illustrated by a particularly striking but
elementary example of it known as Goodstein's
theorem) tells us that human mathematical
understanding cannot be encapsulated in any
(knowably sound) computational procedure. This
has the implication that there is something
involved in human understanding that lies beyond
the actions of any computer. Understanding is a
particular manifestation of human consciousness,
so it appears that it is conscious mentality that
lies outside computational activity.
- Why do I believe that consciousness involves
noncomputable ingredients? The reason is Gödel's
theorem. I sat in on a course when I was a
research student at Cambridge, given by a
logician who made the point about Gödel's theorem
that the very way in which you show the formal
unprovability of a certain proposition also
exhibits the fact that it's true. I'd vaguely
heard about Gödel's theorem that you can
produce statements that you can't prove using any
system of rules you've laid down ahead of time.
- But what was now being made clear to me was that
as long as you believe in the rules you're using
in the first place, then you must also believe in
the truth of this proposition whose truth lies
beyond those rules. This makes it clear that
mathematical understanding is something you can't
formulate in terms of rules. That's the view
which, much later, I strongly put forward in my
book The Emperor's New Mind.
Criticism of Penrose
In The Emperor's New Mind [Penrose, 1989] and especially in Shadows of the Mind [Penrose, 1994], Roger Penrose argues against the "strong artificial intelligence thesis," contending that human reasoning cannot be captured by an artificial intellect because humans detect nontermination of programs in cases where digital machines do not. Penrose thus adapts the similar argumentation of [Lucas, 1961], which was based on Goedel's incompleteness results, to one based instead on the undecidability of the halting problem, as shown by [Turing, 1936]. Penrose's conclusions have been roundly critiqued, for example, in [Avron, 1998; Chalmers, 1995a; LaForte et al., 1998; Putnam, 1995].
In a nutshell, Penrose's argument runs as follows:
1. Collect all current sound human knowledge about non-termination.
2. Reduce said knowledge to a computer program.
3. Create a self-referential version of said program.
4. Derive a contradiction.
The conclusion (by reductio ad absurdum) is that the second step is invalid: a program cannot incorporate everything humans know! (The reasoning is that humans can know that a self-referential version of this program does not halt, but the computer program cannot know this.)
Human mathematical intuition
Ramanujan had several extraordinary
characteristics which set him apart from the
majority of mathematicians. One was his lack of
rigor. Very often he would simply state a result
which, he would insist, had just come to him from
a vague, intuitive source, far out of the realm
of conscious probing. In fact, he often said
that the goddess Namagiri inspired him in his
dreams. This happened time and again, and what
made it all the more mystifying -- perhaps even
imbuing it with a certain mystical quality -- was
the fact that many of his "intuition theorems"
were wrong.
Mahalanobis: Now here is a problem for you.
Ramanujan: What problem, tell me?
I read out the question from the Strand Magazine.
Ramanujan: Please take down the solution. (He dictated a continued fraction.)
The first term was the solution I had obtained. Each successive term represented successive solutions for the same type of problem as the number of houses in the street would increase indefinitely. I was amazed.
Mahalanobis: Did you get the solution in a flash?
Ramanujan: Immediately I heard the problem, it was clear that the solution was obviously a continued fraction. I then thought, "Which continued fraction?" and the answer came to my mind. It was as simple as this.
Johann Martin Zacharias Dase, who lived from 1824 to 1861 and was employed by various European governments to perform computations, is an outstanding example. He not only could multiply two numbers each of 100 digits in his head; he also had an uncanny sense of quantity. That is, he could just tell, without counting, how many sheep were in a field, or words in a sentence, and so forth, up to about 30. Incidentally, Dase was not an idiot.
- The statement ¬H(Tj,j) is true but not provable in L.
- This means that neither H(Tj,j) nor ¬H(Tj,j) can be proven in L. (H(Tj,j) is false, so by soundness it is not provable either.)
- If A is a statement and neither A nor ¬A can be proven in L, then A is said to be undecidable in L.
- Gödel's theorem gives an undecidable statement, but other such statements are known.
Other undecidable statements
- The combined work of Gödel and Paul Cohen has
given concrete examples of undecidable statements
(statements which can be neither proven nor
disproven): both the axiom of choice and the
continuum hypothesis are undecidable in the
standard axiomatization of set theory. These
results do not require the incompleteness
theorem.
- In 1973, the Whitehead problem in group theory
was shown to be undecidable in standard set
theory. In 1977, Paris and Harrington
proved that a statement in combinatorics, a
version of the Ramsey theorem, is undecidable in
the axiomatization of arithmetic given by the
Peano axioms but can be proven to be true in the
larger system of set theory. Kruskal's tree
theorem, which has applications in computer
science, is also undecidable from the Peano
axioms but provable in set theory. Goodstein's
theorem is a relatively simple statement about
natural numbers that is undecidable in Peano
arithmetic.
- Gregory Chaitin produced undecidable statements in algorithmic information theory and in fact proved his own incompleteness theorem in that setting.
- The complexity (or Kolmogorov complexity) of a string is the length of the shortest program which, when run without any input, outputs that string.
- Chaitin's incompleteness result: though we know that most strings are complex in the above sense, the fact that a specific string is complex can never be proven (if the string's length is above a certain threshold). The precise formalization is as follows.
- Suppose we fix a particular consistent axiomatic system for the natural numbers, say Peano's axioms. Then there exists a constant L (which only depends on the particular axiomatic system and the choice of definition of complexity) such that there is no string s for which the statement
  - I(s) ≥ L
- can be proven within the axiomatic system (even though, as we know, the vast majority of those statements must be true).
- Thus there are very many true, unprovable statements in any sufficiently strong axiom system.
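A rough way to see why, sketched below: if some statement "I(s) ≥ L" were provable, a short program could enumerate all proofs, find the first such statement, and print the string s it mentions, giving s a description far shorter than L and contradicting that statement. The theorem enumeration is a hypothetical placeholder.

```python
# Conceptual sketch of Chaitin's argument; not runnable as-is because the
# proof enumeration of the axiom system is a hypothetical placeholder.

def theorems():
    """Hypothetical generator yielding every theorem of the axiom system,
    as a string, in some effective order."""
    raise NotImplementedError

def first_provably_complex_string(L):
    """Return the string s from the first provable statement 'I(s) >= L'."""
    suffix = f") >= {L}"
    for stmt in theorems():
        if stmt.startswith("I(") and stmt.endswith(suffix):
            return stmt[len("I("):-len(suffix)]

# This search, together with the value of L, is a program of length roughly
# c + log(L) for a fixed c.  If it ever returned a string s, that s would have
# a description much shorter than L, contradicting the provable (hence true)
# statement I(s) >= L.  So for L large enough no such statement is provable.
```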
Nonstandard Models
- ¬H(Tj,j) is ∀n ∈ N ¬Hn(Tj,j).
- This formula is true but not provable in L.
- It must be false in a nonstandard model. (By Gödel's completeness theorem an unprovable formula is false in some model, and that model must be nonstandard because the formula is true in the standard one.)
- Thus there must be a nonstandard integer n such that Hn(Tj,j) is true.
- This must be so in any sound, effective logic that can express the integers and Turing computations.
- There is no formula A such that A(x) is true if and only if x is a standard integer.
- If there were, then the formula ∀n ∈ N (A(n) → ¬Hn(T,x)) would hold in all models if and only if T did not halt on input x.
- Thus this formula would be provable if and only if T does not halt on input x.
- This would provide a solution to the halting problem (sketched below).
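To spell out that last step, here is a minimal sketch of the decision procedure one would get; both helpers are hypothetical placeholders, since neither such a formula A nor a prover with this property can actually exist.

```python
# Sketch only: decides halting under the slide's (impossible) assumption.

def simulate(T, x, k):
    """Hypothetical: run Turing machine T on input x for k steps;
    return True if it has halted by then."""
    raise NotImplementedError

def provable_within(formula, k):
    """Hypothetical: search the first k proofs of L for `formula`."""
    raise NotImplementedError

def halts(T, x):
    nonhalting = f"forall n in N (A(n) -> not H_n({T},{x}))"  # formula from this slide
    k = 1
    while True:
        if simulate(T, x, k):                 # T halted within k steps
            return True
        if provable_within(nonhalting, k):    # found a proof that T never halts
            return False
        k += 1

# Exactly one of the two searches must eventually succeed, so this would
# decide the halting problem -- which is impossible; hence no formula A
# defining the standard integers can exist.
```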
- We can't formally describe the integers.
- Any attempt to do so will also have nonstandard models that contain infinite integers.
- Then how do we know what the standard integers are?
- Is this evidence of intuition that goes beyond Turing machines?
- No formal system can even formally describe what it means for something to be finite.
- [Diagram: the standard integers 1, 2, 3, …, 10, … followed, beyond all of them, by the nonstandard integers.]
- In fact in any formal system there are models of the real numbers that have the same size as the integers (the Löwenheim-Skolem theorem).
- But we know there are more reals than integers (a bigger infinity).
- So no formal system can fully capture the idea of an infinity larger than the integers.
- Then what makes us think we know what such infinities are if we can't fully describe them? Do we surpass Turing machines again?
- Gödel's theorem thus shows that there must always exist such unusual, unintended interpretations of the system, as Henkin says (quoted in Turquette [50]).
- We tend to reinterpret Gödel's incompleteness result as asserting not primarily a limitation on our ability to prove but rather on our ability to specify what we mean ... when we use a symbolic system in accordance with recursive rules (Gödel, the synthetic a priori).
- Damjan Bojadziev, Mind versus Gödel, in M. Gams, M. Paprzycki and X. Wu (eds.), Mind Versus Computer, IOS Press 1997, pp. 202-210.
- Similarly, Polanyi says, though only in connection with the second theorem:
- "we never know altogether what our axioms mean" (Personal Knowledge, p. 259). "We must commit ourselves to the risk of talking complete nonsense if we are to say anything at all within any such system" (p. 94).
- (from Mind versus Gödel)
- Bojadziev concludes that our self-awareness is limited (we can never know HL).
- Applied to minds, it would translate to some principled limitation of the reflexive cognitive abilities of the subject: "certain truths about oneself must remain unrecognized if the self-image is to remain consistent" (Hofstadter 79, p. 696).
- Conclusion: It is not possible to see oneself completely, in the literal, metaphorical ("see" = "understand"), formal and computational sense of the word. Gödel's theorems do not prevent the construction of formal models of the mind, but support the conception of mind (self, consciousness) as something which has a special relation to itself, marked by specific limitations.
- How many true but unprovable statements are there?
- Are they rare or common?
- Chaitin showed that they are very common in any formal system.
- Why then does this not cause more of a problem for mathematics research?