Tuesday, May 15, 2012

Computational Intelligence: Gödel's Theorem and the Turing Test



In this post I take an initial look at intelligence as it relates to computing theory. One of the issues raised here is the question: what is intelligence? I will return to the relationship between information and intelligence (in various forms) in other posts, in particular as I look at how information is used by various organisations. This section, however, sets the stage by looking at the extreme limits of what is possible with information technology. Future posts then paint the picture of what is done within the realms of possibility established here. Since we are looking at limits, I begin with a short synopsis of the relevant theoretical concepts. A logical place to start is Gödel's theorem, covered in the next section. Gödel's theorem leads into a discussion of what is possible using formal languages, such as computer programs, and finally I look at how this relates to questions of intelligence through the fundamental work of Alan Turing, who further explored the limits of computing.

Gödel’s Theorem

(The following section is primarily based on 'Gödel, Escher, Bach: An Eternal Golden Braid' by D.R. Hofstadter, 1979.)

Gödel turned number theory on itself to determine its limits. Number theory is a formal system, and a desirable property of a formal system is that any well-formed statement expressible in that system can be proved either true or false. This is analogous to saying that, given a mathematical equation, we would like to be able to prove in all cases, using mathematical rules, whether the equation holds or not. For example, is 1 + 2 equal to 3 or not? This seems a fairly elementary requirement for a formal mathematical system, and one might reasonably assume it is a property of all mathematical systems. In fact, at some level this is not true. Gödel proved, in a very ingenious and complicated way, that no matter how complex or sophisticated your formal system (provided it is powerful enough to express arithmetic), you face a dilemma: there will always be at least one statement that is either not provable in that formal system or, if its rules are applied, will be proved true when in fact it is false. Gödel proved this by applying formal mathematical rules to formal mathematical systems, i.e. by using meta-mathematics to analyse mathematics (by turning mathematics on itself, in particular number theory). Explaining this concept using what I will refer to as Formal System X, he produced a theorem that effectively said:

“I am not a theorem of Formal System X”                              (derived from Hofstadter, 1979, p. 448)

Prior to Gödel, this was presumed to be impossible: if a theorem was produced using the rules of a formal system, it had to be a true theorem of that formal system (by definition). But this theorem, produced using the rules of Formal System X, asserted that it could not both be produced using Formal System X and be true at the same time. Formal System X must be either incomplete (some true statements cannot be proved) or inconsistent (false statements can be proved true). The theorem is “undecidable” (Hofstadter, 1979, p. 449). Either way, this finding has very profound implications for the limitations of formal systems of logic and mathematics. It suggests that a formal “Theory of Everything” is unlikely and demonstrates that this limitation is inherent in any formal system of sufficient complexity (Stephen Hawking discusses this in a lecture he gave at Texas A&M University on the “End of Physics”). This includes formal systems proposed for emulating human thought, such as artificial intelligence. While this does not prevent practical use of such systems (expert systems, for example), the finding did constrain the aspirations of mathematicians, who had expected to find a rigorous proof that mathematics was contradiction-free. Regarding the practicality of these mathematical findings, they appear to be of more theoretical importance than practical significance. And in fact, there is at least one apparently knowledgeable person who questions and criticises the logic behind the techniques used here, in particular the process of diagonalisation that Gödel relied on for his proof, and who also claims that category errors are made when various versions of the ‘liar’ paradox (basic form: “This statement is false”) are treated as synonymous with Gödel’s statement.
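Gödel's trick of building a sentence that refers to itself has a familiar analogue in programming: a quine, a program whose output is exactly its own source code. The sketch below is my own illustration of the device, not anything from Hofstadter's book: a text template is applied to itself, much as Gödel's formula is applied to its own numeric encoding.

```python
# A minimal Python quine: the template `s` is applied to itself,
# so the program prints its own two lines of source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The self-application step (`s % s`) is exactly the kind of construction that the criticism discussed here objects to: the description of the program is manufactured from within the program.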
This criticism suggests that the liar paradox (and other similar paradoxes) arise from the accepted practice, in both mathematics and computer science, of creating and manipulating self-referential structures, for example placing lists inside themselves. This is a criticism of the logic behind the use of such structures, and while it may have some merit, current mathematical and computational theories allow such manipulations, and in the absence of an alternative logic the findings need to be considered in relation to the prevailing view. Thus, within this view, Gödel's theoretical results have implications for information processing, if one is to consider what is ultimately possible using computerised systems, and one person who considered this in detail was Alan Turing.
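The practice being criticised, placing a list inside itself, is perfectly legal in mainstream languages. A small Python illustration of the mechanics (my own example, not from any of the sources cited here):

```python
# Build a list that contains itself as an element.
xs = [1, 2]
xs.append(xs)             # xs is now self-referential

assert xs[2] is xs        # the third element is the list itself
assert xs[2][2][2] is xs  # the reference can be followed to any depth
print(xs)                 # Python displays the cycle as [1, 2, [...]]
```

The structure is finite, yet a naive traversal of it never terminates, which is the flavour of trouble the criticism points to.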

Turing and the Turing Test

Turing arrived at a result similar to Gödel's. He was trying to determine whether it is possible to prove, in all cases, whether or not an algorithm will halt (i.e. stop). His conclusion was that such a proof is not possible (technically: the problem is undecidable). This settled an important open question in mathematics at the time, Hilbert's Entscheidungsproblem, by showing that there is no general algorithm for deciding mathematical questions. It also established that, in the general case, one cannot prove that algorithms will halt, although there are many individual cases where such a proof is possible, and it is not hard to think of some examples (Penrose, 1989).

Turing is also famous for his answer to a question asked by artificial intelligence researchers (and others): whether artificial intelligence (i.e. an intelligent computer) is possible, and how we would know if a machine could really “think” (to many, but not all, this implies consciousness). Turing answered this with his famous imitation game (now known as the Turing Test), which sidestepped the arguments about what thinking is (and, by implication, what intelligence is) by arguing that we should accept that a computer can think when we cannot distinguish its responses from a human's. This avoids the complications associated with the many attempts to formally define thinking and intelligence. To eliminate the obvious physical differences between a computer and a human, which Turing distinguishes from thinking capabilities, all questions and answers are exchanged through a keyboard and a computer screen or printout (Turing, 1950). Turing (1950) argues that such a computer system is possible. In contrast to Turing's argument, Penrose (1989) stated his position that intelligence requires consciousness and that both intelligence (in a human sense) and consciousness are non-algorithmic.
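Turing's halting argument can itself be sketched as a program. Suppose, hypothetically, that someone supplied a function `halts(source, data)` that always correctly answers whether a given program halts on given input; the name and the Python framing are my own, purely for illustration. Feeding the `diagonal` routine below its own source code then forces a contradiction:

```python
def halts(source: str, data: str) -> bool:
    """Hypothetical halting decider: True iff the program whose text is
    `source` halts when run on `data`. Turing's argument shows no such
    total, always-correct function can exist, so this is only a stub."""
    raise NotImplementedError("no general halting decider exists")

def diagonal(source: str) -> None:
    # Ask the supposed decider about a program run on its own text,
    # then do the opposite of whatever it predicts.
    if halts(source, source):
        while True:   # decider said "halts", so loop forever...
            pass
    # ...decider said "loops forever", so halt immediately.

# Running diagonal on its own source makes halts wrong either way:
# whatever halts predicts, diagonal does the opposite.
```

Whatever answer the decider gives about `diagonal` run on itself is wrong, so no such decider can exist. This diagonal construction is the computational twin of Gödel's self-referential sentence.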
Penrose's position is quite bizarre from the perspective of classical computer science, as Turing proved that a universal Turing machine (logically equivalent to our modern computers) can compute all computable functions. Penrose's suggestion is thus that intelligence does not consist of some combination of computable functions, by which he appears to mean that it cannot be specified in a way that can be programmed (Penrose, 1989). Computer scientists might well regard this as an appeal to mysticism, i.e. some undefined and unknowable (in a scientific sense) influence outside the boundaries of science. However, in conjunction with Stuart Hameroff, Penrose later attributed consciousness to quantum physics and its effects (Penrose and Hameroff, 2011). If one engages with the theories and ideas presented in Penrose (1989) and Penrose and Hameroff (2011), then philosophical questions about determinism and free will are inevitably raised. Such questions influence arguments for particular systems of ethics, morals and self-responsibility, along the lines of: was he responsible for his criminal action, or was it just an inevitable consequence of his circumstances and upbringing? This might seem somewhat esoteric, but it is possible that such ethical questions, and the ideologies implied by them, profoundly influence our approach to managing our systems and our societies, and thus our use of technology. One way of thinking about this is that if we accept that circumstances determine people's actions, then this may lead to attempts to control circumstances in such a way as to achieve a 'desired' set of actions. Such control could be exercised by governments, marketing organisations or education systems in an attempt to produce an 'ideal' type of behaviour.
In this sense, concepts of people having an innate individuality and diversity in their talents and needs (and thus diversity in their behaviours) may well be denied in attempts to manipulate people to achieve a determined predictability in behaviour and outcomes. This issue may become clearer in further posts on this topic.

References

Hofstadter, DR 1979, Gödel, Escher, Bach: an eternal golden braid, Penguin Books.

Penrose, R 1989, The Emperor’s New Mind, Oxford University Press.

Penrose, R & Hameroff, S 2011, 'Consciousness in the Universe: Neuroscience, Quantum Space-Time Geometry and Orch OR Theory', Journal of Cosmology, vol. 14.

Turing, AM 1950, 'Computing machinery and intelligence', Mind, vol. 59, no. 236; reprinted in The Mind's I (eds DR Hofstadter & DC Dennett), Basic Books; Penguin Books, 1981.
