
Can Machines Think?

In a seminal paper published in 1950, titled "Computing Machinery and Intelligence", Alan Turing, the legendary mathematician, computer scientist and philosopher, dared to give scientific meaning to the question "Can machines think?". Any serious attempt to make sense of the question has to deal with a problem of definitions. What exactly do we mean by "thinking"? In what sense does a computer, or a human for that matter, think? In his paper, Turing goes as far as declaring the question of whether machines can think "too meaningless to deserve discussion". The problem is that "thinking" is a colloquial term, not a scientific one, and is therefore too ambiguous to allow for meaningful scientific scrutiny.


Moreover, if we believe that the act of thinking includes conscious experience, the question is mostly metaphysical. Descartes famously pointed out that the only thing a conscious being can establish with absolute certainty is their own existence. Descartes' cogito, ergo sum is an antidote to radical scepticism: a dedicated sceptic can doubt anything except their ability to doubt; the existence of a thinking entity, or self, that is responsible for the individual instances of subjective experience, generically called qualia, is the only incontrovertible fact about the human condition. This also means that, while we go about our daily lives assuming that there is such a thing as an outside world, anything external to the thought process could be a figment of our imagination, including other conscious observers. We can ascertain our own awareness because we subjectively experience qualia, but we cannot blindly extend that conclusion to other agents, no matter how conscious they appear. A machine could look, sound and act exactly like a human being without experiencing qualia, and therefore be as inanimate as your home computer.


The two attributes of intelligence (or how human-like the machine behaves) and consciousness (whether the machine is self-aware or not) are separate and should not be confused, as one can exist without the other. Turing's insight was to realize that intelligence is easy enough to determine, as behaviours are directly accessible to scientific inquiry and can be compared to one another, whereas consciousness, its nature being subjective, is impossible to ascertain outside of one's own sense of self. Therefore, the original question "Can machines think?" is too ambitious for science, and has to be replaced by something less challenging but more practical. Turing proposed that instead of asking whether machines can think, one could ask whether a machine could do well at what he called the imitation game, that is, whether a machine could fool a human being into believing that they were talking to another human.


Turing's argument was that if a machine could do as well as a human being in a conversation, then for all practical purposes the machine would think, in the narrow sense that he defined as the game of imitating human behaviour. Whether the machine actually feels something, or understands anything of what it is saying, is a philosophical issue that falls outside the realm of science. Why a conversation? Conversations are notoriously difficult for a computer program to master, because there is no simple algorithm that can navigate the endlessly creative maze of human language. This is why machines have been able to beat humans at chess, the ancient Chinese game of Go, and even StarCraft II, and yet there is still no artificial intelligence that can convincingly play the imitation game, or the Turing Test, as it is known in academic circles.


What was Turing's point? If a machine, which runs some kind of computer program, can behave in a way that is indistinguishable from a human being, does that mean that biological minds are themselves computational? In other words, is the brain itself just an extremely complicated computer, executing some very sophisticated algorithm which takes sensory inputs and outputs the myriad of different actions a human can take? If this were the case, consciousness could arise in silicon-based processors as well as in biological brains. The conclusion rests on the substrate independence hypothesis, which assumes that consciousness is a phenomenon that can arise in any sufficiently complex physical system, irrespective of its constituents. Biological neurons or silicon transistors are not crucial to the emergence of mental states, as these are the result of the complicated interactions between the basic components of the system. In this view, consciousness is related to the mathematical pattern of the computational network, not to the matter that is doing the information processing.


Many have tried to disprove the frightening idea that human minds are nothing special, and could one day be implemented in non-biological substrates in ways that equal, if not surpass, human intelligence. An early attempt was made by the Oxford philosopher John Lucas, whose argument was later refined by Roger Penrose in his book The Emperor's New Mind. The Penrose-Lucas argument goes something like this. Assume that the human mind is a machine implementation of a formal system strong enough to express basic arithmetic. Gödel's incompleteness theorem says that either the formal system is inconsistent or there exists some proposition in the mathematical language of the system that is neither provable nor disprovable within the system. One such undecidable sentence is a formal version of the self-referential proposition "I am not provable", which humans are nevertheless capable of recognising as true, something the machine, by construction, cannot do. It follows, so the argument goes, that human minds cannot be machine implementations of consistent formal systems.
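
To fix ideas, the Gödel sentence can be written schematically as below; this is the textbook formulation, with Prov_S the provability predicate of the system S, and is not specific to Lucas or Penrose.

```latex
% Schematic form of the Gödel sentence of a formal system S,
% where \mathrm{Prov}_S is the provability predicate of S and
% \ulcorner G \urcorner is the Gödel number (code) of G:
\[
  G \;\longleftrightarrow\; \neg\,\mathrm{Prov}_S\!\left(\ulcorner G \urcorner\right)
\]
% If S is consistent, S proves neither G nor \neg G: G is undecidable within S.
```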


There are two major problems with this argument against machine intelligence. First of all, human minds could just be inconsistent. I am sure that if you look around you will find plenty of evidence for all kinds of irrational behaviours. People are perfectly capable of holding inconsistent beliefs, and nothing seems to indicate that the human mind can only think coherent thoughts, especially considering the cold-blooded effort that we all know is needed to follow strictly logical rules. Gödel sentences are true and unprovable only in consistent systems. In an inconsistent system, one can prove any claim whatsoever simply because anything follows from a contradiction; that is, an inconsistent system will not be incomplete.

Alice laughed. "There's no use trying," she said: "one can't believe impossible things." "I daresay you haven't had much practice," said the Queen. "When I was your age, I always did it for half-an-hour a day. Why, sometimes I've believed as many as six impossible things before breakfast." - Lewis Carroll, Through the Looking-Glass

Moreover, and perhaps more importantly, even if the formal system is consistent, the fact that a human might recognise a truth that a given machine cannot does not imply that the human mind is non-algorithmic. Rather, it means that if the human mind is algorithmic, then its formal system is different from the one used by the given machine. Gödel's incompleteness theorem does more than just prove that any sufficiently strong axiomatic system is either incomplete or inconsistent. If we assume the consistency of such a system, then it follows from the theorem that the system is essentially incomplete, meaning that the axiomatic system is not only incomplete but incompletable, no matter how many undecidable Gödel sentences we add to it. If we add a known, but unprovable, truth as an axiom to the system, then the new system will also necessarily contain unprovable statements. In this fashion we could create an entire sequence of systems, each one capable of proving, thanks to its extra axiom, something that the previous system could not, but all of them nevertheless essentially incomplete.

Therefore, the mere fact that the human mind can recognise a truth that a specific axiomatic system cannot prove could simply be a direct consequence of the essential incompleteness of the original system. Or, to put it another way, it is quite possible that the mind is merely an axiomatic system which just happens to be further along in the sequence of incomplete systems built on top of the original. In that case, there would exist propositions that the human mind can determine to be true (algorithmically) but that a given machine cannot derive, simply because the mind, viewed as a formal system, is axiomatically more comprehensive. We might be very complicated Turing machines, but Turing machines nevertheless.
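
The standard way to picture this sequence, with Con(S) denoting a formal statement asserting the consistency of S, is as a tower of ever stronger theories; again, this is textbook material rather than anything specific to the Penrose-Lucas debate.

```latex
% A tower of theories, each obtained by adding the consistency
% statement of the previous one as a new axiom:
\[
  S_0 = S, \qquad S_{n+1} = S_n + \mathrm{Con}(S_n), \qquad n = 0, 1, 2, \dots
\]
% Each S_{n+1} proves the Gödel sentence of S_n, yet, by the second
% incompleteness theorem, each S_{n+1} (if consistent) is itself incomplete.
```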


The whole problem can be recast in a different, simpler, form. Gödel's incompleteness theorem states that there exist true statements which have no proof in formal systems that contain basic arithmetic. But how can we ascertain the truth of those statements without a formal proof? That is the crucial point upon which Lucas and Penrose built their anti-mechanistic argument: it must be that human intuition cannot be reduced to blind calculation, with the obvious consequence that the human mind is fundamentally different from a computer. The solution, however, as far as I can see, is much more prosaic. Assuming the formal system S is consistent, Gödel shows that there is a statement of S which is true but unprovable within the system. The statement is actually provable, just not in S: one needs the additional assumption that S is consistent, and that is not provable in S! The human mind can see that the Gödel sentence is true because it is allowed to make that extra assumption, and therefore to step outside the original axiomatic system. The misunderstanding of this critical point has been the cause of much confusion.
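
In symbols (a standard formalisation, not notation from the original papers): writing G_S for the Gödel sentence of S and Con(S) for its consistency statement, what is provable inside S is the implication, not its conclusion.

```latex
% The formalised first incompleteness theorem: S itself proves that
% "if S is consistent, then G_S holds"...
\[
  S \vdash \mathrm{Con}(S) \rightarrow G_S
\]
% ...but, by the second incompleteness theorem, S cannot prove its own consistency:
\[
  S \nvdash \mathrm{Con}(S)
\]
% A mind (or a stronger system) that assumes Con(S) can therefore
% detach G_S, with no non-algorithmic magic required.
```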


Another famous objection to computationalism (the view that the human mind is an information-processing system and that cognition and consciousness together are a form of computation) is Searle's Chinese Room. While Lucas and Penrose argued against machine intelligence, insisting that a computer can never behave like a human being because the mind has non-algorithmic features, the Chinese Room thought experiment presents an argument against machine consciousness, claiming that a digital computer executing a program cannot have a mind, understanding or consciousness, regardless of how intelligent or human-like its behaviour may be.


The thought experiment goes like this (I'm paraphrasing Wikipedia here, which has a very clear explanation of the argument). Suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being. The question Searle wants to answer is this: does the machine literally understand Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position strong AI and the second weak AI. A weak AI is an artificial intelligence which is capable of playing the imitation game successfully, that is, one that behaves indistinguishably from a human being. A strong AI experiences consciousness on top of that.


Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient papers, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer passed the Turing test this way, then, says Searle, he would pass it as well, simply by running the program manually. Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment: each simply follows a program, step by step, producing behaviour which is then interpreted by the user as demonstrating intelligent conversation. However, Searle himself would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.


Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that the strong AI position is false. Aphoristically, symbol-manipulation cannot produce understanding.


Searle's argument is, to me at least, even more problematic than Lucas and Penrose's. One obvious reason is that Searle is trying to argue against machine consciousness, which is an a priori unverifiable property of the mind. At the heart of Turing's brilliant idea lies a moral argument. Namely: if a computer interacted with us in a way that was indistinguishable from a human, we could of course still say that the computer wasn't really thinking, that it was just a simulation. But on the same grounds, we could also say that other people aren't really thinking, that they merely act as if they're thinking. So what is it that entitles us to go through such intellectual acrobatics in the one case but not the other? A strong AI skeptic needs to face this elementary fact: one can indeed give weighty and compelling arguments against the possibility of thinking machines. The only problem with these arguments is that they're also arguments against the possibility of thinking brains! Other people's consciousness is not directly accessible to us either, yet their behaviour is perfectly human-like. Should we conclude that they are all philosophical zombies, lacking conscious experience, qualia, or sentience? A die-hard skeptic could retort that we know other people are thinking because they are similar to us, while the structure of a robot's mind could be completely different from our own, down to the constituent material. The skeptic could believe that the substrate independence hypothesis is false, meaning that there is something special about the biological material of the human mind that allows for conscious experience. You can make up your own mind about the plausibility of such a scenario, but it is not unworthy of discussion.


What about the thought experiment per se? Well, an immediate objection to Searle's conclusion is that he might not understand Chinese, but the whole system does. Or, if you like, understanding Chinese is an emergent property of the system consisting of Searle and the rule book, in the same sense that understanding English is an emergent property of the neurons in your brain. Like other thought experiments, the Chinese Room gets its mileage from a deceptive choice of imagery; and more to the point, from ignoring computational complexity. We're invited to imagine someone pushing around slips of paper with zero understanding or insight. But how many slips of paper are we talking about? How big would the rule book have to be, and how quickly would you have to consult it, to carry out an intelligent Chinese conversation in anything resembling real time? If each page of the rule book corresponded to one neuron of a brain, then probably we'd be talking about a "rule book" at least the size of the Earth, its pages searchable by a swarm of robots traveling at close to the speed of light. When you put it that way, maybe it's not so hard to imagine that this enormous Chinese-speaking entity that we've brought into being might have something we'd be prepared to call understanding or insight.
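
To make the complexity point concrete, here is a back-of-the-envelope estimate of how long a person following rules by hand would need to emulate even one second of brain-scale activity; the figures (one rule applied per second, on the order of 10^14 synaptic events per second in a brain) are purely illustrative assumptions of mine, not Searle's.

```python
# Back-of-the-envelope estimate: how long would a person, looking up and
# applying one rule per second by hand, take to emulate one second of
# brain-scale activity? All figures are rough, illustrative assumptions.

SYNAPTIC_EVENTS_PER_SECOND = 1e14   # assumed order of magnitude for a human brain
RULES_PER_SECOND_BY_HAND = 1.0      # assumed pace of the person in the room
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

seconds_of_hand_work = SYNAPTIC_EVENTS_PER_SECOND / RULES_PER_SECOND_BY_HAND
years_of_hand_work = seconds_of_hand_work / SECONDS_PER_YEAR

print(f"Hand-simulation time per second of conversation: {seconds_of_hand_work:.1e} s")
print(f"That is roughly {years_of_hand_work:.1e} years")  # on the order of millions of years
```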


What is the state of AI research today? Despite the enormous progress of artificial intelligence in the last few decades, and the latest machine learning revolution, we are still quite far from thinking machines, the reason being that in trying to write programs that simulate human intelligence we're competing against a billion years of evolution. The human brain is an elaborate network of neurons and synapses, shaped by the particular and unrepeatable evolutionary history of our species over 4 billion years of life on Earth, and that makes artificial general intelligence particularly challenging to achieve. For instance, machine learning, which gives computer systems the ability to automatically learn and improve from experience without being explicitly programmed, covers only a small part of the brain's functionality, namely learning. Consciousness, however, is probably the result of the interconnected nature of the brain's functions, from the processing of sensory information to language, and is probably impossible to replicate without a full understanding of brain activity. Right now, we have close to no clue how the brain operates when performing even the simplest actions, let alone how something as mysterious as consciousness emerges from the complex dynamics of neuron firings.


Nevertheless, there are signs of things to come. In March 2016 AlphaGo, a computer Go program developed by the Google DeepMind team, played and won a five-game Go match against 18-time world champion Lee Sedol. Go is an abstract strategy board game invented in China more than 2500 years ago, and is widely believed to be the most complex strategy game ever developed by humans. With the number of legal board positions estimated to be around 10^170, Go requires intuition and creative thinking to be played effectively, as any brute-force approach would be doomed to failure.
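
To see just how doomed brute force is, here is a quick order-of-magnitude check; the evaluation rate is a deliberately generous, made-up assumption.

```python
# Order-of-magnitude check on brute-forcing Go. The estimated number of legal
# 19x19 positions is about 10^170; the evaluation rate below is a deliberately
# generous, made-up assumption.

LEGAL_POSITIONS = 10.0 ** 170
POSITIONS_PER_SECOND = 1e18          # hypothetical machine: a quintillion positions per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
AGE_OF_UNIVERSE_YEARS = 1.38e10

years = LEGAL_POSITIONS / POSITIONS_PER_SECOND / SECONDS_PER_YEAR
print(f"Years to visit every legal position once: {years:.1e}")
print(f"That is about {years / AGE_OF_UNIVERSE_YEARS:.1e} times the age of the universe")
```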


Prior to the match, Lee Sedol predicted that he would win in a landslide, but he was sorely mistaken. AlphaGo went on to win all but the fourth game, which it probably lost because Lee adopted an extreme style of play known as amashi, which capitalised on a known weakness in programs that rely on Monte Carlo tree search (no such weakness is present in AlphaGo's improved successor, AlphaZero). In the words of the commentators, AlphaGo "played so well that it was almost scary", the match clearly revealing that "AlphaGo is simply stronger than any known human Go player".
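
For context, Monte Carlo tree search chooses which branch to explore with a selection rule of the UCT family; a standard textbook form (not AlphaGo's exact variant, which also weighs in a learned policy prior) is the following.

```latex
% UCT selection rule: at a node visited N times, pick the move a that maximises
% the empirical win rate plus an exploration bonus, with c an exploration constant,
% W_a the wins recorded through move a and N_a the simulations through move a.
\[
  a^{*} \;=\; \arg\max_{a}\left( \frac{W_a}{N_a} \;+\; c\,\sqrt{\frac{\ln N}{N_a}} \right)
\]
```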


Even more surprising than AlphaGo's victory was its way of playing, which according to many professional commentators displayed true creativity and sometimes sheer brilliance, characteristics that are not generally associated with algorithms. In particular, AlphaGo's strategy throughout the match included plenty of anomalous moves which professional Go players described as looking like mistakes at first sight but as an intentional strategy in hindsight. Crucially, and unlike many human players, AlphaGo does not attempt to maximize its points or its margin of victory; it tries to maximize its probability of winning. If AlphaGo must choose between a scenario where it will win by 20 points with 80 percent probability and another where it will win by one and a half points with 99 percent probability, it will choose the latter, even if it must give up points to achieve it. For instance, AlphaGo's move 167 in game 2 seemed to give Lee a fighting chance and was declared an obvious mistake by commentators, but proved to be a perfect move in light of AlphaGo's counterintuitive strategy and superior skill.
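
A minimal sketch of that decision rule, with made-up numbers mirroring the example above (these are not AlphaGo's actual evaluations):

```python
# Minimal sketch of the decision rule described above: choose the move with the
# highest estimated win probability, ignoring the expected margin of victory.
# The candidate moves and figures are made up to mirror the 20-point vs
# 1.5-point example; they are not AlphaGo's real evaluations.

candidates = [
    {"move": "A", "win_probability": 0.80, "expected_margin": 20.0},
    {"move": "B", "win_probability": 0.99, "expected_margin": 1.5},
]

best = max(candidates, key=lambda m: m["win_probability"])
print(f"Chosen move: {best['move']} "
      f"(P(win) = {best['win_probability']:.0%}, expected margin = {best['expected_margin']})")
# A margin-maximising player would instead have picked move A.
```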


Lee apologized for his losses, stating that he "misjudged the capabilities of AlphaGo and felt powerless", emphasizing that the defeat was "Lee Sedol's defeat" and "not a defeat of mankind". He also added that while his defeat seemed inevitable, "robots will never understand the beauty of the game the same way that we humans do". What are we to make of this? Is Lee Sedol right? Despite AlphaGo's superhuman abilities, it is hard to argue that the relatively rudimentary computer program developed by the Google DeepMind team has any real understanding of the game. Nor does it need awareness in order to beat any human grandmaster. Consciousness is not a prerequisite for (narrow) intelligence, as the example of AlphaGo makes abundantly clear. In fact, we have every reason to believe that in humans consciousness is a byproduct of the full range of human cognitive abilities. In the examples we are familiar with, consciousness seems to be a consequence of general intelligence, but we simply don't know whether it is an inevitable consequence or simply an evolutionary accident. In any case, the story of AlphaGo teaches us one important lesson, namely that an algorithm can be supremely intelligent at one limited task, like playing Go, and exhibit human characteristics like creativity at the game, without having a sense of self or even understanding what the game is about. This, I think, is the quite astonishing and fairly counterintuitive moral of AlphaGo's story.


As technology advances, chances are computers will outperform humans not only at physical tasks but also at cognitive ones. If computers are already better than us at chess and Go, strategic board games traditionally associated with intelligence, what is left to us? Science and Art appear to be supremely human activities, seemingly outside the scope of a non-sentient entity. Is the act of creation what separates us from automata? What happens if and when humans build a general artificial intelligence? Come what may, the AI revolution is upon us, and it will probably force us to rethink what it means to be human.


