
Georg Baselitz, Man of Faith

I’m Afraid You Can’t Do That, HAL.
By Hadrian Wise.

There is no more vivid dramatisation of the potential dangers of Artificial Intelligence than the moment in 2001: A Space Odyssey when the ship’s computer, HAL, refuses to open the pod doors to let the astronaut, Dave, back on board. “I’m afraid I can’t do that, Dave,” HAL says in his affable if metallic way, explaining that his instructions to maximise the probability of the mission’s success are best carried out by eliminating the ship’s unreliable human element. Stanley Kubrick released the film in 1968, and in the years since then, fears about super-intelligent computers with minds of their own defying and harming their supposed human masters have grown more shrill with the development of the technology. When Deep Blue defeated Kasparov in 1997, and AlphaGo found a way to beat the world’s best Go players twenty years later, there was an understandable sense that it was only a matter of time; now that ChatGPT can write better essays than some students, many wonder if the so-called “singularity”, the moment an AI system becomes in some general sense equivalent to the human mind, is imminent. But what does “equivalent to the human mind” actually mean? If it involves a machine’s being able to think and learn independently like a human, to make its own decisions, to be aware of its own mental life, to have a “mind of its own”, then we can say in summary what it means: to be conscious. That would be the singularity.

The most widely known suggestion for determining whether an AI system has reached the singularity is the Turing Test, proposed by the British mathematician Alan Turing. If the output of a machine becomes indistinguishable from the output of a human mind – if we cannot tell whether we are talking to a machine or to a human being – then we can say the machine is to all intents and purposes equivalent to a human. If that is the test, we are indeed frighteningly close to the singularity and may well have reached it already. But it doesn’t take long to see that the Turing Test is inadequate, and if it weren’t for Turing’s immense and deserved reputation as a genius, hero and martyr to anti-homosexuality laws, his “Test” would have been consigned to the dustbin of history’s worst ideas long ago. All the Turing Test measures is how well the machine can simulate the output of a mind. But simulating the output of X is not the same as being, or even being equivalent to, X. Simulating a conscious being does not make a machine a conscious being, any more than repeating what my grandmother says makes my grandmother’s parrot an English-speaker. Just because a computer can simulate consciousness does not mean the computer is conscious.

The most celebrated illustration of this crucial point is John Searle’s “Chinese Room” thought experiment. Searle, one of the greatest philosophers of the last century, asks us to imagine we are in a room with a slit in the wall through which people are posting squiggles we don’t recognise, these squiggles being, unknown to us, characters in “Chinese” (he does not specify which dialect). We have a pile of squiggles of our own, also, again unknown to us, characters in Chinese, and our job is to post back squiggles in response to the squiggles we receive. We have no idea what any of the squiggles means, but we do have an instruction manual, which tells us which squiggles to post back in response to whichever squiggles we get. By using the instruction manual, we are able, without knowing it, to post through grammatical sentences in Chinese that are reasonable replies to the sentences being posted to us, so that the people on the other side of the wall, who do understand Chinese, perceive us to be carrying on a conversation with them. Yet we do not understand a word of Chinese.
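The point can be caricatured in a few lines of code. The sketch below is mine, not Searle’s, and the phrases in its rule book are invented purely for illustration: the program matches incoming squiggles against a lookup table and posts back whatever the table dictates, with no representation anywhere of what any symbol means.

```python
# A toy "Chinese Room": the rule book is just a lookup table.
# The program matches input symbols and emits output symbols;
# nowhere does it represent what any of the symbols mean.
# (The entries below are invented for illustration only.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "今天天气好吗？": "今天天气很好。",   # "Is the weather good today?" -> "Yes, very good."
}

def room(squiggles_in: str) -> str:
    """Return whatever squiggles the manual says to post back."""
    # Default reply: "Sorry, I don't understand."
    return RULE_BOOK.get(squiggles_in, "对不起，我不明白。")

if __name__ == "__main__":
    # Looks like conversation from outside the room; understands nothing inside it.
    print(room("你好吗？"))
```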

This, Searle says, is the condition of the digital computer. It produces content without knowing what that content means. It has syntax, but no semantics. It places symbols in a particular order according to instructions – its program – without knowing, or needing to know, what those symbols mean. Deep Blue can manipulate enormous quantities of symbolic representations of chess moves at incomparably higher speed than Garry Kasparov, but unlike Kasparov, Deep Blue does not know it is playing chess, let alone anything else. If the fire alarm goes off in the middle of the game, Kasparov’s chances of surviving any conflagration are far higher than Deep Blue’s.

We have not managed to build a digital computer advanced enough to overcome these problems. Are we ever likely to? Searle’s answer is that we shall not, because we cannot, by definition. Could Searle be wrong? Could computers as we know them, digital computers, become conscious? We might be more confident if we could show that the human mind was just a very sophisticated digital computer, because in that case we would at least have an example in nature of a conscious digital computer. And as it happens, there is a long-standing, well-supported case in the philosophy of mind that yes, the human mind is indeed a digital computer; and if we accept that the mind is conscious – and, believe it or not, there are some philosophers and neuroscientists who refuse to grant this, but we shall ignore them for now – then we cannot say a digital computer can never be conscious, because we would have a clear example of one that is. We don’t, of course, understand exactly how it works, but on this view that is just because it is a fiendishly complicated digital computer.

You might think this hypothesis, known as mechanism, is on the face of it rather implausible. The other digital computers we know about, which we consider primitive by comparison with the human mind, seem to be a lot better at doing sums than the human mind is. On the other hand, the human mind is able at an early stage of development to guide the body around a room without bumping into things, something non-human computers still find remarkably difficult. But these objections are not decisive. In principle you could have a digital computer that was bad at sums and good at navigating a room. The decisive argument against mechanism is J.R. Lucas’s “Gödelian argument”, which goes like this. A digital computer is an instantiation of a formal logical system. This is uncontroversial. Gödel’s incompleteness theorem – which, again, while extraordinary, is uncontroversial – holds that in every consistent formal logical system rich enough to accommodate simple arithmetic – which the human mind of course can also do – there are formulae that are true yet unprovable within that system; which is strange, given that within such a system the only way of establishing a formula as true is to prove it in the system. These are formulae of the type, “Formula 17. Formula 17 is unprovable in the system.” Now if that formula were false, Formula 17 would be provable in the system, and whatever the system proves is true, so it would be true – but then it would be true and false at the same time, which is impossible. So it must be true. In which case, it must be unprovable in the system, which means the system cannot produce it as true. But you and I can see that it is true. And this, however insignificant, is a clear difference between us on the one hand, and the formal logical system, the computer, on the other. As Gödelian formulae can be constructed in any consistent formal logical system rich enough for arithmetic, it follows that no such system can be a complete model of the human mind, meaning no digital computer can be a human mind, meaning the human mind cannot be a digital computer.
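For readers who want the theorem in symbols rather than prose, the standard construction runs roughly as follows; this is a compressed sketch in modern notation, not Lucas’s or Gödel’s own presentation.

```latex
% A compressed sketch of the first incompleteness theorem (notation mine).
% Let F be a consistent formal system rich enough for simple arithmetic,
% and let Prov_F(x) be its provability predicate. Goedel constructs a
% sentence G_F that, in effect, says of itself that it is unprovable in F:
\[
  G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\bigl(\ulcorner G_F \urcorner\bigr)
\]
% If F is consistent, then F cannot prove G_F. But G_F asserts precisely
% its own unprovability in F, so G_F is true. Hence:
\[
  F \nvdash G_F \quad\text{and yet}\quad G_F \ \text{is true,}
\]
% which is the gap Lucas claims the human mind can see and the system cannot.
% (\nvdash and the corner quotes require the amssymb package.)
```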

There is one caveat. Gödel’s theorem applies to consistent systems. Could the human mind be an inconsistent computer? This is plausible if we use “inconsistent” in its popular sense, but in formal logic “inconsistent” means something more than just sometimes contradicting yourself. An inconsistent system asserts outright contradictions and carries on regardless, even when they are pointed out. Perhaps Donald Trump is an inconsistent computer? Human beings, by and large, implicitly understand what it means to be consistent, aim at consistency, and try to iron out inconsistencies when they are identified. Inconsistent computers, instantiating inconsistent formal logical systems, do not. So it seems implausible that we are inconsistent machines, and the hypothesis that we are consistent machines founders on the rock of Gödel’s theorem.

Thus, to sum up: if we accept Searle’s characterisation of digital computers, it seems implausible that they could ever be conscious; and if we cannot overcome Lucas’s Gödelian argument, we cannot adduce the human mind as a concrete example of a conscious digital machine. But of course, digital computers are not the only type of computer. There are now quantum computers. Could they one day become conscious? The honest answer is that we do not know. It is simply too early to say. In the meantime, we can ask: where are all the new antibiotics? If quantum AI were making progress, the one thing we might expect it to be doing is coming up with new antibiotics. We are always being told this is one of the things AI will do for us, so if these exciting new quantum computers are so much better than the primitive old digital ones – where are the new antibiotics? Draw your own conclusions.

None of this is to say AI is nothing to worry about. There is plenty to worry about. The impact on jobs, for example. Even without being conscious, AI could replace, and is replacing, a lot of the work done by human beings, which we shouldn’t find surprising, since after all, as Dr Johnson said, “It is remarkable how little the intellect is engaged in the discharge of any profession.” Now if our corporate masters and governments were agreeable, we could perhaps just find a way to tax the robots and usher in mankind’s long-yearned-for Age of Leisure, but unfortunately that is a big if. Then there is the problem of potential over-reliance on a technology that, despite its name, is not in the true sense of the word intelligent, and which has no common sense. If we weren’t alive before to the danger of assuming computers are always less fallible than human beings, then after the Post Office Horizon scandal we certainly are now. There is the potential for a deadening effect on free thought. ChatGPT and other AI applications are, in essence, vast statistical models built from sentences written on the Web, and they construct sentences by repeatedly selecting the statistically most probable next word. It is not hard to see how this might reinforce a certain uniformity, in a way analogous to the way social-media algorithms tend to distil an individual’s interests into a concentrated feed. What about AI weapons systems with insufficiently precise instructions on whom to target and whom to leave alone? Or indeed instructions that are all too precise? Or the real worry might be that the potential of AI has been over-hyped and we are looking at a gigantic “bubble” that will wreck economies around the world when it bursts. But barring that, machines with enormous computing power that lack the responsiveness to circumstances that comes with consciousness have, in short, all sorts of dangers of their own. I am sure that we can all think of several.
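The mechanism behind that worry about uniformity is easy to caricature. The toy sketch below is purely illustrative – the phrase and the counts are invented, and real systems work with learned probabilities over tokens rather than stored counts – but greedy selection of the most probable next word is enough to show the homogenising tendency.

```python
# Toy next-word selector: given the words so far, pick the continuation
# seen most often in a tiny, invented table of counts. Real systems use
# learned probabilities and some randomness in sampling, but always
# favouring the most probable next word yields the same output every time.
from collections import Counter

# Invented counts: how often each word followed "the cat sat on the".
NEXT_WORD_COUNTS = {
    ("the", "cat", "sat", "on", "the"): Counter({"mat": 41, "sofa": 7, "roof": 2}),
}

def most_probable_next(words: tuple[str, ...]) -> str:
    """Return the most frequently observed continuation, or a placeholder."""
    counts = NEXT_WORD_COUNTS.get(words)
    if counts is None:
        return "<unknown>"
    return counts.most_common(1)[0][0]  # always the same, most common word

if __name__ == "__main__":
    print(most_probable_next(("the", "cat", "sat", "on", "the")))  # -> "mat", every time
```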

Hadrian Wise is a frequent contributor to conservative journals.
