One of the great thought experiments of the modern age cuts right to the heart of the AI discourse: can machines ever think like humans?
Many, of course, take for granted that they will. Because HAL 9000. And The Terminator. And Commander Data. But the belief burns even brighter among actual AI experts and practitioners; few are the technologists who don’t think machines will become conscious.
“Whether we are based on carbon or silicon makes no fundamental difference,” declares Dr. Chandra, HAL’s creator, in the film 2010.
This question had seized geek minds as far back as 1980, when Berkeley philosopher John Searle, invited to speak on the subject at an academic conference, constructed that great thought experiment.
The Chinese Room.
Imagine a closed room containing a human being and a set of reference books. In one wall of the room is a slot through which a person outside can insert a slip of paper covered with Chinese characters. The person in the room doesn’t understand Chinese, but they can nonetheless look up the characters in the books provided, follow the instructions in a rulebook that prescribes an appropriate series of Chinese characters as a response, and pass a slip of paper bearing those characters back out through the slot.
To the person on the outside, it would appear that the room understands Chinese. But the person inside the room doesn’t.
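To make the purely syntactic character of the setup concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration: the “rulebook” is a toy lookup table with a couple of placeholder entries, not anything Searle specified. The only point it demonstrates is that the procedure can be carried out by something with no grasp of what the characters mean.

```python
# A toy Chinese Room. The "rulebook" maps incoming symbol strings to outgoing
# symbol strings; the operator needs no knowledge of Chinese, only the ability
# to match shapes and copy out the prescribed response.
# The entries below are hypothetical placeholders, not a real dialogue system.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # matched purely by character shape
    "今天天气怎么样？": "今天天气很好。",
}

DEFAULT_SLIP = "请再说一遍。"              # fallback slip to push back out

def chinese_room(slip: str) -> str:
    """Return whatever response the rulebook prescribes for the incoming slip.

    No step here involves the meaning of any character; it is pattern
    matching and copying, nothing more.
    """
    return RULEBOOK.get(slip, DEFAULT_SLIP)

if __name__ == "__main__":
    # From outside the room, the exchange looks like understanding.
    print(chinese_room("你好吗？"))        # prints: 我很好，谢谢。
```

Swap the toy dictionary for a procedure of any complexity you like and nothing changes: the operator is still only matching and copying uninterpreted symbols.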
This scenario, Searle argued, mimics the linear, symbol-driven computer programming of the time. The programs were static; the processor executing them was just a very fast calculator, likewise static. The only thing in the Chinese Room that is not static is the person, and even they are restricted to a static, purely mechanical function.
If the person in the room doesn’t understand Chinese, how could a microprocessor possibly do so?
In response to Searle’s argument, the nerd mob rose up in protest, and you could hardly hear yourself think for the roar. Searle, they insisted, was declaring machine intelligence fundamentally impossible; and despite the complete absence of any proof to the contrary, this was considered heresy.
Mind you, Searle wasn’t saying anything of the sort; he was saying that digital processors and symbolic processing could never deliver machines that think like humans. In the four decades since, of course, nothing has emerged to convince us otherwise.
His many critics were all over the map in their denunciations. Sentience is ‘emergent’, some argued; but the Chinese Room is utterly static, while emergent systems are inherently dynamic (you can only have emergence in a system that changes). The person in the Chinese Room doesn’t understand, but the system as a whole does, others argued; no, the system can’t connect the person’s actual understanding, the product of their experience in the world, to whatever is going on with the symbols at the ‘system’s’ input and output. They’re just symbols; from the room’s standpoint, they have no meaning.
The debate has become a point of piety over the decades, with neither side budging an inch; it long ago devolved into a parlor game where Searle’s opponents content themselves with inserting their own custom definitions of “understanding” into the argument, sidestepping its central point:
If the Chinese Room, as a system, doesn’t have any understanding of the meaning of any of the symbols, then it is purely syntactical; semantics, which is to say meaning, is by definition absent. Searle’s point is not that machines will never be able to think, or become conscious; it’s that there can be no human-like thinking or consciousness without semantics, without meaning. A purely syntactical system can never think like a human, have intentionality, or become conscious, he still asserts today; if we had a purely syntactical AI that could even pass the Turing Test, interacting with us in English, it would still be empty inside. There would be no ‘emergence’. It could never achieve consciousness.
And now... we have an AI that can pass the Turing Test. An AI that interacts with us in English (or any other language). An AI that even answers our questions.
ChatGPT, put simply, is the Chinese Room.
It does all the requisite symbol-mapping, and it even does so without a rulebook: the app is based on neural networks, not conventional programming of the sort Searle was discussing. It can even learn (though the learning happens not in the running app itself, but in the retraining it undergoes between updates).
And no one who understands AI even a little bit, let alone those well-versed in deep learning and large language models, is about to argue that ChatGPT is sentient, that it has any self-awareness or intentionality, or that any of these traits could possibly emerge from it, even though it can learn. On the contrary, these same people are loudly insisting that ChatGPT is nothing even close to sentient.
ChatGPT is semantically empty. It does not know the meaning of a single word submitted to it, nor of a single word it offers in response.
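A minimal sketch makes that emptiness concrete. It uses the openly available GPT-2 model from the Hugging Face transformers library as a stand-in, since ChatGPT’s own internals aren’t publicly inspectable; the assumption here is only that the basic mechanics are of the same kind. What the model traffics in is integer token IDs, mapped to a probability distribution over the next token ID.

```python
# Requires: pip install transformers torch
# GPT-2 stands in for a large language model generally; ChatGPT's weights are
# not public, but the basic pipeline is the same sort of thing:
# integer token IDs in, a score for every possible next token ID out.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

ids = tokenizer.encode("The Chinese Room is", return_tensors="pt")
print(ids)  # the model never sees words, only a tensor of integers

with torch.no_grad():
    logits = model(ids).logits           # scores for every candidate next token
next_id = int(logits[0, -1].argmax())    # take the single most likely next token
print(tokenizer.decode([next_id]))       # map the integer back into characters
```

At no point does anything in that pipeline consult the meaning of “Chinese” or “Room”; the tokens are indices into lookup tables and weight matrices, exactly the kind of uninterpreted symbols the room shuffles.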
For myself, I’ve supported Searle’s argument all these decades, and been shouted down more times than I can count: by lettered academics, cock-sure IT professionals, and breathless, indignant fanboys. None of them has tried to hold up ChatGPT as evidence that the Chinese Room argument is wrong.
I’m not alone in holding up ChatGPT as evidence that it is right. Just google “ChatGPT Chinese Room”.
The Chinese Room has been, for more than 40 years, our standard touchstone for debating whether machine intelligence can truly rise to the level of our own. I submit that it has been replaced: ChatGPT illuminates the problem far better, and is stirring far more insight, than its predecessor.