New technology has taken the world stage. And, as always happens when new technology takes the world stage, a loud chorus has arisen, declaring that the sky is falling.
In the particular case of AI, that stage was set many years ago by James Cameron, with the Terminator movies. But even Arnold Schwarzenegger’s T-800 had its ancestors: Colossus; Westworld’s Gunslinger; HAL 9000. We’ve suffered existential dread over AI, thanks to the movie industry, for half a century.
It becomes easy, then, to push back against the growing cacophony, to be dismissive of that spreading Skynet Fever, to consider the doomsayers nothing more than reflexive Chickens Little.
On the other hand... it is inarguably true that AI is a technology different in kind from any of its predecessors. Inarguably true that it is orders of magnitude more complex. And, in principle, as historic in human evolution as fire.
And where the Skynet Fever chorus is focused primarily on “AI killing us all” or “AI taking us over”, there is no denying that AI presents less existentially rooted terrors that should have us plenty worried: economic upheaval, societal disruption, the empowerment of bad actors on a global scale.
The thing is...
We have watched right-wing politicians and media players wage a decades-long campaign to diminish public trust in professional expertise and educated opinion, and we have pushed back, asserting that expertise and educated opinion are indispensable in parsing our future. We can’t really roll that back now, and shouldn’t if we could.
And many of the voices now alerting us to the dangers of AI are, in fact, those most expert in the field, those with the most educated opinions.
Oh, we can be dismissive of Elon Musk, who doesn’t have a sincere bone in his body; but when Geoffrey Hinton and Jonathan Haidt speak up, as referenced above, we are obliged to listen.
Some of those experts registered their concern in a 2023 survey reported by Stanford University’s Institute for Human-Centered AI: 36 percent of the AI researchers polled said they believe AI could bring about a “nuclear-level catastrophe.”
That sounds pretty hyperbolic, but stronger voices still are on record sharing that view – and they are certainly voices that deserve our ear:
Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.
“The genie is out of the bottle. We need to move forward on artificial intelligence development, but we also need to be mindful of its very real dangers. I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans.”
Alan Turing: “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control.”
But it is Sam Harris who may put this in the most compelling context. Here’s a sampling of his statements on the subject of AI threat.
“Imagine we’ve just built a superintelligent AI, right, that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones. So this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
“The other thing that’s worrying, frankly – imagine the best-case scenario: imagine we hit upon a design of superintelligent AI that has no safety concerns; we have the perfect design the first time around. It’s as though we’ve been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device: it can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we’re talking about the end of human drudgery. We’re also talking about the end of most intellectual work.
“What would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario; to be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. And so it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.”
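(Harris’s arithmetic, for what it’s worth, checks out – if you grant him that million-fold speedup. A week of machine time at a million times human speed is a million weeks of human-level work, which comes to roughly 19,200 years; call it 20,000. And a six-month lead, multiplied the same way, is his 500,000 years.)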
And this:
“What especially worries me about artificial intelligence is that I’m freaked out by my inability to marshal the appropriate emotional response. I think potentially it’s the most worrisome future possible, because we’re talking about the most powerful possible technology.”
“The quote from Stuart Russell, the computer scientist at Berkeley, is, ‘Imagine we received a communication from an alien civilization which said, People of Earth: we will arrive on your planet in fifty years. Get ready.’ That is the circumstance we are in, fundamentally. We’re talking about the seeming inevitability that we will produce superhuman intelligence – intelligence which, once it becomes superhuman, then it becomes the engine of its own improvements. Then there’s really kind of just a runaway effect where we can’t even imagine how much better it could be than we are.”
“And at a certain point, we will build machines that are smarter than we are. And once we have machines that are smarter than we are, then they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an ‘intelligence explosion’ – that the process could get away from us.
“Now this is often caricatured as a fear that armies of malicious robots will attack us. But that isn’t the most likely scenario: it’s not that our machines will become spontaneously malevolent; the concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.
“Just think about how we relate to ants. We don’t hate them; we don’t go out of our way to harm them (in fact, sometimes we take pains not to harm them; we step over them on the sidewalk). But whenever their presence seriously conflicts with one of our goals, we annihilate them without a qualm. The concern is that we will one day build machines that could treat us with similar disregard.
“It’s crucial to realize the rate of progress doesn’t matter, because any progress is enough to get us into the end zone. We don’t need Moore’s Law to continue, we don’t need exponential progress, we just need to keep going.
“So we will do this, if we can: the train is already out of the station, and there’s no brake to pull.”
And, finally, this:
“When you imagine the power that awaits anyone, you know, any government, any research team, any individual, ultimately, who creates a system that is superhuman in its abilities, and general in its abilities, well then, no one can really compete with you in anything. It’s really hard to picture the intellectual and scientific inequality that could suddenly open up.”
Whew!
My own view is that existential dread where AI is concerned is understandable but premature; AI as it is today doesn’t present the threat that Hawking, Turing and Harris are speaking of above. My worries center instead on the challenges that will inundate us before we get that far – economic and social disruption on an unprecedented scale, primarily.
But that doesn’t mean I’m in any way dismissive of Skynet Fever. I don’t think it will be like the movies – we won’t find ourselves confronting malevolent robot armies or unstoppable cyborgs – but I do think we’re in the most precarious position in living memory, the Cold War notwithstanding.
We don’t know what comes next, and we don’t know if we’re up for whatever that is. We have an abysmally bad record of handling this kind of socioeconomic adjustment with appropriate focus and sobriety; we leave altogether too much decision-making power in the hands of elites, and give too little thought to consequences. We may, through sheer carelessness and stupidity, allow the forward motion of AI to lead us over a cliff.
And so, until such time as we actually have a handle on all of this, I’m going to keep listening to the experts.