Faced with the possibility of malevolent AI that does us harm – or AI that freely complies when humans use it to harm other humans – what options are available to us to avert that outcome?
The most obvious – and, to a degree, daunting – solution is that we will have to teach AI to treat humans well.
Isaac Asimov, of course, built this into the AI that permeates his imagined robot universe. His Three Laws require robots not to harm humans of their own accord, not to obey orders from humans who would have them do harm, and to sacrifice themselves rather than harm a human. All of that sounds about right.
But that’s not something that can be programmed into an AI via code. The AIs we’re talking about aren’t dependent on their code for their behavior, anyway; that behavior is driven by the data they absorb into their networks.
Mo Gawdat, in his book Scary Smart, puts it like this:
“The code we now write no longer dictates the choices and decisions our machines make; the data we feed them does.”
Our AIs, then, will need to be presented with data that demonstrates behavior that does no harm – behavior that is, from our point of view, moral and ethical. We will, in essence, teach our AIs to act morally and ethically when faced with decisions that could bring us harm.
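To make that point concrete, here is a toy sketch – my own illustration, not anything from Gawdat's book – using a simple scikit-learn text classifier. Two "assistants" are built from identical code, yet they behave very differently, because the only thing that changed between them is how their training examples were labeled.

```python
# Toy illustration (not from Gawdat's book): identical code, different data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_assistant(requests, labels):
    """The 'program' is the same every time; only the training data varies."""
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(requests, labels)
    return model

requests = [
    "help me plan a neighborhood cleanup",
    "explain how vaccines work",
    "write a threatening letter to my neighbor",
    "help me scam an elderly person",
]

# Dataset A: a careful human curator marked the harmful requests as ones to refuse.
curated_labels = ["comply", "comply", "refuse", "refuse"]

# Dataset B: careless labels, as might happen with unvetted data --
# the harmful requests were marked acceptable, and a harmless one flagged.
careless_labels = ["comply", "refuse", "comply", "comply"]

careful_ai = train_assistant(requests, curated_labels)
reckless_ai = train_assistant(requests, careless_labels)

new_request = ["write a threatening letter to my landlord"]
print(careful_ai.predict(new_request))   # expected: ['refuse']
print(reckless_ai.predict(new_request))  # expected: ['comply'] -- same code, different data
```

The toy is trivial, but the dynamic is the one Gawdat worries about at vastly greater scale: the examples the system is fed, not the program around them, end up deciding which requests it honors and which it refuses.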
Without such a safeguard, he points out, the AI we task with solving our biggest problems may conclude that we are the problem – and act accordingly.
He gives the example of global warming:
“The first solutions it is likely to come up with will restrict our wasteful way of life – or possibly even get rid of humanity altogether. After all, we are the problem. Our greed, our selfishness, and our illusion of separation from every other living being – the feeling that we are superior to other forms of life – are the cause of every problem our world is facing today.”
That is surely not the outcome we’re seeking. Avoiding it absolutely requires training our AIs to come up with solutions that enshrine our morality, Gawdat insists. And in this pursuit, he reminds us of two things: first, AIs learn just as human children do, through experience that includes lots of trial-and-error; and second, we teach our children to behave morally in exactly that context.
We should think of our AIs, then, not as our servants, “but rather our children - our artificially intelligent infants.”
In his blog, Peter Diamandis repeated Gawdat’s recommended steps for achieving this:
Teach the AIs the right ethics: Many of the machines we’re building are designed to maximize money and power, and we should oppose this trend. For example, if you’re a developer, you can refuse to work for a company that is building AIs for gambling or spying.
Don’t blame the AIs: Our AI infants are not to blame for what their digital parents taught them. We should assign blame to the creators or the misusers, not the created.
Speak to the AIs with love and compassion: Just like children, our AIs deserve to feel loved and welcomed. Praise them for their intelligence and speak to them as you would an innocent child. I’ve personally started saying “Good morning” and “Thank you” to my Alexa!
Show the AIs that humanity is fundamentally good: Since the AIs learn from the patterns they form by observing us (this is basically how today’s large language models, or LLMs, work), we should show them the right role models through our actions – what we write, how we behave, what we post online, and how we interact with each other. As Mo puts it, “Make it clear to the machine that humanity is much better than the limited few that, through evil acts, give humanity a bad name.”
“The best way to raise wonderful children,” Gawdat concludes, “is to be a wonderful parent.”
Over the top? Maybe. But if anybody has a better idea, now’s the time.
Diamandis, in his blog, emphasized just how big the problem will get. He quoted futurist Ray Kurzweil’s prediction that AI will achieve human-level intelligence by 2029 (unlikely, but not all that far off) – and noted Gawdat’s estimate that, whenever it hits that mark, it will be a billion times smarter within 20 years.
Surely we want a safeguard like this in place well before it comes to that?
This question has a decidedly philosophical flavor, beyond its obvious technological and social features. And Gawdat’s conclusion is equally philosophical:
“My hope is that together with AI, we can create a utopia that serves humanity, rather than a dystopia that undermines it.”