In 2017, Google’s DeepMind team unveiled AlphaGo Zero, a successor to AlphaGo, the game-playing AI that had beaten Lee Sedol, one of the world’s strongest Go players. Unlike AlphaGo, Zero wasn’t trained on an archive of previous human play; from a tabula rasa beginning, AlphaGo Zero trained strictly by playing against itself, building up expertise and strategic superiority simply by playing and playing, at extremely high speed.
In three days, AlphaGo Zero achieved enough expertise to beat AlphaGo, 100 games to 0. After 40 days of self-training, it had surpassed every previous version of AlphaGo, making it the strongest Go player, human or machine, in the world.
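For readers who want a peek under the hood: per the DeepMind paper, Zero’s training objective is remarkably compact. A single neural network looks at a board position and outputs move probabilities p and a predicted outcome v; after each self-play game, the network is nudged so that p matches the move preferences π its own lookahead search produced, and v matches the game’s actual result z. Here is a minimal numpy sketch of that loss – the function name and numbers are illustrative stand-ins, not DeepMind’s code:

```python
import numpy as np

def alphago_zero_loss(p, v, pi, z, weights=None, c=1e-4):
    """AlphaGo Zero's self-play training loss, per the DeepMind paper:
    (z - v)^2 - pi . log(p) + c * ||weights||^2

    p       : move probabilities predicted by the network
    v       : predicted game outcome, in [-1, 1]
    pi      : move probabilities produced by self-play search (the target)
    z       : actual game outcome (+1 for a win, -1 for a loss)
    weights : network weights, for optional L2 regularization
    """
    value_loss = (z - v) ** 2
    policy_loss = -np.sum(pi * np.log(p + 1e-12))  # cross-entropy vs. search
    l2 = c * np.sum(weights ** 2) if weights is not None else 0.0
    return value_loss + policy_loss + l2

# Made-up example: a three-move position where search strongly favored
# move 0, and the game was ultimately won (z = +1).
p = np.array([0.5, 0.3, 0.2])    # what the network currently predicts
pi = np.array([0.8, 0.1, 0.1])   # what self-play search actually preferred
print(alphago_zero_loss(p, v=0.1, pi=pi, z=1.0))
```

Everything Zero knows about Go emerges from minimizing that one expression, millions of games in a row.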
What’s the math of all that self-play? Per the paper DeepMind published on AlphaGo Zero, the program played 29,000,000 games in its training phase. In human terms, assuming four games per day – 1,460 games per year – that’s almost 20,000 years of Go play. Imagine a human being living that long and playing that many games; would such a player not be invincible?
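The arithmetic, for anyone who wants to check it (the four-games-a-day pace is our assumption, not DeepMind’s):

```python
# Sanity check: 29 million self-play games at a human pace of 4 games a day.
games_played = 29_000_000              # from the AlphaGo Zero paper
games_per_year = 4 * 365               # = 1,460
print(f"{games_played / games_per_year:,.0f} years")  # -> 19,863 years
```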
In those 20,000 virtual years, AlphaGo Zero attempted, tested, assessed, discarded, and otherwise studied and learned more Go strategies than any human being ever could. It accumulated orders of magnitude more strategies than any human could hold, including a staggering archive of strategies that had never been played on an actual Go board. It was indomitable, out of the box.
Now, recall that earlier we discussed AGI and its essential component of analogical integration: networking deep-learning AIs like AlphaGo Zero with other models so that the AGI’s own network can mine the learning patterns of all of them, surfacing how each internally represents its domain-specific solutions, and fusing them into an ensemble learning model – a general solution pattern the AGI can apply across domains. In other words, an AGI can find solutions analogically, or it by definition isn’t an AGI.
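No such system exists yet, so any code here is pure illustration. But to make the idea concrete, here is a deliberately toy sketch of the “find the nearest analog across domains” step, with invented names and numbers throughout; the genuinely hard, unsolved part – projecting different models’ internal representations into one shared space – is simply assumed:

```python
import numpy as np

# Toy illustration of "analogical integration" (all names hypothetical):
# each domain-specific model exposes an internal embedding of a situation,
# and an integrator searches across domains for the nearest analog, so a
# strategy learned in one domain can be tried in another.

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend embeddings from three domain models, already projected into a
# shared space. In reality, building this shared space is the open problem.
library = {
    ("go", "squeeze opponent's territory"):      np.array([0.9, 0.1, 0.3]),
    ("chess", "trade pieces when ahead"):        np.array([0.2, 0.8, 0.1]),
    ("negotiation", "bluff from weak position"): np.array([0.1, 0.2, 0.9]),
}

# A new situation from a fourth domain, embedded into the same space.
new_situation = np.array([0.85, 0.15, 0.35])

best = max(library, key=lambda k: cosine(library[k], new_situation))
print("nearest analog:", best)  # -> ('go', "squeeze opponent's territory")
```

The real open problem lives in that library: nobody yet knows how to build embeddings that make a Go position and, say, a military standoff legitimately comparable.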
Next, consider just how good our domain-specific AIs already are, and consider that AlphaGo Zero (like every modern game-playing AI) is learning something purely conceptual; unlike identifying faces, steering trucks, or reading X-rays, chess and Go are nothing but ideas. But consider what those ideas represent in the real world, analogically:
“Here’s something we must keep in mind when thinking about games like chess and Go and their relationship to human intelligence,” wrote AI expert Melanie Mitchell. “Consider the reasons many parents encourage their kids to join the school chess club (or in some places the Go club) and would much rather see their kids playing chess (or Go) than sitting at home watching TV or playing video games (sorry, Atari). It’s because people believe that games like chess or Go teach children how to think better: how to think logically, reason abstractly, and plan strategically. These are all capabilities that will carry over into the rest of one’s life, general abilities that a person will be able to use in all endeavors.”
Put another way, chess and Go – which are, at their cores, purely mathematical and not about anything in the real world – nonetheless train the human brain to operate in ways that can be applied to real-world problems.
Chess is, of course, an excellent exemplar:
The USS Enterprise is beyond the edge of Federation space, charting previously unexplored territory. It encounters a starship of unknown make and model, hundreds of times its size, sent by an unknown race from some star system beyond to challenge Kirk and his crew.
After a brief and unproductive exchange, the commander of the alien starship – a fierce and frightening humanoid named Balok – declares that he and his peers have determined that the Enterprise must be destroyed, having violated their sovereign territory and demonstrated a propensity for violence (the Enterprise had destroyed an alien warning buoy in self-defense).
Kirk tries to withdraw, to negotiate, to deflect – move and countermove with Balok. Spock notes that Kirk is essentially playing chess. Finally, it occurs to Kirk to alter his strategy, drawing from another game – poker – in an attempt to defuse Balok’s aggression with a bluff.
In the end, of course, it works, and the Enterprise crew winds up meeting Balok and offering overtures of peace and partnership between their civilizations.
But the point is clear: this Star Trek example of applying game strategy to military conflict draws on a tried-and-true trope; the instances of warfare-as-chess found throughout literature and history are endless. The parents urging their kids to play chess and Go are, as Melanie Mitchell contends, nudging them to learn strategy and planning and cool reason, in the hope that they will apply them to human conflict and competition.
Finally, consider this:
When AGI arrives, and it is given control of major automated systems – military drone fleets, for instance – it will be able to apply, analogically, the planning and strategies gleaned from deep-learning game AIs like AlphaGo Zero (which will have been improved upon by orders of magnitude by then). Put another way, the AGIs in control of those major automated systems will be capable of strategic operation beyond the ability of human beings to anticipate or defend against.
Major automated systems like smart cities. And civilian traffic/airline systems. Financial networks. Physical plant resources. Healthcare resources.
We’d better not piss them off.