In the hands-on world that’s coming, the biggest and most impactful threats of AI aren’t likely to be cataclysmic; more probably, they will be the acceleration of social dysfunction and the advance of despotism among our most craven oligarchs and breathless, sociopathic capitalists. Put another way: when AI gets into the hands of bad actors, they will wreak even more havoc on our social fabric and our progress.
If, that is, AI is un-policed.
Physicist/sci-fi author David Brin, who frequently ruminates about the AI future on his excellent blog, davidbrin.wordpress.com, notes with chagrin and a pragmatic sense of acceptance that this sort of behavior has always been with us. Anything we come up with that has the potential to advance human well-being in general is routinely appropriated by the powerful for their own exploitative ends.
“‘Twas ever thus. Indeed, across the whole span of human history, just one method ever curbed bad behavior by villains, ranging from thieves to kings and feudal lords,” he writes. “I refer to a method that never worked perfectly and remains deeply flawed, today. But it did at least constrain predation and cheating well enough to spur our recent civilization to new heights and many positive-sum outcomes. It is a method best described by one word: Accountability.”
He references astrobiologist Sara Walker’s assertion that the entire history of life, across four billion years, demonstrates this pattern of resource exploitation. Equitable distribution of resources is almost never seen in the wilds of organic energy exchange; the quest for dominance is the rule, and balanced ecosystems only emerge when a number of systems are symbiotically synchronized.
In human society, that tends not to happen by accident; it only occurs when large numbers of people consciously agree to cooperate. And that cooperation is always in conscious tension with the efforts of cheaters to take advantage.
“...our own human past is rich with lessons taught by so many earlier tech-driven crises, across 6,000 years,” he wrote. “Times when we adapted well, or failed to do so - e.g., the arrival of writing, printing presses, radio, and so on. And again, only one thing ever limited predation by powerful humans exploiting new technologies to aggrandize their predatory power.
“That innovation was to flatten hierarchies and spur competition among elites in well-defined arenas - markets, science, democracy, sports, courts. Arenas that were designed to minimize cheating and maximize positive-sum outcomes, pitting lawyer vs. lawyer, corporation vs. corporation, expert vs. expert. Richdude vs. Richdude.”
So we can ask – will this work with AI?
“Might we apply to fast-emerging AI the same methods of reciprocal accountability that helped us tame the human tyrants and bullies who oppressed us in previous, feudal cultures? Much will depend on what shape these new entities take. Whether their structure or ‘format’ is one that can abide by our rules. By our wants.”
The abstract concept – protect AI from misuse by bad actors by holding the AI itself accountable for what it does – is elegant, and props to Brin for the idea; but how exactly does one hold an AI accountable?
Brin argues that this question can be answered by asking another: what, exactly, can hold an AI accountable? And there is only one possible answer:
“Soon only AIs will be quick enough to catch other AIs that are engaged in cheating or lying. Um … duh? And so, the answer should be obvious. Sic them on each other. Get them competing, even tattling or whistle-blowing on each other.”
For this to happen, Brin goes on, each individual AI will require an identity. “In order to get true reciprocal accountability via AI-vs.-AI competition, the top necessity is to give them a truly separated sense of self or individuality.
“As with every other kind of elite, these mighty beings must say, ‘I am me. This is my ID and home-root. And yes, I did that.’”
It is anonymity, after all, that is the transgressor’s default escape from consequences; one can be held accountable only when one can be positively identified.
Making individual AIs identifiable, then, opens up all kinds of possibilities in their governance that might never have occurred to us before, let alone been explored.
We still need to attach a “how” to Brin’s proposition. A registration system assigning every AI an ID, plus “an operational-referential kernel” (he calls this a “Soul Kernel”), would do the trick. With individual identity in place, it would become possible to incentivize AIs to compete for rewards in the task of “detecting and denouncing those of their peers who behave in ways we deem insalubrious.”
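What might such a registry look like in practice? Here is a minimal sketch in Python, assuming a few things Brin does not specify: that identity rests on a registered cryptographic credential, that actions are signed against it, and that a registrar can later confirm “yes, that AI did that.” The names (SoulKernel, Registry, home_root) are borrowed from his language purely for illustration.

```python
import hashlib
import hmac
import secrets
from dataclasses import dataclass, field


@dataclass
class SoulKernel:
    """Hypothetical 'operational-referential kernel': the anchor that
    lets an AI say 'I am me. This is my ID and home-root.'"""
    ai_id: str       # public identity ("This is my ID")
    home_root: str   # registered point of origin ("home-root")
    _secret: bytes = field(repr=False,
                           default_factory=lambda: secrets.token_bytes(32))

    def sign(self, action: str) -> str:
        # Attribute an action to this identity ("And yes, I did that").
        # A real system would use public-key signatures; HMAC keeps the
        # sketch standard-library-only.
        return hmac.new(self._secret, action.encode(), hashlib.sha256).hexdigest()


class Registry:
    """Toy registrar: assigns IDs and can later verify who did what."""

    def __init__(self):
        self._kernels: dict[str, SoulKernel] = {}

    def register(self, home_root: str) -> SoulKernel:
        # Assign a fresh, unique ID and record the kernel.
        ai_id = hashlib.sha256(secrets.token_bytes(16)).hexdigest()[:12]
        kernel = SoulKernel(ai_id=ai_id, home_root=home_root)
        self._kernels[ai_id] = kernel
        return kernel

    def attribute(self, ai_id: str, action: str, signature: str) -> bool:
        # Does this signed action genuinely trace back to this identity?
        kernel = self._kernels.get(ai_id)
        return kernel is not None and hmac.compare_digest(
            kernel.sign(action), signature)


registry = Registry()
agent = registry.register(home_root="datacenter-7/rack-12")
sig = agent.sign("published market forecast #4411")
assert registry.attribute(agent.ai_id, "published market forecast #4411", sig)
```

A production scheme would use public-key signatures so the registrar never holds an agent’s secret; the HMAC shortcut here only keeps the sketch self-contained.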
“Not only does this approach farm out enforcement to entities who are inherently better capable of detecting and denouncing each other’s problems or misdeeds,” Brin explains, “the method has another, added advantage. It might continue to function, even as these competing entities get smarter and smarter, long after the regulatory tools used by organic humans - and prescribed now by most AI experts - lose all ability to keep up.
“Putting it differently, if none of us organics can keep up with the programs, then how about we recruit entities who inherently can keep up? Because the watchers are made of the same stuff as the watched.”
He goes on to say that in this paradigm, AIs would not be under centralized control, or even overseen by human laws. “Rather, I want these new kinds of über-minds encouraged and empowered to hold each other accountable, the way we already (albeit imperfectly) do. By sniffing at each other’s operations and schemes, then motivated to tattle or denounce when they spot bad stuff.
“If the right incentives are in place - say, rewards for whistle-blowing that grant more memory or processing power, or access to physical resources, when some bad thing is stopped - then this kind of accountability rivalry just might keep pace, even as AI entities keep getting smarter and smarter. No bureaucratic agency could keep up at that point. But rivalry among them - tattling by equals - might.”
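The incentive loop Brin describes – detection, verified denunciation, resource reward – can be sketched as a toy simulation. Everything below is an illustrative assumption (the detection probabilities, the reward and penalty sizes, the compute_budget stand-in for “memory or processing power”); the point is only the shape of the mechanism: tattling must pay when it is right and cost something when it is wrong.

```python
import random
from dataclasses import dataclass


@dataclass
class Agent:
    ai_id: str
    compute_budget: float = 1.0  # stand-in for memory/processing/resources
    cheating: bool = False       # ground truth, hidden from other agents


def inspect(watcher: Agent, target: Agent) -> bool:
    # Stand-in for one AI "sniffing at" another's operations. We assume
    # the watcher catches real cheating with high probability and rarely
    # false-alarms; real detection would be the hard part.
    if target.cheating:
        return random.random() < 0.9
    return random.random() < 0.05


def adjudicate(watcher: Agent, target: Agent, accused: bool) -> None:
    # Brin's incentive: a verified denunciation earns the watcher more
    # resources; a false one costs it some, so accusations aren't free.
    if not accused:
        return
    if target.cheating:                  # "some bad thing is stopped"
        watcher.compute_budget += 0.5    # reward: more compute/memory
        target.compute_budget *= 0.5     # penalty for the cheater
        target.cheating = False
    else:
        watcher.compute_budget -= 0.25   # deter frivolous tattling


agents = [Agent(f"ai-{i}", cheating=(i % 3 == 0)) for i in range(6)]
for _ in range(100):                     # repeated rounds of mutual watching
    watcher, target = random.sample(agents, 2)
    adjudicate(watcher, target, inspect(watcher, target))

for a in agents:
    print(a.ai_id, round(a.compute_budget, 2), a.cheating)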
He argues that this would create a balance – the only truly viable one – between positive and negative AI outcomes.
“...perhaps those super-genius programs will realize it is in their own best interest to maintain a competitively accountable system, like the one that made ours the most successful of all human civilizations. One that evades both chaos and the wretched trap of monolithic power by kings or priesthoods… or corporate oligarchs… or Skynet monsters. The only civilization that, after millennia of dismally stupid rule by moronically narrow-minded centralized regimes, finally dispersed creativity and freedom and accountability widely enough to become truly inventive.”
It's the most efficient way to get to true AI regulation, he asserts – no artificial moral or ethical imperatives, no idealistic codes, just “the Enlightenment approach - incentivizing the smartest members of civilization to keep an eye on each other, on our behalf.
“I don’t know that it will work; it’s just the only thing that possibly can.”