Most reasonable people will agree that AI, unregulated as it currently is, is a dangerous thing. The question becomes: how is AI best regulated?
Jerald Hughes, an information systems professor at the University of Texas, sees a huge problem looming. Congress, he points out, is not particularly adept at defining the things it regulates (it can’t even get “assault rifle” right, for instance); he rates US lawmakers’ ability to define AI precisely enough for regulation to be enforceable as “negligible”.
“That approach to regulation is doomed at the start by the foundations of language and logic,” he wrote. “And that start has a terrible loophole: even if they succeeded in perfectly defining AI as it exists today, all that would happen is that the major players would simply innovate changes major or minor which succeed in escaping from the legal definitions. Then they escape legal responsibility simply through novelty of the AI - and there will be thousands of varieties of AI, at a minimum.”
If Hughes is right – and it’s not a great leap to conclude that he probably is – does that mean the effective regulation of AI is beyond us?
No, he continues. Rather than regulating AI itself, Hughes suggests, “Regulation should proceed from a consideration of the uses to which digital systems are put, AI or not.”
The starting point would not be AI itself, but the uses to which it is put. Increasingly, AI will be deployed to automate all kinds of systems with which humans interact, from medical diagnostics to urban traffic systems. When AI is poised for introduction into any system where harm to humans, physical or otherwise, is a possibility, regulation should be put in place to constrain it.
This would have the side benefit, he points out, of freeing many AI applications from the need for regulation; an AI embedded in a video game is of no consequence, so a blanket policy covering it would be superfluous.
An AI placed in charge of a water treatment plant, he goes on, certainly would require regulation; but the aim would be to regulate not the AI itself, which opens the door to its slipping the shackles of legal definition, but the control of the water treatment plant.
Hughes states that such regulation would require human input at all major decision points; proven failsafes; layered alerts implemented across independent channels; human access to all information moving in and out of the system; and exhaustive event logging for analysis and forensics.
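Hughes does not spell out an implementation, but the shape of such a requirement is easy to sketch. The following Python sketch is purely illustrative, and every name in it (ControlGate, Proposal, the SAFE_RANGE bounds, the chlorine-dosage framing) is an assumption, not drawn from any real system; it simply shows the pattern of regulating the control point rather than the controller: a hard failsafe range, a human approval step at major decisions, alerts fanned out to independent channels, and an audit log of everything that crosses the boundary.

```python
# Hypothetical sketch: regulate the control point, not the AI.
# All names and figures here are illustrative assumptions.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("control-audit")

@dataclass
class Proposal:
    """An action proposed by any controller, AI or not."""
    source: str       # which controller produced it
    setpoint: float   # e.g. chlorine dosage in mg/L (hypothetical)
    rationale: str    # controller's stated reason

class ControlGate:
    """Wraps the actuator; every proposal must pass through here."""
    SAFE_RANGE = (0.2, 4.0)  # hypothetical hard failsafe bounds (mg/L)

    def __init__(self, alert_channels):
        self.alert_channels = alert_channels  # independent alert sinks

    def submit(self, p: Proposal) -> bool:
        # Exhaustive logging of everything entering the system.
        log.info("proposal from %s: %.2f (%s)", p.source, p.setpoint, p.rationale)
        lo, hi = self.SAFE_RANGE
        if not (lo <= p.setpoint <= hi):
            for alert in self.alert_channels:  # layered, independent alerts
                alert(f"REJECTED out-of-range setpoint {p.setpoint} from {p.source}")
            log.info("proposal rejected by failsafe")
            return False
        if not self.human_approves(p):  # human input at major decision points
            log.info("proposal rejected by operator")
            return False
        self.apply(p.setpoint)
        return True

    def human_approves(self, p: Proposal) -> bool:
        # Stand-in for a real operator console.
        return input(f"Apply setpoint {p.setpoint}? [y/N] ").strip().lower() == "y"

    def apply(self, setpoint: float):
        log.info("setpoint applied: %.2f", setpoint)  # audit trail of outcomes

# Usage: an out-of-range proposal trips the failsafe and the alert channels.
gate = ControlGate(alert_channels=[print])  # in practice: SMS, pager, SCADA alarm
gate.submit(Proposal("ai-controller-v2", setpoint=9.5, rationale="model drift"))
```

Note that nothing in the gate inspects what kind of controller produced the proposal; that is the point of Hughes's approach, since the constraints bind whatever sits behind the control boundary, AI or not.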
The regulation, then, would not call out AI specifically, but would apply to any control system placed in such a consequential role. In that way, the AI would fall under de facto regulation, making its creators accountable and rendering irrelevant its definition or any modifications it undergoes.
This approach is a pretty big shift from how regulation normally proceeds, and it would take a great deal of effort to hammer out; but a constructive effort here would still be far less complex and frantic than trying to perpetually maintain old-fashioned, definition-based regulation in this domain.