Amitai Etzioni, the famous sociologist, and his son Oren Etzioni, the famous computer scientist, have posted an intriguing paper on SSRN, Keeping AI Legal. The paper starts by outlining some of the many legal issues that will spin out from the progression of artificial intelligence (AI) in cars, the internet, and countless other devices and technologies–what they call “smart instruments”–given the ability of the AI programming to learn as it carries out its mission. Many of these issues are familiar to anyone following the bigger AI debate–i.e., whether it is going to help us or kill us, a question on which luminaries have opined both ways. Who is liable, for example, if an autonomous car runs off the road? What happens if a bank loan algorithm designed to select the best credit risks based purely on socially acceptable criteria (income, outstanding loans, etc.) begins to discriminate based on race or gender? The point is that AI smart instruments could learn over time to do things and make decisions that make perfect sense to the AI but break the law. The article argues that, given this potential, we need to think more deeply about AI and “the legal order,” defined not just as law enforcement but also as including preventive measures.
This theme recalls a previous post of mine on “embedded law”–the idea that as more and more of our stuff and activities are governed by software and AI, we can program legal compliance into the code itself–for example, to make falsifying records or insider trading impossible. Similarly, the Etzionis argue that the operational AI of smart instruments will soon be so opaque and impenetrable as to be essentially a black box when it comes to sorting out legal concerns like the errant car or the discriminatory algorithm. Ex ante human intervention to prevent the illegality will be impossible in many instances, because the AI is moving too fast (see my previous post on this theme), and ex post analysis of the liabilities will be impossible because we will not be able to recreate what the AI did.
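To make the “embedded law” idea concrete, here is a minimal sketch of what compliance-in-the-code might look like. Everything in it is a hypothetical illustration of mine, not anything from the paper: the TradeLedger class, the blackout-window rule, and the function names are all invented for the example. The point is structural–the ledger offers no way to alter a past record, and the trading function refuses the illegal trade up front rather than flagging it afterward.

```python
# A minimal sketch of "embedded law": compliance is enforced by the
# code's structure rather than checked after the fact. All names and
# rules here are hypothetical illustrations, not an implementation
# from the Etzionis' paper.

from datetime import datetime, timedelta, timezone


class TradeLedger:
    """Append-only ledger: records can be added but never changed or
    deleted, so falsifying a past record is structurally impossible."""

    def __init__(self):
        self._records = []  # private; no update or delete methods exist

    def record(self, trader, symbol, qty):
        self._records.append((datetime.now(timezone.utc), trader, symbol, qty))

    def history(self):
        return tuple(self._records)  # read-only view


def execute_trade(ledger, trader, symbol, qty, blackout_until):
    """Refuse a trade during an insider-information blackout window,
    instead of executing it and reviewing it later."""
    if datetime.now(timezone.utc) < blackout_until:
        raise PermissionError(f"{trader} is in a blackout window for {symbol}")
    ledger.record(trader, symbol, qty)


ledger = TradeLedger()
# Blackout expired yesterday, so this trade is allowed and recorded.
execute_trade(ledger, "alice", "ACME", 100,
              blackout_until=datetime.now(timezone.utc) - timedelta(days=1))
print(ledger.history())
```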
The Etzionis’ solution is that we need “AI programs to examine AI programs,” which they call “AI Guardians.” These AI Guardians would “interrogate, discover, supervise, audit, and guarantee the compliance of operational AI programs.” For example, if the operational AI program of a bank called in a customer’s loan, the AI Guardian program would check whether the operational program acted on improper information it had learned to obtain and assess. AI Guardians, argue the Etzionis, would be superior to humans given their speed, lower cost, and impersonal interface.
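Here is a minimal sketch of how that guardian-audits-operational-AI arrangement might be wired up, using the Etzionis’ bank-loan example. The feature names, the PROTECTED set, and the audit logic are all assumptions of mine for illustration; a real guardian would face the much harder problem of extracting what the opaque operational AI actually relied on.

```python
# A minimal sketch of the AI Guardian idea: a guardian program audits
# each decision of an operational AI before it takes effect. Feature
# names, the PROTECTED set, and the audit rule are hypothetical.

PROTECTED = {"race", "gender", "zip_code"}  # assumed improper inputs


def operational_ai_call_loan(customer):
    """Stand-in for an opaque operational AI deciding whether to call
    a loan; over time it has learned to weigh an improper feature."""
    decision = customer["missed_payments"] > 3
    features_used = {"missed_payments", "income", "zip_code"}
    return decision, features_used


def guardian_audit(decision, features_used):
    """Guardian check: did the operational program act on information
    it should not have learned to obtain and assess?"""
    improper = features_used & PROTECTED
    if decision and improper:
        return False, f"blocked: decision relied on {sorted(improper)}"
    return True, "approved"


customer = {"missed_payments": 5, "income": 40_000, "zip_code": "60601"}
decision, used = operational_ai_call_loan(customer)
ok, reason = guardian_audit(decision, used)
print(ok, reason)  # False blocked: decision relied on ['zip_code']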
I get where they are coming from, but I see some problems. First of all, many determinations of legality or illegality depend on judgment calls–balancing tests, the reasonable person standard, and so on. If AI Guardians are to make those calls, then they will necessarily need to be programmed to learn, which leads right back to the problem of operational AI learning to break the law. Maybe AI Guardians will learn to break the law too. Perhaps for those calls the AI Guardian could simply alert a human compliance officer to investigate, but then we’ve put humans back into the picture. So let’s say that the AI Guardians only enforce laws with bright line rules, such as don’t drive over 50 mph. Many such rules have exceptions that require judgment to apply, however, so we are back to the judgment call problem. And if all the AI Guardians do is prevent violations of bright line rules with no exceptions, it’s not clear they are an example of AI at all.
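A short sketch makes the bright-line-rule problem plain. The rule itself is mechanical, but the moment an exception enters the picture–here, a claimed medical emergency, a detail I am inventing for the example–the guardian can only punt to a human, which is exactly the regress described above.

```python
# A minimal sketch of the bright-line-rule problem: the guardian can
# enforce "don't drive over 50 mph" mechanically, but the exceptions
# require judgment, so it escalates to a human. All names and the
# emergency exception are hypothetical illustrations.

SPEED_LIMIT_MPH = 50


def guardian_check_speed(speed_mph, context):
    """Enforce the bright-line rule; escalate the judgment calls."""
    if speed_mph <= SPEED_LIMIT_MPH:
        return "allow"
    if context.get("claimed_emergency"):
        # The exception is not a bright line; a human has to weigh it.
        return "escalate_to_human"
    return "block"


print(guardian_check_speed(45, {}))                           # allow
print(guardian_check_speed(70, {"claimed_emergency": True}))  # escalate_to_human
print(guardian_check_speed(70, {}))                           # block
```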
But this is not what the Etzionis have in mind–they envision that “AI Guardians…will grow smarter just as operational AI programs do.” The trick will be to allow the AI Guardians to “grow smarter” while preventing them, too, from crossing the line. The Etzionis recognize that this lurking “who will guard the guardians?” question applies even to their AI Guardians, and they propose that all smart instruments have a “readily locatable off switch.” Before long, however, flipping the off switch will mean more than turning off the car–it will mean turning off the whole city!
All of this is yet more Law 2050 food for thought…