
Fighting Bad Artificial Intelligence: Law, Policy, and the AI Arms Race

Notwithstanding the concerns some very smart people have expressed about the risks of what the machines will do when they reach The Singularity, I’m actually a lot more concerned, at least within my lifetime, about what humans with evil intent are going to do with machines armed with artificial intelligence.

A few months ago I asked whether AI can make AI obey the law. There was no conclusive answer. That question, though, goes more to how AI might lead to socially undesirable results despite its use by good people with good intentions.

I call this the problem of Good AI Gone Bad, and it has gotten a lot of recent coverage in the media. Thankfully, on this front more very smart people are working on ways to make AI accountable to society by revealing its reasoning, and I expect we will see more and more effort in AI research to devise ways to keep it socially beneficial, transparent, and mostly under our control. Law should play a key role in that, and recent announcements by the White House and by major law firms are encouraging in that respect. My Vanderbilt colleague Larry Bridgesmith published a very concise and thorough summary of this concern in today’s Legal Tech News. It’s well worth the read as an entry point to this space.

But the problem is that there are bad people in the world who don’t want AI to obey the law; they want it to help them break the law. That is, there are bad people with bad intentions who know how to design and deploy AI to make them better at being bad. That’s what I call Bad AI. What do we do about that?

Much like any other good-versus-bad battle, the outcome comes down to who has the better weapon. The discipline of adversarial machine learning is where many on the good side are working hard to improve countermeasures to Bad AI. But this looks like an arms race, a classic Red Queen problem, and in my view this one has super-high stakes: maybe not like the nuclear weapons arms race, but potentially pretty close. Bad AI is way beyond hacking and identity theft as we know them today; it’s about steering key financial, social, infrastructure, and military systems. Why disrupt when you can control? Unlike the nuclear weapon problem, though, mutually assured destruction might not keep the race in check (although North Korea has changed the rules of that arms race as well). With AI, what is it exactly that we would be “blowing up” besides algorithms, which can easily be rebuilt and redeployed from anywhere in the world?
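To make the adversarial machine learning idea concrete for non-specialists: the canonical attack is to nudge an input just enough that a model misreads it. Below is a minimal, hypothetical sketch (my illustration, not drawn from the workshop or any real system) of the well-known fast gradient sign method applied to a toy logistic-regression classifier; the weights, input, and perturbation size are invented for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method: shift each feature of x by eps in the
    direction that most increases the classifier's loss on label y."""
    # Gradient of the logistic loss with respect to the input x
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical toy classifier (say, a fraud detector): made-up weights and input
w = np.array([1.5, -2.0, 0.5])
b = -0.25
x = np.array([0.2, 0.9, 0.1])  # a benign input, true label y = 0

x_adv = fgsm_perturb(x, y=0, w=w, b=b, eps=0.3)
print("clean score:      ", sigmoid(w @ x + b))      # ~0.15, scored as benign
print("adversarial score:", sigmoid(w @ x_adv + b))  # ~0.38, pushed toward "bad"
```

Even this toy example shows the Red Queen dynamic at work: each attack of this kind invites a defensive retraining, which in turn invites a new attack.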

As much as we are (rightfully) concerned that climate change could do us in eventually, the AI arms race is a more immediate, tangible, and thorny threat that could wreak tremendous financial and social havoc long before sea-level rise starts taking out islands and coastal cities. We at the Program on Law and Innovation see Bad AI as a pressing issue for law and policy, and so will be devoting our spring 2017 annual workshop on AI & Law to this issue as well as to the problem of Good AI Gone Bad. We will be bringing together key researchers (including Vanderbilt’s own expert on adversarial machine learning, Eugene Vorobeychik) and commentators. More to follow!
