Notwithstanding the concerns some very smart people have expressed about the risks of what the machines will do when they reach The Singularity, I’m actually a lot more concerned for my lifetime about what humans with evil intent are going to do with machines armed with artificial intelligence.
A few months ago I asked whether we can make AI obey the law. There was no conclusive answer. That question, though, goes more to how AI might lead to socially undesirable results despite its use by good people with good intentions.
I call this the problem of Good AI Gone Bad, and it has gotten a lot of recent coverage in the media. Thankfully, on this front more very smart people are working on ways to make AI accountable to society by revealing its reasoning, and I expect we will see more and more effort in AI research to devise ways to keep it socially beneficial, transparent, and mostly under our control. Law should play a key role in that, and recent announcements by the White House and by major law firms are encouraging in that respect. My Vanderbilt colleague Larry Bridgesmith published a very concise and thorough summary of this concern in today’s Legal Tech News. It’s well worth the read as an entry point to this space.
But the problem is that there are bad people in the world who don’t want AI to obey the law; they want it to help them break the law. That is, there are bad people with bad intentions who know how to design and deploy AI to make them better at being bad. That’s what I call Bad AI. What do we do about that?
Much like any other good-versus-bad battle, much comes down to who has the better weapon. The discipline of adversarial machine learning is where many on the good side are working hard to improve countermeasures to Bad AI. But this looks like an arms race, a classic Red Queen problem. And in my view, this one has super-high stakes, maybe not like the nuclear weapons arms race, but potentially pretty close. Bad AI is way beyond hacking and identity theft as we know them today–it’s about steering key financial, social, infrastructure, and military systems. Why disrupt when you can control? Unlike the nuclear weapon problem, though, mutually-assured destruction might not keep the race in check (although North Korea has changed the rules of that arms race as well). With AI, what is it exactly that we are “blowing up” besides algorithms, which can easily be rebuilt and redeployed from anywhere in the world?
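To make the adversarial machine learning idea concrete, here is a minimal sketch of one of the field’s textbook attacks, the fast-gradient-sign method, run against a toy linear classifier. The weights, input, and epsilon are all invented for illustration; real attacks target deep networks, but the mechanic is the same: nudge the input in the direction that most increases the model’s loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "trained" linear classifier (weights are illustrative, not from real data)
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    # Class 1 if the model's probability exceeds 0.5
    return int(sigmoid(w @ x + b) > 0.5)

x = np.array([0.3, 0.1])   # a benign input the model classifies as 1
y = 1                      # its true label

# Fast-gradient-sign step: the gradient of the logistic loss with
# respect to the input x is (sigmoid(z) - y) * w; moving the input
# a small amount in the sign of that gradient increases the loss.
grad_x = (sigmoid(w @ x + b) - y) * w
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

print(predict(x), predict(x_adv))  # prints "1 0": the perturbation flips the label
```

The point of the arms race is visible even at this scale: the attack needs only the gradient, which is exactly the quantity the model exposes by being differentiable, so defenses have to change the model rather than hide the math.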
As much as we are (rightfully) concerned that climate change could do us in eventually, the AI arms race is a more immediate, tangible, and thorny threat that could wreak tremendous financial and social havoc long before sea-level rise starts taking out islands and coastal cities. We at the Program on Law and Innovation see Bad AI as a pressing issue for law and policy, and so will be devoting our spring 2017 annual workshop on AI & Law to this issue as well as to the problem of Good AI Gone Bad. We will be bringing together key researchers (including Vanderbilt’s own expert on adversarial machine learning, Eugene Vorobeychik) and commentators. More to follow!
As I write, the 2013 International Conference on Artificial Intelligence and the Law is taking place in Rome. I wish I had been able to attend–anyone remotely interested in the scope of Law 2050 should take a look at the program.
Most of the discourse on AI and the Law in the popular press has focused on the capacity of AI to predict the law, as with Lex Machina and Lexis’s MedMal Navigator. But if you take a close look at the ICAIL program, the sleeper may be the capacity of AI to make the law. Many of the presentations delve into methods of using algorithms to extract and organize legal principles from the vast databases of cases, statutes, and other legal sources now available. The capacity to produce robust, finely grained, broad-scope statements of what the law is would be powerful not only for descriptive purposes, but as a force in shaping the law as well.
Consider the American Law Institute’s long-standing Restatement of the Law project. As ALI explains, “the founding Committee had recommended that the first undertaking of the Institute should address uncertainty in the law through a restatement of basic legal subjects that would tell judges and lawyers what the law was. The formulation of such a restatement thus became ALI’s first endeavor.” As I think any lawyer would agree, the idea worked pretty well, pretty well indeed. The Restatements have been so influential that they go well beyond describing the law–they contribute to making the law through the effect they have on lawyers arguing cases and judges reaching decisions.
How did ALI pull that off? Numbers. Anyone who has worked on a Restatement revision committee has experienced the incredible data collection and analytical powers that ALI assembles by gathering large numbers of domain experts and tasking them with distilling the law of a field into its core elements and extended nuances. The process, however, is protracted, costly, tedious, and often contentious.
Many of the ICAIL programs suggest the capacity of AI to generate the same kind of work product as ALI’s Restatements, but faster, cheaper, and perhaps better. ALI depends on large committees of experts to gather case law, analyze it, and extract and organize the underlying doctrines and principles. That’s exactly what AI for law does, only with a lot fewer people, a lot more data, and amazingly efficient and effective algorithms. Of course, you still (for now) need people to manage the data and develop the algorithms, but once you have it all in place you just hit the run button. When you want an update, you just hit the run button again. When you want to ask a question in a slightly different way, just enter it and hit the run button.
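A toy sketch can suggest what “extracting and organizing legal principles from text” looks like at the smallest possible scale. This is not any particular ICAIL system, just a hypothetical bag-of-words comparison over three invented case holdings, showing how cases that share doctrinal vocabulary (here, negligence terms) score as more similar to each other than to an off-topic case, which is the seed of algorithmic doctrine-grouping.

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term counts over lowercase tokens
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Invented one-line "holdings" standing in for a real case-law database
cases = {
    "A": "defendant owed plaintiff a duty of reasonable care and breached that duty",
    "B": "the duty of care was breached when defendant failed to act with reasonable care",
    "C": "the contract required written notice before termination of the agreement",
}

vecs = {name: vectorize(text) for name, text in cases.items()}

# The two negligence holdings group together; the contract case does not
print(round(cosine(vecs["A"], vecs["B"]), 2))
print(round(cosine(vecs["A"], vecs["C"]), 2))
```

Real systems layer far more on top of this (citation networks, argument structure, statutory cross-references), but the update-by-rerun property described above falls out naturally: add new cases to the corpus and run the same pipeline again.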
As the Restatements demonstrated, a reliable, robust source of reference for what the law is can be so influential as to become a part of the making of the law. As AI applications build the capacity to replicate that work product, it follows that they could have the same kind of influence.
One feature AI could not produce, of course, is the commentary and policy pushing one finds in the Restatements. The subjective dimension of the Restatements has its own pros and cons. The potential of AI to produce highly-accurate, real-time descriptions of the law, however, might change the way in which we approach normative judgments about the law as well.