Tag Archives: artificial intelligence and law

Can AI Make AI Obey the Law?

Amitai Etzioni, the famous sociologist, and his son Oren Etzioni, the famous computer scientist, have posted an intriguing paper on SSRN, Keeping AI Legal. The paper starts by outlining some of the many legal issues that will spin out from the progression of artificial intelligence (AI) in cars, the internet, and countless other devices and technologies–what they call “smart instruments”–given the ability of the AI programming to learn as it carries out its mission. Many of these issues are familiar to anyone following the bigger AI debate–i.e., whether it is going to help us or kill us, on which luminaries have opined both ways–such as who is liable if an autonomous car runs off the road, or what happens if a bank loan algorithm designed to select for the best credit risks based purely on socially acceptable criteria (income, outstanding loans, etc.) begins to discriminate based on race or gender. The point is, AI smart instruments could learn over time to do things and make decisions that make perfect sense to the AI but break the law. The article argues that, given this potential, we need to think more deeply about AI and “the legal order,” defined not just as law enforcement but also as including preventive measures.

This theme recalls a previous post of mine on “embedded law”–the idea that as more and more of our stuff and activities are governed by software and AI, we can program legal compliance into the code–for example, to make falsifying records or insider trading impossible. Similarly, the Etzionis argue that the operational AI of smart instruments will soon be so opaque and impenetrable as to be essentially a black box in terms of sorting out legal concerns like the errant car or the discriminatory algorithm. Ex ante human intervention to prevent the illegality will be impossible in many instances, because the AI is moving too fast (see my previous post on this theme), and ex post analysis of the liabilities will be impossible because we will not be able to recreate what the AI did.

The Etzionis’ solution is that we need “AI programs to examine AI programs,” which they call “AI Guardians.” These AI Guardians would “interrogate, discover, supervise, audit, and guarantee the compliance of operational AI programs.” For example, if the operational AI program of a bank called in a customer’s loan, the AI Guardian program would check to determine whether the operational program acted on improper information it had learned to obtain and assess. AI Guardians, argue the Etzionis, would be superior to humans given their speed, lower cost, and impersonal interface.
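The paper doesn’t spell out an implementation, but the audit idea is easy to make concrete. Here is a minimal sketch–the function, the threshold, and the feature names are all my own invention, not the Etzionis’ design–of a guardian checking whether an operational loan model’s decision leaned on legally protected attributes:

```python
# A minimal, hypothetical sketch of the "AI Guardian" audit idea. The
# guardian inspects which features an operational loan model actually
# relied on for one decision and flags any legally protected attributes.

PROTECTED_ATTRIBUTES = {"race", "gender", "zip_code"}  # zip code as a proxy risk

def guardian_audit(decision_features: dict[str, float]) -> list[str]:
    """Return protected attributes that materially influenced a decision.

    `decision_features` maps each input feature to its contribution score
    for one decision (e.g., from an attribution method such as SHAP).
    """
    return [
        name for name, weight in decision_features.items()
        if name in PROTECTED_ATTRIBUTES and abs(weight) > 0.05
    ]

# The operational AI called in a loan; the guardian checks why.
contributions = {"income": 0.40, "outstanding_loans": -0.30, "zip_code": 0.25}
flags = guardian_audit(contributions)
if flags:
    print(f"Escalate to human compliance review: decision relied on {flags}")
```

Even this toy version exposes the catch: the guardian is only as good as the attribution scores it can extract from the operational AI, which is precisely the black box problem.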

I get where they are coming from, but I see some problems. First of all, many determinations of legality or illegality depend on judgment calls–balancing tests, the reasonable person standard, etc. If AI Guardians are to make those calls, then necessarily they will need to be programmed to learn, which leads right back to the problem of operational AI learning to break the law. Maybe AI Guardians will learn to break the law too. Perhaps for those calls the AI Guardian could simply alert a human compliance officer to investigate, but then we’ve put humans back into the picture. So let’s say that the AI Guardians only enforce laws with bright line rules, such as don’t drive over 50 mph. Many such rules have exceptions that require judgment to apply, however, so we are back to the judgment call problem. And if all the AI Guardians do is prevent violations of bright line rules with no exceptions, it’s not clear they are an example of AI at all.

But this is not what the Etzionis have in mind–they envision that “AI Guardians…will grow smarter just as operational AI programs do.” The trick will be to allow the AI Guardians to “grow smarter” while preventing the potential for them, too, to cross the line. The Etzionis recognize that this lurking “who will guard the guardians?” question exists even for their AI Guardians, and propose that all smart instruments have a “readily locatable off switch.” Before long, however, flipping the off switch will mean more than turning off the car–it will mean turning off the whole city!

All of this is yet more Law 2050 food for thought…  

Law’s Big Mechanism

Right now, as I write, researchers are loading medical journal articles into a computer to see if they can tease out the causes of cancer. Their goal is to use the artificial intelligence (AI) trio of big data, natural language processing, and machine learning to automate research on causal models of the complex biological systems underlying cancer.

Who’s doing this, you ask? It’s the Defense Advanced Research Projects Agency. That’s right, DARPA is researching cancer. As the agency explains it, the systems that matter most to the Defense Department tend to be very complicated systems in which interactions have important causal effects. While cancer might not be foremost among the systems that influence national defense, its biology certainly is a complicated system in which interactions have important causal effects. So DARPA is testing methods for learning more about what causes cancer so it can learn more about the complex systems that do drive national defense decisions.

DARPA calls its research initiative the Big Mechanism program. Big mechanisms are models of how complex systems work. Although the collection of data needed to develop a big mechanism model is now largely automated—thus the rise of big data—the development of big mechanisms is still mainly the product of human research and reasoning ingenuity. The point of the Big Mechanism project is to see whether the development of useful big mechanism models also can be automated. If they can be, then DARPA could (automatically) load big data into the model to (automatically) develop causal models to (automatically) predict what’s going to happen of relevance to national defense.
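To make the pipeline idea concrete, here is a toy sketch. Real Big Mechanism systems use full natural language processing over journal articles; this stand-in (the sentences and the pattern are invented by me) only matches simple causal assertions and accumulates them into a graph:

```python
import re
from collections import defaultdict

# Toy stand-in for automated causal-model extraction: pattern-match
# "X activates/inhibits Y" assertions and build a causal graph from them.
sentences = [
    "KRAS activates RAF in pancreatic tumor cells.",
    "RAF activates MEK under sustained growth signaling.",
    "p53 inhibits MDM2 expression.",
]

CAUSAL_PATTERN = re.compile(r"(\w+) (activates|inhibits) (\w+)")

graph = defaultdict(list)  # cause -> [(effect, relation), ...]
for sentence in sentences:
    match = CAUSAL_PATTERN.search(sentence)
    if match:
        cause, relation, effect = match.groups()
        graph[cause].append((effect, relation))

for cause, edges in graph.items():
    for effect, relation in edges:
        print(f"{cause} --{relation}--> {effect}")
```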

OK, what’s this got to do with law? Most of the applications of AI in law thus far have been to improve predictive capacity in a non-causal sense, such as using machine learning in e-discovery to sort documents. The prediction isn’t based on a causal model. There’s certainly a lot of value in that approach, both scientifically and commercially. But what about law’s big mechanism? Surely the legal system is a complicated system in which interactions have important causal effects. If we had a big mechanism model of what factors cause moves in the legal system, such as the next new wave of products liability litigation, that would be a very different kind of predictive capacity. Knowing what’s coming next can come in handy for lawyers!

Shift over to another outfit called Praedicat, a spin-off of the RAND Corporation. Praedicat is using AI to develop big mechanism models of catastrophe risk for the property and casualty insurance industry. As the company explains it, their AI applications “track the science and commercial exposures for more than 100 emerging risks” and “bring technology to insurers’ emerging risk activities, converting risk avoidance to portfolio optimization; exclusion to accumulation management; and avoiding the ‘next asbestos’ to driving sustainable profits.” Like DARPA, Praedicat relies on “the world’s community of toxicologists, epidemiologists, and bioscientists to algorithmically identify emerging risks.” Their patented “saliency” algorithm “combs through the corpus of peer-reviewed science [and regulatory documents] for new hypotheses that chemicals, products and substances might cause bodily injury. The risks are automatically prioritized by the energy and intensity of new attention the risks receive, and are tracked over time as they mature.” Then it produces “industry profiles to capture the litagion® agents that might be found at companies in the industry,” and provides a “heat map” that explores the potential for clash between the profiled industry and other industries. “Litagion agents”? That’s not a misspelling. It’s Praedicat’s trademarked term for what is essentially the big mechanism model of catastrophe insurance litigation.
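Praedicat’s patented algorithm is proprietary, so the following is only a guess at its general shape (the agents, counts, and scoring rule are all mine): rank candidate agents by how fast scholarly attention to a hypothesized injury link is growing.

```python
# Hypothetical saliency-style scoring: prioritize candidate risk agents by
# recent attention weighted by its growth rate. Counts are invented papers
# per year hypothesizing a bodily-injury link for each agent.
mention_counts = {
    "agent_A": [2, 3, 4, 5],      # slow, steady attention
    "agent_B": [1, 2, 8, 21],     # rapidly accelerating attention
    "agent_C": [10, 10, 9, 10],   # high but flat attention
}

def saliency(counts: list[int]) -> float:
    """Score = most recent year's count weighted by year-over-year growth."""
    recent, prior = counts[-1], max(counts[-2], 1)
    return recent * (recent / prior)

ranked = sorted(mention_counts, key=lambda a: saliency(mention_counts[a]),
                reverse=True)
print("Priority order for underwriters:", ranked)  # agent_B ranks first
```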

What Praedicat is doing is the same as what DARPA is doing, but for insurance litigation. Open the lens wider and one can imagine applying the same approach to search for the “litagion agents” for IP litigation, drug litigation, securities litigation, products liability litigation, and a wide variety of other legal applications. That would be law’s big mechanism. That would be cool!

Forms of Bespoke Lawyering and the Frontiers of Artificial Intelligence

In Machine Learning and Law, Harry Surden of the University of Colorado Law School provides a comprehensive and insightful account of the impact advances in artificial intelligence (AI) have had and likely will have on the practice of law. By AI, of course, Surden means the “soft” kind represented mostly by advances in machine learning. The point is not that computers are employing human cognitive abilities, but rather that if they can employ algorithms and other computational power to reach answers and decisions like those humans make, and with equal or greater accuracy and speed, it doesn’t matter so much how they get there. Surden’s paper is highly recommended for its clear and cogent explanation of the forms and techniques of machine learning and how they could be applied in legal practice.

Surden quite reasonably recognizes that AI, at least as it stands today and in its likely trajectory for the foreseeable future, can only go so far in displacing the lawyer. As he puts it, “attorneys, for example, routinely combine abstract reasoning and problem solving skills in environments of legal and factual uncertainty.” The thrust of Surden’s paper, therefore, is how AI can facilitate lawyers in exercising those abilities, such as by finding patterns in complex factual and legal data sets that would be difficult for a human to detect, or in enhancing predictive capacity for risk management and litigation outcome assessments.

What Surden is getting at, in short, is that there seems to be little chance in the near future that AI can replicate the “bespoke lawyer.” That term is used throughout the commentary on the “new normal” in legal practice (which is actually a “post normal” given we have not reached any sort of equilibrium). But it is not usually unpacked any further than that, as if we all know intuitively what bespoke lawyering is.

To take a different perspective on bespoke lawyering and the impact of AI, I suggest we turn Surden’s approach around by outlining what is bespoke about bespoke lawyering and then think about how AI can help. In the broadest sense, bespoke lawyering involves a skill set that draws heavily from diverse and deep experience, astute observation, sound judgment, and the ability to make decisions. Some of that can be learned in life, but some is part of a person’s more complex fabric—you either have it or you don’t. If you do have these qualities under your command, however, you have a good shot at attaining that bespoke lawyer status. Here’s a stab at breaking down what such a lawyer does well:

Outcome Prediction: Prediction of litigation, transaction, and compliance outcomes is, of course, what clients want dearly from their lawyers. On this front AI seems to have made the most progress, with outfits like Lex Machina and LexisNexis’s Verdict & Settlement Analyzer building enormous databases of litigation histories and applying advanced analytics to tease out how a postulated scenario might fare.
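Lex Machina’s models are not public, so what follows is only a generic, hypothetical sketch of the approach: learn outcome patterns from structured litigation histories, then score a postulated scenario. The features, data, and labels are invented for illustration.

```python
# Hypothetical outcome-prediction sketch: a classifier trained on
# structured litigation histories scores a postulated new scenario.
from sklearn.ensemble import RandomForestClassifier

# Each row: [judge's historical plaintiff win rate, forum plaintiff win
# rate, number of asserted claims]; label: 1 = plaintiff win.
X = [[0.62, 0.55, 3], [0.30, 0.41, 1], [0.70, 0.60, 4],
     [0.25, 0.35, 2], [0.55, 0.50, 2], [0.40, 0.45, 1]]
y = [1, 0, 1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A postulated scenario: new suit before a plaintiff-friendly judge.
scenario = [[0.65, 0.58, 3]]
print("Estimated plaintiff win probability:",
      model.predict_proba(scenario)[0][1])
```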

Analogical and evaluative legal search: Once that pile of search results comes back from Lexis or Westlaw (or Ravel Law or Case Text), the lawyer’s job is to sort through and find those that best fit the need. Much as it is used in e-discovery, AI could be employed to facilitate that process through machine learning. This might not be cost-effective, as often the selection of cases and other materials must be completed quickly and from relatively small sets of results. Also, the strength of fit is often a qualitative judgment, and identifying useful analogies, say between a securities case and an environmental law case, is a nuanced cognitive ability. Nevertheless, if a lawyer were to “train” algorithms over time as he or she engages in years of research in a field, and if all the lawyers in the practice group did the same, AI could very well become a personalized advanced research tool making the research process substantially more efficient and effective.
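Here is a hypothetical sketch of that personalized research tool (the case snippets and labels are invented): each time the lawyer keeps or discards a result, the model gets one more training signal, and feature hashing lets it keep learning incrementally over years of searches.

```python
# Hypothetical personalized relevance ranker: learns incrementally from a
# lawyer's keep/discard decisions and re-ranks fresh search results.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)
ranker = SGDClassifier(loss="log_loss", random_state=0)

# Feedback from past research sessions: 1 = lawyer used the case.
snippets = ["scienter pleading standard securities fraud",
            "riparian rights allocation dispute",
            "material misrepresentation reliance damages",
            "zoning variance procedural appeal"]
labels = [1, 0, 1, 0]
ranker.partial_fit(vectorizer.transform(snippets), labels, classes=[0, 1])

# Rank a fresh pile of search results by predicted usefulness.
new_results = ["omissions and reliance in securities class action",
               "easement by prescription elements"]
scores = ranker.predict_proba(vectorizer.transform(new_results))[:, 1]
for text, score in sorted(zip(new_results, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")
```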

Risk management: Whereas outcome prediction is usually a one-off call, managing litigation, transaction, and compliance outcomes over time requires a sense of how to identify and manage risk. Kiiac’s foray into document benchmarking is an example of how AI might enhance risk management, allowing evaluation of massive transactional regime histories for, say, commercial real estate developers, to detect loss or litigation risk patterns under different contractual terms.

Strategic planning: Lawyers engage extensively in strategic planning for clients. Where to file suit? How hard to negotiate a contract term? Should we disclose compliance information? Naturally, it would be nice to know how different alternatives have fared in similar situations. Here again, AI could be employed to detect those patterns from massive databases of transactions, litigation, and compliance scenarios.

Judgment (and judging): Judgment about what a client should do, or about how to decide a case when one is the judge, involves senses not easily captured by AI, such as fairness, honesty, equity, and justice. The unique facts of a case may call for departure from the pattern of outcomes based on one of these sensibilities. Yet doctrines do exist to capture some of these qualities, such as equitable estoppel, apportionment of liability, and even departure from sentencing guidelines, and these doctrines exhibit patterns in outcomes that may be useful for lawyers and judges to grasp in granular detail. What is equitable or just, in other words, is not an entirely ad hoc decision. AI could be used to decipher such patterns and suggest how off the mark a judgment under consideration would be.
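As a crude illustration (entirely my own, not drawn from any real tool), a system could flag how far a contemplated judgment departs from the pattern in comparable cases–here, sentence lengths in months:

```python
# Hypothetical departure check: compare a proposed sentence against the
# distribution of sentences in comparable cases. Data are invented.
import statistics

comparable_sentences = [24, 30, 27, 33, 26, 29, 31, 28]
proposed = 48

mean = statistics.mean(comparable_sentences)
stdev = statistics.stdev(comparable_sentences)
z = (proposed - mean) / stdev

print(f"Proposed sentence is {z:.1f} standard deviations from the pattern")
if abs(z) > 2:
    print("Substantial departure: an articulated basis may be warranted.")
```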

Legal reform: As I tell my 1L Property students, in almost every case we cover some lawyer was arguing for legal reform–a change in doctrine, a change in statutory interpretation, striking down an agency rule, and so on. And of course legislatures and agencies, when they are functional, are often in the business of changing the law. To some extent arguments for reform go against the grain of existing patterns, although in some cases they pick up on an emerging trend. They also rely heavily on policy bases for law, such as equity, efficiency, and legitimacy. In all cases, though, the argument has to be that there is something “broken” about continuing to apply the existing law, or about failing to invent new law, for the particular case or broader issue in play. AI might be particularly useful as a way of building that argument, such as by demonstrating a pattern of inefficient results from existing doctrine, or detecting strong social objection to an existing law.

Trendspotting: In my view the very best lawyers–the most bespoke–are those ahead of the game–the trendspotters. What is the next wave of litigation? Where is the agency headed with regulation? Which law or doctrine is beginning to get out of sync with social reality? Spotting these trends requires the lawyer to get his or her head outside the law. Here, I think, AI might be most effective in assisting the bespoke lawyer. A plaintiffs’ firm, for example, might use AI to monitor social media to identify trends highly associated with the advent of new litigation claims, such as people complaining on Twitter about a product. Similarly, this approach could be used to inform any of the lawyer functions outlined above.
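A hypothetical sketch of that monitoring idea (the products and counts are invented): watch weekly counts of product-complaint mentions and flag products whose complaint volume spikes well above their own baseline.

```python
# Hypothetical complaint-spike detector for litigation trendspotting.
from statistics import mean

weekly_complaints = {
    "product_x": [12, 11, 13, 12, 14, 13, 45, 78],  # sudden spike
    "product_y": [30, 29, 31, 30, 32, 31, 30, 33],  # stable grumbling
}

def is_spiking(counts: list[int], window: int = 2, factor: float = 3.0) -> bool:
    """Flag when the recent average exceeds the product's baseline by `factor`."""
    baseline = mean(counts[:-window])
    recent = mean(counts[-window:])
    return recent > factor * baseline

for product, counts in weekly_complaints.items():
    if is_spiking(counts):
        print(f"Possible emerging claims around {product}: investigate.")
```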

Handling people: Ultimately, a top lawyer builds personal relationships with colleagues, peers, and clients. AI can’t help you do that, I don’t think, but by helping lawyers do all of the above it may free up time for a game of golf (tennis for me) with a client!

The Artificial (Intelligence) Restatement of the Law?

As I write, the 2013 International Conference on Artificial Intelligence and the Law is taking place in Rome.  I wish I had been able to attend–anyone remotely interested in the scope of Law 2050 should take a look at the program.

Most of the discourse on AI and the Law in the popular press has focused on the capacity of AI to predict the law, as with Lex Machina and Lexis’s MedMal Navigator. But if you take a close look at the ICAIL program, the sleeper may be the capacity of AI to make the law. Many of the presentations delve into methods of using algorithms to extract and organize legal principles from the vast databases of cases, statutes, and other legal sources now available. The capacity to produce robust, finely grained, broad-scope statements of what the law is would be powerful not only for descriptive purposes, but as a force in shaping the law as well.

Consider the American Law Institute’s long-standing Restatement of the Law project. As ALI explains, “the founding Committee had recommended that the first undertaking of the Institute should address uncertainty in the law through a restatement of basic legal subjects that would tell judges and lawyers what the law was. The formulation of such a restatement thus became ALI’s first endeavor.” As I think any lawyer would agree, the idea worked pretty well, pretty well indeed. The Restatements have been so influential that they go well beyond describing the law–they contribute to making the law through the effect they have on lawyers arguing cases and judges reaching decisions.

How did ALI pull that off? Numbers. Anyone who has worked on a Restatement revision committee has experienced the incredible data collection and analytical powers that ALI assembles by gathering large numbers of domain experts and tasking them with distilling the law of a field into its core elements and extended nuances. The process, however, is protracted, costly, tedious, and often contentious.

Many of the ICAIL programs suggest the capacity of AI to generate the same kind of work product as ALI’s Restatements, but faster, cheaper, and perhaps better. ALI depends on large committees of experts to gather case law, analyze it, and extract and organize the underlying doctrines and principles. That’s exactly what AI for law does, only with a lot fewer people, a lot more data, and amazingly efficient and effective algorithms. Of course, you still (for now) need people to manage the data and develop the algorithms, but once you have it all in place you just hit the run button. When you want an update, you just hit the run button again. When you want to ask a question in a slightly different way, just enter it and hit the run button.
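As a toy stand-in for that “hit the run button” pipeline (the corpus and parameters are invented, and real systems would need far richer NLP), one could cluster a corpus of case summaries and surface each cluster’s dominant terms as a crude, restatement-like outline:

```python
# Hypothetical doctrine-extraction sketch: cluster case summaries and list
# each cluster's dominant terms as a skeleton of the underlying doctrine.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

cases = [
    "offer acceptance consideration contract formation",
    "consideration bargained exchange promise enforceable",
    "negligence duty breach causation damages",
    "proximate causation foreseeability negligence liability",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(cases)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for i, center in enumerate(km.cluster_centers_):
    top = [terms[j] for j in center.argsort()[::-1][:4]]
    print(f"Doctrine cluster {i}: {', '.join(top)}")
```

Re-running the pipeline on an updated corpus is the whole “update” process, which is the point: the marginal cost of a refresh approaches zero.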

As the Restatements demonstrated, a reliable, robust source of reference for what the law is can be so influential as to become a part of the making of the law.  As AI applications build the capacity to replicate that work product, it follows that they could have the same kind of influence.

One feature AI could not produce, of course, is the commentary and policy pushing one finds in the Restatements. The subjective dimension of the Restatements has its own pros and cons. The potential of AI to produce highly accurate, real-time descriptions of the law, however, might change the way in which we approach normative judgments about the law as well.