Notwithstanding the concerns some very smart people have expressed about the risks of what the machines will do when they reach The Singularity, I’m actually a lot more concerned for my lifetime about what humans with evil intent are going to do with machines armed with artificial intelligence.
A few months ago I asked whether we can make AI obey the law. There was no conclusive answer. That question, though, goes more to how AI might lead to socially undesirable results despite its use by good people with good intentions.
I call this the problem of Good AI Gone Bad, and it has gotten a lot of recent coverage in the media. Thankfully, on this front more very smart people are working on ways to make AI accountable to society by revealing its reasoning, and I expect we will see more and more effort in AI research to devise ways to keep it socially beneficial, transparent, and mostly under our control. Law should play a key role in that, and recent announcements by the White House and by major law firms are encouraging in that respect. My Vanderbilt colleague Larry Bridgesmith published a very concise and thorough summary of this concern in today’s Legal Tech News. It’s well worth the read as an entry point to this space.
But the problem is that there are bad people in the world who don’t want AI to obey the law; they want it to help them break the law. That is, there are bad people with bad intentions who know how to design and deploy AI to make them better at being bad. That’s what I call Bad AI. What do we do about that?
Much like any other good-versus-bad battle, a lot comes down to who has the better weapon. The discipline of adversarial machine learning is where many on the good side are working hard to improve countermeasures to Bad AI. But this looks like an arms race, a classic Red Queen problem. And in my view, this one has super-high stakes, maybe not like the nuclear weapons arms race, but potentially pretty close. Bad AI is way beyond hacking and identity theft as we know them today–it’s about steering key financial, social, infrastructure, and military systems. Why disrupt when you can control? Unlike the nuclear weapon problem, though, mutually-assured destruction might not keep the race in check (although North Korea has changed the rules of that arms race as well). With AI, what is it exactly that we are “blowing up” besides algorithms, which can easily be rebuilt and redeployed from anywhere in the world?
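To make the adversarial machine learning problem concrete, here is a minimal sketch of the classic evasion attack: a toy linear spam scorer is fooled by padding a malicious message with words the model weights as benign. The weights and feature names are invented for illustration; real attacks and defenses operate on far richer models, but the dynamic is the same.

```python
# Toy linear classifier: positive score means "flag as spam."
# Weights and vocabulary are invented for this illustration.
weights = {"free": 2.0, "winner": 1.5, "meeting": -1.0}

def score(features):
    """Weighted sum over word counts; unknown words score zero."""
    return sum(weights.get(w, 0.0) * n for w, n in features.items())

spam = {"free": 2, "winner": 1}      # score 5.5 -> flagged
evasive = dict(spam, meeting=6)      # pad with "benign" words -> score -0.5

print(score(spam) > 0, score(evasive) > 0)  # True False
```

The defender’s countermeasure, in turn, is to anticipate such perturbations during training, which is exactly the cat-and-mouse game the paragraph above describes.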
As much as we are (rightfully) concerned that climate change could do us in eventually, the AI arms race is a more immediate, tangible, and thorny threat that could wreak tremendous financial and social havoc long before sea-level rise starts taking out islands and coastal cities. We at the Program on Law and Innovation see Bad AI as a pressing issue for law and policy, and so will be devoting our spring 2017 annual workshop on AI & Law to this issue as well as to the problem of Good AI Gone Bad. We will be bringing together key researchers (including Vanderbilt’s own expert on adversarial machine learning, Eugene Vorobeychik) and commentators. More to follow!
In Machine Learning and Law, Harry Surden of the University of Colorado Law School provides a comprehensive and insightful account of the impact advances in artificial intelligence (AI) have had and likely will have on the practice of law. By AI, of course, Surden means the “soft” kind represented mostly through advancement in machine learning. The point is not that computers are employing human cognitive abilities, but rather that if they can employ algorithms and other computational power to reach answers and decisions like those humans make, and with equal or greater accuracy and speed, it doesn’t matter so much how they get there. Surden’s paper is highly recommended for its clear and cogent explanation of the forms and techniques of machine learning and how they could be applied in legal practice.
Surden quite reasonably recognizes that AI, at least as it stands today and in its likely trajectory for the foreseeable future, can only go so far in displacing the lawyer. As he puts it, “attorneys, for example, routinely combine abstract reasoning and problem solving skills in environments of legal and factual uncertainty.” The thrust of Surden’s paper, therefore, is how AI can facilitate lawyers in exercising those abilities, such as by finding patterns in complex factual and legal data sets that would be difficult for a human to detect, or in enhancing predictive capacity for risk management and litigation outcome assessments.
What Surden is getting at, in short, is that there seems to be little chance in the near future that AI can replicate the “bespoke lawyer.” That term is used throughout the commentary on the “new normal” in legal practice (which is actually a “post normal” given we have not reached any sort of equilibrium). But it is not usually unpacked any further than that, as if we all know intuitively what bespoke lawyering is.
To take a different perspective on bespoke lawyering and the impact of AI, I suggest we turn Surden’s approach around by outlining what is bespoke about bespoke lawyering and then think about how AI can help. In the broadest sense, bespoke lawyering involves a skill set that draws heavily from diverse and deep experience, astute observation, sound judgment, and the ability to make decisions. Some of that can be learned in life, but some is part of a person’s more complex fabric—you either have it or you don’t. If you do have these qualities under your command, however, you have a good shot at attaining that bespoke lawyer status. Here’s a stab at breaking down what such a lawyer does well:
Outcome Prediction: Prediction of litigation, transaction, and compliance outcomes is, of course, what clients want dearly from their lawyers. On this front AI seems to have made the most progress, with outfits like Lex Machina and LexisNexis’s Verdict & Settlement Analyzer building enormous databases of litigation histories and applying advanced analytics to tease out how a postulated scenario might fare.
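The core of such outcome analytics can be illustrated with a deliberately tiny sketch: look up how past cases matching a postulated scenario fared. The case records and field names below are invented; products like Lex Machina work over enormous real databases with far more sophisticated models.

```python
def win_rate(history, **scenario):
    """Fraction of past cases matching the scenario that ended in a win.

    Returns None when no comparable cases are on record.
    """
    matches = [c for c in history
               if all(c.get(k) == v for k, v in scenario.items())]
    if not matches:
        return None
    return sum(c["outcome"] == "win" for c in matches) / len(matches)

# Hypothetical litigation history, invented for this sketch.
history = [
    {"claim": "patent", "venue": "E.D. Tex.", "outcome": "win"},
    {"claim": "patent", "venue": "E.D. Tex.", "outcome": "win"},
    {"claim": "patent", "venue": "D. Del.",   "outcome": "loss"},
    {"claim": "antitrust", "venue": "D. Del.", "outcome": "loss"},
]

print(win_rate(history, claim="patent", venue="E.D. Tex."))  # 1.0
```

Real systems replace this frequency lookup with statistical models that generalize to scenarios with no exact match, but the question being asked is the same.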
Analogical and evaluative legal search: Once that pile of search results comes back from Lexis or Westlaw (or Ravel Law or Case Text), the lawyer’s job is to sort through and find those that best fit the need. Much as it is used in e-discovery, AI could be employed to facilitate that process through machine learning. This might not be cost-effective, as often the selection of cases and other materials must be completed quickly and from relatively small sets of results. Also, the strength of fit is often a qualitative judgment, and identifying useful analogies, say between a securities case and an environmental law case, is a nuanced cognitive ability. Nevertheless, if a lawyer were to “train” algorithms over time as he or she engages in years of research in a field, and if all the lawyers in the practice group did the same, AI could very well become a personalized advanced research tool making the research process substantially more efficient and effective.
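The “train it over years of research” idea can be sketched with a toy naive Bayes ranker: the lawyer labels past results as useful or not, and new results are scored by how their words compare across the two piles. Everything here (the class, the labeled snippets) is invented for illustration; a real tool would use far richer features than a bag of words.

```python
from collections import Counter
import math

class RelevanceRanker:
    """Toy naive Bayes scorer trained on a lawyer's own relevance labels."""

    def __init__(self):
        self.counts = {"useful": Counter(), "not": Counter()}
        self.totals = {"useful": 0, "not": 0}

    def label(self, text, useful):
        cls = "useful" if useful else "not"
        for word in text.lower().split():
            self.counts[cls][word] += 1
            self.totals[cls] += 1

    def score(self, text):
        # Log-likelihood ratio with add-one smoothing; higher = more relevant.
        s = 0.0
        for word in text.lower().split():
            p_u = (self.counts["useful"][word] + 1) / (self.totals["useful"] + 2)
            p_n = (self.counts["not"][word] + 1) / (self.totals["not"] + 2)
            s += math.log(p_u / p_n)
        return s

ranker = RelevanceRanker()
ranker.label("securities fraud scienter pleading standard", useful=True)
ranker.label("easement boundary dispute survey", useful=False)

results = ["scienter pleading in securities cases", "boundary survey dispute"]
ranked = sorted(results, key=ranker.score, reverse=True)
print(ranked[0])  # the securities result ranks first
```

The point of the sketch is the workflow, not the model: each lawyer’s labels accumulate into a ranker tuned to his or her practice area.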
Risk management: Whereas outcome prediction is usually a one-off call, managing litigation, transaction, and compliance outcomes over time requires a sense of how to identify and manage risk. Kiiac’s foray into document benchmarking is an example of how AI might enhance risk management, allowing evaluation of massive transactional regime histories for, say, commercial real estate developers, to detect loss or litigation risk patterns under different contractual terms.
Strategic planning: Lawyers engage extensively in strategic planning for clients. Where to file suit? How hard to negotiate a contract term? Should we disclose compliance information? Naturally, it would be nice to know how different alternatives have fared in similar situations. Here again, AI could be employed to detect those patterns from massive databases of transactions, litigation, and compliance scenarios.
Judgment (and judging): Judgment about what a client should do, or about how to decide a case when judging, involves senses not easily captured by AI, such as fairness, honesty, equity, and justice. The unique facts of a case may call for departure from the pattern of outcomes based on one of these sensibilities. Yet doctrines do exist to capture some of these qualities, such as equitable estoppel, apportionment of liability, and even departure from sentencing guidelines, and these doctrines exhibit patterns in outcomes that may be useful for lawyers and judges to grasp in granular detail. What is equitable or just, in other words, is not an entirely ad hoc decision. AI could be used to decipher such patterns and suggest how far off the mark a judgment under consideration would be.
Legal reform: As I tell my 1L Property students, in almost every case we cover some lawyer was arguing for legal reform—a change in doctrine, a change in statutory interpretation, striking down an agency rule, and so on. And of course legislatures and agencies, when they are functional, are often in the business of changing the law. To some extent arguments for reform go against the grain of existing patterns, although in some cases they pick up on an emerging trend. They also rely heavily on policy bases for law, such as equity, efficiency, and legitimacy. In all cases, though, the argument has to be that there is something “broken” about continuing to apply the existing law, or not to invent new law, in the particular case or broader issue in play. AI might be particularly useful as a way of building that argument, such as by demonstrating a pattern of inefficient results from existing doctrine, or detecting strong social objection to an existing law.
Trendspotting: In my view the very best lawyers—the most bespoke—are those ahead of the game—the trendspotters. What is the next wave of litigation? Where is the agency headed with regulation? Which law or doctrine is beginning to get out of synch with social reality? Spotting these trends requires the lawyer to get his or her head outside the law. Here, I think, AI might be most effective in assisting the bespoke lawyer. A plaintiffs firm, for example, might use AI to monitor social media to identify trends highly associated with the advent of new litigation claims, such as people complaining on Twitter about a product. Similarly, this approach could be used to inform any of the lawyer functions outlined above.
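A crude version of that social-media monitoring can be sketched in a few lines: flag terms whose mention counts in a recent window jump well above their historical baseline. The posts and the product complaint are invented for illustration; a real pipeline would ingest a live feed and use much more robust statistics.

```python
from collections import Counter

def spike_terms(baseline_posts, recent_posts, factor=3, min_recent=2):
    """Terms appearing in recent posts far more often than in the baseline."""
    base = Counter(w for p in baseline_posts for w in p.lower().split())
    recent = Counter(w for p in recent_posts for w in p.lower().split())
    return sorted(
        w for w, n in recent.items()
        if n >= min_recent and n > factor * base.get(w, 0)
    )

# Invented posts about a hypothetical product.
baseline = ["great phone", "phone works fine"]
recent = ["phone battery caught fire", "battery fire again", "battery dead"]

print(spike_terms(baseline, recent))  # ['battery', 'fire']
```

A plaintiffs firm running something like this over consumer complaints would see emerging product-defect chatter well before the first claims are filed.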
Handling people: Ultimately, a top lawyer builds personal relationships with colleagues, peers, and clients. AI can’t help you do that, I don’t think, but by helping lawyers do all of the above it may free up time for a game of golf (tennis for me) with a client!
For a concise but thorough and insightful summary of how machine learning technology will transform the legal profession, and a sobering prediction of the winners and losers, check out The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services. Written by John McGinnis of Northwestern University Law School and Russell Pearce of Fordham Law School, this is a no-nonsense assessment of where the legal profession is headed thanks to the really smart people who are working on really smart machines. The key message is to abandon all notion that the progress of machine learning technology, and its incursion into the legal industry, will be linear. For quite a while after they were invented, computers didn’t seem that “smart.” They assisted us. But the progress in computational capacity was moving exponentially forward all the time. It is only recently that computers have begun to go beyond assisting us to doing the things we do as competently as we do, or better (e.g., IBM’s Watson). The exponential progress is not going to stop here–the difference is that henceforth we will see computers leaving us behind rather than catching up.
The ability of machines to analyze and compose sophisticated text is already working its way into the journalism industry, and McGinnis and Pearce see law as the next logical target. They foresee five realms of legal practice as the prime domains for computers supplanting human lawyers: (1) discovery, which is well underway; (2) legal search technology advancing far beyond the Westlaw of today; (3) generation of complex form documents, such as Kiiac; (4) composing briefs and memos; and (5) predictive legal analytics, such as Lex Machina. All of these trends are well in motion already, and they are unstoppable.
All of this is a mixed bag for lawyers, as some aspects of these trends will allow lawyers to do their work more competently and cost-effectively. But the obvious underside of that is reduced demand for lawyers. So, who wins and who loses? McGinnis and Pearce identify several categories of winners (maybe the better term is survivors): (1) superstars who are empowered even more by access to the machines to help them deliver high stakes litigation and transactional services; (2) specialists in areas of novel, dynamic law and regulation subject to change, because the lack of patterns will make machine learning more difficult (check out EPA’s 645-page power plant emissions proposed regulation issued yesterday–job security for environmental lawyers!); (3) oral advocates, until the machines learn to talk; and (4) lawyers practicing in fields with high client emotional content, because machines don’t have personalities, yet. The lawyering sector hardest hit will be the journeyman lawyer writing wills, handling closings, reviewing documents, and drafting standard contracts, although some entrepreneurial lawyers will use the machines to deliver high-volume legal services for low and middle income clients who previously were shut out of access to lawyers.
Much of what’s in The Great Disruption can be found in longer, denser treatments of the legal industry, but McGinnis and Pearce have distilled the problem to its core and delivered a punchy, swift account like no other I’ve seen. I highly recommend it.