Vanderbilt Law Students Build Apps for Access to Justice!

One of the high points of each year in our Program on Law & Innovation is the “pitch event” in Adjunct Professor Marc Jenkins’ Technology in Legal Practice class. One of the major projects in the class has students form teams that pair with area legal aid organizations to build problem-solving apps improving access to justice. Now wrapping up its third year, the class and the students are firing on all cylinders, building prototypes or live versions of some very meaningful apps that can help traditionally underserved populations who cannot affordably navigate our utterly complex legal system. Marc has worked closely with the legal aid organizations to build strong relationships with the students, and has also opened ties with Vanderbilt’s Computer Science Department and our new entrepreneurship center, The Wond’ry, to leverage their expertise in building out the apps. Here’s a quick summary of the students’ impressive accomplishments this year, describing for each team the partner organization, work product, and app authoring platform:

  • LGBT Legal Relief Fund: This new organization has been flooded with requests for help. The student team worked with the developers at KIM to build a workflow management app.
  • Legal Aid Society: The team built a mobile app prototype, which they named Clean Slate, to guide a person through the incredibly complicated criminal record expungement eligibility process. They used the JustinMind Mobile App prototyping tool.
  • Tennessee Justice for Our Neighbors: Using an app authoring platform designed by Vanderbilt computer science undergraduate Ashley Peck (very impressive!), this team developed a prototype of what they call the Childcare Contingency Plan, which helps undocumented immigrant parents put a plan in place for their children in case the parents are detained or deported.
  • Tennessee Justice Center: This student team designed an app for the Salesforce platform that walks families through the SNAP (food stamps) eligibility criteria. They reduced 1,000 pages of ridiculously complicated agency “guidance” to an interview of 30 to 60 questions, depending on the answers (see the sketch after this list for the general idea of how such a branching interview works).
  • Nashville Arts and Business Council: This team picked up where a previous year’s team left off: that team used Neota Logic to design an interview that aspiring musicians (we have a few here in Nashville!) can use to make business entity formation decisions appropriate to their plans. This year’s team essentially beta tested the existing app, leading to improved wording and more accurate outcomes.
  • Legal Aid Society: This team also continued work on a mobile web app started by a prior team, built with the same authoring platform designed by Ashley Peck, to guide users through the often bewildering debt collection process.
  • Legal Aid Society: Using the A2J Author platform, this team designed a web-based app they call Mission Expungement, which addresses the criminal record expungement process specifically for the Nashville jurisdiction.
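
To give a rough sense of what these guided-interview apps do under the hood, here is a minimal sketch of a branching eligibility interview in Python. It is purely illustrative: the questions, thresholds, and logic are invented for the example and are not the actual SNAP rules, nor any team’s actual app.

```python
# Minimal sketch of a branching eligibility interview.
# The questions, order, and thresholds are invented for illustration only;
# they are NOT the actual SNAP rules or any team's app logic.

def ask_yes_no(prompt: str) -> bool:
    """Ask a yes/no question on the console and return True for yes."""
    return input(prompt + " [y/n] ").strip().lower().startswith("y")

def eligibility_interview_sketch() -> str:
    """Walk a user through a few branching questions.

    A real guided interview encodes hundreds of pages of agency guidance
    as a decision tree, so each answer prunes the questions that follow;
    that is how 1,000 pages can collapse into a 30-60 question interview.
    """
    household_size = int(input("How many people are in your household? "))
    gross_income = float(input("Household gross monthly income (dollars)? "))

    # Hypothetical screening threshold, for illustration only.
    illustrative_limit = 1000 + 500 * household_size
    if gross_income > illustrative_limit:
        return "Likely ineligible at this screening step (illustrative threshold)."

    # Later branches appear only when earlier answers make them relevant.
    if ask_yes_no("Does anyone in the household have a disability?"):
        return "Additional deduction questions would follow here."
    return "Proceed to the next block of eligibility questions."

if __name__ == "__main__":
    print(eligibility_interview_sketch())
```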

Ruhl, Katz, and Bommarito publish “Harnessing Legal Complexity” in Science

I am pleased to announce the publication in Science, the journal of the American Association for the Advancement of Science, of an article I co-authored with Dan Katz and Mike Bommarito, Harnessing Legal Complexity. The summary from Science:

Complexity science has spread from its origins in the physical sciences into biological and social sciences. Increasingly, the social sciences frame policy problems from the financial system to the food system as complex adaptive systems (CAS) and urge policy-makers to design legal solutions with CAS properties in mind. What is often poorly recognized in these initiatives is that legal systems are also complex adaptive systems. Just as it seems unwise to pursue regulatory measures while ignoring known CAS properties of the systems targeted for regulation, so too might failure to appreciate CAS qualities of legal systems yield policies founded upon unrealistic assumptions. Despite a long empirical studies tradition in law, there has been little use of complexity science. With few robust empirical studies of legal systems as CAS, researchers are left to gesture at seemingly evident assertions, with limited scientific support. We outline a research agenda to help fill this knowledge gap and advance practical applications.

More information is available at the Science online site. Working with Dan and Mike, two of the leading figures in the application of complexity science and artificial intelligence techniques in law (see their Computational Legal Studies site), was an immense pleasure. Now, onward with the legal complexity research agenda!

Vanderbilt Sponsors New eJournal on AI and Law

I am pleased to announce that the Program on Law & Innovation at Vanderbilt Law School is the sponsor of the new SSRN eJournal, Artificial Intelligence – Law, Policy & Ethics. The journal publishes abstracts and papers focused on two themes: “AI for Law,” covering the increasing application of AI technologies in legal practice, and “Law for AI,” covering the issues that will arise as AI is increasingly deployed throughout society.  I am serving as the editor, supported by a wonderful Advisory Board.

If you are working on a paper in this domain, please consider including our journal when posting to SSRN, and if you have an SSRN subscription, please consider adding our journal to your feed.

Vandy AI & Law Workshop Covers All the Boxes!

Last week Vanderbilt’s Program on Law & Innovation held our Second Annual Workshop on Artificial Intelligence and Law, and it was a truly wide-ranging and inspirational set of presentations and roundtable discussions.

One way I think about this topic is to (artificially) unpack it into four themes, as shown in this 2×2 space:

[2×2 grid: columns “AI for Law” and “Law for AI”; rows “Research and Theory” and “Practice and Application”]

The idea is that AI will be deployed in legal practice and, as it is deployed in society more generally, will also raise ethical and policy concerns requiring legal responses. In both of those realms, work is needed on the theory and research side to facilitate and manage how AI is applied in practice.

Our workshop presentations and discussions covered all the boxes, and many demonstrated that the boxes are not hermetically sealed—some themes and questions are cross-cutting. Indeed, several participants have engaged in a lively post-workshop email discussion on the extent to which using AI in dispute resolution could lock in doctrine or could be “programmed” for creativity, a question that requires engaging both theory and practice.

Even if one is skeptical about how soon we will see “general AI” coming online, if ever, there’s no question that “weak AI” is getting stronger and stronger in both the AI for Law and Law for AI realms. There’s no way to navigate around it! We engaged it in the workshop starting Thursday with big picture overviews of the two overarching themes by Oliver Goodenough (AI for Law) and John McGinnis (Law for AI). Friday had both deep dives and high-level theory in play. For example, Michael Bess asked how we should act now to avoid pitfalls of ever-stronger AI. Dan Katz discussed his work on predicting legal outcomes with AI tools combined with expert and crowd predictions. Jeannette Eikes outlined an agenda for building AI-based contract regimes. John Nay used topic modeling to parse out features of Presidential exercise of power that would have taken years to accomplish using traditional research methods. Cat Moon and Marc Jenkins unpacked AI in the legal practice world, showing where it faces uptake bottlenecks, and Doug Fisher kicked off a discussion of what AI means in the AI research world. Jeff Ward offered an insightful examination of the challenges AI will present for Community Economic Development programs, as well as the uses CEDs can make of AI. In short, we covered a lot of the boxes, and more!

Many thanks to this year’s participants—I’m looking forward to planning next year’s gathering as well!

Vanderbilt Law School’s Second Annual Workshop on Artificial Intelligence and Law – March 2 and 3

Long Time No Post! I’ll explain why later. For now, I’m diving back into Law 2050. First up in the post order is news about this week’s workshop on AI & Law. Here’s the scoop about this great lineup of participants and themes we’ll cover:

Second Annual Workshop on Artificial Intelligence and Law

Vanderbilt University Law School

Program on Law & Innovation

March 2-3, 2017

The Workshop on Artificial Intelligence and Law each year brings together academics and practitioners working in one or both of two themes—AI for Law, which explores how AI will be deployed in legal research and practice; and Law for AI, which focuses on the legal, policy, and ethical issues that the deployment of AI in society is likely to create. This year’s workshop includes some of the nation’s most thoughtful experts and thinkers in both spaces. Thursday afternoon sets the scene with two presentations tapping into the two big themes to help frame a “big questions” discussion. Friday’s agenda intersperses research and practice presentations representing both themes, circling back at the end to those big questions—did we answer any, or at least chart the next steps?

Itinerary

Thursday, March 2

Burch Room (1st Floor)

3:00 – 3:30          Welcome and Introductions

3:30 – 4:00          Oliver Goodenough, Vermont Law School: Law as AI

4:00 – 4:30          John McGinnis, Northwestern University Law School: Discussion Lead – Breakaway AI

4:30 – 5:00          Roundtable: What are the big questions?

5:00 – 6:30          Free Time

6:30                       Dinner at Amerigo, 1920 West End

Later on?             Broadway music venues

Friday, March 3

Bass Berry Sims Room (2nd Floor)            

8:00 – 8:30          Breakfast in meeting room

8:30 – 8:45          Additional Introductions

8:45 – 9:15          Dan Katz, IIT Chicago-Kent Law School: Predicting and Measuring Law

9:15 – 10:15        Cat Moon, Legal Alignment, and Marc Jenkins, Asurion: Discussion Leads – AI in Practice

10:15 – 10:30     Break

10:30 – 11:00     Michael Bess, Vanderbilt University History Department: Human-level AI and the Danger of an Intelligence Explosion: Questions of Safety, Security, and International Governance

11:00 – 11:30     Jeff Ward, Duke University Law School:  A Community Economic Development Law Agenda for the Robotic Economy

11:30 – 12:00     Doug Fisher, Vanderbilt University Computer Science: Discussion Lead – Unpacking AI

12:00 – 1:00       Lunch and conversation in meeting room

1:00 – 1:30          John Nay, Vanderbilt University College of Engineering: Analyzing the President—the First 100 Days

1:30 – 2:00          Jeannette Eikes, Vermont Law School: AI for Contracts

2:00 – 2:30          J.B. Ruhl, Vanderbilt University Law School: Envisioning and Building “Legal Maps”

2:30 – 2:45          Break

2:45 – 3:15          Roundtable: Did we answer any of the big questions?

3:15 – 3:30          Closing remarks and next steps

Fighting Bad Artificial Intelligence: Law, Policy, and the AI Arms Race

Notwithstanding the concerns some very smart people have expressed about the risks of what the machines will do when they reach The Singularity, I’m actually a lot more concerned for my lifetime about what humans with evil intent are going to do with machines armed with artificial intelligence.

A few months ago I asked whether AI can make AI obey the law; there was no conclusive answer. That question, though, goes more to how AI might lead to socially undesirable results despite its use by good people with good intentions.

I call this the problem of Good AI Gone Bad, and it has gotten a lot of recent coverage in the media. Thankfully, on this front more very smart people are working on ways to make AI accountable to society by revealing its reasoning, and I expect we will see more and more effort in AI research to devise ways to keep it socially beneficial, transparent, and mostly under our control. Law should play a key role in that, and recent announcements by the White House and by major law firms are encouraging in that respect. My Vanderbilt colleague Larry Bridgesmith published a very concise and thorough summary of this concern in today’s Legal Tech News. It’s well worth the read as an entry point to this space.

But the problem is that there are bad people in the world who don’t want AI to obey the law; they want it to help them break the law. That is, there are bad people with bad intentions who know how to design and deploy AI to make them better at being bad. That’s what I call Bad AI. What do we do about that?

As in any other good-versus-bad battle, much comes down to who has the better weapon. The discipline of adversarial machine learning is where many on the good side are working hard to improve countermeasures to Bad AI. But this looks like an arms race, a classic Red Queen problem. And in my view, this one has super-high stakes, maybe not like the nuclear weapons arms race, but potentially pretty close. Bad AI is way beyond hacking and identity theft as we know them today; it’s about steering key financial, social, infrastructure, and military systems. Why disrupt when you can control? Unlike the nuclear weapons problem, though, mutually assured destruction might not keep the race in check (although North Korea has changed the rules of that arms race as well). With AI, what exactly are we “blowing up” besides algorithms, which can easily be rebuilt and redeployed from anywhere in the world?

As much as we are (rightfully) concerned that climate change could do us in eventually, the AI arms race is a more immediate, tangible, and thorny threat that could wreak tremendous financial and social havoc long before sea-level rise starts taking out islands and coastal cities. We at the Program on Law and Innovation see Bad AI as a pressing issue for law and policy, and so will be devoting our spring 2017 annual workshop on AI & Law to this issue as well as to the problem of Good AI Gone Bad. We will be bringing together key researchers (including Vanderbilt’s own expert on adversarial machine learning, Eugene Vorobeychik) and commentators. More to follow!

Law 2050 Students Identify Legal Issues on the Horizon

Once again the core writing assignment in my Law 2050 class requires students to identify a trend of any kind—technological, environmental, social, economic—so long as it is likely to raise policy issues that could require legal responses, and to spin out its impacts and legal implications in three styles of writing: (1) a blog post, (2) a client alert letter, and (3) a bar journal article. The idea behind the assignment is twofold. First, young lawyers can and increasingly must jump on emerging issues and brand themselves as among the “go-to” legal experts. Second, the style of writing needed to build that brand is generally not taught in law schools.

What I enjoy most about the assignment is working with the students to identify topics, as I learn a lot about what’s on the horizon. This year’s topics:

  • Blockchain technology in banking
  • The rise of FinTech
  • Fitbits in the courtroom
  • Advances in assisted reproductive technology
  • Healthcare applications of nanobots in our bodies
  • Exoskeletons
  • Litigation finance
  • Space tourism
  • Moral programming of driverless cars
  • Smart fabrics
  • Personalized genome sequencing
  • Changing marriage norms
  • Brain mesh technology (aka neural lace)
  • Space colonization
  • The proposed Equality Act
  • AI robots in the workplace
  • New state physician assisted suicide laws
  • Cybersecurity and drones
  • Preimplantation genetics
  • Employee wellness programs using wearable tech
  • Cyberwars
  • Epigenetic manipulation of livestock
  • The new DOT driverless car policy
  • Global worker enslavement
  • Vertical farming
  • Smart homes
  • Climate geoengineering
  • Smart pills
  • Mobile IDs
  • Legalized pot
  • Nanomachines

There’s a lot in that lineup, to say the least! The semester ends with students giving 3-minute “elevator pitches” to convince the class that the topic has legal legs. My hunch is they will be pretty convincing!

The Halfway Mark for the Fall 2016 Law 2050 Class

I have been remiss in failing to post about this year’s Law 2050 class, which like past years has been a blast. First and most important, I want to thank the guest speakers and panelists who have enriched the class so far this semester.

  • Each year I start the class in the first week with two lectures, one providing an overview of the legal industry’s “post-normal” times and the other a brief history of the American law firm (1650-2015). This year, Hank Heyming, a Vandy alum and General Counsel of UpThere, sat in on the law firm history lecture and offered his insights, which were spot on.
  • The second week of the class each year has been framed around two panels, the first composed of law firm leaders and the next day’s composed of in-house counsel leaders. This year’s panels did not fail to capture the students’ attention! For law firm leaders we had John Herman of Robbins Geller Rudman & Dowd, John-Paul Motley of O’Melveny & Myers, and Rita Powers of Greenberg Traurig. Our in-house team was Louise Brock of Bridgestone Americas, Chris Howard of Acadia Healthcare, and Louise Rankin of American Baptist Homes of the West. The two panels provided plenty of topics for later class discussion.
  • In Week 4 this year we had a chance to get a primer on big data, machine learning, and natural language processing from John Nay, a PhD student in Vandy’s Computational Decision Science program and co-founder (with Oliver Goodenough of Vermont Law School and me) of PredictGov, a new legal tech startup.
  • James Mackler of Frost Brown Todd gave a repeat performance in Week 5 of his inspirational story of building a successful drone law practice from scratch in the past several years. James is a classic example of the “jump in” message I used as the central theme of my 2016 Vandy Law School graduation commencement address. It was amazing to see how much the drone law space has evolved in just one year and how James has kept pace.
  • To round out the first half of the semester, we heard about the innovative fixed-fee reverse auction program GlaxoSmithKline (GSK) has developed over the past several years to retain law firms for large pieces of litigation. To explain how it works from both perspectives, we had Andy Bayman and Mike Duffy of King & Spalding, one of GSK’s long-standing outside law firms, and Brennan Torregrossa and Justin Ergler of GSK. The discussion centered on the realignment of incentives the fixed-fee and reverse auction approach has produced.

Law 2050 would not work without the devotion of these and the many other guest speakers who have joined us over the years. I cannot thank you enough!

More posts soon on my students’ innovative topics for their writing projects and the remaining lineup of this semester’s speakers.

The Emerging Global Human Synchrony – Good or Bad?

Social media are beginning to have profound impacts on society at global scales, with potentially good and bad consequences. On the latter, Morales et al. have recently published a paper reporting that they have detected what they call a “new emergent global synchrony that couples behavior in distant regions across the world.” They analyzed over 500 million geolocated tweets and detected a marked heartbeat pattern of tweeting at global longitudinal scales, which makes sense, as people at the same longitude share the same times for working, playing, eating, and sleeping. These pulses move around the world like the “wave” in a stadium. The authors observe that this synchrony is the result of a mix of intentional and self-organized behavior and leads to increasing global social complexity.
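
For readers curious how such a longitudinal “heartbeat” can be detected, here is a minimal sketch of the general idea in Python: bin geolocated tweets by longitude band and hour, then see how the hour of peak activity shifts across bands. It is my own toy reconstruction under assumed column names, not the Morales et al. pipeline.

```python
# Toy sketch of detecting a longitudinal "heartbeat" in geolocated tweet
# activity. This is an illustrative reconstruction of the general idea, not
# the Morales et al. method; the input columns ('timestamp', 'longitude')
# are assumptions.
import numpy as np
import pandas as pd

def activity_by_longitude_band(tweets: pd.DataFrame, n_bands: int = 24) -> pd.DataFrame:
    """Count tweets per (longitude band, UTC hour)."""
    tweets = tweets.copy()
    tweets["hour_utc"] = pd.to_datetime(tweets["timestamp"], utc=True).dt.hour
    tweets["band"] = pd.cut(tweets["longitude"],
                            bins=np.linspace(-180, 180, n_bands + 1))
    return (tweets.groupby(["band", "hour_utc"], observed=True)
                  .size()
                  .unstack(fill_value=0))

def peak_hours(counts: pd.DataFrame) -> pd.Series:
    """Return the UTC hour of peak activity for each longitude band.

    If activity follows local daily rhythms, the peak hour should shift
    roughly linearly with longitude -- the stadium-wave pattern.
    """
    return counts.idxmax(axis=1)
```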

While it’s nice to think of the world population beating in synchrony, all made possible by Twitter, Facebook, and other macro-scale social media, there can be downsides. I have posted before about research suggesting that complex adaptive systems are at their most fragile—and most susceptible to cascading contagion failure—when feedback mechanisms are strongly synchronized in the same direction. As the authors hypothesize, the global synchrony might actually “condition people’s decisions and diminish individuals’ freedom, as they are constrained by the norms and conventions of the social environment.” Consider, as well, the possibility that misinformation and mal-intended information enter the synchronous global media system. Such content could surge through and have devastating impacts before corrective action can be taken. Much as the Arab Spring demonstrated the positive power of mass social media, the emergent global social media synchrony could become a medium for cascades of injury to people and the social fabric.

Regulatory solutions to this risk do not appear practical. Wall Street can halt trading when the market starts heading south fast, but it may be impossible to shut down social media globally. Even if it could be done, the idea has disturbing political implications for personal freedom. Alas, given my relative ineptness at Twitter and the like, I probably wouldn’t know if it were shut down!

Check Out PredictGov – A New Entrant in the Legal Predictive Analytics Space

As I and many others have covered, the rapid infusion of new technologies into law—what some refer to as “law+tech”—is one of the major transformational trends leading to the post-normal era in which lawyers find themselves. But there is a very broad spectrum of law+tech initiatives coming into play, from those automating quite mundane routinized processes to those in pursuit of what I would call the Holy Grail of law+tech—predictive analytics.

Those who follow the Computational Legal Studies blog are familiar with the powerful predictive analytics tools Dan Katz and Mike Bommarito are developing for law, most notably their work on Supreme Court decisions. Over here at the Vanderbilt Program on Law and Innovation, John Nay, a Vandy Engineering Ph.D. Candidate and PoLI Research Fellow, is also developing tools for predictive legal analytics, in his case on federal legislation. I’ll let John’s words explain what’s behind the project:

While working at a policy strategy firm in D.C. and while interning for the Majority Leader of the U.S. House, I was overwhelmed with the number of bills to track. After leaving D.C. and no longer reading Politico every morning, trying to keep up-to-date was hopeless. There are often more than 8,000 bills under consideration in Congress but less than 4% are likely to become law. Based on my research on predicting and understanding legislation with natural language processing, I created a machine learning system to predict bill enactment. Starting with the 107th Congress, models were trained on data from previous Congresses, and all bills in the current Congress were predicted until the 113th Congress served as the test. The median of the model’s predicted probabilities for enacted bills was 0.71, and the median of the predicted probabilities for failed bills was 0.01. To bring this predictive power to the public, I built a web interface, PredictGov, where all bills currently under consideration and their predictions (updated daily) can be interactively explored. Users can sort and filter the bills and download the results. I also provide an application for searching networks of similar bills based on their texts on the website and updates on key bills on Twitter @PredictGov.
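
To make the pipeline John describes a little more concrete, here is a minimal sketch of training a text classifier on earlier Congresses and scoring bills from a held-out Congress. It is an illustrative toy under an assumed data file and schema, not the actual PredictGov model or its feature set.

```python
# Illustrative sketch of a bill-enactment classifier: train on earlier
# Congresses, hold out the 113th as the test set. This is NOT the actual
# PredictGov model; the file "bills.csv" and its columns are assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed schema: one row per bill, with the Congress it was introduced in,
# the bill text, and whether it was ultimately enacted (0/1).
bills = pd.read_csv("bills.csv")

train = bills[bills["congress"] < 113]    # earlier Congresses
test = bills[bills["congress"] == 113]    # held-out Congress

model = make_pipeline(
    TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train["text"], train["enacted"])

# Predicted probability of enactment for each held-out bill.
probs = model.predict_proba(test["text"])[:, 1]
enacted = test["enacted"].to_numpy() == 1
print("median predicted probability, enacted bills:", pd.Series(probs[enacted]).median())
print("median predicted probability, failed bills:", pd.Series(probs[~enacted]).median())
```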

I’m delighted to be working with John to help inform his project and other initiatives, even though I understand only half of what he’s talking about! Look for more to follow on John’s PredictGov website.
