
Category Archives: Legal Technology

Law 2050 Students Take a Deep Dive into Neota Logic

Many, many years ago, when I was practicing environmental law with Fulbright & Jaworski in Austin, I was unfortunate enough to have a number of clients whose needs required that I master the EPA’s utterly convoluted definition of solid and hazardous waste. One summer I assigned a summer associate the task of flowcharting the definition. Over the course of the summer we debugged draft after draft until, finally, we had a handwritten flowchart that flawlessly worked any scenario through the definition step-by-step. It was ten legal-sized, taped-together pages long. It worked, but it wasn’t very practical.

If only we had had Neota Logic back then!  Last week, in my Law 2050 class, Kevin Mulcahy, Director of Education for Neota, demoed their product over the course of two classes and a 3-hour evening workshop.  Prior to the session I had assigned the class the exercise of flowcharting the copyright law of academic fair use. Each student prepared a flowchart and explained its logic, then six groups collaborated on final work products. I sent the group flowcharts to Kevin so he could use them to explain the Neota platform in a context familiar to the students.

Neota is a software platform that allows the user to translate legal (or other) content into a user-friendly interactive application, much as TurboTax does for tax preparation. Neota allows the content expert to build the app with no coding expertise, and the end products are quite sophisticated in terms of what can be embedded in the app and how smoothly the app walks the user through the compliance logic. Example apps Kevin offered covered topics ranging from songwriter rights to Dodd-Frank compliance.

In the first class period, Kevin introduced Neota and then walked through the group flowcharts to analyze how each one broke down the fair use compliance problem. The core theme was how important it is to develop the output scenarios first. In the fair use exercise, there are several yes/no questions specific to educational uses, and a multi-factor balancing test applies if none of those binary questions leads to a fair use outcome. Like any balancing test, this one yields a range of scenarios, from very likely fair use to very likely not fair use. We spent a good deal of time thinking about how to design an app component to capture the balancing test.
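For readers who want to see what that compliance logic looks like once it leaves the whiteboard, here is a bare-bones Python sketch of the structure we were working with: categorical educational-use questions first, then a weighted four-factor balancing test. The factor names track 17 U.S.C. § 107, but the scores, weights, and thresholds are my own invented placeholders, not anything built into Neota (and certainly not legal advice).

```python
# Illustrative sketch of a fair-use screening logic: binary questions first,
# then a four-factor balancing test. Scores, weights, and thresholds are
# invented placeholders for demonstration only -- not Neota's logic.

def screen_binary_questions(answers: dict) -> bool:
    """Return True if a categorical educational-use question resolves the issue."""
    return (
        answers.get("face_to_face_teaching", False)
        or answers.get("qualifies_under_TEACH_Act", False)
    )

def balance_factors(factors: dict) -> str:
    """Weigh the four statutory factors, each scored -1, 0, or +1 by the user."""
    weights = {
        "purpose_and_character": 1.0,   # transformative / nonprofit educational use
        "nature_of_work": 0.5,          # factual vs. highly creative work
        "amount_used": 1.0,             # small excerpt vs. heart of the work
        "market_effect": 1.5,           # harm to the market for the original
    }
    score = sum(weights[f] * factors.get(f, 0) for f in weights)
    if score >= 2.0:
        return "very likely fair use"
    if score <= -2.0:
        return "very likely NOT fair use"
    return "uncertain -- consult counsel"

def assess(answers: dict, factors: dict) -> str:
    if screen_binary_questions(answers):
        return "fair use (categorical educational exception)"
    return balance_factors(factors)

if __name__ == "__main__":
    print(assess(
        {"face_to_face_teaching": False},
        {"purpose_and_character": 1, "nature_of_work": 0,
         "amount_used": 1, "market_effect": -1},
    ))
```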

In the evening workshop a group of 20 students acted as content experts to guide Kevin through the process of building the fair use app, much in the way a legal expert might work with a Neota software expert. The most striking learning experience from this session, besides the deep look under Neota's hood, was how the process of building the app actually sharpened our fair use compliance logic. We tested various approaches for capturing the balancing test and conveying output scenarios with substantive explanations for the user.

The next day the entire class regrouped to go over the workshop product, allowing those who could not make the workshop due to conflicting classes the chance to get a good feel for both the flexibility and precision the Neota software offers. Thinking back to my perfectly accurate but impractical ten-page flowchart of the EPA’s waste definition, I could envision how that and many other tasks that required developing a compliance logic could have been leveraged into apps I could have shared with other attorneys in my firm as well as clients.

My Law 2050 students clearly got a lot out of the immersion in using Neota to attack a compliance logic problem. I can’t thank Kevin and Neota enough for the time he invested in preparing for and delivering what was an excellent hands-on and instructive workshop. By the way, the EPA now has an online decision tool for navigating through the waste definition. I think they might want to get in touch with Neota!

Lex Machina a Smash Hit in Law 2050

This week my Law 2050 class has been all about Lex Machina, and to quote one student at the end of the two sessions: “I can’t imagine being a patent law firm and not wanting to purchase that!” [Note: I have no connection whatsoever with Lex Machina other than having them appear in my class, nor, I believe, did this student.] That sentiment was widely shared.

I contacted Lex Machina early in the semester to explore how I could give the class a deep dive in their technology. Jeremy Mulder, Lex Machina’s Director of Customer Success, worked closely with me to make the site available to the students, design an exercise for us to complete in one class, and guide us through the site and the company’s vision over a JoinMe link the next day.

My reactions:

First, the Lex Machina product is a truly awesome example of turning Big Data into a useful, user-friendly legal analytics product. The depth and breadth of data contained in the site, particularly for patent law, were astounding. For example, pick any federal district judge and within a few seconds the site provides an array of data, including outcomes at granular levels, patents handled, time to case termination, lawyers appearing in the court, and much more. The site display and navigation are a breeze. The class started to tackle the questions together at the beginning of the first class, and within about 10 minutes, with no instructions from Lex Machina, we had begun to navigate the site with ease and, over time, learned how to tap into one analytic tool after another. The site is a model for other law+tech developers.

Second, as the exercise progressed I began to wonder how I would describe Lex Machina within the “disruptive technology” space. Disruption comes in many forms, and whether good or bad depends on the beholder. Lex Machina strikes me as disruptive primarily by providing an additive function—it makes possible what a lawyer could not have imagined he or she could do, at least without a tremendous amount of effort, time, and cost. It adds a tool, but it does not necessarily replace lawyers, or suck away billable hours, or “commoditize” a lawyering function; indeed, by giving lawyers more power over how to analyze patent law’s expanse, it may do just the opposite. More on the “disaggregation” of the disruptive legal technology concept into more descriptive and refined categories in an upcoming post.

Riley Might Help: New Technology Aimed to Detect Texters Raises Privacy Concerns

Guest Post by 2050 student Catherine Moreton

Tech company ComSonics announced in September that it is developing a new type of radar gun that detects not speeding, but texting. ComSonics specializes in handheld radar devices used mostly by cable companies searching for emission leaks in broken wires. But at the second annual Virginia Distracted Driving Summit, ComSonics revealed that the same technology is being adapted to track radio frequencies emitted when a driver sends a text message. According to spokesperson Malcolm McIntyre, the device can distinguish between frequencies emitted by text messages and those emitted by phone calls or emails.

In a year that included the National Highway Traffic Safety Administration (“NHTSA”) launching its first-ever national advertising campaign against distracted driving and AT&T’s “It can wait” campaign going viral, overdue public awareness of the dangers of texting and driving has increased dramatically. This is wonderful news for road safety and the 44 states (plus D.C.) that have banned texting while driving. But at what cost should we allow police officers to enforce those statutes more directly?

While McIntyre says the radar gun is "close to production," technological concerns range from how to pinpoint whether the driver or a passenger was the one texting to what to do about automatic response messages. The technology is also currently limited to SMS messages and cannot yet detect texts sent over Wi-Fi between iOS devices. Absent a safe harbor, the government might eliminate this boost for smartphone owners by using the Communications Assistance for Law Enforcement Act ("CALEA") to require providers to enable detection.

Once those kinks are worked out, privacy law will take center stage. McIntyre insists that the technology cannot decrypt the content of the messages, and under conventional Smith v. Maryland wisdom, this distinction would limit the government’s Fourth Amendment liability. But in a 2012 concurrence, Justice Sotomayor began to poke holes in the applicability of Smith to cell phone cases, calling the third-party doctrine “ill suited to the digital age.”

Plus, even though a cell phone user never reasonably expects her metadata to be private, the best evidence that a driver was texting is the time-stamped text itself. And if police want a driver to hand over her phone and incidentally reveal its contents, two June 2014 Supreme Court rulings suggest they’re going to need a warrant.

This summer, Riley v. California and companion case United States v. Wurie made huge advances for individual data privacy rights regarding cell phones, requiring a warrant for police to search essentially any kind of cell phone. With those opinions, the Supreme Court granted digital devices full Fourth Amendment protection absent exigent circumstances.

It is yet to be seen how Riley will affect other privacy arguments that could challenge the radar guns. Kyllo v. United States (while distinguishable in that it had to do with a home, which carries the strongest expectation of privacy) could require a warrant until the radar guns are “in general public use.” Independently of Fourth Amendment causes of action, the Stored Communications Act (“SCA”) could provide a remedy for phones searched without a warrant, and the Pen Register Act could require a court order before the radar guns may be used at all.

The good news is that the Court, after some resistance, seems ready to embrace the challenges of the digital age by beginning to agree with Justice Scalia’s 2010 view that “applying the Fourth Amendment to new technologies may sometimes be difficult, but when it is necessary to decide a case we have no choice.”

Law 2050 Student Projects on Trends in Law and Law Practice

Given how much time we spend in law school covering what the law was and is, one of the goals of my Law 2050 class is to get students to think about what the law will be and how they can help shape its future. I have students identify examples of two kinds of trends. The first is an "inside law" trend, such as new technology and new kinds of service providers, that will influence how law is practiced. The other is an "outside law" trend, such as developments in health care, technology, and the economy, that will influence how law evolves in response.

Last year I had students work in groups to present "pitches" in a shark-tank setting, with the pitch being an assessment of whether to invest in the trend (e.g., put money into a new legal practice technology or devote firm resources to developing a new practice area). This year I have used this phase of the class to develop some practical, practice-oriented writing skills: a blog post, a client alert letter, and a bar journal article. As was the case last year, I am thoroughly impressed with the topics the students selected, and their blog post assignments were top-notch. Watch for several of them in coming days as students serve as contributing bloggers!

Here’s a sample of the topics:

Inside Law Trends: lawyer coaching for pro se clients; IP prior art search outsourcing; third party litigation funding; Shake, the contract app; legal hackathons; legal fee analytics; Ravel Law; Mitratech’s software for in-house counsel; “low bono” law firms; legal project management firms; online dispute resolution; pricing consultants; Islamic finance practice; speech recognition programs for lawyers; Bryan Cave’s Rosetta project; legal knowledge engineering; telecommuting and the decline of the law office; Counsel on Call; Integron; business for lawyers training programs; legal solution engineers; Clerky; Axiom–is it becoming another BigLaw?; virtual courts; Legal Force; and compliance lawyering.

Outside Law Trends: digital signatures; commercial delivery drones; invisibility cloaking; Google Glass; neural implants; predictive policing; driverless cars; commercial space travel; e-money; The Internet of Things (embedded sensor networks); newsgathering drones; unmanned cargo ships; virtual patient consultations; 3D printing of guns and organs; apps to convert 3D iPhone photos to 3D printing; Apple’s fitness watch; automobile connectivity and privacy issues; texting detection technology for police; cloud storage issues; sea level rise; crowdfunding; negligent infliction of disease; ridesharing (Uber etc.); robotic surgery; renewable energy trends; extreme reality TV; fracking; human gene patenting; and police body cameras.

Needless to say, we are going to have some interesting class discussions!


An Evening With Some Really Smart People Working In Law+Tech

As many interested in Law 2050 topics will know, Nashville has the pleasure of hosting this year's International Legal Technology Conference. I have not been able to attend much of it given the ironic detail that I have been teaching Law 2050 classes the same days as the conference. So it was a real treat to be invited to a dinner gathering to discuss the law+tech landscape along with several current and former Law 2050 students, other Vanderbilt Law students, and local legal community members.

Our hosts were Michael Dunn and Aria Safar of e-Stet, the California-based litigation technology company. Also present, and presenting tomorrow at the conference, was Noah Waisberg, founder of Diligence Engine, which has developed transaction due diligence review software. E-Stet treated us to an excellent Nashville hot chicken spread and opened an informal forum on the state of play and future of law+tech and its impact on the legal services industry. Although I can't speak for anyone but myself, here's my take-home from the discussion:

  • Legal technology developments like those represented by e-Stet and Diligence Engine (and a fast-expanding universe of other developers) will make lawyers better and more efficient. Law is one of those professions in which making a mistake can be very, very costly, so why not reduce the risk of missing an important document or detail? The downside may be that efficiency cuts into hours billed, but the offsetting upside is that better lawyering results attract more work.
  • These advances in law+tech are going to flatten the legal services industry in two ways. First, they will make it more possible for lawyers to serve the mid-tier market of consumers and small businesses. Firms that might in the past (and present) have seen their market as large corporations and wealthy individuals might very well be in a position to provide reasonable-cost services to those markets. Whether they will deign to do so is a different question. But one thing is for sure–if they don't, someone will.
  • The other flattening effect of law+tech is that it levels the playing field between the AmLaw 50, concentrated as they are in New York, L.A., and other mega markets, and the major regional/city law firms. If you have a significant deal or piece of litigation in Nashville or Denver, why fly in lawyers from New York or L.A. when law+tech has made everyone better? The experiential advantage of spending 10 years working deals in New York etc. will erode as everyone, everywhere, has access to aggregated databases of deal documents and the computational analytics to crunch through them. Bespoke lawyering may still be more concentrated in a few major cities, but over time this trend could revolutionize the legal services industry, giving law grads and young lawyers even greater flexibility to combine a sophisticated legal practice with quality of life preferences.
  • I think I can speak for all present in concluding that law+tech is not headed in the direction of robot lawyers any time soon (speaking of which, here’s the program for a conference session on that tomorrow). Perhaps a substantial chunk of lawyering can be mechanized, commoditized, and computerized, but the bottom line is that life is complicated and as soon as a client’s preferences or needs depart a smidgen from the default context built into the “robot,” you need a human. But the human will use law+tech to provide a faster, better, more efficient outcome. Maybe the better way to think of it is lawyer+robot.

The most gratifying aspect of this fascinating evening (besides the ridiculously spicy hot chicken!) was seeing my students engage in the discussion at what I considered to be a high level of knowledge and insight. Most if not all of them are members of our Journal of Entertainment Law & Technology, and it was clear that their experience on the journal has paid off in terms of enhanced awareness of the trends in law+tech. Go Vandy!

Law 2050 Rides Again!

Summer is over and classes start today here at Vanderbilt Law School, which means Law 2050 is back in action! Later today I will ramp up the second year of the Law 2050 class and begin posting about it and topics of interest to legal futurists.

The first order of business is to thank the many wonderful people who have agreed to be guest speakers in the class. Like last year’s lineup, it’s an exceptional set of presenters. Their perspectives bring life to the class and enhance the student experience in so many ways. Today’s post is devoted to them–many thanks to you all!

Aug. 25: Guest Speaker Panel – Law firm leaders discuss the state of the practice

Aug. 26: Guest Speaker Panel – Corporate in-house counsel discuss the drivers of change

Sept. 9: Guest Speaker Panel – The globalization and consolidation of law firms

Sept. 29: Guest speaker – Larry Bridgesmith of ERM Legal Solutions: Introduction to legal process management

Sept. 30: Guest speaker – Marc Jenkins of Cicayda: Introduction to e-discovery and information technology

Oct. 6: Panel Discussion: Alternatives to BigLaw – What is their “new normal”?

Oct. 14: Demonstration of Lex Machina Legal Analytics

Oct. 20: Guest speaker – Zygmunt Plater of Boston College Law School: The future of environmental law

Oct. 27: Guest speaker – Michael Mills of Neota Logic: Introduction to Neota Logic compliance software

Nov. 17: Guest Speaker Panel – Law firm economics and advancement, big and small

Lawyers, Do Not Fail to Read “The Great Disruption”

For a concise but thorough and insightful summary of how machine learning technology will transform the legal profession, and a sobering prediction of the winners and losers, check out The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services. Written by John McGinnis of Northwestern University Law School and Russell Pearce of Fordham Law School, this is a no-nonsense assessment of where the legal profession is headed thanks to the really smart people who are working on really smart machines. The key message is to abandon all notion that the progress of machine learning technology, and its incursion into the legal industry, will be linear. For quite a while after they were invented, computers didn't seem that "smart." They assisted us. But the progress in computational capacity was moving exponentially forward all the time. It is only recently that computers have begun to go beyond assisting us to doing the things we do as competently as we do, or better (e.g., IBM's Watson). The exponential progress is not going to stop here–the difference is that henceforth we will see computers leaving us behind rather than catching up.

The ability of machines to analyze and compose sophisticated text is already working its way into the journalism industry, and McGinnis and Pearce see law as the next logical target. They foresee five realms of legal practice as the prime domains for computers supplanting human lawyers: (1) discovery, which is well underway; (2) legal search technology advancing far beyond the Westlaw of today; (3) generation of complex form documents, such as Kiiac; (4) composing briefs and memos; and (5) predictive legal analytics, such as Lex Machina. All of these trends are well in motion already, and they are unstoppable.

All of this is a mixed bag for lawyers, as some aspects of these trends will allow lawyers to do their work more competently and cost-effectively. But the obvious underside of that is reduced demand for lawyers. So, who wins and who loses? McGinnis and Pearce identify several categories of winners (maybe the better term is survivors): (1) superstars who are empowered even more by access to the machines to help them deliver high stakes litigation and transactional services; (2) specialists in areas of novel, dynamic law and regulation subject to change, because the lack of patterns will make machine learning more difficult (check out EPA’s 645-page power plant emissions proposed regulation issued yesterday–job security for environmental lawyers!); (3) oral advocates, until the machines learn to talk; and (4) lawyers practicing in fields with high client emotional content, because machines don’t have personalities, yet. The lawyering sector hardest hit will be the journeyman lawyer writing wills, handling closings, reviewing documents, and drafting standard contracts, although some entrepreneurial lawyers will use the machines to deliver high-volume legal services for low and middle income clients who previously were shut out of access to lawyers.

Much of what’s in The Great Disruption can be found in longer, denser treatments of the legal industry, but McGinnis and Pearce have distilled the problem to its core and delivered a punchy, swift account like no other I’ve seen. I highly recommend it.


Big Data and Preventive Government: A Review of Joshua Mitts’ Proposal for a “Predictive Regulation” System

In Minority Report, Steven Spielberg's futuristic movie set in 2054 Washington, D.C., three sibling "pre-cogs" are hooked up with wires and stored in a strange-looking kiddie pool to predict the occurrence of criminal acts. The "Pre-Crime" unit of the local police, led by John Anderton (played by Tom Cruise), uses their predictions to arrest people before they commit the crimes, even if the person had no clue at the time that he or she was going to commit the crime. Things go a bit awry for Anderton when the pre-cogs predict he will commit murder. Of course, this prediction has been manipulated by Anderton's mentor and boss to cover up his own past commission of murder, but the plot takes lots of unexpected twists to get us to that revelation. It's quite a thriller, and the sci-fi element of the movie is really quite good, but there are deeper themes of free will and Big Government at play: if I don't have any intent now to commit a crime next week, but the pre-cogs say the future will play out so that I do, does it make sense to arrest me now? Why not just tell me to change my path, or would that really change my path? Maybe taking me off the street for a week to prevent the crime is not such a bad idea, but convicting me of the crime seems a little tough, particularly given that I won't commit it after all. Anyway, you get the picture.

As we don’t have pre-cogs to do our prediction for us, the goal of preventive government–a government that intervenes before a policy problem arises rather than in reaction to the emergence of a problem–has to rely on other prediction methods. One prediction method that is all the rage these days in a wide variety of applications involves using computers to unleash algorithms on huge, high-dimensional datasets (a/k/a Big Data) to pick up social, financial, and other trends.

In Predictive Regulation, Sullivan & Cromwell lawyer and recent Yale Law School grad Joshua Mitts lays out a fascinating case for using this prediction method in regulatory policy contexts, specifically the financial regulation domain. I cannot do the paper justice in this blog post, but his basic thesis is that a regulatory agency can use real-time computer assisted text analysis of large cultural publication datasets to spot social and other trends relevant to the agency’s mission, assess whether its current regulatory regime adequately accounts for the effects of the trend were it to play out as predicted, and adjust the regulations to prevent the predicted ill effects (or reinforce or take advantage of the good effects, one would think as well).

To demonstrate how an agency would do this, and why it might be a good idea at least to do the text analysis, Mitts examined the Google Ngram text corpus, a word-frequency database built from the texts of an enormous number of books (it would take a person 80 years just to read the words from books published in 2000), searching 2005-06 for two-word phrases (bigrams) relevant to the financial meltdown–phrases like “subprime lending,” “default swap,” “automated underwriting,” and “flipping property”–words that make us cringe today. He found that these phrases were spiking dramatically in the Ngram database for 2005-06 and reaching very high volumes, suggesting the presence of a social trend. At the same time, however, the Fed was stating that a housing bubble was unlikely because speculative flipping is difficult in homeowner-dominated selling markets and blah blah blah. We know how that all turned out. Mitts’ point is that had the Fed been conducting the kind of text analysis he conducted ex post, it might have seen the world a different way.
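For the curious, here is a minimal sketch of that kind of spike detection, assuming the yearly bigram counts have already been pulled into a local CSV with bigram, year, and freq columns. The file name, column layout, and the 2x-over-baseline threshold are my own placeholders, not Mitts' method or Google's export format.

```python
# Sketch of spike detection over yearly bigram frequencies, in the spirit of
# Mitts' ex post analysis. Assumes a local CSV with columns: bigram, year, freq
# (file name, columns, and threshold are placeholders).
import pandas as pd

WATCHLIST = ["subprime lending", "default swap", "automated underwriting", "flipping property"]

def flag_spikes(path: str, recent_years=(2005, 2006), baseline_span=5, ratio=2.0):
    df = pd.read_csv(path)
    flagged = []
    for bigram in WATCHLIST:
        series = df[df["bigram"] == bigram].set_index("year")["freq"].sort_index()
        baseline = series.loc[recent_years[0] - baseline_span : recent_years[0] - 1].mean()
        recent = series.loc[recent_years[0] : recent_years[1]].mean()
        if baseline > 0 and recent / baseline >= ratio:
            flagged.append((bigram, recent / baseline))
    return flagged

if __name__ == "__main__":
    for bigram, jump in flag_spikes("bigram_frequencies.csv"):
        print(f"{bigram}: {jump:.1f}x above its 5-year baseline")
```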

Mitts is very careful not to overreach or overclaim in his work. It’s a well-designed and well-executed case study with all caveats and qualifications clearly spelled out. But it is a stunningly good example of how text analysis could be useful to government policy development. Indeed, Mitts reports that he is developing what he calls a “forward-facing, dynamic” Real-Time Regulation system that scours readily available digital cultural publication sources (newspapers, blogs, social media, etc.) and posts trending summaries on a website. At the same time, the system also will scour regulatory agency publications for the FDIC, Fed, and SEC and post similar trending summaries. Divergence between the two is, of course, what he’s suggesting agencies look for and evaluate in terms of the need to intervene preventively.
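And here is a back-of-the-envelope sketch of the divergence check itself: compare how prominent a watchlist phrase is in cultural sources versus agency publications and flag big gaps. The corpora, phrases, and threshold below are invented placeholders; the Real-Time Regulation system Mitts describes is far more sophisticated.

```python
# Back-of-the-envelope sketch of a "divergence" check: compare how prominent a
# watchlist phrase is in cultural sources versus agency publications and flag
# large gaps. Corpora, phrases, and the 25% threshold are placeholders.

def phrase_rate(documents: list[str], phrase: str) -> float:
    """Occurrences of the phrase per 10,000 words across a corpus."""
    total_words = sum(len(doc.split()) for doc in documents)
    hits = sum(doc.lower().count(phrase) for doc in documents)
    return 10_000 * hits / max(total_words, 1)

cultural_docs = ["everyone is flipping property and then flipping property again ..."]
agency_docs = ["the committee sees little evidence of speculative activity in housing ..."]

for phrase in ["flipping property", "subprime lending"]:
    cultural = phrase_rate(cultural_docs, phrase)
    agency = phrase_rate(agency_docs, phrase)
    if cultural > 0 and agency < 0.25 * cultural:
        print(f"divergence flag: '{phrase}' is trending publicly but barely appears in agency publications")
```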

For anyone interested in the future of legal computation as a policy tool, I highly recommend this paper–it walks the reader clearly through the methodology, findings, and conclusions, and sparks what in my mind is a truly intriguing set of policy questions. There are numerous normative and practical questions raised by Mitts’ proposal not addressed in the paper, such as whether agencies could act fast enough under slow-going APA rulemaking processes, whether agencies conducting their own trend spotting must make their findings public, who decides which trends are “good” and “bad,” appropriate trending metrics, and the proportionality between trend behavior and government response, to name a few. While these don’t reach quite the level of profundity evident in Minority Report, this is just the beginning of the era of legal computation. Who knows, maybe one day we will have pre-cogs, in the form of servers wired together and stored in pools of cooling oil.


Racing with the Legal Computation Machine at the Inaugural Center for Computation, Mathematics, and the Law Workshop

I took a deep dive last week into the world of legal computation, to see just how far it has come, where it is going, and how transformative it will be as a force in legal thought and practice. I was provided this opportunity as a participant in the inaugural workshop of the University of San Diego Law School’s new Center for Computation, Mathematics, and the Law (CCML). (Before going into the details, let me add that if one is going to attend a workshop, USD is one heck of a nice place to do it! To emphasize the point, and to highlight the impact the CCML already is having, the International Conference on Artificial Intelligence and Law has selected USD as the site for its 2015 annual meeting.) Ted Sichelman and Tom Smith at USD Law are the founders and directors of the CCML, and the workshop will rotate annually between USD and the University of Illinois Law School, where patent law expert Jay Kesan will coordinate the program.

By way of disclaimer, I have to emphasize that I am not a Comp Sci guy. My math ended with Calculus II, my stats ended with multivariate regression, and my coding ended with SPSS and Fortran, and all are in the distant past. To say the least, therefore, the workshop was a humbling experience, as I was reminded at every turn that I was not the smartest guy in the room! So I approached the workshop through the eyes of Law 2050—I don’t need to know how to code to know how the end product works and to assess its potential to influence legal theory and practice. From that perspective, the workshop revealed an astounding and exciting array of developments. All of the presentations were tremendously well done; here is a taste of those that resonated most with the Law 2050 theme:

Paul Ohm (University of Colorado Law School) presented a fascinating study of how to parse the U.S. Code text to extract instances of defined terms. While at the workshop, he coded a software search engine that instantaneously returns links to all provisions in the Code defining a particular term. I tried it—it works!
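Ohm's code wasn't distributed, but the core move is easy to sketch: statutory definitions tend to follow recognizable drafting patterns like 'the term "X" means ...', so a regular-expression pass over the Code text can pull out candidate definitions. A toy version follows; the pattern and sample text are mine, not his, and a production tool would handle curly quotes, cross-references, and many more drafting formulas.

```python
# Toy sketch of extracting defined terms from statutory text using the common
# drafting pattern `the term "X" means/includes ...`. The regex and sample text
# are illustrative only; Paul Ohm's actual tool was not shared in code form.
import re

DEFINITION_PATTERN = re.compile(
    r'[Tt]he term "(?P<term>[^"]+)"\s+(?:means|includes)\s+(?P<definition>[^.;]+)'
)

sample = (
    'For purposes of this chapter, the term "solid waste" means any garbage, '
    'refuse, or sludge from a waste treatment plant; the term "person" includes '
    'an individual, corporation, partnership, or governmental unit.'
)

for match in DEFINITION_PATTERN.finditer(sample):
    print(f'{match.group("term")!r} -> {match.group("definition").strip()}')
```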

Dan Katz (Michigan State University Law School) presented his research team’s ongoing work on a classification algorithm for predicting affirm/reverse outcomes of U.S. Supreme Court decisions. Previous work on this front (Ruger et al., 2004) pitted expert lawyers against a classification tree algorithm applied to one year of Court decisions, with the computer outperforming the experts, 75% to 58%. Dan’s team applied a more advanced “random forests” classification approach to the last 50 years of Court decisions and maintained accuracy levels of 70%.
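For readers unfamiliar with the method, a random forest is an ensemble of decision trees trained on coded case features, with the trees voting on the outcome. Here is a minimal sketch using scikit-learn; the feature names and data are random placeholders, not the Katz team's actual Supreme Court Database variables, so the toy accuracy hovers around chance.

```python
# Minimal sketch of a random-forest affirm/reverse classifier. Feature names and
# data are invented placeholders; the real project uses rich, hand-coded
# Supreme Court Database features, which is where the predictive power comes from.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cases = 500

# Toy features: issue area, lower-court direction, circuit of origin, term year.
X = np.column_stack([
    rng.integers(1, 14, n_cases),       # issue area code
    rng.integers(0, 2, n_cases),        # lower-court ideological direction
    rng.integers(1, 13, n_cases),       # circuit of origin
    rng.integers(1953, 2004, n_cases),  # term year
])
y = rng.integers(0, 2, n_cases)         # 1 = reverse, 0 = affirm

model = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~0.50 on random data
```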

Kincho Law (Stanford Civil Engineering) presented a robust text parsing and retrieval project designed to allow the user to extract and compare regulations pertaining to specific topics. For example, if the user is interested in water toxicity regulations for a particular contaminant, the program identifies and compares federal and state regulations on point. His team has also embedded a plethora of information into many of the regulations (e.g., links to relevant regulatory documents), along with formal logic statements for many of them, allowing the user to treat the regulations as a true set of code.
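A toy illustration of the comparison piece: store each jurisdiction's rule on a topic as structured data, then line up federal and state limits side by side. The limits and citations below are invented placeholders, not drawn from the Stanford project.

```python
# Toy illustration of extracting and comparing regulations on a single topic:
# each rule is stored as structured data, then federal and state limits are
# lined up side by side. Numeric limits and citations are invented placeholders.
REGULATIONS = [
    {"jurisdiction": "Federal",    "contaminant": "copper", "limit_ug_per_L": 1300, "citation": "40 CFR 141.xx (placeholder)"},
    {"jurisdiction": "California", "contaminant": "copper", "limit_ug_per_L": 1000, "citation": "22 CCR 64xxx (placeholder)"},
    {"jurisdiction": "Federal",    "contaminant": "lead",   "limit_ug_per_L": 15,   "citation": "40 CFR 141.xx (placeholder)"},
]

def compare(contaminant: str) -> None:
    """Print all jurisdictions' limits for one contaminant, lowest limit first."""
    rows = [r for r in REGULATIONS if r["contaminant"] == contaminant]
    for r in sorted(rows, key=lambda r: r["limit_ug_per_L"]):
        print(f'{r["jurisdiction"]:<12} {r["limit_ug_per_L"]:>6} µg/L  {r["citation"]}')

compare("copper")
```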

Jay Kesan (University of Illinois Law School) demonstrated another text parsing and retrieval project aimed at unifying the various databases relevant to patent lawyers, including all the patents, court litigation, scientific publications, and patent file wrappers in the biomedical technology domain.

Harry Surden (University of Colorado School of Law) delved into what he calls “computable contracts,” referring to the trend in finance to embody contractual terms entirely as computer code. These “contracts” allow computers to understand the terms and generate real-time compliance assessments. His project assesses the conditions under which a broader array of contracting practices might move to this computable contract format and the implications of doing so.
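The idea is easiest to see in code: a contractual term becomes a data structure plus a function a machine can evaluate against live data. Here is a simplified sketch; the delivery term and its thresholds are invented for illustration, not one of Surden's examples.

```python
# Simplified sketch of a "computable contract" term: the obligation is encoded
# as data plus an evaluation function, so a machine can assess compliance in
# real time. The term and its numbers are invented for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class DeliveryTerm:
    quantity: int       # units the seller must deliver
    deadline: date      # delivery must occur on or before this date
    unit_price: float   # agreed price per unit

    def assess(self, delivered_qty: int, delivered_on: date) -> dict:
        """Return a machine-readable compliance assessment for this term."""
        return {
            "quantity_met": delivered_qty >= self.quantity,
            "on_time": delivered_on <= self.deadline,
            "amount_due": min(delivered_qty, self.quantity) * self.unit_price,
        }

term = DeliveryTerm(quantity=100, deadline=date(2015, 1, 31), unit_price=25.0)
print(term.assess(delivered_qty=100, delivered_on=date(2015, 2, 2)))
```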

Seth Chandler (University of Houston) gave us a deep dive into the Affordable Care Act with a demonstration of software he has developed to extract and evaluate a variety of important analytics from the database available at healthcare.gov.

David Lewis (Independent Consultant) outlined the use of predictive coding in e-discovery and presented the preliminary results of a study comparing the accuracy of manual human document review with computer-assisted predictive coding, based on a large (500K documents) real-world discovery event. The results suggest that predictive coding, while presenting challenges, has substantial promise.
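At its core, predictive coding is supervised text classification: attorneys tag a seed set of documents as responsive or not, a model learns from those tags, and its predictions are then validated for recall and precision. A stripped-down sketch with a toy corpus follows; real e-discovery workflows add statistical sampling and defensible validation protocols.

```python
# Stripped-down sketch of predictive coding as supervised text classification.
# The documents and labels are toy examples; real workflows use large seed sets,
# statistical sampling, and defensible validation rounds.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

seed_docs = [
    "Q3 revenue forecast attached per your request",
    "Lunch on Friday? The usual place",
    "Draft merger term sheet for review before the board call",
    "Fantasy football trade deadline reminder",
]
seed_labels = [1, 0, 1, 0]  # 1 = responsive, 0 = not responsive

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_docs, seed_labels)

unreviewed = ["Board call moved; revised term sheet to follow", "Company picnic RSVP"]
for doc, prob in zip(unreviewed, model.predict_proba(unreviewed)[:, 1]):
    print(f"{prob:.2f}  {doc}")  # probability the document is responsive
```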

Henry Smith (Harvard Law School) and Ted Sichelman presented work on legal entitlements illustrating the potential for legal computation to advance legal theory. Ted’s project carefully examines how legal entitlements can be represented in formal, computable logic models, and together they are developing a model for computing the “modularity” of real property entitlements using network analytics. By representing legal entitlements as networks of rights, duties, privileges, and powers, they propose a method for measuring the degree to which a property legal regime has departed from the state of fully unrestricted right to use and exclude.
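To make "modularity" concrete, here is a toy sketch that represents a handful of entitlements as a network and computes an off-the-shelf modularity score over it. The nodes, edges, and choice of measure are my simplifications, not the Smith/Sichelman formal model.

```python
# Toy sketch of representing legal entitlements as a network and computing a
# modularity score over it. The nodes, edges, and the off-the-shelf community
# modularity measure are simplifications, not the Smith/Sichelman model.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.Graph()
# Nodes are Hohfeldian incidents attached to a parcel; edges link incidents
# that constrain or depend on one another.
G.add_edges_from([
    ("right_to_exclude", "duty_not_to_trespass"),
    ("right_to_use", "privilege_of_quiet_enjoyment"),
    ("right_to_use", "zoning_restriction"),
    ("zoning_restriction", "power_to_seek_variance"),
    ("right_to_exclude", "easement_of_necessity"),
])

communities = greedy_modularity_communities(G)
print("clusters of entitlements:", [sorted(c) for c in communities])
print("modularity score:", round(modularity(G, communities), 3))
```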

Jack Conrad (Thomson Reuters R&D and President of the International Association for Artificial Intelligence and Law) explained the importance of the “use case” in developing applied uses of legal computation—i.e., what are you going to use this to do?—and also emphasized the importance of evaluating experimental efforts using standard test sets and metrics.

Last but by no means least, Roland Vogl of Stanford’s CodeX Center for Legal Informatics Skyped in an overview of what CodeX is doing to advance information retrieval technology, legal technology infrastructure, and computational law, as well as a review of some of the start-up incubation successes (Lex Machina, LawGives, Ravel Law, Judicata, etc.).

All in all, the workshop made two things abundantly clear for me: (1) legal computation has taken off and its horizons are boundless, and (2) San Diego in March is OK!

Learning from My Students in Law 2050

My Law 2050 class has moved into group presentations (format explained here), the first round being their assessments of new companies and business models emerging in the “new normal.” In two days of presentations so far, we’ve heard about a wide variety of fascinating developments: Axiom, QuisLex, Neota, MetricStream, Yusin & Irvine, Pangea, CEB, Clerky, Onit, MyCase, and Legal Outsourcing Partners. Also, one of my students, Christine Carletta, wrote an insightful description and assessment of Lex Machina as a post on the JETLaw blog for Vanderbilt’s Journal of Entertainment and Technology Law. I couldn’t be more pleased with how the students are engaging with their projects and the class in general!