
Category Archives: Technology and Law

Impact Scores for Disruptive Legal Technologies

This will date me, but I remember a day when law was practiced without computer-based Westlaw or Lexis, when legal technology consisted of the five essentials: a landline telephone, Dictaphone, IBM Selectric, light switch, and thermostat. Westlaw and Lexis were, from the late 1970s until 1986, accessed only via phone modem. I recall using the modem in law school, and then at my firm in the mid-1980s I experienced the miracle of using a computer to run simple searches. Life after that was not the same.

So this is not the first time legal practice has faced “disruptive technology.” But what exactly does that mean—disruptive technology? And how do we apply a metric to “disruptiveness”?

As many readers will know, the origins of the term stem from Harvard Business School Professor Clayton Christensen’s theory of disruptive and sustaining innovations. A disruptive innovation helps create a new market or industry and eventually disrupts an existing market or industry. In contrast, a sustaining innovation does not create new markets or industries but rather evolves existing ones to achieve better value.

Much of the commentary on new legal technologies has focused on the disruptive side of the equation, whereas many have a sustaining quality as well. Overall, however, I don’t find that dichotomy very useful for purposes of understanding and teaching how the new wave of legal technology will affect the practice of law and thereby affect the demand for lawyers. So this fall in my Law 2050 class my students and I disaggregated “disruptive” and “sustaining” to get more under the hood of how new technology platforms like Lex Machina, Legal Zoom, Ravel Law, and Neota Logic will change the way law is practiced. (We did so purely intuitively without dipping deeply into Christensen’s detailed theory or other business theory and commentary on the topic—so he and my colleagues at Vanderbilt’s Owen Graduate School of Management might cringe at what follows.) Modifying somewhat the typology we developed in class, below I use the introduction of Westlaw and the current play of Lex Machina to explain our typology and impact scoring system.

What is disruptive (and sustaining) about disruptive legal technology?

One way of thinking about how new technologies change the world is to ask a “technology native”—a person who has only known life with the technology—what his or her world would be like if the technology disappeared. For example, while I actually was able to get by years ago without Google (I am a Google “technology immigrant”), I can’t imagine my world without Google now, but I can remember one. So just think about a Google native—someone who has never seen life without Google! Ironically, with Westlaw and Lexis this is becoming increasingly less scary, as Google alone has supplanted them as the first search engine of choice for many legal searches. But let’s envision Westlaw and Lexis coming on line in the 1980s or disappearing in the 2010s and ask, so what, who cares, and why? In what ways is the world of lawyering different with or without them? I come up with five effects, each of which has a 20-point impact scale:

Quality enhancing impact: In the do it better, faster, and cheaper trilogy dominating the legal industry today, quality enhancing technology works on the delivery of better service. For example, Westlaw and Lexis vastly improved the accuracy of search results, such as “find cases from the federal courts in the Fifth Circuit that say X and Y but not Z.” Sure, a lawyer could have run key number headings in the books and read through legal encyclopedias, but the miss rate simply went down when Westlaw and Lexis came on line. So too, with its deep database of IP cases and filings and accessible research design, does Lex Machina improve the accuracy of searches about IP litigation, though at present it does not run broad substantive research searches. Scores: Westlaw and Lexis 18 (like a Russian skating judge, leaving room for some later contenders); Lex Machina 12

Efficiency enhancing impact: Anyone who has ever run key numbers in hard copy digests or Shepardized a case using the books will appreciate the efficiency enhancement Westlaw and Lexis provided—the “do it faster” component of today’s client demands. Similarly, although one could use the brute force of Westlaw or Lexis searches to assemble the results of a Lex Machina search about the IP litigation profile of a judge or patent, it’s a heck of a lot faster using Lex Machina. Scores: Westlaw and Lexis 18; Lex Machina 18.

Demand displacement effect: Assume a world in which the number and scope of client-driven legal searches does not change. In that case, the introduction of a new legal technology that has quality and efficiency enhancement effects is likely to displace demand for service in some sectors of the legal industry if the technology is a cost-effective competitor. For example, Westlaw and Lexis allowed better and faster legal searches, but had they not been priced to be cost-competitive with the old lawyer-intensive ways of doing legal searches, they would not have penetrated the market. Bottom line, there are fewer billable hours to go around. Given the success of Westlaw and Lexis in establishing their markets, one has to assign them the potential for this displacement effect. It’s much harder to tell with Lex Machina, because it’s not clear what the demand was for the information its type of searches provides prior to its availability. Scores: Westlaw and Lexis: 15; Lex Machina 8

Transformative effect: The opposite side of the coin is the potential a new technology has to open up new markets for legal tasks not previously possible or valued. For example, other than paying for a bespoke lawyer’s judgment about the profile of a particular court for IP litigation, I find it hard to believe many clients would have paid lawyers to perform the kinds of hyper-detailed big data litigation information searches Lex Machina makes possible about lawyers, courts, and patents. Even more so, some of the search techniques Westlaw and Lexis made possible would have been virtually impossible to replicate the old fashioned way with the books. To the extent these new capacities are valued—e.g., they lead to better litigation prediction and outcomes—they will increase demand for service. Hence the transformative effect can work to offset the displacement effect, meaning a new legal technology might increase the pool of billable hours. Scores: Westlaw and Lexis 15, Lex Machina 12

Destructive effect: All of the above discussion has assumed it will be lawyers using the new technology, which clearly will not always be the case—the new technology might reduce or eliminate the need for a lawyer at the helm. Some new technologies will provide user interfaces that do not require an attorney to operate. The rise of paralegals conducting research on Westlaw and Lexis is an example. Even more destructive are technologies like predictive coding, used in e-discovery to vastly reduce the need for lawyers, and online interfaces such as Legal Zoom, which sidesteps the Main Street lawyer altogether. My sense is that Westlaw and Lexis did not have so much destructive effect outside of pushing some work down to paralegals, and the same will hold true for Lex Machina. Scores: Westlaw and Lexis: 8; Lex Machina 8.

Total Impact Scores: Westlaw and Lexis 74; Lex Machina 58.
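For readers who want to play with the typology, the arithmetic behind the totals is nothing more than summing the five category scores, each on a 0-20 scale. Here is a minimal sketch (the category labels and helper function are my own shorthand, not part of any formal model):

```python
# Five effect categories, each scored on a 0-20 impact scale,
# using the scores assigned in the post above.
SCORES = {
    "Westlaw/Lexis": {
        "quality": 18, "efficiency": 18, "displacement": 15,
        "transformative": 15, "destructive": 8,
    },
    "Lex Machina": {
        "quality": 12, "efficiency": 18, "displacement": 8,
        "transformative": 12, "destructive": 8,
    },
}

def total_impact(category_scores: dict) -> int:
    """Total impact score: the sum of the five category scores (max 100)."""
    return sum(category_scores.values())

for tech, category_scores in SCORES.items():
    print(tech, total_impact(category_scores))
```

Running this reproduces the totals above: 74 for Westlaw and Lexis, 58 for Lex Machina.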

Of course, this is all meant to be a bit provocative and poke some at the overuse and misuse of the “disruptive technology” theme in our current legal world. As I said, it is not informed by formal business theory, nor do I have any empirical evidence to back up my scores. But the categories of effects seem on point and relevant to the discourse on impacts of new legal technologies, and the scores strike me as decent ballpark estimates. At the very least, I’ll have a model the students can use to dissect the legal technologies they choose to study in next fall’s Law 2050 class!

Forms of Bespoke Lawyering and the Frontiers of Artificial Intelligence

In Machine Learning and Law, Harry Surden of the University of Colorado Law School provides a comprehensive and insightful account of the impact advances in artificial intelligence (AI) have had and likely will have on the practice of law. By AI, of course, Surden means the “soft” kind represented mostly through advancement in machine learning. The point is not that computers are employing human cognitive abilities, but rather that if they can employ algorithms and other computational power to reach answers and decisions like those humans make, and with equal or greater accuracy and speed, it doesn’t matter so much how they get there. Surden’s paper is highly recommended for its clear and cogent explanation of the forms and techniques of machine learning and how they could be applied in legal practice.

Surden quite reasonably recognizes that AI, at least as it stands today and in its likely trajectory for the foreseeable future, can only go so far in displacing the lawyer. As he puts it, “attorneys, for example, routinely combine abstract reasoning and problem solving skills in environments of legal and factual uncertainty.” The thrust of Surden’s paper, therefore, is how AI can facilitate lawyers in exercising those abilities, such as by finding patterns in complex factual and legal data sets that would be difficult for a human to detect, or in enhancing predictive capacity for risk management and litigation outcome assessments.

What Surden is getting at, in short, is that there seems to be little chance in the near future that AI can replicate the “bespoke lawyer.” That term is used throughout the commentary on the “new normal” in legal practice (which is actually a “post normal” given we have not reached any sort of equilibrium). But it is not usually unpacked any further than that, as if we all know intuitively what bespoke lawyering is.

To take a different perspective on bespoke lawyering and the impact of AI, I suggest we turn Surden’s approach around by outlining what is bespoke about bespoke lawyering and then think about how AI can help. In the broadest sense, bespoke lawyering involves a skill set that draws heavily from diverse and deep experience, astute observation, sound judgment, and the ability to make decisions. Some of that can be learned in life, but some is part of a person’s more complex fabric—you either have it or you don’t. If you do have these qualities under your command, however, you have a good shot at attaining that bespoke lawyer status. Here’s a stab at breaking down what such a lawyer does well:

Outcome Prediction: Prediction of litigation, transaction, and compliance outcomes is, of course, what clients want dearly from their lawyers. On this front AI seems to have made the most progress, with outfits like Lex Machina and LexisNexis’s Verdict & Settlement Analyzer building enormous databases of litigation histories and applying advanced analytics to tease out how a postulated scenario might fare.

Analogical and evaluative legal search: Once that pile of search results comes back from Lexis or Westlaw (or Ravel Law or Case Text), the lawyer’s job is to sort through and find those that best fit the need. Much as it is used in e-discovery, AI could be employed to facilitate that process through machine learning. This might not be cost-effective, as often the selection of cases and other materials must be completed quickly and from relatively small sets of results. Also, the strength of fit is often a qualitative judgment, and identifying useful analogies, say between a securities case and an environmental law case, is a nuanced cognitive ability. Nevertheless, if a lawyer were to “train” algorithms over time as he or she engages in years of research in a field, and if all the lawyers in the practice group did the same, AI could very well become a personalized advanced research tool making the research process substantially more efficient and effective.
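To make the “train it over time” idea concrete, here is a toy sketch of a personalized relevance ranker: the lawyer labels past search results as useful or not, and a simple bag-of-words model learns to rank new results. Everything here (the sample texts, the scoring rule) is illustrative; a real system would use far richer features and a proper learning algorithm.

```python
from collections import Counter

def train(labeled_results):
    """labeled_results: list of (text, is_useful) pairs labeled by the lawyer."""
    useful, not_useful = Counter(), Counter()
    for text, is_useful in labeled_results:
        (useful if is_useful else not_useful).update(text.lower().split())
    return useful, not_useful

def relevance_score(text, model):
    useful, not_useful = model
    score = 0
    for word in text.lower().split():
        # Words seen in results the lawyer kept add to the score;
        # words seen in discarded results subtract from it.
        score += useful[word] - not_useful[word]
    return score

model = train([
    ("fraudulent transfer constructive trust remedy", True),
    ("fraudulent transfer badges of fraud", True),
    ("trademark dilution survey evidence", False),
])
results = ["constructive trust imposed on transfer", "survey evidence admissibility"]
ranked = sorted(results, key=lambda r: relevance_score(r, model), reverse=True)
print(ranked[0])  # the fraudulent-transfer hit ranks first
```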

Risk management: Whereas outcome prediction is usually a one-off call, managing litigation, transaction, and compliance outcomes over time requires a sense of how to identify and manage risk. Kiiac’s foray into document benchmarking is an example of how AI might enhance risk management, allowing evaluation of massive transactional regime histories for, say, commercial real estate developers, to detect loss or litigation risk patterns under different contractual terms.

Strategic planning: Lawyers engage extensively in strategic planning for clients. Where to file suit? How hard to negotiate a contract term? Should we disclose compliance information? Naturally, it would be nice to know how different alternatives have fared in similar situations. Here again, AI could be employed to detect those patterns from massive databases of transactions, litigation, and compliance scenarios.

Judgment (and judging): Judgment about what a client should do, or about how to decide a case when judging, involves senses not easily captured by AI, such as fairness, honesty, equity, and justice. The unique facts of a case may call for departure from the pattern of outcomes based on one of these sensibilities. Yet doctrines do exist to capture some of these qualities, such as equitable estoppel, apportionment of liability, and even departure from sentencing guidelines, and these doctrines exhibit patterns in outcomes that may be useful for lawyers and judges to grasp in granular detail. What is equitable or just, in other words, is not an entirely ad hoc decision. AI could be used to decipher such patterns and suggest how off the mark a judgment under consideration would be.

Legal reform: As I tell my 1L Property students, in almost every case we cover some lawyer was arguing for legal reform—a change in doctrine, a change in statutory interpretation, striking down an agency rule, and so on. And of course legislatures and agencies, when they are functional, are often in the business of changing the law. To some extent arguments for reform go against the grain of existing patterns, although in some cases they pick up on an emerging trend. They also rely heavily on policy bases for law, such as equity, efficiency, and legitimacy. In all cases, though, the argument has to be that there is something “broken” about continuing to apply the existing law, or to not invent new law, in the particular case or broader issue in play. AI might be particularly useful as a way of building that argument, such as by demonstrating a pattern of inefficient results from existing doctrine, or detecting strong social objection to an existing law.

Trendspotting: In my view the very best lawyers—the most bespoke—are those ahead of the game—the trendspotters. What is the next wave of litigation? Where is the agency headed with regulation? Which law or doctrine is beginning to get out of synch with social reality? Spotting these trends requires the lawyer to get his or her head outside the law. Here, I think, AI might be most effective in assisting the bespoke lawyer. A plaintiffs firm, for example, might use AI to monitor social media to identify trends highly associated with the advent of new litigation claims, such as people complaining on Twitter about a product. Similarly, this approach could be used to inform any of the lawyer functions outlined above.

Handling people: Ultimately, a top lawyer builds personal relationships with colleagues, peers, and clients. AI can’t help you do that, I don’t think, but by helping lawyers do all of the above it may free up time for a game of golf (tennis for me) with a client!

Law 2050 Students Take a Deep Dive into Neota Logic

Many, many years ago, when I was practicing environmental law with Fulbright & Jaworski in Austin, I was unfortunate enough to have a number of clients whose needs required that I master the EPA’s utterly convoluted definition of solid and hazardous waste. One summer I assigned a summer associate the task of flowcharting the definition. Over the course of the summer we debugged draft after draft until, finally, we had a handwritten flowchart that flawlessly worked any scenario through the definition step-by-step. It was ten legal-sized, taped-together pages long. It worked, but it wasn’t very practical.

If only we had had Neota Logic back then!  Last week, in my Law 2050 class, Kevin Mulcahy, Director of Education for Neota, demoed their product over the course of two classes and a 3-hour evening workshop.  Prior to the session I had assigned the class the exercise of flowcharting the copyright law of academic fair use. Each student prepared a flowchart and explained its logic, then six groups collaborated on final work products. I sent the group flowcharts to Kevin so he could use them to explain the Neota platform in a context familiar to the students.

Neota is a software program that allows the user to translate legal (or other) content into a user-friendly interactive application environment, much like TurboTax does for tax preparation. Neota allows the content expert to build the app with no coding expertise, with end products that are quite sophisticated in terms of what can be embedded in the app and how smoothly the app walks the user through the compliance logic. Example apps Kevin offered covered topics ranging from songwriter rights to Dodd-Frank compliance.

The first class period Kevin introduced Neota and then walked through each of the group flowcharts to analyze how each one broke down the fair use compliance problem. The core theme was how important it is to develop the output scenarios first. In the fair use exercise, there are several yes/no questions specific to educational uses, and then a multi-factored balancing test applies in the event none of those binary questions leads to a fair use outcome. Like any balancing test, this one yields a range of scenarios from very likely fair use to very likely not fair use. We spent a good deal of time thinking about how to design an app component to capture the balancing test.
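The structure we converged on (binary safe-harbor questions first, then the multi-factor balancing test with a range of output scenarios) can be sketched in a few lines of code. To be clear, this is my own illustrative toy, not Neota’s engine or actual fair use doctrine: the factor names loosely follow the four statutory fair use factors, but the weights and thresholds are invented for the example.

```python
# Hypothetical sketch of the fair use compliance logic from the exercise:
# binary educational-use questions first, then a four-factor balancing
# test if no binary question resolves the outcome. Weights and cutoffs
# are illustrative assumptions only.

def fair_use_outcome(binary_answers: dict, factor_scores: dict) -> str:
    # Any "yes" to a safe-harbor question short-circuits the analysis.
    if any(binary_answers.values()):
        return "likely fair use (educational safe harbor)"
    # Balancing test: each factor scored -2 (against) to +2 (favors fair use).
    total = sum(factor_scores.values())
    if total >= 3:
        return "very likely fair use"
    if total >= 1:
        return "likely fair use"
    if total <= -3:
        return "very likely not fair use"
    if total <= -1:
        return "likely not fair use"
    return "uncertain: balancing factors are in equipoise"

answers = {"classroom_copying": False, "face_to_face_teaching": False}
factors = {
    "purpose_and_character": 2,   # transformative, nonprofit educational
    "nature_of_work": 1,          # factual rather than creative
    "amount_used": -1,            # more than a short excerpt
    "market_effect": -1,          # some substitution for sales
}
print(fair_use_outcome(answers, factors))  # total = 1 -> "likely fair use"
```

The interesting design work, as we found in class, is all in defining those output scenarios and thresholds before wiring up the questions.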

In the evening workshop a group of 20 students acted as content experts to guide Kevin through the process of building the fair use app, much in the way a legal expert might work with a Neota software expert. The most striking learning experience from this session, besides the deep look under Neota’s hood, was how the process of building the app actually sharpened our fair use compliance logic. We tested various approaches for capturing the balancing test and conveying output scenarios with substantive explanations for the user.

The next day the entire class regrouped to go over the workshop product, allowing those who could not make the workshop due to conflicting classes the chance to get a good feel for both the flexibility and precision the Neota software offers. Thinking back to my perfectly accurate but impractical ten-page flowchart of the EPA’s waste definition, I could envision how that and many other tasks that required developing a compliance logic could have been leveraged into apps I could have shared with other attorneys in my firm as well as clients.

My Law 2050 students clearly got a lot out of the immersion in using Neota to attack a compliance logic problem. I can’t thank Kevin and Neota enough for the time he invested in preparing for and delivering what was an excellent hands-on and instructive workshop. By the way, the EPA now has an online decision tool for navigating through the waste definition. I think they might want to get in touch with Neota!

Riley Might Help: New Technology Aimed to Detect Texters Raises Privacy Concerns

Guest Post by 2050 student Catherine Moreton

Tech company ComSonics announced in September that it is developing a new type of radar gun that detects not speeding, but texting. ComSonics specializes in handheld radar devices used mostly by cable companies searching for emission leaks in broken wires. But at the second annual Virginia Distracted Driving Summit, ComSonics revealed that the same technology is being adapted to track radio frequencies emitted when a driver sends a text message. According to spokesperson Malcolm McIntyre, the device can distinguish between frequencies emitted by text messages and those emitted by phone calls or emails.

In a year that included the National Highway Traffic Safety Administration (“NHTSA”) launching its first-ever national advertising campaign against distracted driving and AT&T’s “It can wait” campaign going viral, overdue public awareness of the dangers of texting and driving has increased dramatically. This is wonderful news for road safety and the 44 states (plus D.C.) that have banned texting while driving. But at what cost should we allow police officers to enforce those statutes more directly?

While McIntyre says the radar gun is “close to production,” technological concerns range from how to pinpoint whether the driver or a passenger was the one texting to what to do about automatic response messages. The technology is also currently limited to SMS messages and cannot yet detect texts sent over Wi-Fi between iOS devices. Absent a safe harbor, the government might eliminate this boost for smartphone owners by using the Communications Assistance for Law Enforcement Act (“CALEA”) to require providers to enable detection.

Once those kinks are worked out, privacy law will take center stage. McIntyre insists that the technology cannot decrypt the content of the messages, and under conventional Smith v. Maryland wisdom, this distinction would limit the government’s Fourth Amendment liability. But in a 2012 concurrence, Justice Sotomayor began to poke holes in the applicability of Smith to cell phone cases, calling the third-party doctrine “ill suited to the digital age.”

Plus, even though a cell phone user never reasonably expects her metadata to be private, the best evidence that a driver was texting is the time-stamped text itself. And if police want a driver to hand over her phone and incidentally reveal its contents, two June 2014 Supreme Court rulings suggest they’re going to need a warrant.

This summer, Riley v. California and companion case United States v. Wurie made huge advances for individual data privacy rights regarding cell phones, requiring a warrant for police to search essentially any kind of cell phone. With those opinions, the Supreme Court granted digital devices full Fourth Amendment protection absent exigent circumstances.

It is yet to be seen how Riley will affect other privacy arguments that could challenge the radar guns. Kyllo v. United States (while distinguishable in that it had to do with a home, which carries the strongest expectation of privacy) could require a warrant until the radar guns are “in general public use.” Independently of Fourth Amendment causes of action, the Stored Communications Act (“SCA”) could provide a remedy for phones searched without a warrant, and the Pen Register Act could require a court order before the radar guns may be used at all.

The good news is that the Court, after some resistance, seems ready to embrace the challenges of the digital age by beginning to agree with Justice Scalia’s 2010 view that “applying the Fourth Amendment to new technologies may sometimes be difficult, but when it is necessary to decide a case we have no choice.”

Lawyers, Do Not Fail to Read “The Great Disruption”

For a concise but thorough and insightful summary of how machine learning technology will transform the legal profession, and a sobering prediction of the winners and losers, check out The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services. Written by John McGinnis of Northwestern University Law School and Russell Pearce of Fordham Law School, this is a no-nonsense assessment of where the legal profession is headed thanks to the really smart people who are working on really smart machines. The key message is to abandon all notion that the progress of machine learning technology, and its incursion into the legal industry, will be linear. For quite a while after they were invented, computers didn’t seem that “smart.” They assisted us. But the progress in computational capacity was moving exponentially forward all the time. It is only recently that computers have begun to go beyond assisting us to doing the things we do as competently as we do, or better (e.g., IBM’s Watson). The exponential progress is not going to stop here–the difference is that henceforth we will see computers leaving us behind rather than catching up.

The ability of machines to analyze and compose sophisticated text is already working its way into the journalism industry, and McGinnis and Pearce see law as the next logical target. They foresee five realms of legal practice as the prime domains for computers supplanting human lawyers: (1) discovery, which is well underway; (2) legal search technology advancing far beyond the Westlaw of today; (3) generation of complex form documents, such as Kiiac; (4) composing briefs and memos; and (5) predictive legal analytics, such as Lex Machina. All of these trends are well in motion already, and they are unstoppable.

All of this is a mixed bag for lawyers, as some aspects of these trends will allow lawyers to do their work more competently and cost-effectively. But the obvious underside of that is reduced demand for lawyers. So, who wins and who loses? McGinnis and Pearce identify several categories of winners (maybe the better term is survivors): (1) superstars who are empowered even more by access to the machines to help them deliver high stakes litigation and transactional services; (2) specialists in areas of novel, dynamic law and regulation subject to change, because the lack of patterns will make machine learning more difficult (check out EPA’s 645-page power plant emissions proposed regulation issued yesterday–job security for environmental lawyers!); (3) oral advocates, until the machines learn to talk; and (4) lawyers practicing in fields with high client emotional content, because machines don’t have personalities, yet. The lawyering sector hardest hit will be the journeyman lawyer writing wills, handling closings, reviewing documents, and drafting standard contracts, although some entrepreneurial lawyers will use the machines to deliver high-volume legal services for low and middle income clients who previously were shut out of access to lawyers.

Much of what’s in The Great Disruption can be found in longer, denser treatments of the legal industry, but McGinnis and Pearce have distilled the problem to its core and delivered a punchy, swift account like no other I’ve seen. I highly recommend it.

 

Big Data and Preventive Government: A Review of Joshua Mitts’ Proposal for a “Predictive Regulation” System

In Minority Report, Steven Spielberg’s futuristic movie set in 2050 Washington, D.C., three sibling “pre-cogs” are hooked up with wires and stored in a strange looking kiddie pool to predict the occurrence of criminal acts. The “Pre-Crime” unit of the local police, led by John Anderton (played by Tom Cruise), uses their predictions to arrest people before they commit the crimes, even if the person had no clue at the time that he or she was going to commit the crime. Things go a bit awry for Anderton when the pre-cogs predict he will commit murder. Of course, this prediction has been manipulated by Anderton’s mentor and boss to cover up his own past commission of murder, but the plot takes lots of unexpected twists to get us to that revelation. It’s quite a thriller, and the sci-fi element of the movie is really quite good, but there are deeper themes of free will and Big Government at play: if I don’t have any intent now to commit a crime next week, but the pre-cogs say the future will play out so that I do, does it make sense to arrest me now? Why not just tell me to change my path, or would that really change my path? Maybe taking me off the street for a week to prevent the crime is not such a bad idea, but convicting me of the crime seems a little tough, particularly given that I won’t commit it after all. Anyway, you get the picture.

As we don’t have pre-cogs to do our prediction for us, the goal of preventive government–a government that intervenes before a policy problem arises rather than in reaction to the emergence of a problem–has to rely on other prediction methods. One prediction method that is all the rage these days in a wide variety of applications involves using computers to unleash algorithms on huge, high-dimensional datasets (a/k/a/ Big Data) to pick up social, financial, and other trends.

In Predictive Regulation, Sullivan & Cromwell lawyer and recent Yale Law School grad Joshua Mitts lays out a fascinating case for using this prediction method in regulatory policy contexts, specifically the financial regulation domain. I cannot do the paper justice in this blog post, but his basic thesis is that a regulatory agency can use real-time computer assisted text analysis of large cultural publication datasets to spot social and other trends relevant to the agency’s mission, assess whether its current regulatory regime adequately accounts for the effects of the trend were it to play out as predicted, and adjust the regulations to prevent the predicted ill effects (or reinforce or take advantage of the good effects, one would think as well).

To demonstrate how an agency would do this and why it might be a good idea at least to do the text analysis, Mitts examined the Google Ngram text corpus for 2005-06, a word-frequency database built from an enormous number of books (it would take a person 80 years just to read the words from books published in 2000), searching for two-word phrases (bigrams) relevant to the financial meltdown: phrases like “subprime lending,” “default swap,” “automated underwriting,” and “flipping property,” words that make us cringe today. He found that these phrases were spiking dramatically in the Ngram database for 2005-06 and reaching very high volumes, suggesting the presence of a social trend. At the same time, however, the Fed was stating that a housing bubble was unlikely because speculative flipping is difficult in homeowner dominated selling markets and blah blah blah. We know how that all turned out. Mitts’ point is that had the Fed been conducting the kind of text analysis he conducted ex post, they might have seen the world a different way.
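The core of this kind of trend detection is simple to sketch: flag bigrams whose frequency in a given year spikes relative to a trailing baseline. The snippet below is my own illustration of the idea, not Mitts’ actual methodology, and the frequency numbers are made up; real inputs would come from the Ngram corpus.

```python
def spiking_bigrams(freqs: dict, year: int, window: int = 3, ratio: float = 2.0):
    """Return bigrams whose frequency in `year` exceeds `ratio` times
    the mean frequency over the preceding `window` years."""
    flagged = []
    for bigram, by_year in freqs.items():
        baseline = [by_year[y] for y in range(year - window, year) if y in by_year]
        if baseline and by_year.get(year, 0) > ratio * (sum(baseline) / len(baseline)):
            flagged.append(bigram)
    return flagged

# Illustrative per-year frequencies (arbitrary units), not real Ngram data.
freqs = {
    "subprime lending": {2002: 1.0, 2003: 1.1, 2004: 1.3, 2005: 4.8},
    "default swap":     {2002: 0.6, 2003: 0.7, 2004: 0.9, 2005: 3.1},
    "interest rate":    {2002: 9.0, 2003: 9.2, 2004: 9.1, 2005: 9.4},
}
print(spiking_bigrams(freqs, 2005))  # ['subprime lending', 'default swap']
```

A steady high-volume phrase like “interest rate” is not flagged; the signal is the departure from baseline, not the raw volume.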

Mitts is very careful not to overreach or overclaim in his work. It’s a well designed and executed case study with all caveats and qualifications clearly spelled out. But it is a stunningly good example of how text analysis could be useful to government policy development. Indeed, Mitts reports that he is developing what he calls a “forward-facing, dynamic” Real-Time Regulation system that scours readily available digital cultural publication sources (newspapers, blogs, social media, etc.) and posts trending summaries on a website. At the same time, the system also will scour regulatory agency publications for the FDIC, Fed, and SEC and post similar trending summaries. Divergence between the two is, of course, what he’s suggesting agencies look for and evaluate in terms of the need to intervene preventively.

For anyone interested in the future of legal computation as a policy tool, I highly recommend this paper–it walks the reader clearly through the methodology, findings, and conclusions, and sparks what in my mind is a truly intriguing set of policy questions. There are numerous normative and practical questions raised by Mitts’ proposal not addressed in the paper, such as whether agencies could act fast enough under slow-going APA rulemaking processes, whether agencies conducting their own trend spotting must make their findings public, who decides which trends are “good” and “bad,” appropriate trending metrics, and the proportionality between trend behavior and government response, to name a few. While these don’t reach quite the level of profoundness evident in Minority Report, this is just the beginning of the era of legal computation. Who knows, maybe one day we will have pre-cogs, in the form of servers wired together and stored in pools of cooling oil.


Twitter Made Me Do It! – New Legal Issues Emerging from Advances in the Science of Social Networks

Advances in neuroscience and genetics have opened up profound and difficult legal issues regarding individual behavior. For example, before her tragic death the late Jamie Grodsky published a set of stunningly good articles on the impacts of genetics science on environmental law and toxic torts, and my colleague at Vanderbilt, Owen Jones, heads a vast research project on neuroscience and the law.

But at the other end of the spectrum, rapid advances are also underway in how we understand crowd behavior, and there are legal issues waiting to boil over. Like many of the issues covered in Law 2050, these advances are the direct result of the Big Data-computation combo, in this case aimed at the science of social networks (and I’m not just talking about the NSA…uh-oh, probably by just saying that, they’ll start following my posts!). Of course we all know that Big Brother and even our friends and businesses are snooping through our social media. As the International Business Times reported earlier this week, for example, insurance companies scour claimants’ social media posts from the time of an accident to detect fraud, admissions of fault, and so on. My focus here is different–it’s on what we can learn about an individual from studying his or her social network behavior, not just what he or she communicates to it (see here for a great summary of legal issues surrounding the latter).

For example, researchers studying Weibo, China’s equivalent of Twitter, reached findings about the flow of emotions in social networks suggesting that anger spreads faster than joy. As they summarize their paper’s findings:

Recent years have witnessed the tremendous growth of the online social media. In China, Weibo, a Twitter-like service, has attracted more than 500 million users in less than four years. Connected by online social ties, different users influence each other emotionally. We find the correlation of anger among users is significantly higher than that of joy, which indicates that angry emotion could spread more quickly and broadly in the network. While the correlation of sadness is surprisingly low and highly fluctuated. Moreover, there is a stronger sentiment correlation between a pair of users if they share more interactions. And users with larger number of friends posses more significant sentiment influence to their neighborhoods. Our findings could provide insights for modeling sentiment influence and propagation in online social networks.
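
The paper’s core measurement–how strongly a given emotion correlates across pairs of connected users–can be sketched with a plain Pearson correlation. The per-user sentiment scores below are invented for illustration, not drawn from the study:

```python
# For each connected pair of users, (user_a_score, user_b_score) on one emotion.
# Scores are hypothetical values in [0, 1].
anger_pairs = [(0.9, 0.8), (0.7, 0.75), (0.2, 0.3), (0.85, 0.9), (0.1, 0.15)]
joy_pairs   = [(0.9, 0.3), (0.2, 0.6), (0.7, 0.4), (0.5, 0.9), (0.3, 0.5)]

def tie_correlation(pairs):
    """Pearson correlation of an emotion's intensity across network ties."""
    a, b = zip(*pairs)
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

print(f"anger tie correlation: {tie_correlation(anger_pairs):.2f}")
print(f"joy tie correlation:   {tie_correlation(joy_pairs):.2f}")
```

In the invented data, anger scores track closely across ties while joy scores do not, mirroring the study’s qualitative finding that anger is the more contagious emotion.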

It’s only a matter of time before clever lawyers start using similar techniques to inform questions of intent, motive, reputation, liability, and so on. For example, if it could be shown that a person’s social media network flared up with anger (e.g., hostile comments or rumors about a spouse) shortly before the person committed a crime, that could prove influential in determining motive. Similarly, social network analytics could be used to measure the reputation impact of alleged libel or slander, consumer confusion in trademark infringement claims, and market perceptions in shareholder derivative claims–basically, anything that involves crowd behavior. Of course, there will also be a swarm of related legal issues such as privacy, data breaches, and admissibility in legal proceedings. So, just as scientific advances at the genetic and brain level are fueling legal issues regarding the individual, so too are advances in the science of social networks likely to open up new legal issues regarding crowds as crowds, as well as their impacts on individuals.

What You Get When 45 Law Students Brainstorm About Legal Futures

Last week my Law 2050 class moved into a group project phase. I’ve divided the 45 students into six groups. Each group is exploring a pair of legal future topics grouped under two themes: (1) emerging legal technologies and practice models, and (2) future legal practice scenarios. The six paired topics are:

| Group | Tech/Industry Theme | Practice Scenario Theme |
|-------|---------------------|-------------------------|
| 1 | Outsourcing | Environment and energy |
| 2 | Legal process management | Social and demographic |
| 3 | Legal risk management | Economic and financial |
| 4 | Routinized and expert systems | Health and medicine |
| 5 | Legal prediction | Data and privacy |
| 6 | New legal markets | Other technologies |

Each group member prepared a proposed set of specific research projects fitting the group’s topics, and last week they pitched them to their groups. Each group selected 3-4 projects for each topic. They are exploring the viability of their tech/practice model selections and of their practice development selections. Later in the semester the groups will present their findings to the class as a whole.

Last week, the groups selected their final set of research projects and gave a quick summary to the class. I was quite impressed with the breadth and depth of their selections:

Future Practice Development Topics: synthetic organs, bitcoins, robotic surgery, student loan debt relief, cloud computing, Google Glass, 3-D printing, Dodd-Frank aftermath, crowdfunding, sea level rise, cybersecurity standards, carbon sequestration, space law & asteroid mining, virtual real estate, ocean-based power sources, biometric identification, water rights issues, genetically pre-fabricated children, natural disaster law, AI decision making, majority-minority America, same sex marriage, LGBTQIA rights, mass human migration, the sharing economy.

Legal Tech and Practice Models: QuisLex, Yuson & Irvine, LPO security breach issues, rebundling of LPO functions, My Case, Onit, Clerky, Axiom, Lex Machina, Casetext, Clearspire, Lawyer Up, Jury Verdict Analyzer, Kiiac, Neota Logic, healthcare compliance software.

I’m looking forward to what they have to say about each of these!

Decomposing Compliance Counseling

One of many useful insights Richard Susskind has delivered on legal industry transformation is the idea of “decomposing” legal practice into discrete components of work, which allows one to think more clearly about how to identify opportunities to make the delivery of legal services more efficient. He aims this approach only at litigation and transactions, however, leaving out the third major domain of legal practice–compliance counseling.

Compliance counseling is the neglected child in the legal practice family. Most law school course offerings emphasize litigation and transactions. Most law students decide soon into their second year that they want to do litigation or transactions. Most of the legal reinvention discourse is about litigation and transactions. But the reality is that there is a vast amount of legal work out there that is neither litigation nor transactions–it is compliance counseling. Believe me, I billed a lot of hours in this category as an environmental and land use lawyer, and there is no shortage of work like this in employee benefits, securities regulation, health care regulation, and the list goes on. It may not be as sexy as the courtroom or as glamorous as billion dollar deals, but it’s legal work so you can bet it’s going to be the target of optimization initiatives.

What is compliance counseling, and how would one “decompose” it to identify efficiency opportunities? The answer is not as clear as it is for litigation and transactions. Both litigation and transactions follow fairly standardized process paths. Litigation has its rules of procedure, and transactions center around the closing. Compliance counseling has nothing like that, and it comes in many forms. Yet, as my previous post on Neota + Littler reviewed, there clearly are opportunities to make compliance counseling more efficient, so it is worth devoting some thought to how to unpack what goes into it.

Neota + Littler = Smart Legal Innovation

There was an interesting news item last week about “Neota Logic…collaborating with Littler Mendelson, P.C., the world’s largest employment and labor law firm representing management, to power Littler’s new Healthcare Reform Advisor. The Advisor enables Littler’s most experienced employee benefits attorneys to counsel employers on complex issues under the Affordable Care Act.” This is the kind of teaming up between innovative legal technology developers and innovative law firms that “rethink” theorists Richard Susskind and Bruce MacEwen say is a must for the survival of many segments of the legal services industry. (Note: I have no association with Neota or Littler)

Neota Logic uses proprietary technology and software to enable legal experts to “deliver knowledge in an operationally useful form as expert systems that can be consulted interactively online or embedded directly in business systems.” Littler is what MacEwen calls a “category killer” law firm–very good at one thing and not trying to be anything else. Littler’s one thing is employment law. The firm’s “single focus on employment and labor law has created a cartel of attorneys whose knowledge of and experience in these areas of law is unsurpassed. With lawyers who practice in more than 36 areas of law, there is no employment issue a company has faced that hasn’t been addressed by one of Littler’s attorneys.”

The Health Care Reform Advisor the two firms have developed allows an employer to use an online interface to upload general information about employees and benefits and receive some basic feedback about HCR impacts. Think of TurboTax, but for navigating the HCR. Sure, it’s designed to lead employers who decide they need more counsel to contact Littler, but unlike the websites and blogs most firms use to the same end, this tool provides feedback specific to the user’s circumstances and educates the user about key HCR issues. It also signals that Littler knows its stuff and is in problem-solving mode.
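
The expert-system pattern behind tools like this can be sketched as interview answers feeding a small set of branching rules. Everything below is a hypothetical illustration–the function name, the simplified thresholds, and the messages are mine, not the Advisor’s actual logic, and certainly not legal advice:

```python
# Toy rule-based advisor in the spirit of a TurboTax-style interview tool.
# The rules are drastically simplified caricatures of ACA employer-mandate issues.
def hcr_advisor(full_time_employees: int, offers_coverage: bool) -> str:
    if full_time_employees < 50:
        return "Employer mandate likely does not apply (under 50 full-time employees)."
    if not offers_coverage:
        return "Potential shared-responsibility exposure; consider seeking counsel."
    return "Coverage offered; verify affordability and minimum-value requirements."

print(hcr_advisor(30, False))
print(hcr_advisor(120, False))
```

The real Advisor’s logic is far richer, but the general pattern–structured inputs flowing through encoded expert rules to produce tailored guidance–is the one Neota’s platform is built to generalize.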

I think of this as an example of how the term “disruptive technology,” which is hurled around liberally in “rethink” space, can misstate the case. Neota brings to the table a technology that enhances Littler–like any technology that has this potential, it’s only disruptive to the firms that don’t use it or something like it.

(My thanks to Marc Jenkins, formerly of the law firm Hubbard, Berry & Harris and e-discovery firm Hubbard & Jenkins, now with e-discovery software firm Cicayda, for alerting me to the story)