This Is a Relationship Business!

The second week of Law 2050 concluded with a panel of corporate general counsel, the day after the panel of law firm leaders. As a reminder, the participants, whom I thank profusely, were

Also participating on the law firms panel was Mike Duffy, King & Spalding’s first Director of Growth & Client Service, who added a fascinating dimension to the discussion. I think it is fair to say the students were riveted and will have lots to work with in their reaction papers.

I have enough notes from the panels to fill pages, so I plan to break my own reactions down into several posts over the next few weeks. Today’s has to do with a theme that cut across both panels: As one of the GCs put it, “This is a relationship business!” Now, I am sure everyone gets that in general–lawyers have to forge a relationship with their clients and vice versa–but what does that mean today, six years after 2008 hit the reset button? I’d have to say that the GCs panel was most emphatic about this, so I’ll start there:

  • It’s about the lawyer, not the firm. I have always maintained that the spike in lateral partner and practice group movement in the late 1980s, which I saw happen around me in my firm and which has not abated, was the beginning of the erosion of the “trusted firm” relationship. It has been replaced by what the GCs described as the “follow the lawyer” relationship, which places less weight on the firm and more on the individual.
  • Share the risk. The GCs want that individual to forge a partnership that involves new ways of sharing risk, as opposed to the old model of firms shifting all the risk to clients through the billable hour.
  • Help yourself by helping me. The outside lawyer also has to recognize that GCs and their in-house legal team “live with the client.” They are on call 24/7 and are under intense pressure to get answers fast and to perform like any other branch of the business. There is low tolerance in that environment for outside lawyers looking to make themselves look good–the real value comes in making the in-house lawyers look good to their C-suite colleagues and to have their back. Do that and it will work both ways.
  • Look around the room. One student asked how to begin to forge those kinds of relationships, and the best advice was to start by looking around the room–in other words, it starts in law school and continues into the years of associateship. All those GCs out there were once law students and associates too, so the person next to you in class or down the hall at the firm could very well later be in-house at one of your current or prospective clients.
  • Don’t fake it. The GCs also had low tolerance for outside counsel who go too far outside their or their firm’s wheelhouse to keep work. That costs clients time and money, and won’t help get return business.

My sense is that none of this would have surprised the law firm leaders panel, but that the harder question for them is how to forge and sustain this kind of relationship in a static market facing intense competition from within (other firms) and outside (new kinds of service providers). All the law firm leaders agreed it starts with firm differentiation, although if the “follow the lawyer” trend is real and growing, one has to wonder whether differentiation matters. The firm lawyers suggested it does if it achieves niche or strength differentiation, as that is likely to attract the best lawyers. But the most interesting angle on it came from Mike Duffy, a non-lawyer who came to King & Spalding from Ernst & Young. Among his many functions at the firm, one practice he has instituted is to interview not only clients who retain the firm but also those who decline to hire it, asking, in effect, what the firm did not bring to the table. That practice has to lead to some insight about how to get the relationships up and running next time.

To drive this message home, Mike Duffy also revealed to the class what he believes, based on years of observation, to be the traits of the most successful lawyers (more on which later). High on the list was “relationship skills.” Those do not come naturally to all people and are hard to teach in law school, but if I got anything out of the two panels (and I got a lot), it’s that this world of lawyering is, now more than ever, a relationship business.


An Evening With Some Really Smart People Working In Law+Tech

As many interested in Law 2050 topics will know, Nashville has the pleasure of hosting this year’s International Legal Technology Conference. I have not been able to attend much of it given the ironic detail that I have been teaching Law 2050 classes the same days as the conference. So it was a real treat to be invited to a dinner gathering to discuss the law+tech landscape along with several current and former Law 2050 students, other Vanderbilt Law students, and local legal community members.

Our hosts were Michael Dunn and Aria Safar of e-Stet, the California-based litigation technology company. Also present, and presenting tomorrow at the conference, was Noah Waisberg, founder of Diligence Engine, which has developed transaction due diligence review software. E-Stet treated us to an excellent Nashville hot chicken spread and opened an informal forum on the state of play and future of law+tech and its impact on the legal services industry. Although I can’t speak for anyone but myself, here’s my takeaway from the discussion:

  • Legal technology developments like those represented by e-Stet and Diligence Engine (and a fast-expanding universe of other developers) will make lawyers better and more efficient. Law is one of those professions in which making a mistake can be very, very costly, so why not reduce the risk of missing an important document or detail? The downside may be that efficiency cuts into hours billed, but the offsetting upside is that better lawyering results attract more work.
  • These advances in law+tech are going to flatten the legal services industry in two ways. First, they will make it more possible for lawyers to service the mid-tier market of consumers and small businesses. Firms that might in the past (and present) have seen their market as large corporations and wealthy individuals might very well be in a position to provide reasonable-cost services to those markets. Whether they will deign to do so is a different question. But one thing is for sure–if they don’t, someone will.
  • The other flattening effect of law+tech is that it levels the playing field between the AmLaw 50, concentrated as they are in New York, L.A., and other mega markets, and the major regional/city law firms. If you have a significant deal or piece of litigation in Nashville or Denver, why fly in lawyers from New York or L.A. when law+tech has made everyone better? The experiential advantage of spending 10 years working deals in New York etc. will erode as everyone, everywhere, has access to aggregated databases of deal documents and the computational analytics to crunch through them. Bespoke lawyering may still be more concentrated in a few major cities, but over time this trend could revolutionize the legal services industry, giving law grads and young lawyers even greater flexibility to combine a sophisticated legal practice with quality of life preferences.
  • I think I can speak for all present in concluding that law+tech is not headed in the direction of robot lawyers any time soon (speaking of which, here’s the program for a conference session on that tomorrow). Perhaps a substantial chunk of lawyering can be mechanized, commoditized, and computerized, but the bottom line is that life is complicated and as soon as a client’s preferences or needs depart a smidgen from the default context built into the “robot,” you need a human. But the human will use law+tech to provide a faster, better, more efficient outcome. Maybe the better way to think of it is lawyer+robot.

The most gratifying aspect of this fascinating evening (besides the ridiculously spicy hot chicken!) was seeing my students engage in the discussion at what I considered to be a high level of knowledge and insight. Most if not all of them are members of our Journal of Entertainment Law & Technology, and it was clear that their experience on the journal has paid off in terms of enhanced awareness of the trends in law+tech. Go Vandy!

Law 2050 Rides Again!

Summer is over and classes start today here at Vanderbilt Law School, which means Law 2050 is back in action! Later today I will ramp up the second year of the Law 2050 class and begin posting about it and topics of interest to legal futurists.

The first order of business is to thank the many wonderful people who have agreed to be guest speakers in the class. Like last year’s lineup, it’s an exceptional set of presenters. Their perspectives bring life to the class and enhance the student experience in so many ways. Today’s post is devoted to them–many thanks to you all!

Aug. 25: Guest Speaker Panel – Law firm leaders discuss the state of the practice

Aug. 26: Guest Speaker Panel – Corporate in-house counsel discuss the drivers of change

Sept. 9: Guest Speaker Panel – The globalization and consolidation of law firms

Sept. 29: Guest speaker – Larry Bridgesmith of ERM Legal Solutions: Introduction to legal process management

Sept. 30: Guest speaker – Marc Jenkins of Cicayda: Introduction to e-discovery and information technology

Oct. 6: Panel Discussion: Alternatives to BigLaw – What is their “new normal”?

Oct. 14: Demonstration of Lex Machina Legal Analytics

Oct. 20: Guest speaker – Zygmunt Plater of Boston College Law School: The future of environmental law

Oct. 27: Guest speaker – Michael Mills of Neota Logic: Introduction to Neota Logic compliance software

Nov. 17: Guest Speaker Panel – Law firm economics and advancement, big and small

Lawyers, Do Not Fail to Read “The Great Disruption”

For a concise but thorough and insightful summary of how machine learning technology will transform the legal profession, and a sobering prediction of the winners and losers, check out The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services. Written by John McGinnis of Northwestern University Law School and Russell Pearce of Fordham Law School, this is a no-nonsense assessment of where the legal profession is headed thanks to the really smart people who are working on really smart machines. The key message is to abandon all notion that the progress of machine learning technology, and its incursion into the legal industry, will be linear. For quite a while after they were invented, computers didn’t seem that “smart.” They assisted us. But the progress in computational capacity was moving exponentially forward all the time. It is only recently that computers have begun to go beyond assisting us to doing the things we do as competently as we do, or better (e.g., IBM’s Watson). The exponential progress is not going to stop here–the difference is that henceforth we will see computers leaving us behind rather than catching up.

The ability of machines to analyze and compose sophisticated text is already working its way into the journalism industry, and McGinnis and Pearce see law as the next logical target. They foresee five realms of legal practice as the prime domains for computers supplanting human lawyers: (1) discovery, which is well underway; (2) legal search technology advancing far beyond the Westlaw of today; (3) generation of complex form documents (e.g., Kiiac); (4) composing briefs and memos; and (5) predictive legal analytics (e.g., Lex Machina). All of these trends are well in motion already, and they are unstoppable.

All of this is a mixed bag for lawyers, as some aspects of these trends will allow lawyers to do their work more competently and cost-effectively. But the obvious underside of that is reduced demand for lawyers. So, who wins and who loses? McGinnis and Pearce identify several categories of winners (maybe the better term is survivors): (1) superstars who are empowered even more by access to the machines to help them deliver high stakes litigation and transactional services; (2) specialists in areas of novel, dynamic law and regulation subject to change, because the lack of patterns will make machine learning more difficult (check out EPA’s 645-page power plant emissions proposed regulation issued yesterday–job security for environmental lawyers!); (3) oral advocates, until the machines learn to talk; and (4) lawyers practicing in fields with high client emotional content, because machines don’t have personalities, yet. The lawyering sector hardest hit will be the journeyman lawyer writing wills, handling closings, reviewing documents, and drafting standard contracts, although some entrepreneurial lawyers will use the machines to deliver high-volume legal services for low and middle income clients who previously were shut out of access to lawyers.

Much of what’s in The Great Disruption can be found in longer, denser treatments of the legal industry, but McGinnis and Pearce have distilled the problem to its core and delivered a punchy, swift account like no other I’ve seen. I highly recommend it.

ABA Symposium Panelists Offer Some Sound Advice for Law Students and Young Lawyers

I had the pleasure of moderating a panel at the American Bar Association Section on Environment, Energy, and Resources (SEER) Annual Spring Symposium, held this year at Vanderbilt Law School last Friday, May 2nd. SEER Chair Bill Penny had the vision to build the symposium around the themes of the state and future of the practice, so it was a natural to host the event at Vanderbilt and I was glad to be a part of it.

My three panelists made for a powerhouse of energy, environmental, and resources practitioners: David Hill, Executive VP and GC of NRG Energy and former GC of the US Department of Energy; Ann Klee, VP of Environment, Health, and Safety at General Electric and former GC of the US EPA; and Janice Schneider, partner at Latham & Watkins in DC and just confirmed by the Senate the day before the symposium as Assistant Secretary of the Department of the Interior for Land and Minerals. Needless to say, I saw my job as moderator to be staying out of the way so my panelists could offer insight and advice, which they did in abundance. Here I’ll distill what they said that matters most to law students and young lawyers about navigating the turbulence of today’s legal practice world and building a practice:

Don’t Skip the Basics: While it is enticing to think of riding a new trend like 3D printing to capture its practice opportunities, all the panelists agreed they do not hire young lawyers to be trend-spotters—they hire young lawyers who are good lawyers. That means lawyers with relevant domain knowledge, the ability to write crisply and clearly, strong communication skills, the capacity to work well in groups, the ability to manage relationships with clients, regulators, competitors, and the public, and the rest of what goes into the foundation of good lawyering. And don’t be a jerk.

Follow Emerging Technologies: Once you have the basics down, what’s the best way to spot and capitalize on emerging trends? The panelists agreed that, at least for the energy, environmental, and resources practice areas, emerging technologies drive legal change. Three emerging technologies that got the most attention were nanomaterials, distributed energy, and 3D printing. Distributed energy technology, for example, will change the level of control energy consumers have over their energy profile, thus leading to profound changes in the energy utility and distribution industries that will demand new legal regimes.

Learn Something About How Businesses Operate: Whether your practice is in a firm, government, NGO, or in-house, business actions and decisions drive an enormous slug of legal practice in the US. So it can’t hurt a law student or young lawyer to learn a bit about how businesses operate. Take basic law courses in corporate law, mergers and acquisitions, finance, etc., and even take some classes in a business school while in law school.

The Rise of Private Governance: One theme that ran through my panel and a panel later in the day was the increasing importance of private regulation as a legal practice field. The example my panel gave was supply chain regulation, in which a company demands that upstream suppliers meet specified performance or product standards for environmental quality, embodied in contract terms, that often go above and beyond minimum standards established in public regulation. Not all regulatory practice, in other words, is about public regulation—your client’s customer might be its most aggressive regulator. (For more on this theme, see the work of my Vanderbilt colleague Mike Vandenbergh.)

Beware of Buzzwords: If you dream of being a “sustainability lawyer” or a “climate change lawyer,” the panelists had some sobering advice for you: they don’t hire “sustainability lawyers” or “climate change lawyers.” They hire lawyers with expertise in fields that are relevant to how their clients decide they need to respond to sustainability and climate change, in fields like air pollution, water pollution, endangered species, etc. Their advice was to build your expertise around relevant statutory regimes (Clean Air Act, Endangered Species Act, Federal Power Act, etc.) to best position yourself to assist a client that is developing or implementing its sustainability and climate change policies.

Embrace Serendipity: Resonating with one of my Law 2050 class themes, the panelists all agreed that, now more than ever, young lawyers need to jump on opportunities to deepen and diversify their expertise, including taking chances to try new practice fields and settings.

Big Data and Preventive Government: A Review of Joshua Mitts’ Proposal for a “Predictive Regulation” System

In Minority Report, Steven Spielberg’s futuristic movie set in 2050 Washington, D.C., three sibling “pre-cogs” are hooked up with wires and stored in a strange looking kiddie pool to predict the occurrence of criminal acts. The “Pre-Crime” unit of the local police, led by John Anderton (played by Tom Cruise), uses their predictions to arrest people before they commit the crimes, even if the person had no clue at the time that he or she was going to commit the crime. Things go a bit awry for Anderton when the pre-cogs predict he will commit murder. Of course, this prediction has been manipulated by Anderton’s mentor and boss to cover up his own past commission of murder, but the plot takes lots of unexpected twists to get us to that revelation. It’s quite a thriller, and the sci-fi element of the movie is really quite good, but there are deeper themes of free will and Big Government at play: if I don’t have any intent now to commit a crime next week, but the pre-cogs say the future will play out so that I do, does it make sense to arrest me now? Why not just tell me to change my path, or would that really change my path? Maybe taking me off the street for a week to prevent the crime is not such a bad idea, but convicting me of the crime seems a little tough, particularly given that I won’t commit it after all. Anyway, you get the picture.

As we don’t have pre-cogs to do our prediction for us, the goal of preventive government–a government that intervenes before a policy problem arises rather than in reaction to the emergence of a problem–has to rely on other prediction methods. One prediction method that is all the rage these days in a wide variety of applications involves using computers to unleash algorithms on huge, high-dimensional datasets (a/k/a Big Data) to pick up social, financial, and other trends.

In Predictive Regulation, Sullivan & Cromwell lawyer and recent Yale Law School grad Joshua Mitts lays out a fascinating case for using this prediction method in regulatory policy contexts, specifically the financial regulation domain. I cannot do the paper justice in this blog post, but his basic thesis is that a regulatory agency can use real-time computer-assisted text analysis of large cultural publication datasets to spot social and other trends relevant to the agency’s mission, assess whether its current regulatory regime adequately accounts for the effects of the trend were it to play out as predicted, and adjust the regulations to prevent the predicted ill effects (or, one would think, to reinforce or take advantage of the good effects as well).

To demonstrate how an agency would do this, and why it might be a good idea at least to do the text analysis, Mitts searched the Google Ngram text corpus for 2005-06, a word-frequency database built from an enormous number of books (it would take a person 80 years just to read the books published in 2000), for two-word phrases (bi-grams) relevant to the financial meltdown–phrases like “subprime lending,” “default swap,” “automated underwriting,” and “flipping property”–phrases that make us cringe today. He found that these phrases were spiking dramatically in the Ngram database for 2005-06 and reaching very high volumes, suggesting the presence of a social trend. At the same time, however, the Fed was stating that a housing bubble was unlikely because speculative flipping is difficult in homeowner-dominated selling markets and blah blah blah. We know how that all turned out. Mitts’ point is that had the Fed been conducting the kind of text analysis he conducted ex post, they might have seen the world a different way.
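For the curious, here is a minimal sketch of what that kind of retrospective check might look like in Python. It assumes a hypothetical local TSV extract of the Google Books Ngram data with (bigram, year, count) columns; the watch list, file name, and baseline window are my own stand-ins, not Mitts’.

```python
import csv
from collections import defaultdict

# Phrases that, in hindsight, flagged the housing bubble
WATCH_LIST = {"subprime lending", "default swap", "automated underwriting", "flipping property"}

def load_counts(path):
    """Read a hypothetical (bigram, year, count) TSV extracted from the Ngram data."""
    counts = defaultdict(dict)
    with open(path, newline="") as f:
        for bigram, year, count in csv.reader(f, delimiter="\t"):
            if bigram in WATCH_LIST:
                counts[bigram][int(year)] = int(count)
    return counts

def spike_ratio(series, window=(2005, 2006), baseline=(1995, 2004)):
    """Compare average frequency in the window of interest against a trailing baseline."""
    base = [series.get(y, 0) for y in range(baseline[0], baseline[1] + 1)]
    recent = [series.get(y, 0) for y in range(window[0], window[1] + 1)]
    base_avg = (sum(base) / len(base)) or 1  # avoid dividing by zero
    return (sum(recent) / len(recent)) / base_avg

if __name__ == "__main__":
    for phrase, series in load_counts("bigrams.tsv").items():  # hypothetical extract
        print(f"{phrase}: {spike_ratio(series):.1f}x baseline")
```

A real system would of course normalize for total publication volume and smooth the series, but the basic move is just this: count, compare to a baseline, and flag the divergence.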

Mitts is very careful not to overreach or overclaim in his work. It’s a well designed and executed case study with all caveats and qualifications clearly spelled out. But it is a stunningly good example of how text analysis could be useful to government policy development. Indeed, Mitts reports that he is developing what he calls a “forward-facing, dynamic” Real-Time Regulation system that scours readily available digital cultural publication sources (newspapers, blogs, social media, etc.) and posts trending summaries on a website. At the same time, the system also will scour regulatory agency publications for the FDIC, Fed, and SEC and post similar trending summaries. Divergence between the two is, of course, what he’s suggesting agencies look for and evaluate in terms of the need to intervene preventively.

For anyone interested in the future of legal computation as a policy tool, I highly recommend this paper–it walks the reader clearly through the methodology, findings, and conclusions, and sparks what is in my mind a truly intriguing set of policy questions. There are numerous normative and practical questions raised by Mitts’ proposal not addressed in the paper, such as whether agencies could act fast enough under slow-going APA rulemaking processes, whether agencies conducting their own trend spotting must make their findings public, who decides which trends are “good” and “bad,” appropriate trending metrics, and the proportionality between trend behavior and government response, to name a few. While these don’t reach quite the level of profoundness evident in Minority Report, this is just the beginning of the era of legal computation. Who knows, maybe one day we will have pre-cogs, in the form of servers wired together and stored in pools of cooling oil.

Racing with the Legal Computation Machine at the Inaugural Center for Computation, Mathematics, and the Law Workshop

I took a deep dive last week into the world of legal computation, to see just how far it has come, where it is going, and how transformative it will be as a force in legal thought and practice. I was provided this opportunity as a participant in the inaugural workshop of the University of San Diego Law School’s new Center for Computation, Mathematics, and the Law (CCML). (Before going into the details, let me add that if one is going to attend a workshop, USD is one heck of a nice place to do it! To emphasize the point, and to highlight the impact the CCML already is having, the International Conference on Artificial Intelligence and Law has selected USD as the site for its 2015 annual meeting.) Ted Sichelman and Tom Smith at USD Law are the founders and directors of the CCML, and the workshop will rotate annually between USD and the University of Illinois Law School, where patent law expert Jay Kesan will coordinate the program.

By way of disclaimer, I have to emphasize that I am not a Comp Sci guy. My math ended with Calculus II, my stats ended with multivariate regression, and my coding ended with SPSS and Fortran, and all are in the distant past. To say the least, therefore, the workshop was a humbling experience, as I was reminded at every turn that I was not the smartest guy in the room! So I approached the workshop through the eyes of Law 2050—I don’t need to know how to code to know how the end product works and to assess its potential to influence legal theory and practice. From that perspective, the workshop revealed an astounding and exciting array of developments. All of the presentations were tremendously well done; here is a taste of those that resonated most with the Law 2050 theme:

Paul Ohm (University of Colorado Law School) presented a fascinating study of how to parse the U.S. Code text to extract instances of defined terms. While at the workshop, he coded a software search engine that instantaneously returns links to all provisions in the Code defining a particular term. I tried it—it works!
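To give a flavor of the task (this is my own toy illustration, not Paul’s code), a few lines of Python with a regular expression can already pull defined terms out of the standard statutory formula “the term ‘X’ means”:

```python
import re

# U.S. Code definition sections typically follow the formula: The term "X" means ...
DEFINITION_PATTERN = re.compile(
    r'[Tt]he term ["\u201c](?P<term>[^"\u201d]+)["\u201d]\s+(?:means|includes)'
)

def extract_defined_terms(section_text):
    """Return every term a block of statutory text explicitly defines."""
    return [m.group("term") for m in DEFINITION_PATTERN.finditer(section_text)]

sample = (
    'For purposes of this chapter, the term "navigable waters" means the waters '
    'of the United States. The term "point source" includes any discernible, '
    'confined and discrete conveyance.'
)
print(extract_defined_terms(sample))  # ['navigable waters', 'point source']
```

A working search engine like Paul’s adds indexing and linking back to the Code sections, but the extraction step looks roughly like this.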

Dan Katz (Michigan State University Law School) presented his research team’s ongoing work on a classification algorithm for predicting affirm/reverse outcomes of U.S. Supreme Court decisions. Previous work on this front (Ruger et al., 2004) pitted expert lawyers against a classification tree algorithm applied to one year of Court decisions, with the computer’s accuracy outperforming the experts by 75% to 58%. Dan’s team applied a more advanced “random forests” classification approach to the last 50 years of Court decisions and maintained accuracy levels of 70%.
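For readers who want to see the shape of such a model (the features and outcomes below are random stand-ins, not the Katz team’s actual dataset or pipeline), a random forest classifier in scikit-learn takes only a few lines:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in case features (issue area, lower-court direction, petitioner type, etc.)
# and stand-in outcomes; a real model would use coded Supreme Court case variables.
rng = np.random.default_rng(0)
X = rng.integers(0, 15, size=(1000, 6))
y = rng.integers(0, 2, size=1000)  # 1 = reverse, 0 = affirm

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```

With random stand-in data the accuracy hovers near a coin flip; the interesting work is in engineering the case features so the forest can do better than the experts.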

Kincho Law (Stanford Civil Engineering) presented a robust text parsing and retrieval project designed to allow the user to extract and compare regulations pertaining to specific topics. For example, if the user is interested in water toxicity regulations for a particular contaminant, the program identifies and compares federal and state regulations on point. His team also has embedded a plethora of information into many of the regulations (e.g., links to relevant regulatory documents) and has encoded formal logic statements for many of them, allowing the user to treat the regulations as a true set of code.

Jay Kesan (University of Illinois Law School) demonstrated another text parsing and retrieval project aimed at unifying the various databases relevant to patent lawyers, including all the patents, court litigation, scientific publications, and patent file wrappers in the biomedical technology domain.

Harry Surden (University of Colorado School of Law) delved into what he calls “computable contracts,” referring to the trend in finance to embody contractual terms entirely as computer code. These “contracts” allow computers to understand the terms and generate real-time compliance assessments. His project assesses the conditions under which a broader array of contracting practices might move to this computable contract format and the implications of doing so.
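Here is a toy sketch, assuming nothing about Surden’s actual implementations, of what it means for a contract term to be “computable”: the obligation lives as data, and a program, not a person, checks performance against it.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PaymentTerm:
    """A single machine-readable obligation from a hypothetical contract."""
    payer: str
    payee: str
    amount: float
    due: date

def is_satisfied(term, payments):
    """Let the computer, not a lawyer, decide whether the obligation was performed."""
    return any(
        payer == term.payer and payee == term.payee
        and amount >= term.amount and paid_on <= term.due
        for payer, payee, amount, paid_on in payments
    )

term = PaymentTerm("BuyerCo", "SellerCo", 1_000_000.00, date(2014, 6, 30))
ledger = [("BuyerCo", "SellerCo", 1_000_000.00, date(2014, 6, 15))]
print(is_satisfied(term, ledger))  # True -- compliance assessed in real time
```

Scale that idea up to thousands of interlocking terms in a derivatives book and you can see why finance got there first.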

Seth Chandler (University of Houston) gave us a deep dive into the Affordable Care Act with a demonstration of software he has developed to extract and evaluate a variety of important analytics from the database available at healthcare.gov.

David Lewis (Independent Consultant) outlined the use of predictive coding in e-discovery and presented the preliminary results of a study comparing human manual document review and computer predictive coded e-discovery accuracy based on a large (500K documents) real-world discovery event. The results suggest that predictive coding, while presenting challenges, has substantial promise.
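The core of predictive coding is supervised text classification: humans code a seed set, a model learns from it, and the machine ranks the rest of the collection for review. A minimal sketch with scikit-learn (my own toy documents, not the study’s data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A tiny seed set coded by human reviewers (1 = responsive, 0 = not)
seed_docs = [
    "merger agreement draft attached for review",
    "lunch order for the team meeting",
    "due diligence checklist for the acquisition",
    "holiday party planning thread",
]
seed_labels = [1, 0, 1, 0]

unreviewed = [
    "revised acquisition term sheet",
    "parking validation instructions",
]

vec = TfidfVectorizer()
model = LogisticRegression().fit(vec.fit_transform(seed_docs), seed_labels)

# Rank the unreviewed documents by predicted probability of responsiveness
scores = model.predict_proba(vec.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

Real workflows iterate this loop (review the top-ranked documents, add them to the seed set, retrain), which is where the accuracy gains over linear manual review come from.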

Henry Smith (Harvard Law School) and Ted Sichelman presented work on legal entitlements illustrating the potential for legal computation to advance legal theory. Ted’s project carefully examines how legal entitlements can be represented in formal, computable logic models, and together they are developing a model for computing the “modularity” of real property entitlements using network analytics. By representing legal entitlements as networks of rights, duties, privileges, and powers, they propose a method for measuring the degree to which a property legal regime has departed from the state of fully unrestricted right to use and exclude.
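To make the “modularity” idea concrete, here is a purely illustrative sketch, not the authors’ model, that treats a tiny bundle of property relations as a graph and scores how cleanly it partitions into communities using networkx:

```python
import networkx as nx
from networkx.algorithms import community

# Toy network: nodes are parties, edges are right/duty relations tied to a parcel.
G = nx.Graph()
G.add_edges_from([
    ("Owner", "Neighbor"),        # right to exclude / duty not to enter
    ("Owner", "Tenant"),          # lease: right to possess / duty to pay rent
    ("Tenant", "Subtenant"),
    ("Neighbor", "Utility"),      # easement for power lines
    ("Utility", "Municipality"),
])

# Partition the entitlement network and score its modularity; a higher score
# suggests the bundle of rights decomposes into more self-contained modules.
parts = community.greedy_modularity_communities(G)
print(community.modularity(G, parts))
```

The Smith and Sichelman project is far richer than this, but the intuition is the same: represent the entitlements as a network and let network metrics say something about the structure of the property regime.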

Jack Conrad (Thomson Reuters R&D and President of the International Association for Artificial Intelligence and Law) explained the importance of the “use case” in developing applied uses of legal computation—i.e., what are you going to use this to do?—and also emphasized the importance of evaluation of experimental efforts using standard test sets and metrics.

Last but by no means least, Roland Vogl of Stanford’s CodeX Center for Legal Informatics Skyped in an overview of what CodeX is doing to advance information retrieval technology, legal technology infrastructure, and computational law, as well as a review of some of the start-up incubation successes (Lex Machina, LawGives, Ravel Law, Judicata, etc.).

All in all, the workshop made two things abundantly clear for me: (1) legal computation has taken off and its horizons are boundless, and (2) San Diego in March is OK!

Designing a Law 2050 Law School Curriculum

One of the final assignment prompts for my Law 2050 class asked the students to write a memo to the Law School Curriculum Committee recommending “how to innovate the curriculum to respond to the ‘new normal’ in the legal industry and best position students to enter and succeed in legal practice over the first 10 years of their careers.” I received 45 very thoughtful and comprehensive responses. Recall that Law 2050 could best be described as a boot camp on the “new normal,” exploring everything from outsourcing to legal tech to how to make a practice out of Google Glass, so these students were primed to go on the topic of what to include in the curriculum beyond a survey course like Law 2050. Here’s my synthesis of what they would like to tell the Curriculum Committee.

First, four proposed new course cluster themes dominated the student proposals, with well over 80 percent of the papers proposing courses in two or more of these clusters:

  • Legal Process/Project/Program Management: Students want to know more about efficient management of legal processes (e.g., due diligence), discrete projects (e.g., drafting a contract), and broad programs (e.g., managing origination of hundreds of similar contracts). This theme also includes suggestions for courses on E-discovery and Legal Risk Management, which draw on routinized and efficient process techniques.
  • Legal Technologies and Technologists: Michael Mills’ presentation of Neota Logic’s flowcharting technology platform was one of the smash hits of the class, and a good number of students presented law-tech and big data companies for their case studies, such as Lex Machina and Tymetrix. Students want to understand what these emerging technologies do, how they work, and even how to design them. This theme also includes suggestions for courses on Legal Software and Coding and Legal Computation and Analytics, as well as a number of suggestions that the law-tech theme be designed around some type of clinical delivery model.
  • Legal Entrepreneurism and Startup: Although much of the discussion of the “new normal” dwells on Big Law, plenty of class time focused on innovative legal startups such as Ravel Law and Casetext, as well as on how legal innovations can better support other industry entrepreneurs and startups. This theme also included many suggestions for a clinical setting, such as teaming up with business incubators.
  • Legal Business Management and Innovation:  With all the emphasis on “more for less” and “disruption,” students expressed a strong demand for courses they described as Law Firm Management and Finance, the Future of Legal Practice, Alternative Legal Services Business Models, Solo and Small Firm Practice, and similar themes.

Beyond these four dominant themes, which I am happy to say are being integrated into the offerings at Vanderbilt, quite a number of other innovations popped out of the papers, including:

  • As courses like the above are integrated into the curriculum, design a Legal Technology and Management Certificate
  • Push some of the content of Law 2050 themes into the 1L year
  • Offer a course focusing on nontraditional legal jobs, such as legal process management, legal risk management, and regulatory compliance systems
  • Offer a course on Comparative Legal Services Regulation
  • Offer a course on Legal Leadership
  • Include regulatory compliance flowcharting exercises in more classes
  • Integrate the law-tech issues into the Professional Responsibility course
  • Develop a year-long speaker series picking up on many of the Law 2050 themes

Finally, many students included proposals which, while not fitting within the Law 2050 scope directly, are consistent with the theme, heard over and over again, that they need to hit the ground running (or at least walking a lot faster than my peers and I were when we graduated!). The dominant topics in this category were:

  • Expand extern and clinic offerings, and even make taking one mandatory
  • Require each student to take at least two “skills” designated courses
  • Include courses and training on pre-trial and other non-trial litigation skills, such as taking depositions, interviewing clients, communicating with courts and other counsel, reading records, etc.
  • Offer a course on understanding how businesses operate, how they make the “legal buy” decision, and how they manage their legal operations
  • Offer a class on “behavioral practice skills” such as case evaluation, legal communications, and risk assessment and communication to clients
  • Offer more and broader transactional document drafting courses
  • Offer a three-year JD/MBA
  • Offer a course like Harvard’s Legal Problem Solving workshop
  • Offer a more practice-oriented advanced Legal Writing course covering short answer memos, white papers, client letters, letters to opposing counsel, drafting interrogatories and document requests, etc.

Overall, I found this set of papers impressive in terms of the attention my students gave to the exercise and their creative, thoughtful suggestions. I was also gratified to think that my class sparked such a depth of interest in learning more about the topics fitting under the Law 2050 roof. With this kind of student effort and input coming in the first offering of the class, I’m looking forward to the Fall 2014 class even more.

Beyond Black Swans: The Dragon Kings of Climate Change

Weird stuff happens. Sometimes really weird stuff happens. And sometimes freaky weird stuff happens–the kind of events that just don’t fit the imaginable.

Nassim Nicholas Taleb’s 2007 book The Black Swan: The Impact of the Highly Improbable had a huge impact on our understanding of weird and really weird events. The essence of Taleb’s Black Swan theory:

What we call here a Black Swan (and capitalize it) is an event with the following three attributes.

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme ‘impact’. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.

I stop and summarize the triplet: rarity, extreme ‘impact’, and retrospective (though not prospective) predictability. A small number of Black Swans explains almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives.

The key to preparing for Black Swans rests in understanding the way events are statistically distributed over time. Unlike the normal distribution found in many phenomena, such as SAT scores, other phenomena follow what is known as a power law distribution, with many small events and few large events. Think forest fires. Black Swan provided a compelling account of the problem of over-relying on normal distributions to explain the world. For problems defined by “fat tail” power laws that have outlier events way out on the tail one would not find on a normal distribution, sooner or later an event at the end of that tail is going to hit, and it’s going to be big. So, planning for some policy problem based on a normal distribution can lead to under-preparation if in fact the problem follows a power law distribution.
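A quick simulation makes the point. In the sketch below (numpy, my own illustration), a million draws from a normal distribution essentially never stray more than ten times their median from zero, while a fat-tailed power law (Pareto) sample does so about half a percent of the time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

samples = {
    "normal": np.abs(rng.normal(size=n)),        # thin-tailed
    "power law": rng.pareto(2.0, size=n) + 1,    # classic Pareto, a fat tail
}

for name, sample in samples.items():
    threshold = 10 * np.median(sample)           # an "extreme" event for that distribution
    share = np.mean(sample > threshold)
    print(f"{name}: {share:.6f} of draws exceed 10x the median")
```

Plan for the normal world and the power law world will eventually hand you an event you never budgeted for.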

[Figure: a normal distribution compared with a power law distribution]

Well, here’s the thing–it’s worse than that. A recent article by Didier Sornette of the Department of Physics and Earth Science at ETH Zurich, Dragon-Kings, Black Swans and the Prediction of Crises, discusses what he calls “life beyond power laws,” meaning “the existence of transient organization [of a system] into extreme events that are statistically and mechanistically different from the rest of their smaller siblings.” In short, he documents the existence of “genuine outliers,” events which don’t even follow the power law distribution. (In the power law graph shown above, sprinkle a few dots way out to the right of the chart and off the line.) The Black Swan event isn’t really an outlier, in other words, because it follows the power law and is simply an event way out on the tail. Genuine outliers violate the power law–they are even “wilder” than what would be predicted by the extrapolation of the power law distributions in their tails. A classic example is Paris–whereas all the populations of all other cities in France map well onto a power law, Paris is a genuine outlier. But Sornette documents that other such outliers exist in phenomena as varied as financial crashes, materials failure, turbulent velocities, epileptic seizures, and earthquakes. He calls such events Dragon Kings: dragon for “different kind of animal” and king to refer to the wealth of kings, which historically has been an outlier violating power law distributions of the wealth of their citizens. (Dragon Kings are also mythical Chinese shapeshifting deities ruling over water, as well as the name of some pretty good Chinese restaurants in cities around the U.S. according to Yelp.)

[Figure: Dragon King events appearing as outliers beyond the power law tail]

So, what causes Dragon Kings? Sornette’s theory is complex, but boils down largely to instances when, for whatever reason, all of the feedback mechanisms in a system harmonize in one coupled, self-reinforcing direction. Massive outlier earthquakes, for example, are the result of “interacting (coupled) relaxation threshold oscillators” within the earth’s structure, and massive outlier financial crashes are the result of  “the unsustainable pace of stock market price growth based on self-reinforcing over-optimistic anticipation.”

What’s the lesson? The key to Dragon Kings is that they are the result of the same system properties that give rise to the power law, but violate the power law because those properties have become arranged in such a way as to create severe instability in the system–a systemic risk of failure. When all feedback in the system has harmonized in the same self-reinforcing direction, a small, seemingly non-causal disruption to the system can lead to massive failure. As Sornette puts it: “The collapse is fundamentally due to the unstable position; the instantaneous cause of the collapse is secondary.” His assessment of the financial crash, for example, is that, like other financial bubbles, it was over time “the expectation of future earnings rather than the present economic reality that motivate[d] the average investor.” What pops the bubble might seem like an inconsequential event in isolation, but it is enough to set the collapse in motion. “Essentially, anything would work once the system is ripe.” And the financial system keeps getting ripe, and the bubbles larger, because humans are essentially the same greed-driven creatures they were back centuries ago when the Tulip Bubble shocked the world, but the global financial system allows for vastly larger resources to be swept into the bubble.

The greater concern for me, however, lies back in the physical world, with climate change. Sornette did not model the climate in his study, because we have never experienced and recorded the history of a genuine outlier “climate bubble.” But the Dragon King problem could loom. We don’t really know much about how the global climate’s feedback systems could rearrange as temperatures rise.  If they were to begin to harmonically align, some small tipping point–the next tenth of a degree rise or the next ppm reduction in ocean water salinity–could be the pin that pops the bubble. That Dragon King could make a financial crisis look like good times….

The New Jersey Supreme Court Discovers Ecosystem Services, Just in Time for Climate Change

Valuing ecosystem services—the streams of benefits functioning ecosystems provide to human populations—has become a powerful theme in natural resources management research and policy, but not so much yet in hard law to apply. The problem has not been with the ecosystem services that are obvious and well registered in markets—crops, recreation, timber, and water supply to name a few. We have plenty of law surrounding services like those. Rather, ecosystem services such as groundwater recharge by wetlands, storm surge protection by coastal dunes, and pollination by wild honeybees are not bought and sold in markets and thus suffer from a classic Tragedy of the Commons dilemma. People get that these are valuable benefits in a big picture sense, but incorporating these values in law—whether in protective regulations, performance standards, incentives, or in core principles of property law—has proven difficult. Yet with climate change looming as a threat to property in general—increased flooding, drought, storm surges, and other threats are not far into the future—it seems that there would be some urgency to incorporating ecosystem services ideas into property law.

One big step in that direction has come from a recent decision by the New Jersey Supreme Court regarding how much compensation beachfront owners are due when the state plops sand dunes on their property. See Borough of Harvey Cedars v. Karan, 70 Atlantic Rep. 524 (NJ 2013). Like many states, New Jersey (with federal help) spends considerable money shoring up the shore, so to speak, by importing sand to beaches subject to erosion. Sometimes these projects go further, in the form of constructing massive dunes on the beach to, in the court’s words, “serve as a barrier-wall, protecting homes and businesses…from the destructive fury of the ocean.” In other words, the idea is to create or supplement the dune ecosystem to enhance the flow of one very valuable ecosystem service—stopping storm surges. And after Hurricane Sandy, there’s not a person in New Jersey who doesn’t get that.

Well, maybe there are a few. There’s another ecosystem service that’s pretty valuable to beachfront owners—their view of the beach! You can see the problem already—higher dunes mean less view. So when the federal, state, and local governments embarked on a dune project on Long Beach Island, some property owners resisted. The project involved purchasing perpetual easements from the beachfront owners and constructing a 22-foot dune system the length of the beach. The local borough was more than willing to provide compensation for the easement, and most property owners were happy to have the dunes. One couple, however, decided not to sell. The borough exercised its power of eminent domain and took the easement from them anyway. Things got interesting when it came time to decide how much “just compensation” was due to the property owners.

This situation involves what is called a “partial taking” of property. If the borough had taken title to the entire property, the owners and the government would have argued over the fair market value of the entire parcel, which while contestable is fairly easy to determine within a reasonable range the same way appraisers estimate home values for loans. It’s trickier when the government is taking only part of the property (in this case the easement), because one has to determine the value of what was taken as well as the impact on the value of what remains. For over a century, New Jersey law allowed the government to offset the losses to the property owners for that “remainder” (in this case the diminished view) with the benefits the owners receive from the public project that required the partial taking (in this case the protection from the ocean), but only if the benefits were “special benefits” the owner received independent of the “general benefits” the project provides to the public at large. At the trial level in the case, the trial court ruled that the protection benefits from the dune project were general benefits, which meant the jury could not include them as offsets. Under that approach, the jury awarded the owners $375,000, and the appellate court affirmed. As is easy to imagine, if the government had to pay every beachfront owner a sum like that–and there were a lot of owners who refused to participate in the project–the project would have been dead in the water (no pun intended). (Note: I’m going to stay away from the part of the story involving public vilification of the recalcitrant owners, like when Governor Christie called them “knuckleheads.”)

The New Jersey Supreme Court turned the case into an opportunity to ditch the outdated special benefits/general benefits doctrine. After a very careful review of the history and policy of the doctrine, the court concluded that “the terms special and general benefits do more to obscure than illuminate the basic principles governing the computation of just compensation in eminent domain cases.” Instead, the court ruled, “just compensation should be based on non-conjectural and quantifiable benefits, benefits that are capable of reasonable calculation at the time of the taking.”
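To see what the new rule means in dollars, here is a back-of-the-envelope sketch with made-up numbers (the breakdown is mine; only the $375,000 trial award comes from the case):

```python
# Hypothetical figures, purely to illustrate the arithmetic the Karan rule calls for.
easement_value = 30_000              # fair market value of the strip taken for the dune
lost_view_damage = 345_000           # diminished value of the remainder from the blocked view
storm_protection_benefit = 370_000   # reasonably calculable value of the dune's protection

# Old rule: a "general" benefit like storm protection could not offset the loss.
old_rule_award = easement_value + lost_view_damage            # 375,000

# Karan rule: offset any non-speculative, quantifiable benefit (but never below zero).
new_rule_award = max(0, easement_value + lost_view_damage - storm_protection_benefit)  # 5,000

print(old_rule_award, new_rule_award)
```

The point of the illustration is simply that once the protective value of the dune counts as an offset, the award shrinks dramatically.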

From there the court made some rather obvious but refreshing observations about the dune project, as in “without the dune, the probability of serious damage or destruction to the [owners’] property increased dramatically over a thirty-year period,” and thus it is “likely that a rational purchaser would place a value on a protective barrier that shielded his property from partial or total destruction.” Seriously, this is not rocket science—if you want your house standing in 30 years, deal with the dunes!

The court sent the case back to the trial court with instructions that “at that trial, the Borough will have the opportunity to present evidence of any non-speculative, reasonably calculable benefits that inured to the advantage of the [owners’] property at the time of the taking.” In other words, calculate the value of the ecosystem services the dunes provide to beachfront owners. That trial never took place, however, because the parties settled – the borough paid the owners one dollar in compensation (and covered $24,000 of their attorneys’ fees). One can reasonably assume the property owners saw the writing on the wall.

The Karan case is a huge development for the law of ecosystem services. Not only did the court recognize the inherent value of the dunes, it gave that value firm legal status. One can anticipate many public infrastructure projects in the future as part of climate change adaptation, many of which will require use of or impacts to private property. As with the Long Beach Island dune project, one can hope that many of these infrastructure projects will rely on restoration, enhancement, or creation of natural ecosystems such as dunes, wetlands, and riparian habitat. Certainly just compensation will be due to the property owners, but at least in New Jersey the calculation of just compensation will include recognition of and valuation of the ecosystem services provided by those ecosystem-based projects.
