
Category Archives: Legal Complexity

The Biases Percolating Algorithms: Will AI Facilitate Disparity and Discrimination?

By Emily Lamm

Cryptically crafted and living behind the façade of technology, algorithms have escaped the standards we hold ourselves to.  The allure of coding and quantum computing arouses a sense of intrigue and elevates the status of the underlying algorithms. Yet, this charm should not obscure the fact that the authority afforded to technology is constructed and highly sensitive to context.  For instance, when a deep-learning neural network is introduced to an incongruous object––an elephant within a living room––its predictions unravel and objects it had previously detected correctly are misidentified.  These types of errors are not uncommon, but they do take on forms far more sinister than an elephant-triggered kerfuffle. High-profile examples include LinkedIn’s platform showing high-paying job ads to men more frequently than women, and law enforcement officials and judges relying upon patently racist AI-powered tools.

On one hand, the United States has developed a robust body of laws combating discrimination.  The Equal Protection Clause of the Fourteenth Amendment and Title VII of the Civil Rights Act have been paramount, and the Americans with Disabilities Act of 1990 is considered an immense success in protecting individuals with qualifying disabilities.  On the other hand, the United States has no such analogue to offer protection from algorithmic bias.  In effect, algorithms––just one step removed from humans––have escaped the rule of law despite being a reflection (or manifestation) of the implicit values of the very humans who created them.

Now, just because there is no general legislation or regulatory scheme to control for algorithmic bias doesn’t mean there won’t be one soon.  Other countries have filled this gap by implementing data protection regimes. In due time, perhaps with a change of administration, we will begin to see a drastically different approach to Artificial Intelligence.  Although Americans have been rather lackadaisical about data privacy (often trading their Facebook information for a quiz predicting what their child will look like), they have been quick to advocate against discrimination.  Just look to the sweeping nature of the civil, women’s, and LGBT rights movements.  Accordingly, there are numerous initiatives––launched by the likes of Facebook, IBM, Google, and Amazon––researching algorithmic bias and announcing tools to bolster AI fairness.

Lawyers are also not immune from the mysterious nature of algorithms.  Indeed, most litigators interface with them regularly. Every time we run a search in Lexis Advance or Westlaw, the results we see are the product of algorithms hard at work behind the scenes.  Recently, Fastcase gave users the option to adjust its research algorithms through factors like relevancy and authoritativeness.  Although this tool appears to have little influence upon the results generated, it is responsive to a growing demand for algorithmic accountability.  Undoubtedly, lawyers today must embrace and implement technology in order to remain at the forefront of the industry.  Nevertheless, lawyers must also continue to be skeptical, discerning, and autonomous thinkers who refuse to grow complacent with inadequate technology.

As the United States citizenry grows increasingly diverse, technology’s “black box” must begin to encompass an intersectional awareness that accounts for the vast array of identities its users embody.  Ensuring that technology is implemented and monitored responsibly should be at the forefront of everyone’s mind.  Whether it be lobbying for new legislation or updating corporate policies, the time is ripe to seriously consider the role of law in algorithmic bias.

Resilience

Resilience theory has become a dominant framework across many disciplines, from engineering to ecology. Resilience is formally defined as “The capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure and feedbacks, and therefore identity, that is, the capacity to change in order to maintain the same identity” (Folke et al. 2010). In the theoretical model, “engineering” resilience refers to building in hard barriers to disturbances, such as a concrete seawall to fight off big storms, whereas “ecological” resilience refers to methods that bend more but bounce back, such as enhancing coastal wetlands to take the brunt of the storm.

Ten years after the Great Recession swept through the economy like a big storm, we can ask, how resilient was the legal services industry and how resilient is it today?  This gets us deeper into what goes into resilience. There are five attributes, with some trade-offs at play:

  • Reliability: The parts of the system have to perform as expected, and the system has to perform if a part fails
  • Efficiency: The system should minimize waste and perform as expected even in times of resource scarcity
  • Scalability: The system can perform as expected even as its scale increases or decreases
  • Modularity: The system can rearrange and replace its parts to respond to disturbance
  • Evolvability: The system can make changes necessary to perform as expected over long time frames

Engineering resilience is often associated with boosting reliability and efficiency, whereas ecological resilience is often more about working on scalability, modularity, and evolvability. You can quickly see where some of the trade-offs could complicate matters. For example, to build scalable and modular features in a system may require redundancy of parts, which may not always promote efficiency. Optimal efficiency would build in just the right amount of redundancy to keep the system resilient, but knowing how much that is can be a challenge.

Looking back on it, I’d say the legal services industry was pretty resilient to the Great Recession. So-called Big Law is back on the rise when measured by revenues and profits, albeit still less so than before the recession. And the emergence of significant new forms of legal services providers, such as UnitedLex and Integreon, and an array of new technology solutions suggests that the legal services industry is building modularity and scalability in order to evolve. And there are other positive signs, such as increasing employment and increasing law school applicants. Bottom line: contrary to all the “death of lawyers” rhetoric at the beginning of this decade, it didn’t happen—the industry was resilient. Yes, it has changed, but change to some degree is a hallmark of evolvability, an essential ingredient of resilience.  The question is whether it has maintained the same identity, and I would say for the most part, it has.

But how resilient is it now? What if another recession even half as bad as 2008 hit the economy in two years? The concern may be that the legal services industry, and Big Law in particular, has been so driven by the efficiency goal that it has dispensed with too much redundancy to take another head-on blow like that. A concrete seawall may provide more immediate protection than a coastal wetland, but when it blows out, it’s ugly. In short, keep an eye on continuing to build scalability, modularity, and evolvability too.

Ruhl, Katz, and Bommarito publish “Harnessing Legal Complexity” in Science

I am pleased to announce the publication in Science, the journal of the American Association for the Advancement of Science, of an article I co-authored with Dan Katz and Mike Bommarito, Harnessing Legal Complexity. The summary from Science:

Complexity science has spread from its origins in the physical sciences into biological and social sciences. Increasingly, the social sciences frame policy problems from the financial system to the food system as complex adaptive systems (CAS) and urge policy-makers to design legal solutions with CAS properties in mind. What is often poorly recognized in these initiatives is that legal systems are also complex adaptive systems. Just as it seems unwise to pursue regulatory measures while ignoring known CAS properties of the systems targeted for regulation, so too might failure to appreciate CAS qualities of legal systems yield policies founded upon unrealistic assumptions. Despite a long empirical studies tradition in law, there has been little use of complexity science. With few robust empirical studies of legal systems as CAS, researchers are left to gesture at seemingly evident assertions, with limited scientific support. We outline a research agenda to help fill this knowledge gap and advance practical applications.

More information is available at the Science online site. Working with Dan and Mike, two of the leading figures in the application of complexity science and artificial intelligence techniques in law (see their Computational Legal Studies site), was an immense pleasure. Now, onward with the legal complexity research agenda!

Can AI Make AI Obey the Law?

Amitai Etzioni, the famous sociologist, and his son Oren Etzioni, the famous computer scientist, have posted an intriguing paper on SSRN, Keeping AI Legal. The paper starts by outlining some of the many legal issues that will spin out from the progression of artificial intelligence (AI) in cars, the internet, and countless other devices and technologies–what they call “smart instruments”–given the ability of the AI programming to learn as it carries out its mission. Many of these issues are familiar to anyone following the bigger AI debate–i.e., whether it is going to help us or kill us, on which luminaries have opined both ways–such as who is liable if an autonomous car runs off the road, or what if a bank loan algorithm designed to select for the best credit risks based purely on socially acceptable criteria (income, outstanding loans etc.) begins to discriminate based on race or gender. The point is, AI smart instruments could learn over time to do things and make decisions that make perfect sense to the AI but break the law. The article argues that, given this potential, we need to think more deeply about AI and “the legal order,” defined not just as law enforcement but also as including preventive measures.

This theme recalls a previous post of mine on “embedded law”–the idea that as more and more of our stuff and activities are governed by software and AI, we can program legal compliance into the code–for example, to make falsifying records or insider trading impossible. Similarly, the Etzionis argue that the operational AI of smart instruments will soon be so opaque and impenetrable as to be essentially a black box in terms of sorting out legal concerns like the errant car or the discriminatory algorithm. Ex ante human intervention to prevent the illegality will be impossible in many instances, because the AI is moving too fast (see my previous post on this theme), and ex post analysis of the liabilities will be impossible because we will not be able to recreate what the AI did.

The Etzionis’ solution is that we need “AI programs to examine AI programs,” which they call “AI Guardians.” These AI Guardians would “interrogate, discover, supervise, audit, and guarantee the compliance of operational AI programs.” For example, if the operational AI program of a bank called in a customer’s loan, the AI Guardian program would check to determine whether the operational program acted on improper information it had learned to obtain and assess. AI Guardians, argue the Etzionis, would be superior to humans given their speed, lower cost, and impersonal interface.
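
To make the division of labor concrete, here is a minimal sketch in Python of the guardian idea as I read it. Everything in it is invented for illustration: the operational model, the feature names, and the weight threshold are hypothetical stand-ins, not anything drawn from the Etzionis’ paper.

```python
# Illustrative sketch only: the "operational AI" and its feature weights
# are invented stand-ins, not anyone's real lending model.

PROTECTED_ATTRIBUTES = {"race", "gender", "zip_code_proxy"}

def operational_ai(loan_record):
    """Opaque operational model: decides whether to call in a loan and
    (conveniently for this sketch) reports the features it weighed."""
    features_used = {"income": 0.40, "outstanding_loans": 0.35, "zip_code_proxy": 0.25}
    decision = "call_in" if loan_record["risk_score"] > 0.7 else "keep"
    return decision, features_used

def ai_guardian(decision, features_used, weight_limit=0.05):
    """Audit one decision: flag it when protected attributes (or proxies
    for them) carried more than a token share of the weight."""
    suspect = {f: w for f, w in features_used.items()
               if f in PROTECTED_ATTRIBUTES and w > weight_limit}
    if decision == "call_in" and suspect:
        return False, suspect   # block the action or escalate to a human
    return True, {}

decision, features = operational_ai({"customer_id": 42, "risk_score": 0.82})
approved, flagged = ai_guardian(decision, features)
print(decision, "cleared" if approved else f"flagged: {flagged}")
```

The catch, which the Etzionis acknowledge, is that a real operational model will not hand over its reasoning this neatly; the guardian has to interrogate a black box rather than read a tidy dictionary of weights.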

I get where they are coming from, but I see some problems. First of all, many determinations of legality or illegality depend on judgment calls–balancing tests, the reasonable person standard, etc. If AI Guardians are to make those calls, then necessarily they will need to be programmed to learn, which leads right back to the problem of operational AI learning to break the law. Maybe AI Guardians will learn to break the law too. Perhaps for those calls the AI Guardian could simply alert a human compliance officer to investigate, but then we’ve put humans back into the picture. So let’s say that the AI Guardians only enforce laws with bright-line rules, such as don’t drive over 50 mph. Many such rules have exceptions that require judgment to apply, however, so we are back to the judgment call problem. And if all the AI Guardians do is prevent violations of bright-line rules with no exceptions, it’s not clear they are an example of AI at all.

But this is not what the Etzionis have in mind–they envision that “AI Guardians…will grow smarter just as operational AI programs do.” The trick will be to allow the AI Guardians to “grow smarter” but prevent the potential for them as well to cross the line. The Etzionis recognize that this lurking “Who will guard the guardians” question exists even for their AI Guardians, and propose that all smart instruments have a “readily locatable off switch.” Before long, however, flipping the off switch will mean more than turning off the car–it will mean turning off the whole city!

All of this is yet more Law 2050 food for thought…  


Is the 21st Century Going to Be One Ginormous Long-Tail Event?

In Book of Extremes: Why the 21st Century Isn’t Like the 20th Century, Ted Lewis builds the case that the 21st century is likely to become a morass of extreme events unlike any prior century in terms of magnitude and frequency. The core theme of the book is that the world has entered an era of unprecedented network scope and connectedness, which, while offering us all sorts of advantages like social media and global trade (if you think those are benefits), exposes society to massive cascading failures.

Lewis is clearly wired into complexity science, network analysis, and data science. He’s held a variety of positions in academia, industry, and publishing, and spins out a fascinating account of how all those and other disciplines are necessary to even begin to understand what is happening in the world today. He pulls from the internet, marine shipping, climate change, the financial system, and wealth concentrations to argue that we have gone well past the “tipping point” of exposure to black swan events and worse (see my prior posts on systemic risk and dragon kings). Although I disagree with Lewis’s assessment of prior centuries as essentially flat, linear, and relatively free of global networks and extreme events – anyone who thinks so should read Distant Mirror and 1493 – the evidence he amasses regarding the breadth, tightness, and impact of today’s interlinked social, economic, political, and technological networks is impressive. These networks of networks, while robust in one sense, are fragile in others—fragile in ways that can lead to extreme outlier failures. One example Lewis offers is the global shipping trade, which is a complex network linking lanes and ports and which depends disproportionately on just three ports (Hong Kong, Shanghai, and Los Angeles), so much so that failure of any one of those ports can bring down the whole network (which then cascades to other networks such as finance).
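
To see how that kind of hub dependence plays out, here is a toy sketch in Python using the networkx library. The topology is invented for illustration and is not Lewis’s data; it simply gives each smaller port a single hub and then knocks one hub out.

```python
# Toy illustration of hub fragility; the ports and links are invented.
import networkx as nx

G = nx.Graph()
hubs = ["Hong Kong", "Shanghai", "Los Angeles"]
spokes = [f"Port {i}" for i in range(1, 16)]

# Each smaller port connects through one hub; the hubs connect to each other.
for i, port in enumerate(spokes):
    G.add_edge(port, hubs[i % 3])
G.add_edges_from([("Hong Kong", "Shanghai"),
                  ("Shanghai", "Los Angeles"),
                  ("Hong Kong", "Los Angeles")])

def connected_share(graph):
    """Share of ports still in the largest connected piece of the network."""
    return len(max(nx.connected_components(graph), key=len)) / graph.number_of_nodes()

print("intact network:", connected_share(G))                # 1.0
G.remove_node("Shanghai")                                    # one hub fails
print("after hub failure:", round(connected_share(G), 2))    # ~0.71: nearly a third of ports cut off
```

One hub failure strands nearly a third of the ports in this toy version, and that is before counting any knock-on effects in finance or supply chains.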

These massive networks also can produce behaviors that appear unusual and counter-intuitive. For example, although social media networks theoretically connect everyone around the world and should produce convergence and harmony, there is evidence they are more an agent of fragmentation. Consistent with Lewis’s theme, Curtis Hougland explains in a post today on the Wharton School’s website how social media allow people who have been assembled according to conventional ordering (nations, religions, employment, education) to reassemble according to other personal affinities, thus cutting across traditional boundaries such as nation states. “Social media provides both an organizing tool through its ability to structure and facilitate communication and an organizing principle in the way people gravitate toward the extreme. In this way, social media accelerates political unrest like a giant centrifuge, spinning faster and faster and spitting out those who disagree.”

Book of Extremes provides an excellent, albeit fast and furious, tour through network analysis, complex adaptive systems, data science, and an array of other disciplines. Lewis uses metaphors such as waves, flashes, sparks, booms, bubbles, shocks, and bombs to tie the science to real-world contexts with scads of historical and modern examples. His bottom line is that governments and individuals need to start taking big “leaps” to avoid continuing down the spiral leading to cascade failures, including more instances of private initiatives not waiting for government to lead, the way SpaceX has launched itself (pun intended).

So, what does this mean for law? For starters, if Lewis is right, get ready for a century of unprecedented demand on the legal system. Law students and young lawyers, watch trends, anticipate disruption, and think hard about what pressures these will place on the legal system to produce solutions, protect rights, and adapt new legal doctrines. You can help shape how law responds, and you can be the first to “jump on it” with thoughtful analysis and reasoned proposals for legal action. In short, think Law 2025!

Network Analysis In Law 2014 Conference

The second Network Analysis in Law conference will take place in Krakow, Poland, December 10-12. The call for papers outlines intriguing lines of research about legal networks:

We invite papers and demonstrations of original works on the following aspects of network analysis in the legal field:

  1. Analysis and visualization of networks of people and institutions: law is made by people, about and for people and institutions. These people or institutions form networks, be it academic scholars, criminals or public bodies, and these networks can be detected, mapped, analysed and visualised. Can we better study institutions and their activities by analysing their internal structure or the network of their relations? Does it help in finding the ‘capo di tutti i capi’ in organized crime?
  2. Analysis and visualization of the network of law: law itself forms networks. Sources of law refer to other sources of law and together constitute (part of) the core of the legal system. In the same way as above, we can represent, analyse and visualise this network. Can it help in determining the authority of case law or the likelihood that a decision will be overruled? Does it shed light on complex or problematic parts of legislation? Is it possible to exploit network visualization to support legal analysis and information retrieval?
  3. Combination of the first and second aspects: people or institutions create sources of law or appear in them, so research on the network of one may shed light on the other. Two examples: (1) Legal scholars write commentaries on proposed legislation or court decisions. Sometimes they write these together. These commentaries may provide information on the network of scholars; the position of an author in the network of scholars may provide information on the authority of the comment. (2) The application of network analysis techniques to court decisions and proceedings is proving to be helpful in detecting criminal organizations and in analysing their structure and evolution over time.
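
The first of those questions (who is the ‘capo di tutti i capi’?) is at bottom a centrality problem. A minimal sketch in Python, using the networkx library and an invented co-offending network rather than any real investigative data:

```python
# Invented co-offending network: an edge means two people appear together
# in the same case files. High betweenness suggests who brokers the group.
import networkx as nx

edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"),
         ("A", "E"), ("D", "E"), ("E", "F"), ("C", "G")]
G = nx.Graph(edges)

for person, score in sorted(nx.betweenness_centrality(G).items(),
                            key=lambda kv: kv[1], reverse=True):
    print(person, round(score, 2))
```

Betweenness is only one of several measures a real study would compare, but it captures the intuition: the person who sits on the most paths between everyone else is worth a closer look.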

I wish I could go!

Big Data and Preventive Government: A Review of Joshua Mitts’ Proposal for a “Predictive Regulation” System

In Minority Report, Steven Spielberg’s futuristic movie set in 2054 Washington, D.C., three sibling “pre-cogs” are hooked up with wires and stored in a strange-looking kiddie pool to predict the occurrence of criminal acts. The “Pre-Crime” unit of the local police, led by John Anderton (played by Tom Cruise), uses their predictions to arrest people before they commit the crimes, even if the person had no clue at the time that he or she was going to commit the crime. Things go a bit awry for Anderton when the pre-cogs predict he will commit murder. Of course, this prediction has been manipulated by Anderton’s mentor and boss to cover up his own past commission of murder, but the plot takes lots of unexpected twists to get us to that revelation. It’s quite a thriller, and the sci-fi element of the movie is really quite good, but there are deeper themes of free will and Big Government at play: if I don’t have any intent now to commit a crime next week, but the pre-cogs say the future will play out so that I do, does it make sense to arrest me now? Why not just tell me to change my path, or would that really change my path? Maybe taking me off the street for a week to prevent the crime is not such a bad idea, but convicting me of the crime seems a little tough, particularly given that I won’t commit it after all. Anyway, you get the picture.

As we don’t have pre-cogs to do our prediction for us, the goal of preventive government–a government that intervenes before a policy problem arises rather than in reaction to the emergence of a problem–has to rely on other prediction methods. One prediction method that is all the rage these days in a wide variety of applications involves using computers to unleash algorithms on huge, high-dimensional datasets (a/k/a Big Data) to pick up social, financial, and other trends.

In Predictive Regulation, Sullivan & Cromwell lawyer and recent Yale Law School grad Joshua Mitts lays out a fascinating case for using this prediction method in regulatory policy contexts, specifically the financial regulation domain. I cannot do the paper justice in this blog post, but his basic thesis is that a regulatory agency can use real-time computer assisted text analysis of large cultural publication datasets to spot social and other trends relevant to the agency’s mission, assess whether its current regulatory regime adequately accounts for the effects of the trend were it to play out as predicted, and adjust the regulations to prevent the predicted ill effects (or reinforce or take advantage of the good effects, one would think as well).

To demonstrate how an agency would do this and why it might be a good idea at least to do the text analysis, Mitts examined the Google Ngram text corpus for 2005-06–a word-frequency database built from an enormous number of books (it would take a person 80 years just to read the words from books published in 2000)–searching for two-word phrases (bi-grams) relevant to the financial meltdown–phrases like “subprime lending,” “default swap,” “automated underwriting,” and “flipping property”–words that make us cringe today. He found that these phrases were spiking dramatically in the Ngram database for 2005-06 and reaching very high volumes, suggesting the presence of a social trend. At the same time, however, the Fed was stating that a housing bubble was unlikely because speculative flipping is difficult in homeowner-dominated selling markets and blah blah blah. We know how that all turned out. Mitts’ point is that had the Fed been conducting the kind of text analysis he conducted ex post, it might have seen the world a different way.
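
The mechanics are simple enough to sketch. The yearly frequencies below are made up for illustration (the actual Google Books Ngram data is published as downloadable frequency tables), but the test is the general idea: flag a phrase whose frequency jumps well above its own recent baseline.

```python
# Made-up yearly frequencies (per million bi-grams) for a few phrases.
# A "spike" here is just a large jump relative to the trailing average.
ngram_freq = {
    "subprime lending":       {2002: 0.8, 2003: 1.0, 2004: 1.6, 2005: 4.9, 2006: 7.3},
    "default swap":           {2002: 0.5, 2003: 0.6, 2004: 0.9, 2005: 2.4, 2006: 4.1},
    "automated underwriting": {2002: 0.3, 2003: 0.3, 2004: 0.4, 2005: 1.1, 2006: 1.8},
}

def spiking(series, year, lookback=3, ratio=2.0):
    """Flag a phrase whose frequency in `year` is at least `ratio` times
    its average over the preceding `lookback` years."""
    prior = [series[y] for y in range(year - lookback, year) if y in series]
    return bool(prior) and series[year] >= ratio * (sum(prior) / len(prior))

for phrase, series in ngram_freq.items():
    for year in (2005, 2006):
        if spiking(series, year):
            print(f"{phrase!r} spiking in {year}")
```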

Mitts is very careful not to overreach or overclaim in his work. It’s a well designed and executed case study with all caveats and qualifications clearly spelled out. But it is a stunningly good example of how text analysis could be useful to government policy development. Indeed, Mitts reports that he is developing what he calls a “forward-facing, dynamic” Real-Time Regulation system that scours readily available digital cultural publication sources (newspapers, blogs, social media, etc.) and posts trending summaries on a website. At the same time, the system also will scour regulatory agency publications for the FDIC, Fed, and SEC and post similar trending summaries. Divergence between the two is, of course, what he’s suggesting agencies look for and evaluate in terms of the need to intervene preventively.

For anyone interested in the future of legal computation as a policy tool, I highly recommend this paper–it walks the reader clearly through the methodology, findings, and conclusions, and sparks what in my mind is a truly intriguing set of policy questions. There are numerous normative and practical questions raised by Mitts’ proposal not addressed in the paper, such as whether agencies could act fast enough under slow-going APA rulemaking processes, whether agencies conducting their own trend spotting must make their findings public, who decides which trends are “good” and “bad,” appropriate trending metrics, and the proportionality between trend behavior and government response, to name a few. While these don’t reach quite the level of profundity evident in Minority Report, this is just the beginning of the era of legal computation. Who knows, maybe one day we will have pre-cogs, in the form of servers wired together and stored in pools of cooling oil.


On Systemic Risk and the Legal Future

If you’ve heard the term “systemic risk” it was most likely in connection with that little financial system hiccup we’re still recovering from. But the concept of systemic risk is not limited to financial systems–it applies to all complex systems. I have argued in a forthcoming article, for example, that complex legal systems experience systemic risk leading to episodes of widespread regulatory failure.

Dirk Helbing of the Swiss Federal Institute of Technology has published an article in Nature, Globally Networked Risks and How to Respond, that does the best job I’ve seen of explaining the concept of systemic risk and relating it to practical contexts. He defines systemic risk as

the risk of having not just statistically independent failures, but interdependent, so-called “cascading” failures in a network of N interconnected system components. That is, systemic risks result from connections between risks (“networked risks”). In such cases, a localized initial failure (“perturbation”) could have disastrous effects and cause, in principle, unbounded damage as N goes to infinity….Even higher risks are multiplied by networks of networks, that is, by the coupling of different kinds of systems. In fact, new vulnerabilities result from the increasing interdependencies between our energy, food and water systems, global supply chains, communication and financial systems, ecosystems and climate.
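
That definition lends itself to a toy simulation. The sketch below is purely illustrative: the network, the failure threshold, and the trigger are invented rather than drawn from Helbing’s models. It wires N components into a random network, fails one at random, and lets the failure spread to any component that loses too large a share of its neighbors.

```python
# Toy cascading-failure model; the topology, threshold, and trigger are
# all illustrative and not calibrated to any real system.
import random
import networkx as nx

N = 200
G = nx.erdos_renyi_graph(N, 0.03)        # ~6 links per component on average
threshold = 0.25                          # fail once a quarter of your neighbors have failed
failed = {random.choice(list(G.nodes))}   # one localized initial failure ("perturbation")

spreading = True
while spreading:
    spreading = False
    for node in G.nodes:
        if node in failed:
            continue
        nbrs = list(G.neighbors(node))
        if nbrs and sum(n in failed for n in nbrs) / len(nbrs) >= threshold:
            failed.add(node)
            spreading = True

# Re-run this a few times: the same kind of local shock sometimes dies out
# after a handful of components and sometimes sweeps most of the network.
print(f"cascade size: {len(failed)} of {N} components")
```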

As Helbing notes, the World Economic Forum has described this global environment as a “hyper-connected” world exposed to massive systemic risks. Helbing’s paper does a wonderful job of working through the drivers of systemic instability (such as tipping points, positive feedback, and complexity) and explaining how they affect various global systems (such as finance, communications, and social conflict). Along the way he makes some fascinating observations and poses some important questions. For example:

  • He suggests that catastrophic damage scenarios are increasingly realistic. Is it possible, he asks, that “our worldwide anthropogenic system will get out of control sooner or later” and make possible the conditions for a “global time bomb”?
  • He observes that “some of the worst disasters have happened because of a failure to imagine that they were possible,” yet our political and economic systems simply are not wired with the incentives needed to imagine and guard against these “black swan” events.
  • He asks “if a country had all the computer power in the world and all the data, would this allow government to make the best decisions for everybody?” In a world brimming with systemic risk, the answer is no–the world is “too complex to be optimized top-down in real time.”

OK, so what’s this rather scary picture of our hyper-connected world got to do with Law 2050? Quite simply, we need to build systemic risk into our scenarios of the future. I argue in my paper that the legal system must (1) anticipate systemic failures in the systems it is designed to regulate, but also (2) anticipate systemic risk in the legal system as well. I offer some suggestions for how to do that, including greater use of “sensors” style regulation and a more concerted effort to evaluate law’s role in systemic failures. More broadly, Helbing suggests the development of a “Global Systems Science” discipline devoted to studying the interactions and interdependencies in the global techno-socio-economic-environmental system leading to systemic risk.

There is no way to root out systemic risk in a complex system–it comes with the territory–but we don’t have to be stupid about it. Helbing’s article goes a long way toward getting smart about it.

Deep Structure — The Next Generation of Empirical Legal Studies

The use of statistical techniques to tease out empirical patterns in legal contexts has had a profound impact on legal practice and scholarship over the past few decades. From employment discrimination claims to academic studies of judicial voting patterns, we have learned a lot from regression analyses and other statistical applications. But getting at the deep structure of law has been more difficult with that tool kit. The convergence of big data, network theory, data visualization, and vastly enhanced computational capacities is changing that–now we can begin studying law and legal systems in ways that open up new frontiers for practitioners and academics.

As a practical example, sign on to Ravel Law. You will find a simple search field with no instructions. Plug in a term–I used “climate change.” Whereas in Westlaw and Lexis you receive a list of cases, in Ravel Law you receive something very different. Ravel Law gives you the list of cases, to be sure, but it also displays an interactive graphic representation of the citation network of all cases using the search term. The visual representation allows the user effortlessly and instantly to identify cases citing cases, the strength of each case as a citation source for others, and the timeline of cases in the network. So, if a practitioner wants to identify the “big case” in a topic, or to quickly trace the growth of the topic in case law, Ravel Law finds it for you in seconds, whereas piecing that together through traditional searches would take hours and a lot of mental gymnastics.
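
The machinery behind a display like that is not mysterious, even though Ravel Law’s actual algorithms are proprietary. Here is a stripped-down sketch in Python, with invented cases and citations, of how a citation network surfaces the “big case”:

```python
# Invented citation data: (citing_case, cited_case) pairs. A real service
# builds this graph from the full corpus of opinions.
import networkx as nx

citations = [
    ("Case D (2010)", "Case A (1992)"),
    ("Case E (2013)", "Case A (1992)"),
    ("Case E (2013)", "Case B (2001)"),
    ("Case F (2016)", "Case A (1992)"),
    ("Case F (2016)", "Case D (2010)"),
    ("Case G (2019)", "Case F (2016)"),
]
G = nx.DiGraph(citations)

# Crude authority: how often is each case cited?
print(sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True)[:3])

# PageRank-style authority: citations from well-cited cases count for more.
for case, score in sorted(nx.pagerank(G).items(),
                          key=lambda kv: kv[1], reverse=True)[:3]:
    print(case, round(score, 3))
```

In-degree is the crudest measure of authority; PageRank-style scoring adds the refinement that a citation from a heavily cited case counts for more, and attaching dates to the nodes gives you the timeline view.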

On a more theoretical level, tools like those used to power Ravel Law can help academics plumb the deeper structure of legal systems. For example, legal concepts and principles can be broken down into finely grained components, as in the way legal research services such as Westlaw and Lexis have developed their “keynote” and “headnote” cataloging systems. These cataloging systems produce hierarchical concept frameworks placing broad legal concepts such as constitutional law and environmental law at the top and then drilling down from those broad concepts through successive levels of increasingly narrow subtopics. Michael Bommarito’s study of opinion headnotes in over 23,000 Supreme Court cases illustrates the branching form this hierarchy takes when laid out graphically.  (See Michael J. Bommarito II, Exploring Relationships Between Legal Concepts in the United States Supreme Court). As any lawyer knows, however, (more…)

Managing Systemic Risk in Legal Systems

In an article forthcoming in the Indiana Law Journal, Managing Systemic Risk in Legal Systems, I draw on complexity science, network theory, and the prospects of enhanced legal computation capacities to explore how systemic risk arises and persists in legal systems. The American legal system has proven remarkably robust even in the face of vast and often tumultuous political, social, economic, and technological change. Yet our system of law is not unlike other complex social, biological, and physical systems in exhibiting local fragility in the midst of its global robustness. Understanding how this “robust yet fragile” (RYF) dilemma operates in legal systems is important to the extent law is expected to assist in managing systemic risk—the risk of large local or even system-wide failures—in other social systems. Indeed, legal system failures have been blamed as partly responsible for disasters such as the recent financial system crisis and the Deepwater Horizon oil spill. If we cannot effectively manage systemic risk within the legal system, however, how can we expect the legal system to manage systemic risk elsewhere?

The Article employs a complexity science model of the RYF dilemma to explore why systemic risk persists in legal systems and how to manage it. Part I defines complexity in the context of the institutions and instruments that make up the legal system. Part II defines the five dimensions of robustness that support functionality of the legal system: (1) reliability; (2) efficiency; (3) scalability; (4) modularity; and (5) evolvability. Part III then defines system fragility, examining the internal and external constraints that impede legal system robustness and the fail-safe system control strategies for managing their effects. With those basic elements of the RYF dilemma model in place, Part IV defines systemic risk, exploring the paradoxical role of increasingly organized complexity brought about by fail-safe strategies as a source of legal system failure. (more…)
