
Is the 21st Century Going to Be One Ginormous Long-Tail Event?

In Book of Extremes: Why the 21st Century Isn’t Like the 20th Century, Ted Lewis builds the case that the 21st century is likely to become a morass of extreme events unlike any prior century in magnitude and frequency. The core theme of the book is that the world has entered an era of unprecedented network scope and connectedness, which, while offering all sorts of advantages like social media and global trade (if you think those are benefits), exposes society to massive cascading failures.

Lewis is clearly wired into complexity science, network analysis, and data science. He’s held a variety of positions in academia, industry, and publishing, and spins out a fascinating account of how all those and other disciplines are necessary to even begin to understand what is happening in the world today. He draws on the internet, marine shipping, climate change, the financial system, and wealth concentration to argue that we have gone well past the “tipping point” of exposure to black swan events and worse (see my prior posts on systemic risk and dragon kings). Although I disagree with Lewis’s assessment of prior centuries as essentially flat, linear, and relatively free of global networks and extreme events – anyone who thinks so should read A Distant Mirror and 1493 – the evidence he amasses regarding the breadth, tightness, and impact of today’s interlinked social, economic, political, and technological networks is impressive. These networks of networks, while robust in one sense, are fragile in ways that can lead to extreme outlier failures. One example Lewis offers is global shipping, a complex network of lanes and ports that depends disproportionately on just three ports (Hong Kong, Shanghai, and Los Angeles), so much so that the failure of any one of them could bring down the whole network (which would then cascade into other networks, such as finance).
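To see why hub-dependent networks are fragile in exactly this way, here is a minimal sketch in Python. The topology is a made-up toy (three hub ports, each serving ten spokes), not real shipping data, and the port labels are placeholders; the point is only that knocking out one hub strands everything that routes through it:

```python
from collections import deque

def reachable_from(graph, start, removed):
    """Breadth-first count of nodes reachable from `start`,
    treating the ports in `removed` as failed."""
    seen, queue = {start}, deque([start])
    while queue:
        for nbr in graph[queue.popleft()]:
            if nbr not in seen and nbr not in removed:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen)

# Toy topology: 3 hub ports linked to each other, each serving 10 spoke ports.
hubs = ["HK", "SH", "LA"]
graph = {h: [x for x in hubs if x != h] for h in hubs}
for h in hubs:
    for i in range(10):
        spoke = f"{h}-{i}"
        graph[h].append(spoke)
        graph[spoke] = [h]

print(reachable_from(graph, "HK", set()))   # 33: every port reachable
print(reachable_from(graph, "HK", {"SH"}))  # 22: one hub down strands 11 ports
```

A random spoke failure here costs the network one node; a hub failure costs a third of it. That asymmetry (robust to random failures, fragile at the hubs) is the signature of the scale-free networks Lewis describes.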

These massive networks can also produce behaviors that appear unusual and counter-intuitive. For example, although social media networks theoretically connect everyone around the world and should produce convergence and harmony, there is evidence they are more an agent of fragmentation. Consistent with Lewis’s theme, for example, Curtis Hougland explains in a post today on the Wharton School’s website how social media allow people who have been assembled according to conventional orderings (nations, religions, employment, education) to reassemble according to other personal affinities, thus cutting across traditional boundaries such as nation states. “Social media provides both an organizing tool through its ability to structure and facilitate communication and an organizing principle in the way people gravitate toward the extreme. In this way, social media accelerates political unrest like a giant centrifuge, spinning faster and faster and spitting out those who disagree.”

Book of Extremes provides an excellent, albeit fast and furious, tour through network analysis, complex adaptive systems, data science, and an array of other disciplines. Lewis uses metaphors such as waves, flashes, sparks, booms, bubbles, shocks, and bombs to tie the science to real-world contexts with scads of historical and modern examples. His bottom line is that governments and individuals need to start taking big “leaps” to avoid continuing down the spiral toward cascade failures, including more instances of private initiatives not waiting for government to lead, the way SpaceX has launched itself (pun intended).

So, what does this mean for law? For starters, if Lewis is right, get ready for a century of unprecedented demand on the legal system. Law students and young lawyers, watch trends, anticipate disruption, and think hard about what pressures these will place on the legal system to produce solutions, protect rights, and adapt new legal doctrines. You can help shape how law responds, and you can be the first to “jump on it” with thoughtful analysis and reasoned proposals for legal action. In short, think Law 2025!

The Law and “Ultrafast Extreme Events” – Is it Possible to Regulate “Machine Ecology” If it Moves Faster than the Human Mind Can React?

In a fascinating new article in Nature’s Scientific Reports, researchers describe a “machine ecology” humans have built, through which we have ceded decisionmaking across a wide array of domains to technologies moving faster than the human mind can react. Consider that a new transatlantic cable is being built to shave another 5 milliseconds off communication times, and that a new chip designed for financial trading can execute trades in just 740 nanoseconds (that’s 0.00074 milliseconds!), whereas even in its fastest modes (flight from danger and competition) the human mind makes important decisions in just under 1 second. As the article abstract suggests, the proliferation of this machine ecology could present as many problems as benefits:
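The gap between those timescales is easy to underestimate, so here is the back-of-the-envelope arithmetic, using only the figures reported above (a simple ratio, not a model of any actual exchange):

```python
# Figures from the paragraph above.
trade_time_s = 740e-9      # one trade: 740 nanoseconds
human_reaction_s = 1.0     # fastest human decisions: just under 1 second

print(trade_time_s * 1e3)                      # 0.00074 milliseconds per trade
print(round(human_reaction_s / trade_time_s))  # ~1.35 million trades per human reaction
```

In other words, by the time a human trader has registered that something is wrong, the machines could have executed over a million trades.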

Society’s techno-social systems are becoming ever faster and more computer-orientated. However, far from simply generating faster versions of existing behaviour, we show that this speed-up can generate a new behavioural regime as humans lose the ability to intervene in real time. Analyzing millisecond-scale data for the world’s largest and most powerful techno-social system, the global financial market, we uncover an abrupt transition to a new all-machine phase characterized by large numbers of subsecond extreme events. The proliferation of these subsecond events shows an intriguing correlation with the onset of the system-wide financial collapse in 2008. Our findings are consistent with an emerging ecology of competitive machines featuring ‘crowds’ of predatory algorithms, and highlight the need for a new scientific theory of subsecond financial phenomena.

One has to wonder how we can design regulatory mechanisms that will prove effective in controlling “ultrafast extreme events” and how legal doctrine will handle issues of liability, property, and contract when such events are moving at nanosecond speeds beyond human recognition. Indeed, the article’s authors focus on the financial system, and observe that the extent to which the thousands of UEEs their research has detected as occurring during the financial crisis were actually “provoked by regulatory and institutional changes around 2006, is a fascinating question whose answer depends on a deeper understanding of the market microstructure.” I’d love to see how Congress tees up that committee hearing!

On Systemic Risk and the Legal Future

If you’ve heard the term “systemic risk” it was most likely in connection with that little financial system hiccup we’re still recovering from. But the concept of systemic risk is not limited to financial systems–it applies to all complex systems. I have argued in a forthcoming article, for example, that complex legal systems experience systemic risk leading to episodes of widespread regulatory failure.

Dirk Helbing of the Swiss Federal Institute of Technology has published an article in Nature, Globally Networked Risks and How to Respond, that does the best job I’ve seen of explaining the concept of systemic risk and relating it to practical contexts. He defines systemic risk as

the risk of having not just statistically independent failures, but interdependent, so-called “cascading” failures in a network of N interconnected system components. That is, systemic risks result from connections between risks (“networked risks”). In such cases, a localized initial failure (“perturbation”) could have disastrous effects and cause, in principle, unbounded damage as N goes to infinity….Even higher risks are multiplied by networks of networks, that is, by the coupling of different kinds of systems. In fact, new vulnerabilities result from the increasing interdependencies between our energy, food and water systems, global supply chains, communication and financial systems, ecosystems and climate.
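Helbing’s contrast between statistically independent failures and cascading ones can be illustrated with a toy branching process (a sketch for intuition, not Helbing’s actual model): each failed component stresses a few neighbors, each of which fails with some probability. Below a critical branching ratio a local perturbation dies out; above it, a single failure can grow to the size of the whole system, which is the “in principle, unbounded damage” in the quote:

```python
import random

def cascade_size(spread_prob, branching=3, cap=10_000, rng=random):
    """Toy branching process: each failed component stresses `branching`
    neighbours, each of which fails with probability `spread_prob`.
    Returns the total number of failures (capped for supercritical runs)."""
    failed = frontier = 1
    while frontier and failed < cap:
        new = sum(rng.random() < spread_prob for _ in range(frontier * branching))
        failed += new
        frontier = new
    return failed

random.seed(0)
subcritical = [cascade_size(0.2) for _ in range(1000)]    # 3 * 0.2 = 0.6 < 1
supercritical = [cascade_size(0.4) for _ in range(1000)]  # 3 * 0.4 = 1.2 > 1

print(sum(subcritical) / 1000)                  # small average: failures stay local
print(sum(s >= 10_000 for s in supercritical))  # hundreds of runaway cascades
```

The unsettling feature is how sharp the transition is: a modest increase in coupling between components flips the system from “failures stay local” to “failures can consume everything,” which is why tightening the interdependencies Helbing lists can quietly move a system across that threshold.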

As Helbing notes, the World Economic Forum has described this global environment as a “hyper-connected” world exposed to massive systemic risks. Helbing’s paper does a wonderful job of working through the drivers of systemic instability (such as tipping points, positive feedback, and complexity) and explaining how they affect various global systems (such as finance, communications, and social conflict). Along the way he makes some fascinating observations and poses some important questions. For example:

  • He suggests that catastrophic damage scenarios are increasingly realistic. Is it possible, he asks, that “our worldwide anthropogenic system will get out of control sooner or later” and make possible the conditions for a “global time bomb”?
  • He observes that “some of the worst disasters have happened because of a failure to imagine that they were possible,” yet our political and economic systems simply are not wired with the incentives needed to imagine and guard against these “black swan” events.
  • He asks, “if a country had all the computer power in the world and all the data, would this allow government to make the best decisions for everybody?” In a world brimming with systemic risk, the answer is no–the world is “too complex to be optimized top-down in real time.”

OK, so what’s this rather scary picture of our hyper-connected world got to do with Law 2050? Quite simply, we need to build systemic risk into our scenarios of the future. I argue in my paper that the legal system must (1) anticipate systemic failures in the systems it is designed to regulate, but also (2) anticipate systemic risk in the legal system as well. I offer some suggestions for how to do that, including greater use of “sensors” style regulation and a more concerted effort to evaluate law’s role in systemic failures. More broadly, Helbing suggests the development of a “Global Systems Science” discipline devoted to studying the interactions and interdependencies in the global techno-socio-economic-environmental system leading to systemic risk.

There is no way to root out systemic risk in a complex system–it comes with the territory–but we don’t have to be stupid about it. Helbing’s article goes a long way toward getting smart about it.