Yearly Archives: 2016
Can Humans End the Anthropocene?
There has been a great deal of buzz and attention in the science and policy communities over the idea that Earth has left the Holocene epoch and entered the Anthropocene, a proposed epoch that begins when human activities started to have a significant global impact on Earth’s geology and ecosystems. What is unique about the Anthropocene is that it is human-driven. We started it through the massive impacts our industry, resource extraction, agriculture, and sheer numbers have had on the biosphere. The question is whether we can end it and, if so, how and at what cost to humanity.
In a fascinating article in Science on this theme, Francois Sarrazin and Jane Lecomte outline five different scenarios for how humans handle the Anthropocene based on how we treat our fellow species. In the most dystopian (for Earth) scenario, which they call the Blind Anthropocene, we give up on conservation of ecosystems and engage in runaway consumption to serve human needs only. In a range of three Deliberate Anthropocene scenarios, humans engage in conservation efforts, but for different goals. In the most human-focused scenario we conserve biodiversity to produce flows of provisioning (e.g., extracting timber) and regulating (e.g., wetlands providing sediment capture) ecosystem services benefitting human communities. An intermediate scenario adds protection of wilderness and landscapes, but only to enhance cultural (e.g., recreation) ecosystem services. In the most progressive Deliberate Anthropocene scenario, conservation is aimed at the inter-generational fitness of humanity, which would focus on maintaining sustainable flows of regulating ecosystem services even at the expense of satisfying the wants of present society.
Most environmental policy discourse focuses on which of these four scenarios, all of which are anthropocentric, should guide our decisions and actions. As Sarrazin and Lecomte argue, however, none of these approaches, not even the most aggressive Deliberate Anthropocene conservation scenario, will bring the Anthropocene to an end. They argue that a fifth scenario, which they call the Deliberate Overcoming of the Anthropocene, will be required. In this “evocentric” scenario, humans design conservation to ensure not only the fitness of future generations of humans, but also to ensure the future evolutionary fitness of all other species. Only if we can return other species to such an evolutionary trajectory—one not so influenced by human impacts—could we begin to entertain the idea that the Anthropocene is drawing to a close, thanks to us.
Their proposal is, to say the least, radical. What would it take to accomplish it? What laws and policies would we need to put in place now to start turning the Anthropocene around—to actually end rather than soften its impacts—and how long would it take? Is it even possible?
Regardless of its audacity, their proposal could prompt a useful thought exercise to test just how progressive even our most progressive conservation policies truly are. It could also provide a reference point for measuring how deeply entrenched the Anthropocene becomes over time. It is at the very least worth thinking about.
Can AI Make AI Obey the Law?
Amitai Etzioni, the famous sociologist, and his son Oren Etzioni, the famous computer scientist, have posted an intriguing paper on SSRN, Keeping AI Legal. The paper starts by outlining some of the many legal issues that will spin out from the progression of artificial intelligence (AI) in cars, the internet, and countless other devices and technologies–what they call “smart instruments”–given the ability of the AI programming to learn as it carries out its mission. Many of these issues are familiar to anyone following the bigger AI debate–i.e., whether it is going to help us or kill us, on which luminaries have opined both ways–such as who is liable if an autonomous car runs off the road, or what if a bank loan algorithm designed to select for the best credit risks based purely on socially acceptable criteria (income, outstanding loans etc.) begins to discriminate based on race or gender. The point is, AI smart instruments could learn over time to do things and make decisions that make perfect sense to the AI but break the law. The article argues that, given this potential, we need to think more deeply about AI and “the legal order,” defined not just as law enforcement but also as including preventive measures.
This theme recalls a previous post of mine on “embedded law”–the idea that as more and more of our stuff and activities are governed by software and AI, we can program legal compliance into the code–for example, to make falsifying records or insider trading impossible. Similarly, the Etzionis argue that the operational AI of smart instruments will soon be so opaque and impenetrable as to be essentially a black box in terms of sorting out legal concerns like the errant car or the discriminatory algorithm. Ex ante human intervention to prevent the illegality will be impossible in many instances, because the AI is moving too fast (see my previous post on this theme), and ex post analysis of the liabilities will be impossible because we will not be able to recreate what the AI did.
The Etzionis’ solution is that we need “AI programs to examine AI programs,” which they call “AI Guardians.” These AI Guardians would “interrogate, discover, supervise, audit, and guarantee the compliance of operational AI programs.” For example, if the operational AI program of a bank called in a customer’s loan, the AI Guardian program would check to determine whether the operational program acted on improper information it had learned to obtain and assess. AI Guardians, argue the Etzionis, would be superior to humans given their speed, lower cost, and impersonal interface.
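To make the loan example concrete, here is a toy sketch of how such a guardian audit might look in code. The Etzionis do not specify any implementation, so everything here (the function name, the protected-attribute list, and the dictionary representation of a decision) is my own illustrative assumption:

```python
# Hypothetical sketch of an "AI Guardian" audit: a second program that
# inspects an operational AI's decision record before it takes effect.
# All names and data structures here are illustrative assumptions, not
# anything specified in the Etzionis' paper.

# Attributes the guardian treats as legally improper inputs.
PROTECTED_ATTRIBUTES = {"race", "gender", "religion"}

def audit_decision(decision):
    """Return (approved, reasons): reject a decision if the operational
    AI relied on any protected attribute it learned to obtain."""
    improper = [f for f in decision["features_used"]
                if f in PROTECTED_ATTRIBUTES]
    if improper:
        return False, [f"improper feature: {f}" for f in improper]
    return True, []

# Example: the operational AI calls in a customer's loan based in part
# on a feature the guardian flags as improper.
loan_call = {
    "action": "call_in_loan",
    "customer": "12345",
    "features_used": ["income", "outstanding_loans", "race"],
}

approved, reasons = audit_decision(loan_call)
print(approved, reasons)  # False ['improper feature: race']
```

Note that this sketch only enforces a fixed bright-line rule; a guardian that "grows smarter," as the Etzionis envision, would itself have to learn, which is exactly the difficulty taken up below.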
I get where they are coming from, but I see some problems. First of all, many determinations of legality or illegality depend on judgment calls–balancing tests, the reasonable person standard, etc. If AI Guardians are to make those calls, then necessarily they will need to be programmed to learn, which leads right back to the problem of operational AI learning to break the law. Maybe AI Guardians will learn to break the law too. Perhaps for those calls the AI Guardian could simply alert a human compliance officer to investigate, but then we’ve put humans back into the picture. So let’s say that the AI Guardians only enforce laws with bright line rules, such as don’t drive over 50mph. Many such rules have exceptions that require judgment to apply, however, so we are back to the judgment call problem. And if all the AI Guardians do is prevent violations of bright line rules with no exceptions, it’s not clear they are an example of AI at all.
But this is not what the Etzionis have in mind–they envision that “AI Guardians…will grow smarter just as operational AI programs do.” The trick will be to allow the AI Guardians to “grow smarter” but prevent the potential for them, too, to cross the line. The Etzionis recognize that this lurking “Who will guard the guardians?” question exists even for their AI Guardians, and propose that all smart instruments have a “readily locatable off switch.” Before long, however, flipping the off switch will mean more than turning off the car–it will mean turning off the whole city!
All of this is yet more Law 2050 food for thought…
Our Grandchildren Redesigned, by Michael Bess – A Legal Futurism Treasure Chest
As you may have noticed (or if not, now you know), I haven’t posted anything on the site for a while. I have all the typical excuses: busy at work, family stuff, the holidays, etc. But truth be told, not much grabbed me. That changed when I read Our Grandchildren Redesigned, the latest by my Vanderbilt colleague and friend, historian Michael Bess. As a dabbler in legal futurism, I found Bess’s book to be a treasure chest. The subtitle says it all: Life in the Bioengineered Society of the Near Future.
In Redesigned, Bess pulls off what others have tried but failed to deliver. Using what is known today about the past, present, and trajectory of pharmaceuticals, bioelectronics, and genetics and epigenetics (plus nanotechnology, AI, robotics, and synthetic biology), Bess constructs plausible scenarios of how humans will use these technologies to “improve” on our biology and how society will respond. There is no science fiction in the book, no extreme claims, no utopian or dystopian indulgence. Bess the careful, acclaimed historian has turned his sights on the bioengineered future with the same measured, thoughtful, methodical attention to detail and cogency. And one could spin an endless stream of questions about the law’s future from his scenarios, many of which Bess signals or even digs into.
Bess opens the book (and its ongoing website) with three premises. First, “It’s almost certainly going to happen.” By “it” he means the convergence of the technologies towards the capacity for human physical and mental engineering through drugs, biotech devices, and epigenetic manipulations. Lest there be any doubts, chapters two through five put them to rest. Second, “It will bring both opportunity and peril.” Sure, you might say, so have smartphones. So what? But third, “Its impact will be radical.” Of course, it’s this third premise that might attract the charge that it’s Bess who is being radical, but by the end of the book my only concern was that he didn’t play the scenario out as fully crazy as it could get!
I’m not going to review Bess’s account of the technologies or even the scenarios he builds in any detail. Read the book! Rather, what makes the book of such tremendous potential impact and of value to legal futurists is Bess’s engagement of the social and ethical choices that will have to be made as redesigning becomes possible, then practical, then popular, and eventually part of all our (grandchildren’s) lives. There are three big themes Bess develops in this regard.
First, this will not happen overnight. Many of the legal issues one can envision will flow from the transitional nature of the uploading of redesign technology into society. New technologies will at first be expensive, thus furthering already pervasive wealth disparities. Some technologies will need to begin at young ages to be effective, creating inter-generational disparities. Of course, responding to social disparity is nothing new to the law, but we are not talking about who can afford smartphones, we are talking about who gets the smart pills, the fully-functional artificial eye, the tweaked gene expression for holding off cancer, and so on. Bess’s concern is on target—the redesign disparity could begin to rip apart society as it comes online. How will law respond?
Second, Bess explores issues that will be inherent in the new normal in which a substantial level of redesign is eventually available to the masses. If the average life span moves to 150, it takes little imagination to play out what that could mean for employment, marriage, welfare, the environment, prisons, you name it. And if people can be better at anything, with potentially vast improvement on the horizon, what does that mean for sports, warfare, science, the arts, you name it? Plus, in all likelihood we can’t become the best at everything, so, much as children do today, we will likely see specializations that produce even more extreme differences between groups than are possible today. Will the best tennis players have anything in common with the best flutists? And what about people who, for moral or religious reasons, choose not to participate? What will we do with them? Lots of law change in store!
Third, Bess asks what we should do now to shape the new normal, if we can. Bess believes, and I agree, that getting control of the direction and intensity of redesign will be hard, but necessary. If the U.S. backs off on moral grounds (e.g., as with stem cell research), what’s to stop North Korea? And if we set international limits, domestic controls on private experimentation will need to be rigorous. And what would the limits look like? Bess suggests seven key challenges, including controlling radical inequality, defending mental privacy, and avoiding commodification of the human being. Again, law will have to be engaged.
I should emphasize that there is far more to Bess’s work than I have let on in this law-focused account. There is a profoundly philosophical dimension, as Bess asks early in the book whether we should redesign and then develops a set of human flourishing factors that he believes should guide our way. Bess animates his descriptive scenarios with short fictional vignettes of life and lives, and even some laws, in the redesign future. By no means corny or out of place, these allow the reader to personalize the impacts of a redesign future. In my case, I found myself drifting into thought about the legal future as well. In short, all I have hoped to do here is scratch the surface of Bess’s brilliant work to whet your Law 2050 appetites.
Bottom line, if you want to get a picture of how being a human will take a sharp turn by around 2050, Our Grandchildren Redesigned is your starting point.