Two Canadian philosophers, Jason Miller and Ian Kerr, have posted an article, The Prospect of Expert Robots, in which they consider a philosophical question with thorny implications for law: what if an expert human and an expert robot disagree on a matter of importance? By "expert robot" they mean a computer loaded with Big Data and juiced up with algorithms that scour that data to produce answers, within a complex decision domain, that are on average more accurate than the answers counterpart human experts provide. Watson, for instance, is an expert computer at the game of Jeopardy: it beat the world's two most expert human players quite handily. But Watson is a toddler compared to the kinds of expert computers on our horizon. Google's driverless car, for example, operates in a far more complex decision domain than Watson does, and seems to be doing quite a good job of avoiding accidents and traffic violations.
So consider some scenarios in the not-too-distant future, in which expert computers are common across a wide array of decision domains and generally outperform their human counterparts. In one scenario they have replaced most human experts and make decisions free of human oversight. We've taken our hands off the wheel, so to speak, and delegated decision making to the expert computers. The expert computers aren't perfect, however, so they will make mistakes. There will be driverless car crashes. Who's liable when that happens? Can an expert computer be negligent, or act with intent?
The more complex question Miller & Kerr treat, however, is what happens when expert computers work alongside human experts to produce good decision results and the two disagree about a crucial decision. Do we go with the human or the computer? If we go with the human, the computer turns out to have been right, and the cost of the human's error is significant, where does liability fall? The reverse scenario presents the same question.
Miller & Kerr set up these scenarios nicely and work through some of the more profound normative questions they pose, concluding that there will be strong arguments in favor of delegation to expert computers, but that the human impulse to retain control might make it difficult for society to take full advantage of what expert computers can offer. Liability rules can also have a tremendous impact on the development and use of technology, and the expert computer world will present that problem in high resolution. Miller & Kerr concede that "our current models for assessing responsibility are not easily applicable in the case of expert robots" and that we have "barely scratched the surface regarding potential liability models." Nevertheless, they worry that lawyers might gum up the works, for example by advising the roboticists who design expert computers to ensure that the computers can explain their operations in the event of lawsuits, just as human experts do. That requirement could dampen the zeal with which roboticists work to develop better experts.
Watson playing Jeopardy is unlikely to get into any legal tangles, but IBM is not stopping with a win at Jeopardy. The law of expert robots is not that far into our future.