Testimony on Law and Ethics at the Automated Vehicles Symposium of the Transportation Research Board 2016
Presented at the Automated Vehicles Symposium 2016: Users, Vehicles, Infrastructure
San Francisco, CA, July 19, 2016
Stephen S. Wu
Silicon Valley Law Group
Statement of Attorney Stephen S. Wu
Good afternoon. I thank Professor Patrick Lin of California Polytechnic State University for inviting me here today to give you my thoughts, from a lawyer’s perspective, about the importance of ethics in programming and developing automated vehicles. My perspective is based on my practice as a lawyer representing technology companies as outside counsel, as well as my previous experience as an in-house attorney for a large technology company. I am also drawing upon my undergraduate training in ethics and public policy at the University of Pittsburgh, where my major was Politics and Philosophy.
Before sharing my thoughts with you, I would like to add at the outset that I am speaking only for myself and not on behalf of my law firm or my colleagues at the firm. Also, this statement discusses general principles of law and does not constitute legal advice.
The presentations at the symposium have discussed the great potential for automated vehicles. Autonomous vehicles (which I will refer to as “AVs”) hold the promise of saving tens of thousands of lives each year in the U.S., and many more worldwide, reducing traffic, saving energy, and providing mobility to those who cannot drive conventional cars. Major automobile manufacturers are testing AVs, while Google has been testing multiple prototype vehicles for some time. Tesla has permitted its customers to use what it calls the “Autopilot” system, which consists of a number of driver assistance features. Companies are also testing automation solutions for freight trucks.
Despite these advances, a whole host of legal and social problems pose obstacles to bringing AVs to the mass market. The tragic accidents involving Tesla automobiles in recent weeks have placed legal and ethical issues into sharp focus. Therefore, our program today is particularly timely.
As I see them, the main legal issues surrounding AVs fall into three categories:
- First, issues of compliance: can people legally operate AVs in a given jurisdiction and can manufacturers sell them there legally?
- Second, issues of liability: what happens when an AV has an accident? Who is responsible? Will manufacturers face such crushing liability that it could cause them to exit the market or deter them from marketing AVs in the first place?
- Third, issues of information governance: how can businesses protect the privacy of information generated by AVs, protect the security of AVs from hacking attacks, and manage the digital evidence AVs generate?
The prospect of liability is especially worrisome. Yes, AVs have the potential to save tens of thousands of lives. Nonetheless, if crushing liability, or even the specter of liability, drives manufacturers out of the AV market, deters them from entering it, or poses an existential threat to a manufacturer, AV technology may never be deployed on a mass scale and will fail to live up to its lifesaving potential. If that happens, arguably a large number of people will die needlessly each year while lifesaving AV technology is sidelined.
In discussing the impending deployment of AV technology, one of the most popular themes for news stories has been handling ethical dilemmas in designing AVs. These news stories raise important questions about what AVs should do when an accident is unavoidable. Should they change direction to minimize anticipated harm? Should they try to maximize the safety of the occupants at a cost of increasing the risk to bystanders and people in other vehicles?
Professor Lin wrote several important pieces about ethical dilemmas faced by engineers when programming AVs and, by extension, their manufacturer employers designing AVs for sale in the mass market. He has discussed various “thought experiments” useful in analyzing problems of moral philosophy. Thought experiments are “similar to everyday science experiments in which researchers create unusual conditions to isolate and test desired variables.”1 The most publicized thought experiments from Professor Lin’s work are variants of the traditional ethical “trolley problem” as applied to AVs.
“Trolley problems,” as the name suggests, concern a runaway trolley, where various actors in various situations face ethical dilemmas about whether it is morally obligatory, permissible, or forbidden to act or refrain from acting. The basic problems concern whether it is permissible to steer the trolley away from killing five people but at the cost of killing one. These problems tease out and test our ethical intuitions concerning what may seem to be self-evident ethical principles but, upon further examination, may trigger counter-intuitive conclusions when the hypothetical situation changes. For instance, moral philosophers frequently claim it is worse to take action to kill someone than it is to let someone die without saving them. But is it worse to allow five, fifty, five hundred, or more to die when killing one could have saved their lives? These are the subjects of trolley problems in moral philosophy.
Professor Lin has written a number of pieces about conducting thought experiments using trolley problems in the context of AVs. Here is one common version of the trolley problem in the AV context:
You are about to run over and kill five pedestrians. Your car’s crash-avoidance system detects the possible accident and activates, forcibly taking control of the car from your hands. To avoid this disaster, it swerves in the only direction it can, let’s say to the right. But on the right is a single pedestrian who is unfortunately killed.2
News writers have (with or without crediting Professor Lin) repeated this and similar scenarios in numerous recent news articles.3
Professor Lin urges engineers and manufacturers to consider the ethical implications of their designs. When designing AVs, some manufacturers will conduct a careful ethical analysis and determine that, other things being equal, it is better for an AV to steer away from a sudden collision with a large group at the cost of harming one person or a small group. They may wish to implement these ethical decisions in the code underlying the intelligence in their AV systems.
The law, however, introduces complicating factors. Manufacturers generally try to comply with the law and minimize legal liability. When implementing what it believes is the morally right programming, would a manufacturer violate legal standards?
Law and morality overlap. Some acts are both immoral and illegal, such as murder. Some acts are both moral and legal, or perhaps even obligatory, such as a parent discharging her obligation to provide support to her child. In other cases, however, law and morality may diverge. Many people give the example of moderate speeding when driving in order to maintain a safe speed consistent with the flow of traffic. To them, driving with the flow of traffic is the safest course and is therefore moral, even if it is technically illegal. In other instances, legal conduct is immoral. Infidelity to a spouse is a common example.
As with “thought experiments” in philosophy, legal education trains law students and lawyers with “hypotheticals.” Hypotheticals serve the same function as thought experiments: they illustrate how legal principles would apply to a specific set of circumstances. For purposes of these remarks, imagine a hypothetical manufacturer trying to “do the right thing” from an ethical perspective. It has considered design alternatives carefully from an ethical perspective. At the end of that analysis, it wants to implement a certain ethical conclusion, such as programming a car, when an accident is inevitable, to avoid harming or killing a large group of people by steering the car away from the large group towards one person or a much smaller group.
The legal issue in this hypothetical is whether a manufacturer could face liability for implementing such a programming decision. Is it possible that a representative of the one person struck and killed could sue the manufacturer in tort and prevail for selling an AV that implements this ethical decision despite the result of saving more lives than lost? In other words, could the manufacturer be liable from a legal perspective for “doing the right thing” from an ethical perspective—a result that most would find unfortunate?
I believe the answer is that under current law, yes, a manufacturer might be liable for implementing this ethical decision. At the same time, if the manufacturer programs the car not to steer away when a collision with a large group is inevitable, it would also be liable. Accordingly, on the face of it, current law seems to create a “no win” situation for the manufacturer.
This year, I have been drafting a paper discussing three possible traditional defenses to see whether a manufacturer can avoid liability if it makes what appears to be an ethical design choice—(1) necessity, (2) defense of others, and (3) the sudden emergency doctrine. Necessity provides a defense to a tort claim when the defendant can prove that particular conduct causing damage was necessary to prevent some greater harm. The necessity defense, however, applies to property damage cases brought under causes of action such as trespass or nuisance.
The “defense of others” concept, similar to self-defense, excuses conduct necessary to protect a third party against wrongful injury. For instance, a parent pushing away an assailant injuring his or her child would not be liable for battery. Nonetheless, this defense exists only if the conduct is necessary to stop someone’s wrongful conduct. It does not apply to our hypothetical where the car may steer towards an innocent bystander in order to prevent injury to a larger number of other innocent bystanders.
The sudden emergency doctrine recognizes that a person’s general duty to act reasonably in accordance with the applicable standard of care changes in the face of a sudden emergency. Some cases recognize that car drivers cannot be expected to act as precisely or as carefully when faced with a split-second reflexive decision. Nonetheless, our hypothetical involves a manufacturer planning the behavior of AVs far in advance of any actual emergency. It can take its time to think clearly about programming the AVs without the pressure of a split-second decision. Thus, this doctrine does not appear to apply to manufacturers’ design choices.
In my paper, I will conclude that, under current law, the best way for a manufacturer to limit its liability is to cause the car to choose the path that maximizes the probability of avoiding a collision altogether. Maximizing collision avoidance may even raise the possibility of causing more harm than steering away from the large group towards the single person. Nonetheless, maximizing collision avoidance seems to be the most defensible design. I conclude that the only way to provide complete protection to a manufacturer seeking to implement ethical choices involving steering away from the large group towards a single person is special legislation or regulation.
Beyond this trolley problem hypothetical, our panel today touches on the importance of ethics and ethical decision-making in connection with the development of automated vehicles. From a legal perspective, a careful consideration of ethics in the process of designing AVs will likely help manufacturers reduce their risk of liability. While I have not done a full search of the legal literature, legal journal articles talk about the role of ethics and involving ethical consultants in reducing the risk of liability.4 The best example of the benefits of consulting ethicists is in the medical field. “Ethics consultation can improve the decision-making process and may reduce legal exposure for individuals and institutions, particularly for care given to patients near the end of life.”5 Applying ethics consultation methodologies to the AV design process may similarly reduce liability risk.
Why might ethics consultation help reduce legal risk in the United States? One possible explanation relates to the jury system in this country. Most likely, a plaintiff suing a manufacturer in a product liability case will demand a jury trial. The conventional wisdom is that juries are more likely than judges to award large verdicts to plaintiffs, and jury trials offer plaintiffs the possibility of winning enormous awards.
Why are jurors willing to render these large verdicts against manufacturers? The short answer is juror anger. “Angry jurors mean high damages.”6 More specifically, juries render large verdicts when they become angry at the defendant’s conduct. When juries become angry, the only way they see to redress the defendant’s wrongs is to render very large verdicts in an effort to send a message that the conduct is unacceptable. Accordingly, jurors will use verdicts to punish what they perceive as callous or reckless behavior by the manufacturer.
How can a manufacturer defuse possible juror anger? One commentator stated, “The most effective way for [counsel for] a corporate defendant to reduce anger toward his or her client is to show all the ways that the client went beyond what was required by the law or industry practice.”7 Meeting minimum standards is insufficient because of juror skepticism about the rigor of standards set or influenced by industry and because jurors expect corporate clients to know more about product safety than a “reasonable person” – the standard for judging the conduct of defendants under the law.8 “A successful defense can also be supported by walking jurors through the relevant manufacturing or decision-making process, showing all of the testing, checking, and follow-up actions that were included. Jurors who have no familiarity with complex business processes are often impressed with all of the thought that went into the process and all of the precautions that were taken.”9 Even though accidents do occur, a defendant’s proactive approach would show the jury that the manufacturer tried hard to do the right thing.10 Consequently, efforts to go above and beyond the minimum standards would defuse juror anger and mitigate the manufacturer’s risk.
Besides spending extra time, attention, and resources to improve safety, consulting with ethicists to evaluate the moral dimensions of design is another way to “do the right thing.” An ethical consultation can show that the manufacturer went beyond the legal minimum standards to investigate proactively the moral dimensions of the design. It is another way for the manufacturer to go above and beyond what the law requires.11
In addition to reducing liability, consulting ethicists may provide additional benefits to a manufacturer. Again, bioethics analyses in the context of patient care provide a useful analogy. For instance, the Department of Veterans Affairs identified a number of benefits for healthcare facilities creating a bioethics program:
- increasing patient satisfaction,
- improving employee morale,
- enhancing productivity,
- conserving resources/avoiding costs,
- improving accreditation reviews,
- reducing ethics violations,
- reducing risk of lawsuits,
- sustaining corporate integrity, and
- safeguarding the organization’s future.12
An ethics program for AV design may lead to similar results for manufacturers, their employees, and their customers.
Consulting ethics professionals could occur in a number of ways. In the bioethics field, “[m]any health care facilities have in-house or on-call trained ethicists to assist health care practitioners, caregivers and patients with difficult issues arising in medical care, and some facilities have formally constituted institutional ethics committees.”13 Likewise, AV manufacturers could hire in-house or on-call outside ethicists, and may decide to create ethics committees.
In summary, investigating the intersection between law and ethics and applying ethics to the design process will be helpful to automated vehicle manufacturers. Ethics will help sharpen thinking through hypotheticals, such as the trolley problem scenarios discussed in the media. Moreover, ethics consultations will likely help a manufacturer reduce its liability risk by showing how it goes above and beyond minimum legal standards. Finally, ethics consultations will lead to a number of other benefits for manufacturers, their employees, and ultimately AV consumers.
Thank you for your interest today. I look forward to answering your questions.
Stephen S. Wu
Silicon Valley Law Group
50 W. San Fernando Street, Suite 750
San Jose, CA 95113
E-mail: firstname.lastname@example.org Web: www.svlg.com
Stephen Wu is an attorney with Silicon Valley Law Group in San Jose, CA. He advises clients on compliance, liability, security, and privacy matters regarding the latest technologies, including autonomous driving, robotics, artificial intelligence, the Internet of Things, Big Data, and augmented and virtual reality. His litigation practice includes resolving information technology and intellectual property disputes. He drafts and negotiates service agreements, licenses, marketing agreements, and other technology contracts. Finally, he acts as outside general counsel to startups and technology companies.
Mr. Wu served as the 2010-2011 Chair of the American Bar Association Section of Science & Technology Law. Before his work in private practice, Mr. Wu was in charge of VeriSign, Inc.’s worldwide policies and practices governing its digital certification security services. The Daimler & Benz Foundation and American Bar Association recently published his book chapters on driverless car and drone product liability. The ABA will publish the second edition of his book on healthcare data security in August. He received his B.A., summa cum laude, from the University of Pittsburgh in 1985, and received his J.D., cum laude, from Harvard Law School in 1988.
1 Patrick Lin, Why Ethics Matters for Autonomous Cars, in Autonomous Driving: Technical, Legal And Social Aspects 69 (Markus Maurer, et al. eds., 2016).
2 Id. at 75.
3 See, e.g., Why Self-Driving Cars Must be Programmed to Kill, MIT Technology Review, Oct. 22, 2015.
4 See, e.g., John La Puma, et al., How Ethics Consultation Can Help Resolve Dilemmas About Dying Patients, 163 W. J. Medicine 263 (1995).
6 Robert D. Minick & Dorothy K. Kagehiro, Understanding Juror Emotions: Anger Management in the Courtroom, For The Defense, July 2004, at 3.
7 Id. at 2.
8 See id.
10 See id.
11 For an analogous discussion in the bioethics context, see Lisa Brock, Clinical Ethics and the Law, (Jan. 22, 2013) (“Risk management is guided by legal parameters but has a broader institution-specific mission to reduce liability risks. It is not uncommon for a hospital policy to go beyond the minimum requirements set by a legal standard.”).
12 Sharon Caulfield, Health Care Facility Ethics Committees: New Issues in the Age of Transparency, Human Rights, Fall 2007.
13 Brock, supra.