Professionalism/The Death of Elaine Herzberg

On the night of March 18, 2018, in Tempe, Arizona, an autonomous vehicle (AV) collided with and killed Elaine Herzberg. The AV, a modified Volvo XC90, was part of a test program of the Uber Advanced Technologies Group (ATG). Herzberg was jaywalking across a four-lane roadway with her bicycle when struck. Toxicology screening showed methamphetamine and THC in her bloodstream. The safety driver of the AV, Rafaela Vasquez, was streaming a TV show on her phone immediately before the crash, a violation of Uber policy and Arizona state law. The National Transportation Safety Board (NTSB) found that Vasquez's distraction slowed her reaction, preventing her from taking manual control in time to avoid Herzberg when prompted by the AV.

Uber’s software had dangerous flaws
As stated above, the proximate causes of Herzberg's death were Herzberg's impairment and jaywalking and Vasquez's distraction. However, to find the higher-level causes, we have to take a closer look at the mechanics of the AV. The AV had three mechanisms for sensing its environment: LIDAR, radar, and cameras. These worked together to paint a 3D picture of the surroundings, allowing the car's computer vision algorithms to judge distances, predict trajectories of moving objects, and classify objects as cars, bicycles, pedestrians, or something else.

There were a few key flaws in Uber's AV software, known as Perception. First, reclassified objects lost their movement history, so the AV could not predict a trajectory for them. In the seconds leading up to Herzberg's death, the computer vision system reclassified Herzberg multiple times, cycling between an unknown object, a bicycle, and a vehicle; as a result, the AV could not predict her path in order to avoid her. Second, objects classified as unknown were never assigned a predicted trajectory at all. Third, according to the NTSB, Perception "did not include consideration for jaywalking pedestrians." Fourth, Perception's emergency braking was flawed. Uber programmed Volvo's native emergency braking system to deactivate whenever Perception was active. Post-crash simulations by Volvo determined that, had its native system been active, the AV could have avoided Herzberg or at least slowed to reduce impact speed. Perception's own emergency braking, by contrast, activated only when braking could avoid impact entirely, not merely reduce impact speed.
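The first two flaws can be illustrated with a small sketch. The code below is purely illustrative and not based on Uber's actual implementation: the `TrackedObject` class, its fields, and the `reclassify` function are invented for this example. It shows why discarding an object's movement history on reclassification, as the NTSB described, leaves the system unable to estimate where the object is headed.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    """A detected object and its recent position observations (hypothetical)."""
    label: str                                    # e.g. "vehicle", "bicycle", "unknown"
    history: list = field(default_factory=list)   # (time, x, y) observations

    def observe(self, t, x, y):
        self.history.append((t, x, y))

    def predicted_velocity(self):
        """Estimate velocity from the two most recent observations.

        Returns None when there is no usable history -- the situation a
        tracker is left in every time reclassification wipes the history.
        """
        if len(self.history) < 2:
            return None
        (t0, x0, y0), (t1, x1, y1) = self.history[-2], self.history[-1]
        dt = t1 - t0
        return ((x1 - x0) / dt, (y1 - y0) / dt)

def reclassify(obj, new_label):
    """The flawed behavior described by the NTSB: a new classification
    starts a fresh track, discarding the accumulated movement history."""
    return TrackedObject(label=new_label)  # history is lost

# A pedestrian walking steadily across the road has a predictable path...
ped = TrackedObject("unknown")
ped.observe(0.0, 0.0, 0.0)
ped.observe(0.5, 0.6, 0.0)
print(ped.predicted_velocity())  # -> (1.2, 0.0)

# ...but after reclassification, no trajectory can be predicted at all.
ped = reclassify(ped, "bicycle")
print(ped.predicted_velocity())  # -> None
```

Each reclassification in the seconds before the crash would have reset the tracker to the `None` state, so the AV repeatedly lost its estimate of Herzberg's path.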

Strikingly, there was no technical malfunction; all sensor systems were fully active and doing their job. An executive of Velodyne, the LIDAR supplier, stated that the LIDAR "doesn't make the decision to put on the brakes or get out of her [Herzberg's] way". The key takeaway is that the AV performed exactly as Perception programmed it to; the design of Perception was flawed, not the underlying technology.

Poor safety culture leads to tragedy
If the problem was Perception's design, why did Uber fail to detect these flaws earlier, and why did it fail to design redundant safety measures into the AV to guard against undetected shortcomings? The NTSB points to Uber's "poor commitment to safety culture". This was Uber's ethical pitfall. The NTSB cited the following practices at Uber ATG as evidence of weak safety culture:
 * Weakened safety redundancy
   * Removal of the second safety driver in 2017
   * Disabling of Volvo's native emergency braking and forward collision detection
 * Poor supervision of safety drivers
   * e.g., supervisors "spot-checking" safety drivers using the feed from their inward-facing cameras
 * Failure to anticipate "human factors"
   * Pedestrian: jaywalking
   * Safety driver: inattentional blindness, automation complacency, and risk compensation (see below)
 * No corporate division to oversee safety

Notably, Uber ATG employees knew that Perception and the company's safety standards were flawed even before Herzberg's death. Uber AVs were frequently involved in accidents, almost "every other day" in the February before Herzberg died, and safety drivers who committed fireable offenses, such as Vasquez's cell phone use, went unpunished. To Uber's credit, the company claims to have rectified almost all of these weaknesses post-crash, but these factors nevertheless contributed to Herzberg's death.

AV systems have ethical assumptions built in
There will always be ethical decisions embedded in designing software for AVs. Consequentialism holds that the right action is whatever leads to the best results in quantified terms, such as the fewest deaths. AV manufacturers apply consequentialism when they consider crash optimization: if a crash is imminent, how should the AV minimize the resulting damage, injury, or death? To execute this, designers implement targeting algorithms, which deliberately discriminate among potential victims based on factors such as their likelihood of surviving the crash.

For example, consider a thought experiment in which a car must choose between hitting a motorcyclist wearing a helmet and one not wearing a helmet. It might seem reasonable to hit the helmeted rider, since the rider without a helmet would probably not survive the crash. But this would mean penalizing motorcyclists for wearing helmets, possibly encouraging riders to forgo helmets to avoid being targeted. Consequentialism underlies one of the most famous ethical dilemmas in history, the trolley problem, and as with the trolley problem, there is no definitive answer as to how to perfect crash optimization.
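The helmet paradox can be made concrete with a toy expected-harm minimizer. Everything in the sketch below is invented for illustration (the `expected_harm` function and the survivability numbers are assumptions, not figures from any real AV system); it simply shows how a naive consequentialist objective systematically selects the rider who took the safety precaution.

```python
# Toy consequentialist "crash optimization": choose the target with the
# lowest expected harm. All probabilities are invented for illustration.

def expected_harm(fatality_probability, harm_of_fatality=1.0):
    """Expected harm = probability of death x harm assigned to a death."""
    return fatality_probability * harm_of_fatality

# Hypothetical fatality probabilities in an unavoidable collision:
targets = {
    "motorcyclist with helmet": 0.3,     # more likely to survive impact
    "motorcyclist without helmet": 0.9,  # less likely to survive impact
}

# The minimizer picks whoever is most likely to survive the crash...
choice = min(targets, key=lambda t: expected_harm(targets[t]))
print(choice)  # -> motorcyclist with helmet

# ...meaning the algorithm deliberately targets riders who wear helmets,
# which is exactly the discrimination problem described above.
```

The flaw is not in the arithmetic but in the objective: minimizing expected deaths, taken alone, penalizes safety-conscious behavior, which is why crash optimization has no settled answer.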

AV companies deflect blame and support continued street testing
In response to AV crashes, companies such as Uber and Tesla have refused to accept responsibility. In 2016, Joshua Brown was killed when his Tesla Model S collided with a tractor-trailer while in Autopilot mode. Elon Musk, CEO of Tesla, has blamed crashes such as Brown's on driver overconfidence in the AV, stating that "the issue is more one of complacency" rather than any shortcoming of Tesla's Autopilot system. Yet marketing terms such as Tesla's "Autopilot" may imply a greater level of autonomy than the system provides, inadvertently creating the very complacency Musk described. Uber has stated that it is impossible to "anticipate and eliminate" every potential risk before street testing, and that testing on public roads is therefore necessary. Uber's poor safety culture suggests otherwise: the company was well aware of flaws in its AV system and safety standards before Herzberg's death, yet continued to test on public roads despite regular crashes. Waymo has echoed the need for continued street testing, but with a better safety record and no recorded fatalities.

Current AVs have impractical demands for human attention
SAE International, also known as the Society of Automotive Engineers, has defined six levels of autonomy for AVs. Level 3 AVs, such as the one involved in Herzberg's death, do not suit human psychology and can give the safety driver a false sense of safety. When a human is driving (Levels 0-2), the human is engaged in the driving task and focused on the road. At Level 3, the AV removes the apparent need to focus on the road, yet the driver must still monitor it and be ready to take over in an emergency. This is an unreasonable demand on humans because of inattentional blindness: the failure to notice things (like a jaywalking pedestrian) when not attending to them.

Automation leads to driver recklessness
When engineers design a technology to be safer, users may take more risks with it, nullifying any safety benefit. This phenomenon is called risk compensation. A special variant of risk compensation, called automation complacency, occurs with automated technologies: when a task is automated, users tend to stop performing the manual steps the system now handles, and if the automation fails, they may fail to step in and remedy the failure. When safety drivers operate AVs, automation complacency erodes the precautions they would take in a conventional car; for example, they become more likely to engage in distractions, such as Vasquez's cell phone use. The danger is that safety drivers may be distracted at the exact moment an automation failure occurs, leaving them unready to take control in an emergency.

When California said no, Arizona said yes
In 2016, Uber was testing its AVs in its headquarters city of San Francisco, CA. After an Uber AV ran a red light, however, California officials demanded that Uber apply for a permit under the AV testing permit system the state had created in 2014. Uber refused to undergo the permitting process, and Arizona Governor Doug Ducey seized the chance to attract Uber's AV testing to his state. In a press release, Ducey said that while "California puts the brakes on innovation and change with more bureaucracy and more regulation, Arizona is paving the way for new technology and new businesses" and that "Arizona welcomes Uber self-driving cars with open arms and wide open roads". Ducey's intention to foster his state's economy may have been pure, but his haste to attract Uber left Arizona vulnerable to tragedy. He lured Uber to Arizona without establishing regulations to screen AV companies or their safety practices, ultimately contributing to the death of Arizona citizen Elaine Herzberg. Post-crash, Gov. Ducey placed a moratorium on Uber's AV testing via executive order, but the NTSB has criticized Arizona's failure to create a "safety-focused application-approval process" for AV testing.

Lack of federal regulation leads to ethical quandaries for states
The NTSB also criticized the National Highway Traffic Safety Administration (NHTSA) for “not providing definitive leadership to the states to manage the expansive growth in AV testing”. The NHTSA has yet to publish comprehensive standards for AVs, outline best practices for AV testing, or create a mandatory vetting process for AVs and AV companies' safety protocols. In 2017 and 2020, Congress tried to pass comprehensive AV legislation but failed. This has left a regulatory vacuum, forcing the states to regulate AV standards themselves. Perhaps states are not solely to blame for the ethical dilemmas brought on by AV testing, since there is no unified guidance from the federal government.

Key lessons
This case shows what happens when progress and profit are valued over safety. Uber had forewarning that its AV program and Perception were unsafe, and it made the problem worse by eliminating the second safety driver to cut costs. Arizona's government could have been more careful in letting Uber test on its streets, but Gov. Ducey's hunger for economic prosperity outpaced the state's AV legislation. The result was the death of an innocent pedestrian. Herzberg's death may have hurt Uber financially, as the company experienced decelerating growth over 2018. Uber appears to have addressed some of these issues in the years since the crash, but all industries should view this case as a lesson to prioritize safety first.

Future directions
Future research could examine whether individual state AV regulations are efficacious. We may also forecast how much a federal AV-regulating body would cost and the potential benefits of such an organization. There are countless ethical questions that remain in the AV field, and some such as crash optimization may never have definitive answers. One ethical issue that may have some definitive answer is an objective criteria to determine if an AV company is morally responsible for a crash. Finally, we must realize that hindsight is 20/20, and no AV company will be able to forecast every possible contingency before street testing.