
Humans take more blame than cars for killing pedestrians

Blame game

When shared-control drivers both make a mistake, humans take the fall.

A two-car crash may now involve as many as four drivers, not all of them human.


MediaNews Group / Orange County Register via Getty Images

Tonight, drivers in the US will kill more pedestrians than on any other night of the year. An increase in people walking in low-light conditions makes Halloween the most dangerous night of the year for pedestrians.

Pedestrian deaths are on the rise overall, as cars become safer for drivers but more dangerous for everyone else. Sophisticated pedestrian detection systems, which are becoming more common in cars, aren’t doing particularly well. Some of them score highly on easier tests in broad daylight, but they do not fare so well in more difficult conditions like low light.

When a human driver shares control of a car with an automated system and a pedestrian is killed, how do people assign blame? A study published this week in Nature Human Behaviour suggests that people may focus their ire on the human in a shared-control crash. The authors argue that this could result in under-regulation of the safety of shared-control vehicles.

Shared responsibility?

Although the broad availability of fully autonomous cars lurks over the horizon, a range of advanced driver assistance systems (ADAS), from intervention during emergency maneuvers to Tesla’s Autopilot suite, offloads some of the responsibility for safety from the driver to the car.

“We are entering a delicate era of shared control between humans and machines,” write researcher Edmond Awad and his colleagues. Understanding how people respond to the moral implications of this situation is crucial for everything from calculating the expected liability for manufacturers to anticipating how the legal system should react to deaths in situations of shared control between driver and car.

How the people who set these policies will react is anyone’s guess. Based on press coverage of a 2016 Tesla Autopilot death and the killing of a pedestrian by an Uber self-driving car, it seems that people might be inclined to heap blame on the human drivers and spare the cars, Awad and his colleagues suggest. “Was this pattern a fluke of the circumstances of the crash and press environment?” they ask. “Or does it reflect something psychologically deeper that may color our responses to human-machine joint action and, in particular, when a human-machine pair jointly controls a vehicle?”

Bad call vs negligence

To test this, Awad and colleagues constructed a series of vignettes that described a shared-control car killing a pedestrian. The vignettes covered a range of sole- and dual-driver options, from a sole human driver to a fully automated car (referred to in the vignettes as a “machine driver”).

In between were dual-driver options with a primary human driver and a secondary machine driver (much like Toyota’s Guardian) or a primary machine driver and a secondary human driver (like Tesla’s Autopilot). There were also options for human-human pairings — like dual-control cars for people learning to drive — and machine-machine pairings.

The researchers explored how people apportioned blame in these setups. First, each person read a description of one of the different driver combinations, so each participant only ever had to think about one type of jointly driven car. Then, they read about two different crash scenarios, both of which described driver error leading to the death of a pedestrian. But the two scenarios differed in whether both drivers made an error or only one of them did.

In “bad intervention” scenarios, the main driver (either human or machine) makes a driving decision that would avoid hitting the pedestrian, but the secondary driver intervenes with the wrong call, resulting in a collision. In bad interventions, it makes sense that the secondary driver is really the one to blame, since they overrode the correct actions of the primary driver.

This expectation matches how people reacted. When participants saw this scenario and rated, on a scale of 1 to 100, how blameworthy each driver was and how much each had caused the death, the secondary driver came out bearing most of the blame. This was true whether the secondary driver was a human or a machine.

In “missed intervention” scenarios, though, things looked a little different. In these scenarios, the main driver is the one who makes the wrong call, but the secondary driver doesn’t intervene to rescue the situation. In these scenarios, both drivers made an error.

Participants did apportion some blame to both drivers in these scenarios, but the human took more blame than the car. This was true when the human had made the initial error and the car had failed to correct it, and it was also true when the car made the error and the human failed to correct it. That suggests participants weren’t assigning blame based on whether a driver had made a wrong call or merely failed to intervene; rather, they were assigning blame based on the type of driver.

These results replicated across different studies, including a replication that used a different participant sample and two that tweaked the presentation of the vignettes to make them look more like newspaper articles.

Under-reacting to the dangers

The results might be intuitive, but science is about checking whether intuitions are supported by actual data. People seem to understand that self-driving cars have pretty hard limitations on dealing flexibly with a range of scenarios and that the more flexible driver — the human — should be trying to compensate for that.

The study isn’t the final word. For one thing, participants knew they weren’t reading real stories, which could have affected their answers — but the responses do match the public reactions to the Tesla and Uber crashes, the authors note. “While there may be many psychological barriers to self-driving car adoption,” they write, “public over-reaction to dual-error cases is not likely to be one of them.”

Instead, the researchers argue, there might be a risk of under-reaction. If regulation of self-driving cars is based on public pressure and juries’ decisions, car manufacturers might be absolved of blame fairly easily. And that might reduce the pressure on manufacturers to improve their systems.

There’s a historical analogue for this, write Awad and his co-authors: people used to ignore car manufacturers’ liability when car occupants were hurt in collisions, because they attributed the collisions to driver error. “Top-down regulation was necessary to introduce the concept of ‘crash worthiness’ into the legal system,” they note. Similarly, they argue, self-driving cars might need top-down safety regulation.

Nature Human Behaviour, 2019. DOI: 10.1038/s41562-019-0762-8 (About DOIs).

                                 

                  
