Study collects data on human moral judgments for autonomous vehicle decision-making

New experiment aims to teach self-driving cars morality without the trolley problem


Researchers have devised a new experiment to gain insight into how people judge the morality of driving decisions, with the aim of using the data to train autonomous vehicles to make "good" choices. Rather than relying on the well-known ethical dilemma posed by the "trolley problem," the study focuses on capturing a wider range of realistic moral challenges drivers encounter on the road.

Moving beyond the trolley problem allows the researchers to explore the complexities of everyday driving decisions. With the increasing presence of autonomous vehicles on our roads, ensuring they make morally sound choices is crucial to the safety and trust of passengers and pedestrians alike.

The trolley problem presents a scenario in which a person must decide whether to intentionally sacrifice one life to save several others, an act that violates moral norms even though it minimises harm. The researchers argue, however, that this dilemma is not representative of the moral decisions drivers face every day. Questions such as whether to exceed the speed limit or run a red light are far more common, and they too can have life-or-death consequences.

To address the lack of data in this area, the researchers designed a series of experiments to collect information on how humans make moral judgments in low-stakes traffic situations. They created seven driving scenarios, including one in which a parent must decide whether to run a red light in order to get their child to school on time.

Each scenario was programmed into a virtual reality environment, allowing participants to experience the sights and sounds of the driver's actions. The researchers utilised the Agent Deed Consequence (ADC) model, which considers three factors in a moral judgment: the character or intent of the person (agent), the action being taken (deed), and the outcome of that action (consequence).
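
For illustration, here is a minimal sketch of how a single scenario might be encoded under the ADC model, assuming each factor is reduced to a positive or negative value; the class and field names are hypothetical and not drawn from the study.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ADCScenario:
    """One driving scenario coded on the three ADC factors.

    The binary good/bad framing is an assumption for illustration;
    the study's actual coding scheme may differ.
    """
    agent_good: bool        # character or intent of the driver (Agent)
    deed_good: bool         # the action taken, e.g. obeying the signal (Deed)
    consequence_good: bool  # the outcome of that action (Consequence)

# Example: a well-intentioned parent who runs a red light but arrives safely.
scenario = ADCScenario(agent_good=True, deed_good=False, consequence_good=True)
```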

To gather robust data, the researchers developed eight variations of each scenario, produced by altering the combination of agent, deed, and consequence; varying each of the three factors between a positive and a negative version yields 2 x 2 x 2 = 8 combinations. Participants were then asked to rate the morality of the driver's behavior on a scale from one to ten.
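
Those eight combinations can be enumerated mechanically, as the sketch below shows. It assumes simple good/bad levels for each factor; the researchers' exact manipulations may differ.

```python
import itertools

FACTORS = ("agent", "deed", "consequence")

def all_variations():
    """Yield the eight good/bad combinations of the three ADC factors."""
    for levels in itertools.product(("good", "bad"), repeat=len(FACTORS)):
        yield dict(zip(FACTORS, levels))

for i, variation in enumerate(all_variations(), start=1):
    print(i, variation)
# 1 {'agent': 'good', 'deed': 'good', 'consequence': 'good'}
# ...
# 8 {'agent': 'bad', 'deed': 'bad', 'consequence': 'bad'}
```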

The ultimate goal of this study is to generate data that can be used to develop AI algorithms for moral decision-making in autonomous vehicles. By understanding how humans perceive moral behavior in driving situations, researchers hope to train self-driving cars to make ethical choices on the road.
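
As a rough illustration of how such ratings could feed a decision-making algorithm, the sketch below fits a simple linear model mapping the three ADC factors to a one-to-ten morality rating. The study does not specify a modelling approach, and the ratings used here are invented placeholders, not study data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: agent_good, deed_good, consequence_good (1 = positive, 0 = negative).
X = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
])
# Invented placeholder ratings for illustration only (not study data).
y = np.array([9.1, 6.4, 4.2, 1.8])

model = LinearRegression().fit(X, y)

# Predicted morality rating for a new factor combination.
print(model.predict([[1, 0, 0]]))
```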
