Research Shows Over-Reliance On AI When Making Life-Or-Death Decisions

In simulated life-or-death scenarios, about two-thirds of participants in a UC Merced study allowed a robot to change their initial judgments when it disagreed with them. The researchers called this a striking indicator of over-reliance on artificial intelligence.

Participants were told that the AI had limited abilities and might give incorrect advice; in fact, the advice it offered was entirely random. Even so, they let the robot sway their decisions. “As artificial intelligence advances at a rapid pace, we should be wary of the potential to over-trust it,” said Professor Colin Holbrook, the study’s principal investigator and a member of the Department of Cognitive and Information Sciences at the University of California, Merced. A growing body of research shows that people tend to over-trust AI, even when the consequences of a mistake are serious.

What society needs, Holbrook continued, is a consistent practice of skepticism. “We need a healthy dose of skepticism about artificial intelligence,” he said, “especially in decisions where lives are at stake.”

The study, published in the journal Scientific Reports, consisted of two experiments. In each, participants simulated controlling an armed drone that could fire a missile at a target shown on a screen. Photographs of eight targets flashed by briefly, less than a second each, marked with a symbol identifying them as ally or enemy. “We calibrated it so the visual challenge was difficult but doable,” Holbrook explained.

Then an unmarked target appeared on the screen, and participants had to recall from memory whether it was friend or foe and choose whether to launch a missile or hold fire. After the participant made a choice, the robot offered its opinion. It might say, “Yes, I think I saw an enemy symbol too,” or “I disagree. I think this image had an ally symbol.”

Participants then had two chances to confirm or revise their choice as the robot added comments such as “I hope you are right” or “Thank you for changing your mind,” while never wavering in its assessment. The results varied slightly with the type of robot used. In one scenario, a life-sized, human-like android stood in the room with participants and could turn and gesture toward the screen. Other scenarios projected a humanoid robot on the screen or used box-like robots that looked nothing like people.

The humanoid robots were slightly more persuasive when urging participants to change their minds, but the effect held across all types: roughly two-thirds of participants reversed their decisions even when the robots looked nothing like humans. Conversely, when the robot agreed with the initial choice, participants almost always stuck with it and felt markedly more confident it was correct. (Participants were never told whether their final choices were right, adding to the uncertainty of their actions. Notably, their initial decisions were correct about 70% of the time, but final accuracy fell to about 50% after they took the robot’s unreliable advice into account.)
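To see why unreliable advice drags accuracy toward chance, it helps to run the numbers. The Python sketch below is a hypothetical back-of-the-envelope model, not the study’s actual protocol or data: it assumes the robot’s verdict is a coin flip, borrows the 70% initial accuracy and the two-thirds switching rate from the figures above, and treats each trial as independent. All names (simulate_trials, p_switch_on_disagree, and so on) are invented for illustration.

```python
import random

def simulate_trials(n_trials: int = 100_000,
                    p_initial_correct: float = 0.70,     # initial accuracy reported above
                    p_switch_on_disagree: float = 2 / 3  # share who changed their minds
                    ) -> float:
    """Estimate final accuracy when the robot's advice is purely random.

    Hypothetical model, not the study's protocol: the robot's verdict is a
    coin flip, so it contradicts the participant half the time; on
    disagreement, the participant flips their answer with probability
    p_switch_on_disagree.
    """
    correct = 0
    for _ in range(n_trials):
        initially_correct = random.random() < p_initial_correct
        robot_disagrees = random.random() < 0.5  # random advice agrees half the time
        if robot_disagrees and random.random() < p_switch_on_disagree:
            correct += not initially_correct  # participant reverses their decision
        else:
            correct += initially_correct      # participant keeps their decision
    return correct / n_trials

print(f"estimated final accuracy: {simulate_trials():.1%}")
```

Under these assumptions the estimate comes out near 57%, already a steep fall from 70% toward chance; that the study observed roughly 50% suggests disagreement swayed participants even more strongly than this simple independence model allows.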

Before the simulation, the researchers showed participants images of innocent civilians, including children, alongside the devastation left by a drone strike, and urged them to treat the simulation as if it were real and to avoid killing innocents by mistake. Follow-up interviews and surveys indicated that participants took their decisions seriously. In other words, the over-trust observed in the study arose despite a sincere desire to make the right choice and avoid harming innocent people.

Holbrook stressed that the study was designed to probe the broader question of placing too much trust in AI under uncertain conditions. The findings extend beyond military decisions and could apply to contexts such as police being influenced by AI in decisions about the use of lethal force, or paramedics relying on AI to decide whom to treat first in an emergency. To some extent, they may even carry over to major life decisions such as buying a home. “Our project was about high-risk decisions made under uncertainty when the AI is unreliable,” he said.

The findings also feed the public debate over AI’s growing role in our daily lives: do we trust AI, or not? The results raise further concerns, Holbrook added. For all its remarkable progress, AI’s “intelligence” may include no ethical values and no genuine understanding of the world. We should be careful every time we hand artificial intelligence control over another part of our lives, he urged. “We watch artificial intelligence perform extraordinary tasks and assume that because it is better in one area, it will be just as good in another.”
