Automation bias

"When making a decision, I have too much confidence in machines, and tend to neglect my own judgement."

Definition

Having confidence in a machine with a low error rate can certainly be a wise choice. Nonetheless, this confidence can lead people to underestimate the risks associated with the machine. Automation bias occurs when someone overrides a correct human decision in response to an erroneous output from a machine. The bias can also occur when an omission on the part of the machine leads the user not to act when they should in fact be doing something. People tend to overrate the machine’s accuracy and to accept its decisions too readily, lowering their vigilance and their critical judgment of the information it provides. One’s personality, as well as the difficulty and complexity of the task to be performed, influence one’s susceptibility to the bias [1].

Example

A new technology called Clinical Decision Support (CDS) allows medical personnel to enter their patients’ symptoms into a computer to obtain diagnostic suggestions. In 2016, researchers reported a bug in an algorithm that mistakenly recommended that clinicians prescribe a test to detect lead in babies’ blood when the test was in fact unnecessary [1]. Even though the recommendation ran contrary to what the clinicians knew, some of them nevertheless prescribed the test, incurring costs for the healthcare system and causing needless worry to the babies’ parents. When bugs of this kind are not caught by humans, they can damage health and pose a risk to patient safety.

Explanation

The notion of confidence is at the heart of this bias. While confidence in oneself can reduce the bias, confidence in the machine may increase it; automation bias is a balancing act between these two types of confidence. Indeed, it has been shown that people tend to favour artificial assistants over human ones, whom they judge to be less reliable [2]. The bias is stronger among people who are not yet experienced at a particular task (e.g., piloting an airplane), but also among people who are very used to the machine that is assisting them. Another hypothesis about the origin of the bias relates to one’s attitude toward authority: people are more likely to submit blindly to the authority of a machine because it is supposed to have been specifically programmed to avoid error (unlike humans).

Consequences

Even though machines are very useful for saving both effort and time, it remains important to be aware of the consequences that can arise from discounting the risk of error. In the field of health, for example, programs that can pinpoint a diagnosis are very promising and are likely to improve clinical decision-making, as well as interventions and outcomes for patients. On the other hand, it has been shown that between 7% and 11% of professionals who had arrived at the proper diagnosis before consulting the machine changed their decision in response to faulty advice given by the machine [4]. In aviation, where pilots must make crucial decisions in response to information provided by machines, studies have reported that even though few pilots follow erroneous advice, many neglect to act when it is necessary because the machine has not advised them to do so [3].

Thoughts on how to act in light of this bias

  • As a user of machines, educate yourself about the algorithms and processes that underlie automated decision-making programs.

  • As a policy maker, improve the awareness and accountability of professionals in their decision-making.

  • As a practitioner or machine-designer, opt for a “support” rather than a “decision-making” protocol when designing software.

How is this bias measured?

To determine whether there is an automation bias, researchers may show that study subjects do not react to an error on the part of a machine even though their training indicates that they should. For example, real airline pilots may take part in a simulated Boeing 747 flight between Los Angeles and San Francisco. The experiment includes five scripted events faced by each pilot: occasions when, unknown to the pilots, errors in judgement could occur due to automation bias. For example, the machine could react badly to a human command, issue an erroneous command, or suffer a malfunction of the autopilot system. The researchers observe the interaction between the pilot and the machine. If a pilot does nothing in response to an error by the machine even though their experience and training would normally indicate that they should act, it is considered an instance of automation bias [4].
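To make the scoring logic concrete, here is a minimal sketch, not taken from the studies cited above, of how responses to scripted automation errors might be tallied as commission errors (following wrong advice) or omission errors (failing to act because the automation stayed silent). All event details, field names, and values are illustrative assumptions.

# Hypothetical illustration only: tallying automation-bias errors from
# scripted simulator events. Event details and names are invented.
from dataclasses import dataclass

@dataclass
class Event:
    automation_erred: bool   # the automation gave wrong advice or stayed silent when it should not have
    error_type: str          # "commission" (wrong advice offered) or "omission" (missing alert)
    pilot_corrected: bool    # the pilot noticed the problem and intervened

def tally_bias_errors(events):
    """Count events where the automation erred and the pilot did not intervene:
    these are the instances scored as automation bias."""
    counts = {"commission": 0, "omission": 0}
    for e in events:
        if e.automation_erred and not e.pilot_corrected:
            counts[e.error_type] += 1
    return counts

# One simulated session with five scripted events, as in the design described above.
session = [
    Event(True, "commission", pilot_corrected=False),  # pilot follows the machine's wrong command
    Event(True, "omission", pilot_corrected=True),      # pilot notices the missing alert and acts anyway
    Event(True, "omission", pilot_corrected=False),     # pilot stays passive although action was needed
    Event(True, "commission", pilot_corrected=True),    # pilot overrides the wrong command
    Event(False, "", pilot_corrected=True),             # automation behaves correctly; nothing to score
]

print(tally_bias_errors(session))  # {'commission': 1, 'omission': 1}

In this sketch, only the first and third events would count as automation bias: one commission error and one omission error.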

This bias is discussed in the scientific literature: [rating scale image]

This bias has social or individual repercussions: [rating scale image]

This bias is empirically demonstrated: [rating scale image]

References

[1] Goddard, Kate, Abdul Roudsari & Jeremy C. Wyatt (2010). Automation bias: a systematic review of frequency, effect mediators and mitigators. Journal of the American Medical Informatics Association 19(1): 121-127.

[2] Dzindolet, Mary T., Scott A. Peterson, Regina A. Pomranky, Linda G. Pierce & Hall P. Beck (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies 58(6): 697-718.

[3] Skitka, Linda J., Kathleen L. Mosier & Mark Burdick (1999). Does automation bias decision-making? International Journal of Human-Computer Studies 51(5): 991-1006.

[4] Parasuraman, Raja & Dietrich H. Manzey (2010). Complacency and bias in human use of automation: an attentional integration. Human Factors 52(3): 381-410.

Tags

Individual level, Anchoring heuristic, Need for cognitive closure, Need for security

Author

Cloé Gratton is a PhD candidate in psychology at the Université du Québec à Montréal. She is affiliated with the Laboratoire des processus de raisonnement. She is also a co-founder of Shortcuts.

Translated from French to English by Kathie M. McClintock.

How to cite this entry

Gratton, C. (2020). Automation bias, trans. K. McClintock. In E. Gagnon-St-Pierre, C. Gratton & E. Muszynski (Eds). Shortcuts: A handy guide to cognitive biases Vol. 2. Online: www.shortcogs.com

