Decision Making in Emergency Medicine: Biases, Errors and Solutions
Ebook · 774 pages · 9 hours

About this ebook

The book covers scenarios in which errors, biases and systemic barriers prevail in emergency medicine, discusses their impact, and offers solutions to mitigate their undesired outcomes. Clinical reasoning in emergency medicine is a complex exercise in cognition, judgment and problem-solving that is prone to mistakes. The book presents cases, written by a team of emergency specialists and trainees, in an engaging format that is helpful for practicing and teaching emergency doctors and trainees alike.

The book discusses 60 different types of biases and errors through clinical cases, together with strategies to mitigate them—a concept known as ‘cognitive debiasing’ that has the potential to reduce diagnostic error, and therefore morbidity and mortality. It aims to support readers during the assessment of patients in the emergency department. Each chapter includes four cases illustrating the bias, error or barrier discussed, followed by a potential solution.

In polishing readers’ thinking and behavior, the book aims to enhance their clinical competence in the emergency department.

Language: English
Publisher: Springer
Release date: May 29, 2021
ISBN: 9789811601439


    © Springer Nature Singapore Pte Ltd. 2021

    M. Raz, P. Pouryahya (eds.), Decision Making in Emergency Medicine, https://doi.org/10.1007/978-981-16-0143-9_1

    1. We Can’t Escape Bias

    Justin Morgenstern

    University of Toronto, Toronto, ON, Canada

    Email: justin.morgenstern@utoronto.ca

    As this book clearly demonstrates, the human mind is imperfect. We all make mistakes. We are all susceptible to bias. Through learning about the various cognitive biases, and identifying some strategies to mitigate common errors, the hope is that readers will be able to avoid future mistakes. Unfortunately, there are limitations to the application of cognitive theory in medicine. Even armed with the wealth of knowledge provided by this book, we will still make mistakes.

    It is unlikely that we will ever completely eliminate medical error. The decisions we make are incredibly complex, and the human mind is inherently fallible. Integrating what we know about cognitive theory and psychology into medicine is a logical step forward, but there are significant limitations, both theoretical and practical, to the application of cognitive theory in medicine. This chapter explores some of those limitations.

    1.1 Theoretical Problems

    An assumption that underlies much of this book is that, although the human mind is fallible, it also has the tools to self-correct. This is often explained in terms of dual process theory. Most of our thinking is rapid, unconscious, and intuitive—system 1 thinking. However, system 1 thinking is also prone to bias. Luckily, we are also capable of slower, more contemplative, analytical thought—system 2 thinking. Most proposed solutions for biased thinking involve recognizing faulty type 1 thinking, and shifting to the presumably more accurate type 2 thinking.

    However, this simple blueprint may be misleading. Type 1 thinking is not always bad and type 2 thinking is not always better. In fact, especially when it comes to experts like physicians, it isn’t clear that thinking is so easily dichotomized. The clean distinction between type 1 and type 2 thinking is based largely on studies of undergraduate students, usually performing tasks in which they lack expertise, and so it isn’t clear that these results are applicable to expert medical decision making.

    The heuristics employed in type 1 thinking are efficient mental strategies that help us deal with uncertainty and ambiguity. Experts often use heuristics very effectively. In fact, in some scenarios, heuristics may lead to better decisions than analytical thinking [1–3]. Discussions about cognitive biases tend to overemphasize the harms of using heuristics, while ignoring their many benefits. In medical emergencies, the speed of (well trained) type 1 thinking is almost certainly more important than the accuracy of formal analytic thought. Although it occasionally fails, it is important to recognize that type 1 thinking is not inherently bad [4–6].

    Similarly, although analytic thought results in more accurate decisions in some settings, it is by no means infallible. In fact, conscious reasoning can sometimes produce worse results, because type 2 thinking puts a heavy load on working memory, which has significant limitations [5]. Furthermore, many of the described cognitive biases also impair type 2 thinking. For example, premature closure and confirmation bias are both phenomena that arise during data gathering and synthesis, and are therefore more likely to be associated with type 2 thinking [5, 7].

    A final and significant problem for dual process theory is the poorly defined interface between systems 1 and 2. How exactly is one supposed to effectively and consistently transition from type 1 to type 2 thinking? System 1 is generally described as always active, rapidly sorting through the avalanche of available data. Meanwhile, system 2 is described as monitoring system 1 and making corrections as necessary. However, it is not clear how that monitoring happens. What triggers the transition from system 1 to system 2? The act of monitoring would seem to require rapid analysis and pattern recognition to identify possible errors. Thus, the monitoring of system 1 sounds like a system 1 process, which presumably would also be prone to the same type of errors.

    If we want to correct errors, we need to be able to recognize those errors. Strategies to mitigate cognitive errors are based on the major assumption that we have active control over our decision making processes. They assume that, in the moment, we will be able to recognize that our thinking is biased and flip from non-analytical to analytical thinking. Unfortunately, there is little evidence that this process occurs reliably [2].

    It seems like a simple task—we recognize errors in other people’s thinking all the time. However, the blind spot bias tells us that we have a much harder time identifying our own biases. In fact, a core paradox of cognitive theory is that you cannot know that you are wrong. While in the midst of making a mistake, being wrong feels exactly like being right [6]. Thus, although we can recognize past errors, there is actually no mechanism that alerts us that we are currently wrong.

    Much like understanding the concept of a visual blind spot does not eliminate the blindness, simply understanding the existence of cognitive biases does not prevent them from occurring. In fact, Daniel Kahneman (the Nobel Prize-winning originator of dual process theory) says that after 30 years of study, although he can more readily recognize errors in others, he isn’t sure that he is any better at avoiding these biases himself [8].

    1.2 Biases Are Often More Complex than We Make Them Seem

    Individual biases are generally more complex than we initially realize. We tend to talk about biases as dichotomous. We either committed an error or we didn’t; our thinking was either biased or it wasn’t. However, much of the research describes behavior that falls into a grey area between those two extremes.

    For example, although the original research on base rate neglect involved participants completely ignoring the base rate, further research has made it clear that the base rate is often considered, and errors, when they occur, mostly arise from not fully adjusting for the base rate, rather than completely ignoring it. Furthermore, the extent of the error is significantly influenced by the specifics of the scenario, and many biased results can be explained by rational thinking that simply conflicts with researcher expectations [9, 10].
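    The arithmetic behind base rate neglect is worth seeing once. The sketch below uses invented numbers (not taken from the cited studies) to show how a low base rate dominates the post-test probability, even for an apparently excellent test:

```python
# Worked example (illustrative numbers, not from the chapter): how the base
# rate dominates the probability of disease after a positive test.

def post_test_probability(prevalence: float, sensitivity: float,
                          specificity: float) -> float:
    """Probability of disease given a positive test, via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A test that is 99% sensitive and 95% specific sounds excellent, yet for a
# condition affecting 1 in 1000 patients, most positive results are false.
p = post_test_probability(prevalence=0.001, sensitivity=0.99, specificity=0.95)
print(f"Post-test probability: {p:.1%}")  # Post-test probability: 1.9%
```

Fully ignoring the base rate would suggest the patient almost certainly has the disease; fully adjusting gives roughly 2%. Partial adjustment, as described above, lands somewhere in the grey area in between.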

    The majority of the research establishing cognitive biases was performed in carefully controlled laboratory settings, usually with college undergraduates as the subjects. This is important because there is evidence that experience can reduce or eliminate biased thinking. For example, athletes demonstrate much better statistical intuition when a problem is presented using a sporting example, as compared to when the same problem is presented in a less familiar context [11]. Similarly, a classic puzzle used to demonstrate confirmation bias involves asking participants to prove the rule “if a card has a vowel on one side, it has an odd number on the other side”. In this abstract, non-intuitive example, people frequently demonstrate confirmation bias. However, if you present people with the exact same problem using a real-world example (“prove that if a person is drinking beer, that person must be over 18 years of age”), participants perform almost perfectly [9]. Therefore, we should not automatically assume that the biases described in laboratory settings generalize to expert clinicians [7].
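    The logic of the card puzzle can be made concrete with a short sketch (an illustration, not from the chapter): a card can falsify the rule only if it might pair a vowel with an even number, so only visible vowels and visible even numbers are worth turning over.

```python
# The rule under test: "if a card has a vowel on one side, it has an odd
# number on the other side." Confirmation bias leads people to turn the
# odd-number card (which can only confirm), instead of the even-number card
# (which could falsify the rule).

VOWELS = set("AEIOU")

def must_turn(visible_face: str) -> bool:
    """Return True if turning this card could falsify the rule."""
    if visible_face.isalpha():
        return visible_face.upper() in VOWELS   # vowel: back might be even
    if visible_face.isdigit():
        return int(visible_face) % 2 == 0       # even: back might be a vowel
    return False

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # ['E', '4'] — many people wrongly pick '7'
```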

    1.3 Studies in Medicine Are (Thus Far) Underwhelming

    The true incidence and impact of cognitive biases in medicine is unknown. The evidence is incomplete and imperfect. According to one meta-analysis, the majority of studies looking at cognitive bias in medicine did not take place in real clinical scenarios, but instead employed paper-based or simulated vignettes, often done by trainees, and therefore may not generalize well to clinical practice [12]. Studies that have attempted to examine bias in clinical settings have generally been retrospective and focused on known misdiagnoses rather than all clinical decisions. Therefore, the results will be skewed by significant hindsight bias and selection bias.

    Any attempt to classify medical bias retrospectively is fraught with problems. When assessing cases, experts frequently disagree about which biases might be present. When looking at the same case, experts are twice as likely to identify biases if they are told the clinician chose the wrong diagnosis, a clear indication of hindsight bias [13]. Similarly, whether or not physicians believe an error has occurred is heavily influenced by the patient outcome [14].

    There seems to be a general consensus in medicine that diagnostic errors are more likely to result from cognitive errors than knowledge deficits. However, the evidence for this claim is somewhat unconvincing. One of the most frequently cited studies, a 2005 survey by Graber et al., is a retrospective analysis of 100 cases of known diagnostic error. They state that knowledge deficits were only involved in four cases, whereas faulty synthesis of data (such as premature closure) was involved in the vast majority. However, it is almost impossible to distinguish premature closure from a scenario in which a diagnosis was not considered because it was unknown to the clinician, or because a known disease presented in an unknown way (in other words, from a knowledge deficit) [15]. In fact, knowledge deficits (whether medical or statistical) could explain a lot of decisions that appear to be affected by bias. Thus, knowledge deficits may be an underestimated cause of diagnostic error [5]. Furthermore, addressing knowledge is the best technique we currently have to improve medical decision making.

    That being said, considering the sheer number of decisions we make in medicine, and the large number of possible biases, it is likely that these biases play an important role in medical error. Assuming that our decisions are impacted by these biases, the more important questions are whether, and how, we can prevent these errors.

    Unfortunately, the evidence that biases can be mitigated in medicine is mixed, with the bulk of the trials showing no benefit. There are a few trials that demonstrate improved diagnostic accuracy by trainees on paper-based vignettes when more time is taken for reflection [16, 17]. However, Sherbino et al. (2012) actually demonstrated more errors occurred when trainees were instructed to slow down and be thorough [18]. Numerous other studies have demonstrated no difference in accuracy between clinicians instructed to work rapidly and those instructed to work slowly and thoroughly [19–22].

    Three studies looked at educational interventions designed to improve diagnostic thinking by educating students about cognitive biases (meta-cognition). Another study attempted to use a cognitive debiasing checklist, with questions such as “did I consider the inherent flaws of heuristic thinking?” None of these interventions resulted in improved accuracy [23–26].

    Considering the potential extent of the problem, there has been relatively little research into potential solutions. The failures thus far are a sobering reminder of the complexity of human cognition. We should probably be skeptical of overly simplistic solutions. Our training as medical experts spans many years, and our training in critical thinking (whether formal or informal) started many years before that. It is doubtful that simple instructions to think about our thinking will be enough to change the momentum of our ingrained strategies.

    However, I don’t think these early failures should dissuade us. You wouldn’t decide that a child has no musical ability after only a month of piano lessons, but our early attempts at teaching cognitive debiasing look a lot more like that month than 10,000 h of deliberate practice. We need more research, and we need to find ways to train doctors to use their cognitive resources efficiently and effectively.

    1.4 Recognizing Potential Harms

    Although improving medical decision making seems like a clear win, I think it is important to consider the potential harms of applying cognitive theory in medicine. The most obvious harm is opportunity cost. Thus far, there is no evidence that cognitive debiasing techniques improve decision making or patient outcomes. Time is a precious resource in medicine. If cognitive theory does not improve outcomes, the time and effort required to create curricula, teach, and learn this new material could be better used elsewhere.

    Likewise, in eschewing rapid heuristics and promoting slow analytic thought, debiasing techniques are likely to make the practice of medicine less efficient. This inefficiency would be worthwhile if it translates into better decisions. However, to date there is no evidence that these debiasing techniques are effective, so the inefficiency is just inefficient. In a worst case scenario, attempts to use slower analytic thinking in medical emergencies could result in delays to critical interventions and bad patient outcomes.

    Attempts to avoid cognitive biases could also result in substantial costs. Confirmation bias tells us to focus on ruling out alternatives, rather than searching for confirmatory evidence. However, there are always numerous potential alternative diagnoses. If the solution to confirmation bias is understood as requiring tests to rule out each of those alternatives, the result could be significant increases in testing, costs, and harms to our patients.

    A more subtle harm is the potential for attempts at debiasing to actually increase error. Many of the described biases exist at opposite ends of a spectrum, and avoiding one may cause us to commit the other. For example, the chapter on base rate neglect reminds us to consider the base rate whenever we make diagnostic decisions. Rare conditions are rare, and shouldn’t be pursued routinely. However, in avoiding the workup of rare conditions, we fall into another cognitive bias: the zebra retreat. Rare conditions, although rare, do happen, and sometimes do need to be worked up. The solution to one bias necessarily leads us towards another.

    1.5 Summary

    Although there seems to be little doubt that cognitive biases play some role in medical error, the extent of their impact is not clear. Most importantly, it isn’t clear if these biases can be prevented, and if so, how. Thus far, attempts to mitigate cognitive biases through educational programs in medicine have mostly failed, although the research has been quite limited thus far. It is also important to acknowledge that many of the processes described as biases are really heuristics that are frequently used to efficiently and accurately arrive at a correct diagnosis. When attempting to improve our cognition, we need to be careful not to throw the baby out with the bathwater.

    How should the practicing clinician proceed? As we are used to with most scientific reviews, the conclusion is: more research is needed. I am reassured by evidence that more experienced physicians are less prone to bias than trainees [25, 27]. It is likely that we can teach ourselves to be more effective thinkers, but we are a long way from understanding the full impact of these biases on medical practice, and more importantly the techniques that may help prevent them. In the meantime, astute clinicians will endeavor to learn about these biases, attempt to identify specific areas of cognitive reasoning that might be improved, and, most of all, remain humble in their clinical reasoning.

    References

    1. Croskerry P. Diagnostic failure: a cognitive and affective approach. In: Henriksen K, Battles JB, Marks ES, Lewin DI, editors. Advances in patient safety: from research to implementation, vol. 2. Concepts and methodology. Rockville, MD: Agency for Healthcare Research and Quality (US); 2005.

    2. Eva KW, Norman GR. Heuristics and biases—a biased perspective on clinical reasoning. Med Educ. 2005;39(9):870–2.

    3. Monteiro SM, Norman G. Diagnostic reasoning: where we’ve been, where we’re going. Teach Learn Med. 2013;25(S1):S26–32.

    4. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124–31.

    5. Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ. 2010;44(1):94–100.

    6. Dhaliwal G. Premature closure? Not so fast. BMJ Qual Saf. 2017;26(2):87–9.

    7. Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S. The causes of errors in clinical reasoning. Acad Med. 2017;92(1):23–30.

    8. Kahneman D. Thinking, fast and slow. New York: Farrar, Straus and Giroux; 2011.

    9. Klayman J. Varieties of confirmation bias. Psychol Learn Motiv. 1995;32:385–418.

    10. Koehler JJ. The base rate fallacy reconsidered: descriptive, normative, and methodological challenges. Behav Brain Sci. 1996;19(1):1–17.

    11. Nisbett RE, Krantz DH, Jepson C, Kunda Z. The use of statistics in everyday inductive reasoning. Psychol Rev. 1983;90:339–63.

    12. Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak. 2016;16(1):138.

    13. Zwaan L, Monteiro S, Sherbino J, Ilgen J, Howey B, Norman G. Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Qual Saf. 2017;26(2):104–10.

    14. Caplan RA, Posner KL, Cheney FW. Effect of outcome on physicians’ judgments of appropriateness of care. JAMA. 1991;265:1957–60.

    15. Graber ML. Diagnostic error in internal medicine. Arch Intern Med. 2005;165:1493–9.

    16. Mamede S, Schmidt HG, Rikers RM, Penaforte JC, Coelho-Filho JM. Influence of perceived difficulty of cases on physicians’ diagnostic reasoning. Acad Med. 2008;83:1210–6.

    17. Hess BJ, Lipner RS, Thompson V, Holmboe ES, Graber ML. Blink or think: can further reflection improve initial diagnostic impressions? Acad Med. 2015;90:112–8.

    18. Sherbino J, Dore KL, Wood TJ, Young ME. The relationship between response time and diagnostic accuracy. Acad Med. 2012;87:785–91.

    19. Ilgen JS, Bowen JL, Yarris LM, Fu R, Lowe RA, Eva K. Adjusting our lens: can developmental differences in diagnostic reasoning be harnessed to improve health professional and trainee assessment? Acad Emerg Med. 2011;18(S2):S79–86.

    20. Ilgen JS, Bowen JL, McIntyre LA, Banh KV, Barnes D, Coates WC, et al. Comparing diagnostic performance and the utility of clinical vignette-based assessment under testing conditions designed to encourage either automatic or analytic thought. Acad Med. 2013;88:1545–51.

    21. Norman G, Sherbino J, Dore K. The etiology of diagnostic errors: a controlled trial of system 1 versus system 2 reasoning. Acad Med. 2014;89:277–84.

    22. Monteiro SD, Sherbino JD, Ilgen JS, Dore KL, Wood TJ, Young ME, et al. Disrupting diagnostic reasoning: do interruptions, instructions, and experience affect the diagnostic accuracy and response time of residents and emergency physicians? Acad Med. 2015;90:511–7.

    23. Sherbino J, Yip S, Dore KL, Siu E, Norman GR. The effectiveness of cognitive forcing strategies to decrease diagnostic error: an exploratory study. Teach Learn Med. 2011;23:78–84.

    24. Sherbino J, Kulasegaram K, Howey E, Norman G. Ineffectiveness of cognitive forcing strategies to reduce biases in diagnostic reasoning: a controlled trial. CJEM. 2014;16:34–40.

    25. Shimizu T, Matsumoto K, Tokuda Y. Effects of the use of differential diagnosis checklist and general de-biasing checklist on diagnostic performance in comparison to intuitive diagnosis. Med Teach. 2013;35:e1218–29.

    26. Smith BW, Slack MB. The effect of cognitive debiasing training among family medicine residents. Diagnosis. 2015;2:117–21.

    27. Feltovich PJ, Johnson PE, Moller JH, Swanson DB. LCS: the role and development of medical knowledge in diagnostic expertise. In: Clancey WJ, Shortliffe EH, editors. Readings in medical artificial intelligence: the first decade. Reading, MA: Addison Wesley; 1984. p. 275–319.


    M. Raz, P. Pouryahya (eds.), Decision Making in Emergency Medicine, https://doi.org/10.1007/978-981-16-0143-9_2

    2. Aggregate Bias

    Khin Moe Sam

    Emergency Department, Dandenong Hospital, Monash Health, Dandenong, VIC, Australia

    Faculty of Medicine, Nursing and Health Sciences, School of Clinical Sciences at Monash Health, Monash University, Melbourne, VIC, Australia

    Email: khin.sam@monashhealth.org

    Have you ever prescribed a medication to a patient because of their repetitive request, despite there being no clear clinical indication? Have you ever ordered an unnecessary investigation because you felt that your patient was somehow different without any real grounds to believe so? Have you ever ordered a test because a patient or their relative insists you do so to exclude a condition even though such a diagnosis is implausible based on their presentation?

    These are all examples of ‘aggregate bias’, where a physician commits to unnecessary investigations or treatments based on the fallacious belief that statistical associations found in a patient population do not apply to a particular individual. This often stems from a tendency for physicians to rely on anecdotal evidence from past experience or to treat their own patients as atypical. Physicians may use aggregate bias to rationalise treating an individual patient differently from what the clinical guidelines deem appropriate for that group of patients.

    Furthermore, the clinician’s behaviour may be augmented by a patient’s demanding behaviour. In such situations, aggregate bias may be compounded by commission bias, where physicians have a tendency to avoid harm by active intervention or by ‘doing something’ for the patient [1].

    Definition: The belief that aggregated data, such as that used to inform evidence-based practice, do not apply to the individual.

    2.1 Case 1

    Sarah is a 24-year-old female at 8 weeks gestation. She has had one previous uncomplicated pregnancy. She has no significant past medical history and is a non-smoker. She has been receiving antenatal care from her GP and had an ultrasound earlier in the week which showed a single live intrauterine gestation. Her blood group is A positive. She presents to the Emergency Department (ED) at 9 pm after experiencing vaginal bleeding for the past 3 h. The bleeding started as spotting and became slightly heavier, comparable to a light period for her. It is associated with mild lower abdominal cramping. She reports no dizziness or urinary symptoms.

    You, a junior emergency registrar, assess Sarah, concerned that she may be having a miscarriage. You find her to be afebrile with a heart rate of 80/min and blood pressure of 115/70 mmHg. Her respiratory rate is 17/min and she is maintaining oxygen saturation of 98% on room air. Her abdominal examination is normal. You perform a speculum examination and find a closed cervical os with no abnormality.

    You order a range of blood tests and Sarah is observed in the ED while you wait for the results. Her bleeding settles in the meantime. Her haemoglobin is 135 g/L (110–160) and her serum beta-HCG is 65,000 IU/L, which is appropriate for her gestation.

    Despite being reassured by her examination findings and test results, you make a plan to keep the patient overnight in the short stay unit for an obstetric ultrasound in the morning. A consultant present during the night shift handover does not agree with the plan and advises that Sarah can be discharged if there are no other outstanding issues. They suggest that Sarah could either return the next day for an ultrasound or have one organised through her GP. The consultant discusses this with Sarah, who agrees to go home and be followed up by her GP.

    The consultant then discusses with you the reasoning behind your plan to keep a low-risk, asymptomatic patient overnight. You reveal a past experience with a patient with first trimester bleeding whom you discharged home, only for them to return with worsening bleeding and cervical shock. The consultant acknowledges the concern, but nonetheless recommends risk-stratifying patients on a case-by-case basis in accordance with established guidelines, rather than on the basis of a past anecdote.

    2.2 Case 2

    Julia, an 18-year-old female, presents to a tertiary ED with a rash. She presented to the ED 1 day earlier with a sore throat and was seen by a junior registrar. Upon reviewing the documentation, you, an emergency physician, note that Julia had a 2-day history of fever, sore throat, cough and myalgia. She was febrile at 38.2 °C, but the rest of her vital signs were within normal limits. On examination of her throat, she had swollen and erythematous tonsils, but no tonsillar exudate or generalized cervical lymphadenopathy. No joint pain or tenderness was noted. She was prescribed amoxicillin by the treating doctor and discharged home.

    You revisit the history with Julia and find that it is consistent with the documentation from the previous presentation. Julia tells you that after the second dose of the amoxicillin she broke out in a rash. However, she reports no known allergies. You assess Julia and find that she has a temperature of 37.8°C, heart rate of 92/min, blood pressure of 100/65 mmHg, respiratory rate of 16/min and oxygen saturation of 98% on room air. She has a widespread non-blanching maculopapular rash which is more pronounced over the extremities. You send off a rapid monospot test which comes back positive. You make a diagnosis of amoxicillin-related rash.

    You explain to Julia that the antibiotic was unnecessary and is in fact responsible for her rash. You apologise for the suboptimal care given earlier, tell her to stop taking the antibiotic, and discharge her home with supportive therapy.

    A review of the records of patients seen by the prescribing doctor shows that they prescribed antibiotics to all patients presenting with a sore throat.

    You discuss the case with the treating doctor to better understand their rationale for prescribing antibiotics. The doctor reveals that they used to work in Central Australia, where they saw patients whose bacterial tonsillitis was complicated by glomerulonephritis or infective endocarditis, and that it had since been their standard practice to prescribe antibiotics to all patients with tonsillitis. After acknowledging this underlying reason, you explain the epidemiology of sore throat in the local region and the standard of practice in the local hospital. To mitigate the effect of aggregate bias, you also explain the concept of antimicrobial stewardship and the use of guidelines and clinical scores, such as the Centor score [4], to risk-stratify for streptococcal sore throat. The registrar understands the explanation, plans to review the literature further, and sets goals to improve their routine standard of care.
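    The Centor score mentioned in this case can be sketched as follows. The criteria are given here as commonly described (one point each for tonsillar exudate, tender anterior cervical lymphadenopathy, history of fever, and absence of cough), together with the McIsaac age adjustment; exact thresholds and management advice vary by guideline, so verify against local practice.

```python
# A sketch of the Centor score (with McIsaac age modification), as commonly
# described. Illustrative only — not a substitute for local guidelines.

def centor_score(exudate: bool, tender_nodes: bool, fever: bool,
                 cough: bool, age: int) -> int:
    # One point each: tonsillar exudate, tender anterior cervical nodes,
    # history of fever, and absence of cough.
    score = sum([exudate, tender_nodes, fever, not cough])
    # McIsaac modification: +1 if age 3-14, -1 if age >= 45.
    if 3 <= age <= 14:
        score += 1
    elif age >= 45:
        score -= 1
    return score

# Julia from Case 2: no exudate, no lymphadenopathy, febrile, has a cough, age 18.
print(centor_score(exudate=False, tender_nodes=False, fever=True,
                   cough=True, age=18))  # 1 — low probability of streptococcal infection
```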

    2.3 Case 3

    Paul, a 40-year-old male, was brought into ED at 2 am by his friends after he got punched in the face. Paul tells you, a junior emergency registrar, that he was hanging out with his friends at a pub and got into an argument with a stranger. He then got punched once to the left side of the face. He had two glasses of beer prior to the incident. The event was witnessed by his friends. He and his friends report no head strike or loss of consciousness. Paul reports one episode of vomiting about 15 min after the incident. He has ongoing mild nausea, but no further vomiting. He has no headache, neck pain or limb paraesthesia. He is complaining of pain over the left maxillary region.

    Paul has no significant past medical history and is not on any regular medications. He has smoked about five cigarettes per day for the last 20 years and he has 2–3 standard drinks per night, three nights per week.

    On examination, you find Paul to have a temperature of 36.9 °C, heart rate of 98/min, blood pressure of 135/70 mmHg, respiratory rate of 19/min and oxygen saturation of 97% on room air. His GCS 2 h post injury is 15. His pupils are equal and reactive bilaterally. His cranial nerve examination is normal. Motor, sensory and cerebellar examinations are also normal. You note no retrograde amnesia.

    You find no scalp haematoma or depressed skull fracture on examination. There is a tender bruise over the left maxillary region without bony crepitus or subcutaneous emphysema. His mouth opening is normal without any trismus. No battle sign or racoon sign are noted. The cervical spine examination is normal.

    You order non-contrast CT scans of the brain, facial bones and cervical spine, and analgesia for Paul. You organize for Paul to be observed in the short stay unit while he awaits his scans. During morning handover, it is questioned why the patient needed a CT scan of his brain and cervical spine in addition to the facial bones. You explain that you have previously dealt with similar patients who sustained spinal injuries after being king-hit, and that it has therefore been your practice to organize both CT brain and CT cervical spine for anyone punched in the head and neck region. The consultant explains the distinction between dangerous and non-dangerous mechanisms of injury, and the other factors to consider when requesting an investigation with potentially significant adverse effects, such as the increased risk of malignancy from the extra radiation exposure in this case. They point you to clinical decision rules such as the NEXUS criteria and the Canadian C-Spine Rule for cervical spine imaging, and the Canadian CT Head Rule for brain imaging.
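    The structure of the NEXUS criteria the consultant mentions can be sketched in a few lines. The five low-risk criteria below are as commonly summarized; this is an illustration of the rule's all-or-nothing structure, not clinical advice.

```python
# A sketch of the NEXUS low-risk criteria: cervical spine imaging can be
# deferred only when ALL five criteria are satisfied. Illustrative only.

NEXUS_CRITERIA = [
    "no posterior midline cervical tenderness",
    "normal level of alertness",
    "no evidence of intoxication",
    "no focal neurological deficit",
    "no painful distracting injury",
]

def imaging_indicated(findings: dict) -> bool:
    """Imaging is indicated if any low-risk criterion is NOT satisfied."""
    return not all(findings.get(c, False) for c in NEXUS_CRITERIA)

# Paul from Case 3: alert (GCS 15), no neck tenderness, no deficit, not
# intoxicated; his painful facial injury could count as a distracting injury.
paul = {c: True for c in NEXUS_CRITERIA}
paul["no painful distracting injury"] = False  # judgment call in his case
print(imaging_indicated(paul))  # True — not cleared by NEXUS alone
```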

    The CT scans are reported back as soft tissue injury of the face without any underlying fracture or acute abnormality. Paul is discharged home with simple analgesia afterwards.

    2.4 Case 4

    You are a junior ED registrar reviewing Michelle, a 33-year-old female who presented with a headache. She has a past history of migraine since the age of 18 and also has a strong family history of migraine. She reports that this headache is similar to her usual migraine attack, but more severe and has not responded to her usual treatment regimen of paracetamol, ibuprofen and rest. Her headache started yesterday evening and she was not able to sleep well during the night because of it. Michelle reports being stressed recently at work and also having her period, which has triggered her migraine in the past. She has associated nausea and was seeing flashing lights intermittently. She has not eaten anything since yesterday afternoon due to nausea. Michelle does not report any fever or recent travel. She has no other past medical history and is not on any regular medications. She is not a smoker.

    Her vital signs are: temperature of 36.8 °C, heart rate 88/min, blood pressure 135/77 mmHg, respiratory rate 15/min and oxygen saturation of 99% on room air. Her GCS is 15 and her pupils are equal and reactive without objective photophobia. Her neurological examination is normal. No neck stiffness or rash is present on examination and her systemic examination is normal.

    You make a provisional diagnosis of migraine. You approach an emergency consultant and have a discussion regarding the management of Michelle. You formulate a plan of IV cannulation, IV rehydration and analgesia which the consultant agrees with. In regard to analgesia you propose prescribing IV morphine. Your consultant highlights that this suggestion deviates from the clinical guidelines for the management of acute migraine and asks you why you thought IV morphine was appropriate in this instance.

    You state that you chose IV morphine as the first-line treatment since you thought IV opioids were the first line of analgesia in cases of severe pain. The consultant explains the guidelines for treatment of migraines and discusses the importance of the analgesic ladder and its application [2].
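The stepwise principle of the WHO analgesic ladder the consultant describes [2] can be sketched as an ordered escalation: start with non-opioids and move up a rung only if pain remains uncontrolled. The structure and drug lists below are illustrative simplifications for teaching, not a prescribing guideline.

```python
# Each rung pairs a description with example agents (examples only).
LADDER = [
    ("step 1: non-opioid analgesia", ["paracetamol", "aspirin", "NSAID"]),
    ("step 2: weak opioid, with or without non-opioid", ["codeine", "tramadol"]),
    ("step 3: strong opioid, with or without non-opioid", ["morphine", "oxycodone"]),
]


def escalate(step: int) -> int:
    """Move up one rung when pain is uncontrolled; stop at the top rung."""
    return min(step + 1, len(LADDER) - 1)


step = 0  # Michelle starts at step 1 (high-dose aspirin)
print(LADDER[step][0])            # prints: step 1: non-opioid analgesia
print(LADDER[escalate(step)][0])  # prints: step 2: weak opioid, with or without non-opioid
```

Proposing IV morphine first-line skips straight to the top rung, which is exactly the deviation the consultant flags.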

    Together, you make a treatment plan of IV fluid rehydration, anti-emetics and high-dose aspirin with clinical review after 2 h to consider additional analgesia. You review Michelle after 2 h and she reports significant improvement in her symptoms and requests to go home. You discharge the patient home with aspirin and non-opioid analgesia after she tolerates oral intake in the ED.

    In this case, an aggregate bias was identified early and its effect alleviated, benefiting the patient by avoiding the introduction of opioids and the potential risk of opioid dependence.

    2.5 Conclusion

    The inability to control the influx of patients to the ED, combined with the routine battles of access block and patient flow management, leaves ED clinicians highly distracted and extremely vulnerable to all kinds of cognitive bias. It is often more convenient to give in to the temptation of providing patients and family members what they demand instead of what is best for their clinical presentations. From the examples above, it is clear that recognising aggregate bias and mitigating its potential detrimental effects are crucial to best clinical practice. We will now discuss strategies to actively overcome the aggregate bias.

    Potential Solutions

    1.

    Reflect on your own thought processes. It is important to regularly self-analyse clinical decisions. By practising reflectively, clinicians may be able to detect possible cognitive biases or flaws in their reasoning and thereby catch errors [3].

    2.

    Utilise clinical decision rules and scoring systems to guide management plans. Strict implementation of these tools can help to improve the clinician’s decision making process and prevent over-investigation or over-treatment due to anecdotal experiences.

    3.

    Seek help and advice from colleagues. By discussing patient cases and proposed management plans with colleagues, physicians can get feedback on their thought processes, and potential cognitive biases can be highlighted and mitigated.

    4.

    Keep up to date with the latest research, protocols and guidelines. This helps clinicians stay on top of the most recent evidence-based practice as well as benchmark their current practice [3].

    References

    1.

    Croskerry P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med. 2002;9:1184–204. https://doi.org/10.1197/aemj.9.11.1184.

    2.

    Vargas-Schaffer G. Is the WHO analgesic ladder still valid? Twenty-four years of experience. Can Fam Physician. 2010;56(6):514–e205.

    3.

    Graber M, Kissam S, Payne V, Meyer A, Sorensen A, Lenfestey N, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf. 2012;21:535–57. https://doi.org/10.1136/bmjqs-2011-000149.

    4.

    Centor RM, Witherspoon JM, Dalton HP, Brody CE, Link K. The diagnosis of strep throat in adults in the emergency room. Med Decis Making. 1981;1(3):239–46.

    © Springer Nature Singapore Pte Ltd. 2021

    M. Raz, P. Pouryahya (eds.), Decision Making in Emergency Medicine. https://doi.org/10.1007/978-981-16-0143-9_3

    3. Ambiguity Bias

    Elizabeth Sheffield¹  

    (1)

    Emergency Department, Austin Health, Heidelberg, VIC, Australia

    Elizabeth Sheffield

    Email: elizabeth.sheffield@doctors.org.uk

    What would you choose if you were offered either $200 now or a mystery sum of money that might be lower, higher, or the same as the initial offer? Alternatively, imagine you were late for a meeting: you could take your standard route, where you know the likely time of arrival, or you could try a new course that might improve your travel time but could equally make you even later.

    You would not be alone in choosing the first of these options—the ones for which a specific outcome is known. This tendency to choose a strategy or outcome where uncertainty is reduced is referred to as the ambiguity effect and it has an impact on everyday decisions ranging from which restaurant we choose to go to, what suburb we decide to live in and what banking strategies we use to prepare for retirement.

    In medicine, particularly emergency medicine, this ambiguity effect is especially pertinent. Many of our decisions are, by necessity, based on a variety of unknowns, and this tendency to try to choose options or differential diagnoses where this unknown quality is reduced and the potential outcome is more familiar can lead to inappropriate choices in the tests we order, the diagnoses we make and the treatments we commence [1].

    Definition: The tendency for one to make a decision favouring the familiar, and thus avoidance of the ambiguous or uncertain, despite the lack of supporting evidence [2].

    3.1 Case 1

    Lottie is an 8-year-old girl who attends late one evening with her parents. She has been complaining of lower abdominal pain for a week and a half, which occasionally is so intense that she is unable to stand up out of bed. She has no other associated symptoms and no fevers. She has been to see her general practitioner four times since the pain began and has had repeat urine samples and a set of bloods performed, which have revealed no obvious abnormalities. Her GP suspects abdominal migraines. Her parents have brought her in tonight because she has had another episode of severe abdominal pain just before dinner and they are uncertain what to do.

    You examine Lottie and find that she has a reassuringly soft abdomen, that she appears well hydrated, has normal observations, and is afebrile. Her parents report that she is immunised and has no significant past medical history, although she has suffered from milder tummy aches over the last year or two. Her mother has a history of migraines, but otherwise they are fit and well.

    You suspect that there will be no acute surgical cause for Lottie’s abdominal pain and concur that a diagnosis of abdominal migraines is likely. However, you feel unhappy sending an 8-year-old home without further evidence excluding sinister pathology. Lottie’s parents warn you that she is quite scared of needles, so you apply a topical anaesthetic for bloods. When you go back to take bloods Lottie begins shaking and crying, and you have to pause several times before eventually getting two nurses to come in and help facilitate the procedure.

    It is nearly 1 a.m. and you advise Lottie’s parents that you need another urine sample. Lottie keeps falling asleep and you have to keep going in to wake her to encourage her to drink water, of which she reluctantly takes occasional sips. After approximately 5 h in the department, all of Lottie’s investigations including a urine dipstick come back normal. You ask Lottie’s parents to stay as you wish to obtain a senior review in the morning. You consider obtaining an abdominal x-ray, although you advise the parents that you suspect this would be normal.

    At this point Lottie is unable to stay awake and her pain has been well-controlled. Lottie’s parents also appear exhausted. They take your recommendation and stay until the morning. When the consultant arrives, she reviews the patient and discharges her. She advises the parents that although the diagnosis is unclear, she is reassured by the improving symptoms and lack of abnormal results on several occasions. She confirms that the family lives 10 min away and advises them to return if any of the red flags she counsels them on develop.

    She advises you that in the presence of reassuring signs and a child who lives nearby with responsible caregivers, it is sometimes preferable to send the family home with appropriate safety-netting and a plan for review in normal hours.

    In this case, your discomfort with the ambiguity of not having a concrete diagnosis has created a tiring and unnecessarily traumatising stay for Lottie and her parents. Although it is appropriate to keep a child or patient if you have ongoing concerns, it is also reasonable to send patients home with no clear diagnosis, in particular if they live nearby, can return readily and your examination had been reassuring at the time of assessment.

    3.2 Case 2

    You are a second-year resident working with another resident on nights at a rural centre. You review Brian, a 49-year-old male who presents with worsening dyspnoea on the background of a 1-week history of general malaise, myalgias, fevers and cough. He has also lost 2 kg over the last week. His past medical history includes HIV, multiple previous admissions with opportunistic infections and low CD4 counts.

    On review, you find Brian sleeping and very lethargic on waking. He is appropriately orientated; however, he appears clinically dry. He is afebrile, his oxygen saturation is 92% on room air, his respiratory rate is 20/min, his non-invasive blood pressure is 142/86 mmHg and his heart rate is 100/min. His CURB-65 score is 0. You order a CXR which shows multilobar consolidation.
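CURB-65 assigns one point each for Confusion, Urea >7 mmol/L, Respiratory rate ≥30/min, low Blood pressure (systolic <90 or diastolic ≤60 mmHg) and age ≥65. A minimal sketch of the arithmetic, assuming Brian's urea is normal (it is not reported in the case):

```python
def curb65(confusion: bool, urea_mmol_per_l: float, resp_rate: int,
           systolic_bp: int, diastolic_bp: int, age: int) -> int:
    """CURB-65 severity score for community-acquired pneumonia (0-5).
    In Python, summing booleans counts the criteria that are True."""
    return sum([
        confusion,
        urea_mmol_per_l > 7,
        resp_rate >= 30,
        systolic_bp < 90 or diastolic_bp <= 60,
        age >= 65,
    ])


# Brian: orientated, RR 20, BP 142/86, age 49; urea assumed 5.0 mmol/L.
print(curb65(False, 5.0, 20, 142, 86, 49))  # prints: 0
```

A score of 0 predicts low short-term mortality, but as this case goes on to show, the score does not capture hypoxia or immunocompromise and should not override clinical concern.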

    You feel uneasy as you have not had much experience with patients who are immunocompromised or patients with HIV; however, you remember from medical school that patients with low CD4 counts are particularly susceptible to Pneumocystis pneumonia. Working with PJP as a presumptive diagnosis, you decide to commence him on IV sulfamethoxazole-trimethoprim. Brian continues to appear very flat, but his observations remain unchanged, so you ask the medical registrar if Brian can be admitted up to the ward for assessment there, to which he agrees.

    You find later that evening that a MET call has been activated on the ward for Brian, who became increasingly tachycardic and subsequently hypotensive and confused. He was admitted to intensive care. After following up the case, you learn that Brian did in fact have a low CD4 count; however, his cultures have grown Streptococcus pneumoniae, a typical pathogen for pneumonia in the community.

    You ask your clinical supervisor to debrief the case, during which she tells you that although low CD4 counts put patients at higher risk of opportunistic infections, they also remain susceptible to typical community-acquired infections. The discomfort you felt at having limited experience with patients with HIV meant that you inadvertently settled prematurely on a diagnosis without adequately safeguarding treatment against other possible causes of your patient’s symptoms.

    3.3 Case 3

    Janet is a 56-year-old lady who has been brought in by ambulance with a 1-day history of severe lower back pain. She thinks she may have awoken that morning with the pain, but she reports that as she is moving house she has been lifting things more often than usual over the previous week. She tells you that she does suffer from low back pain normally and takes amitriptyline for chronic pain. She has poorly controlled diabetes. She also wonders if the move and added stress have been causing her to be run-down, as she’s been feeling a bit tired as well as fluey and feverish over the last couple of days.

    On examination, Janet is afebrile with a heart rate of 84 and a blood pressure of 120/80. She was unable to walk with the ambulance, but you complete a neurological examination and are reassured that the power, tone, reflexes and sensation of her limbs remain intact. She also has preserved perianal sensation and anal tone. She has some midline pain to palpation in the L3/4 region, which does extend paravertebrally, and you diagnose a likely mild disc herniation and subsequent muscular spasm of the lower back. The systemic symptoms in the context of back pain make you slightly uneasy; however, you know that getting an MRI is difficult and you decide you favour a mechanical cause.

    You admit Janet to your short-stay unit for analgesia and review by your physiotherapy and extended care coordinator team. When you go to review her mid-shift, she tells you the pain is worsening and she is unable to get comfortable lying down, so you prescribe her a muscle relaxant as well as some further oral analgesia and decide to review her again later that evening.

    Another hour passes and Janet becomes febrile and hypotensive. Her pain has increased even further, and you get some bloods which show a raised WCC of 34 × 10⁹/L. At this stage, you become seriously concerned about a sinister cause for her lower back pain and order a CT, which reveals a large multi-level epidural abscess. You arrange urgent transfer to intensive care and refer to the neurosurgical team for urgent decompression.

    You realise that despite there being some concerning symptoms in the context of back pain, you failed to take bloods and investigate non-mechanical causes in part because you wanted to create a definite diagnosis and plan. You also wonder if the difficulty of obtaining imaging influenced your decision-making. You reflect that this delayed both the correct diagnosis and appropriate management of the patient.

    3.4 Case 4

    You are the Emergency Registrar in charge overnight at a small urban district hospital. Lakshmi is a 28-year-old woman who has been triaged with throbbing right-sided abdominal pain. Lakshmi tells you that she had felt well when she went to sleep but when she woke up to use the toilet, she realised she had right iliac fossa pain which began to increase in intensity when she was trying to fall back asleep. She had intermittent right-sided cramping over the last few days prior to this but had been working so had ignored it. Her brother had a complicated hospital admission after his appendix burst, so she asked her husband to drive her to hospital to get checked out. Lakshmi is normally fit and well and has no history of intrauterine devices or sexually transmitted diseases. She is currently mid-cycle and has regular periods. She has no history of abdominal surgeries. She takes no regular medications.

    You examine Lakshmi and find mild guarding in her right iliac fossa. Her observations are normal and she is afebrile. You prescribe her some intravenous morphine and oral analgesia for her pain, order some basic bloods, obtain a urine specimen and request a urinary β-hCG.

    Her pain improves with analgesia, and her bloods return normal. Her urine is clear, and her pregnancy test is negative. You suspect appendicitis but are also somewhat concerned about ovarian torsion. There is no O&G service at your hospital, but the surgical registrar is on-call 24-h and covers all referrals for abdominal pain.

    After an hour, the surgical registrar comes down and reviews the patient. Lakshmi is drowsy from the morphine and reports her pain has improved slightly with analgesia. The surgical registrar advises Lakshmi that she does not feel this is likely to be appendicitis. She advises discharging Lakshmi with a presumptive diagnosis of a ruptured ovarian cyst. She further mentions that if this were during the day she might consider an ultrasound; however, the ultrasound service is only available on-call for emergencies and she feels this is unlikely to be an acute abdomen. Despite your initial reservations, you decide to defer to the opinion of the surgical registrar and acknowledge that ultrasound is not readily available.

    Early in the morning, Lakshmi returns with worsening abdominal pain and vomiting. Her pain has significantly increased, and she is writhing on the bed in pain. You obtain an emergency ultrasound which shows a torted right ovary and arrange for Lakshmi to be transferred to the nearest tertiary hospital for surgery.

    On reflection, you realise that you agreed with the surgical registrar not because of any convincing evidence to support her hypothesis but partly because it felt more comfortable to label Lakshmi with a specific diagnosis. This, combined with the lack of easy access to diagnostic imaging, made you more likely to accept a common diagnosis despite a relatively less common and more acute one actually being the cause of her symptoms.

    3.5 Conclusions

    No matter the stage of training, it can be difficult for clinicians to overcome the ambiguity effect. There is comfort in anchoring on diagnoses or care plans that are familiar. There will always be subjects in which knowledge gaps exist and therapies in which fluency is lacking, and it is inevitable that we will encounter situations where no clear diagnosis is found [3].

    Thus, we need to normalise and plan for ambiguity in our practice. In doing so, we can ensure that we seek assistance from our emergency and subspecialty colleagues when required. However, it is equally important to consider how one’s discomfort with ambiguity can lead to over-investigation, which is not without consequences: for example, referring a low-risk patient for an angiogram exposes them to the procedure’s potential complications, some of which are serious [4, 5].

    Potential Solutions

    1.

    Consider potential alternatives to your prospective diagnosis. Checklists and surgical sieves can force you to consider other differentials and help avoid premature closure with decision-making. Although initial instincts frequently prove correct, we are often wrong, and should approach each differential with this in mind.

    2.

    Foster and encourage a supportive workplace culture, so colleagues at every level will feel empowered to employ shared decision-making and seek counsel for the cases where they feel uncertain.

    3.

    Continue prioritising lifelong medical education. In actively seeking to teach juniors, you can stay up to date on topics that have become less familiar or where goalposts of care may have changed.

    4.

    Remember that mistakes are inevitable. Compassion towards your colleagues, as well as towards yourself, is an important part of maintaining a healthy mental model for your team, and for future patients.

    References

    1.

    Croskerry P. ED cognition: any decision by anyone at any time. CJEM. 2014;16(1):13–9. PMID: 24423996.

    2.

    Han PK, Reeve BB, Moser RP, Klein WM. Aversion to ambiguity regarding medical tests and treatments: measurement, prevalence, and relationship to sociodemographic factors. J Health Commun. 2009;14(6):556–72. https://doi.org/10.1080/10810730903089630.

    3.

    Howard J. Cognitive errors and diagnostic mistakes: a case-based guide to critical thinking in medicine. Cham: Springer; 2019. https://doi.org/10.1007/978-3-319-93224-8.

    4.

    Inukai K, Takahashi T. Decision under ambiguity: effects of sign and magnitude. Int J Neurosci. 2009;119(8):1170–8. https://doi.org/10.1080/00207450802174472.

    5.

    Osmont A, Cassotti M, Agogué M, Houdé O, Moutier S. Does ambiguity aversion influence the framing effect during decision making? Psychon Bull Rev. 2015;22(2):572–7. https://doi.org/10.3758/s13423-014-0688-0.

    © Springer Nature Singapore Pte Ltd. 2021

    M. Raz, P. Pouryahya (eds.), Decision Making in Emergency Medicine. https://doi.org/10.1007/978-981-16-0143-9_4

    4. Anchoring Bias

    Michail Kosmidis¹  

    (1)

    Emergency Department, Armadale Health Service, Mount Nasura, WA, Australia

    Michail Kosmidis

    Email: michail.kosmidis@health.wa.gov.au

    Imagine walking into a used car dealership, where a car that you are interested in is priced at $800. Being a savvy consumer, you manage to haggle the price down to a more palatable $700 and happily drive off with your new car. If the same car was priced at $1600, however, you would happily shake hands at $1300, well above the price that you did not find acceptable in the first scenario.

    This phenomenon can be explained by the well-known tendency of the human brain to seek meaningful patterns, rather than make accurate and logical estimations, the latter often impossible in a world where countless stimuli are constantly competing for attention. In an attempt to make sense of a chaotic world, the brain desperately ‘anchors’ itself to the first available piece of meaningful information, such as the first price uttered during a negotiation, assessing all subsequent pieces of information by referring to the ‘anchor’, even if there is no real connection between them.

    In medicine, the ‘anchor’ can be the first of serial measurements of a particular vital sign, the first symptom mentioned when taking a history, or perhaps the first symptom that the physician considers important in the search for a diagnosis, meaning that subsequent findings are seen through the lens of this initial impression. The result is that any subsequent clinical or laboratory finding that discredits the initial impression is more likely to be ignored than to force a change in the working diagnosis [1].

    Definition: The tendency to focus too heavily on the first available piece of information when making clinical decisions [2].

    4.1 Case 1

    Gerald is an 81-year-old man with a history of hypertension, hyperlipidaemia and a previous ischaemic stroke. He lives at home with his wife and is visited every few weeks by his daughter and son-in-law. He has become forgetful in the last few months and is presumed to be
