With invigorating play-by-play analysis reminiscent of the MNF ManningCast, Klein brilliantly examines some of the highest-stakes decision makers in the midst of crisis points as varied as missile launches and checkmates. This book challenges the medical student to re-examine what they are really doing while scanning and answering serial UWorld questions with bloodshot eyes. While those in professions where information comes primarily in the form of words may find the recognition-primed decision model less than entirely explanatory, the discussion of how much even those professions revolve around recognition is enlightening. The discussion of chess mastery and the academic rigor of its cited studies were highlights, with only occasional digressions into personal anecdotes. I would give this book a 9.0/10 and call it worth a read for those looking to explore how to most effectively achieve mastery of a topic or skill.
Klein suggests that we are more likely to use a one-at-a-time approach to decisions when we are under time pressure, whereas we are more likely to compare options when we have to explain our decisions to a third party. As a resident, I can apply this by imagining how I would justify certain diagnoses to an attending, for example, even if the attending is not present or going to ask.
The book suggested that experts are able to run mental simulations effectively. As a resident, I can try to think of the various ways things on my unit could go wrong and have a general idea of how I could correct them as effectively as possible. In this way, I'll be prepared and better able to handle crises that unfold in a matter of seconds. This is reminiscent of how chess grandmasters tend not to be affected by time constraints nearly as much as others.
As a resident, I want to routinely set goals as a team for the patients we are taking care of. This practice, as recommended in the book, can significantly enhance our ability to make well-informed decisions when unexpected situations arise.
Do I agree with the recognition-primed decision-making model? I am partially convinced. There are compelling arguments for recognition being the dominant process. Since reading this book, I have noticed a variety of instances in my academic life where I rely more on recognition of a few selected options than on comparative analysis among a wide range of options. For example, when I read a case vignette in Symptom to Diagnosis and immediately think of an answer, am I really going through a mental paragraph of logical, rules-based reasoning, weighing, say, whether the fever favors pyelonephritis over a kidney stone, which might present more painfully? Often it seems more a sequential process than a parallel one, as if I am simply recognizing the pattern. However, I am not sure I agree with his assertion toward the end of the book that stress doesn't so much cloud the mind as remove the time for information gathering (thereby claiming that stress impedes not a comparative decision process but rather a recognitional one, by limiting its inputs). Stress from time pressure does reduce my ability to gather information, but to me it also clouds the rational thought that helps me compare different options on exams. When I am tired and sleep-deprived, I still tend to do well on flashcards, where I am recognizing things, but I do markedly worse on board questions, where more comparison and logic are involved. I think this means that in some domains there is more of a deliberative cognitive process than recognition going on, which this book tends to minimize, I would say. He seems to have drawn primarily on first-responder and military decisions in his studies, but I wonder if he would find anything different in professions built on more semantic and impersonal knowledge, like history or medicine.
For example, a historian asked whether a figure had experienced something similar to a modern-day event would likely be thinking through a wide variety of words, terms, and periods in that person's life (filtered by recognition to a degree, of course). When generating a differential diagnosis as a physician, one is not necessarily thinking back to personal experience most of the time (an experienced clinician might be) but to words, terms, and semantic knowledge about similar conditions and presentations one has read about. This is a type of "naming" decision, largely untouched by Sources of Power, and I would argue it fits the recognition-primed model less well than most other decisions in our lives.
I am still mulling over my exact stance on the book's proposed model, but regardless, I think it brought to light the importance of recognition in many decisions of real-world importance and in the creation of experts. This book gave me a whole new perspective on how I've been going about my medical education, which is a testament to the quality of its content. I admit that on board questions I am NOT generating a very long differential (only two or three items most of the time), which is either a fault of my own or a part of recognition-primed decision-making that was fascinating to ponder.
The discussion of a decision maker's global impression of whether or not a situation will work out sounds very much like the metacognitive System 3 that some have theorized (Marcum 2012). This seems to stem from the mental simulation step that decision makers include in their evaluation of a proposed action after the situation has been recognized. It resembles a differential diagnosis, because with a differential you are imagining (to a degree) how particular conditions would present. The mental simulation aspect of this model is fabulous because it captures an aspect of medical decision-making that I have not yet encountered in my readings so far in this differential diagnosis course: how do we order a work-up based on a differential that is cost-effective yet rules out the diagnoses that are most likely and most deadly? Simulation allows physicians to consider the potential legal repercussions of documenting a life-threatening illness in Epic that they don't test for (because of cost or improbability) yet that could end up being the diagnosis. In a less depressing way, simulation is integral to figuring out which tests (based on the differential) would actually change management down the line. A prominent example from a Symptom to Diagnosis vignette is back pain: you usually don't image initially because the results usually don't change management. Klein proposes a model similar to Kahneman's, with a first step of recognition and a second of mental simulation, bound together by an awareness of problem solvability. This is plausible, though simulation isn't the same as System 2 rationality, which seems to encompass simulation as a subtype.
Systems are important in life and in medicine. A system for exercise lets you do it more often and more effectively, while a system for taking notes as a medical student lets you give presentations on rounds and manage complex situations more effectively. The system in the Vincennes missile mishap was not optimal for decision-making. It involved a confusing computer 🖥️ setup where two different screens were used to show information that really needed to be together and on a larger screen 📺. The computer was also mishandling tracking numbers and assigning two different airplanes the same number. This relates to medicine and how you should set yourself up for success to make informed decisions. Much of decision-making involves having reliable information accessible at the same time so that it can all be meshed together and acted upon in concert. When that doesn't happen in medicine, such as when test results are delayed or patients don't follow up for lab studies, then, I would argue, there is no way an optimal decision can be made. For a recognition in System 1, or in the recognition-primed decision-making model, to lead to a correct outcome, you need an accurate picture to work from.
I loved how this book began by making clear it would routinely include application sections, because I entirely agree with its statement that a good theory is applicable. Einstein may have come up with his theories in relative isolation, not primarily concerned with whether they were testable, but they still ended up being confirmed by many subsequent scientists.
Table 7.2 in the book was downright inspiring, not necessarily from an academic differential-diagnosis perspective, but from sheer scientific fervor. I was reminded of the Mythbusters as Klein's team went to such varied study settings as urban firegrounds, tank platoons, wildfires, and battleships. That kind of curiosity, and the willingness to go out into the real world to find hands-on ways to do medicine better, is something I want to emulate one day. Staying in the sterile environment of a clinic does not necessarily meet patients where they are, and it can leave you blind to other technologies and developments in the surrounding world that could be put to great use within your specialty. I think transitions of technologies between domains are among the most common ways breakthroughs happen, and that is only discovered by braving "the elements" (whatever they may be) and testing theories and ideas in the world as Klein et al. did.
[Image: A table from the book illustrating the varied study settings utilized]
I love the suggestion Gary Klein made about deliberate practice: try to evaluate not just the outcomes but the thought processes that led to the outcomes. In my own life, I have gotten test questions correct for the wrong reasons, and in such cases I still try to take notes to correct my thinking.
He made an analogy to Garry Kasparov and the type of creative desperation that experts seem to display. As a resident in the next few years, I hope to cycle through different starting points and thinking strategies in situations that are "desperate," such as when a diagnosis has remained uncertain and obscure for an extended period. Knowing a treatment plan's leverage points (what it is primarily aiming to treat, what the testing had high specificity for, etc.) and its weak points (potentially missed diagnoses, alternative explanations for symptoms, etc.) can be the mark of an expert physician. Being familiar with these patterns, like Kasparov, can give you a creative repertoire of "moves" to turn to when obstacles appear.
One problem of decision-making that applies particularly to medicine is that a slow onset of problems makes it harder to identify that there is an issue. The book gave the example of a pilot oblivious to the accumulation of dangers in his flight plan, which made me think of the classic image of the frog 🐸 that doesn't realize it's in hot water until it's too late. In medicine you might not notice gradual inconsistencies accumulating in a diagnosis (as in diagnosis momentum), particularly if the treatment team agrees with it (groupthink). I saw this scenario when a patient with urethral pain had a superficial skin yeast infection that the treatment team I was on believed for a long time was a urinary tract infection. There were inconsistencies along the way that, in hindsight, we dismissed because of the gradual nature of the problem.
I enjoyed the discussion of clarifying the goal behind a decision, because it relates directly to medicine: you and the patient should be clear on the goals of the treatment plan and the differential diagnosis. Otherwise, per Klein, a decision maker can be "trapped."
One example of experts' power to see the invisible (chapter 10) occurred on my internal medicine rotation, when we were seeing a patient for a heart concern. During my presentation, I did not mention whether she had chest pain at home. I had actually asked this in my initial interview with her but did not mention it that day, and my attending made note of it. I was not aware of the disruption to the usual pattern of a med student presentation, but my attending, with much more experience, could see my glaring omission right away. Such expertise is gained over time and with deliberate practice.
One of the strongest parts of the book was the research-backed section on chess ♟️ players and how time affected their effectiveness. I found it fascinating that experts were not affected by time pressure as much as less experienced players. This made me think of how an experienced critical care physician running a code is unruffled by the pressure of the situation, unlike the cowering medical student in the compression line. The physician has seen many patterns like this before and can thereby generate alternative courses of action more creatively than someone less experienced.
One of the weaker parts of the book was the chapter on teamwork, which was more reminiscent of a business self-help book than of something rooted in research. Of course, this topic would be challenging to research, but I did like the discussion of team metacognition, in which teams can be aware of what each member needs at any particular time. I would be interested in learning more about studies and experimental findings on how System 1 and System 2 interact when multiple "neural nets" are involved in a team.
The author mentioned that he lives in Yellow Springs and gave an example from Wright-Patt Air Force Base, and as a reader who has lived in Dayton his whole life, I found it hard not to smile. It made me think that Wright-Patt, which I pass all the time on the road, is a site of very high-stakes decision-making on a daily basis, where not only bottom lines but lives are on the line.
One of the most useful lessons from the book for my own life, not just as a future resident but now, is how clarifying goals when assigning tasks can help team members make their own decisions and retain a sense of autonomy if things go awry. The example the book gave was of Winston Churchill and a miscommunication with his officers about a naval vessel they were trying to destroy. Churchill had given orders to avoid fighting a superior force, intending to avoid needless battles before taking out their main target, but this was misconstrued as an order to avoid the largest German warship in the area, when in fact that ship was Churchill's intended target. This carries over to medicine, where clarifying the purpose of a particular treatment plan can prevent miscommunication and let team members respond more effectively to impromptu questions from the patient or their family about the plan and which modifications would be best. I remember explaining to a patient with a potentially life-threatening retropharyngeal abscess why it would be a good idea not to leave against medical advice. It was because I knew the treatment team's goal, protecting the patient from life-threatening progression of the illness, that I was able to adapt in the moment and give appropriate guidance.
The concluding figure in chapter 17 was a reminder of the many amazing sources of power our brains have access to. Delving into how to use one's mind more effectively over this elective was a privilege and has helped me become a better physician-in-training.
References:
Marcum JA. An integrated model of clinical reasoning: dual-process theory of cognition and metacognition. Journal of Evaluation in Clinical Practice. 2012;18(5):954-961. doi:10.1111/j.1365-2753.2012.01900.x