How Doctors Think manages to distill the wisdom of a seasoned physician into a book whose length and complexity make it readable by anyone. Despite occasionally lacking applicability in its middle sections, and being more surface-level in its medical discussions than someone in the medical field might optimally appreciate, this is a fantastic work whose stories stick in the mind and translate dry decision-making lingo into practical pitfalls to avoid.
Sometimes, complex concepts need to be retold in simpler, more comprehensible ways. I believe this book has managed to simplify the incredibly complex research and cognitive processes that doctors undergo when formulating a diagnosis and treatment plan. At the start of the book, there's a story about a woman who, for 15 years, received an incorrect diagnosis of anorexia and bulimia until a gastroenterologist viewed her problems from a different angle and diagnosed an absorption disorder. This reminds me of a unique case I observed during my OB/GYN rotation, where a woman presented with delirium and neurologic dysfunction that confounded various teams. An attending psychiatrist I knew suggested ovarian teratoma-induced encephalitis, which was confirmed on pelvic imaging. As a medical student, I assisted with the surgery, an experience I will never forget, especially given the rarity of the diagnosis. It was great to see that she improved rapidly afterwards. Both of these clinical examples involved someone arriving at a unique diagnosis others hadn't yet considered, which is the goal of a good differential diagnosis. This was a vivid reminder that developing diagnostic acumen is vital as a physician.
The author goes on to offer a unique perspective on decision trees and how insurance companies favor them for guiding coverage decisions, suggesting these trees falter when symptoms are vague. From personal experience in family medicine trying to make sense of long, complicated clinical pictures, I agree that this is often the case; examining a doctor's thought process when algorithmic tools like UpToDate fail is therefore crucial.
From my experience reading the DSM-5's Handbook of Differential Diagnosis on this rotation, I somewhat concur with the view that creativity in a physician's approach might be stifled by such decision trees. I sometimes became bogged down in the decision trees of the book and wasn't necessarily considering how a condition might present in a patient vignette or scenario. They are helpful, certainly, but I wonder if they might obscure the clinician's broader pattern-recognition abilities. This is particularly relevant when too much Type 2 thinking occurs in situations where pattern recognition and Type 1 thinking could be more appropriate, such as when symptoms are vague and information is incomplete.
This text was unique in that it was specifically written for laypeople, with the goal of allowing patients to assist their doctors in reaching accurate conclusions by understanding how doctors think. This collaborative approach to healthcare is something I wish to emphasize in my future practice. If you can get past the initial, slightly off-putting feeling of someone trying to assist you in your work (as if there were an underlying assumption that you are not adequately skilled), I think it would not only make for a more enjoyable personal clinical experience but also build therapeutic rapport. This would likely lead to better outcomes (more information shared, greater time efficiency, etc.). I frankly loved this aspect because medicine is not necessarily rocket science, and if our patients are more inquisitive about what's going on with their health and the process of health care, collaboration can only be a positive thing, unless taken to extremes, of course. Many physicians would agree there is potential benefit in patients with chronic illness from preventable causes taking more of an interest in their own health. The author even gave the layperson a heuristic to help their treatment team: in cases of pain from an unknown source, ask the doctor what body parts are near the area of pain. This resembled the mental CT scan approach that Dr. Leeds, the attending working with me on this differential diagnosis rotation, has suggested is a valuable tool for improving a differential.
Following such patient intuitions before your own can sometimes be important, as in the example of a pediatrician seeing a child with an "over-anxious" mother. The mother was in the medical field herself, and this may have partially influenced the pediatrician to dismiss the child's abdominal pain as non-serious. Later on, however, an intestinal obstruction was discovered, necessitating emergency treatment! I thought about how feedback in that scenario would likely never have reached the pediatrician (since the mother and child were traveling from out of state), and in order to improve through purposeful training (as Moonwalking with Einstein so eloquently described) one needs a short feedback loop, where corrections can be made continually and often. Medicine often lacks such feedback loops, as there is rarely enough time to follow up on patients from previous days, which I think is unfortunate. I wonder whether, as medicine leans toward value-based care, there will be a push to pay someone in the clinic to call patients purposely and ask how certain treatments went, gathering statistics for better reimbursement. I think this would be very interesting and would give doctors a closed loop of treatment feedback. That is, given that they have the proper incentives and the allotted time to go back and make those connections in the chart! Artificial intelligence may assist the physician here: if that feedback information gets put into the chart (perhaps each morning), it could generate a paragraph of, say, 20 sentences (if 20 patients were seen the day before) summarizing which treatments were successful and which were not.
There is a great clinical example in the book of Dr. Valchuck meeting a patient with a 15-year history of uncertain diagnosis, strikingly described as a paper chart six inches thick on the countertop. The doctor, however, pushed all that paperwork to the side and asked her to explain in her own words what had been going on. This kind of presence and focus is something I want to emulate one day when I'm a full-time clinician. Particularly with complex cases like this one, a strong framing bias can develop if you aren't mindful of it. I don't have enough clinical experience to suggest you should never read the chart at all, but I have worked with attendings and residents who purposely didn't dig into charts very much for this very reason: to get a fresh perspective. At first, this was confusing to me, but now, after reading this book's example, I can see the merit of such an approach. When Dr. Valchuck eventually made a very complicated situation appear much simpler by proposing a gluten allergy as the cause of all her trouble, I liked how the book emphasized that it was somewhat boring research he had done years earlier into absorption disorders that enabled him to recognize the clinical pattern. Our experiences, our memories, and our unique perspectives give us insight no one else can provide into the care of our patients, and that's why a second opinion, even among your colleagues, is oftentimes really important. In some ways, I aim to one day have enough humility to be the first to recommend a second opinion on cases I don't see as clear.
One of the studies in the book reported that 15% of medical diagnoses are inaccurate (significant room for improvement!). The book quoted the eminent medical decision-making researcher Croskerry when discussing how doctors come up with 2 to 5 diagnoses in their mind very shortly after meeting a patient. The author himself took the view that heuristics are essential tools, which I totally agree with, and the book flat out criticized medical schools for not teaching heuristics and, furthermore, for actively discouraging them. Indeed, this was partially my experience during my first two years, as oftentimes after challenging class discussions I would find myself going back to mnemonic-filled titles like First Aid to help me find clarity. While not a direct indicator of clinical utility, such heuristics at least worked very well for me on exams and for pattern recognition in complex clinical test vignettes. Without a way to conjure up knowledge, sometimes it is lost in the black box of our mind.
The discussion of “productive anxiety” and the Yerkes-Dodson law was helpful to me because I tend to care too much and run higher on the x-axis of the famous graph than I would like. I have lately started to become more reserved in challenging situations, but in general, during my third-year clerkships, I tended to run at a higher mental rpm than I think was optimal for me. Yet another example from Dr. Valchuck (the gastroenterologist who found the nebulous gluten allergy) described how a personal liking for a patient gave him a cognitive bias against an uncomfortable-but-needed workup involving invasive endoscopy. He eventually ordered the more invasive workup, but by that point it was discovered that the patient actually had a lymphoma diagnosable only by such "uncomfortable" procedures. This made me think about how I can, in the future, develop strong patient relationships yet remain entirely objective when approaching potentially life-or-death scenarios where the right diagnosis is paramount. In some sense, if you were taking care of a family member and thinking very long and hard about them, maybe you would come up with a better diagnosis than for someone you didn't know; but in another sense, you tend to have emotional clouding (e.g., wanting to protect them from discomfort) that can obscure optimal functioning as well. Awareness of one's cognitive biases and thorough Type 2 thinking (in Kahneman's terminology) may be a solution to this.
A key learning point I took away from this book is that any test, no matter how lovely a place it holds in your mind, can be wrong. There was an example of an EKG in the book where misplaced leads inadvertently led to findings very suggestive of a conduction abnormality when it was really artifact! This was striking to me and a reminder that we should take everything with a slight grain of salt rather than with certainty. This is also why I subscribe to a reasonable-faith perspective regarding my religious beliefs.
While I do not have certainty as to whether a first-century Jewish man named Jesus in rural Judea actually rose from the dead, there are reasonable arguments to be made for it. William Lane Craig argues that believing in the God of the Bible is a rational thing for a person to do, given four facts agreed upon by the majority of scholars who have written on these subjects, facts which any adequate historical hypothesis must account for:
Jesus’ burial is attested in the very old tradition quoted by Paul in I Cor. 15:3-5.
On the Sunday following the crucifixion, Jesus’ tomb was found empty by a group of his women followers.
On multiple occasions and under various circumstances, different individuals and groups of people experienced appearances of Jesus alive from the dead.
The original disciples believed that Jesus was risen from the dead despite their having every predisposition to the contrary.
He makes the case that the best explanation is that Jesus rose from the dead, which, of course, takes faith, but of a reasonable type. I can't help but think that in medicine there is a similar reasonability and an unknown "faith" factor when we act on imperfect information. Even a chest x-ray showing evidence of pneumonia could simply reflect the patient not taking a full breath in! Thus, we should be mindful of the limitations of everything we order as physicians and factor that into our thinking process as we place a "reasonable faith" in those findings.
I also really liked the clarity of the "yin-yang out" cognitive fallacy section, as the example of an ER patient who had already been worked up extensively by gynecology, family medicine, and gastroenterology (and who actually had an ectopic pregnancy) cleared up my confusion about how this fallacy translates into the real world. It made total sense that, after all those avenues had been explored, no one thought anything else could "shine a light" on the diagnosis.
I thought the discussion of how workplace fatigue can directly affect differential diagnosis was fruitful, given that in medicine we often work under restricted sleep. I had never considered the potential of a "de minimus" bias (a term Gary Klein uses in Sources of Power), where symptoms are explained away and minimized by alternative explanations. Groopman proposes that this can occur subconsciously while listening to a patient, as a way to relieve workload. No physician would admit to doing this purposely, but such an insidious cognitive bias is plausible in my view. These fatigue-generated effects underscore the importance of maintaining a healthy work-life balance and engaging in healthy behaviors like running, meditating, and so on. I'm looking forward to enhancing the culture and awareness around sleep and self-care among Boonshoft's students and staff. On November 17th, I'll be giving a lecture alongside my father and some colleagues on lifestyle medicine, focusing on how to craft lifestyle prescriptions for both patients and fellow colleagues!
While listening to this book, the discussion of Bayesian reasoning made me think of how ChatGPT could potentially help with this one day. If it had encyclopedic knowledge of test specificities and sensitivities, it could calculate probabilities that would otherwise be rather difficult for a physician armed with a slide rule and a Casio. One criticism of the book was the superficial discussion of the racial prejudices some patients hold and how these may affect physician decision-making. This section sat in the middle of the book and did not flow well with the others. While I am of the opinion that such prejudice may indeed affect physician diagnosis, I thought the author could have done a better job elucidating its effects through studies or clear examples.
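The Bayesian updating described here can be sketched in a few lines. The function below is a minimal illustration of my own (not from the book): it takes a hypothetical pre-test probability (the physician's prior) plus a test's sensitivity and specificity, and returns the post-test probability via Bayes' theorem. All numbers are made up for illustration only.

```python
def post_test_probability(prior: float, sensitivity: float, specificity: float,
                          positive_result: bool = True) -> float:
    """Apply Bayes' theorem to a single diagnostic test result.

    prior        -- pre-test probability of disease (0..1)
    sensitivity  -- P(test positive | disease present)
    specificity  -- P(test negative | disease absent)
    """
    if positive_result:
        true_pos = sensitivity * prior              # diseased, test positive
        false_pos = (1 - specificity) * (1 - prior)  # healthy, test positive
        return true_pos / (true_pos + false_pos)
    else:
        false_neg = (1 - sensitivity) * prior       # diseased, test negative
        true_neg = specificity * (1 - prior)        # healthy, test negative
        return false_neg / (false_neg + true_neg)

# Hypothetical example: a disease with a 2% pre-test probability,
# evaluated with a 90%-sensitive, 95%-specific test.
prob = post_test_probability(prior=0.02, sensitivity=0.90, specificity=0.95)
print(f"Post-test probability after a positive result: {prob:.1%}")
# → prints "Post-test probability after a positive result: 26.9%"
```

Notice the counterintuitive result: even a quite accurate test, applied to a low-prevalence disease, leaves the post-test probability well under 50%, which is exactly the kind of calculation that is hard to do in one's head during a clinic visit.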
I did enjoy the unique insight that primary care is under-appreciated in terms of its complexity. It takes much skill and knowledge to know what you do not know, and to know when you should investigate more or less. Dr. Leeds, my attending, calls this an epistemic honesty about the limits of your abilities, which translates into an epistemic integrity, which is more about how you act on such uncertainty in a moral way. The book stated that specialists tend to receive far less murky situations than primary care physicians do, and in that way are "over-appreciated" and relatively overcompensated. A personal example of this is my burn surgery elective, which was overall a great rotation where I learned a lot, despite it not being what I expected. I initially anticipated dealing with incredibly complex situations that couldn't be handled in a normal trauma setting. In reality, we worked with a rather narrow differential diagnosis and a limited set of treatment options, which illustrates how specialization often makes diagnosis "easier." From a more practical point of view, the diagnosis has already been "filtered" and thought about before the referral to a specialist is ever made.
I am simultaneously reading Symptoms To Diagnosis, which is a very matter-of-fact text without many memorable clinical cases. So this book's story of someone the author knew from synagogue who went to Vietnam and adopted a child with a strange presentation, one that initially looked like SCID but was eventually correctly diagnosed (after a long stint in the hospital!) as a nutritional deficiency, was very helpful to me. It made me think of how in medicine we are privileged to listen to some of the most precious stories in people's lives. I think I will always associate the "zebra retreat" fallacy with that example from now on. Stories like this are really where this book shines.
I learned from this book that we should never assume knowledge fell from the sky. An example of this was illustrated with Dr. Locke and the Harvard cardiology department. There is apparently a well-established rule within cardiology that you should repair a cardiac shunt between the right and left atria when there is two-to-one flow. However, that rule apparently came from doctors debating at a noon conference one day, the results of which were published in the American Journal of Cardiology without any real data backing them up (at least this is how the author describes it). While it is necessary to act on imperfect information in medicine, we should strive to reduce that imperfection as much as possible. One area that comes to mind is nutrition research, where there are still many unknowns regarding the true benefits of nutritional interventions across a variety of diseases. There is great evidence that eating healthier and exercising a healthy amount can lead to positive health outcomes,1,2 but the field remains greatly underexplored.
I thought the discussion of vertical line failure and the author's troublesome journey trying to find a hand surgeon for a chronic injury was interesting. While they eventually found the rare scapholunate ligament tear, you could feel the author's frustration and turmoil in trying to find an appropriate surgical approach to the problem. No fewer than four highly regarded specialists were consulted, all with different answers, until a high-sensitivity MRI aided the correct diagnosis. This impressed upon me how easy it can be to get a diagnosis wrong as a physician and how that can directly cause great turmoil and frustration in someone's life (the author barely used his hand for a good three years until the right problem was found!).
Having just been on a radiology rotation, I found the discussion of radiologic diagnosis interesting. The book shared how the least accurate radiologists in one study, who got about 75% of diagnoses correct, tended to be the most confident in their answers, compared to the most accurate (but least confident) radiologists, who got around 90-95% correct. It was also interesting that after about 38 seconds of looking at an image, radiologists' false positive rates go up considerably. The interplay between Type 1 and Type 2 thinking here would certainly be worth researching further.
I thought the discussion of radiologist reports and the uncertainties of mammograms was tremendous, as it highlighted in microcosm what all of medicine must do: make sense of edge cases and try to be right when the stakes are high. The book shared an example of a radiologist who found a high-grade breast cancer in a scan containing calcifications that appeared totally benign, except for the fact that they were not present on the previous scan. The physician tentatively recommended biopsy, which revealed a very high-grade ductal carcinoma. The doctor debated whether to even share this case at a conference, because it could taint colleagues' diagnostic predispositions in a negative way, creating a cascade of unnecessary testing. In my own life, I think I have had instances where I would not recommend that others follow the same path I did, even if the outcome was favorable. I remember being on a basketball team for two years in high school, which I did not enjoy much because of a less-than-optimal environment. Looking back, though, those experiences did teach me how to handle challenging team dynamics and how to work really hard.
The discussion of a study that used computer programs to aid radiologists in making difficult lung nodule reads made the point that physicians need to be careful about how they integrate new technology into their practices. In the study, the computer assistance was not helpful in a net sense: while the radiologists detected about 14% more cancers, they also generated 10% more false positives. The physician's relationship with technology will only become a more important topic as artificial intelligence becomes a larger part of our daily lives.
The author mentioned how his own diagnosis of back pain led him to research heavily into the various types of COX-2 inhibitors and their potential effects. This discussion was apt for a book titled How Doctors Think, because conditions that physicians have personal experience with tend to be more familiar to them. I am thinking of a family physician who provided some of the best asthma care in the region in no small part because she had severe asthma herself. In my own life, I try to eat plant-based and exercise routinely, so I am much more knowledgeable about these subjects than about rheumatology, for example. I think it is therefore helpful to "artificially" create interest in certain subjects (essentially "tricking" yourself into enjoying them), a technique I have used before in academics to great effect.
I was inspired by the story of Dr. Teppler, who would routinely stay at his office until 8 or 9 p.m., thinking about the complex patients he had seen during the day. He would read the latest medical literature to stay up to date and simply ponder in his free time, only later sharing a treatment plan with the patients. This was very inspiring because I want to be the kind of person who takes my profession just as seriously. This same physician saw a patient who had previously been recommended surgery for a metastatic ovarian lesion of the colon, which would have necessitated pausing her chemotherapy. Subsequent physicians on her care team expressed hesitancy given the potential for the metastatic cancer to spread greatly during the chemotherapy pause, yet they all still recommended surgery, fearing lawsuits should the cancer actually harm the patient after the first physician's surgical recommendation had been entered into the chart. Dr. Teppler, finally seeing this patient, resisted the urge to let fear of litigation cloud his thinking and recommended against the surgery because it was truly not in the patient's best interest. The patient went ahead with the surgery against his advice, and unfortunately his worst fears were realized: the cancer overtook her. While maintaining a legal standard of practice can motivate physicians who might not otherwise feel the need to maintain their skills, the presence of a lawyer hovering over the shoulder of a well-meaning, responsible physician trying to work out the absolute best plan for a patient does not seem like an ideal scenario.
As a general point of mild criticism, I would say the author seemed to consistently over-remind the reader that Massachusetts General Hospital is an extremely good institution. Otherwise, this book was very instructional and something I am glad I invested time into.