DURING CHRIS HICKS' third year of studies at a small medical school in southern Ontario, he developed a fear, one that must torment all aspiring physicians: that he would kill a patient — or, worse, all of them. His dread grew as he prepared for his 2003 clerkship, when medical education shifts from the classroom to the actual work of caring for patients in hospitals. He and his fellow students practised resuscitations on mannequins. Time after time, he tried to save his mannequin, and time after time it died. So he was terrified when he met his first real patient, a cyanotic baby girl who had turned blue from lack of oxygen. "I thought, 'The killing spree begins,'" he says.
But the infant survived, and so did Hicks, and a few weeks later he found himself on a neurology rotation, caring for a man in his sixties who had recently suffered a spinal cord infarction, a stroke that left him paralyzed from the chest down. The patient was affable and cheerful, despite his illness. Hicks liked him. The plan was to move him into a rehabilitation hospital or long-term care facility once his condition was stable.
A common complication in immobile patients is pressure ulcers, sores that develop where the skin meets the surface of a bed or a wheelchair for long periods. They can appear quickly: within hours, a warm, reddish or purplish spot turns into an open wound, and within a day or two the flesh becomes necrotic. Often they don't heal, and if an infection develops and spreads to the bone or the bloodstream it can be fatal. These sores are particularly pernicious among the elderly and those who suffer from circulatory problems, diabetes, drug and alcohol addictions, or poor nutrition. The sicker the patient, the more serious the risk.
To prevent tissue damage from lack of blood circulation, patients must be turned every two hours. As a student, however, Hicks was unfamiliar with pressure ulcers and their treatment. After the patient had been on the neurology ward for a few weeks, he developed a large sore, about the size of a man's palm, over his sacrum. The attending physician and the nursing staff knew the patient was at risk for bedsores and had ordered a special air mattress that reduces their incidence, but the hospital only had two, and both were in use. Still, given that the potential for sores had been recognized, Hicks was unsure why the ulcer had developed: Had the nurses neglected to turn him? Had the attending physician forgotten to instruct them to do so? Should Hicks have examined the man more thoroughly to determine whether he was developing sores? Was this a case of negligence, or a failure to communicate?
A plastic surgeon was called in to treat the ulcer and debride the wound. Because of his paralysis, the patient couldn't feel anything, and Hicks remembers with horror listening to him chat away happily while the dead tissue was dug away from his back, exposing his spine. Two days later, when Hicks returned to the ward, he looked at the patient's chart. In capital letters, the plastic surgeon had written, "This was a medical error and it needs to be explained to the family as such." Hicks remembers the moment, he says, "like it was the Kennedy assassination. I thought, that's the end of my career."
When the attending physician explained the situation to the family, they were understanding, even after the patient developed sepsis, a dangerous infection of the bloodstream. In many cases, this would have set off a chain of mounting treatments and attendant complications: a prolonged stay in the intensive care unit; more antibiotics; and invasive procedures, such as intubations, that can cause damage or infection. The patient's condition improved, however, and he was eventually able to leave the hospital.
Hicks, a lanky man with schoolboy glasses and a scruffy beard, is now in his early thirties. He works at St. Michael's Hospital, a teaching facility in downtown Toronto, as an emergency physician and trauma team leader. Eight years later, that early experience continues to shape his academic interests; at the University of Toronto's medical school, he conducts research in the fields of team performance, patient safety, and error prevention. He still wonders about his role in the case, and whether a better medical student would have recognized the ulcer sooner. Back then, he says, "I assumed that it was a knowledge problem, that when I learned more I wouldn't make those kinds of mistakes."
Writ large, the entire medical profession has been engaged in its own long stomp from ignorance to knowledge, from bloodletting and incantations to an ever more precise understanding of anatomy and disease, germs and genetics. Today the practice of medicine draws from a variety of disciplines, including biology, chemistry, physics, engineering, computing, and psychology, and the field exemplifies the triumph of human intelligence, diligence, and creativity. Vaccinations have eradicated polio from developed countries and are poised to do the same in the developing world. Kidneys, hearts, and lungs can be removed from one body and transplanted into another. Ultrasound and magnetic resonance imaging enable non-surgical access to the mysteries of the womb and the brain. Deadly micro-organisms are vanquished by antibiotics, shattered bones are mended, and sheared-off limbs are reattached, or replaced with bionic prostheses. Laser eye surgery allows the myopic to cast off their glasses like pilgrims at Lourdes. Three medications released over the past fifty years — the birth control pill, Prozac, and Viagra — have fundamentally altered our sexual, emotional, and social lives.
Yet as the tools for healing proliferate, so do the difficulties in determining and executing correct diagnoses and treatments; each discovery creates new opportunities for mistakes, side effects, and dangers. Modern medicine is plagued by adverse events such as the one suffered by Hicks' patient: potentially preventable, unintended injuries or complications caused by health care management itself. They encompass the out-and-out mistakes (such as misdiagnoses or surgical slips), as well as the system breakdowns that bedevil hospitals (inconsistent hand washing protocols, or poor communication during patient transfers). A 2004 study estimates that 7.5 percent of Canadians who are admitted to hospitals each year experience at least one adverse event. These errors are responsible for more than a million extra days spent in medical facilities, and the resulting annual death toll may be as high as 24,000.
Consider some notable cases: in the 1980s, negligence in the testing of blood products by the Canadian Red Cross allowed HIV to spread to 2,000 people and hepatitis C to at least 30,000 others. More recently, the bungling of pathology reports in Newfoundland and Labrador led to hundreds of patients being given inaccurate breast cancer screening results. And according to a 2011 report from the Organisation for Economic Co-operation and Development, Canada ranks among the worst of the thirty-four member nations for adverse events related to surgery. The list of mishaps reads like a series of David Cronenberg plot treatments: obstetrical trauma, foreign objects left inside the body during procedures, accidental punctures or lacerations, and post-operative sepsis.
Even the most temperate doctors call this state of affairs an "epidemic" of errors, and unfavourably compare safety records in medicine with those of other high-stakes, high-pressure professions such as aviation. More troubling is just how intractable and complicated the problem remains. The most obvious, culpable offenders — the pill-pushing quack, or the cowboy with a scalpel — are the outliers, and the most dramatic mistakes, such as amputating the wrong leg, are comparatively rare. Rather, medical error consists of thousands of small screw-ups and oversights: the unnecessary cases of post-surgical deep-vein thrombosis, or the one in ten patients given the wrong dose or type of medication. These misadventures persist despite, and even because of, medicine's growing sophistication and our increased expectations of its efficacy. So if we want to understand medical error and why it is so prevalent, we must start with the question that has dogged Chris Hicks for nearly a decade: how is it that a man admitted to a well-run twenty-first-century hospital to be treated for a stroke can leave sicker than when he arrived?
THE AMERICAN medical industry has long known about the problem of adverse events, largely due to the rise in malpractice claims in the 1980s. When Hicks began his medical training, the received wisdom surrounding medical error was heavily influenced by malpractice litigation: someone had screwed up, and they would have to pay for it. Error was viewed as resulting from ignorance or negligence — doctors or nurses gone rogue. Even the long-standing tradition of morbidity and mortality rounds (M&Ms, open discussions between physicians about their mistakes) contributes to this perspective. M&Ms often focus on content or skill — on what a doctor didn't know, or didn't know how to do.
To determine whether litigation was improving or hindering care, the Harvard Medical Practice Study in 1991 quantified the scope and nature of medical mistakes. Its findings, chief among them significant rates of death and disability caused by medical mishaps, were startling. But the results didn't achieve traction outside the medical field until 2000, when the US Institute of Medicine published its report To Err Is Human: Building a Safer Health System. Among its most shocking statistics: "Preventable adverse events are a leading cause of death in the United States…at least 44,000 and perhaps as many as 98,000 Americans die in hospitals each year as a result of medical errors."
Shortly afterward, the British Medical Journal devoted an issue to the subject. "In the time it will take you to read this editorial, eight patients will be injured, and one will die, from preventable medical errors," the opening article announced. "When one considers that a typical airline handles customers' baggage at a far lower error rate than we handle the administration of drugs to patients, it is also an embarrassment."
It's so embarrassing, and the threat of litigation so unnerving, that physicians have long been reluctant to discuss mistakes. The BMJ editorial goes on to note that "we tend to view most errors as human errors and attribute them to laziness, inattention, or incompetence." Like Hicks, many doctors were taught that individual diligence alone should prevent medical errors, and that admitting their existence could lead to lawsuits, humiliation, or job loss. In Canada, Ross Baker — now a professor of health policy at the University of Toronto and director of graduate studies at the university's Centre for Patient Safety — followed these discussions with anticipation. He and his colleagues across the country, involved in the then nascent health care safety movement, hoped the alarming data would spur action in Canada as well. Instead, To Err Is Human was viewed as proof that the American system was fundamentally flawed. So in 2004, Baker and Peter Norton, now a professor emeritus in family medicine at the University of Calgary, published a paper, The Canadian Adverse Events Study. "There was no conspiracy to hide this information," Baker says. "No one had looked carefully at the data before." The researchers erred on the conservative side in their estimate of preventable medical errors (by their count, up to 23,750 patients had died as a result of these mistakes in 2000), excluding any incident where there was doubt or ambiguity about whether it constituted a mistake, a choice that suggests the true scale of the problem is even larger. (No formal follow-up has been done since.)
Paradoxically, the problem has been exacerbated as the field of medicine has grown more complex. In the 1960s, as scientific and technical knowledge expanded, physicians began to specialize, which vastly improved medicine — the narrower the focus, the greater the expertise and skill — but it meant that an individual patient's care was now shared among multiple practitioners. In the case of a child who suffers a head trauma, for example, her treatment may be handled by dozens of professionals: paramedics, emergency doctors and nurses, a neurologist, a neurosurgeon, an anesthesiologist, surgical and ICU nurses, pharmacists, pediatricians, residents and medical students, occupational therapists, and so on. As the patient is handed from one to the next, myriad opportunities arise for her medical history to be lost, for conflicting drugs and treatments to be prescribed, for lab results to be delayed, for symptoms to be overlooked, and for confusion in the transmission of vital information.
James Reason, a British psychologist specializing in human error, has dubbed this "the Swiss cheese model," in which small, individual weaknesses line up like holes in slices of cheese to create a full system failure. And in a modern hospital environment — a busy, stressful setting with many competing priorities, where decisions are made under duress, with frequent shortages of nurses, beds, and operating rooms — a patient's care slipping through the holes at some point is almost inevitable.
These flaws are compounded by failings in teamwork and communication, which, according to patient safety research, lie at the core of preventable adverse events. Baker likens the health care field to "a series of tribes who work together but don't really understand one another." To put it less diplomatically: egos, territorialism, and traditional hierarchies can create toxic environments in hospitals, where senior physicians disregard input from nurses and junior staff, who in turn become resentful and defensive.
The patient safety movement Baker helped initiate in the early 2000s profoundly changed the conversation about medical error. It was no longer a matter of assigning blame, but of improving bad systems. In Canada, the Halifax Series, an annual symposium about quality in health care, was launched in 2001; and a few years later, the Canadian Patient Safety Institute, an advocacy and research body, opened its offices in Edmonton and Ottawa. Hospitals across the country recruited safety experts to advise them, and to encourage physicians and other practitioners to talk more openly about adverse events.
The focus on flawed systems made addressing the problem an easier sell, especially as it became evident that the rampant problems in health care were errors of omission, not commission. While the old malpractice model rooted out villains, the systems approach tackled the day-to-day snafus that frustrated everyone: long waits in the emergency department, under-stocked supply rooms, vague lines of communication, and so on.
To Kaveh Shojania, co-author of the 2004 book Internal Bleeding: The Truth Behind America's Terrifying Epidemic of Medical Mistakes, shocking statistics about medical error are useful mainly as headline grabbers, drawing attention to more quotidian concerns about quality improvement. Shojania, an internist at Sunnybrook Health Sciences Centre in Toronto and director of the Centre for Patient Safety at U of T, says the root of the problem is the ad hoc way medicine was established over its long history. He compares it to a series of cottage industries that developed with no larger organizing vision. The medical industry has grown so vast and complicated that tackling inefficient systems is akin to untying a Gordian knot.
In his cluttered office on the sprawling Sunnybrook campus, Shojania, an Eeyore-ish fellow in a rumpled suit, navigates through stacks of files, books, and papers to show me an image on his computer. It's a drawing of a Rube Goldberg pencil sharpener, a ridiculously convoluted device that involves a kite, an iron, an opossum, a rope, a woodpecker, and moths. That's the current medical system, he tells me by way of analogy. "This isn't an issue of incompetent people making stupid mistakes," he says. "It's many average, decent people working in poorly designed systems. Most medical mistakes were accidents waiting to happen."
TALK TO any error expert, and the conversation turns, inevitably, to Tenerife, in the Canary Islands, March 27, 1977, when two commercial jumbo jets collided on a runway, killing 583 people. A series of small mishaps set the accident in motion: one of the planes had been diverted from another airport, and there was pressure to make up time; fog obscured the runway; and the directions from the control tower may have been unclear, leading one pilot to believe he had clearance to take off. The most chilling aspect of the disaster was that the pilot disregarded the warnings of his co-pilot, who told him not to proceed. At the time, such maverick behaviour was standard, but Tenerife changed that. The aviation industry began to emphasize crew resource management (CRM) techniques: more teamwork and collaboration; improved communication; and greater adherence to protocols, through the use of checklists, which break down the complex operation of planes into more manageable, standardized chunks.
Aviation has had tremendous success with this approach, lowering its rate of fatal accidents even as the number of flights worldwide has increased. In 1991, all the jet airplanes in the world logged about 20 million hours in flight, a number that leaped to over 46 million in 2008. Over that period, the rate of fatal accidents declined steadily toward zero, with Canadian and US operators performing better than the global average.
Three decades after Tenerife, in January 2009, US Airways Flight 1549 made its miraculous emergency landing on the Hudson River. As it happened, the captain, Chesley Sullenberger III, was an aviation safety expert, and the industry credited the safe landing to the flight crew's strict observance of CRM techniques.
As with the airline industry, the practice of medicine requires discipline, teamwork, complex decision-making under pressure, and a facility with evolving science and technology. So when Peter Pronovost, a physician at Johns Hopkins Hospital in Baltimore, Maryland, was searching in 2001 for models to improve patient safety, he turned to aviation checklists for guidance. In particular, he was seeking a way to reduce infection rates from central lines, catheters inserted into large veins in the neck, chest, or groin to deliver medication, draw blood, or monitor heart function. Such infections are common enough to be considered unavoidable in hospitals, yet their cost to the system is immense.
Inserting a central line is tricky: in the neck, for instance, a doctor must negotiate a needle around the clavicle to find the subclavian vein, and meanwhile avoid hitting the carotid artery or sticking herself. Once the line is in place, there's a potential for infection, especially if the procedure was performed under less than sterile conditions. To mitigate that risk, a doctor should don clean gloves, a mask, and a gown; cover the patient with a full-body drape; and disinfect the area on the patient's body (an ultrasound probe wrapped in a sterile cover is often used to guide the needle). Following this protocol takes about three to ten minutes and appreciably lowers the chance of harm and infection. In the past, doctors often took shortcuts, typically because supplies weren't right at hand, or the procedure wasn't standardized at their facility.
Pronovost devised a simple solution: all the necessary equipment was to be assembled and stored in a bundle or on a cart, and a nurse would consult and monitor a short, step-by-step checklist while the doctor inserted the line. When the Michigan Health and Hospital Association implemented the central line checklist statewide in 2003, catheter-related infections dropped from a median of 2.7 per 1,000 catheter-days to zero; within months, hundreds of lives were saved, along with millions of dollars in costs to treat infections. Many North American hospitals have adopted a central line protocol, and the checklist method has been adapted to other functions, such as monitoring patients at risk for bedsores.
The checklist encourages collaboration and empowers other staff to speak up if they notice the doctor veering from the protocol — a classic CRM virtue. It also prevents physicians from relying on automatic behaviour, the kind of unconscious momentum that develops when performing an oft-repeated task. Running on autopilot isn't necessarily a bad thing: doctors must retain a great deal of information and act swiftly. But automatic behaviour can also cause slips, particularly in high-stress situations, when medical practitioners can forget critical details. The checklist forces them to be methodical. It does their thinking for them.
Here in Canada, the checklist protocol is also changing how medications are recorded and tracked upon patients' admission to and release from hospital. About half of patients experience accidental drug discrepancies upon admission to acute care hospitals, and at least 40 percent on discharge. For example, an ongoing prescription might not be dispensed to them during their hospital stay, or a new medication used in the hospital might be contraindicated with one they already take. To remedy this, hospitals are now required to create an accurate list of a patient's home prescriptions on admission. A 2008 pilot study at Sunnybrook Hospital found that half of the discrepancies caught by the new reconciliation method could have caused harm had they not been identified and corrected.
As a model for medicine, though, the aviation approach has its limitations. Even with smarter, more efficient practices such as checklists, vexing human factors remain. In the case of the medication reconciliation strategy, practitioner buy-in presents the most significant hurdle: doctors say they are too busy to handle more paperwork; nurses feel that monitoring medication is a doctor's responsibility; and pharmacists, who are in the best position to perform the task, can't do it because hospitals can't afford to hire more of them. The dilemma is one of too little time and too few resources, says Shojania, and the targets are perpetually moving. There are ever-more diseases and diagnoses to understand, medications to juggle, procedures to study, and technologies to master.
Shojania shows me another image on his computer, of a person's wrist wrapped in five medical bracelets that snake halfway up to the elbow, each noting a different risk or indicating a reminder about the patient's care plan. Taken separately, each of these wristbands provides valuable information, but as a whole they can create, rather than lessen, confusion. Even for a systems type like Shojania, this takes the systems approach too far: "Who is even going to bother to look at all of those?"
THE PENDULUM may have swung too far. Perhaps adverse events don't just result from faulty systems, but also, as some experts suggest, from faulty thinking. The myth persists that doctors are judicious, rational creatures of science, yet even the best clinicians can be biased by any number of factors: their age, gender, class, or emotional state; their personal and professional histories; the team around them at any given moment; and their level of fatigue or stress. How much of what physicians do wrong is because the way they think is wrong?
Dalhousie University in Halifax has become a research and training hub for teaching doctors to think better. This is in large part due to Patrick Croskerry, an emergency doctor with a Ph.D. in clinical psychology, and a renowned expert in the fields of cognition, clinical decision-making, and medical error. On an unusually hot and humid November afternoon, I am ushered into an office at Dalhousie's medical school to meet with him and two of his colleagues: Preston Smith, senior associate dean of medicine, and David Petrie, an emergency physician and trauma team leader.
Early in his career, Croskerry was struck by the role of cognition in clinical decision-making, the mental processes doctors engage in when determining a diagnosis and a course of treatment. In the 1990s, well before the issues of medical error and patient safety were widely discussed, he began to study and write about the various factors that affect a physician's judgment, from unconscious biases to the impact of stress and time pressure. Diagnosis, for instance, is among the most crucial tasks in medicine and must be performed correctly, he says. Yet despite all the diagnostic technologies now at doctors' disposal — blood tests, X-rays, CAT scans, and so on — physicians still get the diagnosis wrong 10 to 15 percent of the time. Why do they get it wrong so often?
To make his point, Croskerry relates the example of a gifted young Halifax linebacker who recently reported for a football game despite having been sick for over a week. It was a tough game and he played hard, but at halftime he felt ill and signalled the team's doctor, who took a quick history of his illness: fever, cough, sore throat, headache, and fatigue. Convinced it was pneumonia, the doctor wrote the player a prescription for antibiotics and told his parents to take him home and put him to bed.
But the parents were worried and called an ambulance, and when the paramedics arrived they asked the athlete to explain his symptoms. Paramedics work on algorithms, utilizing systematic procedures to efficiently deduce medical priorities from the evidence and symptoms they observe. In this case, the paramedics heard "sore throat and coughing" and the doctor's diagnosis of pneumonia, so they only examined his chest, assuming a healthy young athlete couldn't be suffering from anything serious. They suspected the parents were overreacting but took him to a hospital anyway. There, the triage nurse listened to the paramedics and put him in a low-priority area. Eventually, he saw a physician, who as part of his routine check palpated the young man's belly, which was tender. The doctor sent him for a CAT scan, which revealed a ruptured spleen. As it turned out, the football player had infectious mononucleosis, which can cause swelling of the spleen. During the game, he had been hit several times, and as a result the inflamed organ had burst.
This, Croskerry says, is exactly how otherwise smart, well-trained professionals can make such incorrect diagnoses. Point by point, he dissects the errors: the first doctor did a "drive-by" diagnosis, the paramedics couldn't see past the patient's youth and general good health, and the triage nurse got caught up in the diagnostic momentum. This was mainly the fault of what psychologists call "anchoring." The practitioners ignored multiple possibilities for the cause of the linebacker's complaints and seized upon the first and most notable pieces of information. Once that anchor was down, they ignored all other details and neglected to investigate further.
Anchoring is just one example of more than a hundred identified heuristics, cognitive shortcuts that could also be called common sense, or rules of thumb, beliefs based on experience and intuition. The practice of medicine uses heuristics all the time — symptoms A, B, and C usually suggest X diagnosis, or this type of person is more prone to this disease than that one — but heuristics also play a major role in biased thinking. Israeli psychologists Daniel Kahneman and Amos Tversky wrote a formative paper in 1974 on heuristics and biases in judgment and decision-making. They identified several factors, a widespread one being representativeness, illustrated by the idea that, say, librarians tend to be sober and methodical, or that boys love to play with toy guns. As the authors note, "These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors." In other words, these generalizations hold up most of the time, but some librarians are, in fact, risk-taking party animals, and some little boys prefer Barbies.
Many of these biases are widely recognized in medicine, such as "psych-out error," the dislike or disregard for mentally ill people, who may be assumed to be delusional or drug-seeking. Another common bias is gender: women are often misdiagnosed when they have heart attacks, which are seen as mainly affecting men. Overconfidence in one's own intuition and expertise constitutes another cognitive pitfall. Then there's Sutton's slip, named after the bank robber Willie Sutton, who, when asked by a judge why he robbed banks, replied, "Because that's where the money is!" Less conspicuous possibilities may be overlooked in favour of obvious ones.
To reduce the instances when heuristics fail, physicians need to develop meta-cognition, an awareness of why they think what they're thinking. Historically, medical school has focused on content and skill, not critical thinking; Croskerry calls the traditional training "filling coconuts with a lot of facts." He instructs physicians to recognize their biases, teaching them to take themselves out of the immediate pull of the situation and say, "I've just locked on to something. Why did I come to that particular conclusion, and what else should I be thinking of?" — which applies to groups as well as individuals. "If the whole team is stampeding toward a diagnosis, we want someone to detach themselves and ask, what else could this be?"
The capacity for meta-cognition has become even more critical as medicine has progressed. Preston Smith pulls out his smart phone and lays it on the table. "The medical database is doubling every three to five years," he says, "but all that content is now readily available on devices like this." That's undoubtedly a boon, but understanding how to wade through this information requires clear thinking and self-awareness.
Indeed, as medical intervention pushes its limits, extending lifespans, curing previously fatal illnesses, and stimulating the body's capacity to heal, sound judgment will serve as one of a physician's most necessary skills. "You can fill that coconut up with every medical fact ever known," David Petrie says, "and that still won't lead to good decisions. In the future, as we continue to know more and continue to have more treatment options, there will be far more of a premium on complex problem solving."
Yet, as compelling and as intuitive as this approach may seem, demonstrating its efficacy could prove impossible. Did a doctor make the right diagnosis because he recognized his bias and corrected it, or because the evidence was particularly clear? Croskerry readily acknowledges that addressing medical error through sharpening cognition and decision-making doesn't easily offer quantitative evidence of success. But does that matter? "It's a little like jumping out of a plane without a parachute," he says. "I'm not sure we need to do a study to show that it's a bad idea."
A THIRTY-SIX-YEAR-OLD man named John is lying on a stretcher with his head and neck stabilized by a brace, in a trauma room at St. Michael's Hospital. He's barely conscious, he's sweating, his breathing is rapid, and he has soiled himself. One of the paramedics who brought him in explains to the four doctors in the room that the man was found on the floor of the shipping warehouse where he works, covered in an unidentified liquid. It appears that he fell from a height, perhaps a catwalk, and suffered a lung injury.
The physicians start the usual procedures: they insert an IV, monitor the patient's vital signs, assist his breathing with a manually pumped bag valve mask, and order chest X-rays. Then the situation falls apart. His blood pressure drops, and his breathing is reduced to gurgling wheezes; one of the doctors tries to intubate him, but can't get the tube through John's airway. The X-rays reveal no trauma, just fluid in the lungs. The team is confused about the diagnosis, and they move slowly and uncertainly about their tasks, second-guessing their theories aloud, then falling quickly into silence. Meanwhile, the paramedic, now wheezing and coughing himself, won't stop talking, and he hovers around the exam table, offering suggestions. In an attempt to get rid of him, one of the physicians orders him to go see a doctor. "But aren't you a doctor?" the paramedic snaps back.
Eventually, the team deduces that the patient is reacting to a toxic chemical. John starts having a seizure. While urging the doctors to call poison control, the paramedic collapses against a wall. They fumble, because they don't know the number. He recites it to them, then he too falls into a seizure. A phone consultation with a toxicologist confirms that John was exposed to a dangerous insecticide that still covers his skin and clothing. The paramedic has been exposed to it, and likely so have the physicians. They look at one another: no one has bothered to put on a mask, gloves, or a gown. "Oh, shit," the team leader says with resignation. Everyone stalls. John and the paramedic seem forgotten.
Just then, Ken Freeman, a fifth-year resident who has been observing the scene from behind a two-way mirror in the next room, enters and ends the simulation session. Chris Hicks, who has been playing the paramedic, gets up from the floor. The four physicians — three medical students and a first-year resident — abandon John, an expensive high-tech mannequin, to his lonely gurgling and spasms, and shuffle into a conference room to debrief.
Medical schools rely on such exercises to train doctors before they see real patients. In St. Michael's state-of-the-art simulation centre, students and residents are taught to perform surgeries, intubations, and other procedures on robotic patients that are programmed to sweat, breathe, convulse, blink, maintain a pulse (or not), and have heart attacks. The setting exactly mimics a real examination, trauma, or operating room. Roger Chow, the amiable wizard-behind-the-curtain technician who operates John's physiology from a computer, has a perfectionist's commitment to vérité. When I met him before the session, he was emptying the contents of a colostomy bag under the mannequin's butt. Later, he would feign an excellent Darth Vader rasp into the microphone: John's gasping voice.
Hicks uses these simulations to develop teamwork, communication, and reasoning abilities. As artificial as the scenario felt at the start, within minutes the students were caught up in the curious symptoms, the jabbering interloper's interruptions, the awkwardness of solving a problem with a group of strangers, and the humiliation of being wrong. This forms the nexus of the systems and cognitive approach: learning to work efficiently and competently within a group, and to think clearly and rationally under pressure.
Freeman opens the discussion Socratically: "So, how do you think it went?" One student looks mortified: "Well, I think we killed our patient, so it didn't go well." Another, the most senior participant, says the team was fragmented and couldn't figure out how to operate as a group, and they became too flustered by Hicks. Someone else regrets forgetting to put on gloves, which seems like an amateur mistake. After listening for a few minutes, Freeman gently reminds them of what they did right. They remembered the ABCs, he tells them, a mnemonic for "airway, breathing, and circulation." He adds that intubation is especially difficult to perform on a mannequin, because its rubber throat is stiff. Hicks tells them he has witnessed experienced doctors subjected to a similar simulation, and has watched all of them walk past a shelf stacked with gowns, masks, and gloves without grabbing them.
The conversation continues in this vein for a half-hour, and later they will spend another two hours learning about toxicology and how to treat poisoning. Freeman and Hicks say little, allowing the students and the resident to interrogate themselves and each other. Questions are raised about the team leader's role, how to vocalize information and suggestions, how to handle distractions like the chatty paramedic, and how to respectfully challenge a colleague's decision. This freedom to openly parse their mistakes and faulty thinking is important. Perhaps when dealing with a real patient, they will be more confident in asking for help, questioning their colleagues, or consulting a checklist.
A few weeks earlier, over coffee, Hicks had told me that one of the reasons he decided to become an emergency physician was that he hated the idea of not knowing how to handle a crisis. The specialty appealed to him because emergency and trauma present all kinds of acute situations, involving every part and system of the body. I remembered this after the debriefing, when Hicks told the group that poisoning cases such as this are atypical, and that as emergency doctors they may only encounter a few of them during their careers. Still, they will spend the day immersed in the subject, aiming to refine their knowledge, because that's what doctors do — and it's what we need them to do.
Training will never entirely inoculate doctors from making mistakes, because science and medicine are too complex and too imperfect, and human beings inevitably remain terminal cases. But in its ideal form, the medical profession looks something like this: a group of serious, curious, compassionate practitioners perpetually grappling with how they can do better.