Robert Turbow, MD, JD
“The very first requirement in a hospital is that it should do the sick no harm” – Florence Nightingale
Contemporary healthcare is “…the most complex undertaking in human history” – Peter Drucker
Background:
Healthcare workers face a formidable challenge: they must deliver the safest possible care while working in extraordinarily complex systems. High-reliability theory offers insight into this dilemma. Increasing reliability has the potential not only to improve outcomes but also to decrease a hospital's liability.
Hippocrates may have been the first to address preventable harm in healthcare, and the search for safer systems has continued ever since. In recent decades, increasing attention has been paid to the challenge of medical errors. Countless books and articles have examined the problem and the search for meaningful, sustainable solutions. Contributing factors include complex systems, distractions, culture, routine rule violations, and misaligned incentives.
Charles Perrow addressed the challenges of complex systems in his classic 1984 book, Normal Accidents. The book focused primarily on other high-risk industries, such as nuclear power, chemical manufacturing, and commercial aviation. However, healthcare shares many features with these high-consequence industries. In Normal Accidents, Perrow outlined three components of catastrophic failure:
1- Complex systems
2- A culture of “blame the victim.”
3- High productivity pressures
Anyone working in the healthcare industry can evaluate for themselves if the three elements listed above apply to their own job.
The term “high-reliability organization” has been used since at least the 1980s. A landmark contribution was made in 2001 when Weick and Sutcliffe published the first edition of Managing the Unexpected. These authors described the common themes in organizations that had found ways to improve outcomes in complex systems. A brief summary of their findings has been referred to with the acronym “FSORE.”
1- Preoccupation with failure
2- Reluctance to simplify
3- Sensitivity to operations
4- Commitment to resilience
5- Deference to expertise
As with any acronym or brief summary, "FSORE" oversimplifies the elegant themes developed by Weick and Sutcliffe. Managing the Unexpected became a blueprint for contemporary efforts by organizations to become so-called "HROs."
Subsequent editions of the book have extensively refined the principles above, and HRO principles are increasingly being applied in hospitals.
Introductory Concepts to Improve Reliability
As with any methodology, it is critical to understand the terminology. The COVID-19 pandemic has ushered in increasing use of the vocabulary of HRO.
Example 1– a hospital worker actively infected with COVID-19 attempts to enter the hospital.
Precursors- attempt to prevent the “error” before it happens
- Vaccine (theoretically, the worker may never become actively infected)
- Education- widespread campaign to inform employees not to come to work if ill
Barriers- if the “error” takes place, attempt to minimize/prevent the harm
- Masks, PPE, screening employees at the hospital entrance
- Redundancy- a second "screener" confirms the work done by the initial screener
- Recovery- take employees' temperatures and prohibit entry if the employee is febrile
Mitigation- if the harm takes place, attempt to limit the damage/downstream effects
- ICU care
- Anti-viral therapy
- Attempt to protect staff and visitors from exposure to the patient
Example 2– a NICU RN mistakenly attempts to attach a milk feeding tube to a UVC (umbilical venous catheter) line
Precursors–
- Training (consider simulation labs)
- Proctoring
- Regular competency evaluation
Barriers–
- Non-compatible connector (physical barrier)
- Follow the line to the patient (recovery)
- Two RNs confirm that the feeding is connected correctly (redundancy)
Mitigation-
- fluid resuscitation
- blood pressure support
- ICU care
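The layered defenses in the two examples above can be sketched as a simple probabilistic model, in the spirit of the "Swiss cheese" view of accident causation: harm reaches the patient only if the error occurs and every layer fails. This is a hedged illustration; all probabilities below are invented for demonstration and are not drawn from the literature.

```python
# Illustrative sketch of layered defenses (precursors, then barriers).
# All probabilities are hypothetical, for demonstration only.

def residual_risk(p_error: float, layer_failure_probs: list) -> float:
    """Probability that an error occurs AND slips past every defense
    layer, assuming the layers fail independently of one another."""
    risk = p_error
    for p_fail in layer_failure_probs:
        risk *= p_fail  # harm continues only if this layer also fails
    return risk

# Example 2 sketch: misconnection of a feeding tube to a UVC line.
p_attempt = 0.01        # hypothetical chance an RN attempts the misconnection
precursor_effect = 0.5  # assume training/proctoring halve the attempt rate
barriers = [0.05, 0.2]  # assumed failure rates: non-compatible connector,
                        # two-RN redundancy check

risk = residual_risk(p_attempt * precursor_effect, barriers)
print(f"Residual risk of harm reaching the patient: {risk:.6f}")
```

The multiplicative structure makes the design lesson concrete: each additional independent layer shrinks the residual risk by its own failure rate, which is why investing in both precursors and barriers outperforms relying on any single defense.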
Take Home Messages:
- Attempt to understand common fail points and vulnerabilities in the system.
- Attempt to categorize the types of failures- communication, diagnosis, etc.
- It is generally advisable to invest in “precursors” such as vaccines, training, proctoring.
- Is it possible to prevent the “error” from taking place?
- If the error happens, is it possible to prevent the error from causing harm?
- Design robust and resilient systems so that “errors” do not get through to the patient.
- Attempt to realistically model the risk- will the human being comply with the rule/process?
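The last point above, realistic modeling of human compliance, can be made concrete with a small sketch: a rule that is skipped offers no protection at all, so a layer's effective failure rate blends its failure rate when followed with the rate at which it is bypassed. The figures here are invented for illustration, not measured data.

```python
# Sketch: effective failure rate of a defense layer once human
# compliance is modeled explicitly. Numbers are hypothetical.

def effective_failure(p_fail_when_followed: float, compliance: float) -> float:
    """When the rule is skipped (probability 1 - compliance),
    the layer is bypassed entirely and provides no protection."""
    return compliance * p_fail_when_followed + (1.0 - compliance) * 1.0

# A two-RN check that fails 2% of the time when actually performed,
# but is skipped in 15% of feedings (assumed figures):
print(f"Effective failure rate: {effective_failure(0.02, 0.85):.3f}")
```

Even modest non-compliance dominates the result, which is why a well-designed physical barrier (such as a non-compatible connector, which cannot be skipped) is often more reliable than a procedural rule.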
References:
- https://www.nature.com/articles/477404a
- https://www.eventsafetyalliance.org/news/2016/11/9/normal-accident-theory-explained
- https://www.jstor.org/stable/j.ctt7srgf
- http://high-reliability.org/Managing_the_Unexpected.pdf
- https://hbr.org/2007/10/managing-the-unexpected-1
- http://www.ihi.org/Topics/Reliability/Pages/default.aspx
- https://www.pressganey.com/about/news/zero-harm-howto-achieve-patient-and-workforce-safety-in-health-careis-now-available-via-amazon-barnes-noble-and-800ceoread
- https://www.prnewswire.com/in/news-releases/the-trajectories-company-llc-offers-covid-19-risk-modeling-software-872368421.html