JOAN PORCARO: Welcome to the WTW Vital Signs podcast. I'm Joan Porcaro, Senior Vice President of Risk Services for WTW Healthcare. I'm pleased to introduce our newest podcast series, The AI Project-- What Risk Managers Need To Know. This is a podcast series exploring the intersection of artificial intelligence and healthcare risks. As AI tools become embedded in everything, from radiology and triage to documentation and diagnostics, risk managers face new challenges in oversight, safety, and accountability.
This series unpacks critical issues such as spotting bias in AI outputs, governance, assessing risk in technology, navigating informed consent, tracking, trending, and documenting AI-related incidents, and understanding the evolving regulatory landscape. So today, we're taking a deep dive into one of the most urgent and rapidly evolving questions in modern care delivery: How do we, as risk managers, recognize, investigate, and respond when artificial intelligence may have influenced a clinical or operational event?
We've titled this session Behind the Glitch-- Tracing the Use of AI in Safety Events, and we'll explore where AI shows up in everyday practice and how risk managers can determine whether it played a role in a critical incident. Joining me in this episode are two guests, including someone who's been with us before. Let me introduce Mallory Earley. She serves as assistant vice president of risk management at ProAssurance, where she has dedicated the past decade to advancing risk management practices.
In her role, Mallory leads the CME accreditation team and oversees five regional managers along with their respective teams, delivering comprehensive risk management services across the nation.
Additionally, she supervises the data analytics and technology team, utilizing diverse data points and metrics to strategically direct risk management efforts. Mallory also provides expert advice to policyholders and agents on professional liability matters. She authors insightful articles, develops educational courses, and frequently presents at various conferences and professional associations.
She's an active member of the Alabama Bar Association, the American Society for Healthcare Risk Management, and the Alabama Society for Healthcare Risk Management, where she held the position of president from 2020 to 2021. Welcome again, Mallory.
MALLORY EARLEY: Thank you, Joan. It's great to be here today with you.
JOAN PORCARO: Thank you. Also with us today, I'm excited to introduce Kathleen Shostak. Kathleen is currently an independent healthcare risk management and patient safety consultant. She is an industry-recognized healthcare professional and a national speaker who has had a long tenure working with health systems to improve patient safety and reduce liability risks. Having worked for national risk and safety consultancies as well as healthcare liability insurers, she has led multi-facility clinical risk and safety collaboratives, conducting clinical risk assessments, teaching safety science, and facilitating improvement initiatives.
Kathleen serves as faculty for ASHRM's patient safety certificate course, is a past board member of ASHRM, and a contributing author of ASHRM's publications on patient safety and risk management. ASHRM has awarded Kathy the Distinguished Service Award for her contributions to healthcare risk management and patient safety. Welcome, Kathy.
KATHLEEN SHOSTAK: Thank you very much, Joan. And I'm really happy to be here with you and Mallory.
JOAN PORCARO: All right. Well, let's begin our conversation. I'll kind of throw the question out to Kathy first. So where is AI, artificial intelligence or augmented intelligence, showing up in healthcare settings today, and how would a risk manager know it's being used?
KATHLEEN SHOSTAK: I like that you use the term augmented intelligence. I think that's how the American Medical Association sees it. But certainly artificial, or enhanced, or-- Yeah. So there are lots of things that risk management professionals should be thinking about-- firstly, finding out the extent of AI being used in their organizations and where it's in use. Is it informing patient assessment or treatment algorithms? A clinical example might be sepsis profiling, to identify earlier the patient who is headed toward sepsis. Clinical decision-making, such as medication prescribing.
We see AI-enabled prescription and pharmacy distribution systems in place. With regards to patient engagement, I think we're all very familiar with that since we're patients, too. Marketing. Using patient portals and responding to healthcare education tools. We're figuring into those Net Promoter Scores-- how engaged are we with the health providers and systems we're using? Those are all the areas and considerations to think about when asking where it might be in use within the organization they're working in and supporting. And so, Mallory, what do you think?
MALLORY EARLEY: I think similar to where you started the conversation, Kathy. We're really seeing artificial intelligence pop up in a number of areas. We're seeing it with the use of chatbots on websites and even connected to patient portals. We're seeing how some EMR systems are integrating artificial intelligence into their own features, while other systems are still requiring physicians or practitioners to document the use of the technology separately. So there's a portion that are integrating, and others that are keeping artificial intelligence separate.
We're hearing examples of ambient listening-- the use of artificial intelligence for scribes, and recording or transcription in patient care settings, whether office-based or in the hospital. This can lead to some consent considerations and a number of concerns regarding claims and litigation that we'll hopefully get a chance to explore later in the conversation.
JOAN PORCARO: Yeah. Well, thank you both. So, as a longtime risk professional looking out to my fellow risk management professionals, how can risk managers keep themselves updated as to what and where AI may be in use? Kathy.
KATHLEEN SHOSTAK: Well, I would suggest they identify the IT and electronic medical record professionals, supply chain and products staff, the data people, and any other groups that are governing and managing the AI in their organization, and find out which ones can provide the information you need. Meet with them, or attend committee or department meetings-- that would be really informative. And if that's not possible, then find the documents, the policies, the task force or committee minutes that you can review, and just ask a lot of questions.
I saw a very interesting discussion post on the ASHRM listserv recently around AI. One member posted that they put together an oversight committee, and they actually were using the enterprise risk management domains to critically appraise what the AI platform is doing and how. An example given is that they worked through the process of onboarding a system-- their ambient room listening. And they learned a lot in looking at that particular AI use, ensuring the patient consent that Mallory mentioned, and how the AI-generated notes are reviewed for accuracy and so forth. So it's very interesting.
JOAN PORCARO: Good call-out. Mallory.
MALLORY EARLEY: I absolutely agree. The first step with artificial intelligence is to identify where it's being utilized. And in the same vein, consider not only where it's being used, but where it could become available. We're seeing this as an ever-changing and ever-growing industry, where an update may come along to your system or your EMR that previously didn't have a whole lot of integration with artificial intelligence, and it ultimately becomes much more integrated. When that happens, training is often missed, so you want to make sure that your staff and your providers are trained on the artificial intelligence.
So once you've identified it, you know it's available, and you know it's being used, you want to make sure that those who are charting in the medical record and documenting are aware of this artificial intelligence and really have the training to go alongside it. Within our office practices, we've seen artificial intelligence really be used for bureaucratic tasks-- things like letter writing, especially when you're trying to appeal insurance denials. We've seen that happen. We've also seen some patient follow-ups generated through artificial intelligence, as well as the typical summarization of medical records and that type of thing.
We really would encourage risk management to be a part of the discussion, so attend those internal meetings when they're available. Make sure you're in good communication with your IT staff, your patient safety, your nursing, your med exec teams. Really be a player who is not only aware of what's going on, but who becomes a champion within your organization-- identifying where it's being used and making sure the proper training is taking place.
JOAN PORCARO: Both of you, thank you for really good call-outs. I just want to add one other thing. I've been hearing from some physician practices that they're being contacted by AI startups pretty regularly. In a future episode, we'll cover contracting with AI providers-- how to analyze their efficiency and their integrity, and where they're already providing those services so they can give you some good feedback. I want to jump to another question, and I'm going to direct it to Mallory. Mallory, what elements are critical when evaluating tools that focus on clinical decision-making and clinical rationale? What are the most critical points?
MALLORY EARLEY: Sure. Not to be the negative one by any means, but really keep in mind the downside of things. The data set you're using-- you've got to understand how it was created. Was it something that was purchased externally, or is your current model being used to continuously train and update the data? Is it reflective of your patient population? We've seen instances where biases were innate in the data that was used to train an algorithm, and that obviously leads to some concerning output. As with many technology systems, garbage in, garbage out is a common term. The same goes for artificial intelligence.
We want to make sure that when data sets are purchased, you know how they were built and what incoming data they include, so you're aware of what your model is based on-- but more importantly, so you're able to see if there's some drift. If your decision-making tool seems to not be populating the appropriate differentials, it could be that your data set is off a little bit.
So it's something that has to be monitored. It is not a plug-and-play-and-leave-it-alone type scenario. With artificial intelligence, it absolutely has to be monitored throughout the use of it to really make sure it's going to be helpful and supportive of the critical decision-making that your clinicians are using it for.
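For listeners who want a concrete picture of what that ongoing monitoring can look like, here is a minimal, hypothetical sketch in Python of one common drift check, the Population Stability Index, which compares a model's recent score distribution against its validation-period baseline. The thresholds and the beta-distributed stand-in scores are illustrative assumptions, not any vendor's actual monitoring logic.

```python
import numpy as np

def population_stability_index(baseline, recent, n_bins=10):
    """Compare two score distributions; a larger PSI means more drift.
    A common rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate."""
    # Bin edges come from the baseline (validation-period) distribution
    edges = np.quantile(baseline, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0] -= 1e-9  # ensure the lowest baseline score falls inside the first bin
    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    # Clip recent scores into range so nothing falls outside the bins
    new_frac = np.histogram(np.clip(recent, edges[0], edges[-1]), bins=edges)[0] / len(recent)
    base_frac = np.clip(base_frac, 1e-6, None)  # avoid log(0) on empty bins
    new_frac = np.clip(new_frac, 1e-6, None)
    return float(np.sum((new_frac - base_frac) * np.log(new_frac / base_frac)))

# Entirely hypothetical stand-ins for a sepsis-risk model's output scores
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 8, 5000)  # scores from the validation period
recent_scores = rng.beta(3, 7, 1200)    # scores from recent production use
psi = population_stability_index(baseline_scores, recent_scores)
print(f"PSI = {psi:.3f}" + (" -> investigate for drift" if psi > 0.25 else ""))
```

The particular statistic matters less than the pattern: drift monitoring is a scheduled, repeatable check against a known baseline, not a one-time validation at go-live.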
JOAN PORCARO: Yeah. Thank you. Kathy, yeah, go ahead.
KATHLEEN SHOSTAK: Just to pick up on that, Mallory-- that's awesome. I see the risk professional's role as really helping the users of AI-informed clinical decision support tools-- algorithms meant to support diagnostics and treatment recommendations-- to think about how that tool is going to assist in the care of patients. So if we use sepsis as an example, it is certainly informing those profiles to consider when patients present with conditions that fit or potentially fit the diagnosis, and then giving a pathway for diagnosis and treatment. But they're not ironclad.
We know that systems continuously learn, and so it's really the provider's actual assessment and observation, discussions with the patient, all of that, that's key to arriving at the diagnosis and the subsequent treatment. And then, of course, when an AI-informed pathway is not followed, the provider's documentation should clearly note why. What was their thinking in the moment? They assessed and treated the patient utilizing the information being provided, but their choice was different for a reason, and having that documented in the record is key.
JOAN PORCARO: Yeah. So really highlighting the clinical rationale behind any decision-making. I want to bump back over to Mallory. When we're looking at a critical event, historically we, as risk managers, have always evaluated whether human factors played a role. What about tech? How would we address this aspect?
MALLORY EARLEY: Absolutely. I think this question plays well into Kathy's previous answer, in that you have to know your system, and you have to understand what type of AI you're utilizing and how it is integrated into your clinical decision-making. So during an RCA, it's important to evaluate the human roles as well as those system errors-- maybe an EMR inconsistency, the artificial intelligence might have hallucinated, or there could have been an incorrect recommendation from the decision support tools. All of these are going to be critical to determining the real cause of the event.
The decision support tool really can vary from the physician's clinical judgment, and at those times best practices in documentation should always be followed. At the end of the day, the clinician is the one in control, and they may be able to get confirmation through the use of artificial intelligence or other technology, which is great. Other times they might disagree, and so it really comes down to evaluating the event from the human aspect and the people involved, as well as how they integrate with the technology.
JOAN PORCARO: Thank you. Kathy, how do we define an AI event? What is the common terminology? And again, are we all speaking the same language across the country?
KATHLEEN SHOSTAK: Great question. Event reporting systems are not yet designed to capture AI-related events, or events to which artificial intelligence contributed, so it's really incumbent on healthcare organizations and the providers using them to include this information when reporting. But there is no standard definition yet. I will say that one example I have seen is in Pennsylvania. There's mandated reporting there, and there is a state reporting system, which they acknowledge was not designed to capture these types of events.
However, in their communications with the reporting hospital systems and so forth, they have provided a working definition. They say an AI-involved event specifically refers to incidents where the use of an AI tool may have contributed to a patient safety incident, requiring the documentation of the specific software, the context, and the outcome. So they are looking at this and asking reporters to take the initiative to help them when collecting the data and analyzing it for feedback.
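To make that working definition concrete, here is a minimal, hypothetical sketch in Python of what a reporting-system record capturing those three elements-- the specific software, the context, and the outcome-- might look like. The field names and example values are invented for illustration; they are not drawn from Pennsylvania's system or any vendor's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIInvolvedEventReport:
    """Hypothetical record capturing the three elements of the working
    definition: the specific software, the context, and the outcome."""
    software_name: str         # the AI tool involved
    software_version: str      # version matters when models are updated
    clinical_context: str      # where and how the tool was in use
    outcome_description: str   # what happened to the patient
    ai_contribution: str       # the suspected role the tool played
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Entirely hypothetical example entry
report = AIInvolvedEventReport(
    software_name="SepsisWatch",  # made-up product name
    software_version="2.3",
    clinical_context="ED triage; AI early-warning score in routine use",
    outcome_description="Delayed sepsis recognition; patient escalated to ICU",
    ai_contribution="Risk score stayed low despite deteriorating vitals",
)
```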
JOAN PORCARO: Mallory, anything to add?
MALLORY EARLEY: Yeah. I think the example that Kathy just shared and outlined for us goes straight to the ultimate reason we're here on this podcast. Creating some type of standardized nomenclature that healthcare can adopt is absolutely critical to being able to identify these events, code them appropriately, and then potentially take a look at future prevention. Without a common language, we're really struggling to see the true incidence of technology- or AI-related events versus other reasons an outcome may have been less than ideal.
So really taking the time to develop an industry definition and nomenclature to use, and then also making sure that our reporting systems have that capability, is absolutely critical.
KATHLEEN SHOSTAK: I wanted to just pick up on that, Mallory. That's a really good point, because we've seen that happen over time. When patient safety organizations came out, AHRQ developed the Common Formats, and those were then adopted by those organizations for coding. There have been enhanced systems in various PSOs and so forth. So I agree there should be some starting point industry-wide, and then, of course, depending on systems, there may be some customization made beyond that. But having at least a common definition would be a great starting point.
JOAN PORCARO: Thank you. A couple of weeks ago, I was at a doctor's appointment. And the good news is, when I walked in, they disclosed that they did use an ambient AI scribe. And it was interesting to me, because the fun part was that the doctor looked right at me. We were engaged in a conversation. They weren't having to type ferociously trying to get all the information in. But after I left the appointment, something kind of triggered for me.
What considerations should risk managers have when looking at how long to keep the transcripts or the recordings that are generated by these ambient scribes? The doctor doesn't use the whole transcript-- they use a portion of it. Mallory, any thoughts?
MALLORY EARLEY: Sure. I mean, as we mentioned before, ambient listening and the use of ambient scribes is really becoming popular, both in the medical practice and in our hospital settings. And we're seeing vendors that are pushing recording devices because, as you mentioned, it eases the burden of documentation on the physician. It can absolutely improve the patient experience because, as you said, you can sit down face-to-face with your doctor without feeling like you're talking into a recording device or having another third party in the room furiously writing down everything you're saying.
But you really do have to take a step back and consider the effects of artificial intelligence and how long these transcripts are maintained. Do they have a quick write-over policy after 24 hours, or five days, or one month? What does it look like? How long are you keeping the transcripts? How does the technology differentiate who is speaking? What happens if the person has an accent or a different dialect, or other languages are being spoken? Can it differentiate between the patient and maybe a guest they brought with them to help in the healthcare decision-making?
So there are a lot of questions around ambient listening. I think overall it's a great addition, but there are some areas to be concerned about, and more importantly, you should create policies and procedures around how it's being used and how long you're keeping things such as a transcript. Because it's going to come up when this ultimately makes its way into the courtroom through litigation and potential claims. I think we'll see some of this being used as evidence-- or spoliation of evidence. So we'll see how it shakes out, but I highly encourage there to be policies around those transcripts and what practices or hospitals are going to do about maintaining them.
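As a concrete illustration of the write-over question Mallory raises, here is a minimal, hypothetical sketch in Python of an automated retention job. The directory path, file pattern, and 30-day window are invented placeholders; a real ambient scribe vendor would implement retention inside its own platform, and any deletion job would need to respect litigation holds, precisely because of the spoliation risk just mentioned.

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Hypothetical values; a real window comes from organizational policy
RETENTION_DAYS = 30
TRANSCRIPT_DIR = Path("/var/ambient_scribe/transcripts")  # invented location

def purge_expired_transcripts(directory: Path, retention_days: int) -> int:
    """Delete transcript files older than the retention window and return
    how many were removed. A real job would first check for litigation
    holds, since deleting after a claim is anticipated risks spoliation."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    removed = 0
    for transcript in directory.glob("*.txt"):
        modified = datetime.fromtimestamp(transcript.stat().st_mtime, tz=timezone.utc)
        if modified < cutoff:
            transcript.unlink()
            removed += 1
    return removed

if __name__ == "__main__":
    print(f"Removed {purge_expired_transcripts(TRANSCRIPT_DIR, RETENTION_DAYS)} transcripts")
```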
JOAN PORCARO: So Kathy, any thoughts?
KATHLEEN SHOSTAK: Just to say that, in the absence of specific guidelines, I think we can look to organizations that promulgated guidelines prior to this high-tech use-- for example, the American Health Information Management Association. They've adopted various guidelines, and so we can look to those and perhaps adopt them until something specific to this technology is finalized. Once that happens, we'll have dedicated guidelines in place to utilize.
JOAN PORCARO: So who is ultimately responsible with regard to professional negligence? Mallory.
MALLORY EARLEY: Oh goodness. That's a tough one. At the end of the day, there are really two questionable outcomes that you want to call attention to in regards to AI. Either the artificial intelligence prompt is correct and the physician overrides it or ignores it based on their clinical decision-making, or the artificial intelligence prompt is wrong and the physician ultimately follows it. And so I think at the end of the day, it's clear that the ultimate responsibility for any type of diagnostic or therapeutic decision will still remain with the physician.
I mean, ultimately, they are the person who validated the results of any clinical decision-making tools, and it's their name on the record. Although artificial intelligence might be used as a tool or to help in the decision-making process, AI is not there to make the ultimate decision. The physician must use his or her clinical judgment, and I don't see that changing anytime soon. I think artificial intelligence is there to help and to assist, but it's just one more tool in the toolbox, not an ultimate answer by any means.
JOAN PORCARO: Anything to add, Kathy?
KATHLEEN SHOSTAK: Completely agree. It's really going to come down to the provider's knowledge of the system and the documentation.
JOAN PORCARO: Yeah. One of the topics I wanted to touch on today: I have been seeing more requests from risk managers who are looking to create an organizational policy on AI usage. Not the policy that your IT department might be looking at, but, as an organization, what should the structure or framework for this particular process be? Who should be using it? When should they be using it? What are some practical frameworks and guidelines?
In talking with clients about building a policy, I often say you have to have a purpose. You have to know who the scope relates to. What departments are you talking about-- clinical applications, like diagnostics and decision support, as both of you called out? And what about non-clinical areas like finance, HR, and operational forecasting? The policy really needs to be very ERM, enterprise risk, in nature, of course. And then make sure that staff across the board-- providers, nurses, physicians, everyone who's touching this type of tool-- understand the difference between AI, gen AI, and algorithmic systems.
What type of system is predicting or creating automation? Then look at governance-- what's permitted and what's not permitted in terms of usage? We could probably do an entire conversation, and likely we will, on what elements need to be part of your AI usage policy. Do either of you have any thoughts on this?
MALLORY EARLEY: I know from my experience with my employer, ProAssurance, that we're even using these types of documents internally to guide the individuals who are using the technology. And just as within an insurance carrier, I think hospital- and physician-practice-based organizations absolutely need some guidance, because you can't just leave it completely unpoliced. There are a number of HIPAA concerns about what information is going in, and about the consent of the patients whose information is going in. So really, just having some guardrails in place can absolutely help this process.
And more importantly, it goes back to what Kathy and I were saying before: it develops that nomenclature within at least your organization, even if it might be a little bit different from the terminology folks are using nationally. If you have a defined nomenclature within your own organization, you're going to be in a better position than if you didn't.
KATHLEEN SHOSTAK: Yeah. I would fully support an enterprise approach, Joan, as you've mentioned, because it really does address all of those domains and provides a framework for assessing and setting up guidelines in terms of governance, strategy, all of those operations, and so forth.
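For anyone who wants a starting skeleton for such a policy, here is one minimal, hypothetical way to structure the elements discussed above-- purpose, scope, system types, permitted and prohibited uses, governance, and training-- as a reviewable artifact in Python. The field names and example values are illustrative assumptions, not an established standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIUsagePolicy:
    """Hypothetical skeleton mirroring the policy elements discussed above."""
    purpose: str
    scope_departments: list[str]     # clinical and non-clinical areas covered
    covered_system_types: list[str]  # e.g., predictive AI, gen AI, algorithmic systems
    permitted_uses: list[str]
    prohibited_uses: list[str]
    governance_owner: str            # the committee or role accountable for oversight
    training_requirements: list[str] = field(default_factory=list)

# Invented example values for illustration only
policy = AIUsagePolicy(
    purpose="Define safe, accountable use of AI tools across the organization",
    scope_departments=["Radiology", "Pharmacy", "Finance", "HR"],
    covered_system_types=["predictive models", "generative AI", "algorithmic systems"],
    permitted_uses=["drafting documentation that a clinician reviews and signs"],
    prohibited_uses=["entering patient information into unapproved public tools"],
    governance_owner="AI Oversight Committee",
    training_requirements=["initial onboarding", "annual refresher"],
)
```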
JOAN PORCARO: Yeah. So I want to take a moment and explore something else. What are the early warning signs the front line needs to be aware of-- the contributing factors to recognize early when an event occurs that might have been influenced by an AI application? Kathy.
KATHLEEN SHOSTAK: Sure. I think it goes back to Mallory's discussion about identifying where the AI is being used, what tools it's embedded in, what it is informing, and the training that goes along with that. That training certainly should include some indicators of what the tools are. What can we help the providers and the frontline staff understand so they can really recognize how AI may have impacted, been involved in, or been partly causative of an event that occurred? So if we look at recognizing clinical deterioration, this is a big area.
We know that failures here cause delays, whether on the inpatient side or even in ambulatory care. The patient's vitals are deteriorating-- we have many monitoring systems in place for that, but even when changes are identified, they have to be acted on. In the outpatient area, a patient who has congestive heart failure starts to gain weight progressively. That's an indicator. So we are using all of these types of things to monitor patients, and AI may be informing them and providing information for the clinical staff, the frontline providers, and nurses and so forth to act on when that information comes through.
Are there indicators that you really need to pay attention to and evaluate? Are they important to the patient? Acting on them means there are fewer delays in responding to condition changes when the patient really does start to deteriorate and their clinical condition needs intervention. A lot of other tools go along with that. I know, Mallory, you can chime in as well. Joan, we've discussed this a lot in terms of cognitive tools-- having people aware. And that, again, goes back to the bedside providers, the frontline providers, even in the ambulatory space.
And then there are those system tools for early warning-- how do we respond? What's the response infrastructure like? And if a tool failed to warn us, that could certainly be an indicator that the AI-enabled monitoring tool, informing tool, decision tool, algorithm, whatever it is, really did not act as intended. So that could be an indicator that it contributed to an event involving a patient.
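As a simple illustration of the kind of early-warning rule Kathy describes for the heart failure example, here is a minimal, hypothetical sketch in Python of a weight-trend check. The thresholds loosely echo commonly cited heart-failure self-monitoring guidance (roughly 2-3 pounds in a day or 5 pounds in a week), but they are illustrative only; a real alerting rule would come from clinical leadership and the tool's validated logic.

```python
from dataclasses import dataclass

@dataclass
class WeightReading:
    day: int        # days since monitoring started
    pounds: float

def weight_gain_alert(readings: list[WeightReading]) -> str | None:
    """Flag rapid weight gain, an early sign of fluid retention in heart
    failure. Thresholds are illustrative only, not clinical advice."""
    by_day = {r.day: r.pounds for r in readings}
    latest = max(by_day)
    # Roughly 2-3 lb in a day or 5 lb in a week are commonly cited warning signs
    if latest - 1 in by_day and by_day[latest] - by_day[latest - 1] >= 3:
        return "ALERT: 3 lb or more gained in 24 hours; notify the care team"
    if latest - 7 in by_day and by_day[latest] - by_day[latest - 7] >= 5:
        return "ALERT: 5 lb or more gained in 7 days; notify the care team"
    return None

# Hypothetical readings: a 5.5 lb gain over a week triggers the weekly rule
readings = [WeightReading(d, w) for d, w in [(0, 180.0), (1, 180.5), (2, 181.0), (7, 185.5)]]
print(weight_gain_alert(readings))
```

Just as important as the rule itself is the failure mode Kathy names: if the tool stays silent while the trend is present, that silence belongs in the event analysis.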
JOAN PORCARO: Mallory.
MALLORY EARLEY: I think you absolutely summed it up. My thought is that your training and your experience are key. Just because you now have an added tool that might make your job a little bit easier doesn't mean that, at the end of the day, you're not still doing your job. So I think that really is something to keep in mind-- you can't solely rely on these new tools to always tell you when there's going to be a problem or to anticipate a change in vital signs that needs attention.
The front lines of healthcare still need early recognition from those who are at the bedside and able to see when things change, able to review lab results and notice a trend. It's great to have software help you, but you can't always rely on it. You still have to use your clinical judgment and your training and experience in these situations as well.
JOAN PORCARO: Oftentimes with something like this, we think about its newness, and we're trying to ensure that there's good awareness across the organization. So I'll throw this question to Kathy. How are the insights from these post-incident reviews applied to strengthen prevention efforts and really drive some improvements? Said differently, how are these learnings shared with internal teams and frontline staff?
KATHLEEN SHOSTAK: Yeah. This would be critical, Joan. An event involving AI that affected a patient's outcome really should be reviewed in a similar manner to other clinical event reviews, whether it's quality, or performance improvement committees, or peer review. But learnings really need to be part of the risk and safety communications, and most organizations have lots of channels in place-- safety rounds, daily briefs, postings on those service or department safety boards.
And then the ongoing monitoring really needs to be part of those action plans that result from root cause analysis and apparent cause analysis, really to determine whether your action plan was effective. An example might be a drug dosing error by an AI-informed prescribing system. The results of the analysis should be part of an action plan aimed at preventing, or at least mitigating, a future similar event-- but include the AI system, so that you would be pulling in IT or the developers and the users of that system.
JOAN PORCARO: Mallory.
MALLORY EARLEY: I absolutely agree with what Kathy said. Although the technology may be new and different, how you identify issues, work towards prevention, or initiate improvements really comes from knowledge of your system and the ability to quickly identify the issue, discuss it amongst the appropriate team members, and work towards a solution to prevent it from happening again. So really, it's an awareness-- you've got to be able to recognize it, then evaluate, and ultimately, hopefully, make some changes to prevent recurrence.
JOAN PORCARO: Yeah. Well, thank you. We're coming to a close for our episode today. So if you could only share one key point about this topic, what would it be? Mallory.
MALLORY EARLEY: I think mine would be to know what you're using and learn how to use it. Ultimately, knowledge is going to be power when it comes to any technology, but in particular artificial intelligence, and this type of technology absolutely has to be embraced, but with proper guardrails and really the appropriate training to be able to move patient safety forward. You don't necessarily need to be fearful of the impact because it could be very positive, but at the end of the day, you have to know where you're using it in your organization, how you're using it, that you've been properly trained on it, and how it's changing.
So it's not just a one-and-done learning item. It's something that's progressive and should continue on throughout your tenure with the organization.
JOAN PORCARO: Yeah. Thank you. Kathy.
KATHLEEN SHOSTAK: I would second exactly what Mallory said, but also add that frontline caregivers need those tools: cognitive ones that help with situational awareness, system tools, the rapid response pathways, and team tools-- something we didn't really touch on. But communications, psychological safety, how the team works together, so that people understand how we are using these tools. They have great potential and are already helping to improve patient care.
But as Mallory mentioned, there needs to be guardrails and safeguards put in place, and everyone aware that the AI-enabled tools that we are utilizing can introduce other potential downfalls or events that need to be considered.
JOAN PORCARO: Thank you. Well, Mallory, I first want to thank you for joining us today. I really appreciate you being here.
MALLORY EARLEY: Well, always happy to help, Joan. I really enjoyed it. Thank you for having me.
JOAN PORCARO: And Kathy, again, thank you for bringing your expertise to our conversation.
KATHLEEN SHOSTAK: Thanks, Joan. It's been a pleasure talking and discussing this and planning it with you and Mallory. And I hope our discussion is informational and helpful to those taking a listen.
JOAN PORCARO: Well, that's it for today's episode of WTW Vital Signs. Thank you for spending part of your day with us. And if you found this information helpful, please follow the show on YouTube, Apple, or Spotify and share it with any of your fellow risk management professionals who might enjoy it too. I'm Joan Porcaro, and I look forward to being with you again next time on a future WTW Vital Signs podcast.
SPEAKER: Thank you for joining us for this WTW podcast, featuring the latest thinking on the intersection of people, capital, and risk. WTW hopes you found the general information provided in this podcast informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us.
In North America, WTW offers insurance products through licensed entities, including Willis Towers Watson Northeast, Inc., in the United States, and Willis Canada, Inc., in Canada.