MPOG QI - Quality Committee Meeting Notes Monday, February 23rd, 2026
Attendance:
Balfanz, Greg (North Carolina)
Liu, Bin (Michigan Medicine)
Yuan, Yuan (MPOG)
Bauza, Diego (Weill Cornell)
Lewandowski, Kristyn (Corewell Troy)
Zehr, Levi (Nebraska)
Berndt, Brad (Bronson)
Lopacki, Kayla (Mercy Health - Muskegon)
Zhao, Xinyi (Sarah) (MPOG)
Berris, Josh (Corewell - Farmington Hills)
Lozon, Tim (Henry Ford - Wyandotte)
Zittleman, Andrew (MPOG)
Bollini, Mara (WUSTL)
Lu-Boettcher, Eva (Wisconsin)
Bowman-Young, Cathlin (ASA)
Mathis, Mike (MPOG)
Brennan, Alison (Maryland)
Mack, Patricia (Weill Cornell)
Brown, Morgan (Boston Children’s)
Malenfant, Tiffany (MPOG)
Buehler, Kate (MPOG)
Mango, Scott (MyMichigan)
Calabio, Mei (MPOG)
McKinney, Mary (Corewell Dearborn / Taylor)
Cassidy, Ruth (MPOG)
Milliken, Christopher (Sparrow)
Charette, Megan (MPOG)
Mirizzi, Kam (MPOG)
Clark, Courtney (Henry Ford)
Munson, Kristin (Michigan Medicine)
Claybaugh, Deborah (MyMichigan)
O’Conor, Katie (Johns Hopkins)
Clark, David (MPOG)
O’Dell, Diana (MPOG)
Cohen, Bryan (Henry Ford - WB)
Ohlendorf, Brian (Duke)
Coleman, Rob (MPOG)
Owens, Wendy (MyMichigan - Midland)
Colquhoun, Douglas (MPOG)
Pace, Nathan (Utah)
Corpus, Charity (Corewell Royal Oak)
Pantis, Rebecca (MPOG)
Delhey, Leanna (MPOG)
Pardo, Nichole (Corewell Grosse Pointe)
Dewhirst, Bill (Dartmouth)
Paul, Jonathan (Columbia)
Edelman, Tony (MPOG)
Pimentel, Marc Phillip (B&W)
Esmail, Tariq (Toronto)
Poindexter, Amy (Holland)
Fermi, Lilibeth (Cleveland Clinic)
Qazi, Aisha (Corewell)
Finch, Kim (Henry Ford Detroit)
Ratcliff, Kristie (Michigan Medicine)
Gedela, Radhika (University of Vermont)
Riggar, Ronnie (MPOG)
Gibbons, Miranda (Maryland)
Rolfzen, Megan, MD (Michigan Medicine)
Goatley, Jackie (Michigan)
Ruiz, Joseph (MD Anderson)
Goldblatt, Josh (Henry Ford Allegiance)
Schwerin, Denise (Bronson)
Greenblatt, Lorile (U Penn)
Scranton, Kathy (Trinity Health St. Mary’s)
Grewal, Ashan (Maryland)
Shah, Nirav (MPOG)
Hall, Meredith (Bronson Battle Creek)
Shettar, Shashank (OUHSC)
Heiter, Jerri (Trinity Health)
Smith, Mason (MyMichigan)
Henson, Patrick (Vanderbilt)
Stam, Benjamin (UMHS West)
Huntington, Michelle (Corewell West)
Stewart, Alvin (UAMS)
Janda, Allison (MPOG)
Stierer, Tracey (Johns Hopkins)
Johnson, Rebecca (UMHS West)
Tyler, Pam (Corewell Farmington Hills)
Kaper, Jon (Corewell Trenton)
Vaughn, Shelley (MPOG)
Karamchandani, Kunal (UT Southwestern)
Vitale, Katherine (Trinity Health)
Khan, Meraj (Henry Ford)
Wade, Meredith (MPOG)
Kheterpal, Sachin (MPOG)
Wedeven, Chris (Holland)
Kirke, Sarah (Nebraska)
Weinberg, Aaron (Weill Cornell)
Krauss, Kristin (Temple)
Westfall, Christine (Sparrow)
LaGorio, John (Trinity Health)
Wissler, Richard (University of Rochester)
Lalonde, Heather (Trinity Health)
Woody, Nathan (UNC)
Agenda & Notes
Opening, Attendance, and Minutes:
Meeting Start: 10:01 am
Roll Call: Taken via the Zoom participant list. Contact the Coordinating Center
(support@mpog.zendesk.com) if you were present but not listed on Zoom.
Minutes from January 2026 Quality Committee Meeting
Upcoming Events:
2026 Meetings & Events
MSQC + ASPIRE Combined Meeting: Friday, March 13 (Marriott, East Lansing).
ASPIRE-only Meeting: Friday, July 17 (Weber’s Hotel, Ann Arbor).
MPOG Retreat: Friday, October 16 (San Diego).
Sites outside Michigan are welcome at the Michigan in-state meetings; contact the
Coordinating Center for details.
Announcements:
New Cardiac Subcommittee Vice Chair: Dr. Ashan Grewal (University of Maryland). Dr. Grewal
is a cardiac anesthesiologist and a longstanding active contributor to the group, and
leadership is excited to have him join Dr. Allison Janda and Kate Buehler.
New Measures:
1. AKI-03-Peds: Acute Kidney Injury, Peds Cardiac
Evaluates acute kidney injury in pediatric cardiac surgery patients undergoing
cardiopulmonary bypass.
This measure is essentially a pediatric-specific subset of the existing adult AKI measures,
tailored to the pediatric cardiac population.
2. NCR-01-OB: Neuraxial Catheter Replacement for Childbirth
Tracks neuraxial catheter replacement rates, focusing on documentation when an epidural
catheter has been replaced in OB anesthesia cases.
Sites practicing in OB are encouraged to review the measure, provide feedback, and report
any issues, as this is part of expanding detailed subspecialty work.
Key Operational Updates
Glycemic Management Workgroup Update: The workgroup has continued refining
ambulatory/outpatient glycemic management measures, with some changes expected to
influence broader MPOG glycemic metrics. Their recent focus centers on how hyperglycemia
assessment should be measured across perioperative settings. The group plans multiple
standalone measures, such as checking glucose pre-op in diabetic patients, timely rechecks
after hyperglycemia, and appropriate timing of glucose checks after initiating insulin. These will
eventually combine into a composite hyperglycemia bundle. Reach out with questions, to request
past notes, or to express interest in participating in future glycemic management workgroup
meetings.
Measure Review #1 TOC-02, Reviewer: Alvin Stewart, MD (University of Arkansas)
Review Document link: TOC-02 - Alvin Stewart Review 2.23.26 - Google Docs
Discussion Highlights:
Dr. Kunal Karamchandani provided a comprehensive update on the Epic-embedded OR-to-ICU
handoff tool developed through the EMR workgroup of the Multicenter Handoff Collaborative.
Over two years, the group built a tool that automatically compiles all intraoperative
information (preop details, drugs administered, IVs placed, and ventilator settings) into a
clean, color-coded summary easily readable by ICU, PACU, and other hospital teams. The tool is
currently live at six to seven pilot sites, where it generates a summary report and includes a
dedicated OR-to-ICU handoff button that allows tracking whether a handoff was completed.
Karamchandani emphasized the need for user feedback as they refine the tool prior to a
broader Epic release planned for later in the year.
Dr. Nirav Shah expressed interest in how Epic plans to deploy the tool system-wide, whether
through an update or as a foundation build, and encouraged continued collaboration, noting
that MPOG has representation on Epic’s Anesthesia Steering Board. He also highlighted the
potential value of feedback from ASPIRE sites to inform future improvements.
Dr. Michael Mathis expanded on the challenge of handoff documentation becoming a
“checkbox” activity rather than truly reflecting communication quality. He noted that
meaningful evaluation of handoff quality requires real-time audits, which are resource-intensive
and hard to sustain. He suggested that rotating audits among a small percentage of sites could
be a feasible approach, as used in other quality-improvement frameworks, but acknowledged
this may still be difficult without local staffing support.
The group also discussed broader themes:
Persistent tension between electronic "one-button-and-done" documentation and actual
communication.
Variation in exposure to the pilot Epic tool, with many participants interested in screenshots
and live examples.
Recognition that existing Epic anesthesia summaries are difficult for clinicians, especially
non-anesthesiologists, to interpret.
Dr. Karamchandani confirmed he would share screenshots from both Hyperspace and Haiku
versions and noted that ICU nurses and non-anesthesia intensivists have repeatedly expressed
difficulty identifying what actually happened in the OR, reinforcing the value of the new tool.
Vote & Decision:
Modify TOC-02 by removing outdated procedure-specific exclusions.
Update the measure to use the organ-procurement phenotype.
Measure Review #2 TOC-03, Reviewer: Alvin Stewart, MD (University of Arkansas)
Review Document link: TOC-03 - Alvin Stewart Review 2.23.26 - Google Docs
Discussion highlights:
Dr. Alvin Stewart noted that, similar to prior discussions on transfer of care, evidence
supporting ICU handoff processes is limited. While small studies exist on checklist adaptation,
none are robust enough to inform MPOG specification changes. The rationale for TOC-03
remains strong: effective OR-to-ICU communication is essential, possibly even more critical
than PACU handoff. The current inclusion and exclusion criteria remain appropriate. Patients
transferred to the ICU are included, and others are excluded. Documentation workflows (e.g.,
Epic button click or event entry) capture completion but not the quality of the handoff, and
without stronger evidence on which elements influence outcomes, he recommended keeping
TOC-03 unchanged.
Dr. Nirav Shah agreed and highlighted that ICU handoff documentation varies more across sites
compared with PACU handoff, reinforcing the continued need for the measure. He expressed
interest in the Epic OR-to-ICU handoff report and its potential to improve clarity for clinicians
and create future research opportunities comparing implementing vs. non-implementing sites.
Clarifications were provided regarding the Epic handoff project: it was developed specifically by
the EMR workgroup of the Multicenter Handoff Collaborative. Only a subset of institutions
participated directly. Once refined and widely released, the tool will be evaluated for impact on
safety outcomes and near misses.
Vote & Decision:
No modifications were proposed or needed for TOC-03.
Measure Updates
1. TRAN-03-P - Transfusion Vigilance (peds)
The measure was updated to include all cases in which red blood cells were transfused, instead
of only those above the previous 15 mL/kg threshold, allowing evaluation of a broader patient
population.
All relevant transfusion cases are now captured accurately.
2. PUL-01 - Protective Tidal Volume
The measure was updated to exclude bronchoscopy-only cases, based on site feedback and case
review.
Avoids excluding bronchoscopies performed as part of larger thoracic surgeries, ensuring
accuracy without unintended omissions.
3. TEMP-01 - Active Warming
The case-start algorithm was updated and the measure now requires a minimum case duration
of 60 minutes for active warming, excluding shorter cases.
Administrative updates were made, including replacing ASA-code-based exclusions with the
organ-procurement phenotype, mirroring updates made to the TOC measures.
2026 Anonymized Benchmarking Plan
Plans for 2026 include adding benchmarking visualizations based on case-volume tranches so
sites can compare themselves with similar-sized institutions.
Additional benchmarking views will be developed specifically for ambulatory surgical centers
(ASCs) to better highlight ASC performance.
By late 2026, benchmarking will expand to include health-system-level visualizations using Epic,
AHA, and site-submitted data.
“Academic vs. community” benchmarking will not move forward in 2026 due to unclear
definitions and risk of misleading comparisons.
NMB-01 (TOF Monitoring) Discussion
Dr. Tariq Esmail raised concerns about applying NMB-01 in spine and neurosurgical cases where
MEPs/SSEPs are used. Providers resist TOF monitoring because neuromonitoring requires
minimal paralytic use, resulting in frequent NMB-01 flags. Staff become frustrated by perceived
“poor performance,” and neuromonitoring documentation is paper-based, preventing
automated exclusions.
Several sites reported the same issue: neuromonitoring performs TOF/fade assessments, but
because anesthesia does not document the values in Epic, cases are flagged.
Potential solutions discussed:
Manual workaround (Michigan): anesthesia asks neuromonitoring for the TOF ratio and
documents it in Epic so MPOG can capture it.
Local Epic solutions:
o Use neuromonitoring resources (SSEP/MEP) from case requests to exclude cases.
o Create an anesthesia event indicating external neuromonitoring, which MPOG could
map for exclusion.
Epic enhancement suggestion: Add a checkbox ("neuromonitoring performed externally") to
exclude these cases; helpful but not required if local events are implemented.
Participants agreed automation would help reduce documentation burden. Manual entry is
acceptable but not ideal.
A related question arose about lung transplant exclusions; these exclusions stem from older
documentation patterns and may need reevaluation as extubation practices evolve.
2026 Best Practices Exchange Proposal
A new “best practice exchange” will spotlight high-performing sites during measure reviews and
allow them to briefly share what has worked well in their local QI efforts.
The exchange will begin with measures showing wide variation; glycemic management was
identified as an early candidate.
Meeting Adjourned: 10:59 am
Next meeting: Monday, May 18, 2026
Appendix A: Full Transcript
ASPIRE Quality Committee February 23, 2026
(Original wording preserved, but sentence structure, punctuation, and flow corrected for clarity.)
Dr. Nirav Shah (MPOG): Good morning, everyone, or good afternoon, depending on where you're joining from. I
hope everyone had a nice weekend. For those of you in the Northeast who are getting inundated by a blizzard
right now, I hope everyone is safe and warm.
As always, we have a busy agenda. We'll go ahead and get started. I have a couple of announcements, and then we
have our measure review. Dr. Alvin Stewart will be reviewing Transfer of Care 02 and 03. Then we have a few
measure updates and a couple of other updates we wanted to share, time permitting.
Minutes from the previous meeting (the January meeting) are published on our website. If anyone has any
questions on those, please let us know. If anyone has any revisions, please let us know at the Coordinating Center;
otherwise, we will consider those approved. For roll call, we will use the Zoom participant list. If you joined by
phone and your name is not appearing, please let us know and we will mark you as attended.
We have a couple of upcoming events. In a couple of weeks, we're excited to have our MSQC-ASPIRE Joint
Collaborative Meeting. It's in March this year in Lansing rather than Southeast Michigan, where it typically is, so we're
excited about that. We have a great agenda. Dr. Karsten Bartels will be giving our keynote talk on multimodal
anesthesia. We have a panel presentation as well as our performance review, so we’re looking forward to seeing
many of you at that meeting. Typically this is mostly State of Michigan folks, but any active MPOG site that’s
interested in joining us for this meeting is more than welcome. If you’re interested in this or future meetings, don’t
hesitate to reach out and let us know.
We have our ASPIRE-only collaborative meeting on Friday, July 17, in Ann Arbor this year, rather than Lansing. The
theme will be centered around surgical site infections and perioperative sepsis, and we’re really interested and
excited about that theme. And then, of course, the MPOG Retreat this year will be Friday, October 16, in San
Diego. We hope to see many of you from around the country at the MPOG Retreat.
Big congratulations to Dr. Ashan Grewal (Maryland) for being named the new Cardiac Subcommittee Vice Chair,
working with Dr. Allison Janda (MPOG) and Kate Buehler (MPOG), who lead that committee. Many of you already
know Ashan is a cardiac anesthesiologist at the University of Maryland and has already been an active member of
the MPOG Cardiac Subcommittee, so I’m super excited to have him join the team and look forward to those
meetings.
I also wanted to share a couple of new measures we’ve been working on in the subspecialty space. AKI-03, which is
a pediatric AKI measure for pediatric cardiac surgery patients who have undergone cardiopulmonary bypass, has
been released. It’s essentially a subset of our adult AKI measures focused on that pediatric cardiac population.
In the OB space, we’re starting to get some pretty detailed measures based on the data that’s being submitted and
cleaned. NCR-01, which is neuraxial catheter replacement rates, looks at documentation in our OB anesthetic
records around when a neuraxial catheter (an epidural catheter) has been replaced. That was just released. For
those who are interested or who practice in these spaces, please take a look and let us know what you think and if
you are having any issues. We’re super excited to release both of those in the subspecialty space.
We also have a couple of other updates to share before we get to Dr. Stewart’s measures. The first is a quick
update on our glycemic management work within the glycemic management workgroup. We’ve now had a couple
of discussions with that group, and I want to thank them for their efforts and feedback helping us update measures
in this space. As most of you know, the initial focus was largely the ambulatory/outpatient space, although some of
the results will spill over into our regular glycemic management measures as well.
Our most recent meeting focused on hyperglycemia assessment. We had already made decisions around building
treatment-of-hyperglycemia measures for the outpatient space, and that work is ongoing and those details have
been finalized. But we hadn’t really talked about how we planned to measure assessment of hyperglycemia. The
group landed on developing multiple standalone measures around whether glucose is being checked appropriately
in patients at risk for hyperglycemia, and then combining those into a composite hyperglycemia bundle.
Some standalone measures include: if a patient has a history of diabetes, are we checking glucose in the
preoperative space? Are we rechecking in a timely fashion if hyperglycemia is documented? If we start insulin,
especially IV or subcutaneous insulin, are we rechecking blood glucose in an appropriate timeframe? The team is
building measure specifications for those standalone measures and then figuring out how they tie together into
the composite bundle.
We will also need a phenotype around the diagnosis of diabetes so we can identify patients with diabetes going
into surgery. That will likely involve diagnosis codes, documentation on the problem list and in the medical history,
and potentially other documentation. This will be a fair amount of work at the Coordinating Center to ensure we’re
accurately labeling patients. We don’t yet have a specific timeline for releasing these measures, especially those
that depend on identifying diabetes, but we’ll provide more information as we have it. In the meantime, we
wanted folks to know the plan.
As mentioned at previous meetings, if anyone has questions about this or wants us to resend notes from the
workgroup meetings, or wants to be involved in future meetings, don’t hesitate to reach out to us.
Any questions or comments about the glycemic management work that’s been ongoing?
[No verbal comments.]
I’ll keep an eye on the chat as well. With that, I think we’re ready for the measure review.
Dr. Nirav Shah (MPOG): Dr. Alvin Stewart from the University of Arkansas has graciously agreed to review both
Transfer of Care 02 and 03. I’ll turn it over to him and bring up his measure review so folks can see it. We’ll start
with Transfer of Care 02. Let me know if anyone has problems seeing what I’m sharing.
Dr. Alvin Stewart (UAMS): Thanks. It’s been a little while. I actually reviewed one of these two measures a couple
of years ago. First and foremost, not much has changed in the literature on transfer of care and handoff
communication. If anyone is interested in a research project, this is an area you could easily delve into because
there just isn’t much out there. There are lots of small quality improvement projects at various sites, but there’s no
major Cochrane review and no major meta-analysis of how we are communicating between the operating room
and PACU. The literature is very sparse.
It is important to have this documentation. There are elements of handoff that are included in the measure and
recommended, especially for new sites that might not have a formal tool. The measure remains valid. As we all
know, handing off between different providers is a source of miscommunication and dropped communication.
Good patient care requires that communication between units be effective.
The measure suggests certain elements via MPOG. Whether everyone follows that is determined by local
institutional requirements. The recommendations are available, and every institution needs flexibility to include
what their own quality and compliance people need; they may require certain things not on the MPOG list or may
go into more detail. That’s fine for this kind of measure. It allows flexibility for institutions to do what they need.
But again, there’s not much published, and you’re going to hear me say that a lot—there’s just not much there.
That said, we still have to do it. I think the measure’s success definition and flagged cases are fine. The updated
MPOG definition of timing was an improvement and made things more specific. Now we use anesthesia start to
fifteen minutes prior to PACU start. Previously, folks could document the handoff before it actually happened; now
there’s a bit of wiggle room but it’s closer to the appropriate time window. We’ve refined this over the years; I
don’t know how much more we can refine it without better research and published studies that define a more
granular list of required elements for better communication.
In terms of changes, I think we need only minor modifications, mainly removing some of the exclusion criteria
that are holdovers from the past. Other than that, the measure is cut-and-dried: do the documentation, and please
do it.
[Joshua Berris (Corewell Farmington Hills) via chat]: It just seems like Epic has it too easy to just
have “handoff.” I don’t see people really doing a handoff anymore. I don’t know how to actually
force the handoff to be done rather than the handoff button just being pushed.
[Greg Balfanz (North Carolina) via chat]: Working clinically. But I agree that measuring real-time
compliance with an actual handoff tool is very challenging. We have a tool and need to be doing
this related to an RCA, and even with that level of support from the institution it’s challenging to
do the audits.
[Joshua Berris (Corewell Farmington Hills) via chat]: We used to have an Epic “Transfer of Care,”
which at least was a more complete record of what we did in the OR so that a nurse could use it.
[Greg Balfanz (North Carolina) via chat]: UNC would love to see something like this from Epic.
[Mara Bollini (WUSTL) via chat]: I believe WashU is one of the six sites and Joanna Abraham is a
key contact here.
[Morgan Brown (Boston Children’s) via chat]: It would be great to see screenshots if that’s
possible.
[Eva Lu-Boettcher (Wisconsin) via chat]: Great if you can send a demo.
[Kunal Karamchandani (UT Southwestern) via chat]: EPIC embedded OR-ICU handoff_JA (2).pdf.
Dr. Nirav Shah (MPOG): Thank you, Alvin, and thank you for catching that exclusion. For historical perspective: the
Transfer of Care measures were built when MPOG was participating as a QCDR, so folks could submit measures to
CMS as part of pay-for-performance. At that time, we tried to mirror existing CMS/MIPS measures, which is why
there are still remnants of those specifications across MPOG. Those remnants are slowly being filtered out as
measures are reviewed and re-reviewed, and teams like Alvin’s catch them.
As you mentioned, this is one of those areas we know is important but there’s not a ton of research. Folks have
commented in the past that this can feel like a “check-the-box” measure. You don’t know for sure that a good
handoff is being completed just because someone does well on the measure. The measure essentially captures
that a provider documented that they completed a handoff.
In the past, especially in the ASPIRE collaborative in Michigan, we had staff perform bedside audits in PACU, rating
the quality of handoffs and whether all components were included. That is very hard to sustain. It’s resource-
intensive to assign staff to audit handovers: watching for an extended period of time, waiting for a handover,
documenting all the elements.
Having used the Epic handoff tool here at Michigan for a couple of years (clicking that handoff button and looking
at the Epic sidebar report), I do get value out of it. I'm hoping similar tools exist for Cerner and other EHRs as well,
and that for Epic sites, there’s still real value so that this is not just a check-the-box exercise.
From the Coordinating Center perspective, we also included a couple of administrative updates: we’re using the
organ procurement phenotype instead of procedure codes, and we felt that, in addition to arteriography and
venography, any procedure-specific exclusions probably need to go away. We’re now quite good at detecting
whether a patient was transferred from the OR to PACU, so this is a good opportunity to remove those one-off
procedure-specific exclusions. Thanks again for bringing that to light.
If you look at performance across sites, the vast majority are using some kind of handoff documentation, which is
awesome. People are using it, and hopefully the quality of handoffs is improving. There is still a chunk of sites,
maybe 20-25%, where nothing is documented in the anesthetic record that MPOG can consume, and another
chunk where documentation is inconsistent. That suggests opportunities for quality improvement to ensure
handoffs are both documented and conducted with less variation.
At the Coordinating Center we agree with your assessment that the measure is still valid, should be modified as
you suggested, and that there is still work to do across the collaborative.
Before moving on to Transfer of Care 03, I’d like to open it up to the group: any comments on this measure,
thoughts about how you’re documenting handoffs, handoff initiatives at your departments that might help others,
or specific comments on the specifications that we haven’t already discussed? Kunal, I know you’ve been involved
in the handoff collaborative.
Discussion:
Dr. Kunal Karamchandani (UT Southwestern): Yes. I wanted to update you on that. We worked with Epic
through the EMR workgroup of the Multicenter Handoff Collaborative, and over the last two years we
created an OR-to-ICU handoff tool embedded in Epic. It’s live now at only six or seven centers. We’re
piloting it and doing a survey and study to see how we can modify and edit it based on feedback. It should
go live by the end of this year for all Epic sites.
It’s a tool where everything you did in the OR gets automatically pulled into a report and is available for
PACU, ICU, or anyone in the hospital as part of a summary tab. I’m an ICU guy: if I get a patient from the
OR and want to know what happened in the OR, right now in Epic I can go to the anesthesia record as a
non-anesthesiologist and see a PDF. It’s pretty cluttered; you can’t make much out of it. With this, it’s like
going to a patient summary and seeing an ICU summary: everything from pre- to intra- to post- pops up,
color-coded, telling you exactly what happened in the OR: drugs given, IVs placed, last vent settings, and a
bunch of other details.
Right now it’s being piloted at about six institutions. Feedback would be great. We built it so that after
“post” you have an OR-to-ICU handoff button; you click it, the report pops up, and then there is a tab that
says you did the handoff. There is a way to track it. It took two years to build, but as I said, feedback
would be very helpful.
Dr. Nirav Shah (MPOG): Thank you. It would be interesting for us to learn more about Epic's plans for
deployment, whether it will be a foundation build that sites can pull down or will arrive automatically in a version
update. If you get clarity on that, we’d be very interested in sharing it with the group. We also have folks on Epic’s
Anesthesia Steering Board we can reach out to.
Dr. Kunal Karamchandani (UT Southwestern): We can certainly take feedback from MPOG back to Epic. If I
have feedback from you all in ASPIRE ("this would be really good"), I can share it in our Epic meetings. As
I mentioned, at pilot sites, once you finish in the OR, you have an OR-ICU handoff button; you click it, the
report pops up, and then you click a tab that indicates you completed the handoff. There is a way to track
it. Again, feedback about how to improve it is welcome.
Dr. Nirav Shah (MPOG): Great. Maybe we can chat offline about opportunities for MPOG sites to provide feedback
or for you, as a representative to both the handoff group and MPOG, to share information from that collaborative.
Mike, I think you had your hand up, and we also have a lot of chat comments.
Dr. Michael Mathis (MPOG): Yes. I was reading the chat as well and agree with much of it. I totally
acknowledge the tension in making this a meaningful measure: any time you give somebody a
checkbox that says "Are you a great anesthesiologist or not?", they're going to check "Yes, I'm a great
anesthesiologist." The question is how we overcome that.
The alternative is auditing these handoffs in real time, which is hugely resource-consuming. Other QI
forums in other specialties have similar problems. One solution I’ve seen is that only a small subset of
sites, maybe 10%, are audited in a given period, and those sites rotate over quarters or years. These
audits aren't punitive but provide qualitative information about handoff quality. I don't know whether
that works for this measure, because local sites would likely have to perform these audits themselves. I
don’t know if MPOG central could realistically offer a way to audit, since you would need someone
standing in PACU observing the handoff. But for similar problems requiring significant resources to do well
across all sites, one could imagine the Coordinating Center offering resources to a subset of sites.
Dr. Nirav Shah (MPOG): Yes, or maybe we could provide a playbook for audits, which is what we did before.
Where we struggled was frequency and having people with enough time and bandwidth to get a large enough
sample at their institution. Also, some aspects of handoff are case-type specific: the handoff you do for a Whipple
is likely very different from a cataract.
We’ve seen a range of comments in the chat: some about the handoff workgroup and Epic tools we haven’t seen
yet at Michigan; some about sites that are part of the pilot versus those that are not; and some about the "one
button and done" tension that Josh mentioned, a difference between checking the box and truly communicating.
Even in my own practice, the button may be checked but the quality of the handoff could have been better.
Greg at UNC raised similar points. A number of folks are interested in seeing the outputs of the handoff
collaborative. Kunal has already shared a PDF in the chat, which is a great start.
Dr. Kunal Karamchandani (UT Southwestern): I’ll get a screenshot of what it looks like. A bunch of people
wanted to see it. We’ve built it for both Hyperspace and Haiku so institutions that rely on the mobile
interface can use it. You’re in the ICU, you have your phone, you can see the report.
From the ICU side, when I talked to nurses and non-anesthesia intensivists, the main feedback was, “How
do I know what happened in the OR? It’s so difficult to figure out.” The handoff might happen physically,
someone might write on the whiteboard, but if I have a question the next morning on rounds, what do I
look at? That feedback was very valid.
Dr. Nirav Shah (MPOG): Totally. Many share our experience that the standard Epic anesthesia summary or full
record is not easy to digest, even for anesthesiologists or CRNAs, and it’s even harder for non-anesthesia clinicians.
For non-Epic sites: Haiku is the mobile Epic app, and Hyperspace is the desktop/laptop interface.
Any other comments from the group about this or about the measure specifications or details before we put it to a
vote and move on to Transfer of Care 03? There’s a lot of overlap between 2 and 3, and Kunal has already
referenced some ICU elements, but we’ll still discuss it.
[No additional verbal comments.]
Josh Goldblatt (Henry Ford Allegiance): Nirav, just to clarify, the changes here are the administrative-type
changes we discussed?
Dr. Nirav Shah (MPOG): Yes, thank you, Josh. These are the exclusions for arteriography, venography, and other
procedure-type exclusions, and then the more procedural change of replacing specific codes with the organ
procurement phenotype. Thank you for reiterating that.
[Joshua Berris (Corewell Farmington Hills) via chat]: Change my vote… thought “keep as is” meant
accepting the admin changes.
[Ashan Grewal (Maryland) via chat]: Change my vote to accepting the admin changes suggested.
[Nathan Pace (Utah) via chat]: Follow your lead.
Dr. Nirav Shah (MPOG): It looks like the initial poll results were 57% “keep as is” and 43% “modify,” which
surprised me. I suspect I may have confused people by not summarizing the changes before launching the poll.
Some folks in the chat are clarifying they meant to accept the administrative changes. Thanks for correcting your
votes and apologies for any confusion; I will own that.
We will proceed with modifying TOC-02 as discussed, removing outdated procedure-specific exclusions and
updating to the organ procurement phenotype.
Dr. Nirav Shah (MPOG): Let’s move on to Transfer of Care 03. I’ll zoom into that review. Alvin, back to you.
Dr. Alvin Stewart (UAMS): Very similar story here. We’re again talking about transfer of care, this time to the
ICU, and the literature is also sparse. There are small studies focused on adapting checklists to individual units,
but they’re small enough that I didn’t think it was worth listing them in detail.
The rationale for the measure remains sound: we’re transferring information between the OR and the ICU team,
which is extremely important, arguably more important in some cases than PACU handoff. We know elements are
needed, though we don’t have a definitive published list of them.
The inclusion and exclusion criteria for TOC-03 are quite clean. If the patient goes to the ICU, they’re included; if
they don’t, they’re not. I don’t see a need for changes there.
The evaluation of success and flagged cases is, again, analogous to PACU. For Epic sites, it’s a button click. At our
institution, it’s an event that the in-room provider places. The event includes suggested elements, but we can’t
realistically query those content details right now. And as we discussed earlier, even if we did, someone could
easily click the button and still not provide a meaningful report to the receiving ICU team.
We’re therefore left with the same limitations: until we have better data defining which specific elements improve
outcomes, we will continue to rely on local lists and local practice. My recommendation is to keep TOC-03 as is,
with no changes.
Dr. Nirav Shah (MPOG): Thank you. We at the Coordinating Center agree. We did not propose additional changes
beyond what you discussed. Interestingly, when we looked at performance across sites, we saw somewhat more
variation in documentation for ICU handoff in TOC-03 than in PACU handoff for TOC-02. That supports the idea
that this remains a valid measure and that there’s still work to do.
Kunal, I’m excited to learn more about that Epic OR-to-ICU handoff report you described and how it might mitigate
some of the confusion for intensivist teams reading anesthetic recordsboth locally here at Michigan and across
MPOG. There may be opportunities for research comparing sites that adopt the tool at different times or not at all.
Any comments from folks about TOC-03 or anything we missed?
Dr. Nirav Shah (MPOG): We’ll launch the TOC-03 poll. This is to decide whether to continue the measure as is, in
line with Dr. Stewart’s recommendation.
[chat Lilibeth Fermi (Cleveland Clinic)]: Are the members of the Multicenter Handoff Collaborative part
of the Epic handoff project?
Dr. Kunal Karamchandani (UT Southwestern): I was going to type this in response to Lilibeth’s question,
but I’ll just say it. The Multicenter Handoff Collaborative has multiple groups. The EMR workgroup is the
group that worked with Epic. The folks who were part of that EMR group were the ones involved in the
Epic handoff project. Joanna at WashU, Andrea at Iowa, myself at UT Southwestern, partners at Harvard,
Yale, and so on; five or six institutions were directly involved. Not all members of the collaborative were part of
that project; only the EMR workgroup was tasked with creating this tool.
As for quality and outcomes: once we refine the tool based on feedback and open it to everyone by the
end of this year, the next step is to look at its impact on outcomes, especially near misses and patient
safety indicators. That’s similar in spirit to the study by Scott and Amit Saha regarding intra-op handoff
and postoperative outcomes. Once we’re done refining and rolling it out, that’s where we could
collaborate with MPOG, comparing sites that are using it and those that are not.
Josh Goldblatt (Henry Ford Allegiance): And there are no modifications being proposed to TOC-03, so “modified”
probably shouldn’t be used as an option unless someone wants to propose a specific change.
Dr. Nirav Shah (MPOG): Thank you, Josh. Unless someone is proposing a modification, “keep as is” is the expected
choice.
Dr. Nirav Shah (MPOG): The poll shows 87% “keep as is,” which aligns with the Coordinating Center and Dr.
Stewart’s recommendation. Alvin, any final comments?
Dr. Alvin Stewart (UAMS): No, that’s all I have.
Dr. Nirav Shah (MPOG): Thank you again. We really appreciate your review and we’ll work on the TOC-02
modifications.
Dr. Nirav Shah (MPOG): Let’s move to a couple of measure updates from the last few months.
For the pediatric transfusion measure, the Pediatric Subcommittee previously voted to include any cases in which
red blood cells were transfused, rather than only those above a 15 mL/kg threshold. They wanted a broader
population from which to examine transfusion practices. We needed to do some work at the Coordinating Center
to ensure we captured all relevant cases, but that change is complete.
For the tidal volume measure, we are now excluding patients undergoing only bronchoscopies, based on case
review and feedback from sites. Thank you to those who reached out. We agreed that bronchoscopy-only cases
should be excluded, but had to work carefully to avoid excluding bronchoscopies that were part of larger thoracic
surgeries.
For TEMP-01, we updated the case start algorithm. There is a 60-minute minimum duration for active warming;
cases shorter than 60 minutes are excluded. We made administrative updates regarding how case start is defined
and, as with the TOC measures, we replaced an older ASA-code-based exclusion with the organ procurement
phenotype to make things more accurate.
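For readers tracing the TEMP-01 changes, the stated exclusions can be sketched roughly as follows. This is an illustration of the rules described above only, not the Coordinating Center's actual implementation; the function and parameter names are hypothetical.

```python
# Illustrative sketch of the TEMP-01 exclusions described above.
# Only the 60-minute minimum and organ procurement exclusion come
# from the meeting discussion; all names here are hypothetical.

MIN_CASE_MINUTES = 60  # cases shorter than 60 minutes are excluded

def temp01_included(case_minutes: int, organ_procurement: bool) -> bool:
    """Return True if a case stays in the TEMP-01 denominator."""
    if case_minutes < MIN_CASE_MINUTES:
        return False  # too short for the active warming requirement to apply
    if organ_procurement:
        return False  # excluded via the organ procurement phenotype
    return True
```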
If folks have questions on any of these updates, don’t hesitate to reach out to the Coordinating Center.
Now, regarding the NMB-01 question Tariq raised on Basecamp…
Dr. Nirav Shah (MPOG): Dr. Esmail from University Health Network in Toronto brought up a question around NMB-
01 and some nuances in spine and neurosurgical patients. Tariq, would you be willing to describe your question for
the group?
Dr. Tariq Esmail (Toronto): Sure. I tried to summarize it in my Basecamp post, but some of the nuance around
engagement and change management doesn’t translate well in writing: the resistors, the lack of champions.
At Toronto Western, one of our sites, we have a niche practice: all neurosurgery and spine cases are concentrated
there, and we have neuromonitoring technicians (electrophysiology) in almost every case.
In trying to ramp up NMB-01, we did all the usual things. A handful of staff, predominantly neuro staff assigned to
these cases, are seeing four out of seven cases flagged. They look at them and see spine surgeries with MEPs and
SSEPs; their reaction is, “My patient was being twitched; I used one dose of muscle relaxant 12 hours ago; this is
ridiculous. I’m not going to put the train-of-four monitor on.”
I’ve tried to hold the line and say, “This is informational. This is just a flag. You now know that all those flagged
cases were appropriate; you verified they’re spine cases with MEPs—ignore the flag.” But people don’t like seeing
the percentage. They don’t like being told they’re doing poorly compared to others. I’ve gotten criticism from a
small group about that.
I looked into it at our site first. Even if we had the ability to exclude cases with MEPs, we have a documentation
issue: the organization is totally digital except for neurophysiology. The neurophysiology team documents on
paper and then scans it into Epic. So even if we wanted a digital exclusion, we don’t have the data.
I wondered whether others have the ability to exclude these cases. Personally, I still think there is value in doing a
train-of-four. The neuromonitoring techs can give us a TOF module, but they often stop long before extubation. I
could go on, but I’ll pause there and I’m happy to hear others’ thoughts.
Dr. Nirav Shah (MPOG): Thanks, Tariq. It’s a super interesting question because when we rolled out NMB-01 (TOF
monitoring) and NMB-02 (reversal) we saw similar issues. At Michigan, we use neuromonitoring fairly often, but
we still ask the anesthesia team to document the train-of-four—either by asking neuromonitoring what they’re
seeing and entering it, or by checking it themselves.
In cases where they’re doing quantitative monitoring, we specifically state that if you don’t think reversal is
appropriate (for example, if the TOF ratio is greater than 0.9 according to neuromonitoring), you can still withhold
reversal and be compliant, but you must document that ratio in the anesthesia record so MPOG can see it. That’s
how we’ve approached it here.
I know that might not work at every site; some might say, “If neuromonitoring is documenting it, why should I?”
But that’s been our approach.
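The local rule Dr. Shah describes can be summarized in a short sketch: withholding reversal still counts as compliant when a qualifying quantitative TOF ratio is documented. This is a simplified illustration of that one rule, not MPOG's actual NMB-01/02 measure logic; the function name and the simplifying assumptions are ours.

```python
# Sketch of the documentation rule described above: no reversal is
# acceptable if a quantitative TOF ratio > 0.9 is documented in the
# anesthesia record. Simplified illustration only, not MPOG's logic.
from typing import Optional

def reversal_compliant(gave_reversal: bool,
                       documented_tof_ratio: Optional[float]) -> bool:
    if gave_reversal:
        return True  # reversal given: treated as compliant in this sketch
    # No reversal: compliant only with a documented qualifying ratio
    return documented_tof_ratio is not None and documented_tof_ratio > 0.9
```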
[chat Tariq Esmail (Toronto)]: Hi all, I placed a question for consideration in the Basecamp posts a
couple weeks ago. If anyone hasn’t seen it and can take a look; welcoming any/all feedback from anyone
using NMB-01 and your thoughts on my challenges/nuances in the spine/neuro population. Thanks so
much!
[chat Kunal Karamchandani (UT Southwestern)]: Hi Tariq, we struggle with the same.
[chat Kunal Karamchandani (UT Southwestern)]: We get flagged for cases where neuromonitoring
documents TOF and we don’t in Epic.
[chat Alvin Stewart (UAMS)]: In Epic case request, you can see if they require a resource of SSEPs, etc.
This can be excluded.
[chat Alvin Stewart (UAMS)]: Or make an anesthesia event in Epic. This is easily doable.
[chat Tariq Esmail (Toronto)]: Thank you for the discussion!
Dr. Tariq Esmail (Toronto): Thanks. I can see Kunal’s comments in chat, so I suspect he has thoughts too.
Dr. Kunal Karamchandani (UT Southwestern): Yes. I was going to respond to your Basecamp post, but it was too
complicated for that format; I’m glad you brought it here. We struggle with the same thing. I do a lot of spines.
Many of my cases get flagged because we didn’t document a TOF, but neuromonitoring did. They often do a better
job: they have fade ratios, etc. We rely on their assessment and reverse based on what they see, but we don’t
document it, so MPOG doesn’t see it.
I’d be interested in a way to automatically capture this, or at least a checkbox somewhere in the anesthesia
record that says “neuromonitoring being done by external agency,” and then those cases could be excluded from
NMB-01. That was my thought when I saw your post: if Epic could build a checkbox—“neuromonitoring done
externally”—and if that’s checked, the case would be excluded from the measure. Something to discuss with Epic
at a higher level.
Dr. Nirav Shah (MPOG): We don’t strictly need Epic for that. We could, among ourselves, create an anesthesia
event that indicates neuromonitoring was used, then map that to an MPOG concept, and exclude those cases.
That’s one approach.
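The event-mapping idea just described amounts to a simple exclusion check: a site documents an anesthesia event for external neuromonitoring, maps it to an MPOG concept, and cases carrying that concept drop out of the denominator. The sketch below illustrates the shape of that logic; the concept name and function are hypothetical, not an existing MPOG concept.

```python
# Sketch of the mapped-event exclusion idea described above. The
# concept name is hypothetical; no such MPOG concept exists yet.

EXTERNAL_NEUROMONITORING = "EXTERNAL_NEUROMONITORING"  # hypothetical mapped concept

def nmb01_included(case_concepts: set[str]) -> bool:
    """Exclude cases whose record carries the mapped neuromonitoring event."""
    return EXTERNAL_NEUROMONITORING not in case_concepts
```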
We already have a quantitative monitoring field in our anesthesia record. For the few sites where we get an
automatic twitch monitor feed, the ratio comes in automatically; users can also enter it manually. We’ve asked
providers to enter the ratio reported by neuromonitoring and to put a comment noting that it came from
neuromonitoring. That value then works for both NMB-01 and 02. I’m not saying this will work at every site, but
that’s what we’re doing.
What you suggested, Kunal (a checkbox or event), could also work, and we wouldn’t strictly need Epic to build it
as long as sites can map it. It would be easier if Epic built it, but not strictly necessary.
Dr. Tariq Esmail (Toronto): So, Nirav, in your workflow, neuromonitoring tells the anesthesiologist, “Here’s the TOF
ratio,” and the anesthesiologist manually documents it?
Dr. Nirav Shah (MPOG): Yes. It’s more of a pull than a push: the anesthesiologist, CRNA, or resident asks
neuromonitoring, “What’s the ratio?” They say, “0.9,” and we enter it.
Dr. Tariq Esmail (Toronto): Does that line always appear on the flowsheet, or only after some trigger? At our site,
the TOF line only shows up in the respiratory section after a certain entry.
Dr. Nirav Shah (MPOG): I’m not 100% sure; I’ll need to look at the build.
Dr. Tariq Esmail (Toronto): No problem. I’ll look at it locally after this.
Dr. Kunal Karamchandani (UT Southwestern): The less labor-intensive we can make it, the better. Someone
suggested pulling cases that had a neuromonitoring resource requested in Epic and excluding those. I don’t know if
that’s practical, but the more we can automate it and not rely on people adding comments, the better.
Dr. Nirav Shah (MPOG): I totally agree. Doing it based on procedure text or diagnosis codes alone would be error-
prone because of variation in how sites submit that information to MPOG, so that probably wouldn’t solve the
problem easily.
Dr. Tariq Esmail (Toronto): I’ll consider an immediate local fix while you’re exploring options centrally. I do like the
idea of a checkbox, but it’s also a potential “workaround” people might misuse—though I don’t think they would.
Dr. Patrick Henson (Vanderbilt): Yes, this discussion triggers another question: why are lung transplants
excluded from this measure? We’re extubating in the OR now, a lot. I’m wondering if that exclusion is still
justified. I don’t need to change it today; I’m just curious.
Dr. Nirav Shah (MPOG): It’s a remnant from earlier documentation patterns when almost all of those
cases went to the ICU intubated and sedated, and we didn’t have great documentation of that across
sites. So it’s legacy logic based on those patterns.
Dr. Patrick Henson (Vanderbilt): That makes sense. But since we are prioritizing positives, I’d prefer a way
to track them rather than exclude them, especially for neuro cases. I still think those are patients where
NMB monitoring is meaningful and we could miss opportunities by carving them out.
Dr. Nirav Shah (MPOG): Totally. Things do happen; I completely agree.
Dr. Nirav Shah (MPOG): I’d like to wrap up with a brief update on benchmarking enhancements and a “best
practice exchange” concept.
At a previous meeting, we talked about expanding the benchmarking data we share and got some feedback from
this group. We discussed feasibility at the Coordinating Center. Our plan in 2026 is to add indicators or
visualizations around case-volume tranches—so if you’re a very small, low, medium, or high-volume site for a given
measure, you’ll see where you fall and can benchmark against more similar institutions.
We also received feedback about having separate benchmarking visualizations for ambulatory surgical centers
(ASCs). We will continue to push on that, especially as more cases are done in ASCs. You can already filter for ASCs,
but these visualizations will highlight ASC performance more clearly.
For multi-hospital health systems, we have good documentation via Epic, AHA data, and site-submitted
information. By the end of 2026, we’d like to start using that to share some health-system-level visualizations.
One thing we likely won’t get to in 2026 is visualizations based on “academic vs. community” status. The
definitions are blurring; we want better definitions and data before building visualizations that could be confusing.
We also discussed a “best practice exchange” concept as part of these measure reviews, where we highlight sites
that have done exceptional work on a measure and ask them to spend a few minutes informally sharing what
they’re doing and how, as a way to learn from each other. We’ll start doing that with measures where
performance has more variation. The glycemic management measures we discussed earlier are good candidates.
We may reach out to some of you who have done successful work in that area.
We’re nearly out of time. Are there any questions or other matters before we adjourn?
[No additional questions.]
Thank you again to Dr. Stewart for the TOC-02 and TOC-03 reviews, to Tariq for raising the NMB-01 issue, and to
everyone for the thoughtful discussion. Thank you also to Josh for helping clarify the poll confusion earlier.
Have a nice Monday. Take care.
Dr. Kunal Karamchandani (UT Southwestern): Thanks, everyone. Bye.
Dr. Diego Bauza (Weill Cornell): Thanks, everyone.