MPOG QI - Quality Committee Meeting Notes Monday, January 26th, 2026
Attendance:
Abess, Alex (Dartmouth)
Agerson, Ashley (UMH West)
Berndt, Brad (Bronson)
Bollini, Mara (WUSTL)
Bow, Peter (Michigan)
Bowman-Young, Cathlin (ASA)
Brennan, Alison (Maryland)
Brown, Morgan (Boston Children’s)
Brown, Sheree (Trinity Health)
Buehler, Kate (MPOG)
Calabio, Mei (MPOG)
Cassidy, Ruth (MPOG)
Charette, Megan (MPOG)
Chopra, Ketan (Henry Ford - Detroit)
Claybaugh, Deborah (MyMichigan)
Cohen, Bryan (Henry Ford - West Bloomfield)
Coleman, Rob (MPOG)
Corpus, Charity (Corewell Royal Oak)
Cusick, Jordan (OHSU)
Cywinski, Jacek (Cleveland Clinic)
Delhey, Leanna (MPOG)
Denchev, Krassimir (St Joseph Oakland)
Dewhirst, Bill (Dartmouth)
Edelman, Tony (MPOG)
Ellison, Pavithra (WVU Medicine)
Esmail, Tariq (Toronto)
Everett, Lucy (MGH)
Finch, Kim (Henry Ford Detroit)
Gedela, Radhika (University of Vermont)
Georgiadis, Paige (University of Vermont)
Gereboff, Avner (Cedars-Sinai)
Gibbons, Miranda (Maryland)
Goatley, Jackie (Michigan)
Goldblatt, Josh (Henry Ford Allegiance)
Greenblatt, Lorile (U Penn)
Hall, Meredith (Bronson Battle Creek)
Harwood, Tim (Wake Forest)
Heiter, Jerri (St. Joseph A2)
Herren, Melanie (MPOG)
Horton, Brandy (Anes Associates)
Huntington, Michelle (Corewell West)
Jacob, Seth (University of Kansas)
Johnson, Rebecca (UMHS West)
Kaper, Jon (Corewell Trenton)
Karamchandani, Kunal (UT Southwestern)
Khan, Meraj (Henry Ford)
Kirke, Sarah (Nebraska)
Kunkler, Bryan (Corewell West)
Lacca, Tory (MPOG)
LaGorio, John (Trinity Health)
Lalonde, Heather (Trinity Health)
Lewandowski, Kristyn (Corewell Troy)
Liu, Bin (Michigan Medicine)
Liu, Linda (UCSF)
Lopacki, Kayla (Mercy Health - Muskegon)
Lu-Boettcher, Eva (Wisconsin)
Ludwig, Kristen (Henry Ford)
Mack, Patricia (Weill Cornell)
Malenfant, Tiffany (MPOG)
Mathis, Mike (MPOG)
McComb, Joseph (Temple U)
McKinney, Mary (Corewell Dearborn / Taylor)
Mentz, Graciela (MPOG)
Milliken, Christopher (Sparrow)
O’Conor, Katie (Johns Hopkins)
O’Dell, Diana (MPOG)
Ohlendorf, Brian (Duke)
Owens, Wendy (MyMichigan - Midland)
Pace, Nathan (Utah)
Pantis, Rebecca (MPOG)
Pardo, Nichole (Corewell Grosse Pointe)
Paul, Jonathan (Columbia)
Pimentel, Marc Phillip (B&W)
Poindexter, Amy (Holland)
Qazi, Aisha (Corewell)
Rolfzen, Megan, MD (Michigan Medicine)
Russell, Michael (University of West Virginia)
Salamanca, Yuliana (Temple)
Schwerin, Denise (Bronson)
Shah, Nirav (MPOG)
Shettar, Shashank (OUHSC)
Smith, Mason (MyMichigan)
Stam, Benjamin (UMHS West)
Steadman, Randolph (Houston Methodist)
Stumpf, Rachel (MPOG)
Tyler, Pam (Corewell Farmington Hills)
Vaughn, Shelley (MPOG)
Vitale, Katherine (Trinity Health)
Wade, Meridith (MPOG)
Wedeven, Chris (Holland)
Weinberg, Aaron (Weill Cornell)
Westfall, Christine (Sparrow)
Wissler, Richard (University of Rochester)
Woody, Nathan (UNC)
Yacoubian, Stephanie (B&W)
Yeoh, Cindy (Moffitt)
Yuan, Yuan (MPOG)
Zhao, Xinyi (Sarah) (MPOG)
Zittleman, Andrew (MPOG)
Agenda & Notes
Opening, Attendance, and Minutes:
Meeting Start: 10:03 am
Roll Call: Taken via Zoom; contact the Coordinating Center (support@mpog.zendesk.com) if you were
present but not listed on Zoom.
Minutes from November 2025 Quality Committee Meeting
Upcoming Events:
2026 Meetings & Events
MSQC + ASPIRE Combined Meeting: Friday, March 13 (Marriott, East Lansing).
ASPIRE-only Meeting: Friday, July 17 (Weber’s Hotel, Ann Arbor).
MPOG Retreat: Friday, October 16 (San Diego).
Sites outside Michigan are welcome at the Michigan in-state meetings; contact the
Coordinating Center for details.
Announcements
MPOG recognized Bethany Pennington, PharmD (Washington University) as the Featured Member of
the Month for January and February. The committee also welcomed Indiana University Health System as
MPOG’s newest participating site.
Two leadership updates were announced:
Dr. Sharon Reale (Harvard Medical School / Brigham and Women’s Hospital) was introduced as
the new Obstetric Subcommittee Vice Chair.
Dr. Eva Lu-Boettcher (University of Wisconsin) was announced as the new Pediatric
Subcommittee Vice Chair.
Both were acknowledged for their longstanding contributions to MPOG research and quality-
improvement efforts.
Glycemic Management Workgroup Update
Dr. Tony Edelman provided an update on the Glycemic Management Workgroup, which was formed
following committee agreement in September 2025 and met in December to explore outpatient
hyperglycemia metrics.
Key Discussion Points
Patient inclusion: The group agreed that hyperglycemia itself, not a formal diabetes diagnosis,
should trigger inclusion. Non-diabetic patients with elevated glucose values were felt to
represent a clinically meaningful population.
Treatment thresholds: There was extensive discussion about whether the threshold should be
180 mg/dL or 250 mg/dL, reflecting differences across existing guidelines (e.g., SAMBA vs. other
perioperative recommendations).
Measure focus: The workgroup emphasized timely treatment and reassessment, rather than
insulin dosing precision.
Data considerations: IV insulin was discussed but deprioritized due to limited use in ambulatory
settings. Continuous glucose monitor values were also questioned due to accuracy concerns.
Decisions
MPOG will develop two outpatient hyperglycemia measures:
o GLU-15: Treatment of blood glucose >250 mg/dL within 60 minutes
o GLU-15b: Treatment of blood glucose >180 mg/dL within 60 minutes
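To make the draft logic concrete, here is a minimal sketch of how a single case might be evaluated; the data structures and treatment check are hypothetical placeholders, since the workgroup is still refining the assessment and glucose-checking logic, and the mmol/L conversions match those requested in the discussion below.

```python
from datetime import timedelta

MGDL_PER_MMOLL = 18.016  # mg/dL per mmol/L for glucose (molar mass ~180.16 g/mol)

def mgdl_to_mmoll(mgdl):
    """Convert glucose from mg/dL to mmol/L: 250 -> ~13.9, 180 -> ~10.0."""
    return round(mgdl / MGDL_PER_MMOLL, 1)

def evaluate_case(glucose_events, treatment_times, threshold_mgdl=250):
    """Evaluate one case against a draft GLU-15-style measure.

    glucose_events:   list of (timestamp, value_mgdl) tuples
    treatment_times:  list of timestamps when treatment (e.g., insulin) was given
    Returns (in_denominator, passed): the case enters the denominator if any
    glucose exceeds the threshold, and passes if treatment follows the first
    such value within 60 minutes.
    """
    hyper_times = [t for t, v in glucose_events if v > threshold_mgdl]
    if not hyper_times:
        return False, False  # no hyperglycemia identified; case not in denominator
    first = min(hyper_times)
    treated = any(first <= tx <= first + timedelta(minutes=60)
                  for tx in treatment_times)
    return True, treated

# GLU-15 would use threshold_mgdl=250 (13.9 mmol/L); GLU-15b, 180 (10.0 mmol/L).
```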
Discussion
Richard Wissler, MD, PhD (University of Rochester): I had a question. For outpatients without a
diabetes diagnosis, how are people actually getting blood glucose measurements? Are we
proposing that everyone gets a blood glucose checked?
o Anthony Lewis Edelman (MPOG): No, not necessarily. There’s no expectation that every
patient walking in the door gets a glucose measurement. The intent is simply that if
hyperglycemia is identified, then treatment would be expected.
Richard Wissler, MD, PhD (University of Rochester): I’m just wondering how often
that’s realistically going to happen in the outpatient world without a diabetes
diagnosis. I’m supportive of the concept—I just don’t know how frequently we’ll
capture those cases.
o Nirav J. Shah (MPOG): I think that’s a very fair point. Where the workgroup landed was
that if, for whatever reason, a glucose is checked—whether it’s part of a basic metabolic
panel or another reason—and hyperglycemia is noted, then even without a diabetes
diagnosis, those patients should still be treated.
Josh Goldblatt (Henry Ford Allegiance) (via chat): Henry Ford tests all ambulatory patients
except simple endoscopy and very simple procedures (like cataracts).
o Anthony Lewis Edelman (MPOG): Again, we’re not expecting that practice universally,
but it’s helpful context.
Tariq Esmail (University Health Network) (via chat): Can you please add these units to the
measure proposal:
250 mg/dL → 13.9 mmol/L
180 mg/dL → 10.0 mmol/L
AKI Complication Phenotype Update
Dr. Nirav Shah reviewed recent updates to the AKI complication phenotype, prompted by its use as an
outcome in the Intraop-Ox Trial.
Baseline creatinine logic: Baseline is now defined using the most recent preoperative creatinine,
rather than the highest preoperative value.
KDIGO alignment: The logic gap related to detecting a ≥0.3 mg/dL rise within 48 hours was
corrected.
Exclusion logic: The highest preoperative creatinine is still used to determine renal failure
exclusions, ensuring appropriate exclusion of recently dialyzed patients.
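A minimal sketch of the updated detection logic described above; the field names, the anchoring of the time windows at anesthesia end, and the exclusion cutoff are simplifying assumptions, not the production phenotype code.

```python
from datetime import timedelta

def aki_detected(baseline_cr, postop_creatinines, anesthesia_end):
    """Apply the creatinine-based KDIGO criteria described above.

    baseline_cr:        most recent preoperative creatinine (mg/dL)
    postop_creatinines: list of (timestamp, value_mgdl) tuples
    AKI is flagged on either a rise of >= 0.3 mg/dL within 48 hours or a
    value >= 1.5x baseline within 7 days (windows anchored at anesthesia
    end here for simplicity).
    """
    for t, cr in postop_creatinines:
        if t <= anesthesia_end + timedelta(hours=48) and cr - baseline_cr >= 0.3:
            return True
        if t <= anesthesia_end + timedelta(days=7) and cr >= 1.5 * baseline_cr:
            return True
    return False

def excluded_for_renal_failure(highest_preop_cr, cutoff_mgdl=4.0):
    """Exclusion still keys off the *highest* preoperative creatinine so that
    recently dialyzed patients are not missed; the cutoff here is a placeholder,
    not the actual phenotype threshold."""
    return highest_preop_cr >= cutoff_mgdl
```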
Impact
AKI-01 and AKI-02-C scores will increase by approximately 2 to 3 percentage points across MPOG.
Effects will be more pronounced for cardiac AKI measures due to smaller denominators.
Historical data will be retrofitted to ensure apples-to-apples comparisons.
Discussion
Michael Mathis, MD (University of Michigan): I can expand briefly on the first point. While we
are now using the most recent preoperative creatinine to establish baseline for AKI
determination, we still use the highest preoperative creatinine value when determining the
renal failure exclusion. This ensures that patients who recently required dialysis or had severe
renal dysfunction are still appropriately excluded, even if their creatinine appears normal
immediately before surgery.
o Jonathan Paul, MD (Columbia University Medical Center): I’m assuming that historical
data will be retrofitted to follow the new logic so that we’re comparing apples to apples.
Nirav J. Shah (MPOG): Yes, absolutely. Thanks for bringing that up. Historical
data will be recalculated using the updated logic.
Measure Review: AKI-01 (Mike Mathis Review, 1.26.26)
Dr. Michael Mathis presented a comprehensive review of AKI-01, reaffirming its appropriateness as a
quality measure.
Key Points
AKI remains a clinically significant complication influenced by anesthetic management, including
hypotension avoidance, volume optimization, and medication decisions.
Recent evidence does not support adding balanced crystalloids, dexmedetomidine, or other
medication-based interventions to the measure at this time.
AKI-01 continues to rely on creatinine-based KDIGO criteria due to data reliability constraints.
Several ongoing MPOG studies (e.g., VEGA-2, IV fluid composition studies) may inform future
revisions.
Recommendation
No changes to AKI-01 at this time.
Discussion
Nirav J. Shah (MPOG): I just had a couple things to add from the Coordinating Center side. We review
measures for technical changes. We’ve transitioned to using the kidney, lung, and liver transplant
phenotypes for exclusions rather than CPT codes, since the phenotypes are more accurate. We’re also
using the organ procurement phenotype instead of CPT codes.
We found some cases where Foley catheter CPT codes were being sent to MPOG along with procedure
CPT codes, and because that’s sort of kidney-related, those cases were being excluded. We removed
those CPT codes from the exclusion list so that if you got a Foley catheter, you wouldn’t be excluded
from this measure. We also updated the specification to reflect that the measure timeframe extends to
seven days after anesthesia end. The code already did this, but the specification now matches.
Decision: Continue as Is
Anonymized Benchmarking 2026 Planning Discussion
The committee discussed a proposal to expand anonymized benchmarking within the QI Reporting Tool.
Key Themes
Strong interest in benchmarking by case volume, ASC vs. inpatient, and site characteristics
Support for self-selection of categories (e.g., academic vs. community) with guardrails
Recognition that sites near category thresholds need flexibility
Emphasis on allowing all sites to view all anonymized segments, not just their own
Discussion of risk adjustment vs. process measures, with acknowledgment that some outcomes
(e.g., AKI) require more nuance
Preliminary Direction
Free-standing ASCs identified as a logical first category
Further exploration of quartiles, severity-of-illness indices, and observed-to-expected
comparisons
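As a rough illustration of how the quartile and segment ideas above might look behind the reporting tool, the sketch below bins hypothetical sites into volume quartiles and pulls a free-standing-ASC cut; the site names, volumes, and four-way split are invented, since the actual definitions are still to be determined.

```python
import pandas as pd

# Hypothetical site-level data; the real segmentation criteria are still under discussion.
sites = pd.DataFrame({
    "site_id":          ["A", "B", "C", "D", "E", "F", "G", "H"],
    "monthly_cases":    [120, 450, 700, 980, 1150, 1400, 2100, 3000],
    "freestanding_asc": [True, True, False, False, False, False, False, False],
})

# Volume quartiles (Q1 = lowest-volume sites, Q4 = highest).
sites["volume_quartile"] = pd.qcut(
    sites["monthly_cases"], 4, labels=["Q1", "Q2", "Q3", "Q4"]
)

# The free-standing ASC cut is the most objectively defined first segment.
asc_view = sites[sites["freestanding_asc"]]
print(sites[["site_id", "monthly_cases", "volume_quartile"]])
print(asc_view["site_id"].tolist())
```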
Discussion
Josh Goldblatt (Henry Ford Health Allegiance) (via chat): Love the grouping. Can you add self-
selection criteria with the metric selection?
o Nirav J. Shah (MPOG): Josh, would you mind unmuting and expanding on that?
Josh Goldblatt (Henry Ford Health Allegiance): Sure. You already have site-level customization
built into the dashboard, and you are already controlling settings that are individualized to each
site. Instead of trying to centrally collate and validate all of that information, it might make
sense to allow sites to self-select their categories. For example, sites could self-identify as small,
medium, or large based on case volume, hospital size, or whether they are a hospital versus an
ambulatory surgery center. Some of those, like hospital versus ASC, could potentially be
automated, but others could reasonably be self-selected. This is especially relevant for the
academic versus community distinction. When you are part of a large health system, there is
often a lot of blending between those categories. Being able to toggle between different
categorizations and see how performance compares across them would be helpful.
o Xan Abess, MD (Dartmouth Health) (via chat): I agree with Josh. There is a lot of
blending between “academic” and “community,” and allowing sites to self-select seems
like a good idea, with guidelines.
Kunal Karamchandani (UT Southwestern) (via chat): I think the two factors that matter most are
case volume and the severity of illness of patients undergoing surgery. Can we add quartiles
based on these?
o Anthony Lewis Edelman (MPOG): That could be a useful approach, since it allows
comparison to similarly sized sites within a reasonable range, even though edge cases
will still exist.
Nirav J. Shah (MPOG): Thanks for that feedback. As we move into distinctions like academic
versus community hospitals and differences in payer mix, there is clearly a lot of nuance. Our
initial thought, as Tony mentioned, was to use standardized definitions such as those from the
American Hospital Association. We may also be able to pull information from external datasets,
but not everything will be available that way. That creates an opportunity to think carefully
about how self-selection could work. It would not make sense for a large hospital to classify
itself as a small hospital, but there may be ways to allow sites to provide information when we
do not already have it, while also building logic to prevent extreme mismatches. This is
something we can explore further, and Josh, we may reach out to you again as we begin
designing this.
Mason Smith, MD (MyMichigan Medical Center Midland) (via chat): I do not think it should be
anonymized. I believe it is helpful to compare your program to others inside and outside your
health system.
Morgan Brown, MD (Boston Children’s Hospital) (via chat): I am interested in patient volume as
a category, but it would make more sense to define volume based on the number of patients
eligible for a specific metric rather than overall hospital size.
Michael Mathis, MD (University of Michigan): I do not think sites necessarily need to assert what
they are. Objective, consistent definitions are important, and where possible, we should use
them. At the same time, it would be valuable to always allow sites to see their own hospital and
how it compares to others. For example, you could always see your hospital’s performance, but
then filter to compare yourself against small community hospitals or large academic medical
centers. That way, regardless of how a hospital is classified, it can still compare itself to different
types of institutions and understand performance across various phenotypes.
o Anthony Lewis Edelman (MPOG): That is an interesting point. Thinking about the
extremes, is there meaningful value in comparing a large inpatient academic facility to a
group of freestanding ambulatory surgery centers? Is that a question people want
answered? I am trying to think through where the utility is highest.
Josh Goldblatt (Henry Ford Health Allegiance) (via chat): Another issue is when a hospital falls
very near a threshold.
o Michael Mathis, MD (University of Michigan): That is exactly the type of situation where
self-definition could be helpful. Josh, could you expand on that?
o Josh Goldblatt (Henry Ford Health Allegiance): We are a medium-sized hospital within
a large academic health system. For certain metrics, we may want to compare ourselves
to other small or medium hospitals, because many of the academic-center issues play
out differently at that scale. The threshold issue is important. For example, if a small
hospital is defined as under 500 cases per month, medium is 500 to 1,000, and large is
1,000 to 1,500, we might be right at 1,150 or 1,200 cases per month. Depending on the
context, leadership might want us treated as a medium hospital or as a large hospital.
Allowing self-selection would make those comparisons more intentional and
meaningful.
Morgan Brown, MD (Boston Children’s Hospital): This gets to the issue of risk
adjustment. A benchmark value is just a benchmark value, and what we struggle
with is understanding whether we are performing well with a higher-risk
population or poorly with a lower-risk population. Because the benchmark is the
same regardless, the solution may not be a simple filter, but rather the ability to
adjust the benchmark for different scenarios. That would allow a more apples-
to-apples comparison of performance.
Tariq Esmail, MD (University Health Network) (via chat): What Morgan is
describing is why NSQIP is so valuable: observed-to-expected rates.
Xan Abess, MD (Dartmouth Health) (via chat): A reasonable starting point for the Coordinating
Center might be to separate free-standing ambulatory surgery centers, which are fairly
objectively defined.
Xan Abess, MD (Dartmouth Health): I agree with Josh’s points about self-selection, particularly
for sites near thresholds and for distinctions like academic versus medical-school-affiliated
hospitals. As I mentioned in the chat, we have discussed free-standing ambulatory surgery
centers extensively, and there is clear interest in developing an ambulatory-focused dashboard
rather than lumping all sites together. I support allowing self-selection based on size and site
type.
o Anthony Lewis Edelman (MPOG): I agree. An ambulatory surgery center-specific
dashboard seems like a relatively straightforward and reasonable first step.
o Nirav J. Shah (MPOG): With our updated location mapping, we already have fairly good
data on which sites are freestanding ASCs. That is something we can definitely build as
one of the initial iterations.
Kunal Karamchandani (UT Southwestern): The reason I raised this point is that raw numbers
alone are not very helpful for analysis. It is more informative to look at index values and case
mix. Many institutions rely heavily on Vizient data, including case mix index and comparisons
across peer groups. One challenge in perioperative quality is that we lack a robust way to
capture severity of illness. Unless we build something like a Charlson-style comorbidity index, we are limited.
Vizient incorporates case mix and academic status, which is why observed-to-expected metrics
are so powerful. ASA status alone is too variable to be reliable. If we could capture severity of
illness in a meaningful way and combine it with volume-based quartiles, that would allow much
more accurate, apples-to-apples institutional comparisons.
Tariq Esmail, MD (University Health Network) (via chat): It would be interesting to compare a
freestanding ASC to the ambulatory population within a hospital that does both, perhaps by
filtering to day-surgery patients. I realize that may be beyond the current scope.
o Joseph McComb, MD (Temple University Hospital): I agree that ASCs are fundamentally
different and should be examined separately. One thing I value about MPOG, especially
working in an underserved academic safety-net hospital, is that it does not allow us to
lower expectations simply by saying our patients are sicker. I often hear arguments that
poorer outcomes are acceptable because of patient complexity, and I worry that this
lowers the bar. I appreciate that MPOG allows me to show leadership that we are
performing as well as leading institutions, even with a sicker population. I support
thoughtful risk adjustment, but I do not want it to become a justification for accepting
poorer performance. I like being able to compare ourselves to the best in the country
and demonstrate that high standards are achievable.
Josh Goldblatt (Henry Ford Health Allegiance) (via chat): Risk adjustment has a different
meaning for outcome metrics than for process metrics.
o Nirav J. Shah (MPOG): I agree. Over the years, we have had to make strategic decisions
about where to invest time and resources. In some cases, we have relied on inclusion
and exclusion criteria rather than formal risk-adjustment models, which are resource-
intensive to build and maintain. In some areas, we may have gone too far in that
direction, and it may be time to reinvest in risk adjustment and severity-of-illness
modeling in selected, high-impact areas. The points raised by Joe and Kunal highlight
where that may be most appropriate. One final point is that as we build anonymized
benchmarking, it is important that all sites have access to all views. Small hospitals
should be able to see how medium and large hospitals are performing, and large
hospitals should be able to see smaller sites as well, even if anonymized. We want to
avoid a situation where sites only compare themselves within a narrowly defined peer
group. While de-anonymization can occur during in-person meetings, the reporting tool
itself should allow broad visibility across segments. As Tony mentioned, your site would
appear as the blue bar within its category, but you would still be able to view
anonymized performance across other categories. That is how we are envisioning
proceeding.
o Michael Mathis, MD (University of Michigan): I agree with all of that. Some measures,
like AKI, require substantial risk adjustment. Other measures are more straightforward
and focus on adherence to clearly endorsed practices. In those cases, the emphasis is
less on risk adjustment and more on consistently doing the right thing.
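To ground the observed-to-expected framing that Dr. Esmail and Dr. Karamchandani raised above, a minimal illustration follows; the rates are made-up numbers, and in practice a risk-adjustment model would supply the expected rate for each site's case mix.

```python
# Observed-to-expected (O:E) ratio: values below 1.0 suggest better-than-expected
# performance for a site's case mix; values above 1.0 suggest worse.
observed_aki_rate = 0.062  # site's observed AKI rate (illustrative)
expected_aki_rate = 0.078  # model-predicted rate given case mix (illustrative)

oe_ratio = observed_aki_rate / expected_aki_rate
print(f"O:E = {oe_ratio:.2f}")  # 0.79 -> fewer events than the case mix would predict
```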
Action Items
Coordinating Center to draft anonymized benchmarking framework
Begin with ASC-specific benchmarking views
Best Practices Exchange Proposal
The committee reviewed a proposal to launch a Best Practices Exchange, aimed at sharing successful
workflows and QI strategies.
Proposal Highlights
The Best Practices Exchange proposal was driven by repeated requests from sites to be
connected with peers who are performing well on specific quality measures or improvement
initiatives.
The proposal includes two complementary approaches: identifying high-performing sites during
measure review and offering a brief, optional “how we do it” spotlight to share practical
insights.
Presentations are intended to focus on transferable, real-world lessons such as culture,
workflow, and specific quality-improvement efforts rather than polished or theoretical content.
The exchange is designed to be low burden, with short sessions, minimal preparation, optional
slides, and limited frequency to encourage participation without overtaxing sites.
The initiative is meant to support both newer sites and experienced sites that are seeking
improvement but have not yet achieved desired results.
Discussion
C. Bowman-Young (American Society of Anesthesiologists) (via chat): Could you link to an online
community or bulletin board to facilitate connection outside of specific meetings? That could be
used to encourage sharing of best practices in between meetings, and information could be
mined for education.
o Michael Mathis, MD (University of Michigan): We do have the MPOG Basecamp
community (MPOG Quality Forum). That could be a place where people post questions,
share documents, or follow up after these kinds of discussions.
o Nirav J. Shah (MPOG): I agree. I think this fits well with how we already use Basecamp,
and it gives people a way to connect without requiring additional meetings.
Action Items
Rather than committing to a permanent structure immediately, the group agreed to:
Pilot the Best Practices Exchange over the next few Quality Committee meetings
Evaluate interest, feasibility, and value before expanding further
This incremental approach was intended to allow refinement based on real-world experience.
Meeting Concluded: 11:03 am
Appendix A: Full Transcript
ASPIRE Quality Committee January 26, 2026
(Original wording preserved, but sentence structure, punctuation, and flow corrected for clarity.)
Opening Announcements
10:03:34 Anthony Lewis Edelman (MPOG): We have some new announcements to start with. First is
our Featured Member of the Month for January and February. This is Bethany Pennington, PharmD,
from Washington University. Her profile will be featured on the website, so please feel free to take a
look and read more about her work if you’re interested.
Our newest MPOG site is officially up and running. Indiana University Health System has joined MPOG,
so welcome to the group. I don’t know if we have any members from IU Health on the call today, but
we’re very excited to have you.
We also have a couple of new subcommittee Vice Chairs to announce. First, congratulations to Dr.
Sharon Reale from Harvard Medical School and Brigham and Women’s Hospital. She serves as the
Obstetric Anesthesia Fellowship Program Director at Brigham and Women’s and is the MPOG Research
Lead. Dr. Reale continues to publish impactful obstetric anesthesia studies using MPOG data, including
work on difficult intubation and maternal cardiac arrest.
We are also pleased to welcome our new Pediatric Subcommittee Vice Chair, Dr. Eva Lu-Boettcher. She
is a pediatric anesthesiologist at the University of Wisconsin, where she serves as Associate Vice Chair
for Quality and Safety. She has been the site PI and MPOG Quality Champion since 2021, leading local
quality-improvement efforts using MPOG data. Congratulations to both of you, and welcome to the
team.
Glycemic Management Workgroup Update
10:05:05 Anthony Lewis Edelman (MPOG): I’ll move on to some key operational updates. Back in
September 2025, this committee agreed to begin discussions around a new glycemic management
workgroup. That group met in December to discuss the feasibility of developing clinically meaningful
metrics addressing hyperglycemia thresholds and treatment in ambulatory surgery.
A few key themes emerged from that meeting. First, there was strong consensus that patients without a
formal diagnosis of diabetes should still be included. The threshold here is hyperglycemia itself, not
whether a patient carries a diabetes diagnosis. We believe there is a meaningful subset of non-diabetic
patients who present with elevated glucose values.
There was extensive discussion about treatment thresholds, specifically whether that threshold should
be 180 mg/dL or 250 mg/dL. Different guidelines recommend different values: SAMBA guidelines
reference 250, while other guidelines recommend 180. The discussion focused on balancing ideal clinical
practice with operational feasibility and the ability to gain clinician buy-in at ambulatory sites.
The group also emphasized that the measure should focus on timely treatment and reassessment,
rather than insulin dosing accuracy. There was discussion about excluding IV insulin given how rarely it’s
used in ambulatory settings, as well as considering exclusion of continuous glucose monitor values and
focusing instead on finger-stick or arterial values due to accuracy concerns.
Ultimately, the decision was made to build two new outpatient hyperglycemia treatment measures.
GLU-15 will measure treatment of blood glucose greater than 250 mg/dL within 60 minutes. A sub-
measure, GLU-15b, will look at treatment of blood glucose greater than 180 mg/dL within the same 60-
minute window. We’re meeting again tomorrow to continue refining the assessment and glucose-
checking logic. I’ll pause there—are there any questions, concerns, or discussion around these new
glucose measures?
Discussion Glycemic Management
Richard Wissler, MD, PhD (University of Rochester)
I had a question. For outpatients without a diabetes diagnosis, how are people actually getting blood
glucose measurements? Are we proposing that everyone gets a blood glucose checked?
Anthony Lewis Edelman (MPOG): No, not necessarily. There’s no expectation that every patient
walking in the door gets a glucose measurement. The intent is simply that if hyperglycemia is
identified, then treatment would be expected.
Richard Wissler, MD, PhD (University of Rochester)
I’m just wondering how often that’s realistically going to happen in the outpatient world
without a diabetes diagnosis. I’m supportive of the concept—I just don’t know how frequently
we’ll capture those cases.
Nirav J. Shah (MPOG): I think that’s a very fair point. Where the workgroup landed was that if,
for whatever reason, a glucose is checked—whether it’s part of a basic metabolic panel or
another reason—and hyperglycemia is noted, then even without a diabetes diagnosis, those
patients should still be treated.
Josh Goldblatt (Henry Ford Allegiance) (via chat): Henry Ford tests all ambulatory patients except simple
endoscopy and very simple procedures (like cataracts).
Anthony Lewis Edelman (MPOG): Again, we’re not expecting that practice universally, but it’s
helpful context.
Tariq Esmail (University Health Network) (via chat): Can you please add these units to the measure
proposal:
250 mg/dL → 13.9 mmol/L
180 mg/dL → 10.0 mmol/L
MPOG Phenotype AKI Complication Update
10:09:36 Anthony Lewis Edelman (MPOG): I’ll turn things over now to Nirav to give an update on the
AKI complication phenotype.
Nirav J. Shah (MPOG): Great, thank you. Most of you are aware that we started measuring acute kidney
injury many years ago at MPOG—probably seven, eight, maybe even nine years ago—and we’ve done
that in two different ways. One approach has been through our AKI performance measures, AKI-01 and
others, focused on specific case types. The second has been through the AKI complication phenotype. At
different points, those have been used separately and at other times merged together.
Recently, as part of one of our new MPOG pragmatic clinical trials—the Intraop-Ox trial, which is
examining different intraoperative FiO₂ strategies—we took a closer look at the AKI phenotype because
AKI is a planned outcome for that trial. During that review, we identified several issues in the AKI logic
that needed to be addressed.
Those issues have now been fixed, and I’ll spend a few minutes reviewing the changes. These updates
will impact AKI-01 and AKI-02 scores, and they will also affect the AKI-01 risk adjustment model. Once
these updates are implemented, we’ll revisit and update the AKI-01 risk adjustment model accordingly.
I want to share a summary of the changes, and we’re very happy to walk through any of this offline if
people have questions. Mike Mathis is also on the call and has been reviewing AKI-01, so he can chime
in as well.
Previously, when determining baseline creatinine, we referenced the highest preoperative creatinine
value. We’ve updated that logic to instead reference the most recent preoperative creatinine. For
example, if a patient had AKI several weeks before surgery and their creatinine was elevated at that
time, but then improved prior to surgery, we will now use the most recent value as the baseline rather
than the earlier peak.
The second major change relates to how we identify postoperative creatinine increases. To align with
KDIGO criteria, we assess AKI in two ways: either a rise of at least 0.3 mg/dL within 48 hours, or a 1.5-
times increase from baseline within seven days. Previously, there was a gap in the logic where we did
not always check for the 0.3 mg/dL rise within the first 48 hours if a later creatinine was higher. That gap
has now been corrected.
10:13:27 Michael Mathis, MD (University of Michigan) (via chat): Note, to determine preop
renal failure exclusion, we still reference the highest preop creatinine, so as to not miss renal
failure patients who were recently dialyzed immediately before procedure.
I want to specifically thank our Vanderbilt colleagues for identifying this issue. It turned out to be
frequent enough to meaningfully impact results. As a consequence of capturing these early creatinine
rises more consistently, sites will see an increase in AKI-01 rates—on the order of about two to three
percentage points across MPOG, with some site-level variability.
This effect is even more pronounced for the cardiac AKI measure, AKI-02-C, due to smaller denominators
in open cardiac cases. We’ll be discussing this in more detail at the Cardiac Subcommittee meeting as
well. Are there any questions about these updates, which we’re planning to implement over the next
couple of weeks?
Discussion AKI Complication Update
10:15:32 Michael Mathis, MD (University of Michigan): I can expand briefly on the first point. While
we are now using the most recent preoperative creatinine to establish baseline for AKI determination,
we still use the highest preoperative creatinine value when determining the renal failure exclusion. This
ensures that patients who recently required dialysis or had severe renal dysfunction are still
appropriately excluded, even if their creatinine appears normal immediately before surgery.
Jonathan Paul, MD (Columbia University Medical Center): I’m assuming that historical data will
be retrofitted to follow the new logic so that we’re comparing apples to apples.
Nirav J. Shah (MPOG): Yes, absolutely. Thanks for bringing that up. Historical data will be
recalculated using the updated logic.
Measure Review AKI-01 (Acute Kidney Injury)
Anthony Lewis Edelman (MPOG): And then now we’ve got Dr. Mathis with his formal review of AKI-01.
Mike, do you want to share, or would you like me to share your screen?
10:17:02 Michael Mathis, MD (University of Michigan): Yeah, maybe I can share my screen so I can
scroll through it as we’re chatting. We can put the link in the chat, or I think it’s already on the invite.
Some of this review is unchanged from three years ago when I did the review as well, but I think it’s still
important to rehash some of that to give context for what direction we’re headed in. One of the
questions is whether this is an appropriate quality measure to hold an anesthesiologist accountable to. I
still think yes.
This is an important complication that has associations with other comorbidities and other
complications, and it can lead to increased healthcare expenditures. It’s a significant problem, and
anesthesiologists can do things to help mitigate the risk of AKI. We can avoid hypotension when it’s
indicative of hypoperfusion. It’s not always the case, but to the extent that it commonly is, avoiding
hypotension and renal hypoperfusion is part of the anesthesiologist’s responsibility.
We can optimize volume status and cardiac output through potentially goal-directed therapies. We can
potentially use balanced crystalloids. I’ll get into this a little bit more later. We can mitigate the risk of
potentially nephrotoxic agents and weigh the relative risks versus benefits of certain medications that
could increase AKI risk. We also talked earlier about hyperglycemia, and certainly glyco-oxidative injury
can contribute to AKI.
While we recognize that not all AKI is avoidable, and that the anesthesiologist is not solely responsible
for mitigating AKI, I still think this measure is appropriate to the extent that we can uphold our part in
contributing to AKI risk reduction through these practices and potentially participating in broader
hospital-level initiatives that could impact AKI, like glycemic management or enhanced recovery
protocols.
In terms of evidence, I did look for any updated guidelines since the last review, from nephrologists,
anesthesiologists, or surgeons. There really hasn’t been anything since the 2021 consensus statement
that went into as much detail as that article did.
That was a joint consensus statement across nephrologists, anesthesiologists, and other health
professionals. They offered a number of consensus statements, many of which are not capturable within
MPOG. We can’t reliably track a lot of them. But the two that we could track were hypotension and
balanced crystalloids.
For hypotension, we already have attribution within the AKI measure where the attributable provider
could be one who was signed into the case during periods of prolonged hypotension. That was a
recommendation supported by grade C evidence, but a recommendation nonetheless.
The other recommendation was the use of balanced crystalloids rather than normal saline to reduce the
risk of postoperative AKI. Digging deeper into that, for otherwise healthier perioperative populations—
like orthopedic or colorectal patients—the SOLAR trial published in Anesthesiology in 2020 randomized
patients to normal saline versus lactated Ringer’s and did not see a difference in acute kidney injury.
Presumably, the effect of balanced crystalloids versus normal saline plays out more strongly in sicker,
critically ill ICU populations. That’s been shown in trials like SALT-ED and SMART. But in the
perioperative population, with relatively healthy patients and the brevity of the intraoperative
encounter, there was not a significant difference. Because of that, I don’t think balanced crystalloids
should be included in this quality measure at this time.
There are a few newer medication-related studies. Acetaminophen has been studied for its antioxidant
effects. Observational studies show an association with reduced AKI, but there haven’t been strong
randomized controlled trials in perioperative populations.
Dexmedetomidine has also been studied. There have been some randomized trials, but they’ve had
conflicting conclusions, with some supporting benefit and others refuting it. The evidence isn’t strong
enough to include dexmedetomidine in a quality measure for AKI.
There was a sub-analysis of the RELIEF trial looking specifically at AKI. That trial compared restrictive
versus liberal fluid strategies for major abdominal surgery. One secondary finding was that more
restrictive strategies were associated with increased AKI, but because it was a secondary outcome, no
strong conclusions were made. A subsequent sub-analysis published in 2024 suggested that NSAIDs or
COX-2 inhibitors could increase the odds of AKI, likely through prostaglandin inhibition and renal
vasoconstriction. Again, the evidence isn’t strong enough to be practice-changing.
There was also a multicenter randomized trial in Europe published in Anesthesiology in 2025 evaluating
hypotension prediction index-guided hemodynamic therapy versus standard care. The primary outcome
was stage 2 or 3 AKI, and they found no difference between groups.
Looking ahead, there are several MPOG-sponsored AKI studies on the horizon. There are several in the
cardiac literature that will likely be discussed in cardiac subcommittee meetings. There’s also the VEGA-
2 trial comparing phenylephrine versus norepinephrine as first-line vasopressors. We demonstrated
feasibility, and full results are anticipated around 2027 or 2028.
There are also observational studies examining IV fluid amount and composition. One study is leveraging
the IV fluid supply disruption from Hurricane Helene as a natural experiment to look at AKI outcomes.
Another study led by one of our MPOG research fellows is comparing saline versus balanced crystalloids
in elective major non-cardiac surgery.
In terms of inclusion and exclusion criteria, we continue to exclude minor procedures where
postoperative AKI is more likely related to other factors. We also exclude procedures with complex
pathophysiology where it’s not an apples-to-apples comparison. We rely on the creatinine component
of the KDIGO criteria because other elements, like oliguria or dialysis, are not reliably captured in MPOG.
Overall, while there is evolving evidence, I think the quality measure should remain as is. My
recommendation is no changes.
Discussion AKI-01 Review
10:30:29 Anthony Lewis Edelman (MPOG): Great. Thanks, Mike. Really appreciate the in-depth
review. Any questions or comments?
Nirav J. Shah (MPOG): I just had a couple things to add from the Coordinating Center side. We
review measures for technical changes. We’ve transitioned to using kidney, lung, and liver
transplant phenotypes rather than CPT codes for exclusions, which are more accurate. We’re
also using the organ procurement phenotype instead of CPT codes.
We found some cases where Foley catheter CPT codes were being sent to MPOG along with procedure
CPT codes, and because that’s sort of kidney-related, those cases were being excluded. We removed
those CPT codes from the exclusion list so that if you got a Foley catheter, you wouldn’t be excluded
from this measure. We also updated the specification to reflect that the measure timeframe extends to
seven days after anesthesia end. The code already did this, but the specification now matches.
Anonymized Benchmarking 2026 Plan (Part 1)
10:35:41 Anthony Lewis Edelman (MPOG): Uh, so another thing we wanted to discuss today is a plan
for anonymized benchmarking. Uh, going forward. And this is an idea that came out of this group. Um,
and the thought here is to, um… build more anonymized benchmarking into the QI Reporting Tool, and
this can be done in a number of different ways. It can be done based on case volume at sites, whether or
not it’s ambulatory or inpatient centers, health system, uh, health system-wide, um, metrics, academic
versus community hospitals.
And the idea is that you’ll be able to segment and review within the specific sub-area that your
hospital or site, uh, fits. And, um, you know, so if your site is, you know, an outpatient
freestanding ambulatory surgery center, and it’s really being reviewed against other freestanding
ambulatory surgery centers, you would still see your blue bar against the gray background of the other
sites, so still anonymized.
Um, but a couple of questions and discussion areas we would love thoughts about from the group is,
you know, coming up with some of the definitions for these things. Um, so we’re able to use the
American Hospital Association definitions for some standardization. And then we have some
phenotypes that we’re able to use, like medical-school-associated or not. But there will be
some need to discuss more intricate definitions depending on how we sub-segment the sites.
Then our initial thought was to allow all sites to see these visualizations and filter by those, um, sub-
segments. And if you, again, have a site that meets criteria for that segment, you’d be able to see it with
that blue bar. If you don’t, all you would see is the anonymized gray bars throughout. And then finally,
you know, what should those groupings look like based on hospital size, small hospital versus large
hospital?
Uh, could that be based on beds, based on case volume, based on inpatient status? Um, so this is really,
you know, an opportunity, everyone, to give some feedback, their thoughts, some ideas. Um, any initial
thoughts that anyone might have? Is this something that people are interested in seeing?
10:37:38 Josh Goldblatt (Henry Ford Health Allegiance) (via chat): Love the grouping! Can
you add self-selection criteria with the metric selection?
10:38:08 Nirav J. Shah (MPOG): Josh, do you mind being able to unmute and kind of expand upon
that a little more?
Josh Goldblatt (Henry Ford Health Allegiance): Yeah, I was just thinking, uh, you already have
us on the dashboard, and you’re already controlling some of the settings that we have
individualized to a site. Um, so instead of trying to collate the information and maybe validate it
in a particular way, that you would just let sites self-select and say, “Okay, I’m small, medium,
large in case volume; I’m small, medium, large on hospital size; hospital versus ASC,” and just…
Well, maybe that one can be… can be automatic, but, uh, but some of those others, um, we can
kind of, um, self-select and pick our category.
Um, especially the academic versus community. There’s a lot of blending when you’re part of a
large health system, um, so it might be helpful to flip the switch one way and then flip the
switch another way, and see how we’re comparing in different categories.
10:38:17 Kunal Karamchandani (UT Southwestern) (via chat)
I think the two factors that matter most are the case volume and the severity of illness of
patients undergoing a surgical procedure. Can we add quartiles based on these?
10:39:16 Nirav J. Shah (MPOG): Yeah, yeah, thanks for that feedback. Um, yeah, I think there’s, um… I
think especially as we get into kind of academic versus community and, you know, government payer
mix, and some of those things, there’s going to be a lot more nuance there.
Um, so, you know, one of the things that we were trying to do was, as Anthony mentioned, use some of
the standard definitions via the American Hospital Association. And we might be able to get some of
that information from other external datasets, but not everything. So I think there’s going to be an
opportunity for feedback, how some of this… the self-selection would work.
It’d be weird if a large hospital were to select themselves as, like, a small hospital. But maybe there are
still some things we can do to actually… where, if we don’t have the information, you know, we can give
the site an opportunity to, um… to share that with us. And maybe have some logic to prevent extreme
mismatches. So, um, it’s something that… that I think… I think we can look at. So, Josh, maybe we’ll be
reaching out to you again when we start building it, to get some more thoughts on it.
10:39:33 Mason Smith, MD (MyMichigan Medical Center Midland) (via chat)
I don’t think it should be anonymized. I believe it is helpful to be able to compare your program
to others inside and outside of your health system.
10:40:01 Morgan Brown, MD (Boston Children’s Hospital) (via chat)
Yes, interested in patient volume as a category. But I think it would make sense as the volume of
patients your site has eligible for a particular metric, rather than overall hospital size.
10:40:02 Xan Abess, MD (Dartmouth Health) (via chat)
Agree with Josh. I think there is a lot of blending with “academic” vs “community,” and
allowing sites to self-select seems a great idea (with guidelines).
Michael Mathis, MD (University of Michigan): I was just gonna say, yeah, I don’t know if you… you may
not need to assert what you are, you know, yourself. I mean, again, yeah, there is this issue of just, like,
you know, you want to have some objective, consistent definitions, and to the extent that you can, you
should. But, um, to the extent that you could, we should, like, have filters to, like… you should maybe
always be able to see your hospital and how you stack up against other hospitals.
Sure, like, you could always see your hospital, but then filter and compare your hospital to, um, you
know, small community hospitals, or, you know, your hospital to larger academic medical centers. You
know, agnostic to whatever your hospital is, but you can always see your hospital. Maybe that’s what
we want to do. You don’t want to assert what you are, but you… it’d be nice to be able to compare your,
uh, performance to, um, you know, uh, different phenotypes of different kinds of hospitals.
Anthony Lewis Edelman (MPOG): But I think that’s an interesting thought — just thinking, say, in the
extremes, is there meaningful data or meaningful presentation looking at, say, a large inpatient, you
know, academic facility as compared to, you know, a group of freestanding ambulatory centers? Is that a
question that people would want to answer? You know, I’m just trying to think about, in my head, um,
you know, where the utility is.
10:41:36 Josh Goldblatt (Henry Ford Health Allegiance) (via chat): Other issue is if a hospital
falls very near the threshold.
Michael Mathis, MD (University of Michigan): Like I said, maybe I’ll turn that over to Josh, you know, to
say a little bit more about that. Because, you know, I think that’s exactly the kind of thing where, if you
could self-define yourself… yeah, define yourself, you know, compare yourself to other small hospitals
versus other large academic medical centers to, you know… is there, uh, value there?
10:42:16 Josh Goldblatt (Henry Ford Health Allegiance): Yeah, so, I mean, we’re a medium-sized
hospital, or a smaller-sized hospital in a large academic health system. And so, I think, you know, we
would probably want the ability to say, “Okay, for this metric, we want to compare ourselves to other
small or medium-sized hospitals,” because a lot of the academic issues come to play at a smaller scale.
Um, I think the other thing that I just threw in the chat would be, you know, if your hospital is near the
threshold. So, you know, if you said, you know, a small hospital has case volume under 500 cases a
month, and then a medium hospital is 500 to 1,000, and a large hospital is 1,000 to 1,500. You know,
we’re at, like, 1,150 to 1,200 cases a month… 1,220, you know, depending. Do we fall in the medium, or
do we fall in the large? We might be really near that threshold. And for our comparison purposes, my
leadership might say, “Treat us like a medium,” or they might say, “Treat us like a large.” Um, so that’s
where it gets a little… a little clearer and more intentional if we get to self-select.
10:43:37 Anthony Lewis Edelman (MPOG): Yeah, and there was mention in the chat around quartiles.
I think that’s an interesting idea, too, because then, really, you’re looking at other similar volume-sized
[sites] within a range that seems reasonable, although you’ll still have edge cases within those quartiles.
Uh… Morgan?
10:43:57 Morgan Brown, MD (Boston Children’s Hospital): Yeah, I think this gets to the question of,
um, risk adjustment, right? Because the benchmark value is just… the benchmark value. And I think what
the… what sometimes we’re struggling with is: do we have a riskier population that we’re actually doing
well with, or do we have a lower-risk population where we’re actually doing poorly?
Because it’s the same benchmark no matter what. And so, maybe the answer rather is, you know, in the
filters… maybe it’s not exactly a filter — but is being able to adjust that benchmark to different
scenarios that would then let you have more of exactly what you’re thinking is an apple-and-apple
comparison of your performance.
Anthony Lewis Edelman (MPOG): Yeah, I think that was the initial thought was to be able to compare
similar, you know, like apples to like apples.
10:44:47 Tariq Esmail, MD (University Health Network) (via chat)
What Morgan is talking about is why NSQIP is so valuable. Observed vs. Expected rate.
10:45:11 Xan Abess, MD (Dartmouth Health) (via chat)
Perhaps a reasonable starting point for the Coordinating Center is at least to separate free-
standing ambulatory surgery centers… those are fairly objectively recognized (I think).
Xan Abess, MD (Dartmouth Health): Sure, thanks. Yeah, I mean, I think Josh’s points are great about the,
um, ability to self-select, especially on places that are near thresholds, and what’s an academic center,
what’s a medical-school-affiliated location or not.
I, you know, I put it in the chat… you know, perhaps a thing to think about… like, we’ve talked a lot
about free-standing ambulatory surgery centers, and there is definitely interest in creating this kind of,
like, ambulatory-type, you know, dashboard really, rather than just having everything lumped together.
So I do like that ability to kind of self-select based upon size, etc.
Anthony Lewis Edelman (MPOG): Yeah, I mean, I agree. I think the ambulatory surgery center-type
dashboard is a pretty low-hanging fruit and, you know, pretty reasonable dissection point. Awesome.
Nirav J. Shah (MPOG): Yeah, I don’t know if Kate or Meridith want to comment. I think we have pretty
good data on what’s a freestanding ASC right now with our updated location mapping. Would you agree
with that? Alright. Yeah, you would. Okay, so that is something that we can do as well. Cool, thank you.
So that’s something that we can definitely build, at least as one of the first cuts.
Kunal Karamchandani (UT Southwestern): Yeah, hi. So, you know, the reason I mentioned that was
because, you know, when I work with data, just looking at numbers is not helpful. I think it’s more
helpful to look at index numbers and case mix. A comparison makes analysis more accurate.
A lot of places… like, I know where I work, everybody’s very big on, you know, their Vizient data and
what others are doing, what’s their case mix index, and if you’re actually providing that quality of care
that you should. And some of the challenge in perioperative quality is we don’t have… unless we build in
a Charlson-type comorbidity index, or something like that, to capture the severity of illness.
And like Tariq mentioned, you know, Vizient looks at their CMI and, you know, academic status and
whatnot. So maybe it’s about time that, you know, we and ASPIRE, as for perioperative quality as well,
kind of concentrate on O:Es… not just the observed, but what we would expect. Because if you only
look at ASA status, given the variability and how ASA status is graded, I don’t think you’ll get anywhere.
But if we can build a measure where we capture their severity of illness in some shape or form, and then
use that plus the volume to create quartiles, that might be a more accurate way to compare institutions
and apples to apples. Just a thought.
10:48:29 Tariq Esmail, MD (University Health Network) (via chat): @Xan Abess it would be interesting
to be able to compare an ASC to the ambulatory population at a center that does both. Perhaps a filter
within the overall organization to allow filtering by day-surgery patients (i.e., have ASC-like population).
(Although I know that’s not what we are talking about lol.)
10:49:22 Joseph McComb, MD (Temple University Hospital): Yeah, thank you. Good discussion. I think
the ASC, I agree, is very different, you know… to be able to look at those separately would be nice. The
one thing I do like about MPOG, though, is that it doesn’t give you the ability to do what I hear in some
other venues. I’m in an academic underserved safety-net hospital, and sometimes I hear, “Well, it’s
because our patients are sick,” or, “Our outcomes aren’t as good,” and, you know, we’ve
proven that our patients are sicker.
And so we’re concerned that it lowers our bar. And we don’t want to push it as hard, and, you know,
when we don’t tolerate that, we just accept that our outcomes aren’t as good as other hospitals. Um, I
like the idea of, though, trying to risk-adjust because certainly, yes, you know, we’re doing… a lot of our
CMI are like lower-income, government-payer patients.
But I’m worried — and I don’t have an answer to that — but the one thing I do like about MPOG is that
when I look at our data and I see us moving up the rankings and I can say, “No, we’re actually doing as
well as others, even though our patients are sicker.” And I present that to leadership.
So I just wanted to sort of voice that I like that this is one area where I present data that we’re not, um,
sort of washing it down. I like that I’m comparing myself against the leaders in the country; I don’t want
to be in a situation where we say, “Well, we’re risk-adjusted and so we’re doing okay for our
population,” and then accept poorer performance.
Maybe some of the answer is that some of these markers we could subset more easily… select out patients
who are very high risk objectively, not high risk because “our patients are sicker.” Again, I don’t know if
that gives you any help, but I just wanted to say I like that I’m comparing myself against the leaders in
the country.
10:51:37 Josh Goldblatt (Henry Ford Health Allegiance) (via chat)
Application of risk in Outcome metrics has different meaning than for Process metrics.
Nirav J. Shah (MPOG): Totally. And, you know, I… you know… um, and, you know, over the years, as
we’ve had to make choices about, like, where to spend time and energy and resources, um, you know,
did we build, you know, a new measure where, you know, where we don’t necessarily have… we’re, um,
we’re not as worried about the, you know, severity of illness, where we build inclusion and exclusion
criteria instead of building a risk-adjustment model, which is very resource-intensive and maintenance-
intensive.
Um, that’s… Joe, that’s kind of where we’ve landed, you know, for the exact reasons that you’ve
mentioned, I think. Um… and I think one could make an argument that, in some ways, we’ve gone a little
bit too far, and now we need to get back to, um, focusing some resources on risk adjustment and
severity of illness in selected cases and in selected ways.
I think you mentioned one or two of those, and Kunal, you mentioned one or two as well, so I think
those are all… those are all good points. So thanks for making them.
Anthony Lewis Edelman (MPOG): Yeah, I mean, it’s a great discussion. I think there’s maybe more here
than we can realistically get into today, but I think there’s a lot to discuss and a lot to dissect. Um, so…
Nirav J. Shah (MPOG): Yeah. The one thing I will say — and Mike — is that I do think it is important that,
you know, as we’re building these anonymized graphs, that everybody has access to all of them. So, for
example, like, you know, Anthony was mentioning small hospitals, medium hospitals, large hospitals.
I think it’s important that the small hospitals should also be able to see how the medium and large
hospitals are doing, anonymized. And similarly, large hospitals should be able to see how small and
medium-sized hospitals are doing. I think that’s important, because, you know, Joe, to your point, we
don’t want people to just say, “Well, this is my group, I’m going to compare myself only to my group,
and that’s it.” So, um, we have these forums for identifying or de-anonymizing sites, typically at our
in-person meetings, for putting up the graphs and saying, “This is us,” or, “That’s you,” and having
those conversations. But in the tool itself, I think it’s important that everybody has access to all of
these graphs, even in anonymized fashion.
So, as Tony mentioned, you know, let’s say if we’re doing large hospitals versus small versus medium
hospitals, your blue bar will be in that group, but you still have access to look at the anonymized graphs
for the medium and large, so that you can, you know, make those comparisons across segments. And I
think that will be an important component of this. That’s… that’s kind of how we were planning on
proceeding with it. Um, Mike?
Michael Mathis, MD (University of Michigan): Yeah, yeah, I agree with all of that. I was going to add just
one additional thought, which is that there are certain measures that need a lot of risk adjustment. AKI
is probably the best example of one.
And there’s other measures that are just, like, “Don’t be lazy — just do this thing,” that is, you know,
clearly endorsed, um, by, you know, society guidelines and everything else. And, you know, in that case,
it’s less about risk adjustment and more about: “This is the thing, and you know, the thing can be
solvable, and it doesn’t need to have a lot of critical thinking — it’s just more, ‘Do the thing.’”
Nirav J. Shah (MPOG)
Yeah.
Best Practices Exchange Proposal
10:55:34 Anthony Lewis Edelman (MPOG): Alright, so the next thing we wanted to talk about is
something we’re calling a Best Practices Exchange. This is still very much a proposal, and we wanted to
get feedback from the group before moving forward with anything.
The idea here is to identify high performers during measure review and then create a short, low-prep
opportunity for those sites to briefly share how they do it. This could be during the Quality Committee
meeting itself or potentially during an adjacent forum.
We’re envisioning this as a short spotlight of less than ten minutes, where a high-performing site or a
site that’s made a big improvement could talk through workflows, policies, culture, or specific
interventions that helped them succeed on a measure.
There are really two lenses we’re thinking about here. One is consistently high performers: sites that
are always at the top and likely have strong underlying culture and processes. The second is intentional
improvers: sites that weren’t doing well, recognized that, and then implemented specific quality-
improvement strategies like PDSA cycles, education, reminders, or workflow changes.
The goal is not to put anyone on the spot, but to make it easier for sites to learn from each other
without having to reinvent the wheel.
We think this could be especially useful for newer sites or for measures where traction has stalled. The
idea would be that interested sites could then connect offline for deeper dives if they want to.
In terms of logistics, we’re thinking something very lightweight. Slides would be optional, but some
material would probably be encouraged. The cadence would likely be two to three times per year,
aligned with Quality Committee meetings. Topics would be driven by upcoming measure reviews, site-
initiated ideas, or suggestions from the Coordinating Center.
The presenter could be the quality champion or a designee, whoever is most familiar with the work.
And again, this is meant to be low burden and high value.
So I’ll stop there. Thoughts? Reactions?
10:58:03 C. Bowman-Young (American Society of Anesthesiologists) via chat
Could you link to an online community or bulletin board to facilitate connection outside of
specific meetings?
Anthony Lewis Edelman (MPOG): Yeah, that’s a great question. I think that ties in nicely
with this idea. We’ve talked before about ways to encourage more interaction between
meetings.
Michael Mathis, MD (University of Michigan): We do have the MPOG Basecamp community.
That could be a place where people post questions, share documents, or follow up after these
kinds of discussions: https://launchpad.37signals.com/basecamp/2754939/signin
C. Bowman-Young (American Society of Anesthesiologists)
That could be used to encourage sharing of best practices in between meetings, and information could
be mined for education.
Anthony Lewis Edelman (MPOG)
Yeah, exactly. That’s really the spirit of what we’re thinking. Having a space where people can continue
the conversation, share tools, or ask follow-up questions after hearing a Best Practices Exchange
presentation.
Nirav J. Shah (MPOG)
I agree. I think this fits well with how we already use Basecamp, and it gives people a way to connect
without requiring additional meetings.
Anthony Lewis Edelman (MPOG)
Great. So it sounds like there’s interest in exploring this further. We’ll take this feedback and come back
with something more concrete.
10:56:02 Nirav J. Shah (MPOG)
Okay, alright, thanks. Yeah, so, folks have reached out to us at the Coordinating Center many times in
the past and said, “Hey, can you introduce us to someone who’s doing really well in a given measure or
a given quality improvement initiative?”
Um, this concept is based on those requests that we’ve had time and time again, and at the
Coordinating Center, we’ve made those connections in the past.
Uh, we’ve usually asked the folks ahead of time, “Hey, do you mind if we make a connection with
someone who’s interested in learning more about the great work you’re doing at your site?” And so,
um, we thought about it in two ways.
One is that during a measure review, for example, like Mike’s AKI review, we would identify who the
high performers are and, if it made sense for that measure, put their contact information up so that
others could reach out to them and learn a little bit more about their processes and how they got to
where they are. That was one component.
The second component is to add a short, low-prep, kind of “how we do it” spotlight, not something
that we want folks to spend a tremendous amount of time on. We’ve done versions of this at our
in-person ASPIRE meetings in the state of Michigan or at the MPOG retreat.
We call it a QI story in those scenarios, but something like that, where, for a few minutes, a high-
performing site shares a little bit about the culture, the workflow, and the work that they’ve done in a
particular area, through a couple of different lenses. One: is it a culture or process thing, where they’ve
always been great at this, and why is that? Or was there a specific project that they initiated and were
successful at, that others could learn from?
And then, sites that were interested could connect with those folks offline if they wanted to have a
more in-depth conversation. And we think it’ll be useful not just for newer sites, but also for sites that
have been working on something for a while and just have not seen the progress that they’ve wanted.
We thought it could be brief, less than 10 minutes, as I mentioned, and low-prep: slides optional, but
maybe some material would be encouraged to help transmit the info. Maybe not with every Quality
Committee meeting, just a couple of times a year to start, and we’ll see how successful it is or not.
Topics could come from the measure reviews (as we’re doing them, we could say, “Hey, is this a good
topic for this 10-minute ‘how we do it’ thing?”), or from a site interested in sharing the great work
they’ve done, in which case we would create a forum for that, or, if we at the Coordinating Center
realized through our own case review that a site is doing amazing work in a certain area, we could
reach out to them as well.
And the presenter could be either the Quality Champion or someone they designate.
Um, so we wanted to get some very early feedback. We pretty much only have time for a vote right
now, about two things.
One: are you comfortable with us identifying a high performer, not a low performer but a high
performer, during our measure review?
And number two: should we create this new segment, calling it the Best Practices Exchange for now
(Meridith will come up with a better name), if folks are interested in it?
I’m just going to share this as a vote, just to get some early thoughts about what people think.
Anyone can vote here; it does not have to be one vote per site. We are simply looking to gather some
early feedback. Because this would involve additional effort for participating sites, there are two
considerations. First, for the initial question, a high-performing site would be identified. Second, for the
Best Practices Exchange itself, participating sites would need to invest a small amount of time and effort,
though we intend to keep that burden minimal.
We’ll give everyone just a few moments to respond. We are at time, so we’ll stop the poll shortly and
share a brief summary of the feedback.
Based on the responses, it appears that most participants feel this is a good idea, so we will continue to
explore it further. We plan to pilot the first Best Practices Exchange over the next few Quality
Committee meetings. Before identifying any high-performing sites during a measure review, we will
communicate that plan in advance, at least for the next few meetings.