Predictive Models of Patient Flow
Presenter: Dr. James Benneyan, Northeastern University
Recorded on: December 3, 2014
Perfect. And on we go for today's presenter. Dr. Benneyan?
>> Thank you so much, Leslie. And thanks everybody for joining this webinar. So, I'm Jim Benneyan, I'm
at Northeastern University, one of the four, now five, universities that are members of CHOT, the
Center for Health Organization Transformation. And most of you I know and most of you probably know me.
So I run a Healthcare Systems Engineering Institute up here at Northeastern in Boston and most of our
work to date has been in statistical methods and classic operations research optimization. And we've
given webinars in the past so I'm particularly excited by this overview because it's different in nature.
We've been doing a lot of exploratory work looking at predictive modeling. So I stole this subtitle phrase
from Leslie; actually this is a snapshot. This is a work-in-progress summary of where we are on an
exploratory phase one project that's part of CHOT. And so some of what you'll see is work in progress.
And that's really what these webinars are about. My hope is to give you a fairly broad overview of a
number of things we've been doing in predictive modeling so that we can get some feedback as to
where this research should go next. That's one objective. Another objective is so that we can identify
other CHOT health systems that would like to be involved in this work.
We're always, I think it's the constant struggle, trying to get a lot of these projects going in more than one
health system. And some of you are hearing this at our member meetings: how can we get more cross-system
collaboration going? So that's what I hope to get out of this webinar, feedback and possibly collaborative work.
And I'm pausing because I'm, oh, there we go. I couldn't advance the slide. So, by way of an outline, just
a quick overview of why we're doing this work in predictive modeling. And then a summary, a couple of
slides on each of these four focus areas that we've been working on so far.
And these aren't the only areas we're interested in. So, we're again, definitely interested in feedback.
And I would love to land with a few minutes for discussion, Q and A, and particularly identifying other
test-beds in other health systems or some of the current health systems that are working with us.
How can we test the generalizability of some of this stuff? So far we've been working on four basic
problems, which you see there under item two, and I'll explain them more as we get into them. But the
basic focus is not using things like logistic regression to predict a clinical outcome, will a patient survive
or not, but to predict capacity needs and to predict patient flow.
That's sort of the common theme that all of these problems have. They differ slightly in methodology,
they look similar on the surface but the methods are different, and they differ a little bit in motivation.
But the common theme again is when I have a handful of patients or many, many handfuls of patients
in an ED, can I predict downstream inpatient bed demand in a fairly accurate way so that somebody
fairly intelligent can do something reasonable with that information?
Can I predict downstream work demand in a variety of settings? From the OR to anesthesia PACU
recovery. From getting chemo lab work done to downstream getting infusion, a variety of contexts like
that. Usually those downstream steps are just pure chaos and if we could predict what's coming a little
better we might be able to manage it better.
For that matter can I predict three weeks out bed census across my whole health system so
that I'm not doing morning bed huddles and sort of crisis management. But I'm able to do that morning
huddle days in advance. And can I predict who will need what types of subspecialty referrals so that we
can manage that flow better, and in particular, avoid a lot of unnecessary work.
So that's the general overview of this work and let me dive a little deeper. This is just the obligatory
comment required by my employer as a University. So, all of this work is part of CHOT. So, it's not out
there in the public domain and it's got IP associated with it.
So we just ask that it not be globally disseminated or commercialized in any particular way. Right now it
all belongs and resides inside the CHOT Consortium. What I'm gonna summarize is really the work of
probably a dozen of our graduate students and post docs and some people I'm forgetting and I've listed
some of their names here.
And if there's anything that we show you that you're particularly interested in learning more about, just
contact us. We've got published papers and working papers and lots of further information. So, I think I
mentioned the general motivation, but here it is again. What's of interest is how can we use predictive
information and how would it be most valuable in the work that we do in healthcare, in terms of likely
patient flows, likely resource needs, likely appropriate services and etc.
The word likely is important. Accurate guesses that we might be able to better manage our processes
with. For example, logically adapting staff and resources, adapting scheduling, diverting patients to
some other facility, making discharge decisions, expediting discharge or not expediting discharge if
there's no need. So the metaphor I frequently use for this is the weather forecast because that's actually
the idea here.
If we knew, as most of us do if we're slaves to our iPhones, three, four, five hours out, or three, four, five
days out what the weather's going to be, most of us adapt to that in some way, some logical way. If we
know a storm's coming, we start to think about leaving work early so we're not caught in rush-hour rain
traffic.
Or if it's a snowstorm we start to think about putting appropriate resources into place. If I manage the
state highway system I might just start to make some phone calls and get drivers for the snow trucks.
Start to make sure I've got sand and salt where I need it located, things like that.
So the value of an imperfect crystal ball looking into the future in our real lives, allows us to start
thinking about how we're gonna deal with something that may or may not happen. But we have some
probabilistic information it may happen. We may not know the severity of the snowstorm but we know
one is probably coming that's small, medium, or large, and we'll do something about that.
So that's the overall motivation, putting predictive information in the hands of people who've been
there before and probably have a sense for how to start reacting and getting prepared. What we
proposed at the fall 2013 CHOT meeting was doing a phase one exploratory project, jumping into
predictive modeling, identifying three to five generalized problems, starting to work on them and
learning what we could about what's most useful, what has the most value, where can we be accurate,
where can't we be accurate, and let this phase one project run its course and learn from it and decide
what to pursue further.
And where to go from here. So, as we're leading into the spring CHOT meeting in March or April, we're
hoping that these results start to inform, wow, what do we do from here? So if you need further
motivation, if you were following the news and certainly if you lived in New England.
About a week ago, the lake areas, particularly Buffalo, got dumped on big time with snow. There was a
forecast, people knew it was coming, didn't know too far in advance but had some advance look. Didn't
know exactly the quantity, just knew it was gonna be big. And there are sort of these bands.
If you look at the upper left hand corner with the weather service's best guess as to quantity. And it
didn't really matter if you were in the dark purple or light purple, you knew you had a lot of snow
coming. Or medium blue or dark blue, you knew you had a medium amount of snow coming.
And the exact quantity didn't matter so much as the general range. And you can kind of see what
eventually rolled out. Some people prepared and some people didn't. So applications. I'll step through
these one at a time and just give a little sense for the general nature of the problem, the approach we've
taken, and where we are so far.
So I think I already mentioned really all five of these, with the exception of predicting no-shows. So that
we can manage capacity a little bit better. So, you'll see this slide again as the intro to each of the
sections. So let me just jump to the first application.
This is joint work that we're still working on, which I started with a former PhD student of mine, Jordan
Peck, who's now up in the Maine health system as a systems engineer doing good work. But what we
were interested in were all the problems associated with ED flow and ED boarding, and so on.
So the general concept is here on the left-hand side, and some schematics on the right-hand side: can we
predict on arrival to the ED if a patient's going to eventually need an in-patient bed?
>> And if we can predict that at the individual patient level as patients start to arrive to the ED and the
waiting room becomes more and more crowded, if we could aggregate those probabilities at the
patient level across the patients currently in the ED, could we do that in a way where we know
something about the mean, the standard deviation, and for that matter the probability distribution of
the number of patients that eventually will need a bed.
So in the upper right-hand corner you see two process graphs, the top one being sort of this classic
process where after your course of treatment in the ED, a decision's made and there's a request for a
bed that's made and then a bed's available or not and the patient boards or doesn't.
And there's a little bit of delay. So if we knew with some probability if I, as a patient, were likely to
need a bed earlier in my ED length of stay, that information may or may not be useful to a bed manager
to start to make beds available and start to think about flow.
I mean so that's the real general idea. In the lower right hand corner you can see it laid out in a slightly
different way where the horizontal gray bars are patients that are arriving.
>> The x-axis, if you will, moving left to right is time. So Patient 1 arrives, and by one way or another
we're predicting he or she has a 95% chance of eventually needing a bed.
Patient 2 who arrives sometime later and they're assigned a probability of 67% of needing a bed
eventually. Patient 3, 11%, patient 4, and so on. So as patients cascade in and adding up those
probabilities gives us the expected number of beds we'll need at any given time. And then we can
similarly compute the standard deviation of the number of beds we'll need at some point in time.
And so, it's simply the sum of the probabilities, and you can see the formula for the standard deviation.
We actually know how to compute the exact probability distribution of the number of patients who will
need beds, if one gets to that level of complexity, I guess. So what we've found is that this process
seems to work pretty well and we've compared a bunch of different methods for predicting at the
individual patient level.
The obvious ones, logistic regression and various classifier methods, machine learning, support vectors,
Bayesian methods, and interestingly, simply asking a triage nurse at time of arrival to the ED to
categorize a patient into one of these six categories from definitely down to definitely no, seems to be
pretty accurate and you don't lose that much accuracy from a more advanced type of analytic method,
which when we talk about implementation is a really nice finding.
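The aggregation idea described above, summing per-patient admission probabilities into a bed-demand forecast, amounts to a Poisson-binomial calculation. Here is a minimal sketch in Python, not the published implementation; the triage-time probabilities are the hypothetical ones from the slide plus one made-up value:

```python
# Each patient in the ED has an estimated probability of eventually needing an
# inpatient bed; the total demand follows a Poisson-binomial distribution.
import math

def bed_demand_summary(probs):
    """Mean and standard deviation of the number of beds needed."""
    mean = sum(probs)
    std = math.sqrt(sum(p * (1 - p) for p in probs))
    return mean, std

def bed_demand_distribution(probs):
    """Exact Poisson-binomial distribution via iterative convolution:
    dist[k] = P(exactly k of the current patients will need a bed)."""
    dist = [1.0]
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1 - p)      # this patient does not need a bed
            new[k + 1] += q * p        # this patient does need a bed
        dist = new
    return dist

probs = [0.95, 0.67, 0.11, 0.40]       # hypothetical triage-time estimates
mean, std = bed_demand_summary(probs)
dist = bed_demand_distribution(probs)
```

A bed manager would mostly read off the mean and an error band, but the full distribution lets you compute, say, the probability of needing more beds than are currently free.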
In terms of results and accuracy, this work's been published in two places, Academic Emergency
Medicine and Health Care Management Science. But the basic takeaway is the method works at
the aggregate level, because the bed manager is mostly interested in the number of beds that she or he
is going to need this afternoon.
If there are sixteen people on this phone call, knowing that seven of us need a bed: it doesn't really
matter which seven, that's not as accurate, but knowing seven as a number tends to be pretty accurate.
You can see the area under the curve numbers and the r-squared values in terms of expected minus
actual. So, it's pretty accurate. We've replicated this work now in four different
EDs, with slightly different predictors and coefficients. Triage nurse remains pretty competitive. And so
we're getting similar results. So what we're very interested now is how to roll this out and implement it.
Short term and long term: long term, one would code this into the work flow of an IT system somehow.
Short term there may be some down and dirty ways to do quick PDSA tests, to aggregate information in
the ED and make that visible to a bed manager somewhere else in the facility.
For example, subject to HIPAA compliance and so on, using something analogous to laptops or iPads and
simple Google forms and spreadsheets, feeding that information upstairs to a bed manager. So that's
one area we've been working on in predictive modeling, sort of at the micro process. I want to talk
through several other problems.
So this other area, the third area, we've actually done a lot of work in, and it just seemed natural in the
ordering to put this one second, which is a fairly new problem we've been working on. But it's basically
upstream-downstream flow. And the original motivator was a system that we're working with where,
originally, we thought this was an operating room scheduling problem.
But, it really turns out the interest is in operating room scheduling because there's a PACU problem.
There's sort of no room at the end frequently. And so, the PACU being full is leading to upstream
problems, in terms of needing to delay surgery starts or, for elective surgery, scheduling for another day.
And it's analogous to the airplane problem when I'm held on the runway at Boston Logan because there's
no air slot for my plane to land out in Chicago. Plenty of capacity to take off here, just no capacity to
receive me later. So that's a logistics problem. It doesn't only happen in OR to PACU flow; it may also
happen, for example, in same-day chemo, where we can schedule what time I come in for my labs and
my workup, and then I'm gonna flow down to my infusion, and of course the infusion chairs and beds
are sort of total chaos.
So, so, that's the general context. The approach here is basically developing a simulation of the
predicted number of people in the upstream process and in the downstream process by time of day. So,
in the upper right-hand corner, you can see a mock-up of a spreadsheet tool we're developing where
one would enter the scheduled work.
In this case, it's operating rooms and we have ORs and start times for procedures of different types.
Somewhere else in this workbook is an inputs tab where there's information on the mean and standard
deviation and probability distributions of the time those types of operations take and the time those
types of patients take to recover.
It's a downstream process to the PACU. And so a simulation coded up in Visual Basic, a macro behind the
scenes, replicates a hundred, or a thousand, or a user-entered number of possible days, and displays the
number of occupied beds, assuming an infinite number of beds in the PACU. And then some horizontal
line that's set to the actual number of beds we have, and you can see by time of day when we're going to
have a capacity problem, and then, the final bullet on the left-hand side, in some way react to that.
Either prepare for it, how else are we gonna recover these patients, and stick to the plan or modify the
plan. Say, well wait a minute, I'm simulating next Monday's schedule, and this just isn't gonna work, we
need to move some of these procedures around. I think one could use a tool like this to run a lot of
scenarios, do scenario analysis and develop general rules.
That just are good principles to follow to make flow work well. So, that's the motivation and the concept
and the approach to this problem. I should thank one of our postdocs, a graduate student intern with us
right now, because this is sort of hot off the presses. This morning they produced this graph you see in
the upper right-hand corner, which is an example of the output. So what you see is the green line, which
is the hypothetical capacity of this
particular PACU, and these three brown lines, one is the average of 100 days that were run, 100
tomorrows.
And the two lines sandwiching that are the mean plus one standard deviation and the mean minus one
standard deviation. So this is sort of a probability range by time of day in terms of how full the PACU is
going to be. And we're getting this kind of weird oscillating effect.
Because we're still working out the logic. But, that's the general idea. Input the schedule. Procedures
take random amounts of time. So, at some random departure from the OR, patient's gonna land in the
PACU and be there for a random amount of time and then depart. So, how does all that stack up in the
PACU and are we gonna have any problems?
The yellow line at the top is of all hundred days that ran, what was the worst day? So that's about as bad
as it's gonna be out of those 100. So where we are right now with this is we've got a prototype tool
under development. We think that this is a nice down and dirty approach.
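The simulation idea described above can be sketched roughly as follows. This is an illustrative toy in Python, not the actual Visual Basic tool; the schedule and the normally distributed procedure and recovery durations are entirely hypothetical:

```python
# Replicate a day's OR schedule many times, with random procedure and
# recovery durations, and track PACU occupancy by minute of day.
import random

def simulate_day(schedule, minutes=24 * 60, rng=random):
    """schedule: list of (or_start_minute, mean_proc, sd_proc, mean_rec, sd_rec).
    Returns PACU occupancy for each minute of the day (infinite beds assumed)."""
    occupancy = [0] * minutes
    for start, mp, sp, mr, sr in schedule:
        proc = max(1, int(rng.gauss(mp, sp)))    # time in the OR
        rec = max(1, int(rng.gauss(mr, sr)))     # recovery time in the PACU
        arrive = start + proc                     # patient lands in the PACU
        for t in range(arrive, min(arrive + rec, minutes)):
            occupancy[t] += 1
    return occupancy

def replicate(schedule, n_days=100, seed=0):
    """Run many hypothetical tomorrows; return per-minute mean occupancy
    and the worst occupancy seen across all replications."""
    rng = random.Random(seed)
    minutes = 24 * 60
    total = [0] * minutes
    worst = [0] * minutes
    for _ in range(n_days):
        occ = simulate_day(schedule, minutes, rng)
        for t in range(minutes):
            total[t] += occ[t]
            worst[t] = max(worst[t], occ[t])
    mean = [x / n_days for x in total]
    return mean, worst

schedule = [(7 * 60, 120, 30, 90, 25), (7 * 60, 90, 20, 60, 15),
            (10 * 60, 150, 40, 120, 30)]         # three hypothetical cases
mean_occ, worst_occ = replicate(schedule)
```

Comparing `mean_occ` (and a band around it) against a horizontal capacity line reproduces the kind of by-time-of-day chart described on the slide, with the worst replication playing the role of the yellow line.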
We're also sort of trying to work on scheduling optimization in the OR, but these are difficult problems;
well, they're difficult problems and solving them is difficult, particularly to account for all the variability.
And so we think this is a complementary tool that would be very useful.
Just to give a heads up as to how bad tomorrow is gonna be and maybe do something about it. So that's
another project that we're trying to do enough exploratory work in during phase one. Again that's the
whole philosophy. We have this phase one project. Let's identify a broad range of problems and do
enough exploratory work that we can learn whether this is a viable idea.
Either there are enough members inside of CHOT and outside of CHOT with whom this resonates that
we want to continue to work on it; or, you know, we're not really able to predict so well, and let's invest
a little time to learn that before we launch any further. So both of these first two problems naturally
extend to, what
about a dashboard for my whole facility?
So, can we predict, and what about a longer time frame? So, could I predict bed census three weeks
out across the whole facility? And so, that's what this other project is about. We've been working on
this one for a long time. Methodologically, it's a lot more advanced, but the concept is sort of a simple
one, which is: we know our known work, we know what's scheduled over the next
several days.
Some of the schedule hasn't filled in yet, but we know what's scheduled. We also know what's
unscheduled in a likely way; there is predictability around urgent/emergent arrivals, with some error
band, obviously. But, if you look at historical data, one can predict what's coming in through the ED.
So, if we take the known work, and the predictable but not perfectly known work, and we sort of
propagate that through the patient's episode of care, including their probabilistic flow paths: patients
come into the ED, and what percent of them historically need to go to the ICU, need to go to surgery,
need to go to an in-patient bed?
Probabilistically, by type of patient and type of ailment, what are their lengths of stay in each of those
locations? Probabilistically, how many ED patients, after being in the ICU, flow where, and how long are
they there, in a mean and standard deviation sense and a probability distribution sense? So if you take all
that information, which is known, knowable, or discoverable, and propagate that forward, one can
develop a long-term forecast, which is what you see in the lower right-hand corner.
This green funnel graph is the work of a couple of our students, Kendall Sanderson and Sam Davis.
Sam's been really doing an extraordinary job with this. And so the way one reads this, left to right, is day zero is
today. And in this particular department, what's the occupancy going to be tomorrow, the next day on
day four, on day eight and etc.
And the darker green is the mean +1 standard deviation and the lower green is the mean -K standard
deviations where right now K is 1. So you have this probability range what the future is gonna look like.
Less and less accurate further and further out, just like the weather forecast.
Much more accurate an hour from now, just like the weather forecast. But there's this point in time,
days, let's just say 10 through 14 where there's some probability I'm going to exceed capacity. I might
wanna keep my eye on that as that horizon rolls towards me. I might wanna start to make some phone
calls if that were tomorrow or the next day and think about who else could come in and work.
Where am I gonna find the extra beds? Is there any work I could move somewhere else. We've talked
with NICUs who say, yeah, my rule base is if we're approaching capacity and I know I have a mother
who's gonna be delivering with triplets, quite likely, this is gonna be three patients in the NICU and I just
won't have room.
I admit that patient somewhere else, some other facility somewhere else in town, maybe even outside
of our healthcare system, whether that's a viable rule base or not. But that's an example of how one
would react to what you see coming at you on the horizon for a weather forecast.
So in terms of the approach without going into a lot of detail, I'm happy to go into the detail. We've
taken three different approaches to this. One is just a brute force Monte Carlo simulation embedded
behind Excel with an Excel front-end and back-end for usability, and it works pretty well.
And then because we're academics, we've been exploring more mathematical ways to do this, including
using Markov chains and using a fairly complex probability convolution logic. They're actually all
equivalent; they're just sort of different ways to get to the same place. So that's sort of the
methodology, which took me 60 seconds to explain and kind of a couple of years to work out, probably.
But the neat idea is implementing a lot of these tools in Excel so that they can be disseminated and used
without people having to buy third party commercial software. In terms of where we are in the work
right now, we've identified a number of test beds, actually now more than three, of a range of types,
ranging from simple to complex. So in one health system, an application in orthopedics where most of the
work is scheduled work, joint replacements. Not all of it, some might come through the ED, car
accidents and whatnot, but most is scheduled work. And the variability is low in that sense, but also low
in the length of stay sense.
Length of stay for a total knee or a hip is not perfectly deterministic, but there's not enormous
variability. That's one type of application. We're starting to work with several ICUs and neonatology
ICUs, where there's more variability and longer lengths of stay. And then even doing this
system wide, which is what you see in the lower right hand corner.
A mock up, I made up the names of these departments, but you may have four or six or more locations
in your health system that you're interested in looking at and seeing the long-term forecast. And it's
starting to do some flow pre-management. So, we're in the process of getting or have data, historical
data from the systems.
And what we're doing is retrospectively validating. So dial back the clock: we know what census actually
worked out to be over the last couple of years. Can we go back 400 days and then pretend we're
running prospectively, and can we accurately predict the past? We hope to have that exploratory
work done over the next three months, so then we can this winter start to say, well, that's pretty cool.
In these contexts we're able to predict the past pretty well, and in these contexts not so well. Let's
figure out why not in the latter case, but in the former case let's go live. And then study the value
prospectively, in terms of what would the value of a tool like this be, how does it help people
manage flow, what types of decisions do they make.
Let's even discover their sort of expert rule base and even start to investigate what optimal strategies
would be. Presumably experts are experts, but mathematically can we develop algorithms that are
optimal resource adaptation algorithms? And if you're an operations research person you can kind of
see what we're thinking here in terms of stochastic programming.
In terms of several possible futures, several possible scenarios: at what point in time do I react, and by
how much? That's the general idea. So the systems we've presented this to so far have all sort of
thought this would be quite beneficial if we could bring it to fruition, and more is better than less, so we
are absolutely hoping to test this in more systems and report out on it at the next CHOT meeting.
Let me switch gears real quickly, we have a little bit of time left, and talk about, primarily, this fourth
application, maybe a little bit the fifth application. This is a little different. The first three were all sort of
patient flow path prediction. We've been doing work that looks a little similar but is also a little
different, motivated by unnecessary referrals, but it's more general than that.
That's our, I guess, alpha platform, but I think it's a more general problem than that in terms of
predicting what do people really need for care services. Here is the general context with language
around referrals. And so our test bed to date has been neurology sub specialty referrals.
So in some other part of the care system, reading this flowchart left to right, a primary care provider, or
somebody, indicates a need for a referral to a sub-specialist. Then there's the black box that decides:
does this individual really need a face to face referral? On the bottom part of this flow, which costs a fair
amount of money.
Often retrospectively it wasn't needed and creates an access problem. So black box, should this patient
go to a referral or could this patient be handled in some other way? And there's more than one other
way. Curbside consult, sort of doc to doc, I have this patient, what would you advise; e-consult; Hello
Health; email; and so on.
So there are multiple, but here it's just sort of dichotomous for simplicity. Now what will happen is a
patient may get that faster, so more patient centric perhaps, mode of consultation and the result of that
may be satisfactory and that's all that's needed, so that was tremendously successful.
The result of that may be in talking with you or with your provider, I think you should come in for a face
to face. So no real harm there. A little bit of cost incurred for that e-consult. A little bit of a delay, but
not an enormous delay in terms of days and weeks and months.
So then the question is the black box. So we've been working with real data, and exploring the accuracy
of all the common predictive models one could list out: logistic regression, support vectors,
classification trees, ensemble methods combining the best of the best, and others. And we're getting
pretty good accuracy.
With any of these methods, or with most of these methods, they don't tell you with certainty that this
person does need a face to face, or that person certainly only needs a curbside consult. I mean, logistic
regression produces a probability, a likelihood. So if you run a logistic regression, you know the
probability distribution of all the patients who ultimately, retrospectively, were decided to need face to
face, versus those for whom curbside was good enough.
You have these distributions, and now you have a threshold optimization problem because in the lower
left hand corner, sort of the true positives and the true negatives overlap. So you have to decide where
to set that threshold in that black box to figure out those in the middle to which path do they stream.
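The threshold decision described above can be illustrated with a small sketch: given classifier scores for the two retrospective groups, try each candidate cutoff and keep the one that minimizes total expected misclassification cost. The scores and the cost weights here are hypothetical, not from the actual study:

```python
# Patients with score >= cutoff are routed to a face-to-face visit; those
# below are routed to a curbside/e-consult. We weight a missed face-to-face
# more heavily than an unnecessary one.

def optimal_threshold(scores_need_f2f, scores_curbside_ok,
                      cost_missed_f2f=5.0, cost_unneeded_f2f=1.0):
    """Return the cutoff with the lowest total misclassification cost."""
    candidates = sorted(set(scores_need_f2f) | set(scores_curbside_ok))
    best_cut, best_cost = None, float("inf")
    for cut in candidates:
        # false negatives: truly needed face-to-face but routed to curbside
        fn = sum(1 for s in scores_need_f2f if s < cut)
        # false positives: curbside would have sufficed but got face-to-face
        fp = sum(1 for s in scores_curbside_ok if s >= cut)
        cost = fn * cost_missed_f2f + fp * cost_unneeded_f2f
        if cost < best_cost:
            best_cut, best_cost = cut, cost
    return best_cut, best_cost

need = [0.9, 0.8, 0.7, 0.6, 0.4]       # scores of patients who needed f2f
ok = [0.5, 0.3, 0.2, 0.1, 0.55]        # scores where curbside was enough
cut, cost = optimal_threshold(need, ok)
```

With asymmetric costs the optimal cutoff shifts toward sending borderline patients to the face-to-face path, which mirrors the sensitivity/specificity trade-off discussed in the results.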
So, we've worked out methods for doing threshold optimization and a number of students have been
helping with this. Cory Stasco and Sophie Sun in particular. And I want to recognize their extraordinary
work, but also acknowledge that others have been helping us with this. So it's really a cool problem
because it just makes a lot of sense.
There's some nice predictive work and some nice optimization work. Here are preliminary results to
date, working in one sub specialty in neurology, taking data from one recent month, and doing this
offline testing, where essentially, retrospectively, a panel, not quite that formal, of clinicians looked back
on a number of patients and decided, by quasi-consensus: did this patient actually need a face to face,
or could they have been as well served with a curbside consult? And then we looked at what our tool
would have recommended versus them as the gold standard, and that's what you see in this table in the
upper right-hand corner.
Comparing the tool to, let's call it truth, and how patients would have flowed through the flow path in
the previous slide. And what the net cost and cost savings would have been if all of these patients hadn't
gotten the face to face. And so you can see the workbook doing that math.
In the left-hand corner, you can see the bullet summary, in terms of about a 25% reduction in face to
face consults. Of those patients that flowed, hypothetically, to face to face (we're talking about
sensitivity and specificity now, right), about 3% were unnecessary, but that's better than 26%
unnecessary.
So that equates to better access, faster, fewer days till next available, or days until I can see a specialist
face to face when I need it, and an extraordinary amount of potential savings by diverting all that
unnecessary work away from a constrained resource. So where we are now is trying to transition the
health system, which is a cultural comfort thing, from this offline evaluation to doing an online
evaluation prospectively and actually using the tool.
So that requires a little bit of facilitation and soft skill. We have students here working in classifiers who
have developed a simpler sort of tool that's almost as accurate, a sort of decision tree tool which you
can see in the lower right-hand corner. And we have others that have worked on sort of, wait a minute,
we're actually changing the process, that means we have to think about process flow and how this
referral work flow logic will work.
Who's gonna do this? Who's gonna run the tool, figure out what patients divert in which way? How is
this all gonna work from a management perspective? So, some basic process flow design work going on.
So that's, I think, a really interesting project that's called on a number of different skills in a general
industrial engineering / operations research toolkit. Just for the sake of, I guess, completeness, let me
talk briefly about no-shows, only because it has predictive modeling in it. So in the referral flow
problem, what we were doing is predicting whether a patient does or does not need a face-to-face
referral. And no-shows are kind of a classic: the approach is to predict, is a patient going to show up
or not?
This is a ubiquitous problem in outpatient settings. And so we've worked on this problem, and we're
certainly interested in continuing to work on this problem. But I'm mostly showing this as an example of
another scenario where predictive modeling can be useful, and how it fits in the broader approach of
tackling a problem.
Our approach has been, of course, from a process improvement perspective: what can we do to reduce the no-show rate? But you're always gonna have some no-show rate. So can we predict it? And if you can predict it, can you do something different but analogous to the airlines and hotels, and optimally overbook?
And there are simple and very advanced methods for optimal overbooking. I think simple is good enough, because ultimately health systems don't do exactly what the math prescribes; all models are wrong, they're just useful informers. So: what's the ballpark of how much we should overbook, using simulation or some other rule base, common sense, or algorithm to figure out where in the day to overbook?
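The "simple is good enough" ballpark can be sketched with a basic binomial model: if each booked patient shows up independently with some probability, pick the largest number of extra bookings whose chance of overflowing capacity stays below a tolerance. This is one common textbook framing, not necessarily the model the team used, and all the numbers below are made up:

```python
from math import comb

def prob_shows_exceed(n_booked, p_show, capacity):
    """P(number of shows > capacity) when n_booked patients each
    show up independently with probability p_show (binomial tail)."""
    return sum(
        comb(n_booked, k) * p_show**k * (1 - p_show) ** (n_booked - k)
        for k in range(capacity + 1, n_booked + 1)
    )

def overbook_level(capacity, p_show, tolerance=0.05):
    """Largest number of extra bookings keeping overflow risk under tolerance."""
    extra = 0
    while prob_shows_exceed(capacity + extra + 1, p_show, capacity) <= tolerance:
        extra += 1
    return extra

# Illustrative: 20 slots, 80% show rate, 5% overflow risk tolerated
print(overbook_level(capacity=20, p_show=0.8))  # 2
```

Note the model answers "how many", not "where in the day", which is exactly why Jim mentions pairing it with simulation or a common-sense placement rule.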
And then putting this into practice through a series of incremental scale-up tests: the model says we should overbook by four, so let's start with one, learn from it, build familiarity and comfort with this. Let's build up to two, then three, and even though the model says four, let's stop there, because we've got some doubting Thomases, or whatever. That's normally how this really works, and that's why I'm not totally obsessed with optimality in this case. But there are opportunities, by predicting no-shows, to use some of the retrospectively unused appointment slots to increase revenue and to improve appointment access, because you're working down the queue.
So I'm now showing you results from a second place we've done this. Both these applications have been in OB/GYN, somewhat coincidentally, and somewhat because a colleague who, I'm grateful, works here with me is an obstetrician; this is the world she's familiar with. Different system, same general approach. And you can see we have pretty good predictability in terms of who is gonna no-show.
And we're trying to move this health system to strategically overbooking in the most sensible way, and it's incremental. I don't think we're where the model suggests we should be yet. But at least they're partially overbooking strategically and not doubling up on people who are likely to show.
That's what I mean by strategic versus non-strategic. But you can see these graphs on the right-hand side, these are pretty neat: the improvement in appointment access on the top. Maybe I should have put on top the improvement in slot utilization, which leads to getting more people in the door today, pulling people out of the appointment access queue.
So, another project where prediction has played a role. That's a lot of material, and I am very interested in feedback. All of us here are very interested in feedback, because that was the whole idea of this phase one: let's spread ourselves across a bunch of different opportunities in prediction and see what has traction.
So I know it was a lot, so just a recap. We've done this preliminary work in really these four or five application areas. Predicting patients coming through the ED who are gonna need a bed at some point today, so we'd better make sure that we have beds. Predicting, based on scheduled work as opposed to ED arrivals, scheduled work such as in an OR, how patients are gonna flow to a downstream resource and what sort of bottleneck we're gonna create, so we can alleviate that.
Third bullet, doing the system-wide, days-ahead weather forecast, so we can proactively manage a resource or schedules. And then predicting who really needs what type of referral or other type of care, like imaging, and whether they should flow to something else first. For a large percentage of them, that's really what they needed, and they don't need the more resource-intensive, cost-intensive resource, whether it's expensive imaging or a face-to-face referral.
So those were the four problems we've primarily been working on, in addition to the no-show problem, and in our view the methods seem very approachable, seem to have face validity, and seem to have predictive power. So again, as I've said several times, but it doesn't hurt to say it one more time, we really wanna test this stuff in practice, and larger samples are better than smaller samples.
So we've got great partners, and we'd love to have a few more systems that wanna test any of these methods so that we can evaluate them for generalizability, so we can serve the mission of CHOT, which is really to do cross-system collaboration and to contribute to the broader body of knowledge that health systems everywhere can benefit from.
So a larger n increases our ability to disseminate through publication, etc. So that's the end of the content itself. Leslie, I'll pass coordination of the rest of the webinar back to you. I haven't had the chat window open, so I would love to have discussion from anybody on the webinar: comments as to what they see as most useful, or questions that I could respond to or that we could handle offline.
>> Okay, well, thank you very much, Jim, for your presentation. What I am gonna do now is unmute everyone for the Q and A session, so give me a couple of minutes, or a couple of seconds. There are a lot of people on, which is great.
Those of you who are wearing headphones, we might have some ambient noise because of this. So if you could mute yourselves unless you're speaking, that would be great. But if we could have some questions or comments for Jim, that would be wonderful. Let's start.
>> Yeah, hey, this is Peter Prosiosie.
I thought it was an excellent, excellent presentation, really interesting, and I was curious to find out if you have tried to link any of your work to management of resources, say around purchasing departments and the kinds of equipment that need to be purchased, and supplies and so forth, and also around staffing patterns.
>> So great to hear from you, Peter. I always appreciate your questions, thanks so much. Yeah, that's exactly where the conversation is. I was just at a health system on Monday, one that's not on the call but hoping to join CHOT, a great academic health system, and that's exactly what they want to do.
Right, so, if we know what's coming through, both the scheduled elective work and, probabilistically, the rest, that ought to be able to drive inventory, that ought to be able to drive just-in-time arrival better, in terms of the flow of disposables, but also equipment management, non-disposables, inside a facility.
It ought to be able to serve as an input to scheduling algorithms, the staffing algorithms, right? So that's exactly what we want to do. We haven't linked this yet; right now we're just working on the prediction piece. But, completely independent of this, we're working on a different project that's basically nurse staffing by specialty in the OR.
Right? So eventually those two link, because an input to that problem is basically the mean and standard deviation of the number of cases by type that we expect to happen. So our output is the input to that problem.
>> Thanks, Jim. Other comments from anyone? So either no comments, or there was a wonderful question and you were on mute.
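The mean and standard deviation by case type that Jim mentions as the staffing input is just a per-type summary of historical daily case counts. A minimal sketch, with made-up data and a hypothetical input shape (one dict of counts per day):

```python
from collections import defaultdict
from statistics import mean, stdev

def case_count_stats(daily_counts):
    """daily_counts: list of {case_type: count} dicts, one per day.
    Returns {case_type: (mean, stdev)} of daily case volume."""
    by_type = defaultdict(list)
    for day in daily_counts:
        for case_type, count in day.items():
            by_type[case_type].append(count)
    return {t: (mean(v), stdev(v)) for t, v in by_type.items()}

# Hypothetical three days of OR case counts
history = [{"ortho": 4, "cardiac": 2}, {"ortho": 6, "cardiac": 3}, {"ortho": 5, "cardiac": 1}]
print(case_count_stats(history))
```

A prediction model's output would replace this retrospective summary with forward-looking estimates, which is the linkage Jim describes as future work.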
>> I think what we'll wanna do, I mean, again, I'm so interested in translational science. All this work is getting pulled; it's not being invented in a university and then, now, how do we push it out there? So it's not that type of bench-to-bedside problem, because it's bedside-to-bench-to-bedside.
I mean, it's motivated by systems we're working with. But I'm tremendously interested in generalizability and in reducing the amount of work we do that is one-system, one-problem projects. So, I think what we will do is a SurveyMonkey survey of some type, and try to reach out to people with some questions about which of these problems they are most interested in, whether they see a potential application inside their system, and if so, who we might contact, or who they might have contact us, to discuss that.
So apologies if you get one more email in your inbox. Hopefully you won't view it as spam.
>> Oh, well, if we have no more comments or questions, then, Jim, you and I can talk offline about how to create some kind of survey. Thank you very much for attending today's webinar.
And please feel free to send us any email you have and I'll forward it on to Jim. And we'll definitely be
seeing you, hopefully, at next week's webinar. Thank you very much.
>> Thanks Leslie. Thanks everybody.
>> Thank you.