
Measuring the Impact of Electronic Health Records on Charge Capture: A Second Generation EHR
Research Approach
Presenter: Nick Edwardson, Texas A&M University
Recorded on: June 18, 2014
And now we will go to today's presenter, Dr. Edwardson.
>> Thank you very much, Sherry. And thanks, Rose, and everyone else for joining us here today. Feel free to send any questions and interrupt along the way. Sherry's gonna have everyone on mute, but I'm more than happy to stop.
Today's topic is obviously the impact of electronic health records on charge capture. But we're gonna be
breaking this talk up today into really two different components. The first, I wanna briefly introduce
everyone to this, sort of conceptual model that we came up with in conducting the literature review,
and that is this issue of what we're calling second generation EHR research.
And how exactly that's different from first generation EHR research, why that difference is important, why we think it's worthwhile, and why we're interested in getting this manuscript published.
Again, full disclosure as well: this data set comes from one of CHOT's industry advisory board members, and that's Texas Children's Pediatrics, which is the integrated care network of Texas Children's Hospital.
So briefly today, again, we're gonna break this into two different talks. The first is gonna be on differentiating first generation EHR research from second generation, and then the second component will be actually measuring the impact of the EHR on charge capture. Let's briefly talk about this second generation EHR research approach.
We know, and this should be familiar to most of our industry members on the phone here today, that in spite of the sort of carrot-and-stick method the government took back in 2009 with the HITECH Act to try to induce organizations to adopt EHRs, relatively few organizations have one, and of those that do, we'll be talking about this later. We're slowly approaching a good number, maybe 50% of hospitals and, on the ambulatory side, organizations that can check off the box: yes, we have an EHR. But we'll talk a little later about how few of those organizations can actually say, yes, we're actually using a comprehensive EHR.
And we're a little puzzled about that, and by "us" I mean the health services researchers who have been looking at this issue for well over a decade now. But as you can see in the very last bullet point: as of 2013, fewer than 14% of office-based physicians had adopted EHR systems with the capacity to support at least 14 Stage 2 meaningful use requirements.
Briefly, let's talk about what we know about EHRs in general. They tend to have impacts, we know, on productivity, patient care quality, and billing; today, obviously, we'll be talking about billing. On the productivity and care quality fronts, we've got some mixed results so far, and we'll be discussing why exactly we think there might be some mixed results.
But to speak specifically to productivity, Tim Wirtz, a colleague of mine at Ohio State, recently found on the hospital side, now this is hospitals, not ambulatory, that hospitals that had recently adopted EHRs exhibited lower productivity gains than hospitals that had not yet adopted the technology. However, on the ambulatory side, Adler-Milstein and Huckman found the exact opposite to be true.
That is, they actually found positive gains in productivity. When we talk about EHRs and their impact on care quality, we're obviously gonna find some mixed results as well. Zhou and colleagues, this was the article that came out around the HITECH Act that got a lot of bad press for the policy: they could find no linkage between EHR adoption and improvement across the six quality-of-care composite scores. However, more recently we have found
some weak but positive linkages between EHR use and improved process compliance, improved patient
satisfaction, and reduction in medication errors.
So we are starting to see the needle move in the right direction, as we would have anticipated back in 2009; it turns out we just needed to wait a little longer, perhaps, and we'll talk about why exactly we think that might be occurring. Meanwhile, other health services researchers have identified a number of barriers; this includes some work that we originally did in CHOT back in 2009.
But we know some other barriers to EHR adoption include new costs to the organization, both the large up-front capital investments and then the recurring maintenance fees. We know there's difficulty in interfacing with organizations' existing practice management systems, so that's been a well-identified barrier.
And then finally, we know that a lot of these organizations have physicians, staff, and other allied health professionals who are just stuck in their ways. They've been using some other paper-based system or some home-grown electronic system for 20 or 30 years, and there's just inadequate training and insufficient on-site technical support.
Now here's the interesting finding, sort of the one line that got us looking at this first generation research versus second generation research: these barriers are perceived to be higher by those who have not yet adopted an EHR than by those who have already done it. So there seems to be a difference between perception and reality, and we wanted to look into that a little more.
And I'll be referring to those groups hereafter as the EHR Haves and the Have-Nots, and those are really the organizations that have or have not adopted the technology yet. So just looking at the literature alone, prior to getting to any of the analysis that we'll talk about today, we wanted to find out why exactly, maybe, there are these perceptions.
That is, the difference in perceptions between the EHR Haves and the EHR Have-Nots. We found at least two issues that could explain the industry's very slow adoption process, and we'll talk about both of those today; this is setting the groundwork, if you will, for this second generation EHR research.
So really, two options we're dealing with. One: perhaps we, as health services researchers, have been premature in our summative evaluations. Maybe they weren't summative; maybe we needed to wait a little longer before really pronouncing whether or not the EHR did or did not have an impact. Or in our formative evaluations, where we're just saying here's what we found so far, we know we need to wait another year or two before we see what's going on.
But perhaps we've been too vocal, we've taken too much of a soapbox stance, and we've kind of scared people back under their rocks and into this hibernation, wait-and-see mode on adopting the technology. And then the second option that we're arguing for here today, for the difference in perceptions
between the EHR haves and have nots, is that researchers have been unable to study organizations
whose compositions and local environments are similar to EHR have-nots.
So we'll talk about that, too, so much of the EHR research has been on the Geisingers of the world, the
Intermountains, the Mayo Clinic. These organizations that are great, they're very advanced, they're tech
forward but frankly they're just very compositionally different than the EHR have-nots. Now I do have to
sort of acknowledge to some of you on the phone here.
Some of you might very well belong to some of these EHR pioneers, and that's not to say that we don't need to do research on EHR effects in those organizations. But ultimately, from a generalizability standpoint, we need to acknowledge that evaluations of the EHR's impact at Geisinger, or Intermountain, or Kaiser don't necessarily help out the solo practices.
Or, here we are in southeast Texas: the two- or three-physician practices, they're not really too terribly interested in what's going on at Intermountain, let's say. So we're gonna break this up into two different research issues here. Basically we've already covered those, but we're gonna label this one now as the short game versus the long game.
This is again the notion that we, the researchers, have been pulling the trigger a little too early; we haven't waited long enough for the EHR to fully mature and develop. The context here is kind of what we've already talked about, but perhaps we've been sharing too many EHR horror stories.
We found many, many different examples in the literature of sort of botched EHR implementations; I've listed a few here. And we also mentioned that, from a publication standpoint, and even from an observational standpoint, it would be easier and quicker to spot a badly botched implementation of an EHR than a good one.
Let's say everything goes to plan: we wouldn't expect that many disruptions. But as a result, because there aren't too terribly many disruptions, and unless we're, say, drastically improving medication errors, in the absence of that sort of headline statistic, it's gonna take a while for us to flesh out the improvements.
Whereas, when we go in with these qualitative evaluations, such as what we did back in the second year of CHOT with a former CHOT member, it was very easy and apparent to see that this organization was not enjoying or benefiting from its EHR implementation. And, frankly, it was sort of this alarmist perspective that we were able to take.
We got some publications out of it. But now we're sort of wanting to take a step back and ask: okay, but was that fair to the EHR? Had we waited long enough? Yes, there were some short-term hurdles and barriers and pitfalls, but maybe if we had waited another 12 to 18 months we would have seen some improvement.
So we're just talking about that right now, but we're not saying it's good or bad, but just bringing up the
issue of short game versus long game. And then again, we spoke to this briefly, but let me just throw
some actual numbers out here cuz to me, this is sort of alarming.
We're all very familiar, maybe, with this existing body of EHR research that shows a lot of mixed results, but I found this number the most alarming. And this is back in 2009, granted, but only 1.5% of hospitals had met the criteria of having a comprehensive EHR.
Now, the comprehensive criterion is just this notion of having at least four of the EHR components set forth by CMS, which are, I believe: a component for HIE; the EHR being present in all units of the organization; the ability to share imaging; and, I think, CPOE as well.
When you hold those four criteria up, in 2009, only 1.5% met them. When we look at the ambulatory side, it's pretty gloomy as well. In a fairly recent study, from 2012, more than half reported that, yes, we have an EHR, but only one third of those reported consistent use of basic features such as, as you can see here, demographics, laboratory and imaging results, problem lists, clinical notes, and CPOE.
So again, the question then is: is this fair to the EHR? Is it right that we go in and evaluate what kind of impact the EHR has on an organization when so few organizations have truly allowed the EHR to come in and blossom and bloom and mature? That's really the question we're raising here. You can sort of see, maybe, what I'm getting at with this first generation EHR research versus second generation EHR research. Now, the solution that we fortunately have at hand is that a number of years have passed since the HITECH Act of 2009.
The first major implementation milestone for the HITECH Act of 2009 was a 2011 cutoff date for the carrot side, what I'm calling the carrot versus the stick. This was an actual rewarding of organizations for getting EHRs online by 2011. Well, now almost two and a half years have passed since that deadline, and we as health services researchers stand to benefit from this time, able to look at these sort of robust longitudinal data sets.
For a lot of these organizations we have pre-EHR-implementation data, so prior to 2009, but then we also have, importantly, and possibly more importantly, 24 to 36 months of post-implementation data. So this is the first component that we're gonna talk about with second generation research: the ability to have this pre-post data, where we're not gonna be limited to what we're describing here with first generation research, which was typically limited to only cross-sectional snapshots.
So, for instance, when we went in and looked at Scott & White's EHR implementation here in CHOT back in 2010, they had implemented it literally, in some cases, seven to eight months prior, and we were limited to cross-sectional data. Whereas now, because we have more time, and with these organizations having been capturing data prior to the EHR implementation, we've got this nice panel data, which we'll actually be looking at during the second half of today's presentation.
So this is the short game versus the long game solution. We're saying this is sort of an evolution of EHR
research, and this is the first component of what we're gonna be calling second generation EHR
research. Now the second issue that we wanna talk about, and again, we alluded to this earlier.
But it's sort of this notion of: who have we been evaluating so far in the EHR research? Again, a lot of the research that came out, especially prior to the HITECH Act of 2009, was on these large organizations that had taken it upon themselves, as forward-thinking, progressive organizations: we're gonna do this regardless of any sort of carrot or stick from the government, because it's just who we are.
It has to do with the type of physicians we attract, the type of work force we attract. We're going to do
this because we want to do this. These organizations, we can argue from a compositional standpoint,
are very, very different than the everyman, I'm calling them, of healthcare organizations.
But health services researchers had very little choice but to look at the Geisingers and the Intermountains, because they were essentially the only organizations with EHRs in the U.S. prior to the HITECH Act. It's intriguing for us, as researchers, to look at these organizations, but it's not necessarily actionable or generalizable for non-EHR-using providers and administrators.
And again, we sort of found some support for this notion: as you can see here, Kemper and colleagues found that 32% of large pediatric practices had an EHR in 2005, whereas only 3.5% of the smaller practices did. Now, obviously, there's a financial barrier that stands to explain the discrepancy there.
But we don't have any research yet to even show whether or not the results that we might find in a
large academic setting hold. Maybe they're better, maybe they're worse in a smaller, solo practice. Let's
say just to look at two complete ends of the spectrum. So this is sort of just a call to action on EHR research: we need to stop looking at just these EHR pioneers, if you will, and start looking at organizations that are maybe compositionally closer to the everyman organization.
Now, the solution is just that: because the HITECH Act came in 2009, we suddenly started to see adopting organizations that weren't just these pioneers, but organizations that, prior to the HITECH Act, were paper-based. In response, they started to move and adopt EHRs, and we're looking at such an organization here with Texas Children's Pediatrics, which is our data set today.
We're arguing they're one step closer to the everyman. Now, of course, this is Texas Children's, a large organization, and many people here, perhaps on today's call, might argue that they're very far from an everyman. I would argue, though, that they're one step closer in the sense that this was a paper-based organization prior to 2010.
They did the EHR implementation in response to the HITECH Act, and in addition to the fact that they just saw the writing on the wall, this is what they wanted to do. But from a generalizability standpoint, this sort of study, this analysis, stands to be more actionable for an organization that is also paper-based and is full of physicians who, let's say, have spent their entire careers working outside an electronic health record environment.
So again, just to briefly define, and then we're transitioning to the second half: what exactly are we talking about with second generation EHR research versus first generation? Second generation, we're arguing, must include a robust longitudinal data set with both pre- and post-intervention data points, to make sure we're not jumping the gun, or just picking up on regression to the mean or the other issues that we know abound in cross-sectional data.
And then option two, requirement two, for second generation EHR research is that the data must belong to an organization that implemented the EHR in response to the HITECH Act of 2009, so somewhere in and around 2011 they would have implemented it. And as a result, we're arguing, they're one step closer to being an organization that isn't necessarily an EHR pioneer like an Intermountain or a Geisinger, an organization that was compositionally and historically different from the rest of the healthcare organizations in the United States. So briefly, just some context on the innovation. We talked earlier about the EHR's impact on productivity and on quality; similarly, with regard to billing, we're finding some mixed results in the literature.
MGMA's national study revealed increased revenue, whereas Weill Cornell's, a smaller academic practice up in New York, showed a neutral impact on billing. And then, finally, as you can see here, both of these studies we would have categorized as first generation EHR research, because they were obviously conducted prior to 2009, so those organizations, we would argue, are a little different from the organizations who are the EHR have-nots of today. There was one more recent study that did technically meet the definition of second generation EHR research, and that was our friends up the road from where we are in College Station, at Baylor. They looked into the EHR's impact on billing and found a negative impact on revenue in the short term; that was Fleming and colleagues in 2014. Okay, so now we're transitioning again. As a
reminder for anyone who joined us late, the data set we're working with today is a longitudinal data set from Texas Children's Pediatrics. I believe we have a few of their representatives on the phone, and they'll be able to jump in and clarify at any point, or even after the call during the discussion. We've got beautiful, wonderful, robust financial panel data from them: 372 providers across 42 practices. They implemented Epic in the fall of 2010.
Included in the data set were monthly encounters, charges, and collections from October 2008, so again we had almost two full years of pre-intervention data, and then almost three full years of post-intervention data. Again, the implementation, our dosage, being in the fall: September, October, and November of 2010.
Let's look at some descriptive data from the data set. Average monthly encounters, charges, and collections for the physicians; again, this is just an average across all 300-some-odd physicians: 477 monthly encounters, nearly $90,000 in charges, and $64,000 in collections. We gathered this data from the network's billing and practice management system, which had been implemented in the fall of 2008, so we had it prior to the EHR implementation.
Let's talk about our dependent variables. Today we're going to take a leap and basically talk about the idea of charge capture by using three different proxy variables. There is no variable that we're aware of that very neatly and cleanly defines charge capture, but what we're trying to capture with the idea, or the definition, of charge capture is: how sure are we that our physicians are billing correctly and accurately for the procedures they are conducting during their visits?
We're gonna take three different proxy variables that we could find in the data set. First, we're gonna look at monthly charges at the per-provider, per-patient level; that will just answer the question of whether charges went up, went down, or stayed the same as a result of the EHR implementation.
We're gonna look at collections, how those changed. And then finally we created a ratio, sort of this charge-to-collection ratio, at the per-provider, per-patient level. The idea here is that, prior to the intervention, we've got our standard charge-to-collection ratio. The ideal would be a one: a one implies that for every dollar in charges we submitted, we received a dollar in collections. Whereas as that number gets bigger, it implies they charged more but maybe collected less.
And that's the ratio that we'll be discussing. So these are the three dependent variables that we'll be looking at. And there you can see the data are fairly normally distributed; I didn't have to do any log
transformations across our three dependent variables. The independent variables we included in the model: first, the EHR dosage. There were four different groups of implementation go-live dates that the organization selected across, really, three months, though it was actually closer to 45 days: the end of September, I believe two in October, and then one in November, by which point all 42 practices had received what we'll call the EHR dose in the fall of 2010. We also added a payer-mix category, because this network, Texas Children's Pediatrics, serves a wide array.
They've got 42 practices across all these different geographic locations in the greater Houston area, and as a result we see a large mix: at some practices, a significant majority of the patients seen are going to be commercial, and at the other end of the spectrum, TCP, Texas Children's Pediatrics, has a number of community care practices seeing mostly public pay, and even indigent care as well.
And then finally, we added a secular trend variable that we're calling year, basically just to account for changes in reimbursement, changes in inflation, and overall growth in the Houston community that we wanted to try to capture. We did not try to accomplish any imputations, so we basically just kept providers with complete data.
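To make the three proxies concrete, here's a minimal sketch of how they could be computed from a monthly provider panel. The column names, provider IDs, and dollar figures are hypothetical, for illustration only, not the actual TCP fields:

```python
import pandas as pd

# Hypothetical monthly panel: one row per provider per month.
# Column names and values are made up for illustration only.
panel = pd.DataFrame({
    "provider_id": [1, 1, 2, 2],
    "month":       ["2008-10", "2008-11", "2008-10", "2008-11"],
    "encounters":  [477, 480, 450, 460],
    "charges":     [90_000.0, 91_000.0, 85_000.0, 86_000.0],
    "collections": [64_000.0, 65_000.0, 60_000.0, 61_000.0],
})

# Proxies 1 and 2: monthly charges and collections at the
# per-provider, per-patient level.
panel["per_patient_charges"] = panel["charges"] / panel["encounters"]
panel["per_patient_collections"] = panel["collections"] / panel["encounters"]

# Proxy 3: charge-to-collection ratio. 1.0 would mean every dollar
# charged came back as a dollar collected; larger values mean more
# was charged relative to what was collected.
panel["charge_collection_ratio"] = panel["charges"] / panel["collections"]

print(panel["charge_collection_ratio"].round(3).tolist())
# → [1.406, 1.4, 1.417, 1.41]
```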
We found 57 providers across 32 practices that met all the criteria; that is, they were present in the financial data in 2008 and still present at the conclusion, October and November of 2013. So 57 providers across 32 practices were in our final data set here.
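That complete-case selection can be sketched the same way; the provider IDs and dates below are made up, not the actual TCP roster:

```python
import pandas as pd

# Hypothetical observation log: one row per provider-month actually
# present in the billing data. No imputation: we simply keep providers
# observed at both the start and the end of the study window.
obs = pd.DataFrame({
    "provider_id": [1, 1, 2, 3, 3],
    "month": pd.to_datetime(
        ["2008-10-01", "2013-11-01", "2010-05-01", "2008-10-01", "2013-11-01"]
    ),
})

start, end = obs["month"].min(), obs["month"].max()
at_start = set(obs.loc[obs["month"] == start, "provider_id"])
at_end   = set(obs.loc[obs["month"] == end, "provider_id"])
keep = at_start & at_end        # providers present at both endpoints

final = obs[obs["provider_id"].isin(keep)]
print(sorted(keep))             # → [1, 3]
```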
Here's the analysis we took. It's a two-level fixed effects model, where basically we're looking at practice-level, time-invariant variables and then physician-level variables. So this is sort of the mathematical formula we followed. A brief argument in favor of the model we chose: we're selecting fixed effects.
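The slide's formula isn't reproduced in the transcript; a generic physician-level fixed-effects specification consistent with this description, with the payer-mix and other covariates omitted for brevity, would look something like:

```latex
% Hypothetical sketch only -- the actual slide formula is not
% reproduced in the transcript.
y_{it} = \alpha_i + \beta_1\,\mathrm{EHR}_{it} + \beta_2\,\mathrm{Year}_t + \varepsilon_{it}
```

where y is one of the three outcomes for physician i in month t, the alpha term is the physician fixed effect absorbing time-invariant characteristics, and EHR is the dosage indicator.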
We're arguing that the errors correlate with the regressors, so we've got this sort of serial correlation, and that something within the physician, their practice, or their payer mix is biasing the predictor or outcome variables. We're also assuming these time-invariant characteristics are unique to the physician, and that each practice's error term and constant are not correlated with one another.
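To illustrate the within (fixed-effects) logic, here's a toy sketch in which provider intercepts differ but the EHR effect is the same. All numbers, including the $11 slope chosen to echo the scale of the results discussed later, are made up:

```python
import numpy as np

# Toy within ("fixed effects") estimator: demean outcome and regressor
# within each provider, then run OLS on the demeaned data. Provider
# intercepts (the time-invariant characteristics) drop out.
providers = np.array([1, 1, 1, 2, 2, 2])
ehr_live  = np.array([0, 0, 1, 0, 0, 1])      # EHR "dosage" indicator
# Per-patient charges: provider 2 starts $40 higher (different
# intercept), but both rise by exactly $11 once the EHR goes live.
charges = np.array([190.0, 190.0, 201.0, 230.0, 230.0, 241.0])

def demean_within(values, groups):
    """Subtract each group's mean from its own observations."""
    out = values.astype(float).copy()
    for g in np.unique(groups):
        mask = groups == g
        out[mask] -= out[mask].mean()
    return out

y = demean_within(charges, providers)
x = demean_within(ehr_live.astype(float), providers)
beta = (x @ y) / (x @ x)                      # within-estimator slope
print(round(beta, 2))                         # → 11.0
```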
And we actually ran a Hausman test to confirm this assumption. Next is some descriptive data, just giving a high-level sort of overview of what we were looking at. We broke this up; you can see I have four different categories of this public/private pay ratio.
What we're looking at here: we've got a sample of 26 physicians in this group. For these physicians, the lower the ratio, the higher the share of private pay patients they saw on average. So this group is seeing fewer than 10% public pay patients, which includes CHIP, Medicaid, and then this other government program that they have in Houston.
You can see mean encounters in 2008 started off at 476, and then basically we had a non-significant delta of negative 83; all sorts of things could be accounting for that. And mean per-patient charges in this group went from 198 to 232, also a non-significant change there.
And then as we move down the groups here, so by the time we get down here to this bottom group,
these are physicians, we've only got a sample size of eight in this group. These are the physicians at TCP
who are seeing a significant majority of public pay patients.
So again, that's CHIP, here in Texas S-CHIP, Medicaid, and some other government payer options that we have here in the state of Texas. And you can see their encounters are for the most part relatively flat, but mean monthly charges are actually increasing, from 141 in October of 2008 all the way up to 223.
Collections jumped significantly, from $65 per patient to $160. And then finally, this group's ratio is very, very high, which means, again, the higher this number, the more charges went out relative to the collections that came back. So as this number approaches one, we're arguing that's actually an improvement in charge capture.
And we did find significant differences here again just at the descriptive level across these four groups of
payer mix. Now again, just for some more descriptive data, here are mean encounters broken up by
these four different categories of public/private payer ratio. Not really any significant changes, but
again, we're not attempting to look at productivity.
But these are just the encounters we see over time. It might be interesting to some of you on the call to see that, in fact, we did not see any real significant dips, per se, or spikes in the encounters as a result of the EHR implementation; again, our dosage is right around here, October, November of 2010.
Again, this is outside the scope of this paper, but there are a number of manuscripts in the literature that talk about what happens to mean encounters: does productivity fall, does it increase? Overall here we can see no significant changes. Maybe around that time you could argue that this lower public/private group is changing, but no significant changes to worry about.
Here are our mean per-patient charges. Now here's where we start to see some interesting action happen. Around our dosage time, around these three months here, we see a real nice bump for these two lower groups; again, these were the physicians that mostly see public pay patients.
And then the group with a slightly lower percentage of public pay patients, this sort of mixed group. Right around the dosage time we see some interesting stuff going on. If anything, we're seeing across the organization a neatening, a tightening of the data, though we've also maybe got to account for some secular trend here and some seasonality in the data.
But ultimately we might be seeing some increases right around the time of the EHR implementation. Moving on to patient collections: these are, again, mean per-patient figures, normalized per physician, collections divided by the number of encounters they saw that month. Mean per-patient collections, again right around the time of the dosage here, we're seeing a jump for these lower two groups.
And then you could actually argue a fall for everyone, a brief dip in the data. And then overall once it
picks back up after that there's much less variance across the four groups of payer mix. And then finally
here is that charge-to-collection ratio we created. Again, the most significant change around the time of the dosage is for this group of eight physicians who mostly see public pay patients: significant decreases. So again, this is an improvement, the charge-to-collection ratio falling significantly, almost approaching 1.25 here. And then for the other groups, maybe not much change at all, but
it's kind of difficult to tell just looking at this descriptive level. And here are our fixed-effects model results. Again, this is looking at per-patient charges, sort of a nested model where we just added variables; by the time we get to the very last model, all of our variables are added. What we're looking at right here is a significant change, implying that the EHR dosage is associated with an $11.09 increase in per-patient charges, all else held equal.
Now, here's where it gets very interesting, as we transition from patient charges to collections. But let's briefly go through these other coefficients first. Looking across the payer-mix groups, we do see that the group seeing mostly public pay increased the most, sort of a $60 improvement compared to the lowest group; they're basically getting better. And then we see the secular trend, the year: as expected, there's an increase in charges looking at year alone. Here's our constant. The number of practices stayed the same because we had perfectly balanced data, and here's our r-squared, implying the model explains a good share of the variance.
Let's switch now to per-patient collections. Again, just looking at model three here, this is telling us that, all else held equal, we have an $11.49 increase in collections as a result of the EHR dosage. Again, this is positive from the organization's perspective: they're now collecting roughly $11.50 more per patient on average as a result of the EHR.
Our ratios are also implying, as we would have expected given the descriptive data, that the more public pay patients you see, the greater the benefit, the bigger the improvement. And then, finally, a similar secular trend is accounting for changes in payer reimbursement and inflation. Also significant results; our r-squared falls to 6%, but it's still significant, showing our model is picking up
on some important things. And then finally, let's look at that charge-to-collection ratio. Again, a significant difference, and again this is a negative number, which is telling us what we expected: as the charge-to-collection ratio approaches 1, we are improving our charge capture. We are seeing an improvement there across all the physicians and practices. And then, as we would expect, some secular trend and differences among the four payer-mix groups. A smaller r-squared, but still
significant. So again, just moving these tables to narrative: the study suggests that the introduction of the EHR to the pediatric care network was independently associated with an $11.09 increase in average per-patient charges.
An $11.49 increase in per patient collections. And an overall improvement in the physicians' charge to collection ratio. Despite varying starting points, and again, the fixed effects model allows us to have different intercepts with the same slopes, EHRs benefit all physician types. And it appears that the EHR acted as a leveling mechanism, or as a tightening mechanism, across the organization, creating greater parity, or decreasing the variance, across the four groups that we were discussing earlier.
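For readers who want to see the shape of the fixed-effects model just described, here is a minimal sketch on simulated data. The variable names (`dosage`, `year`, `physician`), the simulated effect sizes, and the dummy-variable implementation via statsmodels are all assumptions for illustration, not the study's actual code or data:

```python
# Sketch of a fixed-effects model: each physician gets their own
# intercept via C(physician), while the EHR "dosage" and the calendar-
# year secular trend share common slopes. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for p in range(20):  # 20 physicians
    baseline = rng.normal(100, 10)       # physician-specific intercept
    for year in range(2008, 2014):       # balanced panel, 6 years
        dosage = max(0, year - 2010)     # years of EHR exposure
        charges = (baseline
                   + 11.0 * dosage       # simulated EHR effect
                   + 2.0 * (year - 2008) # simulated secular trend
                   + rng.normal(0, 1))
        rows.append({"physician": p, "year": year,
                     "dosage": dosage, "charges": charges})
df = pd.DataFrame(rows)

# Fixed effects as physician dummies: different intercepts, same slopes.
model = smf.ols("charges ~ dosage + year + C(physician)", data=df).fit()
print(round(model.params["dosage"], 2))  # close to the simulated effect of 11
```

The point of the dummy-variable setup is exactly the "different intercepts, same slopes" structure mentioned above: each physician starts from a different baseline, but the estimated dosage and trend effects are common to everyone.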
So that's sort of the end of the discussion that I've got today. I do want to open it up to those of you on the phone who might have some insight as well, but here are some looming questions we generated that we can't really answer with the current data set, and where we would like to look to other CHOT organizations to help answer them.
Are these EHRs enabling providers to deliver higher quality in-office care, and is that why we're seeing the $11 increase? Or are they merely improving the providers' charting processes, and is that what's driving the $11 increase in charges? What lends support to the argument we are attempting to make is that, because collections improved more than charges did, we are seeing some of this as simply an improvement in charting.
The question is how much of that is just a result of improved charting, versus these physicians perhaps now administering different types of care that they weren't before, given the EHR's ability to prompt them for care and warn them about different issues based on the patient's background and history. And that brings us to the third point.
If the EHRs really are just improving charting and not producing higher quality care, is the $11 difference a fair price for the downstream benefit? We know EHRs are gonna improve quality of care. We know that they're improving transitions of care to and from a hospital, let's say.
Transitions from pediatric care to adult care, right? This patient now has this very nice, neat, bundled medical history that they can take with them. What argument do we wanna make? What's the case we wanna make for this $11 change in charges and this $11.50 change in collections?
And then finally this was in a fee for service model. How different might these results look in a capitated
environment? Briefly just some limitations before we open it up to any questions or comments. Given
the nature of the data set, we have very few variables to hold constant, so we're increasing the
likelihood of omitted variable bias.
We acknowledge that. And then this sort of proxy regressor, calendar year, that I told you we set up to account for the secular trend: it was at the 12-month level, which we know isn't maybe as granular as it should be, given that there might have been other small changes in the Houston market or in reimbursement.
And then finally, as we acknowledged at the beginning, this is from a single CHOT organization. We're always interested in growing our data set and running our same models for different organizations. But because this is just looking at Texas Children's Pediatrics, we can't necessarily generalize some of these findings to other networks, or especially to other non-pediatric health care organizations.
That's the end of my talk. I've been talking now for 40 minutes; I need a break. But I'd like to open it up to the group. Beata Cash, who is also an investigator on this project, is here. Beata, feel free to jump in and mention anything I've left out, and I'm sure I've left some stuff out.
But that said, any questions, comments? Concerns? I'll open it up to the group. Sherry, if you could take
everyone off mute.
>> All right, thank you Nick for the presentation. Rose has a question. And the question is, did they implement the Epic billing system at the same time?
>> I do not believe so. They were still running with this other practice management system. I do not have the answer to that question. Beata, are you aware if they implemented the Epic billing at the same time?
>> I would have to ask them, but I'm under the impression that they did implement the whole package.
But that's a great question.
>> Rose, do you care to elaborate? How might that implicate some of our findings? I'm just curious.
>> Well, yeah, so I think, can you hear me?
>> Yes, we sure can.
>> Okay, great. So, I think it's hard to isolate the impact on collections from just the EHR if you're also making changes in the billing system.
Because, while I actually agree that you probably are capturing more charges, it's hard to completely determine whether the increase in actual collections is due to the charge increase, or also due to performance improvements from implementing Epic's billing system. So if that was held constant, then it might be easier to say that the increase in collections was due to better documentation or better overall processes.
So it's just something that I would be interested in knowing.
>> No. That's very helpful and that's probably just a simple question we can ask them and find out.
>> Yeah that'd be great.
>> All right. Does anybody have any other questions?
>> It might be. Can you hear me?
This is Beata. To follow up on that question, it might be helpful to share the feedback we got from presenting these results to the Texas Children's Pediatrics CEO.
>> So Rose, effectively we were presenting some of these results internally to members of TCP, and the results exhibited a high level of face validity to them.
They were sort of saying, well, you know, we did expect the increased charges. We weren't sure about the collections, but ultimately this matches what we were hoping to see. It does sort of explain some of the movement in what we were achieving. So we were excited to find out that the organization was not expecting otherwise.
They weren't sure of the actual dollar amount, but overall this did mesh with what they were seeing at sort of an aggregate level. Beata, did you want to add to that?
>> That's perfect. And I believe the way they were explaining it was that doing a better job of actually capturing and charging for services rendered was their number one explanation for it.
The question of whether this happened at the same time as the billing part of the Epic implementation is, I think, an excellent one that we need to address.
>> So I would say, and we are not live with Epic clinicals yet here at Partners, so we are in the process right now; we are live with one of our community hospitals with revenue cycle only.
We're going live with Massachusetts General Hospital in about 20 days with revenue cycle only, and then in about ten months or so we'll be live with our first implementation that will include clinicals. But we've been spending a lot of time understanding the charging structures within Epic, and I can absolutely see how the EHR will increase our ability to capture more accurately the services that are being rendered.
Because the challenge in our environment today is that clinical documentation is done in the EHR, but
then charging is done separately. And so what we often find is that key things that are done are just
never captured. They're either not on the form or they're not on the tool that they use or some of them
forget.
It might be something that the medical assistant or the nurse did, but the doctor never documented on
the encounter form. And so having all of that really coordinated in one environment where the
physician does his or her clinical documentation, does his or her care plan, all of the orders that need to
be done, the diagnosis codes are all there, ready to be used linked from the problem list.
And then ultimately their charging is just an output, quite frankly, of their clinical documentation, I could
see how it's so much more seamless to capture all of the services that are being rendered. It makes
complete sense to me that this is your outcome.
>> Right, well that's very helpful. And yes, to speak exactly to that, getting more granular, what the President of Texas Children's Pediatrics actually mentioned to us, she gave the example of, and I forget which vaccination it was, but let's just say it's MMR.
There are actually, let's say, five different types of MMR based on maybe a patient's allergies, or history, or any number of issues. And meanwhile, the physicians historically would just say, yes, you know, it was the fall, it was before school, they got MMR. The difference, though, between which of those five they would've received, and she was arguing the physicians were still giving the correct vaccinations all along, but they didn't necessarily know the coding differences between the five different types of vaccine.
>> Exactly, exactly.
>> So as a result, the EHR comes in and clarifies and specifies, and then you're exactly right. You were capturing it before, but it's again the granularity with which you can send charges. And as a result, I think she mentioned that for one of those, the difference was a $30 versus a $4 reimbursement.
>> Right, right. Did they increase prices over those years? Cuz I think just using charges, it's difficult, because typically most groups and hospitals have price increases year to year.
>> Right, no, and so we do not have that level of data in their financial dataset.
Now we have the option of looking at that, but again, we were just normalizing that based on overall encounters, and we don't even know what percentage of those encounters were private versus public. The variable that would've picked up price changes would have, again, been that very aggregated, ugly year secular trend, and we're sort of acknowledging that there were changes along the way.
We're acknowledging that's a weakness. But we did have a variable that was able to absorb where we would expect to see price fluctuations and changes, and holding that variable constant, we were still able to find this sort of EHR dosage effect.
>> Got it.
>> The $11 increase.
>> Great, that's helpful. And then the last question I have is, I've had this conversation with counterparts across the country who have implemented either Epic or other EHRs. One thing that they measure often is the relative value unit, the RVUs; they use that as a metric instead of charges.
So did the overall RVUs go up even though the number of visits has potentially gone down? Some of my colleagues have said that, in their experience, physicians may have to reduce some of the visits that they have and actually have fewer encounters, although clearly your data doesn't demonstrate that.
But that they do see a more intense service being rendered or captured in the billing data. So did you
consider using RVUs as a way to monitor the increase in intensity?
>> You're doing as good a job as my committee did in pointing out all of the shortcomings of our data set. Yes, so the majority of that...
>> And that isn't my intent, by the way. I think that the data is terrific, and it speaks to one reason for continuing to do this: being able to say that there'll be a positive impact on the billing side is one of the things that we actually try to leverage a lot.
So I hope you don't take my comments as a negative.
>> No, no. Please, we love it, this is exactly right. I'm just saying, you would also make for a brilliant health services researcher. But no, we did wanna look at RVUs; we simply did not have any of that clinical data.
We're still not sure we could get it. Off the top of my head, I don't know if that was just an IRB issue, or whether TCP's ability to capture that also goes back to 2008.
>> Right.
>> Because we ran into a couple of issues where they sent us some variables that they only started tracking, let's say, in 2010, and it made our balanced panel data very unbalanced. But I would encourage you to look at Fleming's data out of the Baylor implementation; they did look at RVUs. And again, they were the people that found a brief negative effect.
They found exactly what you are saying: a brief decrease in overall encounters. However, the mean RVUs increased over that time, which is exactly what you were implying. It's reminding physicians of all of these different things and prompting them, which we could argue is maybe a proxy for improved quality. But ultimately, Fleming's finding was that EHRs did have a negative short-term impact on productivity.
>> Got it. Yep, terrific. Great study.
>> Well, thank you so much, and we really appreciate all these questions. And by all means, if you ever have any additional questions, feel free to reach out to us individually, cuz we'd love to find out more about the data. Obviously, you guys at Partners are this amalgamation of all these different organizations at different stages of implementation, and that would be a fascinating case study to look at.
>> Yeah, I'd actually love to repeat this process for Partners once we go live on the clinical system, because we will have a multi-year rollout of this, so we can do almost a multi-year study around it. For the different organizations, we could do academic medicine versus community medicine, and we could do ambulatory versus other types of charging: inpatient, surgical charging, et cetera.
So yeah, I mean this is really fabulous. This is something that I'd like to make sure we measure when we go live with Epic next May. So this is terrific, it's been a great webinar.
>> All right, well thank you so much, Rose, Sherry. Thanks for hosting us. Are there any final closing comments?
>> Yeah, I think Rose, if you don't mind, I would like to follow up on your idea with the IAB, our Industrial
Advisory Board, and make sure that becomes maybe an agenda item in our fall meeting for a potential
collaborative project across universities and sites.
>> Yeah, we would love that.
>> And it would be very good to include inpatient, and to be able to differentiate what happens with EMR implementation on the ambulatory side versus the inpatient side. Because, from what I hear from colleagues at UT who are doing similar studies on the inpatient side, it looks like the effect sizes are about the same, but in the opposite direction.
>> Interesting.
>> So it would be good to know what's really happening in the two different settings, and also to make it a collaborative project.
>> Yeah, I would be very interested in thinking through how we would measure the inpatient process, cuz it's very difficult to isolate charging changes in an inpatient setting.
Because you have intensity changes and service shifts, and more surgery, less surgery, more medicine, less medicine, and all of that really has an impact. And, quite frankly, it's why we're gonna have difficulty measuring the impact of ICD-10, because you can't just isolate the transition to a new coding approach when you also have lots of other things going on around that.
And so it'd be really interesting to have a group of people think through, what would we measure? What
would we hold constant to be able to measure the impact of the EHR? Because, quite frankly, there's a
lot of things that change when you move to an EHR. I mean we're hoping that tying the administrative
and clinical workflows together will actually reduce a lot of the denials that we have today that really
aren't due to anything but disconnected processes.
So medical necessity is a perfect example, where today you don't have the charging process and the ordering processes connected, and you also don't have the ability to launch any decision support tools at the time of ordering without a comprehensive EHR. And so it's just, there's a lot that goes into it.
It's a lot more complicated when you start thinking about the inpatient world and the ancillary world and the surgical world, but I'd be interested in having further conversations with you around how we would think about that. I would love to study our experience. By March of 2016, we'll have our two largest academic medical centers live on revenue cycle and clinicals, and we'll have probably close to 3,000 providers live on Epic ambulatory at the same time, and so we'll certainly have enough data to be able to monitor what's going on.
And we have a lot of good baseline data, including RVUs and other things, that we can really use to assess the impact of the Epic implementation, so I'd love to have further conversations with you.
>> Great, great. Well, Rose, thank you so much.
>> Rose, I hope you can join us at the Fall meeting, then I'll put you in charge of that breakout session.
>> Yes, can you do me a favor? I don't know anything about your organization. Jim Noga, who's our CIO,
sent me a link to this webinar and said hey, you might be interested, and clearly, I was, and so if you
could just send me an email on any of the details of the organization and the meeting in September,
that'd be great.
>> Awesome, great, we will do that. We've got your email address here via WebX, so we will send you
that.
>> Sounds great.
>> Yeah, so it's in Waltham, in your own backyard.
>> Oh, even better.
>> Love to see you there.
>> That sounds great, nice talking with both of you.
>> Thanks so much.
>> Alright, take care, bye bye.
>> Bye, bye.
>> Bye, Rose.