Project Diamond Systemic Underfunding – Analysis

Further Analysis of Systemic Underfunding for Project Diamond Trusts – Misallocation of Reference Costs
Executive Summary

This paper seeks to extend the analysis in the report “Project Diamond – Analysis of the PBR tariff and underfunding of complex activity”, produced on behalf of the Project Diamond trusts by Ernst and Young (EY). The report examined whether reference cost misallocation was leading to underfunding of more specialised trusts.

The EY report postulated that trusts’ costing exercises were unable to fully take into account the acuity of care between different spells, so that some of the additional care costs associated with more complex activity end up being allocated to simpler HRGs. The report goes on to
suggest that the national tariff, which is based on the national average reference costs,
therefore over-funds simple activity and under-funds more complex activity. However, at less specialised district general hospitals (DGHs) less complex activity occurs, so there are fewer additional care costs to be absorbed by the simple activity. This leads to lower
reported reference costs at DGHs for simple activity, which ‘drags down’ the national average
reference costs, and hence the national tariff, for this activity. The report suggests that for trusts
providing specialised services this ‘drag down effect’ leads to systematic underfunding.

The EY report tries to estimate the extent of the underfunding by comparing trusts’ reference
costs to the national average. This paper extends the analysis by estimating the level of
underfunding for a wider group of trusts and using a wider set of definitions of simple and
complex activity to see if the results remain robust to these changes.

We begin by testing for reference cost misallocation. This is important to establish because further examination of EY’s method for calculating underfunding shows that it is unable to fully separate out underfunding due to reference cost misallocation from other factors, such as
differences in efficiency. We find some limited evidence to support reference cost misallocation,
but are unable to determine the extent of the problem and whether, at a national level, it would
result in the situation as described by the EY report. Consequently, although we find some
positive estimates of underfunding in our subsequent analysis, it is difficult to determine what
the cause of this is and whether it is in fact related to reference cost misallocation.

Our estimates found that, for all specialist trusts on average, the size of the ‘drag down’ effect ranged from 2% to 11% depending on the definition of complex activity used. This was much lower than some of the results for individual trusts found in the original report, and to reconcile this we also computed the results based on our method for one of the trusts examined in the original report, Guy’s and St Thomas’ NHS Foundation Trust (GSTT).

This produced some estimates of underfunding that were more in line with those in the original
Project Diamond report. However, we also found that the estimates at the individual trust level
displayed much more variation and were not necessarily consistent with our previous estimates
for all specialised providers. This led us to consider further extensions to the model that could
explain these results.

We discovered that the simple model and method for estimating the underfunding presented in the EY report was unlikely to fully capture the complex reality of the situation. Adding further considerations to the model quickly became very complex and called into question the validity of the estimates. On balance, it was felt that EY’s method for calculating underfunding is only
appropriate when looking at the national picture, as here any variations for individual trusts
would tend to average out. For individual trusts this method is unlikely to produce robust
estimates, as the data is too variable and was found to deviate quite significantly from the
simple model.

Overall, although we find some positive evidence of underfunding, caution must be taken in the
interpretation of these estimates. It is not entirely clear if the calculated levels of underfunding
do in fact represent the effect of reference cost misallocation. Additionally, it is important to
note that other mechanisms within the tariff exist to take into account the increasing costs of
complex activity, such as specialist top ups and long stay payments. The value of these payments
must be balanced against the calculated level of underfunding. We estimate that in 2013/14 the
total value of specialist top ups paid to the specialist trusts examined in this paper will equate to
approximately £200m, or 4% of their total admitted patient care income from PbR.
Introduction and Background
1. Project Diamond is a collection of London-based specialist and teaching hospitals. The group
commissioned Ernst and Young to examine whether there are systematic features of the
national tariff that lead to underfunding of complex and specialist care. The report “Project
Diamond – Analysis of the PBR tariff and underfunding of complex activity” examined, among
other things, the role of reference cost misallocation on the potential underfunding of specialist
trusts.
2. The report argued that the “true cost” of care is often different to that reported in reference
costs, with the simplest, low cost activity tending to be over-costed, whilst the most complex,
high cost activity is under-costed. This reference cost misallocation is argued to occur at both
DGHs and specialist trusts, but as DGHs do less complex, high cost work, their low cost work
does not suffer the same extent of over-costing, resulting in lower reported reference costs.
The effect of this is that the national average costs (on which the tariff is based) will be
‘dragged down’ by the DGH reference costs at low complexity, creating a break in the national
average costs and leading to underfunding for specialists’ non-specialised activity.
[Figure: theoretical cost curves – complex work is under-costed, simple work is over-costed, and the national average is dragged down by DGHs]
3. The report calculated the value of this underfunding based on six scenarios using different
definitions of complex activity and specialist average cost data for four different trusts. From
this data, the report finds a “drag down” effect of 27.1% equivalent to £31,864 for Guy’s and St
Thomas’ NHS Foundation Trust (GSTT) using the first scenario and estimates £54.3m of
underfunding across the four trusts.
4. This paper extends this analysis in two further areas:
• As the analysis in the report was limited to a small and specific group of trusts, we wish to examine whether the results remain consistent if widened to a national level.
• As there are many possible ways to define specialist/complex activity, we wish to examine what the most appropriate definition would be and see how robust the results are to changes in this definition.
Initial Analysis
5. We wanted to see how closely the real reference cost data resembled the theoretical model
presented in the Project Diamond report. As it is difficult to measure complexity, we have used
cost as a proxy for complexity. This is consistent with the Project Diamond model, which
hypothesises a positive relationship between cost and complexity.
6. We identified the specialist trusts as the single speciality trusts, teaching hospitals, specialist
children’s hospitals and specialist orthopaedic hospitals, whilst DGHs were identified as the
remaining acute NHS trusts in England1. After adjusting for the Market Forces Factor (MFF), we
calculated from the 10/11 reference cost data the national average cost for each HRG and an
average cost for DGHs and for the specialist trusts.
7. Figure 1 below shows the national average, specialist average and DGH average costs for the
HRGs combined across all settings2 and ranked in ascending order of national average cost (as a
proxy for complexity). Trend lines (listed as poly in the key) were added to more clearly show
the relationship between the national average and the specialist and DGH average costs.
8. This shows that the national average cost trend is closer to the DGH cost trend at lower
cost/complexity and that it is closer to the specialist cost trend at higher cost/complexity. It
also shows that generally, the specialist average costs are higher than the national average and
the DGH average costs. This seems to support the theoretical model in the Project Diamond
report.
9. However, one limitation of this approach is that it is not possible to verify whether or not there
is a break in the National Average Costs, as is predicted by the Project Diamond model. This is
because, given the large volume of raw data, we have based our conclusions on the trend lines
added to the graph, which, by their nature, are continuous lines. While the lack of a break in
the National Average Cost Curve does not necessarily rule out the proposed model, it does
potentially complicate the analysis as explored later in this paper.
1. A list of trusts included in each definition can be found in the annex.
2. Calculated by first finding the average costs for each of the settings separately and then taking a weighted average across all settings, weighted by activity.
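As an illustration of this averaging, the sketch below shows one way the three sets of per-HRG average costs could be computed, following the weighting described in footnote 2. It assumes a reference cost extract with one row per trust, HRG and setting; the column names (trust_code, hrg, setting, unit_cost, activity, mff) are hypothetical rather than the actual schema of the 10/11 reference cost files.

```python
import pandas as pd

def group_average_costs(df: pd.DataFrame, specialist_codes: set) -> pd.DataFrame:
    """MFF-adjusted average cost per HRG for the national, specialist and
    DGH groups: averaged within each setting first, then weighted across
    settings by activity (the approach described in footnote 2)."""
    df = df.copy()
    df["adj_cost"] = df["unit_cost"] / df["mff"]  # strip unavoidable local cost differences

    def weighted_avg(group: pd.DataFrame) -> float:
        by_setting = group.groupby("setting").apply(
            lambda g: pd.Series({
                "avg_cost": (g["adj_cost"] * g["activity"]).sum() / g["activity"].sum(),
                "activity": g["activity"].sum(),
            })
        )
        return (by_setting["avg_cost"] * by_setting["activity"]).sum() / by_setting["activity"].sum()

    is_spec = df["trust_code"].isin(specialist_codes)
    national = df.groupby("hrg").apply(weighted_avg).rename("national_avg")
    specialist = df[is_spec].groupby("hrg").apply(weighted_avg).rename("specialist_avg")
    dgh = df[~is_spec].groupby("hrg").apply(weighted_avg).rename("dgh_avg")
    # Rank by national average cost, the paper's proxy for complexity.
    return pd.concat([national, specialist, dgh], axis=1).sort_values("national_avg")
```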
Figure 1
Evidence for reference cost misallocation
10. The identification of the “drag-down” effect as a gap in funding relies on the existence of the
pivoted “true cost” curve. The EY report suggests that the true cost curve is steeper and
significantly different to the reference costs due to the way reference costs are allocated which
means simple HRGs absorb costs driven by more complex ones. The result is that complex work
is under-funded and simple work over-funded. As specialists do more of the complex work, they
have more costs to misallocate and this is the reason for the systemic underfunding of
specialist trusts.
Figure 2
11. However, as the true cost curve cannot be plotted, we are unable to examine its shape
compared to the other cost curves as in figure 1. Instead, we look directly for evidence of
reference cost misallocation and hence a pivoted cost curve by extending EY’s method as
described below.
12. The evidence for the pivoted cost curve in the Project Diamond report is based on the reference costs of three trusts and the “homogeneity” of costs per bed day shown in the three examples of specialities, reproduced in figure 3 below. The Project Diamond report claims that this is likely to be common across many trusts, as it is consistent with national costing guidance. The report does, however, point out that the effect is smaller for trusts using patient-level information and costing systems (PLICS).
Figure 3
Source: Project Diamond report
13. We undertook some further analysis to see if these findings are widespread by examining the
costs per bed day across specialities for a larger number of different trusts, looking particularly
at those who have not used PLICS (and have no plans to introduce such a system) and those
who have had a PLICS system in place for some time.
14. From our analysis it seems that homogeneous costs per bed day across specialities are quite
common for many trusts that do not use PLICS, especially in the non-elective settings. This
finding appears to be less common for those organisations that have used PLICS for some time,
although it is difficult to determine what the ‘correct’ level of variation in costs per bed day
ought to be, and whether these organisations display enough variation to conclude that their
cost curves are not pivoted.
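As one concrete way to run this kind of check, the sketch below computes the coefficient of variation of cost per bed day across specialties for each trust. The paper does not commit to a particular dispersion measure, so the CV is our illustrative choice, and the table layout (trust_code, specialty, total_cost, bed_days) is assumed.

```python
import pandas as pd

def bed_day_cost_variation(df: pd.DataFrame) -> pd.Series:
    """Coefficient of variation of cost per bed day across specialties,
    one value per trust. Values near zero indicate the near-homogeneous
    costing pattern that the Project Diamond report associates with
    misallocation; PLICS trusts would be expected to show larger values."""
    per_specialty = (
        df.groupby(["trust_code", "specialty"])
          .agg(total_cost=("total_cost", "sum"), bed_days=("bed_days", "sum"))
    )
    per_specialty["cost_per_bed_day"] = per_specialty["total_cost"] / per_specialty["bed_days"]
    by_trust = per_specialty.groupby("trust_code")["cost_per_bed_day"]
    return by_trust.std() / by_trust.mean()
```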
15. For example, looking at the costs per bed day in non-elective long stay spells for Salford Royal
NHS Foundation Trust (RM3), which has used PLICS since October 2007, the costs (as shown in
figure 4) do display some variation within each speciality, although they remain relatively close.
Whilst this is obviously an improvement on the perfectly homogeneous costs within specialities,
we do not know how well this reflects the “true cost” because we are unable to measure this.
Whilst PLICS allows more accurate cost allocation, costs are still allocated by cost drivers, as it
would be impossible to actually document the real costs patients incur. Therefore, we do not
know if these costs would still result in a pivoted cost curve as shown in figure 2.
Figure 4
16. Additional problems occur when we try to combine the individual trust results to gain a sense of the national picture. For example, if not all trusts suffer from reference cost misallocation, it is not clear how to determine whether reference cost misallocation is a problem at the national level.
17. Furthermore, if all trusts suffer misallocation to different extents, there is the possibility that the reference cost curves for DGHs and specialist trusts are not pivoted by the same amount. It is not clear what the implications of this would be and whether the EY model would still predict underfunding in this situation. As the majority of the trusts we have classified as specialist use PLICS, this could suggest that there is less of a pivot, or no pivoted true cost curve at all, for specialist trusts.
18. Without knowing the shape of the true cost curve, it is difficult to know whether the estimated underfunding found by the “drag down” effect is actually due to the problem of cost misallocation outlined in the Project Diamond report or due to other factors, such as differences in efficiency.
19. For example, if we imagine that there is no reference cost misallocation and the true cost
curves were to lie exactly on the average reference costs curve for all trusts, then so long as
DGH average costs were below the specialist average costs, the national average cost curve
may still be exactly as described above. There would still be a measurable gap in costs but this
could no longer be attributed to reference cost misallocation.
20. In the actual data (as shown in Figure 1), all we can say is that the specialist average cost curve lies above the DGH cost curve, and that the national average costs are closer to the DGH costs for simple activity and closer to the specialist average costs for complex activity. Without knowing the true cost curve we do not know whether this is due to reference cost misallocation or some other reason. Consequently, it is very difficult to know exactly what is being measured when the “drag down” effect is calculated, and it is important to bear this in mind when interpreting the results of the analysis below.
Replicating the Analysis of the “Drag-Down” Effect
21. We aimed to follow the framework set out in the Project Diamond report, although a few
changes to the methodology were made. Starting with the 10/11 reference cost data, we first
cleaned the dataset by removing data that appeared unlikely (for example, where activity was
reported as a day case when the HRG specifies a length of stay of two or more days). We also
adjusted costs by the MFF before doing any calculations to make cost comparisons more
consistent between trusts.
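A minimal sketch of this cleaning step, assuming hypothetical column names and a lookup of the minimum length of stay implied by each HRG’s definition (min_los_by_hrg), which is not part of the raw data:

```python
import pandas as pd

def clean_reference_costs(df: pd.DataFrame, min_los_by_hrg: dict) -> pd.DataFrame:
    """Drop implausible rows and put costs on a comparable basis."""
    df = df.copy()
    # Rule from paragraph 21: day case activity recorded against an HRG
    # whose definition implies a stay of two or more days looks like an error.
    min_los = df["hrg"].map(min_los_by_hrg).fillna(0)
    df = df[~((df["setting"] == "daycase") & (min_los >= 2))]
    # Adjust by the Market Forces Factor before any averaging, so that
    # comparisons between trusts are not driven by unavoidable local costs.
    df["adj_cost"] = df["unit_cost"] / df["mff"]
    return df
```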
22. Although the Project Diamond report calculated the “drag down” effect for one trust at a time,
and often using the trust’s own costs, we have calculated an average for all specialist trusts, as
we are interested in national results. As in the framework, we excluded non-PBR activity from
the analysis.
23. It was not clear from the report which treatment settings were included in the calculation. We
chose to include all admitted patient care settings (daycase, elective and non-elective) in our
analysis, although specialist activity was defined at the HRG level only (we required the
definition to be met in all settings for the HRG to be included3).
24. As in the Project Diamond report, we calculated the size of the drag down effect by comparing the average percentage difference between the specialist average costs and the national average costs, calculated over the simple and the complex HRGs (i.e. non-specialist and specialist activity respectively). A drag down effect occurs when there is a larger gap between the specialist and national average costs for non-specialist activity than for specialist activity. A more detailed explanation of this methodology is given in the ‘Detailed Examination of the method’ section below.
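In symbols, one reading of this calculation (the notation is ours, not the report’s) is as follows, writing $S_h$ and $N_h$ for the specialist and national average costs of HRG $h$:

```latex
% Percentage drag down effect: the average proportional gap on simple
% (non-specialist) HRGs minus the average gap on complex (specialist) HRGs.
\[
D \;=\; \frac{1}{|\mathcal{H}_{\mathrm{simple}}|}\sum_{h \in \mathcal{H}_{\mathrm{simple}}} \frac{S_h - N_h}{N_h}
\;-\;
\frac{1}{|\mathcal{H}_{\mathrm{complex}}|}\sum_{h \in \mathcal{H}_{\mathrm{complex}}} \frac{S_h - N_h}{N_h}
\]
% Monetary value: D applied to the non-specialist cost base, with A_h the
% specialist activity in HRG h (cost base as defined in paragraph 35).
\[
\text{Value} \;=\; D \times \sum_{h \in \mathcal{H}_{\mathrm{simple}}} A_h \, S_h
\]
```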
Definitions of Specialist Activity
25. The following 6 scenarios were used in the Project Diamond report. As noted above, while the Project Diamond report also calculated the drag down effect using trusts’ own reference costs compared to the national average, we use the average costs for all specialists (equivalent to scenarios 4-6).
3. Except for definitions 6 and 7, as this would have resulted in no HRGs being included. For these definitions, HRGs were included if the criterion was met in at least one setting instead. See paragraph 14.
[Table of scenarios 1-6 omitted. Source: Project Diamond report]
26. Initial analysis of the data did not suggest that one particular definition of specialist activity
would be most appropriate. Consequently, we repeated these calculations using a number of
alternative definitions for specialist activity to check the robustness of the analysis against
different definitions of complex/specialist activity.
27. Firstly, we aimed to replicate the definitions used in scenarios 4 and 5 in the Project Diamond report, but as we also wished to see the wider picture, we used various random samples of any five DGHs rather than only those based in London4. A sketch implementing this family of definitions follows the tables below.
Definitions 8-14 aimed to replicate scenario 4 in the Project Diamond report.
Specialist HRG Definition
8: HRGs where (Random Sample 1) 5 DGHs have 1 unit or less of activity each
9: HRGs where (Random Sample 2) 5 DGHs have 1 unit or less of activity each
10: HRGs where (Random Sample 3) 5 DGHs have 1 unit or less of activity each
11: HRGs where (Random Sample 4) 5 DGHs have 1 unit or less of activity each
12: HRGs where (Random Sample 5) 5 DGHs have 1 unit or less of activity each
13: HRGs appearing in all samples 1-5 where 5 DGHs have 1 unit or less of activity each
14: HRGs appearing in at least 3 of the samples 1-5 where 5 DGHs have 1 unit or less of activity each

Definitions 15-21 aimed to replicate scenario 5 in the Project Diamond report.
Specialist HRG Definition
15: HRGs where (Random Sample 1) 5 DGHs have 4 units or less of activity each
16: HRGs where (Random Sample 2) 5 DGHs have 4 units or less of activity each
17: HRGs where (Random Sample 3) 5 DGHs have 4 units or less of activity each
18: HRGs where (Random Sample 4) 5 DGHs have 4 units or less of activity each
19: HRGs where (Random Sample 5) 5 DGHs have 4 units or less of activity each
20: HRGs appearing in samples 1-5 where 5 DGHs have 4 units or less of activity each
21: HRGs appearing in at least 3 of the samples 1-5 where 5 DGHs have 4 units or less of activity each
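The sketch below illustrates how one of these sample-based definitions could be generated, assuming an activity table with one row per trust and HRG and a list of DGH codes. The sampling and thresholds follow the definitions above; the function and column names are hypothetical scaffolding.

```python
import random
import pandas as pd

def sample_based_definition(activity: pd.DataFrame, dgh_codes: list,
                            max_units: int = 1, n_dghs: int = 5,
                            seed: int = 0) -> set:
    """HRGs where each of a random sample of `n_dghs` DGHs reports
    `max_units` or fewer units of activity (definitions 8-12 use
    max_units=1, definitions 15-19 use max_units=4)."""
    rng = random.Random(seed)
    sampled = rng.sample(dgh_codes, n_dghs)
    sub = activity[activity["trust_code"].isin(sampled)]
    per_trust = sub.pivot_table(index="hrg", columns="trust_code",
                                values="activity", aggfunc="sum", fill_value=0)
    # Reindex so HRGs and trusts with no recorded activity count as zero units.
    per_trust = per_trust.reindex(index=activity["hrg"].unique(),
                                  columns=sampled, fill_value=0)
    return set(per_trust.index[(per_trust <= max_units).all(axis=1)])
```

Definitions 13-14 and 20-21 would then combine the sets produced by the five samples, keeping the HRGs that appear in all five, or in at least three, of them.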
28. The other definitions of specialist activity came from applying rules related to activity levels, providers and costs that we thought were in line with the general assumptions about specialist activity in the Project Diamond report. Definitions 1-5 applied different rules based around the idea that the most complex activity should be predominantly done by specialist trusts and/or by a maximum number of providers. The maximum number of providers was set at 30, as this is approximately the number of specialist providers in our definition.
4. A list of the DGHs used in each random sample can be found in the annex.
Specialist HRG Definition
1: HRGs with any number of providers with 50% or more of spells done by specialists
2: HRGs with any number of providers, but 50% or more of the providers are specialists
3: HRGs with 30 providers or less
4: HRGs with 30 providers or less and with 50% or more of spells done by specialists
5: HRGs with 30 providers or less and where 50% or more of the providers are specialists
29. The idea behind definitions 6 and 7 stemmed from the theoretical graphs in the Project Diamond report, which suggested that at the highest levels of complexity the tariff would be close to or the same as the specialist average costs, as non-specialists would not be doing these HRGs. We found all the HRGs for which, in any setting, the specialist average cost was equal to, or within 1% or 2% of, the national average cost. For definition 7 we eliminated those HRGs whose costs might be similar by chance, by removing HRGs with 50% or more of activity in all settings done by non-specialists. A sketch of this closeness test follows the table below.
Specialist HRG Definition
6: HRGs where Specialist Average Cost = National Average Cost
7: HRGs where Specialist Average Cost = National Average Cost, with HRGs with 50% or more of activity done by non-specialists in all settings removed
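A rough sketch of this closeness test, assuming the per-HRG group averages computed earlier and an overall specialist share of activity per HRG (a simplification of the per-setting condition in the text; all names are hypothetical):

```python
import pandas as pd

def closeness_definition(avg: pd.DataFrame, specialist_share: pd.Series,
                         tolerance: float = 0.02,
                         drop_likely_flukes: bool = False) -> set:
    """HRGs whose specialist average cost is within `tolerance` of the
    national average cost (definition 6); definition 7 additionally
    drops HRGs delivered mostly by non-specialists."""
    close = ((avg["specialist_avg"] - avg["national_avg"]).abs()
             <= tolerance * avg["national_avg"])
    hrgs = set(avg.index[close])
    if drop_likely_flukes:
        # Simplified version of the definition 7 filter: keep only HRGs
        # where specialists deliver the majority of activity overall.
        hrgs = {h for h in hrgs if specialist_share.get(h, 0.0) >= 0.5}
    return hrgs
```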
Results
30. The findings (shown in Table 1 below) varied by the different definitions of specialist activity
but were broadly consistent in showing a “drag down” effect of between 2% and 11%, the
average effect being 6.96%. The random sample definitions showed how the drag down effect
can vary when using different groups of DGHs and their activity as an indicator of specialist
HRGs.
31. However, these figures are much lower than the 27.1% estimated in the Project Diamond report for Guy’s and St Thomas’ NHS Foundation Trust.
32. To understand this better we repeated the analysis using the reference cost data for Guy’s and St Thomas’ NHS Foundation Trust (GSTT) instead of for all specialist trusts. This is equivalent to
scenarios 1 and 2 in the report. The results are shown in Table 2.
Table 1 – Drag Down Effect for All Specialists

| Def | Specialist HRG Definition | Number of Specialist HRGs Identified | Drag Down Effect | Value of Drag-Down Effect (All Specialists) |
|---|---|---|---|---|
| 1 | HRGs with any number of providers with 50% or more of spells done by specialists | 234 | 4.71% | £207m |
| 2 | HRGs with any number of providers, but where 50% or more of the providers are specialists | 98 | 10.89% | £546m |
| 3 | HRGs with 30 providers or less | 114 | 9.50% | £496m |
| 4 | HRGs with 30 providers or less and with 50% or more of spells done by specialists | 234 | 4.71% | £207m |
| 5 | HRGs with 30 providers or less and where 50% or more of the providers are specialists | 62 | 10.18% | £532m |
| 6 | HRGs where Specialist Average Cost = National Average Cost | 383 | 8.72% | £320m |
| 7 | HRGs where Specialist Average Cost = National Average Cost, with HRGs with 50% or more of activity done by non-specialists in all settings removed | 167 | 7.87% | £378m |
| 8 | HRGs where (Random Sample 1) 5 DGHs have 1 unit or less of activity each | 114 | 5.77% | £305m |
| 9 | HRGs where (Random Sample 2) 5 DGHs have 1 unit or less of activity each | 115 | 5.69% | £301m |
| 10 | HRGs where (Random Sample 3) 5 DGHs have 1 unit or less of activity each | 177 | 8.71% | £422m |
| 11 | HRGs where (Random Sample 4) 5 DGHs have 1 unit or less of activity each | 187 | 9.58% | £465m |
| 12 | HRGs where (Random Sample 5) 5 DGHs have 1 unit or less of activity each | 84 | 8.37% | £443m |
| 13 | HRGs appearing in samples 1-5 where 5 DGHs have 1 unit or less of activity each | 234 | 8.15% | £389m |
| 14 | HRGs appearing in at least 3 of the samples 1-5 where 5 DGHs have 1 unit or less of activity each | 117 | 8.52% | £450m |
| 15 | HRGs where (Random Sample 1) 5 DGHs have 4 units or less of activity each | 186 | 4.19% | £218m |
| 16 | HRGs where (Random Sample 2) 5 DGHs have 4 units or less of activity each | 298 | 3.27% | £165m |
| 17 | HRGs where (Random Sample 3) 5 DGHs have 4 units or less of activity each | 265 | 7.49% | £349m |
| 18 | HRGs where (Random Sample 4) 5 DGHs have 4 units or less of activity each | 284 | 7.00% | £329m |
| 19 | HRGs where (Random Sample 5) 5 DGHs have 4 units or less of activity each | 163 | 2.82% | £147m |
| 20 | HRGs appearing in samples 1-5 where 5 DGHs have 4 units or less of activity each | 354 | 6.18% | £281m |
| 21 | HRGs appearing in at least 3 of the samples 1-5 where 5 DGHs have 4 units or less of activity each | 195 | 3.88% | £201m |
Table 2 – Drag Down Effect for All Specialists and Specifically for Guy’s and St Thomas’ NHS Foundation Trust (GSTT)

| Def | Specialist HRG Definition | Number of Specialist HRGs Identified | All Specialists Drag Down Effect | All Specialists Value of Drag-Down Effect | GSTT Drag Down Effect | GSTT Value of Drag-Down Effect |
|---|---|---|---|---|---|---|
| 1 | HRGs with any number of providers with 50% or more of spells done by specialists | 234 | 4.71% | £207m | 37.31% | £71m |
| 2 | HRGs with any number of providers, but 50% or more of the providers are specialists | 98 | 10.89% | £546m | 24.47% | £53m |
| 3 | HRGs with 30 providers or less | 114 | 9.50% | £496m | 11.39% | £25m |
| 4 | HRGs with 30 providers or less and with 50% or more of spells done by specialists | 234 | 4.71% | £207m | 37.31% | £71m |
| 5 | HRGs with 30 providers or less and where 50% or more of the providers are specialists | 62 | 10.18% | £532m | 11.54% | £26m |
| 6 | HRGs where Specialist Average Cost = National Average Cost | 383 | 8.72% | £320m | 7.18% | £11m |
| 7 | HRGs where Specialist Average Cost = National Average Cost, with HRGs with 50% or more of activity done by non-specialists in all settings removed | 167 | 7.87% | £378m | 15.75% | £32m |
| 8 | HRGs where (Random Sample 1) 5 DGHs have 1 unit or less of activity each | 114 | 5.77% | £305m | 19.14% | £43m |
| 9 | HRGs where (Random Sample 2) 5 DGHs have 1 unit or less of activity each | 115 | 5.69% | £301m | 18.93% | £43m |
| 10 | HRGs where (Random Sample 3) 5 DGHs have 1 unit or less of activity each | 177 | 8.71% | £422m | 27.43% | £57m |
| 11 | HRGs where (Random Sample 4) 5 DGHs have 1 unit or less of activity each | 187 | 9.58% | £465m | 25.07% | £51m |
| 12 | HRGs where (Random Sample 5) 5 DGHs have 1 unit or less of activity each | 84 | 8.37% | £443m | 10.59% | £24m |
| 13 | HRGs appearing in samples 1-5 where 5 DGHs have 1 unit or less of activity each | 234 | 8.15% | £389m | 28.79% | £59m |
| 14 | HRGs appearing in at least 3 of the samples 1-5 where 5 DGHs have 1 unit or less of activity each | 117 | 8.52% | £450m | 12.11% | £27m |
| 15 | HRGs where (Random Sample 1) 5 DGHs have 4 units or less of activity each | 186 | 4.19% | £218m | 13.54% | £30m |
| 16 | HRGs where (Random Sample 2) 5 DGHs have 4 units or less of activity each | 298 | 3.27% | £165m | 23.18% | £50m |
| 17 | HRGs where (Random Sample 3) 5 DGHs have 4 units or less of activity each | 265 | 7.49% | £349m | 34.29% | £68m |
| 18 | HRGs where (Random Sample 4) 5 DGHs have 4 units or less of activity each | 284 | 7.00% | £329m | 29.23% | £59m |
| 19 | HRGs where (Random Sample 5) 5 DGHs have 4 units or less of activity each | 163 | 2.82% | £147m | 9.96% | £22m |
| 20 | HRGs appearing in samples 1-5 where 5 DGHs have 4 units or less of activity each | 354 | 6.18% | £281m | 34.15% | £66m |
| 21 | HRGs appearing in at least 3 of the samples 1-5 where 5 DGHs have 4 units or less of activity each | 195 | 3.88% | £201m | 10.85% | £24m |
GSTT Results
33. The results for GSTT (shown in table 2 above) showed figures roughly similar to those found in the
Project Diamond report (27.1%). However, the results are much more widely spread for different
definitions of specialist, ranging from 7% to 37% with a standard deviation of 9.90% (compared to
2.39% for the all specialists results). When compared with the results from calculating the “drag
down” effect for all specialists, the relative size of the drag down effect for each definition is
inconsistent. For example, a definition with a relatively high drag down effect (e.g. definition 6) for all
specialists (8.72%) may have a relatively low drag down effect for GSTT (7.18%). Definition 1 has a
relatively low drag down effect for all specialists (4.71%) and the highest drag down effect for GSTT
(37.31%).
34. This suggests several possible issues with the model, which will be examined in more detail below.
Detailed Examination of the method
35. The “drag down” effect is approximately calculated by taking the average percentage difference
between the specialist average costs and the national average costs for simple HRGs (vertical distance
A) and subtracting the average percentage difference between the specialist average costs and the
national average costs for complex HRGs (vertical distance B). This gives a percentage drag down
effect, which can be converted to a monetary value (illustrated by area C) by applying this percentage
to the total ‘non specialist’ cost base for the specialist trusts. In the above analysis we have calculated
this cost base by finding the sum of all non-specialist PbR activity for the specialists multiplied by the
specialist average cost.
Figure 5
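A minimal sketch of this calculation, reusing the per-HRG group averages from the earlier sketches. The input specialist_activity (total specialist activity per HRG) is an assumed structure used to construct the non-specialist cost base.

```python
import pandas as pd

def drag_down_effect(avg: pd.DataFrame, specialist_hrgs: set,
                     specialist_activity: pd.Series) -> tuple:
    """Return (percentage drag down, monetary value).

    A = average % gap between specialist and national costs on simple HRGs,
    B = the same gap on complex (specialist) HRGs; the monetary value
    applies (A - B) to the non-specialist cost base (area C in figure 5)."""
    gap = (avg["specialist_avg"] - avg["national_avg"]) / avg["national_avg"]
    is_complex = avg.index.isin(specialist_hrgs)
    drag_down = gap[~is_complex].mean() - gap[is_complex].mean()
    # Cost base: non-specialist PbR activity of the specialist trusts,
    # priced at the specialist average cost (per paragraph 35).
    simple_hrgs = avg.index[~is_complex]
    cost_base = (specialist_activity.reindex(simple_hrgs, fill_value=0)
                 * avg.loc[simple_hrgs, "specialist_avg"]).sum()
    return drag_down, drag_down * cost_base
```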
36. Although the overall gap between the cost curves might change when we use different estimates of the specialist average costs, the calculated drag down effect as a percentage should be approximately the same. This is because, even if the specialist average cost line moves up or down, A and B are affected by the same amount, so A minus B should remain approximately unchanged. This, however, does not happen when we move from the results in table 1 using all specialist trusts’ costs to table 2 using GSTT’s costs only. One possible explanation is that the distances A and B do not increase by the same amount, or in other words, the average cost curves are not parallel.
37. However, once we introduce the idea of non-parallel cost curves, we immediately have a number of
issues. Firstly, the definition of specialist activity becomes crucial to the accurate estimation of the
drag down effect. Depending on our definition of specialist activity, we may estimate the average
cost differences for simple and complex work to be either A1 and B1, or A2 and B2 in the diagram
below (whereas before with parallel cost curves the cost differential would always be the same no
matter where it is estimated). As can be seen, this will lead to different estimates of the drag down
effect. Unfortunately, our analysis has shown us that there is no clear way to determine the definition of specialist activity. This may explain the fact that the estimates using GSTT-only costs display far more variance across the different definitions than the average estimates for all specialists.
Figure 6
38. The second issue around non-parallel cost curves can be seen when we also take into consideration the value of the underfunding. This is derived by multiplying the estimated drag down effect by the non-specialist cost base and, as can be seen in figure 5 above, is illustrated by the blue area C. However, when the cost curves are no longer parallel, it becomes less clear whether this continues to provide an accurate measure of the level of underfunding. Figure 7 below shows the implication of applying this method to the case with non-parallel cost curves.
39. Based on the previous estimates of the drag down effect in figure 6 (the green and orange lines), and assuming that they would imply break points of X and Y respectively, the green and orange areas illustrated on the graph would provide the equivalent measures of underfunding as area C in figure 5. However, this is very different to the true size of the drag down effect, which might be represented by the black striped area on the diagram (see below for further discussion on the true nature of the drag down effect with non-parallel cost curves).
40. Even if the true break point were identified, the corresponding drag down effect and level of underfunding estimated would be larger than the striped area of the diagram below (and would lie somewhere between the green and orange lines, as the true break point lies between these two estimates). This calls into question whether the assumption of a constant drag down effect in this case would be valid, as it implies such a large level of underfunding relative to the case in the original model. This could help to explain the much higher estimates of the drag down effect seen with the GSTT estimates compared to the all specialist estimates.
Figure 7
Extending the methodology
41. The previous analysis has continued to assume that, as in the Project Diamond report, the size of the drag down effect is the same for all HRGs, so that the level of underfunding is most accurately represented by the rectangular shaded areas on the diagram. However, with the introduction of non-parallel cost curves, this does not necessarily have to be the case. We could instead make the assumption that the size of the drag down effect differs by HRG, so that simpler HRGs experience a greater level of drag down. Intuitively, this would make sense as the simpler the HRG, the more DGH activity in the HRG.
42. The shaded areas in Figure 8 below illustrate the estimated and ‘true’ levels of the drag down effect based on this new assumption. By allowing this change, the striped area (representing the ‘true’ drag down effect) is now much closer to the green and orange estimates than previously. However, it is still important to note that different definitions of specialist HRGs produce different estimates of the drag down effect and the level of underfunding. The estimate of the level of underfunding with the correct break point would still lie between the green and orange areas and exceed the striped area below.
Figure 8
43. However, there is no reason to assume that this model accurately reflects the true picture. For
simplicity we have assumed that the change in the drag down effect would be such that the green,
orange and grey lines would become parallel to the specialist average cost curve. In reality, the
change in the drag down effect could follow any pattern and the area representing the level of
underfunding could take on any shape as a result. Unfortunately, it would not be possible to
empirically estimate what the true picture is, and consequently we can never be sure in this situation
whether our methodology is able to capture the true level of underfunding.
Does this matter?
44. The issues identified in the estimates for GSTT and the subsequent discussions suggest that this
methodology may not be robust enough to estimate underfunding for a single trust. However, it is
less clear whether the same issues apply when estimating the overall level of underfunding for all
specialist trusts. The estimates based on all trust data did appear to be more stable than the GSTT
estimates, although there was still variation depending on the definitions used.
45. It is likely that taking an average over trusts’ individual cost curves will help to dampen many of the problems described above relating to non-parallel cost curves. However, referring back to figure 1, we can see that the specialist cost curve still does not appear to be parallel with the national cost curve5. Therefore, it may be subject to similar problems when estimating the drag down effect, although to a lesser extent than when a single trust’s costs are used.
46. The extent to which this method of estimation can produce an accurate figure for the level of underfunding and drag down effect therefore depends on how close the cost curves are to parallel and, more generally, how close they are to the relationships hypothesised in the Project Diamond report. Any deviation will cause inaccuracies, which will be deepened by the lack of a clear definition for specialist/complex activity. Unfortunately, it is difficult to know precisely how accurate any estimate is, and therefore even the aggregate results should be treated with caution.
5. Although this may partly be a consequence of using trend lines rather than the raw data, the picture in the raw data is even more confused and less likely to produce perfectly straight and parallel cost curves.
47. It is worth noting that there may be other possible complications arising from this model not fully explored in this paper. We have still been unable to fully account for all of the problems we identified in the GSTT estimates, such as the fact that the relative sizes of the estimated drag down effects based on different definitions of specialist activity do not match the relative sizes when the all specialists data is used. One possibility we identified in the GSTT data was that the cost curve for the individual trust appeared to fall below the national cost curve for more complex work, but it is not clear what the implications of this finding are. Overall, it seems that the model becomes extremely complex and may not accurately capture any true level of underfunding in many cases.
Conclusions
48. Intuitively, and based on our analysis of the data, there does seem to be some homogeneity of bed day costs across specialities, and this may cause cost misallocation problems. However, as we cannot identify the true cost curve, it is impossible to know how much misallocation there is, whether it is a widespread issue and whether different types of trusts exhibit the same degree of misallocation. The EY model is unable to distinguish whether the calculated level of underfunding is due to cost misallocation or other reasons, such as differences in efficiency.
49. Additionally, it is unclear how robust the method used in the Project Diamond report to estimate the
resulting level of underfunding is, and we would be very cautious in making any estimates using
individual trusts’ data. Overall, there appear to be additional complexities in the data that the method
may not be able to account for.
50. The evidence from our calculations using data from all specialists suggests that, if the drag down
effect estimated by this method is reliable, it is much smaller than that estimated in the Project
Diamond report. However, without a true cost curve, it is difficult to conclude that any drag down effect identified results purely from reference cost misallocation. The methodology employed by
EY is unable to fully distinguish between cost differences arising from the ‘drag down effect’ and
other factors, such as differences in efficiency. It is also important to remember that the national
tariff also includes additional mechanisms to account for the higher costs of specialist activity, such as
specialist top ups and long stay payments. These must be balanced against the calculated level of
underfunding resulting from this model. We estimate that in 2013/14 the total value of specialist top
ups paid to the specialist trusts examined in this paper will equate to approximately £200m, or 4% of
their total admitted patient care income from PbR.
PbR Team
Department of Health
January 2013
Annex 1
List of trusts classified as specialist
Specialist Trusts
RA7  UNIVERSITY HOSPITALS BRISTOL NHS FOUNDATION TRUST
RAL  ROYAL FREE HAMPSTEAD NHS TRUST
RAN  ROYAL NATIONAL ORTHOPAEDIC HOSPITAL NHS TRUST
RBB  ROYAL NATIONAL HOSPITAL FOR RHEUMATIC DISEASES NHS FOUNDATION TRUST
RBF  NUFFIELD ORTHOPAEDIC CENTRE NHS TRUST
RBQ  LIVERPOOL HEART AND CHEST NHS FOUNDATION TRUST
RBS  ALDER HEY CHILDREN'S NHS FOUNDATION TRUST
RBV  THE CHRISTIE NHS FOUNDATION TRUST
RCU  SHEFFIELD CHILDREN'S NHS FOUNDATION TRUST
REN  CLATTERBRIDGE CENTRE FOR ONCOLOGY NHS FOUNDATION TRUST
REP  LIVERPOOL WOMEN'S NHS FOUNDATION TRUST
RET  THE WALTON CENTRE NHS FOUNDATION TRUST
RGM  PAPWORTH HOSPITAL NHS FOUNDATION TRUST
RGT  CAMBRIDGE UNIVERSITY HOSPITALS NHS FOUNDATION TRUST
RHM  SOUTHAMPTON UNIVERSITY HOSPITALS NHS TRUST
RHQ  SHEFFIELD TEACHING HOSPITALS NHS FOUNDATION TRUST
RJ1  GUY'S AND ST THOMAS' NHS FOUNDATION TRUST
RJ7  ST GEORGE'S HEALTHCARE NHS TRUST
RJZ  KING'S COLLEGE HOSPITAL NHS FOUNDATION TRUST
RL1  ROBERT JONES AND AGNES HUNT ORTHOPAEDIC AND DISTRICT HOSPITAL NHS TRUST
RLU  BIRMINGHAM WOMEN'S NHS FOUNDATION TRUST
RM2  UNIVERSITY HOSPITAL OF SOUTH MANCHESTER NHS FOUNDATION TRUST
RM3  SALFORD ROYAL NHS FOUNDATION TRUST
RNJ  BARTS AND THE LONDON NHS TRUST
RP4  GREAT ORMOND STREET HOSPITAL FOR CHILDREN NHS TRUST
RP6  MOORFIELDS EYE HOSPITAL NHS FOUNDATION TRUST
RPC  QUEEN VICTORIA HOSPITAL NHS FOUNDATION TRUST
RPY  THE ROYAL MARSDEN NHS FOUNDATION TRUST
RQ3  BIRMINGHAM CHILDREN'S HOSPITAL NHS FOUNDATION TRUST
RQ6  ROYAL LIVERPOOL AND BROADGREEN UNIVERSITY HOSPITALS NHS TRUST
RQM  CHELSEA AND WESTMINSTER HOSPITAL NHS FOUNDATION TRUST
RR8  LEEDS TEACHING HOSPITALS NHS TRUST
RRJ  THE ROYAL ORTHOPAEDIC HOSPITAL NHS FOUNDATION TRUST
RRK  UNIVERSITY HOSPITALS BIRMINGHAM NHS FOUNDATION TRUST
RRV  UNIVERSITY COLLEGE LONDON HOSPITALS NHS FOUNDATION TRUST
RT3  ROYAL BROMPTON AND HAREFIELD NHS FOUNDATION TRUST
RTD  THE NEWCASTLE UPON TYNE HOSPITALS NHS FOUNDATION TRUST
RTH  OXFORD RADCLIFFE HOSPITALS NHS TRUST
RW3  CENTRAL MANCHESTER UNIVERSITY HOSPITALS NHS FOUNDATION TRUST
RWE  UNIVERSITY HOSPITALS OF LEICESTER NHS TRUST
RX1  NOTTINGHAM UNIVERSITY HOSPITALS NHS TRUST
Annex 2
List of DGHs in Random Samples

Sample 1
RH8  ROYAL DEVON AND EXETER NHS FOUNDATION TRUST
RK9  PLYMOUTH HOSPITALS NHS TRUST
RCC  SCARBOROUGH AND NORTH EAST YORKSHIRE HEALTH CARE NHS TRUST
RJL  NORTHERN LINCOLNSHIRE AND GOOLE HOSPITALS NHS FOUNDATION TRUST
RFW  WEST MIDDLESEX UNIVERSITY HOSPITAL NHS TRUST

Sample 2
RVW  NORTH TEES AND HARTLEPOOL NHS FOUNDATION TRUST
RWA  HULL AND EAST YORKSHIRE HOSPITALS NHS TRUST
RJF  BURTON HOSPITALS NHS FOUNDATION TRUST
RTR  SOUTH TEES HOSPITALS NHS FOUNDATION TRUST
RLT  GEORGE ELIOT HOSPITAL NHS TRUST

Sample 3
RLN  CITY HOSPITALS SUNDERLAND NHS FOUNDATION TRUST
RA9  SOUTH DEVON HEALTHCARE NHS FOUNDATION TRUST
RQ8  MID ESSEX HOSPITAL SERVICES NHS TRUST
RBZ  NORTHERN DEVON HEALTHCARE NHS TRUST
RN5  BASINGSTOKE AND NORTH HAMPSHIRE NHS FOUNDATION TRUST

Sample 4
RWW  WARRINGTON AND HALTON HOSPITALS NHS FOUNDATION TRUST
RVV  EAST KENT HOSPITALS UNIVERSITY NHS FOUNDATION TRUST
RGC  WHIPPS CROSS UNIVERSITY HOSPITAL NHS TRUST
RJL  NORTHERN LINCOLNSHIRE AND GOOLE HOSPITALS NHS FOUNDATION TRUST
RA3  WESTON AREA HEALTH NHS TRUST

Sample 5
RNZ  SALISBURY NHS FOUNDATION TRUST
RYJ  IMPERIAL COLLEGE HEALTHCARE NHS TRUST
RNL  NORTH CUMBRIA UNIVERSITY HOSPITALS NHS TRUST
RNH  NEWHAM UNIVERSITY HOSPITAL NHS TRUST
RTE  GLOUCESTERSHIRE HOSPITALS NHS FOUNDATION TRUST