Money for nothing or payments for biodiversity?

Design and performance of conservation tender metrics
Stuart M. Whitten and Art Langston
CSIRO Ecosystem Sciences
Jacobsberg, September 2013
Different measures in conservation policy
Step in policy design → example measure(s):
• Information for target setting: vegetation status; threatened species status
• Policy targets: water quality targets; vegetation area targets
• Selection & design of MBI (eg tender) – measuring impacts in the tender (metric design): metrics in conservation tenders
• Monitoring of actions within the tender framework: quantity of water used; habitat structure benchmarks; vegetation condition index
• Evaluation of the tender: cost and outcomes versus alternatives; evaluation of outcomes against targets; benefit indices etc.
What the metric does in a conservation tender…
Targeting → resultant allocation metric → allocation order:
• Cost targeting: $, usually per unit of land → cheapest first ($C)
• Benefit targeting: benefit index – usually a dimensionless index reflecting either attributes of biodiversity or, in some instances, several environmental attributes → highest benefits first (BI)
• Benefit index / cost ratio: benefit index / cost → highest benefits / $ first (BI / $C)
• Scaled benefit index / cost: benefit index scaled to reflect investment preferences / cost → highest scaled benefits / $ first (BI / $C)
• Benefit cost ratio: a monetized or otherwise scaled benefit index / cost → highest benefit cost ratio first ($B / $C)
Either a cost (budget) or a benefit (units desired) target may be applied in each case (a worked sketch of these allocation orders follows below).
Sources: Duke et al. (2013); Glebe (2013)
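To make the allocation orders above concrete, here is a minimal Python sketch with invented bids (the sites, costs and benefit index values are hypothetical, and this is not the selection rule of any particular program): each rule ranks the same pool of bids differently and funds them in order until the budget is exhausted.

```python
# Hypothetical sketch of the allocation orders in the table above
# (invented bids: "bi" is a dimensionless benefit index, "cost" is the landholder's bid in $).

BIDS = [
    {"site": "A", "cost": 30_000, "bi": 20.0},
    {"site": "B", "cost": 18_000, "bi": 5.0},
    {"site": "C", "cost": 55_000, "bi": 25.0},
    {"site": "D", "cost": 22_000, "bi": 11.0},
    {"site": "E", "cost": 10_000, "bi": 1.0},
]
BUDGET = 60_000

def allocate(bids, budget, key, reverse):
    """Fund bids in ranked order, skipping any bid that would exceed the budget."""
    funded, spent = [], 0
    for bid in sorted(bids, key=key, reverse=reverse):
        if spent + bid["cost"] <= budget:
            funded.append(bid["site"])
            spent += bid["cost"]
    return funded, spent

RULES = {
    "cheapest first ($C)":         (lambda b: b["cost"], False),
    "highest benefits first (BI)": (lambda b: b["bi"], True),
    "highest BI / $ first":        (lambda b: b["bi"] / b["cost"], True),
}

for name, (key, reverse) in RULES.items():
    funded, spent = allocate(BIDS, BUDGET, key, reverse)
    total_bi = sum(b["bi"] for b in BIDS if b["site"] in funded)
    print(f"{name:29s} -> {funded}  spent=${spent:,}  total BI={total_bi}")
```

With these invented numbers the three rules fund different sets of sites and deliver different total benefit for the same budget, which is the point of choosing the metric carefully.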
What the metric does in a conservation tender…
• A means of comparing different investment opportunities:
• Metrics link an economic criterion of cost-effective investment to biophysical responses to management.
• Critical to understand what is being bought and sold:
• This is difficult for commodities like biodiversity.
• Must adequately translate often diverse attributes into a score that sufficiently reflects relative investment value:
• Allows for comparison of apples and oranges.
• If this cannot be done in the metric then it should be done elsewhere in program design.
• Must be practical to implement (time, skill, cost, etc.).
• Embedded in utility theory:
• "Utility is taken to be correlative to Desire or Want … found in the price which a person is willing to pay for the fulfilment or satisfaction of his desire." (Marshall 1920:78) In this case the willingness to pay is most often that of the purchaser, who may be a government.
Substitutability and spatial scale …
[Figure: approaches and programs positioned by biodiversity asset heterogeneity (non-substitutable / maximum differentiation at one end, substitutable at the other) against spatial ecological scale (paddock/patch, landscape, bio-region, continental). Listed roughly from most differentiated to most substitutable: Noah's Ark Problem (Weitzman), systematic conservation planning, Forest Conservation Fund, MEC and Biometric, Bushtender and CRP.]
Bid interactions and spatial scale …
[Figure: approaches and programs positioned by independence of bids (from bid value completely dependent on the other offers in the accepted set, to fully independent bids) against spatial ecological scale (paddock/patch, landscape, bio-region, continental). Listed roughly from most dependent to most independent: the Weitzman approach (none identified in practice, but some discussion), systematic conservation planning (a pilot in Australia, others?), the Forest Conservation Fund (Tas), and Bushtender, Biometric, CRP and MEC.]
Deciding across differentiation and quality oriented metrics
Factor: differentiation oriented vs substitution oriented
• Biodiversity objective: conservation of the broadest range of biodiversity vs conservation of a single type of biodiversity
• Marginal contribution of each project to the objective: high (preferably meets the objective for a single type) vs low (each bid delivers a small step against the objective)
• Biodiversity value information needed: coverage across targeted biodiversity types vs coverage within a biodiversity type
• Budget: high vs lower
• Geographic scale: regional to national vs local to regional
• Management action effectiveness information: likely to be protection oriented due to the large range of possible actions vs possible to include many activities targeting the outcome
• Others?
Choosing between scales within biodiversity metrics
• Bid values independent: project-focussed metrics with no spatial attributes, or landscape-focussed metrics valuing the marginal contribution of each project.
• Bid values not independent: project-focussed metrics with spatial attributes, or landscape-focussed metrics selecting the optimal package (see the sketch below).
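The difference these choices make can be sketched with a toy example. The Python below uses invented sites, costs and a hypothetical connectivity bonus for funding adjacent sites together (none of it drawn from an actual program); it contrasts a project-focussed ranking by stand-alone BI/$, a landscape-focussed greedy selection by marginal value per dollar, and an exhaustive search for the optimal package under a budget.

```python
# Hypothetical sketch (not any program's actual algorithm): contrasts three ways of
# selecting bids when site benefits interact, e.g. a connectivity bonus for
# funding adjacent sites together.
from itertools import combinations

# site -> (cost $000, stand-alone benefit index); adjacent pairs earn a bonus if both funded.
BIDS = {"A": (30, 10), "B": (25, 9), "C": (40, 14), "D": (20, 5), "E": (35, 8)}
ADJACENT = {frozenset("AB"), frozenset("CD")}   # pairs with a connectivity bonus
BONUS = 6
BUDGET = 80

def package_benefit(sites):
    """Landscape benefit of a set of sites: additive scores plus interaction bonuses."""
    base = sum(BIDS[s][1] for s in sites)
    bonus = sum(BONUS for pair in ADJACENT if pair <= set(sites))
    return base + bonus

def cost(sites):
    return sum(BIDS[s][0] for s in sites)

def greedy_independent():
    """Project-focussed: rank by stand-alone BI / $, ignoring interactions."""
    chosen = []
    for s in sorted(BIDS, key=lambda s: BIDS[s][1] / BIDS[s][0], reverse=True):
        if cost(chosen + [s]) <= BUDGET:
            chosen.append(s)
    return chosen

def greedy_marginal():
    """Landscape-focussed: at each step add the site with the highest marginal value / $."""
    chosen = []
    while True:
        candidates = [s for s in BIDS if s not in chosen and cost(chosen + [s]) <= BUDGET]
        if not candidates:
            return chosen
        best = max(candidates,
                   key=lambda s: (package_benefit(chosen + [s]) - package_benefit(chosen))
                                 / BIDS[s][0])
        chosen.append(best)

def optimal_package():
    """Landscape-focussed: search all affordable packages for the best one."""
    best_set, best_val = [], 0.0
    for r in range(1, len(BIDS) + 1):
        for combo in combinations(BIDS, r):
            if cost(combo) <= BUDGET and package_benefit(combo) > best_val:
                best_set, best_val = list(combo), package_benefit(combo)
    return best_set

for name, fn in [("independent BI/$", greedy_independent),
                 ("marginal value", greedy_marginal),
                 ("optimal package", optimal_package)]:
    sel = fn()
    print(f"{name:18s} -> {sorted(sel)}  benefit={package_benefit(sel)}  cost={cost(sel)}")
```

In this toy case the stand-alone ranking ignores the bonuses and funds a lower-value set, while the marginal-value and optimal-package approaches capture them.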
Some performance criteria for a metric…
• Scientifically robust: simple, but captures biophysical relationships and responses to investment.
• Repeatable: gives the same score each time it is used for the same project and investment action.
• Applicable across scope: can be applied across different locations and generates the same score for equivalent projects and actions.
• Economically and socially defensible: prioritises options in a defensible way.
• Scale comparable: the difference in scores must be a sufficiently accurate estimate of the difference in investment values to support decisions about relative funding levels:
• i.e. twice the score is twice as valuable against the investment criterion.
[Photos: Peppermint box grassy woodland in three condition classes (condition 2 > 3 > 4). Photos: Veronica/Erik Doerr]
Three broad forms of paddock/patch metrics:
• Condition index based metrics – by far the most common in practice
• Implied ratio interpretation
• Multi-criteria based indices
• Usually applied to bundles of environmental services (not always – e.g. Nature Assist in Qld)
• Expected value oriented indices
• In all cases allocation is usually:
• In order of BI / $ bid by the landholder
• Sometimes by BI for multi-criteria indices (eg CRP)
(A generic sketch of a condition-index style metric follows below.)
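Condition-index metrics typically score a site's habitat attributes against a benchmark for its vegetation type and scale by area, with the benefit index taken as the projected gain in that score under the funded management. The sketch below is a generic, hypothetical form (invented attributes, weights, benchmarks and bid; it is not the BGGW, habitat-hectares or any other program's actual formula).

```python
# Hypothetical condition-index style benefit metric (invented weights and benchmarks;
# not the formula used by any particular program).

# Benchmark values for a vegetation type, and attribute weights summing to 1.
BENCHMARK = {"native_cover_pct": 80, "large_trees_per_ha": 15, "litter_cover_pct": 60}
WEIGHTS   = {"native_cover_pct": 0.5, "large_trees_per_ha": 0.3, "litter_cover_pct": 0.2}

def condition_score(attrs):
    """Weighted sum of attributes, each capped at its benchmark (0-1 scale)."""
    return sum(WEIGHTS[k] * min(attrs[k] / BENCHMARK[k], 1.0) for k in WEIGHTS)

def benefit_index(current, projected, area_ha):
    """Projected gain in condition under management, scaled by area (condition-ha)."""
    return (condition_score(projected) - condition_score(current)) * area_ha

# One hypothetical bid: current state, state projected under the funded actions, bid price.
current   = {"native_cover_pct": 40, "large_trees_per_ha": 6, "litter_cover_pct": 30}
projected = {"native_cover_pct": 65, "large_trees_per_ha": 8, "litter_cover_pct": 50}
area_ha, bid_price = 25.0, 18_000

bi = benefit_index(current, projected, area_ha)
print(f"benefit index = {bi:.2f} condition-ha; BI/$ = {bi / bid_price:.6f}")
# Allocation would then usually rank bids by BI / $ bid, as on this slide.
```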
Does this matter?
• Metrics are often a significant cost in tender design:
• Can be a significant budgetary impediment.
• Good metrics require specialist skill sets:
• These skills are often scarce – especially in developing countries and regional areas.
• Is the investment in a metric worthwhile?
• How much efficiency improvement do we see from better metrics?
• Would a rough and ready approach perform just as well? (See the comparison sketch below.)
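One way to ask whether a rough and ready approach would perform just as well, sketched below with entirely invented data: score the same pool of bids with a detailed benefit index and with a crude proxy (here simply area), allocate the same budget by each, and compare which sites get funded and how much of the detailed-metric benefit the crude allocation still captures.

```python
# Hypothetical comparison of a detailed metric vs a rough-and-ready proxy
# (invented bids: "detailed_bi" stands in for a carefully designed benefit index,
# "area_ha" is a crude proxy such as vegetation area alone).
import random

random.seed(1)
BIDS = [{"site": i,
         "cost": random.randint(5, 50) * 1_000,
         "detailed_bi": random.uniform(1, 20),
         "area_ha": random.uniform(5, 100)}
        for i in range(40)]
BUDGET = 300_000

def allocate(bids, budget, score):
    """Greedy allocation by score-per-dollar until the budget is exhausted."""
    funded, spent = set(), 0
    for bid in sorted(bids, key=lambda b: score(b) / b["cost"], reverse=True):
        if spent + bid["cost"] <= budget:
            funded.add(bid["site"])
            spent += bid["cost"]
    return funded

detailed = allocate(BIDS, BUDGET, lambda b: b["detailed_bi"])
rough    = allocate(BIDS, BUDGET, lambda b: b["area_ha"])

def benefit(funded):
    return sum(b["detailed_bi"] for b in BIDS if b["site"] in funded)

overlap = len(detailed & rough) / len(detailed)
print(f"sites funded under both metrics: {overlap:.0%} of the detailed-metric selection")
print(f"benefit captured by the rough proxy: {benefit(rough) / benefit(detailed):.0%} "
      f"of the detailed-metric benefit")
```

Repeating this across budget levels is essentially what the agreement charts later in the deck show with real metrics and the MEC data set.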
Metric benefit forms compared …
Metrics compared:
• MEC data set (2011)
• BGGW metric version 1 (2007–2009) – Gibbons and Ryan
• BGGW metric version 2 (2010) – G&R modified
Alternative benefit estimates:
• A – total future benefits (Vfuture)
• B – additionality benefit (Vbenefit)
• C – additionality over duty of care
Benefit definitions and metrics …
[Figure: benefit definitions A, B and C illustrated; after Maron, M., Rhodes, J.R. and Gibbons, P. (2013). Calculating the benefit of conservation actions. Conservation Letters.]
(A simple numerical illustration of the three definitions follows below.)
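One plausible reading of the three benefit estimates, assumed here for illustration rather than taken verbatim from Maron et al. (2013): A values the whole projected future condition under management, B values only the gain relative to a no-action counterfactual (additionality), and C values only the gain beyond what the landholder's duty of care would deliver anyway. A minimal numerical sketch with invented condition trajectories:

```python
# Hypothetical illustration of three benefit definitions for a single site
# (invented numbers; condition scores on a 0-100 scale, area in hectares).

area_ha = 12.0
cond_with_action = 68.0      # projected condition with the funded management
cond_no_action = 41.0        # counterfactual: no action at all (decline continues)
cond_duty_of_care = 50.0     # counterfactual: only the landholder's duty-of-care obligations

# A: total future benefit - values everything projected to be there
v_future = cond_with_action * area_ha

# B: additionality benefit - only the gain caused by the funded action
v_additional = (cond_with_action - cond_no_action) * area_ha

# C: additionality over duty of care - only the gain beyond what is already required
v_over_doc = (cond_with_action - cond_duty_of_care) * area_ha

print(f"A total future benefit:    {v_future:7.1f} condition-ha")
print(f"B additionality benefit:   {v_additional:7.1f} condition-ha")
print(f"C additionality over DOC:  {v_over_doc:7.1f} condition-ha")
```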
Comparison across metrics
[Figure: proportion of environmental benefit captured (0–100%) against proportion of budget allocated (0–100%) for the original BGGW metric, the revised BGGW metric and the expected value metric.]
Agreement between BGGW + original
[Figure: agreement of allocation across metrics – proportion of allocated sites (0–100%) for which three metrics agree, two metrics agree, or there is no agreement, plotted against budgeted cost ($0–$60 million).]
MEC across total, full additional and DOC
[Figure: agreement of allocation across metrics – proportion of allocated sites (0–100%) for which three metrics agree, two metrics agree, or there is no agreement, plotted against budgeted cost ($0–$60 million).]
[Photos: bids sought – prime biodiversity. Photos: Veronica/Erik Doerr]
Take home messages …
• It is important for metrics to meet a range of performance criteria if we are to be confident that conservation tenders can deliver on their hypothesized advantages.
• BUT does the difference in functional structures, or the difference in benefit estimates, have a strong or weak impact on investment decisions by governments?
• IF WEAK:
• There is greater flexibility in metric structure, so long as the metric consistently discriminates across the tenders submitted.
• There is greater freedom to include elements other than economic efficiency in metrics, with relatively low risk of distorting decisions:
• e.g. rewarding landholders for past management that has kept sites in good condition.
• Conclusion 1: rough and ready is OK if acceptance rates are high.
• Conclusion 2: how much metrics matter once acceptance rates fall depends, at least in part, on the metric structure.
• Conclusion 3: there are other reasons why policy makers will want to keep acceptance rates high.
Thank you
CSIRO Ecosystem Sciences
Dr Stuart Whitten and Art Langston
Email: stuart.whitten@csiro.au
www.csiro.au/science/Markets.html
Contact Us
Phone: 1300 363 400 or +61 3 9545 2176
Email: enquiries@csiro.au Web: www.csiro.au