Assessment of
Intraseasonal to Interannual
Climate Prediction and Predictability
Ben Kirtman, Univ. of Miami
Randy Koster, NASA
Eugenia Kalnay, Univ. of Maryland
Lisa Goddard, Columbia Univ.
Duane Waliser, Jet Propulsion Lab, Cal Tech
September 14, 2010
Climate Prediction Center
The National Academies
• A private, non-profit organization charged to provide advice to the Nation on science, engineering, and medicine.
• The National Academy of Sciences (NAS) was chartered in 1863; the National Research Council (NRC) is the operating arm of the NAS, NAE, and IOM.
• The NRC convenes ad hoc committees of experts who serve pro bono and who are carefully chosen for expertise, balance, and objectivity.
• All reports go through stringent peer review and must be approved by both the study committee and the institution.
Committee Membership
ROBERT A. WELLER (Chair), Woods Hole Oceanographic Institution
ALBERTO ARRIBAS, Met Office, Hadley Centre
JEFFREY L. ANDERSON, National Center for Atmospheric Research
ROBERT E. DICKINSON, University of Texas
LISA GODDARD, Columbia University
EUGENIA KALNAY, University of Maryland
BENJAMIN KIRTMAN, University of Miami
RANDAL D. KOSTER, NASA
MICHAEL B. RICHMAN, University of Oklahoma
R. SARAVANAN, Texas A&M University
DUANE WALISER, Jet Propulsion Laboratory, California Institute of Technology
BIN WANG, University of Hawaii
Charge to the Committee
The study committee will:
1. Review current understanding of climate predictability on
intraseasonal to interannual time scales;
2. Describe how improvements in modeling, observational capabilities,
and other technological improvements have led to changes in our
understanding of predictability;
3. Identify key deficiencies and gaps remaining in our understanding of
climate predictability and recommend research priorities to address
these gaps;
4. Assess the performance of current prediction systems;
5. Recommend strategies and best practices that could be used to assess
improvements in prediction skill over time.
Outline
1) Motivation and Committee Approach
   • Why Intraseasonal to Interannual (ISI) Timescales?
   • What is “Predictability?”
   • Framework for the report
2) Recommendations
   • Research Goals
   • Improvements to Building Blocks
   • Best Practices
3) Case Studies
4) Summary
Why Intraseasonal to Interannual (ISI) Timescales?
• “ISI” refers to timescales ranging from a couple of weeks to a few years.
• Errors in ISI predictions are often related to errors in longer-term climate projections.
• ISI predictions are useful for a variety of resource management decisions.
• Many realizations/verifications are possible.
What is “Predictability?”
• “The extent to which a process contributes to prediction quality.”
• The literature provides a variety of interpretations; the committee agreed on a qualitative approach.
Key aspects of the committee's approach:
• Quantitative estimates of an upper limit of predictability for the real climate system are not possible.
• Verification of forecasts provides a lower bound for predictability.
• Traditional predictability studies (e.g., twin model studies) are qualitatively useful; see the sketch below.
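To make the twin-study idea concrete, here is a minimal sketch (not from the report) of a perfect-model twin experiment, using the Lorenz-63 equations as a toy stand-in for a climate model; all parameter values and function names are illustrative assumptions.

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system (a toy 'climate model') one Euler step."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

def twin_experiment(n_steps=2000, perturbation=1e-6, seed=0):
    """Run a control and a slightly perturbed twin; return their RMS divergence.

    The lead time at which the divergence saturates gives a qualitative,
    perfect-model estimate of predictability for this toy system.
    """
    rng = np.random.default_rng(seed)
    control = np.array([1.0, 1.0, 1.0])
    for _ in range(1000):                      # spin up onto the attractor
        control = lorenz63_step(control)
    twin = control + perturbation * rng.standard_normal(3)
    divergence = []
    for _ in range(n_steps):
        control = lorenz63_step(control)
        twin = lorenz63_step(twin)
        divergence.append(np.sqrt(np.mean((control - twin) ** 2)))
    return np.array(divergence)

div = twin_experiment()
print("RMS divergence at steps 1, 500, 2000:", div[0], div[499], div[-1])
```

Such an estimate is model-dependent, which mirrors the committee's caution: it bounds predictability for the toy system, not for the real climate system.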
Framework for Analyzing
ISI Forecasting
Performance of ISI forecasting systems is based upon:
1) Knowledge of Sources of Predictability
How well do we understand a climate process/phenomenon?
2) Building Blocks of Forecasting Systems
To what extent do observations, data assimilation systems,
and models represent important climate processes?
3) Procedures of Operational Forecasting Centers
How do these centers make, document, and disseminate
forecasts?
Recommendations Regarding
Sources of Predictability
Many sources of predictability remain to
be fully exploited by ISI forecast systems.
Criteria for identifying high-priority sources:
1) Physical principles indicate that the source has an
impact on ISI variability and predictability.
2) Empirical or modeling evidence supports (1).
3) Identifiable gaps in knowledge/representation in
forecasting systems.
4) Potential social value.
Six Research Goals for
Sources of Predictability
1) Madden-Julian Oscillation (MJO)
Develop model diagnostics and forecast metrics. Expand
process knowledge regarding ocean-atmosphere coupling,
multi-scale organization of tropical convection, and cloud
processes.
2) Stratosphere-troposphere interactions
Improve understanding of link between stratospheric
processes and ISI variability. Successfully simulate/predict
sudden warming events and subsequent impacts.
Six Research Goals for
Sources of Predictability
3) Ocean-atmosphere coupling
Understanding of sub-grid-scale processes should be improved.
4) Land-atmosphere feedbacks
Investigate coupling strength between land and
atmosphere. Continue to improve initialization of
important surface properties (e.g., soil moisture).
Six Research Goals for
Sources of Predictability
5) High-impact events (volcanic eruptions, nuclear exchange)
Develop forecasts following rapid, large changes in aerosols/trace gases.
6) Non-stationarity
Long-term trends affecting components of the climate system (e.g., greenhouse gases, land use change) can affect predictability and verification techniques. Changes in variability may also be important.
Building Blocks of ISI Forecasting Systems
• Observational Networks
• Statistical/Dynamical Models
• Data Assimilation Systems
Forecast Improvements involve each of the Building Blocks
Past improvements to ISI forecasting systems have occurred synergistically (e.g., new observations bring the need for model improvement and for expansion of the data assimilation system).
Improvements to Building Blocks
1) Errors in dynamical models should be identified and corrected. Sustained observations and process studies are needed.
• Examples: the double intertropical convergence zone; poor representation of cloud processes.
• Climate Process Teams serve as a useful model for bringing together modelers and observationalists.
• Other programmatic mechanisms should be explored (e.g., facilitating testing of increased model resolution).
Figure: observations (top) and model (bottom); SST (shading), precipitation (contours).
Improvements to Building Blocks
2) Continue to develop and employ statistical techniques, especially nonlinear methods.
Statistical methods are useful in making predictions, assessing forecast performance, and identifying errors in dynamical models. Cutting-edge nonlinear methods provide the opportunity to augment these statistical tools.
3) Statistical methods and dynamical models are complementary and should be pursued.
Using multiple prediction tools leads to improved forecasts. Examples of complementary tools (a sketch of the first follows below):
• Model Output Statistics
• Stochastic Physics
• Downscaling techniques
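As an illustration of the first tool, here is a minimal sketch of a Model Output Statistics (MOS) correction: a linear regression fitted on a training period that maps raw dynamical-model output to observations. The data are synthetic and the setup is illustrative, not the method of any particular operational center.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: "observed" values and biased raw model output.
obs = rng.normal(25.0, 3.0, size=200)
model_raw = 0.8 * obs + 4.0 + rng.normal(0.0, 1.0, size=200)

# MOS step: fit obs ~ a * model_raw + b on a training period (first 150 cases).
a, b = np.polyfit(model_raw[:150], obs[:150], deg=1)

# Apply the fitted correction on the independent verification period.
corrected = a * model_raw[150:] + b
rmse_raw = np.sqrt(np.mean((model_raw[150:] - obs[150:]) ** 2))
rmse_mos = np.sqrt(np.mean((corrected - obs[150:]) ** 2))
print(f"RMSE of raw model output: {rmse_raw:.2f}; after MOS: {rmse_mos:.2f}")
```

The split into training and verification periods matters: evaluating the correction on the data used to fit it would overstate the gain.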
Improvements to Building Blocks
4) Multi-model ensemble (MME) forecast strategies should be pursued, but standards and metrics should be developed.
Figure: the MME mean (red) outperforms individual models (other colors); black is persistence, the baseline forecast. A toy comparison follows below.
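A minimal sketch of the comparison in the figure, using synthetic anomaly forecasts; the model count, error magnitudes, and persistence construction are all illustrative assumptions, not the Jin et al. data.

```python
import numpy as np

rng = np.random.default_rng(7)
n_models, n_times = 5, 120

# Synthetic "truth": a slow oscillation plus noise.
truth = np.sin(np.linspace(0.0, 12.0 * np.pi, n_times)) \
        + 0.3 * rng.standard_normal(n_times)

# Each model sees the truth through its own bias and noise; because the
# models' errors are partly independent, averaging them cancels error.
biases = rng.normal(0.0, 0.4, n_models)
forecasts = truth[None, :] + biases[:, None] \
            + 0.5 * rng.standard_normal((n_models, n_times))

mme_mean = forecasts.mean(axis=0)
persistence = np.roll(truth, 1)  # baseline: the previous anomaly persists

def rmse(f, o):
    return np.sqrt(np.mean((f - o) ** 2))

for m in range(n_models):
    print(f"model {m}:    RMSE = {rmse(forecasts[m], truth):.3f}")
print(f"MME mean:    RMSE = {rmse(mme_mean, truth):.3f}")
print(f"persistence: RMSE = {rmse(persistence[1:], truth[1:]):.3f}")
```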
Improvements to Building Blocks
5) For operational forecast systems, state-of-the-art data assimilation systems should be used (e.g., 4D-Var, ensemble Kalman filters, or hybrids). Operational data assimilation systems should be expanded to include more data, beginning with ocean observations.
Figure: number of satellite observations assimilated into ECMWF forecasts.
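For concreteness, below is a minimal sketch of the analysis update of a stochastic ensemble Kalman filter, one family of schemes named above; the toy state size, observation operator H, and error covariance R are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_ens, n_obs = 3, 20, 2

ensemble = rng.normal(0.0, 1.0, (n_state, n_ens))   # prior (background) ensemble
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])                     # observe first two components
R = 0.25 * np.eye(n_obs)                            # observation-error covariance
y = np.array([0.5, -0.3])                           # the observations

# Sample background covariance from the ensemble.
X = ensemble - ensemble.mean(axis=1, keepdims=True)
Pb = X @ X.T / (n_ens - 1)

# Kalman gain: K = Pb H^T (H Pb H^T + R)^{-1}.
K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)

# Stochastic update: each member assimilates perturbed observations.
y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
analysis = ensemble + K @ (y_pert - H @ ensemble)
print("prior mean:   ", ensemble.mean(axis=1))
print("analysis mean:", analysis.mean(axis=1))
```

Operational systems add many refinements (localization, inflation, hybrid variational steps), but the gain computation above is the core of the update.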
Relationship between
Research and Operations
Collaboration has expanded knowledge of ISI
processes and improved performance of ISI
forecasts.
Collaboration is necessary BOTH:
• between research and operational scientists
• among research scientists, linking observations, model development, data assimilation, and operational forecasting.
Examples of
Collaborative Programs
Making Forecasts More Useful
Value of ISI forecasts for both researchers and decision makers can be tied to:
• Access
• Transparency
• Knowledge of forecast performance
• Availability of tailored products
Best Practices
1) Improve the synergy between research and operational communities.
• Workshops targeting specific forecast-system improvements, held at least annually
• Short-term appointments for visiting researchers
• More rapid sharing of data, data assimilation systems, and models
• Dialog regarding new observational systems
Best Practices
2) Establish publicly available archives of information associated with forecasts.
• Includes observations, model code, hindcasts, forecasts, and verifications.
• Will allow for quantification and tracking of forecast improvement.
• Will bridge the gap between operational centers and forecast users involved in making climate-related management decisions or conducting societally relevant research.
3) Minimize the subjective components of operational ISI forecasts.
Best Practices
4) Broaden and make available forecast metrics.
• Multiple metrics should be used; no perfect metric exists.
• Assessment of probabilistic information is important (see the sketch below).
• Metrics that include information on the distribution of skill in space and time are also useful.
Figure: examples of probability density functions representing forecasts for ENSO.
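As one concrete example of a probabilistic metric, here is a minimal sketch of the Brier score and a skill score against a climatological baseline; the forecast probabilities are synthetic and the event definition is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100

# Synthetic, perfectly reliable probabilistic forecasts of a binary event
# (e.g., "warm-phase conditions occur").
prob = rng.uniform(0.0, 1.0, n)                 # issued probabilities
event = (rng.uniform(0.0, 1.0, n) < prob).astype(float)

brier = np.mean((prob - event) ** 2)            # lower is better; 0 is perfect

# Skill relative to always forecasting the climatological frequency.
clim = event.mean()
brier_clim = np.mean((clim - event) ** 2)
bss = 1.0 - brier / brier_clim
print(f"Brier score: {brier:.3f}; Brier skill score vs. climatology: {bss:.3f}")
```

The skill-score form illustrates the slide's point: a single number only becomes meaningful against a stated baseline, and several such metrics should be reported together.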
Case Studies
• El Niño–Southern Oscillation (ENSO)
• Madden-Julian Oscillation (MJO)
• Soil Moisture
The case studies illustrate how improvements to the building blocks of the ISI forecasting system led to an improved representation of a source of predictability. They also illustrate collaboration among researchers and operational forecasting centers.
ENSO: Progress to Date
• Observations by TAO/TRITON have been critical to progress in understanding and simulation.
• Dynamical models have improved and are competitive with statistical models.
• The MME mean outperforms individual models.
Figure: errors in Niño3.4 predictions since 1962.
ENSO: Gaps in Understanding
• How does intraseasonal variability (e.g., the MJO, westerly wind bursts) affect ENSO event initiation and evolution?
• Chronic biases (e.g., the double ITCZ) in climate models affect ENSO simulation.
• Gaps still remain in initializing forecasts.
Efforts should couple improvements in the understanding of ocean-atmosphere coupling with upgrades to prediction tools (targeted process studies, simulation of sub-grid-scale processes, expanded data assimilation, etc.).
MJO: A Key Source of Intraseasonal Predictability
• Dominant form of intraseasonal atmospheric variability, affecting precipitation and convection throughout the tropics.
• Can interact with the Indian monsoon and extratropical circulation.
MJO: Gaps in Understanding
• Forecasting of the MJO is relatively new; many dynamical models still represent the MJO poorly.
• Evaluating available prediction tools is critical.
• Targeted investigations of cloud processes and the vertical structure of diabatic heating are necessary.
Figure panels: observations vs. models.
Soil Moisture: Predictability for Temperature, Precipitation, and Hydrology
Soil moisture can affect regional temperature and precipitation. It also has implications for streamflow and other hydrological variables.
Soil Moisture: Gaps in Understanding
• Initialization is a challenge due to spatial and temporal heterogeneity in soil moisture.
• Procedures for measuring land-atmosphere coupling strength are still being developed.
• Land Data Assimilation Systems (LDAS) coupled with satellite observations could contribute to initialization.
• Further evaluation and intercomparison of models are necessary.
Figure: forecast skill, measured as r² with land initial conditions minus r² without them (a sketch of this metric follows below).
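A minimal sketch of the caption's metric on synthetic data: the squared correlation (r²) of forecasts made with land initial conditions minus the r² obtained without them. The noise levels are illustrative assumptions, not values from the Koster et al. study.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60

obs = rng.standard_normal(n)                       # observed anomalies
fc_with_land = obs + 0.6 * rng.standard_normal(n)  # forecasts with land ICs
fc_no_land = obs + 1.2 * rng.standard_normal(n)    # forecasts without land ICs

def r2(forecast, observed):
    """Squared Pearson correlation between forecasts and observations."""
    return np.corrcoef(forecast, observed)[0, 1] ** 2

delta_r2 = r2(fc_with_land, obs) - r2(fc_no_land, obs)
print(f"Added skill from land initialization (delta r^2): {delta_r2:.3f}")
```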
Summary of Recommendations
Research Goals
Improve knowledge of sources of predictability:
• MJO
• Ocean-atmosphere coupling
• Land-atmosphere feedbacks
• Stratosphere
• Non-stationarity
• High-impact events
Long-term: years to decades; mainly the research community.
Improvements to Building Blocks
• Identify and correct model errors by supporting sustained observations and process studies
• Implement nonlinear statistical methods
• Use statistical and dynamical prediction tools together
• Continue to pursue multi-model ensembles
• Upgrade data assimilation schemes
Medium-term: coming years; shared responsibility of researchers and operational centers.
Summary of Recommendations
Best Practices
• Improved synergy between research and operations
• Archives
• Metrics
• Minimize subjective intervention
Short-term: related to current and routine activities of operational centers.
Adoption of Best Practices:
• requires stable support for research gains to be integrated into operations;
• establishes an institutional infrastructure that is committed to doing so;
• will establish “feedbacks” that guide future investments in making observations, developing models, and aiding decision-makers (i.e., BEYOND “traditional” operations);
• represents a continuous improvement process.
For more information:
National Research Council
Joe Casola
202.334.3874
jcasola@nas.edu
Report is available online at www.nap.edu.
Image Credits
Slide 6: Grand Coulee Dam – Bonneville Power Administration; wheat field – USDA; cloud fraction image – M. Wyant, R. Wood, C. Bretherton, C. R. Mechoso, Pre-VOCA
Slide 11: ENSO – McPhaden (2004), BAMS, 85, 677–695
Slide 12: Volcano – USGS; Keeling curve – Scripps Institution of Oceanography
Slide 13: Buoy – NOAA; model globe – NOAA
Slide 14: SST graph – Balmaseda et al. (2009), Proceedings of OceanObs'09, ESA Pub. WPP-306
Slide 15: Double ITCZ – Lin (2007), J. Climate, 20, 4497–4525
Slide 17: MME – Jin et al. (2008), Climate Dynamics, 31, 647–664
Slide 18: Satellite observations – ECMWF
Slide 20: Sources are from the respective organizations
Slide 21: Flooding – NRCS; volcano – NASA; drought – NESDIS; Moscow sun – BBC
Slide 23: National Archives
Slide 24: PDFs – IRI
Slide 26: Line plot – Stockdale et al., Clim. Dyn. (in review, 2010); CFS – adapted from Saha et al. (2006), J. Climate, 19, 3483–3517
Slide 28: MJO – Waliser (2006), Predictability of Weather and Climate, Cambridge Univ. Press
Slide 29: MJO models – Kim et al. (2009), J. Climate, 22, 6413–6436
Slide 30: Soil moisture – Seneviratne et al. (2010), Earth-Sci. Reviews, 99(3–4), 125–161
Slide 31: US map plot – Koster et al. (2010), GRL, doi:10.1029/2009GL041677