Quantifying Precipitation Uncertainty for Land Data Assimilation Applications
SEYED HAMED ALEMOHAMMAD, DENNIS B. MCLAUGHLIN, AND DARA ENTEKHABI
Department of Civil and Environmental Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts
(Manuscript received 13 October 2014, in final form 14 April 2015)
ABSTRACT
Ensemble-based data assimilation techniques are often applied to land surface models in order to estimate components of the terrestrial water and energy balance. Precipitation forcing uncertainty is the principal source of the spread among ensemble members that is required for utilizing information in observations to correct model priors. Precipitation fields may have both position and magnitude errors. However, current uncertainty characterizations of precipitation forcing in land data assimilation systems often do no more than apply multiplicative errors to precipitation fields. In this paper, an ensemble-based Bayesian method for characterization of uncertainties associated with precipitation retrievals from spaceborne instruments is introduced. This method is used to produce stochastic replicates of precipitation fields that are conditioned on precipitation observations. Unlike previous studies, the error likelihood is derived using an archive of historical measurements. The ensemble replicates are generated using a stochastic method, and they are intermittent in space and time. The replicates are first projected into a low-dimension subspace using a problem-specific set of attributes. The attributes are derived using a dimensionality reduction scheme that takes advantage of singular value decomposition. A nonparametric importance sampling technique is formulated in terms of the attribute vectors to solve the Bayesian sampling problem. Examples are presented using retrievals from operational passive microwave instruments, and the performance of the method is assessed using ground validation measurements from a surface weather radar network. Results indicate that this ensemble characterization approach provides a useful description of precipitation uncertainties, with a posterior ensemble that is narrower in distribution than its prior while containing both precipitation position and magnitude errors.
1. Introduction
High-resolution precipitation retrievals from spaceborne instruments provide useful global precipitation
estimates. However, high uncertainties are associated
with these retrievals. Satellite observations of precipitation are indirect, noisy, and sometimes limited in
space and time. As part of the Global Precipitation
Measurement (GPM) mission, precipitation retrievals
from the GPM constellation of satellites are to be
merged to produce global high-temporal- and high-spatial-resolution maps of precipitation. However, the
associated uncertainties in each of the single-satellite
retrievals need to be quantified for an optimal merging
algorithm. Moreover, since precipitation forcing is the
primary source of uncertainty in surface hydrological
models, it is important to characterize precipitation errors in a realistic way, both to obtain better hydrologic
predictions and to assess the quality of these predictions.
The common approach toward precipitation uncertainty
characterization assumes that precipitation is a homogeneous random field, with the same statistical properties
everywhere. Usually the probabilistic descriptions of these
fields are parametric, providing only limited flexibility for
generating samples of realistic complexity. A more realistic characterization is inhomogeneous, allowing for
statistical variations and intermittency over space (and
over time for dynamic characterizations). Ensemble-based
probabilistic approaches provide an elegant nonparametric way to account for such features. Moreover,
an ensemble of spatially inhomogeneous precipitation
replicates can be used as realistic random forcing in
ensemble forecasting and uncertainty quantification
analyses of land surface models (LSMs).
Several studies investigate the uncertainties associated with precipitation retrievals from spaceborne
instruments. These studies can be categorized into two
groups: those that investigate and evaluate precipitation
products (Morrissey and Greene 1998; McPhee and
Margulis 2005; Tian and Peters-Lidard 2007; Habib et al.
2009; Sapiano and Arkin 2009; Tian et al. 2009; Tian and
Peters-Lidard 2010; Shen et al. 2010; Anagnostou et al.
2010; Stampoulis and Anagnostou 2012; Habib et al.
2012; Kirstetter et al. 2013a; S. Chen et al. 2013a,b; Joshi
et al. 2013; Alemohammad et al. 2014; Tang et al. 2014;
Alemohammad et al. 2015) and those that model the
errors associated with these retrievals (Bellerby 2007;
AghaKouchak et al. 2009; Villarini et al. 2009;
AghaKouchak et al. 2010b; Tian et al. 2013; Kirstetter
et al. 2013b; Seyyedi et al. 2014a,b; Maggioni et al. 2014).
Previous studies can be further categorized into those
that investigate global products based on merging retrievals from different instruments and those that investigate single-instrument-based products.
Stochastic methods of synthesizing precipitation fields
from satellite data have also been used to investigate
uncertainties in satellite retrievals. However, some have
been focused on quantifying the effects of limited satellite
overpasses (Bell 1987; Bell et al. 1990; Astin 1997; Steiner
et al. 2003; Gebremichael and Krajewski 2004), some
have not considered the spatial correlation of precipitation and errors (Bardossy 1998; Kwon et al. 2007;
Gomi and Kuzuha 2013), some perform only precipitation simulation rather than conditional precipitation estimation (Sivapalan and Wood 1987; Wheater et al. 2000;
Cowpertwait et al. 2002; Ferraris et al. 2003), and some
others use parametric distributions for characterizing the
space–time variability of rainfall (Gupta and Waymire
1993; Nykanen and Harris 2003; Bellerby and Sun 2005;
Hossain and Anagnostou 2006; Clark and Slater 2006;
Teo and Grimes 2007; Grimes 2008; AghaKouchak et al.
2010a; Paschalis et al. 2013).
There are also different methods to generate ensembles, and several studies use them to explore uncertainties in precipitation products (Bellerby and Sun
2005; Clark and Slater 2006; Hossain and Anagnostou
2006; Grimes 2008; Wojcik et al. 2009; Paschalis et al.
2013; Wojcik et al. 2014). These studies suffer from using
parametric descriptors for generating ensembles, generating stationary precipitation storms in the replicates,
or being limited to binary precipitation replicates (rain/
no rain). In the present study, we use the method developed by Alemohammad et al. (2015a, manuscript
submitted to Water Resour. Res.) to generate realistic
and spatially discontinuous precipitation fields. The
method is here augmented to include Bayesian update
and conditioning on uncertain observations.
Data assimilation techniques characterize the state
of a natural system using data from different uncertain
sources (e.g., models and observations). Each source
will be given a relative weight based on its uncertainty,
and the estimate balances information from different
sources. This characterization provides an estimate of
the true state of the system together with the uncertainty
of that estimate (McLaughlin 2002, 2014). Ensemble
data assimilation techniques have also been developed
and implemented in many applications. In these techniques, an approximation to the posterior probability
distribution of the system state is derived instead of only
the mean or covariance (Evensen 1994; Reichle et al.
2002; Evensen 2003; Margulis et al. 2006; Zhou et al.
2006; Jafarpour and McLaughlin 2008; Peters-Lidard
et al. 2011; Ines et al. 2013; Eicker et al. 2014; DeChant
and Moradkhani 2014).
The common approach to incorporate the uncertainty
in precipitation forcing in land data assimilation (LDA)
is a perturbation of the input precipitation using either
an additive or multiplicative error model (Reichle et al.
2007; Li et al. 2012; Brocca et al. 2012; H. Chen et al.
2013). However, recent studies show that the use of an
ensemble-based error model [in this case, a two-dimensional satellite rainfall error model (SREM2D)]
to characterize the spatial variability of precipitation errors could better capture soil moisture error properties in
land data assimilation. A more realistic representation of
the sources of error in precipitation retrievals requires a
more complex model that can capture both intensity and
detection error (Maggioni et al. 2011, 2013).
Bayesian estimation theory provides a convenient
data assimilation framework for merging prior information with measurements from a diverse set of data
sources. It simultaneously accounts for both model and
observation uncertainties. In particular, a Bayesian
approach allows us to update prior ensembles using
measurements and to investigate the associated uncertainties (Bocquet et al. 2010). In the recent decade,
Bayesian-based approaches have been widely used to
estimate hydrologic variables, especially for probabilistic hydrologic forecasting (Kavetski et al. 2006;
Bulygina and Gupta 2010; Renard et al. 2011;
Moradkhani et al. 2012; DeChant and Moradkhani
2012; Najafi et al. 2012; Vrugt and Sadegh 2013;
DeChant and Moradkhani 2014). All of these studies
have been focused on quantifying scalar variables or
low-dimensional parameter spaces. However, spatially
distributed environmental variables such as precipitation are higher dimensional, and it is impractical to
implement a nonparametric Bayesian update in high-dimensional space (for further explanation, refer to section 5). One option for dealing with this problem is to reduce the problem size by mapping data to a reduced-dimension subspace. If the subspace is small enough, it is
computationally feasible to derive a reduced-dimension
nonparametric Bayesian posterior distribution from a
reduced-dimension prior distribution and reduced-dimension measurements.
In this paper, for the first time, we propose a framework for applying nonparametric Bayesian inference to
high-dimensional measurements of precipitation and
characterize the uncertainty associated with the true
precipitation. In this framework, an ensemble of equally
probable precipitation replicates is generated using a
realistic stochastic prior replicate generator. Then, the
prior replicates are assigned a likelihood weight from
historical observations. These replicates and the associated weights constitute the posterior distribution of
precipitation. Depending on the application, different
characteristics of the posterior distribution (such as
mode, mean, etc.) can be evaluated. The main innovations of this approach are that we do not assume
any parametric distribution for the precipitation or the
associated errors and that we provide a set of ensemble
replicates that represent the uncertainty in the
measurements.
The focus of this study is to characterize uncertainties
in single-satellite retrievals of precipitation, and no
merged product is considered here. The reason for
taking this approach is that the methods used to merge different retrievals into spatially and temporally continuous precipitation estimates over the globe introduce errors into the final product, through the blending of retrievals from different instruments, that are not characteristic of the individual retrieval algorithms. Therefore, evaluation of the merged product does not provide useful insight into the quality of the retrieval algorithms. By focusing on quantifying errors in single-satellite retrievals, we aim to provide information on
how the future retrieval algorithms should be improved
and how new merging systems should be designed.
The paper is organized as follows: first an overview of
the approach is presented, and then the datasets are
introduced. Next, the methodology, including the dimensionality reduction technique, is explained in detail.
Finally, examples of this uncertainty quantification
along with evaluation of the method are presented.
2. Approach
Bayes’s theorem provides a tool to measure the degree of belief in the outcome of an event using probability distributions. Based on Bayes’s theorem, the
degree of belief changes as new observations of the
event become available. This theorem provides a useful
platform for merging prior knowledge with current observation(s) to infer the state of a system. Assume X is
the state of the system to be inferred and Z is an observation of X. Based on Bayes's theorem,

$$p_{X|Z}(x \mid z) = \frac{p_{Z|X}(z \mid x)\, p_X(x)}{p_Z(z)}, \qquad (1)$$

in which pX(x) is the prior distribution, pZ|X(z|x) is the measurement model, pZ(z) is the marginal distribution of pZ|X(z|x), and pX|Z(x|z) is the posterior distribution (Wikle and Berliner 2007; Gelman et al. 2013). In this formula, x and z are particular values of the random variables X and Z, respectively, and have m dimensions.
In many applications pZ(z) is not known, and pX|Z(x|z) is usually estimated up to a normalizing factor since pZ(z) is independent of x. This is the case in the present study as well; therefore,

$$p_{X|Z}(x \mid z) = C \, p_{Z|X}(z \mid x) \, p_X(x), \qquad (2)$$

in which C is a constant. The most common uses of Bayes's theorem rely on Gaussian assumptions for the distributions pX(x) and pZ|X(z|x) (Bocquet et al. 2010). This results in a Gaussian pX|Z(x|z) with ensemble moments the same as those provided by the Kalman filter, which is the time-recursive best linear estimator (Kalman 1960; Maybeck 1982). However, as was mentioned earlier, the goal of this study is to go beyond parametric distributions such as the Gaussian. Instead, we will derive empirical measurement model and prior distributions directly from data.
In this study, we use three types of information. The
first one is the current uncertain precipitation measurement (zc ). This is the retrieval from a passive microwave (PMW) instrument to be characterized. The
characterization aims to infer the uncertainties associated with the true precipitation, given zc . The second
information source is a set of prior replicates. These
replicates are generated using a training image (TI),
following the details described in section 4. Prior replicates are denoted as xi, i = 1, ..., Nprior. The prior TI itself is characterized by a vector of pixel values denoted as xp. The third type of information is a set of archival
pairs of satellite and ground-based radar measurements.
In our implementation, these are collocated retrievals of
precipitation obtained from satellite and ground-based
radar. The ground-based radar product is used as a more
accurate benchmark (or ground truth) for the true precipitation while the satellite product is generally less
accurate. These pairs serve as samples for deriving the
error likelihood of each measurement [pZ|X(zc | x)]. Details of the likelihood calculation procedure are presented in section 6a. There are a total of Na pairs in this group. In general, NT data points are used in this study, in which NT = Nprior + 2 × Na + 1 (the 1 accounts for the current measurement zc).
Figure 1 shows the overall structure of our methodology. First, we use a stochastic method to generate
prior replicates for a given measurement. These prior
replicates form the prior distribution in the Bayes’s
theorem. Details of prior generation are described in
section 4. Next, all of the measurements and prior replicates are mapped to a reduced-dimension space. This
mapping is necessary since it is impractical to apply
nonparametric distributions in Bayes’s theorem in high
dimensions. Besides the use of nonparametric distributions, introduction of the dimensionality reduction
scheme for precipitation images is another novel contribution of this study. The dimensionality reduction
scheme is described in section 5.
Finally, we use importance sampling in the reduced-dimension space to generate samples from the posterior
distribution. In this regard, we construct the likelihood
and posterior weights using the prior replicates and
archival measurements as described in this section. Details
of the importance sampling are presented in section 6.
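The workflow in Fig. 1 can be read as an ensemble reweighting: prior replicates are generated, projected into a low-dimensional attribute space, weighted by an empirical likelihood, and resampled to represent the posterior. The following Python sketch illustrates only this flow of the computation; the random prior fields, the SVD projection, and the Gaussian likelihood are illustrative placeholders, not the operational components described in sections 4-6.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real inputs (16 x 16 precipitation fields, flattened).
n_prior, m, k = 1000, 256, 3                  # replicates, pixel dimension, attribute dimension
priors = rng.gamma(2.0, 1.0, (n_prior, m))    # placeholder prior replicates
z_c = rng.gamma(2.0, 1.0, m)                  # placeholder current measurement

# 1) Build a reduced attribute space (here: SVD of the prior population).
_, _, vt = np.linalg.svd(priors - priors.mean(axis=0), full_matrices=False)
C = vt[:k].T                                  # m x k basis matrix
attrs = priors @ C                            # prior replicates in attribute space
z_hat = z_c @ C                               # measurement in attribute space

# 2) Weight each prior replicate by a likelihood evaluated in attribute space.
#    (A toy Gaussian likelihood; the paper builds it from archival data instead.)
w = np.exp(-0.5 * np.sum((attrs - z_hat) ** 2, axis=1))
w /= w.sum()

# 3) Resample replicates according to the weights; the resampled set
#    approximates the posterior ensemble.
idx = rng.choice(n_prior, size=n_prior, p=w)
posterior = priors[idx]
print(posterior.shape, w.max())
```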
3. Datasets
The spatial coverage of the study is part of the central
continental United States ranging from 31°37′30″ to 47°37′30″N latitude and from 104°37′30″ to 80°37′30″W longitude, as shown in Fig. 2. The domain is selected in a
longitude, as shown in Fig. 2. The domain is selected in a
way to avoid measurements in mountainous regions of
the western United States, where ground-based radars
are prone to beam blockage and considerable errors in
estimation (McCollum et al. 2002). We have also excluded areas over the Great Lakes since the precipitation retrieval algorithm from the PMW instrument is
different over water bodies than the one used over land
areas. To avoid swath limitations and missing pixels
and at the same time maximize the number of samples,
the study domain is divided into 23 subregions, each
covering a 4° × 4° region. Samples are generated over
each subregion using the collocated datasets. All of the three products are posted on a 0.25° latitude × 0.25° longitude grid; therefore, the size of each sample is 16 pixels × 16 pixels.
The temporal coverage of this study is from July 2003 to December 2010. We only use data from the months April–October since the storm patterns and the errors have different characteristics in different seasons.

FIG. 1. Flowchart of the ensemble-based characterization proposed in this paper.

Four datasets are used in this study:
1) PMW-based precipitation retrievals. The uncertain PMW-based measurements of precipitation are obtained from the Advanced Microwave Sounding Unit B (AMSU-B) on board the NOAA-16 satellite (hereafter referred to as NOAA-16). The precipitation rates from this product are available through the Microwave Surface and Precipitation Products System (MSPPS) orbital data at NOAA. These retrievals are available on orbital grids, and we map them into a 0.25° × 0.25° grid using nearest-neighbor sampling that does not affect the marginal distribution of precipitation intensities.
2) Ground-based radar precipitation retrievals. Ground-based radar measurements of precipitation are obtained from the National Weather Service's WSR-88D network (NEXRAD-IV product). These serve as ground truth for quantification of errors in the PMW precipitation measurements. NEXRAD-IV is a national radar-based and surface gauge–corrected 4-km-resolution precipitation product produced from a mosaic of precipitation measurements obtained from single radars across the United States (Fulton et al. 1998; Lin and Mitchell 2005). We coarse-grain them (using areal averaging) to a 0.25° × 0.25° grid so they are compatible with the NOAA-16–based observations. The NEXRAD-IV data are obtained from the
National Center for Atmospheric Research (NCAR)
Earth Observing Laboratory (EOL).
FIG. 2. Spatial coverage of the study domain. Grids show the 23 sampling subregions. Subregions that are labeled 1 and 2 are the ones used later in examples 1 and 2, respectively.
3) Prior precipitation retrievals. Precipitation measurements from the Tropical Rainfall Measuring Mission
(TRMM) Microwave Imager (TMI) (hereafter referred to as TMI) are used as training images to
generate prior replicates, as described in section 4.
The TRMM 2A12 product (version 6) that is used here
is produced on an orbital grid, and we map it into a 0.25° × 0.25° grid using nearest-neighbor sampling.
4) Cloud observations. Cloud-top temperature observations from the infrared (IR) instrument on board
the Geostationary Operational Environmental Satellites (GOES) are used here to derive the potential
binary rain and nonrain areas. The potential rain
areas are used to constrain the prior replicates to
generate precipitation fields only inside cloudy areas.
The GOES data are obtained from National Climatic
Data Center (NCDC) dataset 6146. This is a global,
half-hourly, IR-based cloud-top brightness temperature compiled from all available geostationary meteorological satellites (Janowiak et al. 2001). The
original product is posted on a 4-km spatial grid. We coarsen the data to a 0.25° × 0.25° grid by areal averaging (a minimal regridding sketch follows this list). The potential rainy areas are detected
using the method described in Wojcik et al. (2009)
and Alemohammad et al. (2015a, manuscript submitted to Water Resour. Res.).
In total, 4640 collocated pairs of precipitation measurements from NOAA-16 and NEXRAD-IV are generated. These pairs are used to derive the likelihood
function in the Bayesian update framework.
There is a difference between the temporal resolution
of the NEXRAD-IV and NOAA-16 that introduces a
temporal representativeness error into the NOAA-16
retrievals. NEXRAD-IV data are hourly accumulations
and NOAA-16 data are instantaneous observations
centered at the middle of the hourly accumulation time
of NEXRAD-IV. This is an inherent limitation of the
data, and it is considered as part of the total error in
NOAA-16 measurements. We derive the error likelihoods by comparing historical NEXRAD-IV and
NOAA-16 measurements and characterize the errors in
NOAA-16 with the same definition.
4. Prior generation
To generate an informative prior replicate population, the method developed by Alemohammad et al.
(2015a, manuscript submitted to Water Resour. Res.) is
used. This method uses a TI to describe the spatial
structure of precipitation storms. It then generates replicates that have a spatial structure that is statistically
similar to the training image in terms of precipitation
intensity and spatial cluster size distribution. This
structure is defined by decomposing the training image
with an orientation and scale-dependent spatial filter
(Alemohammad et al. 2015a, manuscript submitted to
Water Resour. Res.). The training image for the Bayesian update problem (xp ) should be a precipitation image
the same size and spatial resolution (pixel size) as the
current measurement (zc ). This image is intended to
reflect the prior knowledge of the user about the state of
precipitation at time t before zc is made. TI selection is
problem specific, and it is based on the expert’s judgment about the general characteristics of the true rainfall to be quantified from noisy measurements. In
addition, the technique used to generate replicates from
the TI should produce a prior ensemble that is sufficiently diverse to accurately describe uncertainty in the
true precipitation.
In this study, we obtain prior training images from
TMI-based precipitation measurements over the same
geographical region at the time preceding the current
uncertain measurement (between 1–3 h previous to the
current measurement time). TMI-based retrievals are
one of the most accurate measurements of precipitation
available from microwave instruments. We use a lagged
TMI as a source of prior information both to illustrate our
general approach to uncertainty quantification and because it is shown by Alemohammad (2015) to be an effective and reasonable choice. In general, the user can
choose any appropriate source of prior information to
generate a set of prior replicates. For example, the output
of a forecast model (physical or stochastic) initialized with
estimates constructed at a previous time could also be a
convenient training image for generating a prior ensemble.
This approach for constructing prior information is
similar to analog techniques applied in statistical downscaling (Zorita and von Storch 1999; Tian and Martinez
2012; Shao and Li 2013) and weather and climate forecasting (Fernandez-Ferrero et al. 2010; Orlowsky et al.
2010; Delle Monache et al. 2013; Martin et al. 2014).
Analog techniques take advantage of a pattern recognition approach in which the probability of the future state
of the system is estimated using observed patterns that
have evolved from an analog state in the past.
The replicate generation method used here has parameters that can be tuned to generate a population of
replicates with different levels of variance. The variance
in the prior population is important because too much
diversity will make the prior uninformative and give
high weight to the noisy measurement. On the other
hand, if the prior population is not sufficiently diverse,
the estimator will return an image that looks very much
like the training image and ignore the measurement.
The parameters to be set in the prior replicate generation are the number of orientations (Nor ) and scales
(Nsc ) in the decomposition as well as the number of
replicates (Nprior ). After some experimentation, we
found that the following parameter choices work well
for our study: Nor = 4, Nsc = 2, and Nprior = 1000. From
the 1000 replicates, half are not constrained by any TI
pixel values, but the other half are constrained to preserve 5% of the TI pixel values. Moreover, all of the
replicates are constrained by cloud observations from
GOES satellites to not have precipitation outside of the
cloudy area. This constraining is necessary since any
raining pixel outside of the cloudy area would be physically impossible. Details of this constraining process are
described in section 3.4 of Alemohammad et al. (2015a,
manuscript submitted to Water Resour. Res.).
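The cloud constraint amounts to a masking operation: replicate pixels that fall outside the GOES-derived potential rain area are set to zero. A minimal sketch follows; the replicates and the binary mask are synthetic placeholders, and the actual rain-area detection follows Wojcik et al. (2009) rather than the random mask used here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder prior replicates (N x 16 x 16) and a binary potential-rain mask
# derived from GOES cloud-top temperatures (1 = possibly raining, 0 = clear).
replicates = rng.gamma(2.0, 1.0, (1000, 16, 16))
cloud_mask = rng.random((16, 16)) < 0.4

# Constrain every replicate: no precipitation is allowed outside the cloudy area.
constrained = replicates * cloud_mask          # broadcasting zeroes the clear-sky pixels
assert constrained[:, ~cloud_mask].max() == 0.0
```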
A set of 166 training images (generated from historical
TMI observations) is used to determine the variance of
the prior replicates. The generated prior population is
informative and diverse, and the replicates capture the
uncertainty in the true precipitation. Moreover, results
indicate that the true precipitation (based on the benchmark NEXRAD-IV measurement) can be a likely
member of the prior population (Alemohammad 2015).
5. Mapping precipitation data in a reduced
attribute space
It is impractical and computationally inefficient to
generate samples from the unknown Bayesian posterior
density in a high-dimensional space. Moreover, even if
this were feasible, we would need to generate a very
large number of samples in order to construct accurate
moments and to approximate high-dimensional nonparametric posterior distributions. In this study, the
precipitation test images are 16 × 16 pixels, which is a 256-dimensional space. Even this relatively small space
is too large for nonparametric Bayesian methods. One
possible approximate solution to this dilemma is to map
all of the data (current measurement, prior replicates,
and archival pairs) to a low-dimensional space, hereafter
called attribute space. The necessary Bayesian computations can be efficiently performed in the attribute
space, and the posterior weight of each low-dimensional
sample will be associated with the corresponding high-dimensional sample.
The goal of the dimensionality reduction process is to
identify attributes or dimensions that can best represent
the data in low-dimensional space. Once these attributes
are identified, the high-dimensional images are mapped
to the attribute space, and each data point in the original
pixel space (or image space) will have a set of attributes
that describes, in an approximate way, the corresponding high-dimensional image. Inevitably, there is some
information loss in this mapping, and the reconstruction
from attribute space to pixel space is not one-to-one.
However, we choose the mapping to minimize this information loss and to develop the most accurate possible
low-dimensional representation of the original image.
This dimensionality reduction is problem specific, and
many approaches are proposed in previous research
studies (van der Maaten et al. 2009; Rajaraman and
Ullman 2011).
One common approach is to use multidimensional
scaling (MDS) techniques to map data into the attribute
space (Torgerson 1952; Cox and Cox 2001). MDS techniques are a family of techniques that aim to map data
from an original high-dimensional space to a low-dimensional space by preserving the pairwise distances
between data as much as possible (Borg and Groenen
2005). There are a range of algorithms for this purpose,
including but not limited to Sammon mapping (Sammon
1969), curvilinear component analysis (CCA; Demartines
and Herault 1997), generative topographic mapping
(GTM; Bishop et al. 1998), Isomap (Tenenbaum et al.
2000), locally linear embedding (LLE; Roweis and Saul
2000), diffusion maps (Coifman and Lafon 2006), data-driven high-dimensional scaling (DD-HDS; Lespinats
et al. 2007), and RankVisu (Lespinats et al. 2009).
Applying MDS techniques to map precipitation images to a low-dimensional space has three challenges.
First, there is a need for a distance metric in pixel space.
To the best of authors’ knowledge, there is no distance
metric that can measure distances between textured
images. Common options such as the Euclidean distance
cannot capture the complex spatial structure and intermittency of textured rainfall images. Scalar distances
based on Euclidean and related measures frequently
miss visually obvious differences and similarities. Second, the quality of the mapping from MDS techniques degrades as the number of samples increases. Besides the
mathematical proof for the mapping error [presented in
Matousek (1996) and Wojcik et al. (2014)], it is intuitive
that with more samples, more pairwise distances need to
be preserved in the low-dimensional space. This limits
the number of samples that can be mapped to the attribute space, and probability distributions estimated
from such restricted sample sets will have lower accuracy. Third, mapping functions from these techniques
operate on all data simultaneously. When the data are
mapped to the low-dimensional space, any new data
point that arrives cannot be mapped individually and all
of the data need to be mapped again together. Considering these three challenges, MDS techniques are not
appropriate for use in this problem.
Other techniques such as discrete cosine transform
(DCT) are also available for dimensionality reduction
(Ahmed et al. 1974). Using this technique, data are
mapped to a set of coefficients in the frequency domain;
then, the nonsignificant coefficients are truncated and
the high-dimensional image is represented using a low-dimensional set of significant coefficients (Jafarpour
et al. 2009, 2010). This is the technique that is used for
compressing JPEG and MP3 files. However, our investigations (not shown here) reveal that this is not
appropriate for the precipitation application. The basis
functions that are used in this method are a set of different cosine functions (with different amplitudes and
wavelengths). To be able to properly describe the
highly nonhomogeneous precipitation images using
this technique, it is necessary to keep a large number of
coefficients, which increases the dimension of the
attribute space.
The dimensionality reduction problem used in this
study is based on the minimization of reconstruction error—
the error that occurs when a high-dimensional image is
reconstructed from a low-dimensional approximation.
We define the problem as follows. Given a set of data points in pixel space (denoted yi, i = 1, ..., NT), find the set of attributes ŷi by solving the following optimization problem:

$$\hat{y}_i = \arg\min_{\hat{y}_i} \sum (y_i - C\hat{y}_i)^2, \qquad i = 1, \ldots, N_T, \qquad (3)$$

in which C is the matrix of basis vectors defined as

$$C = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1k} \\ c_{21} & c_{22} & \cdots & c_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ c_{m1} & c_{m2} & \cdots & c_{mk} \end{bmatrix}.$$

Each column of C is one basis vector denoted as Ci, i = 1, ..., k; m is the dimension of data in the pixel space; and k is the number of dimensions in the attribute space.
The objective function minimizes the square difference between the original image in pixel space and the reconstructed image from the attributes. The goal is to find the set of ŷi given C so that there is minimum information loss as a result of mapping. This information loss is formulated as the difference between input samples and reconstructed samples from attributes.
The choice of basis vectors is problem specific. The
approach here is to use singular value decomposition
(SVD) and derive the eigenvectors of the samples.
These eigenvectors can be used as the basis vectors in
this formulation. However, if we use all of the samples to
derive the basis vectors using SVD, this will result in a
set of smooth basis vectors that cannot properly distinguish the pairs of archival satellite and ground-based
radar measurements. Defining a set of measurement-specific basis vectors is required to distinguish these
images in the attribute space.
Basis vector selection
The basis vectors that define the attribute space should capture as much variability as possible for a given number of attributes. This enables us to maximize the information contained in the attribute space approximations to the original images. Here, three sets of vectors are derived from three populations of samples; then, they are used together to define a final common set of basis vectors to map all of the data points to the attribute space.

FIG. 3. Comparison between the eigenvectors derived from censored and uncensored population of archival pairs.
The first population used to derive basis functions is a
censored set of all the archival pairs. The censoring
identifies pairs of archival precipitation measurements,
with each member similar to the current measurement
from the satellite retrieval. The population of storms
from all years over the region is diverse, and using all of
them as samples to derive basis vectors results in smooth
basis functions. Figure 3 shows a comparison between
eigenvectors derived from all of the archival samples
and the ones derived from a censored population.
Smooth basis vectors cannot adequately distinguish
between the ground validation and satellite measurements in a given pair. Therefore, they give errors with
values of zero in the attribute space even though the
actual errors are nonzero. This artifact produces very
narrow likelihood functions that incorrectly imply that
the current measurement is perfect. This problem can be
solved if the basis functions are measurement specific, as
is done when the censoring procedure is implemented.
Four geometric features are used to censor the archival pairs: orientation, solidity, and center of mass in
two directions. Historical events that are similar to the
current measurement with respect to all of these features are retained for the basis function derivation.
These particular features were selected after an extensive evaluation of possible alternatives [texture based,
geometric, and morphological (Lee et al. 1985; Kumar
and Foufoula-Georgiou 1994)]. These features are similar to the ones used in verification of spatial precipitation forecasts (Gilleland et al. 2009; Ahijevych
et al. 2009; Gilleland et al. 2010; Wolff et al. 2014).
Orientation is the angle between the x axis and the
major axis of the ellipse that has the same second moments as the precipitation area in the image. Solidity is
the ratio of the area of the storm to the area of the
convex hull that is the smallest convex polygon that can
contain the storm area. The center of mass coordinates locate the center of the precipitation storm in the x and y directions.
For each zc , these four features are calculated. Then,
the archival pairs are screened, and each pair that has a value within ±10% of the values calculated for the features in zc is accepted. The rest of the pairs are eliminated as not being relevant for the current measurement. Next, SVD is applied to the censored population and the leading eigenvectors are derived. We use between 1 and 3 leading eigenvectors from this population.
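A sketch of this censoring step is given below, with the feature extraction itself (image moments, convex hulls) omitted and replaced by precomputed feature vectors; the helper name, the synthetic fields, and the synthetic features are illustrative only.

```python
import numpy as np

def censor_and_basis(archival_fields, archival_features, zc_features, n_vectors=2, tol=0.10):
    """Keep archival samples whose geometric features are within +/-10% of the
    current measurement's features, then return leading SVD eigenvectors."""
    within = np.abs(archival_features - zc_features) <= tol * np.abs(zc_features)
    keep = within.all(axis=1)                  # all four features must match
    subset = archival_fields[keep]
    subset = subset - subset.mean(axis=0)      # center before SVD
    _, _, vt = np.linalg.svd(subset, full_matrices=False)
    return vt[:n_vectors].T                    # m x n_vectors block of basis vectors

# Illustrative call with synthetic data (flattened 16 x 16 fields, 4 features each).
rng = np.random.default_rng(4)
fields = rng.gamma(2.0, 1.0, (4640, 256))
features = rng.uniform(0.8, 1.2, (4640, 4))
basis = censor_and_basis(fields, features, zc_features=np.ones(4))
print(basis.shape)
```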
The second population used to derive basis functions
is the set of prior samples generated from a training
image appropriate for the current measurement. SVD is
applied to this population, and the leading eigenvectors
are calculated. Similar to the censored archival data,
between 1 and 3 eigenvectors from this population are
used as basis vectors.
The final member of the basis function population is
the current measurement itself. To use this measurement
as a basis vector, the spatial mean of the measurement is
removed from each pixel, then the measurement is normalized with respect to its length:
$$\bar{z}_c = \frac{z_c - \operatorname{mean}(z_c)}{\lvert z_c \rvert}. \qquad (4)$$
The eigenvectors from the censored archival population and the TI-based prior population together with
the normalized current measurement (z̄c) form the basis
vector matrix (C) of the reduced-dimension attribute
space. To show the sensitivity of the results to the
number of basis vectors (equivalently the number of
dimensions in the attribute space), three combinations
are used for C. The first one has three dimensions and
uses one eigenvector from the censored archival population, one eigenvector from priors, and the normalized
measurement. The second one has five dimensions and
uses two eigenvectors from the censored archival population, two eigenvectors from priors, and the normalized
measurement. The last one has seven dimensions and
uses three eigenvectors from the censored archival
population, three eigenvectors from priors, and the
normalized measurement. The sensitivity analysis is
presented in section 8.
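A sketch of how such a basis matrix could be assembled is shown below, with the eigenvector blocks passed in as placeholders and the measurement normalized as in (4); the helper name is illustrative, and reading |zc| in (4) as the Euclidean norm is an assumption.

```python
import numpy as np

def build_basis(archival_eigs, prior_eigs, z_c):
    """Stack eigenvectors from the censored archival pairs, eigenvectors from the
    prior replicates, and the normalized current measurement into the basis matrix C."""
    z_bar = z_c - z_c.mean()                 # remove the spatial mean, as in Eq. (4)
    z_bar = z_bar / np.linalg.norm(z_bar)    # normalize to unit length (assumed norm)
    return np.column_stack([archival_eigs, prior_eigs, z_bar])

# Three-dimensional attribute space: one eigenvector from each population plus the measurement.
rng = np.random.default_rng(5)
C = build_basis(rng.standard_normal((256, 1)),
                rng.standard_normal((256, 1)),
                rng.gamma(2.0, 1.0, 256))
print(C.shape)   # (256, 3)
```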
6. Importance sampling
One of the novel contributions of this study is that it
does not assume any parametric or additive/multiplicative
form either for the prior distribution of precipitation or
for the measurement error distribution. Indeed, the
error likelihood function in the Bayes’s theorem is calculated using pairs of satellite and benchmark groundbased radar measurements that are mapped to the
attribute space. This section presents the nonparametric
importance sampling technique that is implemented to
characterize the posterior distribution in (1).
Importance sampling is a method to generate samples
from a target distribution (p) by weighting samples that
are generated from a second distribution (q, called the
importance distribution or the proposal). In our application, the target distribution is the posterior distribution of the low-dimensional vector of precipitation
attributes. Sampling from an importance distribution is
necessary when there are complexities in sampling directly from p. In the Bayesian update problem, it is not
possible to directly sample from the posterior distribution [pX|Z(x|z)]; therefore, samples are generated from the importance distribution (Robert and Casella 2004; Cornuet et al. 2012). In the application considered here, the prior distribution [pX(x)] is selected as the importance distribution, so the importance samples are the prior replicates. These samples are weighted using the likelihood function [pZ|X(z|x)]. The weighted prior
samples constitute the posterior distribution that can be
characterized for different applications.
a. Likelihood function and posterior weights
This section presents details of the posterior weight
calculations. Following the Bayesian formalism, these
weights are derived from the likelihood function. However, if we take a nonparametric approach we do not
have a functional expression, such as a Gaussian function,
for the likelihood. Instead, the likelihood is defined implicitly, through the pairs of archival satellite and ground
truth (ground-based radar) measurement attributes that
are mapped into the reduced-dimension space. We
construct a likelihood function by fitting a kernel density
consisting of a mixture of Gaussian functions to these
attribute pairs. The number of component functions used
is adaptive and is determined using the Akaike information criterion (AIC). This fitted distribution prox) between the
vides the joint probability of pZ^ ,X^ (^z, ^
current measurement and truth (the ‘‘hat’’ notation indicates that a variable is a vector of attributes in the
reduced-dimension space). The joint density can be used
to derive a nonparametric likelihood function. Two points
should be noted about this joint density. First, using a
mixture of Gaussians does not mean that the likelihood is
assumed to be a Gaussian distribution since the mixture
may have a highly non-Gaussian shape and may even be
multimodal, depending on how the component Gaussian
functions are combined. Second, the likelihood function
derivation does not assume that measurement errors are
additive or multiplicative. The joint density depends
symmetrically on the satellite and ground-based radar
measurements, rather than on their difference.
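As an illustration, a mixture-of-Gaussians fit with AIC-based model selection could be written with scikit-learn's GaussianMixture (an assumed dependency; the paper does not prescribe an implementation), applied here to synthetic attribute pairs. The helper name fit_joint_density is hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture   # assumed dependency

def fit_joint_density(z_attrs, x_attrs, max_components=6):
    """Fit a Gaussian mixture to the joint (measurement, truth) attribute pairs,
    selecting the number of components with the Akaike information criterion."""
    pairs = np.hstack([z_attrs, x_attrs])      # each row: [z_hat, x_hat]
    fits = [GaussianMixture(n, covariance_type="full", random_state=0).fit(pairs)
            for n in range(1, max_components + 1)]
    return min(fits, key=lambda g: g.aic(pairs))

# Synthetic archival attribute pairs (3D measurement attributes, 3D truth attributes).
rng = np.random.default_rng(6)
truth = rng.standard_normal((500, 3))
meas = truth + 0.3 * rng.standard_normal((500, 3))
gmm = fit_joint_density(meas, truth)
print(gmm.n_components)
```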
Once the joint density is constructed, the likelihood value for each of the prior samples x̂i is obtained from the definition of conditional probability:

$$p_{\hat{Z}|\hat{X}}(\hat{z}_c \mid \hat{x}_i) = \frac{p_{\hat{Z},\hat{X}}(\hat{z}_c, \hat{x}_i)}{p_{\hat{X}}(\hat{x}_i)}; \qquad i = 1, \ldots, N_{\mathrm{prior}}. \qquad (5)$$

The value of the likelihood function for each prior replicate is basically the posterior weight for that sample. Therefore, we define the posterior weight of each prior sample x̂i as wx̂i = pẐ|X̂(ẑc | x̂i).
The numerator of (5) is evaluated as described above. However, the denominator is the marginal distribution of the joint density pẐ,X̂(ẑc, x̂i). This is impractical to calculate from numerical integration on a regular grid since there is no analytical expression for the joint density. Instead, we compute pX̂(x̂i) by applying a Monte Carlo integration technique using a second importance sampling algorithm (Robert and Casella 2004).
To this end, an importance density qi is selected for each prior sample x̂i and the integration is calculated using

$$p_{\hat{X}}(\hat{x}_i) = \int p_{\hat{Z},\hat{X}}(z, \hat{x}_i)\, dz = \int \frac{p_{\hat{Z},\hat{X}}(z, \hat{x}_i)}{q_i(z)}\, q_i(z)\, dz = E_{q_i}\!\left[\frac{p_{\hat{Z},\hat{X}}(z, \hat{x}_i)}{q_i(z)}\right], \qquad (6)$$

in which Eqi is the expectation over qi. Therefore, pX̂(x̂i) is evaluated by generating samples z from qi and calculating the expectation on the right side of (6). For this problem, the importance density is chosen for convenience to be Gaussian with mean μqi and variance σ²qi. Note that this does not imply that either the likelihood or the posterior is assumed to be Gaussian. It only implies that the importance samples weighted in the Monte Carlo integration procedure are distributed according to a Gaussian distribution. These weights will generally yield a non-Gaussian pX̂(x̂i). The values of μqi and σ²qi used in qi are estimated based on a population of ẑj samples from archival pairs that have their corresponding x̂j close to the prior sample x̂i. Computational experiments revealed that this choice of qi is appropriate and the value of the integral converges as the number of z samples increases.
The likelihood calculation described above is also applied to ẑc. Since this measurement is itself a potential
representation of the true precipitation, it is included in
the prior ensemble. One of the examples presented in
section 7 assigns the highest posterior probability to the
current measurement. This is important since it shows
that in this specific example the prior has not been informative enough to improve our characterization beyond the uncertain current measurement itself.
b. Characterizing the posterior distribution
The ultimate goal of our Bayesian uncertainty quantification procedure is to generate samples from the
posterior distribution pX̂|Ẑ(x̂ | ẑc). In particular, we seek the subset of samples with significant probability that collectively describe the region in the attribute space that is most likely to include the true precipitation attribute vector (and, by implication, the true precipitation image). The high probability region is centered on the mode of the posterior distribution, which is calculated using the following steps:
1) Resample the prior replicates using the importance weights wx̂i. This resampled population represents samples from the posterior distribution.
2) Estimate pX̂|Ẑ(x̂ | ẑc) using a kernel density fit (mixture of Gaussian distributions) to the resampled population.
3) Identify the mode of the posterior distribution.
4) Select a subset of replicates that is closest to the mode of the posterior distribution.
The last step is conducted because the mode of the
distribution is not necessarily a point in the pixel space
(i.e., an image). All of these calculations are carried out
in the attribute space, and not all of the points in the low-dimensional attribute space can be mapped back to the
pixel space. Only points that correspond to a prior replicate have a representation in the pixel space. The prior
replicates that are closest to the mode of the distribution
define a set of corresponding high posterior probability
images in the original pixel space. Each image inherits
the probability weight of the corresponding point in the
attribute space.
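The four steps can be sketched as follows; scipy's gaussian_kde stands in for the Gaussian-mixture density fit used in the paper, the mode is approximated by the resampled point of highest estimated density, and all inputs are synthetic placeholders.

```python
import numpy as np
from scipy.stats import gaussian_kde   # stands in for the paper's Gaussian-mixture fit

rng = np.random.default_rng(8)

# Placeholder prior attribute vectors (1000 x 3) and their posterior weights.
attrs = rng.standard_normal((1000, 3))
w = np.exp(-0.5 * np.sum((attrs - 0.5) ** 2, axis=1))
w /= w.sum()

# 1) Resample the priors according to the importance weights.
resampled = attrs[rng.choice(len(attrs), size=len(attrs), p=w)]

# 2) Fit a density to the resampled population and 3) locate its mode,
#    approximated here as the resampled point of highest estimated density.
kde = gaussian_kde(resampled.T)
mode = resampled[np.argmax(kde(resampled.T))]

# 4) Keep the prior replicates whose attributes are closest to the mode.
dist = np.linalg.norm(attrs - mode, axis=1)
top = np.argsort(dist)[:20]          # indices of the 20 most probable replicates
print(mode, top[:5])
```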
7. Examples
This section describes how we use the uncertainty
quantification approach outlined above to characterize
and display posterior samples for different precipitation measurements from the AMSU-B instrument on
NOAA-16. We also evaluate the effect of the number of
dimensions in the attribute space on the quality of the
posterior.
For each example, the precipitation retrieval from
NOAA-16 is used as the current measurement (zc), prior replicates (xi, i = 1, ..., Nprior) are generated using TMI retrievals from preceding times as a prior training image (xp), and archival pairs of NOAA-16 and NEXRAD-IV are used to derive the likelihood. Moreover, the NEXRAD-IV measurement (xT) at the time of the NOAA-16 measurement (zc) is used as the ground truth for the observed precipitation. This measurement is also mapped to the attribute space (denoted x̂T) for evaluation.
Figure 4 shows the current measurement and the
ground validation for example 1 (over subregion 1 in
Fig. 2). Figure 5 shows the example 1 training image and 16 of the prior replicates (randomly selected among the 1000 replicates) generated from it. Recall that the training image is the TMI retrieval from 1 h before the zc measurement time.

FIG. 4. Precipitation measurements for example 1 from NOAA-16 (current measurement) and NEXRAD-IV (ground validation).
Figure 6 shows 16 of the most probable replicates
from the posterior distribution for example 1. These
replicates have the closest distance to the mode of the
posterior distribution. This figure also includes the Jaccard distance between each posterior replicate and the
ground validation (NEXRAD-IV). Jaccard distance is
used here to evaluate the similarity of the posterior
replicates and the ground validation. Jaccard distance
is a proximity measure that calculates the similarity
between two binary images (Tan et al. 2005;
Alemohammad et al. 2014). Jaccard distance takes any
value between zero and one, and a lower distance means
the two images are more similar. Precipitation images
are thresholded at 0 mm h⁻¹ to create binary images appropriate for use with Jaccard distance. In this example, all of the posterior samples have a Jaccard distance less than the one between the measurement and the ground validation (J = 0.538 in Fig. 6).
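A minimal sketch of the Jaccard distance on thresholded rain masks, as used throughout the examples, is given below; the helper name and the synthetic fields are illustrative.

```python
import numpy as np

def jaccard_distance(field_a, field_b, threshold=0.0):
    """Jaccard distance between the rain/no-rain masks of two precipitation fields:
    1 - |intersection| / |union| of the pixels exceeding the threshold."""
    a = field_a > threshold
    b = field_b > threshold
    union = np.logical_or(a, b).sum()
    if union == 0:                 # both fields are completely dry
        return 0.0
    return 1.0 - np.logical_and(a, b).sum() / union

# Two synthetic 16 x 16 fields; a distance near 0 means similar rain footprints.
rng = np.random.default_rng(9)
x = rng.gamma(2.0, 1.0, (16, 16)) * (rng.random((16, 16)) < 0.5)
y = rng.gamma(2.0, 1.0, (16, 16)) * (rng.random((16, 16)) < 0.5)
print(jaccard_distance(x, y))
```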
Moreover, Fig. 7 shows the mean and standard deviation of the most probable posterior replicates for this
example. These figures show that the top posterior replicates have good agreement with the true precipitation
both with respect to spatial pattern and precipitation rate.
Figures 8–11 show another pair of precipitation
measurement examples and the results of the characterization. In this second example, the most probable
sample from the posterior (closest sample to the mode)
is the current measurement itself. In this case, the prior
is not informative enough to improve our knowledge of
the true precipitation above the current measurement
itself. This demonstrates the importance of a good prior.
If the archival data indicate that the current measurement is very accurate, then the prior must be very informative in order to move the most probable posterior
samples away from the measurement. The prior is most
useful when it provides new information, not contained
in the current measurement, about the true state. In the
next section, we will evaluate this property over many
examples and investigate the quality of the priors.
8. Evaluation
Evaluating the statistical properties of an ensemble of
replicates can be an especially challenging task when the
replicates are images and not scalar values. There are
several studies that define indices and skill scores to
evaluate the quality of an ensemble forecast, but they are
focused on scalar variables (Brier 1950; Mason 1982;
Anderson 1996; Wilks 2004; Muller et al. 2005; Roberts
and Lean 2008). Other techniques have also been developed for the evaluation of spatial samples, but they are appropriate only for categorical data (Hagen 2003; Hagen-Zanker 2009).

FIG. 5. Prior TI and 16 of the prior replicates for example 1.

Here, the ensemble of samples can be viewed as an
implicit approximation to the Bayesian posterior distribution. In fact, if there are enough samples, an approximate cumulative distribution function could be
constructed by ordering these samples from smallest to
largest (in the multidimensional attribute space) and
applying their weights to obtain the change in cumulative
probability occurring at each sample point. In practice,
we are unlikely to have enough samples to do this if the
attribute space has more than a few dimensions. Even if
we could construct such an approximate posterior distribution, we do not have an exact expression for the
posterior distribution to compare it to. The best way to
assess the accuracy of the approximate posterior ensemble is to compare it to a large ensemble generated with
Markov chain Monte Carlo (MCMC) methods, but these
methods are also computationally challenging for more
than a few attributes. In the examples that were presented
in section 7, we at least know the true precipitation field
(xT ), so we can use this field to check the credibility of our
approximate posterior distribution. This can serve as a
benchmark for evaluation purposes.
FIG. 6. (right four columns) The most probable replicates from the posterior distribution for example 1. (left) The uncertain satellite
measurement and the ground validation. On top of each of the 16 replicates, as well as the current measurement from NOAA-16, the
Jaccard distance (J) between the replicate and the ground validation is also reported.
As presented in section 4, the prior distribution
(or the ensemble of prior replicates) represents our
knowledge of the true precipitation before the current
measurement is considered. If the current measurement
is informative it should give a posterior distribution that
is narrower than the prior. Or, in terms of samples, the
posterior ensemble is less variable and more compatible
with the current measurement than the prior ensemble.
This section presents a comparison between the posterior replicates and the true precipitation relative to
the prior replicates and the current measurement.
This comparison shows how accurate the posterior
distribution is compared to the prior distribution with
respect to the true precipitation. Specifically, three evaluations are presented. First, a relative distance measure is
calculated (in pixel space and attribute space) to evaluate
the closeness of the high probability posterior samples to
the benchmark precipitation image. This examines if the
posterior samples are a better representation of the true
compared to the prior samples. Then, a rank histogram is
calculated for two scalar variables: total precipitation
mass and percentage of rainy area in the posterior replicates. These two rank histograms show how variable or
biased the posterior ensemble replicates are. Finally, the Jaccard distance is evaluated at different thresholds (based on quantiles). The multithreshold Jaccard distance evaluation provides insight into the spread of the ensemble replicates in the posterior and prior, as well as their closeness to the true precipitation.

FIG. 7. Mean and standard deviation of the 20 most probable posterior ensemble replicates for example 1. The coefficient of variation in the ensemble is large, indicating spread among the members. Standard deviation is a symmetric measure of spread and here only indicative of the distribution.
a. Relative distance indices

We define two indices to evaluate the posterior distribution and to show how much the posterior is improved with respect to the prior and the measurement. These indices measure the position of the posterior probability mode relative to the true precipitation image, compared with the current measurement and the prior. They are defined as

$$R_1 = \frac{J\{x_T, \operatorname{mode}[p_{X|Z}(x \mid z_c)]\}}{J(x_T, z_c)} \qquad (7)$$

and

$$R_2 = \frac{J\{x_T, \operatorname{mode}[p_{X|Z}(x \mid z_c)]\}}{J(x_T, x_p)}, \qquad (8)$$

in which J is the Jaccard distance function. A similar notation is used to define two other relative distance indices in attribute space:

$$\hat{R}_1 = \frac{D\{\hat{x}_T, \operatorname{mode}[p_{\hat{X}|\hat{Z}}(\hat{x} \mid \hat{z}_c)]\}}{D(\hat{x}_T, \hat{z}_c)} \qquad (9)$$

and

$$\hat{R}_2 = \frac{D\{\hat{x}_T, \operatorname{mode}[p_{\hat{X}|\hat{Z}}(\hat{x} \mid \hat{z}_c)]\}}{D(\hat{x}_T, \hat{x}_p)}. \qquad (10)$$

Note that the Euclidean distance D rather than the Jaccard distance J is used in the attribute space. If R1 (or R̂1) is less than one, it means the mode of the posterior is closer to the true image than the current measurement. If R2 (or R̂2) is less than one, it means the mode of the posterior is closer to the true image than the prior training image. If both indices are less than one, the posterior mode gives a better portrayal of the true precipitation than either the current measurement or the training image, at least in terms of the chosen distance measure. It is preferred that all four indices are less than one. However, this is not always the case, as in some situations the choice of prior might not be informative enough or the likelihood function might not be adequate.
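A minimal sketch of how R1 and R2 could be computed in pixel space is given below; the Jaccard helper repeats the definition used in section 7, the function names are hypothetical, and the four synthetic fields stand in for xT, zc, xp, and the posterior-mode image.

```python
import numpy as np

def jaccard_distance(a, b, threshold=0.0):
    """Jaccard distance between rain/no-rain masks (see the sketch in section 7)."""
    a, b = a > threshold, b > threshold
    union = np.logical_or(a, b).sum()
    return 0.0 if union == 0 else 1.0 - np.logical_and(a, b).sum() / union

def relative_indices(x_true, z_c, x_p, posterior_mode_image):
    """R1 compares the posterior mode against the current measurement,
    R2 against the prior training image; values below one favor the posterior."""
    j_mode = jaccard_distance(x_true, posterior_mode_image)
    return j_mode / jaccard_distance(x_true, z_c), j_mode / jaccard_distance(x_true, x_p)

# Illustrative call with synthetic 16 x 16 fields.
rng = np.random.default_rng(10)
fields = [rng.gamma(2.0, 1.0, (16, 16)) * (rng.random((16, 16)) < 0.5) for _ in range(4)]
R1, R2 = relative_indices(*fields)
print(R1, R2)
```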
We used 166 examples and evaluated the four indices.
The results are summarized in Fig. 12. This figure shows
that in 82% of the examples (in 3D attribute space) R1 is less than or equal to one. Therefore, in 18% of the examples the posterior samples generated by this method are not better than the measurement itself. This can be caused by a misleading prior or likelihood distribution or by an uninformative current measurement. The prior can be misleading because the storm might have evolved quickly between the TMI measurement (used as prior training image) and the time of current measurement. The likelihood might be wrong if the true precipitation is a rare event and no storm similar to that has occurred before. In this case, the likelihood function has limited information about the error and it may bias the Bayesian updating process.

FIG. 8. Precipitation measurements for example 2 from NOAA-16 (current measurement) and NEXRAD-IV (ground validation).
In the case of R2 , in 65% of the examples the index is
less than one, so in 35% of the cases the posterior
samples are not better than the prior TI, and this image
gives a better description of the true precipitation than
the Bayesian posterior mode. Although such comparisons are useful, our goal is not to provide a single deterministic estimate of the true precipitation but to
characterize its uncertainty given a particular set of data.
Bayesian sampling aims to provide an ensemble that
collectively represents our uncertainty about the true
precipitation. This uncertainty cannot be adequately
summarized in a single scalar measure. Even though the
prior training image is in a minority of cases closer to the
truth than the mode of the posterior distribution, the
ensemble of posterior replicates provides understanding
about the many possible forms of the true precipitation
event, all more or less equally likely, while the training
image does not. This ensemble can be used as a random
forcing in many hydrological and meteorological models
for ensemble forecasting and to derive the uncertainty
for hydrologic states and fluxes.
Similar results are obtained for R̂1 and R̂2. For these
two indices, the percentage of the cases that have values
less than or equal to one is slightly lower; however, similar
trends are observed. This also shows that the dimensionality reduction method applied here properly
projects data into a low-dimensional space and that the
attributes are representative of images in the pixel space.
SENSITIVITY ANALYSIS
Figure 12 also shows the results of a sensitivity analysis that evaluates the effect of the attribute space dimension on performance. The figure compares the
results for three-, five-, and seven-dimensional attribute
space. In section 5, the combination of eigenvectors
from three different populations that form the basis
vectors for dimensionality reduction was introduced.
For five dimensions, we use two eigenvectors from the censored archival population, two eigenvectors from the prior replicates, and the normalized current measurement as the fifth basis vector. In seven dimensions, we use three eigenvectors from the censored archival population, three eigenvectors from the prior replicates, and the normalized current measurement.

FIG. 9. Prior TI and 16 of the prior replicates for example 2.
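As a concrete illustration of how such a basis could be assembled for the five-dimensional case, the sketch below stacks the leading singular vectors of the two image populations with the normalized current measurement and projects images onto the resulting basis. The variable names and the plain least-squares projection are assumptions for illustration, not necessarily the authors' exact procedure.

```python
# Hypothetical sketch: a 5D attribute basis built from two eigenvectors of the
# censored archival population, two from the prior replicates, and the
# normalized current measurement. Names and the least-squares projection are
# illustrative assumptions.
import numpy as np

def leading_left_singular_vectors(images, k):
    """Stack flattened images as columns and return the k leading left
    singular vectors (each of length n_pixels)."""
    data = np.stack([img.ravel() for img in images], axis=1)
    u, _, _ = np.linalg.svd(data, full_matrices=False)
    return u[:, :k]

def build_attribute_basis(censored_archive, prior_replicates, z_current, k=2):
    """Return an (n_pixels, 2k+1) basis matrix; k=2 gives five dimensions,
    k=3 gives seven."""
    u_arch = leading_left_singular_vectors(censored_archive, k)
    u_prior = leading_left_singular_vectors(prior_replicates, k)
    z = z_current.ravel()
    return np.column_stack([u_arch, u_prior, z / np.linalg.norm(z)])

def to_attributes(image, basis):
    """Project a flattened image onto the basis (least squares), giving a
    low-dimensional attribute vector."""
    coeffs, *_ = np.linalg.lstsq(basis, image.ravel(), rcond=None)
    return coeffs
```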
Results for R1 and R̂1 show that by increasing the
number of dimensions in the attribute space, the results
get worse, with fewer cases giving a posterior mode better
than the current measurement. The changes are small but
noticeable. This might seem contradictory because there
is less information loss when we keep more dimensions.
However, when the number of dimensions increases,
more data points are needed to be able to adequately
approximate the probability distributions constructed in
the attribute space. In particular, the number of paired
archival samples used to calculate the multidimensional
likelihood function (4640 samples) is small compared to
the number of dimensions (five or seven) and the likelihood approximation is less accurate. This limits the
benefits of introducing more attributes. Results for R2 and R̂2 do not show a meaningful change in the quality of the posterior compared to the prior training image as the number of dimensions increases.
FIG. 10. (right four columns) The most probable replicates from the posterior distribution for example 2. (left) The uncertain satellite measurement and the ground validation. On top of each of the 16 replicates, as well as the current measurement from NOAA-16, the Jaccard distance (J) between the replicate and the ground validation is also reported.

b. Rank histogram

The rank histogram is a measure to evaluate the spread of an ensemble. This measure is only applicable
to scalar values and has been used in many hydrological
and meteorological applications (Hamill 2001; Candille
and Talagrand 2005; Aligo et al. 2007; Brussolo et al.
2008; Clark et al. 2009; Gaborit et al. 2013; Duda et al.
2014). In this technique a set of bins is defined by distributing the ensemble replicates on the real axis. Then,
the probability of occurrence of the observation within
each bin is evaluated. The result is expected to be a
uniform distribution, meaning that the observation has
equal probability of occurrence in each of the bins.
Details of this technique can be found in Anderson
(1996) and Hamill (2001).
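As a minimal illustration of this procedure, the sketch below accumulates a rank histogram for a scalar statistic of each verifying observation within its ensemble; in this study the statistics are the total precipitation mass and the rainy-area percentage, described next. The array names and the simple handling of ties are assumptions.

```python
# Hypothetical sketch: rank histogram for a scalar ensemble statistic,
# following the idea in Anderson (1996) and Hamill (2001). For each case the
# observation's rank among the n ensemble values falls in one of n+1 bins;
# over many cases a reliable ensemble yields a roughly uniform histogram.
import numpy as np

def rank_histogram(ensemble_stats, observed_stats):
    """ensemble_stats: (n_cases, n_members) scalar statistic per member
    (e.g., total precipitation mass of each replicate).
    observed_stats: (n_cases,) same statistic from the benchmark field.
    Returns the relative frequency in each of the n_members + 1 bins."""
    n_cases, n_members = ensemble_stats.shape
    counts = np.zeros(n_members + 1, dtype=int)
    for members, obs in zip(ensemble_stats, observed_stats):
        # Rank = number of ensemble members below the observation
        # (ties are ignored in this simple sketch).
        counts[int(np.sum(members < obs))] += 1
    return counts / n_cases
```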
The ensemble replicates in this problem are high dimensional, so it is not possible to apply the rank histogram concept directly to the replicates. However, we can
examine two scalar statistics derived from the replicates’
quantities: the total mass of precipitation in each replicate and the percentage of rainy area. These two quantities are informative of the spread of the ensembles with
respect to both rain detection and rain intensity.
FIG. 11. Mean and standard deviation of the 20 most probable posterior ensemble replicates for example 2. The coefficient of variation in the ensemble is large, indicating spread among the members. Standard deviation is a symmetric measure of spread and here only indicative of the distribution.

Figures 13 and 14 show the resulting rank histograms by considering the top 20 posterior replicates. The rank histogram for the rainy area shows that the more probable posterior replicates (low-rank ones) are more
similar to the benchmark measurement (since they have a
higher frequency in the histogram). The rank histogram for the total mass of precipitation shows that the benchmark measurement is equally likely to have a total mass in any of the 20 bins (defined by the top 20 posterior replicates). These two histograms show that the posterior ensemble replicates have a reasonable spread with respect to both rain detection and rain intensity. The large value of the histograms in the last bin is due to the cutoff we have made at the twentieth replicate: since there are more replicates than the 20 most probable ones used to calculate the histograms here, the last bin collects all of them and therefore shows a higher value.
FIG. 12. Percentage of cases that the relative distance indices are ≤1 over 166 examples.

FIG. 13. Rank histogram for rainy area. The red line shows the uniform distribution.

FIG. 14. Rank histogram for total amount of precipitation. The red line shows the uniform distribution.

c. Multilevel Jaccard distance

In the examples shown in section 7, the Jaccard distance between each posterior replicate and the benchmark measurement was reported. It was shown that in most of the cases these distances are lower than the Jaccard distance between the current measurement and the benchmark measurement. However, in that analysis the Jaccard distance was applied only by thresholding the replicates at 0 mm h⁻¹. Here we present the results of comparing the posterior replicates and the benchmark measurements using the Jaccard distance at thresholds higher than 0 mm h⁻¹.
This evaluation is carried out over all the 166 examples and the results are shown in Fig. 15. The plots show
the mean and one standard deviation of the Jaccard
distance at different thresholds. These thresholds are
located at the 5%, 10%, 50%, 90%, and 95% quantiles.
The figure shows that at higher precipitation rates both
the current measurements and the posterior replicates
(the top 20 replicates) have a higher Jaccard distance;
however, at all thresholds the posterior replicates perform better. This multilevel Jaccard distance evaluation shows that the posterior replicates are more similar to the truth with respect to both spatial pattern and precipitation intensity.
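One possible way to carry out this multithreshold comparison is sketched below, with thresholds taken from the quantile levels used in Fig. 15; the array names and the choice of computing quantiles from the positive benchmark rain rates are illustrative assumptions.

```python
# Hypothetical sketch: Jaccard distance between a replicate and the benchmark
# at several intensity thresholds set by quantiles of the rain rates.
import numpy as np

def multilevel_jaccard(replicate, benchmark,
                       quantiles=(0.05, 0.10, 0.50, 0.90, 0.95)):
    """Return (thresholds, distances) for the exceedance masks of the two
    fields at thresholds given by quantiles of the positive benchmark rates."""
    rainy = benchmark[benchmark > 0.0]
    thresholds = (np.quantile(rainy, quantiles) if rainy.size
                  else np.zeros(len(quantiles)))
    distances = []
    for t in thresholds:
        a, b = replicate > t, benchmark > t
        union = np.logical_or(a, b).sum()
        inter = np.logical_and(a, b).sum()
        distances.append(1.0 - inter / union if union else 0.0)
    return thresholds, np.array(distances)
```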
Figure 15 also shows the Jaccard distance between all
the prior replicates and the true precipitation. The results
indicate that there is a large variability in the prior replicates (high standard deviation); however, the mean is
similar to the top posterior replicates at higher thresholds.
At lower thresholds the priors have a higher mean Jaccard distance. This comparison shows that the ensemble characterization presented in this study properly characterizes the posterior by providing a posterior distribution that is narrower than the prior.
FIG. 15. Jaccard distance between the benchmark measurements and the current measurements (blue), the top 20 posterior replicates (red), and the prior replicates (green). The error bars show one standard deviation. Thresholds are based on the 5%, 10%, 50%, 90%, and 95% quantiles of the precipitation rates in the samples.

9. Summary

The new Global Precipitation Measurement (GPM) mission focuses on merging precipitation retrievals from each of the satellites in the GPM constellation to provide global precipitation estimates with high temporal and spatial resolution. However, error quantification of the precipitation retrievals from each of the satellites is a vital step toward implementing an optimal merging algorithm. This paper presents a Bayesian framework to characterize the uncertainty in a satellite-based retrieval of precipitation using historical satellite and ground-based observations without making any distributional assumptions about measurement error. This framework generates an ensemble of samples from the Bayesian posterior distribution that collectively represent the uncertainty in the true precipitation.
Going beyond previous work, this approach relies on
nonparametric probability densities derived from an
archival dataset rather than on parametric distributions.
It provides a more realistic characterization of error and
takes advantage of a rich dataset of collocated satellite
and ground-based radar measurements.
The importance sampling technique used here has
limitations in high-dimensional space. For precipitation
retrievals that are high dimensional, a very large prior
ensemble is needed to provide an accurate characterization of the Bayesian posterior distribution. Generating such a large ensemble is impractical. Therefore, a
dimensionality reduction algorithm is introduced to map
all the data to a low-dimensional attribute space. The
dimensionality reduction is measurement specific, and it
provides useful and concise descriptions of the original
images that are suitable for Bayesian updating.
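To indicate how importance sampling in the attribute space can be organized, the sketch below weights prior replicates by a nonparametric likelihood of the current measurement's attributes and resamples them to approximate the posterior. The Gaussian kernel density estimate stands in for the likelihood built from the archival satellite/radar pairs and, like the variable names, is an assumption for illustration rather than the authors' exact estimator.

```python
# Hypothetical sketch: nonparametric importance sampling in a low-dimensional
# attribute space. Prior replicates serve as the proposal; weights are the
# likelihood p(z_attr | x_attr) estimated from archival pairs (a Gaussian KDE
# stands in here for the paper's nonparametric likelihood).
import numpy as np
from scipy.stats import gaussian_kde

def posterior_replicate_indices(prior_attrs, z_attr, archive_pairs,
                                n_draws=1000, seed=None):
    """prior_attrs: (n_replicates, d) attribute vectors of prior replicates.
    z_attr: (d,) attribute vector of the current measurement.
    archive_pairs: (n_archive, 2d) rows of [truth_attrs, measurement_attrs].
    Returns indices of resampled replicates approximating the posterior."""
    d = prior_attrs.shape[1]
    joint = gaussian_kde(archive_pairs.T)            # estimate of p(x_attr, z_attr)
    marginal = gaussian_kde(archive_pairs[:, :d].T)  # estimate of p(x_attr)

    # Importance weights: likelihood p(z_attr | x_attr) = joint / marginal,
    # evaluated at each prior replicate paired with the current measurement.
    pts = np.hstack([prior_attrs, np.tile(z_attr, (len(prior_attrs), 1))])
    weights = joint(pts.T) / np.maximum(marginal(prior_attrs.T), 1e-300)
    weights /= weights.sum()

    # Resample replicate indices in proportion to their weights.
    rng = np.random.default_rng(seed)
    return rng.choice(len(prior_attrs), size=n_draws, replace=True, p=weights)
```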
We applied this method to a population of storms
observed by the AMSU-B instrument on board the
NOAA-16 satellite and compared the results with
the measurements obtained from the ground-based
NEXRAD-IV. The comparison reveals that our
method lowers the uncertainty in the true precipitation
by providing a posterior distribution that is narrower
than its prior.
The main goal of this paper is to illustrate the feasibility of using nonparametric Bayesian uncertainty
quantification to characterize errors in precipitation
retrievals. The choice of prior replicate generation
(including a prior training image) and the choice of dimensionality reduction technique are user dependent.
These two elements can be improved or replaced with
other algorithms.
The primary limitation of implementing importance
sampling in a low-dimensional attribute space is the inevitable information loss in the mapping. This is a function of the number of dimensions in the attribute space:
there is more information loss when the number of attribute dimensions is smaller. However, as the number of dimensions in the attribute space increases, more samples are needed to properly characterize the probability
densities. More diverse and rich archival data will
provide a better approximation of the error likelihood.
The flexible error characterization approach described
here is a valuable tool for comparing the relative performance of different satellite instruments. This comparison is useful for future instrument and retrieval
algorithm developments as well as calibration studies.
The nonparametric replicate-based approach to uncertainty quantification used here is also very convenient
for ensemble forecasting applications and can improve
the realism of the precipitation ensembles used in assessments of hydrologic model forecasting uncertainty.
Acknowledgments. This study is part of the first author’s Ph.D. dissertation at the Massachusetts Institute
of Technology (Alemohammad 2015) and has also
benefited from the comments from his external committee members, Prof. Emmanouil N. Anagnostou and
Dr. William J. Blackwell. The authors wish to thank the
anonymous reviewers for their constructive feedback
that led to improvements in the paper. This study is
supported by NASA Grant NNX13AI40G. The TRMM
data used in this study were acquired as part of
NASA’s Earth-Sun System Division and archived and
distributed by the Goddard Earth Sciences (GES) Data
and Information Services Center (DISC).
REFERENCES
AghaKouchak, A., N. Nasrollahi, and E. Habib, 2009: Accounting
for uncertainties of the TRMM satellite estimates. Remote
Sens., 1, 606–619, doi:10.3390/rs1030606.
——, A. Bardossy, and E. Habib, 2010a: Conditional simulation
of remotely sensed rainfall data using a non-Gaussian
v-transformed copula. Adv. Water Resour., 33, 624–634,
doi:10.1016/j.advwatres.2010.02.010.
——, E. Habib, and A. Bardossy, 2010b: Modeling radar rainfall
estimation uncertainties: Random error model. J. Hydrol. Eng.,
15, 265–274, doi:10.1061/(ASCE)HE.1943-5584.0000185.
Ahijevych, D., E. Gilleland, B. G. Brown, and E. E. Ebert, 2009:
Application of spatial verification methods to idealized and
NWP-gridded precipitation forecasts. Wea. Forecasting, 24,
1485–1497, doi:10.1175/2009WAF2222298.1.
Ahmed, N., T. Natarajan, and K. Rao, 1974: Discrete cosine
transform. IEEE Trans. Comput., C-23, 90–93, doi:10.1109/
T-C.1974.223784.
Alemohammad, S. H., 2015: Characterization of uncertainty in
remotely-sensed precipitation estimates. Ph.D. thesis, Massachusetts Institute of Technology, 156 pp.
——, D. Entekhabi, and D. McLaughlin, 2014: Evaluation
of long-term SSM/I-based precipitation records over land.
J. Hydrometeor., 15, 2012–2029, doi:10.1175/JHM-D-13-0171.1.
——, K. A. McColl, A. G. Konings, D. Entekhabi, and
A. Stoffelen, 2015: Characterization of precipitation product
errors across the US using multiplicative Triple Collocation.
Hydrol. Earth Syst. Sci. Discuss., 12, 2527–2559, doi:10.5194/
hessd-12-2527-2015.
Aligo, E. A., W. A. Gallus, and M. Segal, 2007: Summer rainfall
forecast spread in an ensemble initialized with different soil
moisture analyses. Wea. Forecasting, 22, 299–314, doi:10.1175/
WAF995.1.
Anagnostou, E., V. Maggioni, E. Nikolopoulos, T. Meskele,
F. Hossain, and A. Papadopoulos, 2010: Benchmarking high-resolution global satellite rainfall products to radar and rain-gauge rainfall estimates. IEEE Trans. Geosci. Remote Sens.,
48, 1667–1683, doi:10.1109/TGRS.2009.2034736.
Anderson, J. L., 1996: A method for producing and evaluating
probabilistic forecasts from ensemble model integrations.
J. Climate, 9, 1518–1530, doi:10.1175/1520-0442(1996)009<1518:AMFPAE>2.0.CO;2.
Astin, I., 1997: A survey of studies into errors in large scale space-time averages of rainfall, cloud cover, sea surface processes
and the earth’s radiation budget as derived from low orbit
satellite instruments because of their incomplete temporal and
spatial coverage. Surv. Geophys., 18, 385–403, doi:10.1023/
A:1006512715662.
Bardossy, A., 1998: Generating precipitation time series using
simulated annealing. Water Resour. Res., 34, 1737–1744,
doi:10.1029/98WR00981.
Bell, T. L., 1987: A space-time stochastic-model of rainfall for
satellite remote-sensing studies. J. Geophys. Res., 92, 9631–
9643, doi:10.1029/JD092iD08p09631.
——, A. Abdullah, R. L. Martin, and G. R. North, 1990: Sampling
errors for satellite-derived tropical rainfall: Monte Carlo study
using a space-time stochastic model. J. Geophys. Res., 95,
2195–2205, doi:10.1029/JD095iD03p02195.
Bellerby, T. J., 2007: Satellite rainfall uncertainty estimation using
an artificial neural network. J. Hydrometeor., 8, 1397–1412,
doi:10.1175/2007JHM846.1.
——, and J. Sun, 2005: Probabilistic and ensemble representations of
the uncertainty in an IR/microwave satellite precipitation product.
J. Hydrometeor., 6, 1032–1044, doi:10.1175/JHM454.1.
Bishop, C., M. Svensen, and C. Williams, 1998: GTM: The generative topographic mapping. Neural Comput., 10 (1), 215–234.
Bocquet, M., C. A. Pires, and L. Wu, 2010: Beyond Gaussian statistical modeling in geophysical data assimilation. Mon. Wea.
Rev., 138, 2997–3023, doi:10.1175/2010MWR3164.1.
Borg, I., and P. J. Groenen, 2005: Modern Multidimensional Scaling: Theory and Applications. 2nd ed. Springer, 614 pp.
Brier, G. W., 1950: Verification of forecasts expressed in terms
of probability. Mon. Wea. Rev., 78, 1–3, doi:10.1175/
1520-0493(1950)078<0001:VOFEIT>2.0.CO;2.
Brocca, L., T. Moramarco, F. Melone, W. Wagner, S. Hasenauer,
and S. Hahn, 2012: Assimilation of surface- and root-zone
ASCAT soil moisture products into rainfall–runoff modeling.
IEEE Trans. Geosci. Remote Sens., 50, 2542–2555,
doi:10.1109/TGRS.2011.2177468.
Brussolo, E., J. von Hardenberg, L. Ferraris, N. Rebora, and
A. Provenzale, 2008: Verification of quantitative precipitation
forecasts via stochastic downscaling. J. Hydrometeor., 9, 1084–
1094, doi:10.1175/2008JHM994.1.
Bulygina, N., and H. Gupta, 2010: How Bayesian data assimilation
can be used to estimate the mathematical structure of a model.
Stochastic Environ. Res. Risk Assess., 24, 925–937, doi:10.1007/
s00477-010-0387-y.
Candille, G., and O. Talagrand, 2005: Evaluation of probabilistic
prediction systems for a scalar variable. Quart. J. Roy. Meteor.
Soc., 131, 2131–2150, doi:10.1256/qj.04.71.
Chen, H., D. Yang, Y. Hong, J. J. Gourley, and Y. Zhang, 2013:
Hydrological data assimilation with the ensemble square-root filter: Use of streamflow observations to update model states
for real-time flash flood forecasting. Adv. Water Resour., 59,
209–220, doi:10.1016/j.advwatres.2013.06.010.
Chen, S., and Coauthors, 2013a: Evaluation of spatial errors of
precipitation rates and types from TRMM spaceborne radar
over the southern CONUS. J. Hydrometeor., 14, 1884–1896,
doi:10.1175/JHM-D-13-027.1.
——, and Coauthors, 2013b: Evaluation of the successive V6 and
V7 TRMM multisatellite precipitation analysis over the continental United States. Water Resour. Res., 49, 8174–8186,
doi:10.1002/2012WR012795.
Clark, A. J., W. A. Gallus, M. Xue, and F. Kong, 2009: A comparison
of precipitation forecast skill between small convection-allowing
and large convection-parameterizing ensembles. Wea. Forecasting, 24, 1121–1140, doi:10.1175/2009WAF2222222.1.
Clark, M. P., and A. G. Slater, 2006: Probabilistic quantitative
precipitation estimation in complex terrain. J. Hydrometeor.,
7, 3–22, doi:10.1175/JHM474.1.
Coifman, R. R., and S. Lafon, 2006: Diffusion maps. Appl. Comput.
Harmonic Anal., 21, 5–30, doi:10.1016/j.acha.2006.04.006.
Cornuet, J.-M., J.-M. Marin, A. Mira, and C. P. Robert, 2012:
Adaptive multiple importance sampling. Scand. J. Stat., 39,
798–812, doi:10.1111/j.1467-9469.2011.00756.x.
Cowpertwait, P. S. P., C. G. Kilsby, and P. E. O’Connell, 2002: A
space-time Neyman-Scott model of rainfall: Empirical analysis of extremes. Water Resour. Res., 38, 1131, doi:10.1029/
2001WR000709.
Cox, T. F., and M. A. Cox, 2001: Multidimensional Scaling. 2nd ed.
CRC Press, 308 pp.
DeChant, C. M., and H. Moradkhani, 2012: Examining the effectiveness and robustness of sequential data assimilation
methods for quantification of uncertainty in hydrologic
forecasting. Water Resour. Res., 48, W04518, doi:10.1029/
2011WR011011.
——, and ——, 2014: Toward a reliable prediction of seasonal
forecast uncertainty: Addressing model and initial condition
uncertainty with ensemble data assimilation and sequential
Bayesian combination. J. Hydrol., 519D, 2967–2977,
doi:10.1016/j.jhydrol.2014.05.045.
Delle Monache, L., F. A. Eckel, D. L. Rife, B. Nagarajan, and
K. Searight, 2013: Probabilistic weather prediction with an
analog ensemble. Mon. Wea. Rev., 141, 3498–3516,
doi:10.1175/MWR-D-12-00281.1.
Demartines, P., and J. Herault, 1997: Curvilinear component
analysis: A self-organizing neural network for nonlinear
mapping of data sets. IEEE Trans. Neural Networks, 8, 148–
154, doi:10.1109/72.554199.
Duda, J. D., X. Wang, F. Kong, and M. Xue, 2014: Using varied
microphysics to account for uncertainty in warm-season QPF
in a convection-allowing ensemble. Mon. Wea. Rev., 142,
2198–2219, doi:10.1175/MWR-D-13-00297.1.
Eicker, A., M. Schumacher, J. Kusche, P. Doll, and H. Schmied,
2014: Calibration/data assimilation approach for integrating GRACE data into the WaterGAP Global Hydrology Model (WGHM) using an ensemble Kalman filter:
First results. Surv. Geophys., 35, 1285–1309, doi:10.1007/
s10712-014-9309-8.
Evensen, G., 1994: Sequential data assimilation with a nonlinear
quasi-geostrophic model using Monte Carlo methods to
forecast error statistics. J. Geophys. Res., 99, 10 143–10 162,
doi:10.1029/94JC00572.
——, 2003: The ensemble Kalman filter: Theoretical formulation
and practical implementation. Ocean Dyn., 53, 343–367,
doi:10.1007/s10236-003-0036-9.
Fernandez-Ferrero, A., J. Saenz, and G. Ibarra-Berastegi, 2010:
Comparison of the performance of different analog-based
Bayesian probabilistic precipitation forecasts over Bilbao,
Spain. Mon. Wea. Rev., 138, 3107–3119, doi:10.1175/
2010MWR3284.1.
Ferraris, L., S. Gabellani, N. Rebora, and A. Provenzale, 2003:
A comparison of stochastic models for spatial rainfall
downscaling. Water Resour. Res., 39, 1368, doi:10.1029/
2003WR002504.
Fulton, R. A., J. P. Breidenbach, D. J. Seo, and D. A. Miller, 1998:
The WSR-88D rainfall algorithm. Wea. Forecasting, 13, 377–
395, doi:10.1175/1520-0434(1998)013<0377:TWRA>2.0.CO;2.
Gaborit, E., F. Anctil, V. Fortin, and G. Pelletier, 2013: On the
reliability of spatially disaggregated global ensemble rainfall
forecasts. Hydrol. Processes, 27, 45–56, doi:10.1002/hyp.9509.
Gebremichael, M., and W. Krajewski, 2004: Characterization of
temporal sampling error in space-time-averaged rainfall estimates from satellites. J. Geophys. Res., 109, D11110,
doi:10.1029/2004JD004509.
Gelman, A., J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and
D. B. Rubin, 2013: Bayesian Data Analysis. 3rd ed. CRC Press,
675 pp.
Gilleland, E., D. Ahijevych, B. G. Brown, B. Casati, and E. E.
Ebert, 2009: Intercomparison of spatial forecast verification
methods. Wea. Forecasting, 24, 1416–1430, doi:10.1175/
2009WAF2222269.1.
——, D. A. Ahijevych, B. G. Brown, and E. E. Ebert, 2010: Verifying forecasts spatially. Bull. Amer. Meteor. Soc., 91, 1365–
1373, doi:10.1175/2010BAMS2819.1.
Gomi, C., and Y. Kuzuha, 2013: Simulation of a daily precipitation
time series using a stochastic model with filtering. Open
J. Mod. Hydrol., 3, 206–213, doi:10.4236/ojmh.2013.34025.
Grimes, D., 2008: An ensemble approach to uncertainty estimation
for satellite-based rainfall estimates. Hydrological Modelling
and the Water Cycle, S. Sorooshian et al., Eds., Water Science
and Technology Library, Vol. 63, Springer, 145–162,
doi:10.1007/978-3-540-77843-1_7.
Gupta, V., and E. Waymire, 1993: A statistical analysis of mesoscale rainfall as a random cascade. J. Appl. Meteor., 32, 251–267,
doi:10.1175/1520-0450(1993)032<0251:ASAOMR>2.0.CO;2.
Habib, E., A. Henschke, and R. F. Adler, 2009: Evaluation of
TMPA satellite-based research and real-time rainfall estimates during six tropical-related heavy rainfall events over
Louisiana, USA. Atmos. Res., 94, 373–388, doi:10.1016/
j.atmosres.2009.06.015.
——, A. T. Haile, Y. Tian, and R. J. Joyce, 2012: Evaluation of the
high-resolution CMORPH satellite rainfall product using
dense rain gauge observations and radar-based estimates.
J. Hydrometeor., 13, 1784–1798, doi:10.1175/JHM-D-12-017.1.
Hagen, A., 2003: Fuzzy set approach to assessing similarity of
categorical maps. Int. J. Geogr. Info. Sci., 17, 235–249,
doi:10.1080/13658810210157822.
Hagen-Zanker, A., 2009: An improved fuzzy kappa statistic that
accounts for spatial autocorrelation. Int. J. Geogr. Info. Sci.,
23, 61–73, doi:10.1080/13658810802570317.
Hamill, T. M., 2001: Interpretation of rank histograms for verifying
ensemble forecasts. Mon. Wea. Rev., 129, 550–560,
doi:10.1175/1520-0493(2001)129<0550:IORHFV>2.0.CO;2.
Hossain, F., and E. N. Anagnostou, 2006: A two-dimensional satellite rainfall error model. IEEE Trans. Geosci. Remote Sens.,
44, 1511–1522, doi:10.1109/TGRS.2005.863866.
Ines, A. V., N. N. Das, J. W. Hansen, and E. G. Njoku, 2013: Assimilation of remotely sensed soil moisture and vegetation
with a crop simulation model for maize yield prediction. Remote Sens. Environ., 138, 149–164, doi:10.1016/
j.rse.2013.07.018.
Jafarpour, B., and D. McLaughlin, 2008: History matching with an
ensemble Kalman filter and discrete cosine parameterization. Comput. Geosci., 12, 227–244, doi:10.1007/
s10596-008-9080-3.
——, and ——, 2009: Reservoir characterization with the discrete
cosine transform. Soc. Pet. Eng. J., 14, 182–201, doi:10.2118/
106453-PA.
——, V. Goyal, D. McLaughlin, and W. Freeman, 2010: Compressed history matching: Exploiting transform-domain
sparsity for regularization of nonlinear dynamic data integration problems. Math. Geosci., 42, 1–27, doi:10.1007/
s11004-009-9247-z.
Janowiak, J. E., R. J. Joyce, and Y. Yarosh, 2001: A real-time
global half-hourly pixel-resolution infrared dataset and its
applications. Bull. Amer. Meteor. Soc., 82, 205–217,
doi:10.1175/1520-0477(2001)082<0205:ARTGHH>2.3.CO;2.
Joshi, M. K., A. Rai, and A. C. Pandey, 2013: Validation of TMPA and
GPCP 1DD against the ground truth rain-gauge data for Indian
region. Int. J. Climatol., 33, 2633–2648, doi:10.1002/joc.3612.
Kalman, R. E., 1960: A new approach to linear filtering and prediction problems. J. Fluids Eng., 82, 35–45, doi:10.1115/
1.3662552.
Kavetski, D., G. Kuczera, and S. W. Franks, 2006: Bayesian analysis of input uncertainty in hydrological modeling: 1. Theory.
Water Resour. Res., 42, W03407, doi:10.1029/2005WR004368.
Kirstetter, P.-E., Y. Hong, J. J. Gourley, M. Schwaller,
W. Petersen, and J. Zhang, 2013a: Comparison of TRMM 2a25
products, version 6 and version 7, with NOAA/NSSL ground
radar-based national mosaic QPE. J. Hydrometeor., 14, 661–
669, doi:10.1175/JHM-D-12-030.1.
——, N. Viltard, and M. Gosset, 2013b: An error model for instantaneous satellite rainfall estimates: Evaluation of BRAIN-TMI over West Africa. Quart. J. Roy. Meteor. Soc., 139, 894–
911, doi:10.1002/qj.1964.
Kumar, P., and E. Foufoula-Georgiou, 1994: Characterizing
multiscale variability of zero intermittency in spatial
rainfall. J. Appl. Meteor., 33, 1516–1525, doi:10.1175/
1520-0450(1994)033<1516:CMVOZI>2.0.CO;2.
Kwon, H.-H., U. Lall, and A. F. Khalil, 2007: Stochastic simulation
model for nonstationary time series using an autoregressive
wavelet decomposition: Applications to rainfall and temperature. Water Resour. Res., 43, W05407, doi:10.1029/
2006WR005258.
Lee, B. G., R. T. Chin, and D. W. Martin, 1985: Automated rainrate classification of satellite images using statistical pattern
recognition. IEEE Trans. Geosci. Remote Sens., 23, 315–324,
doi:10.1109/TGRS.1985.289534.
Lespinats, S., M. Verleysen, A. Giron, and B. Fertil, 2007: DD-HDS: A method for visualization and exploration of high-dimensional data. IEEE Trans. Neural Networks, 18 (5), 1265–
1279.
——, B. Fertil, P. Villemain, and J. Herault, 2009: RankVisu:
Mapping from the neighborhood network. Neurocomputing,
72, 2964–2978, doi:10.1016/j.neucom.2009.04.008.
Li, B., D. Toll, X. Zhan, and B. Cosgrove, 2012: Improving estimated soil moisture fields through assimilation of AMSR-E
soil moisture retrievals with an ensemble Kalman filter and a
mass conservation constraint. Hydrol. Earth Syst. Sci., 16, 105–
119, doi:10.5194/hess-16-105-2012.
Lin, Y., and K. E. Mitchell, 2005: The NCEP Stage II/IV hourly
precipitation analyses: development and applications. 19th
Conf. on Hydrology, San Diego, CA, Amer. Meteor. Soc., 1.2.
[Available online at https://ams.confex.com/ams/Annual2005/
techprogram/paper_83847.htm.]
Maggioni, V., R. H. Reichle, and E. N. Anagnostou, 2011: The
effect of satellite rainfall error modeling on soil moisture
prediction uncertainty. J. Hydrometeor., 12, 413–428,
doi:10.1175/2011JHM1355.1.
——, ——, and ——, 2013: The efficiency of assimilating satellite
soil moisture retrievals in a land data assimilation system using
different rainfall error models. J. Hydrometeor., 14, 368–374,
doi:10.1175/JHM-D-12-0105.1.
——, M. R. P. Sapiano, R. F. Adler, Y. Tian, and G. J. Huffman,
2014: An error model for uncertainty quantification in high-time-resolution precipitation products. J. Hydrometeor., 15,
1274–1292, doi:10.1175/JHM-D-13-0112.1.
Margulis, S. A., D. Entekhabi, and D. McLaughlin, 2006: Spatiotemporal disaggregation of remotely sensed precipitation
for ensemble hydrologic modeling and data assimilation.
J. Hydrometeor., 7, 511–533, doi:10.1175/JHM492.1.
Martin, M., F. Valero, A. Pascual, J. Sanz, and L. Frias, 2014:
Analysis of wind power productions by means of an
analog model. Atmos. Res., 143, 238–249, doi:10.1016/
j.atmosres.2014.02.012.
Mason, I., 1982: A model for assessment of weather forecasts. Aust.
Meteor. Mag., 30, 291–303.
Matousek, J., 1996: On the distortion required for embedding finite
metric spaces into normed spaces. Isr. J. Math., 93, 333–344,
doi:10.1007/BF02761110.
Maybeck, P. S., 1982: Stochastic Models, Estimation, and Control.
Vol. 3, Academic Press, 291 pp.
McCollum, J. R., W. F. Krajewski, R. R. Ferraro, and M. B. Ba,
2002: Evaluation of biases of satellite rainfall estimation
algorithms over the continental United States. J. Appl.
Meteor., 41, 1065–1080, doi:10.1175/1520-0450(2002)041<1065:EOBOSR>2.0.CO;2.
McLaughlin, D., 2002: An integrated approach to hydrologic
data assimilation: Interpolation, smoothing, and filtering. Adv. Water Resour., 25, 1275–1286, doi:10.1016/
S0309-1708(02)00055-6.
——, 2014: Data assimilation. Encyclopedia of Remote Sensing,
E. G. Njoku, Ed., Springer, 131–134.
McPhee, J., and S. A. Margulis, 2005: Validation and error characterization of the GPCP-1DD precipitation product over the
contiguous United States. J. Hydrometeor., 6, 441–459,
doi:10.1175/JHM429.1.
Moradkhani, H., C. M. DeChant, and S. Sorooshian, 2012: Evolution of ensemble data assimilation for uncertainty quantification using the particle filter-Markov chain Monte Carlo
method. Water Resour. Res., 48, W12520, doi:10.1029/
2012WR012144.
Morrissey, M. L., and J. S. Greene, 1998: Uncertainty analysis of
satellite rainfall algorithms over the tropical Pacific.
J. Geophys. Res., 103, 19 569–19 576, doi:10.1029/98JD00309.
Muller, W., C. Appenzeller, F. J. Doblas-Reyes, and M. A. Liniger,
2005: A debiased ranked probability skill score to evaluate
probabilistic ensemble forecasts with small ensemble sizes.
J. Climate, 18, 1513–1523, doi:10.1175/JCLI3361.1.
Najafi, M. R., H. Moradkhani, and T. C. Piechota, 2012: Ensemble
streamflow prediction: Climate signal weighting methods vs.
climate forecast system reanalysis. J. Hydrol., 442–443, 105–
116, doi:10.1016/j.jhydrol.2012.04.003.
Nykanen, D. K., and D. Harris, 2003: Orographic influences on the
multiscale statistical properties of precipitation. J. Geophys.
Res., 108, 8381, doi:10.1029/2001JD001518.
Orlowsky, B., O. Bothe, K. Fraedrich, F.-W. Gerstengarbe, and
X. Zhu, 2010: Future climates from bias-bootstrapped weather
analogs: An application to the Yangtze River basin. J. Climate,
23, 3509–3524, doi:10.1175/2010JCLI3271.1.
Paschalis, A., P. Molnar, S. Fatichi, and P. Burlando, 2013: A stochastic model for high-resolution space-time precipitation
simulation. Water Resour. Res., 49, 8400–8417, doi:10.1002/
2013WR014437.
Peters-Lidard, C. D., S. V. Kumar, D. M. Mocko, and Y. Tian, 2011:
Estimating evapotranspiration with land data assimilation
systems. Hydrol. Processes, 25, 3979–3992, doi:10.1002/
hyp.8387.
Rajaraman, A., and J. D. Ullman, 2011: Mining of Massive Datasets. Cambridge University Press, 326 pp.
Reichle, R. H., D. B. McLaughlin, and D. Entekhabi, 2002: Hydrologic data assimilation with the ensemble Kalman filter. Mon.
Wea. Rev., 130, 103–114, doi:10.1175/1520-0493(2002)130<0103:HDAWTE>2.0.CO;2.
——, R. D. Koster, P. Liu, S. P. P. Mahanama, E. G. Njoku, and
M. Owe, 2007: Comparison and assimilation of global soil
moisture retrievals from the advanced microwave scanning
radiometer for the earth observing system (AMSR-E) and the
scanning multichannel microwave radiometer (SMMR).
J. Geophys. Res., 112, D09108, doi:10.1029/2006JD008033.
Renard, B., D. Kavetski, E. Leblois, M. Thyer, G. Kuczera, and
S. W. Franks, 2011: Toward a reliable decomposition of predictive uncertainty in hydrological modeling: Characterizing
rainfall errors using conditional simulation. Water Resour.
Res., 47, W11516, doi:10.1029/2011WR010643.
Robert, C., and G. Casella, 2004: Monte Carlo Statistical Methods.
2nd ed. Springer-Verlag, 649 pp.
Roberts, N. M., and H. W. Lean, 2008: Scale-selective verification
of rainfall accumulations from high-resolution forecasts of
convective events. Mon. Wea. Rev., 136, 78–97, doi:10.1175/
2007MWR2123.1.
Roweis, S., and L. Saul, 2000: Nonlinear dimensionality reduction
by locally linear embedding. Science, 290, 2323–2326,
doi:10.1126/science.290.5500.2323.
Sammon, J., 1969: A nonlinear mapping for data structure analysis. IEEE Trans. Comput., C-18, 401–409, doi:10.1109/
T-C.1969.222678.
Sapiano, M. R. P., and P. A. Arkin, 2009: An intercomparison and
validation of high-resolution satellite precipitation estimates
with 3-hourly gauge data. J. Hydrometeor., 10, 149–166,
doi:10.1175/2008JHM1052.1.
Seyyedi, H., E. Anagnostou, P.-E. Kirstetter, V. Maggioni,
Y. Hong, and J. Gourley, 2014a: Incorporating surface soil
moisture information in error modeling of TRMM passive
microwave rainfall. IEEE Trans. Geosci. Remote Sens., 52,
6226–6240, doi:10.1109/TGRS.2013.2295795.
——, ——, E. Beighley, and J. McCollum, 2014b: Satellite-driven
downscaling of global reanalysis precipitation products for
hydrological applications. Hydrol. Earth Syst. Sci., 18, 5077–
5091, doi:10.5194/hess-18-5077-2014.
Shao, Q., and M. Li, 2013: An improved statistical analogue downscaling
procedure for seasonal precipitation forecast. Stochastic Environ.
Res. Risk Assess., 27, 819–830, doi:10.1007/s00477-012-0610-0.
Shen, Y., A. Xiong, Y. Wang, and P. Xie, 2010: Performance of
high-resolution satellite precipitation products over China.
J. Geophys. Res., 115, D02114, doi:10.1029/2009JD012097.
Sivapalan, M., and E. F. Wood, 1987: A multidimensional model of
nonstationary space-time rainfall at the catchment scale. Water
Resour. Res., 23, 1289–1299, doi:10.1029/WR023i007p01289.
Stampoulis, D., and E. N. Anagnostou, 2012: Evaluation of
global satellite rainfall products over continental Europe.
J. Hydrometeor., 13, 588–603, doi:10.1175/JHM-D-11-086.1.
Steiner, M., T. L. Bell, Y. Zhang, and E. F. Wood, 2003: Comparison of two methods for estimating the sampling-related
uncertainty of satellite rainfall averages based on a
large radar dataset. J. Climate, 16, 3759–3778, doi:10.1175/
1520-0442(2003)016<3759:COTMFE>2.0.CO;2.
Tan, P. N., M. Steinbach, and V. Kumar, 2005: Introduction to Data
Mining. Addison-Wesley, 769 pp.
Tang, L., Y. Tian, and X. Lin, 2014: Validation of precipitation
retrievals over land from satellite-based passive microwave
sensors. J. Geophys. Res. Atmos., 119, 4546–4567, doi:10.1002/
2013JD020933.
Tenenbaum, J. B., V. de Silva, and J. C. Langford, 2000: A global
geometric framework for nonlinear dimensionality reduction.
Science, 290, 2319–2323, doi:10.1126/science.290.5500.2319.
Teo, C. K., and D. I. F. Grimes, 2007: Stochastic modelling of
rainfall from satellite data. J. Hydrol., 346, 33–50, doi:10.1016/
j.jhydrol.2007.08.014.
Tian, D., and C. J. Martinez, 2012: Comparison of two analog-based downscaling methods for regional reference evapotranspiration forecasts. J. Hydrol., 475, 350–364, doi:10.1016/
j.jhydrol.2012.10.009.
Tian, Y., and C. D. Peters-Lidard, 2007: Systematic anomalies over
inland water bodies in satellite-based precipitation estimates.
Geophys. Res. Lett., 34, L14403, doi:10.1029/2007GL030787.
——, and ——, 2010: A global map of uncertainties in satellite-based precipitation measurements. Geophys. Res. Lett., 37,
L24407, doi:10.1029/2010GL046008.
——, and Coauthors, 2009: Component analysis of errors in
satellite-based precipitation estimates. J. Geophys. Res., 114,
D24101, doi:10.1029/2009JD011949.
——, G. J. Huffman, R. F. Adler, L. Tang, M. Sapiano,
V. Maggioni, and H. Wu, 2013: Modeling errors in daily precipitation measurements: Additive or multiplicative? Geophys. Res. Lett., 40, 2060–2065, doi:10.1002/grl.50320.
Torgerson, W. S., 1952: Multidimensional scaling: I. Theory and
method. Psychometrika, 17, 401–419, doi:10.1007/BF02288916.
van der Maaten, L., E. Postma, and J. van den Herik, 2009: Dimensionality reduction: A comparative review. TiCC Tech.
Rep. 2009-005, Tilburg University, 35 pp.
Villarini, G., W. F. Krajewski, and J. A. Smith, 2009: New paradigm
for statistical validation of satellite precipitation estimates:
Application to a large sample of the TMPA 0.25-deg 3-hourly
estimates over Oklahoma. J. Geophys. Res., 114, D12106,
doi:10.1029/2008JD011475.
Vrugt, J. A., and M. Sadegh, 2013: Toward diagnostic model calibration and evaluation: Approximate Bayesian computation.
Water Resour. Res., 49, 4335–4345, doi:10.1002/wrcr.20354.
Wheater, H. S., and Coauthors, 2000: Spatial-temporal rainfall
fields: Modelling and statistical aspects. Hydrol. Earth Syst.
Sci., 4, 581–601, doi:10.5194/hess-4-581-2000.
Wikle, C. K., and L. M. Berliner, 2007: A Bayesian tutorial for data
assimilation. Physica D, 230, 1–16, doi:10.1016/j.physd.2006.09.017.
Wilks, D. S., 2004: The minimum spanning tree histogram as a verification tool for multidimensional ensemble forecasts. Mon. Wea.
Rev., 132, 1329–1340, doi:10.1175/1520-0493(2004)132<1329:TMSTHA>2.0.CO;2.
Wojcik, R., D. McLaughlin, A. Konings, and D. Entekhabi, 2009:
Conditioning stochastic rainfall replicates on remote sensing
data. IEEE Trans. Geosci. Remote Sens., 47, 2436–2449,
doi:10.1109/TGRS.2009.2016413.
——, ——, S. H. Alemohammad, and D. Entekhabi, 2014:
Ensemble-based characterization of uncertain environmental
features. Adv. Water Resour., 70, 36–50, doi:10.1016/
j.advwatres.2014.04.005.
Wolff, J. K., M. Harrold, T. Fowler, J. H. Gotway, L. Nance, and
B. G. Brown, 2014: Beyond the basics: Evaluating model-based precipitation forecasts using traditional, spatial, and
object-based methods. Wea. Forecasting, 29, 1451–1472,
doi:10.1175/WAF-D-13-00135.1.
Zhou, Y., D. McLaughlin, and D. Entekhabi, 2006: Assessing the
performance of the ensemble Kalman filter for land surface
data assimilation. Mon. Wea. Rev., 134, 2128–2142,
doi:10.1175/MWR3153.1.
Zorita, E., and H. von Storch, 1999: The analog method as a simple
statistical downscaling technique: Comparison with more
complicated methods. J. Climate, 12, 2474–2489, doi:10.1175/
1520-0442(1999)012<2474:TAMAAS>2.0.CO;2.