Notes on uncertainty from Jan 2002 annual meeting

2/18/02
Summary of 1/17/02 Breakout Session on Uncertainties in PBEE Methodology
NSF Statement:
Weakness: Lack of detailed plans and priority to identify important sources of uncertainty
and model the uncertainty within the PBEE framework is currently a weakness but will
soon become a threat if not addressed.
The participants of the breakout session came to the following (mostly “obvious”)
conclusions:
• Key issues in performance assessment are the tasks of
  o Identification of sources of uncertainty
  o Quantification of the level of uncertainty present in each source
  o Propagation of these uncertainties into the quantities that control
    performance assessment, and
  o Refinement of probabilistic models, if merited.
• We should distinguish between aleatory and epistemic uncertainties. The former
  should be evaluated and propagated to assess their effect on the DVs, but by
  definition there is little that can be done to reduce them. Aleatory uncertainties
  exist mostly in IM prediction and, to a lesser degree, in record-to-record EDP
  response given IM. More emphasis should be placed on epistemic uncertainties,
  because these we can reduce through research efforts, and some may be
  significant, especially as we push into near-failure predictions.
• All important sources of uncertainty should be identified and, to the extent
  possible, quantified. We recognize that the latter is an idealistic statement,
  because quantification of uncertainties is mostly a matter of information (i.e., data,
  expert opinions) acquisition, and it will take much more than the PEER effort to
  make large headway in this respect. But PEER should identify information needs
  and effective methods for information acquisition and uncertainty quantification.
• PEER should accelerate efforts in uncertainty quantification to the extent that
  reasonable bounds can be placed on important uncertainties, so that these
  uncertainties can be propagated through the performance methodology and their
  effect on performance assessment (DVs) can be evaluated and quantified. This
  should lead to the identification of the most important uncertainties (largest effect
  on DVs), which then should become the focus of further study.
• PEER should accelerate efforts in developing effective methods for uncertainty
  propagation and incorporating these methods in analysis tool kits (OpenSees, etc.).
• PEER should solicit research ideas and fund projects that provide insight into
  uncertainty quantification and propagation.
• The general consensus of the group was that no single project or testbed will
  provide a “fair” assessment of the importance of uncertainties, because this
  importance will strongly depend on IM, geotechnical, structural, DM, and DV
  assumptions.
• A matrix of structures and soil conditions should be developed (buildings and
  bridges, highly uncertain soil properties vs. uncertain structural properties, etc.)
  whose evaluation will provide a “fair” assessment of uncertainties. [For instance,
  the Van Nuys testbed, by itself, likely will indicate that uncertainties in
  geotechnical properties are not important.] This matrix is expected to incorporate
  cases that are not covered by the testbeds.
The following provides a summary of the issue discussions in the breakout session:
Sources of uncertainties
See attachment
Methods for quantifying uncertainties (ability to quantify)
• Observations
• Statistics
• Judgment
• Expert opinions (much needed for uncertainties in DVs)
Ranking of uncertainties
• This requires analytical studies with the aforementioned matrix.
Methods for propagating uncertainties
Propagation of Uncertainties
In some respects, the PEER methodology already addresses the propagation of some
important sources of uncertainty in the PBEE framework equation. For example, the
mean annual frequency that an engineering design parameter (EDP) exceeds a
prescribed threshold y is computed as
λ_EDP(y) = ∫ P(EDP > y | IM = x) |dλ_IM(x)| ,                         (1)
where IM is an appropriate ground motion intensity measure. In equation (1), the
uncertainty associated with the random occurrence of earthquakes in time and space
is represented by the hazard curve λ_IM(x), and the record-to-record variability of
structural response to an ensemble of ground motions that have IM = x is
represented by P(EDP > y | IM = x). Clearly, the effects of these sources of
uncertainty are included in λ_EDP(y). However, we must recognize that λ_IM(x) and
P(EDP > y | IM = x) are defined by parameters that are not known with certainty
(e.g., the slope of the hazard curve, the model parameters that define the hysteretic
behavior of nonlinear elements in the structure, etc.) and are therefore random
quantities. Because λ_EDP(y) is a function of these random quantities, it too is
uncertain. A critical component of the PBEE approach under development by
PEER is the assessment of the uncertainty present in computed quantities such as
λ_EDP(y).
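As a minimal numerical sketch (not part of the original notes), equation (1) can be evaluated by discretizing the hazard curve. The power-law hazard curve and the lognormal EDP|IM model below are invented assumptions for illustration, not PEER results.

```python
import numpy as np
from math import erf, log, sqrt

def lambda_im(x):
    """Illustrative power-law hazard curve: mean annual frequency of IM > x."""
    k0, k = 1e-4, 3.0               # assumed coefficients, for illustration only
    return k0 * x ** (-k)

def p_edp_exceeds(y, x):
    """P(EDP > y | IM = x), assuming EDP | IM lognormal with median x."""
    beta = 0.4                      # assumed record-to-record dispersion
    z = (log(y) - log(x)) / beta
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def lambda_edp(y, x_grid):
    """Discretized eq. (1): sum of P(EDP > y | IM = x) * |d lambda_IM(x)|."""
    lam = lambda_im(x_grid)
    d_lam = -np.diff(lam)                      # hazard decreases with x
    x_mid = 0.5 * (x_grid[:-1] + x_grid[1:])   # midpoint rule
    p = np.array([p_edp_exceeds(y, x) for x in x_mid])
    return float(np.sum(p * d_lam))

x_grid = np.linspace(0.05, 3.0, 600)
print(lambda_edp(0.5, x_grid))  # mean annual frequency of EDP > 0.5
```

Since the hazard curve is monotonically decreasing, −diff gives the (positive) increments |dλ_IM(x)| over each grid cell.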
Random variables are called basic if they are observable, i.e., statistical information
can be collected for them. Material properties, member dimensions, soil properties
(anything that you can physically test or measure on its own) are all examples of basic
random variables. In contrast, random quantities that are functions of basic random
variables are called derived random variables. Virtually all EDPs such as stress, strain
and displacements are examples of derived random variables. Of course, quantities
such as λ_EDP(y) are also derived random variables. In assessing the uncertainty in
derived random variables, it is natural to quantify and then propagate the uncertainty
in the underlying basic random variables that the derived random variables depend
upon. The PBEE framework proposed by PEER, which disaggregates the
performance assessment into four distinct parameters (IM, EDP, DM and DV), is well
suited to the identification and quantification of the basic random variables that may
influence quantities such as λ_EDP(y). The challenge that PEER faces is the
propagation of the uncertainties associated with these basic random variables up to
derived quantities of interest.
To better understand the nature of this challenge, a summary of procedures available
for time-invariant problems is first given, followed by a discussion of the
obstacles that must be overcome in order to apply these procedures to the time-variant
problems of interest to PEER.
Procedures available for time-invariant problems
For time-invariant problems, there are many well-established methods of assessing
the uncertainty present in derived random variables, depending upon the nature of the
statistical information available for the basic random variables.
For example, consider a derived quantity R(X_1, X_2, …, X_N) that is a function of
basic random variables X_1, X_2, …, X_N. When only second-moment information is
available for the basic random variables (i.e., means, variances and correlation
coefficients), first-order estimates of the mean and variance of R are

μ_R = R(μ_1, μ_2, …, μ_N) ,                                           (2a)

σ_R² = Σ_{i=1..N} (dR/dx_i)² σ_i²
       + 2 Σ_{i=1..N−1} Σ_{j=i+1..N} (dR/dx_i)(dR/dx_j) ρ_ij σ_i σ_j ,   (2b)

where the derivatives are evaluated at the mean values of the basic random variables.
Note that the implicit current practice in PEER for derived quantities such as
R = λ_EDP(y) is to compute μ_R (assuming that all the uncertain basic random
variables are assigned their mean values). It should be clear from the above equation
for σ_R² that, in addition to statistical information on the underlying basic random
variables (i.e., σ_i², i = 1, 2, …, N), we also require the gradient of R with respect to the
basic random variables, which may or may not be easy to compute. Also note that the
above estimate for σ_R² can lend some insight into the relative importance of the basic
random variables; clearly, the largest terms in the first summation of equation (2b) are
associated with those basic random variables that contribute most to the uncertainty in
R. This information can be useful in an initial screening of the random variables to
identify those that are most significant.
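Equations (2a) and (2b) translate directly into a small first-order, second-moment routine. The sketch below uses finite-difference gradients evaluated at the means; the example function and moments are invented for illustration.

```python
import numpy as np

def fosm(R, mu, sigma, rho, h=1e-6):
    """First-order estimates of the mean and variance of R(X_1..X_N),
    given means mu, standard deviations sigma, and correlation matrix rho."""
    mu = np.asarray(mu, float)
    mu_R = R(mu)                          # eq. (2a): R evaluated at the means
    grad = np.empty_like(mu)
    for i in range(len(mu)):              # central-difference dR/dx_i at mu
        dx = np.zeros_like(mu)
        dx[i] = h
        grad[i] = (R(mu + dx) - R(mu - dx)) / (2.0 * h)
    cov = np.outer(sigma, sigma) * np.asarray(rho, float)
    var_R = grad @ cov @ grad             # eq. (2b) written in matrix form
    return mu_R, var_R

# invented derived quantity: drift demand ~ load / stiffness
R = lambda x: x[0] / x[1]
mu_R, var_R = fosm(R, mu=[10.0, 5.0], sigma=[2.0, 1.0], rho=np.eye(2))
print(mu_R, var_R)
```

The matrix form grad·Cov·grad reproduces both summations of equation (2b), including the factor of 2 on the cross terms.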
For most practical applications, however, information beyond the second moments is
available for the basic random variables (e.g., probability distribution functions) that
can be exploited to provide improved uncertainty estimates in derived quantities,
usually in terms of the probability that the derived quantity exceeds a prescribed
threshold. The first-order reliability method (FORM) and Monte Carlo simulation
(MCS) are two such approaches. Like the first-order-second-moment estimate in
equation (2b), FORM requires the gradient of R with respect to the basic random
variables and readily provides sensitivity measures that can be used to ascertain the
relative importance of the underlying basic random variables. The primary advantage
of a FORM-based approach lies in its computational efficiency relative to alternative
approaches such as MCS (typically, the number of evaluations of R required by a
FORM-based approach is several orders of magnitude less than that required by a
MCS-based approach). MCS avoids the need to compute gradients (which can be
difficult to compute), but at the expense of an increased number of evaluations of R .
Variations of MCS (e.g., importance sampling and directional simulation) that reduce
the required number of evaluations of R are available, but these improved procedures
still require many more evaluations of R than a FORM-based approach. Sensitivity
measures, which can be used to rank the relative importance of the basic random
variables, are also available within a MCS framework. Finally, in addition to the
inherent randomness present in the basic random variables, FORM and MCS-based
approaches can also incorporate statistical uncertainty (due to the use of limited
sample sizes for estimating the parameters that define the probability distributions of
the basic random variables) and modeling uncertainty (due to imperfect models and
limit state definitions).
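To make the FORM-vs-MCS cost comparison concrete, here is a crude Monte Carlo estimate of an exceedance probability for an invented linear limit state with normal variables, chosen so the exact answer (about 0.017) is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented limit state: R = X1 - X2 with X1 ~ N(5, 1), X2 ~ N(2, 1);
# failure event R < 0, exact probability Phi(-3/sqrt(2)) ~ 0.017.
n = 200_000
x1 = rng.normal(5.0, 1.0, n)
x2 = rng.normal(2.0, 1.0, n)
p_mcs = np.mean(x1 - x2 < 0.0)

# the standard error of a crude MCS estimate shrinks like 1/sqrt(n)
se = np.sqrt(p_mcs * (1.0 - p_mcs) / n)
print(p_mcs, se)
```

The 1/√n error decay is what drives the large evaluation counts quoted above: each added digit of accuracy costs roughly 100 times more simulations, whereas FORM needs only a handful of gradient evaluations.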
Obstacles associated with time-variant problems
In theory, the procedures listed above for time-invariant problems are applicable to
time-variant problems such as those encountered in earthquake engineering. However,
in practice, their implementation is difficult.
The most serious obstacle lies in the computation of R and its gradient with respect
to the basic random variables. In particular, the derived quantities of interest to PEER
(e.g., λ_EDP(y) in equation (1)) often involve terms such as P(EDP > y | IM = x) that
are typically determined by MCS. Consequently, the calculation of the uncertainty in
λ_EDP(y) can be time-consuming and prohibitively expensive if it involves random
variables that have a significant influence on P(EDP > y | IM = x). The problem is
particularly serious if the gradient of R is required, as in the case of equation (2b) or
FORM-based approaches. At a minimum, in order for uncertainty propagation to be
feasible, one must be able to compute the derived quantities of interest in an efficient
manner. In the case of quantities based on the PEER framework equation, this implies
that “clever” algorithms must be developed to minimize the number of simulations
needed to achieve accurate results.
Unfortunately, for FORM-based approaches, which promise a significant reduction in
the required number of evaluations of R, a second obstacle associated with
time-variant problems lies in the computation of the gradient of R with respect to the
basic random variables. In particular, for time-variant problems the gradient is often not
continuous and can be very nonlinear (as an example, consider how IDAs will often
“wiggle” away from a smooth curve as one increases the intensity measure), leading
to numerical difficulties when implementing a gradient-based approach such as
FORM. Consequently, an active area of research in structural reliability is focused on
the development of effective tools that circumvent the problems associated with
gradient calculations. However, in spite of the recent attention afforded to this
problem and the development of several promising approaches, the problem remains
an open one and there is a need for additional research. For the propagation of
uncertainties within the PEER framework equation, the best approach is not obvious.
Research into the effectiveness of available procedures and the development of
improved algorithms tailored to the needs of PEER is necessary.
Incorporation in OpenSees
Reliability modules are being developed within OpenSees to provide a comprehensive
tool for uncertainty analysis. The user of OpenSees will be able to declare any set of
material, geometry or load variables as random quantities with distributions assigned
from a library or provided by the user. Dependence between random variables is
accounted for. Certain models of stochastic input with non-stationary characteristics
are also being developed. By specifying a set of “limit-state” functions that describe
an event of interest (e.g., exceedance of a deformation threshold or formation of a
mechanism), the user can compute the probability of the event or the mean rate of its
occurrence in time. Furthermore, the user can determine the sensitivities of the
probability with respect to any set of desired parameters. These sensitivities are useful
in identifying critical sources of uncertainty. Statistical and model uncertainties are
accounted for within a Bayesian reliability framework. It is important to understand
that this analysis is conducted with the same finite element model that the user may
use for deterministic analysis.
Effects of uncertainties on final product = DV (sensitivity of DV to uncertainties)
• This requires analytical studies with the aforementioned matrix.
Research needs
• See “conclusions”; to be expanded.
Develop plan that shows to NSF that we are paying attention
• See “conclusions”; to be expanded.
C.A. Cornell comment: In the PG&E fragility project we are obtaining “bottom line”
epistemic uncertainty measures (one beta to be combined with the EDP aleatory beta) by
a simple expert elicitation (3 engineers) for four classes of PG&E small buildings
(tilt-ups, steel pre-fab, mill and “other”). We are doing this for three damage levels. So the
experts are doing the combining and propagating through to drift or Sa capacity
uncertainties in their heads. We will include these in our three tagging “fragility curves.”
(We don’t do the IM part in this project.)
Attachment
2/14/02
Sources of Uncertainties (emphasis on individual facilities):
Common uncertainties (exist in all other categories):
Model Uncertainty
Mathematical models are used in all aspects of PBEE, starting from the modeling of
the input motion (IMs), characterization of the ground effects and structural response
(EDPs and DMs), and assessment of decision variables (DVs). For example, an
attenuation law used to predict the spectral displacement at a site for a given
earthquake magnitude and location is a mathematical model. Similarly, a finite
element model of the site and the structure, including the employed material laws,
geometric configurations and simplified mechanics rules, is a mathematical model. So
is a rule describing the down time of a structure (a DV) for a given level of structural
damage.
Without exception, all mathematical models are idealizations of reality and, therefore,
carry error and uncertainty. The uncertainty associated with some of the
models used in PBEE is well known and quantified. For example, the error inherent
in an attenuation law is quantified in the process of fitting the model to observed data.
In other cases, the model uncertainty remains completely unknown.
An important area where the model uncertainty remains largely unknown is where
models are used to predict the response of soils and structures. If the soil/structure
remains within the linear elastic range, the existing models of linear theory are fairly
accurate. These models have been validated against numerous laboratory and field
observations. However, when the soil or structure behavior is in the inelastic range,
the behavior is a lot more complex and our mathematical models are unable to capture
that complexity. As a general rule, model uncertainty tends to increase with the
severity of structural response. Unfortunately, this is the region of most interest in
PBEE. Models used in the current practice to assess damage and collapse of structures
are likely to have large errors relative to real structures. This uncertainty is likely to be
far greater than the uncertainty arising from the natural variability in materials and
some loads.
The only way to assess model uncertainty is to compare model predictions with
real-world observations, either in the field or in the laboratory (with proper account of the
departure of laboratory specimens from field reality). Observations of building
response after major earthquakes, including the occurrence or non-occurrence of
damage or collapse, provide valuable information for model assessment. Although
detailed measurements are most informative, observations without measurements can
also be used. Note that observations of no damage or no collapse after an earthquake
can be as informative as the observation of damage or collapse. Laboratory
observations can be used to assess models at component level, as most laboratory
tests are conducted for structural components.
The earthquake engineering community, including PEER researchers, has yet to
make a systematic and careful assessment of the uncertainty inherent in models used in
critical stages of PBEE. At the same time, an enormous amount of information exists in
the form of laboratory and field observations. We propose that PEER initiate projects
aimed at: (a) gathering and cataloging observational information in a form that can be
used for statistical model assessment, (b) selecting key models from the PBEE process
and assessing the associated uncertainty, and (c) developing methods for propagating these
uncertainties through the decision process for performance-based design and analysis.
Bayesian statistical methods are most effective for this purpose, and a general
framework for such analysis has already been developed under PEER funding.
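As a hedged sketch of the Bayesian idea (the test ratios, the lognormal bias model, and the prior are all invented here, not taken from the PEER framework), a model bias factor B = observed/predicted can be updated from component-test data with a conjugate normal update on ln B:

```python
import math

# hypothetical observed/predicted ratios from component tests
ratios = [0.9, 1.1, 0.8, 1.2, 0.95, 1.05]
logs = [math.log(r) for r in ratios]
n = len(logs)

# conjugate normal update on ln B with assumed known data variance s2;
# prior: ln B ~ N(0, tau2), i.e., no prior bias, broad uncertainty
tau2, s2 = 0.25, 0.04
xbar = sum(logs) / n
post_var = 1.0 / (1.0 / tau2 + n / s2)
post_mean = post_var * (n * xbar / s2)     # prior mean is 0

median_bias = math.exp(post_mean)          # posterior median of B
print(median_bias, post_var)
```

Each additional test shrinks the posterior variance, which is exactly the sense in which model uncertainty is epistemic and reducible.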
Statistical and Measurement Uncertainties
In any kind of statistical analysis, including the assessment of model uncertainty, one
has to deal with statistical and measurement uncertainties. Statistical uncertainty
arises from the sparseness of data. It can be reduced by gathering more data. If
additional data cannot be gathered, then one must properly account for the effect of
this uncertainty in all predictions.
Measurement uncertainty arises from errors inherent in our laboratory or field
measurements. This kind of uncertainty is also present when certain variables in a
model remain unknown, such as in the case of assessing the capacity of an existing
building where the material strength cannot be directly measured. Measurement
uncertainty can be reduced by use of more accurate measurement devices and
procedures.
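The statement that statistical uncertainty shrinks as more data are gathered can be made concrete (the yield-strength population below is invented): the standard error of an estimated mean decays like 1/√n.

```python
import math, random

random.seed(1)

# hypothetical yield-strength population: mean 400 MPa, std dev 40 MPa
def sample_mean_se(n):
    """Sample mean and its standard error from n simulated measurements."""
    data = [random.gauss(400.0, 40.0) for _ in range(n)]
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return mean, s / math.sqrt(n)   # standard error = statistical uncertainty

for n in (10, 100, 1000):
    print(n, sample_mean_se(n))
```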
Uncertainties associated with IM
• Random occurrence of earthquakes in space and time (distance and rate of occurrence)
• Earthquake magnitude
• Attenuation from source to site
• Inherent randomness of ground motion time histories
Regardless of which ground motion parameter(s) is (are) used in the PEER
framework equation, its estimate is affected by both aleatory uncertainty (i.e.,
randomness) and epistemic uncertainty. The latter type of uncertainty, at least
theoretically, differs from the former because it can be reduced by more research and
increased knowledge. The most comprehensive reference to date on this subject is the
SSHAC (1997) report.
The aleatory uncertainty affects both the phenomenon of earthquake occurrence and
the generated ground motion severity at the site. The aleatory uncertainty concerns,
for example, the location and time of occurrence of future earthquakes, the magnitude
of the event, the extent of the fault rupture area given the location and the magnitude,
the location of the hypocenter within the rupture area, the relative likelihood of
occurrence of events of different magnitude within the same fault, and the severity of
the ground motion at the site given that an event with specified characteristics has
occurred. The last aspect is customarily accounted for in the error term of empirical
ground motion prediction equations. This error term is intended to capture
uncertainty in the site ground motion generated by a given event that is due to source
effects (e.g., different slip distributions for the same mean slip, different fault rupture
velocities and rise times), source-to-site path effects, basin effects, and local soil effects.
Within the Probabilistic Seismic Hazard Analysis (PSHA) context, the aleatory
uncertainty is (numerically) integrated over all possible values that each parameter
can take. The result of this operation is a site-specific hazard curve for a single
ground motion parameter or a joint frequency distribution for two or more parameters.
The epistemic uncertainty deals with uncertainty in the “true” values of parameters
that we include in the PSHA model (e.g., the slip rate of a fault, the width and the dip
angle of a fault, the segmentation of a fault, the distribution of magnitude of events
generated by a fault, the maximum magnitude that a fault can generate and the ground
motion attenuation model). PSHA deals with epistemic uncertainty by repeating the
hazard calculation for all possible combinations of the (uncertain) input parameters.
The result of this exercise is a family of seismic hazard curves for the single
parameter of choice, or a family of joint frequency distributions if more than one
ground motion parameter at a time is considered.
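Repeating the hazard calculation over all combinations of uncertain inputs amounts to a logic tree. A minimal sketch, with invented branch coefficients and weights standing in for alternative attenuation models:

```python
# epistemic branches: alternative hazard-curve coefficients (k0, k) with weights
branches = [
    ((1e-4, 3.0), 0.3),   # e.g., attenuation model A (invented)
    ((2e-4, 3.2), 0.5),   # e.g., attenuation model B (invented)
    ((5e-5, 2.8), 0.2),   # e.g., attenuation model C (invented)
]

def hazard(x, k0, k):
    """Power-law hazard curve: mean annual frequency of IM > x."""
    return k0 * x ** (-k)

x = 0.5
family = [hazard(x, *params) for params, _ in branches]  # one curve per branch
mean_hazard = sum(w * hazard(x, *params) for params, w in branches)
print(family, mean_hazard)
```

The family of values is the epistemic spread at one IM level; the weighted mean is the single “best estimate” hazard often reported alongside the family.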
Uncertainties associated with geotechnical aspects (IM-EDP)
• Geotechnical geometric and material properties and their effects on ground motion time histories (site soil modification)
• Geotechnical geometric and material properties and their effects on SFSI
• Kramer/Kutter to expand
Uncertainties associated with EDP
• Structural geometric and material properties
• Modeling uncertainties at component and system levels
• Prediction bias and uncertainty in bias
• Construction uncertainties
The prediction of EDPs, given IM and given the uncertainties in IM and geotechnical
parameters, is greatly affected by modeling uncertainties. As stated previously, model
uncertainties tend to increase with the severity of structural response. These
uncertainties are likely to be far greater than the uncertainty arising from the natural
variability in materials and some loads. The following modeling uncertainties
deserve special consideration in this context:
• Location of earthquake input (IM or ground motion record)
• Soil-foundation-structure interface
• Hysteretic characteristics of all important components of the
  foundation/structure/nonstructural system (including “nonstructural” elements
  that contribute to strength and stiffness). This should include effects of
  deterioration due to earthquake loading, but also due to aging and previous
  exposure to damaging events. It also should include the effects of insufficient
  documentation for existing buildings.
• 3-D effects
• Analytical assumptions in computer model (damping, integration time step
  and technique, stability of solution technique, etc.)
• Others??
9
Attachment
2/14/02
Engineering assessment procedures often are based on “approximate” methods of
analysis (e.g., static pushover). Approximations introduce a bias, which needs to be
assessed. The bias will depend on the effects of the approximation, which may be
strongly dependent on structural configuration and ground motion input. Thus, any
measure of bias will be uncertain and may have a significant dispersion.
Prediction models are based on design documentation (which in the case of existing
structures may be supplemented by field observations and measurements). Great
uncertainty in properties may be introduced by construction errors and construction
decisions that are not documented in the design information.
Uncertainties associated with DM (at component level, i.e., assembly-based)
• Uncertainties in establishing the structural response parameters to which building
  components are sensitive (is it as simple as just using peak interstory drift ratio
  and/or peak floor acceleration?)
• Uncertainties in establishing two-dimensional effects (demands) on individual
  components (structural and nonstructural)
• Epistemic uncertainties associated with the introduction of simplifying
  assumptions, such as neglecting the variation of dispersion of EDPs with changes
  in IM (this source of uncertainty can be eliminated if such variation is explicitly
  considered)
• Uncertainties in the characterization of damage states
• Uncertainties in establishing basic central tendency and dispersion parameters to
  develop fragility functions (this uncertainty may be very large for many
  nonstructural components for which there is little or no information to establish
  “damage-motion” pairs)
• Uncertainties associated with using fragility functions based on peak structural
  response parameters that neglect all cumulative effects
• Uncertainties associated with establishing the correlation between different kinds
  of EDP acting at the same location in the structure
• Uncertainties associated with establishing the correlation between the same type
  of EDP acting at different locations within the same structure
• Uncertainties in the correlation of damage measures in different components
  (structural and nonstructural) for a given EDP. For example, what is the
  probability that the ceiling on the third floor will see the same level of damage as
  on the second, given that the peak floor accelerations at both levels are the same?
  Construction uncertainty (from construction defects) would belong here too
• Uncertainties in how different DMs on different components (structural and
  nonstructural) affect the functionality of these components
• Uncertainties in establishing damage to some components produced by damage
  states reached by other components (as opposed to damage resulting from being
  subjected to a certain level of EDP). For example, broken windows caused by
  falling ceiling panels.
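Component fragility functions of the kind discussed above are commonly represented as lognormal CDFs of the governing EDP. A sketch with an invented median and dispersion for a drift-sensitive component:

```python
from math import erf, log, sqrt

def fragility(edp, median, beta):
    """P(damage state reached | EDP): lognormal CDF with given median
    and logarithmic standard deviation (dispersion) beta."""
    return 0.5 * (1.0 + erf((log(edp) - log(median)) / (beta * sqrt(2.0))))

# hypothetical drift-sensitive partition: median drift ratio 0.01, dispersion 0.5
for drift in (0.005, 0.01, 0.02):
    print(drift, fragility(drift, median=0.01, beta=0.5))
```

The central-tendency and dispersion uncertainties listed above are precisely uncertainties in the median and beta parameters of such curves.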
Uncertainties associated with DV
• Consequences of limit state exceedance (collapse, life safety, $ losses, downtime)
• Economic assumptions (discount rate and other assumptions in cost modeling)
• Economic consequence modeling
• Recovery rates (availability of finances, business/construction capability, state of
  the economy in the region)
• Uncertainties in establishing repair/replace actions for each damage state of each
  structural and nonstructural component in the building
• Uncertainties in establishing basic central tendency and dispersion parameters of
  cost functions relating the probability of exceeding a certain dollar amount (loss)
  for each type of structural and nonstructural component
• Uncertainties in establishing basic central tendency and dispersion parameters of
  time functions relating the probability of exceeding a certain amount of time to
  repair/replace for each type of structural and nonstructural component
• Uncertainties in establishing probability distributions of cost given damage states
  for each component
• Uncertainties in establishing probability distributions of time to repair/replace
  given damage state for each component
• Uncertainties in establishing the correlation between the repair/replace costs of
  the same type of components within the same location in the building
• Uncertainties in establishing the correlation between the repair/replace costs of
  the same type of components in different locations in the building
• Uncertainties in establishing the correlation between the repair/replace costs of
  different types of components
• Uncertainties in the “economy of scale” effects on repair/replace cost functions
• Uncertainties in establishing the correlation between the times to repair/replace of
  all types of components
• Uncertainties in the construction planning
• Uncertainties in the construction schedule
• Uncertainties associated with costs related to changes in the construction schedule
  (the same activities on a different schedule have different costs)
• Uncertainties in the effect of loss of functionality of individual components on the
  loss of functionality of a portion of the building or of the whole building
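The several correlation bullets above matter because the dispersion of the total loss depends strongly on the assumed correlation between component repair costs. A two-component sketch with invented numbers:

```python
import math

# two components with uncertain repair costs (mean and std dev, in $1000)
mu = [50.0, 50.0]
sigma = [20.0, 20.0]

def total_cost_stats(rho):
    """Mean and std dev of the total cost C1 + C2 for correlation rho."""
    mean = mu[0] + mu[1]
    var = sigma[0] ** 2 + sigma[1] ** 2 + 2.0 * rho * sigma[0] * sigma[1]
    return mean, math.sqrt(var)

for rho in (0.0, 0.5, 1.0):
    print(rho, total_cost_stats(rho))
```

The mean total loss is unaffected by correlation, but the standard deviation grows from √2·σ (independent) to 2·σ (perfectly correlated), so ignoring correlation understates the tail of the loss distribution.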
Uncertainties in Urban Risk Analysis
• To everybody: please provide input