Predictive Methodology and Application in Economics and Finance
Conference Presenters and Abstracts
______________________________________
Ted Anderson (twa@stat.Stanford.EDU)
Department of Statistics, Stanford University
Reduced Rank Regression for Blocks of Simultaneous Equations
_________________________________________________
Richard Carson (rcarson@ucsd.edu)
Department of Economics, University of California, San Diego
Air Travel Passenger Demand Forecasting
Abstract
Transportation modelers have traditionally used a four-stage model of travel demand: (1) generation of the total number of trips, (2) distribution of these trips between origins and
destinations (O-D), (3) the choice of the mode of travel for each trip, and (4) the choice of route for each trip. Linkage between these steps has largely been ad hoc and “equilibrium”
solutions have generally been achieved by “recalibration” of aberrant estimates. This paper proposes a unified estimation framework in the context of airline passenger demand and
draws heavily on the recent industrial organization literature (e.g., Berry, Levinsohn, and Pakes, 1995) that deals with sorting out the endogeneity of price and product attributes.
Stages (1) and (3) of the traditional four-stage model are effectively combined into a single stage which estimates the propensity for air travel taking the relevant population as known.
Stage (4) is moved to the right hand side by assuming that passengers can only choose between itineraries offered by airlines and only care about the attributes of those itineraries such
as airline and travel time. The framework put forth recognizes: (a) that a key component of the trip generation process from a particular origin is identified only in a panel data context,
(b) that the attractiveness of flight options from an origin should influence the number of trips in a well-defined utility sense that ties the first two stages together, (c) that it is
possible with a richer specification of the cost component to estimate a (latent) airline-specific component, and (d) that introducing a richer set of “attractor” variables and lagged O-D
proportions as predictors can start to help explain the evolution of the (usually assumed to be static) origin-destination matrix over time. The paper concludes by discussing how the
framework put forth can be empirically implemented by using the long-standing U.S. Department of Transportation’s quarterly sample of 10% of all airline tickets augmented with
available data sources.
_______________________________________________
Xiaohong Chen (xiaohong.chen@nyu.edu)
Department of Economics, New York University
[joint with Yanqin Fan, Department of Economics, Vanderbilt University, Box 1819 Station B, Nashville, TN 37235-1819,
yanqin.fan@vanderbilt.edu ]
Estimation of A New Class of Semiparametric Copula-based Multivariate Dynamic Models
Abstract
Economic and financial multivariate time series are typically nonlinear, non-normally distributed, and have nonlinear co-movements beyond the first and second conditional moments.
Granger (2002) points out that the classical linear univariate and linear co-movements of multivariate modelling (based on the Gaussian distribution assumption) clearly fail to explain
the stylized facts observed in economic and financial time series and that it is highly undesirable to perform various economic policy evaluations, financial forecasts, and risk
management based on the classical conditional (or unconditional) Gaussian modelling. Knowledge of the multivariate conditional distribution (especially the fat-tailedness,
asymmetry, positive or negative dependence) is essential in many important financial applications, including portfolio selection, option pricing, asset pricing models, Value-at-Risk
(market risk, credit risk, liquidity risk) calculations and forecasting. Thus the entire conditional distributions of multivariate nonlinear economic (financial) time series should be studied,
see Granger (2002). One obvious solution is to estimate the multivariate probability density fully nonparametrically. However, it is known that the accuracy and the convergence rate of
nonparametric density estimates deteriorate fast as the number of series (dimension) increases. The "curse of dimensionality" problem gets even worse for economic and financial
multiple time series, as they often move together, leading to the sparse data problem where there is plenty of data in some regions but little data in other regions within the support of the
distribution. Also, economic and financial multivariate time series typically have time-varying conditional first and second moments, which makes it hard to statistically justify the
nonparametric estimation of the conditional density of the observed series. Moreover, it is known that fully nonparametric modelling will lead to less accurate forecasts, risk
management calculations and policy evaluations.
In this paper, we introduce a new broad class of semiparametric copula-based multivariate dynamic (hereafter SCOMDY) models, which allows for the estimation of multivariate
conditional density semiparametrically. The new SCOMDY class of models specifies the multivariate conditional mean and conditional variance parametrically, but specifies the
distribution of the (standardized) innovations semiparametrically as a parametric copula evaluated at nonparametric univariate marginals. By Sklar's (1959) theorem, any multivariate
distribution with continuous marginals can be uniquely decomposed into a copula function and its univariate marginals, where a copula function is simply a multivariate distribution
function with uniform marginals. In our SCOMDY specification, the copula function captures the concurrent dependence between the components of the multivariate innovation, while
the marginal distributions characterize the behavior of individual components of the innovation. Our semiparametric specification has important appealing features: First, the distribution
of the multivariate innovation depends on nonparametric functions of only one dimension and hence achieves dimension reduction. This is particularly useful in high dimensions and in
cases where individual time series tend to move together and hence data are scarce in certain regions of the support. In the econometrics and statistics literature, great attention has been
devoted to the development of semiparametric regression models that achieve dimension reduction. Well known examples include the partial linear model of Robinson (1988), the single
index model of Ichimura (1993), and the additive model of Andrews and Whang (1990), to name only a few. In view of the importance of modelling the entire multivariate distribution
in economic and financial applications, there is a great need for dimension reduction techniques in modelling the entire distribution; Second, the flexible specification of the innovation
distribution via the separate specification of the copula and the univariate marginal distributions is particularly attractive in financial applications, as financial time series are known to
exhibit stylized facts regarding their comovements (positive or negative dependence, etc.) and their marginal behaviors (skewness, kurtosis, etc.). These stylized facts can be easily
incorporated into the semiparametric specification of the distribution of the innovations, as there exist a wide range of parametric copulas capturing different types of dependence
structures, see Joe (1997) and Nelsen (1999); Third, the conditional mean and conditional variance can take any parametric specifications such as the multivariate ARCH, GARCH,
stochastic volatility, Markov switching, and combinations of them with observed common factors, detrending, deseasonalizing etc., while the copula function can also take any
parametric form such as time-varying, Markov switching, deseasonalizing etc.
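In compact form, the SCOMDY structure described above can be sketched as follows (the notation here is illustrative rather than the authors' own):

Y_t = \mu_t(\theta_1) + H_t(\theta_2)^{1/2} \varepsilon_t, \qquad \varepsilon_t \ \text{i.i.d.}, \qquad F(\varepsilon_1, \ldots, \varepsilon_m) = C\bigl(F_1(\varepsilon_1), \ldots, F_m(\varepsilon_m); \alpha\bigr),

where \mu_t(\cdot) and H_t(\cdot) are the parametric conditional mean and conditional variance, C(\cdot; \alpha) is a parametric copula, and the univariate marginals F_1, \ldots, F_m are left unspecified and estimated nonparametrically.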
Recently copulas have found great success in modelling the (nonlinear) dependence structure of different financial time series and in risk management. See Embrechts, et al. (1999)
and Bouyé, et al. (2000) for reviews; Hull and White (1998), Cherubini and Luciano (2001) and Embrechts, et al. (2001) for the portfolio Value-at-Risk applications; Rosenberg (1999),
and Cherubini and Luciano (2002) for multivariate option pricing via copulas; Li (2000), and Frey and McNeil (2001) for modelling correlated default and credit risk via copulas;
Costinot, et al. (2000) and Hu (2002) for contagion via copulas; Patton (2002a, b, c), and Rockinger and Jondeau (2001) for the copula-based modelling of the time-varying conditional
dependence between different financial series. In the probability literature, the copula approach has mainly been used to generate (or simulate) various multivariate distributions with
given marginals; in the statistics literature, the copula method has been widely used in survival analysis to model nonlinear correlations, see e.g. Joe (1997), Nelsen (1999), Clayton (1978) and Oakes (1982); the copula method has also been applied in the micro-econometrics literature, see e.g. Lee (1982, 1983) and Heckman and Honore (1989).
Although semiparametric copula-based multivariate models have been widely applied, formal study of their econometric and statistical properties is still lacking. In this paper, we first study the identification, ergodicity and other probabilistic properties of the SCOMDY class of models. We then propose a simple two-step estimation procedure and establish the properties of the estimators under correct or incorrect specification of copulas. Our results will make original contributions to the existing theoretical and empirical literature on semiparametric copula-based multivariate modelling and applications.
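A minimal numerical sketch of how such a two-step scheme might look in practice is given below. It is a generic illustration only, assuming standardized residuals from first-step parametric mean/variance filters (e.g. univariate GARCH fits) and a bivariate Gaussian copula; the function names and the choice of copula are illustrative, not the authors' procedure.

    import numpy as np
    from scipy import stats, optimize

    def pseudo_obs(x):
        # Rescaled empirical CDF of a sample evaluated at its own points ("pseudo-observations").
        return stats.rankdata(x) / (len(x) + 1.0)

    def gaussian_copula_nll(rho, u):
        # Negative log-likelihood of a bivariate Gaussian copula at pseudo-observations u (n x 2).
        z = stats.norm.ppf(u)
        det = 1.0 - rho ** 2
        quad = rho ** 2 * (z[:, 0] ** 2 + z[:, 1] ** 2) - 2.0 * rho * z[:, 0] * z[:, 1]
        return 0.5 * (len(u) * np.log(det) + quad.sum() / det)

    def second_step_copula_fit(std_resid):
        # Step 2: map the (step-1) standardized residuals to pseudo-observations with the
        # empirical CDFs, then estimate the copula parameter by maximum likelihood.
        u = np.column_stack([pseudo_obs(std_resid[:, j]) for j in range(std_resid.shape[1])])
        res = optimize.minimize_scalar(gaussian_copula_nll, bounds=(-0.99, 0.99),
                                       args=(u,), method="bounded")
        return res.x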
______________________________________
Valentina Corradi [joint with Norman R. Swanson] (v.corradi@qmul.ac.uk)
Department of Economics, Exeter University
Predictive Density Evaluation In the Presence of Generic Misspecification
This paper outlines a procedure for assessing the relative out-of-sample predictive accuracy of multiple conditional distribution models. The procedure is closely related to Andrews'
(1997) conditional Kolmogorov test and to White's (2000) reality check approach. Our approach is compared with methods from the extant literature, including methods based on the
probability integral transform and the Kullback-Leibler Information Criterion. An appropriate bootstrap procedure for obtaining critical values in the context of predictions constructed using rolling and recursive estimation schemes is developed. A Monte Carlo experiment comparing the performance of rolling, recursive and fixed sample schemes for the bootstrapping methods developed in this paper shows that the coverage probabilities of the bootstrap are quite good, particularly when compared with the analogous full sample block bootstrap.
Finally, an empirical illustration is provided and the predictive density accuracy test is used to evaluate a small group of competing inflation models.
______________________________________
Frank Diebold (fdiebold@sas.upenn.edu)
Departments of Economics, Finance and Statistics, University of Pennsylvania
[joint with Torben Andersen, Department of Finance, J.L. Kellogg Graduate School of Mgmt., Northwestern University, 2001 Sheridan Road,
Evanston, IL 60208-2006, andtoga@casbah.acns.nwu.edu,
Tim Bollerslev, Department of Economics, Social Science Building, Duke University, Durham, NC 27708-0097, boller@econ.duke.edu, Ms.
Ginger Wu, Dept. of Econs., U. of Pa., 3718 Locust Walk, Phila., PA 19104-6297, jinw@ssc.upenn.edu]
Realized Beta
Abstract:
A large literature over several decades reveals both extensive concern with the question of time-varying systematic risk and an emerging consensus that systematic risk is in fact time-varying, leading to the conditional CAPM and its associated time-varying betas. Set against that background, we assess the dynamics in realized betas, vis-à-vis the dynamics in the
underlying realized market variance and individual equity covariances with the market. We use powerful new econometric theory that facilitates model-free yet highly efficient
inference allowing for nonlinear long-memory common features, and the results are striking. Although realized variances and covariances are very highly persistent, realized betas,
which are simple nonlinear functions of those realized variances and covariances, are much less persistent, and arguably constant. We conclude by drawing implications for asset pricing
and portfolio management.
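As an illustration of the object under study, a period-by-period realized beta can be computed from high-frequency returns as the ratio of the realized covariance with the market to the realized market variance. The sketch below is an illustration of that ratio only, not the authors' code, and assumes arrays of intra-day returns for a single period.

    import numpy as np

    def realized_beta(stock_intraday, market_intraday):
        # Realized covariance (sum of cross-products of intra-day returns) divided by
        # realized market variance (sum of squared intra-day market returns).
        realized_cov = np.sum(stock_intraday * market_intraday)
        realized_var = np.sum(market_intraday ** 2)
        return realized_cov / realized_var

    # e.g. one beta per day from lists of intra-day return arrays:
    # betas = [realized_beta(s, m) for s, m in zip(stock_days, market_days)]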
______________________________________
Jean-Marie Dufour (jean.marie.dufour@umontreal.ca)
Department of Economics, Université de Montréal
[joint with Tarek Jouini, Université de Montréal]
Finite-sample simulation-based inference in VAR models with applications to order selection and causality testing
Abstract:
Statistical inference in vector autoregressive (VAR) models is typically based on large-sample approximations, involving the use of asymptotic distributions or bootstrap techniques.
After documenting that such methods can be very misleading even with realistic sample sizes, especially when the number of lags or the number of equations is not small, we propose a
general simulation-based technique that allows one to control completely the level of tests in parametric VAR models. In particular, we show that maximized Monte Carlo tests
[Dufour(2002)] can provide provably exact tests for such models, whether they are stationary or integrated. Applications to order selection and causality testing are considered as special
cases. The technique developed is applied to a VAR model of the U.S. economy.
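The simulated p-value at the heart of a Monte Carlo test is easy to sketch. The snippet below illustrates only the basic (non-maximized) principle at a fixed nuisance-parameter value; the maximization over nuisance parameters that delivers provably exact tests in Dufour (2002) is not shown, and the function names are illustrative.

    import numpy as np

    def mc_pvalue(observed_stat, simulate_stat_under_null, n_rep=99, seed=None):
        # Simulate the test statistic n_rep times under the null hypothesis (at a fixed
        # nuisance-parameter value) and compare the simulated values with the observed one.
        rng = np.random.default_rng(seed)
        sims = np.array([simulate_stat_under_null(rng) for _ in range(n_rep)])
        # With a continuous statistic, this p-value yields an exact-level test.
        return (1 + np.sum(sims >= observed_stat)) / (n_rep + 1)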
______________________________________
Eric Ghysels (eghysels@email.unc.edu)
Departments of Economics and Finance, University of North Carolina, Chapel Hill
[joint with Elena Andreou, Dept. of Economics, University of Cyprus, P.O. Box 537
CY 1678, Nicosia – Cyprus, elena.andreou@ucy.ac.cy]
Monitoring and Forecasting Disruptions in Financial Markets
Abstract
Disruptions of financial markets are defined as change-points in the conditional distribution of asset returns that result in financial losses beyond those that can be anticipated by
current risk management measures such as Expected Shortfalls or Value at Risk. The conditional distribution is monitored to establish stability in financial markets by sequentially
testing for disruptions. Some recent examples of sequential monitoring of structural change in economics are Chu et al. (1996) and Leisch et al. (2000), which, however, focus on linear
regression models. Our analysis considers strongly dependent financial time series and distributional change-point tests. Forecasting the probability of disruptions is pursued along two
dimensions. The first dimension involves the Black-Scholes (BS) formula and the second the multivariate conditional information (CI) approach. In the univariate framework the BS
formula in conjunction with de-volatilized returns forecasts the probability of a disruption within a given horizon (e.g. of 10 days). The following statistical results are used for the BS
inputs. The empirical process convergence results that relate to transformations of the weighted sequential ranks and the (two parameter) sequential Empirical Distribution Function
(EDF) of normalized returns yield Brownian motion and Brownian Bridge approximations, respectively (Bhattacharya and Frierson, 1981, Horvath et al., 2001). The power of this
procedure is assessed (via simulations and empirical applications) with respect to different weighting schemes, warning lines, optimal stopping rules (and in particular speed of
detection) as well as testing the validity of sequential probability forecasts (Seillier-Moiseiwitsch and Dawid, 1993). In addition, the local power asymptotic results show that it is advantageous to capitalize on high-frequency (say hourly) returns and volatility filters (Andreou and Ghysels, 2002, 2003b). This has the advantage of multiplying the sample size by the intra-day frequency (as opposed to the daily sample) and thus increasing the accuracy of the forecast, as well as uncovering additional intra-day information that is statistically treated as an early warning sign for disruptions and which is otherwise lost with aggregation or constitutes a forgone opportunity for hedging strategies. The absolute returns (Ding et al., 1993, Ding and Granger, 1996, Granger and Sin, 2000, Granger and Starica, 2001) and power variation filters of volatility (Barndorff-Nielsen and Shephard, 2003) enjoy power in detecting change-points in financial markets. In a multivariate framework the CI method is based on variables that can predict the conditional distribution of stock returns and can be used as leading
indicators of a disruption. Conditioning variables involve volume, flight of foreign exchange, banking sector indicators etc. (see also Chen et al., 2001). The international comovements
of financial markets are also captured and sequentially monitored in this framework (Andreou and Ghysels, 2003a). Leading indicators and warning lines with different probabilities are
used to evaluate the probability of a disruption in a multivariate framework. The BS formula complements the multivariate forecasting framework by using residual-based EDF results.
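To fix ideas, sequential monitoring of a distributional change can be sketched as repeatedly comparing the empirical distribution function of the post-sample (de-volatilized) returns with that of a historical calibration sample and flagging the first crossing of a boundary. The snippet below is a generic illustration only; the statistic, the weighting, and the boundary value are placeholders, not those derived in the paper.

    import numpy as np

    def sequential_edf_monitor(historical, monitored, boundary=1.36):
        # After each new observation, compute a two-sample Kolmogorov-Smirnov-type distance
        # between the EDF of the historical sample and the EDF of the monitoring sample,
        # and return the first time the scaled distance crosses the boundary ("warning line").
        hist = np.sort(historical)
        n = len(hist)
        for t in range(1, len(monitored) + 1):
            window = np.sort(monitored[:t])
            grid = np.concatenate([hist, window])
            edf_hist = np.searchsorted(hist, grid, side="right") / n
            edf_mon = np.searchsorted(window, grid, side="right") / t
            stat = np.sqrt(n * t / (n + t)) * np.max(np.abs(edf_hist - edf_mon))
            if stat > boundary:
                return t  # first alarm time
        return None  # no disruption signalled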
______________________________________
Jesus Gonzalo (jgonzalo@est-econ.uc3m.es)
Departamento de Estadística y Econometría, Universidad Carlos III de Madrid
[joint with Oscar Martinez, Departamento de Estadística y Econometría, Universidad Carlos III de Madrid]
TIMA models: Does Size Matter?
______________________________________
Niels Haldrup (nhaldrup@econ.au.dk)
Department of Economics, Aarhus University
[joint with Morten Orregaard Nielsen, Department of Economics,
Cornell University, 482 Uris Hall,
Ithaca, NY 14853, mon2@cornell.edu]
A Markov Switching Model with Long Memory
Abstract:
The purpose of the present study is to develop new techniques to analyze, for example, the price behaviour of spot electricity markets, which are characterized
by strong seasonal variation as well as a high degree of long memory with fractional integration orders in the range of 0.4 to 0.5. For instance,
several of the Nordic electricity areas (Nordpool) are physically connected bilaterally in the exchange of electricity on the spot market and relative
prices between the various areas tend to have a markedly lower degree of long memory, suggesting the existence of fractional co-integration. One
explanation of this is that the spot markets are highly liberalized, whereby identical prices exist in the whole of the Nordic region when no
congestion across the bilateral markets exists. However, transmission capacity is limited and congestion may arise in the transmission network in
the sense that the level of transmission desired by the market exceeds the available capacity in one or more Nordic transmission lines. In these
situations several market prices will exist in the Nordic area depending upon the number of congestion constraints and hence, in these periods
several price areas may co-exist. Therefore, there are hours where some prices are identical and other periods where prices differ due to bottlenecks.
These properties call for the development of new econometric techniques to account for the possibility of both (seasonal) fractional integration and
the possibility of state-dependent regimes reflecting congestion and non-congestion periods. In so doing, we formulate a Markov-switching
(seasonal) fractional integration model. Estimation methods using the Maximum Likelihood technique are developed in addition to procedures for
model selection and evaluation. Finally, the Nordic electricity spot market will be analyzed using the new techniques.
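For intuition, the building block of such a model is the fractional difference filter (1 - L)^d, whose order d can then be switched by a regime path. The sketch below is a toy illustration only (no seasonality, no estimation, and a given regime path); the names and details are illustrative, not the paper's method.

    import numpy as np

    def frac_diff_weights(d, n_lags):
        # Coefficients of the expansion of (1 - L)^d, truncated at n_lags.
        w = np.empty(n_lags + 1)
        w[0] = 1.0
        for k in range(1, n_lags + 1):
            w[k] = w[k - 1] * (k - 1 - d) / k
        return w

    def simulate_switching_long_memory(d_by_regime, regime_path, sigma=1.0, seed=None):
        # Build x_t by applying the truncated (1 - L)^{-d} expansion to white noise,
        # where d depends on the current regime of a given Markov regime path.
        rng = np.random.default_rng(seed)
        T = len(regime_path)
        eps = rng.normal(scale=sigma, size=T)
        x = np.zeros(T)
        for t in range(T):
            d = d_by_regime[regime_path[t]]
            w = frac_diff_weights(-d, t)       # (1 - L)^{-d} weights up to lag t
            x[t] = w @ eps[t::-1]              # weighted sum of current and past shocks
        return x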
______________________________________
Jim Hamilton (jhamilton@ucsd.edu)
Department of Economics, University of California, San Diego
[joint with Michael C. Davis, Department of Economics, 101 Harris Hall, 1870 Miner Circle, University of Missouri, Rolla, MO 65409-1250,
davismc@umr.edu]
Why Are Prices Sticky? The Dynamics of Wholesale Gasoline Prices
Abstract
The menu-cost interpretation of sticky prices implies that the probability of a price change should depend on the past history of prices and fundamentals only through the gap between
the current price and the frictionless price. We find that this prediction is broadly consistent with the behavior of 9 Philadelphia gasoline wholesalers. We nevertheless reject the menu-cost model as a literal description of these firms’ behavior, arguing instead that price stickiness arises from strategic considerations of how customers and competitors will react to price
changes.
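The testable implication stated above can be illustrated with a simple binary-response regression: an indicator for a price change regressed on the gap between the current price and the frictionless price. This is only an illustrative sketch of the prediction, not the paper's specification; in particular, including the absolute gap as a regressor is a choice made here for illustration.

    import numpy as np
    import statsmodels.api as sm

    def price_change_probit(price, frictionless_price):
        # Dependent variable: did the price change this period?  Regressors: the gap between
        # the (beginning-of-period) price and the frictionless price, plus its absolute value.
        gap = price[:-1] - frictionless_price[:-1]
        changed = (np.diff(price) != 0).astype(float)
        X = sm.add_constant(np.column_stack([gap, np.abs(gap)]))
        return sm.Probit(changed, X).fit(disp=0)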
______________________________________
Bruce Hansen (bhansen@ssc.wisc.edu)
Department of Economics, University of Wisconsin
Interval Forecasts and Parameter Uncertainty
Abstract
Forecast intervals generalize point forecasts to represent and incorporate uncertainty. Forecast intervals calculated from dynamic models typically
sidestep the issue of parameter estimation. This paper shows how to construct asymptotic forecast intervals which incorporate the uncertainty due to
parameter estimation. Our proposed solution is a simple proportional adjustment to the interval endpoints, the adjustment factor depending on the
asymptotic variance of the interval estimates. Our analysis is in the context of a linear forecasting equation where the error is assumed independent
of the forecasting variables but with unknown distribution. The methods are illustrated with a simulation experiment and an application to the U.S.
monthly unemployment rate.
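A bare-bones illustration of the idea is sketched below: an interval built from the point forecast and empirical residual quantiles, with its endpoints scaled by a proportional factor. The factor here is a placeholder argument; the paper derives it from the asymptotic variance of the interval estimates, and that derivation is not reproduced.

    import numpy as np

    def adjusted_forecast_interval(point_forecast, residuals, alpha=0.10, adjustment=1.0):
        # Baseline interval: point forecast plus empirical residual quantiles.
        # 'adjustment' proportionally widens the endpoints to reflect parameter
        # estimation uncertainty (placeholder for the paper's adjustment factor).
        lo_q, hi_q = np.quantile(residuals, [alpha / 2.0, 1.0 - alpha / 2.0])
        return point_forecast + adjustment * lo_q, point_forecast + adjustment * hi_q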
______________________________________
David Hendry (david.hendry@nuffield.ox.ac.uk)
Department of Economics, Oxford University
Unpredictability and the Foundations of Economic Forecasting
______________________________________
Yongmiao Hong (yh20@cornell.edu)
Department of Economics, Cornell University
Generalized Cross-Spectral Tests for Out-of-Sample Granger Causality in Mean
Abstract
We use the generalized cross-spectral density derivative to develop an omnibus test for Granger causality in an out-of-sample context. The generalized cross-spectral density derivative
can detect both linear and nonlinear Granger causality in conditional mean and is robust to conditional heteroskedasticity of unknown form and other higher order serial dependence.
Fixed sample, rolling and recursive estimation methods are all allowed, and parameter estimation uncertainty has no impact on the asymptotic distribution of the proposed test statistics.
We apply the test to examine the semi-strong form of market efficiency for stock returns using interest rate spreads.
______________________________________
Cheng Hsiao (schless@email.usc.edu)
Department of Economics, University of Southern California
[joint with Siyan Wang, 450 Rue Adelard, Apt 2N, Nun’s Island, Quebec H3E 1B5, Canada, wangs@be.udel.edu]
Modified Two Stage Least Squares Estimator for Nonstationary Structural Vector Autoregressive Models
Abstract
We show that despite all variables being integrated of order 1, the ordinary least squares estimator remains inconsistent for a structural VAR model. We propose two modified two stage
least squares estimators that are consistent and have limiting distributions that are either normal or mixed normal. Monte Carlo studies are conducted to evaluate their finite sample
properties.
______________________________________
Tae-Hwy Lee (Taelee@ucrac1.ucr.edu)
Department of Economics, University of California, Riverside
[joint with Yang Yang, Department of Economics, University of California, Riverside, yangy08@student.ucr.edu]
Bagging Predictor for Time Series Using Generalized Loss Functions
Abstract
Bootstrap aggregating, or bagging, introduced by Breiman (1996), has proved to be a very effective way to improve forecasts from unstable classifiers. However, almost all of the current work deals with cross-section data under symmetric squared error loss. We show how bagging works for time series binary response and binary quantile problems under asymmetric, possibly non-differentiable, loss functions. Monte Carlo experiments are conducted to demonstrate the results. The method can then be applied to the prediction of signs (market timing, turning points) and other decision-making problems (based on generalized loss functions). In particular, we use the artificial neural network model for the sign and quantile forecasts. Optimal bagging combinations under the general loss are also examined.
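A generic sketch of bagging for a time-series sign forecast is given below: fit a simple classifier on moving-block bootstrap resamples and average the predicted probabilities. This illustrates the bootstrap-aggregating idea only; the paper's estimators use neural networks and generalized (asymmetric) loss, and the block bootstrap and logistic learner here are simplifications chosen for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def moving_block_indices(T, block_len, rng):
        # Moving-block bootstrap indices for a sample of length T.
        n_blocks = int(np.ceil(T / block_len))
        starts = rng.integers(0, T - block_len + 1, size=n_blocks)
        return np.concatenate([np.arange(s, s + block_len) for s in starts])[:T]

    def bagged_sign_probability(X, y, x_new, n_boot=100, block_len=10, seed=None):
        # Fit a classifier on each block-bootstrap resample and average the predicted
        # probability that the next-period sign is positive.
        rng = np.random.default_rng(seed)
        probs = []
        for _ in range(n_boot):
            idx = moving_block_indices(len(y), block_len, rng)
            clf = LogisticRegression().fit(X[idx], y[idx])
            probs.append(clf.predict_proba(x_new.reshape(1, -1))[0, 1])
        return float(np.mean(probs))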
______________________________________
Mark Machina (mmachina@ucsd.edu)
Department of Economics, University of California, San Diego
[joint with Clive W.J. Granger, Department of Economics, University of California, San Diego, cgranger@ucsd.edu]
Structurally Induced Volatility Clusters
______________________________________
Hashem Pesaran (hashem.pesaran@econ.cam.ac.uk)
Department of Economics, Cambridge University
[joint with Paolo Zaffaroni, Department of Applied Economics University of Cambridge
Sidgwick Avenue, Cambridge CB3 9DD, England, paolo.zaffaroni@econ.cam.ac.uk]
Application of Bayesian Model Averaging to Volatility Forecasting and Risk Management
Forecasting conditional volatility and conditional cross-correlations of financial asset returns is essential for optimal asset allocation, managing portfolio risk, derivative pricing and
dynamic hedging. Volatility and correlations are not directly observable but can only be estimated using historical data on asset returns. Many alternative parametric and semi- or nonparametric specifications do exist, although when looking at large-scale problems only a limited number of options are computationally feasible. For instance, the multivariate generalized autoregressive conditional heteroskedasticity model of order (1,1) (multivariate GARCH(1,1)), in its most general specification (see Bollerslev, T., R. Engle, and J. Wooldridge (1988)), is

r_t = \Sigma_t^{1/2} z_t, \qquad \mathrm{vech}(\Sigma_t) = \omega_0 + A_0 \, \mathrm{vech}(\Sigma_{t-1}) + B_0 \, \mathrm{vech}(r_{t-1} r_{t-1}'),      (1)

where r_t = (r_{1t}, ..., r_{mt})' and r_{it} is the rate of return on the ith asset. Hereafter \Sigma_t denotes the conditional covariance matrix of r_t with respect to the non-decreasing information set F_{t-1}. For expositional simplicity we shall assume that E(r_t | F_{t-1}) = 0. Here \mathrm{vech}(\cdot) denotes the column-stacking operator applied to the lower portion of a symmetric matrix, and z_t = (z_{1t}, ..., z_{mt})' is an m-dimensional i.i.d. sequence with E(z_t) = 0 and E(z_t z_t') = I_m, where I_m is the identity matrix of dimension m x m. Concerning model parameters, \omega_0 is an m(m+1)/2 x 1 vector and A_0, B_0 are m(m+1)/2 x m(m+1)/2 matrices of coefficients. Note how such a low-order model already contains a large number of parameters even for moderate values of m. For this reason, a so-called diagonal GARCH(1,1) is often considered for practical estimation, obtained with spherical A_0, B_0, yielding

\sigma_{ij,t} = \omega_{ij,0} + \beta_0 \sigma_{ij,t-1} + \alpha_0 r_{i,t-1} r_{j,t-1}, \qquad i, j = 1, ..., m.      (2)
The diagonal GARCH(1,1) of (2) has close analogies with the so-called Riskmetrics model (see J.P. Morgan - Reuters (1996)), which represents the workhorse model used by practitioners,

\sigma_{ij,t} = \frac{1 - \lambda_0}{1 - \lambda_0^n} \sum_{s=1}^{n} \lambda_0^{s-1} r_{i,t-s} r_{j,t-s},      (3)

which satisfies the recursion (in matrix notation)

\Sigma_t = \lambda_0 \Sigma_{t-1} + \frac{1 - \lambda_0}{1 - \lambda_0^n} r_{t-1} r_{t-1}' - \frac{1 - \lambda_0}{1 - \lambda_0^n} \lambda_0^{n} r_{t-n-1} r_{t-n-1}'.      (4)

When letting n \to \infty in (3), the diagonal GARCH(1,1) and Riskmetrics coincide for \omega_{ij,0} = 0 and \alpha_0 = 1 - \beta_0 = 1 - \lambda_0. Such constraints, however, have relevant consequences in terms of the statistical properties of the model, as discussed below. For a detailed description of this approach, we refer to J.P. Morgan - Reuters (1996) and Litterman, R., and K. Winkelmann (1998).
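For reference, the n \to \infty form of the Riskmetrics recursion above reduces to an exponentially weighted moving-average covariance filter, which can be computed as sketched below. This is an illustration only; \lambda = 0.94 is the value commonly used for daily data, and the initialization is a choice made here.

    import numpy as np

    def ewma_covariances(returns, lam=0.94):
        # Riskmetrics-type recursion Sigma_t = lam * Sigma_{t-1} + (1 - lam) * r_{t-1} r_{t-1}',
        # applied to a (T x m) array of returns; initialized at the sample covariance.
        T, m = returns.shape
        sigma = np.cov(returns.T)
        out = np.empty((T, m, m))
        for t in range(T):
            out[t] = sigma                          # forecast made with data up to t-1
            r = returns[t][:, None]
            sigma = lam * sigma + (1.0 - lam) * (r @ r.T)
        return out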
In order to overcome some statistical problems associated with the Riskmetrics approach (see Zaffaroni, P. (2003) for details) and, at the same time, to allow the multivariate GARCH to handle large-scale applications, a number of approaches have recently been advanced in the literature. Alexander, C. (2001) proposed orthogonal GARCH, whereby the r_{it} are orthogonalized by principal components before applying the GARCH filter. Engle, R. (2002) introduced the dynamic conditional correlation (DCC) model. This is based on the identity \Sigma_t = Q_t R_t Q_t, where R_t denotes the conditional correlation matrix and Q_t the diagonal matrix with \sigma_{ii,t}^{1/2} on the (i,i)th entry. The DCC model generalizes the constant correlation model of Bollerslev, T. (1990) which, in turn, is based on the strong assumption R_t = R. Zaffaroni, P. (2003) reconsiders the original Riskmetrics approach and establishes a feasible estimation procedure for a suitable generalization.
This paper focuses on these and other alternative approaches to multi-asset volatility modelling, in the case where the number of assets under consideration is relatively large, of the
order of 20-30 assets, and considers the application of recent developments in model evaluation and model averaging techniques to multi-asset volatility models. This is important both
for construction of efficient portfolios as well as for evaluation of existing portfolios. In the econometric literature models are often evaluated by their out-of-sample forecast
performance using standard metrics such as root mean square forecast errors (RMSFE). The application of this approach to volatility models is subject to a number of difficulties. As
pointed out above, volatility is not directly observable and is often proxied by square of daily returns or more recently by the standard error of daily returns using intra-day observations,
known as realized volatility (see, for example, Andersen, T., T. Bollerslev, F. Diebold, and P. Labys (2000)). In multi-asset contexts the use of standard metrics such as RMSFE is further
complicated by the need to select weights to be attached to different types of errors in forecasts of individual asset volatilities and their cross-volatility correlations.
A more satisfactory approach would be to compare different volatility models in terms of their performance in trading and risk management. For example, consider the return \rho_t of a given portfolio defined by \rho_t = w'r_t, where w is a vector of fixed (pre-determined) weights, and suppose that we are interested in computing the capital Value at Risk (VaR) of this portfolio expected at the close of business on day t-1 with probability 1 - \alpha. Denoting this VaR by L_{t-1}(w, \alpha) > 0, it is required that \Pr[\, w'r_t < -L_{t-1}(w, \alpha) \mid F_{t-1} \,] \le \alpha, where F_{t-1} is the information
available on close of day t-1. Using this framework we propose to develop simple criteria for evaluation of alternative volatility forecasts by examining the historical VaR performance
of their associated portfolios. The approach is general and can be applied to strategic asset allocation problems that require volatility forecasts over relatively long periods as well as
more traditional VaR problems with horizons ranging from a single day to a month. Faced with a large number of possible models the general tendency in the econometric literature has
been to select the “best” model using a variety of model selection criteria such as the Akaike Information Criterion or the Schwarz Bayesian Criterion. In this paper we shall consider an
alternative model pooling (or averaging) procedure, where risk of model uncertainty and its diversification via Bayesian model averaging will be emphasized. In this way, following
Granger, C., and M. Pesaran (2000), we hope to present a more unified treatment of empirical portfolio analysis from a decision-theoretic perspective.
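One simple way to operationalize the VaR-based comparison described above is to count, for each model, how often the realized portfolio return violates the VaR implied by that model's covariance forecasts. The sketch below uses a Gaussian approximation with zero conditional mean and is an illustration only, not the decision-theoretic criteria developed in the paper.

    import numpy as np
    from scipy import stats

    def var_violation_rate(returns, cov_forecasts, w, alpha=0.01):
        # Portfolio VaR at level alpha from each day's forecast covariance (Gaussian, zero mean),
        # compared with the realized portfolio return; returns the empirical violation rate.
        z = stats.norm.ppf(1.0 - alpha)
        portfolio_returns = returns @ w
        var_t = z * np.sqrt(np.einsum("i,tij,j->t", w, cov_forecasts, w))
        return float(np.mean(portfolio_returns < -var_t))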
______________________________________
Ser Huang Poon (ser-huang.poon@man.ac.uk)
Department of Finance, University of Manchester
[joint with Namwon Hyung, Department of Economics, The University of Seoul,
90 Cheonnong-dong Dongdaemun-ku, Seoul, 130-743, South Korea, nhyung@uos.ac.kr]
Modelling and Forecasting Financial Market Volatility: The Relevance of
Long Memory
Abstract
It has been recognised for some time that financial market volatility exhibits long memory. In the last few years, there have been a number of studies that exploit this long memory
phenomenon in
producing volatility forecasts. All these studies have used variations of fractionally integrated models such as FIARMA, FIGARCH and FIEGARCH. Although many short memory models, such as break, regime-switching and volatility-component models, are also capable of producing long memory in second moments, each of them entails a different data generating process, and there has been no direct contest between these and the long memory models. The proposed study aims to fill this gap in the literature by comparing the forecasting performance of these models, all of which are
associated with long memory in volatility.
______________________________________
Timo Terasvirta (Timo.Terasvirta@hhs.se)
Department of Economic Statistics, Stockholm School of Economics
[joint with Dick van Dijk, Econometric Institute, Erasmus University Rotterdam, P.O. Box 1738, 3000 DR Rotterdam, The Netherlands,
djvandijk@few.eur.nl]
Smooth Transition Autoregressions, Neural Networks, and Linear Models in Forecasting Macroeconomic Series: A Re-examination
______________________________________
Allan Timmermann (atimmerm@weber.ucsd.edu)
Department of Economics, University of California, San Diego
[joint with Marco Aiolfi, Bocconi University]
Persistence in Forecasting Performance
______________________________________
Mark Watson (mwatson@princeton.edu)
Department of Economics, Princeton University
[joint with Massimiliano Marcellino, massimiliano.marcellino@iue.it and James H. Stock, Department of Economics, Harvard University,
james_stock@harvard.edu]
A Comparison of Direct and Iterated AR Methods for Forecasting Macroeconomic Series h-Steps Ahead
______________________________________
Ken West (kwest@ssc.wisc.edu)
Department of Economics, University of Wisconsin
[joint with Todd E. Clark, Federal Reserve Bank of Kansas City, Kansas City, MO, todd.e.clark@kc.frb.org]
Alternative Approximations for Inference about Predictive Ability
____________________________________
Halbert White (hwhite@ucsd.edu)
Department of Economics, University of California, San Diego
A Framework for Stochastic Causality
Abstract
This paper develops a framework in which to understand causal relationships in a stochastic setting. This framework for stochastic causality is developed by first
treating a simpler deterministic framework and extending this framework in a natural way. Both the deterministic and stochastic causal frameworks described here
accommodate and elucidate simultaneous causality and provide an explicit structure in which to view the systems of simultaneous equations central to structural
economic modeling. The framework makes clear the ceteris paribus aspects of these models, incorporates Granger non-causality in a natural way, and facilitates
insight into when and how unobserved causal factors create obstacles to causal understanding.
______________________________________
Gawon Yoon (gyoon@pusan.ac.kr)
Department of Economics, Pusan University
A Modern Time Series Assessment of `A Statistical Model for Sunspot Activity' by CWJ Granger (1957)