Structural Model

Macroeconometric Structural Model
1. Macro Theoretical Models and the Role of Theory
An economy consists of people making and carrying out decisions and
interacting with each other through markets. Theories provide explanations of
how the decisions are made and how the markets work.
More specifically, a theory includes the choice of the decision-making
units (economic agents), the decision variables and objective function of
each unit, the constraints facing each unit, and the amount of information
each unit has at the time when the decisions are made. Possible
constraints include budget constraints, technological constraints, and
institutional (or legal) constraints. If expectations of future values affect
current decisions, another ingredient of a theory is an explanation of how
expectations are formed.
2. The Transition from Theoretical to Econometric Models
The transition from theoretical models to empirical models is probably the least
satisfying aspect of macroeconomic work. The main reason is that one is
severely constrained by the quantity and quality of the available data, so
restrictive assumptions are generally needed in the transition from the theory to
the data.
Step 1: Data Collection and the Choice of Variables and Identities
Collect the data, create the variables of interest from the raw data,
and separate the variables into exogenous variables, endogenous
variables explained by identities, and endogenous variables
explained by stochastic equations.
Notes: the data should match as closely as possible the variables in the
theoretical model. However, in macroeconomic work this match is usually not
very close because of the highly aggregated nature of the macro data.
Theoretical models are usually formulated in terms of individual agents
(households, firms and so on), whereas the macro data are usually for entire
sectors.
There are many special features and limitations of almost any database that you
should be aware of, and one of the most important aspects of macroeconomic
work, perhaps the most important, is to know your data well. Knowledge of how
to deal with data comes in part through experience and in part from reading
about how others have done it. You cannot learn about data in the abstract.
Step 2: Treatment of Unobserved Variables
Most theoretical models contain unobserved variables, and one of
the most difficult aspects of the transition to econometric
specifications is to deal with these variables. Much of what is
referred to as the “ad hoc” nature of macroeconomic modeling
occurs at this point. The most common unobserved variables are
expected future values, and the standard treatment in empirical
work is to assume that the expected future values of a variable are
a function of its current and past values. The current and past
values of the variable are then used as “proxies” for the expected
future values.
However, this treatment of expectations is unsatisfying. Agents may
look at more than the current and past values of a variable in
forming an expectation of it, and even if they do not, the shapes of
the lag distributions may be quite different from the shapes usually
imposed in econometric work.
Notes: See the DOB’s methodology on the expectations VAR model.
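To make the proxy treatment concrete, here is a minimal numpy sketch (not the DOB’s actual methodology) of one common approach: fit an autoregression by ordinary least squares and iterate it forward, using the h-step forecast as the proxy for the expected future value. The function name `ar_expectation_proxy` and the lag length in the example are illustrative assumptions.

```python
import numpy as np

def ar_expectation_proxy(y, p, h):
    """Illustrative sketch: fit an AR(p) to y by OLS, iterate it forward
    h steps, and return the forecast as a proxy for the expected value."""
    T = len(y)
    # Design matrix: row for period t holds [1, y_{t-1}, ..., y_{t-p}]
    X = np.column_stack([np.ones(T - p)] +
                        [y[p - j - 1:T - j - 1] for j in range(p)])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    # Iterate the fitted equation forward from the observed sample
    hist = list(y)
    for _ in range(h):
        x = np.concatenate(([1.0], hist[::-1][:p]))  # most recent lag first
        hist.append(float(x @ beta))
    return hist[-1]

# Example: proxy for the two-steps-ahead expectation of a series
y = np.cumsum(np.random.default_rng(0).normal(size=200)) * 0.1 + 10
print(ar_expectation_proxy(y, p=4, h=2))
```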
Step 3: Specification of the Stochastic Equations
Write down the equations to be estimated. Since the stochastic
equations are the key part of any econometric model, this step is
of crucial importance. If the theoretical approach is the traditional
one, theory has presumably chosen the LHS and RHS variables. If
theory has not indicated the functional forms and lag lengths of the
equations, a number of versions of each equation may be written
down to be tried, the different versions corresponding to different
functional forms and lag lengths.
Notes: Theory generally has little to say about the stochastic features of the
model, that is, about where and how error terms enter the equations.
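To illustrate what “different versions” of an equation means in practice, the sketch below (the helper name `candidate_designs` is an assumption) builds the target and regressor matrix for each combination of functional form (levels vs. logs) and lag length for a single stochastic equation; each version would then be estimated and compared in Step 4.

```python
import numpy as np

def candidate_designs(y, x, lag_lengths=(1, 2, 4), use_logs=(False, True)):
    """Illustrative sketch: for one equation y_t = f(x_{t-1},...,x_{t-L}) + e_t,
    return {(form, L): (target, regressor matrix)} for every candidate
    functional form and lag length L. Defaults are assumptions."""
    versions = {}
    for logs in use_logs:
        w = np.log(y) if logs else y
        z = np.log(x) if logs else x
        for L in lag_lengths:
            T = len(z)
            X = np.column_stack([np.ones(T - L)] +
                                [z[L - j - 1:T - j - 1] for j in range(L)])
            versions[("log" if logs else "level", L)] = (w[L:], X)
    return versions
```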
Step 4: Estimation
Once the equations of a model have been written down in a form
that can be estimated, the next step is to estimate them. Much
experimentation usually takes place at this step. Different functional
forms and lag lengths are tried, and RHS variables are dropped if
they have coefficient estimates of the wrong sign. Variables with
coefficient estimates of the right sign may also be dropped if the estimates
have t-statistics that are less than about two in absolute value, although
practice varies on this. If at this step things are not working out very well in
the sense that very few significant coefficient estimates of the correct sign
are being obtained, you may go back and rethink the theory or the
transition from the theory to the estimated equations. This process
may lead to new equations to try and perhaps to better results. This
back-and-forth movement between theory and results can be an
important part of the empirical work.
Notes: The estimation technique used initially is usually a limited-information
technique, such as two-stage least squares (2SLS).
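A minimal numpy sketch of two-stage least squares, the limited-information technique named in the note; the function name `tsls` and the argument layout are assumptions. The instrument matrix `Z` should contain all exogenous regressors plus at least as many outside instruments as there are endogenous regressors.

```python
import numpy as np

def tsls(y, X_endog, X_exog, Z):
    """Illustrative sketch of 2SLS for y = [X_endog, X_exog] @ beta + u,
    where X_endog is correlated with the error u."""
    # First stage: project the endogenous regressors onto the instruments
    gamma, *_ = np.linalg.lstsq(Z, X_endog, rcond=None)
    X_second = np.column_stack([Z @ gamma, X_exog])
    # Second stage: OLS of y on the fitted values and the exogenous regressors
    beta, *_ = np.linalg.lstsq(X_second, y, rcond=None)
    return beta
```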
Step 5: Testing and Analysis
After estimating the model you want to test and analyze it. This step
is the one that has been the most neglected in macroeconomic
research. The root mean squared error (RMSE) or Theil’s inequality
coefficient U can be used to evaluate ex ante and ex post forecast
errors.
Notes: Monte Carlo simulation methods can be used to evaluate forecasting
accuracy, taking into account the uncertainty arising from the estimated
parameters, the error terms, or both.
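As a sketch of the Monte Carlo idea, suppose a hypothetical estimated AR(1) with coefficient `beta_hat`, standard error `beta_se`, and residual standard deviation `sigma_hat` (all assumed inputs, not from the notes). Redrawing the parameter alone, the shocks alone, or both on each replication isolates the corresponding source of forecast uncertainty; the version below combines the two.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_forecast_percentiles(y_last, beta_hat, beta_se, sigma_hat, h,
                            n_sims=10_000):
    """Illustrative sketch: simulate the h-step-ahead AR(1) forecast
    distribution, drawing the coefficient (parameter uncertainty) and
    the shocks (error-term uncertainty) afresh on every replication."""
    endpoints = np.empty(n_sims)
    for s in range(n_sims):
        beta = rng.normal(beta_hat, beta_se)           # parameter draw
        y = y_last
        for _ in range(h):
            y = beta * y + rng.normal(0.0, sigma_hat)  # error-term draw
        endpoints[s] = y
    return np.percentile(endpoints, [5, 50, 95])

# Hypothetical inputs for illustration only
print(mc_forecast_percentiles(y_last=2.0, beta_hat=0.9, beta_se=0.05,
                              sigma_hat=0.5, h=4))
```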
3. Forecast Evaluation
In practice, there are no perfect forecasts, even if you have a perfect model.
(Example: suppose we have a zero-mean white-noise series; the optimal forecast
is simply zero, but the actual realizations will be very different.)
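A quick numerical illustration of the point, assuming unit variance for the white noise:

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(0.0, 1.0, size=1000)  # zero-mean white noise
forecast = np.zeros_like(y)          # the optimal point forecast is zero
rmse = np.sqrt(np.mean((y - forecast) ** 2))
print(rmse)  # roughly 1: the model is "perfect", yet every forecast misses
```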
The crucial object in measuring forecast accuracy is the loss function
$$L(Y_{t+h}, \hat{Y}_{t+h,t}), \qquad h = 1, 2, 3, \ldots$$
Sometimes we write it as $L(e_{t+h,t})$, where $e_{t+h,t}$ is the $h$-step-ahead
forecast error.
In addition to the shape of the loss function, the forecasting horizon h is of crucial
importance. Rankings of forecast accuracy may be very different across different
loss functions and different horizons.
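A small hypothetical example of such a ranking reversal: forecast A makes many moderate misses while forecast B is usually exact but occasionally far off, so the mean absolute error prefers B while the mean squared error prefers A.

```python
import numpy as np

e_A = np.array([1.0, 1.0, 1.0, 1.0])  # four moderate misses
e_B = np.array([0.0, 0.0, 0.0, 2.2])  # three exact hits, one large miss
for name, e in (("A", e_A), ("B", e_B)):
    print(name, "MAE =", np.mean(np.abs(e)), "MSE =", np.mean(e ** 2))
# A: MAE = 1.00, MSE = 1.00;  B: MAE = 0.55, MSE = 1.21
```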
A few important and popular measures of accuracy:
First define the
forecast errors: $e_{t+h,t} = Y_{t+h} - \hat{Y}_{t+h,t}$
and the percent errors: $p_{t+h,t} = \dfrac{Y_{t+h} - \hat{Y}_{t+h,t}}{Y_{t+h}}$.
Then we have the
Mean Error: $ME = \dfrac{1}{T}\sum_{t=1}^{T} e_{t+h,t}$,
which measures bias. Other things being the same, the smaller the $ME$, the
better is the model. The
Error Variance: $EV = \dfrac{1}{T}\sum_{t=1}^{T} (e_{t+h,t} - ME)^2$
measures the dispersion of the forecast errors. Other things being the same, the
smaller the $EV$, the better is the model. The
Mean Squared Error: $MSE = \dfrac{1}{T}\sum_{t=1}^{T} e_{t+h,t}^2$ and the
Mean Squared Percent Error: $MSPE = \dfrac{1}{T}\sum_{t=1}^{T} p_{t+h,t}^2$
are, by far, the most popular measures.
Often the square roots of these measures are used to preserve units, yielding the
Root Mean Squared Error: $RMSE = \sqrt{\dfrac{1}{T}\sum_{t=1}^{T} e_{t+h,t}^2}$ and the
Root Mean Squared Percent Error: $RMSPE = \sqrt{\dfrac{1}{T}\sum_{t=1}^{T} p_{t+h,t}^2}$.
Somewhat less popular but nevertheless common accuracy measures are the
Mean Absolute Error: $MAE = \dfrac{1}{T}\sum_{t=1}^{T} |e_{t+h,t}|$ and the
Mean Absolute Percent Error: $MAPE = \dfrac{1}{T}\sum_{t=1}^{T} |p_{t+h,t}|$.
Theil’s Inequality Coefficient U:
$$U = \frac{\sqrt{\frac{1}{T}\sum_{t=1}^{T} (y_t^f - y_t^a)^2}}{\sqrt{\frac{1}{T}\sum_{t=1}^{T} (y_t^f)^2} + \sqrt{\frac{1}{T}\sum_{t=1}^{T} (y_t^a)^2}}$$
where $y_t^f$ is the forecast value of $y_t$ and $y_t^a$ is the actual value of
$y_t$. Note that the numerator of $U$ is just the root mean squared forecast
error, but the scaling of the denominator is such that $U$ will always fall
between 0 and 1. If $U = 0$, then $y_t^f = y_t^a$ for all $t$ and there is a
perfect fit; if $U = 1$, the predictive performance of the model is as bad as it
could possibly be. Hence, the Theil inequality coefficient measures the root
mean squared error in relative terms.
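The coefficient is a direct transcription into numpy; `theil_u` is an assumed name.

```python
import numpy as np

def theil_u(y_f, y_a):
    """Theil's inequality coefficient U for forecasts y_f and actuals y_a."""
    num = np.sqrt(np.mean((y_f - y_a) ** 2))
    den = np.sqrt(np.mean(y_f ** 2)) + np.sqrt(np.mean(y_a ** 2))
    return num / den
```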
Also, the Theil inequality coefficient can be decomposed in a useful way. It can
be shown with a little algebra that
$$\frac{1}{T}\sum_{t=1}^{T} (y_t^f - y_t^a)^2 = (\bar{Y}^f - \bar{Y}^a)^2 + (\sigma_f - \sigma_a)^2 + 2(1 - \rho)\,\sigma_f \sigma_a$$
where $\bar{Y}^f$, $\bar{Y}^a$, $\sigma_f$, and $\sigma_a$ are the means and
standard deviations of the series $y_t^f$ and $y_t^a$, respectively, and $\rho$
is their correlation coefficient, that is,
$$\rho = \frac{1}{\sigma_f \sigma_a T} \sum_{t=1}^{T} (y_t^f - \bar{Y}^f)(y_t^a - \bar{Y}^a)$$
Now we can define the proportions of inequality
$$U^m = \frac{(\bar{Y}^f - \bar{Y}^a)^2}{(1/T)\sum_{t=1}^{T} (y_t^f - y_t^a)^2}$$
$$U^s = \frac{(\sigma_f - \sigma_a)^2}{(1/T)\sum_{t=1}^{T} (y_t^f - y_t^a)^2}$$
and
$$U^c = \frac{2(1 - \rho)\,\sigma_f \sigma_a}{(1/T)\sum_{t=1}^{T} (y_t^f - y_t^a)^2}$$
The proportions $U^m$, $U^s$, and $U^c$ are called the bias, the variance, and
the covariance proportions of $U$. They are useful as a means of breaking down
the simulation error into its characteristic sources. (Note that
$U^m + U^s + U^c = 1$.)
The bias proportion $U^m$ is an indication of systematic error, since it
measures the extent to which the average values of the forecast and actual
series deviate from each other. Whatever the value of the inequality coefficient
may be, we would hope that $U^m$ is close to zero. A large value of $U^m$
(above 0.1) would mean that a systematic bias is present, so that revision of
the model is necessary.
The variance proportion $U^s$ indicates the ability of the model to replicate
the degree of variability in the variable of interest. If $U^s$ is large, it
means that the actual series fluctuated considerably while the simulated series
showed little fluctuation, or vice versa; this would again indicate that the
model should be revised.
Finally, the covariance proportion $U^c$ measures unsystematic error; i.e., it
represents the remaining error after deviations from average values have been
accounted for. Since it is unreasonable to expect predictions to be perfectly
correlated with actual outcomes, this component of error is less worrisome than
the other two. Indeed, for any value of $U > 0$, the ideal distribution of
inequality over the three sources is $U^m = U^s = 0$ and $U^c = 1$.
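The decomposition is equally mechanical to compute. The sketch below (assumed name `theil_decomposition`) uses population ($1/T$) standard deviations so that the identity $U^m + U^s + U^c = 1$ holds exactly.

```python
import numpy as np

def theil_decomposition(y_f, y_a):
    """Bias, variance, and covariance proportions of Theil's U."""
    mse = np.mean((y_f - y_a) ** 2)
    s_f, s_a = y_f.std(), y_a.std()  # population standard deviations
    rho = np.mean((y_f - y_f.mean()) * (y_a - y_a.mean())) / (s_f * s_a)
    u_m = (y_f.mean() - y_a.mean()) ** 2 / mse
    u_s = (s_f - s_a) ** 2 / mse
    u_c = 2 * (1 - rho) * s_f * s_a / mse
    return u_m, u_s, u_c  # sums to 1 up to floating-point rounding
```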