
IDEC8017 Note 01

IDEC8017 Econometric Techniques
LECTURE OUTLINE
TOPIC 1: INTRODUCTION TO ECONOMETRICS
Lecturer: Hoa Nguyen
Disclaimer: This outline has not been subjected to the usual scrutiny reserved for formal publications. It may be distributed outside this class only
with the permission of the Lecturer. Its objective is to save students some
time in writing.
Contents
1.1 Data sources
    1.1.1 Census
    1.1.2 Surveys
    1.1.3 Compiling and aggregating data over time
    1.1.4 Internet data
1.2 Types of Data
1.3 Introduction to econometrics
1.4 Estimation, Estimators and Estimates
    1.4.1 Small sample properties
        1.4.1.1 Unbiasedness
        1.4.1.2 Efficiency
        1.4.1.3 Mean square error
    1.4.2 Asymptotic properties (Large sample properties)
        1.4.2.1 Consistency
        1.4.2.2 Asymptotic normality
1.5 Examples of econometric models and their applications
1.1 Data sources
Watch: 01.01 Lecture video: Sources and types of data
1.1.1 Census
A census collects information on each member of the population of interest.
• Advantage: universality
• Disadvantage: Expensive and time consuming, which limits the topic coverage
1.1.2 Surveys
A survey collects information on a subset of the population.
• Advantage: Cost effective; Feasible
• Disadvantage:
– Sampling errors:
– Non-sampling errors:
∗ Non-response: responses may not be obtained, either partially or at all
∗ Sampling frame under-coverage: the frame misses some units of the target population
∗ Measurement errors: the recorded information is inaccurate (can this happen in a census as well?)
1.1.3 Compiling and aggregating data over time
• Examples are data on GDP, exports, imports, etc.
• The frequency of data depends on how frequently they are aggregated
or compiled
• The level of aggregation varies and depends on how they are aggregated
and compiled.
1.1.4 Internet data
• Records collected by Facebook, Google, Twitter, etc.
• Internet data are growing exponentially and can be analysed with techniques such as Big Data and Machine Learning; these are beyond the scope of this course.
1.2 Types of Data
There are four key types of data available for empirical analysis.
• Cross-sectional data are data collected or observed at only one point in
time.
• Pooled data is a combination of different samples at different points in
time.
• Time series data are data collected over time
• Panel data are data collected over time on the same people or subjects (see the toy illustration below)
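As a toy illustration (hypothetical data, my own sketch in Python/pandas, not from the lecture), the four types differ mainly in how observations are indexed:

    import pandas as pd

    # Cross-sectional data: many units observed at a single point in time
    cross_section = pd.DataFrame({"person": ["A", "B", "C"],
                                  "income_2020": [50, 42, 61]})

    # Time series data: one unit observed repeatedly over time
    time_series = pd.DataFrame({"year": [2018, 2019, 2020],
                                "gdp": [1.90, 1.95, 1.88]})

    # Panel data: the SAME units observed repeatedly over time
    panel = pd.DataFrame({"person": ["A", "A", "B", "B"],
                          "year": [2019, 2020, 2019, 2020],
                          "income": [48, 50, 40, 42]}).set_index(["person", "year"])

    # Pooled data would look like the panel, except that the individuals
    # sampled in 2019 and 2020 need not be the same people.
    print(panel)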
Data can also be classified into the following three types: experimental, non-experimental and quasi-experimental
• experimental data:
– traditionally are available only in basic science. Why?
– Getting experimental data in economics can be expensive, challenging, unethical or even not feasible at times. Why?
– But with some creativity, randomized controlled trials (RCTs) are increasingly popular in economics to measure the impact of certain interventions.
• non-experimental or observed data are the most common form of data in economics. However, using them can pose challenges in identifying causal effects.
• Natural or quasi-experimental data:
– Sometimes nature gives us random events beyond our control, and these can be treated as a natural experiment.
– Techniques for analysing quasi-experimental data such as Regression Discontinuity Design and Difference in Differences are increasingly important in the economic literature
1.3 Introduction to econometrics
Watch: 02.01 Lecture video: Introduction to Econometrics, Estimators versus Estimates, and criteria to select Estimators
• Why do you need to learn econometrics?
• Classical Econometric Modeling
1.4 Estimation, Estimators and Estimates
• Parameters:
• Estimation:
• Estimator vs estimate:
• How do economists justify an estimate?
Figure 1.1: Description of Classical Econometric Modeling
1.4.1 Small sample properties
1.4.1.1 Unbiasedness
The estimator β∗ is said to be unbiased if the mean of its sampling distribution is equal to the true parameter β. That is, the expected value of β∗ in repeated sampling, denoted E(β∗), is equal to β:

E(β∗) = β
Note: Average, mean, expectation mean the same thing.
Figure 1.2: Unbiasedness
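As a sketch of what unbiasedness means in repeated sampling (a hypothetical simulation in Python, not part of the original notes, using the sample mean of a normal sample as the estimator), draw many samples, compute the estimate in each, and check that the average of the estimates is close to the true parameter:

    import numpy as np

    rng = np.random.default_rng(0)
    beta = 5.0            # true population mean we want to estimate
    n, reps = 30, 10_000  # sample size and number of repeated samples

    # Repeated sampling: the sample mean computed in each of `reps` samples
    estimates = np.array([rng.normal(loc=beta, scale=2.0, size=n).mean()
                          for _ in range(reps)])

    # Unbiasedness: the mean of the sampling distribution is (close to) beta
    print(estimates.mean())   # approximately 5.0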
1.4.1.2 Efficiency
Sometimes there exists more than one unbiased estimator. Which one should we choose?
Figure 1.3: Efficiency
Among unbiased estimators, the one with the taller, narrower sampling distribution (smaller variance) is the more efficient.
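For instance (a hypothetical simulation, not from the notes): for normally distributed data both the sample mean and the sample median are unbiased estimators of the population mean, but the mean has the smaller sampling variance and is therefore the more efficient of the two:

    import numpy as np

    rng = np.random.default_rng(1)
    mu, n, reps = 5.0, 30, 10_000

    samples = rng.normal(loc=mu, scale=2.0, size=(reps, n))
    means = samples.mean(axis=1)           # estimator 1: sample mean
    medians = np.median(samples, axis=1)   # estimator 2: sample median

    # Both are (approximately) unbiased ...
    print(means.mean(), medians.mean())    # both close to 5.0
    # ... but the sample mean has the smaller variance, so it is more efficient
    print(means.var(), medians.var())      # var(mean) < var(median)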
1.4.1.3 Mean square error
How can we make a trade-off between low bias and low variance? Estimators can be chosen in order to minimise a weighted average of the variance and the square of the bias. When the weights are equal, the criterion is the popular Mean Square Error (MSE).
MSE(β∗) = Var(β∗) + [E(β∗) − β]²
Figure 1.4: Mean square error
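The decomposition above can be checked numerically (an illustrative sketch with made-up numbers): a deliberately shrunk estimator 0.9·Ȳ trades some bias for lower variance, and in both cases the simulated MSE matches Var + Bias²:

    import numpy as np

    rng = np.random.default_rng(2)
    mu, n, reps = 5.0, 30, 100_000

    samples = rng.normal(loc=mu, scale=2.0, size=(reps, n))
    unbiased = samples.mean(axis=1)   # the sample mean
    shrunk = 0.9 * unbiased           # biased, but lower-variance, estimator

    for name, est in [("sample mean", unbiased), ("0.9 * mean", shrunk)]:
        bias = est.mean() - mu
        mse = ((est - mu) ** 2).mean()
        # MSE = Var + Bias^2 (up to simulation noise)
        print(name, est.var() + bias ** 2, mse)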
1.4.2 Asymptotic properties (Large sample properties)
Motivation:
• Unbiasedness does not depend on the size of the data at hand.
• Sometimes it is impossible to find estimators being unbiased in small
samples → econometricians may justify an estimator on the basis of
its asymptotic properties, i.e. the nature of the estimator’s sampling
distribution in extremely large samples.
• The sampling distributions of most estimators change as the sample size changes. In many cases a biased estimator becomes less and less biased as the sample size gets larger and larger.
• Econometricians formalise these phenomena through the concept of an asymptotic distribution and by defining desirable asymptotic or “large-sample” properties, i.e. what happens to a statistic, say β̂, when the sample size N → ∞.
1.4.2.1 Consistency
Consistency: The estimator β∗ is said to be a consistent estimator of β if it converges in probability to β as the sample size n → ∞. That is, for all δ > 0:

P(|β∗ − β| > δ) → 0 as n → ∞,

or equivalently limn→∞ P(|β∗ − β| > δ) = 0, i.e. plim(β∗) = β.
Figure 1.5: Asymptotic Properties
Why do we need “consistency”? If an estimator is not consistent, then it
does not help us to learn about the true parameter, even with unlimited data
→ consistency is a minimal requirement of an estimator.
Note: Unbiased estimators are not necessarily consistent.
A good example of a consistent estimator is the sample average. Let Y1, Y2, ..., Yn be independent, identically distributed (i.i.d.) random variables with mean µ, and let the sample average be Ȳ = (1/n) Σi Yi.
Law of Large Numbers: plim(Ȳ ) = µ, which means if we are interested
in estimating the population average µ, we can get arbitrarily close to µ by
choosing a sufficiently large sample.
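A quick simulation sketch (hypothetical, illustrative only) of the Law of Large Numbers: the sample average gets closer and closer to µ as the sample size grows, which is exactly what consistency requires:

    import numpy as np

    rng = np.random.default_rng(3)
    mu = 2.0   # true population mean

    # Sample averages from increasingly large i.i.d. samples
    for n in (10, 100, 10_000, 1_000_000):
        y = rng.exponential(scale=mu, size=n)  # any distribution with mean mu
        print(n, y.mean())                     # approaches mu = 2.0 as n grows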
1.4.2.2 Asymptotic normality
Why do we need this? Consistency is a good property of point estimators, but it says nothing about the shape of the estimator’s distribution. We need ways to approximate that distribution in order to construct interval estimators and test hypotheses.
Central Limit Theorem (CLT): one of the most powerful results in probability and statistics. It states that the average of a random sample from any population (with finite variance), when standardized, has an asymptotic standard normal distribution.
Let Y1, Y2, ..., Yn be independent, identically distributed (i.i.d.) random variables with mean µ and variance σ². Then

Zn = (Ȳn − µ) / (σ/√n) →d N(0, 1),

i.e. Zn has an asymptotic standard normal distribution.
Note: Most estimators in statistics and econometrics can be written as functions of sample averages → we can apply LLN and CLT.
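A minimal simulation sketch of the CLT (hypothetical, not from the notes): even though the underlying data are highly skewed, the standardized sample mean Zn behaves approximately like a N(0, 1) draw once n is large:

    import numpy as np

    rng = np.random.default_rng(4)
    n, reps = 200, 50_000

    # Exponential(1) data: mean mu = 1, variance sigma^2 = 1 (very skewed)
    y = rng.exponential(scale=1.0, size=(reps, n))
    z = (y.mean(axis=1) - 1.0) / (1.0 / np.sqrt(n))  # Zn = (Ybar - mu)/(sigma/sqrt(n))

    # If the CLT holds, z looks like a standard normal sample
    print(z.mean(), z.std())            # close to 0 and 1
    print(np.mean(np.abs(z) > 1.96))    # close to 0.05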
1.5 Examples of econometric models and their applications
Example 1: Friedman’s Permanent Income Hypothesis
In 1957, Milton Friedman of the University of Chicago objected to the simple Keynesian formulation of the consumption function, in which consumption = β0 + β1 (income). He contended that if income and consumption were properly measured, consumption should be zero when income is zero or, more strongly, consumption should be strictly proportional to income. He named income properly measured “permanent income” and consumption properly measured “permanent consumption.” Thus he specified the consumption function to be:

permanent consumption = β1 (permanent income)
Friedman linked permanent income very closely to a consumer’s wealth. He
argued that consumers pay attention to their expected long-term financial circumstances when choosing how much to consume. For example, he would
argue that a recent college graduate with no plans for graduate school and
earning $35K per year is likely to consume less than a young medical student
since the medical student can expect much higher income in the future with
which to pay for higher spending today.
In formal terms, he said:

permanent consumption = E(consumption | permanent income)
He presumed that households would deviate from their mean consumption
levels:
consumption = β1 (permanent income) + vc        (1)

where vc is a random term with E(vc) = 0
Friedman observed that permanent income and income as usually measured
would differ from one another in any given year. He hypothesised that, on
average, permanent income and income as usually measured would be equal.
Thus, he expressed that relationship as follows:
income = permanent income + vy
where vy is a random term with E(vy) = 0
Since permanent income is unobserved, we must estimate β1 not from equation (1) but instead from:
consumption = β1 (income − vy ) + vc = β1 (income) + u
where u = −β1 vy + vc.

Friedman argued that the best way to estimate β1 is with the ratio estimator:

β̂1 = (Σt consumptiont) / (Σt incomet)
Using data across 31 household budget studies and 14 time series studies,
Friedman’s estimates averaged 0.92, whereas previous estimates averaged
0.75.
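As an illustration only (simulated, hypothetical data, not Friedman’s), the ratio estimator can be computed directly and compared with the OLS slope from a regression of consumption on measured income; because measured income contains the error vy, the OLS slope is attenuated towards zero, while the ratio estimator recovers β1 in this setup:

    import numpy as np

    rng = np.random.default_rng(5)
    n, beta1 = 500, 0.9

    # Simulated version of Friedman's setup (made-up parameter values)
    permanent_income = rng.normal(50.0, 10.0, size=n)
    income = permanent_income + rng.normal(0.0, 5.0, size=n)               # + v_y
    consumption = beta1 * permanent_income + rng.normal(0.0, 2.0, size=n)  # + v_c

    # Friedman's ratio estimator: sum of consumption over sum of income
    beta1_ratio = consumption.sum() / income.sum()

    # OLS slope from regressing consumption on income (with an intercept)
    X = np.column_stack([np.ones(n), income])
    beta1_ols = np.linalg.lstsq(X, consumption, rcond=None)[0][1]

    print(beta1_ratio)  # close to the true beta1 = 0.9
    print(beta1_ols)    # biased towards zero (errors-in-variables attenuation)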
In 1976, Friedman received the Nobel prize in economics “for his achievements
in the fields of consumption analysis, monetary history and theory and for
his demonstration of the complexity of stabilisation policy.”1
Example 2: Demand for Drugs
Medical science tells us that cocaine is addictive, but is there economic evidence? If cocaine is addictive, we would expect current consumption of cocaine to be larger for those whose past consumption is larger. We might also
expect consumption to be quite unresponsive to price. Moreover, economists
often argue that rational individuals who know a drug is addictive reduce use
today if they prefer to consume less in the future (in an effort to reduce their
future dependency). Such rational addiction and addiction avoidance would
be evidenced by today’s consumption being related to tomorrow’s, as well as
yesterday’s.
In 1998, economists Michael Grossman of the City University of New York
and Frank Chaloupka of the University of Illinois at Chicago used data on
young adults to study short-run and long-run elasticities of young adults’
cocaine demand.
Grossman and Chaloupka specified the demand for cocaine as follows:
Ct = f (Ct−1 , Ct+1 , pricet , inc, male, black, marijuana, legaldrinkingage)
1 Friedman, 1957 (http://www.nber.org/chapters/c4405.pdf). Adapted from Murray, 2006.
where Ct , Ct−1 , and Ct+1 are an individual’s cocaine consumption this year,
last year and the next year, respectively; pricet is cocaine’s price; inc is
the individual’s real income; and legaldrinkingage is the product of the
legal drinking age in the individual’s state and an indicator of whether the
individual is 21 years old or younger. male, black, and marijuana are three indicator variables that equal one when the individual is male, is black, or lives in a jurisdiction in which marijuana is decriminalized, and zero otherwise.
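A highly simplified sketch (entirely hypothetical simulated data and coefficients, not the authors’) of what fitting a linearised version of this demand equation by least squares might look like; the actual study must also deal with the endogeneity of Ct−1 and Ct+1, which this sketch ignores:

    import numpy as np

    rng = np.random.default_rng(6)
    n = 1_000

    # Made-up regressors standing in for the variables listed above
    c_lag = rng.normal(size=n)           # C_{t-1}
    c_lead = rng.normal(size=n)          # C_{t+1}
    price = rng.normal(size=n)           # price_t
    inc = rng.normal(size=n)             # income
    male = rng.integers(0, 2, size=n)    # indicator

    # Hypothetical "true" coefficients, used only to generate fake data
    c_t = (0.4 * c_lag + 0.3 * c_lead - 0.5 * price
           + 0.1 * inc + 0.2 * male + rng.normal(size=n))

    # Least-squares fit of the linearised demand equation (illustration only)
    X = np.column_stack([np.ones(n), c_lag, c_lead, price, inc, male])
    coefs = np.linalg.lstsq(X, c_t, rcond=None)[0]
    print(dict(zip(["const", "C_lag", "C_lead", "price", "inc", "male"], coefs)))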
Results: Grossman and Chaloupka found that present consumption does depend on past consumption, supporting the medical evidence that cocaine is addictive. Most importantly, the price of cocaine has a surprisingly strong estimated effect on cocaine consumption. Despite the addictive character of cocaine, consumers adjust their consumption levels when prices change.
The authors estimate that at the median values of their variables,
• the long-run price elasticity of demand for cocaine is -1.35;
• the estimated short-run elasticity (the responsiveness in the first year
following a permanent change in price) is markedly smaller in magnitude, but still not small: -0.96.
• the responsiveness to a temporary (one-year) change in price is -0.5
Policy implications: Grossman and Chaloupka’s results, if correct, have strong
implications for government drug policies.
• The demand elasticity in excess of one suggests that policies that raise
cocaine providers’ costs lower consumers’ long-run consumption of cocaine. That is, higher street cocaine prices bring a more than offsetting
decrease in quantity demanded. On the basis of this evidence, policies
that permanently curb cocaine supply reduce the profits of drug sellers
by simultaneously raising their costs and lowering their revenues.
• The short-run elasticity of -0.96 suggests drug sellers’ revenues would not fall immediately after a permanent price increase
• The estimated elasticity of -0.5 with respect to a temporary price change indicates that transient policies that increase the costs of cocaine suppliers, and hence temporarily raise street prices, increase cocaine suppliers’ revenues, as quantities fall only about half as much as prices rise in such cases (see the back-of-envelope calculation below)
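A back-of-envelope check of the revenue logic above (my own illustration, using only the reported elasticities): with revenue = price × quantity, a price rise of p per cent changes quantity by roughly ε·p per cent, so revenue falls only when |ε| > 1:

    # Approximate revenue effect of a 10% price increase under each elasticity
    price_change = 0.10
    for label, elasticity in [("long-run", -1.35),
                              ("short-run", -0.96),
                              ("temporary", -0.50)]:
        quantity_change = elasticity * price_change
        revenue_change = (1 + elasticity) * price_change  # d ln R = (1 + e) d ln p
        print(f"{label}: quantity {quantity_change:+.1%}, revenue {revenue_change:+.1%}")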
Reducing cocaine use in the United States is a challenging and multifaceted problem that econometric analysis alone cannot solve. However, this empirical evidence can inform policy makers and help them avoid counterproductive measures as they work to shape effective policies.2
2 Grossman and Chaloupka, 1998 (http://www.sciencedirect.com/science/article/pii/S0167629697000465).