Do Analyst Forecasts Vary Too Much?

by Russell Lundholm and Rafael Rogo
First draft: October 16, 2013
Prepared for the 2013 Columbia Business School Burton Workshop
I. Introduction
Different analysts make different forecasts, and individual analysts change
their forecasts, for a variety of reasons. The rational forecasting explanation for this
variability in forecasts is that it is caused by variation in information across analysts
and over time. This explanation is difficult to refute without access to analysts’
underlying information. Alternatively, analysts may make different forecasts, or
change their forecasts, for strategic reasons unrelated to their information, or they
may respond non-optimally to the information they receive. In this paper we
introduce a new test of the optimality of analyst forecasts based on their time series
and cross sectional variability. For a given firm at one point in time, do the forecasts
of different analysts differ from each other too much? And for a given firm and
analyst, do forecasts fluctuate too much over time?
For both questions we offer a novel approach to identifying when forecasts
vary “too much.” We derive a bound for the variance of forecasts that is completely
general – it only assumes that a variance of the underlying company earnings exists
– and then we examine forecast data to see how frequently the bound is violated.
Importantly, our test does not require any knowledge of the underlying information
available to analysts. While we derive our bound mathematically, a loose
translation would be that the variance of rational forecasts about a random variable
must be lower than the actual variance of the random variable. Collections of
forecasts (either in time series or in cross section) that violate this bound are
unquestionably too volatile to be rational; that is, there is no amount of variation in
information that can justify this much volatility. Consequently, they are clear cases
where forces beyond rational forecasting have come into play.
Cases where analyst forecasts are excessively volatile are of interest beyond
providing evidence of non-optimal forecasts. Changes in analyst forecasts of
earnings are often accompanied by changes in the firm’s stock price. If forecasts are
excessively volatile then this could contribute to excessively volatile stock prices.
Whether or not stock prices change too much has been studied extensively in
finance. The short answer to this question is “yes,” although the results are not
without controversy (see Shiller, 1981 for the original study and Gilles and LeRoy
1991 for a comprehensive review of the evidence). In the stock price setting, the
researcher must assume that the observed price fluctuations are driven by
fluctuations in the forecasts about the underlying “true value” of the company. The
researcher observes neither the actual forecast nor the “true value” of the company,
and must make restrictive assumptions and imprecise estimates of each. In our
setting, we observe the forecasts directly because analysts regularly report their
forecasts to IBES. We also observe the underlying series being forecasted –
company earnings. Consequently, we can precisely establish a variance bound on
forecasts of earnings.
There are at least two reasons why an analyst might produce forecasts that
are too volatile over time, or why a group of analysts’ forecasts might be collectively
too dispersed at a point in time. First, regardless of their information, analysts may
have incentives to make non-optimal forecasts. For instance, they may make a
“bold” forecast in order to attract the attention of the investment community (see
Clarke and Subramanian 2006). Second, they may simply respond too aggressively
to new information, as would be the case if they suffered from a saliency bias
(Kahneman and Tversky 1973). Working against these forces are an analyst’s
incentive to herd toward the consensus, either rationally because the consensus is a
good aggregator of private information, or strategically because the analyst doesn’t
want to appear unusual (see Hirshleifer and Teoh 2003 for a review of the herding
literature).
Based on both cross-sectional tests and time-series tests, and using both
annual and quarterly data, we find a non-trivial number of cases where analyst
forecasts are excessively volatile. Individual analysts’ forecasts are too volatile in time series approximately 17 percent of the time, with roughly 20 percent of the analysts producing an excessively volatile forecast series at least one quarter of the time. The cross-section of analyst forecasts is better behaved, although we still find that roughly five to seven percent of the cases are excessively volatile.
We supplement our analysis with an exploration of the analyst
characteristics, firm characteristics, and time period characteristics that are
associated with excessively volatile forecasts. We find that periods of excessive volatility tend to precede large market corrections. The frequency of excessively volatile forecasts peaks and then plummets around Black Monday (1987-1988), the dot-com bubble and bust (2001-2002), and the global financial crisis (2008-2010).
We also find that analysts are very different in terms of their propensity to produce an excessively volatile series of forecasts, with some analysts contributing to this phenomenon regularly and others never producing such a series of
forecasts. Similarly, some firms are considerably more prone to violations of the
variance bound than others.
In the next section we discuss related literature, in section three we derive
our variance bound, and in section four we describe how we estimate the variances that go into the variance bound test. We present our results in section five and conclude in section six.
II. Related Literature
Barron et al. (1998) present a rational model of analyst forecasts where each
analyst forms a posterior belief about the firm’s upcoming earnings based on
common public information and a noisy private signal. Dispersion in forecasts is
caused by dispersion in the private signal errors. The authors then map the
statistics found on the IBES consensus database onto the parameters of their model.
The result is a powerful tool that has been used to estimate the average amount of
private signal precision that analysts have at a point in time. For instance, Barron et
al. (2002a) find that consensus, measured as the cross-sectional correlation in
forecast errors, decreases around earnings announcements, and Barron et al.
(2002b) find that consensus is lower for firms with relatively more intangible
assets. Based on the Barron et al. (1998) model, the interpretation is that analysts
collect more private information after earnings announcements, and for firms with
more intangible assets, and that this is the source of the cross-sectional dispersion
in their forecasts. However, this interpretation places considerable faith in the
structure of the model, including the assumption that all random variables are
normally distributed, that analysts are Bayesian processors of information, and that
their only motivation is to make an accurate forecast. As we show later, in this
setting theoretical forecasts never violate the variance bounds that we propose. Our
test takes a very different tack. We assume very little about the structure of the
information used by analysts, including no restrictions on the realizations or
distributions of the random variables, but can identify only one type of irrational forecast – forecasts that are too variable either in the cross-section across analysts or
in time-series for each analyst.
Although there is a wealth of analyst forecast literature (see Ramnath et al.
2008 for an excellent review), very little has focused on the variance of the forecasts
as a collection. In terms of cross-sectional variation in analyst forecasts, there is
evidence that public information lowers dispersion. Lang and Lundholm (1996)
find that firms that provide better information to analysts, as measured by their
AIMR score, have lower dispersion. And Bowen et al. (2002) find that dispersion
decreases following earnings announcement conference calls.
There is also some indirect evidence that analysts are influenced by the
forecasts of other analysts, which will affect the cross-sectional variance. For
instance, if analysts herd toward the consensus estimate then this will lower the
cross-sectional variance in their forecasts. Graham (1999) finds that analysts with
high reputation, or low ability, herd toward the consensus. And Welch (2000) finds
that analysts herd toward the consensus when there is little information available.
In contrast, Clement and Tse (2005) find that bold forecasts – those that move away
from the consensus – are more accurate. Clarke and Subramanian (2006) find that
both very accurate, and very inaccurate, forecasters produce bold forecasts. And
Bernhardt et al. (2006) report evidence of “anti-herding,” meaning that analyst
forecasts are repelled away from the consensus. Finally, Hong et al. (2000) find that
inexperienced analysts are more likely to be fired for issuing a “bold” forecast,
giving them an incentive to herd toward the consensus. The evidence of herding, or
boldness, is based on comparing analyst forecast revisions to the consensus, with
movements toward the consensus labeled as herding and movements away from the
consensus labeled as boldness. But without access to the information used by the
analysts, these studies cannot rule out that the forecasts were simply the result of
rational use of information. By considering a collection of forecasts together, we can
unambiguously say when the forecasts are collectively too variable to be consistent
with rational forecasting. Note that this will identify “boldness” generally, and any
countervailing forces that create herding will work against our measure.
In terms of the time-series variation in forecasts, Gleason and Lee (2003) find
that bold forecast revisions generate larger stock return responses. In addition,
there is evidence that forecast revisions are positively serially correlated (Zhang 2006a, Chen et al. 2013). Zhang (2006a) also provides some results that link the time-series forecast variance to the cross-sectional variance. In particular, he finds that
the drift in analyst forecast revisions is greater for firms with greater cross-sectional
dispersion, and that the effect is more pronounced following bad news. This type of
incomplete adjustment to new information will lower the estimated time-series
variance, and work against violations of our bound.
There are behavioural reasons that analysts might make forecasts that are
too volatile as well. Kahneman and Lovallo (1993) describe a judgement bias
wherein forecasters take an “inside view” of the problem, causing them to
overweight the specific details of the forecast at hand and underweight baseline
priors derived from previous forecasting exercises. This is a specific version of a
general judgement bias wherein agents overweight salient information (Kahneman
and Tversky 1973). Such a bias will therefore overweight recently-received private
information, causing the forecasts to respond excessively to the private signals and increasing the variance of the collection to a possibly irrational level. DeBondt and
Thaler (1990) provide some related evidence on this judgement bias based on IBES
consensus analyst earnings forecasts from 1976-1984. They find that differences in
forecasts across firms and years appear to be too extreme, insofar as the level of
the forecast is negatively related to the forecast error. In other words, analysts
forecast as if the differences in firms and years are greater than they actually are,
and they would be more accurate if they tempered their extreme forecasts. In
contrast, we examine the excess volatility in forecasts within firm-years. For a given
firm-year, are the time-series of forecasts too volatile, or is the cross-section of forecasts too volatile?
III. A Variance Bound for Earnings Forecasts
Let X be the underlying random variable being forecasted, which has density
g(x). Let Y be a summary statistic for all public and private information used by a
rational forecaster in constructing a posterior distribution of X, denoted as g(x|y). If
Y is a set of information, rather than a single summary statistic, the derivation of the
variance bound is almost the same, but uses conditioning sets of random variables.
The variance bound is based on the following condition (see DeGroot 1975, p. 183):
V(X) = V[E(X|Y)] + E[V(X|Y)].¹   (1)
Rearranging (1) gives
V[E(X|Y)] = V(X) - E[V(X|Y)].   (2)
Since E[V(X|Y)] is strictly positive for all non-degenerate posteriors g(x|y),
V[E(X|Y)] < V(X).   (3)
Equation (3) is the basis for our tests. It says that the variance of expectations of X,
seen as a random variable in Y, must be less than the ex ante variance of X.
To illustrate the bound in a specific context, consider the Barron et al. (1998) model. Let X denote unknown future earnings and Y = X + ε denote the analyst’s private signal, where X and ε are independently normally distributed, ε has mean zero, X has mean μ, V(X) = 1/r, and V(ε) = 1/s. In this case

E(X|Y) = (rμ + sY)/(r + s), and

V[E(X|Y)] = [s/(r + s)]² V(Y) = [s/(r + s)]² (1/r + 1/s) = [s/(r + s)](1/r),

which is strictly less than V(X) = 1/r.

¹ A brief proof of (1) goes as follows. Note that V(X) = E(X²) - E(X)². Next, V[E(X|Y)] = E[E(X|Y)²] - E[E(X|Y)]², which equals E[E(X|Y)²] - E(X)² by iterated expectations on the second term. Similarly, E[V(X|Y)] = E[E(X²|Y) - E(X|Y)²] = E(X²) - E[E(X|Y)²]. Adding V[E(X|Y)] and E[V(X|Y)] gives E(X²) - E(X)² = V(X).
In the context of the Barron et al. (1998) model, a rough intuition for why
V[E(X|Y)] is less than V(X) goes as follows. There are two reasons why the signal Y
might be highly variable: either X is highly variable or ε is highly variable. If X is highly variable then the RHS of the bound, V(X), will be large as well. Alternatively, if Y is highly variable because ε is highly variable, then the weight on Y in the
expectation will be low and it won’t flow through to variation in the posterior
expectation.
The Barron et al. (1998) model illustrates the bound, but we emphasize the
extremely general nature of the bound as given in (3) – as long as the densities g(x) and
g(x|y) have variances, equation (3) holds. The distributions do not need to be
normal and the signal does not need to have an additive error. In fact, the signal can
be a multidimensional set of signals.
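To make the bound concrete, the following short simulation is a minimal sketch of the normal-normal illustration above; it is our own illustration rather than anything from the paper, and the specific parameter values (r, s, μ, and the sample size) are arbitrary assumptions. It confirms that the sample variance of the posterior means E(X|Y) falls below the sample variance of X.

```python
import numpy as np

# Illustrative parameters for the Barron et al. (1998) setting (arbitrary choices).
r, s, mu, n = 4.0, 2.0, 0.10, 1_000_000
rng = np.random.default_rng(0)

x = rng.normal(mu, np.sqrt(1 / r), n)        # earnings X ~ N(mu, 1/r)
eps = rng.normal(0.0, np.sqrt(1 / s), n)     # signal noise eps ~ N(0, 1/s)
y = x + eps                                  # private signal Y = X + eps

post_mean = (r * mu + s * y) / (r + s)       # E(X|Y) in the normal-normal model

print(x.var())          # approximately V(X) = 1/r = 0.25
print(post_mean.var())  # approximately [s/(r+s)](1/r) = 1/12, well below V(X)
```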
IV. Estimation
The next challenge is to estimate V[E(X|Y)] and V(X) using analyst forecasts
and earnings realizations.
A. Estimating the Variance of Earnings Changes
A crucial estimate in our analysis is V(X). This establishes the benchmark
variance that bounds the forecast variance. Because earnings for a given firm evolve in a time series, we need to consider their time-series properties. We consider two
units of observation: forecasts and outcomes for firm-years, and forecasts and
outcomes for firm-quarters. Early literature has found that quarterly earnings
evolve approximately as a seasonal random walk and annual earnings evolve
approximately as a simple random walk (Brown et al. 1987). More recently, Gerakos
and Gramacy (2012) and Li and Mohanram (2013) find that at a one-year
forecasting horizon, a random walk performs about as well as many other more
complicated models. Therefore, for firm-quarters, we define the object of the
analyst forecast as the seasonal change in the firm’s earnings. That is, we specify the
unknown object of interest to be
Xjt = Ejt – Ejt-4,
where Ejt is the realized quarterly earnings from IBES for firm j announced at time t,
and Ejt-4 is the realized earnings from the same quarter a year earlier. When the unit
of observation is the firm-year, we define the object of the analyst forecast as the
annual change in earnings. That is, we specify the unknown object of interest to be
Xjt = Ejt – Ejt-1,
where Ejt in this case is the realized annual earnings from IBES for firm j
announced at time t, and Ejt-1 is the realized annual earnings from a year earlier.
By focusing on forecasts and realizations of changes in earnings we increase
the likelihood that the time-series variances are computed from a stationary series.
In particular, for each realized earnings change (annual or seasonal quarter) and for
each firm in the sample, we estimate the variance based on previous realizations of
Xjt. Denote this estimate as 𝑉̂ (𝑋𝑗𝑡 ).
Because the estimate of 𝑉̂ (𝑋𝑗𝑡 ) is such an important value for our tests, we
consider three different estimation periods: the previous eight realizations, all
previous realizations, and the previous seven realizations plus the current period’s
realization of the Xjt that is being forecast. Which estimation period is best depends on the horizon over which a firm’s Xjt series is stationary, and on what an
analyst could possibly know about the distribution of Xjt at the time she makes her
forecast. If a firm’s change-in-earnings process is stationary over its entire history,
then all prior realizations would be the best choice as it maximizes the number of
observations in the estimate, and all this information would be available to analysts.
Denote this as the [-∞, -1] window, where time zero is when the outcome Xjt is
announced. However, if the nature of a firm’s earnings process changes over time,
then a shorter window might yield a more accurate estimate of the variance that an
analyst could reasonably expect, and so we also consider an estimation period based
on periods [-8, -1]. Finally, we consider the window [-7, 0] to rule out the possibility
that the earnings process has fundamentally changed in period zero, and the analyst
knows this, but our estimates based on periods before time zero do not take this
into account. That is, suppose that in period zero there is extreme news, which the
analyst discovered and accurately forecast near the end of her time-series of
forecasts. In addition, this extreme news is indicative of a new earnings regime with
significantly higher variance. The time-series variance of analyst forecasts would be
increased by this late extreme news, but a 𝑉̂ (𝑋𝑗𝑡 ) estimate based on prior
information would not, potentially leading to false violations of our variance bound.
However, by including the realized Xjt in the estimate, we will wrongly inflate the
𝑉̂ (𝑋𝑗𝑡 ) estimate whenever there is an extreme realization of Xjt, even when the
underlying process variance has not changed. This will cause us to wrongly
eliminate true violations of the variance bound. Finally, the estimates based on the [-7, 0] window give the analyst clairvoyance for one of the variance estimate inputs, so it is not surprising that this window results in fewer variance bound violations. We
consider this window as a specification check, but not as a legitimate estimate of
𝑉̂ (𝑋𝑗𝑡 ).
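As a minimal sketch of how these three benchmark estimates could be computed for a single firm’s history of earnings changes, consider the code below. It is our own illustration, not the authors’ code: the pandas implementation, the function name, and the example EPS series are assumptions, while the minimum of four observations mirrors the requirement described in the sample-construction section.

```python
import pandas as pd

def variance_benchmarks(x: pd.Series, min_obs: int = 4) -> pd.DataFrame:
    """Estimate V(X_jt) for one firm's chronologically ordered series of earnings
    changes X_jt under the three estimation windows discussed in the text."""
    prior = x.shift(1)  # exclude the current realization for the first two windows
    out = pd.DataFrame(index=x.index)
    out["v_8_1"] = prior.rolling(8, min_periods=min_obs).var()    # [-8, -1] window
    out["v_inf_1"] = prior.expanding(min_periods=min_obs).var()   # [-inf, -1] window
    out["v_7_0"] = x.rolling(8, min_periods=min_obs).var()        # [-7, 0] window
    return out

# Hypothetical quarterly EPS series; X_jt is the seasonal change E_jt - E_jt-4.
eps = pd.Series([0.50, 0.42, 0.61, 0.55, 0.58, 0.40, 0.70, 0.52,
                 0.63, 0.47, 0.66, 0.60, 0.71, 0.49, 0.80, 0.58])
x_jt = eps.diff(4).dropna()
print(variance_benchmarks(x_jt))
```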
We consider both annual changes in earnings and seasonal quarterly changes
in earnings. Each has its strengths and weaknesses. Using quarterly data we get
287,428 earnings realizations to be forecast, as compared to only 72,804 annual
realizations. Further, quarterly data puts the time period when the variance of Xjt is
estimated relatively close to the period when the variance of forecasts is estimated.
However, annual changes have the advantage that they are roughly three times
more variable than seasonal quarterly changes, potentially making the forecasting
problem significantly harder.
B. Estimating the Variance of Forecasts
The individual analyst forecast data from IBES provides a rich set of forecasts
to use in estimating V[E(X|Y)]. A key assumption is that the analyst forecast is a
reasonable proxy for E(X|Y). While this assumption is common (as in Barron et al.
1998 and subsequent literature), if the analyst has a loss function other than
minimizing the mean squared error, then this will inflate our variance estimate.
To estimate V[E(X|Y)] we take two approaches, one based on the cross-section of forecasts at a point in time and the other based on the forecasts for an
individual analyst over time. For a given firm-period’s realized earnings change Xjt,
denote the sth forecast of Xjt by analyst i as 𝑓𝑗𝑡𝑖𝑠 . The key thing to note is that, for each
realized Xjt, there are multiple forecasts made by different analysts, indexed by i, and
each analyst i makes multiple forecasts over time, indexed by s. We can therefore
construct two types of estimates of the variation in forecasts about Xjt: 1) we can
estimate the cross-sectional variation over the i analyst forecasts at a point in time,
denoted 𝑉̂𝑐𝑠 (𝑓𝑗𝑡𝑠 ) and 2) we can estimate the time-series variation over the s different
forecasts made by analyst i, denoted 𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 ). Our notation is slightly strained for the
cross-sectional estimate because analysts do not all issue their forecasts at the same point in time. Empirically, we take all the forecasts in a
specified window of time (to be discussed). Note that for the quarterly forecasts,
prior to the realization of Ejt-4, computing 𝑓𝑗𝑡𝑖𝑠 requires two inputs from the analyst –
a forecast of Ejt and a forecast of Ejt-4; after Ejt-4 is realized, it only requires a forecast
of Ejt. Similarly, for the annual forecasts, computing 𝑓𝑗𝑡𝑖𝑠 requires two inputs from the
analyst before the prior-year’s annual earnings are reported and one input after.
For the quarterly data, 79.7 percent of the forecasts occur after the public
realization of Ejt-4 (untabulated). For the remaining 20.3 percent, the forecast of Xjt
requires a forecast of both Ejt and Ejt-4. Of these, 76.5 percent of the time both
forecasts are provided on the same day. If an Ejt-4 forecast is not provided on that day,
we take the closest available forecast, with the constraint that it occurs no more
than 60 days prior to the forecast of Ejt that it is being differenced with. Similarly, for
the annual data 50.5 percent of the forecasts occur after the public realization of Ejt-4
(which is now the prior year’s earnings). For the remaining 49.5 percent that
require a forecast of both Ejt and Ejt-4, 78.0 percent of the time both forecasts are
provided on the same day. If an Ejt-4 forecast is not provided on that day, we take the
closest available forecast, with the constraint that it occurs no more than 60 days
prior to the forecast of Ejt.
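A simplified sketch of this pairing step for one analyst and one firm-period is given below. It is our own illustration, not the authors’ code: the column names, function name, and the “most recent forecast within the prior 60 days” rule (standing in for the paper’s closest-available-forecast rule) are assumptions.

```python
import pandas as pd

def change_forecasts(fc_t, fc_lag, e_lag=None, lag_announce=None):
    """Convert one analyst's forecasts of E_jt into forecasts of the change
    X_jt = E_jt - E_jt-4.  fc_t and fc_lag are DataFrames with columns
    ['date', 'value']; e_lag is the realized E_jt-4 once it is announced."""
    rows = []
    for _, f in fc_t.sort_values("date").iterrows():
        if e_lag is not None and lag_announce is not None and f["date"] >= lag_announce:
            base = e_lag                       # prior comparable period already announced
        else:                                  # pair with the analyst's own E_jt-4 forecast
            nearby = fc_lag[(fc_lag["date"] <= f["date"]) &
                            (fc_lag["date"] >= f["date"] - pd.Timedelta(days=60))]
            if nearby.empty:
                continue                       # no usable E_jt-4 forecast within 60 days
            base = nearby.sort_values("date")["value"].iloc[-1]
        rows.append({"date": f["date"], "x_forecast": f["value"] - base})
    return pd.DataFrame(rows)
```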
Figure 1 illustrates the collections of forecasts available for estimating the
variance in forecasts based on seasonal quarterly changes in earnings. For a given
firm-quarter realization, there are multiple forecasts spread over time and across
analysts. For the time-series estimates, we collect all the forecasts from analyst i,
requiring a minimum of four forecasts. These are illustrated inside the long
rectangular boxes in the figure. For each forecast series, we can then compute
𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 ) and compare each of these time-series estimates with 𝑉̂ (𝑋𝑗𝑡 ) to see if they
satisfy the variance bound. For the cross-sectional estimates, we collect the first
forecast from different analysts in a given time period, again requiring a minimum
of four forecasts. These are illustrated inside the square box in the
figure. To exhaust the data, we could compute cross-sectional variance estimates for
all possible time periods. However, we get the largest number of observations if we
consider the cross-section of forecasts in between the announcements of Ejt-4 and Ejt-3;
in this case the analysts know the prior year’s seasonal quarter results when
forecasting Ejt. The results for windows one quarter before or after this are virtually
the same, so we only report results for one cross-section. For each firm-quarter we
then compare the cross-sectional estimates of 𝑉̂𝑐𝑠 (𝑓𝑗𝑡𝑠 ) with 𝑉̂ (𝑋𝑗𝑡 ) to see if they
satisfy the variance bound.
The method for annual changes is exactly the same, except that the change is
in annual earnings between two adjacent periods. The estimate of 𝑉̂ (𝑋𝑗𝑡 ) is based
on the previous eight years of realized annual earnings changes [-8, -1], all prior
history of earnings changes [-∞, -1], or the eight years of changes including the
current realization [-7, 0]. As before, we require a minimum of four forecasts to
estimate the variance in time-series and cross-sectional forecasts.
To summarize so far, for each firm-quarter or firm-year we estimate 𝑉̂ (𝑋𝑗𝑡 )
using three different estimation windows. We then estimate 𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 ) in time series
for each analyst, and we estimate 𝑉̂𝑐𝑠 (𝑓𝑗𝑡𝑠 ) in cross-section for all analysts with a
forecast of Xjt in between Ejt-4 and Ejt-3. We then count the number of times that
𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 ) or 𝑉̂𝑐𝑠 (𝑓𝑗𝑡𝑠 ) exceed 𝑉̂ (𝑋𝑗𝑡 ) in all these combinations, and label these as
occurrences of ‘excess volatility.’ Finally, we report the percent of excessively
volatile occurrences for both types of forecast variance estimates, labeled as K, for
the three different 𝑉̂ (𝑋𝑗𝑡 ) estimation windows, for quarterly and annual data, and
for different subsets of the data.
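The counting exercise can be sketched as follows. This is our own illustration of the procedure, not the authors’ code: the column names are hypothetical, the input vx corresponds to one choice of estimation window at a time, and the cross-sectional branch assumes the forecasts have already been restricted to the window between the Ejt-4 and Ejt-3 announcements.

```python
import pandas as pd

MIN_FORECASTS = 4  # the paper requires at least four forecasts per variance estimate

def violation_rates(forecasts: pd.DataFrame, vx: pd.DataFrame) -> dict:
    """forecasts: one row per forecast of X_jt with columns
    ['firm', 'period', 'analyst', 'date', 'x_forecast'];
    vx: one row per firm-period with columns ['firm', 'period', 'v_x']."""

    # Time-series estimate: variance of all forecasts by one analyst for one firm-period.
    ts = forecasts.groupby(["firm", "period", "analyst"])["x_forecast"].agg(["var", "count"])
    v_ts = ts[ts["count"] >= MIN_FORECASTS].rename(columns={"var": "v_f"}).reset_index()

    # Cross-sectional estimate: variance of the first forecast from each analyst
    # (the restriction to the chosen window is assumed to be done upstream).
    first = forecasts.sort_values("date").drop_duplicates(["firm", "period", "analyst"])
    cs = first.groupby(["firm", "period"])["x_forecast"].agg(["var", "count"])
    v_cs = cs[cs["count"] >= MIN_FORECASTS].rename(columns={"var": "v_f"}).reset_index()

    def k(df):  # share of variance estimates that exceed the bound V(X_jt)
        merged = df.merge(vx, on=["firm", "period"])
        return (merged["v_f"] > merged["v_x"]).mean()

    return {"K_time_series": k(v_ts), "K_cross_section": k(v_cs)}
```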
C. Assessing the Sampling Error
The final step is to assign statistical significance to the percent of excessively
volatile occurrences. Because 𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 ), 𝑉̂𝑐𝑠 (𝑓𝑗𝑡𝑠 ) and 𝑉̂ (𝑋𝑗𝑡 ) are estimates, they have
sampling error and so, by random chance we could record a violation of the
variance bound. To assess how likely this is, we need to know the sampling
distribution of the statistic K, the percent of observations that violate the variance
bound. Because our estimates are based on small samples, we do not rely on
parametric statistics. Rather, we construct a bootstrap estimate of the confidence
interval around our violation percentage. Specifically, we construct a pseudo-sample from our data by drawing, with replacement, from the original dataset a
sample that is the same size as the original sample. From this pseudo-sample we
then compute the percentage of violations of the variance bound. We then repeat
this procedure 1000 times to construct a sampling distribution around our actual
sample estimate. As will be seen in table 2, these confidence intervals are very tight;
so much so that we can generally ignore issues of statistical significance.
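A minimal sketch of this bootstrap is shown below. It is our own illustration, not the authors’ code: it assumes a single array of 0/1 violation indicators (one per variance-estimate observation) for one combination of data and estimation window, and the function and argument names are assumptions.

```python
import numpy as np

def bootstrap_ci(violations: np.ndarray, n_boot: int = 1000, level: float = 0.99,
                 seed: int = 0) -> tuple[float, float]:
    """violations: array of 0/1 indicators marking whether each forecast-variance
    estimate exceeded the corresponding V(X_jt) estimate."""
    rng = np.random.default_rng(seed)
    n = violations.size
    # Resample with replacement, same size as the original sample, and recompute
    # the violation percentage K in each pseudo-sample.
    k_boot = np.array([rng.choice(violations, size=n, replace=True).mean()
                       for _ in range(n_boot)])
    lo, hi = np.percentile(k_boot, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi
```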
V. Results
A. The Sample and Descriptive Statistics
We begin with the universe of observations on IBES between 1983 and 2013
for quarterly data, and between 1976 and 2013 for annual data. All that we require
are a perm number, a fiscal period ending date, a realized earnings-per-share, and at
least four observations on the change in earnings to compute a variance, resulting in a sample of 287,428 firm-quarters and 72,804 firm-years of earnings change
realizations. Both the forecasted and realized earnings amounts are for changes in
earnings-per-share, either quarterly or annual, taken from the IBES database. By
using IBES, or “pro forma,” earnings rather than GAAP earnings, we ensure
consistency between the object being forecast and the realization. In addition,
because IBES earnings eliminate most special items (Bradshaw and Sloan 2002),
and analysts generally forecast “pro forma” earnings before special items, our
estimate of 𝑉̂ (𝑋𝑗𝑡 ) and the variance of forecasts are not unduly influenced by these
transitory events.
Table 1 describes the sample, where panel A describes the variance of
earnings changes 𝑉̂ (𝑋𝑗𝑡 ), panel B describes the time series variance of forecasted
earnings changes 𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 ), and panel C describes the cross-sectional variance of
forecasted earnings changes 𝑉̂𝑐𝑠 (𝑓𝑗𝑡𝑠 ). In each panel we describe the quarterly data in
the left-hand columns and the annual data in the right-hand columns.
As seen in panel A of table 1, most firm-periods use nearly eight observations
in the estimate of 𝑉̂ (𝑋𝑗𝑡 ), with an average of 7.51 observations per quarterly
estimate and 6.63 observations per annual estimate for the [-8, -1] window. Using
all previous changes (the [-∞, -1] window) results in an average of 31.20
observations per quarterly estimate and 11.62 observations per annual estimate.
The 𝑉̂ (𝑋𝑗𝑡 ) estimates are reasonably stable across the three different estimation
windows. For the quarterly data, the median standard deviation is 0.098 for the [-8, -1] window, 0.118 for the [-∞, -1] window, and 0.100 for the [-7, 0] window. For the annual data, the median standard deviations are 0.355, 0.346, and 0.383 across the three
windows, respectively. The stability of the 𝑉̂ (𝑋𝑗𝑡 ) estimates over time suggests that
the Xjt series is reasonably stationary, and that our 𝑉̂ (𝑋𝑗𝑡 ) estimates present
reasonable bounds for the forecast variances. Unless otherwise stated, in the results
to follow the 𝑉̂ (𝑋𝑗𝑡 ) estimate is based on the [-8, -1] window.
Panel B of table 1 describes the sample of time-series variance estimates.
Recall from figure 1 that for each firm j and period t, there are multiple analysts per
firm, and multiple forecasts per analyst, and so we can compute a 𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 ) time series
variance for each firm-period-analyst combination. The result is 856,781 quarterly
forecast series and 626,551 annual forecast series. On average, this estimate is
based on 6.29 forecasts for quarterly changes and 8.09 forecasts for annual changes.
The median time between the first forecast and the last forecast in the time-series is
333 days for the quarterly data and 492 days for the annual data (untabulated). The
distribution of time-series variance estimates shown in panel B is clearly shifted to
the left of the 𝑉̂ (𝑋𝑗𝑡 ) distribution described in panel A. The median 𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 ) is 0.049
for the quarterly data and 0.129 for the annual data. However, the 𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 )
distribution clearly overlaps the 𝑉̂ (𝑋𝑗𝑡 ) distribution. The 75th percentile of 𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 ) is
0.107 for the quarterly data and 0.311 for the annual data, which are approximately
the same as the corresponding median values of 𝑉̂ (𝑋𝑗𝑡 ).
Panel C of table 1 describes the sample of cross-sectional forecast variance
estimates. Recall that for each firm j and quarter t, we take the first forecast from
each analyst after the realization of Ejt-4 and before the realization of Ejt-3. The
median window of time that surrounds the cross-sectional estimates is 53 days for
the quarterly data and 97 days for the annual data (untabulated). The net result is
75,506 firm-quarter cross-sectional forecast collections and 56,001 annual cross-sectional forecast collections, with an average of 7.71 analysts contributing to the
quarterly estimate and an average of 10.34 analysts contributing to the annual
estimate. The median 𝑉̂𝑐𝑠 (𝑓𝑗𝑡𝑠 ) is 0.031 for the quarterly data and 0.077 for the
annual data. As with the time series estimates, the distribution of 𝑉̂𝑐𝑠 (𝑓𝑗𝑡𝑠 ) is shifted
to the left of 𝑉̂ (𝑋𝑗𝑡 ), although there is certainly overlap; the 90th percentile of 𝑉̂𝑐𝑠 (𝑓𝑗𝑡𝑠 )
exceeds the median of 𝑉̂ (𝑋𝑗𝑡 ) for both the quarterly and annual data.
B. Do analyst forecasts vary too much?
To begin, figure 2 panel A plots 𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 ) - 𝑉̂ (𝑋𝑗𝑡 ) in the case where 𝑉̂ (𝑋𝑗𝑡 ) is
based on the [-8, -1] window of prior quarterly realizations and 𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 ) is estimated
in time series for each analyst-firm-quarter. Not surprisingly, most of the forecasts
fall inside the variance bound, as seen on the left side of the graph. However, in the
figure, 16.91 percent of the observations do indeed have positive values and thus violate the variance bound. These series of forecasts are therefore excessively volatile. The analyst’s sequence of forecasts over the year changes more than can be
justified by rational forecasting, regardless of the information they may have
received over the year. Note that, for a given firm and fiscal period end, this is a
statement about the series of forecasts an individual analyst made over time – no
one forecast in the series can be identified as being irrational in this test.
Table 2 gives the frequency of violations of the variance bound for the
different combinations of time-series or cross-sectional estimates of forecast
variances, 𝑉̂ (𝑋𝑗𝑡 ) estimation windows, and quarterly or annual data. For the time-series estimates shown at the top of the table, based on quarterly data, the frequency of violations is 16.91 percent for the [-8, -1] window and 15.53 percent for the [-∞, -1] window. For the estimates based on annual data, the results are slightly larger, with 17.57 percent violations for the [-8, -1] window and 19.00 percent violations for the [-∞, -1] window. The relative stability of the estimated frequency of violations across the two windows suggests that the results are unlikely to be due to a failure of the stationarity assumption on 𝑉̂ (𝑋𝑗𝑡 ). For the [-7, 0] estimation
window the estimates are lower, as expected, but at 13.25 percent for the quarterly
data and 11.15 percent for the annual data, the results are still well above zero.
Even after giving analysts clairvoyance of the realized future earnings change in the
𝑉̂ (𝑋𝑗𝑡 ) estimate, there are still a significant number of variance bound violations.
Figure 2 panel B plots 𝑉̂𝑐𝑠 (𝑓𝑗𝑡𝑠 ) - 𝑉̂ (𝑋𝑗𝑡 ) in the case where 𝑉̂ (𝑋𝑗𝑡 ) is based on the [-8, -1] window of prior quarterly realizations and 𝑉̂𝑐𝑠 (𝑓𝑗𝑡𝑠 ) is estimated in the
cross-section for each firm-quarter. Again, not surprisingly, most of the forecasts fall
inside the variance bound, seen as the negative values on the left side of the graph.
However, in this figure, 7.73 percent of the observations do indeed have positive
values and thus violate the cross-sectional variance bound. These collections of
forecasts are therefore excessively volatile. In a window of time near the beginning
of the fiscal year, the forecasts of different analysts differ from each other too much
to be justified by rational forecasting, regardless of the differences in the
information the analysts might have received. Note that this is a statement about the
collection of forecasts at a point in time – no one analyst can be identified as being
irrational in this test.
Table 2 panel B gives the frequency of violations of the variance bound based
on the cross-sectional variation in forecasts for different estimation windows. In
general there are fewer violations for these estimates than for the time-series
forecast series, although the results in all cases are still well above zero. Based on
the [-8, -1] window, 7.73 percent of the firm-quarters and 6.94 percent of the firm-years are excessively volatile. That is, the differences in analyst forecasts at a point in time are too extreme to be rational. The results are lower for quarterly data and higher for annual data in the [-∞, -1] window, with 4.95 percent of the firm-quarters
and 7.31 percent of the firm-years violating the bound. Finally, the results for both
quarterly and annual data are lower, but still well above zero, when the estimation
window for 𝑉̂ (𝑋𝑗𝑡 ) is [-7, 0].
Finally, the bootstrapped 99 percent confidence intervals shown in table 2
are all very tight around their estimates, with the typical confidence interval adding
or subtracting less than 0.25 percent for the time series estimates and 0.50 percent
for the cross-sectional estimates. In sum, we find an impressive number of cases
where the time-series of individual forecasts are excessively volatile, meaning that
individual analysts often change their forecasts too much over the course of the
year. We also find a smaller, but still significant, number of cases where the cross-section of analyst forecasts at a point in time differs too much across analysts to be rational.
In the next section we investigate the pervasiveness of excess time-series
volatility across a number of dimensions, and the circumstances that lead to excess
volatility. We follow this with a similar investigation of the cross-sectional
estimates, and discuss what the two different types of variance bound violations can
teach us about the analyst forecast process.
C. When and where does excess time-series volatility occur?
Figure 3 plots the frequency of time-series variance bound violations by
calendar year with the quarterly data in panel A and annual data in panel B. The
first observation from the two plots is that a large number of violations have
occurred every year in the sample, with the most occurring in the quarterly data in
2001 (30 percent) and in 1983 in the annual data (34 percent).
The second observation from figure 3 is that there appear to be a few
discrete changes in the frequency of violations. The frequency is relatively high and
then drops significantly around 1983-1984 (annual data only), 1987-1988, 2001-2002, and 2009-2010. Interestingly, there were significant stock market corrections in each of these periods. The Dow dropped 15 percent between October 1983 and June 1984 at a time when treasury yields reached 14 percent. “Black Monday” occurred on October 19, 1987, when the Dow Jones Industrial Average dropped 23 percent in one day; the index did not regain its pre-crash level for almost two years. The 2001-2002 period encompasses the deflating of the “dot-com” bubble; the Nasdaq dropped 32 percent and the Dow dropped 17 percent in 2002. Finally, while the
timing is harder to pin down, the 2009-2010 period lies at the tail end of the global
financial crisis; between October 2007 and March 2009 the Dow lost more than 50
percent of its value. In sum, periods of excessive volatility in analyst forecasts seem
to correspond to periods of “irrational exuberance,” to quote Alan Greenspan, and
the immediately subsequent periods of far fewer violations correspond to periods
with more sober market values. Jiang et al. (2005) and Zhang (2006b) posit a
connection between the volatility of analyst forecasts and market over-valuations.
They argue that short-selling constraints cause the market to price good news
forecasts more fully than bad news forecasts, and so extreme values of good news
get priced while extreme values of bad news do not. Whether the excessively
volatile analyst forecasts contributed to the market-wide irrational exuberance in
the periods listed above, or were victims of it, is impossible to know.
Figure 4 presents a market-value-weighted version of figure 3. For any firm-quarter (panel A) or firm-year (panel B) with an analyst who produced an excessively volatile series of forecasts, we weight the observation by the fraction of total market value the firm represents. The resulting weighted percentage of violations is shown each year as the left-hand blue bars in the chart. As a point of reference
we also plot the equal-weighted percentages of violations as the right-hand red bars.
For both sets of data the market-weighted value of excessively volatile analyst
forecasts is generally larger than the equal-weighted value, impacting up to 51
percent of the market value in the quarterly data (2009) and 79 percent of the
market value in the annual data (1983). By comparing the heights of the two bars,
figure 4 also shows that occurrences of excessive volatility are relatively more
common for larger firms.
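The value weighting can be sketched as follows. This is our own illustration with hypothetical column names, not the authors’ code: each firm-period flagged as having at least one excessively volatile forecast series is weighted by its share of total market value in that calendar year.

```python
import pandas as pd

def weighted_violation_share(firm_periods: pd.DataFrame) -> pd.Series:
    """firm_periods: one row per firm-period with columns
    ['year', 'mktcap', 'has_violation'], where has_violation is True if any
    analyst's forecast series for that firm-period violated the variance bound."""
    df = firm_periods.copy()
    # Each firm-period's weight is its share of total market value in that year.
    df["w"] = df["mktcap"] / df.groupby("year")["mktcap"].transform("sum")
    # Market-value-weighted share of violations per calendar year.
    return (df["w"] * df["has_violation"]).groupby(df["year"]).sum()
```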
In figure 3 we aggregated the variance bound violations by calendar year.
Next we aggregate the violations by analyst and by firm to better understand the
distribution of excessive volatility. As a benchmark, recall that approximately 17
percent of the time series variances violate the bound, based on either quarterly or
annual data, as reported in table 2. If every analyst and firm contributes equally to
violations of the variance bound, then we would expect to see the distribution of
violations concentrated around 17 percent as we form the data into groups. In
figure 5 we compute the frequency that each of the 13,519 different analysts in the
IBES database violated the variance bound and plot histograms for quarterly and
annual forecasts. Because different analysts follow different numbers of firms and
forecast more or less frequently, we also show as a red line the percent of all
forecasts in each bin. In both histograms there is clearly a mass around 17 percent.
For the quarterly data, the three bins containing 10-25 percent together have 43
percent of the analysts and 74 percent of the data; for the annual data, the same
three bins have 37 percent of the analysts and 67 percent of the data. However, the
histograms also reveal considerable variation around this mass. For quarterly data
22 percent of the analysts never violate the variance bound, although they only
represent 1 percent of the data; for the annual data 31 percent of the analysts never
violate the bound, representing 3 percent of the data. The mass of analysts who
never violate the variance bound are those who produce far fewer forecasts,
averaging only 6 in their whole history with IBES, as compared to the average of 165
forecasts for analysts in the 15-20 percent bin; for the annual data in panel B the
zero bin averages 5 forecasts while the 15-20 percent bin averages 71 forecasts
(untabulated). The histograms also reveal rather long tails. For both the quarterly
data and annual data, aggregating all the bins greater than 25 percent shows that approximately 21 percent of the analysts violate the bound at least one time in four, on average, across the firms and periods they forecast.
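The per-analyst aggregation behind figure 5 can be sketched as below. It is our own illustration with hypothetical column names, not the authors’ code, and as a simplification it tracks the number of forecast series per analyst rather than the number of individual forecasts.

```python
import pandas as pd

def analyst_violation_profile(series_level: pd.DataFrame) -> pd.DataFrame:
    """series_level: one row per analyst-firm-period forecast series with columns
    ['analyst', 'violated'], where violated is True if that series broke the bound.
    Returns each analyst's violation rate and the number of series behind it."""
    return series_level.groupby("analyst").agg(
        viol_rate=("violated", "mean"),   # how often this analyst violates the bound
        n_series=("violated", "size"))    # how much data this analyst contributes

# Figure 5 then amounts to a histogram of viol_rate across analysts (blue bars),
# with each bin's share of total n_series tracing the red line.
```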
Our next aggregation of the data is based on the firm – are some firms more
likely to be the subject of excessively volatile time series forecasts than others?
Figure 6 gives the histograms of data aggregated by the frequency of violations for
each firm. These histograms closely resemble the analyst aggregations in figure 5.
In both figures there is a large mass of firms that never have a violation, although these firms represent a proportionately small amount of the data; there is a large mass around the grand estimate of 17 percent, representing a proportionately large amount of the data; and then there is a rather long tail.
To summarize the results for the time-series violations of the variance
bound, we find that roughly 17 percent of the analyst-firm-periods violate the
bound, exposing up to 79 percent of the market value in some years to excessively
volatile forecasts. The frequency of violations also tends to peak and then plummet
around major stock market crashes. The violations tend to come from a subset of
analysts and firms; some are never associated with a case of excessively volatile
forecasts and others are repeat offenders.
D. When and where do cross-sectional forecasts vary too much?
In this section we repeat the analysis of the previous section but apply it to
the cross-sectional variance bound violations. Recall from the bottom of table 2 that
the frequency of violations ranges from roughly five percent to seven percent,
depending on the data used and estimation window.
Figure 7 shows that the frequency of cross-sectional violations of the
variance bound varies over calendar years, and shows the same peaks and valleys
over time as seen in the time-series violation plots in figure 3. The highest
frequency occurs in 1987 for both the quarterly data, at 20 percent, and for annual
data, at 18 percent. Figure 8 shows the market-value-weighted frequency of
violations (shown as the LHS blue bar) and the equal-weighted frequency of
violations (shown as the RHS red bar). The quarterly data in panel A shows that
generally the market-value-weighted frequency of violations is greater than the equal-weighted frequency, implying that larger firms are more likely to experience a cross-sectional
variance bound violation. The annual data in panel B, however, is much more
balanced.
Figure 9 gives histograms of how frequently a firm violates the cross-sectional variance bound. Compared to the parallel results in figure 6, both panels
of the figure show very large masses of firms who never experience a violation of
the cross-sectional bound followed by a flat and long tail of increasingly common
violators. However, the red line shows that the mass of observations, rather than
firms, has a more dispersed distribution. For instance, panel A shows that 10
percent of the firms are in the 5-10 percent of violations bin (the blue bar), but they
represent 20 percent of the observations (the red line). Panel B shows similar
results for the annual data. While the histograms in figure 9 are generally flatter
above the zero bin than the same histograms for the time-series violations in figure
6, consistent with the lower overall frequency of violations, both panels suggest that
there are firms who regularly violate the bound and firms who never violate the
bound.
VI. Conclusion
We document a significant number of excessively volatile forecasts,
measured in time series over individual analysts or measured at a point in time
across the collection of analysts. What do excessively volatile forecasts mean to the
market? At the most basic level, if individual analysts are changing their forecast
too extremely over the forecast period, this could directly lead to excess volatility in
the stock market. This is indeed a likely outcome, as large forecast revisions have
been shown to have a disproportionately large impact on stock prices (Gleason and
Lee 2003). And the rise and fall of excess volatility frequencies around major stock
market crashes suggests that analysts could have played a pivotal role in these
events. The time-series violations are consistent with a human judgement error
wherein analysts over-react to new information in certain circumstances.
The cross-sectional violations of the variance bound present cases where it
appears analysts are unduly influenced by the forecasts of other analysts. But
rather than herd to a common forecast, these represent cases where they appear to
be repelled from each other’s forecasts. In other words, these are cases where
strategic considerations may have overshadowed rational forecasting.
In future research we intend to model the causes of each type of variance
bound violation. Preliminary evidence shows that analyst characteristics that are
associated with accurate forecasters, such as the years of experience, or how many
firms they follow, are negatively associated with violations of the time series bound.
Interestingly, the number of days between their first and last forecast is a very
strong positive predictor of a violation. This suggests that when there is a long time
for information events to arrive, analysts are more likely to over-react to these
events. Preliminary evidence also finds that firm characteristics associated with
complexity or the difficulty of the forecasting process, such as size or the presence of
a loss, are positively associated with occurrences of excessive volatility.
Finally, we are interested in the different combinations of time-series and
cross-sectional violations. Imagine all the analysts forecasting basically the same
thing, so there is little cross-sectional variability, but moving together up and down
so much that they individually create too much time-series volatility. Now contrast
this with a case where analysts make irrationally different forecasts, but none of
them change their opinions much over time, resulting in a cross-sectional violation
but no time-series violation. These would seem to represent very different types of
analysts’ group behaviour. Understanding the causes of the different combinations
would shed more light on the nature of the analysts’ collective forecasting process.
Figure 1: Forecasts 𝑓𝑗𝑡𝑖𝑠 and outcomes Xjt for firm j’s fiscal period announced at time t
𝑓𝑗𝑡𝑖𝑠 is the sth forecast for analyst i (j and t suppressed in the figure)
Figure 2: Variance bound violations
(positive values shown in red indicate violations of the variance bound)
Panel A: Time-series estimates of 𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 ) - 𝑉̂ (𝑋𝑗𝑡 ) for each analyst-firm-quarter
Panel B: Cross-sectional estimates of 𝑉̂𝑐𝑠 (𝑓𝑗𝑡𝑠 ) - 𝑉̂ (𝑋𝑗𝑡 ) for each firm-quarter
Figure 3: Frequency of time-series variance bound violations by calendar year
Panel A: quarterly data
Panel B: annual data
Figure 4: Market-weighted time-series variance bound violations by calendar year
(market-weighted shown as blue bars on LHS, equal-weighted shown as red bars on RHS)
Panel A: quarterly data
Panel B: annual data
Figure 5: Histogram of frequencies that individual analysts violated the time-series variance bound
(analyst frequencies shown as blue bars, forecast frequencies shown as red line)
Panel A: quarterly data
Panel B: annual data
Figure 6: Histogram of frequencies that a firm was associated with a violated time-series variance bound
(firm frequencies shown as blue bars, forecast frequencies shown as red line)
Panel A: quarterly data
Panel B: annual data
Figure 7: Cross-sectional variance bound violations by calendar year
Panel A: quarterly data
Panel B: annual data
Figure 8: Market-weighted cross-sectional variance bound violations by calendar year
(market-weighted shown as blue bars on LHS, equal-weighted shown as red bars on RHS)
Panel A: quarterly data
Panel B: annual data
Figure 9: Histogram of frequencies that a firm was associated with a violated cross-sectional variance bound
(firm frequencies shown as blue bars, forecast frequencies shown as red line)
Panel A: quarterly data
Panel B: annual data
Table 1: Variance estimates

Panel A: variance of earnings changes - 𝑉̂ (𝑋𝑗𝑡 )

                                     QUARTER                          ANNUAL
Estimation window        [-8, -1]  [-∞, -1]   [-7, 0]     [-8, -1]  [-∞, -1]   [-7, 0]
n                         287,428   287,428   287,422      72,804    72,804    72,792
Average n in estimate        7.51     31.20      7.66        6.63     11.62      7.21
P1                          0.004     0.005     0.004       0.014     0.015     0.016
P5                          0.011     0.015     0.011       0.040     0.041     0.046
P10                         0.018     0.025     0.019       0.067     0.069     0.076
P25                         0.042     0.054     0.043       0.151     0.150     0.168
Median                      0.098     0.118     0.100       0.355     0.346     0.383
P75                         0.231     0.274     0.237       0.800     0.782     0.862
P90                         0.574     0.695     0.587       1.944     1.940     2.070
P95                         1.154     1.369     1.179       3.710     3.873     3.931
P99                         5.854     7.820     6.071      25.695    28.958    28.332

Panel B: time-series variance of forecasted earnings changes - 𝑉̂𝑡𝑠 (𝑓𝑗𝑡𝑖 )

                          QUARTER     ANNUAL
n                         856,781     626,551
Average n in estimate        6.29        8.09
P1                          0.004       0.005
P5                          0.008       0.015
P10                         0.013       0.025
P25                         0.023       0.055
Median                      0.049       0.129
P75                         0.107       0.311
P90                         0.230       0.711
P95                         0.389       1.238
P99                         1.469       5.096

Panel C: cross-sectional variance of forecasted earnings changes - 𝑉̂𝑐𝑠 (𝑓𝑗𝑡𝑠 )

                          QUARTER     ANNUAL
n                          75,506      56,001
Average n in estimate        7.77       10.34
P1                          0.002       0.004
P5                          0.005       0.010
P10                         0.008       0.016
P25                         0.016       0.034
Median                      0.031       0.077
P75                         0.066       0.177
P90                         0.138       0.401
P95                         0.223       0.696
P99                         0.718       2.839
Table 2: Frequency of violations of the variance bound

                                            QUARTER                                        ANNUAL
                                 n      v(F) - v(X) > 0   [99% Conf. Interv.]     n      v(F) - v(X) > 0   [99% Conf. Interv.]
Time-series violations
  Range [-8, -1]             811,423        16.91%         16.81%   17.01%     504,319       17.57%         17.44%   17.69%
  Range [-∞, -1]             811,423        15.53%         15.42%   15.59%     504,319       19.00%         18.82%   19.08%
  Range [-7, 0]              811,423        13.24%         13.21%   13.48%     504,319       11.15%         11.12%   11.37%
Cross-sectional violations
  Range [-8, -1]              75,427         7.73%          7.50%    7.97%      42,225        6.94%          6.67%    7.23%
  Range [-∞, -1]              75,427         4.95%          4.77%    5.13%      42,225        7.31%          7.02%    7.58%
  Range [-7, 0]               75,427         6.96%          6.76%    7.18%      42,225        4.11%          4.09%    4.54%
References
Barron, O., Kim, O., Lim, S. and D. Stevens. 1998. Using Analysts’ Forecasts to
Measure Properties of Analysts’ Information Environment. The Accounting Review,
73, pp. 421-433.
Barron, O., Byard, D., & Kim, O. 2002a. Changes in analysts' information around
earnings announcements. The Accounting Review, 77, 821-846.
Barron, O., Byard, D., Kile, C., & Riedl, E. 2002b. High-technology Intangibles and
Analysts' Forecasts. Journal of Accounting Research, 40, 289-312.
Bowen, R., Davis, A., & Matsumoto, D. 2002. Do conference calls affect analysts'
forecasts? The Accounting Review, 77, 285-316.
Bradshaw, M. T., & Sloan, R. G. 2002. GAAP versus the street: An empirical
assessment of two alternative definitions of earnings. Journal of Accounting
Research, 40(1), 41–66.
Brown, L. D., Hagerman, R. L., Griffin, P. A., & Zmijewski, M. E. 1987. An evaluation of
alternative proxies for the market’s assessment of unexpected earnings. Journal of
Accounting and Economics, 9(2), 159–193.
Brown, L.D., and A. Hugon. 2009. Team Earnings Forecasting. Review of Accounting
Studies, 14:587-607.
Chen, P., Narayanamoorthy, G., Sougiannis, T., and H. Zhou. 2013. “Analyst forecast
revision momentum and the post-forecast revision price drift.” Working paper.
Clarke, J., & Subramanian, A. 2006. Dynamic forecasting behavior by analysts:
Theory and evidence. Journal of Financial Economics, 80, 81-113.
Clement, M. 1999. Analyst forecast accuracy: Do ability, resources, and portfolio
complexity matter? Journal of Accounting and Economics 27 (July): 285-304.
Clement, M., and S. Tse. 2005. Financial Analyst Characteristics and Herding
Behavior in Forecasting. The Journal of Finance, Vol. 60, No. 1 (Feb. 2005), pp. 307-341.
Clement, Michael, Lisa Koonce, and Thomas Lopez, 2007, The roles of task-specific
forecasting experience and innate ability in understanding analyst forecasting
performance, Journal of Accounting and Economics 44, 378-398.
De Bondt, W., and R. Thaler. 1990. “Do securities analysts overreact?” American
Economic Review. 80:2 52-57.
DeGroot, M. 1975. Probability and Statistics. Addison-Wesley Publishing Co., Inc.
Gilles, C. and S. LeRoy. 1991. Econometric Aspects of the Variance-Bounds Tests: A
Survey. The Review of Financial Studies 4:4, pp. 753-791.
Gerakos, J. and R. Gramacy. 2012. Regression-based earnings forecasts. University
of Chicago working paper.
Gleason, C., and C. Lee. 2003. Analyst forecast revisions and market price discovery, The Accounting Review 78, 193-225.
Graham, J. 1999. Herding among investment newsletters: Theory and evidence.
Journal of Finance, 54, 237-268.
Hirshleifer, D. and S. Teoh. 2003. Herd Behaviour and Cascading in Capital Markets:
a Review and Synthesis. European Financial Management, Vol. 9, No. 1, 2003, 25–66.
Hong, H., and J. Kubik, 2003. Analyzing the analysts: Career concerns and biased
earnings forecasts, Journal of Finance 58: 313-351.
Jiang, G., C. Lee, and G. Zhang. 2005. Information uncertainty and expected returns.
Review of Accounting Studies 10 (2–3): 185–221.
Kahneman, D., and D. Lovallo. 1993. “Timid choices and bold forecasts: a cognitive
perspective on risk taking.” Management Science 39:1, 17-31.
Kahneman, D., and A. Tversky. 1973. “On the psychology of prediction.”
Psychological Review. 80: 237-51.
Lang, M. and R. Lundholm. 1996. Corporate disclosure policy and analyst behavior.
The Accounting Review. 71:467-492.
Li, K. and P. Mohanram. 2013. Evaluating cross-sectional forecasting models for
implied cost of capital. University of Toronto working paper.
Ramnath, S., S. Rock, and P. Shane. 2008. Financial analyst forecasting literature: a
taxonomy and suggestions for future research. International Journal of Forecasting
24: 34-75.
Shiller, R. 1981. Do Stock Prices Move Too Much to be Justified by Subsequent
Changes in Dividends? American Economic Review, 71, pp. 421-436.
Welch, I. 2000. Herding among security analysts. Journal of Financial Economics, 58,
369-386.
Zhang, F. 2006a. Information uncertainty and analyst forecast behavior.
Contemporary Accounting Research 23:2 pp. 565-590.
Zhang, X. F. 2006b. Information uncertainty and stock returns. The Journal of
Finance 61 (1): 105–37.