MUTUAL FUND RATINGS:
DO THEY REALLY MATTER?
Philip S. Russel
Philadelphia University, USA
2006 PBFEA Conference, Taipei, Taiwan
Research Funded by:
Lindback Foundation
Agenda
1. Introduction
2. Motivation
3. Research Questions
4. Morningstar Rating Methodology
5. Data
6. Discussion of Results
7. Conclusion
Introduction
Mutual Fund Industry
Exponential growth from a mere $50 billion in 1974 to nearly $9,000 billion in 2004.
In the USA, the number of mutual funds exceeds the number of stocks listed on organized exchanges, making the selection of mutual funds an onerous task for the average investor.
Role of Rating Agencies
How can investors screen thousands of funds?
One option is to rely on a rating service.
Ratings provide a composite measure of mutual fund performance.
Ratings are currently provided by Lipper, Value Line, and Morningstar; Morningstar is the most prominent.
2. Motivation
Ratings and Fund Flow
While Morningstar does not claim to forecast performance, anecdotal and empirical evidence suggests that investors are increasingly relying on mutual fund ratings to make their investment decisions.
97 percent of the money invested in no-load equity funds flowed into funds with a four- or five-star rating (Wall Street Journal, 1996).
5-star funds claim 50.5% of all assets of domestic equity funds (Keenan, 2002).
Based on 3,500 funds and 12,000 rating changes, Guercio and Tkac (2002) report a causal relationship between rating and fund flow.
Thus the predictive ability of the Morningstar rating system is an important question, as ratings have become extremely popular and seem to significantly influence the allocation of investment dollars.
3. Research Questions
Research Questions
1. Performance: Does picking 5-star funds lead to superior performance?
2. Persistence: How persistent are the ratings? Does the degree of reliability vary among the groups? That is, is a 5-star rating more reliable than a 3-star rating in forecasting future performance?
3. Star Attributes: Are there any distinctive differences among funds, based on their rating category?
4. Age Bias: Is there an age bias in rating?
4. Morningstar Rating Methodology
Morningstar Rating Methodology
Mutual funds are classified into 48 investment groups. Until 2002, there were only 4 groups: domestic equity, international equity, taxable bond, and municipal bond.
Ratings recognize performance within each group based on historical risk and return measures.
Based on monthly data, ratings are assigned for 3-, 5-, and 10-year periods, along with an "overall rating" based on a weighted average of the 3-, 5-, and 10-year ratings (a sketch of this scheme appears after the table below).
The Stars

Rank          Stars
Top 10%       5
Next 22.5%    4
Middle 35%    3
Next 22.5%    2
Bottom 10%    1
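To make the mechanics concrete, the following minimal sketch assigns stars from percentile ranks within a single peer group using the cutoffs in the table above, and combines period ratings into an overall rating. It is an illustration only: the function names and the 20/30/50 weighting are assumptions, not Morningstar's actual implementation.

```python
# Illustrative sketch only (not Morningstar's actual code): assign stars from
# percentile ranks within one peer group, then combine period ratings into an
# "overall" rating with assumed weights.
import numpy as np

# Cumulative cutoffs: top 10% -> 5 stars, next 22.5% -> 4 stars,
# middle 35% -> 3 stars, next 22.5% -> 2 stars, bottom 10% -> 1 star.
CUTOFFS = [(0.100, 5), (0.325, 4), (0.675, 3), (0.900, 2), (1.000, 1)]

def assign_stars(risk_adjusted_returns):
    """Rank funds within one peer group and map percentile rank to stars."""
    scores = np.asarray(risk_adjusted_returns, dtype=float)
    order = scores.argsort()[::-1]            # best score first
    ranks = np.empty(len(scores), dtype=int)
    ranks[order] = np.arange(len(scores))     # 0 = best fund in the group
    pct = (ranks + 1) / len(scores)           # fraction of the group at or above
    return np.array([next(s for cut, s in CUTOFFS if p <= cut) for p in pct])

def overall_rating(stars_3y, stars_5y, stars_10y, weights=(0.2, 0.3, 0.5)):
    """Weighted average of period ratings; the weights here are an assumption."""
    return round(weights[0] * stars_3y + weights[1] * stars_5y + weights[2] * stars_10y)

# Example: assign_stars([0.12, 0.05, -0.02, 0.07, 0.01]) returns one star value
# per fund; overall_rating(4, 5, 5) -> 5.
```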
Biases in Rating
Load versus No-load
Rating is biased in favor of no-load funds.
Morey (2002) reports that of 164 funds receiving 5 stars, 75% were no-load funds.
Age of Fund
Rating favors new funds due to the survivorship effect (Blume, 1998).
Rating favors seasoned funds due to the weighting system (Morey, 2002).
Seasoned funds regress towards the mean due to the interaction between age of fund and fund size (Adkisson and Fraser, 2003).
5. Data
Data
Source: Morningstar Principia
Morningstar changed the rating methodology in July 2002.
The current study is based on quarterly data for 2003 for growth funds.

Quarter    Number of Funds
I          1811
II         1915
III        1996
IV         2074
6. Discussion of Results
6.1 What Are the Attributes of Winning Funds?
Reference: Table 4
5-star groups are clearly "winning funds".

Star        5-year Ret%   15-year Ret%   Beta
1-Star      -4.85         0.81           1.21
5-Star      7.12          2.05           0.88
All funds   0.60          1.77           0.95
Attributes of Star Categories

Attribute           1-Star    5-Star
Alpha               -11.38    1.56
Turnover (%)        159.04    99.15
Tenure (years)      3.73      6.04
Expense Ratio (%)   2.03      1.46
Asset Size ($M)     171.88    982.93
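For readers who want to reproduce a comparison like the table above from fund-level data, here is a minimal sketch; the column names (overall_stars, alpha, etc.) are assumed placeholders, not actual Morningstar Principia field names.

```python
# Minimal sketch: mean attribute values by overall-rating category.
# Column names are assumptions for illustration, not Principia field names.
import pandas as pd

def attributes_by_star(funds: pd.DataFrame) -> pd.DataFrame:
    """Return the mean of selected attributes for each star category."""
    cols = ["alpha", "turnover", "manager_tenure", "expense_ratio", "net_assets"]
    return funds.groupby("overall_stars")[cols].mean().round(2)

# Example: attributes_by_star(df), where df has one row per fund.
```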
6.2 Is There an Age Bias in Rating?
We first divide the sample for each quarter into three groups: Large Cap, Medium Cap, and Small Cap.
We further divide the sample based on age, in years: Seasoned funds (age ≥ 10), Middle-aged funds (5 ≤ age < 10), and Young funds (3 ≤ age < 5).
We then test for differences in overall rating between Seasoned versus Young funds, Seasoned versus Middle-aged funds, and Middle-aged versus Young funds (a sketch of this procedure follows below).
==> 3 cap groups × 3 age comparisons = 9 per quarter; 9 × 4 quarters = 36 inter-group comparisons
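The sketch below illustrates this procedure for one quarter under assumed column names (cap_group, age_years, overall_stars) and an ordinary two-sample t-test; the paper's actual test statistic may differ.

```python
# Hedged sketch of the age-bias test: within each market-cap group, compare
# mean overall ratings across age buckets with two-sample t-tests.
from itertools import combinations
import pandas as pd
from scipy import stats

AGE_BUCKETS = {
    "seasoned": lambda age: age >= 10,
    "middle":   lambda age: (age >= 5) & (age < 10),
    "young":    lambda age: (age >= 3) & (age < 5),
}

def age_bias_tests(funds_one_quarter: pd.DataFrame) -> pd.DataFrame:
    """One row per (cap group, age-bucket pair) with the t statistic and p-value."""
    rows = []
    for cap, grp in funds_one_quarter.groupby("cap_group"):   # Large/Medium/Small Cap
        for (name_a, in_a), (name_b, in_b) in combinations(AGE_BUCKETS.items(), 2):
            a = grp.loc[in_a(grp["age_years"]), "overall_stars"]
            b = grp.loc[in_b(grp["age_years"]), "overall_stars"]
            t, p = stats.ttest_ind(a, b, equal_var=False)
            rows.append({"cap_group": cap, "comparison": f"{name_a} vs {name_b}",
                         "t_stat": t, "p_value": p})
    return pd.DataFrame(rows)   # 3 cap groups x 3 pairs = 9 comparisons per quarter
```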


Of the 36 inter-group comparisons, only 7 are statistically significant.
Thus Morey's (2002) contention that seasoned funds systematically receive a higher overall rating is not observed.
6.3 Is There Persistence in Rating?
Short-Term Persistence in Rating
Ratings for March compared with June/September/December ratings.
Funds matched by name (an illustrative matching sketch follows the table below):
1,692 identical funds for March versus June
1,522 identical funds for March versus September
1,195 identical funds for March versus December
Very few funds maintain their rating, and persistence deteriorates over time.
The following table shows March versus December.
Stars (3/30)   # of Funds   Mean Rating 3/30   Mean Rating 12/30   # Same Star
5              91           5.00               4.13                39
4              262          4.00               3.48                140
3              457          3.00               2.86                251
2              298          2.00               2.41                142
1              87           1.00               1.97                46
All            1195         2.97               2.92                618 (52%)
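A minimal sketch of how the persistence counts above can be produced: match two quarter-end snapshots by fund name and tabulate how many funds keep their March rating. The column names (fund_name, stars) are assumptions for illustration.

```python
# Sketch: persistence of star ratings between two snapshots matched by fund name.
import pandas as pd

def rating_persistence(march: pd.DataFrame, later: pd.DataFrame) -> pd.DataFrame:
    """Summarize, for each March star level, how many matched funds keep it."""
    merged = march.merge(later, on="fund_name", suffixes=("_mar", "_later"))
    merged["same_star"] = merged["stars_mar"] == merged["stars_later"]
    summary = merged.groupby("stars_mar").agg(
        n_funds=("fund_name", "size"),
        mean_later=("stars_later", "mean"),
        n_same_star=("same_star", "sum"),
    )
    summary["pct_same"] = (summary["n_same_star"] / summary["n_funds"]).round(2)
    return summary

# Example: rating_persistence(march_df, december_df) reproduces the kind of
# counts shown in the table above (e.g., funds that stay at 5 stars).
```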
7. Preliminary Conclusion
Preliminary Conclusion
Does the Morningstar Rating Really Matter? No.
Investors are interested in future performance, and the results show that very few funds maintain their "star rating". Thus ratings should not matter!
The continued aggressive promotion of ratings in advertisements may indeed foster a "false" sense of confidence among naïve investors (who seem to ignore Morningstar's disclaimer that stars do not reflect future performance).
Buyer Beware!
In progress:
Expansion of results to a bigger database (2003, 2004, and 2005).
Refining the measurements of risk and return to assess performance.
Application of decision-making/forecasting models.