The Myth of Rigorous Accounting Research

THE MYTH OF RIGOROUS ACCOUNTING RESEARCH
Paul F. Williams
Department of Accounting
Box 8113
North Carolina State University
Raleigh, NC 27695
Phone: 919-515-4435
Fax: 919-515-4446
Email: paul_williams@ncsu.edu
Sue Ravenscroft
Department of Accounting
2330 Gerdin Building
Iowa State University
Ames, IA 50011
Phone: 515-294-3574
Fax: 515-294-3525
Email: sueraven@iastate.edu
Abstract: In this brief paper we provide an argument that the rigor that allegedly
characterizes contemporary mainstream accounting research is a myth. Expanding
on an argument provided by West (2003), we show that the numbers utilized to
construct statistical models are, in many cases, not quantities. These numbers are,
instead, operational numbers that cannot be construed as measures or quantities of
any scientifically meaningful property. Constructing elaborate calculative models
using operational numbers leads to equations whose results are undecipherable
without the assumption of the validity of a prescribed narrative already embedded
in the logic relied upon in the construction of the model. Rigor is thus largely a
matter of appearance and not a substantive quality of the research mode labeled
“rigorous.”
Introduction
The financial reporting revolution (Beaver 1981) led to the empirical
revolution in accounting research, at least in the US. Beaver outlined a paradigm
for accounting research that established the positive economic approach to
accounting epistemology and turned academic accounting into a sub-discipline of
neoclassical economics (Reiter and Williams, 2002). As a result, empirical
archival financial research is now the dominant mode of scholarly discourse in the
US (Williams, et al., 2006). This research is notable for its alleged scientific rigor
and involves the construction of statistical models to provide economic
explanations of the behavior of all manner of actors, e.g., managers, investors,
creditors, auditors, and security markets.
According to the Financial Accounting Standards Board (hence FASB),
"The information provided by financial reporting is primarily financial in nature –
it is generally quantified (emphasis added) and expressed in units of money.
Information that is to be formally incorporated in financial statements must be
quantifiable (emphasis added) in units of money” (1978, par. 18). The
International Accounting Standards Board (hence IASB) reiterates the primacy of
quantification, i.e., “Most of the information provided in financial statements about
resources and claims and the changes in them results from the application of
accrual accounting” (2008, par. 0B19). Asserting that accounting is about
quantification is uncontroversial; accrual accounting is the process of assigning
quantities to economic categories. Accounting is described as a “measurement
discipline; accountants measure things” (Ijiri, 1975; Mock, 1976). Financial
statements contain numbers and numbers appear, after all, to be quantities. Most
financial accounting research is predicated on the quantitative nature of accounting
information, and it is the quantitative character of accounting information and its
manipulation via complex statistical models that identifies such financial research
as “rigorous.”
“Rigorous” is a powerful adjective when used to describe accounting
research, particularly because it is used to distinguish it from scholarship that fails
the rigor test.1 A compelling example of such power may be found in a recent
white paper published by the Rock Center for Corporate Governance, as part of its
Closer Look Series. Authors Larcker and Tayan (2011) confidently assert the
mythical nature of seven widely believed propositions related to the issue of
corporate governance. Two of the ‘myths’ – CEOs are systematically overpaid and
there is no pay for performance in CEO compensation – are believed by many to
be well established truths, and to pose significant consequences for the continued
1 Another term used in place of ‘rigorous’ is ‘scientific’.
viability of the U.S. Republic (e.g., Phillips, 2002; Hacker and Pierson, 2010;
Corning, 2011; Lessig, 2011). The confident dismissal of other scholars’ evidence
for believing in such “myths” derives from Larcker and Tayan’s claim that their
conclusions are based on “rigorous research.” Presumably those who mistakenly
believe what Larcker and Tayan call myths came to those beliefs via less
intellectually rigorous ways. The authors offer a confident exhortation about
failing to heed the results of rigorous research:
Decisions regarding the structure and processes of a governance system
should be based on concrete evidence, not the educated guesses of self-styled
experts. To this end, a comprehensive and rigorous (emphasis
added) body of research exists that examines many of these important
questions. We believe that this research should be consulted and
carefully considered when governance decisions are being made (Larcker
and Tayan, 2011, p. 5).
The authors do not provide an explicit bibliography of this body of rigorous
research, but since the references in the footnotes are to their own works or those
of others published in leading U.S. finance journals (Journal of Finance and the
Journal of Financial Economics), we conclude that rigorous research in this
context is of the form published by these authors.
Our concern in this paper is not over the issue of corporate governance or
with Larcker and Tayan’s claims, but with the adjective “rigorous” as it is
employed by scholars to describe their research, and is used to give credibility to
their assertions (and discredit to those of others) simply because their research is
described as “rigorous.”2 This valorization of a particular form of research as
“rigorous” has substantial implications for the status of accounting knowledge and
the relative stature of persons within the accounting academy (Reiter and Williams,
2002; Williams, et al., 2006; Lee and Williams, 1999). Of most importance is that
any recommendations made by someone claiming to have reached those
conclusions on the basis of “rigorous research” may lead others to believe those
recommendations are sound. A course of action may be followed that ultimately
leads to bad consequences because the research’s conclusions were not what
one would expect from truly rigorous research.3 Our intention in this paper is to
point out one (there are many more) significant flaw in the claim that the
prevailing form of rigorous research is actually as “rigorous” as claimed. Rather
than rigor, there is a prevailing myth about rigor that is rooted in the misconception
that accounting is actually a discipline concerned with quantification.4
2 The Auditing Section of the American Accounting Association recently asked its members to respond to a survey about retaining the word “Practice” in the title of its journal. The task force coordinator explained the charge: “Our charge is to consider the concern expressed by some Section members that the word ‘Practice’ in the journal name Auditing: A Journal of Practice and Theory (AJPT) can connote a sense that AJPT is a practitioner journal, thereby potentially lessening its impact as a rigorous (emphasis added) academic research journal in the minds of deans and others who oversee faculty evaluations and promotions” (Kachelmeier, 2012). Of all the evaluative criteria that could potentially apply to scholarly work, the one of greatest concern is a criterion best described with the adjective “rigorous.”
3 The case of Alan Greenspan comes readily to mind. As Federal Reserve Board chairman he pursued policies based on beliefs about markets derived from what he regarded as “rigorous research.” Those beliefs and policies later proved, by Greenspan’s own admission, to be disastrously wrong.
4 There is a rather profound, and apparently missed, irony in claims to rigor in AAA president Gregory Waymire’s theme of Seeds of Innovation. He reaches the following conclusion about one consequence of rigorous accounting research: “As a result, I believe our discipline is evolving towards irrelevance within the academy and the broader society with the ultimate result being intellectual irrelevance and eventually extinction” (Waymire, 2011, p. 3).
What Makes for Rigorous Accounting Research?
The type of research that receives the accolade of being rigorous is familiar
to anyone who is even casually acquainted with the so-called “top three” academic
accounting journals: Journal of Accounting Research (JAR), Journal of Accounting
and Economics (JAE), and The Accounting Review (TAR). The fundamental
premises underlying this research are: (1) an engineering or positive economics
approach (Sen, 1988), coupled with the belief that anything that counts as
knowledge must be algorithmic in form; (2) accounting is a quantitative,
measurement discipline; and (3) the natural science model applies without
reservation to a social practice like accounting. Thus, academic papers in leading
accounting journals consist almost exclusively of the presentation of elaborate
mathematical models, F-statistics, correlations, and forecast errors.
aren’t statistical models are analytical models of accounting phenomena relying on
the calculus of differentiation and integration and assumptions of standard
neoclassical economics.
To illustrate the nature of rigorous research, we examined the 135 papers
published in JAR, JAE and TAR during 2011.5 Five of the papers were analytic,6
thirteen were papers that involved behavioral experiments utilizing
ANOVA/ANCOVA designs, one utilized path analysis (a linear modeling technique), and
5 We excluded discussions and commentaries, including only research papers.
6 “Analytic” means they were all based on deriving consequences via mathematical models based on standard rational economic decision theoretic suppositions.
116 (86% of all articles) produced some form of regression model (e.g., probit,
logit, etc.). Of the 135 papers, 78% pertained to a finance-related topic; fourteen
dealt with management topics, eleven with auditing, and three with “other”
topics. Of all the papers (regardless of topic), 96% involved some kind of linear,
statistical causal analysis. The end product of these scholarly efforts is a statistical
(mathematical) model conforming to Sen’s (1988) engineering approach to
economics. These mathematical models presume some predictive purpose. Such
models are constructed on the belief that, were future values of the independent
variables entered into the model, its calculations would produce values for the
dependent variable that approximate the actual future value of that variable;
without that belief such models would lack any potential for explaining any
phenomenon. Thus, accounting research that qualifies as rigorous by its
publication in the premier journals is a process of constructing predictive models,
i.e., algorithmic forms of knowledge of accounting phenomena. Of course, failure
of a model to perform in that manner discredits the model as an explanation of the
phenomenon it models because it fails the important test of being replicable.
The 116 regression papers utilized an average of 20 variables (dependent
and explanatory) in their models; an average of nine variables (43% of all
variables) were taken from the financial statements of actual firms (mostly from
the COMPUSTAT data base). Many such accounting measures are taken directly
from financial statements, some are based on simple arithmetic operations
performed on financial statement numbers, e.g., common ratios, and some are
rather complex calculations including numbers derived from regressions on
accounting numbers. In many cases accounting or accounting-derived numbers
represented the dependent variable, i.e., the phenomenon being explained was
represented via accounting operations. Thus, accounting scholars engaged in
conducting rigorous research act as if these accounting numbers are particularly
useful for analyzing and evaluating accounting phenomena in a “rigorous” manner.
But is this particular supposition valid?
In a rather ironic way, current accounting policy-making, based on the
same suppositions as rigorous accounting research (Williams and Ravenscroft,
2012), creates a situation in which the usefulness of accounting data for
rigorous financial research is not particularly plausible. This is so for
reasons advanced by West (2003) in his book, winner of the Notable Contribution
to the Accounting Literature Award. West argues that although accountants still
produce numbers, those numbers are no longer meaningful quantities. Quoting
Jourdain (1960) to the effect that the basic functions of arithmetic, addition and
subtraction, may be performed with only concrete numbers of one “type,” West
argues that today’s accounting reports lack such uniformity of character, that is,
“Present accounting rules, and the conventional accounting practices upon which
they are so often based, appear to treat all amounts expressed in a common
currency as being of the same kind. However, such quantifications made in a
common unit of currency may be of ‘different types’” (West, 2003, p. 75).
Motivated by concerns for “decision usefulness,” standard-setters have created a
situation in which financial statements now comprise numbers representing
different “types,” such as historical cost, replacement cost, present value, market
value, and “fair” value (“as if” accounting in the extreme). Whether this
heterogeneous mix is useful for predicting cash flows is doubtful; that it fails to
provide reliable data suitable for use in formal models is not in doubt. West
claims that performing arithmetic operations on numbers of different types yields
results which are by definition undecipherable.
To illustrate the point, consider some quantities (or measures) we regularly
assign to students. A student could have a GPA of 3.62, a GMAT of 661, and a
weight of 187 pounds. These numbers summed yield an amount for the student of
851.62, a number apparently precise to two decimal places, but what could that
number possibly mean? If we multiply GPA by weight and divide by GMAT we
get another precise number, 1.024, but what could that number possibly mean other
than “if we multiply this student’s GPA by his weight in pounds and divide by his
GMAT we get 1.024?” Accounting numbers are analogous to our uninformative
student numbers because both are composed of numbers representing many
different types. Before some bit of information could be useful for rigorous
research, it should first convey something meaningful about
some relevant “type” of which it is a quantity, e.g., weight, height, or cost.
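West’s point can be made concrete in a few lines of Python, using the hypothetical student’s figures from the text (a toy illustration; nothing here depends on any real data):

```python
# Toy illustration: arithmetic on numbers of different "types" executes
# without complaint, yet the results mean nothing.

gpa = 3.62      # grade-point average (4-point scale)
gmat = 661      # GMAT score (200-800 scale)
weight = 187    # pounds

# A sum precise to two decimal places -- but a quantity of what?
total = gpa + gmat + weight
print(round(total, 2))    # 851.62

# An equally precise, equally meaningless composite.
ratio = gpa * weight / gmat
print(round(ratio, 3))    # 1.024
```

Nothing in the arithmetic flags the category error; the meaninglessness is visible only to a reader who knows what the inputs are.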
In the research context in which accounting data are used, the problem West
describes has significant implications for what rigorous empirical accounting
research could mean. In a research context Stamp (1993) observes:
Thus despite the common use of the terms `measure’ or `measurement’ in
accounting, it seems to this author that there is no operation whatsoever in
accounting that could be described as `measurement’, in the sense used by
physicists, or biologists, or geologists, and so on. To measure in science
(and, actually, in the everyday sense described above) necessarily involves
the physical operation of comparison of the quantity being measured with
some standard (ibid, p. 272).
Most financial accounting research is based on Sen’s engineering approach
described previously. The success of this “positive” model of economic reality as a
calculative one analogous to physics is dismal.7 West’s analysis of the need for
single types of data offers one explanation for conventional economics’ failure
(and, derivatively, accounting’s as well) to become a mathematical predictive
science.8
7 That the normal science model has failed so far in the social sciences is not all that controversial. Russ Roberts, writing for the Wall Street Journal in February 2010, opined, “We should face the evidence that we are no better today at predicting tomorrow than we were yesterday. Eighty years after the Great Depression we still argue about what caused it and why it ended” (Roberts, 2010, p. 2).
8 Chang (2010) puts it this way: “Free-market economists may want you to believe that the correct boundaries of the market can be scientifically determined, but this is incorrect. If the boundaries of what you are studying cannot be scientifically determined, what you are doing is not a science” (ibid, p. 10).
Gillies (2004) makes an argument similar to West’s to explain the success
of physics and the lack of success of economics:
The physical world appears on the surface to be qualitative, and yet
underneath it obeys precise quantitative laws. That is why mathematics
works in physics. Conversely economics appears to be mathematical on
the surface, but underneath it is really qualitative. This is why attempts
to create a successful mathematical economics have failed (Gillies, 2004, p.
190).
This failure is due to the phenomenon of “operational numbers,” which Gillies
contrasts with numbers in physics:
Whereas numbers in physics are estimates, which may be more or less
accurate, of exact quantities (emphasis added) which exist in reality,
operational numbers do not correspond to any real quantities. They are
a convenient, but sometimes misleading, way of summarizing a complicated,
qualitative situation. Moreover their values depend to a large extent on
conventional decisions and procedures (emphasis added) and are therefore
arbitrary to a degree. Operational numbers are the numerical surface form
of an underlying reality which is qualitative in character (ibid, p. 190).
The raw material of accounting information (or data) is prices of various types:
past ones, present ones, and, increasingly, future ones, which are highly subjective
and presume some extra-market wisdom to discern. Such prices, and numbers
derived from performing operations on them, are by definition operational
numbers. Because the rules produced by FASB, IASB, and other accounting
standard-setting bodies are conventional decisions and procedures, the numerical
information produced via the application of GAAP yields only operational
numbers, and these operational numbers largely comprise the data whose
“crunching” qualifies research as rigorous.
rigorous research activity. Gillies cites goodwill as an example of an operational
number, which, at the time Gillies wrote his paper, was determined differently in
the US and Canada from how it was determined in Australia, which was different
from Germany, which was different from the UK. Even within the U.S. goodwill
has gone from a charge against equity at the moment of its inception to an asset
with a life arbitrarily limited to a maximum of forty years to an asset that is
permanent! Even within a single accounting regime, accountants interpret
standards differently; some are more aggressive while others take more
conservative stances.
Another example of the non-quantity nature of accounting numbers is
provided by Beaver and Demski, who criticized the American Institute of Certified
Public Accountants for its effort to formalize the objectives of accounting; "the
term income determination is used as if it were some unambiguous, monolithic
concept (such as true earnings) devoid of any measurement error," (1974, p. 180).
While we question the idealized concept of "true earnings" we agree with their
criticism of the ontological status of income. Accounting is not a measurement
activity. It could be construed as measurement only in the weak sense of assigning
numbers; what scale those numbers are from is ambiguous at best. Accounting
numbers are not quantities of underlying physical reality, but numerical
representations of a qualitative state. “Rigorous” accounting researchers employ
accounting data in an effort to describe conditions (states of independent variables)
that will allegedly enable the prediction (calculation) and, thus, explanation of all
accounting phenomena. Because of the operational nature of accounting data,
however, accounting numbers do not accomplish their alleged role in rigorous
accounting research.
The status of accounting data as operational numbers has important
implications for whether quantitative research is actually very rigorous or reliable.
These implications flow from Gillies’ warning that operational numbers may be
used only as:
... a rough indication, formed in a somewhat arbitrary and conventional
fashion of a more complicated and qualitative underlying reality. As long as
the number is understood in this way it is a useful tool, but the danger lies in
taking the number more seriously and regarding it as an approximation to an
exact mathematical quantity existing in reality, as would be the case for a
similar number in physics (ibid, p. 195).9
Because accounting data are operational numbers, they are unlikely to prove to be
consistently useful in mathematically reducing uncertainty about any future
outcome being modeled by rigorous accounting research; indeed the events being
modeled are often represented with operational numbers as well.
The operational nature of accounting numbers has profound implications in
the domain of accounting epistemology. As the data from the 2011 publications
9 For example, seismologists have developed instruments that can register the foot stomps of soccer fans 30 miles from the instrument; time has been measured to within a millionth of a billionth of a second, and the wavelength of light to 470 millionths of a billionth of an inch (Boyd, 2008, p. 13A). The same cannot be said for the precision with which “leverage” or “ROA” is measured.
in the so-called premier journals show, the vast majority of articles being produced
today culminate in equations that calculate the contribution each variable makes to
the phenomenon being modeled. In each article the authors then proceed to
explain what these equations mean as if the numbers in the equations were
representative of quantities of some real economic thing. But the numbers being
used to make these calculations are largely accounting numbers. They are
operational numbers, not quantities of anything, so what the equations actually
calculate is indeterminate.10
For example, many of the authors of papers in 2011 utilized a “measure” of
leverage as a control/explanatory variable in their regressions. “Leverage” is
represented via an arithmetic operation of dividing liabilities by either assets or
equity (the debt-to-equity ratio). “Leverage” is a concept about some potential
ability to utilize debt financing to increase equity holders’ return on their
investment in a firm. It is something that, unlike a wavelength, evades precise
measurement; even if it could be measured, it could be only in context, not by so
simplistic an operation as dividing liabilities by assets.11 “Measures” of
leverage for any two firms are not comparable for any number of obvious reasons:
10 This is one factor contributing to the lack of progress in much of the work in all social sciences predicated on rational decision theory and statistical modeling. In describing this work, Elster (2009, p. 9) says: “My claim is that much work in economics and political science is devoid of empirical, aesthetic, or mathematical interest, which means it has no value at all (emphasis in original).”
11 Two firms could have the same accounting measures, but not have the same “leverage.” An extreme example is Lehman Bros. and AIG: one was allowed by the political system to fail, the other was bailed out.
 Through time, accounting standards have changed what counts as liabilities (e.g., post-retirement health benefits), as assets (e.g., capitalization of interest), and as equity (e.g., all manner of new “expenses” have been created).
 At any moment in time, standards allow for discretion in the choice of operations, so for any two firms the permissible choices within the rules will produce different numbers for debts, assets, and equities.
 “Leverage” is a concept that eludes anything corresponding to “precise” measurement.
 The conventional wisdom is now that numbers are manipulated. Indeed, a principal focus of financial accounting research has been investigating “quality of earnings” or management of earnings. It is a tenet of principal/agent theory that managers opportunistically manipulate financial statement numbers, e.g., via off-balance-sheet financing, “abnormal accruals,” etc.
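The first two bullet points above can be sketched in Python (all figures, the two firms, and the classification choice are hypothetical, invented for illustration):

```python
# Hypothetical sketch: two firms with identical underlying economics report
# different "leverage" because standards permit discretion in classifying
# an obligation.

def leverage(liabilities, assets):
    # The operational number typically used in regressions: liabilities / assets.
    return liabilities / assets

obligation = 400.0  # an obligation that may, or may not, be recognized on the balance sheet

# Firm A recognizes the obligation; Firm B, making a permissible
# alternative choice, keeps it off the balance sheet.
firm_a = leverage(liabilities=600.0 + obligation, assets=2000.0 + obligation)
firm_b = leverage(liabilities=600.0, assets=2000.0)

print(round(firm_a, 3))   # 0.417
print(round(firm_b, 3))   # 0.3
```

Both numbers are computed by the same rule from rule-compliant statements, yet they are not comparable measures of any single underlying property.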
The last bullet point illustrates what we describe as the “clock” problem in
empirical financial research, which poses significant difficulties for the rigor of
such research. The “clock problem” is not confined simply to early principal/agent
research dealt with by Williams (1989), but is a generic problem for all accounting
research that relies on archival financial data. We describe it as the “clock
problem” based on the clock’s essential function as a pure measurement device,
i.e., its function is simply the measurement of time whatever the purposes for
measuring time happen to be (deciding a winner in a slalom race, determining
when an examination is over, knowing when to take a pie out of the oven,
scheduling train arrivals and departures, etc.).
As a thought exercise, suppose neuroscientists have established that human
reaction time to visual or aural stimuli is explained by the physical characteristics
of the brain, e.g., the number of neuronal connections. A neuroscientist wants to
utilize this knowledge to test another contentious hypothesis, which is whether
human intelligence, rather than being mainly determined by environmental/social
factors, is largely a function of the physical properties of the brain that lead to
reaction times being faster for some individuals than others. An experiment is
conducted whereby standard IQ tests are administered to a group of human
subjects to “measure” their intelligence. These same subjects are also exposed to
various stimuli in a controlled setting and their reaction times measured with a
sophisticated, precise clock that measures to very small parts of a second. In this
experiment the clock provides the measures correlated with IQ scores to test the
hypothesis that nature is dominant over nurture. Whether the theory is sustained or
not by the experimental results, the interpretation of the clock’s readings is
unaffected. That is, the meaning afforded the clock’s readings is independent of
the status of the theory being tested. A reading of .18 seconds means .18 seconds
elapsed between stimulus and reaction whether the theory about reaction times and
intelligence is retained or rejected. The clock is independent of the theory.
In empirical financial accounting research we have no such assurance that
our “clock” (published financial statements, i.e., COMPUSTAT data) is
independent of the theories we test with them. If, for example, we have a theory
that managers manipulate earnings (they manage earnings) how can one
confidently use the very data management is allegedly manipulating to “measure”
the variables alleged to contribute to managers doing so? Much work in
accounting has focused on the phenomenon of “abnormal accruals.” The use of
such terminology presumes such a concept as “normal accruals”; abnormal is that
which deviates significantly from what is considered normal. But such a norm
doesn’t exist; there is no “clock” that measures what accruals would be “normal”
in any situation. The researcher resorts to deriving abnormal accruals from
accruals that have been made in the past. At best, abnormal could only be
abnormal in a statistical sense, presuming everything in the past used to derive the
model can be construed as “normal.” The meaning of the readings on the “clock”
used by accounting researchers depends on whether a particular theory about the
production of accounting data holds or does not. The problem is ubiquitous for
financial research because the “clock” that is being employed by financial
researchers is not constructed for their research purposes, i.e., to provide
replicable, verifiable, neutral data.
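The circularity can be illustrated with a deliberately simplified sketch (hypothetical numbers; published studies estimate “normal” accruals with regression models such as the Jones model rather than a simple average, so this is a stand-in for the idea, not for any actual method):

```python
# Hypothetical sketch: "abnormal" accruals are defined only relative to a
# benchmark estimated from the firm's own past reported accruals.

past_accruals = [0.04, 0.05, 0.03, 0.06, 0.05]  # prior years' accruals / assets (invented)

# "Normal" is whatever the past reported numbers average to.
normal = sum(past_accruals) / len(past_accruals)

current = 0.09
abnormal = current - normal

# The "clock" problem: if management manipulated the past accruals used to
# estimate `normal`, the benchmark is itself contaminated, and `abnormal`
# measures deviation from past manipulation, not deviation from a true norm.
print(round(normal, 3))     # 0.046
print(round(abnormal, 3))   # 0.044
```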
Instead, for over 40 years FASB and other policy-makers have been building
a “clock” to produce readings, allegedly useful to investors and creditors, for
predicting cash flows based on a rational decision theoretic characterization of
investors and creditors (Young, 2006). It is not at all certain that data produced in
such a manner are also reliable “measures” of the economic constructs germane to
a particular hypothesis being tested by financial accounting researchers. To be
certain, a researcher would need the ability to adjust the data produced by
standard-setters for one purpose (decision usefulness) to be suitable for her purpose
(measuring some economic construct). She can’t simply presume that standard-setters
have produced the optimal data system suitable for her scientific purposes.
To do this successfully, the researcher would have to already possess an
understanding of the financial reporting process sufficient to make the necessary
adjustments to render the accounting data precise measures for her purposes. But if
she already knows how to do this, of what significant value can her study of the
process be? She must already know what she is trying to find out. When one
considers the operational character of such numbers, the great faith in a
methodology completely dependent on such numbers is inexplicable. The standard
excuse for ignoring such problems is usually “measurement error,” but that is not
relevant because the researcher is not actually measuring anything.
With respect to utilizing operational numbers as scientific measures Gillies
states: "Operational numbers are, however, only rough guides to a more
complicated qualitative (emphasis added) situation. If we start performing
elaborate mathematical calculations with them, the results can all too easily cease
to bear any relation to reality," (2004, p. 195). Meaning thus becomes the
significant problematic. Do accounting data mean what we claim they mean?
What do all of the literally thousands of financial studies done in the past 40 years
mean? A financial study with 20 variables “measured” mainly by operational
numbers on 300 firms condensed into the calculations of a linear equation: what
could that equation possibly mean (not just the bits and pieces of it)?12 If it has no
meaning in a mathematical sense, what explanatory power or policy implications
could it plausibly have? Why must all accounting scholarship be in such
mathematical forms? What kind of sense does it make?13 Unfortunately, the
meaning of such studies is nearly always contained in their conception. Financial
research continually attempts to close an open system. In spite of all the rigorous
financial research and the deliberations of standard-setters we are no closer to
knowing with any greater precision how to make accounting policy than we were
before we began to do “rigorous accounting research.”14
12
For example, the equation f = ma means something to designers of golf clubs; it has implications for the choice
of materials used in and the design of golfing implements. No equation has emerged from over 40 years of
empirical financial research that contains equivalent meaning for accountants, preparers, or users of financial
information.
13
Elster, citing the work of David Freedman (2005) reaches the following conclusion: “I suggest that a nonnegligible part of empirical social science consists of half-understood statistical theory applied to half-assimilated
empirical material (emphasis in original) (2009, p.17).” Given the operational nature of accounting numbers, this
goes double for empirical financial accounting research.
14 Ray Ball (2008), reflecting on our continuing lack of progress, provided an extensive list of as yet unanswered questions. That these questions, first asked in 1967 and crucial to sound policy making, have remained unanswered after over 40 years of "rigorous accounting research" should not induce sanguinity about rigorous accounting research ever being able to provide real answers to the questions it is allegedly best at finding.
Conclusion
In the fall of 2011 physicists at CERN and the Gran Sasso National
Laboratory conducted an experiment that appeared to demonstrate that light speed
is not the maximum speed attainable in the universe, contrary to Einstein’s claim in
the Special Theory of Relativity. Energized neutrinos were accelerated to the point
where they appeared to cover the distance between Geneva and Gran Sasso 60
nanoseconds faster than light speed (Powell, 2011).15 Sixty nanoseconds is a very,
very small quantity (60 billionths of a second). The clock designed to measure
such small quantities must indeed be a very precise instrument. Accounting
numbers, on the other hand, are not so precise since they are not actually
quantities, but operational numbers. The so-called rigor of rigorous accounting
research comes from the claim that it resembles what the physicists at CERN were
doing. But it is analogous to physics by appearance only. Thus, the rigor invoked
by accounting researchers in the empirical financial tradition (as Larcker and
Tayan invoke it to declare others' claims mythical) has the status of myth
itself.
15 This result was later shown to be incorrect; the anomaly was traced to a loose cable connection.
References
Ball, R. 2008. What is the actual economic role of financial reporting? Accounting
Horizons 22 (4): 427-432.
Beaver, W. H. 1981. Financial Reporting: An Accounting Revolution. Englewood
Cliffs, NJ: Prentice-Hall.
Beaver, W. H. and J. S. Demski. 1974. The nature of financial accounting
objectives: A summary and synthesis. Journal of Accounting Research 12:
170-187.
Boyd, R.S. 2008. Soccer fans’ stomps register as tiny earthquakes. The Raleigh
News & Observer, April 20, 2008: 13A.
Chang, H-J. 2010. 23 Things They Don’t Tell You About Capitalism. New York,
NY: Bloomsbury Press.
Corning, P. 2011. The Fair Society. Chicago, IL: The University of Chicago Press.
Elster, J. 2009. Excess ambitions. Capitalism and Society.
http://www.bepress.com/cas/Vol14/iss2/art1.
FASB. 1978. Statement of Financial Accounting Concepts No. 1: Objectives of
financial reporting by business enterprises. Stamford, CT: FASB.
Freedman, D. 2005. Statistical Models. Cambridge, UK: Cambridge University
Press.
Gillies, D. 2004. Can mathematics be used successfully in economics? In
Fullbrook, E.G., editor, A Guide to What’s Wrong with Economics. London,
UK: Anthem Press: 187-222.
Hacker, J.S. and Pierson, P. 2010. Winner-Take-All Politics. New York, NY:
Simon and Schuster.
Ijiri, Y. 1975. The Theory of Accounting Measurement. Sarasota, FL: American
Accounting Association.
International Accounting Standards Board. 2008. Exposure Draft of An Improved
Conceptual Framework for Financial Reporting: Chapter 1 and Chapter 2.
Jourdain, P.E. 1960. The nature of mathematics. In J. R. Newman (ed.), The World
of Mathematics. London, UK: George Allen and Unwin.
Kachelmeier, S. 2012. Letter to members of the Auditing Section of the AAA,
April 30, 2012.
Larcker, D.F. and B. Tayan. 2011. Seven myths of corporate governance. Closer
Look Series: Topics, Issues and Controversies in Corporate Governance,
CGRP-16, dated 6/01/11. Palo Alto, CA: Stanford University, Graduate School of
Business. Available at http://ssrn.com/abstract=1856869.
Lee, T.A. and Williams, P.F. 1999. Accounting from the inside: Legitimizing the
accounting academic elite. Critical Perspectives on Accounting 10(6): 867-895.
Lessig, L. 2011. Republic, Lost. New York, NY: Twelve.
Mock, T. J. 1976. Studies in Accounting Research #13: Measurement and
Accounting Information Criteria. Sarasota, FL: American Accounting Association.
Phillips, K. 2002. Wealth and Democracy. New York, NY: Broadway Books.
Powell, D. 2011. Not so fast, neutrinos. Science News, December 31, 2011: 22.
Reiter, S.A. and P.F. Williams. 2002. The structure and productivity of accounting
research: The crisis in the academy revisited. Accounting, Organizations and
Society 27 (6): 575-607.
Roberts, R. 2010. Is the dismal science really a science? The Wall Street Journal,
February 26, 2010.
Rodgers, J.L. and P.F. Williams. 1996. Patterns of research productivity and
knowledge creation at The Accounting Review: 1967-1993. Accounting
Historians Journal, June: 51-88.
Sen, A.K. 1988. On Ethics and Economics. Malden, MA: Blackwell Publishing.
Stamp, P. 1993. In search of reality. In Philosophical Perspectives on
Accounting: Essays in Honour of Edward Stamp, eds. M. J. Mumford and
K. V. Peasnell. London, UK: Routledge: 255-314.
Waymire, G. 2011. Seeds of innovation in accounting scholarship. Working paper:
Goizueta Business School, Emory University, Atlanta, GA.
West, B. 2003. Professionalism and Accounting Rules. London, UK: Routledge.
Williams, P.F. 1989. The logic of positive accounting research. Accounting,
Organizations and Society 11(5/6): 455-468.
Williams, P.F., J.G. Jenkins and L. Ingraham 2006. The winnowing away of
behavioral accounting research in the U.S.: The process for anointing
academic elites. Accounting, Organizations and Society 31: 783-818.
Williams, P.F. and Ravenscroft, S. 2012. Rethinking decision usefulness. Working
paper, Poole College of Management, North Carolina State University.
Young, J.J. 2006. Making up users. Accounting, Organizations and Society 31
(6): 579-600.