THE ORGANIZATIONAL CONTEXT OF STANDARD SETTING – THE ROLE OF IASB’S
TECHNICAL STAFF IN COMMENT LETTER ANALYSIS
Sebastian Hoffmann
Assistant Professor
HHL – Leipzig Graduate School of Management
Department of Accounting and Auditing
Jahnallee 59
04109 Leipzig
Germany
email: sebastian.hoffmann@hhl.de
tel. +49 (0) 3 41 – 98 51 70 8
I thank Anne Hartmann for her coding support and Dominic Detzen for consistently critical and helpful discussions and comments. Moreover, I am
grateful to Michael Bourne, Kees Camfferman, David Cooper, Hui Du (discussant), Norio Sawabe (discussant), Henning Zülch, the faculty
and doctoral students of the School of Accounting at Florida Atlantic University in 2011 (Boca Raton), participants of the Critical Perspectives on Accounting Conference 2011 (Clearwater) and the Annual Meeting of the American Accounting Association 2011 (Denver) as well
as the faculty of the Department of Accounting at Vrije Universiteit Amsterdam in 2011 for their valuable comments.
THE ORGANIZATIONAL CONTEXT OF STANDARD SETTING – THE ROLE OF IASB’S
TECHNICAL STAFF IN COMMENT LETTER ANALYSIS
Abstract
As the most important transnational standard setting body for accounting, the International Accounting Standards Board (IASB) employs technical staff responsible for many crucial activities within the standard setting process that leads to new International Financial Reporting Standards (IFRSs). A rare opportunity to examine the staff’s work is to investigate its analysis of the comment letters the IASB receives in response to its invitations to comment on new regulatory proposals. Applying a content and frequency analysis to the staff analysis paper of comment letters received on ED 9 Joint Arrangements, this study finds that verbal quantifiers are used to describe the frequency with which an issue has been raised by commentators. These quantifiers do not consistently represent a specific number or proportion of comments but appear to be used in a biased way. An in-depth qualitative analysis reveals that this bias often leads to an underweighting of comments that are highly critical of the IASB’s original proposals. Both findings indicate that the IASB appears to be less receptive to comment letters and call for a change in the mode used to present the staff analysis of comment letters. Such a change may enhance transparency, a necessary prerequisite for creating the critical level of trust that a privately organized standard setter like the IASB needs.
Keywords accounting standard setting • quantifiers • International Accounting Standards
Board (IASB) • IASB staff • standard setting process
1 Introduction
The importance of an organizational analysis of staff activities at the International Accounting
Standards Board (IASB) may be derived from the fact that the IASB is the most important
standard setter for accounting (Djelic & Sahlin, 2009). It promulgates International Financial
Reporting Standards (IFRS) that have been mandatory for capital-market-oriented companies in the EU since 2005, accepted by the Securities and Exchange Commission (SEC) for non-US companies since 2007, and required or permitted for financial reporting purposes in more than 120 countries worldwide. IFRS thus continue to gain currency in global accounting (Humphrey & Loft, 2009).
In setting global accounting standards, the IASB not only mitigates (capital) market barriers through the harmonization of financial reporting (Camfferman & Zeff, 2007) but goes far beyond that. Arnold (2009) argues that (international) financial reporting, and hence the IASB, may facilitate the transformation of economies from being perceived as pure capital accumulation systems to being recognized in terms of financialization. This perception is strengthened by Botzem &
Quack (2009) who shed light on how the IASB has become a significant component of the
global (not only Anglo-American) financial architecture. Similarly important components in
this context are multinational audit firms that also shape global financial reporting by being
heavily involved in global accounting regulation (Cooper & Robson, 2006; Suddaby et al.,
2007). Professional firms thus play a key role in accounting regulation at both national and transnational levels, since they are involved in IASB activities. They are represented on the board, the staff, and other positions in its governance structure, which enables them to influence the outcome of accounting standard setting. Later, they are engaged in translating accounting standards into practice by auditing and advising their clients on the use of IFRS,
which need not take place at a global level but may be transferred to local sites (Robertson, 1995). Thus, the IASB is shaped by and shapes its environment, be it (capital) markets, the accounting profession, corporations, or the financial architecture; be it on a
global stage in relation to other transnational organizations like the International Federation of
Accountants (IFAC), the World Trade Organization (WTO) (Arnold, 2005) or the International Organization of Securities Commissions (IOSCO), or at local sites where the individual
preparer and auditor, the user of financial statements as well as local regulatory agencies get
involved. All these multidimensional affiliations attach significant importance to the IASB as an accounting standard setting organization.
In order to have the aforementioned outreach, the IASB’s standards need to be authoritative
(Tamm Hallström, 2004). As the IASB is a private body, this is not a trivial issue and requires
a certain bindingness of its standards, either through voluntary compliance or an endorsement/enforcement mechanism (Djelic & Sahlin, 2009). Whichever way this bindingness is created, its basis is the trust and participation of the constituents (de Woot, 1969) in the activities of the regulating body. This is why privately organized accounting
standard setting bodies, on the one hand, have implemented a due process (Dyckman, 1988)
offering possibilities for all constituents to participate. On the other hand, trust requires a notion of transparency (Jahansoozi, 2006) and openness (Grunig and Huang, 2000), both of
which the IASB tries to implement: “[t]he IASB’s objective is to work […] under principles
of transparency, open meetings, and full due process” (Tweedie, 2002). However, transparency and openness mean more than granting access to all documents that are produced. According to Rawlins (2009), they additionally require providing information in an accurate, timely, balanced and unequivocal manner in order to be useful for assessing the accountability of
an organization. In many cases, too much information leads to a reduction of trust (Strathern,
2000), which is why useful information in the sense of transparency should enhance the understanding of the information that is provided (Wall, 1996). What is more, requiring full
transparency may have negative side effects and discourage the standard setting body, restrain
it from necessary actions (Malsch & Gendron, 2011) or even change its behaviour outside the
visible sphere (Roberts, 2009). Trying to achieve trust through transparency and openness as
well as participation at the same time may produce contradictory results. It thus requires a proper balance of all criteria (Collins & Evans, 2007), which constitutes a major challenge for
privately organized accounting standard setters in general, and the IASB in particular.
The tremendous importance of the IASB combined with its need for trust and participation
encourages an investigation of what is happening behind the scenes of the IASB as a transnational standard setting body. Usually, only the IASB board members are perceived to be a critical organizational element. However, the importance of the IASB’s technical staff should not be underestimated. Having had a rather supportive function for the standard setter’s early project steering committees, the technical staff turned into research staff during the 1990s, responsible for researching accounting issues and drafting standards (Camfferman & Zeff, 2007).
New responsibilities have led to a steady increase in the number of technical staff since 2001, and the staff is now heavily involved throughout the whole due process (Zeff, 2012). Although one may suppose that the technical staff plays a significant role in shaping the outcome of the standard setting process, little is known about its actual behaviour and influence on the board. By
now, research on the organizational context of accounting standard setting has focused on
various broad aspects, like the participation of constituents in general (e.g. Tandy & Wilburn,
1992; Jorissen et al., 2010) and with respect to national differences (e.g. McLeay et al., 2000;
Power, 2004). The emergence of individual standards (e.g. Hope & Gray, 1982; Kenny &
Larson, 1993; Hill et al., 2002), and general ‘political’ considerations (e.g. Zeff, 2002) have
also been under investigation. Little attention has been paid to internal organizational
aspects of accounting standard setting. Kirsch (2006) as well as Camfferman & Zeff (2007)
provide detailed descriptions of organizational aspects surrounding the development of the
IASB’s predecessor, the International Accounting Standards Committee (IASC)1. Moreover,
the agenda of accounting regulators has been identified as a critical organizational item
(Dyckman, 1988; Young, 1994; Ryan, 1998; Weetman, 2001) as has been their board (Kwok
& Sharp, 2005; Perry & Nölke, 2005; Richardson, 2009). However, a critical analysis of the
IASB’s organizational mechanisms and their impact on international accounting regulation remains scarce (Botzem & Quack, 2009).
This study investigates a particular IASB standard setting project with respect to the staff’s
summary of comment letters in order to analyze how the staff performs its activities and contributes to the formation of accounting standards within the due process. The analysis contributes to unpacking the black box surrounding the organizational context of IASB accounting
standard setting, particularly with respect to the role of the technical staff. Accordingly, the contribution of this paper is substantive rather than theoretical.
Applying a content and frequency analysis to the staff summary of comment letters on
IASB’s Exposure Draft ED 9 Joint Arrangements and drawing on findings of linguistics and
psychology, the study analyzes how the staff uses verbal quantifiers (like ‘some’, ‘many’, ‘a few’) with respect to their functions and meanings. For this purpose, limiting the investigation to one project not only provides a stable context and avoids mixing different topics but also ensures the involvement of only a few staff members (Stamp, 1985), which helps focus the analysis. Moreover, ED 9 is a significant project as findings might challenge Sutton’s (1984)
presumption that a regulator does not have preferences of its own. He supposes that the effectiveness of influencing a standard setter depends exclusively on the economic resources of the party attempting to exercise influence. This implicitly assumes that the standard setter has no agenda of its own that might ultimately rule out external attempts at influence.

1 In this context it is interesting to note that Camfferman and Zeff (2007) perceive the formation of the IASC not only as motivated by an interest in creating global accounting standards but also as a means to protect the United Kingdom from rival accounting harmonization approaches, e.g. within the European Economic Community.

Taking
into consideration that an adoption of IFRS in the US, for which convergence is a prerequisite, would ultimately make the IASB the global accounting standard setter (Humphrey &
Loft, 2009), the IASB might reveal a strong own preference for convergence with US GAAP,
no matter what respondents wrote. For ED 9 this may be observed because it was part of the
short-term convergence agenda, falling under the Memorandum of Understanding and being a
FASB-IASB joint project. Nonetheless, it was solely conducted by the IASB, allowing the
IASB to direct it on its own responsibility. Results indicate that the staff does not use verbal quantifiers unambiguously. The same quantifier often covers wide ranges of proportions and absolute numbers of comment letters, indicating its use for mitigating issues and arguments. Accordingly, quantifiers also seem to be used to (de-)emphasize certain issues, in particular
those that are related to convergence with US GAAP. It may thus be questioned whether the
current mode of comment letter analysis fosters transparency and openness of the due process
in the intended way, which eventually may have the potential to not only harm trust in the
IASB but also challenge the Board’s position in accounting standard setting.
2 Investigating the Organizational Context of Standard Setting at the IASB
2.1 The Technical Staff’s Duties within the Due Process
The IASB sets its accounting standards in the course of a due process. Such a due process is a
necessary and effective means to build trust and eventually achieve legitimacy of the standards that are produced. Internal organizational needs as well as the regulatory space with respect to the involvement of the state and the enforcement mechanisms determine the actual
design of the due process (Richardson, 2008). However, the process is subject to changes.
Responding to abundant public criticism concerning issues of legitimacy, the IASB changed its due process in 2005. The result was an increase in the transparency of the whole process as well as in the expertise requirements of the parties employed (Chiapello & Medjad, 2009). Both changes affected the organizational context of the IASB’s standard setting and further empowered its technical staff.
The (technical) staff mainly operates behind the visible scenes of the IASB but paves the way for the board’s decisions, thus taking a prominent position within the internal organization of the IASB. Its activities embrace the whole standard setting process and therefore need to be perceived as another critical element of the IASB’s organization and its outcomes (Geiger,
1993; Georgiou, 2004; Dick & Walton, 2007; Richardson & Eberlein, 2010). Figure 1 summarizes the staff’s responsibilities within the IASB’s due process.
[Insert Figure 1]
In the course of setting the agenda, the staff is “asked to identify, review and raise issues”
(IASCF, 2008, due process handbook, paragraph 22) for consideration to become an official
IASB project. Already at this early and decisive stage (Georgiou, 2004), the staff may influence the progress of certain standard setting projects by suggesting a project idea to the
board or stressing a lack of relevance of a project under consideration. Once a project is added
to the agenda and taken on, the director of technical staff is responsible for selecting the project team and appointing a project manager who then develops the project plan (IASCF, 2008,
due process handbook, paragraph 29). Depending on the actual issue, a discussion paper may
be developed and published. At this stage, the staff may contribute to the discussion paper by conducting its own research and issuing recommendations (IASCF, 2008, due process handbook,
paragraph 33). Moreover, the project team analyzes and summarizes the comment letters received on the discussion paper under consideration (IASCF, 2008, due process handbook,
paragraph 36).
Subsequently and on instruction of the board members, the staff drafts and – after approval –
publishes an exposure draft (IASCF, 2008, due process handbook, paragraph 40). Following
the comment period, the project team analyzes and summarizes the comments received and
prepares them for further deliberations within the IASB (IASCF, 2008, due process handbook,
paragraph 43). Similar to the previous stage, the staff is responsible for drafting the new IFRS
on instruction of the IASB board members (IASCF, 2008, due process handbook, paragraph
51). Once a new IFRS is issued, the staff shares responsibility with the board to regularly
hold meetings with interested parties in order to understand the impact, problems and open
issues related to the new standard (IASCF, 2008, due process handbook, paragraph 52).
Given these significant functions of the IASB’s staff, it is important to point out that most of these
actions indeed take place behind the IASB’s closed doors. The summaries of comment letters
however provide one of the rare opportunities to actually and publicly recognize and analyze
how the staff operates within the standard setting process because these documents are to be
made publicly available. Furthermore, the summary of comment letters has important organizational functions within the due process. It not only summarizes but also synthesizes, considers and prepares the comment letters that were submitted in response to the board’s formal
invitation on a draft paper for further deliberations within the board (IASCF, 2008, due process handbook, paragraphs 15 and 43). The technical staff thus has a mediating role through
transforming the actual content of comment letters into a condensed and consolidated paper.
During this process comment letters need to be interpreted, and findings have to be summarized vis-à-vis the original documents. All these activities constitute a social process that
transforms the original papers, finally creating a summary of comment letters written from the
point of view and through the lenses of the IASB’s technical staff. The very impact of this
summary is evident, as it would be fairly naïve to assume that board members would rather spend their constrained time going through several hundred pages of comment letters (written in very different styles and of varying quality) than rely on a well-prepared and functional summary produced by their own staff.
The technical staff in this respect is a kind of conduit through which the comment letters must
pass in order to be recognized by the board and in public (Botzem and Quack, 2009). Set
against the important tasks of the IASB’s staff and its preparation of summaries of comment
letters, it is vital to know how the staff actually operates and whether, or in what respect, it contributes to fostering openness and transparency, both of which are essential for making the IASB a trustworthy regulatory organization.
2.2 On the Use of Verbal Quantifiers
Taking a closer look at the staff analysis papers, one usually finds that the staff uses verbal
frequency quantifiers (like ‘a few’, ‘many’ or ‘some’) in order to describe the frequency of
statements and issues raised in the comment letters sent to the IASB. A reason for this may be
that “human subjects are more effective at reasoning with verbal expressions than with numerical expressions, even if the tasks performed rely on frequency information” (Zimmer,
1983, p. 180). Another explanation may be that the costs of performing and presenting a numerical analysis are considered too high. Moreover, the staff may be aware of the fact that
there is always room for interpretation of comment letter statements, which usually results in differences in how people read such statements. Hence, the staff may use verbal quantifiers in order to avoid a notion of accuracy in the analysis of comment letters that cannot be achieved anyway. Last but not least, verbal quantifiers may also be used for
political reasons, i.e. allowing the IASB to specifically (de-)emphasize issues raised in comment letters. However, the central problem of using such quantifiers is that they are not precisely defined. One often finds several meanings for one quantifier or only vague descriptions
of the quantities expressed through their usage, which may leave the reader guessing what exactly
is meant when quantifiers are used.
Given the IASB’s need for transparency it is worthwhile to investigate how the analysis of
comment letters prepared by the staff works with respect to the use of verbal quantifiers. The
importance of this question is even more evident if one considers that the IASB used to apply
a rather quantitative mode of comment letter analysis for some projects in the past, either
through providing solely the number and proportion of respondents raising a certain issue
(e.g. the comment letter analysis on the ED Management Commentary, IASB, 2007) or
through using a mixed approach, i.e. showing some numbers and using some verbal quantifiers (e.g. the comment letter analysis on the ED Annual Improvement Process 2008, IASB,
2008c). Both modes of analysis can no longer be observed since around the second quarter of 2008.²
In linguistics, Mosier (1941) was one of the first researchers to examine the precise (and quantifiable) meaning of words in general. He finds that the interpretation of words varies heavily among individuals but seems to have some kind of stability in large groups, i.e. on an
aggregated level. He concludes that words consist of two parts: one constant (that is a kind of
anchor point in a given continuum) and one variable (that is dependent on the individual
speaker and the context of usage). His findings have been empirically validated by Jones and
Thurstone (1955) and were extended to frequency words by Simpson (1944) and Hakel
(1968), with largely the same results. The constant in Mosier’s model was later challenged by Parducci (1968), Chase (1969) and Johnson (1973), who find that the context in which a word is used determines the constant Mosier identified. Hence, people may use the
same quantifier in different ways with different meanings. Borges & Sawyers (1974), using marbles as stimuli, found that the interpretation of quantifiers varied with the
total number of marbles available. Pepper & Prytulak (1974), Newstead & Collins (1987) as
well as Moxey & Sanford (1993) have shown that the meaning attached to quantifiers varies
according to the expected frequency of the event under consideration. The same quantifier
turned out to represent a lower amount for relatively rare events (e.g. earthquakes) than for
relatively frequent events (e.g. breakfast). Newstead & Collins (1987) extend their analysis to
other context effects, like the set size or the number of quantifiers available, but do not find
evidence for an influence of such effects. Despite these context effects, the same person tends
to use quantifiers in a highly consistent manner (Johnson, 1973; Bryant & Norman, 1980; Beyth-Marom, 1982; Budescu & Wallsten, 1985; Rapoport et al., 1987).
2 It seems worth noting that, at least in recent years, the Financial Accounting Standards Board’s (FASB) comment letter analyses also do not provide any numerical quantification of the issues raised by respondents (cf. FASB, 2009; FASB, 2010). However, in contrast to the IASB’s comment letter analyses, the FASB explicitly names some of the respondents along with their respective views.
For the purpose of this study, it is necessary to make certain assumptions about verbal quantifiers. Since this is the quantifiers’ primary function (Paterson et al., 2009, p. 1390), an implicit presumption in applying linguistics to verbal quantifiers is that quantifiers are ex ante supposed to represent the amounts or proportions of issues mentioned in the comment letters, rather than to emphasize any particular issue or comment letter. This presumption is also
backed by transparency’s claim for an exact and unequivocal presentation of information
(Rawlins, 2009). Furthermore, the previously discussed linguistic research indicates that
quantifiers are used consistently as long as the context does not change and only one person applies them. Consequently, the study is limited to a single standard setting project, i.e. it uses only one particular staff analysis of comment letters. This ensures that the context is stable³ and – as there are usually only one or two people involved in the analysis of comment
letters of one particular standard setting project – reduces distorting effects of inter-personal
differences in quantifier application to a minimum.
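These assumptions can be made operational: if one coder in a stable context uses each quantifier consistently, then the comment-letter counts behind every occurrence of a given quantifier should cluster within a narrow band. The following sketch illustrates such a consistency check; the observation data are invented for illustration and do not reproduce the study’s actual coding results:

```python
from collections import defaultdict

def quantifier_ranges(observations):
    """Group (quantifier, count) pairs and report, per quantifier,
    the minimum, maximum and spread of the underlying counts."""
    grouped = defaultdict(list)
    for quantifier, count in observations:
        grouped[quantifier].append(count)
    return {
        q: {"min": min(c), "max": max(c), "spread": max(c) - min(c)}
        for q, c in grouped.items()
    }

# Invented observations: each pair records which quantifier the staff used
# and how many comment letters actually raised the underlying issue.
observations = [
    ("some", 12), ("some", 31), ("some", 55),
    ("a few", 3), ("a few", 5),
    ("many", 40), ("many", 72),
]
ranges = quantifier_ranges(observations)
# A large spread (here 'some' covering 12 to 55 letters) would indicate
# that the quantifier does not map to a stable frequency or proportion.
for q, r in sorted(ranges.items()):
    print(q, r)
```

A wide spread for a quantifier is exactly the kind of inconsistency the subsequent analysis looks for.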
2.3 The Joint Arrangement Project
The IASB’s project on joint venture accounting started in December 2005 as part of the short-term convergence project of the IASB and FASB within their Memorandum of Understanding. The new standard IFRS 11 Joint Arrangements replaced IAS 31 Interests in Joint Ventures and SIC-13 Jointly Controlled Entities – Non-Monetary Contributions by Venturers.
Following discussions at the IASB and workshops with preparers in 2006 and early 2007, the
IASB issued its exposure draft ED 9 Joint Arrangements on 13 September 2007. The changes
3 As the word context may have very different meanings, one could also argue that even a single standard setting project is insufficient to assure stability of context, because exposure drafts contain sections dealing with, e.g., definitions, recognition, measurement and disclosures, which might be perceived as different contexts. Nonetheless, linguistic research indicates that context should rather be interpreted as a broad term with respect to the analysis of quantifiers. Furthermore, a review of the quantifiers used by the IASB’s staff within the staff analysis paper of comment letters does not show any notable deviations in their usage among the different sections of ED 9. Thus, one may consider ED 9 a stable context.
proposed by ED 9 can be classified into three areas: changes in terminology, changes in accounting rules, and changes in disclosures.
Concerning terminology, ED 9 reforms the classification of joint arrangements, with ‘joint arrangement’ hereinafter serving as the umbrella term for joint operations, joint assets and joint ventures. The term “controlled”, which had been used with respect to joint operations and joint assets, is abandoned in order to clarify that “joint control” is necessary for joint ventures only.
Those changes seem to be of minor importance because they do not affect accounting for joint
arrangements. Disclosures also change substantially but only with regard to joint ventures. In
accordance with the exclusive use of the equity method, disclosures shall be harmonized with
the regulations of IAS 28 for associates which are consolidated by using the equity method. In
effect, current disclosures on joint ventures will be extended and will thus require companies to provide more information in the notes to their annual reports. The most far-reaching change of
this project is the elimination of proportionate consolidation for joint ventures. IAS 31 allows
accounting for joint ventures using either proportionate consolidation (IAS 31.30 – 31.37) or
the equity method (IAS 31.38 – 31.41). According to ED 9, joint ventures would have to be
accounted for using the equity method only. The IASB accepted comment letters on this ED
until 11 January 2008. The staff analysis of the comment letters was published on 18 April
2008. Only one year later the IASB continued deliberations on this project and finished it in
May 2011.
The Joint Arrangement project has several features that make it suitable for this research
study. First of all, it is a Memorandum of Understanding project, as were most of the IASB’s projects undertaken between 2007 and 2011. Second, the IASB received a total of
113 comment letters on ED 9. This number is only slightly above the mean of comment letters
per exposure draft (98) as determined by Jorissen et al. (2010, p.11) for the period between
2002 and 2006. Hence, ED 9 provides an adequate basis for analysis, in particular if one takes
into consideration that – due to the growing number of companies reporting under IFRS –
there has been a further increase in the number of comment letters received per exposure draft since 2007. Third, the project deals with group accounting. In its biggest market, the European
Union (EU), IFRS are mandatory for group financial statements only, which adds a special
importance to the Joint Arrangement project for companies within the EU. Moreover, the ED
has been discussed quite controversially, particularly by some (Continental European) constituents who heavily opposed the abolition of proportionate consolidation. This controversy implied an extraordinary challenge for the staff during the process of summarizing
comment letters.
From a theoretical perspective, ED 9 is interesting to investigate because it was conducted by
the IASB only, although it was a joint project with the FASB. This feature offers the possibility to challenge one of the basic assumptions Sutton (1984) makes in his analysis of accounting standard setting activities, namely that the standard setting body has no preferences of its own
with regard to the outcome of its work. Given the case of ED 9, the situation may be different.
Accounting for joint ventures was – when the project began – one of the issues that constituted a significant gap between US GAAP and IFRS, as US GAAP preferred the equity method (allowing some exemptions for specific industries) while IFRS generally allowed entities to use either proportionate consolidation or the equity method. Aiming for global acceptance, the
IASB may be highly interested in converging its standards with US GAAP in order to become
fully accepted by the SEC. Beyond this theoretical reasoning, a look at the rhetoric of the
IASB (Young, 2003) in the context of ED 9 also supports this view:
“The IASB has been developing proposals to improve the accounting for joint ventures, and remove differences between IFRSs and US GAAP. The IASB plans to finalise its new requirements in June 2010, which includes removing the ability to use proportionate consolidation for joint ventures, [...]” (IASB, 2010, emphasis in the original)
The IASB applies elements of direct speech and directly identifies itself within the text, thus
expressing a will of its own. Furthermore, the wording suggests a clear opinion of the IASB: the
repetitive use of the word remove seems characteristic of a definite preference. Investigating
the content of the statement, the IASB says that an improvement of accounting may only be
reached by removing differences between IFRS and US GAAP. This is only feasible if proportionate consolidation is removed. Other options do not seem to be under consideration at all. The
analysis of the summary of comment letters may be able to reveal further indicators pointing
to the IASB’s own interest in the outcome of its standard setting process. One such indicator would be that arguments raised in the comment letters opposing convergence with US
GAAP are de-emphasized.
3 The Use of Verbal Quantifiers in the Context of the Joint Arrangements Project
3.1 Collection and Constitution of Data
According to Maxwell (1996), a meaningful analysis of written documents may only be done
using a qualitative research design. Since the aim of this paper is to examine specifics in the
IASB staff analysis paper on ED 9 – which is a written document – an indication for a qualitative research design is given. As a method for a systematic analysis of texts one may use any
form of content analysis (Dawson, 2002).4 As this study aims to describe a specific characteristic of communication and to investigate the actual usage of verbal quantifiers in a particular context, a comparative content analysis is applied (Holsti, 1969),
matching contents in comment letters with equal contents in the summary of comment letters
prepared by the staff. The analysis should be performed in the context of a frequency analysis
(Berelson & Salter, 1946; Pustet, 2007; Roland et al., 2007) that aims at determining the
numbers that people represent when using verbal quantifiers. The following research framework for frequency analysis is not only applicable to simple cases that rely
on identical words but also to complex analyses that require interpretative elements
(Gottschalk & Gleser, 1969):
1. Determination of the research question;
2. Determination of the research material;
3. Construction and definition of a system of categories;
4. Determination of units of analysis;
5. Coding;
6. Data analysis;
7. Presentation and interpretation of results.

4 It should be noted that content analyses in various forms are frequently applied to accounting-related research questions (Fisher et al., 2010). Moreover, Alan Teixeira, IASB's Director of Technical Activities, explained during a conference speech in September 2011 that the IASB also runs a content analysis of all comment letters received.
The basic research question is how the IASB's staff uses verbal quantifiers to summarize comment letters. To answer this question, the use of verbal quantifiers in the staff analysis paper of comment letters on ED 9 is investigated with the aim of identifying the numerals that underlie each quantifier. Set against the previously discussed findings in psychology
and linguistics, and given the focus on just one standard setting project, one could suppose that the
same quantifier consistently represents similar frequencies or proportions.
Given the research question, the IASB staff analysis paper of comment letters on ED 9 has to
be compared with the original comment letters sent to the IASB. Therefore both documents
form the basis for this study. The original and complete staff analysis paper of comment letters on ED 9 was published on 17 April 2008 as agenda paper 10B (IASB, 2008a) of that
day’s IASB board meeting. The 113 comment letters (CL) received on ED 9 are available on
the IASB’s website (IASB, 2008b). Altogether, the analysis comprises 114 individual documents.
The system of categories used for coding is determined by the staff analysis paper of comment letters on ED 9. Within this paper, the staff uses eleven different verbal quantifiers,
namely ‘some’, ‘a few’, ‘many’, ‘several’, ‘some of these’, ‘one’, ‘most of these’, ‘several of
these’, ‘a few of these’, ‘a majority’, and ‘most common’. These quantifiers are used to describe the frequency with which respondents raised one of the 170 (not necessarily completely
mutually exclusive)5 issues on ED 9 that the staff derived from the comment letters. Those
170 issues were determined before technical coding in the course of a thorough review of the
staff analysis paper. Quantifiers used in relation to the distribution of respondents and to the
structure of the analysis paper itself were excluded from the analysis. These 170 issues
accordingly constitute the categories for further analysis, and the frequency of their appearance in the
comment letters determines the meaning of the quantifiers used in the staff analysis paper. Generally, the 170 categories are defined by the explanations provided in the staff analysis paper,
since each of the issues is described more or less briefly. The following excerpt from the staff
analysis paper shows an issue classified as 'many' that formed one of the 170 categories.

5 Note that four of the issues have not been quantified by the staff, making them unsuitable for analysis.
“Many respondents questioned whether the proposed standard will contribute to the
achievement of convergence with US GAAP. For these respondents proposals will rather create divergence.” (IASB, 2008a, p. 6, No. 10)
As the descriptions of issues vary heavily in detail and precision, they were supplemented
where necessary after a first review of the comment letters in order to provide a second coder,
who is necessary to ensure the reliability of a content analysis, with a better understanding of the
issue raised. Such supplements usually contained additional remarks on specific terminology or context and background information.
In order to capture as much of the meaning as possible, and because the linguistic
style and quality vary heavily among the comment letters, the units of analysis (i.e. those
parts of the comment letters that qualify for assignment to one of the previously defined
170 categories) were determined quite flexibly: a unit of analysis could be anything between a single word and a whole paragraph.
Usually, a frequency analysis relies exclusively on the appearance of predefined words or
groups of words within the materials under analysis, which also allows for the use of software:
occurrences of the predefined words are simply counted and automatically coded to the predefined
categories, as shown by Smith & Taffler (2000), for example. In the course of analyzing the staff
analysis paper of comment letters on ED 9 in conjunction with the affiliated comment letters
such an approach is not feasible. This is due to the fact that the categories and their definitions
are rather complex in terms of their (accounting specific) content. Additionally, the comment
letters use different words and style in order to express their opinions. Hence, the coding (i.e.
the assignment of words, sentences or paragraphs to the aforementioned categories) also has
to take interpretative elements into consideration. According to Denzin (2002) interpretations
of the materials should:
• follow processes of social interaction;
• be conducted from the perspective of the issuer rather than from an outsider's view;
• consider the social background of the materials; and
• take the actual situation that led to the formation of the documents into consideration.
To exemplify this approach, the following two citations from comment letters on ED 9, which
were coded (assigned) to the aforementioned category derived from the staff analysis
paper, are presented.
“We believe that the practical effect of the Board’s proposals is to create divergence
with US GAAP…” (PricewaterhouseCoopers LLP, Comment Letter on ED 9 as of 7
January 2008)
“If our understanding is correct, it follows from this that convergence will not be
achieved in many instances.” (European Financial Reporting Advisory Group, Comment Letter on ED 9 as of 6 February 2008)
In order to ensure the reliability of the coding6, a second independent coder with no training in
accounting was involved in addition to the author. The task for both was to assign the 170 predefined
categories (i.e. issues mentioned and verbally quantified in the staff analysis paper) to the
comment letters. To achieve further consistency, the coding results were discussed by
the coders after individual coding, and their observations of the predefined categories were harmonized. It turned out that in several cases the coders had interpreted
comment letters in different ways. Sometimes a category was assigned to a phrase in a specific comment letter by one coder but not by the other, or units of analysis were constructed with different lengths but coded to the same category. Only in very few cases was the same
phrase within one comment letter classified into completely different categories. Eventually, both coders agreed on joint interpretations of the comment letters where necessary, if need be
asking a third independent party for interpretative judgment. The coders also decided
whether or not to retain deviating coding decisions and, if applicable, into which of the
170 categories the underlying phrases would fall. As a result of these discussions, a final set of 2,490 coding decisions was composed. Despite these measures, an inherent bias always remains in research that takes place in the realm of the social
(Law, 2009).

As numerical terms need to be assigned to the verbal quantifiers, the number of units of analysis
(codes) allocated to each of the 170 predefined categories was calculated. This quantification of the categories was considered useful in two dimensions. First, for each category,
the number of unique comment letters containing the category at least once was determined.

6 There is no single concept or measure of intercoder reliability. For a predefined set of coding decisions (i.e. units of analysis) with a given set of categories, quantifiable measures like those developed by Scott (1955), Cohen (1960) or Krippendorff (1980) may be used. However, in the case of an ex ante undetermined number of coding decisions, as here, a less sophisticated reliability approach is preferable. Such an approach is the coder reliability (C.R.) as defined by Holsti (1969, p. 140). Both coders identified 1,630 identical units of analysis in the comment letters and assigned them to identical categories, whereas coder 1 made 2,536 decisions and coder 2 made 2,411 decisions. This results in a C.R. of 66 %, which may be regarded as a satisfactory value given the complexity of the task (Miles & Huberman, 1994, p. 64).
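The coder reliability figure of 66 % reported in footnote 6 follows Holsti's formula C.R. = 2M / (N1 + N2), where M is the number of matching coding decisions and N1, N2 are the decisions made by each coder. A minimal sketch (the function name is ours):

```python
def holsti_cr(matching: int, decisions_coder1: int, decisions_coder2: int) -> float:
    """Holsti (1969) coder reliability: C.R. = 2M / (N1 + N2)."""
    return 2 * matching / (decisions_coder1 + decisions_coder2)

# Figures reported in footnote 6: 1,630 matching units of analysis,
# 2,536 decisions by coder 1 and 2,411 decisions by coder 2.
cr = holsti_cr(1630, 2536, 2411)
print(f"C.R. = {cr:.0%}")  # C.R. = 66%
```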
The result indicates how many unique comment letters are represented by the verbal quantifier describing each of the 170 categories. Then, as a second dimension,
the total number of coding decisions generated for each category was determined. The reason
is that the use of the verbal quantifiers may depend not only on unique comment letters but
also on the overall frequency with which the issue underlying the respective category was mentioned.
Consequently, further analysis of the verbal quantifiers was based on unique comment letters
as well as total frequency with which the quantified issue was raised over all comment letters.
Subsequently, all calculations were consolidated with respect to each quantifier. This provides
the range of numerical representations for each of the verbal quantifiers in terms of unique
comment letters and total frequency, respectively.
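The two quantification dimensions described above can be sketched as follows; the coding decisions and the category-to-quantifier mapping are toy data, not the paper's:

```python
from collections import defaultdict

# Toy coding decisions as (category, comment_letter_id) pairs; the real data
# set comprises 2,490 such decisions over 170 categories and 113 letters.
decisions = [
    ("convergence", "CL001"), ("convergence", "CL001"), ("convergence", "CL017"),
    ("core_principle", "CL004"), ("core_principle", "CL017"), ("core_principle", "CL093"),
]
# Quantifier the staff attached to each category (hypothetical mapping).
quantifier_of = {"convergence": "many", "core_principle": "several"}

letters = defaultdict(set)   # dimension 1: unique comment letters per category
codes = defaultdict(int)     # dimension 2: total codes per category
for category, letter in decisions:
    letters[category].add(letter)
    codes[category] += 1

# Consolidate per quantifier: the range of numerical representations, as
# (unique letters, total codes) pairs for each category it was applied to.
ranges = defaultdict(list)
for category, quantifier in quantifier_of.items():
    ranges[quantifier].append((len(letters[category]), codes[category]))

print(ranges["many"])  # [(2, 3)]: 2 unique letters, 3 codes
```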
As a frequency analysis of textual elements necessarily encompasses both qualitative and
quantitative elements, further analysis needs to be conducted. Most common in qualitative
research designs is the use of so-called 'quasi-statistics', understood as descriptive
statistics of the qualitative data (Barton & Lazarsfeld, 1955). Since the frequency analysis
provides observations in numerical terms for each quantifier, statistics may be applied. In order to identify those quantifiers for which a statistical analysis is meaningful, the number
of observations was determined for each quantifier. Table 1 shows the quantifiers used with their respective numbers of observations.
[Insert Table 1 about here]
For a statistical analysis only quantifiers with at least ten observations were considered.
Moreover, 'some of these' was excluded because this quantifier always refers to a previously
quantified issue. Hence, an analysis of this quantifier would only have made sense if other
such quantifiers (‘… of these’) had been available for analysis. Consequently, the four quantifiers ‘some’, ‘a few’, ‘many’ and ‘several’ were considered for a statistical analysis. In a first
step, descriptive statistics were calculated (see section 3.2). As the aim of this paper is to analyze the meaning of quantifiers, it is of particular interest whether different quantifiers also
have a different statistical meaning. In linguistics, this is usually evaluated using median or
mean tests (Pepper & Prytulak, 1974). These tests are discussed in section 3.3.
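The selection rule for the statistical analysis can be sketched in a few lines; the observation counts below are hypothetical (table 1 is not reproduced here), and treating every '… of these' quantifier as an excluded back-reference is our simplifying assumption:

```python
# Hypothetical observation counts per quantifier (table 1 is not reproduced).
observations = {
    "some": 60, "a few": 34, "many": 28, "several": 21,
    "some of these": 12, "one": 6, "most of these": 3,
    "several of these": 2, "a few of these": 2, "a majority": 1, "most common": 1,
}

def eligible(quantifier: str, n: int, minimum: int = 10) -> bool:
    """At least `minimum` observations, and no back-references ('... of these')."""
    return n >= minimum and not quantifier.endswith("of these")

selected = [q for q, n in observations.items() if eligible(q, n)]
print(selected)  # ['some', 'a few', 'many', 'several']
```

With these assumed counts, exactly the four quantifiers analyzed in the paper survive the filter.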
3.2 The Numbers Behind the Words
Descriptive statistics were calculated for both dimensions. On the one hand, the frequency of
individual comment letters (CL) assigned to each verbal quantifier was analyzed. On the other
hand, the frequency of underlying codes (i.e. how often the category was mentioned overall)
was considered. For both kinds of analysis, absolute and relative values were composed. Absolute values represent the numbers of comment letters and underlying codes for each observation, i.e. each category. For the calculation of relative values, these numbers were divided
by the total number of CLs (N = 113) and the total number of underlying codes (N = 2,490),
respectively. The results in terms of CLs are presented in table 2 and in the form of a box plot
(absolute figures only) in figure 2.
[Insert Table 2]
[Insert Figure 2]
The analysis reveals a certain hierarchy of the four quantifiers analyzed, at least in terms of
the average number of comment letters representing each quantifier. In terms of mean and
median figures, ‘a few’ seems to be the smallest quantity, in ascending order followed by
‘several’, ‘some’ and ‘many’. Nonetheless, as the minimum and maximum statistics show, the
range of comment letters represented by each quantifier is very wide. In particular for the
quantifiers ‘several’ and ‘some’ one may observe a number of outliers. For the quantifier
‘several’ one may observe one far outlier (depicted as a little star) and one near outlier (depicted as a little circle). The far outlier represents a case where an issue was classified as ‘several’ but raised in 39 individual comment letters and thus even beyond the third quartile of the
quantifier 'many'. The near outlier represents an issue classified as 'several' that was raised
in 17 individual comment letters; with that frequency, the issue could well have been classified
within the hinge of 'many' or the inner fence of 'some'. The
descriptive statistics based on individual comment letters illustrate that there is no consistent
use of the quantifiers in terms of unique comment letters. Moreover, some overlapping may
be observed, especially for the quartiles of the lowest ranked quantifiers ‘a few’, ‘several’ and
‘some’.
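The 'near' and 'far' outlier labels follow the usual box-plot (Tukey) conventions: values more than 1.5 interquartile ranges above the third quartile lie beyond the inner fence, and values more than 3 interquartile ranges above it lie beyond the outer fence. A sketch with hypothetical quartiles, since the underlying table values are not reproduced here:

```python
def classify_outlier(x: float, q1: float, q3: float) -> str:
    """Tukey's box-plot convention: beyond Q3 + 1.5*IQR -> near outlier,
    beyond Q3 + 3*IQR -> far (extreme) outlier."""
    iqr = q3 - q1
    if x > q3 + 3 * iqr:
        return "far outlier"
    if x > q3 + 1.5 * iqr:
        return "near outlier"
    return "within fences"

# Hypothetical quartiles for 'several': IQR = 4, so the inner fence lies
# at 14 and the outer fence at 20.
q1, q3 = 4, 8
print(classify_outlier(17, q1, q3))  # near outlier
print(classify_outlier(39, q1, q3))  # far outlier
```

Under these assumed quartiles, the issues raised in 17 and 39 comment letters would be classified as near and far outliers, respectively, mirroring the pattern described in the text.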
Table 3 and figure 3 provide the descriptive statistics (absolute and relative values) and the
box plot (absolute values only), respectively, for the underlying codes assigned to the quantifiers.
[Insert Table 3]
[Insert Figure 3]
The results obtained using the underlying quotations as a proxy for the quantifiers are generally consistent with those for unique comment letters. The ranking of quantifiers in terms of
mean and median values is confirmed. Moreover, the range of codes representing the same
quantifiers is very wide, and overlapping can be observed. The most extreme outliers again
are observed for ‘several’ and ‘some’. For the quantifier ‘several’ one far and two near outliers occur. The far outlier represents an issue classified as ‘several’ that is mentioned 62 times
and thus lies even beyond the third quartile of the quantifier ‘many’. The near outliers represent cases that are named as ‘several’ but mentioned 23 and 18 times, respectively. Both outliers would well fit into the hinge of ‘many’ or the inner fence of ‘some’. Generally, the observations made using unique comment letters as the numerical basis of the quantifiers are
confirmed when analyzing the quantifiers on the basis of underlying codes, i.e. the total frequency the issue was raised over all comment letters.
The descriptive statistical analysis shows that – on an average basis – there is a certain ranking among the quantifiers used. Nonetheless, it is not possible to derive a definition of the
quantifiers, neither in terms of unique comment letters nor in terms of the number of underlying codes of the issue that was quantified. Moreover, the quantifiers are not mutually exclusive in the numbers and proportions they express, since the same number of individual comment letters or quotations of an issue may be assigned to very different quantifiers. Against
the background of linguistics, which states that the context in which quantifiers are used heavily influences their quantification (e.g. Parducci, 1968; Chase, 1969; Pepper & Prytulak, 1974; Laird
et al., 2008), these findings are at first glance not surprising. However, the context should
not matter that much in the current case, since the organizational, temporal and topic-specific
context can reasonably be perceived as stable for such a short-term project7 as the analysis of
comment letters on ED 9. Furthermore, since only one or two staff members were involved
in the analysis, the wide range of comment letters and underlying codes referred to by the quantifiers is rather unusual, especially considering that the staff involved was supposed to work on this project as a team.

7 It took the staff only three months from the end of the comment period to the publication of their staff analysis paper.
3.3 Differences in the Numerical Meaning of Words
The descriptive statistics indicate that the use of quantifiers is not unambiguous. Further analysis using mean and median tests shall reveal if the quantifiers differ from one another based
on statistics. For testing differences in empirical samples, one usually compares means of different variables using either an analysis of variance (ANOVA) or t-statistics. Both tests require the samples to be normally distributed. Table 2 also includes the probability figures of
the Bera & Jarque (1980) statistic, which tests for the normal distribution of a sample. As the null hypothesis of this test is that the sample is normally distributed, only 'many'
may be assumed to be normally distributed at the five per cent level. Non-parametric tests, which compare medians rather than means, therefore need to be used. For the analysis
of more than two independent samples the test statistics proposed by Kruskal & Wallis (1952)
and van der Waerden (1952) may be applied. For the comparison of only two independent
samples the test statistics proposed by Wilcoxon (1945) (rank sum test for unpaired experiments) and Mann & Whitney (1947) (Mann-Whitney-U test) are relevant.
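For intuition, the Mann-Whitney-U statistic mentioned above can be computed directly from two samples. The following pure-Python sketch uses illustrative data, not the paper's (in practice one would rely on a statistics package):

```python
def mann_whitney_u(x, y):
    """Mann & Whitney (1947) U statistic for two independent samples:
    count, for each value in x, how many values in y it exceeds
    (ties count one half). U ranges from 0 to len(x) * len(y)."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Illustrative observations standing in for the samples behind two
# quantifiers (the paper's own data are in tables 2 and 3).
a_few = [2, 3, 4, 4, 5]
many = [20, 25, 30, 35, 40]
print(mann_whitney_u(a_few, many))  # 0.0: complete separation of the samples
```

A U value near 0 or near len(x) * len(y) indicates strongly separated samples; the significance of the observed U is then read from the test's distribution.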
At first, it is tested if the four quantifiers generally differ in their meaning, i.e. if their median
differs significantly from each other regarding comment letters and underlying codes. Table 4
presents the results.
[Insert Table 4]
The test statistics indicate that the null hypothesis (the medians represent an equal number of
comment letters or codes, respectively) has to be rejected. The results are statistically significant at the one per cent level. One may conclude that the four quantifiers stand for different
median numbers in terms of comment letters and underlying codes, respectively, and therefore at least do not seem to be used at random.
In a next step, the four quantifiers are compared pairwise in order to provide a more detailed
analysis. Table 5 presents results using the number of comment letters; table 6 shows the results when using the number of underlying codes as the basis for analysis.
[Insert Table 5]
Considering the use of unique comment letters, one finds that most pairs of quantifiers are
significantly different from each other at the one per cent level of significance. Only the pair
'some' – 'several' is significant not at the one per cent but only at the five per cent level.
[Insert Table 6]
Taking the number of underlying codes as the representative measure for the quantifiers, one
may conclude that all pairs of quantifiers are significantly different from each other at the five
per cent level of significance, except for one: ‘several’ and ‘a few’. As in the case of unique
comment letters, taking a significance level of one per cent, one could not reject
the null hypothesis for the pair 'some' – 'several'.
The two dimensions for quantifying the verbal quantifiers may also be used interchangeably
by the staff. This is why in a next step a median test is performed for each of the four quantifiers, comparing differences in the medians between the absolute number of unique comment
letters and the absolute number of total codes. In contrast to the aforementioned comparisons,
the samples are dependent, because now different characteristics of the same category are compared: for the same category (i.e. the same quantifier), two paired samples (comment letters
and total codes) are investigated. Consequently, the test statistic has to be changed. The test
proposed by Wilcoxon (1945) for paired samples (sign rank test) is applicable. Table 7 presents the results.
[Insert Table 7]
The null hypothesis of this test is that the medians are equal, which needs to be rejected at the
five per cent level of significance for all quantifiers analyzed. The statistics indicate that the
number of underlying quotations does significantly differ from the number of comment letters
assigned to the respective quantifier. This means the quantifiers are not used interchangeably with
respect to their reference to underlying comment letters or underlying codes, respectively.
Nonetheless, further interpretation of these results from applying the Wilcoxon (1945) sign
rank test is deemed problematic. First, the results are counter-intuitive: the data contain many
pairs without any deviation, which usually indicates that the null hypothesis may not be rejected, yet in the course of the test these pairs are eliminated. Second, the remaining pairs show deviations that are either all positive or all negative, which causes one of the relevant rank sums always to be zero. This heavily inflates
the test statistic and, ceteris paribus, leads to a rejection of the null hypothesis. Therefore, there is no further interpretation
of this test. Nonetheless, the basic idea of the test is valid: checking whether the number of comment letters underlying one quantifier differs from the number of total codes underlying the
same quantifier is relevant, because it indicates whether the overall frequency with which an issue has been raised, or only the number of unique comment
letters raising it, matters for the analysis by the IASB's staff. This question offers opportunities for future research projects that could explore whether it matters how often
a certain issue is raised in comment letters in order to have a convincing effect.
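The degenerate behaviour of the sign rank test described above can be illustrated with a short sketch (names and data are ours; ties in absolute differences are ignored for simplicity):

```python
def signed_rank_sums(pairs):
    """Wilcoxon (1945) sign rank test, first steps: discard zero differences,
    rank the remaining absolute differences, then sum the ranks of the
    positive and the negative differences separately."""
    diffs = [a - b for a, b in pairs if a != b]  # zero-deviation pairs eliminated
    ranked = sorted(diffs, key=abs)
    w_pos = sum(rank for rank, d in enumerate(ranked, 1) if d > 0)
    w_neg = sum(rank for rank, d in enumerate(ranked, 1) if d < 0)
    return w_pos, w_neg

# Total codes vs. unique comment letters per category: most pairs are
# identical, and the remaining deviations all point in one direction,
# since each category's code count is at least its letter count.
pairs = [(5, 5), (7, 7), (9, 9), (12, 10), (20, 14), (31, 22)]
print(signed_rank_sums(pairs))  # (6, 0): one rank sum is always zero
```

One rank sum being forced to zero is exactly what inflates the test statistic and drives the rejection of the null hypothesis in this setting.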
4 The Dynamics of Using Verbal Quantifiers in Comment Letter Analysis
4.1 A Look Behind Ambiguity of Quantifier Usage
The statistical analysis showed that there appears to be a ranking among the four quantifiers
analyzed, at least in terms of mean and median figures. According to this ranking, ‘a few’ is
the smallest number followed (in ascending order) by ‘several’, ‘some’ and ‘many’. This
holds for the number of individual comment letters as well as for total codes represented.
Moreover, the analyzed quantifiers significantly differ from each other at a level of significance of five per cent, with one exception: ‘several’ and ‘a few’. Nonetheless, the analysis
indicates that the quantifiers (on average) seem to have a different meaning. However, it remains unclear what exactly the quantifiers mean, both in absolute and relative terms, no matter whether unique comment letters or total codes are taken as a numerical basis. This result is
due to the wide range of numbers the quantifiers represent and the observed overlapping.
Considering the statistical results from a definitional perspective, one may also question the
use of quantifiers in general. In the English language, the quantifiers used by the IASB's staff
have the following meanings:
• 'a few' is 'a small number of';
• 'several' is 'more than two but not many';
• 'some' is 'an unspecified amount or number of';
• 'many' is 'a large number of'.
Especially for 'a few' and 'many', which are defined via 'small' and 'large', the findings of linguistics may well apply,
namely that their quantification is highly dependent on several factors (Borges & Sawyers,
1974; Newstead & Collins, 1987; Moxey & Sanford, 1993). Concerning 'several', the statistics applied in this paper reveal that the minimum number underlying 'several' is indeed three.
However, 'several' and 'many' overlap, which – by definition – should not be the case. Hence,
'several' is not used as defined. The use of 'some' indicates that the user of this word is probably unsure about the underlying number he or she wishes to express. And indeed,
‘some’ is used for a wide range of underlying numbers that does not allow for an inference on
the actual frequency or proportion represented by this quantifier. This definitional ambiguity
of verbal quantifiers may explain a good part of the empirical findings presented. Additionally, it questions the general suitability of an unconditional use of verbal quantifiers for the
communication of summarized information that is as substantial and important as the content
of comment letters within the IASB’s due process.
The implications of the empirical findings in general are that an external reader of the staff
analysis paper of comment letters may only infer a (vague) relative meaning of the quantifiers but not any (more or less precise) quantities or proportions they represent.
The reasons for this variety of quantifier meanings may be manifold. It could be that the
staff uses quantifiers in the analysis paper just like everyone does in daily life, namely on a
‘rule of thumb and feeling’ basis, without thinking about which numbers shall really be expressed. A different approach would be to think of the quantifiers as mediating elements of
the process of summarizing comment letters for the IASB’s members. Thinking this way,
there might be an implicit ‘codex’, an unwritten element in the work relation between the staff
and the board that perhaps both of them are not aware of. This ‘codex’ may unconsciously
guide staff’s analysis eventually resulting in a use of quantifiers that seems to be biased from
an outsider’s perspective. Alternatively, the staff may be instructed to conduct the analysis in
a (de-)emphasizing manner by the IASB board members who may be interested in directing
the staff analysis in a predetermined way.
Moreover, personal preferences of staff members concerning certain issues or commentators
may play a role and attach a specific weighting to certain statements in the comment letters.
Comments of respondents perceived as more important or competent than others might be
emphasized. Staff members may, for example, anticipate that certain comment letters provide more detailed
and profound feedback, which presumably signals higher competence of the respondent and
could ultimately lead to a higher weighting of that respondent's comments. Letters
from issuers well known to the respective staff member, whether from previous work for
them, earlier comment letter submissions or other personal contacts, might also be (de-)emphasized. Additionally, staff members' own experiences from analyzing comment letters in other projects
might be influential. In this context, specific characteristics of comment letters in terms of
length, fonts, letter size, or other format properties may influence the choice of a quantifier. It might also be the case that the staff is influenced externally by certain parties, which
could result in a biased presentation of issues in the analysis paper. Finally, the quantifiers may
be intentionally underweighted by the staff in order to promote positions on issues that correspond to the personal preferences of the staff member conducting the analysis, and to diminish others that do not.
Furthermore, one also cannot rule out that the observations made in this paper result from
systematic mistakes in the analysis of comment letters by the IASB’s staff. In the end, IASB’s
staff members are only human, and therefore subject to mistakes. The need for interpretation
and judgment in analyzing comment letters that are very heterogeneous in terms of quality,
quantity8 and style adds another very subjective component to summarizing and verbal quantification. This interpretative element also limits the statistics presented in this paper as the
two coders may have interpreted issues raised by respondents substantially differently from the
way the staff did. At least, one should exclude physical inabilities of IASB's staff as an explanation for the high variance in the use of verbal quantifiers, which Clark & Kar (2011) found to be a
reason for quantifier usage ambiguity.

8 The 113 comment letters on ED 9 vary in length between 43 and 10,333 words.
The diversity of possible explanations for the empirical findings indicates that there is still a
lot of potential for further research work on the IASB’s staff, underlying processes but also
personalities. Apart from this, and whatever the reasons for the observed variety of numerical
meanings of quantifiers in IASB staff analysis papers may be, quantifiers in a document as important as
the summary of comment letters should be made explicit in order to ensure and enforce
transparency, which in the end helps to build trust. The importance of the staff analysis
of comment letters may be derived from the fact that the IASB makes its final decisions based
on this analysis, and that the analysis is a valuable source of information for people interested
in the IASB project under consideration but not willing to go through hundreds of pages of
comment letters. It therefore seems to be essential that a common understanding of quantifiers
is assured. Against the background of the definitional problems that are inherent in quantifiers
and the linguistic discretion that underlies their quantification, the first best solution for an
alternative mode of summarized presentation would be to use numbers instead of verbal quantifiers, as done by the IASB in the past (IASB, 2007; IASB, 2008c). Alternatively, the IASB
could also provide more detailed and structured analyses naming the respective issuer of each
comment. An example of such an approach is given by the Committee of European Insurance and Occupational Pensions Supervisors (CEIOPS, 2009a and 2009b), nowadays transformed into the European Insurance and Occupational Pensions Authority (EIOPA). If this is
– for whatever reasons – not realizable within the IASB, at least a definition (either in absolute terms or proportions) of the quantifiers published by the IASB seems necessary.
4.2 (De-)Emphasizing Issues
The descriptive statistical analysis has shown that outliers are evident, particularly for the
quantifiers 'several' and 'some', on which the analysis focuses. For both of these quantifiers, a
total of eight (in terms of unique comment letters) or ten (in terms of total codes) outliers were
observed. In all cases the outliers represent numbers that are much higher than the remaining
numerical representations for these quantifiers but in all cases they are well within the numerical range identified for ‘many’. It seems that all of the issues underlying the outliers observed
for 'several' and 'some' were underweighted when comparing the quantifiers used with the actual
number of times the issues were raised. While this analysis cannot reveal the actual reason for
these observations, the following outlier analysis offers some interesting insights.
The quantifier ‘several’ includes a total of three (underweighted) outliers (either in terms of
individual comment letters or total codes). The following issues were affected:
 By excluding the equity method from any assessment, the project became a short-term convergence project. Commentators perceived this as an indicator that conducting the project on a short-term basis is inappropriate and might lead to premature conclusions (39 individual commentators raised the issue 62 times).
 The objective of enhancing financial reporting (as stated in ED 9.IN 1) may not be reached, since the proposals force preparers to set up two different sets of consolidated statements: one according to proportionate consolidation used for internal reporting purposes and another one according to the new IFRS for purposes of external reporting (17 individual commentators raised the issue 23 times).
 Respondents expressed the fear that the core principle of the proposed standard would be difficult to apply in practice (13 individual commentators raised the issue 18 times).
The first two issues criticize the core of ED 9, namely the elimination of proportionate consolidation. The board proposes the exclusive application of the equity method to joint venture consolidation, which indicates a strong preference of the board not to discuss this issue further:
Actually, an analysis of the proceedings of ED 9 reveals that the staff clearly recommends “to
perform further analysis of the equity method and proportionate consolidation, including
which method better meets the definitions of elements and qualitative characteristics of financial information in the Framework” (IASB, 2008d, p.11). However, the IASB’s Meeting Staff
Papers of the remaining board meetings during which the project Joint Arrangements was
discussed9 do not indicate that a fundamental analysis of both accounting treatments for joint
ventures had taken place. The documents published in June 2009 try to answer some of the
issues raised on the elimination of proportionate consolidation, but mainly aim at a clarification of wording in the new standard. The document published in March 2010 only deals with
transitional and disclosure issues of proportionate consolidation. These observations somewhat indicate a preference of the IASB for the equity method over proportionate consolidation for joint ventures, which might be driven by the IASB’s intention to converge IFRS with US GAAP, under which – despite some exceptions – the equity method has to be applied for joint venture consolidation. Furthermore, this strong internal preference may also explain the underweighting of critical comments (if one assumes an intentional underweighting of issues), although it would, in consequence, not contribute to the intended openness of the IASB’s due process. Moreover, the indication of a preference of the IASB for one accounting treatment over another seems to raise some doubt about Sutton’s (1984) presumption of a standard setting body having no preferences of its own.
9 Those meetings were held in May, June and December 2009; February, March, May and June 2010; and February 2011.
The third issue identified as an outlier is closely connected to the ongoing change in IFRS accounting principles, which ED 9 appears to anticipate. Up to now, assets and liabilities are supposed to be recognized on the balance sheet, while the core principle of ED 9 proposes to recognize rights and obligations. Against the background of this change, the underweighting may be explained as follows: the staff considers this issue to be of lower importance since the conceptual framework was expected to be changed towards rights-and-obligations accounting by the time the new standard on Joint Arrangements would be published. Hence, practical application guidance would already be provided through other publications of the IASB. It is interesting to note that within the summary of comment letter analysis (IASB, 2008d)10, the three issues mentioned above are quantified as ‘many’ – contrary to the original document analyzed in this study. Such an inconsistency is unusual and suggests that neither a mistake nor an unintentional weighting explains the observed ambiguity in quantifier usage. Instead, it could be that the staff prepared both the analysis of comment letters and the respective summary and modified its analysis paper afterwards without changing the summary, perhaps because the Board does not pay attention to the summary of comment letter analysis.
The quantifier ‘some’ includes seven outliers in total (i.e. either in terms of individual comment letters or total codes). These outliers were observed as follows:
 It is premature to justify the elimination of proportionate consolidation with respect to
the Conceptual Framework. This is because the framework (particularly the asset and
liability definition) is currently under review in phase B of the Conceptual Framework
10 It should be noted that an unambiguous assignment of categories from the staff analysis of comment letters to statements in the summary of comment letter analysis is not always possible, either because issues have been aggregated or disaggregated in the summary, or because an issue is not mentioned there at all.
project. In addition to this, the arguments provided in the Basis for Conclusion are too
vague (26 individual commentators raised the issue 37 times).
 The elimination of proportionate consolidation would lead to a divergence between internal and external reporting, since many companies use proportionate consolidation for internal (management) accounting. Banks in particular use proportionate consolidation for risk management and bank supervisory reporting (22 individual commentators raised the issue 29 times).
 Respondents expressed the fear that key performance indicators would no longer be comparable between companies with and without joint ventures if proportionate consolidation was abolished. This, in conjunction with a possible change in financial communication strategies, would lead to a decrease in the relevance and understandability of financial statements (24 individual commentators raised the issue 38 times).
 The objective to enhance financial reporting (ED 9.IN 1) cannot be achieved because
the proposals made in ED 9 will not adequately reflect the substance and economic reality of the company’s performance and financial position (33 individual commentators raised the issue 44 times).
 The correct classification of joint arrangements into one of the proposed categories is
perceived to be a matter of individual interpretation rather than of economic substance.
Therefore, it is necessary to provide more guidance on the classification of joint arrangements (23 individual commentators raised the issue 25 times).
 The disclosure of a list and description of significant subsidiaries and associates is not appropriate on cost-benefit grounds. There is only very little evidence that users need such information, while providing it would be extremely difficult, especially for large companies (16 individual commentators raised the issue 25 times).
 Respondents perceived the usefulness of separately disclosing current and non-current assets and liabilities as doubtful (30 individual commentators raised the issue 35 times).
The first three issues deal with criticism of, or concerns about, the elimination of proportionate consolidation – as did the first two outlier issues of the quantifier ‘several’. These outliers might again be explained by the importance of this issue for the IASB, also with respect to US GAAP convergence, and they seem to support the inferences drawn from the outlier analysis for ‘several’, namely that the IASB shows some kind of preferences of its own, and that the ambiguity in quantifier usage does not seem to be due to mistakes. The fourth issue questions the achievement of one of the objectives of ED 9 by doubting the correct presentation of economic substance in the financial figures. This issue may also be linked to the elimination of proportionate consolidation and be explained in a similar way.
The fifth issue deals with the classification of joint arrangements, where commentators feel a need for more guidance. Setting aside mistakes as an explanation, a reason for this underweighting may be that the staff fears the considerable additional work that implementing new guidance would entail. Personal motivation may thus play a role. The last two issues concern regulations for disclosures. Additional disclosures are usually perceived to increase the usefulness of financial statements for users. Moreover, the few comment letters coming from users place quite a strong emphasis on disclosure issues. As the IASB’s conceptual framework expresses the aim of providing information useful to users (particularly investors) of financial information, the needs of users might be overweighted, also by the staff. In consequence, this could explain the observation that critical comments on additional disclosures have been underweighted, and supports the notion that the staff places a certain emphasis on specific commentators or groups thereof.
Within the summary of comment letter analysis (IASB, 2008d), quantifiers can be observed for five of the issues discussed here. Two of these are also quantified using ‘some’; the remaining three are classified as ‘many’. Although it is again not possible to explain this difference, its effect is notable: taken together with the observations made for the outliers discussed for the quantifier ‘several’, the summary of comment letter analysis seems to include fewer outliers (i.e. less underweighting) than the detailed staff analysis of comment letters. Again, this seems to favor explanations for the bias in quantifier usage by the IASB’s staff that are not related to any kind of mistake.
4.3 Additional Observations
The quantifier ‘one’ is used nine times within the staff analysis paper on ED 9. Each time this quantifier is used, it indeed refers to only one comment letter and a total of one underlying coding. Out of those nine references to one specific comment letter, the comment letters by the European Financial Reporting Advisory Group (EFRAG) and Ernst & Young are referred to twice. With one reference each, the comment letters of Suez, PricewaterhouseCoopers, the Confederation of British Industry, Sherritt International Group and the European Public Real Estate Association (EPRA) are mentioned. Preparers (Suez, Confederation of British Industry, Sherritt International Group, EPRA) are mentioned four times, audit and professional firms (Ernst & Young, PricewaterhouseCoopers) are referred to three times, and standard setters (EFRAG) twice. Based on the distribution of total comment letters, this represents the three largest groups of commentators in an adequate proportion. The quantifier ‘a majority’ indeed refers to the majority of comment letters, representing 60 out of 113 comment letters (ca. 53 per cent), but does not stand for more than 50 per cent of the underlying codes. Both observations provide some indication that the quantifiers are used to represent comment letters rather than the number of times an issue has been raised. Concerning the remaining quantifiers, there seems to be no need for further analysis. In this context it may also be mentioned that the IASB published a total of 113 comment letters on its website, whereas the summary of the staff analysis paper (IASB, 2008d) mentions only 111 letters.
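The letters-versus-codes distinction can be made concrete with a small sketch. The letter figures (60 of 113) are taken from the staff paper; the code totals are hypothetical, since the underlying coding counts are not published:

```python
# 'A majority' is accurate for comment letters but not for underlying codes.
# Letter figures are from the staff paper (60 of 113); the code totals below
# are hypothetical, as the underlying coding counts are not published.
letters_raising_issue, total_letters = 60, 113
codes_raising_issue, total_codes = 70, 160   # hypothetical

share_letters = letters_raising_issue / total_letters
share_codes = codes_raising_issue / total_codes

print(f"share of letters: {share_letters:.1%}")  # 53.1%
print(f"share of codes:   {share_codes:.1%}")    # 43.8%

# The quantifier tracks letters (> 50%), not codes (< 50%).
assert share_letters > 0.5 and share_codes < 0.5
```

Under any such constellation, ‘a majority’ is literally true for letters while misleading about codes, which is exactly the asymmetry noted above.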
In some rare cases, the staff analysis paper specifies the distribution by industry and geographical origin of the commentators that raised a certain issue in such detail that an in-depth analysis can precisely identify the underlying comment letter(s) and author(s). One of these cases is quite interesting. The staff analysis paper states on a certain issue: ‘The respondents expressing this view were professional bodies from South Africa and Asia-Pacific and a preparer from the telecoms industry based in Europe.’ Indeed, the South African Institute of Chartered Accountants (professional body from South Africa), the Hong Kong Institute of Certified Public Accountants (professional body from Asia-Pacific) and Deutsche Telekom (preparer from the telecoms industry based in Europe) commented on the issue under consideration. Besides them, Sasol (a preparer from the basic resources industry based in South Africa) also raised this issue, using the same wording as the South African Institute of Chartered Accountants. Nonetheless, Sasol was not mentioned in the staff analysis paper at all. Perhaps the identical wording led to the exclusion of the preparer from the analysis. That would support the idea that the staff uses some judgment in assessing and quantifying the comment letters and issues. The question remains open why the preparer, and not the professional body, was eliminated from enumeration. The comment letters sent to the IASB do not allow for an inference as to who copied from whom, although – as preparers are the auditors’ clients (McKee et al., 1991; van Lent, 1997) – it seems plausible to conclude that the South African Institute of Chartered Accountants included the specific issue on behalf of Sasol. However, the staff should not eliminate any comment or commentator from the analysis without further explanation, as this neither fosters transparency nor contributes to building trust in the work of the IASB. It may, of course, also have been the case that Sasol’s comment was eliminated (or ‘forgotten’) due to a mistake made during the analysis. Either way, the finding leaves some room for improvement in the organization of the IASB’s standard setting process.
5 Conclusion
The primary aim of this paper was to contribute to organizational research on the staff of the IASB as a transnational regulatory institution. Using the staff-prepared analysis of the comment letters sent to the IASB on ED 9 Joint Arrangements, it was examined how verbal quantifiers were used within this analysis paper. Applying interpretative content and statistical analyses, it was found that one cannot precisely conclude which numerical meaning the staff attaches to verbal quantifiers, indicating a certain bias within the analysis performed. The range and standard deviations of the numbers underlying those quantifiers are quite high (in terms of unique comment letters and total codes represented, respectively). Taken together with areas where the numbers underlying the quantifiers partly overlap, it is not possible to conclude on a consistent meaning of the verbal quantifiers used. However, on average there appears to be a certain ranking among the quantifiers that were analyzed; in ascending order of the numbers they represent, the ranking is: ‘a few’, ‘several’, ‘some’, and ‘many’. Although there may be various explanations for these findings, they are critical to the work of the IASB and its staff, particularly in the course of the due process for the formation of a new regulation, namely a new IFRS. No reader of the staff analysis paper can infer the actual number of comment letters raising an issue or how often an issue was raised in total.
Among the reasons that may explain the ambiguity found in the use of quantifiers, further analysis indicates that pure mistakes can seemingly be excluded. As the analysis showed, quantifiers that underweighted certain comments to a large extent criticized core parts of the IASB’s original proposal. Viewed in the context of the remaining project progress, these findings seem to challenge Sutton (1984) and may be interpreted as an indicator that the IASB has preferences of its own which it tries to push through. Additionally, it was found that most of these
underweightings were not observable in the summary of comment letter analysis, leading to additional inconsistency within the IASB’s documents. Irrespective of the reasons behind these observations, the ambiguous use of verbal quantifiers and the inconsistency among documents that are supposed to have the same content point to organizational issues within the IASB. Transparency is thus not fostered, although it is crucial for achieving the level of trust a privately organized standard setter such as the IASB requires. Furthermore, these results indicate that the IASB tries to shape the public perception of issues raised in the due process in its own favor. Considering the tremendous impact the IFRS have on the economies in which they are applied, unanticipated allocation effects may arise.
Set against this background, the IASB could make its analysis of comment letters explicit. As a first best (i.e. most transparent) solution, this could be done by stating the actual numbers of comment letters or the names of commentators raising an issue, or by indicating the number of times an issue has been raised, instead of using vague verbal quantifiers to describe issues raised in comment letters. The IASB has shown in past comment letter analyses that it is able to implement such an approach, and other standard setting institutions (such as CEIOPS) follow a similar mode of analysis. A second best solution would be to define precisely what is meant by the quantifiers used, e.g. by specifying mutually exclusive ranges of numbers or proportions that each quantifier represents. Both would enhance the quality of staff analysis papers and make them more useful to readers. Moreover, a statement on how the IASB staff deals with identical comment letters could be helpful. Even if one holds that these suggestions are not realizable because the staff analysis of comment letters is supposed to include some weighting or personal reflection, a note that the quantifiers used in the staff analysis do not represent any fixed or predefined numbers or proportions would help readers assess the meaning and implications of these documents. Such measures would not only help the IASB to become more transparent; they would also allow external parties to make sense of the staff analysis of comment letters and, from an organizational point of view, provide guidance to staff members.
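The second best solution could take the form of a published lookup table. The sketch below is purely illustrative: the ranges are hypothetical, not IASB definitions.

```python
# Illustrative, mutually exclusive ranges a standard setter could publish
# for its verbal quantifiers; the bounds are hypothetical, not the IASB's.
QUANTIFIER_RANGES = {
    "a few":   (2, 5),
    "several": (6, 12),
    "some":    (13, 25),
    "many":    (26, None),   # open-ended upper bound
}

def quantify(n_letters):
    """Map a number of comment letters to its published verbal quantifier."""
    if n_letters == 1:
        return "one"
    for word, (lo, hi) in QUANTIFIER_RANGES.items():
        if n_letters >= lo and (hi is None or n_letters <= hi):
            return word
    raise ValueError(f"no quantifier defined for {n_letters}")

print(quantify(8))   # several
print(quantify(39))  # many
```

Under such a published mapping, a reader could invert every quantifier back to a numeric range, which is exactly the transparency both solutions aim at.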
The analysis done in this paper also offers possibilities for further research. There is a general lack of research explicitly dealing with the IASB’s staff. In the context of technical standard setting, and following the analysis done in this paper, one could examine in depth why certain issues are (de-)emphasized within the staff analysis paper, e.g. by conducting interviews with the staff or by analyzing supplementary materials. Moreover, further research into what became of issues that were raised in comment letters but not mentioned by the staff or in the subsequent IASB deliberations seems promising. As this analysis was not able to clearly establish which reasons underlie the ambiguous use of verbal quantifiers, further research might take on this question to enhance the understanding of the work taking place behind the scenes of transnational regulatory bodies.
References
Arnold, P.J. (2005). Disciplining domestic regulation: the World Trade Organization and the
market for professional services. Accounting, Organizations and Society, 30(4), 299-330.
Arnold, P.J. (2009). Global financial crisis: The challenge to accounting research. Accounting, Organizations and Society, 34(6/7), 803-809.
Barton, A. & Lazarsfeld, P. F. (1955). Some functions of qualitative data analysis in sociological research. In Frankfurter Beiträge zur Soziologie (pp. 321-361). Frankfurt: Europäische Verlagsanstalt.
Bera, A.K. & Jarque, C.M. (1980). Efficient tests for normality, homoscedasticity and serial
independence of regression residuals. Economics Letters, 6(3), 255-259.
Berelson, B. & Salter, P.J. (1946). Majority and Minority Americans: An Analysis of Magazine Fiction. Public Opinion Quarterly, 10(2), 168-190.
Beyth-Marom, R. (1982). How Probable is Probable: A Numerical Translation of Verbal
Probability Expressions. Journal of Forecasting, 1, 257-269.
Borges, M.A. & Sawyers, B.K. (1974). Common Verbal Quantifiers: Usage and Interpretation. Journal of Experimental Psychology, 102(2), 335-338.
Botzem, S. & Quack, S. (2009). (No) Limits to Anglo-American accounting? Reconstructing
the history of the International Accounting Standards Committee: A review article. Accounting, Organizations and Society, 34(8), 988-998.
Bryant, G.D. & Norman, G.R. (1980). Expressions of Probability: Words and Numbers. New England Journal of Medicine, 302(7), 411.
Budescu, D.V. & Wallsten, T.S. (1985). Consistency in Interpretation of Probability Phrases.
Organizational Behavior and Human Decision Processes, 36, 391-405.
Camfferman, K. & Zeff, S.A. (2007). Financial Reporting and Global Capital Markets. Oxford: Oxford University Press.
CEIOPS (2009a). CEIOPS-SEC-94/09: Summary of comments on CEIOPS-CP-30/09 Consultation Paper on the Draft L2 Advice on TP - Treatment of Future Premiums. <https://eiopa.europa.eu/fileadmin/tx_dam/files/consultations/consultationpapers/CP30/CEIOPS-SEC-94-09-Comments-and-Resolutions-Template-on-CEIOPS-CP-30-09.pdf> Accessed 05 October 2011.
CEIOPS (2009b). CEIOPS-SEC-92/09: Summary of comments on CEIOPS-CP-28/09 Consultation Paper on the Draft L2 Advice on SCR Standard Formula - Counterparty default risk. <https://eiopa.europa.eu/fileadmin/tx_dam/files/consultations/consultationpapers/CP28/CEIOPS-SEC-92-09-Comments-and-Resolutions-Template-on-CEIOPS-CP-28-09.pdf> Accessed 05 October 2011.
Chase, C.I. (1969). Often is where you find it. American Psychologist, 24(11), 1043.
Chiapello, E. & Medjad, K. (2009). An unprecedented privatisation of mandatory standard-setting: The case of European accounting policy. Critical Perspectives on Accounting, 20(4), 448-468.
Clark, D.G. & Kar, J. (2011). Bias of quantifier scope interpretation is attenuated in normal
aging and semantic dementia. Journal of Neurolinguistics, 24, 401-419.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37-46.
Collins, H. & Evans, R. (2007). Rethinking Expertise. Chicago and London: The University
of Chicago Press.
Cooper, D.J. & Robson, K. (2006). Accounting, professions and regulation: Locating the
sites of professionalization. Accounting, Organizations and Society, 31(4/5), 415-444.
Dawson, C. (2002). Practical Research Methods. Oxford: How to Books.
de Woot, P. (1969). Pour une doctrine de l’entreprise. Paris: Editions de Seuil.
Denzin, N.K. (2002). Interpretative Interactionism (2nd ed.). Applied Social Research Method, 16, London: Sage.
Dick, W. & Walton, P. (2007). The IASB Agenda – A Moving Target, Australian Accounting Review, 17(2), 8-17.
Djelic, M.-L. & Sahlin, K. (2009). Governance and Its Transnational Dynamics: Towards a Reordering of our World? In C.S. Chapman, D.J. Cooper & P.B. Miller (Eds.), Accounting, Organizations and Institutions (pp. 175-204). Oxford: Oxford University Press.
Dyckman, T.R. (1988). Credibility and the Formulation of Accounting Standards under the
Financial Accounting Standards Board. Journal of Accounting Literature, 7, 1-30.
FASB (2009). Disclosures of Certain Loss Contingencies – Final Comment Letter Summary: Memorandum No. 13 as of August 6, 2009. <http://www.fasb.org/cs/BlobServer?blobcol=urldata&blobtable=MungoBlobs&blobkey=id&blobwhere=1175819438449&blobheader=application%2Fpdf> Accessed 14 October 2011.
FASB (2010). Statement 167 Implementation – Comment Letter Summary: Memorandum No. 3 as of January 27, 2010. <http://www.fasb.org/cs/BlobServer?blobcol=urldata&blobtable=MungoBlobs&blobkey=id&blobwhere=1175820258420&blobheader=application%2Fpdf> Accessed 14 October 2011.
Fisher, I.E., Garnsey, M.R., Goel, S. & Tam, K. (2010). The Role of Text Analytics and Information Retrieval in the Accounting Domain. Journal of Emerging Technologies in Accounting, 7, 1-24.
Geiger, M.A. (1993). Setting the Standard for the New Auditor’s Report: An Analysis of
Attempts to Influence the Auditing Standards Board. Studies in Financial and Managerial
Accounting, 1, Stamford: JAI Press.
Georgiou, G. (2004). Corporate Lobbying on Accounting Standards: Methods, Timing and
Perceived Effectiveness. Abacus, 40(2), 219-237.
Gottschalk, L.A. & Gleser, G.C. (1969). The Measurement of Psychological States through
the Content Analysis of Verbal Behaviour. Los Angeles: University of California Press.
Grunig, J. E. & Huang, Y. (2000). From organization effectiveness to relationship indicators:
Antecedents of relationships, public relations strategies, and relationship outcomes. In J.A.
Ledingham & S.D. Bruning (Eds.), Public relations as relationship management (pp. 23–
53). Mahwah: Lawrence Erlbaum Associates.
Hakel, M.D. (1968). How often is often? American Psychologist, 23(7), 533-534.
Hill, N.T., Shelton, S.W. & Stevens, K.T. (2002). Corporate Lobbying Behaviour on Accounting for Stock-Based Compensation: Venue and Format Choices. Abacus, 38(1), 78-90.
Holsti, O.R. (1969). Content Analysis for the Social Sciences and Humanities. Reading/Menlo Park/London/Don Mills: Addison-Wesley.
Hope, T. & Gray, R. (1982). Power and Policy Making: The Development of an R&D
Standard. Journal of Business Finance and Accounting, 9(4), 531-558.
Humphrey, C. & Loft, A. (2009). Governing Audit Globally: IFAC, the New International Financial Architecture and The Auditing Profession. In C.S. Chapman, D.J. Cooper & P.B. Miller (Eds.), Accounting, Organizations and Institutions (pp. 205-232). Oxford: Oxford University Press.
IASB (2007). Information for Observers – Management Commentary: Comment letter analysis (Agenda papers 11A). <http://www.ifrs.org/NR/rdonlyres/6BF615CC-2601-4E69-806FD7D762BBB2AD/0/MC0701b11aobs.pdf> Accessed 14 October 2011.
IASB (2008a). Information for Observers – Joint Arrangements: Staff analysis of comment letters (Agenda paper 10B). <http://www.ifrs.org/NR/rdonlyres/9328D778-9E98-4249-9C5C-65C4F68CFEB9/0/JV0804b10bobs.pdf> Accessed 18 December 2010.
IASB (2008b). Comment Letters on ED 9. <http://www.ifrs.org/Current+Projects/IASB+Projects/Joint+Ventures/ED/Comments/Comment+Letters.htm> Accessed 18 December 2010.
IASB (2008c). Information for Observers – ED Annual improvements process – Comment analysis (Agenda papers 8A to 8J). <http://www.ifrs.org/Meetings/IASB+Board+Meeting+11+March+2008.htm> Accessed 14 October 2011.
IASB (2008d). Information for Observers – Joint Arrangements: Staff analysis of comment letters: Summarised overview (Agenda paper 10A). <http://www.ifrs.org/NR/rdonlyres/9328D778-9E98-4249-9C5C-65C4F68CFEB9/0/JV0804b10aobs.pdf> Accessed 18 December 2010.
IASB (2010). IASB and FASB Commitment to Memorandum of Understanding: Quarterly Progress Report, 31 March 2010. <http://www.iasb.org/NR/rdonlyres/184E570C-808F-45B9-9710-8249A76A0677/0/April2010progressreport3.pdf> Accessed 22 November 2011.
IASCF (2008). International Accounting Standards Committee Foundation: Due Process
Handbook for the International Accounting Standards Board. London.
Jahansoozi, J. (2006). Organization-stakeholder relationships: Exploring trust and transparency. Journal of Management Development, 25(10), 942–955.
Johnson, E.M. (1973). Numerical Encoding of Qualitative Expressions of Uncertainty. U.S.
Army Research Institute for the Behavioural and Social Sciences, Technical Paper 150.
Jones, L.V. & Thurstone, L.L. (1955). The psychophysics of semantics: an experimental
investigation. Journal of Applied Psychology, 39(1), 31-36.
Jorissen, A., Lybaert, N., Orens, R. & van der Tas, L. (2010). Formal Participation in the
IASB’s Due Process of Standard Setting: A Multi-issue/Multi-period analysis. European
Accounting Review, in press, DOI 10.1080/09638180.2010.522775.
Kenny, S.Y. & Larson, R.K. (1993). Lobbying Behaviour and the Development of International Accounting Standards: The Case of the IASC’s Joint Venture Project. European Accounting Review, 2(3), 531-554.
Kirsch, R.J. (2006). The International Accounting Standards Board – A Political History.
Kingston upon Thames: Wolters Kluwer.
Krippendorff, K. (1980). Content Analysis – An Introduction to its Methodology. Beverly Hills/London: Sage.
Kruskal, W.H. & Wallis, W.A. (1952). Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association, 47(260), 583-621.
Kwok, W.C.C. & Sharp, D. (2005). Power and International Accounting Standard Setting:
Evidence from Segment Reporting and Intangible Asset Projects. Accounting, Auditing and
Accountability Journal, 18(1), 74-99.
Laird, T.F.N., Korkmaz, A. & Chen, P.S.D. (2008). How Often is “Often” Revisited: The
Meaning and Linearity of Vague Quantifiers Used on the National Survey of Student Engagement. Paper presented at the Annual Meeting of the American Educational Research
Association, San Diego, <http://cpr.iub.edu/uploads/aera_how%20often.pdf> Accessed 18
December 2010.
Law, J. (2004). After Method: Mess in social science research. London and New York:
Routledge.
Malsch, B. & Gendron, Y. (2011). No one is perfect: The limits of transparency and an ethic for ‘intelligent’ accountability. Accounting, Organizations and Society, 36(7), 456-476.
Mann, H.B. & Whitney, D.R. (1947). On a test of whether one of two random variables is
stochastically larger than the other. Annals of Mathematical Statistics, 18(1), 50-60.
Maxwell, J.A. (1996). Qualitative Research Design. Applied Social Research Method, 41,
London: Sage.
McKee, A.J., Williams, P.F. & Frazier, K.B. (1991). A Case Study of Accounting Firm Lobbying: Advice or Consent. Critical Perspectives on Accounting, 2(3), 273-294.
McLeay, S., Ordelheide, D. & Young, S. (2000). Constituent lobbying and its impact on the
development of financial reporting regulations: evidence from Germany. Accounting, Organizations and Society, 25(1), 79-98.
Miles, M.B. & Huberman, A.M. (1994). Qualitative Data Analysis – An Expanded Sourcebook. (2nd ed.). Thousand Oaks/London/New Delhi: Sage.
Mosier, C.I. (1941). A psychometric study of meaning. Journal of Social Psychology, 13(1),
123-140.
Moxey, L.M. & Sanford, A.J. (1993). Prior Expectation and the Interpretation of Natural
Language Quantifiers. European Journal of Cognitive Psychology, 5(1), 73-91.
Newstead, S.E. & Collins, J.M. (1987). Context and the interpretation of quantifiers of frequency. Ergonomics, 30(10), 1447-1462.
Parducci, A. (1968). Often is often. American Psychologist, 23(11), 828.
Paterson, K.B., Filik, R. & Moxey, L.M. (2009). Quantifiers and Discourse Processing. Language and Linguistic Compass, 3(6), 1390-1402.
Pepper, S. & Prytulak, L.S. (1974). Sometimes frequently means seldom: context effects in the interpretation of quantitative expressions. Journal of Research in Personality, 8(1), 95-101.
Perry, J. & Nölke, A. (2005). International Accounting Standard Setting: A Network Approach. Business and Politics, 7(3), 1-32.
Power, M. (2004). Academics in the Accounting Policy Process: England and Germany
Compared. In C. Leuz, M. Pfaff & A. Hopwood (Eds.), The Economics and Politics of Accounting (pp. 376-392). Oxford: Oxford University Press.
Pustet, R. (2007). Frequency analysis of grammemes vs. lexemes in Taiwanese. In G. Altmann, P. Grzybek & R. Köhler (Eds.), Exact Methods in the Study of Language and Text (pp. 567-574). Berlin: de Gruyter.
Rapoport, A., Wallsten, T.S. & Cox, J.A. (1987). Direct and Indirect Scaling of Membership
Functions of Probability Phrases. Mathematical Modelling, 9(6), 397-417.
Rawlins, B. (2009). Give the Emperor a Mirror: Toward Developing a Stakeholder Measurement of Organizational Transparency. Journal of Public Relations Research, 21(1), 71-99.
Richardson, A.J. (2008). Due Process and Standard-setting: An Analysis of Due Process in
Three Canadian Accounting and Auditing Standard-setting Bodies. Journal of Business Ethics, 81(3), 679-696.
Richardson, A.J. (2009). Regulatory networks for accounting and auditing standards: A social network analysis of Canadian and international standard-setting. Accounting, Organizations and Society, 34(5), 571-588.
Richardson, A.J. & Eberlein, B. (2010). Legitimating Transnational Standard-Setting: The
Case of the International Accounting Standards Board. Journal of Business Ethics, 98(2),
217-245.
Roberts, J. (2009). No one is perfect: The limits of transparency and an ethic for ‘intelligent’
accountability. Accounting, Organizations and Society, 34(8), 957-970.
Robertson, R. (1995). Glocalization: Time-space and homogeneity-heterogeneity. In M.
Featherstone, S. Lasch & R. Robertson (Eds.), Global modernities (pp. 25-44). London:
Sage.
Roland, D., Dick, F. & Elman, R. (2007). Frequency of basic English grammatical structures: A corpus analysis. Journal of Memory and Language, 57(3), 348-379.
Ryan, C. (1998). The introduction of accrual reporting policy in the Australian public sector.
Accounting, Auditing and Accountability Journal, 11(5), 518-539.
Scott, W.A. (1955). Reliability of Content Analysis: The Case of Nominal Scale Coding.
Public Opinion Quarterly, 19(3), 321-325.
Simpson, R.H. (1944). The specific meanings of certain terms indicating different degrees of
frequency. Quarterly Journal of Speech, 30(3), 328-330.
Smith, M. & Taffler, R.J. (2000). The chairman’s statement: A content analysis of discretionary narrative disclosures. Accounting, Auditing and Accountability Journal, 13(5), 624-646.
Stamp, E. (1985). The politics of professional accounting research: some personal reflections. Accounting, Organizations and Society, 10(1), 111-123.
Strathern, M. (2000). The tyranny of transparency. British Educational Research Journal,
26(3), 309-321.
Suddaby, R., Cooper, D.J. & Greenwood, R. (2007). Transnational regulation of professional
services: Governance dynamics of field level organizational change. Accounting, Organizations and Society, 32(4/5), 333-362.
Sutton, T.G. (1984). Lobbying of accounting standard setting bodies in the UK and the USA:
a Downsian analysis. Accounting, Organizations and Society, 9(1), 81-95.
Tamm Hallström, K. (2004). Organizing International Standardization – ISO and the IASC
in Quest of Authority. Cheltenham: Edward Elgar.
Tandy, P.R. & Wilburn, N.L. (1992). Constituent Participation in Standard-Setting: The
FASB’s First 100 Statements. Accounting Horizons, 6(2), 47-58.
Tweedie, D. (2002). Statement before the Committee on Banking, Housing and Urban Affairs of the United States Senate. Washington, D.C. on 14 February 2002.
<http://www.iasplus.com/resource/020214dt.pdf> Accessed 18 December 2010.
van der Waerden, B.L. (1952). Order tests for the two-sample problem and their power. Proceedings Koninklijke Nederlandse Akademie van Wetenschappen (Indagationes Mathematicae), 14, 453-458.
van Lent, L. (1997). Pressure and Politics in Financial Accounting Regulation: The Case of
the Financial Conglomerates in The Netherlands. Abacus, 33(1), 1-26.
Wall, S.P. (1996). Public Justification and the Transparency Argument. The Philosophical
Quarterly, 46(185), 501-507.
Weetman, P. (2001). Controlling the Standard-Setting Agenda: The Role of FRS 3. Accounting, Auditing and Accountability Journal, 14(1), 85-108.
Wilcoxon, F. (1945). Individual comparisons by ranking methods. Biometrics Bulletin, 1(6), 80-83.
Young, J.J. (1994). Outlining Regulatory Space: Agenda Issues and the FASB. Accounting,
Organizations and Society, 19(1), 83-109.
Young, J.J. (2003). Constructing, persuading and silencing: the rhetoric of accounting standards. Accounting, Organizations and Society, 28(6), 621-638.
Zeff, S.A. (2002). ’Political’ Lobbying on Proposed Standards: A Challenge to the IASB.
Accounting Horizons, 16(1), 43-54.
Zeff, S.A. (2012). The Evolution of the IASC into the IASB, and the Challenges it Faces.
The Accounting Review, in press.
Zimmer, A.C. (1983). Verbal vs. Numerical Processing of Subjective Probabilities. In R.W.
Scholz (Ed.), Decision Making under Uncertainty (pp.159-182). North Holland: Elsevier
Science Publishers.
Tables and Figures
Table 1: Quantifiers used in the staff analysis paper and numbers of observations
Quantifier          Observations
some                          53
a few                         36
many                          27
several                       19
some of these                 15
one                            9
N/A                            4
most of these                  2
several of these               2
a few of these                 1
a majority                     1
most common                    1
Total                        170
Table 1 lists all verbal quantifiers identified within the staff analysis paper (IASB, 2008a) and the number of observations, i.e. how often the respective quantifier has been used. N/A is reported for issues that were not described using a verbal quantifier.
Table 2: Descriptive statistics (basis: unique comment letters)
Quantifier   Obs.   -------- Absolute values --------   -------- Relative values --------   Probability
                    mean    s.d.  median   min   max    mean    s.d.  median   min    max   Jarque-Bera
a few         36    3.89    2.04     4      1     9     3.4%    1.8%   3.5%   0.9%   8.0%      0.026
several       19    7.84    8.40     5      3    39     6.9%    7.4%   4.4%   2.7%  34.5%      0.000
some          53    9.77    7.08     8      1    33     8.6%    6.3%   7.1%   0.9%  29.2%      0.000
many          27   24.78   12.44    27      9    48    21.9%   11.0%  23.9%   8.0%  42.5%      0.345

Table 2 shows descriptive statistics for those four quantifiers (in terms of the number of underlying comment letters) that have been analyzed. Mean, standard deviation, median, minimum and maximum are calculated as absolute and relative (in relation to total comment letters) values. Moreover, the probability of the test statistic according to Bera and Jarque (1980) is presented.
Table 3: Descriptive statistics (basis: underlying codes)
Quantifier   Obs.   -------- Absolute values --------   --------- Relative values ---------   Probability
                    mean    s.d.  median   min   max    mean     s.d.  median    min    max   Jarque-Bera
a few         36    4.81    2.61     4      1    12     0.19%   0.10%  0.16%   0.04%  0.48%      0.018
several       19   10.05   13.62     6      3    62     0.40%   0.55%  0.24%   0.12%  2.49%      0.000
some          53   12.64   10.07    10      2    44     0.51%   0.40%  0.40%   0.08%  1.77%      0.000
many          27   37.81   24.24    34     10    98     1.52%   0.97%  1.37%   0.40%  3.94%      0.342

Table 3 shows descriptive statistics for those four quantifiers (in terms of the number of underlying codes) that have been analyzed. Mean, standard deviation, median, minimum and maximum are calculated as absolute and relative (in relation to total codes) values. Moreover, the probability of the test statistic according to Bera and Jarque (1980) is presented.
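The normality check reported in the last column of Tables 2 and 3 can be sketched with SciPy's implementation of the Bera-Jarque test. The samples below are synthetic stand-ins (one roughly normal, one clearly right-skewed), not the paper's coded counts.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic stand-ins, NOT the paper's data: one roughly normal sample
# and one clearly right-skewed sample of per-quantifier counts.
normal_like = rng.normal(loc=25, scale=12, size=200)
skewed = rng.exponential(scale=8, size=200)

# The Jarque-Bera statistic grows with sample skewness and excess kurtosis;
# a small p-value rejects the null hypothesis of normality.
jb_normal = stats.jarque_bera(normal_like)
jb_skewed = stats.jarque_bera(skewed)

print(f"normal-like: JB = {jb_normal.statistic:.2f}, p = {jb_normal.pvalue:.3f}")
print(f"skewed:      JB = {jb_skewed.statistic:.2f}, p = {jb_skewed.pvalue:.3f}")
```

On this reading, the near-zero probabilities for 'several' and 'some' in Tables 2 and 3 indicate clearly non-normal distributions of the underlying counts, which motivates the rank-based tests in Tables 4 to 7.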
Table 4: Median test for all four quantifiers together
Median tests                                       Kruskal-Wallis   van der Waerden
                                                    (probability)     (probability)
a few - several - some - many                       70.33 (0.00)      68.06 (0.00)
  (quantified based on unique comment letters)
a few - several - some - many                       65.96 (0.00)      64.68 (0.00)
  (quantified based on total codes)

Table 4 shows the results of the statistics according to Kruskal and Wallis (1952) and van der Waerden (1952), which test for significant differences of medians across more than two independent samples.
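The omnibus comparison in Table 4 corresponds to a Kruskal-Wallis H test, which SciPy provides directly (the van der Waerden normal-scores variant has no built-in SciPy equivalent). The group values below are invented to mimic the ordering in Table 2, not the paper's coded counts.

```python
from scipy import stats

# Illustrative counts per quantifier (NOT the paper's data), chosen to
# mimic the ordering a few < several < some < many seen in Table 2.
a_few   = [1, 2, 3, 4, 4, 5, 6, 9]
several = [3, 4, 5, 5, 7, 9, 12, 39]
some    = [1, 4, 6, 8, 9, 12, 20, 33]
many    = [9, 14, 22, 27, 27, 35, 41, 48]

# Kruskal-Wallis tests the null hypothesis that all four samples
# are drawn from distributions with the same median.
h_stat, p_value = stats.kruskal(a_few, several, some, many)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```

A significant H, as in Table 4, only says that at least one quantifier differs; the pairwise tests in Tables 5 and 6 locate where the differences lie.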
Table 5: Median test for paired quantifiers (basis: unique comment letters)
Median tests (quantified based on unique comment letters)

                    Wilcoxon-Mann-Whitney   Probability
a few - several             2.77               (0.01)
a few - some                5.46               (0.00)
a few - many                6.73               (0.00)
several - some              2.04               (0.04)
several - many              4.71               (0.00)
some - many                 5.55               (0.00)

Table 5 shows the results of the statistics according to Wilcoxon (1945, rank sum test) and Mann and Whitney (1947), which test for significant differences of medians between two independent samples. The underlying quantifications are based on unique comment letters.
Table 6: Median test for paired quantifiers (basis: total codes)
Median tests (quantified based on total codes)

                    Wilcoxon-Mann-Whitney   Probability
a few - several             1.95               (0.05)
a few - some                5.10               (0.00)
a few - many                6.69               (0.00)
several - some              2.15               (0.03)
several - many              4.76               (0.00)
some - many                 5.38               (0.00)

Table 6 shows the results of the statistics according to Wilcoxon (1945, rank sum test) and Mann and Whitney (1947), which test for significant differences of medians between two independent samples. The underlying quantifications are based on total codes.
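The six pairwise comparisons in Tables 5 and 6 can be sketched with SciPy's two-sided Wilcoxon-Mann-Whitney test; the per-quantifier counts below are invented for illustration, not the paper's data.

```python
import itertools
from scipy import stats

# Illustrative per-quantifier counts (NOT the paper's data).
groups = {
    "a few":   [1, 2, 3, 4, 4, 5, 6, 9],
    "several": [3, 4, 5, 5, 7, 9, 12, 39],
    "some":    [1, 4, 6, 8, 9, 12, 20, 33],
    "many":    [9, 14, 22, 27, 27, 35, 41, 48],
}

# Pairwise two-sided Wilcoxon-Mann-Whitney tests, analogous to the
# six comparisons reported in Tables 5 and 6.
p_values = {}
for (name_x, x), (name_y, y) in itertools.combinations(groups.items(), 2):
    u_stat, p = stats.mannwhitneyu(x, y, alternative="two-sided")
    p_values[(name_x, name_y)] = p
    print(f"{name_x} vs {name_y}: U = {u_stat:.1f}, p = {p:.3f}")
```

As in the tables, the sharpest separation is between the extreme quantifiers ('a few' vs. 'many'), while adjacent quantifiers ('several' vs. 'some') separate only marginally.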
Table 7: Median test for single quantifier using different quantification dimensions
Median tests (quantified based on unique comment letters vs. total codes)

           Wilcoxon (sign rank)   Probability
a few             4.03               (0.00)
several           2.53               (0.01)
some              5.18               (0.00)
many              4.54               (0.00)

Table 7 shows the results of the statistic according to Wilcoxon (1945, sign rank test), which tests for significant differences in medians of two dependent samples. For each verbal quantifier, the median in terms of unique comment letters is tested against the median in terms of total codes.
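Because each observation in Table 7 is quantified twice (once in unique comment letters, once in total codes), the samples are dependent and the signed-rank variant applies. A minimal SciPy sketch with invented paired data, not the paper's measurements:

```python
from scipy import stats

# Illustrative paired data for a single quantifier (NOT the paper's data):
# each observation counted once as unique comment letters and once as
# total codes, so the two samples are dependent.
unique_letters = [3, 4, 2, 5, 4, 6, 3, 5, 4, 7, 2, 6]
total_codes    = [4, 6, 3, 7, 5, 9, 4, 6, 5, 12, 3, 8]

# Two-sided Wilcoxon signed-rank test on the paired differences;
# here every code count exceeds its letter count, so the test statistic
# (the smaller of the two signed rank sums) collapses to zero.
w_stat, p = stats.wilcoxon(unique_letters, total_codes)
print(f"W = {w_stat:.1f}, p = {p:.4f}")
```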
Figure 1: IASB due process and responsibilities of staff
Figure 1 outlines the IASB’s due process with particular respect to the technical staff’s duties
within each stage of the process.
Staff Involvement within the IASB's Due Process

Stage 1: Setting the agenda
  Duty of staff: Identifying, reviewing and raising potential issues
Stage 2: Project planning
  Duty of staff: Selection of project team and development of project plan
Stage 3: Development and publication of a discussion paper (DP) (not mandatory)
  Duty of staff: Own research, analysis and summary of comment letters
Stage 4: Development and publication of an exposure draft (ED)
  Duty of staff: Drafting of ED, analysis and summary of comment letters
Stage 5: Development and publication of an IFRS
  Duty of staff: Drafting of IFRS on instruction of IASB's board
Stage 6: Procedures after an IFRS is issued
  Duty of staff: Regular meetings with interested parties
Figure 2: Distribution of unique comment letters underlying the quantifiers
Figure 2 illustrates the distribution of the number of unique comment letters that are represented by each of the four quantifiers ‘a few’, ‘several’, ‘some’ and ‘many’. It thus shows
how many comment letters are referred to by the quantifiers.
[Figure: distribution of unique comment letters per quantifier ('a few', 'several', 'some', 'many'); vertical axis from 0 to 50]
Figure 3: Distribution of total codes underlying the quantifiers
Figure 3 illustrates the distribution of the number of total codes that are represented by each of the four quantifiers ‘a few’, ‘several’, ‘some’ and ‘many’. It thus shows how often an issue was mentioned overall when it was assigned one of the quantifiers.
[Figure: distribution of total codes per quantifier ('a few', 'several', 'some', 'many'); vertical axis from 0 to 100]