IASB ED MANAGEMENT COMMENTARY: An analysis of experts' opinions through a Delphi method
Igor Álvarez (University of the Basque Country)
José Antonio Calvo (University of the Basque Country)
Araceli Mora (University of Valencia) 1
1. MOTIVATION, OBJECTIVE AND METHODOLOGY
The IASB Exposure Draft "MANAGEMENT COMMENTARY" was issued in June 2009, with a deadline for comments of 1 March 2010.
In order to give relevant input on the issue to the IASB and to the EFRAG (TEG), we have carried out a piece of research based on the "Delphi method", which has been increasingly used in the social sciences to deal with problems (including normative matters) for which consensus among experts can help policy makers. We briefly explain below how the method works; a complete explanation and literature review is given in the annex.
The Delphi method is a scientific and rigorous method of analysis that has been widely used in the social sciences in recent decades. According to the classical definition, the Delphi method is a general way of structuring the group communication process and making it effective enough to allow a group of individuals, functioning as a whole, to deal with complex problems. It is a systematic process which attempts to obtain group consensus, resulting in much more open and in-depth research than a traditional survey.
The individuals involved are a small group of experts in the topic (known as the Delphi panel). The structure of the group is designed by one or more monitors (the researchers), who formulate a reiterative survey to address the research topic. The survey is sent to the designated group of experts, who subsequently return their responses to the Delphi monitor. Each of the iterative mail-outs of the survey is called a round, and rounds continue until stable responses between rounds are achieved (usually two or three rounds are enough). The Delphi does not depend on statistical power, but rather on group dynamics.
A key advantage of the approach is that it avoids direct confrontation of the experts (they do not know who the other panellists are during the process) while they give their views and arguments. This allows richer argumentation and better validation of arguments. It has the advantages of research techniques based on group interaction, while ensuring that individual answers remain anonymous to the panellists, so that feedback is unbiased as to the source of the original opinion. It thus documents the opinion of the panellists while avoiding the pitfalls of face-to-face interaction, such as group conflict and individual dominance. At the same time, controlled feedback and anonymity can help panellists to review their views without publicly admitting that they have done so, encouraging them to take up a more personal viewpoint rather than an institutional position.
1 Address for correspondence: Araceli.Mora@uv.es
The panel of experts is divided into groups with different backgrounds, which permits
comparison of the perspectives of the different groups.
The results of the Delphi method do not try to substitute the traditional comment letters but might complement them. Comment letters are generally sent by organizations representing an "institutional opinion", as well as by regulatory bodies. For different reasons, individuals and non-institutional voices (for example, academics) are less likely to participate in the due process, as they do not normally send comment letters, so interesting views are lost in the process.
The Delphi method offers another approach, as it results from the exchange of arguments among the commentators, and it allows this anonymous exchange of arguments to be analysed with a scientific and rigorous methodology which avoids a great part of the subjectivity inherent in any analysis of opinions.
This research is focused on the ED Management Commentary, with the intention of helping policy makers in their decisions. In this study we consider, among others, the main questions related to the ED, including the specific questions to constituents, and show the opinions and arguments of the selected experts and their exchange (and change) of views through the process.
Next we present the expert panel (names and affiliations), then we show the main conclusions of the study in relation to the different topics in the ED, and finally we explain in more detail the research process and the statistical results.
2. THE EXPERTS PANEL
The panel consists of 22 experts carefully selected from different European countries (UK, Germany, Austria, Sweden, Spain, Switzerland, Denmark, Italy). They fall into two main groups:
a) Non-affected:
- Prestigious European academics involved in research related to financial reporting and disclosure and, in most cases, having been involved in some way in national or international accounting regulation processes.
b) Affected:
- Auditors (partners in the big firms) in charge of IFRS matters
- Managers from large listed European companies
- Managers from large European investment firms
The names of the experts and their affiliations are shown in Table 1:
Panel A: Academics
MANFRED BOLIN | International School of Management, Dortmund
LEANDRO CAÑIBANO | Autonomous University of Madrid
ROBERTO DI PIETRA | University of Siena
GÜNTHER GEBHARDT | Goethe University of Frankfurt
BEGOÑA GINER | University of Valencia
JOSÉ ANTONIO GONZALO | Alcalá University
JAN MARTON | Gothenburg University
FLORA MUIÑO | Carlos III University
FRANK THINGGAARD | Aarhus University
ALFRED WAGENHOFER | University of Graz
GEOFFREY WHITTINGTON | University of Cambridge

Panel B: Affected (preparers/auditors/users)
CLEBER CUSTODIO | DELOITTE
ALAN DANGERFIELD | Roche
ARANTZA GINER | BDO
JESUS HERRANZ / ROSALIA MARTINEZ | Ferrovial
JORGE HERREROS | KPMG
SIMON INGALL | Shell International
PAUL LEE | Hermes Equity Ownership Services
ROBERTO MONACHINO | Unicredit
MARTA SOTO | Telefónica
MIKE STARKIE | British Petroleum
CARSTEN ZIELKE | Société Générale

Table 1: Panels of experts (name | institution/organization)
3. SUMMARY OF THE FINDINGS AND CONCLUSIONS
Two rounds were enough to achieve stability in the answers. That means that the experts did not change their responses significantly, from the statistical point of view, with a couple of exceptions. There are interesting qualitative changes in some of them, in the additional arguments. The most interesting part of the second round was the increase in the number of arguments for or against the positions taken by some respondents in the first round. The largest changes from the statistical point of view are related to the qualitative characteristics and to the list of contents. The questions addressed in the questionnaires are shown in Table 2, and the statistical results in both rounds are shown in Table 3.
Before presenting the research process and the results, we can summarize the main conclusions of the study in the following paragraphs:
a) Usefulness and users of MC
All the experts agree that the MC is useful for making decisions, as it provides an input to put the numbers of the financial statements into their proper context, although many respondents do not give the highest mark (5, strongly agree) because they point out that usefulness can be lost in practice through the (low) quality of the MC presented by companies, and some others agree about its "potential usefulness" being subject, in some way, to its final structure.
All the experts agree that providers of capital (investors and creditors) are the primary users of MC, although many experts stress the importance of other users, mainly employees, and at a lower level competitors and institutions.
b) Mandatory versus non-mandatory status
The great majority of the experts, independently of their background, agree with having a non-mandatory guide "at this stage". However, it is important to remark that this is one of the responses that changed slightly in the second round, due to the feedback with the arguments for and against the different statuses and the possibility of deferring the decision about the status to the near future.
While most of the auditors and preparers insist on having a long-term non-mandatory guide, some other experts (4 academics, 3 preparers and 1 user) see merit in having a non-mandatory guide now, while gaining experience with the use of the MC, so that in the short or medium term it would be possible to evaluate the decision to make its preparation mandatory. Just 1 expert (an academic) considers that the MC must be mandatory at this stage. It is important to point out that there has been a change in views in this second round as a consequence of the feedback: more experts now show doubts and indecision on this matter, giving a "3" (neither agree nor disagree) or a "4" (agree with the IASB decision) but adding comments about a potential change of status in the future.
The arguments of most of the auditors and preparers for having long-term non-mandatory guidance are that a) it will more likely avoid unnecessary contradictions in some jurisdictions, while helping to enhance reporting practices and providing a useful reference framework in jurisdictions that lack established reporting requirements of an MC type, and b) in some other cases, that management commentary is beyond financial reporting and is not a matter to be considered by the IASB (it is a matter for other institutions), or at least is not a priority for the IASB.
On the other side, however, half of the experts (in the second round) consider that the IASB must (or should consider the possibility to) issue a mandatory standard at some point in time in order to obtain a useful document. Their argument is that a non-mandatory guide does not achieve comparability and usefulness for investors. They consider that a mandatory principles-based standard gives entities enough leeway to further improve it through voluntary disclosures and does not contradict national rules, and the more principles-based it is, the less it generates a checklist to tick.
It can be said that the experts, after reading the other panellists' arguments, are clearly divided (around 50%) about the convenience of making the MC compulsory in the future. Although there is not a clear distinction between backgrounds, the auditors show a high degree of consensus in being strongly reluctant to have a mandatory standard, while just 2 of the 11 academics consider that the guide should never be mandatory.
The overall conclusion can be that there must be a non-mandatory guide at this stage, although there seems to be a strong conflict of opinions about the next steps.
c) Qualitative characteristics
There is great consensus about the importance of relevance, faithful representation and timeliness. There is also considerable consensus about the importance of comparability over time and understandability (although this last characteristic is related to the plain language issue in question 5.c).
Although there seems to be low consensus on the question related to the use of plain language, after reading the arguments it can be concluded that most of the experts consider that the MC must be understandable, but always bearing in mind that it is addressed to "sophisticated users".
The main conflicts with the qualitative characteristics in the conceptual framework seem to be related to neutrality, comparability between companies and verifiability. We find the highest deviation in the opinions in the cases of neutrality and comparability. However, it is important to remark that the mode is "4", meaning "very important", in both cases in the first round, but "3", meaning "average", in the second round.
Neutrality: Many experts consider that neutrality makes no sense in a document based on management's view of not only what has happened, but also why management believes it has happened and what management believes the implications are for the entity's future. Some other experts consider that it depends on the real meaning of "neutrality". Management could be expected to be transparent and provide its views in a balanced manner by including any relevant information in good faith, not having the intent of deceiving users, and presenting both positive and negative considerations and events. If the definition of neutrality is refocused in that sense, then the MC must have it.
Comparability between companies: There is no consensus on this matter either. However, it is important to remark that only 6 of the 22 experts (with different backgrounds) give low or no importance to comparability between entities, arguing that it is very difficult to achieve when giving information through the eyes of management. Other experts (7) consider that the same argument about the difficulty of achieving comparability between entities from different industries can be applied to any financial statement. Comparability between entities in the same industry must be a desirable characteristic of the MC and must be maximized (always considering the inherent flexibility of this kind of document). The rest have no clear position, or think that it should be debated within the forthcoming Conceptual Framework.
The case of verifiability is different: very few consider that it is not important, and nobody said that it is very important. In fact, the mode is 3 (average score). Many experts with different backgrounds point out that the characteristic of verifiability could be confusing for the MC in practice, particularly if prospective information is included, whereas users may consider some sort of checking provided by an auditor, such as consistency, to be important.
Several respondents added comments in the second round about the importance of having more debate about the conceptual framework in general, including the qualitative characteristics, while pointing out that the MC is not the proper context in which to have such a debate.
As an overall conclusion, it can be said that there is a majority in favour of considering that "the desirable characteristics for MC must be the same as those definitively considered in the framework" (taking into account the refocused meaning of some of them when applied to MC, such as neutrality or verifiability), and that more debate about the Conceptual Framework would be necessary.
d) Principles-based approach and list of contents
There is a high level of agreement with the principles-based approach of the document and, at the same time, with the list of contents shown in the document. However, there are many comments in relation to the extent of some topics in the list, mainly in the second round, for example:
a) Results and prospects should be separated, and an analysis of variations against goals or prospective data provided earlier should be required.
b) More concrete information about risk and its management should be detailed in this list.
The extent of "forward-looking information" is also a concern for some experts.
The reaction of some experts to the feedback from the first round regarding the consideration of other information, such as social, environmental or corporate governance information, led some of them to slightly change their opinions by adding comments about taking that information (social, environmental and corporate governance) into account to the extent that it is relevant to understanding the strategy and overall performance of the entity. However, most of the experts tend to think that detailed information focused on the specifics of the environmental or social performance of the entity can be very useful and necessary, but is content more appropriate for specific separate reports.
All the experts agree on having a principles-based document. However, they also agree on having a list of contents, with all the elements considered in the ED regarded as very important or essential (although a little more detail about the content in some cases would help).
e) Application guides and examples
There is no consensus on this topic.
While all the preparers and most of the auditors prefer not to have an illustrative guide or examples, the users are divided, and it seems that the academics are also divided on this topic. However, although 4 academics are in favour of not having a guide, in two of these cases the reason is that it is premature, and that guides and illustrative examples would be convenient once more experience has been gained; in one case the reason is that having a non-mandatory document with a detailed guide seems to be a curiosity.
So, it could be said that the majority of the academics are in favour of having guides and examples now or in the near future. The reasons for being in favour are similar in all cases: a) the proposals in the exposure draft related to forward-looking information are difficult to put into operation without application guidance and/or examples, and examples would be helpful to get an idea of how comprehensive disclosures are expected to be; and b) if the development of such application guidance and illustrative examples were left to other organizations (national or stock market regulators), there would be inconsistency and confusion, diminishing comparability and usefulness.
The reason given against illustrative guides and examples by the reluctant respondents (mainly auditors) is that they might be interpreted as a floor or a ceiling. It is interesting that the only auditor in favour of having guides and examples argues that a very clear explanation that an example should not be considered a floor or a ceiling can avoid that, and he/she believes that such a risk is low compared to the potential benefits that an example can provide.
As a conclusion, it can be said that there is no consensus about the convenience or not of having guides and examples, and this is one of the few topics in which we can find different opinions depending on the background of the experts.
f) The placement criteria
Despite some concerns about the importance of having placement criteria, most of the experts agree with the decision to defer the development of placement principles (where to disclose MC vs. notes) until phase E of the Conceptual Framework is completed. Although the importance of the topic is generally recognized, considering that the intention of the IASB is to issue a non-mandatory guide, placement criteria could be quite confusing in those circumstances (although some others point out that it would also be confusing not to have those criteria until phase E is finished, maybe in some years).
In conclusion, it can be said that the majority agree with the decision to defer the development of placement principles, while pointing out that this situation can create difficulties in practice.
Other aspects: Finally, some experts make comments related to auditing, although this is not a matter over which the IASB has any competence, and so there is no question related to it in the questionnaire. However, it is an important topic that can affect compliance and the usefulness of MC. Some experts consider that the MC must be outside the scope of auditing, as much of the information is prospective and subjective and cannot be verified (which affects the qualitative characteristic of verifiability). Others think that a requirement for an audit may encourage a rigid and excessively cautious approach to management commentary.
However, other experts point out that the MC must be subject to an audit of some sort, but taking into account the specific challenges of an audit of prospective financial information, stemming from its different nature. The auditor can merely assess whether the MC is "consistent" with the annual financial statements.
4. THE RESEARCH PROCESS
The phases have been:
1. Selection of the experts
2. Sending the questionnaire in the first round
3. Qualitative and quantitative analysis of the responses
4. Preparation of the feedback report for the panellists and the questionnaire of the
second round
5. Analysis of the final responses and elaboration of the report
4.1. The questionnaires
The first questionnaire round contains both closed and open questions, and it allows the respondents freedom to define the topics that should be included. We included all the questions addressed to the constituents in the ED. The closed questions and the scales are shown in Table 2.
Table 2: QUESTIONS ADDRESSED IN THE QUESTIONNAIRES
Likert scale from 1 to 5 meaning: 1. Strongly disagree, 2. Disagree, 3. Neither agree nor disagree, 4. Agree, 5. Strongly agree (except in Q.1: 1. No knowledge, 2. Little knowledge, 3. Average knowledge, 4. Extensive knowledge, 5. Complete knowledge; and in Q.5.b) and Q.7.b): 1. None, 2. Little, 3. Average, 4. A lot, 5. Essential).
ALL QUESTIONS END WITH AN OPEN QUESTION ASKING FOR ADDITIONAL COMMENTS, ARGUMENTS, IDEAS OR EXPLANATIONS ABOUT THE ANSWERS.
Q.1. What do you know about the draft that the IASB has issued concerning its proposals for a type of
disclosure labelled: “Management Commentary”?
Q.2 Do you think MC increases the usefulness of financial reporting for investment decisions?
Q.3.a) Do you agree with the IASB decision to develop a guidance document for the preparation and
presentation of management commentary instead of a compulsory IFRS?
Q.3.b) If we finally have a non-mandatory guide at present, do you consider it would be convenient to have a mandatory standard in the near future?
Q.4. The ED (par 9) considers “the needs of existing and potential capital providers, as the primary users
of financial reports, are paramount when managers consider what information to include in MC”. The
following potential users should be addressed by the MC. How much do you agree in each case?
Q.5.a) The Conceptual framework ED (May 2008) considers the fundamental qualitative characteristics
of general purpose financial reports are relevance and faithful representation and the enhancing
qualitative characteristics are comparability (over time and between entities) verifiability, timeliness and
understandability. The ED considers “the desirable characteristics of MC are those set out in the May
2008 framework ED”. Do you agree that those characteristics are desirable for MC purpose?
Q.5.b) Indicate the importance which you attribute to each of the named characteristics in relation to the
MC
Q.5.c) In order to achieve the characteristic of understandability, the Board should recommend the use of plain language, because very technical and difficult disclosure could work against the aim and usefulness of the document. Do you agree?
Q.6. The IASB has decided to adopt a high-level principles-based approach to MC. Do you agree with
this?
Q.7.a).In spite of the adoption of a principle-based approach, the ED presents a list of content elements.
Do you agree with the offering of a list of content elements?
Q.7.b) Considering the content elements described in paragraphs 26-35 (listed in the questionnaire), value them according to their importance.
Q.8 The Board's decision is not to include detailed application guidance or illustrative examples in the final management commentary guidance document. Do you agree with the Board's decision?
Q.9 The IASB has decided to defer development of placement principles (where to disclose MC vs. notes) until phase E of the Conceptual Framework is completed (in some years). Do you agree with the Board's decision?
Q.10 Is there a compulsory MC in your country’s legislation? How much time have you needed to finish
the questionnaire?
Both quantitative (distribution statistics) and qualitative (arguments and comments of the experts) analyses have been considered for the next round. The second questionnaire has basically the same questions (in some cases with minor formal modifications), but adds the arguments and feedback obtained from the panel's answers to the first questionnaire. The objective of this second round is to reach a higher level of consensus by giving the experts the opportunity to change their views and/or to add comments or arguments in the light of the other experts' responses (which are always anonymous during the process).
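To make the round-to-round feedback concrete, the following is a minimal illustrative sketch (not the authors' actual scripts) of how the distribution statistics reported back to the panel can be computed; the use of pandas, the CSV file name and the column layout (one row per panellist, one column per Likert item) are assumptions made for the example.

```python
# Illustrative sketch: per-question distribution statistics for the feedback report.
import pandas as pd

def feedback_statistics(responses: pd.DataFrame) -> pd.DataFrame:
    """responses: one row per panellist, one column per Likert item (scored 1-5)."""
    stats = pd.DataFrame({
        "N": responses.count(),
        "Mean": responses.mean(),
        "Mode": responses.mode().iloc[0],        # smallest mode if several exist
        "Std. deviation": responses.std(),
        "Min": responses.min(),
        "Max": responses.max(),
        "P25": responses.quantile(0.25),
        "Median": responses.median(),
        "P75": responses.quantile(0.75),
    })
    return stats.round(2)

if __name__ == "__main__":
    round1 = pd.read_csv("round1_responses.csv")   # hypothetical file name
    print(feedback_statistics(round1))
```

Run on each round's responses, a summary of this kind produces the type of figures shown in Table 3.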
4.2. The results
It can be concluded that there is already high consensus on several answers, but some controversial aspects are detected in relation to:
(i) The consideration of a mandatory versus a non-mandatory guide or standard.
(ii) The qualitative characteristics, especially comparability (between companies), verifiability and neutrality.
(iii) Application guidance and illustrative examples.
The literature establishes that the responses may differ if the panellists have different characteristics (academics, users, auditors, preparers...). In order to compare this characteristic of the panel, a hierarchical cluster analysis was the first analysis we performed. This procedure tries to identify relatively uniform groups of cases (agents) based on the selected characteristics.
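As an illustration of this step, a hierarchical cluster analysis of the panellists' answer profiles can be sketched as follows; the Ward linkage, the file name and the data layout are assumptions made for the example, not a description of the authors' actual settings.

```python
# Illustrative sketch: hierarchical clustering of panellists' Likert answer vectors.
import pandas as pd
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

# Hypothetical file: one row per panellist, one column per Likert item.
responses = pd.read_csv("round1_responses.csv", index_col="panellist")

# Ward linkage on Euclidean distances between answer profiles.
Z = linkage(responses.values, method="ward", metric="euclidean")

plt.figure(figsize=(8, 4))
dendrogram(Z, labels=responses.index.tolist())
plt.title("Hierarchical clustering of panellists (round 1 answers)")
plt.ylabel("Linkage distance")
plt.tight_layout()
plt.show()
```

If panellists with the same background formed distinct branches of the dendrogram, that would indicate background-dependent answers; as noted below, this was not the case here.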
From the dendrogram2 it can be concluded that, in general, there is no clear difference between groups, so, excluding some specific questions that we comment on later, we cannot say that there is a significant difference in the answers as a whole depending on the background of the respondent, and therefore we cannot form groups with "similar characteristics".
After receiving the responses to the second round, we can conclude that we have not reached consensus on the controversial matters, but we have achieved stability in all the answers from the statistical point of view, with the exception of two questions for which the change in the answers from the first to the second round can be considered significant3. In any case, we think that a third round would not add any significant input and would not allow a higher consensus to be reached, so we consider that the process must stop with this second round. What we really obtain in this round is more feedback and arguments from the experts to explain their positions.
The statistical results for each question in both rounds are shown in Table 3, and the results of the stability test are shown in Table 4, where it can be observed that stability has not been achieved for question Q.7.b(c) (in relation to resources, risks and relationships), meaning that the changes in views on this matter have been significant, in the direction of being more concrete in the requirements.
2 Panellists 1 to 11 are academics, 12 to 16 and 22 are from listed companies, and 17 and 18 are from investment companies.
3 In our case the data are qualitative, and the samples are related and quite small, so among the non-parametric methods we chose the sign test. The sign test is used to test hypotheses about the location parameter and is used mainly for the comparison of related samples, as in our case. The sign test calculates the differences between two variables for all cases and classifies the differences as positive or negative. If the two variables have a similar distribution, the numbers of positive and negative differences do not differ significantly. In our work we set the significance level at < 0.05.
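A minimal sketch of the stability check described in footnote 3 (a two-tailed sign test on the paired answers of the two rounds) could look as follows; the data layout and the helper name sign_test are hypothetical, introduced only for illustration.

```python
# Illustrative sketch: sign test between round 1 and round 2 answers to one question.
import pandas as pd
from scipy.stats import binomtest

def sign_test(round1: pd.Series, round2: pd.Series, alpha: float = 0.05):
    """Classify paired differences as positive or negative and test whether their
    numbers differ more than chance would allow (ties are dropped)."""
    diff = (round2 - round1).dropna()
    n_pos = int((diff > 0).sum())
    n_neg = int((diff < 0).sum())
    n = n_pos + n_neg
    if n == 0:                       # nobody changed their answer at all
        return 1.0, True
    p_value = binomtest(n_pos, n=n, p=0.5, alternative="two-sided").pvalue
    return p_value, p_value >= alpha   # stable if the change is not significant

# Example use: p, stable = sign_test(round1_df["Q3b"], round2_df["Q3b"])
```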
Table 3. Statistical results in the two rounds (N = 22 for every question in both rounds)

Question | Round | Mean | Mode | Std. dev. | Min | Max | P25 | Median | P75
Q.1 | 1 | 3.68 | 3 | 0.839 | 2 | 5 | 3.00 | 4.00 | 4.00
Q.1 | 2 | 3.77 | 3a | 0.752 | 3 | 5 | 3.00 | 4.00 | 4.00
Q.2 | 1 | 4.27 | 4 | 0.550 | 3 | 5 | 4.00 | 4.00 | 5.00
Q.2 | 2 | 4.27 | 4 | 0.550 | 3 | 5 | 4.00 | 4.00 | 5.00
Q.3.a | 1 | 4.05 | 4 | 1.046 | 1 | 5 | 4.00 | 4.00 | 5.00
Q.3.a | 2 | 4.27 | 5 | 0.827 | 2 | 5 | 4.00 | 4.00 | 5.00
Q.3.b | 1 | 2.86 | 1 | 1.490 | 1 | 5 | 1.00 | 3.00 | 4.00
Q.3.b | 2 | 3.05 | 3 | 1.463 | 1 | 5 | 1.75 | 3.00 | 4.25
Q.4.a | 1 | 4.55 | 5 | 0.912 | 1 | 5 | 4.00 | 5.00 | 5.00
Q.4.a | 2 | 4.82 | 5 | 0.501 | 3 | 5 | 5.00 | 5.00 | 5.00
Q.4.b | 1 | 4.36 | 4 | 0.727 | 2 | 5 | 4.00 | 4.00 | 5.00
Q.4.b | 2 | 4.64 | 5 | 0.581 | 3 | 5 | 4.00 | 5.00 | 5.00
Q.4.c | 1 | 3.32 | 3a | 0.995 | 1 | 5 | 3.00 | 3.00 | 4.00
Q.4.c | 2 | 3.36 | 3 | 0.727 | 2 | 5 | 3.00 | 3.00 | 4.00
Q.4.d | 1 | 2.82 | 3 | 1.220 | 1 | 5 | 2.00 | 3.00 | 4.00
Q.4.d | 2 | 2.73 | 3 | 0.935 | 1 | 5 | 2.00 | 3.00 | 3.00
Q.4.e | 1 | 3.18 | 4 | 1.296 | 1 | 5 | 2.00 | 3.50 | 4.00
Q.4.e | 2 | 3.18 | 4 | 1.140 | 1 | 5 | 2.00 | 3.00 | 4.00
Q.5.a | 1 | 3.86 | 4 | 0.990 | 2 | 5 | 3.00 | 4.00 | 5.00
Q.5.a | 2 | 3.95 | 4 | 0.950 | 2 | 5 | 3.00 | 4.00 | 5.00
Q.5.b(a) | 1 | 4.55 | 5 | 0.912 | 1 | 5 | 4.00 | 5.00 | 5.00
Q.5.b(a) | 2 | 4.55 | 5 | 0.963 | 1 | 5 | 4.00 | 5.00 | 5.00
Q.5.b(b) | 1 | 4.41 | 5 | 0.854 | 2 | 5 | 4.00 | 5.00 | 5.00
Q.5.b(b) | 2 | 4.59 | 5 | 0.666 | 3 | 5 | 4.00 | 5.00 | 5.00
Q.5.b(c) | 1 | 4.41 | 5 | 0.734 | 2 | 5 | 4.00 | 4.50 | 5.00
Q.5.b(c) | 2 | 4.27 | 4 | 0.767 | 2 | 5 | 4.00 | 4.00 | 5.00
Q.5.b(d) | 1 | 3.27 | 4 | 1.316 | 1 | 5 | 2.00 | 3.50 | 4.00
Q.5.b(d) | 2 | 2.95 | 3 | 0.950 | 1 | 4 | 2.00 | 3.00 | 4.00
Q.5.b(e) | 1 | 3.50 | 3 | 1.058 | 1 | 5 | 3.00 | 3.50 | 4.00
Q.5.b(e) | 2 | 3.32 | 3 | 0.716 | 2 | 5 | 3.00 | 3.00 | 4.00
Q.5.b(f) | 1 | 4.05 | 4 | 0.785 | 3 | 5 | 3.00 | 4.00 | 5.00
Q.5.b(f) | 2 | 4.14 | 4 | 0.710 | 3 | 5 | 4.00 | 4.00 | 5.00
Q.5.b(g) | 1 | 4.18 | 5 | 0.958 | 2 | 5 | 3.00 | 4.50 | 5.00
Q.5.b(g) | 2 | 4.45 | 5 | 0.858 | 2 | 5 | 4.00 | 5.00 | 5.00
Q.5.b(h) | 1 | 3.32 | 4 | 1.359 | 1 | 5 | 2.00 | 3.50 | 4.25
Q.5.b(h) | 2 | 3.32 | 3 | 1.041 | 1 | 5 | 3.00 | 3.00 | 4.00
Q.5.c | 1 | 3.50 | 4 | 1.058 | 2 | 5 | 2.00 | 4.00 | 4.00
Q.5.c | 2 | 3.59 | 4 | 0.959 | 2 | 5 | 3.00 | 4.00 | 4.00
Q.6 | 1 | 4.14 | 4 | 0.774 | 2 | 5 | 4.00 | 4.00 | 5.00
Q.6 | 2 | 4.27 | 4 | 0.631 | 3 | 5 | 4.00 | 4.00 | 5.00
Q.7.a | 1 | 4.09 | 4 | 0.610 | 3 | 5 | 4.00 | 4.00 | 4.25
Q.7.a | 2 | 4.09 | 4 | 0.526 | 3 | 5 | 4.00 | 4.00 | 4.00
Q.7.b(a) | 1 | 4.36 | 5 | 0.727 | 3 | 5 | 4.00 | 4.50 | 5.00
Q.7.b(a) | 2 | 4.59 | 5 | 0.666 | 3 | 5 | 4.00 | 5.00 | 5.00
Q.7.b(b) | 1 | 4.59 | 5 | 0.666 | 3 | 5 | 4.00 | 5.00 | 5.00
Q.7.b(b) | 2 | 4.68 | 5 | 0.646 | 3 | 5 | 4.75 | 5.00 | 5.00
Q.7.b(c) | 1 | 4.55 | 5 | 0.671 | 3 | 5 | 4.00 | 5.00 | 5.00
Q.7.b(c) | 2 | 4.50 | 5 | 0.673 | 3 | 5 | 4.00 | 5.00 | 5.00
Q.7.b(d) | 1 | 4.36 | 5 | 1.049 | 1 | 5 | 4.00 | 5.00 | 5.00
Q.7.b(d) | 2 | 4.45 | 5 | 1.011 | 1 | 5 | 4.00 | 5.00 | 5.00
Q.7.b(e) | 1 | 4.41 | 5 | 0.796 | 3 | 5 | 4.00 | 5.00 | 5.00
Q.7.b(e) | 2 | 4.55 | 5 | 0.739 | 3 | 5 | 4.00 | 5.00 | 5.00
Q.8 | 1 | 3.45 | 4 | 1.262 | 1 | 5 | 2.00 | 4.00 | 4.25
Q.8 | 2 | 3.55 | 4 | 1.143 | 2 | 5 | 2.00 | 4.00 | 4.25
Q.9 | 1 | 3.77 | 4 | 0.922 | 1 | 5 | 3.00 | 4.00 | 4.00
Q.9 | 2 | 3.59 | 4 | 0.908 | 1 | 5 | 3.00 | 4.00 | 4.00
Table 4: Stability test (sign test between rounds; exact/asymptotic significance, two-tailed)

Question | Sig. (two-tailed)
Q.1 | .625
Q.2 | .125
Q.3.a | 1.000
Q.3.b | .375
Q.4.a | .125
Q.4.b | 1.000
Q.4.c | .453
Q.4.d | 1.000
Q.4.e | .687
Q.5.a | .500
Q.5.b(a) | 1.000
Q.5.b(b) | .250
Q.5.b(c) | .625
Q.5.b(d) | 1.000
Q.5.b(e) | 1.000
Q.5.b(f) | .625
Q.5.b(g) | .219
Q.5.b(h) | 1.000
Q.5.c | 1.000
Q.6 | .500
Q.7.a | 1.000
Q.7.b(a) | .063
Q.7.b(b) | .500
Q.7.b(d) | .500
Q.7.b(e) | .500
Q.8 | .500
Q.9 | .125

Question not reaching stability: Q.7.b(c).
Annex. THE RESEARCH METHODOLOGY: A DELPHI MODEL
1.1. Definition and general characteristics of the model
For several decades, organizations have tried to capture the collective knowledge and experience of experts in a given field to improve decision making (Gupta and Clarke, 1996). Although the first Delphi experiment was performed in 1948, the method became popular with Dalkey and Helmer (1963), who conducted a number of Delphi experiments to reduce the negative effects of group interaction in decision making.
According to the classical definition, the Delphi method4 is a general way of structuring the group communication process and making it effective enough to allow a group of individuals, functioning as a whole, to deal with complex problems (Linstone and Turoff, 1975, 2002). It is a systematic process which attempts to obtain group consensus (MacCarthy and Atthirawong, 2003), resulting in much more open and in-depth research than a traditional survey. Castells (1999) claims that one of the bases of the Delphi techniques is rooted in the fact that they are more socially representative than statistics based on the opinions of experts in the field under investigation.
4 A complete analysis and explanation of the Delphi method and an extensive literature review can be found in Gupta and Clarke (1996) and in Landeta (…).
The individuals involved in the group are experts in the topic. The structure of the group
is designed by a monitor or monitor team that formulates a reiterative survey to address
the research topic. The survey is sent to the designated group of experts (known as the
Delphi panel) who then anonymously rank their preferences regarding a continuum of
answers related to a series of questions or propositions posed. The experts subsequently
return their responses to the Delphi monitor. Each of the iterative mail-outs of the
survey is called a round, and rounds continue until stable responses between rounds are
achieved.
A key advantage of the approach is that it avoids direct confrontation of the experts (Sánchez et al., 1999). This allows richer argumentation and better validation of arguments. It has the advantages of research techniques based on group interaction, while
ensuring that all answers are anonymous and, therefore, that feedback is unbiased as to the source of the original opinion. It thus documents the opinion of the panellists while avoiding the pitfalls of face-to-face interaction, such as group conflict and individual dominance. As B point out, direct confrontation "often induces the hasty formulation of preconceived notions, an inclination to close one's mind to novel ideas, a tendency to defend a stand once taken, or, alternatively and sometimes alternately, a predisposition to be swayed by persuasively stated opinions of others". At the same time, controlled feedback and anonymity can help panellists to review their views without publicly admitting that they have done so, encouraging them to take up a more personal viewpoint rather than an institutional position.
Although achieving consensus seems to be the main goal of the Delphi method, Delphi
literature suggests that consensus criteria should be applied only after response stability
has been established (see Chaffin and Talley, 1980; Dajani et al, 1979; Regier, 1986;
Sharma and Gupta, 1993). As Scheibe et al (1975, page 262) observe, using response
stability rather than consensus levels as the stopping criterion “allows much more
information to be derived from the Delphi”. Stability is a measure of the extent and
degree to which panel members are selecting the same responses between successive
rounds. In fact, the possible outcomes are: complete consensus, majority, bimodality, bipolarity, plurality or disagreement (see Novakowski and Wellar, 2008).
As Nielsen and Thangadurai (2007) point out, "the Delphi method is based on a dialectical inquiry that encourages the sharing and exploring of divergent point of view. The emphasis is not to secure a single, universal truth, but on the range of quality ideas it generates, not only those around which consensus may form, since they may be less important to current investigations"; indeed, "now a useful product of the Delphi method is crystallization of reasons for dissensus" (Gordon, 2004).
Although there are a number of different types of Delphi exercises, they can usually be assigned to one of three broad categories (Novakowski and Wellar, 2008):
1. Normative or regulatory Delphi: obtaining a consensus about a preferred future.
2. Forecasting Delphi: making future predictions.
3. Policy Delphi: exploring a question of interest or with political consequences.
The majority of Delphi efforts during the first decade were pure forecasting exercises. The normative (regulatory) Delphi is the one that deals with "explorations of what should be", and so it is the type most suitable for the aim of this study, even though there is great diversity within these categories.
As we have already established, the purpose of the study and the dimension of the application of the theory are regulatory; therefore, our work falls within the first Delphi group. In this sense there are not many studies related to financial information that use this methodology; notable examples are Meritum (…) on the disclosure of information related to intangible assets, and Cañibano and Alberto (2008) in relation to the "enforcement" of IFRS. In relation to the standard-setting process coinciding with the current adoption of IFRS by the EU, Julia and Polo (2006) used the Delphi technique to develop an adaptation of accounting standards to co-operative societies.
We have therefore opted to use the structure developed by Novakowski and Wellar (2008) for the regulatory Delphi, who establish the following steps for its correct development:
Step 1: Reviewing the literature.
Step 2: Preparing the questionnaire.
1.2. The development of the research
According to Loo (2003), particular attention must be paid to detailed planning and then to effective execution of the following stages:
(1) problem definition;
(2) panel selection;
(3) determining the panel size; and
(4) conducting the Delphi rounds.
1.2.1. The panel of experts
Reid (1988) points out that one of the keys to success in this type of research is the appropriate selection of panel members: they should be selected for their capabilities, knowledge and independence. One important point to recognise is that a Delphi panel is purposively rather than randomly selected. The panel is therefore not representative of any target population (Goodman, 1987).
The issue of respondent bias can be overcome in this study by selecting experts from diverse backgrounds with a range of interests, so that all viewpoints can be expressed (Adler and Ziglio, 1996).
It is interesting to divide the experts into panels (academics, practitioners, government officials...). Their size and constitution depend on the nature of the research question.
There is no agreement in the literature as far as the optimum size of the panel of experts is concerned, and even though there have been various attempts at establishing it in a scientific manner, the conclusions have not been significant (Galanc and Mikus, 1986). Despite this lack of clear criteria, the majority of the opinions collected in the literature consider that the groups must have a minimum of seven and a maximum of 30 members (MacCarthy and Atthirawong, 2003; Denzin and Lincoln, 1994; Linstone and Turoff, 1975; Landeta, 1999). More specifically, the literature advises a number between 8 and 12 experts when using the Delphi method for regulatory purposes. Novakowski and Wellar (2008) and Richey et al (1985) suggest a similar number of experts, as these authors believe that a small panel would be sufficient when developing appropriate opinions in a consensus manner. It is important to remark that a Delphi study does not aim to be a statistically representative sample of a target population, but rather is a group decision mechanism which requires qualified experts with an in-depth knowledge of the study target (Okoli and Pawlowski, 2004).
This study establishes the two groups of experts recognised by the literature (Linstone and Turoff, 1975; Landeta, 1999): experts in the area (academics) and agents involved (preparers, auditors and users). This structure permits potential comparison of the different stakeholder groups, allows a sufficient number of perspectives from the "inside" to be obtained, and makes it possible to analyse whether there are differences in perspectives, which can also be interesting.
It is essential that the experts selected receive information about the objectives of the
study, the estimated time required for their participation, the potential of the research
and possible benefits they can obtain by participating in it, and it is important to remark
that the participants must not know the identities of the other members of the group
when expressing their opinions.
1.2.2. The questionnaires
Another basic aspect of the successful use of this methodology is rooted in the writing
of the questions to be included in the different questionnaires. They must be clear and
concise, and correctly understood by the experts. In the first phase, it is advisable to
begin with open-ended questions (Heras et al., 2006).
The cover letter is especially important in a Delphi study because members must be
informed and motivated about participating in all rounds and in returning their
completed questionnaires in a timely manner so that analyses can be conducted and both
the feedback report and next questionnaire can be constructed and distributed for
successive rounds.
Adler and Ziglio (1996) consider that the emphasis of the first questionnaire round (R1) is to allow the respondents freedom to define the topics that should be included in the analysis, thus minimising the degree of influence from the monitor group. Respondents are given the opportunity to indicate where they feel the questionnaire has not sufficiently addressed the issues. Okoli and Pawlowski (2008) consider that no single questionnaire should take more than 30 minutes to complete.
After the first round, once the responses have been received, the work with the results
begins. Both quantitative and qualitative analyses are performed on the returned
questionnaires to prepare a feedback report for the panel, as well as to assist the
moderator in preparing materials for the next round. So, the monitor reviews and
summarizes the responses, and then employs a measure of central tendency (usually the
mean, median, or mode) to indicate where the majority of the panel responses are
located on the response continuum. In the questions permitting it, the interquartile range
of the responses is also estimated, as a measurement of its dispersion (Landeta, 1999).
The response continuum can be based on Likert scale categories of five to seven points,
or on some other rationale, in order to indicate the degree to which panel members agree
or disagree with the questions or propositions posed (Critcher and Gladstone, 1998).
The monitor then develops a second-round survey that reveals the response dispersion
of the panel, with the different statistical indicators, such as the mean, the mode, the
typical deviation and the percentiles and also includes the feedback obtained from the
first round’s open-ended question(s) to ensure that the survey author has not overlooked
something relevant to the topic. Questionnaire items in the second and following rounds
might become more specific or precise, focussing on areas where consensus has not yet
been achieved.
Upon receipt of the second round, the experts are requested to reconsider their first estimations, if they consider it necessary, and are given an exact copy of their own responses to the first questionnaire.
The process is repeated until the responses stabilise, that is, when the median shows practically no oscillation and the interquartile range stops getting narrower. This indicates that, following an anonymous exchange of information, maximum consensus has been reached (MacCarthy and Atthirawong, 2003). Although the process of response and reiteration can be repeated as many times as required, Delphi practice has revealed that the rate of response convergence is highest between rounds 1 and 2 (Linstone and Turoff, 1975a).
Following the final round, the moderator prepares a comprehensive report and distributes it, or a short version of it, to all members.
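As an illustration of the stopping rule just described (the median shows practically no oscillation and the interquartile range stops narrowing), a simple per-item check could be sketched as follows; the tolerances and the data layout are assumptions made for the example only.

```python
# Illustrative sketch: stability check on one Likert item between two successive rounds.
import pandas as pd

def round_is_stable(prev: pd.Series, curr: pd.Series,
                    median_tol: float = 0.0, iqr_tol: float = 0.0) -> bool:
    """True when the median no longer moves and the interquartile range no longer narrows."""
    median_shift = abs(curr.median() - prev.median())
    iqr_prev = prev.quantile(0.75) - prev.quantile(0.25)
    iqr_curr = curr.quantile(0.75) - curr.quantile(0.25)
    return median_shift <= median_tol and (iqr_prev - iqr_curr) <= iqr_tol

# Example use: stop iterating on an item once round_is_stable(r1["Q6"], r2["Q6"]) is True.
```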
1.2.3. Advantages and limitations of the Delphi method
The Delphi method, like all methodologies, has attracted its share of critics. Fortunately,
a large body of literature has accumulated to demonstrate the usefulness of the method
when well-designed and executed (Loo, 2003).
Some researchers (trained in positivism, with its emphasis on large sample sizes, and preferably random samples) might find these sample sizes small relative to the hundreds required for surveys, given considerations of measurement error, respondent bias, and the need for statistical power. However, the careful selection of the experts is a key factor in the Delphi method that enables a researcher to use a small panel with confidence.
According to Okoli and Pawlowski (2004) the Delphi group size does not depend on
statistical power, but rather on group dynamics for arriving at consensus among experts.
Following Nielsen and Thangadurai (2007) statistical information matters less than
knowledge about the behaviours, opinions, attitudes and aspirations of people.
Reliability and validity are critical properties of measures in all types of research in which questionnaires are involved. The Delphi method has attracted specific criticism, with claims that the reliability of measures obtained from judgments is questionable. Hasson et al. (2000) define reliability as the extent to which a procedure produces similar results under different conditions at all times. Critics note that responses from different panels to the same questions can differ substantially, that the consensus achieved in later rounds might be due more to pressure to conform than to a genuine convergence of opinions, and that the use of open-ended questions can make it difficult to assess measurement reliability and validity. Furthermore, Keeney et al. (2001) found a great deal of criticism accusing the Delphi technique of having no test of reliability.
There is also a problem with validating numerical values attributed to expert opinion.
In defence of this method against such criticisms, when conducting a policy Delphi it can be said, according to Loo (2003), that one should not necessarily expect to achieve consensus or a decision. Rather, two or more potentially conflicting policy directions might emerge, and such a result would not necessarily mean that the study lacks reliability or validity. In fact, such an outcome might be desirable to provide the client (e.g. a policy maker) with options and to stimulate the client to critically evaluate options in the decision-making process.
The Delphi method deserves serious consideration because the careful design and execution
of a Delphi study should lead to useful findings for policy makers and program
managers. Fortunately, there is a large, long-established, open literature on designing
Delphi studies and examples of successful Delphis (e.g. Bijl, 1992; Loo, 1996, 1997).
For an excellent discussion of the advantages and disadvantages of the Delphi method
refer to Rowe et al (1991), Wounderberg (1991) and Okoli and Pawlowski (2004)
Much remains to be discussed, but that discussion would obviously extend beyond the
objectives of this article. We will limit ourselves to pointing out, as a means of
reflection, that Delphi technique as applied here can be effective in identifying areas of
agreement, disagreement and where information is lacking and where further research is
required. Therefore, although there are limitations to the Delphi methodology, it is concluded in the literature that the findings provide useful guidance for further consideration of the complex issue. As Nakatsu and Iakovou (2009) point out, the Delphi method provides a good solution that allowed us to conduct an investigation with rigor and internal consistency, while producing results efficiently and with external validity.
Bibliography
Adler, Michael, and Erio Ziglio (eds.) (1996). Gazing into the Oracle: The Delphi Method and its
Application to Social Policy and Public Health . London: Jessica Kingsley Publishers.
Bijl, R. (1992), "Delphi in a future scenario study on mental health and mental health care", Future,
Vol. 2 No.3, pp.232-50.
Cañibano, L. and Alberto, F. (2008): "El control institucional de la información financiera: aplicación de un estudio DELPHI", Vol. XXXVII, Núm. 140, Octubre-Diciembre, pp. 795-829.
Castells, M. (1999), “La era de la información. Economía, sociedad y cultura, Vol. 1. “La sociedad
red”, Alianza Editorial, Madrid
Chaffin W. and Talley W, (1980):” Individual stability in Delphi studies” Technological Forecasting
and Social Change, Vol. 16, 1, January, Pages 67-73
Critcher C and Gladstone B, (1998) ``Utilizing the Delphi technique in policy discussion: a case study
of a privatized utility in Britain'' Public Administration , 76, pp. 431- 449
Dajani, J. S., Sincoff, M. Z., & Talley, W. K. (1979). Stability and agreement criteria for the
termination of Delphi studies. Technological Forecasting and Social Change, 13, 83-90.
Dalkey, N.C. and Helmer, O. (1964): "An experimental application of the Delphi method to the use of experts", Management Science, Vol. 9, pp. 458-467.
Denzin, N.; Lincoln, Y. (1994): Handbook of qualitative research, Sage. Thousand Oaks.
Galanc, T. and Mikus, J. (1986): "The choice of an optimum group of experts", Technological Forecasting and Social Change, Vol. 30, pp. 245-250.
Goodman, C. M. (1987). The Delphi technique: A critique. Journal of Advanced Nursing, 12, pp. 729-734.
Gordon, T.J., (2004). The Delphi Method. In: Glenn, J.C., Gordon, T.J. (Eds.), Futures Research
Methodology. American Council for The United Nations University Millennium Project,
Washington, D.C.
Gupta, U.G. and Clarke (1996): "Theory and applications of the Delphi technique: A bibliography (1975-1994)", Technological Forecasting and Social Change, Volume 53, Number 2, October, pp. 185-211.
Hasson, F., Keeney, S. and McKenna, H., 2000. Research guidelines for the Delphi survey technique.
Journal of Advanced Nursing 32, 1008–1015
Heras I, Arana G. and Casadesús M (2006): “A Delphi study on motivation for ISO 9000 and EFQM”;
International Journal of Quality & Reliability Management, Vol. 23 No. 7, pp. 807-827
Julia J.F. and Polo F. (2006) La adaptación de las normas contables a las sociedades cooperativas con
especial referencia a los fondos propios. Una aplicación del método Delphi. Revista
Española de Financiación y Contabilidad Vol. XXXV Núm. 132 Enero-Marzo 789-816
Keeney, S., Hasson, F., McKenna, H.P., 2001. A critical review of the Delphi technique as a research
methodology for nursing. International Journal of Nursing Studies Vol. 38, pp. 195–200.
Landeta, J. (1999): “El método Delphi : una técnica de previsión para la incertidumbre”, Editorial
Ariel, Barcelona.
Landeta, J. (2006):”Current validity of the Delphi method in social sciences”, Technological
Forecasting & Social Change, 73, 467-482.
Linstone H, Turoff M, 1975a, ``Introduction'', in The Delphi Method: Techniques and Applications
Eds H Linstone, M Turoff (Addison-Wesley, Don Mills, ON) pp 3-12
Linstone H, Turoff M, 1975b, ``Evaluation: introduction'', in The Delphi Method: Techniques and
Applications Eds H Linstone, M Turoff (Addison-Wesley, Don Mills, ON) pp 229 - 235
Linstone, H.A. and Turoff, M. (1975): "The Delphi Method: Techniques and Applications", Addison-Wesley, London.
Linstone, H.A. and Turoff, M.(2002): “The Delphi Method: Techniques and Applications”, at
www.is.njit.edu/pubs/delphibook
Loo, R. and Thorpe, K. (2003): "A Delphi study forecasting management training and development for first-line nurse managers", Journal of Management Development, Vol. 22, No. 9, pp. 824-834.
Loo, R. (1996), “Managing workplace stress: a Canadian Delphi study among human resource
managers”, Work and Stress, Vol. 10 No. 2, pp. 183-9.
Loo, R. (1997), “The future of management training in Canadian healthcare organizations”, Journal
of Management Development, Vol. 16 No. 9, pp. 680-9.
MacCarthy, B.L. and Atthirawong, W. (2003), “Factors affecting location decisions in international
operations: a Delphi study”, International Journal of Operations & Production
Management, Vol. 23 No. 7, p. 794.
Martino, J. (1999): “Thirty years of change and stability”, Technological Forescasting and Social
Change, Vol. 62, pp 13-18
Nakatsu R. and Iakovou C., (2009):” A comparative study of important risk factors involved in
offshore and domestic outsourcing of software development projects: A two-panel Delphi
study” Information & Management, Vol. 46, pp. 57–68
Nielsen C. and Thangadurai M. (2007):” Janus and the Delphi Oracle: Entering the new world of
international business research” Journal of International Management, Vo. 13, 2, June, pp
147-163
Novakowski, N. and Wellar, B. (2008): “Using the Delphi technique in normative planning research:
methodological design considerations”, Environment and Planning A, Vol. 40, 1485-1500
Okoli, C. and Pawlowski, S. (2004): “The Delphi method as a research tool: an example, design
considerations and applications”, Information & Management, Vol. 42, Nº 1, 15-29.
Regier W, (1986): ``Directions in Delphi developments: dissertations and their quality'' Technological
Forecasting and Social Change, Vol. 29, pp. 195-204
Reid, N. (1988), "The Delphi technique: its contribution to the evaluation of professional practice",
in Ellis, R. (Eds),Professional Competence and Quality Assurance in the Caring Professions,
Chapman Hall, London, .
Richey, J.; Mar, B. and Horner, R. (1985): "The Delphi technique in environmental assessment: implementation and effectiveness", Journal of Environmental Management, Vol. 21, pp. 135-146.
Rowe, G. and Wright, G. (1999): “The Delphi technique as a forecasting tool: issues and analysis”,
International Journal of Forecasting, Vol. 15, nº 4, 353-375.
Sánchez, P.; Caminade, C. and Escobar, C. (1999): "En busca de una teoría sobre la medición y gestión de los intangibles en la empresa: Una aproximación metodológica", Revista Vasca de Economía Ekonomiaz, nº 45, pp. 188-213.
Scheibe, M.; Skutsch, M. and Schofer, J. (1975): "Experiments in Delphi methodology", in The Delphi Method: Techniques and Applications, Eds. H. Linstone and M. Turoff (Addison-Wesley, Don Mills, ON), pp. 262-287.
Sharma H, and Gupta A, 1993, ``Present and future status of system waste: a national-level Delphi in
India'' Technological Forecasting and Social Change Vol. 44, pp. 199-218
Ziglio E, 1996, ``The Delphi method and its contribution to decision-making'', in Gazing into the
Oracle:The Delphi Method and its Application to Social Policy and Public Health Eds M
Adler, E Ziglio (Jessica Kingsley, London) pp 3- 33