Delphi Technique - My Civil Engineering Projects and Presentations

Delphi Technique
Introduction
Named after the Greek oracle at Delphi, whom the Greeks consulted for information about their future, the Delphi technique is the best known qualitative, structured and indirect
interaction futures method in use today (Woudenberg 1991). Created by Olaf Helmer and
Norman Dalkey in 1953 at the RAND corporation to address a future military issue, the
technique became popular when it was applied a decade later to large scale technological
forecasting and corporate planning (Helmer 1983). From a number of RAND reports (e.g. Dalkey & Helmer 1962, Dalkey 1967, Brown 1968, Rescher 1969, Helmer 1967) the
technique has gone on to become the subject of numerous books and journal articles
(Armstrong 1985). Similarly its use has been broadly based and prolific throughout many
parts of the world, but especially in the US, eastern and western Europe and Japan
(Masini 1993). It seems few methodologies have captured the imagination of planners
and forecasters the way Delphi has.
Essentially, Delphi is the name given to a set of procedures for eliciting and refining the
opinions of a group - usually a panel of experts (Dalkey 1967, Brown 1968). It is a way
whereby a consensus and position of a group of experts is reached after eliciting their
opinions on a defined issue and it relies on the "informed intuitive opinions of specialists"
(Helmer 1983:134). This collective judgment of experts, although made up of subjective
opinions, is considered to be more reliable than individual statements and is thus more
objective in its outcomes (Johnson & King 1988, Helmer cited in Masini 1993). As
Linstone and Turoff (1975:3) write, "Delphi may be characterized as a method for
structuring a group communication process, so that the process is effective in allowing a
group of individuals, as a whole, to deal with a complex problem."
Methodology Development
The development of the Delphi technique had its main genesis in earlier work to
overcome the shortcomings of human judgment for planning purposes. Douglas
MacGregor, for example, undertook a study in 1936 and formulated what came to be
known as the 'MacGregor effect'. This refers to his finding that predictions made by a
group of people are more likely to be right than predictions made by the same individuals
working alone (Loye 1978). It had also been well established by this time that face-to-face meetings had several problems such as being dominated by one or a few individuals,
falling into a rut of pursuing a single train of thought for long periods of time, exerting
considerable pressure on participants to conform and regularly becoming overburdened
with periphery information (Preble 1983, Riggs 1983).
The formulation of the Delphi technique was a response to these two major findings. The
first experiment using a Delphi style technique was carried out in 1948 in the hope of
improving betting scores at horse races (Woudenberg 1991, Preble 1983). However, it
was Helmer and Dalkey at the RAND Corporation in the 1950s who really advanced the
technique to increase the accuracy of forecasts.
From this beginning, the Delphi technique found its way into private corporations, think
tanks, government, education and academia. With such proliferation of use, the technique
also came to be modified to the point where we now have a family of 'Delphi-inspired techniques' in a broad range of applications (Martino 1973, van Dijk 1990). These are:
(1) the Conventional Delphi; (2) the Policy Delphi; and (3) the Decision Delphi
(Woudenberg 1991, van Dijk 1990).
The Conventional Delphi has two main functions: forecasting and estimating unknown parameters. This is typical of Delphi as it was originally conceived. It is used to
determine consensus on forecasting dates and developments in many areas - but
particularly in the area of long term change in the fields of science and technology. By
estimating unknown parameters, respondents make their own estimates regarding the
expected levels of an activity relative to present levels. The Policy Delphi on the other
hand, does not aim for consensus but seeks to generate the strongest possible opposing
views on the resolution of an issue and to table as many opinions as possible. The
objective is for it to act as a forum for ideas and to expose the range of positions
advocated and the pros and cons of each position (Bjil 1992). And finally the Decision
Delphi is utilized to reach decisions amongst a diverse group of people with different
investments in the solution. The subject of the decision, for which the Delphi is used as a
resolution mechanism, is usually harshly contested and complex and thus the structured
group communication process is deemed effective. Helmer (1994) has more recently
written on the potential for Delphi to also be used to assist in the process of decision
making to resolve adversarial situations such as physical planning, budgeting and
abortion.
Description
Although there are a range of Delphi techniques now in use and adapted for various
needs, it is still possible to talk of a broad procedural outline that they follow. Firstly, the
subject of the study is circulated to the participants in an unstructured manner to enable
them to comment on the issues in question. This material is then synthesized by the
monitoring team (one or more people co-ordinating the study) and distributed to the
participants in a questionnaire format. It needs to be mentioned here also that this first
round is very often circumvented by the issue being explored comprehensively by the
monitoring team which gathers the information and uses it to frame the questions to the
respondents. Secondly, a questionnaire is drawn up to ascertain the opinions of the
experts and to try and begin to elicit points of convergence and divergence. Thirdly, the
questionnaires are distributed repeatedly, each time with the information from previous
questionnaires that has been interpreted and reformulated by the coordinating team. The feedback often provides participants with textual and statistical material showing the group's response as well as their own, and asks them to reconsider their response or, if it differs radically from the group's, to justify it. The aim is to repeat this
process until finally a certain level of consensus or stability is reached. A final report,
pulling the responses together, is then prepared by the coordinating team (Masini 1993).
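The iterative procedure above can be sketched in code. The following Python fragment is a minimal illustration, not part of any published Delphi protocol: panelists' numeric estimates are summarized each round by median and interquartile range, the median is fed back, and the rounds stop when the spread falls below an assumed threshold. The revision rule is a toy stand-in for real expert judgment.

```python
import statistics

def summarize(estimates):
    """Median and interquartile range (IQR) of one round's estimates."""
    q1, med, q3 = statistics.quantiles(estimates, n=4)
    return med, q3 - q1

def run_delphi(initial, revise, max_rounds=10, iqr_target=1.0):
    """Iterate rounds until the spread of opinion falls below a target."""
    estimates = list(initial)
    for rnd in range(1, max_rounds + 1):
        med, iqr = summarize(estimates)
        if iqr <= iqr_target:
            return rnd, med, iqr          # consensus (or stability) reached
        # Controlled feedback: each panelist sees the median and revises.
        estimates = [revise(e, med) for e in estimates]
    return max_rounds, med, iqr

# Toy revision rule: each panelist moves halfway toward the group median.
toward_median = lambda e, med: e + 0.5 * (med - e)
rounds, consensus, spread = run_delphi([4, 7, 9, 12, 20], toward_median)
```

With these toy inputs the loop converges on a median of 9 after five rounds; in a real study the "revision" is of course the panelists' own reconsidered judgment, not a mechanical rule.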
Supplementing this broad outline, the many derivatives of the Delphi technique have
developed different processes to suit each application. For example, some studies have
interspersed the rounds with personal interviews with panel members, sometimes panel
members have been bought together in a meeting format to discuss the results of the
Delphi survey and to come to a final conclusion. Others use structured group conferences
such as the nominal group technique (NGT) and computer conferencing and communication
(Amara 1975, Webler et al 1991). The number of rounds can vary from two to ten. And
as mentioned above, the first round of questionnaires to the panel can be presented as an
inventory or it can be prepared by the monitoring team (researching, interviewing key people, pretesting the questionnaire, etc.) (Woudenberg 1991). The use of technology has
also found its way into Delphi procedures enabling it to be automated and thus
streamlined (Helmer 1983, Cundiff 1988, Cho, Jeong & Kim 1991).
Characteristics
The Delphi was designed to optimize the use of group opinion whilst minimizing the
adverse qualities of interacting groups. As such, it has four basic features: structured
questioning, iteration, controlled feedback and anonymity of responses. Structured
questioning is achieved through the use of questionnaires. This keeps a clear focus on the
study and enables the moderator/s to control the process and channel it into a compact
product. Iteration is the process by which the questionnaire is presented over a number of
rounds to enable participants to reconsider their responses. Controlled feedback is
achieved by feeding back to the panel members the responses of the whole group as well
as their own response for their reconsideration. This means that all the responses of the
panel are taken into account. Anonymity is achieved through the questionnaires ideally
giving group members the freedom to express their opinions without feeling pressured by
the wider group. In many Delphi studies, statistical aggregation of the group response is
also a common feature. This means that where consensus is required at the end of the
process, it is taken to be the median response of the panel and the spread of the
interquartile range as the degree of consensus (Rowe, Wright & Bolger 1991). Another
version of gaining consensus is for the respondents to make a self appraisal as to their
competence in giving their responses. The answers from those who grade their
competency level high are then used as the median, rather than the group as a whole.
Helmer (1983) explains the rationale for this, arguing that it has been found that these
experts achieve a result closer to the actual outcome than the rest of the group.
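As a rough sketch of this aggregation, the following Python fragment computes the group median, the interquartile range read as the degree of consensus, and the median of the self-rated high-competence subgroup in Helmer's variant. The panel data are invented, and the competence cutoff of 4 on a 1-5 scale is an assumption for illustration only.

```python
import statistics

# Hypothetical panel: each tuple is (estimate, self-rated competence 1-5).
responses = [(30, 2), (42, 5), (38, 4), (55, 1), (40, 5), (35, 3), (44, 4)]

estimates = [e for e, _ in responses]
group_median = statistics.median(estimates)

# Spread of the whole panel: the interquartile range, taken as the
# degree of consensus.
q1, _, q3 = statistics.quantiles(estimates, n=4)
iqr = q3 - q1

# Helmer's variant: take the median of only those respondents who rate
# their own competence highly (the cutoff of 4 is an assumption here).
expert_median = statistics.median(e for e, c in responses if c >= 4)
```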
The respondents and the coordinating team are advised to be inter-disciplinarian with at
least one person on the monitoring team having a working knowledge of the issue in
question. By having as diverse a panel as possible, biases can be minimized (Masini 1993, Webler et al 1991). Where consensus is required, questionnaires need to be designed so that answers are neither so long that consensus becomes impossible nor so short that the consensus is superficial (Masini 1993). The monitor, in preparing the feedback,
also needs to cull superfluous information to keep the group focused.
Application of Delphi Technique
Delphi and other techniques based on collective opinions. - Several forecasting
techniques rely on group interactions to arrive at a collective opinion. In the Delphi
method, different persons respond individually and confidentially to a sequence of
questions. At each stage in the sequence, the results from the preceding questions are
revealed to everyone. Then, each member of the group is given the opportunity to change
his or her assumptions and predictions. Because these changes are made confidentially,
each individual is free to change a previous position without being influenced by personal
relationships. In a variation of this method, the discussions are open and shared.
However, this sometimes gives dominant personalities a great amount of influence, e.g.,
by getting other individuals to "agree" with them.
All planned changes to use and conserve forests aim to stabilize or improve social,
environmental, and economic conditions. Good strategic planning has to anticipate the
type and magnitude of impacts to be expected. The principal techniques for this are social
assessment, environmental assessment, and benefit-cost analysis. These approaches are
increasingly mixed and integrated, e.g., benefit-cost analysis of environmental changes,
social assessment of capital investments, and so on:
Social Assessments. - Social assessments have many forms, ranging from ethnographic
studies to formal surveys. Some ethnographic studies may take a trained anthropologist
several years to complete. Formal surveys can also be time-consuming and expensive,
especially if they attempt to include hundreds of people. For these reasons, an increasing
number of practical cases aim at an intermediate level of analysis, such as "rapid rural
appraisal" (RRA) and "participatory rural appraisal" (PRA). In RRA and PRA, you rely
on a combination of interviews, direct observations, and small-group discussions to
identify social issues and problems. For most medium and long-term strategic planning,
this should be adequate. In other cases, your initial findings will suggest that you need
wider coverage or deeper analysis, leading to subsequent stages of fact finding.
Environmental Assessments - Ideally, environmental assessments provide an agency
with a complete ex ante forecast of biophysical impacts, their distribution in space and
time, and an analysis of the best ways to mitigate negative impacts. But in practice, it is
unlikely to achieve this within the time frame of the planning of the agency. Rather, the
agency may need to be satisfied with checklists that state possible impacts. In truly
complex or controversial cases, the planning team has to allow for the time and costs of a
specialized environmental impact assessment (EIA). Manuals on EIA are available from
most of the United Nations agencies, international development banks, and international
aid agencies. These can be satisfactory if an agency adapts them for the circumstances of
its own country. Moreover, an increasing number of national governments are writing
their own EIA policies and procedures.
Benefit-Cost Analysis. - An agency wants its strategic actions to be measured by
standards that are financial and economic, e.g., through the application of benefit-cost
analysis (BCA). For each major component of a strategic plan, the agency should indicate
what levels of investments and recurrent expenditures will be needed. Moreover, the
agency tries to establish measures of private and social profitability to guide selection
among planning options. This demands a fairly complete set of prices and values, now
and into the future. This is almost always a large task, but is generally worth the effort if
the planned investments are large. The planning team needs to understand the difference
between financial vs. economic BCA, as well as techniques for extending BCA to cover
environmental benefits and costs.
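The core BCA arithmetic can be illustrated with a short Python sketch. The cash flows, the 8% discount rate, and the five-year horizon below are all invented for illustration; a real analysis would also distinguish financial from economic prices, as the text notes.

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def benefit_cost_ratio(rate, benefits, costs):
    """Discounted benefits divided by discounted costs."""
    return npv(rate, benefits) / npv(rate, costs)

# Invented plantation component: a large up-front cost, then benefits
# that grow over the following four years.
costs    = [1000, 100, 100, 100, 100]
benefits = [0, 200, 400, 600, 800]
rate = 0.08  # assumed (social) discount rate

project_npv = npv(rate, [b - c for b, c in zip(benefits, costs)])
bcr = benefit_cost_ratio(rate, benefits, costs)
```

A plan component is attractive when its NPV is positive (equivalently, its BCR exceeds 1); with these made-up numbers the sketch yields a BCR of roughly 1.2.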
Evaluation
It is very difficult to evaluate the accuracy and reliability of a judgment method such as
the Delphi, because the technique is based on determining the opinion of panel members
and the findings thus become person and situation specific. Therefore, each application of
the methodology will be different, preventing comparison and measurement from being carried out. Woudenberg (1991) argues that the only way to evaluate its accuracy is to
compare it with other judgment methods in the same situation and many of the previous
evaluations of Delphi have not done this. In addition, much of the work undertaken to
evaluate the Delphi technique has been done with university students answering almanac-type questions. This raises questions about the applicability and validity of results when
trying to evaluate the technique for its effectiveness in generating alternative futures
(Amara 1975).
Dalkey wrote two articles in 1968 and 1969 summing up most of the negative aspects of
Delphi, including the strong response of the group to conform to the statistical feedback
of the panel. However, it was Sackman in 1974 who provided the major critique of the
Delphi attacking it on the grounds that it was unscientific and its application was highly
questionable. His view was that the method lacked the necessary rigor to be taken
seriously as a scientific methodology. Rieger (1986) argues that the Delphi drew this
response from Sackman because the creation of the method was an attempt to move
beyond the conventional research paradigm of which Sackman was a member. It has also
been argued that Sackman's critique was based on studies that had used the technique
sloppily, thus causing his evidence to be selective.
Linstone (1975) responded to Sackman by agreeing with Coates (as cited in Rowe,
Wright and Bolger 1991) that the Delphi method must be considered as one of last resort
- to deal with extremely complex problems for which there are no other models. "...one
should expect very little of it compared to applicable analytical techniques. One should
expect a great deal of it as a technique of last resort in laying bare some crucial issues on
a subject for which a last resort technique is required...If one believes that the Delphi is of
value not in the search for individual knowledge but in the search for public wisdom; not
in the search for individual data but in the search for deliberative judgment, one can only
conclude that Sackman missed the point"(quoted in Linstone 1975:573). Hughes (1985)
concurs, arguing that the Delphi technique is more about opinion gathering than
explanations of causality and thus its use is not a retreat from objectivity. Judgment and
informed opinion have always played a crucial role in human enterprises and will
continue to be useful so long as the structure of an investigation is made subject to some
of the safeguards that are commonly used to assure objectivity in any scientific inquiry
(Brown 1968).
Other criticisms that have been leveled at Delphi are:
* it has not been shown consistently that the results this method produces are any better
than those achieved through other structured judgmental techniques (Rowe, Wright &
Bolger 1991);
* a Delphi study is at the mercy of the world view and biases of the coordinating or
monitor team, who choose the respondents, interpret the returned information and
structure the questions. There is a great deal of debate therefore over whether this
coordinating group should be chosen from within or outside the organization initiating
the study and whether they should be experienced in the subject area of the study in
question (Masini 1993);
* The way the process and questionnaire is structured can lead to a bias (like IQ tests),
which assume a certain cultural background. People may give responses they think the
monitoring group wants to hear, or they may not respond at all. Thus, the cultural
background of respondents will impact upon the results (Linstone 1978);
* Simmonds (1977) argues that one of the key weaknesses in using the Delphi technique is
that certain questions do not get asked as they do not seem important when the study
begins. However, once it is underway new questions cannot be added, which in turn can
weaken the study considerably;
* the process of choosing the panelists is often not considered seriously enough. Yet, it is
the calibre of the panelists which determines the quality of the outcomes of the study;
* in the process of achieving consensus, extreme points of views run the risk of being
suppressed, when in fact they may provide important new information or insights; and
* the flexibility of the technique means it can be adapted to a whole range of situations
which in turn can make it vulnerable to misrepresentation and sloppy execution (Amara
1975).
Masini (1993) argues that these reasons are why developing countries have rarely used
the methodology and when they have, it has been on narrow subjects. Reliance on experts
in such countries has made potential users wary of the Delphi technique.
Linstone and Turoff (1975:6) also outline some of the common reasons for failure of the
Delphi. These are:
"* Imposing monitor views and preconceptions of a problem upon the respondent group
by overspecifying the structure of the Delphi and not allowing for the contribution of other perspectives related to the problem
* Assuming that Delphi can be a surrogate for all other human communications in a
given situation
* Poor techniques of summarizing and presenting the group response and ensuring
common interpretations of the evaluation scales utilized in the exercises
* Ignoring and not exploring disagreements, so that discouraged dissenters drop out and
an artificial consensus is generated
* Underestimating the demanding nature of a Delphi and the fact that the respondents
should be recognized as consultants and properly compensated for their time if the
Delphi is not an integral part of their job function."
In terms of its positive contribution to futures methodologies, Ono & Wedemeyer (1994)
argue that the accuracy of the technique in short-range forecasting has been proved fairly conclusively. Similarly, in their own study carried out in 1976 and evaluated in 1994, they show how the technique is also valid in long-range forecasting. Ascher & Overholt
(1983:259) likewise show from their own experience that Delphi studies have an
excellent record of forecasting computer capability advances, nuclear energy expansion,
energy demand and population growth. "They offer a very inexpensive means of
achieving interdisciplinary interaction." The technique is also said to expose real
agreements and disagreements among respondents as well as giving the facilitator simple
and direct control over the scope of the study (Amara 1975).
The strong critique of the Delphi in the 1970s was aimed at the conventional Delphi
which focuses on forecasting and estimating unknown parameters. Other Delphi methods
that have since evolved such as the Policy and Decision Delphi have drawn less attention
in the way of a critique. Evaluation of these types of Delphi is scarce, and it is not known whether the drawbacks of the quantitative Delphi are also to be found in the Policy and Decision Delphis.
Seldom is an agency able to estimate completely the social, environmental, financial, and economic impacts of a strategic plan in quantitative terms. Even if it could, it has no way
to combine these dimensions in one measure, e.g., monetary value. In principle, planners
solve this dilemma through multi-criteria analysis that uses mathematical programming.
But in practice, multi-criteria analysis is a sophisticated planning tool that few of the
interest groups and team members will be able to understand.
On the other hand, almost everyone should be able to understand a matrix for trade-off
analysis. The agency accomplishes this by ranking proposed actions according to
different evaluation criteria. This can be another way to build participation into planning,
and to favor a "systems approach" in comparing alternatives.
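As a minimal sketch of such a trade-off matrix (with invented actions, criteria, and 1-4 scores, where higher is better), the ranking step reduces to summing each row:

```python
# Hypothetical trade-off matrix: each proposed action is scored from
# 1 (worst) to 4 (best) against each evaluation criterion.
criteria = ["social", "environmental", "economic"]
scores = {
    "Action A": [3, 2, 4],
    "Action B": [4, 3, 1],
    "Action C": [1, 4, 3],
}

# Unweighted row sums; a real exercise might weight the criteria.
totals = {action: sum(row) for action, row in scores.items()}
best = max(totals, key=totals.get)
```

Here Action A totals 9 and would rank first; attaching weights to the criteria is the natural refinement when some matter more than others.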
An Example: Methods to Clarify Issues and Problems in EIA.
All organizations, private and public, employ a variety of methods to clarify issues and
problems. An agency needs to develop competence in selecting these methods and in
assisting planning groups to use them. This can be important for building up working
relationships within the planning team, with the advisory committees, and with the
interest groups.
Box 1. Criteria for Evaluating the Appropriateness of Planning Tools
Relevance: In what ways does this method help an agency answer important questions and focus on key issues?
Acceptability: How well is the method developed, and to what extent is it accepted as a standard instrument (in your own country's context)?
Cost: How much time and how many resources does an agency need to adapt and apply this method?
Data Requirements: Does an agency have - or will it be able to generate - the data for a reliable application of the method?
Breadth and Versatility: To what extent can the method represent cultural, intrinsic, aesthetic, and other non-market aspects of forests?
Distributional Aspects: Does the method help address gains and losses: (i) across the society, and (ii) between present and future generations?
Communications: To what extent can ordinary people understand this method?
Sustainability: What are the chances that you will continue to use this method (and therefore to refine and improve it in the future)?
Source: Adapted from Nilsson-Axberg (1993), Forestry Sector and Forest Plantations
Sub-Sector Planning in South and South-East Asia, Swedish University of Agricultural
Sciences, SIMS Report 34, Uppsala, p. 145.
The best sources of problem-solving tools for an agency are books, articles, and videos in
management science. These are increasingly available in even the remotest places of the
world - and in an increasing number of languages. Here, we briefly mention some of the
classical methods that your planning team is most likely to need:
Brainstorming. - Brainstorming is generally superior to conventional committee
meetings for rapid generation of creative ideas. Suppose that an improvement goal has
been defined, such as to increase the effectiveness of an agency's agroforestry program.
In brainstorming, team members make rapid suggestions on how to achieve this.
Somebody writes down all of the suggestions (e.g., on a large sheet of paper) - even those
that at first may seem strange or impractical. All ideas are acceptable, and nobody is
allowed to criticize another person's suggestion. The agency's aim is to quickly produce ideas which only later will be evaluated for feasibility, cost, and other decision criteria. In the end, it will arrive at a smaller set of proposals after the initial ideas are modified, combined, or eliminated. Box 2 summarizes three variations of this method.
Box 2. Variations of "Brainstorming" to Generate Creative Ideas

1. Call Out Ideas Freely in Any Order
Each person calls out as many ideas as possible in random order. Each idea is recorded where everyone can see it. Continue until the time limit is reached, or until nobody has anything to add.
Advantages: spontaneous and fast; no restrictions.
Disadvantages: quiet persons may not speak out; a few powerful persons may dominate; the process can be chaotic if everyone talks at the same time.

2. Call Out Ideas in Orderly Sequence
Each person presents an idea in turn (e.g., by going systematically around a table). If a person has nothing to add, the person says "pass." Continue until the time limit is reached, or until nobody has anything to add.
Advantages: everyone has the chance to participate; it is more difficult for powerful personalities to control the session.
Disadvantages: people can be frustrated while waiting for their turn.

3. Each Person Writes Ideas on Paper
Each person writes down as many ideas as possible on a piece of paper. The papers are collected, and the ideas are written where everybody can see them.
Advantages: all contributions are anonymous; ideas can be recorded in an organized way.
Disadvantages: creativity is lost because persons are not able to react to the ideas suggested by others.
Source: Adapted from James H. Saylor, 1992, TQM Field Manual, McGraw-Hill, Inc.,
New York, p. 80-82.
Problem Statement Guidelines. - Sometimes an agency's improvement goals are vague
and poorly defined. It applies problem statement guidelines to sharpen the definitions of
any problem into its what, when, where, who, why, and how dimensions. Each team
member is asked to state the problem according to these guidelines. In a subsequent step,
the agency compares these statements to make a final problem statement acceptable to
the group as a whole.
Strengths and Weaknesses, Opportunities and Threats (SWOT). - In relation to a given
goal or strategy, an agency wants to take advantage of its strengths and opportunities. At
the same time, it has to be aware of weaknesses and threats that will impede its progress.
The SWOT framework helps the agency think about this in a direct and systematic way.
Problem Trees. - For complex issues, an agency systematically identifies causes and
effects with the help of a problem tree. A problem tree is a diagram of boxes and arrows
that show causes at a low level, leading to effects at a higher level. The causes are the
roots of the tree, and the effects are the fruits. The team lists different problems, and then
connects them with arrows to show linkages. It repeats this several times until the
problem tree is complete and logical. The problem tree directs the attention of the team to
fundamental, deep-rooted explanations.
Box 3. Example of a Problem Tree
Insufficient Cooperation Between NGOs and The Forestry Agency in Policies and
Actions for Forest Conservation
Source: Adapted from FAO, 1994, Formulation of Agricultural and Rural Investment
Projects: Planning Tools, Case Studies, and Exercises, Volume 2 (Reconnaissance),
Rome, Italy, Tool No. 5.
Logical Framework. - The logical framework encourages planners to specify cause-and-effect relationships, and to explicitly state all assumptions. At the top of the framework is a clearly defined goal. Lower levels of the framework specify the why, what, and how to achieve this goal. For these linkages to be possible, the internal logic of planning must be sound, and the assumptions of the agency must be valid. Box 4 illustrates this for a strategy to reduce depletion pressures on fuelwood supplies.
Box 4. An Example of a Logical Framework

Main Goal: To reduce depletion pressures on fuelwood supplies.
Summary: Reduce fuelwood removals from upland pine forests, South Region.
Indicators: Number of headloads removed.
Means to Verify: Rapid appraisals; spot checks; periodic surveys.

Why?
(1) To lessen women's work: fuelwood collection is time-consuming. Indicators: number of hours per week per woman for fuelwood collection. Means to verify: focus groups; time studies.
(2) To aid forest regeneration and forest growth: trees are mutilated and regeneration is poor. Indicators: field evidence of damaged trees and regenerating trees. Means to verify: walk-through spot checks; inventories.

What?
Fuelwood is more available and better utilized: increase supply and decrease demand. Indicators: reduced average walking distance per headload; fewer headloads per week; smaller size of headloads. Means to verify: interviews; time studies; physical measurements (mass and volume).

How?
(1) Grow energy trees in home gardens; promote clean-burning species. Indicator: plant 5,000 trees per year of species X, Y, and Z, starting in year 1998. Means to verify: walk-by inspections and household surveys.
(2) Increase the adoption of cooking stoves; only 1 in 50 families uses them now. Indicator: increase to 2 in 50 families by year 2001. Means to verify: household surveys.
(3) Seek kerosene subsidies; debate through political means. Indicator: reduce kerosene price 25% by 1999. Means to verify: market studies.
Force-Field Analysis. - Most goals are characterized by restraining forces that hold back an agency and driving forces that push it forward. In force-field analysis, the agency identifies these forces, and it assesses its degree of influence to control them. If the agency knows which forces are holding it back and which can carry it forward, then the planning focuses on how to reduce the former and exploit the latter. The agency rates the different forces for both importance and the extent of its control over them. The agency concentrates its actions on the high-rated forces. An example is presented in Box 5 for improving success in afforestation and reforestation.
Box 5. An Example of a Force-Field Analysis

The force-field is a diagram of driving forces carrying the agency to the right ("progress"), and restraining forces pushing the agency to the left ("obstacles"). The agency is now at position A, and it wants to be at Position B in the future.

In the following example of afforestation and reforestation, the agency lists all the driving forces and restraining forces. It rates each force by its importance (i.e., amount of impact) and by its degree of control over it. Let's suppose that these scales are measured from "1" (low) to "5" (high), and suppose that the ratings of the agency give the following results (Importance / Your Control / Total):

Driving Forces
Rising prices of wood products: 2 / 2 / 4
Genetically-improved planting stock: 2 / 4 / 6
Improved operational planning: 4 / 5 / 9
Increasing public support: 2 / 2 / 5

Restraining Forces
Decreasing agency budget: 2 / 2 / 5
Poor procedures for hiring and paying field workers: 5 / 1 / 6
Losses to fires and grazing: 4 / 4 / 8
Irregular annual precipitation: 5 / 3 / 8
If the agency can find some forces that explain others, the effectiveness of its actions will
be greater. For example, suppose that "improved operational planning" can reduce "losses
to fires and grazing" as well as "poor procedures for hiring and paying field workers."
Because it has these cross-impacts, the agency gives special attention to "operational
planning."
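The rating-and-prioritizing step can be sketched in a few lines of Python. The force names and ratings below are hypothetical (loosely echoing Box 5), and the total is simply importance plus control, as in the worked example.

```python
# Hypothetical force ratings: each force gets an importance score and
# a control score, both on a 1-5 scale.
forces = {
    "Rising prices of wood products":                     (2, 2),
    "Improved operational planning":                      (4, 5),
    "Losses to fires and grazing":                        (4, 4),
    "Poor procedures for hiring and paying field workers": (5, 1),
}

# Total = importance + control; the agency concentrates its actions
# on the highest-rated forces.
totals = {name: imp + ctl for name, (imp, ctl) in forces.items()}
priority = sorted(totals, key=totals.get, reverse=True)
```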
Comparison Matrix. - Frequently, an agency needs to rank several options in a
systematic way to arrive at a single choice. The comparison matrix does this through one-
by-one comparisons, indicating how any one option (Choice A) compares with all others
(Choices B, C, ... Z). The agency constructs a frequency table showing how many
times each option is rated superior to the others.
Box 6. The Comparison Matrix
The comparison matrix lays out the options of the agency in rows and columns.
Working in a group or as individuals, the planning team makes one-by-one comparisons
among the intersecting pairs. This is a systematic approach to choose among several
options, such as when the agency needs to compare programs that have different elements
in them. For illustration, suppose that an agency has the following program options and
the following decision criteria:
            Biodiversity   Political Acceptability   Social Equity   Annual Expenditure
Option A        High                 ++                    ++            Very Large
Option B        Neutral              ++                    ++            Large
Option C        High                 +++                   +             Modest
Option D        Poor                 +++                   ?             Large
Construct the Comparison Matrix:
            Option B    Option C    Option D
Option A    Choose A    Choose C    Choose A
Option B                Choose C    Choose B
Option C                            Choose C
In these hypothetical comparisons, Option C is preferred three times. The next best
choice is Option A, selected twice.
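The tally behind the frequency table can be sketched in a few lines. This is a minimal illustration assuming the pairwise choices from the hypothetical matrix above; the dictionary of choices is just one convenient way to record them.

```python
from collections import Counter

# Each key is a pair of options; the value records which one was preferred,
# following the hypothetical comparison matrix above.
pairwise_choices = {
    ("A", "B"): "A",
    ("A", "C"): "C",
    ("A", "D"): "A",
    ("B", "C"): "C",
    ("B", "D"): "B",
    ("C", "D"): "C",
}

# Frequency table: how many comparisons each option wins.
wins = Counter(pairwise_choices.values())
print(wins.most_common())
```

Counting the values reproduces the conclusion in the text: Option C wins three comparisons and Option A wins two.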
Role Playing. - An agency wants members of its planning team to interpret problems
and feel emotions in the same way as its actual interest groups. The use of role playing
can be surprisingly effective for this. The agency assigns different team members to "act"
as if they are personalities among its interest groups. Role playing is never a substitute for
genuine participation by these groups. But it can be used within the planning team to
widen perceptions, compare options, and prepare for comments by the real personalities.
Box 7. The Trade-Off Matrix
The trade-off matrix helps the agency choose among alternatives when it has multiple
decision criteria to consider. (See Box 15 and Worksheet 18 for another approach to
multi-criteria decisions.) In the trade-off matrix, it ranks proposed actions according to
each criterion. In the example below, the ''total score" is the horizontal unweighted sum
of the rankings.
Strategy      Social      Environmental   Economic    Total
or Option     Ranking*    Ranking*        Ranking*    Score**
A                2              1              3          6
B                1              3              4          8
C                5              4              5         14
D                3              5              2         10
E                4              2              1          7
* 1 = highest; 5 = lowest
** You may choose to weight some criteria more heavily than others. Suppose that you
want to emphasize social more than environmental and economic factors. Depending on
the weights you apply, this may favor Strategy B over Strategy A.
Source: Adapted from Yves C. Dubé, 1995, "Macroeconomic Aspects of Forestry Sector
Planning", p. 69 in D.G. Brand (ed.), Forestry Sector Planning: Proceedings of a Meeting
Held 18-22 September 1994 in Anchorage, Alaska. Canadian Forest Service/Food and
Agriculture Organization, Ottawa, Canada.
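The weighting idea in the footnote is easy to demonstrate with the rankings from the table. The sketch below uses a weight of 4 on the social criterion; that weight is an arbitrary illustrative choice, not a value from the source.

```python
# Trade-off matrix rankings from the example above (1 = highest, 5 = lowest),
# so a LOWER total score indicates a better option.
rankings = {
    "A": {"social": 2, "environmental": 1, "economic": 3},
    "B": {"social": 1, "environmental": 3, "economic": 4},
    "C": {"social": 5, "environmental": 4, "economic": 5},
    "D": {"social": 3, "environmental": 5, "economic": 2},
    "E": {"social": 4, "environmental": 2, "economic": 1},
}

def total_score(ranks, weights=None):
    """Weighted sum of rankings; unlisted criteria get weight 1."""
    weights = weights or {}
    return sum(weights.get(c, 1) * r for c, r in ranks.items())

unweighted = {s: total_score(r) for s, r in rankings.items()}
# Emphasize the social criterion (weight of 4 chosen for illustration).
social_weighted = {s: total_score(r, {"social": 4}) for s, r in rankings.items()}

best = min(unweighted, key=unweighted.get)            # Strategy A (score 6)
best_social = min(social_weighted, key=social_weighted.get)  # Strategy B
print(best, best_social)
```

Unweighted, Strategy A scores 6 against Strategy B's 8; with the social rankings weighted by 4, B's strong social ranking pulls its total below A's, favoring Strategy B, as the footnote suggests.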
To conclude, the social, environmental, and economic tools an agency selects for its
strategic planning should be determined by:
The importance and sensitivity of a proposed policy, program, or project;
The time and budget for conducting the analysis;
The technical and administrative capacity within the agency - or through the help of
external assistance - to undertake the analysis; and
The extent to which the interest groups will accept the analysis as useful and valid.