Stopping Rule Use
During Information Search
in Design Problems
Glenn J. Browne
Rawls College of Business Administration
Texas Tech University
Mitzi G. Pitts
Fogelman College of Business & Economics
The University of Memphis
Send all correspondence to:
Glenn J. Browne
Information Systems & Quantitative Sciences
Rawls College of Business Administration
Texas Tech University
Lubbock, TX 79409-2101
Email: gbrowne@ba.ttu.edu
March 30, 2003
Stopping Rule Use During Information Search in Design Problems
ABSTRACT
Information search is critical in most decision-making tasks. An important aspect of information
search is the stopping rule used by the decision maker to terminate information acquisition.
Decision-making problems may be usefully decomposed into design problems and choice problems.
The distinction is critical because the goals of stopping behavior in the two types of problems are
quite different. In design problems, the focus is on the sufficiency of information obtained for
problem structuring and generating alternatives, while choice problems focus on convergence toward
a solution. While previous research has studied stopping behavior in choice problems, the present
research is concerned with stopping rule use during information search in design problems. We
presented 54 practicing systems analysts with an information search problem in a systems
development context and asked them to gather information for a proposed system. Protocols of the
search sessions were analyzed, and stopping rules used and information gathered by the analysts
were identified. Results indicated that the use of certain stopping rules resulted in greater quantity
and quality of information gathered, suggesting the prescriptive application of these rules.
Additionally, stopping rule use differed between more experienced and less experienced analysts.
Finally, stopping rule use, rather than analyst experience, accounted for the quantity and quality of
information elicited. Implications for information search theory and practice are discussed.
KEYWORDS: Stopping Rules, Information Search, Requirements Determination, Design Problems,
Systems Development, Decision-Making Processes
Stopping Rule Use During Information Search in Design Problems
INTRODUCTION
Information search is a critical aspect of most decision-making tasks. Information is sought
to illuminate possibilities, to structure problems, and to make choices or to design artifacts. A
crucial assessment for a decision maker is establishing when acquired information is sufficient to
continue to the next step in a decision-making process. Since the subsequent steps in the process
typically rely on this acquired information, failures to gather adequate and/or appropriate information
can have strong negative impacts on the eventual decision or problem-solving effort. The heuristics,
or stopping rules, used by decision makers to decide when to terminate information search are the
subject of the current research.
An important distinction can be made between information search in design and choice
problems.
Information search for design occurs early in a decision-making process, while
information search for choice occurs later in the process (Simon, 1981). The purposes of the search
behavior in these two types of problems are generally different. The goal of design search behavior
is to explore future possibilities and preferences, to structure the problem, and, in many cases, to
determine what choices are available. Design problems are characterized by divergent thinking, in
which the decision maker attempts to think in a variety of directions in open inquiry (Couger, 1996).
In contrast, in choice problems the decision maker gathers evidence to select one or more of the
available options. Choice is thus dominated by convergent thinking, in which the decision maker
converges on a solution or choice (Couger, 1996; Guilford, 1957).
Stopping rules for information search in choice problems have been investigated in excellent
research by Gigerenzer and colleagues (e.g., Gigerenzer and Todd, 1999; Gigerenzer, Todd, and
ABC Research Group, 1999; Gigerenzer and Goldstein, 1996), Rapoport and colleagues (e.g.,
Busemeyer and Rapoport, 1988; Rapoport, 1966; Rapoport and Tversky, 1970; Seale and Rapoport,
1997), Aschenbrenner, Albert, and Schmalhofer (e.g., 1984; Schmalhofer et al., 1986; Bockenholt et
al., 1991), and many others (e.g., Beach and Strom, 1989; Brickman, 1972; Busemeyer, 1982;
Connolly and Gilani, 1982; Meyer, 1982; Saad and Russo, 1996; Svenson, 1992; Swensson and
Thomas, 1974). Stopping rules for information search in design problems have been studied much
less. Understanding why decision makers terminate their information acquisition is critical, since the
remaining stages of decision making (including choice) rely on the information gathered. Therefore,
the current research focuses on stopping rules for information search in the design problem context.
The design setting we have chosen for investigating stopping rules is information systems
development. This context has features characteristic of most design problems; e.g., there is a need
for information gathering to determine goals, constraints, and alternatives for the eventual decision
or artifact (Smith and Browne, 1993). Information systems development requires an investigation of
the functional and technical needs for the system together with an exploration of design alternatives.
It is generally recognized that gathering information from people who will eventually use an
information system (the “users”) is the most important stage in all of systems development (Davis,
1982; Leifer, Lee, and Durgee, 1994; Vessey and Conger, 1993; Watson and Frolick, 1993). Termed
“information requirements determination” in this context, the gathering of information allows
systems analysts to build their understanding of the problem to be solved and the definition of the
users’ needs and expectations for a proposed system. The largest source of information systems
development failures is incomplete and inaccurate information requirements (Bostrom, 1989; Byrd
et al., 1992; Davis, 1982; Vessey and Conger, 1993; Watson and Frolick, 1993; Wetherbe, 1991;
Whitten and Bentley, 1998), and incomplete requirements account for approximately two-thirds of
the maintenance costs for information systems (Lientz and Swanson, 1980; Ramamoorthy et
al.,1984; Shemer, 1987). Therefore, given the impact of requirements determination on eventual
systems outcomes, information systems development is a useful design context for investigating
stopping rule use in information search. Moreover, the similarity of the information gathering phase
of systems development to most decision-making problems means that the results of the current
research should be generalizable to many contexts.
The paper is organized as follows. The next section sets the context for the research and
reviews the use of stopping rules in information acquisition. This is followed by a description of the
hypotheses tested and the methodology utilized. Finally, the results of an empirical study with
practicing systems analysts are provided, followed by a discussion of the implications of this work
for information search behavior.
STOPPING RULES IN INFORMATION SEARCH
Background
During a decision-making process, an individual expends costly resources (e.g., time and
cognitive effort) in “predecisional information gathering in the hopes of reducing the risk of later
decision error” (Connolly and Thorn, 1987, p. 397). Information gathering requires that the
individual make a judgment regarding the sufficiency of the information obtained and then decide
whether to acquire additional information. Normatively, sufficiency is characterized by both the
completeness and correctness of the information (Smith et al., 1991). When a decision maker
believes the acquired information is sufficient, he or she stops gathering additional information and
moves to the next step in the decision-making process. For example, a city planner must decide when
to stop gathering information from various constituents and begin to envision design alternatives. A
person contemplating an automobile purchase must decide when to stop assessing his needs and
preferences and begin to find cars that address those requirements. A systems analyst must decide
when to stop gathering information from users and proceed with development of the system. Such
situations have been termed “optional stopping problems” (Rapoport, Lissitz, and McAllister, 1972).
In such problems, the decision maker invokes some heuristic or test, called a stopping rule, to
determine the completeness or sufficiency of the information obtained.
A person applying a stopping rule has the conflicting goals of effectiveness (trying to acquire
the best information possible) and efficiency (not wasting time and money on costly information
acquisition that is not needed). Consequently, it is important for the person to balance acquisition
costs against improved completeness and accuracy of information. Unfortunately, in design
problems in particular, costs may be difficult to identify, while benefits are often realized only in the
long term. Often, the value of a piece of information cannot be determined until much later in the
decision-making process, if at all.
Experimental results indicate that humans do not balance information costs and benefits well
(Connolly and Gilani, 1982; Connolly and Thorn, 1987; Pitz, Reinhold, and Geller, 1969).
Generally, decision makers fall victim to two types of acquisition errors: overacquiring and
underacquiring (Connolly and Thorn, 1987). Both types of errors are the result of sub-optimal
application of stopping rules. Overacquisition and underacquisition of information have received
considerable attention in the general decision making literature (e.g., Ackoff, 1967; Connolly and
Gilani, 1982; Hershman and Levine, 1970; Pitz and Barrett, 1969). Overacquiring involves
gathering more information than is needed, causing excessive acquisition costs. In requirements
determination for systems development, overacquisition results in wasted time and resources in the
gathering and analysis of requirements. Underacquiring, on the other hand, results in a deficiency in
acquired information, creating the need for more acquisition later or a risk of decision error (if no
additional information is gathered).
Underacquisition of information during requirements
determination results in an incomplete view of the goals and functionality of the proposed system,
leading to potential design problems, iterative redesign, implementation difficulties, and possible
system failure. The costs associated with discovering information inadequacy during the latter stages
of systems development (and in decision making more generally) are typically several orders of
magnitude higher than problems discovered during information gathering (Boehm, 1981, Shemer,
1987). Therefore, it is arguable that the costs of underspecification are much greater than the costs of
overspecification when the entire decision-making problem is considered.1
The concept of stopping rules has been investigated extensively in decision-making theory
and optional stopping contexts. Numerous normative stopping rules have been recognized
(Busemeyer and Rapoport, 1988; Pitz et al., 1969; see also Goodie et al., 1999). For example, past
research has identified stopping rules based on the economic value of information (Spetzler and Staël
von Holstein, 1975), the expected value of additional information (Kogut, 1990), and the expected
loss from terminating information acquisition (Busemeyer and Rapoport, 1988). However, these
normative models usually fail to describe the actual behavior of decision makers. The computations
required by these optimal stopping rules imply that the decision maker must “think ahead” to the
final decision to be able to assess the value of additional information (Busemeyer and Rapoport,
1988). Thinking ahead, however, is cognitively difficult for people due to the limited capacity of
working memory. The decision maker is unable to hold and evaluate enough information in working
memory to consider all possible outcomes fully.2 Furthermore, evidence suggests that a decision
maker’s planning horizon is seriously restricted (Rapoport, 1966). Consequently, people may fail to
appreciate dependencies and interactions between future events.

1 Additionally, experimental results have shown that underacquisition of information is likely in tasks in which the
number of important information items is large (e.g., Connolly and Gilani, 1982). Since the number of important
requirements is always large in systems development, this is another reason for concern about underacquisition of
information in the present context.
There is evidence that people perform sub-optimally when acquiring information as a result
of these cognitive challenges. This sub-optimal performance includes stopping acquisition too soon
(Baron et al., 1988; Perkins et al., 1983; Rapoport and Tversky, 1970; Seale and Rapoport, 1997),
failing to access relevant information (Fischhoff, 1977; Shafir and Tversky, 1992), failing to consider
all appropriate alternatives (Farquhar and Pratkanis, 1993), and underestimating the amount of
missing information (Fischhoff et al., 1978). Further, prior research in choice tasks has shown that
people’s knowledge about their own stopping behavior is not reliable in terms of judgmental
accuracy, and such stopping behavior may even be arbitrary (Browne et al., 1999, in the context of
categorization of choices).
Descriptive Stopping Rules
As a result of the failure of normative models to describe the stopping behavior of individuals
accurately, stopping rules have been proposed that attempt to represent the actual cognitive processes
of people as opposed to the idealized processes required by the normative models. As noted earlier,
many researchers have studied stopping rules used in choice problems. Typical choice situations
studied have included choosing an apartment to rent (Saad and Russo, 1996), selecting a one-year
subscription to a choice of magazines (Schmalhofer et al., 1986), and choosing a summer vacation
venue (Bockenholt et al., 1991).
2 Of course, methodologies such as decision analysis include mechanisms to reduce the load on working memory and
to help decision makers decide when to stop gathering information. In information systems development, systems
development lifecycle (SDLC) methodologies aid analysts in structuring the information gathering process.
However, most managerial problems are undertaken without the use of decision analytic techniques, and SDLCs do
not provide guidance on when to stop gathering information or on how much information is enough.
In choice problems, numerous stopping rules have been found to be descriptive of individual
behavior in at least some contexts. For example, Gigerenzer and Goldstein (1999) have suggested
three simple stopping rules that they term “The Minimalist,” “Take the Last,” and “Take the Best.”
All three rules focus on examining information cues to make a choice. “The Minimalist” and “Take
the Last” rules require the decision maker to choose an alternative based only on the first positive cue
value he encounters. The “Take the Best” strategy is a variant on the lexicographic choice strategy,
requiring the decision maker first to order the cues according to their validity in predicting the item
of interest, and then to choose according to the first cue that provides discriminability.
Additionally, Aschenbrenner, Albert, and Schmalhofer (1984; see also Schmalhofer et al.,
1986) proposed a “stochastic dimension selection” model, which states that binary choice is a
process in which sequential comparisons are made between two alternatives on a number of
attributes. Once the evidence supporting one of the choices exceeds some previously-defined level,
that alternative is selected. Finally, Saad and Russo (1996) proposed the “Core Attributes” heuristic,
which states that a person will stop acquiring information and commit to an alternative after having
found information on all of his or her important attributes.
All these stopping heuristics have been shown to be useful under certain choice conditions.
However, their general utility appears confined to choice problems, as all focus on the convergence
to a single alternative. None has been directly applied in design problems, in which the focus is on
the sufficiency of information gathered for design (although, as we discuss below, generalizations of
the Aschenbrenner et al. (1984) and Saad and Russo (1996) rules are useful in design problems).3
Therefore, for the current research, we sought different stopping rules.

3 Some studies have investigated stopping rules in contexts such as searching for an object in a hidden location,
which is closer in its goals to the current context than typical choice problems (e.g., Edwards and Slovic, 1965;
Rapoport, 1969; Rapoport, Lissitz, and McAllister, 1972). However, the ultimate goal in such contexts has still been
a convergence on a solution, which is fundamentally different from the goal of information search in the current
study. Additionally, the “differentiation” portion of Svenson’s (1992) Differentiation and Consolidation Theory,
which includes pre-decisional search, focuses on differentiating one alternative from another and explicitly excludes
the initial information search stage of decision making. Thus, that research is also distinguishable from the current
context.
A set of stopping rules described by Nickles, Curley, and Benson (1995) is proposed to be
more useful in design search problems, in which the goal is not to choose between existing
alternatives, but rather to decide whether to terminate the information gathering process. All four
rules are aimed at assessing the sufficiency of information collected.
These rules rely on
psychologically distinct processes, although all require that the decision maker be able to distinguish
a new and useful piece of information or evidence from information that is either already known or is
irrelevant to the problem at hand. These rules are discussed in detail in the following paragraphs.
Magnitude Threshold Stopping Rule. The magnitude threshold stopping rule assumes that a
person’s degree of belief concerning the sufficiency of evidence must reach some predetermined
level, or threshold, before he will stop gathering information (Nickles et al., 1995; see also Wald,
1947). A decision maker sets a mental threshold of necessary information on a key dimension that
acts as the stopping criterion. He then maintains a mental “running total” of the cumulative impact
of the evidence (Gettys and Fisher, 1979). When the internal tabulation crosses the intended
threshold, the acquisition of additional evidence is terminated.
The psychological ability to set a threshold criterion and to judge when it has been exceeded
is familiar in a variety of research contexts, ranging from judgments in psychophysical tasks
(Swensson and Thomas, 1974) to decision making under uncertainty (Busemeyer, 1982) to signal
detection theory (Green and Swets, 1974). There is also evidence of the descriptive usefulness of
threshold models in everyday choice tasks (e.g., Aschenbrenner et al., 1984; Saad and Russo, 1996).
We expect that the magnitude threshold rule, a sufficiency threshold model, may be descriptive of
stopping behavior in design tasks. An abstract representation of the magnitude threshold stopping
rule is shown in Figure 1.
**Insert Figure 1 about here**
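To make the rule concrete, the sketch below renders the magnitude threshold rule as runnable pseudocode (Python). It is a minimal illustration, not a model from the literature: acquire_item, assess_value, and the numeric threshold are hypothetical stand-ins for the analyst's elicitation and subjective evaluation processes, which the rule itself leaves unspecified.

```python
# Minimal sketch of the magnitude threshold stopping rule (illustrative only).
# acquire_item() yields the next piece of information (None when exhausted);
# assess_value(item) is the decision maker's subjective worth of that item.
def magnitude_threshold_search(acquire_item, assess_value, threshold):
    running_total = 0.0        # mental "running total" of cumulative impact
    gathered = []
    while running_total < threshold:   # stop once the threshold is crossed
        item = acquire_item()
        if item is None:               # no further information is available
            break
        gathered.append(item)
        running_total += assess_value(item)
    return gathered
```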
Difference Threshold Stopping Rule. Using the difference threshold stopping rule, a decision
maker assesses the marginal value of the latest piece of information acquired (Nickles et al., 1995).
A cumulative assessment is made after the acquisition of each additional piece of information. Then,
a comparison is made between the cumulative assessment after the most recently acquired
information and the cumulative assessment prior to the last item. When the difference between the
two assessments is less than a predetermined difference amount, the person stops the information
acquisition process. Pragmatically, the difference threshold stopping rule motivates the decision
maker to stop gathering information when he judges that he is no longer learning anything new. A
graphical view of the difference threshold stopping rule is presented in Figure 2.
**Insert Figure 2 about here**
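A parallel sketch of the difference threshold rule follows, under the same illustrative conventions. Here the hypothetical assess_cumulative function evaluates everything gathered so far, and search stops when the marginal gain from the latest item falls below a preset amount.

```python
# Minimal sketch of the difference threshold stopping rule (illustrative only).
def difference_threshold_search(acquire_item, assess_cumulative, min_gain):
    gathered = []
    previous = assess_cumulative(gathered)  # assessment before any new item
    while True:
        item = acquire_item()
        if item is None:
            break
        gathered.append(item)
        current = assess_cumulative(gathered)
        if current - previous < min_gain:   # "no longer learning anything new"
            break
        previous = current
    return gathered
```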
Mental List Stopping Rule. The mental list stopping rule involves the use of belief structures
possessed by people for the construction of mental lists or criteria sets (Bartlett, 1932; Schank and
Abelson, 1977), and is a generalization of the Core Attributes heuristic proposed by Saad and Russo
(1996). As information is obtained, arguments are made for or against using each piece of
information to fulfill requirements on a mental list. Once the decision maker reasons that all of the
items contained on the list or set have been attained, the gathering of additional information ceases.
Figure 3 illustrates the mental list stopping rule.
**Insert Figure 3 about here**
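Under the same conventions, the mental list rule can be sketched as a checklist that is marked off as acquired information satisfies each listed requirement; the hypothetical satisfies function stands in for the arguments the decision maker makes for or against each piece of information.

```python
# Minimal sketch of the mental list stopping rule (illustrative only).
def mental_list_search(acquire_item, satisfies, mental_list):
    unmet = set(mental_list)     # a priori mental list of required items
    gathered = []
    while unmet:                 # stop once every listed item is fulfilled
        item = acquire_item()
        if item is None:
            break
        gathered.append(item)
        unmet = {req for req in unmet if not satisfies(item, req)}
    return gathered
```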
Representational Stability Stopping Rule. The representational stability stopping rule
concerns the adaptation of a person’s mental model or representation of a problem situation (Nickles
et al., 1995). Such a representation provides a framework within which new information or evidence
can be assimilated (Johnson-Laird, 1983; Schank and Abelson, 1977). Psychologically, this rule
requires the ability to reason whether a new and different piece of information should cause the
person’s mental representation to change. As new information is obtained, arguments are developed
that either support the use of the information to modify the representation or reject the use of the
information. When the person’s mental representation of the problem is no longer being developed,
he ceases acquisition of additional information (Yates and Carlson, 1982). An abstract illustration of
the representational stability stopping rule is depicted in Figure 4.
**Insert Figure 4 about here**
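Finally, a sketch of the representational stability rule, assuming a hypothetical update_model function that revises the mental representation only when new information warrants it. The patience parameter is our own device for concreteness: it captures how many consecutive uninformative items the decision maker tolerates before judging the representation stable.

```python
# Minimal sketch of the representational stability stopping rule (illustrative).
def representational_stability_search(acquire_item, update_model, model,
                                      patience=3):
    unchanged = 0                    # consecutive items causing no revision
    gathered = []
    while unchanged < patience:      # stop when the representation is stable
        item = acquire_item()
        if item is None:
            break
        gathered.append(item)
        new_model = update_model(model, item)
        unchanged = unchanged + 1 if new_model == model else 0
        model = new_model
    return gathered, model
```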
These four stopping rules are arguably more appropriate in information search problems than
other stopping rules proposed in the literature because they focus on the completeness or sufficiency
of information obtained rather than on choosing between existing alternatives. They are therefore
proposed to help understand analysts’ stopping behavior in the present research.
The Role of Experience in Stopping Rule Use
An issue of importance to both theory and practice is the role of analyst experience in
stopping rule use. From a theoretical standpoint, it is known that a person’s procedures for
performing a task generally change as he gains experience (Anderson, 1981; Simon, 1981). This is
as true for systems development as it is for other domains (Schenk et al., 1998). Thus, it seems
probable that the stopping rules used by more experienced analysts will differ from those used by
less experienced analysts.
We anticipate that the mental list and magnitude threshold rules will be used by more
experienced analysts. The mental list rule requires that an analyst have enough experience to be able
to construct a meaningful list of requirements, even if he is working in an application area in which
he has little domain knowledge. It is unlikely that less experienced analysts will have this ability.
Further, more experienced analysts should have good heuristics for knowing how much information
is enough to design a system, and so should have confidence in setting a magnitude threshold for
information. For an analyst with limited experience, establishing a magnitude threshold level a priori
can be an intimidating and fruitless prospect.
On the other hand, it is reasonable to expect that the difference threshold and representational
stability rules will be used by less experienced analysts. The difference threshold stopping rule
seems to require less experience on the part of the analyst. When using this rule, there is no need for
the analyst to know how much information is enough. He simply stops the information acquisition
when he is no longer learning anything new, without regard to volume. Further, the use of the
representational stability rule also appears to require a relatively lower level of experience. The
analyst does not have to form a mental list or set a magnitude threshold a priori; rather, he simply
continues collecting information until his mental representation of the problem becomes stable.
The Impact of Analyst Experience on Information Gathered
We anticipate that the amount of information gathered will not be directly affected by the
experience level of the analyst. Past research has demonstrated that experienced decision makers
exhibit better judgments than novices, but they do so without using more information (see, e.g.,
Connolly and Gilani, 1982; Shanteau, 1992). Some research has shown that experienced decision
makers are in fact distracted by too much information, and that too much information can interfere
with decision making (Gaeth and Shanteau, 1984; Glazer et al., 1992). In information systems
development in particular, some studies focusing on differences between experienced and novice
analysts have shown that experience is an indicator of improved performance (Davis, 1982; Schenk
et al., 1998; Walz et al., 1993). However, other research has demonstrated that more experienced and
less experienced analysts are equally likely to elicit incomplete and inaccurate requirements (Marakas
and Elam, 1998). Even the elicitation of higher quality requirements is not necessarily to be
expected from more experienced analysts in information systems development. Research has shown
that higher levels of experience may result in a tendency to infer requirements rather than to elicit
them explicitly (Miyake and Norman 1979). Based on these findings, we expect that more
experienced analysts will not gather more information than less experienced analysts. Instead, we
expect that the stopping rule utilized will determine the amount and quality of information gathered.
MEASURING INFORMATION REQUIREMENTS
Since the goal of the analyst in a design process is to obtain a sufficient amount of
information, we next provide ways to measure the information elicited and describe how we
operationalized sufficiency.
In the context of information systems development, Byrd, Cossick, and Zmud (1992)
proposed a taxonomy of requirements that was later expanded upon by Rogich (1997; see also
Browne and Rogich, 2001). This categorization scheme includes problem domain entities believed
to be critical for the successful design of an information system (Byrd et al., 1992).4 Thus, an ideal
set of requirements for information systems would arguably include a significant number of
requirements from each of the defined categories. In the Byrd et al.-Rogich taxonomy, requirements
are organized into four levels: goals, processes, tasks, and information. Goal level requirements
focus on understanding the overall context in which the system is being developed and the
organizational goals for the system. In process level requirements, emphasis is placed on analyses of
business activities. Task level requirements concentrate on the specific steps that are required to
fulfill the business activities and how they are influenced by events in the environment. Finally, the
information level requirements are based on a complete understanding of the domain’s data needs
and data relationships. These generic requirements categories arguably pertain to any system
development effort and many other problem domains (Browne and Rogich, 2001). Therefore, we
used this classification technique as one method for capturing requirements elicited in the present
study. Figure 5 illustrates the requirement categories and subcategories.

4 The categorization scheme was developed from theory and past research. In addition, two expert systems analysts
were consulted to verify that the categories were appropriate and comprehensive.

**Insert Figure 5 about here**
In this study we measure sufficiency by the quantity and quality of requirements gathered.
Quantity is measured in three ways. First, we measured the total number of requirements elicited by
an analyst. We also measured the breadth and depth of requirements. Breadth refers to the number
of different requirements categories that were utilized, and depth of requirements refers to the
number of requirements elicited within each requirements category. A more complete set of
requirements would comprise a broad range of requirement categories and explore each of these
categories in depth.
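As an illustration of the three quantity measures, the sketch below computes total, breadth, and depth from a list of category codes; the codes shown are invented for the example, not taken from the study data.

```python
# Illustrative computation of the three quantity measures from coded
# requirements (category labels below are hypothetical).
from collections import Counter

coded = ["goal", "goal", "process", "task", "task", "task", "information"]

total = len(coded)            # total number of requirements elicited
by_category = Counter(coded)
breadth = len(by_category)    # number of distinct categories utilized
depth = dict(by_category)     # requirements elicited within each category
print(total, breadth, depth)  # 7 4 {'goal': 2, 'process': 1, 'task': 3, ...}
```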
To measure the quality of requirements, a different coding scheme was necessary.5 Quality is
best assessed using a coding scheme that reflects the content and context of the problem situation
(see, e.g., Browne et al.,1997), rather than a generic context-independent requirements list. To
facilitate this coding, we first developed a list of content categories. The task used in this study was
the development of an on-line grocery shopping application (discussed below). To develop the
content categories, we performed a task analysis of the experimental task and examined requirements
elicited from subjects in a previous study that used the same task (Browne and Rogich, 2001). The
coding scheme was then given to five employees at a large regional grocery chain in Texas. The
employees were asked to rate each category in the coding scheme using the following rating scale:
0 = Not relevant; 1 = Not very important; 2 = Slightly important; 3 = Moderately important; 4 = Fairly important; 5 = Very important
The job titles of the five people completing the ratings were as follows: Chief Information
Officer and Vice-President, Director of Marketing, Technical Analyst, Programmer/Analyst, and
Data Analyst. These people represented a range of experiences and viewpoints on systems
development and the retail grocery business. The people had a mean of 5.6 years (median = 6 years)
of experience in information systems development and a mean of 5.6 years (median = 5 years) in the
grocery store business. The means of the ratings of the five people were used to analyze the quality
of requirements elicited. The coding scheme utilized appears in Figure 6.
**Insert Figure 6 about here**
5 The true “quality” of requirements cannot be determined with any degree of certainty until much later in the system
development process (e.g., when writing computer code, after system implementation, and/or several years later,
after the system has been in use in the organization), if ever. Thus, only surrogates for quality can be used. This is an
important difference between design problems and choice problems. In most choice problems (at least those
performed in laboratory studies), the quality of information used for selecting an alternative can be assessed easily
because the accuracy of the choice is known (e.g., as in a general knowledge question) or soon will be (e.g., as in
most experimental forecasting tasks). Thus, if a person relied on cue X, a researcher (and the person himself) can
conclude immediately or shortly thereafter that this was useful or not useful for informing the choice. In the case of
design problems generally, the lack of temporal connection to the ultimate decision limits the assessment of quality
or usefulness of the information considered.
HYPOTHESES
The theory concerning stopping rules was used to formulate hypotheses. To investigate the
behavior of analysts that can lead to underspecification of requirements, we proposed the following
hypotheses (stated in the alternative form):
H1a: The use of some stopping rules will result in different quantities of requirements than
the use of others.
H1b: The use of some stopping rules will result in different breadth of requirements than the
use of others.
H1c: The use of some stopping rules will result in different depth of requirements than the
use of others.
H2: The use of some stopping rules will result in different quality of requirements than the
use of others.
To understand the impact of experience on stopping rule use, we tested the following hypotheses:
H3a: A greater number of experienced analysts will use the mental list rule than will use the
representational stability rule.
H3b: A greater number of experienced analysts will use the mental list rule than will use the
difference threshold rule.
H3c: A greater number of experienced analysts will use the magnitude threshold rule than
will use the representational stability rule.
H3d: A greater number of experienced analysts will use the magnitude threshold rule than
will use the difference threshold rule.
To test the impact of experience on quantity and quality of requirements, we proposed the following
hypotheses:
H4a: There will be no relationship between the experience of the analyst and the quantity of
requirements elicited.
H4b: There will be no relationship between the experience of the analyst and the breadth of
requirements elicited.
H4c: There will be no relationship between the experience of the analyst and the depth of
requirements elicited.
H4d: There will be no relationship between the experience of the analyst and the quality of
requirements elicited.
METHODOLOGY
Participants and Procedure
The participants for this study were 54 practicing information systems analysts who were
recruited from organizations in the Baltimore metropolitan area. Analysts from twelve different
organizations participated, representing a variety of industry segments including banking, finance,
insurance, construction, manufacturing, aerospace, government, research, and education. Only
analysts with at least two years of experience in system development projects were eligible to
participate in the study. This condition was used to help ensure that analysts had been involved in
enough system development projects to possess fully developed heuristics for terminating the
requirement determination process, which is the focus of this study.
The experiment utilized a case scenario concerning the development of an on-line grocery
shopping information system. The familiarity of grocery shopping in general increased the
likelihood of a similar level of domain knowledge across all analysts. It was expected that the
novelty of on-line grocery shopping would provide a challenge to the systems analysts in identifying
requirements for the system and ensure a realistic requirements gathering process.6
6 Note: These data were collected prior to the rise and fall of on-line grocery companies.

Analysts performed the task individually, and all sessions were tape recorded. Each analyst
was asked to vocalize his thoughts as he generated requests for information and evaluated responses.
A research assistant served as the proposed system “user” in the scenario, assuming the role of a grocery
store manager. The same user was employed for all analysts. This user was thoroughly briefed
concerning requirements for the system, and was a person unfamiliar with systems development and
blind to the hypotheses of the study. During the experimental session, the analyst made requests for
information concerning requirements for the proposed system and the user responded with a
statement of information that directly addressed the analyst’s request. The analyst continued
gathering information from the user to the point at which he felt sufficient information had been
obtained to continue with the design of the scenario system.7 The analyst was then asked to
complete a self-reporting questionnaire designed to assist with the ensuing evaluation of the stopping
heuristic used by the analyst. Finally, the analyst was de-briefed.
Data Analysis
A transcribed verbal protocol of each analyst’s requirements gathering session was used to
identify the specific requirements elicited from the user. The protocols were parsed and then coded
into requirements categories based on the Byrd et al.-Rogich taxonomy. Parsing was performed by
identifying blocks of utterances in which the participant was discussing the
same idea or issue (Curley et al., 1995; Reichman-Adar, 1984). An independent coder unfamiliar
with the purposes of the research was used to code all of the parsed transcriptions. In addition, to
assess the reliability of the initial coding, a second independent coder was asked to code a random
sample of 10 analyst transcriptions. A comparison of the results revealed that, on average, the coders
assigned 82% of the requirements to the same categories. To assess the degree of interrater
agreement not attributable to chance, Cohen’s kappa was calculated (Everitt, 1996). The kappa
coefficient for these data was .701, which is considered “substantial” agreement under the guidelines
established by Landis and Koch (1977). Considering the number of categories and complexity of the
utterances, this level of agreement is considered quite reliable (Everitt, 1996). The results of the
coding performed by the primary coder were used in the data analysis.

7 To increase the realism of the task, each analyst was informed in the instructions that he would be asked to draw
diagrams representing the system requirements after he elicited the requirements from the user (such diagrams are
the typical next step after gathering requirements in systems development methodologies). This was intended to
increase the motivation for each analyst, since reasonable diagrams can only be constructed if adequate and
appropriate requirements have been gathered.
The verbal protocols of each analyst’s session were also used, along with the self-reporting
questionnaires, to determine the stopping rule applied by the analyst. Utterances and statements
reflecting stopping behavior were analyzed and coded into stopping rule categories based on the
characteristics of the stopping rules presented above. Two independent coders unfamiliar with the
purposes of the research were used to code the stopping rules. A comparison of the coding results
indicated that 89% of the analysts were coded into the same stopping rule category by the coders.
Again, Cohen’s kappa was calculated to assess the degree of interrater agreement not attributable to
chance. The kappa coefficient for these data was .849, which is considered “perfect” agreement by
Landis and Koch (1977). For instances in which the coders disagreed, the disagreements were
resolved by the two coders through discussion.
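For readers unfamiliar with the statistic, the sketch below computes Cohen's kappa, as reported above, from two coders' category assignments using the standard formula kappa = (p_o - p_e) / (1 - p_e); the labels in the usage lines are hypothetical, not the study's codings.

```python
# Illustrative computation of Cohen's kappa for two coders' assignments.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # observed agreement
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # expected chance agreement from each coder's marginal proportions
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical usage with requirement-category labels:
print(cohens_kappa(["goal", "task", "task", "process"],
                   ["goal", "task", "process", "process"]))
```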
The verbal protocols were also used to code the data into content categories to permit the
analysis of requirements quality. One researcher working independently coded all of the content of
the protocols into categories. To check the reliability of the coding, a second researcher also working
independently coded a random sample of 22 protocols into the content categories. Interrater
reliability for the coding was 83%. Considering the complexity of coding verbal utterances, and the
number of available categories, this reliability was deemed satisfactory. The codes from the first
coder were used in the analyses.
To facilitate the assessment of requirements quality, the content categories used by each
subject were compared to the quality assessments made by the grocery store employees. The
specified number of points was assigned for each category on the coding scheme discussed by a
subject. This provided a measure of “quality points” for each subject that could be compared across
stopping rule groups.8
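A sketch of this scoring procedure appears below; the category names and mean ratings are invented for illustration and are not drawn from the actual coding scheme.

```python
# Illustrative "quality points" scoring: sum the raters' mean importance
# ratings over the content categories a subject discussed (values invented).
category_ratings = {"delivery scheduling": 4.2, "payment security": 4.8,
                    "product browsing": 3.6, "store hours": 1.4}

def quality_points(categories_discussed):
    return sum(category_ratings.get(c, 0.0) for c in categories_discussed)

print(quality_points({"delivery scheduling", "payment security"}))  # 9.0
```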
RESULTS
Stopping Rule Use
All analysts were determined by the coders to have used one of the four stopping rules
proposed by Nickles et al. (1995) (an available “other” stopping rule category was not utilized by the
coders). The number of analysts using each stopping rule was as follows: Difference Threshold =
22; Representational Stability = 13; Mental List = 10; Magnitude Threshold = 9.9
Requirements Elicited by Stopping Rule
To test whether analysts utilizing a particular stopping rule obtained significantly greater
quantity, breadth, depth, and/or quality of requirements than analysts applying other stopping rules,
analysts were grouped by the stopping rule utilized and comparisons were made between groups.
The results are shown in Table 1.
**Insert Table 1 about here**
First, the quantity of requirements was determined by examining the total number of
requirements gathered by the analysts. An analysis of variance revealed that there was a marginally
significant difference in the total number of requirements obtained between the stopping rule groups
(F(3,50) = 2.72; p = .05). Multiple comparisons revealed two marginal differences: analysts using the
mental list rule elicited more requirements than analysts using the magnitude threshold
rule, and analysts using the difference threshold rule elicited more requirements than analysts using
the magnitude threshold rule. No other differences were significant. These results offer support for
Hypothesis 1a.

8 As often occurs in such rating schemes, the raters did not use the entire scale in assigning ratings to categories
(despite requests to do so). Therefore, after checking for the normality of the original distribution, the data were
normalized and re-scaled around the midpoint (3) of the scale.

9 It should be noted that decision makers may use different stopping rules in different situations, and may even
employ a combination of stopping rules in a particular task. The coders in the present task were informed of this
possibility, but did not code any subjects as having utilized more than one stopping rule.
Next, the breadth of requirements elicited by analysts was determined. An analysis of
variance showed no significant differences (F(3,50) = 1.723; p = .174). All analysts, regardless of the
stopping rule used, exhibited the same breadth of elicited requirements during the experimental
sessions. Thus, Hypothesis 1b is not supported. On average, analysts elicited requirements from
57% of the available categories (15.41 of 27 possible), a point we return to in the discussion section.
For the depth variable, the mental list and magnitude threshold stopping rule groups showed
serious violations of the equality of variance assumption. Thus, the Kruskal-Wallis non-parametric
test for equality of means was administered (Conover, 1999). The analysis of the data indicated that
there was a significant difference in depth of requirements between the stopping rule groups (χ2(3) =
8.978; p = .03). Multiple comparisons showed that both the difference threshold and mental list
stopping rules resulted in significantly more depth of requirements than the representational stability
stopping rule and the magnitude threshold rule. These results offer support for Hypothesis 1c.
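For concreteness, a test of this form could be run as sketched below; the group scores are placeholders rather than the study's data, and SciPy's kruskal implements the standard Kruskal-Wallis H test.

```python
# Illustrative Kruskal-Wallis comparison of depth-of-requirements scores
# across the four stopping rule groups (numbers are placeholders).
from scipy.stats import kruskal

difference_threshold = [16, 18, 14, 17, 15]
mental_list = [17, 15, 16, 18]
magnitude_threshold = [9, 8, 10, 7]
representational_stability = [10, 9, 11, 8]

h_stat, p_value = kruskal(difference_threshold, mental_list,
                          magnitude_threshold, representational_stability)
print(f"H = {h_stat:.3f}, p = {p_value:.4f}")
```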
To test for the quality of requirements elicited, the results from the content coding scheme
were utilized. The mean numbers of quality points for subjects using each stopping rule are shown
in Table 2. An analysis of variance showed that there were significant differences between some of
the groups (F(3,53) = 2.99; p = .040). The Tukey multiple comparison procedure showed a significant
difference in quality of requirements elicited by analysts using the difference threshold rule and the
magnitude threshold rule. This difference supports Hypothesis 2. No other differences in means
were significant. As a further note, there were 230 total quality points available on the content
coding scheme. Subjects on average elicited requirements associated with 62.6 quality points, an
average of 27.2% of the quality points available.
**Insert Table 2 about here**
The implications of these findings for information gathering theory and systems analysis
practice are significant, and we return to this issue in the discussion section.
Influence of Experience on Stopping Rule Use
As noted above, one potential explanation for differences in stopping rule usage is different
experience levels of analysts. Although we required all analysts to have at least two years of
experience in systems analysis to participate, there were large differences in experience level (the
range of experience was 2 to 32 years).
To test whether any differences existed in stopping rule use as a result of years of experience,
we first calculated the mean years as an analyst by stopping rule used. The means were as follows:
mental list: 14.30 years; magnitude threshold: 14.06 years; difference threshold: 11.11 years;
representational stability: 7.65 years. Four a priori orthogonal contrasts were then tested, based on
Hypothesis 3 above. The results showed that users of the mental list rule were more experienced
than users of the representational stability rule (t(21) = 2.27; p = .019), providing support for
Hypothesis 3a. Users of the magnitude threshold rule were also more experienced than users of the
representational stability rule (t(20) = 2.00; p = .03), supporting Hypothesis 3c. The other contrasts
were not significant; users of the mental list rule were not significantly more experienced than users
of the difference threshold rule (t(30) = 1.21; p = .119), nor were users of the magnitude threshold rule
more experienced than users of the difference threshold rule (t(29) = 1.04; p = .152). Thus,
Hypotheses 3b and 3d are not supported.
Influence of Experience on Requirements Elicited
To test the impact of analyst experience on requirements gathered, we calculated correlation
coefficients for the relationships between experience and quantity of requirements, breadth of
requirements, depth of requirements, and quality of requirements. Analysts’ years of experience were
unrelated to the total number of requirements elicited (Pearson’s r = .08; p = .59). This provides
support for Hypothesis 4a; more experienced analysts gathered no more requirements than less
experienced analysts. Breadth of requirements (r = .15; p = .27) and depth of requirements (r = .02; p
= .91) were also unrelated to analysts’ years of experience. Therefore, Hypotheses 4b and 4c are also
supported. The relationship between number of years of analyst experience and number of quality
points for requirements elicited was also investigated using the content coding scheme. The data
showed that the years of experience of an analyst were unrelated to the number of quality points
associated with the requirements elicited (r = .09; p = .53). This supports Hypothesis 4d. Together,
these results show that analyst experience was not an important factor in the gathering of
requirements.
DISCUSSION
This research has identified the stopping rules used by systems analysts in a design problem
task. As noted, stopping rules in design-related contexts have been studied much less than stopping
rules in choice problems. The current findings thus help fill a significant gap in our understanding of
individual behavior in the full decision-making process. Further, our results provide a link between
the information gathering process and the subsequent choice process in decision making. The
analysts in our study stopped after eliciting requirements from only 57% of the generic categories
considered important for developing a system, and earned only 27% of the quality points available in
the content categorization scheme. These data provide potential evidence and explanation for a
number of the shortcomings of normative models of decision making (e.g., failure to use relevant
information and failure to consider appropriate alternatives).
The study of stopping rule use across varying levels of experience is another important
contribution of the present research. Heuristics for reducing options in choice tasks as a result of
experience or expertise have been investigated in a variety of contexts (e.g., chess (Baylor and
Simon, 1966), the stock market (Borges et al., 1999)). However, no prior studies have identified
stopping rule use as a function of experience in design problems. Our findings showed that more
experienced analysts used the mental list and magnitude threshold stopping rules more often, while
less experienced analysts were more likely to utilize the representational stability stopping rule. The
results are not surprising given that less experienced problem solvers are more likely to use heuristics
that have face validity and are easy to apply. As noted earlier, the representational stability stopping
rule is arguably less cognitively demanding and thus applied more easily by less experienced systems
analysts.
Further, our result showing that the amount and quality of information gathered is not
affected by experience is consistent with some prior research (e.g., Marakas and Elam, 1998; Miyake
and Norman, 1979; Shanteau, 1992) and inconsistent with the findings of others (e.g., Schenk et al.,
1998; Walz et al., 1993). Our findings indicate that experience is not a significant determinant of
information gathering success, at least in the present context. From our findings, the stopping rule
employed by the analyst is the critical factor. This result has several important implications. First, it
suggests that information gathering can be enhanced through training, since stopping rules can be
taught to analysts. Second, it indicates that staffing choices for information gathering tasks should
not be based on experience alone.
In terms of information gathering outcomes, our findings showed that the use of the mental
list and difference threshold stopping rules resulted in (1) greater quantity of requirements than the
magnitude threshold rule, and (2) greater depth of requirements than the magnitude threshold and
representational stability rules. Further, the difference threshold rule was more successful in terms of
quality than the magnitude threshold rule. This is particularly interesting since one of the more
successful stopping rules was more characteristic of experienced analysts (mental list rule) and one
was more characteristic of less experienced analysts (difference threshold rule). However, this result
is not inconsistent with research findings in problem solving. Because of such factors as training,
cognitive abilities, and personality traits, some experienced problem solvers develop better heuristics
for performing tasks than others; on the other hand, some less experienced problem solvers are able
to perform tasks well despite their lack of experience, due to application of general problem-solving
heuristics that work well much of the time (Newell and Simon, 1972; Payne, Bettman, and Johnson,
1993; Smith, 1998; Tversky and Kahneman, 1974). It is possible that the difference threshold rule,
used successfully in the current problem-solving task, is a problem-solving heuristic that works well
with inexperienced analysts in general.
Our results concerning the magnitude threshold rule and mental list rule can be compared to
previous findings in choice problems. As noted earlier, the magnitude threshold rule is one
generalization of the threshold model (the stochastic dimension selection (SDS) model) proposed by
Aschenbrenner et al. (1984). Aschenbrenner et al. found good fit for the SDS model in choice tasks
ranging from deciding on a vacation area to renting a car. In the present research, the magnitude
threshold model was used by more experienced analysts, but resulted in fewer and lower quality
requirements elicited. The mental list rule is a generalization of the Core Attributes (CA) heuristic
discussed by Saad and Russo (1996). Saad and Russo found the CA heuristic descriptive of subjects’
behavior in an apartment rental choice task. In our study, the mental list rule was used by more
experienced analysts and resulted in relatively greater quantity of requirements elicited by analysts.
Although our results invite more in-depth comparisons with these previous studies, the important
differences in the purposes of the search behavior make such comparisons hazardous. Extensions of
the current research could investigate links between the application of stopping rules in the design
process and specific problems in choice.
From a practical standpoint, the current research contributes important knowledge toward
solving a critical difficulty in decision-making efforts. In information systems development,
underspecification or mis-specification of system requirements during information gathering is an
enormous problem that costs companies more than $100 billion per year (Ewusi-Mensah, 1997;
Standish Group, 1996).
The failure to gather appropriate information leads to design and
implementation problems and poor decisions in projects in many other industries as well (see, e.g.,
Wetherbe, 1997). Understanding the stopping rules used by decision makers to gather information is
an important step in reducing these problems.
The distinction between stopping rule use in design and choice problems has not been widely
discussed in the literature. However, as we have pointed out, there are numerous reasons to make
such a distinction. Further investigations of stopping rules in a variety of task types and conditions
will continue to improve our understanding of decision makers’ stopping behavior during
information search.10
10 The authors thank Don Jones, Peter Westfall, W.J. Conover, and reviewers and participants at the ISCore 2002
workshop for their helpful comments on previous versions of this paper.
REFERENCES
Ackoff, R.L. (1967). Management misinformation systems. Management Science, 14, B147-156.
Anderson, J.R. (Ed.). (1981). Cognitive skills and their acquisition. Hillsdale, NJ: Lawrence
Erlbaum Associates, Inc.
Aschenbrenner, K.M., Albert, D., & Schmalhofer, F. (1984). Stochastic choice heuristics. Acta
Psychologica, 56, 153-166.
Baron, J., Beattie, J., & Hershey, J.C. (1988). Heuristics and biases in diagnostic reasoning:
Congruence, information, and certainty. Organizational Behavior and Human Decision
Processes, 42, 88-110.
Bartlett, F. C. (1932). Remembering. Cambridge: Cambridge University Press.
Baylor, G.W., & Simon, H.A. (1966). A chess mating combinations program. AFIPS Conference
Proceedings, 28, 431-447. Washington, DC: Spartan Books.
Beach L.R., & Strom, E. (1989). A toadstool among the mushrooms: Screening decisions and
image theory's compatibility test. Acta Psychologica, 72, 1-12.
Bockenholt, U., Albert, D., Aschenbrenner, M., & Schmalhofer, F. (1991). The effects of
attractiveness, dominance, and attribute differences on information acquisition in
multiattribute binary choice. Organizational Behavior and Human Decision Processes, 49,
258-281.
Boehm, B. W. (1981). Software engineering economics. Englewood Cliffs, NJ: Prentice-Hall.
Borges, B., Goldstein, D.G., Ortmann, A., & Gigerenzer, G. (1999). Can ignorance beat the stock
market? In G. Gigerenzer et al. (Eds.), Simple heuristics that make us smart. New York:
Oxford University Press.
Bostrom, R. P. (1989). Successful application of communication techniques to improve the systems
development process. Information & Management, 16, 279-295.
Brickman, P. (1972). Optional stopping on ascending and descending series. Organizational
Behavior and Human Performance, 7, 53-62.
Browne, G.J., Curley, S.P., & Benson, P.G. (1997). Evoking information in probability assessment:
Knowledge maps and reasoning-based directed questions. Management Science, 43, 1-14.
Browne, G.J., Curley, S.P., & Benson, P.G. (1999). The effects of subject-defined categories on
judgmental accuracy in confidence assessment tasks. Organizational Behavior and Human
Decision Processes, 80, 134-154.
Browne, G.J. & Rogich, M.B. (2001). An empirical investigation of user requirements elicitation:
Comparing the effectiveness of prompting techniques. Journal of Management Information
Systems, 17, 223-249.
Busemeyer, J.R. (1982). Choice behavior in a sequential decision making task. Organizational
Behavior and Human Decision Processes, 29, 175-207.
Busemeyer, J.R., & Rapoport, A. (1988). Psychological models of deferred decision making.
Journal of Mathematical Psychology, 32, 91-143.
Byrd, T. A., Cossick, K.L., & Zmud, R.W. (1992). A synthesis of research on requirements
analysis and knowledge acquisition techniques. MIS Quarterly, 16, 117-138.
Connolly, T., & Gilani, N. (1982). Information search in judgment tasks: A regression model and
some preliminary findings. Organizational Behavior and Human Decision Processes, 30,
330-350.
Connolly, T., & Thorn, B.K. (1987). Predecisional information acquisition: Effects of task
variables on suboptimal search strategies. Organizational Behavior and Human Decision
Processes, 39, 397-416.
Conover, W. J. (1999). Practical nonparametric statistics. New York: Wiley & Sons.
Couger, J. D. (1996). Creativity and innovation in information systems organizations. Danvers, MA:
Boyd and Fraser Publishing Company.
Curley, S.P., Browne, G.J., Smith, G.F., & Benson, P.G. (1995). Arguments in the practical
reasoning underlying constructed probability responses. Journal of Behavioral Decision
Making, 8, 1-20.
Davis, G. B. (1982). Strategies for information requirements determination. IBM Systems Journal,
21, 4-30.
Edwards, W., & Slovic, P. (1965). Seeking information to reduce the risk of decisions. American
Journal of Psychology, 78, 188-197.
Everitt, B.S. (1996). Making sense of statistics in psychology. Oxford: Oxford University Press.
Ewusi-Mensah, K. (1997). Critical issues in abandoned information systems projects.
Communications of the ACM, 40, 74-80.
Farquhar, P.H., & Pratkanis, A.R. (1993). Decision structuring with phantom alternatives.
Management Science, 39, 1214-1226.
Fischhoff, B. (1977). Cost-benefit analysis and the art of motorcycle maintenance. Policy Sciences, 8, 177-202.
Fischhoff, B., Slovic, P., & Lichtenstein, S. (1978). Fault trees: Sensitivity of estimated failure
probabilities to problem representation. Journal of Experimental Psychology: Human
Perception and Performance, 4, 330-344.
Gaeth, G.J., & Shanteau, J. (1984). Reducing the influence of irrelevant information on
experienced decision makers. Organizational Behavior and Human Decision Processes, 33,
263-282.
Gettys, C.F., & Fisher, S.D. (1979). Hypothesis plausibility and hypothesis generation.
Organizational Behavior and Human Performance, 24, 93-110.
Gigerenzer, G., & Goldstein, D. (1996). Reasoning the fast and frugal way: Models of bounded
rationality. Psychological Review, 103, 650-669.
Gigerenzer, G., & Goldstein, D. (1999). Betting on one good reason: The take the best heuristic. In
G. Gigerenzer et al. (Eds.), Simple heuristics that make us smart. New York: Oxford
University Press.
Gigerenzer, G., & Todd, P.M. (1999). Fast and frugal heuristics: The adaptive toolbox. In G.
Gigerenzer et al. (Eds.), Simple heuristics that make us smart. New York: Oxford University
Press.
Gigerenzer, G., Todd, P.M., & ABC Research Group (Eds.). (1999). Simple heuristics that make us smart. New York: Oxford University Press.
Glazer, R., Steckel, J.H., & Winer, R.S. (1992). Locally rational decision making: The distracting effect of information on managerial performance. Management Science, 38, 212-226.
Goodie, A.S., Ortmann, A., Davis, J.N., Bullock, S., & Werner, G.M. (1999). Demons versus
heuristics in artificial intelligence, behavioral ecology, and economics. In G. Gigerenzer et
al. (Eds.), Simple heuristics that make us smart. New York: Oxford University Press.
Green, D.M., & Swets, J.A. (1974). Signal detection theory and psychophysics. Huntington, NY:
Robert E. Krieger.
Guilford, J.P. (1957). A revised structure of intellect. Report of Psychology, 19, 1-63.
Hershman, R.L., & Levine, J.R. (1970). Deviations from optimal information purchase strategies in
human decision making. Organizational Behavior and Human Performance, 5, 313-329.
Johnson-Laird, P.N. (1983). Mental models. Cambridge, MA: Harvard University Press.
Kogut, C.A. (1990). Consumer search behavior and sunk costs. Journal of Economic Behavior and
Organization, 14, 381-392.
Landis, J.R., & Koch, G.G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174.
Leifer, R., Lee, S., & Durgee, J. (1994). Deep structures: Real information requirements
determination. Information and Management, 27, 275-285.
Lientz, B.P., & Swanson, E.B. (1980). Software maintenance management. Reading, MA:
Addison-Wesley.
Marakas, G.M., & Elam, J.J. (1998). Semantic structuring in analyst acquisition and representation
of facts in requirements analysis. Information Systems Research, 9, 37-63.
Meyer, R.J. (1982). A descriptive model of consumer information search behavior. Marketing
Science, 1, 93-121.
Miyake, N., & Norman, D.A. (1979). To ask a question, one must know enough to know what is not
known. Journal of Verbal Learning and Verbal Behavior, 18, 357-364.
Newell, A., & Simon, H.A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
Nickles, K. R., Curley, S.P., & Benson, P.G. (1995). Judgment-based and reason-based stopping
rules in decision making under uncertainty. Working Paper, Wake Forest University,
October.
Payne, J.W., Bettman, J.R., & Johnson, E.J. (1993). The adaptive decision maker. New York:
Cambridge University Press.
Perkins, D.N., Allen, R., & Hafner, J. (1983). Difficulties in everyday reasoning. In W. Maxwell (Ed.), Thinking: The expanding frontier. Philadelphia: Franklin Institute Press.
Pitz, G.F., & Barrett, H.R. (1969). Information purchase in a decision task following the presentation of free information. Journal of Experimental Psychology, 82, 410-414.
Pitz, G. F., Reinhold, H., & Geller, E.S. (1969). Strategies of information seeking in deferred
decision making. Organizational Behavior and Human Performance, 4, 1-19.
Ramamoorthy, C. V., Prakash, A., Tsai, W., & Usuda, Y. (1984). Software engineering: Problems
and perspectives. Computer, 17, 191-209.
Rapoport, A. (1966). A study of human control in a stochastic multistage decision task. Behavioral
Science, 11, 18-30.
Rapoport, A. (1969). Effects of observation cost on sequential search behavior. Perception and
Psychophysics, 6, 234-240.
Rapoport, A., Lissitz, R.W., & McAllister, H.A. (1972). Search behavior with and without optional
stopping. Organizational Behavior and Human Performance, 7, 1-17.
Rapoport, A., & Tversky, A. (1970). Choice behavior in an optional stopping task. Organizational
Behavior and Human Performance, 5, 105-120.
Reichman-Adar, R. (1984). Extended person-machine interface. Artificial Intelligence, 22, 157-218.
Rogich, M.B. (1997). An empirical evaluation of context independent prompting tools for
requirements determination. Doctoral Dissertation, University of Maryland, Baltimore.
Dissertation Abstracts International, 58, 4481.
Saad, G., & Russo, J.E. (1996). Stopping criteria in sequential choice. Organizational Behavior
and Human Decision Processes, 67, 258-270.
Schank, R.C., & Abelson, R.P. (1977). Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Hillsdale, NJ: Lawrence Erlbaum Associates.
Schenk, K.D., Vitalari, N.P., & Davis, K.S. (1998). Differences between novice and expert systems analysts: What do we know and what do we do? Journal of Management Information Systems, 15, 9-50.
Schmalhofer, F., Albert, D., Aschenbrenner, K.M., & Gertzen, H. (1986). Process traces of binary choices: Evidence for selective and adaptive decision heuristics. The Quarterly Journal of Experimental Psychology, 38A, 59-76.
Seale, D.A., & Rapoport, A. (1997). Sequential decision making with relative ranks: An experimental investigation of the 'secretary problem'. Organizational Behavior and Human Decision Processes, 69, 221-236.
Shafir, E., & Tversky, A. (1992). Thinking through uncertainty: Nonconsequential reasoning and
choice. Cognitive Psychology, 24, 449-474.
Shanteau, J. (1992). How much information does an expert use? Is it relevant? Acta Psychologica,
81, 75-86.
Shemer, I. (1987). Systems analysis: A systemic analysis of a conceptual model. Communications
of the ACM, 30, 506-512.
Simon, H.A. (1981). The sciences of the artificial. Cambridge, MA: MIT Press.
Smith, G.F. (1998). Quality problem solving. Milwaukee: ASQ Quality Press.
Smith, G. F., Benson, P.G., & Curley, S.P. (1991). Belief, knowledge, and uncertainty: A cognitive
perspective on subjective probability. Organizational Behavior and Human Decision
Processes, 48, 291-321.
Smith, G.F., & Browne, G.J. (1993). Conceptual foundations of design problem solving. IEEE
Transactions on Systems, Man, and Cybernetics, 23, 1209-1219.
Spetzler, C.S., & Staël von Holstein, C.-A. (1975). Probability encoding in decision analysis.
Management Science, 22, 340-358.
Standish Group. (1996). Chaos. Research Paper, The Standish Group International, Inc.
Svenson, O. (1992). Differentiation and consolidation theory of human decision making: A frame of reference for the study of pre- and post-decision processes. Acta Psychologica, 80, 143-168.
Swensson, R.G., & Thomas, R.E. (1974). Fixed and optional stopping models for two-choice
discrimination times. Journal of Mathematical Psychology, 11, 213-236.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
Vessey, I., & Conger, S. (1993). Learning to specify information requirements: The relationship
between application and methodology. Journal of Management Information Systems, 10,
177-201.
Wald, A. (1947). Sequential analysis. New York: Wiley & Sons.
Walz, D.B., Elam, J.J., & Curtis, B. (1993). Inside a software design team: Knowledge acquisition,
sharing, and integration. Communications of the ACM, 36, 63-77.
Watson, H.J., & Frolick, M.N. (1993). Determining information requirements for an EIS. MIS
Quarterly, 17, 255-269.
Wetherbe, J.C. (1991). Executive information requirements: Getting it right. MIS Quarterly, 15, 51-65.
Wetherbe, J.C. (1997). Determining executive information requirements: Better, faster, and cheaper. Cycle Time Research, 3, 1-18.
Whitten, J.L., & Bentley, L.D. (1998). Systems analysis and design methods, 4th Edition. Burr
Ridge, IL: Irwin.
Yates, J.F., & Carlson, B.W. (1982). Toward a representational theory of decision making.
Working Paper, University of Michigan, Ann Arbor, MI.
Figure 5
Requirements Categories

Goal Level
  Goal State Specification: Identifying the particular goal state to be achieved
  Gap Specification: Comparing existing and desired states
  Difficulties and Constraints: Identifying factors inhibiting goal achievement
  Ultimate Values & Preferences: Stating the final ends served by a solution
  Means and Strategies: Specifying how a solution might be achieved
  Causal Diagnosis: Identifying the causes of the problematic state
  Knowledge Specification: Stating facts and beliefs pertinent to the problem
  Perspective: Adopting an appropriate point-of-view on the situation
  Existing Support Environment: Existing technological environment to support the new system
  Stakeholders: Organization units, customers, suppliers, competitors

Process Level
  Process Description: A series of steps or tasks designed to produce a product or service
  Process Knowledge Specification: Facts, rules, beliefs, decisions required to perform the process
  Difficulties, Constraints: Factors that may prohibit process completion
  Roles and Responsibilities: Individuals or departments charged with performing processes

Task Level
  Task Description: Identification of the sequence of actions required to complete a task
  Task Knowledge Specification: Facts, rules, beliefs, decisions required to perform a task
  Performance Criteria: Statement that associates outcome with conditions and constraints
  Roles and Responsibilities: Individuals or departments charged with performing tasks
  Justification: Explanations of specific actions to be/not to be taken

Information Level
  Displayed Information: Data to be presented to end-users in paper or electronic format
  Interface Design: Language and formats used in presenting "Displayed Information"
  Inputs: Data that must be entered into the system
  Stored Information: Data saved by the system
  Objects and Events: Physical entities and occurrences that are relevant to the system
  Relationship Between Object & Event: How one object or event is associated with another object or event
  Data Attributes: Characteristics of objects and events
  Validation Criteria: Rules that govern the validity of data
  Computations: Information created by the system
Figure 6
FoodCo Content Coding Categories

I. Data Needed from Customers
A. Personal Data
1. Name
2. Address with zip code
3. Phone Number
a. Option to enter multiple phone numbers
4. E-mail address
a. Option to enter multiple email addresses
5. Customer ID (created by customer at first purchase, and used at
every subsequent purchase)
B. Items Ordered
C. Store location at which customer wants to pick up order
D. When customer wants to pick up order (if later than the time the system specifies)
II. Interface (Information to Provide to Customers)
A. Product Information
1. Picture of product
2. Brand name
3. Size
4. Unit cost
5. Price
6. Nutritional information
B. Locating Products
1. Ability to search by generic product name
2. Ability to search by actual brand name
3. Provide map of actual standard store aisles; can choose class of
products to browse by clicking on item name on “shelf”
C. Comparison feature allows comparing products on various attributes,
such as unit cost
D. When an item is not in stock, have feature that suggests possible substitute products
E. Shopping Cart
1. Have shopping cart feature
2. Can empty shopping cart at any time
3. Can remove individual items from cart at any time
4. Have running total of cost of items in cart available on-screen
5. Have calculator function available
F. Have recipes available on the website
G. Customers can add notes to order items (e.g., “green bananas”)
H. Promotions
1. Have sale item page that customers can click on from homepage
2. Have instant coupons available that customers can access and use
3. Provide promotional items on product pages to increase impulse buying
4. Send periodic emails to customers with promotions and “click-throughs”
I. Ease-of-Use Features
1. Allow customers to set a default order list for themselves (when they log
on, these items will already be in their basket)
2. Provide back buttons and other easy navigational tools
3. Customer can "save" an order for several days until he or she has time to finish the order
J. Locating Stores
1. Have list of store locations so customer can locate closest one
2. Have closest store helper function–system can prompt customer with
closest store based on customer’s zip code
K. Contacting Vendor
1. Provide telephone number customers can call to speak with a manager
2. Have facility so customers can leave feedback about their shopping
experiences
III. Orders
A. No minimum order size at this point (to build customer base)
B. Will start with a limited number of items available on-line (not entire store
inventory)
C. Will accept only credit cards and debit cards
D. Customer must pay for items at time of order on-line
E. Ordering process will be on a secure server
F. For orders not picked up, customers will be sent an e-mail as a reminder
G. Will issue “rainchecks” to customers for items out of stock
H. Customers can order products 24 hours per day, 7 days per week, but
can only pick up during regular store hours
IV. Moving Goods to Customers
A. Only store pick-up at this point (no delivery)
B. Customer order will be printed out at store and employees will fetch items
and assemble order
C. Store will have employees dedicated to on-line sales (line and supervisory)
D. Orders must be placed at least 2 hours in advance of desired pick-up time
E. After an order is placed, system must give customer a pick-up time estimate
(Customer can choose different time if longer than specified by system)
F. Will be a staging area for on-line orders at store
G. Each on-line order will be assigned a specific order number. This number
will be used at store to organize order bags and boxes.
H. Customer can request that a specific employee pack order
I. Order items will be divided into frozen and non-frozen; order number on
containers will facilitate quick assembly when customer arrives
J. Customer will sign an order acceptance form when he picks up his order
V. Systems
A. On-line system will need to be integrated with existing store and corporate
information systems
B. Store personnel will enter product and other information into on-line system
C. Inventory system must track on-line orders
VI. Reports to Management
A. Customer-related reports
1. Number of customers
2. Number of new customers
3. Number of repeat customers
4. Mean and median order size
5. Profile/segment customers to understand buying habits
B. Product-related reports
1. Products selling and not selling
2. Compare on-line sales with in-store sales
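To connect the coding categories in Figure 6 with the measures reported in Table 1, the sketch below shows one plausible way quantity, breadth, and depth could be computed from an analyst's coded protocol. This is a minimal illustration rather than the authors' actual coding procedure: the measure definitions (quantity = total requirements elicited, breadth = number of distinct categories touched, depth = quantity divided by breadth) are assumptions inferred from the note to Table 1, and the example data are hypothetical.

    from collections import Counter

    def search_measures(coded_requirements):
        # coded_requirements holds one category label per elicited requirement,
        # drawn from a coding scheme such as the one in Figure 6.
        counts = Counter(coded_requirements)
        quantity = sum(counts.values())  # total requirements elicited
        breadth = len(counts)            # distinct categories touched
        depth = quantity / breadth if breadth else 0.0  # mean requirements per category
        return quantity, breadth, depth

    # Hypothetical protocol: three requirements coded "Personal Data",
    # two coded "Orders", and one coded "Reports to Management".
    example = ["Personal Data"] * 3 + ["Orders"] * 2 + ["Reports to Management"]
    print(search_measures(example))  # -> (6, 3, 2.0)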
Table 1
Quantity, Breadth, and Depth of Requirements
for Each Stopping Rule Group*

                               Quantity            Breadth             Depth
Stopping Rule                  Mean    Std. Dev.   Mean    Std. Dev.   Mean    Std. Dev.
Difference Threshold           86.43   44.82       15.76   3.85        5.15    1.98
Representational Stability     62.77   19.84       15.31   2.43        4.03    0.67
Mental List                    87.45   35.53       16.73   2.90        5.15    1.34
Magnitude Threshold            55.22   29.91       13.22   3.46        4.02    1.41

* Note: The breadth mean multiplied by the depth mean is not equal to the quantity mean for each stopping rule group because the quantity mean is a weighted average of breadth and depth. The breadth and depth means reported are simple averages within groups.
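The note beneath Table 1 can be illustrated numerically. If each analyst's quantity is (approximately) the product of that analyst's own breadth and depth, then the group's quantity mean is a mean of products, and a mean of products equals the product of the group means only when breadth and depth do not covary (E[BD] = E[B]E[D] + Cov(B, D)). The sketch below uses hypothetical values for two analysts:

    # Two hypothetical analysts whose breadth and depth covary positively.
    breadths = [10, 20]
    depths = [4.0, 6.0]

    quantities = [b * d for b, d in zip(breadths, depths)]      # [40.0, 120.0]
    mean_quantity = sum(quantities) / len(quantities)           # 80.0
    product_of_means = (sum(breadths) / 2) * (sum(depths) / 2)  # 15.0 * 5.0 = 75.0

    print(mean_quantity, product_of_means)  # 80.0 75.0 -- unequal, as in Table 1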
Table 2
Quality of Requirements Elicited
for Each Stopping Rule Group

Stopping Rule                  N     Mean    Std. Dev.
Difference Threshold           22    72.22   26.46
Representational Stability     13    54.82   23.10
Mental List                    10    66.01   19.67
Magnitude Threshold             9    46.58   24.23