RESEARCH STATEMENT
STEFANO GRAZIOLI
McIntire School of Commerce
at the University of Virginia
Monroe Hall - Charlottesville, VA 22904
grazioli@virginia.edu
(434) 982-2973
I am interested in information systems, more specifically in information quality and security.
Most recently, I have been investigating topics such as consumer deception on the Internet, the strategic
manipulation of information in the financial markets, and errors in retrieving business information from
databases.
The unifying theme that connects these research projects is the goal of understanding information
processing errors in business settings. In broad terms, understanding error means understanding why
people sometimes depart from procedural standards assumed to be desirable (Grazioli & Klein, 1996).
Fraud and deception are examples of intentionally induced information processing errors. The errors
made by business analysts retrieving information from databases are an example of unintentional errors.
Errors are an interesting scientific phenomenon. They are the kind of phenomenon that almost
irresistibly prompts an observer to ask “Why did this happen?”, a question that is often the beginning of a
scientific investigation. At the same time, understanding error has straightforward practical implications.
Reducing or preventing errors is one avenue to increasing the quality and productivity of processes and
decisions. As the recent events centered on Enron and WorldCom have shown, decision-making errors
can be very costly.
Although I have centered my work on errors that are topical to information systems – my
discipline – I feel that this research interest is naturally interdisciplinary and lends itself well to the kind
of scientific sharing that is one of the characteristics of the Faculty at McIntire.
1. DETECTING FINANCIAL DECEPTIONS
I define deception as the deliberate attempt to mislead others. The detection of deception is the
realization that such an attempt has been made. In business settings, deception is an issue of managerial
interest because the potential for deception is ubiquitous, because deception is hard to detect, and because
it is costly when not detected. From a psychological point of view, deception is a fascinating
phenomenon. It is a clear case in which individuals respond to their cognitive representations of the
environment, and not to the environment itself.
I have explored deception in various business domains. I believe that research needs to be both
scientifically rigorous and pragmatically relevant. To achieve this goal I seek concrete problems that
affect organizations and that can be generalized into scientific issues. The research projects described
below are rooted in theory, the empirical observation of specific information processing tasks, and the
development of models of those processes. Most of them have been sponsored by major organizations
(e.g., IBM, KPMG, Norwest Bank, and the IRS).
Can experts detect deceptions in their own domains?
My initial work on deception focused on the problem of detecting deception in the financial
markets (a project funded by KPMG). My coauthors and I proposed a theory of how individuals detect
deception in financial information. An experiment with senior partners at a major auditing firm revealed
that even very highly trained professionals often fail to detect deceptions in their own professional
domains. The development of several information processing models of fraud detection (computer
simulations of human problem-solving behavior) led us to conclude that success and failure at detecting
fraud are determined by (1) the ability to correctly generate the hypothesis that an intentional
manipulation of information has occurred, and (2) the ability to combine the evidence gathered while
analyzing a case (OBHDP 1992).
What kinds of deceptions exist?
In the next study, we proposed that deceivers perpetrate their deceptions in only a handful of
prototypical ways. We developed a general taxonomy of deception, based on the
finite number of ways in which a deceiver can induce a cognitive misrepresentation (i.e., an error) in the
victim. We called these means to deceive “deception tactics” (Johnson, Grazioli and Jamal AOS 1993).
For each of these, we identified one or more corresponding detection tactics. I used this taxonomy of
tactics as an analytical tool to understand deception in a variety of settings, including commercial lending
and the Internet.
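
To make the structure of the taxonomy concrete, the short Python sketch below pairs each deception
tactic with its corresponding detection tactics. It is purely illustrative: the tactic descriptions are
placeholders in the spirit of the taxonomy, not the actual categories of the AOS 1993 paper.

    # Illustrative placeholders only, not the categories of the AOS 1993 paper.
    # Each deception tactic (a way to induce a misrepresentation in the victim)
    # maps to one or more detection tactics (ways to check for that manipulation).
    DECEPTION_TAXONOMY = {
        "hide a relevant item": ["check for expected items that are missing"],
        "alter how an item appears": ["recompute the item from independent sources"],
        "show an item that does not exist": ["seek outside corroboration of the item"],
        "mislabel an item": ["compare the label against the item's actual content"],
    }

    def detection_tactics_for(deception_tactic):
        """Return the detection tactics that correspond to a given deception tactic."""
        return DECEPTION_TAXONOMY.get(deception_tactic, [])

    print(detection_tactics_for("mislabel an item"))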
Why do even experts fail to detect deception?
A third study (Cog Sci. 2001) integrated the previous studies with VanLehn’s work on the origins
of information processing errors. The resulting theory makes a key distinction between situations that
have the potential for deception, which are very frequent, and actual deceptions, which are relatively
rare. Because actual deception occurs infrequently, individuals do not receive sufficient feedback
to refine their knowledge of how to detect. As a result, their knowledge is likely to contain a special kind
of flaw, which VanLehn calls a “knowledge bug.”
Knowledge bugs are a special type of imperfection in an individual’s knowledge, a sort of
competence blind spot. Buggy knowledge works well in many cases and generates errors only under
special circumstances (such as when the information processed by an individual is intentionally
manipulated). Knowledge bugs develop when specific feedback is rare. They explain why even well
trained, motivated individuals fail to detect deception.
To evaluate this integrated theory, we developed an innovative methodology based on the idea of
using errors to explain and predict problem-solving outcomes. The methodology has four steps: (1) the
observation of the process traces (“think aloud” protocols) of accurate detectors of deception is used to
build a computer model of the correct detection of deception. (2) The process traces of successful and
unsuccessful detectors are compared to identify the errors made by the unsuccessful detectors. (3) The
computer model of correct detection developed in the first step is then modified to include hypothesized
causes of errors (the “knowledge bugs” described above). (4) Finally, the model with the hypothesized
“knowledge bugs” is run again. Its behavior, now faulty, is compared with the behavior of the subjects
who failed to detect.
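
The logic of these four steps can be sketched in a few lines of Python. The sketch below is a toy
illustration under assumptions of my own (the actual models are full computer simulations of
problem-solving behavior): a model of correct detection, the same model seeded with a hypothesized
knowledge bug suggested by the protocol comparison of step two, and a measure of how well each model
reproduces the outcomes observed for the subjects.

    # Toy illustration of the error-based methodology; not the study's actual simulations.
    def correct_detector(cues):
        """Step 1: model of correct detection built from accurate detectors' protocols.
        A single anomalous cue is enough to generate the deception hypothesis."""
        return any(cue["inconsistent"] for cue in cues)

    def buggy_detector(cues):
        """Step 3: the same model seeded with a hypothesized knowledge bug. The
        deception hypothesis is generated only when anomalies are overwhelming."""
        return sum(cue["inconsistent"] for cue in cues) >= 3

    def predictive_accuracy(model, cases, observed_outcomes):
        """Step 4: fraction of cases in which the model reproduces a subject's outcome."""
        hits = sum(model(cues) == outcome for cues, outcome in zip(cases, observed_outcomes))
        return hits / len(cases)

    # Hypothetical data: each case is a list of cues; the outcomes record whether an
    # unsuccessful subject flagged the case as deceptive.
    cases = [
        [{"inconsistent": True}, {"inconsistent": False}],
        [{"inconsistent": True}, {"inconsistent": True}, {"inconsistent": True}],
    ]
    observed_outcomes = [False, True]
    print(predictive_accuracy(correct_detector, cases, observed_outcomes))  # 0.5
    print(predictive_accuracy(buggy_detector, cases, observed_outcomes))    # 1.0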
In our study, the model that included the knowledge bugs was 84% accurate in predicting
individual responses, which we interpret as fairly strong support for the theory. The distribution of the
errors suggested that generating the hypothesis that a deceptive manipulation has been attempted is a
critical step to successful detection. We also found that errors were surprisingly frequent: no subject,
even among the most successful individuals in our sample, was error-free across all cases.
Can we facilitate detection success?
My dissertation (funded by Norwest Bank and the Carlson School of Management) proposed that
it is possible to facilitate detection success by means of relatively simple interventions based on Cosmides
and Tooby’s social exchange theory. Specifically, I looked at deception in commercial lending, which
occurs when management of a financially troubled company manipulates a loan application so as to
obtain an otherwise uncertain loan. Laboratory experiments and models of the detection process
demonstrated that providing a loan officer with (a) knowledge of the adversarial intentions of
management of a company attempting to borrow money and (b) knowledge of broad classes of possible
manipulations significantly facilitates detection success. Success rates at detecting deception in realistic
business situations jumped from 25 to 69 percent (AIS 1998, Grazioli and Johnson, draft).
Current extensions of this stream of research include (a) an analysis of the effects of the
interventions described above on the errors made by the loan officers (Grazioli and Johnson, draft); (b) a
comparative study of auditors and foreign currency traders (Cambridge Press 2004); (c) a study of fraud
creation (Davern & Grazioli, draft), and (d) a series of studies of deception on the Internet. The first
study explores whether it is possible to eliminate or reduce knowledge bugs. The second study looks at
the strategies that auditors and foreign currency traders adopt to manage risks posed by interactions with
other agents. The third study is designed to learn about the detection of financial manipulations by
understanding how these manipulations are created. The Internet studies are described in the next section.
2. OCCURRENCE AND DETECTION OF DECEPTION ON THE INTERNET
The specifics of Internet technology have “lowered the cost of evil” and provided new
opportunities for consumer and business fraud. As a result, the social dynamics of old forms of crime
have changed and new forms of deviant behavior have appeared (e.g., “phishing”, “page jacking”).
What is the nature of Internet deception?
The term Internet deception describes a broad set of malicious practices that use the Internet as a
medium to intentionally create in the target an incorrect mental representation of the circumstances of a
social exchange. Sirkka Jarvenpaa and I have argued that Internet deception poses a threat to the
sustainability of Internet commerce because it undermines trust among parties (FT, 1999).
To understand this threat and how to cope with it, we have built the first research database of
cases of Internet deception. The database contains over two hundred cases and was populated using
content analysis of a broad range of documentary evidence published between 1995 and 2000. The cases
from the database were classified using the taxonomy of deception tactics described above. The results of
the analysis were then used to characterize the problem posed by Internet deception, and assess its
severity (CACM 2003).
What is the modus operandi of the Internet Deceiver?
Data from the database was used to investigate some of the factors that make a specific deceptive
tactic more or less likely to be adopted by a deceiver on the Internet. We expanded the existing theory of
deception with new hypotheses that link the selection of a specific tactic to both the identity of the target
(consumer or business) and the purported identity of the deceiver (consumer or business). The results
suggest that deceivers select their deception tactics as a function of their targets, as well as their own
purported identity. The practical implications for deterrence, prevention, and detection of Internet
deception were also discussed (IJEC 2003).
What are the psychological consequences of perceived deception?
I have conducted lab experiments to determine the role of perceived deception in e-commerce
settings. We began by modeling how deception, risk, and trust affect the willingness to buy at a web
store. A first lab experiment found that even very experienced Internet users are easily victimized by
deceptive tactics commonly found on the Internet and that the perception of deception significantly
affects risk and trust, which in turn affect the willingness to purchase at a web store (IEEE, 2000).
Can consumers detect online deceptions? How?
A second experiment with lay Internet consumers looked at the process differences between
accurate and inaccurate detectors, with particular focus on consumer responses to deceptive mechanisms
(e.g., forged BBBOnLine assurance seals) used by Internet deceivers.
The findings of the experiment suggested that a key difference between success and failure is the
ability to evaluate clues of deception, i.e., knowing how to verify or assess items found on a visited web
site, such as assurance seals, warranties, or customer testimonials (ICIS 2001; Grazioli, Wang & Todd,
draft).
3. ERRORS IN INFORMATION RETRIEVAL FROM ORGANIZATIONAL DATABASES
Dale Goodhue, Barbara Klein and I began working in this area with an investigation of how
knowledge workers detect and repair errors when extracting information from organizational databases
(DSI, 1995). The study was initiated by the observation that people engaged in information retrieval from
a database often realize spontaneously that they have made a mistake and proceed to repair the error. Our
rationale for this initial work was that understanding how people detect and repair their own errors is the
basis for designing information technology that facilitates error detection and repair, and ultimately
improves the quality of information extracted from databases.
That initial research sparked a broader study of the effects of semantic data integration on
information retrieval performance. Data integration is the use of common definitions for the data in the
databases of different organizational units. Data integration projects are underway in many organizations,
yet little has been published on exactly how data integration (or lack thereof) affects performance at
retrieving business information from databases. This is surprising, because retrieval is one of the reasons
why businesses store and integrate information.
To investigate this question, I developed a set of sophisticated computer programs to automatically
identify syntactic and semantic errors in SQL queries. The errors identified include syntax errors (e.g.,
misspelling a keyword), overspecification errors (retrieving too much information), underspecification
errors (retrieving too little), and several others. The programs were used to
analyze several thousand queries written by hundreds of subjects engaged in a business information
retrieval experiment.
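
As a concrete, heavily simplified illustration of the kind of checks these programs perform, the Python
sketch below flags likely keyword misspellings and compares a query's result set against a reference
answer to detect over- and underspecification. The function names, the row-set comparison, and the toy
data are assumptions made for the example, not the actual programs used in the study.

    # A rough sketch of automated query checking; illustrative code, not the study's programs.
    import difflib
    import re

    KEYWORDS = ["SELECT", "FROM", "WHERE", "GROUP", "BY", "ORDER", "JOIN", "AND", "OR"]

    def classify_query(subject_sql, subject_rows, correct_rows):
        """Label one subject query, given the rows it retrieved and the rows that
        the correct (reference) query retrieves."""
        errors = []
        # Rough syntax check: tokens that are close to, but not exactly, an SQL keyword.
        for token in re.findall(r"[A-Za-z_]+", subject_sql):
            word = token.upper()
            if word not in KEYWORDS and difflib.get_close_matches(word, KEYWORDS, n=1, cutoff=0.75):
                errors.append(f"syntax: possible misspelled keyword '{token}'")
        # Semantic checks: compare the retrieved row set with the reference row set.
        if subject_rows - correct_rows:
            errors.append("overspecification: the query retrieves rows outside the correct answer")
        if correct_rows - subject_rows:
            errors.append("underspecification: the query misses rows in the correct answer")
        return errors

    # Toy example: 'SELCT' and 'FORM' are flagged as likely misspellings, and the
    # extra row ('Bo',) is flagged as overspecification.
    print(classify_query("SELCT name FORM employees",
                         subject_rows={("Ann",), ("Bo",)},
                         correct_rows={("Ann",)}))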
Statistical analysis of the data found that semantically integrated data are, as expected, conducive
to higher performance. Perhaps more interestingly, we identified the specific reason why this is so:
information retrievers in an integrated environment can “chunk” the overall information retrieval task into
larger portions without making more errors overall. Within the limitations of the study, the findings
support the implementation of semantically integrated databases that is underway in so many
organizations (Goodhue, Grazioli and Klein, draft).