Trust in eCommerce - the ontological status of
trust
Olle Olsson
Swedish Institute of Computer Science (olleo@sics.se)
Abstract. The concept of trust has received a lot of
attention in recent years, for a number of reasons. On the
one hand, trust is seen as a necessary enabling precondition
for B2C and other sensitive services on the web, and the
problem of creating practical "trust on-line" is regarded as of
high importance. On the other hand, this very concern has also spawned
efforts to analyse the concept of trust at a more "philosophical" level.
This paper presents a logical analysis of the question of why we actually
need to represent trust explicitly at all. The conclusion is that in open
communities explicit trust representations can be communicated between
members, and that this contributes to the privacy and security of those
members.
1. Introduction
The concept of trust has recently become a focus of interest in web-based commerce contexts, as well as in the context of open decentralised
applications [Chen and Yeager, n.d.]. There is an expectation that trust
will be a vital component in future systems [Cheskin, 1999] [TRUST-EC,
1999]. But no consensus exists as to what trust really means.
Individual authors may present coherent pictures of what trust is, but such
pictures are often not compatible with each other [McKnight & Chervany,
1996] [McKnight & Chervany, 2001] [Bacharach & Gambetta, 2000].
We will here take steps towards establishing a common core
understanding of the concept of trust. Some fundamental issues need to be
clarified before a full-blown theoretical framework for trust can be
established. The first issue concerns the answer to the question: what are
we trying to achieve by introducing the concept "trust" into a theoretical
framework? The second issue concerns the answer to the question: once
the aim of "trust" is known, what do we need to cover by the concept of
trust? This paper mainly investigates the second issue, in its general
philosophical form: when, if ever, do we need to represent trust explicitly?
2. Trust and theories
Trust is a relationship between actors in some universe of
discourse. We use the term actor because we argue that trust is a useful
concept in contexts where collaborative actions need to be performed.
We give three typical illustrations of what trust can be, both as
intuitive aids to understanding, and also to provide three classes of
contexts in which trust is to be understood.
1. In sociology, certain relationships of a cohesive nature can be defined
as trust-relations. What holds a community together may be a trusting
relationship among its members [Fisman & Khanna, 2000].
2. In economics, certain relationships among actors are characterised as
trust-based. A supplier should try to build trust among its customer
base [Friedman et al, 2000] [Houser & Wooders, 2000].
3. In the technological context of peer-to-peer systems, trust may be a
factor with a major impact on establishing co-operation patterns among
the peers [Sniffen, 2000].
The ontological question in this context is whether the term "trust"
has any denotation at all, and if so, what it actually denotes. This is
the philosophical ontological question [Quine, 1948], not to be confused
with the recent use of the term "ontology" in knowledge representation
[Gruber, 1993].
For what purpose is the term "trust" introduced into some specific
scientific discourse? This question depends on an understanding of what is
meant by a "scientific discourse". Here we take this expression to actually
mean "theory", i.e. a structured description of some class of phenomena
[Achinstein, 1968]. Theories are designed to provide answers to some
classes of questions. There are two main categories of questions that a
theory can be designed to provide answers to ([Theobald, 1968] [Winch,
1963]): (1) explanations ("why is P?"), and (2) predictions ("will P be?").
Explanations
Theories may be designed to explain some observable
phenomena. For theories that conceptualise trust, the intention is
that such a theory can explain an actor's behaviour through an
argument that refers to things called "trust". Certain kinds of
observations can be made, and one may distinguish between causes
and effects. The theory provides a foundation for a rational
explanation of why observed causes actually cause the observed
effects. Sociology and anthropology are good examples of where
this scientific approach can be observed; some observable societal
behaviour is explained by reference to a relationship between the
actors, a relationship called "trust".
Predictions
Predictions can be seen as the "other side of the explanation
coin", in the sense that where explanation concerns providing a
rationale for an observed phenomenon, prediction concerns
providing a rationale for an observation not yet made. A classical
statement in the philosophy of science is that a usable theory should (a)
be able to explain old observations, (b) be able to predict future
observations, and (c) both explanation and prediction should be
based on the same kinds of reasoning. Meteorology is a clear-cut
example of this idea, in the sense that a meteorological theory
should explain yesterday's weather and predict tomorrow's weather
using the same kinds of reasoning principles.
Many theories that have "trust" as a core concept use it for both
explanation and prediction. A theory may explain the success of some
commercial actor, where actions performed by the actor increased the
customers' trust in the actor, and this led to a stable (and perhaps growing)
customer base. This could lead to a prediction that if some other actor
performs similar actions, that actor will also experience a profitable future.
The pragmatic importance of the explanation/prediction dichotomy is
hence that insights gained through explanations of historical observations
may be used to guide present actions, actions that will lead to preferred
future experiences.
"Trust" appears in such explanation/prediction theories as an
auxiliary concept in a conceptual framework that relates actions to effects.
There is another kind of use of "trust" that we will call "operational
trust".
Operational trust
This concerns theories that try to model trust formally, at a
level that permits trust to be manipulated through explicit methods.
The main reason for embarking on such an effort is the perceived
need to automate trust management. Such automation may be built
into (software) agents that select what other agents to co-operate
with when performing some specific task. That is, such agents try to mimic
the kind of trusting behaviour that can be detected in human-to-human
interaction, behaviour that in human contexts leads to what can be called
successful behaviour.
Examples of this abound [Abdul-Rahman and Hailes, 1997]
[Bhattacharya, 1998] [Prietula & Carley, 1998]. This area will provide a
setting for the main results of this paper. Such operational theories are
based on the following components:
1. A specified representation of entities called "trust" (or "trust[ed]-X"
for some X).
2. A specification of how this representation can be used to select
co-operating partners.
3. A specification of how trust is updated (modified, changed).
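To make these three components concrete, the following minimal Python
sketch shows one possible shape of such an operational theory. All names
and the particular choices of representation (a numeric score per actor),
selection rule (pick the highest score), and update rule (exponential
smoothing) are hypothetical illustrations, not taken from any of the
cited systems.

    # Minimal sketch of an operational trust model (illustrative only).
    class TrustStore:
        """Component 1: an explicit representation of trust.
        Here: one numeric score per known actor, in [0, 1]."""

        def __init__(self, default: float = 0.5):
            self.default = default                # prior trust in unknown actors
            self.scores: dict[str, float] = {}

        def get(self, actor: str) -> float:
            return self.scores.get(actor, self.default)

        # Component 2: use the representation to select a co-operating partner.
        def select_partner(self, candidates: list[str]) -> str:
            return max(candidates, key=self.get)

        # Component 3: update trust from the observed outcome of an interaction.
        def update(self, actor: str, outcome_good: bool, rate: float = 0.2) -> None:
            target = 1.0 if outcome_good else 0.0
            old = self.get(actor)
            self.scores[actor] = old + rate * (target - old)  # exponential smoothing

    store = TrustStore()
    store.update("A", outcome_good=True)
    store.update("B", outcome_good=False)
    print(store.select_partner(["A", "B"]))       # -> "A"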
In this context a number of questions appear: Does trust in the form
specified by the theory correspond to something called trust in the real
world? If trust is implemented in an agent according to the specification,
does it add to the power of the trusting agent? If there are several
competing specifications of trust, can any one of them be regarded as an
explication (or, better, approximation) of the real concept of trust?
We will in this paper look at an orthogonal question: do we need to
explicitly represent trust? Two possible answers to this question are: (1) it
is not necessary to model trust explicitly, and (2) it can be of great utility
to model trust explicitly. These two candidate answers will be explored in
the next sections.
3. Trust is an unnecessary concept
We will in this section adopt a formalistic approach, and talk about
trust as it may appear in some formal theory. We assume that we define a
theory explaining and predicting trusting behaviour, and that the theory is
expressed as a formal first order theory (see [Stoll, 1963] [Wilder, 1965]
[Henkin et al, 1959]). Some term τ in this theory represents trust. Other
terms represent actions (state changes) and effects (states). A theory T that
purports to explain/predict trust-based behaviour should be empirically
grounded. This means that some terms in the theory represent observable
characteristics. Hence we can partition the terms of the theory into two
groups:
• Observational terms, which denote concepts that can be directly
measured.
• Theoretical terms, which have no corresponding directly experienceable
concepts.
In the domain that we are investigating here, we must agree that τ
is a theoretical term, as we do not have any means to directly observe
anything that can reasonably be called trust. On the other hand, concepts
like "cost" and "income" can, presumably, be observed, and hence fall into
the category of observational terms.
At this point we may apply a result from formal logic: Craig's
theorem [Craig, 1953]. This theorem states that for any formal theory T
that contains theoretical terms, there exists another formal theory T' that
does not refer to any theoretical terms but still has the same observational
statements as theorems.¹
The impact of this theorem on our ontological question is that,
formally speaking, terms denoting trust are redundant. There is,
scientifically speaking, no absolute need to model trust explicitly in
theories explaining trusting behaviour. One can now draw a conclusion
that building theories of trust is a waste of time, as trust is a concept that
can be dispensed with without diminishing the power of a theory of trust.
But there may be sensible reasons to pursue explorations of formal
theories containing theoretical terms denoting something called trust. A
general argument for the usefulness of theoretical terms is presented in
[Putnam, 1965] and [Putnam, 1962]. See also [Gärdenfors, 1980].
4. Trust is a useful concept
To find a haven for the concept of trust, we turn to what we earlier
called "the operational concept of trust". The intended use of the
operational concept of trust is in contexts where we want artificial entities
(e.g. software agents) to exhibit behaviour that can be described as trusting
behaviour. A complete definition of "trusting behaviour" requires a deeper
analysis, an analysis that cannot be presented within the limits of this
paper. We will, for the purpose of this paper, just explicate trusting
behaviour by the following description:
"The behaviour of an actor that contributes to maximising the
future utility of this actor, based on earlier experiences of
trustworthiness of other actors"
The following figure describes the main components of our model
and their dependencies.
[Figure: the trust cycle. Available trust info leads to an action; the
action produces effects; the effects are observed; and the observation
yields updated trust info'.]

¹ A more formal statement of Craig's theorem is: if T is a recursively
enumerable theory, with sets of predicate letters Pt (predicates for
theoretical terms) and Po (predicates for observational terms), then there
exists a recursively enumerable theory T' expressed only in Po such that a
sentence φ containing only predicates from Po is a theorem of T' iff φ is
a theorem of T.
The important dependencies in this figure are the following:
1. Based on available trust information (trust info) …
2. … an action is selected (e.g., deciding which actor to co-operate with).
3. This action is performed …
4. … leading to effects.
5. These effects can be observed …
6. … and such observations contribute to modifying the available trust
information (resulting in trust info').
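As a sketch, the six dependencies above can be written as a simple control
loop. The environment function and the numeric update rule are hypothetical
stand-ins, assuming binary observable outcomes:

    import random

    # Hypothetical stand-in for the world: co-operating with an actor
    # yields an observable effect (success or failure).
    def perform(actor: str) -> bool:
        reliability = {"A": 0.9, "B": 0.4}        # hidden from the trusting agent
        return random.random() < reliability[actor]

    trust_info = {"A": 0.5, "B": 0.5}             # initial trust info

    for _ in range(20):
        # 1-2. Based on trust info, an action is selected (a partner to co-operate with).
        actor = max(trust_info, key=trust_info.get)
        # 3-4. The action is performed, leading to effects.
        effect = perform(actor)
        # 5. The effects are observed.
        observation = 1.0 if effect else 0.0
        # 6. The observation modifies the trust info (trust info -> trust info').
        trust_info[actor] += 0.2 * (observation - trust_info[actor])

    print(trust_info)                             # trust in "A" tends to drift upward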
This is, in short, the essence of our conceptualisation of trust. The
concept of trust is represented by a combination of trust information,
actions, effects, observations, and methods operating on these. Other more
or less related proposals are [Jonkers & Treur, 1999] [Manchala, 2000]
[Marsh, 1994].
Individual trust
By individual trust we mean a conceptualisation that involves a
single actor's trusting behaviour. That is, the actor performs actions,
observes the effects of these actions, and updates his trust information
appropriately. That is, an actor adapts his own behaviour based on own
experiences from interactions with other actors. We deliberately identify
"individual trust" as an example of trusting behaviour, even though we do
not see much of a societal context in this description. But we argue that
this is the fundamental model of trusting behaviour, and that more
complex concepts can be defined on top of this fundamental model.
Our model of trusting behaviour contains a conceptual component
called trust information, and this component could be regarded as a
representation of what we call "trust" in the real world.
But this model is subject to the argument based on Craig's theorem.
The trust information of an actor is iteratively modified by the
observations made by the actor, and, viewed in the long term perspective,
trust information can reasonably be said to only depend on experiences.
This means that trust information could be defined as the total history of
observations made by the actor. But it would be stretching our intuition
very thin to say that the set of our experiences of co-operating with some
other X is the trust we have for X.
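The redundancy argument can be illustrated with a toy example (the
success-rate representation is a hypothetical choice): if the explicit
trust value is a pure function of the experience history, then any decision
phrased in terms of trust can be re-phrased directly in terms of the
history, with the trust term eliminated, as Craig's theorem guarantees in
general.

    # If "trust" is defined as a function of the experience history, the
    # explicit trust value carries no information beyond the history itself.
    history_with_X = [True, True, False, True, True]  # raw experiences (observational)

    def trust_from_history(history: list[bool]) -> float:
        """A 'theoretical term': here simply the observed success rate."""
        return sum(history) / len(history) if history else 0.5

    # A decision rule phrased in terms of trust ...
    def cooperate(trust: float) -> bool:
        return trust > 0.6

    # ... is equivalent to one phrased in terms of the history alone.
    assert cooperate(trust_from_history(history_with_X)) == \
           (sum(history_with_X) / len(history_with_X) > 0.6)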
Still, there is no formally compelling reason to introduce some
representation of trust separate from the representation of our experiences.
The attitude expressed in the last sentence is an application of Occam's
razor: remove from a theory whatever is not fundamentally necessary for
the strength of the theory.
Collective trust
We now introduce another dimension, a dimension that will
provide us with an argument for treating trust as a first-class concept in
our theories. This is in the context of collective trust, trust that is
established not only based on an actor's own experiences, but also based
on the trust that other actors, in the same community, have established.
That is, actor X's trust towards some other actor A depends partly on the
trust other actors Y, Z,…, in the same community, have towards actor A.
In practice, collective trust requires communication. Actors in the
same community exchange information about trust, on demand, or proactively. Whatever protocol is used, communicated trust information
should give some added value to the recipient, i.e. improve the quality of
the recipient's trust information. For the moment we ignore the problem
of how to merge own trust information with others' trust information, and
instead focus on what kind of information should be communicated
between actors to achieve the desired sharing of trust information.
To highlight what the problem actually is, we assume a simple
scenario where we have a community of two actors, X and Y, and some
other actor, A, that is a potential co-operating partner. Both X and Y have,
during some preceding period of time, been co-operating with A, and both
X and Y have refined their respective trust information (aiding them in
deciding whether to co-operate with A or not). This establishment of trust
information was performed according to the model of individual trust, as
described earlier. Assume now that X is again contemplating whether to
co-operate with A, and requests some assistance from Y to aid in making
the decision. What should Y send to X? If we are looking at a scenario
where the members of the community (which is just X and Y) should
optimise their utility, they should communicate maximum information (the
information that enables the largest set of valid statements to be deduced).
In this scenario, the maximised information is actually the set of
experiences that Y has had from co-operating with A. Whatever statement
Y can make about the trustworthiness of A, this statement must be
deducible from the experiences of Y. Any other statement is either weaker
(enabling fewer conclusions to be drawn) or does not preserve validity
(enabling bad conclusions to be drawn). Hence, in order to maximise the
information content of the information sent from Y to X, Y should send
just the information that expresses the experiences of Y. But this
information does not have to contain any statements about trust.
Remember, experiences should be observationally grounded, and
statements about trust should be regarded as theoretical statements
(statements containing theoretical terms). Hence it seems that even in a
context where information is exchanged between actors in communities,
there is no logically compelling reason to have explicit representations of
trust.
But there is, in fact, an important practical reason for introducing
explicit trust representations in this area. And this depends on a sensible
requirement in open actor environments; protection of the privacy of
individual actors. Communicating the full experience-set of an actor
reveals a lot about this actor. In open environments, this is too strong a
requirement. Experiences are private, and others' access to such knowledge
may violate the actor's privacy. Therefore it is unreasonable to expect
that complete sets of experiences will be communicated between actors.
From a privacy point of view, optimum protection is achieved if nothing is
communicated. A reasonable compromise is that actors should send to
other actors some suitable aggregation of their experiences. And this
aggregation is what we should actually call "communicated trust".
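A minimal sketch of this compromise, under the assumption that a
count-based summary is an acceptable level of disclosure: Y sends an
aggregation of its experiences with A rather than the experiences
themselves, and X merges the summary with its own evidence. The class and
its fields are illustrative, not a proposed standard.

    from dataclasses import dataclass

    @dataclass
    class TrustSummary:
        """Communicated trust: an aggregation of experiences with one actor.
        Reveals a success rate and a weight, not the experiences themselves."""
        successes: int
        total: int

        @staticmethod
        def from_experiences(experiences: list[bool]) -> "TrustSummary":
            return TrustSummary(sum(experiences), len(experiences))

        def merge(self, other: "TrustSummary") -> "TrustSummary":
            return TrustSummary(self.successes + other.successes,
                                self.total + other.total)

        def score(self) -> float:
            return self.successes / self.total if self.total else 0.5

    # Y aggregates its private experiences with A and sends only the summary.
    y_private = [True, False, True, True, True, False, True]
    y_sends = TrustSummary.from_experiences(y_private)

    # X merges Y's summary with its own evidence about A.
    x_own = TrustSummary.from_experiences([True, True, False])
    merged = x_own.merge(y_sends)
    print(round(merged.score(), 2))               # combined estimate for A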
Hence, this is the context in which the theoretical term "trust" needs
to be explicitly represented. The question that remains is, of course, to
specify what this trust information really consists of: information that
can be composed by the sending actor and used by the receiving actor, that
adds value to the recipient, and that does not overly expose the sender.
5. Trust and eCommerce
eCommerce in open environments defines a context where trust is
critical. Can a prospective customer commit to purchasing from some
vendor through electronic channels? What kind of risks is the customer
exposed to? How critical are these risks? In analogy with the real world,
customers are willing to perform purchases electronically if they have trust
in the vendor. And this trust is to a large extent based upon the private
experiences of this customer, and on the experiences of other customers. If
experience-based trust can be evaluated automatically, this would provide
a foundation for a decision support tool for eCommerce customers. To
protect the privacy of individual customers, while still enabling their
experiences to be used by others, requires an explicit representation of
trust, a representation that enables trust information to be communicated,
merged, and used in purchasing decisions.
6. Summary and future work
Unfortunately, not much effort has been spent on meta-scientific
issues of trust. This has led to a situation where a lot of work on trust has
been done, but the fundamental questions about what trust is, why it
should be used, and how it should be made operational, are left
unanswered. This leaves us without the languages and tools that would
enable us to evaluate concrete work on operational trust, and hence most
such concrete work still resides in the twilight zone.
In this paper we have analysed the ontological foundations for trust,
in order to answer the fundamental question: what are the necessary
requirements on some representation of trust? The formal answer to this is:
we do not have to assume the existence of some idealistic entity called
trust, if we strive for an empirically validated theory of trusting behaviour.
The pragmatic answer, on the other hand, is: in order to achieve
confidentiality, privacy and protection in co-operating communities of
actors, we need to provide an explicit representation of trust.
By adopting the scientific approach of "meaning is use", we reduce
the problem of characterising what trust is to the problem of how
trust is used. A first sketch of a decision-theoretic approach to describing
the context in which trust fills a role has been presented. Further
explications of the details of this model will be presented in forthcoming
papers.
7. Bibliography
Abdul-Rahman, A. & Hailes, S. 1997. "A distributed trust model" in Proceedings of New
Security Paradigms Workshop '97, September 23-26, 1997, Langdale, Cumbria,
United Kingdom, pp. 48-60.
Achinstein, P. 1968. Concepts of Science, The Johns Hopkins Press, Baltimore, 1968.
Bacharach, M. & Gambetta, D. 2000. "Trust as type detection", in Castelfranchi, C (ed):
Deception, Fraud and Trust in Agent Societies, Kluwer, Dordrecht, 2000.
Bhattacharya, R. 1998. "A formal model of trust based on outcomes", Academy of
Management Review, July 1998.
Chen, R. and Yeager, W. n.d. Poblano - A Distributed Trust Model for Peer-to-Peer
Networks, http://www.jxta.org/project/www/docs/trust.pdf
Cheskin, 1999. eCommerce trust study, January 1999
Craig, W. 1953. "On axiomatizability within a system", Journal of Symbolic Logic, 18:1
(March), pp 30-32.
Fisman, R. & Khanna, T. 2000. "Is trust a historical residue? Information flows and trust
levels", Journal of Economic Behavior and Organization, 38(1), January 1999, pp. 79-92
Friedman, B. & Kahn, P. H. & Howe, D. C. 2000. "Trust online", CACM 43(12), December
2000, pp 34-40.
Gruber, T. R. 1993. "A Translation Approach to Portable Ontology Specifications",
Knowledge Acquisition, 5(2), 1993, pp. 199-220.
Gärdenfors, P. 1980. "Teoretiska begrepp och deras funktion", in B. Hansson (ed.), Metod
eller Anarki, Doxa, 1980, pp 77-92.
Henkin, L., P. Suppes, and A. Tarski, 1959. The axiomatic method, Amsterdam, 1959.
Houser, D. & Wooders, J. 2000. Reputation in auctions: theory, and evidence from eBay,
Working Paper, University of Arizona, February 2000
Jonkers, C. M. & Treur, J. 1999. Formal analysis of models for the dynamics of trust based
on experiences, in Garijo., F and Boman., M (eds): Proceedings of the 9th European
Workshop on Modelling Autonomous Agents in a Multi-Agent World, MAAMAW'99.
Lecture Notes in AI, Springer Verlag, Berlin, 1999
Manchala, D. W. 2000. "E-commerce trust metrics and models", in IEEE Internet
Computing, 4:2, March-April 2000, pp 36-44.
Marsh, S. 1994. Formalising trust as a computational concept, PhD Thesis, University
of Stirling, April 1994.
McKnight, D. H. & Chervany, N. L., 1996. The meanings of trust, Tech Rep 96-04, MISRC
Working Paper Series, 1996.
McKnight, D. H. & Chervany, N. L. 2001. "Conceptualizing trust: a typology and e-commerce customer relationship model", in Proceedings of the 34th Hawaii International
Conference on System Sciences, IEEE, 2001
Prietula, M.J. & Carley, K.M., 1998. "A computational model of trust and rumor", in
Proceedings of the 1998 AAAI Fall Symposium Series, Emotional and Intelligent: The
tangled knot of cognition, October 23-25, American Association for Artificial Intelligence,
1998
Putnam, H. 1962. "What theories are not", in E. Nagel, P.Suppes, and A. Tarski (eds.), Logic,
Methodology and Philosophy of Science, Stanford University Press, 1962. Reprinted in
[Putnam, 1979].
Putnam, H. 1965. "Craig's theorem", Journal of Philosophy, LXII, 10, May 1965. Reprinted
in [Putnam, 1979].
Putnam, H. 1979. Philosophical Papers, Vol 1: Mathematics, Matter and Method, 2nd edition,
Cambridge University Press, 1979.
Quine, W.V. 1948. "On what there is", Review of Metaphysics, 1948. Reprinted with
corrections in Quine, W.V., From a logical point of view, 2nd edition, Harper & Row,
1961.
Sniffen, B. T. 2000. Trust economies in the Free Haven project. MSc Thesis, M.I.T., June
2000.
Stoll, R. 1963. Set theory and logic. Freeman, 1963.
Theobald, D.W. 1968. An introduction to the philosophy of science, Methuen & Co Ltd,
London, 1968.
TRUST-EC, 1999. Requirements for trust and confidence in e-commerce, Report on a
workshop held in Luxembourg, April 8-9, 1999
Wilder, R. 1965. Introduction to the foundations of mathematics, 2nd edition, John Wiley &
Sons, 1965.
Winch, P. 1963. The idea of a social science and its relation to philosophy, Routledge &
Kegan Paul, London, 1963.