A Probabilistic Rating Framework for Pervasive Computing Environments
Lik Mui, Mojdeh Mohtashemi, Cheewee Ang
{lmui, mojdeh, cheewang}@mit.edu
Laboratory for Computer Science/MIT, 200 Technology Square, Cambridge, USA
Abstract
This paper presents a probabilistic framework for characterizing how ratings from one agent toward another can be propagated within the iMatch (http://imatch.lcs.mit.edu/) multi-agent environment. We term this framework "collaborative sanctioning" (CS) to distinguish it from collaborative filtering (CF). CF pools the collective experiences of all users in a system, and this pooling is performed without weighting the reliability (or reputation) of the users. The weight that any given individual should carry in the combined rating is the subject of CS. CS abstracts reputation and related quantities of interest and specifies rules for manipulating them in order to model rating interactions among users. Experiments with simulated and real-world rating data are briefly reported.
1 Introduction
[Figure 1. Module Dependency Diagram (MDD) for the interaction of an iMatch agent with services in its environment: agent, environment, service ontology server, human interface, message transport, white pages, yellow pages, agent registration server, collaborative sanctioning server, and UDDI service registry.]
Most rating systems are built in an ad-hoc fashion with arbitrary formulas and values (e.g., eBay, Amazon, [1, 7, 8]). Recent high-profile fraud cases on auction sites involving users with high ratings in these systems suggest that care must be taken in designing these systems. This paper provides a probabilistic framework for characterizing approval ratings from one agent to another. The sociological aggregate of users' approval ratings of each other can be interpreted as the users' "reputation".
2 Collaborative Sanctioning Model
iMatch is a new infrastructure project to equip each student and staff member in academic environments with personal software agents. Each iMatch agent helps manage its owner's academic life through both static and dynamic profile matching within and across campuses. A generic multi-agent environment is being developed to support a range of dynamic resource discovery tasks. The Module Dependency Diagram for how an agent interacts with its environment is depicted in Figure 1. The iMatch agent architecture is discussed elsewhere [2].
Each service in the environment is implemented as a web service conforming to the UDDI web service interface (http://www.uddi.org/). This paper is concerned with one of the services provided: the collaborative sanctioning server. Within a pervasive computing environment, the collaborative sanctioning server provides a reliability assessment service for objects or agents in the environment about each other. This assessment is made possible through ratings implicitly or explicitly provided by agents.

Agents exist in an environment of objects $O = \{ o_1, o_2, \ldots, o_N \}$ and other agents $A = \{ a_1, a_2, \ldots, a_M \}$. Users rate the objects, which could be other users. The set of ratings $\theta$ is a mapping:

Object Rating:
$\theta : A \times O \to [0, 1]$

Note that objects can also be agents. To model the process of sharing ratings among agents, the concept of an encounter is required. An encounter is an event between two agents $(a_i, a_j)$ in which the query agent ($a_i$) asks the response agent ($a_j$) for $a_j$'s rating of an object:

Encounter:
$d \in D \subseteq A \times A \times O \times ([0, 1] \cup \{\perp\})$
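As a purely illustrative sketch (not part of the iMatch implementation), the rating mapping and an encounter could be represented as follows in Python; the type names and the use of None for the "no rating" outcome ($\perp$) are our own assumptions.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

AgentId = str
ObjectId = str

# Object Rating: theta : A x O -> [0, 1], stored sparsely for rated pairs only.
ratings: Dict[Tuple[AgentId, ObjectId], float] = {
    ("a1", "o7"): 0.9,  # agent a1 strongly approves of object o7
    ("a2", "o7"): 0.2,
}

@dataclass
class Encounter:
    """One encounter d in D: the query agent asks the response agent for its
    rating of an object; rating is None when no rating is returned (the
    'no rating' symbol in the definition above)."""
    query_agent: AgentId
    response_agent: AgentId
    obj: ObjectId
    rating: Optional[float]  # in [0, 1], or None

d = Encounter(query_agent="a1", response_agent="a2", obj="o7", rating=0.2)
```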
The reputation of $a_j$ in $a_i$'s mind is defined here as the probability that, in the next encounter, $a_j$'s rating of a new object will be the same as $a_i$'s rating. Reputations are clearly context-dependent quantities, so a context has to be taken into consideration. A context describes a set of attributes about an environment. Let an attribute be defined as the presence ('1') or absence ('0') of a trait. The set of all attributes is possibly countably infinite:

Attribute:
$b \in B$ and $b : O \to \{ 0, 1 \}$

Set of attributes: $B = \{ b_1, b_2, \ldots \}$

A context is then an element of the power set of B:

Context:
$c \in C$ and $C \subseteq \mathcal{P}\{B\}$

where $\mathcal{P}\{\cdot\}$ represents the power set. The reputation mapping can now be represented by:
Reputation:
$R : A \times A \times C \to [0, 1]$
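Continuing the illustrative sketch above, contexts and the reputation mapping might be keyed as follows; the representation choices (strings for attributes, a frozenset for a context) are ours, not the paper's.

```python
from typing import Dict, FrozenSet, Tuple

Attribute = str                 # an attribute b marks the presence or absence of a trait
Context = FrozenSet[Attribute]  # a context c is an element of the power set of B

# Reputation: R : A x A x C -> [0, 1], stored sparsely per (rater, ratee, context).
reputation: Dict[Tuple[str, str, Context], float] = {}

movie_ctx: Context = frozenset({"movies", "comedy"})
reputation[("a1", "a2", movie_ctx)] = 0.75  # a2's reputation in a1's mind, in this context
```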
An agent's reputation in the eyes of others is learned over encounters among them. Assume that reputation only changes after an actual encounter (either directly with the individual involved, or through reputation sharing among other agents). The update rule for reputation values in the system can be represented as:

Update rule:
$\mathit{newstate} : R \times D \to R$

The functional form of this update is discussed below.
Bayesian Inference

Let $x_{ab}(i)$ be the indicator variable for $a$'s approval of $b$ after the $i$th encounter between them. If $a$ and $b$ have had $n$ encounters in the past, the proportion of approvals of $b$ by $a$ can be modeled with a Beta prior distribution. Let $\hat{\theta}$ be the estimator for the true proportion $\theta$ of approvals of $b$ by $a$; then $p(\hat{\theta}) \sim \mathrm{Beta}(c_1, c_2)$, where $c_1$ and $c_2$ are parameters determined by prior assumptions. Assuming that each encounter's approval probability is independent of the other encounters between $a$ and $b$, the likelihood of observing $p$ approvals and $(n - p)$ disapprovals can be modeled as $L(D \mid \hat{\theta}) \propto \hat{\theta}^{\,p} (1 - \hat{\theta})^{\,n - p}$, where $D$ represents the set of encounters between them for a specific context. Combining the likelihood with its conjugate prior, the posterior for $\hat{\theta}$ can easily be shown to be a Beta distribution as well. In our framework, the reputation of $b$ in $a$'s mind is $a$'s estimate of the probability that $a$ will approve of $b$ in the next encounter $x_{ab}(n + 1)$. This estimate can be shown to be [4]:

$p(x_{ab}(n + 1) = 1 \mid D) = \int \hat{\theta}\, p(\hat{\theta} \mid D)\, d\hat{\theta} = E[\hat{\theta} \mid D] = \dfrac{c_1 + p}{c_1 + c_2 + n}$

This conditional expectation is the operational definition of reputation, $r_{ab}$, and of the update rule above.
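To make the bookkeeping concrete, here is a minimal Python sketch of the Beta-Bernoulli reputation estimate described above; the class and method names are ours, and the defaults $c_1 = c_2 = 1$ correspond to the uniform prior discussed next.

```python
class DirectReputation:
    """Beta-Bernoulli estimate of how often agent a approves of agent b.

    Illustrative sketch only: the interface is not taken from the paper.
    """

    def __init__(self, c1: float = 1.0, c2: float = 1.0):
        # Beta(c1, c2) prior; c1 = c2 = 1 gives the uniform prior used
        # for agents with no prior encounters.
        self.c1 = c1
        self.c2 = c2
        self.approvals = 0    # p: encounters rated "approve"
        self.encounters = 0   # n: total encounters in this context

    def update(self, approved: bool) -> None:
        """Record the outcome of one encounter (the update rule)."""
        self.encounters += 1
        if approved:
            self.approvals += 1

    def value(self) -> float:
        """Posterior mean E[theta | D] = (c1 + p) / (c1 + c2 + n),
        i.e. the probability that the next encounter is approved."""
        return (self.c1 + self.approvals) / (
            self.c1 + self.c2 + self.encounters)


r_ab = DirectReputation()
for outcome in [True, True, False, True]:   # four past encounters
    r_ab.update(outcome)
print(r_ab.value())   # (1 + 3) / (1 + 1 + 4) = 0.666...
```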
The prior probability of $\theta$ for two agents with no prior encounters can be modeled by a uniform distribution; for a Beta prior, the values $c_1 = 1$ and $c_2 = 1$ yield such a distribution. Suppose $a_i$ and $a_k$ have never met before, but $a_i$ knows $a_j$ well (i.e., $a_i$ has an opinion on $a_j$'s reputation) and $a_j$ knows $a_k$ well. Then $a_i$'s estimate of $a_k$'s reputation, based on $a_i$'s history of encounters with $a_j$ and $a_j$'s history of encounters with $a_k$, is the probability that a future encounter between $a_i$ and $a_k$ will be rated good by $a_i$. This can be shown to be:

$p(x_{ik}(n + 1) = 1 \mid D_{ij}, D_{jk}) = r_{ij} r_{jk} + (1 - r_{ij})(1 - r_{jk})$

For multiply connected indirect neighbors, the maximum-probability path through direct neighbors is chosen for inferring the reputation of indirect neighbors. Recent advances in belief propagation in multiply connected networks are being investigated [6].
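The chaining rule and the maximum-probability-path heuristic can be sketched as follows (illustrative Python; the two-hop restriction, the graph representation, and the uniform-prior fallback of 0.5 are our own reading, not a specification from the paper):

```python
def chain(r_ij: float, r_jk: float) -> float:
    """Reputation of k in i's mind inferred through intermediary j:
    p(x_ik = 1 | D_ij, D_jk) = r_ij*r_jk + (1 - r_ij)*(1 - r_jk)."""
    return r_ij * r_jk + (1.0 - r_ij) * (1.0 - r_jk)


def indirect_reputation(direct: dict, i: str, k: str) -> float:
    """Estimate i's reputation for an indirect neighbor k by choosing,
    among i's direct neighbors j that know k, the maximum-probability
    two-hop path. `direct[a][b]` holds the learned direct reputation r_ab.
    Falls back to the uniform-prior value 0.5 when no path exists."""
    candidates = [
        chain(direct[i][j], direct[j][k])
        for j in direct.get(i, {})
        if k in direct.get(j, {})
    ]
    return max(candidates, default=0.5)


direct = {"ai": {"aj": 0.9, "am": 0.6},
          "aj": {"ak": 0.8},
          "am": {"ak": 0.3}}
print(indirect_reputation(direct, "ai", "ak"))  # max(0.74, 0.46) = 0.74
```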
3 Experiments and Discussions
To compare against the Bayesian approach for estimating reputation, two other approaches are considered. A global reputation approach uses an eigenvector method often employed by sociologists [5]. Direct neighbors' reputations are known from the agents' encounters. For indirect neighbors, every user adopts the reputation ranking of the most reputable individual as its own ranking of its indirect neighbors. As a control measure, indirect neighbors can also be ranked randomly. Note that when the number of first-degree (direct) neighbors equals the total number of users minus 1, the ranking error should be 0, since there are no second-degree (indirect) neighbors in this case.
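The paper does not spell out the eigenvector computation behind the global measure; one standard choice, eigenvector centrality of the matrix of direct reputations obtained by power iteration, is sketched here purely for reference (function name, normalization, and convergence criterion are our assumptions):

```python
def eigenvector_centrality(rep, iters=100, tol=1e-9):
    """Power iteration on a matrix of direct reputations (rep[i][j] = r_ij).
    Returns one global reputation score per agent. This is a generic
    sociological eigenvector measure in the spirit of [5], not necessarily
    the exact computation used in the paper's experiments."""
    agents = sorted(rep)
    score = {a: 1.0 / len(agents) for a in agents}
    for _ in range(iters):
        # Each agent's new score aggregates incoming ratings weighted by
        # the current score of the rater.
        new = {a: sum(rep[b].get(a, 0.0) * score[b] for b in agents)
               for a in agents}
        norm = sum(new.values()) or 1.0
        new = {a: v / norm for a, v in new.items()}
        if max(abs(new[a] - score[a]) for a in agents) < tol:
            return new
        score = new
    return score


R = {"a1": {"a2": 0.9, "a3": 0.4},
     "a2": {"a1": 0.8, "a3": 0.7},
     "a3": {"a1": 0.5, "a2": 0.6}}
print(eigenvector_centrality(R))
```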
[Figure 2. Ranking error (as a fraction of maximum) versus neighborhood size for the Bayesian, global, and control strategies in a simulated community of 40 agents.]

Figure 2 shows results for 40 agents in a simulated community as a function of the number of direct neighbors. Plotted is the ranking error of the three strategies for estimating the reputation of indirect neighbors. The global reputation measure leads to the most error, while the Bayesian algorithm leads to the least. The same experimental protocol as in the simulation was carried out on a set of MovieLens data [3]. This dataset consists of 100,000 ratings by 943 users on 1,682 movies spanning 19 genres. Qualitatively similar results to those in Figure 2 were obtained. Detailed descriptions of the experiments can be found in [4].
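The ranking error is reported as a fraction of its maximum but is not defined explicitly here (see [4]); a normalized pairwise-disagreement (Kendall-tau-style) distance such as the following is one plausible reading, offered only as an assumption:

```python
from itertools import combinations

def ranking_error(predicted, truth):
    """Fraction of agent pairs ordered differently by the predicted and
    true reputation rankings (0 = identical order, 1 = fully reversed).
    `predicted` and `truth` map each agent to a reputation score.
    This metric is our assumption; the paper's exact measure is in [4]."""
    agents = list(truth)
    pairs = list(combinations(agents, 2))
    disagreements = sum(
        1 for a, b in pairs
        if (predicted[a] - predicted[b]) * (truth[a] - truth[b]) < 0
    )
    return disagreements / len(pairs) if pairs else 0.0


print(ranking_error({"a1": 0.9, "a2": 0.3, "a3": 0.6},
                    {"a1": 0.8, "a2": 0.5, "a3": 0.4}))  # 1/3
```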
References
[1] Glass, A., Grosz, B (2000). “Socially Conscious
Decision-Making.” Agents’2000.
[2] McGeachie, M. (2001). "Utility Function for Autonomous Agent Control," submitted to Oxygen Workshop.
[3] MovieLens. http://movielens.umn.edu/
[4] Mui, L., Ang, C., Mohtashemi, M. (2001). “A
Probabilistic Model for Collaborative Sanctioning,”
MIT LCS Technical Memorandum 617.
[5] Wasserman, S. (1994). Social Network Analysis. Cambridge University Press.
[6] Weiss, Y., Freeman, W. T. (1999). “Correctness of
Belief Propagation in Gaussian Graphical Models of
Arbitrary Topology.” Mitsubishi Electric Research
Laboratory Report T99-33.
[7] Yu, B., Singh, M. P. (2000). “A Social Mechanism of
Reputation Management in Electronic Communities.”
CIA’2000.
[8] Zacharia, G., Maes, P. (1999). “Collaborative
Reputation Mechanisms in Electronic Marketplaces.”
Proc. 32nd HICSS’1999.