Taxonomy of Attacks and Defense Mechanisms in P2P Reputation Systems: Concepts and Applications
1. INTRODUCTION
In the last few years, a lot of research activity has been targeting Peer-to-Peer (P2P) technology,
systems, architectures and applications. Such systems comprise a number of autonomous heterogeneous
peers with intermittent presence in the network and a high level of anonymity, which interoperate in
applications such as file sharing (e.g. Gnutella [Gnutella 2008], Kazaa [Kazaa 2008], Freenet [Freenet
2008] [Clarke et al. 2000]), instant messaging (e.g. Jabber [Jabber 2008]), distributed computing (e.g.
Seti@Home [Seti@home 2003]), or P2P e-commerce (e.g. [Datta et al. 2003] and [Despotovic et al.
2004]). These applications exploit the basic characteristics of P2P technology, namely scalability, self-organization, a high degree of distribution, and the ability to function in the presence of a highly transient node population and of network and computer failures, without the need for a central server or central administration. This decentralized nature and the lack of a controlling authority expose P2P systems¹ to a broad range of security attacks, as has also been recognized by other researchers [Castro et al. 2002], [Sit and Morris 2002] and [Maniatis et al. 2004]. Examples of such attacks include:
- Denial of Service (DoS), where adversaries consume or cause waste of network-level resources
(bandwidth, buffer space) and application-level resources (computation, memory) in order to
prevent clients of the system from obtaining timely service
- Free-loading, where adversaries attempt to benefit from the system's services while contributing as little as possible to the system
- Other malicious actions, where attackers attempt to compromise the integrity of the content or of the service provided by a P2P system, or to gain access to restricted resources, by not complying with the P2P routing or storage protocols, as shown in [Castro et al. 2002].
The effect of these attacks is maximized when malicious peers cooperate with each other or when a
peer enters a P2P system with multiple identities (Sybil attack [Douceur 2002]) and orchestrates them
in order to subvert the system.
The aforementioned attacks cannot be alleviated by traditional centralized security mechanisms
which usually protect resources by assigning rights to entities based on their identities. Therefore, a
number of advanced decentralized security mechanisms have been proposed in order to protect a P2P
system from various attacks; examples include mechanisms for secure routing and secure message
forwarding [Castro et al. 2002], mechanisms for DoS defense [Maniatis et al. 2004] and so on.
In addition, peers need to make trust decisions about whom to transact with, i.e. which peer’s
services they can rely on. Trust decisions are also needed for selecting honest peers to participate in
P2P functions, such as routing and message forwarding, in order to avoid some of the aforementioned
attacks. Therefore, trust is a prominent issue in P2P computing, and necessitates the use of trust
mechanisms. Reputation systems, also referred to as reputation-based trust systems, have emerged to
address this need and constitute an important trust management mechanism in online communities.
¹ In the rest of the paper we use the terms 'P2P systems' and 'P2P applications' interchangeably.
Their main aim is to support trust decisions, i.e. to help an entity to decide whether to trust another
entity and have a transaction with it or not, by using past behavior as a predictor of future behavior.
However, reputation systems suffer themselves from attacks coming mainly from unfair or deceitful
raters, which, for example, could exploit loose registration and authentication policies in order to
deceive the system without being identified. Such attacks can be highly coordinated and powerful, and they reduce the credibility of the reputation system.
While several reputation systems exist in the research literature, there is little work regarding their
credibility and their resistance to comprehensive adversary models. The work in [Suryanarayana and
Taylor 2006], which presents a threat-centric framework for evaluating and comparing different
decentralized reputation systems, and the work in [Ruohomaa et al. 2007], which presents an analysis and comparison of reputation systems with respect to basic credibility criteria, are two of the very few examples in this area. In fact, the literature lacks a systematic analysis of the attack space related
to reputation-based trust systems for P2P applications.
2. CONCEPTS AND APPLICATIONS OF P2P REPUTATION SYSTEMS
A rich variety of reputation systems has been proposed in the literature based on either a centralized or
a decentralized architecture. Centralized reputation systems (e.g. eBay) rely on a central control or
administration of information, whereas in decentralized reputation systems (e.g. [Xiong and Liu 2004],
[Song et al. 2005], [Dillon et al. 2004], [Despotovic and Aberer 2004], etc.) reputation information
management is distributed to all participating entities. In the latter case some entities could still have a
special role, acting for example as trusted parties. Decentralized reputation systems, which are
examined in this paper, are more suitable for P2P applications where no central entities exist.
In a decentralized reputation system the participating entities play interchangeably the roles of
trustor, trustee and recommender. The trustor is an entity which wants to make a trust decision
regarding whether to participate or not in a transaction with another entity, the trustee. A transaction
can involve accessing or allowing access to a resource, e.g. a file, buying or selling goods, etc. The
recommender is the entity that provides the trustor with information regarding the trustworthiness of
the trustee; this information is known as recommendation.
To make a trust decision the trustor tries to predict the future behavior of the trustee by forming a
view of the trustee based on experience about its earlier actions. This subjective view is formed by
estimating an indicator of the quality of the trustee regarding its services (e.g. provision of a file, an e-commerce trade, etc.) and comprises the trustee's reputation or trustworthiness from the trustor's
point of view. To form a reputation view, the trustor needs to gather experience information, either by
referring to its own earlier experience with the trustee, or by acquiring it from other entities in the form
of recommendations. Recommendations can be based on the recommender’s personal experience
alone, or on a combination of personal experience and earlier recommendations from others. A
recommendation is either a rating describing a single transaction experience (transaction-based
recommendation), or an opinion formed by the outcome of several transactions and possible outside
experience (opinion-based recommendation). The aggregation of recommendations regarding the
trustee results in estimating the trustee’s reputation value, which can be translated in a way that
facilitates a trust decision.
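As an illustration of this aggregation step, the following minimal Python sketch combines a trustor's own transaction ratings with third-party recommendations into a single reputation value in [0,1]. The function name, the neutral default of 0.5 and the fixed weighting between direct and indirect evidence are assumptions made for illustration, not elements of any particular system discussed here.

def estimate_reputation(direct_ratings, recommendations, self_weight=0.6):
    """Combine the trustor's own ratings with recommendations from others
    into a single reputation value in [0, 1] (illustrative sketch only)."""
    direct = sum(direct_ratings) / len(direct_ratings) if direct_ratings else None
    indirect = sum(recommendations) / len(recommendations) if recommendations else None
    if direct is None and indirect is None:
        return 0.5                 # no evidence at all: neutral default
    if direct is None:
        return indirect            # rely on recommendations only
    if indirect is None:
        return direct              # rely on own experience only
    return self_weight * direct + (1 - self_weight) * indirect

# Example: two own experiences and three opinion-based recommendations
print(estimate_reputation([0.9, 0.8], [0.7, 0.6, 0.9]))   # weighted mix of 0.85 and ~0.73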
A trust decision is either threshold-based (e.g. [Xiong and Liu 2004], [Despotovic and Aberer
2004], [Sabater and Sierra 2002], [Dillon et al. 2004]) where the selection of the trustee for a
transaction depends on the comparison between a specific threshold and the trustee’s reputation value,
or rank-based, where the trustor compares the trustworthiness of different peers and chooses to transact
with the entity which has the highest reputation value (e.g. [Aberer and Despotovic 2001]). A slightly
different rank-based trust decision method is proposed by Papaioannou and Stamoulis [2006]; in this
approach, a peer (service requester) selects another peer (service provider) to transact with in a
probabilistically fair manner according to the rank of the latter peer. A peer’s rank is estimated based
on its own reputation and the average reputation of all peers and determines the demand that this peer
can probabilistically attract as a future service provider.
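For concreteness, the sketch below contrasts a threshold-based check, a simple rank-based choice and a probabilistic selection whose likelihood grows with reputation, loosely in the spirit of the probabilistically fair approach described above. The names, the 0.7 threshold and the use of reputation values directly as selection weights are simplifying assumptions, not the cited authors' actual algorithms.

import random

def threshold_decision(reputation, threshold=0.7):
    """Threshold-based decision: transact only if the trustee's reputation
    reaches a predefined threshold (0.7 is an arbitrary example value)."""
    return reputation >= threshold

def rank_based_choice(reputations):
    """Rank-based decision: choose the peer with the highest reputation."""
    return max(reputations, key=reputations.get)

def probabilistic_rank_choice(reputations):
    """Probabilistically fair variant: a peer's chance of being selected
    grows with its reputation (a rough analogue of rank-driven demand)."""
    peers = list(reputations)
    weights = [reputations[p] for p in peers]
    return random.choices(peers, weights=weights, k=1)[0]

reps = {"peerA": 0.9, "peerB": 0.6, "peerC": 0.75}
print(threshold_decision(reps["peerB"]))    # False: 0.6 is below the threshold
print(rank_based_choice(reps))              # 'peerA'
print(probabilistic_rank_choice(reps))      # 'peerA' most often, but not always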
The reputation of a peer is context-specific. Context may refer to the kind of service the trustee
provides, e.g. a peer could be highly reputable when it comes to providing files but have a low reputation as a recommender, or to specific attributes of such a service, e.g. the authenticity of provided files, or the quality and quantity of a product purchased from the trustee in an e-commerce trade. Reputation also depends on time, as the behavior of peers is dynamic;
furthermore, recent transactions are often considered more important for reputation estimation than
older ones.
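A common way to give recent transactions more weight is to decay older ratings exponentially; the sketch below shows one such scheme, where the 30-day half-life is a hypothetical parameter chosen purely for illustration.

import math

def decayed_reputation(ratings, now, half_life=30.0):
    """ratings: list of (value in [0, 1], timestamp in days); each rating's
    weight halves every `half_life` days of age."""
    weighted_sum, weight_total = 0.0, 0.0
    for value, timestamp in ratings:
        age = now - timestamp
        weight = math.exp(-math.log(2) * age / half_life)
        weighted_sum += weight * value
        weight_total += weight
    return weighted_sum / weight_total if weight_total else 0.5

# A recent poor experience pulls the value well below the plain average
print(decayed_reputation([(0.9, 0), (0.9, 20), (0.2, 58)], now=60))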
A reputation system may support one or multiple levels of feedback transitivity. In the case of a
single transitivity level, the trustor collects information directly from peers which have had transactions with the trustee. In the case of multiple transitivity levels, the trustor collects this information indirectly: chains of entities are formed to communicate to the trustor recommendations originating from entities which have directly interacted with the trustee. In such a chain, each pair of successive entities has transacted with each other, so every peer has estimated a reputation value for the preceding peer, and all these values are used in the estimation of the trustee's reputation value. Reputation systems with multiple transitivity levels are often
graphically presented as directed graphs, as in [Despotovic and Aberer 2004], where nodes represent
the peers, edges represent transactions between the connected peers and the weight of an edge
represents the context of the corresponding transaction and the rating of the behavior of the destination
node in the transaction as perceived by the source.
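One simple way to realize such transitivity is to discount the final recommender's rating by the reputation values along the chain, e.g. by multiplying the edge weights on the path from the trustor to that recommender. The sketch below only illustrates this idea; concrete systems such as [Despotovic and Aberer 2004] define their own propagation rules.

def chain_reputation(chain_weights, final_rating):
    """chain_weights: the reputation each peer in the chain assigns to the
    next peer; final_rating: the last recommender's rating of the trustee.
    The rating is discounted by the product of the chain weights."""
    discount = 1.0
    for weight in chain_weights:
        discount *= weight
    return discount * final_rating

# Trustor trusts A with 0.9, A trusts B with 0.8, B rates the trustee 0.7
print(chain_reputation([0.9, 0.8], 0.7))   # 0.504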
Reputation values are estimated either locally and subjectively by the trustor (e.g. [Xiong and Liu
2004], [Liang and Shi 2005]) or as global values calculated by special peers (e.g. [Kamvar et al. 2003],
[Zhou and Hwang 2007a], [Zhou and Hwang 2007b]). In the first case every peer estimates
personalized reputation values for other peers, based on its own opinions and selected
recommendations from third parties. In this way, a peer has a different reputation in each other peer's database, much as happens in real life. In the latter case, a unique global reputation value is
estimated for each peer in the network, based on the opinions from the whole peer population. Local
reputation values are stored by the trustor itself, whereas for global reputation values ‘storing peers’,
i.e. peers storing the reputation values, are determined either randomly [Singh and Liu 2003] or by
using various techniques, such as Bloom filters [Zhou and Hwang 2007a], or a Distributed Hash Table
(DHT) as proposed in [Kamvar et al. 2003], [Stoica et al. 2001] and [Ratnasamy et al. 2001], and
so on. A different approach for reputation estimation is followed in NICE [Lee et al. 2003], where the
trustee is the entity which estimates and stores its own reputation value from a chain of entities which
have interacted with each other and connect the trustor with the trustee.
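To give a feel for how 'storing peers' can be determined deterministically, the sketch below hashes a peer identifier onto a circular identifier space and picks the closest nodes as replicas, in the spirit of DHT-based placement. It is a simplified illustration under assumed identifiers, not the actual placement rule of EigenTrust, Chord or CAN.

import hashlib

def _position(identifier, space=2 ** 32):
    """Map an identifier onto a circular id space via SHA-1."""
    return int(hashlib.sha1(identifier.encode()).hexdigest(), 16) % space

def storing_peers(peer_id, node_ids, replicas=2, space=2 ** 32):
    """Return the `replicas` nodes whose positions are closest (on the
    circle) to the position of the reputation key for `peer_id`."""
    key = _position(peer_id, space)
    def distance(node_id):
        pos = _position(node_id, space)
        return min((pos - key) % space, (key - pos) % space)
    return sorted(node_ids, key=distance)[:replicas]

# Two replicas chosen among five hypothetical nodes
print(storing_peers("peer42", ["n1", "n2", "n3", "n4", "n5"]))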
The various concepts of a decentralized reputation-based trust system are illustrated in the class
diagram of Figure 1.
Based on the above and on the results of some further analysis that we conducted in the field, we
outline below the basic characteristics of a reputation system which distinguish one from another:
a) Recommendation content and its representation. This could be an arithmetic value, a statement,
or a combination of a value and associated semantic information, such as the confidence of the
recommender, context and time information, etc. A recommendation can represent only positive
behavior, only negative behavior or both types of behavior of the trustee. Regarding its
representation, various formats can be used, such as binary, scalar or continuous values in a specific
interval, such as [0,1], or text in case it contains semantic information (as in [Ingram 2003]);
b) Recommendation formation. A recommendation can be formed based on the evaluation of a
single transaction that the recommender conducted with the trustee or on aggregated ratings
regarding the transactional behavior of the trustee. Techniques that can be used for the formation can vary from manual ratings of single transactions (as in [Xiong and Liu 2004]) to probabilistic techniques (as in [Zhou and Hwang 2007b]) and statistical aggregation of simple arithmetic ratings
(as in [Kamvar et al. 2003]).
c) Selection of Recommenders. Recommenders can be selected based on their credibility as
recommenders, on knowledge regarding the social relationships between peers, on the similarity of
the recommenders’ previous recommendations with the recommendations of the trustor regarding
commonly evaluated peers, and so on (a similarity-based selection is sketched in the example following this list).
d) Reputation Estimation. It refers to the calculation approach used for transforming the available
information (both direct transactional information and recommendations from others) to a useful
reputation metric. As described in Chang et al. [2005], such an approach can be either deterministic,
probabilistic or based on fuzzy logic.
e) Storage and dissemination of reputation information. Reputation values may be estimated either
reactively (i.e. on demand) or proactively (i.e. after each transaction conducted by the trustee).
Calculated reputation values are stored either locally by the trustor or the trustee or by other peers
(which we call 'storing peers') that are determined using various techniques, as mentioned above. A reputation value may be stored by more than one 'storing peer' in order to preserve integrity. Furthermore, reputation values are communicated by the storing peers to other interested parties either upon request or using a dissemination technique, such as flooding [Lee et al. 2003].
f) The way a trust decision is made. Trust decisions are threshold-based or rank-based; they are based on the estimated reputation values, which should therefore be translated in a manner that facilitates trust decisions.
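To illustrate characteristic (c), the sketch below retains only those recommenders whose past recommendations are sufficiently close to the trustor's own ratings of commonly evaluated peers. The similarity measure (one minus the mean absolute difference) and the 0.8 cut-off are assumptions chosen for illustration rather than parameters of any cited system.

def recommendation_similarity(own_ratings, their_ratings):
    """Both arguments map peer ids to ratings in [0, 1]; only peers rated
    by both parties are compared."""
    common = set(own_ratings) & set(their_ratings)
    if not common:
        return None                              # no basis for comparison
    total_diff = sum(abs(own_ratings[p] - their_ratings[p]) for p in common)
    return 1.0 - total_diff / len(common)        # 1.0 means identical opinions

def select_recommenders(own_ratings, candidates, min_similarity=0.8):
    """candidates: map of recommender id -> that recommender's ratings."""
    chosen = []
    for recommender, ratings in candidates.items():
        similarity = recommendation_similarity(own_ratings, ratings)
        if similarity is not None and similarity >= min_similarity:
            chosen.append(recommender)
    return chosen

own = {"p1": 0.9, "p2": 0.3}
candidates = {"r1": {"p1": 0.85, "p2": 0.35}, "r2": {"p1": 0.2, "p2": 0.9}}
print(select_recommenders(own, candidates))      # ['r1']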
Figure 1. Concepts in a decentralized reputation-based trust system
Various reputation systems have been proposed for different categories of P2P applications, such as
P2P communities [Xiong and Liu 2004], [Lee et al. 2003], P2P e-commerce [Dillon et al. 2004],
[Sabater and Sierra 2002], [Song et al. 2005] and file sharing [Selcuk et al. 2004]. Each application
category exhibits specific context characteristics which affect the choices of a reputation system
designer regarding the aforementioned characteristics of a reputation system (e.g. content and
formation of the recommendation, the way recommenders are selected, etc.). A comparison of the
various design choices in a number of reputation systems for different P2P application types is
presented in [Koutrouli and Tsalgatidou 2006].
REFERENCES
ABERER, K., AND DESPOTOVIC, Z. 2001. Managing trust in a peer-2-peer information system. In Proceedings of the 10th Intl
Conference on Information and Knowledge Management (CIKM), Atlanta, 2001
CASTRO, M., DRUSCHEL, P., GANESH, A., ROWSTRON, A., AND WALLACH, D. S. 2002. Secure routing for structured peer-to-peer
overlay networks. In Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI 2002),
Boston, MA, Vol. 36, 299-314
CHANG, E., DILLON, T., AND HUSSAIN, F. K. 2005. Trust and Reputation for Service-Oriented Environments: Technologies for Building
Business Intelligence and Consumer Confidence. John Wiley & Sons, chapter 10.
CLARKE, I., SANDBERG, O., WILEY, B., AND HONG, T. W. 2000. Freenet: A distributed anonymous information storage and
retrieval system. In Proceedings of the ICSI Workshop on Design Issues in Anonymity and Unobservability, Berkeley, CA,
2000
DATTA, A., HAUSWIRTH, M., AND ABERER, K. 2003. Beyond "Web of Trust": Enabling P2P E-commerce. In Proceedings of the IEEE International Conference on E-Commerce Technology (CEC'03), Newport Beach, CA, USA, 24-27 June 2003, 303-312
DESPOTOVIC, Z.; USUNIER, J.-C. AND ABERER, K. 2004. Towards peer-to-peer double auctioning. In Proceedings of the 37th
Annual Hawaii International Conference on System Sciences (HICSS-37), January 5-8, 2004, Waikoloa, Island of Hawaii,
289-296
DILLON, T.S., CHANG, E., AND HUSSAIN, F.K. 2004. Managing the Dynamic Nature of Trust. In IEEE Intelligent Systems, 19(5), Sept/Oct 2004, 79-82
DOUCEUR, J. R. 2002. The Sybil attack. In Proceedings of the 1st International Workshop on Peer-to-Peer Systems (IPTPS02),
Cambridge, MA (USA), March 2002
FREENET 2008. The Freenet website: http://freenetproject.org/
JABBER 2008. The Jabber website: http://www.jabber.org/web/Main_Page
GNUTELLA 2008. The Gnutella web site: http://wiki.limewire.org/
INGRAM, D. 2003. Trust-based Filtering for Augmented Reality. In Proceedings of the 1st International Conference on Trust
Management, May 2003, LNCS, vol. 2692, 108-122
KAZAA 2008. The Kazaa website: http://www.kazaa.com/
KAMVAR, S., SCHLOSSER, M., AND GARCIA-MOLINA, H. 2003. The EigenTrust Algorithm for Reputation Management in P2P
Networks, In Proceedings of the World Wide Web Conference 2003, Budapest, Hungary, May 2003
KOUTROULI, E., AND TSALGATIDOU, A. 2006. Reputation-based Trust Systems for P2P Applications: Design issues and
comparison framework, In Proceedings of the 3rd International Conference on Trust, Privacy and Security in Digital
Business (TRUSTBUS 2006), Krakow, Poland, September 2006, 152-161
LEE, S., SHERWOOD, R., AND BHATTACHARJEE, B. 2003. Cooperative peer groups in NICE. In Proceedings of the 22nd Annual
Joint Conference on the IEEE Computer and Communications Societies (IEEE Infocom ’03), San Francisco, CA, USA,
March 30- April 3, 2003
LIANG, Z., AND SHI, W. 2005. PET: A personalized trust model with reputation and risk evaluation for P2P resource sharing. In Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS 2005), Vol. 7, January 3-6, 2005, IEEE Computer Society, Washington, DC, 201.2, DOI=http://dx.doi.org/10.1109/HICSS.2005.493
MANIATIS, P., GIULI, T.J., ROUSSOPOULOS, M., ROSENTHAL, D. S.H., AND BAKER, M. 2004. Impeding Attrition Attacks on P2P
Systems. In Proceedings of the 11th ACM SIGOPS European Workshop, Leuven, Belgium, September 2004
PAPAIOANNOU, TH. G., AND STAMOULIS, G. D. 2006. Enforcing Truthful-Rating Equilibria in Electronic Marketplaces, In
Proceedings of the IEEE ICDCS Workshop on Incentive-Based Computing, Lisbon, Portugal, July 2006
RATNASAMY, S., FRANCIS, P., HANDLEY, M., KARP, R., AND SHENKER, S. 2001. A scalable content-addressable network. In Proceedings of the ACM SIGCOMM 2001 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, August 27-31, 2001, 161-172
RUOHOMAA, S., KUTVONEN, L., AND KOUTROULI, E. 2007. Reputation Management Survey, In Proceedings of the 2nd
International Conference on Availability, Reliability and Security (ARES 2007), Vienna, April 2007, 103-111
SABATER, J., AND SIERRA, C. 2002. Reputation and social network analysis in multi-agent systems, In Proceedings of the 1st
International Joint Conference on Autonomous Agents and MultiAgent Systems, Bologna, 2002, 475-482
SELCUK, A. A., UZUN, E. AND PARIENTE, M. R. 2004. A Reputation-Based Trust Management System for P2P Networks, In
Proceedings of the 4th International Workshop on Global and Peer-to-Peer Computing (GP2PC), 2004
SINGH, A., AND LIU, L. 2003. TrustMe: Anonymous Management of Trust Relationships in Decentralized P2P Systems. In
Proceedings of the IEEE International Conference on Peer-to-Peer Computing, September 2003
SIT, E., AND MORRIS, R. 2002. Security Considerations for Peer-to-Peer Distributed Hash Tables. In Proceedings of the 1st
International Workshop on Peer-to-Peer Systems (IPTPS 2002), March 2002, 261 - 269
SONG, S., HWANG, K., ZHOU, R., AND KWOK, Y.-K. 2005. Trusted P2P Transactions with Fuzzy Reputation Aggregation, In IEEE
Internet Computing Magazine, Special Issue on Security for P2P and Ad Hoc Networks, Nov/Dec 2005, 24-34
STOICA, I., MORRIS, R., KARGER, D., KAASHOEK, M.F., AND BALAKRISHNAN, H. 2001. Chord: A scalable peer-to-peer lookup
service for internet applications. In Proceedings of the ACM SIGCOMM 2001 Conference on Applications, Technologies,
Architectures, and Protocols for Computer Communication, August 27-31, 2001, 149–160
SURYANARAYANA, G., AND TAYLOR, R. N. 2006. TREF: A Threat-centric Comparison Framework for Decentralized Reputation
Models, ISR Technical Report UCI-ISR-06-2, January 2006
XIONG, L., AND LIU, L. 2004. PeerTrust: Supporting reputation-based trust for peer-to-peer electronic communities, In IEEE
Transactions on Knowledge and Data Engineering, 16(7), July 2004, 843–857
ZHOU, R., AND HWANG, K. 2007a. Gossip-based reputation management for unstructured peer-to-peer networks. In IEEE Transactions on Knowledge and Data Engineering, 20(9), Sept. 2008, 1282-1295
ZHOU, R., AND HWANG, K. 2007b. PowerTrust: A robust and scalable reputation system for trusted peer-to-peer computing. In IEEE Transactions on Parallel and Distributed Systems, 18(4), April 2007, 460-473