An Efficient TCM Supervised Learning Approach With Naïve Bayesian Classifier

International Journal of Engineering Trends and Technology (IJETT) – Volume 4 Issue 10 - Oct 2013
ISSN: 2231-5381 | http://www.ijettjournal.org

Naveen Gosu 1, Vakacharla Durga Prasada Rao 2, N. Tulasi Radha 3
1 M.Tech Scholar, Dept of CSE, Pydah College of Engineering and Technology, Boyapalem, A.P.
2,3 Associate Professor, Dept of CSE & IT, Kaushik College of Engineering, Boyapalem, A.P.
Abstract:- In this paper we propose an efficient trust computation method for identifying and admitting nodes based on trust measures. Most existing approaches rely on statistical measures such as direct and indirect trust computations. These approaches are not optimal, because an anonymous user may not have (or use) the same set of characteristics as in a previous connection. We therefore propose an efficient trust computation method that departs from the traditional ones: it is based on classification, analyzing a node or user at the time it connects to the network.
I. INTRODUCTION
A wide variety of networked computer systems (such
as the Grid, the Semantic Web, and peer-to-peer systems)
can be viewed as multi-agent systems (MAS) in which the
individual components act in an autonomous and flexible
manner in order to achieve their objectives [1]. An important
class of these systems is those that are open; here defined as
systems in which agents can freely join and leave at any
time and where the agents are owned by various
stakeholders with different aims and objectives. From these
two features, it can be assumed that in open MAS:
(1) the agents are likely to be unreliable and self-interested; (2) no agent can know everything about its
environment; and (3) no central authority can control all the
agents. Despite these many uncertainties, a key component
of such systems is the interactions that necessarily have to
take place between the agents. Moreover, as the individuals
only have incomplete knowledge about their environment
and their peers, trust plays a central role in facilitating these
interactions [1]-[4]. Specifically, trust is here defined as the
subjective probability with which an agent a assesses that
another agent b will perform a particular action, both before
a can monitor such action and in a context in which it affects
its own action (adapted from [4]). Generally speaking, trust
can arise from two views: the individual and the society
level.
The former consists of agent a’s direct experiences
from interactions with agent b and the various relationships
that may exist between them (e.g. owned by the same
organization, relationships derived from relationships
between the agents’ owners in the real life such as friendship
or relatives, relationships between a service provider agent
and its registered consumer agents). The latter consists of
observations by the society of agent b’s past behavior (here
termed its reputation). These indirect observations are
aggregated in some way to define agent b’s past behavior
based on the experiences of all the participants in the
system [5], [6], [7].
Given its importance, a number of computational
models of trust and reputation have been developed (see
Section 4), but none of them are well suited to open MAS.
Specifically, given the above characteristics, in order to
work efficiently in an open MAS, a trust model needs to
possess the following properties:
1. It should take into account a variety of sources of trust
information in order to have a more precise trust measure
(by cross correlating several perspectives) and to cope with
the situation that some of the sources may not be available.
2. Each agent should be able to evaluate trust for itself.
Given the ‘no central authority’ nature of open MAS, agents
will typically be unwilling to rely solely on a single
centralized trust/reputation service.
3. It should be robust against possible lying from agents
(since the agents are self-interested).
II. RELATED WORK
A wide variety of trust and reputation models have been developed in the last few years (e.g. [1], [4], [5], [7], [9], [10]). This section reviews a selection of notable models and shows how computational trust models have evolved in recent years, with particular emphasis on their applicability in open MAS. It first covers models that derive trust using certificates, rules, and policies. Second, it surveys the popular trust models that follow the centralized approach, in which witness observations are reported to a central authority. Finally, it discusses notable models that follow the decentralized approach, in which no central authority is needed for trust evaluations. From now on, for convenience in referring to agents, we call the agent evaluating the trustworthiness of another the evaluator, or agent a; and the agent being evaluated by a the target agent, or agent b.
As can be seen in the previous section, trust can
come from a number of information sources: direct
experience, witness information, rules or policies. However,
due to the openness of a MAS, the level of knowledge of an
agent about its environment and its peers may vary greatly
during its life cycle. Therefore, at any given time, some
information sources may not be available, or adequate, for
deducing trust. For example, the following situations may
(independently) happen:
– An agent may never have interacted with a given target
agent and, hence, its experience
cannot be used to deduce how trustworthy the target agent
is.
– An agent may not be able to locate a witness for the target
agent (because of a lack of knowledge about the target
agent’s society) and, therefore, it cannot obtain witness
information about that agent’s behaviors.
– The current set of rules to determine the level of trust is not applicable to the target agent.
In such scenarios, trust models that use only one source of information will fail to provide a trust value for the target agent. For that reason, FIRE adopts a broader base of information than has hitherto been used for providing trust-related information. Although the number of sources that provide trust-related information can vary greatly from application to application, we consider that most of them can be categorized into the four main sources as follows:
– Direct experience: The evaluator uses its previous
experiences in interacting with the
target agent to determine its trustworthiness. This type of
trust is called Interaction Trust.
– Witness information: Assuming that agents are willing to
share their direct experiences,
the evaluator can collect experiences of other agents that
interacted with the target agent.
Such information will be used to derive the
trustworthiness of the target agent based on the views of its
witnesses. Hence this type of trust is called Witness
Reputation.
– Role-based rules: Besides an agent’s past behavior (which is used in the two previous types of trust), there are certain other types of information that can be used to deduce trust. These can be the various relationships between the evaluator and the target agent, or its knowledge about its domain (e.g. norms, or the legal system in effect). For example, an agent may be preset to trust any other agent that is owned, or certified, by its owner; it may trust that any authorized dealer will sell products complying with its company’s standards; or it may trust another agent if it is a member of a trustworthy group. Such settings or beliefs (which are
mostly domain-specific) can be captured by rules based on
the roles of the evaluator and the target agent to assign a
predetermined trustworthiness to the target agent. Hence this
type of trust is called Role-based Trust.
– Third-party references provided by the target agents: In
the previous cases, the evaluator needs to collect the
required information itself. However, the target agent can
also actively seek the trust of the evaluator by presenting
arguments about its trustworthiness. In this paper, such
arguments are references produced by the agents that have interacted with the target agent, certifying its behavior.
However, in contrast to witness information which needs to
be collected by the evaluator, the target agent stores and
provides such certified references on request to gain the trust
of the evaluator. Those references can be obtained by the
target agent (assuming the cooperation of its partners) from
only a few interactions, thus, they are usually readily
available. This type of trust is called Certified Reputation.
Apart from these traditional trust computation methods, we introduce an efficient and novel approach for calculating the trust of an agent or node in multi-agent systems.
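As an illustration of how the four sources above can be combined, the sketch below averages whichever trust components happen to be available and degrades gracefully when a source cannot be consulted. It is not taken from any cited model; the function name and the weights are hypothetical choices for illustration only.

```python
# Illustrative sketch: combine the four trust sources, falling back
# gracefully when a source is unavailable (represented as None).
def overall_trust(interaction, witness, role_based, certified):
    """Weighted mean over whichever trust components are available."""
    # Hypothetical weights; a real deployment would tune these.
    weighted = [(0.4, interaction), (0.3, witness),
                (0.2, role_based), (0.1, certified)]
    available = [(w, v) for w, v in weighted if v is not None]
    if not available:
        return None  # no source can deduce trust for this target
    total = sum(w for w, _ in available)
    return sum(w * v for w, v in available) / total

# Only interaction trust and role-based trust are available here.
print(overall_trust(0.8, None, 0.5, None))
```

The renormalization by the total available weight is what lets the evaluator still produce a value when, say, no witnesses can be located.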
III. PROPOSED WORK
Nowadays, identifying unauthorized users in a network remains an important research issue for peer-to-peer connections. Networks are protected using many firewalls and encryption packages, but many of these are neither sufficient nor effective. Most trust computation systems for mobile ad hoc networks focus on either routing protocols or their efficiency, but fail to address security issues. Some nodes may be selfish, for example by not forwarding packets to the destination, thereby saving battery power. Others may act maliciously by launching security attacks such as denial of service, or by hacking information. The ultimate goal of security solutions for wireless networks is to provide security services, such as authentication, confidentiality, integrity, anonymity, and availability, to mobile users. This paper incorporates agents and data mining techniques to prevent anomaly intrusion in mobile ad hoc networks. A home agent present in each system collects data from its own system and uses data mining techniques to observe local anomalies. Mobile agents monitor the neighboring nodes and collect information from neighboring home agents to determine the correlation among the observed anomalous patterns before the data is sent. This system was able to stop all of the successful attacks in ad hoc networks and to reduce false positives.
In our approach we propose an efficient classification-based method for analyzing anonymous users: it calculates trust measures by comparing the anonymous testing data against the training data. Our architecture comprises the following modules: analysis agent, neighboring node, data collection, and data preprocessing, described as follows.
1) Analysis agent – The analysis agent, or home agent, is present in each system and monitors its own system continuously. If an attacker sends any packet to gather information, or broadcasts through this system, the agent calls the classifier construction to detect the attack. If an attack has been made, it filters the respective system from the global network.
2) Neighboring node – When any system in the network transfers information to another system, it broadcasts through an intermediate system. Before transferring the message, it sends a mobile agent to the neighboring node to gather all the information; when the agent returns, the system calls the classifier rule to detect attacks. If there is no suspicious activity, it forwards the message to the neighboring node.
3) Data collection – A data collection module is included in each anomaly detection subsystem to collect the feature values for the corresponding layer in a system. A normal profile is created using the data collected during the normal scenario; attack data is collected during the attack scenario.
4) Data preprocess – The audit data is collected in a file and smoothed so that it can be used for anomaly detection. Data preprocessing prepares the information as training and test data. The preprocessing technique mentioned above is used in the anomaly detection systems of every layer.
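The interplay of the analysis agent and the neighboring-node check can be sketched as follows. This is a minimal illustration only; the class and function names (AnalysisAgent, forward) and the toy packet-rate classifier are hypothetical, not part of the system described above.

```python
# Minimal sketch of the module interplay: a home agent inspects traffic
# with a classifier and filters attacking systems; a node consults
# mobile-agent reports before forwarding a message.
class AnalysisAgent:
    """Home agent: monitors its own system and filters attackers."""

    def __init__(self, classifier):
        self.classifier = classifier   # trained anomaly classifier
        self.blocked = set()           # systems filtered from the network

    def inspect(self, packet):
        # Run the classifier on the packet's features.
        if self.classifier(packet["features"]):
            self.blocked.add(packet["source"])  # filter the attacking system
            return False
        return True

def forward(agent, packet, neighbor_reports):
    """Neighboring-node step: check mobile-agent reports before forwarding."""
    suspicious = any(agent.classifier(report) for report in neighbor_reports)
    if suspicious or not agent.inspect(packet):
        return None      # drop: suspected attack
    return packet        # no suspicious activity: forward the message

# Toy classifier: flag any feature vector with an extreme packet rate.
clf = lambda feats: feats.get("pkt_rate", 0) > 1000
agent = AnalysisAgent(clf)
ok = forward(agent, {"source": "n1", "features": {"pkt_rate": 5}}, [])
```

In a real deployment the lambda would be replaced by the trained Bayesian classifier described in the next section.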
For the classification process we use a Bayesian classifier to analyze the neighbor node's testing data against the training information. A Bayesian classifier is defined by a set C of classes and a set A of attributes. A generic class belonging to C is denoted by cj, and a generic attribute belonging to A by Ai. Consider a database D with a set of attribute values and the class label of each case. Training the naïve Bayesian classifier consists of estimating the conditional probability distribution of each attribute, given the class.
In our example we consider a synthetic dataset of various anonymous and non-anonymous users: node names, protocol types, numbers of packets transmitted, and class labels. This is taken as our feature set C = (c1, c2, ..., cn) for training the system. We calculate the overall probability for the positive class and the negative class, then calculate the posterior probability with respect to all the features, and finally calculate the trust probability.
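The training step described above can be sketched on a small invented dataset. All values here are illustrative assumptions: the feature names (protocol, packet volume) and the labels are not the paper's actual data.

```python
# Hedged sketch of naive Bayes training: estimate class priors and
# per-feature conditional probabilities by counting over a synthetic
# dataset of the kind described above.
from collections import Counter, defaultdict

train = [
    # (protocol, packet_volume, label); label is "trusted" or "anonymous"
    ("tcp", "low",  "trusted"),
    ("tcp", "high", "anonymous"),
    ("udp", "high", "anonymous"),
    ("tcp", "low",  "trusted"),
    ("udp", "low",  "trusted"),
]

priors = Counter(label for *_, label in train)   # class counts Si
cond = defaultdict(Counter)                      # counts per (attribute, class)
for *features, label in train:
    for i, v in enumerate(features):
        cond[(i, label)][v] += 1

n = len(train)

def p_class(label):
    """P(Ci) = Si / S."""
    return priors[label] / n

def p_feature(i, value, label):
    """P(xk | Ci), estimated from the training counts."""
    return cond[(i, label)][value] / priors[label]
```

With these two estimators, the posterior needed for the trust decision is obtained by multiplying p_class by the p_feature terms, as formalized in the algorithm below.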
Algorithm to classify a malicious agent:
Sample space: the set of agents.
H = the hypothesis that X is a malicious agent.
P(H|X) is our confidence that X is a malicious agent, given its observed behavior X.
P(H) is the prior probability of H, i.e., the probability that any given data sample is a malicious agent regardless of its behavior.
P(H|X) is based on more information; P(H) is independent of X.
Estimating probabilities: P(X), P(H), and P(X|H) may be estimated from the given data.
Bayes' theorem:
P(H|X) = P(X|H) P(H) / P(X)
Steps involved:
1. Each data sample is of the form X = (x1, ..., xn), where xi is the value of X for attribute Ai.
2. Suppose there are m classes Ci, i = 1, ..., m. Then X ∈ Ci iff P(Ci|X) > P(Cj|X) for 1 ≤ j ≤ m, j ≠ i; i.e., the Bayesian classifier assigns X to the class Ci having the highest posterior probability conditioned on X. The class for which P(Ci|X) is maximized is called the maximum posterior hypothesis. From Bayes' theorem, P(Ci|X) = P(X|Ci) P(Ci) / P(X).
3. P(X) is constant across classes, so only P(X|Ci) P(Ci) need be maximized. If the class prior probabilities are not known, assume all classes to be equally likely; otherwise maximize with P(Ci) = Si / S, where Si is the number of training samples in class Ci and S is the total number of training samples. Problem: computing P(X|Ci) directly is infeasible!
4. Naïve assumption: attribute independence, so P(X|Ci) = P(x1, ..., xn|Ci) = ∏k P(xk|Ci).
5. To classify an unknown sample X, evaluate P(X|Ci) P(Ci) for each class Ci. Sample X is assigned to class Ci iff P(X|Ci) P(Ci) > P(X|Cj) P(Cj) for all j ≠ i.
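Steps 1-5 can be sketched end to end on a small invented dataset: estimate P(Ci) and P(xk|Ci) from counts, then assign an unknown sample to the class maximizing P(X|Ci)P(Ci). The feature names and data values are hypothetical illustrations, not the paper's experimental data.

```python
# Self-contained sketch of the naive Bayes classification steps above.
from collections import Counter, defaultdict

train = [
    # (protocol, packet_volume, label)
    ("tcp", "low",  "trusted"),
    ("tcp", "high", "anonymous"),
    ("udp", "high", "anonymous"),
    ("tcp", "low",  "trusted"),
    ("udp", "low",  "trusted"),
]

priors = Counter(label for *_, label in train)   # Si per class
cond = defaultdict(Counter)                      # counts per (attribute, class)
for *features, label in train:
    for i, v in enumerate(features):
        cond[(i, label)][v] += 1

def classify(x):
    """Assign x to the class Ci maximizing P(X|Ci) * P(Ci)."""
    best_label, best_score = None, -1.0
    for label, count in priors.items():
        score = count / len(train)               # P(Ci) = Si / S
        for i, v in enumerate(x):                # naive independence:
            score *= cond[(i, label)][v] / count # P(X|Ci) = prod P(xk|Ci)
        if score > best_score:                   # maximum posterior hypothesis
            best_label, best_score = label, score
    return best_label

print(classify(("udp", "high")))   # → anonymous
```

Note that a zero count for any P(xk|Ci) zeroes the whole product; practical implementations usually apply Laplace smoothing to the counts, which is omitted here for brevity.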
IV. CONCLUSION
We proposed a novel and efficient trust computation mechanism with a naïve Bayesian classifier, analyzing new agent information against existing agent information by classifying the feature sets, or characteristics, of the agents. This approach shows better results than the traditional trust computation approaches.
REFERENCES
[1] N.R. Jennings, “An Agent-Based Approach for Building Complex Software Systems,” Comm. ACM, vol. 44, no. 4, pp. 35-41, 2001.
[2] R. Steinmetz and K. Wehrle, Peer-to-Peer Systems and Applications. Springer-Verlag, 2005.
[3] Gnutella, http://www.gnutella.com, 2000.
[4] Kazaa, http://www.kazaa.com, 2011.
[5] edonkey2000, http://www.emule-project.net, 2000.
[6] I. Foster, C. Kesselman, and S. Tuecke, “The Anatomy of the Grid: Enabling Scalable Virtual Organizations,” Int’l J. High Performance Computing Applications, vol. 15, no. 3, pp. 200-222, 2001.
[7] T. Berners-Lee, J. Hendler, and O. Lassila, “The Semantic Web,” Scientific Am., pp. 35-43, May 2001.
[8] D. Saha and A. Mukherjee, “Pervasive Computing: A Paradigm for the 21st Century,” Computer, vol. 36, no. 3, pp. 25-31, Mar. 2003.
[9] S.D. Ramchurn, D. Huynh, and N.R. Jennings, “Trust in Multi-Agent Systems,” The Knowledge Eng. Rev., vol. 19, no. 1, pp. 1-25, 2004.
[10] P. Dasgupta, “Trust as a Commodity,” Trust: Making and Breaking Cooperative Relations, vol. 4, pp. 49-72, 2000.
[11] P. Resnick, K. Kuwabara, R. Zeckhauser, and E. Friedman, “Reputation Systems,” Comm. ACM, vol. 43, no. 12, pp. 45-48, 2000.
[12] A.A. Selcuk, E. Uzun, and M.R. Pariente, “A Reputation-Based Trust Management System for P2P Networks,” Proc. IEEE Int’l Symp. Cluster Computing and the Grid (CCGRID ’04), pp. 251-258, 2004.
[13] M. Gupta, P. Judge, and M. Ammar, “A Reputation System for Peer-to-Peer Networks,” Proc. 13th Int’l Workshop Network and Operating Systems Support for Digital Audio and Video (NOSSDAV ’03), pp. 144-152, 2003.
[14] K. Aberer and Z. Despotovic, “Managing Trust in a Peer-2-Peer Information System,” Proc. 10th Int’l Conf. Information and Knowledge Management (CIKM ’01), pp. 310-317, 2001.
[15] L. Mui, M. Mohtashemi, and A. Halberstadt, “A Computational Model of Trust and Reputation for E-Businesses,” Proc. 35th Ann. Hawaii Int’l Conf. System Sciences (HICSS ’02), pp. 2431-2439, 2002.
BIOGRAPHIES
Naveen Gosu completed his MCA at All Saints PG College and is pursuing his M.Tech at Pydah College of Engineering and Technology, Boyapalem, Visakhapatnam-531163.
Vakacharla Durga Prasada Rao completed his M.E. He is working as Associate Professor in the Dept of CSE & IT, Kaushik College of Engineering, Gambeeram, Visakhapatnam-531163.
N. Tulasi Radha completed her B.Tech at GITAM College of Engineering and her M.Tech from JNTU College of Engineering, Kakinada. She is working as Associate Professor in the Dept of CSE & IT, Kaushik College of Engineering, Gambeeram, Visakhapatnam-531163. Her areas of interest are Network Security, Human Computer Interaction, Data Mining and Warehousing, and Data Structures.