Abstract — Mobile ad hoc networks (MANETs) are one of the most successful wireless network paradigms, offering unrestricted mobility without depending on any underlying infrastructure. MANETs have become an exciting and important technology in recent years because of the rapid proliferation of a variety of wireless devices and the increased use of ad hoc networks in various applications. Like any other network, MANETs are prone to a variety of attacks, mainly on the routing side. Most of the proposed secure routing solutions based on cryptography and authentication methods have high overhead, which results in latency and resource-crunch problems, especially on the energy side.
The successful working of these mechanisms also depends on secure key management involving a trusted third authority, which is generally difficult to implement in a MANET environment due to the volatile topology. Designing a secure routing algorithm for MANETs which incorporates the notion of trust without maintaining any trusted third entity has been an interesting research problem in recent years. This paper proposes a new trust model based on cognitive reasoning, which associates the notion of trust with all the member nodes of a MANET using a novel Behaviors-Observations-Beliefs (BOB) model. These trust values are used for the detection and prevention of malicious and dishonest nodes while routing the data. The proposed trust model works with the DTM-DSR protocol, which involves the computation of direct trust between any two nodes using cognitive knowledge.
We take care of trust fading over time, rewards, and penalties while computing the trustworthiness of a node and also of a route. A simulator is developed for testing the proposed algorithm; the experimental results show that incorporating cognitive reasoning into the computation of trust for routing effectively detects intrusions in a MANET environment and generates more reliable routes for secure routing of data.
Keywords: MANETs; routing; trust; DTM-DSR; security; Cognitive agents; BOB model
As an important concept in network security, trust is interpreted as a set of relations among the nodes/entities that participate in network activities. Trust relations are mainly based on the previous behaviors of nodes/entities. The concept of trust is the same as in real life, where we trust people who have been helpful and acted trustworthily towards us in the past. In the case of ad hoc networking, nodes that operate the protocols correctly are considered trusted nodes. The purpose of developing a notion of trust within an ad hoc network is to provide a heuristic for security, allowing faulty or malicious nodes to be detected and removed from the network with minimal overhead and restriction to the network.
There are three definitions of trust as follows [1]:
1) Trust is the subjective probability of one entity expecting that another entity performs a given action on which its welfare depends. The first entity is called trustor, while the other is called trustee.
2) Direct trust refers to an entity’s belief in another entity’s trustworthiness based on its direct interactions and direct experience with that entity.
3) Recommendation trust refers to one entity which may also believe that another entity is trustworthy due to the recommendations of other entities with respect to their evaluation results.
Trust management in distributed and resource-constrained networks, such as MANETs and sensor networks, is much more difficult but also more crucial than in traditional hierarchical architectures, such as the Internet and infrastructure-based wireless LANs. Generally, these types of distributed networks have neither a pre-established infrastructure, nor centralized control servers or trusted third parties (TTPs).
The trust information or evidence used to evaluate trustworthiness is provided by peers, i.e. the nodes that form the network. The dynamically changing topology and connectivity of MANETs make trust management more of a dynamic systems problem. Furthermore, resources (power, bandwidth, computation, etc.) are normally limited because of the wireless and ad hoc environment, so the trust evaluation procedure should rely only on local information. Therefore, the essential and unique properties of trust management in this paradigm of wireless networking, as opposed to traditional centralized approaches, are: uncertainty and incompleteness of trust evidence, locality of trust information exchange, distributed computation, and so on. We address this issue by storing the evidence of trust calculation in the form of beliefs, generated in a beliefs database stored on every mobile node.
Agents are autonomous programs which sense the environment, act upon the environment, and use their knowledge to achieve their goal(s) [2]. An agent program can assist people or programs and on some occasions acts on their behalf. Agents possess the mandatory properties of reactiveness: agents sense changes in the environment and act according to those changes; autonomy: agents have control over their own actions; goal-orientation: agents are proactive; and temporal continuity: agents are continuously executing software. A typology of agents refers to the classification of agents based on some of the key attributes exhibited by agent programs [3]. Agents are classified into static and mobile agents, deliberative and reactive agents, and smart or intelligent agents.
Fig. 1: Intelligent agent structure
An ideal rational/intelligent agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has. An agent should possess explicit utility functions to make rational decisions. Goals enable the agent to pick an action right away if it satisfies the intended goal. Fig. 1 gives the structure of an intelligent agent, showing how the current percept is combined with the old internal state and the experiences of an agent to generate an updated description of the current state. To do this, intelligent agents use both learning and knowledge. Here the actions are not based on a simple percept sequence; an intelligent agent should be able to reason out the suitable action from a set of possible actions, either reactively or pro-actively [4].
Fig. 2: Cognitive agent structure
Cognitive agents enable the construction of applications with context-sensitive behavior, adaptive reasoning, the ability to monitor and respond to situations in real time, and an implementation of human cognitive architecture for knowledge organization.
A cognitive act performed by cognitive agents consists of three general actions: 1. perceiving information in the environment; 2. reasoning about those perceptions using existing knowledge; and 3. acting to make a reasoned change to the external or internal environment. From an application perspective, CAs empower a user by combining the speed, efficiency and accuracy of the computer with the decision-making capacity, experience and expertise of human experts. The agent represents its beliefs, intentions, and desires in modular data structures and performs explicit manipulations on those structures to carry out means-ends reasoning or plan recognition (refer to Fig. 2).
The proposed model uses the DTM-DSR (Dynamic Trust Mechanism - Dynamic Source Routing) protocol, which is an extension of the DSR protocol [5]. In DTM-DSR, i) every node maintains a trust table; ii) the route request message (RREQ) includes $T_{low}$ and $BlackList$, where $T_{low}$ denotes the node's lower trust level on its neighbors and $BlackList$ denotes the list of distrusted nodes; and iii) the $T_{route}$ field in the route reply message (RREP) denotes the accumulated route trust.
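For illustration, a minimal sketch of the extended message fields described above is given below; the Python structures and field names (request_id, route_record, t_low, blacklist, t_route) are our own illustrative choices, not the normative DTM-DSR packet format.

```python
# Illustrative sketch only: the extra trust-related fields carried by the
# DTM-DSR control messages, as described in the text above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RREQ:
    source: str
    destination: str
    request_id: int
    route_record: List[str] = field(default_factory=list)  # node IDs appended hop by hop
    t_low: float = 0.5                                      # source's lower trust level on neighbors
    blacklist: List[str] = field(default_factory=list)      # distrusted node IDs

@dataclass
class RREP:
    route: List[str]                                        # route returned to the source
    t_route: float = 1.0                                    # accumulated route trust (initialised by D)
```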
1.2.1 Route Discovery
During the process of route discovery, when a node A chooses another node B to forward a packet, A may suffer attacks from B, such as a black hole attack, a wormhole attack, etc. Thus, a reliable relationship between A and B should be established. A trusted route is a route that involves only trustworthy nodes; sending packets over a trusted route decreases the probability of malicious attacks and improves the survivability of MANETs. The trustworthiness of a route is evaluated from the trust values of the nodes along the route, denoted by $T_{route}$. Route discovery includes three processes: i) RREQ delivery; ii) RREP delivery; and iii) route selection, which are briefly discussed as follows.
RREQ delivery
When the source node S needs to send data to the destination node D, it first checks whether a feasible path between S and D already exists. If so, S sends the data to D; otherwise, S starts a route discovery. First, S appends its ID to the route record, and checks whether its trust in any of its neighbor nodes is lower than $T_{low}$. If so, S appends the IDs of those neighbor nodes to $BlackList$. Then, S broadcasts the RREQ packets with $T_{low}$ and $BlackList$, and sets a timer window $t_s$.
. When any intermediate node receives a RREQ packet, it processes the request according to the following steps:
1) If the request ID of this RREQ packet has already been seen, it discards the RREQ packet and does not process it further.
2) If the target of the request matches this node's own address, then the route record in the packet contains the route by which the RREQ reached this node from the source node of the RREQ packet. The node returns a copy of this route in an RREP packet to the source node.
3) Otherwise, it appends its own address to the route record in the RREQ packet, and checks whether its trust in any of its neighbor nodes is lower than $T_{low}$. If so, it appends the IDs of those neighbor nodes to $BlackList$.
4) It re-broadcasts the request to the neighbor nodes (a sketch of these steps is given below).
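A minimal sketch of the intermediate-node handling above follows, reusing the illustrative RREQ/RREP structures sketched earlier; the `node` object, with its seen_requests, trust_table, send_rrep and broadcast members, is an assumed helper and not part of the DTM-DSR specification.

```python
def handle_rreq(node, rreq):
    """Illustrative sketch of steps 1)-4) performed by an intermediate node."""
    # 1) A request ID that has already been seen is discarded.
    key = (rreq.source, rreq.request_id)
    if key in node.seen_requests:
        return
    node.seen_requests.add(key)

    # 2) If this node is the requested target, the route record already holds
    #    the path from the source; return a copy of it in an RREP.
    if rreq.destination == node.node_id:
        node.send_rrep(RREP(route=list(rreq.route_record) + [node.node_id]))
        return

    # 3) Append own address; blacklist neighbors whose trust is below T_low.
    rreq.route_record.append(node.node_id)
    for neighbor, trust in node.trust_table.items():
        if trust < rreq.t_low and neighbor not in rreq.blacklist:
            rreq.blacklist.append(neighbor)

    # 4) Re-broadcast the request to the neighbor nodes.
    node.broadcast(rreq)
```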
RREP delivery
When the destination node receives the first RREQ packet, it sets a timer window $t_d$. If $t_d$ has expired, it discards follow-up RREQ packets. Otherwise, it checks whether the $BlackList$ is empty. If not, it discards the RREQ packet; otherwise, it builds the RREP packet with $T_{route} = 1$, and then unicasts the RREP to the intermediate node. After receiving an RREP packet, an intermediate node computes and updates the value of $T_{route}$, and then forwards the RREP packet with the updated $T_{route}$.
Route selection
When S receives an RREP packet, if the timer window $t_s$ has not expired, it updates the recorded $T_{route}$ value for this route. Otherwise, S discards follow-up RREP packets and picks the path with the largest $T_{route}$, preferring the one with fewer hops, as sketched below.
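The selection rule can be sketched as follows; the representation of a candidate route as a (t_route, hop-list) pair is an assumption made only for illustration.

```python
def select_route(candidates):
    """Pick the route with the largest accumulated T_route collected from RREPs
    before t_s expired; among equally trusted routes, prefer fewer hops."""
    if not candidates:
        return None
    return max(candidates, key=lambda c: (c[0], -len(c[1])))

# Example: two routes with equal trust -> the shorter one is chosen.
routes = [(0.81, ["S", "A", "B", "D"]),
          (0.81, ["S", "C", "D"]),
          (0.64, ["S", "E", "D"])]
print(select_route(routes))   # (0.81, ['S', 'C', 'D'])
```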
1.2.2 Route Maintenance
After each successful route discovery, S can deliver its data to D through a route. However, the route may break at any time due to the mobility of nodes or due to attacks. In order to maintain a stable, reliable and secure network connection, route maintenance is necessary to ensure system survivability. Route maintenance is performed when all routes fail or when the timer window $t_r$ for routing expires.
The trust model proposed in this paper is based on the DTM-DSR protocol. Every mobile node is assumed to host a platform for executing agents built with cognitive intelligence. The Cognitive Agents (CAs) on a mobile node use the BOB model to compute the trustworthiness of its neighboring nodes. The trust is computed from the beliefs generated by observing the behaviors of neighboring nodes while forwarding data. These trust values are used in the DTM-DSR protocol to route packets from a given source to a given destination.
The rest of the paper is organized as follows: section 2 lists some of the related work, section 3 discusses the proposed trust model, section 4 provides the simulation setup and results, and section 5 concludes the paper.
Existing works related to trust-based security can be studied along two dimensions: first, trust evaluation models in MANETs, and second, trust/reputation-based routing protocols in MANETs. Many of the existing trust-based routing protocols are extensions of popular routing protocols, such as DSR and AODV.
These models include methods for evaluating trust based on various parameters. Entropy-based trust models are employed in ad hoc networks for secure ad hoc routing and malicious node detection [6]. They are based on a distributed scheme to acquire, maintain, and update trust records associated with the behavior of nodes in forwarding packets and in making recommendations about other nodes.
However, this is not a generic mathematical model and it cannot prevent false recommendations. A semiring-based trust model [7] interprets trust as a relation among entities that participate in various protocols. This work focuses on the evaluation of trust evidence in ad hoc networks; because of the dynamic nature of ad hoc networks, trust evidence may be uncertain and incomplete. Using the theory of semirings, it shows how two nodes can establish an indirect trust relation without previous direct interaction. The model has good dynamic adaptability, but its convergence is slow and it cannot be adopted in large-scale networks. To address the vulnerabilities of existing trust management frameworks, a robust and attack-resistant framework called the objective trust management framework (OTMF), based on a modified Bayesian approach, is proposed [8]. The theoretical basis for OTMF is a modified Bayesian approach in which different weights are put on different pieces of observed behavior information according to their occurrence time and providers.
A reputation-based system has been proposed as an extension to source routing protocols for detecting and punishing selfish nodes in MANETs [9]. In a mobile ad hoc network, node cooperation in packet forwarding is required for the network to function properly. However, some selfish nodes may choose not to forward packets in order to save resources for their own use. To discourage such behavior, a reputation-based system is proposed to detect selfish nodes and respond to them by showing that being cooperative benefits them more than being selfish. In that work, besides cooperative nodes and selfish nodes, a new type of node called a suspicious node is introduced. These suspicious nodes are further investigated and, if they tend to behave selfishly, actions are taken against them. A trust model based on Bayesian theory is proposed in [10]. The model assesses the subjective trust of nodes through the Bayesian method, which makes it easy to obtain the subjective trust value of one node on another, but it cannot detect dishonest recommendations. A fuzzy trust recommendation framework whose recommendation algorithm is based on collaborative filtering in MANETs has been proposed [11]. It considers recommendation trust, but does not consider other factors, such as time aging and the certainty nature of trust.
Most statistical methods assume that the behavior of a system is stationary, so the ratings can be based on all observations back to the beginning of time. Often, however, the interest is in identifying and isolating intruding nodes while keeping intact the principle of rewarding positive behaviors and punishing negative behaviors. Fig. 3 shows the deployment of Cognitive Agents (CAs) over every mobile node belonging to the MANET under consideration. The CA present on a node is responsible for computing the trust over its neighboring nodes, and a group of CAs collaboratively participates in establishing a trusted route from a given source S to a given destination D. We assume these CAs are sufficiently secured and tamper-resistant against any host-based attacks.
These algorithms make use of existing routing protocols, and during routing the trust or reputation values are included. A dependable routing scheme incorporating trust and reputation into the DSR protocol is proposed in [12]. The mechanism makes use of Route Reply packets to propagate the trust information of nodes in the network. These trust values are used to construct trusted routes that pass through benevolent nodes and circumvent malicious nodes. However, it does not consider how to prevent dishonest recommendations in the trust model. The cooperative on-demand secure route (COSR) protocol is used to defend against the main passive route attacks [13]. COSR measures node reputation (NR) and route reputation (RR) by contribution, Capability of Forwarding (CoF), and recommendation, to detect malicious nodes. Watchdog and Pathrater techniques are proposed in [14]. The Watchdog promiscuously listens to the transmission of the next node in the path to detect misbehavior.
The Pathrater keeps ratings for other nodes and performs route selection by choosing routes that do not contain selfish nodes. However, the watchdog mechanism needs to maintain state information regarding the monitored nodes and the transmitted packets, which adds a great deal of memory overhead. As an extension to the above, a collaborative reputation (CORE) mechanism [15] uses the watchdog mechanism to observe neighbors, and aims to detect and isolate selfish nodes.
Fig. 3: The Proposed BOB-based Trust Model for MANET
The BOB model is a cognitive-theory-based model proposed in our earlier paper [16] to generate beliefs about a given mobile node by observing the various behaviors exhibited by that node during execution of the routing protocol. The BOB model is developed with emphasis on minimum computation and minimum code size, keeping in mind the resource restrictions of mobile devices and infrastructure. The knowledge organisation using cognitive factors helps in selecting a rational approach for deciding the trustworthiness of a node or a route. The rational approach implements a systematic, step-by-step method in which data obtained through various observations is used for making long-term decisions. It also reduces the solution search space by consolidating the node behaviors into an abstract form called beliefs; as a result, the decision-making time is reduced considerably.
Behaviors
The behaviors refer to the actions or reactions of a node while executing routing protocols. The behaviors are modeled using a set of behavior parameters. In general, the probability $P_{Bh_i}$ of generating the $i^{th}$ behavior $Bh_i$ is computed using the behavior parameter set $BP_i$:

$$P_{Bh_i} = \sum_{k \in BP_i} W_{bp_k} \times \frac{V_{bp_k}}{\max(V_{bp_k})} \, , \quad \sum_{k \in BP_i} W_{bp_k} = 1 \qquad (1)$$

where $W_{bp_k}$, $V_{bp_k}$, and $\max(V_{bp_k})$ are the weightage given to each behavior parameter in the set $BP_i$, the current value generated for the behavior parameter $bp_k$, and the maximum value the behavior parameter $bp_k$ can take, respectively. If the value of $V_{bp_k}$ tends more and more towards $\max(V_{bp_k})$, the probability of generating the behavior $Bh_i$ increases.
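A short sketch of Eq. (1) follows; the behavior parameters and weights used in the example are hypothetical values chosen only for illustration.

```python
def behavior_probability(params):
    """Eq. (1): P_Bh_i = sum_k W_bp_k * (V_bp_k / max(V_bp_k)),
    with the weights W_bp_k summing to 1.
    `params` is a list of (weight, current_value, max_value) tuples for BP_i."""
    assert abs(sum(w for w, _, _ in params) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * (v / v_max) for w, v, v_max in params)

# Hypothetical 'data delaying' behavior: the closer the observed values get to
# their maxima, the closer the behavior probability gets to 1.
bp = [(0.6, 180.0, 200.0),   # (weight, observed forwarding delay, max delay)
      (0.4, 45.0, 50.0)]     # (weight, queued packets, max queue length)
print(round(behavior_probability(bp), 3))   # 0.9
```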
Observations
In the system, an observation is the summarization of various behaviors exhibited by a node during protocol execution. The probability of generating an observation $Ob_i$, i.e., $P_{Ob_i}$, is computed using the union of occurrences of the defined set of behaviors which lead to that observation. Let $BH_{Ob_i}$ be the set of disjoint behaviors considered for the $i^{th}$ observation $Ob_i$:

$$P_{Ob_i} = P(Bh_{a_i} \cup Bh_{c_i} \cup Bh_{k_i} \cup \cdots \cup Bh_{m_i}) \qquad (2)$$

where $Bh_{a_i}, Bh_{c_i}, Bh_{k_i}, \ldots, Bh_{m_i} \in BH_{Ob_i}$, where $a \leq j \leq m$.
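Since the behaviors in $BH_{Ob_i}$ are defined to be disjoint, the union in Eq. (2) reduces to a sum of the individual behavior probabilities; a minimal sketch with hypothetical values follows.

```python
def observation_probability(behavior_probs):
    """Eq. (2): probability of an observation as the union of disjoint behaviors.
    For mutually exclusive events the union probability is the plain sum
    (capped at 1.0 for numerical safety)."""
    return min(1.0, sum(behavior_probs))

# Hypothetical 'formation of blackhole' observation built from three behaviors.
print(observation_probability([0.30, 0.25, 0.10]))   # 0.65
```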
Beliefs
A belief represents an opinion with a certain confidence about a node. These beliefs are stored in a beliefs database, and periodically updated as and when new beliefs on an event occur. The probability of occurrence of a belief, $P_{Bf_i}$, is the union of those observations which generate that particular belief. Let $O_{Bf_i}$ be the observation set for belief $Bf_i$:

$$P_{Bf_i} = P(Ob_{c_i} \cup Ob_{f_i} \cup Ob_{l_i} \cup \cdots \cup Ob_{n_i}) \qquad (3)$$

where $Ob_{c_i}, Ob_{f_i}, Ob_{l_i}, \ldots, Ob_{n_i} \in O_{Bf_i}$, where $c \leq j \leq n$.
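Eq. (3) also takes a union, but the text does not state whether the observations in $O_{Bf_i}$ are disjoint; the sketch below assumes independent observations and uses the complement rule, which should be read as one possible evaluation rather than the model's prescribed one.

```python
def belief_probability(observation_probs):
    """Eq. (3): probability of a belief as the union of its observations.
    Independence of observations is assumed here (our assumption, not the
    paper's), so P(union) = 1 - prod(1 - p_i)."""
    complement = 1.0
    for p in observation_probs:
        complement *= (1.0 - p)
    return 1.0 - complement

# Hypothetical 'blackhole attacker' belief deduced from two observations.
print(round(belief_probability([0.65, 0.40]), 2))   # 0.79
```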
The CA comprises the constructs used to implement the BOB model; the constructs are the logical structures used for periodic collection and analysis of the behavior parameters of a mobile node. The BOB model uses four constructs, namely the Behaviors identifier, the Observations generator, the Beliefs formulator, and the Beliefs analyser, as shown in Fig. 4.
Fig. 4: The BOB model constructs built into CA
The Behaviors identifier construct periodically captures the behavior parameters related to a mobile node. A set of behavior parameters participates in triggering one or more behaviors. A threshold-based triggering function (F) is implemented to identify each behavior. F accepts a set of behavior parameters and computes the triggering value; if the value is greater than the threshold, then that behavior is successfully identified. The Observations generator construct generates one or more observations from the identified behaviors.
Its summarization function generates an observation by counting the number of favorable behaviors. If the number of favorable behaviors is less than the expected value, then the observation is not generated; otherwise it is generated. We propose to keep the percentage of favorable behaviors at a minimum of 40% to generate an observation, so that the accuracy of the system is increased. The Beliefs formulator construct deduces belief(s) from one or more generated observations. Suitable logical relations are established between observations to construct predicates to deduce the various belief(s). The Beliefs analyser construct analyses the newly formed beliefs, say $Bl_{new}$, to compute the Belief Deviation Factor (BDF) with respect to the established beliefs, say $Bl_{old}$. The deviation function D finds the relative deviation between two beliefs and satisfies the distance property, i.e., an increased distance between two beliefs produces a higher deviation and vice versa.
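The Behaviors identifier and Observations generator logic described above can be sketched as follows; the 40% favorable-behavior ratio comes from the text, while the threshold value and function names are illustrative assumptions.

```python
def identify_behavior(params, threshold):
    """Behaviors identifier: threshold-based triggering function F.
    `params` is a list of (weight, value, max_value) tuples; the behavior is
    identified when the weighted, normalised value exceeds the threshold."""
    trigger_value = sum(w * (v / v_max) for w, v, v_max in params)
    return trigger_value > threshold

def generate_observation(favorable_flags, min_favorable_ratio=0.4):
    """Observations generator: an observation is produced only when at least
    40% of the expected behaviors turn out to be favorable."""
    if not favorable_flags:
        return False
    return sum(favorable_flags) / len(favorable_flags) >= min_favorable_ratio

# Hypothetical run: three of five behaviors identified -> observation generated.
flags = [identify_behavior([(1.0, v, 10.0)], threshold=0.5) for v in (7, 2, 9, 6, 1)]
print(flags, generate_observation(flags))   # [True, False, True, True, False] True
```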
The trust modeling using CAs in our scheme is explained in the following steps.
Step 1: The behavior analysis is carried out by the CA on a mobile node over the actions that neighboring mobile node(s) take on the data they receive for routing. Some of the malicious behaviors of a node include: 1. data dumping, 2. energy draining, 3. suspicious pairing, 4. data delaying, 5. data reading, 6. data fabrication, etc. In our scheme we have modeled all these malicious behaviors using a set of behavior parameters, which includes: time for forwarding, hard disk operations, energy level, next hop address, size of data received/forwarded, next hop used, and so on. These behaviors are accumulated over time to generate observations, such as formation of a wormhole, formation of a blackhole, denial of service, intrusion attempts, modification attempts, etc. The related observations are deduced into beliefs on a node, for example: genuine node, intruder, service hijacker, wormhole attacker, blackhole attacker, route cache poisoner, etc.
Step 2: The beliefs generated on the neighboring nodes in the current time period $\Delta t$ are compared with the established beliefs from the beliefs database stored on the node in order to compute the Belief Deviation Factor (BDF). The CA calculates the deviation factor between the probability values of the newly computed beliefs, i.e., $P_{Bl_{new}}$, and the corresponding established probability values of beliefs from the beliefs database, i.e., $P_{Bl_{old}}$:

$$DF(Bl_{new}, Bl_{old}) = \left| P_{Bl_{new}} - P_{Bl_{old}} \right| \qquad (4)$$
Exponentially weighted moving averages are used to accumulate the deviation factors of beliefs generated at various time instances. The weight of each deviation decreases exponentially, giving much more importance to the current deviation while still not discarding older deviations entirely. The smoothing factor $\alpha$ is given as

$$\alpha = \frac{2}{NumberOfRoutingRequests + 1} \qquad (5)$$

The BDF at time $t$ is given by

$$BDF^{t}_{Bl} = \alpha \times DF(Bl_{new}, Bl_{old}) + (1 - \alpha) \times BDF^{t-1}_{Bl} \qquad (6)$$
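A minimal sketch of Eqs. (4)-(6) is given below; keeping the previous BDF as explicit state per neighbor belief is an implementation assumption.

```python
def smoothing_factor(num_routing_requests):
    """Eq. (5): alpha = 2 / (NumberOfRoutingRequests + 1)."""
    return 2.0 / (num_routing_requests + 1)

def update_bdf(p_bl_new, p_bl_old, prev_bdf, num_routing_requests):
    """Eq. (4) then Eq. (6): absolute deviation between the new and established
    belief probabilities, accumulated with an exponentially weighted moving average."""
    df = abs(p_bl_new - p_bl_old)                  # Eq. (4)
    alpha = smoothing_factor(num_routing_requests)
    return alpha * df + (1.0 - alpha) * prev_bdf   # Eq. (6)

# Hypothetical update after 9 routing requests (alpha = 0.2):
# 0.2 * |0.70 - 0.40| + 0.8 * 0.10 = 0.14
print(round(update_bdf(0.70, 0.40, 0.10, 9), 3))
```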
Step 3: The BDF is then combined with the Time-aging Factor (TF), Rewards Factor (RF), and Penalty Factor (PF) to calculate the direct trust of a node $i$ on its neighbor node $j$ in time $\Delta t$, denoted $T_{d_{new}}(i, j)$.

if ($T_{d_{old}}(i, j) > 0$ and $BDF = 0$) then
$$T_{d_{new}}(i, j) = (1 - TF) \times T_{d_{old}}(i, j) \qquad (7)$$

if ($T_{d_{old}}(i, j) > 0$ and $BDF \neq 0$) then
$$T_{d_{new}}(i, j) = T_{d_{old}}(i, j) \times \big(1 - TF \times (RF \times N_1 - PF \times N_2)\big) + TF \times (RF \times N_1 - PF \times N_2) \qquad (8)$$

where:
$TF = \dfrac{\lambda e^{C_3 \Delta t} - 1}{\lambda e^{C_3 \Delta t} + 1}$ represents the fading of trust with time;
$RF = \dfrac{\lambda e^{C_1 (1 - BDF)/\Delta t} - 1}{\lambda e^{C_1 (1 - BDF)/\Delta t} + 1}$ represents the positive impact on trust when the BDF is low during $\Delta t$;
$PF = \dfrac{\lambda e^{C_2 \, BDF/\Delta t} - 1}{\lambda e^{C_2 \, BDF/\Delta t} + 1}$ represents the negative impact on trust when the BDF is high during $\Delta t$;
$N_1 = \dfrac{NumberOfTimes(BDF < LowerThreshold)}{NumberOfTimes(BDF < LowerThreshold) + 1}$ and $N_2 = \dfrac{NumberOfTimes(BDF > HigherThreshold)}{NumberOfTimes(BDF > HigherThreshold) + 1}$.
$\lambda$, $C_1$, $C_2$ and $C_3$ are determined according to practical requirements.
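A sketch of the direct-trust update of Eqs. (7) and (8), as reconstructed above, follows; $\lambda$, $C_1$, $C_2$, $C_3$ and the BDF thresholds are free parameters, and the concrete values in the example are purely illustrative.

```python
import math

def sigmoid_factor(c, x, lam=1.0):
    """Common form of TF, RF and PF: (lam*e^(c*x) - 1) / (lam*e^(c*x) + 1)."""
    e = lam * math.exp(c * x)
    return (e - 1.0) / (e + 1.0)

def normalised_count(count):
    """N1 / N2: count / (count + 1)."""
    return count / (count + 1.0)

def direct_trust(t_old, bdf, dt, n1, n2, c1=1.0, c2=1.0, c3=0.1, lam=1.0):
    """Direct trust of node i on neighbor j over the period dt (Eqs. (7)/(8))."""
    tf = sigmoid_factor(c3, dt, lam)                    # trust fading with time
    if t_old > 0 and bdf == 0:                          # Eq. (7): no deviation observed
        return (1.0 - tf) * t_old
    rf = sigmoid_factor(c1, (1.0 - bdf) / dt, lam)      # reward when BDF is low
    pf = sigmoid_factor(c2, bdf / dt, lam)              # penalty when BDF is high
    delta = tf * (rf * n1 - pf * n2)
    return t_old * (1.0 - delta) + delta                # Eq. (8)

# Illustrative update: small deviation and a mostly well-behaved history
# nudge the trust slightly upwards from 0.6.
print(round(direct_trust(t_old=0.6, bdf=0.1, dt=1.0,
                         n1=normalised_count(8), n2=normalised_count(1)), 3))
```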
The following assumptions are made in the simulated network: 1. each node has the same transmission radius; and 2. each node knows the IDs of its neighbor nodes by exchanging control information. Some of the parameters used in the simulation are the mobility speed, the amount of data to be routed in CBR mode, the node bandwidth, and the message sending duration. When the simulation begins, randomly chosen nodes participate in the routing process as source and destination pairs. In this process, if any node's trust value reaches the lower threshold, that node is considered a malicious node. Detected malicious nodes are not allowed in the further routing process until their trust values have increased.
Fig. 5 to Fig. 8 show snapshots of the simulator developed. Fig. 9 shows the throughput plotted for various simulation scenarios; we can observe that the throughput decreases as more and more nodes lose trust and are marked as intruders by the system. Fig. 10 shows the variation of trust of four neighboring nodes of the selected mobile node no. 6. The result shows that the trust value remains constant over time for one node, slowly increases for two other nodes, and decreases for one node over time.
Fig. 5: The sample MANET topology
Fig. 6: Routing started from 0
The proposed trust-based routing using the BOB model based on cognitive theory performs efficiently compared to DTM-DSR and DSR, since the computation of trust is linked with belief generation and belief deviation. The delay incurred in computing the trust is much less than in the DTM-DSR protocol, since cognitive-theory-based knowledge is used. We were able to establish more reliable routes compared to the previous two algorithms by isolating the intruder nodes from routing. We are conducting a detailed performance analysis by subjecting the protocol to various routing conditions, and also by incorporating trust calculated from recommendations by peers.
[1] V. Balakrishnan, V. Varadharajan, and U. Tupakula, “Trust Management in Mobile Ad Hoc Networks,” Guide to Wireless Ad Hoc
Networks, Springer, ISBN: 9781848003286, 2009.
Fig. 7: Node 1 and Node 3 trust calculation is on
Fig. 9: Network Throughput
Fig. 8: Node 3 is detected as intruder
Fig. 10: Variation of trust values among nodes
[2] J. Bradshaw, “Software agents,” AAAI press, California, 2000.
[3] H. S. Nwana, “Software Agents: An Overview,” Knowledge Engineering Review, vol. 11, No 3, pp.1-40, 1996.
[4] N. R. Jennings and M. Wooldridge, “Applications of Intelligent
Agents,” 1998.
[5] S. Peng, W. Jia, G. Wang, J. Wu, and M. Guo, “Trusted Routing Based on Dynamic Trust Mechanism in Mobile Ad-Hoc Networks,” IEICE Transactions on Information and Systems.
[6] Y. Sun, W. Yu, Z. Han, and K. J. R. Liu, “Information Theoretic
Framework of Trust Modeling and Evaluation for Ad Hoc Networks,”
IEEE Journal on Selected Areas in Communications, Vol.24, pp. 305-
317, 2006.
[7] G. Theodorakopoulos and J. S. Baras, “On Trust Models and Trust
Evaluation Metrics for Ad Hoc Networks,” IEEE Journal on Selected
Areas in Communications, Vol. 24, Issue 2, pp. 318-328, Feb.2006.
[8] J. Li, R. Li, J. Kato, “Future Trust Management Framework for
Mobile Ad Hoc Networks,” IEEE Communications Magazine, Vol.
46, Issue 4, pp.108-114, April 2008.
[9] T. Anantvalee and J. Wu, “Reputation-Based System for Encouraging the Cooperation of Nodes in Mobile Ad Hoc Networks,” Proc. of
IEEE International Conference on the Communications (ICC 2007), pp. 3383-3388, June 2007.
[10] S. Peng, W. Jia, and G. Wang, “Voting-Based Clustering Algorithm with Subjective Trust and Stability in Mobile Ad-Hoc Networks,”
Proc. of the IEEE/IFIP International Conference on Embedded and
Ubiquitous Computing (EUC 2008), Vol. 2, pp. 3-9, Dec. 2008.
[11] J. Luo, et al., “Fuzzy Trust Recommendation Based on Collaborative
Filtering for Mobile Ad-hoc Networks,” Proc. of the 33rd IEEE
Conference on Local Computer Networks (LCN 2008), pp. 305-311,
Oct. 2008.
[12] A. A. Pirzada, A. Datta, and C. McDonald, “Incorporating trust and reputation in the DSR protocol for dependable routing,” Elsevier of
Computer Communications, Vol. 29, pp. 2806-2821, 2006.
[13] F. Wang, Y. Mo, B. Huang, “COSR: Cooperative On-Demand
Secure Route Protocol in MANET,” International Symposium on
Communications and Information Technologies (ISCIT 2006), pp.
890-893, Oct. 2006.
[14] S. Marti, T. J. Giuli, K. Lai, and M. Baker, “Mitigating Routing
Misbehavior in Mobile Ad Hoc Networks,” Proc. of MobiCom,
Boston, MA, pp. 255-265, August 2000.
[15] P. Michiardi and R. Molva, “Core: A Collaborative Reputation mechanism to enforce node cooperation in Mobile Ad Hoc Networks,”
Communication and Multimedia Security Conference, September
2002.
[16] B. S. Babu and P. Venkataram, “Cognitive agents based authentication & privacy scheme for mobile transactions (CABAPS),” Computer Communications, Vol. 31, pp. 4060-4071, 2008.