An Incentive On-Demand Routing Protocol with Caching in Wireless
Ad-Hoc Networks
Diego Montenegro, Andrew Park, Raghava Vellanki
Computer Science Department
Yale University
Abstract – An on-demand routing protocol for wireless ad hoc networks is one that searches for and attempts to
discover a route to some destination node only when a sending node originates a data packet addressed to that node.
In order to avoid the need for such route discovery to be performed before each data packet is sent, such routing
protocols must cache routes previously discovered. Also, the intermediate nodes on the communication path chosen
are expected to forward the packets of other nodes, so that the communication can extend beyond their wireless
transmission range. However, because wireless mobile nodes are usually constrained by limited power and
computation resources, a selfish node may be unwilling to spend its resources in forwarding packets which are not
of its direct interest, even though it expects other nodes to forward its packets to the destination. To address both of
these problems, we propose an on-demand caching protocol with incentive routing to encourage packet forwarding.
Our routing protocol is based on DSR (the Dynamic Source Routing protocol), extended with caching and incentive routing.
1. Introduction
Caching is an important part of any on-demand routing protocol for wireless ad hoc networks. An on-demand routing protocol is one that searches for and attempts to discover a route to some destination node only
when a sending node originates a data packet addressed to that node. In order to avoid the need for such a route
discovery to be performed before each data packet is sent, an on-demand routing protocol must cache routes
previously discovered. Such caching then introduces the problem of proper strategies for managing the structure and
contents of this cache as nodes in the network move in and out of wireless transmission range of one another,
possibly invalidating some cached routing information [1].
As stated above, the transmission range of a mobile node is limited due to the power constraint, and there is
no fixed communication infrastructure to facilitate packet forwarding; hence the communication between two nodes
beyond the transmission range relies on intermediate nodes to forward the packets. However, because mobile nodes
are typically constrained by power and computing resources, a selfish node may not be willing to use its computing
and energy resources to forward packets that are not directly beneficial to it, even though it expects others to
forward packets on its behalf.
In this paper, we address both of the above problems by designing an on-demand caching protocol that significantly reduces overhead, especially when there is repeated communication between specific nodes and little or no host movement. The protocol also incorporates a secure, reputation-based incentive architecture to encourage packet forwarding and discipline selfish nodes [2].
The rest of the paper is organized as follows: Section 2 discusses the flaws present in the Test Protocol given to us in class. Section 3 details the structure of the on-demand caching protocol. In Section 4 we detail the elements of the reputation-based incentive routing. In Section 5 we discuss possible drawbacks of our protocol and conclude.
2. Evaluation of the Test Protocol
The test protocol provided closely follows the Dynamic Source Routing algorithm: when a node wants to transmit, it floods the network, trying to find the destination and an optimal path. It differs from DSR in that not every node forwards the Routing Information; a node only forwards the ROUTEINFO packets received from its control path parent.
This can become a problem, since optimal paths might not get forwarded, and the route chosen to transmit the data might cost more than is actually needed. For example, take the network in Figure 1. As can be seen, the optimal
path would go from S to B to X to D, with a total cost of 2 + 1 + 1 = 4. If we follow the protocol given, we have the
following: node S floods the network with TESTSIGNAL, becoming the Control Path Parent of both nodes A and
B. Node A floods the network with TESTSIGNAL, becoming the CPP of nodes S and X. Also, node A forwards the
ROUTEINFO from path S-A and node B forwards the path S-B. Node X floods the network becoming the CPP of
node D. Node X forwards the ROUTEINFO received from A. In the end, node D can only find one path from S-D,
which is the path S-A-X-D, with a cost of 1 + 3 + 1 = 5. As seen, this protocol does not guarantee Path Optimality.
In the following section, we will present our protocol, which handles these types of situations.
Another problem with this protocol happens when the CONTROLPOWER is set to a value less than the MAXPOWER. In this case, ROUTEINFO packets might get lost, since a node might only be able to reach another node at MAXPOWER; because the ROUTEINFO is sent at CONTROLPOWER, the information would never arrive. This situation is shown in Figure 2.
Figure 2: MAXPOWER = 3 and CONTROLPOWER = 1.
Another drawback of this protocol is that when a node finishes transmitting data to the destination, the
information about the Path is lost (No caching). If the same node wants to transmit to the same destination again, the
route discovery must start over, resending all the packets, and finding the optimal path again, thus creating a lot of
overhead. Also, if some other node wants to start transmitting, it must start a Route Discovery of its own, even though previous Route Discoveries by other nodes may have revealed many routes it could reuse. This is not optimal, since a lot of overhead is created when data is repeatedly sent through the same nodes. We handle this situation in our protocol.
Another problem with the protocol is mobility: it does not address node movement at all, and packets sent in this network could easily get lost if the nodes are allowed to move. This situation is also covered in our solution.
The contract transmission phase is used to allow all nodes on the route to learn the path that the data will take (especially the source node). When the destination D sends out the contract, the intermediate nodes forward the signed contract at MAXPOWER. However, this is not necessary, because from the transmission phase each node on the route already knows the power level that is needed. So instead of sending at MAXPOWER, the previously learned power levels can be used, reducing power consumption. This is included in our solution.
In the Data Transmission Phase, the packets are sent through the network. The protocol effectively checks
that all packets are received by the destination by issuing acknowledgement packets. Also deployed in this phase is
the incentive routing protocol, which makes the payments to the nodes depending on the amount of data that is
forwarded. A flaw in this part of the protocol is that a node can get paid without confirmation that the packet was actually received. Intermediate nodes could easily exploit this to cheat by not forwarding the data packets,
and requesting to get paid because they “did not receive the confirmation”. Also, this protocol assumes some outside authority that keeps track of the payments, or checks that every node gets paid; otherwise, there is no way of making sure that each node is paid exactly for what it forwarded, or that each node pays for what it used.
3. On-demand Caching Protocol
Our on-demand caching protocol follows the Test Protocol given to us, and the Dynamic Source Routing
Protocol. When a node wants to communicate with another node, it first checks its routing table to determine if it
already has a known route to the destination. If it does, it routes the packets through the path found, and when the
destination receives the Data Packet, it sends back an Acknowledge Packet. This packet goes back through the same
path as the Data Packet. When the ACK is received, the source node can send a new Data Packet. If for some reason an intermediate node cannot reach the next Hop, then after a specific timeout expires, this node will send back to its previous Hop a failed ACK and will update its routing table. This failed ACK is routed back to the
source, and every node on the path updates its table. At this point, the source node can try another Path it may have
in its routing table; if this is too expensive, or the node does not have another path, it initiates a Route Discovery (this would also happen if, in the beginning, no path was found in the cache table). To initiate Route Discovery, the
source node will send out TESTSIGNALS (TS), from 1 to MAXPOWER (MP), to let other nodes know their
distance to it. Every node that receives the TS will, if it is the first TS of the session, also send out TS from 1 to MP. Also, if this is the first TS received from this node, it will generate a ROUTEINFO (RI) with the information
about the link from the previous node to it. Upon receiving a RI, every node will forward it at MP for other nodes to
learn. As these packets move around the network, they are saved in the cache tables of every node. When a node
receives a path that it already contains in its cache table, it no longer forwards these messages. That way, the
network is not flooded with unnecessary packets. After a specific amount of time, when the destination node has
gathered enough information, it creates a Contract (Path that packets should follow) and sends it back to the source
node. Data transmission can start after this.
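To make the send-side decision concrete, the following Python sketch shows how a node might first consult its cache and fall back to Route Discovery only when no usable path is known. This is a minimal illustration, not part of the protocol specification; the names RouteCache, discover and transmit are placeholders we introduce here.

INFINITY = float("inf")

class RouteCache:
    """Per-node route cache: (src, dst) -> list of (path, total cost)."""
    def __init__(self):
        self.routes = {}

    def add_route(self, src, dst, path, cost):
        self.routes.setdefault((src, dst), []).append((path, cost))

    def best_route(self, src, dst):
        # Ignore routes whose cost has been set to infinity (broken links).
        usable = [(p, c) for p, c in self.routes.get((src, dst), []) if c < INFINITY]
        return min(usable, key=lambda pc: pc[1])[0] if usable else None

def send_data(src, dst, packet, cache, discover, transmit):
    """Use a cached route if one exists; otherwise run Route Discovery first."""
    route = cache.best_route(src, dst)
    if route is None:
        # No usable cached path: flood TESTSIGNALs and wait for the Contract.
        route = discover(src, dst)
    transmit(route, packet)  # the destination replies with an ACK along the same path
    return route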
As can be seen, during the Route Discovery, not only do the source and destination learn the path to
communicate between each other, but also most of the nodes in the network learn new paths to communicate with
each other. After a few Route Discoveries by different nodes, almost every node will have the complete route table,
and will be able to transmit to other nodes without the need to initiate a Route Discovery. This significantly reduces the amount of overhead in our protocol. As an example of what we have covered so far, let's take a look at Figure 3:
Initially, all cache tables are empty. Suppose S wants to communicate with D. It first starts by sending TS
from power 1 to 3 (MP). B learns that its distance to S is 1, sends out its own TS from 1 to 3 (MP), and also sends out a RI with values (S, B, 1). A learns that its distance to S is 2, and that its distance to B is also 2. Upon the first TS that arrives, it sends out its own TS and forwards two RI, (B, A, 2) and (S, A, 2). This continues until no more forwarding is done (because everyone already has all the other nodes' information in their tables). Node D will construct the Contract, and
send it back to S through C and A. Node S will start the Data Transmission through this path. When S sends out the
packet to A, it calculates its timeout period as t1. Node A forwards the packet to C and calculates its timeout period.
However, suppose C has been shut off. When time equals t2, node A will create an ACK as (A, C, -1), forward this to S, and update its routing table entry for A to C as infinity. S will also update its routing table entry for A to C as infinity and
find a new path to try and reach D.
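A minimal sketch of how the failed ACK (A, C, -1) from this example could be applied at each node on the reverse path is shown below; the data layout (one cost per directed link) and the function name are illustrative assumptions.

INFINITY = float("inf")

def apply_failed_ack(link_costs, failed_ack):
    """link_costs maps (node, next_hop) -> cost/power level; -1 marks a timeout."""
    upstream, next_hop, cost = failed_ack      # e.g. ("A", "C", -1)
    if cost == -1:
        link_costs[(upstream, next_hop)] = INFINITY  # the A-C link is now unusable

# Example: S applies the failed ACK generated by A and stops using the A-C link.
costs_at_S = {("S", "A"): 1, ("A", "C"): 3, ("C", "D"): 1}
apply_failed_ack(costs_at_S, ("A", "C", -1))
assert costs_at_S[("A", "C")] == INFINITY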
At this point, one question arises: what happens if the nodes move? We have already covered what happens when a link is broken: we simply invalidate the path in our system and find a different
path or initiate a new route discovery, but we cannot do the same when a node moves, because the node is still active and might be useful to other nodes even if we cannot reach it anymore. Suppose we are in the middle of a transmission, and node X cannot find its next hop, node Y, because Y has moved. Then node X will send back an ACK stating that node Y was not reachable, and everyone on that route will update this information. However, the node that moved (node Y) will not immediately start sending out new TS to everyone to make them realize that it has moved. Instead, the node has two options to notify others of its movement. The first is that, when this node receives a message from any other node
in the network (either a TS, RI, ACK, Data), it then starts sending out a new TS, with a higher Sequence Number
than its previous TS. Each node maintains its own sequence number count, thus, if A starts sending a TS, it will send
out (A, D, P, N), where P is the POWERLEVEL(PL) and N is the Sequence Number (SN). Any node that stores this
information, will keep that sequence number along with the node information and the power information. Whenever
another node forwards this information in a RI, it includes the Sequence Number of the receiver node (if it is
forwarding (A, B, 1), it sends the Sequence Number of B). Whenever a node receives a Path that it already contains, it first checks the Sequence Number. If the Sequence Number is higher, it updates its Cache Table. If it is the same SN, it checks whether the PL is lower; if the PL is equal or higher, it does not update its table. So, taking this into
account, the node that moved sends out new TS with a higher Sequence Number (SN), allowing other nodes to learn its new position. The second option is that, if after a specific amount of time the moving node has not heard any packet from the network, it will start sending out TS on its own, to let everyone know of its new position. This scheme allows
nodes to move constantly, without flooding the network when it is not necessary. We allow a node to send
information out when it hears something from other nodes, because this may allow other nodes to find better (less
costly) paths to the destination they are trying to reach.
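The cache update rule just described can be summarized in a short sketch: a received link entry replaces the stored one only when it carries a higher sequence number, or the same sequence number with a lower power level. This is a minimal sketch; the field and function names are illustrative.

from collections import namedtuple

LinkInfo = namedtuple("LinkInfo", ["power_level", "seq_num"])

def should_update(stored: LinkInfo, received: LinkInfo) -> bool:
    if received.seq_num > stored.seq_num:
        return True                                       # the node moved: newer view wins
    if received.seq_num == stored.seq_num:
        return received.power_level < stored.power_level  # cheaper link, same view
    return False                                          # stale information is ignored

# Example: C moved and re-announced itself with a higher sequence number.
assert should_update(LinkInfo(power_level=3, seq_num=1), LinkInfo(power_level=1, seq_num=2))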
For example, in figure 3, suppose C were to move closer to B such that the PL is 1. Node S does not know
that C has moved, so S would use its routing table to get the shortest path as S, A, C, D, not knowing that S, B, C, D
is a better route. When the packet is sent to C from A, C would send out a TS to all its neighbors with a higher SN
to let them know that it has moved. This would trigger B to send out a new RI carrying C's increased sequence number, since the sequence number has changed. Finally, all nodes learn the new distances for node C, and this can be used to
find a better route for the next Data Transmission.
4. Reputation-Based Incentive Routing
There are three main features of the reputation-based incentive routing that we chose to implement in our protocol: 1) the reputation of a node is quantified by objective measures; 2) the propagation of reputation is secured in a computationally efficient way by a one-way-hash-chain-based authentication scheme; and 3) routing is effectively secured.
Pricing-based schemes treat packet forwarding as a service that can be priced, and introduce some form of
virtual currency (as seen in the Test Protocol given to us) to regulate the packet forwarding relationships among
different nodes. However, these schemes require virtual banks (trust authorities) that all parties can trust. In this
case, assistance from a fixed communication infrastructure is needed to implement the incentive scheme, which is not applicable to a pure ad hoc network.
The scheme works as follows. Every node performs what is called Neighbor Monitoring; that is, it collects information about the packet forwarding behavior of its neighbors. Any node is capable of hearing transmissions that reach it (the same assumption as in the previous part of the protocol). A node N maintains a Neighbor Node List (NNL), which contains all of its neighbor nodes (these are learned the same way as in the first part of the protocol, through the TS and RI). In addition, node N keeps track of two numbers for each of its neighbors, as below:
1. Request for Forwarding (RFn[x]): the total number of packets that node N has transmitted to X for
forwarding.
2. Has-Forwarded (HFn[x]): the total number of packets that have been forwarded by X and noticed by N (since transmissions in ad hoc networks are broadcast, when X forwards a packet, N can hear it if it is sent within the transmission range of N, i.e., if the distance between N and X is equal to or smaller than the PL of X's transmission).
These two numbers are updated by the following rules. When node N sends a packet to node X for forwarding, the
counter RFn[x] is increased. Then N listens to the wireless channel and checks whether node X forwards the packet
as expected. If N detects that X has forwarded the packet before a preset time-out expires, the counter HFn[x] is
increased.
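A minimal sketch of this counter bookkeeping is given below, assuming the node can register an overheard forwarding before its timeout fires; the class and method names are illustrative.

from collections import defaultdict

class NeighborMonitor:
    """Per-node RF/HF bookkeeping for each neighbor, as described above."""
    def __init__(self):
        self.rf = defaultdict(int)   # RFn[x]: packets handed to x for forwarding
        self.hf = defaultdict(int)   # HFn[x]: forwardings by x that n overheard in time

    def sent_for_forwarding(self, x):
        self.rf[x] += 1              # n gave x a packet to forward

    def overheard_forwarding(self, x):
        self.hf[x] += 1              # n heard x forward the packet before the timeout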
Given RFn[x] and HFn[x], node N can create a record called the Local Evaluation Record (LERn[x]) for neighbor node X. The LERn[x] record consists of two entries, Gn[x] and Cn[x], where Gn[x] = HFn[x] / RFn[x] and Cn[x] is a metric called confidence, used to describe how confident node N is in its judgment on the reputation of node X. For this scheme, we set Cn[x] to be equal to RFn[x] – NKn[x], where NKn[x] is the number of packets for which N could not determine whether X forwarded them or not, because X's transmission power was smaller than the power required to reach N (N knows this because it has the value of the path from X to the next forwarding node, learned during the Route Discovery). For example, if the PL from N to X is 2 and the PL from X to D is 1, then N will not know whether X forwarded the information or not. This is a drawback of this algorithm.
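A minimal sketch of the Local Evaluation Record under this reading, where Gn[x] is the observed forwarding ratio HFn[x] / RFn[x] and Cn[x] = RFn[x] - NKn[x]; the neutral default for a neighbor with no history is our own assumption.

def local_evaluation_record(rf, hf, nk):
    """Return (G, C) for one neighbor from its RF, HF and NK counters."""
    if rf == 0:
        return 1.0, 0        # assumption: no evidence yet, so neutral reputation, zero confidence
    g = hf / rf              # fraction of requested forwardings that were observed
    c = rf - nk              # confidence: requests whose outcome N could actually judge
    return g, c

# Example: 10 packets handed over, 8 overheard, 1 sent at too low a power to judge.
assert local_evaluation_record(10, 8, 1) == (0.8, 9)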
Using the aforementioned neighbor monitoring, a node can build a record of the reputation of its neighboring nodes. However, actions based only on a node's own observation of its neighbors cannot effectively punish selfish nodes. To address this problem, reputation propagation is employed to have neighbors share the reputation
information of other nodes, so that a selfish node will be punished by all other nodes. The reputation propagation
works as follows:
1. Each node N periodically updates its LERn[x] for each neighbor node X based on the changes of RFn[x]
and HFn[x], and it broadcasts the updated record to its neighborhood if Gn[x] has significantly changed.
2. Node N uses its LERn[x] and LERi[x] (where i is in the NNL) to calculate its overall evaluation record of X (OERn[x]) as follows:
OERn[x] = ( Σ_{i ∈ NNL ∪ {N}, i ≠ x} λn(i) · Ci[x] · Gi[x] ) / ( Σ_{k ∈ NNL ∪ {N}, k ≠ x} λn(k) · Ck[x] )
where λn(i) is the credibility that node i has earned from the perspective of node N. In the current scheme, we choose λn(i) = Gn[i].
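A minimal sketch of this weighted average is given below: each local record is weighted by its confidence and by the observer's credibility. Taking λn(N) = 1.0 for the node's own record, and the dictionary data layout, are our assumptions for illustration.

def overall_evaluation(records, credibility):
    """
    records:     {observer: (G_observer[x], C_observer[x])}, including node N itself
    credibility: {observer: lambda_n(observer)}, i.e. Gn[observer]
    """
    num = sum(credibility[i] * c * g for i, (g, c) in records.items())
    den = sum(credibility[i] * c for i, (g, c) in records.items())
    return num / den if den > 0 else 1.0   # assumption: neutral value when there is no confidence at all

# Example: N's own record for x plus two neighbors' reports about x.
records = {"N": (0.8, 9), "A": (0.5, 4), "B": (0.9, 6)}
credibility = {"N": 1.0, "A": 0.7, "B": 0.9}
oer_x = overall_evaluation(records, credibility)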
With the reputation OERn[x] obtained, node N can punish neighbor X by probabilistic dropping as follows.
If the OERn[x] is lower than a preset threshold, node N takes punishment action by probabilistically dropping the
packets originated from X. The probability of dropping is q - y if q > y, and 0 otherwise, where q = 1 - OERn[x] and 0 < y < 1 is a margin introduced for the following consideration: a dropping action could occasionally be triggered
by some phenomena such as collision, rather than selfishness. Without the margin, two nodes may keep increasing
the dropping probability and consequently fall into a retaliation situation.
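A minimal sketch of this punishment decision follows; the threshold and margin values are chosen for illustration only and are not specified by the scheme.

import random

def should_drop(oer_x, threshold=0.5, margin=0.1):
    """Drop a packet originated by x with probability max(q - margin, 0) when x's OER is low."""
    if oer_x >= threshold:
        return False                 # reputation is acceptable: always forward
    q = 1.0 - oer_x
    drop_prob = q - margin if q > margin else 0.0
    return random.random() < drop_prob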
5. Open Problems and Conclusion
A drawback of our protocol is that a selfish node has no good way of regaining its reputation: once it has been marked as selfish, nodes will not forward packets through it. What can be done is that, after a period of punishing the node, the neighboring nodes “give the node another chance” and start forwarding packets through it again. If its selfish behavior continues, it is punished more severely, up to the point where all of its packets are dropped and the node is considered disconnected from (or not present in) the network.
Another drawback of our protocol concerns node movement. Since a moved node will not tell anyone about its new position until it hears a packet or a specific amount of time has passed, other nodes might choose secondary paths to send data to the destination even when these are not optimal, creating more overhead and forcing other nodes to forward packets that could have gone in a different direction at lower cost. Also, when a node X moves and node N still thinks it is in range and sends it a packet to be forwarded, node X will not forward the packet, so node N will reduce the reputation of node X even though X never actually heard the message from N, making this an unjustified punishment.
To conclude, we have shown an incentive-based protocol that allows routing in mobile ad hoc networks
while using minimal overhead to determine the paths that packets should take. To do this, each node maintains a
routing table, where it saves link information between nodes, as well as reputation information, to determine which
ways packets should be routed. Nodes deemed to have a good reputation will have all of their packets forwarded, while nodes with a bad reputation will have their packets dropped with a probabilistic dropping rate. To handle mobility, nodes that discover a broken link or a moved node will set their distance to it as infinite and will send back to the source an ACK message stating this. The node that has moved will announce its new position whenever it hears a packet from anyone, or after a specific amount of time has gone by since it moved without hearing any packets from the network. Our protocol efficiently routes all packets from all nodes through the best path available, while punishing nodes that do not want to cooperate with the network.
6. References
[1] Yih-Chun Hu, David B. Johnson. Caching Strategies in On-Demand Routing Protocols for Wireless Ad Hoc
Networks.
[2] Qi He, Dapeng Wu, Pradeep Khosla. A Secure Incentive Architecture for Ad Hoc Networks.