What Goes Around Comes Around:
Mobile Bandwidth Sharing and Aggregation
Karim Habak*, Khaled A. Harras†, Moustafa Youssef‡
*Georgia Tech, †CMU, ‡E-JUST
*karim.habak@cc.gatech.edu, †kharras@cs.cmu.edu, ‡moustafa.youssef@ejust.edu.eg
Abstract—The exponential increase in mobile data demand, coupled with growing user expectation to be connected in all places at all times, has introduced novel challenges for researchers to address. Fortunately, the widespread deployment of various network technologies and the increased adoption of multi-interface-enabled devices allow researchers to develop solutions for these challenges. Such solutions exploit the available interfaces on these devices in both local and collaborative forms. These solutions, however, have faced a formidable deployment barrier. In this paper, we therefore present OSCAR, a multi-objective, incentive-based, collaborative, and deployable bandwidth aggregation system, designed to exploit multiple network
interfaces on modern mobile devices. OSCAR’s architecture does
not introduce any intermediate hardware nor require changes
to current applications or legacy servers. This architecture
estimates the interfaces' characteristics and application requirements, schedules various connections and/or packets to different
interfaces, and provides users with incentives for collaboration
and bandwidth sharing. We formulate the OSCAR scheduler as
a multi-objective scheduler that maximizes system throughput
while achieving user-defined efficiency goals for both cost and
energy consumption. We implement a small-scale prototype of our OSCAR system, which we use to evaluate its performance. Our evaluation shows that OSCAR provides up to 150% throughput enhancement compared to current operating systems, with only minor updates to the client devices.
I. INTRODUCTION
Mobile data traffic has increased tremendously during the last few years. For instance, AT&T reported an increase in their network traffic of more than 20,000% over the last five years [4]. This staggering demand for mobile data, expensive data roaming charges, and user expectation to remain connected in all places at all times are creating novel challenges for service providers and researchers to solve. Fortunately, modern mobile devices are equipped with multiple heterogeneous network interfaces that vary in terms of the technology used (3G/4G, WiFi, Bluetooth), bandwidth provided, energy requirements, and cost. Therefore, a potential approach for solving some of these challenges is exploiting these interfaces to: (1) minimize the load on congested networks (e.g., network offloading), (2) enhance user experience, and (3) provide users with faster, and even cheaper, connectivity options.
Recent work has focused on simultaneously exploiting the network interfaces available on modern mobile devices, either locally [23], [24] or by collaborating with nearby devices and borrowing their bandwidth [25], [26], to maximize the available throughput. These approaches, however, overlook the importance of energy and cost efficiency. In many cases, to minimize cost or maximize device lifetime (i.e., minimize energy consumption), users decide to switch to a certain network regardless of the bandwidth it provides. Moreover, all of these approaches require heavy updates to client devices, remote servers, and/or existing applications. To the best of our knowledge, no prior work has developed an easily deployable bandwidth aggregation system that simultaneously (1) maximizes throughput while maintaining cost and energy efficiency, and (2) enables collaboration while providing users with incentives to share their bandwidth.
In this paper, we present OSCAR, a multi-objective,
incentive-based, collaborative, and deployable bandwidth sharing and aggregation system. OSCAR fulfills the following
requirements: (1) It is easily deployable since it does not
require changes to legacy servers, applications, or network
infrastructure. (2) It exploits available network interfaces in
local and collaborative forms. (3) It adapts to real-time Internet
characteristics and the system parameters to achieve efficient
utilization of these interfaces. (4) It integrates an incentive
system that encourages users to share their bandwidth. (5) It
adopts an optimal multi-objective scheduler that maximizes the
overall system throughput, while minimizing cost and energy
consumption based on the user requirements and system
status. (6) It leverages incremental adoption and deployment
to further enhance performance gains.
The remainder of this paper presents our contributions as follows. Section III presents the design of the OSCAR architecture, fulfilling the requirements stated above. Section IV formulates OSCAR's data scheduler as an optimal multi-objective scheduler that takes user requirements, interface characteristics, and application requirements into consideration while distributing application data across multiple interfaces. In Section V, we present the OSCAR communication protocols that we develop to enable efficient and secure communication between collaborating nodes and OSCAR-enabled servers. Section VI presents an overview of our implemented prototype, followed by our performance evaluation, which shows OSCAR's ability to (1) enable collaboration and bandwidth sharing between mobile devices, (2) increase the overall system throughput while achieving users' cost and energy efficiency targets, and (3) utilize features of existing Internet servers to further enhance performance. We discuss related work in Section VII and conclude the paper in Section VIII.
II. MOTIVATING SCENARIO AND SYSTEM OVERVIEW
When John is having a meal in a food court, or waiting for the train at the station, he watches YouTube videos and uses Facebook to get his social network feeds, using his
Fig. 1: OSCAR scenario.
tablet equipped with WiFi, Bluetooth, and 3G interfaces. John connects to the congested free WiFi hotspot because 3G roaming charges are too expensive. All the data from the above applications go through this heavily congested interface, leading to a very unpleasant experience. Meanwhile, Mark and Alice, who are sitting next to John, are using a flat-rate 3G plan and a private WiFi hotspot subscription, respectively. John's experience can be greatly improved if he becomes able to use Mark's and/or Alice's interfaces and aggregate their bandwidth with his own. Figure 1 depicts such a scenario, where OSCAR enables John to use his expensive 3G interface only for lightweight, important data and to leverage underutilized bandwidth from his neighbors using his Bluetooth and WiFi interfaces.
Figure 1 shows the five communicating entities in our scenario. First, client devices are equipped with multiple network interfaces that vary in their available bandwidth, energy consumption rates, and cost per unit data. Each interface has different paths to the Internet, either directly or through neighboring client devices sharing their connectivity options. Second, a virtual bank represents a trusted third party that handles the payment transactions between client devices. Third, legacy servers are typical, non-modified Internet servers. When communicating with these servers, the OSCAR traffic schedulers residing on clients use a connection-oriented mode to schedule different connections to the available paths, where a TCP connection can be assigned to only one path. Fourth, connection-resume-supporting legacy servers are servers, such as HTTP servers, that support resuming connections (e.g., DASH video streaming servers such as Hulu, YouTube, and Netflix). OSCAR leverages these servers to enhance performance by switching to a packet-oriented mode, where each packet or group of packets can be independently scheduled on a different path. Finally, OSCAR-enabled servers represent servers that may adopt and run OSCAR in the future to help clients with highly efficient packet-oriented scheduling.
III. THE OSCAR SYSTEM ARCHITECTURE
In this section, we present OSCAR’s deployable architecture
along with a detailed description of its main components.
We focus on OSCAR’s client architecture, depicted in Figure
2, since it is the fundamental entity in our system. The
architecture of an OSCAR server includes a small subset of
the client modules to enable packet-oriented scheduling. We
will refer to them as we address each component next.
Fig. 2: The OSCAR client architecture.
A. Context Awareness Manager
To enable OSCAR to efficiently utilize all available connectivity options and, more importantly, achieve user goals, we develop a context-awareness manager that accurately and efficiently determines, stores, and utilizes the mobile device's context information as well as user needs. This component consists of the following five modules.
1) User Interface Module: OSCAR provides a user-friendly interface that enables users to set their requirements, preferences, and scheduling policies. It also enables users to input other contextual parameters, such as interface usage cost (e.g., the cost of using 3G), express their willingness to share bandwidth, set the amount to be shared, set pricing policies (e.g., a minimum price), and monitor their bandwidth sharing business. More importantly, this interface enables users to set their cost/energy constraints, as we show in Section IV-B4.
2) Application Traffic Characteristics Estimator: Knowing the application traffic characteristics significantly impacts OSCAR's performance. However, to be fully deployable, we cannot require any changes to existing applications to determine this crucial information. Therefore, we adopt the techniques proposed in OPERETTA [8], which utilize the applications' history to determine their future connection data demands. OSCAR, however, identifies the application protocol using its port numbers instead of the application/process name.
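As an illustrative sketch only (the port-to-protocol table below is a hypothetical example; the paper does not specify OSCAR's actual mapping beyond using port numbers), identifying a protocol from a destination port might look like:

```python
import socket

# Hypothetical well-known-port table; OSCAR's real mapping is not
# detailed in the paper.
PORT_TO_PROTOCOL = {80: "http", 443: "https", 21: "ftp"}

def identify_protocol(port: int) -> str:
    """Map a destination port to a protocol label, falling back to the
    OS services database, then to 'unknown'."""
    if port in PORT_TO_PROTOCOL:
        return PORT_TO_PROTOCOL[port]
    try:
        return socket.getservbyport(port, "tcp")
    except OSError:
        return "unknown"
```

The history-based demand estimator would then be keyed by this protocol label rather than by process name.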
3) Neighbor Discovery Module: This module discovers and
authenticates the nearby OSCAR-enabled devices using the
protocol presented in Section V.
4) Network Interface Characteristics Estimator: This module estimates the available bandwidth and the energy consumption rates for each network interface.
Bandwidth Estimation: Since bandwidth bottlenecks typically
exist at the client’s end, OSCAR actively probes a random
set of geographically distributed servers using each interface
to estimate its available bandwidth [8]. However, when communicating with OSCAR-enabled servers, OSCAR uses data
packets for path bandwidth estimation to avoid the active
probing overhead [16].
Energy Consumption Estimation: Since a network interface's energy consumption depends on the NIC, the technology used, and the transmission data rate [18], [20], OSCAR relies on an online database service, which we build, containing energy consumption rates for various network interface cards at different transmission data rates. The first time OSCAR runs on a client, it queries this database once to obtain the energy consumption rates for each interface on the device, and it uses the acquired information, along with the transmission data rate provided by the OS API, to determine the interface's real-time energy consumption rate.
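A minimal sketch of this lookup, assuming a locally cached copy of the database keyed by NIC model and indexed by data rate (all names and numbers here are illustrative, not from the paper):

```python
# Hypothetical cached energy database: NIC model -> {data rate (Mbps): power (W)}
ENERGY_DB = {
    "wifi-nic-a": {6: 0.8, 54: 1.2},
    "3g-nic-b": {2: 1.5},
}

def energy_per_megabit(nic_model: str, rate_mbps: float) -> float:
    """Estimate energy (J) to transmit one megabit: active power at the
    nearest known data rate, divided by the current rate."""
    rates = ENERGY_DB[nic_model]
    nearest = min(rates, key=lambda r: abs(r - rate_mbps))
    return rates[nearest] / rate_mbps
```

The real-time rate reported by the OS API selects the table row, so the estimate tracks rate changes without further database queries.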
5) Battery Sensor: This module senses the available battery level and whether the device is plugged into a power source.
B. Packet-Oriented Scheduling Manager
This module exploits the availability of OSCAR-enabled servers, or servers that support connection resumption, to enable a packet-oriented scheduling mode that enhances performance. It consists of the following modules:
1) Mode Detection Module: This module detects whether the endpoint of a connection supports OSCAR, in order to enable the optional packet-oriented mode. To minimize overhead, we use the options part of the TCP header to inject an OSCAR-related flag, which informs one end that the other end is OSCAR-enabled. We refer the reader to the accompanying technical report [10] for a detailed description of this procedure.
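The paper leaves the option layout to the technical report; as one possible sketch, a TCP experimental option (kind 253, per RFC 6994) could carry a magic marker that both ends agree on (the marker value here is our assumption):

```python
import struct

OSCAR_OPTION_KIND = 253       # TCP experimental option kind (RFC 6994)
OSCAR_MAGIC = b"OSCR"         # hypothetical marker; not specified by the paper

def build_oscar_option() -> bytes:
    """Encode the flag as a TCP option: kind byte, length byte, marker."""
    return struct.pack("!BB", OSCAR_OPTION_KIND, 2 + len(OSCAR_MAGIC)) + OSCAR_MAGIC

def peer_is_oscar_enabled(options: bytes) -> bool:
    """Walk a TCP options list looking for the experimental marker."""
    i = 0
    while i < len(options):
        kind = options[i]
        if kind == 0:          # End-of-option-list
            break
        if kind == 1:          # NOP is a single byte
            i += 1
            continue
        length = options[i + 1]
        if kind == OSCAR_OPTION_KIND and options[i + 2:i + length] == OSCAR_MAGIC:
            return True
        i += length
    return False
```

A client that sees the marker in the SYN-ACK options can safely switch the connection to packet-oriented mode; legacy stacks simply ignore unknown options.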
2) Resume Module: OSCAR utilizes the resume module while communicating with legacy servers to enable packet-oriented scheduling without relying on OSCAR support on the server side. To detect whether a particular server supports the resume functionality, we build a web service maintaining a list of protocols that support it. OSCAR opportunistically connects to this web service to update and cache this list of protocols. We represent each protocol by a template that uniquely identifies it and specifies how to issue resume requests and how to check whether the end server supports the resume functionality.
We implement the resume module as an in-device proxy that identifies the resumable connections, distributes sub-connections across different device interfaces, and closes these sub-connections once they terminate. Since applications require in-order data delivery, this module adopts two approaches to minimize the latency and the reordering buffer size: (1) It divides the connection into fixed-length units, where a unit with an earlier offset from the beginning of the connection is retrieved over multiple interfaces before the next unit is retrieved. Each interface is assigned a portion of the unit to retrieve, based on weights decided by the scheduler. (2) It sorts the interfaces in descending order of their scheduling weight and starts the sub-connection with the largest weight first. This way, larger amounts of data arrive in order from the beginning of the connection.
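A sketch of this unit-splitting policy (function and parameter names are ours; the paper fixes only the two ordering rules above):

```python
def split_unit(unit_offset: int, unit_len: int, weights: dict) -> list:
    """Split one fixed-length unit into per-interface byte ranges,
    ordered by descending scheduler weight so the earliest bytes go to
    the interface with the largest weight."""
    total = sum(weights.values())
    ranges, start = [], unit_offset
    # Largest weight first, so in-order data arrives from the unit's start.
    for iface, w in sorted(weights.items(), key=lambda kv: -kv[1]):
        share = round(unit_len * w / total)
        ranges.append((iface, start, start + share))
        start += share
    # Give any rounding remainder to the last interface.
    iface, s, _ = ranges[-1]
    ranges[-1] = (iface, s, unit_offset + unit_len)
    return ranges
```

Each returned range becomes one HTTP range (resume) sub-request on its interface.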
C. Incentives Manager
To provide users with incentives to share their bandwidth, while accounting for the fact that users meet randomly, at random locations, and at any point in time, we develop a credit-based incentive system that relies on a trusted third party to maintain credit for the different users. As in free markets, users in our system collaborate with each other and pay for the services acquired from one another. Although, in the rest of the paper, we focus on using OSCAR's own defined currency, available electronic currencies (e.g., Bitcoin [19]) can easily be used as payment alternatives in our system. We implement the Incentives Manager to handle the bandwidth market affairs. It consists of the following modules:
1) Tendering Price Determiner: This module runs at the seller and determines a price for sharing bandwidth with a buyer via a given interface for a fixed amount of time. The seller proposes an initial price to cover the cost of using the bandwidth of the Internet-connected interface i while communicating with a buyer through interface j. If this initial price does not comply with the device owner's preset policies, OSCAR uses the appropriate owner-defined fallback rule (e.g., changing the price to a preset amount). Once determined, this price is fixed for a given amount of time and is renewed periodically or upon new buyer requests. To determine an initial price, the following three cost parameters are taken into account.
a. Service Provider Cost: Since a node may be paying for its
own Internet access (either through an Internet provider or a
neighboring sharing node), this node needs to at least cover
such cost (ci ) to avoid incurring a loss.
b. Energy Consumption: Sharing bandwidth by enabling multiple network interfaces consumes extra power, which is an extremely crucial resource, especially for battery-operated mobile devices. Hence, a device sharing its bandwidth needs some compensation for its consumed energy. Noting that a selling device uses two of its interfaces (one connected to the Internet, i, and the other connected to the buyer, j), calculating the required compensation should take into account the following parameters: (1) the energy consumed to relay a unit of data over interfaces i and j, which equals (e_i + e_j), (2) the battery capacity (E_Capacity), and (3) the remaining battery ratio (R_remaining). Hence, we calculate the energy compensation factor (EC_ij) as follows:
EC_{ij} = \frac{e_i + e_j}{E_{Capacity}} \left( \gamma + 1 - R_{remaining} \right) \qquad (1)
where γ is a binary value reflecting whether the device is connected to a power source. If connected (γ = 0), the user still needs to be compensated for the consumed energy, as part of the charging power is used for bandwidth sharing; in this case, the weight of the energy compensation decreases until it reaches 0 when the battery is fully charged. On the other hand, if the device is running on battery power, γ = 1 guarantees appropriate compensation even if the battery is fully charged.
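Equation (1) can be sketched directly (parameter names and units are illustrative):

```python
def energy_compensation(e_i: float, e_j: float, capacity: float,
                        remaining_ratio: float, on_battery: bool) -> float:
    """Energy compensation factor EC_ij from Eq. (1): per-unit-data relay
    energy on interfaces i and j, scaled by the battery state."""
    gamma = 1.0 if on_battery else 0.0
    return (e_i + e_j) / capacity * (gamma + 1.0 - remaining_ratio)
```

A plugged-in device with a full battery yields 0, while a device on battery power with a full charge still earns (e_i + e_j)/E_Capacity, matching the discussion of γ above.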
c. Market Status: With potentially multiple devices requesting
and offering bandwidth, the demand and supply mini-market
status amongst these devices is a crucial pricing factor. A
balance is required to maximize the sharing node’s benefit
while providing priority to users willing to pay more for
important data they need to transmit. We therefore calculate
the market dependent cost (MC) as follows:
MC_{n,i} = \max\left( MC_{n-1,i} + \frac{(1+\kappa)\, B_{i,Requested} - B_{i,Offered}}{B_{i,Offered}} \, \eta,\; 0 \right) \qquad (2)
where i is the index of the Internet-connected interface whose bandwidth is shared, MC_{n-1,i} is the market status cost during the previous estimation session, B_{i,Requested} is the sum of the bandwidth requested for reservation during the last session on that interface (i.e., demand), B_{i,Offered} is the bandwidth offered by that interface (i.e., supply), η is the cost increase factor determined by the user, and κ is a revenue multiplier. The revenue multiplier (κ) allows the price to increase even when supply equals demand, and is set to a small value (0.1 in our case). The intuition is that as long as demand is higher than supply, we keep increasing the market status cost.
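A direct sketch of the update rule in Eq. (2):

```python
def market_cost(prev_mc: float, requested: float, offered: float,
                eta: float, kappa: float = 0.1) -> float:
    """Market-status cost MC_{n,i} from Eq. (2): rises while (1+kappa) x
    demand exceeds supply, decays otherwise, floored at zero."""
    delta = ((1.0 + kappa) * requested - offered) / offered * eta
    return max(prev_mc + delta, 0.0)
```

Note that with demand equal to supply the cost still creeps up by κη per session, which is exactly the role of the revenue multiplier.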
Total Price: Adding all the costs together, we obtain the following equation:

I_{n,i,j} = \nu \, EC_{i,j} + MC_{n,i} + c_i \qquad (3)

where ν is an energy-cost conversion factor that maps the energy cost (EC_{i,j}) to a monetary cost. The value of ν is set by the virtual bank based on the current fair market prices.
2) Tenders Collection and Filtering Module: This module runs at the bandwidth requester (buyer node) and collects tenders from nearby OSCAR-enabled devices. These tenders contain the path's available bandwidth and the price of transmitting a unit of data via this path, as announced by the seller. After collecting the tenders, the module sorts them based on cost, eliminates tenders with suspected loops, and forwards the least-cost ones whose aggregate bandwidth satisfies the buyer to the scheduler.
3) Contracts Assessment Module: When two neighbors agree on the set of terms for bandwidth sharing (i.e., cost, reserved bandwidth, and duration), both the buyer and seller monitor the traffic to regulate and enforce these terms. Note that this is enforced by the trusted OSCAR components running on both nodes.
4) Balance Handler: This module handles the payment transactions with the virtual bank. It also makes sure that the buyer node does not make requests that exceed its available credit.
5) Reservation Manager: This module makes sure that the sold bandwidth can only be used by the node that bought it.
D. Backward Compatibility Manager
To be deployable, OSCAR maintains backward compatibility with both current legacy servers and applications. This feature is implemented by the following modules:
1) Packet Reordering Module: This module is activated in the packet-oriented mode to handle the packet reordering issues introduced by transmitting data over multiple network interfaces, in order to remain compatible with legacy applications and avoid TCP performance degradation. While communicating with OSCAR-enabled servers, it delays out-of-order packets and duplicate acknowledgments before passing them to the upper layer. To avoid being over-protective, however, it maintains an estimate of the TCP RTT so that delayed packets are passed up before they trigger TCP timeout events.
2) Network Address Translation Module: When a connection goes through a sharing neighbor to a legacy server, the server sees only one IP address for the client. OSCAR therefore needs to rewrite the IP address in the packet to the neighboring client's IP; otherwise, the reply from the legacy server would go directly to the client rather than to the sharing neighbor. To address this, we implement a Network Address Translation (NAT) module at each node, which is activated on the seller node. This module performs the NAT operation on sent packets and reverses it on packets received from the legacy server.
3) Connection Redirection Module: This module redirects newly requested connections to the Resume Module to enable using the resume functionality when available.
IV. OSCAR SCHEDULER
A. System Model
Table I summarizes the system parameters and notation. We assume a mobile device with m different paths to the Internet. Each path represents a way to connect to the Internet, either by using an interface's direct connectivity or by using the interface to communicate with a neighbor sharing its Internet connectivity. Each of these paths has an effective bandwidth b_j and a cost per unit data c_j, which can be the service provider usage cost or the cost paid to the neighbor for using their connectivity. In addition, each path uses one network interface and has an energy consumption rate a_j, where a_j equals the difference in power consumption between the active and idle states of the used interface. The data rate of each path's interface is denoted r_j. The device runs a set of connections that share these interfaces and vary in their characteristics.
Our scheduling unit is a connection or a packet. We refer to a standard network connection as a stream to avoid confusion with the scheduling unit. Scheduling decisions are taken when a new stream is requested by an application. The Packet-Oriented Scheduling Manager (Section III-B) then determines whether the operation mode is connection-based (S_n = 1) or packet-based (S_n = 0). In the former case, the scheduler's goal is to determine to which path the stream should be assigned (setting x_nj = 1 for only one path j). In either case, the percentage of packets to be assigned to each path, i.e., the paths' relative packet load (w_j), should be re-calculated based on the current system load (L).
B. Optimal Scheduling
In this section, we describe our scheduling problem. The decision variables are: (1) if S_n = 1, the path to assign the new stream n to (variables x_nj), and (2) the new values of w_j, ∀j : 1 ≤ j ≤ m. The scheduler's goal is to maximize the system throughput under certain energy and cost constraints. In particular, the user puts a limit on the average cost per unit data (C_avg,Target) and a limit on the energy consumption per unit data (E_avg,Target) that should not be exceeded. We refer the reader to the accompanying technical report [10] for other scheduling objectives.
TABLE I: List of symbols used
T: The overall system throughput
L: The current system load
L_i: The current system load for stream i
S_i: Whether stream i is connection-based (1) or packet-based (0)
b_j: The effective bandwidth of path j
r_j: The data rate of path j
a_j: The difference in power between the active and idle states of path j
E_avg: The average energy consumed per unit data while transferring the system load
c_j: The cost per unit data of path j
C_avg: The average cost per unit data of transmitting the system load
Δ_j: The time path j needs to finish its load
x_ij: For connection-oriented streams, equals 1 if stream i is assigned to interface j, and 0 otherwise
w_j: The ratio of packets assigned to interface j
1) Objective Function: The objective of the scheduler is to
maximize the overall system throughput (T ). Given the system
load (L), the objective function can be written as:
\text{Maximize} \; T = \frac{L}{\max_j \Delta_j} \qquad (4)
where Δ_j is the time needed for path j to finish all its load (connection- and packet-based). Since L is constant, the objective function becomes equivalent to (Minimize max_j Δ_j):

\text{Minimize} \; \max_j \; \frac{1}{b_j} \left( w_j \sum_{i=1}^{n} L_i (1 - S_i) + \sum_{i=1}^{n} L_i S_i x_{ij} \right) \qquad (5)
where the left summation represents the packet-oriented mode
load and the right summation is the connection-oriented mode
load. Note that any stream i will be either connection-oriented
(Si = 1) or packet-oriented (Si = 0) and thus will appear in
only one of the two summations. Dividing the sum of two
loads by the available bandwidth on that path (bj ) gives the
time needed for path j to finish its load.
2) Constraints: Target Cost: As we mentioned, the user
puts a limit of Cavg,Target on the average cost she is willing to
pay per unit data. Since the average cost (C_avg) equals:

C_{avg} = \frac{1}{L} \sum_{j=1}^{m} c_j \left( w_j \sum_{i=1}^{n} L_i (1 - S_i) + \sum_{i=1}^{n} L_i S_i x_{ij} \right) \qquad (6)

Therefore:

\sum_{j=1}^{m} c_j \left( w_j \sum_{i=1}^{n} L_i (1 - S_i) + \sum_{i=1}^{n} L_i S_i x_{ij} \right) \le L \, C_{avg,Target} \qquad (7)
Target Energy: Similarly, the client's device can tolerate up to a certain level of average energy consumed per unit data (E_avg ≤ E_avg,Target):

\sum_{j=1}^{m} \frac{a_j}{r_j} \left( w_j \sum_{i=1}^{n} L_i (1 - S_i) + \sum_{i=1}^{n} L_i S_i x_{ij} \right) \le L \, E_{avg,Target} \qquad (8)

where a_j / r_j is the average energy consumption per unit data while using path j.
Integral Association: If the new stream is connection-oriented, it should be assigned to only one path (\sum_{j=1}^{m} x_{nj} + (1 - S_n) = 1). Note that when S_n = 0, x_{nj} = 0, ∀j, which is the case when the new stream is a packet-oriented stream.
Packet Load Distribution: The total packet-oriented load should be distributed over all interfaces (\sum_{j=1}^{m} w_j = 1).
Variable Ranges: The trivial constraints on the ranges of the decision variables (w_j ≥ 0, x_{nj} ∈ {0, 1}, 1 ≤ j ≤ m).
3) Solution: In general, this problem is a mixed 0-1 integer programming problem, which is NP-complete. However, it has a special structure that allows for an efficient solution. In particular, we have two cases: the new stream that triggered the scheduling decision is either packet-based (S_n = 0) or connection-based (S_n = 1).
Solution for the packet-based case: In this case, x_nj = 0, ∀j. The problem becomes a standard linear programming problem. Hence, it can be solved efficiently to determine the different values of w_j.
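The paper solves a full LP including the cost and energy constraints above; as a simplified sketch under our own assumption that those constraints are inactive, minimizing max_j (L w_j / b_j) subject to Σ_j w_j = 1 gives weights proportional to each path's bandwidth:

```python
def packet_weights(bandwidths: dict) -> dict:
    """Unconstrained special case of the packet-based LP: the makespan
    max_j (L * w_j / b_j) is minimized when w_j is proportional to b_j,
    so every path finishes its share at the same time."""
    total = sum(bandwidths.values())
    return {path: b / total for path, b in bandwidths.items()}
```

When the cost or energy cap binds, the real LP shifts load away from expensive or power-hungry paths, so this proportional rule is only the best-throughput corner of the feasible region.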
Solution for the connection-based case: In this case, we need to determine the binary variables x_nj, ∀j, such that exactly one of them equals 1 and the others are 0. Our algorithm sets each one of them to 1 in turn and solves the resulting linear programming problem to find w_j, ∀j. The assignment that achieves the best objective value is then selected as the optimal decision.
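The enumeration can be sketched as follows, with the inner LP abstracted behind a caller-supplied solver (a stand-in for the real LP of Section IV-B; the helper names are ours):

```python
def best_connection_assignment(paths, solve_lp):
    """Try assigning the new stream to each path j in turn. solve_lp(j)
    is assumed to return (objective_value, weights) for the LP obtained
    by fixing x_nj = 1; a lower objective (makespan) is better."""
    best = None
    for j in paths:
        objective, weights = solve_lp(j)
        if best is None or objective < best[1]:
            best = (j, objective, weights)
    return best  # (chosen path, objective value, packet weights)
```

Since there are only m candidate paths, this solves m small LPs, which keeps the connection-based case polynomial despite the 0-1 variable.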
4) Discussion: In some cases, the constraints selected by the user may lead to an infeasible solution. For example, there may be no assignment strategy that concurrently achieves both the cost and the energy consumption constraints specified by the user. To eliminate this possibility and make the selection process user-friendly, we design an iterative selection policy in which the user selects her top-priority constraint first (e.g., the limit on cost). Based on this selection, OSCAR calculates the feasible range for the second constraint (e.g., energy consumption), and the user is then asked to choose a value from this acceptable range.
V. OSCAR COMMUNICATION PROTOCOLS
To implement OSCAR, we develop two protocols. The first is the OSCAR collaboration protocol, which handles bandwidth sharing amongst collaborating nodes. This protocol aims to: (1) efficiently discover neighbors, (2) authenticate both buyers and sellers, (3) enable cost agreement, and (4) guarantee payment. The second is an OSCAR server communication protocol for detecting OSCAR-enabled servers and exchanging control information between the client and server. This protocol aims to: (1) discover the existence of OSCAR-enabled servers, (2) enable the server to know the list of (IP, port) pairs that it can use to send data to the clients, and (3) exchange the packet scheduling weights with the clients. Due to space constraints, we only present an overview of the OSCAR collaboration protocol and refer interested readers to the accompanying technical report [10] for further information and the other protocols.
Figure 3 shows the state diagram of OSCAR's collaboration protocol. The communicating entities in this protocol are client devices offering or requesting bandwidth (separated by the vertical line) and the trusted bank server supporting our incentive system. The figure also shows the general state categories, broken down into the following three phases (separated by the horizontal lines): discovery and authentication, tendering, and payment.
Fig. 3: OSCAR collaboration protocol state diagram.
A. Discovery and Authentication
Neighbor Discovery: To maintain energy-efficient neighbor discovery, a requesting node sends neighbor discovery requests only when it has active packet-oriented connections or new connection requests. For this discovery, we adopt the eDiscovery [14] protocol and apply it to the available interfaces to determine the best discovery duty cycle for each interface. Once eDiscovery fires a neighbor discovery event, our OSCAR protocol engages and broadcasts a Discovery Request (DReq) packet containing the node's id and signed using its private key.
When a node offering bandwidth receives a discovery request (DReq) packet, it checks whether it knows the public key of the requesting node, based on the user id included in the request message. If so, it can authenticate that neighbor and verify the request packet's integrity. Otherwise, it starts the neighbor authentication. Once the requester is authenticated, the offering node sends a Discovery Response (DRes) packet containing its id and signed with its private key.
If several offering/selling neighbors exist, a requesting node
receives discovery response (DRes) packets from each one.
Once the requesting node reaches the discovery timeout, it
parses all DRes replies, obtains all the user ids, then authenticates these respondents if it has their public keys. If at least
one of the respondents is unknown to the requester, it starts
the neighbor authentication.
Neighbor Authentication: This authentication occurs when
an OSCAR client (offering or requesting bandwidth) needs
to authenticate one of its neighbors. We assume that each
node keeps a certificate, received from the bank module,
containing its public key and signed by the bank’s private
key. The process starts with an authentication request (AReq), containing a randomly generated challenge message and the authenticator's certificate, which is sent to the unauthenticated node signed with the sender's private key to ensure its integrity.
Once the unauthenticated node receives this packet, it checks
the integrity of the message and the certificate. Upon success,
an authentication response (ARes) packet is sent back to the requesting node containing the unauthenticated node’s certificate
and the challenge message all signed with the unauthenticated
node’s private key. Note that if the unauthenticated node
needs to authenticate the sender it will add another challenge
message for the sender to sign.
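The challenge-response exchange above can be sketched as follows. This is a minimal illustration; a keyed hash stands in for the real public-key signatures and bank-issued certificates that OSCAR uses, so the helper names and structures are ours, not OSCAR's wire format:

```python
import hashlib
import os

def sign(key: bytes, msg: bytes) -> bytes:
    # Stand-in for a real public-key signature (OSCAR signs with the node's
    # private key); a keyed hash keeps the sketch self-contained.
    return hashlib.sha256(key + msg).digest()

def make_areq(auth_key: bytes, auth_cert: bytes) -> dict:
    # AReq: random challenge + the authenticator's certificate, signed by the sender.
    challenge = os.urandom(16)
    return {"challenge": challenge, "cert": auth_cert,
            "sig": sign(auth_key, challenge + auth_cert)}

def make_ares(node_key: bytes, node_cert: bytes, areq: dict) -> dict:
    # ARes: the unauthenticated node returns its certificate and signs the challenge.
    return {"cert": node_cert, "challenge": areq["challenge"],
            "sig": sign(node_key, node_cert + areq["challenge"])}

def verify(key: bytes, body: bytes, sig: bytes) -> bool:
    return sign(key, body) == sig
```

A node that also needs to authenticate the requester would append its own challenge to the ARes, as described above.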
B. Tendering
This stage begins after a requesting node has received and
authenticated DRes replies from neighbors. The node then
broadcasts a tenders gathering request (GReq) packet. When
an offering neighbor receives a GReq, it calculates the required
tender for each of its connectivity options and sends a tenders
packet containing tender index, cost of unit data, the available
bandwidth for sharing, and the tender duration. The requesting
node receives multiple tenders from the neighboring offering
nodes. Once the requesting node reaches the gathering timeout,
it sends the tenders to the core OSCAR client which filters and
then supplies them to the scheduler. The scheduler’s responsibility at this time is to determine the accepted tenders based
on the amount of extra bandwidth required and their cost.
Once decided, the requesting node sends a bandwidth reservation
request (RReq) containing the index of the selected tender and
the amount of bandwidth needed for reservation. Finally, the
offering node responds with a reservation acknowledgement
(RACK) in case of successfully reserving the bandwidth or
a negative acknowledgment (RNACK) when the requested
bandwidth is not available (e.g., reserved for someone else).
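As a simplified illustration of the tender-selection step, the sketch below greedily accepts the cheapest tenders until the required extra bandwidth is covered. OSCAR's actual scheduler solves a multi-objective optimization, so this greedy rule is only a stand-in:

```python
def select_tenders(tenders, extra_bw_needed):
    # Each tender: (tender_index, cost_per_mb, available_bw_mbps), as carried
    # in the tenders packet. Returns (tender_index, reserved_bw) pairs, each
    # of which would become a reservation request (RReq).
    accepted = []
    for idx, cost, bw in sorted(tenders, key=lambda t: t[1]):  # cheapest first
        if extra_bw_needed <= 0:
            break
        reserve = min(bw, extra_bw_needed)
        accepted.append((idx, reserve))
        extra_bw_needed -= reserve
    return accepted
```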
C. Payment
After each service period, the offering node prepares a
receipt for what has been consumed by the requesting node
and signs this receipt with its private key. Once the requesting
node receives that receipt, it signs it with its private key. If the
requesting node has a direct connection to the bank module,
it sends the signed receipt to it. Otherwise, it forwards the
signed receipt to its neighbor in order for it to be forwarded
to the bank. Both parties wait for the bank confirmation and the
commitment of the transaction prior to further collaborations.
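The doubly-signed receipt can be sketched as follows; again a keyed hash stands in for the private-key signatures, and the receipt fields are illustrative, not OSCAR's actual format:

```python
import hashlib

def sign(key: bytes, msg: bytes) -> bytes:
    # Illustrative stand-in for signing with a node's private key.
    return hashlib.sha256(key + msg).digest()

def make_receipt(offer_key: bytes, requester_id: str, mb_used: float) -> dict:
    # The offering node records what was consumed and signs the receipt.
    body = f"{requester_id}:{mb_used}".encode()
    return {"body": body, "offer_sig": sign(offer_key, body)}

def countersign(req_key: bytes, receipt: dict) -> dict:
    # The requesting node signs the receipt before it reaches the bank,
    # either directly or forwarded through the neighbor.
    receipt["req_sig"] = sign(req_key, receipt["body"] + receipt["offer_sig"])
    return receipt

def bank_commit(receipt: dict, offer_key: bytes, req_key: bytes) -> bool:
    # The bank commits the transaction only if both signatures check out.
    return (receipt["offer_sig"] == sign(offer_key, receipt["body"]) and
            receipt["req_sig"] == sign(req_key,
                                       receipt["body"] + receipt["offer_sig"]))
```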
VI. I MPLEMENTATION AND E VALUATION
In this section, we evaluate the performance of OSCAR via a prototype implementation. We begin with an overview of the implementation. Afterwards, we describe the experimental setup
followed by a subset of our evaluation results. More evaluation
results can be found in the accompanying technical report [10].
A. Implementation
In this section, we briefly discuss some details regarding
our OSCAR prototype implementation. We specifically highlight the network layer middleware, application layer resume
service, and the monitoring application we developed.
OSCAR Network Layer Middleware: We implement the OSCAR Network Layer Middleware using the Click modular router [16], which runs on both the Linux and Android [1] operating systems. The Click framework allows the user to install OSCAR without recompiling the kernel, and to intercept packets to and from the network protocol stack. This middleware includes all the OSCAR packet processing modules except the Resume Module (Section III). Each component is implemented as a Click element responsible for processing the exchanged packets.

TABLE II: Experimental Interfaces Characteristics.
Network Interface   Power (mWatt)   Cost ($/Mb)   D. Rate (Mbps)   BW (Mbps)
IF1 (WiFi)          634             0             11               1
IF2 (3G)            900             0.02          42               2
IF3 (Blue.)         95              0             0.7232           0
IF4 (Blue.)         95              0             0.7232           0
IF5 (WiFi)          726             0             11               1

Fig. 4: Experimental setup.
Application Layer Resume Service: This service implements the Resume Module (Section III-B2). It cooperates with the OSCAR middleware to make use of the legacy servers' resume support. Section III-B2 provides detailed information about this module and how it performs its tasks.
Monitoring Application: The monitoring application represents the User Interface Module that captures the user’s preferences, interface usage policies, bandwidth sharing policies, and
further monitors OSCAR’s behavior. It allows the user to select
her average cost and energy consumption limitations, monitor
her achieved throughput level, and set her interface usage
costs. It also allows the user to define the rules controlling
her bandwidth sharing and her pricing policies.
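For concreteness, the user preferences captured by this application could be represented as a simple policy structure. The field names below are illustrative only, not OSCAR's actual configuration format; the nominal values are taken from our evaluation parameters:

```python
# Hypothetical policy record; field names are illustrative, not OSCAR's format.
user_policy = {
    "max_cost_per_mb": 0.03,       # $/Mb, the cost limit used in the evaluation
    "max_energy_per_mb": 131.36,   # Joule/Mb, the relaxed energy limit
    "interface_costs": {"IF1": 0.0, "IF2": 0.02},  # per-interface $/Mb
    "sharing": {"enabled": True, "price_per_mb": 0.03, "share_ratio": 1.0},
}

def within_limits(cost_per_mb: float, energy_per_mb: float, policy: dict) -> bool:
    # The scheduler must keep average cost and energy under the user's limits.
    return (cost_per_mb <= policy["max_cost_per_mb"] and
            energy_per_mb <= policy["max_energy_per_mb"])
```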
B. Experimental Setup
Figure 4, Table II and Table III depict our testbed and
summarize the main parameters and values adopted in our
evaluation. Our testbed consists of six nodes: an OSCAR-enabled server, two legacy servers (only one of which supports the resume functionality), a main client, a neighboring device sharing its bandwidth, and a traffic shaper.
Without loss of generality, both clients are running Linux
OS and are enabled with multiple network interfaces. The
traffic shaper runs the NIST-NET [5] network emulator to
emulate the varying network characteristics of each interface.
The nodes are connected to each other as shown in Figure 4.
We note that the combined bandwidth of IF1, IF2 and IF5
is less than each server's bandwidth to test the true impact of
varying the interface characteristics and scheduling strategies.
On the main client, we run different applications that vary in
terms of the number of connections per second they open (β),
the average connection data demand (λ) and their destination
port numbers. We define γ ∈ [0, 100] as the percentage
of applications’ connections that have the OSCAR-enabled
servers as their destination. When γ = 0, all connections are
with legacy servers; when γ = 100, all the connections are
with OSCAR-enabled servers.
We evaluate OSCAR using two classes of applications: browsers (λHTTP = 22.38 KB [7]) and FTP applications (λFTP = 0.9498 MB [7]). The connection establishment rate follows a Poisson process with mean β connections per second (βHTTP = 13 con/sec and βFTP = 1 con/sec).

TABLE III: Experiment parameters.
Parameter                     Range       Nominal
∀i Li Bandwidth (Mbps)        6           6
IF1 Bandwidth (Mbps)          0.25 - 2    1
Incentive Cost ($/Mb)         0.03        0.03
Neighbor sharing ratio (%)    0 - 100     100
Each experiment represents the average of 15 runs. Note
that since OSCAR estimates the application characteristics,
its performance is not sensitive to the specific application
characteristics.
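The workload generation just described can be sketched as follows, drawing exponential inter-arrival times to realize the Poisson arrival process (the function name and interface are ours, for illustration):

```python
import random

def connection_arrivals(beta: float, duration_s: float, seed: int = 0) -> list:
    # Poisson process with rate beta con/sec: successive inter-arrival
    # times are exponentially distributed with mean 1/beta.
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(beta)
        if t > duration_s:
            return times
        times.append(t)
```

For βHTTP = 13 con/sec over a 100 s run, this yields on the order of 1300 connection arrivals.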
C. Results
In this section we evaluate the performance of OSCAR
using three metrics: throughput, energy consumption per unit
data, and cost per unit data. We vary the following parameters:
the percentage of connections with OSCAR-enabled servers
(γ), the percentage of resumable connections (α), interface
characteristics, neighbor sharing ratio, and the user-defined
cost and/or energy constraints. We compare the OSCAR
optimal scheduler to three baseline schedulers:
• Throughput Upper Bound: This scheduler represents
the theoretical maximum achievable throughput.
• Round Robin (RR): Assigns streams or packets to network interfaces on a rotating basis.
• Weighted Round Robin (WRR): Similar to the RR
scheduler but weighs each interface by its estimated
bandwidth; interfaces with larger bandwidths have proportionally more packets or streams assigned to them.
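The two round-robin baselines can be sketched as follows; this is a minimal stream-level illustration assuming integer-Mbps weights for WRR, not the schedulers' actual implementation:

```python
from itertools import cycle

def rr_assign(streams, interfaces):
    # Round Robin: interfaces are taken in turn, ignoring their bandwidth.
    slots = cycle([name for name, _bw in interfaces])
    return {s: next(slots) for s in streams}

def wrr_assign(streams, interfaces):
    # Weighted Round Robin: each interface appears in the rotation
    # proportionally to its estimated bandwidth.
    rotation = []
    for name, bw_mbps in interfaces:
        rotation.extend([name] * max(1, round(bw_mbps)))
    slots = cycle(rotation)
    return {s: next(slots) for s in streams}
```

With interfaces of 1 and 2 Mbps, WRR sends twice as many streams to the faster interface, whereas RR splits them evenly.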
Effect of Changing Streams with OSCAR-Enabled Servers
(γ) vs Resumable Streams (α): As discussed in Section III,
OSCAR leverages both resume-supporting legacy servers and
OSCAR-enabled servers to enhance performance. Figure 5
shows the effect of increasing the percentage of streams
established with OSCAR-enabled servers (γ) on the performance of the OSCAR scheduler for different values of α (the
percentage of resumable streams to legacy servers). In this
experiment we set the energy consumption and cost limits
to their maximum limits (131.36 Joule/Mb and 0.03 $/Mb
respectively) to highlight the throughput gains that can be
achieved by OSCAR.
Based on Figure 5(a), we share the following observations:
(1) Even when γ = 0 and α = 0 (i.e. only working with
legacy servers with no resume support), OSCAR can enhance
the throughput by 150% as compared to using the interface
with the highest bandwidth and by 300% as compared to the
current OSs. (2) When γ and α are low, most of the streams are
connection-oriented, rendering the scheduling decision coarse
grained; once the stream is assigned to a path, all its packets
have to go through this path until termination. This reduces
the scheduling optimality. (3) For α = 0%, the system reaches
its throughput upper bound when we have only 30% of the
streams connecting to OSCAR-enabled servers (γ = 30). (4) This need for OSCAR-enabled servers decreases as α increases, until it reaches 0% when α = 35%, which is typical in the current Internet [22]. (5) The performance gain before saturation from adding more OSCAR-enabled servers is larger than that from adding more legacy servers with resume support. We believe this is due to the overhead of the resuming process.

Fig. 5: Impact of changing the percentage of streams with OSCAR-enabled servers (γ). [Three panels, omitted: (a) Throughput, (b) Cost, (c) Energy; curves for the XPUT Bound, OSCAR (α = 0%, 15%), IF2 (highest throughput), IF1 (current OS), and the neighbor path (IF3).]

Figure 5(b) shows that OSCAR's significant increase in throughput comes at an even lower cost for the user. This can be explained by noting that the alternative relies on the interface with the maximum throughput, which happens to be the costly 3G interface in our case. OSCAR, on the other hand, mixes the different interfaces, leading to lower cost. In addition, Figure 5(c) shows that, since we relaxed the constraints on energy consumption, OSCAR consumes more energy than the maximum-throughput interface in order to achieve its superior throughput gains. This energy consumption can be further reduced by the user, if needed, by setting a limit on energy consumption.

Fig. 6: Impact of interface heterogeneity. [Three panels, omitted: (a) Throughput, (b) Cost, (c) Energy.]
Fig. 7: Effect of changing the neighbor sharing ratio (Throughput). [Figure omitted.]
Fig. 8: Effect of changing the average energy limit. [Three panels, omitted: (a) Throughput, (b) Cost, (c) Energy.]

Impact of Interface Heterogeneity: In this experiment, we change the bandwidth of IF1 from 0.25 Mbps to 2 Mbps while fixing the other parameters, and compare the OSCAR scheduler to the round robin (RR) and weighted round robin (WRR) schedulers. Figure 6(a) shows that when the bandwidth of IF1 is low, using only IF2 outperforms the round robin scheduler. This is because the round robin scheduler does not take the interface characteristics
into account, assigning streams to each network interface in
turn. Therefore, the low bandwidth interface becomes the
bottleneck. We also notice that using the OSCAR scheduler
outperforms the weighted round robin scheduler because of
OSCAR’s ability to use application characteristics knowledge.
Figure 6(c) shows that for the WRR and OSCAR schedulers,
the energy consumption per unit data slightly increases as
the bandwidth of IF1 increases. This is due to leveraging the
increased bandwidth in the higher energy consuming interface
(IF1 ) for higher throughput. The RR scheduler, on the other
hand, is not affected by the increased bandwidth of IF1 as its
assignment policy is not bandwidth sensitive. Similar behavior
is observed in Figure 6(b), which shows the average cost per unit data; note that IF1 has the lowest cost.
Changing the Neighbor Sharing Ratio: Figure 7 shows the
effect of changing the neighbor sharing ratio. The figure shows
that OSCAR can dynamically leverage the extra bandwidth
available from the neighbor. The saturation at the sharing ratio
of 70% is due to reaching the limit of the local interface (IF3 ),
even though the neighbor connection to the Internet (IF5 ) can
support a higher bandwidth. In addition, the figure highlights
OSCAR’s ability to provide and leverage new connectivity
opportunities that are otherwise unavailable.
Changing the Cost and Energy Constraints: Figure 8 shows
the effect of changing the average energy consumption and
cost limits on the performance of the OSCAR scheduler.
In this experiment, we first select the energy consumption
constraint and then the cost constraint. Figure 8(a) shows
that, as expected, relaxing either the energy or cost constraint
leads to better throughput. In addition, a more stringent energy
constraint leads to a smaller feasibility region for the cost.
This explains the discontinuity in the figure. OSCAR handles
this transparently in a user-friendly manner, as discussed in Section IV-B4. Another interesting observation in Figure 8(c) is that relaxing the cost constraint does not always reduce the energy consumption, and vice versa; since the scheduler's goal in this evaluation is to maximize the throughput, relaxing the cost constraint can lead to higher energy consumption if doing so increases the throughput without violating the energy constraint.
VII. R ELATED W ORK
Local Solutions: Many solutions have emerged to address
single-device bandwidth aggregation at different layers of the
protocol stack [9]. Application layer solutions modify the kernel socket handling functions to enable existing applications to use these interfaces [14]. Such modifications require changes to legacy servers to support the new sockets and to existing applications to interface with the new API [14]. Other solutions, such as MPTCP [23], operate at the transport layer, replacing single-path TCP. Although these protocols are efficient, they can only be used if the server supports the new protocol. Finally, network layer solutions hide the variation in interfaces from TCP [20], [22]. These solutions, however, rely on deploying their system on both the client and server sides [20], or use a proxy server and/or a special router to hide the existing paths from legacy clients and servers [22]. Recent attempts to develop more deployable local bandwidth aggregation systems failed to exploit the full potential of the available interfaces and ignored cost efficiency [8], [11], [12].
Collaborative Solutions: Motivated by the increase in smartphone deployment, researchers have proposed utilizing the available interfaces in a collaborative manner [3], [6], [24], [25]. These systems have several shortcomings, such as (1) relying on the existence of proxy servers [6], [24], (2) requiring updates to applications to interface with their new API [25], or (3) requiring the development of new applications that would exploit such collaboration for their own benefit [3]. In addition, all of these solutions focus only on maximizing throughput and do not develop the incentive systems needed to truly enable this collaboration.
FON System: In terms of incentivized bandwidth sharing, FON [2] is one of the closest systems to OSCAR. FON enables users to share their home bandwidth in exchange for WiFi coverage via other FON users worldwide. Although FON showed that, given some incentives, millions of users are willing to deploy and use bandwidth sharing systems, it has fundamental drawbacks that OSCAR overcomes: (1) FON requires a special stationary router (Fonera) to enable bandwidth sharing, and (2) FON's incentive mechanism limits its usefulness because it treats sharing users equally regardless of the amount of bandwidth they share.
VIII. C ONCLUSION AND F UTURE W ORK
We proposed OSCAR, a deployable, adaptive mobile bandwidth sharing and aggregation system. We presented the
OSCAR system architecture, formulated OSCAR’s optimal
scheduling problem, and presented OSCAR’s efficient communication protocols. We evaluated OSCAR using our prototype
implementation, showing how it can be tuned to achieve different user-defined goals, its ability to provide new connectivity
opportunities, and its significant mobile-device performance
enhancements. Finally, we are currently extending OSCAR by adding adaptive scheduling strategies based on user profiles, social network information, user trust, and real-time needs.
IX. ACKNOWLEDGMENT
This work is supported in part by the Qatar Foundation
through Carnegie Mellon University’s Seed Research program.
R EFERENCES
[1] Building Click-2.0 using Android NDK. https://nm.gist.ac.kr/twiki/bin/view/Main/HOWTO INSTALL CLICK IN ANDROID.
[2] FON System. http://corp.fon.com/en/.
[3] G. Ananthanarayanan, V. N. Padmanabhan, C. A. Thekkath, and
L. Ravindranath. Collaborative downloading for multi-homed wireless
devices. In ACM HotMobile, 2007.
[4] AT&T. 2011 annual report. 2011.
[5] M. Carson and D. Santay. NIST Net-A Linux-based network emulation
tool. ACM SIGCOMM Computer Communication Review, 2003.
[6] N. Do, C. Hsu, and N. Venkatasubramanian. Crowdmac: a crowdsourcing system for mobile access. In ACM/IFIP/USENIX Middleware, 2012.
[7] J. Erman, A. Mahanti, M. Arlitt, and C. Williamson. Identifying and
discriminating between web and peer-to-peer traffic in the network core.
In ACM WWW, 2007.
[8] K. Habak, K. Harras, and M. Youssef. Operetta: An optimal energy
efficient bandwidth aggregation system. In IEEE SECON, 2012.
[9] K. Habak, K. A. Harras, and M. Youssef. Bandwidth Aggregation
Techniques in Heterogeneous Multi-homed Devices: A Survey. ArXiv
e-prints 1309.0542, 2013.
[10] K. Habak, K. A. Harras, and M. Youssef. OSCAR: A Collaborative
Bandwidth Aggregation System. ArXiv:1401.1258, Jan. 2014.
[11] K. Habak, M. Youssef, and K. Harras. DBAS: A Deployable Bandwidth
Aggregation System. IFIP NTMS, 2012.
[12] K. Habak, M. Youssef, and K. A. Harras. An optimal deployable
bandwidth aggregation system. Computer Networks, 2013.
[13] B. Han and A. Srinivasan. ediscovery: Energy efficient device discovery
for mobile opportunistic communications. In IEEE ICNP, 2012.
[14] B. Higgins, A. Reda, T. Alperovich, J. Flinn, T. Giuli, B. Noble, and
D. Watson. Intentional networking: opportunistic exploitation of mobile
network diversity. In ACM Mobicom, 2010.
[15] S. Keshav. A control-theoretic approach to flow control. ACM
SIGCOMM Computer Communication Review, 1995.
[16] E. Kohler, R. Morris, B. Chen, J. Jannotti, and M. F. Kaashoek. The
click modular router. ACM TOCS, 2000.
[17] J. Lee, K. Lee, Y. Kim, and S. Chong. Phonepool: On energy-efficient
mobile network collaboration with provider aggregation. In IEEE
SECON, 2014.
[18] S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system. 2008.
[19] T. Pering, Y. Agarwal, R. Gupta, and R. Want. Coolspots: reducing
the power consumption of wireless mobile devices with multiple radio
interfaces. In ACM MobiSys, 2006.
[20] D. Phatak and T. Goff. A novel mechanism for data streaming across
multiple IP links for improving throughput and reliability in mobile
environments. In IEEE INFOCOM, 2002.
[21] A. Rahmati, C. Shepard, C. Tossell, A. Nicoara, L. Zhong, P. Kortum,
and J. Singh. Seamless flow migration on smartphones without network
support. arXiv preprint arXiv:1012.3071, 2010.
[22] P. Rodriguez, R. Chakravorty, J. Chesterfield, I. Pratt, and S. Banerjee.
Mar: A commuter router infrastructure for the mobile internet. In ACM
MobiSys, 2004.
[23] M. Scharf and A. Ford. Multipath tcp (mptcp) application interface
considerations. Technical report, RFC 6897, March, 2013.
[24] P. Sharma, S.-J. Lee, J. Brassil, and K. G. Shin. Aggregating bandwidth
for multihomed mobile collaborative communities. IEEE TMC, 2007.
[25] D. Zhu, M. Mutka, and Z. Cen. QoS aware wireless bandwidth
aggregation (QAWBA) by integrating cellular and ad-hoc networks. In
QSHINE. Citeseer, 2004.