Abstract (MUST NOT EXCEED THE 200 WORD LIMITATION). The abstract should include the following
components: specific aims, results or findings and their significance, and plans for the coming year.
Specific Aims: 1) Understand complex effects in networks, including long-range
dependency and multi-scale features; 2) Propose new designs that take advantage of
the improved understanding.
Results and Significance:
A) Theory: Analysis of mixing times and convergence of distributed algorithms in
wireless networks; Analysis of collisions in WiFi; Fluid models of TCP that capture
Ack-clocking; Analysis of impact of uncertainty on protocols; Impact of heavy tail on
content distribution networks; Poisson counter models for networks with LRD traffic;
Analysis of burst attacks; Stochastic approximation theory for LRD noise.
B) Designs: Distributed CSMA algorithms for ad-hoc wireless networks that enable
simple protocols that control the congestion, routing, and MAC to maximize the utility
of the flows; WiFi protocols with higher throughput; faster-converging power control
algorithms for wireless networks; new key generation algorithms; Methods to mitigate
the effect of heavy tails by file partitioning and by alternate routing.
Plans for Coming Year: 1) Combination of longest-queue first and CSMA algorithms; 2)
Applications of the new stochastic approximation results; 3) Development of control
theory of LRD based on Poisson-counter models; 4) Development of the dynamic
control formulation of protocols; 5) Implementation and testing of new WiFi protocols.
--------------(4) Scientific Progress and Accomplishments (description should include significant
theoretical or experimental advances)
Distributed Algorithms for Wireless Networks. These are cross-layer protocols that
control congestion, routing, and medium access to maximize the utility of the flows in
the network. The previously known algorithms were the “maximum backpressure”
algorithms based on a primal-dual solution of a utility maximization. Unfortunately,
these algorithms require finding the independent set with the maximum backpressure
and this is a complex step that requires the exchange of control information and solving
an NP-hard problem. The new algorithms are based on a different formulation:
maximizing the utility minus a multiple of the distance between the ideal schedule and
the current schedule, parametrized by node backoff parameters. The gradient algorithm
for this problem decomposes into decisions made by the nodes based only on local
information. These algorithms were then extended to anycast and multicast from a single source
using network coding. See [C9]. The theoretical advance is to show that the actual
algorithm that uses backlogs as a proxy for the actual gradient converges to the optimal
solution and achieves the maximum long-term utility. See [C10] and [C8].
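The flavor of this backlog-driven scheme can be conveyed with a small simulation. The sketch below is an illustration only, not the exact algorithm of [C8]-[C10]: each link keeps a backoff parameter, activates with a sigmoid probability of that parameter when no conflicting neighbor is active, and nudges the parameter by the arrival-minus-service mismatch, the backlog-gradient proxy described above. The conflict graph, arrival rates, and step size are made-up toy values.

```python
import math
import random

def sigmoid(z):
    # clamped to avoid overflow for extreme backoff parameters
    return 1.0 / (1.0 + math.exp(-max(-50.0, min(50.0, z))))

def simulate_adaptive_csma(conflict, arrival, steps=20000, eta=0.01, seed=1):
    rng = random.Random(seed)
    n = len(arrival)
    r = [0.0] * n            # per-link backoff (aggressiveness) parameters
    active = [False] * n     # current schedule; stays an independent set
    served = [0] * n
    for _ in range(steps):
        i = rng.randrange(n)                       # one link wakes per slot
        if any(active[j] for j in conflict[i]):
            active[i] = False                      # defer to a busy neighbour
        else:
            active[i] = rng.random() < sigmoid(r[i])
        for k in range(n):
            if active[k]:
                served[k] += 1
            # gradient proxy: arrivals push r up, service pulls it down
            r[k] += eta * (arrival[k] - (1 if active[k] else 0))
    return [s / steps for s in served]

# Three links on a path: 0-1 and 1-2 conflict, 0 and 2 can transmit together.
conflict = {0: [1], 1: [0, 2], 2: [1]}
rates = simulate_adaptive_csma(conflict, arrival=[0.3, 0.2, 0.3])
```

Because a link only turns on when no conflicting neighbor is active, the schedule remains an independent set throughout, which is the property the theoretical analysis relies on.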
We studied the problem of congestion control and scheduling in ad hoc wireless
networks that have to support a mixture of best-effort and real-time traffic. Optimization
and stochastic network theory have been successful in designing architectures for fair
resource allocation to meet long-term throughput demands. However, strict packet
delay deadlines were not considered in this framework previously. In [C18], we propose a
model for incorporating the quality of service (QoS) requirements of packets with
deadlines in the optimization framework, building upon a model introduced by Hou,
Borkar and Kumar for an access point-controlled network. The solution to the problem
results in a joint congestion control and scheduling algorithm which fairly allocates
resources to meet the fairness objectives of both elastic and inelastic flows, and per-packet delay requirements of inelastic flows.
In [C19], we considered multiuser scheduling in wireless networks with channel variations
and flow-level dynamics. Recently, it has been shown that the MaxWeight algorithm,
which is throughput-optimal in networks with a fixed number of users, fails to achieve the
maximum throughput in the presence of flow-level dynamics. In this paper, we propose
a new algorithm, called workload-based scheduling with learning, which is provably
throughput-optimal, requires no prior knowledge of channels and user demands, and
performs significantly better than previously suggested algorithms.
Analysis of Collisions in Wireless Networks. Some wireless protocols use a form of
reservation mini-slots to limit collisions. In [C20], Jian Ni and R. Srikant developed an
algorithm to limit collisions in a wireless network by using contention mini-slots. The
probability that a link becomes active if it does not hear a conflicting transmission
increases with its backlog in a judicious way. The resulting set of active links evolves as a time-reversible Markov chain that favors the independent sets with a large sum of backlogs.
They show that this algorithm achieves the maximum throughput. In [C10], Jiang and
Walrand study a protocol where nodes use a short "RTS-CTS" exchange to indicate
whether they want to transmit. If a node is successful, it transmits a packet with a
duration that increases with the backlog of the node. They prove that this algorithm
achieves maximum throughput.
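The time-reversibility argument can be illustrated on a toy conflict graph. The weighting below is the standard product-form stationary distribution such reversible chains target (it is not copied from [C10] or [C20]): the stationary probability of a schedule grows exponentially in the total backlog it serves, so the maximum-weight independent set dominates as backlogs grow.

```python
import itertools
import math

def independent_sets(n, conflict):
    # enumerate all conflict-free subsets of the n links
    for bits in itertools.product([0, 1], repeat=n):
        s = tuple(i for i in range(n) if bits[i])
        if all(j not in conflict[i] for i in s for j in s):
            yield s

def stationary(backlog, conflict):
    # pi(S) proportional to exp(total backlog served by schedule S)
    n = len(backlog)
    w = {S: math.exp(sum(backlog[i] for i in S))
         for S in independent_sets(n, conflict)}
    Z = sum(w.values())
    return {S: v / Z for S, v in w.items()}

conflict = {0: [1], 1: [0, 2], 2: [1]}       # three links on a path
pi = stationary([5.0, 1.0, 4.0], conflict)
best = max(pi, key=pi.get)                    # the schedule {0, 2}
```

With these backlogs the non-conflicting pair {0, 2} carries weight exp(9) and dwarfs every other schedule, which is why biasing the chain this way pushes service toward max-weight schedules.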
Utility-Maximizing Protocols for Processing Networks. In a processing network, a task
uses parts and resources to produce new parts. Examples include logistics
management, complex mission scheduling, hospital resource allocation and intervention
scheduling, and assembly plant management. Multiple military operations involve such
resource allocation and task scheduling problems. For these networks, a myopic
maximum backpressure algorithm may not achieve the optimal network utility. We
developed in [C7] a new class of algorithms, called deficit maximum weight, that
achieve the maximum utility. The key idea is to augment the state to add enough
memory and avoid the task starvation that greedy scheduling causes.
High-Throughput WiFi protocols. A central problem in WiFi MAC protocols is that
packet errors may be caused either by transmission errors or by collisions. The
remedies for these types of loss are quite different: In case of transmission errors, the
nodes should reduce the transmission rate to reduce the packet error rate; in case of
collisions, the nodes should backoff or increase the packet length but not reduce the
transmission rate. In this work, the type of collision is identified and the nodes can take
the appropriate corrective action. More specifically, the work developed three
techniques: 1) Estimating the probability of collision to guide the automatic rate fallback
can improve the throughput by 150%; 2) Using estimates of the SNR can improve the
throughput by 625%; 3) Adjusting the packet size to maximize throughput (with a golden
section algorithm) increases the throughput by 200%-300%. See [C5] and [C6].
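The packet-size technique can be sketched as a golden-section search over a unimodal throughput curve. The goodput model below (a fixed 40-byte header and independent bit errors at rate 1e-4) and all constants are illustrative assumptions, not the model of [C5] or [C6].

```python
import math

INV_PHI = (math.sqrt(5.0) - 1.0) / 2.0        # golden-ratio step factor

def goodput(payload, header=40, ber=1e-4):
    # fraction of bits carrying payload, times packet success probability
    return (payload / (payload + header)) * (1.0 - ber) ** (8 * payload)

def golden_section_max(f, a, b, tol=1.0):
    # classic golden-section search for the maximum of a unimodal f on [a, b]
    while b - a > tol:
        c = b - INV_PHI * (b - a)
        d = a + INV_PHI * (b - a)
        if f(c) > f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

best_len = golden_section_max(goodput, 1.0, 2000.0)   # payload in bytes
```

With these made-up constants the search settles around a payload of roughly 200 bytes; the point is only that the overhead/error trade-off makes the objective unimodal, so a golden-section search converges in a few dozen evaluations.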
Impact of Uncertainty on Protocols. Protocols that attempt to maximize some
performance metric may be very sensitive to uncertainty about network characteristics.
In this study, we demonstrate that relaying protocols that attempt to maximize the
worst-case throughput based on their uncertain knowledge of other relays may
perform poorly even as nodes exchange more and more information to improve their
knowledge. We also show that more optimistic protocols perform better. Thus, an
excess of caution may hurt protocol performance. See [C11].
Fluid Models of Ack-Clocking in TCP. The sources of TCP connections synchronize
their operations on the acknowledgments that they receive. Standard TCP fluid models
ignore this effect. We show that correcting the models to capture ack-clocking makes it
possible to characterize the correct stability region.
The group at UMass is working on the following three areas: 1) the use of multipath in
wireless networks, 2) capturing the effects of LRD on networks through the use of
Poisson Counter driven stochastic differential equations, 3) characterizing large graphs
through sampling.
In the context of multipath, we have developed mathematical results on how multipath
can be used to reduce the tail of the delay distribution. We consider two scenarios:
redundant routing, where a copy of the file is transmitted over every path until it is
received at the destination, and split routing, where the file is split up among the
paths. For general channel models we show that the delay distribution is
characterized by a power law tail and that the choice of which of redundant or split
routing to use depends on the channel statistics. In the context of PCSDE models of
LRD, we have developed an SDE representation of Reed's model for generating a
double Pareto distribution. We have used PCSDEs to explore the benefits of pacing for
reducing burstiness and improving network performance. Last, we have started an
investigation of different techniques for sampling network graphs and are exploring how
the shape of the distribution affects their performance.
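A toy Monte Carlo experiment conveys the redundant-versus-split comparison. The Pareto per-path delays and all parameters below are made up for illustration and are not the paper's channel model: redundant routing completes when the first full copy arrives (the minimum of the path delays), which lightens the tail, while split routing must wait for the slower half-file (the maximum of halved delays).

```python
import random

def pareto(rng, shape):
    # Pareto with unit scale: P(X > t) = t ** (-shape) for t >= 1
    return (1.0 - rng.random()) ** (-1.0 / shape)

def tail_probs(shape=1.5, t=10.0, trials=20000, seed=7):
    rng = random.Random(seed)
    red = spl = 0
    for _ in range(trials):
        d = [pareto(rng, shape), pareto(rng, shape)]    # full-file delays
        h = [0.5 * pareto(rng, shape), 0.5 * pareto(rng, shape)]  # half-files
        red += min(d) > t          # redundant: first complete copy wins
        spl += max(h) > t          # split: wait for the slower half
    return red / trials, spl / trials

p_red, p_split = tail_probs()   # redundant routing has the lighter tail here
```

Taking the minimum of two independent power-law delays doubles the tail exponent, while the maximum keeps the original exponent, so in this particular toy setting redundant routing wins; as stated above, which mode is preferable in general depends on the channel statistics.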
Barlas Oguz and Venkat Anantharam have studied the problem of scaling of catalog
size with network size in peer-to-peer (P2P) content distribution systems that use a
push-to-peer architecture (i.e. a server seeds the peers with the content in an intelligent
way prior to the P2P downloads). For demand statistics that are heavy tailed (which
is the case in reality, according to empirical Netflix data), they find that the scaling can
be proportional to network size in some regimes, a promising result compared to earlier
results in the area that used less realistic demand models. Future work will focus on
understanding the impact of long-range dependence in data sources on the delay in
compression/decompression algorithms.
Vivek Borkar and Venkat Anantharam have developed a general stochastic
approximation (SA) theorem that applies when the noisy observations are heavy tailed
and/or long-range dependent. This theorem can be used as a tool for analyses of the
behavior of SA-based algorithms when the noisy observations have such
characteristics, thus providing a pathway to a considerable increase in our
understanding of the performance of SA-based algorithms in a wide range of control
and networking scenarios.
Venkat Anantharam has studied Fisher information (a statistical characteristic of the
ability to estimate in parametric models, including heavy-tailed models) with a view
towards developing more widely applicable entropy power inequalities and determined
that certain results published in the literature are wrong, by giving explicit
counterexamples (see [P7]).
The Caltech group worked on a dynamic formulation of protocols. Protocol layering can
be interpreted as optimization decomposition where different layers jointly solve a global
optimization problem when each layer solves a subproblem using only local information
over a subset of the optimization variables and coordinates with other layers
(subproblems) through functions of primal and dual variables (interfaces). These
optimization problems define the constraints that deconstrain at the core of the protocol
stack and allow diverse applications and hardware platforms, giving rise to architectures
that are robust and evolvable. For example, network utility maximization (NUM) has
been extended to model the multiple protocol layers, including transport layer
(congestion control), routing, scheduling and network coding in wireless networks [P5,
P6]. These models, however, are "static" in the sense that they describe only the
equilibrium state and are oblivious to network dynamics. We have extended the static
NUM model to finite-horizon optimal control models in [C1] that not only maximize the
aggregate utility at the terminal time but also integrate the transient dynamics of
congestion control algorithms. We derive the cost functions that the transients of the
primal and dual algorithms optimize. This approach allows the design of congestion
control algorithms that are distributed and automatically stable.
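The layering-as-decomposition idea can be made concrete with the textbook dual decomposition of static NUM. In the sketch below, the two-link/three-flow topology, the log utilities, and the step size are assumptions, not taken from [P5, P6]: links act as price updaters driven by their excess load, and each source reacts only to the sum of prices on its route (with U(x) = log x, the source's best response is x = 1/price).

```python
def num_dual(routes, cap, iters=10000, step=0.01):
    price = [1.0] * len(cap)      # one dual variable (price) per link
    x = [0.0] * len(routes)
    for _ in range(iters):
        # source layer: local best response to the route's aggregate price
        x = [1.0 / max(1e-9, sum(price[l] for l in r)) for r in routes]
        # link layer: raise the price when overloaded, lower it when idle
        for l in range(len(cap)):
            load = sum(x[s] for s, r in enumerate(routes) if l in r)
            price[l] = max(0.0, price[l] + step * (load - cap[l]))
    return x, price

# Flow 0 crosses both unit-capacity links; flows 1 and 2 use one link each.
routes = [(0, 1), (0,), (1,)]
x, price = num_dual(routes, cap=[1.0, 1.0])
# proportional fairness gives approximately x = (1/3, 2/3, 2/3)
```

Neither layer sees the global problem: sources see only a scalar price, links see only their own load, yet the iteration settles at the proportionally fair allocation, which is the decomposition structure the paragraph above describes.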
It has recently been discovered that heavy-tailed file completion (transfer) times can
result from protocol interaction even when file sizes are light-tailed. A key to this phenomenon
is the RESTART feature where if a file transfer is interrupted before it is completed, the
transfer needs to restart from the beginning. We show in [C13] that independent or
bounded fragmentation produces light-tailed file completion time as long as the file size
is light-tailed, i.e., in this case, heavy-tailed file completion time can only originate from
heavy-tailed file sizes. We prove that if the failure distribution has non-decreasing
failure rate, then constant fragmentation minimizes expected file completion time. This
optimal fragment size is unique but depends on the file size. We present a simple blind
fragmentation policy where the constant fragment size is independent of the file size
and prove that its expected file completion time is asymptotically optimal when file size
increases. Finally, we show that under both the optimal and blind fragmentation
policies, if the file size is heavy-tailed, then the file completion time is (necessarily)
heavy-tailed with the same tail parameters.
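The RESTART mechanism is easy to see in the classical restart model, used here as a worked example: unit transmission rate, Poisson failures of rate lam, and a per-fragment overhead h. These modelling choices are assumptions for illustration and not necessarily the exact setup of [C13].

```python
import math

def expected_completion(F, x, lam, h):
    # A fragment of size x (plus overhead h) succeeds only if no failure
    # occurs during its transmission; on failure it restarts from scratch,
    # giving expected per-fragment time (exp(lam*(x+h)) - 1)/lam.  A file of
    # size F split into F/x constant fragments then costs:
    return (F / x) * math.expm1(lam * (x + h)) / lam

# The exponential blow-up for large x is the heavy-tail mechanism; the
# overhead term makes the cost also grow as x -> 0, so a unique interior
# fragment size minimizes expected completion time.
grid = [0.1 * k for k in range(1, 101)]      # candidate fragment sizes
best_x = min(grid, key=lambda x: expected_completion(100.0, x, 0.5, 0.2))
```

The grid search above finds a fragment size well inside the interval, consistent with the uniqueness result stated in the text, and tiny or huge fragments are both markedly worse.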
Maximizing the minimum weighted SIR, minimizing the weighted sum MSE and
maximizing the weighted sum rate in a multiuser downlink system are three important
performance objectives in joint transceiver and power optimization, where the users are
subject to a total power constraint. We show in [C12, C14] that, through connections
with nonlinear Perron-Frobenius theory, the max-min weighted SIR problem of jointly
optimizing powers and beamformers can be solved optimally in a distributed fashion. Then,
connecting these three performance objectives through the arithmetic-geometric mean
inequality and nonnegative matrix theory, we solve the weighted sum MSE minimization
and weighted sum rate maximization optimally in the low to moderate interference
regimes using fast algorithms.
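The normalized fixed-point iteration that nonlinear Perron-Frobenius theory justifies can be sketched as follows, with unit weights for simplicity; the 3-user gain matrix, noise level, and power budget below are made up. Scaling each user's power by 1/SIR and renormalizing onto the budget drives all SIRs to a common value, the max-min optimum.

```python
def sir(G, p, noise, i):
    # SIR of user i: own signal over noise plus cross-channel interference
    interf = noise + sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
    return G[i][i] * p[i] / interf

def maxmin_sir_powers(G, noise=0.1, budget=1.0, iters=500):
    n = len(G)
    p = [budget / n] * n
    for _ in range(iters):
        q = [p[i] / sir(G, p, noise, i) for i in range(n)]  # boost weak users
        s = sum(q)
        p = [budget * v / s for v in q]    # project back onto the budget
    return p

G = [[1.0, 0.1, 0.2],
     [0.2, 1.0, 0.1],
     [0.1, 0.1, 1.0]]
p_star = maxmin_sir_powers(G)
sirs = [sir(G, p_star, 0.1, i) for i in range(3)]   # all (nearly) equal
```

At the fixed point every user attains the same SIR, and each update only needs a user's own measured SIR plus one global normalization, which is what makes the distributed implementation possible.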
------------(1) Submissions or publications under ARO sponsorship during this reporting period:
(a) Papers published in peer-reviewed journals:
Accepted:
[P1] C. W. Tan and A. R. Calderbank, "Multiuser Detection of Alamouti Signals," IEEE
Transactions on Communications, Vol. 57, No. 7, pp.
2080-2089, Jul. 2009.
[P2] C. W. Tan, D. P. Palomar and M. Chiang, "Energy-Robustness Tradeoff in Cellular
Network Power Control," IEEE/ACM Transactions on Networking, Vol. 17, No. 3, pp.
912-925, Jun. 2009.
[P3] K. Jacobsson, L. L. H. Andrew, A. K. Tang, S. H. Low and H. Hjalmarsson, "An
Improved Link Model for Window Flow Control and Its
Application to FAST TCP, " in IEEE Transactions on Automatic Control, March 2009.
[P4] Chen, L., T. Cui and S. H. Low, " A Game-theoretic Framework for Medium Access
Control," IEEE Journal of Selected Areas in Communications, 26(7):1116-1127,
September 2008.
Submitted:
[P5] Chen, L., S. H. Low and J. C. Doyle, "Random Access Game and Medium Access
Control Design," IEEE/ACM Transactions on Networking, submitted 2008.
[P6] Chen, L., T. Ho, M. Chiang, S. H. Low and J. C. Doyle, "Congestion Control for
Multicast Flows with Network Coding," IEEE Transactions on Information Theory,
submitted 2008.
[P7] Venkat Anantharam, "Counterexamples to a proposed Stam inequality on finite
groups," IEEE Transactions on Information Theory, submitted August 2009.
(b) Papers published in non-peer-reviewed journals
(c) Presentations
i. Presentations at meetings, but not published in Conference Proceedings
ii. Non-Peer-Reviewed Conference Proceeding publications (other than abstracts).
iii. Peer-Reviewed Conference Proceeding publications (other than abstracts).
Accepted:
[C1] Javad Lavaei, John Doyle and Steven Low, "Congestion Control Algorithms from
Optimal Control Perspective," IEEE Conference on
Decision and Control, Dec 2009.
[C2] K. Jacobsson, L. L. H. Andrew and A. Tang, "Stability and Robustness Conditions
using Frequency Dependent Half Planes," IEEE Conference on Decision and Control,
Dec 2009.
[C3] M. Wang, C. W. Tan, A. Tang and S. H. Low, "How Bad is Single Path Routing? "
Proc. IEEE GLOBECOM, Honolulu, Hawaii, Nov 30-Dec 4, 2009.
[C4] C. W. Tan, M. Chiang and R. Srikant, "Optimal Power Control and Beamforming in
Multiuser Downlink Systems," Proc. Asilomar Conference on Signals, Systems and
Computers, Monterey, CA, Nov 2009.
[C5] W. Song, M. Krishnan, and A. Zakhor, “Adaptive Packetization for Error-Prone
Transmission over 802.11 WLANs with Hidden Terminals,” IEEE International
Workshop on Multimedia Signal Processing, Rio de Janeiro, Brazil, October 2009.
[C6] M. Krishnan, S. Pollin and A. Zakhor, "Local Estimation of Collision Probabilities in
802.11 WLANs With Hidden Terminals," IEEE Globecom 2009 Wireless Networking
Symposium (GC'09-WNS), Honolulu, HI, December 2009.
[C7] Libin Jiang and Jean Walrand, "Utility-Maximizing Scheduling for Stochastic
Processing Networks," Forty-Seventh Annual Allerton Conference, Illinois, USA
September 2009.
[C8] Libin Jiang and Jean Walrand, "Convergence and Stability of a Distributed CSMA
Algorithm for Maximal Network Throughput," IEEE Conference on Decision and Control,
Dec 2009.
Presented:
[C9] Libin Jiang and Jean Walrand, "A Distributed CSMA Algorithm for Throughput and
Utility Maximization in Wireless Networks," Forty-Sixth Annual Allerton Conference,
Illinois, USA September 23-26, 2008.
[C10] Libin Jiang and Jean Walrand, "Approaching Throughput-Optimality in a
Distributed CSMA Algorithm: Collisions and Stability," Mobihoc 2009, New Orleans, LA,
May 2009.
[C11] Jiwoong Lee and J. Walrand, "Reliable relaying with uncertain knowledge," Game
Theory for Networks, vol., no., pp.82-87, 13-15 May 2009.
[C12] C. W. Tan, M. Chiang and R. Srikant, "Maximizing Sum Rate and Minimizing MSE
on Multiuser Downlink: Optimality, Fast Algorithms, and Equivalence via Max-min SIR,"
Proc. IEEE ISIT, Seoul, South Korea, Jun 28-Jul 3, 2009.
[C13] Jayakrishnan Nair and Steven H Low, "Optimal Job fragmentation (Extended
Abstract)", The Eleventh Workshop on Mathematical Performance Modeling and
Analysis, Seattle, WA, June 2009; Also in Performance Evaluation Review, 2009.
[C14] C. W. Tan, M. Chiang and R. Srikant, "Fast Algorithms and Performance Bounds
for Sum Rate Maximization in Wireless Networks," Proc. IEEE INFOCOM, Rio de
Janeiro, Brazil, Apr 20-25, 2009.
[C15] M. Suchara, L. L. H. Andrew, R. Witt, K. Jacobsson, B. P. Wydrowski, and S. H.
Low, "Implementation of Provably Stable MaxNet," in Proc. of BROADNETS 2008,
London, UK, September 2008.
Submitted:
[C16] B. Gauvin, B. Ribeiro, B. Liu, D. Towsley, J. Wang. "Measurement and Analysis of
Publishing Characteristics on MySpace," submitted to INFOCOM 2010.
[C17] Y. Cai, P.-C. Lee, W. Gong, D. Towsley. "On the Mitigation of Traffic Correlation
Attacks on Router Queues," submitted to INFOCOM 2010.
[C18] J. J. Jaramillo and R. Srikant. Optimal Scheduling for Fair Resource Allocation in
Ad Hoc Networks with Elastic and Inelastic Traffic.
http://arxiv.org/abs/0907.5402
[C19] S. Liu, L. Ying and R. Srikant. Throughput-Optimal Opportunistic Scheduling in
the Presence of Flow-Level Dynamics.
http://arxiv.org/abs/0907.3977
[C20] Jian Ni, Bo (Rambo) Tan, R. Srikant, "Q-CSMA: Queue-Length Based CSMA/CA
Algorithms for Achieving Maximum Throughput and Low Delay in Wireless Networks,"
Submitted on 15 Jan 2009.
(d) Manuscripts
(e) Books
(f) Honor and Awards
(g) Title of Patents Disclosed during the reporting period
(h) Patents Awarded during the reporting period
(2) Student/Supported Personnel Metrics (name, % supported, %Full Time Equivalent
(FTE) support provided by this agreement, and total for each category):
(a) Graduate Students:
Libin Jiang (UCB), Feb 09-May 09 49.9%
Jiwoong Lee (UCB), Jun 08-Dec 08 66%, May 09-Aug 09 28.2%
Michael Krishnan (UCB), Jun 08-Aug 08 100%, Feb 09-Aug 09 47.2%
Barlas Oguz (UCB), Jan 09-Aug 09 67.4%
Aminzadeh Gohari (UCB), May 09-Aug 09 72.6%
Assane Gueye (UCB), Jun 08-Dec 08 47.1%
Nikhil Shetty (UCB), Aug 08-Dec 08 49.9%
Bo Jiang (UMASS), 50% August 2009
R. Ribeiro (UMASS)
Javad Lavaei (Caltech), 25% Aug 2008 to present
Somayeh Sojoudi (Caltech), 25% Jul 2008 to present
Jayakrishnan Nair (Caltech), 0%
Chong Jiang (UIUC), 50%, 50%
Bo Tan (UIUC), 0%
Mathieu Leconte (UIUC), 0%
(b) Post Doctorates
Patrick Lee, 100% October 2008 - August 2009
Wei Song (UCB)
Wei Wei (UMASS), 100% July 2008 - present
Lars Krister Jacobsson (Caltech), 100% Sep 2008 to 8/8/09
Chee-Wei Tan (Caltech), 100% Nov 2008 to present
Jian Ni (UIUC), 0%
(c) Faculty
Venkat Anantharam, Jul 08-Aug 08 43.4%, Jul 09-Aug 09 50%
John Doyle, 5% March 2009 to Aug 2009
Weibo Gong, one month summer 2008 and one month summer 2009
Steven Low, 5% Aug 2008 to present
Srikant, 17%
Don Towsley, one month summer 2008 and one month summer 2009
Jean Walrand, Jul 08-Aug 08 50%, May 09-Aug 09 31.6%
Avideh Zakhor, Jul 08 100%, Jan 09-Jun 09 37.7%
(d) Undergraduate Students
Sherman Ng (UCB) 0%
Miklos Christine (UCB) Jun 09-Jul 09 68.6%
Colby Boyer (UCB) 0%
Jimmy Tang (UCB) 0%
Martin B Andreasson (Caltech), 100% Summer 2009
(e) Graduating Undergraduate Metrics (funded by this agreement and graduating during
this reporting period):
i. Number who graduated during this period: One (Colby Boyer at UCB)
ii. Number who graduated during this period with a degree in science, mathematics,
engineering, or technology fields: One
iii. Number who graduated during this period and will continue to pursue a graduate or
Ph.D. degree in science, mathematics, engineering, or technology fields: One
iv. Number who achieved a 3.5 GPA to 4.0 (4.0 max scale): All
v. Number funded by a DoD funded Center of Excellence grant for Education, Research
and Engineering: None
vi. Number who intend to work for the Department of Defense: Unknown
vii. Number who will receive scholarships or fellowships for further studies in science,
mathematics, engineering or technology fields: None
(f) Masters Degrees Awarded (Name of each, Total #)
(g) Ph.D.s Awarded (Name of each, Total #)
(h) Other Research staff (Name of each, FTE % Supported for each, Total %
Supported)
Vivek Borkar (Visiting Researcher, UCB), Apr 09-May 09 100%
Tom Quetchenbach (Caltech), 100% Apr 2009 to July 2009
Lijun Chen (Caltech), 13% Oct 2008 to present
John Doyle Jul 08-Oct 08 33.3%
(3) “Technology transfer” (any specific interactions or developments which would
constitute technology transfer of the research results). Examples include patents,
initiation of a start-up company based on research results, interactions with
industry/Army R&D Laboratories or transfer of information which might impact the
development of products.
None so far.
(4) Scientific Progress and Accomplishments (description should include significant
theoretical or experimental advances)
Abstract (MUST NOT EXCEED THE 200 WORD LIMITATION). The abstract should
include the following components: specific aims, results of findings and their
significance, and plans for the coming year.
Specific Aims: 1) Understand complex effects in networks, including long-range
dependency and multi-scale features; 2) Propose new designs that take advantage of
the improved understanding.
Results and Significance:
Theory: Analysis of mixing times and convergence of distributed algorithms in wireless
networks; Analysis of collisions in WiFi; Fluid models of TCP that capture Ack-clocking;
Analysis of impact of uncertainty on protocols; Impact of heavy tail on content
distribution networks; Poisson counter models for networks with LRD traffic; Analysis of
burst attacks; Stochastic approximation theory for LRD noise.
Designs: Distributed CSMA algorithms for ad-hoc wireless networks that enable simple
protocols that control the congestion, routing, and MAC to maximize the utility of the
flows; WiFi protocols with higher throughput; faster-converging power control algorithms
for wireless networks; new key generation algorithms; Methods to mitigate the effect of
heavy tails by file partitioning and by alternate routing.
Plans for Coming Year: 1) Combination of longest-queue first and CSMA algorithms; 2)
Applications of the new stochastic approximation results; 3) Development of control
theory of LRD based on Poisson-counter models; 4) Development of the dynamic
control formulation of protocols; 5) Implementation and testing of new WiFi protocols.
--------------(4) Scientific Progress and Accomplishments (description should include significant
theoretical or experimental advances)
Distributed Algorithms for Wireless Networks. These are cross-layer protocols that
control congestion, routing, and medium access to maximize the utility of the flows in
the network. The previously known algorithms were the “maximum backpressure”
algorithms based on a primal-dual solution of a utility maximization. Unfortunately,
these algorithms require finding the independent set with the maximum backpressure
and this is a complex step that requires the exchange of control information and solving
a NP-hard problem. The new algorithms are based on a different formulation:
maximizing the utility minus a multiple of the distance between the ideal schedule and
the current schedule, parametrized by node backoff parameters. The gradient algorithm
for this problem distributes into decisions by the nodes based on only local information.
These algorithms were then extended to anycast and mutlicast from a single source
using network coding. See [C9]. The theoretical advance is to show that the actual
algorithm that uses backlogs as a proxy for the actual gradient converges to the optimal
solution and achieves the maximum long-term utility. See [C10] and [C8].
We studied the problem of congestion control and scheduling in ad hoc wireless
networks that have to support a mixture of best-effort and real-time traffic. Optimization
and stochastic network theory have been successful in designing architectures for fair
resource allocation to meet long-term throughput demands. However, strict packet
delay deadlines were not considered in this framework previously. In [C18], we propose
a model for incorporating the quality of service (QoS) requirements of packets with
deadlines in the optimization framework, building upon a model introduced by Hou,
Borkar and Kumar for an access point-controlled network. The solution to the problem
results in a joint congestion control and scheduling algorithm which fairly allocates
resources to meet the fairness objectives of both elastic and inelastic flows, and perpacket delay requirements of inelastic flows.
In [C19], we considered multiuser scheduling in wireless networks with channel
variations and flow-level dynamics. Recently, it has been shown that the MaxWeight
algorithm, which is throughput-optimal in networks with a fixed number users, fails to
achieve the maximum throughput in the presence of flow-level dynamics. In this paper,
we propose a new algorithm, called workload-based scheduling with learning, which is
provably throughput-optimal, requires no prior knowledge of channels and user
demands, and performs significantly better than previously suggested algorithms.
Analysis of Collisions in Wireless Networks. Some wireless protocols use a form of
reservation mini-slots to limit collisions. In [C20], Jian Ni and R. Srikant developed an
algorithm to limit collisions in a wireless network by using contention mini-slots. The
probability that a link becomes active if it does not hear a conflicting transmission
increases with its backlog in a judicious way. The resulting set of active links is a time-
reversible Markov chain that favors the independent sets with a large sum of backlogs.
They show that this algorithm achieves the maximum throughput. In [C10], Jiang and
Walrand study a protocol where nodes use a short "RTS-CTS" exchange to indicate
whether they want to transmit. If a node is successful, it transmits a packet with a
duration that increases with the backlog of the node. They prove that this algorithm
achieves maximum throughput.
Utility-Maximizing Protocols for Processing Networks. In a processing network, a task
uses parts and resources to produce new parts.
Examples include logistics
management, complex mission scheduling, hospital resource allocation and intervention
scheduling, and assembly plant management. Multiple military operations involve such
resource allocation and task scheduling problems. For these networks, a myopic
maximum backpressure algorithm may not achieve the optimal network utility. We
developed in [C7] a new class of algorithms, called deficit maximum weight, that
achieve the maximum utility. The key idea is to augment the state to add enough
memory and avoid the task starvation that greedy scheduling causes.
High-Throughput WiFi protocols. A central problem in WiFi MAC protocols is that
packet errors may be caused either by transmission errors or by collisions. The
remedies for these types of loss are quite different: In case of transmission errors, the
nodes should reduce the transmission rate to reduce the packet error rate; in case of
collisions, the nodes should backoff or increase the packet length but not reduce the
transmission rate. In this work, the type of collision is identified and the nodes can take
the appropriate corrective action.
More specifically, the work developed three
techniques: 1) Estimating the probability of collision to guide the automatic rate fallback
can improve the throughput by 150%; 2) Using estimates of the SNR can improve the
throughput by 625%; 3) Adjusting the packet size to maximize throughput (with a golden
section algorithm) increases the throughput by 200%-300%. See [C5] and [C6].
Impact of Uncertainty on Protocols. Protocols that attempt to maximize some
performance metric may be very sensitive of uncertainty about network characteristics.
In this study, we demonstrate that relaying protocols that attempt to maximize the worstcase throughput based on the uncertainty about the knowledge of other relays may
perform poorly even as nodes exchange more and more information to improve their
knowledge. We also show that more optimistic protocols perform better. Thus, an
excess of caution may hurt protocol performance. See [C11].
Fluid Models of Ack-Clocking in TCP. The sources of TCP connections synchronize
their operations on the acknowledgments that they receive. Standard TCP fluid models
ignore this effect. We show that correcting the models to capture ack-clocking enables
to characterize the correct stability region.
The group at UMass is working on the following three areas: 1) the use of multipath in
wireless networks, 2) capturing the effects of LRD on networks through the use of
Poisson Counter driven stochastic differential equations, 3) characterizing large graphs
through sampling.
In the context of multipath we have developed mathematical results on how it can be
used to reduce the tail behavior of the delay distribution. We consider two scenarios,
redundant routing, where a copy of a file is transmitted over every path till it is received
at the destination, and split routing where the path is split up between the paths. For
general channel models we show that the delay distribution is characterized by a power
law tail and that the choice of which of redundant or split routing to use depends on the
channel statistics. In the context of PCSDE models of LRD, we have developed an
SDE representation of Reed's model for generating a double Pareto distribution. We
have used PCSDEs to explore the benefits of pacing for reducing burstiness and
improving network performance. Last, we have started an investigation of different
techniques for sampling network graphs and are exploring how the shape of the
distribution affects their performance.
Barlas Oguz and Venkat Anantharam have studied the problem of scaling of catalog
size with network size in peer-to-peer (P2P) content distribution systems that use a
push-to-peer architecture (i.e. a server seeds the peers with the content in an intelligent
way prior to the P2P downloads). For demand statistics that are heavy tailed (which
is the case in reality, according to empirical Netflix data), they find that the scaling can
be proportional to network size in some regimes, a promising result compared to earlier
results in the area that used less realistic demand models. Future work will focus on
understanding the impact of long-range dependence in data sources on the delay in
compression/decompression algorithms.
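A toy Zipf profile, a standard stand-in for heavy-tailed demand (the exponent s = 1 is an assumption, not a value fitted to the Netflix data), shows how concentrated such demand is on the most popular titles:

```python
def zipf_weights(catalog_size, s=1.0):
    """Normalized Zipf(s) popularity profile over a catalog of titles."""
    w = [1.0 / (rank ** s) for rank in range(1, catalog_size + 1)]
    total = sum(w)
    return [x / total for x in w]

def head_mass(catalog_size, head, s=1.0):
    """Fraction of total demand captured by the `head` most popular titles."""
    return sum(zipf_weights(catalog_size, s)[:head])
```

For a 10,000-title catalog with s = 1, the top 100 titles capture a bit over half of the demand, the kind of concentration that makes intelligent server-side seeding of popular content effective in a push-to-peer architecture.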
Vivek Borkar and Venkat Anantharam have developed a general stochastic
approximation (SA) theorem that applies when the noisy observations are heavy tailed
and/or long-range dependent. This theorem can be used as a tool for analyses of the
behavior of SA-based algorithms when the noisy observations have such
characteristics, thus providing a pathway to a considerable increase in our
understanding of the performance of SA-based algorithms in a wide range of control
and networking scenarios.
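The setting the theorem addresses can be sketched with a scalar Robbins-Monro iteration driven by symmetric infinite-variance noise; the target value, tail index, and step sizes below are illustrative assumptions, not the conditions of the theorem itself.

```python
import random

def robbins_monro(theta=2.0, tail_index=1.5, steps=200_000, seed=1):
    """Scalar Robbins-Monro iteration x_{n+1} = x_n - a_n (x_n - theta + M_n)
    with step sizes a_n = 1/n and symmetric noise M_n whose magnitude is
    Pareto(tail_index).  For tail_index <= 2 the noise has infinite
    variance, the heavy-tailed regime the SA theorem is concerned with."""
    rng = random.Random(seed)
    x = 0.0
    for n in range(1, steps + 1):
        noise = rng.paretovariate(tail_index) * rng.choice((-1.0, 1.0))
        x -= (1.0 / n) * (x - theta + noise)
    return x
```

With a_n = 1/n this iteration reduces to averaging the noisy observations, so the iterate still drifts to theta despite the infinite-variance noise, only with slower, stable-law-sized fluctuations rather than the usual Gaussian ones.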
Venkat Anantharam has studied Fisher information (a statistical characteristic of the
ability to estimate in parametric models, including heavy-tailed models) with a view
towards developing more widely applicable entropy power inequalities and determined
that certain results published in the literature are wrong, by giving explicit
counterexamples (see [P7]).
The Caltech group worked on a dynamic formulation of protocols. Protocol layering can
be interpreted as optimization decomposition, where different layers jointly solve a global
optimization problem: each layer solves a subproblem using only local information
over a subset of the optimization variables and coordinates with other layers
(subproblems) through functions of primal and dual variables (interfaces).
These
optimization problems define the constraints that deconstrain at the core of the protocol
stack and allow diverse applications and hardware platforms, giving rise to architectures
that are robust and evolvable. For example, network utility maximization (NUM) has
been extended to model the multiple protocol layers, including transport layer
(congestion control), routing, scheduling and network coding in wireless networks [P5,
P6]. These models, however, are "static" in the sense that they describe only the
equilibrium state and are oblivious to network dynamics. We have extended the static
NUM model to finite-horizon optimal control models in [C1] that not only maximize the
aggregate utility at the terminal time but also integrate the transient dynamics of
congestion control algorithms. We derive the cost functions that the transients of the
primal and dual algorithms optimize. This approach allows the design of congestion
control algorithms that are distributed and automatically stable.
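A minimal sketch of the static dual decomposition that the optimal control models extend: with logarithmic utilities, each link prices its excess load and each source reacts only to the total price along its path. The three-flow, two-link topology is an assumed toy example, not a network from the report.

```python
def num_dual(routes, capacity, weights, gamma=0.01, iters=20_000):
    """Dual decomposition for network utility maximization with
    U_s(x) = w_s log x.  Each link raises its price in proportion to its
    excess load; each source sets its rate to w_s divided by the sum of
    prices on its path.  Both updates use only local information."""
    prices = [1.0] * len(capacity)
    rates = [0.0] * len(routes)
    for _ in range(iters):
        for s, path in enumerate(routes):
            rates[s] = weights[s] / sum(prices[l] for l in path)
        for l in range(len(capacity)):
            load = sum(rates[s] for s, path in enumerate(routes) if l in path)
            prices[l] = max(1e-6, prices[l] + gamma * (load - capacity[l]))
    return rates, prices

# Assumed toy topology: flow 0 crosses both unit-capacity links, flows 1
# and 2 use one link each.  The proportionally fair allocation gives the
# long flow 1/3 and each short flow 2/3.
rates, prices = num_dual([(0, 1), (0,), (1,)], [1.0, 1.0], [1.0, 1.0, 1.0])
```

This equilibrium view is exactly what the finite-horizon formulation enriches: the trajectory of `rates` and `prices` on the way to the fixed point, not just the fixed point itself.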
It has been recently discovered that heavy-tailed file completion (transfer) time can result
from protocol interaction even when file sizes are light-tailed. A key to this phenomenon
is the RESTART feature where if a file transfer is interrupted before it is completed, the
transfer needs to restart from the beginning. We show in [C13] that independent or
bounded fragmentation produces light-tailed file completion time as long as the file size
is light-tailed, i.e., in this case, heavy-tailed file completion time can only originate from
heavy-tailed file sizes. We prove that if the failure distribution has non-decreasing
failure rate, then constant fragmentation minimizes expected file completion time. This
optimal fragment size is unique but depends on the file size. We present a simple blind
fragmentation policy where the constant fragment size is independent of the file size
and prove that its expected file completion time is asymptotically optimal when file size
increases. Finally, we show that under both the optimal and blind fragmentation
policies, if the file size is heavy-tailed, then the file completion time is (necessarily)
heavy-tailed with the same tail parameters.
Maximizing the minimum weighted SIR, minimizing the weighted sum MSE and
maximizing the weighted sum rate in a multiuser downlink system are three important
performance objectives in joint transceiver and power optimization, where all the users
have a total power constraint. We show in [C12, C14] that, through connections with
the nonlinear Perron-Frobenius theory, jointly optimizing power and beamformers in the
max-min weighted SIR problem can be solved optimally in a distributed fashion. Then,
connecting these three performance objectives through the arithmetic-geometric mean
inequality and nonnegative matrix theory, we solve the weighted sum MSE minimization
and weighted sum rate maximization optimally in the low to moderate interference
regimes using fast algorithms.
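For the power-control component with fixed receivers, a normalized fixed-point iteration in the spirit of nonlinear Perron-Frobenius theory equalizes the weighted SIRs under a total power budget. The gain matrix in the usage below is an assumed toy instance, not data from [C12, C14], and the sketch omits the beamformer update.

```python
def max_min_sir_power(G, noise, weights, total_power, iters=500):
    """Normalized fixed-point iteration for max-min weighted SIR power
    control under a total power budget: each user rescales its power
    inversely to its current weighted SIR, then the power vector is
    renormalized to the budget.  At the fixed point all weighted SIRs
    are equal, which is the max-min optimality condition."""
    n = len(G)
    p = [total_power / n] * n
    for _ in range(iters):
        q = []
        for i in range(n):
            interference = (sum(G[i][j] * p[j] for j in range(n) if j != i)
                            + noise[i])
            sir = G[i][i] * p[i] / interference
            q.append(p[i] * weights[i] / sir)
        scale = total_power / sum(q)
        p = [scale * v for v in q]
    return p
```

The normalization is what makes the iteration well behaved: the unnormalized map is monotone and scale-invariant in the right sense, so the normalized iterates converge geometrically to the unique fixed point.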
(5) “Copies of technical reports,” which have not been previously submitted to the ARO,
should be submitted concurrently with the Interim Progress Report. (See page 6
“Technical Reports” section for instructions.) However, do not delay submission while
awaiting Reprints of publications.