2019 25th Asia-Pacific Conference on Communications (APCC)
QoS-aware Traffic Engineering in Software Defined
Networks
May Thu Zar Win
University of Computer Studies, Yangon
Yangon, Myanmar
maythuzarwin@ucsy.edu.mm

Yutaka Ishibashi
Nagoya Institute of Technology
Nagoya, Japan
ishibasi@nitech.ac.jp

Khin Than Mya
University of Computer Studies, Yangon
Yangon, Myanmar
khinthanmya@ucsy.edu.mm
Abstract—In this paper, we propose a QoS-aware traffic engineering (TE) method for software-defined networks. Traditional shortest path based routing cannot guarantee future traffic demands because it uses only the minimum hop count. QoS-aware routing is more efficient than traditional shortest path based routing; however, QoS parameters such as link utilization and link delay are needed to perform this kind of routing, and estimating them may increase the controller's computation time and load. To reduce the controller's computation time and improve network resource utilization, the proposed QoS-aware routing method mainly combines two phases: an initial path calculation phase and a link utilization aware path calculation phase. When the network is initialized, the first phase collects all possible paths for each pair of source and destination, together with the link capacities and link delays. When network topology changes occur or the collected paths cannot satisfy traffic demands, the second phase re-calculates paths based on the link utilization and reroutes the traffic through the optimal path. Experimental results on throughput and packet loss rate show that our proposed method outperforms two other methods.
Index Terms—QoS, traffic engineering, SDN
I. INTRODUCTION
Traditional networks integrate the control and data planes into the same devices and lack global, centralized control. Therefore, they cannot satisfy the requirements of emerging cloud computing, the tactile Internet, and Internet of Things (IoT) technologies [1]. Moreover, traditional networks cannot handle the complexity of control protocols, complex traffic engineering (TE) tasks, or the interconnection of a huge number of smart devices [2].
Software Defined Networking (SDN) is an architecture that overcomes the above issues of traditional networks by taking advantage of globally centralized control, decoupling the control and data planes, and enabling innovation through network programmability [3]. With these advantages, QoS-aware TE can be performed more effectively in SDN environments.
In an SDN network, the complex route calculation, TE, and
security are performed by the controller. When the first packet
of a flow enters an OpenFlow-enabled switch, the switch
encapsulates and forwards the packet to the SDN controller
to decide whether the flow should be added to the switch flow
table or not. Then, based on the decision, the switch forwards incoming packets to an appropriate port [4]. Therefore, it is
quite important to reduce the communication latency between
the controller and the OpenFlow-enabled devices.
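The table-miss behavior described above can be made concrete with a small, framework-agnostic sketch; it does not use the ONOS or OpenFlow APIs, and `flow_table`, `controller_decide`, and the packet fields are illustrative names:

```python
# Schematic simulation of reactive flow setup in OpenFlow (illustrative only;
# real controllers such as ONOS implement this through their own APIs).

flow_table = {}          # match fields -> output port (single-switch view)

def controller_decide(switch_id, packet):
    """Called on a table miss: the controller picks an output port and
    decides whether to install a flow rule on the switch."""
    out_port = 2          # hypothetical routing decision
    install_rule = True   # e.g., a denied flow would not be installed
    return out_port, install_rule

def on_packet_in(switch_id, packet):
    match = (packet["src"], packet["dst"])
    if match in flow_table:                  # rule installed: fast path
        return flow_table[match]
    out_port, install = controller_decide(switch_id, packet)  # slow path
    if install:
        flow_table[match] = out_port         # later packets skip the controller
    return out_port

print(on_packet_in("S1", {"src": "h1", "dst": "h3"}))  # miss -> controller
print(on_packet_in("S1", {"src": "h1", "dst": "h3"}))  # hit  -> flow table
```

Only the first packet of a flow takes the slow path through the controller; once a rule is installed, subsequent packets match in the switch, which is why the controller-to-switch latency matters mainly at flow setup time.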
The traditional SPF (Shortest Path First) algorithm routes traffic efficiently, but congestion may occur, and SPF produces bottlenecks for future traffic demands [5]. SPF takes only the minimum hop count into account and achieves neither QoS-aware TE nor load balancing. Therefore, QoS-aware TE algorithms remain important for the future Internet.
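For reference, here is a minimal sketch of minimum hop-count routing with breadth-first search, the behavior that SPF-style forwarding exhibits in this setting. The adjacency is reconstructed from the switch paths named later in this paper (p1 = S1–S2–S3, p2 = S1–S4–S6–S3, S3–S6–S5–S4–S1) and ignores capacity and delay entirely:

```python
from collections import deque

# Adjacency reconstructed from the paths named in the text; the full link
# set of the test topology (Fig. 3) may differ.
graph = {
    "S1": ["S2", "S4"], "S2": ["S1", "S3"], "S3": ["S2", "S6"],
    "S4": ["S1", "S5", "S6"], "S5": ["S4", "S6"], "S6": ["S3", "S4", "S5"],
}

def min_hop_path(src, dst):
    """BFS returns a path with the fewest hops, blind to capacity and delay."""
    queue, visited = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])

print(min_hop_path("S1", "S3"))  # ['S1', 'S2', 'S3'] -- p1 in Section IV
```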
Many TE algorithms have tried to solve the problem of setting up bandwidth-guaranteed tunnels in networks [6]-[8]. For a wider range of Internet applications, routing algorithms based on delay and link utilization have become more important for fulfilling user requirements. A simple solution is proposed in [9]: first, all links with insufficient bandwidth are pruned, and then the path with the smallest delay is chosen. Among bandwidth-delay constrained routing algorithms, the Maximum Delay Weighted Capacity Routing Algorithm (MDWCRA) tries to minimize the interference between each pair of ingress and egress nodes [8]. The algorithm also calculates the shortest disjoint paths, defines critical links for bottleneck traffic, and avoids those links for future demands.
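The prune-then-shortest-delay idea of [9] can be sketched as follows; the (capacity Mbps, delay ms) values are illustrative stand-ins, not the exact figures of our test topology:

```python
import heapq

# Sketch of bandwidth-delay constrained routing in the style of [9]:
# prune links with insufficient bandwidth, then run Dijkstra on delay.
links = {
    ("S1", "S2"): (14, 10), ("S2", "S3"): (14, 10),
    ("S1", "S4"): (20, 5),  ("S4", "S6"): (20, 5),
    ("S6", "S3"): (20, 5),  ("S4", "S5"): (20, 10), ("S5", "S6"): (20, 10),
}

def prune_then_fastest(src, dst, demand_mbps):
    # 1) Prune: keep only links whose capacity satisfies the demand.
    adj = {}
    for (a, b), (cap, delay) in links.items():
        if cap >= demand_mbps:
            adj.setdefault(a, []).append((b, delay))
            adj.setdefault(b, []).append((a, delay))
    # 2) Dijkstra on the pruned graph, minimizing total delay.
    heap, seen = [(0, src, [src])], set()
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return path, d
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (d + w, nxt, path + [nxt]))
    return None, float("inf")

print(prune_then_fastest("S1", "S3", demand_mbps=15))
```

With a 15 Mbps demand, the two 14 Mbps links are pruned first, and the search returns S1–S4–S6–S3 with a total delay of 15 ms under these assumed values.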
Most bandwidth and delay constrained routing algorithms consist of the following three main modules: calculating bandwidth and delay, finding a path based on bandwidth and delay, and re-routing or installing flow rules to forward the flow. Constraint-based routing algorithms alleviate congestion and improve resource utilization. However, the calculation time and the flow rule installation time become non-negligible factors. This motivates us to divide our TE algorithm into the following two phases: the initial path calculation phase and the link utilization aware path calculation phase. For fast packet rerouting, we initially calculate the link delay and link capacity and then compute all the paths for each pair of source and destination in the initial phase. When network topology changes occur or the incoming flow demand cannot be satisfied by the pre-calculated paths, we re-calculate link utilization aware paths and re-route the traffic in the second phase.
The remainder of this paper is organized as follows. Section II outlines the background theory of SDN. In Section III, we propose the QoS-aware TE method and explain our system
design. We present the experimental setup and results in
Section IV. Finally, Section V summarizes the paper.
II. SOFTWARE DEFINED NETWORKING
The SDN architecture mainly consists of the following three
layers: the application layer, control layer, and data plane layer
as shown in Fig. 1.
Fig. 1: Software-defined networks architecture.
The SDN applications are programmed to support all kinds
of network services such as traffic engineering, load balancing,
routing, and monitoring. The control layer is the core layer of the SDN architecture; it extracts information from the data plane layer and provides the application layer with an abstract view of the network topology, including statistics and events. The application and control layers communicate by using northbound APIs. The data plane layer consists of network nodes that are capable of forwarding and processing packets on the data path. Communication between the data plane and control layers uses a standardized protocol called OpenFlow.
III. QOS-AWARE TRAFFIC ENGINEERING

In this section, we propose the link utilization and link delay based QoS-aware TE method. We have implemented the proposed method on the SDN application layer. As illustrated in Fig. 2, the proposed method mainly involves the following four modules: topology discovery, Paths Store, utilization monitor, and flow management.

Fig. 2: Proposed system design.

A. Topology Discovery
The topology discovery module discovers network nodes and constructs a network topology graph by using Depth First Search. Whenever network nodes are added or removed, or link states go up or down, the topology discovery module receives notifications from the controller's event listeners.

Fig. 3: Test topology.

Figure 3 illustrates the test topology, which includes 6 switches and 6 hosts. The numbers (1, 2, 3, and 4) written at each switch denote the port numbers of the switch, and the two numbers (e.g., (14, 10)) on each link represent the link capacity (Mbps) and the link delay (ms), respectively. We will carry out experiments using this test topology in Section IV.
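To make the DFS-based path collection of the initial phase concrete, here is a sketch that enumerates all simple paths for a source/destination pair; the adjacency is reconstructed from the paths named in this paper and may omit links of Fig. 3:

```python
# DFS enumeration of all simple paths between a source/destination pair,
# as used in the initial path calculation phase.
graph = {
    "S1": ["S2", "S4"], "S2": ["S1", "S3"], "S3": ["S2", "S6"],
    "S4": ["S1", "S5", "S6"], "S5": ["S4", "S6"], "S6": ["S3", "S4", "S5"],
}

def all_paths(src, dst, path=None):
    path = (path or []) + [src]
    if src == dst:
        return [path]
    paths = []
    for nxt in graph[src]:
        if nxt not in path:                 # simple paths only: no loops
            paths.extend(all_paths(nxt, dst, path))
    return paths

for p in all_paths("S3", "S1"):
    print(p)   # includes S3-S6-S5-S4-S1, the sample path of Fig. 4
```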
B. Paths Store

In our proposed method, we implement a new database called Paths Store, which stores path information for each pair of source and destination. The path information includes the DeviceIDs of the source and destination, each link along the path, the link capacity, the link delay, and a path_count. In this paper, we define the link capacity as the initially assigned bandwidth, and the link capacity of a path is the minimum link capacity value along the path. We also define the link delay as the total end-to-end delay, and the link delay of a path is the total link delay along the path. To obtain the values of link capacity and delay for each pair of source and destination, we construct a hash table data structure that stores pairs of source and destination as the keys and the link capacity and delay as the values.

When the network is initialized, we save the path information for each pair of source and destination into Paths Store by taking advantage of the global view of SDN. Figure 4 represents sample path information in Paths Store. The sample path information is one of the paths between source switch S3 (DeviceID = of:0000000000000003) and destination switch S1 (DeviceID = of:0000000000000001) of the test topology in Fig. 3. In Fig. 4, the first element of the cost tuple (20.0) represents the minimum link capacity along the path S3–S6–S5–S4–S1, and the second element (25.0) represents the total link delay along the path. The initial path_count value is zero.

Fig. 4: Sample path information in Paths Store.
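A minimal sketch of such a Paths Store record, assuming a Python dictionary keyed by the (source, destination) DeviceID pair; the per-link capacity and delay values below are assumptions chosen only to reproduce the (20.0, 25.0) cost tuple of Fig. 4:

```python
# Paths Store sketch: each entry holds the links of one path, its capacity
# (minimum link capacity), its delay (total link delay), and a path_count
# that starts at zero. Field names are illustrative.
paths_store = {}

def add_path(src_id, dst_id, links, link_caps, link_delays):
    entry = {
        "links": links,                 # e.g. ["S3-S6", "S6-S5", ...]
        "capacity": min(link_caps),     # path capacity = min along the path
        "delay": sum(link_delays),      # path delay = total along the path
        "path_count": 0,                # flows currently using this path
    }
    paths_store.setdefault((src_id, dst_id), []).append(entry)

# Sample path of Fig. 4: S3-S6-S5-S4-S1 with cost tuple (20.0, 25.0).
# Per-link values are assumptions chosen to reproduce those totals.
add_path("of:0000000000000003", "of:0000000000000001",
         ["S3-S6", "S6-S5", "S5-S4", "S4-S1"],
         link_caps=[20.0, 25.0, 30.0, 22.0],
         link_delays=[10.0, 5.0, 5.0, 5.0])
print(paths_store)
```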
C. Utilization Monitor
The utilization monitor collects the statistical information
such as port statistics of the underlying network. It also
computes the link utilization based on the collected byte count
and link capacity.
1) Calculating Link Utilization: By taking advantage of
SDN’s global centralized control, we use OpenFlow messages to calculate the link utilization (available bandwidth).
OpenFlow has many statistics messages such as flow stats,
meter stats, aggregate stats, queue stats, port stats, and table
stats. OpenFlow permits the controller to query the statistics
information of the switches. However, OpenFlow does not
implement a way to gather the link utilization (available
bandwidth) and delay values from the switch directly. Therefore, the controller uses the raw statistics values to determine
the link utilization of the switches. We can obtain the link
utilization as follows:
$$Lu_i(t) = C_i(t) - L_i(t) \qquad (1)$$

where $Lu_i(t)$ is the link utilization of the $i$-th link at time $t$, and $C_i(t)$ and $L_i(t)$ are the capacity and the link load of the $i$-th link at time $t$, respectively. The link load of the $i$-th link at time $t$ is obtained by Eq. (2):

$$L_i(t) = src\_port_{byteSent}(t) + dst\_port_{byteReceive}(t) \qquad (2)$$

where $src\_port_{byteSent}(t)$ and $dst\_port_{byteReceive}(t)$ are the source port statistics of transmitted byte count and the destination port statistics of received byte count at time $t$, respectively. The link utilization of a path is the minimum link utilization along the given path, as shown in Eq. (3):

$$Lu_{path}(t) = \min_{Lu_i \in Path} Lu_i(t) \qquad (3)$$
Querying port statistics from all the switches in the network
may increase the controller’s load and computation time.
Therefore, the proposed method only queries statistics from
the source and destination switches of incoming traffic and
then calculates the link utilization.
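A sketch of Eqs. (1)-(3) follows; converting byte-counter deltas to Mbps over a polling interval is one common realization, not necessarily the exact one used in our implementation:

```python
# Eq. (2): link load from port byte counters; Eq. (1): link utilization
# (available bandwidth) as capacity minus load; Eq. (3): path utilization
# as the minimum along the path. Counter values are illustrative.
def link_load_mbps(tx_bytes_delta, rx_bytes_delta, interval_s):
    return (tx_bytes_delta + rx_bytes_delta) * 8 / interval_s / 1e6

def link_utilization(capacity_mbps, load_mbps):
    return capacity_mbps - load_mbps        # Eq. (1)

def path_utilization(link_utils):
    return min(link_utils)                  # Eq. (3)

load = link_load_mbps(tx_bytes_delta=6_250_000, rx_bytes_delta=0, interval_s=5)
print(link_utilization(20.0, load))        # 20 Mbps link carrying 10 Mbps -> 10.0
print(path_utilization([10.0, 4.5, 12.0])) # bottleneck: 4.5 Mbps available
```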
2) Estimating Link Delay: There is much research on estimating link delay. One solution [10] estimates the end-to-end link delay as follows:
$$T_{end\text{-}to\text{-}end\text{-}delay} = T_{total} - (T_{controller\text{-}to\text{-}source\text{-}switch} + T_{controller\text{-}to\text{-}destination\text{-}switch}) \qquad (4)$$

where $T_{total}$ is the time taken to send a probe packet from the controller to the source switch, from the source switch to the destination switch, and from the destination switch back to the controller, and $T_{end\text{-}to\text{-}end\text{-}delay}$ is the delay from the source switch to the destination switch. The solution obtains the one-way delay by subtracting the controller-to-source-switch and controller-to-destination-switch delay times from the total time $T_{total}$. In this paper, we assume that the link delays are already known according to the global view of SDN.
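Although we assume known link delays, the probe-based estimate of Eq. (4) from [10] can be sketched as follows (all timing values are illustrative):

```python
# Eq. (4): the controller measures the round trip controller -> source
# switch -> destination switch -> controller, then subtracts the two
# controller-to-switch legs (e.g., measured as half of an echo RTT).
def end_to_end_delay(t_total_ms, t_ctrl_to_src_ms, t_ctrl_to_dst_ms):
    return t_total_ms - (t_ctrl_to_src_ms + t_ctrl_to_dst_ms)

# Probe took 40 ms in total; each controller-to-switch leg took 7.5 ms.
print(end_to_end_delay(40.0, 7.5, 7.5))    # estimated link delay: 25.0 ms
```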
D. Flow Management
The Flow management module performs the following three
main tasks: selecting a path from Paths Store based on the
traffic demand, calculating paths based on the computed link
utilization, and installing flow rules for calculated paths into
the intermediate switches along the path.
Fig. 5: Overall process.

As illustrated in Fig. 5, when a traffic demand or QoS request enters the network, we first check whether topology changes have occurred. If the topology has not changed, we then check whether Paths Store contains a path that satisfies the demand. If such paths exist, we select one of the maximum link utilization paths, increase the path_count of the selected path, and update the Paths Store database. When the next flow demand enters the network, we select a path from Paths Store based on two criteria: maximum link utilization and minimum path_count; that is, among the paths that satisfy the demand, we select the one with the minimum path_count (see the sketch after this paragraph). Then we install flow rules and forward the packets of the QoS request through the path. This idea is used in the initial path calculation process, and it can improve the network resource utilization and alleviate the bottleneck of future traffic demands.
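A sketch of this selection rule, reusing the record layout assumed in the Paths Store sketch of Section III-B (the `utilization` field stands for the path's available bandwidth):

```python
# Among stored paths whose available bandwidth satisfies the demand, prefer
# maximum link utilization, break ties with minimum path_count, then
# increment path_count for the chosen path.
def select_path(candidates, demand_mbps):
    feasible = [p for p in candidates if p["utilization"] >= demand_mbps]
    if not feasible:
        return None          # fall back to the utilization aware phase
    best = max(feasible, key=lambda p: (p["utilization"], -p["path_count"]))
    best["path_count"] += 1  # record that a new flow uses this path
    return best

candidates = [
    {"links": ["S1-S2", "S2-S3"], "utilization": 14.0, "path_count": 1},
    {"links": ["S1-S4", "S4-S6", "S6-S3"], "utilization": 20.0, "path_count": 0},
]
print(select_path(candidates, demand_mbps=15)["links"])   # the 20 Mbps path
```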
If no path with sufficient capacity exists in Paths Store, the link utilization aware path computation phase calculates the link utilization for the paths between the source and destination switches by using the collected port statistics and selects the path with the maximum link utilization. Then, we install flow rules into the intermediate devices along the path and forward the flow to the destination.
IV. EXPERIMENTS AND RESULTS

This section describes the experimental testbed of the proposed QoS-aware TE method, the experiment scenarios, and the experimental results.
A. Experimental Testbed

Experimental tests were conducted on a laptop PC (Core i5-4210U CPU @ 2.20 GHz, 8 GB RAM, Ubuntu 16.04 OS) with the Mininet network emulator [11] and the ONOS SDN controller [12]. We used the Mininet network emulator to create our network topology and the ONOS controller as the SDN controller. The proposed QoS-aware TE method ran as an ONOS controller application in the application layer. The software specifications of our testbed are presented in Table I.
TABLE I: Software Specifications of Experimental Testbed

No. | Name              | Specification
----+-------------------+---------------------------
 1  | Operating System  | Ubuntu 16.04 LTS (64 bit)
 2  | Mininet Emulator  | Version 2.3.0
 3  | ONOS Controller   | Version 1.10.0
 4  | OpenFlow Protocol | Version 1.3
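For reproducibility, a sketch of building a comparable topology in Mininet follows; the per-link bandwidth/delay values and the controller address are assumptions, since only the (14, 10) example of Fig. 3 is stated in the text:

```python
#!/usr/bin/env python
# Sketch of a Fig. 3-like test topology in Mininet with an external ONOS
# controller. Link values are illustrative.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.link import TCLink
from mininet.cli import CLI

def build():
    net = Mininet(controller=None, link=TCLink)
    net.addController('c0', controller=RemoteController,
                      ip='127.0.0.1', port=6653)   # ONOS OpenFlow port
    hosts = [net.addHost('h%d' % i) for i in range(1, 7)]
    switches = [net.addSwitch('s%d' % i) for i in range(1, 7)]
    for h, s in zip(hosts, switches):
        net.addLink(h, s)                          # one host per switch
    # (a, b, capacity Mbps, delay ms) per inter-switch link -- assumed values
    edges = [(1, 2, 14, 10), (2, 3, 14, 10), (1, 4, 20, 5),
             (4, 5, 20, 10), (5, 6, 20, 10), (4, 6, 20, 5), (3, 6, 20, 5)]
    for a, b, bw, d in edges:
        net.addLink(switches[a - 1], switches[b - 1], bw=bw, delay='%dms' % d)
    net.start()
    CLI(net)
    net.stop()

if __name__ == '__main__':
    build()
```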
B. Experiment Scenarios
To highlight the outcome of our proposed QoS-aware TE
method, we compare the following three methods in two
different scenarios.
• Method 1: Default reactive forwarding in ONOS. Method 1 is neither QoS-aware nor load balancing; it always finds the minimum hop-count path.
• Method 2: Link utilization-aware routing. This routing is similar to our link utilization aware path calculation phase. To route a QoS request, method 2 computes the link utilization of the paths between the source and destination switches and forwards the QoS request through the path with the maximum link utilization.
• Method 3: The proposed QoS-aware TE method. To achieve fast and efficient TE, we divide our QoS-aware TE method into two phases. If there is a path with sufficient capacity in Paths Store, the proposed method simply selects the path with the maximum link utilization and forwards the flow through it. Therefore, we can reduce the time spent collecting traffic statistics, calculating the link utilization, and computing paths. When the network topology changes or the QoS requests cannot be guaranteed with the pre-calculated paths in Paths Store, we use the link utilization aware path calculation phase to forward the packets of a QoS request flow. This can reduce the packet loss rate and improve resource utilization.
In this test, we employed the above three methods in two scenarios, called scenarios I and II (a sketch of the traffic generation follows the scenario list). The tests generated Iperf [13] UDP traffic at different bit rates between hosts h1 and h3 in Fig. 3 by using methods 1, 2, and 3.
• Scenario I: Host h1 sends UDP traffic of 10 Mbps to host h3 for 10 seconds. Then, h1 sends UDP traffic of 15 Mbps to h3.
• Scenario II: Host h1 sends UDP traffic of 10 Mbps and 15 Mbps to h3 in parallel for 10 seconds.
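One possible realization of the traffic generation from inside a Mininet script, continuing the topology sketch in Section IV-A; the exact Iperf options used in our tests are not stated, so these are typical values:

```python
# Scenario I traffic generation; call after net.start() in the earlier
# topology sketch. Log paths and report intervals are assumptions.
def run_scenario_one(net):
    h1, h3 = net.get('h1'), net.get('h3')
    h3.cmd('iperf -s -u -i 1 > /tmp/iperf_server.log &')   # UDP server on h3
    h1.cmdPrint('iperf -c %s -u -b 10M -t 10' % h3.IP())   # 10 Mbps for 10 s
    h1.cmdPrint('iperf -c %s -u -b 15M -t 10' % h3.IP())   # then 15 Mbps
```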
C. Experimental Results
When we conducted scenario I using method 1, h1 forwarded the UDP traffic to h3 through the path p1 = S1–S2–S3.

Fig. 6: Flow tables of switches S1, S2 and S3.

We can see the forwarded path from the flow table information in Fig. 6, which shows only the flow tables of switches S1, S2, and S3 because there is no flow table information in switches S4, S5, and S6.

Fig. 7: Packet loss rate for method 1 in scenarios I and II.

Method 1 uses the path p1 even though p1 does not satisfy the flow demands, because it only takes the minimum hop count into account. Therefore, method 1 has a packet loss rate of 50.3% for the 15 Mbps traffic and 7.37×10⁻²% for the 10 Mbps traffic, and the overall packet loss rate is 25.2%. In scenario II, method 1 chooses path p1 again; therefore, the overall packet loss rate increases to 51.7%, as shown in Fig. 7.

Method 2 always finds the path with the maximum link utilization. When we ran scenario I using method 2, host h1 sent both UDP traffic flows to h3 via the maximum link utilization path p2 = S1–S4–S6–S3. In scenario II, method 2 forwards the traffic of 15 Mbps through the path p2 = S1–S4–S6–S3 and the traffic of 10 Mbps through p1 = S1–S2–S3.

When we ran scenarios I and II using our proposed method (method 3), the proposed method checks whether there is a path with sufficient capacity in Paths Store. If such a path exists, the method forwards the traffic through that path; if not, the method finds a path based on the link utilization.

Fig. 8: Packet loss rate for methods 2 and 3.

According to Figs. 7 and 8, method 2 and the proposed method (method 3) have smaller packet loss rates than method 1. The throughput results for UDP traffic of 10 Mbps and 15 Mbps in scenario I are shown in Figs. 9 and 10, respectively. Figures 11 and 12 show the throughput results for scenario II.

Fig. 9: Throughput versus elapsed time for 10 Mbps UDP traffic in scenario I.

Fig. 10: Throughput versus elapsed time for 15 Mbps UDP traffic in scenario I.

Fig. 11: Throughput versus elapsed time for 10 Mbps UDP traffic in scenario II.

Fig. 12: Throughput versus elapsed time for 15 Mbps UDP traffic in scenario II.

The figures reveal that method 2 and the proposed method (method 3) achieve almost the same throughput. However, method 2 always needs to query statistics from the switches, calculate the link utilization, and find the path with the maximum link utilization, which may increase the controller load. To get a fast and efficient route, we propose the QoS-aware TE method (method 3), which achieves the lowest packet loss rate and the highest throughput of the three methods.
V. CONCLUSION
In this paper, we proposed a QoS-aware TE method in SDN. The proposed method combines the following two phases: the initial path calculation phase and the link utilization aware path calculation phase. To compute QoS parameters such as the link utilization and the link delay, the controller must constantly monitor the switches and query statistics values, and these tasks may increase the controller's workload. To reduce the load, the proposed method checks whether the incoming traffic demand can be satisfied by the paths in Paths Store; if a path with sufficient capacity exists, the proposed method forwards the traffic through that path. Moreover, we found that the proposed method can reduce the packet loss rate and increase the throughput by using the link utilization aware path calculation phase when the traffic demand cannot be satisfied by the path information in Paths Store. The experimental results demonstrated that the proposed QoS-aware TE method outperforms the other two methods in total throughput.
For future work, we will combine our proposed method with application-aware features and find paths based on each application's requirements. We will not only compare our proposed method with other QoS-aware traffic engineering methods but also study complex network topologies and other network operating systems.
REFERENCES
[1] L. C. Cheng, K. Wang, and Y. H. Hsu, "Application-aware routing scheme for SDN-based cloud datacenters," Proc. IEEE Ubiquitous and Future Networks (ICUFN), pp. 820-825, July 2015.
[2] D. Sinh, L. V. Le, B. S. P. Lin, and L. P. Tung, "SDN/NFV - A new approach of deploying network infrastructure for IoT," Proc. IEEE Wireless and Optical Communication Conference (WOCC), pp. 1-5, June 2018.
[3] Open Networking Foundation, "Software Defined Networking: The new norm for networks," White Paper, retrieved Apr. 2014.
[4] W. Stallings, "Software-defined networks and OpenFlow," The Internet Protocol Journal, vol. 16, no. 1, Mar. 2013.
[5] R. Jmal and L. C. Fourati, "Implementing shortest path routing mechanism using OpenFlow POX controller," Proc. IEEE International Symposium on Networks, Computers and Communications, pp. 1-6, June 2014.
[6] "QoS routing mechanisms and OSPF extensions," IETF RFC 2676, 1999.
[7] Y. Yang, J. K. Muppala, and S. T. Chanson, "Quality of service routing algorithms for bandwidth-delay constrained applications," Proc. Ninth International Conference on Network Protocols, pp. 62-70, Nov. 2001.
[8] M. Kodialam and T. V. Lakshman, "Minimum interference routing with applications to MPLS traffic engineering," Proc. IEEE INFOCOM, vol. 2, pp. 884-893, 2000.
[9] Z. Wang and J. Crowcroft, "Quality of service routing for supporting multimedia applications," IEEE Journal on Selected Areas in Communications, vol. 14, no. 7, pp. 1228-1234, Sep. 1996.
[10] H. T. Zaw and A. H. Maw, "Elephant flow detection and delay-aware flow rerouting in software-defined network," Proc. IEEE International Conference on Information Technology and Electrical Engineering (ICITEE), pp. 1-6, Oct. 2017.
[11] Mininet [Online]. Available: http://mininet.org.
[12] ONOS [Online]. Available: https://onosproject.org.
[13] Iperf - The TCP/UDP bandwidth measurement tool [Online]. Available: https://iperf.sourceforge.net.