Optical Switching and Networking 47 (2023) 100712
Contents lists available at ScienceDirect
Optical Switching and Networking
journal homepage: www.elsevier.com/locate/osn
Load adaptive merging algorithm for multi-tenant PON environments
Khalid Hussain Mohammadani a,b,*, Rizwan Aslam Butt c, Kamran Ali Memon d, Nazish Nawaz Hussaini e, Arshad Shaikh f

a College of Computer Science, Huanggang Normal University, Huanggang 438000, China
b State Key Laboratory of Information Photonics and Optical Communications, School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
c Department of Telecommunications Engineering, NED University of Engineering and Technology, Karachi 75270, Pakistan
d Department of Telecommunication Engineering, QUEST, Nawabshah 67450, Pakistan
e Institute of Mathematics & Computer Science, University of Sindh, Jamshoro, Pakistan
f Department of Computer Science, ISRA University, Hyderabad 313, Pakistan
A R T I C L E  I N F O

Keywords:
Merging engine in MT-PON
Tenant PON
vDBA
vNO
Virtual bandwidth aggregation

A B S T R A C T

Wavelength division multiplexing/time division multiplexing passive optical network (WDM/TDM-PON) is an attractive candidate for PON bandwidth sharing among multiple service providers, featuring massive bandwidth and longer reach. This infrastructure reduces the overall cost of Fiber-to-the-Premises (FTTP) services and offers relatively lower tariffs for end customers. The dynamic bandwidth and wavelength allocation (DBWA) process in such PON networks ensures fair sharing of the available bandwidth resources among the virtual network operators (vNOs). The earlier reported DBWA schemes with multiple vNOs have not efficiently utilized the unused and residual upstream bandwidth. This study presents a novel load adaptive merging algorithm (LAMA) for converting the individual virtual bandwidth maps (vBWmaps) into a single physical bandwidth map (phyBWmap). The LAMA scheme modifies the existing strict-priority scheme, the Priority-Based Merging Algorithm (PBMA), and improves the performance of the merging engine by allocating the phyBWmap in a load adaptive manner to the vNOs in the multi-tenant PON architecture. The proposed algorithm is compared with PBMA in terms of throughput efficiency, upstream delay, and capacity utilization under self-similar and Poisson traffic scenarios. The results show that the proposed scheme offers higher bandwidth utilization, resulting in increased throughput with lower upstream delays in the multi-tenant PON environment.
1. Introduction
With the rapid evolution of augmented reality (AR), virtual reality (VR), and 4K/8K streaming television, the demand for higher-speed Internet services such as on-demand movies, cloud computing, and online 3-D gaming is increasing exponentially. In recent years, next-generation passive optical network (NG-PON) technologies, i.e., XGS-PON and TWDM-PON, have emerged as the dominant broadband access technologies to support higher data rates. TWDM-PON uses multiple stacked wavelengths to achieve a higher capacity of up to 40 Gbps/10 Gbps downstream/upstream. It also offers an extended reach of up to 40 km and increased support of up to 256 users from a single PON port, which is higher than XGS-PON. However, TWDM-PON has the disadvantage of requiring expensive tunable optical transceivers at the Optical Network Units (ONUs) [1]. Although the latest 50G-PON technology offers even higher access bandwidth than TWDM-PON on a single wavelength, this technology is still in the trial and standardization phase at the ITU-T study groups [2,3]. The widespread deployment of NG-PON also has some limitations because of the high capital expenditure (CAPEX) needed for its implementation. Small and medium-size operators opt to share the PON infrastructure to overcome this restriction while still benefiting from the higher capacity and reach of the NG-PON technologies.
The concept of infrastructure and resource sharing in information and communication technologies (ICT) is not new [4]. For example, IP-layer sharing (VPN level) [5] and physical-layer sharing (wavelength sharing) [6–8] are already in practice. NG-PON likewise utilizes bandwidth sharing on a single wavelength as well as multiple-wavelength sharing approaches. The basic idea is similar to software-defined networking (SDN) and network function virtualization (NFV) in IP networks, extended here to the optical access networks (OANs). These approaches have paved the way for network operators to coexist as virtual network operators (vNOs) on the PON infrastructure.

* Corresponding author.
E-mail address: khalid.mohammadani@gmail.com (K.H. Mohammadani).
https://doi.org/10.1016/j.osn.2022.100712
Received 8 September 2021; Received in revised form 20 June 2022; Accepted 12 August 2022
Available online 21 August 2022
1573-4277/© 2022 Elsevier B.V. All rights reserved.
Virtualization technology is a good choice for enabling the compatibility and coexistence of many technologies simultaneously to increase network resource efficiency [9]. With this provision, many operators, services, and applications may share common resources efficiently. The studies [10,11] focus on the virtualization of passive optical networks (VPONs). A VPON is a flexible, high-bandwidth, and cost-effective OAN design. The virtualization of an SDN-based optical network is proposed in Ref. [12]. Similarly, a multi-subsystem VPON for cross-system bandwidth management has also been reported in Ref. [13].
The concept of vNOs has provided an opportunity for network operators to share the existing PON infrastructure with reduced investment and, thus, optimize their gains. Each vNO acts like a tenant, and the shared PON behaves like a multi-tenant infrastructure where all tenants coexist on the same optical distribution network (ODN). However, the multi-tenant PON (MT-PON) scenario cannot work with existing dynamic bandwidth assignment (DBA) schemes, as each vNO's service level agreement (SLA) might not be the same. Thus, the OLT requires multiple virtual dynamic bandwidth assignment (vDBA) schemes to provide each vNO a fair chance of US bandwidth utilization. The study in Ref. [14] has also presented a virtual DBA for PON virtualization. A vDBA enables each vNO to create its own DBA to suit its requirements. Each vDBA interacts with a shared engine, which plays an essential role in sharing the resources among the multiple tenants in the MT-PON network. We present its details in Section 3.
The coordination between multiple vNOs is essential and challenging, as it requires a frame sharing engine (SE) or merging engine (ME) to combine the individual bandwidth maps into one BWmap [15]. The ME merges all virtual bandwidth maps (vBWmaps) into one physical bandwidth map (phyBWmap) while adhering to the traffic-class priorities. Under overload, it reduces the grant sizes of low-priority traffic in the virtual bandwidth maps and tries to fit these bandwidth grants into the next frame. This means that a vBWmap with high-priority traffic gets more upstream bandwidth than a vBWmap with low-priority traffic. The approach proposed in Ref. [15] might suffer from a bandwidth starvation problem: any residual bandwidth deallocated from low-priority traffic is given to high-priority traffic, so low-priority traffic may never get its way through. Thus, vBWmaps with high-priority bandwidth requirements get a larger portion of the phyBWmap, while lightly loaded, low-priority vBWmaps might face higher delays due to a smaller or unavailable bandwidth slice. For this reason, an efficient phyBWmap utilization scheme is required that looks at the vBWmaps from all vNOs and merges them into one phyBWmap in proportion to the vNOs' bandwidth demand. Our research aims at finding a solution to this challenging problem at the ME.
The key contribution of this paper is a load adaptive merging algorithm that merges the individual virtual bandwidth maps generated by their independent vDBA processes. This merging process efficiently utilizes the available upstream (US) bandwidth and minimizes bandwidth wastage. This efficient utilization of the US bandwidth leads to reduced US delays and higher throughput for all the tenants.

The rest of this article is organized as follows: Section 2 reviews the current PON sharing levels in the literature. Section 3 presents the system description and proposes the novel merging scheme. Section 4 describes the simulation setup. Section 5 presents the simulation results with discussion, and Section 6 concludes the article, followed by the references.
2. Related work

In the literature, researchers have applied the vNO concept in PON in three different ways: software-defined network (SDN)-controlled vNOs, merging-engine-based vNOs, and slicing-based vNOs. For example, the authors in Ref. [16] suggested an integrated architecture based on SDN and network function virtualization (NFV), in which OLTs and ONUs were partly virtualized and transferred into a centralized control unit by using an SDN controller at the OLT and its agents at each ONU node. A similar study [17] proposed a GPON-based architecture in which the OLT keeps an OpenFlow agent interacting with the SDN controller; the authors claim that this approach is cost-efficient because it can connect a large number of sites at different locations. Further, central office (CO) virtualization has led to the present virtualization trend in recent years, like the Central Office Re-architected as a Datacenter (CORD) project [18]. The CORD project is a new architecture based on virtualization that adopts the concept of Everything-as-a-Service (XaaS). This architecture suggests transferring the control functions to virtual OLTs (vOLTs) and virtualizing them onto conventional x86 servers located in central offices to increase network versatility. Service providers such as American Telephone and Telegraph (AT&T) and Nippon Telegraph and Telephone (NTT) Communications already support CORD [19], but CORD has the drawback of not allowing vNOs to directly manage their own bandwidth schedulers in a multi-tenant PON [15].

Some authors have worked on medium access control (MAC) layer bandwidth sharing at the frame level to find suitable solutions that can render network services to 5G networks [14,20,21]. In Ref. [21], the authors proposed a framework based on a slice scheduler (SS) and a frame-level scheduler (FLS) in XG-PONs featuring bandwidth sharing. While the SS determines the slice owner for each frame, the FLS allows the operator to plan the bandwidth resources according to specific bandwidth distribution methods for its subscribers [22]. This architecture has isolation and customization issues; on the other hand, an approach based on an intra-frame-level sharing framework was developed to solve these challenges [14]. The sharing framework gets vBWmaps from each vDBA of a vNO, and it forwards the upstream buffer reports (DBRu) to the vDBA of the vNO. It analyzes all received vBWmaps and merges them into a single physical bandwidth map (phyBWmap) of XGS-PON. Within the scope of XGS-PON, the authors in Ref. [14] proposed two distinct types of merging policies. (1) No capacity sharing: a simple approach in which each vDBA knows its shared upstream capacity and does not allocate more than that capacity. The advantage of this approach is that it does not change any grant size; the disadvantage is that it leaves some bandwidth behind every cycle and does not utilize the remaining bandwidth (it is further described in Section 3). (2) Capacity sharing: a more complex, two-step approach. In the first step, there is no reduction in the vBWmap grant sizes if all bandwidth grants can fit in the upstream frame. In the second step, the sharing engine computes the cumulative size of all vBWmap grants; if it is too large to fit in one upstream frame, the best-effort and non-assured grant sizes of the overloaded vNOs must be decreased. Therefore, in this second step, when the merging engine reduces bandwidth, some bandwidth may again be left unassigned.

The authors in Ref. [15] extended the work reported in Ref. [14] and proposed a strict Priority-Based Merging Algorithm (PBMA) for the implementation of the ME. PBMA supports priority-based allocation and works on the above-discussed capacity-sharing policy of the frame-level sharing approach. The authors assumed two individual vDBAs with four priority classes. The method analyzes each incoming vBWmap to see if the initially requested allocation conflicts with another vBWmap from another vNO. PBMA keeps the remaining allocations when a collision at low priority has occurred. However, if the conflicting requests are in the same traffic class, the earlier request is prioritized. PBMA moves the colliding requests with greater priority ahead until enough unfilled positions in the BWmap are available; otherwise, it temporarily designates a request as unallocated for this frame. Following that, for all priority-p requests in the pool of unallocated requests, PBMA checks each vNO's unallocated request of each priority (p) one by one and shifts it to the next available fragment of vacant slots. The request is assigned if an empty fragment with sufficient bandwidth is found; otherwise, it is denied.
The PBMA scheme might not assign bandwidth to the lower-priority classes, leaving them to suffer from bandwidth starvation. Nor does it assign sufficient bandwidth to all priority classes, even when it assigns unallocated slots. Therefore, PBMA should be improved for all priority classes, and a new merging-engine algorithm is required to increase the bandwidth utilization rate in the multi-tenant PON architecture for the vNO clients. This paper contributes a load adaptive merging algorithm (LAMA) scheme for the merging engine and compares it with the existing PBMA scheme. The proposed work offers a proper and flexible bandwidth merging scheme using the concept of the adaptive load/share of each vNO, which efficiently manages the physical bandwidth distribution and handles unused bandwidth to improve the latency, throughput, and revenue for multi-tenant PON architecture vendors.
3. System description

In this section, we first elaborate on the concept of multi-tenant resource sharing in PON and explain the related requirements and the associated problems in the shared environment. Then, the proposed merging algorithm is presented to solve the described problems.
3.1. Multi-tenant passive optical network (MT-PON)

Fig. 1 shows the PON layout and the difference between a traditional TDM PON and the multi-tenant PON environment. In a traditional PON environment, Fig. 1(a), a single tenant utilizes all the available bandwidth resources, and a single dynamic bandwidth allocation (DBA) scheme provides the resources to its clients. A multi-tenant environment, Fig. 1(b), employs the virtualization concept to execute several vDBA processes simultaneously, one for each tenant, to ensure fair resource sharing for all vNOs in the network. In this environment, multiple vNOs may coexist with their users, bound by their specific SLAs. All vNOs follow the capacity-sharing policy defined in the OLT and serve the different types of services to each ONU with the help of the vDBA process. Additionally, the MT-PON adopts the merging engine (ME) at the TC layer of the physical OLT, which works as a bridge between the vNOs (or vDBAs) and the physical OLT. Each vDBA gets the virtual dynamic bandwidth reports (vDBRu) from the merging engine, calculates the virtual bandwidth map (vBWmap) for its PON slice, and sends it to the ME. The ME then broadcasts a single phyBWmap to all ONUs in every DS cycle. Thus, the different vNOs can offer access services to different users (residential, commercial, and industrial) from the same ODN.
Fig. 2 shows different merging processes for the MT-PON network.

Fig. 2. ME types: (a) SS ME; (b) FLS type 1; (c) FLS type 2.

Fig. 2(a) shows the SS merging process based on [21], where each vNO accesses the full PON capacity of the upstream frame in its own scheduled time. It isolates each operator but suffers from high latency and bandwidth starvation for under-loaded vNOs. Fig. 2(b) and (c) show the two types of FLS merging processes [14]. The first, simplest FLS (type 1) creates a single physical merged bandwidth map for the DS frame, shown in Fig. 2(b), where no vNO can share its own residual bandwidth with another vNO, maintaining isolation and dedicated bandwidth capacity, which may lead to partial utilization of the upstream capacity. Fig. 2(c) shows the improved version of the FLS merging process (type 2), which assigns bandwidth as per demand and then reduces the bandwidth in reverse class priority in the merging engine to fit the physical bandwidth map.
In the MT sharing environment, a PON vendor can offer its services to many tenants at a lower cost than the scenario where each tenant would have laid its own PON infrastructure. The main drawback of the MT sharing environment is that if one tenant uses an inordinate amount of bandwidth, this could slow down performance for the other tenants. Therefore, in this paper, we focus on handling multiple tenants' requirements fairly and use an efficient algorithm for the merging engine to restrict the monopolization by a heavily loaded user.
Fig. 1. PON layout: (a) Traditional PON; (b) MT-PON.
3.2. Demonstration of classes of service in MT-PON

In MT-PONs, the ME distributes the physical network bandwidth on a per-vNO basis. Each vNO has its own bandwidth allocation scheme that allocates virtual bandwidth based on the requirements of the different traffic containers (T-CONTs). According to the ITU-T standards [23,24], there are five types of TCONTs, and each TCONT represents an individual traffic service with different QoS attributes and a unique identification, the Allocation ID (Alloc_ID). Table 1 shows the five different types of Quality of Service (QoS) for ITU-T-compliant PONs. These QoS thresholds are usually different for different types of services; thus, different traffic classes should be identified as high, medium, and low. Generally speaking, it is more necessary to ensure that strict latency criteria are met for high-priority traffic, with priorities ordered as T1 > T2 > T3 > T4 > T5.
As discussed in the above subsection, in MT-PON the ME must send a single upstream phyBWmap to all ONUs of all vNOs in each cycle. However, it is not necessary that each vDBA of each vNO assigns bandwidth to its clients in every cycle [25]. The typical TCONT reporting time is depicted in Fig. 3, which shows the timing of TCONT reports and grants. Each ONU sends the DBRu of each of its TCONT queues, also known as a TCONT report (R), to the OLT. The OLT extracts R and forwards it to the ME. The ME then makes a virtual TCONT report (vR) for the concerned ONU and forwards it to the vDBA of the specific vNO. When a vDBA receives these vRs, it assigns the required bandwidth to each TCONT of the concerned ONU, generates a vBWmap of virtual grants (vG), and sends it to the ME. The ME then merges all vBWmaps of all vDBAs into one phyBWmap and broadcasts it to all concerned ONUs.
Fig. 3. Time process diagram in MT-PON.

3.3. Proposed scheme for the merging engine in the MT-PON environment

We explain the working mechanism of the proposed merging scheme with the help of the flowchart shown in Fig. 4. The proposed scheme improves the PBMA scheme by utilizing three proposed algorithms. After the vDBA processes, the proposed scheme executes every DS cycle at the physical merging engine to create a physical merged bandwidth map. It receives the virtual bandwidth maps (vBWmaps) from each vDBA of a vNO and creates a single physical merged bandwidth map for the DS frame, as shown in Fig. 4. Similar to the PBMA scheme, the proposed scheme first checks the total grant size of the virtual bandwidth maps, i.e., the sum over all vBWmaps using Eq. (1); if the sum is less than the total available upstream bandwidth (PhyFBu), it appends all virtual bandwidth grants, without losing any of them, using Algorithm 1. In Fig. 4, the blue dashed square shows the steps of the reduction of low-priority traffic with the help of Algorithm 2: if the summed value is greater than the physical upstream bandwidth (PhyFBu), the scheme reduces the grant size of low-priority traffic and creates a new physical merged bandwidth map. It then checks again whether the merged (total) bandwidth size is still more than the original upstream bandwidth. If so, it further reduces the grant sizes of the next lower-priority traffic and regenerates the physical bandwidth map. The last step of the flowchart in Fig. 4 uses Algorithm 3: the merging-engine module allocates the upstream residual bandwidth (RBWu) via an extra TCONT to each client of each tenant if any RBWu remains after regenerating the physical bandwidth map.

Fig. 4. Flowchart of proposed Merging Scheme.

We assume that the physical upstream bandwidth (PhyFBu) is the total capacity of the merging engine of the MT-PON. Let I be the number of operators having virtual DBAs; each vNO(i) can obtain the full PhyFBu under the capacity-sharing policy. We assume that Eq. (1) must hold to strictly fulfil the service level agreement (SLA) of each vNO(i), where the merging engine receives a total grant size v_i from each vNO(i) and has PhyFBu total available physical upstream frame bytes:

∑_{i=1}^{I} v_i ≤ PhyFBu    (1)

Algorithm 1 runs inside the merging-engine module. It creates one physical bandwidth map (PhyBWmap) and appends all virtual bandwidth maps into it sequentially; it does not reduce any virtual bandwidth grant sizes and does not violate the TCONT priorities. Its inputs are the n vDBAs and their virtual bandwidth maps (v_i). Algorithm 1 starts at line 1, where it assigns zero to the variable k. Line 2 starts a foreach loop that scans each v_i. Line 3 assigns the total size of v_i to the variable Blen. Lines 4 to 8 append all virtual grant values of all vBWmaps into the one physical bandwidth map (PhyBWmap). Algorithm 1 repeats lines 2 to 9 until all virtual grant values are merged. The complexity of lines 2 to 9 is O(N); Algorithm 1 is a sequential merging process with linear time complexity.

Algorithm 1. The pseudo-code of the sequential merging process

Input: Number of vDBAs n ∈ (int); vBWMap (v_i) for i = 1 to n;
Output: Physical merged bandwidth map (PhyBWmap)
1  k ← 0  // position in BWmap
2  foreach v_i do
3    Blen ← length of v_i
4    for (z = 0; z < Blen; z++) do
5      PhyBWmap[k].ID ← vBWAlloc[z].ID
6      PhyBWmap[k].Size ← vBWMap[z].Size
7      k++
8    End for
9  End foreach
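Algorithm 1's sequential append can be sketched in Python (our own illustrative code, not the paper's implementation; we assume each vBWmap is simply a list of grant tuples):

```python
# Illustrative sketch of Algorithm 1 (sequential merging), assuming each
# vBWmap is a list of (alloc_id, size_bytes) grant tuples.

def merge_sequential(vbwmaps):
    """Append every grant of every vBWmap into one physical BWmap, in order,
    without changing any grant size (run only when Eq. (1) holds)."""
    phy_bwmap = []
    for vbwmap in vbwmaps:             # foreach v_i (lines 2-9)
        for alloc_id, size in vbwmap:  # lines 4-8: copy ID and Size
            phy_bwmap.append((alloc_id, size))
    return phy_bwmap

# Example: two vNOs' vBWmaps merged into one phyBWmap.
v1 = [("onu1-T2", 2500), ("onu1-T4", 1000)]
v2 = [("onu2-T2", 3000)]
print(merge_sequential([v1, v2]))
# [('onu1-T2', 2500), ('onu1-T4', 1000), ('onu2-T2', 3000)]
```

As in the pseudo-code, the cost is linear in the total number of grants.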
Algorithm 2 is a priority-based algorithm, a modified version of the procedure in Ref. [15]. It applies the TCONT priority scheme (T1 > T2 > T3 > T4): it gives the highest priority to the fixed (T1) and guaranteed (T2) traffic classes, the second-highest priority to the surplus traffic class (T3), and the last priority to the best-effort traffic class (T4). Algorithm 2 runs when the condition of Eq. (1) is false. It decreases the bandwidth grants of all overloaded vNOs. Following the ITU TCONT definitions [26], in lines 3 to 13 the merging engine first removes the best-effort traffic grants and generates a physical bandwidth map (PhyBWmap) containing the guaranteed and non-assured grant sizes. If the available physical bandwidth is still insufficient for the cumulative grants, lines 14 to 27 remove the non-assured grants as well. Algorithm 2 modifies the PBMA algorithm and does not store any unallocated upstream grant size for the next upstream frame. The complexity of lines 3–13 and of lines 14–27 is O(N).

Algorithm 2. The pseudo-code of the vBWmap reduction process

Input: Number of vDBAs n ∈ (int); vBWMap (v_i) for i = 1 to n; best effort = BE; non-assured = NA; PhyFBu = 155520.
Output: Physical merged bandwidth map (PhyBWmap) without low-priority traffic.
1   SumMV ← 0
2   k ← 0
3   foreach v_i
4     Blen ← length of v_i
5     for (z = 0; z < Blen; z++) do
6       if (vBWAlloc[z].ID ∉ BE) then
7         PhyBWmap[k].ID ← vBWAlloc[z].ID
8         PhyBWmap[k].Size ← vBWMap[z].Size
9         SumMV ← SumMV + vBWMap[z].Size
10      End if
11      k++
12    End for
13  End foreach
14  if (SumMV > PhyFBu) then
15    Clear PhyBWmap
16    k ← 0
17    foreach v_i
18      Blen ← length of v_i
19      for (z = 0; z < Blen; z++) do
20        if (vBWAlloc[z].ID ∉ {BE ∪ NA}) then
21          PhyBWmap[k].ID ← vBWAlloc[z].ID
22          PhyBWmap[k].Size ← vBWMap[z].Size
23        End if
24        k++
25      End for
26    End foreach
27  End if

On the other hand, Algorithm 3 in the proposed scheme solves the bandwidth starvation problem left by PBMA by distributing the unused bandwidth among the vBWmaps of the vDBAs in proportion to their bandwidth demand. The merging engine shares the unused upstream bandwidth (unAllocatedFBu, denoted RBWu below) among all operators using Eq. (2), where S_i represents the total share of each vDBA (v_i) from the unused bandwidth:

S_i = (v_i / ∑_{i=1}^{I} v_i) × RBWu    (2)

Algorithm 3 depends on the adaptive share/load of all the virtual bandwidth maps and allocates the residual bandwidth to each client of each vNO. For simplicity, we take only two virtual bandwidth maps from two virtual DBAs in the MT-PON context. We use Eq. (2) to calculate the adaptive share of each vDBA from the unallocated upstream bandwidth. Line 2 of Algorithm 3 uses Eq. (2) to determine how large the share of each vDBA is. It then redistributes the residual upstream bandwidth to the corresponding ONUs of each vDBA from its share value using one extra TCONT, such as TCONT-5 (lines 4 to 8). In every cycle, Algorithm 3 is used at the last stage in both cases, i.e., after Algorithm 1 and after Algorithm 2. The complexity of lines 1–9 is O(N).

Algorithm 3. The pseudo-code of the load adaptive merging allocation process

Input: last position number in PhyBWmap = k; Number of vDBAs n ∈ (int); vBWMap (v_i) for i = 1 to n; total number of ONUs of each v_i = z.
Output: insert RBWu into PhyBWmap as per the load of v_i for its clients.
1  foreach v_i do
2    calculate share (S) of v_i by Eq. (2)
3    ρ ← S / z
4    for (x = 0; x < z; x++) do
5      PhyBWmap[k].IDExtra ← ID of ONU(x)
6      PhyBWmap[k].SizeExtra ← ρ for ONU(x)
7      k++
8    End for
9  End foreach
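A compact sketch of the reduction and load-adaptive redistribution steps might look as follows. This is our own illustrative code, not the paper's implementation: the grant representation, the treatment of each grant as one client, and the function names are all assumptions made for the example.

```python
# Illustrative sketch of Algorithms 2 and 3, assuming each grant is a
# (alloc_id, tcont_class, size_bytes) tuple, with classes "BE" (best effort)
# and "NA" (non-assured) marking the low-priority traffic.

PHY_FBU = 155520  # upstream frame bytes, as in the paper's setup

def reduce_vbwmaps(vbwmaps):
    """Algorithm 2 sketch: drop BE grants first; if the merged map still
    overflows the frame, drop NA grants as well."""
    for excluded in ({"BE"}, {"BE", "NA"}):
        phy = [(aid, cls, size)
               for vmap in vbwmaps
               for aid, cls, size in vmap
               if cls not in excluded]
        if sum(size for _, _, size in phy) <= PHY_FBU:
            break
    return phy

def share_residual(vbwmaps, residual):
    """Algorithm 3 sketch (Eq. (2)): split the residual bandwidth among the
    vDBAs in proportion to their total demand, then spread each share evenly
    over that vDBA's grants (treated here as one grant per client)."""
    demands = [sum(size for _, _, size in vmap) for vmap in vbwmaps]
    total = sum(demands)
    extra = []
    for vmap, demand in zip(vbwmaps, demands):
        share = demand / total * residual   # S_i = (v_i / sum v_i) * RBWu
        per_client = share / len(vmap)      # rho, spread over the clients
        extra += [(aid, "T5", per_client) for aid, _, _ in vmap]
    return extra
```

Note how `share_residual` hands out the leftover bytes via the extra TCONT-5 grants, so an under-loaded vNO still receives a slice proportional to its demand instead of starving.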
The bandwidth demands are first satisfied with the grants assigned by the vDBAs using Algorithm 2; consequently, the assigned bandwidth of a vDBA may be reduced. Following on, Algorithm 3 is executed to utilize the unused bandwidth in an adaptive manner for the additional grants. The proposed Algorithm 3 also addresses the bandwidth starvation problem (a defect of the PBMA scheme) described in Section 2. The comparison can be explained as follows. Assume we have two vBWmaps, vBWmap1 and vBWmap2, and the ME gets both simultaneously with the same priority (p). The PBMA scheme would assign bandwidth to either vBWmap1 or vBWmap2 because of the equal priority. If vBWmap1 gets a physical time slot before vBWmap2, then vBWmap2 gets a chance only if there is residual bandwidth or another empty slot available. As a result, either vBWmap1 or vBWmap2 suffers due to a lack of bandwidth in the current cycle.

Table 1
ITU-T traffic classes (TCONTs).

TCONT | Bandwidth | Applications
1 | Fixed | Constant Bit Rate (CBR)
2 | Assured | Video and Voice (Multimedia)
3 | Non-Assured | Variable Bit Rate (VBR)
4 | Best Effort | Background (www)
5 | Non-Assured | Mix of All Applications

4. Simulation setup

The simulation framework follows the ITU-T XGS-PON standard [27] and is implemented as a common infrastructure for the vNOs (see Table 1). We use OMNeT++ 5.5 to evaluate the proposed work in simulation, with parameter settings similar to Ref. [15]. We analyze the performance of the proposed LAMA algorithm by comparing it against the PBMA algorithm. Table 2 lists the key simulation parameters. A single merging engine connects two vNOs, and each vNO provides services to 32 ONUs. Each ONU maintains four priority-based traffic queues, one for each transmission container (TCONT). To simulate a 20 km distance range between the physical OLT and the ONUs, we set a 210 μs RTT value. The upstream line rate per ONU is set to 200 Mbps, which means a maximum of 3125 bytes is assigned to each ONU in each upstream slot of 125 μs. For the bandwidth distribution, our MT-PON testbed follows the earlier PON studies [28]. Therefore, we have configured ABmin1 = 2500 bytes and SImax1 = 8, which amounts to 20 Mbps (10%) for TCONT1 traffic.
We used ABmin2 = 5625 bytes with SImax2 = 4, which corresponds to 90 Mbps (45%) for TCONT2 traffic. We assigned ABmin3 = ABsur3 = 5625 bytes with SImin3 = SImax3 = 8 for TCONT3 traffic, which amounts to 45 Mbps guaranteed (22.5%) and 45 Mbps non-assured bandwidth (22.5%). We assigned ABsur4 = 12500 bytes with SImin4 = 8 for TCONT4 traffic, which results in a bandwidth reservation of 100 Mbps (50%) on a best-effort basis. We use the CATV frame-size distribution for generating the traffic load, with packet sizes ranging from Smin = 64 bytes to Smax = 1518 bytes. Each ONU has its own instance of the traffic generator, as described in Ref. [29]. We employed on-off self-similar traffic adopted directly from Ref. [30] with a Pareto distribution, using shape parameters (α) of 1.4 and 1.2 for the on and off periods, respectively, and calculated the Hurst parameter H, with H ∈ [0, 1]. The relationship between the Hurst and shape parameters is H = (3 − α)/2. We also employed a Poisson distribution with exponentially varying inter-arrival times for the traffic frames. The traffic arrival rate parameter (λ) per ONU is calculated using Eq. (3) for a selected load, and Eq. (4) represents the send interval, also called the inter-arrival time (IAT):

λ_ONU = (NetworkTrafficLoad × LineRate) / (NumberOfONUs × AvgPktSize)    (3)

sendInterval = e^(−λ)    (4)

Table 2
Simulation parameters.

Simulation Parameters | Values
Line rate per ONU | 200 Mbps
Round Trip Time (RTT) | 210 μs
US & DS | 10/10 (Gbps)
Max US link capacity of the frame | 155,520 Bytes
Network traffic load | 0.06 to 1.04
Average frame size | CATV frame size as in [31]
OLT : ONU : vNO | 1 : 64 : 2
Buffer size in ONU | 1 Mbytes
ME algorithm | PBMA and LAMA
Traffic type | Self-similar and Poisson
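The traffic-generation parameters above can be illustrated with a short sketch (our own code, not the paper's OMNeT++ model; we implement Eq. (3) directly and draw exponential inter-arrival times with mean 1/λ, the conventional sampling for Poisson arrivals):

```python
# Sketch of the Poisson traffic parameters: Eq. (3) for the per-ONU arrival
# rate, plus exponential inter-arrival sampling for the send interval.
import random

def arrival_rate_per_onu(load, line_rate_bps, num_onus, avg_pkt_bits):
    """Eq. (3): packet arrival rate lambda per ONU for a selected load."""
    return load * line_rate_bps / (num_onus * avg_pkt_bits)

def poisson_interarrival(lam, rng=random):
    """One inter-arrival time for Poisson arrivals (exponential, mean 1/lam)."""
    return rng.expovariate(lam)

# Example with assumed values: load 0.5, 10 Gbps shared upstream, 64 ONUs,
# 1518-byte average frames.
lam = arrival_rate_per_onu(0.5, 10e9, 64, 1518 * 8)
print(lam, poisson_interarrival(lam))
```

The on-off self-similar source would instead draw its on and off period lengths from Pareto distributions with the shape parameters given above.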
Fig. 5. TCONT1 Delay(s) Vs. Traffic Load. We assumed video traffic as the
assured/guaranteed required bandwidth traffic. Fig. 6 presents the US delay Vs.
Traffic load in the case of TCONT2 with different traffic models.
(3)
(4)
5. Results and discussion

To compare the two sharing-engine algorithms, we simulated both under self-similar and Poisson traffic. The performance of the algorithms is assessed in terms of: 1) US delays for the fixed-bandwidth (TCONT1/T1), assured-bandwidth (TCONT2/T2), surplus-bandwidth (TCONT3/T3), and best-effort (TCONT4/T4) types, and 2) network throughput in Gbps. Self-similar traffic models bursty Internet traffic, under which subscribers need more bandwidth; it is therefore necessary to evaluate the effectiveness of the merging algorithms in the MT-PON environment under both traffic scenarios (self-similar and Poisson).

5.1. Analysis of US delays

We considered TCONT1 for voice traffic as the fixed-bandwidth class requiring a constant bit rate (CBR). It does not depend on the DBA algorithm, since it always receives a fixed bandwidth and never obtains any excess bandwidth. Therefore, the performance of TCONT1 shows a similar pattern with all DBA schemes, and its delay is stable and almost linear in all traffic models, as depicted in Fig. 5. Due to the bursty nature of self-similar traffic, the burst size rises and reaches a maximum of about 48 packets per burst from load 0.2 upwards; consequently, the TCONT1 delay under self-similar traffic is slightly higher than under the Poisson model.

Fig. 6. TCONT2 Delay(s) vs. Traffic load.

Fig. 6 shows that the T2 US delay under self-similar traffic increases with network traffic load. The self-similar delay of the PMBA algorithm grows as the load increases; the self-similar delay of the LAMA algorithm also grows, but remains below that of PMBA. The T2 delay of PMBA is 64% higher at low load than that of LAMA, and at the highest load (1.04) the delay of LAMA is 21% lower than PMBA under self-similar traffic. Under Poisson traffic, Fig. 6 shows an almost constant TCONT2 delay, as the traffic load of all ONUs increases with a similar pattern. A slight variation appears in the US delay of LAMA for TCONT2, but its delay stays below that of PMBA, which exhibits about 70% more delay. This is because PMBA uses a priority-based allocation scheme without any flexible bandwidth mechanism, which leads to higher delay. These results show that the proposed LAMA algorithm is the better bandwidth allocation scheme and leads to lower delay in the MT-PON.

Fig. 7. TCONT3 Delay(s) vs. Traffic load.

Fig. 7 shows the comparative analysis of LAMA and PMBA for the TCONT3 traffic delay(s). Both schemes perform almost identically, with a difference of 11% at high self-similar load, and the self-similar delay is higher than the Poisson delay for both algorithms. However, LAMA outperforms PMBA under Poisson traffic, achieving about 40% lower delay. The situation is different for the self-similar source, which generates bursts of traffic through the Pareto on-off periods: more packets are generated, and the delay is higher than in the Poisson scenario. All vDBAs assign their total bandwidth to the bursty higher-priority traffic, and when the merging-engine algorithm first verifies the share of every vDBA, it grants slots to the priority traffic. LAMA is an adaptive allocation scheme: after sequential bandwidth merging and allocation, it assigns the unallocated slots according to each tenant's share/load in each vDBA. Therefore, the LAMA algorithm has a lower delay compared to the PMBA algorithm.

Fig. 8 shows the network performance for TCONT4 under both traffic scenarios. With LAMA, each scenario achieves a lower delay than with PMBA, because LAMA's adaptive share of the unallocated bandwidth helps reduce the upstream delay. Fig. 8 also shows that as the load approaches saturation under self-similar traffic, the delays of PMBA and LAMA become similar at high load in the MT-PON. Under Poisson traffic, LAMA (green) stays about 26% below PMBA (magenta) at high load. LAMA outperforms because the tenants' time shares are not equal: a lightly loaded tenant gives a heavily loaded tenant the chance to utilize more bandwidth, reducing the overall delay of the network.

5.2. Analysis of Network Throughput (Gbps)

Network throughput is the total amount of data successfully delivered per unit time. Fig. 9 demonstrates that the proposed LAMA algorithm can allocate more bandwidth to the different traffic sources of the tenants' clients. In the simulation, we ran two GIANT DBA instances to allocate bandwidth resources within each tenant. The PMBA algorithm adopts a strict priority scheme in the merging engine, meaning that higher-priority traffic is allocated slots and bandwidth first in the physical bandwidth map; as a result, some bandwidth, the unallocated bandwidth, may remain unassigned to tenant clients in each cycle. LAMA improves on the strict priority scheme by adopting an adaptive-share concept for this unallocated bandwidth among the tenants. Fig. 9 confirms that the load adaptive merging scheme (LAMA) achieves better throughput than the strict-priority merging scheme (PMBA) under the different traffic sources and loads. LAMA achieves 8.66 and 6.0 Gbps at the highest load under the Poisson and self-similar scenarios, respectively, about 22–35% more throughput than PMBA in the MT-PON.
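The load-adaptive redistribution of unallocated slots described above can be sketched as follows. This is an illustrative toy model of the merging-engine idea, not the paper's implementation: the slot counts, the two-pass structure, and the function name `merge_bwmaps` are assumptions for exposition only.

```python
def merge_bwmaps(grants, loads, capacity):
    """Merge per-tenant vBWmap grants into one physical bandwidth map.

    grants:   slots each tenant's vDBA granted, in priority order
    loads:    each tenant's current load, used as its adaptive-share weight
    capacity: total upstream slots available in one cycle
    """
    alloc, remaining = [], capacity
    # Pass 1: sequential, priority-ordered merging, as in a
    # strict-priority merging engine (PMBA-like behavior).
    for g in grants:
        granted = min(g, remaining)
        alloc.append(granted)
        remaining -= granted
    # Pass 2 (load-adaptive step): hand the unallocated slots back to the
    # tenants in proportion to their load instead of leaving them unused.
    total_load = sum(loads)
    if remaining > 0 and total_load > 0:
        alloc = [a + (remaining * w) // total_load
                 for a, w in zip(alloc, loads)]
    return alloc
```

For example, with a 100-slot cycle, vDBA grants of 30/20/10 slots, and load weights 3/2/1, the 40 leftover slots are split 20/13/6, so the merged map allocates 50/33/16 slots rather than leaving 40% of the cycle unused as a strict-priority pass would.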
6. Conclusion

This study improved the strict priority scheme of the merging engine by proposing a load adaptive share scheme for the merging engine in the MT-PON environment. The merging engine works on the standard MAC layer of XGS-PON and allows multiple vNOs to build the MT-PON. We compared the proposed LAMA scheme with the existing PMBA scheme in terms of upstream delay, throughput, and bandwidth utilization ratio under various traffic sources and loads. In the simulation results, the proposed LAMA performed better than the existing PMBA scheme in the MT-PON environment; at high load, it achieves almost the same US delay performance as PMBA under self-similar traffic. The LAMA method enables each vNO to adopt a tailored share for its subscribers, which is nearly impossible to achieve with the current PMBA algorithm in an MT-PON network.
The proposed scheme will support the integrated virtualization-based PON architectures that enable multi-tenancy (MT) beyond 5G, and future research can focus on its applications in this direction. In addition, work on optimal solutions that reduce the energy consumption of MT-PON environments would save energy-related CAPEX and promote the MT-PON environment, especially for 50G PON networks.
Fig. 8. TCONT4 Delay(s) vs. Traffic load.

Fig. 9. Throughput (Gbps) vs. Traffic load.

Author statement

This is to certify that all the authors of this manuscript have been informed of and approved the final version of this manuscript being submitted. No author has any objection regarding the submission and consideration for publication of this manuscript in this journal.
Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

The authors received no financial support for this work.

References

[1] K. Ali, et al., Traffic-adaptive inter wavelength load balancing for TWDM PON, IEEE Photonics J. (2019).
[2] S. Das, F. Slyne, A. Kaszubowska, M. Ruffini, Virtualized EAST-WEST PON architecture supporting low-latency communication for mobile functional split based on multiaccess edge computing, J. Opt. Commun. Netw. 12 (10) (2020) D109–D119.
[3] H. Wang, P. Torres-Ferrera, V. Ferrero, A. Pagano, R. Mercinelli, R. Gaudino, Current trends towards PON systems at 50+ Gbps, Proc. 2020 Ital. Conf. Opt. Photonics (ICOP) (2020) 1–4.
[4] K.A. Memon, et al., A bibliometric analysis and visualization of passive optical network research in the last decade, Opt. Switch. Netw. 39 (2020) 100586.
[5] B. Cornaglia, G. Young, A. Marchetta, Fixed access network sharing, Opt. Fiber Technol. 26 (Dec. 2015) 2–11.
[6] T.E. Darcie, N. Barakat, P.P. Iannone, K.C. Reichmann, Wavelength sharing in WDM passive optical networks, Opt. Transm. Switch. Subsystems VI 7136 (Nov. 2008) 71361I.
[7] J. Rendon Schneir, Y. Xiong, Economic implications of a co-investment scheme for FTTH/PON architectures, Telecommun. Pol. 37 (10) (Nov. 2013) 849–860.
[8] J. Schneir, Y. Xiong, Cost analysis of network sharing in FTTH/PONs, IEEE Commun. Mag. 52 (8) (Aug. 2014) 126–134.
[9] L. Tong, C. Gan, W. Xie, X. Wang, Priority-based mode transformation algorithm in a multisystem-based virtual passive optical network, Opt. Switch. Netw. 31 (2019) 162–167.
[10] A. Otaka, et al., Fairness-aware dynamic sub-carrier allocation in distance-adaptive modulation OFDMA-PON for elastic lambda aggregation networks, J. Opt. Commun. Netw. 9 (7) (Jul. 2017) 616–624.
[11] K.H. Mohammadani, M. Hossen, R.A. Butt, K.A. Memon, M.M. Shaikh, ONU migration using network coding technique in virtual multi-OLT PON architecture, Opt. Fiber Technol. 68 (Jan. 2022) 102788.
[12] A.S. Thyagaturu, A. Mercian, M.P. McGarry, M. Reisslein, W. Kellerer, Software defined optical networks (SDONs): a comprehensive survey, IEEE Commun. Surv. Tutorials 18 (4) (2016).
[13] L. Tong, C. Gan, X. Wang, (Max-min model)-based mode-matched algorithm among subsystems in multisystem-based virtual passive optical network, Opt. Switch. Netw. 35 (Jan. 2020) 100538.
[14] A. Elrasad, M. Ruffini, Frame level sharing for DBA virtualization in multi-tenant PONs, Proc. 21st Int. Conf. Opt. Netw. Des. Model. (ONDM) (2017) 1–6.
[15] M. Ruffini, A. Ahmad, S. Zeb, N. Afraz, F. Slyne, Virtual DBA: virtualizing passive optical networks to enable multi-service operation in true multi-tenant environments, J. Opt. Commun. Netw. 12 (4) (2020) B63–B73.
[16] H. Khalili, D. Rincón, S. Sallent, Towards an integrated SDN-NFV architecture for EPON networks, Lect. Notes Comput. Sci. 8846 (2014) 74–84.
[17] S.S.W. Lee, K.Y. Li, M.S. Wu, Design and implementation of a GPON-based virtual OpenFlow-enabled SDN switch, J. Lightwave Technol. 34 (10) (2016) 2552–2561.
[18] L. Peterson, et al., Central office re-architected as a data center, IEEE Commun. Mag. 54 (10) (2016) 96–101.
[19] S. Das, From CORD to SDN enabled broadband access (SEBA) [invited tutorial], J. Opt. Commun. Netw. 13 (1) (2021) A88–A99.
[20] N. Afraz, F. Slyne, M. Ruffini, Full PON virtualisation supporting multi-tenancy beyond 5G, Opt. InfoBase Conf. Pap. (Oct. 2019) 7–9.
[21] C. Li, W. Guo, W. Wang, W. Hu, M. Xia, Bandwidth resource sharing on the XG-PON transmission convergence layer in a multi-operator scenario, J. Opt. Commun. Netw. 8 (11) (2016) 835–843.
[22] N. Slamnik-Kriještorac, H. Kremo, M. Ruffini, J.M. Marquez-Barja, Sharing distributed and heterogeneous resources toward end-to-end 5G networks: a comprehensive survey and a taxonomy, IEEE Commun. Surv. Tutorials 22 (3) (2020) 1592–1628.
[23] ITU-T, 10-Gigabit-capable passive optical networks (XG-PON): general requirements, Recommendation ITU-T G.987.1, 2016.
[24] K.A. Memon, et al., Dynamic bandwidth allocation algorithm with demand forecasting mechanism for bandwidth allocations in 10-gigabit-capable passive optical network, Optik 183 (2019) 1032–1042.
[25] Broadband Forum, TR-402: Functional model for PON abstraction interface, Oct. 2018, pp. 1–26.
[26] ITU-T, 10-Gigabit-capable passive optical networks (XG-PON): transmission convergence (TC) layer specification, Recommendation ITU-T G.987.3 (2014) 1–146.
[27] ITU-T, 10-Gigabit-capable symmetric passive optical network (XGS-PON), 2016.
[28] K.H. Mohammadani, R.A. Butt, K.A. Memon, S. Faizullah, Saifullah, M. Ishfaq, Bayesian auction game theory-based DBA for XG symmetrical PON, Opt. Quant. Electron. 54 (5) (May 2022) 316.
[29] K.H. Mohammadani, R.A. Butt, K.A. Memon, F. Hassan, Highest cost first-based QoS mapping scheme for fiber wireless architecture, Photonics 7 (4) (2020) 1–20.
[30] G. Kramer, B. Mukherjee, A. Maislos, Ethernet passive optical networks, 2005.
[31] R.A. Butt, M.W. Ashraf, Y. Anwar, M. Anwar, Receiver on time optimization for watchful sleep mode to enhance energy savings of 10-gigabit passive optical network, Sep. 2018.