International Journal of Application or Innovation in Engineering & Management (IJAIEM)
Web Site: www.ijaiem.org Email: editor@ijaiem.org, editorijaiem@gmail.com
Volume 3, Issue 2, February 2014
ISSN 2319 - 4847
QoS provisioning for scalable wireless video streaming
Mr. Raviraj B. Mete 1, Prof. A. R. Sarkar 2
1 BE degree in Information Technology from Shivaji University, Kolhapur, Maharashtra, India
2 BE degree in Computer Science from BAMU, Maharashtra, India & MTech in Computer from Walchand College, Sangli, Maharashtra, India
ABSTRACT
With the increasing demand for multimedia information on wireless broadband networks, scalable real-time video streaming has strict quality-of-service (QoS) requirements. However, wireless channels are unreliable and their bandwidth varies over time, which may degrade video quality. This paper investigates end-to-end QoS provisioning for scalable video streaming traffic while extending wireless network coverage, and studies how to deploy access points (APs) to improve throughput and QoS when multihop communication is used.
The scalability issues are addressed through frequency planning, an analytical model, and an optimization approach, together with guidelines for network planning. We develop a frequency plan to improve network capacity, an analytical model to evaluate network parameters, and an optimization approach for determining the optimal separation distances between APs. A complete set of simulation scenarios, involving eight different video sequences and two different scalable encoders, demonstrates the quality gains of scalable video coding (SVC).
Keywords: Quality of Service (QoS), Network Parameters, SVC (Scalable Video Coding).
1. INTRODUCTION
Nowadays, the use of multimedia on the World Wide Web (WWW) is commonplace, and wireless broadband video communication is of great interest at both the organizational and the individual level. However, real-time video delivery has QoS requirements, e.g., on bandwidth, delay, throughput, and error rate.
Video streaming imposes three main requirements. First, video transmission usually has a minimum bandwidth requirement (e.g., 28 kb/s) to achieve acceptable presentation quality. Second, real-time video has strict delay constraints (e.g., 1 second), because real-time video must be played out continuously; if a video packet does not arrive in time, the play-out process pauses, which is annoying to viewers. Third, video applications typically impose upper limits on the bit error rate (BER) (e.g., 1%), since too many bit errors would seriously degrade the video presentation quality. However, the unreliability and bandwidth fluctuations of wireless channels can cause severe degradation of video quality.
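For illustration only, the following Python sketch encodes the three example thresholds quoted above (28 kb/s minimum bandwidth, a 1 s delay bound, and a 1% BER ceiling) as a simple requirement check; the names and values are ours, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class QoSRequirement:
    # Example thresholds quoted in the text, not normative values.
    min_bandwidth_bps: float = 28_000   # minimum acceptable bit rate
    max_delay_s: float = 1.0            # real-time play-out deadline
    max_ber: float = 0.01               # upper limit on bit error rate

def meets_qos(req: QoSRequirement, bandwidth_bps: float, delay_s: float, ber: float) -> bool:
    """Return True only if all three measured values satisfy the requirement."""
    return (bandwidth_bps >= req.min_bandwidth_bps
            and delay_s <= req.max_delay_s
            and ber <= req.max_ber)

req = QoSRequirement()
print(meets_qos(req, bandwidth_bps=64_000, delay_s=0.4, ber=0.002))   # True
print(meets_qos(req, bandwidth_bps=20_000, delay_s=0.4, ber=0.002))   # False: bandwidth too low
```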
To meet these requirements, we can use scalable video coding, which compresses a raw video sequence into multiple substreams, each representing a different quality, image size, or frame rate. Scalable video is employed in wireless environments primarily for the following reasons. First, scalable video has been shown to cope gracefully with bandwidth variability [1], [2], [3]. Second, scalable video representation is a good solution to the heterogeneity problem in the multicast case [2], [3].
Beyond video transmission quality, the next objective of our work is to provide reliable broadband communication by deploying access points along the main streets, making the network scalable, as in [4]. The APs are connected via wireless links to facilitate deployment. Moreover, the APs are grouped into clusters. In each cluster, only the central AP has a wireline connection to the Internet, whereas the other APs relay the neighboring APs' traffic toward/from the central AP. Multihop communication can extend coverage through more hops and longer hop distances, and maximizing the coverage of each AP reduces the infrastructure cost. However, a longer hop distance also lowers the data rate on the relay link between APs, and the increased collisions caused by more users within a larger AP coverage area further lower user throughput. Therefore, while multihop communication is used to extend coverage, how to improve throughput and QoS is a major concern. In the literature, AP deployment issues for wireless local area networks (WLANs) are studied in [7]–[10]. In [7], an optimization approach is applied to minimize the area with poor signal quality. The authors in [8] and [9] develop optimization algorithms to minimize the bit error rate. In [14], an optimal AP deployment problem is formulated that aims to minimize the maximum channel utilization in order to achieve load balancing. In [7]–[10], all the APs are connected via cables. The performance of WMNs (wireless mesh networks) is investigated in [1], [3], [5], [6], and [11]–[13]. The simulation results in [5] and the experimental results in [6] show that even if only one user is sending data, the user throughput drops rapidly as the number of hops increases from one and then stabilizes at a very low value as the number of hops becomes larger. The authors in [11] and [12] point out that
with k users in an ad-hoc network, the per-user throughput scales like O(1/√(k log k)).
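As a purely numerical illustration of this scaling law (with the unknown constants normalised away), the sketch below tabulates how the per-user share shrinks as k grows.

```python
import math

def relative_per_user_throughput(k: int) -> float:
    """Per-user throughput scaling ~ 1/sqrt(k log k) from [11], [12],
    normalised so that k = 2 gives 1.0 (absolute constants are unknown here)."""
    reference = 1.0 / math.sqrt(2 * math.log(2))
    return (1.0 / math.sqrt(k * math.log(k))) / reference

for k in (2, 10, 50, 100, 500):
    print(f"k = {k:4d}  relative per-user throughput ~ {relative_per_user_throughput(k):.3f}")
```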
Fig 1: System architecture
As shown in Fig. 1, we suggest allocating a different channel to each AP.
The work in [1] studies the throughput and coverage of WMN in a single-user case. In [13], the tradeoff between
throughput and coverage for a multiuser WMN is investigated. This simple frequency planning can reduce contention
collisions to improve throughput and make the WMN more scalable. The developed model considers a practical case
where all nodes have bidirectional asymmetric traffic load and operate in the unsaturated situation. That is, the node does
not always have a frame waiting for transmission. We develop an analytical model to determine the tradeoffs between network parameters under the CSMA MAC protocol. On top of the analytical model, we use an optimization approach for
determining the optimal number of APs in a cluster and the optimal separation distances between APs. The objective
aims to maximize the profit function defined as the ratio of the capacity to the total cost for deploying a cluster of APs
with the QoS requirement. To provide a useful guideline for network planning, we compare the uniform-spacing and
increasing-spacing AP deployment strategies. In the uniform-spacing strategy (see Fig. 1), all APs have the same separation distance. In the increasing-spacing strategy (see Fig. 2), the separation distances between APs increase
from the central AP to the outer APs. We respectively determine the optimal deployment parameters for these strategies
and compare their performances.
Fig 2: Cluster of APs; only one side of the cluster is shown since the APs on the other side are deployed symmetrically.
2. METHODOLOGY
The detailed methodology with respect to each objective is given below:
2.1 Increasing- Spacing Deployment Strategy
In the increasing-spacing strategy, the separation distances of the APs follow the order d1 < d2 < · · · < dn; that is, the heavier the load of an AP, the shorter its separation distance. The optimal deployment parameters can be determined by solving the following optimization problem: maximize (capacity of a cluster of APs) / (total cost for deploying a cluster of n APs).
2.2 Uniform-Spacing Deployment Strategy
According to the uniform-spacing strategy, all the APs have the same separation distance, i.e., di = d.
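The following Python sketch illustrates the shape of this optimization under both spacing strategies. The capacity and cost functions are placeholders of our own, not the paper's analytical model; they only encode the qualitative assumption that a longer separation distance lowers the per-cell capacity.

```python
import itertools

# Toy stand-ins for the paper's analytical model: the capacity of one AP's cell
# falls as its separation distance grows (longer hop -> lower PHY rate), and the
# deployment cost is one wired (central) AP plus n relay APs on the modelled side.
def cell_capacity_mbps(separation_m: float) -> float:
    return max(1.0, 54.0 - 0.08 * separation_m)

AP_COST = 1.0          # relative hardware/installation cost per AP
WIRELINE_COST = 3.0    # relative cost of the central AP's wired backhaul

def profit(separations: tuple) -> float:
    """Ratio of cluster capacity to total deployment cost (the objective above)."""
    capacity = sum(cell_capacity_mbps(d) for d in separations)
    cost = WIRELINE_COST + AP_COST * (1 + len(separations))
    return capacity / cost

candidate_d = (100, 150, 200, 250, 300)     # candidate separation distances (m)
best = None
for n in range(1, 5):                       # number of relay APs considered
    uniform = [(d,) * n for d in candidate_d]                    # d1 = ... = dn
    increasing = list(itertools.combinations(candidate_d, n))    # d1 < ... < dn
    for tag, layouts in (("uniform", uniform), ("increasing", increasing)):
        for sep in layouts:
            if best is None or profit(sep) > best[0]:
                best = (profit(sep), tag, sep)

p_best, tag_best, sep_best = best
print(f"best profit {p_best:.2f} with {tag_best} spacing {sep_best}")
```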
2.3 Throughput Analysis
From the viewpoint of a particular node, there are five types of channel activities in this wireless network, including the
following:
1) successful frame transmission;
2) unsuccessful frame transmission;
3) empty slot, where all nodes are in backoff or idle;
4) successful frame transmission from other nodes;
5) unsuccessful frame transmission from other nodes.
The channel activities can be described by a sequence of activity time slots.
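A slot-level Monte Carlo sketch of this accounting is given below; the activity probabilities and slot durations are arbitrary placeholders, and the tagged node's normalized throughput is simply the fraction of channel time occupied by its own successful transmissions.

```python
import random

# Slot-level bookkeeping for the five channel activities listed above, seen from
# the viewpoint of one tagged node.  The probabilities and slot durations are
# illustrative placeholders, not values derived from the paper's model.
ACTIVITIES = {
    "own_success":     {"prob": 0.10, "duration": 1.2, "useful": True},
    "own_collision":   {"prob": 0.02, "duration": 1.2, "useful": False},
    "empty_slot":      {"prob": 0.40, "duration": 0.1, "useful": False},
    "other_success":   {"prob": 0.40, "duration": 1.2, "useful": False},
    "other_collision": {"prob": 0.08, "duration": 1.2, "useful": False},
}

def normalized_throughput(n_slots: int = 100_000) -> float:
    """Fraction of channel time occupied by the tagged node's successful frames."""
    names = list(ACTIVITIES)
    weights = [ACTIVITIES[name]["prob"] for name in names]
    useful_time = total_time = 0.0
    for _ in range(n_slots):
        act = ACTIVITIES[random.choices(names, weights)[0]]
        total_time += act["duration"]
        if act["useful"]:
            useful_time += act["duration"]
    return useful_time / total_time

print(f"normalized throughput ~ {normalized_throughput():.3f}")
```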
2.4 Analysis of Wireless Access Link Capacity Between AP and User
In the cell of APi, there are K = 1 + liDM nodes contending for the channel. All the users can be considered to have the same uplink frame arrival rate and service rate.
2.5 Analysis of Wireless Relay Link Capacity Between APs
In the relay link, APi−1 with downlink traffic and APi with uplink traffic contend for the channel. The transmission PHY mode ma for APi−1 and APi is determined by the separation distance di between APi−1 and APi.
2.6 Delay Analysis
The p-persistent CSMA MAC protocol is considered, which closely approximates the standard protocol when the average backoff times are the same. In addition, the parameter p is equal to the transmission probability Tk of a busy node. We consider the queue for the uplink relay traffic of APi. The uplink relay frame and the uplink access frame may arrive concurrently in an activity slot.
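The sketch below simulates a saturated p-persistent CSMA channel to show how the transmission probability p trades success, collision, and idle slots; the node count and p values are illustrative only.

```python
import random

# Minimal p-persistent CSMA sketch: in every slot each of n saturated nodes
# transmits with probability p (the transmission probability Tk mentioned above).
# Exactly one transmitter means success; two or more means a collision.
def simulate_p_persistent(p: float, n_nodes: int, n_slots: int = 200_000):
    success = collision = idle = 0
    for _ in range(n_slots):
        transmitters = sum(random.random() < p for _ in range(n_nodes))
        if transmitters == 1:
            success += 1
        elif transmitters > 1:
            collision += 1
        else:
            idle += 1
    return success / n_slots, collision / n_slots, idle / n_slots

# The simulated success rate should match the closed form n*p*(1-p)**(n-1),
# which is maximised near p = 1/n.
for p in (0.05, 0.10, 0.20):
    s, c, i = simulate_p_persistent(p, n_nodes=10)
    print(f"p={p:.2f}  success={s:.3f}  collision={c:.3f}  idle={i:.3f}")
```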
2.7 Performance Evaluation
We evaluate the performance by comparing the increasing-spacing and uniform-spacing strategies, by studying the interaction among network parameters, and by model validation. We consider an IEEE 802.11a WLAN with bidirectional asymmetric traffic for each user. The analytical model is validated by simulations. In the simulations, the legacy IEEE 802.11 CSMA protocol with the binary exponential backoff scheme is used. We consider a general case where k' users and one AP contend for the channel, with asymmetric traffic for each user. In addition, we assume that the time between frame generations in each node is exponentially distributed. The quality of the streamed video is also evaluated with respect to transmission losses.
A) Scalable Video Coding
A scalable video coding scheme compresses a raw video sequence into multiple substreams. One of the compressed substreams is the base substream, which can be decoded independently and provides coarse visual quality; the other compressed substreams are enhancement substreams, which can only be decoded together with the base substream and provide better visual quality; the complete bit-stream (i.e., the combination of all the substreams) provides the highest quality. Specifically, compared with decoding the complete bit-stream, decoding the base substream or a subset of the substreams produces pictures with degraded quality (Fig. 3(b)), a smaller image size (Fig. 3(c)), or a lower frame rate (Fig. 3(d)). Scalable video coding schemes have found a number of applications. For video applications over the Internet, scalable coding can assist rate control during network congestion [66]; for web browsing of a video library, scalable coding can generate a low-resolution video preview without decoding the full-resolution picture [16]; for multicast applications, scalable coding can provide a range of picture quality suited to the heterogeneous requirements of receivers (as shown in Fig. 4) [3].
Fig. 3: Scalable video: (a) video frames reconstructed from the complete bit-stream; (b) video frames with degraded quality; (c) video frames with a smaller image size; (d) video frames with a lower frame rate.
Fig. 4: Multicast for layered video
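The layered-multicast idea in Fig. 4 can be sketched as a receiver-driven selection rule: each receiver keeps the base substream and adds enhancement substreams while its access bandwidth allows. The substream rates below are assumed values.

```python
# Receiver-driven layer selection for the multicast case in Fig. 4: keep the
# base substream and add enhancement substreams while bandwidth allows.
SUBSTREAM_RATES_KBPS = [64, 192, 768]    # assumed base, enh. 1, enh. 2 rates

def select_substreams(available_kbps: float) -> list:
    """Return indices of the substreams a receiver can sustain (base first)."""
    chosen, used = [], 0.0
    for idx, rate in enumerate(SUBSTREAM_RATES_KBPS):
        if used + rate <= available_kbps:
            chosen.append(idx)
            used += rate
        else:
            break    # higher layers are useless without the one below them
    return chosen

for bw in (70, 300, 1200):
    print(f"{bw:5d} kb/s -> substreams {select_substreams(bw)}")
```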
A.1) MPEG-4 FGS Scalable Video Coding
MPEG-4 FGS scalable video coding is a video coding technology that enhances the flexibility of video streaming. Similar to conventional scalable encoding, the video is encoded into a BL and one or more ELs. For MPEG-4 FGS, the EL can be efficiently truncated in order to adapt the transmission rate to the underlying network conditions. This feature can be used by video servers to adapt the streamed video to the available bandwidth in real time (without requiring any computationally demanding re-encoding). In addition, the fine-granularity property can be exploited by intermediate network nodes (including base stations, in the case of wireless networks) in order to adapt the video stream to the currently available downstream bandwidth. In contrast to conventional scalable methods, complete reception of the EL is not required for successful decoding [13]. The received part can be decoded, increasing the overall video quality according to the rate-distortion curve of the EL as described in [14]. The overall video quality can also improve when an error concealment method is used. In our architecture, when a frame is lost, the decoder inserts the previous successfully decoded frame in place of each lost frame. A packet is also considered lost if its delay exceeds the play-out buffer time.
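The FGS rate-adaptation step can be sketched as a simple truncation rule: the BL is sent whole and the EL is cut at the byte position allowed by the current budget. Sizes and budgets below are dummy values.

```python
# FGS rate adaptation as truncation: the base layer (BL) is sent whole, while the
# enhancement layer (EL) is cut at the byte position the per-frame budget allows.
def truncate_fgs_frame(bl_bytes: bytes, el_bytes: bytes, budget_bytes: int) -> bytes:
    """Return BL plus as large an EL prefix as the byte budget permits."""
    if budget_bytes < len(bl_bytes):
        raise ValueError("budget below the base-layer size; the frame cannot be adapted")
    return bl_bytes + el_bytes[:budget_bytes - len(bl_bytes)]

bl = bytes(500)      # dummy 500-byte base layer
el = bytes(3000)     # dummy 3000-byte enhancement layer
for budget in (600, 1500, 4000):
    out = truncate_fgs_frame(bl, el, budget)
    print(f"budget {budget:5d} B -> sent {len(out)} B ({len(out) - len(bl)} B of EL)")
```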
A.2) FGS Scalable extension of H.264/AVC
In order to provide FGS scalability, a picture is represented by an H.264/AVC-compatible base representation layer and one or more FGS enhancement representations, which represent the residual between the original prediction residuals and intra blocks and their reconstructed base representation layer. The base representation layer corresponds to a minimally acceptable decoded quality, which can be improved in a fine-granular way by truncating the enhancement representation NAL units at any arbitrary point. Each enhancement representation contains a refinement signal that corresponds to a bisection of the quantization step size and is coded directly in the transform coefficient domain. For the encoding of the enhancement representation layers, a new slice type called "Progressive Refinement" (PR) has been introduced. In order to provide quality enhancement layer NAL units that can be truncated at any arbitrary point, the coding order of transform coefficient levels has been modified for the progressive refinement slices. The transform coefficient blocks are scanned in several paths, and in each path only a few coding symbols for a transform coefficient block are coded [15].
Fig.5: Layered video encoding/decoding. D denotes the decoder.
As shown in Fig. 5, the compressed video streams can adapt to three levels of bandwidth usage (i.e., 64 kb/s, 256 kb/s, and 1 Mb/s). In contrast, non-scalable video (say, a video stream encoded at 1 Mb/s) is more susceptible to bandwidth fluctuations (e.g., a bandwidth drop from 1 Mb/s to 100 kb/s) since it has only one representation. Thus, scalable video is more suitable than non-scalable video in a time-varying wireless environment. Second, scalable video representation is a good solution to the heterogeneity problem in the multicast case [2], [3].
As mentioned before, scalable video can withstand bandwidth variations because of its bandwidth scalability. Basically, the bandwidth scalability of video consists of SNR scalability, spatial scalability, and temporal scalability. To give a clear picture of scalable coding mechanisms, we first describe the non-scalable encoder/decoder shown in Fig. 6. At the non-scalable encoder, the raw video is transformed by the discrete cosine transform (DCT), quantized, and coded by variable-length coding (VLC). The compressed video stream is then transmitted to the decoder over the network. At the non-scalable decoder, the received compressed video stream is decoded by variable-length decoding (VLD), then inversely quantized, and inversely DCT-transformed.
Fig 6: (a) Non-scalable video encoder; (b) Non-scalable video decoder
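As a toy counterpart of the non-scalable pipeline in Fig. 6, the following sketch applies an 8×8 block DCT and uniform quantization (motion compensation and VLC are omitted), then inverts both steps at the decoder side.

```python
import numpy as np

# Toy non-scalable pipeline: 8x8 block DCT and uniform quantisation at the
# encoder, inverse quantisation and inverse DCT at the decoder.  Motion
# compensation and VLC entropy coding are omitted.
N = 8
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)                # orthonormal DCT-II matrix

def encode_block(block: np.ndarray, q: float) -> np.ndarray:
    coeffs = C @ block @ C.T              # 2-D DCT
    return np.round(coeffs / q)           # uniform quantisation

def decode_block(levels: np.ndarray, q: float) -> np.ndarray:
    return C.T @ (levels * q) @ C         # inverse quantisation + inverse DCT

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(N, N)).astype(float)
levels = encode_block(block, q=16.0)
recon = decode_block(levels, q=16.0)
print("non-zero quantised coefficients:", int(np.count_nonzero(levels)))
print("mean abs reconstruction error:  ", float(np.mean(np.abs(recon - block))))
```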
I) SNR Scalability
SNR scalability is defined as representing the same video at different SNR or perceptual quality levels (see Fig. 3(a) and 3(b)). To be specific, SNR-scalable coding quantizes the DCT coefficients to different levels of accuracy by using different quantization parameters. The resulting streams have different SNR or quality levels. In other words, the smaller the quantization parameter, the better the quality the video stream can achieve. An SNR-scalable encoder with two-level scalability is depicted in Fig. 7(a).
Fig 7: (a) SNR scalable encoder
For the base level, the SNR-scalable encoder operates in the same manner as the non-scalable video encoder. For the enhancement level, the operations are performed in the following order:
1. The raw video is DCT-transformed and quantized at the base level.
2. The base-level DCT coefficients are reconstructed by inverse quantization.
3. Subtract the base-level DCT coefficients from the original DCT coefficients.
4. The residual is quantized by a quantization parameter, which is smaller than that of the base level.
5. The quantized bits are coded by VLC. Since the enhancement level uses a smaller quantization parameter, it achieves
better quality than the base level.
An SNR-scalable decoder with two-level scalability is depicted in Fig. 7(b).
Fig 7: (b) SNR scalable decoder
For the base level, the SNR-scalable decoder operates exactly like the non-scalable video decoder. For the enhancement level, both levels must be received, decoded by VLD, and inversely quantized. Then the base-level DCT coefficient values are added to the enhancement-level DCT coefficient refinements. After this stage, the summed DCT coefficients are inversely DCT-transformed, resulting in enhancement-level decoded video.
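A numerical sketch of the two-level SNR-scalable steps above is given below; it works directly on a block of DCT coefficients (standing in for step 1) and shows how adding the enhancement-level refinement reduces the reconstruction error.

```python
import numpy as np

# Two-level SNR scalability on a block of DCT coefficients (the coefficients
# stand in for step 1's output; the quantiser values are arbitrary examples).
rng = np.random.default_rng(1)
dct_coeffs = rng.normal(0.0, 50.0, size=(8, 8))

Q_BASE, Q_ENH = 32.0, 8.0                     # enhancement quantiser is finer

base_levels = np.round(dct_coeffs / Q_BASE)   # base-level quantisation
base_recon = base_levels * Q_BASE             # step 2: inverse quantisation
residual = dct_coeffs - base_recon            # step 3: subtract base coefficients
enh_levels = np.round(residual / Q_ENH)       # step 4: finer quantisation (step 5, VLC, omitted)
enh_recon = base_recon + enh_levels * Q_ENH   # decoder adds the refinement back

print("base-only MSE:       ", float(np.mean((dct_coeffs - base_recon) ** 2)))
print("base+enhancement MSE:", float(np.mean((dct_coeffs - enh_recon) ** 2)))
```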
II) Spatial Scalability
Spatial scalability is defined as representing the same video in different spatial resolutions or sizes (see Fig. 3(a) and
3(c)). Typically, spatially scalable video is efficiently encoded by making use of spatially up-sampled pictures from a
lower layer as a prediction in a higher layer.
Fig 8: (a) A spatially scalable encoder
Fig. 8(a) shows a block diagram of a two-layer spatially scalable encoder. For the base layer, the raw video is first
spatially down-sampled, then DCT-transformed, quantized and VLC-coded. For the enhancement layer, the operations
are performed in the following order:
1. The raw video is spatially down-sampled, DCT transformed and quantized at the base layer.
2. The base-layer image is reconstructed by inverse quantization and inverse DCT.
3. The base-layer image is spatially up-sampled.
4. Subtract the up-sampled base-layer image from the original image.
5. The residual is DCT-transformed, and quantized by a quantization parameter, which is smaller than that of the base
layer.
6. The quantized bits are coded by VLC.
Since the enhancement layer uses a smaller quantization parameter, it achieves finer quality than the base layer. A
spatially scalable decoder with two-layer scalability is depicted in Fig. 8(b).
Fig. 8: (b) A spatially scalable decoder
For the base layer, the spatially scalable decoder operates exactly like the non-scalable video decoder. For the enhancement layer, both layers must be received, decoded by VLD, inversely quantized, and inversely DCT-transformed. Then the base-layer image is spatially up-sampled. The up-sampled base-layer image is combined with the enhancement-layer refinements to form the enhanced video.
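The sketch below mimics the two-layer spatial pipeline in pixel space (DCT and VLC omitted): 2:1 down-sampling for the base layer, up-sampling of the base reconstruction as the prediction, and a finer-quantized residual as the enhancement layer.

```python
import numpy as np

# Pixel-domain sketch of the two-layer spatial pipeline (DCT/VLC stages omitted
# so that the down-/up-sampling and residual steps stand out).
def downsample2(img: np.ndarray) -> np.ndarray:
    """2:1 spatial down-sampling by 2x2 block averaging."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img: np.ndarray) -> np.ndarray:
    """1:2 spatial up-sampling by pixel replication."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, size=(16, 16)).astype(float)

Q_BASE, Q_ENH = 16.0, 4.0
base = np.round(downsample2(frame) / Q_BASE) * Q_BASE       # coarse base layer
prediction = upsample2(base)                                # up-sampled base as prediction
residual = np.round((frame - prediction) / Q_ENH) * Q_ENH   # finer-quantised enhancement
enhanced = prediction + residual                            # decoder-side combination

print("base-only MSE:", float(np.mean((frame - prediction) ** 2)))
print("enhanced MSE: ", float(np.mean((frame - enhanced) ** 2)))
```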
III) Temporal Scalability
Temporal scalability is defined as representing the same video at different temporal resolutions or frame rates (see Fig. 3(a) and 3(d)). Typically, temporally scalable video is encoded by making use of temporally up-sampled pictures from a lower layer as a prediction in a higher layer. The block diagram of a temporally scalable coder is the same as that of a spatially scalable coder (see Fig. 8). The only difference is that the spatially scalable coder uses spatial down-sampling and spatial up-sampling, while the temporally scalable coder uses temporal down-sampling and temporal up-sampling.
Temporal down-sampling uses frame skipping. For example, a temporal down-sampling with ratio 2:1 is to discard one
frame from every two frames. Temporal up-sampling uses frame copying. For example, a temporal up-sampling with
ratio 1:2 is to make a copy for each frame and transmit the two frames to the next stage.
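Frame skipping and frame copying can be sketched in a few lines; the frame labels below are dummies.

```python
# Temporal down-sampling (frame skipping) and up-sampling (frame copying) as
# described above, applied to a toy list of frame labels.
def temporal_downsample(frames: list, ratio: int = 2) -> list:
    """Keep every `ratio`-th frame; e.g. 2:1 discards one frame out of two."""
    return frames[::ratio]

def temporal_upsample(frames: list, ratio: int = 2) -> list:
    """Repeat each frame `ratio` times; e.g. 1:2 copies every frame once."""
    return [f for f in frames for _ in range(ratio)]

frames = [f"F{i}" for i in range(8)]
half_rate = temporal_downsample(frames)      # half the frame rate
restored = temporal_upsample(half_rate)      # full rate again, via copies
print(frames)
print(half_rate)
print(restored)
```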
Each video representation has different significance and bandwidth requirement. The base layer is more important while
an enhancement layer is less important. The base layer needs less transmission bandwidth due to its coarser quality; an
enhancement layer requires more transmission bandwidth due to its finer quality. As a result, SNR/spatial/temporal
scalability achieves bandwidth scalability. That is, the same video content can be transported at different rates (i.e., in
different representations). The different video layers can be transmitted in different bit-streams called substreams. Alternatively, they can be transmitted in the same bit-stream, which is called an embedded bit-stream. An embedded bit-stream is formed by interleaving the base layer with the enhancement layer(s). An embedded bit-stream is also bandwidth-scalable, since application-aware networks can select certain layer(s) of an embedded bit-stream and discard them to match the available bandwidth. We point out that we have only described the basic scalable mechanisms,
that is, SNR, spatial and temporal scalability. There can be combinations of the basic mechanisms, such as spatiotemporal
scalability [11]. Other scalabilities include frequency scalability for MPEG-1/2 [42], object-based scalability for MPEG-4
[22], and fine-granular-scalability [17], [18], [19], [20], [21].
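A minimal sketch of an embedded bit-stream and layer dropping: base and enhancement packets are interleaved frame by frame, and a hypothetical application-aware node keeps only the lowest layers to match the available bandwidth.

```python
# Embedded bit-stream sketch: base and enhancement packets are interleaved frame
# by frame, and a hypothetical application-aware node keeps only the lowest
# layers to match the available bandwidth.
def build_embedded_stream(n_frames: int, n_layers: int) -> list:
    """Interleave layer packets frame by frame as (frame, layer) tuples."""
    return [(f, l) for f in range(n_frames) for l in range(n_layers)]

def drop_layers(stream: list, keep_layers: int) -> list:
    """Keep only packets belonging to the lowest `keep_layers` layers."""
    return [pkt for pkt in stream if pkt[1] < keep_layers]

stream = build_embedded_stream(n_frames=3, n_layers=3)
print("full embedded stream:    ", stream)
print("after dropping top layer:", drop_layers(stream, keep_layers=2))
```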
The primary goal of using bandwidth-scalable video coding is to obtain smooth change of perceptual quality in the
presence of bandwidth fluctuations in wireless channels. However, without appropriate transport mechanisms, this goal
may not be accomplished.
We therefore recommend the following architecture for transporting the desired video.
Fig. 9: An architecture for transporting scalable video from a mobile terminal to a wired terminal.
B) Optimal deployment
WMNs face scalability issues since coverage extension, throughput enhancement, and delay improvement are usually contradictory goals [23]–[25], [26], [27]. To provide reliable broadband communications, the access points (APs) are deployed along the main streets as in [36]. In the literature, AP deployment issues for wireless local area networks (WLANs) are studied in [28]–[31]. The performance of WMNs is investigated in [23], [25], [26], [27], and [32]–[34]. The simulation results in [26] and the experimental results in [27] show that even if only one user is sending data, user throughput drops rapidly as the number of hops increases from one, and then stabilizes at a very low value as the number of hops becomes larger (e.g., larger than seven [26]). As the number of users increases, user throughput and QoS (delay) degrade sharply due to the increasing collisions [25], [26], [27], [32], [33]. All the performance issues of coverage, throughput, and QoS are essential factors in designing a scalable ITS WMN. From the coverage viewpoint, a larger separation distance between APs can lower the infrastructure cost due to fewer APs. From the throughput standpoint, however, a shorter separation distance can achieve a higher relay link capacity between APs. Meanwhile, a smaller cell with fewer contending users also improves the access link capacity between the users and the AP. We also
consider the frame delay, consisting of contention delay and queuing delay. From the queuing-delay perspective, a longer separation distance between APs is better because it means fewer hops. From the contention-delay viewpoint, a shorter separation distance, and thus a smaller cell, may be preferred due to the higher data rate and fewer contending users. In the following, an optimization problem is formulated to determine the best number of APs in a cluster and the optimal separation distances between APs subject to constraints on throughput and delay.
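Under assumed notation (the symbols below are ours, not the paper's), with n relay APs on each side of the central AP, separation distances d_1, …, d_n, cluster capacity C(n, d_1, …, d_n) given by the analytical model, per-AP cost c_AP, wireline cost c_w for the central AP, per-user throughput floor S_min, and frame-delay bound D_max, the problem can be written as:

```latex
\max_{n,\; d_1,\dots,d_n} \quad
  \frac{C(n, d_1, \dots, d_n)}{c_{w} + (2n+1)\, c_{\mathrm{AP}}}
\qquad \text{s.t.} \qquad
  S_i \ge S_{\min}, \quad D_i \le D_{\max}, \quad i = 1, \dots, n
```

where S_i and D_i denote the throughput and frame delay in the cell of AP_i; the uniform-spacing strategy adds the constraint d_1 = … = d_n, while the increasing-spacing strategy adds d_1 < d_2 < … < d_n.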
Here we consider both the uniform-spacing and the increasing-spacing deployment strategies. As new access points are deployed in the existing network, the frequency plan must be revised so that traffic can be carried with the required bandwidth. We also develop network-planning guidelines for installing a scalable wireless mesh network (WMN).
C) Delay analysis
We use an analytical method to evaluate network parameters (delay variance) in the considered ITS WMN. The analytical model is validated by simulations. In the simulations, the legacy IEEE 802.11 CSMA protocol with the binary exponential backoff scheme is used. We consider a general case where K′ users and one AP contend for the channel, with asymmetric traffic for each user. In addition, we assume that the time between frame generations in each node is exponentially distributed, as in [35].
3. Expected results
1. The collision issue will be relieved, making the WMN more scalable.
2. The architecture facilitates the management of QoS and throughput of a WMN.
3. The goals of throughput enhancement and QoS provisioning can be fulfilled at a slight cost of coverage.
4. The increasing-spacing strategy outperforms the uniform-spacing strategy in terms of a profit function.
5. High-quality video transmission without any loss/leakage.
4. Main Thrust of the Paper
Nowadays, the use of multimedia is increasing with the popularity of the WWW, and video conferencing has become an area of great interest in both commercial and educational fields. There is therefore a need to make video streaming fast enough (i.e., real time) for video conferencing. Video streaming has reached that level, but real-time video conferencing still suffers from some drawbacks: packets are lost during a conference, and when new users join at unpredictable times, the network must be scaled up without degrading quality.
We are developing a network that does not tolerate packet loss and does not compromise the quality of the transported video frames. Whenever the network scales up because new users join a conference, or scales down because users leave, our network adapts automatically while maintaining a high quality of service for video transport.
5. Future Trends Discussion/Analysis
In this project, speed and QoS can be increased further according to the needs of the application. In the future, we will improve QoS for large videos over large, diverse networks. We also aim to transmit high-definition real-time video of live events within a very small time fraction and with as little noise as possible, so the noise must be filtered to minimize it while keeping delay low and throughput high. We can also develop new security mechanisms for video streaming to make it highly secure, so that it can be used by intelligence agencies for confidential real-time communication.
6. Conclusion
In this paper, we have investigated the AP deployment issue for a scalable ITS WMN, considering the overall performance in terms of throughput, coverage, and QoS. The presented mesh network architecture is well suited to ITS
applications due to easy network deployment and lower infrastructure cost. With the QoS requirement, an optimization
approach has been proposed to maximize the ratio of total capacity to total cost for deploying a cluster of APs in the
considered ITS WMN. From the system architecture perspective, the proposed WMN has two main advantages. First, the
suggested frequency planning can relieve the collision issue and make the WMN more scalable. Second, the proposed
network architecture also facilitates the management of QoS and throughput of a WMN. From the system design
perspective, this paper has three important elements. First, we have proposed an analytical model to evaluate the
throughput, which has considered bidirectional asymmetric traffic for users operating in the unsaturated situation.
Second, we have developed a queuing model to analyze the frame delay and jitter. Third, we have applied an optimization
approach to determine the best number of APs in a cluster and the optimal separation distances between APs for the
proposed ITS WMN. Numerical results have shown that the optimal deployment parameters can analytically be
determined, and the goals of throughput enhancement and QoS provisioning can be fulfilled at a slight cost of coverage.
In addition, the increasing-spacing strategy outperforms the uniform-spacing strategy in terms of a profit function,
considering the capacity, the cost of APs, and the wireline overhead.
In this paper, we examined the challenges in QoS provisioning for wireless video transport. We use an adaptive
framework, which specifically addresses scalable video transport over wireless networks. The adaptive framework consists
of (1) scalable video representations, (2) network-aware end systems, and (3) adaptive services. Under this framework,
mobile terminals and network elements can adapt the video streams according to the channel conditions and transport the
adapted video streams to receivers with a smooth change of perceptual quality. The advantages of deploying such an
adaptive framework are that it can provide suitable QoS for video over wireless while achieving fairness in sharing
resources. Finally, through a number of NS2-based simulation scenarios, this paper proposes and validates a framework that explores the joint use of packet prioritization and scalable video coding, evaluating two FGS video encoders together with an appropriate mapping of UMTS traffic classes to DiffServ traffic classes.
References
[1] A. Balachandran, A. T. Campbell, and M. E. Kounavis, “Active filters: delivering scalable media to mobile devices," in
Proc. Seventh International Workshop on Network and Operating System Support for Digital Audio and Video
(NOSSDAV'97), St Louis, MO, USA, May 1997.
[2] X. Li, S. Paul, and M. H. Ammar, “Layered video multicast with retransmissions (LVMR): evaluation of hierarchical
rate control," in Proc. IEEE INFOCOM'98, San Francisco, CA,USA, March 1998.
[3] S. McCanne, V. Jacobson, and M. Vetterli, “Receiver-driven layered multicast," in Proc. ACM SIGCOMM'96, pp. 117–130, Aug. 1996.
[4] R. Braden, D. Clark, and S. Shenker, “Integrated services in the Internet architecture: an overview," RFC 1633,
Internet Engineering Task Force, July 1994.
[5] W. Caripe, G. Cybenko, K. Moizumi, and R. Gray, “Network awareness and mobile agent systems," IEEE
Communications Magazine, vol. 36, no. 7, pp. 44–49, July 1998.
[6] M. C. Chan and T. Y. C. Woo, “Next-generation wireless data services: architecture and experience," IEEE Personal
Communications Magazine, vol. 6, no. 1, pp. 20–33, Feb. 1999.
[7] Y.-C. Chang and D. G. Messerschmitt, “Adaptive layered video coding for multi-time scale bandwidth fluctuations,"
submitted to IEEE Journal on Selected Areas in Communications.
[8] D. Clark, S. Shenker, and L. Zhang, “Supporting real-time applications in an integrated services packet network:
architecture and mechanisms," in Proc. ACM SIGCOMM'92, Baltimore, MD, Aug 1992.
[9] N. Davies, J. Finney, A. Friday, and A. Scott, “Supporting adaptive video applications in mobile environments,"
IEEE Communications Magazine, vol. 36, no. 6, pp. 138–143, June 1998.
[10] T. Ebrahimi and M. Kunt, “Visual data compression for multimedia applications," Proceedings of the IEEE, vol. 86,
no. 6, pp. 1109-1125, June 1998.
[11] B. Girod, U. Horn, and B. Belzer, “Scalable video coding with multiscale motion compensation and unequal error
protection," in Proc. Symposium on Multimedia Communications and Video Coding, New York: Plenum Press, pp.
475–482, Oct. 1995.
[12] P. Goyal, H. M. Vin, and H. Chen, “Start-time fair queuing: a scheduling algorithm for integrated service access," in
Proc. ACM SIGCOMM'96, Stanford, CA, USA, Aug. 1996.
[13] J. Hagenauer, “Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Trans.
on Communications, vol. 36, no. 4, pp. 389–400, April 1988.
[14] J. Hagenauer and T. Stockhammer, “Channel coding and transmission aspects for wireless multimedia," Proceedings
of the IEEE, vol. 87, no. 10, pp. 1764–1777, Oct. 1999.
[15] T. Hentschel, M. Henker, and G. Fettweis, “The digital front-end of software radio terminals," IEEE Personal
Communications Magazine, vol. 6, no. 4, pp. 40–46, Aug. 1999.
[16] J. Li, “Visual progressive coding," in SPIE Proc. Visual Communications and Image Processing (VCIP'99), Jan. 1999.
[17] S. Li, F. Wu, and Y.Q. Zhang, “Study of a new approach to improve FGS video coding efficiency," ISO/IEC
JTC1/SC29/WG11, MPEG99/M5583, Dec. 1999.
[18] W. Li, “Bit-plane coding of DCT coefficients for fine granularity scalability," ISO/IEC JTC1/SC29/WG11,
MPEG98/M3989, Oct. 1998.
[19] W. Li, “Syntax of fine granularity scalability for video coding," ISO/IEC JTC1/SC29/WG11, MPEG99/M4792, July
1999.
[20] H. Radha and Y. Chen, “Fine-granular-scalable video for packet networks," in Proc. Packet Video'99, Columbia
University, New York, USA, April 1999.
[21] M. van der Schaar, H. Radha, and Y. Chen, “An all FGS solution for hybrid temporal-SNR scalability," ISO/IEC
JTC1/SC29/WG11, MPEG99/M5552, Dec. 1999.
[22] A. Vetro, H. Sun, and Y. Wang, “Object-based transcoding for scalable quality of service," in Proc. IEEE
International Symposium on Circuits and Systems (ISCAS'2000), Geneva, Switzerland, May 28–31, 2000.
[23] R. Pabst et al., “Relay-based deployment concepts for wireless and mobile broadband radio,” IEEE Communication.
Mag., vol. 42, no. 9, pp. 80–89, Sep. 2004.
[24] I. F. Akyildiz, X. Wang, and W. Wang, “Wireless mesh networks: A survey,” Computer Network., vol. 47, no. 4, pp.
445–487, Mar. 2005.
[25] J. Jun and M. Sichitiu, “The nominal capacity of wireless mesh networks,” IEEE Wireless Communication., vol. 10,
no. 5, pp. 8–14, Oct. 2003.
[26] R. Bruno, M. Conti, and E. Gregori, “Mesh networks: Commodity multihop ad hoc networks,” IEEE
Communication. Mag., vol. 43, no. 2, pp. 123–131, Mar. 2005.
[27] S. Ganguly, V. Navda, K. Kim, A. Kashyap, D. Niculescu, R. Izmailov, S. Hong, and S. R. Das, “Performance
optimizations for deploying VoIP services in mesh networks,” IEEE J. Sel. Areas Communication., vol. 24, no. 11,
pp. 2147–2158, Nov. 2006.
[28] M. Amenetsky and M. Unbehaun, “Coverage planning for outdoor wireless LAN systems,” in Proc. IEEE Zurich
Semin. Broadband Communication., Feb. 2002, pp. 49.1–49.6.
[29] M. Kobayashi et al., “Optimal access point placement in simultaneous broadcast system using OFDM for indoor
wireless LAN,” in Proc. IEEE PIMRC, Sep. 2000, pp. 200–204.
[30] T. Jiang and G. Zhu, “Uniform design simulated annealing for optimal access point placement of high data rate
indoor wireless LAN using OFDM,” in Proc. IEEE PIMRC, Sep. 2003, pp. 2302–2306.
[31] Y. Lee, K. Kim, and Y. Choi, “Optimization of AP placement and channel assignment in wireless LANs,” in Proc.
IEEE LCN, Nov. 2002, pp. 831–836.
[32] P. Gupta and P. R. Kumar, “The capacity of wireless networks,” IEEE Trans. Inf. Theory, vol. 46, no. 2, pp. 388–
404, Mar. 2000.
[33] J. Li et al., “Capacity of ad hoc wireless networks,” in Proc. ACM Mobile Communication, Jul. 2001, pp. 61–69.
[34] J.-H. Huang, L.-C. Wang, and C.-J. Chang, “Deployment strategies of access points for outdoor wireless local area
networks,” in Proc. IEEE VTC, May 2005, pp. 2949–2953.
[35] Y. C. Tay and K. C. Chua, “A capacity analysis for the IEEE 802.11 MAC protocol,” Wireless Networks, vol. 7, no. 2, pp. 159–171, Mar./Apr. 2001.
[36] M. Zhang and R. Wolff, “Crossing the digital divide: Cost-effective broadband wireless access for rural and remote areas,” IEEE Communication. Mag., vol. 42, no. 2, pp. 99–105, Feb. 2004.
AUTHOR
Mr. Raviraj Bhagwat Mete received the BE degree in Information Technology from Shivaji University, Kolhapur, Maharashtra, India, in 2011, and is currently pursuing the Master of Engineering degree in Computer Science at Solapur University, Maharashtra, India (expected 2014). From 2012 to 2013, he worked as an assistant professor at the College of Engineering, Pandharpur, India.
Prof. Amit R. Sarkar received the BE degree in Computer Science from BAMU, Maharashtra, India, and the MTech in Computer from Walchand College, Sangli, Maharashtra, India. He is currently pursuing a PhD at Chennai University. He has been working as a professor in an engineering college in Maharashtra for more than 12 years.