Quality Reservation in Video Communications:
Subband Coding and Resource Management
Yoshito Tobe
Submitted in partial fulfillment of the requirements
for the degree of Master of Science
Department of Electrical and Computer Engineering
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213
Advisor: Prof. José M. F. Moura
February 1992
Abstract
Multimedia data transmission is becoming a popular application in computer communications. Compared with digital media such as audio or text, real-time video is the most difficult medium for a communication system to handle. The difficulties lie, in part, in the strict end-to-end delay requirements of this medium. It is desirable to maintain a specified quality during a video session. To guarantee this quality, the system handling the video communication needs a coding scheme that adapts easily to the capacity of the computer system, and the system needs the ability to reserve the capacity necessary for the coding scheme. This thesis discusses these requirements. The effective use of subband coding of video frames is studied, and resource managers are proposed to support the image quality reservation. Finally, the performance of the resource managers implemented on a real-time operating system, the Advanced Real-Time System (ARTS), is described.
Contents

1 Introduction
2 Problem Definition
  2.1 Local Area Network Systems
  2.2 Image Coding
    2.2.1 Subband Coding
  2.3 QOS Consideration
3 Subband Coding of Video Data
  3.1 Subband Coding
  3.2 Selecting Subbands
4 Resource Management
  4.1 Quality of Service
    4.1.1 Motivation
    4.1.2 Definition
    4.1.3 Control Mechanism
  4.2 Capacity-Based Session Reservation Protocol
  4.3 ARTS/FDDI
    4.3.1 Protocol and Conversion of QOS
    4.3.2 Sender and Receiver
    4.3.3 Resource Managers
  4.4 Sequence of QOS Control
5 Results
  5.1 Subband Coded Data
  5.2 Cost of the Resource Management
  5.3 Effect of Resource Managers
    5.3.1 Network Bandwidth Allocation
    5.3.2 Allocation of Protocol Processing Time
6 Future Work
7 Conclusions
8 Acknowledgments
A Timing of Video Transmission
List of Figures

1 Shortage of network bandwidth
2 Shortage of CPU execution time
3 Four-band partitioning of the image frequency spectrum
4 Seven-band partitioning of the image frequency spectrum
5 System diagram of four-band partitioning
6 System diagram of seven-band partitioning
7 Determination of period and class_size
8 Functions of SM
9 Bandwidth control in NRM
10 Protocol layers
11 Communications
12 Sequence of communications for QOS reduction
13 Test image 1 (256 × 256)
14 Test image 2 (128 × 128)
15 Band 1c after compression at 0.25 bpp
16 Band 1c+2c after compression at 0.5 bpp
17 Band 1c+3c after compression at 0.5 bpp
18 Band 1c+2c+3c after compression at 0.75 bpp
19 Band 1c+2c+3c+4c after compression at 1.0 bpp
20 The image with the lower left corner at high accuracy (1 bpp) and the remaining portions at lower accuracy (0.25 bpp)
21 System configuration
22 Timing of receiving frames for Case 1
23 Timing of receiving frames for Case 2
24 Timing of receiving frames for Case 3
25 Timing of receiving frames for Case 4
26 Video sending thread with another thread
27 Timing of receiving frames for Case 5
28 Timing of receiving frames for Case 6
29 End-to-end delay
List of Tables

1 MSE and energy of bands and combinations of bands for test image 1
2 MSE vs. compressed bands for test image 1
3 MSE depending on bands for test image 2
4 MSE vs. compressed bands of test image 2
5 Consumed time for resource management: Session Creation
6 Consumed time for resource management: Session Release
7 UDP processing time
8 Measured end-to-end delay
1 Introduction

The integration of multiple media in computer networks is becoming widespread [3, 4, 9, 10, 11]. Compared with other media such as audio or text, real-time video provides a powerful human interface. Supporting video applications is extremely challenging from the point of view of computer communications. Video requires a small end-to-end delay. The volumes of data it generates are much larger than those of other media. Many workstations are equipped with hardware capable of producing video and/or displaying high quality graphics and video. Additionally, high-speed networks have sufficient bandwidth to support video applications. Unfortunately, this hardware support is not sufficient for video applications, due to the necessity of sharing resources among several applications and due to the large volumes of video data.

One approach to the former problem is to reserve the resources for the transmission of video data. Several schemes have been suggested [1, 2, 9, 11, 34] to support the reservation of resources. In the LAN environment, a connection-based protocol, the Capacity-Based Session Reservation Protocol (CBSRP) [34], has been proposed. In CBSRP, each unidirectional video communication entity is called a session. Each session is maintained by a collection of resource managers which consists of a session manager (SM), a system resource manager (SRM), and a network resource manager (NRM). The SM deals with the creation, termination, and modification of sessions in one host. The SRM checks the availability of resources for protocol processing. The NRM keeps track of the total utilization of synchronous service time within a 100 Mbps Fiber Distributed Data Interface (FDDI) network [26, 27].
CBSRP determines whether or not a session which meets the user's requested quality of service (QOS) parameters can be created. If the QOS parameters can be met, the session is established, and the quality is assured from then on.

User demands are varied. Some users request only high quality video, while others are satisfied with lower quality when the system capacity cannot accommodate them otherwise. Some users allow service degradation as long as a specified minimum quality is guaranteed. To accommodate all of these types of users, the QOS parameters are extended to include characteristics more closely associated with video. Class of resolution and class of period are such newly introduced QOS parameters.

The solution to the problem of large data size is compression of the image data. Much work has been done to provide a high compression ratio for images [14, 15, 16, 18, 22, 23]. It is not enough, however, to limit the discussion to the compression ratio. Regardless of the coding scheme that is chosen, a higher compression ratio leads to more degradation in the quality of the image. It is necessary to use a coding scheme which can adapt to the capacity of the system. Subband coding [18] was chosen as an adaptive coding scheme for use in CBSRP. The CBSRP with an extension [35] for subband coding has been implemented on a distributed systems testbed, the Advanced Real-Time System (ARTS) [31].
In related work, MAGNET-II [6, 7] applies subband coding to video frames, and assumes Asynchronous Transfer Mode (ATM) switching-type networks. The method proposed in this thesis uses subbands differently and is extensible to other networks, such as FDDI. DASH [9] and ACME [11] consider operating system (OS) support for continuous media, including video. Their major concern is the synchronization of video frames and the reservation of buffers for end-to-end delay considerations. In [1] and [2], schemes to guarantee the response in wide area networks are mentioned. The work presented here differs from these systems in its capability of changing the network bandwidth dynamically.

In this thesis, the effectiveness of subband coding in CBSRP video communications is considered, and its implementation and evaluation on ARTS are described. Section 2 defines the scope of the thesis. In Section 3, subband coding is reviewed. In Section 4, QOS parameters for video communications are defined and the implementation of CBSRP on ARTS is analyzed. In Section 5, the results are shown. Future work and conclusions are presented in Sections 6 and 7, respectively.

The results of this thesis demonstrate that dynamic control of network bandwidth allocation based on the class of resolution reduces the possibility of network congestion, and that performing a schedulability check before session creation ensures predictable execution of protocol processing. The thesis also shows how to implement the protocol based on different classes of resolution, and that a larger number of classes of resolution can be obtained with more subbands if subband coding is used.
2 Problem Definition

In this section, features of a LAN system for video communications are discussed, image coding as a means to reduce the cost of video is considered, and quality of service (QOS) is described.

2.1 Local Area Network Systems

A local area network (LAN) system which supports image transmission consists of the following components: senders of images, one or more receivers, and an underlying network medium.

The components in the sender are a video source, which is either a TV camera or a video library, an image coding device, a bus inside the workstation, memory buffers, a network terminal, and the software to control all of these. The software usually consists of a protocol processing task and several device drivers.

The network medium is the physical line that connects workstations. This includes the media access control (MAC) sublayer in the datalink layer. Ethernet, Token Ring, FDDI [26], and DQDB [29] are examples of standardized network media.

The components in the receiver are the same as those in the sender, with a few exceptions. It has a video display instead of a video source, and it has a frame buffer as well.

At the sending workstation, image data loaded into the memory buffers are encapsulated with protocol headers and trailers to create packets. Then, if necessary, they are copied into the transmit buffer memory on the network terminal. At the receiving workstation, the reverse process is done. The buffer areas are not privately owned by the image data; they are shared with other tasks. The network bandwidth is also shared with other workstations. Therefore, there is a possibility that the image data cannot be delivered properly. Three problems may arise:
Figure 1: Shortage of network bandwidth

Figure 2: Shortage of CPU execution time
- Shortage of buffers: There are two types of buffer shortage. The shortage may occur in the kernel linked-list buffer pools, such as mbuf in UNIX, or in the buffers of the network device.

- Shortage of network bandwidth: When the network is congested, the data to be sent may be delayed or even lost. Figure 1 shows what may happen in the absence of management of the network bandwidth. Initially, the inter-transmission time for data A is T1. Once periodic data B, C, and D join the network, T1 increases to T2. The data to be sent could also be lost due to congestion in the network.

- Shortage of CPU execution time for protocol processing: When there are many threads to be executed, the protocol processing may overload the processor, since the protocol processing is also a CPU-consuming task. In Figure 2, thread Th c cannot be executed.

Each one of the problems mentioned above causes a delay. The failure to acquire these resources may cause an unexpected delay, which results in the lack of timely interaction between the sender and the receiver. These system problems should be taken into account in image coding.

These problems are common to wide area networks (WANs) and metropolitan area networks (MANs) to which LANs are connected via bridges, routers, or gateways. Therefore, studying video transmission in a LAN also provides insight into general networks.
2.2 Image Coding

Image coding deals with efficient ways of representing images for transmission and storage purposes [14, 15, 16]. Research on image coding has been going on for more than four decades. Since the primary goal of image coding is to reduce the amount of pictorial data, it is also called image compression. The applications of image coding include the transmission of images as well as the storage of images for local applications. The newly emerging High-Definition TV (HDTV) also needs efficient image coding to reduce the required bandwidth [25]. The increasing need for handling images on computers connected in a local area network (LAN) demands more efficient coding schemes. Digital images are becoming more important than analog images, since digital images can be integrated into the representation of data in computer environments.

For the past few years, two standardization efforts have been working toward establishing the first international digital image coding standards. One is the Joint Photographic Experts Group (JPEG) [22], which has set a standard for still images. The other is the Moving Picture Coding Experts Group (MPEG) [23]. MPEG has been working on the standardization of compression of moving pictures. Although MPEG is different from JPEG in the consideration of temporal coding, both of them use spatial coding techniques based on the discrete cosine transform (DCT) [24].

There are several benefits when the data size is reduced by image coding. First, the delay time in every domain, such as protocol processing and the network, is reduced. Thus, responsiveness and interactivity increase. Second, the times for protocol processing and transmission on the network decrease. This eventually reduces the possibility of congestion both at the processor and in the network. Although image coding has these advantages, it also causes undesirable problems.

Coding and decoding delay: Both coding and decoding need signal processing, introducing a source of delay with considerable computational cost. When the image data is transmitted on a network, the coding process is done at the sender's site and the decoding is done at the receiver's site.

Degradation of quality: The quality of the image degrades at high compression ratios. There have been many attempts to enhance the quality of coded images at low bit rates [16]. However, the problem of degradation is inherent in lossy compression itself.

Coding and decoding consist essentially of several linear operations. These are basically convolutions of the signal with filters. It is not difficult to implement this signal processing on silicon chips. By using silicon chips, the delay due to coding and decoding may be reduced. This thesis does not address the study of better image coding algorithms that are easy to implement on silicon chips and that maintain good quality at low compression ratios.

We take here a different approach. We allow degradation of quality when the compression ratio is high. When the compression ratio rises, video requires less computer network bandwidth capacity, but the quality is lowered. If the compression ratio is lowered, video requires a larger bandwidth capacity, but the quality is higher. For this reason, this thesis proposes a design policy in which the compression ratio, or the quality of the image, is determined by the user's request and the capacity of the system.
To specify the quality of the image, a criterion to evaluate the degradation is needed. We consider the mean squared error (MSE) criterion,

\[
\mathrm{MSE} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left(a_{ij} - b_{ij}\right)^{2},
\]

where a_ij is the original value of pixel (i,j) and b_ij is the decoded value of pixel (i,j); M is the number of pixels per horizontal line and N is the number of pixels per vertical line.
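For concreteness, a minimal C sketch of this computation is given below. It is illustrative only: it assumes 8-bit grayscale images stored row by row, and the function name is not from the thesis.

#include <stddef.h>

/* Mean squared error between an original and a decoded 8-bit image.
 * M = pixels per horizontal line (width), N = pixels per vertical line (height).
 * Pixels are assumed to be stored row by row in a[] and b[]. */
double mse(const unsigned char *a, const unsigned char *b, size_t M, size_t N)
{
    double sum = 0.0;
    for (size_t k = 0; k < M * N; k++) {
        double d = (double)a[k] - (double)b[k];
        sum += d * d;
    }
    return sum / (double)(M * N);
}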
2.2.1 Subband Coding

Subband coding [18] is a prominent coding scheme. An image is decomposed into several subcomponents. The information in the original image is not equally distributed among all subcomponents. Generally, the lower subcomponents in the frequency domain carry more information. Subband coding is well suited to producing packets in a computer network, and it is easy to adapt to the capacity of the system by adjusting the number of chosen subbands. We use subband coding for our computer communications studies. Images reconstructed by subband coding are compared with those obtained by DCT.

2.3 QOS Consideration

The quality of service (QOS) of a transport service can be characterized by a number of specific parameters that define levels of preferred, acceptable, and unacceptable service for the user at the time a connection is set up [8]. For video communications, QOS is defined at the interface between the video users and the system, at which subband coding is incorporated into the resource reservation in the computer network. In Section 4, QOS parameters for video communications are defined.
3 Subband Coding of Video Data

We review here the principles of subband coding and describe how subbands are generated in this thesis.

3.1 Subband Coding

Subband coding was initially used for the encoding of speech, and was later extended to the source coding of images [18]. The basic idea of subband coding is to split the frequency band of the signal and then to downsample and code each subband separately.

In our subband coding scheme, the image frequency band is split into seven subbands. First, the image signal is partitioned into the four bands LL, HL, LH, and HH, shown in Figure 3, with two-dimensional filters. Each of these bands is downsampled by 2 × 2. This partitioning process can be repeated in all subbands in order to split the original image into subbands with a narrower frequency spectrum. However, it is enough to see the effect of further splitting by applying this process to the lowest band LL only, as shown in Figure 4. Compression is accomplished by applying a coding scheme to each band and by eliminating some subbands. This corresponds to the encoding of the data. The subband coding diagrams are shown in Figures 5 and 6.
Figure 3: Four-band partitioning of the image frequency spectrum

Figure 4: Seven-band partitioning of the image frequency spectrum
After encoding, transmission, and decoding of the subbands, the image must be reconstructed from them. The decoding process is the reverse of the encoding process. Each subband is upsampled by 2 × 2 and suitably bandpass filtered to eliminate aliased copies of the signal. For the seven-band coding scheme described, this process is done twice: once to reconstruct the LL subband, and again to reconstruct the whole image.
Figure 5: System diagram of four-band partitioning

When the ideal filter characteristics of the subband coding are approximated with FIR filters, the downsampling in the partitioning stage may cause aliasing errors. To remove these errors, the quadrature mirror filter (QMF) technique described in [17, 18, 20] is used; here, a 2-D separable QMF is used. The coefficients of the 1-D 16-tap QMF designed in [17] as 16C are used. The stop band rejection of 16C is 30 dB. Since this filter is symmetrical about its center, its coefficients are listed below only from center to end [17]:

.47211220, .11786660, -.099295500, -.026275600,
.046476840, .0019911500, -.020487510, .0065256660
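To make the filtering step concrete, the following C sketch (an illustration under stated assumptions, not the thesis implementation) expands the eight listed half-filter coefficients into the full symmetric 16-tap lowpass filter, forms a highpass partner by the usual QMF sign alternation (an assumption here), and performs one 1-D analysis step, i.e., filtering followed by downsampling by 2 with zero padding at the borders.

#include <stddef.h>

/* Half of the symmetric 16-tap QMF lowpass filter "16C" [17], center to end. */
static const double half16C[8] = {
    0.47211220,  0.11786660, -0.099295500, -0.026275600,
    0.046476840, 0.0019911500, -0.020487510, 0.0065256660
};

/* Build the full 16-tap lowpass h0 and a highpass partner h1.
 * h0 is symmetric about its center; h1[n] = (-1)^n * h0[n] is a common
 * QMF choice, assumed here rather than taken from the thesis. */
static void build_qmf(double h0[16], double h1[16])
{
    for (int n = 0; n < 8; n++) {
        h0[8 + n] = half16C[n];     /* right half, center outward */
        h0[7 - n] = half16C[n];     /* mirrored left half */
    }
    for (int n = 0; n < 16; n++)
        h1[n] = (n % 2 == 0) ? h0[n] : -h0[n];
}

/* One 1-D analysis step: convolve x (length len) with a 16-tap filter and
 * downsample by 2, writing len/2 outputs. Samples outside x are treated as 0. */
static void analyze_1d(const double *x, size_t len, const double h[16], double *y)
{
    for (size_t m = 0; m < len / 2; m++) {
        double acc = 0.0;
        for (int k = 0; k < 16; k++) {
            long i = (long)(2 * m) - k;
            if (i >= 0 && (size_t)i < len)
                acc += h[k] * x[i];
        }
        y[m] = acc;
    }
}

Applying analyze_1d with the lowpass and highpass filters along rows and then along columns would produce the LL, HL, LH, and HH bands of Figure 3; repeating the step on LL gives the seven-band split of Figure 4.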
3.2 Selecting Subbands

Figure 6: System diagram of seven-band partitioning

Several advantages of subband coding have been suggested [18, 19, 20]. One is the efficiency of subband coding. Another is the capability of applying different coding schemes to each subband. For instance, DCT can be applied to subband 1, while DPCM is applied to the bands of higher frequency. The most significant advantage of subband coding is the way in which quality degrades as subbands are discarded. Even when some of the subbands are not used for decoding, an image of reasonable quality can be obtained. This characteristic of graceful degradation suitably matches capacity-based communications. We look at this characteristic in Section 5.
In order to determine which subbands should be used to attain a specific image quality, the energy of each subband is calculated. Generally, subbands of lower frequency are retained, since they often have more information. In this thesis, the energy of a subband, E, is measured by

\[
E = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} s(i,j)^{2},
\]

where s(i,j) is the intensity of the image at pixel (i,j) of the subband, and M, N are the numbers of rows and columns in the subband.
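A minimal C sketch of this energy measure follows (illustrative only; samples are assumed to be stored row by row).

#include <stddef.h>

/* Energy of a subband with M rows and N columns, samples stored row by row:
 * E = (1/(M*N)) * sum of s(i,j)^2, matching the definition above. */
double subband_energy(const double *s, size_t M, size_t N)
{
    double sum = 0.0;
    for (size_t k = 0; k < M * N; k++)
        sum += s[k] * s[k];
    return sum / (double)(M * N);
}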
4 Resource Management

In this section, we first define QOS parameters closely associated with video. Then, we detail the implementation of QOS support in ARTS utilizing CBSRP.

4.1 Quality of Service

4.1.1 Motivation

Two parameters, a period and a per-period data size, determine the quality of video. Instead of the period, its reciprocal, the rate, can be used. The per-period data size is closely related to the compression ratio. A smaller per-period data size enforces a higher compression ratio, which will cause the quality of the media to degrade. In this sense, the per-period data size corresponds to the level of resolution of the media. For a receiver and a sender of continuous media, these parameters are as important as other characteristics, such as the packet loss rate or the end-to-end delay, which are common to all communications.
4.1.2 Definition
The user's requested value for the period and the resolution can be quantized into a finite number of classes. We define class 0 as the lowest class, class 1 as the second lowest class, and class (n-1) as the highest class when there are n classes. The number n can be set to any value, but, in many applications, n = 3 is sufficient. Let Cprd and Cres denote the class of the period and the class of the resolution. A higher order Cprd could be assigned to a high-quality video service, while a lower order Cprd might be assigned to a monitoring system. In this thesis, we assign more subbands to a higher order Cres.

Figure 7: Determination of period and class_size
The QOS parameters which the user should pass to the system are

- minimum Cres, maximum Cres,
- class_size[MAX_RES],
- minimum Cprd, maximum Cprd,
- period[MAX_PRD],
- Importance,
- allowable end-to-end delay, and
- maximum packet loss rate.

MAX_RES and MAX_PRD are, respectively, the number of classes for the resolution and the period defined in the system. class_size is the data size of each Cres per one period. The Importance is the order of importance among sessions and is used for deciding Cres and Cprd according to the CBSRP described in the next section. We will focus on how Cres and Cprd are decided.

4.1.3 Control Mechanism

Cres and Cprd play a key role both in deciding the initial QOS and in ensuring that the minimum requirements are maintained while the session lasts.

When a session is being created:
The capacity level of the resources needed to send continuous media changes according to class_size[Cres] and period[Cprd]. Figure 7 shows how Cres and Cprd are decided. Let Duser and Puser denote class_size[Cres] and period[Cprd], respectively. The ratio k = Duser/Puser corresponds to the slope of the line l_i in the graph shown in Figure 7. To see how k is chosen, consider the following example. Assume that Duser can have one of two values, 8000 bits per period or 2000 bits per period, and likewise Puser can have one of two possible values, 1 ms or 4 ms. For an FDDI-based network, let the transmission rate be 100 Mbps. Converting the units of Duser to time, we have that Duser may be 80 µs or 20 µs. The possible values for k are then

\[
k = \frac{80\,\mu\text{s}}{1\,\text{ms}} = 0.080, \qquad
\frac{80\,\mu\text{s}}{4\,\text{ms}} = 0.020, \qquad
\frac{20\,\mu\text{s}}{1\,\text{ms}} = 0.020, \qquad
\frac{20\,\mu\text{s}}{4\,\text{ms}} = 0.005.
\]

It is clear that higher values of k, say k = 0.080, correspond to larger packets being transmitted at smaller per-period intervals, leading to a higher utilization of the network resources. Admissible Cres and Cprd are determined according to the user's requested parameters and the available capacity of the system.
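The selection rule sketched above can be made concrete as follows. This C fragment is an illustration only (the function name and the k_avail capacity test are assumptions, not the CBSRP code): it converts Duser to transmission time on a 100 Mbps link, evaluates k = Duser/Puser for every admissible (Cprd, Cres) pair, and keeps the pair with the largest k that still fits, preferring the smallest period on ties.

/* Pick the admissible (Cprd, Cres) pair: the largest k = Duser/Puser that the
 * system still accepts (k <= k_avail), preferring the smallest period on ties.
 * period[] is in seconds, class_size[] in bits; a 100 Mbps link is assumed.
 * Returns 0 on success, -1 if no admissible pair exists. */
static int pick_classes(const double *period, int min_prd, int max_prd,
                        const double *class_size, int min_res, int max_res,
                        double k_avail, int *out_prd, int *out_res)
{
    double best_k = -1.0;
    int found = 0;
    for (int p = min_prd; p <= max_prd; p++) {
        for (int r = min_res; r <= max_res; r++) {
            double k = (class_size[r] / 100e6) / period[p]; /* Duser as time / Puser */
            if (k > k_avail)
                continue;                   /* exceeds the available capacity */
            if (!found || k > best_k ||
                (k == best_k && period[p] < period[*out_prd])) {
                best_k = k;
                *out_prd = p;
                *out_res = r;
                found = 1;
            }
        }
    }
    return found ? 0 : -1;
}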
As explained above, operating under a larger slope requires a higher capacity. When the user specifies

period[maximum Cprd] = p4,
period[minimum Cprd] = p2,
class_size[maximum Cres] = d4,
class_size[minimum Cres] = d2,

the pair (period[Cprd], class_size[Cres]) may be chosen to be the point which has the largest Duser/Puser that the system allows among A, B, C, D, E, F, B', C', and D'. Since more than one point may lie on the same line Duser/Puser = k, there is no unique way to decide the admissible point. We choose the smallest period possible. Either A, B, C, D, or E is the admissible point in this case. The point A requires the highest capacity with these requesting parameters, with B, C, D, and E following in this order. The arrows in the figure illustrate operating points in decreasing order of capacity. P → Q → R shows the possibly admissible Cprd when maximum Cres = minimum Cres = d1. Similarly, X → Y → Z shows the possibly admissible Cres when maximum Cprd = minimum Cprd = p1. If the returned class is higher than the minimum class, then the session is said to be in an excess class.

While the session lasts:

Cres and Cprd may be changed dynamically while the session lasts. A session in an excess class can be forced to reduce its Cres or Cprd when the network bandwidth of the system is exhausted and a request for creating a more important session occurs. However, the minimum class (E, R, Z in Figure 7) is guaranteed. A user who desires only the maximum Cres and Cprd must set the same value for both the minimum class and the maximum class in each parameter (S in Figure 7).

The consequence of providing these QOS parameters is to divide the users into two groups, one requiring strict-quality service and another being satisfied with lower-quality service.
4.2 Capacity-Based Session Reservation Protocol

CBSRP [34] is designed to minimize the variance of the delay for stable video communication in a LAN. CBSRP provides guaranteed performance by reserving the buffers, processor, and network bandwidth necessary for end-to-end communication. In addition, resource management keeps resource allocation in check so that no shortage of resources will occur. By pre-allocating capacity to time-critical activities, the extra delay and overhead caused by resource contention are reduced.

A real-time session is a unidirectional communication path between a sender and a receiver with guaranteed performance. The sender uses an established real-time session and delivers data to a remote receiver object. Each session is distinguished by a unique session identifier, registered at both sender and receiver.

Periodic real-time sessions can be distinguished by the periodic interarrival time by which the data must reach the receiver. In general, delivered messages must pass through several domains before reaching the receiver: the sender's protocol processing domain, the network domain, and the receiver's protocol processing domain. In order to finish delivery within a bounded delay, processing and delivery in each domain must be completed within a predetermined hard deadline, and these per-domain deadlines must sum to less than or equal to the expected end-to-end delay. If processing within each domain is schedulable, the total delay of end-to-end communication is bounded. We use the deadline monotonic model [33], based on the period of the task and the worst case execution time. If a task is schedulable under the deadline monotonic policy, its deadline or delay bound within a domain can be met.
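For background, a common sufficient test under the deadline monotonic model is response-time analysis: for each task, taken in order of increasing deadline, iterate the response time to a fixed point and compare it with the deadline. The C sketch below illustrates this standard test; it is not the ARTS scheduler itself, and the task representation is an assumption.

#include <math.h>

struct task {
    double C;   /* worst case execution time */
    double T;   /* period */
    double D;   /* deadline, D <= T; tasks sorted by increasing D */
};

/* Response-time analysis for deadline monotonic scheduling.
 * Returns 1 if every task meets its deadline, 0 otherwise. */
static int dm_schedulable(const struct task *t, int n)
{
    for (int i = 0; i < n; i++) {
        double R = t[i].C, prev = 0.0;
        while (R != prev && R <= t[i].D) {
            prev = R;
            R = t[i].C;
            for (int j = 0; j < i; j++)              /* interference from tasks   */
                R += ceil(prev / t[j].T) * t[j].C;   /* with shorter deadlines    */
        }
        if (R > t[i].D)
            return 0;
    }
    return 1;
}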
4.3 ARTS/FDDI

ARTS is a distributed real-time operating system being developed at CMU [30, 31]. The objective of ARTS is to develop and verify advanced real-time computing technologies. In ARTS, the basic computational entity is represented as an object. Each object can invoke both periodic and aperiodic threads. ARTS suitably supports the QOS mentioned above and thus can incorporate the resource managers.

FDDI provides two types of service: synchronous service and asynchronous service. With synchronous service, access time can be bounded while the total synchronous transmission time is equal to or less than the target token rotation timer (TTRT). This is suitable for periodic transmission of video. Although an Ethernet LAN can make use of CBSRP by estimating the amount of all the traffic, the capability of segregating non-real-time traffic from real-time traffic in FDDI makes it much easier to implement CBSRP. FDDI also suitably supports the QOS mentioned above, since it can maintain the synchronous service time allocated to all sessions.

4.3.1 Protocol and Conversion of QOS

The Session Manager and Network Resource Manager use the Real-time Transport Protocol (RTP) [30] for remote invocation. RTP provides reliable, synchronous, prioritized messages. Because session management does not need guaranteed performance, its operations are delivered with best effort on a non-real-time channel. Non-real-time traffic is sent via the FDDI asynchronous service. Video data is generated periodically at high speed. Because of the high arrival rate and short life span of the data, it is more important to keep up with its tight timing constraints than to ensure the successful delivery of every arriving datum. Video communication needs a light protocol with high throughput rather than a reliable protocol which provides retransmission. The User Datagram Protocol (UDP) [13] with FDDI synchronous service is thus selected, with performance guaranteed by the session established through CBSRP. Figure 10 shows the selection of protocols.
The period of the user and the period of the media access control (MAC) layer are not always the same. Therefore a per-period data size at the MAC level, DMAC, should be recalculated based on the MAC period PMAC, Puser, and Duser. Since the maximum length of one FDDI frame is 4500 bytes, fragmentation may be necessary. Each fragmented packet needs Dproc (= 56 bytes), a header and a trailer, for protocol processing. Therefore the maximum user data size per frame, Dumax, should not be more than 4500 bytes - Dproc. To avoid a higher degree of complexity, the following restriction is imposed on Puser:
\[
P_{user} = N \cdot P_{MAC} \quad \text{or} \quad P_{user} = P_{MAC}/N,
\]

where N is a positive integer. The parameter DMAC is calculated as follows:

\[
D_{MAC} = \left\lceil \frac{D_{user}/N}{D_{umax}} \right\rceil D_{proc} + \frac{D_{user}}{N}
\quad \text{when } P_{user} = N \cdot P_{MAC}, \quad \text{and}
\]
\[
D_{MAC} = N \left\lceil \frac{D_{user}}{D_{umax}} \right\rceil D_{proc} + N\,D_{user}
\quad \text{when } P_{user} = P_{MAC}/N.
\]

Once DMAC is calculated, the user's requested capacity Duser/Puser can be expressed in the one-dimensional measure DMAC. Thus, the pair of Cres and Cprd is converted to a one-dimensional class Csession. Csession and DMAC are used for dynamic QOS control in CBSRP.
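A small C sketch of this conversion is given below (illustrative only; the constants follow the values quoted later in the text, Dumax = 4096 bytes and Dproc = 56 bytes).

#define D_UMAX 4096L   /* maximum user data per FDDI frame, as quoted in Section 5.3.1 */
#define D_PROC 56L     /* per-fragment header and trailer */

/* Convert a per-user-period data size to a per-MAC-period size DMAC.
 * d_user: bytes per user period; p_user, p_mac: periods in the same time unit,
 * one assumed to be an integer multiple of the other. */
long dmac(long d_user, long p_user, long p_mac)
{
    if (p_user >= p_mac) {                              /* Puser = N * PMAC */
        long n = p_user / p_mac;
        long per_mac = d_user / n;                      /* data carried per MAC period */
        long frags = (per_mac + D_UMAX - 1) / D_UMAX;   /* ceiling division */
        return frags * D_PROC + per_mac;
    } else {                                            /* Puser = PMAC / N */
        long n = p_mac / p_user;
        long frags = (d_user + D_UMAX - 1) / D_UMAX;
        return n * (frags * D_PROC + d_user);
    }
}

With these constants the sketch reproduces the figures used in Section 5.3.1; for example, 8 KBytes per 30 msec user period with PMAC = 30 msec gives 8304 bytes, and 104 KBytes per 10 msec period gives 323856 bytes.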
4.3.2 Sender and Receiver
In ARTS, the primitives for sending and receiving video are realized in the user application as a sender and a receiver. Session creation may be invoked by either the sender or the receiver. Once a session is established, a periodic thread is created for the sender and another for the receiver through a user library. These threads are created for each individual session and terminated when the session ends.

To conform with CBSRP, the sender and the receiver use the library functions briefly described as follows.

rval = Session_Create(session_id, mpid, session, abort, relax)
rval = Session_Close(session_id)
rval = Session_Reconfig(session)

u_long *session_id;
PID *mpid;                   /* monitor thread pid */
struct session_dsc *session;
int *abort;
int *relax;

typedef struct session_dsc {
    u_long  spid;            /* session id, if known */
    OID     sp_roid;         /* remote session manager oid */
    OID     sp_serv_id;      /* sender id */
    u_long  sp_deadline;     /* suggested deadline */
    u_short max_res;         /* the maximum Cres */
    u_short min_res;         /* the minimum Cres */
    u_long  class_size[MAX_RES];
    u_short max_prd;         /* the maximum Cprd */
    u_short min_prd;         /* the minimum Cprd */
    u_long  period[MAX_PRD];
    u_short present_res;     /* present Cres */
    u_short present_prd;     /* present Cprd */
} SM_DSC;

Session_Create, Session_Close, and Session_Reconfig are used to create a session, to terminate a session, and to modify the QOS parameters of a session, respectively. The data structure session_dsc includes the QOS and other control information. The value rval is a return value which indicates whether or not the request has succeeded.
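For illustration, a hypothetical call sequence using these functions might look as follows. It relies on the declarations listed above; the parameter values are loosely modeled on the sessions of Section 5.3.1, and the exact calling convention and error handling are assumptions rather than ARTS code.

/* Hypothetical usage of the session library (sketch only). */
SM_DSC dsc;
u_long sid;
PID mpid;
int abort_flag = 0, relax = 1;      /* allow relaxation down to the minimum class */
int rval;

dsc.max_res = 2;  dsc.min_res = 0;
dsc.class_size[0] = 8 * 1024;       /* bytes per period for Cres = 0 */
dsc.class_size[1] = 16 * 1024;
dsc.class_size[2] = 24 * 1024;
dsc.max_prd = 0;  dsc.min_prd = 0;
dsc.period[0] = 30;                 /* msec */

rval = Session_Create(&sid, &mpid, &dsc, &abort_flag, &relax);
if (rval >= 0) {
    /* ... send video frames periodically on the established session ... */
    Session_Close(&sid);
}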
4.3.3 Resource Managers

The SM, SRM, and NRM are also realized as kernel objects. One SM and one SRM exist in each host to deal with all the sessions on that host. The communications between a sender, a receiver, an SM, and the NRM are shown in Figure 11. Besides these messages, there are messages, Buffer_Check and Schedulability_Check, between an SM and an SRM that check the availability of buffers and CPU execution time.

The functions of the SM are described in pseudo code in Figure 8. The message received by Accept is processed according to the type of the message. The reply to the message is sent by Reply. Request sends a message to another object and waits for the reply corresponding to this message. The messages Session_Create, Session_Close, and Session_Reconfig are sent from a sender and a receiver depending on their request. The messages from a remote SM, namely Session_Connect, Session_Abort, and Session_Recalc, are requests for creating a session, terminating a session, and reconfiguring a session, respectively. The SM sends a message to a sender when it receives Network_Change from the NRM for changing the session parameters associated with that sender. Buffer_Check and Schedulability_Check are done in cooperation with the SRM.

The scheme for network bandwidth control in the NRM is shown in pseudo code in Figure 9. Accept, Reply, and Receive have the same functions as those in the pseudo code for the SM. In the figure, BW[class], Max_class, Min_class, and Importance represent the network bandwidth corresponding to a Csession class, the maximum Csession, the minimum Csession, and the Importance of the requesting session, which are passed from an SM. Used_BW is the synchronous bandwidth consumed by all sessions and Max_BW is the maximum synchronous bandwidth. Excess_BW[s_id] is the difference between BW[present Csession] and BW[minimum Csession] of a session s_id. The Pop_Session_Excess function returns a session ID from a linked list which maintains the information on sessions in excess classes. The linked list is queued in ascending order of Importance.

When the NRM receives a Network_Add message from an SM, it seeks the highest Csession possible for the remaining synchronous bandwidth. If even the minimum Csession cannot fit into the remaining synchronous bandwidth, the NRM reduces the Csession of sessions in an excess class by sending a request for bandwidth reduction to the SM associated with each such session. The Network_Change message is sent by the NRM to the sending SM whose session must be forced to reduce its Csession. When the NRM receives the Network_Release of a session, it deallocates the network bandwidth of that session.

4.4 Sequence of QOS Control

Figure 12 shows how the user's Session_Create request is accepted with cooperation between the SMs and the NRM. For each message, (RQ) and (RP) stand for a request and a reply, respectively. In this case, after the request comes from receiver 2, the NRM judges that there is no available network bandwidth
while(1) {
    if(Accept(object, message, &param) < 0) continue;
    else {
        switch(message) {
        case Session_Create:
            Calculate Csession and its DMAC;
            Buffer_Check;
            Schedulability_Check;
            Request(remote_SM, Session_Connect, &param);
            break;
        case Session_Close:
            Dealloc_Resource;
            Request(remote_SM, Session_Abort, &param);
            break;
        case Session_Connect:
            Buffer_Check;
            Schedulability_Check;
            Request(NRM, Network_Add, &param);
            if(there is any session whose class should be reduced)
                Request(sender, Network_Change, &param);
            Include acquired Csession in returning values;
            break;
        case Session_Abort:
            Dealloc_Resource;
            break;
        case Session_Reconfig:
            Calculate Csession and its DMAC;
            Buffer_Check;
            Schedulability_Check;
            Request(remote_SM, Session_Recalc, &param);
            break;
        case Session_Recalc:
            Buffer_Check;
            Schedulability_Check;
            Request(sender, Capacity_Change, &param);
            break;
        case Network_Change:
            Request(sender, Capacity_Change, &param);
            break;
        }
        Reply(object, &rval);
    }
}

Figure 8: Functions of SM
while(1) {
    if(Accept(object, message, &param) < 0) continue;
    else {
        switch(message) {
        case Network_Add:
            class = Max_class;
            while( class >= Min_class ) {
                if( BW[class] + Used_BW < Max_BW ) {
                    Used_BW += BW[class];
                    Reply(object, &class);
                }
                else class--;
            }
            Left_Capacity = Max_BW - Used_BW;
            s_num = 0;
            while( Left_Capacity < BW[Min_class] ) {
                s_id[s_num] = Pop_Session_Excess;
                if( Importance[s_id[s_num]] > Importance )
                    Reply(object, &FAIL);
                Left_Capacity += Excess_BW[s_id[s_num]];
                s_num++;
            }
            Used_BW += BW[Min_class] - Left_Capacity;
            for( i = 0; i < s_num; i++ ) {
                if( host[s_id[i]] <> object )
                    Request( host[s_id[i]], Network_Change, &param );
                else
                    Include s_id[i] in rval;
            }
            Reply(object, &rval);
            break;
        case Network_Release:
            Used_BW -= BW[class];
            Reply(object, &rval);
            break;
        }
    }
}

Figure 9: Bandwidth control in NRM
Figure 10: Protocol layers

Figure 11: Communications
unless the Csession of a session with Sender 1 is forced to be reduced. Therefore a Network_Change message is sent from the NRM to the SM on the node where Sender 1 resides. In other circumstances, when there is enough network bandwidth, or none at all, the NRM may immediately reply to the SM. In the figure, Node 3 can be identical to Node 1 or Node 2, or may be another host, since only one NRM exists in one network.

Figure 12: Sequence of communications for QOS reduction
5 Results
5.1 Subband Coded Data

A coding simulation for subband coding was carried out for two monochrome images with 8-bit-per-pixel (bpp) gray scales. One is an image of size 256 × 256, shown in Figure 13. The other is an image of size 128 × 128, displayed in Figure 14.
First, the influence of the chosen bands was investigated. Table 1 shows the MSE and the energy of the reconstructed test image 1. As can be seen, band 1 is the most dominant of all bands. All other bands show energy less than 10% of band 1 (after exclusion of the DC component in band 1), and the total energy in these bands is less than 25% of the energy in band 1 (after exclusion of the DC component). Thus, band 1 is indispensable for maintaining the information of the original image. Decreasing the MSE then has to be done by adding several bands to band 1. The MSE becomes lower as the number of bands increases. In test image 1, band 3 decreases the MSE more than band 2. Table 3 contains the measured MSE of the reconstructed image for test image 2. In test image 2, band 1 is once again the most influential subband. In contrast to test image 1, here the inclusion of band 2 decreases the MSE more than the inclusion of band 3 does. It appears that the relative importance of band 2 versus band 3 is image dependent.

Next, the MSE for the results of subband coding and DCT were compared. The measurement of the MSE was only done for different combinations of subbands, i.e., none of the bands was compressed. With each band, we can employ other coding schemes for further compression. Although vector quantization or DPCM can be applied, non-uniform scalar quantization based on the LBG algorithm [21] was used here because of its simplicity.
Table 1: MSE and energy of bands and combinations of bands for test image 1

Bands                    MSE        Energy
Original image           0          21800.0
1                        396.5      21138.9
1 without DC component   20904.3    1664.9
2                        21733.7    80.6
3                        216687.7   156.5
4                        21764.5    36.5
5                        21759.1    44.5
6                        21757.2    76.9
7                        21788.3    10.3
1+2                      320.5      21035.6
1+3                      240.3      21077.9
1+2+3                    164.2      21145.5
1+2+3+4                  128.9      21351.1
1+2+3+4+5                78.6       21389.9
1+2+3+4+6                58.8       21443.3
1+2+3+4+5+6              14.6       21404.5
Each 8-bit value in a subband is reduced to a 4-bit representation with non-uniform scalar quantization, which the LBG algorithm performs by treating each pixel as a 1 × 1 block. In Table 2, the MSE of the reconstructed image for test image 1 with bands 1, 2, 3, and 4 is shown to be 147.4; 1c, 2c, 3c, and 4c stand for the compressed data of bands 1, 2, 3, and 4, respectively. On the other hand, the image coded by DCT at 1 bpp shows an MSE of 155.4. For test image 2, the corresponding MSEs are 62.9 for subband coding and 64.4 for DCT. Therefore, in terms of the quality measured by MSE, subband coding is comparable to DCT.
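As an aside, a minimal sketch of such a non-uniform scalar quantizer trained with the LBG (generalized Lloyd) iteration is shown below, mapping 8-bit samples to 16 levels (4 bits). It is a generic illustration of the algorithm in [21], not the code used for the experiments.

#include <stddef.h>

/* Train a 16-level (4-bit) non-uniform scalar quantizer on 8-bit samples with
 * the LBG / generalized Lloyd iteration: assign each sample to its nearest
 * codeword, then move each codeword to the centroid of its cell. */
void lbg_scalar(const unsigned char *x, size_t n, double level[16], int iters)
{
    for (int l = 0; l < 16; l++)            /* initial codebook: uniform levels */
        level[l] = 8.0 + 16.0 * l;

    for (int it = 0; it < iters; it++) {
        double sum[16] = {0};
        size_t cnt[16] = {0};
        for (size_t k = 0; k < n; k++) {    /* nearest-codeword assignment */
            int best = 0;
            double bestd = 1e30;
            for (int l = 0; l < 16; l++) {
                double d = (x[k] - level[l]) * (x[k] - level[l]);
                if (d < bestd) { bestd = d; best = l; }
            }
            sum[best] += x[k];
            cnt[best]++;
        }
        for (int l = 0; l < 16; l++)        /* centroid update */
            if (cnt[l] > 0)
                level[l] = sum[l] / (double)cnt[l];
    }
}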
Furthermore, subband coding enables us to raise or lower the bit rate by appropriately choosing the number of subbands used for decoding. The five images shown in Figures 15-19 correspond to the five rows in Table 2. By increasing or decreasing the number of bands, quality is easily adjusted. This feature of subband coding provides a way of coding such that a part of the image may have higher quality by using extra bands. The lower-left quarter of the test image is more accurate than the other areas in Figure 20. The accurate part consists of bands 1, 2, 3, and 4, while the remainder is decoded from band 1 only. Thus, if the user can specify the portions that require greater accuracy, subband coding helps to reduce the total size of the image by allowing several bands to be omitted in other areas.

Figure 13: Test image 1 (256 × 256)

Figure 14: Test image 2 (128 × 128)
Table 2: MSE vs. compressed bands for test image 1

Bands          bpp     MSE
1c             0.25    410.0
1c+2c          0.50    335.6
1c+3c          0.50    256.0
1c+2c+3c       0.75    181.0
1c+2c+3c+4c    1.00    147.4
Figure 15: Band 1c after compression at 0.25 bpp

Figure 16: Band 1c+2c after compression at 0.5 bpp

Figure 17: Band 1c+3c after compression at 0.5 bpp

Figure 18: Band 1c+2c+3c after compression at 0.75 bpp

Figure 19: Band 1c+2c+3c+4c after compression at 1.0 bpp

Figure 20: The image with the lower left corner at high accuracy (1 bpp) and the remaining portions at lower accuracy (0.25 bpp)
Table 3: MSE depending on bands for test image 2

Bands          bpp    MSE
1              0.5    171.2
1+2            1.0    89.4
1+3            1.0    151.0
1+2+3          1.5    69.6
1+2+3+4        2.0    35.7
1+2+3+4+5      4.0    15.0
1+2+3+4+6      4.0    27.8
1+2+3+4+5+6    6.0    7.1
Table 4: MSE vs. compressed bands of test image 2

Bands          bpp     MSE
1c             0.25    189.5
1c+2c          0.50    109.3
1c+2c+3c       0.75    89.9
1c+2c+3c+4c    1.00    62.9

5.2 Cost of the Resource Management
In this section, the cost of the resource management is measured using two SONY NEWS-1720 workstations (25 MHz MC68030) with an FDDI adapter board, the SONY IKX-378 (AMD SUPERNET chip set), and a timer board. The timer consists of several counter TTL ICs with an accurate clock. This timer enabled us to measure the cost of the resource management with a resolution of 1 µs.

Figure 21: System configuration
Let TCr, TCn, TAdd, TCl, TAbr, and TRl be the times from the acceptance of a request to the sending of a reply for Session_Create, Session_Connect, Network_Add, Session_Close, Session_Abort, and Network_Release, respectively. In the system shown in Figure 21, with no background traffic, two cases were examined. One is the case in which the sending host contains the network resource manager (NRM), and thus the Network_Add communication is done on the same host. In the other case, where the NRM is not on the sending host, the Network_Add communication is done through the network. In Tables 5 and 6, the former is Case I and the latter is Case II. The number of negotiations, Nneg, shows how many established sessions are forced to reduce their classes to the minimum by the NRM.

The difference between the two cases appears in TCn, because Case II needs to send Network_Add messages over the network. The difference is small. Although the communication grows as the number of negotiations increases, this is not an obstacle to the continuation of video transmission, since it does not affect the end-to-end delay of video data, only the time needed for control.
Table 5: Consumed time for resource management: Session Creation

          Case I (msec)            Case II (msec)
Nneg      0       1       2        0       1       2
TCr       86.5    114.3   142.9    90.2    118.5   146.5
TCn       70.9    98.7    127.0    74.6    102.6   131.0
TAdd      22.4    22.4    22.4     21.1    20.9    21.1
Table 6: Consumed time for resource management: Session Release

          Case I (msec)    Case II (msec)
TCl       58.6             65.2
TAbr      51.2             42.2
TRl       50.0             50.0

5.3 Effect of Resource Managers
In this section it is assumed that there is no overhead in capturing and displaying video, so that the effect of protocol processing will be clear. We simulated the video traffic because we did not have the capacity to create real-time video traffic; this would require capturing and displaying video, which takes a considerable amount of time (Appendix). Two cases, with and without resource managers, are compared. The interframe times of arriving frames are measured using three SONY NEWS-1720 workstations (25 MHz MC68030) with the FDDI adapter board SONY IKX-378 (AMD SUPERNET chip set) and the timer board described in the previous section. Each workstation is used to send or receive periodic data, or to generate background traffic (Figure 21).

5.3.1 Network Bandwidth Allocation
To simulate the video traffic, two sessions, S1 and S2, are created. In each session, the period is fixed and only Cres is controlled by the resource managers, in order to simply verify the functions of the managers. The requesting parameters for these sessions are

maximum Cres = 2
minimum Cres = 0
class_size[0] = 8 KBytes
class_size[1] = 16 KBytes
class_size[2] = 24 KBytes
maximum Cprd = minimum Cprd = 0
period[0] = 30 msec
Importance = 10.

To determine the effect of network bandwidth allocation, four cases were compared:

- Case 1: no background traffic
- Case 2: with background traffic 1, without the NRM
- Case 3: with background traffic 1, with the NRM
- Case 4: with background traffic 2, with the NRM.

The requesting parameters of background traffic 1 are

maximum Cres = 0
minimum Cres = 0
class_size[0] = 104 KBytes
maximum Cprd = minimum Cprd = 0
period[0] = 10 msec
Importance = 5,

while the requesting parameters of background traffic 2 are

maximum Cres = 1
minimum Cres = 0
class_size[0] = 64 KBytes
class_size[1] = 104 KBytes
maximum Cprd = minimum Cprd = 0
period[0] = 10 msec
Importance = 12.
Background traffic is generated by the periodic transmission of frames with a dummy destination address. The dummy frames are generated before the traffic is initiated, to avoid any overhead due to protocol processing. For these four cases, the arrival times of the FDDI frames were measured at the receiving host. The background traffic is generated first, and then the other sessions are created.

We set the requesting parameters to values which clearly highlight the characteristics of each case. The TTRT value is set to 30 msec and Max_BW is set to 27.5 msec, which corresponds to

343750 bytes = 100 Mbps / 8 (bits/byte) × 27.5 msec.

The values of Importance are chosen to distinguish the cases in which the Importance of S1 or S2 is higher than that of the background traffic from the opposite case. Since Dumax is set to 4096 bytes, the DMAC of S1 or S2 at Cres = 0, 1, and 2 is 8304 bytes, 16608 bytes, and 24912 bytes, respectively. The DMAC of background traffic 1, and that of background traffic 2 at Cres = 0, are 323856 bytes and 199296 bytes, respectively.
Time 0 is the time the first frame is received at the receiving host. In Figures 22 through 24, ○ indicates the arrival of a 4K-byte S1 frame and □ indicates the arrival of a 4K-byte S2 frame. S1 data and S2 data are correctly delivered every 30 msec in Case 1. However, in Case 2, due to the heavy traffic, in excess of the network's capacity, the data are no longer delivered every 30 msec and the delay increases. This is so since

323856 + 2 × 24912 > 343750.

When the NRM is working, in Case 3, both S1 and S2 are reduced in class, and transmission of both is able to continue every 30 msec. The classes Cres are selected such that

(DMAC of background traffic 1) + (DMAC of S1) + (DMAC of S2) < Max_BW.

In Case 3,

323856 + 2 × 8304 < 343750

and

323856 + 8304 + 16608 > 343750.

Therefore Cres is set to 0 both for S1 and S2. In Case 4, the NRM suppresses the class of the background traffic so that S1 and S2 are able to continue transmission at the maximum class level. The following condition is then ensured:

199296 + 2 × 24912 < 343750.

It is clear that the NRM avoids the fatal error of either a long accumulated delay or a suspension of transmission. Moreover, as shown in Case 4, the NRM enables sessions with higher Importance to acquire the network bandwidth of other sessions.
5.3.2 Allocation of Protocol Processing Time

The UDP processing time is shown in Table 7. The figures in the "Transmission" column of Table 7 were measured from the entry of the udp_output routine till the completion of the Transmit Request to the FDDI hardware device¹. Similarly, the figures in the "Receive" column were measured from the interrupt of the "FDDI Frame Receive" till the end of the udp_input routine². Four-byte data is chosen to evaluate the common expense incurred regardless of the data size, because zero-byte data cannot be sent. To calculate the total communication time in user mode, we need to add the overhead due to transmission (480 µs) and due to receiving (500 µs). All the numbers indicated in Table 7 are the maximum measured values among the 10 samples.

With this measured result, the execution time of sending UDP packets, Tsend [µs], is evaluated as

\[
T_{send} = (2110 + 480)\left\lceil \frac{D}{4096} \right\rceil
          + 0.337\left(D - 4096\left\lceil \frac{D}{4096} \right\rceil\right),
\]

where D is the size of the data in bytes. When the session is created, the execution time is calculated based on this evaluation and the affordable class is decided by the scheduling check criterion.

¹ udp_output calls the ip_output routine, which calls the fn_output routine, where the Transmit Request to the FDDI hardware device is issued after the data is arranged into an FDDI frame.
² udp_input is called by the ip_input routine, which is called by fn_input, invoked by the "FDDI Frame Receive" interrupt.
Figure 22: Timing of receiving frames for Case 1

Figure 23: Timing of receiving frames for Case 2

Figure 24: Timing of receiving frames for Case 3

Figure 25: Timing of receiving frames for Case 4
Table 7: UDP processing time

Data Size (bytes)    Transmission (µs)    Receive (µs)
4                    730                  625
1K                   1070                 1180
2K                   1409                 1425
3K                   1760                 1900
4K                   2110                 2202
Consider the case where there exists a periodic thread, Th1, with a 10 msec period and a 6 msec worst case execution time (WCET), and a new session of video communication, Th2, is requested (Figure 26). We create two sessions, S1 and S2, with the same parameters:

class_size[0] = 8 KBytes
class_size[1] = 16 KBytes
class_size[2] = 24 KBytes.

In Figures 27 and 28, ○ indicates the arrival of a 4K-byte S1 frame and □ indicates the arrival of a 4K-byte S2 frame. Figure 27 shows the case without the schedulability check: 16 KBytes of data are sent every 30 msec, causing a deadline miss. Figure 28 shows the example with the schedulability check.

Since the UDP processing of 8K-byte data takes about 5 msec, for two sessions in class_size[2] together with the other periodic thread,

5 msec × 2 × 3 + 6 msec × 3 > 30 msec.

Thus, in Figure 27, the data is not delivered correctly every 30 msec. But with the schedulability checker, transmission is properly done every 30 msec, as shown in Figure 28.

From this result, it is apparent that the schedulability check is necessary for reserving the minimum quality.
6 Future Work

In this work, actual transmission of real-time generated, subband-coded image packets was not examined. To perform this, a hardware subband coder and decoder are required. If available, this can be tested with the present resource managers in the ARTS testbed.

Figure 26: Video sending thread with another thread
Figure 27: Timing of receiving frames for Case 5
Variable bit rate (VBR) coding for each subband was not addressed, either. If VBR is used, the data size changes at every period, and thus the resource management becomes more complicated. VBR-coded video will be important in ATM switches, which are expected to be adopted as a standard for multimedia services in Broadband ISDN.

An alternative approach to the use of subband coding is possible. If the operating system and the network support a sufficient number of priorities, each subband can be assigned a different priority according to its relative importance. This guarantees that lower priority bands will have a higher loss probability.

We have not addressed these issues. They remain for future work.
Figure 28: Timing of receiving frames for Case 6
7 Conclusions

The thesis examined two issues of real-time video data transmission: resource management and subband coding for video compression. CBSRP with a QOS control extension proves useful for video data transmission. It is desirable that user demands be met when communication resource availability is considered. Regarding network bandwidth allocation, our results show that dynamic control based on Cres effectively reduces the possibility of network congestion. The results of the thesis also show that performing a schedulability check before session creation ensures predictable execution of protocol processing. If subband coding is used, realizations of Cres are easily implementable, and a higher order Cres can be obtained with more subbands. The subband coding simulation experiments we have carried out show that effective assignment of Cres can be done in a way in which subbands of higher frequency are added to a higher order Cres. Although current standards on image or video coding [22, 23] make use of the DCT, subband coding is preferable from the viewpoint of QOS control in computer networks, to adjust the quality of images to the available resources. Defining QOS parameters is meaningful only if the system has the capability to manage resources according to these parameters, as for example in the case of ARTS with an FDDI network.

The combination of subband coding and resource management presented in this thesis can be used to realize predictable and reliable video communications in computer networks.
8 Acknowledgments

I would like to thank Prof. José Moura for providing me with the motivation to explore video transmission, and for his continued guidance and interest-stimulating discussions. Special thanks to Dr. Nikhil Balram for his gracious help with image processing. I am very grateful to Dr. Hideyuki Tokuda for allowing me to work with ARTS and for providing insight into the actual implementation of computer communications. I am also indebted to many members of the ARTS project. Thanks to Stefan Savage for providing an FDDI driver and to Stephen Chou for his expertise in programming on ARTS. Many thanks to those who assisted in preparing this manuscript. Finally, I would like to thank my wife for her support.
References
[1] D. Ferrari and D. C. Verma, "A Scheme for Real-Time Channel Establishment in Wide-Area Networks," IEEE J. Select. Areas Commun., vol. 8, no. 3, pp. 368-379, April 1990.

[2] D. Ferrari, "Client Requirements for Real-Time Communication Services," Network Working Group, Request for Comments 1193, Nov. 1990.

[3] H. M. Vin, P. T. Zellweger, D. C. Swinehart, and P. V. Rangan, "Multimedia conferencing in the Etherphone environment," IEEE Computer, pp. 69-79, Oct. 1991.

[4] C. Nicolaou, "An architecture for real-time multimedia communication systems," IEEE J. Select. Areas Commun., vol. 8, no. 3, pp. 391-400, April 1990.

[5] G. Karlsson and M. Vetterli, "Packet Video and Its Integration into the Network Architecture," IEEE J. Select. Areas Commun., vol. 7, no. 5, pp. 739-751, June 1989.

[6] A. A. Lazar, G. Pacifici, and J. S. White, "Real-Time Traffic Measurements on MAGNET II," IEEE J. Select. Areas Commun., vol. 8, no. 3, pp. 467-483, April 1990.

[7] A. A. Lazar, A. Temple, and R. Gidron, "An architecture for integrated networks that guarantees quality of service," Intl. Journal of Digital and Analog Communication Systems, vol. 3, pp. 229-238, 1990.

[8] A. S. Tanenbaum, "Computer Networks," Prentice Hall, second edition, 1989.

[9] M. A. Rodrigues and V. R. Saksena, "Support For Continuous Media In The DASH System," Technical Report, University of California, Berkeley, Oct. 16, 1989.

[10] D. P. Anderson and G. Homsy, "A Continuous Media I/O Server and its Synchronization Mechanism," IEEE Computer, pp. 51-57, Oct. 1991.

[11] R. Govindan and D. P. Anderson, "Scheduling and IPC mechanisms for continuous media," Proc. Symposium on Operating System Principles, Oct. 1991.

[12] H. Shimizu, M. Mera, and H. Tani, "Packet Communication Protocol for Image Services on a High-Speed Multimedia LAN," IEEE J. Select. Areas Commun., vol. 7, no. 5, pp. 782-787, June 1989.

[13] J. Postel, "User Datagram Protocol," Request for Comments 768, Aug. 1980.

[14] A. K. Jain, "Image data compression: A review," Proc. IEEE, vol. 69, no. 3, pp. 349-389, Mar. 1981.

[15] M. Kunt, A. Ikonomopoulos, and M. Kocher, "Second-Generation Image-Coding Techniques," Proc. IEEE, vol. 73, no. 4, pp. 549-574, Apr. 1985.

[16] R. Forchheimer and T. Kronander, "Image coding - from waveforms to animation," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, no. 12, pp. 2008-2022, Dec. 1989.

[17] J. D. Johnston, "A filter family designed for use in quadrature mirror filter banks," Proc. Int. Conf. Acoust., Speech, Signal Processing, pp. 281-284, 1980.

[18] J. W. Woods and S. D. O'Neil, "Subband Coding of Images," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, no. 5, Oct. 1986.

[19] P. H. Westerink, D. E. Boekee, J. Biemond, and J. W. Woods, "Subband Coding of Images Using Vector Quantization," IEEE Trans. Commun., vol. 36, no. 6, June 1988.

[20] M. J. T. Smith and S. L. Eddins, "Analysis/synthesis techniques for subband image coding," IEEE Trans. Acoust., Speech, Signal Processing, vol. 38, no. 8, Aug. 1990.

[21] Y. Linde, A. Buzo, and R. M. Gray, "An algorithm for vector quantizer design," IEEE Trans. Commun., vol. COM-28, pp. 84-95, Jan. 1980.

[22] G. K. Wallace, "The JPEG Still Picture Compression Standard," Comm. of the ACM, vol. 34, no. 4, pp. 30-45, Apr. 1991.

[23] D. Le Gall, "MPEG: A Video Compression Standard for Multimedia Applications," Comm. of the ACM, vol. 34, no. 4, pp. 46-58, Apr. 1991.

[24] R. J. Clarke, "Transform Coding of Images," Academic Press, New York, NY, 1985.

[25] V. M. Bove, Jr. and A. B. Lippman, "Open architecture television," Proc. SMPTE 25th Television Conference, Detroit, Feb. 1991.

[26] American National Standard, FDDI Token Ring Media Access Control (MAC), ANSI X3.139-1987.

[27] American National Standard, FDDI Token Ring Physical Layer Protocol (PHY), ANSI X3.148-1988.

[28] G. Anastasi, M. Conti, and E. Gregori, "TRP: A transport protocol for real-time services in an FDDI environment," Proc. Second IFIP WG6.1/WG6.4 Intl. Workshop on Protocols for High-Speed Networks, Nov. 1990.

[29] ANSI/IEEE Proposed Std P802.6/D12, "DQDB Subnetwork of a Metropolitan Area Network," Feb. 7, 1990, unapproved draft.

[30] H. Tokuda, C. W. Mercer, and T. E. Marchok, "Towards Predictable Real-Time Communication," Proc. of IEEE COMPSAC '89, September 1989.

[31] H. Tokuda and C. W. Mercer, "ARTS Kernel: A Distributed Real-Time Kernel," ACM Operating Systems Review, 23(3), July 1989.

[32] C. L. Liu and J. W. Layland, "Scheduling algorithms for multiprogramming in a hard real-time environment," JACM, vol. 20, no. 1, pp. 46-61, 1973.

[33] B. Sprunt, L. Sha, and J. P. Lehoczky, "Aperiodic Task Scheduling for Hard-Real-Time Systems," Journal of Real-Time Systems, vol. 1, pp. 27-60, 1989.

[34] H. Tokuda, S. T-C. Chou, and M. Morioka, "System Support for Continuous Media with Capacity-Based Session Reservation," submitted for publication.

[35] Y. Tobe, H. Tokuda, S. T-C. Chou, and J. M. F. Moura, "QOS Control in ARTS/FDDI Continuous Media Communications," submitted for publication.

[36] A. S. Krishnakumar and K. Sabnani, "VLSI implementation of communication protocols - a survey," IEEE J. Select. Areas Commun., vol. 7, no. 7, pp. 1082-1090, Sept. 1989.

[37] H. Kanakia and D. R. Cheriton, "The VMTP Network Adaptor Board (NAB): high-performance network communication for multiprocessors," Proc. of SIGCOMM '88, pp. 175-187.

[38] E. A. Arnould, F. J. Bitz, E. C. Cooper, H. T. Kung, R. D. Sansom, and P. A. Steenkiste, "The design of Nectar: a network backplane for heterogeneous multicomputers," School of Computer Science, Carnegie Mellon University, Technical Report CMU-CS-89-101, Jan. 1989.

[39] E. C. Cooper, P. A. Steenkiste, R. D. Sansom, and B. D. Zill, "Protocol implementation on the Nectar communication processor," School of Computer Science, Carnegie Mellon University, Technical Report CMU-CS-90-153, Sept. 1990.

[40] M. Siegel, M. Williams, and G. Rößler, "Overcoming Bottlenecks in High-Speed Transport Systems," Proc. 16th Local Computer Networks Conference, Oct. 1991.

[41] A. C. Weaver, "Making Transport Protocols Fast," Proc. 16th Local Computer Networks Conference, Oct. 1991.

[42] W. A. Doeringer, D. Dykeman, M. Kaiserswerth, H. Rudin, and R. Williamson, "A survey of light-weight transport protocols for high-speed networks," IEEE Trans. Commun., vol. 38, no. 11, pp. 2025-2039, Nov. 1990.
Appendix A: Timing of Video Transmission
We measure the end-to-end delay of one video frame. The end-to-end delay is measured using two SONY NEWS-1720 workstations (25 MHz MC68030) equipped with the FDDI adapter board and the timer board described earlier, and a SONY NWM-254 video board. One workstation sends an image taken from a VCR camera through the video board. The other workstation receives the image and displays it on a CRT display. In order to examine the performance of pure video transmission, no other network traffic and no other threads were created.
Figure 29: End-to-end delay. The stages shown along the time axis are: video capturing of one image, UDP protocol processing for 4K-byte data and FDDI transmission of 4K-byte data on the sending host, UDP processing for 4K-byte data on the receiving host, video displaying of one image, and the total end-to-end delay.
Table 8: Measured end-to-end delay

               320 pix × 240 pix × 16 bit    160 pix × 120 pix × 16 bit
  Data size    153 KBytes                    38 KBytes
  T_vc         97.3 msec                     27.5 msec
  T_usp        2.6 msec                      2.6 msec
  T_np         0.4 msec                      0.4 msec
  T_urp        92.9 msec                     18.1 msec
  T_vd         64.2 msec                     17.6 msec
  T_ee         257.4 msec                    66.2 msec
The total end-to-end delay T_ee is the sum of the delays in several stages. One image is fragmented into 4K-byte data. As shown in Figure 29,

    T_ee = T_vc + T_usp + T_np + T_urp + T_vd,

where T_vc, T_usp, T_np, T_urp, and T_vd are the time to capture one image, the time for User Datagram Protocol (UDP) transmission-processing of 4K-byte data, the time for FDDI transmission of 4K-byte data, the time for UDP receiving-processing, and the time to display the image. Table 8 shows the measured times. The parameter T_np is an expected value computed from the data size; the other values are measured with the timer board. Since T_vc, T_urp, and T_vd are dominant in T_ee and these times are almost proportional to the data size, the end-to-end delay is practically proportional to the data size. For real-time applications, the protocol, which has been implemented here in software, should be efficiently designed and implemented in hardware [36, 37, 38, 39, 40, 41, 42].
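As a quick check of this delay model, the following minimal C sketch (the function names are mine, not from the measurement code) recomputes T_ee from the stage times in Table 8 and shows how the expected T_np for one 4K-byte fragment follows from the nominal 100 Mbit/s FDDI rate.

    #include <stdio.h>

    /* Expected FDDI transmission time (in msec) of one 4K-byte fragment
     * at the nominal 100 Mbit/s FDDI rate. */
    static double t_np_expected(double fragment_bytes)
    {
        return fragment_bytes * 8.0 / 100.0e6 * 1.0e3;   /* about 0.33 msec */
    }

    /* End-to-end delay as the sum of the five stages (all in msec). */
    static double t_ee(double t_vc, double t_usp, double t_np,
                       double t_urp, double t_vd)
    {
        return t_vc + t_usp + t_np + t_urp + t_vd;
    }

    int main(void)
    {
        /* stage times for the 320 x 240, 16-bit image (Table 8) */
        double large = t_ee(97.3, 2.6, 0.4, 92.9, 64.2);
        /* stage times for the 160 x 120, 16-bit image (Table 8) */
        double small = t_ee(27.5, 2.6, 0.4, 18.1, 17.6);

        printf("expected T_np per 4K-byte fragment: %.2f msec\n",
               t_np_expected(4096.0));
        printf("T_ee, large image: %.1f msec\n", large);   /* 257.4 */
        printf("T_ee, small image: %.1f msec\n", small);   /*  66.2 */
        return 0;
    }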