Multimedia-Systems:
Media Server
Prof. Dr.-Ing. Ralf Steinmetz
Dr. L. Wolf, Dr. S. Fischer
TU Darmstadt - Darmstadt University of Technology,
Dept. of Electrical Engineering and Information Technology, Dept. of Computer Science
KOM - Industrial Process and System Communications, Tel.+49 6151 166151,
Merckstr. 25, D-64283 Darmstadt, Germany, Ralf.Steinmetz@KOM.tu-darmstadt.de Fax. +49 6151 166152
GMD - German National Research Center for Information Technology
IPSI - Integrated Publication and Information Systems Institute, Tel.+49 6151 869869
Dolivostr. 15, D-64293 Darmstadt, Germany, Ralf.Steinmetz@darmstadt.gmd.de Fax. +49 6151 869870
Scope
[Course map: Usage — Applications, Learning & Teaching, Design, User Interfaces, ...; Services — Content Processing, Documents, Security, Group Communications, Synchronization, Databases, Media Server, Programming, ...; Systems — Operating Systems, Communications, Quality of Service, Networks; Basics — Compression, Computer Architectures, Image & Graphics, Animation, Video, Audio]
© Ralf Steinmetz
Contents
1. Media Server - Push and Pull Model
Media Server Architecture Components
Scaling of Media Servers - Clusters of Servers
2. Storage Devices
Disk Layout
Placement of Files at Storage Device Level
3. Disk Controller
Redundant Array of Inexpensive Disks
4. Storage Management
4.1 Traditional Disk Scheduling
4.2 Disk Scheduling – Continuous Media - Goals
4.3 Replication (contents and access driven)
5. File System
Example: Video File Server
[Figure: media server architecture — an incoming request and the delivered data pass through: network attachment, content directory, memory mgmt., file system, storage mgmt., disk controller, storage device]
1. Media Server - Push and Pull Model
Media server
• special type of data / file server
• high-volume data transfer
• real-time access
• large files (objects, data units)
Pull model
• client controls data delivery
• suitable for editing content over networks
Push model
• also known as “data pump“
• server controls data delivery
• suitable for broadcasting data
over networks
[Figure: pull model — the client issues a request (r) for each data unit (d) the server delivers; push model — after one initial request the server keeps delivering data units on its own]
Basic models: pull & push
• application's point of view of how to interact with media data
• mixtures possible: application sends a "play list" to the server
• the same server internals apply to both models (i.e. they are not treated separately below)
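The two models differ only in which side drives the transfer loop. A minimal sketch, assuming hypothetical `connection` and `stream` objects standing in for the network attachment and the storage layers:

```python
import time

def pull_delivery(connection, stream):
    """Pull model: the client drives delivery with one request per data unit."""
    for request in connection.requests():        # blocks until the next client request
        connection.send(stream.read_block(request.block_id))

def push_delivery(connection, stream, block_rate_hz):
    """Push model ("data pump"): the server paces delivery itself."""
    period = 1.0 / block_rate_hz
    for block_id in range(stream.num_blocks):
        connection.send(stream.read_block(block_id))
        time.sleep(period)                       # keep the playout rate without per-unit requests
```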
Media Server Architecture Components
[Figure: media server architecture as before — incoming request and delivered data flow through the components listed below]
• network attachment
  • typically a network adapter
• content directory
  • responsible for verifying whether content is available on the media server
  • and whether the requesting client is allowed to access the data
• memory management
  • caching for large data amounts and for performance improvement
• file system
  • handles the organization of the content on the server
  • includes the assignment of sufficient storage space during the upload phase
• storage management
  • abstraction of the driver
  • for disk scheduling policies and the layout of files
• disk controller
  • handles access to data on the storage device
  • characteristics: head movement speed, I/O bandwidth, the largest and smallest units that can be read at a time, and the granularity of addressing (e.g. RAID)
• storage device
Scaling of Media Servers - Clusters of Servers
Motivation
• growth of systems implies replication of
multiple components
Approach
• optimization of each component
• distribution onto possibly heterogeneous components
• cooperation between the distributed components
Issues, e.g.:
the content directory must always be consistent
• internal content directory
once per media server
• external content directory
[Figure: each media server keeps an internal content directory in front of its component stack (network attachment, memory mgmt., file system, storage mgmt., disk controller, storage device); an external content directory spans the cluster]
2. Storage Devices
Storage Devices - Disks and Tapes
Tapes:
• Cannot provide independently accessible streams
• Slow random access
Disks:
• Access times:
• Seek time typically 8 ms magnetic vs. 150 ms optical disk
• CLV vs. CAV:
• Magnetic disks usually have constant rotation speed
• constant angular velocity, CAV
• more space on outside tracks
• Optical disks have varying rotation speed
• constant linear velocity, CLV
• same storage capacity on inner and outer tracks
• Capacity – cost: Optical cheaper than magnetic
• Type of persistence (Rewritable, Write-once, Read-only: e.g. CD-ROM)
Focus here on disks
Disk Layout
determines
• the way in which content is addressed
• how much storage space on the media is
actually addressable and usable
• the density of stored content on the media
multiple track vs. single track (CD)
• changes to data on a single track are expensive
tracks and sectors
• access restricted to the unit of a sector
• unused space of a sector - wasted
Zone bit recording
• motivation: a sector at the outer radius holds the same amount of data, but outer tracks have room for more sectors, i.e. more capacity
• constant angular velocity
• i.e. same access time to inner/outer tracks
• i.e. different read/write throughputs
• used to place
• more popular media (movies) on outer tracks
• less popular ones on inner tracks
[Figure: disk platter with concentric tracks, each divided into numbered sectors (1–8)]
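With constant angular velocity, the sustained transfer rate of a zone grows with its sectors per track. A minimal sketch with illustrative numbers (not from the lecture):

```python
RPM = 7200              # constant angular velocity
BYTES_PER_SECTOR = 512

def zone_throughput_mb_s(sectors_per_track):
    """Sustained read rate of one track: sectors/track x bytes/sector x revolutions/s."""
    return sectors_per_track * BYTES_PER_SECTOR * (RPM / 60) / 1e6

print(zone_throughput_mb_s(60))    # inner zone:  ~3.7 MB/s
print(zone_throughput_mb_s(120))   # outer zone:  ~7.4 MB/s - place popular movies here
```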
Placement of Files at Storage Device Level
[Figure: contiguous placement stores each file in one continuous run of blocks; non-contiguous placement scatters the blocks of the files across the disk]
File (sequence of bytes with special “end of file” symbol) organization
• Contiguous (sequential) placement: stored in the order in which it will be read
• like on tapes
• fewer seek operations during playback, i.e. good for "continuous" access
• less flexibility, problems with changes to the data
• Non-contiguous placement, i.e. blocks scattered across the disk:
• avoids external fragmentation ("holes" between contiguous files)
• data can be used for several streams via references
• long seek operations during playback
3. Disk Controller
Data Placement - Redundant Array of Inexpensive Disks
Motivation
• disks become more and more inexpensive
• better to provide a set of disks instead of one large disk
• i.e. striping
Goals: to enhance storage size AND
• primary: fault tolerance (availability, stability)
  • by redundancy
  • with the additional expense kept as low as possible
• secondary: performance
  • by data striping
  • i.e. to distribute data transparently over multiple disks
  • and to make them appear as a single fast disk
  • for read & write
  • for small and large amounts of data
RAID and multimedia
• RAID helps to improve multimedia data delivery in servers
• RAID was not developed for multimedia servers
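A minimal sketch of the round-robin block mapping behind coarse-grained striping (the disk count and the exact layout are assumptions for illustration):

```python
NUM_DISKS = 4

def stripe(logical_block):
    """Map a logical block number to (disk index, block offset on that disk)."""
    return logical_block % NUM_DISKS, logical_block // NUM_DISKS

# Consecutive logical blocks land on different disks and can be read in parallel:
for lb in range(8):
    print(lb, stripe(lb))   # 0->(0,0), 1->(1,0), 2->(2,0), 3->(3,0), 4->(0,1), ...
```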
Redundant Array of Inexpensive Disks
Granularity of data interleaving
• fine grained
• small units to be interleaved
• any I/O request (regardless of data unit size) involves all disks
• coarse grained
• larger units of data to be interleaved
• small file (total data request) involves only some disks
Method & pattern of distribution of redundant data
• computing redundant information
• most often parity, sometimes Hamming, Reed-Solomon codes
• distribution/location
• to concentrate redundancy on some disks
• to distribute it uniformly
Note: for details see
• e.g. Chen et al., "RAID: High-Performance, Reliable Secondary Storage",
ACM Computing Surveys, Vol. 26, No. 2, June 1994
Non-Redundant (RAID Level 0)
Goal and Usage
• to enhance pure I/O performance
• used in supercomputers
Approach
• data striping among a set of e.g. 4 disks
• a block of data is split and its individual parts are stored on different devices
• 4 disks of 1 GB provide a total capacity of 4 GB
• implementation: e.g. SCSI allows for up to 8 daisy-chained controllers & up to 56 logical units
Performance
• read
  • very good, but mirrored disks may be better (if appropriate schedules are used)
• write
  • best of all RAID levels (no need to update any redundant data)
Mirrored (RAID Level 1)
Goal and Usage
• better fault tolerance
• frequently used in databases (when availability is more important than storage efficiency)
Approach
• mirrored (or shadowed) disks: duplicated data is written to a second disk
• every sector on a primary disk is also stored on a secondary disk
Performance
• read
  • parallel reads increase I/O performance, or the disk with the shorter queue, rotational delay, or seek time can be selected
  • if different controllers are used
• write
  • slowed down (writes to 2 devices simultaneously)
Memory-Style ECC (RAID Level 2)
Goal
• to enhance fault tolerance
• to reduce RAID level 1 HW costs
Approach
• bit striping among various disks plus additional error correction codes
• error detection: single parity disk
• but here error correction is used, with the number of check disks proportional to the log of the number of data disks
  • e.g. 10 data disks with 4 parity disks
  • e.g. 23 data disks and 5 parity disks
Performance
• the minimal amount of data to be transferred is related to the number of disks (one sector on each disk)
• large amounts lead to better performance
• slower disk recovery
Bit-Interleaved Parity (RAID Level 3)
Goal and use
• to enhance fault tolerance
• to reduce RAID level 2 HW costs
• applicable whenever
  • high bandwidth is demanded
  • but not a high I/O rate
Approach
• bit striping across disks
• single parity disk for any group/array of RAID disks
• makes use of the built-in CRC checks of all disks
Performance (similar to RAID level 2)
• slower disk recovery
• no interleaved I/O
• note: disks should be synchronized (reduces seek and rotational delays)
Block-Interleaved Parity (RAID Level 4)
Goal
• to provide fault tolerance
• to enhance RAID level 3 performance in case of a fault
Approach
• sector striping across disks
• parity sectors stored on one disk
Performance
• faster disk recovery possible
• small writes
  • only 2 disks affected (not the whole set)
  • not in parallel (only 1 write per disk group, as the parity disk is involved in each)
• small reads improved
  • from one disk only
  • may occur in parallel
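The reason a small write touches only 2 disks is the XOR parity arithmetic: the new parity follows from the old data, the new data, and the old parity alone. A minimal sketch (function names invented):

```python
def small_write_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """Updated parity sector after overwriting one data sector of the stripe."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# Sanity check on a 3-data-disk stripe: parity stays the XOR of all data sectors.
d0, d1, d2 = b"\x0f", b"\x33", b"\x55"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))
parity = small_write_parity(d0, b"\xff", parity)    # overwrite d0 only
d0 = b"\xff"
assert parity == bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))
```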
Block-Interleaved Distributed Parity (RAID Level 5)
Goal
• to provide fault tolerance
• to remove the write bottleneck of RAID level 4
Approach
• sector striping across disks
• parity information distributed among the disks
Performance
• read & write: allows for parallel operations
• small reads & writes
  • very good: similar to RAID level 1
• large amounts of data
  • very good: similar to RAID levels 3 and 4
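A minimal sketch of one common parity rotation (the "left-symmetric" convention; the lecture does not fix a particular layout), which moves the parity sector to a different disk on each stripe so that no single disk becomes a write bottleneck:

```python
NUM_DISKS = 5

def parity_disk(stripe: int) -> int:
    """Disk holding the parity sector of the given stripe."""
    return (NUM_DISKS - 1 - stripe) % NUM_DISKS

for stripe in range(6):
    print(stripe, parity_disk(stripe))   # 4, 3, 2, 1, 0, 4, ... - parity rotates
```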
P+Q Redundancy (RAID Level 6)
Goal
• motivation:
  • larger arrays may contain more than 1 disk with failures
  • ECC is required in order to maintain availability
Approach
• ECC
  • "P+Q redundancy" based on Reed-Solomon codes
• protects against the failure of 2 disks
• 2 additional disks
• otherwise similar to RAID level 5
4. Storage Management
Disk Management - File Placement on Disk
Goal: to reduce data read & write times by
• Fewer seek operations
• Low rotational delay (latency time)
• High actual data transfer rate (cannot be improved by placement)
Method: to store data in specific patterns
• Regular distances
• Combining related streams
• Larger block sizes
  • fewer seek operations
  • fewer requests
  • but larger losses due to internal fragmentation (on average the last block is used only to approx. 50%)
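A small worked example of the trade-off (the file and block sizes are illustrative, not from the lecture):

```python
import math

FILE_SIZE = 1_000_000   # bytes

def cost(block_size):
    n_requests = math.ceil(FILE_SIZE / block_size)   # requests (and seeks) needed
    wasted = n_requests * block_size - FILE_SIZE     # internal fragmentation
    return n_requests, wasted

print(cost(4_096))      # (245, 3520)  - many requests, little waste
print(cost(262_144))    # (4, 48576)   - few requests, more waste
```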
Interleaved Placement
[Figure: interleaved storage alternates the blocks of the files on disk; non-interleaved storage keeps each file's blocks together]
Interleaved files:
• Interleaving several streams (e.g., channels of audio)
• All nth samples of each stream are in close physical proximity on disk
• Problem with changing (inserting / deleting) parts of streams
Interleaved vs. non-interleaved & contiguous vs. non-contiguous/scattered
• Contiguous interleaved placement
• Scattered interleaved placement
4.1 Traditional Disk Scheduling
Disk Scheduling – Classic Methods - Fundamentals
Disk scheduling determines the order
by which requests for disk access are serviced
General goals:
• Short response time
• High throughput
• Fairness (e.g., requests at disk edges should not get starved)
Multimedia Goals (in general):
• continuous throughput (need not be fair)
• bounds on maximum (not average) response times
• high throughput
Trade-off:
• Seek & rotational delay
vs.
• Maximum response time
First Come First Serve (FCFS) Disk Scheduling
[Figure: requests for tracks 24, 30, 16, 50, 42, 45, 12, 40, 22 (in order of arrival); starting at track 20, the head serves them strictly in arrival order while new requests keep arriving]
Properties:
• Long seek times (head movement is not optimized at all)
• Short (individual) response times
Shortest Seek Time First (SSTF) Disk Scheduling
[Figure: same request queue; the head always serves the outstanding request with the shortest seek distance next, so requests near the current head position are preferred and distant ones (e.g. tracks 12, 16) keep being postponed]
Properties:
• Short seek times
• Longer maximum (individual) response times
• May even lead to starvation
SCAN Disk Scheduling
[Figure: same request queue; starting at track 20 and moving downwards, the head serves 16 and 12 on the way to the edge, then reverses and serves 22, 24, 30, 40, 42, 45, 50 on the upward sweep]
• Disk head always moves back and forth between the disk edges (bidirectional)
• Read the next requested block in the direction of head movement
• Compromise between optimization of seek times and response times
• data in the middle of the disk has better access properties
N-Step-SCAN Disk Scheduling
[Figure: same request queue; only the requests already pending at the start of a sweep ("considered requests") are served during that sweep; requests arriving during the sweep are deferred to the next one]
properties
• reduces the unfairness towards the outer and inner tracks
• longer seek times
• shorter maximum response times
C-SCAN Disk Scheduling
[Figure: same request queue; the head serves requests only while moving upwards (22, 24, 30, 40, 42, 45, 50 from track 20), then returns to the lower edge and serves 12 and 16 on the next upward sweep]
properties
• Disk head serves requests in one direction only (unidirectional), then returns to the other edge
• Improves fairness (compared to SCAN)
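A minimal sketch that reproduces the service orders of the four classic policies on the running example (head at track 20, requests as above); for simplicity all requests are assumed to be pending at once, whereas the figures let them arrive over time:

```python
def fcfs(head, reqs):
    """First come first serve: arrival order, no seek optimization."""
    return list(reqs)

def sstf(head, reqs):
    """Shortest seek time first: always the closest pending request."""
    pending, order = list(reqs), []
    while pending:
        nxt = min(pending, key=lambda t: abs(t - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

def scan(head, reqs, downwards=True):
    """SCAN: one downward sweep, then serve the rest on the way back up."""
    lower = sorted((t for t in reqs if t <= head), reverse=True)
    upper = sorted(t for t in reqs if t > head)
    return lower + upper if downwards else upper + lower

def c_scan(head, reqs):
    """C-SCAN: serve upwards only; jump back to the lowest track after a sweep."""
    upper = sorted(t for t in reqs if t >= head)
    lower = sorted(t for t in reqs if t < head)
    return upper + lower

requests = [24, 30, 16, 50, 42, 45, 12, 40, 22]
print(fcfs(20, requests))    # [24, 30, 16, 50, 42, 45, 12, 40, 22]
print(sstf(20, requests))    # [22, 24, 30, 40, 42, 45, 50, 16, 12]
print(scan(20, requests))    # [16, 12, 22, 24, 30, 40, 42, 45, 50]
print(c_scan(20, requests))  # [22, 24, 30, 40, 42, 45, 50, 12, 16]
```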
4.2 Disk Scheduling – Continuous Media - Goals
Suitability of classical disk scheduling methods:
• Effective utilization of disk arm ⇒ short seek time
• No provision for times or deadlines ⇒ not suitable
Continuous media specific:
• Serve continuous media (periodic) and aperiodic requests
• Never miss deadline of continuous media request while serving aperiodic
request
• Aperiodic requests should not starve service
and should themselves not be starved
• Provide high multiplicity (multiple streams) of real-time access
• Balance trade-off between buffer space and efficiency
Disk Scheduling: Dependencies
Continuous media disk scheduling:
• Efficiency depends on:
• tightness of deadlines
• disk layout
• available buffer space
• Ability to create schedule in advance depends on:
• buffer space and
• stream length
• General case:
• create schedule on-the-fly (to be considered here)
Earliest Deadline First (EDF) Disk Scheduling
[Figure: requests annotated with (deadline, block number): (3,24), (3,30), (2,16), (3,50), (2,42), (1,45), (1,12), (2,40), (1,22); from track 20 the request with the nearest deadline is always read next, regardless of track distance, so the head jumps e.g. between blocks 45, 12 and 22 for the deadline-1 requests]
Real-time scheduling algorithm
• Read block for stream with nearest deadline
May result in:
• Excessive seek time and
• Poor throughput
Scan-EDF Disk Scheduling
Combines advantages of:
• SCAN (seek optimization) with
• EDF (real-time aspects)
Method:
• Requests with earlier deadlines are served first
• Among requests with same deadline, requests are served by track
location
Increase efficiency by modifying deadlines
Scan-EDF Disk Scheduling
[Figure: same requests with (deadline, block) annotations; the three deadline-1 requests (blocks 12, 22, 45) are served first in SCAN order, then the deadline-2 requests (blocks 42, 40, 16), then the deadline-3 requests (blocks 24, 30, 50)]
Properties
• apply EDF
• for all requests with the same deadline, apply SCAN
Scan-EDF Disk Scheduling
Map SCAN to EDF (1):
deadline = Di + f(Ni)
• Di: deadline of request i
• Ni: track position of request i
• f() chosen such that Di + f(Ni) > Dj + f(Nj) if Di > Dj
• e.g., f(Ni) = Ni / Nmax − 1
More accurate modification of deadlines (2), taking the current head position N and the direction of movement into account:
• head moves upwards:
  • if Ni ≥ N (on the way): f(Ni) = (Ni − N) / Nmax
  • if Ni < N: f(Ni) = (Nmax − Ni) / Nmax
• head moves downwards:
  • if Ni ≤ N (on the way): f(Ni) = (N − Ni) / Nmax
  • if Ni > N: f(Ni) = Ni / Nmax
Scan-EDF Disk Scheduling: Example
[Figure: same requests; the modified deadlines (e.g. 1.08 for block 12, 1.22 for block 22, 1.45 for block 45, ..., 3.50 for block 50) are recomputed from the current head position and direction as the head moves]
e.g. at head position 20, moving downwards, the next modified deadlines are computed, assuming Nmax = 100:
• request (1, 12): downwards & 12 is on the way: (20 − 12)/100 = 0.08, i.e. modified deadline 1.08
• request (2, 40): downwards & 40 is not on the way: 40/100 = 0.40, i.e. modified deadline 2.40
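A minimal sketch of the resulting rule (helper names invented): requests are grouped by deadline, and each group is served with a SCAN sweep that continues from the current head position and direction; this is exactly what the perturbed deadlines encode when they are recomputed as the head moves:

```python
from itertools import groupby

def scan_step(tracks, head, downwards):
    """Serve one deadline group by SCAN; return service order, new head, direction."""
    order = []
    while tracks:
        on_way = [t for t in tracks if (t <= head if downwards else t >= head)]
        if not on_way:                      # nothing left in this direction: reverse
            downwards = not downwards
            continue
        nxt = max(on_way) if downwards else min(on_way)
        tracks.remove(nxt)
        order.append(nxt)
        head = nxt
    return order, head, downwards

def scan_edf(requests, head=20, downwards=True):
    """requests: list of (deadline, track); EDF across groups, SCAN within them."""
    order = []
    for _, group in groupby(sorted(requests), key=lambda r: r[0]):
        step, head, downwards = scan_step([t for _, t in group], head, downwards)
        order += step
    return order

requests = [(3, 24), (3, 30), (2, 16), (3, 50), (2, 42),
            (1, 45), (1, 12), (2, 40), (1, 22)]
print(scan_edf(requests))  # [12, 22, 45, 42, 40, 16, 24, 30, 50]
```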
Group Sweeping Scheduling
Form groups
• with deadlines lying closely together
• or in round robin manner (in general)
Apply SCAN to each group
[Figure: requests (deadline, block) split into three groups — group 1 (deadlines 1.1–1.4): blocks 22, 45, 12, served by SCAN as 12, 22, 45; group 2 (deadlines 2.0–2.4): blocks 16, 42, 40, served as 42, 40, 16; group 3 (deadlines 3.3–3.4): blocks 30, 50, 24, served as 24, 30, 50]
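A minimal sketch (grouping by the integer part of the deadline and alternating the sweep direction are assumptions chosen to reproduce the figure's three groups):

```python
from itertools import groupby

def group_sweep(requests):
    """requests: list of (deadline, track); form deadline groups, SCAN each one."""
    order, upwards = [], True
    for _, group in groupby(sorted(requests), key=lambda r: int(r[0])):
        tracks = sorted(t for _, t in group)
        order += tracks if upwards else tracks[::-1]   # SCAN within the group
        upwards = not upwards                          # continue the sweep direction
    return order

requests = [(3.4, 24), (3.3, 30), (2.0, 16), (3.3, 50), (2.2, 42),
            (1.2, 45), (1.4, 12), (2.4, 40), (1.1, 22)]
print(group_sweep(requests))  # [12, 22, 45, 42, 40, 16, 24, 30, 50]
```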
Mixed Disk Scheduling Strategy
[Figure: the disk scheduling component receives media data requests from the continuous media streams together with control data on the status of their buffers, and keeps the buffers balanced while scheduling by SSTF]
Goal
• to maximize transfer efficiency by minimizing seek time and latency
• to serve the processes' requirements with limited buffer space
Combines
• shortest seek time first (SSTF)
• buffer underflow & overflow prevention
  • by keeping the buffers filled in a "similar way"
The mixed strategy is also known as "greedy strategy".
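A minimal sketch of one plausible reading (the slide only names the ingredients; the stream attributes, the tolerance, and the candidate rule are assumptions): among the streams whose buffers are currently the emptiest, serve the one with the shortest seek:

```python
def next_request(streams, head_track, tolerance=0.1):
    """streams: objects with .buffer_fill in [0, 1] and .next_track."""
    # balance buffers: only streams close to the emptiest buffer are candidates,
    # which keeps all buffers filled in a "similar way" (underflow prevention)
    min_fill = min(s.buffer_fill for s in streams)
    candidates = [s for s in streams if s.buffer_fill <= min_fill + tolerance]
    # SSTF among the candidates: minimize head movement
    return min(candidates, key=lambda s: abs(s.next_track - head_track))
```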
4.3 Replication (contents and access driven)
Goal
• to increase availability in case of disk or machine failures (like RAID)
• to overcome the limit on the number of concurrent accesses to individual titles
caused by the limited throughput of the hardware
Static Replication
• user has a choice of access points
• frequently done in the Internet today
• the content provider keeps copies of the original version up to date on
servers close to the users
Dynamic Segment Replication
• read-only, segments are replicated
• since continuous media data is delivered in linear order,
a load increase on a specific segment can be used as a trigger to
replicate this segment and all following segments to other disks
Threshold-Based Dynamic Replication
• considers whole movies
• takes all disks of the system into account to determine whether a movie
should be replicated
5. File System
File system
• is said to be the most visible part of an operating system
Traditional File Systems
• MSDOS File Allocation Table FAT, UNIX Berkeley FFS, ...
Multimedia File Systems
• Real Time Characteristics
• File Size
• Multiple Data Streams
Examples of Multimedia File Systems
• Video File Server (experimental, here outlined)
• Fellini
• Symphony
• IBM VideoCharger ...
• Real Networks ...
Example: Video File Server
Data Structuring - Data Types
• Continuous-media data itself (audio, video, ...)
• Meta-data (attributes):
• Annotations by author
• Association between related files (synchronization)
• Linking and sharing of data segments:
• e.g., storing common parts only once
Frame: Basic unit of video
Sample: Basic unit of audio
Block: Basic unit of disk storage
• Homogeneous block:
• each block contains exactly one medium
• requires explicit inter-media synchronization
• Heterogeneous block:
• multiple media stored in a block
• implicit inter-media synchronization
Data Structuring: Terminology
[Figure: a strand consists of general strand information (type, recording rate, length, etc.) and a list of data blocks; a rope consists of general rope information plus synchronization information tying together its audio and video strands]
• Strand:
• Immutable sequence of continuous video frames or audio samples
• Rope:
• Collection of multiple strands tied together with synchronization information
• Strands are immutable:
• Copy operations are avoided
• Editing on ropes manipulates pointers to strands
• Many different ropes may share intervals of the same media strand
• Reference counters, storage reclaimed when no more references
Operations on Multimedia Ropes: Interface
Example:
(Rangan and Vin 91)
RECORD [media] → [requestID, mmRopeID]
• Record a new multimedia rope consisting of media strands
• Perform admission control to check resource availability
PLAY [mmRopeID, interval, media] → [requestID]
• Play back a multimedia rope consisting of media strands
• Perform admission control to check resource availability
STOP [requestID]
INSERT [baseRope, position, media, withRope, withInterval]
REPLACE [baseRope, media, baseInterval, withRope, withInterval]
DELETE [baseRope, media, interval]
Operations on Multimedia Ropes: Example
Merge audio and video strands:
• Rope1 contains only audio strand
• Rope2 contains only video strand
• Replace (non-existing) video component of Rope1 with video component
of Rope2
REPLACE[baseRope: Rope1, media: video,
        baseInterval: [start: 0, length: l],
        withRope: Rope2, withInterval: [start: 0, length: l]]
[Figure: before the REPLACE, Rope1 holds only an audio strand and Rope2 only a video strand; afterwards Rope1 additionally references Rope2's video strand, which is shared rather than copied]
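A minimal sketch (class and method names invented; Rangan and Vin's actual structures are richer) of how editing on ropes manipulates strand pointers and reference counts instead of copying data:

```python
from dataclasses import dataclass, field

@dataclass
class Strand:
    """Immutable sequence of frames/samples; storage reclaimed at refcount 0."""
    medium: str                 # "audio" or "video"
    blocks: list
    refcount: int = 0

@dataclass
class Rope:
    """Collection of strand references plus synchronization information."""
    strands: dict = field(default_factory=dict)   # medium -> Strand

    def replace(self, medium, with_rope):
        """REPLACE: repoint one medium at another rope's strand (no copying)."""
        old = self.strands.get(medium)
        if old is not None:
            old.refcount -= 1                     # may allow storage reclamation
        new = with_rope.strands[medium]
        new.refcount += 1
        self.strands[medium] = new

# Merge: Rope1 has only audio, Rope2 only video (the example above).
rope1 = Rope({"audio": Strand("audio", ["a0", "a1"], refcount=1)})
rope2 = Rope({"video": Strand("video", ["v0", "v1"], refcount=1)})
rope1.replace("video", rope2)
assert rope1.strands["video"] is rope2.strands["video"]   # shared, not copied
```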
Multimedia File Systems: System Structure
Division into:
• Lower level storage manager
• Higher level rope server
Multimedia storage manager
• Physical storage of media strands on disk
• Admission control
• Maintain disk layout
Multimedia rope server
• Operations on multimedia ropes
• Communicates with the storage manager via IPC mechanisms
• Receives status messages from the storage manager and
sends status messages to the application