Real-Time Database Systems: Time Constraints and Their Challenges


Real-Time Database Systems and

Data Services: Issues and Challenges

Sang H. Son

Department of Computer Science

University of Virginia

Charlottesville, Virginia 22903 son@cs.virginia.edu

1

Outline

 Introduction: real-time database systems and real-time data services

 Why real-time databases?

 Misconceptions about real-time DBS

 Paradigm comparison

 Characteristics of data and transactions in real-time DBS

 Origins of time constraints

 Temporal consistency and data freshness

 Time constraints of transactions

 Real-time transaction processing

 Priority assignment

 Scheduling and concurrency control

 Overload management and recovery

2

Outline (cont’d)

 Advanced real-time applications

 Active, object-oriented, main-memory databases

 Flexible security paradigm for real-time databases

 Embedded databases

 Real-world applications and examples

 Real-time database projects and research prototypes

 BeeHive system

 Research issues, trends, and challenges

 Exercises

3

I. Introduction

 Outline

 Motivation: Why real-time databases and data services?

 A brief review: real-time systems

 Misconceptions about real-time DBS

 Comparison of different paradigms:

 Real-time systems vs real-time database system

 Conventional DBS vs real-time DBS

4

Some Facts about Real-Time Databases

 Fact 1: As the complexity of real-time systems and applications grows, the amount of information to be handled by real-time systems increases, motivating the need for database and data service functionality (as opposed to ad hoc techniques and internal data structures)

 Fact 2: Conventional databases do not support timing and temporal requirements, and their design objectives are not appropriate for real-time applications

 Fact 3: Tasks and transactions have both similarities and distinct differences, i.e., the traditional task-centric view does not carry over to real-time databases.

5

Something to Remember ...

Real-time ≠ FAST

Real-time ≠ nanosecs or μsecs

Real-time means explicit or implicit time constraints

A high-performance database which is simply fast, without the capability of specifying and enforcing time constraints, is not appropriate for real-time applications

7

A Brief Review: Real-Time Systems

A system whose basic specification and design correctness arguments must include its ability to meet its time constraints .

Its correctness depends not only on the logical correctness, but also on the timeliness of its actions.

8

Review: Real-Time Systems

 Characteristics of real-time systems

 timeliness and predictability

 typically embedded in a large complex system

 dependability (reliability) is crucial

 explicit timing constraints (soft, firm, hard)

 A large number of applications

 aerospace and defense systems, nuclear systems, robotics, process control, agile manufacturing, stock exchange, network and traffic management, multimedia computing, and medical systems

 Rapid growth in research and development

 workshops, conferences, journals, commercial products

 standards (POSIX, RT-Java, RT-CORBA, etc)

9

Time Constraints

[Figure: value functions v(t) over time t. For a hard or firm deadline, the value drops at the deadline d; for a soft deadline, the value decays between d1 and d2.]

10

Databases for Real-Time Systems

 Critical in real-time systems (any computing needs correct data)

 real-time computing needs to access data: real-world applications involve time-constrained access to data that may have temporal properties

 traditional real-time systems manage data in application-dependent structures

 as systems evolve, more complex applications require efficient access to more data

 Function of real-time databases

 gathering data from the environment and processing it in the context of information acquired in the past, to provide timely and temporally correct responses

11

What is a Real-Time Database?

A real-time database (RTDB) is a data store whose operations execute with predictable response, and with application-acceptable levels of logical and temporal consistency of data, in addition to timely execution of transactions with the ACID properties.

C. D. Locke

Chief Scientist, TimeSys Co.

12

What is the gain of using RTDBS?

 More efficient way of handling large amounts of data

 Specification and enforcement of time constraints

 Improved overall timeliness and predictability

 Application semantic-based consistency and concurrency control

 Specialized overload management and recovery

 Exploitation of real-time support from underlying real-time

OS

 Reduced development costs

14

Gain of Using RTDBS (More Specifically)

 Presence of a schema - avoid redundant data and its description

 Built-in support for efficient data management - indexing, etc

 Transaction support - e.g. ACID properties

 Data integrity maintenance

But …

 Not all data in RTDB is durable : need to handle different types of data differently (will be discussed further later)

 Correctness can be traded for timeliness

- Which is more important? Depends on applications, but timeliness is more important in many cases

 Atomicity can be relaxed : monotonic queries and transactions

 Isolation of transactions may not always be needed

 Temporally-correct serializable schedules ⊂ serializable schedules

15

Objectives of Real-Time Databases

 Correctness requirements:

 consistency constraints

 time constraints on data and transactions

 Objectives

 timeliness and predictability: dealing with time constraints and violations

 Performance goals:

 minimize the penalty resulting from actions either delayed or not executed in time

 maximize the value accruing to the system from actions completed in time

 support multiple guarantee levels of quality for mixed workloads

16

Why Not Using Conventional Databases?

 Inadequacies of conventional databases:

 poor responsiveness and lack of predictability

 no facility for applications to specify and enforce time constraints

 designed to provide good average response time, while possibly yielding unacceptable worst-case execution times

 resource management and concurrency control in conventional database systems do not support timeliness and predictability

17

Differences from Traditional Databases

 Traditional database systems

 persistent data and consistency constraints

 efficient access to data

 transaction support: ACID properties

 correct execution of transactions in the context of concurrent execution and failure

 designed to provide good average performance

 Databases for real-time systems

 temporal data, modeling a changing environment

 response time requirements from external world

 applications need temporally coherent view

 actively pursue timeliness and predictability

18

Misconceptions on Real-Time Databases....

19

Misconceptions about RTDBS (1)

 “Advances in hardware will take care of RTDBS requirements.”

 fast (higher throughput) does not guarantee timing constraints

 increase in size and complexity of databases and hardware will make it more difficult to meet timing constraints or to show such constraints will be met

 hardware alone cannot ensure that transactions will be scheduled properly to meet timing constraints or data is temporally valid

 a transaction that uses obsolete data, however quickly it executes, is still incorrect

 “ Real-time computing is equivalent to fast computing .”

 minimizing average response time vs satisfying individual timing constraints

 predictability, not speed, is the foremost goal

20

Misconceptions about RTDBS (2)

 “ Advances in standard DBS technology will take care of RTDB requirements .”

 while novel techniques for query processing, buffering, and commit protocols would help, they cannot guarantee timeliness and temporal validity

 time-cognizant protocols for concurrency control, commit processing and transaction processing are mandatory

 “ There is no need for RTDBS because we can solve all the problems with current database systems ”

 adding features such as validity intervals and transaction deadlines to current database systems is in fact moving towards developing a real-time database system

 such an approach (adding features in an ad hoc manner) will be less efficient than developing one from the ground up with such capabilities

21

Misconceptions about RTDBS (3)

 “ Using a conventional DBS and placing the DB in main memory is sufficient .”

 although main-memory resident databases eliminate disk delays, conventional databases have many sources of unpredictability, such as delays due to blocking on locks and transaction scheduling

 increases in performance cannot completely make up for the lack of time-cognizant protocols in conventional database systems

 “ A temporal database is a RTDB .”

 while both temporal DBs and RTDBs support time-specific data operations, they support different aspects of time

 in an RTDB, timely execution is of primary concern, while in a temporal DB, fairness, resource utilization, and ACID properties of transactions are more important

22

Misconceptions about RTDBS (4)

 “ Problems in RTDBS will be solved in other areas .”

 some techniques developed in other areas (e.g., RTS and DBS) cannot be applied directly, due to the differences between tasks and transactions, and differences in correctness requirements

 there are unique problems in RTDBS (e.g., maintaining temporal consistency of data)

 “ RTDBS guarantee is meaningless unless H/W and S/W never fails ”

 true, in part, due to the complexity involved in predictable and timely execution

 it does not excuse the designer from reducing the odds of failure in meeting critical timing constraints

Reference: Stankovic, Son, and Hansson, ‘Misconceptions About Real-Time Databases’, IEEE Computer, June 1999.

23

Conventional vs. Real-Time Databases:

Correctness Criteria

Conventional Databases:

 Logical consistency

 ACID properties of transactions:

 Atomicity

 Isolation

 Consistency

 Durability

 Data integrity constraints

Real-Time Database Systems:

 Logical consistency

 ACID properties (may be relaxed)

 Data integrity constraints

 Enforce time constraints

 Deadlines of transactions

 External consistency

 absolute validity interval (AVI)

 Temporal consistency

 relative validity interval (RVI)

27

Real-time Systems vs. RTDBS

Real-time systems

 Task centric

 Deadlines attached to tasks

Real-time databases

 Data centric

 Data has temporal validity, i.e., deadlines also attached to data

 Transactions must be executed by their deadlines to keep the data valid, in addition to producing results in a timely manner

28


II. Characteristics of Data and Transactions

 Outline

 The origin of time constraints

 Types of time constraints

 Real-time data and temporal consistency

 Real-time transactions

30

The Origin of Time Constraints

 Meeting time constraints is of paramount importance in real-time database systems. Unfortunately, many of these time constraints are artifacts.

 If a real-time database system attempts to satisfy them all, it may lead to an over-constrained or over-designed system.

Issues to be discussed:

1. What are the origins of (the semantics of) time constraints of the data, events, and actions?

2. Can we do better by knowing the origins of time constraints?

3. What is the connection between time-constrained events, data, and real-time transactions ?

31

Example #1: Objects on Conveyor Belts on a Factory Floor

Recognizing and directing objects moving along a set of conveyor belts on a factory floor.

 Objects’ features are captured by a camera to determine their characteristics.

 Depending on the observed features, the object is directed to the appropriate workcell.

 System updates its database with information about the object.

32

Example #1 (cont’d)

 Features of an object must be collected while the object is still in front of the camera.

 “Current” object and features apply just to the object in front of the camera

 Lose validity once a different object enters the system.

 Object’s features matched against models in database.

 Based on match, object directed to selected workcell.

 Alternative: discard object and later bring it back again in front of the camera.

33

Example #2: Air Traffic Control

System makes decisions concerning

 the flight paths of incoming aircraft

 the order in which they should land

 separation between landings

Parameters: position, speed, remaining fuel, altitude, type of aircraft, and current wind velocity.

Aircraft allowed to land => subsequent actions of this aircraft become critical: cannot violate time constraints

Alternative: Ask aircraft to assume a holding pattern.

34

Factors that Determine Time Constraints

Focus: externally-imposed temporal properties

 The characteristics of the physical systems being monitored and controlled :

 speed of the aircraft, speed of the conveyor belt, temperature and pressure

 The stability characteristics as governed by its control laws :

 servo control loops of robot hands, fly-by-wire, avionics, fuel injection rate

 Quality of service requirements :

 sampling rates for audio and video, accuracy requirement for results

 Human (re)action times, human sensory perception :

 time between warning and reaction to warning

Events, data and actions inherit time constraints from these factors

 They determine the semantics (importance, strictness) of time constraints.

35

All Time Constraints are Artifacts?

May be not all of them, but even many externally-imposed constraints are artifacts:

 Length of a runway or speed of an aircraft - determined by cost and technology considerations;

 Quality of service requirements - decided by regulatory authorities;

 Response times guaranteed by service providers - determined by cost and competitiveness factors

36

Designer Artifacts

Subsequent decisions of the database system designer introduce additional constraints :

 The type of computing platform used (e.g. centralized vs. distributed)

 The type of software design methodology used (e.g., data-centric vs. action-centric)

 The (pre-existing) subsystems used in composing the system

 The nature of the actions (e.g., monolithic action vs. graph-structured or triggered action)

Time constraints reflect the specific design strategy and the subsystems chosen as much as the externally imposed timing requirements

37

Decisions on Time Constraints

 Difficulty of optimal time constraints

 Determining all related time constraints in an optimal fashion for non-trivial systems is intractable => divide and conquer (and live with acceptable decisions)

 Multi-layer decision process

 The decisions made at one level affect those at the other level(s)

 While no decision at any level is likely to be unchangeable, cost and time considerations will often prevent overhaul of prior decisions

38

Decisions on Time Constraints (2)

 Decisions to be made

 Whether an action is periodic, sporadic, or aperiodic

 The right values for the periods, deadlines, and offsets within periods

 Importance or criticality values

 Flexibility (dynamic adaptability) of time constraints

39

Time Constraints of Events

Three basic types of time constraints

1. Maximum: delay between two events

Example: Once an object enters the view of the camera, object recognition must be completed within t1 seconds

2. Minimum: delay between two events

Example: No two flight landings must occur within t2 seconds

3. Durational: length of an event

Example: The aircraft must experience no turbulence for at least t3 seconds before the “seat-belt sign” can be switched off once again

Constraints can also be specified between stimulus and response events

(max, min, and duration between them can be stated)

40

Time Constraints of Events (2)

 The maximum and minimum type of time constraints of recurring (stimulus) events: rate-based constraints

 Time constraints determine the constraints on transactions :

 Rate-based constraints -> periodicity requirements for the corresponding actions

 Time constraints relating a stimulus and its response -> deadline constraints

 Specifications of minimal separation between response to a stimulus and the next stimulus -> property of the sporadic activity that deals with that stimulus

41

Data in Real-Time Database Systems

 Data items reflect the state of the environment

 Data from sensors - e.g., temperature and pressure

 Derived data - e.g., rate of reaction

 Input to actuators - e.g., amount of chemicals, coolant

 Archival data - e.g., history of (interactions with) environment

 Static data as in conventional database systems

42

Time Constraints on Data

 Where do they come from?

 state of the world as perceived by the controlling system must be consistent with the actual state

 Requirements

 timely monitoring of the environment

 timely processing of sensed information

 timely derivation of needed data

 Temporal consistency of data

 absolute consistency: freshness of data -- agreement between the actual state and its representation

 relative consistency: correlation among the data accessed by a transaction

43

Static Data and Real-Time Data

 Static data

 data in a typical database

 values not becoming obsolete as time passes

 Real-time (Temporal) data

 arrive from continuously changing environment

 represent the state at the time of sensing

 has observed time and validity interval

 users of temporal data need to see temporally coherent views of the data (state of the world)

 When must the data be temporally consistent?

 ideally, at all times

 in practice, only when they are used by transactions

45

An Example

 Data object is specified by

(value, absolute validity interval, time-stamp)

 Interested in { temperature and pressure } with relative validity interval of 5

 Let current time = 100

temperature = (347, 10, 95) and pressure = (50, 20, 98) -- temporally consistent

temperature = (347, 10, 98) and pressure = (50, 20, 91) -- temporally inconsistent (the timestamps differ by 7 > relative validity interval of 5)
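A minimal sketch (in Python, with illustrative names) of the temporal consistency check behind this example, assuming each data object carries (value, avi, timestamp) as defined above:

from collections import namedtuple

DataObject = namedtuple("DataObject", "value avi timestamp")

def absolutely_consistent(obj, now):
    # fresh: the age of the observation does not exceed its absolute validity interval
    return (now - obj.timestamp) <= obj.avi

def relatively_consistent(objs, rvi):
    # correlated: the observation timestamps of the set lie within the relative validity interval
    stamps = [o.timestamp for o in objs]
    return (max(stamps) - min(stamps)) <= rvi

now = 100
temperature = DataObject(347, 10, 95)
pressure = DataObject(50, 20, 98)
consistent = (absolutely_consistent(temperature, now)
              and absolutely_consistent(pressure, now)
              and relatively_consistent([temperature, pressure], rvi=5))
# True for the first pair above; False for the second pair (relative check fails)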

46

What Makes the Difference?

We have a set of predicates to be satisfied by data

 Why not use standard integrity maintenance techniques?

 Not executing a transaction will maintain logical consistency, but temporal consistency will be violated

 Satisfy logical consistency by CC techniques, such as 2PL

 Satisfy temporal consistency by time-cognizant transaction processing

AVI and RVI may change with system dynamics, e.g. mode changes

47

Time Constraints Associated with Actions

Time constraints dictate the behavior of the environment

 constrain the rates and times at which inputs arrive at the system

 Example: seek permission to land only when aircraft is 10 mins from airport

Time constraints prescribe performance of the system

 dictate the responsiveness of the system to these inputs

 Example: respond to a “landing request” within 30 seconds

Time constraints are imposed to maintain data temporal consistency

 Example: actions that update an aircraft’s dynamic parameters in 1 second

48

Distinct Types of Transactions

 Write-only transactions (sensor updates): obtain state of the environment and write into the database

 store sensor data in database (e.g., temperature)

 monitoring of environment

 ensure absolute temporal consistency

 Update transactions (application updates)

 derive new data and store in database

 based on sensor and other derived data

 Read-only transactions

 read data, compute, and report (or send to actuators)

49

Time Constraints on Transactions

 Time constraints on transactions

 some come from the need to maintain temporal consistency of data

 some come from the requirements on reaction time, dictating the responsiveness of the system

 some come from the designer’s choice, specifying the rates and times at which inputs arrive at the system

 transaction’s value depends on completion time

50

Types of Time Constraints

Based on type of time constraints:

 Periodic

- Every 10 secs: sample wind velocity

- Every 20 secs: update robot position

 Aperiodic

- If temperature > 1000: add coolant to reactor within 10 secs

Based on Value:

 Hard: must execute before deadline

 Firm: abort if not completed by deadline

 Soft: diminished value if completed after deadline

51

Dealing with Time Constraint Violations

Large negative penalty => a safety-critical or hard time constraint

 typically arise from external considerations

 important to minimize the number of such constraints

No value after the deadline and no penalty accrues => a firm deadline

 typically, alternatives exist

Result useful even after deadline => a soft deadline

 system must reassign successors’ parameters - so that the overall end-to-end time constraints are satisfied

Firm and soft time constraints offer the system flexibility - not present with hard or safety-critical time constraints

52

Examples of Time Constraints Specified using ECA (Event-Condition-Action) Rules

The time constraints can be specified using ECA rules

ON (10 seconds after “initiating landing preparations”)

IF (steps not completed)

DO (within 5 seconds “abort landing”)

ON (deadline of “object recognition”)

IF (action not completed)

DO (“increase importance, adjust deadlines”)

ON (“n-th time violation within 10 secs”)

IF (crisis-mode)

DO (“drop all non-essential transactions”)
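These ECA rules could be represented as data plus callbacks; the sketch below (Python, hypothetical structure, not the syntax of any particular active DBMS) mirrors two of the rules above:

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ECARule:
    event: str                          # triggering event
    condition: Callable[[Dict], bool]   # evaluated when the event occurs
    action: Callable[[Dict], None]      # fired if the condition holds

rules: List[ECARule] = [
    ECARule(event="deadline_of_object_recognition",
            condition=lambda ctx: not ctx.get("action_completed", False),
            action=lambda ctx: print("increase importance, adjust deadlines")),
    ECARule(event="nth_violation_within_10_secs",
            condition=lambda ctx: ctx.get("crisis_mode", False),
            action=lambda ctx: print("drop all non-essential transactions")),
]

def on_event(event_name: str, ctx: Dict) -> None:
    # upon event occurrence, evaluate the condition; if satisfied, trigger the action
    for rule in rules:
        if rule.event == event_name and rule.condition(ctx):
            rule.action(ctx)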

53

Time Constraints: Discussion

 Understand the issues underlying the origin and semantics of time constraints

 not all deadlines are “given.”

 need ways to derive time constraints (and their semantics) in the least stringent manner

 flexibility afforded by derived deadlines must be exploited

 deadline violation must also be handled adaptively

 Control strategies can be specified by ECA rules

54


III. Real-Time Transaction Processing

 Outline

 Priority assignment

 Scheduling paradigms

 Priority inversion problem

 Concurrency control protocols

 Predictability issues

 Overload management and recovery

56

Priority Assignment

 Different approaches

 EDF: earliest deadline first

 highest value (benefit) first

 highest (value/computation time) first

 complex function of deadline, value, slack time

 Priority assignment has significant impact on database system performance

 Assignment based on deadline and value has shown good performance
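As a rough illustration, the approaches above can be written as priority keys (Python sketch; the transaction attributes deadline, value, exec_time, and remaining_exec_time are assumptions):

from dataclasses import dataclass

@dataclass
class Txn:
    deadline: float
    value: float
    exec_time: float
    remaining_exec_time: float

# Smaller key = higher priority.
def edf_key(t, now):
    return t.deadline                        # earliest deadline first

def value_key(t, now):
    return -t.value                          # highest value (benefit) first

def value_density_key(t, now):
    return -(t.value / t.exec_time)          # highest value per unit computation time

def weighted_key(t, now, w_d=1.0, w_v=1.0, w_s=1.0):
    slack = t.deadline - now - t.remaining_exec_time
    # a complex function of deadline, value, and slack time (weights are tunable)
    return w_d * t.deadline - w_v * t.value + w_s * slack

# e.g., ready.sort(key=lambda t: edf_key(t, now))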

57

Goals of Real-Time Transaction Scheduling

 Maximize the number of transactions (both sensor and user) that meet deadlines

 Keep data temporally valid

 on overload, allow intervals during which some data is invalid (note that data may not be used while it is invalid)

 overload management: trade off quality for timeliness and schedule contingency (or alternative) versions of transactions

 more on overload management later ...

59

Execution Time of Transactions

t_exec = t_db + t_I/O + t_int + t_appl + t_comm

where

t_db = processing of DB operations (variable)

t_I/O = I/O processing (variable)

t_int = transaction interference (variable)

t_appl = non-DB application processing (variable & optional)

t_comm = communication time (variable & optional)

60

Scheduling Paradigms

Scheduling analysis or feasibility checking of real-time computations can predict whether timing constraints will be met

Several scheduling paradigms emerge, depending on

 whether a system performs schedulability analysis

 if it does, whether it is done statically or dynamically, and

 whether the result of the analysis itself produces a schedule or plan according to which computations are dispatched at runtime

61

Different Paradigms

1. Static Table-Driven approaches:

 Perform static schedulability analysis

 The resulting schedule is used at run-time to decide when a computation must begin execution

2. Static Priority Driven Preemptive Approaches:

 Perform static schedulability analysis but unlike in the previous approach, no explicit schedule is constructed

 At run-time, computations are executed (typically) highest-priority-first

 Example: rate-monotonic priority assignment - priority is assigned in proportion to frequency (the shorter the period, the higher the priority)

62

Different Paradigms (2)

3. Dynamic Planning Based Approaches:

 Feasibility is checked at run-time, i.e., a dynamically arriving computation is accepted for execution only if it is found feasible (that is, guaranteed to meet its time constraints)

 One of the results of the feasibility analysis is a schedule or plan that is used to decide when a computation can begin execution.

4. Dynamic Best-effort Approaches:

 No feasibility checking is done

 The system tries to do its best to meet deadlines, but since no guarantees are provided, a computation may be aborted during its execution

63

Dealing with Hard Deadlines

 All transactions have to meet the timing constraints

 best-effort is not enough

 a kind of guarantee is required

 Requires

 periodic transactions only

 resource requirements known a priori

 worst-case execution times of transactions are known

 Use static table-driven or priority-driven approach

 schedulability analysis is necessary

 run-time support also necessary

64

Dealing with Soft/Firm Deadlines

 Two critical functions:

 assign transaction priorities

 resolve inter-transaction conflicts using transaction parameters: deadline, criticality, slack time, etc.

 For firm deadlines, abort “expired” transactions

 For soft deadlines, the transaction is generally allowed to run to completion, even if the deadline is missed

 Various time-cognizant concurrency control protocols have been developed, many of which are extensions of two-phase locking (2PL), timestamp ordering, and optimistic concurrency control

65

Time-cognizant Transaction Scheduling

 Earliest deadline first (EDF)

 Highest value first

 Highest value density first (value per unit computation time)

 Weighted formula: complex function of deadline, value, and remaining work, etc.

 Earliest Data Deadline First: considering the validity interval

 Example: DD(Y) is used as the virtual deadline of transaction T

[Timeline: TR_T is activated and begins; it reads X and then Y. DD(Y) precedes DD(X), and both precede the deadline of TR_T, so DD(Y) is used as the virtual deadline.]
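A hedged sketch of the virtual-deadline computation implied here, assuming each temporal data object carries a timestamp and an absolute validity interval (avi):

def data_deadline(obj):
    # instant at which the value read stops being absolutely valid
    return obj.timestamp + obj.avi

def virtual_deadline(txn_deadline, read_set):
    # the effective deadline is the transaction deadline capped by the
    # earliest data deadline among the temporal data it has read
    return min([txn_deadline] + [data_deadline(o) for o in read_set])

# Scheduling by this virtual deadline matches the cases illustrated next:
# commit if work finishes in time (Example 1), abort if a data deadline
# expires first (Example 2), wait for an imminent update (Example 3), or
# accept a "similar" value (Example 4).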

66

Example 1 : Commit Case

[Timeline: TR_T is activated and begins, reads X and then Y, and commits while X and Y are still valid (before DD(Y) and DD(X)); TR_T makes its deadline.]

67

Example 2 : Abort Case

[Timeline: TR_T is activated and begins and reads X and then Y, but a data deadline (DD(Y)) expires before TR_T can commit within its deadline, so TR_T is ABORTED.]

68

Example 3 : Forced Wait

[Timeline: TR_T is activated and begins and reads X; TR_T is forced to wait for the update to Y, since it will occur soon, before reading Y and continuing toward its deadline.]

69

Example 4 : With Data Similarity

[Timeline: TR_T is activated and begins; it reads X and reads Y = 15.70; at DD(Y), Y is updated to 15.78. Data X is OK and data Y is similar (similarity is defined in the DB), so TR_T commits and its deadline is met.]

70


Transactions: Concurrency Control

 Pessimistic

 Optimistic (OCC)

 Hybrid (e.g., integrated real-time locking)

 Speculative

 Semantic-based

 Priority ceiling

72

Pessimistic Concurrency Control

 Locks are used to synchronize concurrent actions

 Two-Phase Locking (2PL)

 all locking operations precede the first unlock operation in the transaction

 expanding phase (locks are acquired)

 shrinking phase (locks are released)

 suffers from deadlock

 priority inversion

73

Example of 2PL: Two transactions

Priority of T1 > Priority of T2

T1: write_lock(X); read_object(X); X = X + 1; write_object(X); unlock(X);

T2: read_lock(X); read_object(X); write_lock(Y); unlock(X); read_object(Y); Y = X + Y; write_object(Y); unlock(Y);

74

Example of 2PL: Deadlock

T1: read_lock(Y); read_object(Y); ... write_lock(X); [blocked]

T2: read_lock(X); read_object(X); ... write_lock(Y); [blocked]

=> DEADLOCK!

75

Conflict Resolution in 2PL

 2PL (or any other locking scheme) relies on blocking the requesting transaction if the data is already locked in an incompatible mode. What if a high priority transaction needs a lock held by a low priority transaction? Possibilities are ...

 let the high priority transaction wait

 abort the low priority transaction

 let low priority transaction inherit the high priority and continue execution

 The first approach will result in a situation called priority inversion

 Several conflict resolution techniques are available, but those that use both deadline and value show better performance

76

Priority Inversion Problem in Locking

Protocols

 What is priority inversion?

 A low priority transaction forces a higher priority transaction to wait

 highly undesirable in real-time applications

 unbounded delay may result due to chained blocking and

“intermediate” blocking:

Example: T0 is blocked by T3 when accessing a data object, then T3 is blocked by T2 (priority T0 > T2 > T3)

77

Example of 2PL: Priority Inversion

T1 (high priority): write_lock(X); [blocked by T2 -- priority inversion] ... read_object(X); X = X + 1; write_object(X); unlock(X);

T2 (low priority): read_lock(X); read_object(X); write_lock(Y); unlock(X); read_object(Y); Y = X + Y; write_object(Y); unlock(Y);

78

Solutions to Priority Inversion Problem

 Priority abort

 abort the low priority transaction - no blocking at all

 quick resolution, but wasted resources

 Priority inheritance

 execute the blocking transaction (low priority) with the priority of the blocked transaction (high priority)

 “intermediate” blocking is eliminated

 Conditional priority inheritance

 based on the estimated length of transaction

 inherit the priority only if blocking one is close to completion; abort it, otherwise

79

Conditional Priority Inheritance Protocol

Ti requests a data object locked by Tj:

    if Priority(Ti) < Priority(Tj) then block Ti
    else if (remaining portion of Tj > threshold) then abort Tj
    else Ti waits while Tj inherits the priority of Ti and continues execution
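A hedged Python sketch of the same protocol; the Txn class, remaining_fraction, and the threshold value are illustrative (larger priority number = higher priority):

from dataclasses import dataclass

@dataclass
class Txn:
    priority: int
    exec_time: float
    remaining_exec_time: float
    state: str = "running"

def remaining_fraction(t):
    return t.remaining_exec_time / t.exec_time

def block(t):
    t.state = "blocked"

def abort(t):
    t.state = "aborted"

def resolve_conflict(ti, tj, threshold=0.2):
    # ti requests a data object currently locked by tj
    if ti.priority < tj.priority:
        block(ti)                                     # requester has lower priority: wait
    elif remaining_fraction(tj) > threshold:
        abort(tj)                                     # tj is far from completion: abort it
    else:
        tj.priority = max(tj.priority, ti.priority)   # tj inherits ti's priority
        block(ti)                                     # ti waits only for tj's short remainder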

80

Why Conditional Priority Inheritance?

 Potential problems of (blind) priority inheritance:

 life-long blocking - a transaction may hold a lock during its entire execution (e.g., strict 2PL case)

 a transaction with low priority may inherit the high priority early in its execution and block all the other transactions with priority higher than its original priority

 especially severe if low priority transactions are long

 Conditional priority inheritance is a trade-off between priority inheritance and priority abort

 Not sensitive to the accuracy of the estimation of the transaction length

81

Performance Results

 Priority inheritance does reduce blocking times. However, it is inappropriate under strict 2PL due to life-time blocking of the high priority transaction. It performs even worse than simple waiting when data contention is high

 Priority abort is sensitive to the level of data contention

 Conditional priority inheritance is better than priority abort when data contention becomes high

 Blocking is a more serious problem than resource waste, especially when deadlines are not tight

 In general priority abort and conditional priority inheritance are better than simple waiting and priority inheritance

 Deadlock detection and restart policies appear to have little impact

82

Optimistic Concurrency Control

 No checking of data conflicts during transaction execution

 read phase: read values from DB; updates made to local copies

 validation phase

 backward validation or forward validation

 conflict resolution

 write phase:

 if validation ok then local copies are written to the DB

 otherwise discard updates and (re)start transaction

 Non-blocking

 Deadlock free

 Several conflict resolution policies

83

OCC: Validation phase

 If a transaction Ti should be serialized before a transaction Tj, then two conditions must be satisfied:

 Read/Write rule

 Data items to be written by Ti should not have already been read by Tj

 Write/Write rule

 Ti’s writes should not overwrite Tj’s writes
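A conservative sketch of this validation test (assuming per-transaction read_set and write_set are tracked as Python sets):

def validate_before(ti, tj):
    # can Ti be serialized before Tj?
    rw_ok = not (ti.write_set & tj.read_set)    # Read/Write rule
    ww_ok = not (ti.write_set & tj.write_set)   # Write/Write rule (conservative: any overlap fails)
    return rw_ok and ww_ok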

84

OCC Example

T1: read_object(X); X = X + 1; write_object(X); validation; <conflict resolution, e.g., restart transaction>

T2: read_object(X); read_object(Y); Y = X + Y; write_object(Y); validation; <conflict resolution, e.g., restart transaction>

T3: read_object(Y); Y = Y + 1; write_object(Y); ...

85

OCC: Conflict Resolution

When a transaction T is ready to commit, any higher-priority conflicting transaction is included in the set H

 Broadcasting commit (no priority consideration)

 T always commits and all conflicting transactions are aborted

 With priority consideration: if H is non-empty, 3 choices

 sacrifice policy: T is always aborted

 wait policy: T waits until the transactions in H commit; if they do commit, T is aborted

 wait-X policy: T commits unless more than X% of conflicting transactions belong to H
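A hedged sketch of the wait-X test at commit time (H and the conflict set are as defined above; the fallback handling is left to the sacrifice or wait behavior):

def wait_x_decision(t, conflicting, x=50):
    # T commits unless more than x% of its conflicting transactions have higher priority
    H = [c for c in conflicting if c.priority > t.priority]
    if not H:
        return "commit"        # no higher-priority conflicts: commit, aborting the rest
    if 100 * len(H) / len(conflicting) <= x:
        return "commit"
    return "defer"             # otherwise handle T as under the sacrifice/wait policies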

86

OCC: Comparison

 Broadcasting commit (no priority consideration)

 not effective in real-time databases

 Sacrifice policy: wasteful

 there’s no guarantee that a transaction in H will actually commit; if all in H abort, T was aborted for nothing

 Wait policy: address the above problem

 if T commits after waiting, it aborts lower priority transactions, which may not have enough time to restart and commit

 the longer T stays, the higher the probability of conflicts

 Wait-X policy: compromise between sacrifice and wait

 X=0: sacrifice policy; X=100: wait policy

 performance study shows X=50 gives the best results

87

Priority Ceiling Protocol

 Why?

 to provide “blocking at most once” property

 the system can compute (pre-analyze) the worst case blocking time of a transaction, and thus schedulability analysis for a set of transactions is feasible

 Complete knowledge of data and real-time transactions is necessary: for each data object, all the transactions that might access it need to be known

 true in certain applications (hard real-time applications)

 not applicable to other general applications

88

Priority Ceiling Protocol

 For each data object O:

 write-priority ceiling: the priority of the highest priority transaction that may write O

 absolute priority ceiling: the priority of the highest priority transaction that may read or write O

 r/w priority ceiling: dynamically determined priority

 which equals the absolute priority ceiling if O is write-locked, and equals the write priority ceiling if O is read-locked

 Ceiling rule: a transaction cannot lock a data object unless its priority is higher than the current highest r/w priority ceiling of data objects locked by other transactions

 Inheritance rule: low priority transaction inherits the higher priority from the ones it blocks

 Good predictability but high overhead
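A hedged sketch of the ceiling rule check; the per-object ceiling attributes and lock flags are assumptions following the definitions above:

def rw_ceiling(obj):
    # r/w priority ceiling, determined dynamically by the current lock mode
    if obj.write_locked:
        return obj.absolute_priority_ceiling
    if obj.read_locked:
        return obj.write_priority_ceiling
    return None

def may_lock(txn, objects_locked_by_others):
    # ceiling rule: txn may lock a data object only if its priority is higher than
    # the highest r/w ceiling among objects currently locked by other transactions
    ceilings = [rw_ceiling(o) for o in objects_locked_by_others]
    ceilings = [c for c in ceilings if c is not None]
    return not ceilings or txn.priority > max(ceilings)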

89


Overload Management and Recovery ...

91

Managing Overloads

 The result of overload is a slow response for the duration of the overload

 In real-time databases, catastrophic consequences may arise:

 hard real-time transactions must be guaranteed to meet deadlines under overloads

 transaction values must be considered when deciding which transactions to shed

 missing too many low-valued transactions with soft deadlines may eventually degrade system performance

 Dealing with overloads is complex and practical solutions are needed

92

Quality-Timeliness Tradeoffs during

Overload

 Achieve timeliness by trading off

 completeness: approximate processing by sampling data and extrapolation

 consistency: relax correctness requirements in controlled manner

(e.g., epsilon-serializability, similarity)

 currency: process transactions using older versions of data

(within the tolerance range of the application)

 precision: use algorithms that produce lower precision results within the deadline

 Exploit concepts from imprecise computing, monotonic computing, mandatory/optional structures, multi-precision algorithms, primary/contingency transactions, etc.

93

Scheduling for Overload Management

Background

 Dynamic real-time systems are prone to transient overloads

 requests for service exceed available resources for a limited time causing missed deadlines

 may occur when faults in the computational environment reduce computational resource available to the system

 may occur when emergencies arise which require computational resources beyond the capacity of the system

 Overloads cause performance degradation

 Schedulers are generally not overload-tolerant

94

Scheduling for Overload Management (2)

 Resource management has two components: scheduling and admission control

 Scheduling determines the execution order of admitted transactions, which might not be enough to handle the current overload situation

 Admission control determines which transactions should be granted system resources

 To resolve transient overloads, the system needs both

 deciding when to execute transactions and selecting which version to execute (the original, an alternative, or a contingency transaction), if the transaction is accepted

95

Scheduling for Overload Management (3)

 Goal: dynamic overload management with graceful performance degradation (meeting all critical deadlines)

 Problem: need to handle complex workload

 critical and non-critical transactions -- some are sporadic and others aperiodic (no minimum inter-arrival time information available)

 non-critical transactions can be discarded in a controlled manner while critical transactions are replaced by alternative or contingency transactions (with shorter execution time)

 resources are reallocated among transactions that are admitted to the system using value-functions

96

Scheduling for Overload: Assumptions

Transaction and Workload:

 Critical transactions are sporadic and have a corresponding contingency transaction

 Non-critical transactions are aperiodic

 Each transaction is pre-declared and pre-analyzed with known worst case execution time

 Critical deadlines must be guaranteed even under overload conditions

System Characteristics:

 Dedicated CPU for scheduling activities is desirable; otherwise, only simple policies can be implemented

97

Scheduling Module

Scheduler consists of several components

 Pre-analysis of schedulability: critical transactions are pre-analyzed to check whether they can be executed properly and how much reduction in resource requirement can be achieved by using contingency transactions

 Admission controller determines which transactions will be eligible for scheduling

 Scheduler can schedule according to different metrics

 deadline-driven

 value-driven

 Overload Resolver decides the overload resolution actions

 Dispatcher dispatches from the top of the ready queue (highest priority)

98

Scheduling Components

[Diagram: incoming transactions enter the Admission Controller; rejected transactions go to a rejection queue, admitted ones to the Transaction Scheduler, which maintains the ready queue consumed by the Dispatcher. The Overload Resolver interacts with the Admission Controller and the Transaction Scheduler.]

99

Overload Resolution Strategies

Admission Controller:

 Reject transaction

 Admit contingency action

Scheduler:

 Drop transaction (firm/soft)

 Replace transaction with contingency action (hard)

 Postpone transaction execution (soft)
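One possible combination of these strategies, as a hedged Python sketch (the criticality and contingency attributes and the feasibility test are assumptions):

def admit(txn, feasible):
    # admission controller: admit, admit the contingency version, or reject
    if feasible(txn):
        return txn
    if txn.criticality == "hard" and feasible(txn.contingency):
        return txn.contingency
    return None                                          # reject transaction

def resolve_overload(ready, feasible):
    # scheduler-side overload resolution over the admitted transactions
    decisions = []
    for txn in ready:
        if feasible(txn):
            decisions.append((txn, "run"))
        elif txn.criticality == "hard":
            decisions.append((txn.contingency, "run"))   # replace with contingency action
        elif txn.criticality == "soft":
            decisions.append((txn, "postpone"))          # postpone execution
        else:
            decisions.append((txn, "drop"))              # drop firm transaction
    return decisions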

100

Recovery Issues

 Recovery of temporal as well as static data necessary

 Not always necessary to recover original state because of temporal validity intervals and application semantics:

 if recovery takes longer than the absolute validity interval, it would be a waste to recover that value

 example: recovery from a telephone connection switch failure

 if connection already established: recover billing information and resources, but no need to recover connection information

 if connection was being established: recover assigned resources

101

Recovery Issues (2)

 Real-time database recovery must consider

 time and resource availability: recovery must not jeopardize ongoing critical transactions

 available transaction semantics (or state): contingency or compensating transactions can be used

 state of the environment: extrapolation of state may be possible, or more up-to-date data may be available from the sensor soon

 It’s appropriate to use partitioned and parallel logging so that critical data with short validity interval can be recovered first, without going through the entire log

 classify data according to its update frequency and importance, and utilize non-volatile high-speed memory for logging

102


IV. Database Techniques for Real-Time

Applications

Outline

 Active, main-memory, and object-oriented databases

 Flexible security paradigm for real-time applications

 Embedded databases

104

Active Database Systems

 Database manager reacts to events

 Transactions can trigger other transactions

 Triggers and alerters

Actions specified as rules

 ECA-rules (event - condition - action)

 Upon Event occurrence, evaluate Condition, and if condition is satisfied, trigger Action

 Coupling modes: immediate (the triggered action will be executed right away), deferred (it will be executed at the end of the current transaction), detached (scheduled as a separate transaction)

 Cascaded triggering is possible

105

Active Real-Time Database Systems

 Real-time systems are inherently reactive

 ECA-rules provide uniform specification of reactive behavior

Problems with active database systems techniques:

 Additional sources of unpredictability

 event detection and rule triggering

 all coupling modes are not feasible

 No specification of time constraints

 Techniques are not time-cognizant

106

Temporal scope...

[Figure: the total response time spans the event detection time (event occurrence, event detection, composite event detection, event delivery), the rule execution time (rule retrieval, condition evaluation, action spawning), and the action execution time (action execution start to action execution complete).]

107

Main Memory RTDBS

 Characteristics of Main Memory DBS

 the primary copy of data resides in main memory, not in disks as in conventional database systems

 memory resident data need a backup copy on disk

 Why being pursued?

 it becomes feasible to store larger databases in memory as memory becomes cheaper and chip densities increase

 direct access in memory provides much better response time and transaction throughput, and improved predictability due to the lack of I/O operations

108

Main Memory RTDBS (2)

 Difference from disk-resident databases with large cache

 it can use index structures specially designed for memory resident data (e.g., T-tree instead of B-tree)

 it can recognize and exploit the fact that some data reside permanently in memory -> data partitioning

 Data partitioning can be effectively used

 different classes of data: hot, warm, cool, and cold, based on the frequency of access and/or timing constraints of the access

(deadline of the transactions)

 in telephone switching systems, for example, routing information is hot, while customer address data is cold

109

Main Memory RTDBS (3)

 Consequences of memory board failures in MMDBS

 typically, the entire machine needs to be brought down, losing the entire DB, while in a disk-resident DB only the affected portion is unavailable during recovery

 recovery is time-consuming, and having a very recent backup available is highly preferred -> backups need to be taken more frequently, resulting in high cost

--- performance of backup mechanism is critical

110

Impacts of Memory-Residency in RTDBS

 Concurrency control

 since lock duration is short, using small locking granules to reduce contention is not effective

--- large lock granules are appropriate in MM-RTDBS

 even serial execution can be a possibility, eliminating the cost of concurrency control

 potential problems of serial execution:

 long transactions may need to run concurrently with short ones (serial execution blocks the short ones)

 need synchronization for multiprocessor systems

 lock information can be included in the data object, reducing the number of instructions for lock handling

--- performance improvement

111

Impacts of Memory-Residency in RTDBS (2)

 Commit processing

 to protect against failures, logging/backup necessary

--- log/backup must reside in stable storage (e.g., disks)

 before a transaction commits, its activity must be written in the log: write-ahead logging (WAL)

 logging threatens to undermine performance advantage:

 response time: transaction must wait until logging is done on disk -> logging can be a performance bottleneck

 possible solutions:

 small in-memory log, using non-volatile memory (e.g., flash memory)

 pre-commit and group commit strategy
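A minimal sketch of the group-commit idea, in which log records are buffered and flushed to stable storage in batches so that many committing transactions share one synchronous write (all names are illustrative):

import time

class GroupCommitLog:
    def __init__(self, device, batch_size=32, max_wait=0.005):
        self.device = device            # stands in for stable storage (e.g., an open file)
        self.batch = []
        self.batch_size = batch_size
        self.max_wait = max_wait        # bound on how long a commit may be delayed
        self.oldest = None

    def append(self, record):
        self.batch.append(record)
        if self.oldest is None:
            self.oldest = time.monotonic()
        # flush when the batch is full or the oldest record has waited long enough
        if len(self.batch) >= self.batch_size or \
           time.monotonic() - self.oldest >= self.max_wait:
            self.flush()

    def flush(self):
        if self.batch:
            self.device.write("".join(self.batch))   # one write covers many commits
            self.device.flush()                      # a real log would also fsync here
            self.batch, self.oldest = [], None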

112

Impacts of Memory-Residency in RTDBS (3)

 Query processing

 sequential access is not significantly faster than random access for memory-resident data -> techniques taking advantage of faster sequential access lose merit

 query processor must focus on actual processing costs, instead of minimizing the number of disk accesses

 costly operations such as index creation or data copying should be identified first, and then processing strategy can be designed to reduce their occurrences

 because of memory residency of data, it is possible to construct compact data structures for speed up

--- e.g., join operation using pointers

113

Trends in Memory-Resident RTDBS

 Extended use of pointers

 relations are stored as linked lists of tuples, which can be arrays of pointers to attribute values

 Combined hashing and indexing:

 linear hashing for unordered data and T-tree for ordered data

 Large lock granules or multi-granule locks

 Deferred updates, instead of in-place update

 Fuzzy checkpointing for reduced transaction locking time

 Special-purpose hardware support for logging and recovery

 Object-oriented design and implementation

114

Object-Oriented RTDBS

 OO data models support

 modeling, storing, and manipulating complex objects and semantic information in databases

 encapsulated objects

 OO data models need (for RT applications)

 time constraints on objects, i.e., attributes and methods

 Objects more complex -> unit of locking is the object -> less concurrency

 memory-resident RTDB may fit well with this restriction

 inter-object consistency management could be difficult

 Need better solutions to provide higher concurrency and predictable execution for RT applications

115


Real-Time Security Paradigm ...

117

Real-Time Secure Data Management

 Characteristics

 transactions with timing constraints

 data with temporal properties

 mixture of sensitive and unclassified data

 Requirements

 timeliness and predictability

 temporal consistency

 adaptive security enforcement

 high performance

118

Real-Time Secure Data Management

 Issues

 integrate support of different types of requirements

 predictability yet flexible execution

 conflicts between real-time and security

 real-time management of resources

 high performance yet fault-tolerant

 trade-offs

 scalability of solutions

119

Database Security

 Security services

 to safeguard sensitive information

 encryption, authentication, intruder detection ...

 Multilevel security (MLS)

 objects are assigned with security classification

 subjects access objects with security clearance

 no flow of information from higher level to lower one

 Applications

 almost everywhere (becoming a buzzword)

 more flexibility necessary (from static, known environment to dynamic unknown environment)

120

Trends

 Increasing number of systems operate in unpredictable (even hostile) environments

 task set, resource requirements (e.g., worst-case execution time) ...

 High assurance required for performance-critical applications

 System properties for high assurance

 real-time (timeliness, temporal consistency ..)

 security (confidentiality, authentication ..)

 fault-tolerance (availability, reliability ..)

 Each property has been studied in isolation

121

Security and Real-Time

 For timeliness, no priority inversion in real-time applications

 tasks with earlier deadlines or higher criticality have higher priority for better service

 In secure systems, no security violation is allowed

 Incompatible under the binary notion of absolute security

 priority inversion vs security violation

 Higher security services require more resources

122

Example of the Problem

T1: high priority, high security level

T2: low priority, low security level

Both require a lock on the same resource. How to resolve this conflict?

 if the lock is given to T1, security violation

 if the lock is given to T2, priority inversion

123

Requirement for Real-Time Secure DBS

Supporting both requirements of real-time and security for real-time databases:

How can the system provide acceptably high security while remaining available and providing timely services?

124

Research Issues

 Flexible security vs absolute security

 paradigm for flexible security services

 identifying correct metrics for security level

 Adaptive security policies

 Mechanisms to enforce required level of security and trading-off with other requirements:

 access control, authentication, encryption, ..

 time-cognizant protocols, data deadlines, ...

 replication, primary-backup, ...

 Specification to express desired system behavior

 verification of consistency/completeness of specification

125

Flexible Security Services

 Flexible vs absolute (binary) security

 traditional notion of security is binary: secure or not

 problem of binary notion of security: difficult to provide acceptable level of security to satisfy other conflicting requirements

 research issue: quantitative flexible security levels

 One approach

 represent in terms of % of potential security violations

 problem: not precise --- the percentage alone reveals nothing about the implications for system security, e.g., a 1% violation may leak the most sensitive data

126

Flexible Security for Access Control

 Possible approaches to provide flexible security

 control potential violations between certain security levels

 even if it allows potential security violations, it does not completely compromise the security of the system

 use different algorithms in an adaptive manner

 A possible configuration

[Figure: four example configurations (A, B, C, D), each over the four security levels Top secret, Secret, Confidential, and Unclassified.]

127

Flexible Security Policies (5 levels)

 Completely secure: no violations allowed

 Secure levels 2, 3 & 4: high 3 levels kept completely secure

 Secure levels 3 & 4: high 2 levels kept completely secure

 Split security: violations allowed between top 2 levels, and among low 3 levels

 Secure level 4: highest level kept completely secure

 No security: violations can occur between any levels

 Gradual security: control the number of violations between each level

128

Performance of Flexible Access Control

 Significant improvement in real-time performance as more potential covert channels allowed:

 completely secure (6.5%) vs no security (3.3%) for 500 data items

 completely secure (5%) vs no security (1%) for 1000 data items

 Trade-off capacities of security policies are strictly ordered:

 from completely secure through multiple secure levels to no security

129

Improved Functionality

 Exploiting real-time properties for improved/new features

 Example: Intrusion detection

 sensitive data objects are tagged with “time semantics” that capture normal behavior about read/update

 time semantics should be unknown to intruder

 violation of security policy can be detected:

• suspicious update request can be detected using a periodic update rate

• tolerance in the deviation from parameterized normal behavior can be specified
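A hedged sketch of the periodic-rate check suggested here; expected_period, tolerance, and last_update stand for the object's parameterized "time semantics" (illustrative names):

def suspicious_update(obj, update_time):
    # flag an update whose inter-arrival time deviates too far from the expected period
    if obj.last_update is None:
        return False
    interval = update_time - obj.last_update
    return abs(interval - obj.expected_period) > obj.tolerance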

130

Adaptable Security Manager

 Need for resource tradeoffs in database services

 Adaptable security manager fits well with the concept of multiple service levels in real-time secure database systems

 Short term relaxation of security could be preferable to missed critical deadlines

 aircraft attack warning during burst of battlefield updates

 loss of production time for missed agile manufacturing command

131

Features of Adaptable Security Manager

 Multiple security levels on users/objects or communications

 computation costs increase with level of security

 Client negotiated range of security levels for transaction communications

 Dynamic level changes as a function of real-time load

132

Security Manager Environment

[Diagram: a Client issues session and transaction requests and receives transaction results. The Security Manager consults a Client Table (client security level & key) and a Session Table (session keys & status), hands transactions off to TransData worker threads for object reads and writes, and passes transaction objects and session data to the Mapper/Admission Control and Scheduler in front of the BeeHive DB. Arrows distinguish data flow from execution control.]

133

Security Level Synchronization

[Timing diagram: the security level (0-3) of the Security Manager and of Client X over time, with Sn denoting sending the nth message and Rn receiving it. A two-step switch is shown: prepare to switch, then switch once the last message is accounted for and acknowledged. Legend: transaction request, request with switch command, acknowledgment, transaction response, response with switch command.]

134

Performance: Adaptive vs. Non-Adaptive

[Graph: performance versus deadline/mean execution time ratio (from 2 down to 0.2), comparing 100% adaptive, 50% adaptive, and 0% adaptive clients.]

In adaptive control, the system lowers the security dynamically

135

Level Switching

(100% adaptive client)

[Graph: for the 100% adaptive client, the security level (0-3) and the percentage of deadlines made (0-120) versus deadline/mean execution time ratio (from 2 down to 0.2). It shows the security level change and the miss ratio change.]

136

Performance Results

 Good performance gains achievable in soft real-time system during overload conditions

 When the overload is not severe, switching the security level can bring the desired performance back (as shown in the graph)

 If the system is too heavily overloaded or some component has failed, then even reducing the security level to 0 cannot keep the system working properly (meeting critical deadlines)

 Performance gain also depends on other factors such as message size and I/O cost: significant performance improvement with large message sizes and large I/O overhead

137


Embedded Real-Time Databases....

139

What’s an Embedded Database?

 Same principal functionality as a desktop database (excluding the most complex operations)

 Two types:

 Application-embedded databases

 Generally few real-time requirements

 Device-embedded databases

 Embedded systems

 Strict timing constraints involved

140

Requirements of Device-Embedded DBS

 Small footprint due to limited storage and memory resources

 ~150 Kb

 High dependability

 continuous uptime with little or no maintenance (i.e., the database should be able to perform recovery by itself).

 Mobile capabilities

 Interoperability and portability

 Communication and synchronization with external databases

 Real-time constraints

 Maintainability

 Security

141

Existing ”Embedded Databases”

 Progress

 Ardent Software

 InterSystems

 Centura SQLBase Embedded Database

 IBM DB2 Everywhere and Universal Database Satellite

 Microsoft SQL Server

 Oracle8i Lite

 Sybase SQL Anywhere Studio (and UltraLite Deployment Option)

 Pervasive.SQL 2000 SDK for Mobile and Embedded Devices

144

Features/Properties of Current Commercial

Embedded Database Systems

 Down-scaled version of the full-sized versions, i.e., still a conventional database

 Primarily targeted for general-purpose applications that require DBs

 No explicit support for real-time features

145

Embedded Database Systems Applications

 Mobile databases

 Portable computing devices

 Smart cellular phones with Internet access

 PDAs

 Laptops

 Embedded systems

 Car engine control, brake system control, ...

 ’Tiny’ embedded databases

 Smart cards

 Intelligent appliances

 Network routers and hubs

 Set-top Internet-access boxes

146

Embedded DBS: Research Challenges

 Portability and interoperability

 Availability

 Recovery protocols that recover the database while the database is still guaranteeing some level of service

 Continuous up-time

 Query language

 What are the necessary operators (application dependent)

 Concurrency control schemes

 Architecture

 Building a database from a portfolio of modules (components)

 Application-dependent tuning of functionality and configuration

 Minimizing functionality -> minimizing memory usage

148

150

Applications

 Air Traffic control

 Aircraft Mission Control

 Command Control Systems

 Distributed Name Servers

 Manufacturing Plant

 Navigation and Positioning Systems

 automobiles, airplanes, ships, space station

 Network Management Systems

152

Applications (2)

 Real-time Process Control

 Spacecraft Control

 Telecommunication

 Cellular phones

 Normal PBX

 Training Simulation System

 Pilot training

 Battlefield training

153

Air Traffic Control

 Multiple control centers

 Controls airspace

 Terminal areas

 Enroute airspace

 Database

 aircraft: aircraft identification, transponder code, altitude, position, speed, etc.

 flight plans: origin, route, destination, clearances, etc.

 environment data: airport configuration, navigation facilities, airways, restricted areas, notifications, weather data, etc.

154

Air Traffic Control (2)

 Contents and Size of DB

 350 airports, 250 navigation facilities, and 1,500 aircraft

 weather data, etc.

 DB size: ~20,000 entities

 Time requirements

 mean latency: 0.5 ms

 max latency: 5 ms

 external consistency: 1 sec

 temporal consistency: 6 secs

 permanence requirements: 12 hours

155

Military Aircraft Mission Control

 Database:

 Tracking information

 2000 air vehicles

 250 ground entities, e.g., vehicles, airports, radars, etc.

 flight plan, maps, intelligence etc.

 DB size: ~3000 - 4000 entities

156

Military Aircraft Mission Control (cont’d)

 Time requirements

 mean latency: 0.05 ms

 max latency: 1 - 25 ms

 external consistency: 25 ms

 temporal consistency: 25 ms

 permanence req.: 4 hours

157

Integrated Automobile Control

 TCM - Transmission Control Module

 TCS - Traction Control System

 CBC - Corner Braking Control

 DCS - Dynamic Safety Control

 ESP - Electronic Stabilization Program

 Car Diagnosis Systems

 Hard and soft time constraints (TCs)

 Significant interaction with external environment

 Distributed

160

161

VI. Real-Time Database Research Prototype:

BeeHive System

162

Commercial “RTDBs”

 Polyhedra

 http://www.polyhedra.com/

 Tachys, (Probita)

 http://www.probita.com/tachys.html

 ClustRa

 http://www.clustra.com

 DBx

 Eaglespeed (Lockheed Martin)

 RTDBMS (NEC)

 … (Mitsubishi)

163

RTDBS Research Projects

 BeeHive

 University of Virginia, USA

 DeeDS

 University of Skövde, Sweden

 Rodain

 University of Helsinki, Finland

 RT-SORAC

 University of Rhode Island, USA

 MDARTS

 University of Michigan, USA

 STRIP

 Stanford University, USA

164

BeeHive: Global Multimedia Database Support for Dependable, Real-Time Applications

Real-Time Systems Group

Dept of Computer Science

University of Virginia

165

Applications of BeeHive

 Real-Time Process Control

 hard deadlines, main memory, need atomicity and persistence

 limited or no (i) schema, (ii) query capability

 Agile Manufacturing

 Business Decision Support Systems

 information dominance

 Intelligence Community

 Global Virtual Multimedia Real-Time DBs

166

The Problem Scenario

[Figure: a world-wide network connects real-time, active temporal DBs (troop positions), a multimedia DB (satellite imagery), archival DBs, and news services; the user needs a summary report by 4 PM.]

167

Transaction Deadlines in BeeHive

 Hard - deadline must be met else catastrophic result

 suitable for some RTDBs in which timing constraints must be guaranteed and the system provides the predictability needed for such guarantees

 Firm - deadline must be met else no value to executing transaction (just abort)

 Soft - deadline should be met, but if not, continue to process until complete
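As a minimal illustration of the three deadline classes above, the sketch below models them as value functions over completion time; the numbers and the function name are illustrative only and are not part of BeeHive.

```python
# Illustrative value functions for hard, firm, and soft deadlines.
def value(deadline_class, t, deadline, full_value=1.0, penalty=-10.0, grace=2.0):
    """Value of completing a transaction at time t."""
    if t <= deadline:
        return full_value                      # deadline met
    if deadline_class == "hard":
        return penalty                         # missing it is catastrophic
    if deadline_class == "firm":
        return 0.0                             # no value; just abort
    # soft: value decays after the deadline but completing still helps for a while
    return max(0.0, full_value * (1 - (t - deadline) / grace))
```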

168

Absolute Validity Interval (AVI) and

Relative Validity Interval (RVI)

[Figure: update timelines for data items X and Y, with X values valid for 10 time units and Y values valid for 20.]

Absolute Validity Interval (X) = 10

Absolute Validity Interval (Y) = 20

Relative Validity Interval (X-Y) < 15
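A minimal sketch of how these checks could be written, using the interval values above; the function names and example timestamps are illustrative assumptions.

```python
# Illustrative validity checks: avi(X) = 10, avi(Y) = 20, rvi(X, Y) = 15.
def absolutely_valid(timestamp, avi, now):
    """A data item is absolutely valid if its age does not exceed its AVI."""
    return now - timestamp <= avi

def relatively_valid(ts_x, ts_y, rvi):
    """A pair is relatively valid if the two timestamps are close enough."""
    return abs(ts_x - ts_y) < rvi

now = 30
ts_x, ts_y = 25, 20                          # hypothetical last update times
print(absolutely_valid(ts_x, 10, now))       # True: X is 5 time units old
print(absolutely_valid(ts_y, 20, now))       # True: Y is 10 time units old
print(relatively_valid(ts_x, ts_y, 15))      # True: |25 - 20| = 5 < 15
```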

169

Data in BeeHive

 Data from sensors (including audio/video)

 Derived data

 Time-stamped data

 Absolute consistency - environment and DB

 Relative consistency - multiple pieces of data

 Schema and meta data

 User objects (with semantics)

 Pre-declared transactions (with semantics)

170

Global Virtual Databases - BeeHive

 Dynamically reconfigure and collect DBs (tailored for some enterprise)

 Interact with External DBs

 Utilize Distributed Execution Platforms

 Properties

 Real-Time

 QoS

 Fault Tolerance

 Security

171

BeeHive System

[Figure: BeeHive system architecture. Native BeeHive sites form the BeeHive system; an external RDBMS, an OODB, and raw data sources are attached through BeeHive Wrappers (BW), and a search facility spans the whole system.]

172

BeeHive Object Model

 BeeHive Object is specified by <N, A, M, CF, T>

 N, the object ID

 A, set of attributes (name, domain, values)

 value -> value and validity interval

 semantic information

 M, set of methods

 name and location of code, parameters, execution time, resource needs, other semantic information

 CF, compatibility function

 T, timestamp
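A possible rendering of the object tuple <N, A, M, CF, T> as data structures is sketched below; all class and field names are hypothetical and only mirror the bullets above, not the actual BeeHive interface.

```python
# Hypothetical sketch of the BeeHive object tuple <N, A, M, CF, T>.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Attribute:
    name: str
    domain: str
    value: Any
    validity_interval: float                  # each value carries a validity interval
    semantics: Dict[str, Any] = field(default_factory=dict)

@dataclass
class Method:
    name: str
    code_location: str                        # name and location of the code
    parameters: List[str]
    execution_time: float                     # estimated execution time
    resource_needs: Dict[str, Any] = field(default_factory=dict)

@dataclass
class BeeHiveObject:
    N: str                                    # object ID
    A: Dict[str, Attribute]                   # set of attributes
    M: Dict[str, Method]                      # set of methods
    CF: Callable[[str, str], bool]            # compatibility function over methods
    T: float                                  # timestamp
```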

173

BeeHive Transactions

 BeeHive transaction is specified by < TN, XT, I, RQ, P >

 TN, unique ID

 XT, execution code

 I, importance

 RQ, set of requirements (for each of RT, QoS, FT, and security) and optional pre- and post-conditions

 P, policy for tradeoffs

 Example: if all resources cannot be allocated, reduce the FT requirement from 3 to 2 copies.
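Continuing the sketch above, the transaction tuple <TN, XT, I, RQ, P> and the 3-copies-to-2 trade-off policy might look roughly as follows; again, all names are hypothetical.

```python
# Hypothetical sketch of the BeeHive transaction tuple <TN, XT, I, RQ, P>.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class BeeHiveTransaction:
    TN: str                                        # unique transaction ID
    XT: Callable[..., Any]                         # execution code
    I: int                                         # importance
    RQ: Dict[str, Any]                             # RT, QoS, FT, and security requirements
    P: Callable[[Dict[str, Any]], Dict[str, Any]]  # policy for trade-offs

def example_policy(rq: Dict[str, Any]) -> Dict[str, Any]:
    """Slide example: if all resources cannot be allocated,
    reduce the FT requirement from 3 copies to 2."""
    relaxed = dict(rq)
    if relaxed.get("FT_copies", 0) > 2:
        relaxed["FT_copies"] = 2
    return relaxed
```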

174

Dealing With Soft/Firm RT Transactions

 Resolve inter-transaction conflicts in time cognizant manner (concurrency control)

 Assign transaction priorities (cpu scheduling)
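As a minimal illustration of deadline-based priority assignment for soft/firm transactions (earliest deadline first, which the exercises also refer to), here is a hypothetical sketch:

```python
# Illustrative EDF-style priority assignment; smaller number = higher priority.
def assign_priorities(transactions):
    """transactions: list of (transaction_id, deadline) pairs."""
    ordered = sorted(transactions, key=lambda t: t[1])    # earliest deadline first
    return {tid: prio for prio, (tid, _) in enumerate(ordered)}

# Example: T2 has the earliest deadline, so it gets the highest priority (0).
print(assign_priorities([("T1", 120), ("T2", 45), ("T3", 80)]))
# {'T2': 0, 'T3': 1, 'T1': 2}
```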

175

Goals

 Maximize the number of TRs (sensor and user) that meet deadlines

 Keep data temporally valid

 on overload, allow invalid intervals on data (note that such data may not be used while it is invalid)

176

The External Interface

[Figure: the external interface. A Cogency Monitor, external to BeeHive, maintains data drawn from the Internet and open databases (structured, unstructured, and raw data); returned data goes through data manipulation, object mapping, and BeeHive integration before entering BeeHive.]

177

Taxonomy of external data

 Structured (databases)

 Unstructured (search engines)

 Raw (video, audio, sensors)

178

Cogency Monitor

 Support value added services

 RT, FT, QoS, and Security

 Execute client supplied functionality

 Map incoming data into BeeHive objects

 Monitor the incoming data for correctness and possibly make decisions based on the returned data

 Not just a firewall
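A hedged sketch of what such a monitor wrapper could look like; the function names, fields, and thresholds are invented for illustration and are not the actual BeeHive interface.

```python
# Hypothetical cogency monitor wrapper: timestamp, filter, and map incoming data.
import time

def reasonable(value, low, high):
    """Filter: check a returned value for reasonableness."""
    return low <= value <= high

def cogency_monitor(fetch_external, low, high, timeout):
    """Fetch external data, reject late or unreasonable results,
    and map accepted results into an internal record."""
    start = time.time()
    raw = fetch_external()                        # client-supplied functionality
    arrival = time.time()
    if arrival - start > timeout:                 # real-time timeout
        return None
    if not reasonable(raw, low, high):            # correctness monitoring
        return None
    return {"value": raw, "timestamp": arrival}   # mapped into an internal object
```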

179

Cogency Monitor GUI

 Pick and choose from template (value added services)

 Upon one choice, a compatibility function executes that limits the other choices

 Identify added functionality

 correctness

 mapping to internal BeeHive objects

 Automatic generation of the Cogency Monitor

 generated automatically using library functions (not implemented yet)

180

Value Added Services of Cogency Monitor

 Real-time features

 Real-time timeout

 Periodic activation

 Start time

 Return partial results

 Timestamp incoming data

 Broadcast in parallel to multiple sites for faster response

 Monitor response times (and/or adjust deadlines dynamically)

 Fault tolerance features

 Retries

 Return previous/old results

 Primary and N backup sites

 Filters (check returned values for reasonableness)

 Keep history log

 Quality of Service features

 Reserve bandwidth

 Compress data

 Keep multiple resolution versions of data

 Security features

 Encryption

 Access control

 Maintain a firewall (place cogency monitor external to BeeHive)

 Keep audit trail

181

Unstructured cogency monitor

182

Current Research Activities in BeeHive

[Figure: current research activity areas, including the BeeHive front end (Java), the cogency monitor, simulation, the basic BeeHive storage manager, database expansion, RT threads, admission control, RTDB internals, and security.]

183

Summary: BeeHive Project

 Global real-time database system

 object-based with added object semantics

 support in RT, FT, QoS, and Security

 different types of data: video, audio, images and text

 sensors and actuators

 Novel component technology

 data deadline, forced delay, conditional priority inheritance

 real-time logging and recovery

 security-performance tradeoff

 resource models for admission control

 Cogency Monitor

184

185

VII. Trends, Challenges, & Research Issues

186

Future Research Areas that Affect RTDB

Research

 Interface and component libraries

 open interfaces

 dealing with semantic mismatches

 micro and macro components

 Real-time data/information centric view

 as opposed to task centric view currently used

 Adaptive scheduling and decision making based on changing situation and incomplete workload and component profiles

 Component-based tool sets

 Configuration tools

 Tools to specify and integrate requirements of real-time and fault-tolerance

191

Future Research Areas that Affect RTDB

Research (2)

“New Requirements”

 Complex software must evolve

 Software must be portable to other platforms

 develop once

 verify once

 certification and verification is very expensive

 port and integration should be automatic

 Flexible real-time data support

 one size does not fit all: minimal functionality and a small footprint for embedded systems, but complete and full functionality for back-end server applications

192

Future Research Areas in RTDBS

 Resource management and scheduling

 temporal consistency guarantee (especially relative validity intervals)

 interactions between hard and soft/firm RT transactions

 transient overload handling

 I/O and network scheduling

 models which maximize both concurrency and resource utilization

 support of different transaction types for flexible scheduling: Alternative, Compensating, Imprecise

 Recovery

 availability (partial) of critical data during recovery

 semantic-based logging and recovery

193

Future RTDBS Research Areas (2)

 Concurrency Control

 alternative correctness models (relaxing ACID properties)

 integrated and flexible schemes for concurrency control

 Fault tolerance and security models to interact with RTDBS

 Query languages for explicit specification of real-time constraints -> RT-SQL

 Distributed real-time databases

 commit processing

 distribution/replication of data

 recovery after site failure

 predictable (bounded) communication delays

194

Future RTDBS Research Areas (3)

 Data models to support complex multimedia objects

 Schemes to process a mixture of hard, soft, and firm timing constraints and complex transaction structures

 Support for integrating requirements of security and fault-tolerance with real-time constraints

 Performance models and benchmarks

 Support for more active features in real-time context

 techniques for bounding time in event detection, rule evaluation, rule processing mode, etc.

 associate timing constraints with triggering mechanisms

 Interaction with legacy systems (conventional databases)

195

196

VIII. Exercises

197

Exercise (1)

Suppose we have periodic processes P1 and P2, which measure pressure and temperature, respectively. The absolute validity interval of both of these parameters is 100 ms. The relative validity interval of a temperature-pressure pair is 50 ms. What is the maximum period of P1 and P2 that ensures that the database system always has a valid temperature-pressure pair reading?

198

Exercise (2)

Sometimes a transaction that would have been aborted under the two-phase locking protocol can commit successfully under the optimistic protocol. Why is that? Develop a scenario in which such a transaction execution occurs.

199

Exercise (3)

Explain why EDF does not work well in heavily loaded real-time database systems, and propose how you can improve the success rate by adapting EDF. Will your new scheme work as well as EDF in lightly loaded database systems? Will it work well in real-time applications other than database systems?

200

Exercise (4)

Give examples of applications where it is permissible to relax one or more ACID properties of transactions in real-time database systems.

201

Exercise (5)

Suppose a transaction T has a timestamp of 100. Its read-set has X1 and X2, and its write-set has X3, X4, and X5. The read timestamps of these data objects are (prior to adjustment for the commitment of T) 5, 10, 15, 16, and 18; their write timestamps are 90, 200, 250, 300, and 5, respectively. What should the read and write timestamps be after the successful commitment of T? Will the values of X3, X4, and X5 be changed when T commits?

202

Exercise (6)

Why are the concurrency control protocols used in conventional database systems not very useful for real-time database systems? What information can be used by real-time database schedulers?

203

Exercise (7)

Compare pessimistic and optimistic approaches in concurrency control when applied to real-time database systems. Discuss different policies in optimistic concurrency control and their relative merits.

204

Exercise (8)

Are the techniques developed for real-time operating systems schedulers directly applicable to real-time database schedulers? Why or why not?

205

Exercise (9)

Discuss design criteria for real-time database systems that are different from those of conventional database systems. Why might conventional recovery techniques based on logging and checkpointing not be appropriate for real-time database systems?

206

Exercise (10)

What are the problems in achieving predictability in real-time database systems? What are the limitations of the transaction classification method we discussed in this course to support predictability?

207
