
TDDB47 Real Time Systems

Lecture 3: Sharing resources and dynamic scheduling

Calin Curescu

Real-Time Systems Laboratory

Department of Computer and Information Science

Linköping University, Sweden

The lecture notes are partly based on lecture notes by Simin Nadjm-Tehrani, Jörgen Hansson, and Anders Törne.

They also loosely follow Burns and Wellings' book “Real-Time Systems and Programming Languages”. These lecture notes should only be used for internal teaching purposes at Linköping University.


Shared resources

• The “Simple Process Model” assumes independence among the tasks

– In reality this is seldom the case

• Shared critical resource – a block of code that is shared by several processes but requires mutual exclusion at any point in time

• Blocking – a higher priority process waiting for a lower priority one

• A “priority inversion” occurs

• Note: preemption – when a lower priority process waits for a higher priority one – is considered normal

• The blocking time must be

– bounded

– measurable

– small


Avoidable priority inversion

• When a process waits for a lower priority process that holds a resource it needs, the priority inversion is unavoidable

• An avoidable, really bad case:

– A low priority process (P1) locks the resource

– A high priority process (P2) has to wait on the semaphore (blocked state)

– A medium priority process (P3) preempts P1 and runs to completion before P2!


Priority Inversion example

• Q (or V) – an execution tick spent inside the Q (or V) critical section, which is protected by mutual exclusion

• E – execution tick not accessing critical resources


Example (cont'd)

Priority Inheritance

• Scheme for eliminating avoidable priority inversions

– Process’ priority is no longer static

• If P1 is blocked waiting for a resource held by P2, and P1 has a higher priority, then the priority of P2 is raised to P1's priority

• Priority inheritance

– Is transitive

– Guarantees an upper bound on the blocking time, as P1 is blocked only while P2 is using the resource

– Mutual exclusion guaranteed by protocol

– Deadlocks are still possible
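As an illustration, here is a minimal Python sketch of the inheritance rule; the class and function names are hypothetical, a real kernel integrates this with its scheduler and wait queues, and transitive inheritance is not modelled:

```python
# Minimal sketch of basic priority inheritance bookkeeping. Class and
# function names are hypothetical; transitive inheritance is not modelled.

class Task:
    def __init__(self, name, base_priority):
        self.name = name
        self.base = base_priority      # static (default) priority
        self.active = base_priority    # dynamic priority

class Resource:
    def __init__(self, name):
        self.name = name
        self.owner = None              # task currently holding the lock

def request(task, resource):
    """Try to lock; on contention the owner inherits the blocked task's priority."""
    if resource.owner is None:
        resource.owner = task
        return True                    # lock granted
    owner = resource.owner
    if task.active > owner.active:
        owner.active = task.active     # priority inheritance
    return False                       # caller is blocked

def release(task, resource):
    resource.owner = None
    task.active = task.base            # drop back to the static priority

# Low-priority P1 holds Q; high-priority P2 requests it and P1 inherits priority 3.
p1, p2, q = Task("P1", 1), Task("P2", 3), Resource("Q")
request(p1, q)
request(p2, q)
print(p1.active)                       # -> 3
release(p1, q)
print(p1.active)                       # -> 1
```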


Priority inheritance & blocking

• Worst case blocking time calculation

B_i = Σ_{k=1}^{K} usage(k, i) · C(k)

– B_i – the maximum blocking time of process i

– usage(k, i) = 1 if resource k is used by at least one process with a priority less than P_i and by at least one process with a priority greater than or equal to P_i, otherwise 0

– C(k) – the worst-case execution time of the critical section of resource k

• This is an upper bound

– more accurate formulas may be available
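The bound can be computed mechanically from a table of which processes use which resources. A small Python sketch, with made-up task and resource data:

```python
# Sketch of the worst-case blocking bound under priority inheritance:
# B_i = sum over k of usage(k, i) * C(k). All task/resource data is made up.

def usage(k, i, priorities, users):
    """1 if resource k is used both by a process with priority < P_i
    and by a process with priority >= P_i, otherwise 0."""
    lower = any(priorities[t] < priorities[i] for t in users[k])
    geq = any(priorities[t] >= priorities[i] for t in users[k])
    return 1 if lower and geq else 0

def blocking_inheritance(i, priorities, users, cs_time):
    """B_i under priority inheritance: the sum of all blocking critical sections."""
    return sum(usage(k, i, priorities, users) * cs_time[k] for k in cs_time)

priorities = {"P1": 1, "P2": 2, "P3": 3}            # larger number = higher priority
users = {"Q": {"P1", "P3"}, "V": {"P2", "P3"}}      # which processes use each resource
cs_time = {"Q": 4, "V": 2}                          # worst-case critical-section lengths C(k)
print(blocking_inheritance("P3", priorities, users, cs_time))   # -> 6 (4 + 2)
```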


Priority Inheritance Example


Priority Ceiling Protocols

• Two forms

– Original ceiling priority protocol (OCPP)

– Immediate ceiling priority protocol (ICPP)

• Properties (on a single processor system):

– A high priority process can be blocked at most once during its execution by lower priority processes

• Only in the beginning in the case of ICPP

– Deadlocks are prevented

– Transitive blocking is prevented

• I.e. chain blocking (c is blocked by b, which is blocked by a)

– Mutually exclusive access to resources is ensured


OCPP

• Static default priority

– assigned to each process (by an FPS scheme)

• Static ceiling value

– For each resource

– Is the maximum static priority of the processes that use it

• Dynamic priority

– Assigned at run-time, “real” priority

– Maximum of the process's own static priority and any priority it inherits due to blocking higher priority processes

• Lock on a resource

– Can be gained by a process only if its dynamic priority is higher than the ceiling of any currently locked resource

• But excluding any resource that it has already locked itself
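A minimal sketch of this locking rule, assuming simple dictionaries for ceilings and current lock owners (all names are illustrative):

```python
# Sketch of the OCPP locking rule: a task may lock a resource only if its
# dynamic priority is strictly higher than the ceiling of every resource
# currently locked by other tasks. Data structures are illustrative only.

def ocpp_may_lock(task, dynamic_prio, ceilings, locked_by):
    """ceilings: resource -> static ceiling; locked_by: resource -> owning task or None."""
    other_ceilings = [ceilings[r] for r, owner in locked_by.items()
                      if owner is not None and owner != task]
    return all(dynamic_prio > c for c in other_ceilings)

# With Q (ceiling 3) held by P1, a task at priority 2 cannot lock V,
# but a task whose dynamic priority is 4 can.
ceilings = {"Q": 3, "V": 2}
locked_by = {"Q": "P1", "V": None}
print(ocpp_may_lock("P2", 2, ceilings, locked_by))   # -> False
print(ocpp_may_lock("P3", 4, ceilings, locked_by))   # -> True
```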


OCPP Example

• How does the priority of L1 change over time?

ICPP

• Static default priority

– assigned to each process (by an FPS scheme)

• Static ceiling value

– For each resource

– Is the maximum static priority of the processes that use it

• Dynamic priority

– Assigned at run-time, “real” priority

– Maximum of its own static priority and the ceiling values of any resources it has locked.

• Consequence

– A process can be blocked only at the beginning of its execution, which is equivalent to:

– All resources needed by a process must be free before it starts executing
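The ICPP priority rule itself is a one-line computation; a small sketch with hypothetical data:

```python
# Sketch of the ICPP priority rule: the dynamic priority is the maximum of the
# task's own static priority and the ceilings of the resources it holds.
# Names and numbers are hypothetical.

def icpp_dynamic_priority(static_prio, held_resources, ceilings):
    return max([static_prio] + [ceilings[r] for r in held_resources])

ceilings = {"Q": 3, "V": 2}
print(icpp_dynamic_priority(1, {"Q"}, ceilings))   # -> 3: the ceiling is taken at lock time
print(icpp_dynamic_priority(1, set(), ceilings))   # -> 1: back to the static priority
```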



ICPP example

Blocking time & O/ICPP

• Worst case blocking for both ceiling protocols

B_i = max_{k=1..K} usage(k, i) · C(k)

• Since a process can only be blocked once (per activation)

– But more processes will experience this block
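A small self-contained sketch of this bound, using the same usage(k, i) definition and made-up data as in the priority-inheritance sketch earlier; note the max instead of the sum:

```python
# Sketch of the O/ICPP blocking bound: B_i = max over k of usage(k, i) * C(k),
# i.e. at most one critical section. Task/resource data is made up.

def usage(k, i, priorities, users):
    lower = any(priorities[t] < priorities[i] for t in users[k])
    geq = any(priorities[t] >= priorities[i] for t in users[k])
    return 1 if lower and geq else 0

def blocking_ceiling(i, priorities, users, cs_time):
    return max(usage(k, i, priorities, users) * cs_time[k] for k in cs_time)

priorities = {"P1": 1, "P2": 2, "P3": 3}
users = {"Q": {"P1", "P3"}, "V": {"P2", "P3"}}
cs_time = {"Q": 4, "V": 2}
print(blocking_ceiling("P3", priorities, users, cs_time))   # -> 4, not 4 + 2
```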


OCPP vs. ICPP

• Although the worst-case behaviour of the two ceiling schemes is identical (from a scheduling viewpoint), there are some points of difference

• ICPP is easier to implement than the original (OCPP)

– No need to monitor blocking relationships in ICPP

• ICPP leads to fewer context switches

– Blocking occurs prior to first execution in ICPP

• ICPP requires more priority movements

– This happens with all resource usages in ICPP

– OCPP only changes priority if an actual block has occurred


Proving properties of O/ICPP

• Properties (on a single processor system):

– A high priority process can be blocked at most once during its execution by lower priority processes

• Prove …

– Transitive blocking is prevented

• Prove …

– Mutually exclusive access to resources is ensured by the protocol itself

• Prove …

– Deadlocks are prevented

• Let’s prove together


Deadlock conditions recap

• The 4 conditions necessary for a deadlock

– Mutual exclusion

• Only one process at a time can use a resource

– Hold and Wait

• A process may hold allocated resources while awaiting assignment of other resources.

– No forced release

• A resource can be released only voluntarily by the process holding it

– Circular wait

• A closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain

• Alternatively, each process waits for a message that has to be sent by the next process in the chain
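The circular-wait condition can be checked mechanically on a wait-for graph. A hedged Python sketch with made-up process names; a real detector would walk the kernel's allocation tables:

```python
# Sketch: checking the circular-wait condition on a wait-for graph, where an
# edge p -> q means "process p waits for a resource held by q".

def has_circular_wait(waits_for):
    """Depth-first search for a cycle in the wait-for graph."""
    visiting, done = set(), set()

    def visit(p):
        if p in done:
            return False
        if p in visiting:
            return True                 # back edge found: a cycle exists
        visiting.add(p)
        for q in waits_for.get(p, ()):
            if visit(q):
                return True
        visiting.discard(p)
        done.add(p)
        return False

    return any(visit(p) for p in waits_for)

print(has_circular_wait({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # -> True (deadlock)
print(has_circular_wait({"P1": ["P2"], "P2": ["P3"]}))                 # -> False
```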


Deadlock elimination

• Deadlock prevention

– At system design time; offline

• Deadlock avoidance

– Dynamically considers each request and decides whether granting it is safe; online

• Deadlock detection and treatment

• Analogy avoidance vs. prevention

– Traffic lights vs. a traffic officer


Resource allocation graph


Proving O/ICPP deadlock-free

• By showing that deadlock condition 4 (circular wait) never holds

– Assume a circular wait

– In this “circular wait” there must be two resources that have the highest ceiling

– Now, if one is locked by a process, then no other process can lock the other resource

• As the process that locked the first resource runs with the highest priority

– Immediately – ICPP

– When a lock on the second is attempted – OCPP

– Thus no circular wait exists


Response time & O/ICPP

• The response time analysis can include blocking times

R_i = C_i + B_i + I_i

R_i = C_i + max_{k=1..K} usage(k, i) · C(k) + Σ_{j ∈ hp(i)} ⌈R_i / T_j⌉ · C_j

• Remember

– Response time analysis only for FPS scheduling

– For EDF, the worst case response time of a task is not necessarily obtained when all tasks start at the same time
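The recurrence can be solved by fixed-point iteration, starting from R_i = C_i + B_i. A Python sketch with a hypothetical task set:

```python
# Sketch of the response-time recurrence with blocking, solved by fixed-point
# iteration: R_i = C_i + B_i + sum over j in hp(i) of ceil(R_i / T_j) * C_j.
# The task parameters below are made up.

from math import ceil

def response_time(C_i, B_i, higher_prio, deadline=None, max_iter=1000):
    """higher_prio: list of (C_j, T_j) pairs for tasks with priority above task i."""
    R = C_i + B_i
    for _ in range(max_iter):
        R_next = C_i + B_i + sum(ceil(R / T_j) * C_j for C_j, T_j in higher_prio)
        if R_next == R:
            return R                     # converged: worst-case response time
        if deadline is not None and R_next > deadline:
            return None                  # already misses its deadline
        R = R_next
    return None

# Task i: C = 5, B = 4; higher-priority tasks (C, T) = (2, 10) and (3, 25).
print(response_time(5, 4, [(2, 10), (3, 25)]))   # -> 16
```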


Sporadic and Aperiodic Tasks

• Period (T)

– Minimum inter-arrival time

• Important for sporadic tasks with hard RT guarantees

• Undefined for aperiodic tasks

– Average inter-arrival time

• Important for sporadic and aperiodic tasks with soft RT

• Deadline (D)

– Usually D < T

– As sporadic and aperiodic tasks usually respond to an unexpected event (exception handling, warning)


Hard vs. Soft RT Processes

• Rule #1

– All processes (both hard & soft) should be schedulable

• Using average execution times

• Using average arrival rates

• With Rule #1 alone, transient overloads are possible

• Rule #2

– All hard RT processes should be schedulable

• Using worst case execution times

• Using worst case arrival rates
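As a rough illustration only, the two rules can be phrased as utilization checks; a real analysis would use the response-time test above, and all the numbers here are made up:

```python
# Rough illustration of the two rules as utilization checks.

def utilization(tasks, c_key, t_key):
    return sum(t[c_key] / t[t_key] for t in tasks)

tasks = [
    {"hard": True,  "c_avg": 2, "t_avg": 12, "c_wc": 4, "t_min": 10},
    {"hard": True,  "c_avg": 3, "t_avg": 30, "c_wc": 5, "t_min": 25},
    {"hard": False, "c_avg": 4, "t_avg": 20, "c_wc": 9, "t_min": 15},
]

# Rule 1: every task, with average execution times and average inter-arrival times.
rule1 = utilization(tasks, "c_avg", "t_avg") <= 1.0
# Rule 2: hard tasks only, with worst-case execution times and minimum inter-arrival times.
rule2 = utilization([t for t in tasks if t["hard"]], "c_wc", "t_min") <= 1.0
print(rule1, rule2)   # -> True True
```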


Aperiodic + Sporadic + Hard RT

• Sporadic tasks can be treated as periodic, with the minimum (worst case) inter-arrival time as the period

• A “server” runs soft RT tasks so that they do not disturb the hard RT ones

– A server usually has a period T s and a capacity C s

– Whenever an aperiodic task arrives and capacity is available, the task is run until the server capacity is depleted

– The capacity is replenished over time, according to the server's replenishment rule

– A server has to be used for aperiodic tasks
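A simplified sketch of the capacity bookkeeping behind such a server; this shows the general idea only, not any specific server algorithm (polling, deferrable, and sporadic servers differ in their exact replenishment rules):

```python
# Simplified sketch of a server with period T_s and capacity C_s: aperiodic
# work runs only while capacity remains, and the capacity is replenished
# every period.

class Server:
    def __init__(self, period, capacity):
        self.period = period
        self.capacity = capacity
        self.remaining = capacity

    def tick(self, now, aperiodic_pending):
        """Called once per time unit; returns True if one unit of aperiodic work ran."""
        if now % self.period == 0:
            self.remaining = self.capacity   # periodic replenishment
        if aperiodic_pending and self.remaining > 0:
            self.remaining -= 1
            return True                      # serve one tick of aperiodic work
        return False                         # hard/periodic tasks (or idle) run instead

# Server with T_s = 5 and C_s = 2, serving an aperiodic burst of 6 time units.
server, work_left, served_at = Server(5, 2), 6, []
for t in range(15):
    if server.tick(t, work_left > 0):
        work_left -= 1
        served_at.append(t)
print(served_at)   # -> [0, 1, 5, 6, 10, 11]
```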


When D < T

• Deadline Monotonic

– Fixed Priority Scheduling (FPS)

– Response time analysis supports it

• EDF

– Utilisation analysis is not sufficient anymore

– But still an optimal scheduling policy
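Deadline-monotonic priority assignment itself is just a sort on D. A small sketch with a hypothetical task set, where a larger number means a higher priority:

```python
# Sketch of deadline-monotonic priority assignment: the shorter the relative
# deadline D, the higher the fixed priority. Task data is made up.

tasks = [
    {"name": "A", "T": 20, "D": 5,  "C": 2},
    {"name": "B", "T": 15, "D": 15, "C": 4},
    {"name": "C", "T": 10, "D": 8,  "C": 3},
]

# Sort by deadline (longest first) and hand out priorities 1, 2, 3, ...
for prio, task in enumerate(sorted(tasks, key=lambda t: t["D"], reverse=True), start=1):
    task["priority"] = prio

print([(t["name"], t["priority"]) for t in tasks])   # -> [('A', 3), ('B', 1), ('C', 2)]
```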


Dynamic vs. Static Scheduling

Dynamic Scheduling

• on-line

• no a priori knowledge

+ flexibility

+ adaptability

+ few assumptions about workload

- prone to overloads

- decision consumes processor time

- only statistical guarantees

Static Scheduling

• off-line

• assumes clairvoyance

• schedules planned for peak load

+ task workloads are periodic => no overloads

+ analyzable

+ cheap

- low utilization


Static vs. Dynamic: Predictability

• Static Scheduling

– determine maximum execution time

– allocate task to node

– allocate communication slots to LANs

– construct schedules off-line

• Dynamic Scheduling

– “...not necessary to develop a set of detailed plans during the design phase, since the execution schedules are created dynamically as a consequence of the actual demand”

• H. Kopetz

– Things that can be done

• admission control

• overload resolution


Static vs. Dynamic: Resource Utilization

• Static Scheduling

– Schedules are planned for peak load

– If worst-case exec time >> avg. case exec time => low utilization (usually)

• Dynamic Scheduling

– Better utilization at low or average loads

– Increased online processing time

• Dilemma: to schedule or to execute?

• Higher overhead


Static vs. Dynamic: Extensibility

• Static Scheduling

– Task schedules must be recalculated when

• Maximum execution time changes

• New task is added

• Communication schedules – recalculated for active nodes

• Dynamic Scheduling

– Temporal properties may/should be retested

– A small change in task characteristics brings a small change in performance


Overload Management

• Reject tasks (see the admission-control sketch after this list)

• Abort tasks (i.e., drop)

• Replace tasks

– alternative/contingency actions

– primary/backup

• Partial execution of tasks

– imprecise computation

• Defer execution of tasks

• Migrate tasks

– only in distributed systems
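The first of these strategies, rejecting tasks, is often realized as admission control. A hedged sketch using a simple utilization-based test; the bound of 1.0 assumes an EDF-style schedulability condition, and all (C, T) numbers are made up:

```python
# Hedged sketch of "reject tasks" as admission control: a new task is admitted
# only if the total utilization stays within a bound.

def admit(admitted, new_task, bound=1.0):
    c_new, t_new = new_task
    total = sum(c / t for c, t in admitted) + c_new / t_new
    return total <= bound

admitted = [(2, 10), (3, 15)]       # (C, T) pairs already in the system, U = 0.4
print(admit(admitted, (5, 20)))     # U = 0.65 -> True, admitted
print(admit(admitted, (13, 20)))    # U = 1.05 -> False, rejected under overload
```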


Scheduling in Distributed Systems

• Task migration may increase flexibility

• Communication delays must be bounded

• Remote procedure calls introduce remote blocking

• More about distributed systems in a future lecture


Reading material

• Chapter 13 in Burns & Wellings

• Chapter 4 in Buttazzo

– for the proofs of the sufficient conditions and optimality
