TDDB47 Real-time Systems
Lecture 3: Scheduling III

Mikael Asplund
Real-Time Systems Laboratory
Department of Computer and Information Science
Linköpings universitet
Sweden

The lecture notes are partly based on lecture notes by Calin Curescu, Simin Nadjm-Tehrani, Jörgen Hansson, and Anders Törne. They also loosely follow Burns’ and Wellings’ book “Real-Time Systems and Programming Languages”. These lecture notes should only be used for internal teaching purposes at Linköping University.

Recap: Scheduling
• Uniprocessor / multiprocessor / distributed system
• Periodic / sporadic / aperiodic
• Independent / interdependent
• Preemptive / non-preemptive
• Handle transient overloads
• Support fault tolerance
Content
• Interdependencies
– Resource access control
• Distributed scheduling
• Sporadic and aperiodic processes
• Overload management
• WCET

Resource access control
Shared resources
• Shared critical resource
– a block of code that is shared by several processes but requires mutual exclusion at any point in time
• Blocking
– a higher priority process waiting for a lower priority one
– Obs: preemption – when a lower priority process waits for a higher priority one – is considered normal
• The blocking time must be
– bounded
– measurable
– small

Priority Inversion example
• Q (or V) – execution tick with access to the Q (or V) critical section, which is protected by mutual exclusion
• E – execution tick not accessing critical resources
Example (cont’d)

Priority Inheritance
• Dynamic priorities
• Obtain the priority of the process that is blocked by you
• Transitive
• Guarantees an upper bound on blocking
• Deadlocks are still possible

Priority Inheritance Example

Priority inheritance & blocking
• Worst case blocking time calculation

    B_i = Σ_{k=1}^{K} usage(k,i) · C(k)

– B_i – the maximum blocking time of process i
– usage(k,i) = 1 if resource k is used by at least one process with a priority less than P_i and by at least one process with priority greater than or equal to P_i, otherwise 0
– C(k) – the worst case execution time of the k-th critical section
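The blocking bound above can be sketched in code. A minimal, illustrative Python version (the resource names, priorities, and critical-section lengths are made up, not from the lecture):

```python
def blocking_time_pi(i_prio, resource_users, cs_wcet):
    """Priority-inheritance bound: B_i = sum over k of usage(k, i) * C(k)."""
    b = 0
    for k, users in resource_users.items():
        used_by_lower = any(p < i_prio for p in users)
        used_by_higher_eq = any(p >= i_prio for p in users)
        if used_by_lower and used_by_higher_eq:   # usage(k, i) = 1
            b += cs_wcet[k]
    return b

# Hypothetical system: resource -> static priorities of its users (higher = more urgent)
resource_users = {"Q": [1, 3], "V": [2, 3]}
cs_wcet = {"Q": 4, "V": 2}          # worst case critical-section lengths

print(blocking_time_pi(3, resource_users, cs_wcet))  # Q and V both count: 4 + 2 = 6
```

Note that under plain priority inheritance the blocking terms add up, since the same activation can be blocked once per critical section.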
Priority Ceiling Protocols
• Two forms
– Original ceiling priority protocol (OCPP)
– Immediate ceiling priority protocol (ICPP)
• Properties (on a single processor system):
– A high priority process can be blocked at most once during its execution by lower priority processes
– Deadlocks are prevented
– Transitive blocking is prevented
– Mutually exclusive access to resources is ensured

OCPP
• Static default priority
– assigned to each process (by a FPS scheme)
• Static ceiling value for each resource
– the maximum static priority of the processes that use it
• Dynamic priority
– assigned at run-time, the “real” priority
– the maximum of the process’s own static priority and any priority it inherits due to it blocking processes with higher priority
• Lock on a resource
– can be gained by a process only if its dynamic priority is higher than the ceiling of any currently locked resource
– excluding any resource that it has already locked
OCPP Example
• How is the priority of L1 changing over time?

ICPP
• Static default priority
– assigned to each process (by a FPS scheme)
• Static ceiling value for each resource
– the maximum static priority of the processes that use it
• Dynamic priority
– assigned at run-time, the “real” priority
– the maximum of its own static priority and the ceiling values of any resources it has locked
• Consequence
– a process can be blocked only at the beginning of its execution, which is equivalent to:
– all resources needed by a process must be free before it starts executing

ICPP example
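A minimal sketch of the ICPP priority rule in Python (the resource ceilings and priority values are illustrative, not from the lecture):

```python
# Ceiling of each resource = max static priority of the processes that use it
CEILING = {"Q": 3, "V": 2}

def icpp_dynamic_priority(static_prio, held_resources):
    """ICPP: run at the max of your static priority and the ceilings you hold."""
    return max([static_prio] + [CEILING[r] for r in held_resources])

# A low-priority process (static priority 1) is raised to Q's ceiling
# as soon as it locks Q, so higher-priority users of Q cannot preempt it:
print(icpp_dynamic_priority(1, ["Q"]))  # 3
print(icpp_dynamic_priority(1, []))     # 1
```

The priority boost happens immediately on locking, which is exactly why under ICPP a process can only be blocked before it starts running.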
Blocking time & O/ICPP
• Worst case blocking for both ceiling protocols

    B_i = max_{k=1..K} usage(k,i) · C(k)

• Since a process can only be blocked once (per activation)
– but more processes will experience this block
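The ceiling-protocol bound can be sketched as a max over the same blocking candidates (task set and numbers are illustrative):

```python
def blocking_time_ceiling(i_prio, resource_users, cs_wcet):
    """OCPP/ICPP bound: B_i = max over k of usage(k, i) * C(k)."""
    candidates = [
        cs_wcet[k]
        for k, users in resource_users.items()
        if any(p < i_prio for p in users) and any(p >= i_prio for p in users)
    ]
    return max(candidates, default=0)   # blocked at most once per activation

resource_users = {"Q": [1, 3], "V": [2, 3]}
cs_wcet = {"Q": 4, "V": 2}
print(blocking_time_ceiling(3, resource_users, cs_wcet))  # max(4, 2) = 4
```

Compare with the sum used for plain priority inheritance: the ceiling protocols tighten the bound because each activation is blocked at most once.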
Proving properties of O/ICPP
• Properties (on a single processor system):
– A high priority process can be blocked at most once during its execution by lower priority processes
– Mutually exclusive access to resources is ensured by the protocol itself
– Transitive blocking is prevented
– Deadlocks are prevented
• Let’s prove!

OCPP vs. ICPP
• Worst case the same
• ICPP is easier to implement than the original (OCPP)
• ICPP leads to fewer context switches
• ICPP requires more priority movements
• Why?
Deadlock conditions recap
• The 4 conditions necessary for a deadlock
– Mutual exclusion
• Only one process at a time can use a resource
– Hold and wait
• A process may hold allocated resources while awaiting assignment of other resources
– No forced release
• A resource can be released only voluntarily by the process holding it
– Circular wait
• A closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain
• Alternatively, each process waits for a message that has to be sent by the next process in the chain

Proving O/ICPP deadlock-free
• By showing that deadlock condition 4 never holds
– Assume a circular wait
– In this “circular wait” there must be two resources that have the highest ceiling
– Now, if one is locked by a process, then no other process can lock the other resource
• as the process that locked the first resource runs with the highest priority
– immediately – ICPP
– when a lock on the second is attempted – OCPP
– Thus no circular wait exists
Response time & O/ICPP
• The response time analysis can include blocking times

    R_i = C_i + B_i + I_i

    R_i = C_i + max_{k=1..K} usage(k,i) · C(k) + Σ_{j ∈ hp(i)} ⌈R_i / T_j⌉ · C_j

• Remember
– Response time analysis only for FPS scheduling
– For EDF the worst case response time for a task is not when all tasks start at the same time
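The recurrence above can be solved by fixed-point iteration. A small sketch (the task set below is made up for illustration):

```python
import math

def response_time(C_i, B_i, hp, D_i):
    """Iterate R = C_i + B_i + sum_j ceil(R/T_j)*C_j until it converges
    or exceeds the deadline. hp: (C_j, T_j) of higher-priority tasks."""
    R = C_i + B_i
    while R <= D_i:
        R_next = C_i + B_i + sum(math.ceil(R / T) * C for (C, T) in hp)
        if R_next == R:
            return R        # converged: worst-case response time
        R = R_next
    return None             # unschedulable: response time exceeds deadline

# Lowest-priority task: C=5, B=2, deadline 20; two higher-priority tasks
print(response_time(5, 2, [(2, 10), (3, 15)], 20))  # 14
```

The iteration starts from C_i + B_i and is monotonically non-decreasing, so stopping when R passes the deadline is a safe schedulability test.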
Distributed scheduling
Variants
• Multi-processor
• Multi-core
• Distributed system

Distributed Scheduling
• Characteristics of a synchronous distributed system
– Upper bound on communication delays
– Local clocks available and drift is bounded
– Each node makes progress at a minimum rate
• Dynamic processor allocation
– Anomalies: response time might increase if
• WCET is decreased;
• priority is increased; or
• the number of nodes is increased.
Partitioned scheduling
(figure: tasks P1–P6 statically assigned to processors)

Allocation problem
• P1, P2: WCET=25, Period=50; P3: WCET=80, Period=100
• P1 -> CPU1; P2 -> CPU2; P3 -> CPU1 or CPU2
– not feasible
• P1 & P2 -> CPU1; P3 -> CPU2
– feasible schedule
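The infeasibility of the first allocation can be checked with per-processor utilisation (a necessary condition; here the periods are harmonic, so U ≤ 1 also suffices):

```python
def utilisation(tasks):                 # tasks: list of (WCET, period)
    return sum(c / t for c, t in tasks)

P1 = P2 = (25, 50)
P3 = (80, 100)

# P1 -> CPU1, P2 -> CPU2: adding P3 pushes either processor past U = 1
print(utilisation([P1, P3]))            # 1.3 -> not feasible
# P1 & P2 -> CPU1, P3 -> CPU2: both processors are schedulable
print(utilisation([P1, P2]), utilisation([P3]))   # 1.0 and 0.8
```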
Allocation
• Static allocation of processes is more reliable than dynamic reallocation of processes
– low utilisation
– schedulability analysis for each processor
• Remote blocking
– a difficult problem
– replicate data to other nodes in order to ensure local access
• Practical approach:
– static allocation for safety-critical (periodic and sporadic) processes; let aperiodic processes migrate

Rate Monotonic First Fit (RMFF)
• Processors 1..n
• Assign tasks in order of increasing periods
– For each task i, choose the lowest previously used processor such that the task set together with task i is schedulable on that processor
• Schedulability guaranteed if U ≤ 0.41
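A sketch of RMFF under an assumed per-processor admission test, the Liu & Layland bound U ≤ n(2^(1/n) − 1); the task set is illustrative:

```python
def rm_bound(n):
    """Liu & Layland utilisation bound for n tasks under RM."""
    return n * (2 ** (1 / n) - 1)

def rmff(tasks, num_cpus):              # tasks: list of (C, T)
    cpus = [[] for _ in range(num_cpus)]
    for task in sorted(tasks, key=lambda ct: ct[1]):    # increasing periods
        for cpu in cpus:                # first fit: lowest-indexed processor
            u = sum(c / t for c, t in cpu + [task])
            if u <= rm_bound(len(cpu) + 1):
                cpu.append(task)
                break
        else:
            return None                 # task fits on no processor
    return cpus

print(rmff([(10, 50), (20, 100), (40, 100)], 2))
# [[(10, 50), (20, 100)], [(40, 100)]]
```

The utilisation bound is sufficient but not necessary, so RMFF may reject task sets that an exact response-time test would admit.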
Global scheduling
• Any task can run on any processor
• When a task is ready to run, use the first available processor
• RM and EDF cannot guarantee >0% utilisation for global scheduling

Distributed systems
• Algorithms
– Buddy
– Focused addressing and bidding (FAB)

Hybrid Systems

Sporadic and aperiodic Tasks
Sporadic and aperiodic Tasks
• Deadline (D)
– Usually D < T
• Period (T)
– Minimum inter-arrival time
• as sporadic and aperiodic tasks usually respond to an unexpected event (exception handling, warning)
• important for sporadic tasks with hard RT guarantees
• undefined for aperiodic tasks
– Average inter-arrival time
• important for sporadic and aperiodic tasks with soft RT
Hard vs. Soft RT Processes
• Rule #1
– All processes (both hard & soft) should be schedulable
• using average execution times
• using average arrival rates
• Rule #2
– All hard RT processes should be schedulable
• using worst case execution times
• using worst case arrival rates
• Rule #2 guarantees that all hard real-time processes meet their deadlines.
• With Rule #1, transient overloads are possible.
Static priority servers
(figure: server execution over time)

Dynamic vs. static systems
• Dynamic scheduling/analysis
– on-line
– no a priori knowledge
– + flexibility
– + adaptability
– + few assumptions about workload
– − prone to overloads
– − decision making consumes CPU time
– − only statistical guarantees
• Static scheduling/analysis
– off-line
– assumes clairvoyance
– schedules planned for peak load
– + task workloads are periodic => no overloads
– + analyzable
– + cheap
– − low utilization
Static vs. dynamic: predictability
• Static scheduling
– determine maximum execution times
– allocate tasks to nodes
– allocate communication slots to LANs
– construct schedules off-line
• Dynamic scheduling
– “...not necessary to develop a set of detailed plans during the design phase, since the execution schedules are created dynamically as a consequence of the actual demand” (H. Kopetz)
– Things that can be done
• admission control
• overload resolution

Overload Management
Overload Management
• Reject tasks
• Abort tasks (i.e., drop them)
• Replace tasks
– alternative/contingency actions
– primary/backup
• Partial execution of tasks
– imprecise computation
• Defer execution of tasks
• Migrate tasks
– only in distributed systems

Feedback control
(figure: closed control loop – a reference miss ratio is compared with the measured miss ratio of the RT system, and the controller adjusts the admission of incoming tasks)
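A toy sketch of such a loop, with a proportional controller adjusting an admission fraction (the gain, setpoint, and measurements are all invented for illustration):

```python
def admission_step(admit_fraction, miss_ratio, miss_ref=0.05, gain=0.5):
    """One controller step: compare the measured miss ratio with the
    reference and move the admitted fraction of incoming tasks accordingly."""
    error = miss_ref - miss_ratio           # negative when missing too much
    return min(1.0, max(0.0, admit_fraction + gain * error))

frac = 1.0
for measured in [0.20, 0.10, 0.05]:         # overload, then recovery
    frac = admission_step(frac, measured)
    print(round(frac, 3))                   # 0.925, then 0.9, then 0.9
```

Admitting fewer tasks during overload lets the scheduled subset meet its deadlines, which is the point of admission control above.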
WCET

What is it?
(figure: distribution of execution times over the number of task instances, ranging from BCET to WCET)

Assumptions
• One task runs in isolation
• No interference
• No task switches or interrupts
• One particular hardware platform

Ways to obtain WCET
• Measurement
• Static analysis
Reading
• Chapter 14 of Burns & Wellings
• Article by Ramamritham, Stankovic, and Zhao, IEEE Transactions on Computers, Volume 38(8), August 1989