ECE 751 Embedded Computing Systems

Lecture 14: Real Time Concepts
Embedded Computing Systems
Mikko Lipasti
Based on slides and textbook from Wayne Wolf
High Performance Embedded Computing
© 2007 Elsevier
Topics

Ch. 4 in textbook.
Real-time scheduling.
Scheduling for power/energy.
Operating system mechanisms and overhead.

Real-time scheduling terminology

Process: unique execution of a program.
Context switch: operating system switch from one process to another.
Time quantum: time between OS interrupts.
Schedule: sequence of process executions or context switches.
Thread: process that shares address space with other threads.
Task: a collection of processes.
Subtask: one process in a task.

Real-time scheduling algorithms

Static scheduling algorithms determine the schedule off-line.
  Constructive algorithms don't have a complete schedule until the end of the algorithm.
  Iterative improvement algorithms build a schedule, then modify it.
Dynamic scheduling algorithms build the schedule during system operation.
  Priority schedulers assign priorities to processes.
  Priorities may be static or dynamic.

Timing requirements

Real-time systems have timing requirements.
  Hard: missing a deadline causes system failure.
  Soft: missing a deadline does not cause failure.
Deadline: time at which the computation must finish.
Release time: first time at which the computation may start.
Period (T): interval between deadlines.
Relative deadline: the time from release time to deadline.

Timing behavior

Initiation time: time when the process actually starts executing.
Completion time: time when the process finishes.
Response time = completion time – release time.
Execution time (C): amount of time required to run the process on the CPU.

Utilization

Total execution time C required to execute processes 1..n is the sum of the Ci values for the processes.
Given available time t, utilization U = C/t (see the C sketch below).
  Generally expressed as a percentage.
  CPU can't deliver more than 100% utilization.

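A minimal C sketch of the utilization computation above; the execution times and the available time are invented for illustration, not values from the lecture.

/* Compute utilization U = C/t for a hypothetical set of processes. */
#include <stdio.h>

int main(void) {
    double C[] = {1.0, 2.0, 3.0};   /* execution times of processes 1..n */
    double t = 10.0;                /* available time */
    double total = 0.0;
    for (int i = 0; i < 3; i++)
        total += C[i];              /* total execution time C */
    double U = total / t;           /* utilization */
    printf("U = %.0f%%\n", U * 100.0);   /* prints "U = 60%" */
    return 0;
}
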
Static scheduling algorithms

Often take advantage of data dependencies.
  Resource dependencies come from the implementation.
As-soon-as-possible (ASAP): schedule each process as soon as data dependencies allow.
As-late-as-possible (ALAP): schedule each process as late as data dependencies and deadlines allow (see the sketch below).

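A minimal C sketch of ASAP and ALAP start-time computation over a small dependency graph; the DAG, execution times, and deadline below are hypothetical, chosen only to show the recurrences.

/* ASAP/ALAP start times for a 4-process DAG with a common deadline. */
#include <stdio.h>

#define N 4
/* dep[i][j] = 1 if process j depends on process i */
static const int dep[N][N] = {
    {0,1,1,0},   /* P0 -> P1, P0 -> P2 */
    {0,0,0,1},   /* P1 -> P3 */
    {0,0,0,1},   /* P2 -> P3 */
    {0,0,0,0}
};
static const double C[N] = {2, 3, 1, 2};   /* execution times */

int main(void) {
    double asap[N] = {0}, alap[N];
    double deadline = 10.0;
    /* ASAP: indices are already in topological order here */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            if (dep[i][j] && asap[i] + C[i] > asap[j])
                asap[j] = asap[i] + C[i];
    /* ALAP: walk the DAG backwards from the deadline */
    for (int i = N - 1; i >= 0; i--) {
        alap[i] = deadline - C[i];
        for (int j = 0; j < N; j++)
            if (dep[i][j] && alap[j] - C[i] < alap[i])
                alap[i] = alap[j] - C[i];
    }
    for (int i = 0; i < N; i++)
        printf("P%d: ASAP start %.0f, ALAP start %.0f\n", i, asap[i], alap[i]);
    return 0;
}
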
List scheduling

A common form of constructive scheduler.

Priority-driven scheduling

Each process has a priority.
Processes may be ready or waiting.
Highest-priority ready process runs in the current quantum (see the sketch below).
Priorities may be static or dynamic.

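A minimal C sketch of the selection rule only: each quantum, pick the highest-priority ready process. The process table and the pick() helper are hypothetical, not an RTOS API.

/* Pick the highest-priority ready process for the current quantum. */
#include <stdio.h>

#define N 3
struct proc { int priority; int ready; const char *name; };

/* returns index of highest-priority ready process, or -1 if none */
int pick(struct proc p[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (p[i].ready && (best < 0 || p[i].priority > p[best].priority))
            best = i;
    return best;
}

int main(void) {
    struct proc p[N] = { {1, 1, "P1"}, {3, 0, "P2"}, {2, 1, "P3"} };
    int who = pick(p, N);
    if (who >= 0)
        printf("run %s this quantum\n", p[who].name);   /* P3: highest ready */
    return 0;
}
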
Rate-monotonic scheduling

Liu and Layland: proved properties of static priority scheduling.
Model assumptions:
  No data dependencies between processes.
  Process periods may have arbitrary relationships.
  Ideal (zero) context switching time.
  Release time of a process is the start of its period.
  Process execution time is fixed.

Critical instant
[Figure: critical-instant timing example omitted.]

Critical instant analysis

Process 1 has the shorter period T1; process 2 has the longer period T2.
If process 2 has the higher priority, then meeting both deadlines requires C1 + C2 <= T1.
Schedulability condition (rate-monotonic priorities): U <= n(2^(1/n) - 1) for n processes.
Utilization is: U = sum over i of Ci/Ti.
Utilization approaches: ln 2, about 69%, as the number of processes grows (see the C sketch below).

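A minimal C sketch of the Liu-Layland utilization test, run on a hypothetical task set; the bound n(2^(1/n) - 1) tends toward ln 2 as n grows.

/* Rate-monotonic schedulability test: U = sum(Ci/Ti) <= n(2^(1/n) - 1). */
#include <stdio.h>
#include <math.h>

int main(void) {
    double C[] = {1.0, 1.0, 2.0};    /* execution times */
    double T[] = {4.0, 5.0, 10.0};   /* periods (= relative deadlines) */
    int n = 3;
    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* -> ln 2 as n grows */
    printf("U = %.3f, bound = %.3f: %s\n", U, bound,
           U <= bound ? "schedulable under RMS" : "bound inconclusive");
    return 0;
}
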
Earliest-deadline-first (EDF) scheduling

Liu and Layland: dynamic priority algorithm.
  Process closest to its deadline has the highest priority.
  Relative deadline D.
Process set must satisfy: sum of Ci/Ti <= 1 (see the sketch below).

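A minimal C sketch combining the EDF feasibility test (sum of Ci/Ti <= 1) with EDF's selection rule (run the ready job closest to its deadline); the task set and the pick_edf() helper are hypothetical.

/* EDF feasibility check plus earliest-deadline selection. */
#include <stdio.h>

#define N 3
struct job { double abs_deadline; int ready; const char *name; };

int pick_edf(struct job j[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (j[i].ready &&
            (best < 0 || j[i].abs_deadline < j[best].abs_deadline))
            best = i;
    return best;
}

int main(void) {
    double C[N] = {1, 2, 3}, T[N] = {4, 8, 12};
    double U = 0.0;
    for (int i = 0; i < N; i++) U += C[i] / T[i];
    printf("U = %.2f -> %s\n", U, U <= 1.0 ? "feasible under EDF" : "overloaded");

    struct job j[N] = { {8.0, 1, "P1"}, {5.0, 1, "P2"}, {6.0, 0, "P3"} };
    int who = pick_edf(j, N);
    if (who >= 0)
        printf("run %s (closest ready deadline)\n", j[who].name);   /* P2 */
    return 0;
}
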
Least-laxity-first (LLF) scheduling

Laxity or slack: difference between the time until the deadline and the remaining computation time.
  Process with the smallest laxity has the highest priority (see the sketch below).
Unlike EDF, takes into account computation time in addition to deadline.

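A minimal C sketch of least-laxity-first selection, computing laxity = (deadline - now) - remaining execution time for a hypothetical job set.

/* Pick the ready job with the smallest laxity. */
#include <stdio.h>

#define N 3
struct job { double deadline, remaining; const char *name; };

int main(void) {
    double now = 2.0;
    struct job j[N] = { {10.0, 3.0, "P1"}, {8.0, 5.5, "P2"}, {6.0, 1.0, "P3"} };
    int best = 0;
    double best_laxity = (j[0].deadline - now) - j[0].remaining;
    for (int i = 1; i < N; i++) {
        double laxity = (j[i].deadline - now) - j[i].remaining;
        if (laxity < best_laxity) { best_laxity = laxity; best = i; }
    }
    /* P2: laxity 0.5, tighter than P1 (5.0) and P3 (3.0) */
    printf("run %s (laxity %.1f)\n", j[best].name, best_laxity);
    return 0;
}
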
Priority inversion

RMS and EDF assume no dependencies or outside resources.
When processes use external resources, scheduling must take those into account.
Priority inversion: external resources can make a low-priority process continue to execute as if it had higher priority.

Priority inversion example
[Figure: priority inversion timing example omitted.]

Priority inheritance protocols

Sha et al.: basic priority inheritance protocol, priority ceiling protocol.
Process in a critical section executes at the highest priority of any process that shares that critical section (see the sketch below).
  Can deadlock.
Priority ceiling protocol: each semaphore has its own priority ceiling.
  Required priority to obtain a semaphore depends on the priorities of other locked semaphores.
Schedulability: C1/T1 + ... + Ci/Ti + Bi/Ti <= i(2^(1/i) - 1) for each process i, where Bi is the worst-case blocking time.

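A minimal C sketch of the priority-inheritance bookkeeping only (no real blocking or scheduling is modeled): when a higher-priority process contends for a held lock, the holder temporarily takes on the contender's priority. The lock_acquire/lock_release helpers are hypothetical, not an RTOS API.

/* Priority inheritance bookkeeping for a single lock. */
#include <stdio.h>
#include <stddef.h>

struct proc { int base_prio, active_prio; const char *name; };
struct lock { struct proc *holder; };

void lock_acquire(struct lock *l, struct proc *p) {
    if (l->holder == NULL) {
        l->holder = p;                                /* uncontended: take it */
    } else if (p->active_prio > l->holder->active_prio) {
        l->holder->active_prio = p->active_prio;      /* inherit priority */
        printf("%s inherits priority %d from %s\n",
               l->holder->name, p->active_prio, p->name);
    }
}

void lock_release(struct lock *l) {
    l->holder->active_prio = l->holder->base_prio;    /* drop back to base */
    l->holder = NULL;
}

int main(void) {
    struct proc low  = {1, 1, "low"};
    struct proc high = {3, 3, "high"};
    struct lock l = {0};
    lock_acquire(&l, &low);    /* low takes the lock */
    lock_acquire(&l, &high);   /* high contends; low inherits priority 3 */
    lock_release(&l);          /* low returns to priority 1 */
    return 0;
}
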
Scheduling for dynamic voltage scaling

Dynamic voltage scaling (DVS): change processor voltage to save power.
  Power consumption goes down as V^2, performance goes down as V.
Must make sure that each process still meets its deadline (see the sketch below).

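A minimal C sketch of the deadline constraint under the slide's simple model (speed proportional to V, power proportional to V^2): choose the smallest speed fraction that still meets the deadline. The numbers are hypothetical.

/* Slow the processor as much as the deadline allows. */
#include <stdio.h>

int main(void) {
    double C = 4.0;      /* execution time at full speed */
    double D = 10.0;     /* relative deadline */
    double s = C / D;    /* minimum speed fraction that still meets the deadline */
    if (s > 1.0) s = 1.0;
    printf("run at %.0f%% speed/voltage\n", s * 100.0);
    printf("power drops to ~%.0f%% of full-voltage power\n", s * s * 100.0);
    printf("finish time = %.1f (deadline %.1f)\n", C / s, D);
    return 0;
}
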
Yao et al. DVS for real-time

Intensity of an interval defines a lower bound on the average speed required to create a feasible schedule (see the sketch below).
The interval that maximizes the intensity is the critical interval.
The optimal schedule runs at a speed equal to the intensity of the critical interval during that interval.
Average rate heuristic: at any time t, run at the sum of the densities Cj/(dj - aj) of all jobs j whose release-to-deadline windows contain t.

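A minimal C sketch of the interval-intensity idea, assuming intensity([z,z']) = (total work of jobs whose release-deadline windows lie inside [z,z']) / (z' - z); it searches a small hypothetical job set for the critical interval.

/* Find the critical interval as the interval of maximum intensity. */
#include <stdio.h>

#define N 3
struct job { double a, d, C; };   /* release, deadline, work */

double intensity(const struct job j[], int n, double z, double zp) {
    double work = 0.0;
    for (int i = 0; i < n; i++)
        if (j[i].a >= z && j[i].d <= zp)
            work += j[i].C;
    return work / (zp - z);
}

int main(void) {
    struct job j[N] = { {0, 4, 2}, {1, 4, 1.5}, {2, 10, 2} };
    /* candidate interval endpoints: release times and deadlines */
    double pts[] = {0, 1, 2, 4, 10};
    double best = 0, bz = 0, bzp = 0;
    for (int x = 0; x < 5; x++)
        for (int y = x + 1; y < 5; y++) {
            double g = intensity(j, N, pts[x], pts[y]);
            if (g > best) { best = g; bz = pts[x]; bzp = pts[y]; }
        }
    /* the critical interval bounds the average speed any feasible schedule needs */
    printf("critical interval [%g,%g], intensity %.2f\n", bz, bzp, best);
    return 0;
}
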
DVS with discrete voltages

Ishihara and Yasuura: when only a finite set of discrete voltage levels is available, two voltage levels are sufficient to minimize energy (see the sketch below).

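A minimal C sketch of the two-level idea: split a job between the two discrete speed levels that bracket the ideal continuous speed so the deadline is met exactly. The speed levels and workload are hypothetical.

/* Split execution time between two discrete speed levels. */
#include <stdio.h>

int main(void) {
    double C = 6.0, D = 10.0;          /* work at full speed, deadline */
    double ideal = C / D;              /* ideal continuous speed = 0.6 */
    double s_lo = 0.5, s_hi = 0.75;    /* neighboring discrete speed levels */
    /* solve t_hi + t_lo = D and s_hi*t_hi + s_lo*t_lo = C */
    double t_hi = (C - s_lo * D) / (s_hi - s_lo);
    double t_lo = D - t_hi;
    printf("ideal speed %.2f: run %.1f at speed %.2f and %.1f at speed %.2f\n",
           ideal, t_hi, s_hi, t_lo, s_lo);
    return 0;
}
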
Procrastination scheduling

Family of algorithms that maximizes the lengths of idle periods.
  CPU can be turned off during idle periods, further reducing energy consumption (see the sketch below).
Jejurikar et al.: power consumption P = PAC + PDC + Pon.
Minimum breakeven time tth = Esd/Pidle.
Guarantees deadlines if the added procrastination (idle-extension) intervals stay within a schedulability bound.

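A minimal C sketch of the shutdown decision that procrastination scheduling tries to enable: sleep only when the predicted idle period is at least the breakeven time tth = Esd/Pidle. All numbers are hypothetical, in arbitrary consistent units.

/* Shut down only when the idle period exceeds the breakeven time. */
#include <stdio.h>

int main(void) {
    double Esd = 3.0;            /* energy cost of a shutdown/wakeup cycle */
    double Pidle = 0.5;          /* power if we stay idle but on */
    double tth = Esd / Pidle;    /* breakeven time = 6 time units */
    double idle[] = {2.0, 10.0}; /* predicted idle period lengths */
    for (int i = 0; i < 2; i++)
        printf("idle %.0f: %s\n", idle[i],
               idle[i] >= tth ? "shut down" : "stay idle");
    return 0;
}
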
Performance estimation

Multiple processes interfere in the cache.
  Single-process performance evaluation cannot take into account the effects of a dynamic schedule.
Kirk and Strosnider: segment the cache, allow processes to lock themselves into a segment.
Mueller: use software methods to partition the cache.

Cache modeling and scheduling

Li and Wolf: each process has a stable footprint in the cache.
Two-state model:
  Process is in the cache.
  Process is not in the cache.
Characterize execution time in each state off-line.
Use CPU time measurements along with cache state to estimate process performance at each quantum (see the sketch below).
Kastner and Thiesing: scheduling algorithm takes cache state into account.

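A minimal C sketch of the two-state idea: charge a process its "in cache" time if its footprint is still resident from the previous quantum, otherwise its "cold" time. The schedule and the per-state times are hypothetical.

/* Estimate schedule execution time with a two-state cache model. */
#include <stdio.h>

#define N 2
struct proc { double C_hot, C_cold; const char *name; };

int main(void) {
    struct proc p[N] = { {2.0, 3.5, "P1"}, {1.0, 1.8, "P2"} };
    int schedule[] = {0, 0, 1, 0};   /* process index per quantum */
    int in_cache = -1;               /* whose footprint is resident */
    double total = 0.0;
    for (int q = 0; q < 4; q++) {
        int i = schedule[q];
        double c = (i == in_cache) ? p[i].C_hot : p[i].C_cold;
        printf("quantum %d: %s takes %.1f\n", q, p[i].name, c);
        total += c;
        in_cache = i;                /* this run displaces the other footprint */
    }
    printf("total estimated time %.1f\n", total);
    return 0;
}
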
General-purpose vs. real-time OS

Schedulers have very different goals in real-time and general-purpose operating systems:
  Real-time scheduler must meet deadlines.
  General-purpose scheduler tries to distribute time equally among processes.
Early real-time operating systems:
  Hunter/Ready OS for microcontrollers was developed in the early 1980s.
  Mach ran on VAX, etc., and provided real-time characteristics on large platforms.

Memory management

Memory management allows the RTOS to safely run outside (third-party) applications.
  Cell phones run downloaded, user-installed programs.
Memory management helps the RTOS manage a large virtual address space.
Flash may be used as a paging device.

Windows CE memory management

Flat 32-bit address space.
Top 2 GB for kernel.
  Statically mapped.
Bottom 2 GB for user processes.

WinCE user memory space

64 slots of 32 MB each.
Slot 0 is the currently running process.
Slots 1-33 are the processes.
  32 processes max.
Slots 33-63: object store, memory-mapped files, resource mappings.
[Slot map from the slide: slot 63 = resource mappings; slots 33-62 = object store, memory-mapped files; ...; slot 3 = process; slot 2 = process; slot 1 = DLLs; slot 0 = current process.]

Mechanisms for real time operation

Two key mechanisms for real time:
  Interrupt handler.
  Scheduler.
Interrupt handler is part of the priority system.
  Also introduces overhead.
Scheduler determines ability to meet deadlines.

Interrupt handling in RTOSs

Interrupts have priorities set in hardware.
These priorities supersede the priorities of the processes.
We want to spend as little time as possible in the hardware priority space to avoid interfering with the scheduler.
Two layers of processing:
  Interrupt service routine (ISR) is dispatched by hardware.
  Interrupt service thread (IST) is a process.
Spend as little time as possible in the ISR (hardware priorities); do most of the work in the IST (scheduler priorities). (See the sketch below.)

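A minimal C sketch of the ISR/IST split. The hal_* and rtos_event_* calls are hypothetical stand-ins, stubbed so the sketch runs as a plain program; they are not Windows CE or any real RTOS API. The point is only that the ISR does the minimum at hardware priority and signals a thread that does the real work at a scheduler-assigned priority.

/* ISR/IST split with hypothetical, stubbed HAL and RTOS services. */
#include <stdio.h>
#include <stdint.h>

static int event_pending = 0;                  /* stands in for an RTOS event */
static uint32_t hal_read_status(void) { return 0x42; }   /* fake device status */
static void hal_ack_irq(void) { /* would clear the interrupt source */ }
static void rtos_event_signal(void) { event_pending = 1; }
static int  rtos_event_wait(void) {
    if (event_pending) { event_pending = 0; return 1; }
    return 0;
}

/* ISR: dispatched by hardware, runs at hardware priority; keep it short */
static void device_isr(void) {
    hal_ack_irq();
    rtos_event_signal();    /* wake the IST, then return */
}

/* IST: an ordinary thread scheduled by process priority; does the real work */
static void device_ist(void) {
    if (rtos_event_wait())
        printf("IST processing device status 0x%x\n", (unsigned)hal_read_status());
}

int main(void) {
    device_isr();   /* pretend the hardware dispatched the ISR */
    device_ist();   /* pretend the scheduler then ran the IST */
    return 0;
}
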
Windows CE interrupts

Two types of ISRs:
  Static ISRs are built into the kernel, with one-way communication to the IST.
  Installable ISRs can be dynamically loaded and use shared memory to communicate with the IST.

Static ISR

Built into the kernel.
  On SHx and MIPS, must be written in assembler; limited register availability.
One-way communication from ISR to IST.
  Can share a buffer, but its location must be predefined.
Nested ISR support depends on the CPU and the OEM's initialization.
Stack is provided by the kernel.

Installable ISR

Can be dynamically loaded into the kernel.
Loads a C DLL.
Can use shared memory for communication.
ISRs are processed in the order they were installed.
Limited stack size.

WinCE 4.x interrupts
[Figure: WinCE 4.x interrupt handling sequence across hardware, kernel, OAL, installable ISR, and IST. The ISR runs with higher-priority interrupts enabled and sets an event for the IST; all interrupts except this one's ID remain enabled until IST processing completes and the ID is re-enabled.]

Interprocess communication

IPC is often used for large-scale communication in general-purpose systems.
Mailboxes are specialized memories used for small, fast transfers.
Multimedia systems can be supported by quality-of-service (QoS) oriented interprocess communication services.

Power management

Advanced Configuration and Power Interface (ACPI) standard defines power management levels:
  G3: mechanical off.
  G2: soft off.
  G1: sleeping.
  G0: working.
  Legacy state.

Summary

Ch. 4 in textbook.
Real-time scheduling.
Scheduling for power/energy.
Operating system mechanisms and overhead.