
COS318 - Project #4
Preemptive Scheduling
Fall 2002
7/2/2016
Overview
 Implement preemptive OS with:
 Support for preemptive scheduling:
 Implement the timer interrupt handler (irq0)
 Enforce all necessary atomicity
 Thread synchronization:
 MESA style monitors (condition variables)
Overview -- what you need to do:
 We give you 19 files
 Only two of them need to be modified
 scheduler.c
 Implement function irq0() -- the preemptive interrupt
handler
 Modify other functions in this file to ensure atomicity
 by enabling/disabling interrupts appropriately
 thread.c
 Implement functions for Condition variables
 Mesa-style condition_wait(), condition_signal()
 Again, ensure atomicity
 Total <=100 lines of C code
 But it takes a while to figure out how to do it
 Suggestion: read the given source code
Overview -- Processes and Threads
 Threads (trusted)
 Linked with the kernel
 Can share address space, variables, etc.
 Access kernel services (yield, exit, lock_*,
condition_*, getpid(), *priority()) with direct
calls to kernel functions
 Use a single stack while running thread code
and kernel code
Overview -- Processes and Threads cont’d
 Processes (untrusted)
 Linked separately from kernel
 Appear after kernel in image file
 Cannot share address space, variables, etc.
 Access kernel services (yield, exit, getpid,
*priority) via a unified system call mechanism
 Use a user stack while running process code
and a kernel stack whilst running kernel level
code
Overview -- Process Control Block (PCB)
 Each proc/thread has an associated PCB
 The PCB is a struct that keeps track of
essential proc/thread info
 PCBs are initialized at start up time
 Registers and flags are saved and
restored from the PCB at context switch
 PCBs of running proc/threads are
assembled into a circular linked list for
scheduling purposes
Process Control Block In Detail
 pid
 Unique proc/thread id (integer)
 in_kernel
 Flag indicating if thread or process
 kernel_stack, user_stack
 Initial address of stack(s)
 start_address
 Address to jump to when first executing
Process Control Block In Detail cont’d
 status
 FIRST_TIME, READY, BLOCKED, EXITED
 regs[]
 Array for context storage, e.g. registers (eax,
esi, esp) and flags (eflags)
 *next,*prev
 Pointers for assembly of circular linked list
 Running proc/threads are maintained in this
list for scheduling purposes
Overview -- Preemptive Scheduler
 Initial proc/thread is dispatched at start time
 Processes voluntarily stop running by a system
call interrupt invoking yield() or exit()
 Threads voluntarily stop running by a direct
system call invoking yield(), exit() or block()
 block() is nested in lock_acquire() or condition_wait()
 Proc/threads involuntarily stop running due to a
timer interrupt, irq0()
 Interrupt/Direct system calls invoke context saves
 Scheduler is invoked to save kernel stack, select
next available proc/thread from the linked list & dispatch
Overview -- Context Switching
 Contexts initialized for each PCB during startup
 Unique kernel and user stacks are assigned
 Starting addresses are stored
 Context (eflags and registers (including stack))
saved when execution is halted:
 For Threads (done for you)
 At start of yield() call before call to schedule()
 At end of block() call before call to schedule()
 For Processes (done for you)
 At start of system_call_entry() before potential call to
_yield() (and hence to schedule())
 For BOTH: (you need to do in irq0() )
 At start of irq0() before call to schedule()
Overview -- Context Switching cont’d
 Context is swapped in by dispatcher
 First time
 Set appropriate stack pointer and jump to starting
address (as before)
 Subsequent times
 Restore the kernel_stack saved at start of schedule()
 Note that this is different from the stack saved as part of
the context (regs.esp).
 Let dispatch() return so that the swapped in proc/thread
returns from that schedule() call via return address
stored on top of the saved stack.
 i.e. return to the caller of schedule(), not the caller of
dispatch() – tricky here, see schedule.c
 Context is restored by the code following said call to
schedule().
Context Switching Details –
Macros
 SAVE_GEN_REGS, RESTORE_GEN_REGS
 Saves and restores eax, ebx, ecx, edx, esi, edi, ebp,
esp to current_running->regs.eax, ->regs.ebx, etc.
 Used in saving/restoring context
 SAVE_EFLAGS, RESTORE_EFLAGS
 Uses present stack to push and pop the processor
status word to and from current_running->regs.eflags
 Used in saving/restoring thread context (the interrupt
mechanism automatically handles eflags for procs)
 SWITCH_TO_KERNEL_STACK, SAVE_KERNEL_STACK
 Uses current_running->kernel_stack to restore and
save the present stack, esp.
 Used in schedule()/dispatch(), system_call_entry(), and
irq0() (for processes)
Access System Services -- Threads
 Threads can invoke system calls (yield(), exit(), lock_*(),
condition_*(), getpid(), *priority()) directly
 yield()
 Context (eax,ebx,…,esp & eflags) saved.
 schedule() is invoked and saves kernel_stack before moving to
next proc/thread and dispatching
 When swapped back in, dispatch returns to address on saved
kernel stack, at point after schedule()
 Context (eax,ebx,…,esp & eflags) is restored
 yield() returns to the address on top of the stack (to the point after the yield call)
 block()
 PCB is enqueued on lock/condition linked list
 Proceeds exactly as for yield()
 What about Processes??
Interrupt Mechanism In detail
 The following diagram shows how the
interrupt mechanism works:
 [Diagram: the CPU vectors Int 0, Int 32, and Int 48 through the
IDT (at 0x1000); Int 32 is handled by irq0() and Int 48 by
system_call_entry()]
*Every 10ms the hardware timer interrupt IRQ0 invokes Int 32
Access System Services -- Processes
 system_call_entry()
 eflags and eip are automatically saved because we enter
via the interrupt mechanism
 Context (register set including esp) is saved.
 Switch to kernel stack.
 Invoke the desired system call.
 If scheduler is invoked (via _yield), it saves kernel_stack
before moving to next proc/thread and dispatching (just
as for threads)
 When swapped back in, dispatch returns to address on
saved kernel stack, at point after schedule() in _yield()
 _yield() returns into system_call_entry()
 The kernel stack is saved
 Context (register set) is restored.
 ‘iret’ is invoked to restore eflags and return to eip
automatically saved on stack upon interrupt entry.
Access System Services -- Processes
 Processes use system call interrupt mechanism
to access system services (yield(), exit(), getpid(),
*priority())
 Jump table, system_call_entry(), allows access to fixed
set of kernel services
 Address of jump table is specified in an Interrupt
Descriptor Table (IDT)
 IDT’s location (in kernel) is specified to the CPU
 Software interrupt 48 (with desired service specified in
eax) can then invoke system_call_entry()
 syslib.c maps system calls to int 48 with eax argument
Preemptive Scheduling -- Processes and Threads
 Processes/threads are preemptively scheduled at a timer
interrupt – irq0()
 irq0() – What you must implement…
 eflags and eip are automatically saved because we enter via
the interrupt mechanism
 Proc/thread context is saved. (save registers…)
 Stack change occurs (processes only, switch to kernel stack)
 The End of Interrupt signal (SEND_EOI macro) is given to
inform the interrupt controller that the interrupt has been serviced
 Scheduling is invoked (scheduler will call dispatcher).
 When swapped back in, we return to this point.
 Stack save occurs (processes only).
 Proc/thread context is restored (restore registers).
 ‘iret’ is invoked to restore eflags and return to eip automatically
saved on stack upon interrupt entry.
Overview -- Thread Synchronization
 Only one thread can hold some lock, l, at
any one time
 Lock is acquired via lock_acquire (l)
 If UNLOCKED set to LOCKED
 If LOCKED thread is blocked (which places
thread in a queue stored in the lock)
 Lock is released via lock_release(l)
 If queue is empty then set to UNLOCKED
 If not empty then unblock (dequeue) a thread
which now holds the lock
Thread Synchronization - block(),
unblock()
 These are generalized services invoked
by lock_* and condition_*
 block(struct pcb_t **q) (done for you)
 Change current_running status to BLOCKED
 Add current_running to end of queue (*q)
 Call to yield()
 unblock(struct pcb_t **q) (done for you)
 Remove PCB at head of queue (*q)
 Change PCB’s status to RUNNING
 Insert PCB into running linked list
Overview -- Thread Synchronization cont’d
 Waiting is accomplished by first acquiring a
mutex, m, then checking the condition and, if it is not
satisfied, invoking condition_wait (m,c) – e.g. producer/consumer
 You need to implement condition_wait(m,c)
 The mutex, m, is released
 The thread then blocks on the condition, c
 Execution resumes and attempts to reacquire the
mutex, m
 Typically, the user code retests the condition….
 Signaling and Broadcasting is accomplished via
condition_signal(c) and condition_broadcast(c)
 Signal should unblock one thread
 Broadcast should unblock all threads
Overview –
Atomic Processes
 Atomicity:
 Preemptive scheduling will break everything
 irq0() can occur at any point during execution
 If lock_* or condition_* are interrupted part way
through, failure can result because we assume that
these commands fully complete once started
 Catastrophic results can happen if interruption
occurs during kernel services, especially those
which modify the PCB running list (schedule,
dispatch, block, unblock)
 Solution: Disable interrupts during crucial
times
Overview –
Atomic Processes
 Handy Macros
 CRITICAL_SECTION_BEGIN – Disables interrupts and
increments global variable, disable_count (debug!)
 CRITICAL_SECTION_END – Decrements
disable_count and enables interrupts if disable_count
reached zero
 The count should never become negative!
 schedule(), dispatch(), block(), unblock() all assert that
disable_count > 0 since these kernel functions should
never be interrupted.
 You must manually assure that other critical kernel
functions (see upcoming slides) and thread
synchronization routines are made atomic
 Note, it’s not necessary that disable_count > 1 ever occur
(nested critical sections). Avoiding this makes life easier.
Atomicity –
Details
 schedule(), dispatch(), block(), unblock() all
assert that interrupts are disabled
 All entry points to these functions must incorporate
CRITICAL_SECTION_BEGIN
 system_call_entry()
 Process system call could lead to _yield(), exit() which call to
schedule()
 yield()
 Thread’s direct system call will lead to schedule()
 exit()
 But only for a thread’s direct call since processes only call
here via system_call_entry (test current_running->in_kernel)
 irq0()
 This results in a call to schedule() for threads and processes
 lock_*, condition_*
 To ensure atomicity, and because they can call block(), unblock()
Atomicity –
Details cont’d
 When any block() or unblock() call completes, the
interrupts are disabled (before the call, or before
dispatch swapped in a newly unblocked thread)
 CRITICAL_SECTION_END must ultimately appear after
such calls
 At end of lock_*, condition_*
 FIRST_TIME proc/threads launch directly from the
dispatcher which assumes interrupts are disabled
 CRITICAL_SECTION_END must appear before jumping
to the start address
 Non-FIRST_TIME dispatch() calls swap in saved kernel
stack and return to begin execution after the schedule()
call that caused the context to originally swap out
 CRITICAL_SECTION_END must ultimately appear at the
end of all eventual context restorations
 At end of system_call_entry(), yield() or irq0()
Atomicity –
Thread Synchronization Functions
 As mentioned, lock_* and condition_* must execute
atomically
 Furthermore, lock_acquire(), lock_release(), condition_*()
must disable interrupts while calling to block(), unblock()
 All code within these functions should be bracketed
within CRITICAL_SECTION_* macros
 Difficulties can arise if critical sections become nested
when condition_wait() calls to lock_acquire(),
lock_release() leading to non-zero disable_count when
context restoration completes -- BAD
 You may simplify life by defining _lock_acquire(), and
_lock_release(), which are seen only by condition_wait()
and which do not invoke critical section macros
 Since you disable interrupts at the start of
condition_wait() and are assured that block() returns
with interrupts disabled everything should work safely.
Implementation Details -- Code Files
 You are responsible for:
 scheduler.c
 The preemptive interrupt handler should be implemented
in irq0() as described previously
 All other functions are already implemented.
 Atomicity must be ensured by enabling/disabling interrupts
appropriately in system_call_entry(), yield(), dispatch(),
exit(), and irq0()
 thread.c
 Condition variables must be implemented by manipulating
locks and the condition wait queue as described
 Atomicity must be ensured by enabling/disabling interrupts
appropriately as described
Implementation Details -- Code Files cont’d
 You are NOT responsible for:
 common.h
 Some general global definitions
 kernel.h, kernel.c, syslib.h, syslib.c
 Code to setup OS & process system call mechanism
 util.h, util.c
 Useful utilities (no standard libraries are available)
 th.h, th1.c, th2.c, process1.c, process2.c
 Some proc/threads that run in this project
 thread.h, scheduler.h, creatimage, bootblock
 The given utils/interfaces/definitions
Implementation Details -- Inline Assembly cont’d
 Example
 asm volatile("addl %0, %%eax" : : "n"((char *)&pcb[0].regs - (char *)&pcb[0]));
 n indicates that an immediate value (available at compile time) is the
source for this instruction
 Since the pcb array is statically allocated, the byte address offset of
the regs[] array in the pcb struct is available as an immediate value
 asm volatile("iret");
 More examples in given source code
 References
 www.cs.princeton.edu/courses/archive/fall99/cs318/Files/djgpp.html
 www.uwsg.iu.edu/hypermail/linux/kernel/9804.2/0818.html
 www.castle.net/~avly/djasm.html
Implementation Details -- Extra Credit
 Prioritized Scheduling…