
Operating Systems:
Internals and Design Principles
William Stallings
Operating Systems review
Index
• Architecture & Process
• Concurrency
• Scheduling
• Memory Management
• File Management
• Distributed Computing
Operating System
• Exploits the hardware resources of one or
more processors
• Provides a set of services to system users
• Manages secondary memory and I/O
devices
Operating System
• A program that controls the execution of
application programs
• An interface between applications and
hardware
• Main objectives of an OS:
– Convenience
– Efficiency
– Ability to evolve
A Computer’s
Basic Elements
• Processor
• Main Memory
• I/O Modules
• System Bus
Services Provided
by the Operating System
• Program development
– Editors and debuggers.
• Program execution
– OS handles scheduling of numerous tasks
required to execute a program
• Access to I/O devices
– Each device has a unique interface
– The OS presents a standard interface to users
Microkernel Architecture
• Most early operating systems used a monolithic kernel
– Most OS functionality resides in the kernel
• A microkernel assigns only a few essential
functions to the kernel
– Address spaces
– Interprocess communication (IPC)
– Basic scheduling
Benefits of a
Microkernel Organization
• Uniform interfaces on requests made by a
process.
• Extensibility
• Flexibility
• Portability
• Reliability
• Distributed System Support
• Object Oriented Operating Systems
Benefits of Microkernel (1)
• Uniform interfaces: Processes need not distinguish
between kernel-level and user-level services because all
such services are provided by means of message
passing.
• Extensibility: facilitates the addition of new services as
well as the provision of multiple services in the same
functional area.
• Flexibility: not only can new features be added to the
operating system, but existing features can be subtracted
to produce a smaller, more efficient implementation.
• Portability: all or at least much of the processor-specific
code is in the microkernel; thus, changes needed to port
the system to a new processor are fewer and tend to be
arranged in logical groupings.
Benefits of Microkernel (2)
• Reliability: A small microkernel can be rigorously tested.
Its use of a small number of application programming
interfaces (APIs) improves the chance of producing
quality code for the operating-system services outside
the kernel.
• Distributed system support: the message orientation
of microkernel communication lends itself to extension to
distributed systems.
• Support for object-oriented operating system
(OOOS): An object-oriented approach can lend discipline
to the design of the microkernel and to the development
of modular extensions to the operating system.
Symmetric
multiprocessing (SMP)
• An SMP system has
– multiple processors
– These processors share the same main memory
and I/O facilities
– All processors can perform the same
functions
• The OS of an SMP schedules processes
or threads across all of the processors.
SMP Advantages
• Performance
– Allowing parallel processing
• Availability
– Failure of a single processor does not halt the
system
• Incremental Growth
– Additional processors can be added
• Scaling
– Vendors can offer a range of products with different
price and performance characteristics
Multiprocessor OS
Design Considerations
• The key design issues include
– Simultaneous concurrent processes or
threads
– Scheduling
– Synchronization
– Memory Management
– Reliability and Fault Tolerance
Process Control Block
• Contains the information about
a process
• Created and managed by
the operating system
• Allows support for
multiple processes
Role of the
Process Control Block
• The most important data structure in an
OS
– It defines the state of the OS
• Process Control Block requires protection
– A faulty routine could cause damage to the
block destroying the OS’s ability to manage
the process
– Any design change to the block could affect
many modules of the OS
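A minimal sketch of what such a block might contain, in C. The field names are illustrative assumptions, not the layout of any particular OS (Linux's task_struct, for instance, holds far more state):

/* Illustrative process control block; field names are hypothetical. */
enum proc_state { NEW, READY, RUNNING, BLOCKED, EXIT };

struct pcb {
    int             pid;              /* process identifier                */
    enum proc_state state;            /* current scheduling state          */
    int             priority;         /* scheduling priority               */
    void           *program_counter;  /* saved PC for the next dispatch    */
    unsigned long   registers[16];    /* saved general-purpose registers   */
    void           *page_table;       /* memory-management information     */
    int             open_files[16];   /* I/O status information            */
    struct pcb     *next;             /* link for the ready/blocked queues */
};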
Concurrency
Concurrency arises in:
• Multiple applications
– Sharing time
• Structured applications
– Extension of modular design
• Operating system structure
– OS themselves implemented as a set of
processes or threads
Multiple Processes
• Central to the design of modern Operating
Systems is managing multiple processes
– Multiprogramming
– Multiprocessing
– Distributed Processing
• Big Issue is Concurrency
– Managing the interaction of all of these
processes
Deadlock
• A set of processes is deadlocked when each
process in the set is blocked awaiting an event
that can only be triggered by another blocked
process in the set
– Typically involves processes competing for
the same set of resources
• No efficient solution
– Deadlock is permanent because none of the events is ever
triggered.
– Unlike other problems in concurrent process management, there is
no efficient solution in the general case.
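As a concrete illustration (a hypothetical sketch using POSIX threads, not an example from the slides), two threads that each hold one lock while requesting the other's lock fit this definition exactly: each is blocked on an event that only the other blocked thread can trigger.

#include <pthread.h>

/* Thread A holds lock1 and waits for lock2; thread B holds lock2 and
   waits for lock1. If they interleave badly, both block forever. */
pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

void *thread_a(void *arg) {
    pthread_mutex_lock(&lock1);   /* hold resource 1 ...          */
    pthread_mutex_lock(&lock2);   /* ... and wait for resource 2  */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return arg;
}

void *thread_b(void *arg) {
    pthread_mutex_lock(&lock2);   /* hold resource 2 ...          */
    pthread_mutex_lock(&lock1);   /* ... and wait for resource 1  */
    pthread_mutex_unlock(&lock1);
    pthread_mutex_unlock(&lock2);
    return arg;
}

Acquiring the two locks in the same order in every thread breaks the circular wait and removes the deadlock.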
The Three Conditions for Deadlock
• Mutual exclusion : Only one process may
use a resource at a time.
• Hold and wait : A process may hold
allocated resources while awaiting
assignment of others.
• No preemption : No resource can be
forcibly removed from a process holding it.
Requirements for Mutual Exclusion
• 1. Mutual exclusion must be enforced: only one process
at a time is allowed into its critical section, among all
processes that have critical sections for the same
resource or shared object.
• 2. A process that halts in its non-critical section must do
so without interfering with other processes.
• 3. It must not be possible for a process requiring access
to a critical section to be delayed indefinitely: no
deadlock or starvation.
• 4. When no process is in a critical section, any process
that requests entry to its critical section must be
permitted to enter without delay.
• 5. No assumptions are made about relative process
speeds or number of processors.
• 6. A process remains inside its critical section for a finite
time only
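A minimal sketch of how requirement 1 is usually met in practice, assuming POSIX threads; the shared counter and loop bound are illustrative only.

#include <pthread.h>

/* A shared counter guarded by a mutex: only one thread at a time
   executes the critical section, and it stays there a finite time. */
static long shared_counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);    /* entry protocol   */
        shared_counter++;                     /* critical section */
        pthread_mutex_unlock(&counter_lock);  /* exit protocol    */
    }
    return arg;
}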
Strong/Weak
Semaphore
• A queue is used to hold processes waiting
on the semaphore
– In what order are processes removed from
the queue?
• Strong Semaphores use FIFO
• Weak Semaphores don’t specify the
order of removal from the queue
Producer/Consumer
Problem
• General Situation:
– One or more producers are generating data and
placing these in a buffer
– A single consumer is taking items out of the buffer
one at a time
– Only one producer or consumer may access the
buffer at any one time
• The Problem:
– Ensure that the producer can’t add data into a full buffer
and the consumer can’t remove data from an empty buffer
Producer/Consumer Animation
Functions
• Assume an infinite buffer b with a linear array of
elements
Producer
while (true) {
    /* produce item v */
    b[in] = v;
    in++;
}
Consumer
while (true) {
    while (in <= out)
        /* do nothing */;
    w = b[out];
    out++;
    /* consume item w */
}
Buffer
Incorrect Solution
Possible Scenario
Correct Solution
Semaphores
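The correct semaphore solution itself is not reproduced in these notes, so the following is a minimal bounded-buffer sketch using POSIX semaphores; the buffer size of 8, the item count of 100, and the function names are assumptions for illustration only.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                          /* assumed buffer capacity */

static int buffer[N];
static int in = 0, out = 0;

static sem_t empty_slots;            /* counts free slots (starts at N)   */
static sem_t full_slots;             /* counts filled slots (starts at 0) */
static sem_t mutex;                  /* binary semaphore for the buffer   */

void *producer(void *arg) {
    for (int v = 0; v < 100; v++) {  /* produce 100 sample items */
        sem_wait(&empty_slots);      /* block if the buffer is full */
        sem_wait(&mutex);
        buffer[in] = v;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full_slots);       /* signal the consumer */
    }
    return arg;
}

void *consumer(void *arg) {
    for (int i = 0; i < 100; i++) {
        int w;
        sem_wait(&full_slots);       /* block if the buffer is empty */
        sem_wait(&mutex);
        w = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);      /* free a slot for the producer */
        printf("consumed %d\n", w);
    }
    return arg;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Here empty_slots blocks the producer when the buffer is full, full_slots blocks the consumer when it is empty, and mutex enforces one-at-a-time access to the buffer, meeting the requirements stated above.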
Scheduling
• An OS must allocate resources amongst
competing processes.
• The resource provided by a processor is
execution time
– The resource is allocated by means of a
schedule
Aim of Short
Term Scheduling
• Main objective is to allocate processor
time to optimize certain aspects of system
behaviour.
• A set of criteria is needed to evaluate the
scheduling policy.
Short-Term Scheduling
Criteria: User vs System
• We can differentiate between user and
system criteria
• User-oriented
– Response Time
• Elapsed time between the submission of a request
until there is output.
• System-oriented
– Effective and efficient utilization of the
processor
Short-Term Scheduling
Criteria: Performance
• We could differentiate between
performance related criteria, and those
unrelated to performance
• Performance-related
– Quantitative, easily measured
– E.g. response time and throughput
• Non-performance related
– Qualitative
– Hard to measure
Decision Mode
• Specifies the instants in time at which the
selection function is exercised.
• Two categories:
– Nonpreemptive
– Preemptive
Preemptive vs. Nonpreemptive
• Preemptive scheduling allows a process to
be interrupted in the midst of its execution,
taking the CPU away and allocating it to
another process.
• Nonpreemptive scheduling ensures that a
processor relinquishes control of the CPU
only when it finishes with its current CPU
burst
Overall Aim
of Scheduling
• The aim of processor scheduling is to
assign processes to be executed by the
processor over time,
– in a way that meets system objectives, such
as response time, throughput, and processor
efficiency.
Scheduling Objectives
• The scheduling function should
– Share time fairly among processes
– Prevent starvation of a process
– Use the processor efficiently
– Have low overhead
– Prioritise processes when necessary (e.g. real
time deadlines)
Process Scheduling
Example
• Example set of
processes,
consider each a
batch job
– Service time represents total execution time
Process Scheduling
Process   Burst Time   Priority
P1        8            3
P2        4            1
P3        3            2
a) Schedule the processes using FCFS, SJF, and nonpreemptive priority.
b) What is the turnaround time of each process?
c) What is the waiting time of each process, and the average waiting time?
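A worked sketch of parts (a)-(c), in C. Two assumptions are made that the slide leaves implicit: all three jobs arrive at time 0 in the order P1, P2, P3, and a lower priority number means higher priority.

#include <stdio.h>

struct job { const char *name; int burst; };

/* Run the jobs in the given order and report per-job and average times. */
static void run(const char *policy, const struct job order[], int n) {
    int t = 0, wait_sum = 0, tat_sum = 0;
    printf("%s:\n", policy);
    for (int i = 0; i < n; i++) {
        int waiting    = t;                  /* time spent before starting  */
        int turnaround = t + order[i].burst; /* completion time since arrival */
        printf("  %s: waiting=%d turnaround=%d\n",
               order[i].name, waiting, turnaround);
        wait_sum += waiting;
        tat_sum  += turnaround;
        t += order[i].burst;
    }
    printf("  average waiting=%.2f average turnaround=%.2f\n",
           wait_sum / (double)n, tat_sum / (double)n);
}

int main(void) {
    struct job fcfs[] = { {"P1", 8}, {"P2", 4}, {"P3", 3} }; /* arrival order        */
    struct job sjf[]  = { {"P3", 3}, {"P2", 4}, {"P1", 8} }; /* shortest burst first */
    struct job prio[] = { {"P2", 4}, {"P3", 3}, {"P1", 8} }; /* priority 1, 2, 3     */
    run("FCFS", fcfs, 3);
    run("SJF (nonpreemptive)", sjf, 3);
    run("Priority (nonpreemptive)", prio, 3);
    return 0;
}

Under these assumptions the average waiting times come out to 6.67 (FCFS), 3.33 (SJF), and 3.67 (priority), with average turnaround times of 11.67, 8.33, and 8.67 respectively.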
The need for memory
management
• Memory is cheap today, and getting
cheaper
– But applications are demanding more and
more memory, there is never enough!
• Memory management involves swapping
blocks of data from secondary storage
• Memory I/O is slow compared to a CPU
– The OS must cleverly time the swapping to
maximise the CPU’s efficiency
Memory Management
• Memory needs to be allocated to ensure a
reasonable supply of ready processes to
consume available processor time
• Memory management requirements:
– Relocation
– Protection
– Sharing
– Logical organisation
– Physical organisation
Page Frame
• The number of page frames is the size of
memory divided by the page size
– Page frames = memory size / page size
• The number of bits the system uses for the
displacement (offset) within a page is log2(page size)
• Example: a system with 128 MB of main memory and a
page size of 8 KB
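Working the stated example through (each step is direct arithmetic from the figures on the slide):

\[
\text{page frames} = \frac{128\ \text{MB}}{8\ \text{KB}} = \frac{2^{27}}{2^{13}} = 2^{14} = 16{,}384,
\qquad
\text{displacement bits} = \log_2(8\ \text{KB}) = \log_2 2^{13} = 13 .
\]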
Replacement Policies
Optimal Policy
• The optimal policy produces three page
faults after the frame allocation has been
filled.
Least Recently
Used (LRU)
• Replaces the page that has not been
referenced for the longest time
• By the principle of locality, this should be
the page least likely to be referenced in
the near future
• Difficult to implement
– One approach is to tag each page with the
time of last reference.
– This requires a great deal of overhead.
LRU Example
• The LRU policy does nearly as well as the
optimal policy.
– In this example, there are four page faults
First-in, first-out (FIFO)
• Treats page frames allocated to a process
as a circular buffer
• Pages are removed in round-robin style
– Simplest replacement policy to implement
• Page that has been in memory the longest
is replaced
– But these pages may be needed again very
soon if they haven’t truly fallen out of use
FIFO Example
• The FIFO policy results in six page faults.
– Note that LRU recognizes that pages 2 and 5
are referenced more frequently than other
pages, whereas FIFO does not.
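The fault counts quoted for the optimal, LRU, and FIFO examples come from a worked figure that is not reproduced here. The sketch below assumes the reference string commonly used with this example (2 3 2 1 5 2 4 5 3 2 5 2) and three frames and, like the slides, counts only the faults that occur after the frames have been filled; with that string it reports six FIFO faults.

#include <stdio.h>

#define NFRAMES 3

int main(void) {
    int refs[] = {2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2};
    int n = (int)(sizeof refs / sizeof refs[0]);
    int frames[NFRAMES];
    int used = 0;       /* frames occupied so far            */
    int next = 0;       /* index of the oldest page (victim) */
    int faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;

        if (used < NFRAMES) {
            frames[used++] = refs[i];   /* cold-start fill: not counted */
        } else {
            frames[next] = refs[i];     /* evict the page resident longest */
            next = (next + 1) % NFRAMES;
            faults++;                   /* counted fault */
        }
    }
    printf("FIFO page faults (after fill): %d\n", faults);
    return 0;
}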
File Management
• File management system consists of
system utility programs that run as
privileged applications
• Concerned with secondary storage
Criteria for
File Organization
• Important criteria include:
– Short access time
– Ease of update
– Economy of storage
– Simple maintenance
– Reliability
• Priority will differ depending on the use
(e.g. read-only CD vs Hard Drive)
– Some may even conflict
File Organisation
Types
• Many exist, but usually variations of:
– Pile
– Sequential file
– Indexed sequential file
– Indexed file
– Direct, or hashed, file
File Allocation Methods
• Contiguous allocation: a single contiguous set
of blocks is allocated to a file at the time of file
creation.
• Chained allocation: allocation is on an
individual block basis. Each block contains a
pointer to the next block in the chain.
• Indexed allocation: the file allocation table
contains a separate one-level index for each file; the
index has one entry for each portion allocated to the
file.
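As an illustration of the chained method, a FAT-style table can record, for every disk block, which block follows it in its file's chain. The table size and sentinel values below are assumptions, not a real file system's layout.

#define NBLOCKS 1024
#define END_OF_CHAIN (-1)
#define FREE_BLOCK   (-2)

/* next_block[b] is the block that follows block b in its file's chain,
   END_OF_CHAIN if b is the file's last block, or FREE_BLOCK if b is
   unallocated. */
static int next_block[NBLOCKS];

/* Walk a file's chain from its first block and count its length. */
int chain_length(int first)
{
    int len = 0;
    for (int b = first; b != END_OF_CHAIN; b = next_block[b])
        len++;
    return len;
}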
File Allocation
Records-Blocking methods
• Fixed-length blocking: fixed-length records are used, and an
integral number of records are stored in a block
– Unused space at the end of a block is internal fragmentation
• Variable-length spanned blocking: variable-length records are
used and are packed into blocks with no unused space;
some records may span multiple blocks
– Continuation is indicated by a pointer to the successor block
• Variable-length unspanned blocking: uses variable-length
records without spanning
– Wasted space in most blocks because of the inability
to use the remainder of a block if the next record is
larger than the remaining unused space
Definitions: B = block size, R = record size, P = size of block pointer,
F = blocking factor (expected number of records within a block)
Spanned: F =
Unspanned: F =
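The right-hand sides above are left blank in these notes. One plausible completion, derived only from the descriptions given here and offered as an assumption rather than a quoted result: if each spanned block reserves a P-byte pointer to its successor, and an unspanned block simply wastes whatever tail space the next record cannot use, then

\[
F_{\text{spanned}} \approx \frac{B - P}{R},
\qquad
F_{\text{unspanned}} \le \left\lfloor \frac{B}{R} \right\rfloor .
\]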
Bit Tables
• This method uses a vector containing one bit for
each block on the disk.
• Each entry of 0 corresponds to a free block,
– and each 1 corresponds to a block in use
• Advantages:
– Works well with any file allocation method
– As small as possible
– The amount of memory (in bytes) required for
a block bitmap: formula?
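The formula asked for follows directly from using one bit per block:

\[
\text{bitmap memory (bytes)} = \frac{\text{number of disk blocks}}{8}
= \frac{\text{disk size (bytes)}}{8 \times \text{block size (bytes)}} .
\]

For example (an illustrative figure, not from the slides), a 16 GB disk with 512-byte blocks has 2^34 / 2^9 = 2^25 blocks, so its bitmap occupies 2^25 / 8 bytes = 4 MB.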
Distributed Computing
- Traditional Data Processing
• Traditionally data processing was
centralised
• Typically involving centralised
– Computers
– Processing
– Data
Distributed Data Processing
• Distributed Data Processing (DDP)
departs from the centralised model in one
or multiple ways.
• Usually smaller computers, are dispersed
throughout an organization.
• May involve central node with satellites, or
be a dispersed peer-to-peer approach
– Interconnection is usually required
Middleware
• Set of tools that provide a uniform means and style
of access to system resources across all platforms
• Enable programmers to build applications that look
and feel the same
• Enable programmers to use the same method to
access data
• TCP/IP does not provide the APIs and the
intermediate-level protocols to support a variety of
applications across different hardware and OS
platforms
Role of Middleware in
Client/Server Architecture