
OS

Definition and Functions of an Operating System
• Originally defined as the software that
controls the hardware; now viewed more
broadly as the programs that make the
hardware usable.
• Examples include UNIX, Mach, MS-DOS, MS-Windows, Windows/NT, Chicago, OS/2, MacOS,
VMS, MVS, and VM.
• Controlling the computer involves software at
several levels: kernel services, library services,
and application-level services.
• The kernel, a control program, functions in a
privileged state, responding to interrupts and
service requests.
Functions of Operating Systems
• Implement the user interface, share hardware
among users, allow data sharing, prevent user
interference, schedule resources, facilitate
input/output, recover from errors, and handle
network communications.
Objectives of Operating Systems
• To hide hardware details by creating
abstraction.
• To allocate resources to processes (manage
resources).
• To provide a pleasant and effective user
interface.
Viewing Operating Systems from the Resource
Manager and Extended Machine Perspectives
• Resource manager perspective: the operating
system allocates processors, memory, and I/O
devices among competing processes.
• Extended machine perspective: the operating
system hides hardware details behind a
convenient abstraction; in this view, Operating
Systems are the programs that make the
hardware usable.
Operating systems have evolved through several
generations, starting with the first generation in
the 1940s. These systems were primitive and did
not have operating systems. The second
generation saw the introduction of punch cards
in the 1950s, leading to the development of
single-stream batch processing systems. The
third generation introduced multiprogramming,
which allowed multiple jobs to reside in
memory at once, with the processor switching
between them as needed. This enabled better
resource utilization and increased user
productivity.
The fourth generation saw the development of
LSI (Large Scale Integration) circuits and chips,
leading to the development of personal
computers and workstations. Two operating
systems dominate the personal computer scene:
MS-DOS and UNIX.
The term "process" was first used in the 1960s
and has been used interchangeably with "task"
or "job." A process is more than program code:
it is an active entity that includes the current
value of the program counter (PC), the contents
of the processor registers, variables, the
process stack (SP), and a data section
containing global variables. In the process
model, all
software on a computer is organized into
sequential processes, each with its own virtual
CPU. The CPU switches back and forth among
processes, known as multiprogramming, to
ensure efficient use of resources.
The state of a process is a crucial part of its
operation, encompassing code, static and
dynamic data, the procedure call stack, general
purpose registers, the program counter, the
program status word, and operating system
resources. A process passes through a series of
discrete states: new, ready, running, blocked,
and terminated.
In general-purpose systems, processes are
created through four principal events: system
initialization, execution of a process-creation
system call by a running process, a user request
to create a new process, and initiation of a
batch job. Background processes, called
daemons, run without user interaction. A
process can create a new process through a
fork, which clones the calling process; each
child has its own distinct address space.
Process creation can occur when a user logs on,
starts a program, or operating systems create a
process to provide service. Process termination
occurs when a process completes its last
statement, returning resources to the system,
purging it from system lists or tables, and erasing
its process control block (PCB).
There are six possible transitions among these
five states:
1. Block: Running → Blocked.
2. Time-Run-Out: Running → Ready.
3. Dispatch: Ready → Running.
4. Wakeup: Blocked → Ready.
5. Admitted: New → Ready.
6. Terminated: Running → Terminated.
A process control block (PCB) is a data structure
that defines a process in an operating system. It
contains information about the current state,
unique identification of the process, pointers to
parent and child processes, priority of the
process, memory pointers, register save area, and
the processor it is running on. CPU scheduling is
the assignment of physical processors to
processes, and the scheduling algorithm
determines when processors should be assigned
to which processes. The goals of scheduling
include fairness, policy enforcement, efficiency,
response time, turnaround, and throughput.
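The PCB fields described above might be laid out as a C struct along these lines; the field names and sizes are illustrative, not from any particular operating system.

```c
/* A sketch of a process control block (PCB). */
#include <stdint.h>

#define NREGS 16

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } ProcState;

struct pcb {
    int         pid;           /* unique identification of the process */
    ProcState   state;         /* current state */
    struct pcb *parent;        /* pointer to the parent process */
    struct pcb *children;      /* pointer to first child (siblings linked) */
    int         priority;      /* priority of the process */
    uintptr_t   mem_base;      /* memory pointers: base of address space */
    uintptr_t   mem_limit;     /* memory pointers: size limit */
    uintptr_t   regs[NREGS];   /* register save area */
    uintptr_t   pc;            /* saved program counter */
    int         cpu;           /* processor it is running on */
};
```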
Scheduling algorithms can be divided into
preemptive and nonpreemptive scheduling.
Nonpreemptive scheduling ensures fair
treatment of all processes and predictable
response times; a scheduling decision is made
in only two situations: when a process switches
from the running to the waiting state, or when
a process terminates. Preemptive scheduling
allows a logically runnable process to be
temporarily suspended.
CPU scheduling deals with the problem of
deciding which processes in the ready queue are
to be allocated the CPU. Some scheduling
algorithms include FCFS Scheduling, SJF
Scheduling, Round Robin Scheduling, and Priority
Scheduling. FCFS is the simplest scheduling
algorithm, dispatching processes according to
their arrival time on the ready queue. It is more
predictable but not useful for scheduling
interactive users due to poor response time. The
First-Come-First-Served algorithm is rarely used
as a master scheme in modern operating systems
but is often embedded within other schemes.
Deadlock is a state where each process in a set of
processes is waiting for an event that can only be
caused by another process in the set. Resources
can be physical (such as a printer) or logical
(such as a semaphore). The simplest example of
deadlock is when process 1 has been allocated
non-shareable resource A and process 2 has
been allocated non-shareable resource B. If
process 1 now needs resource B and process 2
needs resource A to proceed, both are blocked,
and all useful work in the system stops.
Resources come in two flavors: preemptable and
nonpreemptable. Preemptable resources can be
taken away from the process without causing ill
effects, while nonpreemptable resources cannot
be taken away from the process without causing
ill effect. Reallocating resources can resolve
deadlocks involving preemptable resources, but
deadlocks involving nonpreemptable resources
are difficult to deal with.
Coffman (1971) identified four necessary and
sufficient deadlock conditions: Mutual Exclusion,
Hold and Wait, No-Preemption, and Circular
Wait. Havender's pioneering work showed that
deadlock could be prevented by denying any one
of these conditions. Deadlock avoidance is
another approach that anticipates deadlock
before it actually occurs, employing an algorithm
to assess the possibility that deadlock could
occur and acting accordingly.
Virtual memory is a common part of most
operating systems on desktop computers
because it provides a big benefit for users at a
very low cost. With virtual memory, computers
can automatically copy unused RAM areas onto
the hard disk, freeing up space in RAM to load
new applications. This makes the computer feel
like it has almost unlimited RAM even though
only, say, 32 megabytes are installed. Hard disk space
is much cheaper than RAM chips, providing an
economic benefit.