
Chapters 7-12: Operating Systems final exam slides

Operating Systems: Internals and Design Principles
Ninth Edition
William Stallings

Chapter 7: Memory Management
Table 7.1 Memory Management Terms

Frame
A fixed-length block of main memory.

Page
A fixed-length block of data that resides in secondary memory (such as disk). A page of data may temporarily be copied into a frame of main memory.

Segment
A variable-length block of data that resides in secondary memory. An entire segment may temporarily be copied into an available region of main memory (segmentation), or the segment may be divided into pages which can be individually copied into main memory (combined segmentation and paging).

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Memory Management Requirements

Memory management is intended to satisfy the following requirements:
◼ Relocation
◼ Protection
◼ Sharing
◼ Logical organization
◼ Physical organization

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.

Relocation
◼
Programmers typically do not know in advance which other programs
will be resident in main memory at the time of execution of their
program
◼
Active processes need to be able to be swapped in and out of main
memory in order to maximize processor utilization
◼
Specifying that a process must be placed in the same memory
region when it is swapped back in would be limiting
◼ May need to relocate the process to a different area
of memory
◼ OS must be able to translate the memory references in the code of
the program into actual physical memory addresses, reflecting the
current location of the program in main memory.
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Protection
◼ Processes need to acquire permission to reference memory locations for reading or writing purposes
◼ Location of a program in main memory is unpredictable
◼ Memory references generated by a process must be checked at run time
◼ Mechanisms that support relocation also support protection

Figure 7.1 Addressing Requirements for a Process (process image in main memory: process control block, program with entry point and branch instructions, references to data, data, and stack, with increasing address values)
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Sharing
◼ Advantageous to allow each process access to the same copy of the program rather than have its own separate copy
◼ Memory management must allow controlled access to shared areas of memory without compromising protection
◼ Mechanisms used to support relocation also support sharing capabilities

Logical Organization
◼ Memory is organized as a linear address space
◼ Programs are written in modules
• Modules can be written and compiled independently
• Different degrees of protection can be given to modules (read-only, execute-only)
• Sharing on a module level corresponds to the user's way of viewing the problem
◼ Segmentation is the tool that most readily satisfies these requirements

Physical Organization
◼ Memory available for a program plus its data may be insufficient
◼ Overlaying allows various modules to be assigned the same region of memory but is time consuming to program
◼ Programmer does not know how much space will be available
◼ Cannot leave the programmer with the responsibility to manage memory

Memory Partitioning
◼ Memory management brings processes into main memory for execution by the processor
▪ Involves virtual memory
▪ Based on segmentation and paging
◼ Partitioning
▪ Used in several variations in some now-obsolete operating systems
▪ Does not involve virtual memory
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Fixed Partitioning
◼ Equal-size partitions
▪ Any process whose size is less than or equal to the partition size can be loaded into an available partition
▪ The operating system can swap out a process if all partitions are full and no process is in the Ready or Running state
◼ A program may be too big to fit in a partition
▪ Program needs to be designed with the use of overlays
◼ Main memory utilization is inefficient
▪ Any program, regardless of size, occupies an entire partition
▪ Internal fragmentation: wasted space due to the block of data loaded being smaller than the partition
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Unequal-Size Partitions
◼ Using unequal-size partitions helps lessen the problems:
▪ Programs up to 16M can be accommodated without overlays
▪ Partitions smaller than 8M allow smaller programs to be accommodated with less internal fragmentation
◼ The number of partitions specified at system generation time limits the number of active processes in the system
◼ Small jobs will not utilize partition space efficiently

Figure 7.3 Memory Assignment for Fixed Partitioning: (a) one process queue per partition; (b) single queue of new processes
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Dynamic Partitioning
◼ Partitions are of variable length and number
◼ Process is allocated exactly as much memory as it requires
◼ This technique was used by IBM's mainframe operating system, OS/MVT

External Fragmentation
• Memory becomes more and more fragmented
• Memory utilization declines

Figure 7.4 The Effect of Dynamic Partitioning: snapshots (a)-(h) of a memory with an 8M operating system and 56M available, as processes of 20M, 14M, 18M, and 8M are loaded, swapped out, and swapped back in, leaving scattered holes (6M, 4M) throughout main memory
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Compaction
• Technique for overcoming external fragmentation
• OS shifts processes so that they are contiguous
• Free memory is together in one block
• Time consuming and wastes CPU time
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Placement Algorithms
◼ Best-fit: chooses the block that is closest in size to the request
◼ First-fit: begins to scan memory from the beginning and chooses the first available block that is large enough
◼ Next-fit: begins to scan memory from the location of the last placement and chooses the next available block that is large enough
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
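A minimal sketch (not from the slides) of the three placement policies over a list of free blocks; the array name, block sizes, and request size are hypothetical values chosen for illustration.

/* Placement policies over a list of free blocks (sizes in Mbytes). */
#include <stdio.h>

#define NBLOCKS 5

int first_fit(const int holes[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (holes[i] >= request) return i;        /* first block large enough */
    return -1;
}

int best_fit(const int holes[], int n, int request)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (best < 0 || holes[i] < holes[best]))
            best = i;                             /* closest in size to the request */
    return best;
}

int next_fit(const int holes[], int n, int request, int last)
{
    for (int k = 1; k <= n; k++) {
        int i = (last + k) % n;                   /* scan from the last placement */
        if (holes[i] >= request) return i;
    }
    return -1;
}

int main(void)
{
    int holes[NBLOCKS] = {8, 12, 22, 18, 6};      /* hypothetical free blocks */
    int request = 16, last = 1;                   /* hypothetical request and last placement */
    printf("first-fit -> block %d\n", first_fit(holes, NBLOCKS, request));
    printf("best-fit  -> block %d\n", best_fit(holes, NBLOCKS, request));
    printf("next-fit  -> block %d\n", next_fit(holes, NBLOCKS, request, last));
    return 0;
}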
When the fixed partitioning scheme with one process queue per partition is used, we can expect a process will always be assigned to the same partition:
◼ Whichever partition is selected when a new process is loaded will always be used to swap that process back into memory after it has been swapped out
◼ In that case, a simple relocating loader can be used: when the process is first loaded, all relative memory references in the code are replaced by absolute main memory addresses, determined by the base address of the loaded process
In the case of equal-size partitions, and in the case of a single process queue for unequal-size partitions, a process may occupy different partitions during the course of its life:
◼ When a process image is first created, it is loaded into some partition in main memory; later, the process may be swapped out
◼ When it is subsequently swapped back in, it may be assigned to a different partition than the last time
◼ The same is true for dynamic partitioning
When compaction is used, processes are shifted while they are in main memory:
◼ Thus, the locations referenced by a process are not fixed
◼ They will change each time a process is swapped in or shifted
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Figure 7.5 Example Memory Configuration before and after Allocation of a 16-Mbyte Block: allocated and free blocks are shown (a) before and (b) after placement; the last allocated block is 14M, and the "after" view shows the possible new allocations chosen by first-fit, best-fit, and next-fit
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Addresses
◼ Logical
• Reference to a memory location independent of the current assignment of data to memory
• Need to translate into a physical address before memory access can be achieved
◼ Relative
• A particular example of a logical address, in which the address is expressed as a location relative to some known point
◼ Physical or Absolute
• Actual location in main memory
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Relocation
◼ For programs that employ relative addresses:
◼ When the program is loaded, the base register is loaded with the starting address in main memory of the program
◼ A "bounds" register stores the ending location of the program
◼ When a relative address is encountered:
▪ The value in the base register is added to the relative address to produce an absolute address
▪ The resulting address is compared to the value in the bounds register; a violation causes an interrupt to the operating system

Figure 7.8 Hardware Support for Relocation: the relative address from the process image (program, data, stack) is added to the base register by an adder to form the absolute address; a comparator checks it against the bounds register and raises an interrupt to the operating system on violation
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
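A minimal sketch (not from the slides) of the base/bounds check described above; the register values and addresses are hypothetical.

/* Base/bounds relocation: add the base register to a relative address and
   check the result against the bounds register. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t base;    /* starting physical address of the loaded program */
    uint32_t bounds;  /* ending physical address of the loaded program   */
} relocation_regs;

/* Translate a relative address; returns 0 and reports a (simulated)
   interrupt if the absolute address falls outside the process image. */
int translate(relocation_regs r, uint32_t relative, uint32_t *absolute)
{
    uint32_t a = r.base + relative;      /* adder */
    if (a > r.bounds) {                  /* comparator */
        printf("interrupt to operating system: address 0x%X out of bounds\n", a);
        return 0;
    }
    *absolute = a;
    return 1;
}

int main(void)
{
    relocation_regs r = { .base = 0x4000, .bounds = 0x4FFF };  /* hypothetical values */
    uint32_t abs_addr;
    if (translate(r, 0x0123, &abs_addr))
        printf("absolute address: 0x%X\n", abs_addr);
    translate(r, 0x2000, &abs_addr);     /* triggers the bounds check */
    return 0;
}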
Paging
◼ Partition memory into equal fixed-size chunks (frames) that are relatively small
◼ Process is also divided into small fixed-size chunks (pages) of the same size
◼ Frames: available chunks of memory
◼ Pages: chunks of a process

Assignment of Process Pages to Free Frames
Figure 7.9 Assignment of Process Pages to Free Frames: (a) fifteen available frames; (b) load process A (pages A.0-A.3); (c) load process B (B.0-B.2); (d) load process C (C.0-C.3); (e) swap out B; (f) load process D (D.0-D.4, reusing the frames freed by B)
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Page Table
◼ Maintained by the operating system for each process
◼ Contains the frame location for each page in the process
◼ Processor must know how to access the page table for the current process
◼ Used by the processor to produce a physical address

Figure 7.10 Data Structures for the Example of Figure 7.9 at Time Epoch (f):
Process A page table: 0→0, 1→1, 2→2, 3→3
Process B page table: 0→—, 1→—, 2→— (not resident)
Process C page table: 0→7, 1→8, 2→9, 3→10
Process D page table: 0→4, 1→5, 2→6, 3→11, 4→12
Free frame list: 13, 14
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Figure 7.11 Logical Addresses for a 2700-byte user process containing a reference at relative address 1502: (a) partitioning — the process occupies one partition, with internal fragmentation at the end; (b) paging (page size = 1K) — logical address = page# 1, offset 478; (c) segmentation (segment 0 = 750 bytes, segment 1 = 1950 bytes) — logical address = segment# 1, offset 752

Logical-to-Physical Address Translation – Paging
16-bit logical address, split into a 6-bit page # and a 10-bit offset:
0000010111011110  (page# 1, offset 478)
Process page table: 0 → 000101, 1 → 000110, 2 → 011001
16-bit physical address: 0001100111011110  (frame 000110 concatenated with the 10-bit offset)

Logical-to-Physical Address Translation – Segmentation
16-bit logical address, split into a 4-bit segment # and a 12-bit offset:
0001001011110000  (segment# 1, offset 752)
Process segment table (length, base): 0 → 001011101110, 0000010000000000; 1 → 011110011110, 0010000000100000
16-bit physical address: 0010001100010000  (base of segment 1 plus the offset)

Figure 7.12 Examples of Logical-to-Physical Address Translation: (a) paging; (b) segmentation
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
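A minimal sketch (not from the slides) of the paging translation in Figure 7.12(a), using the same 6-bit page # / 10-bit offset split and the same page-table values; the mask and shift constants simply follow from that split.

/* Paging: translate a 16-bit logical address (6-bit page #, 10-bit offset). */
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 10
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

int main(void)
{
    /* process page table: page -> frame (values from the figure) */
    uint16_t page_table[3] = { 0x05, 0x06, 0x19 };   /* 000101, 000110, 011001 */

    uint16_t logical  = 0x05DE;                      /* 0000010111011110 = 1502 */
    uint16_t page     = logical >> OFFSET_BITS;      /* 6-bit page number  = 1   */
    uint16_t offset   = logical & OFFSET_MASK;       /* 10-bit offset      = 478 */
    uint16_t frame    = page_table[page];
    uint16_t physical = (uint16_t)((frame << OFFSET_BITS) | offset);

    printf("page %u, offset %u -> physical 0x%04X\n", page, offset, physical);
    return 0;
}

Running this prints physical address 0x19DE, which is the bit pattern 0001100111011110 shown in the figure.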
Segmentation
◼ A program can be subdivided into segments
▪ May vary in length
▪ There is a maximum segment length
◼ Addressing consists of two parts:
▪ Segment number
▪ An offset
◼ Similar to dynamic partitioning
◼ Eliminates internal fragmentation

Segmentation (continued)
◼ Usually visible to the programmer
◼ Provided as a convenience for organizing programs and data
◼ Typically the programmer will assign programs and data to different segments
◼ For purposes of modular programming, the program or data may be further broken down into multiple segments
◼ The principal inconvenience of this service is that the programmer must be aware of the maximum segment size limitation
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Address Translation
◼ Another consequence of unequal-size segments is that there is no simple relationship between logical addresses and physical addresses
◼ The following steps are needed for address translation:
• Extract the segment number as the leftmost n bits of the logical address
• Use the segment number as an index into the process segment table to find the starting physical address of the segment
• Compare the offset, expressed in the rightmost m bits, to the length of the segment. If the offset is greater than or equal to the length, the address is invalid
• The desired physical address is the sum of the starting physical address of the segment plus the offset

Figure 7.12 Examples of Logical-to-Physical Address Translation: (a) paging; (b) segmentation
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
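A minimal sketch (not from the slides) of these four steps, using the 4-bit segment # / 12-bit offset split and the segment-table values from Figure 7.12(b).

/* Segmentation: translate a 16-bit logical address (4-bit segment #,
   12-bit offset), checking the offset against the segment length. */
#include <stdio.h>
#include <stdint.h>

#define SEG_OFFSET_BITS 12
#define SEG_OFFSET_MASK ((1u << SEG_OFFSET_BITS) - 1)

typedef struct { uint16_t length; uint16_t base; } seg_entry;

int main(void)
{
    /* process segment table (length, base) from the figure */
    seg_entry seg_table[2] = {
        { 0x2EE, 0x0400 },   /* segment 0: length 001011101110, base 0000010000000000 */
        { 0x79E, 0x2020 }    /* segment 1: length 011110011110, base 0010000000100000 */
    };

    uint16_t logical = 0x12F0;                         /* 0001001011110000 */
    uint16_t seg     = logical >> SEG_OFFSET_BITS;     /* leftmost 4 bits   = 1   */
    uint16_t offset  = logical & SEG_OFFSET_MASK;      /* rightmost 12 bits = 752 */

    if (offset >= seg_table[seg].length) {             /* invalid-address check */
        printf("invalid address\n");
        return 1;
    }
    uint16_t physical = (uint16_t)(seg_table[seg].base + offset);
    printf("segment %u, offset %u -> physical 0x%04X\n", seg, offset, physical);
    return 0;
}

The result, 0x2310, is the bit pattern 0010001100010000 shown in the figure.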
Summary
◼ Memory management requirements
▪ Relocation
▪ Protection
▪ Sharing
▪ Logical organization
▪ Physical organization
◼ Memory partitioning
▪ Fixed partitioning
▪ Dynamic partitioning
▪ Relocation
◼ Paging
◼ Segmentation

Review – End of Chapter
◼ Key terms
◼ Review Questions
◼ Problems: 7.2, 7.5a, 7.6, 7.12, 7.14
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Operating Systems: Internals and Design Principles
Ninth Edition
William Stallings

Chapter 8: Virtual Memory

Hardware and Control Structures
◼ Two characteristics fundamental to memory management:
1) All memory references are logical addresses that are dynamically translated into physical addresses at run time
2) A process may be broken up into a number of pieces that don't need to be contiguously located in main memory during execution
◼ If these two characteristics are present, it is not necessary that all of the pages or segments of a process be in main memory during execution

Table 8.1 Virtual Memory Terminology
Virtual memory: A storage allocation scheme in which secondary memory can be addressed as though it were part of main memory. The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program-generated addresses are translated automatically to the corresponding machine addresses. The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, and not by the actual number of main storage locations.
Virtual address: The address assigned to a location in virtual memory to allow that location to be accessed as though it were part of main memory.
Virtual address space: The virtual storage assigned to a process.
Address space: The range of memory addresses available to a process.
Real address: The address of a storage location in main memory.

Execution of a Process
◼ Operating system brings into main memory a few pieces of the program
◼ Resident set: portion of the process that is in main memory
◼ An interrupt is generated when an address is needed that is not in main memory
◼ Operating system places the process in a blocking state
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Execution of a Process (continued)
◼ Piece of process that contains the logical address is brought into main memory
▪ Operating system issues a disk I/O Read request
▪ Another process is dispatched to run while the disk I/O takes place
▪ An interrupt is issued when the disk I/O is complete, which causes the operating system to place the affected process in the Ready state

Implications
◼ More processes may be maintained in main memory
▪ Because only some of the pieces of any particular process are loaded, there is room for more processes
▪ This leads to more efficient utilization of the processor because it is more likely that at least one of the more numerous processes will be in a Ready state at any particular time
◼ A process may be larger than all of main memory
▪ If the program being written is too large, the programmer must devise ways to structure the program into pieces that can be loaded separately in some sort of overlay strategy
▪ With virtual memory based on paging or segmentation, that job is left to the OS and the hardware
▪ The OS automatically loads pieces of a process into main memory as required
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Real and Virtual Memory
◼ Real memory: main memory, the actual RAM
◼ Virtual memory: memory on disk; allows for effective multiprogramming and relieves the user of the tight constraints of main memory
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Table 8.2 Characteristics of Paging and Segmentation

Simple Paging
▪ Main memory partitioned into small fixed-size chunks called frames
▪ Program broken into pages by the compiler or memory management system
▪ Internal fragmentation within frames; no external fragmentation
▪ Operating system must maintain a page table for each process showing which frame each page occupies
▪ Operating system must maintain a free frame list
▪ Processor uses page number, offset to calculate absolute address
▪ All the pages of a process must be in main memory for the process to run, unless overlays are used

Virtual Memory Paging
▪ Not all pages of a process need be in main memory frames for the process to run; pages may be read in as needed
▪ Reading a page into main memory may require writing a page out to disk

Simple Segmentation
▪ Main memory not partitioned
▪ Program segments specified by the programmer to the compiler (i.e., the decision is made by the programmer)
▪ No internal fragmentation; external fragmentation
▪ Operating system must maintain a segment table for each process showing the load address and length of each segment
▪ Operating system must maintain a list of free holes in main memory
▪ Processor uses segment number, offset to calculate absolute address
▪ All the segments of a process must be in main memory for the process to run, unless overlays are used

Virtual Memory Segmentation
▪ Not all segments of a process need be in main memory for the process to run; segments may be read in as needed
▪ Reading a segment into main memory may require writing one or more segments out to disk
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Thrashing
◼ A state in which the system spends most of its time swapping process pieces rather than executing instructions
◼ To avoid this, the operating system tries to guess, based on recent history, which pieces are least likely to be used in the near future

Principle of Locality
◼ Program and data references within a process tend to cluster
◼ Only a few pieces of a process will be needed over a short period of time
◼ Therefore it is possible to make intelligent guesses about which pieces will be needed in the future
◼ Avoids thrashing
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
For virtual memory to be practical and effective:
• Hardware must support paging and segmentation
• The operating system must include software for managing the movement of pages and/or segments between secondary memory and main memory

Paging
◼ The term virtual memory is usually associated with systems that employ paging
◼ Each process has its own page table
◼ Each page table entry (PTE) contains the frame number of the corresponding page in main memory
◼ A page table is also needed for a virtual memory scheme based on paging
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Memory Management Formats
Figure 8.2 Address Translation in a Paging System: the virtual address (page #, offset) is split; a register holds the page table pointer, the page # indexes the process page table, the table entry supplies the frame #, and the frame # plus the offset forms the physical address in main memory
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Figure 8.3 A Two-Level Hierarchical Page Table: a 4-Kbyte root page table (1024 PTEs) points to up to 1024 user page tables; the 4-Mbyte user page table, itself kept in 4-Kbyte pages of 1024 PTEs each, maps a 4-Gbyte user address space

Figure 8.4 Address Translation in a Two-Level Paging System: the virtual address is split into a 10-bit root-table index, a 10-bit page-table index, and a 12-bit offset; the root page table pointer plus the first index selects a user page table, the second index selects the frame #, and the frame # plus the offset forms the physical address
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
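A minimal sketch (not from the slides) of the two-level walk in Figure 8.4, using the 10/10/12-bit split; the table contents and the mapped frame number are hypothetical.

/* Two-level page table walk: 10-bit root index, 10-bit page-table index,
   12-bit offset. Table contents are made-up values. */
#include <stdio.h>
#include <stdint.h>

#define ENTRIES 1024            /* 10 bits per level */
#define OFFSET_BITS 12

static uint32_t user_table0[ENTRIES];    /* one user page table (frame numbers) */
static uint32_t *root_table[ENTRIES];    /* root page table: pointers to user page tables */

uint32_t translate(uint32_t vaddr)
{
    uint32_t root_idx = vaddr >> 22;                      /* top 10 bits      */
    uint32_t pt_idx   = (vaddr >> OFFSET_BITS) & 0x3FF;   /* next 10 bits     */
    uint32_t offset   = vaddr & 0xFFF;                    /* bottom 12 bits   */
    uint32_t frame    = root_table[root_idx][pt_idx];     /* frame number     */
    return (frame << OFFSET_BITS) | offset;               /* physical address */
}

int main(void)
{
    root_table[0] = user_table0;     /* root entry 0 -> first user page table */
    user_table0[3] = 42;             /* virtual page 3 -> frame 42 (hypothetical) */

    uint32_t vaddr = (3u << OFFSET_BITS) | 0x1A4;   /* page 3, offset 0x1A4 */
    printf("virtual 0x%08X -> physical 0x%08X\n", vaddr, translate(vaddr));
    return 0;
}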
◼ The smaller the page size, the less the internal fragmentation
◼ However, more pages are required per process
◼ More pages per process means larger page tables
◼ For large programs in a heavily multiprogrammed environment, some portion of the page tables of active processes must be in virtual memory instead of main memory
◼ The physical characteristics of most secondary-memory devices favor a larger page size for more efficient block transfer of data

Figure 8.10 Typical Paging Behavior of a Program: (a) page fault rate versus page size (P = size of entire process, W = working set size); (b) page fault rate versus number of page frames allocated (N = total number of pages in process)
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Page Size
◼ The design issue of page size is related to the size of physical main memory and program size
◼ Main memory is getting larger and the address space used by applications is also growing
◼ Most obvious on personal computers where applications are becoming increasingly complex

Segmentation
◼ Segmentation allows the programmer to view memory as consisting of multiple address spaces or segments
◼ Advantages:
▪ Simplifies handling of growing data structures
▪ Allows programs to be altered and recompiled independently
▪ Lends itself to sharing data among processes
▪ Lends itself to protection
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Segment Organization
◼ Each segment table entry contains the starting address of the corresponding segment in main memory and the length of the segment
◼ A bit is needed to determine if the segment is already in main memory
◼ Another bit is needed to determine if the segment has been modified since it was loaded in main memory

Figure 8.11 Address Translation in a Segmentation System: the virtual address (segment #, offset d) indexes the segment table via the segment table pointer register; the entry's base address plus d gives the physical address (base + d), checked against the segment length
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Combined Paging and Segmentation
◼ In a combined paging/segmentation system, a user's address space is broken up into a number of segments; each segment is broken up into a number of fixed-size pages which are equal in length to a main memory frame
◼ Segmentation is visible to the programmer
◼ Paging is transparent to the programmer

Figure 8.12 Address Translation in a Segmentation/Paging System: the virtual address (segment #, page #, offset) is translated by first indexing the segment table, whose entry points to the page table for that segment; the page # indexes that page table to obtain the frame #, which is combined with the offset to form the physical address
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Operating System Software
◼ The design of the memory management portion of an operating system depends on three fundamental areas of choice:
• Whether or not to use virtual memory techniques
• The use of paging or segmentation or both
• The algorithms employed for various aspects of memory management
◼ Key issue: performance
▪ Minimize page faults

Table 8.4 Operating System Policies for Virtual Memory
Fetch Policy: Demand paging, Prepaging
Placement Policy
Replacement Policy: Basic algorithms (Optimal, Least recently used (LRU), First-in-first-out (FIFO), Clock); Page buffering
Resident Set Management: Resident set size (Fixed, Variable); Replacement scope (Global, Local)
Cleaning Policy: Demand, Precleaning
Load Control: Degree of multiprogramming
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Fetch Policy
◼ Determines when a page should be brought into memory
◼ Two main types: demand paging and prepaging
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Demand Paging
◼ Only brings pages into main memory when a reference is made to a location on the page
◼ Many page faults when a process is first started
◼ Principle of locality suggests that as more and more pages are brought in, most future references will be to pages that have recently been brought in, and page faults should drop to a very low level

Prepaging
◼ Pages other than the one demanded by a page fault are brought in
◼ Exploits the characteristics of most secondary memory devices
◼ If pages of a process are stored contiguously in secondary memory, it is more efficient to bring in a number of pages at one time
◼ Ineffective if extra pages are not referenced
◼ Should not be confused with "swapping"
Placement Policy
◼ Determines where in real memory a process piece is to reside
◼ Important design issue in a segmentation system
◼ For paging or combined paging with segmentation, placement is irrelevant because the address translation hardware performs its functions with equal efficiency
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Replacement Policy
◼ Deals with the selection of a page in main memory to be replaced when a new page must be brought in
◼ Objective is that the page that is removed be the page least likely to be referenced in the near future
◼ The more elaborate the replacement policy, the greater the hardware and software overhead to implement it
◼ Frame locking
▪ When a frame is locked, the page currently stored in that frame may not be replaced
▪ The kernel of the OS as well as key control structures are held in locked frames
▪ I/O buffers and time-critical areas may be locked into main memory frames
▪ Locking is achieved by associating a lock bit with each frame
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Basic Algorithms
◼ Algorithms used for the selection of a page to replace:
• Optimal
• Least recently used (LRU)
• First-in-first-out (FIFO)
• Clock
◼ Optimal policy
▪ Selects the page for which the time to the next reference is the longest
▪ Produces three page faults after the frame allocation has been filled (in the textbook example)
▪ Impossible to implement – used to judge other algorithms
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Least Recently Used (LRU)
◼ Replaces the page that has not been referenced for the longest time
◼ By the principle of locality, this should be the page least likely to be referenced in the near future
◼ Difficult to implement
▪ One approach is to tag each page with the time of last reference
▪ This requires a great deal of overhead
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
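A minimal sketch (not from the slides) of the time-tagging approach just mentioned: each resident page carries the time of its last reference, and the victim is the page with the oldest tag. The frame count and reference string are hypothetical.

/* LRU by tagging each resident page with the time of its last reference. */
#include <stdio.h>

#define NFRAMES 3

int pages[NFRAMES];      /* page held in each frame, -1 = empty   */
long last_ref[NFRAMES];  /* time of last reference for each frame */

/* Reference a page at the given time; returns 1 on a page fault. */
int reference(int page, long now)
{
    int victim = 0;
    for (int i = 0; i < NFRAMES; i++) {
        if (pages[i] == page) { last_ref[i] = now; return 0; }   /* hit */
        if (last_ref[i] < last_ref[victim]) victim = i;          /* oldest tag so far */
    }
    pages[victim] = page;        /* replace the least recently used page */
    last_ref[victim] = now;
    return 1;
}

int main(void)
{
    int refs[] = {2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2};   /* hypothetical reference string */
    int faults = 0;
    for (int i = 0; i < NFRAMES; i++) { pages[i] = -1; last_ref[i] = -1; }
    for (int t = 0; t < (int)(sizeof refs / sizeof refs[0]); t++)
        faults += reference(refs[t], t);
    printf("page faults: %d\n", faults);
    return 0;
}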
First-in-First-out (FIFO)
◼ Treats page frames allocated to a process as a circular buffer
◼ Pages are removed in round-robin style
▪ Simple replacement policy to implement
◼ Page that has been in memory the longest is replaced
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Clock Policy
◼ Requires the association of an additional bit with each frame, referred to as the use bit
◼ When a page is first loaded in memory or referenced, the use bit is set to 1
◼ The set of frames is considered to be a circular buffer (the page frames are visualized as laid out in a circle)
◼ Any frame with a use bit of 1 is passed over by the algorithm
◼ To replace a page, the OS scans for a page with the use bit set to 0
◼ If a page with use bit 1 is encountered, it resets the bit to 0 and continues on
◼ When a page is replaced, the pointer is set to the next frame
Figure 8.16 Comparison of Fixed-Allocation, Local Page Replacement Algorithms: page faults per 1000 references versus number of frames allocated (6 to 14); FIFO incurs the most faults, followed by CLOCK and LRU, with OPT the fewest
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
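A minimal sketch (not from the slides) of the clock policy just described: one use bit per frame, a circulating pointer, and replacement of the first frame found with use bit 0. The frame count and reference string are hypothetical (the same string as the LRU sketch, for comparison).

/* Clock page replacement with a use bit per frame. */
#include <stdio.h>

#define NFRAMES 3

int frames[NFRAMES];   /* page number held in each frame, -1 = empty */
int use[NFRAMES];      /* use bit for each frame */
int hand = 0;          /* clock pointer */

/* Reference a page; returns 1 on a page fault. */
int reference(int page)
{
    for (int i = 0; i < NFRAMES; i++) {
        if (frames[i] == page) {   /* hit: set the use bit */
            use[i] = 1;
            return 0;
        }
    }
    /* miss: advance the hand, clearing use bits, until a frame with use == 0 */
    while (use[hand]) {
        use[hand] = 0;
        hand = (hand + 1) % NFRAMES;
    }
    frames[hand] = page;           /* replace the victim */
    use[hand] = 1;
    hand = (hand + 1) % NFRAMES;   /* pointer moves past the loaded frame */
    return 1;
}

int main(void)
{
    int refs[] = {2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2};  /* hypothetical reference string */
    int faults = 0;
    for (int i = 0; i < NFRAMES; i++) { frames[i] = -1; use[i] = 0; }
    for (int i = 0; i < (int)(sizeof refs / sizeof refs[0]); i++)
        faults += reference(refs[i]);
    printf("page faults: %d\n", faults);
    return 0;
}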
Resident Set Size
◼ The OS must decide how many pages to bring into main memory
◼ The smaller the amount of memory allocated to each process, the more processes can reside in memory
◼ A small number of pages loaded increases page faults
◼ Beyond a certain size, further allocations of pages will not affect the page fault rate
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Fixed-allocation
◼ Gives a process a fixed number of frames in main memory within which to execute
◼ When a page fault occurs, one of the pages of that process must be replaced

Variable-allocation
◼ Allows the number of page frames allocated to a process to be varied over the lifetime of the process
◼ A process with high levels of page faults will be given additional page frames
◼ Relates to the concept of replacement scope
◼ Requires software overhead in the OS
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Replacement Scope
◼ The scope of a replacement strategy can be categorized as global or local
◼ Both types are activated by a page fault when there are no free page frames
◼ Local: chooses only among the resident pages of the process that generated the page fault
◼ Global: considers all unlocked pages in main memory

Table 8.5 Resident Set Management
Fixed allocation, local replacement: the number of frames allocated to a process is fixed; the page to be replaced is chosen from among the frames allocated to that process
Fixed allocation, global replacement: not possible
Variable allocation, local replacement: the number of frames allocated to a process may be changed from time to time to maintain the working set of the process; the page to be replaced is chosen from among the frames allocated to that process
Variable allocation, global replacement: the page to be replaced is chosen from all available frames in main memory; this causes the size of the resident set of processes to vary
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Fixed Allocation, Local Scope
◼ Necessary to decide ahead of time the amount of allocation to give a process
◼ If the allocation is too small, there will be a high page fault rate
◼ If the allocation is too large, there will be too few programs in main memory
• Increased processor idle time
• Increased time spent in swapping

Variable Allocation, Global Scope
◼ Easiest to implement
◼ Adopted in a number of operating systems
◼ OS maintains a list of free frames
◼ A free frame is added to the resident set of a process when a page fault occurs
◼ If no frames are available, the OS must choose a page currently in memory
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Variable Allocation
Local Scope
◼
When a new process is loaded into main memory, allocate to it a
certain number of page frames as its resident set
◼
When a page fault occurs, select the page to replace from among
the resident set of the process that suffers the fault
◼
Reevaluate the allocation provided to the process and increase or
decrease it to improve overall performance
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
◼
Decision to increase or decrease a resident set size is based
on the assessment of the likely future demands of active
processes
Key elements:
• Criteria used to determine
resident set size
• The timing of changes
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Working Set Size
Figure 8.18 Typical Graph of Working Set Size [MAEK87]: over time the working set size alternates between transient periods and stable periods

Cleaning Policy
◼ Concerned with determining when a modified page should be written out to secondary memory
◼ Demand cleaning
▪ A page is written out to secondary memory only when it has been selected for replacement
▪ Problem: a page fault may have to wait for two page transfers before it can be unblocked
◼ Precleaning
▪ Allows the writing of pages in batches
▪ Problem: pages may be modified again before they are replaced

Load Control
◼ Determines the number of processes that will be resident in main memory (the multiprogramming level)
◼ Critical in effective memory management
◼ With too few processes, there will be many occasions when all processes are blocked and much time will be spent in swapping
◼ Too many processes will lead to thrashing
Figure 8.19 Multiprogramming Effects: processor utilization versus multiprogramming level
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Thrashing
◼ As the multiprogramming level increases from a small value, processor utilization rises, since there is less chance that all resident processes are blocked
◼ A point will be reached at which the average resident set is inadequate: the number of page faults rises dramatically, and processor utilization collapses
◼ If the degree of multiprogramming is to be reduced, one or more of the currently resident processes must be swapped out
◼ Six possibilities exist:
• Lowest-priority process
• Faulting process
• Last process activated
• Process with the smallest resident set
• Largest process
• Process with the largest remaining execution window
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
UNIX
◼ Intended to be machine independent, so its memory management schemes will vary
◼ Early UNIX: variable partitioning with no virtual memory scheme
◼ Current implementations of UNIX and Solaris make use of paged virtual memory
◼ SVR4 and Solaris use two separate schemes:
• Paging system: provides a virtual memory capability that allocates page frames in main memory to processes and allocates page frames to disk block buffers
• Kernel memory allocator: allocates memory for the kernel

Paging System
◼ The page frame data table is used for page replacement
◼ Pointers are used to create lists within the table
◼ All available frames are linked together in a list of free frames available for bringing in pages
◼ When the number of available frames drops below a certain threshold, the kernel will steal a number of frames to compensate
◼ Uses a variant of the clock policy for page replacement

Kernel Memory Allocator
◼ The kernel generates and destroys small tables and buffers frequently during the course of execution, each of which requires dynamic memory allocation
◼ Most of these blocks are significantly smaller than typical pages (therefore paging would be inefficient)
◼ Allocations and free operations must be made as fast as possible
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Windows Memory Management
◼ Virtual memory manager controls how memory is allocated and how paging is performed
◼ Designed to operate over a variety of platforms
◼ Uses page sizes ranging from 4 Kbytes to 64 Kbytes

Windows Virtual Address Map
◼ On 32-bit platforms each user process sees a separate 32-bit address space, allowing 4 GB of virtual memory per process
▪ By default half is reserved for the OS
◼ Large memory-intensive applications run more effectively using 64-bit Windows
◼ Most modern PCs use the AMD64 processor architecture, which is capable of running as either a 32-bit or 64-bit system

Windows Paging
◼ On creation, a process can make use of the entire user space of almost 2 GB
◼ This space is divided into fixed-size pages managed in contiguous regions allocated on 64-Kbyte boundaries
◼ Regions may be in one of three states: available, reserved, or committed

Windows Resident Set Management
◼ Windows uses variable allocation, local scope
◼ When activated, a process is assigned a data structure to manage its working set
◼ Working sets of active processes are adjusted depending on the availability of main memory
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Summary
◼ Hardware and control structures
▪ Locality and virtual memory
▪ Paging
▪ Segmentation
▪ Combined paging and segmentation
▪ Protection and sharing
◼ OS software
▪ Fetch policy
▪ Placement policy
▪ Replacement policy
▪ Resident set management
▪ Cleaning policy
▪ Load control
◼ UNIX/Windows memory management

Review – End of Chapter
◼ Key terms
◼ Review Questions
◼ Problems: 8.4-8.7
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Operating Systems: Internals and Design Principles
Ninth Edition
William Stallings

Chapter 11: I/O Management and Disk Scheduling
External devices that engage in I/O with computer
systems can be grouped into three categories:
Human readable
• Suitable for communicating with the computer user
• Printers, terminals, video display, keyboard, mouse
Machine readable
• Suitable for communicating with electronic equipment
• Disk drives, USB keys, sensors, controllers
Communication
• Suitable for communicating with remote devices
• Modems, digital line drivers
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
◼ Devices differ in a number of areas:
Data Rate
• There may be differences of several orders of magnitude between the data transfer rates
Application
• The use to which a device is put has an influence on the software
Complexity of Control
• The effect on the operating system is filtered by the complexity of the I/O module that controls the device
Unit of Transfer
• Data may be transferred as a stream of bytes or characters or in larger blocks
Data Representation
• Different data encoding schemes are used by different devices
Error Conditions
• The nature of errors, the way in which they are reported, their consequences, and the available range of responses differ from one device to another

Figure 11.1 Typical I/O Device Data Rates: data rates (bps) span roughly 10^1 to 10^9, from keyboard and mouse at the low end, through modem, floppy disk, laser printer, scanner, optical disk, Ethernet, and hard disk, to graphics display and Gigabit Ethernet at the high end

◼ Three techniques for performing I/O are:
◼ Programmed I/O
▪ The processor issues an I/O command on behalf of a process to an I/O module; that process then busy waits for the operation to be completed before proceeding
◼ Interrupt-driven I/O
▪ The processor issues an I/O command on behalf of a process
▪ If non-blocking – the processor continues to execute instructions from the process that issued the I/O command
▪ If blocking – the next instruction the processor executes is from the OS, which will put the current process in a blocked state and schedule another process
◼ Direct memory access (DMA)
▪ A DMA module controls the exchange of data between main memory and an I/O module

I/O techniques:
• No interrupts, I/O-to-memory transfer through processor: Programmed I/O
• Use of interrupts, I/O-to-memory transfer through processor: Interrupt-driven I/O
• Use of interrupts, direct I/O-to-memory transfer: Direct memory access (DMA)
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Evolution of the I/O Function
1 • Processor directly controls a peripheral device
2 • A controller or I/O module is added
3 • Same configuration as step 2, but now interrupts are employed
4 • The I/O module is given direct control of memory via DMA
5 • The I/O module is enhanced to become a separate processor, with a specialized instruction set tailored for I/O
6 • The I/O module has a local memory of its own and is, in fact, a computer in its own right

Direct Memory Access
◼ When a processor wishes to read or write a block of data, it issues a command to the DMA module by sending:
▪ Whether a read or write is requested
▪ Address of the I/O device
▪ Starting location in memory to read from or write to
▪ The number of words to be read or written
◼ DMA transfers the data without going through the processor
◼ When the transfer is complete, the DMA module sends an interrupt to the processor
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Figure 11.2 Typical DMA Block Diagram: data count, data register, address register, and control logic, connected to the data lines and address lines; control signals include request to DMA, acknowledge from DMA, interrupt, read, and write

Design Objectives
◼ Efficiency
▪ Major effort in I/O design
▪ Important because I/O operations often form a bottleneck
▪ Most I/O devices are extremely slow compared with main memory and the processor
▪ The area that has received the most attention is disk I/O
◼ Generality
▪ Desirable to handle all devices in a uniform manner
▪ Applies to the way processes view I/O devices and the way the operating system manages I/O devices and operations
▪ Diversity of devices makes it difficult to achieve true generality
▪ Use a hierarchical, modular approach to the design of the I/O function
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Figure 11.4 A Model of I/O Organization: (a) local peripheral device — user processes → logical I/O → device I/O → scheduling & control → hardware; (b) communications port — user processes → communication architecture → device I/O → scheduling & control → hardware; (c) file system — user processes → directory management → file system → physical organization → device I/O → scheduling & control → hardware

I/O Buffering
◼ To avoid overheads and inefficiencies, it is sometimes convenient to perform input transfers in advance of requests being made, and to perform output transfers some time after the request is made

Block-oriented device
• Stores information in blocks that are usually of fixed size
• Transfers are made one block at a time
• Possible to reference data by its block number
• Disks and USB keys are examples

Stream-oriented device
• Transfers data in and out as a stream of bytes
• No block structure
• Terminals, printers, communications ports, mouse and other pointing devices, and most other devices that are not secondary storage are examples
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
No Buffer
◼ Without a buffer, the OS directly accesses the device when it needs to
◼ Two problems:
▪ The program is hung up waiting for the slow I/O to complete
▪ The part of the process's memory space used for holding the I/O data must remain in memory and cannot be swapped out

Single Buffer
◼ When a user process issues an I/O request, the OS assigns a buffer in the system portion of main memory for the operation
◼ Block-oriented single buffer
▪ Input transfers are made to the system buffer
▪ Reading ahead/anticipated input is done in the expectation that the block will eventually be needed
▪ When the transfer is complete, the process moves the block into user space and immediately requests another block
▪ The user process can be processing one block of data while the next block is being read in
▪ The OS is able to swap the process out because the input operation is taking place in system memory rather than user process memory
▪ Disadvantages: complicates the logic in the operating system, and the OS must keep track of the assignment of system buffers to user processes
▪ The approach generally provides a speedup compared to the lack of system buffering
◼ Stream-oriented single buffer
▪ Can be used in a line-at-a-time fashion or a byte-at-a-time fashion
▪ Line-at-a-time operation is appropriate for scroll-mode terminals (dumb terminals): user input is one line at a time, with a carriage return signaling the end of a line, and output to the terminal is similarly one line at a time
▪ Byte-at-a-time operation is used on forms-mode terminals, where each keystroke is significant, and also for other peripherals such as sensors and controllers

Double Buffer
◼ Assigns two system buffers to the operation
◼ A process now transfers data to or from one buffer while the operating system empties or fills the other buffer
◼ Also known as buffer swapping

Circular Buffer
◼ When more than two buffers are used, the collection of buffers is itself referred to as a circular buffer
◼ Used when the I/O operation must keep up with the process
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
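A minimal sketch (not from the slides) of a circular buffer of the kind just described: the I/O side fills slots while the consuming process drains them, wrapping around a fixed pool of buffers. The buffer count, element type, and function names are hypothetical.

/* Circular buffer: a fixed pool of slots filled by the I/O side and
   emptied by the consuming process, wrapping around at the end. */
#include <stdio.h>

#define NBUF 4

static int slots[NBUF];          /* hypothetical single-block buffers */
static int in = 0, out = 0;      /* fill and drain positions */
static int count = 0;            /* number of filled buffers */

int put(int block)               /* called when an input transfer completes */
{
    if (count == NBUF) return 0; /* all buffers full: producer must wait */
    slots[in] = block;
    in = (in + 1) % NBUF;        /* wrap around the circle */
    count++;
    return 1;
}

int get(int *block)              /* called by the consuming process */
{
    if (count == 0) return 0;    /* nothing buffered yet */
    *block = slots[out];
    out = (out + 1) % NBUF;
    count--;
    return 1;
}

int main(void)
{
    int b;
    for (int i = 1; i <= 5; i++)
        printf("put %d -> %s\n", i, put(i) ? "ok" : "full");
    while (get(&b))
        printf("got %d\n", b);
    return 0;
}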
Disk Performance Parameters
◼ The actual details of disk I/O operation depend on the:
▪ Computer system
▪ Operating system
▪ Nature of the I/O channel and disk controller hardware

Figure 11.6 Timing of a Disk I/O Transfer: wait for device → wait for channel → seek → rotational delay → data transfer (the device is busy from the seek through the data transfer)
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
◼
When the disk drive is operating, the disk is rotating at constant speed
◼
To read or write the head must be positioned at the desired track and
at the beginning of the desired sector on that track
◼
Track selection involves moving the head in a movable-head system or
electronically selecting one head on a fixed-head system
◼
On a movable-head system the time it takes to position the head at the
track is known as seek time
◼
The time it takes for the beginning of the sector to reach the head is
known as rotational delay
◼
The sum of the seek time and the rotational delay equals the access
time
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
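A minimal worked sketch (not from the slides) of the relationship just stated, access time = seek time + rotational delay, taking the average rotational delay as half a revolution. The seek time and rotation speed below are assumed values, not figures from the slides.

/* Access time = seek time + rotational delay (average = half a revolution). */
#include <stdio.h>

int main(void)
{
    double seek_ms = 4.0;        /* assumed average seek time in ms */
    double rpm     = 7200.0;     /* assumed rotation speed */

    double ms_per_rev    = 60000.0 / rpm;       /* one revolution in ms */
    double rotational_ms = ms_per_rev / 2.0;    /* average rotational delay */
    double access_ms     = seek_ms + rotational_ms;

    printf("rotational delay: %.2f ms\n", rotational_ms);
    printf("access time:      %.2f ms\n", access_ms);
    return 0;
}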
First-In, First-Out (FIFO)
◼ Processes requests in sequential order
◼ Fair to all processes
◼ Approximates random scheduling in performance if there are many processes competing for the disk
◼ Requests order: 55, 58, 39, 18, 90, 160, 150, 38, 184

Priority (PRI)
◼ Control of the scheduling is outside the control of disk management software
◼ Goal is not to optimize disk utilization but to meet other objectives
◼ Short batch jobs and interactive jobs are given higher priority
◼ Provides good interactive response time
◼ Longer jobs may have to wait an excessively long time
◼ A poor policy for database systems
Shortest Service Time First (SSTF)
◼ Select the disk I/O request that requires the least movement of the disk arm from its current position
◼ Always choose the minimum seek time
◼ Requests order: 55, 58, 39, 18, 90, 160, 150, 38, 184
◼ Problem: some requests may have to wait for a long time or never be served
◼ Starvation may occur
SCAN
◼ Also known as the elevator algorithm
◼ Arm moves in one direction only; it satisfies all outstanding requests until it reaches the last track or there are no more requests in that direction (the LOOK policy), then the direction is reversed
◼ Favors jobs whose requests are for tracks nearest to both innermost and outermost tracks, and favors the latest-arriving jobs
◼ Requests order: 55, 58, 39, 18, 90, 160, 150, 38, 184

Figure 11.7 Comparison of Disk Scheduling Algorithms (see Table 11.3): track number (0-199) versus time for (a) FIFO, (b) SSTF, (c) SCAN, and (d) C-SCAN, each starting at track 100
C-SCAN (Circular SCAN)
◼ Restricts scanning to one direction only
◼ When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again
◼ Requests order: 55, 58, 39, 18, 90, 160, 150, 38, 184

Table 11.2 Comparison of Disk Scheduling Algorithms (each starting at track 100; SCAN and C-SCAN in the direction of increasing track number)
(a) FIFO: next track accessed 55, 58, 39, 18, 90, 160, 150, 38, 184; number of tracks traversed 45, 3, 19, 21, 72, 70, 10, 112, 146; average seek length 55.3
(b) SSTF: next track accessed 90, 58, 55, 39, 38, 18, 150, 160, 184; number of tracks traversed 10, 32, 3, 16, 1, 20, 132, 10, 24; average seek length 27.5
(c) SCAN: next track accessed 150, 160, 184, 90, 58, 55, 39, 38, 18; number of tracks traversed 50, 10, 24, 94, 32, 3, 16, 1, 20; average seek length 27.8
(d) C-SCAN: next track accessed 150, 160, 184, 18, 38, 39, 55, 58, 90; number of tracks traversed 50, 10, 24, 166, 20, 1, 16, 3, 32; average seek length 35.8
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
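A minimal sketch (not from the slides) that reproduces the seek sequences behind Table 11.2 for FIFO, SSTF, and SCAN, using the request order and starting track from the slides (C-SCAN is omitted for brevity); the printed averages match the table up to rounding.

/* Disk scheduling: FIFO, SSTF, and SCAN on the Table 11.2 request stream,
   starting at track 100. */
#include <stdio.h>
#include <stdlib.h>

#define N 9
static const int requests[N] = {55, 58, 39, 18, 90, 160, 150, 38, 184};

static double average(const int *order, int start)
{
    int total = 0, pos = start;
    for (int i = 0; i < N; i++) {
        total += abs(order[i] - pos);   /* tracks traversed for this request */
        pos = order[i];
    }
    return (double)total / N;
}

int main(void)
{
    int fifo[N], sstf[N], scan[N], used[N] = {0};
    int pos, i, j;

    /* FIFO: service in arrival order */
    for (i = 0; i < N; i++) fifo[i] = requests[i];

    /* SSTF: repeatedly pick the pending request closest to the current head position */
    pos = 100;
    for (i = 0; i < N; i++) {
        int best = -1;
        for (j = 0; j < N; j++)
            if (!used[j] && (best < 0 || abs(requests[j] - pos) < abs(requests[best] - pos)))
                best = j;
        used[best] = 1;
        sstf[i] = requests[best];
        pos = requests[best];
    }

    /* SCAN: sort the requests, serve upward from track 100, then downward */
    int sorted[N];
    for (i = 0; i < N; i++) sorted[i] = requests[i];
    for (i = 0; i < N; i++)                       /* simple insertion sort */
        for (j = i; j > 0 && sorted[j] < sorted[j - 1]; j--) {
            int t = sorted[j]; sorted[j] = sorted[j - 1]; sorted[j - 1] = t;
        }
    int k = 0;
    for (i = 0; i < N; i++) if (sorted[i] >= 100) scan[k++] = sorted[i];     /* increasing direction */
    for (i = N - 1; i >= 0; i--) if (sorted[i] < 100) scan[k++] = sorted[i]; /* then reversed */

    printf("FIFO average seek length: %.2f\n", average(fifo, 100));
    printf("SSTF average seek length: %.2f\n", average(sstf, 100));
    printf("SCAN average seek length: %.2f\n", average(scan, 100));
    return 0;
}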
N-step-SCAN
◼ Segments the disk request queue into subqueues of length N
◼ Subqueues are processed one at a time, using SCAN
◼ While a queue is being processed, new requests must be added to some other queue
◼ If fewer than N requests are available at the end of a scan, all of them are processed with the next scan

FSCAN
◼ Uses two subqueues
◼ When a scan begins, all of the requests are in one of the queues, with the other empty
◼ During the scan, all new requests are put into the other queue
◼ Service of new requests is deferred until all of the old requests have been processed
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Table 11.3 Disk Scheduling Algorithms (name – description – remarks)
Selection according to requestor:
Random – Random scheduling – For analysis and simulation
FIFO – First in, first out – Fairest of them all
PRI – Priority by process – Control outside of disk queue management
LIFO – Last in, first out – Maximize locality and resource utilization
Selection according to requested item:
SSTF – Shortest service time first – High utilization, small queues
SCAN – Back and forth over disk – Better service distribution
C-SCAN – One way with fast return – Lower service variability
N-step-SCAN – SCAN of N records at a time – Service guarantee
FSCAN – N-step-SCAN with N = queue size at beginning of SCAN cycle – Load sensitive

RAID
◼ Redundant Array of Independent Disks
◼ Consists of seven levels, zero through six
◼ Design architectures share three characteristics:
▪ RAID is a set of physical disk drives viewed by the operating system as a single logical drive
▪ Data are distributed across the physical drives of an array in a scheme known as striping
▪ Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
◼ The RAID strategy employs multiple disk drives and distributes data in such a way as to enable simultaneous access to data from multiple drives
  ◼ Improves I/O performance and allows easier incremental increases in capacity
◼ Supports the need for redundancy effectively
  ◼ Makes use of stored parity information that enables the recovery of data lost due to a disk failure

RAID Level 0
◼ Not a true RAID because it does not include redundancy to improve performance or provide data protection
◼ User and system data are distributed across all of the disks
◼ Logical disk is divided into strips, which may be in units of blocks or sectors
◼ If two I/O requests are pending for two different blocks of data, the two requests can be issued in parallel, reducing I/O queuing time (see the sketch below)
Figure 11.8(a) RAID 0 (non-redundant): strips 0–15 are distributed round-robin across four disks (disk 1: strips 0, 4, 8, 12; disk 2: strips 1, 5, 9, 13; disk 3: strips 2, 6, 10, 14; disk 4: strips 3, 7, 11, 15)
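A small sketch of the round-robin strip mapping behind RAID 0, assuming the four-disk layout of Figure 11.8(a); the function name is illustrative.

```python
# Sketch only: round-robin strip mapping as drawn in Figure 11.8(a).
N_DISKS = 4

def locate_strip(strip_number, n_disks=N_DISKS):
    """Map a logical strip number to (disk index, strip slot on that disk)."""
    return strip_number % n_disks, strip_number // n_disks

# strips 0..15 land exactly as drawn in the RAID 0 figure
for s in range(16):
    disk, slot = locate_strip(s)
    print(f"strip {s:2d} -> disk {disk}, slot {slot}")
```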
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
RAID Level 1 (Mirroring)
◼ Redundancy is achieved by duplicating all the data
◼ Can also be implemented without data striping
◼ Read requests can be served by either of the two disks
◼ When a drive fails, the data may still be accessed from the second drive
◼ There is no "write penalty" – no parity bit computation is needed
◼ Principal disadvantage – high cost

RAID Level 2
◼ Makes use of a parallel access technique
◼ Data striping is used
◼ Typically a Hamming code is used
◼ Effective choice in an environment in which many disk errors occur
◼ Not implemented – too costly

Figure 11.8 RAID Levels (page 1 of 2): (a) RAID 0 (non-redundant), (b) RAID 1 (mirrored), (c) RAID 2 (redundancy through Hamming code, with data bits b0–b3 and check bits f0(b), f1(b), f2(b))

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
RAID Level 3
◼ Requires only a single redundant disk, no matter how large the disk array
◼ Employs parallel access, with data distributed in small strips (byte striping)
◼ A simple parity bit is computed for the set of bits on the other disks
◼ Can reconstruct data if a single drive fails
◼ Can achieve very high data transfer rates
  ◼ Any I/O request will involve the parallel transfer of data from all data disks
  ◼ Good for sequential access, bad for random access

Figure 11.8(d) RAID 3 (bit-interleaved parity): data bits b0–b3 plus parity P(b)

RAID Level 4
◼ Uses block-level striping
◼ Makes use of an independent access technique
◼ A bit-by-bit parity strip is calculated across corresponding strips on each data disk, and the parity bits are stored in the corresponding strip on the parity disk
◼ Involves a write penalty when an I/O write request of small size is performed (see the parity sketch below)
◼ Good for random reads, as data blocks are striped
◼ Bad for random writes, as every write has to update the single parity disk

Figure 11.8(e) RAID 4 (block-level parity): blocks 0–15 striped across four data disks, with parity blocks P(0-3), P(4-7), P(8-11), P(12-15) on a dedicated parity disk
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
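The parity mechanism described for RAID 3 and RAID 4 (and reused by RAID 5) is a bit-by-bit XOR across corresponding strips. The sketch below is my own illustration of that idea, including the small-write parity update behind the RAID 4 write penalty.

```python
# Sketch (not from the text): byte-wise XOR parity as used by RAID 3/4/5,
# plus the RAID 4 "small write" parity update mentioned above.
from functools import reduce

def parity(strips):
    """P = strip0 XOR strip1 XOR ... (bit-by-bit across corresponding strips)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

def reconstruct(surviving_strips, p):
    """Rebuild the strip on a single failed disk from the survivors and parity."""
    return parity(surviving_strips + [p])

def small_write_parity(old_parity, old_data, new_data):
    """RAID 4 write penalty: new P = old P XOR old data XOR new data."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

d = [b"\x01\x02", b"\x10\x20", b"\x0f\x0f"]        # three data strips
p = parity(d)
assert reconstruct([d[0], d[2]], p) == d[1]        # disk 1 lost, data recovered
assert small_write_parity(p, d[2], b"\xff\xff") == parity([d[0], d[1], b"\xff\xff"])
```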
RAID Level 5
◼ Similar to RAID 4 in that it uses block striping, but distributes the parity strips across all disks
◼ Typical allocation is a round-robin scheme (see the placement sketch below)
◼ Has the characteristic that the loss of any one disk does not result in data loss

RAID Level 6
◼ Two different parity calculations are carried out and stored in separate blocks on different disks
◼ Provides extremely high data availability
◼ Double parity provides fault tolerance for up to two failed drives
◼ Incurs a substantial write penalty because each write affects two parity blocks

Figure 11.8 RAID Levels (page 2 of 2): (d) RAID 3 (bit-interleaved parity), (e) RAID 4 (block-level parity), (f) RAID 5 (block-level distributed parity, with parity blocks P(0-3), P(4-7), ... rotated across the disks), (g) RAID 6 (dual redundancy, with parity blocks P and Q on separate disks)

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
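A sketch of the round-robin parity placement drawn in Figure 11.8(f), assuming five disks and the rotation shown there; real implementations use several layout conventions, so this is only one possibility.

```python
# Sketch of the RAID 5 round-robin parity placement, assuming 5 disks and
# one parity block per stripe (layout conventions vary between implementations).
N_DISKS = 5

def raid5_layout(stripe):
    """Return (parity_disk, data_disks) for a given stripe number."""
    parity_disk = (N_DISKS - 1 - stripe) % N_DISKS   # parity rotates across disks
    data_disks = [d for d in range(N_DISKS) if d != parity_disk]
    return parity_disk, data_disks

for stripe in range(5):
    p, data = raid5_layout(stripe)
    print(f"stripe {stripe}: P on disk {p}, data blocks on disks {data}")
```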
Table 11.4 RAID Levels (N = number of data disks; m proportional to log N)

Level 0 – Nonredundant (Striping)
  Disks required: N
  Data availability: Lower than single disk
  Large I/O data transfer capacity: Very high
  Small I/O request rate: Very high for both read and write

Level 1 – Mirrored (Mirroring)
  Disks required: 2N
  Data availability: Higher than RAID 2, 3, 4, or 5; lower than RAID 6
  Large I/O data transfer capacity: Higher than single disk for read; similar to single disk for write
  Small I/O request rate: Up to twice that of a single disk for read; similar to single disk for write

Level 2 – Redundant via Hamming code (Parallel access)
  Disks required: N + m
  Data availability: Much higher than single disk; comparable to RAID 3, 4, or 5
  Large I/O data transfer capacity: Highest of all listed alternatives
  Small I/O request rate: Approximately twice that of a single disk

Level 3 – Bit-interleaved parity (Parallel access)
  Disks required: N + 1
  Data availability: Much higher than single disk; comparable to RAID 2, 4, or 5
  Large I/O data transfer capacity: Highest of all listed alternatives
  Small I/O request rate: Approximately twice that of a single disk

Level 4 – Block-interleaved parity (Independent access)
  Disks required: N + 1
  Data availability: Much higher than single disk; comparable to RAID 2, 3, or 5
  Large I/O data transfer capacity: Similar to RAID 0 for read; significantly lower than single disk for write
  Small I/O request rate: Similar to RAID 0 for read; significantly lower than single disk for write

Level 5 – Block-interleaved distributed parity (Independent access)
  Disks required: N + 1
  Data availability: Much higher than single disk; comparable to RAID 2, 3, or 4
  Large I/O data transfer capacity: Similar to RAID 0 for read; lower than single disk for write
  Small I/O request rate: Similar to RAID 0 for read; generally lower than single disk for write

Level 6 – Block-interleaved dual distributed parity (Independent access)
  Disks required: N + 2
  Data availability: Highest of all listed alternatives
  Large I/O data transfer capacity: Similar to RAID 0 for read; lower than RAID 5 for write
  Small I/O request rate: Similar to RAID 0 for read; significantly lower than RAID 5 for write

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.

Disk Cache
◼ The term cache memory is usually used to apply to a memory that is smaller and faster than main memory and that is interposed between main memory and the processor
  ◼ Reduces average memory access time by exploiting the principle of locality
◼ A disk cache is a buffer in main memory for disk sectors
  ◼ Contains a copy of some of the sectors on the disk
◼ When an I/O request is made for a particular sector, a check is made to determine if the sector is in the disk cache
  ◼ If YES, the request is satisfied via the cache
  ◼ If NO, the requested sector is read into the disk cache from the disk
  (Page 498 in textbook)

Least Recently Used (LRU)
◼ Most commonly used algorithm that deals with the design issue of replacement strategy
◼ The block that has been in the cache the longest with no reference to it is replaced
◼ A stack of pointers references the cache (see the sketch below)
  ◼ Most recently referenced block is on the top of the stack
  ◼ When a block is referenced or brought into the cache, it is placed on the top of the stack

Least Frequently Used (LFU)
◼ The block that has experienced the fewest references is replaced
◼ A counter is associated with each block
◼ Counter is incremented each time the block is accessed
◼ When replacement is required, the block with the smallest count is selected
◼ Problem:
  ◼ There are short intervals of repeated references to some blocks due to locality, thus building up high reference counts
  ◼ After such an interval is over, the reference count may be misleading and not reflect the probability that the block will soon be referenced again
  ◼ The effect of locality may actually cause the LFU algorithm to make poor replacement choices

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
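A minimal sketch of the LRU policy described above, using an OrderedDict in place of the stack of pointers; the class name and the read_from_disk callback are illustrative assumptions, not an OS interface.

```python
from collections import OrderedDict

class LRUDiskCache:
    def __init__(self, capacity, read_from_disk):
        self.capacity = capacity
        self.read_from_disk = read_from_disk   # callable: sector -> data
        self.blocks = OrderedDict()            # sector -> data, kept in LRU order

    def read_sector(self, sector):
        if sector in self.blocks:              # cache hit: satisfied via the cache
            self.blocks.move_to_end(sector)    # move to the top of the "stack"
            return self.blocks[sector]
        data = self.read_from_disk(sector)     # miss: read into the disk cache
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)    # evict the least recently used block
        self.blocks[sector] = data
        return data

cache = LRUDiskCache(capacity=2, read_from_disk=lambda s: f"<sector {s}>")
cache.read_sector(7); cache.read_sector(9); cache.read_sector(7)
cache.read_sector(3)                          # evicts sector 9, not 7
print(list(cache.blocks))                     # [7, 3]
```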
UNIX SVR4 I/O
◼ Each individual I/O device is associated with a special file
  ◼ Managed by the file system
  ◼ Read and written in the same manner as user data files
◼ Provides a simple and uniform interface to users and processes
◼ Two types of I/O:
  ◼ Buffered (system buffer caches, character queues)
  ◼ Unbuffered (typically involves DMA)

Figure 11.12 UNIX I/O Structure: File Subsystem → Buffer Cache (block devices) / Character queues (character devices) → Device Drivers

UNIX Buffer Cache
◼ Is essentially a disk cache
  ◼ I/O operations with disk are handled through the buffer cache
◼ The data transfer between the buffer cache and the user process space always occurs using DMA
  ◼ Does not use up any processor cycles
  ◼ Does consume bus cycles
◼ Three lists are maintained:
  ◼ Free list – list of all slots in the cache that are available for allocation
  ◼ Device list – list of all buffers currently associated with each disk
  ◼ Driver I/O queue – list of buffers that are actually undergoing or waiting for I/O on a particular device

Character Queue
◼ Used by character-oriented devices
  ◼ Terminals and printers
◼ Either written by the I/O device and read by the process, or vice versa
  ◼ Producer/consumer model is used (see the sketch below)
◼ Character queues may only be read once
  ◼ As each character is read, it is effectively destroyed

Unbuffered I/O
◼ Is simply DMA between device and process space
◼ Is always the fastest method for a process to perform I/O
◼ Process is locked in main memory and cannot be swapped out
◼ I/O device is tied up with the process for the duration of the transfer, making it unavailable for other processes

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
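An illustrative producer/consumer sketch of a character queue as described above: characters are written by one side and destroyed as they are read by the other. The class is my own simplification, not UNIX code.

```python
from collections import deque

class CharacterQueue:
    def __init__(self):
        self.chars = deque()

    def put(self, data):                        # producer: e.g. the device side
        self.chars.extend(data)

    def get(self, n):                           # consumer: each character read once
        out = []
        while self.chars and len(out) < n:
            out.append(self.chars.popleft())    # effectively destroyed on read
        return "".join(out)

q = CharacterQueue()
q.put("ls -l\n")
print(repr(q.get(5)))    # 'ls -l'
print(repr(q.get(5)))    # only '\n' remains: characters cannot be re-read
```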
Windows I/O Manager
◼ Cache Manager
  ◼ Maps regions of files into kernel virtual memory and then relies on the virtual memory manager to copy pages to and from the files on disk
◼ File System Drivers
  ◼ Send I/O requests to the software drivers that manage the hardware device adapter
◼ Network Drivers
  ◼ Windows includes integrated networking capabilities and support for remote file systems
  ◼ The facilities are implemented as software drivers
◼ Hardware Device Drivers
  ◼ The source code of Windows device drivers is portable across different processor types

Figure 11.15 Windows I/O Manager: the I/O Manager sits above the Cache Manager, File System Drivers, Network Drivers, and Hardware Device Drivers

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
◼ Windows offers two modes of I/O operation:
  ◼ Asynchronous – an application initiates an I/O operation and then can continue processing while the I/O request is fulfilled; used whenever possible to optimize application performance
  ◼ Synchronous – the application is blocked until the I/O operation completes
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
◼ Windows supports two sorts of RAID configurations:
  ◼ Hardware RAID – separate physical disks combined into one or more logical disks by the disk controller or disk storage cabinet hardware
  ◼ Software RAID – noncontiguous disk space combined into one or more logical partitions by the fault-tolerant software disk driver, FTDISK

Summary
◼ I/O devices
◼ Organization of the I/O function
  ◼ The evolution of the I/O function
  ◼ Direct memory access
◼ Operating system design issues
  ◼ Design objectives
  ◼ Logical structure of the I/O function
◼ I/O buffering
  ◼ Single/double/circular buffer
  ◼ The utility of buffering
◼ Disk scheduling
  ◼ Disk performance parameters
  ◼ Disk scheduling policies
◼ RAID
  ◼ RAID levels 0 – 6
◼ Disk cache
  ◼ Design and performance considerations

Review – End of Chapter
◼ Key terms
◼ Review Questions
◼ Problems: 11.3, 11.7
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Operating Systems: Internals and Design Principles, Ninth Edition
By William Stallings
Chapter 12: File Management

Files
◼ Data collections created by users
◼ The file system is one of the most important parts of the OS to a user
◼ Desirable properties of files:
  ◼ Long-term existence
    • Files are stored on disk or other secondary storage and do not disappear when a user logs off
  ◼ Sharable between processes
    • Files have names and can have associated access permissions that permit controlled sharing
  ◼ Structure
    • Files can be organized into hierarchical or more complex structures to reflect the relationships among files
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
File Systems
◼ Provide a means to store data organized as files as well as a collection of functions that can be performed on files
◼ Maintain a set of attributes associated with the file
◼ Typical operations include:
  ◼ Create
  ◼ Delete
  ◼ Open
  ◼ Close
  ◼ Read
  ◼ Write

File Structure
◼ Four terms are commonly used when discussing files: field, record, file, and database
◼ Field
  ◼ Basic element of data
  ◼ Contains a single value
  ◼ Fixed or variable length
◼ Record
  ◼ Collection of related fields that can be treated as a unit by some application program
  ◼ Fixed or variable length
◼ File
  ◼ Collection of similar records
  ◼ Treated as a single entity
  ◼ May be referenced by name
  ◼ Access control restrictions usually apply at the file level
◼ Database
  ◼ Collection of related data
  ◼ Relationships among elements of data are explicit
  ◼ Designed for use by a number of different applications
  ◼ Consists of one or more types of files

Minimal User Requirements
▪ Each user:
  • Should be able to create, delete, read, write and modify files
  • May have controlled access to other users' files
  • May control what types of access are allowed to the user's files
  • Should be able to move data between files
  • Should be able to back up and recover files in case of damage
  • Should be able to access his or her files by name rather than by numeric identifier

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Figure 12.1 File System Software Architecture: User Program → access methods (Pile, Sequential, Indexed Sequential, Indexed, Hashed) → Logical I/O → Basic I/O Supervisor → Basic File System → Disk Device Driver / Tape Device Driver

Device Drivers
◼ Lowest level
◼ Communicate directly with peripheral devices
◼ Responsible for starting I/O operations on a device
◼ Process the completion of an I/O request
◼ Considered to be part of the operating system

Basic File System
◼ Also referred to as the physical I/O level
◼ Primary interface with the environment outside the computer system
◼ Deals with blocks of data that are exchanged with disk or tape systems
◼ Concerned with the placement of blocks on the secondary storage device
◼ Concerned with buffering blocks in main memory
◼ Does not understand the content of the data or the structure of the files involved
◼ Considered part of the operating system

Basic I/O Supervisor
◼ Responsible for all file I/O initiation and termination
◼ At this level, control structures are maintained that deal with device I/O, scheduling, and file status
◼ Selects the device on which I/O is to be performed
◼ Concerned with scheduling disk and tape accesses to optimize performance
◼ I/O buffers are assigned and secondary memory is allocated at this level
◼ Part of the operating system

Logical I/O
◼ Enables users and applications to access records
◼ Provides general-purpose record I/O capability
◼ Maintains basic data about files

Access Method
◼ Level of the file system closest to the user
◼ Provides a standard interface between applications and the file systems and devices that hold the data
◼ Different access methods reflect different file structures and different ways of accessing and processing the data

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Figure 12.2 Elements of File Management: user & program commands (operation, file name) flow through file manipulation functions, directory management, user access control, access method, file structure, blocking, file allocation, free storage management, disk scheduling, and I/O, moving records between physical blocks in main memory buffers and physical blocks in secondary storage (disk); the upper elements are file management concerns, the lower ones operating system concerns

File Organization and Access
◼ File organization is the logical structuring of the records as determined by the way in which they are accessed
◼ In choosing a file organization, several criteria are important:
  ◼ Short access time
  ◼ Ease of update
  ◼ Economy of storage
  ◼ Simple maintenance
  ◼ Reliability
◼ Priority of criteria depends on the application that will use the file

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
File Organization Types
◼ Five of the common file organizations are:
  ◼ The pile
  ◼ The sequential file
  ◼ The indexed sequential file
  ◼ The indexed file
  ◼ The direct, or hashed, file

The Pile
◼ Least complicated form of file organization
◼ Data are collected in the order they arrive
  ◼ Each record consists of one burst of data
◼ Purpose is simply to accumulate the mass of data and save it
◼ Record access is by exhaustive search
◼ Variable-length records, variable set of fields, chronological order (Figure 12.3a)

The Sequential File
◼ Most common form of file structure
◼ A fixed format is used for records
  ◼ Fixed-length records, fixed set of fields in fixed order
◼ Key field uniquely identifies the record; records are stored in sequential order based on the key field
◼ Typically used in batch applications
◼ New records are usually stored in a separate file, called a log file (or transaction file)
  ◼ Periodically, the log file is merged with the master file

Indexed Sequential File
◼ Adds an index to the file to support random access
  ◼ The index is searched to find the highest key value that is equal to or precedes the desired key value
◼ Adds an overflow file for new records
◼ Greatly reduces the time required to access a single record
◼ Multiple levels of indexing can be used to provide greater efficiency in access

Figure 12.3 Common File Organizations: (a) Pile File, (b) Sequential File, (c) Indexed Sequential File (n index levels, main file, overflow file)

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Indexed Sequential File
◼ For single-level indexing, each record in the index file consists of two fields: a key field and a pointer into the main file
  ◼ To find a specific record, the index is searched to find the highest key value that is equal to or precedes the desired key value
  ◼ The search continues in the main file at the location indicated by the pointer
◼ Example: a sequential file with 1 million records
  ◼ Without an index, a search for a particular key value requires on average 500,000 record accesses
  ◼ With an index of 1,000 entries whose keys are more or less evenly distributed over the main file, it takes on average 500 accesses to the index file followed by 500 accesses to the main file to find the record
  ◼ The average search length is reduced from 500,000 to 1,000
◼ The same file with multiple levels of indexing:
  ◼ A lower-level index with 10,000 entries is constructed
  ◼ A higher-level index into the lower-level index, with 100 entries, can then be constructed
  ◼ The search begins at the higher-level index (average length = 50 accesses) to find an entry point into the lower-level index
  ◼ The lower-level index is then searched (average length = 50) to find an entry point into the main file, which is then searched (average length = 50)
  ◼ Thus the average length of search is reduced from 500,000, to 1,000, to 150 (checked in the sketch below)
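The arithmetic in the example above can be checked directly; the sketch below simply restates the average search lengths for the unindexed, single-level, and two-level cases.

```python
# Arithmetic check of the worked example above (1,000,000-record file).
RECORDS = 1_000_000

no_index = RECORDS / 2                                          # 500,000 accesses
one_level = 1_000 / 2 + (RECORDS / 1_000) / 2                   # 500 + 500 = 1,000
two_level = 100 / 2 + (10_000 / 100) / 2 + (RECORDS / 10_000) / 2   # 50 + 50 + 50 = 150

print(no_index, one_level, two_level)   # 500000.0 1000.0 150.0
```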
Indexed File
◼ Records are accessed only through their indexes
◼ Variable-length records can be employed
◼ Some records may not contain all fields
◼ Exhaustive index contains one entry for every record in the main file
◼ Partial index contains entries only for records where the field of interest exists
◼ Used mostly in applications where timeliness of information is critical
◼ Examples would be airline reservation systems and inventory control systems
◼ Figure 12.3(d) Indexed File: exhaustive index, partial indexes, primary file (variable-length records)

Direct or Hashed File
◼ Access directly any block of a known address
◼ Makes use of hashing on the key value (see the sketch below)
◼ Often used where:
  ◼ Very rapid access is required
  ◼ Fixed-length records are used
  ◼ Records are always accessed one at a time
◼ Examples are:
  • Directories
  • Pricing tables
  • Schedules
  • Name lists

Figure 12.3 Common File Organizations
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
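A small sketch of the direct, or hashed, organization described above: the key is hashed to a bucket of fixed-length records, so a lookup needs one hash plus a short scan of a single bucket. All names, keys, and sizes here are illustrative only.

```python
# Sketch of a hashed file: key -> bucket -> record (illustrative structure).
N_BUCKETS = 8

table = [[] for _ in range(N_BUCKETS)]

def bucket(key):
    return hash(key) % N_BUCKETS

def insert(key, record):
    table[bucket(key)].append((key, record))

def lookup(key):
    # One hash plus a short scan of a single bucket; no index traversal needed.
    for k, record in table[bucket(key)]:
        if k == key:
            return record
    return None

insert("printer-03", {"price": 120})
print(lookup("printer-03"))   # {'price': 120}
```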
File Directory Information

Basic Information
  File Name – Name as chosen by creator (user or program). Must be unique within a specific directory.
  File Type – For example: text, binary, load module, etc.
  File Organization – For systems that support different organizations
Address Information
  Volume – Indicates device on which file is stored
  Starting Address – Starting physical address on secondary storage (e.g., cylinder, track, and block number on disk)
  Size Used – Current size of the file in bytes, words, or blocks
  Size Allocated – The maximum size of the file
Access Control Information
  Owner – User who is assigned control of this file. The owner may be able to grant/deny access to other users and to change these privileges.
  Access Information – A simple version of this element would include the user's name and password for each authorized user.
  Permitted Actions – Controls reading, writing, executing, transmitting over a network
Usage Information
  Date Created – When file was first placed in directory
  Identity of Creator – Usually but not necessarily the current owner
  Date Last Read Access – Date of the last time a record was read
  Identity of Last Reader – User who did the reading
  Date Last Modified – Date of the last update, insertion, or deletion
  Identity of Last Modifier – User who did the modifying
  Date of Last Backup – Date of the last time the file was backed up on another storage medium
  Current Usage – Information about current activity on the file, such as process or processes that have the file open, whether it is locked by a process, and whether the file has been updated in main memory but not yet on disk

Operations Performed on a Directory
◼ To understand the requirements for a file structure, it is helpful to consider the types of operations that may be performed on the directory:
  ◼ Search
  ◼ Create files
  ◼ Delete files
  ◼ List directory
  ◼ Update directory
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Two-Level Scheme
◼ There is one directory for each user and a master directory
◼ Master directory has an entry for each user directory, providing address and access control information
◼ Each user directory is a simple list of the files of that user
◼ Names must be unique only within the collection of files of a single user
◼ File system can easily enforce access restrictions on directories

Figure 12.6 Tree-Structured Directory: a master directory points to subdirectories, which in turn contain further subdirectories and files

Figure 12.7 Example of Tree-Structured Directory: the master directory has entries System, User_A, User_B, and User_C; directory "User_B" contains Draw and Word; the pathnames /User_B/Draw/ABC and /User_B/Word/Unit_A/ABC name two distinct files "ABC"

File Sharing
◼ Two issues arise when allowing files to be shared among a number of users:
  ◼ Access rights
  ◼ Management of simultaneous access

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Access Rights
◼ None
  ◼ The user would not be allowed to read the user directory that includes the file
◼ Knowledge
  ◼ The user can determine that the file exists and who its owner is and can then petition the owner for additional access rights
◼ Execution
  ◼ The user can load and execute a program but cannot copy it
◼ Reading
  ◼ The user can read the file for any purpose, including copying and execution
◼ Appending
  ◼ The user can add data to the file but cannot modify or delete any of the file's contents
◼ Updating
  ◼ The user can modify, delete, and add to the file's data
◼ Changing protection
  ◼ The user can change the access rights granted to other users
◼ Deletion
  ◼ The user can delete the file from the file system
◼ Owner
  ◼ Usually the initial creator of the file
  ◼ Has full rights
  ◼ May grant rights to others

User Access Rights
◼ Specific Users
  ◼ Individual users who are designated by user ID
◼ User Groups
  ◼ A set of users who are not individually defined
◼ All
  ◼ All users who have access to this system
  ◼ These are public files

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Record Blocking
◼ Blocks are the unit of I/O with secondary storage
  ◼ For I/O to be performed, records must be organized as blocks
  ◼ Blocks are mostly of fixed length
    ◼ Simplifies I/O and buffer allocation
▪ Given the size of a block, three methods of blocking can be used:
  1) Fixed-Length Blocking – fixed-length records are used, and an integral number of records are stored in a block (see the sketch below)
     ◼ Internal fragmentation – unused space at the end of each block
  2) Variable-Length Spanned Blocking – variable-length records are used and are packed into blocks with no unused space
  3) Variable-Length Unspanned Blocking – variable-length records are used, but spanning is not employed

(Figure 12.8: fixed blocking; variable blocking, spanned; variable blocking, unspanned)
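For fixed-length blocking, the records per block and the internal fragmentation follow directly from the block and record sizes; the sizes below are assumed purely for illustration.

```python
# Quick check of fixed-length blocking: records per block and the internal
# fragmentation left over (example sizes are assumptions, not from the slides).
BLOCK_SIZE = 4096      # bytes per block
RECORD_SIZE = 300      # bytes per fixed-length record

records_per_block = BLOCK_SIZE // RECORD_SIZE                            # 13
internal_fragmentation = BLOCK_SIZE - records_per_block * RECORD_SIZE    # 196 bytes wasted

print(records_per_block, internal_fragmentation)
```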
File Allocation
▪ On secondary storage, a file consists of a collection of blocks
▪ The operating system or file management system is responsible for allocating blocks to files
▪ The approach taken for file allocation may influence the approach taken for free space management
▪ Space is allocated to a file as one or more portions (contiguous sets of allocated blocks)
▪ File allocation table (FAT)
  ▪ Data structure used to keep track of the portions assigned to a file

Preallocation vs Dynamic Allocation
◼ A preallocation policy requires that the maximum size of a file be declared at the time of the file creation request
  ◼ For many applications it is difficult to estimate reliably the maximum potential size of the file
  ◼ Tends to be wasteful because users and application programmers tend to overestimate size
◼ Dynamic allocation allocates space to a file in portions as needed

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Portion Size
◼ In choosing a portion size there is a trade-off between efficiency from the point of view of a single file versus overall system efficiency
◼ Items to be considered:
  1) Contiguity of space increases performance, especially for Retrieve_Next operations, and greatly for transactions running in a transaction-oriented operating system
  2) Having a large number of small portions increases the size of tables needed to manage the allocation information
  3) Having fixed-size portions simplifies the reallocation of space
  4) Having variable-size or small fixed-size portions minimizes waste of unused storage due to overallocation
◼ Two major alternatives:
  ◼ Variable, large contiguous portions
    • Provides better performance
    • The variable size avoids waste
    • The file allocation tables are small
  ◼ Blocks
    • Small fixed portions provide greater flexibility
    • They may require large tables or complex structures for their allocation
    • Contiguity has been abandoned as a primary goal
    • Blocks are allocated as needed

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
File Allocation Methods – Contiguous File Allocation
▪ A single contiguous set of blocks is allocated to a file at the time of file creation
▪ Preallocation strategy using variable-size portions
▪ FAT needs only a single entry for each file
▪ Is the best from the point of view of the individual sequential file
▪ External fragmentation – requires compaction

Figure 12.9 Contiguous File Allocation – File Allocation Table:
    File Name   Start Block   Length
    File A           2           3
    File B           9           5
    File C          18           8
    File D          30           2
    File E          26           3

Figure 12.10 Contiguous File Allocation (After Compaction) – File Allocation Table:
    File Name   Start Block   Length
    File A           0           3
    File B           3           5
    File C           8           8
    File D          19           2
    File E          16           3
Chained Allocation
▪ Allocation is on an individual block basis
▪ Each block contains a pointer to the next block in the chain
▪ The file allocation table needs just a single entry for each file (start block and length)
▪ No external fragmentation to worry about
▪ Best for sequential files
▪ To select an individual block requires tracing through the chain to the desired block (see the sketch below)

Figure 12.11 Chained Allocation: the FAT entry for File B records start block 1 and length 5; the five blocks are linked by pointers

Indexed Allocation with Block Portions
▪ Allocation is on either fixed-size blocks or variable-size portions
▪ The FAT contains a separate one-level index for each file
▪ The index has one entry for each portion allocated to the file
▪ No external fragmentation
▪ Supports both sequential and direct access

Figure 12.13 Indexed Allocation with Block Portions: the FAT entry for File B points to index block 24, which lists blocks 1, 8, 3, 14, 28
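The difference between chained and indexed allocation shows up when locating the nth block of a file. The sketch below uses the File B data from Figures 12.11 and 12.13; the chain order is assumed to follow the same block sequence as the index (1, 8, 3, 14, 28), which is an illustrative assumption.

```python
# Sketch contrasting block lookup under chained vs. indexed allocation.
chain_next = {1: 8, 8: 3, 3: 14, 14: 28, 28: None}   # block -> next block (assumed order)
index_block = [1, 8, 3, 14, 28]                      # one-level index for File B

def nth_block_chained(start, n):
    block = start
    for _ in range(n):                   # must trace through the chain
        block = chain_next[block]
    return block

def nth_block_indexed(index, n):
    return index[n]                      # direct: a single index lookup

print(nth_block_chained(1, 3))           # 14
print(nth_block_indexed(index_block, 3)) # 14
```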
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Indexed Allocation with Variable-Length Portions

Figure 12.14 Indexed Allocation with Variable-Length Portions: the FAT entry for File B points to index block 24, which lists portions (start block 1, length 3), (start block 28, length 4), and (start block 14, length 1)

Free Space Management
◼ Just as allocated space must be managed, so must the unallocated space
◼ To perform file allocation, it is necessary to know which blocks are available
◼ A disk allocation table is needed in addition to a file allocation table

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Bit Tables
◼ This method uses a vector containing one bit for each block on the disk
  ◼ Each entry of 0 corresponds to a free block, and each 1 corresponds to a block in use
◼ Advantages:
  ◼ Works well with any file allocation method
  ◼ It is as small as possible
◼ Disadvantage: an exhaustive search of the table can slow file system performance (see the sketch below)

Chained Free Portions
◼ The free portions may be chained together by using a pointer and length value in each free portion
◼ Negligible space overhead because there is no need for a disk allocation table
◼ Suited to all file allocation methods
◼ Disadvantages:
  • Leads to fragmentation – many portions will be a single block long
  • Every time you allocate a block you need to read the block first to recover the pointer to the new first free block before writing data to that block
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
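A tiny sketch of the bit-table approach: one bit per block, 0 for free and 1 for in use, with the exhaustive search that the disadvantage bullet refers to. The table contents are made up for illustration.

```python
# Sketch of a bit-table disk allocation map: 0 = free block, 1 = in use.
bit_table = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1]   # one bit per disk block

def find_free_blocks(count):
    """Exhaustive search for `count` free blocks (the noted disadvantage)."""
    free = [i for i, bit in enumerate(bit_table) if bit == 0]
    return free[:count] if len(free) >= count else None

def allocate(blocks):
    for b in blocks:
        bit_table[b] = 1

blocks = find_free_blocks(3)
allocate(blocks)
print(blocks, bit_table)   # [2, 3, 5] and those bits are now set to 1
```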
Indexing
◼ Treats free space as a file and uses an index table as it would for file allocation
◼ For efficiency, the index should be on the basis of variable-size portions rather than blocks
◼ This approach provides efficient support for all of the file allocation methods

Volumes
◼ A collection of addressable sectors in secondary memory that an OS or application can use for data storage
◼ The sectors in a volume need not be consecutive on a physical storage device
  ◼ They need only appear that way to the OS or application
◼ A volume may be the result of assembling and merging smaller volumes

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Volumes
◼ In the simplest case, a single disk equals one volume
◼ If a disk is divided into partitions, each partition functions as a separate volume
◼ Multiple disks can also be treated as a single volume, or partitions on multiple disks as a single volume

Access Matrix
◼ The basic elements are:
  ◼ Subject – an entity capable of accessing objects
    ◼ A process that represents a user or application
  ◼ Object – anything to which access is controlled
    ◼ E.g. files, programs, memory
  ◼ Access right – the way in which an object is accessed by a subject
    ◼ Read, Write, Execute

Access Control Lists
◼ A matrix may be decomposed by columns, yielding access control lists
◼ The access control list lists users and their permitted access rights

Capability Lists
◼ Decomposition by rows yields capability tickets
◼ A capability ticket specifies authorized objects and operations for a user (a decomposition sketch appears at the end of this section)

UNIX File Management
◼ In the UNIX file system, six types of files are distinguished:
  ◼ Regular, or ordinary
    • Contains arbitrary data in zero or more data blocks
  ◼ Directory
    • Contains a list of file names plus pointers to associated inodes (index nodes)
  ◼ Special
    • Contains no data but provides a mechanism to map physical devices to file names
  ◼ Named pipes
    • An interprocess communications facility
  ◼ Links
    • An alternative file name for an existing file
  ◼ Symbolic links
    • A data file that contains the name of the file to which it is linked

Inodes
◼ All types of UNIX files are administered by the OS by means of inodes
◼ An inode (index node) is a control structure that contains the key information needed by the operating system for a particular file
◼ Several file names may be associated with a single inode
  ◼ An active inode is associated with exactly one file
  ◼ Each file is controlled by exactly one inode

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
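The access-matrix decomposition described above can be shown in a few lines: decomposing by columns gives per-object access control lists, and decomposing by rows gives per-subject capability lists. The matrix contents here are illustrative.

```python
# Sketch: decomposing an access matrix by columns (ACLs) and by rows (capabilities).
matrix = {                       # subject -> {object: set of access rights}
    "alice": {"file1": {"read", "write"}, "file2": {"read"}},
    "bob":   {"file1": {"read"}},
}

def access_control_lists(m):
    """Column view: for each object, which users hold which rights."""
    acls = {}
    for subject, row in m.items():
        for obj, rights in row.items():
            acls.setdefault(obj, {})[subject] = rights
    return acls

def capability_lists(m):
    """Row view: each subject's ticket of authorized objects and operations."""
    return {subject: dict(row) for subject, row in m.items()}

print(access_control_lists(matrix))
print(capability_lists(matrix))
```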
File Allocation
◼ File allocation is done on a block basis
◼ Allocation is dynamic, as needed, rather than using preallocation
◼ An indexed method is used to keep track of each file, with part of the index stored in the inode for the file
◼ In all UNIX implementations the inode includes a number of direct pointers and three indirect pointers (single, double, triple)

Figure 12.15 Structure of FreeBSD inode and File: inode fields include mode, owners (2), timestamps (4), size, direct block pointers, single/double/triple indirect pointers, block count, reference count, flags (2), generation number, blocksize, extended attr size, and extended attribute blocks; the pointers lead (directly or through blocks of pointers) to the data blocks

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Table 12.3 Capacity of a FreeBSD File with 4-Kbyte Block Size

Level              Number of Blocks       Number of Bytes
Direct             12                     48K
Single Indirect    512                    2M
Double Indirect    512 × 512 = 256K       1G
Triple Indirect    512 × 256K = 128M      512G
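The capacities in Table 12.3 follow from a 4-Kbyte block, 12 direct pointers, and 512 block pointers per indirect block (the pointer count implied by the table); the sketch below recomputes them.

```python
# Reproduces Table 12.3: maximum FreeBSD file size with a 4-Kbyte block,
# assuming 12 direct pointers and 512 pointers per 4-Kbyte indirect block.
BLOCK = 4 * 1024
POINTERS = 512

levels = {
    "Direct":          12,
    "Single indirect": POINTERS,            # 512 blocks      -> 2M bytes
    "Double indirect": POINTERS ** 2,       # 256K blocks     -> 1G bytes
    "Triple indirect": POINTERS ** 3,       # 128M blocks     -> 512G bytes
}

total = 0
for level, blocks in levels.items():
    total += blocks * BLOCK
    print(f"{level:16s} {blocks:>11,d} blocks = {blocks * BLOCK:>15,d} bytes")
print(f"Maximum file size: {total:,} bytes")
```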
UNIX Directories and Inodes
◼ Directories are structured in a hierarchical tree
◼ Each directory can contain files and/or other directories
◼ A directory that is inside another directory is referred to as a subdirectory

Figure 12.16 UNIX Directories and Inodes: a directory is a list of (inode number, file name) pairs, e.g. (i1, Name1), (i2, Name2), (i3, Name3), (i4, Name4), each pointing into the inode table
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Windows File System
◼ The developers of Windows NT designed a new file system, the New Technology File System (NTFS), which is intended to meet high-end requirements for workstations and servers
◼ Key features of NTFS:
  ◼ Recoverability
  ◼ Security
  ◼ Large disks and large files
  ◼ Multiple data streams
  ◼ Journaling
  ◼ Compression and encryption
  ◼ Hard and symbolic links

NTFS Volume and File Structure
◼ NTFS makes use of the following disk storage concepts:
  ◼ Sector
    • The smallest physical storage unit on the disk
    • The data size in bytes is a power of 2 and is almost always 512 bytes
  ◼ Cluster
    • One or more contiguous sectors
    • The cluster size in sectors is a power of 2
  ◼ Volume
    • A logical partition on a disk, consisting of one or more clusters and used by a file system to allocate space
    • Can be all or a portion of a single disk, or it can extend across multiple disks
    • The maximum volume size for NTFS is 2^64 clusters

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Table 12.4 Windows NTFS Partition and Cluster Sizes

Volume Size            Sectors per Cluster   Cluster Size
≤ 512 Mbyte                    1              512 bytes
512 Mbyte – 1 Gbyte            2              1K
1 Gbyte – 2 Gbyte              4              2K
2 Gbyte – 4 Gbyte              8              4K
4 Gbyte – 8 Gbyte             16              8K
8 Gbyte – 16 Gbyte            32              16K
16 Gbyte – 32 Gbyte           64              32K
> 32 Gbyte                   128              64K
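A small sketch of how the default cluster size in Table 12.4 can be looked up from the volume size; the function name and table encoding are my own, not a Windows API.

```python
# Sketch: choosing the default cluster size from Table 12.4 (sizes in bytes).
GB = 2**30
TABLE = [            # (volume size upper bound in bytes, sectors per cluster)
    (512 * 2**20, 1), (1 * GB, 2), (2 * GB, 4), (4 * GB, 8),
    (8 * GB, 16), (16 * GB, 32), (32 * GB, 64), (float("inf"), 128),
]
SECTOR = 512

def default_cluster_size(volume_bytes):
    for upper_bound, sectors in TABLE:
        if volume_bytes <= upper_bound:
            return sectors * SECTOR

print(default_cluster_size(3 * GB))    # 4096 bytes (4K) for a 2-4 Gbyte volume
```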
NTFS Volume Layout
◼ Every element on a volume is a file, and every file consists of a collection of attributes
  ◼ Even the data contents of a file are treated as an attribute

Figure 12.19 NTFS Volume Layout: partition boot sector, Master File Table, System Files, File Area

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Master File Table (MFT)
◼ The heart of the Windows file system is the MFT
◼ The MFT is organized as a table of 1,024-byte rows, called records
◼ Each row describes a file on this volume, including the MFT itself, which is treated as a file
◼ Each record in the MFT consists of a set of attributes that serve to define the file (or folder) characteristics and the file contents
  ◼ E.g. file name, owner, access attributes (read/write, etc.)

© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.

Summary
◼ File structure
◼ File management systems
◼ File organization and access
  ◼ The pile
  ◼ The sequential file
  ◼ The indexed sequential file
  ◼ The indexed file
  ◼ The direct or hashed file
◼ File directories
  ◼ Contents
  ◼ Structure
  ◼ Naming
◼ Record blocking
◼ Secondary storage management
  ◼ File allocation
  ◼ Free space management
  ◼ Volumes
◼ UNIX file management
  ◼ Inodes
  ◼ File allocation
  ◼ Directories
◼ Windows file system
  ◼ Key features of NTFS
  ◼ NTFS volume and file structure
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Review – End of Chapter
◼ Key terms
◼ Review Questions
◼ Problems: 12.3, 12.7–12.9, 12.11
© 2017 Pearson Education, Inc., Hoboken, NJ. All rights reserved.