Virtual Memory:
Virtual memory involves the separation of logical memory as perceived by users from physical
memory. This separation allows an extremely large virtual memory to be provided for
programmers when only a smaller physical memory is available. Virtual memory makes the
task of programming much easier, because the programmer no longer needs to worry about
the amount of physical memory available.
The virtual address space of a process refers to the logical (or virtual) view of how a process is
stored in memory.
In addition to separating logical memory from physical memory, virtual memory allows files and
memory to be shared by two or more processes through page sharing. This leads to the
following benefits:
• System libraries can be shared by several processes through mapping of the shared object
into a virtual address space. Although each process considers the libraries to be part of its
virtual address space, the actual pages where the libraries reside in physical memory are shared
by all the processes. Typically, a library is mapped read-only into the space of each process that
is linked with it.
• Similarly, processes can share memory. Two or more processes can communicate through the
use of shared memory. Virtual memory allows one process to create a region of memory that it
can share with another process. Processes sharing this region consider it part of their virtual
address space, yet the actual physical pages of memory are shared.
• Pages can be shared during process creation with the fork() system call, thus speeding up
process creation.
Demand Paging:
Consider how an executable program might be loaded from disk into memory. One option is to
load the entire program in physical memory at program execution time. However, a problem
with this approach is that we may not initially need the entire program in memory. Suppose a
program starts with a list of available options from which the user is to select. Loading the
entire program into memory results in loading the executable code for all options, regardless of
whether or not an option is ultimately selected by the user. An alternative strategy is to load
pages only as they are needed. This technique is known as demand paging and is commonly
used in virtual memory systems. With demand-paged virtual memory, pages are loaded only
when they are demanded during program execution. Pages that are never accessed are thus
never loaded into physical memory. A demand-paging system is similar to a paging system with
swapping where processes reside in secondary memory (usually a disk). When we want to
execute a process, we swap it into memory.
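The effect of demand paging can be observed from user space. Below is a minimal C sketch (POSIX-specific; it assumes the C library obtains fresh, untouched pages for a large allocation) that uses getrusage() to show that page faults occur when pages are first touched, not when the memory is reserved:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

/* Print the process's page-fault counters so far. */
static void report(const char *label) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    printf("%s: minor faults = %ld, major faults = %ld\n",
           label, ru.ru_minflt, ru.ru_majflt);
}

int main(void) {
    size_t size = 64 * 1024 * 1024;      /* 64 MiB */
    char *buf = malloc(size);            /* reserves address space only */
    if (!buf) return 1;
    report("after malloc");

    memset(buf, 0, size);                /* touching each page forces it in */
    report("after touching every page");

    free(buf);
    return 0;
}
```

On a typical Linux system, the minor-fault count jumps by roughly one fault per touched page after the memset() call.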
Copy-on-Write:
Recall that the fork() system call creates a child process that is a duplicate of its parent.
Traditionally, fork() worked by creating a copy of the parent’s address space for the child,
duplicating the pages belonging to the parent. However, considering that many child processes
invoke the exec() system call immediately after creation, the copying of the parent’s address
space may be unnecessary. Instead, we can use a technique known as copy-on-write, which
works by allowing the parent and child processes initially to share the same pages. These
shared pages are marked as copy-on-write pages, meaning that if either process writes to a
shared page, a copy of that page is created for the writing process.
When it is determined that a page is going to be duplicated using copy-on-write, it is important to note
the location from which the free page will be allocated. Many operating systems provide a pool of free
pages for such requests. These free pages are typically allocated when the stack or heap for a process
must expand or when there are copy-on-write pages to be managed.
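The following minimal C sketch illustrates the effect (assuming a typical POSIX system such as Linux, where fork() is implemented with copy-on-write): both processes share physical pages until one of them writes, at which point the writer transparently receives its own copy, so the parent's data is unaffected:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Lives in the data segment; shared copy-on-write after fork(). */
    static int value = 42;

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }

    if (pid == 0) {
        /* Child: this write triggers the copy; the parent's page is untouched. */
        value = 99;
        printf("child sees value = %d\n", value);   /* 99 */
        exit(EXIT_SUCCESS);
    }

    wait(NULL);
    printf("parent sees value = %d\n", value);      /* still 42 */
    return 0;
}
```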
Page Replacement:
Page replacement takes the following approach. If no frame is free, we find one that is not
currently being used and free it. We can free a frame by writing its contents to swap space and
changing the page table (and all other tables) to indicate that the page is no longer in memory.
We can now use the freed frame to hold the page for which the process faulted. We modify the
page-fault service routine to include page replacement:
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
c. Write the victim frame to the disk; change the page and frame tables accordingly.
3. Read the desired page into the newly freed frame; change the page and frame tables.
4. Continue the user process from where the page fault occurred.
Notice that, if no frames are free, two page transfers (one out and one in) are required. This situation
effectively doubles the page-fault service time and increases the effective access time accordingly.
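These steps can be summarized as a sketch of the fault handler. Everything here is pseudocode in C syntax: the types and helpers such as select_victim() and read_page_from_disk() are hypothetical placeholders, not a real kernel API:

```c
/* Pseudocode sketch of the page-fault service routine with replacement.
   All types and helpers (page_t, select_victim(), etc.) are hypothetical. */
frame_t handle_page_fault(page_t page) {
    disk_addr_t loc = locate_on_disk(page);        /* step 1 */

    frame_t frame = find_free_frame();             /* step 2a */
    if (frame == NO_FREE_FRAME) {
        frame = select_victim();                   /* step 2b: policy choice */
        write_frame_to_disk(frame);                /* step 2c: page out victim */
        mark_not_present(page_mapped_to(frame));   /* update victim's tables */
    }

    read_page_from_disk(loc, frame);               /* step 3: page in */
    map_page_to_frame(page, frame);                /* update faulting tables */
    return frame;                                  /* step 4: resume process */
}
```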
FIFO Page Replacement:
The simplest page-replacement algorithm is a first-in, first-out (FIFO) algorithm. A FIFO replacement
algorithm associates with each page the time when that page was brought into memory. When a page
must be replaced, the oldest page is chosen. Notice that it is not strictly necessary to record the time
when a page is brought in. We can create a FIFO queue to hold all pages in memory. We replace the
page at the head of the queue. When a page is brought into memory, we insert it at the tail of the
queue.
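A self-contained C simulation of FIFO replacement (an illustrative sketch, not kernel code; the three-frame memory and the reference string are assumptions for the example):

```c
#include <stdio.h>
#include <string.h>

#define FRAMES 3

/* Simulate FIFO replacement over a reference string; returns the fault count. */
int fifo_faults(const int *refs, int n) {
    int frames[FRAMES];
    int next = 0, faults = 0;            /* next = head of the circular FIFO queue */
    memset(frames, -1, sizeof frames);   /* -1 marks an empty frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < FRAMES; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = refs[i];      /* evict the oldest page */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    int n = sizeof refs / sizeof refs[0];
    printf("FIFO faults: %d\n", fifo_faults(refs, n));
    return 0;
}
```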
Optimal Page Replacement:
One result of the discovery of Belady’s anomaly was the search for an optimal page-replacement
algorithm: the algorithm that has the lowest page-fault rate of all algorithms and
will never suffer from Belady’s anomaly. Such an algorithm does exist and has been called OPT
or MIN. It is simply this: Replace the page that will not be used for the longest period of time.
Use of this page-replacement algorithm guarantees the lowest possible page-fault rate for a
fixed number of frames. However, this algorithm is not practical to implement, because it
requires knowledge of future memory references; it serves mainly as a benchmark against
which other algorithms are measured.
LRU Page Replacement:
If the optimal algorithm is not feasible, perhaps an approximation of the optimal algorithm is
possible. The key distinction between the FIFO and OPT algorithms (other than looking
backward versus forward in time) is that the FIFO algorithm uses the time when a page was
brought into memory, whereas the OPT algorithm uses the time when a page is to be used. If
we use the recent past as an approximation of the near future, then we can replace the page
that has not been used for the longest period of time. This approach is the least recently used
(LRU) algorithm. LRU replacement associates with each page the time of that page’s last use.
When a page must be replaced, LRU chooses the page that has not been used for the longest
period of time.
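A matching C sketch of LRU using per-frame timestamps (again purely illustrative; real systems approximate exact timestamps with hardware support such as reference bits):

```c
#include <stdio.h>

#define FRAMES 3

/* Simulate LRU replacement with per-frame "last used" timestamps. */
int lru_faults(const int *refs, int n) {
    int frames[FRAMES], last_used[FRAMES];
    int faults = 0;
    for (int f = 0; f < FRAMES; f++) { frames[f] = -1; last_used[f] = -1; }

    for (int i = 0; i < n; i++) {
        int slot = -1;
        for (int f = 0; f < FRAMES; f++)
            if (frames[f] == refs[i]) { slot = f; break; }   /* hit */
        if (slot < 0) {                                      /* miss */
            slot = 0;                                        /* pick the LRU victim */
            for (int f = 1; f < FRAMES; f++)
                if (last_used[f] < last_used[slot]) slot = f;
            frames[slot] = refs[i];
            faults++;
        }
        last_used[slot] = i;   /* record this reference as the most recent use */
    }
    return faults;
}

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    printf("LRU faults: %d\n", lru_faults(refs, sizeof refs / sizeof refs[0]));
    return 0;
}
```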
Each page replacement algorithm has its own advantages and disadvantages, and the choice
depends on factors such as system workload and memory access patterns.
Allocation of Page Frames:
The allocation of page frames refers to the assignment of physical memory (RAM) to different
processes or pages in a virtual memory system. The operating system must efficiently manage
the available memory and allocate page frames to processes based on their memory
requirements.
Various allocation strategies can be employed, including:
Fixed Allocation: In fixed allocation, each process is allocated a fixed number of page frames at
all times. This approach provides fairness among processes but may not be efficient when
memory demands vary.
Dynamic Allocation: Dynamic allocation adjusts the number of page frames allocated to each
process based on its current needs. This approach allows for better utilization of memory but
requires efficient tracking and management of available page frames.
Priority-Based Allocation: In priority-based allocation, processes are assigned page frames
based on their priority levels. Higher-priority processes receive a larger share of memory
resources.
Proportional Allocation: Proportional allocation assigns page frames to processes based on
their proportional share of the total memory demand. This approach ensures that processes
receive memory in proportion to their requirements.
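As a worked example (the numbers are illustrative): if m = 62 frames are free and two processes have sizes s1 = 10 and s2 = 127 pages, then S = 137, and proportional allocation gives process 1 about (10/137) × 62 ≈ 4 frames and process 2 about (127/137) × 62 ≈ 57 frames. In general, each process i receives a_i = (s_i / S) × m frames, where S is the sum of all process sizes.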
The choice of page frame allocation strategy depends on factors such as the system's memory
management policy, the number of processes, and their memory demands.
By effectively managing demand paging, employing suitable page replacement algorithms, and
optimizing page frame allocation, the operating system can ensure efficient memory utilization
and provide an illusion of larger memory space to processes, leading to improved system
performance.
Thrashing:
If the number of frames allocated to a low-priority process falls below the minimum number
required by the computer architecture, we must suspend that process’s execution. We should
then page out its remaining pages, freeing all its allocated frames. This provision introduces a
swap-in, swap-out level of intermediate CPU scheduling. Now consider any process that does
not have “enough” frames. If the process does not have the number of frames it needs to
support pages in active use, it will quickly page-fault. At this point, it must replace some page.
However, since all its pages are in active use, it must replace a page that will be needed again
right away. Consequently, it quickly faults again, and again, and again, replacing pages that it
must bring back in immediately. This high paging activity is called thrashing: a process is
thrashing if it is spending more time paging than executing.
Working Set Model:
The working-set model is a concept in the field of operating systems that helps in understanding and
analyzing the dynamic memory requirements of a process. It provides insights into the locality of memory
references and assists in optimizing the allocation of memory resources. The working-set model is based
on the following principles:
Working Set:
The working set of a process represents the set of pages that the process is actively using or referencing
within a specific time interval. It captures the temporal locality of memory references, indicating the
pages that are currently relevant and necessary for the process's execution.
Window of Time:
The working set is defined within a window of time, known as the working-set window or interval. The
working-set window represents a specific period during which the memory references of a process are
observed and recorded. It could be measured in terms of the number of memory references, the elapsed
time, or the number of instructions executed.
Locality of Reference:
The working-set model is built on the principle of locality of reference, which states that programs tend to
reference a small portion of their address space at any given time. It consists of two types of locality:
a. Temporal Locality: Temporal locality suggests that if a process has recently referenced a memory
location, it is likely to reference it again in the near future. This forms the basis of the working-set model,
where the working set captures the pages recently accessed by the process.
b. Spatial Locality: Spatial locality suggests that if a process references a particular memory location, it
is likely to reference nearby memory locations as well. This concept is used in memory management
techniques like page caching and pre-fetching.
Working-Set Size:
The working-set size refers to the number of distinct pages or memory locations present in the working
set of a process within the working-set window. It represents the amount of memory required by the
process to maintain good performance without excessive page faults. Determining an appropriate
working-set size helps in optimizing the allocation of memory resources to processes.
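A small C sketch of the idea (the window Δ and the reference trace are illustrative): it counts the distinct pages referenced in the last Δ references at each point in time:

```c
#include <stdio.h>

/* Working-set size at time t: distinct pages among the last DELTA references. */
#define DELTA 4

int working_set_size(const int *refs, int t) {
    int start = (t - DELTA + 1 > 0) ? t - DELTA + 1 : 0;
    int count = 0;
    for (int i = start; i <= t; i++) {
        int seen = 0;
        for (int j = start; j < i; j++)
            if (refs[j] == refs[i]) { seen = 1; break; }
        if (!seen) count++;   /* first occurrence of this page in the window */
    }
    return count;
}

int main(void) {
    int refs[] = {1, 2, 5, 6, 7, 7, 7, 7, 5, 1};
    int n = sizeof refs / sizeof refs[0];
    for (int t = 0; t < n; t++)
        printf("t=%d  WSS=%d\n", t, working_set_size(refs, t));
    return 0;
}
```

If the sum of all processes' working-set sizes exceeds the number of available frames, the system is heading toward thrashing, and some process should be suspended.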
Shared Memory and Memory-Mapped Files:
The relationship between shared memory and memory-mapped files lies in the way they both provide
mechanisms for processes to share data and communicate with each other efficiently. Both shared
memory and memory-mapped files allow multiple processes to access a common memory region, but
they differ in their implementation and use cases.
Shared Memory:
Shared memory is a technique that allows multiple processes to access a shared region of memory. In
shared memory, a specific portion of the physical memory is designated as shared, and each process that
needs to access it attaches or maps this shared memory segment to its own address space. The shared
memory region is typically managed by the operating system and can be directly accessed by multiple
processes without the need for inter-process communication mechanisms like message passing.
Shared memory provides fast and efficient communication between processes since they can directly read
from and write to the shared memory region. Changes made by one process in the shared memory are
immediately visible to other processes. This makes shared memory ideal for scenarios where frequent
data sharing and communication are required, such as inter-process communication, parallel processing,
and synchronization.
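For example, under POSIX (Linux and similar systems) a shared segment can be created with shm_open() and mapped with mmap(); the segment name "/demo_shm" below is an arbitrary example:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const size_t size = 4096;

    /* Create (or open) a named shared-memory object and size it. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    /* Map it; any process that maps the same name sees the same bytes. */
    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello from shared memory");
    printf("wrote: %s\n", p);

    munmap(p, size);
    close(fd);
    shm_unlink("/demo_shm");   /* remove the name once done */
    return 0;
}
```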
Memory-Mapped Files:
Memory-mapped files, on the other hand, provide a way to map a file or a portion of a file directly into
the virtual memory of a process. When a file is memory-mapped, the operating system assigns a range of
virtual memory addresses that correspond to the file's content. This allows processes to access the file
data as if it were part of their own address space, making the file content accessible through memory
operations.
Memory-mapped files enable efficient file I/O operations, since reading from or writing to a
memory-mapped file involves regular memory operations rather than explicit file read and write
operations.
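A corresponding C sketch maps an existing file read-only with POSIX mmap() (the file name data.txt is a placeholder); note that there are no read() calls, as the kernel pages the file contents in on demand:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* Map the whole file; its bytes now appear at address p. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Access file content with ordinary memory reads. */
    fwrite(p, 1, st.st_size, stdout);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```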
Overall, both shared memory and memory-mapped files facilitate efficient data sharing and
communication between processes, but shared memory is typically used for general-purpose inter-process
communication, while memory-mapped files are commonly employed for file-based operations and I/O
optimization.
Allocating Kernel Memory:
Kernel memory is often allocated from a free-memory pool different from the list used to satisfy
ordinary user-mode processes. There are two primary reasons for this:
1. The kernel requests memory for data structures of varying sizes, some of which are less than a page in
size. As a result, the kernel must use memory conservatively and attempt to minimize waste due to
fragmentation. This is especially important because many operating systems do not subject kernel code
or data to the paging system.
2. Pages allocated to user-mode processes do not necessarily have to be in contiguous physical
memory. However, certain hardware devices interact directly with physical memory—without the
benefit of a virtual memory interface—and consequently may require memory residing in physically
contiguous pages.
Kernel memory management is a critical aspect of operating system design and is responsible for
effectively managing and allocating memory resources within the kernel. Kernel memory management
typically involves the following key components and techniques:
Kernel Memory Zones:
Kernel memory is divided into different zones or pools to serve specific purposes. These zones are
designed to handle different types of kernel data structures and provide efficient memory management.
Commonly used zones include the following:
a. Kernel Code and Text Zone: This zone contains the executable code of the kernel, such as the core
kernel functions and drivers.
b. Kernel Data Zone: This zone holds kernel data structures, including task structures, process control
blocks, device drivers, and other kernel-related data.
c. Kernel Stack Zone: Each process or thread in the system has its own kernel stack, which is used for
storing function call information, local variables, and processor state during kernel-mode execution.
d. Page Frame Zone: This zone manages physical memory frames that are used by the kernel and the
processes running in the system.
Memory Allocation:
Kernel memory allocation refers to the process of assigning memory resources to various kernel data
structures and dynamically allocating memory as needed. Memory allocation in the kernel is typically
performed using techniques such as:
a. Static Allocation: Some kernel data structures, like fixed-size arrays or predefined data structures, are
allocated at compile time and have a fixed memory footprint.
b. Dynamic Allocation: Kernel memory management also involves dynamic allocation of memory when
the size of data structures is not known at compile time. Common techniques for dynamic allocation
include kernel memory pools, slab allocators, and buddy memory allocation.
Kernel Memory Pools:
Kernel memory pools are preallocated fixed-size memory blocks that are used to satisfy frequent kernel
memory allocation requests. Memory pools reduce fragmentation and improve memory allocation
efficiency. They are often used for managing frequently allocated data structures, such as task control
blocks or network buffers.
Slab Allocators:
Slab allocators are specialized memory allocators used for managing variable-sized kernel objects. Slabs
are preallocated and organized into caches, which store objects of a specific type and size. Slab
allocators improve memory allocation performance by avoiding the overhead of frequent memory
allocation and deallocation.
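In the Linux kernel, for instance, this interface appears as kmem_cache_create()/kmem_cache_alloc(); the sketch below is kernel-only code, and struct my_object is a hypothetical placeholder:

```c
#include <linux/slab.h>

struct my_object {          /* placeholder for a frequently allocated structure */
    int id;
    char name[32];
};

static struct kmem_cache *my_cache;

static int demo_slab(void) {
    /* Create a cache of fixed-size my_object slots. */
    my_cache = kmem_cache_create("my_object_cache",
                                 sizeof(struct my_object), 0, 0, NULL);
    if (!my_cache)
        return -ENOMEM;

    /* Allocation and release go through the cache, not kmalloc/kfree. */
    struct my_object *obj = kmem_cache_alloc(my_cache, GFP_KERNEL);
    if (obj)
        kmem_cache_free(my_cache, obj);

    kmem_cache_destroy(my_cache);
    return 0;
}
```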
Buddy Memory Allocation:
Buddy memory allocation is a technique used for dynamic memory allocation in the kernel. It manages
memory as a power-of-two-sized block and keeps track of free and allocated memory blocks. When a
memory request is made, the buddy system splits or merges blocks to fulfill the request, ensuring
efficient memory utilization.
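The core arithmetic of the buddy system is small enough to show directly: block sizes are rounded up to powers of two, and a block's buddy is found by XOR-ing its offset with its size. A hedged C illustration of just that address math (not a full allocator):

```c
#include <stdio.h>

/* Round a request up to the next power of two, as the buddy system does. */
static size_t round_up_pow2(size_t n) {
    size_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

/* The buddy of the block at 'offset' with size 'size' (both power-of-two
   aligned) differs from it in exactly one bit: offset XOR size. */
static size_t buddy_of(size_t offset, size_t size) {
    return offset ^ size;
}

int main(void) {
    printf("request 3000 -> block of %zu bytes\n", round_up_pow2(3000));
    printf("buddy of block at 8192 (size 4096) is at %zu\n",
           buddy_of(8192, 4096));   /* 12288 */
    return 0;
}
```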
Memory Deallocation:
Kernel memory management also involves deallocating or releasing memory that is no longer needed.
Proper deallocation of kernel memory is crucial to prevent memory leaks and maintain efficient memory
utilization. Kernel memory deallocation techniques typically include garbage collection, reference
counting, or explicit deallocation based on specific memory management policies.
Memory Protection:
Kernel memory management includes mechanisms to protect kernel memory from unauthorized access
or modification. This involves setting appropriate access control permissions, enforcing memory
protection mechanisms like page-level protection, and preventing user-level processes from directly
accessing or modifying kernel memory.
Overall, kernel memory management focuses on efficient memory allocation, proper deallocation, and
memory protection within the kernel to ensure stability, security, and optimal performance of the
operating system. The specific techniques and algorithms used may vary depending on the design and
implementation of the operating system.