MEMORY MANAGEMENT

WHAT IS DEMAND PAGING?
• When a process is swapped in, its pages are not all swapped in at once. Rather, they are swapped in only when the process needs them (on demand).

DEMAND PAGING (CONT.)
When the process references a page that is not loaded into memory, a page-fault trap is triggered and the following steps are taken:
1 The memory address requested by the process is first checked, to verify that the reference is valid.
2 If it is found to be invalid, the process is terminated.
3 If the reference is valid, a free frame is located, possibly from a free-frame list, into which the required page will be moved.
4 A disk operation is scheduled to move the required page from disk into the chosen frame. (This will usually block the process on an I/O wait, allowing some other process to use the CPU in the meantime.)
5 When the I/O operation is complete, the process's page table is updated with the new frame number, and the invalid bit is changed to valid.
6 The instruction that caused the page fault is restarted from the beginning.

PAGE REPLACEMENT
What happens when a process requests more pages and no free memory is available to bring them in? The following steps can be taken to deal with this problem:
1. Put the process in a wait queue until some other process finishes its execution, thereby freeing frames.
2. Or, remove some other process completely from memory to free its frames.
3. Or, find some pages that are not being used right now and move them to disk to get free frames. This technique is called page replacement and is the one most commonly used; several well-known algorithms carry it out efficiently.

BASIC PAGE REPLACEMENT
• Find the location of the page requested by the running process on the disk.
• Find a free frame. If there is a free frame, use it.
• If there is no free frame, use a page-replacement algorithm to select an existing frame to be replaced; such a frame is known as the victim frame.
• Write the victim frame to disk. Change all related page tables to indicate that this page is no longer in memory.
• Read the required page into the newly freed frame. Adjust all related page and frame tables to indicate the change.
• Restart the process that was waiting for this page.

FIFO PAGE REPLACEMENT
• A very simple page-replacement policy is FIFO (First In, First Out).
• As new pages are requested and swapped in, they are added to the tail of a queue, and the page at the head becomes the victim.
• It is not a very effective policy, but it can be used for small systems.

THRASHING
• A process that is spending more time paging than executing is said to be thrashing.
• Initially, when CPU utilization is low, the process-scheduling mechanism loads multiple processes into memory at the same time to increase the level of multiprogramming, allocating a limited number of frames to each process.
• As memory fills up, processes start to spend a lot of time waiting for their required pages to be swapped in, again leading to low CPU utilization because most of the processes are waiting for pages.

I/O HARDWARE
• An I/O system is required to take an application I/O request and send it to the physical device, then take whatever response comes back from the device and deliver it to the application.
• I/O devices can be divided into two categories:
a) Block devices − A block device is one with which the driver communicates by sending entire blocks of data. For example, hard disks, USB cameras, Disk-On-Key, etc.
b) Character devices − A character device is one with which the driver communicates by sending and receiving single characters (bytes, octets). For example, serial ports, parallel ports, sound cards, etc.

DEVICE DRIVERS
• Device drivers are software modules that can be plugged into an OS to handle a particular device.
• The operating system takes help from device drivers to handle all I/O devices. Device drivers encapsulate device-dependent code behind a standard interface, so that only the driver code contains the device-specific register reads and writes. A device driver is generally written by the device's manufacturer and delivered along with the device.
• A device driver performs the following jobs −
a) Accept requests from the device-independent software above it.
b) Interact with the device controller to perform I/O, with the required error handling.
c) Make sure that the request is executed successfully.

DEVICE CONTROLLERS
• The device controller works as an interface between a device and a device driver.
• I/O units (keyboard, mouse, printer, etc.) typically consist of a mechanical component and an electronic component, where the electronic component is called the device controller.
• There is always a device controller and a device driver for each device to communicate with the operating system. A device controller may be able to handle multiple devices.
• As an interface, its main task is to convert a serial bit stream into a block of bytes and perform error correction as necessary.

DEVICE CONTROLLERS (CONT.)
• Any device connected to the computer is connected by a plug and socket, and the socket is connected to a device controller.

COMMUNICATION TO I/O DEVICES
• The CPU must have a way to pass information to and from an I/O device. There are three approaches available for communication between the CPU and a device:
a) Special instruction I/O
b) Memory-mapped I/O
c) Direct memory access (DMA)

SPECIAL INSTRUCTION I/O
• This approach uses CPU instructions that are specifically made for controlling I/O devices.
• These instructions typically allow data to be sent to an I/O device or read from an I/O device.
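The "standard interface" idea from the DEVICE DRIVERS section, and the block-vs-character split from I/O HARDWARE, can be sketched in Python. This is an illustrative model only: all class and method names here are invented, and on real hardware the method bodies would be device-specific register reads and writes.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: every driver exposes the same OS-facing interface,
# while each subclass hides its device-specific behaviour inside the methods.
class DeviceDriver(ABC):
    @abstractmethod
    def read(self) -> bytes: ...
    @abstractmethod
    def write(self, data: bytes) -> None: ...

class BlockDevice(DeviceDriver):
    """Transfers whole fixed-size blocks, like a disk."""
    BLOCK_SIZE = 4  # tiny block size, for illustration

    def __init__(self):
        self.storage = bytearray()

    def write(self, data: bytes) -> None:
        # Block devices deal in whole blocks: pad out to a block boundary.
        pad = (-len(data)) % self.BLOCK_SIZE
        self.storage += data + bytes(pad)

    def read(self) -> bytes:
        return bytes(self.storage)

class CharDevice(DeviceDriver):
    """Transfers one byte at a time, like a serial port."""
    def __init__(self):
        self.buffer = bytearray()

    def write(self, data: bytes) -> None:
        for byte in data:          # one character at a time
            self.buffer.append(byte)

    def read(self) -> bytes:
        return bytes(self.buffer)
```

Device-independent OS code can call read() and write() on either driver without knowing which kind of device is underneath; that separation is the whole point of the driver layer.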
MEMORY-MAPPED I/O
• The same address space is shared by memory and I/O devices.
• The device is connected directly to certain main-memory locations, so the I/O device can transfer blocks of data to/from memory without going through the CPU.
• The OS allocates a buffer in memory and informs the I/O device to use that buffer to send data to the CPU.
• The I/O device operates asynchronously with the CPU and interrupts the CPU when finished.

DIRECT MEMORY ACCESS (DMA)
• Direct Memory Access (DMA) means the CPU grants an I/O module the authority to read from or write to memory without CPU involvement.
• The DMA module itself controls the exchange of data between main memory and the I/O device.
• The CPU is involved only at the beginning and end of the transfer and is interrupted only after the entire block has been transferred.
• The operating system uses the DMA hardware as follows −
1 The device driver is instructed to transfer disk data to a buffer at address X.
2 The device driver then instructs the disk controller to transfer the data to the buffer.
3 The disk controller starts the DMA transfer.
4 The disk controller sends each byte to the DMA controller.
5 The DMA controller transfers the bytes to the buffer, increasing the memory address and decreasing the counter C until C becomes zero.
6 When C becomes zero, the DMA controller interrupts the CPU to signal transfer completion.

POLLING VS INTERRUPT I/O
• A computer must have a way of detecting the arrival of any type of input. There are two ways this can happen, known as polling and interrupts.
• Both of these techniques allow the processor to deal with events that can happen at any time and that are not related to the process it is currently running.

POLLING I/O
• Polling is the simplest way for an I/O device to communicate with the processor.
• The process of periodically checking the status of a device to see if it is time for the next I/O operation is called polling.
• The I/O device simply puts the information in a status register, and the processor must come and get the information.
• Most of the time devices will not require attention, and when one does, it will have to wait until it is next interrogated by the polling program.
• This is an inefficient method, and much of the processor's time is wasted on unnecessary polls.

INTERRUPT I/O
• An interrupt is a signal to the microprocessor from a device that requires attention.
• A device controller puts an interrupt signal on the bus when it needs the CPU's attention. When the CPU receives an interrupt, it saves its current state and invokes the appropriate interrupt handler using the interrupt vector (the addresses of OS routines that handle various events).
• When the interrupting device has been dealt with, the CPU continues with its original task as if it had never been interrupted.

WHAT IS A DEADLOCK?
• A deadlock is a set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.

HOW TO AVOID DEADLOCKS
• Deadlock can be avoided by preventing at least one of the following four conditions, because all four must hold simultaneously for deadlock to occur.
1 Mutual Exclusion − Shareable resources such as read-only files do not lead to deadlock, but resources such as printers and tape drives require exclusive access by a single process.
2 Hold and Wait − To break this condition, processes must be prevented from holding one or more resources while simultaneously waiting for one or more others.
3 No Preemption − Preempting a process's resource allocations, wherever possible, can avoid deadlock.
4 Circular Wait − Circular wait can be avoided if we number all resources and require that processes request resources only in strictly increasing (or decreasing) order.

HANDLING DEADLOCK
• The following three strategies can be used to remove a deadlock after it has occurred.
Preemption − We can take a resource from one process and give it to another. This will resolve the deadlock, but it can sometimes cause problems of its own.
Rollback − In situations where deadlock is a real possibility, the system can periodically make a record of the state of each process; when deadlock occurs, it rolls everything back to the last checkpoint and restarts, allocating resources differently so that the deadlock does not recur.
Kill one or more processes − This is the crudest way, but it works.

WHAT ARE THREADS?
• A thread is an execution unit that consists of its own program counter, stack, and set of registers.
• Threads are also known as lightweight processes.
• Threads are a popular way to improve applications through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel.
• As each thread has its own independent execution state, multiple parts of a process can execute in parallel by increasing the number of threads.

TYPES OF THREADS
• There are two types of threads:
a) User threads
b) Kernel threads
• User threads are implemented above the kernel, without kernel support. These are the threads that application programmers use in their programs.
• Kernel threads are supported within the kernel of the OS itself. All modern OSs support kernel-level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service multiple kernel system calls simultaneously.

MULTITHREADING MODELS
• User threads must be mapped to kernel threads by one of the following strategies:
1. Many-to-One Model
2. One-to-One Model
3. Many-to-Many Model

MANY-TO-ONE MODEL
• In the many-to-one model, many user-level threads are all mapped onto a single kernel thread.
• Thread management is handled by the thread library in user space, which is very efficient. However, if one thread makes a blocking system call, the entire process blocks.

ONE-TO-ONE MODEL
• The one-to-one model creates a separate kernel thread to handle each user thread.
• Most implementations of this model place a limit on how many threads can be created.
• Linux, and Windows from 95 through XP, implement the one-to-one model for threads.
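The circular-wait rule from HOW TO AVOID DEADLOCKS can be demonstrated with ordinary locks: if every thread acquires locks in increasing order of one fixed numbering, no cycle of waits can form. A minimal Python sketch, where the numbering scheme (the RANK table) is our own invention:

```python
import threading

# Assign every resource a fixed rank; requests must follow increasing rank.
lock_a = threading.Lock()
lock_b = threading.Lock()
RANK = {id(lock_a): 1, id(lock_b): 2}

def acquire_in_order(*locks):
    """Acquire locks sorted by fixed rank, which breaks circular wait."""
    ordered = sorted(locks, key=lambda lk: RANK[id(lk)])
    for lk in ordered:
        lk.acquire()
    return ordered

def release(locks):
    for lk in reversed(locks):
        lk.release()

finished = []

def worker(name, first, second):
    # Callers may name the locks in opposite orders, but acquire_in_order
    # imposes the single global order 1, 2 - so neither thread can hold
    # lock_b while waiting for lock_a.
    held = acquire_in_order(first, second)
    finished.append(name)
    release(held)

t1 = threading.Thread(target=worker, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("t2", lock_b, lock_a))  # reversed!
t1.start(); t2.start()
t1.join(); t2.join()   # both threads complete; no deadlock
```

Without the sorting step, the reversed acquisition order in t2 is exactly the hold-and-wait cycle that can deadlock two threads.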
MANY-TO-MANY MODEL
• The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads, combining the best features of the one-to-one and many-to-one models.
• Users can create any number of threads.
• A blocking kernel system call does not block the entire process.
• Processes can be split across multiple processors.

BENEFITS OF MULTITHREADING
• Responsiveness.
• Resource sharing, hence better utilization of resources.
• Economy: creating and managing threads is cheap.
• Scalability: one thread runs on one CPU, so the threads of a multithreaded process can be distributed over several processors.
• Smooth context switching: context switching refers to the procedure the CPU follows to change from one task to another.

MULTITHREADING ISSUES
Thread Cancellation − Thread cancellation means terminating a thread before it has finished its work. There are two approaches: asynchronous cancellation, which terminates the target thread immediately, and deferred cancellation, which allows the target thread to periodically check whether it should be cancelled.
fork() System Call − fork() is a system call executed in the kernel through which a process creates a copy of itself. The problem in a multithreaded process is: if one thread calls fork(), should the entire process (all its threads) be copied, or only the calling thread?
Security Issues − There can be security issues because of the extensive sharing of resources between multiple threads.
There are many other issues that can arise in a multithreaded process, but appropriate solutions are available for them; pointing out some issues here was just to show both sides of the coin.
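Deferred cancellation, as described above, can be sketched with Python's threading module: the target thread polls a flag at safe points and cleans up after itself, instead of being killed mid-operation. The flag and step structure here are illustrative:

```python
import threading
import time

cancel_requested = threading.Event()  # set by whoever wants to cancel
progress = []

def target():
    # Deferred cancellation: check the flag at each safe point
    # (between units of work), then clean up and exit voluntarily.
    for step in range(1000):
        if cancel_requested.is_set():
            progress.append("cleaned up")
            return
        progress.append(step)
        time.sleep(0.001)  # simulate one unit of work

t = threading.Thread(target=target)
t.start()
time.sleep(0.05)          # let the thread run a few steps
cancel_requested.set()    # request cancellation
t.join()                  # thread notices the flag and exits cleanly
```

Asynchronous cancellation would instead kill the thread immediately, which risks leaving shared data half-updated; that is why most systems prefer the deferred form shown here.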