TDDI04 Concurrent Programming, Operating Systems, and Real-time Operating Systems

Introduction

Contents [SGG7] Chapter 1
§ What Operating Systems Do
§ Computer System Organization and Architecture
• CPU – I/O interaction, interrupts, memory hierarchy
§ Operating System Operations
• Traps / system calls, dual mode, timer
• Process and memory management
§ Computing Environments
• Client-server, parallel and distributed systems
• Special-purpose systems, e.g. real-time / embedded systems

This introduction gives an overview of the major operating system components and covers basic computer system organization.

Copyright Notice: The lecture notes are mainly based on Silberschatz's, Galvin's and Gagne's book ("Operating System Concepts", 7th ed., Wiley, 2005). No part of the lecture notes may be reproduced in any form, due to the copyrights reserved by Wiley. These lecture notes should only be used for internal teaching purposes at Linköping University.

Acknowledgment: The lecture notes were originally compiled by C. Kessler, IDA.

TDDI04, K. Arvidsson, IDA, Linköpings universitet. © Silberschatz, Galvin and Gagne 2005

What is an Operating System (OS)?
§ A program that acts as an intermediary between a user of a computer and the computer hardware.
§ Operating system goals:
• Execute user programs in a well-defined environment.
• Make the computer system convenient to use.
• Administrate system resources.
• Enable efficient use of the computer hardware.
§ An operating system provides an environment within which other programs can do useful work; the OS does not perform any "useful" function itself.

Where are OSs found?
§ In general-purpose systems and in embedded systems. Of the microprocessors sold in 1999, roughly 99% went into embedded systems and only about 1% into general-purpose systems.
Operating Systems
§ General-purpose operating systems:
• Unix
• Sun Solaris
• HP-UX
• Linux
• Windows 95/98/NT/2000/XP
• Mac OS X
§ Application-specific operating systems, e.g. real-time OSs:
• OSE Delta
• VxWorks
• Chorus
• RT-Linux, RED-Linux
• EPOC, RT-Mach, ...

Real-Time Operating Systems (RTOS)
§ Real-time system (RTS)
§ Real-time constraints
§ Hard RTS
• Safety-critical applications (e.g. drive-by-wire)
§ Soft RTS

Operating System Definition
§ The OS is a resource allocator:
• Manages all resources of a computer system
• Decides between conflicting requests for efficient and fair resource use
§ The OS is a control program:
• Controls the execution of programs to prevent errors and improper use of the computer
§ There is no universally accepted definition.
§ "The one program running at all times on the computer" is called the kernel. Everything else is either a system program (ships with the operating system) or an application program.

Computer System Structure
§ A computer system can be divided into four components:
• Hardware – provides the basic computing resources: CPU, memory, I/O devices
• Operating system – controls and coordinates the use of the hardware among various applications and users
• Application programs – define the ways in which the system resources are used to solve the computing problems of the users (word processors, compilers, web browsers, database systems, games)
• Users – people, machines, other computers
Computer System Organization
§ Computer-system operation:
• One or more CPUs and device controllers connected through a common bus providing access to shared memory
• Concurrent execution of CPUs and devices, competing for memory cycles

CPU – I/O Device Interaction (1)
§ I/O devices and the CPU can execute concurrently.
§ Each device controller has a local buffer.
§ I/O moves data from the device to the local buffer of the controller.
§ The CPU moves data from/to main memory to/from the local buffers.
§ The device controller informs the CPU that it has finished its operation by causing an interrupt.
§ Remark: the alternative to interrupts is polling / busy-waiting, see [SGG7] Ch. 13.2.1.

CPU – I/O Device Interaction (2)
§ DMA = Direct Memory Access
• Allows for parallel activity of the CPU and I/O data transfer

I/O Interaction using Interrupts
§ Example: read from the keyboard (KBD). (The original slide shows a figure with the system bus, CPU, memory, the device controller, and the OS kernel with its ISR table and device drivers.)
• 0: The user program initiates the I/O operation KBD_RD.
• 1: The device places the typed token in the controller's local buffer.
• 2: The controller transfers the data and signals an interrupt for KBD_RD.
• 3: The CPU calls the ISR for KBD_RD via the ISR table.
• 4: The ISR uses the device driver KBD and passes the data to an OS buffer.
• 5: Return from interrupt.

Interrupt (1)
§ An interrupt transfers control to an interrupt service routine, generally through the interrupt vector (IRV), a branch table that contains the start addresses of all the service routines.
§ An operating system is interrupt driven.
§ The OS preserves the state of the CPU by saving the registers and the program counter.

Interrupt (2)
§ A trap is a software-generated interrupt caused either by an error or a user request.
• Examples: division by zero; request for OS service
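The dispatch mechanism described in these slides — an interrupt number indexing a branch table (the IRV) of service-routine start addresses, with the keyboard-read example — can be illustrated with a minimal Python simulation. All names here (`interrupt_vector`, `KBD_RD`'s number, `isr_keyboard`) are illustrative assumptions for the sketch, not a real kernel API:

```python
# Simulation of vectored interrupt dispatch: the interrupt number indexes a
# branch table that holds the service routine for each interrupt source.

KBD_RD = 1      # assumed interrupt number for "keyboard read" in this sketch
os_buffer = []  # the OS buffer the ISR copies device data into

def isr_keyboard(device_data):
    """ISR for KBD_RD: use the device driver to move data to an OS buffer."""
    os_buffer.append(device_data)

def isr_default(device_data):
    raise RuntimeError("unexpected interrupt")

# The interrupt vector (IRV): interrupt number -> interrupt service routine.
interrupt_vector = {KBD_RD: isr_keyboard}

def interrupt(number, device_data):
    """Hardware raises an interrupt: (state saving elided) index the IRV,
    run the ISR, then return from interrupt."""
    isr = interrupt_vector.get(number, isr_default)
    isr(device_data)

# A device controller finishes reading the token "A" and raises KBD_RD:
interrupt(KBD_RD, "A")
print(os_buffer)  # ['A']
```

A real CPU performs the same table lookup in hardware; the point of the sketch is only the indexing step, not the state-saving and mode-switching that surround it.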
§ The interrupt architecture must save the address of the interrupted instruction.
§ How is it determined which type of interrupt has occurred?
• Polling
• Vectored interrupt system: the interrupt number indexes the IRV

Interrupt Timeline
(Figure: interrupt timeline for a single process doing output; the CPU and the I/O device alternate between idle and busy phases.)

Storage Hierarchy
§ Storage systems are organized in a hierarchy, ordered by:
• Speed
• Cost
• Volatility
§ Caching – keeping copies of some data temporarily in faster but smaller storage, closer to the CPU

Background: Storage Hierarchy
Performance of the various levels of storage:

Level | Name         | Size    | Technology                              | Access (ns) | Bandwidth (MB/s) | Managed by       | Backed by
1     | Registers    | < 1 KB  | Custom memory with multiple ports, CMOS | 0.25 – 0.5  | 20 – 100 K       | Compiler         | Cache
2     | Cache        | < 16 MB | On-chip / off-chip CMOS SRAM            | 0.5 – 25    | 5 – 10 K         | Hardware         | Main memory
3     | Main memory  | < 16 GB | CMOS DRAM                               | 10 – 80     | 1 – 5 K          | Operating system | Disk, flash
3–4   | Flash memory | < 64 GB | NAND flash                              | 100 – 200   | 10 – 100         | Operating system | Depends on usage
4     | Disk storage | 500 GB  | Magnetic disk (mechanical)              | > 5000      | 20 – 150         | Operating system | Optical storage or tape
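The caching idea defined above — temporarily keep copies of data in use in faster but smaller storage, check the fast storage first, and on a miss copy the data in and evict something by a replacement policy — can be sketched in a few lines of Python. The backing-store contents, the one-entry cache size, and the LRU policy are illustrative choices for the sketch, not taken from the slides:

```python
# Sketch of the "check the cache first" principle with a tiny LRU
# replacement policy.
from collections import OrderedDict

backing_store = {"A": 42, "B": 7}  # the slower, larger storage level
cache = OrderedDict()              # the faster, smaller storage level
CACHE_SIZE = 1                     # cache is smaller than the storage cached

def read(key):
    if key in cache:               # hit: use the copy in fast storage
        cache.move_to_end(key)     # mark as most recently used
        return cache[key]
    value = backing_store[key]     # miss: copy from slow to fast storage
    cache[key] = value
    if len(cache) > CACHE_SIZE:    # replacement policy: evict the LRU entry
        cache.popitem(last=False)
    return value

read("A"); read("A")  # the second read is served from the cache
read("B")             # fills the cache; "A" is evicted (least recently used)
```

The same check-first / copy-on-miss pattern recurs at every level of the table above, whether the "cache" is a hardware SRAM, the OS page cache, or an application-level store.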
Background: Caching
§ Caching is an important principle, performed at many levels in a computer (in hardware, in the operating system, and in software).
§ Information in use is temporarily copied from slower to faster storage.
§ The faster storage (cache) is checked first to determine if the information is there:
• If it is, the information is used directly from the cache (fast).
• If not, the data is copied to the cache and used there.
§ The cache is smaller than the storage being cached:
• Cache management is an important design problem.
• Cache size and replacement policy must be chosen.

Memory Consistency Problems
§ Multitasking environments must be careful to use the most recent value, no matter where it is stored in the storage hierarchy.
• Example: migrating an integer A from disk to a register creates three copies.
§ A multiprocessor environment must provide cache coherency in hardware, such that all CPUs have the most recent value in their caches.
• More on this in TDDC78 – Programming parallel computers.
§ In a distributed environment the situation is even more complex:
• A similar consistency problem: several copies of a datum can exist.
• Various solutions exist, covered in [SGG7] Chapter 17 and in TDDB37.

Operating System Operations
§ Dual mode, system calls
§ CPU management
• Uniprogramming, multiprogramming, multitasking
• Process management
§ Memory management
§ File system and mass-storage management
§ Protection and security
Dual Mode, System Calls
§ Dual-mode operation allows the OS to protect itself and other system components.
• User mode and kernel mode (supervisor mode, privileged mode)
> Privileged instructions are only executable in kernel mode.
> A system call changes the mode to kernel; on return, the mode is reset to user.
• A mode bit is provided by hardware.
§ More in Lecture 2 on system calls and OS structures.

Uniprogramming
§ A single user with a single program cannot keep the CPU and the I/O devices busy at all times.
§ Example timeline: the process runs during time 0–10, waits for I/O during 10–110, runs during 110–120, and waits during 120–220.
• Process execution time: CPU: 10 + 10 time units; I/O: 100 + 100 time units.
• I.e., the process is I/O intensive (200/220 = 90.9%); CPU utilization is only 9.1%.

Multiprogramming
§ Needed for efficiency:
• Multiprogramming organizes jobs (code and data) so the CPU always has one to execute.
• A subset of the total jobs in the system is kept in memory.
• One job is selected and run via job scheduling.
• When it has to wait (e.g., for I/O), the OS switches to another job.
(Figure: memory layout for a multiprogrammed system.)

Multiprogramming with Three Programs
(Figure: programs A, B and C alternate between running and waiting for the disk, the printer and the network; the combined timeline keeps the CPU running most of the time.)
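The utilization figures in the uniprogramming example can be reproduced with a few lines of arithmetic:

```python
# The uniprogramming example from the slides: the process computes for 10
# time units, waits 100 units for I/O, computes 10 more, and waits 100 more.
cpu = 10 + 10
io = 100 + 100
total = cpu + io                  # 220 time units in all

io_fraction = io / total          # 200/220 -> I/O intensive
cpu_utilization = cpu / total     # 20/220  -> the CPU is idle > 90% of the time

print(f"{io_fraction:.1%} I/O, {cpu_utilization:.1%} CPU")  # 90.9% I/O, 9.1% CPU
```

The low CPU utilization is exactly what multiprogramming attacks: while one job waits its 100 units for I/O, the CPU can execute other jobs instead of idling.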
Timesharing (Multitasking)
§ An extension of multiprogramming: the CPU switches jobs so frequently that users can interact with each job while it is running.
• For interactive computer systems, the response time should be short (< 1 second).
• Each user has at least one program executing in memory → processes.
• If several jobs are ready to run at the same time → CPU scheduling.
• If the processes do not fit in memory, swapping moves them in and out to run.
• Virtual memory allows execution of processes that are not completely in memory.

CPU Time Sharing using the Timer Interrupt
§ A timer prevents infinite loops and processes hogging resources:
• The timer is set up to interrupt the computer after a specific period.
• The system decrements a counter at clock ticks.
• When the counter reaches zero, an interrupt is generated.
• Thus the OS regains control and can reschedule or terminate a program that exceeds its allotted time.

Process Management
§ A process is a program in execution.
• A unit of work within the system.
• A program is a passive entity; a process is an active entity.
§ A process needs resources to accomplish its task:
• CPU, memory, I/O, files
• Initialization data
§ Process termination requires reclaiming any reusable resources.
§ A single-threaded process has one program counter specifying the location of the next instruction to execute.
• The process executes instructions sequentially, one at a time, until completion.
§ A multi-threaded process has one program counter per thread.
§ Typically, a system has many processes (some user processes, some system processes) running concurrently on one or more CPUs.
• Concurrency is achieved by multiplexing the CPUs among the processes / threads.
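The timer mechanism above — decrement a counter at every clock tick, interrupt at zero so the OS regains control and reschedules — is how the CPU is multiplexed among processes. A small Python simulation, with an assumed quantum of 3 ticks and round-robin rescheduling as an illustrative policy:

```python
# Simulation of CPU time sharing driven by a timer interrupt.
from collections import deque

QUANTUM = 3                        # ticks until the timer interrupt fires

def run(processes):
    """processes: dict name -> remaining CPU ticks. Returns finish order."""
    ready = deque(processes)
    finished = []
    while ready:
        name = ready.popleft()     # dispatch the next ready process
        counter = QUANTUM          # timer set up for a specific period
        while counter > 0 and processes[name] > 0:
            processes[name] -= 1   # the process executes for one tick
            counter -= 1           # the system decrements the counter each tick
        if processes[name] == 0:
            finished.append(name)  # done: reclaim the process's resources
        else:
            ready.append(name)     # counter hit zero: interrupt, reschedule
    return finished

print(run({"A": 4, "B": 2}))  # ['B', 'A']
```

With quantum 3, A is preempted after 3 of its 4 ticks, B then runs its 2 ticks to completion, and A finishes on its second turn — no process can hog the CPU past its allotted time.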
Process Management Activities
§ The operating system is responsible for:
• Creating and deleting both user and system processes
• Suspending and resuming processes
• Providing mechanisms for process synchronization
• Providing mechanisms for process communication
• Providing mechanisms for deadlock handling
§ More in the lectures on Processes and Threads, CPU Scheduling, Synchronization, and Deadlocks.

Memory Management
§ Memory is a large array of words or bytes, each with its own address.
• Primary storage – directly accessible from the CPU
• Shared by the CPU and the I/O devices
• A volatile storage medium
§ All data must be in memory before and after processing; all instructions must be in memory in order to execute.
§ Memory management determines what is in memory when.
• Optimizing CPU utilization and memory utilization
§ OS memory management activities:
• Keeping track of which parts of memory are currently being used and by whom
• Deciding which processes (or parts thereof) and data to move into and out of memory
• Allocating and deallocating memory space as needed
§ More in the lectures on Virtual Memory and Memory Management.

Mass-Storage Management (1)
§ Disks (secondary storage) are used to store data that do not fit in main memory or data that must be kept for a "long" period of time.
§ OS activities:
• Free-space management
• Storage allocation
• Disk scheduling
§ Critical for system performance.
§ Some storage need not be fast:
• Tertiary storage: optical storage, magnetic tape, ...
• It still must be managed.
Mass-Storage Management (2)
§ The OS provides a uniform, logical view of information storage.
• It abstracts from physical to logical storage units: files.
• Each medium (disk, tape) has different properties: access speed, capacity, data transfer rate, sequential/random access.
§ OS file-system management:
• Files are usually organized into directories.
• Access control.
• OS activities include:
> Creating and deleting files and directories
> Primitives to manipulate files and directories
> Mapping files onto secondary storage
> Backing up files to tertiary storage
§ More in the lecture on File Systems.

Protection and Security
§ Protection – any mechanism for controlling the access of processes or users to resources defined by the OS.
§ Security – defense of the system against internal and external attacks.
• A huge range, including denial-of-service, worms, viruses, identity theft, and theft of service.
§ Systems generally first distinguish among users, to determine who can do what:
• User identities (user IDs, security IDs) associated with all files and processes of that user
• Group identifier (group ID)
§ More in the lecture on Protection and Security.

Computer Systems and Environments
§ Stand-alone desktop computers
§ Client-server systems
§ Parallel systems
§ Distributed systems
§ Peer-to-peer computing systems
§ Real-time systems
§ Handheld systems
§ Home networks
§ Embedded systems
§ The categories are blurring over time.

Stand-alone Desktop Computer
§ Office environment:
• PCs connected to a network, terminals attached to a mainframe, or minicomputers providing batch and timesharing
• Now portals allow networked and remote systems access to the same resources
• These used to be single systems, then came modems.
• Now they are firewalled and networked.

Client-Server Computing
§ Servers respond to requests by clients.
• Remote procedure call – also across machine boundaries via the network
§ Compute server: computes an action requested by the client.
§ File server: an interface to the file system (read, create, update, delete).

Parallel Systems (1)
§ Multiprocessor systems have more than one CPU in close communication.
• Soon the default, even for desktop machines (multi-core / CMP)
§ Tightly coupled system (a.k.a. shared-memory system, multiprocessor):
• The processors share memory and a clock.
• Communication usually takes place through the shared memory.
§ Advantages of parallel systems:
• Increased throughput
• Economical
> Scalability of performance
> One multiprocessor system vs. multiple single-processor systems (reduction of hardware such as disks, controllers, etc.)
• Increased reliability
> Graceful degradation (fault tolerance, ...)
> Fail-soft systems (replication, ...)

Parallel Systems (2)
§ Symmetric multiprocessing (SMP):
• Each processor runs an identical copy of the operating system.
• Many processes can run at once without performance deterioration.
• Most modern operating systems support SMP.
§ Asymmetric multiprocessing:
• Each processor is assigned a specific task; a master processor schedules and allocates work to slave processors.
• More common in special-purpose systems (e.g., embedded MP-SoC)
§ Remark: the notion of "processor" is relative. A traditional PC is normally considered to have only one CPU, but it usually also has a graphics processor, a communication processor, etc., and is still not considered a multiprocessing system.

Parallel Systems (3)
§ Speed-up of a single application by parallel processing?
• Requires parallelisation / restructuring of the program
• Or explicitly parallel algorithms
• Used in high-performance computing for numerically intensive applications (weather forecasts, simulations, ...)
§ Multicomputer ("distributed-memory parallel computer"):
• Loosely coupled
• Can be a more economic and scalable alternative to SMPs
• But more cumbersome to program (message passing)
• Example: "Beowulf clusters", such as the NSC Monolith cluster
§ More in TDDC78 – Programming parallel computers.

Distributed Systems (1)
§ Loosely coupled system:
• Each processor has its own local memory.
• The processors communicate with one another through various communication lines, such as high-speed buses or telephone lines (LAN, WAN, MAN, Bluetooth, ...).
§ Distributes the computation among several physical processors.
§ Advantages of distributed systems:
• Resource sharing
• Computation speed-up
• Adaptivity: load sharing (migration of jobs)
• Fault tolerance

Distributed Systems (2)
§ Network operating system:
• Provides file sharing, e.g., NFS – the Network File System
• Provides a communication scheme
• Runs independently from the other computers on the network
§ Distributed operating system:
• Less autonomy between the computers
• Gives the impression that there is a single operating system controlling the network
§ More about this in TDDB37 Distributed Systems.

Real-Time Systems
§ Often used as a control device in a dedicated application, such as controlling scientific experiments, medical imaging systems, industrial control systems, and some display systems.
§ Well-defined tasks with fixed time constraints.
§ Hard real-time systems:
• Conflict with time-sharing systems; not supported by general-purpose OSs.
§ Soft real-time systems:
• Limited utility in industrial control or robotics.
• Useful in applications (multimedia, virtual reality) requiring advanced OS features.
§ More in the lecture on Real-time Operating Systems.
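The hard/soft distinction above comes down to what happens when a fixed time constraint is violated: in a hard RTS a missed deadline is a system failure, while in a soft RTS a late result merely loses value. A toy Python model of that distinction (the function and its return strings are illustrative, not from the slides):

```python
# Toy model of hard vs. soft real-time constraints: the same missed
# deadline is a failure in a hard RTS but only degraded value in a soft RTS.
def check_deadline(finish_time, deadline, hard):
    if finish_time <= deadline:
        return "ok"
    if hard:
        # Hard RTS (e.g. drive-by-wire): a missed deadline is a failure.
        raise RuntimeError("hard deadline missed: system failure")
    # Soft RTS (e.g. multimedia): a late result is still usable, just worse.
    return "late (degraded value)"

check_deadline(8, 10, hard=True)    # finished in time: 'ok'
check_deadline(12, 10, hard=False)  # soft miss: 'late (degraded value)'
```

This is why a general-purpose time-sharing OS, which gives no guarantee on when a process next runs, cannot support hard real-time tasks but can often serve soft ones.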