CS 501: Software Engineering
Lecture 22: Performance of Computer Systems
CS 501 Spring 2005

Administration

Performance of Computer Systems

In most computer systems the cost of people is much greater than the cost of hardware. Yet performance is important:
• Future loads may be much greater than predicted.
• A single bottleneck can slow down an entire system.

Predicting Performance Change: Moore's Law

Original version: The density of transistors in an integrated circuit will double every year. (Gordon Moore, Intel, 1965)
Current version: Cost/performance of silicon chips doubles every 18 months.

Moore's Law: Rules of Thumb

Planning assumptions. Every year:
• cost/performance of silicon chips improves 25%
• cost/performance of magnetic media improves 30%
10 years = 100:1
20 years = 10,000:1

Moore's Law and System Design

                     Design system   Production use   Withdrawn from production
                     2005            2008             2018
Processor speeds:    1               1.9              28
Memory sizes:        1               1.9              28
Disk capacity:       1               2.2              51
System cost:         1               0.4              0.01

Moore's Law Example

Will this be a typical personal computer?

            2005        2018
Processor   2 GHz       50 GHz
Memory      512 MB      14 GB
Disk        50 GB       2 TB
Network     100 Mb/s    1 Gb/s

Surely there will be some fundamental changes in how this power is packaged and used.

Parkinson's Law

Original: Work expands to fill the time available. (C. Northcote Parkinson)
Planning assumptions:
(a) Demand will expand to use all the hardware available.
(b) Low prices will create new demands.
(c) Your software will be used on equipment that you have not envisioned.

False Assumptions from the Past

• The Unix file system will never exceed 2 Gbytes (2^31 bytes).
• AppleTalk networks will never have more than 256 hosts (2^8 addresses).
• GPS software will not last 1024 weeks.
• Nobody at Dartmouth will ever earn more than $10,000 per month.
• etc., etc.
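The rules of thumb above compound quickly. A minimal sketch (not from the lecture) of the "current version" of Moore's Law, assuming cost/performance doubles every 18 months, reproduces the 10-year and 20-year planning figures:

```python
# Hypothetical helper: compound improvement factor under Moore's Law,
# assuming cost/performance doubles every 18 months (1.5 years).
def moores_law_factor(years, doubling_period_years=1.5):
    """Improvement factor after the given number of years."""
    return 2 ** (years / doubling_period_years)

print(moores_law_factor(10))   # roughly 100:1 over 10 years
print(moores_law_factor(20))   # roughly 10,000:1 over 20 years
```

The same exponential also shows why a system designed in 2005 and withdrawn in 2018 runs on hardware tens of times faster than the hardware it was designed for.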
Moore's Law and the Long Term

[Figure: Moore's Law growth curve from 1965 to 2005, extrapolated forward. At what level will growth flatten out, and when — within your working life?]

Predicting System Performance

• Mathematical models
• Simulation
• Direct measurement
• Rules of thumb

All require detailed understanding of the interaction between software and hardware systems.

Understand the Interactions between Hardware and Software

Example: execution of http://www.cs.cornell.edu/
• domain name service
• TCP connection
• HTTP get
[Figure: client exchanging messages with the DNS, TCP, and HTTP servers]

Understand the Interactions between Hardware and Software

[Figure: UML sequence diagram — :Thread, :Toolkit, :ComponentPeer, and target:HelloWorld exchanging run, callbackLoop, handleExpose, and paint messages]

Understand Interactions between Hardware and Software

[Figure: UML activity diagram — from the start state, Decompress, then fork into parallel Stream video and Stream audio activities, join, stop state]

Look for Bottlenecks

Possible areas of congestion:
• Network load
• Database access (how many joins to build a record?)
• Locks and sequential processing

CPU performance is rarely a factor, except in mathematical algorithms. More likely bottlenecks are:
• Reading data from disk (including paging)
• Moving data from memory to CPU

Look for Bottlenecks: Utilization

Utilization is the proportion of the capacity of a service that is used on average:

    utilization = mean service time / mean inter-arrival time

When the utilization of any hardware component exceeds 30%, be prepared for congestion. Peak loads and temporary increases in demand can be much greater than the average.
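The utilization formula above is a one-line calculation; the numbers here (a hypothetical disk with an 8 ms mean service time and a 20 ms mean inter-arrival time) are illustrative, not from the lecture:

```python
def utilization(mean_service_time, mean_interarrival_time):
    """Utilization = mean service time / mean inter-arrival time."""
    return mean_service_time / mean_interarrival_time

# A disk that needs 8 ms per request, with requests arriving
# every 20 ms on average:
u = utilization(0.008, 0.020)
print(u)   # 0.4 -- already past the 30% rule-of-thumb threshold
```

At 40% utilization this component is a candidate bottleneck even though it is idle more than half the time.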
Mathematical Models: Queues

[Figure: single-server queue — customers arrive, wait in line, receive service, depart]

Queues

[Figure: multi-server queue — customers arrive, wait in a single line, and depart after service at any of several servers]

Mathematical Models

Queueing theory: good estimates of congestion can be made for single-server queues with:
• arrivals that are independent, random events (a Poisson process)
• service times that follow families of distributions (e.g., negative exponential, gamma)

Many of the results can be extended to multi-server queues.

Behavior of Queues: Utilization

[Figure: mean delay plotted against utilization — delay is small at low utilization and rises steeply as utilization approaches 1]

Fixing Bad Performance

If a system performs badly, begin by identifying the cause:
• Instrumentation. Add timers to the code. Often this will reveal that the delays are centered in one specific part of the system.
• Test loads. Run the system with varying loads, e.g., high transaction rates, large input files, many users, etc. This may reveal the characteristics of when the system runs badly.
• Design and code reviews. Have a team review the system design and suspect sections of code for performance problems. This may reveal an algorithm that is running very slowly, e.g., a sort, locking procedure, etc.

Fix the underlying cause or the problem will return!

Techniques for Eliminating Bottlenecks

Serial and parallel processing:
• Single thread v. multi-thread, e.g., Unix fork
• Granularity of locks on data, e.g., record locking
• Network congestion, e.g., back-off algorithms

Measurements on Operational Systems

• Benchmarks: Run the system on standard problem sets, sample inputs, or a simulated load.
• Instrumentation: Clock specific events.

If you have any doubt about the performance of part of a system, experiment with a simulated load.
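The steep delay curve sketched above has a closed form for the simplest case. For a single-server queue with Poisson arrivals and exponential service times (the standard M/M/1 model, not derived in these notes), the mean time in the system is W = S / (1 − ρ), where S is the mean service time and ρ the utilization:

```python
def mm1_time_in_system(mean_service_time, utilization):
    """Standard M/M/1 result: mean time in system W = S / (1 - rho)."""
    if not 0 <= utilization < 1:
        raise ValueError("queue is unstable when utilization >= 1")
    return mean_service_time / (1 - utilization)

# With a 10 ms service time, mean delay explodes as utilization nears 1:
for rho in (0.3, 0.5, 0.9, 0.99):
    print(rho, mm1_time_in_system(0.010, rho))
```

At 30% utilization the delay is only about 1.4x the service time; at 99% it is 100x, which is why the 30% rule of thumb leaves so much headroom for peak loads.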
Techniques: Simulation

Model the system as a set of states and events:
• advance simulated time
• determine which events occurred
• update the state and the event list
• repeat

Discrete time simulation: time is advanced in fixed steps (e.g., 1 millisecond).
Next event simulation: time is advanced to the next event.
Events can be simulated by random variables (e.g., arrival of the next customer, completion of disk latency).

Timescale

Operations per second:

CPU instructions:         1,000,000,000
Disk latency:             60
Disk read:                25,000,000 bytes
Network, LAN:             10,000,000 bytes
Network, dial-up modem:   6,000 bytes

Case Study: Performance of a Disk Array

When many transactions use a disk array, each transaction must:
• wait for a specific disk platter
• wait for an I/O channel
• signal to move the heads on the disk platter
• wait for an I/O channel
• pause for the disk rotation
• read the data

Close agreement between the results from queueing theory, simulation, and direct measurement (within 15%).
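The next-event loop described above can be sketched for the simplest case, a single-server queue. This is an illustrative toy, not the lecture's disk-array model: exponential inter-arrival and service times are drawn at random, and the only event tracked is when the server next becomes free. The result can then be checked against the queueing-theory prediction, mirroring the close agreement reported in the case study:

```python
import random

def simulate_mm1(arrival_rate, service_rate, n_customers, seed=42):
    """Next-event simulation sketch of a single-server (M/M/1) queue.

    Returns the mean time a customer spends in the system
    (waiting plus service).
    """
    rng = random.Random(seed)
    clock = 0.0            # simulated time of the current arrival
    server_free_at = 0.0   # next event: when the server becomes idle
    total_time = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(arrival_rate)   # Poisson arrival process
        start = max(clock, server_free_at)       # wait if the server is busy
        server_free_at = start + rng.expovariate(service_rate)
        total_time += server_free_at - clock     # time in system
    return total_time / n_customers

# At utilization 0.5, M/M/1 theory predicts a mean time in system of
# 1 / (service_rate - arrival_rate) = 2.0; the simulation should land close.
print(simulate_mm1(arrival_rate=0.5, service_rate=1.0, n_customers=100_000))
```

A real disk-array model would add more event types (head movement, channel contention, rotational delay), but the advance-time / process-event / update-state loop is the same.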