Capacity Planning in Client/Server Environments
Daniel A. Menascé
George Mason University, Fairfax, VA 22030 USA
menasce@cs.gmu.edu
© 1995 Daniel A. Menascé

Outline
Part I: Client/Server Systems
Part II: Introduction to Capacity Planning
Part III: A Capacity Planning Methodology for C/S Environments
Part IV: Performance Prediction Models for C/S Environments
Part V: Advanced Predictive Models of C/S Systems
Part VI: Case Study
Bibliography

Part I: Client/Server (C/S) Systems

Definitions and Basic Concepts
- Client
- Server
- Work division between client and server
- Client/Server communication

Definitions and basic concepts
[Figure: two LAN segments, each with client workstations and a DB server, connected by routers to an FDDI ring.]

Definitions and basic concepts: Client
- Workstation with graphics and processing capabilities.
- Graphical User Interface (GUI) implemented at the client.
- Partial processing executed at the client.

Definitions and basic concepts: Server
- Machine with much larger processing and I/O capacity than the client.
- Serves the various requests from the clients.
- Executes a significant portion of the processing and I/O of the requests generated at the clients.

Work division between client and server
[Figure: the client runs the GUI, pre- and post-processing, and communications; the server runs the DB processing, I/O, and communications; the two sides are linked by the communications network.]

Interaction between client and server
Remote Procedure Call (RPC): the client pre-processes the request and sends execute_SQL(par1, par2, ...) to the DB server; the server processes it and returns result_SQL(...); the client then post-processes the result.

Part II: Introduction to Capacity Planning

Migration to C/S example: "downsizing" a claim processing application
- DB server connected to several PCs through an Ethernet LAN.
- GUI application executing at the PCs.
- LAN connected to the enterprise mainframe through a T1 line.
- DB server is updated every night.

Migration to C/S systems
[Figure: the original mainframe-based system, with terminals reaching the mainframe over a T1 line.]

Migration to C/S
[Figure: the DB server based system: PCs and a DB server on a LAN, connected through a gateway and a T1 line to the mainframe.]

Migration to C/S: some important questions
- How many clients can be supported by the DB server while maintaining a response time below 2.5 sec?
- How long does it take to update the DB every night?

Migration to C/S example: measurements with a prototype
During 30 minutes (1,800 sec):
- 25% CPU utilization
- 30% disk utilization
- 800 transactions were executed
Each transaction used 1,800 * 0.25 / 800 = 0.56 sec of CPU and 1,800 * 0.30 / 800 = 0.68 sec of disk.

Good News and Bad News
- Good news: we know the CPU and I/O service time of each transaction.
- Bad news: transactions at the DB server compete for the CPU and the disk, so queues form at each device. We do not know how long each transaction waits in the queue for the CPU and for the disk.

DB Server Model
[Figure: arriving transactions queue for the CPU and the disk inside the DB server and then depart.]

CPU or I/O Times?
The time spent at a device is the service demand (0.56 sec at the CPU) plus the time spent waiting in the queue.

Capacity Planning Definition
Capacity planning is the process of predicting when service levels will be violated as a function of the workload evolution, as well as determining the most cost-effective way of delaying system saturation.
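Before moving to the methodology, here is a minimal Python sketch of the arithmetic on the prototype measurement slide above. It simply applies the Service Demand Law (D = U * T / C, covered in Part IV) to the measured utilizations; all numbers come from the slide.

```python
# Per-transaction service demands from the prototype measurements
# (Service Demand Law: D = U * T / C).
T = 1800                      # measurement interval (sec)
C = 800                       # transactions completed in the interval
U_cpu, U_disk = 0.25, 0.30    # measured CPU and disk utilizations

D_cpu = U_cpu * T / C         # ~0.56 sec of CPU per transaction
D_disk = U_disk * T / C       # ~0.68 sec of disk per transaction
X = C / T                     # measured throughput (tps)

print(f"D_cpu = {D_cpu:.2f} s, D_disk = {D_disk:.2f} s, X = {X:.3f} tps")
```

These two demands are the parameters used by the DB server model later in the tutorial.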
C/S Migration Example: desired results
[Figure: response time (sec) as a function of the number of client workstations (50 to 100), with the 2.5 sec service level marked; the goal is to find where the curve crosses the service level.]

Part III: A Capacity Planning Methodology for Client/Server Environments

Capacity Planning Methodology for Client/Server Environments
[Flowchart: Understanding the Environment; Workload Characterization leading to a Workload Model, which is validated and calibrated into a valid model; Workload Forecasting; a Performance Model producing Performance Predictions; Developing a Cost Model leading to a Cost Model and Cost Predictions; and a Cost/Performance Analysis producing the Configuration Plan, the Investment Plan, and the Personnel Plan.]

Understanding the Environment
- Hardware and System Software
- Network Connectivity Map
- Network Protocols
- Server Configurations
- Types of Applications
- Service Level Agreements
- Support and Management Structure
- Procurement Procedures

Example of Understanding the Environment
- 5,000 PCs (386 and 486) running DOS and Windows 3.1, and 800 UNIX workstations.
- IBM MVS mainframe.
- 80 LANs in 20 buildings connected by an FDDI 100 Mbps backbone.
- 50 Cisco routers.
- Network technologies: FDDI, Ethernet, T1 links, and Internet.

Example of Understanding the Environment (continued)
- Protocols being routed: TCP/IP and Novell IPX.
- Servers: 80% are 486s and Pentiums and 20% are RISC workstations running UNIX.
- Applications: office automation (e-mail, spreadsheets, word processing), access to DBs (SQL servers), and resource sharing.
- Future applications: teleconferencing, EDI, image processing.

Workload Characterization
- Process of partitioning the global workload into subsets called workload components.
- Examples of workload components: DB transactions, requests to a file server, or jobs with similar characteristics.
- Workload components are composed of basic components.

Workload Characterization: workload components and basic components
Workload Component        Basic Components
e-mail                    send message; receive message
Access to a DB server     query; update
Access to mainframe       session establishment; execution of remote function; session termination

Workload Characterization: Basic Component Parameters
- Workload Intensity Parameters
  - number of messages sent/hour
  - number of query transactions/sec
- Service Demand Parameters
  - average message length
  - average I/O time per query transaction

Workload Characterization Methodology
- Identification of Workload Components.
- Identification of Basic Components.
- Parameter Selection.
- Data Collection: benchmarks and ROTs (Rules of Thumb) may be used.
- Workload partitioning: averaging and clustering.
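The partitioning step can be sketched in a few lines of Python. This example is not from the deck: the basic-component points (CPU demand, disk demand per transaction) and the two-class split are made-up values; it only shows how clustering followed by averaging yields one representative parameter set per workload class.

```python
# Illustrative sketch: group basic components into workload classes by
# clustering their (CPU demand, disk demand) points, then average per cluster.
import random

random.seed(1)
points = [(0.05, 0.10), (0.06, 0.12), (0.55, 0.70), (0.60, 0.65), (0.07, 0.09)]
k = 2  # assumed number of workload classes

def kmeans(points, k, iters=100):
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2 + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        centroids = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

for c, cl in zip(*kmeans(points, k)):
    print(f"class with average demands CPU={c[0]:.2f} s, disk={c[1]:.2f} s: {len(cl)} components")
```

A real study would feed measured parameters into this step and would typically use a statistical package rather than a hand-rolled k-means.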
Workload Characterization: Data Collection Alternatives
- No data collection facilities: use benchmarks and ROTs only.
- Some facilities: use measurements, benchmarks, and ROTs.
- Detailed facilities: use measurements only.

Benchmarks
- National Software Testing Laboratories (NSTL): servers and applications.
- Transaction Processing Performance Council (TPC)
- System Performance Evaluation Cooperative (SPEC)
- AIM benchmark suites

Workload Model Validation
[Flowchart: the actual workload and the synthetic workload model are both run against the system; if the measured response times are acceptably close, the workload model is valid; otherwise the model is calibrated and run again.]

Workload Forecasting
Process of predicting the workload intensity.
[Figure: bar chart of query and update arrival rates (tps) per quarter, Q1 through Q4.]

Workload Forecasting: Forecasting Business Units
Number of business elements that determine the workload evolution:
- number of invoices
- number of accounts
- number of employees
- number of claims
- number of beds

Workload Forecasting Methodology
- Application selection.
- Identification of Forecasting Business Units (FBUs).
- Statistics gathering on FBUs.
- FBU forecasting (using linear regression, moving averages, exponential smoothing) and business strategic plans.

Linear Regression Example
month          1        2        3        4        5        6        7
no. invoices   1200.0   1230.0   1400.0   1467.0   1590.0   1682.5   1784.2
Fitted line: y = 101.7x + 1072.3, R^2 = 0.9672 (the fit is over months 1-5; the values for months 6 and 7 are the forecasts given by the fitted line).
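A short Python sketch of the least-squares fit behind the example, using the monthly invoice counts above. Fitting months 1-5 reproduces the slope, intercept, and R^2 shown on the slide, and extending the line gives the forecasts for months 6 and 7.

```python
# Least-squares fit of the FBU series (observed months 1-5), then forecast.
months = [1, 2, 3, 4, 5]
invoices = [1200.0, 1230.0, 1400.0, 1467.0, 1590.0]

n = len(months)
mean_x = sum(months) / n
mean_y = sum(invoices) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(months, invoices))
sxx = sum((x - mean_x) ** 2 for x in months)

slope = sxy / sxx                    # 101.7 invoices/month
intercept = mean_y - slope * mean_x  # 1072.3

ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(months, invoices))
ss_tot = sum((y - mean_y) ** 2 for y in invoices)
r2 = 1 - ss_res / ss_tot             # 0.9672

print(f"y = {slope:.1f}x + {intercept:.1f}, R^2 = {r2:.4f}")
for month in (6, 7):                 # 1682.5 and 1784.2 invoices
    print(f"forecast for month {month}: {slope * month + intercept:.1f} invoices")
```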
Performance Prediction
- Predictive models: analytic or simulation based.
- Analytic models are based on Queuing Networks (QNs):
  - efficient,
  - allow for the fast analysis of a large number of scenarios,
  - ideal for capacity planning.

Performance Prediction: factors that impact performance
- Client stations
- Servers
- Communication media
- Protocols
- Interconnection devices (bridges, routers, and gateways)

Performance Prediction: Model Accuracy
- Low accuracy: coarse grain model; little effort required for data collection.
- High accuracy: fine grain model; detailed data collection required for parameterization.

Performance Prediction: An Example
[Figure: two LAN segments, each with clients and a DB server, connected by routers to an FDDI ring.]

Performance Prediction: QN for the Example
[Figure: queuing network for the example, with the clients, LAN segments, routers, and DB server devices of each segment, plus the FDDI interconnection between the segments.]

Performance Prediction: Response Times for the Example
[Figure: response time (sec) versus number of clients (50, 60, 80) for the example.]

Performance Model Validation
[Flowchart: measurements of the actual system and computations with the performance model produce measured and computed response times; if they are acceptably close, the performance model is valid; otherwise the model is calibrated and run again.]
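One way to make the "acceptable?" test concrete is to compare measured and computed response times against a tolerance. The sketch below is illustrative only: the numbers and the 20% threshold are assumptions, not values from the deck.

```python
# Acceptance check for the validation step: flag the model as valid when every
# relative deviation between measured and computed response times is within a
# tolerance (assumed to be 20% here).
measured = {"query": 1.9, "update": 3.1}   # seconds (made-up measurements)
computed = {"query": 2.1, "update": 2.8}   # seconds (made-up model outputs)
TOLERANCE = 0.20                           # assumed acceptable relative error

def is_valid(measured, computed, tol):
    errors = {c: abs(computed[c] - measured[c]) / measured[c] for c in measured}
    for c, e in errors.items():
        print(f"{c}: measured={measured[c]:.2f}s computed={computed[c]:.2f}s error={e:.0%}")
    return all(e <= tol for e in errors.values())

print("valid model" if is_valid(measured, computed, TOLERANCE) else "calibrate and try again")
```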
A Cost Model for C/S Environments
- Less than 5% of US companies quantify or control PC and LAN costs.
- Some hidden costs in C/S environments:
  - hardware maintenance and support
  - software maintenance and upgrades
  - software distribution costs
  - personnel costs (approx. 60% of total cost)

Some Cost ROTs
- Software and hardware upgrades cost 10% of the purchase price per year.
- A LAN administrator costs between US$500 and US$700 per client workstation/month.
- Training costs vary between US$1,500 and US$3,000 per technical staff person/year.
- 40% of personnel costs are in resource management, 40% in application development, and 20% in other categories.

Part IV: Performance Prediction Models for C/S Environments

Queues and Queuing Networks
[Figure: client requests cross the LAN to the DB server, queue for the CPU and the disk, and complete.]

Operational Analysis: Quick Review
- Little's Law
- Utilization Law
- Forced Flow Law
- Service Demand Law
- Response Time Law

Single Queue
For a single queue with transactions arriving at some rate (tps) and leaving at throughput X:
- X = average throughput
- W = average waiting time
- S = average service time
- R = average response time, with R = W + S

Little's Law
avg. number of people in the pub = avg. arrival rate at the pub * avg. time spent at the pub

Little's Law Example
A DB server executes 10 transactions per second. On the average, 20 transactions are being executed simultaneously. What is the average transaction response time?
X = 10 tps, N = 20
Little's Law: N = X * R, so R = N / X = 20 / 10 = 2 sec

Little's Law applied to single queues
In equilibrium the arrival rate equals the throughput X, and N = X * R.

Utilization Law
- U = busy time / measurement interval = B / T
- X = no. of transactions / measurement interval = C / T
- S = busy time / no. of transactions = B / C
Therefore U = B / T = (B/C) / (T/C) = S * X.
Utilization Law: U = S * X

Utilization Law Example
Each access to the DB server's disk takes 25 msec on the average. During a one-hour interval, 108,000 I/Os to the disk were executed. What is the disk utilization?
S = 0.025 sec; X = 108,000 / 3,600 = 30 accesses/sec
Utilization Law: U = S * X = 0.025 * 30 = 0.75 = 75%

Forced Flow Law
Xi = Vi * Xo
where Vi = avg. no. of visits to device i per transaction, Xi = throughput of device i, and Xo = system throughput.

Forced Flow Law Example
Each transaction executed on the DB server performs 3 disk accesses on the average. The disk utilization measured during a one-hour interval was 30%. During the same interval, 7,200 transactions were executed. What is the average service time at the disk?
Given: Vi = 3 disk accesses per transaction, Ui = 30% = 0.3, Xo = 7,200 / 3,600 = 2 tps
Forced Flow Law: Xi = Vi * Xo = 3 * 2 = 6 I/Os per second
Utilization Law: Ui = Si * Xi, so Si = Ui / Xi = 0.3 / 6 = 0.05 sec = 50 msec

Service Demand Law
The service demand D at a device is the total service time of a transaction at that device over all of its visits:
D = sum of the Si over all visits = V * S
Since U = B / T and Xo = C / T,
D = (U * T) / C = U / (C / T) = U / Xo
Service Demand Law: D = V * S = U / Xo

Response Time Law
Ro = sum over all devices i of Vi * Ri
where Ri is the average response time per visit to device i.

Operational Analysis: summary
- Little's Law: N = X * R
- Utilization Law: U = S * X
- Forced Flow Law: Xi = Vi * Xo
- Service Demand Law: D = U / Xo
- Response Time Law: Ro = sum over i of Vi * Ri
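The laws summarized above can be packaged as one-line functions. The calls below simply reproduce the worked examples from this part (the Little's Law, Utilization Law, and Forced Flow Law examples).

```python
# The operational laws as tiny helpers, applied to the examples in this part.

def littles_law_R(N, X):        # Little's Law: R = N / X
    return N / X

def utilization(S, X):          # Utilization Law: U = S * X
    return S * X

def device_throughput(V, X0):   # Forced Flow Law: Xi = Vi * X0
    return V * X0

def service_demand(U, X0):      # Service Demand Law: D = U / X0
    return U / X0

# Little's Law example: 20 concurrent transactions, 10 tps
print("R =", littles_law_R(20, 10), "sec")                 # 2.0 sec

# Utilization Law example: 25 ms per disk access, 108,000 I/Os per hour
print("U_disk =", utilization(0.025, 108_000 / 3_600))     # 0.75

# Forced Flow Law example: 3 visits/transaction, U = 0.30, 7,200 transactions/hour
Xi = device_throughput(3, 7_200 / 3_600)                   # 6 I/Os per second
print("S_disk =", 0.30 / Xi, "sec")                        # 0.05 sec = 50 msec
```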
Queuing Networks
Given the service demands and the number of customers, find the average response time (Ro), the throughput (Xo), and the average queue length at each device.

Queuing Networks: Types of Devices
- Queuing device, load independent: Si(n) = Si for all n.
- Queuing device, load dependent: Si(n) = f(n).
- Delay device: RT(i,n) = Di.

Queuing Network Solution
Basic technique: Mean Value Analysis (MVA).
Features: simple, iterative, and efficient.

Mean Value Analysis: Residence Time Equation
Residence time at device i:
RT(i,n) = Di + Di * NQ(i,n-1) = Di * (1 + NQ(i,n-1))
that is, my total service time plus the total service time of all the customers I find ahead of me,
where NQ(i,n) is the average number of transactions at device i when there are n transactions in the system.

Mean Value Analysis: Throughput Equation
With n transactions in the system, Little's Law gives n = Xo * Ro = Xo * sum over i of RT(i,n), so
Xo(n) = n / sum over i of RT(i,n)

Mean Value Analysis: Queue Length Equation
Little's Law at device i: NQ(i,n) = R(i,n) * X(i,n)
Forced Flow Law: X(i,n) = Vi * Xo(n)
Therefore NQ(i,n) = R(i,n) * Vi * Xo(n) = RT(i,n) * Xo(n)

Mean Value Analysis: Combining the 3 equations
RT(i,n) = Di * (1 + NQ(i,n-1)) if device i is a queuing device
RT(i,n) = Di if device i is a delay device
Xo(n) = n / sum over i of RT(i,n)
NQ(i,n) = RT(i,n) * Xo(n)
where NQ(i,0) = 0 for every device i.

n = 1:
RT(i,1) = Di * (1 + NQ(i,0)) = Di * (1 + 0) = Di
Xo(1) = 1 / sum over i of RT(i,1)
NQ(i,1) = RT(i,1) * Xo(1)

n = 2:
RT(i,2) = Di * (1 + NQ(i,1))
Xo(2) = 2 / sum over i of RT(i,2)
NQ(i,2) = RT(i,2) * Xo(2)

Mean Value Analysis: Example (Dcpu = 0.56 sec, Ddisk = 0.68 sec)
n   RT(cpu)   RT(disk)   Ro     Xo     NQ(cpu)   NQ(disk)
0   0.00      0.00       0.00   0.00   0.00      0.00
1   0.56      0.68       1.24   0.81   0.45      0.55
2   0.81      1.05       1.87   1.07   0.87      1.13
3   1.05      1.45       2.50   1.20   1.26      1.74

Revisiting the C/S migration example
[Figure: response time versus number of clients (20 to 80); the curve crosses the 2.5 sec SLA as the number of clients grows.]

C/S example: additional disk
[Figure: response time versus number of clients (50 to 80) for the original server and for the server with an additional disk, compared against the SLA.]
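A compact Python implementation of the MVA recursion above. Run with the service demands from the migration example (Dcpu = 0.56 sec, Ddisk = 0.68 sec), it reproduces the table line by line.

```python
# Exact MVA for a closed model with load-independent queuing devices.
def mva(demands, n_max):
    """Return (n, residence times, Ro, Xo, queue lengths) for n = 1..n_max."""
    nq = [0.0] * len(demands)                 # NQ(i, 0) = 0 for every device
    results = []
    for n in range(1, n_max + 1):
        rt = [d * (1.0 + q) for d, q in zip(demands, nq)]   # residence times
        r0 = sum(rt)                                        # response time
        x0 = n / r0                                         # throughput
        nq = [r * x0 for r in rt]                           # new queue lengths
        results.append((n, rt, r0, x0, nq))
    return results

for n, rt, r0, x0, nq in mva([0.56, 0.68], 3):
    print(f"n={n}: RT(cpu)={rt[0]:.2f} RT(disk)={rt[1]:.2f} "
          f"Ro={r0:.2f} Xo={x0:.2f} NQ(cpu)={nq[0]:.2f} NQ(disk)={nq[1]:.2f}")
```

Delay devices (with RT(i,n) = Di) can be added by skipping the (1 + NQ) factor for those devices.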
Part V: Advanced Models for the Performance Prediction of C/S Systems

Example: Telemarketing Application
- Customers order products through a catalog.
- Orders are made by phone using a credit card.
- 30,000 orders are received every day.
- Calls are placed on hold for the first available representative.
Question: how many representatives are needed to guarantee that an incoming call will not have to wait more than 5 sec on the average?

Example: Telemarketing Application (configuration)
- m (to be determined) workstations and an SQL server.
- Ethernet LAN (10 Mbps).
- SQL server: one CPU and one disk.

Telemarketing Application: Hierarchical Model
- The user model takes the call arrival rate and the throughputs Xc(k), k = 0, ..., m, and produces the average waiting time per call.
- The C/S model takes the application, server, and LAN parameters and produces the throughputs Xc(k).

Telemarketing Application: User Level Model
Birth-death model with states k = 0, 1, 2, ..., where k is the number of calls in the system: calls arrive at rate lambda and complete at rate Xc(k) when k <= m, and at rate Xc(m) when k > m.

Computation of the average call arrival rate:
- 30,000 calls/day
- 12 hours of operation per day
- balanced traffic during the day
lambda = 30,000 / (12 * 3,600) = 0.69 calls/sec

User Level Model: flow balance equations
between states 0 and 1: lambda * p(0) = Xc(1) * p(1)
between states 1 and 2: lambda * p(1) = Xc(2) * p(2)
...
between states m-1 and m: lambda * p(m-1) = Xc(m) * p(m)
...
between states k-1 and k, for k > m: lambda * p(k-1) = Xc(m) * p(k)

User Level Model: solution to the flow balance equations
p(k) = p(0) * lambda^k / (Xc(1) * Xc(2) * ... * Xc(k))                 for k <= m
p(k) = p(0) * lambda^k / (Xc(1) * ... * Xc(m) * Xc(m)^(k-m))           for k > m
with p(0) obtained from the condition that all the p(k) add up to 1.

User Level Model: average waiting time per call
W = Nw / lambda
where Nw = sum for k > m of (k - m) * p(k) is the average number of calls waiting for a representative.

C/S Model
[Figure: clients on the LAN send transactions to the DB server, which has a CPU and a disk.]
- If the LAN utilization is very low, model it as a delay device (e.g., in high-bandwidth LAN segments).
- If the LAN utilization is greater than 20%, model the LAN as a load dependent device.
- Bridges and routers should be modeled as delay devices, where the delay is the latency in sec/packet.

Telemarketing Application: average waiting time per call
[Figure: average waiting time per call (up to about 1,400 sec) versus number of representatives, from 125 to 195; the waiting time drops sharply as representatives are added.]
[Figure: zoom of the same curve between 150 and 185 representatives; the minimum number of representatives that keeps the average waiting time below 5 sec is 176.]
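A sketch of how the two levels fit together in code: the lower-level C/S model (exact MVA with a delay for the per-call handling) supplies the completion rates Xc(k), and the upper-level birth-death chain yields the average waiting time W. Only the arrival rate comes from the slides; the per-call handling time and the server demands below are hypothetical placeholders, so the resulting minimum number of representatives will not match the 176 obtained with the real parameters.

```python
# Two-level telemarketing model: closed C/S model (lower level) feeding a
# birth-death user model (upper level).

def xc(demands, think, k):
    """Call completion rate with k busy representatives (exact MVA with think time)."""
    nq = [0.0] * len(demands)
    x = 0.0
    for n in range(1, k + 1):
        rt = [d * (1.0 + q) for d, q in zip(demands, nq)]
        x = n / (think + sum(rt))            # response time law with think time
        nq = [r * x for r in rt]
    return x

def avg_wait(lam, m, rates, k_max=3000):
    """W = Nw / lam for the birth-death chain with departure rate Xc(min(k, m))."""
    p = [1.0]                                # unnormalized state probabilities
    for k in range(1, k_max + 1):
        p.append(p[-1] * lam / rates[min(k, m)])   # lam*p(k-1) = Xc(.)*p(k)
    total = sum(p)
    nw = sum((k - m) * p[k] for k in range(m + 1, k_max + 1)) / total
    return nw / lam

lam = 30_000 / (12 * 3_600)      # ~0.69 calls/sec (from the slides)
demands = [0.20, 0.30]           # hypothetical CPU and disk demands per call (sec)
think = 240.0                    # hypothetical call-handling time per call (sec)

for m in (170, 175, 180, 185, 190):          # candidate numbers of representatives
    rates = {k: xc(demands, think, k) for k in range(1, m + 1)}
    print(f"m = {m}: W = {avg_wait(lam, m, rates):.1f} sec")
```

Even with placeholder parameters the sketch shows the qualitative behavior of the curves above: W falls very steeply once enough representatives are available.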
Part VI: Capacity Planning Case Study for C/S Environments: A Retailing Company
(example adapted from Giacone '94)

Retailing Company: initial configuration
[Figure: stores connected directly to the mainframe.]

Retailing Company: new configuration
[Figure: client PCs and a server on a 4 Mbps ring, connected through routers and a 56 Kbps link to a 16 Mbps ring at the mainframe site.]

Retailing Company
- Mainframe stores the enterprise DB in DB2.
- Several times during the day, detail records are sent from all stores to the mainframe.
- A C/S application was developed to efficiently support a Decision Support System (DSS).
- The C/S application extracts and formats data from the mainframe and makes them locally available.
- Two UNIX servers are being considered: HP and NCR.
Question: how many clients can be supported by each type of server while keeping response times at acceptable levels?

Retailing Company: configuration alternatives
Client:
- PC 486, 16 MB, Windows 3.1
NCR DB server:
- UNIX, ORACLE
- NCR 3555, 4 Pentium processors (66 MHz), 256 MB RAM, SCSI disks
HP DB server:
- UNIX, ORACLE
- HP 9000-H70, 2 processors (96 MHz), 256 MB RAM, SCSI disks
LAN between clients and DB server:
- Token Ring 4 Mbps, TCP/IP
Link between the LAN and the mainframe:
- 3 Kbytes/sec (38.4 Kbps) effective.

Retailing Company: model parameterization
- A typical transaction was developed.
- The transaction was executed several times through a script on each server.
- Measurement utilities and tools used: System Activity Reporter (sar), ps, netstat, accounting, SPI (NCR), and PCS (HP).

Retailing Company: CPU service demand for the NCR server
UNIX sar -u output (15 one-minute samples):
         %usr   %sys   %wio   %idle
10:44      23     10      1      66
10:45      72     20      2       6
10:46      70     20      2       8
...       ...    ...    ...     ...
10:58      39     11      4      46
Total: 1,267% over the 15 samples; 9,600 transactions in the interval.
Dcpu = (average utilization) * (measurement interval) * (no. of processors) / (no. of transactions)
     = (1,267% / 15) * 900 sec * 4 / 9,600 transactions
     = 0.32 sec/transaction

Retailing Company: CPU service demand for the HP server
HP PCS output (one-minute samples, 07:40 to 07:45):
         CPU%
07:40    54.1
07:41    53.7
...       ...
07:45    53.1
Ucpu = 53.5%; 1,535 transactions in the interval.
Dcpu = 53.7% * 300 sec * 2 processors / 1,535 transactions = 0.208 sec/transaction

Retailing Company: disk service demand for the NCR server
NCR SPI "Disk I/O and Service Detail" (sample at 10:44):
         %Busy   # Reads   # Writes   msec
d0100      1.0         0         43   13.5
d1010     49.6         1      2,100   14.2
d0010      0.3         1          8   21.2
...        ...       ...        ...    ...
Total                  3     31,495
9,600 transactions in the interval.

disk    Vr       Vw      S (msec)   D (msec)
0100    0.0000   0.059   13.2        0.78
1010    0.0003   3.113   14.2       44.21
0010    0.0001   0.012   20.8        0.25
(Vr and Vw are reads and writes per transaction, S is the average service time per I/O, and D = (Vr + Vw) * S is the service demand.)

Retailing Company: disk service demand for the HP server
HP PCS output from 7:30 to 7:35 (one sample shown):
         %Busy   # Reads   # Writes
d1008      0.1        90          0
d1009      1.2       270          0
d1000      7.5         0      1,680
...        ...       ...        ...
1,525 transactions in the interval.

disk    Vr     Vw     S (msec)   D (msec)
1008    0.30   0.00   14.1        4.16
1009    0.89   0.00   13.5       11.95
1000    0.00   5.51   12.5       68.85

Response Time vs. No. Clients
[Figure: response time (sec, 0 to 20) versus number of clients (600 to 1,050) for the NCR (Rncr) and HP (Rhp) servers.]

Concluding Remarks
- C/S environments offer multiple configuration alternatives: capacity, quantity and location of servers, connectivity, and network capacity.
- Decisions related to system sizing require the use of predictive performance models.
- Analytic models are the best alternative for quickly analyzing multiple scenarios.

Bibliography
Book:
- D. A. Menascé, V. A. F. Almeida, and L. W. Dowdy, Capacity Planning and Performance Modeling: From Mainframes to Client-Server Systems, Prentice Hall, 1994.
Papers:
- G. Giacone, Real World Client/Server Sizing, Proc. CMG'94, Dec. 1994.
- J. Gunther, Benchmarking a Client/Server Application, Proc. CMG'94, Dec. 1994.
- M. Salsburg, Capacity Planning in the Interoperable Enterprise, Proc. CMG'94, Dec. 1994.
- T. Leo and D. Roberts, Benchmarking the File Server Performance and Capacity, Proc. CMG'94, Dec. 1994.
- T. E. Bell and A. M. Falk, Measuring Application Performance in Open Systems, Proc. CMG'92, Dec. 1992.
- E. Ho, Performance Management of Distributed Open Systems, Proc. CMG'92, Dec. 1992.
- E. Hufnagel, The Hidden Costs of Client/Server, Your Client/Server Survival Kit, supplement to Network Computing, Vol. 5, 1994.
- Information Technology Group, Cost of Computing: Comparative Study of Mainframe and PC/LAN Installations, Mountain View, CA, 1994.
- National Software Testing Laboratories, High Performance 486 DX2 and Pentium Systems, PC Digest Ratings Report, Vol. 8, No. 2, February 1994.
- J. Vicente, Network Capacity Planning, Proc. CMG'93, Dec. 1993.
- R. Waldner, Client/Server Capacity Planning, Proc. CMG'92, Dec. 1992.
- D. Menascé and V. Almeida, Two-Level Performance Models of Client-Server Systems, Proc. CMG'94, Dec. 1994.
- D. Petriu and C. Woodside, Approximate MVA for Markov Models of Client/Server Systems, Proc. Third IEEE Symp. on Parallel and Distributed Processing, Dallas, 1991.
- J. A. Rolia, M. Starkey, and G. Boersma, Modeling RPC Performance, Proc. IBM Canada CASCON'93, Toronto, Oct. 1993.

THE END
© 1995 Daniel A. Menascé