WebSphere for z/OS to CICS and IMS Connectivity Performance

Compare the performance of the connectors
Look at the environment that was used
See the key findings for each measurement

Tamas Vilaghy, Rich Conway, Kim Patterson, Rajesh P Ramachandran, Robert W St John, Brent Watson, Frances Williams

ibm.com/redbooks

IBM Redpaper
International Technical Support Organization
January 2006

Note: Before using this information and the product it supports, read the information in "Notices" on page v.

First Edition (January 2006)

This edition applies to Version 5, Release 1, Modification 02 of WebSphere Application Server for z/OS.

© Copyright International Business Machines Corporation 2006. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Notices  v
  Trademarks  vi
Preface  vii
  The team that wrote this Redpaper  vii
  Become a published author  x
  Comments welcome  xi
Chapter 1. Introduction  1
  1.1 Connectivity design considerations  3
  1.2 Summary of key performance results  6
Chapter 2. The measurement environment  9
  2.1 Test objectives  10
    2.1.1 Approach  10
    2.1.2 Test scenarios  10
  2.2 Infrastructure  11
    2.2.1 The sysplex configuration  11
  2.3 WLM  15
  2.4 WebSphere Application Server  16
  2.5 CICS  17
    2.5.1 CICS Transaction Gateway  21
    2.5.2 WebSphere MQ/CICS DPL bridge  22
  2.6 IMS  22
    2.6.1 IMS Connect environment  22
    2.6.2 The IMS environment  23
    2.6.3 IMS back-end transactions  24
  2.7 SOAP  25
Chapter 3. The Trader applications  27
  3.1 Overview of Trader application  28
    3.1.1 Trader IMS and CICS applications and data stores  29
    3.1.2 SOAP considerations  31
    3.1.3 Trader Web front-end user interface  33
    3.1.4 Trader interface architecture and implementation  35
    3.1.5 Packaging  38
  3.2 TraderCICS  41
  3.3 TraderSOAP  42
  3.4 TraderMQ  43
  3.5 TraderIMS  44
Chapter 4. Measurements and results  47
  4.1 The testing procedure  49
    4.1.1 The test script  49
    4.1.2 RMF Monitor III  50
  4.2 Recorded data  50
    4.2.1 WebSphere Studio Workload Simulator  50
    4.2.2 RMF Monitor I  52
  4.3 Metrics in our final analysis  56
  4.4 Tuning and adjustment  58
    4.4.1 Changing the settings  58
    4.4.2 Adjustment  59
  4.5 Results for CICS  60
    4.5.1 CICS Transaction Gateway  61
    4.5.2 SOAP for CICS  68
    4.5.3 CICS MQ DPL Bridge  76
  4.6 Results for IMS  82
    4.6.1 IMS Connect  83
    4.6.2 IMS MQ DPL Bridge  89
  4.7 Connector and data size comparisons  90
    4.7.1 CICS comparison charts  91
    4.7.2 IMS comparison charts  105
Abbreviations and acronyms  109
Related publications  113
  IBM Redbooks  113
  Other publications  113
  Online resources  113
  How to get IBM Redbooks  114
  Help from IBM  114
Index  115
Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

Eserver®, Redbooks (logo)™, z/OS®, zSeries®, AIX®, CICS®, DB2®, IBM®, IMS™, MVS™, OS/390®, Redbooks™, RACF®, RMF™, WebSphere®

The following terms are trademarks of other companies:

Java, J2EE, JVM, JSP, JMX, JDBC, JavaServer Pages, JavaBeans, Java Naming and Directory Interface, Forte, EJB, Enterprise JavaBeans, and other Java trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Preface

This IBM® Redpaper focuses on helping you understand the performance implications of the different connectivity options from WebSphere® for IBM z/OS® to CICS® or IMS™. The architectural choices can be reviewed in WebSphere for z/OS to CICS/IMS Connectivity Architectural Choices, SG24-6365. That IBM Redbook shows you the different attributes of a connection, such as availability, security, transactional capability, and performance; however, it does not compare the performance impact of the various connectivity options. Instead, it emphasizes the architectural solution and the non-functional requirements.
For this paper, we selected connectivity options from that book and created a measurement environment to simulate customer scenarios. For our CICS customers, we ran tests with CICS TG, SOAP for CICS, and CICS MQ DPL Bridge. For our IMS customers, we ran tests with IMS Connect and with IMS MQ DPL Bridge. We selected 500-byte, 5 KB, and 20 KB COMMAREA sizes, with very complex records, to simulate complex customer scenarios.

All of our measurements were done during a quick six-week residency. However, an issue that affected some of our results arose immediately after our measurements were completed: service to the WebSphere Studio Enterprise Developer development tooling significantly improved CICS SOAP performance. Because we were unable to rerun our performance tests after this service became available, and because we did not want to provide misleading results, we worked with IBM performance experts and developers to provide best estimates for the measurements that were affected by these changes. These estimates are based on other measurements that are outside the scope of this Redpaper.

All measurements were done with the ITSO-developed Trader application, which has been used in many redbooks over the years. The latest version can be downloaded from WebSphere for z/OS Connectivity Handbook, SG24-7064-01. Before coming to any conclusion, we suggest that you evaluate your own application.

The team that wrote this Redpaper

This Redpaper was produced by a team of specialists from around the world working at the International Technical Support Organization (ITSO), Poughkeepsie Center.

Tamas Vilaghy is currently a project manager in the Design Center for On Demand Business, Poughkeepsie. He was a project leader at the ITSO, Poughkeepsie Center until 2004. He led Redbook projects that involved e-business on IBM Eserver zSeries® servers.
Before joining the ITSO, he worked in the System Sales Unit and Global Services departments of IBM Hungary. Tamas spent two years in Poughkeepsie, from 1998 to 2000, working with zSeries marketing and competitive analysis. From 1991 to 1998, he held technical, marketing, and sales positions for zSeries. From 1979 to 1991, he was involved with system software installation, development, and education.

Rich Conway is a senior IT specialist at the ITSO, Poughkeepsie Center. He has 21 years of experience in all aspects of MVS™ and z/OS as a systems programmer. He has worked extensively with UNIX® System Services and WebSphere on z/OS. He was a project leader for the ITSO for many redbooks and has also provided technical advice and support for numerous redbooks over the past 10 years.

Kim Patterson has been with IBM for eight years, working on customer contracts in OS/390® and z/OS. Her experience is with the development and implementation of OS/390 and z/OS applications in IMS, DB2®, and CICS. During her IT career she has worked as an IMS and DB2 database administrator as well as an application analyst and developer. Most recently she was a member of the IBM Learning Services DB2 team on the OS/390 and z/OS platforms. As an instructor, Kim specialized in DB2 SQL, application programming, and administration courses. Recently she participated in another Redbook project on APPC protected conversations.

Rajesh P Ramachandran is an advisory software engineer for IBM zSeries e-business Services. He has 10 years of experience in application development on various platforms, including mainframe, UNIX, and Linux®. He has used COBOL, Java™, CICS, and Forte in his assignments. Recently, he was involved with DB2 tools development, where he was a lead developer of DB2 Data Archive Expert. Rajesh is currently on assignment in the Design Center for On Demand Business, Poughkeepsie Center.
Robert W St John is a senior performance analyst from the IBM Poughkeepsie lab. His primary areas of expertise are WebSphere, Java, and UNIX System Services performance on z/OS. Robert has 23 years of experience with MVS and z/OS, including performance tools, system programming, and performance analysis. Although his primary focus is on improving the performance of WebSphere and Java products, he is also an author and a frequent conference speaker, providing information about WebSphere performance on z/OS.

Brent Watson is an eServer IT architect at the IBM Client Technology Center (CTC). He has a strong background in IT consulting and entrepreneurship, with technical expertise in J2EE and .NET software architecture, application servers, portal implementation, and business intelligence solutions architecture. He holds a BS in Computer Science from Clarkson University.

Frances Williams is a senior consultant with eServer e-Business Services in the United States. She has over 20 years of experience in the IT field as an application design architect. Her focus is on z/OS platform technologies, which include WebSphere Application Server, WebSphere MQ, CICS, and many development languages. Her skills include performance tuning and troubleshooting application problems.

Thanks to the following people for their contributions to this project:

Robert Haimowitz, Patrick C. Ryan, Michael G. Conolly
ITSO, Poughkeepsie Center

Mitch Johnson
IBM Software Services for WebSphere

Denny Colvin
IBM WebSphere Studio Workload Simulator Development

Peter Mailand
IBM RMF™ Tools Development, Boeblingen

Phil Anselm, Chuck Neidig, Sun Sy
IBM Server Group Software Services

Kathy Walsh
IBM Washington System Center

Kenneth Blackman
IMS Advanced Technical Support

Colin Paice
IBM Hursley, WebSphere MQ Development

Nigel Williams
IBM Design Center for On Demand Business, Montpelier, France

Phil Wakelin, Mark Cocker, Richard Cowgill, Catherine Moxey, Ian J Mitchell, John Burgess, Trevor Clarke
IBM Hursley, CICS Development

Sinmei DeGrange, David Viguers, Barbara Klein, Gerald Hughes, Judith Hill
IMS Connect Development

Forsyth Alexander
ITSO, Raleigh Center

Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners, and/or customers.

Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our papers to be as helpful as possible. Send us your comments about this Redpaper or other Redbooks™ in one of the following ways:

Use the online Contact us review redbook form found at: ibm.com/redbooks

Send your comments in an email to: redbook@us.ibm.com

Mail your comments to: IBM Corporation, International Technical Support Organization, Dept.
HYJ Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400

Chapter 1. Introduction

This chapter provides an introduction to connectivity design and also gives a summary of the results of our project.

The first section highlights aspects that must be evaluated when designing connectivity, such as security, scalability, availability, and standards compliance. Each of these affects performance, usually negatively.

The second section gives a short summary of our results. The reason for this summary is to highlight the key messages for those readers who do not have time to work through all of the details of our measurements. It should be noted that, after our measurements were completed, some significant improvements were made in the development tools, and these influenced the measurement results. We have done our best to adjust the results accordingly, based on other performance measurements that were done independently of this paper.

This IBM Redpaper is a companion to WebSphere for z/OS Connectivity Architectural Choices, SG24-6365. This paper provides information about the performance costs of selected connectivity options that are described in SG24-6365.

Important: Keep the following information in mind as you read this paper:

- Because all our testing was done in a short period of time, we were unable to refine our results enough to consider them official performance measurements. However, we think our experiences can be used to understand, at a high level, the performance trade-offs that are associated with the various connectivity options.
- Performance improvements in various areas are being released quickly, and every application is different. Even if we had the best possible performance results for our application, we could only provide general guidelines.
For this reason, we recommend that you test your own application on the latest software before making major design decisions.

The Trader application that we deployed used Virtual Storage Access Method (VSAM) files in the CICS case and Data Language One Hierarchic Indexed Direct Access Method (DL/I HIDAM) databases in the IMS case.

1.1 Connectivity design considerations

During the design of a connectivity solution, there are many attributes to consider. The system can be well balanced, satisfying multiple requirements at the same time (Figure 1-1).

Figure 1-1 A well-balanced system, weighing scalability, performance, amount of data, skills, interoperability, maturity, availability, and security equally

The system can also be tailored towards a given goal, such as security and availability (see Figure 1-2 on page 4). In these cases, you cannot necessarily satisfy all requirements and still achieve the best performance.

Figure 1-2 A system tailored to security and availability

You can view the following list as a starting point for aspects to consider, but individual design situations might bring up additional factors:

Security
Security is a necessity for most designs. Each connectivity solution has strengths and weaknesses that must be considered. Some connectivity solutions offer extensive security, and others offer hardly any. Adding security to a design usually slows down performance, because additional data is likely to be transmitted, which means that additional processing must be done.

Standards compliance, interoperability
Some connectivity options are proprietary; some are more standards compliant. Some standards are newer; some are older and more mature. In some cases, there is not even a standard available.
Implementing new standards might mean reduced performance, because the product is not mature and has not gone through many refinements. Also, standards compliance might mean the implementation of additional layers between two end points, which can cause more processor usage (for example, creating Web services). However, using a given standard can also result in better interoperability and increased accessibility, because more clients can use the service.

Availability
Availability means duplication. Checking the viability of the duplicate generally requires more processing and resources, which affects performance negatively. However, continuous or longer uptime is generally the result of availability, and this uptime is a business requirement that is defined in business terms.

Performance (response time, CPU, and memory cost)
Designing for performance is always a compromise. It is usually hard to build open, standards-compliant, highly available, scalable, and secure connectivity and still achieve high performance. The compromise is how to achieve the best balance between performance and the other attributes. There are multiple performance measures; some are user related (like response time), and some are resource oriented (like CPU time).

Skills availability
In a given customer scenario, the available skills always influence the connectivity choices. If the programmers know WebSphere MQ, they tend to use it more than J2EE Connector Architecture (J2CA), even though J2CA might fit better for that particular solution.

Synchronous and asynchronous response requirements
Whenever applications communicate, the question is whether the response must be synchronous or can be asynchronous. Most users prefer a prompt answer, so the design leads to synchronous solutions. But a prompt answer is not always possible, because of application availability, time zones, or scheduling issues, so the design leads to an asynchronous solution.
Usually, this is the first question that should come up when designing a solution, and the answer automatically limits the number of connectivity options.

Co-location and separation requirements
This attribute is related either to security (the two applications cannot be put under the control of the same operating system) or to hardware capacity (separate hardware is needed for the two applications). This automatically limits the connectivity options. The standard transport protocol used today for separated machines is TCP/IP, and this, as the measurements show, always underperforms co-located, proprietary transport protocols.

Product maturity
As with standards maturity, product maturity always influences performance. Functional requirements come first; non-functional requirements come second. For example, a CICS Enterprise Information System (EIS) that has been around for 40 years has gone through so many design and performance reviews and refinements that it is likely to outperform some newly created and announced transactional systems, even though the new system might offer functions that cannot be found with CICS. The same is true for a connectivity product. It might take two or more releases or versions to solve most of the performance issues. Eventually, these issues are likely to be solved, if they can be solved internally to the product.

Amount and type of data for communication
You can solve your data communication requirements by sending small amounts of data many times or large chunks of data only a few times. If the communication choice is costly (huge amounts of processing or many data conversions), you should choose a different communication method or redesign the application for better performance. There are also limitations posed by a given connectivity option (for example, the 32 KB COMMAREA limit of CICS Transaction Gateway) that can push a design to a different option, even though the performance of the original choice is excellent.
Scalable software architecture
This is an attribute that can also have performance implications. Scalable solutions require different internal algorithms; more storage; and different approaches to sorting, table manipulation, and thread safety, all of which can hinder performance.

The list is definitely not complete. There are many factors that influence a design, for example, company policies or limitations posed by a given EIS. For more information, refer to WebSphere for z/OS Connectivity Architectural Choices, SG24-6365.

1.2 Summary of key performance results

The objective of our project was to measure the performance of the different connectivity options from IBM WebSphere Application Server for z/OS to CICS and IMS EISs. We used a simple HTTP client that connects to WebSphere for all of our tests.

The project could not cover all the possible architectural options. For example, we could not measure Web services or Java clients that connect to Simple Object Access Protocol (SOAP) or CICS Transaction Gateway (TG). It should be noted that CICS TG can handle any J2EE client, and the SOAP server in CICS can handle any Web services client. We did not measure Web services clients that connect to WebSphere Application Server.

Some of the key results of our series of measurements are:

- Working with a small communications area (COMMAREA) size, CICS TG outperforms both SOAP and CICS MQ Distributed Program Link (DPL) Bridge.
- With a small COMMAREA size, SOAP and CICS MQ DPL Bridge results are similar. In the local case, CICS MQ DPL Bridge performs better; in the remote case, CICS SOAP performs better.
- CICS MQ DPL Bridge cost per byte is very low when compared to the other connectors that we measured.
- As the COMMAREA size was increased, CICS MQ DPL Bridge performed better than SOAP and CICS TG.
- The CPU usage for CICS SOAP can be greater than for the other connectors because of the XML parsing of data structures; this parsing is the cost of the loose coupling that makes such systems easy to manage. The amount of XML parsing is related to the complexity of the data structure. Our testing showed that, with a less complex COMMAREA, the performance of CICS SOAP is better than with a more complex COMMAREA of the same size. Therefore, the application designer has the option of, for example, packaging multiple fields into one larger field. This simplifies the SOAP/XML processing, but the Java program must pack and unpack that larger field correctly.
- In general, local connectors perform better than remote connectors. The relative CPU usage delta between local and remote decreases as the application data size increases.
- IMS Connect performs better than IMS MQ DPL Bridge.

Chapter 2. The measurement environment

This chapter describes the test environment that we used to test our business applications. The objective was to test the performance of the different connectivity choices to the enterprise back-end systems. We briefly describe the infrastructure we set up and the connector types that we chose to test.

We discuss the following components used in our tests:

- Test objectives
- Infrastructure
  – CICS
  – IMS
  – WebSphere MQ/CICS DPL Bridge
  – WebSphere MQ/IMS DPL Bridge

2.1 Test objectives

The objective of our tests was to effectively measure the CPU consumption of simple J2EE applications that use different connectors to back-end EISs. We tested:

- CICS TG
- SOAP for CICS
- WebSphere MQ CICS DPL Bridge
- IMS Connect
- IMS MQ DPL Bridge (one measurement)

2.1.1 Approach

Our main goal was to set up our environment so that our measurements best reflected the speed of each connector and not the speed of the EIS.
To do this, our EISs ran in their own service classes with the highest priority so that the transactions would not have to wait for any other process. The WebSphere environment and enclaves were configured to run at a slightly lower priority so that they would not steal processing from the EIS. During our test runs, we used RMF Monitor III to monitor any delays. For each address space or group of address spaces, RMF Monitor III reported the delay that was experienced for the report interval and identified the primary cause of the delay. With the number of processors and the real storage defined for our test systems, we did not experience delays of this nature. The majority of the delays that we experienced were the result of inadequate data set placement, which we corrected by moving our test databases to ESS direct access storage devices (DASDs). With each of our test cases, we made every effort to optimize the settings for that instance of the test to achieve the maximum throughput.

2.1.2 Test scenarios

We chose varying message sizes to test the impact of the message size on the performance of the transactions. The testing scenarios are shown in Table 2-1 on page 11.
Table 2-1 Table of test cases

CICS Transaction Gateway (CICS TG), local and remote connections:
  Scenario 1: DFHCOMMAREA = 0.5 KB
  Scenario 2: DFHCOMMAREA = 5 KB
  Scenario 3: DFHCOMMAREA = 20 KB

SOAP connector, local and remote connections:
  Scenario 1: Message size = 0.5 KB
  Scenario 2: Message size = 5 KB

CICS/MQ DPL Bridge connector, local and remote connections:
  Scenario 1: Message size = 0.5 KB
  Scenario 2: Message size = 5 KB
  Scenario 3: Message size = 20 KB

IMS connector, local and remote connections:
  Scenario 1: COMMAREA = 0.5 KB
  Scenario 2: COMMAREA = 5 KB
  Scenario 3: COMMAREA = 20 KB

IMS/MQ DPL Bridge connector, local connection only:
  Scenario 1: COMMAREA = 5 KB

2.2 Infrastructure

This section describes the infrastructure of our environment.

2.2.1 The sysplex configuration

We used three systems for our performance tests. All were logical partitions (LPARs) in either a 2064 (z900) or 2084 (z990) zSeries server. All network connectivity between the LPARs was over XCF paths through the coupling facility. The DASD used for our tests was ESS DASD shared between the three LPARs. Figure 2-1 shows the server configuration: LPAR48 and LPAR49 on the z990 server and LPAR43 on the z900 server, connected by XCF and sharing ESS disk storage.

Figure 2-1 The servers used for the tests

The EIS subsystems were configured using the criteria described in WebSphere for z/OS Connectivity Handbook, SG24-7064-01.
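Enumerating Table 2-1 as a cross product shows the measurement matrix at a glance. The sketch below is a throwaway Python enumeration; the sizes and topologies are exactly those in the table, everything else is just bookkeeping:

```python
# Connector -> (payload sizes in KB, topologies), per Table 2-1.
SCENARIOS = {
    "CICS TG":            ([0.5, 5, 20], ["local", "remote"]),
    "SOAP for CICS":      ([0.5, 5],     ["local", "remote"]),
    "CICS/MQ DPL Bridge": ([0.5, 5, 20], ["local", "remote"]),
    "IMS Connect":        ([0.5, 5, 20], ["local", "remote"]),
    "IMS/MQ DPL Bridge":  ([5],          ["local"]),
}

# One measurement case per (connector, size, topology) combination.
matrix = [(conn, size, topo)
          for conn, (sizes, topos) in SCENARIOS.items()
          for size in sizes
          for topo in topos]

print(len(matrix))  # -> 23
```

So the table implies 23 distinct measurement cases in total.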
Figure 2-2 on page 13 shows the test cases that we performed:

1. CICS TG
2. WebSphere MQ DPL Bridge: CICS
3. CICS-SOAP
4. IMS Connect
5. WebSphere MQ DPL Bridge: IMS

Figure 2-2 Local test configuration (the WebSphere Application Server servant regions with the Trader applications, CTG, CICS, IMS, IMS Connect, MQSeries with the CICS and IMS bridges, and the CICS/IMS databases all on LPAR48, driven over XCF by the WebSphere Workload Simulator for z/OS on LPAR43)

Figure 2-2 also shows the high-level system configuration that we used for our test environment for the local connectivity scenarios. We used only two LPARs: One LPAR contained WebSphere and the EIS systems. The other LPAR (a separate machine) was used to drive the workload.

Figure 2-3 shows the high-level system configuration that we used for our test environment for the remote connectivity scenarios.

Figure 2-3 Environment for remote cases (the WebSphere Application Server servant regions with the Trader applications on LPAR49; CICS, IMS, MQSeries with the bridges, and the CICS/IMS databases on LPAR48; the WebSphere Workload Simulator scripts on LPAR43; all TCP/IP inter-LPAR communication is done through XCF)

We used three LPARs: one for WebSphere, the second for the EISs, and the third for driving the workload. The following hardware and software components were used in our test environment:

- Two external Coupling Facilities (CFs) were installed.
- WebSphere Application Server V5.1.0.2 for z/OS was used.
- The Resource Access Control Facility (RACF®) used a sysplex-wide shared database.
- Workload Manager (WLM) was set up in goal mode.

Table 2-2 shows the release levels.
Table 2-2 Release levels of software we used

  Product                        Release level
  z/OS                           V1.5
  WebSphere Application Server   V5.1.0.2
  CICS TS                        V2.2
  CICS TG                        V5.1.0
  SOAP for CICS feature          V2
  WebSphere MQ                   V5.3.1
  IMS                            V8.1
  IMS Connect                    V2.2

2.3 WLM

To acquire the best throughput for our transactions, we set the WLM properties (Example 2-1) for the WebSphere servant regions on our LPARs to CPU critical flag = YES, an importance of 2, and a 90% goal.

Example 2-1 Service class settings for WebSphere on SC48

  Service Class WAS48 - WAS servant sc48
  Created by user WATSON on 2004/11/12 at 15:52:38
  Base last updated by user WATSON on 2004/11/22 at 16:04:09
  Base goal:
  CPU Critical flag: YES
  #  Duration  Imp  Goal description
  1            2    90% complete within 00:00:00.350

The CICS regions were set to CPU critical flag = YES, an importance of 1, and an 80% goal (Example 2-2). The objective was to give CICS more importance than the WebSphere application servant regions, resulting in less delay for CICS resources. Similar service class definitions were set up for IMS.

Example 2-2 Service class settings for CICS regions

  Service Class CICSW - WAS CICS transactions
  Created by user FRANCK on 2002/11/16 at 16:24:26
  Base last updated by user WATSON on 2004/11/12 at 17:26:09
  Base goal:
  CPU Critical flag: YES
  #  Duration  Imp  Goal description
  1            1    80% complete within 00:00:00.150

WebSphere for z/OS propagates the performance context of work requests with WLM enclaves.
Each transaction has its own enclave and is managed according to its service class.

2.4 WebSphere Application Server

For WebSphere Application Server, we switched the Object Request Broker (ORB) services workload profile setting between IOBOUND and LONGWAIT. Figure 2-4 on page 17 shows how to set this up.

Figure 2-4 Setting the Object Request Broker workload profile in WebSphere Application Server

LONGWAIT specifies more threads than IOBOUND for application processing. Specifically, LONGWAIT allowed us to run with 40 worker threads in each servant region. We used this setting for CICS TG because it spends most of its time waiting for network and remote operations to complete and very little time on its own processing. IOBOUND uses three times the number of CPUs, with a minimum of five threads and a maximum of 30 threads. Visit the WebSphere information center for more information about these settings.

2.5 CICS

This section describes the architecture we chose to connect WebSphere Application Server for z/OS (WebSphere) to the CICS subsystem. We used three different methods:

- CICS TG
- JMS to WebSphere MQ DPL Bridge connection to CICS
- SOAP connection to CICS

In the local CICS TG case, we did not use the gateway daemon address space, because the CICS External Call Interface (ECI) resource adapter runs in WebSphere and communicates directly with CICS using the CICS TG facilities. In the remote case, we used a resource adapter connected to CICS TG over a network connection. This connection uses the ECI protocol over TCP/IP. The maximum number of outstanding connection requests (the SOMAXCONN TCP/IP value) was 10. We used the CICS resource definitions seen in Example 2-3 on page 18.
The measurement environment 17 Example 2-3 CICS TG and SOAP definitions during the measurements GROUP NAME: CTG ---------CONNECTION(CTG) GROUP(CTG) DESCRIPTION(CTG CONNECTION) CONNECTION-IDENTIFIERS NETNAME(SCSCERWW) INDSYS() REMOTE-ATTRIBUTES REMOTESYSTEM() REMOTENAME() REMOTESYSNET() CONNECTION-PROPERTIES ACCESSMETHOD(IRC) PROTOCOL(EXCI) CONNTYPE(SPECIFIC) SINGLESESS(NO) DATASTREAM(USER) RECORDFORMAT(U) QUEUELIMIT(NO) MAXQTIME(NO) OPERATIONAL-PROPERTIES AUTOCONNECT(NO) INSERVICE(YES) SECURITY SECURITYNAME() ATTACHSEC(IDENTIFY) BINDSECURITY(NO) USEDFLTUSER(NO) RECOVERY PSRECOVERY() XLNACTION(KEEP) SESSIONS(CTG) GROUP(CTG) DESCRIPTION(CTG SESSIONS) SESSION-IDENTIFIERS CONNECTION(CTG) SESSNAME() MODENAME() SESSION-PROPERTIES PROTOCOL(EXCI) MAXIMUM(0,0) RECEIVECOUNT(999) SENDPFX() SENDSIZE(4096) RECEIVESIZE(4096) OPERATOR-DEFAULTSPRESET-SECURITY USERID() OPERATIONAL-PROPERTIES AUTOCONNECT(NO) BUILDCHAIN(YES) IOAREALEN(4096,4096) RELREQ(NO) NEPCLASS(0) RECOVERY NETNAMEQ() RECEIVEPFX(C) SENDCOUNT() SESSPRIORITY(0) USERAREALEN(0) DISCREQ(NO) RECOVOPTION(SYSDEFAULT) +++++++++++++++++++++++++++++++++++++ GROUP NAME: SOAPUSER ---------PROGRAM(DFHWBCLI) GROUP(SOAPUSER) DESCRIPTION(Outbound HTTP Transport Interface) LANGUAGE(ASSEMBLER) RELOAD(NO) RESIDENT(NO) USAGE(NORMAL) USELPACOPY(NO) STATUS(ENABLED) CEDF(YES) DATALOCATION(ANY) EXECKEY(CICS) CONCURRENCY(QUASIRENT) REMOTE-ATTRIBUTES DYNAMIC(NO) REMOTESYSTEM() REMOTENAME() 18 WebSphere for z/OS to CICS and IMS Connectivity Performance TRANSID() JVM-ATTRIBUTES JVM(NO) JAVA-PROGRAM-OBJECT-ATTRIBUTES HOTPOOL(NO) TCPIPSERVICE(SOAP) EXECUTIONSET(FULLAPI) JVMCLASS() JVMPROFILE(DFHJVMPR) GROUP(SOAPUSER) DESCRIPTION(SOAP for CICS: HTTP port definition) URM(DFHWBADX) PORTNUMBER(8080) STATUS(OPEN) PROTOCOL(HTTP) TRANSACTION(CWXN) BACKLOG(200) TSQPREFIX() IPADDRESS() SOCKETCLOSE(10) SECURITY SSL(NO) ATTACHSEC() DNS-CONNECTION-BALANCING DNSGROUP() CERTIFICATE() AUTHENTICATE(NO) GRPCRITICAL(NO) We defined Trader using the job in Example 
2-4. Example 2-4 Trader definitions in CICS //CICSADD1 JOB (999,POK),'CONWAY',CLASS=A, // MSGLEVEL=(1,1),MSGCLASS=A,NOTIFY=&SYSUID /*JOBPARM S=SC48 //************************************************* //* ADD CICS DEFS FOR TRADER //* CICS HAS TO BE DOWN TO ADD THESE //************************************************* //* //ADDDEF EXEC PGM=DFHCSDUP,REGION=1M //* //STEPLIB DD DSN=CICSTS22.CICS.SDFHLOAD,DISP=SHR //DFHCSD DD DSN=CICSUSER.CICS220.CICSERW.DFHCSD,DISP=SHR //SYSUT1 DD UNIT=SYSDA,SPACE=(1024,(100,100)) //SYSPRINT DD SYSOUT=* //SYSIN DD * DELETE GROUP(TRADER) DEFINE TRANSACTION(TRAD) GROUP(TRADER) DESCRIPTION(ITSO TRADER TRANS) PROGRAM(TRADERPL) TRANCLASS(DFHTCL00) DEFINE MAPSET(NEWTRAD) GROUP(TRADER) DESCRIPTION(ITSO TRADER MAPSET) DEFINE FILE(COMPFILE) GROUP(TRADER) RECORDFORMAT(V) ADD(YES) BROWSE(YES) DELETE(YES) READ(YES) UPDATE(YES) DATABUFFERS(2) INDEXBUFFERS(1) DSNAME(CICSUSER.CICS220.CICSERW.COMPFILE) DEFINE FILE(CUSTFILE) GROUP(TRADER) Chapter 2. The measurement environment 19 RECORDFORMAT(V) ADD(YES) BROWSE(YES) DELETE(YES) READ(YES) UPDATE(YES) DATABUFFERS(2) INDEXBUFFERS(1) DSNAME(CICSUSER.CICS220.CICSERW.CUSTFILE) ADD GROUP(TRADER) LIST(WEBLIST) // We used the Trader file definitions shown in Example 2-5. 
Example 2-5 VSAM file definitions FILE(COMPFILE) GROUP(TRADERW1) DESCRIPTION() VSAM-PARAMETERS DSNAME(CICSUSER.CICS220.CICSERW1.COMPFILE) PASSWORD() RLSACCESS(NO) LSRPOOLID(1) READINTEG(UNCOMMITTED) DSNSHARING(ALLREQS) STRINGS(1) NSRGROUP() REMOTE-ATTRIBUTES REMOTESYSTEM() REMOTENAME() REMOTE-AND-CFDATATABLE-PARAMETERS RECORDSIZE() KEYLENGTH() INITIAL-STATUS STATUS(ENABLED) OPENTIME(FIRSTREF) DISPOSITION(SHARE) BUFFERS DATABUFFERS(2) INDEXBUFFERS(1) DATATABLE-PARAMETERS TABLE(NO) MAXNUMRECS(NOLIMIT) CFDATATABLE-PARAMETERS CFDTPOOL() TABLENAME() UPDATEMODEL(LOCKING) LOAD(NO) DATA-FORMAT RECORDFORMAT(V) OPERATIONS ADD(YES) BROWSE(YES) DELETE(YES) READ(YES) UPDATE(YES) AUTO-JOURNALLING JOURNAL(NO) JNLREAD(NONE) JNLSYNCREAD(NO) JNLUPDATE(NO) JNLADD(NONE) JNLSYNCWRITE(YES) RECOVERY-PARAMETERS RECOVERY(NONE) FWDRECOVLOG(NO) BACKUPTYPE(STATIC) +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ FILE(CUSTFILE) 20 GROUP(TRADERW1) DESCRIPTION() WebSphere for z/OS to CICS and IMS Connectivity Performance VSAM-PARAMETERS DSNAME(CICSUSER.CICS220.CICSERW1.CUSTFILE) PASSWORD() RLSACCESS(NO) LSRPOOLID(1) READINTEG(UNCOMMITTED) DSNSHARING(ALLREQS) NSRGROUP() REMOTE-ATTRIBUTES REMOTESYSTEM() REMOTENAME() REMOTE-AND-CFDATATABLE-PARAMETERS RECORDSIZE() KEYLENGTH() INITIAL-STATUS STATUS(ENABLED) OPENTIME(FIRSTREF) DISPOSITION(SHARE) BUFFERS DATABUFFERS(2) INDEXBUFFERS(1) DATATABLE-PARAMETERS TABLE(NO) MAXNUMRECS(NOLIMIT) CFDATATABLE-PARAMETERS CFDTPOOL() TABLENAME() UPDATEMODEL(LOCKING) LOAD(NO) DATA-FORMAT RECORDFORMAT(V) OPERATIONS ADD(YES) BROWSE(YES) DELETE(YES) READ(YES) UPDATE(YES) AUTO-JOURNALLING JOURNAL(NO) JNLREAD(NONE) JNLSYNCREAD(NO) JNLUPDATE(NO) JNLADD(NONE) JNLSYNCWRITE(YES) RECOVERY-PARAMETERS RECOVERY(BACKOUTONLY) FWDRECOVLOG(NO) BACKUPTYPE(STATIC) STRINGS(1) 2.5.1 CICS Transaction Gateway We modified CTG.INI as shown in Example 2-6 to increase the amount of connections we could achieve between WebSphere and CICS. 
Example 2-6 Modification of CTG.INI

  SECTION GATEWAY
  closetimeout=10000
  ecigenericreplies=off
  initconnect=100
  initworker=100
  maxconnect=200
  maxworker=200
  noinput=off
  nonames=on
  notime=off
  workertimeout=10000
  protocol@tcp.handler=com.ibm.ctg.server.TCPHandler
  protocol@tcp.parameters=connecttimeout=2000;idletimeout=600000;pingfrequency=60000;port=2006;solinger=0;sotimeout=1000;

2.5.2 WebSphere MQ/CICS DPL bridge

WebSphere MQ/CICS DPL Bridge was configured according to the WebSphere for z/OS Connectivity Handbook, SG24-7064-01. To run our test scenario successfully, we had to increase the CTHREAD and IDBACK MQ system parameters. We used the following values:

  CTHREAD = 3000
  IDBACK = 200

CTHREAD controls the total number of connections. IDBACK is the number of non-TSO connections.

2.6 IMS

We made the following changes to the standard IMS and IMS Connect environment.

2.6.1 IMS Connect environment

The following IMS Connect parameters were used:

  HWS (ID=IM4BCONN,RRS=Y,RACF=Y,XIBAREA=20)
  TCPIP (HOSTNAME=TCPIP,PORTID=(6001,LOCAL),MAXSOC=2000,TIMEOUT=60000)
  DATASTORE (ID=IM4B,GROUP=HAOTMA,MEMBER=HWS814B,TMEMBER=SCSIMS4B)

The IMS Connect trace settings are shown in Example 2-7. The recommended trace level settings for the IMS Connect BPE and IMS Connect internal traces in a production environment are the defaults and ERROR.

Example 2-7 IMS Connect trace settings

LANG=ENU /* LANGUAGE FOR MESSAGES /* (ENU = U.S.
ENGLISH) # # DEFINITIONS FOR BPE SYSTEM TRACES # 22 WebSphere for z/OS to CICS and IMS Connectivity Performance */ */ TRCLEV=(AWE,LOW,BPE) TRCLEV=(CBS,MEDIUM,BPE) TRCLEV=(LATC,LOW,BPE) TRCLEV=(DISP,HIGH,BPE,PAGES=12) TRCLEV=(SSRV,HIGH,BPE) TRCLEV=(STG,MEDIUM,BPE) /* /* /* /* /* /* /* AWE SERVER TRACE CONTROL BLK SRVCS TRACE LATCH TRACE DISPATCHER TRACE WITH 12 PAGES (48K BYTES) GEN SYS SERVICES TRACE STORAGE TRACE */ */ */ */ */ */ */ # # DEFINITIONS FOR HWS TRACES # TRCLEV=(CMDT,HIGH,HWS) TRCLEV=(ENVT,HIGH,HWS) TRCLEV=(HWSW,HIGH,HWS) TRCLEV=(OTMA,HIGH,HWS) TRCLEV=(HWSI,HIGH,HWS) TRCLEV=(TCPI,HIGH,HWS) /* /* /* /* /* /* HWS COMMAND TRACE HWS ENVIRONMENT TRACE SERVER TO HWS TRACE HWS COMM DRIVER TRACE HWS TO IMS OTMA TRACE HWS COMM DRIVER TRACE */ */ */ */ */ */ 2.6.2 The IMS environment We started a total of 22 Message Processing Regions. Important: The IMS settings were not tuned for a high performance production environment because that was not our goal. As you can see from the measurement charts (Figure 4-17 on page 106), the EIS utilization is only a small portion of the overall utilization, so even with a perfectly tuned EIS, the overall results do not change too much. We ran the IMS monitor to obtain performance data. The DFSVSMxx PROCLIB member contains the log data set definition information and specifies the allocation of OLDS and WADS, the number of buffers to be used for the OLDS, and the mode of operation of the OLDS (single or dual). We changed the number of buffers according to the settings shown in Example 2-8. The settings that were changed are in bold. Example 2-8 Buffer settings BROWSE IMS814B.PROCLIB(DFSVSMDC) - 01.03 Line 00000000 Co Command ===> Scroll ********************************* Top of Data ************************ VSRBF=8192,100 VSRBF=4096,1000 VSRBF=2048,100 VSRBF=1024,100 VSRBF=512,5 Chapter 2. 
The measurement environment 23 IOBF=(8192,100,Y,Y) IOBF=(2048,100,Y,Y) SBONLINE,MAXSB=10 OPTIONS,BGWRT=YES,INSERT=SKP,DUMP=YES,DUMPIO=YES OPTIONS,VSAMFIX=(BFR,IOB),VSAMPLS=LOCL OPTIONS,DL/I=OUT,LOCK=OUT,DISP=OUT,SCHD=OUT,DLOG=OUT,LATC=ON,SUBS=ON OPTIONS,STRG=ON OLDSDEF OLDS=(00,01,02),BUFNO=050,MODE=DUAL WADSDEF WADS=(0,1,8,9) The DFSPBxxx member contains IMS control region execution parameters. Based on the IMS monitor reports, we changed buffer numbers as shown in Example 2-9. Example 2-9 Buffer number changes QBUF=0200, PSBW=20000, CSAPSB=5000, DLIPSB=30000, 2.6.3 IMS back-end transactions The IMS back-end EIS environment was set up with three COBOL programs that process a message from the user. Each of the three programs received and sent a COMMAREA of varying sizes based on our performance test requirements. The COBOL program names and the IMS transaction/PSB names are the same: Program TRADERS received and sent a COMMAREA of 500 bytes. Program TRADERM received and sent a COMMAREA of 5 KB. Program TRADERL received and sent a COMMAREA of 20 KB. Each COBOL program was a copy of the IMS version of the original TRADERBL COBOL CICS program (see WebSphere for z/OS Connectivity Handbook, SG24-7064-01). The basic differences are in the input and output COMMAREA sizes and the movement of data to fill those COMMAREAs in working storage. Each program performs the same basic process: Check for the type of request that has been entered by the user. Get company information, check share value, or buy or sell stock. – The getting company transaction process reads the IMS company data and allows the selection of up to four companies to review at one time. – The share value transaction process reads the IMS customer data to determine what shares the customer has so that their current values can be evaluated and returned to the user. 
– The buy and sell transaction process first checks whether the request is a buy or a sell:
  • For a customer who wants to buy shares, the program reads the IMS company data to determine if the company exists. If it does, the program checks the IMS customer data to determine if the buyer is a current customer. If the buyer is not, it creates a customer entry. If the buyer is, it checks the shares that the customer has, verifies that the company has the shares to sell, calculates the share value, increases the number of shares owned by the customer, updates the customer file with the new number of shares and their value, and then returns this information to the user.
  • For a customer who wants to sell shares, the program reads the IMS company data to determine if the company exists. If it does, it checks the IMS customer data to determine if that customer has those shares to sell, calculates the share value, decreases the number of shares owned by the customer, updates the customer file with the new number of shares and their value, and then returns this information to the user.

2.7 SOAP

The SOAP connection to CICS is illustrated in Figure 2-5: an XML/HTTP or XML/MQ transport delivers the request to the SOAP application in CICS, where the SOAP envelope processor handles the request body, the response body, and code page conversion, and a message adapter links to the Trader application.

Figure 2-5 SOAP connection to CICS

Refer to WebSphere for z/OS Connectivity Handbook, SG24-7064-01 for more information. SOAP and XML (DOM) processing are storage-intensive, and small heap sizes can result in excessive Java garbage collection (GC). We found that a heap size of 512 MB was optimal for most test cases. You can monitor GC using the -verbose:gc Java directive. Figure 2-5 also shows a traditional CICS transaction (the TRADER transaction) exposed as a Web service. This uses SOAP and requires XML parsing, which consumes more CPU for more complex data structures.
Chapter 3. The Trader applications

In this chapter, we introduce the applications that were used to drive the workload that we ran during our tests. The Trader application was developed by IBM to model an electronic brokerage service, with a mix of servlets, JavaServer Pages (JSPs), and Enterprise JavaBeans (EJBs). The actual workload involved sending three different data sizes to the CICS and IMS back-end transactions. The sizes picked were 500 bytes, 5 KB, and 20 KB. We developed different COBOL CICS and COBOL IMS back-end transactions for each workload; however, the business logic was the same. The applications were developed with standard WebSphere Studio Application Developer Integration Edition for the J2CA versions and WebSphere Studio Enterprise Developer for the SOAP versions.

3.1 Overview of Trader application

Trader is an example application that provides different incarnations showing how to use some of the connectors provided by WebSphere, WebSphere Studio Application Developer Integration Edition, and WebSphere Studio Enterprise Developer in an application. It is a simple application that mimics trading stocks in four different companies. The Trader application consists of four major components (Figure 3-1):

- A back end
- A data store
- A middle tier that provides access to the back end
- A front end that is implemented as a Web application

Figure 3-1 Trader major components (Web front end, back-end interface, back-end logic, back-end data store)

The Web front end is a regular Java 2 Platform, Enterprise Edition (J2EE) Web module. The middle tier (back-end interface) is based on EJBs.
The following connectors are used:

- CICS ECI J2CA resource adapter: This provides direct access from the back-end interface to the back-end logic hosted in CICS TS, using CICS TG.
- IMS J2CA resource adapter: This provides direct access from the back-end interface to the back-end logic hosted in IMS.
- WebSphere MQ JMS provider: This provides access to CICS transactions through the CICS WebSphere MQ DPL bridge.
- SOAP for CICS: This provides access to CICS transactions from WebSphere using SOAP.

We created a Trader application for each of the connectors that are used:

- TraderCICS (TRADERC)
- TraderIMS (TRADERI)
- TraderSOAP (TRADERS)
- TraderMQ (TRADERM)

3.1.1 Trader IMS and CICS applications and data stores

The Trader application consists of a transaction that can process the trade of shares (CICS or IMS) and a data store. The data stores for all Trader applications (CICS, IMS, MQ, DB2) shared the same basic structure (Figure 3-2):

  TRADER.COMPANY
  Column name         Type       Len  Nulls
  COMPANY             CHARACTER  20   No
  SHARE_PRICE         REAL       4    Yes
  UNIT_VALUE_7DAYS    REAL       4    Yes
  UNIT_VALUE_6DAYS    REAL       4    Yes
  UNIT_VALUE_5DAYS    REAL       4    Yes
  UNIT_VALUE_4DAYS    REAL       4    Yes
  UNIT_VALUE_3DAYS    REAL       4    Yes
  UNIT_VALUE_2DAYS    REAL       4    Yes
  UNIT_VALUE_1DAYS    REAL       4    Yes
  COMM_COST_SELL      INTEGER    4    Yes
  COMM_COST_BUY       INTEGER    4    Yes

  TRADER.CUSTOMER
  Column name         Type       Len  Nulls
  CUSTOMER            CHARACTER  60   No
  COMPANY             CHARACTER  20   No
  NO_SHARES           INTEGER    4    Yes

Figure 3-2 Data store definitions of the Trader application

For TraderCICS, the CICS application used a VSAM file as the data store. For TraderIMS, the IMS application used a DL/I HIDAM database as the data store. TraderMQ and TraderSOAP used the same CICS application, or transaction, as TraderCICS. For our purposes, we extended the COMMAREA to fit the performance requirements of the 500 byte, 5 KB, and 20 KB test cases. The COMMAREA can be seen in Example 3-1.
Example 3-1 COMMAREA description for the 500 byte case

  01 COMMAREA-BUFFER.
     03 REQUEST-TYPE             PIC X(15).
     03 RETURN-VALUE             PIC X(02).
     03 USERID                   PIC X(60).
     03 USER-PASSWORD            PIC X(10).
     03 COMPANY-NAME             PIC X(20).
     03 CORRELID                 PIC X(32).
     03 UNIT-SHARE-VALUES.
        05 UNIT-SHARE-PRICE      PIC X(08).
        05 UNIT-VALUE-7-DAYS     PIC X(08).
        05 UNIT-VALUE-6-DAYS     PIC X(08).
        05 UNIT-VALUE-5-DAYS     PIC X(08).
        05 UNIT-VALUE-4-DAYS     PIC X(08).
        05 UNIT-VALUE-3-DAYS     PIC X(08).
        05 UNIT-VALUE-2-DAYS     PIC X(08).
        05 UNIT-VALUE-1-DAYS     PIC X(08).
     03 COMMISSION-COST-SELL     PIC X(03).
     03 COMMISSION-COST-BUY      PIC X(03).
     03 SHARES.
        05 NO-OF-SHARES          PIC X(04).
     03 SHARES-CONVERT REDEFINES SHARES.
        05 NO-OF-SHARES-DEC      PIC 9(04).
     03 TOTAL-SHARE-VALUE        PIC X(12).
     03 BUY-SELL1                PIC X(04).
     03 BUY-SELL-PRICE1          PIC X(08).
     03 BUY-SELL2                PIC X(04).
     03 BUY-SELL-PRICE2          PIC X(08).
     03 BUY-SELL3                PIC X(04).
     03 BUY-SELL-PRICE3          PIC X(08).
     03 BUY-SELL4                PIC X(04).
     03 BUY-SELL-PRICE4          PIC X(08).
     03 ALARM-CHANGE             PIC X(03).
     03 UPDATE-BUY-SELL          PIC X(01).
     03 FILLER                   PIC X(15).
     03 COMPANY-NAME-BUFFER.
        05 COMPANY-NAME-TAB OCCURS 4 TIMES
           INDEXED BY COMPANY-NAME-IDX
                                 PIC X(20).
     03 PERFTEST                 PIC 9(06).
     03 PERFTEST-CHAR-IDX-1      PIC S9(4) COMP.
     03 PEFRTEST-BUFFER.
        05 PERFTEST-CHAR OCCURS 8 TIMES.
           09 PERFTEST-CHAR      PIC X(05).
           09 PERFTEST-INT       PIC 9(05).
           09 PERFTEST-COMP      PIC S9(04) COMP.
           09 PERFTEST-COMP3     PIC S9(05) COMP-3.

For the 5 KB case, the OCCURS 8 TIMES setting was modified to 316, and for the 20 KB case, to 1340.

Important: These COMMAREAs are very complex. For small COMMAREAs, this is usually not a problem. However, for larger COMMAREAs, it is wise to group fields, reducing the complexity of the COMMAREA format used to pass data over the connector. See “Further improvement options” on page 32.

3.1.2 SOAP considerations

The SOAP test cases showed us that XML conversion is an expensive process that extends the transport size considerably.
However, unlike the CICS TG case, there is no 32 KB limitation on the COMMAREA. We thought it would be valuable to have a less complex COMMAREA, so we ran a test with the simple COMMAREA shown in Example 3-2. This COMMAREA uses a 140 byte text field plus other numeric fields, compared to the 5 byte text field in the complex case.

Example 3-2 Simple 5 KB COMMAREA for the SOAP test

  01 COMMAREA-BUFFER.
     03 REQUEST-TYPE             PIC X(15).
     03 RETURN-VALUE             PIC X(02).
     03 USERID                   PIC X(60).
     03 USER-PASSWORD            PIC X(10).
     03 COMPANY-NAME             PIC X(20).
     03 CORRELID                 PIC X(32).
     03 UNIT-SHARE-VALUES.
        05 UNIT-SHARE-PRICE      PIC X(08).
        05 UNIT-VALUE-7-DAYS     PIC X(08).
        05 UNIT-VALUE-6-DAYS     PIC X(08).
        05 UNIT-VALUE-5-DAYS     PIC X(08).
        05 UNIT-VALUE-4-DAYS     PIC X(08).
        05 UNIT-VALUE-3-DAYS     PIC X(08).
        05 UNIT-VALUE-2-DAYS     PIC X(08).
        05 UNIT-VALUE-1-DAYS     PIC X(08).
     03 COMMISSION-COST-SELL     PIC X(03).
     03 COMMISSION-COST-BUY      PIC X(03).
     03 SHARES.
        05 NO-OF-SHARES          PIC X(04).
     03 SHARES-CONVERT REDEFINES SHARES.
        05 NO-OF-SHARES-DEC      PIC 9(04).
     03 TOTAL-SHARE-VALUE        PIC X(12).
     03 BUY-SELL1                PIC X(04).
     03 BUY-SELL-PRICE1          PIC X(08).
     03 BUY-SELL2                PIC X(04).
     03 BUY-SELL-PRICE2          PIC X(08).
     03 BUY-SELL3                PIC X(04).
     03 BUY-SELL-PRICE3          PIC X(08).
     03 BUY-SELL4                PIC X(04).
     03 BUY-SELL-PRICE4          PIC X(08).
     03 ALARM-CHANGE             PIC X(03).
     03 UPDATE-BUY-SELL          PIC X(01).
     03 FILLER                   PIC X(15).
     03 COMPANY-NAME-BUFFER.
        05 COMPANY-NAME-TAB OCCURS 4 TIMES
           INDEXED BY COMPANY-NAME-IDX
                                 PIC X(20).
     03 PERFTEST                 PIC 9(06).
     03 PERFTEST-CHAR-9          PIC X(28).
     03 PERFTEST-CHAR-IDX-1      PIC S9(4) COMP.
     03 PEFRTEST-BUFFER.
        05 PERFTEST-CHAR OCCURS 31 TIMES.
           09 PERFTEST-CHAR      PIC X(140).
           09 PERFTEST-INT       PIC 9(07).
           09 PERFTEST-COMP      PIC S9(04) COMP.
           09 PERFTEST-COMP3     PIC S9(05) COMP-3.

The number of data elements is as follows:

  500 B:       36 + (8 x 4)   = 68
  5 KB:        36 + (316 x 4) = 1300
  5 KB simple: 36 + (31 x 4)  = 160

The first two were complex COMMAREAs. We did not use null-truncated COMMAREAs.
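The OCCURS counts and the payload sizes are consistent: summing the PIC clauses gives a fixed (non-repeating) portion of 380 bytes, and each repeating PERFTEST-CHAR group in the complex layout occupies 15 bytes (S9(4) COMP is 2 bytes and S9(5) COMP-3 is 3 bytes, the standard COBOL storage sizes). The sketch below uses Python purely as a calculator to confirm the byte sizes and to reproduce the element counts quoted above:

```python
# Bytes per repeating PERFTEST-CHAR group:
#   complex: X(05) + 9(05) + S9(04) COMP (2 bytes) + S9(05) COMP-3 (3 bytes)
#   simple:  X(140) + 9(07) + the same two binary/packed fields
COMPLEX_ELEM = 5 + 5 + 2 + 3        # 15 bytes
SIMPLE_ELEM = 140 + 7 + 2 + 3       # 152 bytes

# Fixed (non-repeating) portion, summed from the PIC clauses in Example 3-1
# (the REDEFINES entry adds no storage).
FIXED = (15 + 2 + 60 + 10 + 20 + 32        # header fields through CORRELID
         + 8 * 8                           # UNIT-SHARE-VALUES: 8 x X(08)
         + 3 + 3 + 4 + 12                  # commissions, shares, total value
         + 4 * (4 + 8)                     # BUY-SELLn + BUY-SELL-PRICEn
         + 3 + 1 + 15                      # alarm, update flag, filler
         + 4 * 20                          # COMPANY-NAME-TAB OCCURS 4
         + 6 + 2)                          # PERFTEST, PERFTEST-CHAR-IDX-1

def commarea_bytes(occurs, elem=COMPLEX_ELEM, fixed=FIXED):
    return fixed + occurs * elem

def data_elements(occurs, fixed_elements=36):
    # 4 leaf fields per repeating group, per the book's own formula.
    return fixed_elements + 4 * occurs

print(commarea_bytes(8), commarea_bytes(316), commarea_bytes(1340))
# -> 500 5120 20480  (exactly 500 bytes, 5 KB, and 20 KB)
print(commarea_bytes(31, SIMPLE_ELEM, FIXED + 28))
# -> 5120  (simple case: the extra X(28) field joins the fixed portion)
print(data_elements(8), data_elements(316), data_elements(31))
# -> 68 1300 160
```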
Further improvement options

We obtained very large transport sizes because of the large number of fields. For larger COMMAREAs, it is wise to group fields together. For example, in the COMMAREA definition for the 5 KB case, PERFTEST-CHAR can be carried as one field repeated 316 times in the SOAP message. This reduces the number of fields, the transport size, and the cost of the SOAP request: the message then contains 36 + 316 = 352 data elements, instead of 36 + (316 x 4) = 1300. If you take this approach, the client application must be aware of the format of the portion of the COMMAREA that represents the aggregated fields. The application must be able to generate and parse this byte array format.

As a sizing guideline, you can calculate the transport size of the SOAP message. The SOAP message size is the sum of the following values:

- SOAP prefix and suffix: 287 bytes
- Operation name: 2 x the length of the program name + 23 bytes
- SOAP body elements: (2 x the length of an average element name + the average element data length + 5) x the number of elements

As an example, if the average element name length is 8 bytes and the COMMAREA size is 4 KB, Table 3-1 shows the message sizes based on the average data length of an element.

Table 3-1 SOAP message size example

  Number of elements   Average data length   SOAP message size (bytes)
  128                  32                    7,101
  512                  8                     15,165
  2048                 2                     47,421
  4096                 1                     90,429

Attention: In our measurements, we used CICS TS 2.3. We used the WebSphere Enterprise Developer converter to generate an application handler. With CICS TS 3.1, the SOAP support is integrated into CICS, and there are three options you can use:

- A user-written application handler
- The CICS Web Services Assistant
- WebSphere Enterprise Developer to generate the application handler

CICS TS 3.1 also introduces improvements to CICS SOAP performance.
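The sizing guideline can be checked against Table 3-1. In the sketch below, the 287 byte prefix/suffix and the per-element body formula come straight from the guideline; the operation-name term is treated as a constant 30 bytes, which is what the table's figures imply for the program name used in the example (that constant is our assumption, not a value stated in the text):

```python
def soap_message_bytes(n_elements, avg_data_len, avg_name_len=8,
                       envelope=287, operation=30):
    """Transport-size estimate for a SOAP request, per the sizing guideline:
    prefix/suffix + operation-name element + one body element per field.
    `operation` = 30 is assumed to match Table 3-1."""
    body = (2 * avg_name_len + avg_data_len + 5) * n_elements
    return envelope + operation + body

# Reproduce Table 3-1 (4 KB COMMAREA, 8-byte average element names).
for n, data_len in [(128, 32), (512, 8), (2048, 2), (4096, 1)]:
    print(n, data_len, soap_message_bytes(n, data_len))
# -> 128 32 7101
#    512 8 15165
#    2048 2 47421
#    4096 1 90429
```

Note how splitting the same 4 KB into more, smaller elements roughly multiplies the transport size: the per-element overhead (tag names plus framing) dominates once elements shrink below the tag-name length.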
3.1.3 Trader Web front-end user interface

All Trader Web modules provide the same basic user interaction (see Figure 3-3 on page 34). The entry page is the Login page. The Login page provides a field to enter a user name and one or more buttons that take you to the applications. The number of buttons depends on the actual Trader application. For example, TraderMQ provides a choice of using a message-driven bean (MDB). The Login page also provides radio buttons that select the workload (the amount of data) that the CICS or IMS transaction receives. Every transaction, be it Buy, Sell, Quotes, or Companies, sends the same amount of data to the back-end CICS program. This is determined by the COMMAREA size that the user selects. These selections are available only at the time of login, and the user must log out to select a different size.

Figure 3-3 Trader screen flow (a logon from the Login page leads to the Companies list; from there, the Buy shares, Sell shares, and Quotes pages each return to the Companies list; Logoff returns to the Login page)

If the login is successful, a company list appears on the next page (Figure 3-4).

Figure 3-4 Trader companies list

For each company, there are buttons for accessing quotes and holdings status and for buying and selling shares. This list is obtained from the back-end data store. Clicking the Logoff button takes you back to the Login page of the Trader application. Clicking Buy or Sell takes you to a page with a field where you can enter the number of shares that you want to buy or sell. It includes a button for starting the transaction. When the transaction is done, you see the Companies list again (see Figure 3-4). To see the result of a transaction, go to the Quotes page (Figure 3-5). You do this by clicking the Quotes button on the Companies list page (Figure 3-4).
Figure 3-5 Trader company quotes page

3.1.4 Trader interface architecture and implementation

The overall architecture of the Trader Web application is a classic model-view-controller (MVC) approach. The TraderServlet contains the control logic, providing a method for each user interaction. Because of time constraints, we decided not to use the Command pattern. We recommend that you use the Command pattern for applications that are larger than the Trader application. The Command pattern provides a better separation of control and command logic, which makes the application easier to maintain.

Figure 3-6 illustrates the architecture.

Figure 3-6 Main component diagram of the Trader Web application

The TraderProcessEJB contains the front-end business logic: buy, sell, getCompanies, and so forth. The implementation is divided into two parts: an interface (TraderProcess), which is used and seen by the TraderServlet, and the actual implementation, which depends on the connector used. The JSPs format the output for the browser.

To simplify the implementation of the same base application for different variations, we used the simplified class diagram in Figure 3-7 as a basis.

Figure 3-7 Trader class diagram (simplified overview)

The TraderSuperServlet contains all the control and command logic for the application. The only methods implemented by the actual servlets are a method for creating the TraderProcess instance (createTrader) and the init() method of the servlet. The init() method initializes text strings for the construction of the Uniform Resource Locators (URLs) in the applications and displays the type of connector that is used. The TraderProcess implementations are specialized according to the connectors being used.
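The interface/implementation split described above can be sketched as follows. TraderProcess and its operations (buy, sell, getCompanies) come from the text; the method signatures and the stand-in implementation with its placeholder company names are ours, for illustration only.

```java
import java.util.Arrays;
import java.util.List;

// Front-end business interface seen by the TraderServlet. The operation
// names follow the text; the signatures are illustrative.
interface TraderProcess {
    List<String> getCompanies();
    void buy(String company, int shares);
    void sell(String company, int shares);
}

// Stand-in implementation with placeholder data. A real variant would
// delegate to the CICS ECI connector, IMS Connector for Java, SOAP for
// CICS, or WebSphere MQ; that delegation is the only part that changes
// from one connector to the next.
class InMemoryTraderProcess implements TraderProcess {
    public List<String> getCompanies() {
        return Arrays.asList("EXAMPLE_CO", "SAMPLE_INC"); // placeholder names
    }
    public void buy(String company, int shares)  { /* connector call here */ }
    public void sell(String company, int shares) { /* connector call here */ }
}
```

Because the servlet only sees TraderProcess, the same control logic drives every connector variant; only createTrader() differs per application.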
The specific connector issues include:

- CICS ECI connector: This connector uses the CICS Transaction Gateway Java client (J2EE CICS ECI Connector). The code to access the J2C ECI connector is generated by WebSphere Studio Application Developer Integration Edition (Web Services Invocation Framework code). The generated code consists of a Web service that is implemented as an EJB. It also consists of classes for setting and getting information in the ECI COMMAREA based on the object definitions that are used in the CICS programs.
- IMS connector: This connector uses IMS Connector for Java. The code to access IMS Connector for Java is generated by WebSphere Studio Application Developer Integration Edition (Web Services Invocation Framework code). The generated code consists of a Web service that is implemented as an EJB. It also consists of classes for setting and getting information in the COMMAREA based on the object definitions used in the IMS programs.
- SOAP for CICS: This uses SOAP messages to talk to CICS. The messages are received by the SOAP for CICS server. The server then sends the SOAP message to a message adapter. The message adapter (a COBOL program generated by WebSphere Enterprise Developer) parses the messages and issues a CICS LINK to the appropriate CICS program. Separate message adapters are generated for each COMMAREA size.
- WebSphere MQ: Instead of going straight to CICS from the application, there is the option to use WebSphere MQ. The TraderMQ application sends a message with WebSphere MQ to the back-end business logic in CICS. The message receiver is the CICS MQ DPL bridge. When the transaction is completed in CICS, the reply is returned by WebSphere MQ to the Trader application in WebSphere. This is a quasi-synchronous front-end solution to any traditional business logic in CICS. There is an option to use an MDB EJB as the receiver in the Trader Web application instead of a session EJB that queries the reply queue.
When you use this option, select MDB on the TraderMQ login panel and start the message listener ports on the server.

When the MDB listeners are enabled, the normal (non-MDB) TraderMQ scenarios do not work, because the MDB listener picks up the messages from TRADER.CICS.REPLYQ regardless of whether the option was selected. The XA (two-phase commit) feature must be enabled in the WebSphere MQ connection factory for this to work. If the message listeners are started when TraderMQ is run and the MDB option is not selected on the logon panel, TraderMQ waits for the message to return from CICS. However, it never receives the reply (you have to push the Abort button). The reason for this is that the MDB already picked up the message from the TRADER.CICS.REPLYQ reply queue and placed it in TRADER.PROCESSQ. Because MDB was not selected, the EJB business logic does not receive the message from TRADER.PROCESSQ.

Restriction: Trader was not implemented with the purpose of being a fully production-qualified application. Because of this, the screen flow covers only what was needed, and the fault tolerance is limited. The application cannot be expected to run in parallel without flaws. Also, not all resources were externalized using the "java:comp/env" context. This results in a lack of transactional control (the EJB transaction attribute is set to TX_NOT_SUPPORTED) and part of the implementation not being in compliance with best practices and recommended implementation patterns. However, the applications assist in verifying that a WebSphere connector is set up properly. They act as an example of how a WebSphere connector can be used in an application.
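The quasi-synchronous request/reply flow described above (request queue out, reply queue back, Abort on timeout) can be sketched as follows. This illustrates the pattern only: java.util.concurrent queues stand in for the WebSphere MQ queues, and all names are ours. The real application uses the JMS API against the Trader queues, where a reply that never arrives (as in the MDB pitfall above) surfaces as a receive timeout.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Quasi-synchronous request/reply: put a request on a request queue,
// then block waiting for the reply on a reply queue. BlockingQueues
// stand in for the MQ request and reply queues.
class RequestReplySketch {
    static BlockingQueue<String> requestQ = new LinkedBlockingQueue<>(); // request queue
    static BlockingQueue<String> replyQ = new LinkedBlockingQueue<>();   // reply queue

    static String call(String request, long timeoutMs) {
        try {
            requestQ.put(request);
            // Block until the back end places the reply; null on timeout,
            // which is when the Trader UI shows its Abort button.
            return replyQ.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) {
        // Stand-in for the CICS MQ DPL bridge: consume a request and reply.
        Thread backend = new Thread(() -> {
            try { replyQ.put("reply:" + requestQ.take()); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        backend.start();
        System.out.println(call("buy 10 IBM", 1000));
    }
}
```

If another consumer (the MDB listener in the text) drains the reply queue first, call() times out and returns null, which is exactly the hang described above.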
3.1.5 Packaging

The different Trader applications are packaged in the following enterprise application archive (EAR) files:
- TraderCICS.ear
- TraderIMS.ear
- TraderMQ.ear
- TraderSOAP.ear

Figure 3-8 shows the Trader EAR file content:
- Trader Enterprise Application (TraderXXEAR), containing:
  – Trader EJB JAR (TraderXXX; can be more than one EJB JAR per Trader application)
  – Trader Connectors JAR (TraderXXCommand)
  – Trader Web module WAR (TraderXXWeb)
  – Trader Class Library JAR (TraderLib)

Figure 3-8 Trader EAR file contents

The TraderLib.jar file is shared between all Trader applications. It contains the TraderSuperServlet, TraderProcess, and some utility classes.

The Trader Web module contains the servlet or servlets that are sub-classed from TraderSuperServlet and the JSPs used in the Web application. Because of the way J2EE 1.3 works, it is impossible to share the JSPs in the way that it is done with the TraderSuperServlet. Therefore, each Web module contains its own copy of the JSPs. The Logon.jsp is different for each Trader application, but the other JSPs are not.

The Trader EJB JAR contains the EJBs used by the servlets. In the case of TraderDB, it also contains the EJBs used for communication with the database and the business logic implementation.

The Trader Connectors JAR contains the Web service that provides access to the J2EE connectors, including the EJB that connects to the J2C connector and the generated classes that are used for getting and setting data in the J2C transaction object or objects. In CICS, this is the ECI COMMAREA. In IMS, they are the InputHandler and OutputHandler objects.

Dependencies

Each Trader application depends on the availability of some external resources to be deployable and work. All the resources are, if possible, specified by their Java Naming and Directory Interface (JNDI) name and a type.
For TraderMQ, the necessary external resources are:
- jms/TraderQCF: WebSphere MQ JMS provider connection factory
- jms/TraderCICSReqQ: JMS request destination for CICS
- jms/TraderCICSRepQ: JMS reply destination for CICS
- jms/TraderProcessQ: JMS postprocessing destination for the MDB case
- TraderMQCICSListener: MDB EJB listener (when a message is received in a queue that is listened to, the corresponding MDB is executed)
- TraderMQIMSListener: MDB EJB listener

Depending on your local environment, you might also need to define a Java Authentication and Authorization Service (JAAS) user ID and password to be used by the WebSphere MQ DPL bridge. If you want TraderMQ to work, you must set up WebSphere MQ for z/OS, the proper queues, and the WebSphere MQ DPL bridge for CICS.

For TraderCICS, the necessary external resource is:
itso/cics/eci/j2ee/trader/TraderCICSECICommandCICSECIServiceTraderCICSECICommandCICSECIPort
This is an ECI J2C connector to CICS.

For TraderIMS, the necessary external resource is:
itso/ims/j2ee/trader/TraderIMSCommandIMSServiceTraderIMSCommandIMSPort
This is an IMS J2C connector to IMS.

Restriction: Not all of the necessary resources are externalized in the Web deployment and EJB deployment descriptors. You must look up some of the resources directly and not indirectly using the "java:comp/env" context. This also means that the possibility of setting up transaction control and redirection is limited.

Figure 3-9 shows an overview of the different connector paths that are implemented in the Trader applications.
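The restriction above (some resources looked up directly rather than through "java:comp/env") could be addressed by declaring the resources in the deployment descriptors. The following is a hypothetical web.xml fragment for two of the TraderMQ resources, shown only as a sketch of the technique, not as the descriptors actually shipped with Trader:

```xml
<!-- Hypothetical J2EE 1.3 web.xml fragment externalizing the MQ
     connection factory and request queue under java:comp/env. -->
<resource-ref>
    <res-ref-name>jms/TraderQCF</res-ref-name>
    <res-type>javax.jms.QueueConnectionFactory</res-type>
    <res-auth>Container</res-auth>
</resource-ref>
<resource-env-ref>
    <resource-env-ref-name>jms/TraderCICSReqQ</resource-env-ref-name>
    <resource-env-ref-type>javax.jms.Queue</resource-env-ref-type>
</resource-env-ref>
```

With such declarations in place, application code resolves the resource with a lookup such as new InitialContext().lookup("java:comp/env/jms/TraderQCF"), leaving the binding to the actual queue manager objects to deployment time.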
Figure 3-9 Trader application connection overview

3.2 TraderCICS

We modified TraderCICS to handle three back-end CICS transactions with COMMAREA sizes of 500 bytes, 5 KB, and 20 KB. The application consists of JSPs, servlets, and both stateful and stateless session EJBs.

Note: The TraderCICS application was originally created for earlier CICS TG IBM Redbooks, and the latest version can be found in WebSphere for z/OS Connectivity Handbook, SG24-7064-01.

The responsibilities of the different components are:

Servlets:
- TraderSuperServlet is the superclass for the TraderCICSECIServlet. The servlet acts as a controller, takes requests from the user (JSPs), invokes the appropriate EJB to do business logic processing, and returns information back to the user.
- TraderCICSECIServlet is a subclass of the TraderSuperServlet. The servlet is responsible for obtaining an instance of the remote interface of the stateful session EJB.

EJBs: The application consists of stateful and stateless session EJBs. The purpose of using stateful session EJBs (even though it is not necessary in this case) was to demonstrate that an existing application can be modified to talk to CICS. The application consists of three stateful session EJBs, each representing a different back-end CICS program. They are:
– TraderCICSECI20K: Contains logic that populates a 20 KB COMMAREA
– TraderCICSECI5K: Contains logic that populates a 5 KB COMMAREA
– TraderCICSECI500: Contains logic that populates a 0.5 KB COMMAREA

These stateful session EJBs then invoke corresponding stateless session EJBs that are generated by WebSphere Studio Application Developer Integration Edition and exposed as Web services.
The three stateless session EJBs are Trader20KService, Trader5KService, and Trader500BService.

3.3 TraderSOAP

TraderSOAP was modified for our workload to handle two back-end CICS transactions with COMMAREA sizes of 500 bytes and 5 KB. The application consists of JSPs, servlets, and stateful session EJBs.

Note: The TraderSOAP application was originally created for the WebSphere for z/OS Connectivity Handbook, SG24-7064-01.

The responsibilities of the different components are:

Servlets:
- TraderSuperServlet is the superclass of the TraderCICSSOAPServlet. The servlet acts as a controller, takes requests from the user (JSPs), invokes the appropriate EJB to do business logic processing (although the EJB in this case does not do any work; it is used here to keep variables in the performance measurements constant), issues the SOAP call to CICS transactions, and returns information back to the user.
- TraderCICSSOAPServlet is a subclass of the TraderSuperServlet. The servlet is responsible for obtaining an instance of the remote interface of the stateful session EJB.

EJBs: The application consists of stateful session EJBs. Our purpose in using stateful session EJBs (even though it was not necessary in this case) was to demonstrate that an existing application can be modified to talk to CICS using SOAP. The application consists of a single stateful session EJB, TraderCICSSOAP.

The SOAP calls are done from the servlet. The code to issue these calls is generated with WebSphere Enterprise Developer. The development process is explained in the connectivity handbook. The code can be found in the Web project. The calls are:
- Trader500BProxy: This does the actual SOAP call and is generated for the 500-byte COMMAREA test case.
- Trader500BSvc: This is a Java class that was developed to provide indirection. The purpose of this class is to format the COMMAREA contents for the 500-byte COMMAREA test case.
- Trader5KProxy: This does the actual SOAP call and is generated for the 5 KB COMMAREA test case.
- Trader5KSvc: This is a Java class that was developed to provide indirection. The purpose of this class is to format the COMMAREA contents for the 5 KB COMMAREA test case.

3.4 TraderMQ

TraderMQ was modified for our workload to handle three back-end CICS transactions with COMMAREA sizes of 500 bytes, 5 KB, and 20 KB. The application consists of JSPs, servlets, stateful session EJBs, and an MDB.

Note: The TraderMQ application was originally created for the WebSphere for z/OS Connectivity Handbook, SG24-7064-01.

The responsibilities of the different components are:

Servlets:
- TraderSuperServlet is the superclass of the TraderMQCICSServlet. The servlet acts as a controller, takes requests from the user (JSPs), and invokes the appropriate EJB to do business logic processing.
- TraderMQCICSServlet is a subclass of the TraderSuperServlet. The servlet is responsible for obtaining an instance of the remote interface of the stateful session EJB.

EJBs: The application consists of stateful and stateless session EJBs. Our purpose in using stateful session EJBs (even though it was not necessary in this case) was to demonstrate that an existing application can be modified to talk to CICS. The application consists of three stateful session EJBs, each representing a different back-end CICS program. They are:
– Trader20KMQCICS: Contains logic that populates a 20 KB COMMAREA
– Trader5KMQCICS: Contains logic that populates a 5 KB COMMAREA
– Trader500BMQCICS: Contains logic that populates a 500-byte COMMAREA

These stateful session EJBs then use the JMS API to talk to MQ on z/OS.

3.5 TraderIMS

TraderIMS was modified for our workload to handle three back-end IMS transactions with COMMAREA sizes of 500 bytes, 5 KB, and 20 KB. The application consists of JSPs, servlets, and stateful and stateless session EJBs.
Note: The TraderIMS application was originally created for the WebSphere for z/OS Connectivity Handbook, SG24-7064-01.

The responsibilities of the different components are:

Servlets:
- TraderSuperServlet is the superclass of the TraderIMSECIServlet. The servlet acts as a controller, takes requests from the user (JSPs), invokes the appropriate EJB to do business logic processing, and returns information back to the user.
- TraderIMSECIServlet is a subclass of the TraderSuperServlet. The servlet is responsible for obtaining an instance of the remote interface of the stateful session EJB.

EJBs: The application consists of stateful and stateless session EJBs. Our purpose for using stateful session EJBs (although not necessary in this case) was to demonstrate that an existing application can be modified to talk to IMS. The application consists of three stateful session EJBs:
– TraderIMSECI20K: Contains logic that populates a 20 KB COMMAREA
– TraderIMSECI5K: Contains logic that populates a 5 KB COMMAREA
– TraderIMSECI500: Contains logic that populates a 500-byte COMMAREA

Each EJB represents a different back-end IMS program. These stateful session EJBs then invoke corresponding stateless session EJBs, which are generated by WebSphere Studio Application Developer Integration Edition and exposed as Web services. The three stateless session EJBs are Trader20KService, Trader5KService, and Trader500BService.

Chapter 4. Measurements and results

This chapter describes the results of our tests and is broken out into the following sections:
- The testing procedure
- Example of data captured with each test
- Detailed description of each metric that was extracted and compared in our final analysis
- The changes and adjustments that we made
- Results
Important: Keep the following information in mind as you read this chapter:
- Because all the testing was done in a short period of time, we were unable to refine our results enough to consider them official performance measurements. However, we think our experiences can be used to understand, at a high level, the performance trade-offs that are associated with the various connectivity options.
- We did some basic tuning. We tuned the number of servant regions, the number of threads, the placement of data on the Enterprise Storage Server (no local copies), and the reload interval.
- During some of our performance measurements, the Java heap size was not properly tuned. As a result, we saw excessive CPU consumption in our WebSphere servant regions during some runs. Based on other measurements outside the scope of this paper, we believe that the excessive CPU time in the servant region can easily be eliminated with proper heap tuning. Therefore, our results have been adjusted accordingly.
- This was a point-in-time measurement; changes to the connectors happen all the time. For example:
  – CICS TG V6 has performance enhancements.
  – CICS TS V3.1 has performance improvements to the SOAP connector.
- Application design: We did not make any design changes to improve the measurements. The COMMAREA that we used in many of our measurements was complex; it had many repetitive fields with many data types. This might or might not represent your environment, so be careful when comparing our results to your environment. We did a special test for SOAP for CICS to show that reducing the COMMAREA complexity reduces the CPU time requirements considerably.
- The COMMAREA represents the actual application data that is being sent. The different connectivity methods and protocols add headers and layers to the application data, so it grows as it travels to the EIS and vice versa. The actual number of bytes transferred, which we call the transport size, varies by connectivity method and transport protocol.
We intentionally did not focus on response time, because this metric might not be fair for this task. Our measured response times might have been affected by excessive simulated client load or by the processing used by garbage collection.

4.1 The testing procedure

The goal of each test was to drive total processor utilization to approximately 90%, sustain that throughput, and measure the average CPU consumption over that duration. Multiple precautions were taken to minimize the noise of other processes and prevent discrepancies in testing conditions. This was achieved by following these steps for each test:
1. Prepare the EIS databases.
2. Swap the SMF data set and restart RMF to clear out unwanted data and to ensure that we do not force a swap of SMF during our test.
3. Restart the EIS (CICS or IMS).
4. Restart WebSphere Application Server.
5. Initiate the workload through WebSphere Studio Workload Simulator.
6. Start RMF Monitor III to view test results while the test is running. This measures overall CPU utilization while the workload is being adjusted to achieve as close to 90% utilization as possible, or to the point where CPU utilization stops increasing and enclave wait or queued time starts to grow. Pushing beyond these thresholds skews results because of inflated processor delays or delays in network communications. After the necessary workload has been achieved and the test is running, RMF Monitor III is no longer used, to eliminate the processing that it uses when creating reports.
7. Sustain the test for approximately 20 minutes.
8. Stop the workload.
9. Record the data from WebSphere Studio Workload Simulator.
10. Stop RMF and swap out the SMF data, forcing it to write to a generation data set group (GDG).
11. Run an RMF Monitor I report for a 10-minute period that falls in the middle of the sustained test.
12. Record the results and save all data.

4.1.1 The test script

The test script was the same for every test case.
Each simulated client goes through the following interactions with the Trader application:
1. Log in to the Trader application and show a list of available companies. From the Web server perspective, this includes accepting a POST request for a page and a GET request for an image.
2. Obtain a quote for a company.
3. Show a list of available companies.
4. Obtain a quote for a company.
5. Show a list of available companies.
6. Choose a stock to buy. This includes a POST request for a page and a GET request for an image.
7. Buy 10 shares of the stock and return to the list of available companies.
8. Choose a stock to sell (the same stock that was bought). This includes a POST request for a page and a GET request for an image.
9. Sell 10 shares of the stock and return to the list of available companies.
10. Obtain a quote for a company.
11. Show a list of available companies.
12. Log out of the Trader application.

4.1.2 RMF Monitor III

The RMF Monitor III utility was used with each test case for the initial tuning of the environment and for finding our testing threshold.

4.2 Recorded data

Data was recorded from WebSphere Studio Workload Simulator and from RMF Monitor I reports.

4.2.1 WebSphere Studio Workload Simulator

We configured WebSphere Studio Workload Simulator to record data every 5 seconds. By default, the tool displays summary data in its console every 5 minutes. The console data was captured but not used in our final analysis.

The WebSphere Studio Workload Simulator engine works by instantiating simulated clients after a delay; our delay was set to 50 ms. As the test ran, the test administrator increased the number of simulated clients until the goal of 90% overall CPU utilization was achieved. The data reported in the WebSphere Studio Workload Simulator console averages the reported values for the duration of the test. Example 4-1 shows this output.
Example 4-1 WebSphere Studio Workload Simulator console output

11/23/2004 15:38:11 =========================Cumulative Statistics==========================
11/23/2004 15:38:11 IWL0038I Run time = 00:20:03
11/23/2004 15:38:11 IWL0007I Clients completed = 0/950
11/23/2004 15:38:11 IWL0059I Page elements = 395732
11/23/2004 15:38:11 IWL0060I Page element throughput = 328.855 /s
11/23/2004 15:38:11 IWL0059I Transactions = 0
11/23/2004 15:38:11 IWL0060I Transaction throughput = 0.000 /s
11/23/2004 15:38:11 IWL0059I Network I/O errors = 0
11/23/2004 15:38:11 IWL0059I Web server errors = 0
11/23/2004 15:38:11 IWL0059I Num of pages retrieved = 316089
11/23/2004 15:38:11 IWL0060I Page throughput = 262.672 /s
11/23/2004 15:38:11 IWL0060I HTTP data read = 1502.262 MB
11/23/2004 15:38:11 IWL0060I HTTP data written = 269.720 MB
11/23/2004 15:38:11 IWL0060I HTTP avg. page element response time = 1.180
11/23/2004 15:38:11 IWL0059I HTTP avg. page element response time = 0 (with all clients running concurrently)
11/23/2004 15:38:11 ========================================================================

WebSphere Studio Workload Simulator features a utility for graphing this data. We plotted average response time against time (Figure 4-1).

Figure 4-1 WebSphere Studio Workload Simulator graph

Due to the fine granularity of the data being captured, the graphs for each test oscillated too much to provide a single average response time. To calculate this mean, the average response time recorded every 5 seconds was averaged over our exact 10-minute test interval in a spreadsheet. Effectively, we took the mean of the graph in Figure 4-1 over a refined time range.

Attention: Response time is not the primary metric captured for this paper. Our measured response times might have been affected by excessive simulated client load or by the processing used by garbage collection.
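The spreadsheet averaging described above amounts to keeping only the 5-second samples whose timestamps fall inside the 10-minute measurement window and taking their mean. A minimal sketch (method and parameter names are ours):

```java
// Mean of periodic samples restricted to a time window, as used to turn
// the oscillating 5-second response-time samples into a single average
// over the 10-minute measurement interval. Times are seconds from the
// start of the run.
class WindowedMean {
    static double meanInWindow(double[] sampleTimes, double[] values,
                               double windowStart, double windowEnd) {
        double sum = 0;
        int n = 0;
        for (int i = 0; i < sampleTimes.length; i++) {
            if (sampleTimes[i] >= windowStart && sampleTimes[i] <= windowEnd) {
                sum += values[i];
                n++;
            }
        }
        return n == 0 ? 0.0 : sum / n;
    }
}
```

For a 10-minute window and 5-second samples, 120 samples contribute to the mean, which smooths the oscillation visible in Figure 4-1.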
4.2.2 RMF Monitor I

RMF Monitor I reports were run over a short range of time that fell within the 20-minute duration of the test. SMF was set up to record data every 5 minutes, and we summarized our results by reporting over a 10-minute interval. The Job Control Language (JCL) job is shown in Example 4-2. The classes included in the reports are all reporting classes beginning with WAS, CICS4, ERWCTG, MQ4B, or IMS48, and the WAS48 service class.

Example 4-2 RMF Monitor I report JCL job

//*
//******************************************************************
//*
//*  CREATED VIA ISPF INTERFACE
//*  z/OS V1R5 RMF
//*
//******************************************************************
//*
//******************************************************************
//*
//*  RMF SORT PROCESSING
//*
//******************************************************************
//RMFSORT  EXEC PGM=SORT,REGION=0M
//SORTIN   DD DISP=SHR,DSN=SMFDATA.RMFRECS(0)
//SORTOUT  DD DISP=(NEW,PASS),UNIT=SYSDA,SPACE=(CYL,(10,10))
//SORTWK01 DD DISP=(NEW,DELETE),UNIT=SYSDA,SPACE=(CYL,(10,10))
//SORTWK02 DD DISP=(NEW,DELETE),UNIT=SYSDA,SPACE=(CYL,(10,10))
//SORTWK03 DD DISP=(NEW,DELETE),UNIT=SYSDA,SPACE=(CYL,(10,10))
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
  SORT FIELDS=(11,4,CH,A,7,4,CH,A),EQUALS
  MODS E15=(ERBPPE15,36000,,N),E35=(ERBPPE35,3000,,N)
//******************************************************************
//*
//*  RMF POSTPROCESSING
//*
//******************************************************************
//RMFPP    EXEC PGM=ERBRMFPP,REGION=0M
//MFPINPUT DD DISP=(OLD,DELETE),DSN=*.RMFSORT.SORTOUT
//MFPMSGDS DD SYSOUT=*
//******************************************************************
//*
//*  RMF POSTPROCESSING OPTIONS GENERATED FROM:
//*    1. PROFILE DATA SET: 'WATSON.SG246365.JCL(RMF)'
//*    2. POSTPROCESSOR OPTIONS PANEL INPUT
//*
//******************************************************************
//SYSIN    DD *
  SYSRPTS(WLMGL(RCLASS(WAS*)),WLMGL(RCLASS(CICS4*)),WLMGL(SCPER(WAS48)),
  WLMGL(RCLASS(ERWCTG*)),WLMGL(RCLASS(MQ4B*)),WLMGL(RCLASS(IMS48*)))
  DATE(11232004,11232004)
  RTOD(1524,1546)
  DINTV(0005)
  SUMMARY(INT,TOT)
  SYSOUT(A)
  OVERVIEW(REPORT)
  SYSID(SC48)

RMF Monitor I produces a summary report and a workload activity report based on class. In our analysis, we used the summary report for the overall CPU usage of the LPAR being tested. Because this was captured and reported in 5-minute intervals, we averaged the CPU busy time for two intervals. The data highlighted in Example 4-3 shows the values for a test that ran from 15:35 to 15:45.

Example 4-3 RMF Monitor I summary report (condensed; other columns omitted)

              R M F  S U M M A R Y  R E P O R T
  z/OS V1R5   SYSTEM ID SC48   RPT VERSION V1R5 RMF
  START 11/23/2004-13.55.00   END 11/23/2004-15.48.01   CYCLE 1.000 SECONDS
  NUMBER OF INTERVALS 14      TOTAL LENGTH OF INTERVALS 01.02.03

  DATE    TIME       INT     CPU BUSY   DASD RESP   DASD RATE
  11/23   13.55.00   00.10      2.9        4.7         73.6
  11/23   14.46.02   03.57      6.0        6.7         19.7
  11/23   14.50.00   05.00     31.2        1.2        243.1
  11/23   14.55.00   04.59     85.1        0.8        740.3
  11/23   15.00.00   05.00     81.6        0.9        723.4
  11/23   15.05.00   05.00     87.5        0.8        818.3
  11/23   15.10.00   04.59     73.2        0.9        683.2
  11/23   15.15.00   04.59     37.5        1.6        251.2
  11/23   15.20.00   05.00     91.4        0.9        781.6
  11/23   15.25.00   04.59     90.0        0.8        770.9
  11/23   15.30.00   05.00     89.9        0.8        770.0
  11/23   15.35.00   04.59     91.5        0.9        795.1
  11/23   15.40.00   05.00     88.8        0.9        780.7
  11/23   15.45.00   03.01     63.5        1.0        553.8
  TOTAL/AVERAGE               71.7        0.9        620.3

Some of the reporting classes and service classes that were set up for the Workload Activity report are shown in Example 4-4. The highlighted values were the metrics used in our final analysis. These values were captured and compared for each of the test cases. Detailed descriptions of each of these fields can be found in 4.3, “Metrics in our final analysis” on page 56.

Example 4-4 RMF Monitor I Workload Activity report (key fields, condensed; interval 15:35 to 15:45)

  SERVICE CLASS=WAS48 (WORKLOAD=WAS, PERIOD=1, IMPORTANCE=2)
    ENDED 201350   END/S 335.58   ACTUAL 1.145   EXECUTION 106   QUEUED 1.039
    TCB 1563.1     APPL% CP 260.5
    GOAL: RESPONSE TIME 000.00.00.350 FOR 90%

  REPORT CLASS=CICS48  (cics sc48)
    TCB 123.2      SRB 12.9       IIT 2.7        APPL% CP 23.1

  REPORT CLASS=CICS48E (cics enclave sc48)
    ENDED 147577   END/S 245.96   ACTUAL 23      STD DEV 45

  REPORT CLASS=WAS48C  (was control region sc48)
    TCB 68.6       SRB 5.2        APPL% CP 12.3

  REPORT CLASS=WAS48E  (was enclaves sc48)
    ENDED 201350   END/S 335.58   TCB 1563.1     APPL% CP 260.5

  REPORT CLASS=WAS48S  (was servant sc48)
    TCB 224.8      SRB 5.4        APPL% CP 38.4

Because this was a test of the local connection to CICS, only these classes are seen in this report:
- WAS48: Service class that the WebSphere enclaves run in
- CICS48: Report class for the CICS region
- CICS48E: Report class for all CICS transactions
- WAS48C: Report class for the WebSphere controller
- WAS48S: Report class for all WebSphere servants
- WAS48E: Report class for WebSphere enclaves
- WAS49C: Report class for the WebSphere controller in the remote cases
- WAS49S: Report class for all WebSphere servants in the remote cases
- WAS49E: Report class for WebSphere
enclaves in the remote cases
IMS48C: Report class for the IMS control region
IMS48W: Report class for the IMS message processing region
IMS48X: Report class for IMS Connect
ERWFTG1: Report class for CICS Transaction Gateway
MQ4BC: Report class for MQ Channels
MQ4BM: Report class for the MQ Master

4.3 Metrics in our final analysis

In our final analysis, we distilled one main metric from WebSphere Studio Workload Simulator and as many as five metrics from each reporting class in the Workload Activity report, as follows:

Overall CPU utilization for each LPAR
Unlike the application percent value discussed later, this is measured against a total of 100%. It includes noise from background processes and other applications, which we minimized. This value can be found in the RMF Summary report under CPU Busy, as seen in Example 4-3 on page 53.

Demand paging for each LPAR
The average number of demand pages per second over the duration of the test; this was near 0 for all tests.

Average end-to-end response time
As described in 4.2.1, "WebSphere Studio Workload Simulator" on page 50, the average end-to-end response time is the mean of the average response times captured by WebSphere Studio Workload Simulator over the time range. It takes into account all processing, wait time, and network delays, including the network communication from the workload engine to WebSphere Application Server. In our tests, it is possible that response time was affected by client load or by the processing used for garbage collection (GC).

WebSphere transaction time (actual)
The average time it took for a job in WebSphere to complete; this value is the sum of execution time and queued time. It is similar to the average end-to-end response time, but excludes the network communications between the workload engine and WebSphere.
This value can be found in the Workload Activity report under TRANS.-TIME for the report class WAS48E (or WAS49E in the remote cases).

WebSphere transaction time (execution)
The average time it took for ended jobs to complete, measured from when a job becomes active to its completion. It includes the time that WebSphere spends processing, the time for the transaction to go through the connector and execute in the EIS, and all associated communication time. This value can be found in the Workload Activity report under TRANS.-TIME for the report class WAS48E (or WAS49E in the remote cases).

WebSphere transaction time (queued)
The average time that jobs were delayed while waiting to be activated. We monitored this value during testing, maintaining a balance between having enough work queued to sustain high CPU utilization and keeping this value under approximately 1 second. This value is in the Workload Activity report under TRANS.-TIME for the report class WAS48E (or WAS49E in the remote cases).

WebSphere transaction rate
The number of WebSphere transactions completed per second of the test duration. A WebSphere transaction is defined as any page element served, including images; the test script outlined in 4.1.1, "The test script" on page 49 details when images are served. This value is labeled END/S in the Workload Activity report under TRANSACTIONS for the report class WAS48E (or WAS49E in the remote cases).

CICS transaction rate
The number of CICS transactions completed per second of the test duration. Note that there is no one-to-one correlation between WebSphere transactions and CICS transactions, so in our case this number was smaller. This value is labeled END/S in the Workload Activity report.
In our test script, the following actions were counted as CICS transactions:
– Show a list of available companies
– Obtain a quote for a company
– Buy a stock
– Sell a stock

Application percent for all reporting classes
This metric is the percentage of the time of one processor that was used by the class. On the four-way LPARs that we used for testing, the maximum application percent is 400%. This value is labeled APPL% CP in the Workload Activity report (for example, for the CICS48E report class). For each test, the application percent for each report class was captured separately, which identifies where the most processor time is used in each of the different cases.

Transactions per CPU second
The number of transactions that run per second of overall CPU utilization for the LPAR, calculated by dividing the WebSphere transaction rate by the overall CPU utilization for the LPAR.

CPU milliseconds per transaction
The average number of CPU milliseconds it takes for a WebSphere transaction to complete, defined as the number of processors (4) multiplied by 1000 ms, divided by the number of transactions per CPU second. For example, in the CICS TG 0.5 KB case: transactions per CPU second = 825/0.9465 = 872 (rounded), so CPU ms per transaction = (4 x 1000)/872 = 4.587.

4.4 Tuning and adjustment

In this section, we describe the changes to the settings and the adjustments that we made for the tests.

4.4.1 Changing the settings

When we set up the testing environment, we changed some settings from their defaults to obtain optimal performance and recorded the specific settings used for each test. You can change these settings by following these instructions:

Pass by reference: In the WebSphere Admin Console, select Servers → Application Servers → Your server → ORB Service. Make sure that Pass by reference is turned on.
Number of server instances: In the WebSphere Admin Console, select Servers → Application Servers → Your server → Server Instance. Make sure that Multiple Instance Enabled is turned on. Set Minimum and Maximum Number of Instances to the total number of servants that you wish to run. We changed this value between tests, depending on the throughput and the workload differences between connectors.

Workload profile: In the WebSphere Admin Console, select Servers → Application Servers → Your server → ORB Service → Advanced Settings. Set the Workload profile to IOBOUND or LONGWAIT. With LONGWAIT, each servant has 40 threads available. With IOBOUND, each servant has 12 threads available because there are four processors: MIN(30, MAX(5, Number of CPUs x 3)). See 2.3, "WLM" on page 15 for more information.

Placement of data on the ESS was tuned for optimal performance.

4.4.2 Adjustment

In this section, we list the adjustments that we made.

Garbage collection
The results table contains two CPU ms/transaction values. The first value is the actual measurement result. The second value is adjusted to reduce the GC costs to normal values. While we were analyzing our measurement results, we recognized that the GC resource consumption was too high, especially in cases where the COMMAREA was larger. Other measurements outside the scope of this book indicate that in most cases it is possible to reduce the GC to under 2% of the total servant region CPU time. Normally, GC processor usage can be reduced by:
– Increasing the Java heap size
– Reducing the number of threads in each servant region
– Increasing the number of servant regions

This process requires multiple measurements and analysis. We did not have enough time to tune all these cases, which is the reason we had to adjust the measurement results as follows:

Transactions/CPU_sec is computed as WebSphere transaction rate/CPU utilization.
CPU ms/tran is computed as number_of_CPUs x 1000/(transactions/CPU_sec).

To have a common ground for all measurements, we reduced the servant region CPU/transaction number to 3% of the total CPU time that is consumed by the enclaves and the servant region. The formula that we used is:

Adjusted servant CPU/tran = 0.03 x (adjusted servant CPU/tran + actual enclave CPU/tran)

This results in:

Adjusted servant CPU/tran = 0.03/0.97 x actual enclave CPU/tran

So:

Adjusted CPU/tran = actual CPU/tran - (actual servant CPU/tran - adjusted servant CPU/tran)

WebSphere Studio Enterprise Developer fix
After our measurements were completed, a problem in WebSphere Studio Enterprise Developer was found and corrected (in WebSphere Studio Enterprise Developer 5.1.2.1). The fix affects all CICS SOAP measurements, but it has no effect on the other measurements. Based on performance data collected by other performance teams in IBM, and with the help of CICS performance experts, we have adjusted our CICS SOAP results to reflect the improvements that we can expect from this fix.

The Adjusted column in each table in 4.5, "Results for CICS" on page 60 shows both of these adjustments (GC and WebSphere Studio Enterprise Developer), thereby reflecting our best estimates given the constraints of our tests.

4.5 Results for CICS

Table 4-1 shows a comparison of the results from all test cases.

Table 4-1 Table of results for CICS

EIS    Connector        Location   App. data    CPU ms/transaction
                                   size (KB)    Actual     Adjusted
CICS   CICS TG          Local      0.5          4.590      4.502
CICS   CICS TG          Local      5.0          10.746     9.841
CICS   CICS TG          Local      20.0         47.253     33.234
CICS   CICS TG          Remote     0.5          5.571      5.227
CICS   CICS TG          Remote     5.0          13.396     11.793
CICS   CICS TG          Remote     20.0         37.938     33.145
CICS   SOAP             Local      0.5          10.673     8.447
CICS   SOAP             Local      5.0          73.653     59.581
CICS   SOAP             Local      Simple 5.0   16.762     13.091
CICS   SOAP             Remote     0.5          11.460     9.263
CICS   SOAP             Remote     5.0          77.792     64.385
CICS   SOAP             Remote     Simple 5.0   17.928     14.323
CICS   CICS MQ Bridge   Local      0.5          9.774      9.392
CICS   CICS MQ Bridge   Local      5.0          11.430     10.850
CICS   CICS MQ Bridge   Local      20.0         16.045     14.434
CICS   CICS MQ Bridge   Remote     0.5          12.576     12.497
CICS   CICS MQ Bridge   Remote     5.0          14.328     14.201
CICS   CICS MQ Bridge   Remote     20.0         18.878     18.544

Important: For all CICS MQ DPL Bridge test cases, very little adjustment was needed. This indicates that the CICS MQ DPL Bridge requires a smaller memory footprint in the Java heap than the other connectors. Also, the WebSphere servant appl% might be lower with proper Java heap tuning.

More detailed metrics for each test case can be found in the subsequent sections.

4.5.1 CICS Transaction Gateway

The results in this section were obtained for six tests of CICS TG. For a detailed explanation of the fields, see z/OS V1R6.0 RMF Report Analysis, SC33-7991-09.

Local (0.5 KB)
Table 4-2 shows the configuration and Table 4-3 shows the results for the local CICS TG test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that four servants running with a workload profile of LONGWAIT produced the best performance for this case.
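Before looking at the individual tables, the formulas quoted in 4.3 and 4.4 can be cross-checked against the local CICS TG 0.5 KB figures reported in Table 4-3. The following Python sketch uses only the published numbers; the conversion of appl% into CPU ms/transaction (appl% x 10 / transaction rate) is our own derivation rather than a formula stated in the text, and small residual differences are rounding in the published values.

```python
# Cross-check of the formulas in 4.3/4.4 against the local CICS TG
# 0.5 KB measurements (Table 4-3). All inputs are published figures.

def iobound_threads(cpus: int) -> int:
    # IOBOUND workload profile: MIN(30, MAX(5, number of CPUs * 3))
    return min(30, max(5, cpus * 3))

CPUS = 4                  # four-way LPAR
ws_rate = 824.82          # WebSphere transactions ended per second
cpu_util = 0.947          # overall LPAR CPU utilization (94.7%)
actual_cpu_ms = 4.590     # published actual CPU ms/transaction
servant_appl = 14.3       # WebSphere servant appl% (of one CPU)
enclave_appl = 227.1      # WebSphere enclaves appl%

# Transactions per CPU second and CPU ms per transaction (4.3)
trans_per_cpu_sec = ws_rate / cpu_util
cpu_ms_per_tran = CPUS * 1000 / trans_per_cpu_sec

# GC adjustment (4.4.2): scale servant CPU/tran down to 3% of the
# CPU consumed by enclaves plus servant. 1 appl% = 10 CPU ms/sec,
# so appl% * 10 / rate gives CPU ms per transaction (our derivation).
servant_ms = servant_appl * 10 / ws_rate
enclave_ms = enclave_appl * 10 / ws_rate
adj_servant_ms = 0.03 / 0.97 * enclave_ms
adjusted_cpu_ms = actual_cpu_ms - (servant_ms - adj_servant_ms)

print(iobound_threads(CPUS))         # 12 threads per servant
print(round(cpu_ms_per_tran, 2))     # ~4.59 (published: 4.590)
print(round(adjusted_cpu_ms, 3))     # ~4.502 (published: 4.502)
```

Run against Table 4-3, both the actual and the adjusted CPU ms/transaction values are reproduced to within a few thousandths of a millisecond, which suggests the adjustment was applied exactly as the formulas describe.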
Table 4-2 Test configuration

Variable name        Value
Number of clients    2000
Workload profile     LONGWAIT
Number of servants   4

Table 4-3 Test results

Variable name                            Value
Actual CPU milliseconds/transaction      4.590 ms
Adjusted CPU milliseconds/transaction    4.502 ms
WebSphere transaction rate               824.82 / sec
EIS transaction rate                     606.59 / sec
WebSphere transaction time (actual)      0.713 sec
WebSphere transaction time (execution)   0.144 sec
WebSphere transaction time (queued)      0.569 sec
Overall CPU utilization for LPAR         94.7%
Demand paging on the LPAR                0.05 / sec
Average end-to-end response time         0.798 sec
WebSphere controller appl%               39.3%
WebSphere servant appl%                  14.3%
WebSphere enclaves appl%                 227.1%
CICS appl%                               56.3%

Local (5 KB)
Table 4-4 shows the configuration and Table 4-5 shows the results for the local CICS TG test case with a 5 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case.

Table 4-4 Test configuration

Variable name        Value
Number of clients    950
Workload profile     LONGWAIT
Number of servants   2

Table 4-5 Test results

Variable name                            Value
CPU milliseconds/transaction             10.746 ms
Adjusted CPU milliseconds/transaction    9.841 ms
WebSphere transaction rate               335.58 / sec
EIS transaction rate                     245.96 / sec
WebSphere transaction time (actual)      1.145 sec
WebSphere transaction time (execution)   0.106 sec
WebSphere transaction time (queued)      1.039 sec
Overall CPU utilization for LPAR         90.2%
Demand paging on the LPAR                0.01 / sec
Average end-to-end response time         1.247 sec
WebSphere controller appl%               12.3%
WebSphere servant appl%                  38.4%
WebSphere enclaves appl%                 260.5%
CICS appl%                               23.1%

Local (20 KB)
Table 4-6 shows the configuration and Table 4-7 shows the results for the local CICS TG test case with a 20 KB COMMAREA.
After preliminary tests, we determined that one servant running with a LONGWAIT workload profile produced the best performance for this test case.

Table 4-6 Test configuration

Variable name        Value
Number of clients    185
Workload profile     LONGWAIT
Number of servants   1

Table 4-7 Test results

Variable name                            Value
CPU milliseconds/transaction             47.253 ms
Adjusted CPU milliseconds/transaction    33.234 ms
WebSphere transaction rate               75.72 / sec
EIS transaction rate                     55.7 / sec
WebSphere transaction time (actual)      0.837 sec
WebSphere transaction time (execution)   0.390 sec
WebSphere transaction time (queued)      0.446 sec
Overall CPU utilization for LPAR         89.5%
Demand paging on the LPAR                0.01 / sec
Average end-to-end response time         1.213 sec
WebSphere controller appl%               2.5%
WebSphere servant appl%                  112.9%
WebSphere enclaves appl%                 218.1%
CICS appl%                               6.3%

Remote (0.5 KB)
Table 4-8 shows the configuration and Table 4-9 shows the results for the remote CICS TG test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this test case.
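Because a WebSphere transaction is any served page element (including images) while only the four script actions listed in 4.3 reach CICS, the ratio of the WebSphere transaction rate to the EIS transaction rate should stay roughly constant across all CICS TG runs. A quick check using the rates published in this section confirms this; the observation is ours, not a figure from the report.

```python
# WebSphere vs. EIS transaction rates for the six CICS TG runs,
# as published in Tables 4-3, 4-5, 4-7, 4-9, 4-11, and 4-13.
rates = {
    "local 0.5 KB":  (824.82, 606.59),
    "local 5 KB":    (335.58, 245.96),
    "local 20 KB":   (75.72,  55.70),
    "remote 0.5 KB": (636.88, 467.43),
    "remote 5 KB":   (321.89, 236.12),
    "remote 20 KB":  (104.75, 76.98),
}
ratios = {case: ws / eis for case, (ws, eis) in rates.items()}
for case, r in ratios.items():
    # all six ratios come out close to 1.36 page elements per CICS call
    print(case, round(r, 3))
```

The spread across the six runs is under one percent, which is what you would expect from a fixed test script driving a fixed mix of page elements and back-end calls.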
Table 4-8 Test configuration

Variable name        Value
Number of clients    1200
Workload profile     LONGWAIT
Number of servants   4

Table 4-9 Test results

Variable name                                        Value
CPU milliseconds/transaction                         5.571 ms
Adjusted CPU milliseconds/transaction                5.227 ms
WebSphere transaction rate                           636.88 / sec
EIS transaction rate                                 467.43 / sec
WebSphere transaction time (actual)                  0.768 sec
WebSphere transaction time (execution)               0.129 sec
WebSphere transaction time (queued)                  0.638 sec
Overall CPU utilization for LPAR running WebSphere   62.1%
Overall CPU utilization for LPAR running the EIS     26.7%
Demand paging on the LPAR running WebSphere          0.03 / sec
Demand paging on the LPAR running the EIS            0.00 / sec
Average end-to-end response time                     0.915 sec
WebSphere controller appl%                           24.7%
WebSphere servant appl%                              27.0%
WebSphere enclaves appl%                             165.5%
CICS appl%                                           39.3%
CICS Transaction Gateway appl%                       44.9%

Remote (5 KB)
Table 4-10 shows the configuration and Table 4-11 shows the results for the remote CICS TG test case with a 5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this case.
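The text does not spell out how CPU ms/transaction is computed in the remote cases, where WebSphere and the EIS run on separate four-way LPARs. One reading that reproduces the published figure to within rounding is to sum the two LPAR utilizations before applying the formula from 4.3. This is our assumption, not something the report states.

```python
# Hypothetical check for the remote CICS TG 0.5 KB case (Table 4-9).
# ASSUMPTION: the WebSphere-LPAR and EIS-LPAR utilizations are summed
# before applying the CPU ms/transaction formula from 4.3.
ws_rate = 636.88        # WebSphere transactions per second
util_ws_lpar = 0.621    # LPAR running WebSphere (62.1%)
util_eis_lpar = 0.267   # LPAR running CICS and CICS TG (26.7%)
cpus = 4                # each LPAR is a four-way

combined_util = util_ws_lpar + util_eis_lpar
cpu_ms_per_tran = cpus * 1000 * combined_util / ws_rate
print(round(cpu_ms_per_tran, 2))   # ~5.58, vs. the published 5.571
```

Under this assumption the computed value lands within about 0.1% of the published 5.571 ms, so the remote figures appear to charge both LPARs' CPU to each transaction.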
Table 4-10 Test configuration

Variable name        Value
Number of clients    700
Workload profile     LONGWAIT
Number of servants   4

Table 4-11 Test results

Variable name                                        Value
CPU milliseconds/transaction                         13.396 ms
Adjusted CPU milliseconds/transaction                11.793 ms
WebSphere transaction rate                           321.89 / sec
EIS transaction rate                                 236.12 / sec
WebSphere transaction time (actual)                  0.559 sec
WebSphere transaction time (execution)               0.145 sec
WebSphere transaction time (queued)                  0.413 sec
Overall CPU utilization for LPAR running WebSphere   88.2%
Overall CPU utilization for LPAR running the EIS     19.6%
Demand paging on the LPAR running WebSphere          0.19 / sec
Demand paging on the LPAR running the EIS            0.01 / sec
Average end-to-end response time                     0.584 sec
WebSphere controller appl%                           10.8%
WebSphere servant appl%                              59.5%
WebSphere enclaves appl%                             255.2%
CICS appl%                                           24.1%
CICS Transaction Gateway appl%                       35.4%

Remote (20 KB)
Table 4-12 shows the configuration and Table 4-13 shows the results for the remote CICS TG test case with a 20 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case.

Table 4-12 Test configuration

Variable name        Value
Number of clients    200
Workload profile     LONGWAIT
Number of servants   2

Table 4-13 Test results

Variable name                                        Value
CPU milliseconds/transaction                         37.938 ms
Adjusted CPU milliseconds/transaction                33.145 ms
WebSphere transaction rate                           104.75 / sec
EIS transaction rate                                 76.98 / sec
WebSphere transaction time (actual)                  0.327 sec
WebSphere transaction time (execution)               0.248 sec
WebSphere transaction time (queued)                  0.079 sec
Overall CPU utilization for LPAR running WebSphere   86.9%
Overall CPU utilization for LPAR running the EIS     12.5%
Demand paging on the LPAR running WebSphere          0.02 / sec
Demand paging on the LPAR running the EIS            0.01 / sec
Average end-to-end response time                     0.349 sec
WebSphere controller appl%                           3.4%
WebSphere servant appl%                              58.4%
WebSphere enclaves appl%                             264.9%
CICS appl%                                           9.3%
CICS Transaction Gateway appl%                       24.2%

4.5.2 SOAP for CICS

The results in this section were obtained for six tests of SOAP for CICS. In our test environment, we used a single CICS region and executed the business logic of the Trader application on the CICS QR TCB. As a result, we were only able to drive our throughput to the point where the CICS QR TCB was using approximately 90% of one processor.

Note: If the Trader application had been designed as threadsafe, with data access in IBM DB2, we could have used the CICS Open Transaction Environment (OTE), allowing us to run the business logic on additional CICS TCBs, and therefore allowing CICS to use more than one processor. For additional information about how to use the CICS OTE, refer to Threadsafe considerations for CICS, SG24-5631. Note that in CICS TS V3.1, SOAP work is also off-loaded to other TCBs, which increases throughput in a multi-processor environment.

Because of the nature of XML, the SOAP for CICS results vary with the complexity of the data structure that is being used. We demonstrated this by running two sets of tests in the 5 KB case: a simple case with a smaller number of larger fields in the COMMAREA, and a more complex case with a very large number of small fields. This test case was run in local and remote scenarios.

In the complex 5 KB case, there were 1300 elements and the transport size was 77 KB; in the simple 5 KB case, there were 160 elements and the total transport size was approximately 38 KB.
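Using the element counts and transport sizes above, the XML expansion relative to the 5 KB COMMAREA can be made explicit. The per-element figure is a rough average derived from the published numbers, not a value from the report.

```python
# XML transport expansion for the two 5 KB SOAP payloads described
# above: COMMAREA size vs. SOAP transport size on the wire.
cases = {
    # name: (commarea_kb, transport_kb, element_count)
    "complex": (5, 77, 1300),
    "simple":  (5, 38, 160),
}
for name, (data_kb, wire_kb, elems) in cases.items():
    expansion = wire_kb / data_kb          # wire bytes per data byte
    per_elem = wire_kb * 1024 / elems      # avg transport bytes/element
    print(name, round(expansion, 1), "x expansion,",
          round(per_elem), "bytes per element")
```

The complex payload expands roughly 15x on the wire versus under 8x for the simple payload, which lines up with the markedly higher CPU ms/transaction measured for the complex 5 KB cases.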
It should also be noted that the transaction rate for CICS, as shown in the RMF report, is twice what it would be in the CICS TG case. This is because the SOAP case uses a Web Attach transaction. The EIS transaction rate that you see in our examples is calculated by dividing the reported transaction rate by 2.

Note: The CICS appl% values should be much lower after applying the fix to WebSphere Studio Enterprise Developer. Refer to "WebSphere Studio Enterprise Developer fix" on page 60 for more information about this fix.

Keep-alive
WebSphere Application Server for z/OS V5.1 does not support keep-alive for outbound HTTP requests. Therefore, when sending SOAP messages to a CICS TS V2.3 system with SOCKETCLOSE(10) specified in the TCPIPSERVICE resource definition, the socket is closed after each message is processed. As a result, each SOAP message causes two CICS transactions to be run, namely the Web Attach transaction (CWXN) and the Web Alias transaction (CWBA). WebSphere V6 supports keep-alive for outbound HTTP 1.1 requests. When used in conjunction with the HTTP 1.1 support in CICS TS V3.1, keep-alive can be used for WebSphere SOAP requests to CICS. For more information, refer to:

http://publib.boulder.ibm.com/infocenter/wasinfo/v6r0/index.jsp?topic=/com.ibm.websphere.zseries.doc/info/zseries/ae/rwbs_transportheaderproperty.html

Local complex (0.5 KB)
Table 4-14 shows the configuration and Table 4-15 on page 70 shows the results for the local SOAP for CICS test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that five servants running with an IOBOUND workload profile produced the best performance for this case because of the work that is done by WebSphere to create and parse XML.

Table 4-14 Test configuration

Variable name        Value
Number of clients    700
Workload profile     IOBOUND
Number of servants   5
Table 4-15 Test results

Variable name                            Value
CPU milliseconds/transaction             10.673 ms
Adjusted CPU milliseconds/transaction    8.447 ms
WebSphere transaction rate               196.75 / sec
EIS transaction rate                     144.14 / sec
WebSphere transaction time (actual)      1.962 sec
WebSphere transaction time (execution)   0.201 sec
WebSphere transaction time (queued)      1.761 sec
Overall CPU utilization for LPAR         52.5%
Demand paging on the LPAR                1.19 / sec
Average end-to-end response time         1.990 sec
WebSphere controller appl%               6.7%
WebSphere servant appl%                  4.5%
WebSphere enclaves appl%                 83.5%
CICS appl%                               93.7%

Local complex (5 KB)
Table 4-16 shows the test configuration and Table 4-17 on page 71 shows the results for the local SOAP for CICS test case with a 5 KB COMMAREA. After preliminary tests, we determined that two servants running with an IOBOUND workload profile produced the best performance for this case because of the work that is done by WebSphere to create and parse XML.

Table 4-16 Test configuration

Variable name        Value
Number of clients    70
Workload profile     IOBOUND
Number of servants   2

Table 4-17 Test results

Variable name                            Value
CPU milliseconds/transaction             73.653 ms
Adjusted CPU milliseconds/transaction    59.581 ms
WebSphere transaction rate               30.06 / sec
EIS transaction rate                     22.07 / sec
WebSphere transaction time (actual)      0.760 sec
WebSphere transaction time (execution)   0.447 sec
WebSphere transaction time (queued)      0.313 sec
Overall CPU utilization for LPAR         55.4%
Demand paging on the LPAR                0.01 / sec
Average end-to-end response time         0.818 sec
WebSphere controller appl%               1.0%
WebSphere servant appl%                  8.6%
WebSphere enclaves appl%                 104.2%
CICS appl%                               94.9%

Local simple (5 KB)
Table 4-18 shows the configuration and Table 4-19 on page 72 shows the results for the local SOAP for CICS test case with a 5 KB COMMAREA made up of simpler data.
After preliminary tests, we determined that five servants running with an IOBOUND workload profile produced the best performance for this case because of the work that is done by WebSphere to create and parse XML.

Table 4-18 Test configuration

Variable name        Value
Number of clients    270
Workload profile     IOBOUND
Number of servants   5

Table 4-19 Test results

Variable name                            Value
CPU milliseconds/transaction             16.762 ms
Adjusted CPU milliseconds/transaction    13.091 ms
WebSphere transaction rate               115.14 / sec
EIS transaction rate                     84.67 / sec
WebSphere transaction time (actual)      0.738 sec
WebSphere transaction time (execution)   0.200 sec
WebSphere transaction time (queued)      0.537 sec
Overall CPU utilization for LPAR         48.3%
Demand paging on the LPAR                3.15 / sec
Average end-to-end response time         0.791 sec
WebSphere controller appl%               3.6%
WebSphere servant appl%                  4.9%
WebSphere enclaves appl%                 82.0%
CICS appl%                               84.6%

Remote complex (0.5 KB)
Table 4-20 shows the configuration and Table 4-21 on page 73 shows the results for the remote SOAP for CICS test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that five servants running with an IOBOUND workload profile produced the best performance for this case because of the work that is done in WebSphere to create and parse XML.
Table 4-20 Test configuration

Variable name        Value
Number of clients    550
Workload profile     IOBOUND
Number of servants   5

Table 4-21 Test results

Variable name                                        Value
CPU milliseconds/transaction                         11.460 ms
Adjusted CPU milliseconds/transaction                9.263 ms
WebSphere transaction rate                           196.68 / sec
EIS transaction rate                                 144.28 / sec
WebSphere transaction time (actual)                  1.209 sec
WebSphere transaction time (execution)               0.258 sec
WebSphere transaction time (queued)                  0.951 sec
Overall CPU utilization for LPAR running WebSphere   29.1%
Overall CPU utilization for LPAR running the EIS     27.3%
Demand paging on the LPAR running WebSphere          1.09 / sec
Demand paging on the LPAR running the EIS            0.00 / sec
Average end-to-end response time                     1.270 sec
WebSphere controller appl%                           6.2%
WebSphere servant appl%                              4.2%
WebSphere enclaves appl%                             83.6%
CICS appl%                                           93.4%

Remote complex (5 KB)
Table 4-22 on page 74 shows the configuration and Table 4-23 on page 74 shows the results for the remote SOAP for CICS test case with a 5 KB COMMAREA. After preliminary tests, we determined that two servants running with an IOBOUND workload profile produced the best performance for this case because of the work that is done by WebSphere to create and parse XML.

Table 4-22 Test configuration

Variable name        Value
Number of clients    70
Workload profile     IOBOUND
Number of servants   2

Table 4-23 Test results

Variable name                                        Value
CPU milliseconds/transaction                         77.792 ms
Adjusted CPU milliseconds/transaction                64.385 ms
WebSphere transaction rate                           30.80 / sec
EIS transaction rate                                 22.63 / sec
WebSphere transaction time (actual)                  0.697 sec
WebSphere transaction time (execution)               0.590 sec
WebSphere transaction time (queued)                  0.107 sec
Overall CPU utilization for LPAR running WebSphere   32.5%
Overall CPU utilization for LPAR running the EIS     27.5%
Demand paging on the LPAR running WebSphere          0.02 / sec
Demand paging on the LPAR running the EIS            0.00 / sec
Average end-to-end response time                     0.717 sec
WebSphere controller appl%                           1.1%
WebSphere servant appl%                              7.3%
WebSphere enclaves appl%                             106.7%
CICS appl%                                           96.7%

Remote simple (5 KB)
Table 4-24 shows the configuration and Table 4-25 shows the results for the remote SOAP for CICS test case with a 5 KB COMMAREA made up of simpler data. After preliminary tests, we determined that five servants running with an IOBOUND workload profile produced the best performance for this case because of the work that is done by WebSphere to create and parse XML.
Table 4-24 Test configuration

Variable name        Value
Number of clients    270
Workload profile     IOBOUND
Number of servants   5

Table 4-25 Test results

Variable name                                        Value
CPU milliseconds/transaction                         17.928 ms
Adjusted CPU milliseconds/transaction                14.323 ms
WebSphere transaction rate                           122.60 / sec
EIS transaction rate                                 90.14 / sec
WebSphere transaction time (actual)                  0.620 sec
WebSphere transaction time (execution)               0.275 sec
WebSphere transaction time (queued)                  0.344 sec
Overall CPU utilization for LPAR running WebSphere   28.9%
Overall CPU utilization for LPAR running the EIS     26.1%
Demand paging on the LPAR running WebSphere          0.12 / sec
Demand paging on the LPAR running the EIS            0.01 / sec
Average end-to-end response time                     0.627 sec
WebSphere controller appl%                           3.8%
WebSphere servant appl%                              4.4%
WebSphere enclaves appl%                             87.7%
CICS appl%                                           90.1%

4.5.3 CICS MQ DPL Bridge

The following results were obtained for six tests of the CICS MQ DPL Bridge.

Local (0.5 KB)
Table 4-26 shows the configuration and Table 4-27 shows the results for the local CICS MQ DPL Bridge test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this case.
Table 4-26 Test configuration Variable name Value Number of clients Workload profile 450 LONGWAIT Number of servants 4 Table 4-27 Test results Variable name CPU milliseconds/transaction 9.774 ms Adjusted CPU milliseconds/transaction 9.392 ms WebSphere transaction rate 360.95 / sec EIS transaction rate 265.02 / sec WebSphere transaction time (actual) 0.174 sec WebSphere transaction time (execution) 0.091 sec WebSphere transaction time (queued) 0.083 sec Overall CPU utilization for LPAR 88.2% Demand paging on the LPAR 0.01 / sec Average end-to-end response time 0.174 sec WebSphere controller appl% 76 Value 11.6% WebSphere for z/OS to CICS and IMS Connectivity Performance Variable name Value WebSphere servant appl% 21.1% WebSphere enclaves appl% 235.7% CICS appl% 50.7% MQ master appl% 9.8% Local (5 KB) Table 4-28 shows the configuration and Figure 4-29 shows the results for the local CICS MQ DPL Bridge test case with a 5 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case. Table 4-28 Test configuration Variable name Value Number of clients Workload profile 950 LONGWAIT Number of servants 2 Table 4-29 Test results Variable name Value CPU milliseconds/transaction 11.430 ms Adjusted CPU milliseconds/transaction 10.850 ms WebSphere transaction rate 312.69 / sec EIS transaction rate 229.43 / sec WebSphere transaction time (actual) 0.209 sec WebSphere transaction time (execution) 0.114 sec WebSphere transaction time (queued) 0.095 sec Overall CPU utilization for LPAR 89.4% Demand paging on the LPAR 0.00 / sec Average end-to-end response time 0.221 sec WebSphere controller appl% 10.0% WebSphere servant appl% 25.4% Chapter 4. 
WebSphere enclaves appl%                 234.7%
CICS appl%                               54.9%
MQ master appl%                          8.6%

Local (20 KB)

Table 4-30 shows the configuration and Table 4-31 shows the results for the local CICS MQ DPL Bridge test case with a 20 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case.

Table 4-30 Test configuration
Variable name        Value
Number of clients    300
Workload profile     LONGWAIT
Number of servants   2

Table 4-31 Test results
Variable name                            Value
CPU milliseconds/transaction             16.045 ms
Adjusted CPU milliseconds/transaction    14.434 ms
WebSphere transaction rate               218.51 / sec
EIS transaction rate                     157.96 / sec
WebSphere transaction time (actual)      0.310 sec
WebSphere transaction time (execution)   0.144 sec
WebSphere transaction time (queued)      0.165 sec
Overall CPU utilization for LPAR         87.7%
Demand paging on the LPAR                0.00 / sec
Average end-to-end response time         0.379 sec
WebSphere controller appl%               7.5%
WebSphere servant appl%                  41.9%
WebSphere enclaves appl%                 216.3%
CICS appl%                               56.3%
MQ master appl%                          6.2%

Remote (0.5 KB)

Table 4-32 shows the configuration and Table 4-33 shows the results for the remote CICS MQ DPL Bridge test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this test case.
Table 4-32 Test configuration
Variable name        Value
Number of clients    600
Workload profile     LONGWAIT
Number of servants   4

Table 4-33 Test results
Variable name                                        Value
CPU milliseconds/transaction                         12.576 ms
Adjusted CPU milliseconds/transaction                12.497 ms
WebSphere transaction rate                           465.73 / sec
EIS transaction rate                                 334.70 / sec
WebSphere transaction time (actual)                  0.232 sec
WebSphere transaction time (execution)               0.073 sec
WebSphere transaction time (queued)                  0.158 sec
Overall CPU utilization for LPAR running WebSphere   83.4%
Overall CPU utilization for LPAR running the EIS     60.2%
Demand paging on the LPAR running WebSphere          0.00 / sec
Demand paging on the LPAR running the EIS            0.00 / sec
Average end-to-end response time                     0.250 sec
WebSphere controller appl%                           14.4%
WebSphere servant appl%                              12.1%
WebSphere enclaves appl%                             273.6%
CICS appl%                                           63.3%
MQ master appl%                                      15.8%
MQ channels appl%                                    129.7%

Remote (5 KB)

Table 4-34 shows the configuration and Table 4-35 shows the results for the remote CICS MQ DPL Bridge test case with a 5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this test case.
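The appl% figures can be used to cross-check the reported CPU cost per transaction: appl% is the percentage of one processor consumed by an address space, so summing the listed appl% values, converting to CPU milliseconds per wall-clock second, and dividing by the transaction rate approximates CPU milliseconds per transaction. A rough sketch using the Table 4-33 values (our own back-of-the-envelope check, not a formula from the report; the shortfall versus the reported 12.576 ms is CPU captured outside the listed address spaces, such as TCP/IP):

```python
# appl% values from Table 4-33 (remote CICS MQ DPL Bridge, 0.5 KB COMMAREA)
appl_pct = {
    "controller": 14.4,
    "servant": 12.1,
    "enclaves": 273.6,
    "cics": 63.3,
    "mq_master": 15.8,
    "mq_channels": 129.7,
}
tran_rate = 465.73  # WebSphere transactions / sec

# 100 appl% = one engine fully busy = 1000 CPU ms per wall-clock second
cpu_ms_per_sec = sum(appl_pct.values()) * 10.0
est_ms_per_tran = cpu_ms_per_sec / tran_rate

print(f"estimated: {est_ms_per_tran:.2f} ms/tran (reported: 12.576 ms)")
```

The estimate comes out a little under the reported value, as expected for an accounting that omits system address spaces.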
Table 4-34 Test configuration
Variable name        Value
Number of clients    500
Workload profile     LONGWAIT
Number of servants   2

Table 4-35 Test results
Variable name                                        Value
CPU milliseconds/transaction                         14.328 ms
Adjusted CPU milliseconds/transaction                14.201 ms
WebSphere transaction rate                           401.88 / sec
EIS transaction rate                                 294.93 / sec
WebSphere transaction time (actual)                  0.292 sec
WebSphere transaction time (execution)               0.092 sec
WebSphere transaction time (queued)                  0.199 sec
Overall CPU utilization for LPAR running WebSphere   82.2%
Overall CPU utilization for LPAR running the EIS     61.75%
Demand paging on the LPAR running WebSphere          0.00 / sec
Demand paging on the LPAR running the EIS            0.00 / sec
Average end-to-end response time                     0.304 sec
WebSphere controller appl%                           12.8%
WebSphere servant appl%                              13.4%
WebSphere enclaves appl%                             269.1%
CICS appl%                                           70.5%
MQ master appl%                                      14.3%
MQ channels appl%                                    128.6%

Remote (20 KB)

Table 4-36 shows the configuration and Table 4-37 shows the results for the remote CICS MQ DPL Bridge test case with a 20 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this test case.

Table 4-36 Test configuration
Variable name        Value
Number of clients    370
Workload profile     LONGWAIT
Number of servants   2

Table 4-37 Test results
Variable name                            Value
CPU milliseconds/transaction             18.878 ms
Adjusted CPU milliseconds/transaction    18.544 ms
WebSphere transaction rate               297.28 / sec
EIS transaction rate                     218.28 / sec
WebSphere transaction time (actual)                  0.175 sec
WebSphere transaction time (execution)               0.109 sec
WebSphere transaction time (queued)                  0.066 sec
Overall CPU utilization for LPAR running WebSphere   84.65%
Overall CPU utilization for LPAR running the EIS     55.65%
Demand paging on the LPAR running WebSphere          0.00 / sec
Demand paging on the LPAR running the EIS            0.00 / sec
Average end-to-end response time                     0.184 sec
WebSphere controller appl%                           9.1%
WebSphere servant appl%                              18.5%
WebSphere enclaves appl%                             277.6%
CICS appl%                                           76.7%
MQ master appl%                                      11.4%
MQ channels appl%                                    103.0%

4.6 Results for IMS

Table 4-38 is a comparison of the results from all IMS test cases.

Table 4-38 Table of results for IMS
EIS   Connector           Location   Application data size (KB)   CPU ms/transaction (actual / adjusted)
IMS   IMS Connect         Local      0.5     6.735 / 6.628
                                     5.0     13.219 / 12.076
                                     20.0    40.033 / 32.652
                          Remote     0.5     7.263 / 7.178
                                     5.0     14.371 / 12.926
                                     20.0    52.457 / 37.623
      MQ/IMS DPL Bridge   Local      5.0     47.248 / 45.202

Important: WebSphere servant appl% could be lower with proper Java heap tuning.

More detailed metrics for each test case are discussed in the sections that follow.

4.6.1 IMS Connect

The following results were obtained for the six tests of IMS Connect.

Local (0.5 KB)

Table 4-39 shows the configuration and Table 4-40 shows the results for the local IMS Connect test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this case.

Table 4-39 Test configuration
Variable name        Value
Number of clients    700
Workload profile     LONGWAIT
Number of servants   4

Table 4-40 Test results
Variable name                            Value
CPU milliseconds/transaction             6.735 ms
Adjusted CPU milliseconds/transaction    6.628 ms
WebSphere transaction rate               279.13 / sec
EIS transaction rate                     204.91 / sec
WebSphere transaction time (actual)      0.866 sec
WebSphere transaction time (execution)   0.196 sec
WebSphere transaction time (queued)      0.669 sec
Overall CPU utilization for LPAR         47.0%
Demand paging on the LPAR                0.165 / sec
Average end-to-end response time         0.980 sec
WebSphere controller appl%               9.5%
WebSphere servant appl%                  5.0%
WebSphere enclaves appl%                 64.6%
IMS appl%                                80.2%
IMS Connect appl%                        2.6%

Local (5 KB)

Table 4-41 shows the configuration and Table 4-42 shows the results for the local IMS Connect test case with a 5 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case.

Table 4-41 Test configuration
Variable name        Value
Number of clients    400
Workload profile     LONGWAIT
Number of servants   2

Table 4-42 Test results
Variable name                            Value
CPU milliseconds/transaction             13.219 ms
Adjusted CPU milliseconds/transaction    12.076 ms
WebSphere transaction rate               206.06 / sec
EIS transaction rate                     147.44 / sec
WebSphere transaction time (actual)      0.325 sec
WebSphere transaction time (execution)   0.167 sec
WebSphere transaction time (queued)      0.157 sec
Overall CPU utilization for LPAR         68.1%
Demand paging on the LPAR                0.06 / sec
Average end-to-end response time         0.410 sec
WebSphere controller appl%               6.9%
WebSphere servant appl%                  28.7%
WebSphere enclaves appl%                 166.4%
IMS appl%                                44.2%
IMS Connect appl%                        2.1%

Local (20 KB)

Table 4-43 shows the configuration and Table 4-44 shows the results for the local IMS Connect test case with a 20 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case.
Table 4-43 Test configuration
Variable name        Value
Number of clients    135
Workload profile     LONGWAIT
Number of servants   2

Table 4-44 Test results
Variable name                            Value
CPU milliseconds/transaction             40.033 ms
Adjusted CPU milliseconds/transaction    32.652 ms
WebSphere transaction rate               72.64 / sec
EIS transaction rate                     51.89 / sec
WebSphere transaction time (actual)      0.279 sec
WebSphere transaction time (execution)   0.229 sec
WebSphere transaction time (queued)      0.050 sec
Overall CPU utilization for LPAR         72.7%
Demand paging on the LPAR                0.01 / sec
Average end-to-end response time         0.310 sec
WebSphere controller appl%               2.8%
WebSphere servant appl%                  59.5%
WebSphere enclaves appl%                 190.2%
IMS appl%                                17.8%
IMS Connect appl%                        0.9%

Remote (0.5 KB)

Table 4-45 shows the configuration and Table 4-46 shows the results for the remote IMS Connect test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this case.
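One pattern worth noting across all of the result tables, CICS and IMS alike, is that the WebSphere transaction rate is consistently about 1.36 to 1.40 times the EIS transaction rate. We read this as the Trader workload mix driving a back-end call on only roughly three out of four WebSphere transactions; that is an inference from the numbers, not a statement from the report. A sketch over a few of the reported (WebSphere, EIS) rate pairs:

```python
# (WebSphere tran/sec, EIS tran/sec) pairs taken from the result tables
rate_pairs = [
    (122.60, 90.14),   # Table 4-25
    (360.95, 265.02),  # Table 4-27
    (465.73, 334.70),  # Table 4-33
    (279.13, 204.91),  # Table 4-40
    (72.64, 51.89),    # Table 4-44
]

# The WebSphere:EIS ratio is stable across connectors and data sizes
ratios = [ws / eis for ws, eis in rate_pairs]
assert all(1.35 < r < 1.41 for r in ratios)

# Fraction of WebSphere transactions that drive an EIS call
fractions = [eis / ws for ws, eis in rate_pairs]
print(f"EIS calls per WebSphere transaction: {min(fractions):.2f}-{max(fractions):.2f}")
```

Because the ratio is nearly constant, either rate can be used when comparing throughput across the test cases.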
Table 4-45 Test configuration
Variable name        Value
Number of clients    700
Workload profile     LONGWAIT
Number of servants   4

Table 4-46 Test results
Variable name                                        Value
CPU milliseconds/transaction                         7.263 ms
Adjusted CPU milliseconds/transaction                7.178 ms
WebSphere transaction rate                           248.37 / sec
EIS transaction rate                                 182.31 / sec
WebSphere transaction time (actual)                  0.951 sec
WebSphere transaction time (execution)               0.263 sec
WebSphere transaction time (queued)                  0.688 sec
Overall CPU utilization for LPAR running WebSphere   19.6%
Overall CPU utilization for LPAR running the EIS     25.5%
Demand paging on the LPAR running WebSphere          0.01 / sec
Demand paging on the LPAR running the EIS            0.00 / sec
Average end-to-end response time                     0.976 sec
WebSphere controller appl%                           7.7%
WebSphere servant appl%                              3.7%
WebSphere enclaves appl%                             51.4%
IMS appl%                                            81.7%
IMS Connect appl%                                    3.7%

Remote (5 KB)

Table 4-47 shows the configuration and Table 4-48 shows the results for the remote IMS Connect test case with a 5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this case.

Table 4-47 Test configuration
Variable name        Value
Number of clients    500
Workload profile     LONGWAIT
Number of servants   2

Table 4-48 Test results
Variable name                            Value
CPU milliseconds/transaction             14.371 ms
Adjusted CPU milliseconds/transaction    12.926 ms
WebSphere transaction rate                           240.21 / sec
EIS transaction rate                                 176.41 / sec
WebSphere transaction time (actual)                  1.016 sec
WebSphere transaction time (execution)               0.284 sec
WebSphere transaction time (queued)                  0.731 sec
Overall CPU utilization for LPAR running WebSphere   60.9%
Overall CPU utilization for LPAR running the EIS     25.4%
Demand paging on the LPAR running WebSphere          0.00 / sec
Demand paging on the LPAR running the EIS            0.00 / sec
Average end-to-end response time                     1.080 sec
WebSphere controller appl%                           7.7%
WebSphere servant appl%                              40.2%
WebSphere enclaves appl%                             177.5%
IMS appl%                                            79.4%
IMS Connect appl%                                    4.4%

Remote (20 KB)

Table 4-49 shows the configuration and Table 4-50 on page 89 shows the results for the remote IMS Connect test case with a 20 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this test case.

Table 4-49 Test configuration
Variable name        Value
Number of clients    135
Workload profile     LONGWAIT
Number of servants   2

Table 4-50 Test results
Variable name                                        Value
CPU milliseconds/transaction                         52.457 ms
Adjusted CPU milliseconds/transaction                37.623 ms
WebSphere transaction rate                           76.52 / sec
EIS transaction rate                                 56.09 / sec
WebSphere transaction time (actual)                  0.712 sec
WebSphere transaction time (execution)               0.425 sec
WebSphere transaction time (queued)                  0.286 sec
Overall CPU utilization for LPAR running WebSphere   90.75%
Overall CPU utilization for LPAR running the EIS     9.6%
Demand paging on the LPAR running WebSphere          0.00 / sec
Demand paging on the LPAR running the EIS            0.00 / sec
Average end-to-end response time                     0.854 sec
WebSphere controller appl%                           2.6%
WebSphere servant appl%                              120.4%
WebSphere enclaves appl%                             222.9%
IMS appl%                                            21.6%
IMS Connect appl%                                    2.3%

4.6.2 IMS MQ DPL Bridge

Table 4-51 on page 90 shows the configuration and Table 4-52 on page 90 shows the results
for the local IMS MQ DPL Bridge test case with a 5 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case.

Table 4-51 Test configuration
Variable name        Value
Number of clients    140
Workload profile     LONGWAIT
Number of servants   2

Table 4-52 Test results
Variable name                            Value
CPU milliseconds/transaction             47.248 ms
Adjusted CPU milliseconds/transaction    45.202 ms
WebSphere transaction rate               72.85 / sec
EIS transaction rate                     53.41 / sec
WebSphere transaction time (actual)      0.860 sec
WebSphere transaction time (execution)   0.644 sec
WebSphere transaction time (queued)      0.215 sec
Overall CPU utilization for LPAR         86.05%
Demand paging on the LPAR                0.225 / sec
Average end-to-end response time         0.876 sec
WebSphere controller appl%               2.4%
WebSphere servant appl%                  22.4%
WebSphere enclaves appl%                 242.5%
IMS appl%                                16.2%
MQ master appl%                          6.4%

4.7 Connector and data size comparisons

We created several charts that compare the different connectors. In the charts, we refer to the COMMAREA, which is the actual data content being transferred from WebSphere to the EIS. This does not include the XML tags or any other header or infrastructure-related data (SOAP scenarios). For example, in the complex 5 KB case, the transport size was 77 KB. In the simple 5 KB case, the total transport size was approximately 38 KB.

4.7.1 CICS comparison charts

Figure 4-2 is a comparison between the CICS TG, CICS SOAP, and CICS MQ DPL Bridge local and remote cases.

Figure 4-2 CICS results with 500 byte COMMAREA (bar chart: CPU millisec / tran, split into Tier 2 and Tier 3, for CICS CTG, CICS SOAP complex, and CICS MQBridge, local and remote)

Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the EIS system.
For the local cases, all EIS activity occurs in Tier 2. The key findings were:
- CICS TG is more efficient than SOAP or WebSphere MQ DPL Bridge with a small COMMAREA.
- CICS SOAP is more efficient than CICS MQ DPL Bridge with a small COMMAREA.

Figure 4-3 shows the components of the CPU time.

Figure 4-3 CPU time breakdown for 500 byte cases (stacked bars: Servant, Enclaves, Controller, EIS, Connector, MQ channel, Other)

Servant region CPU time and EIS CPU time in the SOAP test have been adjusted as explained in 4.4.2, "Adjustment" on page 59. The key findings were:
- In the SOAP case, the EIS CPU time is high because of the XML conversions for the complex COMMAREA.
- In the remote MQ DPL Bridge case, the MQ Channel CPU time is considerable. This value is the time spent in the MQ Channel Initiator to manage connections.
- In both the local and remote MQ DPL Bridge cases, Connector time refers to MQ Master, which manages the queues.
- The "other" category includes components such as TCP/IP communication. Note that this component is larger in the remote cases.
- CICS TG is the cheapest in general: it uses the least CPU resources in both the WebSphere application and CICS.

Figure 4-4 is a comparison between the CICS TG, CICS SOAP, and MQ DPL Bridge local and remote cases with the medium-sized COMMAREA.

Figure 4-4 CICS results with 5K byte COMMAREA (bar chart: CPU millisec / tran, Tier 2 and Tier 3, for CICS CTG, CICS SOAP complex, CICS SOAP simple, and CICS MQBridge, local and remote)

Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the EIS.
For the local cases, all EIS activity occurs in Tier 2. The key findings were:
- As the COMMAREA complexity and size increase, SOAP uses more CPU time. A simpler SOAP COMMAREA of the same size (see Example 3-2 on page 31) significantly reduced the CPU time: approximately a 4X cost reduction, with a 38 KB XML transport size versus 77 KB and 160 data elements versus 1300.
- The local connectors are more efficient than the remote connectors.
- With this complex COMMAREA, MQ DPL Bridge performs nearly as well as CICS TG.
- With a simple COMMAREA, SOAP performance was much closer to MQ DPL Bridge and CICS TG.

Note: The simple COMMAREA results for CICS SOAP do not provide a direct comparison with the CICS TG or CICS MQ DPL Bridge results because CICS TG and CICS MQ DPL Bridge are also likely to gain some benefit from a simpler COMMAREA.

Figure 4-5 shows the components of the CPU time.

Figure 4-5 CPU time breakdown for 5K byte cases (stacked bars: Servant, Enclaves, Controller, EIS, Connector, MQ channel, Other)

Servant region CPU time and EIS CPU time in the SOAP tests have been adjusted as explained in 4.4.2, "Adjustment" on page 59. The key findings were:
- The SOAP COMMAREA complexity has a major impact on the CPU cost for both the WebSphere enclaves and the EIS because it includes the XML parsing costs.
- The MQ Channel value is the time spent in the MQ Channel Initiator to manage connections.
- In both the local and remote MQ DPL Bridge cases, Connector time refers to MQ Master, which manages the queues.

Figure 4-6 is a comparison between the CICS TG and MQ DPL Bridge local and remote cases with our large-sized COMMAREA.
Figure 4-6 CICS results with 20K byte COMMAREA (bar chart: CPU millisec / tran, Tier 2 and Tier 3, for CICS CTG local/remote and CICS MQBridge local/remote)

Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the EIS. For the local cases, all EIS activity occurs in Tier 2. The key findings were:
- MQ DPL Bridge performs better than CICS TG with a large and complex COMMAREA.
- CICS TG local and remote have similar consumption with a large and complex COMMAREA.
- The relative CPU cost delta between local and remote decreases as the COMMAREA increases (see Figure 4-7 on page 96).

The CPU time breakdown is shown in Figure 4-7.

Figure 4-7 CPU time breakdown for 20K byte cases (stacked bars: Servant, Enclaves, Controller, EIS, Connector, MQ channel, Other)

The key findings were the same as those for the 5K COMMAREA:
- The MQ Channel value is the time spent in the MQ Channel Initiator to manage connections.
- In both the local and remote MQ DPL Bridge cases, Connector time refers to MQ Master, which manages the queues.

Figure 4-8 is a comparison of CICS TG performance with varying COMMAREA sizes.

Figure 4-8 CICS TG results with varying COMMAREA sizes (bar chart: CPU millisec / tran, Tier 2 and Tier 3, for local and remote at 500 bytes, 5K bytes, and 20K bytes)

Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the EIS. For the local cases, all EIS activity occurs in Tier 2. The key findings were:
- For small and medium-sized COMMAREAs, local CICS TG is better than remote.
- For large, complex COMMAREAs, the performance of local and remote connections is comparable.

Figure 4-9 shows the components of the CPU time.

Figure 4-9 CPU time breakdown for CICS TG with varying COMMAREA sizes (stacked bars: Servant, Enclaves, Controller, EIS, Connector, MQ channel, Other)

Servant region CPU time has been adjusted as explained in 4.4.2, "Adjustment" on page 59. The key findings were:
- As the COMMAREA size increases, most of the additional cost is in the WebSphere application. The EIS cost is fairly consistent.
- The MQ Channel value is the time spent in the MQ Channel Initiator to manage connections.
- In both the local and remote MQ DPL Bridge cases, Connector time refers to MQ Master, which manages the queues.

Figure 4-10 is a comparison of CICS SOAP performance with varying COMMAREA sizes.

Figure 4-10 CICS SOAP results with varying COMMAREA sizes (bar chart: CPU millisec / tran, Tier 2 and Tier 3, for local/remote 500 byte complex, 5K byte simple, and 5K byte complex cases)

Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the EIS system. For the local cases, all EIS activity occurs in Tier 2. The key findings were:
- CPU costs for SOAP include XML parsing costs, which are much lower for a less complex COMMAREA structure.
- The local connector is consistently more efficient than the remote connector.

Figure 4-11 shows the components of the CPU time.
Figure 4-11 CPU time breakdown for CICS SOAP with varying COMMAREA sizes (stacked bars: Servant, Enclaves, Controller, EIS, Connector, MQ channel, Other)

Servant region CPU time and EIS CPU time in the SOAP tests have been adjusted as explained in 4.4.2, "Adjustment" on page 59. The key findings were:
- The EIS CPU usage goes up considerably (compared to the CICS TG case) because the SOAP message generated by the WebSphere application has to be parsed in CICS.
- The WebSphere enclaves CPU usage increases significantly as the size and complexity of the COMMAREA increase. The WebSphere application has to generate the SOAP request, then parse the response when it is returned from CICS.

Figure 4-12 is a comparison of CICS MQ DPL Bridge performance with varying COMMAREA sizes.

Figure 4-12 CICS MQ DPL Bridge with varying COMMAREA sizes (bar chart: CPU millisec / tran, Tier 2 and Tier 3, for local and remote at 500 bytes, 5K bytes, and 20K bytes)

Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the EIS. For the local cases, all EIS activity occurs in Tier 2. The key findings were:
- Cost per byte with CICS MQ DPL Bridge is very low relative to the other CICS connectors.
- The local connector is consistently more efficient than the remote connector.

Figure 4-13 shows the components of the CPU time.
Figure 4-13 CPU time breakdown for CICS MQ DPL Bridge with varying COMMAREA sizes (stacked bars: Servant, Enclaves, Controller, EIS, Connector, MQ channel, Other)

Servant region CPU time has been adjusted as explained in 4.4.2, "Adjustment" on page 59. The key findings were:
- As the COMMAREA size increases, most of the increase in the CPU costs is in the WebSphere application. However, this increase is much smaller than with the other connectors.
- The MQ channel and other categories are a significant part of the remote MQ DPL Bridge costs. The MQ Channel value is the time spent in the MQ Channel Initiator to manage connections.
- In both the local and remote MQ DPL Bridge cases, Connector time refers to MQ Master, which manages the queues.

Figure 4-14 shows the CICS results (including SOAP) based on COMMAREA size.

Figure 4-14 CICS results based on COMMAREA size (with SOAP) (line chart: CPU millisec / tran versus COMMAREA size in KB, for CTG, SOAP complex, SOAP simple, and MQ DPL Bridge, local and remote)

The following observations about the chart are noteworthy:
- This is not a null-truncated COMMAREA, so CICS TG does not show the value of its null-stripping optimization.
- The per-byte cost for MQ DPL Bridge is very low relative to the per-byte cost of any of the other connectors.
- The SOAP results show the high cost of a complex COMMAREA; in the simple case, we obtained better results by reducing the complexity of the COMMAREA. This demonstrates a trend: the simpler the COMMAREA, the better the SOAP results.
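The transport sizes quoted in 4.7 (77 KB on the wire for the complex 5 KB COMMAREA, approximately 38 KB for the simple one) and the element counts given in the text quantify this trend. A small sketch, assuming the "36 plus four per record" reading of the element-count arithmetic spelled out in the text:

```python
# COMMAREA element counts: 36 fixed elements plus 4 fields per record,
# matching the breakdown quoted in the text (e.g. 36 + (316 x 4) = 1300).
def elements(records: int, header: int = 36, fields_per_record: int = 4) -> int:
    return header + records * fields_per_record

assert elements(8) == 68      # 500 byte COMMAREA
assert elements(316) == 1300  # 5 KB complex COMMAREA
assert elements(31) == 160    # 5 KB simple COMMAREA

# XML expansion on the wire for the 5 KB payload
complex_factor = 77 / 5  # 77 KB SOAP transport for 5 KB of data
simple_factor = 38 / 5   # ~38 KB transport for the simplified structure
print(f"transport expansion: {complex_factor:.1f}x complex, {simple_factor:.1f}x simple")
```

Halving the bytes on the wire and cutting the element count roughly eightfold is what produced the approximately 4X CPU reduction reported for the simple SOAP case.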
The simple SOAP case should not be directly compared to any of the other connectors because the COMMAREA structure change might also affect the other connectors similarly. To show the complexity of the COMMAREA, the numbers of data elements were the following:
- 500 bytes: 36 + (8 x 4) = 68
- 5 KB: 36 + (316 x 4) = 1300
- 5 KB simple: 36 + (31 x 4) = 160

Figure 4-15 shows the CICS results based on COMMAREA size without SOAP.

Figure 4-15 CICS results based on COMMAREA size (without SOAP) (line chart: CPU millisec / tran versus COMMAREA size in KB, for CICS CTG local/remote and CICS MQBridge local/remote)

The initial cost of CICS MQ DPL Bridge is higher than that of CICS TG, but its cost per byte is so low that, for larger COMMAREAs, CICS MQ DPL Bridge is a good performer.

4.7.2 IMS comparison charts

Figure 4-16 shows a comparison between the IMS Connect local and remote cases with varying COMMAREA sizes. We also did one measurement with a local IMS MQ DPL Bridge for the 5 KB COMMAREA size. Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the IMS system. For the local cases, all IMS activity occurs in Tier 2.

Figure 4-16 IMS results with varying COMMAREA sizes (bar chart: CPU millisec / tran, Tier 2 and Tier 3, for IMS Connect local/remote at 500 bytes, 5K bytes, and 20K bytes, and MQBridge local at 5K bytes)

The key findings were:
- IMS Connect performs much better than IMS MQ DPL Bridge.
- The CPU cost is higher for remote than for local access.
- The CPU cost delta between local and remote seems to grow as the COMMAREA becomes larger and more complex.

Figure 4-17 shows the components of the CPU time.
Figure 4-17 IMS cost breakdown with varying message sizes (stacked bars: Servant, Enclaves, Controller, EIS, Connector, MQ channel, Other)

The key findings were:
- Almost all of the change in CPU costs occurs under the enclaves.
- The EIS cost is fairly consistent and includes the IMS Control Region, DL/I, and the dependent regions.
- The connector is the IMS Connect address space, but does not include IMS Connector for Java, which is charged to the enclaves.
- In the local IMS MQ DPL Bridge case, the other category is considerably higher than in the IMS Connect local and remote cases.
- The MQ Channel value is the time spent in the MQ Channel Initiator to manage connections. In the MQ DPL Bridge case, Connector time refers to MQ Master, which manages the queues.

Figure 4-18 shows the results based on the IMS message size.

Figure 4-18 IMS results based on message size (line chart: CPU millisec / tran versus message size in KB, for IMS Connect local, IMS Connect remote, and IMS MQBridge local)

The key findings were:
- The overall cost of IMS MQ DPL Bridge is much higher than that of IMS Connect.
- The local connector performs better than the remote connector.
- The benefit of using a local connector is greater with a large message size.
Abbreviations and acronyms

AAT       Application Assembly Tool
ACEE      Accessor Environment Element
AIX®      Advanced Interactive Executive (IBM UNIX)
AOR       Application Owning Region
APAR      Authorized Program Analysis Report
APF       Authorized Program Facility
API       Application Programming Interface
APPC      Advanced Program to Program Communication
ARM       Automatic Restart Manager
ASCII     American Standard Code for Information Interchange
AWT       Abstract Windowing Toolkit (Java)
BPC       Business Process Container
BSC       Binary Synchronous Communication
BSF       Back space file / Bean scripting format
CCF       Common Connector Framework
CCI       Common Client Interface
CF        Coupling Facility
CICS      Customer Information Control System
CICS TG   CICS Transaction Gateway
CMP       Container Managed Persistence
COMMAREA  communications area
CP        Control Program
CPU       Central Processing Unit
CTRACE    Component Trace services (MVS)
DASD      Direct Access Storage Device
DB        Database
DD        Dataset Definition
DDF       Distributed Data Facility
DDL       Data Definition Language
DLL       Dynamic Link Library
DN        Distinguished Name
DNS       Domain Name System
DRA       Database Resource Adapter
DVIPA     Dynamic Virtual IP Address
EAB       Enterprise Access Builder
EAR       Enterprise Application Repository
EBCDIC    Extended Binary Coded Decimal Interchange Code
EIS       Enterprise Information System
EJB       Enterprise JavaBean
EJBQL     Enterprise JavaBean Query Language
EUI       End User Interface
FSP       Fault Summary Page
FTP       File Transfer Protocol
GC        garbage collection
GIF       Graphic Interchange Format
GUI       Graphical User Interface
GWAPI     Go Webserver Application Programming Interface
HFS       Hierarchical File System
HLQ       High Level Qualifier
HTML      Hypertext Markup Language

© Copyright IBM Corp. 2006. All rights reserved.
HTTP      Hypertext Transport Protocol
HTTPS     Secure Hypertext Transport Protocol
IDE       Integrated Development Environment
IE        Integrated Edition
IJP       Internal JMS Provider
IMS       Information Management System
IOR       Interoperable Object Reference
IP        Internet Protocol
ISHELL    ISPF Shell
ISPF      Interactive System Productivity Facility
IT        Information Technology
J2CA      J2EE Connector Architecture
JAAS      Java Authentication and Authorization Services
JACL      Java Command Language
JAR       Java Archive or Java Application Repository
JCA       Java Cryptographic Architecture
JCL       Job Control Language
JDBC      Java Database Connectivity
JMS       Java Message Service
JMX       Java Management Extensions
JNDI      Java Naming and Directory Interface
JSP       JavaServer Page
JVM       Java Virtual Machine
LDAP      Lightweight Directory Access Protocol
LPAR      logical partition
MDB       message driven bean
MQ        message queue
MVS       Multiple Virtual Storage
NFS       Network File System
ODBC      Open Database Connectivity
OMVS      Open MVS
OS        operating system
PME       Programming Model Extensions
PSP       Preventive Service Planning
PTF       Program Temporary Fix
RACF      Resource Access Control Facility
RAR       Resource Archive Repository
RDB       Relational Database
REXX      Restructured Extended Executor Language
RMI       Remote Method Invocation
RMIC      Remote Method Invocation Compiler
RRS       Resource Recovery Services
SAF       Security Authentication Facility
SCM       System Configuration Management
SDK       Software Developers Kit
SDSF      Systems Display and Search Facility
SMAPI     Systems Management Applications Programming Interface
SMB       System Message Block
SMEUI     Systems Management End User Interface
SMP/E     System Modification Program/Extended
SNA       Systems Network Architecture
SOAP      Simple Object Access Protocol
SQLID     Structured Query Language Identifier
SQLJ      Structured Query Language for Java
SSID      Subsystem Identification
SSL       Secure Sockets Layer
TCB       Task Control Block
TCP/IP    Transmission Control Protocol/Internet Protocol
TSO       Time Sharing Option
UDB       Universal Database
UID       User Identifier
UNIX      AT&T Operating System for Workstations (IBM=AIX)
URI       Universal Resource Identifier
URL       Uniform Resource Locator
USS       UNIX System Services
VI        Visual Interface - Visual Screen-based Editor (AIX)
VIPA      Virtual IP Address
VM        Virtual Machine
WAR       Web Application Repository
WLM       Work Load Manager
WPC       WebSphere Process Choreographer
WSDL      Web Services Description Language
WSIF      Web Services Invocation Framework
XA        Extended Architecture
XMI       XML Metadata Interchange
XML       Extensible Markup Language
XSL       Extensible Style Language
XSLT      Extensible Style Language Transformations
1PC       One-phase Commit
2PC       Two-phase Commit

Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this Redpaper.

IBM Redbooks

For information about ordering these publications, see "How to get IBM Redbooks" on page 114. Note that some of the documents referenced here may be available in softcopy only.
- WebSphere for z/OS Connectivity Handbook, SG24-7064-01
- WebSphere for z/OS Connectivity Architectural Choices, SG24-6365
- Threadsafe considerations for CICS, SG24-5631

Other publications

These publications are also relevant as further information sources:
- z/OS V1R6.0 RMF Report Analysis, SC33-7991-09
- CICS TS V3.1 Web Services Guide, SC34-6458-02

Online resources

These Web sites and URLs are also relevant as further information sources:
- CICS Transaction Gateway homepage
  http://www-306.ibm.com/software/htp/cics/ctg/
- IMS Connect homepage
  http://www-306.ibm.com/software/data/ims/connect/
- IMS Connector for Java homepage
  http://www-306.ibm.com/software/data/db2imstools/imstools/imsjavcon.html
- CICS TS 3.1 CICS Web Services InfoCenter pages
  http://publib.boulder.ibm.com/infocenter/cicsts31/topic/com.ibm.cics.ts.doc/dfhws/topics/dfhws_startHere.htm
2006. All rights reserved. 113 http://publib.boulder.ibm.com/infocenter/cicsts31/topic/com.ibm.cics.ts.doc /pdf/dfhwsb00.pdf SOAP for CICS home page http://www-306.ibm.com/software/htp/cics/soap/ How to get IBM Redbooks You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site: ibm.com/redbooks Help from IBM IBM Support and downloads ibm.com/support IBM Global Services ibm.com/services 114 WebSphere for z/OS to CICS and IMS Connectivity Performance Index A amount and type of data for communication 6 Application Server 14, 16–17 availability 5 average end-to-end response time 57 B back-end interface 28, 35 direct access 28 back-end logic 28 buffers 23 C CICS 17 applications and data stores 29 ECI connector 37 ECI J2C resource adapter 28 CICS comparison chart 91 CICS MQ bridge 28 CICS MQ DPL Bridge cost/byte 6 performance 101 result 6, 93 test case 62 CICS region 15, 54 CICS SOAP performance 99 result 99 CICS TG 10, 17, 28, 49, 54, 58–59, 61–62 case 31, 69, 100 CPU time breakdown 98 performance 97 Redbooks 42 result 97 CICS Transaction Gateway (See CICS TG) CICS transaction 16, 28, 42–45, 54, 58 report class 54 CICS Transaction Gateway 6, 10, 12, 15, 17 Java client 37 collocation/separation requirements 5 COMMAREA 29, 33, 37, 39, 49, 59–60, 63, 93 WebSphere MQ DPL Bridge 91 © Copyright IBM Corp. 2006. All rights reserved. 
COMMAREA content 43
COMMAREA increase 93, 95
COMMAREA size 38, 69, 93, 98, 102
complex COMMAREA 31, 103
cost/byte 6, 101
CPU millisecond 58
CPU ms/tran 58, 61, 83
CPU time 5, 60, 92–94, 96

D
DB2 29
demand page 57

E
EIS activity 91, 93, 95, 97, 99
EIS cost 98, 106
EIS system 91, 93–95
EIS transaction rate 63–65, 67, 69
Enterprise Archive (EAR) 38
enterprise information system 10
ESS DASD 10, 12

H
HTTP client 6

I
IMS 9–13, 54, 83, 105–106
  applications and data stores 29
IMS back-end transaction 23
IMS connector 37
IMS environment 23
IMS J2C resource adapter 28
IMS MQ Bridge 83, 90
IMS version 24

J
J2EE client 6
J2EE Web module 28
Java class 43
Java Naming and Directory Interface (JNDI) 40
JDBC connection 28
JSPs 27, 36, 39

K
Keep-alive 69
key findings 91–95, 97–102, 105–106

L
local case 17, 91, 93, 95, 97, 99
local CICS TG 97
  test case 63–64
local IMS 106
LPAR 53, 56, 58

M
MDB case 38, 40
MDB listener 38
message-driven bean (MDB) 38
model-view-controller (MVC) 35
MODS E15 52
MQ Channel 54
  report class 54
ms 55–56, 60

O
overall CPU utilization 48, 50, 56, 58

P
PAGE-IN Rate 54–56
parse XML 70–73
performance (response time, CPU/memory cost) 5
preliminary test 63–66
product maturity 5
prompt answer 5

R
Redbooks Web site 114
  Contact us xiii
remote case 6, 14, 54, 91–93, 95
report class 54–57
  application percent 58
Resource Access Control Facility (RACF) 14
resource adapter 17
RMF Monitor 48, 50, 52–53
  I report 50, 52
  III 10, 48, 50
RMF report 69

S
scalable software architecture 6
security 4
  JAAS 40
servant region 15, 17, 49, 60
  worker threads 17
Service Class
  CICSW 16
  WAS48 15, 52
servlet 36, 39, 42–43
session EJB 38, 42–44
SESSION-PROPERTIES Protocol 17, 21
size, COMMAREA 95, 97, 105
skills availability 5
SMFDATA.RMFRECS 52
SOAP 17
standards compliance, interoperability 4
stateful session EJB 42–45
  remote interface 42, 44
  superclass 42–44
synchronous/asynchronous response requirements 5
sysplex configuration 12

T
TCP/IP 17
test case 10, 12, 25, 49–50
  detailed metrics 61, 83
  maximum throughput 10
test configuration 63–66, 68, 70–72
test result 63–66, 68, 70–72
test scenario 10
Tier 2 91, 93, 95, 97, 99
time zone 5
Trader application 27–29, 33–41, 49–50
  data stores 29
  dependencies 40
  following interactions 49
  IMS and CICS applications and data stores 29
  logon page 33
  packaging 38
  Web front-end user interface 33
  Web module 33
Trader Web front-end, back-end interface architecture and implementation 35
TRADER.CICS.REPLYQ 38
TRADER.IMS.REPLYQ 38
TRADER.PROCESSQ 38
TraderCICS 28
TraderDB 28
TraderIMS 28
TraderMQ 28
TraderSuperServlet 36, 39, 42, 44
transaction rate 57–58, 61, 69

V
VSAM file 29

W
Web Alias transaction (CWBA) 70
Web Attach transaction (CWXN) 70
Web front-end, back-end interface architecture and implementation 35
WebSphere Admin Console 60
WebSphere Application Server
  different connectivity options 6
  request broker 16
WebSphere MQ 5, 38, 41
  connection factory 38
  DPL Bridge 13, 17, 21, 28, 40
  JMS provider connection factory 40
WebSphere MQ JMS Provider 28
WebSphere MQ/CICS DPL Bridge 9, 21
WebSphere Transaction
  Rate 57–58, 61
  Time 57
WebSphere transaction 57–58
  one-to-one correlation 57
WLM 14–16
Workload profile 60, 63–65

X
XML conversion 31, 92
XML file 51

Back cover

WebSphere for z/OS to CICS and IMS Connectivity Performance

Compare the performance of connectors
Look at the environment that was used
See the key findings for each measurement

The objective of this IBM Redpaper is to help you understand the performance implications of the different connectivity options from WebSphere Application Server for IBM z/OS to CICS or IMS.
This paper is intended as a companion to WebSphere for z/OS to CICS/IMS Connectivity Architectural Choices, SG24-6365, which describes the architectural choices and the different attributes of a connection, such as availability, security, transactional capability, and performance. However, that IBM Redbook does not provide much detail about performance. To supply those details, we ran tests with CICS Transaction Gateway, SOAP for CICS, and the CICS MQ DPL Bridge. We also ran tests with IMS Connect and with the IMS MQ Bridge. We selected 500-byte, 5 KB, and 20 KB communication area (COMMAREA) sizes with very complex records to simulate complex customer scenarios. We share these results with you in a series of tables and charts that can help you evaluate your real-life application and decide which architectural solution might be best for you.

Redpaper

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, customers, and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks