LR Day1 - H2K Infosys

Welcome to H2KInfosys
H2K Infosys is an E-Verify business based in Atlanta, Georgia, United States
www.H2KINFOSYS.com
USA: +1-(770)-777-1269 | UK: (020) 3371 7615 | Training@H2KInfosys.com / H2KInfosys@Gmail.com
Why H2KInfosys
100% job-oriented, instructor-led, face-to-face training
+ True live online software training
+ Cloud test lab with software tools & live project work
+ Mock interviews + resume prep & review + job placement assistance
= Better than on-site IT training. Trusted by many students worldwide.
Agenda
• Introduction to performance testing
   What
   Why
• Types of performance testing
• Performance testing approach
• Performance – quality aspect
• Performance testing terminology
• Performance counters
   Software
   Hardware
   Client side
   Server side
• Workload
   What & why
   Types of workload
• Scenarios
   What & how
• Performance testing process
• Performance requirements
• Performance test planning
• Performance lab
   What it is
   Various components
• Performance test scripting
• Performance test execution
• Metrics collection
• Result analysis
• Report creation
• Q & A
Performance Testing – What?
“Performance Testing is the discipline concerned with determining and reporting the current performance of a software application under various parameters.”
Performance Testing – Why?
Primarily used for:
• verifying whether the system meets the performance requirements defined in SLAs
• determining the capacity of existing systems
• creating benchmarks for future systems
• evaluating degradation under various loads and/or configurations
Performance Testing Types
Load Test
Objective: To gain insight into the performance of the system under normal conditions
Methodology:
o User behavior is modeled on the real world.
o The test script mimics the activities that users commonly perform, and includes think-time delays and arrival rates reflective of those in the real world.
Performance Testing Types
Stress Test
Objective: The application is stressed with an unrealistic load to understand its behavior in the worst-case scenario
Methodology:
o In stress tests, scripted actions are executed as quickly as possible.
o Stress testing is load testing with the user think-time delays removed.
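As an illustration only (not part of the original deck), a load-test business flow can be scripted as a LoadRunner-style C Vuser action like the sketch below; the URL, transaction name, and think-time value are hypothetical:

Action()
{
    // Measure the user-visible step as a named transaction
    lr_start_transaction("Search_Catalog");

    // Hypothetical request to the application under test
    web_url("Search",
            "URL=http://example.com/catalog/search?q=books",
            "Resource=0",
            "Mode=HTML",
            LAST);

    lr_end_transaction("Search_Catalog", LR_AUTO);

    // Think time models the pause a real user takes before the next action;
    // a stress run typically replays the same script with think time ignored
    lr_think_time(8);

    return 0;
}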
Performance Testing Approach
Scalability
Scalability is the capacity of an application to deal with an increasing number of users or requests without degradation of performance. Scalability is a measure of application performance.
Performance Testing Approach
Stability
The stability (or reliability) of an application indicates its robustness and dependability. Stability is a measure of the application's capacity to keep performing even when it is accessed concurrently by several users and heavily used. The application's stability partly indicates its performance.
Quality Aspect Performance
 Functionality
 Usability
 Reliability
 Performance
  o Performance
  o Effectiveness
  o Scalability
 Supportability
Quality Aspect Performance
Performance
Online Transactions
o Response times (seconds)
Batch Transactions
o Runtime (minutes or hours)
o Throughput (items / second)
Quality Aspect Performance
Effectiveness
CPU Utilization
o Real user and system time
Memory Usage (MB)
Network Load
o Packet size and packet count
o Bandwidth and latency
Disk Usage (I/O)
Quality Aspect Performance
Scalability
Online Transactions
o Large # of Users
Batch Transactions
o Large Transaction Volumes
o Large Data Volumes
Scale in/out
Performance testing terminology
Scenarios
Sequence of steps in the application under test, e.g., searching a product catalog
Workload
The mix of demands placed on the system (AUT), e.g., in terms of concurrent users, data volumes, and number of transactions
Operational Profile (OP)
List of demands with their frequency of use
Benchmark
A standard, industry-wide workload (TPC-C, TPC-W)
TPC-C: an online transaction processing benchmark
TPC-W: a transactional web e-commerce benchmark
Performance testing terminology
[Timeline diagram: the user starts and finishes a request; the system starts execution, starts its response, and completes the response; the user then starts the next request.]
• Reaction time: interval between the user finishing the request and the system starting to act on it
• Response time: interval between the user finishing the request and the system completing its response
• Think time: interval between the system completing the response and the user starting the next request
Performance testing terminology
Throughput
Rate at which requests can be serviced by the system
• Batch streams: jobs/sec
• Interactive systems: requests/sec
• CPU: million instructions per second (MIPS), million floating-point operations per second (MFLOPS)
• Network: packets per second or bits per second
• Transaction processing: transactions per second
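As a quick illustration (hypothetical numbers): if an interactive system completes 3,600 requests during a 10-minute steady-state window, its throughput is 3,600 / 600 s = 6 requests per second.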
Performance testing terminology
Bandwidth
A measure of the amount of data that can travel through a network. Usually measured in
kilobits per second (Kbps). For example, a modem line often has a bandwidth of 56.6 Kbps, and an
Ethernet line has a bandwidth of 10 Mbps (10 million bits per second).
Latency
In a network, latency, a synonym for delay, is an expression of how much time it takes for
a packet of data to get from one designated point to another. In some usages (for example, AT&T),
latency is measured by sending a packet that is returned to the sender and the round-trip time is
considered the latency.
Latency = propagation delay (at the speed of light) + transmission delay (proportional to packet size) + router processing delay (packet examination) + other computer and storage delays (switches or bridges)
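As a rough worked example (illustrative figures): for a 1,500-byte packet sent over a 10 Mbps link to a destination 3,000 km away, the transmission delay is 1,500 × 8 bits / 10,000,000 bps ≈ 1.2 ms, and the propagation delay at roughly 200,000 km/s in fiber is about 15 ms; router processing and queuing delays come on top of these.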
Performance testing terminology
Reliability
• Mean time between failures (MTBF)
Availability
• Mean time to failure (MTTF)
Cost/performance
• Cost: hardware/software licensing, installation, and maintenance
• Performance: usable capacity
What to watch
S/W Performance
– OS
– Application Code
– Configuration of Servers
H/W Performance
– CPU
– Memory
– Disk
– Network
Performance counters
Why Performance Counters?
They allow you to track the performance of your application
What Performance Counters?
Client-Side
• Response Time
• Hits/sec
• Throughput
• Pass/Fail Statistics
Server-Side
• CPU: % User Time, % Processor Time, Run Queue Length
• Memory: Available and Committed Bytes
• Network: Bytes Sent/sec, Bytes Received/sec
• Disk: Read Bytes/sec, Write Bytes/sec
Client side metrics
Hits per Second: The Hits per Second graph shows the number of hits on the web server (y-axis) as a function of the elapsed time in the scenario (x-axis). It can be compared with the Transaction Response Time graph to see how the number of hits affects transaction performance.
Pass/Fail Statistics: A measure of the application's capability to function correctly under load, obtained by tracking transaction pass/fail/error rates.
workload
Workload is the stimulus to the system: an instrument for simulating the real-world environment. The workload provides in-depth knowledge of the behavior of the system under test and explains how typical users will use the system once it goes into production. It can include all the requests and/or data inputs.
Requests may include things such as:
– Retrieving data from a database
– Transforming data
– Performing calculations
– Sending documents over HTTP
– Authenticating a user, and so on.
workload
Workload may be no load, minimal, normal, above normal, or extreme.
– Extreme loads are used in stress testing, to find the breaking point and bottlenecks of the system under test.
– Normal loads are used in performance testing, to ensure an acceptable level of performance characteristics such as response time or request-processing time under the estimated load.
– Minimal loads are usually used in benchmark testing, to estimate user experience.
workload
Workload is identified for each of the scenarios. It can be identified based on the following parameters:
• Number of users: the total number of concurrent and simultaneous users who access the application in a given time frame.
• Rate of requests: the requests received from the concurrent load of users per unit of time.
• Patterns of requests: a given load of concurrent users may be performing different tasks using the application. Patterns of requests identify the average load of users and the rate of requests for a given functionality of the application.
Workload models
• Steady State
Steady-state workload is the simplest workload model used in load testing. A constant number of virtual users is run against the application for the duration of the test.
• Increasing
The increasing workload model helps testers find the limit of a web application's work capacity. At the beginning of the load test, only a small number of virtual users are run; virtual users are then added to the workload step by step.
[Chart: Workload Model – Increasing; load (0–140 virtual users, y-axis) plotted against elapsed time (0:00–2:24, x-axis), stepping up over the run]
Workload models
• Dynamic
In the dynamic workload model, you can change the number of virtual users while the test is running; no simulation time is fixed.
[Chart: Workload Model – Dynamic; load (0–60 virtual users, y-axis) plotted against elapsed time (0:00–2:24, x-axis)]
Workload Profile
A workload profile consists of an aggregate mix of users performing various operations.
Workload profile can be designed by performing the following activities:
– Identify the distribution (ratio of work). For each key scenario, identify the ratio of work, based on the number of users expected to execute that scenario.
– Identify the peak user loads. Identify the maximum expected number of concurrent users of the web application. Using the work distribution for each scenario, calculate the percentage of user load per key scenario (a worked example follows this list).
– Identify the user loads under a variety of conditions of interest. For instance, you might
want to identify the maximum expected number of concurrent users for the Web
application at normal and peak hours.
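For example (hypothetical numbers, not taken from the deck): with a peak load of 200 concurrent users and a work distribution of 60% browse, 30% search, and 10% checkout, the per-scenario user loads are 200 × 0.60 = 120, 200 × 0.30 = 60, and 200 × 0.10 = 20 users.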
Workload Profile
For a sample web application, the distribution of load for the various profiles could be similar to the following:
• Number of users: 200 simultaneous users
• Test duration: 2 hours
• Think time: random think time between 1 and 10 seconds in the test script after each operation
• Background processes: anti-virus software running on the test environment
Scenarios what and how
 Decide on the business flows that need to be run
 Decide on the mix of business flows in a test run
 Decide on the order in which the test scripts need to be started
 Decide on the ramp-up for each business flow and the test run duration
The load generators and test scripts (user groups) for a scenario should be configured so that the scenario accurately emulates the working environment.
The runtime settings and the test system/application configuration can be changed to create different scenarios for the same workload profile.
Performance engagement process
[Diagram: exchange of artifacts between the Project Team and the Performance Lab]
• Query → Requirements questionnaire
• Response → Test plan & engagement contract
• Signed contract, approved test plan & application demo
• Business flow document & project plan
• Written approval
• Test execution reporting
• Customer feedback & project closure
Performance test process
Phases: Initiate → Design → Plan → Execute → Report (performed jointly by the Project Team and the Performance Test Lab)

Initiate
• Key items: App demo; Business flow
• Activities: Fill in the performance requirements questionnaire; Finalize estimates and service plans; Prepare the engagement contract; Reviews by Project Team
• Deliverable: Signed engagement contract

Design
• Key items: Test plan; Access to staging
• Activities: Establish test goals; Prepare the test plan; Reviews by Project Team
• Deliverable: Performance Test Plan

Plan
• Key items: Lab design; Script design
• Activities: Application walkthrough; Freeze the workload; Set up master data; Create performance scripts; Reviews by Project Team
• Deliverable: Performance test scripts

Execute
• Key items: Iteration 1; Iteration 2
• Activities: Execute performance tests; Collect performance metrics; Reviews by Project Team
• Deliverable: First Information Report

Report
• Key items: Data collection; Analysis & report
• Activities: Analyze test results; Prepare the performance test report; Reviews by Project Team
• Deliverable: Performance Test Report
Performance requirements
 Performance Test Objective
• Forms the basis for deciding what type of performance test needs to be done
• The test plan and test report should reflect this objective
 Performance Requirements
• Expected performance levels of the application under test
• Can be divided into two categories
o Performance Absolute Requirements
o Performance Goals
 Performance Absolute Requirements
• Includes criteria for contractual obligations, service-level agreements (SLAs), or fixed business needs
 Performance Goals
• Includes criteria desired in the application but variance in these can be tolerated under
certain circumstances
• Mostly end-user focused
Performance requirements
 Determine
• Purpose of the system
• High level activities
• How often each activity is performed
• Which activities are frequently used and intensive (consume more resources)?
• Hardware & software architecture
• Production architecture
 Should be specific (see the example after this list)
 Should detail the required
• response times
• throughput or rate of work done
 Under
• specific user load (normal/peak)
• Specific data requirements
 Can mention resource utilization desired/thresholds
 Analyze Business Goals and objectives
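For example, a sufficiently specific requirement might read (illustrative figures only): "The product search transaction shall complete within 3 seconds at the 90th percentile, with a throughput of at least 20 searches per second, under a normal load of 500 concurrent users and a peak load of 800 concurrent users, with average web-tier CPU utilization not exceeding 70%."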
Performance requirements
 Where do we need to set the goals?
• Which 5% of system behavior is worth defining with respect to performance
• Which users have priority, what work has priority?
• Which processes have business risks (loss of revenue)?
 How aggressive do the service levels need to be?
• Differentiators
• Competitor
• Productivity gains
 What are the demanding conditions under which the SUT will be put?
 What are the resource constraints (Already procured hardware, network, Disk etc or sharing
the same environment with others)?
 What are the desired norms of resource utilization?
 Desired norms for reserve or spare capacity
 What are the trade-offs (e.g., throughput vs. response time, availability)?
 Where to get information
• Stakeholders
• Product managers
• Technical/application architects
• Application users
• Documentation (User Guide, Technical design etc)
Performance requirements
 Performance requirements template
• Objectives
• Performance Requirements
• System deployment architecture
• System configuration
• Workload profile
• Client network characteristics
Performance TEST PLAN
Contents
•Requirements/Performance Goals
•Project scope
•Entry Criteria
E.g.:
All defects found during the system testing phase have been fixed and re-tested.
Code freeze is in place for the application and the configuration is frozen.
The test environments are ready.
Master data has been created and verified.
No activities outside of performance testing and performance test monitoring will occur during performance testing activities.
•Exit Criteria
E.g.: All planned performance tests have been performed.
The performance test passes all stated objectives at the maximum number of users.
•Application overview
This gives a brief description of the business purpose of the web application. It may include some marketing data stating estimates or historical revenue produced by the web application.
Performance TEST PLAN
• Architecture overview
This depicts the hardware and software used for the performance test environment, and will include any deviations from the production environment. For example, document it if you have a web cluster of four web servers in the production environment but only two web servers in your performance test environment.
• Performance test process
This includes a description of:
 User scenarios
 Tools that will be used
 User ratios and sleep times
 Ramp-up/ramp-down pattern
• Test Environment
1. Test Bed
2. Network Infrastructure
3. Hardware Resources
4. Test Data
Performance TEST PLAN
Test Environment
 Test Bed
 Describe test environment for load test and test script creation.
 Describe whether the environment is a shared environment and its hours of availability
for the load test.
 Describe whether the environment is production-like or whether it is actually the production environment.
 Network infrastructure
 Describe the network segments, routers, switches that will participate in the load test.
It can be described with network topology diagram.
 Hardware resources
 Describe the machine specifications available for the load test, such as the machine's name, memory, processor, and operating system (e.g., Windows 2000, Windows XP, Linux), whether the machine has the application under test (AUT) installed, and the machine's location (floor, facility, room, etc.).
 Test Data
 Database size and other test data. Who will provide the data and configure the test
environment with appropriate test data.
Performance TEST PLAN
•Staffing and Support Personnel
List the individuals participating during the load tests with their roles and level of
support.
•Deliverables
Description of deliverables such as test scripts, monitoring scripts, test results, and reports, including an explanation of what graphs and charts will be produced. Also explain who will analyze and interpret the load test results and review them, together with the graphs, with all participants monitoring the execution of the load tests.
•Project Schedule
•Timelines for test scenario design, test scripting
•Communication Plan
•Project Control and Status Reporting Process
Performance TEST PLAN
•Risk Mitigation Plan
Examples of risks:
 The software to be tested is not correctly installed and configured on the test environment.
 Master data has not been created and/or verified.
 Available licenses are not sufficient to execute the performance tests needed to validate the application.
 Scripts and/or scenarios increase beyond those in the Performance Test Plan.
•Assumptions
E.g.:
 Client will appoint a point of contact to facilitate all communication with the project team and provide technical assistance, as required by personnel, within four (4) business hours of each incident. Any delays caused would directly affect the delivery schedule.
 Client will provide uninterrupted access to their application during the entire duration of this assignment.
 The client application under test uses the HTTP protocol.
 The client application will not contain any Java Swing, AJAX, streaming media, VBScript, ActiveX, or other custom plug-ins. The presence of any such components will require revisiting the effort and cost estimates.
Performance TEST LAB
Virtual User – emulates end-user actions by sending requests and receiving responses
Load Generator – the machine that emulates the end-user business processes (hosts the Vusers)
Controller – organizes, manages, and monitors the load test
Probes – capture specific behavior while the load test is in progress
Performance TEST scripting
 Correlation
o Correlation is done to resolve dynamic server values, e.g.:
• Session IDs
• Cookies
o LoadRunner supports automatic correlation
 Parameterization
o Parameterization is done to provide each Vuser with unique or specific values for application parameters
o Parameters are supplied dynamically by LR to each Vuser, or they can be taken from data files
o Different types of parameters are provided by LR for script enhancement
 Transactions
o Transactions are defined to measure the performance of the server
o Each transaction measures the time it takes for the server to respond to specified Vuser requests
o The Controller measures the time taken to perform each transaction during the execution of a performance test
 Rendezvous points
 Verification checks
Performance TEST scripting
 Rendezvous points
o Rendezvous points are used to synchronize Vusers to perform a task at exactly the same moment, to emulate heavy user load.
o When a Vuser arrives at a rendezvous point, it is held by the Controller until all Vusers participating in the rendezvous reach that point.
o You may only add rendezvous points in the Action section, not in the init or end sections.
 Verification checks
o Add verification checks to the script:
• Text verification checks
• Image verification checks
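A minimal, illustrative sketch (not from the original deck) of how these elements typically appear together in a LoadRunner-style C Vuser action; the URLs, form field names, parameter names, and correlation boundaries are hypothetical:

Action()
{
    // Correlation: capture the dynamic session ID embedded in the login page.
    // The left/right boundaries below are placeholders for the real page source.
    web_reg_save_param("SessionID",
                       "LB=name=\"sessionid\" value=\"",
                       "RB=\"",
                       LAST);

    web_url("Login_Page",
            "URL=http://example.com/login",
            "Resource=0",
            "Mode=HTML",
            LAST);

    // Rendezvous point: Vusers wait here so the login requests fire together
    lr_rendezvous("login_storm");

    // Verification check: the next response must contain this text
    web_reg_find("Text=Welcome", LAST);

    lr_start_transaction("Login");

    // Parameterization: {UserName} and {Password} come from a data file;
    // {SessionID} is the correlated value captured above
    web_submit_data("Do_Login",
                    "Action=http://example.com/login",
                    "Method=POST",
                    ITEMDATA,
                    "Name=user",    "Value={UserName}",  ENDITEM,
                    "Name=passwd",  "Value={Password}",  ENDITEM,
                    "Name=session", "Value={SessionID}", ENDITEM,
                    LAST);

    lr_end_transaction("Login", LR_AUTO);

    return 0;
}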
Performance TEST Execution
Before execution, ensure:
 Test system is ready for the test
– Test data is in place
– System configuration is as per the plan
 Scenario scheduling is correct
– Ramp up / run for some time / ramp down
– Schedule according to groups
– The number of Vusers in each group is correct for that test run
 Load-generating machines
– Divide the load among the different machines
 Smoke test is done
 Monitors are ready to collect performance data
 Debugging statements in the script are commented out / logging is done only when necessary
 Load Generators
– Log information about each test run
– Allow the test system to stabilize for each test run
– Store the results of each test run in a separate folder
– Check that the test system state is as per the plan after each test run
Metrics collection
 Client-side metrics
– Response time
– Throughput
– Provided by the test tool
 Server-side metrics
– Perfmon (for Windows)
– System commands (for Linux/UNIX)
– Test tool monitors
– JMX counters
– Application servers
– Database servers
– Web servers
 Scripts using Perl/shell/... for collecting metrics and formatting data
Result analysis
 Response time: lower is better
 Throughput: higher is better
Throughput increases as load increases and reaches a saturation point, beyond which any further load increases the response time exponentially. This saturation point is called the knee point (a reading example follows the chart below).
[Chart: User Load vs. Response Time & Throughput — response time and throughput plotted against user load (50–550 users); throughput flattens at the knee point while response time keeps rising]
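Reading such a chart (illustrative numbers, not measured results): if throughput climbs steadily up to about 250 users and then flattens while response time starts rising sharply, roughly 250 users is the knee point and a practical indication of the system's usable capacity.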
 User Load vs. CPU Utilization
– Should be roughly linear
– If 100% CPU usage is reached before the expected user load:
 CPU is the bottleneck
 Increase the number of CPUs
 Use a faster CPU
 Check Context Switches and Processor Queue Length
– If 100% CPU usage is not reached, check for bottlenecks in other system resources
Result analysis
 User Load vs. Disk I/O
– Check Average Disk Queue Length and Current Disk Queue Length
– Check % Disk Time
– Check Disk Transfers/sec
 Memory Utilization vs. Time
– Check available memory and the amount of swapping
– Memory usage should stabilize some time into the test
– If memory usage increases with each test run, or with each iteration of an activity for the same number of users, and does not come down, it may indicate a memory leak
 Network Utilization
– Current bandwidth, packets/sec, packets lost/sec
Report creation
 The end deliverable of the performance test
 Very important from the stakeholders' point of view
 Should reflect the performance test objective
 Provide test results in tabular format and graphs
 Include all issues faced during testing
 Document all findings and observations (performance testing is close to research)
 Load, and especially stress, tests will sometimes show the bad side of an application: it throws errors, so capture all of them for future analysis
 Any deviations/workarounds used should be mentioned
Contents of Test Report
o Executive Summary
- Test Objective
- Test Results Summary
- Conclusions & Recommendations
o Test Objective
o Test Environment Setup
- Software configuration used, including major and minor versions where applicable
- Hardware configuration
o Business flows tested / test scenarios
o Test run information, including observations
o Test results
o Conclusions & recommendations
For registrations, contact H2K Infosys (see contact details above).
Thanks,
Deepa