

Software testing - CS21C14
UNIT – IV
Performance Testing: Factors Governing Performance Testing Methodology for Performance Testing - Tools for Performance Testing Process for Performance Testing - Challenges. Regression Testing:
Introduction - Types of Regression Testing - Execution of Regression
Testing - Best Practices in Regression Testing
Performance Testing
• Testing performed to evaluate the response time, throughput and resource utilization of the system while it executes its required functions, in comparison with different versions of the same product or with different competitive product(s), is called performance testing
• Software testing that ensures software applications perform properly under their expected
workload.
• Process used for testing the speed, response time, stability, reliability, scalability, and resource
usage of a software application under a particular workload.
• Goal
• To identify bottlenecks, measure system performance under various loads and conditions, and ensure that the system can handle the expected number of users or transactions.
• Performance is characterized by factors like response time, load, and stability of the application.
• Response time
• Response time is the time taken by the server to respond to the client's request.
• Load
• Load refers to N users using the application simultaneously, or sending requests to the server at the same time.
• Stability
• Stability refers to the application remaining dependable when N users use it simultaneously for a particular period of time.
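These three factors can be illustrated with a small, self-contained Python sketch; `fake_request` is a hypothetical stand-in for a real client request (it just sleeps to simulate server work):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(delay=0.01):
    """Hypothetical stand-in for one client request to the server."""
    start = time.perf_counter()
    time.sleep(delay)                      # simulated server processing
    return time.perf_counter() - start     # response time in seconds

def measure_under_load(n_users):
    """Send n_users simultaneous requests (the 'load') and collect response times."""
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        return list(pool.map(lambda _: fake_request(), range(n_users)))

times = measure_under_load(10)
print(f"Average response time for 10 concurrent users: {sum(times) / len(times):.4f}s")
```

Stability would correspond to repeating `measure_under_load` over a long duration and checking that the response times stay consistent.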
FACTORS GOVERNING PERFORMANCE TESTING
• Throughput
• represents the number of requests/business transactions processed by the product in
a specified time duration
• Response Time
• delay between the point of request and the first response from the product
• Latency
• The delay caused by the application, the operating system and the environment
• Tuning
• procedure by which the product performance is enhanced by setting different values
to the parameters
• Benchmarking
• Testing compared to the competitive product
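As a rough illustration, throughput and response time can be computed directly from request counts and timestamps (a minimal sketch; the numbers below are made up):

```python
def throughput(n_requests, duration_seconds):
    """Requests/business transactions processed per unit time."""
    return n_requests / duration_seconds

def response_time(request_sent, first_response):
    """Delay between the point of request and the first response."""
    return first_response - request_sent

# e.g. 1200 transactions processed in a 60-second test window
print(throughput(1200, 60))          # transactions per second
print(response_time(5.0, 5.25))      # seconds from request to first response
```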
Continued..
• An important factor that affects performance testing is the availability of resources. The exercise of finding out the required resources and configuration is called capacity planning
• The purpose of a capacity planning exercise is to help customers plan for the set of hardware and software resources needed prior to the installation or upgrade of the product
• Performance testing is done to ensure that a product
• Processes the required number of transactions in any given interval (throughput)
• Is available and running under different load conditions (availability)
• Responds fast enough under different load conditions (response time)
• Delivers a worthwhile return on investment for the resources
• Is comparable to competitors' products on different parameters
Types of Performance Testing
• Load testing
• Stress testing
• Scalability testing
• Stability testing
Continued..
• Load testing
• It checks the product’s ability to perform under anticipated user loads.
• The objective is to identify performance congestion before the software
product is launched in the market.
• Stress testing
• It involves testing a product under extreme workloads to see whether it
handles high traffic or not.
• The objective is to identify the breaking point of a software product.
• Endurance testing
• It is performed to ensure the software can handle the expected load over a long
period.
Continued..
• Volume testing
• A large amount of data is saved in a database and the overall software system's behaviour is observed.
• The objective is to check the product's performance under varying database volumes.
• Scalability testing
• The software application’s effectiveness is determined by scaling up to support an increase in
user load.
• It helps in planning capacity additions to your software system.
• Spike testing
• It tests the product’s reaction to sudden large spikes in the load generated by users.
• Soak testing
• Soak testing is a type of load testing that tests the system’s ability to handle a sustained load
over a prolonged period.
• It helps identify any issues that may occur after prolonged usage of the system.
Performance Testing Process
Performance Testing Attributes
• Speed
It determines whether the software product responds rapidly.
• Scalability
It determines the amount of load the software product can handle
at a time.
• Stability
It determines whether the software product is stable in case of
varying workloads.
• Reliability
It determines whether the software product performs consistently, without failures, under the given conditions.
Throughput
• Capability of a product to handle multiple transactions in a given period.
• Throughput represents the number of requests/business transactions processed by
the product in a specified time duration.
• In the light load zone, as the number of concurrent users increases, the throughput increases almost linearly with the number of requests, since there is very little congestion within the application server system queues.
• In the heavy load zone (Section B), as the concurrent client load increases, throughput remains relatively constant.
• In Section C (the buckle zone) one or more of the system
components have become exhausted and throughput starts to
degrade.
• For example, the system might enter the buckle zone when
the network connections at the Web server exhaust the limits
of the network adapter or if the requests exceed operating
system limits for file handles.
Response time
• It is equally important to find out how much time each of the
transactions took to complete.
• Response time is defined as the delay between the point of request and
the first response from the product.
• The response time increases proportionally to the user load.
Tuning and Benchmarking
• Tuning is the procedure by which product performance is enhanced by setting different values to the parameters of the product, the operating system and other components.
• Tuning improves the product performance without having to touch the source code of the product.
• Comparing the performance of the product with that of the competitive products is called benchmarking.
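Tuning can be sketched as a simple parameter sweep: rerun the same performance test for different values of one parameter and keep the best-performing value. `run_perf_test` below is a hypothetical stand-in for a real performance run (its response surface is simulated):

```python
def run_perf_test(cache_size_mb):
    """Hypothetical performance run: returns measured throughput (req/s)
    for one value of a tunable parameter (here, a cache size in MB).
    The simulated response surface peaks at 64 MB."""
    return 100 - abs(cache_size_mb - 64) * 0.5

def tune(parameter_values):
    """Try each candidate value and return the one giving the best throughput."""
    results = {v: run_perf_test(v) for v in parameter_values}
    best = max(results, key=results.get)
    return best, results[best]

best_value, best_throughput = tune([16, 32, 64, 128, 256])
print(best_value, best_throughput)   # 64 100.0
```

Real tuning sweeps several product and OS parameters at once, since (as noted later in this unit) combinations of parameters also change performance.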
Methodology for Performance testing
• Collecting requirements
• Writing test cases
• Automating performance test cases
• Executing performance test cases
• Analyzing performance test results
• Performance tuning
• Performance benchmarking
• Recommending right configuration for the customers (capacity
planning)
Collecting requirements (ATM example)
• PT requirement to be testable
• PT requirement to be clearly stated as what is to be measured /
improved
• PT requirements to be associated with the actual number / % of improvement that is desired
• Performance can be compared with previous release
• Performance can be compared with competitive products
• Performance compared to absolute numbers derived from
actual need
• Performance numbers derived from design
Writing test cases
• PT test cases have to include the following details
• List of operations/ transactions
• Steps for execution
• Load pattern
• Product and OS parameters
• Resource and configuration
• Expected response time, throughput and latency
• Product versions to be compared
• Performance test cases are repetitive in nature. These test cases are normally executed repeatedly for different values of parameters, different load configurations and so on.
• Performance testing involves significant time and effort, so high-priority test cases should be completed before others. Priority is of two types: absolute priority (set by the customer) and relative priority (set by the testing team).
Automating Performance Test Cases
• Automation is an important step in the methodology for performance testing
• Performance testing lends itself to automation due to the following characteristics:
• Performance testing is repetitive
• Performance test cases cannot be effective without automation
• The results of performance testing need to be accurate; manually calculating the response time and throughput introduces inaccuracy
• Performance testing takes several factors into account; remembering and testing the various permutations and combinations of these factors manually is difficult
Automating performance test cases
Record
• Record the defined testing activities that will be used as a
foundation for your load test scripts.
• One activity per task or multiple activities depending on user
task definition
Modify
• Modify load test scripts defined by recorder to reflect more
realistic Load test simulations.
• Defining the project, users
• Randomize parameters (Data, times, environment)
• Randomize user activities that occur during the load test
Virtual Users (VUs):
• Start: 5
• Incremented by: 5
• Maximum: 200
• Think Time: 5 sec
Test Goals:
• Max Response Time <= 20 Sec
Test Script:
• One typical user from login through completion.
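The ramp above (start at 5 virtual users, increment by 5 up to a maximum of 200) can be expressed as a small schedule generator, with the test goal checked as an assertion; this is only a sketch of what a load-testing tool does internally:

```python
def vu_ramp(start=5, step=5, maximum=200):
    """Yield the virtual-user count for each stage of the load test."""
    users = start
    while users <= maximum:
        yield users
        users += step

stages = list(vu_ramp())
print(stages[:4], "...", stages[-1])   # [5, 10, 15, 20] ... 200

MAX_RESPONSE_TIME = 20.0   # seconds: the stated test goal
measured = 12.3            # hypothetical result for one stage
assert measured <= MAX_RESPONSE_TIME, "test goal violated"
```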
Executing Performance test cases
Performance testing generally involves less effort for execution but more effort for planning, data collection and analysis. The data collected during execution includes:
• Start and end time of test case execution
• Log and trace/audit files of the product and operating system
• Configuration of all environmental factors
• The response time, throughput, latency and so on, as specified in the test case documentation, at regular intervals
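A minimal sketch of such data collection, assuming each test case reports its own metrics (real runs would also gather product and OS log/trace files):

```python
import time

def run_with_record(test_case_name, test_fn):
    """Run a test case and record start/end time plus its reported metrics."""
    record = {"test_case": test_case_name, "start": time.time()}
    record["metrics"] = test_fn()   # e.g. {"response_time": ..., "throughput": ...}
    record["end"] = time.time()
    return record

rec = run_with_record("login_100_users",
                      lambda: {"response_time": 0.8, "throughput": 125})
print(rec["metrics"]["throughput"])   # 125
```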
Scenario Testing
• A set of transactions/operations that are usually performed by the user forms the scenario for performance testing
• This testing is done to ensure that the mix of operations/transactions concurrently done by different users/machines meets the performance criteria
Configuration Testing
• What performance a product delivers for different configurations of hardware and network setup is another aspect that needs to be covered during execution
• This requirement mandates the need for testing different configurations
Tools used for performance testing
• Open Source Tools
• OpenSTA
• Diesel Test
• TestMaker
• Grinder
• LoadSim
• Jmeter
• Rubis
• Commercial Tools
• LoadRunner
• Silk Performer
• Qengine
• Empirix e-Load
Challenges of performance testing
• Availability of skills
• Requires a large number of resources
• PT test results have to reflect the real-time environment and expectations
• Selection of the right tool
• Tools are expensive
• Performance testers have to learn the tool's meta-language and scripts
• Interfacing with different teams
• Lack of seriousness about PT by the management and development team
Performance benchmarking
• Performance benchmarking is about comparing the performance of transactions with that of the competitors.
• No two products have the same architecture, design, functionality and code.
• The customers and types of deployments can also be different.
• Performance test cases are repeated for different configurations and for different values of parameters, and the results are then plotted for quick analysis.
Analyzing Performance test results
• Analyzing the performance test results requires multi-dimensional thinking.
• This is the most complex part of performance testing, where product knowledge, analytical thinking and statistical background are all absolutely essential.
• Before analyzing the data, some calculations and organization of the data are required:
• Calculating the mean of the performance test result data
• Calculating the standard deviation
• Removing the noise (noise removal), then re-plotting and re-calculating the mean and standard deviation
• Where caching and similar technologies are implemented in the product, the data served from the cache needs to be differentiated from the data that gets processed by the product and presented
• Differentiating the performance data collected when the resources were completely available from data collected while background activities were going on
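The mean, standard deviation and noise-removal steps above can be sketched with Python's standard library; treating samples more than k standard deviations from the mean as noise is one common convention, assumed here:

```python
import statistics

def clean_stats(samples, k=2.0):
    """Mean and standard deviation before and after noise removal,
    where 'noise' is assumed to be any sample more than k standard
    deviations from the mean."""
    mean, stdev = statistics.mean(samples), statistics.stdev(samples)
    kept = [s for s in samples if abs(s - mean) <= k * stdev]
    return (mean, stdev), (statistics.mean(kept), statistics.stdev(kept))

# Response times in seconds; 9.0 is an outlier caused by a background task.
raw = [1.0, 1.1, 0.9, 1.2, 1.0, 9.0]
(before, _), (after, _) = clean_stats(raw)
print(f"mean {before:.2f} -> {after:.2f}")
```

Removing the single outlier here moves the mean from roughly 2.37 s down to 1.04 s, which is why re-calculating after noise removal matters.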
Conclusion of analyzing Performance data
• Whether performance of the product is consistent when tests are executed multiple times
• What performance can be expected for what type of configuration and resources
• What parameters impact performance, and how they can be used to derive better performance
• What is the effect of scenarios involving a mix of several operations on the performance factors
• What is the effect of product technologies such as caching on performance improvements
• Up to what load the performance numbers are acceptable, and whether the performance of the product meets the criteria of graceful degradation
• What is the optimum throughput and response time of the product for a set of factors such as load, resources and parameters
• What performance requirements are met, and how the performance looks when compared to the previous version, the expectations set earlier, or the competition
• In case a high-end configuration is not available for performance testing, the performance numbers for a high-end system should be predicted
Performance Tuning
• Analyzing performance data helps in narrowing down the list of parameters that really impact the performance results, and thereby in improving product performance
• Once the parameters are narrowed down, the performance test cases are repeated for different values of those parameters to further analyze their effect in getting better performance
• Understanding each parameter and its impact on the product in isolation is not sufficient for performance tuning
• Combinations of parameters, too, cause changes in performance
• The relationship among the various parameters and their combined impact is therefore very important to performance tuning
• There are two steps involved in getting the optimum mileage from performance tuning:
• Tuning the product parameters
• Tuning the operating system parameters
Tools for Performance testing
Tools are of two types: functional performance tools and load testing tools
Functional Performance Tools
• Help in recording and playing back the transactions and obtaining performance numbers
(WinRunner from Mercury, QA Partner from Compuware, SilkTest from Segue)
Load Testing Tools
• Simulate the load conditions for performance testing without having to keep that many real users or machines
(LoadRunner from Mercury, QALoad from Compuware, SilkPerformer from Segue)
Process for Performance Testing
• Ever-changing performance requirements are a serious threat to the product's performance.
• Hence it is important to collect the performance requirements early in the life cycle and address them, because changes to architecture and design late in the cycle are very expensive
Continued..
The next step in the performance testing process is to create a performance test plan
• Resource Requirements
• Test Bed, Test-Lab Setup
• Responsibilities
• Setting up product traces, audits and logs
• Entry and Exit criteria
Challenges
• Availability of skills is a major problem
• Requires a large number and amount of resources
• Need to reflect the real-life environment and expectations
• Selecting the right tool
• Interacting with different teams
• Lack of seriousness by the management and development team
Regression Testing
• Software undergoes constant changes. Such changes are necessitated because of
defects to be fixed, enhancements to be made to existing functionality or new
functionality to be added.
• Regression testing is done to ensure that enhancements or defect fixes made to the software work properly and do not affect the existing functionality.
• Any time such changes are made, it is important to ensure that
• The changes or additions work as designed
• The changes or additions do not break something that is already working and
should continue to work
Continued..
• Regression testing follows a selective re-testing technique
• A set of test cases to verify the defect fixes is selected by the test team
• Regression testing runs after every change in the software development cycle, to ensure that the change introduces no unintended breaks
• An impact analysis is done to find out what areas may get impacted by those defect fixes
• Based on the impact analysis, some more test cases are selected to take care of the impacted areas
• Since this testing technique focuses on the reuse of existing test cases that have already been executed, the technique is called selective re-testing
Types of Regression Testing
• When the test team or customers start using the product, they report defects
• Developers analyze the defects and make defect fixes
• The developers then do appropriate unit testing and check the defect fixes into a configuration management system
• The source code for the complete product is compiled, and these defect fixes along with the existing features get consolidated into a build
• A build thus becomes an aggregation of all the defect fixes and features that are present in the product
• There are two types of regression testing
• Regular Regression Testing
• Final Regression Testing
Continued..
• Regular Regression Testing
• Is done between test cycles to ensure that the defect fixes that are done, and the functionality that was working in earlier test cycles, continue to work
• Final Regression Testing
• Done to validate before the release
• The CM engineer delivers the final build with the media and other contents exactly
as it would go to the customer
• The final regression test cycle is conducted for a specific period of duration called
cook time
• Cook time is necessary to keep testing for a certain duration, since some defects can
be unearthed only after the product has been used for a certain time duration
• Is more critical than any other type or phase of testing, as this is the only testing that ensures that the same build of the product that was tested reaches the customer
Regression testing types
Corrective
•
When software’s source code has not changed.
•
Want to verify that the present system is functioning properly,
thus you will analyze the existing features and their associated
test cases rather than creating new ones.
Progressive
•
optimal method for modifying testing objectives and developing
new test cases.
•
This form of software testing is chosen when introducing a new
system component.
•
This is why it enables you to confirm that modifications do not
negatively impact the existing components
Continued..
Selective
• The test scope is confined to a chosen selection of already-created test scenarios, as the title implies.
• Therefore, rather than retesting the entire system, only a few selected components are retested.
Retest-All
• The purpose of this approach is to re-execute every test scenario in the testing set, to ensure that no problems are introduced by a modification to the software's source code.
Partial
• To assess the impact of introducing new features to the system, partial testing is conducted: for instance, whether inserting a new line of code into the source will influence existing functionality of the system.
• In contrast to selective testing, the new functions are evaluated beside the existing ones. This allows you to assess their impact.
Complete
• This requires testing the overall structure simultaneously.
• Similar to acceptance testing, complete regression testing determines whether the UI is damaged by the addition of one or more components. Well prior to the product's ultimate launch, it undergoes exhaustive testing.
Unit Regression
• During the unit testing phase, analysts perform unit regression testing, in which they test every single unit of code independently of any other units.
Regional Regression
• Testing the areas affected by a change or modification is what regional regression testing is all about.
• The changes in these areas are analyzed to determine whether any previously reliable modules will be broken.
Since you are aware of the numerous regression testing approaches that the quality assurance team may perform, you can plan accordingly. In the current era of automation, numerous tools are used for regression testing.
Continued..
When To Do Regression Testing
• Regression testing is done between test cycles to find out whether the software delivered is as good as or better than the builds received in the past
• It is necessary to perform regression testing when
• A reasonable amount of initial testing has already been carried out
• A good number of defects have been fixed
• Defect fixes that can produce side-effects are taken care of
• Regression testing may also be performed periodically, as a proactive measure
When to do regression testing
• When a new functionality is added to the system and the code has been modified to absorb and integrate that functionality with the existing code.
• When some defect has been identified in the software and the code is debugged to fix it.
• When the code is modified to optimize its working.
Continued..
Process of Regression testing
Continued..
• Failures that regression testing misses may be found only very late in the cycle, or by the customers.
• A well-defined methodology for regression can prevent such costly issues.
The methodology is made up of the following steps:
• Performing an initial smoke or sanity test
• Understanding the criteria for selecting the test cases
• A methodology for selecting test cases
• Resetting the test cases for test execution
• Concluding the results of a regression cycle
Techniques for the selection of Test cases for Regression Testing
• Select all test cases
• All the test cases are selected from the already existing test suite. It is the simplest and safest technique, but not very efficient.
• Select test cases randomly
• Test cases are selected randomly from the existing test suite. This is only useful if all the test cases are equally good in their fault-detection capability, which is very rare; hence it is not used in most cases.
• Select modification-traversing test cases
• Only those test cases are selected which cover and test the modified portions of the source code and the parts which are affected by these modifications.
• Select higher-priority test cases
• Priority codes are assigned to each test case of the test suite based on their bug-detection capability, customer requirements, etc.
• After assigning the priority codes, test cases with the highest priorities are selected for the process of regression testing.
• The test case with the highest priority has the highest rank; for example, a test case with priority code 2 is less important than a test case with priority code 1.
Performing an initial smoke or sanity test
• Whenever changes are made to a product, it should first be made sure that nothing basic breaks
Smoke testing consists of
• Identifying the basic functionality that a product must satisfy
• Designing test cases to ensure that these basic functionalities work, and packaging them into a smoke test suite
• Ensuring that every time a product is built, this suite is run successfully before anything else is run; and
• If this suite fails, escalating to the developers to identify the changes and perhaps change or roll back the changes to a state where the smoke test suite succeeds
• Developers introducing a change should run the smoke test suite successfully on that build before checking the code into the configuration management repository
• Defects can be introduced into the product not only by the code but also by the scripts used for compiling and linking the program
• Smoke testing enables the uncovering of such errors introduced by the build procedures
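The smoke-suite idea can be sketched as a simple build gate; the three check functions below are hypothetical placeholders for real basic-functionality tests:

```python
def check_login():    return True   # hypothetical basic-functionality checks
def check_withdraw(): return True
def check_balance():  return True

SMOKE_SUITE = [check_login, check_withdraw, check_balance]

def smoke_test(build_id):
    """Run the smoke suite: the build is accepted for further testing
    only if every basic check passes; otherwise it is escalated."""
    failures = [t.__name__ for t in SMOKE_SUITE if not t()]
    if failures:
        return f"build {build_id} REJECTED, escalate to developers: {failures}"
    return f"build {build_id} accepted for further testing"

print(smoke_test("2024.10.1"))
```

Running this gate on every build, before anything else, is what catches errors introduced by the build procedures themselves.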
Understanding the criteria for selecting the test cases
• There are two approaches to selecting the test cases for a regression run. An organization can choose to have a constant set of regression tests that are run for every build or change
• In order to cover all fixes, the constant set of tests has to encompass all features, so tests that are not strictly required may end up being run every time
• A given set of defect fixes or changes may introduce problems for which there may not be ready-made test cases in the constant set
• Hence, even after running all the regression test cases, newly introduced defects may continue to exist
Continued..
The alternative is to select the test cases dynamically for each build, by making judicious choices of the test cases.
The selection of test cases for regression testing requires knowledge of
• The defect fixes and changes made in the current build
• The ways to test the current changes
• The impact that the current changes may have on other parts of the system; and
• The ways of testing the other impacted parts
Some of the criteria to select test cases for regression testing are as follows:
• Include test cases that have produced the maximum defects in the past
• Include test cases for a functionality in which a change has been made
• Include test cases in which problems are reported
Continued..
• Include test cases that test the basic functionality or the core features of the
product which are mandatory requirements of the customer
• Include test cases that test the end-to-end behavior of the application or the
product
• Include test cases to test the positive test conditions
• Include the area which is highly visible to the users
• When selecting test cases, do not select test cases which are bound to fail and have little or no relevance to the defect fixes
• Select more positive test cases than negative test cases for the final regression test cycle
• Negative test cases break the system and can create confusion with respect to pinpointing the cause of failure
• Regular test cycles before regression testing should have the right mix of both
positive and negative test cases
• The selection of test cases for regression testing depends more on the impact of
defect fixes than the criticality of defect itself
Classifying Test Cases
When the test cases have to be selected dynamically for each regression run it would be
worthwhile to plan for regression testing from the beginning of project, even before the test
cycles start.
To enable choosing the right tests for a regression run, the test cases can be classified into
various priorities based on importance and customer usage
Priority – 0 :
• These test cases can be called sanity test cases which check basic functionality and are
run for accepting the build for further testing.
• They are also run when a product goes through a major change.
• These test cases deliver a very high project value, both to product development teams and to the customers
Continued..
Priority-1:
• Use the basic and normal setup; these test cases deliver high project value to both the development team and customers
Priority-2:
• These test cases deliver moderate project value.
• They are executed as part of the testing cycle and selected for regression testing on a need basis
Methodology for Selecting Test Cases
• Once the test cases are classified into different priorities, the test cases can be selected on a case-by-case basis
• Case 1
• If the criticality and impact of the defect fixes are low, then it is enough that a test engineer selects a few test cases from the Test Case Database (TCDB)
• Case 2
• If the criticality and the impact of the defect fixes are medium, then we need to execute all priority-0 and priority-1 test cases
• If the defect fixes need additional test cases, then selected test cases from priority-2 can be used. Selecting priority-2 test cases in this case is desirable but not necessary
• Case 3
• If the criticality and impact of the defect fixes are high, then priority-0, 1 and 2 test cases are executed
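The three cases above map naturally onto a priority-based filter. A sketch, assuming each TCDB entry carries a priority code (0, 1 or 2) and an optional hand-picked flag:

```python
def select_for_regression(test_cases, criticality):
    """Pick test cases from the TCDB based on the criticality/impact of the
    defect fixes: low -> a few hand-picked cases, medium -> priority 0 and 1,
    high -> priority 0, 1 and 2 (i.e. everything)."""
    if criticality == "low":
        return [t for t in test_cases if t.get("hand_picked")]
    if criticality == "medium":
        return [t for t in test_cases if t["priority"] <= 1]
    return list(test_cases)   # high: run all priority 0, 1 and 2 cases

tcdb = [
    {"name": "smoke_login",  "priority": 0, "hand_picked": True},
    {"name": "transfer",     "priority": 1},
    {"name": "report_fonts", "priority": 2},
]
print([t["name"] for t in select_for_regression(tcdb, "medium")])
# ['smoke_login', 'transfer']
```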
Continued..
• Regress All:
• For regression testing, all priority-0, 1 and 2 test cases are rerun.
• This means all the test cases in the regression test bed/suite are executed
• Priority based regression:
• Priority-0, 1 and 2 test cases are run in order, based on the availability of time
• Regress Changes:
• In this methodology, code changes are compared to the last cycle of testing and test cases are selected based on their impact on the code
• Random regression:
• Random test cases are selected and executed
• Context based dynamic regression:
• A few priority-0 test cases are selected based on the context created by the analysis of those test cases after execution
Resetting the test cases for regression Testing
After selecting the test cases, the next step is to prepare the test cases for execution; for this, the test case result history is needed. Test case results are reset:
• When there is a major change in the product
• When there is a change in the build procedure which affects the product
• In a large release cycle where some test cases were not executed for a long time
• When the product is in the final regression test cycle with a few selected test cases
• When there is a situation where the expected results of the test cases could be quite different from the previous cycles
• The test cases relating to defect fixes and production problems need to be evaluated release after release. In case they are found to be fine, they can be reset
• Whenever existing application functionality is removed, the related test cases can be reset
• Test cases that consistently produce a positive result can be removed
• Test cases relating to a few negative test conditions can be removed
Continued..
• Component test cycle phase:
• Regression testing between component test cycles uses only priority-0 test cases.
• Integration testing phase:
• After component testing is over, if regression is performed between integration test cycles, priority-0 and priority-1 test cases are executed.
• Priority-1 testing can use multiple threads. A reset procedure during this phase may affect all priority-0 and priority-1 test cases.
• System test phase:
• Priority-2 testing starts after all priority-1 test cases are executed with an acceptable pass percentage as defined in the test plan.
• A "RESET" procedure during this phase may affect priority-0, priority-1 and priority-2 test cases.
• Why reset test cases:
• The procedure gives a clear picture of how much testing still remains and reflects the status of the testing.
• It removes any bias towards test cases, because resetting test case results prevents the history of the test cases from being viewed by testers
Concluding the results of regression testing
Best Practices in Regression Testing
• A regression methodology can be applied
• To assess the quality of the product between test cycles
• In a major release of a product, after all test cycles have been executed, when planning a regression test cycle for defect fixes
• In a minor release of a product having only defect fixes, where regression test cycles are planned to take care of those defect fixes
Continued..
• Practice 1
• Regression can be used for all types of releases.
• Practice 2
• Mapping defect identifiers with test cases improves regression quality.
• Practice 3
• Create and execute a regression test bed daily.
• Practice 4
• Ask your best test engineer to select the test cases.
• Practice 5
• Detect defects, and protect your product from defects and defect fixes.