
Testing document- ISTQB

What is Testing
Testing is a group of techniques to determine the correctness of an application under a predefined script, but testing cannot find all the defects of an application. The main intent of testing is to detect failures of the application so that they can be discovered and corrected. It does not demonstrate that a product functions properly under all conditions, but only that it is not working under some specific conditions.
Testing compares the behavior and state of the software against mechanisms by which a problem can be recognized. Such mechanisms may include past versions of the same product, comparable products, interfaces of expected purpose, relevant standards, or other criteria, but are not limited to these.
Testing includes an examination of the code as well as the execution of the code in various environments and conditions, and an examination of all aspects of the code. In the current scenario of software development, a testing team may be separate from the development team so that the information derived from testing can be used to correct the software development process.
The success of software depends upon the acceptance of its targeted audience, an easy graphical user interface, strong functionality, load handling, etc. For example, the audience of a banking application is totally different from the audience of a video game. Therefore, when an organization develops a software product, it can assess whether the product will be beneficial to its purchasers and other audiences.
Software Testing Principles
Software testing is a procedure of executing software or an application to identify defects or bugs. For testing an application, we need to follow some principles to make our product defect-free, and these principles also help the test engineers to test the software with less effort and time. Here, in this section, we are going to learn about the seven essential principles of software testing.
Let us see the seven testing principles, one by one:
o Testing shows the presence of defects
o Exhaustive Testing is not possible
o Early Testing
o Defect Clustering
o Pesticide Paradox
o Testing is context-dependent
o Absence of errors fallacy
Testing shows the presence of defects
The test engineer tests the application to make sure that it is free of bugs or defects. While testing, we can only identify whether the application has any errors. The primary purpose of testing is to identify unknown bugs with the help of various methods and testing techniques, because every test should be traceable to a customer requirement; that is, testing aims to find any defect that might cause the product to fail to meet the client's needs.
By testing an application, we can decrease the number of bugs, but this does not mean that the application is defect-free, because sometimes the software seems to be bug-free while multiple types of testing are performed on it, yet at the time of deployment to the production server the end-user may encounter bugs that were not found in the testing process.
Exhaustive Testing is not possible
It is practically impossible to test all the modules and all of their features with every effective and non-effective combination of input data during the actual testing process. Instead of performing exhaustive testing, which takes boundless effort and in which most of the hard work is unproductive, we prioritize test scenarios according to the importance of the modules, because the product timelines will not permit us to perform every possible testing scenario.
Early Testing
Here, early testing means that all the testing activities should start in the early stages of the software development life cycle, from the requirement analysis stage onward, because if we find the bugs at an early stage, they can be fixed in that initial stage itself, which costs much less than fixing defects identified in a later phase of the process.
To perform testing, we require the requirement specification documents; therefore, if the requirements are defined incorrectly, they can be fixed directly, rather than being fixed in a later stage such as the development phase.
Defect clustering
Defect clustering means that, throughout the testing process, most of the bugs we detect are correlated with a small number of modules. There are various reasons for this: the modules could be complicated, the coding part may be complex, and so on.
Such software follows the Pareto Principle, which states that approximately eighty percent of the defects are found in twenty percent of the modules. With the help of this principle, we can find the risky modules, but the method has its difficulties: if the same tests are performed repeatedly, they will no longer be able to identify new defects.
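The 80/20 claim above can be made concrete with a small sketch. The module names and defect counts below are hypothetical, chosen only to illustrate how a tester might measure defect clustering:

```python
# Hypothetical defect counts per module, illustrating defect clustering:
# a small share of modules holds most of the defects (Pareto Principle).
defects = {"payments": 80, "search": 8, "profile": 6, "help": 4, "admin": 2}

total = sum(defects.values())            # 100 defects in total
top_fifth = ["payments"]                 # 1 of 5 modules = 20% of the modules
clustered = sum(defects[m] for m in top_fifth)

print(f"{clustered / total:.0%} of defects sit in 20% of the modules")
```

Here the single riskiest module accounts for 80 of the 100 logged defects, which is exactly the kind of skew that tells a team where to concentrate its regression testing.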
Pesticide paradox
This principle states that if we execute the same set of test cases again and again over a period of time, these tests will no longer be able to find new bugs in the software or the application. To get over this pesticide paradox, it is very important to review all the test cases frequently, and new and different tests need to be written to exercise other parts of the application, which helps us to find more bugs.
Testing is context-dependent
The testing-is-context-dependent principle states that many kinds of applications, such as e-commerce websites, commercial websites, and so on, are available in the market, and each is tested in its own definite way, because every application has its own needs, features, and functionality. To check each type of application, we take the help of various kinds of testing, different techniques, approaches, and methods. Therefore, testing depends on the context of the application.
Absence of errors fallacy
Once the application is completely tested and no bugs are identified before the release, we can say that the application is 99 percent bug-free. But if the application was tested against incorrect requirements, then identifying flaws and fixing them within the given period would not help, because testing done against the wrong specification does not apply to the client's requirements. The absence-of-errors fallacy means that identifying and fixing bugs will not help if the application is impractical and unable to accomplish the client's requirements and needs.
Software Development Life Cycle (SDLC)
SDLC is a process that creates a structure for the development of software. There are different phases within the SDLC, and each phase has its various activities. It enables the development team to design, create, and deliver a high-quality product.
The SDLC describes the various phases of software development and the order in which they are executed. Each phase requires the deliverable of the previous phase in the life cycle. Requirements are translated into design, design into development, and development into testing; after testing, the product is given to the client.
Let's see all the phases in detail:
Different phases of the software development cycle
1. Requirement Phase
This is the most crucial phase of the software development life cycle for the development team as well as for the project manager. During this phase, the client states requirements, specifications, expectations, and any other special requirements related to the product or software. All of these are gathered by the business manager, project manager, or analyst of the service-providing company.
The requirements include how the product will be used and who will use it, which determines the load of operations. All information gathered in this phase is critical to developing the product as per the customer's requirements.
2. Design Phase
The design phase includes a detailed analysis of the new software according to the requirement phase. This is a high-priority phase in the development life cycle of a system, because the logical design of the system is converted into the physical design. The output of the requirement phase is a collection of things that are required, and the design phase gives the way to accomplish these requirements. The decision on all required essential tools, such as the programming language (e.g., Java, .NET, PHP), the database (e.g., Oracle, MySQL), and the combination of hardware and software that provides a platform on which the software can run without any problem, is taken in this phase.
Several techniques and tools, such as data flow diagrams, flowcharts, decision tables, decision trees, the data dictionary, and the structured dictionary, are used for describing the system design.
3. Build /Development Phase
After the successful completion of the requirement and design phase, the next step is to
implement the design into the development of a software system. In this phase, work is divided
into small units, and coding starts by the team of developers according to the design discussed
in the previous phase and according to the requirements of the client discussed in requirement
phase to produce the desired result.
Front-end developers develop easy and attractive GUI and necessary interfaces to interact with
back-end operations and back-end developers do back-end coding according to the required
operations. All is done according to the procedure and guidelines demonstrated by the project
manager.
Since this is the coding phase, it takes the longest time and requires the most focused approach from the developers in the software development life cycle.
4. Testing Phase
Testing is the last step in completing a software system. In this phase, the developed combination of GUI and back-end is tested against the requirements stated in the requirement phase. Testing determines whether the software actually gives the results specified by the requirements addressed in the requirement phase. The testing team makes a test plan to start the testing. This test plan includes all the essential types of testing, such as integration testing, unit testing, acceptance testing, and system testing. Non-functional testing is also done in this phase.
If there are any defects in the software, or it is not working as expected, the testing team informs the development team about the issue in detail. If it is a valid defect, or one worth sorting out, it will be fixed; the development team replaces the build with a new one, and the fix also needs to be verified.
5. Deployment/ Deliver Phase
When software testing is completed with a satisfying result, and there are no remaining issues in
the working of the software, it is delivered to the customer for their use.
As soon as customers receive the product, they are recommended first to do beta testing. In beta testing, the customer can request any changes that are not present in the software but were mentioned in the requirement document, or any other GUI changes to make it more user-friendly. Besides this, if any type of defect is encountered while the customer is using the software, it is reported to the development team of that particular software to sort out the problem. If it is a severe issue, the development team solves it in a short time; if it is less severe, it waits for the next version.
After the resolution of all bugs and changes, the software is finally deployed to the end-user.
6. Maintenance
The maintenance phase is the last and longest-lasting phase of the SDLC, because it is the process that continues until the software's life cycle comes to an end. When a customer starts using the software, actual problems start to occur, and at that time there is a need to solve them. This phase also includes making changes in hardware and software to maintain operational effectiveness, such as improving performance and enhancing security features, and to keep up with the customer's requirements over time. This process of taking care of the product from time to time is called maintenance.
So, these are the six phases of the software development life cycle (SDLC) under which the process of developing software takes place. All are compulsory phases; without any one of them, development is not possible, because development continues for the lifetime of the software through the maintenance phase.
Software Development Life Cycle (SDLC) Models
Software development models are the several processes or approaches selected for the development of a project based on the project's objectives. To accomplish various purposes, we have many development life cycle models, and these models identify the multiple phases of the process. Picking the correct model for developing the software application is very important, because it determines the what, where, and when of our planned testing.
o Waterfall model
o Spiral model
o Verification and validation model
o Prototype model
o Hybrid model
Waterfall Model
It is the first sequential-linear model, so called because the output of one stage is the input of the next stage. It is simple and easy to understand and is used for small projects. The various phases of the waterfall model are as follows:
o Requirement analysis
o Feasibility study
o Design
o Coding
o Testing
o Installation
o Maintenance
Spiral Model
It is the best-suited model for a medium-level project. It is also called the cyclic or iterative model. Whenever the modules are dependent on each other, we go for this model. Here, we develop the application module by module and then hand it over to the customer. The different stages of the spiral model are as follows:
o Requirement collection
o Design
o Coding
o Testing
Prototype Model
Since customer rejection was higher in the earlier models, we go for this model, in which customer rejection is less. It allows us to prepare a sample (prototype) in the early stage of the process, show it to the client, get their approval, and then start working on the original project. This model refers to the activity of creating a prototype of the application.
Verification & Validation Model
It is an extended version of the waterfall model. It is implemented in two phases: in the first phase, we perform the verification process, and when the application is ready, we perform the validation process. In this model, the implementation happens in a V shape, which means that the verification process is done along the downward flow and the validation process is completed along the upward flow.
Hybrid Model
The hybrid model is used when we need to combine the properties of two models in a single model. This model is suitable for small, medium, and large projects because it is easy to apply and understand.
The combination of the two models could be as follows:
o V and prototype
o Spiral and prototype
Software Testing Life Cycle (STLC)
The procedure of software testing is also known as the STLC (Software Testing Life Cycle), which includes the phases of the testing process. The testing process is executed in a well-planned and systematic manner, and all activities are done to improve the quality of the software product.
Software testing life cycle contains the following steps:
1. Requirement Analysis
2. Test Plan Creation
3. Environment setup
4. Test case Execution
5. Defect Logging
6. Test Cycle Closure
Requirement Analysis:
The first step of the manual testing procedure is requirement analysis. In this phase, the tester analyses the requirement document of the SDLC (Software Development Life Cycle) to examine the requirements stated by the client. After examining the requirements, the tester makes a test plan to check whether the software meets the requirements or not.
Entry Criteria: For the planning of the test plan, the requirement specification, the application architecture document, and well-defined acceptance criteria should be available.
Activities: Prepare the list of all requirements and queries, and get them resolved by the Technical Manager/Lead, System Architect, Business Analyst, and Client. Make a list of all types of tests (performance, functional, and security) to be performed. Make a list of test environment details, which should contain all the necessary tools to execute the test cases.
Deliverable: A list of all the necessary tests for the testable requirements, and the test environment details.
Test Plan Creation:
Test plan creation is the crucial phase of the STLC, in which all the testing strategies are defined. The tester determines the estimated effort and cost of the entire project. This phase takes place after the successful completion of the requirement analysis phase. The testing strategy and effort estimation documents are provided by this phase. Test case execution can be started after the successful completion of test plan creation.
Entry Criteria: Requirement document.
Activities: Define the objective as well as the scope of the software. List the methods involved in testing. Give an overview of the testing process. Settle the testing environment. Prepare the test schedules and control procedures. Determine roles and responsibilities. List the testing deliverables and define the risks, if any.
Deliverable: The test strategy document and the testing effort estimation document are the deliverables of this phase.
Environment setup:
Setup of the test environment is an independent activity and can be started along with test case development. This is an essential part of the manual testing procedure, as without the environment, testing is not possible. Environment setup requires a group of essential software and hardware to create a test environment. The testing team is not involved in setting up the testing environment; senior developers create it.
Entry Criteria: Test strategy and test plan document. Test case document. Testing data.
Activities: Prepare the list of software and hardware by analyzing the requirement specification. After the setup of the test environment, execute the smoke test cases to check the readiness of the test environment.
Deliverable: Execution report. Defect report.
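The smoke-test idea above can be sketched as a short script. The two checks here are placeholders standing in for real probes of a hypothetical test environment:

```python
# Sketch of a smoke-test run after environment setup: a few shallow checks
# confirm the environment is ready before deeper testing begins.
def database_reachable():
    return True  # placeholder: a real check would ping the test database

def app_under_test_starts():
    return True  # placeholder: a real check would launch the application

smoke_checks = {
    "database reachable": database_reachable,
    "application starts": app_under_test_starts,
}

# Collect the names of any checks that fail; an empty list means "ready".
failures = [name for name, check in smoke_checks.items() if not check()]
print("Environment ready" if not failures else f"Smoke failures: {failures}")
```

The point of a smoke suite is breadth, not depth: each check is cheap, and any failure blocks the test cycle before effort is wasted on a broken environment.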
Test case Execution:
Test case execution takes place after the successful completion of test planning. In this phase, the testing team starts test case development and execution activity. The testing team writes down the detailed test cases and also prepares the test data, if required. The prepared test cases are reviewed by peer members of the team or by the Quality Assurance leader.
The RTM (Requirement Traceability Matrix) is also prepared in this phase. The Requirement Traceability Matrix is an industry-level format used for tracking requirements; each test case is mapped to a requirement specification. Backward and forward traceability can be done via the RTM.
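A minimal sketch of such a traceability matrix, assuming made-up requirement and test case IDs:

```python
# Sketch of a Requirement Traceability Matrix (RTM) as a simple mapping.
# The requirement and test case IDs below are hypothetical examples.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],  # each requirement lists the test cases covering it
    "REQ-002": ["TC-03"],
    "REQ-003": [],                  # an uncovered requirement: a coverage gap
}

# Forward traceability: requirement -> test cases (flags coverage gaps).
uncovered = [req for req, cases in rtm.items() if not cases]

# Backward traceability: test case -> requirements it verifies.
backward = {}
for req, cases in rtm.items():
    for case in cases:
        backward.setdefault(case, []).append(req)

print("Uncovered requirements:", uncovered)
print("TC-01 traces back to:", backward["TC-01"])
```

Forward traceability reveals requirements with no tests, while backward traceability shows which requirement justifies each test case.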
Entry Criteria: Requirement document.
Activities: Creation of test cases. Execution of test cases. Mapping of test cases to requirements.
Deliverable: Test execution result. List of functions with a detailed explanation of defects.
Defect Logging:
Testers and developers evaluate the completion criteria of the software based on test coverage, quality, time consumption, cost, and critical business objectives. This phase determines the characteristics and drawbacks of the software. Test cases and bug reports are analyzed in depth to detect the type of each defect and its severity. Defect logging analysis mainly works to find the defect distribution by severity and type. If any defect is detected, the software is returned to the development team to fix it, and the software is then re-tested on all aspects of the testing. Once the test cycle is fully completed, the test closure report and test metrics are prepared.
Entry Criteria: Test case execution report. Defect report.
Activities: Evaluate the completion criteria of the software based on test coverage, quality, time consumption, cost, and critical business objectives. Defect logging analysis finds the defect distribution by categorizing defects by type and severity.
Deliverable: Closure report. Test metrics.
Test Cycle Closure:
The test cycle closure report includes all the documentation related to software design, development, testing results, and defect reports.
This phase evaluates the strategy of development, the testing procedure, and possible defects, in order to reuse these practices in the future if there is software with the same specification.
Entry Criteria: All documents and reports related to the software.
Activities: Evaluate the strategy of development, the testing procedure, and possible defects, in order to reuse these practices in the future if there is software with the same specification.
Levels of Testing
What are the levels of Software Testing?
Testing levels are the procedure for finding missing areas and avoiding overlap and repetition between the development life cycle stages. We have already seen the various phases of the SDLC (Software Development Life Cycle), such as requirement collection, designing, coding, testing, deployment, and maintenance.
In order to test any application, we need to go through all the above phases of the SDLC. Like the SDLC, we have multiple levels of testing, which help us maintain the quality of the software.
Different Levels of Testing
The levels of software testing involve the different methodologies that can be used while performing software testing.
In software testing, we have four different levels of testing, which are discussed below:
1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing
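To give a flavor of the lowest level, here is a minimal unit test sketch using Python's standard unittest framework. The `add` function is a hypothetical example, not part of any real system under test:

```python
# Unit testing, the first level: a single function is verified in isolation.
import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Run the tests without exiting the interpreter (exit=False).
unittest.main(argv=["example"], exit=False)
```

Each subsequent level (integration, system, acceptance) widens the scope from a single unit like `add` up to the whole product as the customer will use it.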
Types of Software Testing
In this section, we are going to understand the various types of software testing, which can be
used at the time of the Software Development Life Cycle.
As we know, software testing is a process of analyzing an application's functionality as per the customer's prerequisites.
If we want to ensure that our software is bug-free and stable, we must perform the various types of software testing, because testing is the only method that makes our application bug-free.
The different types of Software Testing
The categorization of software testing is a part of diverse testing activities, such as test strategy, test deliverables, a defined test objective, etc. And software testing is the execution of the software to find defects.
The purpose of having a testing type is to confirm the behavior of the AUT (Application Under Test).
To start testing, we should have the requirements, an application ready, and the necessary resources available. To maintain accountability, we should assign a respective module to different test engineers.
Software testing is mainly divided into two parts, which are as follows:
o Manual Testing
o Automation Testing
What is Manual Testing?
Testing any software or an application according to the client's needs without using any
automation tool is known as manual testing.
In other words, we can say that it is a procedure of verification and validation. Manual testing is used to verify the behavior of an application or software against the requirements specification. We do not require precise knowledge of any testing tool to execute manual test cases, and we can easily prepare the test document while performing manual testing on any application.
Classification of Manual Testing
In software testing, manual testing can be further classified into three different types of testing,
which are as follows:
o White Box Testing
o Black Box Testing
o Grey Box Testing
White Box Testing
The box-testing approach of software testing consists of black box testing and white box testing. We are discussing here white box testing, which is also known as glass box testing, structural testing, clear box testing, open box testing, and transparent box testing. It tests the internal coding and infrastructure of the software, focusing on checking predefined inputs against expected and desired outputs. It is based on the inner workings of an application and revolves around internal structure testing. In this type of testing, programming skills are required to design test cases. The primary goal of white box testing is to focus on the flow of inputs and outputs through the software and on strengthening the security of the software.
The term 'white box' is used because of the internal perspective of the system. The clear box, white box, or transparent box name denotes the ability to see through the software's outer shell into its inner workings.
Developers do white box testing. In this, the developer tests every line of the program's code. The developers perform white-box testing and then send the application to the testing team, where the testers perform black box testing, verify the application against the requirements, identify bugs, and send them back to the developer. The developer fixes the bugs, does one more round of white box testing, and sends the application back to the testing team. Here, fixing a bug implies that the bug is removed and the particular feature works fine in the application.
Here, the test engineers are not involved in fixing the defects, for the following reasons:
o Fixing a bug might interrupt other features. Therefore, the test engineer should always find the bugs, and the developers should do the bug fixes.
o If the test engineers spend most of their time fixing defects, they may be unable to find other bugs in the application.
White box testing contains various tests, which are as follows:
o Path testing
o Loop testing
o Condition testing
o Testing based on the memory perspective
o Test performance of the program
White-box testing | Black box testing
The developers can perform white box testing. | The test engineers perform black box testing.
To perform WBT, we should have an understanding of the programming languages. | To perform BBT, there is no need to have an understanding of the programming languages.
In this, we look into the source code and test the logic of the code. | In this, we verify the functionality of the application based on the requirement specification.
In this, the developer should know about the internal design of the code. | In this, there is no need to know about the internal design of the code.
White box test techniques
1. Data flow testing
2. Control flow testing
3. Branch coverage
4. Statement coverage
5. Decision coverage
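Branch and statement coverage can be illustrated with a tiny hypothetical function: one input executes only the `if` branch, and a second input is needed to cover the `else` branch as well.

```python
# Sketch of statement vs. branch coverage on a small hypothetical function.
def classify(age):
    if age >= 18:
        return "adult"
    else:
        return "minor"

# classify(20) alone executes only the `if` branch: 50% branch coverage.
# Adding classify(10) also exercises the `else` branch, so these two test
# cases together reach 100% branch (and statement) coverage.
assert classify(20) == "adult"
assert classify(10) == "minor"
```

In practice a coverage tool reports these percentages automatically; the white box tester's job is to design inputs that drive execution through every branch.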
Black box testing
Black box testing is a technique of software testing which examines the functionality of software
without peering into its internal structure or coding. The primary source of black box testing is a
specification of requirements that is stated by the customer.
In this method, the tester selects a function, gives an input value to examine its functionality, and checks whether the function gives the expected output or not. If the function produces the correct output, it passes the test; otherwise, it fails. The test team reports the result to the development team and then tests the next function. After the testing of all functions is complete, if there are severe problems, the software is given back to the development team for correction.
Generic steps of black box testing
o The black box test is based on the specification of requirements, so the specification is examined at the beginning.
o In the second step, the tester creates positive and adverse test scenarios by selecting valid and invalid input values to check whether the software processes them correctly or incorrectly.
o In the third step, the tester develops various test cases using techniques such as the decision table, all-pairs testing, equivalence partitioning, error guessing, the cause-effect graph, etc.
o The fourth step includes the execution of all test cases.
o In the fifth step, the tester compares the expected output against the actual output.
o In the sixth and final step, if there is any flaw in the software, it is fixed and tested again.
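The steps above can be sketched as a loop that compares expected output against actual output. The `discount` function and its specification are hypothetical:

```python
# Black box sketch: the tester knows only the specification "discount()
# returns 10% off for totals of 100 or more, otherwise the full price".
# The implementation is treated as opaque; only input/output pairs matter.
def discount(total):  # implementation under test (hidden from the tester)
    return total * 0.9 if total >= 100 else total

test_cases = [
    (100, 90.0),  # valid input at the threshold
    (50, 50),     # valid input below the threshold
    (0, 0),       # adverse/edge input
]

for value, expected in test_cases:
    actual = discount(value)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"input={value} expected={expected} actual={actual} -> {verdict}")
```

Note that the tester never reads the body of `discount`; a failing row is simply reported back to the development team, exactly as the steps describe.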
Test procedure
The test procedure of black box testing is a process in which the tester has specific knowledge of what the software is supposed to do, and develops test cases to check the accuracy of the software's functionality.
It does not require programming knowledge of the software. All test cases are designed by considering the input and output of a particular function. A tester knows the definite output for a particular input, but not how the result is arrived at. There are various techniques used in black box testing, such as the decision table technique, boundary value analysis, state transition, all-pair testing, the cause-effect graph technique, equivalence partitioning, error guessing, the use case technique, and the user story technique. All these techniques are explained in detail within the tutorial.
Test cases
Test cases are created considering the specification of the requirements. They are generally created from working descriptions of the software, including requirements, design parameters, and other specifications. For the testing, the test designer selects both positive test scenarios, taking valid input values, and adverse test scenarios, taking invalid input values, to determine the correct output. Test cases are mainly designed for functional testing but can also be used for non-functional testing. Test cases are designed by the testing team; there is no involvement of the development team.
Techniques Used in Black Box Testing
1. Decision table technique in Black box testing
o The decision table technique is one of the widely used test case design techniques for black box testing. It is a systematic approach in which the various input combinations and their respective system behavior are captured in a tabular form.
o That is why it is also known as a cause-effect table. This technique is used to pick the test cases in a systematic manner; it saves testing time and gives good coverage of the testing area of the software application.
o The decision table technique is appropriate for functions that have a logical relationship between two or more inputs.
o This technique works on the correct combination of inputs and determines the result of various combinations of input. To design test cases with the decision table technique, we need to consider the conditions as inputs and the actions as outputs.
Let's understand it by an example: most of us use an email account, and to use an email account you need to enter the email and its associated password. If both the email and the password correctly match, the user is directed to the email account's homepage; otherwise, the login page comes back with the error message "Incorrect Email" or "Incorrect Password."
Now, let's see how a decision table is created for the login function, in which we can log in by using email and password. Both the email and the password are the conditions, and the expected result is the action.
Conditions          Case 1    Case 2              Case 3           Case 4
Email correct?      T         T                   F                F
Password correct?   T         F                   T                F
Expected action     Homepage  Incorrect Password  Incorrect Email  Incorrect Email
In the table, there are four conditions or test cases for the login function. In the first
condition, if both email and password are correct, then the user should be directed to the
account's homepage.
In the second condition, if the email is correct but the password is incorrect, then the
function should display "Incorrect Password." In the third condition, if the email is incorrect
but the password is correct, then it should display "Incorrect Email."
In the fourth and last condition, both email and password are incorrect, so the function
should display "Incorrect Email."
In this example, all possible conditions or test cases have been included; in the same
way, the testing team also includes all possible test cases so that upcoming bugs can be
caught at the testing level.
In order to find the number of all possible conditions, the tester uses the 2^n formula, where n
denotes the number of inputs; in this example, the number of inputs is 2 (email and
password), and each can take one of two values (correct or incorrect).
Number of possible conditions = 2^(number of inputs)
Number of possible conditions = 2^2 = 4
While using the decision table technique, the tester determines the expected output; if the
function produces the expected output, it passes the test, and if not, it fails.
Failed software is sent back to the development team to fix the defect.
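The decision table rules above can be exercised as a small data-driven test. Below is a minimal sketch in Python; the login() function, its credentials, and its messages are hypothetical assumptions for illustration:

```python
# A minimal sketch of decision-table-driven testing for the login example.
# The login() function, credentials, and messages are assumptions for illustration.

VALID_EMAIL = "user@example.com"
VALID_PASSWORD = "secret123"

def login(email, password):
    """Return the page or message the user is directed to."""
    if email != VALID_EMAIL:
        return "Incorrect Email"
    if password != VALID_PASSWORD:
        return "Incorrect Password"
    return "Homepage"

# Each rule of the decision table: (email correct?, password correct?, expected action)
decision_table = [
    (True,  True,  "Homepage"),
    (True,  False, "Incorrect Password"),
    (False, True,  "Incorrect Email"),
    (False, False, "Incorrect Email"),
]

for email_ok, password_ok, expected in decision_table:
    email = VALID_EMAIL if email_ok else "wrong@example.com"
    password = VALID_PASSWORD if password_ok else "badpass"
    actual = login(email, password)
    assert actual == expected, f"Rule ({email_ok}, {password_ok}) failed: got {actual}"
print("All 4 decision-table rules passed")
```

Each row of the table becomes one test case, so the test data and the specification stay in one place.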
2. Boundary Value Analysis
Boundary value analysis is one of the widely used test case design techniques for black box testing. It
is used to test boundary values because input values near the boundary have higher chances
of error.
Whenever we do testing by boundary value analysis, the tester focuses on whether the software
produces the correct output when boundary values are entered.
Boundary values are those that contain the upper and lower limits of a variable. Assume that age
is a variable of some function, and its minimum value is 18 and the maximum value is 30; both 18
and 30 will be considered boundary values.
The basic assumption of boundary value analysis is, the test cases that are created using
boundary values are most likely to cause an error.
Since 18 and 30 are the boundary values, the tester pays more attention to these values,
but this doesn't mean that middle values like 19, 20, 21, 27, and 29 are ignored. Test cases are
developed for each and every value of the range.
Testing of boundary values is done by making valid and invalid partitions. Invalid partitions are
tested because testing of the output under adverse conditions is also essential.
Let's understand it with a practical example:
Imagine there is a function that accepts a number between 18 and 30, where 18 is the minimum
and 30 is the maximum value of the valid partition; the other values of this partition are 19, 20, 21,
22, 23, 24, 25, 26, 27, 28 and 29. The invalid partitions consist of the numbers less than
18, such as 12, 14, 15, 16 and 17, and greater than 30, such as 31, 32, 34, 36 and 40. The tester develops
test cases for both valid and invalid partitions to capture the behavior of the system under different
input conditions.
The software system passes the test if it accepts a valid number and gives the desired
output; if it does not, the test fails. In another scenario, the software system should not
accept invalid numbers, and if the entered number is invalid, it should display an error
message.
If the software under test follows all the testing guidelines and specifications, it is
sent to the release team; otherwise, it goes back to the development team to fix the defects.
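The valid and invalid partitions above can be checked with a boundary-focused test. A minimal sketch, assuming a hypothetical accepts_age() validator for the 18-30 range:

```python
# A minimal sketch of boundary value analysis for an age field accepting 18-30.
# The accepts_age() validator is an assumption for illustration.

MIN_AGE, MAX_AGE = 18, 30

def accepts_age(age):
    return MIN_AGE <= age <= MAX_AGE

# Boundary-focused cases: values just outside, on, and just inside each boundary.
cases = {
    17: False,  # just below the lower boundary -> invalid partition
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    29: True,   # just below the upper boundary
    30: True,   # upper boundary
    31: False,  # just above the upper boundary -> invalid partition
}

for age, expected in cases.items():
    assert accepts_age(age) == expected, f"age={age} gave the wrong result"
print("All boundary value cases passed")
```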
3. State Transition Technique
The general meaning of state transition is different forms of the same situation, and the state
transition method does just that: it is used to capture the behavior of a
software application when different input values are given to the same function.
We all use ATMs. When we withdraw money, the ATM displays the account details at the end. When
we do another transaction, it again displays account details, but the details displayed
after the second transaction differ from those after the first, even though both are
displayed by the same function of the ATM. So the same function was used each
time, but the output was different; this is called state transition. When testing a software
application, this method checks whether a function follows the state transition specifications
when different inputs are entered.
This applies to applications that provide a specific number of attempts to access
the application, such as the login function of an application which gets locked after a specified
number of incorrect attempts. Let's see it in detail: in the login function we use email and password;
it gives a specific number of attempts to access the application, and after crossing the maximum
number of attempts it gets locked with an error message.
Let's see it in a diagram:
There is a login function of an application that allows a maximum of three attempts,
and after exceeding three attempts, the user is directed to an error page.
State transition table

STATE | LOGIN          | VALIDATION | REDIRECTED
S1    | First Attempt  | Invalid    | S2
S2    | Second Attempt | Invalid    | S3
S3    | Third Attempt  | Invalid    | S5
S4    | Home Page      |            |
S5    | Error Page     |            |
In the above state transition table, we can see that state S1 denotes the first login attempt. If the
first attempt is invalid, the user is directed to the second attempt (state S2). If the second
attempt is also invalid, the user is directed to the third attempt (state S3). If the
third and last attempt is invalid, the user is directed to the error page (state S5).
But if the third attempt is valid, the user is directed to the homepage (state S4).
Let's see the state transition table if the third attempt is valid:

STATE | LOGIN          | VALIDATION | REDIRECTED
S1    | First Attempt  | Invalid    | S2
S2    | Second Attempt | Invalid    | S3
S3    | Third Attempt  | Valid      | S4
S4    | Home Page      |            |
S5    | Error Page     |            |
Using the above state transition tables, we can perform testing of any software application. We
make a state transition table by determining the desired output and then exercise the software
system to examine whether it gives the desired output or not.
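The state transition tables above can be sketched as a small state machine test. The login_attempts() helper and the way the states are encoded are assumptions for illustration:

```python
# A minimal sketch of the login state machine from the state transition tables.
# State names (S1..S5) follow the tables; the implementation is an assumption.

def login_attempts(attempt_results):
    """Walk through up to three login attempts.
    attempt_results is a list of booleans (True = valid credentials).
    Returns the final state: "S4" (home page) or "S5" (error page)."""
    for valid in attempt_results[:3]:
        if valid:
            return "S4"  # a valid attempt redirects to the home page
    return "S5"          # three invalid attempts redirect to the error page

# Table 1: all three attempts invalid -> error page (S5)
assert login_attempts([False, False, False]) == "S5"
# Table 2: third attempt valid -> home page (S4)
assert login_attempts([False, False, True]) == "S4"
# First attempt valid -> home page immediately
assert login_attempts([True]) == "S4"
print("State transition cases passed")
```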
4. All-pairs Testing
The all-pairs testing technique is also known as pairwise testing. It is used to test all the possible
discrete combinations of values. This combinational method is used for testing applications
that use checkbox inputs, radio buttons (a radio button is used when only one option can be
selected, for example when you select gender as male or female),
list boxes, text boxes, etc.
Suppose you have a function of a software application to test in which there are 10 fields for
input data, each with 10 possible values; the total number of discrete combinations is then
10^10 (10 billion), and testing all of them is impractical because it would take a lot of time.
So, let's understand the testing process with an example:
Assume that there is a function with a list box that contains 10 elements, a text box that can accept
1 to 100 characters, a radio button, a checkbox, and an OK button.
The input values that can be accepted by the fields of the given function are listed below:
1. Check Box - Checked or Unchecked
2. List Box - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
3. Radio Button - On or Off
4. Text Box - Any number of characters between 1 and 100.
5. OK - Does not accept any value; it only redirects to the next page.
Calculation of all the possible combinations:
1. Check Box = 2
2. List Box = 10
3. Radio Button = 2
4. Text Box = 100
Total number of test cases = 2 * 10 * 2 * 100 = 4000
The total number of test cases, including negative test cases, is 4000.
Testing 4000 positive and negative test cases is a very long and time-consuming process.
Therefore, the task of the testing team is to reduce the number of test cases. To do this, the
testing team partitions the list box values so that the first value is 0 and the other
value is any other number; ten values are thereby reduced to 2 values.
The values of the checkbox and radio button cannot be reduced because each has only
2 values. Finally, the text box values are divided into three input categories: valid integer,
invalid integer, and alpha-special character.
Now we have only 24 test cases, including negative test cases:
2 * 2 * 2 * 3 = 24
Now the task is to make combinations for the all-pairs technique, in which each column should have
an equal number of values and the total should be equal to 24.
To build the text box column, put the most common input, a valid integer, in first place;
put the second most common input, an invalid integer, in second place; and
put the least common input, an alpha-special character, last.
Then start filling the table: the first column is the text box with three values, the next column is the
list box with 2 values, the third column is the checkbox with 2 values, and the last one is the
radio button, which also has 2 values.
Text box              | List Box | Check Box | Radio Button
Valid Integer         | 0        | Check     | ON
Invalid Integer       | Other    | Uncheck   | OFF
Valid Integer         | 0        | Check     | ON
Invalid Integer       | Other    | Uncheck   | OFF
AlphaSpecialCharacter | 0        | Check     | ON
AlphaSpecialCharacter | Other    | Uncheck   | OFF
In the table, we can see that reducing the values brings the count down from 4000 conventional
test cases to 24 combinations, and the pairwise testing method needs just 6 paired test cases.
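The combination counts above are easy to verify with a short script. This sketch only enumerates full combinations with itertools.product; it does not implement a pairwise generator (the pair selection itself is usually done with a dedicated tool), so the reduction to 6 rows is not shown here:

```python
# A sketch of how reducing each field's values shrinks the combination count.
# itertools.product enumerates the full (exhaustive) combinations.
from itertools import product

# Original value counts: checkbox=2, list box=10, radio=2, text box=100
full = len(list(product(range(2), range(10), range(2), range(100))))
assert full == 4000

# After reduction: text box -> 3 categories, list box -> {0, other}
reduced = list(product(
    ["Valid Integer", "Invalid Integer", "AlphaSpecialCharacter"],  # text box
    ["0", "Other"],          # list box
    ["Check", "Uncheck"],    # checkbox
    ["ON", "OFF"],           # radio button
))
assert len(reduced) == 24
print(f"{full} full combinations reduced to {len(reduced)}; pairwise needs even fewer")
```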
5. Equivalence Partitioning Technique
Equivalence partitioning is a software testing technique in which input data is divided into
partitions of valid and invalid values, where all values within a partition are expected to exhibit
the same behavior. If a condition holds for one value of a partition, it must hold for every other
value of that partition, and if it fails for one value, it must fail for every other value of the
partition. The principle of equivalence partitioning is that test cases should be designed to cover
each partition at least once, since each value of a partition must exhibit the same behavior as the
others.
The equivalence partitions are derived from the requirements and specifications of the software. The
advantage of this approach is that it helps reduce testing time by cutting the number of
test cases from effectively infinite to finite. It is applicable at all levels of the testing process.
Examples of Equivalence Partitioning technique
Assume that a function of a software application accepts a particular number of
digits, neither greater nor less than that number. For example, an OTP number
contains only six digits; fewer or more than six digits will not be accepted, and the application will
redirect the user to the error page.
1. OTP Number = 6 digits
Another function of the software application accepts a 10-digit mobile number.
2. Mobile number = 10 digits
In both examples, we can see that the input is divided into one valid and two invalid partitions.
On applying a valid value, such as a six-digit OTP in the first example or a 10-digit mobile number
in the second, the valid partition behaves the same way in both cases, i.e. the user is redirected to
the next page. The invalid partitions contain values of 5 digits or fewer and 7 digits or more in
the first example, and 9 digits or fewer and 11 digits or more in the second example; on applying
these invalid values, the invalid partitions also behave the same way, i.e. the user is redirected
to the error page.
We can see that there are only three test cases for each example, and that is the
principle of equivalence partitioning, which states that this method is intended to reduce the
number of test cases.
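The OTP example above can be sketched as one test per partition. The accepts_otp() validator is a hypothetical assumption for illustration:

```python
# A minimal sketch of equivalence partitioning for the 6-digit OTP example.
# Partitions: exactly 6 digits (valid), fewer than 6, more than 6.
# The accepts_otp() validator is an assumption for illustration.

def accepts_otp(otp):
    return otp.isdigit() and len(otp) == 6

# One representative test case per partition is enough under this technique.
assert accepts_otp("123456") is True    # valid partition: exactly 6 digits
assert accepts_otp("12345") is False    # invalid partition: fewer than 6 digits
assert accepts_otp("1234567") is False  # invalid partition: more than 6 digits
print("One test case per partition passed")
```

Any other six-digit value would behave the same as "123456", which is exactly why one representative per partition suffices.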
How do we perform equivalence partitioning?
We can perform equivalence partitioning in two ways: the Pressman approach and the general
practice approach. Let us see how these two approaches are used in different
conditions:
Condition 1
If the requirement is a range of values, then derive test cases for one valid and two
invalid inputs.
Here, a range of values implies that whenever we need to test range values, we use
equivalence partitioning to achieve the minimum test coverage, and after that we use error
guessing to achieve the maximum test coverage.
According to Pressman:
For example, the Amount text field accepts a range (100-400) of values:
According to the general practice method:
Whenever the requirement is a range plus criteria, divide the range into intervals and check
values in all of them.
For example:
In the below image, the Pressman technique is enough to test an age text field for one valid
and two invalid values. But if we have a condition such as insurance of ten years and above being
required, with multiple policies for various age groups in the age text field, then we need to use
the practice method.
Condition 2
If the requirement is a set of values, then derive test cases for one valid and two
invalid inputs.
Here, a set of values implies that whenever we have to test a set of values, we choose one
positive and two negative inputs, then move on to error guessing, and we also need to verify
that all the values in the set are as per the requirement.
Example 1
Based on the Pressman method:
If the Amount Transfer range is (100000-700000),
then 1 lakh → Accept
And according to the general practice method, a percentage is given for each interval of
1 lakh - 7 lakh:
1 lakh - 3 lakh → 5.60%
3 lakh - 6 lakh → 3.66%
6 lakh - 7 lakh → Free
If we have things like loans, we should go for the general practice approach and separate the
values into intervals to achieve the minimum test coverage.
Example 2
Suppose an online shopping site sells mobile phone products with the product IDs 1, 4, 7 and 9,
where 1 → phone cover, 4 → earphones, 7 → charger, and 9 → screen guard.
If we give the product ID as 4, it will be accepted; that is the one valid value. If we provide
the product ID as 5, or the text "phone cover", it will not be accepted as per the requirement;
these are the two invalid values.
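Condition 2 can be sketched as a quick check with one valid and two invalid inputs. The accepts_product_id() validator is an assumption for illustration:

```python
# A minimal sketch for Condition 2: the requirement is a set of values
# (valid product IDs 1, 4, 7, 9); one valid and two invalid inputs are tested.
# The accepts_product_id() validator is an assumption for illustration.

VALID_PRODUCT_IDS = {1, 4, 7, 9}  # 1: phone cover, 4: earphones, 7: charger, 9: screen guard

def accepts_product_id(product_id):
    return product_id in VALID_PRODUCT_IDS

assert accepts_product_id(4) is True               # one valid value
assert accepts_product_id(5) is False              # invalid: not in the set
assert accepts_product_id("phone cover") is False  # invalid: wrong type/value
print("Set-of-values cases passed")
```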
Condition 3
If the requirement is Boolean (true/false), then derive test cases for both true and false values.
Boolean values apply to controls such as radio buttons and checkboxes.
For example:

Serial no | Description    | Input | Expected                                                  | Note
1         | Select valid   | NA    | True                                                      | ---
2         | Select invalid | NA    | False                                                     | Values can be changed according to the requirement.
3         | Do not select  | NA    | Nothing is selected; an error message should be displayed | We cannot go to the next question.
4         | Select both    | NA    | We can select any radio button                            | Only one radio button can be selected at a time.
Note:
In the practice method, we follow the process below, testing the application by deriving the
input values. Let us see one program for a better understanding.
if (amount < 500 or amount > 7000)
{
    Error Message
}
else if (amount >= 500 and amount <= 3000)
{
    deduct 2%
}
else if (amount > 3000)
{
    deduct 3%
}
When the Pressman technique is used, only the first two conditions are tested, but if we use the
practice method, all three conditions are covered.
We don't need to use the practice approach for all applications; sometimes the
Pressman method is enough.
But if the application requires much precision, then we go for the practice method.
If we want to use the practice method, it should follow the below aspects:
o It should be product-specific
o It should be case-specific
o The number of divisions depends on the precision (2% and 3% deduction)
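The deduction program above can be made runnable to show the difference in coverage. This is a sketch; the function name and return values are assumptions:

```python
# A runnable sketch of the deduction rules above; the process_amount() name
# and its return values are assumptions for illustration.

def process_amount(amount):
    if amount < 500 or amount > 7000:
        return "error"
    if amount <= 3000:
        return "deduct 2%"
    return "deduct 3%"

# Pressman-style cases (one valid, two invalid for the 500-7000 range)
# exercise only the error condition and one deduction branch:
assert process_amount(400) == "error"
assert process_amount(8000) == "error"
assert process_amount(1000) == "deduct 2%"

# Practice-method cases split the range into intervals, covering all branches:
assert process_amount(500) == "deduct 2%"   # lower edge of the 2% interval
assert process_amount(3000) == "deduct 2%"  # upper edge of the 2% interval
assert process_amount(5000) == "deduct 3%"  # inside the 3% interval
assert process_amount(7000) == "deduct 3%"  # upper valid boundary
print("All interval cases passed")
```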
Advantages and disadvantages of Equivalence Partitioning technique
Following are pros and cons of equivalence partitioning technique:
Advantages | Disadvantages
It is process-oriented. | All necessary inputs may not be covered.
We can achieve the minimum test coverage. | This technique does not consider the conditions for boundary value analysis.
It helps to decrease the general test execution time and also reduces the set of test data. | The test engineer might assume that the output for all data sets is right, which leads to problems during the testing process.
6. Error Guessing Technique
Test case design techniques are the methods or approaches that every test
engineer needs to follow while writing test cases to achieve maximum test coverage. If we follow a
test case design technique, testing becomes process-oriented rather than person-oriented.
Test case design techniques ensure that all the possible values, both positive and
negative, required for testing are covered. In software testing, we have three such test
case design techniques:
o Error Guessing
o Equivalence Partitioning
o Boundary Value Analysis [BVA]
In this section, we will understand the first of these test case design techniques: error
guessing.
Error guessing is a technique in which there is no specific method for identifying the error. It is
based on the experience of the test analyst, where the tester uses the experience to guess the
problematic areas of the software. It is a type of black box testing technique which does not have
any defined structure to find the error.
In this approach, every test engineer derives the values or inputs based on their understanding
or assumption of the requirements, and we do not follow any kind of rules to perform the error
guessing technique.
The accomplishment of the error guessing technique is dependent on the ability and product
knowledge of the tester because a good test engineer knows where the bugs are most likely to
be, which helps to save lots of time.
How is the error guessing technique implemented?
The implementation of this technique depends on the experience of a tester or analyst who has
prior experience with similar applications; it requires well-experienced testers who can guess
errors quickly. This technique is used to find errors that may not be easily captured
by formal black box testing techniques, which is why it is done after all the formal
techniques.
The scope of the error guessing technique depends entirely on the tester and the type of experience
gained in previous testing involvements, because it does not follow any method or guidelines. Test
cases are prepared by the analyst by identifying the most error-prone areas and then designing
test cases for them.
The main purpose of this technique is to identify common errors at any level of testing by
exercising the following tasks:
o Enter blank space into the text fields.
o Null pointer exception.
o Enter invalid parameters.
o Divide by zero.
o Use the maximum limit of files to be uploaded.
o Check buttons without entering values.
The number of additional test cases depends upon the ability and experience of the tester.
Purpose of Error guessing
The main purposes of the error guessing technique are:
o To deal with all possible errors which cannot be identified by formal testing.
o To contain an all-inclusive set of test cases, without skipping any problematic areas
and without involving redundant test cases.
o To accomplish the characteristics left incomplete during formal testing.
Depending only on the tester's intuition and experience, all the defects cannot be found, so
there are some factors the examiner can draw on while using their experience:
o Tester's intuition
o Historical learning
o Review checklist
o Risk reports of the software
o Application UI
o General testing rules
o Previous test results
o Defects occurred in the past
o Variety of data which is used for testing
o Knowledge of AUT
Examples of the Error guessing method
Example 1
A function of the application requires a mobile number which must be 10 characters long. Now,
below are the checks that can be applied to guess errors in the mobile number field:
o What will be the result if the entered characters are other than numbers?
o What will be the result if the entered characters are fewer than 10 digits?
o What will be the result if the mobile field is left blank?
After implementing these techniques, if the output is similar to the expected result, the
function is considered bug-free; but if the output differs from the expected result, the
function is sent to the development team to fix the defects.
However, although error guessing is a key technique among all testing techniques because it
depends on the experience of a tester, it does not guarantee the highest quality benchmark, and
it does not provide full coverage of the software. This technique can yield a better result if
combined with other testing techniques.
Example 2
Suppose we have a bank account, and we have to deposit some money into it, but the amount
will only be accepted within a particular range, which is 5000-7000. Here, we provide different
input values until we reach maximum test coverage based on the error guessing technique, and
see whether each value is accepted or gives an error message:

Value | Description
6000  | Accept
5555  | Accept
4000  | Error message
8000  | Error message
blank | Error message
100$  | Error message

Note:
Condition: the amount is accepted only if amount > 5000 and amount < 7000.
So if we enter 5000 → error message (not accepted, based on the condition)
and 7000 → error message (not accepted, based on the condition)
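The value/description inputs above can be replayed against a sketch of the deposit rule. The validate_deposit() function is an assumption for illustration; note the strict inequalities, which reject 5000 and 7000 as in the note:

```python
# A sketch of error-guessing inputs against the deposit condition above
# (amount accepted only when 5000 < amount < 7000).
# The validate_deposit() function is an assumption for illustration.

def validate_deposit(raw_amount):
    try:
        amount = int(raw_amount)
    except (TypeError, ValueError):
        return "Error message"   # guessed inputs: blank field, "100$", etc.
    if 5000 < amount < 7000:
        return "Accept"
    return "Error message"

# Inputs guessed from experience, matching the value/description table:
assert validate_deposit("6000") == "Accept"
assert validate_deposit("5555") == "Accept"
assert validate_deposit("4000") == "Error message"
assert validate_deposit("8000") == "Error message"
assert validate_deposit("") == "Error message"      # blank field
assert validate_deposit("100$") == "Error message"  # non-numeric input
assert validate_deposit("5000") == "Error message"  # strict boundary, per the note
assert validate_deposit("7000") == "Error message"
print("Error-guessing cases passed")
```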
Advantages and disadvantages of the Error guessing technique
Advantages
The benefits of the error guessing technique are as follows:
o It is a good approach to find the challenging parts of the software.
o It is beneficial when used in combination with other formal testing techniques.
o It is used to enhance the formal test design techniques.
o With the help of this technique, we can disclose bugs that would otherwise be identified only after extensive testing; therefore, the test engineer can save a lot of time and effort.
Disadvantages
Following are the drawbacks of the error guessing technique:
o The error guessing technique is person-oriented rather than process-oriented because it depends on the person's thinking.
o If we use this technique, we may not achieve even the minimum test coverage.
o With its help alone, we may not cover all the input or boundary values.
o With this, we cannot guarantee the product quality.
o The error guessing technique can only be done by people who have product knowledge; it cannot be done by those who are new to the product.
7. Use Case Technique
The use case technique is a functional black box testing technique used to identify test cases
covering the system from beginning to end, as per the usage of the system. Using this technique,
the test team creates a test scenario that can exercise the entire software based on the
functionality of each feature from start to end.
It is a graphic demonstration of the business needs, which describes how the end-user will
interact with the software or the application. The use cases give us all the possible ways the
end-user uses the application, as we can see in the below image of what a use case looks like:
In the above image, we can see a sample use case where we have a requirement related
to the customer requirement specification (CRS).
For module P of the software, we have six different features.
Here, the Admin has access to all six features, the Paid user has access to three
features, and the Free user has no access to any of the features.
For the Paid user, the different conditions would be as below:
Pre-condition → Paid user must be generated
Action → Login as Paid user
Post-condition → 3 features must be present
And for the Free user, the different conditions would be as below:
Pre-condition → Free user must be generated
Action → Login as a Free user
Post-condition → no features
Who writes the use case?
The client provides the customer requirement specification for the application; the
development team then writes the use case according to the CRS, and the use case is sent to the
customer for review.
If the client approves it, the approved use case is sent to the development team for the further
design and coding process. The approved use case is also sent to the testing team so that they
can start writing the test plan and, later on, the test cases for the different features
of the software.
In this scenario, there is an actor who represents the user and uses the functions of the
software system one by one. This describes the step-by-step functionality of the software
application, which can be understood with an example. Assume that there is a software
application for online money transfer. The various steps for transferring money are as follows:
o The user logs in for authentication of the actual user.
o The system checks the ID and password against the database to ensure whether it is a valid user or not.
o If the verification succeeds, the server directs the user to the account page; otherwise, it returns to the login page.
o On the account page there are several options; since the examiner is checking the money transfer option, the user goes into the money transfer option.
o After successful completion of this step, the user enters the account number to which he wants to transfer money. The user also needs to enter other details like bank name, amount, IFSC code, home branch, etc.
In the last step, if there is a security feature that includes verification of the ATM card number
and PIN, the user enters the ATM card number, PIN, and other required details.
If the system successfully follows all these steps, the function works as intended. By describing
the steps of use like this, it is easy to design test cases for the software system.
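The money transfer steps above can be walked end to end in a scripted use case test. The MoneyTransferApp class and its return values are assumptions for illustration:

```python
# A sketch of a use-case-driven test for the money transfer flow described
# above. The MoneyTransferApp class and its return values are assumptions.

class MoneyTransferApp:
    def __init__(self, users):
        self.users = users      # {user_id: password}
        self.logged_in = None

    def login(self, user_id, password):
        # Steps 1-3: verify credentials against the stored records
        if self.users.get(user_id) == password:
            self.logged_in = user_id
            return "account page"
        return "login page"

    def transfer(self, account_number, amount):
        # Steps 4-5: only a logged-in user may transfer money
        if self.logged_in is None:
            return "login required"
        if amount <= 0:
            return "invalid amount"
        return f"transferred {amount} to {account_number}"

app = MoneyTransferApp({"alice": "pw123"})
# Walk the use case end to end, as the actor (user) would:
assert app.transfer("9876", 100) == "login required"  # cannot skip login
assert app.login("alice", "wrong") == "login page"    # invalid credentials
assert app.login("alice", "pw123") == "account page"  # valid login
assert app.transfer("9876", 100) == "transferred 100 to 9876"
print("Use case steps passed")
```

Each assertion corresponds to one step of the use case, so the test reads like the scenario itself.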
Difference between use case and prototype

Use case | Prototype
With the help of the use case, we get to know how the product should work; it is a graphical representation of the software and its multiple features and how they should work. | In a prototype, we will not see how the end-user interacts with the application, because it is just a dummy (a static image of the software) of the application.
How developers develop the use cases
The developers use standard symbols to write a use case so that everyone can understand it
easily. They use the Unified Modeling Language (UML) to create the use cases.
There are various tools available that help to write a use case, such as Rational Rose. This tool
has predefined UML symbols which we drag and drop to write a use case, and the
developer can also use these symbols to develop the use case.
Advantages of the Use Case Technique
The use case technique gives us some features which help us to create an application.
Following are the benefits of using the use case technique while developing the product:
o The use case is used to capture the functional needs of the system.
o Use cases are classifications of steps which describe the interactions between the user and the system.
o It starts from an elementary view in which the system is created first and primarily for its users.
o It supports a complete analysis, which helps us to manage the complexity by focusing on one detailed feature at a time.
8. Cause and Effect Graph in Black box Testing
GreyBox Testing
Greybox testing is a software testing method that tests a software application with partial
knowledge of its internal working structure. It is a combination of black box and white box
testing, because it involves access to internal coding to design test cases (as in white box
testing) while testing is performed at the functionality level (as in black box testing).
Greybox testing commonly identifies context-specific errors that belong to web systems. For
example, while testing, if the tester encounters a defect, he makes changes in the code to resolve
it and then tests it again in real time. It concentrates on all the layers of any complex
software system to increase testing coverage, giving the ability to test both the presentation layer
and the internal coding structure. It is primarily used in integration testing and penetration
testing.
Why Greybox testing?
Reasons for Greybox testing are as follows:
o It provides the combined benefits of both black box and white box testing.
o It includes the input of both developers and testers at the same time to improve the overall quality of the product.
o It reduces the time consumed by the long process of functional and non-functional testing.
o It gives sufficient time to the developer to fix the product defects.
o It includes the user's point of view rather than the designer's or tester's point of view.
o It involves a deep examination of requirements and determination of specifications from the user's point of view.
GreyBox Testing Strategy
Grey box testing does not require the tester to design test cases from source
code. To perform this testing, test cases can be designed based on knowledge of
architectures, algorithms, internal states, or other high-level descriptions of the program's
behavior. It uses all the straightforward techniques of black box testing for function testing.
The test case generation is based on requirements, and all the conditions are preset before the
program is tested by the assertion method.
Generic steps to perform Grey box testing are:
1. First, select and identify inputs from the black box and white box testing inputs.
2. Second, identify the expected outputs for these selected inputs.
3. Third, identify all the major paths to traverse during the testing period.
4. Fourth, identify the sub-functions which are part of the main functions, in order to perform deep-level testing.
5. Fifth, identify inputs for the sub-functions.
6. Sixth, identify the expected outputs for the sub-functions.
7. Seventh, execute a test case for the sub-functions.
8. Eighth, verify the correctness of the result.
The test cases designed for Greybox testing include security-related, browser-related, GUI-related,
operating-system-related, and database-related testing.
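The generic steps above can be illustrated with a test that checks both external behavior and known internal structure, which is the essence of the grey box approach. The Cart class and its private _items dict are assumptions for illustration:

```python
# A sketch of a grey-box style test: the tester uses partial knowledge of the
# internal structure (the _items dict) while still checking external behavior.
# The Cart class is an assumption for illustration.

class Cart:
    def __init__(self):
        self._items = {}  # internal state: {name: price}

    def add(self, name, price):
        self._items[name] = price

    def total(self):
        return sum(self._items.values())

cart = Cart()
cart.add("book", 250)
cart.add("pen", 50)

# Black-box check: the externally visible output is correct.
assert cart.total() == 300
# White-box-style check: the internal state matches the known architecture.
assert cart._items == {"book": 250, "pen": 50}
print("Grey-box checks passed")
```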
Techniques of Grey box Testing
Matrix Testing
This testing technique comes under Grey Box testing. It defines all the variables used in a
particular program. In any program, variables are the elements through which values travel
inside the program. The variables should be as per requirements; otherwise, they reduce the
readability of the program and the speed of the software. The matrix technique is a method to
remove unused and uninitialized variables by identifying the used variables of the program.
Regression Testing
Regression testing is used to verify that a modification in any part of the software has not caused
any adverse or unintended side effect in any other part of the software. During confirmation
testing, a defect gets fixed and that part of the software starts working as intended, but there is
a possibility that the fix may have introduced a different defect somewhere else in the
software. So, regression testing takes care of these types of defects through testing strategies
like retesting risky use cases, retesting within a firewall, retesting everything, etc.
Orthogonal Array Testing or OAT
The purpose of this testing is to cover maximum code with minimum test cases. Test cases are
designed in a way that can cover maximum code as well as GUI functions with a smaller number
of test cases.
Pattern Testing
Pattern testing is applicable to software that is developed by following the same
pattern as previous software, because in such software the same types of
defects are likely to occur. Pattern testing determines the reasons for failures so that they can be
fixed in the next software.
Usually, automated software testing tools are used in the Greybox methodology to conduct the test
process. Stubs and module drivers are provided to the tester to relieve them from generating code manually.
Functional Testing:
It is a type of software testing used to verify the functionality of a software application:
whether each function works according to the requirement specification. In functional testing,
each function is tested by giving it input values, determining the output, and verifying the actual
output against the expected value. Functional testing is performed as black-box testing, which is
done to confirm that the functionality of an application or system behaves as expected.
Functional testing is also called black-box testing because it focuses on the application's
specification rather than the actual code; the tester has to test the program's behavior, not its
internals.
Goal of functional testing
The purpose of functional testing is to check the primary entry functions, the necessarily usable functions, and the flow of the GUI screens. Functional testing also verifies that error messages are displayed so that the user can easily navigate throughout the application.
What is the process of functional testing?
o The tester verifies the requirement specification of the software application.
o After analyzing the requirement specification, the tester makes a test plan.
o After planning the tests, the tester designs the test cases.
o After designing the test cases, the tester documents the traceability matrix.
o The tester executes the test case designs.
o Coverage is analyzed to examine which areas of the application have been tested.
o Defect management is done to manage defect resolution.
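The core of the steps above, deriving a test case from the requirement, executing it, and comparing the actual output with the expected output, can be sketched as follows. The login function, its credentials, and the page names are hypothetical assumptions for illustration only.

```python
def login(username, password):
    """Hypothetical system under test: returns the page shown after login."""
    if username == "admin" and password == "secret":
        return "home page"
    return "error: invalid credentials"

# Test case designed from the (assumed) requirement specification:
# valid credentials must lead to the home page.
test_case = {"input": ("admin", "secret"), "expected": "home page"}

# Execute the test case and compare actual vs expected output.
actual = login(*test_case["input"])
result = "PASS" if actual == test_case["expected"] else "FAIL"
```

Real functional test suites follow the same execute-then-compare shape, usually via a framework such as JUnit or pytest rather than a hand-rolled comparison.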
What to test in functional testing? Explain
The main objective of functional testing is to check the functionality of the software system. It concentrates on:
o Basic usability: functional testing involves basic usability testing of the system. It checks whether a user can navigate freely through the screens without any difficulty.
o Accessibility: functional testing checks the accessibility of the functions.
o Mainline functions: it focuses on testing the main features.
o Error conditions: functional testing checks the error conditions and whether suitable error messages are displayed.
Explain the complete process to perform functional testing.
There are the following steps to perform functional testing:
o Understand the software requirements.
o Identify the test input data.
o Compute the expected outcome for the selected input values.
o Execute the test cases.
o Compare the actual and the expected results.
Explain the types of functional testing.
The main objective of functional testing is to test the functionality of each component. Functional testing is divided into multiple parts; here are the types of functional testing.
Unit Testing: Unit testing is a type of software testing in which an individual unit or component of the software is tested. Unit testing examines the different parts of the application; because it ensures that each module works correctly, it also contributes to functional testing. Unit testing is done by the developer during the development phase of the application.
Smoke Testing: Smoke testing covers only the basic (feature-level) functionality of the system and is also known as "build verification testing." Smoke testing aims to ensure that the most important functions work. For example, smoke testing verifies that the application launches successfully and checks that the GUI is responsive.
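A build-verification run can be sketched like this; the launch function is a hypothetical stand-in for whatever start-up check a real project would automate, and the status fields are assumptions.

```python
def launch_app():
    """Hypothetical stub simulating a successful application start-up."""
    return {"status": "running", "gui": "responsive"}

def smoke_test():
    """Exercise only the most critical paths; any failure rejects the build."""
    app = launch_app()
    assert app["status"] == "running", "application failed to launch"
    assert app["gui"] == "responsive", "GUI is not responsive"
    return "build accepted"
```

The point of the sketch is the gate, not the checks themselves: if `smoke_test()` raises, deeper testing of that build does not proceed.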
Sanity Testing: Sanity testing checks that the entire high-level business scenario works correctly; it is done to check new functionality and bug fixes. Sanity testing is slightly more advanced than smoke testing. For example: login works fine; all the buttons work correctly; after clicking on a button, navigation to the correct page occurs.
Regression Testing: This type of testing makes sure that code changes have no side effects on the existing functionality of the system. When a bug is fixed, regression testing checks whether all the other parts of the system still work, focusing on whether the fix has any impact elsewhere.
Integration Testing: In integration testing, individual units are combined and tested as a group. The purpose of this testing is to expose faults in the interaction between the integrated units. Developers and testers perform integration testing.
White box testing: White box testing is also known as clear box testing, code-based testing, structural testing, glass box testing, and transparent box testing. It is a software testing method in which the internal structure, design, and implementation being tested are known to the tester. White box testing requires analysis of the internal structure of the component or system.
Black box testing: It is also known as behavioral testing. In this testing, the internal structure, design, and implementation are not known to the tester; it is a form of functional testing. It is called black-box testing because the tester cannot see the internal code. For example, a tester without knowledge of the internal structure of a website tests its web pages using a web browser, providing input and verifying the output against the expected outcome.
User acceptance testing: This is a type of testing performed by the client to certify that the system meets the requirements. It is the final phase of testing before releasing the software to the market or production environment. UAT is a kind of black-box testing in which two or more end users are involved.
Retesting: Retesting is performed to check that test cases that were unsuccessful in the previous execution pass after the defects are fixed. Usually, a tester raises a bug when they find it while testing the product or its component. The bug is allocated to a developer, who fixes it. After the fix, the bug is assigned back to the tester for verification. This is known as retesting.
Database Testing: Database testing checks the schema, tables, triggers, etc. of the database under test. It may involve creating complex queries to load/stress test the database and check its responsiveness, and it verifies data integrity and consistency.
Example: let us consider a banking application in which a user makes a transaction. From a database testing point of view, the following things are important:
o The application stores the transaction information in the application database and displays it correctly to the user.
o No information is lost in the process.
o The application does not keep information about partially performed or aborted operations.
o Unauthorized individuals are not allowed to access the user's information.
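The first three checks above can be sketched against an in-memory SQLite database. The table layout, account IDs, and amounts are hypothetical; the sketch verifies that a completed transfer is stored correctly and that an aborted transfer leaves no partial data behind.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('P', 1000), ('Q', 500)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Debit src, credit dst; roll back everything if funds are insufficient."""
    cur = conn.cursor()
    try:
        cur.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                    (amount, src))
        (bal,) = cur.execute("SELECT balance FROM accounts WHERE id = ?",
                             (src,)).fetchone()
        if bal < 0:
            raise ValueError("insufficient funds")
        cur.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                    (amount, dst))
        conn.commit()
    except ValueError:
        conn.rollback()  # aborted operation must leave no partial rows
        raise

transfer(conn, "P", "Q", 200)                     # valid transfer: committed
balances = dict(conn.execute("SELECT id, balance FROM accounts"))

try:
    transfer(conn, "P", "Q", 5000)                # more than the balance: aborted
except ValueError:
    pass
balances_after = dict(conn.execute("SELECT id, balance FROM accounts"))
```

After the failed transfer, `balances_after` must equal `balances`, which is exactly the "no partially performed operation" check from the list above.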
Ad-hoc testing: Ad-hoc testing is an informal testing type whose aim is to break the system. It is an unplanned activity that does not follow any test design technique to create test cases. Ad-hoc testing is done randomly on any part of the application and does not follow any structured way of testing.
Recovery Testing: Recovery testing is used to determine how well an application can recover from crashes, hardware failures, and other problems. Its purpose is to verify the system's ability to recover from various points of failure.
Static Testing: Static testing is a software testing technique by which we can check for defects in software without actually executing it. Static testing is done to avoid errors in the early stages of development, when failures are easier to find; it can detect mistakes that may not be found by dynamic testing.
Why do we use static testing?
Static testing helps to find errors in the early stages, which reduces the development timescales. It reduces testing cost and time, and it also improves development productivity.
Tools
Sahi
Features/ Characteristics
o It is an open-source automation testing tool, released under the Apache License, used for testing web applications.
o Sahi is written in Java and JavaScript and is suited to most testing techniques.
o It runs as a proxy server and is browser-independent.
SoapUI
o It is an open-source functional testing tool used for web application testing.
o It is simple and easy to use.
o It supports multiple environments, i.e., the target environment can be set up at any instance.
Watir
o Watir, an abbreviation of "web application testing in Ruby," is an open-source tool for automating web browsers.
o It uses the Ruby scripting language, which is concise and easy to use.
o Watir supports multiple browsers on various platforms.
Selenium
o An open-source tool used for functional testing of both web applications and desktop applications.
o It automates browsers and web applications for testing purposes.
o It gives the flexibility to customize automated test cases.
o It provides the advantage of writing test scripts, as per the requirements, using WebDriver.
Canoo WebTest
o An open-source tool for performing functional testing of web applications.
o Platform independent.
o Easy and fast.
o Easy to extend to meet growing and incoming requirements.
Cucumber
o Cucumber is an open-source testing tool written in the Ruby language. It works best for test-driven development and is used to test code written in many other languages, such as Java, C#, and Python.
Component Testing: Component testing is a type of software testing in which each component is tested separately, without integrating it with the other parts. It is also a type of black-box testing. Component testing is also referred to as unit testing, program testing, or module testing.
Grey Box Testing: Grey box testing is a combination of white-box and black-box testing. It is a testing technique performed with limited information about the internal functionality of the system.
What are the functional testing tools?
Functional testing can also be executed by various tools apart from manual testing. These tools simplify the process of testing and help to get accurate and useful results. Functional testing is one of the significant, top-priority techniques, decided and specified before the development process. The tools used for functional testing are listed above.
What are the advantages of Functional Testing?
Advantages of functional testing are:
o It produces a defect-free product.
o It ensures that the customer is satisfied.
o It ensures that all requirements are met.
o It ensures the proper working of all the functionality of an application/software/product.
o It ensures that the software/product works as expected.
o It ensures security and safety.
o It improves the quality of the product.
Example: Consider banking software. When money is transferred from bank A to bank B and bank B does not receive the correct amount, a fee is wrongly applied, the money is not converted into the correct currency, the transfer is incorrect, or bank A does not receive statement advice from bank B that the payment has been received, these issues are critical and can be avoided by proper functional testing.
What are the disadvantages of functional testing?
Disadvantages of functional testing are:
o Functional testing can miss a critical or logical error in the system.
o This testing is not a guarantee that the software is ready to go live.
o The possibility of conducting redundant testing is high in functional testing.
Non-Functional Testing
Non-functional testing is a type of software testing that tests non-functional parameters such as reliability, load, performance, and accountability of the software. The primary purpose of non-functional testing is to test qualities such as the speed of the software system against these non-functional parameters. The non-functional parameters are never tested before the functional testing.
Non-functional testing is as important as functional testing because it plays a crucial role in customer satisfaction.
For example, a non-functional test would check how many people can work simultaneously on the software.
Why Non-Functional Testing
Functional and non-functional testing are both mandatory for newly developed software. Functional testing checks the correctness of the internal functions, while non-functional testing checks the ability to work in an external environment.
Non-functional testing sets the way for software installation, setup, and execution. The measurements and metrics used for internal research and development are collected and produced under non-functional testing.
Non-functional testing gives detailed knowledge of product behavior and the technologies used. It helps in reducing the production risk and the associated costs of the software.
Parameters to be tested under Non-Functional Testing
Performance Testing
Performance testing eliminates the reasons behind the slow and limited performance of the software; the response speed of the software should be as fast as possible. For performance testing, a well-structured and clear specification of the expected speed must be defined; otherwise, the outcome of the test (success or failure) will not be obvious.
Load Testing
Load testing involves testing the system's loading capacity, i.e., how many people can work on the system simultaneously.
Security Testing
Security testing is used to detect the security flaws of the software application. The testing is done by investigating the system architecture with the mindset of an attacker. Test cases are conducted by finding areas of code where an attack is most likely to happen.
Portability Testing
Portability testing verifies whether the system can run on different operating systems without any bugs occurring. It also tests the working of the software on the same operating system but different hardware.
Accountability Testing
An accountability test is done to check whether the system operates correctly: a function should give the result for which it has been created. If the system gives the expected output, it passes the test; otherwise, it fails.
Reliability Testing
A reliability test assesses whether the software system runs without failure under specified conditions. The system must be run for a specific time and number of processes; if the system fails under these specified conditions, the reliability test fails.
Efficiency Testing
An efficiency test examines the number of resources needed to develop the software system and how many of them were actually used. It also includes testing these three points:
o The customer's requirements must be satisfied by the software system.
o The software system should achieve the customer's specifications.
o Sufficient effort should be made to develop the software system.
Advantages of Non-functional testing
o It provides a higher level of security. Security is a fundamental feature that protects the system from cyber-attacks.
o It ensures the loading capability of the system, so that many users can use it simultaneously.
o It improves the performance of the system.
o Test cases are never changed, so they do not need to be written more than once.
o The overall time consumption is less compared to other testing processes.
Disadvantages of Non-Functional Testing
o Every time the software is updated, the non-functional tests have to be performed again.
o Due to software updates, people have to pay to re-examine the software, which makes it very expensive.
Functional Vs Non-Functional Testing:
o Functional testing is performed using the functional specification provided by the client and verifies the system against the functional requirements. Non-functional testing checks the performance, reliability, scalability, and other non-functional aspects of the software.
o Functional testing is executed first. Non-functional testing should be performed after functional testing.
o Manual testing or automation tools can be used for functional testing. For non-functional testing, using tools is more effective.
o Business requirements are the inputs to functional testing. Performance parameters like speed and scalability are the inputs to non-functional testing.
o Functional testing describes what the product does. Non-functional testing describes how well the product works.
o Functional testing is easy to do manually. Non-functional testing is tough to do manually.
o Examples of functional testing: unit testing, smoke testing, sanity testing, integration testing, white box testing, black box testing, user acceptance testing, and regression testing.
o Examples of non-functional testing: performance testing, load testing, volume testing, stress testing, security testing, installation testing, penetration testing, compatibility testing, and migration testing.
Unit Testing
Unit testing involves testing each unit or individual component of the software application. It is the first level of functional testing. The aim of unit testing is to validate that each unit component performs as expected.
A unit is a single testable part of a software system, tested during the development phase of the application software. The purpose of unit testing is to check the correctness of isolated code. A unit component is an individual function or piece of code of the application. A white-box testing approach is used for unit testing, which is usually done by the developers.
Whenever the application is ready and given to the test engineer, he/she starts checking every component of the module, or every module of the application, independently, one by one; this process is known as unit testing or component testing.
Why Unit Testing?
In the testing level hierarchy, unit testing is the first level of testing, done before integration testing and the remaining levels. It uses modules for the testing process, which reduces the dependency of waiting for the completion of other modules. Unit testing frameworks, stubs, drivers, and mock objects are used for assistance in unit testing.
Generally, software goes through four levels of testing: unit testing, integration testing, system testing, and acceptance testing. Sometimes, due to time constraints, software testers do minimal unit testing, but skipping unit testing may lead to more defects during integration testing, system testing, and acceptance testing, or even during beta testing, which takes place after the completion of the software application.
Some crucial reasons are listed below:
o Unit testing helps testers and developers understand the code base, enabling them to change defect-causing code quickly.
o Unit testing helps with documentation.
o Unit testing fixes defects very early in the development phase, so fewer defects are likely to occur in the upcoming testing levels.
o It helps with code reusability by migrating code and the corresponding test cases.
Example of Unit testing
For the amount transfer, the requirements are as follows:
1. Amount transfer
1.1 From account number (FAN) → text box
1.1.1 FAN → accepts only 4 digits
1.2 To account number (TAN) → text box
1.2.1 TAN → accepts only 4 digits
1.3 Amount → text box
1.3.1 Amount → accepts a maximum of 4 digits
1.4 Transfer → button
1.4.1 Transfer → enabled
1.5 Cancel → button
1.5.1 Cancel → enabled
Below are the application access details, as given by the customer:
o URL → login page
o Username/password/OK → home page
o To reach the amount transfer module, follow: Loans → Sales → Amount transfer
While performing unit testing, we should follow some rules:
o To start unit testing, we should have at least one module.
o Test for positive values.
o Test for negative values.
o No over-testing.
o No assumptions required.
When we feel that the maximum test coverage has been achieved, we stop the testing.
Now, we will start performing unit testing on the different components:
o From account number (FAN)
o To account number (TAN)
o Amount
o Transfer
o Cancel
For the FAN component, the values and expected results are:
o 1234 → accepted
o 4311 → error message: account valid or not
o blank → error message: enter some values
o 5 digits / 3 digits → error message: accepts only 4 digits
o alphanumeric → error message: accepts only digits
o blocked account number → error message
o copied and pasted value → error message: type the value
o same value for FAN and TAN → error message
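The table above maps almost directly onto unit tests for a field validator. The sketch below is a hypothetical implementation: the account lists, the check order, and the exact error texts are assumptions, not the application's real rules.

```python
# Hypothetical reference data for illustration.
VALID_ACCOUNTS = {"1234", "5678"}
BLOCKED_ACCOUNTS = {"9999"}

def validate_fan(value):
    """Validate a From-account-number field per the table above (assumed rules)."""
    if value == "":
        return "enter some values"
    if not value.isdigit():          # rejects alphanumeric input
        return "accept only digit"
    if len(value) != 4:              # rejects 3-digit and 5-digit input
        return "accept only 4 digit"
    if value in BLOCKED_ACCOUNTS:
        return "account is blocked"
    if value not in VALID_ACCOUNTS:
        return "account valid or not"
    return "accept"
```

Each row of the table becomes one assertion against `validate_fan`, which is exactly how a developer would encode this component's unit tests in a framework such as JUnit or pytest.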
For the TAN component:
o Provide values just like we did for the From account number (FAN) component.
For the Amount component:
o Provide values just like we did for the FAN and TAN components.
For the Transfer component:
o Enter a valid FAN value.
o Enter a valid TAN value.
o Enter a correct Amount value.
o Click on the Transfer button → amount transferred successfully (confirmation message).
For the Cancel component:
o Enter the values of FAN, TAN, and Amount.
o Click on the Cancel button → all data should be cleared.
Unit Testing Tools
We have various unit testing tools available in the market, such as:
o NUnit
o JUnit
o PHPUnit
o Parasoft Jtest
o EMMA
For more information about unit testing tools, refer to the link below:
https://www.javatpoint.com/unit-testing-tools
Unit Testing Techniques:
Unit testing uses all white-box testing techniques, as it works with the code of the software application:
o Data flow testing
o Control flow testing
o Branch coverage testing
o Statement coverage testing
o Decision coverage testing
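To illustrate one of these techniques, branch coverage, here is a small hypothetical function with two decision points; full branch coverage requires test inputs that drive each condition both ways.

```python
def grade(score):
    """Hypothetical unit under test with two decision points."""
    if score < 0 or score > 100:     # branch 1: invalid input
        return "invalid"
    if score >= 50:                  # branch 2: pass/fail threshold
        return "pass"
    return "fail"

# One input per branch outcome achieves 100% branch coverage here:
# both sides of the validity check and both sides of the threshold.
branch_tests = [(-1, "invalid"), (101, "invalid"), (50, "pass"), (49, "fail")]
results = [grade(score) == expected for score, expected in branch_tests]
```

Statement coverage would be satisfied by fewer inputs; branch coverage is stricter because every true/false outcome of every decision must be exercised.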
How to achieve the best result via Unit testing?
Unit testing gives the best results, without confusion or increased complexity, when the steps listed below are followed:
o Test cases must be independent, so that if there is any change or enhancement in the requirements, the other test cases are not affected.
o Naming conventions for unit test cases must be clear and consistent.
o During unit testing, identified bugs must be fixed before jumping to the next phase of the SDLC.
o Only one piece of code should be tested at a time.
o Write test cases along with the code; otherwise, the number of unverified execution paths will increase.
o If the code of any module changes, ensure that a corresponding unit test exists for that module.
Advantages and disadvantages of unit testing
The pros and cons of unit testing are as follows:
Advantages
o Unit testing uses a modular approach, so any part can be tested without waiting for the completion of another part's testing.
o The developing team focuses on the provided functionality of the unit and how the functionality should look in the unit test suites, which helps them understand the unit's API.
o Unit testing allows the developer to refactor code at a later date and ensure that the module still works without any defect.
Disadvantages
o It cannot identify integration or broader-level errors, as it works on units of code.
o In unit testing, evaluation of all execution paths is not possible, so unit testing cannot catch every error in a program.
o It is best suited for use in conjunction with other testing activities.
Integration testing
Integration testing is the second level of the software testing process and comes after unit testing. In this testing, units or individual components of the software are tested in a group. The focus of the integration testing level is to expose defects that arise at the time of interaction between integrated components or units.
Unit testing uses modules for testing purposes, and these modules are combined and tested in integration testing. Software is developed with a number of modules that are coded by different programmers. The goal of integration testing is to check the correctness of the communication among all the modules.
Once all the components or modules work independently, checking the data flow between the dependent modules is known as integration testing.
Let us see a sample example of a banking application's amount transfer:
o First, we log in as user P, transfer an amount of Rs 200, and verify that the confirmation message "amount transfer successfully" is displayed on the screen. Then we log out as P, log in as user Q, go to the account balance page, and check the balance in that account: it should equal the present balance plus the received balance. If so, the integration test is successful.
o We also check that the balance in user P's account has been reduced by Rs 200.
o We click on the transactions page; for both P and Q, a message should be displayed with the date and time of the amount transfer.
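The scenario above can be sketched with two hypothetical modules, a session (login) module and an accounts module, integrated together; the class names, balances, and confirmation text are assumptions for illustration, not a real banking API.

```python
class Accounts:
    """Hypothetical accounts module: holds balances and performs transfers."""
    def __init__(self):
        self.balances = {"P": 1000, "Q": 500}

    def transfer(self, src, dst, amount):
        if self.balances[src] < amount:
            return "transfer failed"
        self.balances[src] -= amount
        self.balances[dst] += amount
        return "amount transfer successfully"

class Session:
    """Hypothetical login module: tracks who is logged in."""
    def __init__(self, accounts):
        self.accounts = accounts
        self.user = None

    def login(self, user):
        self.user = user

    def balance(self):
        return self.accounts.balances[self.user]

# Integration scenario: P transfers Rs 200, then Q's balance is checked.
bank = Accounts()
session = Session(bank)
session.login("P")
message = bank.transfer("P", "Q", 200)
p_balance = session.balance()     # P's balance reduced by 200
session.login("Q")                # log out P, log in Q
q_balance = session.balance()     # Q's balance = present + received
```

The test passes only if both modules cooperate correctly, which is the point of integration testing: each class may pass its unit tests and the combined flow could still be wrong.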
Guidelines for Integration Testing
o We go for integration testing only after functional testing is completed on each module of the application.
o We always do integration testing by picking modules one by one, so that a proper sequence is followed and no integration scenarios are missed.
o First, determine the test case strategy through which executable test cases can be prepared according to the test data.
o Examine the structure and architecture of the application, identify the crucial modules to test first, and identify all possible scenarios.
o Design test cases to verify each interface in detail.
o Choose input data for test case execution; input data plays a significant role in testing.
o If we find any bugs, communicate the bug reports to the developers, fix the defects, and retest.
o Perform positive and negative integration testing.
Here, positive testing means: if the total balance is Rs 15,000 and we transfer Rs 1,500, we check whether the amount transfer works fine. If it does, the test passes.
Negative testing means: if the total balance is Rs 15,000 and we try to transfer Rs 20,000, we check whether the transfer occurs. If it does not occur, the test passes; if it does happen, there is a bug in the code, and we send it to the development team for fixing.
Note: Every application requires functional testing, whereas integration testing is done only if the modules depend on each other. Each integration scenario should compulsorily have a source → data → destination; a scenario can be called an integration scenario only if the data gets saved in the destination.
For example, in the Gmail application, the source could be Compose, the data could be the email, and the destination could be the Inbox.
Example of integration testing
Let us assume that we have a Gmail application on which we perform integration testing.
First, we do functional testing on the login page, which includes components such as the username, password, submit, and cancel buttons. Only then can we perform integration testing.
The different integration scenarios are as follows:
Scenario 1:
o First, we log in as user P, click on Compose mail, and perform functional testing on its specific components.
o Now we click on Send and also check Save Drafts.
o After that, we send a mail to Q and verify in P's Sent Items folder that the sent mail is there.
o Now we log out as P, log in as Q, move to the Inbox, and verify that the mail has arrived.
Scenario 2: We also perform integration testing on the Spam folder. If a particular contact has been marked as spam, then any mail sent by that user should go to the spam folder and not to the inbox.
Note: We perform functional testing for all features, such as Sent Items, Inbox, and so on.
Similarly, for a user management screen, we would perform functional testing on all the text fields and every feature, and then perform integration testing on the related functions: first test add user, then list of users, delete user, edit user, and then search user.
Note:
o For some features we may perform only functional testing, and for others we perform both functional and integration testing, based on the feature's requirements.
o Prioritizing is essential, and we should do it at all phases: open the application and select which feature needs to be tested first; go to that feature and choose which component must be tested first; then go to those components and determine what values should be entered first. Don't apply the same rule everywhere, because testing logic varies from feature to feature.
o While performing testing, we should test one feature entirely and only then proceed to another.
o Between two features, we may perform only positive integration testing or both positive and negative integration testing; this also depends on the feature's needs.
Reason Behind Integration Testing
Although all modules of a software application are already tested in unit testing, errors can still exist, for the following reasons:
1. Each module is designed by an individual software developer whose programming logic may differ from that of the developers of other modules, so integration testing becomes essential to determine that the software modules work together.
2. To check the interaction of software modules with the database, whether it is erroneous or not.
3. Requirements can be changed or enhanced at the time of module development. These new requirements may not have been tested at the unit testing level, hence integration testing becomes mandatory.
4. Incompatibility between software modules could create errors.
5. To test the hardware's compatibility with the software.
6. If exception handling between modules is inadequate, it can create bugs.
Integration Testing Techniques
Any testing technique (black box, white box, or grey box) can be used for integration testing; some are listed below:
Black Box Testing
o State transition technique
o Decision table technique
o Boundary value analysis
o All-pairs testing
o Cause-and-effect graph
o Equivalence partitioning
o Error guessing
White Box Testing
o Data flow testing
o Control flow testing
o Branch coverage testing
o Decision coverage testing
Types of Integration Testing
Integration testing can be classified into two parts:
o Incremental integration testing
o Non-incremental integration testing
Incremental Approach
In the incremental approach, modules are added one by one in ascending order or according to need. The selected modules must be logically related. Generally, two or more modules are added and tested to determine the correctness of their functions, and the process continues until all the modules have been tested successfully.
In other words, in this type of testing there is a strong relationship between the dependent modules: we take two or more modules and verify that the data flow between them works fine. If it does, we add more modules and test again.
For example, suppose we have a Flipkart application on which we perform incremental integration testing; the flow of the application would look like this:
Flipkart → Login → Home → Search → Add to cart → Payment → Logout
Incremental integration testing is carried out by two further methods:
o Top-down approach
o Bottom-up approach
Top-Down Approach
The top-down strategy deals with the process in which higher-level modules are tested with lower-level modules until all the modules have been tested successfully. Major design flaws can be detected and fixed early because the critical modules are tested first. In this method, we add the modules incrementally, one by one, and check the data flow in the same order.
In the top-down approach, we ensure that the module we are adding is the child of the previous one, like Child C being a child of Child B, and so on.
Advantages:
o An early prototype is possible.
o Critical modules are tested first, so there are fewer chances of defects remaining in them.
Disadvantages:
o Identification of defects is difficult.
o Due to the high number of stubs, it gets quite complicated.
o Lower-level modules are tested inadequately.
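A top-down step can be sketched like this: the top-level module is real, while the lower-level module it calls downward into is replaced by a stub until the real one is ready. The checkout/payment names and return values are hypothetical.

```python
def payment_stub(amount):
    """Stub standing in for the unfinished lower-level payment module."""
    return {"status": "ok", "amount": amount}

def checkout(cart_total, payment=payment_stub):
    """Real top-level module under test; calls downward into payment."""
    receipt = payment(cart_total)
    return "order placed" if receipt["status"] == "ok" else "payment failed"

result = checkout(250)
```

When the real payment module is finished, it is passed in place of the stub and the same test is re-run, which is the incremental part of the approach.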
Bottom-Up Method
The bottom to up testing strategy deals with the process in which lower level modules are tested
with higher level modules until the successful completion of testing of all the modules. Top level
critical modules are tested at last, so it may cause a defect. Or we can say that we will be adding
the modules from bottom to the top and check the data flow in the same order.
In the bottom-up method, we will ensure that the modules we are adding are the parent of the
previous one as we can see in the below image:
Advantages
o Identification of a defect is easy.
o We do not need to wait for the development of all the modules, which saves time.
Disadvantages
o Critical modules are tested last, so defects in them are found late.
o There is no possibility of an early prototype.
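A driver plays the opposite role in bottom-up integration: it impersonates the missing higher-level module and feeds data into the finished lower-level one. A minimal sketch, with hypothetical module names:

```python
# Bottom-up integration sketch: the low-level payment module is real,
# while the higher-level checkout module is not ready, so a driver
# calls the payment module on its behalf. Names are hypothetical.

class PaymentModule:
    """Lower-level module under test."""
    def charge(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return {"status": "charged", "amount": amount}

def checkout_driver(payment, amount):
    """Driver: sets up the call, invokes the module, checks the result."""
    result = payment.charge(amount)
    return result["status"] == "charged" and result["amount"] == amount

# The driver exercises the lower-level module and evaluates the outcome.
payment = PaymentModule()
assert checkout_driver(payment, 150) is True
```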
In addition to these, we have one more approach, which is known as hybrid testing.
Hybrid Testing Method
In this approach, both the Top-Down and Bottom-Up approaches are combined. Top-level modules are tested with lower-level modules, and lower-level modules are tested with higher-level modules, simultaneously. There is less possibility of defects slipping through because each module interface is tested.
Advantages
o The hybrid method provides the features of both the Bottom-Up and Top-Down methods.
o It saves the most time.
o It provides complete testing of all modules.
Disadvantages
o This method needs a higher level of concentration, as the process is carried out in both directions simultaneously.
o It is a complicated method.
Non-incremental integration testing
We go for this method when the data flow is very complex and it is difficult to find which module is the parent and which is the child. In such a case, we create data in one module and check it against all the other existing modules at once. Hence, it is also known as the Big Bang method.
Big Bang Method
In this approach, testing is done by integrating all the modules at once. It is convenient for small software systems; if used for large software systems, identification of defects is difficult.
Since this testing can be done only after the completion of all the modules, the testing team has less time for execution, so internally linked interfaces and high-risk critical modules can be missed easily.
Advantages:
o It is convenient for small software systems.
Disadvantages:
o Identification of defects is difficult, because finding where an error came from is a problem, and we don't know the source of the bug.
o Small modules are missed easily.
o The time provided for testing is very short.
o We may miss testing some of the interfaces.
Let us see an example for a better understanding of non-incremental integration testing, or the Big Bang method:
Example 1
In the below example, the development team develops the application and sends it to the CEO of the testing team. The CEO logs in to the application, generates a username and password, and sends a mail to the manager, telling the team to start testing the application.
The manager then produces usernames and passwords and sends them to the test leads, and the test leads send them to the test engineers for further testing. This order, from the CEO down to the test engineers, is top-down incremental integration testing.
In the same way, when the test engineers are done with testing, they send a report to the test leads, who submit a report to the manager, and the manager sends a report to the CEO. This process is known as bottom-up incremental integration testing, as we can see in the below image:
Note: The combination of incremental integration testing (I.I.T) and non-incremental integration testing is known as sandwich testing.
Example 2
The below example demonstrates the home page of Gmail, where we click on the Inbox link and are moved to the inbox page. Here we have to do non-incremental integration testing, because there is no parent and child concept.
Note
Stub and driver
The stub is a dummy module that behaves like a real module. When data is sent from module P to stub Q, the stub receives the data without confirming or validating it and produces the estimated outcome for the given data.
The driver verifies the data from P and sends it to the stub, and also checks the expected data coming back from the stub and sends it to P. The driver sets up the test environment, takes care of the communication, evaluates results, and sends the reports. Stubs and drivers are temporary scaffolding used only during integration testing; they are not part of the delivered product.
In white box testing, bottom-up integration testing is preferred because writing drivers is easy. In black box testing, no preference is given to either approach, as it depends on the application.
System Testing
System Testing includes testing of a fully integrated software system. Generally, a computer system is made by integrating software (any piece of software is only a single element of a computer system). Software is developed in units and then interfaced with other software and hardware to create a complete computer system. In other words, a computer system consists of a group of software components that perform various tasks, but software alone cannot perform a task; for that, the software must be interfaced with compatible hardware. System testing is a series of different types of tests whose purpose is to exercise and examine the full working of an integrated software computer system against the requirements.
Checking the end-to-end flow of an application or software as a user is known as system testing. In this, we navigate (go through) all the necessary modules of an application and check whether the end features and the end business flow work fine, testing the product as a whole system.
It is end-to-end testing, where the testing environment is similar to the production environment.
There are four levels of software testing: unit testing, integration testing, system testing, and acceptance testing. Unit testing is used to test a single unit of software; integration testing is used to test a group of units; system testing is used to test the whole system; and acceptance testing is used to test the acceptability of the business requirements. Here we are discussing system testing, which is the third level of testing.
Hierarchy of Testing Levels
There are mainly two widely used methods for software testing: one is white box testing, which uses internal coding to design test cases, and the other is black box testing, which uses the GUI or the user's perspective to develop test cases.
o White box testing
o Black box testing
System testing falls under black box testing, as it includes testing of the external working of the software. Testing follows the user's perspective to identify even minor defects.
System Testing includes the following steps.
o Verification of the input functions of the application, to test whether they produce the expected output or not.
o Testing of the integrated software, including external peripherals, to check the interaction of the various components with each other.
o Testing of the whole system, end to end.
o Behavior testing of the application from a user's experience.
Example of System testing
Suppose we open an application, say www.rediff.com, and there we see that an advertisement is displayed on top of the home page, which remains there for a few seconds before it disappears. These types of ads are managed by an Advertisement Management System (AMS). Now, we will perform system testing on this type of feature.
The application works in the following manner:
o Say Amazon wants to display a promotional ad on January 26 at precisely 10:00 AM on Rediff's home page for India.
o The Amazon sales manager logs into the website and creates a request for an advertisement for that date.
o He/she attaches a file, likely an image or video file of the ad, and applies.
o The next day, the AMS manager of Rediffmail logs into the application and checks the awaiting ad requests.
o The AMS manager sees that Amazon's ad request is pending, and then checks whether the space is available for the particular date and time.
o If space is available, he/she evaluates the cost of putting up the ad at $15 per second, so the overall cost for a 10-second ad is approximately $150.
o The AMS manager clicks on the payment request and sends the estimated value, along with the request for payment, to the Amazon manager.
o The Amazon manager logs into the ad status, confirms the payment request, makes the payment as per the details, and clicks on Submit and Pay.
o As soon as Rediff's AMS manager receives the amount, he/she sets up the advertisement for the specific date and time on Rediffmail's home page.
The various system test scenarios are as follows:
Scenario 1: The first test is the general scenario, as discussed above. The test engineer does system testing for the basic situation, where the Amazon manager creates a request for the ad and the ad runs at the particular date and time.
Scenario 2: Suppose the Amazon manager feels that the ad space is too expensive and cancels the request. At the same time, Flipkart requests the ad space for January 26 at 10:00 AM. Since Amazon's request has been canceled, Flipkart's promotional ad must be arranged for January 26 at 10:00 AM.
After the request and payment have been made, suppose Amazon changes its mind and is now ready to pay for January 26 at 10:00 AM. That slot cannot be given, because Flipkart has already taken the space. Hence, another calendar slot must open up for Amazon to make its booking.
Scenario 3: In this, we first log in as the AMS manager, then click on the Set Price page and set the price for ad space on the logout page to $10 per second.
Then we log in as the Amazon manager and select the date and time to put up an ad on the logout page. The payment should be $100 for a 10-second ad on Rediffmail's logout page.
Note: Generally, every test engineer does the functional, integration, and system testing on their assigned module only.
As we can see in the below image, we have three different modules: Loans, Sales, and Overdraft. These modules are tested by their assigned test engineers only, because if data flows between these modules or scenarios, we need to be clear about which module it goes to, and that module's test engineer should check it. Let us assume that here we are performing system testing on the interest estimation, where the customer takes the Overdraft for the first time as well as for the second time.
In this particular example, we have the following scenarios:
Scenario 1
o First, we log in as a user, say P, apply for an Overdraft of Rs 15000, click on Apply, and log out.
o After that, we log in as the Manager, approve the Overdraft of P, and log out.
o Again we log in as P and check the Overdraft balance; Rs 15000 should be deposited. Then we log out.
o Modify the server date to the next 30 days.
o Log in as P, check that the Overdraft balance is 15000 + 300 + 200 = 15500, then log out.
o Log in as the Manager, click on Deposit, deposit Rs 500, and log out.
o Log in as P, repay the Overdraft amount, and check that the Overdraft balance is zero.
o Apply for an Overdraft in advance, as two months' salary.
o The Manager approves it; the amount is credited, and the interest and processing fee apply as for the first time.
o Login user → Homepage [Loan, Sales, Overdraft] → Overdraft page [Amount Overdraft, Apply Overdraft, Repay Overdraft] → Application
o Login manager → Homepage [Loan, Sales, Overdraft] → Overdraft page [Amount Overdraft, Apply Overdraft, Repay Overdraft, Approve Overdraft] → Approve Page → Approve application
o Login as user P → Homepage [Loan, Sales, Overdraft] → Overdraft page [Amount Overdraft, Apply Overdraft, Repay Overdraft] → Approved Overdraft → Amount Overdraft
o Login as user P → Homepage [Loan, Sales, Overdraft] → Overdraft page [Amount Overdraft, Apply Overdraft, Repay Overdraft] → Repay Overdraft → with processing fee + interest amount
Scenario 2
Now we test an alternative scenario, where the bank provides an offer which says that a customer who takes Rs 45000 as an Overdraft for the first time will not be charged the processing fee. The processing fee will not be waived when the customer takes another overdraft, for the third time.
We have to test this scenario, where the customer takes the Overdraft of Rs 45000 for the first time, and also verify the Overdraft repayment balance after applying for another overdraft for the third time.
Scenario 3
In this, we consider that the application is being used normally by all the clients, when all of a sudden the bank decides to reduce the processing fee to Rs 100 for new customers. We have to test an Overdraft for new clients and check whether only Rs 100 is charged.
But then we get a conflict in the requirement: assume a client has applied for Rs 15000 as an Overdraft with the current processing fee of Rs 200. While the Manager is yet to approve it, the bank decreases the processing fee to Rs 100.
Now, we have to test which processing fee is charged for the pending customer's Overdraft. The testing team cannot assume anything; they need to communicate with the Business Analyst or the client and find out what they want in such cases.
Therefore, when the customer provides the first set of requirements, we must come up with the maximum possible scenarios.
Types of System Testing
System testing is divided into more than 50 types, but software testing companies typically use only some of them, which are listed below:
Regression Testing
Regression testing is performed under system testing to identify whether any defect has been introduced into the system due to a modification in any other part of the system. It makes sure that changes done during the development process have not introduced new defects, and it also gives assurance that old defects will not reappear with the addition of new software over time.
For more information about regression testing, refer to the below link:
https://www.javatpoint.com/regression-testing
Load Testing
Load testing is performed under system testing to check whether the system can work under real-time loads or not.
Functional Testing
Functional testing of a system is performed to find whether any function is missing from the system. The tester makes a list of vital functions that should be in the system; missing ones can be added during functional testing, which improves the quality of the system.
Recovery Testing
Recovery testing of a system is performed under system testing to confirm the reliability, trustworthiness, and accountability of the system, all of which depend on the system's recovery capability. The system should be able to recover successfully from all possible crashes.
In this testing, we test the application to check how well it recovers from crashes or disasters.
Recovery testing contains the following steps:
o Whenever the software crashes, it should not simply vanish but should write a crash log message or an error log message in which the reason for the crash is mentioned. For example: C:/Program Files/QTP/Crash.log
o It should kill its own process before it vanishes. (In Windows, for example, the Task Manager shows which processes are running.)
o We introduce a bug and crash the application, which means someone tells us how and when the application will crash. Or, from experience, after a few months of working on the product, we get to know how and when the application will crash.
o Re-open the application; the application must reopen with the earlier settings.
For example: suppose we are using the Google Chrome browser and the power goes off. When we switch on the system and re-open Google Chrome, we get a message asking whether we want to start a new session or restore the previous session. For any developed product, the developer writes a recovery program that describes why the software or the application crashed, whether the crash log messages were written, and so on.
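The first two recovery steps can be sketched in code: a top-level handler that writes the reason for the failure to a crash log before the process exits, so the application never just "vanishes". The file name and error message below are illustrative:

```python
# Sketch of a crash handler for recovery testing: on an unexpected
# error, write the reason to a crash log before exiting. The path
# and the simulated failure are hypothetical examples.
import traceback

CRASH_LOG = "crash.log"

def run_app(step):
    """Run one unit of application work, logging any crash."""
    try:
        return step()
    except Exception:
        with open(CRASH_LOG, "w") as log:
            log.write("Application crashed. Reason:\n")
            log.write(traceback.format_exc())
        raise SystemExit(1)  # kill our own process cleanly

def faulty_step():
    raise RuntimeError("simulated disk failure")

# Recovery test: crash the app on purpose, then check that the log
# exists and mentions the reason for the crash.
try:
    run_app(faulty_step)
except SystemExit:
    pass
with open(CRASH_LOG) as log:
    assert "simulated disk failure" in log.read()
```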
Migration Testing
Migration testing is performed to ensure that if the system needs to be moved to a new infrastructure, it can be moved without any issue.
Usability Testing
The purpose of this testing is to make sure that the system is easy for the user to work with and that it meets the objective it is supposed to meet.
For more information about usability testing, refer to the below link:
https://www.javatpoint.com/usability-testing
Software and Hardware Testing
This testing of the system intends to check hardware and software compatibility. The hardware configuration must be compatible with the software for it to run without any issue. Compatibility provides flexibility by enabling interactions between hardware and software.
Why is System Testing Important?
o System testing gives a high degree of assurance about system behavior, as it covers the end-to-end functioning of the system.
o It includes testing of the system's software architecture and business requirements.
o It helps in mitigating live issues and bugs even after production.
o System testing can feed the same data into both an existing system and a new system and then compare the differences in functionality, so the user can understand the benefits of the newly added functions of the system.
Testing Any Application
Here, we are going to test the Gmail application to understand how functional, integration, and system testing work.
Suppose we have to test the various modules of the Gmail application, such as Login, Compose, Draft, Inbox, Sent Items, Spam, Chat, Help, and Logout.
We do functional testing on all modules first, and only then can we perform integration testing and system testing.
For functional testing, we need at least one module. So here we take the Compose module and perform functional testing on it.
Compose
The different components of the Compose module are To, CC, BCC, Subject, Attachment, Body,
Sent, Save to Draft, Close.
o First, we will do functional testing on the To component:
Input                Result
Positive inputs
mike@gmail.com       Accept
Mike12@gmail.com     Accept
Mike@yahoo.com       Accept
Negative inputs
Mike@yahoocom        Error
Mike@yaho.com        Error
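The positive and negative inputs for the To field can be turned into a small automated check. The regular expression below is a deliberately simplified sketch of email-format validation, not Gmail's real rule:

```python
# Sketch of functional tests for the "To" field. The pattern is a
# simplified, hypothetical stand-in for real address validation.
import re

EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_recipient(address):
    """Return True if the address looks like a well-formed email."""
    return EMAIL_PATTERN.match(address) is not None

# Positive inputs: should be accepted.
for address in ["mike@gmail.com", "Mike12@gmail.com", "Mike@yahoo.com"]:
    assert is_valid_recipient(address)

# Negative input: missing dot in the domain, should be rejected.
assert not is_valid_recipient("Mike@yahoocom")
```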
o For the CC & BCC components, we will take the same inputs as for the To component.
o For the Subject component, we will take the following inputs and scenarios:
Input                          Result
Positive inputs
Enter maximum characters       Accept
Enter minimum characters       Accept
Blank space                    Accept
URL                            Accept
Copy & paste                   Accept
Negative inputs
Exceed maximum characters      Error
Paste images / video / audio   Error
o For the Body component, we will test the following scenarios:
o Maximum characters
o Minimum characters
o Flash files (GIF)
o Smileys
o Formatting
o Blank
o Copy & paste
o Hyperlink
o Signature
o For the Attachment component, we will test it against the below scenarios:
o File size at maximum
o Different file formats
o Total number of files
o Attach multiple files at the same time
o Drag & drop
o No attachment
o Delete attachment
o Cancel uploading
o View attachment
o Browse from different locations
o Attach opened files
o For the Send component, we will fill in all the fields and click on the Send button, and the confirmation message "Message sent successfully" must be displayed.
o For the Save to Drafts component, we will fill in all the fields and click on Save to Drafts, and the confirmation message must be displayed.
o For the Cancel component, we will fill in all the fields and click on the Cancel button; the window should be closed, or the mail moved to Save to Drafts, or all fields must be refreshed.
Once we are done performing functional testing on the Compose module, we will do integration testing on the Gmail application's various modules:
Login
o First, we enter the username and password to log in to the application, and check the username on the home page.
Compose
o Compose a mail, send it, and check the mail in the sender's Sent Items.
o Compose a mail, send it, and check the mail in the receiver's Inbox.
o Compose a mail, send it to self, and check the mail in our own Inbox.
o Compose a mail, click on Save as Draft, and check it in the sender's Drafts.
o Compose a mail, send it to an invalid id (in a valid format), and check for the undelivered message.
o Compose a mail, close it, and check it in Drafts.
Inbox
o Select a mail, reply, and check it in Sent Items and the receiver's Inbox.
o Select a mail in the Inbox to reply, click Save as Draft, and check it in Drafts.
o Select a mail, delete it, and check it in Trash.
Sent Items
o Select a mail in Sent Items, Reply or Forward, and check it in Sent Items and the receiver's Inbox.
o Select a mail in Sent Items, Reply or Forward, Save as Draft, and verify it in Drafts.
o Select a mail, delete it, and check it in Trash.
Drafts
o Select a draft email, forward it, and check it in Sent Items and the receiver's Inbox.
o Select a draft email, delete it, and verify it in Trash.
Chat
o A chat with an offline user is saved in the receiver's Inbox.
o Chat with a user and verify it in the chat window.
o Chat with a user and check it in the chat history.
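The Compose, Sent Items, and Inbox checks can be sketched against a toy in-memory mail system; all the classes and behavior here are hypothetical, purely to illustrate the data-flow assertions:

```python
# Toy in-memory mail system to illustrate the integration checks
# between Compose, Sent Items, Inbox, and Drafts. Entirely hypothetical.

class MailServer:
    def __init__(self):
        self.inboxes = {}      # user -> list of mails
        self.sent_items = {}   # user -> list of mails
        self.drafts = {}       # user -> list of mails

    def _box(self, store, user):
        return store.setdefault(user, [])

    def send(self, sender, receiver, subject):
        mail = {"from": sender, "to": receiver, "subject": subject}
        self._box(self.sent_items, sender).append(mail)
        self._box(self.inboxes, receiver).append(mail)

    def save_draft(self, sender, subject):
        self._box(self.drafts, sender).append({"subject": subject})

server = MailServer()

# Compose a mail, send it, and check it in the sender's Sent Items
# and the receiver's Inbox.
server.send("p@example.com", "q@example.com", "hello")
assert server.sent_items["p@example.com"][0]["subject"] == "hello"
assert server.inboxes["q@example.com"][0]["subject"] == "hello"

# Compose a mail, save it as a draft, and check it in the sender's Drafts.
server.save_draft("p@example.com", "unfinished")
assert server.drafts["p@example.com"][0]["subject"] == "unfinished"
```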
Note: During testing, we need to wait for a particular duration, because system testing can be performed only when all the modules are ready and have gone through functional and integration testing.
Test Artifacts
In software testing, we have various types of test documents, which are as follows:
o Test scenarios
o Test cases
o Test plan
o Requirement traceability matrix (RTM)
o Test strategy
o Test data
o Bug report
o Test execution report
Test Scenario
A test scenario is a one-line statement that covers a piece of end-to-end functionality of a software application. The test scenario is a high-level classification of testable requirements. These requirements are grouped on the basis of the functionality of a module and are obtained from the use cases.
Behind each test scenario there is a detailed testing process, because a scenario has many associated test cases. Before performing a test scenario, the tester has to consider the test cases for each scenario.
While writing test scenarios, testers need to put themselves in the place of the user, because they test the software application from the user's point of view. Preparation of scenarios is the most critical part, and it is often necessary to seek advice or help from customers, stakeholders, or developers to prepare the scenarios.
Note:
Test scenarios can never be used directly for the test execution process, because they do not contain navigation steps and inputs.
These are high-level documents that talk about all the possible ways or combinations of using the application, and the primary purpose of test scenarios is to understand the overall flow of the application.
How to write Test Scenarios
As a tester, follow these steps to create test scenarios:
o Read the requirement documents, such as the BRS (Business Requirement Specification), SRS (System Requirement Specification), and FRS (Functional Requirement Specification), of the software under test.
o Determine all the technical aspects and objectives for each requirement.
o Find all the possible ways in which the user can operate the software.
o Ascertain all the possible scenarios through which the system can be misused, and also consider users who could be hackers.
o After reading the requirement documents and completing the scheduled analysis, make a list of the various test scenarios needed to verify each function of the software.
o Once you have listed all the possible test scenarios, create a traceability matrix to find out whether each and every requirement has a corresponding test scenario or not.
o The supervisor of the project reviews all scenarios. Later, they are evaluated by the other stakeholders of the project.
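The traceability-matrix step can be sketched as a simple mapping from requirement IDs to scenario IDs; a requirement with no scenario is flagged as a coverage gap. The IDs below are made up for illustration:

```python
# Sketch of a requirement traceability matrix: map each requirement
# to the test scenarios that cover it, then report uncovered ones.
# All requirement and scenario IDs are hypothetical.

requirements = ["REQ-1", "REQ-2", "REQ-3"]

traceability = {
    "REQ-1": ["TS-01", "TS-02"],  # login scenarios
    "REQ-2": ["TS-03"],           # compose-mail scenario
    # REQ-3 has no scenario yet -- a coverage gap.
}

uncovered = [req for req in requirements if not traceability.get(req)]
print("Uncovered requirements:", uncovered)  # → ['REQ-3']
```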
Features of Test Scenario
o A test scenario is a one-line statement that guides testers on the testing sequence.
o Test scenarios reduce the complexity and repetition in testing the product.
o Writing test scenarios means talking and thinking about tests in detail, but writing them down as one-line statements.
o It is a thread of operations.
o Test scenarios become more important when the tester does not have enough time to write test cases and the team members agree on a detailed set of scenarios.
o Writing test scenarios is a time-saving activity.
o They provide easy maintenance, because the addition and modification of test scenarios are easy and independent.
Note:
Some rules have to be followed when writing test scenarios:
o Always list first the features and modules most commonly used by the users.
o Always pick the scenarios module by module, so that a proper sequence is followed and we do not miss out on any module.
o Generally, scenarios are at the module level.
o Delete scenarios should always come last; otherwise, we will waste lots of time creating the data once again.
o They should be written in simple language.
o Every scenario should be written in one line, or a maximum of two lines, not in paragraphs.
o Every scenario should consist of dos and checks.
Test Case
A test case is defined as a group of conditions under which a tester determines whether a software application is working as per the customer's requirements or not. Test case design includes preconditions, the case name, input conditions, and the expected result. A test case is a first-level action and is derived from test scenarios.
It is a detailed document that contains all the possible inputs (positive as well as negative) and the navigation steps, which are used in the test execution process. Writing test cases is a one-time effort that can be reused later, at the time of regression testing.
A test case gives detailed information about the testing strategy, testing process, preconditions, and expected output. Test cases are executed during the testing process to check whether the software application performs the task it was developed for.
A test case helps the tester in defect reporting by linking the defect with a test case ID. Detailed test case documentation works as a safeguard for the testing team, because if the developer missed something, it can be caught during the execution of these test cases.
To write test cases, we must have the requirements, to derive the inputs, and the test scenarios must already be written, so that we do not miss any features during testing. Then we should have a test case template to maintain uniformity, so that every test engineer follows the same approach to prepare the test document.
Generally, we write the test cases while the developers are busy writing the code.
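A test case template of the kind described (ID, name, preconditions, steps, input, expected result) can be sketched as a simple record; the field names below are one common convention, not a standard:

```python
# Sketch of a test case record. Field names and values are
# illustrative conventions, not a prescribed template.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str            # used to link defects back to the case
    name: str
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    input_data: dict = field(default_factory=dict)
    expected_result: str = ""
    actual_result: str = ""  # filled in only after execution

tc = TestCase(
    case_id="TC_LOGIN_001",
    name="Login with valid credentials",
    preconditions=["User account exists"],
    steps=["Open login page", "Enter username and password", "Click Login"],
    input_data={"username": "mike", "password": "secret"},
    expected_result="Home page is displayed with the username",
)
assert tc.actual_result == ""  # actual result is recorded at execution time
```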
When do we write a test case?
We write the test cases in the following sequence:
o When the customer gives the business needs, the developers start developing and say that they need, for example, 3.5 months to build the product.
o In the meantime, the testing team starts writing the test cases.
o Once the test cases are done, they are sent to the Test Lead for the review process.
o When the developers finish developing the product, it is handed over to the testing team.
o The test engineers then test from the test cases rather than from the requirement document, so that testing stays consistent and depends on the documented process rather than on the individual test engineer.
Note: While writing the test cases, the actual result should never be filled in, as the product is still under development. That's why the actual result should be written only after the execution of the test cases.
Why do we write test cases?
We write test cases for the following reasons:
o To ensure consistency in test case execution
o To make sure of better test coverage
o So that testing depends on the process rather than on a person
o To avoid having to train every new test engineer on the product
To ensure consistency in test case execution: we look at the test cases and start testing the application, so everyone tests in the same way.
To make sure of better test coverage: we cover all possible scenarios and document them, so that we need not recall all the scenarios again and again.
Testing depends on the process rather than on a person: suppose a test engineer tested an application during the first and second releases and left the company at the time of the third release. The engineer understood the modules and tested the application thoroughly by deriving many values. If that person is not there for the third release, it becomes difficult for the new person. Hence all the derived values are documented so that they can be used in the future.
To avoid giving training for every new test engineer on the product: when a test engineer leaves, he/she leaves with a lot of knowledge and scenarios. Those scenarios should be documented, so that the new test engineer can test with the given scenarios and can also write new scenarios.
Note: When the developers are developing the product for the first release, the test engineer writes the test cases. In the second release, when new features are added, the test engineer writes test cases for those as well; and in the next release, when features are modified, the test engineer changes the test cases or writes new ones.
What is a Test Plan?
A test plan is a document that describes all future testing-related activities. It is prepared at the project level, and in general it defines the work products to be tested, how they will be tested, and how the test types are distributed among the testers. Before testing starts, a test manager prepares the test plan. In any company, whenever a new project is taken up, the test manager of the team prepares a test plan before the testers get involved in testing.
 The test plan serves as a blueprint that changes according to the progress of the project and stays current at all times.
 It serves as a base for conducting testing activities and coordinating activities among the QA team.
 It is shared with the Business Analysts, Project Managers, and anyone associated with the project.
Factors                        Roles
Who writes Test Plans?         Test Lead, Test Manager, Test Engineer
Who reviews the Test Plan?     Test Lead, Test Manager, Test Engineer, Customer, Development Team
Who approves the Test Plan?    Customer, Test Manager
Who writes Test Cases?         Test Lead, Test Engineer
Who reviews Test Cases?        Test Engineer, Test Lead, Customer, Development Team
Who approves Test Cases?       Test Manager, Test Lead, Customer
Objectives of the Test Plan:
1. Overview of testing activities: The test plan provides an overview of the testing
activities and where to start and stop the work.
2. Provides timeline: The test plan helps to create the timeline for the testing
activities based on the number of hours and the workers needed.
3. Helps to estimate resources: The test plan helps to create an estimate of the
number of resources needed to finish the work.
4. Serves as a blueprint: The test plan serves as a blueprint for all the testing
activities, it has every detail from beginning to end.
5. Helps to identify solutions: A test plan helps the team members consider the
project’s challenges and identify the solutions.
6. Serves as a rulebook: The test plan serves as a rulebook of rules to be followed
as the project is completed phase by phase.
Types of Test Plans:
The following are the three types of test plans:
 Master Test Plan: This type of test plan includes multiple test strategies and
covers multiple levels of testing. It goes into great depth on the planning and
management of testing at the various test levels and thus provides a bird’s-eye
view of the important decisions made, tactics used, etc. It includes a list of tests
that must be executed, the test coverage, the connection between the various test
levels, etc.
 Phase Test Plan: In this type of test plan, the emphasis is on one phase of testing.
It includes further information on the levels listed in the master test plan:
information like testing schedules, benchmarks, activities, templates, and other
details that are not included in the master test plan.
 Specific Test Plan: This type of test plan is designed for specific types of testing,
especially non-functional testing, for example plans for conducting performance
tests or security tests.
1. Objective: It describes the aim of the test plan: the processes and procedures the
team is going to follow to deliver quality software to the customer. The overall
objective of the test is to find as many defects as possible and to make the software
bug-free. The test objective must be broken into components and sub-components,
and for every component the following activities should be performed:
 List all the functionality and performance to be tested.
 Set goals and targets based on the application features.
2. Scope: It consists of the information about what needs to be tested with respect to the
application. The scope can be divided into two parts:
 In-Scope: The modules that are to be tested rigorously.
 Out of Scope: The modules that are not to be tested rigorously.
Example: In an application, features A, B, C, and D have to be developed, but feature B has
already been designed by another company. So the development team will purchase B from
that company and perform only integration testing of B with A, C, and D.
3. Testing Methodology: The methods that are going to be used for testing vary from
application to application. The testing methodology is decided based on the features and
application requirements.
Since testing terms are not standardized, one should define what kind of testing will be used
in the testing methodology, so that everyone can understand it.
4. Approach: The approach to testing differs from one piece of software to another. It
documents the flow of the application for future reference. It has two aspects:
 High-Level Scenarios: High-level scenarios are written for testing the critical features.
For example, logging in to a website or booking a ticket from a website.
 The Flow Graph: It is used when one wants to represent the flow of the application
graphically, making activities such as converging and merging of flows easy to follow.
5. Assumption: In this phase, certain assumptions will be made.
 The testing team will get proper support from the development team.
 The tester will get proper knowledge transfer from the development team.
 Proper resource allocation will be given by the company to the testing department.
6. Risk: This section lists all the risks that can occur if the assumptions are broken. For
example, in the case of a wrong budget estimation, the cost may overrun. Some reasons that
may lead to risk are:
 The test manager has poor management skills.
 The project is hard to complete on time.
 Lack of cooperation.
7. Mitigation Plan: If any risk is involved, then the company must have a backup plan; the
purpose is to avoid errors. Some points to resolve/avoid risk:
 Test priority is to be set for each test activity.
 Managers should have leadership skills.
 Training course for the testers.
8. Roles and Responsibilities: All the responsibilities and roles of every member of the
testing team have to be recorded.
Example:
 Test Manager: Manages the project, takes appropriate resources, and gives project
direction.
 Tester: Identify the testing technique, verify the test approach, and save project
costs.
9. Schedule: Under this, the start and end dates of every testing-related activity are recorded.
For example, the start date and the end date for writing the test cases.
10. Defect Tracking: It is an important process in software engineering, as many issues arise
when developing a critical system for business. If any defect is found while testing, it must be
reported to the development team. The following methods are used in the process of defect
tracking:
 Information Capture: In this, we take basic information to begin the process.
 Prioritize: The task is prioritized based on severity and importance.
 Communication: Communication between the identifier of the bug and the fixer of
the bug.
 Environment: Test the application based on hardware and software.
Example: The bug can be identified using bug-tracking tools such as Jira, Mantis, and Trac.
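The "Prioritize" step above can be sketched in code. The following is an illustrative Python snippet, not part of any tracking tool's API; the severity ranking and defect IDs are assumptions made up for the example.

```python
# Hypothetical sketch: ordering logged defects by severity, as in the
# "Prioritize" step of defect tracking. The ranking below is an assumption.
SEVERITY_RANK = {"Critical": 0, "Major": 1, "Minor": 2}

defects = [
    {"id": "D-3", "severity": "Minor"},
    {"id": "D-1", "severity": "Critical"},
    {"id": "D-2", "severity": "Major"},
]

# Sort so the most severe defects are handled first.
queue = sorted(defects, key=lambda d: SEVERITY_RANK[d["severity"]])
print([d["id"] for d in queue])  # ['D-1', 'D-2', 'D-3']
```

In practice a tracker such as Jira combines severity with business priority, but the ordering idea is the same.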
11. Test Environments: The test environment is the setup of hardware and software that the
testing team will use while testing the application. The items that are to be tested are listed
under this section. The installation of the software is also checked under this.
Example:
 Software configuration on different operating systems, such as Windows, Linux, Mac,
etc.
 Hardware Configuration depends on RAM, ROM, etc.
12. Entry and Exit Criteria: The set of conditions that should be met to start any new type of
testing or to end any kind of testing.
Entry Condition:
 Necessary resources must be ready.
 The application must be prepared.
 Test data should be ready.
Exit Condition:
 There should not be any major bugs.
 Most test cases should be passed.
 When all test cases are executed.
Example: If the team member reports that 45% of the test cases failed, then testing will be
suspended until the developer team fixes all defects.
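The suspension rule in the example above can be expressed as a small check. This is a minimal illustrative sketch; the 45% threshold and function names are assumptions taken from the example, not a standard.

```python
# Hedged sketch of a suspension criterion check. The 45% threshold is the
# value from the example above; both it and the function names are assumptions.

def failure_rate(executed: int, failed: int) -> float:
    """Percentage of executed test cases that failed."""
    return 100.0 * failed / executed if executed else 0.0

def should_suspend(executed: int, failed: int, threshold: float = 45.0) -> bool:
    """Suspend testing when the failure rate reaches the threshold."""
    return failure_rate(executed, failed) >= threshold

print(should_suspend(executed=200, failed=90))  # 45% failed -> True
print(should_suspend(executed=200, failed=20))  # 10% failed -> False
```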
13. Test Automation: It consists of the features that are to be automated and which features
are not to be automated.
 If a feature has lots of bugs, it is kept under manual testing.
 If a feature is frequently tested, it can be automated.
14. Effort Estimation: This involves planning the effort that needs to be applied by every team
member.
15. Test Deliverables: Test deliverables are the outputs from the testing team that are given
to the customer at the end of the project.
Before the testing phase:
 Test plan document.
 Test case document.
 Test design specification.
During the testing phase:
 Test scripts.
 Test data.
 Error logs.
After the testing phase:
 Test Reports.
 Defect Report.
 Installation Report.
It contains a test plan, defect report, automation report, assumption report, tools, and other
components that have been used for developing and maintaining the testing effort.
16. Template: A template is defined for every kind of report that is prepared by the testing
team. All the test engineers use only these templates in the project to maintain the
consistency of the product.
How to create a Test Plan:
Below are the eight steps that can be followed to write a test plan:
1. Analyze the product: This phase focuses on analyzing the product, interviewing clients,
designers, and developers, and performing a product walkthrough. This stage focuses on
answering the following questions:
 What is the primary objective of the product?
 Who will use the product?
 What are the hardware and software specifications of the product?
 How does the product work?
2. Design the test strategy: The test strategy document is prepared by the manager and details
the following information:
 Scope of testing which means the components that will be tested and the ones that
will be skipped.
 Type of testing which means different types of tests that will be used in the project.
 Risks and issues that will list all the possible risks that may occur during testing.
 Test logistics mentions the names of the testers and the tests that will be run by
them.
3. Define test objectives: This phase defines the objectives and expected results of the test
execution. Objectives include:
 A list of software features like functionality, GUI, performance standards, etc.
 The ideal expected outcome for every aspect of the software that needs testing.
4. Define test criteria: Two main testing criteria determine all the activities in the testing
project:
 Suspension criteria: Suspension criteria define the benchmarks for suspending all
the tests.
 Exit criteria: Exit criteria define the benchmarks that signify the successful
completion of the test phase or project. These are expected results and must match
before moving to the next stage of development.
5. Resource planning: This phase aims to create a detailed list of all the resources required for
project completion. For example, human effort, hardware and software requirements, all
infrastructure needed, etc.
6. Plan test environment: This phase is very important as the test environment is where the
QAs run their tests. The test environments must be real devices, installed with real browsers
and operating systems so that testers can monitor software behavior in real user conditions.
7. Schedule and Estimation: Break down the project into smaller tasks and allocate time and
effort for each task. This helps in efficient time estimation. Create a schedule to complete these
tasks in the designated time with a specific amount of effort.
8. Determine test deliverables: Test deliverables refer to the list of documents, tools, and other
equipment that must be created, provided, and maintained to support testing activities in the
project.
Deliverables required before testing:
 Test Plan
 Test Design
Deliverables required during testing:
 Test Scripts
 Simulators
 Test Data
 Error and Execution Logs
Deliverables required after testing:
 Test Results
 Defect Reports
 Release Notes
Test strategy
The test strategy is a high-level document used to define the test types (levels) to be
executed for the product, the kinds of techniques to be used, and the modules to be tested.
The Project Manager approves it. It includes multiple components such as documentation
formats, objectives, test processes, scope, customer communication strategy, etc. We cannot
modify the test strategy.
Test data
Test data is the data that is prepared before the test is executed. It is mainly used when we
are implementing the test case. Mostly, we keep the test data in Excel sheet format and enter
it manually while performing the test case.
The test data can be used to check the expected result, which means that when the test data is
entered, the expected outcome should match the actual result; it is also used to check the
application's behavior when incorrect input data is entered.
Bug report
The bug report is a document where we maintain a summary of all the bugs which occurred
during the testing process. This is a crucial document for both the developers and test engineers
because, with the help of bug reports, they can easily track the defects, report the bug, change
the status of bugs which are fixed successfully, and also avoid their repetition in further process.
Test execution report
It is the document prepared by test leads after the entire testing execution process is completed.
The test summary report defines the stability of the product, and it contains information such
as the modules, the number of test cases written, executed, passed, and failed, and their
percentages. Each module has a separate spreadsheet for that module.
Test Case Review
When a test engineer writes a test case, he/she may skip some scenarios or inputs, or write
wrong navigation steps, which may affect the entire test execution process.
To avoid this, we will do one round of review and approval process before starting test
execution.
If we skip the review process, some scenarios may be missed, accuracy will suffer, and the
test engineer may not take the work seriously.
All the test cases are sent for the review process only after the writing is complete, so that
the reviewer is not disturbed mid-way.
Once the author finishes writing the test cases, they are sent to another test engineer, known
as the reviewer, for the review process.
The reviewer opens the test case with the corresponding requirement and checks
the correctness of the test case, proper flow, and maximum test coverage.
During this review process, if the reviewer finds any mistake, he/she records it in a separate
document, known as the Review document, and sends it back to the author.
The author goes through all the review comments, makes the necessary changes, and then
sends the test cases back once again for review.
This correction cycle continues until both the author and the reviewer are satisfied.
Once the review is successful, the reviewer sends it back to the test lead for the final approval
process.
During this approval process, the Team leads are always kept in the loop so that the author and
reviewer will be serious in their jobs.
Once the test cases are written, reviewed, and approved, they are stored in one centralized
location, known as the Test Case Repository.
Note:
Test Case Repository
o A test case repository is a centralized location where all the baselined test cases
(written, reviewed, and approved) are stored.
o When the client gives the requirements, the developers start developing the modules,
and the test engineers write the test cases according to the requirements.
o A test case repository is used to store the approved test cases.
o If any test engineer wants to test the application, he/she needs to access the test cases
only from the test case repository.
o If we do not require a test case, we can drop it from the test case repository.
o For every release, we maintain a different test case repository.
o Once the test cases are baselined and stored in the test case repository, they cannot be
edited or changed without the permission of the test lead.
o The testing team always keeps a complete backup of the test case repository in case a
crash affects the software.
Test Case Review Process
The following are the list of activities involved in the review process:
1. Planning: This is the first phase and begins with the author requesting a moderator for the
review process. The moderator is responsible for scheduling the date, time, place, and
invitations for the review. An entry check is done to make sure that the document is ready
for review and does not have a large number of defects. Once the document clears the entry
check, the author and moderator decide which part of the document is to be reviewed.
2. Kick-Off: This is an optional step in the review process. The goal is to give a short
introduction on the objectives of the review and documents to everyone in the meeting.
3. Preparation: The reviewers review the document using the related documents,
procedures, rules, and checklists provided. Each participant while reviewing identifies the
defects, questions, and comments according to their understanding of the document.
4. Review Meeting: The review meeting consists of three phases:
 Logging phase: The issues and defects identified during the preparation phase are
logged page by page. The logging is done by the author or a scribe, where a scribe is a
person designated to do the logging. Every defect and its severity should be logged.
 Discussion phase: If any defects need discussion then they will be logged and
handled in this phase. The outcome of the discussion is documented for future
purposes.
 Decision phase: A decision on the document under review has to be made by the
participants.
5. Rework: If the number of defects found per page exceeds a certain level then the
document has to be reworked.
6. Follow-Up: The moderator checks to make sure that the author has taken action on all
known defects.
Techniques for Test Case Review
There are three techniques to conduct a test case review:
1. Self-review: This is carried out by the tester who wrote the test cases. By looking
at SRS/FRD, he can see if all of the requirements are met or not.
2. Peer review: This is done by another tester who is familiar with the system
under test but hasn't written the test cases. This is also known as Maker and
Checker review.
3. Supervisory review: This is done by a team lead or manager who is higher in rank
than the tester who wrote the test cases and has an extensive understanding of
the requirements and system under test.
Factors to Consider During Test Case Review
During the review, the reviewer looks for the following in the test cases:
1. Template: The reviewer determines if the template meets the product’s requirements.
2. Header: The following aspects will be checked in the header:
 Whether all of the attributes are captured.
 Whether all of the attributes are relevant.
 Whether all of the attributes are filled in.
3. Body: Look at the following components in the test case’s body:
 The test case should be written in such a way that the execution procedure takes as
little time as possible.
 Whether all feasible scenarios are covered.
 Look for a flow that gives the maximum amount of test coverage.
 Whether appropriate test case design techniques have been used.
 The test case should be easy to comprehend.
4. Test Execution Report:
 It is the last document produced by a test lead after all of the testing has been
finished.
 The test execution report defines the application's stability and includes data such
as the number of test cases written, run, passed, and failed, as well as their
percentages.
 The test execution report is a final summary report that defines the application's
quality and helps determine whether or not the program may be handed over
to the customer.
 Every module has its own spreadsheet to track its progress.
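The per-module counts and percentages such a report summarizes can be sketched in a few lines. The module names and numbers below are made up for illustration; they are not from any real project.

```python
# Illustrative sketch of the per-module counts a test execution report
# summarizes. Module names and figures are hypothetical.
modules = {
    "Login":    {"written": 40, "executed": 38, "passed": 35, "failed": 3},
    "Payments": {"written": 60, "executed": 60, "passed": 51, "failed": 9},
}

for name, m in modules.items():
    # Pass percentage is computed over executed cases, as the report describes.
    pass_pct = 100.0 * m["passed"] / m["executed"]
    print(f"{name}: {m['executed']} executed, {pass_pct:.1f}% passed")
```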
Common Mistakes During Test Case Review
Below are some common mistakes checked during the test case review process:
1. Spelling errors: Spelling errors can sometimes cause a lot of confusion or make a
statement difficult to grasp.
2. Replication of Test Cases: It relates to the reduction of redundant test cases. It’s
possible that two or more test cases are testing the same item and can be
combined into one, saving time and space.
3. Standard/Guidelines: It’s critical to examine whether all of the standards and
guidelines are being followed correctly during the review process.
4. Redundancy: When a test case is rendered obsolete owing to a change in
requirements or other adjustments, it is referred to as redundant. These types
of test cases must be eliminated.
5. Language used: Test cases should be written in a basic, easy-to-understand
language.
6. Grammar: If the grammar is incorrect, the test case can be misinterpreted, leading
to incorrect findings.
7. Format of Template: When a suitable template is followed, it is simple to
add/modify test cases in the future, and the test case plan appears orderly.
Classifying Defects in Review of the Test Cases
When these checklists are utilized consistently and problems are discovered, it is
recommended that the defects be classified into one of the following categories:
 Incomplete test cases.
 Missing negative test cases.
 No test data.
 Inappropriate/Incorrect test data.
 Incorrect expected behavior.
 Grammatical problems.
 Typos.
 Inconsistent tense/voice.
 Incomplete results/number of test runs.
 Defect information not recorded in the test case.
Defects could sneak into production if test cases aren’t thoroughly reviewed. As a result,
production issues could be reported, thereby impacting the Software’s quality. Resolving
problems at this time would be much more expensive than fixing them if they had been
discovered during the testing phase.
What is Requirement Traceability Matrix (RTM)?
RTM stands for Requirement Traceability Matrix. RTM maps all the requirements to the test
cases. By using this document, one can verify that the test cases cover all the functionality of
the application as per the requirements of the customer.
 Requirements: Requirements of a particular project from the client.
 Traceability: The ability to trace the tests.
 Matrix: The data which can be stored in rows and columns form.
The main purpose of the requirement traceability matrix is to verify that all the requirements
of the client are covered in the test cases designed by the testers.
In simple words, one can say it is a pen-and-paper style cross-check between two sets of
information, but here we use an Excel sheet to verify the data in a requirement traceability
matrix.
Why is Requirement Traceability Matrix (RTM) Important?
When business analysis people get the requirements from clients, they prepare a document
called SRS (System/Software Requirement Specification) and these requirements are stored in
this document. If we are working in the Agile model, we call this document Sprint Backlog, and
requirements are present in it in the form of user stories.
When QA gets the SRS/Sprint backlog document they first try to understand the requirements
thoroughly and then start writing test cases and reviewing them with the entire project team.
But sometimes some functionality of the requirements may be missing from these test cases;
to avoid this, we require a requirement traceability matrix.
 Each test case is traced back to each requirement in the RTM. Therefore, there is
less chance of missing any requirement in testing, and 100% test coverage can be
achieved.
 RTM helps users discover any change that was made to the requirements as well as
the origin of the requirement.
 Using RTM, requirements can be traced to determine a particular group or person
that wanted that requirement, and it can be used to prioritize the requirement.
 It helps to keep a check between requirements and other development artifacts
like technical and other requirements.
 The traceability matrix can help the tester identify whether previous
requirements are affected when a new requirement is added.
 RTM helps the QA team evaluate the impact of changes and the effort needed to rework or reuse test cases.
Parameters of Requirement Traceability Matrix (RTM):
The basic RTM template places the requirement IDs row-wise and the test case IDs
column-wise, which makes it a forward traceability matrix.
The following are the parameters to be included in RTM:
1. Requirement ID: A requirement ID is assigned to every requirement of the
project.
2. Requirement description: For every requirement, a detailed description is given in
the SRS (System/Software Requirement Specification) document.
3. Requirement Type: The type of requirement, i.e., banking, telecom,
healthcare, traveling, e-commerce, education, etc.
4. Test case ID: The testing team designs the test cases, and each test case is also
assigned an ID.
Types of Traceability Matrix:
There are 3 types of traceability matrix:
1. Forward traceability matrix
2. Backward traceability matrix
3. Bi-directional traceability matrix
1. Forward traceability matrix:
In the forward traceability matrix, we map the requirements to the test cases. Here we
can verify that all requirements are covered by test cases and no functionality is missing
from the test cases. It helps ensure that all the requirements available in the SRS/Sprint
backlog can be traced to test cases designed by the testers. It is used to check whether the
project is progressing in the right direction.
In the forward traceability matrix:
Rows = Requirement ID
Column = Test case ID
2. Backward traceability matrix:
In the backward traceability matrix, we map the test cases to the requirements. Here
we can verify that no extra test case is added that is not required as per the requirements.
It helps ensure that any test case you have designed can be traced back to a requirement
or user story, and that you are not extending the scope of the work by creating additional
test cases that cannot be mapped to a requirement. The backward traceability matrix is
also known as the reverse traceability matrix.
In the backward traceability matrix:
Rows = Test cases ID
Column = Requirement ID
3. Bi-directional traceability matrix:
A bi-directional traceability matrix is a combination of a forward traceability matrix and a
backward traceability matrix. Here we verify the requirements and test cases in both ways.
Bi-directional traceability matrix = Forward traceability matrix + Backward traceability matrix
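The backward matrix is the inversion of the forward one, which is easy to show in code. This is a minimal sketch with hypothetical requirement and test case IDs; the dictionary shape is an assumption, not a standard RTM format.

```python
# Sketch: deriving the backward traceability matrix by inverting a forward
# matrix. All IDs are hypothetical.
forward = {"REQ-1": ["TC-1", "TC-7"], "REQ-2": ["TC-2"]}

backward: dict[str, list[str]] = {}
for req, cases in forward.items():
    for tc in cases:
        # Each test case maps back to the requirements it verifies.
        backward.setdefault(tc, []).append(req)

print(backward)  # {'TC-1': ['REQ-1'], 'TC-7': ['REQ-1'], 'TC-2': ['REQ-2']}
```

Holding both directions at once is exactly what the bi-directional matrix does.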
Who Needs Requirement Traceability Matrix (RTM)?
When testers design the test cases they need to check whether test cases cover all functionality
of the application as per the requirements of the customer given in the SRS/Sprint backlog.
 To verify that they need a requirement traceability matrix.
 They generally use an Excel sheet or Google spreadsheet for RTM.
How To Create RTM?
Before creating the RTM, the SRS/Sprint backlog document and the test case documents are required.
Below are the steps to create RTM:
1. For RTM we will use an Excel sheet.
2. Write the name of the project, date, and name of the person who is responsible for
RTM.
3. Write all requirement IDs row-wise in the first column of an Excel sheet.
4. Write all the requirement descriptions row-wise in the second column of an Excel
sheet.
5. Write all the requirement types row-wise in the third column of an Excel
6. Write all the test cases with their IDs column-wise in an Excel sheet.
7. After writing all requirements and test cases you have to verify that for every
requirement you have prepared the test cases in both positive and negative flow.
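The verification in the last step, checking that every requirement has test cases, can be sketched with plain dictionaries. The IDs below are hypothetical, and an Excel sheet would normally hold this data; the snippet only illustrates the coverage check.

```python
# A minimal sketch of a forward traceability matrix as a mapping from
# requirement IDs to the test case IDs that cover them. IDs are hypothetical.
rtm = {
    "REQ-1": ["TC-1", "TC-7"],
    "REQ-2": ["TC-2", "TC-10"],
    "REQ-3": [],              # no coverage yet
}

# Any requirement with no mapped test cases is a coverage gap.
uncovered = [req for req, cases in rtm.items() if not cases]
coverage = 100.0 * (len(rtm) - len(uncovered)) / len(rtm)
print("Uncovered requirements:", uncovered)       # ['REQ-3']
print(f"Requirement coverage: {coverage:.0f}%")   # 67%
```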
Advantages of RTM:
Below are some benefits of using RTM:
1. Full test coverage: RTM confirms the 100% test coverage.
2. Verify missing functionality: This document helps the tester check that no
functionality is missed while testing the application.
3. Helps to prioritize and track requirements: It also helps to understand what extra
test cases we added that are not part of the requirement.
4. Helps to track test status: It is easy to keep track of the overall test status.
5. Proper consistent documentation: RTM can help in the effort to provide proper
and consistent documentation for the team.
6. Versioning is easier: RTM helps to keep track of the required modifications and
how they impact every part of the project.
Requirement Traceability Matrix (RTM) Template:
The below figure shows the basic template of RTM. Here the requirement IDs are row-wise and
test case IDs are column-wise which means it is a forward traceability matrix.
From the figure below, it can be seen that:
 For verifying requirement number 1, there are test cases number 1 and 7.
 For requirement number 2, there are test cases number 2 and 10, and similarly, for
all other requirements, there are test cases to verify them.
Errors, Defects, Failures, and Root Causes
Human beings make errors (mistakes), which produce defects (faults, bugs), which in turn
may result in failures. Humans make errors for various reasons, such as time pressure,
complexity of work products, processes, infrastructure or interactions, or simply because they
are tired or lack adequate training. Defects can be found in documentation, such as a
requirements specification or a test script, in source code, or in a supporting artifact such as a
build file. Defects in artifacts produced earlier in the SDLC, if undetected, often lead to
defective artifacts later in the lifecycle. If a defect in code is executed, the system may fail to
do what it should do, or do something it shouldn’t, causing a failure. Some defects will always
result in a failure if executed, while others will only result in a failure in specific
circumstances, and some may never result in a failure. Errors and defects are not the only
cause of failures. Failures can also be caused by environmental conditions, such as when
radiation or electromagnetic fields cause defects in firmware. A root cause is a fundamental
reason for the occurrence of a problem (e.g., a situation that leads to an error). Root causes
are identified through root cause analysis, which is typically performed when a failure occurs
or a defect is identified. It is believed that further similar failures or defects can be prevented
or their frequency reduced by addressing the root cause, such as by removing it.
What is a Bug?
A bug is a malfunction in the software/system: an error that may cause components or the
system to fail to perform their required functions. In other words, if an error is encountered
during the test, it can cause a malfunction. Examples include an incorrect data description,
statement, input data, design, etc.
Life Cycle of a Bug in Software Testing
Below are the steps in the lifecycle of the bug in software testing:
1. Open: The developer begins analyzing the bug here and works to fix it. If the
developer feels the bug is not genuine, it may instead be moved to one of four
states: Rejected, Duplicate, Postponed, or Not a Bug.
2. New: This is the first state of a bug in the bug life cycle. When a new defect is
found, it is in the New state; validation and testing are performed on it in the
later stages of the life cycle.
3. Assigned: At this stage, the newly reported bug is assigned to the development
team to work on. It is assigned to a developer by the project lead or team
manager.
4. Pending Retest: After fixing the defect, the developer gives it to the tester for
retesting, and the status remains "Pending Retest" until the tester works on
retesting the fix.
5. Fixed: When the developer completes the task of fixing the defect by making the
necessary code changes, the status is set to "Fixed".
6. Verified: If the tester finds no issue with the fixed defect after retesting it on the
test build and feels it has been properly fixed, the status is set to "Verified".
7. Reopened: If the bug still exists, the tester assigns it back to the developer for
rechecking, and the status is set to "Reopened".
8. Closed: If the bug no longer exists, the tester changes the status to "Closed".
9. Retest: The tester begins the process of retesting the defect to check that it has
been fixed by the developer as required.
A few more stages to add here are:
10. Rejected: If the developer does not consider the defect a genuine one, it is
marked as "Rejected".
11. Duplicate: If the developer finds the bug identical to another bug, or the
description of the malfunction coincides with another bug, the status is changed
to "Duplicate".
12. Postponed: If the developer feels that the defect is not very important and can be
fixed in the next release, the status is changed to "Postponed".
13. Not a Bug: If the defect does not affect the functioning of the application, its
status is changed to "Not a Bug".
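The life cycle above can be modeled as a set of allowed status transitions. The transition table below is an illustrative simplification of the states described, with a few names normalized (e.g., Assigned, Retest); it is an assumption, not a standard definition.

```python
# Hedged sketch: the bug life cycle as an allowed-transitions table.
# The table is a simplified assumption based on the states described above.
TRANSITIONS = {
    "New":            {"Assigned", "Rejected", "Duplicate", "Postponed", "Not a Bug"},
    "Assigned":       {"Open"},
    "Open":           {"Fixed", "Rejected", "Duplicate", "Postponed", "Not a Bug"},
    "Fixed":          {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest":         {"Verified", "Reopened"},
    "Verified":       {"Closed"},
    "Reopened":       {"Open"},
    "Closed":         set(),
}

def can_move(current: str, new: str) -> bool:
    """Return True if the status change is allowed by the table."""
    return new in TRANSITIONS.get(current, set())

print(can_move("Retest", "Reopened"))  # True
print(can_move("Closed", "Open"))      # False
```

Encoding the transitions this way is how many bug trackers enforce a workflow: an illegal status change is simply rejected.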
Terms | Description | Raised by
Defect | When the application is not working as per the requirement. | Test Engineer
Bug | Informal name for a defect. | Test Engineer
Error | A problem in the code leads to errors. | Developer, Automation Test Engineer
Issue | When the application is not meeting the business requirement. | Customer
Mistake | A problem in the document is known as a mistake. | --
Failure | Lots of defects lead to failure of the software. | --
Bug Report
1. Defect/ Bug Name: A short headline describing the defect. It should be specific
and accurate.
2. Defect/Bug ID: Unique identification number for the defect.
3. Defect Description: Detailed description of the bug including the information of
the module in which it was detected. It contains a detailed summary including the
severity, priority, expected results vs actual output, etc.
4. Severity: This describes the impact of the defect on the application under test.
5. Priority: This is related to how urgent it is to fix the defect. Priority can be High/
Medium/ Low based on the impact urgency at which the defect should be fixed.
6. Reported By: Name/ ID of the tester who reported the bug.
7. Reported On: Date when the defect is raised.
8. Steps: These include detailed steps along with the screenshots with which the
developer can reproduce the same defect.
9. Status: New/ Open/ Active
10. Fixed By: Name/ ID of the developer who fixed the defect.
11. Date Closed: Date when the defect is closed.
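The fields listed above can be captured as a simple record. The following is a minimal sketch using a Python dataclass; the field names follow the list and all of the sample values are hypothetical.

```python
# A minimal sketch of the bug report fields listed above. All values used in
# the example instance are hypothetical.
from dataclasses import dataclass

@dataclass
class BugReport:
    bug_id: str
    name: str
    description: str
    severity: str       # impact on the application under test
    priority: str       # High / Medium / Low urgency of the fix
    reported_by: str
    reported_on: str
    steps: str          # steps to reproduce, ideally with screenshots attached
    status: str = "New" # New / Open / Active ...
    fixed_by: str = ""
    date_closed: str = ""

bug = BugReport(
    bug_id="B001",
    name="Login fails with valid credentials",
    description="Login module returns an error for valid users.",
    severity="Critical",
    priority="High",
    reported_by="TE-07",
    reported_on="2024-01-15",
    steps="1. Open login page 2. Enter valid credentials 3. Submit",
)
print(bug.status)  # New
```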
Duplicate
When the same bug is reported multiple times by different test engineers, it is known as a
duplicate bug.
Can't fix
When the developer accepts the bug and is able to reproduce it, but can't make the necessary
code changes due to some constraints.
Reasons for the can't fix status of the bug
Following are the constraints or reasons for a can't fix bug:
o No technology support: The programming language used does not itself have the capability to solve the problem.
o The bug is in the core of the code (framework): If the bug is minor (not important and does not affect the application), the development lead can say it will be fixed in the next release; but if the bug is critical (in a regularly used, business-important area), the development lead cannot reject it.
o The cost of fixing the bug is more than the cost of keeping it.
Note:
o If a bug is minor but marked can't fix, it means the developer could fix it, but the change would affect the existing code because the bug lies in the core of the code.
o Every can't fix bug is a minor bug.
Deferred / postponed
Deferred (or postponed) is a status in which a bug is pushed to a future release due to time
constraints.
A deferred bug is not fixed in the current build because of those time constraints. For example,
bug B001 is found in the initial build but is not fixed in the same build; it is postponed and
fixed in the next release.
Bugs B0024, B0025, and B0026 are found at the last stage of the build; because they are minor
bugs, they are deferred and fixed in the next release.
Note:
o Not all minor bugs are deferred, but all deferred bugs are minor bugs.
o When there is no future release, a postponed bug is fixed in the maintenance stage.
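Statuses such as Duplicate, Can't fix, and Deferred are simply states in the bug life cycle. The sketch below checks whether a status change is allowed; the transition set is an illustrative assumption, since the exact life cycle varies by organization:

```python
# Illustrative bug life-cycle transitions; the exact set varies by organization.
ALLOWED_TRANSITIONS = {
    "New": {"Open", "Duplicate", "Rejected"},
    "Open": {"Fixed", "Can't fix", "Deferred", "Not a Bug"},
    "Deferred": {"Open"},             # picked up again in a later release
    "Fixed": {"Closed", "Reopened"},
    "Reopened": {"Open"},
}

def can_transition(current: str, new: str) -> bool:
    """Return True if moving a bug from `current` to `new` is allowed."""
    return new in ALLOWED_TRANSITIONS.get(current, set())

print(can_transition("Open", "Deferred"))    # True: postponed to a future release
print(can_transition("Deferred", "Closed"))  # False: a deferred bug is reopened first
```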
Bug Report Template (Excel)
[Excel bug report template spreadsheet not reproduced; it records, per defect, the fields listed above.]
Defect Management Process
Since one of the major test objectives is to find defects, an established defect management
process is essential. Although we refer to "defects" here, the reported anomalies may turn out
to be real defects or something else (e.g., false positive, change request) - this is resolved during
the process of dealing with the defect reports. Anomalies may be reported during any phase of
the SDLC and the form depends on the SDLC. At a minimum, the defect management process
includes a workflow for handling individual anomalies from their discovery to their closure and
rules for their classification. The workflow typically comprises activities to log the reported
anomalies, analyze and classify them, decide on a suitable response such as to fix or keep it as it
is and finally to close the defect report. The process must be followed by all involved
stakeholders. It is advisable to handle defects from static testing (especially static analysis) in a
similar way.
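The workflow described above (log the anomaly, analyze and classify it, decide on a response, close the report) can be sketched as a handful of steps. The function names and status strings below are illustrative, not prescribed by any standard:

```python
from dataclasses import dataclass, field

@dataclass
class Anomaly:
    """A reported anomaly moving through the defect-management workflow."""
    title: str
    classification: str = "unclassified"   # real defect, false positive, change request...
    status: str = "logged"
    history: list[str] = field(default_factory=list)

def log_anomaly(title: str) -> Anomaly:
    """Step 1: log the reported anomaly."""
    a = Anomaly(title)
    a.history.append("logged")
    return a

def classify(a: Anomaly, classification: str) -> None:
    """Step 2: analyze the report and classify what it really is."""
    a.classification = classification
    a.status = "analyzed"
    a.history.append(f"classified as {classification}")

def decide(a: Anomaly, decision: str) -> None:
    """Step 3: decide on a suitable response, e.g. 'fix' or 'keep as is'."""
    a.status = decision
    a.history.append(f"decision: {decision}")

def close(a: Anomaly) -> None:
    """Step 4: close the defect report."""
    a.status = "closed"
    a.history.append("closed")

a = log_anomaly("Report total off by one")
classify(a, "real defect")
decide(a, "fix")
close(a)
print(" -> ".join(a.history))
```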
Typical defect reports have the following objectives:
• Provide those responsible for handling and resolving reported defects with sufficient
information to resolve the issue
• Provide a means of tracking the quality of the work product
• Provide ideas for improvement of the development and test process
A defect report logged during dynamic testing typically includes:
• Unique identifier
• Title with a short summary of the anomaly being reported
• Date when the anomaly was observed, issuing organization, and author, including their
role
• Identification of the test object and test environment
• Context of the defect (e.g., test case being run, test activity being performed, SDLC
phase, and other relevant information such as the test technique, checklist or test data being
used)
• Description of the failure to enable reproduction and resolution, including the steps that
detected the anomaly, and any relevant test logs, database dumps, screenshots, or
recordings
• Expected results and actual results
• Severity of the defect (degree of impact) on the interests of stakeholders or
requirements
• Priority to fix
• Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting
confirmation testing, re-opened, closed, rejected)
• References (e.g., to the test case)
Some of this data may be automatically included when using defect management tools (e.g.,
identifier, date, author and initial status). Document templates for a defect report and
example defect reports can be found in the ISO/IEC/IEEE 29119-3 standard, which refers to
defect reports as incident reports.
Static Testing
Static testing checks the application without executing the code. It is a verification process.
Essential activities done under static testing include business requirement review, design
review, code walkthroughs, and test documentation review.
Static testing is performed in the white box testing phase, where the programmer checks every
line of the code before handing it over to the test engineer.
Static testing can be done manually or with the help of tools, and it improves the quality of the
application by finding errors at an early stage of development; that is why it is also called the
verification process.
Document reviews, high- and low-level design reviews, and code walkthroughs take place in the
verification process.
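Tool-supported static testing examines code without running it. A minimal sketch using Python's standard `ast` module to flag functions with overly long parameter lists; the rule and the threshold of five parameters are illustrative choices, not a standard static-analysis rule:

```python
import ast

SOURCE = """
def transfer(src, dst, amount, currency, fee, note, audit):
    pass

def ping():
    pass
"""

def long_parameter_lists(source: str, max_params: int = 5) -> list[str]:
    """Statically flag functions whose parameter count exceeds max_params."""
    tree = ast.parse(source)   # parse only -- the code under test is never executed
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and len(node.args.args) > max_params:
            findings.append(node.name)
    return findings

print(long_parameter_lists(SOURCE))  # flags 'transfer' without running it
```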
Dynamic Testing
Dynamic testing is testing done while the code is executed in the run-time environment.
It is a validation process where functional testing (unit, integration, and system testing) and non-functional testing (user acceptance testing) are performed.
We perform dynamic testing to check whether the application or software works correctly
during and after installation, without any error.
Difference between Static Testing and Dynamic Testing
Static testing: We check the code or the application without executing the code.
Dynamic testing: We check the code/application by executing the code.

Static testing: Includes activities like code review, walkthrough, etc.
Dynamic testing: Includes functional and non-functional testing such as UT (unit testing), IT (integration testing), ST (system testing) & UAT (user acceptance testing).

Static testing: A verification process.
Dynamic testing: A validation process.

Static testing: Used to prevent defects.
Dynamic testing: Used to find and fix defects.

Static testing: A more cost-effective process.
Dynamic testing: A less cost-effective process.

Static testing: Can be performed before the compilation of code.
Dynamic testing: Can be done only after the executables are prepared.

Static testing: We can perform statement coverage testing and structural testing.
Dynamic testing: Equivalence partitioning and boundary value analysis techniques are performed.

Static testing: Involves a checklist and process followed by the test engineer.
Dynamic testing: Requires test cases for the execution of the code.
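The equivalence partitioning and boundary value analysis mentioned above are dynamic techniques: they exercise the code by executing it. A sketch against a hypothetical validation rule; the `is_valid_age` function and its 18-to-60 range are illustrative assumptions:

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical rule under test: ages 18 to 60 inclusive are valid."""
    return 18 <= age <= 60

# Equivalence partitions: below range, in range, above range (one value each).
assert is_valid_age(10) is False
assert is_valid_age(35) is True
assert is_valid_age(70) is False

# Boundary values: on and just beyond each edge of the valid range.
for age, expected in [(17, False), (18, True), (60, True), (61, False)]:
    assert is_valid_age(age) is expected

print("all dynamic checks passed")
```

Each partition contributes one representative value, while the boundary values probe exactly where off-by-one defects tend to hide.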
Experience-based Test Techniques
1. Error guessing
2. Exploratory testing
3. Checklist-based testing
Exploratory Testing
In exploratory testing, tests are simultaneously designed, executed,
and evaluated while the tester learns about the test object. The testing is used to learn
more about the test object, to explore it more deeply with focused tests, and to create
tests for untested areas. Exploratory testing is sometimes conducted using session-based testing to structure the testing. In a session-based approach, exploratory testing
is conducted within a defined time-box. The tester uses a test charter containing test
objectives to guide the testing. The test session is usually followed by a debriefing that
involves a discussion between the tester and stakeholders interested in the test results
of the test session. In this approach test objectives may be treated as high-level test
conditions. Coverage items are identified and exercised during the test session. The
tester may use test session sheets to document the steps followed and the discoveries
made. Exploratory testing is useful when there are few or inadequate specifications or
there is significant time pressure on the testing. Exploratory testing is also useful to
complement other more formal test techniques. Exploratory testing will be more
effective if the tester is experienced, has domain knowledge and has a high degree of
essential skills, such as analytical skills, curiosity, and creativity.
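A session in session-based testing can be recorded minimally as a charter, a time-box, and the discoveries logged for the debriefing. The structure below is an illustrative sketch, not a prescribed session-sheet format:

```python
from dataclasses import dataclass, field

@dataclass
class TestSession:
    """Minimal session sheet for session-based exploratory testing (illustrative)."""
    charter: str          # test objective guiding the session
    timebox_minutes: int  # defined time-box for the session
    notes: list[str] = field(default_factory=list)

    def log(self, note: str) -> None:
        """Record a step followed or a discovery made during the session."""
        self.notes.append(note)

session = TestSession(
    charter="Explore the checkout flow with invalid coupon codes",
    timebox_minutes=90,
)
session.log("Empty coupon field accepted -- possible defect")
session.log("Expired coupon shows a generic error message")
print(len(session.notes), "discoveries logged for the debriefing")
```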
Checklist-Based Testing
In checklist-based testing, a tester designs, implements, and
executes tests to cover test conditions from a checklist. Checklists can be built based on
experience, knowledge about what is important for the user, or an understanding of
why and how software fails. Checklists should not contain items that can be checked
automatically, items better suited as entry/exit criteria, or items that are too general
(Brykczynski 1999). Checklist items are often phrased in the form of a question. It should
be possible to check each item separately and directly. These items may refer to
requirements, graphical interface properties, quality characteristics or other forms of
test conditions. Checklists can be created to support various test types, including
functional and non-functional testing. Some checklist entries may gradually become less
effective over time because the developers will learn to avoid making the same errors.
New entries may also need to be added to reflect newly found high severity defects.
Therefore, checklists should be regularly updated based on defect analysis. However,
care should be taken to avoid letting the checklist become too long (Gawande 2009). In
the absence of detailed test cases, checklist-based testing can provide guidelines and
some degree of consistency for the testing. If the checklists are high-level, some
variability in the actual testing is likely to occur, resulting in potentially greater coverage
but less repeatability.
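Checklist items phrased as separately checkable questions map naturally onto a list of question/check pairs. The checklist and the login-form properties below are an illustrative example, not taken from any standard:

```python
# Illustrative checklist for a hypothetical login form; each item is a
# separately and directly checkable question paired with a check function.
form = {"username_max_len": 50, "password_masked": True, "error_message": "Invalid credentials"}

checklist = [
    ("Is the username length limited?", lambda f: f["username_max_len"] <= 64),
    ("Is the password input masked?", lambda f: f["password_masked"]),
    ("Does a failed login avoid revealing which field was wrong?",
     lambda f: "username" not in f["error_message"].lower()),
]

results = {question: bool(check(form)) for question, check in checklist}
for question, passed in results.items():
    print(("PASS" if passed else "FAIL"), "-", question)
```

Because each item is checked independently, new entries (for example, reflecting newly found high-severity defects) can be appended without touching the rest of the checklist.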
Entry Criteria and Exit Criteria
Entry criteria define the preconditions for undertaking a given activity. If entry criteria are not
met, it is likely that the activity will prove to be more difficult, time-consuming, costly, and
riskier. Exit criteria define what must be achieved in order to declare an activity completed.
Entry criteria and exit criteria should be defined for each test level, and will differ based on the
test objectives. Typical entry criteria include: availability of resources (e.g., people, tools,
environments, test data, budget, time), availability of testware (e.g., test basis, testable
requirements, user stories, test cases), and initial quality level of a test object (e.g., all smoke
tests have passed). Typical exit criteria include: measures of thoroughness (e.g., achieved level
of coverage, number of unresolved defects, defect density, number of failed test cases), and
completion criteria (e.g., planned tests have been executed, static testing has been performed,
all defects found are reported, all regression tests are automated). Running out of time or
budget can also be viewed as valid exit criteria. Even without other exit criteria being satisfied,
it can be acceptable to end testing under such circumstances, if the stakeholders have reviewed
and accepted the risk to go live without further testing. In Agile software development, exit
criteria are often called Definition of Done, defining the team’s objective metrics for a
releasable item. Entry criteria that a user story must fulfill to start the development and/or
testing activities are called Definition of Ready.
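Exit criteria like the measures above can be evaluated mechanically at the end of a test level. The thresholds and metric names in this sketch are illustrative assumptions:

```python
def exit_criteria_met(metrics: dict, min_coverage: float = 0.8,
                      max_open_defects: int = 0) -> bool:
    """Illustrative exit-criteria check combining thoroughness and completion measures."""
    return (metrics["coverage"] >= min_coverage          # achieved level of coverage
            and metrics["open_defects"] <= max_open_defects  # unresolved defects
            and metrics["planned_tests_executed"])       # completion criterion

nightly = {"coverage": 0.85, "open_defects": 0, "planned_tests_executed": True}
print(exit_criteria_met(nightly))  # True: every criterion is satisfied
```

In an Agile context, the same kind of check would encode the team's Definition of Done for a releasable item.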