JRE SCHOOL OF ENGINEERING
CLASS TEST-1 EXAMINATIONS, MARCH 2015
Subject Name: Software Testing          Subject Code: EIT-062
Date: 27th March, 2015                  Max Marks: 30
Time: 9:20 AM to 10:20 AM               Max Duration: 1 Hour
Roll No. of Student: ____________
For IT branch only
Note: Attempt all sections.
SECTION – A (3 Marks * 5 Questions = 15 Marks)
Attempt any five parts.
1. Differentiate between Testing & Debugging with examples.
Ans:
Testing and debugging are distinct activities, and the difference between them is especially important for those who are new to the software testing field.
Exact distinction between Testing and Debugging:
1. Testing always starts with known conditions, uses predefined methods, and has predictable outcomes. Debugging starts from possibly unknown initial conditions, and its end cannot be predicted except statistically.
2. Testing can and should definitely be planned, designed, and scheduled. The procedures for, and duration of, debugging cannot be so constrained.
3. Testing proves the programmer's failure. Debugging is the programmer's vindication.
4. Testing is a demonstration of error or apparent correctness. Debugging is always treated as a deductive process.
5. Testing, as executed, should strive to be predictable, dull, constrained, rigid, and inhuman. Debugging demands intuitive leaps, conjecture, experimentation, and some freedom.
6. Much of testing can be done without design knowledge. Debugging is impossible without detailed design knowledge.
7. Testing can often be done by an outsider. Debugging must be done by an insider.
8. Much of test execution and design can be automated. Automated debugging is still a dream for programmers.
9. The purpose of testing is to find bugs. The purpose of debugging is to find the cause of a bug.
In practice, the big difference is that debugging is conducted by the programmer, who fixes the errors during the debugging phase. Testers never fix the errors; they find them and return them to the programmer.
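For example (a hypothetical sketch; the discount() function and its fix are invented only to illustrate the two activities):

def discount(price, percent):
    # Faulty unit: subtracts the percentage value itself instead of the
    # computed discount amount.
    return price - percent

# Testing (tester): planned and predictable -- known input, predefined expected output.
expected = 180.0
actual = discount(200.0, 10)
if actual != expected:
    print("Bug found: expected", expected, "but got", actual)   # tester reports the bug

# Debugging (programmer): start from the reported failure, inspect intermediate
# values to locate the cause, then fix the code, e.g.:
#     return price - price * percent / 100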
2. Differentiate between Verification and Validation.
Ans:
VERIFICATION vs VALIDATION
The terms 'Verification' and 'Validation' are frequently used in the software testing world, but their meanings are often vague and debatable. You will encounter (or have encountered) all kinds of usages and interpretations of these terms, and it is our humble attempt here to distinguish between them as clearly as possible.
Definition
  Verification: The process of evaluating work products (not the actual final product) of a development phase to determine whether they meet the specified requirements for that phase.
  Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified business requirements.
Objective
  Verification: To ensure that the product is being built according to the requirements and design specifications. In other words, to ensure that work products meet their specified requirements.
  Validation: To ensure that the product actually meets the user's needs, and that the specifications were correct in the first place. In other words, to demonstrate that the product fulfills its intended use when placed in its intended environment.
Question
  Verification: Are we building the product right?
  Validation: Are we building the right product?
Evaluation Items
  Verification: Plans, requirement specs, design specs, code, test cases
  Validation: The actual product/software
Activities
  Verification: Reviews, walkthroughs, inspections
  Validation: Testing
It is entirely possible that a product passes when verified but fails when validated. This can happen
when, say, a product is built as per the specifications but the specifications themselves fail to address
the user’s needs.


Trust but Verify.
Verify but also Validate.
3. Differentiate among Faults, Errors, and Failures with examples.
Difference between defect, error, bug, failure, and fault:
Error : A discrepancy between a computed, observed, or measured value or condition and
the true, specified, or theoretically correct value or condition. See: anomaly, bug, defect,
exception, and fault
Failure: The inability of a system or component to perform its required functions within
specified performance requirements. See: bug, crash, exception, fault.
Bug: A fault in a program which causes the program to perform in an unintended or
unanticipated manner. See: anomaly, defect, error, exception, fault.
Fault: An incorrect step, process, or data definition in a computer program which causes the
program to perform in an unintended or unanticipated manner. See: bug, defect, error,
exception.
Defect: A mismatch between the actual result and the requirements.
IEEE Definitions
• Failure: External behavior is incorrect.
• Fault: Discrepancy in the code that causes a failure.
• Error: Human mistake that caused the fault.
Note:
• "Error" is the developer's terminology; "Bug" is the tester's terminology.
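For example (a hypothetical sketch illustrating the chain error -> fault -> failure):

def average(numbers):
    # Error: the developer's mistake (intending to divide by the count).
    # Fault: the incorrect statement that mistake left in the code --
    #        dividing by len(numbers) + 1 instead of len(numbers).
    return sum(numbers) / (len(numbers) + 1)

# Failure: when the faulty statement executes, the observed behaviour
# deviates from the required behaviour.
print("observed:", average([1, 2, 3]))   # 1.5
print("expected:", 2.0)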
4. What is Boundary Value Analysis?
A boundary value is any input or output value on the edge of an equivalence partition.
Let us take an example to explain this:
Suppose you have software which accepts values between 1 and 1000. The valid partition is 1-1000, and the equivalence partitions are:
  Invalid Partition: 0
  Valid Partition: 1-1000
  Invalid Partition: 1001 and above
The boundary values will be 1 and 1000 from the valid partition, and 0 and 1001 from the invalid partitions.
Boundary Value Analysis is a black-box test design technique where test cases are designed using boundary values; BVA is used in range checking.
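A minimal sketch (assuming a hypothetical accepts() function that should return True only for values in the valid partition 1-1000):

def accepts(value):
    # Hypothetical unit under test: valid range is 1..1000 inclusive.
    return 1 <= value <= 1000

# Boundary values: 1 and 1000 from the valid partition,
# 0 and 1001 from the invalid partitions.
boundary_cases = {0: False, 1: True, 1000: True, 1001: False}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, "boundary %d failed" % value
print("all boundary value test cases passed")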
5. Write the IEEE definition of software testing.
Software testing is the process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item.
6. What are unit, integration, and system testing?
Unit Testing (tests individual components or modules)
As the name implies, this involves the programmer testing individual "units" of code, each of which may have only a single responsibility (such as performing one specific task).
Unit tests are primarily written either by the developers/programmers themselves or by a specific individual who handles testing. Their basic purpose is to ensure that these smaller sections of code work properly and to reduce the number of bugs that can appear later within an application.
Unit tests are typically performed in a vacuum (of sorts) and are ignorant of any other code that may depend on, or relate to, the unit being tested at the time.
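For illustration, a minimal sketch using Python's unittest module (the add() function is a hypothetical unit chosen only to show the idea):

import unittest

def add(x, y):
    """Hypothetical unit under test: one small, single responsibility."""
    return x + y

class TestAdd(unittest.TestCase):
    # The unit is exercised in isolation, ignorant of any other code
    # that may depend on it.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()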
Integration Testing (tests the integrated components of the application)
Integration testing is performed after the individual units (or the components of your application) have been integrated together, much as you would find in a real working system. This is the form of testing you would more typically think of when you think about "testing" an application and making sure it all works well together. Integration tests typically require a great deal more work to put together than unit tests, because you usually need a complete system to perform them effectively.
Integration testing provides a reasonable approximation of working with a production-level system and should reveal any errors that a "live" system might encounter after entering production.
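A minimal sketch (hypothetical components, chosen only to show two units being exercised together through their real wiring rather than in isolation):

class UserRepository:
    """Hypothetical data component."""
    def __init__(self):
        self._users = {1: "Alice"}

    def get_name(self, user_id):
        return self._users[user_id]

class GreetingService:
    """Hypothetical component that depends on the repository."""
    def __init__(self, repo):
        self.repo = repo                      # real dependency, not a stub

    def greet(self, user_id):
        return "Hello, " + self.repo.get_name(user_id) + "!"

def test_greeting_uses_repository():
    # Integration test: the two units are integrated and exercised together.
    service = GreetingService(UserRepository())
    assert service.greet(1) == "Hello, Alice!"

test_greeting_uses_repository()
print("integration test passed")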
System Testing
System testing is the process of testing an integrated system to verify that it meets the specified requirements. The objective of system testing is to verify that the integrated information system as a whole is functionally complete and satisfies both functional and non-functional design requirements. Functional testing is concerned with what the system does, whereas non-functional testing is concerned with how the system does what it does. Defect/error detection is a primary goal.
SECTION – B (5 Marks * 1 Question = 5 Marks)
Attempt any one part.
1. Describe Complete vs. Selective software testing in detail.
2. Consider the following code segment and apply Decision/Branch coverage testing; design the test cases in the following format and measure coverage in terms of percentage:
Line # | Predicate | True | False
Ans:
Consider the following lines of code (LOC):
3   if (a == 0) {
7   if ((a == b) OR ((c == d) AND bug(a))) {
• For decision/branch coverage, evaluate an entire Boolean expression as one true-or-false predicate, even if it contains multiple logical-AND or logical-OR operators.
• We need to ensure that each of these predicates (compound or single) is tested as both true and false.
• Covering three of the four necessary true/false outcomes gives 75% branch coverage.
• We add Test Case 3: foo(1, 2, 1, 2, 1) to bring us to 100% branch coverage, making the Boolean expression on line 7 false (see the sketch after this answer).
• Condition coverage reports the true or false outcome of each Boolean sub-expression of a compound predicate.
• In line 7 there are three sub-Boolean expressions within the larger statement: (a == b), (c == d), and bug(a).
• Condition coverage measures the outcome of each of these sub-expressions independently of each other.
• With condition coverage, you ensure that each of these sub-expressions has independently been tested as both true and false.
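The following is a minimal sketch (a hypothetical Python translation of the two predicates above; foo(a, b, c, d, e) is assumed to wrap them, and bug(a) is assumed to be a helper that returns a Boolean, modelled here as a != 0). It shows how two test cases cover three of the four branch outcomes (75%) and how Test Case 3, foo(1, 2, 1, 2, 1), brings branch coverage to 100%.

def bug(a):
    # Hypothetical helper; assumed here to return a Boolean.
    return a != 0

def foo(a, b, c, d, e):
    result = 0
    if a == 0:                                   # line 3 predicate
        result += 1
    if (a == b) or ((c == d) and bug(a)):        # line 7 predicate
        result += 10
    return result

# Branch coverage: each whole predicate must evaluate to both True and False.
test_cases = [
    ("TC1", (0, 0, 0, 0, 0)),   # line 3: True,  line 7: True  (a == b)
    ("TC2", (1, 1, 0, 0, 0)),   # line 3: False, line 7: True  (a == b)  -> 75% so far
    ("TC3", (1, 2, 1, 2, 1)),   # line 3: False, line 7: False -> 100% branch coverage
]
for name, args in test_cases:
    a, b, c, d, e = args
    print(name, "line 3:", a == 0, "line 7:", (a == b) or ((c == d) and bug(a)))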
SECTION – C (10 Marks * 1 Question = 10 Marks)
Attempt any one part.
1. Differentiate between White-box and Black-box testing. Discuss code coverage testing and code (cyclomatic) complexity testing in detail.
Difference Between Black-box and White-box Testing

White-Box Testing
1. White-box testing is also known as clear box testing, glass box testing, transparent box testing, and structural testing.
2. White-box testing deals with the internal structure and the internal working rather than only the functionality.
3. For white-box testing, a programming background is a must, because it helps in creating test cases for white-box testing.
4. White-box testing is applied for unit testing and integration testing, and sometimes it is also used for system testing.
5. A few techniques that are used in white-box testing:
   a) Code coverage
   b) Segment coverage: done to ensure that every statement or line of code has been executed
   c) Compound condition coverage and loop coverage: tests all the conditions, branches, and loops in the code
   d) Data flow testing: tests all the intermediate steps and how sequential steps behave
   e) Path testing: tests all the paths defined in the code
6. Through white-box testing, the tester can detect logical errors, design errors, and typographical errors.

Black-Box Testing
1. "Black box" means an opaque object or box whose internal structure we cannot see.
2. In black-box testing we concentrate only on inputs and outputs.
3. Black-box test design is usually described as focusing on testing functional requirements, so we normally test the functionality of the software without going deep into its code and structure.
4. Techniques that are used in black-box testing:
   a) Boundary-value analysis
   b) Error guessing
   c) Race conditions
   d) Cause-effect graphing
   e) Syntax testing
   f) State transition testing
   g) Graph matrix
   h) Equivalence partitioning
5. The best example of black-box testing is a search on Google: the user just enters keywords and gets the expected results in turn, never worrying about what lies behind the screen to fetch those results.
6. A technical background is not a necessity for a black-box tester.
Code Coverage testing:
Code coverage analysis is the process of:
• Finding areas of a program not exercised by a set of test cases,
• Creating additional test cases to increase coverage, and
• Determining a quantitative measure of code coverage, which is an indirect measure of quality.
An optional aspect of code coverage analysis is:
• Identifying redundant test cases that do not increase coverage.
A code coverage analyzer automates this process.
You use coverage analysis to assure quality of your set of tests, not the quality of the actual product.
You do not generally use a coverage analyzer when running your set of tests through your release
candidate. Coverage analysis requires access to test program source code and often requires
recompiling it with a special command.
When planning to add coverage analysis to your test plan, there are several details to consider. Coverage analysis has certain strengths and weaknesses. You must choose from a range of measurement methods. You should establish a minimum percentage of coverage to determine when to stop analyzing coverage. Coverage analysis is one of many testing techniques; you should not rely on it alone.
Code coverage analysis is sometimes called test coverage analysis. The two terms are
synonymous. The academic world more often uses the term "test coverage" while practitioners more
often use "code coverage". Likewise, a coverage analyzer is sometimes called a coverage monitor.
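As a toy illustration of the idea (not a real coverage analyzer; just a sketch that uses Python's sys.settrace to record which lines of a hypothetical classify() function a test set exercises):

import sys

def classify(x):
    if x < 0:
        return "negative"
    return "non-negative"

executed = set()

def tracer(frame, event, arg):
    # Record the line numbers executed inside classify().
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)                                   # the existing test set: one test case
sys.settrace(None)

first = classify.__code__.co_firstlineno
body = set(range(first + 1, first + 4))       # the three body lines of classify()
print("line coverage: %d%%" % (100 * len(executed & body) // len(body)))
print("lines not exercised:", sorted(body - executed))
# Adding a test case such as classify(-1) would raise coverage to 100%.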
Code Complexity testing: Code complexity can be measured in terms of cyclomatic complexity.
Cyclomatic complexity is a software metric used to indicate the complexity of a program. It is a quantitative measure of the logical strength of the program: it directly measures the number of linearly independent paths through a program's source code. Cyclomatic complexity is computed using the control flow graph of the program: the nodes of the graph correspond to indivisible groups of commands of the program, and a directed edge connects two nodes if the second command might be executed immediately after the first. Cyclomatic complexity may also be applied to individual functions, modules, methods, or classes within a program.
There are three different ways to compute the cyclomatic complexity.
Method 1:
Given a control flow graph G of a program, the cyclomatic complexity V(G) can be computed as:
V(G) = E – N + 2
where N is the number of nodes of the control flow graph and E is the number of edges in the control
flow graph.
Method 2:
An alternative way of computing the cyclomatic complexity of a program from an inspection of its control flow graph is:
V(G) = Total number of bounded areas + 1
In the program's control flow graph G, any region enclosed by nodes and edges can be called a bounded area. This is an easy way to determine McCabe's cyclomatic complexity. But what if the graph G is not planar, i.e. however you draw the graph, two or more edges intersect? It can be shown that structured programs always yield planar graphs, but the presence of GOTOs can easily add intersecting edges. Therefore, for non-structured programs, this way of computing McCabe's cyclomatic complexity cannot be used. The number of bounded areas increases with the number of decision paths and loops, so McCabe's metric provides a quantitative measure of testing difficulty and, ultimately, of reliability. This method provides a very easy way of computing the cyclomatic complexity of a CFG, just from a visual examination of the CFG. On the other hand, the first method (V(G) = E – N + 2) is more amenable to automation, i.e. it can easily be coded into a program which determines the cyclomatic complexities of arbitrary CFGs.
Method 3:
The cyclomatic complexity of a program can also be computed easily by counting the number of decision statements in the program. If N is the number of decision statements of a program, then McCabe's metric is equal to N + 1.
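For instance (a hypothetical illustration): a control flow graph with N = 7 nodes and E = 9 edges gives V(G) = 9 – 7 + 2 = 4 by Method 1; if that graph has 3 bounded areas, Method 2 gives 3 + 1 = 4; and if the underlying program has 3 decision statements, Method 3 gives 3 + 1 = 4.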
2. What is Cyclomatic Complexity? Calculate the complexity of the following resultant flow graph with two different methods and give the formulas. List out the minimal linearly independent paths of the given flow graph.
Ans: Cyclomatic complexity is a software metric used to indicate the complexity of a program: it directly measures the number of linearly independent paths through the program's control flow graph. The three ways of computing it are described in the answer to Question 1 above: V(G) = E – N + 2, V(G) = total number of bounded areas + 1, and V(G) = number of decision statements + 1. Applying this to the given flow graph:
Step 1: Using the design or code as a foundation, draw a corresponding flow graph.
Step 2: Determine the cyclomatic complexity of the resultant flow graph.
Step 3: Determine a minimum basis set of linearly independent paths. For example:
path 1: 1-2-4-5-6-7
path 2: 1-2-4-7
path 3: 1-2-3-2-4-5-6-7
path 4: 1-2-4-5-6-5-6-7
Step 4: Prepare test cases that will force execution of each path in the basis set.
Step 5: Run the test cases and check their results (a worked check of the complexity implied by these paths is sketched below).
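As a cross-check, the sketch below (a rough illustration, assuming the flow graph contains exactly the nodes and edges implied by the four example paths above) derives the edge set from those paths and computes the cyclomatic complexity by Method 1 (E – N + 2) and Method 3 (decision statements + 1); both give 4, matching the four linearly independent paths listed.

# Derive the edge set implied by the four example basis paths and compute V(G).
paths = [
    [1, 2, 4, 5, 6, 7],
    [1, 2, 4, 7],
    [1, 2, 3, 2, 4, 5, 6, 7],
    [1, 2, 4, 5, 6, 5, 6, 7],
]

nodes, edges = set(), set()
for path in paths:
    nodes.update(path)
    edges.update(zip(path, path[1:]))        # consecutive node pairs are edges

E, N = len(edges), len(nodes)                # E = 9, N = 7 for these paths
decisions = sum(1 for n in nodes
                if sum(1 for src, _ in edges if src == n) > 1)

print("Method 1: V(G) = E - N + 2 =", E - N + 2)          # 9 - 7 + 2 = 4
print("Method 3: V(G) = decisions + 1 =", decisions + 1)  # 3 + 1 = 4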
************