Unit Testing

In computer programming, a unit test is a method of testing the correctness of a particular
module of source code.
The idea is to write test cases for every non-trivial function or method in the module, so that
each test case is independent of the others where possible. This type of testing is mostly done by
the developers.
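As a minimal sketch of the idea, consider a hypothetical add_tax function and a test case written
with Python's built-in unittest framework; the function name and the values used are illustrative
only, not taken from the text.

import unittest

def add_tax(amount, rate=0.08):
    """Hypothetical unit under test: return the amount plus sales tax."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * (1 + rate), 2)

class AddTaxTest(unittest.TestCase):
    """Each test exercises the unit in isolation from the rest of the program."""

    def test_typical_amount(self):
        self.assertEqual(add_tax(100.0), 108.0)

    def test_zero_amount(self):
        self.assertEqual(add_tax(0.0), 0.0)

    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            add_tax(-1.0)

if __name__ == "__main__":
    unittest.main()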
Benefits
The goal of unit testing is to isolate each part of the program and show that the individual
parts are correct. It provides a written contract that the piece must satisfy. This isolated
testing provides four main benefits:
Encourages change
Unit testing allows the programmer to refactor code at a later date and make sure the
module still works correctly (regression testing). This encourages programmers to make
changes to the code, since it is easy for the programmer to check that each piece is still
working properly.
Simplifies Integration
Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a bottom-up
testing approach. Testing the parts of a program first and then testing the sum of its parts
makes integration testing easier.
Documents the code
Unit testing provides a sort of "living document" for the class being tested. Clients looking to
learn how to use the class can look at the unit tests to determine how to use the class to fit
their needs.
Separation of Interface from Implementation
Because some classes may have references to other classes, testing a class can frequently
spill over into testing another class. A common example of this is classes that depend on a
database; in order to test the class, the tester finds herself writing code that interacts with the
database. This is a mistake, because a unit test should never go outside of its own class
boundary. As a result, the software developer abstracts an interface around the database
connection, and then implements that interface with their own Mock Object. This results in
loosely coupled code, thus minimizing dependencies in the system.
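A minimal sketch of this idea follows, assuming a hypothetical OrderService that depends on a
database only through an abstract OrderStore interface; the unit test substitutes an in-memory
mock object, so the test never crosses the class boundary into a real database. All names here
are illustrative.

from abc import ABC, abstractmethod

class OrderStore(ABC):
    """Interface abstracted around the database connection."""
    @abstractmethod
    def save(self, order_id, total): ...
    @abstractmethod
    def load(self, order_id): ...

class OrderService:
    """Class under test; it depends only on the OrderStore interface."""
    def __init__(self, store: OrderStore):
        self.store = store

    def place_order(self, order_id, total):
        if total <= 0:
            raise ValueError("total must be positive")
        self.store.save(order_id, total)

class MockOrderStore(OrderStore):
    """Mock object implementing the interface with an in-memory dict."""
    def __init__(self):
        self.saved = {}
    def save(self, order_id, total):
        self.saved[order_id] = total
    def load(self, order_id):
        return self.saved[order_id]

# The unit test wires the service to the mock, keeping the test isolated.
mock = MockOrderStore()
OrderService(mock).place_order("A-1", 49.99)
assert mock.saved == {"A-1": 49.99}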
Limitations
It is important to realize that unit-testing will not catch every error in the program. By
definition, it only tests the functionality of the units themselves. Therefore, it will not catch
integration errors, performance problems and any other system-wide issues. In addition, it
may not be trivial to anticipate all special cases of input the program unit under study may
receive in reality. Unit testing is only effective if it is used in conjunction with other software
testing activities.
Manual Testing
Usage:

It involves testing all of the functions performed by people while preparing data for, and
using data from, the automated system.
Manual testing is performed by the tester who carries out all of the actions on the tested
application manually, step-by-step and indicates whether a particular step was accomplished
successfully or whether it failed. Manual testing is always a part of any testing effort. It is
especially useful in the initial phase of software development, when the software and its user
interface are not yet stable and it is too early to begin automating tests.

Objective:

Verify that manual support documents and procedures are correct.

Determine that manual support responsibilities are correctly assigned.

Determine that manual support people are adequately trained.

Determine that the manual support and automated segments are properly interfaced.
How to Use

The process should be evaluated in all segments of the SDLC.

Execution of the tests can be done in conjunction with normal system testing.

Instead of preparing, executing and entering actual test transactions, the clerical and
supervisory personnel can use the results of processing from the application system.

Testing the people requires testing the interface between the people and the application
system.
When to Use

Verification that manual systems function properly should be conducted throughout
the SDLC.

It should not be deferred to the later stages of the SDLC.

It is best done at the installation stage, so that the clerical people do not get used to the
actual system just before it goes into production.
Example

Provide input personnel with the type of information they would normally receive from
their customers and then have them transcribe that information and enter it in the
computer.

Users can be provided a series of test conditions and then asked to respond to those
conditions. Conducted in this manner, manual support testing is like an examination in
which the users are asked to obtain the answer from the procedures and manuals
available to them.
Black Box Testing
Introduction
Black box testing attempts to derive sets of inputs that will fully exercise all the functional
requirements of a system. It is not an alternative to white box testing. This type of testing
attempts to find errors in the following categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.
Tests are designed to answer the following questions:
1. How is the function's validity tested?
2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black box testing
tends to be applied during later stages. Test cases should be derived which
1. reduce the number of additional test cases that must be designed to achieve reasonable
testing, and
2. tell us something about the presence or absence of classes of errors, rather than an error
associated only with the specific test at hand.
Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases
can be derived. Equivalence partitioning strives to define a test case that uncovers classes of
errors and thereby reduces the number of test cases needed. It is based on an evaluation of
equivalence classes for an input condition. An equivalence class represents a set of valid or
invalid states for input conditions.
Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are
defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence
class are defined.
4. If an input condition is boolean, then one valid and one invalid equivalence class are defined.
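As an illustration of guideline 1, suppose a hypothetical input field must accept an integer age
in the range 18 to 65: one valid class (values inside the range) and two invalid classes (below
and above it) yield three representative test cases. The sketch below assumes this made-up
accept_age function.

def accept_age(age):
    """Hypothetical unit: accept ages in the valid range 18..65 inclusive."""
    return 18 <= age <= 65

# One representative value is drawn from each equivalence class.
equivalence_classes = {
    "valid: 18 <= age <= 65": (30, True),
    "invalid: age < 18":      (10, False),
    "invalid: age > 65":      (80, False),
}

for name, (value, expected) in equivalence_classes.items():
    assert accept_age(value) == expected, name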
Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values. It complements
equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing
on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines
include:
1. For input ranges bounded by a and b, test cases should include values a and b and just
above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to
exercise the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to
exercise the data structure at its boundary.
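Following guideline 1, a hedged sketch for the same hypothetical 18-65 age range would pick test
values at, and immediately on either side of, both boundaries a = 18 and b = 65.

def accept_age(age):
    """Hypothetical unit with an input range bounded by a = 18 and b = 65."""
    return 18 <= age <= 65

# Boundary values: a, b, and the values just below and just above each.
boundary_cases = [
    (17, False),  # just below a
    (18, True),   # a
    (19, True),   # just above a
    (64, True),   # just below b
    (65, True),   # b
    (66, False),  # just above b
]

for value, expected in boundary_cases:
    assert accept_age(value) == expected, value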
Cause-Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of logical
conditions and corresponding actions. There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is
assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
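A minimal sketch of steps 3 and 4, assuming a hypothetical login module with two causes (valid
user id, valid password) and two effects (access granted, error shown); each decision-table rule
becomes one test case. The module and table are illustrative, not from the text.

# Causes: C1 = user id is valid, C2 = password is valid.
# Effects: E1 = access granted, E2 = error message shown.
decision_table = [
    # (C1,    C2,     E1,    E2)
    (True,  True,  True,  False),
    (True,  False, False, True),
    (False, True,  False, True),
    (False, False, False, True),
]

def login(valid_id, valid_password):
    """Hypothetical module under test."""
    granted = valid_id and valid_password
    return granted, not granted  # (E1, E2)

# Each decision-table rule is converted into a test case.
for c1, c2, e1, e2 in decision_table:
    assert login(c1, c2) == (e1, e2)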
White box testing
White box testing is a test case design method that uses the control structure of the
procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.
The Nature of Software Defects
Logic errors and incorrect assumptions are inversely proportional to the probability that a
program path will be executed. General processing tends to be well understood while special
case processing tends to be prone to errors.
We often believe that a logical path is not likely to be executed when it may be executed on a
regular basis. Our unconscious assumptions about control flow and data lead to design errors
that can only be detected by path testing.
Typographical errors are random.
Basis Path Testing
This method enables the designer to derive a logical complexity measure of a procedural
design and use it as a guide for defining a basis set of execution paths. Test cases that
exercise the basis set are guaranteed to execute every statement in the program at least once
during testing.
Flow Graphs
Flow graphs can be used to represent control flow in a program and can help in the derivation
of the basis set. Each flow graph node represents one or more procedural statements. The
edges between nodes represent flow of control. An edge must terminate at a node, even if the
node does not represent any useful procedural statements. A region in a flow graph is an area
bounded by edges and nodes. Each node that contains a condition is called a predicate node.
Cyclomatic complexity is a metric that provides a quantitative measure of the logical
complexity of a program. It defines the number of independent paths in the basis set and thus
provides an upper bound for the number of tests that must be performed.
The Basis Set
An independent path is any path through a program that introduces at least one new set of
processing statements (must move along at least one new edge in the path). The basis set is
not unique. Any number of different basis sets can be derived for a given procedural design.
Cyclomatic complexity, V(G), for a flow graph G is equal to
1. The number of regions in the flow graph.
2. V(G) = E - N + 2 where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1 where P is the number of predicate nodes.
Deriving Test Cases
1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph.
Even without a flow graph, V(G) can be determined by counting
the number of conditional statements in the code and adding 1.
3. Determine a basis set of linearly independent paths.
Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set.
Each test case is executed and compared to the expected results.
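A small worked sketch of these steps, assuming a hypothetical classify function with two
conditional statements: counting the conditions gives P = 2, so V(G) = 3, and three basis-path
test cases are enough to execute every statement at least once.

def classify(x):
    """Hypothetical unit: two predicate nodes, so V(G) = 2 + 1 = 3."""
    if x < 0:          # predicate node 1
        return "negative"
    if x == 0:         # predicate node 2
        return "zero"
    return "positive"

# One test case per independent path in the basis set.
basis_path_cases = [
    (-5, "negative"),  # true side of predicate 1
    (0,  "zero"),      # false side of predicate 1, true side of predicate 2
    (7,  "positive"),  # false side of both predicates
]

for value, expected in basis_path_cases:
    assert classify(value) == expected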
Automating Basis Set Derivation
The derivation of the flow graph and the set of basis paths is amenable to automation. A
software tool to do this can be developed using a data structure called a graph matrix. A graph
matrix is a square matrix whose size is equivalent to the number of nodes in the flow graph.
Each row and column corresponds to a particular node, and the matrix entries correspond to the
connections (edges) between nodes. By adding a link weight to each matrix entry, more
information about the control flow can be captured. In its simplest form, the link weight is 1 if
an edge exists and 0 if it does not. But other types of link weights can be represented:
1. the probability that an edge will be executed,
2. the processing time expended during link traversal,
3. the memory required during link traversal, or
4. the resources required during link traversal.
Graph theory algorithms can be applied to these graph matrices to help in the analysis
necessary to produce the basis set.
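A minimal sketch of a graph matrix for a small, made-up flow graph, using link weight 1 where an
edge exists. Under that assumption, V(G) can be recovered both from E - N + 2 and from the common
connection-matrix shortcut of summing (connections - 1) per row and adding 1.

# Graph matrix for a 4-node flow graph; entry [i][j] = 1 means an edge i -> j.
# Node 0 is a decision (two outgoing edges); nodes 1 and 2 rejoin at node 3.
graph_matrix = [
    # to: 0  1  2  3
    [0, 1, 1, 0],   # node 0 (predicate node)
    [0, 0, 0, 1],   # node 1
    [0, 0, 0, 1],   # node 2
    [0, 0, 0, 0],   # node 3 (exit)
]

edges = sum(sum(row) for row in graph_matrix)
nodes = len(graph_matrix)
print("V(G) = E - N + 2 =", edges - nodes + 2)           # -> 2

# Connection-matrix shortcut: sum (connections - 1) over the rows, then add 1.
v_g = sum(max(sum(row) - 1, 0) for row in graph_matrix) + 1
print("V(G) from connection matrix =", v_g)              # -> 2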
Loop Testing
This white box technique focuses exclusively on the validity of loop constructs. Four different
classes of loops can be defined:
1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.
Simple Loops
The following tests should be applied to simple loops where n is the maximum number of
allowable passes through the loop:
1. skip the loop entirely,
2. only pass once through the loop,
3. m passes through the loop where m < n,
4. n - 1, n, n + 1 passes through the loop.
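A hedged sketch of these four tests for a hypothetical loop that sums at most n input values;
the function, n = 5, and the data are illustrative only.

def sum_first(values, n):
    """Hypothetical unit: sum at most n values from the input list."""
    total = 0
    for v in values[:n]:   # simple loop; n is the maximum number of passes
        total += v
    return total

n = 5
loop_pass_counts = [0, 1, 3, n - 1, n, n + 1]   # skip, once, m < n, n-1, n, n+1

for passes in loop_pass_counts:
    data = list(range(1, passes + 1))
    # The loop body can execute at most n times, even when len(data) = n + 1.
    expected = sum(data[:n])
    assert sum_first(data, n) == expected, passes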
Nested Loops
The testing of nested loops cannot simply extend the technique of simple loops since this
would result in a geometrically increasing number of test cases. One approach for nested loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all outer loops at their
minimum values and the other nested loops at typical values.
4. Continue until all loops have been tested.
Concatenated Loops
Concatenated loops can be tested as simple loops if each loop is independent of the others. If
they are not independent (e.g. the loop counter for one is the loop counter for the other), then
the nested approach can be used.
Unstructured Loops
This type of loop should be redesigned, not tested!
Other White Box Techniques
Other white box testing techniques include:
1. Condition testing
exercises the logical conditions in a program.
2. Data flow testing
selects test paths according to the locations of definitions and uses of variables in the program.
Web 2.0 Testing
A Web 2.0 website may feature any or all of a number of the following techniques:
1. Rich Internet application techniques, optionally Ajax and Flash based
2. Syndication and aggregation of data in RSS/Atom
3. Extensive use of folksonomies (as tags or tag clouds)
4. REST or XML Web service APIs
5. Semantically valid XHTML markup and/or the use of Microformats
6. Clean and meaningful URLs
7. Use of wikis
8. Web log publishing
9. Mashups
To summarize, the major changes in the Web 2.0 world are:
1. Dynamic Websites,
2. Introduction of AJAX and Flash technologies into Websites, and
3. Emergence of browsers like Firefox, Opera, Netscape and AOL as significant challengers to
Microsoft Internet Explorer.
Even as sites become more interactive, they get much more complex. Everyone is aware of
how notoriously fragile JavaScript is, especially in a cross-browser world. So, the key question
is how Web 2.0 sites can be tested effectively.
Website Testing: An Introduction
Before we examine Web 2.0 Testing at greater length, we must revisit the basics of website
testing and understand why it is unique. E-commerce websites in general are quite complex
and need focused QA strategies and specialized testing teams to work on them.
Dynamic Environment: E-commerce websites present an extraordinarily dynamic
environment that operates in rapid release cycle mode. Product information changes almost
daily, bugs are fixed in a weekly build cycle, and third-party functionality such as analytics,
tracking and affiliate marketing is added or changed on an almost monthly basis. This puts a
tremendous strain on development and quality assurance teams to deliver under pressure.
Cyclical Nature of Business: Most retailers generate 70-80% of their business during the
peak "busy holiday season". This means that all major enhancements and new feature updates
typically have to be accomplished in a seven-month period from February to September. The
cyclical nature of the business leads to an unevenly distributed load on development and QA
teams.
Very High Cost of Making Mistakes: E-commerce applications have thousands of users
when compared to a typical enterprise application. A few hours of downtime can result in
thousands of
dollars in lost revenue and lost customers. This is critical given the "on-demand" nature of the
web today. It has led to an immediate expectation of quality and rapid application delivery
on the part of the user.
Standard testing methodologies must be adapted to the unique
nature of website testing environments. The following checklist
displays some of the key aspects that specialized website testing
teams must consider while designing test strategies:
Validation
Validate the HTML
Validate the CSS
Check for broken links
Flexibility
Try varying window size
Try varying font size
Speed
Access the site via a modem
Check image size specifications
Accessibility
Test accessibility
View in text browser
Browser independence
Try different browsers
Check printed pages
Switch JavaScript off
Switch plug-ins off
Switch images off
Other checks
Check non-reliance on mail-to
Check for orphan pages
Check for page titles
Information about Web testing
We call functional testing of web pages in Test Complete web testing. Web testing does not
only mean that Test Complete can simulate mouse clicks and keystrokes in your Internet
browser window; it also means that your Test Complete scripts can access elements of the
page directly.
Currently, Test Complete can access elements of web pages that are displayed in Microsoft
Internet Explorer 5, 6 or 7, Mozilla Firefox 1.5-2.0, or in any browser based on the Microsoft
Web Browser control (that is, your scripts can access web pages displayed in a Web Browser
control embedded into an application's form). Also, limited support is provided for Netscape
Navigator 8.1.2 (access to web page elements is available only if Navigator uses the Internet
Explorer rendering engine).
The web testing feature is available in Test Complete Enterprise only.
Web page elements are displayed in the Object Browser (and used in your scripts) via any of
the four different object models: DOM, Tag, Tree or Hybrid. The DOM model is similar to the
HTML object model used in web scripts. It has the document root element and all other page
elements are children of this object. In the Tag model the web page elements are grouped by
their tag names. In the Tree model the hierarchy of web objects corresponds to the hierarchy
of HTML elements on the web page. This model is the fastest of the four object models, as the
scripting engine spends less time locating the page element. The Hybrid model is a
combination of the Tree and DOM models. It includes objects provided by the Tree model and
the document object of the DOM model and can be used to port legacy projects that use the
DOM model to a new Tree model.
Web testing interacts with the “client” side of web pages. It does not depend on how the
pages were prepared on the “server” side (CGI, ASP, PHP, etc).
Test Complete adds specific methods, properties and events to the browser’s process and
windows that display the web pages. These methods and properties allow you to navigate to
the desired web page, delay script execution until the page is fully loaded, and so on. For
example, you could use the web testing capabilities of Test Complete to do a “smart
comparison” of a generated web page or to run data-driven tests of an order input screen.
Since Test Complete provides access to properties of web page elements, you can perform
almost any checking and verification actions over the page. Test Complete also includes
special features (web checkpoints) that let you easily perform various comparison and
verification actions. For instance, you can:
o Compare the entire page, or only its tag structure, or only the contents of some
elements against a baseline copy.
o Verify that all links on the page are valid.
o Verify that all IMG elements have the ALT attribute specified.
o Check whether the page contains MAILTO links.
o Check whether the page contains Java applets.
o Plus much more!
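The checks above are Test Complete features. Purely as a language-agnostic illustration of the
kind of verification involved, the hedged sketch below uses Python's standard html.parser to flag
IMG elements missing an ALT attribute and to detect MAILTO links; it is not Test Complete script
code, and the sample HTML is made up.

from html.parser import HTMLParser

class PageChecker(HTMLParser):
    """Collects IMG tags missing ALT and any MAILTO links on the page."""
    def __init__(self):
        super().__init__()
        self.images_without_alt = 0
        self.mailto_links = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.images_without_alt += 1
        if tag == "a" and (attrs.get("href") or "").lower().startswith("mailto:"):
            self.mailto_links += 1

html = '<img src="logo.png"><a href="mailto:sales@example.com">Contact</a>'
checker = PageChecker()
checker.feed(html)
assert checker.images_without_alt == 1
assert checker.mailto_links == 1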
The object-oriented testing approach offered by Test Complete lets you create tests that check
only the data and are resistant to changes in the page layout. By using Test Complete’s web
document object models, you can directly access specific
document elements and evaluate their properties in your test scripts, thus allowing you to just
assess the data you want.