3rd Workshop on Teaching Software Testing
Evolving an Elective Software Testing Course: Lessons Learned
Edward L. Jones
Florida A&M University
Tallahassee, FL USA
Agenda
• Course Overview
• Student Background
• Driving Principles
• Overview of Assignments
• Course Reflection
• Improvements
• Assignment Walkthroughs
Course Overview
DESCRIPTION: The purpose of this course is to build skills necessary to perform software testing at the function, class and application level. Students will be taught concepts of black-box (functional and boundary) and white-box (coverage-based) testing, and will apply these concepts to small programs and components (functions and classes). Students will also be taught evaluative techniques such as coverage and mutation testing (error seeding). This course introduces the software engineering discipline of software quality engineering and the legal and societal issues of software quality.
Programming Focus
AUDIENCE: Not software testers but software developers. What distinguishes the course approach is that it stresses the programming aspect of software testing. A goal is to enhance and expand students' programming skills to support activities across the testing lifecycle: C++ programming and Unix shell script programming to automate aspects of software testing.
Students just needed a course to take ...
Conceptual Objectives
The student shall understand:
• The software testing lifecycle
• The relationship between testing, V&V, SQA
• Theoretical/practical limits of software testing
• The SPRAE testing framework
• Concepts and techniques for black-/white-box testing
• Test case design from behavioral model
• Design patterns for test automation
• Test coverage criteria
• Issues of software testing management
Performance Objectives
The student shall be able to:
• Use the Unix development environment
• Write simple Unix shell scripts
• Design functional and boundary test cases
• Develop manual test scripts
• Conduct tests and document results
• Write test drivers to automate function, object and application testing
• Evaluate test session results; write problem reports
Learning/Evaluation Activities
• 80% practice / 20% concepts
• Lectures (no text)
• Laboratory assignments
  » Unix commands and tools
  » Testing tasks
• Examinations
  » 2 online tests
  » Final (online)
• Amnesty period (1 test / 2 labs)
Student Background
• Reality:
  » 20 students
  » Not particularly interested in testing
  » Low programming skill/experience
• Ideal:
  » An interest in software testing
  » Strong programming skills
  » Scientific method (observation, hypothesis forming)
  » Sophomore or junior standing
  » Desire for internship in software testing
My Perspective on Teaching Testing
• Testing is not just for testers!
• In ideal world, fewer testers required
  » Developers have tester’s skills/mentality
  » Testing overlays development process
• No silver bullet … just bricks
  » Simple things provide leverage
  » No one-size-fits-all
  » Be driven by a few sound principles
Driving Principles
• Testing for Software Developers
  » Duality of developer and tester
• Few Basic Concepts
  » Testing lifecycle
  » Philosophy / Attitudes (SPRAE)
• Learn By Doing
  » Different jobs across the lifecycle
A Testing Lifecycle
Specification → Analysis → Design → Implementation → Execution → Evaluation, each stage producing its own artifacts:
• Analysis: Test Strategy/Plan
• Design: Test Cases
• Implementation: Test Script, Data, Driver
• Execution: Test Results
• Evaluation: Defect Data, Problem Reports
Experience Objectives
• Student gains experience at each lifecycle stage
• Student uses/enhances existing skills
• Student applies different testing competencies
• Competencies distinguish novices from the experienced
A Framework for Practicing Software Testing (SPRAE)
• Specification -- the basis for testing
• Premeditation (forethought, techniques)
• Repeatability of test design, execution, and evaluation (equivalence v. replication)
• Accountability via testing artifacts
• Economy (efficacy) of human, time and computing resources
Key Test Practices
• Practitioner -- performs defined test
• Builder -- constructs test “machinery”
• Designer -- designs test cases
• Analyst -- sets test goals, strategy
• Inspector -- verifies process/results
• Environmentalist -- maintains test tools & environment
• Specialist -- performs test life cycle
Test Products
• Test Report (informal) of manual testing
• Test Scripts for manual testing
• Test Log (semi-formal)
• Application Test Driver (Unix shell script)
• Unit/Class Test Driver (C++ program)
• Test Data Files
• Test Results (automated)
• Bug Fix Log (informal)
Specification Products
• Narrative specification
• Specification Diagrams
• Specification Worksheet (pre/post conditions)
• Decision Tables
• Control Flow Graphs
Assignments Target Skills
• Observation Skills
  » Systematic exploration of software behavior
• Specification Skills
  » Describe expected or actual behavior
• Test Design Skills
  » Derive test cases from specification using technique
• Programming Skills
  » Coding for development of test machinery
• Team Skills
  » Work with other testers
Course Reflection
• Testing is programming intensive
• Testing requires analytical skills and facility with mathematical tools
• Testing generates a data management problem that is amenable to automation
• Testing gives students an advantage in entry-level positions
• Students take this course too late
Failed Course Expectations
• Students test at all levels
  » No “in-the-large” application (e.g., web-based)
• Students develop intuitive testing skills
  » Poor performance on exams with problems like those in labs
  » 1 in 3 students show “knack” for testing
• Test case design skills low
  » On largest project, concepts did not transfer
• Impact of balance of concept and experience
  » Homework needed v. labs (programming)
• Mentoring (timely feedback) did not occur
  » Students left to own devices too much
Why These Outcomes?
• Formalisms important, but difficult
  » Provide the behavior model (e.g., decision table)
  » Basis for systematic test case design, automation
• Lack of textbook
  » Students need concepts + lots of examples
• Poor availability when students were working
  » Students worked at last minute
  » Not always around
• Automated grading lacked 1-1 feedback
• Standards-rich/tool-poor environment a distraction
• Assigned work too simple??
Proposed Changes
• Improve lecture notes and example bank
  » Find and refine resources and workbook
• Outside-in: testing in-the-large before in-the-small
• Recitation/laboratory for discussion and feedback
• Increase use of testing tools (no-cost)
• Increase use of collection of code/applications
• Examination testbank for practice, learning
Assignment Walkthroughs
(see paper)
Assignment Walkthroughs
• Blind Testing
• Test Documentation
• Specification
• Test Automation via Shell Scripts
• Unit Test Automation (Driver)
• White-Box Unit Testing
• Class Testing
Blind Testing I
• Objective: Explore behavior of software without the benefit of a specification
• Given: Executables + general description
• Results: Students not systematic in exploration or in generalizing observed behavior
• The three executables:
  » Hello -- output based on length of input
  » Add -- 1-digit modulus 10 adder, input exception (sketched below)
  » Pay -- pay calculation with upper bound pay amount
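For concreteness, a hypothetical C++ sketch of the "Add" executable follows. It is one plausible reading of the behavior named above (sum of two single digits modulo 10, with an input exception for non-digits); the I/O format and message wording are assumptions, since students saw only the compiled program.

// Hypothetical reconstruction of the "Add" blind-testing target (not the
// actual course program): reads two single digits, prints their sum mod 10,
// and reports an input exception for anything that is not a digit.
#include <iostream>
#include <cctype>
using namespace std;

int main()
{
    char a, b;
    cin >> a >> b;
    if (!isdigit(static_cast<unsigned char>(a)) ||
        !isdigit(static_cast<unsigned char>(b)))
    {
        cout << "input exception" << endl;      // assumed wording
        return 1;
    }
    cout << ((a - '0') + (b - '0')) % 10 << endl;
    return 0;
}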
Blind Testing II
• Programming Objective: Student writes program that matches the observed behavior of Blind Testing I
• Test Objective: Observations on Blind Testing I used as “test cases” for reverse-engineered program
• Results: Students did not see the connection
  » Did not replicate the recorded behavior
  » Did not recognize (via testing) failure to replicate
SUPPLEMENTAL SLIDES

Student work
SCALING UP
The heart of the approach is to use a decision table as a thinking tool. The most critical task in this process is to identify all the stimuli and responses. When there are many logical combinations of stimuli, the decision table can become large, indicating that the unit is complex and hard to test.
IDENTIFYING BEHAVIOR
Approaches
• Work backwards
  » Identify each response
  » Identify conditions that provoke response
  » Identify separate stimuli
• Work forward
  » Identify stimuli
  » Identify how each stimulus influences what unit does
  » Specify the response
IDENTIFYING STIMULI
• Arguments passed upon invocation
• Interactive user inputs
• Internal, secondary data
  » global or class variables
• External data (sources)
  » file or database status variables
  » file or database data
• Exceptions
IT PAYS TO BE A GOOD STIMULUS DETECTIVE
• Failure to identify stimuli results in an incomplete, possibly misleading test case
• The search for stimuli exposes
  » interface assumptions -- a major source of integration problems
  » incomplete design of unit
  » inadequate provision for exception handling
IDENTIFYING RESPONSES
• Arguments/Results passed back on exit
• Interactive user outputs
• Internal, secondary data
  » updated global or class variables
• External data (sinks)
  » output file or database status variables
  » output file or database data
• Exceptions
IT PAYS TO BE A GOOD RESPONSE DETECTIVE
• Failure to identify responses results in
  » incomplete understanding of the software under test
  » shallow test cases
  » incomplete expected results
  » incomplete test "success" verification -- certain effects not checked
• To test, one must know all the effects
A SKETCHING TOOL
Black-Box Schematic: stimuli flow into the Software under Test and responses flow out of it.
• Stimulus types: Argument Inputs, Globals, Database, Exception
• Response types: Argument Outputs, Globals, Database, Exception
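To tie the stimulus and response categories to code, a small hypothetical C++ unit is sketched below; the names (taxRate, postPayment, ledger.txt) are illustrative only and not from the course materials. The comments label each stimulus and response type from the schematic.

#include <fstream>
#include <stdexcept>
using namespace std;

double taxRate;                                  // global variable read by the unit

// Hypothetical unit: posts a payment, appends the net amount to a ledger file,
// and returns the new balance.
double postPayment(double amount,                // stimulus: argument input
                   double balance)               // stimulus: argument input
{
    if (amount <= 0)
        throw invalid_argument("bad amount");    // response: exception

    ofstream ledger("ledger.txt", ios::app);     // external data (sink)
    if (!ledger)
        throw runtime_error("no ledger");        // response: exception on file status

    double net = amount * (1.0 - taxRate);       // stimulus: global variable
    ledger << net << '\n';                       // response: file data written
    return balance - net;                        // response: result passed back on exit
}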
BEFORE CONTINUING
Much of the discussion so far involves how to identify what software does. We have introduced thinking tools for systematically capturing our findings. These thought processes and tools can be used anywhere in the lifecycle, e.g., in software design!
One Stone for Two Birds!!
Specialist I - Competencies
Competency matrix: each test practice carries its own competencies (1, 2, 3, 4, 5, ...):
• Test Practitioner
• Test Builder
• Test Designer
• Test Analyst
• Test Inspector
• Test Environmentalist
• Test SPECIALIST
BOUNDARY TESTING DESIGN METHODOLOGY
• Specification
• Identify elementary boundary conditions
• Identify boundary points
• Generate boundary test cases
• Update test script (add boundary cases)
EXAMPLE: Pay Calculation
(1) Specification
• Compute pay for employee, given the number of hours worked and the hourly pay rate. For hourly employees (rate < 30), compute overtime at 1.5 times hourly rate for hours in excess of 40. Salaried employees (rate >= 30) are paid for exactly 40 hours.
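A minimal C++ sketch of the unit under test, written directly from the specification above; the function name pay and the use of double for Hours and Rate are assumptions carried through the later driver examples.

// pay(): sketch of the unit under test, derived from the specification.
// Hourly (rate < 30): straight time up to 40 hours, time-and-a-half beyond 40.
// Salaried (rate >= 30): paid for exactly 40 hours.
double pay(double hours, double rate)
{
    if (rate >= 30)                                  // salaried employee
        return 40 * rate;
    if (hours <= 40)                                 // hourly, no overtime
        return hours * rate;
    return 40 * rate + 1.5 * rate * (hours - 40);    // hourly with overtime
}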
EXAMPLE B
(2) Identify Behaviors
• Case 1: Hourly AND No overtime
  » (Rate < 30) & (Hours <= 40)
  » Expect Pay = Hours * Rate
• Case 2: Hourly AND Overtime
  » (Rate < 30) & (Hours > 40)
  » Expect Pay = 40*Rate + 1.5*Rate*(Hours - 40)
• Case 3: Salaried (Rate >= 30)
  » Expect Pay = 40 * Rate
DECISION TABLE
Condition                | 1  2  3  4
c1: Rate < 30            | Y  Y  N  N
c2: Hours <= 40          | Y  N  Y  N
Action                   |
a1: Pay = Straight time  | X
a2: Pay = Overtime       |    X
a3: Pay = Professional   |       X  X
(Columns define behaviors.)
EXAMPLE B
(3) Create Test Cases
• One test case per column of decision table
  » Case 1: Hourly, No Overtime
  » Case 2: Hourly, Overtime
  » Case 3: Salaried, No Extra Hours
  » Case 4: Salaried, Extra Hours
• Order the test cases by column
EXAMPLE B
(4) Write Test Script
Step | Stimuli: Hours  Rate | Expected Response: Pay =
  1  |          30     10   | 300
  2  |          50     10   | 550
  3  |          30     40   | 1600
  4  |          50     40   | 1600
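The boundary methodology's last step (update the test script with boundary cases) is not worked out on the slides. Below is a hedged sketch of boundary cases for pay(), assuming the pay() sketch above and taking Hours = 40 and Rate = 30 as the boundary points; the chosen values and the use of assert are illustrative.

// Boundary cases around Hours = 40 and Rate = 30; expected values follow the
// specification and are exactly representable, so == comparison is safe here.
#include <cassert>

double pay(double hours, double rate);    // unit under test (sketch above)

int main()
{
    assert(pay(40, 10) == 400);     // Hours on the boundary: straight time
    assert(pay(41, 10) == 415);     // Hours just over: 400 + 1.5 * 10 * 1
    assert(pay(40, 29) == 1160);    // Rate just under 30: still hourly
    assert(pay(40, 30) == 1200);    // Rate on the boundary: salaried, 40 * 30
    return 0;
}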
Testing Modules -- Drivers
A test driver executes a unit with test case data and captures the results.
(Schematic: the Driver reads the Test Set Data, passes Arguments to the Unit, collects Results and External Effects, and writes the Test Set Results.)
Implementing Test Drivers
• Complexity
  » Arguments/Results only
  » Special set-up required to execute unit
  » External effects capture/inquiry
  » Oracle announcing "PASS"/"FAIL"
• Major Benefits
  » Automated, repeatable test script
  » Documented evidence of testing
  » Universal design pattern
Test Driver for Unit Pay
Driver D_pay
uses unit_environment E;
{
  declare Hrs, Rate, expected;
  testcase_no = 0;
  open tdi_file("tdi-pay.txt");
  open trs_file("trs-pay.txt");
  while (more data in tdi_file)
  {
    read(tdi_file, Rate, Hrs);
    read(tdi_file, expected);
    testresult = pay(Hrs, Rate);
    write(trs_file, testcase_no++, Rate, Hrs, expected, testresult);
  }//while
  close tdi_file, trs_file;
}//driver
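A runnable C++ rendering of the same driver pattern is sketched below, assuming the pay() unit above and the tdi-pay.txt format shown on the next slide (rate hours expected-pay); the PASS/FAIL check is left to inspection of the results file, as noted.

// D_pay in C++: read each test case from tdi-pay.txt, run the unit pay(),
// and log case number, inputs, expected and actual results to trs-pay.txt.
#include <fstream>
using namespace std;

double pay(double hours, double rate);           // unit under test

int main()
{
    ifstream tdi_file("tdi-pay.txt");            // test data in: rate hours expected-pay
    ofstream trs_file("trs-pay.txt");            // test results out
    double rate, hours, expected;
    int testcase_no = 0;

    while (tdi_file >> rate >> hours >> expected)
    {
        double testresult = pay(hours, rate);
        trs_file << ++testcase_no << ' ' << rate << ' ' << hours << ' '
                 << expected << ' ' << testresult << '\n';
    }
    return 0;                                    // no environment setup needed
}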
Test Driver Files (Pay)
Test Data File (this is the test script!!)
  File name: tdi-pay.txt
  Format (test cases only): rate hours expected-pay
  File content:
    10 40 400
    10 50 550
    10  0   0
Test Results File
  File name: trs-pay.txt
  Format: case# rate hours exp-pay act-pay
  File content:
    1 10 40 400 400   (Pass)
    2 10 50 550 500   (Fail)
    3 10  0   0   0   (Pass)
Note: No environment setup.
Note: Results file must be inspected for failures.
Testing Classes -- Drivers (Black-Box)
(Schematic: the Class Test Driver reads the Test Set Data, invokes the Class's Method(s) with Method Args, collects Results, and writes the Test Set Results; the Class-state stays hidden from the test.)
Example -- Stack Class
class Stack
{
public:
    Stack();
    void push(int n);
    int pop();
    int top();
    bool empty();
private:
    int Size;
    int Top;
    int Values[100];
};
Notes:
(1) Class state -- variables Size, Top and the first 'Size' values in array Values.
(2) Methods push and pop modify class state; top and empty inquire about the state.
(3) Stack does not require any test environment of its own.
(4) Class state HIDDEN from test, i.e., black box.
Test Driver Files (Stack class)
Test Data File (tdi-stack.txt)
  Methods: 1-push, 2-pop, 3-top, 4-empty
  File content:
    1 8       --- Push.
    1 7
    3 7       --- Top, should be 7.
    2 7
    2 8       --- Pop, should be 8.
    4 true    --- Stack should be empty.
  Note: No test environment setup.
Test Results File (trs-stack.txt)
  File content:
    1  1  8
    2  1  7
    3  3  7  8   Fail
    4  2  7  8   Fail
    5  2  8  7   Fail
    6  4  1      Pass
  Note: Results file must be inspected for pass/fails.
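A hedged C++ sketch of the black-box class test driver for Stack follows, assuming the method codes used in tdi-stack.txt (1-push, 2-pop, 3-top, 4-empty) and the Stack interface declared earlier; the built-in Pass/Fail oracle for value-returning methods is an assumption matching the results file shown above.

// Class test driver sketch: each tdi-stack.txt line is "method-code value",
// where value is the push argument or the expected result. Results (and a
// Pass/Fail verdict where applicable) are written to trs-stack.txt.
#include <fstream>
#include <string>
using namespace std;

// Interface as declared earlier; the implementation is linked separately.
class Stack
{
public:
    Stack();
    void push(int n);
    int pop();
    int top();
    bool empty();
private:
    int Size;
    int Top;
    int Values[100];
};

int main()
{
    ifstream tdi_file("tdi-stack.txt");
    ofstream trs_file("trs-stack.txt");
    Stack s;                                     // one object under test per session
    int method, case_no = 0;

    while (tdi_file >> method)
    {
        ++case_no;
        if (method == 1)                         // push: argument only, no result
        {
            int n;
            tdi_file >> n;
            s.push(n);
            trs_file << case_no << " 1 " << n << '\n';
        }
        else if (method == 2 || method == 3)     // pop/top: compare with expected
        {
            int expected;
            tdi_file >> expected;
            int actual = (method == 2) ? s.pop() : s.top();
            trs_file << case_no << ' ' << method << ' ' << expected << ' ' << actual
                     << (actual == expected ? " Pass" : " Fail") << '\n';
        }
        else if (method == 4)                    // empty: expected is "true"/"false"
        {
            string expected;
            tdi_file >> expected;
            bool actual = s.empty();
            trs_file << case_no << " 4 " << actual
                     << (actual == (expected == "true") ? " Pass" : " Fail") << '\n';
        }
    }
    return 0;
}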