Software Self-Testing

This work has been supported by the European Social Fund within the project «Support for Doctoral Studies at University of Latvia».

Edgars Diebelis
Prof. Dr. sc. comp. Jānis Bičevskis


 Self-testing is the ability to execute stored test cases to verify that the software functionality is correct.
 Testing approaches:
◦ Manual testing;
◦ Automated testing with testing tools;
◦ Self-testing.
 Problem statement.
 Concept of Self-Testing.
 Implementation of Self-Testing.
 Comparison of the Self-Testing Concept with Conventional Testing Support Tools.
 Efficiency Measurements of Self-Testing.
 Conclusions.

„Computing systems’ complexity appears to be approaching the limits of human capability” (Kephart, J. O., Chess, D. M.)
 IBM Autonomic Computing Manifesto, 2001
◦ outlined four key features that characterise autonomic computing.
 Smart Technology approach, 2007
◦ identified seven types of smart technology;
◦ self-testing is one of the types of smart technology.

 Self-testing comprises two components:
◦ test cases of the system’s critical functionality, which check the functions that are essential for using the system;
◦ a built-in mechanism (software component) for automated software testing (regression testing) that runs the test cases automatically and compares the test results with reference values (a minimal sketch of such a runner follows below).
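To make the second component concrete, here is a minimal Python sketch of such a built-in regression runner, assuming a simple XML layout for the stored tests. The names load_tests, run_regression and execute_action, and the XML schema, are illustrative assumptions, not the tool's actual API.

```python
# A minimal sketch of the built-in regression-testing mechanism.
# The XML layout and all names here are illustrative assumptions.
import xml.etree.ElementTree as ET

def load_tests(path):
    """Yield stored test cases and their reference values from the XML file."""
    root = ET.parse(path).getroot()
    for case in root.findall("test"):
        yield case.get("name"), [(step.get("action"), step.get("expected"))
                                 for step in case.findall("step")]

def run_regression(path, execute_action):
    """Replay every stored test case and compare outcomes with reference values."""
    failures = []
    for name, steps in load_tests(path):
        for action, expected in steps:
            actual = execute_action(action)   # replay the recorded action
            if str(actual) != expected:       # compare with the stored value
                failures.append((name, action, expected, actual))
    return failures
```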
Test Storage Mode
[Diagram: in test storage mode, the application under test records the executed actions and values into a test storage file (XML).]

Self-Testing Mode
[Diagram: in self-testing mode, the self-testing tool reads the test storage file (XML), replays the stored tests, and writes the outcomes to a test storage file (XML).]
Test Points
 A test point is a programming-language statement embedded in the software source code (see the sketch below).
 A test point ensures that:
◦ particular actions and field values are saved when tests are being stored;
◦ the outcome of the software execution is registered when the tests are executed repeatedly.
 By using test points, it is possible to repeat the execution of system events.
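A minimal sketch of how such a test point call might sit in application code, assuming a global mode switch and an in-memory store standing in for the XML file; test_point and save_invoice are hypothetical names, not the actual implementation:

```python
# Hypothetical sketch: test points embedded in application code.
MODE = "storage"        # "storage" | "self-testing" | "off"
_recorded = {}          # stands in for the XML test storage file

def test_point(point_id, value):
    """Save the value in storage mode; compare it with the stored
    reference value in self-testing mode; do nothing otherwise."""
    if MODE == "storage":
        _recorded[point_id] = value
    elif MODE == "self-testing" and _recorded.get(point_id) != value:
        raise AssertionError(f"Test point {point_id}: expected "
                             f"{_recorded.get(point_id)!r}, got {value!r}")

def save_invoice(amount):
    test_point("input:amount", amount)     # input field test point
    total = round(amount * 1.21, 2)        # business logic under test
    test_point("value:total", total)       # comparable value test point
    return total
```

With MODE set to "off", the test points cost almost nothing, which is what allows them to remain in the production code.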
Test Point Types
 Test point types (an example of the SQL query result test point follows this list):
◦ input field test point;
◦ comparable value test point;
◦ system message test point;
◦ SQL query result test point;
◦ application event test point;
◦ test execution criteria test point;
◦ self-testing test points;
◦ etc.
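As an example of one type from this list, here is a hedged sketch of an SQL query result test point, using sqlite3 purely for illustration; the function name and storage mechanism are assumptions:

```python
# Hypothetical sketch of an SQL query result test point (sqlite3 used only
# for illustration; the actual tool's database access is not specified).
import sqlite3

def sql_test_point(conn, point_id, query, recorded, mode):
    """Run the query; in storage mode save the result set, in self-testing
    mode compare it with the stored result."""
    rows = conn.execute(query).fetchall()
    if mode == "storage":
        recorded[point_id] = rows
    elif mode == "self-testing" and recorded.get(point_id) != rows:
        raise AssertionError(f"SQL test point {point_id} differs from the stored result")
    return rows

# Usage example with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, amount REAL)")
conn.execute("INSERT INTO invoices VALUES (1, 125.0)")
store = {}
sql_test_point(conn, "sql:total", "SELECT SUM(amount) FROM invoices", store, "storage")
sql_test_point(conn, "sql:total", "SELECT SUM(amount) FROM invoices", store, "self-testing")
```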
Structure of Self-Testing
 The self-testing tool consists of two modules:
◦ the self-testing module;
◦ the self-testing test management module.
Self-Testing in Use
[Screenshots of the self-testing tool in use.]
Testing Tools Selection
 Tools were selected based on the opinion of the Automated Testing Institute (ATI).
 The “ATI Automation Honors” awards.
 Since May 2009, the ATI has been publishing its magazine Automated Software Testing.
 The ATI website lists 716 automated testing tools.
 ATI holds an annual conference on automated testing (Verify/ATI).
 ~8,000 registered users.
Testing Tools
 TestComplete
 the best commercial automated functional testing tool in 2010
 FitNesse
 the best open-source automated functional testing tool in the .NET sub-category in 2010
 Ranorex
 the best commercial automated functional testing tool in the .NET and Flash/Flex sub-categories in 2010
 T-Plan Robot
 the best open-source automated functional testing tool in the Java sub-category in 2010
 Rational Functional Tester
 a finalist among the best commercial automated functional and performance testing tools in 2009 and 2010
 HP Unified Functional Testing Software
 the best commercial automated functional testing tool in 2009
 Selenium
 the best open-source automated functional testing tool in 2009 and 2010
Criteria for Comparison
 Testing method (TM)
 Test automation approach (TAA)
 Test automation framework (TAF)
 Testing level
 Test recording and playback
 Desktop applications testing
 Web applications testing
 Services testing
 Database testing
 Testing in production environment
 System user can create tests
 Simultaneous running of several tests
 Performing simultaneous actions
 Identifying the tested object
 Test result analysis
 Test editing
 Screenshots
 Control points
 Object validation
 Object browser
 Test log
 Test schedule planner
 Identification of the end of command execution
 Plug-ins and extensions
 etc.
Comparison
[Table comparing self-testing with the selected testing tools against the criteria above, part I.]

Comparison II
[Table comparing self-testing with the selected testing tools against the criteria above, part II.]
Self-Testing Further Development
 New test automation frameworks.
 Test editor and log.
 Object validation and object browser.
 Load, stress and other testing levels.
 Web applications and services testing.
 Additional platforms.
 Plug-ins and extensions.
 Integration with external environment testing.
Self-Testing Advantages
 White-box testing.
 Testing in the production environment.
 Users without in-depth IT knowledge can define and run test cases.
 Testing of external interfaces.
 System testing can be performed without specific preparation for running the tests.
Efficiency Measurements
 Retrospective analysis of incident notifications in a real project.
 It is not possible to apply and compare two different concepts under identical conditions.
 Analysis of incident notifications (1,171 in total) in the CSAS from July 2003 to 23 August 2011.
 A subjective assessment; however, the high number of incident notifications means the statistics do reflect trends.
Statistics of Incident Notifications

| Type of Incident   | Quantity | % of total | Hours   | % of total |
|--------------------|----------|------------|---------|------------|
| Duplicate          | 68       | 5.81       | 23.16   | 0.47       |
| User error         | 43       | 3.67       | 67.46   | 1.37       |
| Unidentifiable bug | 178      | 15.20      | 1011.96 | 20.52      |
| Identifiable bug   | 736      | 62.85      | 3293.74 | 66.79      |
| Improvement        | 102      | 8.71       | 241.36  | 4.89       |
| Consultation       | 44       | 3.76       | 293.92  | 5.96       |
| Total:             | 1171     | 100        | 4931.6  | 100        |

[Pie chart: distribution of incident notifications by type; identifiable bugs dominate with 62.85%.]
Bugs Unidentifiable by the Self-Testing

| Bug type                       | Quantity | % of total |
|--------------------------------|----------|------------|
| External interface bug         | 5        | 2.81       |
| Computer configuration bug     | 12       | 6.74       |
| Data type bug                  | 7        | 3.93       |
| User interface bug             | 25       | 14.04      |
| Simultaneous actions by users  | 5        | 2.81       |
| Requirement interpretation bug | 41       | 23.03      |
| Specific event                 | 83       | 46.63      |
| Total:                         | 178      | 100        |

[Pie chart: distribution of bugs unidentifiable by self-testing; specific events dominate with 46.63%.]
Bugs Identifiable by the Self-Testing

| Test point                   | Quantity | % of total | Hours   | % of total |
|------------------------------|----------|------------|---------|------------|
| File result test point       | 59       | 8.02       | 150.03  | 4.56       |
| Input field test point       | 146      | 19.84      | 827.14  | 25.11      |
| Application event test point | 105      | 14.27      | 364.24  | 11.06      |
| Comparable value test point  | 28       | 3.8        | 93.53   | 2.84       |
| System message test point    | 11       | 1.49       | 58.84   | 1.79       |
| SQL query result test point  | 387      | 52.58      | 1799.96 | 54.65      |
| Total:                       | 736      | 100        | 3293.74 | 100        |

[Pie chart: distribution of identifiable bugs by test point type; SQL query result test points dominate with 52.58%.]
Conclusion
 The self-testing approach not only offers options equal to those offered by other globally recognised testing support tools; it additionally offers options that other testing tools do not possess:
◦ testing external interfaces;
◦ testing in the production environment;
◦ testing with the white-box method;
◦ the possibility for users without IT knowledge to capture tests.
 Testing support is a part of the developed system and is available throughout the entire software life cycle.
Conclusion II
 Self-testing changes the testing process by considerably broadening the role of the developer in software testing.
 Self-testing requires additional work to include the self-testing functionality in the software and to develop tests for the critical functionality.
 Self-testing saves time in repeated (regression) testing of the existing functionality.
 Implementing the self-testing functionality is useful in incremental development models, in particular in systems that are gradually improved and maintained for many years.
Download