
Comp 587
Parker Li
Bobby Kolski
Introduction
Automated testing tools help software engineers gauge the
quality of software by automating the mechanical aspects of
software testing.
Automated testing tools vary in their:
• Underlying approach
• Quality
• Ease-of-use
How does a project manager choose the most suitable testing
tool?
Software Quality Metrics
• The history of software
metrics began with counting
the number of lines of code
(LOC).
• It was assumed that more
lines of code implied more
complex programs, which in
turn were more likely to have
errors.
• However, software metrics
have evolved well beyond
the simple measures
introduced in the 1960s.
Procedural (Traditional) Software
Metrics
• Since the first introduction of LOC, metrics for traditional or
procedural source code have increased in number and
complexity.
• Cyclomatic complexity: V(G) = e - n + 2p, where e is the number of edges,
n is the number of nodes, and p is the number of connected components of
the control-flow graph G.
• Offers an estimate of the reliability, testability, and
maintainability of a program, based on measuring the
number of linearly independent paths through the program (a small worked sketch follows).
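The sketch below (not from the article) applies the formula to a hypothetical control-flow graph for a single if/else; the graph and the function it models are assumptions for illustration only.

```python
# Cyclomatic complexity V(G) = e - n + 2p, computed from an explicit
# control-flow graph. The graph models a hypothetical if/else function.
def cyclomatic_complexity(num_edges, num_nodes, num_parts):
    """e = edges, n = nodes, p = connected components of the graph G."""
    return num_edges - num_nodes + 2 * num_parts

# Control-flow graph of: if cond: <then-branch> else: <else-branch>
nodes = ["entry", "cond", "then", "else", "exit"]
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]

print(cyclomatic_complexity(len(edges), len(nodes), 1))  # 5 - 5 + 2*1 = 2
```

The value 2 matches the two linearly independent paths through an if/else, which is also the minimum number of test cases needed to cover both branches.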
Procedural (Traditional) Software
Metrics (Cont.)
Function Point is a measure of the size of computer
applications and the projects that build them.
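As a rough, hedged illustration (not from the article), the sketch below counts Unadjusted Function Points using the commonly cited IFPUG "average" complexity weights; the application counts are invented for the example.

```python
# Unadjusted Function Points (UFP) with assumed IFPUG average weights.
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts):
    """Sum each function-type count multiplied by its average weight."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical application: 6 inputs, 4 outputs, 3 inquiries, 2 internal files, 1 interface file.
example_counts = {
    "external_inputs": 6,
    "external_outputs": 4,
    "external_inquiries": 3,
    "internal_logical_files": 2,
    "external_interface_files": 1,
}
print(unadjusted_function_points(example_counts))  # 24 + 20 + 12 + 20 + 7 = 83
```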
Object-Oriented Software Metrics
• Weighted-methods-per-class (WMC) – the sum of the
individual complexities of the methods within that class.
• Depth of inheritance tree (DIT) – the maximum length from a node to the
root of the class tree. The deeper a class is in the inheritance hierarchy,
the more complex it becomes.
• Number of Children (NOC) – the number of immediate subclasses of a class.
A large NOC implies a large amount of inheritance and reuse.
• Coupling between object classes (CBO) – two classes are coupled when one
uses the methods or instance variables of the other. A higher CBO value
indicates greater complexity (a small measurement sketch follows this list).
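A minimal Python sketch of how three of these metrics could be measured, assuming small hypothetical classes; WMC is approximated here by weighting every method as 1.

```python
import inspect

class Vehicle:
    def start(self): ...
    def stop(self): ...

class Car(Vehicle):
    def open_trunk(self): ...

class Truck(Vehicle):
    def load(self): ...

def wmc(cls):
    """Weighted methods per class, with every method weighted as 1."""
    return len([name for name, _ in inspect.getmembers(cls, inspect.isfunction)
                if name in vars(cls)])

def dit(cls):
    """Depth of inheritance tree: longest path from cls up to the root (object)."""
    return max((dit(base) + 1 for base in cls.__bases__), default=0)

def noc(cls):
    """Number of children: immediate subclasses of cls."""
    return len(cls.__subclasses__())

print(wmc(Vehicle), dit(Car), noc(Vehicle))  # 2 2 2
```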
Object-Oriented Software Metrics (Cont.)
• Number of key classes (NKC)
• Number of subsystems (NSUB)
• Class size (CS)
• Number of operations added by a subclass (NOA)
• Average method size
• Average number of instance variables
• Class hierarchy nesting level (a small measurement sketch follows)
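Two of these size measures can be sketched with Python introspection; the Account class below is a hypothetical example, and method size is measured simply in source lines.

```python
import inspect

class Account:
    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

# Average method size, measured in lines of source code per method.
methods = [m for _, m in inspect.getmembers(Account, inspect.isfunction)]
sizes = [len(inspect.getsource(m).splitlines()) for m in methods]
avg_method_size = sum(sizes) / len(sizes)

# Number of instance variables, read from a sample instance's attributes.
num_instance_vars = len(vars(Account("alice")))

print(round(avg_method_size, 1), num_instance_vars)  # 3.7 2
```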
Which Software-Testing Tool to use?
• Does it offer critical support for planning tests and
monitoring test progress?
• Does it have an analysis feature where it assesses the
characteristics of the software quality?
• Does it offer guidance for dynamic testing?
• How mature is the tool? Is it easy to implement for your
system?
Metrics for Tools that Support Testing Procedural Software
• Human Interface Design (HID) – Tools
with well-designed human interfaces
enable easy, efficient, and accurate
setting of tool configuration.
• Tool Management (TM) – the tool should
ensure proper management of
information.
• Ease of Use (EU) – the tool should be
easy to use for both novice and
experienced users.
• User Control (UC) – the tool should allow
extensive, detailed control over test
coverage rather than a single bulky, all-or-nothing run.
3 Tools for Software Testing
•LDRA Testbed
•Parasoft Testbed
C++ Test
CodeWizard
Insure++
•Telelogic Testbed
Logiscope
LDRA Testbed
• Coverage Report (Dynamic Analysis)
Branch Coverage
Statement Coverage
• Metrics
Cyclomatic Complexity (Static Analysis)
• Quality Report
LDRA Testbed – Branch/decision coverage, Part 1 (part of dynamic testing)
LDRA Testbed – Code/branch/decision coverage, Part 2 (part of dynamic testing; a small coverage illustration follows)
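The difference between statement and branch/decision coverage can be illustrated with a small hypothetical function; this is a generic sketch, not LDRA output.

```python
def clamp_non_negative(x):
    if x < 0:        # branch point: both the true and false outcomes must be exercised
        x = 0
    return x

# A single test with x = -5 executes every statement (100% statement coverage)
# but only the "x < 0 is true" branch, so branch/decision coverage is 50%.
assert clamp_non_negative(-5) == 0

# A second test exercises the false branch and brings branch coverage to 100%.
assert clamp_non_negative(3) == 3
```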
Parasoft Testbed
• C++ Test
White-Box
Black-Box
Regression testing (a generic sketch follows this list)
• CodeWizard
170 industry-accepted C/C++ coding standards
•Insure++
C/C++ runtime error checking
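As a generic illustration of the regression-testing idea (not Parasoft's C++ Test API), the sketch below replays baseline results captured from a previously accepted version against the current code; the function and baseline values are hypothetical.

```python
import unittest

def price_with_tax(price, rate=0.08):
    # Hypothetical function under test.
    return round(price * (1 + rate), 2)

# Baseline results recorded from the previously accepted release.
BASELINE = {(100, 0.08): 108.0, (50, 0.10): 55.0}

class PriceRegressionTest(unittest.TestCase):
    def test_matches_baseline(self):
        # Any behavioural change against the recorded baseline is flagged.
        for (price, rate), expected in BASELINE.items():
            self.assertEqual(price_with_tax(price, rate), expected)

if __name__ == "__main__":
    unittest.main()
```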
Parasoft Testbed (metrics)
Parasoft Testbed C++ Test Bug
Detective (static analysis)
Telelogic Testbed
• Logiscope
• TestChecker – mentioned in the article
• RuleChecker – mentioned in the article
• Audit *
• Reviewer *
* included in the current version of the tool and shown for completeness but
not covered further in the slides.
Telelogic Logiscope – Telelogic was bought by IBM; the current version of the
tool is IBM Rational Logiscope
Exercising the Software-Testing Tools
(Comparison of the tools themselves)
• Human Interface Design (HID)
Comparison
• Keyboard-to-mouse switches
(KMS)
• Input fields per function (IFPF)
• Average length of input fields
(ALIF)
• Button recognition factor (BR) – a combined-scoring sketch follows this list
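One plausible way to turn these four measurements into a single comparison number is a weighted score; the weights and per-tool values below are invented for illustration and are not the article's data.

```python
# Equal weights across the four human-interface-design sub-metrics (assumption).
WEIGHTS = {"KMS": 0.25, "IFPF": 0.25, "ALIF": 0.25, "BR": 0.25}

# Hypothetical per-metric scores on a 0-10 scale (higher is better).
tool_scores = {
    "Tool A": {"KMS": 8, "IFPF": 7, "ALIF": 6, "BR": 9},
    "Tool B": {"KMS": 6, "IFPF": 8, "ALIF": 7, "BR": 7},
}

def hid_score(scores):
    """Weighted sum of the four HID sub-metric scores."""
    return sum(WEIGHTS[metric] * value for metric, value in scores.items())

for tool, scores in tool_scores.items():
    print(tool, hid_score(scores))  # Tool A 7.5, Tool B 7.0
```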
Exercising the Software-Testing Tools
(Comparison of the tools themselves)
Test case generation (TCG)
Automated test case generation (ATG)
Test case reuse functionality (TRF)
Special note: LDRA does not automatically generate test cases, but it does
provide user-friendly features such as pull-down menus for creating test
cases; therefore it was assigned an eight for its level of ATG.
Analysis of Results (The output of the
tools)
LDRA – For the object-oriented tests there were issues with the tool, so no
results were obtained.
A high knot count (crossings between control-flow jumps) indicates that the code is disjointed
LDRA supports checking code against the Motor Industry Software Reliability
Association (MISRA) C standard and the Federal Aviation Administration's
DO-178B standard.
Importance of this article
• Software Metrics have come a long way since the LOC
era.
• Software Metrics can be used not only to measure the
quality of software, but also the effectiveness of the
software-testing tool.
• Choosing a software testing tool is not simple.
Questions?