IMPLEMENTATION OF TEST MANAGEMENT ON OBA SYSTEM
AHMED IBRAHIM SAFANA
UNIVERSITI TEKNOLOGI MALAYSIA
IMPLEMENTATION OF TEST MANAGEMENT ON OBA SYSTEM
AHMED IBRAHIM SAFANA
A thesis submitted in fulfillment of the
requirements for the award of the degree of
Master of Computer Science (Real-Time Software Engineering)

Center for Advanced Software Engineering (CASE)
Faculty of Computer Science and Information Systems
Universiti Teknologi Malaysia
APRIL 2010
To my sister, wife, and son
Sa'adiya, Fatima and Mohammed.
ACKNOWLEDGEMENT

Alhamdulillahi ala kulli hal. PM Dr. Suhaimi introduced software testing management and the use of SpiraTeam to me and supported me during the entire period of this training; I thank him with all my heart. Sincere gratitude goes to Puan Haslina for responding to my messages, phone calls and e-mails promptly; the advice, encouragement and tolerance I received from her eased the writing of this thesis. I would like to thank my coordinator, Mr Ridzuan, for his understanding, and the CASE staff, such as Amali and Falzina. Thank you all.

I would like to appreciate the efforts of Mohammed Darraj (Saudi Arabia), Fa'iz Bashar, Abdulrashid A. Ali (Nigeria), Izzul Hidayat (Indonesia), Amgad Ahmed (Egypt), Florian (France), A/Rahaman, Hesham Omar, A/Jalal (Libya), and Imrana Mian (Pakistan). May Almighty Allah bless you all.

I would also like to thank all my classmates.
ABSTRACT

The growing complexity of today's software applications has catapulted the need for testing to new heights while shrinking development and deployment schedules. Organizations all over the world require large numbers of skilled employees for software testing and related work. Schedules always run tight during software system development, which reduces the effort spent on software test management; in such a situation, improving software quality becomes an impossible mission [1]. It is our belief that the software industry needs a new software management tool to promote software test management. This calls for a reliable software test management tool that adopts a structure-behavior coalescence methodology and is able to guide testing engineers on both "what to do" and "how to progress"; knowing "what to do" and "how to progress" improves software test management a great deal. A number of such tools exist, but they seem half-baked because of their inability to synchronize the hitherto separate worlds of development and testing [2]. Once software test management is improved, software quality can also be enhanced. This research attempts to quantify the benefits of using SpiraTeam, a software test management tool, in providing assurance of proper requirement test coverage and requirement traceability, and in improving the quality of software development.
ABSTRAK

The growing complexity of computer software today has pushed the need for testing to a higher level, shrinking development and implementation schedules. Organizations throughout the world require a high rate of turnover for software testing and related issues. Schedules always run tight during the development of software systems, thereafter reducing the effort put into software test management. In such a situation, improving software quality becomes an impossible task. We believe the software industry needs a brand-new software management tool to promote software test management. This addresses the need for a software test management tool that adopts a structure-behavior coalescence methodology and can guide test engineers well on "what to do" and "how to progress". Knowing "what to do" and "how to progress" improves software test management considerably. Several kinds of such tools exist, but they appear half-finished, unable to reconcile the two hitherto separate worlds of development and testing. Once software test management has been improved, software quality can be raised as well. This research attempts to measure the benefits of using the SpiraTeam software test management tool in providing assurance of proper requirement test coverage and requirement traceability, and in improving the quality of software development.
TABLE OF CONTENTS

DECLARATION
DEDICATION
ABSTRACT
ABSTRAK
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES

1. INTRODUCTION
   1.1 Overview of the Study
   1.2 Background of the Study
   1.3 Statement of the Problems
   1.4 Objectives
   1.5 Scope
   1.6 List of Deliverables
   1.7 Thesis Outline

2. LITERATURE REVIEW
   2.1 Introduction
   2.2 What is Test Management?
   2.3 Test Management Tools
       2.3.1 Testopia
       2.3.2 QaTraq
       2.3.3 Borland Tool
       2.3.4 SilkCentral Tool
       2.3.5 Ken Test
       2.3.6 Quality Center
       2.3.7 Rational Test Manager
       2.3.8 Wip-CAFÉ TMS
       2.3.9 Choosing SpiraTeam
             2.3.9.1 Requirement Management
             2.3.9.2 Test Case Management
             2.3.9.3 Release Planning
             2.3.9.4 Iteration Planning
             2.3.9.5 Incident Tracking
             2.3.9.6 Task Management
             2.3.9.7 Project and Users
             2.3.9.8 Artifact Relationships
       2.3.10 Comparative Study on Test Management Tools
   2.4 Test Management Processes
       2.4.1 Tmap
       2.4.2 Trigent Test Management Methodology
             2.4.2.1 Plan/Initiate
             2.4.2.2 The Design
             2.4.2.3 Execute Phase
             2.4.2.4 Run Test and Write Report
             2.4.2.5 Adapt
       2.4.3 Heuristic Testability Method
             2.4.3.1 Controllability
             2.4.3.2 Observability
             2.4.3.3 Availability
             2.4.3.4 Simplicity
             2.4.3.5 Stability
             2.4.3.6 Information
       2.4.4 Session Based Test Management Method
             2.4.4.1 Exploratory Testing
             2.4.4.2 Application
             2.4.4.3 The Session Metrics
             2.4.4.4 Test Coverage Totals
             2.4.4.5 Debriefings
       2.4.5 The Scan Tool Approach
       2.4.6 Waterfall
       2.4.7 Agile Methodology
             2.4.7.1 Scrum
             2.4.7.2 Extreme Programming
             2.4.7.3 Crystal
             2.4.7.4 Dynamic System Method
             2.4.7.5 Feature Driven Development
             2.4.7.6 Lean Software Development
       2.4.8 Conclusion

3. RESEARCH METHODOLOGY
   3.1 Introduction
   3.2 Approaches of SpiraTeam on OBA
       3.2.1 Approach 1: Requirement Definition
       3.2.2 Approach 2: High Level Scheduling
       3.2.3 Approach 3: Iteration Planning
       3.2.4 Approach 4: Task Resourcing Allocation
       3.2.5 Approach 5: Task Execution
   3.3 Artifacts Workflow
       3.3.1 User
       3.3.2 Roles
       3.3.3 Project
       3.3.4 Management
   3.4 Entity Workflow
   3.5 Incident Workflow
   3.6 OBA as a Case Study
   3.7 Conclusion

4. DESIGN AND IMPLEMENTATION OF TEST MANAGEMENT
   4.1 Introduction
   4.2 Research Procedure and Design
   4.3 System Administrator
       4.3.1 Project
       4.3.2 View/Edit User
       4.3.3 Project Membership
       4.3.4 Incident
       4.3.5 Document
   4.4 My Page
       4.4.1 My Projects
       4.4.2 My Saved Searches
       4.4.3 My Assigned Requirements
       4.4.4 My Assigned Test Cases
       4.4.5 My Assigned Test Sets
       4.4.6 My Pending Test Runs
       4.4.7 My Assigned Incidents
       4.4.8 My Detected Incidents
   4.5 Requirement Management
       4.5.1 Requirement Details
       4.5.2 Requirement Coverage/Test Coverage
       4.5.3 Tasks
   4.6 Release Planning
   4.7 Test Cases
   4.8 Test Execution
   4.9 Incident Tracking
   4.10 Conclusion

5. SUMMARY AND CONCLUSION
   5.1 Summary
   5.2 Shortcomings
   5.3 Recommendation and Further Enhancement
   5.4 Conclusion

REFERENCES
Appendix A
LIST OF TABLES

TABLE NO.  TITLE

2.1  Projects' Artifacts
2.2  Artifacts Description
2.3  Brief Analysis of Tools
2.4  Completed Session Report
2.5  Test Coverage
2.6  Analysis Summary of Test Management Tools
4.1  Research Planning
LIST OF FIGURES

FIGURE NO.  TITLE

2.1   Tmap next
2.2   Tmap stages
2.3   Business design test methodologies
2.4   Test management support methodology
2.5   Activity hierarchy
2.6   Work breakdown
2.7   Waterfall methodology
2.8   Scrum methodology
2.9   XP methodology
2.10  Phase transition
2.11  Agile methodology
3.1   SpiraTeam architecture
3.2   SpiraTeam management phases
3.3   The main entities of SpiraTeam
3.4   User login screen
3.5   Project view/edit
3.6   SpiraTeam entity relationship
3.7   Incident workflow
3.8   Incident workflow steps
3.9   Workflow transition
3.10  OBA System composition
4.1   Phases in service engineering
4.2   Research flowchart
4.3   Installation wizard
4.4   Administrator's page
4.5   View/edit user
4.6   Project membership
4.7   Edit incident type
4.8   Edit incident status
4.9   Edit incident priority
4.10  Document list
4.11  Hierarchy of documents folder
4.12  My Page task pane
4.13  My assigned test cases
4.14  Pending test run
4.15  Excel importer login screen
4.16  Requirement list
4.17  OBA requirements
4.18  Requirement details
4.19  Requirement coverage
4.20  Task assignment window
4.21  Release planning
4.22  Test case list
4.23  Test case description
4.24  Test steps
4.25  Releases
4.26  Test run
4.27  Incident list
5.1   Requirement summary
5.2   Test case summary
5.3   Test run summary
5.4   Incident summary
5.5   Synchronization chart
CHAPTER 1

INTRODUCTION

1.1 Introduction
The management of test processes has become of paramount importance to success in finding the shortcomings of a software system under development. The increasing need for reliable software systems makes software testing necessary; hence the management of the test processes cannot be left out if the exercise is to be kept within its schedule.

The traditional method of managing the test processes is becoming unaffordable to customers and a very difficult task for testing engineers. The use of a test management tool, on the other hand, can lead to another set of problems when some required features are absent. This project concentrates on studying, applying, implementing and evaluating a software test management tool (SpiraTeam), using the On-board Automobile software project as a case study, in order to eradicate the stated problems.
1.2 Background of the Study
Poor software test management is one of the direct factors reducing the dependability of software, owing to inconsistencies and lack of conformity between the requirements and the test cases.

Software quality is essential to business success. However, software quality means different things to different people, which makes it difficult to discuss. For development teams, for example, software quality means that performance, scalability, functionality, reliability, accuracy and usability must all be addressed. For end users, it means IT systems that deliver "what we need, when we need it." Worldwide use and cross-platform functionality complicate the process further, because team members often use different test management tools on a single software project: features present in one tool are absent in another, yet all of them are required.

This complexity, and the volume of tasks involved in managing manual software testing, automated regression testing and performance testing, make formalized software test management a must if organizations want to gain control of the process. The On-board Automobile system is a software project developed by students at the Center for Advanced Software Engineering (CASE). The test processes of this software are managed manually, which leads to a number of errors. It is the belief of the author that introducing SpiraTeam into this test management will enhance the quality of the development and reduce the time scheduled for testing, as the coming chapters will show.
1.3 Statement of the Problems
There are many problems associated with the manual test management of the On-board Automobile system (OBA); they make the whole process tedious and inadequate, and lead to the process being abandoned uncompleted. The main problems include the following:

1. Processes are ad hoc and not repeatable across projects.
2. There is no visibility between test cases, requirements and defects: how do you know when you are truly 'done'?
3. Measuring progress and productivity during testing is time-consuming and difficult.
4. It is hard to share information across the project and get real-time metrics about the quality of the system being tested.
5. There is no central repository of test results from all sources.
6. Sharing tasks among the testing team is difficult and time-consuming.
7. Many incidents are left undiscovered and unreported.
1.4 Objectives

The problem statement above serves as a premise for a set of specific objectives that constitute the major milestones of this research work. The objectives of this research are:

1. To study the features and functionalities of SpiraTeam.
2. To apply those features to the On-board Automobile software system (OBA).
3. To implement test management processes on the requirements of the On-board Automobile (OBA).
4. To demonstrate and evaluate the benefit of using SpiraTeam in managing the test processes of the On-board Automobile software.
5. To inform the testing team members of their individual tasks online via e-mail.
1.5 Scope

The software test management tool in question, SpiraTeam, has many advantages and abilities, such as integration with test run tools, a central data repository, and migration of data from test run tools or data management systems such as MS Excel, JIRA, Bugzilla and so on. It can also be used to plan the whole project, serving as a project planning tool as well as a test management tool. This project lays emphasis on the software test management of the selected case study, even though all the SpiraTeam features mentioned interact with one another. The focus is on the following parts:
1. Requirement management
2. Test case management
3. Requirement test coverage
4. Release management
5. Incident and defect tracking
1.6 List of Deliverables

The following items are the deliverables expected to be presented to the client at the end of the process, for onward review and an understanding of the achievements made in managing the testing of the software and in satisfying the software requirements.
1. Requirement Report
   a. Requirement Summary
   b. Requirement Details
   c. Requirement Plan
   d. Requirement Traceability
2. Test Case Report
   a. Test Case Summary
   b. Test Case Details
   c. Test Set Summary
   d. Test Set Details
   e. Printable Test Scripts
   f. Test Run Details
   g. Test Run Summary
3. Incident Report
   a. Incident Summary
   b. Incident Details
4. Task Report
   a. Task Summary
   b. Task Details
5. Release Report
   a. Release Summary
   b. Release Details
   c. Release Plan
6. Requirement Graphs
   a. Requirement Summary
   b. Requirement Coverage
7. Test Case Graphs
   a. Test Case Summary
   b. Test Case Run Summary
   c. Test Run Progress Rate
8. Incident Graphs
   a. Incident Summary
   b. Incident Progress
   c. Incident Cumulative Count
   d. Incident Aging
   e. Incident Turnaround Time
9. Task Graphs
   a. Task Summary
   b. Task Velocity
   c. Task Burnup
   d. Task Burndown
10. Test Plan
11. Test Case Description
12. Test Summary Description
1.7 Thesis Outline

Chapter 1: This chapter introduces the topic as a whole and outlines the background of the study, the problems observed in the manual system used for managing the tests of the On-board Automobile, the scope of the thesis, the objectives, the list of deliverables, and the activities to be performed in the coming chapters.
Chapter 2: This chapter presents the literature review on software test management, test management tools, SpiraTeam, and several discussions of specific issues regarding test management. The features of SpiraTeam and of other test management tools are also discussed. This leads to improving the management of tests by means of the selected tool, for greater performance in achieving a better result.
Chapter 3: Research methodology. The project work is more about applying a software test management tool to a specific software project (OBA) to achieve good testing processes than about deep theoretical findings. This chapter outlines the methodology used in evaluating the tool, the software development life cycles it supports, and the emergence of test management from the project planning and development phases. The workflows of incidents, artifacts and the tool are discussed as well.
Chapter 4: The design and implementation of SpiraTeam on the On-board Automobile are discussed in this chapter: how the application took place, the snapshots, the stages in managing the tests, the test run progress, the workflows agreed for solving a particular type of incident, and the method of finding a particular report.
Chapter 5: The conclusion and recommendations on the activities carried out are discussed here. The results of applying and implementing test management on the case study are evaluated, and the final decision on adopting the tool for test management at CASE is discussed as well.
CHAPTER 2

LITERATURE REVIEW

2.1 Introduction
For quite a long time, experts have been holding discussions, making arguments and expressing their ideas in the field of software testing. This chapter focuses on reviewing such arguments and discussions. The experts' ideas are summarized and synthesized to obtain up-to-date information, which forms the basis of our goal of in-depth research on software test management. The selected areas under review include software test management, software test management tools, the On-board Automobile (OBA), and test management approaches (methodologies).

It is also the goal of this chapter to extract the valuable information and knowledge needed for applying SpiraTeam: the comparison between the existing manual test management of OBA and the use of SpiraTeam, the comparison between different types of software test management tools, and the analysis of the information gathered for the purpose of further research.
2.2 What is Test Management?
The accelerating amount of software in devices and applications, combined with delivery of multiple configurations or versions across iterations, makes significant demands on test management. Complexity and volume must be consistently managed; otherwise, deadlines and quality are at risk [1].

The functional requirements of software systems are ever increasing: they grow huge, become difficult to control, take time to plan and manage, become more expensive, and contain more defects, which makes the system very weak [2, 3, 4]. Sometimes a software system does not work as well as expected because of the variety of faults and failures in the system, which bring losses to users directly or indirectly. We cannot always trust our software; the problem of software dependability grows and creates fear in the minds of the stakeholders. With modern software development techniques aiming for high quality and dependability, it has been realized that poor software test management is one of the direct factors reducing the quality of software. This is due to inconsistencies and lack of conformity between the requirements and the test cases [5, 6, 7].

To eliminate the use of various management tools in handling a single project, achieve the target of dependability, and reduce the heavy assessment workload, the long duration of software test management, and testing expenses that can exceed 80% of the total cost of a program, a platform that encompasses all the required features of software test management must be used. The tool should allow broad integration and migration with a variety of test run platforms; this will help in solving all these problems [8, 9].
2.3 Test Management Tools

This section compares different test management tools. Analyses have been carried out on some of them in order to give a clear understanding of why SpiraTeam was chosen for the implementation of test management on the On-board Automobile software (OBA).
2.3.1 Testopia

Testopia is an extension of Bugzilla (a web-based bug tracking system). The tool is designed with the intent of tracking test cases, and it allows a testing manager or tester to integrate bug reporting with their test case run results. Though designed with software testing in mind, it can be used to handle various engineering processes. Despite the fact that it evolved to handle some issues that could not be addressed by Bugzilla, it is handicapped in some test management areas, such as integration with other test run tools. Its bias toward integrating only with Bugzilla and a few other products makes it difficult for test managers to recommend the use of this tool [10].
2.3.2 QaTraq Professional

QaTraq Professional has a history as one of the leading software test management tools. It is designed on the basis of the IEEE standard and provides the capability to create and update test scripts, with instant access to a variety of reports, test cases and test results. It is also very good at improving the visibility of testing in a single environment and at pinpointing test case execution progress.

Interesting features of this product include a robust GUI recording capability that allows testers to record test cases with little or no programming, and five different automation technologies, such as SMART record (with which the tester can automate any application on a Windows platform), testing of web applications on all popular browsers, and delivery on the promise of test automation through a simple user interface [19]. The study conducted on this tool shows that it has the following advantages over most of the existing software test management tools:

1. It is the only solution providing full version control.
2. It enables the enforcement of a test script review process.
3. It is based on the IEEE software testing standard (IEEE 829).
2.3.3 Borland Tool

The Borland tool is a software test management tool. It provides visibility of the entire software application testing process, from planning to test-and-fix cycles to final decision support. It is a flexible, adaptable and open software test management framework that enables the tester to manage the software testing processes, whether the organization is small or large, centralized or distributed. With Borland, the tester can harness the chaos of software testing by understanding the answers to key questions such as:

1. Are critical requirements completely covered by appropriate tests?
2. Can I run tests in different software configurations on multiple platforms effectively?
3. Is it possible to better manage complex, multi-configuration test environments?
4. How should I prioritize tests, and how do I know when to update them?
5. What are the test results for a specific build?
6. Where are the defects, and what are the necessary steps to resolve them?
7. What is the status of each defect? Is an open issue holding up the release?
8. What progress is being made against our testing strategy?
9. What are the quality trends for an application?
10. Is the application really ready for deployment?
2.3.4 SilkCentral Tool

SilkCentral comprises many features of software test management. It is a product for software development organizations that undertake different kinds of development life cycles, including agile, waterfall and traditional project methodologies. It provides a unified framework for managing functional and performance testing activities, whether test runs are manual or automated, and allows the tester to see the effectiveness of tests across distributed projects. It can also integrate with other testing tools that perform unit and functional tests, eliminating the need to rip out and replace existing technologies [21].

The most important features of this tool are its ability to aggregate existing test results regardless of origin (whether Perl, batch, or custom scripts) and to provide normalized, consolidated project metrics across the software development life cycle (SDLC). It can also integrate with other test management technologies, such as the Borland tool, and with third-party application lifecycle management tools like QaTraq. This helps the test manager ensure that software testing becomes a well-managed process that spans the entire SDLC and aligns business and development priorities.
2.3.5 Ken Test

Ken Test is a web-based test management tool with the capability of linking artifacts, tracking changes, storing the related testing documents, and showing real-time charts and tables. It works in most standard browsers, and it looks and acts like a desktop application because it loads quickly, yet it runs in a browser. It is good at consolidating different views into a single repository and common format, which simply means that a testing team can share and re-use previous test cases, making them more efficient in the testing environment. The study conducted on this tool indicates that the authors built it around the features testers need, not features that merely sound good; it is designed to be simply effective [22].
2.3.6 Quality Center 9.2

Quality Center 9.2 is a software test management tool developed by Mercury Interactive Corporation, widely known as HP Quality Center (QC) and formerly called HP TestDirector for Quality Center. It is a web-based test management tool built on client-server technology, with five main modules/tabs for the management of testing processes: Releases, Requirements, Test Plan, Test Lab and Defects. There are many additional modules as well, depending on the various add-ons the tester wishes to install, such as BPT (Business Process Testing); the tool provides the user with a number of add-ons to suit the nature of the software project at hand [23].
2.3.7 Rational Test Manager

Rational Test Manager is a test management tool built for extensibility. It has all the features needed for test management activities and provides a wide range of support for everything regarding test management, from purely manual test approaches to various automated paradigms, including functional testing, unit testing, regression testing and performance testing. It is meant to be accessed by all members of a project team, ensuring high visibility of test coverage information on a real-time basis, defect trends, and application readiness [23].
2.3.8 Wip-CAFÉ TMS

This software test management tool provides verification and validation services across the entire product and application release lifecycle. It helps by introducing defect-free, feature-rich technology for managing the tests of applications in as short a time as possible. It also comes with expert solutions for selecting testing tools, defining performance targets and metrics, test consulting and planning, test design, test case preparation, test execution, analysis of test results, defect tracking, and the identification and removal of performance bottlenecks. It covers many aspects of test management activity in a single tool, is used for both manual and automated test management activities, and supports many different kinds of development life cycles [23].
29
2.3.9
Choosing SpiraTeam
This is considered to be an integrated Application Lifecycle Management
system that manages your project's requirements, releases, test cases, issues and tasks
in one unified environment and display the output in a single page. This tool
embedded the feature of SpiraPlan (an agile enable project management solution)
and SpiraTest (a highly acclaimed quality assurance system). It works in conjunction
with a variety of automated testing tools, such as test run tools, functional testing
tools and integration testing tools. This gives a clue that the user has to be familiar
with the features of SpiraTest, SpiraPlan and the appropriate automated testing tool
(test running tool) [14].
The functionality provided by this tool in the areas of test management
includes; requirements management, test case management, releases planning,
iteration planning, incident tracking, task management and project / user
management. A brief explanation about the features may bring an understating of the
tool for the reader to able to catch the benefit easily.
2.3.9.1 Requirements Management

In requirements management, a tester has the ability to add, insert, import/export, edit and delete requirements. The environment for manipulating requirements is organized hierarchically, just like a scope matrix. Each requirement carries an importance level (ranging from critical to low), and its status is also identified: for example, a stakeholder can easily see whether a requirement's status is requested, planned, in progress or completed. The most interesting aspect of requirements management is the tester's ability to map each requirement to its test cases, called "requirement test coverage".

Requirement test coverage ensures that each requirement is covered by an appropriate number of test cases; requirements that have not been covered can be seen at a glance. From a development perspective, the testing team can estimate the lowest-level requirements in the requirements matrix to determine complexity and the associated resourcing. Once the high-level release schedule is determined, the requirements can easily be prioritized and scheduled against the appropriate release according to their business priority [14]. With the releases at hand, the requirements are further decomposed into their constituent low-level project tasks, which can be assigned to the project team. The system tracks progress and revised estimates for the tasks and displays them against the requirements, so that risks to the schedule can be quickly determined.
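The requirement test coverage idea described above can be sketched as a small data model. This is an illustrative sketch only, not SpiraTeam's actual API; the names (`Requirement`, `coverage_report`, the `RQ-*`/`TC-*` identifiers) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    title: str
    importance: str                                   # "critical" .. "low"
    test_case_ids: set = field(default_factory=set)   # mapped test cases

def coverage_report(requirements):
    """Split requirements into covered and uncovered lists, mirroring the
    'seen at a glance' coverage view described in the text."""
    covered = [r for r in requirements if r.test_case_ids]
    uncovered = [r for r in requirements if not r.test_case_ids]
    return covered, uncovered

reqs = [
    Requirement("RQ-1", "Engine diagnostics", "critical", {"TC-1", "TC-2"}),
    Requirement("RQ-2", "Fuel display", "high"),       # no test cases yet
]
covered, uncovered = coverage_report(reqs)
```

Here a requirement counts as covered as soon as at least one test case is mapped to it; a real tool would also track whether the mapped test cases have passed.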
2.3.9.2 Test Case Management

Test cases can be managed in an organized environment, just like requirements. Using SpiraTeam, a tester can add, insert, delete and edit test cases and store them in hierarchical folders. Each test case is composed of test steps, in which the test planner indicates the individual actions a tester must take to complete a particular test run; each step is followed by a description of the expected result and the sample data required to carry out the test. When a user executes a test case, the results are stored in a test run that contains the success/failure status of each test step as well as the actual result the tester observed. An incident is raised when a result contrary to the expected one is found. In addition, each test case is mapped to one or more requirements that the test effectively validates, providing the test coverage for those requirements.

At test case execution time, a failure can optionally be used to record a new incident, which can then be managed in the incident tracking module. Complete traceability can thus be achieved from a recorded incident back to the underlying requirement that was not satisfied. To streamline the assignment and tracking of multiple test cases, a tester can also select groups of test cases and arrange them into test sets or test suites. Each test set can contain test cases from a variety of different folders and can be associated with a specific release of the system being tested (the On-board Automobile).
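The test step, test run and incident-on-failure relationships described above can be sketched as follows. This is a minimal illustrative model, not SpiraTeam's implementation; all class and field names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str       # what the tester must do
    expected: str     # the expected result for this step

@dataclass
class Incident:
    summary: str

@dataclass
class TestRun:
    # each entry: (step, "pass"/"fail", observed result)
    results: list = field(default_factory=list)
    incidents: list = field(default_factory=list)

    def record(self, step, observed, passed):
        """Store a step's outcome; a failed step raises a new incident,
        as the text describes for optional incident recording."""
        self.results.append((step, "pass" if passed else "fail", observed))
        if not passed:
            self.incidents.append(Incident(f"Failed: {step.action}"))

steps = [TestStep("Turn ignition on", "Dashboard lights up"),
         TestStep("Open diagnostics screen", "Fault codes listed")]
run = TestRun()
run.record(steps[0], "Dashboard lights up", passed=True)
run.record(steps[1], "Screen stays blank", passed=False)
```

The second step fails, so the run ends with one pass, one fail, and one automatically raised incident that could then feed the incident tracking module.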
2.3.9.3 Release Planning
In release planning, a tester has the ability to create different versions or releases of the application being tested. The project can be decomposed into an unlimited number of specific project releases, each with a name and version number attached. The requirements and test cases developed during the design phase can easily be assigned to these different releases. During the execution of a series of test cases, the testers are able to choose the version of the project being tested, and the resulting test run information is then associated with that release.
From the perspective of project planning, the releases are the major milestones in the project; they can be further sub-divided into iterations, which are separate mini-projects with their own scope and tasks. The project's requirements are scheduled at a high level against the releases, and the detailed tasks are scheduled against specific iterations within a release. Incidents raised during the testing process are associated with the release under test, allowing the development team to easily determine which version of the project is affected. Finally, as incidents are resolved and verified during the testing phase, the appropriate release can be selected to indicate in which release the incident was raised and in which it was resolved.
2.3.9.4 Iteration Planning
A tester has the ability to track the individual iterations that comprise a release; this gives the tester or project manager the option to manage agile-methodology projects within the same environment. In contrast to the release planning stage, where high-level requirements are estimated and scheduled, the iteration planning phase permits assigning each of the tasks in the project backlog to a specific iteration. With this, the available effort in the iteration can easily be allocated within the release.
When iterations are created, the tester specifies the start and end dates together with the notional number of project resources assigned to the iteration, any non-working days, and the working hours per day. The system uses this information to calculate the planned effort available to the iteration, from which it subtracts the estimated task effort values to determine how much effort remains available to schedule; this helps with budgeting.
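The effort calculation just described amounts to simple arithmetic: working days times resources times hours per day gives the planned effort, from which the estimated task effort is subtracted. A minimal sketch, with the function name and sample figures as illustrative assumptions rather than values from SpiraTeam:

```python
def available_effort(total_days, non_working_days, resources, hours_per_day, task_estimates):
    """Planned effort available to an iteration, minus already-estimated task effort."""
    working_days = total_days - non_working_days
    planned = working_days * resources * hours_per_day  # total person-hours available
    return planned - sum(task_estimates)               # hours left to schedule

# Hypothetical example: a 10-day iteration with 2 non-working days,
# 3 testers at 8 h/day, and 150 h of tasks already estimated.
print(available_effort(10, 2, 3, 8.0, [60.0, 50.0, 40.0]))  # 42.0 hours remain
```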
2.3.9.5 Incident Tracking
The tester has the ability to create, edit, assign, track, manage and close incidents that are raised during the testing of the software system under development. Incidents are divided into categories based on their nature: bugs, issues, training items, enhancements, risks, limitations and change requests. Each category has its own workflow and business rules. Typically, each incident is raised initially as a 'New' item. The incident is reviewed when the project manager meets the customer or stakeholders, and its status changes from 'New' to 'Open' once it is assigned to a developer for correction.
Each incident has a priority level, agreed between the tester and the stakeholders, ranging from critical and high to medium or low. When the developer works on an 'Open' incident, its status changes to 'Fixed' if the correction succeeds, or to 'Not Reproducible' if it cannot be reproduced, depending on the actions taken (or not taken). After an incident is fixed, the project manager and customer verify the fix, and the status is then changed to 'Closed'. Robust sorting and filtering of all the incidents in the system help the project manager to easily find the needed incident, as well as to view the incidents associated with particular test cases and test runs. This enables a walk-through from the requirements coverage display right through to the open incidents affecting a requirement, including the history of the incident up to the point when it is fixed or closed.
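The status workflow described above can be sketched as a small state machine. The exact statuses and transition rules in SpiraTeam are configurable per incident type, so the transition table below is an illustrative assumption based only on the statuses named in the text.

```python
# Allowed status transitions (assumed from the workflow described in the text)
ALLOWED = {
    "New": {"Open"},                         # reviewed and assigned to a developer
    "Open": {"Fixed", "Not Reproducible"},   # outcome of the developer's work
    "Fixed": {"Closed", "Open"},             # verified and closed, or reopened
    "Not Reproducible": {"Closed", "Open"},
}

class Incident:
    def __init__(self, title, category="Bug", priority="Medium"):
        assert priority in ("Critical", "High", "Medium", "Low")
        self.title, self.category, self.priority = title, category, priority
        self.status = "New"
        self.history = ["New"]

    def move_to(self, new_status):
        """Advance the incident, rejecting transitions the workflow does not allow."""
        if new_status not in ALLOWED.get(self.status, set()):
            raise ValueError(f"cannot go from {self.status} to {new_status}")
        self.status = new_status
        self.history.append(new_status)

# Hypothetical incident walked through the happy path
inc = Incident("Login fails on empty password")
inc.move_to("Open")
inc.move_to("Fixed")
inc.move_to("Closed")
print(" -> ".join(inc.history))  # New -> Open -> Fixed -> Closed
```

Encoding the workflow as data rather than scattered conditionals makes it easy to adjust the rules per incident category, which mirrors the per-category workflows the text mentions.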
2.3.9.6 Task Management
A task is simply the duty of the test manager or a tester in a particular project.
It is the discrete activities that each member of the development team would need to
carry out for the requirement to be fulfilled. The tasks are assigned to the team
member by the project manager; it can be assigned to an individual user as well as
associated with a particular release or iteration. The system can then be used by the
project manager to track the completion of the different tasks to determine if the
tasks fulfill their missions. It is also a way of identifying the potentiality of a
particular member as well as finding the weakness of the team member. Late starting
task is also identified.
2.3.9.7 Projects and Users
The SpiraTeam test management tool supports the management of an unlimited number of users and projects; a tester in one project can be an observer in another. Team membership and task allocation can be administered through the same web interface for all the project members involved. All artifacts (requirements, tests and incidents) are associated with a particular project, and each user of the system can be given a specific role for that project. So a power user of one software project may be merely an observer of another. That way, a central set of users can be managed across the enterprise, whilst devolving project-level administration to the manager of each project.
Each user has a profile, and each project has its own personalized dashboard view of all the pertinent and relevant information. This feature reduces the information overload associated with managing project information and allows a single user or project snapshot to be viewable at all times for rapid decision-making. The project manager can easily view the progress of a particular member across the project tasks assigned to him on a real-time basis.
2.3.9.8 Artifact Relationships
SpiraTeam has various artifacts and functions used in managing the system; such artifacts include projects, users, requirements, tests and more. These artifacts help in providing a systematic way of using SpiraTeam to manage a project. They are listed on the various screens of the system, and each has a unique identification number to make it easier to identify at a glance.
The tables below show the artifact naming convention and descriptions.

Table 2.1: Artifact naming convention

Table 2.2: Artifact descriptions
2.3.10 Comparative Study on Test Management Tools
The test management tools discussed so far have various advantages and disadvantages. Some of these advantages are uniform and can be seen in all of them; some features exist in one of the tools but not in another; and some tools are suitable for managing the tests of a particular kind of software and unsuitable for others. This makes it difficult for test managers to categorically recommend a single test management tool as the best. In view of these issues, some new test management tools have evolved in an attempt to address the problems. Some of these tools can integrate with, and allow the migration of data from, a variety of data stores and software testing tools. The following table shows the analysis.
Table 2.3: Brief Analysis of Tools [23]

Quality Center
Strength: very popular testing tool; good support; good cross-browser support; good online communication.
Weakness: no Java support out of the box; a more expensive solution.

Rational Test Manager
Strength: economical; lots of supporting tools; a good, extensible scripting language through VB; good data-creation facilities; good online communication.
Weakness: the scripting language is a little basic out of the box, and a bit confusing with the tools.

SilkCentral Test Manager
Strength: built-in recovery system for unattended testing; the ability to test across multiple platforms, browsers and technologies in one script.
Weakness: expensive; no object-identity tool is available.

Wip-CAFÉ TMS
Strength: customized templates for test specifications, defects and test cases; the tools support scripting design management.
Weakness: no object-identity tool available; some limitations in the scripts used for automation.

SpiraTeam
Strength: customizable dashboard; unlimited users; real-time access; a variety of report formats.
Weakness: less support for a vast range of operating systems.
The analysis gathered in Table 2.3 indicates that SpiraTeam is stronger than the remaining tools and has fewer weaknesses compared to them.
2.4 Test Management Process
A test management methodology is a set of practical ideas applied during the management of software test activities. The techniques and methods used in planning, analysis and design, and in developing or managing the tests, lead to great success in the field of software quality management. Some of these approaches include the following.
2.4.1 TMap (Test Management Approach)

Late detection of defects is costly. TMap is a methodology for structured testing focused on early detection of defects in a cost-effective manner while providing insight into software quality. It has become the European standard for structured testing. Following substantial user feedback, and taking into account new technologies and approaches to software development and testing, it has been completely revised and updated.
Figure 2.1 TMap NEXT (Business Driven Test Management, a structured test process, a tool set)
The TMap approach has four essential components; each represents what should be carried out at that level, as described below.

1. Business Driven Test Management (BDTM): the customer (whether internal or external to the organisation) manages the test process on rational and economic grounds, balancing cost, risk, time and result.
2. A structured test process, ranging from test planning and test execution to test management.
3. A complete tool set: technique descriptions together with organisational and infrastructure support.
4. An adaptive method, suitable for test situations in most environments (such as new development, maintenance, waterfall / iterative / agile development / RUP, customised or packaged software, and outsourcing).

Figure 2.2 TMap stages
This methodology offers the tester and the test manager guidelines to deliver results for the customer in an effective manner, using, among others, V&V (Verification & Validation) and ISO 9126 concepts applied to the software testing area. It helps to define a software test strategy based on risks and on project constraints such as the project schedule and resource allocation. Because it is based on a business-driven test management (BDTM) approach, the client can control the test process based on rational and economic considerations.
Figure 2.3 Business driven test methodology (from critical success factors, change requests, requirements and business processes, the test goals are collected and the assignment formulated; risk categories and light/heavy testing are determined against the test basis; test techniques are assigned and test cases specified for test execution, while the client balances result, risk, time and money)
Figure 2.3 illustrates the translation of business goals into testing goals. The methodology gives a full description of the total test process from beginning to end. The test process is split into phases, deliverables and activities so that it becomes manageable. This breakdown makes the approach suitable for all kinds of tests during development and maintenance. The relationship between the different test phases makes sure that tests are executed at the right moment. Transparency in tasks and responsibilities between stakeholder and customer, between tester and test manager, and between test levels also plays a vital role [25, 26].
The approach contains a complete 'tool set'. The tool set consists of a user guide (how to test), advice regarding test items and environment (where and with what testing is done) and organization (who is testing). This allows a uniform execution of the various test activities across a project or organization and helps in improving the planning and management of the project. The method is flexible as well, and therefore suitable for test situations in most environments, such as new development, maintenance, waterfall / iterative / agile development, and tailor-made or packaged software.
2.4.2 Trigent's Test Management Methodology

The Trigent test methodology supports synchronizing testing activities with the software development lifecycle to ensure the success of a product. It minimizes bugs during development and ensures quality in the test processes of a project. The approach is categorized into different phases, as shown in the diagram below.
Figure 2.4 Test management and support methodology (four phases, Plan/Initiation, Design, Execution and Adapt, covering test scope, test strategy, test tools, test logistics, test design, test cases, the automation plan, test execution, unit testing, system testing, test reporting, test regression and user support, mapped against the Discovery, Design, Development and Deploy stages of the product or application development lifecycle)
2.4.2.1 Plan / Initiate
During the Plan / Initiate phase, the team studies the characteristics of the target software product or application and the business needs of the system, that is, at the time of requirements elicitation. In most cases this phase occurs in parallel with the business requirements stage of the development lifecycle. The scope of testing and the success criteria are defined in this phase, and the test strategy is usually decided based on the context of the product or software release.
The activities in the Plan / Initiate phase are:
1. Study the product/application characteristics
2. Define test process and communication mechanism
3. Understand test scope and test priorities
4. Define types of testing required
5. Identify test techniques and prepare test automation plan
6. Choose test tools
7. Prepare detailed test plan
2.4.2.2 The Design Phase

The Design phase consists of designing test scenarios, analysing application behaviour and establishing the baseline for the target product or application. Usually this phase takes place at the same time as, or in conjunction with, the design phase of the software development lifecycle. Trigent also fine-tunes the test plan and determines the precise test schedule, test effort, test data and deliverables for a particular phase as well as the releases.
The activities in the Design phase are:
1. Develop test scenarios, test cases
2. Identify reusable test cases from Tools repository
3. Create test data, test bed infrastructure and plan logistics
4. Identify test data to test case coverage
2.4.2.3 Execute Phase

In the Execute phase, Trigent sets up all the applications that are to be tested; the product is tested and the findings are reported through defect reports and progress reports. Specific test metrics are collated in this phase.
The activities in the Execute phase are:
1. Run tests and write test reports
2. Benchmark product with competition
3. Prepare bug reports
2.4.2.4 Run Tests and Write Reports

This is the phase where the tester applies the test procedure, the test steps and the test scenarios as stated in the test plan. This is called test execution; any result found during execution that is contrary to the expected result is recorded as an incident. If the result tallies with what was written in the test plan, it is recorded as successful.
2.4.2.5 Adapt
In this phase, the tester reviews the knowledge gained during the previous phases when carrying out refinements to the test design. Knowledge from the software product or software launch, such as beta tests, is also used to adapt the test process. Refinements and enhancements to test techniques may also emerge in this phase, and these enhancements are taken up for implementation in subsequent retest iterations. Specific automation opportunities are also identified during this phase. While the methodology implies a sequential, staged approach, depending on the context the four stages could also be carried out simultaneously.
The activities in the Adapt phase are:
1. Verify bug fixes are working
2. Run regression tests
3. Choose test cases to be automated
4. Develop test automation suites
5. Assess test coverage and test adequacy
6. Decide next iteration plan
In fixing a bug or incident, a proper sequence of steps is taken depending on which type of incident is at hand; this is called the 'workflow'. The workflow runs from New to Open, Assigned, Fixed, Verified, Closed, Reopened and so on. An incident or bug recorded as closed is believed to have been fixed appropriately; all closed bugs should be further verified to decide whether they need to be reopened or closed completely.
Some of the test cases can be chosen for automated testing; in this phase, the tester determines which of them are going to be part of such automation. Test cases that have been developed, or bugs that have been fixed, may trigger the testing of the remaining affected classes or units. In this phase it is the responsibility of the test manager to ensure that this is carried out properly.
Also in this phase, the tester can develop test automation suites. Amongst the automated test cases developed earlier, the tester groups a number of test cases into a test suite, which is later assigned to a tester or mapped to the appropriate requirement for the test run. For test coverage, that is, the mapping between the requirements and the test cases, the test manager ensures that no requirement is left uncovered and that the coverage is adequate. This verification helps in identifying problems that often occur when testing staff rush to complete the activity.
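The coverage check described above (no requirement left uncovered) can be sketched as a small set computation; the function name and identifiers below are illustrative assumptions, not taken from the OBA project.

```python
def uncovered_requirements(requirements, coverage):
    """Requirements with no mapped test case: the gaps the test manager must close.

    `coverage` maps test case IDs to the requirement IDs they validate.
    """
    covered = {req for reqs in coverage.values() for req in reqs}
    return sorted(r for r in requirements if r not in covered)

# Hypothetical example: RQ-2 has no test case mapped to it
reqs = ["RQ-1", "RQ-2", "RQ-3"]
coverage = {"TC-1": ["RQ-1"], "TC-2": ["RQ-1", "RQ-3"]}
print(uncovered_requirements(reqs, coverage))  # ['RQ-2']
```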
It is in this phase that the test manager or tester may decide to start planning an iteration, based on the data gathered during release planning. A new iteration may come about as a result of lapses found in a previous iteration, either in the same release or in a different one [27].
2.4.3 Heuristic Testability Method

The heuristic approach consists of a list of tentative ideas that make a software project testable by helping the testers and developers improve the testability of a product. This makes the testing go faster and take less effort. The method comprises the following steps.
2.4.3.1 Controllability

The better we can control the system, the more testing can be automated and optimized. The following help in achieving the system's controllability:
1. A scriptable interface or test harness is available.
2. Software and hardware states and variables can be controlled directly by the test manager.
3. Software modules, objects or functions can be tested separately or independently.
2.4.3.2 Observability

The approach is based on the idea that what you can see is what you can test. The following are the key points to be observed:
1. Past system states and variables are visible or queriable (e.g., transaction logs).
2. System states and variables are visible or queriable during execution.
3. Distinct output is generated for each input.
4. All factors affecting the output are visible.
5. Incorrect output is easily identified.
6. Internal errors are automatically detected and reported through self-testing mechanisms.
2.4.3.3 Availability

To test something, we have to get at it: what the tester needs to test must be available within reach.
1. The system has few bugs (bugs add analysis and reporting overhead to the test process).
2. No bugs block the execution of tests.
3. The product evolves in functional stages (allowing simultaneous development and testing).
4. Source code is accessible.
2.4.3.4 Simplicity

The simpler the system, the less there is to test. The tester should ensure that all the test items are kept simple.
1. The design is self-consistent.
2. Functional simplicity: the feature set is the minimum necessary to meet the requirements.
3. Structural simplicity: modules are cohesive and loosely coupled.
4. Code simplicity: the code is not so convoluted that an outside inspector cannot easily inspect it.
2.4.3.5 Stability

The fewer the changes, the fewer the disruptions to testing. This gives the impression that a tester should be precise about a reasonable set of test items, and try as much as possible to consolidate test items that are similar [28].
1. Changes to the software are infrequent.
2. Changes to the software are controlled and communicated.
3. Changes to the software do not invalidate automated testing.
2.4.3.6 Information

The more information we have, the smarter we test. The tester should gather enough information from the stakeholders and the end users of the system. This gives a way to understand test cases that do not conform to the requirements' needs.
1. The design is similar to products we are familiar with.
2. The technology on which the product is based is well understood.
3. Dependencies between internal, external and shared components are well understood.
4. The purpose of the software is well understood.
5. The users of the software are well understood.
6. The environment in which the software is used is well understood.
7. Technical documentation is accessible, organized, accurate, specific and detailed.
8. Software requirements are well understood.
2.4.4 Session-Based Test Management Method

This is sometimes called 'ad hoc' test management. It is a creative, intuitive process: everything testers do is optimized to find bugs fast, so plans often change as testers learn more about the product and its weaknesses. Session-based test management is one method of organizing and directing exploratory testing. It allows the test manager to provide meaningful reports to management while preserving the creativity that makes exploratory testing work effectively. The explanation of this method includes sample session reports and a tool that produces metrics from those reports.
2.4.4.1 Exploratory Testing

This testing is unscripted, unrehearsed testing. Its effectiveness depends on several intangibles, such as the skill of the tester, their intuition, their experience and their ability to follow hunches. But it is these intangibles that often confound test managers when it comes to being accountable for the results. For example, at the end of the day, when the manager asks for status from an exploratory tester, they may get a vague answer such as 'I tested some functions here and there, just looking around.' And even though the tester may have filed several bugs, the manager may have no idea what the tester did to find them. Even if the manager was skilled enough to ask the right questions about what the tester did, the tester may have forgotten the details or may not be able to articulate their thinking.
These problems arose when doing exploratory testing for a client. Someone has to be accountable for the work, and a status report that reflects what was actually carried out needs to be provided. The testing team has to show that they are creative, skilled explorers, yet produce a detailed map of the work achieved.
2.4.4.2 Application

Session-based test management was invented as a way to make those intangibles more tangible. It can be thought of as structured exploratory testing, which may seem like a contradiction in terms, but 'structure' does not mean the testing is pre-scripted; it means there are expectations for what kind of work will be done and how it will be reported. As in a recording studio, the work is done in 'sessions'. Sessions range from 45 minutes to several hours, but no matter the length, it is time spent testing against a charter for the session. At the end of a session, the tester hands in a session report tagged with important information about what was carried out.
2.4.4.3 The Session Metrics
The session metrics are the primary means to express the status of the
exploratory test process. They contain the following elements:
1. Number of sessions completed
2. Number of problems found
3. Function areas covered
4. Percentage of session time spent setting up for testing
5. Percentage of session time spent testing
6. Percentage of session time spent investigating problems
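The six metrics listed above can be computed directly from per-session records. The record layout assumed below is illustrative, not SBTM's actual report format: each session is taken to hold hours spent on setup, testing and bug investigation, plus the problems found and areas covered.

```python
def session_metrics(sessions):
    """Aggregate the six session metrics from a list of per-session records."""
    total = sum(s["setup"] + s["test"] + s["bug"] for s in sessions)
    pct = lambda key: round(100.0 * sum(s[key] for s in sessions) / total, 1)
    return {
        "sessions_completed": len(sessions),
        "problems_found": sum(s["problems"] for s in sessions),
        "areas_covered": sorted({a for s in sessions for a in s["areas"]}),
        "pct_setup": pct("setup"),          # % of session time spent setting up
        "pct_testing": pct("test"),         # % of session time spent testing
        "pct_investigating": pct("bug"),    # % of session time investigating problems
    }

# Two hypothetical two-hour sessions
sessions = [
    {"setup": 0.2, "test": 1.3, "bug": 0.5, "problems": 3, "areas": ["Login"]},
    {"setup": 0.3, "test": 1.2, "bug": 0.5, "problems": 2, "areas": ["Reports"]},
]
m = session_metrics(sessions)
print(m["sessions_completed"], m["problems_found"], m["pct_testing"])  # 2 5 62.5
```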
Table 2.4: Completed Session Reports [22]

[Table columns: Session, Date, Time, Duration, CHTR, OPP, Test, Bug, Setup, Bugs, Issues; rows Sec01 to Sec04, each recording a session's date and start time, its duration, the proportions of time spent on charter work, opportunity work, testing, bug investigation and setup, and the counts of bugs and issues raised.]
2.4.4.4 Test Coverage Totals

This report shows how test sessions map to test coverage. All numbers in the table below, except Bugs and Issues, represent normal sessions. A normal session takes about 90 minutes of uninterrupted test time by a single tester. If the Total column reports, for example, '15' for a particular area, it means that the specified area was mentioned on session reports totalling a duration equivalent to 15 normal sessions' worth of testing. A coverage area is entered in the 'AREA' column of a test session report only if the test lead verifies that a substantial part of the session covered that area. Thus this is a rough but meaningful indication of test coverage from a tester's perspective.
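The normal-session equivalence described above is a simple division by the 90-minute session length; a minimal sketch, with the constant and function name as illustrative assumptions:

```python
NORMAL_SESSION_MIN = 90  # a "normal" session: about 90 minutes of uninterrupted testing

def normal_sessions(total_minutes):
    """Express an area's accumulated session time in normal-session equivalents."""
    return round(total_minutes / NORMAL_SESSION_MIN, 2)

# Hypothetical example: 1350 minutes of session reports mentioning an area
print(normal_sessions(1350))  # 15.0 normal sessions' worth of testing
```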
Table 2.5: Test Coverage [29]

[Table columns: Total, CHTR, OPP, TEST, BUG, Setup, Bugs, Issues, AREA. The coverage areas include Build 1.2; OS | WIN98; DECIDERIGHT | MAIN TABLE WINDOW; STRATEGY | EXPLORATION & ANALYSIS; STRATEGY | CLAIMS TESTING; STRATEGY | COMPLEX | STRESS TESTING; DECIDERIGHT | QUICKBUILD; DECIDERIGHT | DOCUMENTS WINDOW; DECIDERIGHT | OLE; STRATEGY | COMPLEX | FUNCTION & DATA TESTING; and STRATEGY | COMPLEX | RISK TESTING, each with its normal-session totals and counts of bugs and issues.]
2.4.4.5 Debriefings

Debriefings are held at the end of a session by the tester and the test manager. It has been discovered that the value of SBTM relies on the ability of the test manager to talk with the tester about the work that was carried out. A checklist of questions helps the tester and the manager get the most out of that meeting, which usually ends in about 15 to 20 minutes [29].
Session Debrief Checklist
Charter
1. Have the relevant approved session reports been reviewed?
2. Does the charter match the bulk of the testing that was actually done?
Areas
1. Is there at least one O/S keyword (but only if applicable)?
2. How accurate is the build keyword?
3. Is there at least one existing strategy keyword?
4. Is there at least one product area, as specific as is meaningful, to specify the target?
Duration
1. Is the duration code in line with the actual duration?
2. Was the session continuous and uninterrupted?
TBS
1. Have the TBS definitions been followed consistently?
2. Have the TBS precedence rules been followed exactly?
3. Do the TBS numbers relate to on-charter work only?
Opportunity
1. When the opportunity number is over 0%, what was the opportunity?
2. When the opportunity number is over 25%, consider modifying the charter.
3. When the opportunity number is over 50%, modify the charter for this session and consider doing a new session based on the original charter.
Data Files
1. If there were no data files, why not? What happened?
2. If there were data files, were they original or re-used? If re-used, were they modified in any way? If so, how do they now relate to other sessions that refer to the same data?
3. Is there an associated test coverage outline that should be referenced?
2.4.5 The Scan Tool Approach

The scan tool produces a report about sessions by looking at the headings around them. The methodology used by this tool relies on the skill of the test manager, unlike some other test management tools; this is why its producers provide a manager's guide that discusses session protocols, the benefits and the problems that can be encountered. When using SBTM (session-based test management), the test manager has to have a clear understanding of the way the tool works, through the guidelines discussed, in order to work with its methodology.
Ad hoc testing (also known as exploratory testing) relies on tester intuition. It is unscripted, unrehearsed and improvisational, which raises the question: how does the test manager understand what is happening throughout the test process, so that he can direct the work and explain it to clients? One solution is to test in sessions, each having a charter, a time box, a reviewable result and a debriefing. Scheduling should be flexible: brief enough to allow course correction, long enough to get solid testing done and long enough for efficient debriefings, while being wary of overly precise timing. The debriefing is the measurement that begins with observation: the manager reviews the session sheet to be sure he understands it and that it follows the protocol; the tester answers any questions; session metrics are checked; the charter may be adjusted; the session may be extended; a new session may be chartered; and coaching happens.
After completing the stages mentioned above, there is what the test manager calls the 'agenda PROOF', which consists of Past, Results, Obstacles, Outlook and Feelings [27].
Figure 2.5 Activity hierarchy (all test work fits here, somewhere)
Figure 2.5 illustrates the work breakdown: the stages involved in undertaking the test process using the scan tool approach. The first step divides all work in the process into session and non-session work. Figure 2.6 below shows statistics on the duration each activity takes.
Figure 2.6 Work breakdown, diagnosing productivity (Test, Bug, Setup, Opportunity, Non-Session)
2.4.6 Waterfall

The waterfall model has existed for some time and remained the popular model adopted by software developers before the arrival of agile. The adjustments carried out in this process outline clearly why one of the practices of the agile methodology was selected over the traditional waterfall model when applying SpiraTeam to manage the test processes of the On-board Automobile (OBA) system. Let us look at the waterfall methodology at a glance.

The waterfall model is a sequential software development process in which progress flows steadily downwards, the way water falls. The phases follow one after the other, starting with Analysis, Design, Implementation and Verification, and ending with Maintenance. The waterfall development model originated in the manufacturing and construction industries: highly structured physical environments in which late changes are prohibitively costly. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development. Unlike hardware, however, changes in software can at times be difficult to handle [31].
Figure 2.7 Waterfall methodology
2.4.7 Agile Methodology

Instead of phases, projects are broken down into releases and iterations, each iteration ending with a fully functioning system that can be released. The requirements do not have to be codified upfront; instead they are prioritized and scheduled into iterations. Usually the requirements are composed of 'stories' that can be scheduled into a particular release and iteration. The agile development methodology attempts to provide many opportunities to assess the direction of a project throughout the development lifecycle, including its test management.
The possibilities in achieving this is carried out through regular cadences of
work, called sprints or known as iterations, at the end of which the developer or
testing teams must present a shippable increment of work. By focusing on the
repetition of abbreviated work cycles as well as the functional product they yield
after every cycle or iteration, agile methodology could also be described as
“iterative” and “incremental.” in view of some other methodologies. In waterfall,
development teams only have one chance to get each aspect of a project right. But in
an agile paradigm, every aspect of development such as requirements Management,
Test case Management, design, etc. is seen continually revisited throughout the
lifecycle.
When a team stops and re-evaluates the direction of a project every two
weeks, there‟s always time to steer it in another direction, this also helps the testing
manager in determining the progress of their testing team. The results obtained out of
this “inspect-and-adapt” approach to management, greatly reduces the development
costs, test management risk, and time to market. Because teams can gather
requirements, manage the requirement and at the same time they‟re gathering the test
cases. This phenomenon is known as “analysis paralysis” and cannot really impede a
team from making progress. Due to the fact that team‟s work cycle is limited to two
weeks or some certain limit of time. Depending often the agreement reach by the test
manager and the testing team, it gives stakeholders recurring opportunities to
calibrate releases for success in the real world following the iterations.
In essence, the agile development methodology helps companies build the right product and test it effectively within a small budget. Instead of committing to market a piece of software that has not even been written yet, agile empowers teams to optimize a release as it is developed, so that it is as competitive as possible in the marketplace. In the end, an agile development methodology that preserves a product's critical market relevance, and ensures a team's work does not wind up on a shelf unreleased, is an attractive option for stakeholders and developers alike.
Agile methodology is an approach to project management typically used in software development and testing processes. It helps development teams respond to the unpredictability of building software through incremental, iterative work cadences. Before discussing agile methodologies further, it is best to first turn to the methodology that inspired them: waterfall, or traditional sequential development. It is also important to examine the various agile approaches; for the time being, the explanation of those approaches comes before the traditional waterfall methodology. This is discussed in section 2.5.6.
The various agile methodologies share much of the same philosophy discussed earlier, as well as many of the same characteristics and practices. From an implementation standpoint, however, each has its own recipe of practices, terminology and tactics, from which we are going to select and adhere to the approach most suitable for applying SpiraTeam to the On-board Automobile system (OBA). The main contenders of the day are summarized below:
1. Scrum
2. Extreme Programming (XP)
3. Crystal
4. Dynamic System Development Method (DSDM)
5. Feature-Driven Development (FDD)
6. Lean Software Development
2.4.7.1 Scrum
Scrum is a lightweight management framework that is suitable for SpiraTeam when managing the test processes of a particular software project such as the OBA software. It has a broad ability to manage and control iterative and incremental projects of all types. Many experts, such as Ken Schwaber, Mike Beedle and Jeff Sutherland, have contributed significantly to the evolution of Scrum over the last few years. Over the last couple of years in particular, Scrum has garnered increasing popularity in the software community due to its simplicity, flexibility, proven productivity, and ability to act as a wrapper for the various engineering practices promoted by other agile methodologies, especially in software projects. In this approach the product owner, who is the stakeholder, works closely with the testing team to identify the priority, severity, importance and functionality of the requirements and test cases in the form of a Product Backlog.
The Product Backlog consists of features, bug fixes, issues, risks, non-functional requirements, and whatever else is required in order to successfully deliver a reliable working software system. With priorities driven by the stakeholder, cross-functional teams estimate and sign up to deliver "potentially shippable increments" of software during successive Sprints, typically lasting 30 days or less depending on the nature of the system.
Once a Sprint has been delivered, the Product Backlog is analyzed and reprioritized for further iteration if necessary, and the next set of functionality is selected for the next Sprint, with prioritization taking place just as it did the first time. Scrum has proven able to scale to very large organizations with multiple teams of many people. Some of the attributes of Scrum include the following:
1. It is an agile process to control and manage any development work, such as testing.
2. It is a wrapper for existing software engineering practices.
3. It develops system products iteratively and incrementally in the face of rapidly changing requirements.
4. It controls the chaos of conflicting interests and needs in a software development process.
5. It maximizes co-operation and easily improves communication.
6. If anything gets in the way of development, it detects it and causes its removal.
7. Applied properly, it helps maximize productivity.
8. It implements controlled and organized development for multiple projects, especially interrelated products.
9. It is a healthy system that makes everyone feel good about their job and their contributions.
Figure 2.8 Scrum methodology[14]
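The backlog-driven Sprint planning described above can be sketched in a few lines of Python. This is an illustrative sketch only: the item names, fields and capacity figure are invented, and this is not SpiraTeam's actual planning logic.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    """One Product Backlog entry: a feature, bug fix, risk, etc."""
    title: str
    kind: str           # e.g. "feature", "bug", "risk"
    priority: int       # lower number = higher priority, set by the product owner
    estimate_days: float

def plan_sprint(backlog, capacity_days):
    """Select the highest-priority items that fit into one Sprint."""
    selected, used = [], 0.0
    for item in sorted(backlog, key=lambda i: i.priority):
        if used + item.estimate_days <= capacity_days:
            selected.append(item)
            used += item.estimate_days
    return selected

# Hypothetical OBA-flavoured backlog entries.
backlog = [
    BacklogItem("Speed sensor test cases", "feature", 2, 5),
    BacklogItem("Fix brake warning bug", "bug", 1, 3),
    BacklogItem("Trip logging", "feature", 3, 8),
]
sprint = plan_sprint(backlog, capacity_days=10)
```

After the Sprint, the remaining backlog would be reprioritized and the selection repeated, mirroring the cycle described above.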
2.4.7.2 Extreme Programming (XP)
Extreme Programming (XP) is one of the more controversial methodologies among agilists, despite the fact that it is also very popular. It is an approach to delivering high-quality software quickly and continuously, features that make it possible for the test manager to manage tests using this approach. One of its strengths is the frequent stakeholder involvement that it promotes, along with continuous testing, rapid feedback loops, continuous planning, and close teamwork. To deliver working software at very frequent intervals, typically every 1-3 weeks, the approach has to be adhered to. XP was originally based on four simple values: simplicity, communication, feedback, and courage. It has twelve supporting practices, as follows:
1. Planning Game (such as the test plan)
2. Small Releases based on iterations
3. Customer Acceptance Tests or system testing
4. Simple Design of the project
5. Pair Programming or pair testing
6. Test-Driven Development
7. Refactoring
8. Continuous Integration
9. Collective Code Ownership
10. Testing Standards (IEEE 829)
11. Metaphor
12. Sustainable Pace
In this approach, the stakeholders work very closely with the testing and development teams to define and prioritize the test cases, requirements and granular units of functionality. The testing team estimates, plans, and delivers the highest-priority items to the stakeholder in the form of working, tested software on an iteration-by-iteration basis. To maximize productivity, the practices provide a supportive, lightweight framework that guides the team and ensures high-quality software development and testing processes.
Figure 2.9 XP methodology [24]
Figure 2.9 shows the hierarchy of stages in the XP methodology. It illustrates the releases, iterations and bug-tracking activities during software project development.
2.4.7.3 Crystal
The Crystal methodology is one of the adaptive lightweight approaches to software development and testing processes. It comprises a family of methodologies, such as Crystal Clear, Crystal Yellow and Crystal Orange, whose characteristics are driven by several factors, including team size, system criticality, and project priorities. The family reflects the realization that each project may require a slightly tailored set of policies, practices, and processes in order to meet its unique characteristics; in other words, the approach remains flexible in serving the needs of a particular project [31].
The key tenets of this approach include teamwork, communication, and simplicity, as well as reflection, in order to frequently adjust and improve the process to suit the project in question. Like other agile methodologies, Crystal promotes early, frequent delivery of working software, adaptability, high stakeholder involvement, and the removal of bureaucracy and distractions wherever possible.
2.4.7.4 Dynamic Systems Development Method (DSDM)
The Dynamic System Development Method came into existence around 1994 as a result of the need to provide an industry standard and a framework for project delivery; at the time this kind of approach was known as Rapid Application Development (RAD). While RAD was extremely popular in the early 1990s, the RAD approach to software delivery evolved in a fairly unstructured manner. This led to the creation of the DSDM Consortium, convened in 1994 with the goal of devising and promoting a common industry framework to aid rapid software delivery and enhance quality test management. Since then, the methodology has evolved and matured to provide a comprehensive foundation for planning, managing, executing, and scaling agile and iterative software development projects and testing processes [31].
This approach is based on nine key principles which primarily revolve around business needs and value, stakeholder involvement and collaboration, empowered teams, frequent delivery, and integrated testing. It specifically calls out "fitness for business purpose" as the primary criterion for delivery and acceptance of a system, focusing on the useful 80% of the system that can be deployed in 20% of the time, with intensive testing exercises.
Requirements are baselined at a high level early in the project and given release priorities. All development changes must be reversible, and rework is built into the process. Requirements and test cases are planned, delivered and executed in short, fixed-length time-boxes, also referred to as iterations, and the prioritization of requirements for DSDM projects is carried out using the MoSCoW rules:
M - Must have requirements
S - Should have if at all possible
C - Could have but not critical
W - Won't have this time, but potentially later
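The MoSCoW rules amount to a simple priority ordering over requirements. The sketch below shows how such an ordering might be applied; the requirement names are invented examples, not actual OBA requirements.

```python
# MoSCoW buckets in descending importance.
MOSCOW_ORDER = {"M": 0, "S": 1, "C": 2, "W": 3}

def prioritize(requirements):
    """Sort (name, moscow_letter) pairs so Must-haves come first."""
    return sorted(requirements, key=lambda r: MOSCOW_ORDER[r[1]])

# Hypothetical requirement names for illustration.
reqs = [
    ("Heads-up trip display", "C"),
    ("Engine fault detection", "M"),
    ("Voice alerts", "W"),
    ("Fuel consumption report", "S"),
]
ordered = prioritize(reqs)
# Must-have requirements lead the release plan; Won't-haves are deferred.
```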
Almost all critical work must be completed in a DSDM project. It is also important to note that not every requirement or test case in a project is considered critical; some are high priority, some important, some low. Within each set of requirements or test cases, less critical items are included so that, if necessary, they can be dropped to keep higher-priority requirements from slipping on the schedule; this is what happens when a release is needed as fast as possible. The DSDM project framework remains independent of, and can easily be implemented in conjunction with, other iterative methodologies such as Extreme Programming and the Rational Unified Process.
2.4.7.5 Feature-Driven Development (FDD)
Feature-Driven Development (FDD) first arose from a collaboration between Jeff De Luca and OOD thought leader Peter Coad. It is a model-driven, short-iteration process. The whole process begins with establishing an overall model shape to identify the ideas involved, then continues with a series of two-week "design by feature, build by feature" iterations. The features are small but very useful in the eyes of the stakeholders in terms of the results. The approach designs the rest of the development process around feature delivery using the following eight practices:
1. Domain Object Modeling
2. Developing by Feature
3. Component/Class Ownership
4. Feature Teams
5. Inspections
6. Configuration Management
7. Regular Builds
8. Visibility of progress and results
FDD recommends specific programmer practices such as "Regular Builds" and "Component/Class Ownership". Its proponents claim that it scales more straightforwardly than other approaches and is better suited to larger software development or testing teams. Unlike the other agile approaches, it describes specific, very short phases of work to be accomplished separately per feature: Domain Walkthrough, Design, Design Inspection, Code, Code Inspection, and Promote to Build. The notion of "Domain Object Modeling" has become increasingly interesting outside the FDD community as a result of the success of Eric Evans' book "Domain-Driven Design" [32].
2.4.7.6 Lean Software Development
Lean Software Development is an iterative methodology originally developed by Mary and Tom Poppendieck. It owes much of its principles and practices to the Lean Enterprise movement and to the practices of companies like Toyota. It focuses the team on delivering value to the customer or stakeholder, and on the efficiency of the "Value Stream," the mechanisms that deliver that value. The main principles of Lean are:
1. Eliminating or minimizing waste
2. Amplifying learning
3. Deciding as late as possible
4. Delivering as fast as possible
5. Empowering the team with innovative ideas
6. Building integrity into the system development or testing
7. Seeing the whole project at a glance
Eliminating or minimizing waste through such practices means selecting only the truly valuable features for a system, prioritizing them once selected, and delivering them in small batches called iterations or releases. Lean also emphasizes the speed and efficiency of the development workflow, and relies on rapid and reliable feedback between programmers, testers and stakeholders [12].
It uses the idea of work products being "pulled" by customer requests, known as change requests, and places decision-making authority and ability with individuals and small teams, rather than a hierarchical flow of control, since research shows this to be faster and more efficient. The approach likewise concentrates on the efficient use of team resources, trying to ensure that everyone is productive as much of the time as possible. It therefore emphasizes concurrent work among team members and the fewest possible intra-team workflow dependencies. Its initiators also strongly recommend that automated unit tests be written at the same time as the code, for better testing processes [32].
2.4.8 Conclusion
The study conducted in chapter two of this thesis ran through different approaches related to managing the tests of the On-board Automobile system (OBA). These approaches, as discussed at the beginning of this chapter, include software test management, software test management tools and test management approaches (methodologies).
The essence of all this is to find a suitable tool and approach, and a good understanding of software test management at large, so as to achieve the stated goals. Based on the study of test management tools, section 2.3.10 of this chapter shows the advantages and disadvantages of some selected tools amongst those that underwent this review. This led to the selection of SpiraTeam as the tool suitable for conducting the experiment.
As for test management approaches, a discussion of suitable approaches and an analysis of some of the studied approaches has been presented. Section 2.5.6 of this chapter shows that waterfall was the most popular approach amongst all those existing before the coming of agile. Given the available time, and the finding that the waterfall approach was the most popular, this research analyzed the waterfall and agile methodologies. The agile method was found more suitable for the application than waterfall, for the reasons below.
Some of the drawbacks of waterfall include:
1. It is not flexible to changes in customer requirements.
2. Time is wasted building features that nobody needs.
3. The end user cannot give feedback until the system is completely coded.
4. You do not know how stable the system is until the end.
Figure 2.10 Phase transitions between agile and waterfall.
In agile methodology the requirements for the project do not have to be codified upfront; instead they are prioritized and scheduled for each iteration, and the requirements are composed of "stories" that can be scheduled into a particular release and iteration.
Figure 2.11 Agile methodology
Traditionally, software development organizations use the following tools to manage their lifecycles:
1. Requirements stored in MS Word documents, MS Excel spreadsheets or expensive tools such as RequisitePro or DOORS
2. A high-level project plan (Gantt chart) developed in tools such as Microsoft Project or Primavera and printed out for reference
3. Project estimates prepared using a combination of the high-level project plan and specialized standalone MS Excel spreadsheets
4. Detailed schedules maintained by individual team members using MS Excel, whiteboards or groupware solutions
5. MS Access, MS Excel or a standalone web-based bug-tracking system for tracking issues and defects
As the static project plan with its discrete phases has been replaced by the more flexible agile approach, the old set of tools no longer works:
1. The project requirements and scope are not locked down, so the schedule of releases and iterations needs to be connected to the requirements backlog in real time
2. The project schedule is constantly evolving, with stories being reallocated to different iterations and team members re-estimating the number of stories they can complete (velocity)
3. Defects and stories need to be managed in the same environment, with the project estimates and schedules taking account of both at all times
SpiraPlan is explicitly designed to address these issues and provide an integrated solution: it manages a project's requirements, stories, release plans, iteration plans, tasks, bugs and issues in one environment. The tool is methodology-agnostic and can be used equally well for any agile methodology, including Scrum, AUP, XP and DSDM; it can leverage existing technology investments and integrate with many third-party defect-management systems. Listed below are some of the many features that make this tool support agile project management, and that determined why it is suggested for managing the OBA tests:
1. Develop high-level requirements with initial estimates
2. Create notional project schedule with major releases
3. Prioritize and schedule requirements for each release
4. Determine resourcing levels to deliver required features
5. Decompose requirements into detailed task breakdown
6. Integrated web-based document management.
7. Allocate tasks to iterations based on detailed task estimates
8. Load-balance project resources to maximize project velocity
9. Track issues and defects against project schedule
10. View project velocity, burn down and burn up reports
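Two of the reports listed above, velocity and burn-down, are straightforward to compute from iteration history. The sketch below is illustrative only (invented point values, not SpiraPlan's actual report logic).

```python
def velocity(completed_points_per_iteration):
    """Average story points completed per iteration."""
    pts = completed_points_per_iteration
    return sum(pts) / len(pts)

def burn_down(total_points, completed_points_per_iteration):
    """Remaining work after each iteration (the burn-down line)."""
    remaining, line = total_points, []
    for done in completed_points_per_iteration:
        remaining -= done
        line.append(remaining)
    return line

history = [8, 10, 9]            # points finished in iterations 1-3 (hypothetical)
avg = velocity(history)         # 9.0
line = burn_down(40, history)   # [32, 22, 13]
```

A rising velocity or a flattening burn-down line gives the test manager the early warning described in the feature list.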
The table below compares some of the methodologies.
Table 2.6: Analysis Summary of Test Management Tools [13]

STRENGTHS                                              Waterfall  Trigent's  Agile
Allows for work force specialization                       X          X        X
Order line appeals to management                           X
Can be reported about                                      X
Facilitates allocation of resources                        X
Early allocation of resources                              X
Early functionality                                                   X        X
Does not require a complete set of requirements
at the onset                                                          X        X
Resources can be held constant                                        X
Controls costs and risk through prototyping                           X

WEAKNESSES                                             Waterfall  Trigent's  Agile
Requires a complete set of requirements at the onset       X
Enforcement of a non-implementation attitude hampers
analyst/designer communication                             X
Beginning with less defined general objectives may
be uncomfortable for management                                       X        X
Requires clean interfaces between modules                  X
Incompatibility with a formal review and audit
procedure                                                             X        X
Tendency for difficult problems to be pushed to the
future, so that the initial promise of the first
increment is not met by subsequent products                           X
The incremental model may be used with a complete
set of requirements or with less defined general
objectives                                                            X
Based on the analysis in Table 2.6, SpiraTeam is the appropriate tool for undertaking the exercise: it has more strengths and fewer weaknesses than the remaining tools.
CHAPTER 3
RESEARCH METHODOLOGY
3.1 Introduction
This chapter discusses the frameworks and postulates applied in using the SpiraTeam test management tool to handle the tests of the On-board Automobile software project, along with the practical approaches to software testing (automation and manual testing). SpiraTeam supports the agile methodology. Traditionally, projects are delivered in a series of phases based on increasing levels of certainty about the system being built or under test. The diagram below shows the overall architecture of SpiraTeam.
Figure 3.1 SpiraTeam architecture
3.2 Approaches of SpiraTeam on OBA
This section explains how the agile methodology is applied to implement the experiment, and shows the stages involved in handling the artifacts, attachments, incidents, releases, iterations and workflows of the Automobile software project. The phases or stages are:
1. Requirement Definition
2. High-Level Scheduling
3. Iteration Planning
4. Task Resourcing Allocation
5. Test Execution
Figure 3.2 SpiraTest management phases.
3.2.1 Approach 1: Requirement Definition
The first step is to define the project's requirements: the hierarchical list of all the features of the On-board Automobile software (both business and technical) that the system needs to fulfill. They can be entered by hand, imported from a data source such as Excel, or loaded from other tools such as RequisitePro. The testing team can prioritize the requirements, add attachments, cross-link them and give them project-specific attributes. The project team may not get everything right at this point, but there is no need to worry: the approach is agile, so the requirements will evolve during the project.
3.2.2 Approach 2: High-Level Scheduling
After the requirements comes high-level scheduling: planning out the project's high-level schedule, which includes the major releases in the project, the minor releases, and optionally builds and iterations, depending on the desired granularity. The testing team can then assign the different lower-level requirements to each of the releases, making it possible to start planning the features that will be developed in each release based on customer priority and business value. In some methodologies supported by this tool, such as Scrum, the requirements list is called the project backlog.
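The release and iteration hierarchy just described can be modelled as nested maps. The releases and requirement names below are hypothetical, chosen only to illustrate the structure.

```python
# Hypothetical high-level schedule: requirements assigned to releases/iterations.
schedule = {
    "Release 1.0": {
        "Iteration 1": ["Engine fault detection"],
        "Iteration 2": ["Brake warning"],
    },
    "Release 1.1": {
        "Iteration 1": ["Fuel consumption report"],
    },
}

def requirements_for(release, schedule):
    """All requirements planned for a release, across its iterations."""
    return [req for reqs in schedule[release].values() for req in reqs]
```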
3.2.3 Approach 3: Iteration Planning
Now that the requirements list exists and the priorities have been scheduled, the testing team can start planning the first iteration by decomposing the requirements into detailed project tasks that can be further prioritized and individually estimated. These estimates can then be compared against the top-down requirement estimates, and the tasks assigned to individual iterations. The iteration planning functionality helps determine whether there is enough time and resources to support the planned functionality, and any defects raised in the previous release or iteration can be assigned to the current iteration for resolution. In some methodologies (Scrum) this is called "sprint planning".
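The estimate comparison at the heart of this step can be sketched as follows; the task names and hour figures are invented for illustration, not taken from the OBA plan.

```python
def plan_iteration(tasks, capacity_hours, top_down_estimate):
    """Compare bottom-up task estimates with the top-down requirement
    estimate, and check that the iteration fits the team's capacity."""
    bottom_up = sum(hours for _, hours in tasks)
    return {
        "bottom_up": bottom_up,
        "variance": bottom_up - top_down_estimate,   # positive = over estimate
        "fits_capacity": bottom_up <= capacity_hours,
    }

# Hypothetical decomposed tasks for one requirement.
tasks = [("write test cases", 12), ("set up rig", 6), ("run regression", 10)]
report = plan_iteration(tasks, capacity_hours=30, top_down_estimate=24)
```

A large positive variance or a failed capacity check tells the team to move tasks into a later iteration before committing.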
3.2.4 Approach 4: Task Resourcing Allocation
At this stage the test manager can schedule the team members and load-balance the task resourcing allocation, assigning the discrete project tasks and defects to the members of the development team staffed to the iteration. The team members view their individual schedules and task assignments so that they can determine whether they can perform all the tasks in the time stated. The detailed task schedule is updated by the team members, with the release and iteration schedules reflecting the updates, so that management can make changes to the master schedule whenever required.
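Load-balancing the allocation can be illustrated with a simple greedy heuristic (largest task to the least-loaded member). This is an illustrative sketch with invented names, not SpiraTeam's actual scheduling algorithm.

```python
def allocate(tasks, members):
    """Greedy load balancing: give each task to the least-loaded member."""
    load = {m: 0 for m in members}
    assignment = []
    for name, hours in sorted(tasks, key=lambda t: -t[1]):  # big tasks first
        member = min(load, key=load.get)
        load[member] += hours
        assignment.append((name, member))
    return assignment, load

# Hypothetical tasks and team members.
tasks = [("test suite A", 8), ("test suite B", 5), ("bug retest", 3)]
assignment, load = allocate(tasks, ["Ana", "Ben"])
```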
3.2.5 Approach 5: Task Execution
During execution of the tasks assigned to team members, management has real-time visibility of the progress of the iteration and release. Team members update the actual effort, percentage complete and predicted end-date of their various tasks as they complete the assigned workload, so management can see in real time which tasks are done and which are yet to be completed. If exception conditions occur (late-starting tasks, late-finishing tasks, overruns, etc.), the status of the overall iteration and release is updated to give early indication that management intervention is needed, such as reassigning tasks or reducing the workload on some members of the team. In addition, the progress of the tasks is linked back to the original requirements, so there is full requirements traceability and test coverage.
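The exception conditions listed above (late start, late finish, overrun) amount to simple checks over each task's status fields. The field names and figures below are assumptions for illustration only.

```python
def iteration_status(tasks, today):
    """Flag exception conditions so the test manager can intervene early."""
    flags = []
    for t in tasks:
        if t["pct_complete"] == 0 and today > t["start"]:
            flags.append((t["name"], "late start"))
        elif t["pct_complete"] < 100 and today > t["end"]:
            flags.append((t["name"], "late finish"))
        if t["actual_hours"] > t["estimate_hours"]:
            flags.append((t["name"], "overrun"))
    return flags

# Hypothetical task records; start/end are day numbers within the iteration.
tasks = [
    {"name": "regression run", "start": 1, "end": 5,
     "pct_complete": 60, "actual_hours": 12, "estimate_hours": 10},
    {"name": "report writing", "start": 4, "end": 8,
     "pct_complete": 0, "actual_hours": 0, "estimate_hours": 6},
]
flags = iteration_status(tasks, today=6)
```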
3.3 Artifacts Workflow
Section 2.3.9 in chapter two discussed the artifacts in SpiraTeam. These artifacts, as discussed earlier, provide the functionality for managing tests using SpiraTeam; the diagram below shows the relationships and workflow between them.
Figure 3.3 The main entities that comprise a SpiraTest project.
73
3.3.1 User
When managing a project in SpiraTeam, the user must log in in order to view a personalized home page. The home page lists the key tasks that the user needs to focus on, and allows drilling down into each of the assigned projects in a single dashboard view. Each project assigned to the user has its own dashboard depicting the overall project health and status in a single comprehensive view. The diagram below shows the login page.
Figure 3.4 User login screen
3.3.2 Roles
After the user has logged in to the SpiraTeam interface, the next step is checking the roles assigned to him or her by the project manager. The possible roles include tester, project owner, developer, observer, manager and incident user. A project manager can assign one or more roles to a user in different projects, depending on the number of projects the user is a member of.
3.3.3 Project
Immediately after checking the assigned roles, the user moves on to select a project from those of which he or she is a member. The following screen is displayed when the "View/Edit Projects" link is chosen from the Administration sidebar navigation.
Figure 3.5 Project view/edit
This screen displays the list of all the projects in the system, together with their website URL, date of creation and active status. Clicking either the link in the right-hand column or the name of the project changes the project name to bold, indicating that it is selected.
3.3.4 Management
The stage immediately after selecting the project is the management of the artifacts. These include requirements management, test case management, release planning, test suites, incident planning and test coverage. All of this artifact management was discussed in the literature review, in section 2.3.9 of chapter 2.
3.4 Entity Workflow
There are various entities involved in managing a project using SpiraTeam. The relationships that exist between these entities determine the stages involved in project planning and test management. The diagram below shows the flow of these entities.
Figure 3.6 The relationships between the various SpiraTest entities
3.5 Incident Workflow
An incident workflow is a predefined sequence of incident statuses linked together by "workflow transitions", enabling a newly created incident to be reviewed, prioritized, assigned, resolved and closed, and handling exception cases such as a duplicate or non-reproducible incident. The workflow list screen for a sample project is illustrated below.
Figure 3.7 Incident workflow
The workflow of an incident depends on its type: the workflow of a "risk" incident is not the same as that of an "enhancement" incident. The definition of the workflow depends on the agreement reached between the test manager and the stakeholders. Figure 3.7 lists in its left-most column all the incident statuses defined for the project. The next column lists all the possible transitions that can occur from each status, and with each transition is listed the name of the destination status that the transition leads to. For example, from the Assigned status, depending on your role, you can move the incident to Duplicate, Resolved or Not-Reproducible, depending on which transition you take.
In the Transition column, clicking a transition name leads to the corresponding details page, where the tester can set the properties of the step or of another transition. A transition can be deleted simply by clicking the "Delete" button after the transition name; to add a new transition, click the "Add Transition" button in the Operations column.
Figure 3.8 Incident workflow steps
Figure 3.8 shows the workflow transition diagram. At the upper part of the screen is the "workflow browser", which illustrates how the transition relates to the workflow as a whole. It displays the current transition in the middle, with the originating and destination steps listed on either side. Clicking on either incident status name takes you to the corresponding workflow step details page, which allows you to click through the whole workflow from start to finish without having to return to the workflow details page.
Figure 3.9 Workflow transition.
On each transition, a user has to satisfy certain conditions in order to be able to execute it, that is, to move the incident from the originating status to the destination status. Each transition also carries a set of notification rules specifying who should receive an email notification when the transition is executed. Both the conditions and the notifications can be set for three types of user role.
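A workflow of statuses linked by role-guarded transitions is essentially a small state machine. The sketch below follows the pattern described in this section, but the specific statuses, transitions and role conditions are a hypothetical configuration, not SpiraTeam's actual workflow definition.

```python
# Map (current status, transition name) -> destination status.
TRANSITIONS = {
    ("New", "review"): "Open",
    ("Open", "assign"): "Assigned",
    ("Assigned", "resolve"): "Resolved",
    ("Assigned", "duplicate"): "Duplicate",
    ("Assigned", "cannot reproduce"): "Not-Reproducible",
    ("Resolved", "close"): "Closed",
}
# Role conditions: transitions absent here may be executed by anyone.
ALLOWED_ROLES = {"resolve": {"developer"}, "close": {"tester", "manager"}}

def execute(status, transition, role):
    """Move an incident along the workflow if the role may execute it."""
    if transition in ALLOWED_ROLES and role not in ALLOWED_ROLES[transition]:
        raise PermissionError(f"{role} may not execute '{transition}'")
    try:
        return TRANSITIONS[(status, transition)]
    except KeyError:
        raise ValueError(f"no transition '{transition}' from {status}")

new_status = execute("Assigned", "resolve", role="developer")  # -> "Resolved"
```

Notification rules would hook into `execute` in the same way, emailing the configured roles whenever a transition fires.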
3.6 OBA as a Case Study
The Driving Assistance System (DAS) is a study of the potential vehicle of the near future, and is the general name given to the system controlled by a device called the "Safe Drive" (SD). The Safe Drive contains a processor, which is the On-board Automobile (OBA). The OBA couples computer software with a control panel on the mechanical components, and is intended to improve the safety of vehicle driving, especially over long trips on a motorway, while providing comfort to the driver. The software specifications and requirements provide the basis for writing a software test plan, which is used in conducting the ad-hoc test process [24]. The project was conducted by a group of software engineering students from the Centre for Advanced Software Engineering (CASE), Universiti Teknologi Malaysia, and is used here as a case study for achieving better testing of this software. Among the testing tools, SpiraTeam was selected to undertake the project. The diagram below shows the subsystem components of the OBA [24].
Figure 3.10 OBA‟s Subsystem composition
3.7 Conclusion
The schedules, framework, postulates, selected approach and methodology
have been discussed, and the workflows have been outlined. The next chapter
concentrates on implementing them through the appropriate channels: the slogan
“plan a flight and fly the plan” is to be put into practice.
CHAPTER 4
DESIGN AND IMPLEMENTATION OF TEST MANAGEMENT
4.1 Introduction
This chapter discusses the implementation of the selected tool and
methodology in managing the tests of the On-Board Automobile system. Based on
the conclusions in sections 2.3.10 and 2.5.8, the SpiraTeam test management tool and
the Agile methodology were selected to undertake this exercise. The combination of
these components provides a better solution to the problems encountered during the
management of the OBA tests. The implementation is discussed in the following
paragraphs.
4.2 Research Procedure and Design
The ultimate aim of this research work is to implement software test
management on the OBA software. The research adopted the service engineering
process approach as its methodology [20]. The service engineering process consists
of two main stages: first, implementing software test management on the client site
so that it can interact with the web server; second, service engineering, which
focuses on the server site. The diagram below illustrates the idea.
[Figure: phases in service engineering: Analyze service-specific test case → Design
test management → Implement test service]
Figure 4.1 Phases in Service Engineering
The research planning and schedule are outlined in the table below, which
shows the time allocated to each activity involved in the research exercise.
Table 4.1 Research planning and schedule (activities spread over months 1 to 5)
No  Activity
1   Test Management study
2   SpiraTeam training
3   Server preparation
4   Problem definition
5   Proposal approach evaluation
6   Integrate the client and server site
7   Implementing the test management on OBA
8   Result evaluation
9   Publication
10  Deliverables and presentation
In this research, an analysis of the existing mediator processes was carried
out to determine the strengths and weaknesses of those processes. The advantages of
the WSMO framework were maintained and the weaknesses were discarded. The
research activities and procedures are outlined in the flowchart shown below.
[Figure: research flowchart. Start → Literature review → Evaluation of approaches →
Implementing the framework (SpiraTeam): Agile approach based on pattern using
SpiraTeam versus Waterfall approach using traditional pattern → Prototype
application → Experimentation on the prototype → Satisfactory results? →
Evaluation of the approaches → Report and publication → End]
Figure 4.2 Research Flowchart
The first step undertaken during the implementation of SpiraTeam on the
OBA was preparing the system prerequisites, that is, preparing the system for the
installation of the SpiraTeam software. The installation package, comprising the
SpiraTeam software and the installation manual, was provided by the project
manager. Before installing SpiraTeam, a number of preparations were made,
including ensuring that the web server was correctly configured and that the
database engine worked correctly. The hardware requirements and software
configurations were provided by the vendor. The following items were selected and
installed on the SpiraTeam server for a smooth installation and proper
implementation.
1. Windows Server 2003 (operating system)
2. Windows Server 2003 SP2 (service pack)
3. Microsoft SQL Server 2008 Express Edition (database)
4. Internet Information Services (IIS) (web server)
5. Mozilla Firefox 3.0 (browser)
6. Microsoft .NET Framework (component)
With the requirements installed, various techniques were applied to ensure
that they were working correctly. Verifying that these requirements were effective
allowed the proper installation of SpiraTeam.
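Part of this verification can be automated. The sketch below, which assumes the server address 10.10.12.57 used later in this chapter, simply checks that the conventional IIS and SQL Server TCP ports accept connections; it is an illustration, not part of the vendor's procedure.

```python
import socket

def check_port(host, port, timeout=2.0):
    """Return True if a TCP service accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_prerequisite_checks(checks):
    """Run each named check and return a {name: ok} report."""
    return {name: fn() for name, fn in checks}

# Hypothetical server address; 80 and 1433 are the conventional IIS and
# SQL Server ports. Run these against the real network to verify the setup.
checks = [
    ("IIS web server",      lambda: check_port("10.10.12.57", 80)),
    ("SQL Server database", lambda: check_port("10.10.12.57", 1433)),
]
```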
The SpiraTeam software was installed on the server (PC) by double-clicking
the installer and following the on-screen wizard. The wizard gives an alert whenever
any of the requirements is not working properly.
After the successful installation of the SpiraTeam software, the following
screen was displayed.
Figure 4.3 Installation wizard
4.3 System Administrator
Having installed the software successfully, the administrator, who is the
project leader of the OBA, performed the typical system-wide administrative tasks.
These tasks are necessary for setting up projects on the SpiraTeam server, and the
users (OBA team members) were added to the project. To perform these tasks, the
administrator logs in to the system with a username that has “System Administration”
permissions. The special “Administrator” username is created by the installer for this
very purpose. The first login used the username Administrator and the password
PleaseChange; the password was changed immediately after this first login. Once
logged in, the administrator clicked the “Administration” link above the main
navigation bar, which displayed the Administration home page as shown below.
Figure 4.4 Administrator’s page
4.3.1 Project
Selecting View/Edit Project from the left pane of Figure 4.4 opens another
window. This screen displays the list of projects in the system (both inactive and
active) together with their website URL, date of creation and active status. “Add
New Project” was selected and the setting up of the OBA project in SpiraTeam
began.
The name of the project was entered (MyOBA), together with an optional
description and website URL, and the project was marked “Active”. A default
template was selected for the OBA project. This copies across the workflows, user
membership, custom properties, document types, document folders, data
synchronization and other configuration settings to be reused from the old project.
The project leader, who is also the test manager, set the project’s group and
edited the notifications, that is, who gets notified of what; for example, which
member is notified when an incident is raised? The planning of the project was also
conducted here, so a manual Gantt chart is no longer needed. The working hours per
day, working days per week and the effort calculation were all set at this level.
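The effort settings reduce to simple arithmetic. A minimal sketch (the figures are illustrative defaults, not the OBA project's actual settings):

```python
def schedule_days(effort_hours, hours_per_day=8, days_per_week=5):
    """Convert an effort estimate in person-hours into working days and weeks,
    given the project's working-hours-per-day and working-days-per-week settings."""
    days = effort_hours / hours_per_day
    weeks = days / days_per_week
    return days, weeks

# Example: a 120 person-hour task under the default settings.
days, weeks = schedule_days(120)
print(f"{days:g} working days, {weeks:g} weeks")  # 15 working days, 3 weeks
```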
4.3.2 View/Edit User
Selecting View/Edit User gives the test manager the ability to add the team
members responsible for the tasks in the OBA project. The list of users in the system
is displayed (both inactive and active) together with their first name, middle initial,
last name, username (login), administrative permission status and active status.
The manager has the ability to filter the list of users by either choosing an
administrative / active status, or entering a portion of the first name, middle initial,
last name or username into the appropriate text box. The OBA test manager added
the names of the team members. The diagram below shows the window for adding a
member to the system.
Figure 4.5 View/Edit User
4.3.3 Project Membership
The test manager selected Project Membership from the left pane of the
screen, and the list of all members added to the project was displayed. The project
manager chose the members for the OBA and assigned responsibilities to them.
The roles include project owner, observer, tester, developer, incident user and so on.
The following screen shows the project membership window.
Figure 4.6 Project membership
4.3.4 Incident
Before detecting any incident, the test manager has to create custom
properties and values for incidents. The values can also be changed; they populate
many of the standard fields used in the incident tracker, such as the types, statuses,
priorities and severities. When setting up incidents for the OBA project, the process
for changing each of these is shown in the diagram below.
Figure 4.7 Edit Incident Type
By clicking the associated workflow drop-down list, the OBA test manager
specifies which workflow the incident type will follow. This is a very powerful
feature since it allows the tester to configure different workflows for different
incident types; i.e. a “Bug” may follow a workflow geared to identification and
resolution, whereas a “Risk” may only need a much simpler set of steps and actions.
The test manager thus selects the right workflow for each incident type. The status,
the priority and the workflow were also set, as shown in the diagrams below.
Figure 4.8 Edit incident status
Figure 4.9 Edit incident priority
4.3.5 Document
The OBA project has many documents attached to it, dating from the
beginning of the contract: governmental and non-governmental documents,
engineering, management, requirement, specification and testing documents,
miscellaneous items and so on. The test manager attached all the documents to the
project for reference purposes. The diagrams below show the list of documents and
the hierarchical order of the folders in SpiraTeam.
Figure 4.10 Documents list
Figure 4.11 Hierarchy of document folders
4.4 My Page
When the project manager set up the OBA project, he provided a username
and password to each project member. Immediately after logging in to SpiraTeam as
a user or tester, the page displayed is called “My Page” or the “Dashboard”. This
page displays information about the tasks of a particular user and consists of many
task panes, which include the following:
4.4.1 My Projects
This task pane lists all the projects that a user is involved in. The task pane
consists of three columns: project name, group and creation date. With regard to the
implementation of the OBA, the name given to the project is “MyOBA”, the group is
internal (indicating that the testing team are not external testers), and the creation
date is 9-DEC-2009.
4.4.2 My Saved Searches
This task pane displays all the searches saved by the user. It has two columns:
the name of the search and the project in which the search was conducted. Some of
the OBA testers have saved searches to enable them to undertake their duties
correctly.
4.4.3 My Assigned Requirements
The requirements assigned by the test manager to a user are also displayed on
this page. The task pane consists of four columns: name, project, importance and
status. The OBA team members view the requirements assigned to them by the team
leader on this page. The diagram below displays the assigned requirements.
Figure 4.12 My page task pane 1
4.4.4 My Assigned Test Cases
This pane displays all the test cases assigned to a particular user. In the case
of the OBA project, all the test cases are assigned to and shared between the two
members who are the testers of the project. The project manager has assigned them
to be responsible for executing the assigned test scripts. The script name is displayed,
along with its last execution status (failed, passed or not run) and the date of last
execution. This enables the tester or the test manager to see how recently the tests
have been run, and whether they need to be re-run. The diagram below shows the
test cases assigned to one of the testers of the OBA.
Figure 4.13 My assigned test cases
Clicking on the test-name hyperlink leads to the details page for the test case,
and the project that the test case belongs to becomes the tester’s current project.
Clicking the “Execute” link listed below it launches the test case in execution mode,
allowing failed test cases to be easily re-run.
4.4.5 My Assigned Test Sets
This pane lists the test sets that a tester is responsible for executing, that is,
the test cases contained within each test set against a specified release of the OBA.
The test set name is displayed along with its status, the project it belongs to, the
number of test cases remaining to be executed, and the date by which all the tests
need to have been run. Clicking on the test-set name hyperlink takes the tester to the
details page for the test set, and the project that the test set belongs to becomes the
tester’s current project.
Clicking on the “Execute” link listed below the window launches the test
cases contained within the test set in the test-case execution module. With this, the
tester’s assigned tasks can be easily carried out.
4.4.6 My Pending Test Runs
This pane lists all test runs on the OBA which have been started in the test
case module but not completed. Once a test case or test set is executed, a pending
test run entry is stored in the system so that the tester can continue execution at a
later date. The diagram below shows assigned and pending test runs.
Figure 4.14 Pending test run
4.4.7 My Assigned Incidents
The team leader of the OBA project has assigned development tasks to the
project testers that need to be completed so that a release can be completed in
fulfillment of the requirements. The tasks are listed in ascending date order so that
the items with the oldest due dates are displayed first. In addition, each task is
displayed with a progress indicator that graphically illustrates its completion against
schedule.
Clicking on the task name hyperlink leads to the task details page. This page
describes the task in more detail, illustrates which requirement and release it is
associated with, and also allows the tester to view the change log of actions that have
been performed on it.
4.4.8 My Detected Incidents
All the open incidents detected by a member are shown on this page, across
all the different projects the tester is a member of. These are not necessarily incidents
that the tester needs to take an active role in resolving, but in the OBA project, since
the testers were the originators, either by executing a test case or by logging a
standalone incident, they can view the incidents and make sure that they are resolved
in a timely manner.
4.5 Requirement Management
The requirements of the OBA were inserted into SpiraTeam using the Excel
data importer. The Excel importer requires the tester to log in to SpiraTeam via its
login page, and the tester chooses the project (OBA) to import data into. By entering
the server address into the URL textbox, the Excel importer loads the application
ready for the transfer.
The address is usually of the form http://10.10.12.57/SpiraTeam. The
username and password normally used to log into SpiraTeam on the web were
entered into the textboxes. Immediately after entering these, the “Load Projects”
button was clicked, and the system loaded the OBA and all other available projects,
ready to receive the requirements or test cases.
With the projects loaded, the project of interest (OBA) was selected from the
drop-down menu and the importation of the requirements began. The diagrams
below show the login screen, the Excel importer and the imported requirements list.
Figure 4.15 Excel importer log-in screen
Figure 4.16 Requirements list
Figure 4.17 OBA Requirements
4.5.1 Requirement Details
The requirements imported through the Excel importer need to be explained.
The brief explanation of each requirement is called the requirement details. Clicking
on a requirement in the requirements list leads to the requirement details page, as
shown below.
Figure 4.18 Requirement details
The requirement names and brief descriptions were entered. The importance,
release, status, author, creation date, update date, owner and planned effort were
entered for all the requirements of the OBA. This enables the owner (tester) to
understand the requirement and the needs associated with it.
4.5.2 Requirement Coverage / Test Coverage
The OBA requirements have been covered by the available test cases
relevant to each of them. The requirement coverage page is found just below the
requirement details window. The page contains two task panes: one containing all
the requirements of the OBA and the other containing the test cases. The relationship
between the test cases and the requirements is established on this page; this is called
requirement mapping. The diagram below shows the requirement mapping window.
Figure 4.19 Requirement coverage
Each requirement is selected by ticking the checkbox against it. The “Add”,
“Remove” and “Remove All” buttons are used to map the requirements to the test
cases.
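The add/remove mapping above amounts to maintaining a many-to-many relation between requirements and test cases. A minimal sketch, with hypothetical requirement and test-case identifiers:

```python
from collections import defaultdict

class CoverageMap:
    """Many-to-many mapping between requirements and test cases (requirement mapping)."""
    def __init__(self):
        self._tests_for = defaultdict(set)

    def add(self, requirement, test_case):
        self._tests_for[requirement].add(test_case)

    def remove(self, requirement, test_case):
        self._tests_for[requirement].discard(test_case)

    def uncovered(self, requirements):
        """Requirements with no mapped test case, i.e. gaps in test coverage."""
        return [r for r in requirements if not self._tests_for[r]]

# Hypothetical identifiers for illustration.
reqs = ["REQ-1", "REQ-2", "REQ-3"]
cov = CoverageMap()
cov.add("REQ-1", "TC-10")
cov.add("REQ-2", "TC-11")
print(cov.uncovered(reqs))  # ['REQ-3']
```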
4.5.3 Tasks
Tasks have been assigned to the team members. Each task is displayed
together with its name, description (by hovering the mouse over the name), progress,
priority, start date, current owner, estimated effort, actual effort and numeric task
identifier. Clicking on the task name brings up the Task Details page, which
describes the task in more detail and allows the testers to edit the details of an
existing task. The diagram below illustrates the task assignment process.
Figure 4.20 Tasks assignment window
4.6 Release Planning
Based on the requirements, the development team and the stakeholders
decided to make a series of three releases for the OBA, named Release 0001,
Release 0002 and Release 0003. Each release consists of critical requirements: a
requirement that has a small problem in Release 0001 is addressed in Release 0002,
and so on. The release planning was achieved using the following window.
Figure 4.21 Release planning
4.7 Test Cases
The test cases of the OBA project were transferred into SpiraTeam using the
Excel importer, just like the requirements list. The test cases were nested in a
hierarchical structure. This was carried out after a meeting between the project
manager and the stakeholders, where the decision was made to put related test cases
into a single folder called a test suite. The test suites were assigned to different
testers. The diagram below shows the test case list.
Figure 4.22 Test case lists
The test cases require descriptions to give the tester a clue about them. The
description includes the test case name, author, creation time, owner, estimated time,
priority and execution status. Clicking on the test case link leads to the following
diagram.
Figure 4.23 Test case description
Below this window is the step description field. These fields enable the
author to describe the steps to be taken in executing the test: the test steps, expected
result, sample data and status. The diagram below illustrates the test case steps.
Figure 4.24 Test steps
4.8 Test Execution
Following the requirement management, test case management, test coverage,
release planning and task assignment, the test execution of the OBA becomes easy.
From the test case list, a single test case is selected by ticking the checkbox beside it
and then clicking the “Execute” button. This action leads to the following diagram.
Figure 4.25 Releases
This enables the tester to attach the test case to a particular release among
those already planned. After selecting the release, clicking the “Next” button leads to
the next diagram.
Figure 4.26 Test run
The steps described during test case management appear. The test
management took place at this level: when the tester runs the OBA test bench, the
result obtained determines whether each step is passed, blocked or failed. This raises
the issue of incident tracking.
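The pass/blocked/fail decision for a single step can be sketched as follows; the classification rule is an assumption about how a tester records results, not SpiraTeam's internal logic.

```python
def step_status(actual, expected, precondition_met=True):
    """Classify one test step the way a tester records it during a run:
    'blocked' if the step could not be attempted at all, otherwise
    pass/fail by comparing the actual result with the expected result."""
    if not precondition_met:
        return "blocked"
    return "passed" if actual == expected else "failed"

# Illustrative results from a hypothetical OBA test-bench step.
print(step_status("speed=60", "speed=60"))                    # passed
print(step_status("speed=65", "speed=60"))                    # failed
print(step_status(None, "speed=60", precondition_met=False))  # blocked
```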
4.9 Incident Tracking
During the test run, some unexpected behaviors emerged. Such behaviors
arise when the test result contradicts the expected result, when the result does not
satisfy the requirements, when there is a lack of proper training, or when the end user
does not see the result as the developer sees it. These are called incidents. The OBA
project encountered a variety of incidents. The following diagram shows how the
incidents of the OBA were recorded.
Figure 4.27 Incident list
Every incident recorded during the test run was reviewed and corrected based
on the incident workflow described earlier.
4.10 Conclusion
This chapter discussed the implementation of SpiraTeam on the OBA
software project. The methodologies discussed in Chapter 3 have been strictly
adhered to, giving the test manager of the OBA the opportunity to achieve a good
result at the end of the implementation. The result obtained shows that using
SpiraTeam to manage the tests of a software project far surpasses the traditional
method. The results obtained are shown in the attachment to the thesis.
CHAPTER 5
CONCLUSION
5.1 Summary
The study shows a number of benefits that are expected to be achieved when
using SpiraTeam. The research undertook a study of various fields related to the use
of test management tools to manage the testing of the On-Board Automobile (OBA)
system. The workflows and the strategies for undertaking the exercise were also
identified. The implementation of the selected tool and approach answered the
research questions.
The features of SpiraTeam applied during this exercise provided a successful
and manageable result within a limited time frame. The result obtained provides
tremendous benefit compared to the previous result obtained from the manual
system.
End-to-end management of the testing lifecycle of the OBA, including
resources, releases and test cases, scheduling, test execution, defects, documents,
collaboration and all aspects of reporting and metrics in real time, are the major
areas covered. This project sketches a functionality review of various test
management tools and gives a brief picture of how to select a flexible test
management tool for our software testing project, which is SpiraTeam. A summary
of the snapshots generated while running the OBA in SpiraTeam is shown below.
Figure 5.1 Requirement summary
Figure 5.1 shows the graphical view that SpiraTeam provides for
summarizing all the OBA requirements entered during requirement management.
The graph shows how many requirements are currently in the project, displayed
according to the specified criteria. For example, a tester can specify the type of data
displayed along the x-axis and the requirement information used to group the data.
When the graph is first opened, a list box provides the available options for
the tester to select a field. The tester picks the field to be displayed on the x-axis and
the field by which the data is to be grouped. In this report, the x-axis represents the
requirements’ status, and the individual bars are grouped by requirement importance.
Each data value can be viewed by positioning the mouse pointer over the bar; a
tooltip pops up listing the actual data value.
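The grouping behind such a summary graph (count items per x-axis value and group value) can be sketched as follows; the requirement records are illustrative, not the real OBA data.

```python
from collections import Counter

def summary_graph_data(items, x_field, group_field):
    """Count items per (x-axis value, group value) pair, as the summary graph does."""
    return Counter((item[x_field], item[group_field]) for item in items)

# Hypothetical requirement records for illustration.
requirements = [
    {"status": "Completed",   "importance": "High"},
    {"status": "Completed",   "importance": "Low"},
    {"status": "In Progress", "importance": "High"},
    {"status": "Completed",   "importance": "High"},
]
data = summary_graph_data(requirements, "status", "importance")
print(data[("Completed", "High")])  # 2
```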
Figure 5.2 Test case summaries
Figure 5.2 is the test case summary graph. It shows how many test cases are
currently in the OBA project. The number of test cases is displayed according to
criteria the tester specifies, as in the requirements summary: the tester can specify the
type of data displayed along the x-axis and the test case information used to group
the data. On first opening the graph, an option appears for picking the field to
display; the chosen field is shown on the x-axis and a second field groups the data.
In this report, the x-axis represents the test case execution status, and the
individual bars are grouped by test case status. Each data-value can be viewed by
positioning the mouse pointer over the bar, and a “tooltip” will pop-up listing the
actual data value.
Figure 5.3 Test run summary
Figure 5.3 displays the test run summary. The test run summary graph shows
how many test runs are currently in a project. The number of test runs is displayed
according to the criteria a tester specifies: the types of data to be displayed along the
x-axis and the y-axis.
In this report, the x-axis represents the test run execution status, and the
individual bars are grouped by test run version. Each data-value can be viewed by
positioning the mouse pointer over the bar, and a “tooltip” will pop-up listing the
actual data value. Clicking on the “Display Data Grid” link displayed the underlying
data that is being used to generate the graph.
Figure 5.4 Incident summary
The incident summary graph shows how many incidents have been
discovered in the OBA project during the test runs. The number of incidents is
displayed according to the criteria specified. By selecting from the list box, a tester
can specify the type of data displayed along the x-axis and the incident information
used to group the data. Once the appropriate fields are chosen, clicking the “Select”
button displays the graph as shown above.
5.2 Shortcomings
Despite the attractive features of SpiraTeam, the research discovered a few
drawbacks of this tool during the implementation on the OBA software project.
These include:
1. Operating system. Installing SpiraTeam on a server running the Vista or XP
operating system limits the number of concurrent users, and accessing the
server from a remote location takes a long time to connect.
2. Database. The tool operates only on Microsoft SQL Server; it does not support
any other database engine. Accessing data on Microsoft SQL Server 2008 is
faster than on earlier versions, and transferring data from one database to
another is very difficult.
3. Web server. The tool is limited to Internet Information Services (IIS); there is
no support for Tomcat or other web servers.
4. Web browser. Accessing the tool with Mozilla Firefox provides fast access and
good-looking graphics compared with Google Chrome.
5.3 Recommendation
In order to make the software testing process flexible and changeable, the
author recommends a reflective software testing management tool, one that adopts a
reflective architecture. Such a tool should be able to perform configuration
management as well as run the tests. It is also recommended that SpiraTeam be
extended to support other operating systems such as Linux, Mac OS and others. The
tool should also be open source so that many people can benefit from the test
management features it provides. CASE should adopt SpiraTeam for managing the
tests of Project I and Project II for its software engineering students.
5.4 Conclusion
The SpiraTeam test management tool greatly speeds up test management
processes. The documentation time for writing the test documents is reduced by
approximately 70% when SpiraTeam is used, compared to the traditional method.
Some of the documents that went missing during the traditional test
management of the OBA can now easily be generated. Graphical views, real-time
assessment, a single view of the whole activity at a glance, and a variety of reports
and report formats can be generated within one or two clicks. SpiraTeam eases the
way test management activities were previously scattered across many tools and
environments for a single project. It enables the integration and migration of data
from a variety of tools and data management engines; data can be migrated from
MS Word and MS Excel to SpiraTeam and vice versa.
It has the ability to synchronize requirements/use cases with IBM Rational
RequisitePro. With regard to bug/issue tracking, the following can be carried out:
1. Synchronize incidents with Atlassian JIRA
2. Synchronize incidents with Bugzilla
3. Synchronize incidents with FogBugz
4. Synchronize incidents with Microsoft Team Foundation Server
Figure 5.5 Synchronization chart
Figure 5.5 illustrates a two-way flow of data between SpiraTeam and the
supported tools; the exchange can be from SpiraTeam or to it.
At the beginning of the research, the author developed two kinds of test
documents for the smooth implementation of SpiraTeam on the OBA: IEEE 829 test
management documents and a customized test management document from a
software testing company in Malaysia. The IEEE documents include:
1. Test Plan
2. Test Case Specification
3. Test Procedure Specification
4. Software Design Specification
5. Test Incident Report
The test documents generated from the customized system of the testing
company include:
1. Software Test Plan
2. Software Test Case
3. Software Test Report
During the period of SpiraTeam implementation on the OBA, a one-week
workshop was organized by the research supervisor, Associate Professor Dr Suhaimi
Ibrahim, and conducted by the author for the software engineering students of CASE.
A series of lecture notes was provided.
The lecture notes include:
1. Administrator’s Guide
2. User’s Guide
3. Instructions on how to install SpiraTeam
Two academic conference papers have been written and submitted to various
conferences. The conference papers include:
1. Managing the Software Test Process using SpiraTeam
2. Implementing Software Test Management using the SpiraTeam Tool
These, together with the list of deliverables from the implementation of
SpiraTeam mentioned in section 1.6 of this thesis, are attached (they may not remain
attached indefinitely, to avoid exposing OBA data). The use of SpiraTeam in
undertaking the test processes of any software developed in CASE will ease the
workload of the lecturers and students, especially once a future version of SpiraTeam
is able to tackle the shortcomings mentioned earlier.
REFERENCES
1. H. Ohtera and S. Yamada, "Optimal allocation and control for software testing
resources," IEEE Transactions on Reliability, vol. 39, pp. 171–172, June 1999.
2. L. Lui and D. J. Robson, "A support environment for the management of
software testing," IEEE Conference on Software Engineering, 1992.
3. N. Kicillof, W. Grieskamp and V. Braberman, "Achieving both model and
code coverage with automated gray-box testing," A-MOST '07, ACM, London,
UK, July 2007, pp. 1–5.
4. F. Li, W. M. Ma and A. Chao, "Architecture-centric approach to enhance
software testing management," IEEE Eighth International Conference on
Intelligent Systems Design and Applications, 2008.
5. P. M. Kamde, V. D. Nandavadekar and R. G. Pawar, "Value of test cases in
software testing," IEEE International Conference on Management of
Innovation and Technology, 2006.
6. J. W. Cangussu, R. A. DeCarlo and A. P. Mathur, "Monitoring the software
test process using statistical process control: a logarithmic approach,"
ESEC/FSE '03, September 1–5, 2003, Helsinki, Finland.
7. Y. Jun-feng, Y. Shi, L. Ju-bo, X. Dan and J. Xiang-yang, "Reflective
architecture based software testing management model," IEEE International
Conference on Management and Technology, 2006.
8. H. Pieire, "Testing networking," vol. 39, no. 2, June 1990, p. 172.
9. P. Farrell-Vinay, Manage Software Testing, Auerbach Publications, Taylor &
Francis Group, vol. 2, Dec. 2008, pp. 39–89.
10. I. Burnstein, Practical Software Testing, Springer-Verlag, vol. 2, Dec. 2003,
pp. 39–89.
11. White paper, "Gain control of the chaotic software test management,"
Software Test Management.
12. "Test management approaches and methodology," TMap, Sogeti, retrieved
February 2010 from http://www.tmap.net/info
13. Scan Tool Net, "Scan tool management tool," retrieved January 2010 from
http://www.scantool.net/scan-tools/
14. "Testing services, Trigent tool," retrieved February 2010 from
http://www.trigent.com/services/testing/
15. N. Palani, "Test management tool review," Wipro Technology & Electronics,
unpublished paper.
16. Y. Shen and J. Liu, "Research on the application of data mining in software
testing and defects analysis," Academy Publisher, vol. 15, p. 29.
17. SpiraTeam Administrator's and User's Guides, http://www.inflectra.com
18. M. Grindal, J. Offutt and J. Mellin, "On the testing maturity of software
producing organizations," Proceedings of the Testing: Academic and
Industrial Conference - Practice and Research Techniques (TAIC PART '06),
IEEE, 2006.
19. Y. Shen and J. Liu, "Research on the application of data mining in software
testing and defects analysis," Second International Conference on Intelligent
Computation Technology and Automation, 2009.
20. M. A. Hennell, D. Hedley and I. J. Riddell, "Assessing a class of software
tools," International Conference on Management of Innovation and
Technology, 1984.
21. P. M. Duernberger, "Software testing application in a computer science
curriculum."
22. White paper, "Taming software quality complexity with virtual automation."
23. Mashable, "English Wikipedia reaches 3 million articles," retrieved March
2010 from http://mashable.com/2010/01/02/wikipedia-3million-articles
24. TechCrunch, "Twitter reaches 44.5 million people worldwide in June,
comScore," retrieved March 2010.
25. Facebook, "Press statistics," retrieved March 2010 from
http://facebook.com/press/infor.testing?statistics
APPENDIX A
SpiraTeam Installation Guide
Download