FAST – a Framework for Automating Software
Testing: a Practical Approach
Abstract—Context: the quality of a software product can be directly influenced by the quality of its development process. Immature or ad-hoc test processes are therefore unsuited for introducing systematic test automation and do not help improve software quality. Objective: this research assesses the benefits of, limitations of, and gaps in automating software testing in order to identify best practices and to propose a strategy for systematically introducing test automation into software development processes. Method: an exploratory bibliographical survey was undertaken so as to ground the work in theory and the recent literature. After defining the proposal, two case studies were conducted to analyze it in a real-world environment. In addition, the proposal was assessed through a focus group with specialists in the field. Results: the proposal of a Framework for Automating Software Testing – FAST, a theoretical framework consisting of a hierarchical structure for introducing test automation. Conclusion: the findings of this research showed that the absence of systematic processes is one of the factors that hinder the introduction of test automation. Based on the results of the case studies, FAST can be considered a satisfactory alternative for introducing and maintaining test automation in software development.
Keywords—Software process improvement, software quality, software testing, test automation.
I. INTRODUCTION
It is understood that "the quality of a system or product is highly influenced by the quality of the process used to develop and maintain it" [1], a perspective in which process improvement is a necessary factor for developing quality software to be delivered to the market. In this sense, software testing has emerged as another aspect to be addressed when discussing software quality, where testing can be defined as "the dynamics of checking the behavior of a program from a finite set of test cases, properly selected from an infinite domain of executions" [2].
However, the software testing knowledge area is quite broad, and this work focuses on test automation, understood as the use of software to perform testing activities [3]; it is an important topic of interest that has been widely studied in the literature [4]-[9].
The benefits of automation can be observed in the long run, and its focus should be on increasing test coverage and not only on reducing cost [8], [10]-[12]. Studies show that testing accounts for 50% or more of the total cost of a project [5]: while it may be costly to deliver a product late to the market, delivering a defective product can be catastrophic [10].
Test automation has been proposed as a solution to reduce
project costs [13] and is becoming increasingly popular with the
need to improve software quality amid growing system
complexity [14]. Test automation can be used to gain velocity [7], make tests repeatable [7] and manage more tests in less time
[15]. Furthermore, it can be a very efficient way of reducing the
effort involved in the development, eliminating or minimizing
repetitive tasks and reducing the risk of human error [6]. It can
be considered an effective way of reducing effort and
minimizing the repetition of activities in the software
development lifecycle [9].
Many attempts to achieve the real benefits of automation in a durable way have failed [14], and some problems related to the adoption of automation can be listed [4], such as:
• Inadequate choice or absence of some types of test,
which leads to inefficiency in the execution of tests;
• Incorrect expectations regarding the benefits of
automation and the investment to be made;
• The absence of diversification of the automation strategy
to be adopted;
• Use of test tools focused only on test execution, while other potential areas are overlooked.
It can be observed that there is a technical deficiency in many
implementations of test execution automation, causing
problems for the use and maintenance of automation systems
[8]. In addition, it is observed that there is no clear definition as
to how to design, implement and maintain a test automation
system in order to maximize the benefits of automation in a
given scope, although there is a demand for this among developers and testers [8].
It is observed that there is still a gap in the research related to
test automation, given the absence of approaches and guidelines
to assist in the design, implementation and maintenance of test
automation approaches [16]. A multivocal review of the literature [17] presented research on the assessment of test process improvement maturity, noting that the following problems are relevant:
• Software testing processes remain immature and are conducted ad hoc;
• Immature practices lead to inefficiency in defect
detection;
• Schedules and costs constantly exceed what was
planned; and,
• Testing is not conducted efficiently.
Given the problems presented, which were identified both in the software industry and in the references studied, this study addresses the following research question:
How should test automation be introduced and
maintained in the software development process?
The general objective of this research is to propose a strategy
for the systematic introduction of test automation practices in
the context of a software development project.
In addition, it has the following specific objectives:
• Analyze the failure factors of software testing automation
deployment in organizations; and
• Identify good practices to introduce automation of tests
mentioned in the literature.
II. BIBLIOGRAPHICAL REVIEW
A. Software Testing
Barr et al. [18] describe software testing as an activity whose purpose is to stimulate the system and observe its response, in which both the stimulus and the response have associated values that can be compared against the expected behavior. The stimulus, in this scenario, corresponds to the software testing activity, which can be performed using various forms and techniques.
Bertolino [19] comments on the importance of software testing engineering, whose aim is to observe the execution of a system to validate whether it behaves as intended and to identify potential failures. In addition, testing has been widely used in industry as a quality control activity, providing an objective analysis of software behavior.
"The work of testing a software can be divided between
automatic testing and manual testing" [15] and testing activity
assists quality assurance by collecting information about the
software studied [13]. Software testing includes activities that
consist of designing test cases, running the software with test
cases, and examining the results produced by the execution
[13].
In this context, test automation addresses the automation of software testing activities. One of the reasons for using automated testing instead of manual testing is that manual testing consumes more resources, whereas automated testing increases efficiency [20]. The decision on what should be automated should be made as early as possible in the software development process, in order to minimize the problems related to this activity [21].
According to Bertolino [19], test automation is a significant area of interest, whose objective is to improve the degree of automation, both through the development of better techniques for generating automated tests and through the automation of the testing process itself.
Ramler and Wolfmaier [5] present an analysis of the economic trade-off between manual and automated testing, commenting that, when implementing automation, often only its costs are evaluated while its benefits are ignored in comparison with manual tests.
In addition, test automation is a suitable tool for quality improvement and cost reduction in the long run, as the benefits require time to be observed [15]. Also in this context, ISO/IEC/IEEE 29119-1 [22] comments that it is possible to automate many of the activities described in its processes, in which automation requires the use of tools and is mainly focused on test execution. However, many other activities can be performed through tools, such as the following (a brief illustrative sketch follows the list):
• Management of test cases;
• Monitoring and control of the test;
• Generation of test data;
• Statistical analysis;
• Generation of test cases; and,
• Implementation and maintenance of the test environment.
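As a brief illustration of one of these tool-supported activities, the sketch below shows how test data could be generated programmatically. It is a minimal, hypothetical Java example (the class, method names and value ranges are assumptions made for this sketch, not part of FAST or of ISO/IEC/IEEE 29119):

```java
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Minimal, hypothetical sketch of automated test data generation.
// Names and value ranges are illustrative assumptions only.
public class StudentTestDataGenerator {

    private static final Random RANDOM = new Random(42); // fixed seed for repeatable tests

    // Generates a random lowercase name of the given length.
    static String randomName(int length) {
        return IntStream.range(0, length)
                .map(i -> 'a' + RANDOM.nextInt(26))
                .mapToObj(c -> String.valueOf((char) c))
                .collect(Collectors.joining());
    }

    // Generates a random grade in the range [0.0, 10.0].
    static double randomGrade() {
        return Math.round(RANDOM.nextDouble() * 100.0) / 10.0;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            System.out.printf("name=%s grade=%.1f%n", randomName(8), randomGrade());
        }
    }
}
```

Because the seed is fixed, the generated data set is reproducible, which is usually desirable when the data feeds automated regression tests.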
B. Test Automation
Software testing should be effective in finding defects and also efficient, so that its execution can be performed quickly and at the lowest possible cost. In this scenario, test automation
can directly support the reduction of the effort required to
perform test activities or to increase the scope of the test to be
executed within a certain time frame [23].
Test automation effectively supports the project in achieving cumulative coverage [10], which, as defined by the author, means that the set of automated test cases accumulates over time so that both existing and new requirements can be tested throughout the test lifecycle.
Fewster and Graham [23] also point out that testing and test automation are different skills, since the former refers to the ability to find and execute the most appropriate test cases for a particular context, given the innumerable possibilities of existing tests in a project. Test automation can also be considered a skill, but in a different context, because the demand, in this case, refers to the proper choice and correct implementation of the tests to be automated.
Test automation improves the coverage of regression tests
due to the accumulation of automated test cases over time, in
order to improve productivity, quality, and efficiency of
execution [24]. Automation allows even the smallest
maintenance changes to be fully tested with minimal team effort
[23].
According to ISO/IEC/IEEE 29119-1 [22], it is possible to automate several activities described in the Test Management and Dynamic Testing processes of ISO/IEC/IEEE 29119-2 [22], and although they are usually associated only
with the test execution activity, there are many additional
testing tasks that can be supported by software-based tools.
III. RELATED WORK
The analysis of the studies related to this research was based
on the literature review, analysis of existing systematic reviews
[25]-[27] and multivocal review [17]. The results achieved were
organized so that some approaches were classified as correlated
works, which are those that present suggestions for test
automation through more comprehensive proposals to improve
the test process as a whole. Some others were considered related
work because they specifically address test automation
processes.
From this research, the following correlated works were
selected:
• Test Maturity Model – TMM [28];
• Test Improvement Model – TIM [29];
• Test Process Improvement – TPI [30];
• Software Testing Enhancement Paradigm – STEP [31];
• Agile Quality Assurance Model – AQAM [32];
• Brazilian Test Process Improvement (MPT.BR) (2011) [33],[34];
• Test Maturity Model Integration (TMMI) [35]; and
• Automated Test Generation – ATG [36].
Besides these, two related works were found:
• Maturity Model for Automated Software Testing –
MMAST [37]; and
• Test Automation Improvement Model – TAIM [38].
Given this context, Fig. 1 presents a chronological view of
the appearance of these works.
Fig. 1 Related and correlated work timeline
A. Correlated Works
The TMM [28] is a model composed of 5 maturity levels:
(1) Initial; (2) Phases Definition; (3) Integration; (4)
Management and Measurement; and (5) Optimization and
Defect Prevention and Quality Control. In its structure, each
level contemplates objectives that are achieved through
activities, tasks, and responsibilities. Level 5 has, as one of the
objectives, the use of tools to support the planning and
execution of the tests, as well as collecting and analyzing the
test data. However, the approach presented is superficial and does not provide guidance on how to introduce and maintain test automation.
TIM [29] is a maturity model defined through two
components: a framework that includes levels and key areas,
and an evaluation procedure. There are 4 levels, such as (1)
Baseline; (2) Effective cost; (3) Risk reduction; and (4)
Optimization; in addition to 5 key areas: Organization, Planning
and Monitoring, Testware, Test Cases and Review. In this
context, Testware is the key area that includes the definition of
configuration management of test assets and the use of tools to
perform repeatable and non-creative activities. However, the
systematization of the use of tools to perform the test
automation activities is not detailed.
The TPI [30] is a model composed of 20 key areas, each with
different levels of maturity, which are established from a
maturity matrix, in which each level is described by
checkpoints and suggestions of improvement. One of the key
areas is called Test Automation, which describes that
automation can be implemented to address the following
checkpoints: (1) the use of tools at maturity level A; (2)
automation management at level B; and (3) optimization of test
automation at level C, where A is the initial and C is the highest.
Although it addresses maturity in test automation, it neglects the broader context of automation, covering only the use of tools and the management of automation, even though the scope of automation is more complex than the processes presented.
STEP [31] is a business process-oriented test process
improvement framework that is organized by: (1) Business
objectives; (2) Process objectives; and (3) 17 process areas.
Test automation is present in the Test Execution process area,
which suggests that the introduction of test automation is
through the implementation of test automation scripts.
However, there are no suggestions for practices that indicate
how automation scripts should be implemented and introduced
in the context of the project.
AQAM [32] is a model for quality assurance in an agile
environment, comprising 20 key process areas (KPAs), which
include guides, processes, best practices, templates,
customizations and maturity levels. The purpose of identifying
KPAs is to objectively assess the strengths and weaknesses of
the important areas of the quality assurance process and then
develop a plan for process improvement as a whole. Among the
existing KPAs, 9 of them are directly related to the test
activities, such as: (1) Test planning; (2) Test case management;
(3) Defect analysis; (4) Defect report; (5) Unit test; (6)
Performance test; (7) Test environment management; (8)
Organization of the test; and (9) Test automation. Despite
presenting KPAs related to the test process, they do not cover
all levels of test automation for the context of a project, being
restricted to unit and performance testing. Therefore, with
regard to the introduction of test automation, the proposal is
limited and, in addition, the existing documentation does not
present a complete description of its practices.
MPT.BR [33],[34] addresses the improvement of the testing
process throughout the product test life cycle. The model is
composed of five maturity levels, and each maturity level is
composed of process areas.
In the scope of test automation, the model presents the Test
Execution Automation (AET) process area, whose purpose is
the definition and maintenance of a strategy to automate the
execution of the test. This process area is composed of the
following list of specific practices:
• AET1 - Define objectives of the automation regime;
• AET2 - Define criteria for selection of test cases for
automation;
• AET3 - Define a framework for test automation;
• AET4 - Manage automated test incidents;
• AET5 - Check adherence to automation objectives; and,
• AET6 - Analyze return on investment in automation.
Although the specific practices present a systematic way to introduce test automation, they are still vague about identifying the moment at which automation should be performed. There is no specific information about introducing automation into the software development process, nor about which levels of testing can be automated. The written format is generic and comprehensive, which can suit every type of automation; however, it does not help in choosing where automation should start or what benefits can be achieved.
There is also the Tool Management (GDF) process area,
whose objective is to manage the identification, analysis,
selection, and implementation of tools in the organization,
composed of the following specific practices:
• GDF1 - Identify necessary tools;
• GDF2 - Select tools;
• GDF3 - Conduct pilot project;
• GDF4 - Select tool's gurus;
• GDF5 - Define tool's deployment strategies; and,
• GDF6 - Deploy tools.
Although GDF provides guidelines for using test tools and
how they should be maintained, GDF does not provide
objective suggestions on how tools can be used to support test
automation, since the process area does not address only automation tools. Therefore, despite presenting a guide for introducing automation, it is considered generic and does not go into detail about the automation process.
The TMMI [35] is a model for the improvement of the test
process, developed by the TMMi Foundation as a guide,
reference framework and complementary model to CMMI
version 1.2 [39]. TMMI follows the staged version of CMMI,
and also uses the concepts of maturity levels for assessment and
improvement of the test process. The specific focus of the
model is the improvement of the test process, starting from the
chaotic stage to the mature and controlled process through
defect prevention.
TMMI is composed of 5 maturity levels: level 1 - Initial; level 2 - Managed; level 3 - Defined; level 4 - Measured; and level 5 - Optimization. Each maturity level presents a set of
process areas necessary to reach maturity at that level, where
each is the baseline to the next level.
Although it is a maturity model specifically for the test area
and presents systematic ways to introduce the practice of
software testing in the context of project development, it does
not present a process area specifically dedicated to testing tools and/or automation, and it does not include systematic suggestions for improving test automation, as stated in the model:
“Note that the TMMi does not have a specific process area
dedicated to test tools and/or test automation. Within TMMi test
tools are treated as a supporting resource (practices) and are
therefore part of the process area where they provide support,
e.g., applying a test design tool is a supporting test practice
within the process area Test Design and Execution at TMMi
level 2 and applying a performance testing tool is a
supporting test practice within the process area Non-functional
Testing at TMMi level 3” [35].
The ATG [36] is a process that was developed to complement
the TPI [30], whose objective is the generation of automated
tests from models. In this context, the creation of the ATG took
place from the introduction of 4 process areas and modifications
in some areas existing in the TPI. The models, in this scenario, need to be computable, i.e., processable by a computer, and can be derived from, for example, requirements, design, source code, test objects, etc. If a model is not computable, it must be converted as early as possible in the software development process, preferably in parallel with the software design. The ATG is limited in the
scope of its proposal because, in addition to focusing only on
the generation of tests based on models, it does not explain how
the models should be generated for the different levels of test
automation. In addition, in industry, during the adoption of
ATG techniques, some limitations have been encountered, due
in part to the difficulty of creating and maintaining test models,
which may further hinder the adoption of the approach.
B. Related Works
The Maturity Model for Automated Software Testing
(MMAST) [37] is a model that was developed for
manufacturers of computerized medical equipment and its
purpose is to define the appropriate level of automation in
which an equipment manufacturer fits. It consists of 4 levels of
maturity, such as:
• Level 1 - Accidental automation: characterized by an ad hoc, individualistic and accidental process of carrying out the activities, in which there is no documentation of the important information, which is restricted to key people in the organization. Test automation is performed on an ad hoc basis and is not based on processes and/or planning.
• Level 2 - Beginning automation: is associated with the
use of capture and replay tools that reproduce the
responses of the system under test. Documentation
begins by recording software and test requirements, the
writing of which provides the basis for level 3
implementation.
• Level 3 - Intentional automation: the focus is the
execution of the defined and planned automation for the
context of a project, based on the requirements and
scripts of the automatic test and it assumes that the test
team is part of the project team. The model indicates that
level 3 is suitable for the manufacture of medical
equipment.
• Level 4 - Advanced Automation: presented as an improved version of Level 3 with the inclusion of a post-delivery defect management practice, in which defects are captured and sent directly to the processes of correction, test creation, and regression.
The model also presents a checklist to support and identify at
what level the test process should be automated, based on the
following questions:
• How big are your software projects?
• How complex is your product?
• What financial risk does your product represent for the
company?
• What risk does your product pose to the patient and the
operator?
Based on the answers to the checklist, an automation level is recommended. However, despite being a maturity model, MMAST does not present key areas or process areas, and its description is abstract and does not address how test automation can actually be introduced.
The Test Automation Improvement Model (TAIM) [38] is a
model for process improvement based on measurements to
determine if one activity is better than the other, focusing on the
calculation of the return on investment. In addition, it is based
on the view that the testing process can be fully or partially
automated, and provides insight into how it is possible to fully
automate the testing process by defining 10 key areas (Key Area
- KA):
1. Test management;
2. Testing requirements;
3. Test specification;
4. Implementation of the test;
5. Automation of the test process;
6. Execution of the test;
7. Test Verdicts;
8. Test environment;
9. Testing tools; and
10. Fault and failure management.
In addition, the model includes a Generic Area (GA), which consists of Traceability; defined measurements such as Efficiency, Effectiveness, and Cost (return on investment); Analysis, e.g., Trends and Data Aggregation; Testware; Standards; Quality of Automation; and Competence.
However, "TAIM can be viewed as work-in-progress and as
a research challenge" [38], as it is still very generalized and the
evolution of KAs is presented as a future work. In addition, the
model is based on measurements but does not describe them
and neither presents how they should be used and how they are
related to KAs and GA.
Although the KAs are presented as a way of introducing test automation, it is not clear how they can be systematically introduced into the software development process, since they are presented in isolation, as a guide to best practices. Moreover, the model does not provide information on how it can be adapted to the software development context.
IV. FAST
According to the ISO/IEC/IEEE 24765 [40] software engineering vocabulary, a framework can be defined as a model that can be refined or specialized to provide parts of its functionality in a given context. This concept was adapted in FAST for the purpose of establishing a set of practices so that the introduction of test automation is consistent, structured and systematic for the project that implements it.
Although the concept of a framework is more related to the
technical component for the construction of systems, it was
adapted to assume the perspective of a theoretical framework
that contemplates a hierarchical structure for the
implementation of test automation from practices that can be
instantiated according to specific and distinct needs of each
project context.
FAST, in turn, differs from a maturity model in that there is no obligation to implement its process areas, since the concept of maturity/capability levels does not apply to the proposed framework.
The conceptual framework of FAST is composed of elements
that have been defined according to CMMI-DEV [1], as shown
in Fig. 2 and described in Table I.
Fig. 2 FAST’s conceptual structure.
TABLE I
FAST CONCEPTUAL ELEMENTS
• Automation Level: This element has been adapted from the concept of test level, described as an independent test effort with its own documentation and resources (IEEE 829, 2008). The test automation level, in turn, can be understood as the scope in which test automation processes will take place.
• Area: It is the element that aggregates process areas, considering the practical aspects of the model that emphasize the main relationships between process areas. It corresponds to the areas of interest into which FAST is divided in two, considering both the technical and the support aspects required to introduce automation into a software project. This element was adapted from the CMMI-DEV category concept (2010).
• Area Objective: Corresponds to the purpose of the area to which it is related.
• Process Area: A set of practices related to a given area that, when implemented collectively, satisfy a set of objectives considered important for the improvement of that area (SEI, 2010).
• Process Area Purpose: It includes the description of the objectives of the process area.
• Practice: It is an element that presents the description of the activities considered important for the process area to be achieved (SEI, 2010). They are described in a way that shows the sequence between them.
• Subpractice: Detailed description to provide guidance on how to interpret and implement a particular practice (SEI, 2010).
• Work Product: Suggestions of artifacts associated with a particular practice, including information on the outputs to be elaborated from its execution.
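To make the hierarchy of Table I more concrete, the sketch below models the conceptual elements as simple Java types (using records, which require Java 16 or later). This is only an illustrative reading of the structure described above; the type and field names, as well as the sample values, are our own assumptions and not an implementation provided by FAST.

```java
import java.util.List;

// Illustrative model of the FAST conceptual elements described in Table I.
// All type names, field names and sample values are assumptions made for this sketch.
public class FastStructureSketch {

    // An Area (Technical or Support) groups process areas and has an objective.
    record Area(String name, String objective, List<ProcessArea> processAreas) {}

    // A process area has a purpose and a set of practices.
    record ProcessArea(String name, String purpose, List<Practice> practices) {}

    // A practice is refined by subpractices and suggests work products.
    record Practice(String description, List<String> subpractices, List<String> workProducts) {}

    public static void main(String[] args) {
        Practice designUnitTesting = new Practice(
                "Design Unit Testing",
                List.of("Identify input and output data types", "Specify test cases"),
                List.of("Unit test design", "Specification of the test cases"));

        ProcessArea unitTesting = new ProcessArea(
                "Unit Testing",
                "Design, implement and execute automated unit tests",
                List.of(designUnitTesting));

        Area technical = new Area(
                "Technical Area",
                "Establish and maintain mechanisms for the automatic accomplishment of software testing",
                List.of(unitTesting));

        System.out.println(technical);
    }
}
```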
FAST was conceived considering two distinct but complementary aspects that compose its proposal through its Technical and Support areas, each of which has a set of process areas, as shown in Fig. 3. In this context, the proposed framework is structured in such a way that the areas aggregate process areas specifically organized to maintain coherent semantics with the objective of each area.
The process areas, in turn, were defined through objectives,
practices with their respective subpractices and work products,
in order to present suggestions on how automation can be
introduced in the context of a software development project.
Fig. 3 FAST Process Areas
A. FAST Technical Area
The Technical Area was created to contemplate test
automation practices through its process areas so that they can
be adapted according to the context in which they will be
implemented. Its purpose is to establish and maintain
mechanisms for the automatic accomplishment of software
testing, in order to support the creation of project environments
that are efficient to develop, manage and maintain automated
tests in a given domain.
In addition, this area is focused on presenting practical
aspects for systematically performing automatic tests,
composing an architecture based on process areas that can be
integrated into the product development process, providing
strategic support for process improvement through test
automation.
The Technical Area is composed of the following Process Areas: (1) Unit Testing, (2) Integration Testing, (3) System Testing and (4) Acceptance Testing. Each one is presented in an independent and cohesive manner and can be instantiated according to the level of automation required by the context in question; therefore, there are no prerequisites among them. The level of automation represents the level of abstraction at which the automation will be performed, as represented in Fig. 4.
In addition, each process area presents practices that include
designing, implementing, and running the automated tests. The
descriptions of each process area will be presented in the
following sections.
Fig. 4 Technical Area Abstraction
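As a concrete illustration of the Unit Testing automation level, the sketch below shows a minimal automated unit test written with JUnit 4. The class under test (GradeCalculator) and its behavior are hypothetical; the example only illustrates the design-implement-execute flow of the process area and is not code from the case-study projects.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical class under test: computes the average of two exam grades.
class GradeCalculator {
    static double average(double firstExam, double secondExam) {
        return (firstExam + secondExam) / 2.0;
    }
}

// Minimal automated unit test (JUnit 4): the test cases were designed from the
// expected behavior, implemented as code, and can be executed automatically
// by the build (e.g., on every commit), as the Unit Testing process area suggests.
public class GradeCalculatorTest {

    @Test
    public void averageOfTwoGradesIsTheirArithmeticMean() {
        assertEquals(7.5, GradeCalculator.average(7.0, 8.0), 0.0001);
    }

    @Test
    public void averageHandlesMinimumGrades() {
        assertEquals(0.0, GradeCalculator.average(0.0, 0.0), 0.0001);
    }
}
```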
B. FAST Support Area
The Support Area was created with the purpose of suggesting practices that support the execution of the Technical Area processes, so as to support the systematic implementation, execution, and monitoring of test automation in the project.
The Support Area comprises the following Process Areas (PAs): (1) Test Project Planning, (2) Test Project Monitoring, (3) Configuration Management, (4) Measurement and Analysis, (5) Requirements and (6) Defect Management, whose practices can relate to each other.
In addition, its PAs permeate all levels of automation in the
Technical Area and provide support for the execution of
automation from the proposed process areas, as represented by
Fig. 5.
Fig. 5 Support Area
Based on the proposal’s two areas, the framework was
described to contemplate process areas, practices, subpractices
and work products. However, given the length of its documentation and the space limitations of this article, only an overview of each process area is presented, as shown in Tables II and III.
V. CASE STUDY
A. Case Study Plan
The objective of a case study defines what is expected to be achieved as a result of its execution [41]. In this context, according to Robson's classification [42], the purpose of this case study is improvement: in the context of this work, feedback is sought from the use of FAST in a real and contemporary software project development environment, which has a high degree of realism.
In this scenario, research questions are statements about the
knowledge being sought or expected to be discovered during
the case study [41]. From this context, the research questions
defined for this case study were as follows:
RQ1- Has FAST supported the systematic introduction of
test automation in the project?
RQ2- What were the positive aspects (success factors) of
using FAST to introduce test automation into the project?
RQ3- What were the negative aspects (failure factors) arising
from the use of FAST in the implementation of test automation
in the project?
RQ4- What are the limitations for introducing test
automation into the project, from using FAST?
TABLE II
FAST TECHNICAL AREA DESCRIPTION
For each process area, the practices are listed with their subpractices, followed by the work products.

Unit Testing
• Design Unit Testing: identify additional requirements and procedures that need to be tested; identify input and output data types; design the architecture of the test suite; specify the required test procedures; specify test cases.
• Implement Unit Testing: obtain and verify the test data; obtain test items; implement unit tests.
• Execute Unit Testing: perform unit testing; determine unit test results.
• Finish Unit Testing: evaluate the normal completion of the unit test; evaluate abnormal completion of the unit test; increase the unit test set.
Work products: unit test design; specification of test procedures; specification of the test cases; test data; test support resources; configuration of test items; summary of tests; test execution log; revised test specifications; revised test data; test summary; additional test data.

Integration Testing
• Establish Test Integration Techniques: establish integration test granularity; select integration test design techniques.
• Design Integration Testing: derive test conditions; derive test coverage items; derive the test cases; assemble test sets; establish test procedures.
• Implement Integration Testing: implement integration test scripts.
• Execute Integration Testing: run integration testing scripts; determine test results; evaluate normal completion of the integration test; evaluate abnormal completion of the integration test.
• Finish Integration Testing: increase the integration test suite; perform post-conditions for test cases.
Work products: integration test project; specification of the test design; integration test scripts; summary of tests; results of implementation; test data.

System Testing
• Establish System Testing Types and Techniques: select system test types; select techniques for the system test project; select data generation techniques; select techniques for generating scripts.
• Design System Testing: derive test conditions; derive test coverage items; derive the test cases; assemble test sets; establish test procedures.
• Implement System Testing: implement the system test scripts.
• Execute System Testing: run system test scripts; determine test results; evaluate normal system test completion; evaluate abnormal system test completion.
• Finish System Testing: increase the system test set; perform post-conditions for test cases.
Work products: test project; specification of the test design; system test scripts; summary of tests; results of implementation; test data.

Acceptance Testing
• Establish Acceptance Testing Criteria: define the type of acceptance test; define acceptance criteria.
• Design Acceptance Testing: derive test conditions; derive test coverage items; derive test cases; assemble test sets; establish test procedures.
• Implement Acceptance Testing: implement the acceptance test scripts.
• Execute Acceptance Testing: run acceptance testing scripts; determine test results; evaluate the normal completion of the acceptance test; evaluate abnormal completion of the acceptance test.
• Finish Acceptance Testing: increase the acceptance test set; perform post-conditions for test cases.
Work products: test design; specification of the test design; system test scripts; test summary; results of implementation; test data.
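To complement Table II, the sketch below illustrates the Integration Testing practices (derive a test condition, implement the script, run it and determine the result) with a small JUnit 4 test that exercises two collaborating components. The components (EnrollmentRepository and EnrollmentService) are hypothetical and kept in memory so that the example stays self-contained; they are not part of FAST or of the case-study systems.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.HashSet;
import java.util.Set;

import org.junit.Test;

// Hypothetical in-memory repository component.
class EnrollmentRepository {
    private final Set<String> enrolledStudents = new HashSet<>();

    boolean save(String studentId) {
        return enrolledStudents.add(studentId); // false if already enrolled
    }

    boolean exists(String studentId) {
        return enrolledStudents.contains(studentId);
    }
}

// Hypothetical service component that depends on the repository.
class EnrollmentService {
    private final EnrollmentRepository repository;

    EnrollmentService(EnrollmentRepository repository) {
        this.repository = repository;
    }

    boolean enroll(String studentId) {
        return repository.save(studentId);
    }
}

// Integration test script: exercises the service together with the real repository,
// illustrating the "Implement/Execute Integration Testing" practices of Table II.
public class EnrollmentIntegrationTest {

    @Test
    public void enrollingAStudentPersistsItInTheRepository() {
        EnrollmentRepository repository = new EnrollmentRepository();
        EnrollmentService service = new EnrollmentService(repository);

        assertTrue(service.enroll("student-1"));
        assertTrue(repository.exists("student-1"));
        assertFalse("duplicate enrollment must be rejected", service.enroll("student-1"));
    }
}
```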
TABLE III
FAST SUPPORT AREA DESCRIPTION
For each process area, the practices are listed with their subpractices, followed by the work products.

Test Project Planning
• Assess Product Risks: define product risk categories; identify risks; analyze risks.
• Define Test Automation Strategy: identify the scope of test automation; define the test automation strategy; define input criteria; define exit criteria; set test suspension criteria; define test restart criteria.
• Establish Estimates: establish the project analytical framework (EAP); define the test life cycle; determine size and cost estimates.
• Develop Test Automation Plan: plan the schedule; plan human resources; plan the involvement of project stakeholders; identify project risks; establish the plan.
Work products: analytical risk structure (EAR); product risk chart; risk exposure matrix; test automation strategy; test automation criteria; project analytical framework; life cycle of the automation project; size estimate of the test automation project; cost estimate of the test automation project; timeline of the test automation project; human resources matrix; stakeholder engagement matrix; project risks.

Test Project Monitoring
• Monitor Project: monitor the progress of the project; identify deviations from the project.
• Manage Corrective Actions: record corrective actions; follow corrective actions until they close.
Work products: test project follow-up worksheet.

Configuration Management
• Establish Test Automation Environment: establish environment requirements; implement the test automation environment; maintain the test automation environment.
• Establish Baselines of the Test Automation Project: identify configuration items; establish the configuration management environment; generate baselines and releases.
• Track and Control Changes: track change requests; control configuration items.
Work products: configuration management plan; configuration management system.

Measurement and Analysis
• Establish Mechanisms of Measurement and Analysis: establish test automation measurement objectives; specify measurements of test automation; specify procedures for collecting and storing measurements; specify procedures for analyzing measures.
• Provide Test Automation Measurement Results: collect automation measurement data; analyze automation measurement data; communicate results; store data and results.
Work products: test automation measurement plan.

Requirements
• Define Automation Requirements: define objective criteria for assessing requirements; elicit automation requirements.
• Define Traceability Between Automation Requirements and Their Work Products: determine the traceability scheme between requirements and automation work products; define bidirectional traceability of requirements.
• Manage Automation Requirement Changes: manage test automation scope changes; maintain bidirectional traceability of test automation requirements.
Work products: criteria for the analysis of requirements; automation requirements; conceptual traceability scheme; traceability matrix; request for change; impact analysis.

Defect Management
• Establish Defect Management System: define the defect management strategy; define a defect management system; maintain the defect management system.
• Analyze Test Results: record, classify and prioritize defects.
• Establish Actions to Eliminate the Root Cause of Defects: analyze the root cause of the selected defects; propose corrective actions; monitor the implementation of corrective actions.
Work products: defect state machine; defect management system; defect registration; root cause analysis of defects; corrective actions.
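One of the Defect Management work products listed in Table III is a defect state machine. The sketch below is a minimal, assumed illustration of such a state machine in Java; the states and allowed transitions are generic examples and are not prescribed by FAST.

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Minimal sketch of a defect state machine (a Defect Management work product).
// States and transitions are illustrative assumptions, not prescribed by FAST.
public class DefectStateMachine {

    enum State { NEW, OPEN, FIXED, RETESTED, CLOSED, REOPENED }

    private static final Map<State, Set<State>> ALLOWED = new EnumMap<>(State.class);
    static {
        ALLOWED.put(State.NEW, EnumSet.of(State.OPEN));
        ALLOWED.put(State.OPEN, EnumSet.of(State.FIXED));
        ALLOWED.put(State.FIXED, EnumSet.of(State.RETESTED));
        ALLOWED.put(State.RETESTED, EnumSet.of(State.CLOSED, State.REOPENED));
        ALLOWED.put(State.REOPENED, EnumSet.of(State.FIXED));
        ALLOWED.put(State.CLOSED, EnumSet.noneOf(State.class));
    }

    private State current = State.NEW;

    // Moves the defect to the next state, rejecting invalid transitions.
    void transitionTo(State next) {
        if (!ALLOWED.get(current).contains(next)) {
            throw new IllegalStateException(current + " -> " + next + " is not allowed");
        }
        current = next;
    }

    public static void main(String[] args) {
        DefectStateMachine defect = new DefectStateMachine();
        defect.transitionTo(State.OPEN);
        defect.transitionTo(State.FIXED);
        defect.transitionTo(State.RETESTED);
        defect.transitionTo(State.CLOSED);
        System.out.println("Defect lifecycle completed");
    }
}
```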
These questions were defined with the purpose of consolidating information that could serve as input to analyze whether FAST is a mechanism that can be used to answer the research question of this work: How should test automation be introduced and maintained in the software development process?
B. Case Definition
Case 1 occurred in the context of a project to develop a product for academic management, offered as Software as a Service (SaaS), with functionalities aimed at public and private schools, covering the management of teaching units, classes, grades, student enrollment, school year closing, among others. The product is based on the following
technologies:
• PHP 7 (development language);
• Laravel (framework for PHP development);
• Redis (in-memory database structure used for caching);
• ElasticSearch (search tool to handle large amounts of data
in real time);
• PostgreSQL (database used);
• Git (tool for project versioning);
• VueJS (framework for JavaScript);
• Bootstrap (framework for CSS / HTML);
• Gulp (task runner); and
• NGinx (HTTP server).
The team that worked on Case 1 was composed of 8 participants who worked as Product Owner, Software Engineer, and Scrum Master. The ages of the group ranged from 22 to 38 years, and the group included both men and women.
Case 2 occurred in the context of the development of a
software that centralizes user identities of several systems of the
organization, developed to meet the requirements of the OAuth
2 protocol, in order to optimize the management of logins for
the various systems and to facilitate the integration of new
systems. The product is based on the following technologies:
• JAVA 8 (Backend programming language);
• Tomcat (Web container);
• Spring MVC (framework for development in JAVA);
• PostgreSQL (Database used for project data);
• Oracle (the database used for educational ERP data);
• RESTful Web services (Web systems interoperability standard);
• Angular Material (framework for JavaScript development
for front-end) and
• Gradle (project configuration manager).
The team that worked on Case 2 was composed of 6 participants who worked as Project Manager, Software Architect, Team Leader and Developer. The ages of the group ranged from 26 to 33 years, and the group included both men and women.
C. Data Collection Plan
The methods for collecting data derived from the case studies
were interviews and metrics.
The interview was selected to collect the qualitative data generated from the introduction of the automated test system through the FAST implementation in the context of the two selected cases. In this scenario, the interviews were planned to be semi-structured [43], with previously planned questions whose order could vary according to the course of the interview. The interview protocol was organized according to the funnel model principle [44], in which the most general questions were addressed first and then refined into more specific ones, as described below:
Generic Questions:
Characterization of the participant, considering the following
aspects, such as level of experience, demographic data,
individual interests, and technical skills.
Specific Questions:
1. What do you think of the organization and goals of
automation levels?
2. Are they suitable for introducing test automation in the
project?
3. Do you think automation levels supported the planning of
the test automation strategy?
4. What do you think of the Support Area process areas (Planning, Monitoring, Configuration Management, Requirements, Measurement, Defect Management)?
5. Are they suitable to support test automation in the project?
6. Did you miss any process area? Need to add some process
area to the Support Area?
7. Did you miss any practice?
8. Did you miss the specification and/or description of what
aspects of FAST to use?
9. What do you think of the process areas of the technical
area (Unit Test, Integration Test, Systems Test, and Acceptance
Test)?
10. Are they adequate to support the test automation in the
project?
11. Did you miss any process area? Need to add some process
area to the Technical Area?
12. Did you miss any practice?
13. Did you miss the specification and/or description of what
aspects of FAST to use?
General:
14. Did FAST facilitate the systematic introduction of test
automation in the project?
15. What are the positive aspects (success factors) of using
FAST to introduce test automation into the project?
16. What were the negative aspects (failure factors) resulting from the use of FAST in the implementation of test automation in the project?
17. What are the limitations for introducing test automation
into the project from the use of FAST?
18. How do you evaluate the feasibility of FAST?
19. How do you evaluate the integrity of FAST?
20. In relation to the distribution of process areas into the Technical Area and the Support Area, do you consider it adequate for the implementation of automation in the project?
21. Do you consider FAST adequate to support the
introduction of test automation in your project?
22. What are the difficulties of using FAST in your project?
For the definition of the metrics, the Goal Question Metric (GQM) method was used [45], [46]; the specific metrics are described below (a worked illustration follows the list).
1. Project size: project scope count through the use of
function point technique.
2. Size of the test automation planning: Planned project scope
count for automation by defining the test automation strategy,
separated by level of automation.
3. Size of the accomplished project automation: count of the scope of automation carried out in the project, separated by levels of automation, according to the test automation strategy.
4. Test automation effort in the pilot project: count of the effort expended by the project participants.
5. Number of defects: Display the number of defects found
from the execution of the test automation.
6. Return on investment (ROI): Present the return on
investment made for the test automation project.
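As a worked illustration of how metrics 3 and 4 can be summarized, the sketch below reproduces, in Java, the percentage calculations reported later in this section for Case 1 (383 FP automated out of 1,309 FP, about 29.26%, and 244 h of automation effort out of 2,191 h, about 11.14%). It also includes a generic ROI formula, ROI = (benefit - cost) / cost, as an assumption, since the paper does not define one and reports metric 6 as not collected.

```java
// Worked illustration of the GQM-derived metrics used in the case studies.
// The ROI formula is a generic assumption (the paper reports ROI as "not collected").
public class AutomationMetricsSketch {

    // Metric 3 as a percentage of metric 2: automated scope over planned scope.
    static double automatedScopePercent(double automatedFp, double plannedFp) {
        return 100.0 * automatedFp / plannedFp;
    }

    // Metric 4 as a share of the total project effort in the same period.
    static double effortSharePercent(double automationHours, double totalHours) {
        return 100.0 * automationHours / totalHours;
    }

    // Generic ROI: (benefit - cost) / cost. Requires historical cost data.
    static double returnOnInvestment(double benefit, double cost) {
        return (benefit - cost) / cost;
    }

    public static void main(String[] args) {
        // Case 1 figures reported in Section V: 383 FP automated out of 1,309 FP,
        // and 244 h of automation effort out of 2,191 h of total effort.
        System.out.printf("Automated unit-test scope: %.2f%%%n",
                automatedScopePercent(383, 1309));   // ~29.26%
        System.out.printf("Automation effort share:   %.2f%%%n",
                effortSharePercent(244, 2191));      // ~11.14%
    }
}
```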
D. Collected Data
To collect the interview data, participants were selected
based on a non-probabilistic sample, considered as the most
appropriate method for qualitative research, through an
intentional sampling, based on the assumption that the
researcher wants to understand a certain vision/understanding
and, therefore, should select a sample from which the maximum
can be learned [43]. Table IV presents information related to
the execution of the interviews in the case studies.
TABLE IV
CASE STUDY INTERVIEW INFORMATION
Date | Participant | Case | Duration
03/29/2017 | P2 | C1 | 32 min
03/29/2017 | P3 | C1 | 44 min
04/18/2017 | P1 | C1 | 58 min
05/11/2017 | P3 | C2 | 48 min
05/18/2017 | P1 | C2 | 56 min
In addition to the qualitative data from the interviews,
metrics were also collected, as detailed in Table V.
TABLE V
CASE STUDY METRIC INFORMATION
Metric | Case 1 | Case 2
1 | 1,309 FP | 24 FP
2 | Unit Testing: 1,309 FP | Unit Testing: 12 FP; Integration Testing: 20 FP
3 | Unit Testing: 383 FP | Unit Testing: 7 FP; Integration Testing: 12 FP
4 | 244 h | 280 h
5 | 0 | 0
6 | Not collected | Not collected
E. Data Analysis
The process of data analysis is dynamic and recursive, and
becomes more extensive throughout the course of the study
[43]. The quantitative data, from the collection of the metrics,
served to characterize the scope of each case study project and
will be analyzed separately by Case.
In relation to Case 1, the project had already started 14 months prior to the beginning of the case study; therefore, its size of 1,309 FP (metric 1) considered all the functionalities developed from initiation until the moment of the count, which occurred 2 months after the case study began. As this was an ongoing project, whose requirements were still being defined and implemented, the project scope increased until the end of the case study, but due to staff limitations and availability, the project size count could not be updated.
For metric 2, the automation planning size, its value was equal to the size of the project because the planned automation strategy was that the entire scope of the project should have automated unit tests. For strategic organizational reasons, and as a way to gradually introduce test automation, the project prioritized only the automation of the unit tests. However, the automation strategy did not rule out planning the automation of tests at the integration and system levels, although these were not performed during the period analyzed by the case study. In spite of this, Case 1 did not count the planned scope for the integration and system levels and, therefore, metric 2 contemplated only the planned size for the unit tests. Such counting was not performed because it was a more complex activity that required more time; the complexity of this count is mainly associated with integration testing, since it relates to several functionalities and is directly associated with the system architecture.
In this context, during the case study period, automation was introduced only for unit tests, in which 383 FP (metric 3) were automated and executed out of a total of 1,309 FP, that is, 29.26% of the unit tests were automated. It is worth mentioning that, although the case study started on 12/21/2016, automation itself did not begin on that date, since Case 1 first invested in planning, configuring the environment and understanding FAST so that automation could be started.
Metric 4 yielded 244 hours spent on the project, including training, follow-up meetings, planning, design, implementation and execution of automated unit tests. This number represents 11.14% of all the effort spent in this period, which totaled 2,191 hours for the whole team.
In relation to the number of defects, metric 5, Case 1 did not report any defects because defects found by the unit test automation did not generate a record in the tool: this automation was performed by the developer and, as soon as a defect was found, it was repaired directly. Thus, the fact that metric 5 is zero does not mean that the unit test automation did not find defects, but rather that they were not accounted for, owing to the organization's strategy of not including the formal registration of defects from unit test automation in its process.
In the scope of the ROI, metric 6, it could not be calculated because doing so would require historical data on the cost of defect correction without test automation, which the organization does not have.
The Case 2 project started in conjunction with the case study and, although the reported project size is only 24 FP (metric 1), it is complex and highly critical, given that its scope involves the integration of various systems.
The planned automation strategy for this case included the automation of unit and integration tests, with scopes of 12 FP and 20 FP, respectively (metric 2). From this scope, 7 FP (metric 3) were automated at the unit test level, that is, 58.33% of what was planned. For the integration tests, 12 FP (metric 3) were automated, representing 60% of the planned scope.
There were 180 hours (metric 4) spent planning, designing, implementing and executing the automated tests; the total number of project hours during the same period was 900 hours and, therefore, automation accounted for 20% of the project effort.
No formal defects were opened in this period, so metric 5 is zero, since the project does not formalize defect management. In the context of metric 6, the ROI was not collected for the same reasons as in Case 1.
In addition, the diagnosis performed after the FAST implementation showed that the Unit Testing, Integration Testing, Test Project Planning, Test Project Monitoring, and Configuration Management process areas improved their practices through the inclusion of test automation.
In order to analyze the interviews, the collected data were coded and classified; the challenge was to construct categories that captured the recurrent patterns permeating all the data originating from the interviews. For this, the audio recordings were listened to and transcribed in Microsoft Excel and, from the codes generated in the first interview, categories were pre-established for the classification of the following interview. When the next interview was analyzed, an attempt was made, through data coding, to fit the data into one of the existing categories, to evaluate whether there was recurrence in the information. If necessary, new categories were created. This categorization process was repeated for all interviews and, in the end, the list of categories was refined, excluding possible redundancies, to generate the conceptual scheme represented in Fig. 6.
Fig. 6 Data Analysis Categories
The categories were defined according to the criteria
proposed by Merriam [43], considering the following aspects:
• Must be aligned and answer the research questions;
• They must be comprehensive so that there are sufficient
categories and all relevant data are included;
• They must be mutually exclusive;
• The name should be as sensitive as possible to the context
that is related; and,
• They must be conceptually congruent, where each level of
abstraction should categorize sets of the same level.
F. Threats to Validity
Conducting research on real-world issues implies a trade-off between the level of control and the degree of realism; realistic situations are often complex and non-deterministic, which makes it difficult to understand what is happening, especially in studies with explanatory purposes [44]. This scenario is characteristic of case studies, in which such limitations are present and require a consolidated analysis to minimize their impact on the results obtained.
Among the limitations of this case study, it can be mentioned that in neither case was FAST completely implemented, which limits the analysis of its results to the practical experiences with the Unit Testing, Integration Testing, Test Project Planning, Test Project Monitoring, and Configuration Management areas. This limitation, in turn, had to be accepted within the research, since the researcher could not interfere in the projects so that the framework could be fully implemented.
In addition, it was not possible to ensure that participants read all of the FAST documentation, and it was observed that their responses focused on practical experiences restricted to the process areas that were deployed. While aware of this fact, the questions throughout the interviews were not limited to those areas and covered the entire FAST scope.
Another limitation of this study is the fact that FAST was implemented in 2 cases in which software development processes already existed. Although the proposal is not limited to this scope, it is not possible to conclude how it would be implemented in a completely ad-hoc environment.
According to Easterbrook et al. [47], the major limitation of case studies is that data collection and analysis are more open to the researcher's interpretation and bias; to minimize this threat to validity, an explicit framework for selecting and collecting data is required. In this study, data selection and collection were performed by means of a carefully detailed protocol that was followed throughout the study.
Another threat raised was the possibility of bias from analyzing data in a single software development domain; to minimize it, 2 different scenarios were selected, so that the absence of certain information in one could be complemented by its existence in the other.
In addition, the researcher's participation in supporting the FAST implementation may have influenced the results obtained, as well as constrained the interview participants in providing their answers.
However, according to Merriam and Tisdell [43], what makes such studies reliable is their careful design, based on the application of standards well developed and accepted by the scientific community, a fact that can be observed in the information described in detail in this section.
VI. FOCUS GROUP
A. Focus Group Plan
The research problem within the scope of the focus group is associated with the central problem of this work, namely how automation can be introduced in the context of software development. From this question, the objective of the focus group is to evaluate FAST with respect to the completeness, clarity and adequacy of the proposed structure.
In order for this objective to be achieved, the roadmap was designed to address the following aspects of FAST:
1. Presentation of the objectives of the focus group.
2. Presentation of an overview of the proposal.
3. Discussion on test automation levels.
4. Discussion and analysis of the two areas (Technical and Support).
5. Discussion and analysis of the process areas of the Technical Area.
6. Discussion and analysis of the process areas of the Support Area.
Based on the issues addressed, the experts were selected according to their profile and knowledge of test automation; although 7 participants were invited, only 4 attended the meeting.
B. Execution
The focus group took place on September 19, 2017, lasting approximately 90 minutes. It was recorded in audio, with the permission of the participants, complemented by notes taken on paper by the moderator whenever deemed necessary.
The session started with a presentation of the focus group method, indicating the objectives of its application in the context of this research and explaining that the aim of the method was to collect the largest possible amount of information from the interaction between the members of the group [48]. At that moment, the schedule for the session was presented. The participants were then asked to feel free to contribute their opinions and ideas on the topics covered.
The FAST overview was then presented, showing the components of the framework, its areas and its process areas.
Given this scenario, the data were collected, analyzed and consolidated according to the planned topics; the resulting analyses are described in the next section.
C. Data Analysis
For the data analysis, the discussions around the topics presented throughout the focus group were analyzed and compared in order to identify similarities and to obtain an understanding of how the various variables behave in the context of the research [49]. For this purpose, the qualitative data were analyzed from the audio transcription, with the support of the Microsoft Excel tool.
Regarding the levels of automation, after their definition, concepts, and objectives had been presented, all experts indicated that the description was clear. In addition, the number of automation levels was considered adequate, with no need to add or remove any level. However, in the course of the discussion, one specialist questioned the applicability of the acceptance level, given that it should be performed by the client. In response, another expert pointed out that performing the acceptance test is a consequence of system testing within the scope of the customer's environment.
In the analysis of the support area, the Requirements process area raised doubts at the beginning of the discussion, when it was asked whether it would not be better to treat this process area as part of the planning of the test automation project. There were also questions about the difference between automation requirements and project requirements and how this is addressed in the proposal. During the group's own discussion, it was understood that the project requirements could serve as a basis for the automation requirements. It was also questioned whether the proposal addresses techniques for eliciting automation requirements in projects that involve legacy systems; it was observed that this suggestion could be included as an improvement point for FAST.
In the scope of the Process Automation Monitoring process area, it was asked how the framework addresses the monitoring of the automation project so that this practice can be performed objectively. After a long discussion, the definition and monitoring of indicators related to automation was suggested, which is already contemplated in the description of this process area in FAST.
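As an illustration of how such indicators might be operationalized, the sketch below is hypothetical: the indicator names, fields, and formulas are assumptions of this example and are not prescribed by FAST. It computes automation coverage and pass rate from basic execution counts collected during a test cycle.

# Illustrative sketch only; names and formulas are assumptions, not part of FAST.
from dataclasses import dataclass

@dataclass
class ExecutionSummary:
    total_cases: int         # all test cases planned for the cycle
    automated_cases: int     # cases that have an automated script
    automated_executed: int  # automated cases actually run in the cycle
    automated_passed: int    # automated cases that passed

def automation_coverage(s: ExecutionSummary) -> float:
    """Share of planned test cases covered by automation."""
    return s.automated_cases / s.total_cases if s.total_cases else 0.0

def automation_pass_rate(s: ExecutionSummary) -> float:
    """Share of executed automated cases that passed."""
    return s.automated_passed / s.automated_executed if s.automated_executed else 0.0

if __name__ == "__main__":
    cycle = ExecutionSummary(total_cases=120, automated_cases=84,
                             automated_executed=80, automated_passed=74)
    print(f"coverage:  {automation_coverage(cycle):.0%}")   # 70%
    print(f"pass rate: {automation_pass_rate(cycle):.0%}")  # ~92%

Indicators of this kind could then be tracked per iteration, which is one way the monitoring practice discussed by the group could be made objective.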
Regarding the Defect Management process area, it was discussed how the proposal addresses the reporting of automation results, and the practices and subpractices of this process area were examined. From the discussion and interaction among the members, it was suggested to include, in the framework description, a suggestion of a standard that could support the reporting of defects in the project.
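As a purely illustrative sketch of what such a defect-reporting standard might contain (the fields and format below are assumptions, not a standard adopted by FAST), a minimal machine-readable structure could be:

# Illustrative sketch only; fields are assumptions, not the FAST defect standard.
import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class DefectReport:
    defect_id: str            # project-unique identifier
    summary: str              # one-line description of the failure
    severity: str             # e.g., "critical", "major", "minor"
    automated_test_id: str    # script or test case that detected the defect
    build_version: str        # build under test when the failure occurred
    steps_to_reproduce: List[str] = field(default_factory=list)
    evidence: List[str] = field(default_factory=list)  # paths to logs/screenshots

def to_report_line(defect: DefectReport) -> str:
    """Serialize a defect as JSON so the automation suite can append it to a report file."""
    return json.dumps(asdict(defect))

if __name__ == "__main__":
    d = DefectReport(
        defect_id="DEF-042",
        summary="Login rejected for valid credentials",
        severity="major",
        automated_test_id="TC-LOGIN-003",
        build_version="2.4.1",
        steps_to_reproduce=["Open login page", "Enter valid user", "Submit"],
        evidence=["logs/tc-login-003.log"],
    )
    print(to_report_line(d))

A fixed structure of this kind would allow automated runs to report defects consistently, in line with the suggestion raised by the group.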
In addition, questions were raised about the adaptation and instantiation of the framework and its processes in environments where processes and/or agile practices already exist. It was also questioned whether the proposal contemplates the definition of a process for automation deployment using FAST; after discussion within the group, the difference between the proposed framework and an automation process was understood, and such a process could be a future work to be developed.
Regarding the Technical and Support areas, the participants considered that the titles and objectives were adequate to their purpose and that the form of organization was consistent with the proposed strategy. It was observed, however, that the members initially had difficulty understanding the relation between the areas.
In this context, the limitations of the use of the framework in practice were discussed. One of the experts commented that he saw no limitations in the framework itself, since it addresses all practices for test automation, but that limitations arise from the priorities that projects deal with; that is, the introduction of automation is not always performed because the practice is not prioritized within the scope of the project. Also in this context, it was mentioned that the absence of a process describing, step by step, how to use the framework can be a limitation for its use. Building on this point, the expert suggested defining a metric to assess how well FAST adheres to the project. Although not formalized as a process, this aspect is analyzed in the diagnosis performed prior to the implementation of FAST.
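As a minimal sketch of how such an adherence metric could be computed during the diagnosis, the example below is hypothetical: the process areas, practice names, and formula are illustrative assumptions and are not taken from the FAST catalogue.

# Illustrative sketch only; practice names and formula are assumptions.
from typing import Dict, List

def adherence(implemented: Dict[str, List[str]],
              expected: Dict[str, List[str]]) -> float:
    """Fraction of expected FAST practices already observed in the project."""
    total = sum(len(practices) for practices in expected.values())
    covered = sum(
        len(set(implemented.get(area, [])) & set(practices))
        for area, practices in expected.items()
    )
    return covered / total if total else 0.0

if __name__ == "__main__":
    # Hypothetical process areas and practices, not the official FAST catalogue.
    expected = {
        "Requirements": ["identify automation requirements", "trace to project requirements"],
        "Monitoring":   ["define indicators", "review indicators"],
    }
    implemented = {
        "Requirements": ["identify automation requirements"],
        "Monitoring":   ["define indicators"],
    }
    print(f"adherence: {adherence(implemented, expected):.0%}")  # 50%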
D. Threats to Validity
Among the limitations of applying a focus group is the fact that more experienced participants can inhibit the contribution of the others when projecting their opinion on a certain topic. As a way to mitigate this limitation, a group that was homogeneous in terms of experience in test automation was selected.
Another restriction of this method is associated with a limited understanding by the specialists of what is being discussed, since there is a time limit for applying the method. In this context, very complex issues can be misinterpreted, which may make it difficult for the group to converge on the central theme under discussion. As a way to reduce this threat, the moderator provided an explanation and a prior presentation of each topic to be discussed, so that the experts could have a better understanding of it.
One of the threats to the validity of the focus group was that not all participants were able to understand the applicability of the framework, since the session has a time limitation and each participant needs to abstract from the presentation to the practical implementation of the proposal. In this sense, since the selected participants' profile was focused on professionals with practical experience in automation, and not on the use and implementation of process improvement in the context of projects, a limitation of their evaluation was identified.
Another identified threat is the fact that the author of the proposal was the moderator of the focus group, which means that the participants could have been guarded in their comments when evaluating the proposal. In order to minimize this threat, we sought to select participants who had greater autonomy and independence from the author and who did not have ties of friendship that could interfere with the evaluation.
VII. CONCLUSION
Studies show that software testing is a discipline of software engineering that is constantly growing, and the demand for research grows from the continuous search for best practices that provide visibility into the quality of the product so that it can be improved.
In this context, testing emerges as a perspective that provides a more objective view of product quality, in which the search for best practices comes both from the research focus found in the literature and from the demands observed in the market. It is in this context that the introduction of test automation has emerged, together with the demand to systematize its processes so that its real benefits are achieved.
Several approaches to process improvement have also emerged in the literature, both as part of maturity models for software testing and as specific approaches to automation. However, it was noted that none of them provides practical support for introducing test automation practices systematically in the context of a software development project.
It was from this scenario that this research was developed, following a methodology based on an empirical interview, the development of FAST, and its evaluation through case studies and a focus group.
The case study was planned so that information could be collected throughout its execution. The selection of cases was carried out in order to provide complementary profiles: the environments of a private company and of a public institution are distinct scenarios that brought valuable results to the demands of this research.
The qualitative and quantitative data from the case study and the focus group were analyzed and organized into the following categories:
• Guidance for the introduction of automation;
• Understanding of the automation context;
• Completeness of the framework;
• Ease of reading;
• FAST limitations;
• Project limitations;
• Difficulty in interpreting FAST; and
• Suggestions for improvements.
Considering the data from the case study and the focus group, an association could be made to answer the research question of this work:
How should test automation be introduced and maintained in the software development process?
In this context, test automation should be introduced in a systematic way into the software development process, and FAST proved to be a viable strategy for carrying out its introduction and maintenance. In view of this, the main contribution of this work is a solution to systematize test automation processes through a framework that includes technical and support aspects, which are essential for the systematic introduction of test automation.
Considering the general objective of the work, which focuses on a strategy for the systematic introduction of test automation practices in the context of a software development project, it was observed that FAST can be considered one of the strategies derived from this objective. In addition, some of the failure factors of automation deployment could be observed, both from the knowledge acquired in the literature review and from the case study and focus group.
This work is relevant because it responds to practical problems of test automation based on recent research and market demand, and it is innovative because it consolidates, in a single proposal, both the technical aspects directly related to test automation and, through the support area, those that support the design, implementation, execution, and closing of the tests.
REFERENCES
[1] SOFTWARE ENGINEERING INSTITUTE (SEI). CMMI for development, version 1.3, staged representation. Pittsburgh, PA, 2010. Available at: www.sei.cmu.edu/reports/10tr033.pdf, CMU/SEI-2010-TR-033.
[2] SWEBOK. Guide to the Software Engineering Body of Knowledge. Fraunhofer IESE, Robert Bosch GmbH, University of Groningen, University of Karlskrona/Ronneby, Siemens, 2007.
[3] ISTQB. Standard glossary of terms used in Software Testing, Version 2.2, 2012.
[4] BERNER, S; WEBER, R; KELLER, R. Observations and lessons learned from automated testing. In: International Conference on Software Engineering (ICSE), May 15-21, St. Louis, USA, 2005, pp. 571-579.
[5] RAMLER, R; WOLFMAIER, K. Economic Perspectives in Test Automation: Balancing Automated and Manual Testing with Opportunity Cost. In: International Workshop on Automation of Software Test (AST), 23 May, Shanghai, China, 2006, pp. 85-91.
[6] KARHU, K; REPO, T; TAIPALE, O; SMOLANDER, K. Empirical Observations on Software Testing Automation. In: International Conference on Software Testing, Verification and Validation. IEEE, Apr. 2009, pp. 201-209.
[7] RAFI, D; MOSES, K; PETERSEN, K; MÄNTYLÄ, M. Benefits and limitations of automated software testing: systematic literature review and practitioner survey. In: 7th International Workshop on Automation of Software Test (AST), June 2012, Zurich, pp. 36-42.
[8] WIKLUND, K; ELDH, S; SUNDMARK, D; LUNDQVIST, K. Technical Debt in Test Automation. In: IEEE Fifth International Conference on Software Testing, Verification and Validation, 17-21 April, Montreal, QC, 2012, pp. 887-892.
[9] WIKLUND, K; SUNDMARK, D; ELDH, S; LUNDQVIST, K. Impediments for Automated Testing - An Empirical Analysis of a User Support Discussion Board. In: IEEE International Conference on Software Testing, Verification, and Validation (ICST 2014), March 31 - April 4, Cleveland, OH, 2014, pp. 113-122.
[10] HAYES, L. Automated Testing Handbook. Software Testing Institute, Richardson, TX, March 2004.
[11] DAMM, L; LUNDBERG, L. Results from introducing component-level test automation and test-driven development. Journal of Systems and Software, Volume 79, no. 7, pp. 1001-1014, 2006.
[12] WISSINK, T; AMARO, C. Successful Test Automation for Software Maintenance. In: 22nd IEEE International Conference on Software Maintenance (ICSM'06). IEEE, 2006, pp. 265-266.
[13] HARROLD, M. J. Testing: a roadmap. In: The Future of Software Engineering. Ed. by Finkelstein, A., 22nd International Conference on Software Engineering (ICSE), Limerick, Ireland, June 2000, pp. 61-72.
[14] FEWSTER, M. Common Mistakes in Test Automation. In: Fall Test Automation Conference, Boston, 2001.
[15] TAIPALE, O; KASURINEN, J; KARHU, K; SMOLANDER, K. Trade-off between automated and manual software testing. International Journal of System Assurance Engineering and Management, June 2011, Volume 2, Issue 2, pp. 114-125.
[16] FURTADO, A. P; MEIRA, S; SANTOS, C; NOVAIS, T; FERREIRA, M. FAST: Framework for Automating Software Testing. In: International Conference on Software Engineering Advances (ICSEA), Rome, Italy, 2016, pp. 91.
[17] GAROUSI, V; FELDERER, M; HACALOGLU, T. Software test maturity assessment and test process improvement: A multivocal literature review. Information and Software Technology, Volume 85, May 2017, pp. 16-42.
[18] BARR, E; HARMAN, M; MCMINN, P; SHAHBAZ, M; YOO, S. The oracle problem in software testing: a survey. IEEE Transactions on Software Engineering, Volume 41, Issue 5, May 2015, pp. 507-525.
[19] BERTOLINO, A. Software testing research: achievements, challenges, dreams. In: Future of Software Engineering. IEEE Computer Society, 2007, pp. 85-103. doi: 10.1109/FOSE.2007.25.
[20] DUSTIN, E; RASHKA, J; PAUL, J. Automated software testing: introduction, management, and performance. Boston, Addison-Wesley, 1999.
[21] PERSSON, C; YILMAZTURK, N. Establishment of automated regression testing at ABB: industrial experience report on 'Avoiding the Pitfalls'. In: 19th International Conference on Automated Software Engineering (ASE'04). IEEE Computer Society. DOI: 10.1109/ASE.2004.1342729.
[22] INTERNATIONAL ORGANIZATION FOR STANDARDIZATION/INTERNATIONAL ELECTROTECHNICAL COMMISSION/IEEE COMPUTER SOCIETY (ISO/IEC/IEEE) 29119-1. Software and systems engineering -- Software testing -- Part 1: Concepts and definitions, 2013.
[23] FEWSTER, M. and GRAHAM, D. Software Test Automation: Effective Use of Test Execution Tools. Addison-Wesley, New York, 1999.
[24] NAIK, K. and TRIPATHY, P. Software Testing and Quality Assurance. Wiley, 2008.
[25] GARCIA, C; DÁVILA, A; PESSOA, M. Test process models: systematic
literature review, In: Software Process Improvement and Capability
Determination (SPICE 2014), Springer, 2014, pp. 84–93.
[26] BÖHMER, K; RINDERLE-MA, S. A systematic literature review on process model testing: Approaches, challenges, and research directions. Cornell University Library, Sep. 2015. Available at: https://arxiv.org/abs/1509.04076. Accessed 01 Aug. 2016.
[27] AFZAL, W; ALONE, S; GLOCKSIEN, K; TORKAR, R. Software test
process improvement approaches: a systematic literature review and an
industrial case study. Journal of System and Software, Volume 111. 2016,
pp. 1–33.
[28] ________. Test Maturity Model (TMM). Illinois Institute of Technology, 1996. Available at: http://science.iit.edu/computerscience/research/testing-maturity-model-tmm. Last access: 11 Aug. 2014.
[29] ERICSON, T; SUBOTIC, A; URSING, S. TIM - A Test Improvement Model. Software Testing Verification and Reliability, Volume 7, pp. 229-246, 1997.
[30] KOOMEN, T. and POL, M. Improvement of the test process using TPI. Sogeti Nederland BV, Netherlands, 1998.
[31] CHELLADURAI, P. Watch Your STEP, 2011. Available at: http://www.uploads.pnsqc.org/2011/papers/T-56_Chelladurai_paper.pdf. Last accessed: Feb. 2016.
[32] HONGYING, G; CHENG. Y. A customizable agile software quality
assurance model, In: 5th International Conference on New Trends in
Information Science and Service Science (NISS), 2011, pp. 382–387.
[33] FURTADO, A; GOMES, M; ANDRADE, E; FARIAS JR, I. MPT.BR: a
Brazilian maturity model for testing. Published In: Quality Software
International Conference (QSIC), Xi'an, Shaanxi, (2012), pp. 220-229.
[34] ________. Melhoria do Processo de Teste Brasileiro (MPT.BR). SOFTEX RECIFE, Recife, 2011.
[35] ________. Test Maturity Model Integration (TMMi). TMMi Foundation, Version 1.0, 2012. Available at: http://www.tmmi.org/pdf/TMMi.Framework.pdf. Last access: January 2015.
[36] HEISKANEN, H., MAUNUMAA, M., KATARA M. A test process
improvement model for automated test generation, In: Product-Focused
Software Process Improvement, Springer, 2012, pp. 17–31.
[37] KRAUSE, M. E. A Maturity Model for Automated Software Testing.
Medical Device & Diagnostic Industry Magazine, December 1994,
Available at http://www.mddionline.com/article/software-maturitymodel-automated-software-testing. Last access: January 2015.
[38] ELDH, S; ANDERSSON, K; WIKLUND, K. Towards a Test Automation
Improvement Model (TAIM). In: IEEE International Conference on
Software Testing, Verification, and Validation Workshops (ICSTW),
March 31st to April 4th , Cleveland, OH, 2014, pp. 337-342.
[39] CHRISSIS M; KONRAD, M; SHRUM, S. CMMI Second Edition:
guidelines for process integration and product improvement. Addison
Wesley, 2007.
[40] INTERNATIONAL ORGANIZATION FOR STANDARDIZATION/
INTERNATIONAL ELECTROTECHNICAL
COMISSION/IEEE
COMPUTER SOCIETY (ISO/IEC/IEEE) 24.765. Systems and software
engineering – vocabulary, 2010.
[41] RUNESON, P; HÖST, M; RAINER, A; REGNELL, B. Case Study
Research in software engineering: Guidelines and Examples. Wiley,
2012.
[42] ROBSON, C. Real World Research. Blackwell, Second Edition, 2002.
[43] MERRIAM, S. B; TISDELL, E. J. Qualitative Research: A guide to
Design and Implementation. Jossey-Bass, 4th edition, 2016.
[44] RUNESON, P. and HÖST, M. Guidelines for conducting and reporting case study research in software engineering. Empirical Software Engineering Journal, Volume 14, Issue 2, pp. 131-164, April 2008.
[45] BASILI, V. R; WEISS, D. M. A methodology for collecting valid software engineering data. IEEE Transactions on Software Engineering, Volume SE-10, 1984, pp. 728-739.
[46] VAN SOLINGEN, R; BERGHOUT, E. The goal/question/metric method: A practical guide for quality improvement of software development. McGraw-Hill, 1999.
[47] EASTERBROOK, S; SINGER, J; STOREY M; DAMIAN, D. Selecting
empirical methods for software engineering, Springer London, 2008, pp.
285-311.
[48] REED, J; PAYTON, V. R. Focus groups: issues of analysis and
interpretation. In: Journal of Advanced Nursing, Volume 26, pp. 765-771,
1997.
[49] KITZINGER, J. The methodology of focus group: the importance of
interaction between research participants. In Journal of Sociology of
Health and Illness. Volume 16, Issue 1, pp. 103-121, 1995.