Automation Test Strategy
The Juice Shop Web App
Version 0.1 (Draft)
<Date>
Prepared By:
<Author Name>
Document Control

Document Detail
Title: The Juice Shop Web App Automation Test Strategy
Version: 0.1 (Draft)
Date: <Date>
Electronic File Name: The Juice Shop Automation Test Strategy.doc
Electronic File Location: <Shared URL>
Author: <Author Name>
Contributors: <Add names of anyone reviewing or providing information>

Change Control
Issue Date | Version | Details
<Date> | v0.1d | Working Draft, not yet for review
Referenced Documentation
Ref | Document Name | Electronic File Location | Author
1 | Case Study – The Juice Shop | <link to requirement document> | <Author Name>
Table of Contents
Document Control
Table of Contents
● Introduction
● Objective/Goals
● Automation Approach
● Return on Investment (ROI)
● Risk Analysis
● Automation Plan
● In/Out of Scope
Features to be tested
Features not to be tested
● Types of Automation Test
● Regression Testing
● Sanity/Smoke Testing
● Automation Environment
● Tool selection as per testing layers
● System Requirements
● Test execution and Test Reports
● Defect Lifecycle & Management
● Maintenance
● Metrics
● Critical Success Factors
● Responsibility Matrix
● Approvals
● Introduction
The purpose of the Juice Shop web app is to provide clients with a sophisticated online shopping experience that allows consumers from all around the city to buy their favourite juices and have them delivered to their homes. Juice Shop also has an implied goal of fostering a health-aware and environmentally sensitive community of individuals.
● Objective/Goals
The objective of this document is to define the automation test strategy that the Test Team will use while providing testing services to all of the business's projects.
This document clarifies the automation testing approach, return on investment and risk analysis, and the methods and practices that will be employed while testing the project.
Goals in test automation are:
■ Reducing testing time and cost
■ Increasing the speed of the testing process
■ Improving test coverage
■ Reducing redundancy
■ Faster quality feedback without manual intervention
● Automation Approach
Our project team will strive to deliver high-quality products using the Scrum development methodology, and the Test Pyramid concept will be applied to ensure efficient testing.
An evolutionary framework will be created which will have the following attributes:
● The automation framework will be capable of testing multiple interfaces such as UI, databases, and API integration points.
● Test automation will be repeatable and deterministic.
● The framework will be able to deal with application changes reasonably well, and test maintenance should be efficient.
● Tests will run reasonably quickly and preferably unattended.
● Tests will be capable of running on a variety of environments, such as Development, Stage, and Production.
● Tests will run in a continuous build pipeline using Jenkins for automated execution, publishing of test results, and keeping a history of test runs.
The following test methodologies will be followed:
1) Unit Tests
2) Shift Left Testing
3) Contract Testing
4) API Automation Testing
5) UI Automation Testing
Roles and Responsibilities of a QA are defined as follows:
● Analyze backlog and run brainstorming sessions
● Create high level test scenarios for functionality under test (FUT) and
review with BA
● Create detailed test cases for FUT
● Identify gaps in stories vs business requirements
● Develop UI and API automation suites
● Maintain QA Test Environments
● Perform shift left, manual functional & security testing
● Report testing status to stakeholders
● Maintain and Publish Risks, Assumptions, Issues and Dependencies at
testing level
● Return on Investment (ROI)
Running automated tests multiple times is a key benefit of test automation.
Rather than purchasing an automation tool or building a framework from scratch and
coding tests, running all the tests manually may be faster and more cost-effective if they
only need to be run a few times. However, as automated tests are run repeatedly
against new builds a break-even point is reached. Consequently, all subsequent tests
will maximize ROI and provide increased capacity and coverage (i.e., the ability to
conduct testing that was previously not feasible manually).
As a worked example: if break-even is reached after 25 test automation runs, then by the 50th run the ROI can reach about 1.75, meaning that test automation has delivered 75% more value than was invested into it.
Test automation requires significant upfront and ongoing investment, but it can pay for itself with use. Service-level or integration testing as part of a holistic test automation strategy can help to realize these gains. Relying on manual regression testing as a product grows puts both the product and possibly the organization at risk.
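The break-even dynamic described above can be estimated with a simple cost model. The sketch below is illustrative only: the per-run and setup costs are invented figures, not project data, and would need to be replaced with real estimates.

```java
// Illustrative ROI model: cumulative manual cost vs. cumulative automation cost.
// All cost figures are hypothetical examples, not measured project data.
public class AutomationRoi {

    // Cost (in person-hours) of one full manual regression pass.
    static final double MANUAL_COST_PER_RUN = 40.0;
    // One-off cost of building the framework and scripting the tests.
    static final double AUTOMATION_SETUP_COST = 900.0;
    // Cost of maintaining and triaging one automated run.
    static final double AUTOMATION_COST_PER_RUN = 4.0;

    // ROI after n runs = manual cost avoided / automation cost incurred.
    static double roi(int runs) {
        double manual = MANUAL_COST_PER_RUN * runs;
        double automated = AUTOMATION_SETUP_COST + AUTOMATION_COST_PER_RUN * runs;
        return manual / automated;
    }

    // First run count where the automation investment is recouped (ROI >= 1).
    static int breakEvenRun() {
        int runs = 1;
        while (roi(runs) < 1.0) {
            runs++;
        }
        return runs;
    }

    public static void main(String[] args) {
        System.out.println("Break-even at run " + breakEvenRun());
        System.out.printf("ROI after 50 runs: %.2f%n", roi(50));
    }
}
```

With these example figures, break-even lands at run 25; the model makes explicit that ROI only materializes once the suite is run repeatedly.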
● Risk Analysis
# | Risk | Mitigation Strategy | Impact
1 | Delays in delivering completed Test Items from Development would impact test timescales and final Release quality. | Product Management and Development to advise of any delays and adjust Release Scope or Resources to allow the test activities to be performed. | High
2 | Delays in the turnaround time for fixing critical bugs, which would require re-testing, could have an impact on the project dates. | Strong management of bug resolution would be required from Development to ensure bugs are fixed and available for re-testing in the scheduled time. | High
3 | The Test Team, Development or PM teams require domain guidance from one another and they are not available. This would delay project activities. | The Test Team, Development and PM teams to ensure they are available at critical points or contactable during the project activities. | Medium
4 | Features of Test Items will not be testable. | The Test Team will record untested features and request the PM to assess business risk in support of the release of untested features. | Low
5 | Unexpected dependencies between Test Items and service components are encountered that require revision of Test Scenarios and related Test Cases. | Information about dependencies is updated and communicated promptly to allow timely revision of Test Scenarios and Test Cases. | Low
● Automation Plan
● In/Out of Scope
Features to be tested
The following features are to be tested:
● User creation/registration
● User login
● User profile
● Browse products/product catalogue
● Checkout - Add to cart and cart modifications
● Manage Delivery address – create, update, delete and select from existing
● Payment options
● Payment gateway integration
● Confirm Order and Order Summary page
● Track order
● Review Product
Features not to be tested
The following features are not to be tested:
● Payment gateway functionality
● Accessibility
● Types of Automation Test:
Automation testing will cover the following types of testing:

Test Type | Focus | Advantage | How many
Unit Tests | Code level sanity | Very fast to write and execute | Maximum (>100)
Contract Tests | API contract level assertions | Help in identifying contract changes | Only for scenarios where there is a contract between 2 external system APIs
API Tests | Testing API functionality | Faster to write and execute than UI automation | 10-30 per API
UI E2E Automation Tests | Simulate user behavior and assert on integration with all components in real time | Helps identify defects in the integration of components in a real environment setup | <20
● Regression Testing
Automated unit and functional tests will be run continuously by the continuous integration server, so regression testing occurs frequently. As new automated unit or functional tests are created, they will be added to the regression suite. Some manual regression testing will still be required; this will be captured and managed using a Test Management tool. For each release, the existing test cases required for regression will be identified and assigned to the release in the test management tool, and these will form the regression suite for that release.
● Sanity/Smoke Testing
Selected high-priority E2E automated functional UI, API, and unit tests will form a sanity/smoke test suite. This suite will run after a new build is triggered in the pipeline, checking build stability and performing new-feature sanity testing to confirm the build is stable and good for QA. New smoke/sanity tests will be added as new features are added to the product.
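One common way to carve a smoke suite out of the larger regression suite is TestNG groups. The fragment below is an illustrative sketch only: it assumes smoke-level tests are annotated with @Test(groups = {"smoke"}), and the class names are placeholders, not real project classes.

```xml
<!-- Illustrative smoke suite definition; class names are placeholders. -->
<suite name="JuiceShop-Smoke">
  <test name="Smoke">
    <groups>
      <run>
        <include name="smoke"/>
      </run>
    </groups>
    <classes>
      <class name="com.juiceshop.tests.LoginTests"/>
      <class name="com.juiceshop.tests.CheckoutTests"/>
    </classes>
  </test>
</suite>
```

Keeping smoke membership in annotations rather than a separate test list means new features' smoke tests join the suite by tagging alone.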
● Automation Environment
We will set up the following environments for the purpose of running automation tests successfully and continuously:

Environment | Purpose
Development | Local dev laptops for code development
Test | Lower environment to run unit tests and contract tests
Stage | Higher, prod-like environment where E2E UI and API tests will run to certify further deployment to Production
Production | Live user environment
● Tool selection as per testing layers
At a high level, the following testing tools have been selected for this project:
● Project and Test Case Management – JIRA
● Contract Testing - PACT
● Automated API Testing – REST-ASSURED – JAVA Framework
● UI Automation Testing – Selenium – JAVA Framework
● System Requirements
The following detail the environmental and infrastructure needs required for the
testing of Juice Shop Test Items and execution of Regression Testing.
Hardware
● Integration Environment (Stage)
● Test Env
Software
● <Name of Bug Tracking Tool>: http://...
● <Name of Test Case Management Tool>: http://
● <Name of Automation Tool>: http://
Infrastructure
● Network connections are available on all Test Systems as required.
Test Repository
● http://…
● Test execution and Test Reports
It is important to define the execution priority of tests: individual test cases should run first, then integration tests, and finally regression tests. In addition, every change and deployment should be followed by regression testing to confirm the changes are valid and working as expected.
CI/CD tools such as Jenkins will play a crucial role in test case execution, as all automation suites should be part of the CI/CD pipeline and should be triggered automatically at the appropriate pipeline stages.
Run tests in parallel: To increase testing speed, tests will be run in parallel. Tools such as Selenium Grid allow testing across several machines with different combinations of browsers and operating systems. Selenium Grid will be integrated with Jenkins and other CI tools. Tests will be triggered automatically and their results stored together with the relevant tasks.
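TestNG's suite-level parallelism pairs naturally with Selenium Grid. The fragment below is a minimal illustrative configuration; the thread count, grid URL, and class name are assumptions to be tuned per environment, not values from this project.

```xml
<!-- Illustrative parallel run configuration for TestNG + Selenium Grid. -->
<suite name="JuiceShop-UI-Regression" parallel="tests" thread-count="4">
  <!-- Grid hub URL passed to tests as a parameter; value is a placeholder. -->
  <parameter name="gridUrl" value="http://selenium-grid:4444/wd/hub"/>
  <test name="Chrome">
    <parameter name="browser" value="chrome"/>
    <classes>
      <class name="com.juiceshop.tests.CheckoutE2ETests"/>
    </classes>
  </test>
  <test name="Firefox">
    <parameter name="browser" value="firefox"/>
    <classes>
      <class name="com.juiceshop.tests.CheckoutE2ETests"/>
    </classes>
  </test>
</suite>
```

With parallel="tests", each <test> block (here, one per browser) runs on its own thread against the grid.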
Data-driven tests: A data-driven approach to test automation will allow us to reuse
automated test scripts. New cases can be generated by changing the data stored in
external files. To get the most out of this approach, we will parametrize both the input
data and the expected output data.
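The data-driven idea above can be sketched in plain Java: one check, run once per data row, with both inputs and expected outputs parameterized. In the actual framework this would typically be a TestNG @DataProvider fed from external files; plain arrays are used here only to keep the example self-contained, and the prices are invented, not real product data.

```java
// Minimal sketch of data-driven testing: the same check runs once per
// data row. In the real framework this would typically be a TestNG
// @DataProvider reading from an external file; arrays keep the sketch
// self-contained. All figures are illustrative.
public class CartTotalDataDrivenSketch {

    // Logic under test: cart line total = quantity * unit price.
    static double cartTotal(int quantity, double unitPrice) {
        return quantity * unitPrice;
    }

    // Each row: quantity, unit price, expected total.
    static final double[][] TEST_DATA = {
            {1, 2.99, 2.99},
            {3, 2.99, 8.97},
            {0, 4.50, 0.00},
    };

    // Runs the check against every row; returns the number of failures.
    static int runAll() {
        int failures = 0;
        for (double[] row : TEST_DATA) {
            double actual = cartTotal((int) row[0], row[1]);
            if (Math.abs(actual - row[2]) > 1e-9) {
                failures++;
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        System.out.println("Failing rows: " + runAll());
    }
}
```

New cases are added by appending rows, not by writing new test methods, which is exactly the reuse benefit described above.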
Reporting: Following the execution of a set of tests, reporting will be set up so that the
team can see test automation results in an easy and understandable manner. CI/CD
tools will be configured to send out reports with email to respective stakeholders.
A test report will have the following components:
● Project Information includes the name and description of the project.
● Description of the test objective including the aim and enlisting the types
of tests.
● Test Summary including the test case execution states.
● Defect report with the description, priority, severity, and status of the
defect management process.
Tools for Reporting:
● The automation framework will be built with TestNG to better manage and capture execution status for all test cases and test suites.
● The automation framework will generate informative HTML reports using the Extent/Allure reporting libraries.
● A Jenkins automated test pipeline will be set up to execute the automated tests, surface failures, and send out HTML reports as email attachments to the respective stakeholders.
● Defect Lifecycle & Management
The ultimate purpose of automation is to maintain the quality of the product and raise defects as soon as they are introduced in the code. It is essential to articulate how automation defects will be captured and reported in the defect management tool. All defects will follow the lifecycle below:
Defect Life Cycle includes the following stages:
New: When a defect is logged and posted for the first time, its state is given as new.
Assigned: Once the bug is posted by the tester, the tester's lead approves the bug and assigns it to the development team. There are two scenarios: the defect can be assigned directly to the developer who owns the functionality concerned, or it can be assigned to the Dev Lead, who, once it is approved, moves the defect on to the developer.
Open: The state when the developer starts analyzing and working on the defect fix.
Fixed: When the developer makes the necessary code changes and verifies them, the bug status is set to ‘Fixed’. This also indicates to the Dev Lead that defects in Fixed status will be available for the tester to verify in the coming build.
Retest: At this stage the tester retests the changed code delivered by the developer to check whether the defect has been fixed or not. Once the latest build is pushed to the environment, the Dev Lead moves all Fixed defects to Retest, indicating to the testing team that the defects are ready to test.
Reopened: If the bug still exists even after the bug is fixed by the developer, the tester
changes the status to “reopened”. The bug goes through the life cycle once again.
Deferred: A bug moved to the deferred state is expected to be fixed in a future release. There are many possible reasons for deferring a bug: its priority may be low, there may be a lack of time before the release, or it may not have a major effect on the software.
Rejected: If the developer feels that the bug is not genuine, developer rejects the bug.
Then the state of the bug is changed to “rejected”.
Duplicate: If the bug is reported twice, or two bugs describe the same issue, then the status of the more recent bug is changed to “duplicate”.
Closed: Once the bug is fixed, it is tested by the tester. If the tester confirms that the bug no longer exists in the software, the tester changes its status to “closed”. This state means that the bug is fixed, tested, and approved.
Not a bug/Enhancement: The state is set to “Not a bug/Enhancement” if there is no change in the functionality of the application. For example, if a customer asks for some change in the look and feel of the application, such as changing the colour of some text, then it is not a bug but merely a cosmetic change.
● Maintenance
Automation maintenance activities include but are not limited to:
1. Enhancing the exception handling and error logging mechanism
2. Making framework more robust and scalable
3. Adding new functionalities in the framework to handle more use cases and
test cases.
4. Code refactoring
● Metrics
Key testing metrics are as follows:
● Number of test iterations planned for each test task
● Relative importance of the application system to the business
● Complexity of the application system under test
● Number of functional areas involved in the module and module integration test
● Number of system processes
● Number of scenarios per system process
● Number of test steps per scenario
● Complexity of the module under test
● Complexity of the scenario under test
● Number of other application systems in the systems integration test
● Number of test cases automated
● Number of tests failed for each automated suite run
● Number of tests passed for each automated suite run
● Total time of automation test suite run
● Test flakiness
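Two of the metrics above, pass rate per suite run and test flakiness, can be computed mechanically from stored run history. The sketch below is a stdlib-only illustration; the run data and test names are invented, and a real implementation would read results from the CI server's history.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of computing suite pass rate and flakiness from run history.
// A test counts as flaky if it both passed and failed across reruns of
// the same build. All run data below is invented for illustration.
public class SuiteMetricsSketch {

    // Pass rate for one suite run, as a percentage.
    static double passRate(int passed, int failed) {
        int total = passed + failed;
        return total == 0 ? 0.0 : 100.0 * passed / total;
    }

    // Counts tests whose results differ across reruns (flaky tests).
    static int countFlaky(Map<String, List<Boolean>> resultsPerTest) {
        int flaky = 0;
        for (List<Boolean> results : resultsPerTest.values()) {
            if (results.contains(true) && results.contains(false)) {
                flaky++;
            }
        }
        return flaky;
    }

    public static void main(String[] args) {
        Map<String, List<Boolean>> history = new LinkedHashMap<>();
        history.put("loginTest", List.of(true, true, true));
        history.put("checkoutTest", List.of(true, false, true)); // flaky
        history.put("trackOrderTest", List.of(false, false, false));
        System.out.printf("Pass rate: %.1f%%%n", passRate(45, 5));
        System.out.println("Flaky tests: " + countFlaky(history));
    }
}
```

Tracking these numbers per run makes the "Test flakiness" metric actionable: flaky tests are the ones to quarantine or stabilize first.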
● Critical Success Factors
In addition to the Project Name overall critical success factors, the following critical success
factors are specific to the Testing process:
● Testing considerations must begin in the early phases of the project.
● Test script development must be based on key project deliverables.
● Testing must be objective and must be performed by an independent test team (other than the programmers responsible for the application software).
● The problem management process must be functional as soon as testing begins, and must ensure that only valid and non-duplicated defects are processed.
● Multiple iterations for each testing task should be planned to allow for a higher density of testing for the current test iteration and scheduled fixes for the next iteration.
● Planning for the systems integration test should start early, as it will involve multiple projects, systems, and organizations.
● The scope of the regression test should be well defined.
● An automated tool should be used to perform regression testing.
● Locking, response time, and stress testing should use process-based testing scripts.
● Modules should be categorized by their relative importance to the business for defect prioritization and performance testing.
● Responsibility Matrix
The table below outlines the main responsibilities in brief for test activities:
Activity | Product Manager | Development Manager | Test Manager | QA
Provision of Technical Documents | X | X | |
Test Planning and Estimation | | | X | X
Review and Sign off Test Plan | X | X | X |
Testing Documentation | | | X | X
Test Preparation and Execution | | | | X
Test Environment Set-up | | X | | X
Change Control of Test Environments | | X | |
Provision of Unit Tested Test Items | | X | |
Bug fixes and return to the Test Team for re-test | | X | |
Product Change Control | X | X | X |
Ongoing Test Reporting | | | X | X
Test Summary Reporting | | | X |
● Approvals
The following people are required to approve the Test Strategy
Approval By | Approval
Test Manager | Pending
The Test Department Manager | Pending
Product Owner | Pending
Development Manager | Pending
Project Manager | Pending
Revision Log
Date | Version | Change Reference | Author | Reviewed by
[yyyy-mm-dd] | 0.1 | | |