Setting up a successful test automation project
White Paper
Author: Antony Edwards
built by testplant.com

Contents
1 Introduction
2 Project management
2.1 Project objectives
2.2 Project plan
3 Preparation tasks
3.1 Set up your test environment
3.2 Define your test architecture, conventions, and framework
3.3 Set up your relationships with other teams
3.4 Set up your test management
3.5 Set up your automated test execution
3.6 Set up your configuration management
3.7 Train the team
4 On-going activities
5 Appendix: Checklist
6 Appendix: Tools selection
1 Introduction
Test automation can deliver huge benefits in terms of time-to-market, quality, productivity, and auditability to almost any
team creating or deploying software. For example, TestPlant has worked with a leading global news publisher to reduce
their app update cycle from three weeks to two days, a major UK bank to reduce post-release defects by 65%, and one of
the world’s top five retailers to double the number of apps they are delivering without increasing the size of their test team.
These stories are not uncommon – test automation really has delivered amazing benefits to lots of companies.
But there are also a large number of teams that have been trying to roll out test automation for years and have not
achieved any tangible benefits, often despite significant investment of effort. Why?
TestPlant has worked closely with hundreds of companies to successfully deploy test automation, and has analysed many
failed test automation efforts. Based on this experience we have identified two critical success factors that are common in
successful efforts and missing in failed efforts.
Successful test automation deployments …
• … are properly project managed.
• … include, on a day-to-day basis, someone who has successfully deployed test automation before.
These critical success factors are obvious when you read them, but the fact is that many (and possibly most) new test
automation deployment efforts going on in the world today fail both these criteria, and will most likely not achieve their
objectives as a result.
This guide is all about helping you set up a successful test automation project. It is a guide, a template, and a checklist for
defining a high-quality project plan for the deployment of test automation.
Note that this guide assumes that a high-level decision to deploy test automation has already been taken,
i.e. the project does not have to create a business case. It also assumes that a test automation tool has been selected. A
future version of this guide may include these elements if requested so please provide feedback.
2 Project management
Test automation deployments must be properly project managed. Any reasonable project management methodology can
be used (e.g. Agile, PRINCE2, CCPM), as long as it is applied diligently.
The following project management elements are essential:
• A Project Manager. A Project Manager with sufficient time to dedicate to the project, an understanding of test
automation, an understanding of the product being tested, and enough technical knowledge to understand test
environment issues.
• Objectives. Clearly defined outcome and output objectives (see below).
• A Project Plan. A clear and maintained project plan. Section 3 defines a set of standard preparation tasks that
should be included in the project plan and Section 4 defines the standard on-going activities that should be
included in the project plan.
• Regular Reviews. Regular and frequent project reviews.
2.1 Project objectives
What do you want to accomplish with test automation? This sounds like a simple question, but the answer is often difficult
to quantify, yet it is critical to a successful project.
Test automation projects should have both output objectives and outcome objectives. Output objectives are the tangible
outputs and activities of the project, e.g. a set of automated test scripts for a product’s smoke tests. Outcome objectives
are the value that these outputs will provide, e.g. a reduction of a test cycle from one week to two days. Output objectives
are key to keeping a project focussed and ensuring delivery; outcome objectives are key to ensuring the project delivers
value to the team.
The following list presents best-practice for defining project objectives:
• Test automation can deliver huge benefits, but like any capability it needs to be built up, and you should avoid
being too ambitious in your first project. It is strongly recommended that you run a pilot project with short
timescales and valuable but realistic objectives, before diving into a larger project. Set your goals accordingly.
• Many new test automation deployment projects focus entirely on automating the existing manual test process.
Manual test processes typically leave out effective test activities (e.g. regression testing or compatibility testing)
because they are simply infeasible in a manual testing approach, but these activities are where test automation
can deliver significant benefits. Conversely, there are some test activities where automation will not deliver
significant benefits. So when you are defining your output objectives, consider which testing activities will
contribute towards your outcome objectives, and then which activities could benefit from automation.
2.2 Project plan
Regardless of the project management methodology, the plan should include the project objectives, the scope of the
project, the timeline and milestones, a list of resources (especially people) available to the project, and a list of tasks to
deliver the scope.
3 Preparation tasks
This section lists a set of critical preparation tasks that should be included in your project plan.
3.1 Set up your test environment
An inadequate test environment is possibly the most common cause of wasted time and unreliable test results. It is
essential that you design and set up a reliable test environment before you start significant test script creation and
execution. Note that setting up a reliable test environment is not difficult if planned and resourced properly, but if it is done
in an ad-hoc manner it usually causes problems.
You should consider the following when setting up your test environment:
• Ensure all the necessary test systems are included in the environment. If you are testing mobile there are many
references available on the Internet to help you define a representative set of devices for your target audience.
• Ensure there are enough test systems so that no-one is ever waiting. Consider manual testers, people creating
and debugging automated test scripts, and automated test execution. It is very common to see manual testers
accidentally break an automated test run by trying to use the test system at the same time.
• Ensure all testers will have access to the test environment. It is very common for a test environment not to be
accessible from some test team locations; this wastes time as testers must travel and work away from their
normal environment.
• Ensure all dependencies are in-place and reliable. For example, if you are testing a mobile application that
connects to a server, you must have a reliable server to test against. Note that this includes all test data (e.g. test
user accounts).
• Ensure the test environment is fully reliable before you start using it. It is extremely frustrating to investigate a
failed test script only to find out that it is an environment issue.
Finally – ensure the test environment is fully change controlled. This is potentially the most important item because
without change control even a perfectly setup test environment will most likely create problems as the project progresses.
Without change control unexpected versions of the application-under-test will be installed on test systems, configurations
will be changed, test systems will be removed, browser versions will be upgraded, OS versions will be upgraded, network
cables will be unplugged and so on.
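The change-control point above can be made concrete with a small manifest check. The Python sketch below is illustrative only (the component names and manifest shape are invented, not part of any tool): a change-controlled manifest records what each test system should contain, and any drift is reported before test runs start.

```python
def check_environment(expected, actual):
    """Compare a change-controlled environment manifest against a test system.

    `expected` and `actual` are dicts mapping component name -> version string
    (an illustrative shape). Returns a list of human-readable discrepancies;
    an empty list means the environment matches the manifest.
    """
    problems = []
    for name, version in expected.items():
        if name not in actual:
            problems.append(f"{name}: missing from test system")
        elif actual[name] != version:
            problems.append(f"{name}: expected {version}, found {actual[name]}")
    for name in actual:
        if name not in expected:
            # Un-approved software on a test system is itself a change-control failure.
            problems.append(f"{name}: present but not in the change-controlled manifest")
    return problems
```

Run before each test cycle, a check like this catches the silent browser upgrade or unexpected application version before it wastes hours of failure investigation.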
3.2 Define your test architecture, conventions, and framework
Before you start creating automated test scripts for your test-cases you should define your test architecture, conventions,
and framework (referred to simply as ‘framework’ from here).
The key purpose of the framework is to facilitate:
• Modularization and re-use of common functions (e.g. login).
• Testers being able to easily execute, review, and maintain tests written by other testers.
• Consistency of results reporting.
There is a temptation at this point to try to define the set of common functions that will be needed and pre-implement
these functions before testers start working on implementing scripts for test-cases. We recommend against this; our
experience shows that many of the functions implemented are never used, many of the functions that are used have to be
significantly re-factored to be used effectively from scripts implementing test-cases, and there is a de-motivational effect
of putting in a lot of scripting effort and not having any automated test-cases to show for it. Instead we recommend that a
clear approach to modularization is defined up-front and then this repository is built-up as needed, e.g. the first person to
implement a test script for a test-case that requires a “log in” step should realise that this is likely to be a common function
and add their implementation to the repository.
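As an illustration of this modularization approach, here is a hypothetical shared "log in" function in Python. The session object, field names, and welcome banner are invented stand-ins for whatever primitives your automation tool actually provides; the FakeSession class exists only to make the sketch self-contained and runnable.

```python
class FakeSession:
    """Stand-in for a real automation-tool session, used here only for illustration."""
    def __init__(self):
        self.fields, self.clicked = {}, []
    def type_text(self, field, text):
        self.fields[field] = text
    def click(self, element):
        self.clicked.append(element)
    def read_text(self, element):
        # Pretend the app shows a welcome banner after a successful login.
        if element == "banner" and "login_button" in self.clicked:
            return f"Welcome, {self.fields.get('username_field', '')}"
        return ""

def login(session, username, password):
    """Shared login step: the first tester to need it adds it to the repository,
    and every later test script reuses it instead of re-scripting the flow."""
    session.type_text("username_field", username)
    session.type_text("password_field", password)
    session.click("login_button")
    banner = session.read_text("banner")
    if banner != f"Welcome, {username}":
        raise AssertionError(f"login failed: banner read {banner!r}")
```

The point is the convention, not the code: one agreed home for common functions, grown on demand rather than pre-built.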
3.3 Set up your relationships with other teams
A test team does not work in isolation. They work with the development team to get new versions of the application-under-test and raise defects. They most likely work with other teams such as the marketing team to understand requirements or
the IT team to set up their test environment.
It is important to set up your relationships with these other teams: meet the key people you will be working with,
understand how they work, explain how you intend to work, understand the infrastructure each side is using, and
discuss how to make the interfaces productive.
For example, some questions you would ask the development team:
• Do they have one person who reviews and triages all defects or do defects go directly to the developer who
worked on a feature?
• Do they have a release manager who is responsible for communicating all releases?
• What information do they need in a defect report?
• When a new defect is raised typically how long will it be until someone looks at it?
• If something isn’t working in the product and it’s really blocking the test team who is the best person to speak to?
3.4 Set up your test management
If you want to use a test management tool such as qTest, Zephyr, TestLink, or HP ALM, this tool should be set up and
configured before you start.
3.5 Set up your automated test execution
You will almost certainly want to execute test runs automatically overnight or on check-in, and to be able to start a
test run manually. You may do this via a test management tool, a tool focussed on test execution such as eggPlant Manager,
or a continuous integration tool such as Jenkins.
Again – this should be fully set up and configured before you start.
This step may again seem obvious, but we have seen many projects waste a lot of time due to badly setup test execution.
For example, test execution frameworks that are error-prone to configure and so end up executing tests against the
wrong test devices, frameworks that do not save the logs needed to debug errors, frameworks that stop working at
2am due to back-ups starting, and frameworks that have no mechanism for a tester to manually start a test run and so
changes can only be verified overnight.
Note – many people use test automation tools (e.g. eggPlant Functional and eggPlant Performance) for monitoring of
live systems; i.e. they take the test scripts written during development and reuse them for application-level monitoring
of the live system. In such scenarios this step is even more important, and the automated test execution must be carefully
tested.
3.6 Set up your configuration management
You should set up a configuration management tool (SCM) such as Git or Subversion to store your test scripts; and
automated test execution should always use a defined set of scripts from the SCM.
Using an SCM facilitates sharing, ensures all script changes are logged, and ensures all scripts are frequently backed up.
A common scenario on test automation projects is for a test script that has been passing to suddenly start failing;
everyone says they have not changed the script at all, but after several wasted hours it is discovered that someone
clearly made a quick unverified change to the script. An SCM means you can resolve such issues in minutes rather
than hours.
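The "who changed this script?" investigation above is exactly the question an SCM log answers (e.g. `git log -- <script>` in Git). The Python sketch below is an illustrative stand-in: the commit-dict shape is invented for the example, representing a history already fetched from the SCM, newest first.

```python
def last_change(history, script):
    """Find who last touched a script, given an SCM history (newest first).

    `history` is a list of commit dicts with "author", "date", and "files"
    keys -- a hypothetical shape standing in for real `git log` output.
    Returns (author, date), or None if the script was never changed.
    """
    for commit in history:
        if script in commit["files"]:
            return commit["author"], commit["date"]
    return None
```

With every script change logged, the "nobody changed anything" conversation is replaced by a one-line query.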
3.7 Train the team
Ensure all testers have been properly trained on the testing tools being used, the test environment, configuration
management, the test architecture, and so on.
Successful test automation teams usually also set up mechanisms for team members to share best practice, useful
resources (e.g. links to videos, example scripts), and other useful information. This is typically a mix of on-line
mechanisms (e.g. a wiki) and regular meetings.
4 On-going activities
Section 3 listed a set of preparation tasks that should be included at the start of your project plan. This section lists the
key on-going activities for the main ‘implementation’ stage of your test automation project.
The most important point of this section is that test automation projects are not just about writing new test scripts! There
are several other crucial activities that must be resourced and managed properly. A common cause of project failure is to
only allocate enough people to do the actual initial test script creation; this means that the other activities are done in a
low quality way (or not at all), which can quickly undermine the whole project.
So your project plan must consider all the on-going activities listed below.
• Test-case definition. Test-case definition and creating an automated test script to execute the test-case are
separate tasks and should be managed separately. You should think about test-case definition as ‘design’ and
creating automated test-scripts as ‘implementation’. If you try to design and implement at the same time you are
likely to waste time. The most successful test automation projects always define and review a test-case before a
test script is created.
• Framework development. Your test framework needs to be maintained for the benefits of modularization and
standardization to be realised. If you are using a mature test automation tool and set up a solid framework at the
start of the project this should not be a lot of effort, but it needs to be done regularly.
• Test script development (first platform). The majority of the effort on the project should be dedicated to
creating automated test scripts. Most automated test scripts these days must work across multiple platforms.
Tools such as eggPlant Functional provide a lot of support for easily porting automated test scripts across
platforms, but some work is usually required, and this porting work is often not considered in the project plan.
This is why we distinguish between ‘first platform’, i.e. creating the original automated test script for a test case
for one platform, and ‘other platforms’ (below) which is about ensuring this script works on other platforms.
• Test script development (other platforms). See above bullet point.
• Test execution, reviewing results, and raising defects. Someone in the team needs to ensure that overnight
test runs (or whatever approach you have chosen) are set up correctly, and they need to review the results.
Reviewing results can require significant effort for applications-under-test that change frequently or are of low
quality, or when a large number of new automated test scripts are being added to the environment. But diligent,
methodical review of test results is the key to achieving reliable test automation and getting value from the
results.

• Test script maintenance. Existing test scripts must be updated from time-to-time as the underlying application-under-test changes, as the target test systems change (e.g. a new mobile device is added to the test
environment), and as defects are found in test scripts. The level of test script maintenance effort required can be
very different between different test automation tools and should be a key part of your tools choice evaluation.
• Environment management. Your test environment must be actively maintained; i.e. someone must ensure the
environment remains clean, ensure enough test systems are available, and approve changes to the environment
(e.g. new versions of the application-under-test).
• Project management. Too many test automation projects stop doing project management once the project is
running. You must continue to monitor progress, resolve issues, mitigate risks, and ensure the project is going to
meet its objectives.
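As one illustration of making results review methodical, the sketch below groups failures by error signature so a single root cause (e.g. one environment fault failing fifty scripts) surfaces immediately instead of being investigated fifty times. The result-tuple shape is an assumption for the example, not any particular tool's report format.

```python
from collections import Counter

def triage(results):
    """Group failed results by error signature, most common first.

    `results` is a list of (test_name, passed, error_message) tuples --
    an illustrative shape. Reviewing the most common signature first
    usually means fixing one root cause rather than many symptoms.
    """
    failures = [(name, err) for name, passed, err in results if not passed]
    signatures = Counter(err for _, err in failures)
    return signatures.most_common()
```

A reviewer working from this grouped view can quickly separate environment issues from genuine product defects and script bugs, which is the heart of the results-review activity.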
Your project plan needs to ensure that all these activities are included and resourced. The effort required for each will
depend on your objectives, application-under-test, existing assets, etc.; but they all need to be considered and covered.
Note: when reviewing this document TestPlant’s Customer Success Managers all agreed that ‘reviewing results’
(and acting on them) is the on-going activity most projects get wrong. If you aren’t carefully reviewing your test
results and acting on them then you will not get value out of test automation and you will not have a robust set of
tests. 
As one example, a typical resource allocation for a project team is:
• Test-case definition – 2 people (this assumes test-cases don’t already exist).
• Test script development and test script maintenance – 4-5 people.
- 55% Test script development (first platform).
- 15% Test script development (other platforms).
- 20% Test script maintenance.
- 5% Framework development.
• Test execution and environment management – 1 person.
• Project management – 0.5 people.
5 Appendix: Checklist
The following checklist should be used at the start of the implementation stage of a test automation deployment project to
ensure it has been set up correctly.
 Planning:
- Project manager assigned.
- Objectives defined (business and technical).
- Baseline project plan created.
- Regular project review mechanisms defined and scheduled.
 Preparation tasks:
- Test environment set up and tested.
- Relationships with other teams established.
- Test architecture, conventions, and framework defined.
- Test management tool set up and configured.
- Automated test execution framework set up.
- Configuration management defined and infrastructure set up.
- Team trained.
 On-going activities:
- All on-going activities included in the baseline project plan – test-case definition, framework development, test
script development (first platform), test script development (other platforms), test execution and review, test script
maintenance, and environment management.
6 Appendix: Tools selection
This appendix presents high-level guidance for selecting a test automation tool.
• Beware of being ‘tool led’. Define your objectives and needs and then select a tool.
• Technical Fit. Can the tool test all (or the vast majority) of your scenarios and environments (mobile, desktop,
server); and is it future-proof?
• Productivity. Can your current team quickly and easily achieve automation and get benefits with the tool?
Otherwise an automation project can quickly become a long organisational change project.
• Ease-of-maintenance. Are scripts robust and quick to maintain?
• Integration. Can the tool plug-in to your other testing and development tools?
• Beware of ‘showroom appeal’. Many features look great in a demo (record-and-playback), but simply don’t
work in real environments.