Successfully Automating Functional and Regression Testing
with Open Source Tools – Detailed Notes
By John Lockhart, john@webtest.co.nz, www.webtest.co.nz
Successfully Automating Functional and Regression Testing with Open Source Tools – Detailed
Notes .................................................................................................................................................... 1
Part 0: Overview .............................................................................................................................. 1
What we will cover ...................................................................................................................... 1
Part 1: The profession ...................................................................................................................... 2
Why test? ..................................................................................................................................... 2
Perceptions of testing ................................................................................................................... 2
Task/Job/Career/Profession/Specialty .......................................................................................... 3
Other disciplines within software development ........................................................................... 3
The broader perspective: Software Engineering .......................................................................... 5
A Software Engineer Specialising in Testing ............................................................................... 6
Part 2 – Key principles from other areas to apply ............................................................................ 8
Lessons for testers from Software Engineering’s other disciplines ............................................. 8
Lessons for testers from outside Software Engineering ............................................................... 8
All about Agile ............................................................................................................................. 9
Part 3 – Examples .......................................................................................................................... 12
Example 1 – Small successful development team from the 90's ............................................... 12
Example 2 – Internet booking application with development team of 20 ................................. 12
Example 3 – Corporate struggle................................................................................................. 13
Example 4 – Corporate success, so far anyway ......................................................................... 14
Part 4 – Automating ....................................................................................................................... 16
The two main causes of failure and keys to success .................................................................. 16
Principles and Architecture ........................................................................................................ 16
Part 5 – Overview of Fitnesse and associated tools ....................................................................... 20
The Fitnesse Architecture........................................................................................................... 20
Fitnesse Features ........................................................................................................................ 20
New Fitnesse Developments and Add-Ons ................................................................................ 20
Part 6: Additional useful skills and tools of the trade for testers ................................................... 22
Part 7 – The path to success and happiness.................................................................................... 25
Part 0: Overview
What we will cover
Testing is perceived as the least important and least professional discipline in software development, and it lags behind the others. How can we overcome this, both emotionally and practically? We will examine our beliefs and views of testing, as these control our level of success. We will then apply principles from more mature disciplines to testing and learn from them.
Essential to effective and satisfying testing is the appropriate approach to automation. Examples will demonstrate the importance of the right people with the right priority and illustrate some successful and unsuccessful dynamics. We then delve more deeply into functional automation using open source tools in an agile manner, with a focus on architecture and maintenance, and look at Fitnesse as an example.
Successfully Automating Functional and Regression Testing with Open Source Tools – Detailed Notes
John Lockhart - john@webtest.co.nz
STANZ 2008, Wellington, New Zealand
Page 1 of 21
We will see that the biggest risk for automation is failure in maintenance and that the keys to
avoiding this are firstly having the right people with the right priority and secondly having the
correct test architecture.
We combine all this with a suggested model of software testing as a specialty within the general
profession of software engineering in order to define what we need to do to deliver the most value
and satisfaction. After reviewing the overall attitudes as well as the specific tools and skills
required, you should be well placed to know what you need to do to succeed individually or with a
team that you manage. You should have a clear vision of what success is and the knowledge,
distinctions and drive to create the change required.
Part 1: The profession
What you believe about yourself, your work, the world and what will give you pleasure or pain
drives your level of enjoyment and satisfaction and also drives your behaviour. Your attitude and
behaviour in turn drive, or limit, your success. So it is critical to examine these beliefs rather than to
just absorb them unquestioningly as you go through life.
Why test?
There are many possible answers, some more useful than others:
 “Ensure no bugs.” Impossible – you can prove bugs exist by finding them, but you can't prove there are no bugs, so this sets you up for failure when one “slips through”.
 “Ensure all functionality is delivered correctly.” Sounds good but according to what
standard? Usually this is a functional specification or requirements document, but I've yet to see one that was interpreted identically by all readers and had no omissions.
 “Ensure key workflows are not totally broken.” This doesn't sound so smooth, but is the
most basic and important of regression and integration testing principles. If you test that
some examples of the most critical scenarios through the system work end to end, you have
mitigated the highest risk – that the release is broken in some fundamental way.
 “Stop defects entering production.” Testing as a quality gate for production is traditional e.g.
I heard it from an IBM consultant recently. The problem is that the Japanese quality revolution, which was based on Deming’s work, held as a key concept that quality is the responsibility of the whole team. If you accept the quality police role it creates an “us and them” situation which
is not useful. It also has the same failing as the first reason in that some defects will always
enter production.
 “Assess risk or provide information regarding quality.” This is a balanced view of what
testers do within the larger team and reflects the fact that it is not usually their decision how
much testing can be done or when the software should be deployed.
 “Assist the team to improve quality.” This is the agile model and works very well in close
teams encouraging a culture of quality throughout the development process and lets the
testers add value in the maximum possible ways.
 “Legal or contractual reasons.” This and other specific test requirements will often exist and
can be covered in most of the models given above.
Perceptions of testing
 “We will pull people in who are not busy from elsewhere in the business to do the testing.”
Implies anyone can do it which implies no skills required. Even supermarkets require some
training before staff are let loose in any role!
 Testing is a necessary evil and a cost centre that does not deliver business value. Often this is
a management view but sometimes exists in the development team as well.
 Testers are pedants with no business sense or with no technical sense.
 Testers are failed developers or are learning to be a developer or a business analyst.
Sometimes testing is not even recognised as worthy of a title so testers have to be classified
as a junior business analyst etc.
If testing really is like that and doesn't require specific skills or experience then it is by definition of
little value – unskilled labour. Do you believe any of the above? Do people around you? How can
we correct this? What would the opposite belief be? Perhaps:
“Testers have general knowledge of both business and technical areas, and specific skills that produce great value in terms of quality, with impressive efficiency and business focus, reducing overall product risk and cost; they find it a satisfying specialty providing opportunities unmatched by other roles.”
If you don't believe in yourself and your role, backed by sound and specific reasons for those beliefs, others certainly won't believe in you either.
Task/Job/Career/Profession/Specialty
Think about the importance of a name. Say to yourself “a job”, “a career”, “a profession”. For me
very different things come to mind.
A job is something that never changes and you probably don't enjoy but just do for the money. That
pay probably isn't much and is unlikely to increase unless you work overtime. You might train for
maybe a day or a week when you start as defined by the boss but are unlikely to have ongoing
training or development.
A career has a “path” or “ladder” of progress up a salary scale but training and how you proceed is
as much or more your boss’s responsibility as your own. It is likely to require a degree and salary
range might be $40 000 to $80 000. Some job satisfaction is likely.
A profession has connotations of being responsible for your own progress, level of expertise,
income and of being something you are really interested in. Salaries are probably higher and getting
well into the $100 000+ range.
A specialist takes it further with an expectation that you do what you need to do to maintain world
class expertise including travelling to attend training, reading professional journals, and perhaps
even doing research in a particular niche. There is no hard limit on earning potential. Think of a
doctor who then does two additional years of training to specialise.
Which set of perceptions is true of you? Which would you like to be? Whichever of those labels
you choose, do you have goals that will get you there?
Other disciplines within software development
I believe testing is perhaps the most interesting area in software development, because it is at the
cusp of a revolution. That is why I moved to it from the more mature area of project management.
Let's compare it to some of the other roles in software engineering:
Architect:
 Started around 1970. Service Oriented Architecture (SOA) perhaps represents the maturing
of a generic architectural model that is not hardware dependent, as previous ones were.
 $95,000 to $135,000.
 UML, EABOK, Zachman framework. SOA. Microsoft has a stringent qualification for this (see http://www.microsoft.com/learning/mcp/architect/); others exist, such as http://www.opengroup.org/itac/ and http://www.sei.cmu.edu/architecture/certificate_program.html.
 The ultimate technical position partly due to the responsibility of managing the technical
complexity.
 Agile has a somewhat different model for the architect role, but the architect is still the technical leader.
Project Manager:
 Started in the 1950s and matured in the 1960s.
 Pay $80,000 to $130,000
 Defined by Project Management Institute (PMI) (1969) with body of knowledge PMBOK
and professional qualification (PMP) with stringent initial and ongoing requirements and
professional support. Many universities have Masters courses, etc.
 “Prior to the 1950s, projects were managed on an ad hoc basis using mostly Gantt Charts and informal techniques and tools.” Does that sound like testing? Is testing 50 years behind project management in its development?
 Scrum (1986/91) is challenging traditional project management theory for software. It is part of agile and uses a rugby game rather than a relay race as its metaphor: the team works together rather than passing the baton.
Programmer/Developer/DBA:
 In 1954 FORTRAN, the first high level programming language, was invented. The field matured in the 1990s with Object Oriented styles standardising on Java/C#.
 $65,000 to $95,000
 Sun and Microsoft have certifications, and some IT degree is usually present. There are communities but no professional body. As with other “Engineers”, most serious software developers these days would be expected to have four years of university training.
 Object Oriented approach led to standardisation on Java / C# for most large scale
development. Extreme Programming (XP) and Agile have revitalised teams and can be seen
as the discipline of programming taking responsibility for its own standards and methods
rather than relying on managers to tell them what to do with some Software Development
Life Cycle (SDLC).
Business Analyst:
 History and origins not clear. Perhaps maturing with UML etc., though it is often not used in practice by BAs.
 $65,000 to $95,000
 Certification in initial stages. UML provides some shared language. The International
Institute of Business Analysis (http://www.theiiba.org/) provides a certification program for
business analysts (Certified Business Analyst Professional or CBAP), as well as providing a
body of knowledge for the field (Business Analysis Body of Knowledge, or BABOK).
 Like testing often perceived as something with no specific skills that anyone from the
business or with an IT degree can do and also as a path into IT say for a support or call
centre person. Like testing many BAs lack the tools and skills they need.
Tester/ Test Analyst / Test Manager:
 The separation of debugging from testing was introduced by Glenford J. Myers in 1979. Not even the names are standardised. Originally testing was not even included as a role within XP but
that has now reversed in Agile. Test Driven Development (TDD) and the more recent
Acceptance Test Driven Development (ATDD) are fascinating paradigm shifts with massive
implications for the role and value of testing.
 $50,000 to $95,000
 Certification in initial stages. ISTQB Foundation and just this year Advanced certification
(equivalent to ISEB). Perhaps more importantly, these provide a BOK, though there is controversy over it from competing models, e.g. AST (http://www.associationforsoftwaretesting.org/). CSTE and
CSQA also have a BOK: http://www.softwarecertifications.org/cstebok/cstebok.htm. CSTP
(see http://www.iist.org/cstp.php, and http://www.testinginstitute.com/certification.php and
http://www.qaiworldwide.org) has prerequisites and ongoing requirements for recertification
but they are extremely minimal e.g. 1 year's experience and 10 days of training to be
certified. None of these have a professional body of any significance.
 Perceived as something with no specific skills that anyone from the business can do and also
as a path into IT, say for a support or call centre person. Tools and methods for automation are unchanged from the 90s, using the outdated Computer Aided Software Engineering (CASE) model with large custom proprietary tool suites based around databases. The tool vendors as
a rule don't even use their own tools. The agile community has found an alternative to this.
Note: The above are my estimates for salary. See http://www.robertwalters.com/resources/salarysurvey/New%20Zealand%202008.pdf and http://www.absoluteit.co.nz/absolute/absoluteweb.nsf/SalarySurvey/$FILE/Remuneration%20Survey%20AbsoluteIT.pdf for some more credible data, and http://www.payscale.com/mypayscale.aspx for an interesting site that calculates your current salary range, among other interesting stuff.
The broader perspective: Software Engineering
Software Engineer:
 Term appeared in late 50's with conferences in late 60's. The profession grew in response to
the “software crisis” of those and subsequent decades. Dijkstra's 1968 article "A Case
against the GO TO Statement", is regarded as a major step towards the widespread
deprecation of the GOTO statement and its effective replacement by structured control
constructs, such as the while loop. This methodology was also called structured
programming. This was the birth of the profession. Is test automation struggling to learn
these lessons of 40 years ago?
 $70,000 to $90,000 (omitting architects and high value specialists)
 As of the 2004 edition, the SWEBOK guide defines ten knowledge Areas (KA) within the
field of "software engineering". Testing and Quality are two of them. Software Engineering
2004 (SE2004) provides a competing view. Both are IEEE sponsored. There are plenty of
degree and post grad courses. Lack a central certification but some well established
professional bodies e.g. Computer Society. http://www2.computer.org/portal/web/csda/prep
has a list of resources for their software engineer certification. For example have a look at
the sample questions at http://www2.computer.org/portal/web/csda/test to get some idea of
how much you may already know.
Let's look into Software Engineering to get a better context for testing:
Some quotes from the SWEBOK introduction:
WHAT IS A RECOGNIZED PROFESSION?
For software engineering to be fully known as a legitimate engineering discipline and a
recognized profession, consensus on a core body of knowledge is imperative. This fact is
well illustrated by Starr when he defines what can be considered a legitimate discipline and
a recognized profession. In his Pulitzer Prize-winning book on the history of the medical
profession in the USA, he states, “The legitimization of professional authority involves three
distinctive claims: first, that the knowledge and competence of the professional have been
validated by a community of his or her peers; second, that this consensually validated
knowledge rests on rational, scientific grounds; and third, that the professional’s judgment
and advice are oriented toward a set of substantive values, such as health. These aspects of
legitimacy correspond to the kinds of attributes—collegial, cognitive, and moral—usually
embodied in the term “profession.”
WHAT ARE THE CHARACTERISTICS OF A PROFESSION?
Gary Ford and Norman Gibbs studied several recognized professions, including medicine,
law, engineering, and accounting. They concluded that an engineering profession is
characterized by several components:
 An initial professional education in a curriculum validated by society through accreditation
 Registration of fitness to practice via voluntary certification or mandatory licensing
 Specialized skill development and continuing professional education
 Communal support via a professional society
 A commitment to norms of conduct often prescribed in a code of ethics
This Guide contributes to the first three of these components.
A Software Engineer Specialising in Testing
Pages 31 and 32 of the 234 page SWEBOK Guide at http://www.swebok.org/ list the ten Knowledge Areas of SW Engineering, of which Testing is one and Quality is another. The full list is:
 Software requirements
 Software design
 Software construction
 Software testing
 Software maintenance
 Software configuration management
 Software engineering management
 Software engineering process
 Software engineering tools and methods
 Software quality
Perhaps to really be a professional tester you need understanding and some skill in each of these.
For example:
 Do you understand configuration management to the level that you can manage at least test
resources effectively, and can access other resources as required? If not, start using Subversion or some other similar tool so you do understand the concepts.
 Do you understand requirements enough to validate them and to test against even technical ones such as UML? If not, see the excellent STANZ talk on this from two years ago.
 Do you understand project management enough to capably run a testing project or the testing
component of a project? Can you define scope and deliverables, manage budget, timelines,
staffing, suppliers, reporting and communications?
 Do you understand software construction enough to properly construct a test suite that is
maintainable? And programming well enough that you can automate simple tasks in some
language?
 Do you have skills with XML/XSL, Databases, text files and tools such as RegEx to work
with each of these, sufficient to use log files and other sorts of results effectively?
 Do you understand traditional waterfall SDLC and differences from Agile?
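The log-file bullet above can be made concrete with a small example. This is an illustrative Python sketch only; the log format, field names, and sample messages are assumptions to be adapted to your own application's actual logs.

```python
import re

# Assumed log format: "YYYY-MM-DD HH:MM:SS LEVEL message".
# Adjust the pattern to whatever your application actually emits.
LOG_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+"
    r"(?P<level>ERROR|WARN)\s+(?P<msg>.*)$"
)

def extract_problems(lines):
    """Return (timestamp, level, message) tuples for ERROR/WARN lines."""
    problems = []
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            problems.append((m.group("ts"), m.group("level"), m.group("msg")))
    return problems

# Hypothetical sample log lines for illustration:
sample = [
    "2008-09-01 10:15:02 INFO  Booking created",
    "2008-09-01 10:15:03 ERROR Payment gateway timeout",
    "2008-09-01 10:15:04 WARN  Retrying payment",
]
print(extract_problems(sample))
```

Being able to write a filter like this yourself, rather than asking a developer for log extracts, is exactly the kind of self-sufficiency the bullet points describe.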
All of these skills can be learnt in a degree course, in commercial courses at considerable cost, or in
your own time at next to no expense, or perhaps even on your boss’s time. But should your boss pay
for the time for you to become more highly skilled, so that you can then ask for a higher salary and they have to replace you in your current role? Or are you doing that primarily for your own benefit, and so should you take responsibility for it?
To be a software engineer specialising in testing implies general skills such as those above, an
understanding of the SDLC and Agile as well as specific testing expertise. Can you provide general
value in software development as well as specific value in testing that others can't?
Look at your self-image: Tester. Now imagine you meet a medical professional at the top of their
field. How might they describe themselves? As a “people fixer”? More likely they would say they are an Ophthalmologist and, if asked to explain, that they are a Doctor who specialises in eyes. So are you a
software tester or are you a Software Engineer or Software Developer with specialist skills and
qualifications in Testing? Or perhaps you are a software quality specialist? It may sound silly but
how you think of yourself is the most important predictor of what you achieve and how others view
you.
Part 2 – Key principles from other areas to apply
Specialists like to look broadly and think freely to get new ideas and insights and inspiration. Fresh
perspectives give useful ideas, keep work interesting and broaden skills.
As an exercise in this develop the habit of being contrary and questioning beliefs. Consider the
opposites of what you currently think e.g. Do you need bug tracking software, management
software, release cycles, test scripts?
You can start to collect your own ideas but below are some to get you started:
Lessons for testers from Software Engineering’s other disciplines
Be familiar with at least one relevant body of knowledge – preferably a broad one e.g. Software
Engineering, and a specific one e.g. ISTQB. Stay up to date. Read books!
Consider certification e.g. ISTQB or AST or a more general one. As an automator a developer
certification e.g. Java might be more useful than a testing course.
Consider values – PMI has ethical standards and that is one thing that distinguishes professionals
and specialists from “cowboys”.
Consider joining a professional group e.g. Computer Society and Test Professionals Network.
Be part of a group where you are exposed to ideas and can learn from peers. There are many suitable
online forums.
See Lee Copeland’s “Forgettings” talk from STANZ 2006 which covered much of this.
Lessons for testers from outside Software Engineering
A couple of principles from Just in Time theory:
 Preventative maintenance (Application: Run automated tests every night even if no build so
that any problems with the environment for any reason are resolved before the pressure is on
and without confusing test results)
 Reduce Setup Times (Applications: Automate deployments, smoke test)
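The "smoke test" application above can be sketched in a few lines. This is a hypothetical Python sketch, not a tool from the talk: the URLs are invented, and the fetch function is injected so that a real run could pass something like urllib.request.urlopen while the logic itself stays easy to exercise.

```python
def smoke_test(urls, fetch):
    """Return a dict of url -> True/False for whether each fetch succeeded.

    A smoke test only asks "does it respond at all?" before any deeper
    testing starts, which is what keeps setup times short.
    """
    results = {}
    for url in urls:
        try:
            fetch(url)
            results[url] = True
        except Exception:
            results[url] = False
    return results

# Stubbed fetch for illustration; a real run might use urllib.request.urlopen.
def stub_fetch(url):
    if "broken" in url:
        raise IOError("connection refused")

# Hypothetical critical pages:
critical_pages = ["http://example.test/login", "http://example.test/broken"]
results = smoke_test(critical_pages, stub_fetch)
print(results)
```

Run automatically after every deployment, even a check this shallow catches broken environments before they waste anyone's testing time.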
And from Total quality control:
 Automatic inspection (Applications: Automated execution but also automated source
inspection for errors or excessive complexity or coverage)
 Quality measures (Application: How do we know testing worked? How do we know quality
is improving?)
 Fail-safe methods (Applications: System should respond to even unexpected errors in a way
that is likely to avoid catastrophe. System should prevent errors where possible e.g. Provide
list instead of free form, limit entry length.)
Various management theories mostly from a manufacturing context that focus on constraints:
 They talk about how the only process worth optimising is the one “weak link in the chain”
that constrains your delivery (Application: What is the bottleneck? Is it debugging? If so find
bugs earlier. Is it test environment instability? If so make that number one priority to
improve.)
 It is very likely that the constraint(s) are policy in nature and not physical. (Application: Does a policy of separating test and coding staff improve outcomes, or does it slow down cycle time and reduce quality with only theoretical benefits?)
 Anything that reduces smoothness of production causes waste. (Application: Why do we have the release cycle we have? Does it cause waste of resources? If regression is automated could we release more often to reduce “lumpiness”? Daily?)
Continuous improvement and Quality Circles:
 If we improve by 1% per week we will be working twice as efficiently within 18 months
(due to compounding). Could you do that?
 The best people to improve processes are the people doing them, not the managers.
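The compounding claim above is easy to verify with a few lines of Python:

```python
# Improving 1% per week, how many weeks until we work twice as efficiently?
rate = 1.01
weeks = 0
efficiency = 1.0
while efficiency < 2.0:
    efficiency *= rate
    weeks += 1

print(weeks)         # weeks needed to double efficiency
print(weeks / 4.33)  # roughly how many months that is
```

The loop terminates at around 70 weeks, comfortably inside 18 months, so the claim holds.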
All about Agile
http://en.wikipedia.org/wiki/PM_Declaration_of_Interdependence says
"We ...
 Increase return on investment by -- making continuous flow of value our focus.
 Deliver reliable results by -- engaging customers in frequent interactions and shared
ownership.
 Expect uncertainty and manage for it through -- iterations, anticipation and adaptation.
 Unleash creativity and innovation by -- recognizing that individuals are the ultimate
source of value, and creating an environment where they can make a difference.
 Boost performance through -- group accountability for results and shared responsibility for
team effectiveness.
 Improve effectiveness and reliability through -- situationally specific strategies, processes
and practices."
For testing the above might suggest:
 Try to ensure you are actually doing testing each day – manual or automated – and minimising documentation. Don't wait for releases.
 Document your tests in such a way that it improves communication with customers in a useful way. The most useful way is if they find problems before development.
 Accept uncertainty. This will make you very popular. It doesn't mean you do others' jobs for them, just that you don't assume they are doing things wrong.
 Creativity and innovation help us avoid just looking for the same bugs over and over. Let automation do that, not people! Innovate in all areas. It doesn't matter if most things you try fail as long as you manage the risks appropriately.
 Group accountability: Don't let yourself be the “police”. Instead work with the team and help them.
 Use situationally specific strategies: Work out your priorities and find the best ways in your situation to tackle them.
Agile (evolved from XP in mid-90's and core principles in Agile Manifesto):
 Short iterations with full cycle typically from 2 to 4 weeks at full release quality
 Co-location, teams of 5 to 9, cross-functional and self-organising
 Scrum: Daily stand-up meetings. Pigs and chickens.
 Customer rep in team and prioritise per iteration (including do they want a document or a
working screen!).
 Time boxing: Sacrifice scope not schedule, budget or quality.
 Test early and often (TDD and automated unit tests etc). Only code to make tests pass then
stop.
 No development for hypothetical requirements, but do architect the required code to reduce
“smells” or anti-patterns.
 Core metric is working software (stories).
 Late changes in requirements are welcome.
 Motivated expert trusted staff.
 Attention to technical excellence and good design and simplicity.
 Refactoring to reduce “technical debt”, sometimes as separate iteration.
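The "only code to make tests pass then stop" principle can be shown in miniature. The sketch below uses Python's JUnit-style unittest module for illustration; booking_price is a hypothetical function invented for this example, not something from the talk.

```python
import unittest

def booking_price(nights, nightly_rate):
    # Minimal implementation driven only by the tests below; in TDD no
    # speculative discount logic is added until a test demands it.
    return nights * nightly_rate

class BookingPriceTest(unittest.TestCase):
    # In TDD these tests are written first, fail, and then just enough
    # code is written to make them pass.
    def test_single_night(self):
        self.assertEqual(booking_price(1, 100), 100)

    def test_three_nights(self):
        self.assertEqual(booking_price(3, 100), 300)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(BookingPriceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

The same red-green-stop rhythm applies in JUnit for Java teams; the point is the discipline, not the language.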
Implications for testing:
 If you are a team member you need to understand software development and be technically
competent to earn respect.
 Understand the role of unit testing and what JUnit is.
 Take responsibility – instead of asking developers each time to give you extracts from logs,
ask them how to access them and learn to extract the key data yourself. Also understand the
various environments and configuration systems.
 Make sure your test cases can be validated with the customer rep and the developers.
 Use test cases that can then be executed directly as automated tests to reduce cycle time.
 Automate all regression tests that you can, including load and performance tests, to reduce
cycle time.
 Don't grizzle when things change – understand the paradigm.
 Don't hesitate to ask the programmers to enhance testability of the code.
 Understand good test design, “smells” or anti-patterns, refactoring and technical debt.
 Understand load and performance (sometimes called SVP) testing as well as functional and exploratory testing.
Part 3 – Examples
Example 1 – Small successful development team from the 90's
The only job I've ever had where neither management nor customers complained about the cost of
testing or the standard of quality. Product was an application used by 1500 dairy farmers to monitor
every aspect of their herds. Reasons the testing was so successful included:
The business expert (a vet and researcher) was intimately involved every day with the team,
including exploratory testing and real use of the application. This is very similar to current agile
practice.
Quality was recognised as critical due to the high cost of bug fix releases on CD to thousands of
users. This is also now a standard assumption of agile.
One full time staff member was fully allocated to automated regression and was hence able to keep up with the current cycle. This was done using Rational Robot, so it is a good counter-example to my general belief that open source tools are best, but:
• It was his only priority, so he could understand all the nuances of the tool and follow up any issues quickly with the vendor and user community, becoming a real expert in the test tool.
• It was a Windows application and Robot worked best in that space, i.e. the tool suited the job. Both the application being tested and the testing tool used variants of Visual Basic, so it was a good match to the in-house skill-set.
• The architecture was simple in that just one application was being tested and it was basically a standalone Windows application.
• Suitable open source tools were not available, to my knowledge, in the 1990s.
Example 2 – Internet booking application with development team of 20
Management only funded two testers, and they were so busy that we could not get even one of them to focus on automation full time. The culture was one of crisis management. The testers had no technical or development skills. The result was poor test architecture and maintainability.
Manual testing was poorly defined and there were no standard manual regression tests so
automation required definition of what would be tested, not just automation of existing tests.
The complex multi-tier distributed application, with a Windows client and web clients customised for individual customers, led to many more technical challenges with the test automation, and that complexity was not well managed.
The entire Rational suite was used, and this time the cost for the large group including licences,
servers, administration, upgrades (which took weeks) and issue resolution was impossible to justify
from the minimal successes achieved.
A low-cost offshore subsidiary was created, which allowed manual testing by staff who knew the application well at very low cost.
The development team used primarily open source tools (PHP) and Borland products, so the Rational technology was not a good match and it was a struggle to generate enthusiasm outside the test team.
Example 3 – Corporate struggle
Here there was an excellent permanent manual testing team of approximately six staff, and good test processes, but none of the testers were technical except the team leader, and there was no significant reuse of test assets and no standard regression tests, hence no quick-win targets for automation.
A standard “big bang” project for automation was run, with RFI, RFP, demos from competing vendors, proof of concept etc, so the business case was for over $100,000, which meant there were bureaucratic hurdles and a lot of money was spent before any benefit could accrue.
Rational was selected again, but only one person in NZ was qualified to maintain the custom code
to link the test tool to the 'green screen' system. This was a complex architecture with Web,
Windows, Database, Mainframe and Web Services and again the scope was not limited to reduce
risk and produce measurable benefits in a reasonable timeframe.
No staff were allocated full time to automation, nor had it as their primary responsibility. The contractor who drove the project moved on to other things in the company and then left, and the test team leader, while technically excellent and well qualified, had many other conflicting responsibilities, primarily to ensure the large manual testing workload was managed.
Keyword (or DSL) automation approaches were poorly understood, if at all, by the staff and tool vendor, and only functional decomposition was recommended as a model. That is an 80s approach to developing software and not suitable for a complex environment.
Some years on, automation is still not delivering good results in this case.

Example 4 – Corporate success, so far anyway
This business also has diverse systems and a very complex architecture. Previous approaches to automation with large tools from mainstream vendors had ended up “shelfware” and left the company “burnt”, sceptical and afraid of test automation.
Hence a low risk approach was selected that could “fly under the radar” and show that results could be delivered quickly and maintained. To do that we focussed just on the online channel, which was bringing in almost half of their revenue – over $1 billion per year – and had the highest visibility and testing cost.
We also restricted the scope to automating 80% of the existing documented manual regression scripts. This helped, as a big cost in automation that is often not allowed for is the analysis and processing of the many defects, suspected defects or requirements confusion uncovered in the process. The remaining 20% were either not cost-effective to automate for various reasons, or were kept manual to ensure a human eye scanned all key screens for each release. That is of course necessary to pick up any defects not specifically tested by the automated scripts, which must be very narrow in their validations or they become too “brittle” and hard to maintain.
Even with these existing scripts as a basis there was a fair amount of clarification required, as they were quite inconsistent and out of date despite being used on each release! This is common with manual scripts, and a big advantage of automated regression test scripts is that they cannot be out of date, because then they would not pass! Surprisingly, we also found a number of defects that had slipped past the manual testing. Now the automated scripts are always up to date (otherwise they fail!), whereas previously the manual tests were notoriously poorly maintained: maintenance was left until after the project and then not done as people moved on to the next urgent project.
Canoo WebTest (webtest.canoo.com) was chosen. It is a tool written in java, using XML/Ant/Groovy, and is specifically designed for testing web applications. It drives a “pretend” browser, a java library called HTMLUnit. While this is fast and powerful and supports most AJAX libraries etc, it did mean the business had to get over their reservation that the tests were not seeing exactly the environment the users had (in this case usually, but not limited to, Internet Explorer). Of course no test environment ever exactly matches any user environment!
Because of the limited scope, and the use of a simple free tool in mostly stable areas of the web application, the business case was just the cost of one person to do the automation for four months, at about $45,000 – hence below the governance radar, meaning there was inherently low risk.
A competent and committed person was allocated, with this as their #1 priority and who would be around to guide later evolution of the suite.
Many principles of keyword architecture and the agile approach were followed, e.g. attempting to avoid any code duplication and to encapsulate access to a page in one place.
The result was that the goal was achieved, and a team of one test coordinator and four testers per release was replaced with one coordinator/automator and one tester for the same period of time. This also enabled a move to a dedicated tester, whereas previously the large team had to be pulled from the business when needed, and this led to increased efficiency. This group is now sold on the benefits of using professional testers, as well as on automation.
The cost of maintaining scripts was covered by savings on the management of staff and of manual scripts. Use of a java-based tool in a java team allowed us to leverage off existing expertise, though this was not really required in the initial phase. A simple free tool meant negligible tool administration or maintenance cost.
Now they are looking to add a different toolset (Fitnesse + WebDriver) to:
• Give better use of the keyword-driven and “executable requirements” models.
• Be more flexible, allowing the application stack to be driven at any layer rather than primarily at the GUI.
• Support a variety of real and simulated browsers through open source libraries such as WebDriver.
• Leverage off the technical skills of the in-house java development team (who were not available when the testing initially began).
Another interesting lesson from this organisation was that when keyword-style automation was first suggested, the java developers decided they would take it on themselves, and attempted multiple approaches using java testing tools as well as writing their own. After more than six months of this their manager realised it was not succeeding. There are two main reasons for this: the developers are not expert in application level testing (as against unit testing) and they do not have it as their core priority. I have seen this pattern in other companies too, and it is perhaps the one risk of the agile approach that is not present with the “big tool, big vendor” traditional approach.

Part 4 – Automating
The two main causes of failure and keys to success
We can see from the examples above, and I think any person with enough experience is likely to agree, that the biggest risk to test automation, and the most common cause of failure, is maintenance of the test suite as it becomes complex.
There is a technical and a business reason for this. The business reason is failure to obtain or
maintain the right expertise, or to have the right people but without automation as their top priority.
If it is not the top priority then it will sooner or later for some release be omitted due to the pressure
to deliver on time. Unfortunately the next release is likely to have the same pressure, and a backlog
of work to bring the automation up to date, and so is more likely to omit the automation. Gradually
the asset that was developed degrades and becomes worthless and automation is seen to have failed.
The technical reason is related to architecture, which we discuss further below. Without a sound architecture, the complexity of the suite as it evolves and grows, and the cumulative changes to the application being tested as it also evolves and grows more complex, get to a point where maintenance is such a problem that it cannot be done in the time between releases. This leads to the same outcome as above.
If we understand that we are developing a complex software application – in other words, that a complex test suite (not the tool used to build it but the automated tests themselves) is a software application in itself – all this becomes obvious. No one would expect to develop an application successfully using spare-time efforts by people with poor skills and no attempt to apply sound architecture or programming principles.
Principles and Architecture
Agile principles for automation revolve around simplicity and maintainability:
1. “Just enough”: Set an objective and do just what you need to in order to achieve it. Don't angst over anything else.
2. “Horses for Courses”: Choose the simplest tool that can achieve your goals in your context.
3. Quick and dirty if you don't want reuse; otherwise architect. Architecting means using a layered approach to manage complexity and provide maintainability, which we will discuss below.
4. Use tools that leverage off the skills of yourself and your team, e.g. VB/.Net/Java/RegEx/XPath for scripting, and ideally tools that allow modification of their source code.
5. Don't get involved with vendors or licences or closed source unless there is a strong reason to do so.
6. Clarify the layers you want to test and ensure the toolset supports them.
7. Minimise documentation and maximise its use via the “executable requirements” model.
The keys to any sound architecture - test or otherwise - are to:
1. Separate concerns in layers
2. Avoid duplication and redundancy
The first of these is the essence of architecture and the second follows on from it and is the essence
of development. Let's look at them in turn.
Layered architectures are used for networking, e.g. web sites use HTTP, which defines the web pages and sits on top of TCP, which sits on top of IP etc; and of course for applications we have N-tier architectures, e.g. a web application might have client/browser, web server, application server and database – and that is a relatively simple example.
For testing we want the top layer to express the tests in business terminology. This means that
tests should be expressed in the language of requirements, using the business/user terminology in a
structured but easily readable way. It also means they should express the tests at a level above and
independent of the test technology, test level (e.g. GUI or database) and of the GUI structure such as
buttons and pages. For example “Add a new user with name of 'John Smith' and check that they
appear in the user list.” is good. “Insert a user record in the database...” is bad as is “Enter 'John' in
the first name field then click....”.
A common example of this model is Keyword (or Action word) driven testing, where we define high level actions like “Add user” or “Search for account” which may take parameters and can be reused in multiple tests, not only with different data but also with the keywords mixed and matched in different orders to create “flow” style tests that can express use case scenarios or stories.
Associated with this is the concept of Domain Specific Languages (DSLs). This is the programming level equivalent of keywords, where methods or procedures are defined in the programming language which map to these business level actions. These may actually be used to code the application, or may be separate from the application code and used just to “wrap” the application in such a way that the test keywords can drive the application, and validate or reuse data from the application responses. In the latter case they are sometimes (e.g. with FIT/Fitnesse) called “fixtures”.
Fixtures are the second level of the architecture and provide the glue from the business level
keywords which might be expressed in a spreadsheet or a wiki, to the application at some suitable
level of its stack. For example the fixture might drive the application through the GUI or might
drive it at the web service or database or code object layer or even at a command line layer.
These fixtures might in turn use another layer, usually a generic one, to connect to the application at
the appropriate level. For a web GUI this might be a library or tool such as Selenium or WebDriver
that knows how to enter data into web forms, click links, and extract data from web pages. In the
case of database or web service fixtures it might be a general java or .Net library class using JDBC
or ADO to hit a database or using Axis or some other classes to access a web service.
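To make that glue concrete, here is a hedged sketch (Python for brevity; the class and element names are invented) of a login fixture that knows which fields and buttons implement the business action, while delegating all page mechanics to a generic driver layer such as Selenium or WebDriver:

```python
# Fixture layer sketch: the fixture knows WHICH fields implement
# "log in"; the generic driver below it knows HOW to type and click.

class FakeWebDriver:
    """Stand-in for a generic web-driving library (e.g. WebDriver)."""
    def __init__(self):
        self.fields = {}
        self.clicked = None

    def type_into(self, field_id, text):
        self.fields[field_id] = text

    def click(self, element_id):
        self.clicked = element_id

class LoginFixture:
    """Maps the business-level 'log in' keyword onto the GUI."""
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.type_into("username", username)
        self.driver.type_into("password", password)
        self.driver.click("login-button")

driver = FakeWebDriver()
LoginFixture(driver).log_in("jsmith", "secret")
```

Because only the fixture mentions field ids, a change to the page structure touches one class, and the keyword layer above it is untouched.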
Finally we have the application itself, which can be considered part of the test stack for two reasons:
you can't run tests without it; and ideally the developers will optimise it for testability through
“hooks” or naming conventions for example making sure key elements in a web page that the
automation will need to access have unique and meaningful Ids.
Each of these layers should have the ability to log and report appropriately. Ideally the first business
layer will show results directly in the context of the tests. They may colour them to show success or
failure and insert actual results beside the expected ones. The other layers should support modes of
logging for various purposes, including debugging, ideally without having to write specific code for
this, and allowing the level of detail of logging to be specified for the run. This is best accomplished
using a library such as log4j.
These layers should also provide logical tools to identify key elements in the application – for a web page: form elements, buttons and links, and content in the page. RegEx and XPath would be two key ones to expect, as you do not want to have to learn techniques specific to the tool that are neither available nor useful outside the context of that tool.
Versioning and source control must be provided at all these levels. It is best to use the most standard way of versioning things at the level in question: if using a wiki at the top level, use the built-in wiki versioning, but with a note of which version of the application the test was validated against. If using a spreadsheet, then use whatever configuration control is used for office-type documents in your organisation. For fixtures and other code-level artefacts, use the same source control that the developers use for their code; or, if you don't have in-house developers but are testing 3rd party code, use a standard tool that suits the language you are writing your tests in.
In summary we have four levels:
1. The test is defined in sequences of keywords with data as parameters, using terminology that is meaningful to the business and independent of the technology.
2. The tests are then mapped to code modules that link each keyword to the appropriate part of the application.
3. This code may use other code or tools to map generically to the particular layer of the application in question, such as running a stored procedure in a database. This may not be necessary if the application is coded using a model that maps well to the business.
4. The application itself should be at least set up, and ideally optimised, to support the testing.
Each layer should be as independent as possible so that:
• Optimal tools can be swapped in and out at each level according to the desired purpose or the suitability or availability of tools, without disrupting the entire test architecture. This minimises tool and vendor lock-in and allows multiple tools to be used depending on the type of testing most suited to particular tests.
• People with the most appropriate skills can work at each level. The person writing the tests themselves may be a tester, a business analyst, or a business customer or user of the system. The person writing the fixtures may be one of the developers, using the same language and tools they are already skilled in and just having the job of implementing the agreed keywords, or could be a tester with basic technical skills, or could be outsourced. The keywords are core to this and should be agreed between all interested parties.
• Changes to the application or tests should be able to be made with minimal impact on the other layers. A change to the structure of a web page should only require changes to the fixture that drives that page. A change to a test case should require either no changes to other layers, or the addition or modification of usually a very few keywords and hence fixtures, to support functionality that had not previously been set up for testing. A change of the tool used to drive web pages from Selenium to WebDriver would only require changes to the fixture layer and of course the swapping of these tools in the third layer, but not to the test scripts themselves. Changing a fixture to support an additional parameter can easily be done in such a way that none of the existing tests break.
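A small sketch of why this independence pays off (Python for brevity; all classes here are hypothetical stand-ins, not real tool APIs): if every fixture talks to one agreed driver interface, swapping the web-driving tool is a one-line change at wiring time, and neither the fixtures nor the tests above them change:

```python
# Two interchangeable driver layers behind one tiny interface.
# Swapping one class for the other changes only the wiring line;
# fixtures and test scripts above are untouched.

class SeleniumStyleDriver:
    name = "selenium"
    def click(self, locator):
        return f"{self.name} clicked {locator}"

class WebDriverStyleDriver:
    name = "webdriver"
    def click(self, locator):
        return f"{self.name} clicked {locator}"

class SearchFixture:
    """Fixture depends only on the driver interface, not the tool."""
    def __init__(self, driver):
        self.driver = driver

    def search_for_account(self, account_id):
        return self.driver.click(f"account-{account_id}")

# The only line that changes when the tool changes:
fixture = SearchFixture(WebDriverStyleDriver())
result = fixture.search_for_account("1234")
```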
Avoiding duplication and redundancy is a key to quality code development and applies just as much to testing. This principle has been around almost as long as coding, but has continually evolved. We have already talked about how each fixture should drive and be able to parse the parts of the application that are used for a specific keyword. Generally only one keyword will access any given part of the application; for example, only the “Log in” keyword will access the username, password and forgot-password parts of a web system, even if they appear on multiple pages. Often variations on a given keyword are required, for example some taking all the detailed parameters and others being high level and making some default assumptions, for ease of use and readability of tests where the specific details of the data used are not important. There may be specific tests that cover all aspects of the payment step in an online flow, but outside of these tests many other end to end scenario tests will require a payment to be made but not care about the details. So we may have a “Pay” keyword with parameters for payment type, card type, expiry date etc, but may also want to create another keyword, either also called “Pay” or perhaps “Make default payment”, that just uses default values. This principle means that the simple one would just call the detailed one internally, passing in default values, so behind the scenes they are variations on one keyword and don't involve duplicating code.
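The “Pay” example might be coded like this (an illustrative Python sketch; the parameter values are invented): the convenience keyword simply delegates to the detailed one, so there is one implementation behind two keywords and no duplicated payment logic:

```python
# One detailed keyword plus a convenience wrapper that fills in
# defaults by calling it - the payment logic exists in one place.

def pay(payment_type, card_type, expiry):
    """Detailed keyword: every parameter is explicit."""
    return {"payment_type": payment_type,
            "card_type": card_type,
            "expiry": expiry}

def make_default_payment():
    """High-level keyword for tests that don't care about the details."""
    return pay(payment_type="credit card", card_type="Visa", expiry="12/29")

payment = make_default_payment()
```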
The team can decide on a rule of thumb, such as: whenever you write more than 5 lines of code that repeat code from elsewhere, it should be abstracted into a method that can be called from each place. In reality it is not this simple and requires judgement. If there are say 3 lines but repeated in 50 places, then that is more important to factor out than 5 lines used in two places. Similarly, if different parameters are used it is slightly more work to factor out, so maybe 7 lines is appropriate; but if there are no parameters it is easy to do, so maybe 3 lines. The key is to have time allocated, and an agreed priority, to continually improve the quality of the code as you work, and never to sacrifice that for slightly quicker development.

Part 5 – Overview of Fitnesse and associated tools
The Fitnesse Architecture
Fitnesse Features
• Fitnesse is a wiki-based test management and execution framework built on top of FIT and extended by FitLibrary, all of which are free java open source applications.
• It allows tests to be written in business language using simple tables in a wiki, and supports most functional testing patterns, such as tables to do data-driven style testing of business rules and keyword style tests to exercise end to end scenarios.
• It automatically executes against fixtures written in java, C# or various other languages, thus allowing leverage of existing skills and libraries, and reports back in real time in the same wiki tables that the tests are defined in.
• It separates the non-technical process of test case writing, which can and should be done in conjunction with the business as part of requirements definition ahead of coding, from the technical process of creating code fixtures that connect each action to the application being tested, which should be done in conjunction with development.
• As a wiki it contains built-in version control, security, distributed access and the ability to mix annotations among the test cases, making them true executable requirements.
• The wiki structure is enhanced by Fitnesse with features to support test suite management and basic refactoring.
• Fitnesse supports the use of variables so that data definitions can be centralised to avoid duplication.
New Fitnesse Developments and Add-Ons
• The new Macro feature due out this month allows actions to be defined in terms of a series of lower level actions, passing parameters in the same way as though they were fixture code.
• The new SpiderFixture feature due out this month provides built-in actions for all basic web functionality, e.g. type into a text field, click something, validate some page content. It drives Firefox or IE or HTMLUnit (which is a headless i.e. “pretend” java browser) and displays test results directly in the wiki.
• New XMLUnit support allows easy calling of web services and validation of responses using XPath etc.
• The one notable omission in Fitnesse is the ability to pass back values from the application in one test step and reuse them in later steps. I expect this to be added soon, but until it is, that logic can easily be handled at the fixture level, though it does require basic coding skills.
• Database access is provided by a database fixture.

Part 6: Additional useful skills and tools of the trade for testers
There are core skill areas you must cover to be a testing professional and tools are relevant to some
of these. Here is a summary:
1. Test planning, i.e. given a testing situation, have a feel for the types of testing that might be required and the time, tools and resources needed. Specific tools are not required at this level, but make sure you understand the range of testing, e.g. from the ISTQB Foundation Syllabus (http://www.istqb.org/syllabi.htm), and have access to the testing heuristics cheat sheet (http://testobsessed.com/wordpress/wp-content/uploads/2007/02/testheuristicscheatsheetv1.pdf) and good example test plans.
2. Test analysis, i.e. given a testing challenge and a specific approach, be able to define and prioritise the specific tests you recommend. Again, tools are not so relevant, but you need to get these skills one way or another. If you are afraid of maths and logic, work through it and get past that – you can't be afraid of core parts of your job and be successful or a useful resource!
3. Manual test execution: You must be able to execute tests, record issues well, follow them up appropriately, report on progress, and carry out exploratory testing. If you are not good at any of this, then get good!
4. Performance testing: This is a specialised sub-field, but many of the same principles apply, and if you want to increase your value and broaden your opportunities a good start is to read the free book by Scott Barber (http://www.codeplex.com/PerfTestingGuide/Release/ProjectReleases.aspx?ReleaseId=6690). Tools are essential here and depend on the architecture you are testing. A great general purpose tool that you can teach yourself to use is JMeter. Make sure you know how to use it to performance and load test web services, and you can easily extend that knowledge to other contexts.
5. Automated functional testing: we cover this in detail elsewhere in this talk, but you need to understand the underlying principles, at least one good framework like Fitnesse, and some way to connect to the architectures you are likely to have to test, e.g. Selenium/WebDriver for web, or equivalent tools to test web services, databases, Windows GUIs, java GUIs, Flash/Flex etc. You may also wish to download the trial version of one of the “big 4” commercial tools and spend a month of your spare time getting a feel for one of them, but don't think that will impress on your CV: all the graduates from overseas universities say on their resumes that they have experience with the Mercury suite, but none of them understand how to use automation effectively.
Now we get down to more specific supporting skills that are equally essential and tools are even
more important here. When working as a tester you need to know how to use the tools of the trade.
You don't want to call yourself a builder then ask your customer to find you a drill, or find someone
to hammer in a nail for you! What tools do you need? I would suggest you need the following utility
skills with some suggestions of tools required:
1. Quick and dirty automation of repetitive testing tasks. For this you need a macro tool for the environment you usually work in, e.g. Windows, and a macro tool for any key applications you use in that environment, e.g. text editors or Excel. You should be the person others can come to for this sort of thing – then you become more valuable. Examples are iMacros or, better, Selenium IDE for web actions, AutoIt for Windows GUI actions (http://www.autoitscript.com/autoit3/), and Groovy with Scriptom for Windows COM interface automation (http://groovy.codehaus.org/COM+Scripting). For quick and dirty testing of web applications I highly recommend the Selenium IDE Firefox plug-in. It installs in less than one minute, can be learnt in less than 10 minutes, and gives free, easy, powerful test record/playback with the ability to handle almost all requirements and validations.
2. Data and log manipulation: You must be able to prepare large volumes of data for input to tests, analyse large amounts of output data, and even convert one to the other. Again, you should be the go-to person for any data manipulation, and you need a good set of tools. You must be competent in SQL and able to work with a database of your choice, e.g. MS Access or MySQL – find a "For Dummies" book and add this to your training list. Learn a scripting tool such as Ruby or Groovy or Visual Basic and be familiar with all the text manipulation commands, particularly RegEx (regular expressions), which lets you do text matching and replacing using very powerful and complex rules very easily. There are plenty of online references for RegEx, and you will earn the respect of the developers. Finally, it is worth getting reasonably competent with the data manipulation capabilities of Excel.
3. XML transformation, either to drive web services, to extract information from web pages or web service responses, to manipulate XML from one format to another, or to display XML data in pretty HTML formats etc. You need two tools: XPath and XSLT. XPath lets you “query” an xml document in an analogous way to using SQL for a database. This means you can validate content or rules, or extract any subset of information from xml. XSLT extends this to a framework to transform from one XML format to any other XML, HTML or text format, using rules themselves specified in XML so that you don't have to do any coding. Many tools and applications support these, e.g. most browsers support XSLT, and most open source testing tools support XPath (and Regular Expressions) as a way of identifying data in a response so you can either validate or extract that data to a variable. There are plenty of online tutorials if you Google these.
4. The old-fashioned but still industry-standard automated functional testing tools: download a trial version for 15 or 30 days and get a feel for the commercial tools: IBM/Rational, HP/Mercury, Borland/Silk, Compuware, TestComplete. If one is used in your company, then pick that one of course.
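As a small taste of the RegEx and XPath skills recommended in items 2 and 3 above (Python's standard library here, purely as an illustration; the log text and XML are made up):

```python
import re
import xml.etree.ElementTree as ET

# RegEx: pull every order id out of a raw log extract.
log = "INFO order=A123 ok\nWARN order=B456 slow\nINFO order=C789 ok"
order_ids = re.findall(r"order=(\w+)", log)

# XPath-style query: extract a subset of data from an XML response,
# analogous to using SQL against a database.
xml = "<orders><order id='A123' status='ok'/><order id='B456' status='slow'/></orders>"
slow = [o.get("id")
        for o in ET.fromstring(xml).findall(".//order[@status='slow']")]
```

Two lines of RegEx or XPath like these routinely replace what would otherwise be slow manual scanning of logs and responses.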
That's all! Remember - you don't have to do all this in one year. Doctors take a while to go through
their training to the point where they are high-earning specialists so you have to be in for the long
haul. Commit yourself to getting confident with one of the above each year.
If you're a test manager rather than a tester: Use the guidelines above to develop your test team.
Look for these skills in people you employ or have employed, or look for a willingness to develop
them. Pick at least one person and preferably two in your team to focus on developing world class
expertise in each of the areas above.
There is one other skill that deserves a quick mention: Don't ever go back to people to ask the same question you have already asked because you didn't record the information. To be professional you must be organised and keep good track of information that is relevant to your work. Find a tool you like – a wiki or a loose leaf notebook with an index or whatever – I haven't found a perfect solution, although wikis come close if you always have internet access. You must be able to instantly record important data and re-order it so it stays useful, and ideally access it from anywhere. Again, you should be the guru that people come to regarding test environment access, configuration, troubleshooting etc, NOT the other way round!
If you are a tester, think about how a test manager would react to seeing some or all of the above
on your CV. Test managers get so sick of seeing resumes that look like they have all been copied
and pasted from each other: little meaningless bits of experience with the Mercury tools but no
understanding at all; lots of details of the applications that were tested but no details of the specific
tools, techniques, successes, failures or lessons learnt. Anyone can write a CV by copying stuff
from the ISTQB syllabus, but most employers can see through that! If you can put the items we've
discussed above on your CV they show real value you can add, and you don't need anyone's
permission or help – it's completely up to you.
Part 7 – The path to success and happiness
We said we would map out a path to success and happiness in testing for managers and analysts by:
 Being clear on what it means to succeed as a test team – not just getting through each
release.
 Vision, knowledge and distinctions so you can lead from above or below and re-frame testing.
 Developing the drive to follow through: fears, growth, contribution and fun.
 Understanding and being able to use the tools of the trade.
 Creating change personally and within a team.
Hopefully we've done this, but let's wrap up with some vital general principles. All of these are keys
to life in general, but (you probably won't be surprised to hear) have specific applicability to the
great science and art of software testing:
 Develop passion and leadership and vision.
 Continually learn and grow in knowledge and skills.
 Seek continual incremental improvement in everything you do.
 Build assets – architect and reuse.
 Support and work with the team. Don’t be the “police”.
 Be flexible, adaptive and proactive.
 Automate the boring bits!
As always, and by way of review, let's have some examples to demonstrate this. To build a successful
test team or to become a successful test professional we need to:
 Provide leadership by being passionate about testing and keeping up with the play.
Otherwise find something else to do!
 Ensure each test cycle not only accomplishes the short-term goal of testing the application
for this release, but also incrementally improves the test assets of the organisation and the
skill level of the team.
 Repetitive work that is deterministic, i.e. where you can describe exactly what to do and
what to check for, should as a rule be automated.
 Work with and support the team however you can from a quality perspective.
 Always be improving our skills and techniques, and be proactive and adaptive.
 Avoid taking on a “production police” role, and never take on the decision to go live or not.
 Encourage root cause analysis, even if initially you have to do it yourselves.
 Make sure you understand all the core testing paradigms, especially the modern, agile ones,
and be prepared to demonstrate how well they work.
 Make sure you understand how to use a little bit of automation at a time to demonstrate its
usefulness, always bearing in mind sound test architecture, and be able to explain it.
If you do all the above, the testing team will earn respect as a group with unique value that cannot
be replaced by “pulling some people from the business for a few weeks”, or by offshoring,
outsourcing or downsizing. More importantly, you as an individual will find your work exciting,
interesting and lucrative.