
Agile Testing Tactics
for WaterFail, FrAgile, ScrumBut, & KantBan
Tim Lyon
TimLyon@GameStop.com
http://www.linkedin.com/in/timlyon
Epic Fails with Application Lifecycle
Management (ALM) and QA
WaterFail Project Example
FrAgile Project Example
ScrumBut Project Example
KantBan Project Example
Learnings: No Perfect Agile QA Process
Several QA steps can help address risk on agile development projects
Automate, automate, & automate
Make test planning effective instead of extensive
Maximize test cases with probabilistic tests & data
Scheduled session-based exploratory & performance testing
Simple, informative reporting & tracking of testing
Agile Automation Considerations
Why Don’t We Automate?

Description | Error Probability | Errors per 1000 LOCs | Errors per 100 TCs
General rate for errors involving high stress levels | 0.3 | 300 | 30
Operator fails to act correctly in the first 30 minutes of an emergency situation | 0.1 | 100 | 10
Operator fails to act correctly after the first few hours in a high stress situation | 0.03 | 30 | 3
Error in a routine operation where care is required | 0.01 | 10 | 1
Error in simple routine operation | 0.001 | 1 | 0.1
Selection of the wrong switch (dissimilar in shape) | 0.001 | 1 | 0.1
Human-performance limit: single operator | 0.0001 | 0.1 | 0.01
Human-performance limit: team of operators performing a well designed task | 0.00001 | 0.01 | 0.001

General Human-Error Probability Data in Various Operating Conditions
Source: "Human Interface/Human Error", Charles P. Shelton, Carnegie Mellon University, 18-849b Dependable Embedded Systems, Spring 1999, http://www.ece.cmu.edu/~koopman/des_s99/human/
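To put those rates in testing terms, a quick back-of-the-envelope sketch (the suite size here is hypothetical, not from the deck) shows how many erroneous manual executions to expect per regression pass:

```python
# Expected erroneous executions per manual regression pass, using the
# error probabilities from the table above (suite size is hypothetical).
error_rates = {
    "routine operation where care is required": 0.01,
    "simple routine operation": 0.001,
}
manual_test_cases = 500  # hypothetical regression suite size

for condition, probability in error_rates.items():
    expected = probability * manual_test_cases
    print(f"{condition}: ~{expected:g} erroneous executions per pass")
```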
Can We Afford Not To Automate?
[Chart: QA effort (manual or automated) over time - the baseline of existing features plus each round of additional functionality keeps rolling into regression while new functionality continues to arrive.]
[Chart: QA effort/head count needed for coverage over time - automated test coverage versus overall manual coverage against a base head count.]
Pace of feature addition & complexity far exceeds pace of deprecating systems or functionality
Long-term increasing complexity with more manual testing is not a sustainable model
Automated regression tests optimize the manual QA effort
Building Functional Automation Library
Automation layers used to develop the automation library over time, from post-build development checks up through integrated functional and system tests:
Sanity Check (post build): application is running and rendering necessary information on key displays
Smoke Tests: simple “Happy Path” functional tests of key features to execute and validate
Positive Functional Tests: in-depth “Happy Path” functional tests of features to execute and validate
Alternative Functional Tests: in-depth, alternate but “Happy Path” tests of less common features to execute and validate
Negative Functional Tests: in-depth “UNHappy Path” functional tests of features to execute and validate proper system response and handling
Database Validation Tests: non-GUI based data storage and data handling validation
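As a rough illustration of the lowest layers (a sketch only - the base URL, the /search endpoint, and the marker names are assumptions, not part of the deck), the sanity and smoke layers might start as a pair of pytest checks:

```python
# Sketch of the first two automation layers as pytest checks.
# BASE_URL and /search are hypothetical; markers like "sanity" and
# "smoke" would be registered in pytest.ini to avoid warnings.
import pytest
import requests

BASE_URL = "http://qa-storefront.example.com"  # hypothetical test environment

@pytest.mark.sanity
def test_application_is_running():
    # Sanity check (post build): app responds and renders a key page.
    resp = requests.get(BASE_URL, timeout=10)
    assert resp.status_code == 200
    assert "<title>" in resp.text.lower()

@pytest.mark.smoke
def test_search_happy_path():
    # Smoke test: simple "happy path" through a key feature.
    resp = requests.get(f"{BASE_URL}/search", params={"q": "gift card"}, timeout=10)
    assert resp.status_code == 200
    assert "gift card" in resp.text.lower()
```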
Learnings: No Automation Costs
 Start with Simple Confirming Tests and Build up
 Work with Developers to Help Implement,
Maintain, and Run
 Utilize Available Systems After Hours
 Provide Time to Write, Execute, and Code Review
Automation
Agile Test Planning
“10 Minute Test Plan” (in only 30 minutes)
Concept publicized on James Whittaker’s Blog:
http://googletesting.blogspot.com/2011/09/10-minute-test-plan.html
Intended to address issues with test plans, such as:
Difficult to keep up to date, so they become obsolete
Written ad-hoc, leading to holes in coverage
Disorganized, making it difficult to consume all related information at once
How about ACC Methodology?
Attributes (adjectives of the system)
Qualities that promote the product & distinguish it from competition (e.g. "Fast", "Secure", "Stable")
Components (nouns of the system)
Building blocks that constitute the system in question (e.g. "Database", "API", and "Search")
Capabilities (verbs of the system)
Ties a specific component to an attribute that is then testable:
Database is Secure: “All credit card info is stored encrypted”
Search is Fast: “All search queries return in less than 1 second”
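As a concrete illustration, an ACC matrix can be kept as plain data; this minimal sketch reuses the two capabilities above and adds a hypothetical third row:

```python
# Minimal sketch of an ACC matrix: (component, attribute) -> capabilities.
# The "API is Stable" row is a hypothetical example, not from the talk.
acc_matrix = {
    ("Database", "Secure"): ["All credit card info is stored encrypted"],
    ("Search", "Fast"): ["All search queries return in less than 1 second"],
    ("API", "Stable"): ["Versioned contract changes remain backward compatible"],
}

for (component, attribute), capabilities in acc_matrix.items():
    for capability in capabilities:
        # Each capability is a testable statement tying a component to an attribute.
        print(f"{component} is {attribute}: {capability}")
```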
Google Test Analytics
 Google Open Source Tool for ACC Generation:
 http://code.google.com/p/test-analytics
Learnings: 10 Minutes is Not Enough
Keep things at a high level
Components and attributes should be fairly vague
Do NOT start breaking up capabilities into tasks each component has to perform
Generally 5 to 12 components work best per project
The tool is meant to help generate coverage and a risk focus across the project - not necessarily each and every test case
The tool is still in its infancy – released 10/19/2011
Combinational Testing
What is Combinational Testing?
Combining test factors up to a chosen interaction strength to increase effectiveness and the probability of discovering failures
Pairwise / Orthogonal Array Testing (OATS) / Taguchi Methods
http://www.pairwise.org/
Most errors are caused by interactions of at most two factors
Efficiently yet effectively reduces test cases compared with testing all variable combinations
Better than “guesstimation” for generating test cases by hand, with much less chance of omitting combinations
Example OATS
Orthogonal arrays can be named like L_Runs(Levels^Factors)
Example: L4(2^3):
Website with 3 Sections (FACTORS)
Each Section has 2 States (LEVELS)
Results in 4 pair-wise Tests (RUNS)

RUNS | TOP | MIDDLE | BOTTOM
Test 1 | HIDDEN | HIDDEN | HIDDEN
Test 2 | HIDDEN | VISIBLE | VISIBLE
Test 3 | VISIBLE | HIDDEN | VISIBLE
Test 4 | VISIBLE | VISIBLE | HIDDEN
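As a sanity check on the array above, a short standard-library script (a sketch, not part of the original deck) confirms that these 4 runs cover every pairwise combination of section states:

```python
# Verify the L4(2^3) array above achieves full pairwise coverage.
from itertools import combinations, product

runs = [
    {"TOP": "HIDDEN",  "MIDDLE": "HIDDEN",  "BOTTOM": "HIDDEN"},   # Test 1
    {"TOP": "HIDDEN",  "MIDDLE": "VISIBLE", "BOTTOM": "VISIBLE"},  # Test 2
    {"TOP": "VISIBLE", "MIDDLE": "HIDDEN",  "BOTTOM": "VISIBLE"},  # Test 3
    {"TOP": "VISIBLE", "MIDDLE": "VISIBLE", "BOTTOM": "HIDDEN"},   # Test 4
]
levels = ["HIDDEN", "VISIBLE"]

for f1, f2 in combinations(["TOP", "MIDDLE", "BOTTOM"], 2):
    needed = set(product(levels, levels))            # all 4 level pairs
    covered = {(run[f1], run[f2]) for run in runs}   # pairs hit by the 4 runs
    assert covered == needed, f"missing pairs for ({f1}, {f2})"

print("4 runs cover all level pairs for every pair of factors.")
```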
Comparison: Exhaustive Tests
Exhaustive testing increases the test case count by 100% (8 runs instead of 4)
Can still draw from the exhaustive set to add “interesting” test cases
RUNS | TOP | MIDDLE | BOTTOM
Test 1 | HIDDEN | HIDDEN | HIDDEN
Test 2 | HIDDEN | VISIBLE | VISIBLE
Test 3 | VISIBLE | HIDDEN | VISIBLE
Test 4 | VISIBLE | VISIBLE | HIDDEN
Test 5 | HIDDEN | HIDDEN | VISIBLE
Test 6 | HIDDEN | VISIBLE | HIDDEN
Test 7 | VISIBLE | VISIBLE | VISIBLE
Test 8 | VISIBLE | HIDDEN | HIDDEN
Helpful when Factors Grow
Tool to Help Generate Tables
Microsoft PICT (Freeware)
http://msdn.microsoft.com/en-us/library/cc150619.aspx
Web Script Interface:
http://abouttesting.blogspot.com/2011/03/pairwise-test-case-design-part-four.html
What is PICT Good for?
 Mixed Strength Combinations
 Create Parameter Hierarchy
 Conditional Combinations & Exclusions
 Seeding Mandatory Test Cases
 Assigning Weights to Important Values
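To make that concrete, here is a minimal sketch of driving PICT from a script; the parameters, weights, and constraint are hypothetical, and it assumes the pict executable is installed and on the PATH:

```python
# Sketch: write a PICT model exercising weights, a mixed-strength sub-model,
# and a conditional exclusion, then run the tool (assumes pict is installed).
import subprocess

model = """\
# Hypothetical checkout parameters; CreditCard is weighted more heavily
Platform: Web, iOS, Android
Payment:  CreditCard (5), GiftCard, PayPal
Shipping: Standard, Express, StorePickup
Currency: USD, CAD

# Mixed-strength sub-model: cover these three parameters at order 3
{ Platform, Payment, Currency } @ 3

# Conditional exclusion: store pickup only applies to USD orders
IF [Shipping] = "StorePickup" THEN [Currency] = "USD";
"""

with open("checkout.model", "w") as f:
    f.write(model)

# /o:2 keeps the overall order pairwise; seed rows could be supplied with /e:<file>
result = subprocess.run(["pict", "checkout.model", "/o:2"],
                        capture_output=True, text=True)
print(result.stdout)
```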
Learnings: Tables can be BIG!
Choosing which combinations and parameters to use takes some thoughtfulness & planning
Can still generate an unwieldy number of test cases to run
PICT statistical optimization does not always generate full pair-wise combinations
Good for test case generation as well as basic test data generation
Session Based / Exploratory
Testing
Organized Exploratory Testing
 Exploratory testing is simultaneous learning, test
design, and test execution - James Bach
 Session-Based Test Management is a more
formalized approach to it:
 http://www.satisfice.com/articles/sbtm.pdf
 Key Points
 Have a Charter/Mission Objective for each test session
 Time Box It - with “interesting path” extension possibilities
 Record findings
 Review session runs
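One lightweight way to keep those key points honest is to record each session as structured data; this is only a sketch, with field names loosely modeled on SBTM session sheets rather than any formal schema:

```python
# Sketch of a session-based test record (field names are illustrative).
from dataclasses import dataclass, field

@dataclass
class TestSession:
    charter: str                  # mission/objective for the session
    tester: str
    timebox_minutes: int = 60     # time box, extendable for "interesting paths"
    findings: list = field(default_factory=list)  # bugs, issues, questions
    notes: list = field(default_factory=list)     # coverage notes, observations
    reviewed_by: str = ""         # test lead sign-off after review

# Hypothetical example session
session = TestSession(
    charter="Explore gift card redemption during checkout for rounding errors",
    tester="QA1",
)
session.findings.append("Remaining balance drops trailing zero on $10.00 card")
session.notes.append("Covered split tender; refunds deferred to a follow-up charter")
```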
Learnings: Explore Options
Make it a regular practice
Keep a visible timer displayed on the desktop
Have some type of recording mechanism to replay sessions if possible
Change mission roles and objectives to give different context
Test Lead needs to be actively involved in review
If the implementation is too cumbersome to execute – testers won’t
Tools That Can Help
 Screenshot
 Greenshot (http://getgreenshot.org/)
 Video Capture
 CamStudio Open Source (http://camstudio.org/)
 Web debugging proxy
 Fiddler2 (http://www.fiddler2.com)
 Network Protocol Analyzer
 WireShark (http://www.wireshark.org/)
Reporting on Agile Testing
Use Test Case Points
Weighted measure of test case execution/coverage
Points provide a risk/complexity weighting for the test cases to run
Attempts to help answer “We still have not run X test cases, should we release?”
Regrade as a test moves from new feature to regression
New Feature = +10
Cart/Checkout = +10
Accounts = +5
Regression Item = +1
Negative Test = +3
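A short sketch of how those weights could be applied when totaling a run (the weights are the ones listed above; the helper and tag names are illustrative):

```python
# Illustrative scoring using the test case point weights from the slide above.
WEIGHTS = {
    "new_feature": 10,
    "cart_checkout": 10,
    "accounts": 5,
    "regression": 1,
    "negative": 3,
}

def test_case_points(tags):
    """Sum the weights of every tag attached to a test case."""
    return sum(WEIGHTS[tag] for tag in tags)

# A new negative test against checkout scores 10 + 10 + 3 = 23 points;
# once it regrades to a plain regression item it is worth 1.
print(test_case_points(["new_feature", "cart_checkout", "negative"]))  # 23
print(test_case_points(["regression"]))                                # 1
```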
Learnings: Clarify What was QA’d
 Tell Them What Your Team Did
 Traceability to Requirements
 List of Known Issues (LOKI)
 Regression Efforts (Automation & Manual)
 Exploratory Testing, etc
 Tell Them What Your Team Did NOT Do!
 Regression Sets Not Run
 Environmental Constraints to Efforts
 Third-Party Dependencies or Restrictions