End User Requirements

Inpainting Assignment – Tips and Hints
Outline
• How to design a good test plan
• selection of dimensions to test along
• selection of values for each dimension
• justification of each decision made
• interpretation of the test results
• mix of white-box and black-box testing
• How to design an efficient test plan
• determine the minimal number of test cases needed
• argue why this is sufficient
• replace black-box by white-box testing where possible
End User Requirements
E1: Scalability
What is the maximal dimension (X or Y, in pixels) of the image
on which the software runs as expected?
• first, identify independent dimensions
• X (width) and Y (height), or
• X*Y (image area)
• brute-force approach: consider X,Y independent
• black-box test combinations of Xi, Yi, with Xi ∈ {x0, x1, …, xn} (same for Y)
• use boundary-value analysis to determine x0 , xn
• what is the smallest possible x (i.e. x0)
• white-box: read the article in detail
• black-box: just try 0, 1, 2… until success
• what is the largest possible x (i.e. xn)
• white-box: read requirements + README + assignment
• black-box: try sizes close to/above the sample images (500, 1000, …); see the sketch below
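
A minimal sketch of this boundary search in C++ (assumptions: a hypothetical "./inpaint <image>" command line, and ImageMagick's convert available to synthesize probe images; adapt both to the actual tool):

    // Sketch: black-box search for the largest accepted image dimension.
    // Assumptions (adapt to your setup): the tool runs as "./inpaint <image>"
    // (hypothetical invocation), and ImageMagick's "convert" is installed.
    #include <cstdio>
    #include <cstdlib>
    #include <string>

    static bool runs_ok(int w, int h) {
        std::string gen = "convert -size " + std::to_string(w) + "x" +
                          std::to_string(h) + " xc:gray probe.bmp";
        std::system(gen.c_str());                        // make a w-by-h test image
        return std::system("./inpaint probe.bmp") == 0;  // exit 0 = ran as expected
    }

    // Binary search between a known-good size lo and a known-bad size hi;
    // only valid if "runs as expected" is monotonic in the size (check first!).
    static int max_dimension(int lo, int hi) {
        while (lo < hi) {
            int mid = lo + (hi - lo + 1) / 2;
            if (runs_ok(mid, mid)) lo = mid; else hi = mid - 1;
        }
        return lo;
    }

    int main() {
        std::printf("largest accepted dimension: %d\n", max_dimension(1, 16384));
        return 0;
    }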
End User Requirements
E1: Scalability (cont.)
What is the maximal dimension (X or Y, in pixels) of the image
on which the software runs as expected?
• white-box approach: are X,Y treated differently?
• code review: X and Y are treated identically, i.e.
• are present in identical computations
• these computations are on the same control-paths
• hence we have one size variable only (X and Y behave identically)
• for this variable
• do black-box range testing (as before)
• refine white-box analysis
• code review: search for array/buffer bounds
End User Requirements
E1: Scalability
What is the maximal image size on which the tool runs in under
5 seconds on your system?
• reuse the results from the previous question
• do not test beyond the maximal accepted image size
• refine the question
• brute-force black-box approach (done by most of you)
• pick several image sizes
• run the tool, time the results (see the sketch after this list)
• two main problems with this method
• assumes speed is monotonic in the image size
• assumes the image size is the main speed parameter
• how do we know this is true??
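
For reference, a sketch of this brute-force harness (same hypothetical "./inpaint <image>" invocation and ImageMagick-generated probes as before); note that it silently embeds both assumptions above:

    // Sketch of the brute-force timing method: generate square images of
    // several sizes, run the (hypothetical) "./inpaint" on each, time it.
    #include <chrono>
    #include <cstdio>
    #include <cstdlib>
    #include <string>

    int main() {
        for (int s : {100, 250, 500, 1000, 2000}) {
            std::string gen = "convert -size " + std::to_string(s) + "x" +
                              std::to_string(s) + " xc:gray probe.bmp";
            std::system(gen.c_str());
            auto t0 = std::chrono::steady_clock::now();
            std::system("./inpaint probe.bmp");   // timed run
            auto t1 = std::chrono::steady_clock::now();
            std::printf("%5d px: %.2f s\n", s,
                        std::chrono::duration<double>(t1 - t0).count());
        }
        return 0;
    }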
End User Requirements
E1: Scalability (cont.)
What is the maximal image size on which the tool runs in under
5 seconds on your system?
• white-box analysis
• read the paper and review the computational parts of the code
• determine the algorithm complexity
• is it just O(f(N)), where N = #pixels, or O(f(N,a,b,...))?
• is f() a monotonic function?
• hints (read the paper/code):
• inpainting is O(N log N) where N = #scratch pixels
• hence speed
• depends on scratch area only
• is monotonic in the scratch area
• so, the optimal testing is
• time the tool for some reasonably large N (scratch area)
• compute Nmax for which t = 5 seconds (knowing that t = k N log N)
• verify/refine the above Nmax using black-box testing (see the sketch below)
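
A sketch of that computation; the reference measurement below is a placeholder for one timed run on your own system:

    // Sketch: predict the largest scratch size N_max that stays under 5 s,
    // using the model t = k * N * log N from the complexity analysis.
    // N_ref/t_ref are placeholders: substitute your own timed run.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double N_ref = 50000.0;   // measured scratch pixels (placeholder)
        const double t_ref = 0.8;       // measured seconds (placeholder)
        const double k = t_ref / (N_ref * std::log(N_ref));

        // invert t(N) = k * N * log N at t = 5 s by bisection
        double lo = N_ref, hi = 1e9;
        while (hi - lo > 1.0) {
            double mid = 0.5 * (lo + hi);
            if (k * mid * std::log(mid) < 5.0) lo = mid;
            else                               hi = mid;
        }
        std::printf("predicted N_max: about %.0f scratch pixels\n", lo);
        return 0;
    }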
End User Requirements
E2: Input variability
The software should run on BMP and PNG images
• identify dimensions
• Image format (BMP or PNG)
• color depth (1, 8, 16, 24, 32 bits/pixel)
• test the requirement
• black-box: OK since we don’t have many input combinations
• white-box (a bit faster than black-box)
• identify image I/O code (easy)
• eliminate formats/depths not handled
• black-box test the remaining combinations (see the sketch below)
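
A sketch of that remaining sweep (hypothetical "./inpaint" invocation again; note ImageMagick's -depth sets bits per channel, not bits per pixel, so verify the depths of the generated files):

    // Sketch: sweep format x depth combinations. ImageMagick's -depth is
    // bits per channel, so treat the generated files only as a starting
    // point and check their actual depth before drawing conclusions.
    #include <cstdio>
    #include <cstdlib>
    #include <string>

    int main() {
        const char* formats[] = {"bmp", "png"};
        const int depths[] = {1, 8, 16};   // bits per channel here
        for (const char* f : formats)
            for (int d : depths) {
                std::string img = "in_" + std::to_string(d) + "." + f;
                std::string gen = "convert -size 128x128 gradient:black-white "
                                  "-depth " + std::to_string(d) + " " + img;
                if (std::system(gen.c_str()) != 0) continue;  // not generatable
                int rc = std::system(("./inpaint " + img).c_str());
                std::printf("%s, %d bits/channel: %s\n", f, d,
                            rc == 0 ? "OK" : "FAIL");
            }
        return 0;
    }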
End User Requirements
E2: Input variability (cont.)
The software should run correctly for a wide range of scratch
configurations
• identify dimensions
• scratch total area (?)
• some of you tested along this dimension
• not a relevant dimension: paper/code shows clearly that the
algorithm is local, so the total scratch area is irrelevant
• scratch local diameter – thickness?
• yes – it is mentioned in the assignment as a constraint
• bounds given: 2%..5% of the image size
• scratch direction?
• yes – the paper clearly mentions gradient computations
(and those are obviously direction-sensitive)
End User Requirements
E2: Input variability (cont.)
The software should run correctly for a wide range of scratch
configurations
• identify dimensions (cont.)
• scratch position in image?
• yes – the paper clearly mentions neighborhood computations
• yes – see white-box ‘ordinary algorithm’ code reviews
• for-loop bounds coincide with image bounds
• image coordinates often involved in i-1..i+1 type of computations
• so we have three scratch variables
• local thickness
• orientation
• position in image
End User Requirements
E2: Input variability (cont.)
Testing for the three scratch variables
• how many test cases (images) should I generate?
• generate several images, one per parameter-combination
• OK, but lots of work
• ideal for linking defects to input variables
• generate a few images (in the limit, just one) containing a complex scratch
• this is OK because (recall) the inpainting is local!
(so every scratch fragment on the image acts as a small test case; see the sketch below)
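
A sketch for generating such parameterized scratch images with ImageMagick (how the tool is told where the scratch is, e.g. a separate mask or a special color, is an assumption to adapt):

    // Sketch: one scratch image per (thickness, orientation, position)
    // combination, drawn with ImageMagick's -draw primitive.
    #include <cmath>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        const int W = 400, H = 400;
        const int thickness[] = {8, 20};           // ~2% and ~5% of image size
        const double angles[] = {0.0, 45.0, 90.0}; // scratch orientation (deg)
        const int centers_x[] = {W / 2, 5};        // mid-image vs. near border
        const double pi = 3.14159265358979;
        int id = 0;
        for (int t : thickness)
            for (double a : angles)
                for (int cx : centers_x) {
                    double r = 80.0, rad = a * pi / 180.0;
                    int dx = int(r * std::cos(rad)), dy = int(r * std::sin(rad));
                    char cmd[256];
                    std::snprintf(cmd, sizeof cmd,
                        "convert -size %dx%d xc:white -stroke black "
                        "-strokewidth %d -draw \"line %d,%d %d,%d\" scratch_%02d.png",
                        W, H, t, cx - dx, H / 2 - dy, cx + dx, H / 2 + dy, id++);
                    std::system(cmd);
                }
        return 0;
    }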
End User Requirements
E3: Robustness
The software should not crash or have large delays
• first, catalogue the results from the previous tests; eliminate cases those tests already cover
• second, refine the inputs/variables close to already identified crashes/delays
Example (crash)
• Some of you found a crash when a scratch touches the lower image border
• black-box refinement
• vary the position/angle/thickness of the scratch
• ...so as to better pinpoint the crash situation
• white-box refinement (code review for common coding errors)
• what is the crash’s cause? Out-of-bounds array indexing
• when does that happen?
• study the FIELD<T> class
• ...specifically the FIELD<T>::value(int i,int j) method (see the sketch below)
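
A hypothetical reconstruction of that failure mode (the real FIELD<T> code may differ in detail):

    // Hypothetical sketch of the suspected defect (not the actual code):
    template <typename T>
    class FIELD {
    public:
        // Unchecked access: j == h (a scratch on the lower border, reached
        // via j+1 neighborhood offsets) indexes past the end of the buffer.
        T value(int i, int j) const { return data[j * w + i]; }

        // One defensive fix to look for in review: clamp to the valid range.
        T value_clamped(int i, int j) const {
            i = i < 0 ? 0 : (i >= w ? w - 1 : i);
            j = j < 0 ? 0 : (j >= h ? h - 1 : j);
            return data[j * w + i];
        }
    private:
        int w = 0, h = 0;
        T*  data = nullptr;
    };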
End User Requirements
E3: Robustness (cont.)
Example (long computations)
• white-box analysis
• recall the complexity O(N log N) for N = # scratch pixels
• white-box study (see FastMarchingMethod class and/or paper)
• critical operation: insertion/deletion from a sorted map
• map max size = scratch boundary length
• insertion/deletion is O(log S) for a map with S elements
• hence, black-box test for
• very long scratches having
• ...a relatively small area (see the sketch below)
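
A sketch for generating such a worst-case input (ImageMagick assumed available; feeding it to the tool follows whatever scratch-input convention the tool actually uses):

    // Sketch: a very long, 1-pixel-thin zigzag scratch maximizes the sorted
    // map's size S (boundary length) while keeping the scratch area small.
    #include <cstdlib>

    int main() {
        return std::system(
            "convert -size 1000x1000 xc:white -stroke black -strokewidth 1 "
            "-draw \"polyline 0,100 999,200 0,300 999,400 0,500\" longthin.png");
    }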
End User Requirements
E4: Tint preservation
The inpainting should preserve the tints of the original image
• determine variables
• white-box analysis (code + paper)
• all images are treated as RGB triplets
• all computations for R, G, B are
• identical
• done on the same control paths
• hence, the tint variables are R, G, B
• note: some imaging tools use other color spaces, e.g. HSV, CIELab, ...
End User Requirements
E4: Tint preservation (cont.)
The inpainting should preserve the tints of the original image
• design test cases
• just as for the scratch test cases
• can design one image per test-case
• can assemble several test-cases (tint-areas) in one big image
• recall, inpainting is local!
• how many test cases do we really need (how many values?)
• for each dimension, you have a saturation/luminance range
• can easily capture these in separate images, e.g. one ramp per primary: one for red, one for blue, ...and one for green, too (see the generation sketch below)
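
A sketch for generating those per-channel ramp images (ImageMagick's gradient: pseudo-image, assumed installed):

    // Sketch: one luminance/saturation ramp per primary channel.
    #include <cstdlib>

    int main() {
        std::system("convert -size 256x256 gradient:black-red   ramp_r.png");
        std::system("convert -size 256x256 gradient:black-green ramp_g.png");
        std::system("convert -size 256x256 gradient:black-blue  ramp_b.png");
        return 0;
    }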
End User Requirements
E4: Tint preservation (cont.)
Why don’t we need to test other tints than R, G, B?
• any tint is a linear combination of R, G, B
• the three channels are inpainted independently and identically (see the white-box analysis above)
• hence, if all 3 primary tints are preserved by inpainting, so is their linear combination
Quantitative measuring
• a more refined evaluation:
• do the inpainting
• use an image-processing tool to subtract the result from the original
• see whether tints are preserved (examine the difference; see the sketch below)
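
A sketch of that measurement with ImageMagick's compare (assumed available): it prints a numeric error metric and writes a visual difference image; file names are placeholders:

    // Sketch: quantify tint preservation. RMSE near zero (outside the
    // inpainted region) means the tints were preserved.
    #include <cstdlib>

    int main() {
        return std::system(
            "compare -metric RMSE original.png inpainted.png diff.png");
    }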
End User Requirements
E5: Installation
The software should be easily installable and run out-of-the-box on at least
three platforms (e.g. Windows or Linux OS versions)
• identify variables
• trivial: platform = variable, has exactly 3 samples
• black-box testing
• install + run the software on these specific platforms
• use image+scratch on which software is guaranteed to run
(e.g. from the sample set provided)
• white-box testing
• check the build dependencies
• 3rd party libraries
• check the code for platform-specific elements
• #ifdef...#endif constructs (e.g. #ifdef __WIN32); see the illustration below
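
A generic illustration of such a construct (not taken from the actual code; the exact macro name varies by compiler: _WIN32, __WIN32, __WIN32__):

    // Generic example of a platform-specific block to flag in code review.
    #include <cstdio>

    #ifdef _WIN32
      #define PATH_SEP '\\'   // Windows-only branch
    #else
      #define PATH_SEP '/'    // POSIX branch
    #endif

    int main() {
        std::printf("path separator: %c\n", PATH_SEP);
        return 0;
    }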
End User Requirements
E5: Installation (cont.)
Question
• Is black-box testing on Windows (or Linux) 64-bit relevant?
• some of you used this instead of 32-bit systems
• however, see D1: Portability
• the software should compile on 32-bit OSes
• hence we can test on 64-bit OSes and
• if all runs well, this can subsume 32-bit testing
• if some tests fail
• we must test on 32-bit
• ...or do white-box testing to further understand why