Supplementary Material
Sketch-based Mesh Cutting: A Comparative Study
Lubin Fan
Min Meng
Ligang Liu
Department of Mathematics, Zhejiang University, Hangzhou 310027, China
In this document we describe the evaluation in detail, including the algorithms evaluated, the
ground-truth set, the system developed, the participants involved, and the whole experiment
process. We also present the experimental results and the performance of the evaluated algorithms
to support our analysis.
1. Sketch-based mesh cutting algorithms
For the purpose of consistent and fair comparison, our evaluation focuses on interactive
segmentation algorithms that extract meaningful parts from 3D shapes. Table 1 gives a broad
classification of the sketch-based interactive segmentation methods published in the literature,
according to the user interfaces they provide. We choose one representative algorithm from each
typical type (abbreviated EMC, PMC, CBB, and ICC, respectively), which together provide good
coverage of the various approaches. We briefly describe the algorithms of each type in the
following subsections; please refer to the original papers for further details.
1.1. Foreground/Background sketch-based mesh cutting
In recent years, a series of interactive mesh segmentation approaches based on the
foreground/background sketch-based interface have been proposed. Ji et al. [1] proposed the first
foreground/background sketch-based mesh segmentation algorithm, easy mesh cutting (shown in
Fig. 1(a)). Based on an improved feature-aware isophotic metric, they use a simple and efficient
region growing technique to segment the mesh interactively. Using a different metric, Wu et al.
[7] presented a similar region-growing method for sketch-based mesh segmentation. Lai
et al. [8] extended the random walk technique to give an algorithm for interactive mesh segmentation.
Based on a feature-preserving harmonic field, Meng et al. [9] employed the graph cut technique to
produce the segmentation results interactively. Xiao et al. [10] developed a hierarchical method
for interactive mesh segmentation that extracts high-level features through local adaptive
aggregation. Brown et al. [11] proposed a graph cut segmentation algorithm, which utilizes the
minimum graph cut to determine optimal boundaries for interactive mesh segmentation. We chose
easy mesh cutting (EMC) as the representative method of this type, as it was the earliest published
and has gained popularity.
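To make the foreground/background interaction above concrete, the following sketch grows a foreground and a background region from the user's seed faces over a face-adjacency graph. It is a minimal illustration, not the EMC implementation: the edge costs simply stand in for the feature-aware isophotic metric, and the adjacency graph is assumed to be given.

```python
import heapq

def grow_foreground(adjacency, edge_cost, fg_seeds, bg_seeds):
    """Label every face as foreground or background by competitive region
    growing from the user's seed faces.

    adjacency: dict face -> iterable of neighbouring faces
    edge_cost: dict (face_a, face_b) -> non-negative cost (a stand-in for a
               feature-aware metric; crossing a sharp feature should be costly)
    fg_seeds, bg_seeds: seed faces covered by the user's strokes
    """
    labels = {}
    heap = []                                    # (accumulated cost, face, label)
    for f in fg_seeds:
        heapq.heappush(heap, (0.0, f, 'fg'))
    for f in bg_seeds:
        heapq.heappush(heap, (0.0, f, 'bg'))

    while heap:
        cost, face, label = heapq.heappop(heap)
        if face in labels:                       # already claimed by a cheaper path
            continue
        labels[face] = label
        for nbr in adjacency[face]:
            if nbr not in labels:
                step = edge_cost.get((face, nbr), edge_cost.get((nbr, face), 1.0))
                heapq.heappush(heap, (cost + step, nbr, label))
    return labels

# Toy 4-face strip 0-1-2-3 with a "feature" between faces 1 and 2.
if __name__ == '__main__':
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    costs = {(0, 1): 0.1, (1, 2): 5.0, (2, 3): 0.1}
    print(grow_foreground(adj, costs, fg_seeds=[0], bg_seeds=[3]))
    # {0: 'fg', 3: 'bg', 1: 'fg', 2: 'bg'} -- the two regions meet at the high-cost edge
```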
1.2. Foreground sketch-based mesh cutting
Fan et al. [2] proposed a progressive painting-based mesh cut out tool, paint mesh cutting
(PMC), for interactive mesh segmentation (shown in Fig. 1(b)). With the sketch-based interface,
the user draws a single stroke on the foreground region and then obtains the desired part, which is
achieved by efficient graph-cut based optimization.
User Interface        | Algorithms                                                                                               | Abbreviation
Foreground/Background | Easy mesh cutting [1]                                                                                    | EMC
                      | A sketch-based interactive framework for real-time mesh segmentation [7]                                |
                      | Fast mesh segmentation using random walks [8]                                                           |
                      | Sketch-based mesh segmentation based on feature preserving harmonic field [9]                           |
                      | Hierarchical aggregation for efficient shape extraction [10]                                            |
                      | Interactive part selection for mesh and point models using hierarchical graph-cut partitioning [11]     |
Foreground            | Paint mesh cutting [2]                                                                                   | PMC
Cross-boundary        | Mesh decomposition with cross-boundary brushes [3]                                                      | CBB
Boundary              | iCutter: A direct cut out tool for 3D shapes [4]                                                        | ICC
                      | Modeling by example [6]                                                                                  |
                      | Mesh scissoring with minima rule and part salience [16]                                                 |
Table 1: Sketch-based interactive mesh segmentation algorithms.
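As a rough illustration of the graph-cut optimization mentioned above, the sketch below computes a single s-t minimum cut on the face-adjacency graph, with the stroked faces hard-linked to the source. It is not the PMC algorithm itself: PMC estimates the background progressively from the painted foreground, whereas here the background faces are simply given, and networkx is assumed as the min-cut solver.

```python
import networkx as nx

def cut_from_foreground_stroke(face_adjacency, smooth_cost, fg_faces, bg_faces):
    """Extract a part with one s-t minimum cut on the dual (face-adjacency) graph.

    face_adjacency: iterable of (face_a, face_b) pairs
    smooth_cost:    dict (face_a, face_b) -> cost of cutting between the faces
                    (low across sharp features, high across smooth regions)
    fg_faces:       faces covered by the user's foreground stroke
    bg_faces:       faces assumed to be background (PMC estimates these
                    progressively; here they are simply supplied)
    Returns the set of faces on the foreground side of the cut.
    """
    INF = 1e9
    g = nx.DiGraph()
    for a, b in face_adjacency:
        w = smooth_cost.get((a, b), smooth_cost.get((b, a), 1.0))
        g.add_edge(a, b, capacity=w)
        g.add_edge(b, a, capacity=w)
    for f in fg_faces:                           # hard constraints from the stroke
        g.add_edge('SRC', f, capacity=INF)
    for f in bg_faces:
        g.add_edge(f, 'SNK', capacity=INF)

    _, (src_side, _) = nx.minimum_cut(g, 'SRC', 'SNK')
    return src_side - {'SRC'}

# Toy strip of faces 0-1-2-3 with a concave feature between faces 1 and 2.
if __name__ == '__main__':
    adjacency = [(0, 1), (1, 2), (2, 3)]
    costs = {(0, 1): 4.0, (1, 2): 0.2, (2, 3): 4.0}
    print(cut_from_foreground_stroke(adjacency, costs, fg_faces=[0], bg_faces=[3]))
    # {0, 1} -- the cheapest cut separates the part at the low-cost edge
```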
1.3. Cross-boundary sketch-based mesh cutting
Zheng et al. [3] presented an intuitive interface, cross-boundary brushes (CBB), for
interactive mesh segmentation (shown in Fig. 1(c)). For the part-type segmentation, the user draws
a stroke across the desired cut and then obtains a best cut along an isoline of the harmonic field
driven by the stroke.
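A minimal sketch of the cross-boundary idea follows: a harmonic field is solved with values 0 and 1 at the two ends of the stroke, and the returned cut is the isoline with the smallest crossing length. This is only an illustration under simplifying assumptions: a uniform graph Laplacian replaces the mesh Laplacian and concavity-aware weights of [3], and the isoline score is a crude proxy for the paper's cut scoring.

```python
import numpy as np

def harmonic_field(n_vertices, edges, src, snk):
    """Solve a graph harmonic field with value 0 at `src` and 1 at `snk`
    (uniform weights stand in for mesh cotangent/concavity-aware weights)."""
    L = np.zeros((n_vertices, n_vertices))
    for a, b in edges:
        L[a, a] += 1; L[b, b] += 1
        L[a, b] -= 1; L[b, a] -= 1
    A, rhs = L.copy(), np.zeros(n_vertices)
    for v, val in [(src, 0.0), (snk, 1.0)]:      # Dirichlet constraints at the stroke ends
        A[v, :] = 0.0; A[v, v] = 1.0; rhs[v] = val
    return np.linalg.solve(A, rhs)

def best_isoline_value(field, edges, edge_length, candidates=None):
    """Among candidate iso-values, return the one whose isoline is shortest,
    measured by the total length of the edges it crosses."""
    if candidates is None:
        candidates = np.linspace(0.05, 0.95, 19)
    def crossing_length(t):
        return sum(edge_length[(a, b)]
                   for a, b in edges
                   if (field[a] - t) * (field[b] - t) < 0)
    return min(candidates, key=crossing_length)

if __name__ == '__main__':
    # Chain 0-1-2-3 with a short "neck" edge between vertices 1 and 2.
    edges = [(0, 1), (1, 2), (2, 3)]
    lengths = {(0, 1): 1.0, (1, 2): 0.2, (2, 3): 1.0}
    f = harmonic_field(4, edges, src=0, snk=3)   # [0, 1/3, 2/3, 1]
    print(best_isoline_value(f, edges, lengths)) # ~0.35: the isoline crosses only the neck edge
```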
1.4. Along-boundary sketch-based mesh cutting
Funkhouser et al. [6] provided a simple and intuitive tool, intelligent scissor, for interactive
mesh segmentation. With the sketch-based interface, the user paints a stroke on the mesh surface to
specify where cuts should be made; then the algorithm finds the optimal cut by solving a
constrained least cost path problem. Lee et al. [16] presented a similar method for sketch-based
mesh segmentation, by projecting the guiding line over the mesh to define the cutting boundary.
Meng et al. [4] proposed a novel mesh cut tool, iCutter, for cutting out semantic parts of 3D
shapes (shown in Fig. 1(d)). The user draws a freehand stroke to specify where cuts should be
made, and then iCutter returns the best cut that meets the user’s intention and expectation, by
selecting the optimal isoline from a well-designed scalar field. We chose iCutter (ICC) as the
representative method of this type due to its intuitiveness and flexibility.
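The sketch below illustrates the least-cost-path idea behind the boundary-type interfaces: anchor vertices sampled from the projected stroke are joined by Dijkstra shortest paths over the mesh edge graph, so the cut snaps onto cheap (feature) edges. The anchor sampling, the edge costs, and the use of networkx are assumptions for illustration; see [4, 6, 16] for the actual formulations.

```python
import networkx as nx

def scissor_cut(edge_list, edge_cost, stroke_anchors, close_loop=True):
    """Concatenate least-cost paths between consecutive anchor vertices
    sampled from the user's (projected) stroke to form the cutting boundary.

    edge_list:      iterable of (u, v) mesh edges
    edge_cost:      dict (u, v) -> cost; low cost should be assigned to
                    concave/feature edges so the path snaps onto them
    stroke_anchors: ordered vertices the projected stroke passes near
    """
    g = nx.Graph()
    for u, v in edge_list:
        w = edge_cost.get((u, v), edge_cost.get((v, u), 1.0))
        g.add_edge(u, v, weight=w)

    pairs = list(zip(stroke_anchors, stroke_anchors[1:]))
    if close_loop:
        pairs.append((stroke_anchors[-1], stroke_anchors[0]))

    boundary = []
    for a, b in pairs:
        piece = nx.shortest_path(g, a, b, weight='weight')   # Dijkstra
        boundary.extend(piece if not boundary else piece[1:])
    return boundary

if __name__ == '__main__':
    # Two routes from 0 to 3: a cheap concave "valley" (0-1-2-3) and an
    # expensive flat route (0-4-5-3); the cut snaps onto the valley.
    edges = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 3)]
    costs = {(0, 1): .1, (1, 2): .1, (2, 3): .1, (0, 4): 1, (4, 5): 1, (5, 3): 1}
    print(scissor_cut(edges, costs, stroke_anchors=[0, 3], close_loop=False))
    # [0, 1, 2, 3]
```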
Figure 1: Different user interfaces for various sketch-based mesh segmentation
algorithms: (a) foreground/background sketch-based interface [1]; (b) foreground
sketch-based interface [2]; (c) cross-boundary sketch-based interface [3]; (d)
along-boundary sketch-based interface [4].
2. Corpus
Our test models and ground-truth corpus are constructed based on the Princeton segmentation
databank. Considering the characteristics of interactive segmentation, we select 16 categories from
the databank, with five models in different poses from each category. Three categories are discarded:
tables owing to their strong symmetry, glasses owing to their simplicity, and busts owing to
their patch-type segmentations. These models are chosen so that each model is associated with one
component that can be unambiguously described to the user for extraction. We select the manual
segmentations from the database which are associated with the models to form our ground-truth
corpus.
We show one ground-truth per model in Appendix I.
3. System and assignment
To facilitate the comparison, we have implemented a complete system, allowing participants
to segment semantic parts from the models using the evaluated algorithms with the corresponding
sketch-based user interfaces. All the evaluated algorithms have been implemented and integrated
into the system.
Figure 2: Screenshot of the evaluation system. (a) A model loaded into the system; (b) a
segmentation result.
3.1. Evaluation system
Our evaluation system integrates the four evaluated algorithms together with an interaction
recording component, yet its user interface is intuitive, simple, and easy to use. Fig. 2(a)
shows a screenshot of our evaluation system.
After loading a test model into the system, the user can freely navigate the model to an
appropriate view position in the main window. Several images that illustrate the segmentation
requirement from different viewpoints are displayed automatically in the Evaluation panel
located on the right of the system. The user can drag the scroll bar named Change view to
switch images and learn which component needs to be cut out of the model.
A cutting tool bar is located on the left of the system; it shows four types of user interfaces
corresponding to the different evaluated algorithms. The user presses a button to choose the
sketch-based tool, then draws sketches on the model by dragging the mouse cursor to specify
which part needs to be extracted. Afterwards the user presses the Cutting button to perform the
segmentation. The user can inspect the segmentation result (shown in Fig. 2(b)) on the screen and
decide whether to mark new strokes to refine the segmentation. The segmentation result is updated
whenever new strokes are marked.
3.2. Training
In the beginning, we provide a short training course to all the users to help them familiarize
themselves with the system quickly.
In the training phase, we provide the specification of the system and a short video which
shows how to use the system to evaluate the different algorithms. Afterwards the users can try the
system on the sample models provided. During the training phase, the system does not record the
user's interactions.
3.3. Evaluation
After training, users can start the evaluation. Each participant was assigned to run the
experiment on the models of one set. After loading a mesh model, the participants were asked to
extract the required part from the model using the two evaluated algorithms of the specific paired
comparison, each with its corresponding sketch-based interface. Once ready, they click the Start
button on the Evaluation panel and start to extract the required component from the model using a
randomly chosen algorithm, following the instructions to mark strokes with the mouse.
After drawing a few sketches on the image plane by dragging the mouse cursor to specify the
brushes on the model, the user only needs to click the Cutting button to segment the model (shown
in Fig. 3(a)). When users finish their current task, they can click Next Task on the Evaluation
panel (shown in Fig. 3(b)) to proceed to the next task. The system then loads the same model again
and randomly specifies the other algorithm for the segmentation. After finishing this task, the users
are asked questions comparing the two algorithms they have just used (see Appendix II,
Questionnaire 1).
To prevent participants from spending too much time refining their final results, we impose a
reasonable time limit on each task, displayed on the Evaluation panel. Each task is restricted to a
maximum of 3 minutes, and users are allowed to proceed to the next task earlier when they finish
their current segmentation.
Figure 3: Segmentation in Evaluation mode. (a) Evaluation mode. (b) Evaluation
panel.
3.4. Questionnaire
Once all the segmentation tasks were completed, each participant was also asked to fill out the
questionnaire, providing the relative importance of the four criteria on a ratio scale of 1 to 9. The
questions on personal information and those related to the evaluation are listed in Appendix II,
Questionnaire 2.
Figure 4: The flow chart of the assignment.
3.5. Task Assignment
For the purpose of acquiring segmentations from participants for each model in the corpus,
we divided the ground truth randomly into 80 sets, ensuring that each set contains six models from
different categories and that each set covers different types of part shapes. Each participant was
then assigned to segment the models of one set. The participants were asked to extract the required
part from each model using the two evaluated algorithms of the specific paired comparison, each
with its corresponding sketch-based interface. At each run, the participant performed the
segmentation task on the model using the two algorithms in random order, and was then asked the
comparison questions in the questionnaire, providing the relative performance of the two
algorithms for each criterion on a ratio scale of 1 to 9. This continues until all the required
comparisons are tested. The flow chart of the assignment is shown in Fig. 4.
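The exact randomization scheme is not spelled out here, but the following hypothetical sketch shows one way such assignment sets could be generated: each set draws six models from six distinct categories, and a model may appear in several sets so that every model receives multiple segmentations.

```python
import random

def make_assignment_sets(models_by_category, n_sets=80, models_per_set=6, seed=0):
    """Build n_sets sets of models, each containing `models_per_set` models
    drawn from distinct categories."""
    rng = random.Random(seed)
    categories = list(models_by_category)
    sets = []
    for _ in range(n_sets):
        chosen_cats = rng.sample(categories, models_per_set)   # distinct categories
        sets.append([rng.choice(models_by_category[c]) for c in chosen_cats])
    return sets

# Hypothetical corpus layout: 16 categories x 5 models.
corpus = {f'cat{i}': [f'cat{i}_model{j}' for j in range(5)] for i in range(16)}
assignment = make_assignment_sets(corpus)
print(len(assignment), assignment[0])
```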
4. Experiment
We describe relevant aspects of the evaluation experiment in this section.
4.1. Participants
We first describe the detailed information of the participants.
121 individuals participated in our experiment, of whom 68 had experience in geometry
processing; the rest needed to be trained for the task. There were 87 males and 34 females among
the participants, whose ages ranged from 20 to 29 years with an average of 24. Most of the
participants were computer science graduate students.
More details are shown in the following tables.
4.2. Experiments collection
All 121 participants completed the experiments. 1452 segmentations were collected for
objective evaluation, of which 1327 were accepted, and 125 were discarded as the segmentation
conflicted with the requirement of the task. By distributing the model sets to participants equally,
each model obtained an average of four segmentations for each algorithm. Additionally, 121
survey responses for the questionnaire were collected for subjective evaluation. Thus all the
experimental results can be used to evaluate the performance of the interactive algorithms.
5. Objective evaluation
For each of these algorithms, we have computed five metrics to evaluate how similar their
interactive segmentations are to the ground-truth. We describe the evaluation metrics and relevant
evaluation results in this section.
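The five metrics are defined in the main paper; as a hedged illustration only, the two functions below show the flavor of a region measure (area-weighted labeling agreement) and a boundary measure (symmetric closest-point distance between sampled cut boundaries). They are stand-ins, not the exact metrics used in the study.

```python
import numpy as np

def region_accuracy(pred_labels, gt_labels, face_areas):
    """Area-weighted fraction of faces whose binary part label agrees with
    the ground truth (an illustrative region measure)."""
    pred = np.asarray(pred_labels, bool)
    gt = np.asarray(gt_labels, bool)
    areas = np.asarray(face_areas, float)
    return float(areas[pred == gt].sum() / areas.sum())

def boundary_distance(pred_boundary_pts, gt_boundary_pts):
    """Symmetric mean closest-point distance between two sampled cut
    boundaries (an illustrative boundary measure; lower is better)."""
    a = np.asarray(pred_boundary_pts, float)
    b = np.asarray(gt_boundary_pts, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean()))

print(region_accuracy([1, 1, 0, 0], [1, 0, 0, 0], [1.0, 1.0, 1.0, 1.0]))  # 0.75
```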
5.1. Averaged over each category
The final evaluation results for the 16 object categories are shown in Appendix III. Each of the
two bar charts on the top shows the accuracy averaged over the models of each category; in all
cases, higher bars represent better results. The plot on the bottom left shows the time required for
mesh segmentation and user interaction for each algorithm, averaged over the models of each
category. The stability of each algorithm over the models of each category is shown on the bottom
right.
5.2. Averaged over the whole data
Initial accuracy. The boundary and region accuracy computed across all the models for each
algorithm are shown below; higher bars represent better accuracy.
Final accuracy. The boundary and region accuracy computed across all the models for each
algorithm are shown below; higher bars represent better accuracy. The standard error bars are also
shown.
Below we compare the performance of each interactive algorithm within each model category.
Each entry in the top chart is averaged over the models of each category according to the region
measure, and the bottom chart is according to the boundary measure.
The following chart shows the time required for segmentation and user interaction with each
algorithm, averaged across the entire data set.
The last chart compares the stability of the evaluated algorithms for the initial segmentation and
the final segmentation, respectively.
6. Subjective evaluation
In this section, we describe the subjective evaluation using the analytic hierarchy process
(AHP). We explain in detail how we obtain the evaluation from the users' feedback, and we show
the relevant aspects of the evaluation data.
The hierarchy of the performance evaluation is developed as follows.
6.1. Criteria evaluation
We collect the data from the questionnaires the participants filled out. The ratio scales of the
relative importance of one criterion over another are averaged to construct the pairwise comparison
matrix for the criteria evaluation (shown in Table 2).
Criteria        | Ease of Use | User Intention | Stability | Efficiency
Ease of Use     | 1           | 0.5            | 0.382     | 1.74
User Intention  | 2           | 1              | 1.4       | 3.56
Stability       | 2.62        | 0.714          | 1         | 2.7
Efficiency      | 0.575       | 0.281          | 0.37      | 1
Table 2: Pair-wise comparison matrix for criteria.
Size of matrix          | 1 | 2 | 3    | 4   | 5    | 6    | 7    | 8    | 9    | 10
Random consistency (RI) | 0 | 0 | 0.58 | 0.9 | 1.12 | 1.24 | 1.32 | 1.41 | 1.45 | 1.49
Table 3: Average random consistency (RI).
Criteria        | Ease of Use | User Intention | Stability | Efficiency | Priority vector
Ease of Use     | 0.161       | 0.200          | 0.121     | 0.193      | 0.1688
User Intention  | 0.323       | 0.401          | 0.444     | 0.396      | 0.3910
Stability       | 0.423       | 0.286          | 0.317     | 0.300      | 0.3315
Efficiency      | 0.093       | 0.113          | 0.118     | 0.111      | 0.1088
λmax = 4.0407, CI = 0.0136, RI = 0.9, CR = 0.0151 < 0.1 OK.
Table 4: Synthesized matrix for criteria.
We analyze the comparison matrix with the following steps (a small numerical sketch of the whole
procedure is given after the list):
Step 1. synthesize the pairwise comparison matrix;
Step 2. calculate the priority vector for evaluation;
Step 3. estimate the consistency ratio, which comprises the following steps:
Step 4. calculate λmax;
Step 5. calculate the consistency index, CI;
Step 6. select the appropriate random consistency (RI) value from Table 3; and
Step 7. check the consistency of the pairwise comparison matrix, i.e., whether the
participants' comparisons were consistent.
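The following small sketch (numpy assumed) implements the steps listed above and, when fed the Table 2 matrix, reproduces the priority vector, λmax, CI, and CR reported below up to rounding.

```python
import numpy as np

# Random consistency index from Table 3, indexed by matrix size.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.9, 5: 1.12, 6: 1.24,
      7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp(pairwise):
    """Return (priority vector, lambda_max, CI, CR) for a pairwise comparison matrix."""
    A = np.asarray(pairwise, float)
    n = A.shape[0]
    synthesized = A / A.sum(axis=0)            # Step 1: divide each column by its total
    w = synthesized.mean(axis=1)               # Step 2: row averages = priority vector
    lam = float(np.mean((A @ w) / w))          # Steps 3-4: weighted sums / priorities
    ci = (lam - n) / (n - 1)                   # Step 5: consistency index
    cr = ci / RI[n]                            # Steps 6-7: consistency ratio
    return w, lam, ci, cr

# Table 2, criteria in the order Ease of Use, User Intention, Stability, Efficiency.
criteria = [[1,     0.5,   0.382, 1.74],
            [2,     1,     1.4,   3.56],
            [2.62,  0.714, 1,     2.7 ],
            [0.575, 0.281, 0.37,  1   ]]
w, lam, ci, cr = ahp(criteria)
print(np.round(w, 3), round(lam, 4), round(ci, 4), round(cr, 4))
# priorities ~ [0.169 0.391 0.332 0.108], lambda_max ~ 4.0407, CI ~ 0.0136, CR ~ 0.0151 < 0.1
```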
First, the pairwise comparison matrix is synthesized by dividing each element of the matrix by its
column total. The synthesized matrix for the criteria evaluation is shown in Table 4.
Then, the priority vector in Table 4 is obtained by taking the row averages. The calculation is as
follows:

\[
\begin{bmatrix}
(0.161 + 0.200 + 0.121 + 0.193)/4 \\
(0.323 + 0.401 + 0.444 + 0.396)/4 \\
(0.423 + 0.286 + 0.317 + 0.300)/4 \\
(0.093 + 0.113 + 0.117 + 0.111)/4
\end{bmatrix}
=
\begin{bmatrix}
0.1688 \\ 0.3910 \\ 0.3315 \\ 0.1085
\end{bmatrix}
\]
The consistency ratio is then estimated. First, the weighted sum vector is obtained by multiplying
the pairwise comparison matrix by the priority vector:

\[
\begin{bmatrix}
1 & 0.5 & 0.382 & 1.74 \\
2 & 1 & 1.4 & 3.56 \\
2.62 & 0.714 & 1 & 2.7 \\
0.575 & 0.281 & 0.37 & 1
\end{bmatrix}
\cdot
\begin{bmatrix}
0.1688 \\ 0.3910 \\ 0.3315 \\ 0.1085
\end{bmatrix}
=
\begin{bmatrix}
0.6797 \\ 1.5790 \\ 1.3459 \\ 0.4381
\end{bmatrix}
\]
Dividing each element of the weighted sum vector by the corresponding element of the priority
vector, we obtain:
\[
\frac{0.6797}{0.1688} = 4.0267, \quad
\frac{1.5790}{0.3910} = 4.0384, \quad
\frac{1.3459}{0.3315} = 4.0600, \quad
\frac{0.4381}{0.1085} = 4.0378
\]
We then compute the average of these values to obtain λmax:

\[
\lambda_{\max} = (4.0267 + 4.0384 + 4.0600 + 4.0378)/4 = 4.0407
\]

The consistency index, CI, is computed as follows:

\[
CI = \frac{\lambda_{\max} - n}{n - 1} = \frac{4.0407 - 4}{4 - 1} = 0.0136
\]

We calculate the consistency ratio, CR, as follows:

\[
CR = \frac{CI}{RI} = \frac{0.0136}{0.9} = 0.0151
\]
As the value of CR is less than 0.1, the judgments are acceptable.
According to the priority vector of the criteria evaluation, the rating of the criteria is shown below.
6.2. Algorithm evaluation with respect to each criterion
Similarly, the pairwise comparison matrices and priority vectors for the evaluated algorithms
with respect to each criterion are shown in Tables 5-8.
Ease of Use | EMC   | PMC   | CBB   | ICC   | Priority vector
EMC         | 1     | 0.270 | 0.769 | 0.4   | 0.1258
PMC         | 3.7   | 1     | 1.5   | 0.588 | 0.3109
CBB         | 1.3   | 0.667 | 1     | 0.625 | 0.1952
ICC         | 2.5   | 1.7   | 1.6   | 1     | 0.3682
λmax = 4.1372, CI = 0.0320, RI = 0.9, CR = 0.0356 < 0.1 OK.
Table 5: Pair-wise comparison matrix for ease of use.
User Intention | EMC | PMC   | CBB   | ICC   | Priority vector
EMC            | 1   | 0.667 | 0.526 | 0.455 | 0.1507
PMC            | 1.5 | 1     | 1.5   | 0.667 | 0.2679
CBB            | 1.9 | 0.667 | 1     | 0.909 | 0.2510
ICC            | 2.2 | 1.5   | 1.1   | 1     | 0.3303
λmax = 4.0714, CI = 0.0195, RI = 0.9, CR = 0.0217 < 0.1 OK.
Table 6: Pair-wise comparison matrix for user intention.
Stability | EMC | PMC | CBB   | ICC   | Priority vector
EMC       | 1   | 0.5 | 0.4   | 0.588 | 0.1410
PMC       | 2   | 1   | 0.833 | 0.556 | 0.2323
CBB       | 2.5 | 1.2 | 1     | 0.833 | 0.2962
ICC       | 1.7 | 1.8 | 1.2   | 1     | 0.3304
λmax = 4.0675, CI = 0.0197, RI = 0.9, CR = 0.0219 < 0.1 OK.
Table 7: Pair-wise comparison matrix for stability.
Efficiency | EMC | PMC   | CBB   | ICC   | Priority vector
EMC        | 1   | 0.286 | 0.556 | 0.472 | 0.1225
PMC        | 3.5 | 1     | 1.7   | 0.667 | 0.3270
CBB        | 1.8 | 0.588 | 1     | 0.667 | 0.2091
ICC        | 2.1 | 1.5   | 1.5   | 1     | 0.3415
λmax = 4.1079, CI = 0.0254, RI = 0.9, CR = 0.0282 < 0.1 OK.
Table 8: Pair-wise comparison matrix for efficiency.
According to the priority vector for each criterion, the rating of the evaluated algorithms with
respect to the corresponding criterion is shown as follows.
6.3. Overall performance evaluation
We combine the criterion priorities and the priorities of each evaluated algorithm relative to
each criterion in order to develop the overall priority ranking of the evaluated algorithms, which is
termed the priority matrix (shown in Table 9).
     | Ease of Use (0.1688) | User Intention (0.3910) | Stability (0.3315) | Efficiency (0.1088) | Overall priority vector
EMC  | 0.1258               | 0.1507                  | 0.1410             | 0.1225              | 0.1402
PMC  | 0.3109               | 0.2679                  | 0.2323             | 0.3270              | 0.2698
CBB  | 0.1952               | 0.2510                  | 0.2962             | 0.2091              | 0.2520
ICC  | 0.3682               | 0.3303                  | 0.3304             | 0.3415              | 0.3380
Table 9: Priority matrix for the overall performance evaluation of the algorithms.
The calculations for finding the overall priority of the algorithms are given below for
illustration purposes:
Priority of algorithm EMC
= 0.1688 × 0.1258 + 0.3910 × 0.1507 + 0.3315 × 0.1410 + 0.1088 × 0.1225 = 0.1402
Priority of algorithm PMC
= 0.1688 × 0.3109 + 0.3910 × 0.2679 + 0.3315 × 0.2323 + 0.1088 × 0.3270 = 0.2698
Priority of algorithm CBB
= 0.1688 × 0.1952 + 0.3910 × 0.2510 + 0.3315 × 0.2962 + 0.1088 × 0.2091 = 0.2520
Priority of algorithm ICC
= 0.1688 × 0.3682 + 0.3910 × 0.3303 + 0.3315 × 0.3304 + 0.1088 × 0.3415 = 0.3380
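The same combination can be written as a matrix-vector product; the short sketch below (numpy assumed) reproduces the overall priority vector of Table 9 up to rounding.

```python
import numpy as np

# Criterion weights from Table 4 (Ease of Use, User Intention, Stability, Efficiency).
criterion_weights = np.array([0.1688, 0.3910, 0.3315, 0.1088])

# Rows: EMC, PMC, CBB, ICC; columns follow the criterion order above (Tables 5-8).
algorithm_priorities = np.array([
    [0.1258, 0.1507, 0.1410, 0.1225],   # EMC
    [0.3109, 0.2679, 0.2323, 0.3270],   # PMC
    [0.1952, 0.2510, 0.2962, 0.2091],   # CBB
    [0.3682, 0.3303, 0.3304, 0.3415],   # ICC
])

overall = algorithm_priorities @ criterion_weights
for name, score in zip(['EMC', 'PMC', 'CBB', 'ICC'], overall):
    print(f'{name}: {score:.4f}')
# EMC: 0.1402, PMC: 0.2698, CBB: 0.2520, ICC: 0.3380 (matching Table 9 up to rounding)
```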
We show the overall priority vector of the evaluated algorithms in the following chart for direct
comparison.
Appendix I
Corpus
The corpus consists of 16 categories with five models each:
Category 1: Human; Category 2: Cup; Category 3: Airplane; Category 4: Ant; Category 5: Chair;
Category 6: Octopus; Category 7: Teddy; Category 8: Hand; Category 9: Plier; Category 10: Fish;
Category 11: Bird; Category 12: Armadillo; Category 13: Mech; Category 14: Bearing;
Category 15: Vase; Category 16: Fourleg.
[Thumbnail grid: one ground-truth segmentation per model, five models per category.]
Appendix II
Questionnaire
I. Questionnaire 1 (Algorithm Comparison)
In the following questions, you are asked to compare, pairwise, the algorithms you have used to
segment the model, using the relative scale measurement shown below.
Pair-wise comparison scale for AHP preferences
Numerical rating | Verbal judgments of preferences
9 | Extremely preferred
8 | Very strongly to extremely
7 | Very strongly preferred
6 | Strongly to very strongly
5 | Strongly preferred
4 | Moderately to strongly
3 | Moderately preferred
2 | Equally to moderately
1 | Equally preferred
Example: If you think algorithm A performs STRONGLY better than algorithm B, then you
should choose 5 in the "A vs. B" list box or 1/5 in the "B vs. A" list box.
A vs. B: 5    or    B vs. A: 1/5
We denote by A the first algorithm you used to segment the model, and by B the second algorithm.
1. Ease of use:    A vs. B
2. User intention: A vs. B
3. Stability:      A vs. B
4. Efficiency:     A vs. B
II. Questionnaire 2 (Criteria Evaluation)
 Personal Information Session
1. Your gender: ○ Male  ○ Female
2. Your age: _______
3. Your education background: ○ PhD  ○ M.S.  ○ Bachelor  ○ Others
4. Your computer skill: ○ Advanced  ○ Intermediate  ○ Beginner  ○ Poor
5. Your graphics background: ○ Advanced  ○ Intermediate  ○ Beginner  ○ None
 Subjective Evaluation Session
In the following questions, you are asked to compare the criteria pairwise, providing the relative
importance of one criterion over another for interactive mesh segmentation, using the relative
scale measurement shown below.
Pair-wise comparison scale for AHP preferences
Numerical rating | Verbal judgments of preferences
9 | Extremely preferred
8 | Very strongly to extremely
7 | Very strongly preferred
6 | Strongly to very strongly
5 | Strongly preferred
4 | Moderately to strongly
3 | Moderately preferred
2 | Equally to moderately
1 | Equally preferred
Example: If you think criterion A is STRONGLY more important than criterion B, then you
should choose 5 in the "A vs. B" list box or 1/5 in the "B vs. A" list box.
A vs. B: 5    or    B vs. A: 1/5
Criteria evaluation
1. Ease of use vs. User intention
2. Ease of use vs. Stability
3. Ease of use vs. Efficiency
4. User intention vs. Stability
5. User intention vs. Efficiency
6. Stability vs. Efficiency
Appendix III
Averaged over each category
Each of the two bar charts on the top shows the final accuracy averaged over the models of each
category; in all cases, higher bars represent better results. The plot on the bottom right shows the
time required for mesh segmentation and user interaction for each algorithm, averaged over the
models of each category. The stability of each algorithm over the models of each category is shown
on the bottom left.
Category 1: Human
Category 2: Cup
Category 3: Airplane
Category 4: Ant
Category 5: Chair
Category 6: Octopus
Category 7: Teddy
Category 8: Hand
Category 9: Plier
Category 10: Fish
Category 11: Bird
Category 12: Armadillo
Category 13: Mech
Category 14: Bearing
Category 15: Vase
Category 16: Fourleg