Computers & Industrial Engineering 57 (2009) 408–418
Automatic sequence of 3D point data for surface fitting using neural networks
He Xueming a,b,*, Li Chenggang a, Hu Yujin a, Zhang Rong c, Simon X. Yang d, Gauri S. Mittal d

a School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
b School of Mechanical Engineering, Southern Yangtze University, Wuxi 214122, China
c School of Science, Southern Yangtze University, Wuxi 214122, China
d School of Engineering, University of Guelph, Guelph, Ont., Canada N1G 2W1
Article info
Article history:
Received 4 February 2007
Received in revised form 13 June 2008
Accepted 5 January 2009
Available online 13 January 2009
Keywords:
Automatic sequence
CAD/CAM
Neural networks
Reverse engineering
Surface fitting
Abstract
In this paper, a neural network-based algorithm is proposed to determine the sequence of measured point data for surface fitting. In CAD/CAM, such ordered data serve as the input for fitting smooth surfaces, so that a reverse engineering system can be established for 3D sculptured surface design. The geometric feature recognition capability of back-propagation neural networks is also explored. The scan number and the 3D coordinates are used as the inputs of the proposed neural networks to determine which curve a data point belongs to and the sequence number of that point on the curve. In the segmentation step, the network output is the segment number; in the sequencing step, the outputs are the segment number and the sequence number on that segment. After evaluating a large number of trials, an optimal model is selected from various neural network architectures for segmentation and sequencing. The neural networks are successfully trained on the known data and validated on unexposed data. The proposed model can easily adapt to new data measured from the same part for a more precise fitting surface. In comparison to the method of Lin et al. [Lin, A. C., Lin, S.-Y., & Fang, T.-H. (1998). Automated sequence arrangement of 3D point data for surface fitting in reverse engineering. Computers in Industry, 35, 149–173], the presented algorithm neither needs to calculate the angle formed by each point and its two previous points nor causes any chaotic phenomenon in the point order.
© 2009 Elsevier Ltd. All rights reserved.
1. Introduction
Computer-aided design and manufacturing (CAD/CAM) play an important role in present-day product development. In general, designers first employ a CAD system to design the detailed geometric shape of a product, then send the designed shape to a CAM system to generate the data required for manufacturing, and finally operate a numerically controlled machine to produce the part. However, the design of products with complex sculptured surfaces often starts with a clay model. Many existing physical parts such as car bodies, ship hulls, propellers, shoe insoles and bottles need to be reconstructed when neither their original drawings nor CAD models are available. The process of reproducing such existing parts without detailed engineering drawings is known as reverse engineering. It is based on digitizing the actual products and then fitting the digitized data to surfaces rather than relying on blueprints. In this situation, designers must employ either a laser scanner or a coordinate measuring machine (CMM) to measure existing parts, and then use the measured data to reconstruct a CAD model. After the CAD model is
established, it is then improved by successive manufacturing-related actions. Through the measured data, reverse engineering helps designers to improve an existing product and thereby develop better products.
Varady, Martin, and Cox (1997) review data acquisition techniques, the characterization of geometric models and related surface representations, and segmentation and fitting techniques in reverse engineering. From the CMM point of view, it is difficult to acquire surface information rapidly, and the massive point data obtained can hardly be processed directly (Yau, Haque, & Menq, 1993).
The initial point data acquired by a measuring device generally require pre-processing operations such as noise filtering, smoothing, merging and data ordering before subsequent operations. Using the pre-processed point data, a surface model can be generated by curve fitting and surface fitting, as shown in Fig. 1. To sequence the measured point data and increase the accuracy of the fitted surface, Lin, Lin, and Fang (1998) proposed four algorithms to categorize and sequence continuous, supplementary and chaotic point data and to revise the sequence of points. Through sampling, regression and filtering, points are regenerated with designer interaction in a format that meets the requirements of fitting a B-spline curve with good shape and high quality (Tai & Huang, 2000).
Using the normal values of the points, a large set of unordered point data is handled and the final 3D grids are constructed by the octree-based 3D-grid method (Woo, Kang, Wang, & Lee, 2002).

Fig. 1. General process for reverse engineering: data acquisition, pre-processing, data arrangement (segmentation/sequence), curve/surface fitting, 3D surface model.
Considering that the input parameterization does not reflect the nature of a tensor-product B-spline surface, a simple surface with a few control points (a cubic Bézier patch) is built first, and a reference surface is then generated by gradually increasing the smoothness or the number of control points (Weiss, Andor, Renner, & Várady, 2002); after the best-fit least-squares surfaces are computed with adaptively optimized smoothness weights, a smooth surface within the given tolerances is found. A meshless parameterization has been proposed to parameterize and triangulate unorganized point sets digitized from a single patch (Floater & Reimers, 2001): by solving a sparse linear system the points are mapped into a planar parameter domain, and a standard triangulation of the parameter points yields a corresponding triangulation of the original data set. Chosen data points can be used as control points to construct initial NURBS surfaces, with all the measured data points then appended as target points to modify these initial surfaces by minimizing the deviation under boundary-condition constraints (Yin, 2004). Assuming the normals at the unorganized sample points are known, smooth surfaces of arbitrary topology can be reconstructed using natural neighbor interpolation (Boissonnat & Cazals, 2002).
Conventional computation methods process point data sequentially and logically, and the description of the given variables is obtained from a series of instructions for solving a problem. In comparison, artificial neural networks (NNs) can be used to solve a wide variety of problems in science and engineering, especially in fields where conventional modeling methods fail. In particular, NNs do not execute programmed instructions but respond in parallel to the input patterns (Anderson, 1995). A well-trained NN, a data-processing system inspired by biological neural systems, can be used as a model in a specific application. NNs can capture relationships or discover regularities in a set of input patterns that are difficult to describe adequately by conventional approaches. The predictive ability of an NN results from training on experimental data followed by validation on independent data (Mittal & Zhang, 2000; Zhang, Yang, Mittal, & Yi, 2002). An NN also has the ability to re-learn and thus improve its performance when new data become available.
Feature extraction has been widely investigated for many years. An NN has been employed to recognize features from boundary-representation (B-rep) solid models of parts described by an adjacency matrix that contains the input patterns for the network (Prabhakar & Henderson, 1992); in that work, however, no training step is addressed in the repeated presentations of patterns. To reconstruct a damaged or worn freeform surface, a series of points from a mathematically known surface has been used to train an NN (Gu & Yan, 1995). A set of heuristics breaks compound features into simple ones using an NN (Nezis & Vosniakos, 1997). Freeform surfaces composed of Bézier patches are reconstructed by simultaneously updating networks that correspond to the separate patches (Knopf & Kofman, 2001). A back-propagation network has been used to segment the surface primitives of parts by computing the Gaussian and mean curvatures of the surfaces (Alrashdan, Motavalli, & Fallahi, 2000). A neural network self-organizing map (SOM) method has been employed to create a 3D parametric grid and reconstruct a B-spline surface (Barhak & Fisher, 2001, 2002).
2. Research methodology
In reverse engineering, a designer first places the existing product in either a CMM or a laser scanner to measure its surface configuration. The acquired point data can then be input into CAD software to establish curves and surfaces. Before the point data are processed by the CAD system, however, they must be divided into several segments and sorted so that the start and end points of each curve are known; the data are then fitted into curves and the curves are transformed into surfaces. Currently, the whole procedure is conducted manually, so its precision cannot be guaranteed.
On the other hand, when a contact device is used to measure the surface configuration of a product, the measured point data are sometimes not dense enough to be developed into satisfactory curves, surfaces and shapes, and more points need to be measured on the portions where point data were not previously taken. The point data acquired in the second round of measurement have to be appended at the end of the original data. Consequently, the positions of these added points in the point data file must be recognized and located precisely before CAD software is used to build the surface model. Though relatively troublesome, this process proves to be an unavoidable step in reverse engineering. Owing to the varied geometry of the different surface regions on the product body, the measurement and manual point-ordering procedure become even more complicated. Lin et al. (1998) therefore employed four successive algorithms that cross-check the point data with distances and angles to categorize and index the points. Their procedure is very complicated and often produces a chaotic, wrong order; furthermore, for a different or slightly more complicated surface geometry, their method must adjust the angle threshold to meet the new demand. It is thus the objective of this paper to propose a viable, neural network-based solution that automatically segments and sequences the point data acquired by a CMM, so as to simplify and speed up the processing of measured point data in reverse engineering.
In view of the pattern recognition capability of NNs, the point data in 3D space may be measured first very sparsely, then more densely, and finally very densely, so that an accurate surface is fitted gradually. Fig. 2 shows point data measured from the engine-hood of a toy–car model. From such sparse data (Fig. 2a), the start and end points of each curve can be identified manually and directly: at the location where one curve is sectioned off in the data file, both the x- and y-coordinate values change suddenly and break the trend of the previous points, whereas for consecutive points on the same curve at least one coordinate value changes gradually. Fig. 2b and Table 1 show dense point data measured from the same model, with many more points on each measured curve. As a result, we must now judge whether consecutive points are located on the same curve or not. Since the geometry of the surfaces varies, it is difficult to use general algorithms, let alone a manual method, to separate and locate so many measured
Fig. 2. Examples of measured point data of a toy–car engine-hood: (a) sparse measuring for segmentation; (b) dense measuring; (c) supplementary appending.
Table 1
First-time measuring point data of a toy–car engine-hood (Lin et al., 1998).

Scan no.  x       y       z        Scan no.  x       y       z
1         2.014   0.993   21.813   36        51.024  56.114  15.565
2         0.445   3.721   22.308   37        49.790  74.958  15.682
3         6.870   9.732   23.498   38        47.320  85.819  17.114
4         19.254  21.805  23.555   39        44.961  92.087  17.095
5         22.918  37.935  23.569   40        43.034  98.877  16.513
6         22.965  57.358  23.771   41        52.861  1.016   12.865
7         19.290  76.573  23.796   42        55.012  9.755   13.513
8         6.730   89.302  23.440   43        57.405  17.194  13.290
9         0.000   95.148  22.648   44        59.137  27.079  11.907
10        1.648   98.701  21.980   45        59.832  39.032  11.979
11        19.283  0.048   21.121   46        59.678  55.954  12.134
12        18.624  6.114   21.999   47        58.728  74.307  12.154
13        23.602  12.657  22.156   48        57.006  84.822  13.430
14        31.906  23.151  20.754   49        54.851  91.025  13.468
15        34.491  37.749  20.586   50        52.814  97.758  13.093
16        34.455  56.674  20.801   51        62.197  1.815   8.447
17        31.736  76.012  21.206   52        65.399  10.262  8.085
18        23.526  88.066  22.315   53        67.868  18.347  7.986
19        18.414  94.316  22.094   54        69.479  28.824  7.133
20        18.676  99.719  21.198   55        69.876  40.435  7.336
21        32.849  0.000   19.065   56        69.872  55.898  7.426
22        33.262  7.855   20.134   57        69.171  73.379  7.268
23        36.685  14.874  20.045   58        67.187  83.694  8.156
24        41.675  24.282  18.092   59        64.992  89.654  8.309
25        43.292  37.885  18.067   60        63.087  96.237  8.382
26        43.144  56.272  18.296   61        73.337  3.857   1.603
27        41.314  75.508  18.538   62        77.910  10.690  0.271
28        36.585  86.876  20.099   63        80.768  19.219  0.365
29        33.218  93.192  20.041   64        82.029  31.224  0.000
30        32.044  99.571  19.166   65        82.053  42.345  0.255
31        43.675  0.391   16.219   66        82.146  56.448  0.268
32        44.951  9.034   17.218   67        81.880  72.130  0.013
33        47.512  16.331  17.054   68        79.684  82.176  0.554
34        50.254  25.493  15.307   69        77.442  87.734  0.833
35        51.240  38.241  15.339   70        75.150  92.828  1.074
point data. The first step of the proposed method can easily and accurately solve this problem by learning and testing on the situation of Fig. 2a using NN architectures.
Fig. 2c and Table 2 show the result of appending supplementary points, which are inserted into the gaps between the original measurement points to increase the accuracy of the fitted surfaces. In Table 2, the 63 points obtained in the second round of measurement are attached to the end of the original data file, while the first 70 points are kept with the same sequencing as before. It now becomes important to separate the second-round measurement points and to reorder all point data in the correct sequence. The second step of the proposed method is designed to solve this problem accurately by learning and testing on the results of the first step using NN architectures.
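To make the two-step procedure concrete, the sketch below outlines it in code. This is an illustrative assumption rather than the authors' implementation: the paper's networks were built in NeuroShell 2, whereas here they are approximated with scikit-learn's MLPRegressor on synthetic stand-in data; every name, shape and hyperparameter is hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the sparse first-round measurement (Fig. 2a):
# 7 curves of 4 points each, whose curve numbers are identified manually.
scan_sparse = np.arange(1, 29)
curve_sparse = (scan_sparse - 1) // 4 + 1           # segment labels 1..7
X_sparse = np.column_stack([scan_sparse,
                            rng.random((28, 3)) + curve_sparse[:, None]])

# Step 1 (segmentation): learn (scan no., x, y, z) -> segment number on
# the sparse data, apply the net to the dense measurement, and round the
# continuous outputs to the nearest curve number.
seg_net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
seg_net.fit(X_sparse, curve_sparse)

scan_dense = np.arange(1, 71)
curve_dense = (scan_dense - 1) // 10 + 1            # 7 curves, 10 points each
X_dense = np.column_stack([scan_dense,
                           rng.random((70, 3)) + curve_dense[:, None]])
segments = np.rint(seg_net.predict(X_dense)).astype(int)

# Step 2 (sequencing): a second net maps the same inputs to both the
# segment number and the sequence number on that segment, so that
# second-round supplementary points can be slotted into the order.
sequence = np.tile(np.arange(1, 11), 7)             # position on each curve
seq_net = MLPRegressor(hidden_layer_sizes=(11,), max_iter=5000, random_state=0)
seq_net.fit(X_dense, np.column_stack([segments, sequence]))
```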
3. Neural network design
To obtain the best segmentation and sequence with an NN, 16 neural network architectures, shown in Figs. 3 and 4, are evaluated. The networks differ in their layers, hidden neurons, scale and activation functions, learning rates, momentum and initial weights. The algorithms used in NN training comprise back-propagation (BP) nets (Fig. 4a–l), a general regression (GR) net (Fig. 4m), a multi-layer group method of data handling (GMDH) net (Fig. 4n), and an unsupervised Kohonen net and a probabilistic net (Fig. 4o and p). In the segmentation procedure for the toy–car engine-hood, only 28 datasets are available in which the start and end points of each curve can be identified manually. Five of these datasets are randomly selected as testing sets and another 2 as production sets; the remaining datasets are used for training the NN. In the sequencing procedure for the same product, there are more than 70 datasets. In general, the more datasets are available as testing, production and training sets for the NN, the better the precision obtained.
The NN architectures and learning strategies evaluated in the model design are adopted one by one. Various scale and activation functions are then selected to evaluate the chosen architectures. After that, parameters such as the learning rate, momentum, initial weights, layers and hidden neurons are addressed. Finally, through learning and testing, optimal NN algorithms for the two-step sequencing are obtained.
3.1. Neural network (NN) architecture and learning strategy
To search for optimal NN architectures for automatically segmenting and sequencing 3D point data for surface fitting, five types of NN are evaluated: BP, GR, GMDH, Kohonen (K) and probabilistic (P) nets.
Table 2
Measuring point data of an engine-hood after appending supplementary points (Lin et al., 1998).

Scan no.  x       y       z        Scan no.  x       y       z        Scan no.  x       y       z
1         2.014   0.993   21.813   46        59.678  55.954  12.134   91        39.604  18.923  19.534
2         0.445   3.721   22.308   47        58.728  74.307  12.154   92        42.929  30.732  17.998
3         6.870   9.732   23.498   48        57.006  84.822  13.430   93        43.261  46.387  18.208
4         19.254  21.805  23.555   49        54.851  91.025  13.468   94        42.648  66.399  18.365
5         22.918  37.935  23.569   50        52.814  97.758  13.093   95        39.298  82.287  19.828
6         22.965  57.358  23.771   51        62.197  1.815   8.447    96        34.378  90.425  20.178
7         19.290  76.573  23.796   52        65.399  10.262  8.085    97        32.688  96.116  19.710
8         6.730   89.302  23.440   53        67.868  18.347  7.986    98        44.283  5.257   16.887
9         0.000   95.148  22.648   54        69.479  28.824  7.133    99        45.988  12.475  17.266
10        1.648   98.701  21.980   55        69.876  40.435  7.336    100       49.071  20.484  16.559
11        19.283  0.048   21.121   56        69.872  55.898  7.426    101       50.985  31.463  15.256
12        18.624  6.114   21.999   57        69.171  73.379  7.268    102       51.206  46.458  15.498
13        23.602  12.657  22.156   58        67.187  83.694  8.156    103       50.604  66.009  15.610
14        31.906  23.151  20.754   59        64.992  89.654  8.309    104       48.888  81.513  16.953
15        34.491  37.749  20.586   60        63.087  96.237  8.382    105       45.944  89.311  17.166
16        34.455  56.674  20.801   61        73.337  3.857   1.603    106       44.136  95.074  16.886
17        31.736  76.012  21.206   62        77.910  10.690  0.271    107       54.024  5.777   13.305
18        23.526  88.066  22.315   63        80.768  19.219  0.365    108       56.145  13.524  13.512
19        18.414  94.316  22.094   64        82.029  31.224  0.000    109       58.093  21.702  12.101
20        18.676  99.719  21.198   65        82.053  42.345  0.255    110       59.718  32.650  11.861
21        32.849  0.000   19.065   66        82.146  56.448  0.268    111       59.760  46.696  12.113
22        33.262  7.855   20.134   67        81.880  72.130  0.013    112       59.402  65.579  12.091
23        36.685  14.874  20.045   68        79.684  82.176  0.554    113       58.137  80.641  13.364
24        41.675  24.282  18.092   69        77.442  87.734  0.833    114       55.857  88.221  13.466
25        43.292  37.885  18.067   70        75.150  92.828  1.074    115       53.958  94.086  13.357
26        43.144  56.272  18.296   71        1.137   2.606   22.096   116       64.100  6.385   8.025
27        41.314  75.508  18.538   72        1.941   5.777   22.713   117       66.561  14.162  8.124
28        36.585  86.876  20.099   73        13.596  15.029  23.955   118       68.759  23.238  7.213
29        33.218  93.192  20.041   74        22.127  29.469  23.362   119       69.820  34.440  7.212
30        32.044  99.571  19.166   75        22.909  47.256  23.889   120       69.860  47.358  7.428
31        43.675  0.391   16.219   76        22.179  67.465  23.661   121       69.694  65.114  7.304
32        44.951  9.034   17.218   77        13.516  83.878  23.897   122       68.449  79.438  8.022
33        47.512  16.331  17.054   78        1.604   93.027  22.998   123       65.945  86.964  8.283
34        50.254  25.493  15.307   79        0.681   96.730  22.401   124       64.118  92.619  8.217
35        51.240  38.241  15.339   80        18.858  3.083   21.587   125       76.090  7.572   0.712
36        51.024  56.114  15.565   81        20.057  9.100   22.217   126       79.390  14.338  0.228
37        49.790  74.958  15.682   82        28.048  17.098  21.823   127       81.560  25.118  0.078
38        47.320  85.819  17.114   83        34.054  30.180  20.489   128       82.139  36.764  0.097
39        44.961  92.087  17.095   84        34.370  46.593  20.735   129       82.028  48.753  0.362
40        43.034  98.877  16.513   85        33.935  66.848  20.787   130       82.187  64.608  0.066
41        52.861  1.016   12.865   86        27.873  83.098  22.087   131       81.056  77.963  0.394
42        55.012  9.755   13.513   87        19.983  91.599  22.284   132       78.404  85.372  0.732
43        57.405  17.194  13.290   88        18.423  96.943  21.753   133       76.520  89.995  0.901
44        59.137  27.079  11.907   89        33.031  4.261   19.701
45        59.832  39.032  11.979   90        34.352  11.140  20.274
Fig. 3. Flow diagram of NNs for the automatic sequencing of data points.
The back-propagation neural network (BPNN) is the most commonly used NN. Four main types of BPNN are considered: standard, jump connection, Jordan–Elman and Ward nets (Fig. 4). In standard nets (SNs, Fig. 4a–c), each layer is connected only to its immediately preceding layer, and the number of hidden layers varies from one to three in the search for the best architecture. All the neurons in a layer are identical, i.e. they have the same activation function. The number of neurons in each layer, the scaling function for the input layer and the activation functions for the other layers are also varied. In jump connection nets (JCNs, Fig. 4d–f), each layer is connected to every other layer. This architecture enables every layer to view the features detected in all previous layers, as opposed to just the preceding one. It can also detect linearity in the data owing to the direct connection between the input and output layers, and it is a powerful architecture that can work better than standard connections. Jordan–Elman nets (JENs, Fig. 4g–i) are recurrent networks with dampened feedback, with a reverse connection from the hidden layer to the input and output slabs. Ward nets (WNs, Fig. 4j–l) were introduced by Ward Systems Group (Frederick, MD). Owing to a direct link with the input layer, the output layer is able to spot any linearity present in the input data, along with the features detected by the hidden layers. The hidden layers have different activation functions, which extract different types of features from the inputs and thus present the output layer with different views of the input domain.
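The following is a minimal numpy sketch of the forward pass of such a four-layer Ward net, written as an assumption from the description above (it is not Ward Systems' code): one input slab feeds three parallel hidden slabs with different activation functions, whose outputs, together with the direct input link, feed the output slab. The slab sizes and weights are illustrative.

```python
import numpy as np

def ward_forward(u, W1, W2, W3, Wout, acts):
    """Forward pass of a four-layer Ward net: one input slab, three
    parallel hidden slabs with distinct activation functions, and an
    output slab that also sees the inputs through a direct link."""
    h1, h2, h3 = acts[0](u @ W1), acts[1](u @ W2), acts[2](u @ W3)
    return np.concatenate([u, h1, h2, h3]) @ Wout

# Activations used by the segmentation net of Section 4 (Fig. 5a).
tanh = np.tanh
sine = np.sin
gauss_comp = lambda v: 1.0 - np.exp(-v ** 2)

# 4 inputs (scan no., x, y, z), 3 neurons per hidden slab, 1 output.
rng = np.random.default_rng(1)
W1, W2, W3 = (rng.normal(scale=0.3, size=(4, 3)) for _ in range(3))
Wout = rng.normal(scale=0.3, size=(4 + 9, 1))   # direct link + 3 slabs
y = ward_forward(np.array([1.0, 2.014, 0.993, 21.813]),
                 W1, W2, W3, Wout, (tanh, sine, gauss_comp))
```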
The general regression neural network (GRNN, Fig. 4m) is a three-layer network that learns in only one epoch, needs one hidden neuron for every pattern, and works well on problems with sparse data. There are no training parameters, but a smoothing factor is required when the network is applied to the data.

The group method of data handling, or polynomial net (GMDH, Fig. 4n), is implemented with non-polynomial terms created by linear and non-linear regression in the links. Each new layer is made by regressions of the values in the previously created layer together with the input variables.

The Kohonen net (KNN, Fig. 4o) is an unsupervised learning net: no desired outputs are provided. The net looks for similarities inherent in the input data and clusters the data accordingly.

The probabilistic neural network (PNN, Fig. 4p) is implemented as a three-layer supervised network that classifies patterns. It cannot be used for problems that require continuous output values; it can only classify inputs into a specified number of categories, and it works well with problems having few training samples.

Fig. 4. Architecture of NNs for the automatic sequencing of data points.

3.2. Scale and activation functions

A satisfactory NN model for the proposed problem depends on the selection of suitable scale and activation functions. To obtain an optimal NN model, several scaling and activation functions are evaluated.

3.2.1. Scaling function

To ensure an equal contribution of each input variable to the NN, the inputs of the model are scaled into a common numeric range, either (0, 1) or (−1, 1). Within these two ranges there are two types of scaling functions, linear and non-linear. Two linear scaling functions are (0, 1) and (−1, 1), in which data below or above the range are clipped to the lower or upper bound. Another two linear ones are ⟨⟨0, 1⟩⟩ and ⟨⟨−1, 1⟩⟩, in which larger and smaller numbers are allowed for new data and are not clipped at the bounds. The two non-linear scaling functions are the logistic and tanh functions, which scale data to (0, 1) and (−1, 1), respectively. The logistic function is G(u) = 1/(1 + exp(−(u − ū)/σ)) and the tanh function is G(u) = tanh((u − ū)/σ), where u and ū are the variable and its mean value, respectively, and σ is the standard deviation of the variable.

To observe their effectiveness, both linear and non-linear scaling methods are evaluated in the NN model.

3.2.2. Activation function

An activation function is needed to act on each hidden neuron and on the output neuron. It can be either linear or non-linear, but to capture a non-linear relationship between input and output, the activation function of the hidden neurons must be non-linear. Eight activation functions are used in the NN model:

Linear: G(u) = u
Logistic: G(u) = 1/(1 + exp(−u))
Symmetric-logistic: G(u) = 2/(1 + exp(−u)) − 1
Gaussian: G(u) = exp(−u²)
Gaussian-complement: G(u) = 1 − exp(−u²)
Hyperbolic tangent (Tanh): G(u) = tanh(u)
Tanh15: G(u) = tanh(1.5u)
Sine: G(u) = sin(u)

After an activation function is applied to the weighted sum of the input values u_i with weights w_ji, the hidden-layer outputs h_j = G(Σ_i w_ji u_i) are produced; these are passed forward to obtain the final model outputs v_k = F(Σ_j w_kj h_j), where G and F are the activation functions.
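As an illustration, the scaling and activation functions listed above can be written down directly. The sketch below is a plain-numpy rendering of these definitions, with the clipping behaviour of the linear variants included as described; it is an aid to the text, not the NeuroShell implementation.

```python
import numpy as np

def scale_linear_01(v, lo, hi, clip=True):
    """Linear (0, 1) scaling; clip=True clips out-of-range data to the
    bounds, as in the (0, 1) variant, while clip=False lets new data
    exceed the bounds, as in the <<0, 1>> variant."""
    s = (v - lo) / (hi - lo)
    return np.clip(s, 0.0, 1.0) if clip else s

def scale_logistic(v):
    """Non-linear scaling to (0, 1) using the variable's mean and std."""
    return 1.0 / (1.0 + np.exp(-(v - v.mean()) / v.std()))

def scale_tanh(v):
    """Non-linear scaling to (-1, 1)."""
    return np.tanh((v - v.mean()) / v.std())

# The eight candidate activation functions for the hidden/output slabs.
activations = {
    "linear":              lambda u: u,
    "logistic":            lambda u: 1.0 / (1.0 + np.exp(-u)),
    "symmetric-logistic":  lambda u: 2.0 / (1.0 + np.exp(-u)) - 1.0,
    "gaussian":            lambda u: np.exp(-u ** 2),
    "gaussian-complement": lambda u: 1.0 - np.exp(-u ** 2),
    "tanh":                np.tanh,
    "tanh15":              lambda u: np.tanh(1.5 * u),
    "sine":                np.sin,
}
```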
3.3. Learning rate, momentum and initial weights
Weight changes in back-propagation are proportional to the
negative gradient of the error. This guideline determines the relative changes that must occur in different weights when a set of
samples are presented, but this does not fix the exact magnitude
of the desired weight. The magnitude changes depend on the
appropriate choice of the learning rate (g). A large value of g would
lead to rapid learning but the weight might then oscillate, while a
small one imply slow learning (Anderson, 1995). In the learning of
an NN, the right value of g is between 0.1 and 0.9.
The momentum coefficient (a) determines the proportion of the
last weight change that is added to the new weight change. The value of a can be obtained adaptively as the value of g changes. A
well-chosen value of a could significantly reduce the number of
iterations for convergence. A value close to 0 implies that the past
history does not have much effect on the weight change, while a
value closer to 1 suggests that the current error has little effect
on the weight change.
Training is generally commenced with the initial weight values
(x) chosen randomly. Since larger weight magnitude might drive
the first hidden nodes to saturation and then require large amounts
of training time to emerge from the saturated state, 0.3 is selected
as the value of x for all neurons in the NN.
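A generic back-propagation update with these three parameters might be sketched as follows; this is the textbook rule implied by the description above, not NeuroShell's internal code, and the gradient values are placeholders.

```python
import numpy as np

def update_weights(w, grad, prev_dw, eta=0.1, alpha=0.1):
    """One back-propagation step: the weight change is proportional to
    the negative error gradient (scaled by the learning rate eta) plus
    a fraction alpha of the previous change (the momentum term)."""
    dw = -eta * grad + alpha * prev_dw
    return w + dw, dw

# Initial weights of magnitude 0.3, as selected above for all neurons.
w = np.full(4, 0.3)
dw = np.zeros_like(w)
w, dw = update_weights(w, grad=np.array([0.2, -0.1, 0.05, 0.0]), prev_dw=dw)
```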
3.4. Number of hidden neurons
Many important issues, such as how many training samples for
successful learning and how large of an NN for a specific task, are
solved in practice by trial and error. With very few hidden neurons,
the network may not be powerful enough for a given learning task;
413
X. He et al. / Computers & Industrial Engineering 57 (2009) 408–418
while with a large number of hidden neurons, the computation is
too expensive. Neural learning is considered as successful only if
the system could perform well on test data which the system has
not been trained.
For a three-layer NN, the minimum hidden neurons can be computed according to the following formula:
Nhidden Ninput þ Noutput pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
þ Ndatasets ;
2
where Nhidden is the minimum number of hidden neurons; Ninput and
Noutput are the numbers of input neurons and output neurons; and
Ndatasets is the number of training datasets (NeuroShell 2, Release
4.0 help file).
For more than three-layers, the neurons in each hidden layer
Nhidden should be divided by the number of hidden layers. For multiple slabs in the same hidden layer, the hidden neurons also must
be divided evenly among the slabs.
Thus, in the proposed segmentation NN model, the minimum
numbers of hidden neurons for three-, four- and five-layer networks are 8, 4 and 3, respectively; while in the proposed sequence
NN model, they are 11, 6 and 4, respectively.
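This rule of thumb is easy to reproduce. In the sketch below, the training-set sizes (21 for segmentation, 64 for sequencing) are assumptions inferred from the dataset counts quoted earlier in Section 3; with them, the formula yields exactly the neuron counts stated above.

```python
import math

def min_hidden_neurons(n_input, n_output, n_datasets, n_hidden_layers=1):
    """NeuroShell 2's rule of thumb: (inputs + outputs)/2 + sqrt(training
    patterns), divided evenly among the hidden layers (or slabs)."""
    n3 = (n_input + n_output) / 2 + math.sqrt(n_datasets)
    return math.ceil(n3 / n_hidden_layers)

# Segmentation model: 4 inputs, 1 output, ~21 training sets (assumed).
print([min_hidden_neurons(4, 1, 21, k) for k in (1, 2, 3)])  # [8, 4, 3]
# Sequence model: 4 inputs, 2 outputs, ~64 training sets (assumed).
print([min_hidden_neurons(4, 2, 64, k) for k in (1, 2, 3)])  # [11, 6, 4]
```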
3.5. Optimal artificial neural network
For optimization of the NNs, different models are tested by
varying the number of hidden neurons. For example, more than
362 different models for segmenting are evaluated considering
that the hidden neurons vary from 7 to 20 for three-, four- and
five-layered NNs.
Firstly, working with the 362 models in BP, Kohonen and probabilistic networks, the best two models are selected based on the
feature recognition accuracy. Secondly, working with the two selected models, the optimum scale and activation functions are
found. Lastly, using GRNN and GMDH, the optimum learning rate,
momentum and initial weights of the NN are also figured out. The
evaluating method for selecting optimal NN is based on the minimization of deviations between predicted and actual observed values. The statistical parameters such as Sr, Sd and R2 are used to
evaluate the performance of each NN (Pan & Li, 1999).
Sr = Σ(ν − ν̃)²,
Sd = Σ(ν − ν̄)²,
R² = 1 − Sr/Sd,

where ν is the actual value, ν̃ is the predicted value of ν, and ν̄ is the mean of ν; Sr is the sum of squared deviations between the actual and predicted values due to regression; Sd is the sum of squared deviations between the actual values and their mean; and R², the coefficient of determination, refers to a multiple-regression fit as opposed to a simple-regression one.
At the same time, the following errors are calculated: the mean squared error (E) is the mean of the squares of the actual minus the predicted values; the mean absolute error (e) is the mean of the absolute values of the actual minus the predicted values; and the mean relative error (e′) is the mean of the absolute error divided by the actual value. The standard deviations σ and σ′ of e and e′, and the percentage of data within 5% error, are also calculated.
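A compact implementation of these evaluation statistics might look as follows; it is a sketch and assumes the actual values are non-zero so that the relative error is defined.

```python
import numpy as np

def evaluate(actual, predicted):
    """Statistical parameters used to rank the candidate networks."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    s_r = np.sum((actual - predicted) ** 2)      # deviation from regression
    s_d = np.sum((actual - actual.mean()) ** 2)  # deviation from the mean
    abs_err = np.abs(actual - predicted)
    rel_err = abs_err / np.abs(actual)           # assumes actual != 0
    return {
        "R2": 1.0 - s_r / s_d,
        "E (mean squared error)": np.mean((actual - predicted) ** 2),
        "e (mean absolute error)": abs_err.mean(),
        "e' (mean relative error)": rel_err.mean(),
        "sigma of e": abs_err.std(),
        "sigma' of e'": rel_err.std(),
        "% within 5% error": 100.0 * np.mean(rel_err <= 0.05),
    }
```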
After testing and evaluating a large number of NNs, the optimal NN for each step of the automatic sequencing of point data is a four-layer Ward net (WN3).
4. Implementation
As sculptured surfaces are very complicated, it is necessary to scan enough points for surface fitting. The task involves categorizing these points into different segments and then sequencing the segmented points into continuous point series. The proposed method employs two NN models for these two operations. It requires parallel input datasets representing the geometric characteristics of the scanned point data; these input datasets are derived automatically from the scanned points and presented to the NN module to recognize the appropriate feature as a curve. The number of neurons in the hidden layer and the number of hidden layers are initially selected to be as small as possible, and are then increased until the network converges to a solution.
4.1. Segmentation
This step aims to train and test many NNs on the relatively simple and sparse point data captured from the sculptured surface in order to find an optimal NN architecture for segmenting the point data. The optimal NN is then trained with the learning rate (η) and momentum (α) equal to 0.1 and the initial weights (ω) equal to 0.3 for all neurons. The weights are adjusted to reduce the error between the actual and predicted values after all the training patterns have been presented to the network. The trained NN is then applied to the dense point data measured from the same surface. The final optimal NN architecture for the segmentation of point data is a four-layer WN3 that contains three neurons in each of the second, third and fourth slabs (Fig. 5a). The scaling function is linear (0, 1); the activation functions for the second, third and fourth slabs
are Tanh, Sine and Gaussian-complement, respectively. The inputs of the NN are the scan number and the 3D coordinates, and the output is the curve (segment) number. The NN is trained and tested on the sparse data points of Fig. 2a, and is then used to segment and categorize the dense point data of Fig. 2b and Table 1 into the curves illustrated in Fig. 6 and Table 3.

In Table 3, the first 10 points, whose actual network output values of 1.003–1.249 are closest to 1.0, lie on the same curve, i.e. curve 1. The second 10 points, with actual network output values of 1.944–2.216 closest to 2.0, compose another curve, i.e. curve 2. The remaining points construct the other curves in the same way. Thus, all the points are separated into 7 curves with 10 points on each curve. As shown in Fig. 6, the first curve is made up of the first 10 points (points 1, 2, 3, ..., 9 and 10); the next 10 consecutive points form the second curve, and so on.

Fig. 5. Configuration of the NNs for the automatic sequencing of data points: (a) the segmentation net, with inputs scan no., x, y, z and output segment number; (b) the sequencing net, with the same inputs and outputs segment number and sequence number.

Fig. 6. Automatic segmentation of dense measured point data using an NN (curves 1–7).

4.2. Sequencing

If the fitted surface does not describe the part correctly, supplementary point data (Fig. 2c or Fig. 7) need to be measured. These added points should be arranged, assigned to a particular curve and sequenced into a suitable array. Assuming that the original curves have been sequenced by the previous NN model, the second NN model is trained and tested on the results of the previous model. After carrying out the same procedures as for segmentation, a final optimal NN architecture for sequencing is also obtained (Fig. 5b). The differences between the sequencing and segmentation NN models are that each hidden slab contains four neurons, and that the activation functions for the second, third and fourth slabs are Logistic, Symmetric-logistic and Gaussian-complement, respectively. The outputs are the segment number and the sequence number on that segment.

The sequencing of the supplementary and previous points of Fig. 2c is illustrated in Figs. 8–10 and Table 4.
Table 3
Predicted segment numbers of the point data of Fig. 2b using the optimum NN.
Pattern: WN3. Scaling function: (0, 1). Activation functions: Tanh, Sine, Gaussian-complement.

Scan no.  x       y       z       Segment no.   Scan no.  x       y       z       Segment no.
1         2.014   0.993   21.813  1.102         36        51.024  56.114  15.565  3.939
2         0.445   3.721   22.308  1.008         37        49.79   74.958  15.682  4.059
3         6.87    9.732   23.498  1.027         38        47.32   85.819  17.114  3.972
4         19.254  21.805  23.555  1.249         39        44.961  92.087  17.095  3.987
5         22.918  37.935  23.569  1.098         40        43.034  98.877  16.513  4.083
6         22.965  57.358  23.771  1.081         41        52.861  1.016   12.865  4.983
7         19.29   76.573  23.796  1.203         42        55.012  9.755   13.513  4.948
8         6.73    89.302  23.44   1.064         43        57.405  17.194  13.29   5.021
9         0       95.148  22.648  1.003         44        59.137  27.079  11.907  5.075
10        1.648   98.701  21.98   1.100         45        59.832  39.032  11.979  4.983
11        19.283  0.048   21.121  2.175         46        59.678  55.954  12.134  5.002
12        18.624  6.114   21.999  1.948         47        58.728  74.307  12.154  5.090
13        23.602  12.657  22.156  1.968         48        57.006  84.822  13.43   4.993
14        31.906  23.151  20.754  2.216         49        54.851  91.025  13.468  4.948
15        34.491  37.749  20.586  2.026         50        52.814  97.758  13.093  5.012
16        34.455  56.674  20.801  1.990         51        62.197  1.815   8.447   5.851
17        31.736  76.012  21.206  2.125         52        65.399  10.262  8.085   5.979
18        23.526  88.066  22.315  1.944         53        67.868  18.347  7.986   6.041
19        18.414  94.316  22.094  1.982         54        69.479  28.824  7.133   6.096
20        18.676  99.719  21.198  2.111         55        69.876  40.435  7.336   6.037
21        32.849  0       19.065  3.252         56        69.872  55.898  7.426   6.010
22        33.262  7.855   20.134  3.026         57        69.171  73.379  7.268   6.158
23        36.685  14.874  20.045  3.020         58        67.187  83.694  8.156   6.057
24        41.675  24.282  18.092  3.188         59        64.992  89.654  8.309   5.993
25        43.292  37.885  18.067  2.993         60        63.087  96.237  8.382   6.016
26        43.144  56.272  18.296  2.958         61        73.337  3.857   1.603   6.841
27        41.314  75.508  18.538  3.091         62        77.91   10.69   0.271   7.000
28        36.585  86.876  20.099  2.942         63        80.768  19.219  0.365   7.000
29        33.218  93.192  20.041  3.008         64        82.029  31.224  0       7.000
30        32.044  99.571  19.166  3.152         65        82.053  42.345  0.255   6.976
31        43.675  0.391   16.219  4.169         66        82.146  56.448  0.268   6.994
32        44.951  9.034   17.218  4.023         67        81.88   72.13   0.013   7.000
33        47.512  16.331  17.054  4.043         68        79.684  82.176  0.554   7.000
34        50.254  25.493  15.307  4.118         69        77.442  87.734  0.833   6.990
35        51.24   38.241  15.339  3.956         70        75.15   92.828  1.074   6.975
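The mapping from the continuous network outputs of Table 3 to discrete curves amounts to rounding each predicted segment number to the nearest integer. A minimal sketch, using the first twelve output values from Table 3:

```python
import numpy as np

# Actual network outputs for the first 12 scan points (from Table 3).
seg_out = np.array([1.102, 1.008, 1.027, 1.249, 1.098, 1.081,
                    1.203, 1.064, 1.003, 1.100, 2.175, 1.948])

curves = np.rint(seg_out).astype(int)      # 1.003..1.249 -> curve 1, etc.
for c in np.unique(curves):
    members = np.where(curves == c)[0] + 1  # 1-based scan numbers
    print(f"curve {c}: points {members.tolist()}")
```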
Fig. 7. Supplemental point data around the segmented curves.

Fig. 8. Automatic sequencing of the supplemental plus dense measured point data using an NN.

Fig. 9. Fitting surface from: (a) dense measured point data and (b) appended supplemental point data.

Fig. 10. Automatic segmentation of: (a) dense measured point data and (b) appended supplemental point data digitized from a leaf-like surface, and (c, d) the corresponding fitting surfaces.
Table 4
Predicted segment numbers and sequence numbers of the point data of Fig. 2c using the optimum NN.
Pattern: WN3. Scaling function: (0, 1). Activation functions: Logistic, Symmetric-logistic, Gaussian-complement.

Scan no.  x       y       z       Segment no.  Sequence no.
1         2.014   0.993   21.813  1.000        1.057
71        1.137   2.606   22.096  1.000        1.378
2         0.445   3.721   22.308  1.006        1.629
72        1.941   5.777   22.713  1.005        2.082
3         6.870   9.732   23.498  1.008        2.897
73        13.596  15.029  23.955  1.132        3.653
4         19.254  21.805  23.555  1.173        4.165
74        22.127  29.469  23.362  1.091        4.604
5         22.918  37.935  23.569  1.000        5.063
75        22.909  47.256  23.889  1.000        5.537
6         22.965  57.358  23.771  1.000        6.016
76        22.179  67.465  23.661  1.023        6.480
7         19.290  76.573  23.796  1.051        6.916
77        13.516  83.878  23.897  1.000        7.369
8         6.730   89.302  23.440  1.000        7.937
78        1.604   93.027  22.998  1.000        8.631
9         0.000   95.148  22.648  1.000        9.122
79        0.681   96.730  22.401  1.000        9.467
10        1.648   98.701  21.980  1.040        9.875
11        19.283  0.048   21.121  2.019        1.000
80        18.858  3.083   21.587  2.013        1.433
12        18.624  6.114   21.999  1.943        2.005
81        20.057  9.100   22.217  1.930        2.538
13        23.602  12.657  22.156  1.985        3.013
82        28.048  17.098  21.823  2.051        3.421
14        31.906  23.151  20.754  2.036        3.923
83        34.054  30.180  20.489  2.009        4.434
15        34.491  37.749  20.586  1.932        4.939
84        34.370  46.593  20.735  1.868        5.463
16        34.455  56.674  20.801  1.910        5.993
85        33.935  66.848  20.787  1.992        6.507
17        31.736  76.012  21.206  2.028        6.994
86        27.873  83.098  22.087  1.937        7.483
18        23.526  88.066  22.315  1.879        8.039
87        19.983  91.599  22.284  1.883        8.574
19        18.414  94.316  22.094  1.958        9.068
88        18.423  96.943  21.753  2.116        9.576
20        18.676  99.719  21.198  2.326        10.000
21        32.849  0.000   19.065  2.984        1.000
89        33.031  4.261   19.701  3.031        1.503
22        33.262  7.855   20.134  2.968        2.052
90        34.352  11.140  20.274  2.930        2.505
23        36.685  14.874  20.045  2.971        2.931
91        39.604  18.923  19.534  3.096        3.325
24        41.675  24.282  18.092  3.009        3.872
92        42.929  30.732  17.998  3.024        4.400
25        43.292  37.885  18.067  2.995        4.913
93        43.261  46.387  18.208  2.957        5.441
26        43.144  56.272  18.296  2.958        5.983
94        42.648  66.399  18.365  3.001        6.509
27        41.314  75.508  18.538  3.068        7.023
95        39.298  82.287  19.828  3.047        7.435
28        36.585  86.876  20.099  2.935        7.943
96        34.378  90.425  20.178  2.894        8.465
29        33.218  93.192  20.041  2.942        8.951
97        32.688  96.116  19.710  3.056        9.502
30        32.044  99.571  19.166  3.211        10.000
31        43.675  0.391   16.219  3.994        1.000
98        44.283  5.257   16.887  4.040        1.549
32        44.951  9.034   17.218  4.019        2.059
99        45.988  12.475  17.266  4.017        2.495
33        47.512  16.331  17.054  4.052        2.944
100       49.071  20.484  16.559  4.086        3.396
34        50.254  25.493  15.307  4.025        3.923
101       50.985  31.463  15.256  4.041        4.437
35        51.240  38.241  15.339  4.032        4.938
102       51.206  46.458  15.498  4.000        5.457
36        51.024  56.114  15.565  3.963        5.997
103       50.604  66.009  15.610  3.971        6.521
37        49.790  74.958  15.682  4.049        7.038
104       48.888  81.513  16.953  4.071        7.420
38        47.320  85.819  17.114  4.018        7.889
105       45.944  89.311  17.166  3.986        8.388
39        44.961  92.087  17.095  3.984        8.862
106       44.136  95.074  16.886  4.008        9.417
40        43.034  98.877  16.513  4.044        10.000
41        52.861  1.016   12.865  4.958        1.000
107       54.024  5.777   13.305  4.956        1.553
42        55.012  9.755   13.513  4.933        2.069
108       56.145  13.524  13.512  4.940        2.543
43        57.405  17.194  13.290  4.984        2.979
109       58.093  21.702  12.101  4.991        3.496
44        59.137  27.079  11.907  5.038        4.031
110       59.718  32.650  11.861  5.062        4.513
45        59.832  39.032  11.979  5.043        4.988
111       59.760  46.696  12.113  4.999        5.475
46        59.678  55.954  12.134  4.958        5.996
112       59.402  65.579  12.091  4.957        6.516
47        58.728  74.307  12.154  5.011        7.033
113       58.137  80.641  13.364  4.929        7.429
48        57.006  84.822  13.430  4.922        7.893
114       55.857  88.221  13.466  4.917        8.381
49        54.851  91.025  13.468  4.913        8.853
115       53.958  94.086  13.357  4.920        9.414
50        52.814  97.758  13.093  4.935        10.000
51        62.197  1.815   8.447   5.983        1.000
116       64.100  6.385   8.025   6.039        1.525
52        65.399  10.262  8.085   5.982        2.014
117       66.561  14.162  8.124   5.942        2.508
53        67.868  18.347  7.986   5.954        3.015
118       68.759  23.238  7.213   6.066        3.546
54        69.479  28.824  7.133   6.081        4.100
119       69.820  34.440  7.212   6.070        4.581
55        69.876  40.435  7.336   6.040        5.021
120       69.860  47.358  7.428   6.002        5.461
56        69.872  55.898  7.426   5.972        5.948
121       69.694  65.114  7.304   5.990        6.459
57        69.171  73.379  7.268   6.051        6.978
122       68.449  79.438  8.022   5.954        7.448
58        67.187  83.694  8.156   5.983        7.950
123       65.945  86.964  8.283   6.001        8.439
59        64.992  89.654  8.309   6.023        8.906
124       64.118  92.619  8.217   6.052        9.456
60        63.087  96.237  8.382   5.969        10.000
61        73.337  3.857   1.603   6.984        1.088
125       76.090  7.572   0.712   7.000        1.485
62        77.910  10.690  0.271   7.000        1.843
126       79.390  14.338  0.228   7.000        2.283
63        80.768  19.219  0.365   6.996        2.871
127       81.560  25.118  0.078   7.000        3.520
64        82.029  31.224  0.000   7.000        4.108
128       82.139  36.764  0.097   7.000        4.569
65        82.053  42.345  0.255   6.983        4.978
129       82.028  48.753  0.362   6.960        5.397
66        82.146  56.448  0.268   6.957        5.863
130       82.187  64.608  0.066   6.983        6.364
67        81.880  72.130  0.013   7.000        6.902
131       81.056  77.963  0.394   7.000        7.457
68        79.684  82.176  0.554   7.000        8.042
132       78.404  85.372  0.732   7.000        8.583
69        77.442  87.734  0.833   7.000        9.024
133       76.520  89.995  0.901   7.000        9.456
70        75.150  92.828  1.074   6.982        9.958
The first example point, scan no. 1 (Table 4), has network output values of 1.000 (segment no.) and 1.057 (sequence no.); the second example point, 71, has output values of 1.000 and 1.378; and the third example point, 2, has output values of 1.006 and 1.629. This indicates that points 1, 71 and 2 occupy the first curve and are three consecutive points. According to the predictions of the NN, the other points on the first curve are 72, 3, 73, ..., 9, 79 and 10. Another 19 points lie on the second curve, and so on. Thus point 1 is smoothly connected to 71, 2, 72, 3, ... and 10 on the first curve (Fig. 8), and points 11, 80, 12, 81, ... and 20 construct the second curve. Comparison of Table 4 with Fig. 8 indicates that the predicted segment and sequence numbers of the curves for all the points accord fully with the actual point positions. Fig. 9a shows a surface drawn from the dense-measured points, while Fig. 9b shows a more accurate fitted surface after the supplemental points are appended.

Table 5
Prediction error analysis for the optimum NN (WN3).

Statistical parameter               Segmentation step           Sequence step
Patterns processed                  Training and testing: 28;   Training and testing: 70;
                                    production: 70              production: 133
Output                              Segment no.                 Segment no.   Sequence no.
Coefficient of determination R²     0.9997                      0.999         0.9992
Mean squared error E                0.001                       0.004         0.007
Mean absolute error e               0.025                       0.041         0.059
Standard deviation σ of e           0.032                       0.063         0.084
Mean relative error e′              0.008                       0.002         0.022
Standard deviation σ′ of e′         0.075                       0.065         0.126
Percent within 5% error             92.857                      92.857        94.286
Percent within 5–10% error          7.143                       4.286         4.286
Percent within 10–20% error         0                           2.857         1.429

The prediction error analysis is given in Table 5. The mean absolute error for the two-step sequencing varies from 2.5% (for the segment number) to 5.9% (for the sequence number), and the mean relative error varies from 0.2% to 2.2%. The standard deviations of the absolute and relative errors range from 3.2% to 8.4% and from 6.5% to 12.6%, respectively. The proportion of data within 5% error increases from 92.9% in the segmentation step to 94.3% in the sequencing step. From the prediction error analysis and the comparison of the predicted positions of the point data in the plan figures with their actual locations, it is clear that the proposed NN models are valid for the automatic sequencing of point data in reverse engineering.
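Reordering the combined point file from the predicted outputs then reduces to sorting by the rounded segment number and, within a segment, by the predicted sequence number. A small sketch using values taken from Table 4:

```python
import numpy as np

# Predicted (segment no., sequence no.) pairs for a few scan points,
# taken from Table 4: scans 1, 71, 2, 72 all lie on curve 1.
scan = np.array([1, 71, 2, 72, 11, 80, 12])
seg = np.array([1.000, 1.000, 1.006, 1.005, 2.019, 2.013, 1.943])
seq = np.array([1.057, 1.378, 1.629, 2.082, 1.000, 1.433, 2.005])

# Sort primarily by rounded segment, secondarily by sequence number.
order = np.lexsort((seq, np.rint(seg)))
print(scan[order])   # -> [ 1 71  2 72 11 80 12]
```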
Fig. 10a and b shows the automatic segmentation and sequencing of dense measured point data and appended supplemental point data digitized from a leaf-like surface using the proposed NN models, and Fig. 10c and d shows the corresponding fitted surfaces.

Fig. 11 shows another surface, a blade, reconstructed by the proposed NN models of four-layer Ward nets.

Fig. 11. Automatic segmentation of: (a) dense measured point data and (b) appended supplemental point data digitized from a blade surface, and (c, d) the corresponding fitting surfaces.
5. Conclusions
A novel neural network methodology for sequencing a set of scanned point data has been developed. The method can either automatically add supplementary data to the original set or re-establish the data point sequence, without angle computation and without chaotic phenomena. The neural network approach is a flexible methodology that can easily be trained to handle point data in reverse engineering. The trained model successfully recognizes geometric features such as key curves and the order of the point data. The typical geometric characteristics of the input patterns are derived automatically from the scanned points and presented to the neural network model to recognize the appropriate geometric features. The ordered point data are then used to build a non-uniform rational B-spline surface model. The operator can observe how the non-uniform rational B-spline surfaces differ from the actual part and decide whether denser data points need to be added.

The optimal neural networks for the segmentation and sequencing of point data are four-layer Ward nets, whose activation functions are Tanh, Sine and Gaussian-complement for the segmentation procedure
and Logistic, Symmetric-logistic and Gaussian-complement for the sequencing procedure. The mean absolute error for the two-step sequencing of point data varies from 2.5% to 5.9%, with an average error of 4%, and more than 94.3% of the data are within 5% error in the sequencing step. Reasonable results are obtained from the well-trained neural networks, which offer an alternative tool for surface fitting in reverse engineering.
Acknowledgement
The authors acknowledge the financial support of the National
Science Foundation of China (50575082).
References
Alrashdan, A., Motavalli, S., & Fallahi, B. (2000). Automatic segmentation of digitized
data for reverse engineering application. IIE Transactions, 32, 59–69.
Anderson, J. A. (1995). An introduction to neural networks. Cambridge, MA: MIT Press.
Barhak, J., & Fisher, A. (2001). Parameterization and reconstruction from 3D
scattered points based on neural network and PDE techniques. IEEE Transactions
on Visualization and Computer Graphics, 7(1), 1–16.
Barhak, J., & Fisher, A. (2002). Adaptive reconstruction of freeform objects with 3D
SOM neural network grids. Computers & Graphics, 26, 745–751.
Boissonnat, J.-D., & Cazals, F. (2002). Smooth surface reconstruction via natural
neighbor interpolation of distance functions. Computational Geometry, 22,
185–203.
Floater, M. S., & Reimers, M. (2001). Meshless parameterization and surface
reconstruction. Computer Aided Geometric Design, 18, 77–92.
Gu, P., & Yan, X. (1995). Neural network approach to the reconstruction of freeform
surfaces for reverse engineering. Computer-Aided Design, 27(1), 59–64.
Knopf, G. K., & Kofman, J. (2001). Adaptive reconstruction of free-form surfaces
using Bernstein basis function networks. Engineering Applications of Artificial
Intelligence, 14, 577–588.
Lin, A. C., Lin, S.-Y., & Fang, T.-H. (1998). Automated sequence arrangement of 3D
point data for surface fitting in reverse engineering. Computers in Industry, 35,
149–173.
Mittal, G. S., & Zhang, J. (2000). Prediction of temperature and moisture content of
frankfurters during thermal processing using neural network. Meat Science, 55,
13–24.
Nezis, K., & Vosniakos, G. (1997). Recognising 2.5D shape features using a neural
network and heuristics. Computer-Aided Design, 29(7), 523–539.
Pan, D., & Li, Q. (1999). Study on the multi-index comprehensive evaluation method of artificial neural network. Journal of SSCSA, 15(2), 105–107, 110.
Prabhakar, S., & Henderson, M. R. (1992). Automatic form-feature recognition using
neural-network-based techniques on boundary representations of solid models.
Computer-Aided Design, 24(7), 381–393.
Tai, C.-C., & Huang, M.-C. (2000). The processing of data points basing on design intent in reverse engineering. International Journal of Machine Tools & Manufacture, 40, 1913–1927.
Varady, T., Martin, R. R., & Cox, J. (1997). Reverse engineering of geometric models –
an introduction. Computer-Aided Design, 29(4), 255–268.
Weiss, V., Andor, L., Renner, G., & Várady, T. (2002). Advanced surface fitting
techniques. Computer Aided Geometric Design, 19, 19–42.
Woo, H., Kang, E., Wang, S., & Lee, K. H. (2002). A new segmentation method for
point cloud data. International Journal of Machine Tools & Manufacture, 42,
167–178.
Yau, H. T., Haque, S., & Menq, C. H. (1993). Reverse engineering in the design of engine intake and exhaust ports. In Proceedings of the symposium on computer-controlled machines for manufacturing, ASME winter annual meeting, New Orleans, LA, USA.
Yin, Z. (2004). Reverse engineering of a NURBS surface from digitized points subject
to boundary conditions. Computers & Graphics, 28, 207–212.
Zhang, Q., Yang, S. X., Mittal, G. S., & Yi, S. (2002). Prediction of performance indices
and optimal parameters of rough rice drying using neural networks. Biosystems
Engineering, 83(3), 281–290.