Application of Data Fusion
in Nondestructive Testing (NDT)
René Heideklang and Parisa Shokouhi
BAM – Federal Institute for Materials Research and Testing
Unter den Eichen 87
12205 Berlin
rene.heideklang@bam.de, parisa.shokouhi@bam.de
Abstract—Using contemporary data fusion techniques, multi-modal
nondestructive testing (NDT) data sets can be combined
to obtain more reliable results. The reliability can
be quantified in terms of the probability of detection of sought
material defects. A concise review of the published studies on
NDT data fusion is provided here and the key concepts and
anticipated challenges are discussed. The detailed steps involved
in the NDT fusion process are explained with reference to a
case study. The presented data set includes the results of three
different NDT techniques on a test specimen with built-in defects.
Several pixel-level fusion algorithms were applied and their
performances are quantitatively compared.
I. INTRODUCTION
Nondestructive Testing (NDT) refers to the testing techniques used for evaluating the quality of materials while
maintaining their structural integrity. NDT is applied for
quality control (QC) / assurance (QA) and inspection of critical
components in different sectors of aerospace, railroad, transportation and energy industries. Some of the typical industrial
inspection tasks include locating cracks and voids in machine
parts, detecting various welding defects as well as evaluating
the quality of bond between different layers of composites.
A great number of NDT techniques have been developed
targeting one or more of these tasks. Some examples include
eddy current (ET), thermography (TT), ultrasonic (UT) and
radiography testing (RT). NDT offers a cost-effective and flexible alternative to destructive testing which can be implemented
during various stages of industrial production from design and
manufacturing (QC/QA) to operation (regular inspection). In
Structural Health Monitoring (SHM) applications, one or more
NDT techniques are used for continuous monitoring of in-service components or structures.
NDT plays a vital role in assuring safety and serviceability of a variety of key infrastructure and facilities. One of
the rising issues concerning the application of NDT is the
reliability of NDT assessments. Missing a critical defect or
underestimating its size may have catastrophic consequences.
The reliability of a particular testing method to detect a certain
type of defect is established in validation studies. This is usually expressed in the form of Receiver Operating Characteristic
(ROC) or Probability of Detection (POD) curves. The latter
depict the probability of detection (with a certain level of
confidence) as a function of fault size.
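As an illustration, the log-odds model commonly used in hit/miss POD analysis can be fitted by maximum likelihood. The flaw sizes, detection outcomes and model form below are hypothetical, not taken from any of the cited studies:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical hit/miss data: flaw sizes (mm) and whether each was detected.
sizes = np.array([0.1, 0.2, 0.3, 0.5, 0.8, 1.0, 1.5, 2.0, 2.5, 3.0])
hits  = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1  ])

def neg_log_likelihood(params):
    a, b = params
    # Log-odds POD model: logit(POD) is linear in log(flaw size).
    p = 1.0 / (1.0 + np.exp(-(a + b * np.log(sizes))))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(hits * np.log(p) + (1 - hits) * np.log(1 - p))

a, b = minimize(neg_log_likelihood, x0=[0.0, 1.0]).x

def pod(size):
    """Fitted probability of detection for a given flaw size (mm)."""
    return 1.0 / (1.0 + np.exp(-(a + b * np.log(size))))
```

The flaw size at which the fitted curve reaches a target POD (e.g. 90%) can then be read off, which is how such curves are typically summarized in validation studies.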
It is well-known that different NDT techniques exhibit different strengths and limitations. Given the variety of possible
defects, it is often necessary to employ more than one NDT
technique which could provide complementary or redundant
information. For example, ET of conductive materials is useful
for near-surface defect detection, while UT yields volumetric
information. Employing both methods and combining the two
data sets, a unified representation is generated which describes
different aspects of the object at once and thus offers simplified
interpretability.
A different kind of fusion is realized by exploiting information redundancy or concurrent information. That is, multiple
sources of information on the same aspect of the object
are fused to reduce uncertainty and thus achieve increased
detection robustness and accuracy. For example, this could
mean that taking a multi-method approach, small material
faults can be detected with a higher reliability than would
have been possible using each NDT modality alone. With the
growing need for reliable nondestructive assessment, multi-method NDT is being used more and more often. In parallel,
there is an ever increasing demand on methods to make use of
the full potential of the collected data and arrive at the most
reliable assessments. It is worth emphasizing that the available
and accessible computational power enables new opportunities
to approach these demands. In this context, data fusion is
being revived, enjoying unprecedented attention from the NDT
community.
This paper provides an introduction to an ongoing study on
the fusion of redundant information in multi-sensor NDT data
sets. After a review of recent relevant studies, the challenges
and benefits of multi-sensor material testing are highlighted in
Section III. Subsequently, the discussed aspects are elucidated
in the context of a real NDT data set in Section IV. This
paper concludes with a summary including the perspective on
the direction of the research work.
II. LITERATURE REVIEW
The first book dedicated to the applications of NDT data
fusion was edited by Gros [1]. It presents a broad selection
of studies demonstrating that multi-sensor NDT is capable
of combining complementary data, enhancing signal to noise
ratio (SNR) and performing more accurate defect detection.
To our knowledge, the most recent survey of NDT data fusion
was published by Liu et al. [2] which gives an overview of
the concepts and techniques applied in recent works.
Mendoza et al. [3] fused data from four nondestructive
sensors for the assessment of apples’ firmness and sugar concentration. The authors employ a linear regression technique
(partial least squares) to reveal the joint relationship between
sensor data and apple characteristics. It is concluded that
prediction was improved over single sensor analysis.
Khan et al. [4] treat the characterization of material flaw
profiles as an inverse problem. It is proposed to apply methods
from the intensively studied field of target tracking, where
the usual task is to track the hidden state of a system
across time. Here, the authors track the material’s state across
spatial regions of the specimen. Particle filtering is adopted to
overcome the limitations of the Kalman filter. It is concluded
that this method provides improved inversion accuracy when
data fusion is used.
An example of image fusion using a hierarchical decomposition technique is found in [5]. The obtained images
are transformed to Intensity/Hue/Saturation (IHS) space and
subjected to wavelet analysis. A simple fusion rule on the
obtained coefficients and subsequent inverse transformation
yield improved signal to noise ratio (SNR), higher robustness,
accuracy and reliability.
In another paper [6], radiography data are classified to
discriminate defects from false alarms. In fact, no sensor
fusion is performed, but rather multiple features are extracted,
regarded as sources of information, and fused. Specifically,
mass values are computed for each feature and then combined
using Dempster-Shafer theory and simpler combination
rules (mean / median mass). The performance reference is
given by an industrial system as well as a trained support
vector machine (SVM). The authors rank their results by a
custom evaluation measure, which was justified by the industrial requirements. According to this measure, all strategies
outperform the industrial system. SVM and feature fusion
by mean mass and median mass give the best results of
comparable qualities. The main contribution of this paper
is the algorithm developed for automatic initial mass value
computation based on the feature histograms.
Possibility theory and an adaptive fusion operator were
applied in [7] for NDT of concrete samples. The approach
employs linear regression to estimate the relationship between
sensor values and material characteristics. However, in this
study, the observed data were modeled as the target variables
instead of the independent variables. A possibility distribution
is then derived over the material parameters, from which inferences could be drawn. The authors report a good agreement
between predicted and expected results.
In [8], a hierarchy of fuzzy logic rules was employed to
fuse multi-modal sensor data. These rules are set up by expert
knowledge and apparently produce a high degree of accuracy.
However, no quantitative results were published.
A neural network (NN) was designed in [9] to approximate the functional relationship between the measured data
and welding-related parameters. Results indicate an accuracy
which is better than offline ultrasonic analysis and only slightly
worse than destructive testing. Corrosion detection was the
goal of [10]. The authors employed regression techniques
based on a generalized additive model (GAM). Eddy Current
testing (multi-frequency and pulsed) provide the data from
which the remaining material thickness is estimated. The
method was compared to traditional calibration (linear regression curve) and a trained wavelet neural network (WNN).
Ground truth was determined from X-ray images. A generally
improved performance was observed.
Another example of applied Dempster-Shafer theory is
found in [11]. The required initial mass values, which represent reliabilities, are computed using fuzzy membership functions and a-priori knowledge. Fusion of the infrared thermal
images is carried out through Dempster’s rule of combination
in a pixel-wise manner. It is reported that this approach
results in an improved reliability over conventional single-sensor analysis. The authors point out that this method is able
to combine “the knowledge acquired during the development
of each non-destructive inspection method taken separately
[...] with fairly reduced training”.
In [12], an ensemble of different classifiers was trained
to predict certain coarse material states (heat treatment/stress/cracking). Features were extracted from multi-frequency
eddy current and ultrasonic data separately and subjected
to k-nearest-neighbor classifier, decision tree, support vector
machine and multi-layer perceptron (MLP). The obtained
decisions were fused by majority voting for improved prediction performance. Subsequently, once the coarse material
class has been determined, additional MLPs were applied to
derive detail-level information. This decision fusion approach
resulted in “improved classification results at the coarse-level, with little or no improvement at the detail-level”.
The Ph.D. thesis of Liu [13] covers fuzzy techniques for
machinery fault diagnosis. These comprise the selection of
features based on fuzzy measures as well as feature-level
and decision-level fusion using fuzzy integrals. Vibration and
current signals were obtained in a rolling element bearing
experiment and an electrical motor experiment. The study
demonstrates improved fault diagnosis.
The literature synthesis above clearly demonstrates the
NDT researchers’ awareness of the benefits of data fusion in
improving the quality of multi-modal assessment. The following section provides a more detailed overview of the particular
characteristics of NDT data and the typically encountered
challenges dealing with multi-sensor NDT data sets.
III. APPLICATION OF DATA FUSION IN NDT
A. Data characteristics
There are manifold sources of diverse NDT data corresponding to various applications. However, when developing
an NDT fusion system, it is helpful to start with “ideal”
test data sets, as they are generally simplified and allow
quantitative performance evaluation. It is, however, essential
to generate or choose ideal data which would adequately
represent the characteristics of the “real” target data set. Such
data sets are typically obtained by taking measurements on
well-designed test specimens with known properties and built-in defects or are generated using computer simulations. The
data sets obtained on test specimens are usually preferred as
they capture many of the realistic aspects of the target data.
However, they cannot usually cover the whole range of variations in test parameters. Therefore, the developed algorithms
are susceptible to overfitting. Computer simulation results, on
the other hand, although often too simplified, offer the chance
to study the sensitivity of the fusion algorithms to various test
parameters. Also, it is to be noted that creating a satisfactory
computer model capturing the essential characteristics of an
NDT system, if possible, is a challenging task. It requires
a deep understanding of the underlying physical processes
involved in the test, the details of the sensors as well as the
properties and behavior of the test material. A combination
of experimentally and numerically simulated data sets often
provides the optimal trial data set.
The dimension of data sets depends on the number of
test modalities as well as the number of meaningful sensor
attributes per modality. Apart from the actual sensor values,
all measurements include spatial (2D or 3D) coordinate information in physical units, which implicitly link the individual
modalities. Except for monitoring applications, the amount
of the stored data is often manageable so that big databases
are not necessary. Leaving aside monitoring tasks, the data
usually lack a temporal dimension, because the object under
inspection is assumed to be in a static state. This means that
experiments can be carefully planned, corrected if necessary,
and repeated. The difficult tasks of tracking and temporal
alignment, often important in other data fusion domains, need
not be taken into account here. Also, well-known models of
the fusion process (e.g. [14], [15]) only apply in a truncated
form, since no process refinement or sensor management has
to be implemented.
B. Challenges

Perhaps the most obvious challenge, as in any other sensor
fusion application, is that the source data are represented
heterogeneously, meaning that information sources from different formats and domains have to be combined. In particular,
because the different sensor values reflect distinct physical
quantities, amplitude values have to be normalized. This may
not always be a matter of simple re-scaling. For instance,
differential sensors give zero amplitude at crack locations,
i.e., the minimum derivative is reached where a signal peak
attains its maximum or minimum value. As the measured
values represent different physical quantities, knowledge of
the underlying material science and physics principles is
helpful for data pre-processing.

In order to effectively extract and merge information, expertise
in each NDT modality is necessary. This is especially
challenging because most methods are based on unrelated
physical principles and most NDT experimentalists are specialists
in one or two methods. Moreover, even NDT experts
cannot assist in selecting regions of interest (ROI) a priori, as
the defects might be randomly scattered over or within the
specimen.

As briefly mentioned in the previous section, a further
challenge is the limitation of test data sets. In the studies
summarized in Section II, techniques from the field of statistics
and machine learning are often applied for data fusion. These
techniques usually require large sample sizes. Only a limited
number of test specimens are available, covering a certain
range of simulated defects. While the spatial resolution is
generally sufficient to obtain a large pool of pixel samples,
the natural variability of defect properties is hard to mimic.
In addition, many parameters (such as sensor properties, environment
and material) stay fixed in an experimental setting.
Therefore, the generality of the derived fusion method has to
be assessed to reduce, or at least quantify, the potential overfitting.
Augmenting such data sets with high-quality computer
simulation results can help expand the applicability domain of
the developed methods. The challenges and benefits of data
fusion in nondestructive testing are demonstrated in the
following section in reference to an exemplary data set
obtained on an idealized test specimen.

IV. CASE STUDY

The test data set presented includes the measurements
obtained using three different NDT techniques on a steel test
specimen. The test specimen (see Fig. 1) has ten built-in
notches (i.e., surface cracks), ranging in width from 0.08 to
0.25 mm and in depth from 0.01 to 2.24 mm. This fairly simple
object was chosen in order to study the basic characteristics
of data under ideal conditions. The preprocessing steps and
fusion procedure are discussed below.
Fig. 1. An ideal test specimen with ten built-in notches
A. Data acquisition and preprocessing
Three surface testing methods were used to inspect the test
specimen, namely, Eddy Current (ET), Giant Magnetoresistance (GMR) and active Thermography testing (TT). Each
of these three techniques operates utilizing distinct physical
principles. In ET, the impedance is measured; the GMR sensor
is sensitive to the magnetic stray field, and active TT registers
emitted thermal energy while the specimen is “heated” by a
laser. The data are represented in different formats, as shown
in Table I, which highlights the format incompatibilities that
must be resolved prior to fusion. Each modality had to be
pre-processed individually.

TABLE I
DATA DIMENSIONALITIES

Testing modality | Input dimensions | x/y coordinates | Output dimensions
GMR              | x, y             | scattered       | voltage (V)
Eddy Current     | x, y             | gridded         | impedance (Ω), complex-valued
Thermography     | x, y, time      | gridded         | intensity (digits)

The pre-processing steps
included noise suppression and de-trending. Physical coordinates are available for all data sets. In this example, the
thermography data are actually stored as a movie. Its temporal
dimension can be integrated in a pixel-level crack detection
algorithm, as detailed in [16]. The x and y coordinates for each
modality are measured relative to an arbitrary origin and thus
have to be aligned, or registered, into a common coordinate
system. For future experiments, it would be helpful to attach
some kind of easily recognized markers to the specimen for
simple computation of the transform parameters. Otherwise,
known salient geometrical features of the object can be taken
as natural markers. In the event that no such helpful reference
points exist, they may be found by a salient point detector.
An alternative is to employ an automated procedure in order
to find the transform which maximizes some kind of robust
similarity measure between the data sets [17]. While this
approach is always applicable and thus very generic, it is
also computationally demanding and an optimal solution might
not be found, owing to the nonlinearity of the optimization
problem.
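A minimal sketch of this similarity-maximization idea, assuming a pure translation and using normalized cross-correlation as the robust similarity measure (the rotation search and the actual NDT images are omitted; the arrays here are synthetic stand-ins):

```python
import numpy as np
from scipy.ndimage import shift

def ncc(a, b):
    """Normalized cross-correlation, a simple robust similarity measure."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

rng = np.random.default_rng(0)
reference = rng.random((64, 64))
# Simulate a misaligned second modality by translating the reference.
moving = shift(reference, (3, -2), order=1, mode="nearest")

# Exhaustive search over integer translations (rotation would be handled
# analogously); gradient-free, but a finer, nonlinear optimization over
# continuous parameters is prone to local optima, as noted in the text.
best = max(
    ((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
    key=lambda t: ncc(shift(moving, t, order=1, mode="nearest"), reference),
)
# best recovers the inverse of the simulated misalignment.
```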
Since the sensors’ different spatial resolutions can be quantified in physical units, their normalization is conveniently
handled by registration. This alignment process is comparably
simple in theory, because only rotation and translation effects
have to be corrected. However, for instance, in the case
of active thermography testing, cameras may be involved
and projective distortions must be normalized in advance.
This process is significantly more complicated for objects of
complex geometry.
In the next step, the derived coordinate transformation has
to be applied to each data set. This requires an adequate
interpolation technique. This is not a trivial task if coordinates
do not lie on a regular grid, but instead are arbitrarily scattered,
as is the case with the GMR data presented here, see e.g., [18].
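For scattered-to-grid resampling, `scipy.interpolate.griddata` performs Delaunay-based interpolation; the sketch below uses synthetic scattered samples in place of the actual GMR measurements:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
# Hypothetical GMR-like data: values sampled at scattered (x, y) positions.
pts = rng.random((500, 2))
vals = np.sin(4 * pts[:, 0]) + np.cos(4 * pts[:, 1])

# Resample onto the regular grid shared by the other modalities.
gx, gy = np.meshgrid(np.linspace(0.1, 0.9, 32), np.linspace(0.1, 0.9, 32))
gridded = griddata(pts, vals, (gx, gy), method="linear")
```

Note that `griddata` returns NaN for grid points outside the convex hull of the scattered samples, which must be handled before fusion.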
It should be noted that NDT data could be complex-valued,
e.g., the eddy current sensor data. The complex-valued data
in most cases should be treated as two independent sources
of information (real / imaginary part, or amplitude / phase),
as they contain complementary physical information. Only the
ET amplitudes are considered for the analysis presented in this
paper.
The results from all three modalities are shown in Fig. 2.
Note that all three methods are capable of finding most of
the notches. However, each technique has its own advantages
and limitations. ET data (top) show sensitivity to most defects,
but the resolution is poor. The processed (after model-fitting)
GMR data (middle) contain more clearly defined peaks, but
the less severe cracks are not visible. Thermography (bottom)
exhibits good spatial resolution and high sensitivity, but at the
cost of an increased false alarm rate. Hence, a combination
should yield improved results.
Fig. 2. Preprocessed and registered data sets of three surface inspection
NDT techniques: Eddy Current (top), GMR (middle), Active Thermography
(bottom)
B. Fusion algorithms
Fusion can be accomplished at different levels. Registered
data sets can be readily fused at pixel level after amplitude
normalization. Defect detection or quantification is then carried out on the basis of the fused image, which is expected
to reveal material faults with reduced uncertainty compared
to the results from individual images. This approach has the
benefit of being very generic, making no assumptions about the
nature of defect signals other than increased amplitude. This
results in high sensitivity, yet specificity may be low (many
false alarms). On the other hand, this procedure does not leave
any room for incorporating the expert knowledge in the fusion
process. This is generally easier when fusing at a higher level,
for instance at feature level.
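As discussed among the challenges above, amplitude normalization is not always a matter of simple re-scaling: a differential sensor reads zero right at the crack. One workaround, sketched below on a synthetic trace, is to rectify the signal via its analytic-signal envelope before mapping it to zero mean and unit variance (this particular envelope approach is an illustrative assumption, not the procedure used here):

```python
import numpy as np
from scipy.signal import hilbert

def normalize(signal):
    """Map a signal to zero mean, unit variance so that amplitudes
    from different physical quantities become comparable."""
    return (signal - signal.mean()) / signal.std()

# Hypothetical differential sensor trace: a crack produces a +/- lobe pair
# with a zero crossing at the crack position itself (x = 0).
x = np.linspace(-5, 5, 501)
differential = -x * np.exp(-x**2)

# Rectify first: the analytic-signal envelope is large near the crack, so
# the normalized trace shows high amplitude where the defect actually is.
envelope = np.abs(hilbert(differential))
normalized = normalize(envelope)
```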
An example of feature extraction using model fitting is
shown in Fig. 3. One single image row of the GMR data
around one of the notches is shown in this figure. The raw
sensor data (blue) show a regular pattern, which can be well modeled by a simple analytical model (red). By means of
model fitting, domain knowledge can be exploited to extract
highly informative features such as the peak’s amplitude,
position and width. On this more abstract level of data
representation, features from the different modalities can be
fused.

Fig. 3. An example of feature extraction by fitting an empirical model (red)
to the GMR data (blue)

For instance, a segmentation algorithm can be employed
to generate a preliminary set of hypotheses about faulty
material areas. There are mainly two options: either to perform
joint segmentation, which is indeed a fusion operation, or to
segment individually followed by region association. Each of
the resulting cross-modal regions is then described by a vector
of feature values, which represents the (simple) fusion step.
The final decision about flaw occurrence, and potentially about
its parameters, is subsequently made in the joint feature space.
However, as noted previously, segmentation-based techniques
require rich data sets for sound performance evaluation.
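The model fit shown in Fig. 3 can be sketched as follows; the Gaussian-derivative shape and all numerical values are illustrative assumptions standing in for the actual empirical model:

```python
import numpy as np
from scipy.optimize import curve_fit

def notch_model(x, amplitude, position, width, offset):
    """Hypothetical analytical model of a differential GMR response:
    a Gaussian-derivative lobe pair centred on the notch."""
    u = (x - position) / width
    return amplitude * u * np.exp(-u**2) + offset

# Simulated noisy sensor row with a notch at x = 12.0 mm.
x = np.linspace(0, 25, 400)
rng = np.random.default_rng(2)
y = notch_model(x, 1.5, 12.0, 0.8, 0.1) + 0.05 * rng.standard_normal(x.size)

# The fitted parameters are exactly the informative features named in the
# text: peak amplitude, position and width.
params, _ = curve_fit(
    notch_model, x, y, p0=[1.0, x[np.argmax(np.abs(y))], 1.0, 0.0]
)
amplitude, position, width, offset = params
```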
C. Fusion of NDT data
In principle, any fusion rule can be applied to NDT data.
However, there are certain aspects in this specific domain
which promote some methods over others. For instance, a
catalog of rules can be defined in an attempt to formalize
the decisions made by NDT experimentalists. These rules
can be then transformed into a fuzzy inference system or a
probabilistic graphical model (PGM) for Bayesian inference.
Fuzzy rules have the advantage of being represented intuitively
by linguistic variables, which makes the technique ideal to
express human knowledge. In contrast, PGMs are able to represent arbitrary statistical distributions which can be queried.
Both types of inference systems can also be learned from data.
If indeed the variability of expected material defects is
limited, e.g., in a very specific field of application, representative data sets can be collected for fusion in feature
space by machine learning. Although the danger of overfitting
must always be emphasized, learning directly from data may
capture previously unknown relationships and/or result in
more accurate rules. One disadvantage, however, is that some
methods, e.g., trained discriminative classifiers, can be hard to
interpret in contrast to predefined expert rules.
For some easily interpretable NDT signals whose amplitudes directly relate to the occurrence of inhomogeneities (e.g.
EC, TT), simple fault detectors operating at the pixel-level can
be designed. Fusion then takes place at the decision level, for
example by a voting scheme, or Dempster-Shafer theory, if
the sources’ uncertainties can be quantified.
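Decision-level fusion via Dempster's rule can be sketched per pixel as follows; the mass assignments for the two hypothetical detectors are made up for illustration:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the
    frame {D (defect), S (sound)}, with "DS" denoting ignorance."""
    masses = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = set(a) & set(b)
            if not inter:
                conflict += ma * mb          # contradictory evidence
            else:
                key = "".join(sorted(inter))
                masses[key] = masses.get(key, 0.0) + ma * mb
    # Normalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in masses.items()}

# Hypothetical per-pixel evidence from two detectors: belief in defect
# ("D"), sound material ("S"), and remaining ignorance ("DS").
eddy_current = {"D": 0.6, "S": 0.1, "DS": 0.3}
thermography = {"D": 0.5, "S": 0.2, "DS": 0.3}
fused = dempster_combine(eddy_current, thermography)
```

When both sources lean towards a defect, the combined belief in "D" exceeds either individual belief, which is the uncertainty reduction the text describes.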
The results of a low-level fusion approach are presented in
this paper. A number of image combination techniques are applied and quantitatively compared. These are, namely, wavelet
coefficient fusion, singular-value decomposition (SVD) fusion
[19], empirical mode decomposition (EMD)¹ fusion and the
simple method of averaging the images per pixel. See Fig. 4
for an illustrative comparison of the fusion results. Wavelet-based fusion is carried out by first performing a 4-level discrete
wavelet transform (wavelet db4) per image, next combining
the coefficients of each modality, e.g., by taking maximum
or mean values. The fused image is obtained through the
reconstruction of the combined coefficients. In contrast to the
wavelet transforms, SVD and EMD are entirely data-driven,
i.e., no basis function set is assumed for the decomposition.
SVD, according to the cited reference, is able to de-correlate
neighboring pixel values per modality. In the obtained linearly
transformed space, the data sets are fused and inversely
transformed afterwards. EMD decomposes the signal into
a sequence of increasingly smooth intrinsic mode functions
(IMFs), which, when added up, make up the original data set.
The IMFs combined across the different modalities give the fused
image.
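The wavelet fusion rule can be sketched as follows. A 4-level db4 transform, as used above, requires a wavelet toolbox; to keep this sketch dependency-free, a single-level Haar transform is implemented directly in NumPy, with mean fusion of the approximation band and maximum-magnitude selection in the detail bands (both are common rules, and the exact rule used above may differ):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar analysis of an even-sized image."""
    a = (img[0::2] + img[1::2]) / 2.0        # row averages
    d = (img[0::2] - img[1::2]) / 2.0        # row details
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * a.shape[0], a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def wavelet_fuse(images):
    """Average the approximation coefficients; in each detail band keep
    the coefficient of maximum magnitude across modalities."""
    coeffs = [haar2d(im) for im in images]
    ll = np.mean([c[0] for c in coeffs], axis=0)
    fused_details = []
    for band in range(1, 4):
        stack = np.stack([c[band] for c in coeffs])
        idx = np.abs(stack).argmax(axis=0)
        fused_details.append(np.take_along_axis(stack, idx[None], axis=0)[0])
    return ihaar2d(ll, *fused_details)
```

Per-pixel averaging, the simplest rule compared here, is just `np.mean(images, axis=0)` on the registered, amplitude-normalized images.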
D. Evaluation
For a quantitative evaluation, the surface cracks are manually marked in a reference image and receiver operating
characteristic (ROC) analysis is carried out by thresholding
the fused pixel intensities. The area under the curve (AUC)
measure reflects the trade-off between sensitivity and specificity in this defect detection task. Ideally, the NDT sensor
value ranges for the two pixel classes background noise and
material fault are non-overlapping and thus the AUC would
attain its maximum value of 1. More realistically, such a
clear distinction is not possible especially for small defects.
Yet, different fusion schemes can be objectively ranked with
respect to their benefit for class separation according to their
AUC values.
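The ROC analysis can be sketched as follows, computing AUC from the rank (Mann-Whitney) statistic; the pixel scores and ground-truth mask below are synthetic stand-ins for the fused images and the manually marked reference:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC for thresholding `scores` against a binary ground truth,
    via the rank-sum identity (equivalent to integrating the ROC curve)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels.sum()
    neg = len(labels) - pos
    return (ranks[labels == 1].sum() - pos * (pos + 1) / 2) / (pos * neg)

rng = np.random.default_rng(3)
labels = (rng.random(2000) < 0.1).astype(int)      # ~10% defect pixels
scores = rng.standard_normal(2000) + 2.0 * labels  # defects score higher
auc = roc_auc(scores, labels)
```

With non-overlapping class score ranges the AUC attains 1; the partially overlapping synthetic classes above yield an intermediate value, mirroring the situation described for small defects.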
A second criterion is given by evaluating the best classifier
per fusion method, corresponding to the point on the ROC
curve which represents the best expected trade-off between
sensitivity and specificity. This point is indicated by its optimal
slope, taking into account class imbalance and potential classification costs². The true positive rate (TPR) of this optimal
classifier for each fusion method is chosen here for direct
evaluation of crack detectability.
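The optimal-slope criterion can be made concrete: with false-positive cost c_FP, false-negative cost c_FN and class counts N (background) and P (defect), the optimal ROC point maximizes TPR − m·FPR with iso-cost slope m = (c_FP/c_FN)·(N/P). A sketch on synthetic data (the cost values are illustrative assumptions):

```python
import numpy as np

def optimal_operating_point(scores, labels, cost_fp=1.0, cost_fn=1.0):
    """Select the threshold at the ROC point with optimal iso-cost slope
    m = (cost_fp / cost_fn) * (negatives / positives)."""
    pos = labels.sum()
    neg = len(labels) - pos
    m = (cost_fp / cost_fn) * (neg / pos)
    best_t, best_val, best_tpr = None, -np.inf, 0.0
    for t in np.unique(scores):
        pred = scores >= t
        tpr = (pred & (labels == 1)).sum() / pos
        fpr = (pred & (labels == 0)).sum() / neg
        if tpr - m * fpr > best_val:
            best_val, best_t, best_tpr = tpr - m * fpr, t, tpr
    return best_t, best_tpr

rng = np.random.default_rng(4)
labels = (rng.random(1000) < 0.2).astype(int)
scores = rng.standard_normal(1000) + 2.5 * labels
threshold, tpr = optimal_operating_point(scores, labels)
```

Raising `cost_fn` relative to `cost_fp` flattens the slope and moves the chosen point towards higher sensitivity, which is how missed-defect consequences would enter the evaluation.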
In addition to the fused images, the individual modalities
are also included in this ROC analysis to assess the value of
data fusion.
The resulting ROC curves are shown in Fig. 5. In Fig. 6, a
quantitative comparison is presented.
E. Discussion
The quantitative evaluation shows that, for the given data
set, a data fusion approach using the wavelet transform is
best-suited to improve material fault detection in a per-pixel
manner.

Fig. 4. Pixel-level fusion results, from top to bottom: wavelet decomposition
(db4, 4 levels), wavelet decomposition of eddy current and GMR data only,
SVD, EMD, mean pixel value. Note the different color scales.

Fig. 5. ROC evaluation. The curve for wavelet fusion excluding thermography
is obscured by the curve of “ECD individually”.

¹ http://www.mathworks.com/matlabcentral/fileexchange/28761-bi-dimensional-emperical-mode-decomposition-bemd
² http://www.mathworks.de/de/help/stats/perfcurve.html

In particular, this fusion rule facilitates the detection
of small cracks, as is reflected in the highest TPR score.
While background noise from the thermography data set is
retained, as can be seen in the fused image, cracks appear
prominently and thus AUC performance is still among the
top three. If wavelet fusion is carried out excluding the
thermography image, performance is virtually identical to
classification solely based on the EC sensor. Apparently, the
GMR method does not add much value to the other two
modalities, which is also reflected in the comparably low performance measures regarding the uni-modal regime. This can
be explained by the lower sensitivity of the GMR technique.
In contrast, thermography by itself exhibits high sensitivity.
But owing to the dominant background noise, classification
results lag behind all evaluated fusion strategies. The best
tested individual sensor is eddy current. Although this will
not be the method of choice for accurate defect localization,
its broad near-surface crack responses enable robust detection
performance indicated by both AUC and TPR.
Concerning other fusion techniques, EMD facilitates an
efficient classifier according to the TPR measure. However,
for other thresholds, the balance between true positive rate
and true negative rate is not achieved well in comparison to
other fusion methods.

Fig. 6. Fusion methods ranked by AUC (left) and TPR (right).
The multi-scale SVD technique, in contrast, produces satisfactory AUC results, which means that classes can be separated
comparably well. Yet, in this case, AUC and TPR are conflicting because the chosen optimal threshold is unable to detect
enough pixels of the defect class.
Surprisingly enough, the very basic fusion rule of averaging corresponding pixel values across the three modalities
appears to be quite effective in reducing noise and spurious
values while retaining even smaller crack responses. While
the specific threshold is characterized by medium TPR, the
integrated performance across all thresholds outperforms all
other methods by a small margin. However, there is certainly
room for improvement concerning parameter fine-tuning, as
well as finding more appropriate fusion rules in order to
outperform such simple strategies.
V. CONCLUSION
As multi-sensor applications are becoming increasingly
attractive, nondestructive material testing should be able to
benefit from the potentially improved reliability and accuracy.
The literature synthesis revealed that efforts in this direction
have been made, although mostly in academic settings. In
particular, the extraction of defect parameters using multiple
sources of information is a promising approach to be further
investigated. In practice, while multiple NDT methods are
actually employed, fusion is still largely performed on the
“expert level”, i.e., by human interpretation. Thus, the full
potential of the available information is usually not realized.
Compared to multi-modal fusion in other areas, typical NDT
applications avoid some common challenges, because they
operate in a static context and because expert knowledge can
be incorporated to some degree.
A multi-modal data set containing the results of NDT on a
test specimen using three techniques was presented here. Different steps involved in the NDT fusion process were discussed
with reference to this exemplary data set. Several pixel-level
fusion algorithms were applied and the resulting fused images
were compared quantitatively. It was shown that by fusion of
spatially overlapping information, the performance of pixel-wise crack detection can be improved relative to uni-modal
analysis.
ACKNOWLEDGMENT
The authors would like to thank their colleagues – M.
Pelkner, R. Casperson, A. Eckey, C. Maierhofer, M. Ziegler,
P. Myrach – for kindly sharing their data sets.
REFERENCES
[1] X. E. H. Gros, Applications of NDT Data Fusion. Boston: Kluwer
Academic Publishers, 2001.
[2] Z. Liu, D. Forsyth, J. Komorowski, K. Hanasaki, and T. Kirubarajan,
“Survey: State of the art in NDE data fusion techniques,” IEEE Transactions on Instrumentation and Measurement, vol. 56, no. 6, pp. 2435–2451,
Dec. 2007.
[3] F. Mendoza, R. Lu, and H. Cen, “Comparison and fusion of
four nondestructive sensors for predicting apple fruit firmness
and soluble solids content,” Postharvest Biology and Technology,
vol. 73, no. 0, pp. 89–98, Nov. 2012. [Online]. Available: http:
//www.sciencedirect.com/science/article/pii/S0925521412001263
[4] T. Khan, P. Ramuhalli, and S. C. Dass, “Particle-filter-based multisensor
fusion for solving low-frequency electromagnetic NDE inverse
problems,” IEEE Transactions on Instrumentation and Measurement,
vol. 60, no. 6, pp. 2142–2153, Jun. 2011. [Online]. Available: http:
//ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5734846
[5] I. Elshafiey, A. Algarni, and M. A. Alkanhal, “Image fusion based
enhancement of nondestructive evaluation systems,” Ch11 in Image
Fusion, 2011. [Online]. Available: http://cdn.intechweb.org/pdfs/12991.
pdf
[6] A. Osman, V. Kaftandjian, and U. Hassler, “Application of data
fusion theory and support vector machine to x-ray castings inspection,”
in 10th European Conference on Non-Destructive Testing, Moscow,
Fraunhofer (June 7–11 2010), 2010. [Online]. Available: http:
//ndt.net/article/ecndt2010/reports/4 05 21.pdf
[7] M. A. Ploix, V. Garnier, D. Breysse, and J. Moysan, “Possibilistic
NDT data fusion for evaluating concrete structures,” in Proceedings
of the 7th International Symposium on Non-destructive Testing in
Civil Engineering, Nantes, France, June 30th-July 3rd, 2009. [Online].
Available: http://ndt.net/article/ndtce2009/papers/54.pdf
[8] T. dos Santos, B. Silva, P. d. S. Vilaça, L. Quintino, and
J. M. Sousa, “Data fusion in non destructive testing using fuzzy
logic to evaluate friction stir welding,” Welding International,
vol. 22, no. 12, pp. 826–833, Dec. 2008. [Online]. Available:
http://www.tandfonline.com/doi/abs/10.1080/09507110802591327
[9] J. Cullen, N. Athi, M. Al-Jader, P. Johnson, A. Al-Shamma’a, A. Shaw,
and A. El-Rasheed, “Multisensor fusion for on line monitoring of
the quality of spot welding in automotive industry,” Measurement,
vol. 41, no. 4, pp. 412–423, May 2008. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0263224107000103
[10] Z. Liu, P. Ramuhalli, S. Safizadeh, and D. S. Forsyth, “Combining
multiple nondestructive inspection images with a generalized additive
model,” Measurement Science and Technology, vol. 19, no. 8, p. 085701,
Aug. 2008. [Online]. Available: http://iopscience.iop.org/0957-0233/19/
8/085701
[11] J. Moysan, A. Durocher, C. Gueudré, and G. Corneloup, “Improvement
of the non-destructive evaluation of plasma facing components by data
combination of infrared thermal images,” NDT & E International,
vol. 40, no. 6, pp. 478–485, Sep. 2007. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0963869507000400
[12] J. Dion, M. Kumar, and P. Ramuhalli, “Multi-Sensor data fusion
for High-Resolution material characterization,” AIP Conference
Proceedings, vol. 894, no. 1, pp. 1189–1196, Mar. 2007. [Online].
Available: http://proceedings.aip.org/resource/2/apcpcs/894/1/1189 1
[13] X. Liu, “Machinery fault diagnostics based on fuzzy measure and
fuzzy integral data fusion techniques,” Ph.D. dissertation, Queensland
University of Technology, May 2007.
[14] A. N. Steinberg, C. L. Bowman, and F. E. White, “Revisions to
the JDL data fusion model,” Proceedings of AeroSense conference,
vol. SPIE Vol. 3719, pp. 430–441, Mar. 1999. [Online]. Available:
http://dx.doi.org/10.1117/12.341367
[15] M. Bedworth and J. O’Brien, “The omnibus model: a new
model of data fusion?” IEEE Aerospace and Electronic Systems
Magazine, vol. 15, no. 4, p. 30–36, 2000. [Online]. Available:
http://isif.org/fusion/proceedings/fusion99CD/C-075.pdf
[16] J. Schlichting, M. Ziegler, C. Maierhofer, and M. Kreutzbruck, “Flying
laser spot thermography for the fast detection of surface breaking
cracks,” Proceedings 18th World Conference on Non-Destructive
Testing, Apr. 2012. [Online]. Available: http://www.ndt.net/article/
wcndt2012/papers/499 wcndtfinal00499.pdf
[17] B. Zitová and J. Flusser, “Image registration methods: a survey,”
Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, Oct.
2003. [Online]. Available: http://linkinghub.elsevier.com/retrieve/pii/
S0262885603001379
[18] J. P. Lewis, F. Pighin, and K. Anjyo, “Scattered data interpolation and
approximation for computer graphics,” in ACM SIGGRAPH ASIA 2010
Courses, ser. SA ’10. New York, NY, USA: ACM, 2010, p. 2:1–2:73.
[Online]. Available: http://doi.acm.org/10.1145/1900520.1900522
[19] V. P. S. Naidu, “Image fusion technique using multi-resolution singular
value decomposition,” Defence Science Journal, vol. 61, no. 5, Sep.
2011. [Online]. Available: http://nal-ir.nal.res.in/11320/