International Journal of Advances in Engineering Sciences Vol.3, Issue 1, January, 2013
A Review on Application of Artificial Neural Networks for
Injection Moulding and Casting processes
Manjunath Patel G C
Research Scholar, Department of Mechanical
Engineering, NITK, Surathkal, INDIA
Email: manju09mpm05@gmail.com;
Abstract: Artificial neural networks (ANNs) have great potential
for prediction, optimization, control, monitoring, identification,
classification and modelling, particularly in the field of
manufacturing. This paper presents a selective review of the use
of ANNs in casting and injection moulding processes. We discuss
a number of key issues that must be addressed when applying
neural networks to practical problems, and outline the steps
followed in developing such models. These include data
collection; division and pre-processing of the collected data;
selection of appropriate model inputs and outputs; network
architectures, network parameters, training algorithms, learning
schemes and training modes; topology definition; training
termination; and the choice of performance criteria for training
and model validation. The options available to network
developers are discussed and recommended practices are
highlighted.
Keywords: Metal casting processes, Injection moulding
processes, Casting Materials/Alloys, Artificial Neural Networks
I. INTRODUCTION
A neural network is one of the simplified models of our
biological nervous system, which consists of a massively
parallel, distributed arrangement of a large number of
interconnected processing elements known as neurons [22]. For
example [24], the most basic element of the human brain is the
neuron. The beauty of the human brain is its ability to think,
remember and apply past experiences to future actions. The
human brain comprises about 100 billion neurons; each neuron
connects to up to 200,000 other neurons, with approximately
1,000-10,000 interconnections typically present. The neurons are
arranged in layers, and the connection pattern formed within and
between the respective layers through weights is referred to as
the network architecture. There are mainly five basic neuron
connection patterns [24]:
1. Single-layer feed-forward network
2. Multi-layer feed-forward network
3. Single node with its own feedback
4. Single-layer recurrent network
5. Multi-layer recurrent network
From these five connection patterns, more than fifty different
network architectures have so far been developed under
supervised and unsupervised learning.
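As a minimal sketch of the first of these connection patterns, a single-layer feed-forward network computes each output neuron as a weighted sum of the inputs passed through an activation. The weights, bias and input values below are arbitrary illustrative numbers, not taken from any reviewed work:

```python
import numpy as np

def single_layer_feedforward(x, W, b):
    """One layer of weighted sums passed through a hard-limit activation."""
    return np.where(W @ x + b > 0, 1.0, 0.0)

# Illustrative values only: 3 inputs feeding 2 output neurons.
x = np.array([1.0, 0.5, -0.5])
W = np.array([[0.2, 0.4, -0.1],
              [-0.3, 0.1, 0.5]])
b = np.array([0.05, -0.02])

print(single_layer_feedforward(x, W, b))  # prints [1. 0.]
```

A multi-layer feed-forward network simply chains such layers, feeding the outputs of one layer as inputs to the next.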
Prasad Krishna
Professor, Department of Mechanical Engineering,
NITK, Surathkal, INDIA
Email: krishnprasad@gmail.com
Print-ISSN: 2231-2013 e-ISSN: 2231-0347
© RG Education Society (INDIA)
A. Introduction to Metal castings
Casting is one of the most widely used near-net-shape
manufacturing processes, with production dating back to
4000-3000 B.C. It finds major applications in both ferrous and
non-ferrous casting materials, such as engine blocks, valves,
pumps, machine tools, power transmission equipment, aerospace
components and home appliances. Because of these wide
applications, functional advantages, economic benefits and
near-net-shape manufacturing capability, more than
90% of manufactured goods are produced through casting
processes [91]. Moulding; melting followed by pouring; fettling;
inspection; and elimination/dispatch are the major steps in the
casting process. Control at each stage of the casting process
plays a vital role; otherwise it may lead to casting defects such
as rat-tails, scabs, cracks, blow holes, oxide films, misruns,
shrinkage, porosity, cold shuts, scars, blisters, hot tears and
segregation. In the past few decades, neural networks have been
used to tackle production-related problems and to control the
process by reducing casting defects, lead times, scrap rates and
production costs, avoiding trial-and-error methods and
expert-dependent advice in optimizing the process.
B. Introduction to cast alloys and composite materials
Cast aluminium alloys are widely used in various
engineering applications due to their light weight, good
castability, good thermal and electrical conductivity, low
density, low melting point, low coefficient of thermal expansion
and low casting shrinkage [86]. The mechanical properties of
cast alloys mainly depend on the weight fractions of the alloying
elements, the applied heat treatments, the microstructures and
the morphologies of the various phases [79]. The increased use
of composite materials in recent years is due to ever-increasing
fuel prices and the need for weight reduction in the aerospace
and automotive sectors. Stir casting, diffusion bonding, squeeze
casting, the ultrasonic method, powder metallurgy,
compo-casting and thixo-forging are the major techniques used
to fabricate composites.
C. Introduction to injection moulding process
The complete injection moulding (IM) cycle for processing
plastics follows three stages, namely filling, post-filling and
mould-opening. Plastic materials have properties such as light
weight, corrosion resistance, transparency, the ability to form
complex shapes, ease of integration with other materials and
functions, and excellent thermal and electrical insulation.
Quality-related problems observed in the reviewed publications
on injection moulding include sink marks, flash, jetting, flow
marks, weld lines, volumetric shrinkage, warpage, dimensional
shrinkage variations and others. In recent years there has been
rapid growth in the use of ANNs to address and tackle the
problems that might occur in casting as well as IM processes,
whether to enhance mechanical, microstructural or surface
characteristics; table 1(a) shows the distribution by year of
publication. From the reviewed publications, neural networks
have proved to be an excellent tool from the preparation of the
mould to the final inspection stage (refer to table 1(b)). Of the
145 network architectures (NA) reviewed, 48 were used for
optimum process parameter prediction, 13 NA were applied
during moulding, and 4 NA were used to study the flow
behaviour or metal fill during the process cycle [28], [51] & [1].
7 NA [15], [36] & [87] were used during the solidification stage
to enhance properties by reducing defects, 2 NA [73] & [58]
were used to predict microstructural characteristics, 1 NA [31]
was applied during the design stage for minimum porosity, 12
NA [8], [12], [13] & [53] were used during the inspection stage
to predict the presence or absence of defects, 3 NA [66] were
utilized in the heat treatment stage to enhance mechanical
properties, and 15 NA [25] were used for process planning. The
other 40 NA were utilized for various purposes: 4 NA [6] & [52]
for cost estimation before manufacturing the component, 33 NA
for mechanical property prediction at different stages of the
process, and the remaining 3 NA for data extraction [4], the state
of the molten metal level [46] and metal temperature [21].
Neural networks have good learning and generalization
capability; they learn from examples during training and
generate outputs for new inputs that were not used during the
training period. This feature allows ANNs to be used as an
alternative tool to simulate complex and ill-defined problems.
As shown in table 1(c), 1 NA was applied for identification of
the heat transfer coefficient at the metal-mould interface [36], 9
NA were used to control the process [14], [37], [48-50] & [61],
9 NA [8], [12-13], [61] & [86] were utilized for classification of
defects, 95 NA were utilized for prediction, 11 NA [9], [11],
[18], [23], [26], [35], [59] & [66] were adopted to optimize the
process, 2 NA [28] & [51] were used to monitor the fluid flow
and metal fill towards the cavity, and the remaining 18 NA [2],
[10], [38], [47], [55], [81] & [82] were used to map the
relationship between inputs and outputs via modelling. ANNs
have been applied by various researchers to achieve various
objectives for different cast alloys, injection moulding and
casting processes: 31 NA were utilized for injection moulding
(IM) processes to enhance product quality, 23 NA [1], [3], [21],
[32], [57], [72-73], [78-79] & [87] were used to improve the
mechanical, microstructural and surface characteristics of cast
materials/alloys (CM/A), 20 NA [8-9], [12-13], [20], [30], [38],
[40], [64-65] & [71] were utilized for pressure die casting (PDC)
processes, namely low- and high-pressure die casting, 15 NA
[25] were adopted for the investment casting (IC) process, 15
NA [48-49], [53-54], [58], [66], [68-69] & [76] were used for
the continuous casting process, 14 NA [16], [17], [27], [33],
[44], [47] & [60] were presented for the stir casting (STIRC)
process, 6 NA [41], [55], [74] & [83] were used for the sand
casting (SC) process, 4 NA [81] & [89] were used for the
centrifugal casting (CFC) process, 3 NA [61], [85] & [86] made
use of the Cosworth process (CP), 2 NA each were utilized for
graphite sand mould casting (GSMC) [42], semi-solid metal
casting (SSMC) [70], lost foam casting (LFC) [28] & [51],
squeeze casting (SQC) [2] & [81] and phosphate graphite mould
casting (PGMC) [43], and the remaining 4 NA were adopted for
one process each: green sand casting (GSC) [63], permanent
mould casting (PMC) [77], strip casting (STRIPC) [46] and
gravity die casting (GDC) [36].
II. CLASSIFICATION OF REVIEWED WORKS
The search for papers was carried out using available
scientific and technological resource databases, namely journals,
conference proceedings, symposia and others. A total of 145 NA
from 83 publications were chosen for review if they met the
following requirements:
they focused on casting processes, cast alloys or injection
moulding processes, and
neural network tools were used as a modelling technique.
From the reviewed publications, the following key issues in
applying ANNs to practical problems were the focus:
1. Generation of training samples for the ANN
2. Selection of the neural network type
3. Selection of inputs and outputs for the network
4. Design of a suitable network architecture
5. Selection of suitable learning algorithms for training
6. Selection of hidden layers, hidden neurons, activation
functions, bias and other network parameters
7. Evaluation of training performance, training termination
and model validation
8. Applications of ANN models in casting and injection
moulding processes
III. PROBLEM DEFINITION
Statistical design of experiments has proved to be a
cost-effective tool for mapping the complex relationships
between numbers of independent and dependent variables and
for optimizing throughput in various manufacturing processes
with a minimum of experiments. Full factorial design (FFD),
central composite design (CCD), Box-Behnken design (BBD)
and Taguchi parametric design are some of the tools available
for conducting experiments and optimizing the process.
Statistical tools help the designer identify the best combination
of parameter levels to yield quality results for discrete values [4]
& [18]. However, for continuous variables, conducting
experiments at predetermined levels within a narrow range is
difficult in fields such as foundry work, welding and metal
forming [55], and may not help engineers optimize the process.
Moreover, modification of statistical models by incorporating
new random data is not feasible. ANNs address these limitations
effectively and can refine the model by incorporating new
random data at any stage during training. In the reviewed
literature, statistical methods were used to reduce the number of
experiments or simulation runs and to fulfil the need for enough
training data for ANNs. In most of the case studies it was
observed that ANNs yielded better predictions than other
statistical models.
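A full factorial design of the kind mentioned above simply enumerates every combination of factor levels; the sketch below (the factor names and levels are hypothetical, chosen only for illustration) generates such a design mechanically:

```python
from itertools import product

# Hypothetical casting factors and levels, for illustration only.
factors = {
    "pouring_temperature": [700, 750],   # degrees C
    "mould_hardness": [80, 90],          # arbitrary units
    "moisture_content": [3.5, 4.5],      # percent
}

# Full factorial design: one run per combination of levels.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(len(runs))  # 2^3 = 8 experimental runs
for run in runs:
    print(run)
```

Fractional designs such as CCD, BBD or Taguchi arrays reduce this count by selecting a subset of these combinations.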
A. Data collection
It is observed from the reviewed publications that the
prediction accuracy of ANNs depends upon the quality and
quantity of the data used for training. The various means of data
collection are summarised in table 1(o) and discussed below.
TABLE 1. REVIEW OF LITERATURES IN CASTING AND INJECTION MOLDING APPLICATIONS VIA ARTIFICIAL NEURAL NETWORKS
[Table flattened during extraction. It counts the 145 reviewed network architectures by:
(a) year of publication, 1997-2012;
(b) process application: process parameter-48, process planning-15, moulding-13, inspection-12, solidification-7, heat treatment-3, design-1, others-46;
(c) ANN application: prediction-95, modelling-18, optimization-11, control-9, classification-9, monitor-2, identification-1;
(d) model validation criterion: graph-35, PE-25, mean PE-21, MAPE-19, CRC-10, max. PE-7, RMSE-7, MAE-5, SDQ-5, SDE-2, PQI-2, RE-1, none-22;
(e) network training criterion: MSE-44, RMSE-26, MPE-20, SSE-15, CRC-13, MAE-9, SDQ-3, MAPE-2, NRMSE-1, NMSE-1;
(f) training termination: EL/TE-37, NTEP-18, EL-10, TQ-6, EL/TT-5, TE/TT-2, NR-9;
(g) hidden layers: one-82, two-29, etc.;
(h) input neurons;
(i) output neurons: one-94, two-12, etc.;
(j) topology definition: T&E-44, OMO-29, OBT-25, PPN-23, NEB-20, OAT-4;
(k) training mode: off-line-128, on-line-17;
(l) learning scheme: supervised-135, unsupervised-4, NR-6;
(m) validation set: used-21, not used or not reported for the remainder;
(n) simulation software: Procast-22, FEA/FEM-9, Meltflow-6, Moldflow-4, C-Mold-3, V-shrink-1, Virtual casting-1, others;
(o) data collection: ENRDOE-65, ENRDOE/SS-38, DOE-T-12, DOE-T/SS-11, DOE-6, TB/HB/HD-6, REPL-4, DOE-SS-1;
(p) process and cast alloys: IM-31, CM/A-23, PDC-20, CC-15, IC-15, STIRC-14, SC-6, CFC-4, CP-3, GSMC-2, SSMC-2, SQC-2, LFC-2, others;
(q) ANN simulation software: Mat.NNTB-40, Matlab-21, SNNS-14, VC/C++/JPL-6, DPL-3, NNPII/Plus-3, Qnet-2, NPV4.5-2, others, NR-50;
(r) network architecture: MLP-120, RBF-11, SLP-4, LVQ-2, SOM-2, PNN-1, NR-4;
(s) MLP training algorithms: BP-58, LM-40, SCG-5, CG-2, MAL-2, others.]
From the literature [65], simulation software (SS) has the
following advantages: SS replaces the real experiments that
would otherwise need to be conducted; it helps in decision
making where human experts are not available and where the
process calibration procedure takes longer than the
manufacturing lead time; where a large number of process
variables must be examined with a minimum number of process
range values; and where die preparation cost and time are high.
SS also has limitations: simulation models are not suitable for
the large number of repetitive analyses required for process
optimization because of the high computational cost [11];
simulation programs require human experts to interpret the
output data [20]; and a large number of simulation trials, coupled
with lengthy execution or computation time per run, might make
the investigation impractical [65]. The analysis and
interpretation of simulation results remain empirical, and the
computation time cannot meet the requirements of online control
[71]. So, to eliminate trial-and-error methods, reduce
computational cost and repetitive analysis, remove the need for
experts and implement online control, advanced methods are in
high demand to model and optimize the process. Artificial
intelligence (AI) based ANN models address these limitations
and can predict a large number of outputs within a few seconds
once training has been completed.
C. Training data quantity
The collection of an adequate quantity of training data is
essential to enhance neural network performance. During the
training process, the network architecture and its parameters,
such as connection weights, learning rate, activation functions,
constants, bias, etc., need to be optimized for the problem under
experimentation. Once a neural network is trained (its weights
are fixed), it becomes possible to generate satisfactory results
when it is presented with new input data it has never experienced
before. The total number of weights (TW) can be calculated for
a given network [34] as TW = (number of input neurons (IN) *
number of hidden neurons (HN)) + (number of hidden neurons
(HN) * number of output neurons (ON)). Adding hidden neurons
increases the network connections and weights, but this number
cannot be increased without limit, because one may reach the
situation where the number of connections to be fitted is larger
than the number of data patterns available for training. Though
the neural network can still be trained, it is not possible to
mathematically determine more fitting parameters than the
available training data patterns [80]. Table 2 lists cases in which
researchers used extremely limited data patterns, where the
network connections and weights to be fitted outnumbered the
data patterns used for training. Studies comparing the
effectiveness of neural network training techniques and learning
algorithms must be done with large numbers of data patterns,
not small ones, to improve generalization and avoid over-fitting
[80]. Approximately 1000 sets of training data yielded better
results for various researchers [6], [20], [23], [29], [30] and [42].
It was observed from the literature that the collected database is
typically divided into three parts, namely the training, validation
and test sets. The main purpose of the training set is to decide
the weights and biases of the network. The various means of
data collection are shown in table 1(o): 6 NA [3], [50], [57] &
[75] used data collected from design of experiments (DOE),
namely response surface methodologies (RSM) such as CCD
and BBD; 12 NA [4], [18], [43], [45], [56], [59], [63], [70] &
[84] used DOE based on Taguchi parametric design (DOE-T);
65 NA utilized trial-and-error methods of experiment conduction
or expert advice/experience from industry, which refers to
experiments not resembling DOE (ENRDOE); and 11 NA [11],
[26], [35], [40], [65], [67], [73] & [82] adopted a combination of
DOE-T and simulation software (SS). SS was used to avoid the
need for real experiments, which add material, processing,
labour and inspection costs, while the purpose of DOE-T was to
reduce the number of simulation trials, since the high
computational time of each trial can make simulation costly. 1
NA [19] used an integration of DOE and SS (DOE-SS), 38 NA
utilized a combination of ENRDOE and SS (ENRDOE/SS), and
4 NA [23] & [42] used regression equations from designs of
experiments in published literature (REPL) to satisfy the need
for large training sets. Text books (TB), hand books (HB) [79]
and historical data (HD) [52], [68] & [76] were the other sources
of collected data. Some limitations were observed in the
reviewed literature. DOE-T reduces the number of experiments
that need to be conducted but fails to provide enough training
data for ANN simulation (see example 13 in table 2). SS fulfils
the need for large amounts of training data for ANNs; however,
the simulation model is not suitable for the large number of
repetitive analyses required to optimize the process, which leads
to high computational cost [11] and the need for experts to
interpret the results. ENRDOE requires a large number of
experiments to yield the best results, leading to high
manufacturing cost, and may still not provide the required
quantity of training data. An alternative method to overcome
these problems is to use DOE for experiment conduction,
develop a regression equation from the real experiments through
analysis of variance, statistically rearrange the input variables
within their respective ranges, and predict the outputs through
the regression equation obtained from the DOE, which satisfies
the need for large amounts of training data.
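This alternative can be sketched in a few lines of numpy; the quadratic response, the 9-run "experiment" and the factor range below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small "experimental" data set (stand-in for real DOE runs).
x_doe = np.linspace(0.0, 1.0, 9)
y_doe = 2.0 + 3.0 * x_doe - 1.5 * x_doe**2   # pretend measured response

# Fit a quadratic regression model to the DOE results.
coeffs = np.polyfit(x_doe, y_doe, deg=2)

# Statistically rearrange inputs within the studied range and predict
# responses from the regression, giving an abundant training set.
x_train = rng.uniform(0.0, 1.0, size=1000)
y_train = np.polyval(coeffs, x_train)

print(x_train.shape, y_train.shape)   # 1000 synthetic training pairs
```

The 1000 generated pairs can then serve as ANN training data, at the cost of only the original 9 physical runs.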
B. Simulation software
In recent years, numerical simulation software has
developed rapidly and been applied successfully in many casting
and injection moulding industries to enhance product quality,
improve properties and reduce manufacturing costs. The
commonly used, commercially available SS for the IM process
observed in the reviewed literature are shown in table 1(n): 3 NA
[5] used C-Mold, 4 NA [11], [19], [26] & [35] utilized
Moldflow, 1 NA [67] adopted Timon-3DTM and 1 NA [29] used
Partadviser CAE. Similarly, in casting applications, 22 NA [25],
[36], [64], [65], [71] & [82] used Procast software; V-shrink [1],
Virtual casting [73], Any cast [21], Maxwell [28], finite element
methods (FEM) [33] and finite element analysis (FEA) [51]
were also used, and Fluent, Calcosoft and Magmasoft are other
advanced commercial codes available for simulation.
The validation set verifies whether the estimated outputs
from the network are accurate, and the test set determines
whether the network has been over-trained. It is quite interesting
to note that only 21 NA used validation sets, 37 NA did not use
them, and 87 NA did not report information about validation
sets, as shown in table 1(m).
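A minimal sketch of this three-way division follows; the 70/15/15 proportions are an assumption for illustration, as the reviewed works do not prescribe a fixed split:

```python
import numpy as np

def split_dataset(data, train_frac=0.7, val_frac=0.15, seed=0):
    """Shuffle and divide a data set into training, validation and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    n_train = int(train_frac * len(data))
    n_val = int(val_frac * len(data))
    train = data[idx[:n_train]]               # decides weights and biases
    val = data[idx[n_train:n_train + n_val]]  # checks accuracy during training
    test = data[idx[n_train + n_val:]]        # detects over-training afterwards
    return train, val, test

data = np.arange(100)
train, val, test = split_dataset(data)
print(len(train), len(val), len(test))  # 70 15 15
```

Shuffling before splitting matters when the collected data are ordered, for example by experimental run.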
Qnet for Windows version 2.0, NeuralLab, havBPNet++,
IBM Neural Network Utility, NeuroGenetic Optimizer (NGO)
version 2.0, Atree 3.0 Adaptive Logic Network development
system for Windows, TDL version 1.1, Neuro On-line,
NeuFrame and OWL Neural Network Library.
From the reviewed publications, as shown in table 1(q), the
following ANN simulation software was adopted by various
researchers: 21 and 40 NA adopted MATLAB and the
MATLAB neural network toolbox (Mat.NNTB) respectively, 14
NA employed Statistica neural network software (SNNS) [8],
[13], [81], [78] & [85-86], 2 NA [14] & [37] used Qnet, and
Neuro-Planner version 4.5 (NPV) was used in [32]. 6 NA used
either Visual C (VC)/C++ or the Java programming language
(JPL), 3 NA [16], [44] & [88] employed the Delphi
programming language (DPL), 3 NA [66] used Professional
II/Plus (NNPII/Plus), 1 NA [15] utilized NeuroSolutions (NS),
and PC Neuron (PCN) [75], Neuro-computer (NC) [52] and the
Fast Artificial Neural Network Library (FANNL) [63] were also
used; the remaining networks did not report (NR) the kind of
ANN simulation software adopted. From the literature survey, it
has been observed that the selection of ANN simulation software
is based on the type of neural architecture used for prediction,
because not all ANN simulation software can handle all types of
network architecture.
E. Network type and architectures
More than 50 different neural network architectures are
currently available in the literature. In casting and IM
applications, several different types of NA were observed in the
reviewed literature, as shown in table 1(r): 120 NA made use of
the multi-layer perceptron (MLP) network because of its good
generalization capability, 11 NA utilized the radial basis
function (RBF) network because of its training speed and ease of
construction, 2 NA [65] & [71] used the learning vector
quantization (LVQ) network, 2 NA [4] & [15] used the
self-organizing map (SOM) as an unsupervised learning
network, 4 NA [13], [74] & [78] adopted the single-layer
perceptron (SLP) network, 1 NA [13] employed the probabilistic
neural network (PNN), 1 NA [4] adopted a combination of SOM
and MLP networks, and 4 NA [3], [28], [63] & [84] did not
report the kind of network architecture they employed. MLPs
with different training algorithms were adopted by various
researchers, as shown in table 1(s): 58 NA used
back-propagation (BP), 40 NA adopted Levenberg-Marquardt
(LM), 2 NA employed conjugate gradient (CG), 2 NA used
momentum adaptive learning (MAL), and 5 NA employed
scaled conjugate gradient (SCG). Newton's method, Bayesian
regularization (BR), Rprop [87], variable metric chaos to
improve the BP algorithm [69], a combination of LM and BP
[78], and LM with BR [57] were the other algorithms used to
improve prediction accuracy.
F. Learning (or) Training
Learning is the process of modifying the neural network
parameters by making proper adjustments so as to produce the
desired response. During training the network parameters are
optimized, and as a result the network undergoes an internal
process of fitting. Table 1(l) clearly shows that 135 NA
TABLE 2. LIMITED DATA SET USED FOR TRAINING
Example | Network architecture (IN-HN-ON) | Total weights (TW) | Training data used | Ref.
1  | 4-7-4          | 56     | 9      | [2]
2  | 1-12-8         | 108    | 10     | [16]
3  | 5-10-3         | 80     | 13     | [79]
4  | 5-8-1          | 48     | 19     | [82]
5  | 1-8-2          | 26     | 10     | [88]
6  | 5-15-1         | 90     | 80     | [50]
7  | 3-4-1          | 16     | 8      | [9]
8  | 11-23-5        | 368    | 65     | [65]
9  | 4-4-1          | 20     | 10     | [44]
10 | 4-10-1; 1-8-3  | 50; 32 | 10; 12 | [41]
11 | 3-7-1          | 28     | 20     | [71]
12 | 3-15-1         | 60     | 6      | [58]
13 | 5-6-6          | 66     | 18     | [59]
14 | 2-10-3         | 50     | 46     | [27]

D. ANN simulation software
Simulation uses a model to develop conclusions that provide
insight into the behaviour of the real-world problem being
studied. Computer simulation is used to reduce the risk
associated with creating new systems or upgrading existing
ones. Over the past few decades, computer simulation software,
together with statistical analysis techniques, has been used as a
decision-making tool, growing more technical and precise by
reducing the error percentage in business, industry and other
fields [62]. Computer simulation is important because it helps to
predict future events: for particular operating conditions, it
predicts outputs or responses based on past data without any
experiments being conducted. Various freeware and shareware
software packages are available for ANN simulation in
measurement fields [90], namely the Rochester Connectionist
Simulator, NeurDS, PlaNet 5.7, GENESIS 2.0, SNNS 4.1,
Aspirin/MIGRAINES 6.0, Atree 3.0 educational kit for
Windows, PDP++, Uts, Multi-module neural computing
environment, NevProp, PYGMALION, Matrix Back
Propagation (MBP), Win NN, BIOSIM, AINET, DemoGNG,
nn/xnn, NNDT 1.4, Trajan 2.1, Neural Networks at your
Fingertips, etc. Some commercial software packages for the
measurement field [90] are also available for ANN simulation,
such as BrainMaker, SAS, MATLAB Neural Network Tool,
Propagator, NeuroForecaster and Visual Data, NESTOR,
Neuroshell2, NeuroWindows, NeuFuz4, PARTEK,
NeuroSolutions v2.0,
used a supervised learning scheme, 4 NA made use of
unsupervised learning, and 6 NA did not report the kind of
learning procedure adopted. Generally there are three kinds of
learning process, namely supervised learning, unsupervised
learning and reinforcement learning.
Supervised learning: Learning with the help of a teacher is
referred to as supervised learning. Here input-output data are
available for training. Error determination is possible by
comparing the neural network prediction with the target output
data. This error is fed back to the network to update the weights
until the predicted network results are close to the desired target
values [24].
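The error-feedback loop described above can be illustrated with a single linear neuron trained by a delta-rule style update; the target mapping y = 2x and all constants below are invented for illustration:

```python
import numpy as np

# Known input-output pairs act as the "teacher" (invented mapping y = 2x).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 2.0, 4.0, 6.0])

w = 0.1    # initial weight of a single linear neuron
lr = 0.05  # learning rate

for epoch in range(200):
    pred = w * x                         # network prediction
    error = y - pred                     # compare prediction with target
    w += lr * np.dot(error, x) / len(x)  # feed the error back to update the weight

print(round(w, 3))  # approaches the true value 2.0
```

Each pass shrinks the remaining error by a constant factor, so the weight converges geometrically towards the teacher's mapping.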
Unsupervised learning: Learning without the help of a
teacher is referred to as unsupervised learning. In unsupervised
learning the target output data are not available, hence error
computation is not possible: no feedback information is
available from the environment to say what the output should be
or whether the predicted output is right or wrong. In such cases
the neural network must discover patterns, regularities, features
or categories from the input data patterns and relate the input
data to the output; such a process is called a self-organizing
process [24].
Reinforcement learning: The reinforcement learning process
is similar to supervised learning. In supervised learning the
correct target output values are known for each input pattern,
but in some cases much less information is available; for
example, the network might only be told that its actual output is
50% correct. Thus only critic information, not exact
information, is available. The reinforcement signal acts as
feedback to the network during the learning process [24].
G. Training modes
Training refers to the process of modifying the neural
network parameters by updating the connection weights, bias
and any other network-related parameters [22]. There are
normally two modes of training:
Incremental mode (on-line training) refers to an approach in
which training samples/scenarios are collected on-line and
passed to the network one after the other in sequence.
Batch mode (off-line training): the necessary data are
collected before training commences, the entire training set is
passed to the network at once, and the average prediction error
is determined and back-propagated through the network to
update the weights and bias values and enhance prediction
accuracy.
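The two modes can be contrasted on the same toy problem; the single linear weight and the invented target mapping y = 2x below are illustrative only:

```python
import numpy as np

# Invented target mapping y = 2x; one linear weight is trained two ways.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
lr = 0.01

# Incremental (on-line) mode: update after every sample, one after the other.
w_inc = 0.0
for epoch in range(100):
    for xi, yi in zip(x, y):
        w_inc += lr * (yi - w_inc * xi) * xi

# Batch (off-line) mode: average the error over the whole set, then update once.
w_batch = 0.0
for epoch in range(100):
    grad = np.mean((y - w_batch * x) * x)
    w_batch += lr * grad

print(round(w_inc, 3), round(w_batch, 3))  # both approach the true value 2.0
```

With identical data and learning rate, the incremental weight here moves further per epoch because it takes one step per sample rather than one averaged step.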
As shown in table 1(k), the off-line approach was adopted in
128 NA, while 17 NA made use of on-line prediction
applications: to monitor the metal fill in LFC [28], to predict the
aluminium silicon modification level in AlSiCu alloys [57], to
monitor product quality in strip casting [46], to predict the
optimal flow behaviour of molten metal inside the die cavity
[64], and to predict cracks in continuous casting [48-49]. The
authors of [68] analysed the characteristics of existing mould
breakout prediction systems, the factors influencing breakout
and the feasibility of using mould friction to forecast breakout,
and investigated mould friction through an online training
method for mould breakout prediction; the researchers of [75]
developed an in-process monitoring system for
mixed-material-caused flash in the IM process. The incremental
mode of training has the following advantages: it is easier to
implement and computationally faster than batch training [22],
and the occurrence of defects or problems can be tackled at any
stage during the process, which increases productivity, decreases
human labour and reduces manufacturing lead times and costs
[41].
H. Network Topology
During the development stage of a network, an
understanding of the following factors is important: initialization
of weights, selection of the activation function, learning rate,
bias value, momentum constant, activation constants, hidden
layers/neurons and so on. These factors affect not only network
convergence but also prediction accuracy. Table 1(j) shows the
ways researchers defined their network topology. In 20 NA [38],
not even the best topology (NEB) was given: the authors did not
mention how many hidden layers/neurons they used, nor the
transfer function, training termination, learning algorithms, bias,
learning rate, momentum constant, model validation or network
performance criteria they adopted. 25 NA presented only the
best topology (OBT), including the network parameters and
hidden layers/neurons, with no justification for the selection. A
one-at-a-time strategy (OAT) was adopted by 4 NA [23] & [42],
in which topology selection was done through a network
parametric study. 29 NA only mentioned optimization (OMO),
but the actual optimization procedures followed were not
reported. 23 NA [4], [6], [25], [52] & [89] proposed a new
methodology (PPN) to define their networks; the authors of [25]
presented optimum topology selection by brute-force exhaustive
enumeration of alternatives. Where there are relatively few
alternatives and/or network training time is low, a genetic
algorithm tool may find the best possible solution to optimize
networks. 44 NA adopted the most common method, trial and
error (T&E). Though some new methods have been proposed
for selecting hidden layers/neurons and network topology, they
have not become popular because they may fail for other
applications. So, to date, there is no perfect methodology for
defining network topology, and it remains under intensive study.
I. Neural network architecture parameters
Normalization and de-normalization: Scaling of
input-output values plays a crucial role, because improperly
scaled values cause incompatibility and lead to inaccurate
results [5]. The normalization process helps speed up training
and avoids numerical overflow due to very large or very small
values [21]. Statistical techniques such as zero-score, median,
sigmoid, min-max and statistical column normalization are
available for data normalization, developed under different
rules, namely the max rule, sum rule, product rule and so on
[39]. The choice of data normalization range is based on the
intended transfer function, which modifies the summed weighted
inputs into an output: the normalization range is (0, 1) for the
logistic sigmoid transfer function and (-1, +1) for the
hyperbolic tangent transfer
6
International Journal of Advances in Engineering Sciences Vol.3, Issue 1, January, 2013
function [66]. The purpose of de-normalization is to convert
predicted scaled values into actual output.
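As a minimal sketch of the min-max scaling and de-normalization described above (function names and the default (0, 1) range are illustrative, not from the reviewed papers):

```python
def normalize(values, lo=0.0, hi=1.0):
    """Min-max scale raw values into [lo, hi]; use (0, 1) for a
    log-sigmoid output layer and (-1, 1) for tanh. Assumes the
    values are not all identical (vmax > vmin)."""
    vmin, vmax = min(values), max(values)
    scaled = [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]
    return scaled, vmin, vmax

def denormalize(scaled, vmin, vmax, lo=0.0, hi=1.0):
    """Map network outputs back to the original engineering units."""
    return [vmin + (vmax - vmin) * (s - lo) / (hi - lo) for s in scaled]
```

The stored `vmin`/`vmax` from the training data must be reused when scaling new inputs, so that training and prediction share one mapping.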
J. Initialization of weights
In neural networks, the neurons of one layer are connected to those of the neighbouring layer by means of connection weights. The weights carry the information about the input signals needed to solve complex problems [24]. From the reviewed literature, it has been observed that network performance relies partly on the initialization of the weights. Smaller weights tend to increase the training time needed for network convergence, whereas a larger weight matrix may cause instability in network performance. Network weights have to be initialized to small random values: if all weights start equal and the solution requires unequal weights to develop, the network will not converge and may fail to learn from the training examples, with the error stabilizing or even increasing as learning continues during the training period [19] & [30].
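A minimal sketch of the small-random initialization described above (the `scale` value and function name are illustrative assumptions, not prescriptions from the reviewed papers):

```python
import random

def init_weights(n_in, n_out, scale=0.1, seed=0):
    """Small random weights break the symmetry between neurons;
    equal starting weights would make every hidden neuron compute
    the same update, so the network could never develop the
    unequal weights the solution may require."""
    rng = random.Random(seed)  # seeded for reproducible experiments
    return [[rng.uniform(-scale, scale) for _ in range(n_out)]
            for _ in range(n_in)]
```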
K. Transfer or Activation functions
The activation function converts the weighted sum of a neuron's input values into a value that represents the neuron's output, so the output of a neuron depends largely on the transfer function used. The different types of transfer function used in MLP networks are: hard-limit, which generates an output of either 1.0 or 0.0 depending on the input; log-sigmoid, whose output lies between 0 and 1; linear, whose output equals its input and ranges between -1.0 and 1.0; and tan-sigmoid, which generates an output between -1.0 and 1.0 [22]. The radial basis function is the transfer function most commonly used in RBF networks; an RBF network trains faster than multilayer networks but requires many neurons for high-dimensional input spaces [57]. The sigmoid transfer function used in the BP algorithm cannot reach its extreme values of 0 and 1 without infinitely large weights; therefore input-output patterns are normalized to the range 0.1 to 0.9 [71], [36]. From the literature review, the linear, log-sigmoid and tan-sigmoid transfer functions find the most applications. The linear transfer function is used in almost all NA in the input and output layers, which helps to speed up the training process; the log- and tan-sigmoid functions find maximum application in the hidden layers of MLP networks, and the radial basis function in the hidden layers of RBF networks. However, in the output layer, log- or tan-sigmoid transfer functions were also used by many researchers [59], [23], [42], [67], [58], [74] to improve predictions while sacrificing a higher computational cost. The authors of [42] and [23] used activation constants in the transfer function, which helps reduce the mean squared error to small values; the constants were selected through a network parametric study.
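The four MLP transfer functions listed above can be sketched in a few lines (the function names are illustrative; `hard_limit`'s threshold convention varies between texts):

```python
import math

def hard_limit(x):
    return 1.0 if x >= 0 else 0.0        # binary output, 1.0 or 0.0

def log_sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))    # output in (0, 1)

def tan_sigmoid(x):
    return math.tanh(x)                  # output in (-1, 1)

def linear(x):
    return x                             # output equals the input
```

Note that `log_sigmoid` and `tan_sigmoid` only approach their extremes asymptotically, which is why targets are commonly rescaled to 0.1-0.9 rather than 0-1.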
L. Learning rate (η) and Momentum (α) constant
The purpose of the learning-rate parameter (η) is to prevent over-learning and error vibration; it ranges between 0 and 1. A lower value of η ensures a true gradient descent but increases the total number of learning steps needed to reach the optimal solution. A higher value of η gives rapid convergence but can overshoot into a local minimum and may never converge to the minimum [30]. The momentum constant (α) assists weight updating so as to speed up the search in local-minimum regions, leading to faster convergence, increased prediction accuracy and shorter computational time. To prevent over-learning, error vibration and local optima, α + η = 1.0 was adopted by the authors of [6]. In conclusion, the optimal values of η and α depend on the problem to be solved and are chosen experimentally via trial and error, or by a network parametric study varying one parameter at a time in the range 0-1 [42].
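The roles of η and α can be seen in the standard momentum weight-update rule, sketched here for one weight vector (a generic illustration, not a specific paper's implementation; η = 0.3 and α = 0.7 are example values satisfying α + η = 1.0):

```python
def momentum_update(w, grad, prev_dw, eta=0.3, alpha=0.7):
    """One BP weight update: eta scales the step along the negative
    gradient, while alpha carries over a fraction of the previous
    step, damping error vibration and speeding up travel through
    flat local-minimum regions."""
    dw = [-eta * g + alpha * p for g, p in zip(grad, prev_dw)]
    w_new = [wi + d for wi, d in zip(w, dw)]
    return w_new, dw  # dw is fed back as prev_dw on the next step
```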
M. Bias
A bias neuron does not receive any input and emits a constant output across weighted connections to the neurons in the next layer [21], [29] & [33]. The main functions of the bias neuron are to prevent the weight matrix from stagnating, to control the activation function during the learning process, and to help the neurons remain flexible and adaptable [77]. The bias value varies in the range (0, 0.15) [36].
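The bias simply shifts the weighted sum before the transfer function is applied, as in this single-neuron sketch (log-sigmoid chosen for illustration; the function name is an assumption):

```python
import math

def neuron_output(inputs, weights, bias):
    """A neuron's output: the transfer function (here log-sigmoid)
    applied to the weighted input sum plus a constant bias term,
    which shifts where the sigmoid crosses its midpoint."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))
```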
N. Number of Input and Output neurons
Table 1(h) & (i) present the numbers of inputs and outputs used among the reviewed publications. A single input neuron was used in 3 NA [44], [76] & [88], two neurons in 10 NA, three in 30 NA, four in 29 NA, five in 23 NA, six in 6 NA, seven to nine in 15 NA, ten to thirty in 24 NA [12], [54], [78] & [86], and 5 NA did not report the number of input and output neurons used. For a single input variable, conventional simple regression, moving averages and the smoothed-line function in Microsoft Excel perform better than a neural network; ANNs are best suited to modelling complicated interactions between several input variables [80]. A single output neuron was used in 94 NA, two neurons in 12 NA, three in 13 NA, four in 8 NA, five in 4 NA, six in 2 NA, seven to nine in 3 NA, ten to one hundred in 2 NA [8] & [15], and 108 output neurons in 2 NA [85] & [86]. It is more effective to develop separate models for individual outputs, because training/computational time increases dramatically as the number of outputs increases [80]. Neural networks simulate with high accuracy for large numbers of input-output neurons only when huge training data are available and one is prepared to sacrifice computational time.
O. Selection of Hidden neurons and Hidden layers
The hidden layer(s) and hidden neurons play a significant role in the convergence rate during training, the learning time and the prediction accuracy of the network [3]. The possible means of size choice observed in the reviewed literature include: selection based on experience and experimentation with the problem or application in hand [3]; improving prediction accuracy by increasing the hidden layers and neurons [11] & [19]; and the Akaike information criterion, network information criterion and neural network information criterion methods [4], based on statistical probability and an error energy function. 82 NA employed a single hidden layer for general problems, 29 NA adopted two hidden layers for complicated problems, 1 NA adopted three hidden layers [64], and 9 NA did not report the kind of network architecture adopted, as shown in table 1(g). Too many hidden neurons risk over-fitting the training data, which leads to poor generalization and increases the training time [30]; too few neurons (say one) are not sufficient to capture the non-linear mapping in the collected database. As mathematically proved by Haykin, a single hidden layer with enough neurons yields successful results for a feed-forward neural network [17]. The Kolmogorov theorem [40] & [89], H = 2p + 1 (H is the number of hidden neurons, p the number of input neurons), was adopted to fix the hidden neurons; l = sqrt(n + m) + a (n = input neurons, m = output neurons, a = a constant in (1, 10), l = hidden neurons) was employed for size choice [58]; a network pruning technique was used [68]; H = (input neurons + output neurons)/2 was used by the authors of [6] & [79]; and a network parametric study using a one-at-a-time strategy [23] & [42] was employed to fix the hidden neurons. There are a few examples [13], [19], [51], [61] and [78] where neural networks predict best results even when the number of hidden neurons exceeds 50. According to the authors of [34], when the number of hidden neurons equals the number of training data, learning can be fastest but generalization capability is lost; hence for good generalization a 10:1 ratio should be used, meaning one hidden neuron for every 10 training patterns yields better results. The exact size choice is still under intensive study, with no conclusive solution available, because of network mapping complexity and the non-deterministic nature of many successfully completed training procedures; in practice, the choice is made based on network prediction accuracy [30].
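The rule-of-thumb sizing formulas cited above can be collected in one helper (a sketch; the dictionary keys are illustrative labels, and, as the review stresses, these values are only starting points for trial and error):

```python
import math

def hidden_neuron_heuristics(n_inputs, n_outputs, a=1):
    """Rule-of-thumb hidden-layer sizes: H = 2p + 1 (Kolmogorov),
    l = sqrt(n + m) + a with a in (1, 10), and H = (n + m)/2.
    None is definitive; final size needs a parametric study."""
    return {
        "kolmogorov": 2 * n_inputs + 1,
        "sqrt_rule": round(math.sqrt(n_inputs + n_outputs)) + a,
        "mean_rule": (n_inputs + n_outputs) // 2,
    }
```

For a typical 4-input, 1-output process model the three rules already disagree, which illustrates why no single formula has become standard.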
P. Neural network training termination
Table 1(f) shows that the various criteria researchers adopted to terminate network training fall into the following categories: when the error (MSE, RMSE, etc.) between the neural network predicted value and the target value is reduced to the pre-set value (error goal); when the preset number of training epochs is completed; and when cross-validation takes place between training and test data. In most of the reviewed neural network architectures the first two approaches were used. 73 NA did not report a criterion. 37 NA adopted (EL/TE): during training, if the preset error limit (EL) is reached the network terminates, otherwise it continues until the preset number of training epochs (TE). 18 NA employed a fixed number of training epochs (NTEP) alone, 10 NA used only EL, and 5 NA utilized (EL/TT): if the EL is reached within the preset training time the network terminates training, otherwise it continues until the preset TT is reached. 2 NA used (TT/TE): if the preset number of training epochs is reached within the preset time the network stops training, otherwise it continues until the TT is reached.
Q. Neural network training performances
To check the performance of the network's training ability on estimated results, the various statistical evaluation indexes observed in the review of 145 NA are reported in table 1(e). 44 NA adopted mean squared error (MSE), 15 NA used sum of squared errors (SSE), 9 NA employed mean absolute error (MAE), 6 NA used training quality (TQ), 20 NA adopted mean percent error (MPE), 3 NA used standard deviation quotient (SDQ), 2 NA utilized mean absolute percentage error (MAPE), 1 NA used normalized mean squared error (NMSE), 26 NA made use of root mean squared error (RMSE), 2 NA used a performance quality index (PQI), 1 NA used normalized root mean squared error (NRMSE), 13 NA used the correlation coefficient (CRC), and the remaining 29 NA did not report. Some studies adopted more than one statistical index to evaluate network training performance, for example [6], [10], [78] and [61]. The goal of any training algorithm is to reduce the error between the predicted and the actual value; a lower error and a higher CRC indicate that the network's estimated value is closer to the true value [6]. Among the various statistical quality indexes, MSE, followed by RMSE, MPE and SSE, was the most widely used.
R. Model validation
The ultimate goal at the end of training is to evaluate the network prediction accuracy (model validation) with a new set of data not used in training; the model has to provide accurate and reliable forecasts. Various approaches were used by researchers to evaluate the ANN model, as shown in Table 1(d), [4], namely CRC [7], RMSE, maximum percent error (Max. PE), percent error (PE) [83], MAE, SDQ, standard deviation error (SDE), mean percent error (Mean PE), relative error (RE) and MAPE. The authors of [4], [27], [43], [67], [78], [79], [81] and [89] used more than one approach for model validation, and the authors of [3], [4], [8], [20], [21], [27], [30-32], [36], [45], [50], [57], [66-67], [70], [72], [76], [77], [87] and [88] expressed their prediction accuracy through graphs comparing experimental and predicted neural network results.
S. Sensitivity analysis & reverse process model
Sensitivity analysis (SA) is of paramount importance for identifying the most significant process parameters during a manufacturing process. SA can be applied at the pre-processing stage to predict which factors contribute more and which less; the less-contributing factors are discarded, the more-contributing ones retained, and the difference in prediction with and without them can also be checked. SA at the post-processing stage determines the most critical factors among the selected process parameters. 1 NA [72] presented a study of the effects of different alloying elements, chemical composition, grain refiner, modifier and cooling rate on porosity formation in Al-Si casting alloys via SA. The development of reverse process models is always in great demand, because industrialists are more interested in knowing the optimal process-parameter settings for achieving the desired casting quality. A reverse process model configures the responses as the inputs and the process parameters as the outputs during training, and learns the correlation between responses and process parameters. Statistical methods such as DOE [42] might fail to carry out reverse mapping because the transformation matrix might not be invertible at all. This limitation can be handled effectively by using neural networks, since a neural network works like a black-box data-processing unit.
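The most widely used training-performance indexes from the review (MSE, RMSE, MAE, SSE and MAPE) are easy to state precisely in code; this is a generic sketch with an illustrative function name, not a specific paper's formulation:

```python
import math

def training_errors(targets, predictions):
    """Common statistical quality indexes: SSE, MSE, RMSE, MAE and
    MAPE. Targets must be non-zero for MAPE to be defined."""
    n = len(targets)
    errs = [t - p for t, p in zip(targets, predictions)]
    sse = sum(e * e for e in errs)
    mse = sse / n
    return {
        "SSE": sse,
        "MSE": mse,
        "RMSE": math.sqrt(mse),
        "MAE": sum(abs(e) for e in errs) / n,
        "MAPE": 100.0 * sum(abs(e / t) for e, t in zip(errs, targets)) / n,
    }
```

Reporting at least one such numerical index alongside graphs makes validation results comparable across studies.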
IV. HYBRID MODELS
Soft computing is a collection of computational techniques from computer science, artificial intelligence, machine learning and some engineering disciplines that attempts to study, model and analyze complex relationships which are difficult for conventional methods (hard computing), in order to yield better results at low cost with analytic and complete solutions [24]. Neural networks (NN), fuzzy logic (FL) and genetic algorithms (GA) are examples of soft computing. When more than one soft computing technique is employed to solve a problem, the result is referred to as a hybrid computing system. Hybrid systems are classified into the following three types [34]:
1. Sequential hybrids,
2. Auxiliary hybrids and
3. Embedded hybrids.
For example [34], in a sequential hybrid a GA obtains the optimal process parameters first and hands the pre-processed data over to an NN for further processing; a neuro-genetic system, in which an NN employs a GA to optimize the network architecture parameters, is an auxiliary hybrid; and NN-FL, in which an NN receives fuzzy inputs, processes them and extracts fuzzy outputs, is an embedded hybrid.
The following observations were made from the reviewed publications: PSO-BPNN [6] for product and mold cost estimation in the PIM process; a neural-fuzzy model [14] for dimensional control of molded parts in the IM process, composed of an NN for learning the relationship between input-output data and FL for reasoning, generating more reliable suggestions for modifying the output values induced from the trained neural network; a neuro-fuzzy model [15] developed to compare the results of modified and un-modified Al-Si alloys under vibration and non-vibration conditions, which influence the mechanical properties; ANN & GA models [31] implemented to optimize the effective process parameters for porosity formation in Al-Si casting alloys; a hybrid of a back-propagation neural network and a genetic algorithm for the optimization of injection moulding process parameters [35]; a GA-NN model used to tackle problems related to casting quality in a green sand mould system under both forward and reverse mappings [42]; a combined ANN/GA model [45] to determine the optimum process conditions for improving IM plastic part quality; an NN to predict the optimal process conditions, avoiding the time-consuming what-if scenarios raised in casting simulation software, with a GA used to yield the best output parameter values [65]; and a combination of NN and GA [71] to optimize thin-walled aluminium-alloy components manufactured by the LPDC process. From these applications of hybrid models in casting and injection moulding processes, neural networks were found to be an effective estimation and optimization tool.
V. CONCLUSIONS
The following are the major observations drawn from the literature:
1. ANNs can be employed for casting and IM processes, from the moulding stage through to final inspection.
2. Multiple regression equations obtained from DOE fulfil the need for the huge training data required by ANNs.
3. ANNs address the limitations of simulation software by reducing repetitive analysis, computational cost and time; they replace the need for experts to interpret results and help in implementing online process control.
4. The on-line approach, in which future problems or defects are addressed and corrected in an adaptive, real-time manner using ANNs, remains an open field of study.
5. Supervised learning, MLP networks, one or two hidden layers, the BP & LM algorithms, off-line training mode and Matlab simulations were the most common in casting and IM processes.
6. Very few authors attempted to use activation constants in the transfer function and bias values for MLP networks, even though these reduce the MSE to small values and improve the prediction accuracy.
7. ANN is best used to model complex interactions among several input and output parameters and predicts with good accuracy; its only limitations are the need for enough training data and a willingness to sacrifice computational time and cost.
8. Although authors have attempted and proposed some empirical equations for optimum hidden layer/neuron selection, these are not generalized and fail in other applications. Hence the optimum selection of hidden layer(s) and hidden neurons is still under intensive study.
9. A single hidden layer finds the maximum applications in MLP networks in the reviewed publications; the reason may be that increasing the hidden layers increases the computation time, or, as mathematically proven by Simon Haykin, that a single hidden layer with enough neurons yields better prediction accuracy. Only a few NA used a validation data set, and they recommended that the use of validation sets prevents over-fitting.
10. An extremely limited amount of data was used by a few authors for training, where the number of network connections to be fitted is larger than the data available for training. Hence some authors recommend that a neural network be trained using a large database to avoid over-fitting and improve generalization.
11. It is important to note that very few authors reported the criteria adopted for stopping neural network training, even though prediction accuracy relies on the training termination.
12. With regard to model validation against test sets or experimental results, some publications did not present network predictions for the test cases, and some authors presented predicted results only graphically. An important requirement is that papers include the training and testing sets used, prediction results in numerical form, and the specification of the network model results obtained.
13. It has been proven from the reviewed publications that neural networks can be applied for prediction, optimization, monitoring, control, identification, modelling and classification.
14. Sensitivity analysis on a trained neural network model identifies the most significant parameters and helps to control the process.
15. A reverse process model can be effectively developed via an ANN.
16. A few publications used hybrid computing systems combining ANN with GA, PSO and fuzzy logic. ANN/GA, GANN, PSO-BPNN and neural-fuzzy systems yield superior results in comparison with neural networks alone; the reason may be that they overcome the local optimum solutions obtained from neural networks alone.
REFERENCES
[1] Florin Susac, Mihaela Banu and Alexandru Epureanu, Artificial neural network applied to thermo-mechanical fields monitoring during casting, Proceedings of the 11th WSEAS Int. Conf. on Mathematical Methods, Computational Techniques and Intelligent Systems, 2009, 247-251.
[2] Shu Fu-Hua, Aluminum-zinc alloy squeeze casting technological parameters optimization based on PSO and ANN, China Foundry, Vol. 4, No. 3, 2007, 202-205.
[3] Wang L, Apelian D, Makhlouf M and Huang W, Predicting composition and properties of aluminum die casting alloys using artificial neural network, Metallurgical Science and Technology, Vol. 26, No. 1, 2008, 16-21.
[4] Wen-Chin Chen, Pei-Hao Tai, Min-Wen Wang, Wei-Jaw Deng and Chen-Tai Chen, A neural network based approach for dynamic quality prediction in a plastic injection molding process, Expert Systems with Applications 35 (2008) 843-849.
[5] Prasad K.D.V. Yarlagadda and Cobby Ang Teck Khong, Development of a hybrid neural network system for prediction of process parameters in injection moulding, Journal of Materials Processing Technology 118 (2001) 110-116.
[6] Che Z.H, PSO-based back-propagation artificial neural network for product and mold cost estimation of plastic injection molding, Computers & Industrial Engineering 58 (2010) 625-637.
[7] Cetin Karatas, Adnan Sozen, Erol Arcaklioglu and Sami Erguney, Investigation of mouldability for feedstocks used in powder injection moulding, Materials and Design 29 (2008) 1713-1724.
[8] Dobrzanski L.A, Krupinski M, Sokolowski J.H, Zarychta P and Wlodarczyk-Flinger A, Methodology of analysis of casting defects, JAMME, Vol. 18, 2006, 267-270.
[9] Jiang Zheng, Qudong Wang, Peng Zhao and Congbo Wu, Optimization of high-pressure die-casting process parameters using artificial neural network, Int J Adv Manuf Technol (2009) 44:667-674.
[10] Cetin Karatas, Adnan Sozen, Erol Arcaklioglu and Sami Erguney, Modeling of yield length in the mould of commercial plastics using artificial neural networks, Materials and Design 28 (2007) 278-286.
[11] Ozcelik B and Erzurumlu T, Comparison of the warpage optimization in the plastic injection molding using ANOVA, Journal of Materials Processing Technology 171 (2006) 437-445.
[12] Palaniswamy S, Nagarajah C.R, Graves K and Loventti P, A hybrid signal pre-processing approach in processing ultrasonic signals with noise, Int J Adv Manuf Technol (2009) 42:766-771.
[13] Dobrzanski L.A, Krupinski M and Sokolowski J.H, Computer aided classification of flaws occurred during casting of aluminum, Journal of Materials Processing Technology 167 (2005) 456-462.
[14] Lau H.C.U, Wong T.T and Pun K.F, Neural-fuzzy modeling of plastic injection molding machine for intelligent control, Expert Systems with Applications 17 (1999) 33-43.
[15] Srinivasulu Reddy K and Ranga Janardhana G, Developing a neuro fuzzy model to predict the properties of AlSi2 alloy, ARPN Journal of Engineering and Applied Sciences, Vol. 4, No. 10, 2009, 63-73.
[16] Necat Altinkok, Use of artificial neural network for prediction of physical properties and tensile strengths in particle reinforced aluminum matrix composites, Journal of Materials Science 40 (2005) 1767-1770.
[17] Muhammad Hayat Jokhio, Muhammad Ibrahim Panhwar and Mukhtiar Ali Unar, Modeling mechanical properties of composite produced using stir casting method, Mehran University Research Journal of Engineering & Technology, Vol. 30, No. 1, 2011, 75-88.
[18] Wen-Chin Chen, Gong-Loung Fu, Pei-Hao Tai and Wei-Jaw Deng, Process parameter optimization for MIMO plastic injection molding via soft computing, Expert Systems with Applications 36 (2009) 1114-1122.
[19] Hasan Kurtaran, Babur Ozcelik and Tuncay Erzurumlu, Warpage optimization of a bus ceiling lamp base using neural network model and genetic algorithm, Journal of Materials Processing Technology 169 (2005) 314-319.
[20] Prasad K.D.V. Yarlagadda, Prediction of die casting process parameters by using an artificial neural network model for zinc alloys, Int. J. Prod. Res., 2000, Vol. 38, No. 1, 119-139.
[21] Farahany S, Erfani M, Karamoozian A, Ourdjini A and Hasbullah Idris M, Artificial neural networks to predict of liquidus temperature in hypo-eutectic Al-Si cast alloys, Journal of Applied Sciences 10(24): 3243-3249, 201
[22] Pratihar D.K, Soft Computing, Narosa Publishing House Pvt. Ltd, India, 2008.
[23] Manjunath Patel G C and Prasad Krishna, Prediction and optimization of dimensional shrinkage variations in injection molded parts using forward and reverse mapping of artificial neural networks, Advanced Materials Research, Vols. 463-464 (2012) 674-678.
[24] Sivanandam S.N and Deepa S.N, Principles of Soft Computing, 2nd Edition, Wiley India Pvt. Ltd, New Delhi, 2011.
[25] Vosniakos G.C, Galiotou V, Pantelis D, Benardos P and Pavloua P, The scope of artificial neural network metamodels for precision casting process planning, Robotics and Computer-Integrated Manufacturing 25 (2009) 909-916.
[26] Fei Yin, Huajie Mao, Lin Hua, Wei Guo and Maosheng Shu, Back propagation neural network modeling for warpage prediction and optimization of plastic products during injection molding, Materials and Design 32 (2011) 1844-1850.
[27] Adel Mahamood Hassan, Abdalla Alrashdan, Mohammed T. Hayajneh and Ahmad Turki Mayyas, Prediction of density, porosity and hardness in aluminum-copper-based composite materials using artificial neural network, Journal of Materials Processing Technology 209 (2009) 894-899.
[28] Mohamed Abdelrahman, Jeanison Pradeep Arulanantham, Ralph Dinwiddie, Graham Walford and Fred Vondra, Monitoring metal-fill in a lost foam casting process, Volume 45, Number 4, October 2006, 459-475.
[29] Sadeghi B.H.M, A BP-neural network predictor model for plastic injection molding process, Journal of Materials Processing Technology 103 (2000) 411-416.
[30] Prasad K.D.V. Yarlagadda and Eric Cheng Wei Chiang, A neural network system for the prediction of process parameters in pressure die casting, Journal of Materials Processing Technology 89-90 (1999) 583-590.
[31] Mousavi Anijdan S.H, Bahrami A, Madaah Hosseini H.R and Shafyei A, Using genetic algorithm and artificial neural network analyses to design an Al-Si casting alloy of minimum porosity, Materials and Design 27 (2006) 605-609.
[32] Marcin Perzyk and Andrzej W. Kochanski, Prediction of ductile cast iron quality by artificial neural networks, Journal of Materials Processing Technology 109 (2001) 305-307.
[33] Ali Mazahery and Mohan ostad Shabani, A356 Reinforced with
nanoparticles numeric analysis of mechanical properties, JOM (2012),
DOI: 10.1007/s1183.
[34] Rajasekaran S. and Vijayalakshmi Pai G.A., Neural networks, Fuzzy
Logic and Genetic algorithms synthesis and applications, PHI
Learning Pvy. Ltd, New Delhi, 2011
[35] Fie Yin, Huajie Mao and Lin Hua, A hybrid of back propagation
neural network and genetic algorithm for optimization of injection
molding process parameters, Materials and Design 32 (2011) 3457–
3464.
[36] Liqiang Zhang, Luoxing Li, Hui Ju and Biwi Zhu, Inverse
identification of interfacial heat transfer co-efficient between the
casting and metal mold using neural network, Energy conversion and
management 51 (2010) 1898–1904.
[37] Lau H.C.W, Ning A., Pun K.F and Chin K.S., Neural networks for
the dimensional control of molded parts based on a reverse process
model, Journal of Materials Processing Technology 117 (2001) 89–
96.
[38] Imad Khan M, Yakov Frayman and Saeid Nahavandi, Modeling of
porosity defects in high pressure die casting with a neural network,
Proceedings of the Fourth International Conference on Intelligent
Technologies 2003, Chiang Mai University, Thailand, pp. 1-6.
[39] Jayalakshmi T and Santhakumaran A, Statistical Normalization and
Back Propagation for Classification, International Journal of
Computer Theory and Engineering, Vol. 3, No.1, February,
2011,1793-8201
[40] Xiang Zhang, Shuiguang Tong, Li Xu and Shengzan Yan,
Optimization of Low-Pressure Die Casting Process with Soft
Computing, Proceedings of the 2007 IEEE International Conference
on Mechatronics and Automation August 5 - 8, 2007, Harbin, China,
619-623.
[41] Benny Karunakar D. & Datta G.L, Prevention of defects in castings
using back propagation neural networks, Int J Adv Manuf Technol
(2008) 39:1111–1124.
[42] Mahesh B. Parappagoudar, Pratihar D.K. and Datta G.L, Forward and
reverse mappings in green sand mould system using neural networks,
Applied Soft Computing 8 (2008) 239–260.
[43] An-hui Cai, Hua Chen, Jing-ying Tan, Min Chen, Xiao-song Li,
Yong Zhou and Wei-ke An, Optimization of compositional and
technological parameters for phosphate graphite sand, JMEPEG
(2008) 17:465–471
[44] Necat Altinkok, Use of artificial neural network for prediction of
mechanical properties of a-al2o3 particulate-reinforced Al–Si10mg
alloy composites prepared by using stir casting process, Journal of
Composite Materials, Vol. 40, No. 9, (2006), 779-796.
[45] Shen Changyu, Wang Lixia and Li Qian, Optimization of injection
molding process parameters using combination of artificial neural
network and genetic algorithm method, Journal of Materials
Processing Technology 183 (2007) 412–418.
[46] Hung-Yi Chen and Shiuh-Jer Huang, Adaptive neural network
controller for the molten steel level control of strip casting processes,
Journal of Mechanical Science and Technology 24 (3) (2010) 755-760.
[47] Ali Mazahery and Mohsen Ostad Shabani, Nano-sized silicon carbide
reinforced commercial casting aluminum alloy matrix: Experimental
and novel modeling evaluation, Powder Technology 217 (2012) 558–
565.
[48] Tirian G.O., Prostean O and Filip I., Control system of the continuous
casting process for cracks removal, 5th International Symposium on
Applied Computational Intelligence and Informatics, May 28–29,
2009 – Timişoara, Romania, 265-269.
[49] Gelu Ovidiu Tirian, Stela Rusu-Anghel, Manuela Pănoiu and Camelia
Pinca Bretotean, Control of the continuous casting process using
neural networks, Proceedings of the 13th WSEAS International
Conference on COMPUTERS, 2009, 199-204.
[50] Kenig S, Ben-David A, Omer M and Sadeh A, Control of
properties in injection molding by neural networks, Engineering
Applications of Artificial Intelligence 14 (2001) 819–823.
[51] Mohamed A. Abdelrahman, Ankush Gupta and Wael A. Deabes, A
feature-based solution to forward problem in electrical capacitance
tomography of conductive materials, IEEE transactions on
instrumentation and measurement, Vol. 60, No. 2, February 2011, 430-441.
[52] Rawin Raviwongse and Venkat Allada, Artificial neural network based model for computation of injection mould complexity, Int J Adv Manuf Technol (1997) 13:577-586.
[53] Gelu Ovidiu Tirian and Camelia Bretotean Pinca, Applications of artificial neural networks in continuous casting, WSEAS Transactions on Systems, Issue 6, Volume 8, 2009, 693-702.
[54] Wang Xudong, Yao Man and Chen Xingfu, Development of prediction method for abnormalities in slab continuous casting using artificial neural network models, ISIJ International, Vol. 46 (2006), No. 7, 1047-1053.
[55] Mandal A and Roy P, Modeling the compressive strength of molasses-cement sand system using design of experiments and back propagation neural network, Journal of Materials Processing Technology 180 (2006) 167-173.
[56] Mirigul Altan, Reducing shrinkage in injection moldings via the Taguchi, ANOVA and neural network methods, Materials and Design 31 (2010) 599-604.
[57] Francis R and Sokolowski J, Prediction of the aluminum silicon modification level in the AlSiCu alloys using artificial neural networks, AMES, Metalurgija - Journal of Metallurgy, 2007, 3-15.
[58] Jang Li-hong, Wang Ai-guo, Tian Nai-yuan, Zhang Wei-cun and Fan Qiao-li, BP neural network of continuous casting technological parameters and secondary dendrite arm spacing of spring steel, Journal of Iron and Steel Research, International, 2011, 18(8):25-29.
[59] Chorng-Jyh Tzeng, Yung-Kuang Yang, Yu-Hsin Lin and Chih-Hung Tsai, A study of optimization of injection molding process parameters for SGF and PTFE reinforced PC composites using neural network and response surface methodology, Int J Adv Manuf Technol, 26 January 2012, DOI 10.1007/s00170-012-3933-6.
[60] Guang-ming Cao, Cheng-gang Li, Guo-ping Zhou, Zhen-yu Liu, Wu Di, Wang Guo-dong and Liu Xiang-hua, Rolling force prediction for strip casting using theoretical model and artificial intelligence, J. Cent. South Univ. Technol. (2010) 17:795-800.
[61] Dobrzanski L.A, Krupinski M, Zarychta P and Maniara R, Analysis of influence of chemical composition of Al-Si-Cu casting alloy on formation of casting defects, JAMME, 2007, Vol. 21, Issue 2, 53-56.
[62] Roger McHaney, Understanding Computer Simulation, Roger McHaney and Ventus Publishing ApS, 2009, ISBN 978-87-7681-505-9.
[63] Lakshmanan Singaram, Improving the quality of the sand casting using Taguchi method and ANN analysis, International Journal on Design and Manufacturing Technologies, Vol. 4, No. 1, 2010, 1-5.
[64] Jitender K. Rai, Amir M. Lajimi and Paul Xirouchakis, An intelligent system for predicting HPDC process variables in interactive environment, Journal of Materials Processing Technology 203 (2008) 72-79.
[65] Krimpenis A, Benardos P.G, Vosniakos G.O and Koulouvitaki A, Simulation-based selection of optimum pressure die-casting process parameters using neural nets and genetic algorithms, Int J Adv Manuf Technol (2006) 27:509-517.
[66] Sterjovski Z, Nolan D, Carpenter K.R, Dunne D.P and Norrish J, Artificial neural networks for modelling the mechanical properties of steels in various applications, Journal of Materials Processing Technology 170 (2005) 536-544.
[67] Kwak T.S, Suzuki T, Bae W.B, Uehara Y and Ohmori H, Application of neural network and computer simulation to improve surface profile of injection molding optic lens, Journal of Materials Processing Technology 170 (2005) 24-31.
[68] Wentao Li, Yang Li and Yuan Zhang, Study of mould breakout prediction technique in continuous casting production, 3rd International Conference on Biomedical Engineering and Informatics (BMEI 2010), 2966-2970.
[69] Fengxiang Gao, Changsong Wang, Yubao Zhang and Xiao Chen, Application of variable-metric chaos optimization neural network in predicting slab surface temperature of the continuous casting, IEEE, 2009 Chinese Control and Decision Conference, 2296-2299.
[70] Kim Y.H, Choi J.C, Yoon J.M and Park J.H, A Study of the
Optimum Reheating Process for A356 Alloy in Semi-Solid Forging,
Int J Adv Manuf Technol (2002) 20:277–283.
Liqiang Zhang, Luoxing Li, Shiuping Wang and Biwu Zhu,
Optimization of LPDC Process Parameters Using the Combination of
Artificial Neural Network and Genetic Algorithm Method, Journal of
International Journal of Advances in Engineering Sciences Vol.3, Issue 1, January, 2013
[72]
[73]
[74]
[75]
[76]
[77]
[78]
[79]
[80]
[81]
Materials Engineering and Performance, 2011, DOI: 10.1007/s11665011-9933-0.
Shafyei A, Mousavi Anijdan S.H and Bahrami A, Prediction of
porosity percent in Al–Si casting alloys using ANN, Materials
Science and Engineering A 431 (2006) 206–210.
Hanumantha Rao D, Tagore G. R. N. and Janardhana G. Ranga,
Evolution of artificial neural network (ANN) model for predicting
secondary dendrite arm spacing in aluminium alloy casting, J. of the
Braz. Soc. of Mech. Sci. & Eng., Vol. XXXII, No. 3, July-September
2010, 276-281.
Calcaterra S, Campana G and Tomesani L, Prediction of mechanical
properties in spheroidal cast iron by neural networks, Journal of
Materials Processing Technology 104 (2000) 74-80.
Joseph Chen, Mandara Savage and Jie James Zhu, Development of
artificial neural network-based in-process mixed-material-caused
flash monitoring (ANN-IPMFM) system in injection molding, Int J
Adv Manuf Technol (2008) 36:43–52.
Jouni Ikaheimonen, Kauko Leiviska, Jari Ruuska and Jarkko Matkala,
Nozzle clogging prediction in continuous casting of steel, 2002, 15th
Triennial World Congress, Barcelona, Spain.
Abhilash E, Joseph M. A and Prasad Krishna, Prediction of dendritic
parameters and macro hardness variation in permanent mould casting
of al-12%si alloys using artificial neural networks, (2006) FDMP,
vol.2, no.3, pp.211-220.
L.A. Dobrzanski, Kowalski M and Madejski J, Methodology of the
mechanical properties prediction for the metallurgical products from
the engineering steels using the Artificial Intelligence methods,
Journal of Materials Processing Technology 164–165 (2005) 1500–
1509.
Mehmet Sirac Ozerdem and Sedat Kolukisa, Artificial neural network
approach to predict the mechanical properties of Cu–Sn–Pb–Zn–Ni
cast alloys, Materials and Design 30 (2009) 764–769.
Sha W. and Edwards K.L., The use of artificial neural networks in
materials science based research, Materials and Design 28 (2007)
1747–1752.
Dobrzanski L. A, Maniara R, Sokolowski J, Kasprzak W, Krupinski
M and Brytan Z, Applications of the artificial intelligence methods
for modeling of the ACAlSi7Cu alloy crystallization process, Journal
of Materials Processing Technology 192–193 (2007) 582–587.
[82] Rong Ji Wang, Junliang Zeng and Dian-wu Zhou, Determination of temperature difference in squeeze casting hot work tool steel, Int J Mater Form, 2011, DOI 10.1007/s12289-011-1061-8.
[83] Hamid Pourasiabi, Hamed Pourasiabi, Zhila Amirzadeh and Mohammad Babazadeh, Development of a multi-layer perceptron artificial neural network model to estimate the Vickers hardness of Mn–Ni–Cu–Mo austempered ductile iron, Materials and Design 35 (2012) 782–789.
[84] An-hui Cai, Hua Chen, Wei-ke An, Xiao-song Li and Yong Zhou, Optimization of composition and technology for phosphate graphite mold, Materials and Design 29 (2008) 1835–1839.
[85] L.A. Dobrzanski, M. Krupinski, R. Maniara and J.H. Sokolowski, Quality analysis of the Al-Si-Cu alloy castings, Archives of Foundry Engineering, 2007, Vol. 7, Issue 2, 91-94.
[86] L.A. Dobrzanski, Krupinski M and Sokolowski J.H., Methodology of
automatic quality control of aluminium castings, JAMME, 2007, vol.
20, Issues 1-2, 69-78.
[87] Mohsen Ostad Shabani and Ali Mazahery, Microstructural prediction of cast A356 alloy as a function of cooling rate, JOM, vol. 63, No. 8, 2011, 132-136.
[88] Necat Altinkok and Rasit Koker, Modelling of the prediction of
tensile and density properties in particle reinforced metal matrix
composites by using neural networks, Materials and Design 27 (2006)
625–631.
[89] Yu Jingyuan, Li Qiang, Tang Ji and Sun Xudong, Predicting model on ultimate compressive strength of Al2O3-ZrO2 ceramic foam filter based on BP neural network, China Foundry, vol. 8, No. 3, 2011, 286-289.
[90] Daponte P. and Grimaldi D., Artificial neural networks in measurements, Measurement, 23 (1998), 93-115.
[91] ASM Metals Handbook, Casting, 9th Edition, vol. 15, 1998.
Appendix:
Figure: Ishikawa diagram for artificial neural networks