NERDLab
Neuro-Engineering Research & Development Laboratory
Research Activities
Mechanical Engineering Department
The University of Texas at Austin,
1 University Station, C2200, Austin, Texas, 78712-0292.
Voice: +1 (512) 471-7852. Fax: +1 (512) 471-7682.
Email: [email protected]
1993-2015
Contents

1 Research Projects
  1.1 Parallel Implementation of Neural Networks
  1.2 Active Vibration Quenching
  1.3 Optimal Mapping and Initialization
      1.3.1 Problem Specific Networks
      1.3.2 Initialization
      1.3.3 Mapping of Complex Systems
      1.3.4 Quantization of Complexity
  1.4 Self-tuning PID Controllers for Boiler
  1.5 Problem Statement
      1.5.1 A Heuristic Error Definition for Backpropagation
  1.6 Modeling Environment of Nonlinear Dynamic Systems using Neural Networks and Bond Graphs
  1.7 Digital VLSI Neuro Chip Design
  1.8 Remanufacturing Engineering Tools for Water-Jet Cutting
      1.8.1 NFD — Neural–Network Feature Detector
      1.8.2 MADA — CMM Automated Data Acquisition
      1.8.3 NOR — Neural–Network Object Reconstruction
      1.8.4 CCG — CNC Code Generation
      1.8.5 Deliverables
  1.9 Robust Automation for Manufacturing
  1.10 System Identification of Non-Linear Systems
  1.11 Comparative analysis of control techniques
2 Relationships to On-Going Projects
3 Research Facilities
4 Research Personnel
5 Recent Publications
Executive Summary
It is indeed a pleasure for me to introduce the Neuro-Engineering Research & Development
Laboratory staff of the Mechanical Engineering Department, The University of Texas at Austin.
We have, I believe, a very strong team for the research and development of neuro-engineering
technologies. We have secured support from state and federal agencies as well as from industry
to develop and transfer technology in the area of neural networks applied to engineering. Our
projects range from the development of new architectures, learning algorithms, and paradigms,
through system identification of nonlinear dynamic systems, adaptive (intelligent) control, fault
diagnostics, optimization, to knowledge representation, reverse engineering, design and precision
machining.
This document gives a glimpse of the resources and current projects at the NERDLab. We
welcome your questions about our research. If you would like additional information, please
contact us by email at [email protected], by voice at
(512) 471-7852, or by fax at (512) 471-7682. Thank you for your interest.
Very truly yours,
Benito Fernández
Assistant Professor and NERDLab Director.
NERDLab
Neuro-Engineering Research & Development Laboratory
Mechanical Engineering Department
The University of Texas at Austin
The NERDLab at UT is one of the newest programs of the Mechanical Engineering Department. The rapid growth in the size and quality of this program in the last year has
corresponded with the growing recognition of The University of Texas as one of the top ranked
American Universities in overall academic quality and the recognition of Neural Networks as a
viable and important technology for the years to come. This document provides information on
the objectives, programs, staff, research interests, and computational and laboratory facilities
available for graduate studies and research in this area.
Program Objectives/Mission: Neural networks constitute a frontier area of research
and industrial activity in several disciplines of science and engineering. The unifying
basis of the technology is the effort to discover and imitate how the human brain works,
and then to apply that understanding to solve complex engineering problems. Our mission,
then, is to serve as a bridge between research in the area of intelligent systems and the
application of these concepts to real-world problems. Our focus is aimed at two fronts:
developing research tools to study, analyze, select, and test neural network paradigms; and
developing technology transfer vehicles through education and through applications of this
area of research to industry.
1 Research Projects

1.1 Parallel Implementation of Neural Networks
Artificial Neural Networks (ANNs) are massively parallel computational models that can be
mapped onto parallel computers to achieve high connections-per-second rates. We are investigating
the mapping of neural networks onto multicomputer networks, together with a general data structure
and a message-based processing algorithm for computation that keeps the processing granularity
sufficiently coarse. The nodes of these networks are the processors, each having a locally
addressable memory. A network communication controller is used to pass information among
nodes.

Neural networks have large communication overheads, so the simulation will be parallelized
to reduce inter-process communication to a minimum. The issues of avoiding deadlocks and of
optimally mapping a network, given the Parallel Neural Network Simulator (PNNS) kernel, are
under study.

Node parallelism is the most efficient way of achieving a high degree of parallelism
in small multicomputer systems with minimal communication overhead. The hardware
consists of three Pac8 NuBus Link Adaptors, each with four Inmos 32-bit T805 25 MHz Transputer
modules, interfaced to an Apple Macintosh. The NuBus Link Adaptor has a data transfer rate
of 1 MB/sec.
The basic building block of an artificial neural network is a node, a simple processing
element. The inputs to a node are the weighted activation levels of other nodes; the weights
associated with each input form the information-storing elements. In ANN simulations these
nodes are topologically arranged in layers. The inputs to the nodes of a given layer come from
the nodes of (topologically) previous layers, while inputs from nodes arranged topologically
ahead carry the previous timestamp (dynamic networks). The nodes of a given layer must be
evaluated before proceeding to the next layer, since the inputs to the next layer share the same
timestamp. A layer can thus be viewed as a wavefront in the computational model during the
simulation of a neural network. The information (or computation) progresses along the layers
during forward propagation and backwards along the same front during backpropagation of
error. Networks of this type are termed Back-Propagation Networks (BPN) or Multi-Layer
Perceptrons (MLP) and are among the most commonly used. Forward propagation involves
calculating the weighted sum of the inputs to a node and passing it through a non-linear
element. In the case of cross-talk, the input is the activation of a node from the same layer at
the previous timestamp. To accommodate such possibilities, a front is sub-divided into two
subfronts: in the first stage the inputs to each node of a layer are calculated and synchronized,
and in the second stage the outputs are evaluated by passing them through the
non-linear function.
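As a small sketch of the layer-as-wavefront computation described above (the function and variable names are illustrative, not taken from the PNNS kernel):

```python
import numpy as np

def forward(layers, x, sigma=np.tanh):
    """Propagate input x through a layered network one wavefront
    (layer) at a time. Stage 1 computes the weighted sums (net
    inputs) of every node in the front and synchronizes; stage 2
    passes them through the non-linear element."""
    a = x
    for W, b in layers:
        net = W @ a + b    # stage 1: weighted sums for the whole front
        a = sigma(net)     # stage 2: evaluate the non-linear function
    return a

# two-layer example: 3 inputs -> 4 hidden nodes -> 2 outputs
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
y = forward(layers, np.array([0.1, -0.2, 0.3]))
```

On a multicomputer, each matrix-vector product would be distributed over the processor nodes, with the synchronization point between the two stages realized by message passing.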
1.2 Active Vibration Quenching
Limitations of passive isolation systems have made non-linear control systems more attractive
for vibration damping. To exploit the properties of terfenol, which provides magnetostrictive
strains for vibration damping, a plant consisting of three terfenol actuators was controlled in
the presence of persistent vibration using Sliding Mode Control (SMC) and neural networks.
The principle of SMC is to define an error surface as a differential operator such that the
control guarantees the system reaches and maintains the error dynamics within the parameters
established by the surface thickness and profile. The plant is an aluminum platform to which
the disturbance signal of the shaker table is mechanically transmitted through the passive
structural impedance of an unenergized terfenol actuator. Three velometers record vibrations
from the table, and these signals are then conditioned using digital filters. The conditioned
signals are tapped as inputs both to the A/D on host computer 1 for control and to a Fourier
analyzer on host computer 2 for analysis. The control apparatus consists of host computer 1,
an i386 machine with three Banshee motherboards (Atlanta Signal Processors Inc.),
each carrying a TMS320C30 Digital Signal Processing chip and an A/D daughter board.
The control algorithm is generated by compiling C code; the A/D–D/A routines are written in
assembly to speed up data acquisition. The executable is loaded onto the TMS320C30
processor boards for standalone operation. The signal received at the A/D is processed by the
SMC algorithm on the TMS320C30, and the calculated control signal is output through the D/A.
This analog output, after passing through the power amplifier, is fed to the terfenol actuator
to achieve the desired reduction in the amplitude of the vibrating table. Host computer 2 is an
i386 machine that receives its input from the signal analyzer (Tektronix 2630 four-channel
Fourier Analyzer). This provides an alternative method of checking the transfer function and
other desired responses of the system without interfering with its operation.
We are currently waiting for a neurochip under development at Motorola to implement the
control algorithm in this architecture.
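As an illustration of the SMC principle of an error surface with a boundary layer (the gains and surface parameters below are arbitrary placeholders, not those of the actual rig):

```python
import numpy as np

def smc_control(e, e_dot, lam=10.0, k=2.0, phi=0.05):
    """Sliding-mode control sketch. The error surface s = de/dt + lam*e
    is a differential operator on the tracking error; the saturated
    switching law drives the system toward s = 0 and keeps the error
    dynamics inside a boundary layer of thickness phi."""
    s = e_dot + lam * e
    return -k * np.clip(s / phi, -1.0, 1.0)
```

On the rig, e would be formed from the conditioned velometer signals inside the TMS320C30 control loop, and the returned value would be written to the D/A.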
1.3 Optimal Mapping and Initialization
There has been active interest in faster learning methods to make backpropagation
networks a feasible option for various applications. Toward this goal, the group has been
investigating issues relating to optimal mapping and initialization. Optimal
mapping results in faster learning and a problem-specific network whose information content is
more transparent, and the complexity reduction achieved by the mapping is what makes
initialization of the network possible.

The main idea being pursued at the NERDLab is to achieve faster learning by defining problem-specific
networks and by initializing the network using various techniques. For complex
problems, graph-theoretic approaches are being investigated.
1.3.1 Problem Specific Networks
Problem-specific networks give rise to an optimal network size and a better distribution of
complexity over the network. The idea of a custom network is, instead of using the black-box
approach, to allocate a part of the network to an identifiable sub-problem. If the
network is required to map a function, say,

f (x, y, z) = g(x, y) · h(y, z)

then the network consists of two subnets, mapping g(x, y) and h(y, z) respectively. This
reduces the complexity of each subnet, getting around the poor scaling of the backpropagation
algorithm as complexity increases. Instead of learning with huge, fully recurrent architectures,
it is better to eliminate unnecessary connections; the problems of over-generalization and long
learning times in feedforward networks carry over to recurrent networks. The custom network
can be used for both dynamic and static systems. In the case of a dynamic system, the state
equations can be used to derive the causality relations between the states and thus obtain the
mapping. An alternate method for finding the mapping has been proposed
using Bond Graphs.
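A minimal sketch of such a custom network for the f = g · h example above (the subnets are untrained and all sizes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def subnet(n_in, n_hidden):
    """A small MLP allocated to one identifiable sub-problem."""
    W1 = rng.normal(size=(n_hidden, n_in))
    W2 = rng.normal(size=(1, n_hidden))
    return lambda v: float(W2 @ np.tanh(W1 @ v))

g_net = subnet(2, 8)   # subnet mapping g(x, y)
h_net = subnet(2, 8)   # subnet mapping h(y, z)

def f_net(x, y, z):
    # the structure f = g * h is fixed a priori; only the two
    # low-dimensional subnets would be learned
    return g_net(np.array([x, y])) * h_net(np.array([y, z]))
```

Each subnet sees only two inputs instead of three, which is exactly the complexity reduction the custom-network approach is after.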
1.3.2 Initialization
Neural networks have the drawback of long training times. The training set, the initial
conditions, and the learning algorithm play a crucial role in network convergence. Based on
the curve-fitting properties of static feed-forward nets, an initialization algorithm (Neural Space
Initialization, NSI) has been developed. NSI together with the NeuroBond Graph is an ideal
combination for initializing recurrent networks used for system identification. The NeuroBond
Graph, or the dynamical equations, can give the interplay and causality relationships between
the different dynamic elements, giving rise to a sparse network. Each subnet of the network can
then be initialized to yield shorter learning periods. For system identification, such methods can
be applied to give a representative estimate of the system parameters. Initializing a network
results in a considerable improvement of the learning process; one of the reasons for the better
performance of Radial Basis Function (RBF) networks is the initialization of their centers, which
is equivalent to network initialization. Some methods of initialization are described in (3).
1.3.3 Mapping of Complex Systems
The “optimal” mapping of a complex system is of utmost importance for handling large
systems. Maximum parallelism and maximum decomposition are crucial for real-time
implementation and for a faster learning process, respectively. At the extreme of maximum
parallelism, the complete mapping of a given function with a single three-layered perceptron
results in no decomposition of the function; this produces a high-dimensional map where
initialization of the network is impossible and the learning time is large due to the high
dimensionality. At the other extreme, decomposition carried as far as possible results in a
network with many layers, so the time taken for forward propagation becomes large, and the
problem of backpropagating through a network with many layers is not well studied. A moderate
decomposition of a complex function, however, facilitates the initialization of the smaller
networks, making it possible to learn very complex functions in reasonable time.
The decomposition of a function is illustrated in the following example [1].

Example 1 (decomposition).

x = f (a, b, c, d, e)                                  (1)
  = f ( f1 ( f11 (a, b), f12 (a, c), f13 (b, d) ),
        f2 ( f21 (e, d), f22 (a, c) ) )

The equation can be rewritten as

x11 = f11 (a, b)
x12 = f12 (a, c)
x13 = f13 (b, d)
x21 = f21 (e, d)
x22 = f22 (a, c)
x1  = f1 (x11 , x12 , x13 )
x2  = f2 (x21 , x22 )
x   = f (x1 , x2 )                                     (2)

where the network emulating equation (1), of input dimensionality five, is broken down into
eight sub-networks emulating

f11 , f12 , f13 , f21 , f22 , f1 , f2 , f

from equation (2), where the maximum input dimensionality is reduced to three, making
initialization feasible. The resulting network has 3 × 3 layers instead of the 1 × 3 of the
original network.
The dependency relation can be expressed as a directed graph G(V, E), giving the dataflow
information. The vertices represent the variables in the set of equations and the edges describe
the dependencies: the edges Ei incident onto a vertex Vi are the variables required to compute
the i-th variable.
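For Example 1 above, the dependency graph can be held as a simple adjacency structure; the fan-in of each vertex then bounds the input dimensionality of the sub-network assigned to it (a rough sketch, with the variable names from the example):

```python
# edges incident onto each vertex: the variables needed to compute it
deps = {
    "x11": ["a", "b"], "x12": ["a", "c"], "x13": ["b", "d"],
    "x21": ["e", "d"], "x22": ["a", "c"],
    "x1": ["x11", "x12", "x13"], "x2": ["x21", "x22"],
    "x": ["x1", "x2"],
}

def fan_in(v):
    """Number of edges incident onto vertex v, i.e. the input
    dimensionality of the sub-network that computes v."""
    return len(deps.get(v, []))

max_dim = max(fan_in(v) for v in deps)   # largest sub-network input size
```

Here max_dim is 3 (for the vertex x1), matching the reduction from input dimensionality five to three discussed above.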
1.3.4 Quantization of Complexity
The graph G gives a way of mapping the set of equations onto a neural network. Each vertex
Vi represents a multilayered (three-layer) network whose number of first-layer nodes equals the
number of incident edges Eij, with one output node. The hidden nodes can be selected based on
the complexity of the function hi that the vertex Vi is mapping. In a complex set of equations, the
number of layers can be arbitrarily large if all the equations are broken down into equations
as simple as possible. To obtain a network with an upper bound on the number of layers, the
graph G has to be reduced to Gl, in which some vertices Vj ∈ Gl represent a composition of two or
more vertices of the graph G.

The reduction G → Gl should be achieved while increasing the complexity of the resulting
graph by the minimum amount. The process requires a definition of network complexity. Here the
complexity of a network is defined as the number of inputs × outputs; since any given (sub)network
has only one output, the complexity reduces to the number of inputs. This definition is
inconsistent in some cases. For example, a network mapping f :
(x, y, z) → x + y + z can be realized with just one linear node, whereas a function of
the form g : x → x², although rated much smaller on this complexity scale than the function
f, cannot. The definition of complexity therefore has to be extended to be function-dependent
(non-linearity-dependent) to cover all cases.
1.4 Self-tuning PID Controllers for Boiler
This part of the project is concerned with self-tuning PID control of a power plant boiler.
The problem can be stated as:
1.5 Problem Statement
Find the optimal PID gains for each channel (in the MIMO plant), optimal in the sense of
a standard performance measure, i.e., the integral of the weighted squared error and input energy:

J = ∫t ( xT Q x + uT R u ) dt

The objective is to minimize this performance index for a linear MIMO system. The linear
MIMO systems are to be obtained at various operating points of the boiler. The self-tuning
PID has been tested on a simple SISO system and on a MIMO system.
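A discrete approximation of the performance index J is straightforward to evaluate along a simulated trajectory (illustrative code only; the plant model and weights are placeholders):

```python
import numpy as np

def performance_index(xs, us, Q, R, dt):
    """Riemann-sum approximation of J = integral of (x'Qx + u'Ru) dt
    over a trajectory of states xs and inputs us sampled every dt."""
    J = 0.0
    for x, u in zip(xs, us):
        J += (x @ Q @ x + u @ R @ u) * dt
    return J

# one-step sanity check with identity weights
Q = np.eye(2); R = np.eye(1)
xs = [np.array([1.0, 0.0])]
us = [np.array([1.0])]
J = performance_index(xs, us, Q, R, dt=1.0)
```

A self-tuner would evaluate J for candidate PID gain sets at each operating point and descend toward the minimizing gains.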
1.5.1 A Heuristic Error Definition for Backpropagation
The network through which the error is backpropagated is divided into two subnets, one
representing the system under study and the other modeling the controller. Note that
backpropagation and the MIT rule coincide if the error function of the network is defined as
the squared error (desired output − network output). The performance measure to be
minimized lacks uniformity in the sense that the terms to be minimized (the quadratic of the
output error and of the control input to the system) are defined at different layers of the
networks, and the intermediate layers are fixed (i.e., the weights corresponding to those layers
are fixed). To quantify a compound error at the output stage of the controller, a concept of
“error of the controller output,” based on the MIT rule, is defined, which expresses the error
at the output of the plant (xT Qx) at the controller output (or plant input), in addition to the
error uT Ru already defined there.
The error of the controller output corresponding to the error at the plant output is defined
with the heuristic MIT rule: the input to the plant is treated as a parameter, and the error in
that parameter is defined as the change the MIT rule requires. Consider a subsystem

x(k + 1) = A x(k) + B u(k)                             (3)
y(k) = C x(k) + D u(k)                                 (4)

with

E(j) = (error(j))T (error(j))
     = (yd (j) − y(j))T (yd (j) − y(j))                (5)

According to the MIT rule, ∇u(k) = −µ Σj ∂E(j)/∂u(k). Defining the “error of the controller
output” corresponding to the error at the plant output as res error(t) ∝ ∇u(t), the compound
error for the backpropagation is res error(t) + u(t) for all t = 0, 1, 2, · · · , T.
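A sketch of the compound-error computation (the output sensitivity dy/du is assumed to be supplied, e.g. by the plant subnet; the gain mu and proportionality constant are illustrative):

```python
import numpy as np

def compound_error(y_d, y, u, dy_du, mu=0.1):
    """Heuristic compound error. With E = (y_d - y)'(y_d - y), the MIT
    rule prescribes a change in the plant input proportional to
    -dE/du = 2 * dy_du' (y_d - y); this 'error of the controller
    output' (res_error) is added to the input-energy error u to form
    the quantity backpropagated through the controller subnet."""
    res_error = 2.0 * mu * dy_du.T @ (y_d - y)
    return res_error + u

e = compound_error(y_d=np.array([1.0]), y=np.array([0.5]),
                   u=np.array([0.2]), dy_du=np.eye(1))
```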
1.6 Modeling Environment of Nonlinear Dynamic Systems using Neural Networks and Bond Graphs
Artificial neural networks (ANNs) have proved themselves to be universal approximators (2; 4)
and work well for system identification (6; 5; 3; 1). The usual approach in the literature
is to train an ANN as a fully connected multi-layer perceptron (MLP) to attain the desired
mapping. This “black-box” approach works very well, except for the usual complaint that it
is hard to interpret what the network has learned, or that it lacks association with the physical
system from which the training data was obtained. Using NeuroBondGraphs (NBG), our
intention is to bridge this gap. Based on “energy-based” modeling techniques, we are able to
“qualitatively” characterize the system components’ interactions in a straightforward manner,
allowing the construction of an ANN that resembles the physical system. The end result is a
sparse network that, after training, has an almost one-to-one correspondence with the physical
system it imitates, giving a “qualitative” mapping of the system components’ nonlinearities.
The basic idea is to use Bond Graphs (BGs) to identify and differentiate the
basic elements of the system being modeled. Then, based on these elements’ unique/defining
characteristics/behavior (see Fig. 1) and the BG connectivity, we are able to generate
the dynamic structure, i.e., the set of first-order nonlinear differential equations that describe
the system’s energy interactions. The “unknown nonlinearities” are the constitutive relations
between elements and are approximated/learned by the neural network from experimental or
simulated input/output training data.
When combined with the network integration method and the system’s inherent mappings,
we get a multi-layer dynamic (recursive) network, as shown in Fig. 2, through the use of the
Neural Space Representation (NSR) topology. This approach gives: (1) the neural network a
specific architecture to work with, so that when training is done the “knowledge” acquired can
be referred to physical parameters, and (2) the bondgraph community a tool to represent and
identify generalized nonlinear systems.

Figure 1: BondGraph Energy-based Modeling Philosophy.
In summary, the basic objectives are to:

• use BondGraphs to identify (qualitatively) the basic “elements” of the system to be
modeled;

• generate, from the bondgraph, the neural network’s dynamic structure; and

• after training on input/output data, generate (quantitatively) the constitutive relations
of the “elements”.
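As a toy illustration of the idea, consider a mass-spring-damper: its bond-graph elements are an inertia (I), a compliance (C), and a dissipator (R). Each unknown constitutive relation gets its own small subnet (untrained stand-ins below), while the junction structure comes from the bond graph and is fixed:

```python
import numpy as np

def subnet(w):
    """1-in/1-out stand-in for a learned constitutive relation."""
    return lambda s: float(np.tanh(w * s))

phi_I = subnet(0.9)   # I element: momentum p -> velocity v
phi_C = subnet(1.1)   # C element: displacement q -> spring force
phi_R = subnet(0.5)   # R element: velocity v -> friction force

def dynamics(p, q, F_in):
    """State derivatives. The wiring and signs below (the junction
    structure) come from the bond graph and are not learned."""
    v = phi_I(p)
    dp = F_in - phi_C(q) - phi_R(v)
    dq = v
    return dp, dq
```

After training on input/output data, each phi would approximate one physical constitutive relation, which is what gives the sparse network its near one-to-one correspondence with the system.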
1.7 Digital VLSI Neuro Chip Design
The objective of this research is to implement in silicon the neural network paradigms and
real-time learning algorithms being developed at the NERDLab.

A neural-network-based algorithm reduces to a weighted average of the various previous
inputs to produce an output. Given a set of patterns representative of the I/O
characteristics of the non-linear plant, the Network for System Identification (NFSI) learns
the plant characteristics at different points in state space and maps them to a structured
parameter list. The output of the actual plant is compared with the network output, and the
resulting error dynamics are analyzed by the Error Modeller, which updates the error dynamics
of the actual plant and the NFSI. The error information is then backpropagated to the plant.
Such a process leads to an optimal (or sub-optimal) controller robust to uncertainty and
measurement error (since the network also acts as an adaptive filter).

Figure 2: DyNet “Hybrid” Architecture.
In selecting an architecture for VLSI implementation of a neural network, it is imperative
to optimize the operations of the basic cell, i.e., the neuron. A single neuron is a part of the
network having connections to neurons in other layers; connectivity to neurons in the same
layer, to enable cross-talk, is a feature that will be incorporated in future versions. Each neuron
is to be laid out in 3 µm CMOS technology on a MOSIS 40-pin pad. The present architecture
consists of a 32-bit data bus and a 6-bit control bus, with Vdd and GND forming the other
pins.

Each neuron is designed to receive inputs from two neurons in the previous layer and send
its output to two neurons in the next layer. Each link consists of a bus for the actual signal
and for the error signal of that link; the link is active only if its link-control line is enabled.
The weights are stored in on-chip static RAM and are updated on a signal from the global
controller.

The global controller performs the task of enabling the links on the various neurons and of
providing the initialization and weight-update enable signals; its exact architecture is yet to
be decided. The global controller is to incorporate provisions to upload weights (held in
off-chip memory) on initialization and to download them at the end of the learning process.

With this architecture it is possible to build an arbitrarily large network in which the final
output delay is proportional only to the number of intermediate layers.
1.8 Remanufacturing Engineering Tools for Water-Jet Cutting
The basic objective of this project is to develop and implement tools that aid the reverse-engineering
process from part/concept to product using applied-intelligence tools such as Artificial
Neural Networks (ANNs). We propose to automate the “reverse engineering” process, starting
with the planning and data acquisition from an existing part (or a drawing of it) and ending
with the CNC-code generation. It is difficult to manage a maintenance utility without overstocking
replacement components, so the capability of reproducing different components with a
reasonable turnaround time not only alleviates the stocking problem but increases the
readiness and independence of the utility. Remanufacturing is expensive and time-consuming;
if “one-of-a-kind” reproduction is desired, the cost/benefit ratio of the current process may not
be favorable. With a neural network, we propose an architecture to improve this ratio, allowing
“retooling” to become a competitive and almost vital technology. Most of the required technology
already exists and has been proven at UT Austin, and most of the required hardware is
already available in-house at Kelly AFB.

Accurate coordinate measurements can be obtained with apparatus such as a CMM (Coordinate
Measuring Machine); unfortunately, current data acquisition is performed manually. If the
approximate mathematical description of the part can be generated automatically and fed into
an automatic manufacturing process such as CNC machining, then a fully automatic
remanufacturing process can be achieved, saving considerable resources (time and cost).
The basic architecture of this project is described in Fig. 3. The setup consists of three
separable subsystems, plus an optional fourth:

NFD The Neural Feature Detector, which consists of an inexpensive vision system that acquires
a “rough” bitmap picture of the part, which is then fed to a NNet algorithm to detect the
part outline, some salient features, and their relative positions.

MADA The CMM Automated Data Acquisition. This subsystem automates the data
gathering with the CMM system that is currently used in manual form. The system will
take the NFD findings and plan the path of the CMM probe (or a mounted camera) for
outline and feature coordinate measurements.

NOR The Neural Object Reconstructor. This subsystem will take the data collected by MADA,
together with the NFD information, and extract (compress) the information into features
and/or a part description. A possible alternative is to generate an AutoCAD file as the
subsystem output.

CCG The CNC Code Generation. This item is optional and will consider the development of
CNC code given the AutoCAD description. There is commercially available software to
perform this task, but if Kelly’s AFLC staff feels complete integration is desirable, additional
funding could be used to establish a neural-network-based database retrieval and
CNC-code generation. Also under this subsystem, a parallel proposal is being submitted
on the UDM (Universal Die Maker), which will allow the generation of universal (reusable)
dies for metal forming.
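The dataflow among the subsystems can be sketched as a simple pipeline (the function bodies below are placeholders; the real stages involve NNets and CMM hardware):

```python
def nfd(bitmap):
    """Neural Feature Detector: bitmap -> outline + feature list (stub)."""
    return {"outline": bitmap["outline"], "features": ["hole", "slot"]}

def mada(detected):
    """CMM Automated Data Acquisition: plan the probe path over the
    detected features and return measurements (stub)."""
    return {f: {"measured": True} for f in detected["features"]}

def nor(measurements, detected):
    """Neural Object Reconstructor: confirm features and build a part
    description (stub); could instead emit an AutoCAD file."""
    return {"part": detected["features"], "data": measurements}

def ccg(part_model):
    """Optional CNC Code Generation from the reconstructed part (stub)."""
    return ["; machine " + f for f in part_model["part"]]

bitmap = {"outline": [(0, 0), (1, 0), (1, 1), (0, 1)]}
detected = nfd(bitmap)
code = ccg(nor(mada(detected), detected))
```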
Figure 3: RE-Tool diagram
In what follows, a more detailed description of the different subsystems is included.
1.8.1 NFD — Neural–Network Feature Detector
Using the bitmap image captured by the vision system, a NNet can be devised to “detect” the
“basic” features of the part, create a list of the features encountered, and determine the relative
positions of features and an outline of the part. As a starting point we will try the concept with
2-D objects having some basic features. Neural network architectures well suited for this task
exist in the open literature; we anticipate the use of MLP (Multi-Layer Perceptron), ART
(Adaptive Resonance Theory) networks, and the Neocognitron.
1.8.2 MADA — CMM Automated Data Acquisition
After the outline of the object, its features, and their relative positions have been established,
we are in a position to plan a sweep of the CMM system to automatically obtain the true
distances associated with the specific list of features. Standard path-generation algorithms will
be used for this task, with a small variation to account for reflective force feedback at the CMM
probe. MADA will generate a data set associated with each feature and with the part outline.
1.8.3 NOR — Neural–Network Object Reconstruction
Given the feature list and the data associated with it, confirmation of the features and
reconstruction of the part are possible. Note that the NFD is not an extremely accurate system,
since the vision system is not accurate enough; NFD is used as a filter on the features, while
the accuracy of the measurements is left to the CMM system. Existing architectures and
learning algorithms under development at UT will be used to detect new features (confirming
the NFD guesses) and estimate their parameter values. The output of this unit could be a file
readable by AutoCAD.
1.8.4 CCG — CNC Code Generation
This is an optional task/component of the overall system. If there are no tools to automatically
generate CNC code from an AutoCAD file, suitable code can be implemented, or the
developed code can be interfaced with commercial software tools. A neural network could be
implemented for library search and matching operations; specific networks such as the ASDM
(Adaptive Sparse Distributed Memory) have proven well suited for such tasks.
1.8.5 Deliverables
At the end of the project we expect to have the major components that prove the feasibility
of such a system as an effective tool in remanufacturing. Further development, training, and
improvements are expected, given the large amount of parts processed at Kelly.
1.9 Robust Automation for Manufacturing
The feasibility of using Artificial Neural Networks (ANNs) as a tool for modeling and controlling
complex machining processes is being studied in the Neuro-Engineering Research & Development
Laboratory. Machine tools are justifiably called the foundation of industry: most
industrial parts are made by machine tools, especially in mass production by Computer
Numerically Controlled (CNC) machines. Improving the precision of CNC machining processes
will be an important contribution to the automation of the manufacturing industry. A novel
research program on Robust Automation for Manufacturing (RAM), concentrating on Light
Weight High Precision Machining (LWHPM) using artificial neural networks, is currently being
implemented in the laboratory.

The most common error that causes machining inaccuracy is the lack of precision of the
machine itself over a large control volume, due to faulty calibration, thermal compliance,
and permanent deformation of the machine structure. The second most common error is due
to the dynamic interaction between the machine and the workpiece: in CNC turning processes,
as the tool cuts into the part, the part deflects, and the larger the depth of cut, the more the
part deflects, resulting in larger error. At present the most common way of maintaining accuracy
is to minimize this interaction by using a small depth of cut and feed rate, which results in
small throughput.
In the unified approach for improving machining precision in the turning process using Artificial Neural Networks, a Neural Network is used to learn the inverse mapping between the commanded and machined (actual) part dimensions generated by a CNC turning process. Several different part profiles are machined on the same CNC turning machine with the feed rate and spindle speed held fixed, and multiple samples of each profile are machined. It is observed that the error is largest around the center of the part, and larger toward the supported end than toward the clamped end. The positions, the actual diameters at these locations, and the desired diameters are taken as training data for the Neural Network. The network learns the nonlinear mapping from actual dimensions to desired dimensions along the entire part length. For given desired part dimensions, the network generates the CNC codes required to correct the static and dynamic errors of the machine. After training, the network's generalization capability is tested by presenting new part profiles to it. Based on the experimental results it can be concluded that the Neural Network can learn the characteristics of the machine tool and generate compensation codes that correct the inaccuracies due to calibration errors and deformation interactions. Further research incorporating additional parameters, such as changes in spindle speed and feed rate, is in progress.
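The inverse-mapping idea can be sketched in a few lines: a small network is trained to output the compensation that is added to the desired diameter. The deflection-error model, profile sizes, and network settings below are illustrative assumptions standing in for the lab's measured turning data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the turning process: along a part of unit
# length, the deflection error is largest near mid-span and grows toward
# the supported end (an invented error model, not the lab's data).
def machined(commanded, z):
    return commanded - 0.02 * np.sin(np.pi * z) * (1.0 + 0.5 * z)

z = rng.uniform(0.0, 1.0, 300)          # axial positions along the part
desired = rng.uniform(0.9, 1.1, 300)    # desired diameters
# Training targets for the inverse map: the correction that must be
# added to the desired diameter so the machined part comes out right.
correction = 0.02 * np.sin(np.pi * z) * (1.0 + 0.5 * z)

X = np.column_stack([z, desired])
Y = correction[:, None]

# One-hidden-layer network trained by plain batch backpropagation.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(10000):
    H = np.tanh(X @ W1 + b1)
    err = (H @ W2 + b2) - Y
    W2 -= lr * (H.T @ err) / len(X); b2 -= lr * err.mean(0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    W1 -= lr * (X.T @ dH) / len(X); b1 -= lr * dH.mean(0)

# Generate compensated commands for a new (unseen) profile.
z_new = np.linspace(0.1, 0.9, 9)
d_new = np.full_like(z_new, 1.0)
Xn = np.column_stack([z_new, d_new])
cmd = d_new + (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()
raw_err = np.abs(machined(d_new, z_new) - d_new).max()   # uncompensated
comp_err = np.abs(machined(cmd, z_new) - d_new).max()    # compensated
```

After training, the compensated commands produce parts measurably closer to the desired profile than the raw commands, mirroring the generalization test described above.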
1.10 System Identification of Non-Linear Systems
The goal of this project is to develop robust controllers based on dynamic neural network architectures. The proposed controller is neural based: it assumes a model of the plant/process in the form of a recurrent multilayer perceptron (DyNet) and builds a robust controller based on sliding mode control principles. Since sliding controllers assume full state feedback, we will also develop nonlinear observers based on the same neural network architecture.
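The sliding-mode half of this scheme can be illustrated on a scalar second-order plant. The plant, the approximate model (standing in for the neural model), the gains, and the boundary layer below are all illustrative assumptions, not the project's actual design.

```python
import numpy as np

# Sliding-mode regulation sketch for xdd = f(x, xd) + u, where the
# controller only knows an imperfect model f_hat of f. The gain k is
# chosen to dominate the model error |f - f_hat|; the boundary layer
# phi softens the switching term (all values illustrative).
lam, k, phi = 2.0, 1.5, 0.05

def f_true(x, xd):          # "real" plant dynamics, unknown to the controller
    return -0.5 * xd - np.sin(x)

def f_hat(x, xd):           # controller's approximate model
    return -0.4 * xd - 0.9 * np.sin(x)

def sat(s):                 # saturated switching inside the boundary layer
    return np.clip(s, -1.0, 1.0)

x, xd = 1.0, 0.0            # initial state; regulate to the origin
dt = 0.001
for _ in range(10000):      # 10 s of simulated time
    s = xd + lam * x                                # sliding surface
    u = -f_hat(x, xd) - lam * xd - k * sat(s / phi)
    xdd = f_true(x, xd) + u
    x, xd = x + dt * xd, xd + dt * xdd
```

Despite the deliberate model mismatch, the state is driven to the sliding surface and then to the origin; in the project, the fixed `f_hat` would be replaced by the trained recurrent network.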
The complexities involved in using non-linear basis functions for system identification, in deriving identification algorithms that apply to a reasonably large class of non-linear systems, and in using the physical non-linear model for control restrict the use and viability of non-linear techniques. The most commonly used nonparametric identification methods for non-linear systems employ the Volterra series, the Wiener kernel approach, and the expansion of the restoring force in a series of orthogonal sets of functions such as Chebyshev polynomials. The Volterra series and Wiener approaches require excessive computation time and have serious convergence problems. In this respect neural networks offer a viable technique for non-linear system identification and control.
The capability of a static neural network to map any function to any desired accuracy, together with a system-independent adaptive learning algorithm that can accommodate on-line parameter variations and modeling errors, makes neural networks a promising tool for non-linear control. The neural network, a universal approximator, forms a universal structure for a large class of non-linear mappings. In addition, neural networks can be viewed as a massively parallel architecture able to meet the computational needs of any non-linear system.
This project focuses on a five-layered network which brings out the dynamics of the system explicitly. It also gives rise to a fixed and defined number of system states which can be altered, making it possible to model only the dominant dynamics. Such an approach can give a hierarchical structure of system identification, with one network modeling only the dominant dynamics whereas another models all the dynamics; such a structure may result in more robust system identification and fault diagnosis. The architecture consists of two generic sub-block structures, modeling f(·, ·) and g(·) respectively, thus giving the flexibility of sub-identification for each block as well as input-output identification. In general a mathematical model is available before starting, which can be used to initialize each sub-block by Neural Space Initialization (NSI) [3] and static backpropagation. The architecture also makes it possible to associate some physical meaning with the dynamic set of nodes. Note that each sub-block consists of an ordinary static network with the properties of a nonlinear approximator.
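The two-sub-block structure can be sketched as follows: one static network f models the state transition and a second static network g models the output map. Sizes and the random initialization below are illustrative; in the project each block would be initialized from the mathematical model and trained by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Dimensions of the illustrative system: 2 states, 1 input, 1 output.
n_x, n_u, n_y, n_h = 2, 1, 1, 8

def make_block(n_in, n_out):
    """An ordinary static one-hidden-layer network (a sub-block)."""
    return (rng.normal(0, 0.3, (n_in, n_h)), np.zeros(n_h),
            rng.normal(0, 0.3, (n_h, n_out)), np.zeros(n_out))

def static_net(params, v):
    W1, b1, W2, b2 = params
    return np.tanh(v @ W1 + b1) @ W2 + b2

f_block = make_block(n_x + n_u, n_x)   # state sub-block: x[k+1] = f(x[k], u[k])
g_block = make_block(n_x, n_y)         # output sub-block: y[k] = g(x[k])

def rollout(u_seq, x0):
    """Run the recurrent structure forward over an input sequence."""
    x, ys = x0, []
    for u in u_seq:
        ys.append(static_net(g_block, x))
        x = static_net(f_block, np.concatenate([x, u]))
    return np.array(ys)

u_seq = rng.normal(0, 1, (50, n_u))
y = rollout(u_seq, np.zeros(n_x))      # 50-step output trajectory
```

Because f and g are separate static networks, each can be identified (or assigned physical meaning) on its own, which is the sub-identification flexibility described above.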
1.11 Comparative analysis of control techniques
A real-time control system has been implemented in the NeuroEngineering Laboratory of the Mechanical Engineering Department at The University of Texas at Austin. Several control techniques, including PD, Fuzzy Logic Control, Sliding Mode Control, Expert System Control, and Neural Network Control, are compared in terms of their control surfaces, stability, and robustness through computer simulation and real-time experimental implementation. The cart-inverted-pendulum is used to demonstrate the performance of each controller in real time. It is critical to validate the performance of controlling the nonlinear system through real-time implementation: simulation results alone cannot give a complete measure of a controller's performance, and computational requirements are also an important factor. We have implemented real-time controllers using the LabView package from National Instruments, Inc. and intend to compare the controller performances. For data acquisition we have adopted a high-resolution audio-frequency-range analog I/O board, the NB-A2100, for the Macintosh II family of computers. This board has 16-bit, simultaneously-sampled analog input with 64-times-oversampling delta-sigma modulating ADCs and digital anti-aliasing filters for extremely high-accuracy data acquisition. The sensor for measuring the angle of the pendulum is a potentiometer. All the controllers are designed using the utilities in LabView and Think-C.
From the viewpoint of controller design, the PD is the easiest controller to implement. However, in the real-time control process, a PD loses its stability due to the effect of disturbances and is very sensitive to changes in the plant parameters. This verifies the fact that it is hard to control a nonlinear system with a linear controller. Based on the experimental results, a fuzzy controller has a high capability to accommodate the nonlinearity and uncertainty in the control system. The membership function plays a very critical role in this control process: once a good membership function is defined, the performance of a fuzzy controller will be very robust in the face of system uncertainty. However, one weakness is that fuzzy control lacks formal synthesis and analysis techniques. The control process depends upon the experience of human operators, whose qualitative "rules of thumb" can be described as fuzzy decision rules.
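As a toy illustration of how the membership function shapes the control action, the sketch below evaluates three triangular memberships over the pendulum angle and defuzzifies by centroid. The partitions and rule outputs are invented for illustration, not the lab's tuned rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_force(angle):
    """One fuzzy inference step: angle (rad) -> corrective force (N)."""
    # Degrees of membership in three linguistic sets over the angle.
    neg  = tri(angle, -0.6, -0.3, 0.0)
    zero = tri(angle, -0.3, 0.0, 0.3)
    pos  = tri(angle, 0.0, 0.3, 0.6)
    # Rules: NEGATIVE angle -> push +10 N, ZERO -> 0 N, POSITIVE -> push -10 N.
    w = np.array([neg, zero, pos])
    out = np.array([10.0, 0.0, -10.0])
    # Centroid defuzzification (weighted average of rule outputs).
    return float((w * out).sum() / (w.sum() + 1e-9))
```

Overlapping memberships make the control surface a smooth interpolation between the rules, e.g. a half-positive angle yields a half-strength negative force; retuning the triangles reshapes that surface, which is why the membership function is so critical.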
The sliding mode controller has the capability to handle modeling imprecision and disturbances, based on a robust nonlinear control algorithm. However, it may excite high-frequency dynamics when switching control is selected.
Figure 4: Real-Time Control block diagram.
Expert system control uses the same rules as
fuzzy control but, in general, is discrete and cannot predict all the possibilities of the system uncertainty with its search algorithm. Training the neural network control system is time-consuming, since the backpropagation algorithm requires a long time to train the controller network; the neural controller nonetheless shows satisfactory performance compared with the others. In this research, simulations and experiments are conducted to illustrate the possible and actual performance of each controller. There is not enough evidence to conclude that one controller is the best. The research indicates the weaknesses and strengths of each controller and provides a basis for choosing a controller for real-time applications.
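The PD case discussed above can be sketched in a few lines on a simplified pendulum-angle model (not the lab's cart-pendulum rig; gains and parameters are illustrative). The linear PD law stabilizes the nonlinear plant for a small initial tilt, though, as noted, it degrades under disturbances and parameter changes.

```python
import numpy as np

# Simplified inverted-pendulum angle dynamics: theta_dd = (g/l) sin(theta) + u,
# stabilized by a PD law designed on the linearized model.
g_over_l = 9.81          # g/l for an illustrative 1 m pendulum
Kp, Kd = 40.0, 8.0       # PD gains chosen so Kp > g/l (illustrative)

theta, omega = 0.2, 0.0  # initial tilt of 0.2 rad, at rest
dt = 0.002
for _ in range(5000):    # 10 s of simulated time
    u = -Kp * theta - Kd * omega           # PD law on the angle error
    alpha = g_over_l * np.sin(theta) + u   # nonlinear plant response
    theta, omega = theta + dt * omega, omega + dt * alpha
```

With these gains the closed loop is well damped and the angle decays to zero within a few seconds; adding an unmodeled disturbance torque or changing g/l quickly exposes the sensitivity that motivates the nonlinear controllers compared here.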
2 Relationships to On-Going Projects
Currently the principal investigators are involved in several projects with immediate relevance and significance to these research areas. The projects that provide the technology base are with the Department of Energy (DOE), the State of Texas Energy Research Application Program (ERAP), and the National Science Foundation (NSF).
The DOE project focuses on enhanced diagnostics and control of nuclear power plant components through the use of ANNs. The major emphasis is on local (component) diagnostics and model predictive control. The project uses a recurrent multilayer perceptron and an associative memory for diagnostics and control. Accurate computer models of nuclear power plants and plant data can be made available to the proposed NSF project. The ERAP project's objective is to reduce the consumption of fossil fuels used by electric-generating power plants through a combined thermal optimization and control approach based on ANNs. The plan to achieve that goal involves developing general-purpose optimization algorithms that can optimize the thermal set-point profiles of the plant variables, and redesigning the plant control systems using a multivariable control approach to enhance the plant's transient responses to various input disturbances. A hierarchical control is proposed that will be implemented in three phases. The NSF project addresses system fault diagnosis issues associated with the operation of power systems, using Adaptive Critic NNs. The approach includes studies toward the augmentation of the Adaptive Critic (AC) into an integrated diagnostics approach which incorporates an ART network as a trainer and a Competitively Inhibited Neural Network (CINN) as the critic.
These networks are used for incipient fault detection, where the physical relevance of the network data to the process is crucial. Neuro Bond Graph (NBG) is an offspring tool developed as part of this task. Due to resource limitations, however, more funds are needed to support this effort and to carry these ideas through validation and implementation. For classification, growing networks are being developed to incorporate new (never encountered) situations.
3 Research Facilities
For Neuro Engineering research, a parallel processor consisting of 12 INMOS T805 transputers is hosted by a Macintosh IIci with an external chassis. Software has been developed, and development continues, for parallel as well as sequential processors. The software developed, DyNet and µNet, can emulate most Neural Network paradigms. They are flexible, user-friendly shells with graphic interfaces that are portable to a variety of platforms; so far there are working versions on the Macintosh, IBM PC, RISC 6000, and Sun with Motif under X-Windows.
For real-time control there is an i386 machine with 3 Banshee motherboards (Atlanta Signal Processors Inc.), each with a TMS320C30 Digital Signal Processing chip and an A/D daughter board. There are also a Macintosh IIci and a Macintosh IIfx with National Instruments LabView software, 2 TMS320C30 Digital Signal Processing (DSP) boards, Direct Memory Access boards connected through the Real-Time System Interface (RTSI), and an Audio Frequency Fourier Analyzer (AFFA) board for fast data acquisition and processing.
The Mechanical Engineering Department at The University of Texas at Austin has neuro-computing facilities with extensive applications and graphics software.
Additionally, the main campus of The University of Texas at Austin provides various digital computers and extensive software. The following is a partial list of some of the available computer hardware:
Machine                                                 Number
Cyber 170/750                                           2 (dual)
Convex C120                                             1
Cray X-MP/24                                            1
Cray X-MP/14se                                          1
DEC VAX 11/750                                          6
DEC VAX 11/785s                                         4
DEC VAX 11/875                                          2
DEC VAX 6410                                            1
DEC VAX 8300                                            2
DEC VAX 8800                                            1
Encore UNIX System                                      8 NS32332 CPUs
IBM 3081-D                                              1
IBM 3090/200E                                           1
IBM SYSTEM 36                                           1
LEVCO Super Server (12 INMOS T-805 Transputers each)    2
Macintosh IIfx                                          2
Macintosh IIci                                          2
Macintosh Quadra 900                                    1
Sun-2/170                                               3
RISC 6000                                               10
4 Research Personnel
The Neuro Engineering Research and Development (NERD) Laboratory at The University of Texas at Austin has been established and is devoted to exploring new neural network architectures, developing fast and robust learning rules, and applying ANNs to solve real-world engineering problems. Currently the NERDLab has a staff of 12, actively working on ANN research in the areas of system identification, optimization, fault diagnosis, algorithms, controls, and manufacturing.
Faculty participating in the NERDLab are well known in their fields and show a strong commitment to the technology.
Dr. Benito Fernández, an Assistant Professor in ME at The University of Texas at Austin, has worked in modeling, simulation, and control of nonlinear systems for the past 11 years. His areas of specialty are multivariable nonlinear control (he has published 8 papers in this area) and neural networks for nonlinear system identification (he has 8 papers in this area). He has obtained funding from DOE, ERAP, and NSF in the area of neural networks applied to prediction, optimization, and control of power systems, and has developed computer codes for this purpose. Dr. Fernández has technical degrees in Chemical, Materials, and Mechanical Engineering.
Dr. Michael D. Bryant, an Associate Professor in ME at The University of Texas at Austin, has worked on tribology-related research for the past 13 years. His main research interests are in tribology, surface contact, system dynamics, and modeling. Dr. Bryant is a PYI (Presidential Young Investigator Award) recipient, has published approximately 20 technical papers on these topics, has served in various capacities in national and international meetings on the topic, and has consulted for several companies. He is especially noted for his research activities in surface contact.
5 Recent Publications
[1] Bryant, M.D. , B. Fernández, N. Wang, V.V. Murty, V. Vadlamani, T.S. West, “Active
Vibration Control in Structures using Magnetostrictive Terfenol with Feedback and/or
Neural Network Controllers”, Conference on Advances in Adaptive Sensory Materials and
Their Applications, Blacksburg, VA, April 27-29, 1992. Also submitted to the Journal of
Intelligent Material Systems and Structures.
[2] Fernández, B., “NeuroBondGraph: Development Environment for Modeling of Nonlinear
Dynamic Systems using Neural Networks and Bond Graphs,” Artificial Neural Networks
In Engineering Conference, ANNIE’92, Nov. 15-18, 1992, Saint Louis, MO
[3] Fernández, B. “Tools for ANN Learning”, Artificial Neural Networks In Engineering Conference, ANNIE’91, Nov. 10-13, 1991, Saint Louis, MO.
[4] Chang, W. R., R. K. Tumati, and B. Fernández, “Improving Machining Precision in
Turning Process Using Artificial Neural Networks” ASME Journal of Dynamic Systems,
Measurement, and Control, 1992. (submitted for publication).
[5] Fernández, B., “NeuroBondGraphs: Physically-based Modeling of Nonlinear Dynamic
Systems using Neural Networks and Bond Graphs”. ASME Journal of Dynamic Systems,
Measurement, and Control, 1992. (submitted for publication).
[6] Fernández, B. “Why are Neural Networks Suitable for Nonlinear Systems?” NERDLab
(NeuroEngineering Research & Development Laboratory). The University of Texas at
Austin, Austin, Texas. T.R. 90-NL-001. October 1990.
[7] Fernández, B. “About Learning in Neural Networks.” NERDLab (NeuroEngineering
Research & Development Laboratory). The University of Texas at Austin, Austin, Texas.
T.R. 90-NL-002. October 1990.
[8] Fernández, B. “DyNet: Shell for Developing Artificial Neural Networks.” NERDLab
(NeuroEngineering Research & Development Laboratory). The University of Texas at
Austin, Austin, Texas. T.R. 90-NL-003. November 1990.
[9] Fernández, B. “Neural Space Representation of Nonlinear Systems Dynamics”. NERDLab
(NeuroEngineering Research & Development Laboratory). The University of Texas at
Austin, Austin, Texas. T.R. 90-NL-004. November 1990.
[10] Chang, W. R., R. K. Tumati, and B. Fernández, “Improving Machining Precision in Turning Process Using Artificial Neural Networks”. NERDLab (NeuroEngineering Research
& Development Laboratory). The University of Texas at Austin, Austin, Texas. T.R.
91-NL-005. October 1991.
[11] Fernández, B., R.K. Tumati, “A Literature Survey of Electro-Rheological (ER) Fluids & their Applications in Engineering”. NERDLab (NeuroEngineering Research & Development Laboratory). The University of Texas at Austin, Austin, Texas. T.R. 91-NL-006. November 1991.
[12] Bryant, M. D., B. Fernández, N. Wang, V. V. Murty, V. Vadlamani, T. S. West, “Active
Vibration Control In Structures Using Magnetostrictive Terfenol With Feedback And/or
Neural Network Controllers”. NERDLab (NeuroEngineering Research & Development
Laboratory). The University of Texas at Austin, Austin, Texas. T.R. 91-NL-007 December 1991.
[13] Fernández, B., “µNet: An Interactive Environment for the Development of Networks”.
NERDLab (NeuroEngineering Research & Development Laboratory). The University of
Texas at Austin, Austin, Texas. T.R. 91-NL-008 December 1991.
[14] Fernández, B., R.K. Tumati, “A Review on Fixture Design Automation”. NERDLab (NeuroEngineering Research & Development Laboratory). The University of Texas at Austin, Austin, Texas. T.R. 91-NL-009. February 1992.
[15] Fernández, B., “NeuroBondGraphs: Physically-based Modeling of Nonlinear Dynamic
Systems using Neural Networks and Bond Graphs”. NERDLab (NeuroEngineering Research & Development Laboratory). The University of Texas at Austin, Austin, Texas.
T.R. 92-NL-010. February 1992.
[16] Fernández, B., R.K. Tumati, “Applications of Artificial Neural Networks in Manufacturing”. NERDLab (NeuroEngineering Research & Development Laboratory). The University of Texas at Austin, Austin, Texas. T.R. 92-NL-011. August 1992.
References
[1] Chu, R., Shoureshi, R., and Tenorio, M., “Neural Networks for System Identification,”
Proceedings of the American Control Conference, Pittsburgh, PA, pp. 916-921, June 1989.
[2] Cybenko, G., “Approximation by Superposition of a Sigmoidal Function,” Mathematics of
Control, Signals, and Systems, 2:303-314, 1989.
[3] Fernández, B. “Tools for Artificial Neural Networks Learning.” in Intelligent Engineering Systems Through Artificial Neural Networks, C.H. Dagli, S.R.T. Kumara, and
Y.C. Shin Editors, Proceedings of the Artificial Neural Networks in Engineering (ANNIE’91) conference, Nov. 10-13, 1991, St. Louis, MO.
[4] Hecht-Nielsen, R., “Theory of Backpropagation Neural Network,” Proceedings of the International Joint Conference on Neural Networks, Vol. I, pp. 593-605, 1987.
[5] Narendra, K. S. and Parthasarathy, K., “Identification and Control of Dynamical Systems
Using Neural Networks,” IEEE Transactions on Neural Networks, Vol.1, No.1, March 1990.
[6] Rumelhart, D.E., J.L. McClelland, & the PDP Research Group, Editors. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. I: Foundations, MIT Press/Bradford Books, Cambridge, MA, 1986.