Introduction to System Modeling and Control





Introduction
Basic Definitions
Different Model Types
System Identification
Neural Network Modeling
Mathematical Modeling (MM)




A mathematical model represents a physical
system in terms of mathematical equations.
It is derived from physical laws
(e.g., Newton's laws, Hooke's law, circuit laws)
in combination with experimental data.
It quantifies the essential features and
behavior of a physical system or process.
It may be used for prediction, design
modification, and control.
Engineering Modeling Process
[Flowchart: Theory and Data characterize the Engineering System; a mathematical model (e.g., f = m·dv/dt + b·v) is solved numerically, producing solution data; model reduction then supports control design, with graphical visualization/animation of the results.]
Example: Automobile
• Engine Design and Control
• Heat & Vibration Analysis
• Structural Analysis
System Variables
Every system is associated with three types of variables:
[Block diagram: input u → System (state x) → output y]
• Input variables (u) originate outside the system and are not affected by what happens inside the system.
• State variables (x) constitute a minimal set of system variables necessary to completely describe the state of the system at any given time.
• Output variables (y) are a subset, or a functional combination, of the state variables that one is interested in monitoring or regulating.
Mathematical Model Types
Lumped-parameter vs. distributed; continuous-time vs. discrete-event.
Most general (state-space) model:
  ẋ = f(x, u, t)
  y = h(x, u, t)
Input-output model:
  y^(n) = f(y^(n−1), …, ẏ, y, u^(n), …, u̇, u, t)
Linear time-invariant (LTI) model:
  ẋ = Ax + Bu
  y = Cx + Du
LTI input-output model:
  y^(n) + a_1·y^(n−1) + … + a_{n−1}·ẏ + a_n·y = b_0·u^(n) + … + b_{n−1}·u̇ + b_n·u
Transfer function model:
  Y(s) = G(s)U(s)
Discrete-time model: replace ẋ(t) by x(t+1) and y^(i)(t) by y(t+i).
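The LTI state-space model above can be simulated directly. A minimal sketch, using a forward-Euler step and system matrices chosen purely for illustration (not taken from the slides):

```python
import numpy as np

# Forward-Euler simulation of the LTI model x' = Ax + Bu, y = Cx + Du.
# Illustrative first-order system: x' = -2x + u, y = x.
A = np.array([[-2.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])

dt, T = 0.001, 5.0
n = int(T / dt)
x = np.zeros((1, 1))
u = np.ones((1, 1))            # unit step input
y = np.zeros(n)
for k in range(n):
    y[k] = (C @ x + D @ u).item()
    x = x + dt * (A @ x + B @ u)   # Euler step

# Steady state of x' = -2x + u with u = 1 is x = 0.5
print(round(y[-1], 3))
```

The same loop generalizes to any (A, B, C, D) dimensions; only the matrix shapes change.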
Example: Accelerometer (Text 6.6.1)
Consider the mass-spring-damper system shown below (it may be used as an accelerometer or seismograph):
[Free-body diagram: mass M acted on by spring force fs and damper force fd; relative displacement y = u − x]
fs(y): position-dependent spring force, y = u − x
fd(ẏ): velocity-dependent damper force
Newton's 2nd law:
  M·ẍ = fd(ẏ) + fs(y)
Linearized model (using ÿ = ü − ẍ):
  M·ÿ + b·ẏ + k·y = M·ü
Accelerometer Transfer Function
Accelerometer model: M·ÿ + b·ẏ + k·y = M·ü
Transfer function: Y(s)/A(s) = 1/(s² + 2ζωₙs + ωₙ²)
  ωₙ = (k/M)^(1/2), ζ = b/(2Mωₙ)
Natural frequency ωₙ, damping factor ζ.
The model can be used to evaluate the sensitivity of the accelerometer through its impulse response and frequency response.
[Bode diagram: magnitude (dB) and phase (deg) of Y/U vs. normalized frequency ω/ωₙ]
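The frequency response behind the Bode diagram can be computed by evaluating G(jω) numerically. A sketch for the second-order transfer function above, with illustrative values of ωₙ and ζ (not taken from the text):

```python
import numpy as np

# Evaluate |G(jw)| for G(s) = 1/(s^2 + 2*zeta*wn*s + wn^2),
# the accelerometer transfer function (wn, zeta are illustrative).
wn, zeta = 10.0, 0.1
w = np.logspace(-1, 3, 500)          # frequency grid, rad/s
s = 1j * w
G = 1.0 / (s**2 + 2 * zeta * wn * s + wn**2)
mag_db = 20 * np.log10(np.abs(G))    # magnitude in dB for the Bode plot

# For light damping the resonant peak sits near w = wn,
# where |G(j*wn)| = 1/(2*zeta*wn^2)
peak = np.abs(G).max()
print(abs(peak - 1 / (2 * zeta * wn**2)) < 1e-3)
```

Plotting `mag_db` against `w` on a log axis reproduces the magnitude curve of the Bode diagram.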
Mixed Systems
• Most systems in mechatronics are of the mixed type, e.g., electromechanical, hydromechanical, etc.
• Each subsystem within a mixed system can first be modeled as a single-discipline system.
• Power transformations among the various subsystems are used to integrate them into the overall system.
• The overall mathematical model may be assembled into a system of equations, or a transfer function.
Electro-Mechanical Example
Input: voltage u; output: angular velocity ω.
[Schematic: armature circuit with resistance Ra, inductance La, and current ia driving a DC motor with inertia J and friction B]
Electrical subsystem (loop method):
  u = Ra·ia + La·(dia/dt) + eb,  eb = back-emf voltage
Mechanical subsystem:
  Tmotor = J·ω̇ + B·ω
Electro-Mechanical Example
Power transformation:
  Torque-current: Tmotor = Kt·ia
  Voltage-speed: eb = Kb·ω
where Kt: torque constant, Kb: velocity constant. For an ideal motor, Kt = Kb.
Combining the previous equations results in the following mathematical model:
  La·(dia/dt) + Ra·ia + Kb·ω = u
  J·ω̇ + B·ω − Kt·ia = 0
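The two coupled motor equations can be simulated directly. A sketch with forward-Euler integration and parameter values chosen purely for illustration:

```python
import numpy as np

# Forward-Euler simulation of the DC-motor model
#   La * dia/dt = -Ra*ia - Kb*w + u
#   J  * dw/dt  = -B*w + Kt*ia
# Parameter values are illustrative, not from the slides.
Ra, La, J, B, Kt, Kb = 1.0, 0.01, 0.01, 0.1, 0.5, 0.5

dt, T = 1e-4, 5.0
ia = w = 0.0
u = 1.0                        # step voltage input
for _ in range(int(T / dt)):
    dia = (-Ra * ia - Kb * w + u) / La
    dw = (-B * w + Kt * ia) / J
    ia += dt * dia
    w += dt * dw

# Steady-state speed: w = Kt*u / (Ra*B + Kt*Kb)
print(round(w, 3))
```

Setting both derivatives to zero recovers the steady-state speed used in the check: ω = Kt·u/(Ra·B + Kt·Kb).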
System Identification
Experimental determination of the system model. There are two methods of system identification:
• Parametric identification: the input-output model coefficients are estimated to "fit" the input-output data.
• Frequency-domain (non-parametric): the Bode diagram [G(jω) vs. ω in log-log scale] is estimated directly from the input-output data. The input can be either a sweeping sinusoid or a random signal.
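The frequency-domain approach can be sketched as follows: excite the system with a sinusoid and correlate the steady-state output against sin/cos at the same frequency to estimate one point of G(jω). Here a simulated first-order plant stands in for experimental data; all values are illustrative:

```python
import numpy as np

# Estimate |G(j*w0)| of a plant from sinusoidal input-output data.
# The plant G(s) = k/(T s + 1) is simulated in place of an experiment.
k_true, T_true, w0 = 10.0, 0.1, 5.0
dt, N = 1e-3, 40000
t = np.arange(N) * dt
u = np.sin(w0 * t)

y = np.zeros(N)
x = 0.0
for i in range(N):                     # forward-Euler plant simulation
    y[i] = x
    x += dt * (-x + k_true * u[i]) / T_true

Tper = 2 * np.pi / w0
n_keep = int(round(10 * Tper / dt))    # last ten full periods (transient gone)
ys, ts = y[-n_keep:], t[-n_keep:]
a = 2 * np.mean(ys * np.sin(w0 * ts))  # in-phase component
b = 2 * np.mean(ys * np.cos(w0 * ts))  # quadrature component
gain_est = np.hypot(a, b)

gain_true = k_true / np.hypot(T_true * w0, 1.0)
print(abs(gain_est - gain_true) < 0.05)
```

Repeating this at many frequencies (or using a random input with spectral estimation) yields the full Bode diagram.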
Electro-Mechanical Example
Transfer function (with La = 0):
  Ω(s)/U(s) = (Kt/Ra)/(Js + B + Kt·Kb/Ra) = k/(Ts + 1)
[Step-response plot: for a step input u, the output rises to the steady-state value k·u with time constant T (here k = 10, T = 0.1); T is read off as the time to reach ≈63% of the final value]
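The graphical read-off of k and T can be mimicked numerically: k is the steady-state gain and T is the time to reach ≈63.2% of the final value. A sketch on synthetic step-response data generated from k = 10, T = 0.1:

```python
import numpy as np

# First-order identification from a unit-step response y = k*(1 - e^(-t/T)).
# The "data" are synthetic, generated with k = 10, T = 0.1.
k_true, T_true = 10.0, 0.1
t = np.arange(0.0, 0.5, 1e-4)
y = k_true * (1 - np.exp(-t / T_true))

k_est = y[-1]                             # final value (unit step input)
T_est = t[np.argmax(y >= 0.632 * k_est)]  # first crossing of 63.2% level

print(k_est, T_est)
```

With noisy data one would average repeated runs or use the least-squares method described next.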
Comments on First-Order Identification
The graphical method is
• difficult to optimize with noisy data and multiple data sets
• only applicable to low-order systems
• difficult to automate
Least Squares Estimation
Given a linear system with uniformly sampled input-output data (u(k), y(k)), then
  y(k) = a_1·y(k−1) + … + a_n·y(k−n) + b_1·u(k−1) + … + b_n·u(k−n) + noise
The least-squares curve-fitting technique may be used to estimate the coefficients of the above model, called an ARMA (Auto-Regressive Moving Average) model.
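The least-squares fit amounts to stacking the lagged data into a regressor matrix and solving one linear problem. A sketch for a first-order model on synthetic data (the true coefficients a1 = 0.9, b1 = 0.5 are illustrative):

```python
import numpy as np

# Least-squares fit of a first-order model
#   y(k) = a1*y(k-1) + b1*u(k-1) + noise
rng = np.random.default_rng(0)
N = 2000
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()

# Stack regressors [y(k-1), u(k-1)] and solve min ||Phi @ theta - Y||
Phi = np.column_stack([y[:-1], u[:-1]])
Y = y[1:]
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

print(np.round(theta, 2))
```

Higher-order models simply add more lagged columns to `Phi`.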
Nonlinear System Modeling
& Control
Neural Network Approach
Introduction
• Real-world nonlinear systems are often difficult to characterize by first-principle modeling.
• First-principle models are often not directly suitable for control design.
• Modeling is often accomplished with input-output maps of experimental data from the system.
• Neural networks provide a powerful tool for data-driven modeling of nonlinear systems.
Input-Output (NARMA) Model
[Block diagram: delayed inputs u[k−1], …, u[k−m] and delayed outputs y[k−1], …, y[k−m] (via unit delays z⁻¹) feed a nonlinear map g]
  y[k] = g(y[k−m], …, y[k−1], u[k−m], …, u[k−1])
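Preparing training data for the NARMA map g amounts to stacking lagged inputs and outputs into regressor vectors. A small sketch (the function and variable names are mine, for illustration):

```python
import numpy as np

# Build NARMA regressors [y(k-m..k-1), u(k-m..k-1)] with targets y(k),
# ready to feed any nonlinear approximator g.
def narma_regressors(y, u, m):
    """Return (X, t): X[i] is a regressor vector, t[i] the target y(k)."""
    N = len(y)
    X = np.array([np.concatenate([y[k - m:k], u[k - m:k]])
                  for k in range(m, N)])
    t = y[m:]
    return X, t

y = np.arange(6.0)           # toy recorded output: 0..5
u = 10.0 + np.arange(6.0)    # toy recorded input: 10..15
X, t = narma_regressors(y, u, m=2)
print(X[0], t[0])
```

For k = 2 and m = 2 the first regressor is [y0, y1, u0, u1] with target y2.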
What is a Neural Network?
• Artificial Neural Networks (ANN) are massively parallel computational machines (program or hardware) patterned after biological neural nets.
• ANNs are used in a wide array of applications requiring reasoning/information processing, including:
  • pattern recognition/classification
  • monitoring/diagnostics
  • system identification & control
  • forecasting
  • optimization
Advantages and Disadvantages of ANNs
Advantages:
• Learning from data
• Parallel architecture
• Adaptability
• Fault tolerance and redundancy
Disadvantages:
• Hard to design
• Unpredictable behavior
• Slow training
• "Curse" of dimensionality
Biological Neural Nets
• A neuron is the building block of biological networks.
• A single-cell neuron consists of the cell body (soma), dendrites, and an axon.
• The dendrites receive signals from the axons of other neurons.
• The pathway between neurons is a synapse with variable strength.
Artificial Neural Networks
• They are used to learn a given input-output relationship from input-output data (exemplars).
• The neural network type depends primarily on its activation function.
• Most popular ANNs:
  • Sigmoidal multilayer networks
  • Radial basis function networks
  • NLPN (Sadegh et al. 1998, 2010)
Multilayer Perceptron
The MLP is used to learn, store, and produce input-output relationships:
  y = Σᵢ wᵢ σ(Σⱼ vᵢⱼ xⱼ)
[Diagram: inputs x₁, x₂ pass through input weights vᵢⱼ, the activation function σ, and output weights wᵢ to produce y]
The activation function σ(x) is a suitable nonlinear function:
• Sigmoidal: σ(x) = tanh(x)
• Gaussian: σ(x) = e^(−x²)
• Triangular (to be described later)
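The single-hidden-layer equation y = Σᵢ wᵢ σ(Σⱼ vᵢⱼ xⱼ) translates directly to code. A sketch with σ = tanh and arbitrary untrained weights (all values illustrative):

```python
import numpy as np

# One-hidden-layer perceptron: y = sum_i w_i * tanh(sum_j v_ij * x_j)
def mlp(x, V, w):
    """x: inputs (n,), V: hidden weights (h, n), w: output weights (h,)."""
    return w @ np.tanh(V @ x)

V = np.array([[1.0, -1.0],
              [0.5, 0.5]])     # hidden-layer weights (2 neurons, 2 inputs)
w = np.array([2.0, -1.0])      # output weights
x = np.array([0.3, 0.1])
y = mlp(x, V, w)
print(float(y))
```

Training (next slides) adjusts V and w so that y matches the target data.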
Sigmoidal and Gaussian Activation Functions
[Plot: sigmoid and Gaussian activation functions vs. x over [−5, 5]]
Multilayer Networks
[Diagram: input x passes through weight matrices W₀, …, Wp to produce output y]
Wk,ij: weight from node i in layer k−1 to node j in layer k
  y = Wpᵀ σ( Wp−1ᵀ σ( ⋯ σ( W1ᵀ σ( W0ᵀ x ) ) ⋯ ) )
Universal Approximation Theorem (UAT)
A single-hidden-layer perceptron network with a sufficiently large number of neurons can approximate any continuous function arbitrarily closely.
Comments:
• The UAT does not say how large the network should be
• Optimal design and training may be difficult
Training
• Objective: given a set of training input-output data (x, yt), FIND the network weights that minimize the expected error
    L = E(‖y − yt‖²)
• Steepest descent method: adjust the weights in the direction of steepest descent of L, making dL as negative as possible:
    dL = E(eᵀ dy) < 0,  e = y − yt
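Steepest-descent training can be sketched as follows: for each exemplar compute the error e = y − yt and move the weights opposite the gradient of the squared error. Network size, learning rate, and target function below are all illustrative choices:

```python
import numpy as np

# Steepest-descent (incremental gradient) training of a one-hidden-layer
# network y = w @ tanh(V @ x) on a toy 1-D target function.
rng = np.random.default_rng(1)
V = 0.5 * rng.standard_normal((8, 1))   # hidden-layer weights
w = 0.5 * rng.standard_normal(8)        # output weights
lr = 0.02                               # learning rate

X = np.linspace(-1, 1, 50).reshape(-1, 1)
T = np.sin(np.pi * X[:, 0])             # target outputs yt

def net_mse():
    return float(np.mean([(w @ np.tanh(V @ x) - yt) ** 2
                          for x, yt in zip(X, T)]))

mse0 = net_mse()
for _ in range(1000):                   # training epochs
    for x, yt in zip(X, T):
        h = np.tanh(V @ x)              # hidden activations
        e = w @ h - yt                  # output error e = y - yt
        grad_w = e * h                  # dL/dw for L = 0.5*e^2
        grad_V = e * np.outer(w * (1 - h ** 2), x)   # chain rule
        w -= lr * grad_w
        V -= lr * grad_V

print(mse0, net_mse())
```

This is plain stochastic gradient descent; backpropagation is the same chain-rule computation organized layer by layer for deeper networks.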
Neural Networks with Local Basis Functions
• These networks employ basis (or activation) functions that exist locally, i.e., they are activated only by a certain type of stimuli.
• Examples:
  • Cerebellar Model Articulation Controller (CMAC, Albus)
  • B-spline CMAC
  • Radial basis functions
  • Nodal Link Perceptron Network (NLPN, Sadegh)
Biological Underpinnings
• Cerebellum: responsible for complex voluntary movement and balance in humans
• Purkinje cells in the cerebellar cortex are believed to have a CMAC-like architecture
Nodal Link Perceptron Network (NLPN) [Sadegh, 95, 98]
• Piecewise multilinear network (extension of the 1-dimensional spline)
• Good approximation capability (2nd order)
• Convergent training algorithm
• Globally optimal training is possible
• Has been used in real-world control applications
NLPN Architecture
[Diagram: input x feeds the basis functions, whose outputs are weighted by wᵢ to produce y]
Input-output equation:
  y = Σᵢ wᵢ φᵢ(x, v)
Basis function:
  φᵢ(x, v) = φᵢ₁(x₁, v)·φᵢ₂(x₂, v) ⋯ φᵢₙ(xₙ, v)
Each φᵢⱼ is a 1-dimensional triangular basis function over a finite interval.
NLPN Approximation: 1-D Functions
Consider a scalar function f(x).
[Plot: piecewise-linear fit through the points (aᵢ, wᵢ) and (aᵢ₊₁, wᵢ₊₁)]
f(x) on the interval [aᵢ, aᵢ₊₁] can be approximated by a line:
  f(x) ≈ (1 − (x − aᵢ)/(aᵢ₊₁ − aᵢ))·wᵢ + ((x − aᵢ)/(aᵢ₊₁ − aᵢ))·wᵢ₊₁
Basis Function Approximation
Defining the activation/basis functions
  φᵢ(x) = (x − aᵢ₋₁)/(aᵢ − aᵢ₋₁),    x ∈ [aᵢ₋₁, aᵢ]
  φᵢ(x) = 1 − (x − aᵢ)/(aᵢ₊₁ − aᵢ),  x ∈ [aᵢ, aᵢ₊₁]
  φᵢ(x) = 0,                         otherwise
and the knot vector a = [a₁, …, a_N]ᵀ, the function f can be expressed as
  f(x) ≈ Σᵢ wᵢ φᵢ(x, a)   (1st-order B-spline CMAC)
This is also similar to fuzzy-logic approximation with “triangular” membership functions.
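The triangular (hat) basis expansion above can be sketched in a few lines: each φᵢ is 1 at its knot and decays linearly to 0 at the neighboring knots, and with wᵢ = f(aᵢ) the sum reproduces the piecewise-linear interpolant. Knot grid and target function here are illustrative:

```python
import numpy as np

# Hat-function (1st-order B-spline CMAC) approximation on a knot grid a.
a = np.linspace(0.0, 1.0, 11)            # knots a_1..a_N

def phi(i, x):
    """Triangular basis function for knot i, evaluated at points x."""
    e = np.zeros(len(a)); e[i] = 1.0
    return np.interp(x, a, e)            # hat shape via linear interpolation

f = lambda x: np.sin(2 * np.pi * x)
w = f(a)                                 # weights = function values at knots

x = np.linspace(0.0, 1.0, 201)
fhat = sum(w[i] * phi(i, x) for i in range(len(a)))
err = np.max(np.abs(fhat - f(x)))
print(err < 0.06)
```

The worst-case error shrinks quadratically with the knot spacing, consistent with the NLPN's stated 2nd-order approximation capability.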
Neural Network Approximation of NARMA Model
[Block diagram: delayed inputs u[k−1], … and delayed outputs y[k−m], … feed the neural network, which produces y]
Question: Is an arbitrary neural network model consistent with a physical system (i.e., one that has an internal realization)?
State-Space Model
[Block diagram: input u → system (states x₁, …, xₙ) → output y]
  x[k+1] = f(x[k], u[k])
  y[k] = h(x[k])
A Class of Observable State-Space Realizable Models
Consider the input-output model:
  y[k] = g(y[k−m], …, y[k−1], u[k−m], …, u[k−1])
When does the input-output model have a state-space realization
  x[k+1] = f(x[k], u[k]),  y[k] = h(x[k]) ?
Comments on State Realization of Input-Output Model
• A generic input-output model does not necessarily have a state-space realization (Sadegh 2001, IEEE Trans. on Automatic Control).
• There are necessary and sufficient conditions for realizability.
• Once these conditions are satisfied, the state-space model may be symbolically or computationally constructed.
• A general class of input-output models may be constructed that is guaranteed to admit a state-space realization.
The Model Form
The following input-output model always admits a minimal state realization:
  g(y_1, …, y_m, u_1, …, u_m) = Σ_{i=0}^{m−2} g_{m−i−1}(y_{i+1}, y_{i+2}, u_{i+1}) + g_1(y_m, u_m)
State Space Realization
The state model of the input-output model is as follows, with y = x_1 (x_i⁺ denotes the next-sample value of x_i):
  x_1⁺ = x_2 + g_1(x_1, u)
  x_2⁺ = x_3 + g_2(x_1, x_1⁺, u)
  ⋮
  x_{m−1}⁺ = x_m + g_{m−1}(x_1, x_1⁺, u)
  x_m⁺ = g_m(x_1, x_1⁺, u)
Neural Networks
• Reduced coupling results in sub-networks.
• Prepackaged software can't be used, but standard training methods are the same.
Nodal Link Perceptron Networks
• Local basis functions, similar to CMAC networks. Reduced coupling also results in sub-networks.
Simulation Example
• Nonlinear mass-spring-damper
• Data sampled at 0.01 s; the output is the velocity of the 2nd mass
Simulation Results
[Four panels comparing model response to system response over 0–10 s:
 I: Linear model. mse = 0.0281; training (static) mse = 0.0059.
 II: NARMA model. mse = 0.0082; training (static) mse = 0.0021.
 III: Neural network. mse = 3.6034e-4; training (static) mse = 0.0016.
 IV: NLPN. mse = 7.2765e-4; training (static) mse = 2.6622e-4.]
Simulation Results
[Four panels comparing model response to system response over 0–8 s:
 I: Linear model. mse = 0.0271.
 II: NARMA model. mse = 0.0067.
 III: Neural network. mse = 5.3790e-4.
 IV: NLPN. mse = 7.1835e-4.]
Conclusions
• A number of data-driven modeling techniques are suitable for an observable state-space transformation.
• Rough guidelines were given for when and how to use NARMA, neural network, and NLPN models.
• NLPN modifications make it an easily trainable option with excellent capabilities.
• Substantial training & design issues include the data sampling rate and input repetition due to the reduced-coupling restriction.
Fluid Power Application
INTRODUCTION
Applications:
• Robotics
• Manufacturing
• Automobile industry
• Hydraulics
Example: EHPV control (electro-hydraulic poppet valve)
• Highly nonlinear
• Time-varying characteristics
• Control schemes needed to open two or more valves simultaneously
Motivation
• The valve opening is controlled by means of the solenoid input current.
• The standard approach is to calibrate the current-opening relationship for each valve.
• Manual calibration is time-consuming and inefficient.
Research Goals
• Precisely control the conductivity of each valve using a nominal input-output relationship.
• Auto-calibrate the input-output relationship.
• Use the auto-calibration for precise control without requiring the exact input-output relationship.
INTRODUCTION
Example:
• Several EHPVs were used to control the hydraulic piston.
• Each EHPV is supplied with its own learning controller.
• The learning controller employs a neural network (NLPN) in the feedback loop.
• Satisfactory results for a single EHPV used for pressure control.
Control Design
Nonlinear system ('lifted' to a square system):
  x_{k+n} = F(x_k, u_k)
Feedback control law:
  u = ψ̂(x_d, ẋ_d) − K_p(x − x_d)
where ψ̂(x_d, ẋ_d) is the neural network output.
The neural network controller is directly trained based on the time history of the tracking error.
Learning Control Block Diagram
Experimental Results