Lavi Shpigelman, Dynamic Systems and control – 67522 – Practical Exercise 1
Dynamic Systems and Control
 Course info.
 Introduction (What this course is about)
Course home page
 Home page: http://www.cs.huji.ac.il/~control
Course Info
 Home page: http://www.cs.huji.ac.il/~control
 Staff:
• Prof. Naftali Tishby (Ross, room 207)
• Lavi Shpigelman (Ross, room 61)
 Class:
• Sunday, 12-3pm, ICNC
 Grading:
• 40% exercises, 60% project
 Textbooks:
• Chi-Tsong Chen, Linear System Theory and Design, Oxford University Press, 1999
• Robert F. Stengel, Optimal Control and Estimation, Dover Publications, 1994
• J.-J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, Englewood Cliffs, New Jersey, 1991
• H. K. Khalil, Nonlinear Systems, Prentice Hall, 2001
Intro – Dynamical Systems
 What are dynamic systems?
Physical things with states that evolve in time.
(Optimal) Control
Objective: Interact with a dynamical system to achieve desired goals.
Examples (note the measures of optimality):
 Stabilize a nuclear reactor within safety limits.
 Fly an aircraft while minimizing fuel consumption.
 Pick up a glass without spilling any milk.
Example: Prosthetics → Bionics
 Problem: Make a leg that knows when to bend.
 Inputs:
• Knee angle.
• Ankle angle.
• Ground pressure.
• Stump pressures.
 Outputs:
• Variable joint stiffness and damping.
Example: Robotics, Reinforcement Learning
 How do you stand up?
 How do you teach someone to stand up?
 Reinforcement learning:
Let the controller learn by trial and error and give it general feedback (reinforce 'good' moves).
 Training a 3-piece robot to stand up:
• Start of training:
• End of training:
Modeling (making assumptions)
 Mathematical relationships.
 Graphical representation (information flow).
[Block diagram: Task Goal → Controller → Control Signals → Plant → Observations, fed back to the Controller]
Control Example: Motor Control
 Plant (controlled system): hand
 Controller: nervous system
 Control objective: task dependent (e.g. hit a ball)
 Plant inputs: neural muscle-activation signals
 Plant outputs: visual, proprioceptive, ...
 Plant state: positions, velocities, muscle activations, available energy, ...
 Controller input: noisy sensory information
 Controller output: noisy neural patterns
Modeling Motor Control
[Block diagram: Task Goal → Brain (controller) → Neural Pattern (control signals) → Hand (plant) → Observations / sensory feedback, returned to the brain]
Details…
Optimal Movements
 Control objective: reach from a to b.
 Fact: there is more than one way to skin a cat...
 How to choose: add an optimality principle.
 E.g. optimality principle: minimum variance at b.
 Modeling assumption(s): control is noisy, with noise proportional to ||control signal||.
 Control problem: find the "optimal" control signal.
 Note: no feedback (open-loop control).
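The signal-dependent-noise assumption can be illustrated with a toy open-loop simulation (every parameter below is invented for illustration, not taken from the slides): scaling up the control signal scales up the endpoint scatter, which is exactly what a minimum-variance principle penalizes.

```python
import random

def reach(u, steps=50, seed=None):
    # Integrate a 1-D point mass driven by a constant control u,
    # corrupted by signal-dependent noise: the noise std is
    # proportional to |u| (the modeling assumption above).
    rng = random.Random(seed)
    x = v = 0.0
    dt = 0.02
    for _ in range(steps):
        noisy_u = u + 0.2 * abs(u) * rng.gauss(0, 1)
        v += noisy_u * dt
        x += v * dt
    return x

def endpoint_std(u, trials=2000):
    # Monte Carlo estimate of the endpoint scatter for control magnitude u.
    xs = [reach(u, seed=i) for i in range(trials)]
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
```

With these assumptions, `endpoint_std(4.0)` comes out larger than `endpoint_std(1.0)`: bigger control signals buy speed at the cost of endpoint variance.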
Modeling Motor Control – Details
 The sensory-motor control loop:
Wolpert DM & Ghahramani Z (2000). Computational principles of movement neuroscience. Nature Neuroscience 3:1212-1217.
State Estimation – step 1
 Open-loop estimate (without feedback).
State Estimation – step 2
 Step 1: the control signal and a forward dynamics model (dynamics predictor) update the state estimate.
 Step 2: sensory information and a forward sensory model (sensory predictor) are used to refine the estimate.
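A minimal scalar sketch of this two-step cycle, in the spirit of a Kalman filter (the dynamics gain a, input gain b, and noise variances q, r below are illustrative assumptions, not values from the slides):

```python
def predict(x, p, u, a=1.0, b=1.0, q=0.1):
    # Step 1: the forward dynamics model advances the state estimate x
    # using the known control signal u; the uncertainty p grows by q.
    x = a * x + b * u
    p = a * a * p + q
    return x, p

def correct(x, p, z, r=0.5):
    # Step 2: the forward sensory model predicts the observation
    # (here simply z_pred = x); the mismatch refines the estimate.
    k = p / (p + r)        # gain: how much to trust the sensor
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

# one predict/correct cycle
x, p = predict(0.0, 1.0, u=1.0)   # dynamics prediction
x, p = correct(x, p, z=1.2)       # sensory refinement
```

The gain k shifts trust between the dynamics prediction and the sensory evidence, which is exactly the refinement described in step 2.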
Context Estimation (Adaptive Control)
Adaptive Control Generation
 An inverse model learns to translate a desired state (sequence) into a control signal.
 A non-adapting, low-gain feedback controller does the same for the state error. Its output is used as an error signal for learning the inverse model.
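This arrangement is often called feedback-error learning. A toy scalar sketch, assuming a linear plant y = g·u and a one-parameter linear inverse model (all gains are invented for illustration):

```python
def feedback_error_learning(target, plant_gain=2.0, lr=0.1, steps=200):
    # Toy plant: y = plant_gain * u.  The inverse model u_ff = w * target
    # is learned online; a fixed low-gain feedback controller supplies
    # u_fb, which doubles as the training error for the inverse model.
    w = 0.0                         # inverse-model weight (ideal: 1/plant_gain)
    for _ in range(steps):
        u_ff = w * target           # feedforward command from inverse model
        y = plant_gain * u_ff       # plant response
        u_fb = 0.2 * (target - y)   # low-gain feedback correction
        w += lr * u_fb * target     # learn from the feedback signal
    return w
```

As the inverse model improves, the feedback controller's contribution (the error signal) shrinks toward zero, and w approaches 1/plant_gain.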
Simple(st) Dynamical System Example
 Consider a shock absorber.
 We wish to formulate a dynamical-system model of the mass that is suspended by the absorber.
 We choose a linear Ordinary Differential Equation (ODE) of 2nd order:
m·y''(t) + b·y'(t) + k·y(t) = u(t)
(net force on the mass = external force u − damping force − spring force)
[Figure: mass m suspended on a shock absorber; external force u, contraction y]
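A quick numerical check of such a second-order model: forward-Euler integration of m·y'' + b·y' + k·y = u, with made-up parameter values. Under a constant force the mass should settle at the static deflection u/k.

```python
def simulate(m=1.0, b=0.5, k=2.0, u=1.0, dt=0.001, t_end=30.0):
    # Forward-Euler integration of m*y'' + b*y' + k*y = u
    # (parameter values chosen only for illustration).
    y, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = (u - b * v - k * y) / m   # acceleration = net force / mass
        v += a * dt
        y += v * dt
    return y

y_final = simulate()   # approaches the static deflection u/k
```

With these values the system is underdamped, so y oscillates before the damper dissipates the motion and y converges to u/k.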
Elements of the Dynamic System
[Block diagram of the plant:]
• Dynamic process: state x evolving with time (differential equations)
• Controllable inputs u
• Process noise w
• Observable process outputs y
• Observation process, with observation noise n, producing observations z
Controllability & Observability of the Dynamic Process States
 Main issues: stability, stabilizability
[Block diagram: the dynamic process states x are driven by controllable inputs u and by an uncontrolled disturbance (noise) w; states may each be controlled or uncontrolled, and observed (through outputs y) or unobserved]
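Controllability of an LTI pair (A, B) can be tested via the rank of the controllability matrix [B, AB, …, A^(n−1)B]. A sketch for the two-state, single-input case (the example systems below are standard textbook cases, not taken from the slides):

```python
def controllable_2state(A, B):
    # For a 2-state LTI system x' = A x + B u (single input),
    # (A, B) is controllable iff the controllability matrix
    # [B, A·B] has full rank, i.e. nonzero determinant.
    ab = [A[0][0] * B[0] + A[0][1] * B[1],
          A[1][0] * B[0] + A[1][1] * B[1]]
    det = B[0] * ab[1] - B[1] * ab[0]
    return det != 0

# double integrator y'' = u: controllable
controllable_2state([[0, 1], [0, 0]], [0, 1])   # True

# decoupled modes where the input never reaches the second state
controllable_2state([[1, 0], [0, 2]], [1, 0])   # False
```

The same rank test with C and A in place of B (the dual problem) gives observability, which is why the two properties are usually treated together.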
Other Modeling Issues*
 Time-varying / time-invariant
 Continuous time / discrete time
 Continuous states / discrete states
 Linear / nonlinear
 Lumped / not lumped (having a state vector of finite/infinite dimension)
 Stochastic / deterministic
More:
 Types of disturbances (noise)
 Control models
* All combinations are possible
Rough course outline
 Review of continuous (state and time), linear, time-invariant state-space models.
• Linear algebra, state-space model, solutions, realizations, stability, observability, controllability
 Noiseless optimal control (nonlinear)
• Loss functions, calculus of variations, optimization methods
 Stochastic LTI Gaussian models
• State estimation, stochastic optimal control
 Model learning
 Nonlinear system analysis
• Phase-plane analysis, Lyapunov theory
 Nonlinear control methods
• Feedback linearization, sliding control, adaptive control, reinforcement learning, ML