IMAGE TRACING LASER SYSTEM
ECSE-4460 Control System Design
Final Project Report
Jason Duarte
Azmat Latif
Stephen Sundell
Tim Weidner
May 8, 2006
Rensselaer Polytechnic Institute
Abstract
This report describes the design approach to build an image-tracing laser system
using two degrees of freedom for pan and tilt motion. The design model parameters are
obtained through MATLAB, SolidWorks, and LabVIEW. The system is divided into
three major subsystems: image analysis, trajectory planning, and control design. Image
analysis is used to identify a pattern to be traced. The trajectory planner returns the joint
trajectories to follow a desired path. Finally, the controller positions the links to the
desired locations. The design objectives are to traverse an image at a rate of 1 ft/sec with
0% overshoot and minimum steady state errors.
Table of Contents
1. INTRODUCTION
2. PROFESSIONAL AND SOCIETAL CONSIDERATIONS
3. DESIGN PROCEDURE
   3.1. IMAGE PROCESSING
   3.2. TRAJECTORY GENERATION
   3.3. INVERSE KINEMATICS
   3.4. CONTROL SYSTEM DESIGN
4. DESIGN DETAILS
   4.1. RANGE OF MOTION
   4.2. PRELIMINARY MODEL DEVELOPMENT
   4.3. FRICTION IDENTIFICATION
   4.4. NONLINEAR MODEL
   4.5. INVERSE KINEMATICS
   4.6. TRAJECTORY GENERATION
   4.7. LINEAR MODEL
   4.8. CONTROLLER DESIGN
5. DESIGN VERIFICATION
   5.1. MODEL VERIFICATION
   5.2. IMAGE CALIBRATION
6. COSTS
7. CONCLUSIONS
8. REFERENCES
APPENDICES
   APPENDIX A: PAN ASSEMBLY
   APPENDIX B: TILT ASSEMBLY
   APPENDIX C: MASS PROPERTIES FOR PAN AXIS
   APPENDIX D: MASS PROPERTIES FOR TILT AXIS
   APPENDIX E: FRICTION IDENTIFICATION CODE
   APPENDIX F: VELOCITY ESTIMATION USING FINITE DIFFERENCE
   APPENDIX G: VELOCITY ESTIMATION USING PULSE PERIOD
   APPENDIX H: MODEL LINEARIZATION CODE
   APPENDIX I: STATEMENT OF CONTRIBUTION
List of Figures
FIGURE 1: BLOCK DIAGRAM OF SYSTEM
FIGURE 2: ONE DIMENSIONAL PID CONTROLLER
FIGURE 3: LASER RANGE OF MOTION ON THE TRACING PLANE
FIGURE 4: LASER AND CAMERA MOUNTING ASSEMBLY
FIGURE 5: PAN AND TILT VELOCITY VS. TIME
FIGURE 6: PAN AXIS FRICTION IDENTIFICATION
FIGURE 7: TILT AXIS FRICTION IDENTIFICATION
FIGURE 8: PAN-TILT NONLINEAR SIMULATION
FIGURE 9: PAN-TILT SYSTEM DYNAMICS
FIGURE 10: FRICTION MODEL
FIGURE 11: BASIC PAN-TILT SYSTEM
FIGURE 12: SIMPLE IMAGES AND TRAJECTORY PROFILES
FIGURE 13: COMPLEX IMAGES AND TRAJECTORY PROFILES
FIGURE 14: PID CONTROLLER
FIGURE 15: PAN AXIS STEP RESPONSE
FIGURE 16: TILT AXIS STEP RESPONSE
FIGURE 17: TRACKING RESPONSE OF NONLINEAR SYSTEM
FIGURE 18: VERIFICATION MODEL FOR SINGLE JOINT
FIGURE 19: SIMULATED RESPONSES TO CONSTANT TORQUES
FIGURE 20: MEASURED RESPONSES TO CONSTANT TORQUES
FIGURE 21: VERTICAL IMAGE CALIBRATION
FIGURE 22: HORIZONTAL IMAGE CALIBRATION
FIGURE 23: MAIN FUNCTION FOR PATH FINDING
FIGURE 24: IMAGE SUBFUNCTION 1
FIGURE 25: IMAGE SUBFUNCTION 3
FIGURE 26: IMAGE SUBFUNCTION 5
FIGURE 27: PIXEL COORDINATES OF IMAGES TO BE TRACED
List of Tables
TABLE 1: PERFORMANCE SPECIFICATIONS
TABLE 2: PAN AXIS FRICTION PARAMETERS
TABLE 3: TILT AXIS FRICTION PARAMETERS
TABLE 4: COST OF EQUIPMENT
TABLE 5: COST OF ADDITIONAL ITEMS
TABLE 6: COST OF LABOR
1. Introduction
The goal of this project is to create an image-tracing laser system capable of tracing
patterns consisting of curves and polygons. Several similar designs are used in industry today; predominant applications include robots for spray painting, arc welding, and laser cutting.

Most industrial robotic applications use CAD drawings to define the curves to be cut or etched. The image-tracing laser system instead uses a webcam to identify the image to be traced. Several image analysis functions are then used to determine an appropriate path. Figure 1 illustrates the block diagram for the system.
[Block diagram: Snap Image → Obtain Image Coordinates → Translate to World Coordinates → Inverse Kinematics → Create Trajectory → Controllers]
Figure 1: Block diagram of system
First a picture of the image to be traced is snapped using a webcam. The pixel
coordinates of the image are obtained and then converted to real world coordinates. The
real world coordinates are fed through the inverse kinematics to obtain the corresponding
joint angles. The paths and the given actuator velocity and acceleration constraints are
used to generate a trajectory for the laser pointer to follow. Finally, a controller is used to
give the desired performance.
Specifications
The laser is mounted parallel to an initially horizontal plate that tilts about two axes in order to control the direction of the beam. Each axis is driven by an electric motor and controlled in software using position feedback. The intent is to achieve high accuracy and repeatability, as summarized in Table 1.
TABLE 1: PERFORMANCE SPECIFICATIONS
Range of Motion: The laser tilts ±14° and pans ±18°. The laser is located 3 feet from the image, giving a maximum image size of 2' x 1.5'.

Speed: The speed at which the laser traces an image plays a vital role in increasing throughput on an assembly line. The laser travels at 12 inches/sec at a distance of 3 feet.

Accuracy: The most important aspect of any tracking system is accuracy, so the system should exhibit zero overshoot from the traced curve and no steady-state error. At a distance of 3 feet the laser stays within 1/8 inch of the desired position.

Noise Tolerance: Because various hues in an image can create unwanted curves, the image processing software must be able to increase the contrast of the image if needed. Very dark or black figures on a white background are used. In addition, the system must be able to reject disturbances and return to its intended path.
2. Professional and Societal Considerations
One clear advantage of using robotic manipulators is that certain tasks can be completed
quickly and efficiently. The combination of minimal fixturing and high execution speed
can be economically beneficial for companies.
In addition, several applications of robots may save lives. For example, a worker can
develop respiratory irritation and metabolic toxicity from inhaling paint components and
spray drift. Therefore, it is desirable to use a robot to aid the worker by creating a safer
work environment.
It is with these considerations in mind that the accuracy and safety of the image tracing
laser system are of greatest importance. To reduce any harmful effects of the laser, the
laser is turned off between different tracings.
3. Design Procedure
3.1. Image Processing
In order to trace a figure accurately, the image must first be captured using a webcam. A histogram function is used to increase the contrast between the background and the figure. After an image is taken, it is passed through the histogram function and analyzed; if the contrast is insufficient, the image is discarded and another one is taken and processed in the same way. Once a picture is approved it is passed on for path finding. This cuts down on noise problems caused by the camera and lighting.
Once an image is retrieved, the pixel values will be searched to locate the positions of all
the white pixels (threshold will cause the image to be inverted) making up the figure.
After finding the first white pixel the pixels at a certain radius from the one found will be
checked iteratively until the next white pixel is found. Assuming that there is a line, the
next search will begin in the same direction that the first two pixels were pointing. If a
white pixel is not found in this direction, the search will sweep in a clockwise direction
until found. Since the image will have no breaks in it and no lines will cross each other,
the search algorithm will traverse the whole perimeter and return to the beginning.
Recording all the points along the way will yield the trajectory that must be followed by
the laser. Finally the pixel coordinates are converted to real world coordinates.
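A minimal MATLAB sketch of this radius-sweep search is given below. The actual implementation was written in LabVIEW (Section 5.2), so the function name, radius handling, and angular step here are illustrative assumptions rather than the code that was used.

    function path = trace_perimeter(bw, r)
    % Sketch of the perimeter search: bw is a binary image with figure
    % pixels set to 1, r is the search radius in pixels.
    [row, col] = find(bw, 1);                  % first white pixel in scan order
    path = [col, row];                         % store points as (x, y) pixels
    ang  = 0;                                  % initial search direction
    for k = 1:5000                             % hard limit on path length
        found = false;
        for d = 0:10:350                       % sweep from the last heading
            a  = ang - d*pi/180;
            xy = round(path(end,:) + r*[cos(a), -sin(a)]);
            if xy(1) >= 1 && xy(2) >= 1 && xy(2) <= size(bw,1) && ...
               xy(1) <= size(bw,2) && bw(xy(2), xy(1))
                path  = [path; xy];            % save the new point
                ang   = a;                     % keep searching in this direction
                found = true;
                break
            end
        end
        if ~found || (k > 10 && norm(path(end,:) - path(1,:)) < r)
            break                              % dead end, or back at the start
        end
    end
    end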
3.2. Trajectory Generation
Continuous path motion is used when there is a definite geometric path to follow such as
spray painting. The path of a robot describes the position and orientation of the
manipulator. A trajectory contains information about the path plus the desired speed at
which the robot traverses along that path. The velocity and acceleration profile must be
kept within the actuator limits in order for the robot to follow a given path.
Trajectory planning can be conducted in joint variable space or in Cartesian space [1].
The advantage of planning in joint variable space is that the planning is done directly
with the controlled variables (joint angles). It is therefore a simpler approach which is
easier to plan. However, most manipulation tasks are given in terms of the Cartesian
world coordinates. For example, in our design we will be using a webcam to identify the
path the laser pointer should take in order to trace a figure. The path is specified in terms
of world coordinates and not joint variables.
Equation 1 shows how the desired trajectory can be represented in terms of a vector w in R^6 known as the tool configuration vector:

    w = [ P ; R ]        (1)

P and R represent the position and orientation of the end-effector relative to the base frame. For our design we can consider the vector as being in R^3, since we are only controlling the position of the laser pointer and not its orientation.
3.3. Inverse Kinematics
Given a desired position the corresponding joint variables must be determined. The
procedure to map from the Cartesian space to joint space is known as inverse kinematics.
However, before the inverse kinematics can be solved for, the mapping from joint space to Cartesian space must be determined. This procedure is known as forward kinematics.
3.4. Control System Design
The purpose of a controller is to compare the actual output to the desired input and
provide a control signal which will reduce the error to zero or as close to zero as possible
[2]. Initially we will test how accurately we can position each link. Therefore, the input to our system will be the desired joint angles [θ1, θ2] calculated from the inverse kinematics.
A common approach to robot control that is used in many commercial robots is single-axis PID control [3]. To describe the PID control we let r(t) be the desired joint angles and q(t) the actual joint angles. The error is given by Equation 2:

    e(t) = r(t) - q(t)        (2)

Ideally the error should be zero, but in practice it varies, particularly when the reference input r(t) is changing. A common technique used to control a robot is to employ n independent controllers, one for each joint [3]. The general control law is given by Equation 3:

    τ(t) = K_P e(t) + K_I ∫₀ᵗ e(σ) dσ + K_D ė(t)        (3)

The gains {K_P, K_I, K_D} are n × n diagonal matrices, indicating that each axis is controlled independently. For our design the gain matrices are 2 × 2. Figure 2 shows a block diagram for a single-axis PID controller.
Figure 2: One dimensional PID controller
A PID controller provides quick response, good control of system stability, and low
steady-state error. These characteristics along with our familiarity with PID controllers
make this type of controller an attractive initial approach.
The gains of the PID controller must be adjusted to meet our design specifications. The
gains can be determined and simulated using the root locus method. Another thing to
consider in the control design is the presence of friction. To compensate for friction,
viscous and Coulomb friction terms can be fed forward in the controller to cancel its
effect.
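As a concrete illustration, the MATLAB sketch below gives one discrete-time update of the single-axis PID law of Equation 3 with the friction feed-forward terms; the structure fields and names are placeholders rather than the implementation used in the project.

    function [tau, state] = pid_axis(r, q, qdot, state, gains, fric, Ts)
    % One discrete-time PID update for a single joint (Equation 3) with a
    % viscous + Coulomb friction feed-forward term. All names are illustrative.
    e            = r - q;                      % position error
    state.ei     = state.ei + e*Ts;            % integral of the error
    ed           = (e - state.e_prev)/Ts;      % derivative of the error
    state.e_prev = e;
    tau = gains.Kp*e + gains.Ki*state.ei + gains.Kd*ed ...   % PID terms
        + fric.Fv*qdot + fric.Fc*sign(qdot);                 % cancel modeled friction
    end

Here state.ei and state.e_prev carry the integrator and previous error between samples and must be initialized to zero, and Ts is the control sample period.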
4. Design Details
4.1. Range of Motion
In order to correctly calibrate from the pixel to world coordinates the laser is positioned
directly in the center of the 2’x 1.5’ image exactly 3 feet away. The size of the image
(2’x1.5’) is chosen because of camera viewing angle restrictions from 3 feet. An image
plane diagram is shown below in Figure 3.
Figure 3: Laser Range of motion on the tracing plane
This ensures that the maximum range of motion needed in opposite directions is equal. Simple trigonometry can be used to determine the maximum angular travel of the laser from the center of the image for either the pan or the tilt motion, as worked out below. The pan range of motion is ±18° and the tilt range of motion is ±14°.
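A one-line MATLAB check of this trigonometry, assuming the 2' x 1.5' image is centered 3 feet from the laser:

    d        = 3;                    % laser-to-image distance (ft)
    pan_max  = atand(1.0/d)          % half of the 2 ft width: 18.4 deg, i.e. about +/-18 deg
    tilt_max = atand(0.75/d)         % half of the 1.5 ft height: 14.0 deg, i.e. about +/-14 deg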
4.2. Preliminary Model Development
Since the system is quite complex, an accurate nonlinear model needs to be developed. In this case a Lagrange-Euler dynamic model is used, for which the kinetic and potential energies of the system must be found.

A SolidWorks model of the system, shown in Figure 4, was created and used to provide the masses and inertia tensors of both the pan and tilt bodies. The inertia tensors for body A (pan) and body B (tilt) are found to be:
    IA = [ 0.0012  0.0000 -0.0001
           0.0000  0.0007 -0.0001
          -0.0001 -0.0001  0.0008 ]  kg·m²

    IB = [ 0.0002 -0.0001  0.0001
          -0.0001  0.0007 -0.0001
           0.0001 -0.0001  0.0006 ]  kg·m²

Masses for both bodies are found to be:

    mA = 0.5603 kg
    mB = 0.3468 kg
SolidWorks models and total mass properties for both bodies can be found in Appendices A through D.
Figure 4: Laser and Camera Mounting Assembly
Professor Wen's pantilt.m script is used to define the symbolic equations of motion for the system. The script takes the body masses, inertia tensors, and centers of gravity as inputs. Running it in MATLAB returns the inertia matrix, velocity coupling matrix, gravity loading vector, and total energy.
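For reference, the SolidWorks mass properties above can be entered in MATLAB as follows before running the script; the exact argument list of pantilt.m is not reproduced in this report, so only the data entry is shown.

    mA = 0.5603;                              % pan body mass (kg)
    mB = 0.3468;                              % tilt body mass (kg)
    IA = [ 0.0012  0.0000 -0.0001;            % pan body inertia tensor (kg*m^2)
           0.0000  0.0007 -0.0001;
          -0.0001 -0.0001  0.0008];
    IB = [ 0.0002 -0.0001  0.0001;            % tilt body inertia tensor (kg*m^2)
          -0.0001  0.0007 -0.0001;
           0.0001 -0.0001  0.0006];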
4.3. Friction Identification
Friction parameters were identified by varying the input torque and recording the system's steady-state angular velocity. A LabVIEW program was created (Appendix E) to send constant voltages ranging from -1 V to 1 V to the system in 0.05 V increments. A 1 V pulse was sent to the system for 1 second before each voltage test to overcome stiction. Each voltage was then applied for 30 seconds, long enough for the steady-state velocity to be reached, and the velocity of the system was calculated throughout this duration.

After all voltages were tested, the program generated the angular velocity versus time plots shown for the pan and tilt axes in Figure 5.
Figure 5: Pan and Tilt Velocity vs. Time
An array was also created containing the final velocities of the system at each given
voltage. These values were exported to Excel, and voltage was converted to torque using
Equation 4.
 m  V  N  Kt  Ka
 m  V  (2.47 / .869)6.3  .0432  .1
 m  0.077389V
(4)
It must be noted that, prior to this testing, the amplifiers were tested and calibrated to verify that they indeed supplied 0.1 A/V. In the case of the pan axis, back EMF induced by the pan motor resulted in a measured gain of 0.08 A/V instead of 0.1 A/V. This value was used in the torque equation for the pan axis to ensure the most accurate friction values possible.
Torque versus velocity plots were then created to identify the friction terms, as shown in Figures 6 and 7. Linear regression was performed on each plot: the slopes of the best-fit lines give the viscous friction, and the Coulomb friction is the value at which the best-fit lines intersect the torque axis. Tables 2 and 3 display all calculated friction values, and a sketch of the regression follows the tables.
Figure 6: Pan Axis Friction Identification (applied torque vs. steady-state load angular velocity; best-fit lines y = 0.0003x + 0.0638 and y = 0.0004x - 0.041)
Figure 7: Tilt Axis Friction Identification (applied torque vs. steady-state load angular velocity; best-fit lines y = 0.0007x + 0.0584 and y = 0.0006x - 0.0569)
TABLE 2: PAN AXIS FRICTION PARAMETERS

                          Positive   Negative
Viscous (N·m·s/rad)       0.0003     0.0004
Coulomb (N·m)             0.0638     -0.041

TABLE 3: TILT AXIS FRICTION PARAMETERS

                          Positive   Negative
Viscous (N·m·s/rad)       0.0007     0.0006
Coulomb (N·m)             0.0584     -0.0569
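The fits themselves were done in Excel; the MATLAB sketch below reproduces the same regression, with placeholder data standing in for the measured steady-state points.

    % omega: steady-state load velocities (rad/s), tau: applied torques (N*m)
    omega = [-30 -20 -10  10  20  30];        % placeholder data, not the measured values
    tau   = [-0.053 -0.049 -0.045  0.067  0.070  0.073];
    pos   = omega > 0;
    neg   = omega < 0;
    p_pos = polyfit(omega(pos), tau(pos), 1); % [slope, intercept] for positive velocities
    p_neg = polyfit(omega(neg), tau(neg), 1); % [slope, intercept] for negative velocities
    Fv_pos = p_pos(1);  Fc_pos = p_pos(2);    % viscous slope and Coulomb intercept
    Fv_neg = p_neg(1);  Fc_neg = p_neg(2);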
4.4. Nonlinear Model
In order to test and validate the design, a full nonlinear simulation is used. As shown in Figure 8, the simulation consists of the desired input angles, PID controllers for the pan and tilt axes, the pan-tilt dynamics, the actual output angles, and a quantizer in the feedback path.
[Simulink diagram: reference angles [t angle1 angle2] are read from the workspace, demultiplexed, and fed to PID 1 and PID 2; the controller torques drive the pan-tilt dynamics block, and the output angles pass through a zero-order hold and quantizer back to the controllers, with scopes recording the outputs and errors.]
Figure 8: Pan-Tilt Nonlinear Simulation
Through the use of this detailed model one can test different controller configurations
with relative ease. At this point, all system parameters such as mass, inertia, and Coriolis
forces have been identified.
The quantization interval has been set to match the encoder resolution of 2π/4096 rad. Additionally, friction is included with the other pan-tilt dynamics, as shown in Figure 9.
[Simulink diagram of the pan-tilt dynamics block: from the inputs tau, theta, and thetadot, the joint accelerations are computed as thetaddot = Minv·(tau - G - C - F) using the mass matrix inverse, gravity load, Coriolis/centrifugal, and friction blocks.]
Figure 9: Pan-Tilt System Dynamics
The friction dynamics have been determined experimentally, and are modeled according to Equation 5:

    y = sign(x) · (Fv · |x| + Fc)        (5)
In the equation, Fv represents the viscous friction and Fc represents the Coulomb friction.
Since the combination of both viscous and Coulomb friction is discontinuous, the
simulation configuration parameters were modified to disable zero-crossing control. The resulting friction model is shown in Figure 10.
[Simulink diagram: for each axis, sign(thetadot) scaled by the Coulomb gain (fc1, fc2) is summed with thetadot scaled by the viscous gain (fv1, fv2) to form the friction torque output.]
Figure 10: Friction Model
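A minimal MATLAB version of Equation 5, applied per axis as in the friction block, is shown below; it assumes the positive-direction parameters from Tables 2 and 3 for both directions of travel.

    Fv = [0.0003; 0.0007];                    % viscous coefficients (pan; tilt)
    Fc = [0.0638; 0.0584];                    % Coulomb torques (pan; tilt)
    friction = @(thetadot) sign(thetadot) .* (Fv .* abs(thetadot) + Fc);
    tau_f = friction([5; -2])                 % friction torque at 5 rad/s pan, -2 rad/s tilt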
4.5. Inverse Kinematics
A coordinate frame has been assigned at the intersection of the pan and tilt axes. The
laser pointer is placed at the origin of this coordinate frame and the system is placed three
feet from the image plane. The basic setup of our pan-tilt system is illustrated in Figure
11.
[Diagram: the line-of-sight vector n, scaled by t, extends from the pan-tilt origin to the image plane at a distance of 3 ft.]
Figure 11: Basic Pan-Tilt System
A vector n is drawn parallel to the line of sight of the laser. Initially the vector is
designated in the direction:
0
n0
 
 1
A rotation about the x and y axes can be represented by the following rotation matrix:
0
1

R x =  0 cos  2
 0 sin 
2

0 

 sin  2 
cos  2 
 cos 1 0 sin 1 


Ry =  0
1
0 
  sin  0 cos  
1


To obtain a composite rotation we simply multiply the rotation matrices. Here, t
represents the size of the ray vector emanating from the laser, and d is the fixed distance
from the system to the image plane. Using Equation 6, we can specify the joint variables and obtain the real-world coordinates of the laser pointer. Thus we have determined the forward kinematics.
 sin 1 cos  2 
 x 
 t= y 
Rx R y n t = 
sin  2


 
 cos 1 sin  2 
  d 
(6)
To determine the inverse kinematics we must determine which joint angles, when inserted into the forward kinematics, evaluate to a given x, y, and z coordinate. Using Equation 6, the joint angles are solved to give Equations 7 and 8:

    θ1 = atan2(x, d)        (7)

    θ2 = atan2(y cos θ1, d)        (8)
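The two mappings can be checked against each other with a short round trip in MATLAB; the distance and target point below are arbitrary example values.

    d  = 36;                             % laser-to-plane distance (in)
    x  = 8;  y = -5;                     % sample target point on the image plane (in)
    theta1 = atan2(x, d);                % pan angle,  Equation 7
    theta2 = atan2(y*cos(theta1), d);    % tilt angle, Equation 8
    % Forward map: scale the rotated line-of-sight vector until it reaches the plane.
    n = [sin(theta1)*cos(theta2); sin(theta2); -cos(theta1)*cos(theta2)];
    t = d/(cos(theta1)*cos(theta2));     % ray length that reaches z = -d
    p = n*t                              % reproduces [x; y; -d]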
4.6. Trajectory Generation
Using National Instruments (NI) Motion Assistant, a trajectory was generated to trace the images shown in Figures 12 and 13. The inputs to Motion Assistant are the paths of the joint variables obtained from the inverse kinematics and the joint velocity and acceleration limits.
The maximum velocity and acceleration values for the trajectory generation were
determined experimentally using the finite difference and pulse period methods.
Appendix F shows the implementation of the finite difference method, which also returns
acceleration. Appendix G shows the implementation of the pulse period method, which
does not return acceleration.
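The trajectories themselves were produced by Motion Assistant; the MATLAB sketch below only illustrates the kind of velocity-limited profile it generates for a single joint move, using placeholder limits.

    vmax = 2;   amax = 10;               % placeholder joint limits (rad/s, rad/s^2)
    dq   = 0.5;                          % joint displacement for this segment (rad)
    ta   = vmax/amax;                    % time needed to reach vmax
    if dq < vmax*ta                      % short move: triangular velocity profile
        ta = sqrt(dq/amax);  vpk = amax*ta;  T = 2*ta;
    else                                 % long move: trapezoidal velocity profile
        vpk = vmax;  T = ta + dq/vmax;
    end
    t = linspace(0, T, 200);
    v = min([amax*t; vpk*ones(size(t)); amax*(T - t)]);   % velocity profile
    q = cumtrapz(t, v);                  % resulting joint position profile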
The left column of Figures 12 and 13 displays the points that were identified using image analysis, and the right column displays the position profiles for the pan and tilt axes obtained from NI Motion Assistant. Figure 12 illustrates simple shapes consisting of circular and straight lines. Figure 13 displays more complex shapes: a star and a hand-drawn image of a tree.
Figure 12: Simple Images and Trajectory Profiles
Figure 13: Complex Images and Trajectory Profiles
4.7. Linear Model
Creating a linear system is useful in order to develop an optimal controller for the system.
Linearization simplifies the pan-tilt model by focusing on each angle ( pan , tilt )
independently, while effects on each other are treated as disturbances.
In order to linearize the system about a point, numerical values for  pan and  tilt are used.
Because the pan angle has no gravity loading, an angle of zero can be used. For the tilt
axis, half of the range of motion ( 26.5 ) is used.
These operating-point angles, along with the system parameters calculated previously, are loaded into MATLAB via the script pantiltmodel.m. The MATLAB script pantilt_lin.m (Appendix H), obtained from the Spring 2003 ECSE 4962 course website [4], then creates the state-space matrices used in Equation 9.
    G(s) = C (sI - A)⁻¹ B + D        (9)
Open-loop transfer functions are also determined in this script. Because each axis is controlled independently, the cross-coupling transfer functions from pan to tilt and from tilt to pan are ignored. The open-loop transfer functions G11 (pan) and G22 (tilt) are:

    G11(s) = (13.16 s² + 1.754 s + 10.29) / (s⁴ + 0.2649 s³ + 0.7997 s² + 0.1029 s)

    G22(s) = 13.33 / (s² + 0.1333 s + 0.7822)
4.8. Controller Design
The control system is implemented using a PID controller for each axis. Both PID
controllers are designed for the simulation using Simulink PID blocks shown in Figure
14. The proportional, integral, and derivative gains were hand tuned until the simulated response met the desired specifications.
Figure 14: PID Controller
Step inputs were applied in the simulation and the gains were tuned accordingly; the resulting gain values are listed below. Step responses for both the pan and tilt axes are shown in Figures 15 and 16. Because the actual system follows a trajectory rather than a step, additional tuning was done so that the system smoothly follows a sinusoidal input. Figure 17 shows the system's ability to track a sinusoidal input with a frequency of 10 rad/sec.
PID gain values used for the pan axis:

    Kp = 50,  Ki = 0.1,  Kd = 8

PID gain values used for the tilt axis:

    Kp = 65,  Ki = 0.1,  Kd = 7.5
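As a quick sanity check of these gains against the linear model (the verification in Section 5 uses the nonlinear Simulink model instead), the tilt loop can be closed around G22 from Section 4.7:

    s   = tf('s');
    G22 = 13.33/(s^2 + 0.1333*s + 0.7822);   % linearized tilt axis (Section 4.7)
    C   = 65 + 0.1/s + 7.5*s;                % Kp + Ki/s + Kd*s for the tilt axis
    T   = feedback(C*G22, 1);                % unity-feedback closed loop
    step(T), grid on                         % compare with the step response of Figure 16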
Figure 15: Pan axis step response
Figure 16: Tilt axis step response
Figure 17: Tracking Response of Nonlinear system
5. Design Verification
5.1 Model Verification
To verify the model, a series of constant torques was input into the system in order to match steady-state velocities with those measured from the friction identification
procedure. The single joint model is shown in Figure 18. Simulated and measured
responses are displayed in Figures 19 and 20.
When comparing the simulated responses to the measured responses, discrepancies can be seen during the first few seconds. These are due to the voltage pulse applied at the start of each measured run to overcome stiction. The steady-state velocities on the measured graphs are also incorrect because of a missing conversion factor in the LabVIEW code for the external gear ratio and the encoder resolution; this was fixed later when the velocity estimation method was changed.
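A minimal sketch of one such constant-torque run is given below, using the pan-axis friction values from Table 2 and a placeholder effective inertia.

    J   = 0.01;                               % placeholder effective inertia (kg*m^2)
    Fv  = 0.0003;  Fc = 0.0638;               % pan-axis friction parameters (Table 2)
    tau = 0.07;                               % constant applied torque (N*m)
    f   = @(t, x) [x(2); (tau - Fv*x(2) - Fc*sign(x(2)))/J];   % x = [theta; thetadot]
    [t, x] = ode45(f, [0 30], [0; 0]);
    plot(t, x(:,2)), xlabel('time (s)'), ylabel('load velocity (rad/s)')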
[Simulink diagram: a constant torque tau, scaled by a gain, drives the single-joint subsystem that computes thetaddot; two integrators (1/s) with a saturation block produce thetadot and theta, and scopes record each signal.]
Figure 18: Verification Model for Single Joint
Figure 19: Simulated responses to constant torques
Figure 20: Measured responses to constant torques
5.2. Image Calibration
Once all the pixel coordinates have been recorded, they are converted into real-world coordinates. This requires knowing the conversion from pixels to distance. To find it, a ruler was placed at the predetermined distance between the image and the camera, and two pieces of tape were placed on the ruler 4 inches apart. One image was taken with the ruler in a vertical orientation and one with the ruler in a horizontal orientation. These two images, shown in Figures 21 and 22, were then used to find how many pixels represent an inch at a distance of three feet.
Figure 21: Vertical Image Calibration
Figure 22: Horizontal Image Calibration
In both images the pieces of tape were found to be 48 pixels apart. Dividing this number by the 4 inch spacing gives a conversion of 12 pixels per inch. With this information it is now possible to convert points in the image to points in the real world relative to the camera, as sketched below. With a resolution of 320 x 240 pixels the figure size is limited further than anticipated: the figure can only extend 2' in the horizontal direction and 1'5" in the vertical direction, much smaller than the 3' by 3' figure that was initially proposed.
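A short MATLAB illustration of this conversion, with a hypothetical pixel point, is shown below; the project performed this step in LabVIEW.

    ppi    = 48/4;                  % 48 pixels over 4 inches -> 12 pixels per inch
    res    = [320 240];             % camera resolution (width, height) in pixels
    center = res/2;                 % image center in pixels
    px = 200;  py = 90;             % example pixel coordinates from the path search
    x_in =  (px - center(1))/ppi;   % inches to the right of the image center
    y_in = -(py - center(2))/ppi;   % inches above the center (image y axis points down)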
The main function to get all the points along the perimeter of the drawing is shown in Figure 23.
Figure 23: Main Function for Path Finding
This function acquires a picture using the subfunction labeled 1 in Figure 24 and sends it to the histogram function. After the histogram, the image is put through the threshold function, which removes random data. The image is then sent to function 3, which finds the starting point for the search; function 3 is shown in Figure 25. After the first coordinate is found, the function searches for each subsequent point until it returns to the beginning. The function for finding the next point is function 5, shown in Figure 26. Once all coordinates are found, they are output as a two-dimensional array.
Figure 24: Image Subfunction 1
Function 1 handles acquiring an image from the USB camera. As shown in Figure 24, it first creates an image, then initializes the camera, takes a picture, closes the camera, and places the picture in the image that was created. Since the camera takes a color picture, the image is then converted to black and white before being output to the main function.
Figure 25: Image Subfunction 3
Function 3, shown in Figure 25, is used to find the first pixel that contains the figure. It
searches linearly, left to right, top to bottom, until it comes to the first white pixel. Once
this pixel is found, the coordinates are returned to the main function.
Figure 26: Image Subfunction 5
Figure 26 shows function 5, which checks a pixel at a certain radius and angle from a
given point. In the main function this is run in a while loop given various angles.
Function 5 converts the angle into x and y offsets, which are then multiplied by the radius (5 pixels in this version) and added to the coordinates of the center
pixel. The pixel which is referenced by the new coordinate is checked for color. If the
pixel is white a true value is returned to the main function; if it is black a false value is
returned. When the main function receives a true value it knows to move on to the next
point and save the coordinates of the point it just found.
Figure 27 displays the results of the image algorithm that was used to identify the pixel coordinates.
Figure 27: Pixel coordinates of images to be traced
As can be seen in Figure 27, the image algorithm correctly identified the images to be traced.
6. Costs
The cost of equipment used throughout the design process is shown in Table 4. Out-of-pocket expenses for additional items are displayed in Table 5. The total for these extra items needed for project completion and demonstration did not exceed the $100 budget initially set. The cost of labor is shown in Table 6.
TABLE 4: COST OF EQUIPMENT

Item                                      | Qty | Cost      | Total      | Source
Pittman motor GM8724S010                  | 2   | $100.00   | $200.00    | Supplied
Pan gear A                                | 1   | $15.00    | $15.00     | Supplied
Pan gear B                                | 1   | $7.50     | $7.50      | Supplied
Tilt gear A                               | 1   | $15.00    | $15.00     | Supplied
Tilt gear B                               | 1   | $7.50     | $7.50      | Supplied
Pan belt                                  | 1   | $2.00     | $2.00      | Supplied
Tilt belt                                 | 1   | $2.00     | $2.00      | Supplied
LabView 7.1 software                      | 1   | $2,395.00 | $2,395.00  | Supplied
LabView Real-Time software                | 1   | $1,995.00 | $1,995.00  | Supplied
LabView FPGA software                     | 1   | $1,995.00 | $1,995.00  | Supplied
LabView IMAQ software                     | 1   | $2,995.00 | $2,995.00  | Supplied
NI cRIO-9004                              | 1   | $1,495.00 | $1,495.00  | Supplied
NI 1 M gate reconfigurable I/O (RIO) FPGA | 1   | $1,195.00 | $1,195.00  | Supplied
NI cRIO-9411                              | 1   | $100.00   | $100.00    | Supplied
NI cRIO-9263                              | 1   | $100.00   | $100.00    | Supplied
NI cRIO-9215                              | 1   | $100.00   | $100.00    | Supplied
NI-9401                                   | 1   | $100.00   | $100.00    | Supplied
Total                                     |     |           | $12,719.00 |
TABLE 5: COST OF ADDITIONAL ITEMS

Additional Items        | Qty | Cost   | Total  | Source
Laser pointer           | 1   | $14.45 | $14.45 | Ebay
USB camera              | 1   | $18.98 | $18.98 | Ebay
Poster-sized paper      | 2   | $1.97  | $3.94  | Walmart
Plotter paper           | 1   | $40.56 | $40.56 | VCC
Black markers           | 2   | $1.97  | $3.94  | Walmart
Misc. electronics/parts |     | $10.78 | $10.78 | Trojan Electronics
Total                   |     |        | $86.74 |
TABLE 6: COST OF LABOR

Team                        | Hours | Rate   | Total
Duarte, Jason (engineer)    | 250   | $30/hr | $7,500
Latif, Azmat (engineer)     | 250   | $30/hr | $7,500
Sundell, Stephen (engineer) | 250   | $30/hr | $7,500
Weidner, Tim (engineer)     | 250   | $30/hr | $7,500
Total                       | 1000  |        | $30,000
7. Conclusions
The objective of this project was to use a pan-tilt system to trace images using a laser
pointer. In order to meet the design objectives the project was split into three key
sections: image analysis, trajectory generation and control design. Using a webcam, the
desired path to trace around a figure was obtained. The trajectory generator specified the
speed at which to travel along the path. Finally a PID controller was implemented to
achieve the performance specifications.
There are several areas of the final system that can be improved. Currently the image
algorithm works well with single continuously drawn figures placed on the screen.
However, the image algorithm will not return the pixel locations of multiple shapes on
the same screen. In addition, the algorithm was primarily written for shapes that did not
have any line crossings. To trace figures such as a figure eight the algorithm will have to
be revised.
At times, the image processing algorithm failed to return all of the pixel locations of a figure. As a result, a figure could contain several gaps and therefore would not be traced completely. To address this issue, the team increased the intensity of the light on the image so that all of the pixels were returned. A possible improvement would be to adjust the algorithm so that it fills in any gaps in the image.
Another area that could be improved is the implementation of vision feedback.
Incorporating vision feedback in the design would allow the controller to correct any
offsets between the figure and the laser. Due to time limitations and the camera's inability to detect the laser, this type of controller could not be implemented in this project.
An additional improvement would be to enable the system to trace an image on multiple image planes, which would be possible by using different background colors. These types of improvements could be implemented by future Control Systems Design teams. In conclusion, this project has been a challenging and rewarding experience.
8. References
1. Groover, Mikell P., Weiss, Mitchell, Nagel, Roger N., and Odrey, Nicholas G., Industrial Robotics: Technology, Programming, and Applications, McGraw-Hill, 1986, pp. 393-410.
2. Fu, K.S., Gonzalez, R.C., and Lee, C.S.G., Robotics: Control, Sensing, Vision, and Intelligence, McGraw-Hill, 1987, pp. 149-155.
3. Schilling, Robert J., Fundamentals of Robotics: Analysis & Control, Prentice Hall, Upper Saddle River, New Jersey, 1990, pp. 140-147, 265-266.
4. ECSE 4962 Control Systems Design, Meeting Slides, "Linearization script," Spring 2003, http://www.cats.rpi.edu/%7Ewenj/ECSE4962S03/pantiltmodel.m.
Appendices
Appendix A: Pan Assembly
Appendix B: Tilt Assembly
Appendix C: Mass Properties for Pan Axis
Output coordinate System: -- default --
Density = 2796.7634 kilograms per cubic meter
Mass = 0.5603 kilograms
Volume = 0.0002 cubic meters
Surface area = 0.0680 square meters
Center of mass: ( meters )
X = -0.0126
Y = -0.0228
Z = -0.0003
Principal axes of inertia and principal moments of inertia: ( kilograms * square meters )
Taken at the center of mass.
Ix = (0.1102, 0.7740, -0.6235)
Px = 0.0006
Iy = (-0.0656, 0.6316, 0.7725)
Py = 0.0009
Iz = (0.9917, -0.0442, 0.1204)
Pz = 0.0012
Moments of inertia: ( kilograms * square meters )
Taken at the center of mass and aligned with the output coordinate system.
Lxx = 0.0012 Lxy = 0.0000 Lxz = -0.0001
Lyx = 0.0000 Lyy = 0.0007 Lyz = -0.0001
Lzx = -0.0001 Lzy = -0.0001 Lzz = 0.0008
Moments of inertia: ( kilograms * square meters )
Taken at the output coordinate system.
Ixx = 0.0015 Ixy = 0.0002 Ixz = -0.0001
Iyx = 0.0002 Iyy = 0.0008 Iyz = -0.0001
Izx = -0.0001 Izy = -0.0001 Izz = 0.0012
Appendix D: Mass Properties for Tilt Axis
Output coordinate System: -- default --
Density = 2766.6335 kilograms per cubic meter
Mass = 0.3468 kilograms
Volume = 0.0001 cubic meters
Surface area = 0.0400 square meters
Center of mass: ( meters )
X = 0.0760
Y = 0.0072
Z = -0.0135
Principal axes of inertia and principal moments of inertia: ( kilograms * square meters )
Taken at the center of mass.
Ix = (0.9487, -0.1438, 0.2815)
Px = 0.0002
Iy = (0.3156, 0.3774, -0.8706)
Py = 0.0006
Iz = (0.0190, 0.9148, 0.4034)
Pz = 0.0007
Moments of inertia: ( kilograms * square meters )
Taken at the center of mass and aligned with the output coordinate system.
Lxx = 0.0002 Lxy = -0.0001 Lxz = 0.0001
Lyx = -0.0001 Lyy = 0.0007 Lyz = -0.0001
Lzx = 0.0001 Lzy = -0.0001 Lzz = 0.0006
Moments of inertia: ( kilograms * square meters )
Taken at the output coordinate system.
Ixx = 0.0003 Ixy = 0.0001 Ixz = -0.0002
Iyx = 0.0001 Iyy = 0.0028 Iyz = -0.0001
Izx = -0.0002 Izy = -0.0001 Izz = 0.0026
Appendix E: Friction Identification Code
Appendix F: Velocity Estimation Using Finite Difference
Appendix G: Velocity Estimation Using Pulse Period
Appendix H: Model Linearization Code
%pantilt_lin.m
%
% linearize the pan-tilt system about theta1 = theta2 = 26.565 deg
%
% setup pan-tilt parameters
pantiltmodel;
% gravity
g = 9.807;
% get the linearized mass matrix
theta1d=26.565*pi/180;theta2d=26.565*pi/180;
M = massmatrix(mA,IA,pA,Im1,N1,mB,IB,pB,Im2,N2,theta1d,theta2d);
% small viscous friction terms (Coulomb friction is neglected in the linear model)
d1=0.01; % viscous friction for pan
d2=0.01; % viscous friction for tilt
D = diag([d1 d2]);
% get the linearized gravity term
gradG=gravitylin(mA,IA,pA,Im1,N1,mB,IB,pB,Im2,N2,theta1d,theta2d);
% define state space matrices
A = [zeros(2,2) eye(2,2); -inv(M)*gradG -inv(M)*D];
B = [zeros(2,2) ; inv(M)];
C = [eye(2,2), zeros(2,2)];
D = zeros(2,2);
G = ss(A,B,C,D);
%disp('*** G ***');
%disp(['Open loop transfer function linearized about (',...
% num2str([theta1d theta2d]),')']);
%tf(G)
Appendix I: Statement of Contribution
While specific subsections were written individually as shown below, all team members
contributed equally in the completion of this report.
Jason Duarte _____________________________________
- Range of Motion
- Preliminary Model Development
- Friction Identification
- Nonlinear Model
- Linear Model
- Controller Design
- Model Verification
- Costs
Azmat Latif ______________________________________
- Abstract
- Introduction
- Control System Design
- Trajectory Generation
- Inverse Kinematics
Stephen Sundell ___________________________________
- Image Processing
- Laser Control
Tim Weidner ______________________________________
- Abstract
- Professional and Societal Considerations
- Velocity Estimation
- Conclusions
- Report Organization