VISION-TRACKING TURRET SYSTEM
by
Jason Lam
John Lee
Jonathan Rothberg
ECSE-4962 Control Systems Design
May 4, 2005
Rensselaer Polytechnic Institute
ABSTRACT
The goal of this project is to successfully track a moving remote controlled car. The system uses a vision system, specifically infrared light, to locate the target. The car is outfitted with infrared emitters and photodiodes and is capable of traveling freely throughout a predefined area around the vision system. When integrated with image processing algorithms and a properly tuned control system, the system is capable of following the car and automatically adjusting itself to hit the car's photodiodes with a laser beam.
This report is a cumulative report that documents the development of the Vision-Tracking
Turret System. The report describes the project constraints, design approach, and
validation of the subsystems. It also assesses the current implementation of the system
and suggests enhancements for the future.
TABLE OF CONTENTS

1  INTRODUCTION
2  PROFESSIONAL AND SOCIETAL CONSIDERATIONS
3  DESIGN PROCEDURE
   3.1  Mounting System
   3.2  Target, Infrared and Laser Feedback System
   3.3  Image Processing
   3.4  Modeling, Coordinate Translation System
   3.5  Friction Identification, Parameter Identification
   3.6  Control System
4  DESIGN DETAILS
   4.1  Mounting System
   4.2  Target, Infrared and Laser Feedback System
      4.2.1  Car Speed
      4.2.2  Laser Feedback System
   4.3  Image Processing
   4.4  Modeling, Coordinate Translation System
      4.4.1  Motion Range
      4.4.2  Pointing Accuracy
      4.4.3  Coordinate System Modeling
      4.4.4  Parallax Correction: Virtual Center Point
      4.4.5  Field of Vision
      4.4.6  Desired Tilt Angle from Camera Coordinate
      4.4.7  Desired Pan Angle from Camera Coordinate
   4.5  Friction Identification, Parameter Identification
      4.5.1  Pan-Axis Friction
      4.5.2  Tilt-Axis Friction
      4.5.3  Motor Modeling via Friction Identification
      4.5.4  Motor Modeling via Parameter Identification
   4.6  Control System
      4.6.1  Controller Design
      4.6.2  Implementation of the Integrated System
5  DESIGN VERIFICATION
   5.1  Mounting System
   5.2  Target, Infrared and Laser Feedback System
   5.3  Image Processing
   5.4  Modeling, Coordinate Translation System
   5.5  Friction Identification, Parameter Identification
   5.6  Control System
6  COST AND SCHEDULE
7  CONCLUSIONS
8  REFERENCES
LIST OF FIGURES
Figure 1 – Sequence of Events for Vision-Tracking Turret System
Figure 2 – Laser and Camera Mounting Assembly
Figure 3 – Implemented Laser and Camera Mounting Design
Figure 4 – Shims installed on mounting assembly
Figure 5 – Emitter and Detector Placement Pattern
Figure 6 – Photodiode Orientation
Figure 7 – Screenshot before processing
Figure 8 – Screenshot after processing
Figure 9 – Tilt Range
Figure 10 – Pan Range
Figure 11 – Mount Setup
Figure 12 – Field of View at 45°
Figure 13 – Camera Geometry at 45°
Figure 14 – Range of View
Figure 15 – Object Tilt Angle
Figure 16 – Camera View
Figure 17 – Desired Tilt Angle
Figure 18 – Desired Pan Angle
Figure 19 – Camera View
Figure 20 – Angle Geometry
Figure 21 – Pan Axis Velocities
Figure 22 – Pan Axis Torques
Figure 23 – Tilt Axis Velocities
Figure 24 – Tilt Axis Torques
Figure 25 – Motor Model with no Mount
Figure 26 – Motor Model (tilt)
Figure 27 – PID Controller
Figure 28 – Closed Loop Response to a Step Input
Figure 29 – Closed loop system response in the Pan axis
Figure 30 – Closed loop system response in the Tilt axis
Figure 31 – Final System Implementation
Figure 32 – Sequence, post processing
Figure 33 – Screenshot before processing
Figure 34 – Screenshot after processing
Figure 35 – Screenshot sequence, post processing
Figure 36 – Stationary Target Implementation
Figure 37 – Desired Pan and Tilt Angle Calculation
Figure 38 – Motor Model (tilt) from Parameter ID
Figure 39 – Motor Model (tilt) Subsystem from Parameter ID
Figure 40 – Friction ID from Motor Model with no Mount
Figure 41 – Open-Loop Response (Chirp) – Actual and Simulated comparison
Figure 42 – Open-Loop Response (Ramp) – Actual and Simulated comparison
Figure 43 – Open-Loop Response Comparison
Figure 44 – Desired vs. Actual Response Plots
LIST OF TABLES
Table 1 – Car Velocity Experiment
Table 2 – Pan Friction Parameters
Table 3 – Tilt Friction Parameters
Table 4 – Motor Specifications
Table 5 – PID Characteristics
Table 6 – Actual Controller Performance
Table 7 – Bill and Cost of Materials
Table 8 – Labor Costs
Table 9 – Project Schedule
LIST OF APPENDICES
Appendix A – Maximum Angle Speed
Appendix B – Infrared and Laser Feedback System Circuit Schematics
Appendix C – Circuit Component Specification Sheets
Appendix D – Team Member Contribution
Appendix E – Team Member Resumes
1  INTRODUCTION
Tracking a moving target via infrared light and shooting it with a laser beam requires the
integration of several subsystems, which are tied together with a robust mathematical model and
a properly tuned control system. Conceptually, the camera must be able to detect the infrared
light emitted by the target. The image processing algorithm then proceeds to determine the
location of the target in terms of the webcam’s captured frame. Given these coordinates and the
current tilt angle, a mathematical model of the system translates this data to determine the
physical location of the target with respect to the location of the camera and laser assembly. The
mathematical model adjusts for several physical parameters of the system and outputs
compensated pan and tilt angles that would direct the laser pointer to point straight at the target.
The telemetry is sent through a control system to direct the laser into the target's array of photodiodes, and the car, in turn, indicates whether it has been shot.
Figure 1 summarizes the sequence of events:

1. The car emits infrared light.
2. The camera detects the infrared light.
3. Image processing determines the location of the car from the screenshot and outputs its X and Y coordinates.
4. The coordinate translation system, together with the current tilt and pan angles, determines the physical location of the target.
5. The parallax and virtual center point functions determine the desired pan and tilt angles.
6. The control system moves the assembly.
7. The laser hits the car's photodiodes.
8. The car signals a hit.

Figure 1 – Sequence of Events for Vision-Tracking Turret System
The key element to automating the system is to design a capable control system that regulates the
system’s movements. Appropriate control system constraints can be derived after considering
the performance of the target in question. Given the specifications of the target and the capability of the supporting subsystems, it was determined that the following performance specifications must be met by the control system; they are elaborated in subsequent chapters:

Settling time = 0.17 s (less than the average image processing time)
Overshoot < 1%
Steady-state error = 0
2  PROFESSIONAL AND SOCIETAL CONSIDERATIONS
There is a growing demand for control systems with motion tracking ability. The practicality of
this project can be extended to a broad range of applications. Of these, the most apparent use is
in defense, where there is a strong emphasis on reliably neutralizing threats without risking
human life. Moreover, a system that is capable of operating at night is a very attractive feature
within the context of defense systems.
Personal security can be obtained through the use of a similar system, home security being the
most obvious implementation. Since the camera is designed to track a moving object, it can
easily be utilized as a security camera. Once the system detects movement it will begin
following the object and recording the movement. Also, since the camera is infrared sensitive it
will be able to operate at night, when the risk of having an intruder increases. The cost of the
proposed system makes it feasible for a home or business security system.
Another important application for this type of system is military defense. In this day and age,
defense has become a very high priority. A system similar to the proposed design could be
implemented to detect missiles and destroy them before they are able to harm anyone. There is
currently no system in place that will intercept a missile. It may be inferred that the proposed
National Missile Defense system will have features similar to the system that the project plans to
implement.
A comparable system can also be used in civilian and commercial applications such as traffic
control or surveillance. Implementing such a system can provide information on traffic and
travel patterns in a specific area. Again, the infrared portion of the system would bolster its
range of operation by allowing it to function at night.
The functionality of the system could also serve as an entertainment system, where players
control the RC car within the boundary, and the goal is to dodge the shots for as long as possible.
Alternatively, the system can be implemented in laser tag, allowing for automated turrets that
would shoot any moving targets within a certain bound in the arena. It could also be an agility training system, where trainees wear a suit with sensors on it and must avoid the shots from the turret.
3  DESIGN PROCEDURE
The development of the project was carried out in phases, with the team focusing on a specific task or subsystem in each phase; phases overlapped when possible. As each subsystem was completed, it was tested and integrated into the overall system. A control system was then applied to the integrated system and tuned to meet the specified performance constraints.
3.1  Mounting System
The mounting system provides a way to rigidly fix the camera and laser pointer to the rest of the
pan and tilt assembly. Moreover, it maintains the geometry of the system, which is vital to the
accuracy of the system modeling. Although the assembly allows for a full 360° range of motion, the range implemented in the project was restricted to 180°.

The design of the mounting system evolved through several versions. The hardware kit that the team received included motors with small gear ratios, because the speed of the system was more important than its torque. Therefore, a large effort was made to minimize the inertia and weight to which the motors would be subjected.

Ultimately, the design implemented in the system was governed by the time constraints and the resources available. The physical modeling (via SolidWorks 2004) makes clear that the proposed design would have been far superior to the one implemented, and would be the preferred alternative.
3.2  Target, Infrared and Laser Feedback System
The development of the remote control car and the circuitry to support the targeting feedback system was relatively straightforward. A car was selected to provide the system with a challenging target: several cars were tested, and the final model was chosen based on its velocity, maneuverability, and size. The car's weight could be altered to change its acceleration, a feature that proved important during system integration and control tuning.
The circuitry was developed under the constraints that the webcam was capable of detecting infrared light and that the laser implemented in the system would emit red light. The circuit therefore had to emit infrared light and detect the red laser without interference between the two. The design was inspired by circuits implemented in other projects [1]. Careful component selection yielded a very successful initial prototype. Initial testing was done by triggering the feedback system with a hand-held laser pointer.
Small enhancements were made to the system over time to increase its functionality and the
accuracy of the image processing system.
Although the intention was to finalize the circuit by soldering the components to a wafer board, the large surface area available on the car allowed the circuitry to remain on breadboards mounted directly onto the car. If the target had been smaller, decreasing the physical footprint of the circuitry and its components would have been a more serious concern.
3.3  Image Processing
The image processing portion of the project provides the critical feedback link between the target
and the camera assembly. Its sole function is to determine where the target is in relation to the
camera’s field of view. The development of the image processing primarily followed an iterative
process and was developed in conjunction with the infrared emitting circuit design. Preliminary
MATLAB code was tested with the webcam and a mockup of the infrared emitter array. As
more elements of the project were finalized, specifically the mounting system and the car,
modifications were made to the image processing, including tweaking the color values that the
algorithm perceives as the target.
The alternatives considered for this subsystem did not involve the algorithms themselves; rather, dedicated image processing hardware and more capable cameras could have been used to increase the responsiveness of the system.
3.4  Modeling, Coordinate Translation System
The translation between coordinate systems is a geometrically and trigonometrically intensive
exercise that allows the pan and tilt system to derive telemetry data from the image processing.
Moreover, many issues arise when the objectives of the project require that a laser be accurately
pointed at a specified area of the car.
The comprehensive model that has been developed successfully compensates for several issues
involving coordinate translations, parallax, and center point adjustments. With an accurate
model in place, further integration will allow the project to move forward into control
implementation and to ultimately track a moving target.
The modeling of the system also followed an iterative process. Initial development started with
measuring the physical parameters of the webcam, including its field of view and resolution. A
strong initial model was available to be tested as soon as the mounting system was completed.
The modeling was verified and revamped through extensive testing of the integrated system.
While there are no feasible alternatives to this portion of the project, further progress could have
been made to compensate for other webcam anomalies, such as barrel distortion.
3.5  Friction Identification, Parameter Identification
Modeling of the system was done in parallel with the Friction Identification and Parameter
Identification portion of the project. These processes are required to accurately predict the
behavior of the pan and tilt mechanisms. Several runs were required due to hardware problems.
After proceeding with the friction identification techniques, it was determined that the parameter
identification method would yield a more precise model of the system.
3.6  Control System
With all the subsystems in place and tested with preliminary scripts, the control system was
finally introduced into the system to manage the movements of the assembly as it follows the
target. Due to problems with the synchronization of the subsystems, earlier attempts at the control system performed poorly. Software issues and difficulty in storing previous tilt and pan angle data in the system model impeded progress as well. Nevertheless, these difficulties were overcome, and the final implementation of the system performed better than expected.
4  DESIGN DETAILS

4.1  Mounting System
Height:
Tilt Axle: 0.92 m
Tilt Axle to camera: 3.4 cm
Tilt Axle to laser pointer: 5.7 cm
Motion range:
Tilt: 30.5° to 68.5° (from the horizontal plane)
Pan: 180°
The initial design for mounting the laser and camera to the pan and tilt assembly was developed prior to the Friction and Parameter Identification phase of the project. The later design stayed largely intact, with minor changes to decrease the weight and inertia of the mounting assembly. SolidWorks was used to develop a means to connect the camera and laser assembly to the pan and tilt mechanism. The preliminary model shown in Figure 2 demonstrates the requirements of the mount. Because the assembly must respond relatively quickly, it is critical to minimize the weight of the payload on the assembly; doing so allows the camera to accelerate much more quickly, since there is less inertia to overcome.
Figure 2 – Laser and Camera Mounting Assembly
Another critical requirement for the mount was to maintain a rigid frame so that the laser and the camera's center of view remain perfectly parallel. A slight displacement of these two elements on the mount would translate to large aiming errors due to the geometry of the system. In order to meet these rigidity and weight requirements, it was anticipated that the parts would be machined out of aluminum.

With this design, the camera is secured by a backing plate and two screws, creating a vise around the camera. The laser is rigidly fixed to the mount via two top-mounted setscrews. The mount is rigidly fixed to the tilt axle through collar clamps, which are in turn fixed to the rest of the mounting assembly with bolts. The team may elect to line the shaft with a sheet of rubber to maintain a tight fit and minimize the effects of gaps and machining errors on the mounting plate; likewise, the surfaces for the laser and camera may be lined with rubber sheet or rubber grommets. At this point, the design would be ready for fabrication once more precise measurements were made.
Despite having a robust mounting design ready for fabrication, the estimated turnaround time, given by Jim Schatz, would have introduced significant delays into the work schedule. Compromises were made to stay within the schedule, and a mounting design was fabricated from a scrap block of aluminum.
Figure 3 – Implemented Laser and Camera Mounting Design
Figure 4 – Shims installed on mounting assembly
Fitment issues were also apparent in the implemented design because of fabrication errors. They were remedied by installing shims in specific areas, shown in Figure 4, to maintain the geometry between the camera and laser.
4.2  Target, Infrared and Laser Feedback System

4.2.1  Car Speed

Car velocity: 0.6848 m/s (weighted car), 0.3252 m/s (non-weighted car)
The required minimum speed of the pan-tilt mechanism relies on the speed of the target itself.
An experiment was conducted to determine the remote controlled car’s maximum theoretical
speed. The experiment was set up under the following parameters:
1. Since there was difficulty calibrating the car to go in a straight line, a length of string
was tied to the back of the car.
2. The string was measured to be 299.3cm from the start to the end, which was marked by a
knot.
3. The car started its run above the knot and full throttle was applied. The time it took for
the car to reach the end of the string was recorded.
4. The test was run 10 times.
Below is the sampled data:

Run             Time (s)
1               5.948
2               5.838
3               5.598
4               5.598
5               5.888
6               5.808
7               5.888
8               5.507
9               5.377
10              5.648
Average (s)     5.7098
Length (m)      3.91
Velocity (m/s)  0.6848

Table 1 – Car Velocity Experiment
The average time was used both to minimize recording error and to determine the maximum velocity of the car. Thus, the speed of the car is 3.91 m / 5.7098 s = 0.6848 m/s.
Knowing the speed of the car, the minimum speed that the pan-tilt mechanism needs to keep up
with the car can be determined.
For the tilt mechanism, the worst case scenario would be tilting from its maximum angle towards
its minimum angle (from horizontal) to track the car that is traveling at its maximum speed in a
straight line from the center of the system. The maximum speed for tilting is calculated to be
25.7197°/s (see Appendix A1).
For the pan mechanism, the worst case scenario would be to track the moving target traveling on
the circumference of the inner radius of the workspace. The maximum speed for panning is
calculated to be 64.2595°/s (see Appendix A2).
During testing, the car was weighed down to decrease its ability to accelerate, which provided
the system with an easier target to track. However, significant advances were made since the
initial control design. With increased performance, the weight was removed from the car.
4.2.2  Laser Feedback System
The circuitry has been fully outfitted with eight photodiodes and 16 infrared LEDs. Each LED
has been installed in parallel to maintain functionality in case one of the elements fails. The
photodiodes are each connected to a gain circuit and the sensitivity of each photodiode can be
calibrated via a potentiometer installed on the back of the vehicle. The photodiodes are
connected to a circuit of cascading OR gates, which triggers the status LED mounted on the front of the car when one of the detectors receives a high input.
In addition to being connected via logical OR gates, the photodiodes are also connected by an SR
latch so that triggering a single photodiode will trip the status LED and buzzer and the indicators
will remain in that state until the system is reset. During stability testing, earlier versions of the circuit developed problems in the gain circuitry. It was determined that the
operational amplifiers were not functioning properly and were unable to give the digital logic
portion of the system distinct high and low values. The amplifiers were replaced with LM324
amplifiers, which allow a wider range of operating voltages and higher gains. Extra amplifier
ICs were ordered in case of another malfunction.
The system voltage was also increased from 3V to 6V on the detection portion of the circuitry.
The decision to increase the voltage was justified by considering the specifications of the
integrated circuit components. It was determined that the components would operate nominally
at 5 V, but the spec sheets indicate that the components handle overvoltage better than undervoltage (see Appendix C).
The infrared emitting LED circuit, however, remains on a 3V supply. The emitters would run
the risk of burning out at a higher voltage. Moreover, the light emitted from the LEDs proved sufficiently bright in the images captured by the infrared-sensitive webcam. By
using two 2-AA battery boxes, the emitter circuit receives its 3V source by tapping into one of
the battery boxes before the two boxes are connected in series to create a 6V supply for the
detector circuit. This allows the system to consolidate its power sources, which was essential
because the car has a limited amount of surface area to mount the rest of the circuitry.
The infrared emitting circuit was also combined with the detection circuit in the same physical
area. This conserved mounting space on the car but, more importantly, this design decision
allows the laser to point at the light source and trigger the detector circuit simultaneously.
Without combining the emitting and detecting areas, the system would have to calculate a compensation factor, which would only add to the complexity of the system modeling.
The primary concern with placing the emitters and detectors in the same area was the
interference generated by the emitters, since the detectors have a substantial sensitivity to the infrared range of the light spectrum (see Appendix C). Various designs were considered to help
mitigate the interference. However, after an initial installation of the emitters on the detector
breadboard without any shielding, it was concluded that the detector circuit was still able to
function properly by adjusting the sensitivity of the circuit.
Another design decision was made regarding the pattern of emitters and detectors on the
sensitive area. It was required for the pattern to be symmetrical so that the laser will trigger the
circuit reliably, regardless of the car’s orientation with respect to the camera and laser assembly.
The pattern was established in conjunction with the development of the image processing
system. Because the system calculates the centroid of the bright spot emitted by the LEDs and
points the laser at that centroid, it was apparent that the detectors should be concentrated in the center, surrounded by emitter LEDs. Figure 5 demonstrates a pattern that fulfills these requirements.
Figure 5 – Emitter and Detector Placement Pattern (laser detectors concentrated in the center, surrounded by infrared emitters)
Initial testing was done manually by holding the laser pointer at a height equivalent to that of the camera and laser assembly. The detection circuit responded very well over a wide range of distances and orientations. Tilting the detectors, as shown in Figure 6, was another design decision made to increase the system's sensitivity angle.
Figure 6 – Photodiode Orientation
4.3  Image Processing

Camera resolution: 320 pixels wide by 240 pixels high
The initial approach to the image processing portion of the project was developed in MATLAB utilizing many built-in functions. The process applies color filters to extract the target by adjusting several threshold values; after tuning the filters, the code successfully identifies the LED array, as shown in Figure 7.

The basis of the image processing system is color detection. Since the system utilizes the IR LEDs at all times, it can reliably detect the remote controlled car: the IR LEDs appear in the webcam image as a blue light, which is essentially the only such color present in the image.
Figure 7 – Screenshot before processing
The use of the IR LEDs at all times enables the system to move easily from daytime to nighttime conditions. The light emitted from the IR LEDs remains the same color in both, a fact that reduces some of the complexity of the image processing system.
Figure 8 – Screenshot after processing
Keeping the image processing system as simple as possible is a major consideration: the less complex the image processing is, the quicker it executes. Execution time is a key concern for the system being developed, and if the image processing takes too long to execute, it could have disastrous effects on the performance of the system.
The algorithm uses color-based segmentation to detect the location of the car. First, the image is acquired from the webcam, and the red, green, and blue channels are separated. Once the channels are separated, a threshold value for the image is determined experimentally. The threshold value determines which parts of the image to keep, or leave "turned on," and which parts to remove, or "turn off." A portion of the image that is kept has its value set to 1, indicating white; conversely, a portion that is removed is set to 0, indicating black.
Once a suitable threshold value is determined, the image can be segmented based on a specific color; for the purposes of this project, that color is blue. To extract only the parts of the image that are blue and above the threshold, the red and green channels are subtracted from the blue channel. Subtracting off all the red and green in the image leaves only those portions that contain blue.
The next step is to remove any objects that are too small, since the car will be the largest object present in the image. Once this step is done, all that is left in the image is the car, and many properties of the image can be obtained. The most important is the location of the centroid of the car, which is fed back so that the control system can respond accordingly.
4.4  Modeling, Coordinate Translation System

4.4.1  Motion Range

Workspace range: 0.36 m to 1.56 m, measured radially from the pan and tilt assembly location
The initial parameters used to calculate the motion range were estimates judged to be feasible for the system. Figure 9 shows a simplified setup that sufficiently portrays the physical system for determining the motion range of the tilt axis.
18
webcam
Ԥ
h
d
Figure 9 – Tilt Range
For the tilt axis, it was assumed that the pan-tilt assembly is a total of 0.92 m above the ground, aiming downwards to where the moving target is located. A simple trigonometric relationship shows that the tilt angle is governed by the height and the distance from the car:

    tan(Θ) = h / d
The amount of space that the assembly should be able to cover was also an estimated value. From inspection, the usable open space available in the lab was measured to extend from 0.36 m to 1.55 m away from the system. Therefore, a maximum tilt of 68.6° and a minimum tilt of 30.7° would give the system the workspace radius shown below (Figure 10):
Figure 10 – Pan Range (annular workspace extending from 0.36 m to 1.55 m around the system)
After building the mount system, it was clear that these estimates were very accurate: the height of the camera from the floor was 0.92 m, and the usable tilt range was from about 30.5° to 68.5°. These tilt values result in a workspace range of 0.36 m to 1.56 m.
For the pan axis, the webcam is mounted atop the pan-tilt mechanism, and would follow the
target within a range of 180° around the system. Again, by proving the feasibility of the system
at 180°, it would be relatively straightforward to extend the system to a full 360°.
4.4.2  Pointing Accuracy
The perceived size of the photodiode array changes depending on the orientation of the car and its distance from the camera. The dimensions of the array were measured to be about 6 cm by 2.5 cm; however, the worst case must be considered, so the target is assumed to be a circle with a diameter of 2.5 cm. Given that the maximum distance from the targeting system is 3.0482 m = 304.82 cm, the system requires a pan pointing accuracy of

    (2.5 / (2π × 304.82)) × 360° = 0.4699° (pan)

In terms of tilt pointing accuracy, the maximum deflection is

    angle_of_distance(304.82 cm) - angle_of_distance(302.32 cm) = 34.514° - 34.291° = 0.223° (tilt)
4.4.3  Coordinate System Modeling
Because there are several subsystems, the overall control system requires translation and transformation between several coordinate systems. The physical system is represented by a plane coplanar with the floor of the lab, with the origin centered directly below the pan and tilt system. The camera, however, can only represent its world in terms of the visual plane, namely the pixels in the image. Moreover, the rotational displacement of the pan and tilt assembly must be translated into Cartesian coordinates.

This system requires high accuracy to achieve its goals. Therefore, a detailed mathematical model of the system was developed to better simulate and design the control system, taking into consideration the height of the target and the geometry of the tilt assembly. The design places the laser pointer directly above the camera.
Figure 11 – Mount Setup (C = camera, L = laser pointer; h = height of the tilt axle, l_c = offset from the tilt axle to the camera, h_c = height of the camera lens, Θ_t = tilt angle, y_c = ground distance from the base to the target, z_c = camera-to-target distance)
The following equation is derived from the figure:

    y_c = l_c / cos(Θ_t) + h / tan(Θ_t)

where y_c is the distance from the base of the camera to the target, Θ_t is the tilt angle measured from the floor up, h is the height of the tilt axle, and l_c is the height of the center of the camera lens above the tilt axle.
Similarly, because the camera and laser pointer are parallel to each other, the same relationship holds for the laser pointer:

    y_l = l_l / cos(Θ_t) + h / tan(Θ_t)
Solving this equation in terms of Θ_t yields a closed-form expression, a lengthy two-argument arctangent in h, l, and y obtained symbolically.
Being able to calculate the object distance from the system tilt angle, and vice versa, is essential; these relations are used throughout the mathematical model for coordinate transformation.
4.4.4  Parallax Correction: Virtual Center Point
The height difference between the camera and the laser pointer on the mount plays a significant
role in the design of the vision system. Because the laser pointer is directly above the camera, if
the target remains at the true center of the camera vision, the laser would be pointing above the
target. In addition, this pointing error is not simply a constant above the target because it is
dependent on the system’s current tilt angle. This is due to the non-linear relationship between
the distances of the target and the camera tilt angle. A dynamic “virtual center point” in the
vision system then becomes necessary.
The main idea behind the "center point" tracking method is to have the camera consistently keep the target at its "center," where that point in the camera image ideally corresponds to the actual laser pointing location. Since the system's current tilt angle is known from its encoder, the position of the "virtual center point" can be calculated. Hence, if the camera keeps the "virtual center point" overlapped with the target on the camera screen, the laser is pointing at the target and the target position is known.

In order to calculate the "virtual center point," the camera's field of vision, particularly its angle of view in the tilt direction, is needed, because the relationship between the system tilt angle and camera pixels is linear.
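To make the idea concrete, the following MATLAB sketch computes the pixel row of the virtual center point from the current tilt angle, using the mount geometry of Section 4.1 and the camera relations derived here. It is a sketch under those stated measurements, not the project's actual code; the 18.05° half field of view and the 0.1504°/px figure are established in the next subsection.

    % Sketch: pixel row of the "virtual center point" for the current tilt.
    h  = 0.92;            % tilt-axle height (m), Section 4.1
    lc = 0.034;           % axle-to-camera offset (m)
    ll = 0.057;           % axle-to-laser offset (m)
    degPerPx = 0.1504;    % tilt-direction degrees per pixel (Section 4.4.5)
    halfFov  = 18.05;     % half field of view in tilt (deg, Section 4.4.5)
    thetaT = 45;          % current tilt angle from the encoder (deg)
    % Ground distance at which the laser currently hits the floor
    dLaser = ll/cosd(thetaT) + h/tand(thetaT);
    % Camera-view angle whose line of sight meets that same floor point
    f = @(th) lc./cosd(th) + h./tand(th) - dLaser;
    thetaVC = fzero(f, thetaT);
    % Convert to a pixel row: row 0 corresponds to thetaT + halfFov
    yVC = (thetaT + halfFov - thetaVC)/degPerPx;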
4.4.5  Field of Vision
The following experiment was performed to determine the camera's field of vision:

1. The camera was tilted 45° down from the horizontal plane at a height of 77.8 cm.
2. Its field of vision was then determined by measuring the distance visible in the camera image.

The measured results are shown in the diagram below in Figure 12:
Figure 12 – Field of View at 45°
Figure 13 – Camera Geometry at 45° (C = camera; rays at Θ_t - Θ and Θ_t + Θ bound the view, striking the floor at d_l and d_u, with d_u - d_l = 113.5 cm)
d_u - d_l = 77.8 / tan(45° - Θ) - 77.8 / tan(45° + Θ) = 113.5 cm

Solving the above equation for Θ gives Θ = 18.05°. Therefore, the field of view of the camera in the tilt direction is 18.05° × 2 = 36.1°.
Since the camera has a resolution of 320 px by 240 px, the degrees per pixel can be calculated as follows:

Degrees per pixel (in the tilt direction) = 36.1° / 240 px = 0.1504° per px
This experiment also provides sufficient data to calculate the field of vision in the pan direction. The degrees per pixel on the pan and tilt axes should theoretically be the same, since a pixel is a perfect square. The calculation is shown in Appendix D; the calculated value is 0.1617° per px, which differs from the tilt value by about 7%.
4.4.6  Desired Tilt Angle from Camera Coordinate

Figure 14 – Range of View (C = camera, L = laser pointer; with Θ = 18.05°, the view in the tilt direction spans from Θ_t - 18.05° to Θ_t + 18.05°)
As long as the target is within the range of view, the camera together with the image processing
function will be able to determine the location of the target by returning the x and y location in
pixels on the camera screen.
The desired tilt angle of the system to hit the target can be calculated with the following steps:
Step 1: find the tilt angle such that the object is in the center of the screen
Figure 15 – Object Tilt Angle (C = camera; the object's line of sight lies ΔΘ below the upper edge of the view at Θ_t + 18.05°, defining Θ_obj)

Figure 16 – Camera View (the object appears at row y_px on the 0 to 240 px vertical axis)
The cross in the figure marks a possible location of the object, for ease of illustration; the object could be anywhere within the range of view.

    Θ_obj = (Θ_t + 18.05°) - (y_px × degrees per pixel_tilt)
Step 2: calculate the distance of the object away from the system, given Θ_obj

    d_obj = l_c / cos(Θ_obj) + h / tan(Θ_obj)

where d_obj is the distance of the object away from the system, and l_c is the length from the tilt axle to the center of the camera lens. Note that l_c is used instead of l_l because the object is currently in the perspective of the camera.
Step 3: find the system tilt angle Θ_l such that the laser point lands on the target, given d_obj

Figure 17 – Desired Tilt Angle (C = camera, L = laser pointer; the system tilts to Θ_l so that the laser intersects the floor at d_obj)
The desired system tilt angle can be calculated by solving for Θ_l in the following equation:

    d_obj = l_l / cos(Θ_l) + h / tan(Θ_l)

Note that l_l is now used in the relationship instead of l_c. Θ_l is the desired system tilt angle such that the laser point is on the target.
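The three steps above can be collected into a few lines of MATLAB. This is a minimal sketch under the geometry already described: the constants are the measured values quoted earlier, ypx is a hypothetical pixel row, and Step 3 is solved numerically with fzero rather than with the closed-form expression of Section 4.4.3.

    % Sketch of Steps 1-3: desired tilt angle from a target pixel row.
    h = 0.92; lc = 0.034; ll = 0.057;      % mount geometry (m), Section 4.1
    degPerPx = 0.1504;                     % tilt-direction deg/px
    thetaT = 45;                           % current tilt angle (deg)
    ypx = 100;                             % hypothetical target row (px)
    thetaObj = (thetaT + 18.05) - ypx*degPerPx;       % Step 1
    dObj = lc/cosd(thetaObj) + h/tand(thetaObj);      % Step 2
    f = @(th) ll./cosd(th) + h./tand(th) - dObj;      % Step 3
    thetaL = fzero(f, thetaObj);           % desired system tilt angle (deg)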
4.4.7  Desired Pan Angle from Camera Coordinate

Figure 18 – Desired Pan Angle (top view from the base of the system: the object lies at ground distance d_obj with lateral offset l; z is the camera-to-object distance, Θ_obj the object tilt angle, Θ the pan offset seen by the camera, and Θ_pan the required pan correction)
The cross is the location of the object within the camera's range of view. From the previous calculations in the tilt direction, d_obj, the distance of the object away from the system, is known.
Step 1: find the pan angle such that the object is in the center of the screen

Figure 19 – Camera View (the object appears at column x on the 0 to 320 px horizontal axis; Δx is its offset from the 160 px center)

    Θ = (x - 160 px) × degrees per pixel_pan
Step 2: calculate z, the distance between the camera and the object, and l

Figure 20 – Angle Geometry (right triangles relating d_obj, z, l, Θ_obj, and Θ)
Θ_obj is known from the previous calculation in the tilt direction.

    z = d_obj / cos(Θ_obj)
    l = z × sin(Θ)
Step 3: calculate the desired pan angle from Θ_pan

Solving for Θ_pan from the following relation:

    sin(Θ_pan) = l / d_obj

Finally,

    desired pan angle = current system pan angle - Θ_pan
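The pan steps translate just as directly. The sketch below continues the numeric example from the previous subsection (dObj and thetaObj as computed there; xpx is a hypothetical target column, and 0.1617°/px is the pan figure from Section 4.4.5).

    % Sketch of Steps 1-3 for the pan direction.
    dObj = 0.879; thetaObj = 48.0;          % example values from the tilt sketch
    xpx = 200;                              % hypothetical target column (px)
    dTh = (xpx - 160)*0.1617;               % Step 1: pan offset angle (deg)
    z = dObj/cosd(thetaObj);                % Step 2: camera-to-object distance
    l = z*sind(dTh);                        % lateral offset of the object
    thetaPan = asind(l/dObj);               % Step 3: pan correction (deg)
    % desired pan angle = current system pan angle - thetaPan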
4.5  Friction Identification, Parameter Identification

The pan and tilt motors are the same for the assigned kit: PITTMAN GM8712-11, 19.1 VDC, 6.3:1 internal gear ratio.
To identify the friction model, each axis was first isolated and tested separately. An impulse strong enough to overcome static friction was applied, followed immediately by a constant voltage. This was repeated from a negative voltage to a positive voltage that saturates the motor velocity, stepping in 0.05 V intervals, and each run was long enough for the velocity to reach its steady-state value.

To determine the friction of the pan axis, the negative voltage was varied from -2.6 V to -1.85 V with an impulse of -3 V, and the positive voltage ranged from 2.4 V down to 1.75 V with an impulse of 3 V. Similarly, the friction of the tilt axis was determined with the negative voltage ranging from -2.3 V to -1.5 V using an impulse of -2.5 V, and the positive voltage varying from 1.9 V to 1.4 V with an impulse of 2.1 V.

The steady-state values were found by averaging the final seconds of the velocity output, where the velocity has reached steady state. A least-mean-squares approximation was then applied to the non-zero, non-saturated steady-state values. The slope and intercept of the regression line represent the viscous friction and the Coulomb friction, respectively.
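As an illustration of this fit, the MATLAB sketch below performs the least-squares step on hypothetical arrays tau (applied torque) and w (steady-state velocity); the slope and intercept of each line give the viscous and Coulomb terms for that direction.

    % Sketch of the steady-state friction fit; tau and w are assumed column
    % vectors of applied torque (N·m) and steady-state velocity (rad/s).
    pos = w > 0;
    neg = w < 0;
    cPos = polyfit(w(pos), tau(pos), 1);   % cPos = [viscous, Coulomb], w > 0
    cNeg = polyfit(w(neg), tau(neg), 1);   % cNeg = [viscous, Coulomb], w < 0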
Gear ratio:

    N = (ratio of gear diameters) × (motor internal gear ratio) = (2.6 / 0.506) × 6.3 = 32.3715

Torque per volt:

    τ/V = N × (motor torque constant) × (amplifier current-to-voltage ratio)
        = 32.3715 × (2.16 × 10⁻² N·m/A) × (0.1 A/V) = 0.0699 N·m/V
4.5.1  Pan-Axis Friction

Figure 21 – Pan Axis Velocities (velocity vs. time family of curves; velocity in rad/sec over 0 to 30 sec)

Figure 22 – Pan Axis Torques (friction identification data: applied torque in N·m vs. steady-state theta-dot in rad/sec)
Pan Friction Parameters        Identified Values
Viscous Friction (Positive)    0.0010 N·m·s/rad
Viscous Friction (Negative)    0.0013 N·m·s/rad
Coulomb Friction (Positive)    0.1341 N·m
Coulomb Friction (Negative)    -0.1345 N·m

Table 2 – Pan Friction Parameters
4.5.2  Tilt-Axis Friction

Figure 23 – Tilt Axis Velocities (velocity vs. time family of curves; velocity in rad/sec over 0 to 30 sec)
Figure 24 – Tilt Axis Torques (friction identification data: applied torque in N·m vs. steady-state theta-dot in rad/sec)
Tilt Friction Parameters       Identified Values
Viscous Friction (Positive)    0.0008 N·m·s/rad
Viscous Friction (Negative)    0.0014 N·m·s/rad
Coulomb Friction (Positive)    0.1030 N·m
Coulomb Friction (Negative)    -0.1159 N·m

Table 3 – Tilt Friction Parameters
4.5.3  Motor Modeling via Friction Identification
Modeling the motor accurately is essential to properly developing the control system. The friction identification data for the pan and tilt assembly served as the baseline for the mathematical modeling of the motors in Simulink.
Several of the variables in the following equation of motion can be quantified from the specification sheets of the motors, establishing a voltage-to-torque relationship; the remaining variables can be quantified through the geometry of the assembly:

    (J_L + N²·J_M)·Θ''_L + (B_LV + N·B_MV)·Θ'_L + (B_LC + N·B_MC)·sgn(Θ'_L) = N·τ_M - m·g·l_c·sin(Θ_L) - τ_L

which can be simplified to the following:

    I·Θ''_L + a1·Θ'_L + a2·sgn(Θ'_L) = N·τ_M

a1 and a2, however, cannot be directly substituted with the viscous (B_V) and Coulomb (B_C) friction values determined previously, due to the setup of the model:

    Θ'_L = (N·(0.0699)·V - a2) / a1

a1 and a2 are the viscous and Coulomb friction multiplied by N, respectively.
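For reference, the simplified model can be simulated directly. The MATLAB sketch below integrates it with ode45 using the tilt-axis values quoted in Figure 26, treated here as assumptions, and a constant 2 V input; the gravity term is omitted, matching the zero gravity gain in the figure.

    % Sketch: simulate I*Θ'' + a1*Θ' + a2*sgn(Θ') = N*0.0699*V with ode45.
    I = 0.004664; a1 = 0.02489; a2 = 3.33328; N = 32.3715;  % Figure 26 values
    V = 2;                                    % constant input voltage (V)
    f = @(t, x) [x(2); (N*0.0699*V - a1*x(2) - a2*sign(x(2)))/I];
    [t, x] = ode45(f, [0 5], [0; 0]);         % x = [theta; thetaDot]
    plot(t, x(:,2)), xlabel('time (s)'), ylabel('velocity (rad/s)')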
Figure 25 – Motor Model with no Mount (Simulink diagram: step inputs pass through proportional gains into the 1-DOF tilt and pan models, whose outputs are quantized)
Figure 26 – Motor Model (tilt) (Simulink subsystem: the input voltage is saturated, scaled by the 0.0699 voltage-to-torque gain and the gear ratio 32.3715, divided by the inertia via a 1/0.004664 gain, and double-integrated to give position; viscous (-0.02489) and Coulomb (-3.33328, through a sign block) friction are fed back, and the gravity sin term has a gain of 0)
The SolidWorks model of the pan and tilt assembly can provide the inertia about each axis. The mass properties function yields a moment of inertia matrix, and the principal moments are given by the diagonal of this matrix. Determining the inertias with respect to each axis reduces to an eigenvalue problem: once the eigenvalues are determined, the value corresponding to the z-axis of the model is used for the tilt calculations and the value corresponding to the y-axis is used for the pan calculations.
Tilt Axis Parameter    Symbol   Identified Value
Load Inertia           J_L      4.777 × 10⁻⁵ kg·m²
Motor Inertia          J_m      1.6 × 10⁻⁶ kg·m²
Gear Ratio             N        32.3715

Pan Axis Parameter     Symbol   Identified Value
Load Inertia           J_L      1.4 × 10⁻³ kg·m²
Motor Inertia          J_m      1.6 × 10⁻⁶ kg·m²
Gear Ratio             N        32.3715

Table 4 – Motor Specifications
4.5.4  Motor Modeling via Parameter Identification
The goal is to identify a1, a2, a3, and a4 from the experimental data V(t) and Θ(t) of the following equation:

    Θ''_L + a1·Θ'_L + a2·sgn(Θ'_L) = a3·V + a4·sin(Θ_L)

The equation is multiplied by Θ' and integrated over one sampling period across the n-1 recorded data points. A chirp signal, with a gain that does not saturate the motor, was chosen as the input to the system. Using matrix manipulation, a1, a2, a3, and a4 for the tilt motor were estimated as follows:

a1 = 4.9340
a2 = 196.9860
a3 = 158.0191
a4 = -6.3883
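A simplified variant of this fit can be written as a direct least-squares regression. Note that the report's method integrates the equation against Θ' to avoid the double numerical differentiation used below; this sketch assumes theta and V are column vectors logged at a fixed sample time Ts.

    % Sketch: direct least-squares estimate of a1..a4 from logged data.
    Ts = 0.001;                              % assumed sample time (s)
    thDot  = gradient(theta, Ts);            % numerical derivatives
    thDDot = gradient(thDot, Ts);
    % Model: thDDot + a1*thDot + a2*sgn(thDot) = a3*V + a4*sin(theta)
    Phi = [-thDot, -sign(thDot), V, sin(theta)];
    a = Phi \ thDDot;                        % estimates [a1; a2; a3; a4]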
4.6  Control System

4.6.1  Controller Design
The control system is implemented using a three-term (PID) controller with the transfer function:

    C(s) = K_P + K_I/s + K_D·s = (K_D·s² + K_P·s + K_I) / s

K_P = proportional gain
K_I = integral gain
K_D = derivative gain
A PID controller has the following characteristics, summarized in the table below:

CL Response    Rise Time      Overshoot    Settling Time    Steady-State Error
K_P            Decrease       Increase     Small change     Decrease
K_I            Decrease       Increase     Increase         Eliminate
K_D            Small change   Decrease     Decrease         Small change

Table 5 – PID Characteristics
A proportional gain (K_P) generally reduces the rise time and reduces, but never eliminates, the steady-state error. An integral gain (K_I) completely eliminates the steady-state error, but generally slows down the transient response. A derivative gain (K_D) reduces overshoot and improves the transient response. However, K_P, K_I, and K_D are dependent on each other, so the correlations between them are not exact. [3]
The controller was first designed on the pan motor model obtained from parameter identification. The derivative term is realized in the form of a washout filter, in which s is approximated by s/(s/p + 1). [2]
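With the Control System Toolbox, the controller and its washout derivative can be written compactly. The sketch below uses the tuned gains listed after Figure 27; G stands for the identified motor model and is not defined here.

    % Sketch of the PID controller with a washout (filtered) derivative.
    s = tf('s');
    Kp = 84; Ki = 38; Kd = 2.7; p = 192;     % tuned gains (see below)
    C = Kp + Ki/s + Kd*s/(s/p + 1);          % derivative via washout filter
    % step(feedback(C*G, 1))                 % G = identified motor model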
Figure 27 – PID Controller (Simulink diagram: parallel proportional (kp), integral (ki/s), and washout-derivative (kd·s/(s/p + 1)) paths are summed to form the control output)
After tuning the controller, a step input was tested on the system:
KP = 84
KI = 38
KD = 2.7, p=192
Below is the closed-loop system response to a step input to the motor model:

Figure 28 – Closed Loop Response to a Step Input (desired vs. model response over 0 to 3.5 s)
Mo = 0.6291%
tp = 0.2900 s
tr = 0.0451 s
ts = 0.1170 s
ess = 0.1689%

where Mo is the percent overshoot, tp the time to peak, tr the rise time (10% to 90%), ts the settling time (2%), and ess the percent steady-state error.

This meets the desired specifications of settling time = 0.17 s (less than the average image processing time), overshoot < 1%, and steady-state error ≈ 0.
Implementing the same controller on the actual motor resulted in a similar response, and because the pan and tilt motors are the same model, the same controller worked well on both axes. In the actual system, the proportional gain was increased to 126 in order to get a faster rise time, at the cost of slightly more overshoot. In fact, the overshoot may serve the system well, as it may provide a predictive element to the controller.
The following are the step responses for the pan and tilt axes of the actual motor:

Figure 29 – Closed loop system response in the Pan axis (actual vs. desired response to a π/4 step input over 0 to 3.5 s)
Figure 30 – Closed loop system response in the Tilt axis (actual vs. desired response to a π/4 step input; time axis 0 to 3500)
Below is a summary of the controllers' performance:

        Pan Axis   Tilt Axis
Mo      0.3906     []
tp      0.185      2.395
tr      0.068      0.0868
ts      0.116      0.175
ess     0.3906     0.3906

Table 6 – Actual Controller Performance
4.6.2  Implementation of the Integrated System
In order for the system to shoot a moving object autonomously, the image processing and the control system must run continuously. Although the tests conducted to verify the system's competency with stationary objects were successful, running the system continuously introduced synchronization errors between the controller and the image processing code.

Specifically, the errors arise from the fact that the initial Simulink model runs much faster than the image processing. This leads to instability in the system: the image processing can only run as fast as it can capture and process its images, while the current pan and tilt angles returned from the encoders are updated every 0.001 s. This discrepancy in sampling rates ultimately produces invalid desired pan and tilt telemetry.
Initially, the most rudimentary solution to this problem was to run the two systems in a loop such that the Simulink model must finish its current iteration before entering another. Essentially, the system targets stationary objects continuously, without reinitializing to its starting position. This is achieved by having the model perform its processes based on the arithmetic difference in pan and tilt angles. However, calculating this difference requires the current tilt angle, and the only way to access this data is to stop the Simulink model. As a result, the constant starting and stopping of the simulation introduces a long delay and dramatically degrades the performance of the system.
A more novel approach was devised to overcome the lag experienced with the earlier method. This approach instructs the Simulink model to hold the current desired tilt angle until the image processing system finishes processing the next image. Holding the data is achieved by introducing a flag in the Simulink model that indicates when the image processing has calculated the next set of telemetry data. The new implementation achieves a sampling rate of more than 5 updates per second, whereas the old system achieved less than 1 update per second.
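A sketch of this hold-until-ready loop is shown below. All of the names here (getFrame, findCentroid, computeAngles, setSetpoints, raiseFlag) are illustrative stand-ins for the project's image processing, coordinate translation, and Simulink interface, not actual APIs.

    % Illustrative sketch of the flag-based synchronization loop.
    while tracking
        img = getFrame(cam);                  % runs at the camera's own rate
        [xpx, ypx] = findCentroid(img);       % image processing step
        [panDes, tiltDes] = computeAngles(xpx, ypx, panNow, tiltNow);
        setSetpoints(model, panDes, tiltDes); % update the held setpoints
        raiseFlag(model);                     % release the Simulink hold
    end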
Figure 31 – Final System Implementation
5  DESIGN VERIFICATION
Preliminary testing was done at the subsystem level of the project to gather specifications of the
hardware and to ensure that the subsystems were developing properly. However, validating the
subsystems in terms of their performance and functionality within the larger system required
joining the subsystems together first. The following sections detail the methods of verification
of each subsystem after various degrees of integration.
5.1  Mounting System
Because the preferred mounting design could not be fabricated in time, a compromise design was implemented, with shims installed to correct for fabrication errors. Despite these shortcomings, the mounting assembly performed well. From visual inspection, it is clear that the mass properties of the implemented assembly differ significantly from those of the proposed design; however, the control system managed to compensate for these differences. More importantly, the motors on the pan and tilt mechanism provided sufficient power to rotate the camera and laser with enough speed and accuracy.
5.2  Target, Infrared and Laser Feedback System
Prior to integrating the entire system, verification of the Infrared and Laser Feedback System was limited to manual manipulation of the laser pointer. These experiments showed that the sensitivity of the system exceeded the specifications that the pan and tilt system would require. With the given emitter and photodiode pattern, the sensitivity of the photodiodes could be adjusted so that the system would not give false positives, yet remained sensitive enough to detect the laser beam from the mounting assembly.
5.3  Image Processing
The image processing portion of the project was verified by developing preliminary code in MATLAB. A simple array of 15 infrared LEDs was created and placed in front of the camera to test the webcam's response to infrared light. Figure 32 shows that the string of IR LEDs is very prominent in the camera's view and that the camera's spectral range is sufficient for this project.
At this point the image processing took, on average, 0.5 seconds to execute. Testing revealed that the car travels approximately 34 centimeters in that time, making it very difficult for the control system to accurately track the car. The main reason for the slow execution is that all of the image acquisition and processing is done in MATLAB, and due to MATLAB's inherent overhead, heavy processing takes longer than desired. The figure below records the execution time per frame of a test run; the frames are processed images, and the white light is the detected IR light on top of the car.
Frame 1 (t=0..0.551 s)
Frame 2 (t=0.551..0.982 s)
Frame 3 (t=0.982..1.573 s)
Frame 4 (t=1.573..2.133 s)
Frame 5 (t=2.133..2.694 s)
Frame 6 (t=2.694..3.225 s)
Figure 32 – Sequence, post processing
In these figures the car was driven across the screen (from right to left) while images were captured. As can be seen, only four of the frames contain the car; the distance the car moved between frames is much too great. As a result, the algorithm was redesigned to eliminate the slow built-in functions.
To improve performance and reliability, the image processing was redesigned to remove MATLAB's image processing function calls, which required a new approach to processing the images. The new approach takes advantage of the fact that the IR LEDs mounted on the remote controlled car provide the brightest region in the image. Since the IR LEDs produce a bright, saturated area, locating the centroid of the saturated region gives the location of the moving car.

To locate the centroid of the brightest area, the locations of all pixels brighter than a certain value are collected. Once all the locations are found, the mean of the X values and the mean of the Y values are computed; this mean (X, Y) point is the centroid of the bright saturated region. This approach enabled the removal of the compute-intensive image processing functions.
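The centroid step reduces to a few lines of MATLAB with no toolbox calls, which is what makes it fast. A minimal sketch, where img is an RGB frame and thresh is an experimentally tuned assumption:

    % Sketch of the brightness-centroid step with no toolbox functions.
    bright = max(double(img), [], 3);       % brightest channel per pixel
    [rows, cols] = find(bright > thresh);   % locations of saturated pixels
    yc = mean(rows);                        % centroid row (Y)
    xc = mean(cols);                        % centroid column (X)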
Removing the slow image processing functions greatly improved the speed of the algorithm. The original algorithm took 0.5 seconds to process each image; the improved algorithm reduced this to 0.1 seconds on average, allowing roughly ten frames per second to be captured. Along with the higher frame rate, the distance the car moves between frames has been greatly reduced, to approximately 6.4 cm. Shrinking the distance the car travels between frames enables the control system to track the car more accurately as it moves, with less chance of losing track of the car altogether.
Figure 33 - Screenshot before processing
Above is an image of the car with the IR LEDs turned on. As is evident in the image, the IR
LEDs give off an extremely bright, saturated light. Running this image through the improved
algorithm produces the result shown below.
In the image above the car is covered in a black mouse pad to ensure that the LEDs are the
only source of light on the car. If the light from the IR LEDs reflects off of the car, it
shifts the centroid of the saturated light region. Covering the car in black material therefore
greatly increases the reliability of the image processing system.
Figure 34 – Screenshot after processing
The processed image, shown in Figure 34, displays a red X at the location of the centroid. The
coordinates of this location are fed back into the system to calculate the pan and tilt angles.
As displayed at the top of the image, processing took 0.01 seconds; this time is unusually fast
because the algorithm had already been run several times, so everything was resident in memory.
Even so, this is a major improvement over the original algorithm.
Figure 35 – Screenshot sequence, post processing
The sequence of images in Figure 35 illustrates how the car moves across the camera's field of
vision. As is evident from the images, the car does not move a great amount between frames,
which allows the control system to smoothly and accurately track the car as it moves along the
floor. Moreover, once the camera gains the ability to follow the car, the number of frames with
the car in view will ideally be limited only by the refresh rate of the camera, not by the speed
of the car. The varying position of the red X is due to the use of mean X and Y values: bright
areas present in other locations in the image shift the final position of the centroid.
One of the features of the proposed system is that it can detect the remote controlled car in
nighttime conditions. During tests under nighttime conditions it was determined that the camera
takes approximately one second to take a picture; once the image processing is added,
approximately 1.1 to 1.2 seconds pass per frame on average. These results are disastrous, as
the car will likely have moved across the camera's view within two frames.
The team's belief is that the camera needs a certain amount of light to take a picture; if the
camera does not have sufficient light, it pauses until adequate light becomes available. Testing
revealed that the camera would hang until the car entered the camera's field of view before
taking a picture, which demonstrates that the camera waits until light is present in the field
of view. Testing was also performed under dim lighting conditions; while these results proved
better, they were still not entirely acceptable.
Improved performance under dim lighting conditions further establishes that lack of light affects
the capture time of the camera.
During the course of developing and testing the system, several obstacles emerged. The first
was an issue with image acquisition, which is accomplished through a C++ DLL that acquires the
image through the Windows Video API. It was determined that, on occasion, the image being
processed was the previously acquired image. The only explanation for this behavior is that the
C++ DLL being used does not properly clear the image buffer each time it acquires an image. One
way to combat the issue would be to rewrite the C++ DLL, since access to the source code is
available; however, because this issue was discovered very close to the end of the semester,
rewriting the DLL was not feasible. Through further experimentation it was discovered that if
the program acquired two images back to back, the system would get the desired image. Although
this workaround adds a small amount of time to the image processing, it is still tolerable.
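The workaround is sketched below; acquire_frame is a hypothetical MATLAB wrapper around the C++
DLL call, used here only for illustration.

    % Double-acquisition workaround: the first frame may be the stale,
    % previously buffered image, so it is discarded.
    stale = acquire_frame();   % may return the previous image
    img   = acquire_frame();   % the second call returns the current image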
Another obstacle emerged towards the end of the semester: the system would follow the red dot
generated on the floor by its own laser pointer. When the system was integrated, many tests
were performed to evaluate how it would locate and track the remote controlled car. At first,
when the system was activated, the car would be found and the system would adjust its position
to point at the car. Shortly after moving to the car's location, however, the system would
start to move up and follow the red laser dot. The team quickly realized that the image
processing was detecting the laser pointer instead of the IR LEDs mounted on the car. The image
processing code therefore needed to be tweaked, and the fix turned out to be fairly simple:
since the laser pointer is red, evaluation of the red pixel value was removed from the code.
Eliminating red detection solved the problem.
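In terms of the earlier centroid sketch, the fix amounts to thresholding only the green and blue
channels, as illustrated below; the threshold value remains an illustrative placeholder.

    % Ignore the red channel so the laser dot cannot trigger detection.
    % rgb is assumed to be an M-by-N-by-3 uint8 color image.
    g = rgb(:,:,2);                           % green channel
    b = rgb(:,:,3);                           % blue channel
    [rows, cols] = find(g > 250 & b > 250);   % red pixel value not evaluated
    cx = mean(cols);
    cy = mean(rows);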
5.4 Modeling, Coordinate Translation System
To verify the validity of the modeling and the coordinate translation system, a trial was done
with a stationary target. The camera and laser assembly was initialized at 45 degrees below
horizontal and orthogonal to the edge of the table. With the target starting within the camera's
initial frame of view, the image processing was executed to obtain the position of the car via
its infrared light emitters, returning the coordinates of the target in the screenshot. These
coordinates then allow the system to output an appropriate pan and tilt angle so that the laser
beam hits the car. The trials done with the coordinate translation system in place demonstrated
that the system was accurate enough to hit the target most of the time. A small error was
sometimes observed, which may be due to the accuracy of the model; the small lens on the webcam
is known to cause substantial barrel distortion. Anomalies such as this affect the accuracy of
the system when it attempts to orient itself at targets that start out near the border of the
camera's field of view.
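The pixel-to-angle translation can be illustrated with a simplified pinhole-camera sketch. All
numerical values below (image size, fields of view, and the example pixel location) are
illustrative placeholders rather than the calibrated parameters developed in Section 4.4, and
the sketch omits the parallax and object-distance corrections that the actual model includes.

    % Simplified pixel-to-angle translation sketch (illustrative values).
    img_w = 320;  img_h = 240;              % image size in pixels
    fov_x = 40*pi/180;                      % assumed horizontal field of view
    fov_y = 30*pi/180;                      % assumed vertical field of view
    x_px = 200;  y_px = 120;                % example target location from image processing
    current_pan  = 0;                       % current orientation (rad)
    current_tilt = -pi/4;                   % initialized 45 degrees below horizontal
    % Angular offset of the target from the optical axis:
    d_pan  = ((x_px - img_w/2)/img_w)*fov_x;
    d_tilt = ((y_px - img_h/2)/img_h)*fov_y;
    % New pointing command relative to the current orientation:
    pan_cmd  = current_pan + d_pan;
    tilt_cmd = current_tilt - d_tilt;       % image Y grows downward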
Figure 36 – Stationary Target Implementation (Simulink block diagram: PCI-QUAD04 incremental
encoder counts, scaled by 2*pi/(1024*4), provide the current pan and tilt angles; the desired
and current angles feed the pan and tilt PID controllers, whose outputs drive the
PCIM-DAS1602/16 ComputerBoards analog outputs; auxiliary unit-delay and 1/0.001 gain blocks are
also brought out.)
The desired tilt angle and desired pan angle are calculated with another Simulink model, shown below:
Figure 37 – Desired Pan and Tilt Angle Calculation (Simulink block diagram: an embedded MATLAB
function maps the current pan and tilt angles, the target pixel location (x_px, y_px), and the
object distance to the desired pan and tilt angles in radians.)
5.5 Friction Identification, Parameter Identification
After applying these values to the equation and Simulink diagram, the response of the model was
plotted for several voltages to compare its accuracy to the observed response of the physical
system during friction identification.
First, a ramp input is used to test the model:
Figure 38 – Motor Model (tilt) from Parameter ID (Simulink block diagram: a ramp input drives
the motor model subsystem, with the subsystem outputs brought out for plotting.)
Figure 39 – Motor Model (tilt) Subsystem from Parameter ID (Simulink block diagram: the input
passes through a saturation block and gain a4; friction is modeled by gains -a1 and -a2, one
acting through a sign block; a gravity term acts through gain a3 and a sine function; two
cascaded integrators produce velocity and position.)
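The structure of this subsystem corresponds to a second-order motor model with viscous friction,
Coulomb friction, and a gravity-dependent torque. A minimal simulation sketch is given below;
the parameter values, and the assignment of a1 to the viscous term and a2 to the Coulomb term,
are assumptions made only for illustration.

    % Simulation sketch of the tilt motor model implied by Figure 39.
    % All parameter values are illustrative, not the identified values.
    a1 = 0.5;  a2 = 0.2;  a3 = 0.1;  a4 = 10;  u_max = 5;
    u = @(t) min(max(0.2*t, -u_max), u_max);    % saturated ramp input
    % x(1) = theta (position), x(2) = omega (velocity)
    f = @(t, x) [x(2);
                 a4*u(t) - a1*x(2) - a2*sign(x(2)) - a3*sin(x(1))];
    [t, x] = ode45(f, [0 30], [0; 0]);
    plot(t, x(:,2)); xlabel('Time (sec)'); ylabel('Velocity (rad/sec)');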
Below is a family of curves showing the simulated velocity response for several positive
voltages:
Figure 40 – Friction ID from Motor Model with no Mount (family of velocity vs. time curves for
several input voltages)
The steady-state velocities for the corresponding voltages match the least-mean-square fit of
the data obtained from friction identification.
To test whether the inertia values accurately model the motor performance, a chirp signal was
used as a test input and the simulated results were compared to the actual response of the motor.
Figure 41 – Open-Loop Response (Chirp) – Actual and Simulated comparison (velocity vs. time)
Unfortunately, the simulated results did not fit the actual results very well, so parameter
identification was used as an alternative method to find a better model. The results of the new
model were then compared against the actual results:
Figure 42 – Open-Loop Response (Ramp) – Actual and Simulated comparison (velocity vs. time)
Figure 43 – Open-Loop Response Comparison (chirp input; actual and simulated velocity vs. time)
This is the response of the model obtained through parameter identification, which simulates the
system better than the friction identification model does. While the lower peak values are
consistently larger in magnitude than the actual ones, the model can be fine-tuned by tweaking
the negative voltage saturation value.
5.6 Control System
The performance of the control system could only be assessed after the system had been fully
integrated. Figure 44 compares the desired position to the actual position of the system; it is
apparent that the system responds very quickly and precisely.
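Each axis is closed around a PID controller (see Figure 36). A generic discrete-time PID loop
for one axis is sketched below; the gains are illustrative rather than the tuned values, and the
1 ms sample period is an assumption based on the 1/0.001 gain in Figure 36.

    % Generic discrete PID loop sketch for one axis (illustrative gains).
    Kp = 20;  Ki = 5;  Kd = 0.5;  Ts = 0.001;    % assumed 1 kHz loop
    theta_des = 0.3*ones(1, 100);                % example desired angles (rad)
    theta     = linspace(0, 0.3, 100);           % example measured angles (rad)
    u = zeros(1, 100);  e_int = 0;  e_prev = 0;
    for k = 1:100
        e      = theta_des(k) - theta(k);        % encoder feedback error
        e_int  = e_int + e*Ts;                   % integral term
        e_dot  = (e - e_prev)/Ts;                % derivative term
        u(k)   = Kp*e + Ki*e_int + Kd*e_dot;     % analog output command
        e_prev = e;
    end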
Figure 44 – Desired vs. Actual Response Plots (desired and actual tilt and pan positions, in
radians, over an eight-second demo run)
6 COST AND SCHEDULE
Table 7 documents the supplies procured for the project and their cost. Table 8 depicts the
projected cost of investment in man hours. Finally, Table 9 describes the schedule of tasks that
had been completed. For the most part, the cost of the project and the scheduling of the project
stayed stable and consistent throughout the semester, despite the fact that a team member was
lost and there were several unanticipated setbacks in system development.
Components for the project were selected to maximize cost effectiveness while still fulfilling
the imposed requirements. The system uses common components to reduce cost and to allow for easy
repair should a component fail. At the same time, it has been shown that the components have
sufficient capabilities to allow the system to work properly within the imposed constraints.
The end product strikes a balance that provides an exceptional performance-per-dollar ratio.
Supplier | Model Number/Description | Qty | Cost | Total | Source
Jameco Electronics | JE21 Breadboard | 1 | $6.75 | $6.75 | Trojan Electronics
Generic | XC556RB Red LEDs | 2 | $0.39 | $0.78 | Trojan Electronics
Vishay Intertechnology | BPV10 Silicon PIN Photodiode | 4 | $1.14 | $4.56 | Newark InOne
IRC | 68k ohm resistor | 4 | $0.39 | $1.56 | Trojan Electronics
IRC | 1k ohm resistor | 4 | $0.39 | $1.56 | Trojan Electronics
Bourns | 3386P 20k ohm Trimming Potentiometer | 4 | $0.50 | $2.00 | All Electronics Corp.
National Semiconductor | 276-1711 LM324 Quad Op Amp | 1 | $1.49 | $1.49 | Radio Shack
Texas Instruments | 7408 Quad 2-Input AND Gate | 1 | $1.49 | $1.49 | Trojan Electronics
Texas Instruments | CD4001UB Quad 2-Input NOR Gate | 1 | $0.35 | $0.35 | Trojan Electronics
GC Electronics | 14-422-1M19 Hook-Up Wire | 1 | $2.48 | $2.48 | Trojan Electronics
Generic | Battery Holder | 1 | $0.70 | $0.70 | All Electronics
Generic | TLN-100 Infrared Emitter | 15 | $0.80 | $12.00 | Trojan Electronics
Nikko | Street Mayhem Mazda RX-8 Radio Control Car | 1 | $16.88 | $16.88 | Wal-Mart
Energizer | AA Battery | 6 | $0.61 | $3.66 | Wal-Mart
Energizer | 9v Battery | 1 | $2.44 | $2.44 | Wal-Mart
JP Electronics | Infrared Webcam | 1 | $20.00 | $20.00 | Hong Kong
Laser Pointer | Red Laser Pointer | 2 | $4.75 | $9.50 | All Electronics
TOTAL | | | | $88.20 |
Table 7 – Bill and Cost of Materials
Name | Hours | Cost per hour | Total
Jonathan Rothberg | 250 | $40.00 | $10,000.00
John Lee | 250 | $40.00 | $10,000.00
Jason Lam | 250 | $40.00 | $10,000.00
TOTAL HOURS: 750 | | TOTAL: | $30,000.00
Table 8 – Labor Costs
Week | Task | Member
1 | Propose project idea | Jon R., John L., Jason L.
2 | Discuss Design Approach | Jon R., John L., Jason L.
2 | Discuss Design Memo + Delegate work | Jon R., John L., Jason L.
2 | Develop image processing algorithm | Jon R., John L., Jason L.
2 | Get remote controlled car | John L.
2 | Work on proposal | Jon R., John L., Jason L.
2 | Initial Modeling | Jason L.
3–4 | Detector Circuit | John L.
5 | Finalize Project Proposal | Jon R., John L., Jason L.
6 | Refine image processing | Jon R., John L., Jason L.
7–15 | Refine laser detection circuit | John L.
7–15 | Model refinement | Jason L.
7–15 | Research alternate image processing methods | Jon R., Jason L.
7–15 | Final Circuit Development, Photodiode and Emitter layout | John L.
7–15 | Model and camera integration, stationary object | Jason L., Jon R.
7–15 | Program Development | Jon R.
7–15 | Circuit and car integration | John L.
7–15 | Model and camera integration, moving object | Jason L., Jon R., John L.
7–15 | Tune model, control | Jason L., John L.
7–15 | Discuss Progress report, delegate work | Jason L., Jon R., John L.
7–15 | System test, moving object | Jason L., Jon R., John L.
7–15 | Tune model, control | Jason L., John L.
7–15 | Progress Report | Jason L., Jon R., John L.
7–15 | System test, moving object (faster) | Jason L., Jon R., John L.
7–15 | Program refinement | Jon R.
7–15 | Tune model, control | Jason L., John L.
7–15 | System test, fast moving object w/ obstacles | Jason L., Jon R., John L.
7–15 | Program refinement | Jon R.
7–15 | Tune model, control | Jason L., John L.
7–15 | Discuss Final Presentation, delegate work | Jason L., Jon R., John L.
7–15 | Demonstration | Jason L., Jon R., John L.
7–15 | Final Presentation | Jason L., Jon R., John L.
7–15 | Discuss Final Report, delegate work | Jason L., Jon R., John L.
7–15 | Final Report | Jason L., Jon R., John L.
Table 9 – Project Schedule
7 CONCLUSIONS
The performance of the system clearly demonstrates that the objectives put forth have been
satisfied and that the project should be considered a success. The taped demonstration shows
that the system can operate in extreme lighting conditions, even total darkness. Moreover, the
videos show that the controller and image processing are robust enough to hit the target
consistently and accurately. The data suggests that the mathematical model that has been
developed is accurate enough to consistently hit the target within two screen captures. It has
also been demonstrated that the method of implementation allows the system to correct itself
and hit the target despite the small inaccuracies of the physical modeling.
Although the objectives have been achieved, there are several aspects of the project that call for
prospective enhancements. The most notable deficiency of the system is the lack of an algorithm
that attempts to predict the position of the moving target. Currently, the system can only respond
to the current location of the target and cannot hit the target unless it slows down to allow the
system to catch up.
Another way to increase the system's performance is to implement more advanced hardware,
particularly a more responsive camera, in order to increase the sampling rate of the system. By
doing so, the system will respond more continuously and the lag in the system will be
minimized.
A qualitative enhancement to the system would be to control the laser pointer so that it shoots
discrete pulses instead of remaining on continuously. This would better simulate real-world
applications, for example in the defense industry, where there is a limited amount of
ammunition and time to take out a given target.
Finally, the physical modeling of the system may be improved to accommodate barrel distortion
anomalies of the webcam. While this was identified as a problem during the development of the
system, it was decided that the geometry involved in modeling such anomalies would be beyond
the scope of the course.
8 REFERENCES
[1] R. O'Neil, R. D. Lewis, L. Lim, C. Palerm, and J. Harmsen, Laboratory Introduction to
Embedded Control, lab manual version 9.4. New York: RPI, 2002.
[2] http://www.cat.rpi.edu/~wen/ECSE4962S03/session05_021203.pdf
[3] http://www.engin.umich.edu/group/ctm/PID/PID.html
Appendix – Circuit Schematics
IR Photodiode Circuit (John Lee, Vision Tracking Turret, February 10, 2005): four photodiodes,
each with a 68 kOhm / 1 kOhm resistor network into an LM324 quad op amp stage, combined through
7402 NOR and 7408 AND gates to drive a status LED, powered from +3V.
IR LED Circuit (John Lee, Vision Targeting Turret, February 10, 2005): an array of fifteen IR
LEDs powered from +3V.
Appendix – Component Datasheets
CD4001UB Quad 2-Input NOR Gate datasheet (Texas Instruments, SCHS016C, revised September 2003;
data sheet acquired from Harris Semiconductor).
LM124/LM224/LM324/LM2902 Low Power Quad Operational Amplifiers datasheet (National
Semiconductor, August 2000).
The final report, and the activities it entailed, have been a collaborative effort among the
three group members. Proofreading was done by all members of the team as well.
Jason Lam – Friction and Parameter Identification, Coordinate System modeling, Motor
modeling, Control System, Controller Design
John Lee – Laser and camera mounting assembly, Laser Detection circuit, Motion Range,
Assessment, formatting
Jonathan Rothberg – Image Processing system, presentation, formatting
Jason Lam
________________________
John Lee
________________________
Jonathan Rothberg
________________________
Jason Chung Yui Lam
Email: lamj2@rpi.edu
Current Address: Voorhees 023, E-Complex
1999 Burdett Avenue
Troy, NY 12180 USA
Tel: (518) 265-7142
Education:
Rensselaer Polytechnic Institute Troy, NY
Cumulative GPA: 3.76/4.00
Dual B.S. in Electrical Engineering, and Computer & Systems Engineering
Minor: Economics
Expected Date of Graduation: May 2005
Relevant Experience:
Computer Graphics, RPI
Mini-Golf Game
Spring 2004
- Co-wrote a mini-golf game in OpenGL featuring real-time physics, collision detection and texture
mapping
Undergraduate Research Program, Electrical Impedance Tomography Lab, RPI
Filter Design for an Electrical Impedance Tomography System
Summer 2003
- Designed the signal processing part of the safety circuitry in the Adaptive Current Tomography (ACT) 4
System
- Designed filters to separate ground current signal into channels centered at ACT 4 specific operating
frequencies
Introduction to Engineering Design, RPI
Table Tennis Robot
Spring 2003
- Designed the ball detection unit control system with piezoelectric sensors
- Designed the DC and stepper motor control system, and the ball indicator system
Krueger Engineering (Asia) Ltd., Wanchai, Hong Kong, China
Electrical AutoCAD Technician
Summer 2002
- Drafted electrical and lighting plan for a high-school renovation project with AutoCAD
Honors:
Tau Beta Pi Engineering Honor Society
Eta Kappa Nu Electrical and Computer Engineering Honor Society
RPI School of Engineering Dean’s List, Fall 2001 – Present
Electrical & Computer Systems Engineering (ECSE) Honor Seminar, Spring 2003
Bank of America Achievement Award in Calculus in May 2001
Woodside Priory School Academic Excellence in A.P. Statistics in May 2000
Leadership/Activities:
National Collegiate Table Tennis Association (NCTTA), Upstate New York Division Director (May 2003 – Present)
RPI Table Tennis Club, President/Webmaster (http://tabletennis.union.rpi.edu) (May 2002 – Present)
RPI Table Tennis Team (August 2001 – Present)
RPI Asian Awareness Weekend Committee, Co-Chair, Night Market Chair (August 2002 – May 2004)
RPI Hong Kong Students Association, Activities Director (May 2002 – May 2003); Advisor (May 2004 – Present)
RPI Concert Choir, Tenor (August 2001 – May 2004); Public Relations/Webmaster (http://chorale.union.rpi.edu) (May 2002 – May 2003)
RPI Symphony Orchestra, 2nd Violinist (August 2001 – May 2002)
Technical Skills:
Languages: C/C++, JAVA, HTML, PHP, MySQL, OpenGL, V+
Software: SolidWorks, AutoCAD, LogicWorks, PSpice, Maple, Adobe Photoshop, PageMaker, MS Office, FrontPage, Access, Macromedia Dreamweaver
Citizenship:
Citizen of Hong Kong, China with British Passport
Languages:
Fluent in both written and spoken English and Chinese (Cantonese)
Jonathan Rothberg
895 Old Mill Road, San Marino, CA 91108
Cell: (914)-329-5820 School: (518)-276-3226
rothbj@rpi.edu
OBJECTIVE
To obtain a fulltime position in software engineering or a related computer science field.
EDUCATION
Rensselaer Polytechnic Institute, Troy, NY
Bachelor of Science Degree: Dual Major, Expected Graduation May 2005
- Computer and Systems Engineering
- Computer Science
GPA: 3.00 Overall; 3.3 in Major
Rockland Community College, Suffern, NY (2000-2002)
COMPUTER SKILLS
Operating Systems: Windows (98, XP), UNIX (Linux)
Programming Languages: C, C++, C#, JAVA, Embedded C (Motorola 68HC11)
Applications: Microsoft Office, Flash, DreamWeaver, Logic Works, Maple v8.0, Visual
Studio.NET, WinRunner.
HONORS & AWARDS
Dean’s list at Rensselaer Polytechnic Institute and Rockland Community College
Computer Science Honor Society Candidate.
WORK EXPERIENCE
Answer Financial, Inc., Encino, CA (Dec 2004 – Jan 2005; Summer 2004)
Software Developer Intern
Responsibilities: Developed an Internet Explorer plug-in, utilizing C# and the
.Net framework, to improve the efficiency of the agent’s daily operations.
Quality Assurance Analyst Intern
Responsibilities: Performed pre-release software testing of Microsoft.NET and
Visual Basic based Web application for the sale of personal lines of
insurance. Worked closely with development staff to resolve issues and
wrote automated test scripts.
Rensselaer Polytechnic Institute, Troy, NY (Jan 2004 – May 2004)
Teaching Assistant
Responsibilities: Helped students with lab sessions and homework.
WealthPlex, Inc., Wilmington, DE (May 2003 – August 2003)
Developer
Responsibilities: Developed and designed a website for a wealth management
company. Developed stock quoting, portfolio management and account
management systems. Worked closely with the CEO to deliver an
appealing and functional Web site.
Technologies Used: ASP.NET(C#), XML, ColdFusion, PHP, MySQL, MS
SQL 2000
ACTIVITIES
Reading, learning new programming languages, fitness and exercise, writing many of my
own programs
John Lee
Permanent Address:
20 Confucius Plaza 32B
New York, NY 10002
(212) 226-2985
leej19@rpi.edu
Campus Address:
2004 15th St. 2nd Fl.
Troy, NY 12180
(917) 553-8300
Objective
A full-time position in the Electrical and/or Computer Systems Engineering field for an
open-minded and hardworking student. Primary goal is to attain experience in space and
satellite systems, specifically signals, algorithms, communications and control systems.
Available December 2005.
Education
Rensselaer Polytechnic Institute, Troy, NY (Aug 2001 – Dec 2005)
- B.S. Electrical Engineering, Computer Systems Engineering, graduating Dec 2005
- G.P.A. – 3.28 / 4.0
Work Experience
The Boeing Company, Seal Beach, CA (May 2004 – Dec 2004)
Coop Student, Space and Intelligence Systems
- Security Clearance: SSBI
- Collaborated with systems engineers to create a database that documents the connectivity between each satellite subsystem
- Supported Test and Verification group for several test phases and readiness reviews
- Ran and debugged test scripts on non-flight hardware to validate system integration
Hunter College High School, New York, NY (Jan 2001 – June 2001)
Intern, Computer Department
- Provided hardware and software support for school faculty
- Upgraded 30+ systems for school's new computer lab
- Edited layout for school's primer, The Umbrella
Arthur Chabon Architects, New York, NY (Sept 2000 – Jan 2001)
Intern
- Performed basic drafting on elevations
- Assisted with blueprints and CAD files
- Maintained client accounts, portfolios, resource library
Project Experience
Control System Design (Spring 2005):
- Research, design, development, and implementation of an image-based tracking system
- System would recognize, follow and shoot a moving target without user intervention
- Project encompassed system modeling, image-processing algorithms, laser detection system
Embedded Control (Fall 2002):
- Developed and implemented various subsystems to enable a Smart Car to autonomously traverse various predefined tracks
- Systems include optical tracking, motor control, steering control and speed control
Intro to Engineering Design (Fall 2002):
- Research and implementation of infrared circuitry to detect various parameters and characteristics in a Ball Testing Machine
Other Coursework:
Control Systems Design, Applied Cryptography, Discrete Time Systems, Data Structures and
Algorithms, Signals and Systems, Computer Components and Operations, Microelectronics
Technology, Engineering Graphics & CAD, Digital Electronics, Probability for Engr.
Applications, Fields and Waves, Leadership Training
Honors, Awards and Leadership
Tau Beta Pi Engineering Honor Society, Member (Fall 2002 – Present)
Eta Kappa Nu Electrical and Computer Systems Honor Society, Member (Spring 2003 – Present)
Asian Awareness Weekend, Treasurer (Spring 2004)
- Managed a budget of over $5000 for a campus-wide event that unites on-campus Asian organizations to promote culture, heritage, and diversity
Skills
- Engineering: PSpice, LogicWorks, MATLAB, LabView, Maple, SolidWorks
- Programming: Microsoft Visual C++, Microsoft Visual Studio.NET, OpenGL, Tcl/Tk
- Microsoft Windows XP/2000/98/95, Microsoft Office (Word, Excel, PowerPoint)
Volunteer Experience
Listening to the City, New York, NY
Volunteer
x
Logistics for a town hall meeting between 5,000 citizens and
The Lower Manhattan Development Corporation
Spring 2005
Fall 2002
Fall 2002
Fall 2002 - Present
Spring 2003 - Present
Spring 2004
Summer 2002