Monocular Vision for Collision Avoidance in Vehicles

by

Kyle John Veldman

Submitted to the Department of Mechanical Engineering in Partial Fulfillment of the Requirements for the Degree of Bachelor of Science in Mechanical Engineering at the Massachusetts Institute of Technology

June 2015

© 2015 Massachusetts Institute of Technology. All rights reserved.

Signature of Author: (signature redacted)
Department of Mechanical Engineering
May 22, 2015

Certified by: (signature redacted)
Kamal Youcef-Toumi
Professor of Mechanical Engineering
Thesis Supervisor

Accepted by: (signature redacted)
Anette Hosoi
Professor of Mechanical Engineering
Undergraduate Officer

Monocular Vision for Collision Avoidance in Vehicles

by

Kyle John Veldman

Submitted to the Department of Mechanical Engineering on May 22, 2015 in Partial Fulfillment of the Requirements for the Degree of Bachelor of Science in Mechanical Engineering

ABSTRACT

This thesis presents an experimental study, supported by Ford Global Technologies, LLC, of the potential to substitute the stereovision systems used in car automation with monocular vision systems. The monocular system pairs a camera and a passive lens with an active lens. Most active lenses require linear actuation to adjust the optical parameters of the system, but this experiment employed an Optotune focus tunable lens driven by a Lorentz actuator, a much more reliable arrangement. Tests were conducted in a lab environment to capture images of objects at different distances from the system and to pass those images through an image processing algorithm that applies a high-pass filter to separate in-focus regions of each image from out-of-focus ones. Although the system is in the early phases of testing, monocular vision shows the ability to replace stereovision systems. However, additional testing must be done to acclimate the apparatus to environmental factors, reduce processing time, and redesign the system for portability.

Thesis Supervisor: Kamal Youcef-Toumi
Title: Professor of Mechanical Engineering

ACKNOWLEDGEMENTS

I would like to express my sincere gratitude to my thesis advisor, Professor Youcef-Toumi, and the amazing group of people that he has assembled in the Mechatronics Research Lab. I owe a great deal to Iman Soltani Bozchalooi, a doctoral candidate in the Mechatronics Research Lab. I've learned a great deal from him throughout this process and will be forever grateful for his tutelage.

I would also like to thank the other members of the Mechatronics Research Lab for making me feel so welcome, exposing me to their heavily diversified endeavors, and helping me with my research. It has truly been a pleasure to work with all of you during my time here.

I gratefully acknowledge funding support from Ford Global Technologies, LLC, with Mohsen Lakehal-ayat as program manager.

Finally, I would like to thank my family and friends for all their support and encouragement along the way.

TABLE OF CONTENTS

Acknowledgements .......................................................... 4
List of Figures ........................................................... 6
Introduction .............................................................. 7
Background ................................................................ 9
Requisite Optics ...................................................... 9
Focus Tunable Lenses .................................................. 9
Experiment ................................................................ 11
System Design ......................................................... 11
Hardware .............................................................. 11
Software .............................................................. 12
Image Processing Algorithm ............................................ 13
Implementation ........................................................ 15
Discussion ................................................................ 18
Conclusion and Recommendations ............................................ 19
Bibliography .............................................................. 21
Appendix A: Xenon Ruby 2.5/25 Data Sheet .................................. 22
Appendix B: Optotune 10-30-Ci Specification Sheet ......................... 24
Appendix C: LabVIEW Block Diagrams ........................................ 31

LIST OF FIGURES

Figure 1: Application of the Optotune tunable lens technology to change the geometry of two types of lenses. In the upper images, the lens is transformed from convex to concave by pushing the ring that forms the lens towards the container, filling the lens with liquid. In the lower images, a ring pushes on the outer membrane, moving the liquid to the center of the lens. .......... 10

Figure 2: A high-level breakdown of the experimental system into its hardware and software components. (Icons designed by Freepik) .......... 11

Figure 3: The first iteration of the monocular vision setup showing a) the Optotune EL-10-30-Ci tunable focus lens, b) the Ruby 2.5/25 lens, and c) the IDS UI-3220CP camera. .......... 12

Figure 4: The Relevant Area selection (shown in red) for the closest plane of focus (left) and the farthest plane of focus (right) after horizon removal. .......... 14

Figure 5: Image post segmentation and post horizon removal. Note: the Relevant Area for this image is the entire field of view. .......... 15

Table 1: A breakdown of the image processing algorithm in LabVIEW. .......... 17

INTRODUCTION

As vehicle autonomy has become an increasingly popular topic in engineering fields, the US Department of Transportation has established a number of policies to govern the emerging technology and to provide guidance to states and institutions that allow testing of these technologies. One such policy [1], released by the National Highway Traffic Safety Administration, defined the levels of vehicle automation as follows:

(1) No Automation - The driver is in complete control of all vehicle functionality at all times.
(2) Function-Specific Automation - The vehicle employs one or more specific automated functions (e.g., electronic stability control).

(3) Combined Function Automation - At least two primary control functions are automated to relieve the driver from having to monitor them (e.g., adaptive cruise control).

(4) Limited Self-Driving Automation - The driver can seize control of safety-critical functions in the event of dangerous conditions (e.g., hazardous weather), but the vehicle can operate independently in the absence of such events.

(5) Full Self-Driving Automation - The vehicle monitors conditions and controls functionality with minimal input from the driver (e.g., destination selection) during the entire trip.

The lower levels of vehicle autonomy are continually being supplemented as research produces technical innovation, with the current focus on Combined Function Automation. Adaptive cruise control, lane detection, and blind spot monitoring are three examples of the dozens of automated functionalities that vehicles can now employ. Stereo vision - deriving information about a scene from multiple cameras - has enabled a number of these technologies and led to various advancements, including collision avoidance systems. These systems alert the driver when a collision is imminent and can sometimes seize autonomous control to stop the vehicle. However, because stereo vision relies on multiple cameras, there is an accompanying spatial requirement for the camera systems, which can limit potential applications. This thesis explores the applicability of monocular computer vision - relying solely on a single camera for image processing - as an alternative to the more popular stereovision systems. If employed, monocular vision could lead to more diverse applications due to its condensed size and versatility.

BACKGROUND

Requisite Optics

A compound lens system is a series of two or more lenses; in this experimental setup, an active lens and a passive lens are coupled together. The passive lens has a fixed focal length - the measure of how strongly the lens converges or diverges light - while the active lens has an adjustable focal length. By adjusting the focal length of the active lens, the plane of focus of the system - the plane perpendicular to the optic axis where the focal point lies - can be changed, which allows the system to focus on objects at different distances.

The more common stereovision systems use two or more cameras, which allow "3D information [to] be obtained from a pair of images, also known as a stereo pair, by estimating the relative depth of points in the scene" [3]. The two images capture the same scene from two angles, and triangulation is used to retrieve the desired information.

An associated difficulty with compound lens systems is that the depth of focus - the distance in front of and behind the plane of focus within which objects appear in focus - becomes larger as the plane of focus moves farther from the system. Therefore, when looking at objects a significant distance away from the compound lens system, more and more objects appear in focus despite their varying distances from the setup.
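To make the focusing relationship concrete, the thin lens equation of geometric optics (a standard result, used here as a single-lens simplification of the compound system) relates the focal length $f$, the object distance $d_o$, and the image distance $d_i$:

$$\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}$$

With the sensor position, and hence $d_i$, fixed, the plane of focus sits at $d_o = f d_i / (d_i - f)$. Using purely illustrative numbers, with $d_i = 26$ mm, tuning $f$ from 25.0 mm to 25.5 mm moves the plane of focus from 650 mm to 1326 mm: a sub-millimeter change in focal length sweeps the plane of focus over a large range, which is exactly the effect the active lens exploits.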
Focus Tunable Lenses

Most compound lens systems - apparatuses that contain more than one lens - use lenses of solid glass or plastic. Systems with adjustable magnification or zoom (e.g., cameras) typically comprise a series of passive (stationary) lenses and active (dynamic) lenses; in such systems, the active lenses are moved backward or forward to adjust the zoom or focus. Optotune, however, has developed a different form of active lens that employs mechanics similar to those of the human eye. The eye contains an elastic lens that changes shape to adjust focus [4]. Optotune adopted this mechanism to create "focus tunable lenses [which] are shape-changing lenses based on a combination of optical fluids and a polymer membrane" [2]. The geometry of the lens (see Figure 1) can be altered by controlling "a circular ring that pushes onto the center of the membrane [that] shapes the tunable lens" [2]. These changes are achieved by controlling the amount of current passed to the Optotune system, which alters the focal length.

Figure 1: Application of the Optotune tunable lens technology to change the geometry of two types of lenses. In images (a) and (b), the lens is transformed from convex to concave by pushing the ring that forms the lens towards the container, filling the lens with liquid. In images (c) and (d), a ring pushes on the outer membrane, moving the liquid to the center of the lens [2].

Using the Optotune lens has a number of advantages. Altering the geometry from one extreme (concave) to the other (convex) is analogous to moving a lens of fixed geometry several centimeters within a compound system, but it removes the need for linear actuation. This results in a corresponding drop in required power and makes the entire system more compact. Due to the material properties of the focus tunable lens, the apparatus is also more robust and lighter weight. To summarize, there are "five main advantages of focus tunable lenses over traditional optics: compact design, less mechanics, fast response, low power, and less tolerance sensitivity" [2].

EXPERIMENT

System Design

The monocular vision system can be divided into two primary components, hardware and software, with both controlled by a central computer (see Figure 2). A simplified code sketch of this capture loop follows the figure description.

Figure 2: A high-level breakdown of the experimental system into its hardware and software components. (Icons designed by Freepik.) The central controller (1) passes an oscillating current to (2) the active lens of the compound lens system, changing the plane of focus. As the plane of focus shifts, (3) the camera captures images to be sent to the controller. The controller then (4) segments the image, (5) applies a high-pass filter to those segments to determine whether an object lies in the focal plane of the image, and (6) examines the past analyses of that segment to determine where the object lies in the series of images [5][6].
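The following Python sketch makes steps (1)-(3) of Figure 2 concrete. The actual system is implemented as a LabVIEW block diagram, so the driver objects and method names here (TunableLens, Camera, set_current, grab_frame) are hypothetical stand-ins, and the drive-current range is illustrative rather than taken from the hardware.

```python
import numpy as np

class TunableLens:
    """Hypothetical stand-in for the Optotune lens driver."""
    def set_current(self, milliamps):
        # In the real system, the drive current reshapes the lens membrane,
        # changing its focal length and hence the plane of focus.
        self.current = milliamps

class Camera:
    """Hypothetical stand-in for the IDS camera driver."""
    def grab_frame(self, shape=(480, 640)):
        return np.random.rand(*shape)  # placeholder for real image data

def focus_sweep(lens, camera, currents):
    """One near-to-far sweep of the plane of focus (steps 1-3 in Figure 2).

    Each frame is stored with the drive current that produced it; the lens
    calibration maps that current to a plane-of-focus distance, so every
    image is labeled with the distance at which it is sharpest.
    """
    frames = []
    for drive in currents:
        lens.set_current(drive)                      # step (2): shift the plane of focus
        frames.append((drive, camera.grab_frame()))  # step (3): capture and label
    return frames

# Example: sweep an illustrative 0-300 mA drive range in ten steps.
sweep = focus_sweep(TunableLens(), Camera(), np.linspace(0.0, 300.0, 10))
```

Each (current, frame) pair then enters the segmentation and high-pass filtering stages described under Image Processing Algorithm below.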
Hardware

The compound lens system has three parts in this first iteration (see Figure 3). The first is the camera: an IDS UI-3220CP was selected for its global shutter, which is used to capture fast-moving objects; likewise, the camera's complementary metal-oxide-semiconductor (CMOS) sensor allows for high-quality images in high-dynamic-range, high-contrast scenes. These and other features make the camera highly suitable for machine vision applications. A Ruby 2.5/25 lens was selected as the passive lens for its robustness and its ability to maintain set optical parameters during continuous use (see Appendix A for more technical information). An Optotune focus tunable lens, specifically the EL-10-30-Ci industrial version (specifications available in Appendix B), was chosen to avoid the obstacles that accompany an active lens requiring mechanical actuation [5][6].

Figure 3: The first iteration of the monocular vision setup showing a) the Optotune EL-10-30-Ci tunable focus lens, b) the Ruby 2.5/25 lens, and c) the IDS UI-3220CP camera.

Software

LabVIEW by National Instruments (NI) is used at this stage to moderate the system. The Optotune active lens block diagram (LabVIEW files provided by Optotune) can be found in Appendix C. The NI Vision Acquisition Software and Advanced Signal Processing Toolkit accommodate the required image processing: the Vision Acquisition Software creates an interface with the compound lens system, and the Advanced Signal Processing Toolkit applies a high-pass filter to the captured images to perform edge detection.

LabVIEW is essential to the experimental setup. Each image must be labeled with the current that was supplied to the active lens when the image was taken and with the matching distance to the plane of focus of the camera. Because all of this information must be integrated simultaneously, LabVIEW provided the optimal functionality at this phase.

Image Processing Algorithm

As the plane of focus of the camera system oscillates repeatedly from its near extreme to its far extreme - driven by the current passed to the active lens - each frame is saved along with the associated current value. Those images are then sent through the stages of the system's image processing.

The first stage parses the image into the Relevant Area. When the plane of focus is nearest the camera, the Relevant Area comprises the entire image; everything in the field of view is important, as anything could be a potential hazard. When the plane of focus is farthest from the camera, much less of the image is significant because the objects of importance will appear much smaller; therefore, the Relevant Area is much smaller. An example of this selection is shown in Figure 4.

Figure 4: The Relevant Area selection (shown in red) for the closest plane of focus (left) and the farthest plane of focus (right).

The Relevant Area of the image is then segmented into a grid (see Figure 5). The box size within this grid varies with the distance to the plane of focus. As the plane of focus moves farther from the driver, potential threats (e.g., other cars) occupy smaller segments of the image, so the box size must shrink accordingly for relevant objects to remain distinguishable.

Figure 5: Image post segmentation. Note: the Relevant Area for this image is the entire field of view.

Once the image is segmented, the entire image is passed through a wavelet transformation. The detail coefficients are kept for further processing, so the wavelet transform can be interpreted as a high-pass filtering step. This effectively removes the areas that are out of focus and leaves the edges of whichever objects lie in the plane of focus. The remainder is then used to compute the energy within each box of the segmented image for further analysis: the more objects within the plane of focus, the higher the energy. This same process is applied to every image, and the resulting archive of information is analyzed for energy spikes that could correspond to potential collisions or obstacles [5][6]. A numerical sketch of the high-pass and energy summation steps follows.
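The sketch below reproduces this stage numerically in Python, with PyWavelets standing in for the NI Advanced Signal Processing Toolkit; the wavelet choice ("db2") is an illustrative assumption, while the 30 x 40 block size mirrors the divider values visible in the LabVIEW Formula Node in Table 1.

```python
import numpy as np
import pywt  # PyWavelets, standing in for the NI toolkit used in the thesis

def block_energies(image, block=(30, 40), wavelet="db2"):
    """High-pass an image via its wavelet detail coefficients, then sum the
    energy (sum of squared values) inside each grid block.

    In-focus edges survive the high-pass step, so blocks that contain objects
    lying in the current plane of focus receive high energy scores.
    """
    approx, details = pywt.dwt2(image, wavelet)
    # Zeroing the approximation band and inverting the transform keeps only
    # the detail coefficients, i.e. acts as a high-pass filter.
    highpass = pywt.idwt2((np.zeros_like(approx), details), wavelet)
    highpass = highpass[: image.shape[0], : image.shape[1]]  # trim padding

    n_rows = image.shape[0] // block[0]
    n_cols = image.shape[1] // block[1]
    energies = np.zeros((n_rows, n_cols))
    for r in range(n_rows):
        for c in range(n_cols):
            tile = highpass[r * block[0]:(r + 1) * block[0],
                            c * block[1]:(c + 1) * block[1]]
            energies[r, c] = np.sum(tile ** 2)  # edge energy of this box
    return energies

# Example: a random frame stands in for a captured image.
frame = np.random.rand(480, 640)
print(block_energies(frame).shape)  # (16, 16) grid of per-box energies
```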
Implementation

The image processing algorithm was implemented in LabVIEW and showed promise in this phase of the project. Table 1 shows a breakdown of the relevant portions of the LabVIEW block diagrams. Appendix C contains the entire block diagram for additional reference.

Table 1: A breakdown of the image processing algorithm in LabVIEW. [The accompanying block-diagram screenshots are images and did not survive scanning.]

Image Processing Step                    | Description
Capture                                  | Image data is sent from the camera to the computer.
High-Pass Filter and Wavelet Analysis    | The image is passed through a high-pass filter and the resulting array is used for further analysis.
Segmentation and Energy Summation        | The array resulting from the high-pass filter is separated into the desired block size and the energy of each block is derived.
Historical Analysis                      | Each segmented block of the image is compared over multiple frames and displayed on a strip chart.

The Segmentation and Energy Summation step is carried out in a LabVIEW Formula Node whose C-style code, reconstructed from the scan, is:

    int32 arraySize, horizDivider, vertDivider;
    int32 i, j;
    int32 currentRow, currentCol;
    float64 currentVal;
    float64 a[8][8];

    horizDivider = 30;
    vertDivider = 40;
    arraySize = sizeOfDim(source, 0);

    for (i = 0; i < arraySize; i++) {
        currentRow = floor(i / horizDivider);
        for (j = 0; j < arraySize; j++) {
            currentCol = floor(j / vertDivider);
            currentVal = pow(source[i][j], 2);
            a[currentRow][currentCol] = a[currentRow][currentCol] + currentVal;
        }
    }

DISCUSSION

This experiment successfully used LabVIEW to apply an image processing algorithm to a monocular vision system while simultaneously controlling that system. The system - comprising a passive lens, an Optotune adjustable lens, and a camera - was able to capture images in the lab environment of objects at varying distances from the camera. Each frame was then successfully passed through a wavelet analysis and segmented. Analyzing images taken over multiple cycles was not accomplished within the scope of this project.

CONCLUSION AND RECOMMENDATIONS

Based on preliminary testing, the monocular vision system shows the potential to substitute for the more commonly used stereovision system for collision avoidance and detection. Although stereovision has the advantage of accounting for depth by using multiple cameras, a single-camera system - using an oscillating active lens and an image processing algorithm - appears to provide a suitable means of machine vision. The success found in a lab environment at this stage suggests that vehicular applications are feasible. However, since this project is still in an early phase, a number of steps must be completed before monocular vision's viability for use in traffic can be confirmed.

(1) All testing was done in the lab under ideal conditions; in the next phase, testing needs to be done outdoors to account for environmental factors.

(2) Test objects were chosen for their convenient geometry. The system needs to be tested on vehicles for more accurate results, both indoors (in a parking garage) and outdoors.

(3) The final monocular vision setup needs to be portable. The current system relies heavily on a desktop's processing power; future iterations need to be moveable, robust, and compact.

(4) The final product needs to complete five cycles - moving from the farthest plane of focus to the closest - per second, so the cycle time needs to be optimized.

(5) To reduce processing time, a Relevant Pixel Ratio (RPI) - the ratio of utilized pixels to total pixels - needs to be introduced. For objects closest to the vision system, not every pixel is required for data processing because the significant objects will appear rather large; therefore, the RPI will be low. Conversely, the RPI will be highest when the plane of focus is farthest from the camera, since the relevant objects will be smaller in the image. A worked illustration follows this list.
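As a worked illustration of recommendation (5), with invented numbers rather than measured ones, the ratio is

$$\mathrm{RPI} = \frac{\text{pixels actually processed}}{\text{total pixels in the frame}}.$$

If the near-plane pass processes only every second pixel in each direction of the full frame, then RPI = 0.25; if the far-plane pass processes every pixel, but only inside a Relevant Area covering 40% of the frame, then RPI = 0.40. The far plane thus carries the higher ratio even though it examines a smaller region, matching the behavior described above.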
BIBLIOGRAPHY

[1] United States. U.S. Department of Transportation, National Highway Traffic Safety Administration. Automated Vehicles Policy. Washington: GPO, 2013. Print.

[2] "Focus Tunable Lenses." Optotune. N.p., 2013. Web. 17 May 2015. <http://www.optotune.com/>.

[3] "Stereo Vision for Depth Estimation." MathWorks. N.p., 2015. Web. 21 May 2015. <http://www.mathworks.com/discovery/stereo-vision.html>.

[4] "Accommodation." HyperPhysics. Georgia State University, n.d. Web. 21 May 2015. <http://hyperphysics.phy-astr.gsu.edu/hbase/vision/accom.html>.

[5] "Depth Mapping Using Active Lenses." MIT-Ford internal presentation. Massachusetts Institute of Technology. Cambridge, MA. Feb. 2015. Print.

[6] "Depth Mapping Using Active Lenses." MIT-Ford internal presentation. Massachusetts Institute of Technology. Cambridge, MA. Nov. 2014. Print.

"Xenon-RUBY Lens." Schneider Kreuznach. N.p.: n.p., 2013. 1-2. Web. <http://www.schneiderkreuznach.com/en/industrial-solutions/lenses-and-accessories/products/118-oe9mm-lenses/>.

"Fast Electrically Tunable Lens EL-10-30 Series." Optotune. N.p.: n.p., 2013. 1-15. Web. <http://www.optotune.com/images/products/Optotune%20EL-10-30.pdf>.

APPENDIX A: Xenon Ruby 2.5/25 Data Sheet

[Scanned Schneider Kreuznach Xenon-RUBY 2.2/25 data sheet; the pages are images and their text did not survive scanning. Legible specifications include: focal length 25.2 mm, f-stop range 2.2-16, image circle 9 mm, C-mount interface. The sheet also carries MTF, distortion, and relative illumination plots.]
APPENDIX B: Optotune 10-30-Ci Specification Sheet

[Scanned Optotune "Fast Electrically Tunable Lens EL-10-30 Series" specification sheet; the pages are images and most text did not survive scanning. Legible content includes: three housing options (a compact 30 x 10.7 mm housing, a 32 mm housing with C-mount threads, and the industrial C-mount housing (Ci) with trigger connector); optional offset lenses; coating options for visible (400-700 nm) and near-infrared wavelength ranges; and tables of mechanical and electrical specifications with mechanical drawings of each housing (Figures 1-3).]
APPENDIX C: LabVIEW Block Diagrams

[Scanned LabVIEW block diagrams: "Optotune Wiring" and "Image Processing". The diagrams are images and did not survive scanning.]