Development of software for cameras in C++
Erasmus in Trondheim, Norway

Thomas Haratyk, François Lepan

Tutors: Jan H. Nilsen, Tomas Holt, Patrick Lebègue


Acknowledgements

We would like to thank HiST, Høgskolen i Sør-Trøndelag, for welcoming us to the university during these three months; our tutors Mr. Jan H. Nilsen and Tomas Holt, from the "VizLab", for their help and advice on both our project and things to do in Trondheim; Patrick Lebègue, our French tutor; Marit Sandtrø from the international relations office; and all our professors from the IUT A.


Summary

Acknowledgements
Abstract
Résumé
1) Introduction
2) Presentation of Norway and HiST
2.1) Norway
2.2) Trondheim
2.3) HiST
3) Learning Phases
3.1) Setting up
3.2) Understanding MFC
3.3) The Imaging Control library
3.4) A complex IDE: Visual Studio
4) Coding phases
4.1) Display live feeds from a single camera
4.1.1) Using an IC dialog box template
4.1.2) Using an MFC application template
4.2) Display live feeds from multiple cameras
4.3) Properties dialog
4.4) The contextual menu
4.5) The toolbar
4.6) Pause the live show
4.7) Settings dialog
4.8) Snapshots
4.8.1) BMP snapshots
4.8.2) PGM snapshots
4.9) Record to an AVI file
4.9.1) Record to an AVI file from a single camera
4.9.2) Record to an AVI file from multiple cameras
4.9.3) Adding an overlay to the display
4.10) Using the VizLab circular buffer
4.11) Fixing minor issues
4.11.1) Size of the camera frame
4.11.2) Using radio buttons
5) Conclusion
5.1) Technical assessment
5.2) Human assessment
5.3) Our project
6) Annexes
6.1) The grabber class
6.2) The VizLab
6.3) Our working station
6.4) Overview of our final software


Abstract

This report is a record of our work during our Erasmus exchange in Trondheim, Norway. We chose Norway because we knew nothing about this country, so it was an opportunity for us to discover a new culture and new landscapes. Besides, living three months in a foreign country allowed us to improve our English fluency and vocabulary, which is essential in computer science. We worked with the VizLab team on their project to render 3D movement using multiple cameras and sensors. We had to learn C++ and to familiarize ourselves with the Visual Studio environment, which will certainly be a useful skill in our later working life.
Résumé

We chose Norway first of all because we knew nothing about this country; it was an opportunity for us to discover a new culture and new landscapes. We also went to a foreign country in order to improve our level of English, which is essential in computer science. We worked with the "VizLab" team on their project, which aims to interpret the 3D movements of a person or any object using several cameras and sensors. We had to learn C++ and familiarize ourselves with Visual Studio, which could be very useful to us when we enter working life.


1) Introduction

As part of our DUT, we had to spend the last three months on an internship, either at a university abroad or in a French company. We chose to go abroad because it was an opportunity for us to discover a new country, a new way of living and new languages.

We were under the guardianship of Jan H. Nilsen and Tomas Holt, who were already working on image processing with their own software, but some features were missing. Hence they asked us to make a similar program that would fill the blanks.

Our work on this project was divided in two parts. On the one hand, we had to learn the C++ programming language and the Microsoft Visual Studio IDE (Integrated Development Environment); it took us a long time before we could really start the project itself. On the other hand, our instructions were to create a program, similar to the one they used, that allows the user to select multiple plugged-in cameras, show live feeds, control camera properties, and so on. We go into more detail further on.


2) Presentation of Norway and HiST

2.1) Norway

Norway, officially the Kingdom of Norway, covers 385,252 square kilometres and has a population of 4.9 million people. It is a unitary parliamentary democracy and constitutional monarchy, with King Harald V as its head of state and Jens Stoltenberg as its prime minister. Situated on the western side of the Scandinavian peninsula, Norway is famous for its fjords and magnificent landscapes. Also well known for its environmental protection, Norway has 20% of its area protected, which ranks it 31st worldwide. The country has rich resources of oil, gas and forest, and is the second largest exporter of seafood in the world.

Fig. 1: Norwegian flag

2.2) Trondheim

Trondheim was historically named Kaupangen (meaning "market place", the name used from the late 8th to the 11th century), then Nidaros and finally Trondheim. From the Viking age until 1217, Trondheim was the capital of Norway. Nowadays it is a city in Sør-Trøndelag and the fourth most populated city of Norway, with 170,936 inhabitants. The mayor of the city is Rita Ottervik from the Norwegian Labour Party (a centre-left political party), in office since 2003. Students make up almost one fifth of the city's population, which makes it heavily influenced by student culture, characterized by a tradition of volunteer work.

Fig. 2: Trondheim

2.3) HiST

Høgskolen i Sør-Trøndelag, known in English as Sør-Trøndelag University College, was established in 1994 by merging eight colleges in Trondheim. Welcoming 7,000 students, HiST is the third largest university college in Norway and one of the two dominant academic institutions in Trondheim.

Fig. 3: HiST


3) Learning Phases

3.1) Setting up

At the very beginning of our placement in Trondheim, we had no idea about the purpose of the project.
The first time we met our tutors, we were told that we would have to work on software or a video game dealing with a 3D library called OGRE (Object-Oriented Graphics Rendering Engine). Even though this library can be used from other languages such as Java, we knew that it works natively with C++ and that, whatever our project turned out to be, performance would be better with a lower-level language than Java. Besides, it was an opportunity to learn C++, since we already had good knowledge of Java. Therefore, we started working on C++ basics. Fortunately, we already knew C and OOP (Object-Oriented Programming), so it was not difficult to approach.

Midway through our learning, we were told the actual subject of our project: make software for cameras using the IC (Imaging Control) library. This is when we realized that we would have to work on Microsoft Windows with the Visual Studio IDE. The IC library works with WDM (Windows Driver Model) video sources and its documentation is based on Visual Studio, which is why we took the trouble of installing Microsoft Windows and Visual Studio. Indeed, one of us uses a Mac and had not brought the Mac OS X installation disk from France (required to install Windows on a Mac), and we had trouble downloading Visual Studio from MSDNAA (Microsoft Developer Network Academic Alliance) because of the slowness of their servers and our download quotas. We encountered several other problems, in addition to administrative complications. All this delayed the project itself.

The IC documentation recommends using the MFC template to make software with the IC library. The following part explains what MFC is.

3.2) Understanding MFC

"The Microsoft Foundation Class Library (also Microsoft Foundation Classes or MFC) is a library that wraps portions of the Windows API in C++ classes, including functionality that enables them to use a default application framework." Source: Wikipedia

MFC is a huge library, and it took us a lot of time to understand the template of an MFC application generated by Visual Studio. On the MSDN website, under "Prerequisites for learning MFC", it is said that you should already know a little about programming for Windows. You understand why when you look for an explanation of an MFC function in the MSDN documentation, since it often simply refers to the function it wraps. Speaking of the documentation, we used it a lot and can conclude that it is extensive but badly structured.

We knew nothing about programming for Windows, so we learnt the basics of building a GUI (Graphical User Interface) using the Win32 API (Application Programming Interface), the API that MFC wraps. We were used to programming GUIs with the Java Swing API; it was extremely different with Win32, as you really need a lot of code to create a simple window. It was disconcerting: in addition to the fact that there was a lot of code, it was not easy to apprehend.

Visual Studio can generate a template of an MFC application. While trying to understand the generated code and add our own, we realized that you often have to deal with the Win32 API directly, and that the MFC API is not as object-oriented as it claims to be. On the other hand, a strong advantage of a generated MFC application is that it creates an application skeleton with a document class and a view class. MFC separates data management into these two classes. The document/view architecture makes it easy to support multiple views of the same document, multiple document types, and other valuable user-interface features. The document/view implementation in the class library separates the data itself from its display and from user operations on the data: all changes to the data are managed through the document class, and the view calls the document's interface to access and update the data (Fig. 4).

Fig. 4
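The sketch below illustrates how we used this split; the class and member names are ours (for illustration only, not the names generated by the wizard), and the DECLARE/IMPLEMENT_DYNCREATE and message-map boilerplate produced by the wizard is left out:

#include <afxwin.h>                       // MFC core classes (CDocument, CView)

namespace DShowLib { class Grabber; }     // from the IC Imaging Control library

class CCameraDoc : public CDocument
{
public:
    DShowLib::Grabber* GetGrabber() { return m_pGrabber; }

protected:
    DShowLib::Grabber* m_pGrabber;        // the camera data lives in the document
    // DECLARE_DYNCREATE / message-map boilerplate omitted
};

class CCameraView : public CView
{
public:
    CCameraDoc* GetDocument() const
    {
        return static_cast<CCameraDoc*>(m_pDocument);
    }

    virtual void OnDraw(CDC* /*pDC*/)
    {
        // The view never owns camera data itself; it always asks its document.
        CCameraDoc* pDoc = GetDocument();
        // ... use pDoc->GetGrabber() to update or render the live display ...
        (void)pDoc;
    }

protected:
    // DECLARE_DYNCREATE / message-map boilerplate omitted
};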
3.3) The Imaging Control library

Imaging Control (IC) is an API delivered with the purchase of a camera from The Imaging Source. We were lent two cameras from this manufacturer, the same kind as the ones the VizLab uses (Fig. 5: Imaging Source camera).

Fig. 5: Imaging Source camera

The fact that IC requires Windows and Visual Studio appears to be its biggest weakness. Since the cameras follow the WDM standard, we could have used another library; we stuck to IC because it was designed for these cameras and the API brings a lot of benefits when programming for them. The documentation is meticulous, well structured and contains a lot of code examples (Fig. 6). It is a boon that the code samples cover most of the important features IC offers, since it is very unlikely to find help or tutorials about IC on the Internet.

Fig. 6

3.4) A complex IDE: Visual Studio

"Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft. It can be used to develop console and graphical user interface applications along with Windows Forms applications, web sites, web applications, and web services in both native code and managed code for all platforms supported by Microsoft Windows, Windows Mobile, Windows CE, .NET Framework, .NET Compact Framework and Microsoft Silverlight." Source: Wikipedia

We used Microsoft Visual C++ (VC++), which is part of Visual Studio. At first glance we felt helpless, as there are plenty of menus, controls and every kind of option you can imagine (Fig. 7).

Fig. 7

VC++ is difficult to handle and probably takes a long time to master; however, it is a very powerful tool and many of its features helped us in many ways. At this point, we cannot imagine redoing the project without such a powerful IDE. For instance, the IntelliSense tool assists you with autocompletion while typing code; it avoids many errors and saves a lot of time. The debugger helped us a lot too: at any point of the execution you might want to know the value of a variable, and the debugger allows you to pause the execution, browse the stack of function calls and inspect the contents of variables. This may seem trivial, but we were used to resolving this kind of issue "by hand".


4) Coding phases

4.1) Display live feeds from a single camera

4.1.1) Using an IC dialog box template

In the beginning we did not know how to get started, so we followed the documentation of the IC library. In VC++, when you create a new project you have to choose what type of application you want, and after installing the IC library we could choose an IC dialog box template. After working a little on this dialog application, we realized that it did not meet our tutors' expectations. Indeed, this template only allows seeing the live feed of one camera at a time, and we were not able to manipulate the data of the camera (Fig. 8: IC dialog application). That is why we tried the other solution given by the documentation: using an MFC application template.

Fig. 8: IC dialog application
4.1.2) Using an MFC application template

When you choose an MFC application template, you have the choice between an SDI (Single Document Interface) and an MDI (Multiple Document Interface) application. We chose an SDI in order to have a structure that allowed us to manipulate the data of the camera. As we saw before, the structure of an MFC application is composed of a document and a view. With that architecture we managed to create an application that shows the live feed of the selected camera and to which we could add our own functionality.

First we had to load the IC library with the corresponding licence key in the main application class, so that we could use all the functions it offers. After this, our aim was to show live feeds. To do so we had to create a grabber (see 6.1, The grabber class) in the document; we use the grabber class to display the live feed of one camera, and we added functionality such as changing properties (exposure, gain) or stopping the live feed. As a result, we had a window with a menu bar, to which we could add functionality, and the display filled the whole frame (Fig. 9: a SDI application (one document only)). But even if we managed to control one camera, our goal was to control several of them at a time; that is why we had to use an MDI MFC application template (Fig. 10: How MDI works, and 6.4, Overview of our final software).

Fig. 9: a SDI application (one document only)

Fig. 10: How MDI works

4.2) Display live feeds from multiple cameras

After we managed to display a live feed from a single camera, we had to do it with multiple cameras. We searched on the Internet how to create several documents within a main frame, and it appears that MDI is the most suitable and easiest way to do so. We therefore created another project with the MDI MFC template. With this template, a separate child frame is created for each camera you open. The main difference from the SDI version is that the grabber is created when a new document (frame) is created. This allows us to have several cameras open at the same time and to manipulate them separately (6.4, Overview of our final software). At that point our main goal was achieved, and the remaining work was to add functionality and use the circular buffer given by our tutors.
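A rough sketch of what a document does when a camera is opened is shown below, extending the CCameraDoc sketch from section 3.2 with a hypothetical OpenCamera method. The method names (InitLibrary, getAvailableVideoCaptureDevices, openDev, setHWND, startLive) are taken from the IC Imaging Control documentation as we remember it; treat them and the header name as assumptions and check the exact signatures there. The licence key and error handling are placeholders:

// Sketch: one grabber per document, created and opened with the document.
// IC Imaging Control names as recalled from its documentation; verify them there.
#include "tisudshl.h"      // IC Imaging Control header, as named in the IC samples

bool CCameraDoc::OpenCamera(HWND hDisplayWindow)
{
    // DShowLib::InitLibrary( "XXXXXXXX" );   // done once at application start-up,
                                              // with the real licence key

    m_pGrabber = new DShowLib::Grabber;       // deleted in the document's destructor

    // List the plugged-in WDM cameras; here we simply take the first one,
    // whereas the real program lets the user choose.
    DShowLib::Grabber::tVidCapDevListPtr pDevices =
        m_pGrabber->getAvailableVideoCaptureDevices();
    if( pDevices->empty() )
        return false;                         // no WDM camera plugged in

    if( !m_pGrabber->openDev( pDevices->at( 0 ) ) )
        return false;

    m_pGrabber->setHWND( hDisplayWindow );    // the child frame's view shows the feed
    return m_pGrabber->startLive();           // start the live display
}

In the MDI version this runs once per document, so every child frame ends up with its own grabber and therefore its own camera.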
4.3) Properties dialog

The VizLab needed to be able to set properties for every device they use. The IC library offers an easy way of accessing properties through a few useful functions. As you can see on the following captures (Fig. 11: Properties of IC cameras and Fig. 12: Properties of a webcam), the dialog box displays properties depending on the device driver.

Fig. 11: Properties of IC cameras

Fig. 12: Properties of a webcam

4.4) The contextual menu

As we wished to develop an intuitive GUI, we added a contextual menu that appears upon a right click on the camera window (Fig. 13: Contextual menu). At first there were only a "Properties" and a "Close" item in it; we added items throughout the project's progress.

Fig. 13: Contextual menu

The code to show a popup menu upon a right click is not as intuitive as the result: most of the code samples found on the Internet were not working, even the one on MSDN. The tricky part is that you have to deal with two CMenu objects to display only one: you load the menu resource into one CMenu and then call TrackPopupMenu on its sub-menu.

4.5) The toolbar

A toolbar allows quick access to the important features. By default, the MFC application template includes one, so you just have to change it the way you want. Functionally, there is nothing to reproach in the MFC implementation of toolbars; it works as expected. But when it comes to the design and organization of the icons, it is really confusing.

Fig. 14: The toolbar

Our toolbar is composed of six items (Fig. 14: The toolbar). From left to right: open a device, close a device, record to AVI, take a snapshot, show device properties and, last but not least, About.

4.6) Pause the live show

Pausing and resuming the live feed was not a difficult feature to implement. The problem was (and still is) to update the menu item so that there is a tick mark when the live feed is running and none when it is paused (Fig. 13: Contextual menu). We found out that there is a similar issue for menu items attached to a dialog box; an official fix exists (ref. Q242577), but it did not resolve our problem. We chose to move on because we still had plenty of more important things to do.

4.7) Settings dialog

When we discussed snapshots with our tutors, we were already able to take them: we could save them to a BMP file without difficulty. But our tutors were using PGM files, so we thought it would be nice to have the choice between BMP and PGM. This is where we decided to implement a settings dialog (Fig. 15: Settings dialog); we also thought that the ability to choose where snapshots and records are saved would be a good option. All those settings are saved to a file so that the user's choices are remembered (Fig. 16: the config.cfg file).

Fig. 15: Settings dialog

As you can see, the dialog box is divided in two parts: the first is for snapshots and the second for records. Here is the settings file:

Fig. 16: the config.cfg file

This file contains all the data we need to load the dialog box with the last saved settings. Let us take the EditControl (a field where the user can enter text) containing the snapshot path as an example of how we load from and save to the file. First we associate a keyword, "_SNAPPATH=", with this EditControl in order to find the corresponding data. When loading, we search for this keyword in the file, retrieve the data that follows it (here "C:\snapshots\") and put it in the control, so that the user can see what his last choice was. To save, we do the same thing the other way round: we take the value of the control, prepend the keyword and write the line to the file. The EditControl and button for records work the same way as for snapshots, except that the keyword is different. For the radio buttons, the only thing that changes is that you retrieve the value of the first radio button of the group in order to know which one is selected (see 4.11.2, Using radio buttons).

Now that you know how we use this file to save and load data, let us see how the dialog box itself works. The controls common to the two parts are an EditControl and a button on its right. The EditControl contains the path where you want your snapshots or records to go. The button launches another dialog box that shows the tree of all your directories and files.

Fig. 17: Browse for folder

After you pick a directory, its path is put into the EditControl so that it can be saved to the settings file. The snapshot part also contains two radio buttons, with which you choose whether snapshots are saved as BMP or PGM files.
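The sketch below shows the idea for the snapshot settings; it is our illustration rather than the exact code. Only the "_SNAPPATH=" keyword comes from our settings file; the class name, function names, control IDs and variable names are placeholders (the real IDs live in resource.h), and the record path works the same way with its own keyword:

// Sketch: loading/saving one keyword of config.cfg and the dialog's data exchange.
#include <afxwin.h>
#include <cstring>
#include <fstream>
#include <string>

enum { IDC_EDIT_SNAPPATH = 1001, IDC_RADIO_BMP = 1002 };   // placeholder IDs

class CSettingsDlg : public CDialog
{
public:
    CString m_snapPath;          // contents of the snapshot path EditControl
    int     m_snapFormat = 0;    // selected radio button of the group: 0 = BMP, 1 = PGM

protected:
    virtual void DoDataExchange(CDataExchange* pDX)
    {
        CDialog::DoDataExchange(pDX);
        DDX_Text (pDX, IDC_EDIT_SNAPPATH, m_snapPath);     // EditControl <-> CString
        DDX_Radio(pDX, IDC_RADIO_BMP,     m_snapFormat);   // whole radio group <-> one int
    }
};

// Load: find the line starting with the keyword and return what follows it.
std::string LoadSetting(const char* keyword)               // e.g. "_SNAPPATH="
{
    std::ifstream file("config.cfg");
    std::string line;
    const size_t len = std::strlen(keyword);
    while (std::getline(file, line))
        if (line.compare(0, len, keyword) == 0)
            return line.substr(len);                       // e.g. "C:\snapshots\"
    return "";                                             // keyword not found
}

// Save: prepend the keyword to the control's value and write the line back.
void SaveSetting(std::ofstream& file, const char* keyword, const CString& value)
{
    file << keyword << CStringA(value).GetString() << "\n";
}

DDX_Radio is the standard MFC way of tying a whole radio button group to a single int, which is exactly the mechanism described later in 4.11.2.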
4.8) Snapshots

4.8.1) BMP snapshots

The IC library offers the possibility to save the last acquired image in the buffer as a BMP file. To use it, we had to attach our grabber to a FrameHandlerSink (an IC object used to grab frames). Indeed, the data stream splits into two different paths: a display path and a sink path. The display path is, as you might expect, a stream for the display output only, while the sink path is where you manipulate data and retrieve images (6.1, The grabber class). Thus you have to set either a FrameHandlerSink or a MediaStreamSink (an IC object used to record video files); in this case we use the FrameHandlerSink, because it is meant for grabbing frames from the image stream, whereas the MediaStreamSink is meant for saving image data to video files.

When the snapshot function is called, we snap one image into the MemBuffer (an IC object where image data is temporarily stored), then we have to define a name for the file, otherwise each snapshot would erase the previous one. Once the filename is defined, if the snapshot mode is set to BMP in the settings, we retrieve the last acquired image and save it as a BMP file.

The tricky part of the save-to-BMP function is consequently determining the file name: we build it from the name of the camera and add a number at the end (Fig. 18: name of snapshot files). The weakness of this algorithm is that it may take a long time to find a free filename if there are already a lot of snapshots, but since the image is snapped before the filename is determined, there are no synchronization issues.

Fig. 18: name of snapshot files

4.8.2) PGM snapshots

Our tutors wanted to be able to save an image as a PGM file. PGM is the portable gray map format, a simple grayscale image description. The format is defined as follows:

- a "magic number" identifying the file type; for a plain PGM file it is the two characters "P2"
- whitespace (blanks, TABs, CRs, LFs)
- the width, formatted as ASCII (American Standard Code for Information Interchange) characters in decimal
- whitespace
- the height, again in ASCII decimal
- whitespace
- the maximum gray value, again in ASCII decimal
- whitespace
- the data

Since IC does not support saving files in this format, we had to implement it ourselves. We added a function that saves to a PGM file a byte array passed as a parameter, along with a filename and the dimensions of the input device. The code is almost self-explanatory once you know how a PGM file is built.
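A sketch of such a function is shown below. The function and parameter names are ours, and we assume one byte per pixel (an 8-bit grayscale buffer); the file layout simply follows the format description above:

// Sketch: write an 8-bit grayscale buffer to a plain ("P2", ASCII) PGM file.
#include <fstream>
#include <string>

bool SaveSnapshotPGM(const unsigned char* pixels, int width, int height,
                     const std::string& filename)
{
    std::ofstream file(filename.c_str());
    if (!file)
        return false;

    // Header: magic number, dimensions and maximum gray value,
    // separated by whitespace as the format requires.
    file << "P2\n" << width << " " << height << "\n" << 255 << "\n";

    // Data: one ASCII decimal value per pixel, with short lines.
    int valuesOnLine = 0;
    for (int i = 0; i < width * height; ++i)
    {
        file << static_cast<int>(pixels[i]) << ' ';
        if (++valuesOnLine == 16)
        {
            file << '\n';
            valuesOnLine = 0;
        }
    }
    file << '\n';
    return file.good();
}

// Usage (illustrative): the byte array comes from the FrameHandlerSink's MemBuffer
// and the dimensions from the current video format, as described in 4.8.1.
// SaveSnapshotPGM(pBuffer, 640, 480, "Camera1_0.pgm");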
4.9) Record to an AVI file

4.9.1) Record to an AVI file from a single camera

In order to record from a camera to an AVI file, we need to know which codec the user wants to use, and then create the media sink with this codec. To do so, when the user chooses to record to an AVI file, a dialog box pops up:

Fig. 19: Record to AVI dialog box

In the first version of this dialog box there was no possibility of choosing between "All cameras" and "Active camera only", since we were only able to record from one camera. This dialog box uses an EditControl, radio buttons and a ListBox. When the dialog box is created, we fill the list with the available codecs using a simple loop over the codecs offered by the IC library (m_codecList is the control variable attached to the ListBox). Then, to record to an AVI file, we retrieve the selected codec, the name given to the record and the selected radio button.

This time we use a MediaStreamSink, because recording requires assigning the grabber a sink that can write the stream to a media file. We create the sink with the chosen codec and file name, then set it to pause mode so that no images are recorded yet, and assign the sink to the grabber. If the grabber has a valid source, we start the live stream so that the data stream is running; after that, the only thing left is to switch the sink to run mode until the user wants to stop the record. We also added a function which stops the live stream, sets the sink back to a FrameHandlerSink and restarts the live stream, so that taking snapshots still works after a record.

4.9.2) Record to an AVI file from multiple cameras

To record from multiple cameras, we had to reorganize a part of the code, since we had some visibility issues with functions and members. Besides, we added a small document enumerator class, which was useful in many ways: it wraps the way MFC iterates over its document templates and their open documents, so we just had to call the record function for each document in a loop. Since in the MDI structure the documents already run in different threads, that is enough to start all the records. But even with this native support for threads, the loop takes time to iterate, so we were facing synchronization problems between the cameras. We tried to fix this later with our own threads waiting for a trigger signal, but did not manage to do it efficiently. We did not spend too much time on this issue because our tutors told us that AVI recording was an incidental feature.
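Below is a sketch of that per-document loop, written with the standard MFC template/document iteration (our enumerator class merely wrapped it). CMainFrame is the wizard-generated main frame, and StartAviRecording is a made-up name standing for the MediaStreamSink steps described in 4.9.1:

// Sketch: ask every open camera document to start recording.
// Requires the usual DECLARE/IMPLEMENT_DYNCREATE boilerplate on CCameraDoc.
void CMainFrame::RecordAllCameras(const CString& codecName, const CString& path)
{
    CWinApp* pApp = AfxGetApp();

    POSITION posTpl = pApp->GetFirstDocTemplatePosition();
    while (posTpl != NULL)
    {
        CDocTemplate* pTemplate = pApp->GetNextDocTemplate(posTpl);

        POSITION posDoc = pTemplate->GetFirstDocPosition();
        while (posDoc != NULL)
        {
            CDocument* pDoc = pTemplate->GetNextDoc(posDoc);
            CCameraDoc* pCamDoc = DYNAMIC_DOWNCAST(CCameraDoc, pDoc);
            if (pCamDoc != NULL)
                pCamDoc->StartAviRecording(codecName, path);   // one AVI per camera
        }
    }
}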
4.9.3) Adding an overlay to the display

To show that a camera is recording, we used an OverlayBitmap to display a "Rec" text and a circle in the upper left corner of the display (Fig. 20: an overlay on a camera display).

Fig. 20: an overlay on a camera display

Overlays are useful and well implemented in the IC library; we retrieve the overlay of the display path, ePP_DISPLAY, so that the "Rec" mark is visible on screen but not in the recorded images. ePP_DISPLAY is a member of the tPathPosition enumeration, which enumerates the possible positions of the OverlayBitmap in the image stream:

- ePP_NONE: the OverlayBitmap is not visible at all.
- ePP_DEVICE: the OverlayBitmap is visible in both the live video display and grabbed images/videos.
- ePP_SINK: the OverlayBitmap is visible in grabbed images and videos, but not in the live display.
- ePP_DISPLAY: the OverlayBitmap is visible in the live display, but not in grabbed images or videos.

The following schema (Fig. 21: Overlay mapping) is the best way to understand the possible positions of the OverlayBitmap:

Fig. 21: Overlay mapping

4.10) Using the VizLab circular buffer

Our tutors gave us a few of the classes they use in their own project. We kept trying for a long time to include them in our project, but we could not resolve what we suppose were library issues. We were nevertheless able to bridge the gap by preparing the ground for the implementation of their circular buffer: to use it, they need, obviously, the image data, but also the number of the camera and its dimensions.

4.11) Fixing minor issues

During the whole project we encountered many issues that caused us difficulties. Here are the main problems we faced and their solutions. It also turns out that along the way we found some very useful methods that helped us for the rest of the project.

4.11.1) Size of the camera frame

We first encountered trouble resizing the frame that contains the live feed of the camera. Our first intuition was to resize it from the child frame class (Fig. 10: How MDI works). The main problem was that, from this place, we had to get access to the grabber in order to know what the dimensions of the camera were. We did some research on the web until we found a static function that returns a pointer to the active document; the fact that it is static allowed us to use it in every part of the project. So we tried to call it in a function executed at the creation of the child frame, but we ran into trouble. Thanks to the VC++ call stack tool, we managed to find where the problem was: the program stopped because the grabber was not initialized at this point of the execution. We then did a few tests and saw that the function was called three times (maybe to initialize values) and that the grabber was not initialized because the document was not created yet.

We were desperate, but after a while we found a better and simpler solution, used in the view class: we fill a small structure with the dimensions taken from the grabber and send a message to the parent frame of the view (the child frame), which then resizes itself. The width and height we pass are the camera dimensions plus a few pixels to account for the border of the frame, and the SWP_NOMOVE flag keeps the frame at its current position, so only the size changes.
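A simplified sketch of the idea is shown below. In our program the view sends a message to the child frame instead of calling it directly, the method name is hypothetical, and how the dimensions are read from the grabber is only hinted at in a comment:

// Sketch (simplified): resize the child frame from the view once the camera
// dimensions are known.
void CCameraView::FitFrameToCamera(int videoWidth, int videoHeight)
{
    // videoWidth/videoHeight come from the document's grabber (current video format).
    const int borderX = 8;    // a few pixels for the frame border (illustrative values)
    const int borderY = 30;   // border plus title bar

    CFrameWnd* pFrame = GetParentFrame();       // the MDI child frame
    if (pFrame != NULL)
    {
        // SWP_NOMOVE: keep the current position, only the size (cx, cy) is used.
        // SWP_NOZORDER: the Z-order parameter (first argument) is ignored as well.
        pFrame->SetWindowPos(NULL, 0, 0,
                             videoWidth + borderX, videoHeight + borderY,
                             SWP_NOMOVE | SWP_NOZORDER);
    }
}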
4.11.2) Using radio buttons

In order to use the radio buttons in the record and settings dialogs, we had to figure out how they work. At first it seemed easy, but it was not. First you create them, and then you have to establish the group. To do so, we had to set a tab order and change the properties of the radio buttons. What is a tab order? It is the order the tabulation key follows, so that you do not have to use the mouse:

Fig. 22: Tab order

Pressing the Tab key first leads you to item number 1, pressing it again leads you to number 2, and so on. Now you must be wondering what this has to do with radio buttons. In order to create a group of radio buttons, you need to know which is the first element of the group. You then go to its properties and set its Group option to true. The next radio buttons (following the tab order) belong to the same group, until another radio button with its Group option set to true is reached. Once the radio group is set up, you assign an int control variable to the first radio button of the group. This variable serves the whole group: its value is 0 when the first radio button is selected, 1 for the second, and so on. We think that this way of handling radio buttons is not well thought out; it led to a waste of time for something that should be easier to understand.


5) Conclusion

5.1) Technical assessment

During our project we had to learn and use a new programming language (C++) and a new tool (Microsoft Visual Studio). Our knowledge of other programming languages like Java and C allowed us to learn the basics of C++ quickly. We also learnt a new way of implementing a frame-based program using MFC. In addition, we improved our English and now have a better understanding of object-oriented programming. We also had to learn a lot about the IC library in order to use the cameras lent by our university. Being organized was also necessary to be as efficient as possible during the whole project. But programming is not the only part of the internship that brought us something; the human part also played a great role.

5.2) Human assessment

The Erasmus programme allowed us to meet a lot of people who helped us discover Norway. Indeed, their ways of living and thinking are not the same as in France. We also learnt how to work independently: unlike all the courses we had in France, here we had to work on our own. We can also say that this internship was a good opportunity to learn to manage ourselves, both at work and in everyday life. Thanks to the IUT and HiST, we were able to do all of these things.

5.3) Our project

During these three months we hoped that we could do more than our tutors expected. Indeed, we did not manage to include their circular buffer in the project, and we could have made the program more efficient, added other features, and so on. In fact, our tutors said that before we go back to France we could work on adding support for FireWire or high-end cameras. We hope that our work will be helpful for the VizLab, and maybe we can come back one day to work on the project as a real job this time!


6) Annexes

6.1) The grabber class

The Grabber is the main class of the IC Imaging Control class library. This class provides the basic functionality to control the flow of image buffers in an image stream from the source (a video capture device) to the sink (a collection of image buffers or a media file). The idea behind this class is to provide the functionality of a standard frame grabber to users of WDM streaming devices. Using this class, you can:

- select a video source
- change parameters of a video source
- display a live image
- prepare, start and stop the image stream from the device to the sink (image buffers / media file)
- specify the sink (image buffers / media file)
- apply one or more frame filters to the image stream
- apply overlays at three different locations in the image stream

6.2) The VizLab

6.3) Our working station

6.4) Overview of our final software