Detection of Parking Spaces Using Google Android Phone


McGill University

December 3, 2009

Ching-Wai Chee (260228240), Winston Lin (260235451), Philip Minh Phu Tang (260227074), Danny Wu (260238209), Xiao Yu (260158645)


ABSTRACT

With growing demand for a parking space detection service, the rapidly developing Smartphone market presents a unique opportunity to finally meet it. Parkdroid is an application developed for Android Smartphones that detects free parking spaces when mounted on a car and stores the results on a remote server. This report gives an overview of the design issues and discusses the adaptations required to conform to the limitations of the HTC Dream Android Smartphone. Finally, it summarizes the results of the entire system and goes over possible future improvements.

1. INTRODUCTION

Parkdroid is an application developed for the Google Android mobile operating system (OS) that locates free parking spaces on the side of the road and uploads the results to a remote server that maintains a database of available parking spaces. In order to do so, development must proceed concurrently on two ends. First, there needs to be an application on the Android Smartphone that keeps track of the location of the phone, in the form of latitude and longitude provided by the phone's global positioning system (GPS). At the same time, the camera automatically takes pictures at the corresponding location in order to determine whether or not a free parking space exists. Once these two pieces of information are acquired, they must be sent wirelessly to a remote server that keeps track of all the available parking spaces. Therefore, a specialised server must be designed to handle such incoming packets of data, extract the necessary information and store it in its database. While this idea is simple in theory, many important requirements must be considered to ensure a complete solution that satisfies all parties involved in the process.

1.1. Requirements

Since the fundamental operation of this application requires the contribution of a community of users, it is vital to provide a sense of security and privacy to the contributors and the public. In order to do so, no personal information should be transmitted to the server in the process. First of all, the database on the server does not require or store any information that can identify the user. As a result, the server cannot identify or track any specific user, since all contributions are anonymous. Secondly, each image sent to the server has only one GPS location attached to it, to further hinder the server's ability to track the movement of any one user. Finally, personal information must be reduced to a bare minimum when sending pictures to the server. This means that faces, license plates and other such personal information should not be visually identifiable in the pictures.

In addition, the amount of data sent to the server for each transaction should be minimised to reduce the bandwidth costs of the users. This involves the right balance between image size and the ability for the image processing algorithm on the server to extract enough information to determine parking availability from the picture.

Finally, the safety of the users must be carefully examined since the application operates in a car. To minimize distractions to the driver, the application needs to be self-sufficient. The goal is to have the least amount of user interaction possible; once the application has been properly set up and is running, no further input should be needed to maintain functionality.

1.2. Initial Conception

An initial design was drawn out in the first half of this design project, as shown in Figure 1. However, due to many uncertainties regarding the phone and its capabilities, the flow of the program was adapted to reflect the restrictions presented by the phone. The initial plan was to have two parallel threads running on the phone: one to operate the camera and perform preliminary image processing, while another thread would gather GPS data and perform filtering to get a more precise location. Once the two pieces of information were gathered properly, they would be sent to the server via a 3G wireless connection. On the server end, the incoming data would be received and, through several image processing algorithms, the availability of parking spots would be determined. However, the output of the image processing algorithms on the server end was uncertain and, as a result, the database design was also unclear.


Figure 1 Previous implementation of Parkdroid

1.3. Goals

The objective set forth for the second half of this project is to get a system that operates fully from beginning to end. In order to advance in a systematic fashion, the uncertainties regarding the capabilities of the Smartphone must first be settled. Section 2 discusses the testing methods used to determine the capabilities of the phone and the results that influenced the implementation of the system. Once the uncertainties are settled, the overall architecture of the system can be finalised to get a proper workflow on both the Smartphone and the server, forming a preliminary backbone for the entire system. This is revealed in greater detail in Sections 3 and 4. An overview of the results for the entire project is examined in Section 5, followed by the project timeline in Section 6. Finally, the last section describes the future possibilities of this technology and possible improvements.

2. DESIGN ANALYSIS

An important step before commencing development full steam was to analyze the capabilities of the HTC Dream hardware and the Android framework in practice.

The specific mention of the HTC Dream is due to the fact that it was the only Smartphone provided for this project. However, it should be said that the completed Android application for this project was developed to run equally well on any Android Smartphone.

2.1. GPS Analysis

One of the two fundamental pieces of information required for this project to work is the location of the phone where the picture is taken. The Android application programming interface (API) makes the developer's life easy by providing a Location package that contains all the necessary information to determine the current location. The methods in the Location package that proved useful were getLatitude(), getLongitude(), getAccuracy() and getTime(), which return the latitude, the longitude, the accuracy of each reading in meters, and the Unix time at the moment of the measurement [1]. In addition, there is a Geocoder package that allows the developer to convert coordinates to addresses. If the Geocoder proved to be accurate and effective, it could dramatically influence the database implementation when storing the parking availability results [2].

This was all known in the previous semester, so one of the first objectives in this project was to fully test the GPS hardware capabilities of the HTC Dream. The questions that needed to be answered with regard to the GPS were: How fast can it take readings? How accurate are the readings? How relevant is the getAccuracy() function?

2.1.1. GPS Capabilities

Once the phone was acquired, one of the first tests performed was to use the Location package to test the GPS’ robustness and capabilities. In order to test the speed at which GPS readings can be recorded, different thresholds for the minimum time between GPS readings were set and the phone was able to consistently and reliably acquire GPS readings at a rate of one reading per second. Once this was determined, a set of GPS data was acquired while driving and then plotted onto Google Maps as shown in Figure 2. It can be observed that the GPS on the HTC Dream Smartphone is fairly precise since every point maps directly onto the road without any filtering. This result exceeded the initial expectation of getting inconsistent readings that would require a filtering algorithm to align them back onto a road. However, this initial test was performed mostly on a highway and distant from any large buildings that would obstruct the phone’s path to satellites. It was not a realistic simulation of a metropolitan downtown environment.

Figure 2 Test Run #1

A second test of similar fashion was then performed while looking for parking in downtown Montreal. The results are shown in Figure 3, and they reaffirmed the decision that a filter to map the coordinates onto their respective roads was not necessary. However, the GPS was in constant motion, and it is therefore difficult to gauge the actual accuracy of the GPS readings compared to the ground truth.

In order to get the actual accuracy of the GPS readings, stationary tests were required.


Figure 3 Test Run #2

To be able to judge the accuracy of the HTC Dream's GPS readings, the ground truth must first be determined. Using Google Earth and Flash Earth as references for the ground truth, the results of stationary readings are shown in Figures A1 to A6 in the Appendix [3] [4]. The differences between the ground truth and the GPS readings acquired by the phone were computed and converted from degrees to meters using a conversion calculator; the points on the graphs represent the captured GPS readings, with the ground truth at the origin [5].
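The degree-to-meter conversion used here can also be computed locally rather than with an online calculator. The sketch below uses the usual equirectangular approximation; the 111,320 m-per-degree constant and the function name are ours, not taken from the report:

```python
import math

METERS_PER_DEG_LAT = 111_320.0  # approximate length of one degree of latitude

def degrees_to_meters(dlat, dlon, ref_lat):
    """Convert small lat/lon differences (in degrees) to north/east
    offsets in meters. Longitude degrees shrink by cos(latitude), so the
    reference latitude of the test site is needed. Adequate for offsets
    of at most a few hundred meters."""
    north = dlat * METERS_PER_DEG_LAT
    east = dlon * METERS_PER_DEG_LAT * math.cos(math.radians(ref_lat))
    return north, east
```

At Montreal's latitude (about 45.5 degrees), one ten-thousandth of a degree is roughly 11 m of latitude but only about 7.8 m of longitude, which is why the two axes of the error plots are not interchangeable.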

It can be observed from the readings taken at the corner of Rue Prince Arthur and Rue University on the 5th of October that the average error was about 9.4 meters south and 0.09 meters east of the ground truth, with an average total error of about 9.6 meters. More importantly, with 156 sample readings, the variance in longitude and latitude was around 3.7 meters and 0.18 meters, respectively.

This variance is relatively minor considering that an average parking spot is about 5 meters in length.

On the 23rd of November, another set of GPS readings was taken at the same location in similar weather, with an average error of 0.6 meters east and 3.4 meters south. With a sample size of 193, the variance in longitude and latitude was 0.8 meters and 1.3 meters, respectively. This result is congruent with the initial expectation that GPS readings vary on a day-to-day basis.

This experiment, along with the other GPS data in the Appendix, shows that GPS readings are at best accurate to about eight to ten meters, due to the day-to-day variations and the variance of the readings. This error had to be accounted for even under the most ideal conditions.


However, harsher conditions must also be tested to get a better idea of how to handle surroundings where the phone's view of the satellites is highly obstructed. To do so, the phone was placed at the corner of Avenue McGill College and Rue Cathcart, where towering high-rises abound, to gather GPS readings. The data obtained showed that the readings can vary from 81 meters south to 32 meters north and from 52 meters east to 84 meters west of the ground truth. This is not an error that can be accounted for within the current time frame of the project. Therefore, to minimise complexity, the project considers only accurate GPS readings.

2.1.2. Relevance of the getAccuracy() Function

Looking at the 225 sample readings collected where the phone returned a getAccuracy() of eight meters or less, the actual error with respect to the ground truth was on average 7.2 meters, with a 93.7% probability that the error was 10 meters or less. Using this as a reference, a simple filter can be devised: a GPS fix with a measured inaccuracy of eight meters or less can be considered an accurate reading with an error of no more than 10 meters.
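This filter amounts to a single threshold check; a minimal sketch (the eight-metre cutoff comes from the analysis above, the function name is illustrative):

```python
ACCURACY_THRESHOLD_M = 8.0  # getAccuracy() cutoff derived from the stationary tests

def is_usable_fix(accuracy_m):
    """Accept a GPS fix only when the phone's own accuracy estimate is
    eight metres or less; in our measurements such fixes were within ten
    metres of the ground truth about 94% of the time."""
    return accuracy_m <= ACCURACY_THRESHOLD_M
```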

2.1.3. Geocoder

One final feature that the Android API offers is the Geocoder package. This package allows the user to convert coordinates to an address. One downside to the Geocoder is that it needs to send the coordinates to the Google servers, which will return the address. This step will increase data transmission and impose an extra delay in the process. However, it allows the database to store addresses instead of coordinates, which is more easily consumable.

When put to the test, a major flaw to the Geocoder was revealed. It uses Google’s data and is equivalent to searching a coordinate in Google Maps. Instead of giving a precise address, a coordinate is translated into a range of addresses as shown in Figure 4. This reduces the accuracy of each GPS reading dramatically. Because the downsides of the Geocoder heavily outweigh its benefits, the decision to not use it was obvious.

Figure 4 Geocoder Results


2.2. Camera Analysis

During the preliminary design phase last semester, it was already discovered that the Android camera APIs were fairly intuitive to use. However, even with the version updates since then, the API remains lacking in features compared to a standard digital camera of today. Features such as control of the shutter speed (the duration of the exposure, which can cause blurring), color balance, and ISO sensitivity (sensitivity to light) are noticeably absent [6]. Although Google does intend to add numerous new camera parameters to the API in the next revision of Android, no Smartphones are capable of using these additions as of this writing [7].

Due to these limitations, camera testing was performed to provide a point of reference and to detail the efficiency of the camera in standard driving conditions. Aside from software prototyping to test the API, the primary test consisted of capturing images from within a moving vehicle. A simple camera application was written that captured an image every ten seconds and saved it onto the secure digital (SD) card of the Smartphone. Several reference images obtained from these tests can be viewed in the Appendix.

2.3. Ground Line Analysis

Ground line is a feature on the Android phone used to distinguish the road from the cars. The information obtained from this feature is then sent to the server in order to aid the MATLAB algorithm that finds cars.

There are, for all intents and purposes, two ways to distinguish the cars from the road in an image, and the final decision as to which approach to take came from a thorough analysis of both.

2.3.1. Ground Extraction

The first approach for distinguishing the cars from the road is an algorithm called ground extraction, which works on each captured image much like the magic wand tool popularised by Photoshop. Several pixels from the image are first chosen to form the reference colors. These reference colors are then compared against the rest of the image with a tolerance in order to select only similar colors. In the scope of the project, this feature could be used to separate the road from the cars by taking colors on the road as references and comparing them with the rest of the image.
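The selection step can be sketched in a few lines. This is an illustrative tolerance-based color match on grey levels, not the project's actual implementation; the function and parameter names are ours:

```python
def magic_wand_mask(image, seeds, tolerance):
    """Select every pixel whose intensity lies within `tolerance` of any
    seed pixel's intensity. `image` is a 2-D list of grey levels (0-255)
    and `seeds` is a list of (row, col) positions assumed to lie on the
    road. Returns a boolean mask of the same shape: True marks "road"."""
    refs = [image[r][c] for r, c in seeds]
    return [
        [any(abs(px - ref) <= tolerance for ref in refs) for px in row]
        for row in image
    ]
```

With road pixels around grey level 100, a tolerance of 5 would keep the asphalt and reject the much darker or brighter car bodies; as noted below, choosing that tolerance per image is exactly the hard part.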

7

Note that this algorithm is not without its flaws. The best level of color tolerance differs from image to image, and a poorly selected tolerance could be disastrous. The initial pixels taken as the reference for the road must also always be assumed to lie at a certain point in the image.

2.3.2. Ground Line

As an alternative to the ground extraction algorithm, the ground line is a simple line that the user draws to split the cars from the road before starting the image capture sequence.

There are admittedly significant downsides to using a line instead of the ground extraction algorithm. In a real driving environment, a car will change lanes and go over bumps, and when this happens the reliability of the ground line becomes questionable.

However, there are also major benefits in using the ground line approach; the most evident benefits being those of not requiring any time to process the image and being considerably easier to implement.

The easier implementation is important because the issue of the ground line only became apparent near the end of the project, when all the systems were finally being merged together. The MATLAB algorithm for finding cars, inherited from Konstantin Speransky, made use of the distinction between the road and the cars for optimization purposes. Not wanting to ignore an existing feature, it was concluded that only a highly optimised and precise ground extraction method would actually perform better than the ground line method. Thus, the ground line method was chosen for the purposes of the project.
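The appeal of the ground line is that the road/car split reduces to a sign test against the user-drawn line. A hedged sketch (which side counts as "road" depends on how the user draws the line; names and the orientation convention here are illustrative):

```python
def below_ground_line(p1, p2, point):
    """Return True when `point` lies below the line through p1 and p2 in
    image coordinates (y grows downward, so 'below' means larger y).
    Points are (x, y) tuples; the test is the sign of the 2-D cross
    product, assuming p1 is drawn to the left of p2."""
    (x1, y1), (x2, y2) = p1, p2
    x, y = point
    cross = (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)
    return cross > 0
```

Unlike ground extraction, this costs one multiply-add per pixel and requires no per-image tuning, which is why it was the easier choice to merge in late in the project.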

2.4. Networking Analysis

The first realisation was that, despite the HTC Dream being a quad-band device, the 3G frequencies in Canada differ from those of the United States. This was an omission in the previous semester's research and restricted the wireless use of the Android phone to Enhanced Data Rates for GSM Evolution (EDGE) and Wi-Fi. Even with this setback, however, the phone proved adequate in performance tests. The data connection was generally stable in most downtown areas, and the transmission speed was measured to max out at 75 kbit/s, not far from EDGE's theoretical maximum, averaging 40 kbit/s.

Also, not only does Android support network stacks, it is completely web-ready, with support classes that go far beyond sockets. This permitted a design that stayed focused on higher-level issues rather than on the details of socket programming.


Among the networking libraries available online, the Extensible Markup Language Remote Procedure Call (XML-RPC) protocol was chosen and the server was exposed as a web service. This protocol greatly aided the serialisation of the GPS and image data, and it also simplified programming.

This combination of hardware conditions and design choices suits the project well, since the main payload of each transmission is a processed image averaging 8 kilobytes (KB). At an average transmission rate of 5 KB per second, a constant stream of information was supplied without draining battery or bandwidth. The overhead of XML tags in an 8 KB message is also negligible compared to a custom socket packet design or a Representational State Transfer (REST) protocol.

3. SYSTEM ARCHITECTURE – ANDROID

The following sections describe the architecture of the Parkdroid system. The current implementation divides the system into two parts. The Android software, developed in Java using Google's API level 4, can be distributed across multiple Android phones. The server software runs on a Linux server to receive and process the inputs of connected phones. This section describes the on-phone implementation.

3.1. Android Fundamentals

Programming for the Android was accomplished in Java using the Eclipse integrated development environment (IDE) and the free Android plug-in. Although the core language constructs are obviously no different when programming for a Smartphone in comparison to a desktop application, there are some important concepts that had to be learned along the way.

First and foremost, every Android application runs in its own Linux process with its own special Java virtual machine called Dalvik [8]. The process starts when the application's code needs to be executed, but only shuts down when it is no longer needed and its system resources are required by other applications. This last point is especially important, as a common issue encountered due to inexperience with the platform was having the application still running in the background even though the user had exited it.

Also worth mentioning is the Android Manifest that goes along with each Android application. This structured XML file is essentially the glue that holds the application together. Every Android application is made up of many different components, and in order to encourage reusability and abstraction, the manifest file is used to link the separate components together. This proved to be a very useful framework feature partway into the project, as different members of the group had been working on separate Android components [9].

One final fundamental point worth mentioning about the Android framework is that of permissions. In order to prevent any undesirable impact on the user, every application has certain levels of permissions associated with it. These permissions are written into the Android Manifest file and cover almost everything hardware-related, from being able to use the GPS and camera to being able to write to the SD card. This is an excellent safeguard: any attempt to use a feature without the corresponding permission will fail [10].
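For an application with Parkdroid's needs, the manifest declarations would look roughly as follows. This is a hedged fragment: the report does not list Parkdroid's actual manifest, but these are the standard Android permission names for fine GPS, camera, network and SD card access, and the package name is illustrative:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="ca.mcgill.parkdroid"> <!-- package name illustrative -->
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
</manifest>
```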

3.2. Phone Design

Figure 5 Workflow of Android Smartphone

Figure 5 outlines the workflow of the background process on the Android phone. The intent is to have the user run the software after mounting their phone onto the dashboard. Parkdroid would then continuously upload sets of GPS and processed image data to the server.

To accomplish this, three sets of APIs are built into three different modules: the GPS module gathers raw GPS information; the image module captures an image and processes it; and the network module takes the GPS reading and the image and sends them to the server.

The entire sequential process starts with the completion of the login. Once a connection to the server has been made, the GPS module is called on to initiate the looping process. The reasoning for using the GPS module as the starting point is that, unlike the other modules, a GPS reading cannot be made at will, since it depends on uncontrollable variables such as satellite signal strength. And since each reading is only valid for a particular instant in a moving vehicle, it is ideally associated with a picture taken simultaneously. Because the image and networking modules can be controlled, the reception of a fresh GPS reading is used as the starting point of a process iteration.

Once an adequate reading has been obtained, the image module takes a picture with minimal delay and processes it. Once both the GPS reading and a processed image are ready, the networking module sends the data to the server; at the completion of the transmission, when the server responds with an affirmation, the GPS module is instructed to listen for a new reading and the iteration restarts.
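The iteration described above can be sketched as a simple driver loop. The three module objects stand in for the actual Android components, and the method names are illustrative, not the project's real API:

```python
def run_iteration(gps, camera, network):
    """One pass of the Parkdroid pipeline: wait for a usable fix,
    capture and process an image, upload both, then re-arm the GPS
    listener so the next iteration can begin."""
    fix = gps.await_accurate_fix()     # blocks until a fix within tolerance
    gps.suspend()                      # one set of data in flight at a time
    image = camera.capture_and_process()
    ok = network.send(fix, image)      # True once the server affirms receipt
    gps.resume()                       # listen for the next reading
    return ok
```

Suspending the listener before the capture and resuming it after the server's reply is what keeps the pipeline strictly sequential, matching the single-set-of-data-at-a-time design discussed in Section 3.4.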


3.3. Login

The login screen is the starting point of Parkdroid. At this point, all modules are inactive until a connection can be made to the server. This screen also allows the user to set the ground line parameters (explained previously in Section 2.3) and to choose a connection method.

Given the context of the project, only the secure shell (SSH) port of the server was open to the public, and the connection methods of the program reflect that. The user can either connect directly to the server at hansonbros.ece.mcgill.ca:5432 by joining McGill University's intranet via Wi-Fi, or tunnel the traffic via SSH; in the latter case, third-party SSH tunnelling software is required. During connection, the user's name and ground line parameters are sent to the server, and on acknowledgement of the connection, the GPS module is activated.

3.4. GPS

The design of Android's Google API level 4 is such that a GPS reading cannot be directly requested. Instead, a LocationListener class must be subclassed and the onLocationChanged method overridden. GPS information is then provided by the Android OS on its own schedule.

The entire process is highly dependent on the strength of the satellite signal and could potentially deliver no information if the signal is inadequate. Like an automobile navigation GPS, the LocationListener can also take up to 20 seconds to lock onto the satellite signals before delivering the first readings.

Once a proper reading has been made, the API returns a Location class object. Its longitude, latitude, accuracy and time properties are of interest. Nominal control over the frequency of the updates is also available.

Upon receiving a fresh reading, the objective is to associate it with a picture. At this point, an accuracy check is first performed to ensure the validity of the reading. As previously shown, the accuracy property of a Location object has a reasonable correlation with the actual error of the reading.

By permitting only readings with an error estimate of eight metres or less, readings are limited to errors acceptable for finding parking, without drastically reducing the number of usable readings.

When a reading of acceptable accuracy is ready, the listener is first turned off from receiving further GPS updates. This ensures that only one set of data is being processed at a time. Concurrency could cause problems and complications, since hardware resources such as the camera and the data connection are not shareable. Also, multithreading central processing unit (CPU)- and network-intensive operations such as image processing and data transmission would not yield major time improvements either. After the listener is suspended, the image module is initiated.

3.5. Camera

The camera functionality on the Smartphone consists mainly of two components: image capturing and image processing.

Before continuing, special mention should be made of the preliminary work accomplished by Sharmila Chakraborty during the summer of 2009. Also under the supervision of Professor Coates for the same project, her work was used as a base for the image capturing and image processing functionality that follows.

3.5.1. Image Capture

Sequential image capture is accomplished through a simple timer and the Android camera API. For every picture, the camera is first initialised with the desired parameters for a 640x480 image. A call to autofocus is then issued and the image is taken. This process takes approximately half a second.

The call to perform autofocus is surprisingly crucial to the eventual quality of the final image. As stated in the camera hardware analysis above, the Android camera API is currently lacking in customizable parameters; instead, there is a heavy reliance on this simple feature to ensure that the image color is well balanced. The significant contrast between having autofocus enabled and not having it enabled is shown in Figure 6.

Figure 6 The importance of having autofocus

3.5.2. Image Processing

Once an image has been captured, image processing on the Smartphone is performed. The current system consists of two steps: greyscaling and image compression.

First, the image is sent through a greyscale filter to remove its color. This reduces the size of the image and leaves it at 16 bits per pixel. This filter is provided in the Android API and takes approximately 100 milliseconds. Note that the size of the bitmaps could be further reduced to 8 bits per pixel, but this feature currently has defects in the Android API and was not used [6].

Second, the image is compressed into JPEG (Joint Photographic Experts Group) format in low quality. This second step serves two purposes. Compressing the raw image data into JPEG format reduces the file size that is sent to the server in order to conserve bandwidth. Compression in low quality allows for fine details to be blurred and thus satisfies any privacy concerns. This process takes approximately one second.
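The greyscale step relies on the Android API, but the underlying operation is a standard luminance weighting of the color channels. A pure-Python sketch; the Rec. 601 weights below are the conventional choice, not confirmed by the report:

```python
def to_grey(pixels):
    """Convert a list of (R, G, B) pixels (each channel 0-255) to grey
    levels using the Rec. 601 luminance weights. The weights sum to 1,
    so the output stays in the 0-255 range."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]
```

Dropping from three channels to one is what makes the subsequent JPEG compression so effective at shrinking the upload to around 8 KB.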

3.6. Networking

When the image processing is complete, the networking module is called to transmit the data over the network to the server. The networking module contains an open-source XML-RPC client library written by pskink [7] and released under the Apache License 2.0. Thus, the networking module uses the XML-RPC protocol to communicate between the server and the phone.

To initiate a new transmission, a subclass of Thread is created. The data sending method is then called asynchronously, and a callback is initiated when a response from the server is received. Asynchronous data transmission ensures that the UI remains responsive during transmission.

The transmission method essentially makes a remote procedure call on the web service using the connection parameters chosen at login (direct or SSH). The username, latitude, longitude, time and the image bitstream are passed on as parameters of the function call.

When a response is received from the server, the network module enables the GPS LocationListener again and the iteration restarts.

3.7. User Interface

Creating user interfaces for an Android application consists of defining view hierarchies in an XML layout file. Every interface is made up of a tree of widgets such as text fields and buttons. These widgets are then grouped and displayed on screen.

The final Android application consists of three screens. The first is a login screen, where a user can select between a local or SSH connection to the server, set the ground line, save the default settings, or log in to start the process.


Figure 7 Android Application Login Screen

The second screen is used for setting up the ground line. On this screen, a camera preview shows up and the user taps on the screen to set two points that make up the line. This step is therefore performed before initiating the image capture sequence, as these two points are then sent to the server as x and y coordinates.

Figure 8 Android Application Ground Line Preview

The final and main screen of the Android application serves as a continuously updated log of GPS reception, camera status, and server messages. Current GPS reception can be determined at a glance thanks to the color circle at the top of the screen. This circle changes color depending on reception quality. The user is continuously informed of the status of the application by the scrolling text field.


Figure 9 Android Application Main Screen

4. SYSTEM ARCHITECTURE – SERVER

4.1. Server Design

The purpose of the server in the Android parking-finding network is to take in entries from Android phones and determine whether a free parking space exists in the photograph transmitted from the phone. That information then has to be stored in a readable manner for a future client application to consume.

In the current implementation, the server sits on a Debian 5.0.3 machine at McGill University. The entire server process runs as a Python script. This script contains the several modules necessary for the server's purpose: an XML-RPC web service server, a MATLAB program capable of finding cars in an image, a probability calculation system that uses the output of MATLAB and the GPS information to determine where the probable free parking spots are located, and a MySQL database storing the resulting estimates.

The Python programming language was chosen for its conciseness: the entire server process, minus the MATLAB function, is less than 250 lines of code. Python allowed the code not to be bloated with memory management, pointers, and parsers that would take the focus away from the actual system design.

The overall architecture of the server is outlined in the following chart. Each module will now be described in further detail.


Figure 10 Python Server

In Figure 10, the entire server is divided into two semi-independent sub-processes running on different threads. On the left is the outward-facing web service server that receives entries from a network of Android phones. The two sub-processes communicate via a shared buffer.

This is so that the web server can always be responsive to new entries while the processor thread on the right works through them. Consider the timing differences between the modules: receiving a new entry on the web service server takes about one second, the MATLAB function takes roughly ten minutes, and the remaining modules take less than one second. Given these disparities, the asynchronous design allows a large set of entries to be received quickly while the processor works through the queue over a longer period of time.
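This producer–consumer arrangement can be sketched in a few lines of Python. The function names, the sentinel shutdown, and the `processed` list are illustrative rather than the project's actual code; the receiver stands in for the Twisted web service and the worker thread for the MATLAB pipeline.

```python
import queue
import threading

entries = queue.Queue()   # shared buffer between receiver and processor
processed = []            # results collected by the slow processor thread

def receive_entry(username, lat, lon, filename):
    """Fast path: the web service only enqueues lightweight metadata."""
    entries.put((username, lat, lon, filename))

def processor():
    """Slow path: drains the queue one entry at a time (the MATLAB
    detection step dominates, so the queue absorbs bursts of entries)."""
    while True:
        entry = entries.get()
        if entry is None:          # sentinel used to stop this sketch
            break
        # ... car detection, probability calculation, database insert ...
        processed.append(entry)

worker = threading.Thread(target=processor)
worker.start()
receive_entry("alice", 45.5048, -73.5772, "pic001.jpg")
entries.put(None)                  # shut down after the queued work
worker.join()
print(processed)
```

Because `queue.Queue` blocks the consumer when empty and buffers bursts from the producer, neither thread ever busy-waits.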

4.2. Web Service Server

The web server makes use of “Twisted” [13], a Python networking framework open-sourced under the MIT license. This framework allows for the construction of asynchronous XML-RPC servers with relative ease. Each remote procedure call is implemented as a function that can be called from an Android phone or any other device operating with the XML-RPC protocol. Running on port 5432, this process continuously waits for connections and serves incoming requests.

As shown in Figure 10, this server process communicates with the processor thread via a shared queue and a dictionary of per-user settings. This users dictionary is filled via the “connect” RPC.

“Connect” is one of the two allowed RPCs of the server. It essentially allows the client to check for an active server during the login phase of the on-phone process. It is also at this time that the parameters for the ground line set in the login screen are sent to the server. Since the user mounts the phone only once, the line is assumed to be valid for the entire connection session, and the parameters of the line are associated with the login username.

The second remote procedure call is “recordgps”. This method requires six parameters: the username, used for logging and to retrieve the previously saved ground line parameters; latitude, longitude, accuracy and time, which are readings from the GPS module; and finally a byte array containing the JPEG picture.

Both functions are callable concurrently, even while the processor is working on existing data.

Once “recordgps” is called with the correct parameters, the method attempts to acquire the lock on the queue to avoid synchronization problems. It then saves the incoming byte array to a file on the server and logs the GPS data, puts the GPS data and the picture’s filename on the shared queue, and releases the lock. Saving the picture to a file allows for a large queue size, since each entry consists only of a username string, two doubles, one integer, one long and one filename string.

Logging the GPS data and saving the picture file also facilitate the debugging process later.
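The two RPC handlers can be sketched with plain Python functions. The report's server registers these with Twisted's XML-RPC machinery; here only the handler logic is shown, and the temporary-file handling, the ground-line tuple format and the return values are assumptions for illustration.

```python
import os
import tempfile
import threading
import queue

# Shared state between the RPC thread and the processor thread.
users = {}                 # username -> ground line parameters from "connect"
work = queue.Queue()       # work queue shared with the processor
lock = threading.Lock()    # explicit lock, mirroring the report's description

def connect(username, ground_line):
    """Login-time RPC: record the ground line set once per mounting session."""
    users[username] = ground_line
    return True

def recordgps(username, latitude, longitude, accuracy, gps_time, picture):
    """Entry RPC: save the JPEG to disk and enqueue only lightweight metadata."""
    with lock:
        fd, path = tempfile.mkstemp(suffix=".jpg")
        with os.fdopen(fd, "wb") as f:
            f.write(picture)
        # The queue entry stays small: strings and numbers, not image bytes.
        work.put((username, latitude, longitude, accuracy, gps_time, path))
    return True

connect("alice", ((10, 300), (630, 420)))   # hypothetical ground-line endpoints
recordgps("alice", 45.5048, -73.5772, 8, 1259860000, b"\xff\xd8fake-jpeg")
```

Writing the picture to disk before enqueuing is what keeps each queue entry tiny, exactly as described above.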

4.3. Processor

The processor thread starts by testing for a valid connection to the MySQL database server. It then repeatedly gets Android entries from the queue, processes the picture in MATLAB, calculates probabilities using the MATLAB output and GPS readings, and saves the estimates to the database before reiterating.

This process also ensures that pictures with parking on the left-hand side and pictures with parking on the right-hand side are both handled effectively. Because the MATLAB detection system compares eigenimages against an existing bank of images with cars on the right-hand side, left-handed parking would not otherwise be detected. A simple solution is to use the previously determined ground line to distinguish left-handed from right-handed pictures; left-handed pictures are then flipped about the vertical axis so that MATLAB treats them the same way as right-handed pictures.
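The flip step itself is trivial; the sketch below illustrates it on a plain row-major pixel grid (the real flip happens server-side before the MATLAB stage, not with this code):

```python
def flip_horizontal(image):
    """Mirror a row-major image (list of pixel rows) about the vertical axis."""
    return [row[::-1] for row in image]

# Toy 2x3 "image": a left-handed picture becomes right-handed after the flip.
left_handed = [[1, 2, 3],
               [4, 5, 6]]
right_handed = flip_horizontal(left_handed)
print(right_handed)  # [[3, 2, 1], [6, 5, 4]]
```

Flipping twice recovers the original, so the operation loses no information.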

4.4. MATLAB

The primary purpose of MATLAB on the server is to detect cars. This is done by scanning the image and analyzing sections of various sizes until a car is found.

Algorithms of this nature are called local detectors. Two existing implementations of such detectors were considered, and the following sections explain the reasoning behind the final decision.


4.4.1. Local Detector

The first local detector model is called GentleBoost. It constructs a strong classifier from a large set of weak classifiers using a modified boosting algorithm [14].

The second local detector model is called the Principal Component Analysis (PCA) detector. It is based on the projection and then the reconstruction of a detection window using trained data that contain different sets of eigenimages.

Having implemented and tested both local detectors in MATLAB, the results clearly demonstrate that the PCA algorithm is much faster and more desirable for the purposes of this project [15]. A comparison of the duration of running both algorithms is shown in Figure 11.

Figure 11 Time to classify 750 windows using different algorithms [15] (GentleBoost vs. PCA with simple difference; time in seconds)

4.4.2. PCA Detector

Figure 12 PCA box detection

The PCA Detector was implemented by first defining training sets for cars and backgrounds: 50 images were used for cars and 250 for backgrounds. From these training images, sets of eigenimages and mean images were obtained. The general idea is that, using this information, the algorithm can determine what is a car and what is background in a new image.


Also note that the background training set was bigger due to the broad definition of what constitutes a background: it can be anything from a window, a building or the sky to a combination of any of them.

In order to handle colors, greyscale versions of the images and their respective gradient representations are used for classification; this information is color invariant. In more detail, there are four sets of eigenimages and four mean images, acquired from the greyscale and gradient versions of the car and background images.

The next part of the algorithm is the actual comparison: the reconstruction of an image I using a set of eigenimages A and a mean image X.

The projection P is obtained using the following equation, where T denotes the transpose of the eigenimage matrix:

P = A^T (I − X) (1)

The reconstruction I_r is then obtained from this projection:

I_r = A ∙ P + X (2)

PCA works by taking multiple image sections of different sizes and running the algorithm on each.

For each of these sections, there are four useful pieces of data.

The first piece is the difference d_1 between the greyscale image and its reconstruction using the car eigenimage set and mean. The second, d_2, is the same difference but using the background eigenimage set and mean. The third, d_3, and fourth, d_4, correspond to the first and second, but using the gradient images. Each difference is obtained using the following equation:

d = |I − I_r| (3)

Once the four differences are obtained, the probability of having a car in the section is calculated with this next equation:

D = d_2/d_1 + d_4/d_3 (4)

If the D value in the previous equation is greater than two, then the image section is classified as containing a car; otherwise it is classified as background [15].
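Equations (1) to (4) can be sketched with NumPy. The tiny four-pixel "images", single-column eigenimage matrices and zero means below are stand-ins for the real trained sets, used only to show the mechanics of the classification:

```python
import numpy as np

def reconstruction_error(window, eig, mean):
    """Equations (1)-(3): project onto the eigenimages, reconstruct, and
    measure the difference between the image and its reconstruction."""
    p = eig.T @ (window - mean)                 # (1) P = A^T (I - X)
    rec = eig @ p + mean                        # (2) I_r = A . P + X
    return float(np.linalg.norm(window - rec))  # (3) d = |I - I_r|

def is_car(window_grey, window_grad, car_sets, bg_sets):
    """Equation (4): D = d2/d1 + d4/d3; classify as car when D > 2."""
    d1 = reconstruction_error(window_grey, *car_sets["grey"])
    d2 = reconstruction_error(window_grey, *bg_sets["grey"])
    d3 = reconstruction_error(window_grad, *car_sets["grad"])
    d4 = reconstruction_error(window_grad, *bg_sets["grad"])
    return d2 / d1 + d4 / d3 > 2.0

# Tiny 4-pixel stand-ins: the "car" basis spans the first axis, the
# "background" basis spans the second, and both means are zero.
e1 = np.array([[1.0], [0.0], [0.0], [0.0]])
e2 = np.array([[0.0], [1.0], [0.0], [0.0]])
zero = np.zeros(4)
car_sets = {"grey": (e1, zero), "grad": (e1, zero)}
bg_sets = {"grey": (e2, zero), "grad": (e2, zero)}

window = np.array([1.0, 0.1, 0.0, 0.0])   # mostly "car-like" content
print(is_car(window, window, car_sets, bg_sets))  # True
```

A window that the car eigenimages reconstruct well (small d_1, d_3) but the background eigenimages reconstruct poorly (large d_2, d_4) yields a large D, which is exactly the D > 2 decision rule above.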


In order to process the entire image, the PCA detector works with 150 by 80 pixel image sections, looping with respect to the x axis until the right side of the scanning window reaches the rightmost side of the image. The dimensions of the images currently being used are 640 by 480 pixels. Within each iteration of this loop is a nested loop that scans with respect to the y axis until the bottom of the image section reaches the bottom of the whole image.

In other words, the image is scanned from top to bottom and then shifted slightly to the right each time until the entire image is covered. Due to the dimensions of the training sets, the scan window is also constantly resized to different magnitudes. This allows for the detection of cars of different sizes and of images with different camera angles and perspectives; in effect, the scan window is made bigger and smaller.
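The nested scanning loops can be sketched as a small generator. The 10-pixel steps are an assumption (the report does not state the stride), and the window rescaling described above is omitted:

```python
def scan_windows(img_w, img_h, win_w=150, win_h=80, step_x=10, step_y=10):
    """Yield (x, y) top-left corners of scan windows: the inner loop scans
    down the y axis, the outer loop shifts right along the x axis, and both
    stop once the window's edge would leave the image."""
    x = 0
    while x + win_w <= img_w:
        y = 0
        while y + win_h <= img_h:
            yield (x, y)
            y += step_y
        x += step_x

windows = list(scan_windows(640, 480))
print(len(windows), windows[0], windows[-1])
```

In a full implementation the same generator would be re-run for each rescaled window size, so every (size, position) pair is classified once.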

4.4.3. Bounding Box Pruning and Filtering

To best use the results of the PCA Detector, filtering is necessary.

First, a simple pruning of overlapping boxes is performed. This is accomplished by comparing pairs of overlapping boxes. Take for example two boxes, box1 and box2: both have a probability of containing a car, but box2 has the higher probability. Then box1 is kept only if the ratio of the overlapping area of the two boxes is less than the threshold of 40 percent. Note that this threshold must sometimes vary in order to adapt to situations where a car is partially hidden by a neighboring car.
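The pruning rule can be sketched as follows. The report does not specify the denominator of the overlap ratio, so the smaller box's area is assumed here, and the (probability, rectangle) pair format is a hypothetical representation:

```python
def overlap_ratio(a, b):
    """Intersection area of two (x, y, w, h) boxes divided by the smaller
    box's area (assumed denominator; the report leaves it unspecified)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / min(aw * ah, bw * bh)

def prune(boxes, threshold=0.4):
    """Keep a box only if it overlaps every stronger box by less than the
    threshold; boxes are (probability, (x, y, w, h)) pairs."""
    kept = []
    for prob, box in sorted(boxes, reverse=True):   # strongest boxes first
        if all(overlap_ratio(box, other) < threshold for _, other in kept):
            kept.append((prob, box))
    return kept

boxes = [(0.9, (0, 0, 100, 60)), (0.6, (10, 0, 100, 60)), (0.5, (200, 0, 100, 60))]
print(prune(boxes))  # [(0.9, (0, 0, 100, 60)), (0.5, (200, 0, 100, 60))]
```

Here the 0.6 box overlaps the stronger 0.9 box by 90% and is pruned, while the distant 0.5 box survives.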

Contextual cues such as the camera settings, the position with respect to the ground, and the probability of having a car in a particular box are also later combined in order to help produce an even more precise result. This is implemented by following the Bayesian framework elaborated by Konstantin Speransky [15].

4.4.4. Image Processing Optimization

One crucial issue that arose once the image processing algorithms were put in place was the processing time: it took almost thirteen minutes to process each image. Another issue was the amount of bandwidth used when transferring a colored image from the client’s Smartphone to the server. The eventual resolution can be split into three parts: improving the image processing algorithm, converting the MATLAB code into C, and using a ground line instead of the ground extraction algorithm that requires a colored image.


4.4.5. Optimization by pre-boxing

In the PCA algorithm explained in section 4.4.2, it turns out that assigning boxes takes the most processing time. Recall that PCA involves assigning boxes that surround what could possibly be a car.

In many situations these boxes may be assigned in impossible areas, such as on the ground or above the horizon, which would implicitly mean that a car is floating. An example of what the result might look like prior to pruning is shown in Figure 13.

Figure 13 Example of car detection before pruning

The best way to save time is to prevent these useless boxes from being assigned in the first place.

The idea was to implement a left/right/top pre-boxing algorithm. The pre-boxing involves identifying the size of the box, and hence the height of the car in real life, by using the following equation:

y_i ≈ y_c ∙ h_i / (v_o − v_i) (5)

where y_i is the real-world height of the car, y_c the camera height, h_i the height of the box in the image, v_o the vertical position of the horizon and v_i the vertical position of the bottom of the box.

In order to find the height of the car, the y_i value is calculated using the equation above. Since the size of the car can be estimated before a box is assigned, a lot of processing time can be saved by preventing useless boxes from being assigned in the first place, hence pre-boxing. In Figure 14, an arbitrary middle line (glowing blue) is assigned which cuts the picture in half. Logically, the left side of the line should contain only small, distant cars, so only small boxes are expected there, whereas on the right side the cars are closer, so large boxes are expected. Using this idea, large boxes on the left and small boxes on the right are never assigned in the first place.

Figure 14 Left and Right Pre-Boxing
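Equation (5) can be inverted to give the pixel height a car's box should have at a given image row, so that boxes far from that size, or above the horizon, are never assigned. The camera height, assumed real car height and tolerance below are illustrative values rather than the project's calibration:

```python
def expected_box_height(v_i, v_o, cam_height=1.2, car_height=1.5):
    """Invert equation (5) to get the expected pixel height h_i of a car
    whose box bottom sits at image row v_i, given the horizon row v_o.
    Rows grow downward here, so anything on the ground has v_i > v_o."""
    if v_i <= v_o:
        return None                 # above the horizon: cars cannot float
    return car_height * (v_i - v_o) / cam_height

def keep_box(box_height, v_i, v_o, tolerance=0.5):
    """Pre-boxing test: assign a box only if its height is within the
    tolerance of the size a car should have at that image row."""
    expected = expected_box_height(v_i, v_o)
    return expected is not None and abs(box_height - expected) / expected <= tolerance

# Horizon at row 200: a 280-px box low in the image is plausible, a 40-px
# box at the same row is far too small, and nothing above the horizon.
print(keep_box(280, 440, 200), keep_box(40, 440, 200), keep_box(100, 150, 200))  # True False False
```

Filtering on the expected height before classification is what removes the left-side large boxes and right-side small boxes described above.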

Another concept used is top pre-boxing, which prevents all boxes above the horizon from being assigned in the first place. The horizon line (pink glowing line in Figure 15) is created using the points provided by the client side when setting the ground line, as explained in section 3.6.

Figure 15 Top pre-boxing

4.4.6. Converting MATLAB to C

Another solution to increase the efficiency of processing an image was to change languages from MATLAB to C. MATLAB has a compiler which allows the conversion of MATLAB files into C wrapper classes; the console command used to convert the MATLAB files is MCC.

The C wrapper class makes use of the MCR (MATLAB Compiler Runtime) library which allows the C files to call and use MATLAB functions. It is necessary to set the environment path to that library in order to have a working standalone C application.


4.4.7. Ground Line replacement

In order to decrease the bandwidth consumed in uploading large amounts of pictures to the server, as explained in section 2.3.1, it was necessary to replace the coloured image with a greyscale image. Originally, the code provided by Konstantin Speransky used a coloured image to extract the ground, in order to increase the probability of boxes surrounding a car while removing boxes placed on the ground. As such, an extra feature was created on the client side to replace the coloured image with two coordinates. From those coordinates, it is a simple matter to calculate the slope of the ground line (section 2.3), seen in the right picture of Figure 16.

Figure 16 Ground extraction replacement

4.5. Probability Calculations

At this point, the measured GPS location, its estimated inaccuracy and the output of MATLAB are known. The MATLAB output is presented as an array of probabilities of cars found in the image. From this, an estimate of the probable free parking spots at specific locations needs to be derived.

From this situation, two problems arise. First, the actual position where the picture was taken is not known. A measured location exists but, as highlighted in section 2.1, a disparity exists between the measured location and the actual location. Therefore, probabilities cannot be assigned to specific locations with certainty.

Second, MATLAB outputs the probabilities of the cars found in the picture: a list of how certain it is that the cars found are actually cars. This information does not say which spots are occupied, and it definitely does not say which spots are empty. What needs to be provided to future customers is not a list of where cars have been found, but of where cars should have been found and were not. This inversion is problematic since the number of cars that should be found is unknown.


This module must then use these two imperfect pieces of information to produce a fair and meaningful estimate of the desired answer. To solve the first uncertainty, the estimates associated with the picture are shared among all locations the phone could have been at when the picture was taken. Since a reasonable correlation has been found between the measured inaccuracy and the actual inaccuracy, the measured inaccuracy is used to create a box of possible locations over which to distribute the estimate. Each possible location receives a share of the estimate weighted by its distance to the measured location: closer possibilities get a bigger share and farther possibilities get a smaller share. The weighted distribution is normalised as a whole so that the estimated number is conserved for the picture.
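The spreading step can be sketched as follows. The grid spacing and the inverse-distance weighting function are assumptions; the report specifies only that closer locations get larger shares and that the weights are normalized:

```python
import math

def spread_estimate(estimate, measured, accuracy_m, cell_m=5.0):
    """Distribute a free-spot estimate over a grid of candidate locations
    around the measured GPS point, weighted by closeness to the measured
    location and normalized so the total estimate is conserved."""
    mx, my = measured
    cells = []
    steps = int(accuracy_m // cell_m)
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            x, y = mx + i * cell_m, my + j * cell_m
            w = 1.0 / (1.0 + math.hypot(x - mx, y - my))  # closer -> bigger
            cells.append((x, y, w))
    total = sum(w for _, _, w in cells)
    return [(x, y, estimate * w / total) for x, y, w in cells]

shares = spread_estimate(1.5, (0.0, 0.0), accuracy_m=10.0)
print(round(sum(s for _, _, s in shares), 6))  # 1.5 (estimate conserved)
```

Normalizing by the total weight is what guarantees that the picture's estimate is conserved no matter how large the uncertainty box is.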

To solve the second uncertainty, the total number of cars that should have been detected needs to be determined first. Since the sizes of the PCA detection boxes in MATLAB can be controlled, a limit for the smallest detectable car is set. This has been fitted so as to detect the third car but not the fourth. This then becomes the number of cars that should be detected by the algorithm in its most reliable cases. It is one of the potential flaws of the system, and it can be improved upon by adding more training data to the MATLAB system to increase reliability; this is further discussed in later sections.

There is still, however, the problem of not being able to assign probabilities to each specific location, since it is very difficult to map each spot found in the picture to an actual parking spot. A compromise has been made to solve this problem: the ‘resolution’ of the information is decreased, so that the output of the inversion of the MATLAB information is an estimate of the number of free parking spots likely found in the picture. A list of three probabilities of a car being in the picture is thus turned into an estimate of the number of cars in the picture. This reduction can be made because, first, the initial ‘precise’ data is not without uncertainty, and keeping a longer uncertain number does not make it better data. Second, given the context, it is not more useful for consumers to know exactly where the free parking is: knowing where to go to find free parking is just as useful as knowing which of the three spots in front of the destination will be free. Once within this uncertainty range of three car lengths, the user can make their own decision on where to park. The purpose of the data is to help the user decide where to look for parking.

This entire process can then be explained with the following diagrams:


Figure 17 Stages of probability calculation

Figure a represents the ground truth, a situation where the user takes a picture of three parking spots, two of which are occupied by cars.

Figure b represents the output of MATLAB: a probability is assigned to each spot. In reality, however, it would be impossible to know which probability corresponds to which spot.

Figure c is an estimate of how many free spots are seen, rather than of how likely it is that the observed spots are not free. This number is derived from Figure b by calculating how likely it is to have one, two or three free spots. In this case, there is a probability of 0.54 that one parking spot is available, a probability of 0.42 that two parking spots are available and a probability of 0.04 that all three parking spots are free.

These probabilities are then converted into an estimated number that is meaningful to users. This is done by computing 1 ∙ p1 + 2 ∙ p2 + 3 ∙ p3. For this example, the result is 1.5, which is intuitively a reasonable guess given the numbers in Figure b. This means that there are about 1.5 free spots within this picture.
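The conversion is a simple expected value. The probabilities below are those of the worked example (p1 = 0.54, p2 = 0.42, p3 = 0.04, which sum to one and reproduce the 1.5 estimate):

```python
def expected_free_spots(p1, p2, p3):
    """Convert the probabilities of exactly 1, 2 or 3 free spots into a
    single estimated number of free spots: 1*p1 + 2*p2 + 3*p3."""
    return 1 * p1 + 2 * p2 + 3 * p3

# Values from the worked example in the text.
print(round(expected_free_spots(0.54, 0.42, 0.04), 6))  # 1.5
```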

Figure d accounts for the other inaccuracy, the GPS inaccuracy. Since the exact location of the phone when the picture was taken is not known, all probable locations get a share of the estimate in c, with a normalized weight related to their distance from the measured location.

To obtain Figure e, the coefficients in Figure d are multiplied by the values in Figure c to yield the final result that appears in the database. Since the result in Figure d is normalized, the sum of all numbers in Figure e is still 1.5; it is simply spread out, since it is impossible to say in which single spot the 1.5 estimate is valid. Compared with the ground truth, the goal of the system is well accomplished, which is to guide the user to a location that has a free parking spot in visible range. In this case, compare the one free parking spot in the ground truth with the final results in Figure e.

4.6. Database

The database is composed of the two tables shown in Figure 18.

Figure 18 Database tables

A specification of the project is to have all findings matched with actual spots on the ground. Therefore, the structure contains one table with all potential parking spots and another storing the probabilities of availability.

4.6.1. Database Table: parkdroid_location

The first table contains the information that aids in the calculation of finding a parking spot. Each record is essentially a potential parking spot, associated with a reference name and a GPS location. Each record also includes a Boolean field (isValid) which marks it as a valid or non-valid parking spot (for example, a fire hydrant). The distance field holds the relative distance from where the first parking spot of that street was registered in the database. For this project, all parking spaces are marked on University Street between Milton Gates and Prince Arthur, as shown in Figure 19.


Figure 19 parkdroid_location with University Street values

4.6.2. Database Table: parkdroid_data

The second table makes use of the parkdroid_location table. As explained in Section 4.5, each estimate is associated with a parking spot, a time stamp, and the user who provided the GPS data and picture.

Figure 20 parkdroid_data after recording values
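The two tables can be sketched as follows, using SQLite in place of the project's MySQL; apart from isValid and the distance field, the column names here are guesses inferred from the description, not the actual schema:

```python
import sqlite3

# The real server uses MySQL; sqlite3 is used here only to sketch the schema.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE parkdroid_location (
    id INTEGER PRIMARY KEY,
    name TEXT,            -- reference name of the potential spot
    latitude REAL,
    longitude REAL,
    isValid INTEGER,      -- 0 for non-spots such as fire hydrants
    distance REAL         -- distance from the street's first registered spot
);
CREATE TABLE parkdroid_data (
    id INTEGER PRIMARY KEY,
    location_id INTEGER REFERENCES parkdroid_location(id),
    username TEXT,        -- user who supplied the GPS data and picture
    recorded_at INTEGER,  -- time stamp of the reading
    estimate REAL         -- share of the free-spot estimate for this spot
);
""")
db.execute("INSERT INTO parkdroid_location VALUES (1, 'University-01', 45.5075, -73.5765, 1, 0.0)")
db.execute("INSERT INTO parkdroid_data VALUES (1, 1, 'alice', 1259860000, 0.37)")
row = db.execute("""SELECT l.name, d.estimate FROM parkdroid_data d
                    JOIN parkdroid_location l ON l.id = d.location_id""").fetchone()
print(row)  # ('University-01', 0.37)
```

The join shown at the end is the query shape a future client application would use to read estimates matched to named spots.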

5. SYSTEM RESULTS

This section reports on the performance of the Parkdroid system after the entire system was implemented, assessing how well each component has fulfilled its goals and what improvements can be made.

5.1. GPS Results

The GPS has been tested to perform in accordance with expectations. This module is understandably the least surprising in terms of results, since so many tests had already been performed prior to its current design. The usage of the API's measured inaccuracy in the probability calculation module has turned out to be appropriate, as it consistently allowed probabilities to be assigned to the relevant spots.


For future optimisation, the LocationListener class can have its listen handle removed during processing and transmission to save battery power. The initial lock-on stage of the GPS also leaves room for improvement, since it can sometimes take 20 seconds before a first reading is made, but not much can be done using the current APIs.

5.2. Camera Results

5.2.1. Camera Timing

Overall, the camera portion of the application now averages approximately 1.5 seconds per image. This is the time from first capturing the image to having a processed image ready to be sent to the server. The same tasks in the initial code provided by Sharmila averaged around 37 seconds per image, so the camera portion has been improved by a factor of roughly 25.

Although this represents a substantial improvement, it remains an area that can be improved further. The addition of autofocus to deliver clear images has the downside of increasing the delay required to capture an image. According to Google, new camera functions will be added to the API that could compensate for this issue by performing color correction on the image.

5.2.2. Image Processing

Image processing currently succeeds in converting a color image to greyscale and reducing the quality of the image in order to reduce its size. It also succeeds in distorting the image sufficiently for privacy concerns, and in an adequate amount of time, as described previously.

Further reducing the size of the image remains possible by compressing it to 8 bits per pixel rather than the current 16 bits per pixel. As mentioned in Section 3.4, this feature currently has defects in the Android API and was not used [6].

From these results, it can be concluded that performing a basic level of image processing on the Smartphone before delivering the image to the server was the right decision. An important part of the scalability of the entire system rests on the efficiency of this component: relying too much on the server for processing would make it a bottleneck once the number of users grows beyond a certain point.


5.3. Bandwidth Results

Data plans for Smartphones are currently rather expensive, costing $30 CDN (Canadian) for 500 megabytes (MB) of traffic. With the final settings, each package sent to the server is approximately 8 KB. This package consists of the processed image, the GPS coordinates, and the ground line coordinates.

Assuming 150 images per day, this costs the average user around $2.16 CDN per month, which equates to roughly 7.2% of their monthly bandwidth allowance. This result is below the 10% maximum that Professor Coates stated as the desired goal.
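These figures can be checked with a few lines of arithmetic (a 30-day month is assumed):

```python
# Reproduce the bandwidth estimate from the text (assumed 30-day month).
package_kb = 8          # processed image + GPS + ground line coordinates
images_per_day = 150
plan_mb = 500
plan_cost_cdn = 30.0

monthly_mb = package_kb * images_per_day * 30 / 1000   # ~36 MB per month
share = monthly_mb / plan_mb                           # fraction of the plan
cost = share * plan_cost_cdn

print(monthly_mb, round(share * 100, 1), round(cost, 2))  # 36.0 7.2 2.16
```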

5.4. Network Results

The network module has performed reliably in tests. At no time was the transmission inconsistent or slow, nor was the data corrupt. However, some improvements could be added to provide more feedback in case of errors caused by an inactive server. The transmission speed was consistent with previous measurements in the field, and pictures can be transmitted to the server within 2 seconds.

5.5. Server Results

The Python server has worked reliably during tests. The uptime of the server was surprising, since it never stopped being active when left on for extended periods. Before bringing this project to a wider market, however, a more dedicated server would definitely be necessary. Better hardware can help reduce the image processing time that is currently of concern, and a more appropriate network topology can avoid connection problems on the Android side and secure one port open to the public.

More fault tolerance is also required before deploying this project to maximise uptime.

5.6. Image Processing Results

5.6.1. Processing Time

After implementing the optimization solutions from sections 4.4.4, 4.4.5 and 4.4.6, the amount of time required to process an image decreased. Originally it took thirteen minutes to process an image; now it takes approximately six minutes, thus reducing the image processing time by 53%.


Figure 21 Time analysis after optimization (before/after durations for the PCA Detection, Pruning and BN Model stages, and the total time)

However, during full system tests, a deterioration in performance was experienced when using real-time pictures: processing time averaged 11 minutes. This is a concern that can affect the validity of entries, since they become less certain as time passes. Currently, the bulk of the total processing time is in the PCA detector stage, but more detailed measurements need to be made in the future to determine and address the source of the delay.

5.6.2. Reliability of Image Detection

During field tests, the image processing showed some inaccuracy and cars were not found.

Comparison with the original code showed that the optimisation did not, in general, introduce new errors. However, the modified pruning process that employs the ground line could have introduced some problems by incorrectly removing some cars found by the PCA detector.

In other words, the error seemed to be introduced by using a different set of input pictures. The compression quality of the input pictures was suspected, but this was ruled out since high-quality pictures performed equally badly.

In the new picture sets, cars were mostly simply not found by the PCA detector. Since tests on the old pictures were made against training data formed from those same pictures, new pictures were not as easily detected due to a lacking pool of training data. Continuously expanding the training data would greatly increase reliability.

6. CONCLUSION

The current Parkdroid system contains all the components required to detect free parking spaces. Although there remains some way to go before the system is commercially ready to serve as a data provider for real customers, that goal is not far from reach.

All systems performed well individually and successfully fulfilled the goals set forth at the beginning of the project. The provided restrictions and specifications were respected from beginning to end, and the system flow of the integrated components was reliable. Some inaccuracy remains in the server's image processing component, but the system architecture is healthy and the issues originate primarily from a lack of training data.

With these corrections in place, Parkdroid can provide suitably formatted data that can then be easily consumed and visualized by a future client application. In the open-source community, each of the components of Parkdroid can also serve as a strong base for further development in its respective field.

ACKNOWLEDGEMENTS

The authors would like to thank Mark Coates, Abhay Ghatpande and Konstantin Speransky of McGill University for their help and support.

REFERENCES

[1] Google Code, “Location”, available online at http://developer.android.com/reference/android/location/Location.html, November 30, 2009.

[2] Google Code, “Geocoder”, available online at http://developer.android.com/reference/android/location/Geocoder.html, November 30, 2009.

[3] Google, “Google Earth”, available online at http://earth.google.com/, November 30, 2009.

[4] Paul Neave, “Flash Earth”, available online at http://www.flashearth.com/, November 30, 2009.

[5] CSG Computer Support Group, “Length of a Degree of Latitude and Longitude Calculator”, available online at http://www.csgnetwork.com/degreelenllavcalc.html, November 30, 2009.

[6] Android Developers, “Package Index”, available online at http://developer.android.com/reference/packages.html, November 28, 2009.

[7] Android Developers, “Android 2.0, Release 1”, available online at http://developer.android.com/sdk/android-2.0.html, November 28, 2009.

[8] Android Developers, “What is Android?”, available online at http://developer.android.com/guide/basics/what-is-android.html, November 28, 2009.

[9] Android Developers, “Application Fundamentals”, available online at http://developer.android.com/guide/topics/fundamentals.html, November 28, 2009.

[10] Android Developers, “Security and Permissions”, available online at http://developer.android.com/guide/topics/security/security.html, November 28, 2009.

[11] Tom Gibara, “Urgent: unable to get rid of the memory issues for graphics apps”, available online at http://groups.google.com/group/android-developers/, November 30, 2009.

[12] Google Code, “android-xmlrpc”, available online at http://code.google.com/p/android-xmlrpc/, November 29, 2009.

[13] Twisted Matrix Labs, “Twisted”, available online at http://twistedmatrix.com/trac/, November 30, 2009.

[14] K. Murphy, A. Torralba and W. Freeman, “Using the Forest to See the Trees: A Graphical Model Relating Features, Objects, and Scenes”, NIPS 16, 2003.

[15] K. Speransky, “Car Detection using Bayesian Network”, Term Paper, McGill University, March 2009.


APPENDIX A – REFERENCE IMAGES


APPENDIX B – GPS READINGS

[Scatter plot omitted: Acquired GPS Coordinates, McGill Campus Outside Art Building; axes show latitude and longitude difference in meters]

Figure A1 The data points show the error magnitude of the GPS readings

[Scatter plot omitted: Acquired GPS Data, Cathcart and McGill College; axes show latitude and longitude difference in meters]

Figure A2 The data points show the error magnitude of the GPS readings


[Scatter plot omitted: Acquired GPS Coordinates, Maisonneuve and McGill College; axes show latitude and longitude difference in meters]

Figure A3 The data points show the error magnitude of the GPS readings

[Scatter plot omitted: Acquired GPS Coordinates, Prince Arthur and University Street; axes show latitude and longitude difference in meters]

Figure A4 The data points show the error magnitude of the GPS readings taken on the 5th of October, 2009


[Scatter plot omitted: Acquired GPS Coordinates, Prince Arthur and University Street, Take 2; axes show latitude and longitude difference in meters]

Figure A5 The data points show the error magnitude of the GPS readings taken on the 23rd of November, 2009

[Scatter plot omitted: Acquired GPS Coordinates, On University Street outside Rutherford Building; axes show latitude and longitude difference in meters]

Figure A6 The data points show the error magnitude of the GPS readings


[Scatter plot omitted: Acquired GPS Coordinates, On University Street near Milton Gates; axes show latitude and longitude difference in meters]

Figure A7 The data points show the error magnitude of the GPS readings

APPENDIX C – TIME LOG

03/09/09  Updated Android firmware to 1.5 on HTC Dream

04/09/09  Wrote testing code for XML-RPC communication between Android Smartphone and Python server

05/09/09  Tested networking on the Smartphone

08/09/09  Implemented a function to get the GPS via Android API.

 Able to save readings onto the SD card.

 Ran tests and converted results to KLM in order to plot them onto Google Maps.

09/09/09  Completed the first version of image capture functionality on the Smartphone using

Sharmila’s code as a base.

 Downloaded SDRESCAN application onto the HTC Dream in order to allow for scanning

 of the SD card without reboot.

Obtained a set of reference pictures using camera application

15/09/09  Main Android project uploaded to SVN. reboot.

 Discussed with Konstantin the overview of MATLAB algorithms.

16/09/09  Prepared the server and tested networking.

 Tested Konstantin’s MATLAB code. Successful in converting all the images into gradient and blurred greyscale, but failed at running the car detection algorithms.

17/09/09  Android uploads GPS coordinates directly to server.

 Tested GPS Geocoder from Android API.

 Obtained latest MATLAB code from Konstantin.

19/09/09  Prepared server database and non-SSH network tunnelling.

 Python database management.

24/09/09  Implemented a faster blur algorithm for Android Smartphone and overall code optimization.

01/10/09  Attempted to convert MATLAB code to C, but failed due to missing MCR Library path.

02/10/09  Further attempts at converting MATLAB code to C. This time Abhay helped, but still no success.

04/10/09  Made some modifications in main MATLAB program and in PCA detection.

37

05/10/09  Retrieved a set of stationary GPS data from different environments.

19/10/09  Researched into Smartphone data plans

 GPS coordinates now transfer from phone through XML-RPC to Python server, and finally into MySQL.

 Analyzed GPS data and GPS errors

23/10/09  Cleaned up code and arranged classes.

24/10/09  Added function UI to Parkdroid.

 Added image capture classes to Parkdroid.

27/10/09  Attempted to store output information from MATLAB to C as an output on the console.

29/10/09  Cleaned up code and performed some general debugging.

 Debugged MATLAB code.

05/11/09  Python server is now multithreaded. Can now accept incoming entries and process them.

 Server can now flip images for MATLAB.

 Meeting with Konstantin in order to improve the PCA detection. A significant speed improvement resulted.

06/11/09  Added ground line functionality to the Smartphone.

07/11/09  More work on converting MATLAB to C.

08/11/09  Successfully modified MATLAB such that training can be saved into a single file.

09/11/09  MATLAB can now take ground line inputs from client side.

10/11/09  Meeting with Konstantin concerning implementing ground line into MATLAB.

12/11/09  Improvements to MATLAB code concerning outputting results and handling ground line.

17/11/09  Integrated probability calculations with database.

19/11/09  Obtained more sample imaged at different camera angles.

 Meeting with Konstantin that finalized MATLAB algorithms concerning parking availability.

22/11/09  Performed lots of testing of the entire system.

23/11/09  Performed more testing of the entire system.

30/11/09  Added logging functions.

 Tweaked probability algorithm.

 Performed more tests.
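The KML conversion mentioned in the 08/09/09 entry (plotting saved GPS readings on Google Maps) can be sketched as follows. The placemark naming scheme is illustrative; the original conversion code is not reproduced here:

```python
def fixes_to_kml(fixes):
    """Build a KML document with one Placemark per (lat, lon) fix,
    suitable for loading into Google Maps or Google Earth."""
    placemarks = []
    for i, (lat, lon) in enumerate(fixes):
        placemarks.append(
            "  <Placemark>\n"
            "    <name>Fix {0}</name>\n"
            "    <Point><coordinates>{1},{2},0</coordinates></Point>\n"
            "  </Placemark>".format(i, lon, lat)  # KML orders lon,lat,alt
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        "<Document>\n" + "\n".join(placemarks) + "\n</Document>\n</kml>"
    )

# One logged fix near the McGill campus:
print(fixes_to_kml([(45.5050, -73.5773)]))
```

Note that KML lists longitude before latitude inside `<coordinates>`, the reverse of the (latitude, longitude) order used by the Android location API; mixing these up is a common source of mis-plotted points.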

