Final Document
Big Dog’s Kryptonite
Controlling an RC car over a network
James Crosetto (Bachelor of Science, Computer Science and Computer Engineering)
Jeremy Ellison (Bachelor of Science, Computer Science and Computer Engineering)
Seth Schwiethale (Bachelor of Science, Computer Science)
Faculty Mentor: Tosh Kakar
05/22/09
Abstract:
In this project, a remote controlled car was modified to be driven over a wireless
network connection. By replacing the standard radio frequency receiver with an Axis 207W IP
camera, communication between a PC and the car could be established. The camera’s output
port was then programmed to communicate with the
MC9S12C32 microcontroller. Based on the camera’s output signals, the microprocessor would
then output the appropriate pulse width modulated signals to the speed and steering servos of
the car.
Contents
Introduction
    Functional Objectives Summary
    Learning Objectives Summary
Research Review
Design
    Connection between the IP Camera and the User’s PC (video feed)
    Connection between the IP Camera and the User’s PC (commands)
    Hardware: Sending signals produced by software to the car
        Theory of hardware operation
            R/C Engine
            Controlling Units of the R/C car
            IP Camera
            Microprocessor
            Battery Pack
        Hardware Design
    GUI
Implementation
    Computer to Camera
        Server
            ARM Cross-Compiler
            Getting the code on the camera
            Our Implementation of the Server
        Client
    Camera to Microprocessor
        Camera
        Microprocessor
    Microprocessor to Car
Future Work
Results
Conclusion
Appendix A – Programming the Dragon12 Board
Appendix B – Porting Code from the Dragon12 to the Dragonfly12
Appendix C – Building the proper 0x86 to ARM cross-compiler for the Axis 207W’s hardware
Appendix D – The Requirements Document
    Introduction
    Project Description
        Functional Objectives Summary
        Learning Objectives Summary
    Use Cases
    Use Case Model
    Task Breakdown
    Budget
Bibliography
Glossary
Figures
Figure 1: The Basic components necessary to fulfill the basic communication requirements for the network RC car
Figure 2: components after consolidating the computer and camera into one unit
Figure 3: Pulse width modulation
Figure 4: The complete setup on the RC car
Figure 5: The microprocessor component is added to the design
Figure 6: Initial design of GUI
Figure 7: GUI design for a dropped connection
Figure 8: Sequence diagram for using the RC car
Figure 9: How to enable telnet
Figure 10: Creating the TCP Socket
Figure 11: sockaddr_in structure
Figure 12: Binding a socket
Figure 13: Listen for a connection
Figure 14: Accepting a connection
Figure 15: Receiving commands
Figure 16: Reading characters from a buffer
Figure 17: authorizing http connection
Figure 18: Receiving an image
Figure 19: Repainting image from MJPEG stream
Figure 20: Finding the JPEG image
Figure 21: Opening a socket
Figure 22: The GUI
Figure 23: Speed and steering images
Figure 24: Setting steering and speed states
Figure 25: Sending commands to the server
Figure 26: Logitech Rumblepad information
Figure 27: Getting controller events
Figure 28: processing a command from the controller
Figure 29: Sending commands to the camera and updating the GUI
Figure 30: Sending the state to the camera
Figure 31: Controller button
Figure 32: Activating the output using iod
Figure 34: Example code showing definitions of IO variables
Figure 35: Example code for using ioctl()
Figure 36: Test program to figure out how to use ioctl() with the Axis 207W camera
Figure 37: Test program to measure the speed at which ioctl can trigger the camera's output
Figure 38: Initial code to trigger the camera's output within the server
Figure 39: Six bit signal sequence code
Figure 40: Timer register initialization
Figure 41: Microprocessor code to interpret the signals from the camera's output (initial design)
Figure 42: Microprocessor code to interpret the signals from the camera's output (final design)
Figure 43: PWM register initialization
Figure 44: Microprocessor code to create correct PWM signal for the RC car's control boxes (initial design)
Figure 45: Microprocessor code to create correct PWM signal for the RC car's control boxes (final design)
Figure 46: Connecting two Dragon12 boards
Figure 47: Resetting Dragon12 board in EmbeddedGNU
Figure 48: Choose HCS12 Serial Monitor file
Figure 49: The Serial Monitor has been successfully loaded
Figure 50: Options to control the monitor
Figure 51: Creating a new Dragonfly12 project
Figure 52: Dragonfly12 initial project layout
Figure 53: Programming the Dragonfly12
Figure 54: Downloading code to Dragonfly12 using EmbeddedGNU
Figure 55: Use Case model
Tables
Table 1: Signal conversion between IP Camera output and Steering Box input.
Table 2: Signal conversion between IP Camera output and Speed Control Box input.
Table 3: Number of signals sent and the corresponding speed and direction of the RC car
Table 4: Steering and Speed signal sequences sent and the corresponding time taken to send the signals
Introduction
This project involves modifying an R/C car to be controlled from a personal computer
over a wireless network connection. As long as a suitable wireless network connection is
available (refer to the requirements document), the user will be able to run the program,
establish a network connection between the computer and the car, and then control the car
(acceleration and steering) while being shown a live video stream from an IP camera on the car.
The design involves both software and hardware. The software will run on a user’s
personal computer. It will allow the user to connect to the IP camera on the car using an
existing wireless network, control the steering and acceleration of the car through keyboard
commands, and view a live video stream from the camera on the car. The IP camera supports
video transmission, wireless network connectivity, and the capacity to run an embedded
server, as we will see in the design. The signal output on the camera will be connected to a
microprocessor. The microprocessor will have outputs that connect to the control boxes on the
car, thus allowing control signals received by the camera to be relayed to the car. A more
detailed description of the requirements can be found in the requirements document.
Functional Objectives Summary
1. design and implement computer hardware for the car
2. design and implement the hardware on the car to convert a signal from the
computer hardware to a signal that the controls on the car can use
3. have the hardware on the car connect to a wireless network via TCP/IP
4. design a program to send control signals to hardware on the car
5. establish a connection between the hardware on the car and the program on the
user’s computer over a wireless network
6. establish full ability to control car (acceleration and steering)
7. establish video transmission from camera on car to GUI on user’s computer
Learning Objectives Summary
1. Understand how communication over wireless networks works.
2. Learn how information can be both sent and received efficiently over a wireless
network.
3. Learn how to detect whether a wireless network connection can be established and
how to detect immediately when a connection fails
4. Learn how a circuit can be built to modify a signal (such as a change in current)
5. Learn how a circuit can send a variable signal (this can be used to achieve different
speeds of the R/C car)
6. Learn how the acceleration and steering circuits work on the R/C car
Research Review
We researched several possibilities for implementing hardware on the R/C car that can
both establish a wireless network connection and send signals to the control boxes. Initially,
we wanted to use a wireless card that can connect to the internet using cell phone signals. This
would allow the car to be used anywhere a cell phone signal is present. However, this idea was
quickly revised for several reasons. First, we needed to find a small computer which supported
this type of card. The only computers we found that might work were laptops, but most were
too large for the car, and the smaller laptops generally ran Linux, which wasn’t supported by the
wireless cards (for example see Verizon Wireless 595 AirCard®). [1] We decided to try to find a
small device that acted like a computer (such as a PDA or blackberry) that could connect to a
cell phone network and run some basic applications. However, most didn’t have any output
that we could use to send signals from the device to the R/C car’s control boxes and a cell
phone network card would still be required for the user’s computer. Also, the cards required a
data plan [1], meaning a subscription would be needed just to run the car, which we didn’t want. Finally, the
details of how the cell phone networks work are proprietary, so we would have had a very hard
time configuring our software to work correctly using them.
Our second option was to use an IEEE 802.11 compatible wireless network. This way we
could have more control over the network (and we could set up a network ourselves for
testing) and most computers can connect to this kind of wireless network. All we needed was a
computer that we could put on the car. As already mentioned, most laptops were too large for
the car so we decided to look for a small device (such as a PDA) that supported wireless
networking or that had a port where we could plug in a wireless adapter. Once again, we were
unable to find one that could support wireless connectivity and have an output we could use to
send signals to the car’s control boxes. Further research yielded the discovery of IP cameras.
[2]
IP cameras can connect to a network (through either a network cable or wirelessly) thus
allowing remote control of them. [2] We decided that we could use an IP camera as long as it
could connect wirelessly, provided some sort of output for sending signals to the R/C car’s
control boxes, and allowed simple programs to be written for and run on it. The reason we
needed simple programs to run on the camera is that we need to be able to customize our
ability to control the camera so we can use it to control the RC car. The camera that seemed to
fit perfectly with these requirements was the Axis 207W. [3] It is small so it will be easy to
mount onto the R/C car. It uses IEEE 802.11b/g wireless connectivity. [4] It has a video
streaming capability as well as an output for sending signals to a device (such as an alarm). [4]
The camera also has an embedded Linux operating system [4] and Axis provides an in-depth
guide to writing scripts for the camera, which allows for custom control of the camera. There is
also an API for manipulating video called VAPIX® that is provided by Axis for use with their
cameras. The camera uses a 4.9-5.1 Volt power source with a max load of 3.5W. [4] Therefore,
we could easily use batteries to supply power to it when it is mounted on the R/C car.
Our research to find a reliable and flexible means of communication between the
remote computer and the IP Camera has led us to decide on using the TCP/IP
communications protocol suite [5] in our design. This connection will be the means of sending
driving commands from the remote computer to the IP Camera, and a solid understanding of
TCP/IP Sockets has helped us understand how the connection could be maintained.
We have decided to use a Client/Server Architecture to utilize the TCP/IP Sockets [5] as
a means of communication. It makes sense that the server should be on the Camera/Car side,
as it will process requests. To embed the application on the IP Camera, it will have to be written
in C. The bare bones of a Socket Server involve creating a socket, binding it to a port and
hostname, listening on the port, accepting connections, and handling/sending messages. In C
there are functions to perform these tasks that we plan on building off of, such as:
int socket(int domain, int type, int protocol);
int bind(int s, struct sockaddr *name, int namelen);
int listen(int s, int backlog);
int accept(int s, struct sockaddr *addr, int *addrlen);
These are some of the essential functions we’ll be able to use after including socket.h and
inet.h. With this as the basis of our communication between the Client and our Server (on the
Car/Camera side), the rest of the task of writing the Server will lie in the design of handling
incoming messages from the input of our socket, which will be discussed in detail in the
Design and Implementation section.
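To make the flow concrete, here is a minimal, self-contained sketch of how these calls fit together. This is an illustration only: the port number and buffer size are placeholders, error checking is omitted, and our actual server (shown in the Implementation section) follows this same pattern with its own details.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int serversock, clientsock;
    struct sockaddr_in server, client;
    socklen_t clientlen = sizeof(client);
    char buffer[64];
    ssize_t received;

    /* 1. Create a TCP socket */
    serversock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);

    /* 2. Bind it to a port (5555 is a placeholder) on any local address */
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_addr.s_addr = htonl(INADDR_ANY);
    server.sin_port = htons(5555);
    bind(serversock, (struct sockaddr *) &server, sizeof(server));

    /* 3. Listen for and accept a single client connection */
    listen(serversock, 1);
    clientsock = accept(serversock, (struct sockaddr *) &client, &clientlen);
    printf("Client connected: %s\n", inet_ntoa(client.sin_addr));

    /* 4. Handle messages until the client disconnects */
    while ((received = recv(clientsock, buffer, sizeof(buffer) - 1, 0)) > 0) {
        buffer[received] = '\0';
        printf("Received: %s\n", buffer);   /* a real server would act on the message here */
    }

    close(clientsock);
    close(serversock);
    return 0;
}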
Since the server will need to run on the IP Camera’s hardware, the project requires
embedding C-application(s) onto the camera. We researched the specifications of the camera’s
processor and embedded system to find out what our target environment would be. The Axis
207W has an ARTPEC processor with Linux 2.6.18 running on the armv4tl architecture. Since the
camera did not have a compiler on it and our Server would have to be developed on one of the
Intel 0x86 architecture machines, the code would have to be compiled using the proper 0x86 to
ARM cross-compiler to ensure the program would run on our target system. Axis provided us
with a cross-compiler to build; Appendix C discusses the unexpected issues we faced in simply
setting this up.
Even though the Server is to be written in C, since we are using TCP/IP as our means of
communication the Client does not have to be a C application. We have chosen Java as the
language to write the Client in, since it is cross-platform, has AWT and Swing libraries for
creating a GUI, and can deal with TCP/IP Sockets. At this point in research, there are still some
unknowns as to how all this will work together, but at an abstract level, all of the components
are there so Java seems like the perfect choice to develop the User side software in.
The VAPIX® application programming interface is RTSP-based (Real Time Streaming
Protocol). [6] The VAPIX® API is provided by AXIS as a means to remotely control the media
stream from the IP Camera to the remote computer. We have researched scripting techniques
for utilizing the API, but ultimately decided that creating our own HTTP connection to get the
MJPEG Stream from the camera was the most straightforward method of handling the video
feed. Since we have decided to write the Client/GUI in Java, getting the media stream through
an HTTP connection with the IP Camera’s MJPEG Stream source seemed to be the simplest way,
with less overhead than using the VAPIX® API. We will discuss what this means in more detail in
the Design and Implementation section.
In addition, we needed to find out the specifications of how the R/C car is controlled so
we contacted the manufacturer (Futaba) of the control boxes that are on the car. We found
out that both the steering and motor (speed) controls use a 1.0-2.0 millisecond square pulse
signal with an amplitude of 4-6 volts with respect to ground. The signal repeats every 20
milliseconds. Thus, we need the output signal from the camera to match this in order to control
the car.
Next we researched how we can control the signal output of the IP camera to produce
the signal required by the R/C car’s control boxes. We contacted Axis and learned that the
camera output has been used for controlling a strobe light so a controlled square pulse seems
possible. The scripting guide provided by Axis also gives some example scripts for controlling
the I/O of the camera, so we should be able to write scripts that can be used to send signals to
the R/C car’s control boxes. However, further research revealed that the camera is unable to
produce the signal required to control the RC car. According to the Axis scripting guide [7] the
camera can activate the output with a maximum frequency of 100Hz. This is inadequate
because the steering and motor speed controls use a Pulse Width Modulated (PWM) signal
with a period of 20ms that is high for between 1 and 2ms. A circuit could possibly be designed to
convert the output signal of the camera to the correct PWM signal, but we decided it would be
easier and allow for greater flexibility if a microprocessor was used. Therefore, we decided to
use a microprocessor to produce the signal output required by the RC car’s steering and speed
control boxes.
The microprocessor will use the output of the IP camera as input and convert the signal
to what is required by the RC car’s control boxes. We decided to use a Dragon12 development
board to design the program that will change the signal. We will then flash our program onto
an actual microprocessor, the Dragonfly12, which will be used on the RC car. The
Dragon12 development board will be used with the CodeWarrior development suite. A
detailed guide for setting up the Dragon12 board to work with CodeWarrior is given in
Appendix A.
A lot of research was done on how to use the Dragon12 board for microprocessor
development. Most of it involves learning how to use the different aspects of the Dragon12
board to accomplish different things. An aspect of the research that pertains directly to our
project is the use of a microprocessor to produce a controlled output signal. We found out that
an output signal from the microprocessor can be precisely controlled using the clock frequency
of the microprocessor. We successfully used the microprocessor clock frequency on the
Dragon12 board to blink two LEDs at different rates. One blinked about once every 1/3 of a
second and the second one blinked twice as fast. This was accomplished using the example
projects and tutorials provided by Dr. Frank Wornle of the University of Adelaide. [8]
Further research revealed that the microprocessor contains a PWM signal output. Using
that along with the microprocessor timer interrupts allows for the translation of an input signal
into an output signal. The PWM signal that is generated is based on the input signal received.
Design
As was introduced in the requirements document, we have the following issues to tackle:
• Establish a connection between the RC car’s Computer and the User’s (or driver’s) PC.
• Establish a connection between the RC car’s Computer and the hardware that controls the RC car.
• Get a real time visual of the car’s position (to allow interactive driving opportunity).
• Send commands from the User’s PC to the RC car: this will require both software and, in the case of the RC car’s side, additional hardware.
• Finally, when everything is functional, provide an attractive, user-friendly GUI.
We start with this basic picture to illustrate, in an abstract way, the modules necessary to fulfill
these requirements. Then, we will sequentially build off of this to achieve a design that
addresses full functionality:
Figure 1: The Basic components necessary to fulfill the basic communication requirements for the network RC
car
Our first objective was finding a way to connect the two sides of the system; the “PC” being the
user side and the rest being on the car side. As described in the research section, we have
decided on an IP camera to sit on the car-side and handle that end’s connection over a wireless
network.
Figure 2: components after consolidating the computer and camera into one unit
Connection between the IP Camera and the User’s PC (video
feed)
The camera’s system includes an API called VAPIX®, which is based on the Real Time
Streaming Protocol (RTSP) designed to be used by sending predefined commands like SETUP,
PLAY, PAUSE, and STOP to ‘control’ the video feed. These commands are sent as requests that
look like HTTP headers and corresponding responses of similar form are sent back from the
camera.
We wanted to get rid of the overhead of the API and avoid the time consumption of
learning it and figuring out how it might be incorporated into a Java application. We decided
that we wanted to directly use the MJPEG stream that the camera serves out. This is accessible
from a URL looking something like:
“http://152.117.205.34/axis-cgi/mjpg/video.cgi?resolution=320x240”
By creating our own HTTP connection in our Java application, we can get this as an InputStream.
But now that we have the MJPEG stream, knowing what it looks like is important, so we know
how to go about using it. From VAPIX_3_HTTP_API_3_00.pdf section 5.2.4.4:
Content-Type: image/jpeg
Content-Length: <image size>
<JPEG image data>
--<boundary>
So, the JPEG images are there, but we can’t simply put the MJPEG stream into images blindly. If
we can get the MJPEG stream as a Java InputStream, we are able to continuously find the
beginning of a JPEG and add bytes, starting there, to an Image until we get to the boundary.
Our video feed problem reduces to this: continuously find whole JPEG images within the
stream. Each time one is found, display it. Since the IP Camera is dishing out images at,
depending on your preference, around 30 frames per second, we have a considerably solid
video feed. We’ll explain the Java implementation of this in detail in the Implementation
section.
Connection between the IP Camera and the User’s PC
(commands)
From our research on the TCP/IP protocol suite, we realized that the most reliable way to send
commands would be to use the Client/Server Architecture with a connection between TCP/IP
sockets. [5] Naturally, the car side is the Server, as it will accept a connection and process
requests. The Client is on the user’s PC. Additionally, since it will not be necessary in the scope
of this project to ever have more than one ‘controller’ for a car, the Server only allows one
client to connect to it.
The Axis 207W has an embedded Linux operating system 2.6.18 running on the armv4tl
architecture. To embed the Server program on the IP Camera, it must be written in C and
compiled using Axis’ 0x86 to ARM cross-compiler to run on the IP Camera’s hardware. In this
way, we will be able to write our own Server that will run on the 207W. To implement the basic
functionality of a Server, we will use the methods touched on in our research section, creating a
socket and listening on it. Our Client can be written in almost any language; we will use Java, as
it will be highly compatible with our GUI. With a Server on the camera (on the car) that can
accept requests from the Client on the driver’s computer, the communication from the
user’s PC to the IP Camera is covered. What is sent from the Client and received by the Server,
and how it is handled, is the design question of how the car will be controlled.
Hardware: Sending signals produced by software to the car
The hardware aspect of this project focuses on programming a microprocessor to take a
signal sent to it by the output of the IP camera and relay a modified signal to the steering and
speed control boxes on the RC car. The details of this process are explained below.
Theory of hardware operation
The hardware design of this project was broken down into several key components:
R/C Engine
This R/C car is driven by a simple electric motor. As current flows through the motor in the
positive direction, the motor spins forward. As current flows in the opposite direction, the
motor spins in reverse. However, the details of the motor’s operation are unnecessary for this
project, since there is a controlling unit which determines the appropriate current to be sent to
the motor based on the pulse signal it receives. The controlling unit’s function is elaborated
upon in the next part of this section.
Controlling Units of the R/C car
We found that there were 3 components which controlled the functions of the car. The first
controlling unit is the servo. This servo box currently takes in a radio frequency signal and
divides it into two channels: one which leads to a steering box, the other to the speed control.
Each channel consists of three wires: ground, Vcc, and a pulse width modulated signal. The Vcc
carries a voltage between 4.6-6.0V. The pulse width modulation is what determines the
position of the steering and speed boxes of the car. The pulse is a square wave, repeating
approximately every 20ms. This pulse ranges between 1.0 and 2.0ms with 1.5ms as the normal
center. As the pulse deviates from this norm towards the 1.0min or the 2.0max, it changes the
position of the steering or speed (depending on which channel it’s feeding). For example, as the
pulse to the speed control moves towards 2.0ms, the speed of the engine is going to increase in
the forward direction. As the pulse moves towards 1.0ms, the engine speed is going to increase
in the reverse direction.
The graphs below are a demonstration of the pulse width modulation. As the pulse
changes, it changes the average voltage that is delivered. In the first example below, the pulse
is at Vcc for 50% of the period, so 50% of Vcc is delivered; in the second, the pulse is at Vcc for
25% of the period, so 25% of Vcc is delivered.
Figure 3: Pulse width modulation
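Put as a simple formula (an illustration only, assuming an ideal square wave and a nominal 5V supply): the average delivered voltage is the duty cycle times the supply voltage, Vavg = D × Vcc. A 50% duty cycle therefore delivers about 0.50 × 5V = 2.5V, and a 25% duty cycle about 1.25V.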
IP Camera
The details of this camera have been outlined above. The camera carries an embedded
Linux operating system which should allow for custom controlled output of the alarm port. This
output will be programmed to respond accordingly to input signals from the network
connection and be in the form of the necessary pulse needed for the speed and steering
controls. The voltage which this camera requires – 4.9-5.1V – will also be within the range that
could be provided from a battery pack.
Microprocessor
A programmable microprocessor is what we are going to use to replace the servo box and
recreate the pulse width modulation. Since the camera can change its output at most once
every 10ms, it is not adequate for supplying the PWM signal for the steering and speed commands.
Instead, the microprocessor will receive a signal from the IP Camera and translate it into the
appropriate signals for the steering and speed control units. The output signals will vary in
length between 10 and 110ms for speed and 120 and 220ms for steering (see Tables 1 and 2).
The microprocessor code will be written, debugged, and tested using CodeWarrior and the
Dragon12 Rev E development board. The programming language will consist of a combination
of C and Assembly.
Battery Pack
The standard battery pack on this vehicle is a series of six 1.2V, 1700mAh batteries.
Since equipment is going to be added, a second battery pack was added to supply the camera
and microprocessor with power. Once the IP camera and microprocessor have been
implemented, the appropriate number of batteries will be added. The current run time for the
standard car and battery pack is approximately 20 minutes. The battery pack for the IP camera
and microprocessor should last at least as long.
Hardware Design
The output of the IP camera can be activated a maximum of 100 times a second (period
of 10ms) according to the Axis scripting guide. [7] As noted above, the RC car requires a 1-2ms
pulse every 20ms so this is much too slow. Therefore, we added a microprocessor which will
take the signal from the IP camera’s output and convert it to the required signal for the control
boxes on the RC car. The output signal from the camera will consist of two alternating signals
representing the acceleration and steering of the car. The signal for the acceleration varies in
length between 10ms and 110ms in increments of 10ms. This signal represents 11 degrees of
acceleration; 5 for moving forward, one representing no movement, and 5 for moving in
reverse. The first 5 increments (10, 20, 30, 40, 50) are used for moving forward, decreasing in
speed as the signal time increases (10 is the fastest speed, 50 is the slowest speed of the car). A
60ms signal represents neither moving forward nor backwards. The largest 5 increments (70,
80, 90, 100, 110) represent moving backwards, increasing in speed as the signal length
increases (70 is the slowest, 110 is the fastest). The second signal, used to control steering,
varies between 120 and 220ms, also in increments of 10ms. This signal represents 11 steering
positions: 5 for turning left, one for going straight, and 5 for turning right. The first 5
increments represent turning left (120, 130, 140, 150, 160), decreasing in the degree of
sharpness as the length of the signal increases (120 is the sharpest left and 160 is a slight left
turn). A 170ms signal represents no turn. The last 5 increments represent turning right (180,
190, 200, 210, 220). The degree of sharpness of the turn increases as the signal time lengthens
(180 represents a slight right turn and 220 represents the sharpest right). The reason for
limiting the signal increments to 11 instead of allowing for continuous control is that the
camera can’t output a signal that varies continuously. It can only output signals with period
lengths that increment in 10ms. Thus, the signal must be divided, and allowing for 11 different
signal lengths will provide control that will seem nearly continuous to the user.
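For illustration, this encoding can be written down in a couple of lines of C. The function names and the index numbering (0 through 10 for each control) are just our shorthand here, not necessarily the exact values used elsewhere in the project:

/* Map an abstract command index (0-10) to the length, in ms, of the camera's output signal. */
int speed_signal_ms(int speed_index)      /* 0 = fastest forward, 5 = stopped, 10 = fastest reverse */
{
    return 10 + 10 * speed_index;         /* 10, 20, ..., 110 ms */
}

int steering_signal_ms(int steer_index)   /* 0 = sharpest left, 5 = straight, 10 = sharpest right */
{
    return 120 + 10 * steer_index;        /* 120, 130, ..., 220 ms */
}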
The input to the steering box and speed box on the RC car requires a 20ms pulse-width
modulated signal whose high part varies between 1 and 2ms. In order to
correspond with the eleven different signals that can be sent from the camera to the
microprocessor, the high part of the signal sent to the steering and speed boxes of the RC car
will be divided into eleven evenly spaced lengths. The different lengths are 1,
1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9 and 2ms. They correspond with the input signals from
the IP camera to the microprocessor for steering and speed control as shown in Tables 1 and 2
below.
The microprocessor has a single input, connected to the output of the camera, and two
outputs. One will be connected to the steering control box and one will be connected to
the speed control box on the RC car. The steering control will receive the appropriate pulse-width modulated signal from the microprocessor when the microprocessor receives a signal
representing a steering command from the IP camera (between 120 and 220ms in length). The
pulse width modulated signal sent to the steering box will vary appropriately as well. For
example, if the microprocessor receives a 120ms signal, which represents a hard left command
from the user, it will send a 20ms signal with 1ms high and 19ms low to the steering box on the
RC car, which represents a hard left. Likewise, if the microprocessor receives a 150ms signal
from the IP camera, it will send a 20ms signal with 1.3ms high and 18.7ms low, which
represents a slight left turn. The chart below (Table 1) illustrates the conversions of the signal
outputted from the IP camera, which is received by the microprocessor, to the input signal to
the steering box, which is sent by the microprocessor.
Table 1: Signal conversion between IP Camera output and Steering Box input. The time given as the Microprocessor output is how much of the 20ms pulse width modulated signal is high.

IP Camera Output Signal Period    Steering Box Input Signal Period (High)    Car Action
(Microprocessor Input)            (Microprocessor Output)
120ms                             1.0ms                                      Left Turn
130ms                             1.1ms                                      Left Turn
140ms                             1.2ms                                      Left Turn
150ms                             1.3ms                                      Left Turn
160ms                             1.4ms                                      Left Turn
170ms                             1.5ms                                      No Turn
180ms                             1.6ms                                      Right Turn
190ms                             1.7ms                                      Right Turn
200ms                             1.8ms                                      Right Turn
210ms                             1.9ms                                      Right Turn
220ms                             2.0ms                                      Right Turn
The output to the speed control box on the RC car will work similarly. For example, if
the microprocessor receives a 10ms signal from the IP camera, which represents the fastest
speed, it will send a 20ms signal with 2ms high and 18ms low to the speed control box of the RC
car. This corresponds to the fastest speed possible for the RC car. The chart below (Table 2)
illustrates the conversions of the signal outputted from the IP camera, which is received by the
microprocessor, to the input signal to the speed control box, which is sent by the
microprocessor.
Table 2: Signal conversion between IP Camera output and Speed Control Box input. The time given as the Microprocessor output is how much of the 20ms pulse width modulated signal is high.

IP Camera Output Signal Length    Speed Box Input Signal Length (High)    Car Action
(Microprocessor Input)            (Microprocessor Output)
10ms                              2.0ms                                   Move Forward
20ms                              1.9ms                                   Move Forward
30ms                              1.8ms                                   Move Forward
40ms                              1.7ms                                   Move Forward
50ms                              1.6ms                                   Move Forward
60ms                              1.5ms                                   No Movement
70ms                              1.4ms                                   Move Backward
80ms                              1.3ms                                   Move Backward
90ms                              1.2ms                                   Move Backward
100ms                             1.1ms                                   Move Backward
110ms                             1.0ms                                   Move Backward
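The conversions in Tables 1 and 2 reduce to two simple linear formulas. The following sketch only illustrates that arithmetic; the project's actual microprocessor code is discussed later in the Implementation section.

/* Convert the measured length (in ms) of the camera's output signal into the high time   */
/* (in ms) of the 20ms pulse width modulated signal, following Tables 1 and 2.            */
double pwm_high_ms(int camera_signal_ms)
{
    if (camera_signal_ms >= 10 && camera_signal_ms <= 110)      /* speed command    */
        return 2.0 - (camera_signal_ms - 10) / 100.0;           /* 10ms -> 2.0ms, 110ms -> 1.0ms */
    if (camera_signal_ms >= 120 && camera_signal_ms <= 220)     /* steering command */
        return 1.0 + (camera_signal_ms - 120) / 100.0;          /* 120ms -> 1.0ms, 220ms -> 2.0ms */
    return -1.0;                                                /* not a recognized command */
}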
Once completed, the connection between the IP camera, microprocessor, and RC car
steering and speed control boxes will look something like Figure 4. The microprocessor will
have a single input from the output of the IP camera. It will have two outputs; one will connect
to the speed control box and the other to the steering box. The microprocessor will continually
send signals to the two control boxes based on the input signals received. Since the IP camera
is only able to send one signal at a time, the signals for steering and speed control will alternate.
Thus, every other signal is for steering and the signals in between are for speed. While the
microprocessor is receiving a signal corresponding to steering from the IP camera, it will
continue to send the previous signal for speed to the speed control box. The reverse is true
when the microprocessor receives a signal from the IP camera for speed. This makes it so that
the commands are ‘continuous’; there is no break while the signal for the other kind of control
is being sent. This may produce a little lag, but it shouldn’t be too noticeable as the longest
signal is about 1/5 of a second (220ms). The IP camera, microprocessor, and RC car will all be
powered by batteries.
Figure 4: The complete setup on the RC car
Figure 5, below, represents the addition of the microprocessor into the overall project.
It will communicate between the IP camera and the RC car.
Figure 5: The microprocessor component is added to the design
GUI
The original design was to use the language Processing to write the GUI. Its main purpose is
generating and manipulating images, so we wanted to directly connect with the RTSP server
on the IP Camera and display the received video feed. The GUI should also have an instance of
our Client that will be written in Java (Processing is built on top of Java using its libraries;
behind the scenes, a Processing application is wrapped in a Java class). Since our design changed
from initially planning on using the RTSP server and VAPIX® API to parsing the MJPEG stream
ourselves, we have decided to start out by writing the GUI in Java as well, so we can directly
use some of Java’s libraries that will be necessary for the parsing process. If time allows, we
would like to incorporate Processing into the project as the language for the GUI. That would
allow us to have a user-friendly interface, where you can control the car through our
Client using the arrows on your keyboard and directly view the video feed from the IP camera
mounted on the car. Below are a couple of mockup screenshots of what our GUI might look like.
We will display a monitor of the relative speed, based on the commands being sent to the
server. We would also prefer to have a way to monitor the signal strength of the IP Camera’s
wireless connection. You can also disconnect using the GUI.
Figure 6: Initial design of GUI
Figure 7: GUI design for a dropped connection
Below is a sequence diagram including all of our modules added to fulfill our requirements. Also
refer to and compare with the use cases in the Requirements Document:
Figure 8: Sequence diagram for using the RC car
Implementation
The implementation covers everything that was done after the initial design. This
includes problems encountered, the design changes made to handle the problems, and the final
outcome. There will be four main sections embodying the four major areas of design for this
project: Computer to Camera, Camera to Microprocessor, Microprocessor to Car, and Powering
the Project. Computer to Camera focuses on the implementation of the user interface, controls,
and communication between the server on the camera and the program on the user’s
computer. Camera to Microprocessor describes how signals are sent from the camera to the
microprocessor and how they are received by the microprocessor. Microprocessor to Car
covers how the microprocessor creates the proper pulse width modulated signals and sends
them to the car. Finally, Powering the Project covers the circuit design and implementation for
powering the microprocessor, car, and camera via batteries.
Computer to Camera
Here we have the two sides of software to implement: the Server, on the camera side,
and the Client, on the User’s PC. The Server is in charge of receiving user commands and then
outputting the appropriate signals via the camera’s output to the microprocessor. The client
provides the GUI, including video feedback from the camera, receives commands, and then
transmits them to the Server.
Server
We wanted to keep the server as simple as possible. It serves the sole purpose of
receiving messages from the client, and handling them accordingly. There is one C file to handle
this. As noted in the design section, this C file had to be compiled using the appropriate 0x86 to
ARM cross-compiler for Linux 2.6.18 on the armv4tl architecture. Setting this up on a computer
so we could develop our server presented a major hang-up. We could not get the cross-compiler to build on our machine. This strenuous process, as well as our eventual solution, will
be discussed in detail in Appendix C. If you are planning on developing on the Axis Camera, this
may save you a great deal of time and hair.
ARM Cross-Compiler
There was very little documentation given on cross-compiling for the Axis 207W camera.
The only documentation we could find on cross-compiling for an Axis camera was given in the
Axis Developer Wiki. [9] It only covered compiling using the CRIS architecture for the ETRAX
chip family, which is different from the architecture and chip on the Axis 207W. The Axis 207W
uses an ARTPEC-A processor and ARM architecture. Thus, we had to find a different cross-compiler.
Searching the Internet eventually led to the discovery of a guide for installing an ARM
cross-compiler for Axis ARTPEC-A cameras. [10] The comptools-arm-1.20050730.1 toolchain
provides a cross-compiler for the ARM processor. The toolchain can be downloaded from
ftp.axis.se/pub_soft/cam_srv/arm_tools/comptools-arm-1.20050730.1.tar.gz. The toolchain is
only for Linux, so we needed to install it on a computer running a Linux distribution. We
already had Ubuntu 8.10 installed on a computer so we decided to try to put the cross-compiler
on that computer. Unfortunately, the installation process would fail every time due to an error
in the code. We managed to find the code that contained the error and fixed it, but this only
led to the installation crashing at a different point due to a stack overflow. We tried to fix this
error, but ended up tracing it to a compiled file, thus we were unable to change it. We had
done some research on creating our own compiler, so we decided to give that a try.
Our first attempt was trying to create our own GNU GCC compiler for cross-compiling to
the ARM architecture. This proved to be very complicated and beyond the scope of our project.
It would have involved knowing a lot of information about the ARM architecture on the Axis
207W camera, and Axis wasn’t willing to give us much information on their cameras. This left
us with trying to emulate a cross-compiler made by someone else for the ARM architecture and
then modifying it for our camera.
One thing we had determined was that the Axis 207W camera uses BusyBox along with the
uClibc C library, instead of the standard GNU C library, for C function calls.
We were able to find information about cross-compiling on the uClibc website. [11] Using a
tool called ‘buildroot’ along with the uClibc libraries, an ARM toolchain (among others) can be
built. This seemed really promising since buildroot creates the makefile needed to create the
cross-compiler based on a bunch of options. The only problem was that the cross-compiler had
to be created for Linux 2.6.18, which was an old version of the Linux kernel. The newest version
of buildroot didn’t allow for compiling to Linux kernel 2.6.18. Luckily, we were able to find an
older version of buildroot that did. However, we were still unsure about what a lot of the
buildroot options needed to be for our camera. We ended up trying different combinations,
some of which successfully built a cross-compiler. For the toolchains that installed correctly,
however, we were unable to successfully compile a test C program (e.g. HelloWorld) that would
run on the camera. This wouldn’t have been so bad, but every time an option was changed, we
would have to restart the installation process of the cross-compiler, which took around 45
minutes to complete. Thus, testing every buildroot option until we had a cross-compiler that
worked wasn’t feasible, especially since we weren’t even sure if we could create one. This led
back to the comptools-arm-1.20050730.1 toolchain.
We figured that the comptools toolchain had to have worked on some version of the
Linux kernel and with some version of the GNU C compiler. Maybe if we tried installing it on a
previous version of Ubuntu we could get it to work. We installed Ubuntu 8.04 (the version
previous to Ubuntu 8.10) and tried installing comptools on that. Again it failed with the same
error. Going back one more version of Ubuntu (7.10), we tried once more. It failed again in the
same spot. Since, the error seemed to be occurring in the same place on each distribution of
Ubuntu, it seemed likely that the error was with the GNU GCC compiler on the computer. Next,
we tried installing a different version of the GNU GCC compiler.
Looking at the file name comptools-arm-1.20050730.1, it seems pretty obvious that
the number is a date (07-30-2005). We decided to try to install a version of GCC that was
released close to that date. Release 3.4.4 was in May 2005 so it seemed like a good choice. We
tried to build it and install it on Ubuntu 7.10, but ran into problems getting Ubuntu to use that
version of the GCC compiler instead of the version already installed on the computer.
Eventually, we just tried removing the default version of GCC already on the Ubuntu 7.10
computer. This ended up breaking Ubuntu so that nothing would work. Once again, it took
nearly an hour to install a different version of GCC, let alone the time it took to reinstall Ubuntu
every time we broke it. We ended up trying it on an older version of Ubuntu again. Ubuntu
6.10 was released near the end of 2006 so it seemed reasonable that it should support the
comptools toolchain from the year earlier. After installing it, we tried to get the software that
was required for building the comptools toolchain (see Appendix C for details), but the software
repositories for Ubuntu 6.10 had been removed! We tried using the repositories from Ubuntu
7.04 (which were still available) but this ended up breaking the Ubuntu 6.10 installation. Next,
we tried Ubuntu 7.04, since it was the oldest version still with repositories. Unfortunately, the
installation of the toolchain crashed in the same spot again. Without an older supported
version of Ubuntu, we were stuck!
Searching previous versions of Ubuntu revealed one called 6.06 LTS. LTS means it has
long-term support. We installed it, and the repositories were still active, so we could install the
software we needed to install the cross-compiler. It all came down to the installation of the
cross-compiler. We started it and 50 minutes later it was complete! We compiled our Hello
World program, put it on the camera, and ran it. It worked! We finally had a cross-compiler for
the Axis 207W camera. For step by step instructions on getting the cross-compiler installed on
Ubuntu 6.06 LTS and how to use it to compile code for the Axis 207W camera, refer to
Appendix C.
Getting the code on the camera
The Axis 207W has both an ftp server and telnet on it. This makes it possible to transfer our
compiled code to the camera’s directories and then run the program. First, make an ftp
connection to the camera using the command ftp <camera’s IP address>. Enter your
username and password as prompted. The compiled C program needs to be transferred in
Binary mode, so enter the command binary. Now you must change to a writable directory,
either the /tmp folder or the /mnt/flash folder. Enter put <filename>. The compiled
file should now be in the directory you chose.
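For example, assuming the camera is at 192.168.0.90 and the compiled server binary is named server (both placeholders), the session looks roughly like this:

ftp 192.168.0.90
(enter username and password when prompted)
binary
cd /mnt/flash
put server
bye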
In our project we used telnet to run the Server. While the camera has telnet, it is not enabled
by default. Page 16 of the Axis Scripting Guide [7] describes how to enable telnet on the camera.
How to enable telnet
To enable telnet...
1. Browse to http://192.168.0.90/admin-bin/editcgi.cgi?file=/etc/inittab
2. Locate the line # tnet:35:once:/usr/sbin/telnetd
3. Uncomment the line (i.e. delete the #).
4. Save the file.
5. Restart the product.
Now, telnet is enabled.
Figure 9: How to enable telnet
Use the command telnet <camera’s IP address>. Change into the directory where the compiled
Server is and run it.
Our Implementation of the Server
The Server first initializes the socket, as described in the research section. Note: there is a
method, Crash, which simply prints the error encountered along with the string passed to it.
if ((serversock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0) {
Crash("failed creating socket");
}
Figure 10: Creating the TCP Socket
Fill in the sockaddr_in structure with the required information. Set a Hostname and Port to be
used:
memset(&echoserver, 0, sizeof(echoserver));        /* Clear struct */
echoserver.sin_family = AF_INET;                    /* Internet/IP */
echoserver.sin_addr.s_addr = htonl(INADDR_ANY);     /* Incoming addr */
echoserver.sin_port = htons(atoi(argv[1]));         /* Server port */
Figure 11: sockaddr_in structure
Bind the Socket to the local address:
if (bind(serversock, (structsockaddr *) &echoserver, sizeof(echoserver)) < 0)
{
Crash("bind() failed");
}
Figure 12: Binding a socket
Start listening for connections on the socket:
if (listen(serversock, MAXPENDING) < 0) {
Crash("listen() failed");
}
Figure 13: Listen for a connection
Accept a connection and then pass off the connection to a method to handle incoming data:
if ((clientsock = accept(serversock, (struct sockaddr *) &echoclient,
&clientlen)) < 0)
{
Crash("failed accepting client connection");
}
fprintf(stdout, "Client connected: %s\n",
inet_ntoa(echoclient.sin_addr));
HandleClient(clientsock);
Figure 14: Accepting a connection
After a connection has been established on that port, the Server starts handling messages. The specification, driven by the requirements of the Camera to Microprocessor connection covered later, is that the server receives an integer from the client and, based on that integer, sends out the proper pulses to the microprocessor.
void HandleClient(int sock) {
char buffer[BUFFSIZE];
int received = -1;
int state = 0;
while(recv_all(sock, buffer)){
state = atoi(buffer);
pulse(state);
state = 0;
}…
Figure 15: Receiving commands
The recv_all function is in a header file we include; it is used in place of calling the recv method from socket.h directly and is taken from an example in the book Hacking. [12] The reason for this is that our integer system gets into the double digits, so the Client will be sending strings that might look like "12". Since recv reads into a buffer of chars, there is no assurance that we wouldn't send off a "1" and then a "2" to pulse. This would clearly not be a reliable way to drive the car, so we want to ensure that if we send a "12" from the Client, the server waits until it gets the whole command before it sends out a pulse. We simply send a terminator, in our case "\n", from the Client, and on the Server side we wait until we hit that terminator before sending out a pulse command. The header file contains the method recv_all, which loops using the recv method and keeps adding the chars it receives to the buffer until it reaches a "\n". It then puts a null terminator, "\0", at the end of our "String" so the char array is valid and can be converted to an integer.
…
while(recv(sockfd, ptr, 1, 0) == 1) { //read single byte
    if(*ptr == END[end_match]){
        end_match++;
        if(end_match == END_SIZE){
            *(ptr+1-END_SIZE) = '\0';
            return strlen(buffer);
        }…
Figure 16: Reading characters from a buffer.
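For reference, a minimal self-contained sketch of this kind of terminator-delimited receive loop might look like the following; the function name, buffer handling, and error behavior here are illustrative only and differ in detail from the recv_all in our header file.

#include <sys/socket.h>

/* Sketch: read one byte at a time until the "\n" terminator arrives,
 * then null-terminate the buffer so it can be handed to atoi(). */
int recv_line(int sock, char *buffer, int size)
{
    int len = 0;
    char c;
    while (len < size - 1 && recv(sock, &c, 1, 0) == 1) {
        if (c == '\n') {            /* terminator found: the command is complete */
            buffer[len] = '\0';
            return len;             /* number of characters in the command */
        }
        buffer[len++] = c;          /* keep accumulating characters */
    }
    return -1;                      /* connection closed or command too long */
}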
More details on how the integers are interpreted and how exactly the outgoing pulses are
generated in the pulse function will be discussed below.
Client
The Client side (all the software on the User's PC used to drive the car) has more components and is responsible for: getting the MJPEG stream, parsing the stream into JPEGs as they arrive, continuously displaying them in a GUI, taking commands from some interface (keyboard and gaming controller), and finally sending commands off to the Server. This is all written in Java and the tasks are split up between three classes: AxisCamera, StreamParser, and Cockpit.
We can think of AxisCamera as a container of the MJPEG stream. The basic functionality
of this class includes:
 opening an HTTP connection to the camera
 getting the MJPEG stream as a Java InputStream
 defining a boundary for the MJPEG stream
 sending the InputStream to StreamParser
 getting back an image and painting it to itself (as it extends a JComponent)
Opening an HTTP connection can be done using an HttpURLConnection in Java. We used a base64authorization from sun.misc.BASE64Encoder to encode the username and password in the encode method. Once the HttpURLConnection is opened, if it requires authorization, as our Axis camera's web server does, we have to request authorization and send the Authorization header value "Basic " followed by the Base64 encoding of "username:password"; then we can connect.
fullURL = "http://"+hostName+mjpegStream; // hostName and mjpegStream are changeable in GUI
URL u = new URL(fullURL);
huc = (HttpURLConnection) u.openConnection();
// if authorization is required, set up the connection
// with the encoded authorization information
if (base64authorization != null) {
    huc.setDoInput(true);
    huc.setRequestProperty("Authorization", base64authorization);
    huc.connect();
}
Figure 17: authorizing http connection
To synchronously get back image data from StreamParser and paint it to a JComponent we
used the following methodology:
AxisCamera implements Runnable, so it runs as a thread. The run method calls the
StreamParser’s parse method which will loop to get bytes from the MJPEG stream, explained in
the next section.
AxisCamera also implements ChangeListener, using the stateChanged method to get segments back from the StreamParser and add them to the image so that JComponent's paintComponent method will continuously paint the updated image:
public void stateChanged(ChangeEvent e) {
byte[] segment = parser.getSegment();
if (segment.length > 0) {
try {
image = ImageIO.read(new ByteArrayInputStream(segment));
EventQueue.invokeLater(updater);
} catch (IOException e1) {
e1.printStackTrace();
}
}
Figure 18: Receiving an image
The line EventQueue.invokeLater(updater); schedules updater's run method to be invoked on AWT's event dispatch thread (the EventQueue). updater is a Runnable created within AxisCamera to repaint the JComponent.
public void connect() {
try {
    /**
     * See the stateChanged() function
     */
    updater = new Runnable(){
        public void run() {
            if (!initCompleted) {
                initDisplay();
            }
            repaint();…
Figure 19: Repainting image from MJPEG stream
initDisplay simply sets the dimensions of the JComponent, AxisCamera. When the state is
changed, meaning image data is received from StreamParser, the new data is synchronously
painted to the JComponent. As long as that data is a valid JPEG, we have a video stream.
StreamParser handles parsing out the JPEG data from the MJPEG stream. The parse
method is called from AxisCamera. This essentially takes the InputStream and starts appending
everything to the internal buffer. When the boundary is found, the processSegment method is
called, which finds the beginning of a JPEG, if there is any, and then adds everything from that
point on to the segment. Lastly, it informs the listeners that the segment has changed. This
process strips anything in the stream that is not JPEG data.
public void parse(){
    int b;
    try{
        while ((b = in.read()) != -1 && !canceled) {
            append(b);
            if (checkBoundary()) {
                //found the boundary; process the segment to find the JPEG image in it
                processSegment();
                // And clear out our internal buffer.
                cur = 0;
            }
        }…

protected void processSegment() {
    // First, look through the new segment for the start of a JPEG
    boolean found = false;
    int i;
    for (i = 0; i < cur - JPEG_START.length; i++) {
        if (segmentsEqual(buffer, i, JPEG_START, 0, JPEG_START.length)) {
            found = true;
            break;
        }
    }
    if (found) {
        int segLength = cur - boundary.length - i;
        segment = new byte[segLength];
        System.arraycopy(buffer, i, segment, 0, segLength);
        tellListeners();
    }
}
Figure 20: Finding the JPEG image
parse() starts appending data from the InputStream until it hits the boundary, as described in
the research section. From VAPIX_3_HTTP_API_3_00.pdf section 5.2.4.4, the boundary of the
MJPEG stream that the 207W serves is "--<boundary>". Once the boundary is found, we have a segment that is ready to be processed, meaning any unusable data gets stripped out. processSegment() finds the beginning of JPEG data, if there is any.
static final byte[] JPEG_START = new byte[] { (byte) 0xFF, (byte) 0xD8 };
As you recall from the research section, there is data in here that is not actually JPEG image data, so by finding the beginning of the JPEG we skip over the junk we don't need. Everything from the start of the JPEG to the point where we found the boundary makes up the whole JPEG image. Now that we have a new segment we tell the listeners, one of which is AxisCamera. With StreamParser, we have ensured that AxisCamera will only get back JPEG data, so with the two combined we get a reliable video stream (rapidly updated JPEG images) from the MJPEG stream.
Cockpit:
With the video stream now covered, we have the tasks of creating a Client to connect to
the server and send commands and providing an interface to put everything together so the
user can easily use all of the components to drive the car. These two tasks are both handled in
Cockpit.java.
Cockpit goes through the process of:
 opening a TCP/IP socket to connect to the Server's socket
 creating a GUI using Swing and AWT
 adding AxisCamera as one of its components (giving the user a video feed)
 listening to the arrow keys
 defining the combination of key states as an integer to send to the server
 sending messages off to the server as integers based on the combination of speed and steering determined by the arrow keys
The implementation of this should be straightforward when looking at the Cockpit.java file. The state definitions and the integers sent off to the Server are given in Table 3 below. The keyboard is the default interface for controlling the car. This is done by making Cockpit implement the KeyListener interface. We used keyPressed and keyReleased along with the key codes of the arrow keys (37, 38, 39, and 40) to listen specifically for when the arrow keys are pressed and released, i.e. we want to pay attention both to when someone presses the right arrow and to when they release it.
Cockpit.java sets up the GUI by adding all the components (what this looks like can be seen
below). This is all done using standard Swing and AWT tools. After everything is initialized, you
will see the GUI, where you can connect. By default the program will try to connect to a Camera
that has been hardcoded into AxisCamera. Since we were always using the same IP camera with
a static IP address, we have the program try to initialize with a video feed. If it doesn’t work
that’s ok, the GUI will load without an image. We made the IP address to connect to resettable
in the Setup tab. The Connect button action will call the openSocket() method. This will open a
connection to the MJPEG stream (if there isn’t already one open) and also connect to the server
on the camera.
private Boolean openSocket() {
...
try {
    if(!axPanel.connected){
        axPanel = new AxisCamera(hostnameField.getText(),
            mjpgStreamField.getText(), userField.getText(), passField.getText());
        feed = new Thread(axPanel);
        feed.start();
        feedPane.add(axPanel);
        feedPane.repaint();
        Thread.currentThread().sleep(3000);
    }
    controller = new Socket(axPanel.hostName, port);
    // Socket connection created
    System.out.println("Connected to: " + axPanel.hostName
        + " -->on port: " + port + "\n'q' to close connection");
    input = new BufferedReader(new InputStreamReader(controller.getInputStream()));
    output = new DataOutputStream(controller.getOutputStream());
    isConnected = true;
    button1.setText("Disconnect");
    logo.setIcon(logos[1]);
    returnee = true;
...
}
Figure 21: Opening a socket
Here controller is our Socket. We instantiate a new AxisCamera if necessary, and then use
the camera’s hostname and the port the server is listening on to open a Socket. We use the
Socket to create new input and output streams. The rest is just setting Booleans and text as
well as our little logo, which you’ll see in the screen shot. Now we have covered presenting an
interface and making the appropriate connections.
The current GUI looks like this:
Figure 22: The GUI
Here, all of the images are ImageIcons, which are simply updated as states change. The images
are available at http://code.google.com/p/networkrccar/downloads/list under putwClasses.zip.
These image files need to be in the same directory as the classes. The image file names are first stored in arrays of Strings, then the ImageIcons are created with the createImageIcon method. This is done at the beginning of the program along with the GUI initialization.
final private String[] SPEEDPATHS = { "GUIspeed0.png", "GUIspeed1.png",
"GUIspeed2.png", "GUIspeed3.png", "GUIspeed4.png",
"GUIspeed5.png" };
final private String[] ARROWPATHS = { "GUILeftOff.png", "GUILeftOn.png",
"GUIRightOff.png", "GUIRightOn.png" };
final private String[] LOGOPATHS = {"GUIlogoOff.png" , "GUIlogoOn.png"};
...
protected ImageIcon[] createImageIcon(String[] path, String description) {
    ImageIcon[] icons = new ImageIcon[path.length];
    for (int i = 0; i < path.length; i++) {
        java.net.URL imgURL = getClass().getResource(path[i]);
        if (imgURL != null) {
            icons[i] = new ImageIcon(imgURL, description);
        } else {
            System.err.println("Couldn't find file: " + path[i]);
            return null;
        }
    }// for
    return icons;
}
...
public void keyPressed(KeyEvent e) {
if (isConnected) {
key = e.getKeyCode();
// only pay attention if a turn is not already pressed
if (key == 37 || key == 39) {
if (turnKeyPressed == 0) {
turnKeyPressed = key;
sendOut();
if(key == 37)
leftInd.setIcon(directions[1]);
else
rightInd.setIcon(directions[3]);
}
}
// change speed and then update gauge
if (key == 38 && speed != 4) {
speed++;
sendOut();
speedGauge.setIcon(speeds[speed]);
}
if (key == 40 && speed != 0) {
speed--;
sendOut();
speedGauge.setIcon(speeds[speed]);
}…
Figure 23: Speed and steering images
Also, as you can see in these two listener methods, speed and direction are set. We still have the functional side to take care of. As mentioned above, by implementing KeyListener and using the methods keyPressed and keyReleased, we can get input from the user. What's left is to define how the input should be interpreted and then send it off to the Server:
private void setState() {
if (speed == 0) {
if (turnKeyPressed == 0)
state = 0;
if (turnKeyPressed == 37)
state = 1;
if (turnKeyPressed == 39)
state = 2;
}
else if (speed == 1) {
if (turnKeyPressed == 0)
state = 3;
if (turnKeyPressed == 37)
state = 4;
if (turnKeyPressed == 39)
state = 5;
}
else if (speed == 2) {
if (turnKeyPressed == 0)
state = 6;
...
}
...
else
state = 1111;
}
Figure 24: Setting steering and speed states
The method setState determines what integer should be assigned to state according to the combination of speed and direction, i.e. if the speed is 1 and the left arrow is pressed, the state is set to 4. When the arrow key is released, state is set back to 3. So both the keyPressed and keyReleased methods call setState followed by sendOut, and sendOut simply writes state as bytes to the output stream. One thing to note about sending out our commands or states is that we need to remember how the Server expects them: the Server is expecting an integer followed by the terminator "\n", so every time we send out a new state, we need to follow it with the terminator.
private void sendOut() {
// set state, that is the binary signal corresponding to current
// speed-direction
setState();
// should never happen...
if (state == 1111) {
System.out.println("invalid state!");
closeSocket();
System.exit(0);
}
if (isConnected) {
try {
output.writeBytes(state + "\n");…
Figure 25: Sending commands to the server
In addition to using the arrow keys, we wanted to add support for a controller so that
the user could use either the controller or the keyboard to drive the car. We found a Java API
framework for game controllers called JInput. Following the Getting Started with JInput post on
the JInput forums, we were able to get the API installed and set up for use in our application. [13] We then used the sample code given in the forum to write a simple class that looks for controllers on a computer and displays the name of the controller, the components (e.g. buttons or sticks), and the component identifiers. The code used was exactly what was given on the forum, so we won't reproduce it here. We plugged in a Logitech Rumblepad 2 controller and ran the program. It picked it up, along with the keyboard and mouse connected to the computer. Part of the output for the Logitech controller is given in the figure below.
Logitech RumblePad 2 USB
Type: Stick
Component Count: 17
Component 0: Z Rotation
Identifier: rz
ComponentType: Absolute Analog
Component 1: Z Axis
Identifier: z
ComponentType: Absolute Analog
Component 2: Y Axis
Identifier: y
ComponentType: Absolute Analog
Component 3: X Axis
Identifier: x
ComponentType: Absolute Analog
Component 4: Hat Switch
Identifier: pov
ComponentType: Absolute Digital
Component 5: Button 0
Identifier: 0
ComponentType: Absolute Digital
…
Figure 26: Logitech Rumblepad information
The controller has 17 components, 4 of which are analog and 13 of which are digital.
The 4 analog components represent the x and y directions of the 2 analog joysticks on the
controller. We wanted to set up the controller so the user controls the car using the left analog
joystick. Therefore, we had to determine which joystick had which components. This was
accomplished using the program to poll the controllers given in the Getting Started with JInput
forum. It was modified to display the component name and identifier if the component was
analog. Running the program and moving the analog joystick that we wanted to use for
controlling the car showed that the two components were Component 2: Y Axis and
Component 3: X Axis.
The next step was to implement the controller in our program. First we created a new
class called Control that implements the Runnable interface. This was done so that it could be
added as a thread to the existing program. Within the run method, an array of the controllers connected to the computer is obtained, and then the program looks for a controller with the name "Logitech RumblePad 2 USB". If it is found, it is saved as the controller to use, otherwise
no controllers are used. Thus the program only works with Logitech RumblePad 2 USB
controllers.
Next the program enters a while loop that continuously polls the controller for input
until the controller becomes disconnected. The code is shown in the following figure.
while (true) {
pad.poll();
EventQueue queue = pad.getEventQueue();
Event event = new Event();
while (queue.getNextEvent(event)) {
Component comp = event.getComponent();
String id = comp.getIdentifier().toString();
float value = event.getValue();
if (comp.isAnalog()
&& (id.equals("x") || id.equals("y"))) {
//System.out.println(value);
controllerCommand(id, value);
}
}
try {
Thread.sleep(10);
} catch (InterruptedException e) {
e.printStackTrace();
}
if(!hasController)
return;
}
}
Figure 27: Getting controller events
After polling the controller and getting the event queue, the program loops through the
queue and checks if an event is caused by a component that is part of the joystick being used to
control the car. If it is, the command gets sent to the method controllerCommand, which
processes the command. After looping through the queue, the thread's execution is halted for 10 ms and then execution continues. At the very bottom is an if statement that returns from the run method, thus terminating the thread, if the controller is no longer being used.
Now we can take a look at the controllerCommand method. The code is shown in the
following figure.
public void controllerCommand(String direction, float value){
//moving backward
if(isConnected){
if(direction.equals("x")){
//full left
if(value < -0.75){
steer = 0;
}
else if (value < -0.5){
steer = 1;
}
else if (value < -0.25){
steer = 2;
}
else if (value < 0.25){
steer = 3;
}
else if (value < 0.5){
steer = 4;
}
else if (value < 0.75){
steer = 5;
}
else {
steer = 6;
}
if(steer != prevSteer){
sendOut();
prevSteer = steer;
}
}
//moving forward
else if(direction.equals("y")){
//stopped
if(value > -0.2){
speed = 0;
}
else if (value > -0.4){
speed = 1;
}
else if (value > -0.6){
speed = 2;
}
else if (value > -0.8){
speed = 3;
}
else {
speed = 4;
}
if(prevSpeed != speed){
prevSpeed = speed;
sendOut();
}
}
}
}
Figure 28: processing a command from the controller
First, the method checks whether the program is connected to the camera, which is represented by the isConnected variable. If it is connected, the command is processed and the steering or speed variable is updated accordingly. The variables prevSpeed and prevSteer keep track of the previous values of steer and speed. If nothing has changed, nothing is sent to the camera. If something has changed, then the sendOut method is called. Initially, the steer variable had only three possible values because there were only three possible steering states: left, straight, and right. This made the car very hard to control with the controller, however, so more states were added. This required a redesign of the hardware as well. The changes are described in the Camera to Microprocessor and Microprocessor to Car sections below. One thing to note is that the speed increases as the analog stick moves in the negative y direction. This is because the negative y direction is actually the up direction of the joystick.
The sendOut method sends the commands to the camera and updates the GUI. The
code is given below.
private void sendOut() {
if(steer == 3){
leftInd.setIcon(directions[0]);
rightInd.setIcon(directions[2]);
}
else if(steer < 3){
leftInd.setIcon(directions[1]);
}
else if(steer > 3){
rightInd.setIcon(directions[3]);
}
speedGauge.setIcon(speeds[speed]);
sc.sendOut(steer, speed);
}
Figure 29: Sending commands to the camera and updating the GUI
The method first updates the icons on the GUI to reflect the changed state and then
calls the sendOut method of sc, where sc is an object of the SendCommand class. The
SendCommand class sends the correct value to the camera. The sendOut and setState methods
are given below.
public void sendOut(int st, int sp) {
speed = sp;
steer = st;
setState();
try {
System.out.println(state);
output.writeBytes(state + "\n");
}
catch (IOException e) {
System.out.println("Failed to send command: " + e);
}
}
private void setState() {
state = (speed << 3) + steer;
}
Figure 30: Sending the state to the camera
First, the variable state is calculated from the values of speed and steer: speed is bit-shifted so that it becomes the highest three bits of state, and steer becomes the lowest three bits. Next, state is written to the DataOutputStream output, which sends the integer to the camera.
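As a quick illustration of this packing, the following standalone C sketch (not part of the actual Java client or the camera's server code) packs a speed and steering value into the six-bit state and splits them apart again the same way the server does:

#include <stdio.h>

int main(void)
{
    int speed = 4, steer = 2;            /* example values used in the text */
    int state = (speed << 3) + steer;    /* pack: speed in the high 3 bits, steer in the low 3 bits */

    int rx_speed = state >> 3;                 /* unpack the high bits */
    int rx_steer = state - (rx_speed << 3);    /* unpack the low bits */

    /* prints: state=34 speed=4 steer=2 */
    printf("state=%d speed=%d steer=%d\n", state, rx_speed, rx_steer);
    return 0;
}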
Initially, the commands were sent to the camera from within the Cockpit class, but this
had to be abstracted to the SendCommand class so that a controller could be used. Since the
controller used a separate thread for execution, it had to be able to send commands. This
would have required a Cockpit object, which would have created a whole new instance of the
program. Thus creating the SendCommand class eliminated this problem.
The Cockpit class contains a Control object and a SendCommand object. It has a button
for enabling/disabling the controller. The code for the button is given below.
else if (e.getActionCommand().equals("toggleController")){
    if(cont != null && cont.isAlive()){ //controller connected
        con.removeController();
        while(cont.isAlive()){} // wait for thread to terminate
        if(cont.isAlive()){
            System.out.println("Error removing controller");
        }
        else
            useConButton.setText("Use Controller");
    }
    else{ //controller not connected
        cont = new Thread(con);
        cont.start();
        useConButton.setText("Remove Controller");
    }
}
Figure 31: Controller button
The code removes the controller if there is one or enables one if there isn’t a controller.
The variable con is a Control that gets initialized on startup and is used within the controller
threads.
The Cockpit class implements the ControllerListener interface so that it can detect
when controllers are plugged in or unplugged and act accordingly. If a controller is plugged in
then a new instance of the Control class is created and a new thread started. If a controller is
disconnected, then the thread is terminated. The instance of SendCommand, sc, is initialized
when Cockpit connects with the camera. It gets passed to the Control object, con, which uses it
to send commands to the camera. In this way, both the keyboard and the controller can be
used at the same time to control the car without sending conflicting commands.
Camera to Microprocessor
This section covers the interaction between the camera and microprocessor. The
camera needs to send signals to the microprocessor using its output. The output is activated in
different sequences based on the commands received from the user’s computer. The
microprocessor then interprets the sequence so it can create the correct PWM signal. This
section will be broken into two parts: Camera and Microprocessor. The Camera section will
focus on how the camera is able to output the correct signal sequence based on the command
received by the user. The Microprocessor section describes how the signals are received and
interpreted before changing the PWM signal that’s sent to the RC car’s control boxes.
Camera
The camera uses a program called iod to activate the output. This was described in the Axis Scripting Guide. It works in the following way: iod is called with the argument "-script", followed by the output/input identifier, followed by a colon and a forward slash to activate or a backslash to deactivate the output or input. In our case, we wanted to activate/deactivate the output of the camera, which has the identifier "1". Thus, the command would look like "$ ./iod -script 1:/" to activate the output or "$ ./iod -script 1:\" to deactivate the output. The program is designed to be run from a script on the camera, as can be seen from the "-script" argument. We needed to activate the output from within our own program, however, so we ended up using the system call execl() to run the iod program. We implemented the following code based on the signals specified in Tables 1 and 2 above. The program was just a test to measure how fast we could activate the output.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <time.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>
/*
 * testExecl.c
 *
 * Created on: Apr 1, 2009
 * Author: bigdog
 */
int main(int argc, char *argv[]){
    int i;
    pid_t pid;
    struct timeval start, stop;
    for( i=0 ; i<60 ; i++ ){
        gettimeofday(&start, 0);
        pid = vfork();
        if (pid == 0) /* fork succeeded */
        {
            if(i%2==0) //turn on output
                execl("/bin/iod", "-script", "iod", "1:/", NULL);
            else //turn off output
                execl("/bin/iod", "-script", "iod", "1:\\", NULL);
        }
        else if (pid < 0) /* fork returns -1 on failure */
        {
            perror("fork"); /* display error message */
        }
        else{
            wait(NULL);
            gettimeofday(&stop, 0);
            long difsec = stop.tv_sec - start.tv_sec;
            long dif = stop.tv_usec - start.tv_usec;
            printf("sec:%ld usec:%ld \n", difsec, dif);
        }
    }
    return 1;
}
Figure 32: Activating the output using iod
As can be seen, we create a for loop that activates and then deactivates the output 30 times each (looping a total of 60 times). First, we grab the current time. Then, a process is forked using the system call vfork(), which makes a copy of the current process, before calling execl(). This is because execl() takes over the calling process; by calling it from a forked child, only the child process is replaced and the loop in the parent continues. The if statements are used to perform different actions depending on which process is running. If the process has a pid of 0, we know that it is the forked child and we either activate or deactivate the output. If pid is less than 0, an error has occurred and an error message is displayed. If neither of those conditions is true, we know that it must be the main process. In that case, we wait for the forked process to return, get the current time again, and calculate and display the difference between the start and end times.
We compiled and ran the program on the camera to see how fast we could activate the output. The results were very surprising. The camera was not able to output the signals in increments of 10 ms consistently. Most of the signals generated were significantly longer than 10 ms, varying between 10 and 50 ms. This was before we even started streaming video. With video streaming from the camera to a computer via the camera's preview webpage, the signal generation took even longer and with more variation: we were getting signals varying in length between 50 ms and 200 ms. This was much slower than anticipated and wouldn't be satisfactory for controlling the car. The large variation also made the signals impossible to time reliably with a microprocessor. We needed a way to speed up the signals and make them consistent. This was achieved by examining the contents of the program iod.
We tried multiple times to contact the camera manufacturer, Axis, to obtain information
about how to trigger the camera’s output without having to call the program iod. They were
very unresponsive, sending us to the FAQ section of the Axis website, which doesn’t contain
any information regarding the output of the camera. We were wasting a lot of time trying to
contact them with no results, so we eventually gave up and decided to try to figure out how the
output works on our own. iod is a compiled program and we didn’t have access to the source
code for it, so we knew that we needed to figure it out using reverse engineering.
First, we searched the Internet using Google for the program iod to see if we could obtain any information about how it worked. That didn't provide any answers, so we decided to see if we could find out how electronic device outputs are activated in Linux. A version of Linux is what runs on the Axis 207W camera we were using, so we figured that if we found out how to activate an output for something else, we could apply the concept here. An online search for how to activate an output provided some useful information. The system call ioctl() seemed to come up a lot for triggering hardware components, such as outputs. Further investigation revealed that ioctl() works by taking three arguments: a file descriptor (an integer used to reference a file) that references a "file" representing the hardware component to be controlled, an integer representing the command to perform, and a bit sequence representing what will be written to the file. The way it was used varied quite a bit between different hardware components, so it was unclear how it could be used for the Axis 207W camera. Further searching didn't reveal much information either. Eventually, we decided to go to the source and examine the program iod to determine if and how it uses ioctl().
Initially, we tried writing and compiling a program that used ioctl() and then do a hex
dump of the compiled program to find the hexadecimal sequence that represents ioctl(). Next,
we performed a hex dump on the compiled iod program and searched for the ioctl() hex
sequence obtained from the hex dump of our sample program that uses ioctl(). If we could find
the hex sequence, then maybe we could determine what the hex representations of the arguments to ioctl() were. The hex representations of the arguments could be passed directly to the ioctl() system call without worrying about what they actually represented. We were able to find part of the hex sequence, but we weren't able to determine whether it represented the ioctl() function because it wasn't a complete match. One of the major problems was that the arguments to the
ioctl() could be anything because they were operating system dependent. Thus, the example
arguments we used in our sample program could be completely different from the ones used in
iod. The potential effect that different arguments would have on the compiled ioctl() call was
unknown as well. Also, iod could have been compiled using a compiler different from the one
we were using, thus potentially altering the hex representation of ioctl() from the one we
obtained from our own compiled program. Thus, trying to find ioctl() and determine what the
arguments were from the hex dump of iod became extremely difficult. A different approach
was needed.
We decided to take a look at the compiled iod program. Using the Open Office word
processor, we examined the contents of the iod program. Most of the content shown in Open Office was compiled machine code, so it looked like garbage, but we were able to glean some information about how iod works from the text in the program. The text from strings within the program doesn't change during compilation, so it showed up directly as written in Open Office. Figure
33 below shows some of the output obtained from iod. A lot of information about how
outputs/inputs, iod usage, and system variables showed up in the text output. As can be seen,
several interesting things, including variables named IO_SETGET_INPUT and
IO_SETGET_OUTPUT and references to I/O devices are present. This seemed like exactly what
we needed to use for controlling the output.
Figure 33: Inside iod
What next? Back to Google. A search for “IO_SETGET_OUTPUT axis 207w” resulted in
several example programs that used it as an argument to the ioctl() function. One in particular
was very useful. It was a file from the Linux Cross Reference section called etraxgpio.h. [14] An
excerpt from the file is shown below. The file contains definitions of variables used for the Etrax
processor. Etrax processors are used in the Axis cameras that support the Cris cross compiler.
This is different from what is used on the Axis 207W, but since it was the same company it was
likely that the variables would be at least similar, if not the same.
 *
 * For ETRAX FS (ARCH_V32):
 * /dev/gpioa  minor 0,  8 bit GPIO, each bit can change direction
 * /dev/gpiob  minor 1, 18 bit GPIO, each bit can change direction
 * /dev/gpioc  minor 2, 18 bit GPIO, each bit can change direction
 * /dev/gpiod  minor 3, 18 bit GPIO, each bit can change direction
 * /dev/gpioe  minor 4, 18 bit GPIO, each bit can change direction
 * /dev/leds   minor 5, Access to leds depending on kernelconfig
 *
 */

/* supported ioctl _IOC_NR's */
#define IO_SETBITS        0x2   /* set the bits marked by 1 in the argument */
#define IO_CLRBITS        0x3   /* clear the bits marked by 1 in the argument */
#define IO_SETGET_OUTPUT  0x13  /* bits set in *arg is set to output,
                                 * *arg updated with current output pins. */
Figure 34: Example code showing definitions of IO variables
One of the main things that was interesting about this file is the mention of the gpio files located in the /dev directory. An examination of the /dev directory on the Axis 207W revealed several gpio files. These were the files that the file descriptors passed to ioctl() were referencing. The only problem was determining which one was used to control the output on the camera. Other interesting things were the variables IO_SETBITS and IO_CLRBITS, which could potentially be what ioctl() uses to activate and deactivate the camera output. Also, IO_SETGET_OUTPUT shows up, and it looks like it is used to determine which pins act as outputs. Now, how does all of this fit together?
Searching online for "gpio ioctl" led to the discovery of an example program that uses ioctl and gpio to control hardware output and input. [15] The following lines of code are taken from the program and seem to be a good example of how the camera output can be controlled. Once again, this is for an Etrax camera, but the commands for our camera would probably be very similar since both cameras are made by Axis.
if ((fd = open("/dev/gpioa", O_RDWR)) < 0)
{
    perror("open");
    exit(1);
}
else
{
    changeBits(fd, bit, setBit);
}

void changeBits(int fd, int bit, int setBit)
{
    int bitmap = 0;

    bitmap = 1 << bit;

    if (setBit)
    {
        ioctl(fd, _IO(ETRAXGPIO_IOCTYPE, IO_SETBITS), bitmap);
    }
    else
    {
        ioctl(fd, _IO(ETRAXGPIO_IOCTYPE, IO_CLRBITS), bitmap);
    }
}
Figure 35: Example code for using ioctl()
First, the file "/dev/gpioa" is opened and then the ioctl() function is called. It gets passed the file descriptor for "/dev/gpioa" and then a system macro called _IO() that takes two arguments, the last of which looks familiar: the IO_SETBITS and IO_CLRBITS variables were defined in the header file etraxgpio.h, as shown in Figure 34 above. Finally, there is a bit sequence (bitmap) that has a one at the location of the bit to be cleared or set; only one bit location in the bitmap contains a one. This looks like exactly what we need to control the output, with some minor modifications to fit our camera. We aren't using an Etrax camera, so the first argument to _IO(), ETRAXGPIO_IOCTYPE, will be different. At another location in the program that is partially shown above, GPIO_IOCTYPE was used instead of ETRAXGPIO_IOCTYPE. This may be what we need. All that is left is to determine which gpio file controls the output and what the arguments to ioctl() should be.
A test program was written to determine the gpio file and bitmap used to control the output on the Axis 207W. It was assumed that _IO(GPIO_IOCTYPE, IO_CLRBITS) was the correct second argument of ioctl() to deactivate the output and _IO(GPIO_IOCTYPE, IO_SETBITS) was the correct second argument to activate the output. One problem, however, was that the variables GPIO_IOCTYPE, IO_SETGET_OUTPUT, IO_SETBITS, and IO_CLRBITS were undefined. We had no idea what header file we needed to include or where it was located. We found the Linux program 'beagle', which is used to index files and then search within them for a keyword. We knew that the header file had to be located in the cross-compiler for the Axis 207W. Using beagle, the entire cross-compiler folder was indexed and searched for "IO_SETBITS". This took several hours, but eventually the file "gpiodriver.h" was found (see the last "#include" statement in Figure 36). It contained the definitions of the four variables shown above. Now we were ready to test.
The test program opened all of the gpio files it could and looped through them with
different bit sequences (the third argument to ioctl()) in order to determine the correct
sequence for activating the output of the camera. A few gpio files were scrapped right away
because they couldn’t be opened. Others caused a hardware error when trying to use ioctl()
with them. The code is shown below in Figure 36.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <asm/arch/gpiodriver.h>
/*
 * testExecl.c
 *
 * Created on: Apr 1, 2009
 * Author: bigdog
 */
int main(int argc, char *argv[]) {
    int l = 0;
    int a = 1;
    int intb = 0, intc = 0, intg = 0; /* current output-pin masks returned by IO_SETGET_OUTPUT */
    printf("before a");
    int fda = open("/dev/gpioa", O_RDWR);
    printf("before b");
    int fdb = open("/dev/gpiob", O_RDWR);
    printf("before c");
    int fdc = open("/dev/gpioc", O_RDWR);
    printf("before g");
    int fdg = open("/dev/gpiog", O_RDWR);
    if (fda < 0)
        printf("Hello world a\n");
    else {
        printf("inside a\n");
    }
    if (fdb < 0)
        printf("Hello world b\n");
    else {
        printf("inside b\n");
        ioctl(fdb, _IO(GPIO_IOCTYPE, IO_SETGET_OUTPUT), &intb);
    }
    if (fdc < 0)
        printf("Hello world c");
    else {
        printf("inside c");
        ioctl(fdc, _IO(GPIO_IOCTYPE, IO_SETGET_OUTPUT), &intc);
    }
    if (fdg < 0)
        printf("Hello world g\n");
    else {
        printf("inside g\n");
        ioctl(fdg, _IO(GPIO_IOCTYPE, IO_SETGET_OUTPUT), &intg);
    }
    while (l < 1000) {
        printf("gpioa %d\n", l);
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        sleep(5);
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        sleep(5);
        printf("gpiob %d\n", l);
        ioctl(fdb, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        sleep(5);
        ioctl(fdb, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        sleep(5);
        printf("gpioc %d\n", l);
        ioctl(fdc, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        sleep(5);
        ioctl(fdc, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        sleep(5);
        printf("gpiog %d\n", l);
        ioctl(fdg, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        sleep(5);
        ioctl(fdg, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        sleep(5);
        a = a << 1;
        l++; /* move on to the next loop iteration */
    }
    close(fda);
    close(fdb);
    close(fdc);
    close(fdg);
    return 0;
}
Figure 36: Test program to figure out how to use ioctl() with the Axis 207W camera
The program shown above is what we eventually ran without causing any errors. First, gpioa, gpiob, gpioc, and gpiog were opened and given the file descriptors fda, fdb, fdc, and fdg respectively. If a file failed to open, the message "Hello world *" was printed, otherwise "inside *" was printed, where * is a, b, c, or g for the corresponding gpio* file. Then the program enters a loop that tries different bitmaps (represented by the variable 'a') within ioctl() and prints which gpio file is being tested and which loop iteration is currently being processed. We started with a=1 and bit shifted 'a' left by 1 each iteration, so the bitmap always has the form of a single 1 followed by some number of zeros; the single '1' in the bitmap corresponds to what was used in the example program found online and shown in Figure 35 above. A five second pause was put between calls to ioctl() so that we could see exactly when the output got activated. We hooked an oscilloscope up to the camera output so that we could see when the output became activated. As soon as the camera output became active, we could look at the output printed on the command line to see which gpio file and loop iteration the program was on. Using this strategy and with a little patience, the camera output eventually became activated! Examining the command line output, we determined that the file used for controlling the camera output was gpioa and the bitmap was 1000 (binary), i.e. bit 3. We could now control the camera output from within our own program.
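Putting those findings together, a minimal sketch of toggling the 207W's output from our own code looks roughly like the following (error handling is omitted, and it only builds with the Axis 207W cross-compiler, since gpiodriver.h comes from it):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <asm/arch/gpiodriver.h>   /* defines GPIO_IOCTYPE, IO_SETBITS, IO_CLRBITS */

int main(void)
{
    int bitmap = 1 << 3;                  /* bitmap 1000 (binary) found by the test program */
    int fd = open("/dev/gpioa", O_RDWR);  /* the gpio file that controls the output */
    if (fd < 0)
        return 1;

    ioctl(fd, _IO(GPIO_IOCTYPE, IO_SETBITS), bitmap);   /* activate the output */
    sleep(1);
    ioctl(fd, _IO(GPIO_IOCTYPE, IO_CLRBITS), bitmap);   /* deactivate the output */

    close(fd);
    return 0;
}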
We still needed to determine if ioctl() would be fast enough to control the output the
way we needed. To test this, we implemented another program that activated and deactivated
the camera output as fast as possible using ioctl(). The code is shown in Figure 37 below.
/*
 * outputSpeedTest.c
 *
 * Created on: May 9, 2009
 * Author: bigdog
 */
#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
//these are for the alarm output
#include <fcntl.h>
#include <sys/ioctl.h>
#include <asm/arch/gpiodriver.h>
#include <time.h>
#include <sys/time.h>

int main(int argc, char *argv[]) {
    int i, fda;
    struct timeval start, stop;
    //printf("COMMAND RECEIVED: %d\n", state);
    int a = 1 << 3;
    int j;
    int k;
    //six signals sent
    //low order bits sent first
    //e.g. if 010110=22 is sent, then the output is 011010
    fda = open("/dev/gpioa", O_RDWR);
    for (i = 0; i < 1000000; i++) {
        gettimeofday(&start, 0);
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        gettimeofday(&stop, 0);
        long difsec = stop.tv_sec - start.tv_sec;
        long dif = stop.tv_usec - start.tv_usec;
        printf("sec:%ld usec:%ld \n", difsec, dif);
    }
    //set the output low at the end
    ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
    return 1;
}
Figure 37: Test program to measure the speed at which ioctl can trigger the camera's output
The program activates and then deactivates the camera's output a million times, measuring and then displaying the time taken by each iteration. We also used an oscilloscope to measure the period of the signal and look at the waveform created by the output. Although there was some variation in timing, we determined the output could always be activated and deactivated in less than one millisecond. Next, we tested it while streaming video to a PC. The results were identical: the streaming of video had no effect on the speed at which the output could be activated using ioctl(). This was much faster than we had originally planned for, thus allowing us to make some speed improvements.
The ioctl() commands for controlling the output were added to the server code next. The server activates the output in a function called pulse(). Figure 38 shows pulse().
void pulse(int state){
    int i;
    printf("COMMAND RECEIVED: %d\n", state);
    int high, low;
    for(i = 0; i < state; i++){
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        usleep(1);
    }
    //after the right amount of pulses have been sent
    //sleep for 20 ms for microprocessor to know end of signal
    //printf("%s","END_");
    high = ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
    usleep(20000);
    low = ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
}
Figure 38: Initial code to trigger the camera's output within the server
The function takes in an integer representing the number of pulses to be sent to the microprocessor. Next, it executes a for loop that sends the required number of signals to the microprocessor. A call to usleep() had to be added after deactivating the output because the microprocessor wouldn't register the signal as going low if the output was reactivated too quickly after being deactivated. Thus every time the signal went low, we paused before sending the next signal. We only slept for 1 microsecond, but the usleep() system call takes around 10 ms on average to return because the operating system has to process the call. If other programs are running at the same time, the delay can be longer. We measured the time in a test program and it stayed consistently at about 10 ms. The final signal sent was at least 20 ms in length. It marked the end of the signal sequence and was easily distinguishable from the other signals because they never reach 20 ms in length.
The total length of the signal sequence varied between 20 ms (just the end signal, representing straight and no speed) and about 160 ms (representing full speed and turning right). See Table 3 below for a reference of number of signals vs. RC car speed and direction. In the steering column, -1 represents turning left, 0 going straight, and 1 turning right. Speed increases from 0 (stopped) to 4 (full speed).
Table 3: Number of signals sent and the corresponding speed and direction of the RC car

# of signals    Speed    Steering
0               0         0
1               0        -1
2               0         1
3               1         0
4               1        -1
5               1         1
6               2         0
7               2        -1
8               2         1
9               3         0
10              3        -1
11              3         1
12              4         0
13              4        -1
14              4         1
The signal sequence takes over a tenth of a second at speeds 3 and 4. This is noticeable,
especially when trying to steer the RC car. We thought about reversing the signal sequences so
that the faster speeds were represented by shorter signal sequences. Thus the car would be
much more responsive at higher speeds making it easier to steer. We didn’t end up
implementing that idea, however. Instead, we ended up changing it so that a sequence of six
bits determined the speed and direction.
Initially, we wanted to loop through the bits in the sequence number and activate the camera's output if the current bit was a one or deactivate the output if it was a zero. Each output level would be held for a 10 ms pause (we'll call it a pulse). The microprocessor would wait 10 ms and then read whether the output on the camera was active or not. If it was active, it would count the bit as a one; if not, it would take it as a zero. In this way it would reconstruct the number representing the command sent to the server from the user. For example, if the user sent 34 (100010 in binary), the camera's output would be off for one pulse, on for one, off for three more, and on for the final one (the pulses are sent starting with the lowest order bits). The microprocessor would determine that 34 was sent from the pulses it receives from the camera's output. The problem was that we weren't able to get a consistent 10 ms pause. We tried using usleep(), but it wasn't accurate enough. We thought about making the pause longer (e.g. 20 ms), but increasing it enough that the variation in usleep() didn't matter would slow the overall signal sequence time down considerably.
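For reference, the bit-per-pulse scheme we had planned would have looked roughly like the sketch below. This is only an illustration of the abandoned design, not code that was ever used; fd and bitmap stand for the gpioa file descriptor and output bitmap from the earlier figures, and the 10 ms usleep() is exactly the pause that turned out to be too inconsistent.

#include <unistd.h>
#include <sys/ioctl.h>
#include <asm/arch/gpiodriver.h>   /* GPIO_IOCTYPE, IO_SETBITS, IO_CLRBITS */

/* Sketch of the abandoned design: one output level per bit, lowest-order
 * bit first, holding each level for (nominally) 10 ms. */
void pulse_bits(int fd, int bitmap, int state)
{
    int i;
    for (i = 0; i < 6; i++) {                                   /* six bits per command */
        if ((state >> i) & 1)
            ioctl(fd, _IO(GPIO_IOCTYPE, IO_SETBITS), bitmap);   /* bit is 1: output on */
        else
            ioctl(fd, _IO(GPIO_IOCTYPE, IO_CLRBITS), bitmap);   /* bit is 0: output off */
        usleep(10000);                                          /* 10 ms per bit: the pause usleep() could not hold */
    }
    ioctl(fd, _IO(GPIO_IOCTYPE, IO_CLRBITS), bitmap);           /* leave the output low afterwards */
}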
Thus, we decided to split the six bit sequence so that the 3 highest bits determine the speed and the 3 lowest bits determine the direction. The sequence is sent as an integer from the client to the server, which parses the integer to separate the three highest bits from the three lowest bits. The highest bits are stored as one integer and the lowest bits as another. For example, if the number 34 (100010 in binary) is sent to the server, it is parsed as 4 (100 in binary) for speed and 2 (010 in binary) for steering. The code for our program is shown in Figure 39 below.
void pulse(int state){
    int i, j;
    printf("COMMAND RECEIVED: %d\n", state);
    int speed, steer;
    speed = state >> 3;              //high bits
    steer = state - (speed << 3);    //low order bits
    for(i = 0; i < steer; i++){      //steering bits
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        for(j = 0; j < 5000; j++){}  //pause
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        for(j = 0; j < 10000; j++){} //pause
    }
    ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
    usleep(5000);  //sleep for at least 5 ms to mark end of steering sequence
    ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
    for(j = 0; j < 10000; j++){}     //pause
    for(i = 0; i < speed; i++){      //speed bits
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        for(j = 0; j < 5000; j++){}  //pause
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        for(j = 0; j < 10000; j++){} //pause
    }
    ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
    usleep(20000); //sleep for at least 20 ms to mark end of speed sequence
    ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
    for(j = 0; j < 10000; j++){}     //pause
}
Figure 39: Six bit signal sequence code
First, the values for speed and steering are calculated, where speed is the 3 highest bits of the variable "state" and steer is the 3 lowest bits of "state". Next, the output is activated "steer" times. This represents the direction the car will be traveling. For loops were used as pauses instead of usleep() because the length of a for loop pause can be controlled more precisely. In our case, we only need to pause long enough for the microprocessor to register that the output has been activated or deactivated. If we use usleep(), we will be waiting about 10 ms minimum, as discussed earlier. With a for loop, that time can be cut down considerably; for example, in the program above, the for loop pause of 10000 iterations takes less than 2 ms. After sending the appropriate number of signals to represent steering, a pause of at least 5 ms is inserted to mark the end of that signal sequence. Since the for loop pauses never reach 5 ms, the microprocessor can distinguish the terminating signal from the rest. Next, the number of signals representing the speed is sent, followed by a terminating signal of at least 20 ms. The usleep(5000) pause shouldn't take much longer than 15 ms (5 ms for the pause and 10 ms for the time taken to process the usleep() call), so the microprocessor can distinguish between the end of the first sequence and the end of the second sequence based on the length of the terminating signal. This provided a considerable drop in signal sequence length compared to the old method of pausing with usleep() after each signal and using a single 20 ms terminating signal. It also allows for additional speed and steering states without incurring much of an increase in the total signal sequence length. See the table below for the updated signal combinations and lengths.
Table 4: Steering and Speed signal sequences sent and the corresponding time taken to send the signals

# of Speed Signals    # of Steering Signals    Approx. Total Signal Sequence Length (ms)
0                     0                        45
0                     1                        47
0                     2                        49
0                     3                        51
0                     4                        53
0                     5                        55
0                     6                        57
1                     0                        47
1                     1                        49
1                     2                        51
1                     3                        53
1                     4                        55
1                     5                        57
1                     6                        59
2                     0                        49
2                     1                        51
2                     2                        53
2                     3                        55
2                     4                        57
2                     5                        59
2                     6                        61
3                     0                        51
3                     1                        53
3                     2                        55
3                     3                        57
3                     4                        59
3                     5                        61
3                     6                        63
4                     0                        53
4                     1                        55
4                     2                        57
4                     3                        59
4                     4                        61
4                     5                        63
4                     6                        65
As can be seen, there are many more states than in Table 3. The longest duration is only about 65 ms, however, which is nearly 100 ms shorter than the longest duration of about 160 ms from Table 3. More speed signals represent faster speeds: 0 is no speed and 4 is full speed. For steering, 0 represents full left, 3 represents straight, and 6 represents full right.
This concludes the Camera side of the Camera to Microprocessor connection. Now we
turn to the Microprocessor side.
Microprocessor
Most of the microprocessor code was originally written for and tested on the Dragon12
development board, but needed to be ported to the Dragonfly12 microprocessor, which is what
is used on the RC car while it is driving. That was much more difficult than expected. The
procedure in its entirety can be found in Appendix B. The code in this section is what was put
on the Dragonfly12; it has only minor modifications from what was developed on the
Dragon12 development board.
The microprocessor first had to be set up to be able to receive the signals from the
camera. The following code performs this step.
void init_Timer(void){
//uses PT0 for input
TIOS = 0x00; //input capture on all ports (including PT0)
TCTL4 = 0x03; //input capture on both rising and falling edges (PT0)
TCTL3 = 0x00; //clear input control for control logic on other ports
TIE = 0x01; //enable interrupt for PT0
TSCR2 = 0x07; //set prescaler value to 128 (freq = 375 KHz)
TSCR1 = 0x90; //enables timer/fast clear
}
Figure 40: Timer register initialization
It is well commented so there isn’t much need to go into detail about what it does. The
main things to notice are that port PT0 is what is used to capture the input signals from the
camera’s output. It captures on both rising and falling edges. Thus, since the interrupt is
enabled for PT0, an interrupt will be triggered both when the camera’s output is activated and
when the camera’s output is deactivated. The last thing to notice is that the prescaler value is
set to 128. This means that the timer frequency is equal to the input bus frequency divided by
128. This is one of the cases where the Dragonfly12 differs slightly from the Dragon12
development board. The Dragon12 board has a bus frequency of 24 MHz and the Dragonfly12
has a bus frequency of 48MHz. We want the timer clock frequency to be 375 KHz so we divide
48MHz by 128 to get 375KHz (divide by 64 when using the Dragon12 development board). The
reason for 375 KHz is that the timer runs a 16 bit counter. Every cycle it increments by one. We
want this to increment as slowly as possible because it is used to time the signals sent by the
60
camera’s output. If it increments too quickly, it may overflow more than one time when
measuring a signal, which would make it impossible to determine the signals length. At 375
KHz the 16 bit counter overflows about 375/65 ≈ 6 times a second. The microprocessor has a bit
that gets set on a timer overflow, so we can accurately measure about twice as much time or
up to 1/3 of a second. This is much more time than we need for any of our signals, but the extra
margin for error doesn’t hurt.
The following code is what was running on the microprocessor to measure the signals
sent to it from the camera’s output. This code is what was running on the microprocessor based
on the initial implementation of the output signals given in Figure 38.
#include<mc9s12c32.h>
unsigned int count = 0;
unsigned int length = 0;
unsigned int begin = 0;
byte total = 0;
interrupt 8 void TOC0_Int(void){
//rising edge
if(PTT_PTT0) {
begin = TC0;
} else { //falling edge
//length = time between rising edge and falling edge in 1/1000 secs
length = (TC0 + (0xFFFF*(TFLG2_TOF)) - begin)/375;
TFLG2_TOF &= 1;//clear bit
//end of signal (signal length greater than or equal to 18 ms)
if(length > 18) {
//not turning
if(count % 3 == 0) {
PWMDTY2 = 15;
}
else if (count % 3 == 1){ //turn left
PWMDTY2 = 11;
} else{
PWMDTY2 = 19; //turn right
}
PWMDTY1 = count/3 + 153;
count = 0;
}
else{ //not the end of the signal
count++;
}
begin = 0;
}
}
Figure 41: Microprocessor code to interpret the signals from the camera's output (initial design)
This method gets called every time there is an interrupt on channel PT0. This is
determined by the “interrupt 8” at the beginning of the method header. First we determine
whether it was a rising or falling edge. If it is a rising edge than PT0 will be high and the variable
PTT_PTT0 will be equal to 1 (representing the status of PT0). In this case we just set the
variable “begin” equal to the current value in the counter TC0. Since PT0 is set up to capture on
both rising and falling edges, TC0 will capture the current value in the timer counter and store it
on a defined transition of PT0 (both rising and falling). If it is a falling edge, the length of the
signal is calculated in milliseconds. This is done by subtracting the value in “begin” from the
current value in TC0 which was captured when PT0 went low. TFLG2_TOF gets set if the timer
counter overflows. If the timer overflows we need to add on the maximum timer value
(0xFFFF) to TC0. Finally we divide by 375, since the counter is incremented 375,000 (375 KHz)
times a second this puts length in milliseconds (375000/375=1000=1KHz=1ms period). Next, we
check to see if the length is greater than 18 ms. If it is, we know that we have a terminating
signal and we change the PWM output signals accordingly (see Microprocessor to Car).
Otherwise, we know that it is just part of the signal sequence and count (the number of signals
received) is incremented.
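As a concrete (made-up) example of this arithmetic: suppose the rising edge is captured with begin = 60,000, the counter overflows once, and the falling edge is captured with TC0 = 2,250. Then length = (2,250 + 65,535 - 60,000)/375 = 7,785/375 = 20 using integer division, i.e. a roughly 20 ms pulse, which would be treated as a terminating signal.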
The following code was written to handle the camera’s output as produced by the code
shown in Figure 39. It is only slightly different from the code shown in Figure 41, and the
initialization code shown in Figure 40 remained unchanged.
byte count = 0;
unsigned int length = 0;
unsigned int begin = 0;
int total = 0;

interrupt 8 void TOC0_Int(void){
    //rising edge
    if(PTT_PTT0) {
        begin = TC0;
    } else { //falling edge
        //length = time between rising edge and falling edge in 1/1000 secs
        length = (TC0 + (0xFFFF*(TFLG2_TOF)) - begin)/375;
        TFLG2_TOF &= 1; //clear bit
        if(length > 19) { //end of speed sequence (terminating signal of 20 ms or more)
            PWMDTY1 = count + 153;
            count = 0;
        } else if(length > 4) { //end of steering sequence (terminating signal of 5-19 ms)
            if(count == 0){
                PWMDTY2 = 11; //full left
            } else if(count == 6){
                PWMDTY2 = 19; //full right
            } else {
                PWMDTY2 = count + 12; //in between
            }
            count = 0;
        } else { //not the end of the signal
            count++;
        }
        begin = 0;
    }
}
Figure 42: Microprocessor code to interpret the signals from the camera's output (final design)
The program is pretty much the same up to the first if statement involving the length. If
the length is greater than 19 ms, we know that it is a terminating signal and the sequence of
signals just received represents the speed. The PWM signal for speed is updated accordingly.
Otherwise, if length is greater than 4ms but less than 19 ms, we know that the terminating
signal for the steering sequence has just been received. The PWM signal for steering is updated
accordingly. If neither of those conditions is true, then a signal in a sequence was received
and count is incremented. Every time a terminating signal is received, count is reset for the
next signal sequence.
No major problems were encountered during the design of the microprocessor code. The
hardest part was making sure everything was set appropriately at all times: bits were cleared
correctly, timing was correct, the correct registers were being used, etc. We did encounter one
problem that we could not fix and instead designed a workaround for. It is as follows.
Initially, we used a variable in the microprocessor to try to keep track of whether we
were on the first sequence or the second. That way we could terminate both the steering and
speed signal sequences using a signal of the same duration (e.g. 5ms), thus potentially speeding
things up. The variable would be 1 if the camera was sending signals representing speed and 0
if the camera was sending the signal sequence for steering. During testing, however, the
variable would become reversed for some reason, thus it would be 1 during the steering
sequence and 0 during the speed sequence. The microprocessor would send the wrong signals
to the car and so nothing would work correctly. We ended up just adding in the terminating
signals of different lengths to avoid the problem, as outlined above. The microprocessor could
determine whether it received the steering sequence or the speed sequence based on the
length of the terminating signal.
Microprocessor to Car
This section focuses on the generation of the PWM signal that gets sent to the control
boxes on the RC car. It covers how the PWM register of the microprocessor is set up and how
the microprocessor determines the signal to send to the control boxes on the RC car based on
the input received from the camera.
The microprocessor PWM register is initialized as shown in the following code.
//This uses pins PT1 for speed and PT2 for steering
void init_PWM(void){
    //set up channels 0 and 1 for speed and channel 2 for steering
    PWMCTL = 0x10;  //concatenate channels 0 and 1 into a 16 bit PWM channel
    MODRR = 0x06;   //set PT1 and PT2 as outputs of PP1 and PP2 respectively
    //channel 1 is low order bits, channel 0 is high order bits
    //all options for the 16 bit PWM channel are determined by channel 1 options
    PWME = 0x06;    //enable PWM channels 1 and 2
    PWMPOL = 0x06;  //set polarity to start high/end low (channels 1 and 2)
    PWMCLK = 0x06;  //clock SA is the source for channel 1 and SB for channel 2
    //set clock B prescaler to 32 (B = E/32 = 1,500,000 Hz), where E = 48,000,000 Hz
    //and clock A prescaler to 16 (A = E/16 = 3,000,000 Hz)
    PWMPRCLK = 0x54;
    PWMCAE = 0x00;  //left align outputs
    PWMSCLA = 0x0F; //SA = A/(15*2) = 100,000 Hz
    PWMSCLB = 0x4B; //SB = B/(75*2) = 10,000 Hz
    //The combined periods of channels 0 and 1 represent the period
    //for the 16 bit channel (channel 0 is high order, channel 1 low order)
    //Period for 16 bit channel = (period of SA)*2000 = (1/100,000)*2000 = 0.02 seconds (50 Hz)
    PWMPER0 = 0x07; //high order
    PWMPER1 = 0xD0; //low order
    PWMPER2 = 0xC8; //Period for channel 2 = (period of SB)*200 = (1/10,000)*200 = 0.02 seconds (50 Hz)
    //Duty cycle for 16 bit channel = (150/2000)*0.02 = 0.0015 seconds
    PWMDTY0 = 0x00; //high order
    PWMDTY1 = 0x96; //low order
    PWMDTY2 = 0x0F; //Duty cycle for channel 2 = (15/200)*0.02 = 0.0015 seconds
}
Figure 43: PWM register initialization
Most of the code can be understood from the comments. The main things to notice are
that channels 0 and 1 of the PWM register are concatenated together to create a single 16 bit
channel (versus 8 bit channels if used separately). This allows for finer control over the PWM
signal that is generated using channels 0 and 1. When concatenated, only the settings for
channel 1 are used and the settings for channel 0 are ignored. The RC car picks up speed very
quickly, so we use the concatenated channel to control the speed of the car. This allows us to
increment the speed in much smaller amounts compared with using an 8 bit channel. We use
channel 2 as the steering output. We don’t need fine control over steering so we leave it as an
8 bit channel. Setting the MODRR register equal to 0x06 sets the ports PT1 and PT2 as the
outputs of ports PP1 and PP2 respectively. This is another difference between the Dragonfly12
and the Dragon12. The Dragonfly12 has fewer pins than the Dragon12, and it lacks pins for
ports PP1 and PP2. Thus, pins PT1 and PT2 are linked to ports PP1 and PP2 so that they output
the PWM signal.
Another important part is the clock signal used to generate the PWM signal. It works in the
same way as the timer prescaler described earlier: the 48 MHz bus clock is slowed down based
on the values in PWMPRCLK, PWMSCLA, and PWMSCLB. The PWM
register has four clocks it uses for creating signals called A, B, SA, and SB. A and B are slowed
versions of the bus clock. SA is a slower clock A and SB is a slower clock B. The value in
PWMPRCLK shown in the code above divides the bus clock by 32 for clock B (1.5 MHz) and by
16 for clock A (3 MHz). PWMSCLA is then used to slow clock A even more. The value in
PWMSCLA divides clock A by 30, making SA equal to 100 KHz. The value in PWMSCLB divides
clock B by 150, making SB equal to 10 KHz. Clock SA is used for channel 1 and clock SB is used
for channel 2.
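To make the register encodings concrete, the divides work out as follows for the 48 MHz bus clock of the Dragonfly12:
PWMPRCLK = 0x54 -> PCKB = 5, PCKA = 4
clock B = 48 MHz / 2^5 = 48 MHz / 32 = 1.5 MHz
clock A = 48 MHz / 2^4 = 48 MHz / 16 = 3 MHz
PWMSCLA = 0x0F (15) -> SA = A / (2*15) = 3 MHz / 30 = 100 KHz
PWMSCLB = 0x4B (75) -> SB = B / (2*75) = 1.5 MHz / 150 = 10 KHz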
Next, the periods of the PWM modulated signals are calculated. Both the steering and
speed controls use 20ms PWM signals so a 20ms period is used for both. The period is equal to
the clock period multiplied by the value in the PWMPER register for the channel. Channel 1
uses clock SA (100 KHz) and so the value in the PWMPER register must be 2000 to create a
PWM signal with period 20 ms. Since channels 0 and 1 are concatenated, PWMPER0 and
PWMPER1 are concatenated. PWMPER0 contains the high order bits and PWMPER1 contains
the low order bits. Likewise, channel 2 needs a 20 ms signal. This is done via the PWMPER2
register. Channel 2 uses clock SB, which operates at 10 KHz, thus a value of 200 is put into
PWMPER2 to create a period of 20ms.
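Connecting these numbers to the hexadecimal values in the initialization code: PWMPER0:PWMPER1 = 0x07:0xD0 = 0x07D0 = 2000, and PWMPER2 = 0xC8 = 200.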
The last step is setting the duty cycle of each PWM signal. This is the fraction of the
period where the signal is high. The duty cycles for the PWM signals sent to the RC car's control
boxes vary between 1 and 2 ms. A 1.5 ms duty cycle represents no speed when sent to the
speed control box and straight when sent to the steering control box. This is the value initially
given as the duty cycle for both PWM signals. Since the duty cycle is given as a fraction of the
period, this means we need a duty cycle of 1.5/20 = 15/200. Since the PWMPER2 register
contains 200, the value 15 is put into the PWMDTY2 register. Likewise, the value 150 is put into
the concatenated PWMDTY0 and PWMDTY1 registers. This finishes the initialization of the
PWM registers.
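To illustrate this conversion (the helper below is hypothetical and not part of the project code), the duty register value is simply the desired pulse width taken as a fraction of the 20 ms period and scaled by the channel's period register:
//Hypothetical helper: convert a pulse width in microseconds into a duty
//register value, given the PWMPER value for the channel and the 20 ms
//(20,000 us) PWM period used by both channels.
unsigned int pulse_to_duty(unsigned long pulse_us, unsigned int pwmper){
    return (unsigned int)((pulse_us * pwmper) / 20000);
}
//e.g. pulse_to_duty(1500, 200)  == 15  (steering, PWMDTY2)
//     pulse_to_duty(1500, 2000) == 150 (speed, concatenated PWMDTY0:PWMDTY1)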
The next step is modifying the PWM signal based on the input received from the
camera's output. Once again, we start with the code, as previously given in Figure 41 above,
which was used during the initial implementation that had only three steering states.
#include <mc9s12c32.h>

unsigned int count = 0;
unsigned int length = 0;
unsigned int begin = 0;
byte total = 0;

interrupt 8 void TOC0_Int(void){
    //rising edge
    if(PTT_PTT0) {
        begin = TC0;
    } else { //falling edge
        //length = time between rising edge and falling edge in 1/1000 secs
        length = (TC0 + (0xFFFF*(TFLG2_TOF)) - begin)/375;
        TFLG2_TOF &= 1; //clear bit
        //end of signal (signal length greater than or equal to 18 ms)
        if(length > 18) {
            //not turning
            if(count % 3 == 0) {
                PWMDTY2 = 15;
            } else if(count % 3 == 1) { //turn left
                PWMDTY2 = 11;
            } else {
                PWMDTY2 = 19; //turn right
            }
            PWMDTY1 = count/3 + 153;
            count = 0;
        } else { //not the end of the signal
            count++;
        }
        begin = 0;
    }
}
Figure 44: Microprocessor code to create correct PWM signal for the RC car's control boxes (initial design)
Referencing Table 4, we can see that the number of signals mod 3 can be used to
determine the direction. If the result is 0 then the car should go straight, if the result is 1 the
car should turn left, and if the result is 2 the car should turn right. This is implemented above,
where count is the number of signals. If we detect a terminating signal (length > 18), the duty
cycle for the steering (PWMDTY2) is updated accordingly with an appropriate value, as stated in
the comments. It can also be seen from Table 4 that the number of signals divided by 3 equals
the value for the speed. In the code above, we add count/3 onto a base value of 153 to get
PWMDTY1, because the car doesn't start to move until a duty cycle of at least 154 is reached.
The speed picks up very quickly, however, so the duty cycle is only incremented by one for each
increase in speed. Since a speed duty cycle of 153 represents 1.53 ms, incrementing by one only
lengthens the pulse by 0.01 ms, or 10 µs.
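For example, if the camera sends 7 signals before the terminating signal, then 7 % 3 = 1, so PWMDTY2 is set to 11 (turn left), and 7 / 3 = 2, so PWMDTY1 is set to 2 + 153 = 155, i.e. a 1.55 ms speed pulse.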
The following code was given previously in Figure 42 above, and represents the final
version implemented where there are up to 8 steering states and 8 speed states. None of the
code used to initialize the PWM register given in Figure 43 had to be changed.
byte count = 0;
unsigned int length = 0;
unsigned int begin = 0;
int total = 0;

interrupt 8 void TOC0_Int(void){
    //rising edge
    if(PTT_PTT0) {
        begin = TC0;
    } else { //falling edge
        //length = time between rising edge and falling edge in 1/1000 secs
        length = (TC0 + (0xFFFF*(TFLG2_TOF)) - begin)/375;
        TFLG2_TOF &= 1; //clear bit
        if(length > 19) { //end of speed sequence (terminating signal of 20 ms or more)
            PWMDTY1 = count + 153;
            count = 0;
        } else if(length > 4) { //end of steering sequence (terminating signal of 5-19 ms)
            if(count == 0){
                PWMDTY2 = 11; //full left
            } else if(count == 6){
                PWMDTY2 = 19; //full right
            } else {
                PWMDTY2 = count + 12; //in between
            }
            count = 0;
        } else { //not the end of the signal
            count++;
        }
        begin = 0;
    }
}
Figure 45: Microprocessor code to create correct PWM signal for the RC car's control boxes (final design)
There are only small differences between this code and the code in Figure 44 in terms of
how the duty cycles are changed. The main difference is that separate signal sequences for
steering and speed are received by the microprocessor. If the terminating signal is at least
20 ms long (length > 19), the previous signal sequence was for speed and so PWMDTY1 is updated
accordingly. In this case count is simply added to 153. If length is greater than 4 ms, then the
signal sequence was for steering. Since there are only 7 steering states (counts 0-6) but 9 possible
values for PWMDTY2 (11-19), a couple of duty cycle values had to be left out. We decided to
leave out the values 12 and 18. This allows the user to turn full left (11) or full right (19), while
having finer control near the middle (13-17).
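For example, a steering sequence of three signals followed by a terminating signal of between 5 and 19 ms sets PWMDTY2 to 3 + 12 = 15 (straight ahead), while a speed sequence of four signals followed by a terminating signal of 20 ms or more sets PWMDTY1 to 4 + 153 = 157, a 1.57 ms speed pulse.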
Future Work
• Finish designing the safety recovery system. This basically means that the R/C car will
retrace its movements back to its starting location if the wireless connection between
the user’s computer and the IP camera is unintentionally lost (if the user exits the
program or terminates the connection, the car shouldn’t retrace its movements). This
will require researching how to monitor the connection between the camera and the
program running on the user’s computer. It will also require researching how to write a
script for the IP camera that will record the movements of the car and then, in the event
that the wireless network connection is lost, send the correct signals to the control
boxes on the car to get it to retrace its movements in the reverse order. Also, on the
user side, the program will let the user know that the connection has been lost and that
the car is currently retracing its route back to the starting position. It should also
continuously ‘look’ for the IP camera and reestablish a connection to it if possible. This
will be done by James, Jeremy, and Seth.
• Implement the safety recovery system. First, this requires knowledge of how to write a
script for the IP camera. The script must store the acceleration and steering commands
sent to it from the user in the camera’s memory. Another script must monitor the
connection between the user’s computer and the IP camera. As soon as it fails, the
camera starts sending signals to the control boxes on the R/C car causing the car to
retrace its route. If a connection is reestablished, the car stops retracing its route. On
the user side, a message must be displayed letting the user know that the connection
was lost and the car is retracing its route. The program will continuously try to reconnect
with the IP camera and will let the user know if it successfully
reconnects. The hardware (car side) will be done by James and Jeremy. The software
(user side) will be done by Seth.
• Clean up code: remove testing print statements, add all Javadoc comments, and fix the
alignment on the setup tab of the GUI.
Results
Using the methodology described throughout this document, we were able to design
and implement a working prototype. Using a functional, user-friendly GUI, we are able to
control our R/C car over a network from a remote location. The system was field tested by
locating the car at a place with wireless internet (such as PLU’s library) and the user’s PC at
another location (PLU’s Morken Center) and successfully controlling the car over the network
connection.
Conclusion
Overall, we are happy with our project. We were able to successfully design and
implement a modification for an RC car to be controlled via network communication. We
accomplished nearly every functional requirement we set at the beginning of our project with
the exception of backtracking. Had there been fewer setbacks or more time, we are confident
we could have designed and implemented this aspect of our system as well.
All of our code and documents, as well as some video demonstrations are available at:
www.code.google.com/p/networkrccar.
Appendix A – Programming the Dragon12 Board
This is a reference for setting up the Dragon12 development board to work with the
CodeWarrior development suite. This procedure erases the default monitor program, DBug-12,
and installs the HCS12 serial monitor on the Dragon12 board. The HCS12 serial monitor is what
is used by CodeWarrior to communicate with the Dragon12 board. This procedure is taken
directly from Dr. Frank Wornle’s website [8] (see the link “HCS12 Serial Monitor on Dragon-12”).
The HCS12 serial monitor can be obtained from his website as well. Added comments appear
within ‘[]’. The main difference is that EmbeddedGNU was used instead of HyperTerminal for
reprogramming the Dragon12 development board. EmbeddedGNU can be obtained from Eric
Engler’s website [16].
Both uBug12 as well as the Hi-Ware debugger allow the downloading of user
applications into RAM and/or the Flash EEPROM of the microcontroller.
With uBug12, a suitable S-Record has to be generated. Programs destined for
Flash EEPROM are downloaded using the host command fload. Both S2 records
as well as the older S1 records (command line option ‘;b’) are supported. Some
applications might rely on the provision of an interrupt vector table. HCS12 Serial
Monitor expects this vector table in the address space from 0xFF80 – 0xFFFF.
This is within the protected area of the Flash EEPROM. The monitor therefore
places the supplied interrupt vectors in the address space just below the monitor
program (0xF780 –0xF800) and maps all original interrupts (0xFF80 – 0xFFFF) to
this secondary vector table. From a programmer’s point of view, the vector table
always goes into the space from 0xFF80 – 0xFFFF.
The source code of the HCS12 Serial Monitor can be obtained from Motorola’s
website (www.motorola.com, see application note AN2548/D and the
supporting CodeWarrior project AN2548SW2.zip).
A few minor modifications are necessary to adapt this project to the hardware of
the Dragon-12 development board:
(1) It is proposed to use the on-board switch SW7\1 (EVB, EEPROM) as
LOAD/RUN switch. The state of this switch is tested by the monitor
during reset. Placing SW7\1 in position ‘EVB’ (see corresponding LED)
forces the monitor to become active (LOAD mode). Switching SW7\1 to
position ‘EEPROM’ diverts program execution to the user code (jump via
the secondary RESET interrupt vector at 0xF77E – 0xF77F).
(2) The monitor is configured for a crystal clock frequency of 4 MHz,
leading to a PLL controlled bus speed of 24 MHz.
(3) The CodeWarrior project (AN2548SW2.zip) has been modified to
produce an S1-format S-Record. The frequently problematic S0 header
has been suppressed and the length of all S-Records has been adjusted to
32 bytes. This makes it possible to download the monitor program using
DBug-12 and two Dragon-12 boards connected in BDM mode.
The protected Flash EEPROM area of the MC9S12DP256C (Dragon-12) can only
be programmed using a BDM interface. An inexpensive approach is to use two
Dragon-12 boards connected to each other via a 6-wire BDM cable. The master
board should run DBug-12 (installed by default) and be connected to the host via
a serial communication interface (9600 bps, 8 bits, no parity, 1 stop bit, no flow
control). A standard terminal program (e.g. HyperTerminal) [or EmbeddedGNU]
can be used to control this board. The slave board is connected to the master
board through a 6-wire BDM cable. Power only needs to be supplied to either
the master or the slave board (i. e. one power supply is sufficient, see Figure 46)
[below].
Figure 46: Connecting two Dragon12 boards
The BDM interface of DBug-12 allows the target system (slave) to be
programmed using the commands fbulk (erases the entire Flash EEPROM of the
target system) and fload (downloads an S-Record into the Flash EEPROM of the
target system). Unfortunately, DBug-12 requires all S-Records to be of the same
length, which the S-Records produced by CodeWarrior are not. CodeWarrior has
therefore been configured to run a small script file (/bin/make_s19.bat) which
calls upon Gordon Doughman's S-Record conversion utility SRecCvt.exe. The final
output file (S12SerMon2r0.s19) is the required S-Record in a downloadable
format. [This does not need to be done as S12SerMon2r0.s19 is already included
in the download from Dr. Frank Wornle’s website].
Click on the green debug button (cf. Figure 5) to build the monitor program and
convert the output file to a downloadable S-Record file.
Start HyperTerminal [EmbeddedGNU] and reset the host board. Make sure the
host board is set to POD mode. You should be presented with a small menu
(Figure 47) [below]. Select menu item (2) to reset the target system. You should
be presented with a prompt ‘S>’ indicating that the target system is stopped.
Figure 47: Resetting Dragon12 board in EmbeddedGNU
Erase the Flash EEPROM of the target system by issuing the command fbulk.
After a short delay, the prompt should reappear.
Issue the command fload ;b to download the S-Record file of the HCS12 Serial
Monitor. Option ‘;b’ indicates that this file is in S1-format (as opposed to S2).
From the Transfer menu select Send Text File… . Find and select the
file S12SerMon2r0.s19. [When using EmbeddedGNU choose Download from the
Build menu.] You will have to switch the displayed file type to 'All Files (*.*)' (see
Figure 48) [below]. Click on Open to start the download.
Figure 48: Choose HCS12 Serial Monitor file
Once the download is complete, you should be presented with the prompt
(Figure 49) [below]. The HCS12 Serial Monitor has successfully been written to the
target board. Disconnect the BDM cable from the target system and close the
terminal window.
Figure 49: The Serial Monitor has been successfully loaded
Connect the serial interface SCI0 of the target system to the serial port of the
host computer and start uBug12 [this can be obtained from Eric Engler's website
[16]].
Ensure that you can connect to the target by entering 'con 1' (for serial port
COM1). You should receive the acknowledgement message 'CONNECTED'. Issue
the command ‘help’ to see what options are available to control the monitor.
Disconnect from the target using command ‘discon’ (Figure 50)[below].
Note that HCS12 Serial Monitor provides a set of commands very similar to that
of DBug-12: A user application can be downloaded into the Flash EEPROM of the
microcontroller using fload (;b), the unprotected sectors of Flash EEPROM can be
erased using fbulk, etc. Following the download of a user program into the Flash
EEPROM of the microcontroller, the code can be started by switching SW7\1 to
RUN (EEPROM) and resetting the board (reset button, SW6).
Figure 50: Options to control the monitor
This completes the installation and testing of HCS12 Serial Monitor on the
Dragon-12 development boards.
Appendix B – Porting Code from the Dragon12 to
the Dragonfly12
This is the procedure for getting code that works with the Dragon12 development board
to work on the Dragonfly12. It requires a Dragon12 development board with D-Bug12 on it, a
BDM cable, and a Dragonfly12 microprocessor. The Dragonfly12 uses the MC9S12C32
processor while the Dragon12 uses the MC9S12DP256 processor. You should familiarize
yourself with the differences by reading the user guide for the MC9S12C32 at
http://www.freescale.com/files/microcontrollers/doc/data_sheet/MC9S12C128V1.pdf and
comparing it against that of the DP256 chip at
http://www.cs.plu.edu/~nelsonj/9s12dp256/9S12DP256BDGV2.pdf. Any differences in terms
of the actual code will have to be taken into account separately from what is described in this
guide. This guide will cover porting code developed in Codewarrior to the Dragonfly12. Mr.
Wayne Chu from Wytec provided a lot of help to figure out this process.
The first step is to create a new project. Select New Project from the File menu. You
should be presented with the New Project window shown in Figure 51 below.
Figure 51: Creating a new Dragonfly12 project
Enter a name for the project and click Next. The New Project Wizard should appear. Click Next.
On page 2, select MC9S12C32 from the list of Derivatives and click next. On page 3, select the
languages that you will be programming in. For this project, only C was selected. The next few
pages ask if certain Codewarrior features should be used, including Processor Expert,
OSEKturbo, and PC-lint. It is safe to answer no to all of them. On page 7, it asks what startup
code should be used. Choose minimal startup code. On page 8, it asks for the floating point
format supported. Choose None. On page 9 it asks for the memory model to be used. Choose
Small. On page 10 you can choose what connections you want. Only Full Chip Simulation is
needed. After clicking Finish the basic project should be created (see Figure 52 below). The
green button is used to compile the code.
Figure 52: Dragonfly12 initial project layout
Whenever the code is compiled, it creates an s19 file and puts it in the bin folder within
the project. If you are using Full Chip Simulation, the file is probably called
“Full_Chip_Simulation.abs.s19”. This file is what will eventually be put onto the Dragonfly12.
First, the s19 file needs to be converted to the correct format. This is done using a program
called SRecCvt.exe. It can be downloaded from
http://www.ece.utep.edu/courses/web3376/Programs.html. Put SRecCvt.exe in the bin folder
containing the s19 file.
The conversion can be done from the command line, but it is easier to create a bat file
that will do it for you. The following line for converting the s19 file was generously provided by
Mr. Wayne Chu. Open a text editor and put the following line in it:
sreccvt -m c0000 fffff 32 -of f0000 -o DF12.s19 Full_Chip_Simulation.abs.s19
This assumes that Full_Chip_Simulation.abs.s19 is the name of the s19 file created by
Codewarrior. If the s19 file generated by Codewarrior is called something different, then
Full_Chip_Simulation.abs.s19 should be changed to the name of that s19 file. DF12.s19 is the
name of the output s19 file. It can be called anything, just make sure it is different from the
name of the input s19 file. Save the file to the bin folder containing SRecCvt.exe with the
extension “.bat” (e.g. make_s19.bat). Double click on the bat file and the Codewarrior s19 file
should be converted automatically and saved as the output s19 file given in the bat file (e.g.
DF12.s19). This is the file that will be put onto the Dragonfly12 microprocessor.
Connect power to a Dragon12 development board that has D-Bug12 on it. Plug one end
of the BDM cable into “BDM in” on the Dragon12 and the other end into the BDM connection
on the Dragonfly12. Make sure the brown wire in the BDM cable lines up with the “1” on both
of the BDM connections. Switch the Dragon12 to POD mode by setting SW7 switch 1 down and
SW7 switch 2 up. See the picture below for the complete setup.
Figure 53: Programming the Dragonfly12
Start up EmbeddedGNU. Press the reset switch on the Dragon12 board. A menu may
appear similar to the one shown in Figure 54 below (make sure the Terminal tab is selected). If
nothing appears, try clicking in the Terminal and pressing a key; sometimes the menu doesn't
display properly and this usually fixes it. Once the menu appears, choose option 1 and enter 8000
to set the target speed to 8000 KHz. If the R> prompt appears, type "reset" and hit enter. You should
now be presented with the S> prompt. Enter "fbulk" to erase the flash memory on the Dragonfly12.
After a brief pause, the S> prompt should reappear. Now enter “fload” and press enter. The
terminal will hang. Go to the Build menu and choose Download. Navigate to the bin folder of
the Codewarrior project and select the converted s19 file (e.g. DF12.s19). Click Open. A series
of stars should appear while the file is downloading. Once it is finished, the S> prompt should
reappear. The program is now on the Dragonfly12.
Figure 54: Downloading code to Dragonfly12 using EmbeddedGNU
Appendix C – Building the proper x86 to ARM
cross-compiler for the Axis 207W's hardware
We were able to find the file emb-app-arm-R4_40-1.tar at
ftp://ftp.axis.com/pub_soft/cam_srv/arm_tools/, which contained the files to build the
cross-compiler configured for the armv4tl architecture running on the camera's ARTPEC
processor. The way we got it working was to build it on Ubuntu 6.06 LTS, which can be
downloaded here: http://releases.ubuntu.com/dapper/.
From emp_app_sdk_arm_R4_40, these are the steps to build the cross-compiler and compile
the program:
Follow these steps to set up the compiler:
Download the compressed compiler into your home directory (or any other directory you
can write to).
Unpack the compressed file:
%: ~> tar -zxvf comptools-arm-x.y.z.tar.gz
It will unpack to comptools-arm-x.y.z. Move to that directory:
%: ~> cd comptools-arm-x.y.z/
To be able to install the compiler, get root access:
%: ~/comptools-arm-x.y.z> su
%: ~/comptools-arm-x.y.z> <root password>
Install the compiler to a path of your choice, e.g. /usr/local/arm. When done, exit root
access:
Note that the build process will take quite some time and will consume about 1 GB of disk space.
The installation needs about 100 MB of disk space.
%: ~/comptools-arm-x.y.z> ./install-arm-tools /usr/local/arm
%: ~/comptools-arm-x.y.z> exit
In order to use the ARM compiler toolchain, add install-path/bin to your $PATH, e.g.
/usr/local/arm/bin.
3.2 Code tree package
Download the code tree package to your home directory and unpack it:
%: ~> tar -zxvf emb-app-arm-Rx.tar.gz
Install the code tree:
%: ~> ./install-sdk
3.3 File structure in the code tree
In the code tree axis/emb-app-arm-linux-IRx-uclibc-IRy/ you will now find a file and folder
structure like this:
• apps/examples/hello
• apps/examples/artpec-a/mpeg4read
• apps/examples/artpec-a/ycbcrread
• files/
• init_env
• libs/
• licenses.txt
• Makefile
• os/
• SOFTWARE_LICENCE_AGREEMENT.txt
• target/
• tools/
The init_env file contains the environment variables. It is a good idea to check that they have the
correct values:
%: ~/axis/emb-app-arm-linux-IRx-uclibc-IRy> less init_env
Whenever you are working with your code tree you should run the init_env file to set the
environment variables to their correct values:
%: ~/axis/emb-app-arm-linux-IRx-uclibc-IRy> . init_env
or:
%: ~/axis/emb-app-arm-linux-IRx-uclibc-IRy> source init_env
In the tools/ directory you can find the Rules.axis file that contains rules for building and linking.
This file is included in each Makefile and you must also include it in your own Makefile.
4. BUILD AN APPLICATION
Create your application in a new directory in the apps/ directory. You will need at least a C file
and a Makefile. You can copy the Makefile from the small example application in apps/hello/ and
modify it to fit your own application.
The Makefile must contain these two lines:
AXIS_USABLE_LIBS = UCLIBC GLIBC
include $(AXIS_TOP_DIR)/tools/build/Rules.axis
The libraries are always linked dynamically.
5. COMPILE THE APPLICATION
When compiling your application you can compile it to run on host (your computer) first to test it
before downloading it to your Axis product. First prepare to compile for host and then compile
the application:
%: ~/axis/emb-app-arm-linux-IRx-uclibc-IRy/apps/myApp> make host
%: ~/axis/emb-app-arm-linux-IRx-uclibc-IRy/apps/myApp> make
When compiling the application to run on your product you must first specify for which type of
product you intend to compile the application. First prepare to compile for arm-axis-linux-gnuuclibc
and then compile the application:
%: ~/axis/emb-app-arm-linux-IRx-uclibc-IRy/apps/myApp> make arm-axis-linux-gnuuclibc
%: ~/axis/emb-app-arm-linux-IRx-uclibc-IRy/apps/myApp> make
The binary can be made significantly smaller by removing symbol information:
%: ~/axis/emb-app-arm-linux-IRx-uclibc-IRy/apps/myApp> arm-linux-strip name_of_executable
Appendix D – The Requirements Document
Date: 11/4/08
Version: 5
Introduction
The idea of this project is to enhance an R/C car. It will demonstrate each of our group
members' abilities to problem solve, manage tasks, and think creatively. Through this project, we will
design additional functionality for the R/C car using computer engineering and computer science. With
the addition of computer hardware and a camera, our R/C car will be able to be controlled over a
wireless network from a computer rather than by a radio frequency remote. These modifications will
increase the range at which the car can be controlled as well as allow the user to control the car without
needing to have it in sight. Other features may be added as well – features such as the ability of the car
to retrace its route to a “safe” location if the network connection is lost.
We have begun research in areas we foresee as necessary for the success of this project. The
first area is determining how all of the parts of the R/C car work. These parts include the motor, the
power supply, and the speed and steering controls. The next major area of concern is the wireless
communication between the car and the user’s computer. We need to determine the signal strength
and network bandwidth necessary to stream video to the user’s computer and still allow responsive
control of the car. We also need to determine a way to send signals to the car which control steering
and acceleration.
Project Description
Our project will create an R/C car that can be controlled over a wireless network from a
computer. When an R/C car is purchased from a store, its operating range is limited by line of sight and
by the maximum distance at which the radio frequency controller can operate. With a fully successful
project, both of these limitations will be removed. The first step will be to add a camera and computer
hardware to our remote control car. Then, we will create a program which will allow a user to establish
a connection between the hardware on the car and their personal computer. The user will be able to
control the car and receive video feedback from the car through this connection. Thus, we will have to
design a circuit between the computer hardware on the car and the car’s steering and acceleration
controls. This circuit will take a signal from the computer hardware and convert it into an appropriate
signal that the car’s controls will be able to understand. Successful implementation would provide
unrestricted operation (line of sight and distance from the controller are no longer factors) of the R/C
car within the range of the wireless network being used.
While the idea seems straightforward, there are many facets to this project. On a functionality
level, we will need to break the project up into subtasks which we can complete and test individually
before moving on to the next one. The first task will be adding hardware to the car that can connect to a
wireless network. Next, we will need to connect this hardware to the steering and acceleration controls
on the car.
Once the hardware issues have been solved, we will need to implement a program that can
connect to the hardware on the car over a wireless network. The car will be identified via an IP address.
First we will implement a basic program that will just be able to send signals to the car. The signals will
comprise steering and acceleration control. Next, we will add a GUI which will incorporate the steering
and acceleration controls in a user friendly manner. Finally, we will display a real-time video feed from
the camera on the R/C car within the GUI.
Beyond the tasks outlined, we currently have the goal of designing the hardware on the car with
a safety retreat feature, which involves keeping a log of the car's actions in memory. This will allow the
car to retrace its movements to its starting point in the event the network connection is lost. In the
event this happens, we also would like to design the computer hardware on the car and the program to
automatically reestablish a network connection if it becomes possible.
In order to complete all the intended tasks of this project, we will need to research many things.
An understanding of networks (both generally and of wireless networks specifically)
will be essential. Our program needs to communicate without fault over a wireless network to the
hardware on the car and vice versa. Understanding electric circuits is another research task at hand.
How do the acceleration and steering controls work? How can we implement a circuit so the computer
hardware on the car can send signals to the car to make it change direction or speed? These are some
of the major issues that we will need to research to answer.
A minimally successful project will consist of a car that can be controlled over a wireless network.
This means we can send a steering or acceleration command with a specific purpose, such as turn the
car to the left, from the user’s computer and the car will respond as directed within a couple seconds.
This shows we have learned how to establish a network connection between the hardware on the car
and the user’s computer. It will also show that we have learned how to connect the hardware on the
car with the controls on the car so that the car responds appropriately to commands. Thus, our learning
objectives will be satisfied.
Functional Objectives Summary
1. design and implement computer hardware for the car
2. design and implement the hardware on the car to convert a signal from the computer
hardware to a signal that the controls on the car can use
3. have the hardware on the car connect to a wireless network via TCP/IP
4. design a program to send control signals to hardware on the car
5. establish a connection between the hardware on the car and the program on the user’s
computer over a wireless network
6. establish full ability to control car (acceleration and steering)
7. establish video transmission from camera on car to GUI on user’s computer
8. implement memory of the car's movements and the ability to retrace them in case the network
connection is lost
9. automatic re-establishment of the network connection
10. full addition of any other accessory features
Learning Objectives Summary
1. Understand how communication over wireless networks works.
2. Learn how information can be both sent and received efficiently over a wireless network.
3. Learn how to detect whether a wireless network connection can be established and how to
detect immediately when a connection fails
4. Learn how a circuit can be built to modify a signal (such as a change in current)
5. Learn how a circuit can send a variable signal (this can be used to achieve different speeds
of the R/C car)
6. Learn how the acceleration and steering circuits work on the R/C car
Use Cases
Use Case:
Open control interface application and select network settings.
Actors: RC car driver.
Precondition:
User needs to have the application in a location on their computer’s hard disk. The
network RC car and onboard hardware must be powered on.
Steps:
1. Actor action: User runs the application.
   System response: GUI opens with a prompt to create/connect to a network.
2. Actor action: User selects/creates a network.
   System response: Application validates the selection and establishes a connection with the RC
   car's controls and camera. GUI displays a live video feed from the camera and awaits commands.
Use Case:
Control the network RC car’s motion via remote GUI app
Actors:
RC car driver
Goals:
To steer and propel the RC car from a remote location under the ideal conditions of a
stable connection between the RC car and the remote control app.
Precondition:
The "Open control interface application and select network settings" use case has been
completed. The mouse cursor is located within the video feed display area.
Steps:
1. Actor action: User presses the space bar.
   System response: RC car accelerates in the forward direction to a velocity determined by the
   position of the cursor, which acts as a speed control in our user interface.
2. Actor action: User moves the cursor around the video display.
   System response: RC car's direction and velocity controls react appropriately to the changing
   orientation of the cursor.
3. Actor action: User releases the space bar.
   System response: RC car discontinues acceleration, eventually slowing to zero velocity.
4. Actor action: User presses Ctrl+space bar.
   System response: RC car accelerates in the reverse direction to a velocity determined by the
   vertical orientation of the cursor on the video display and in a direction determined by the
   horizontal orientation of the cursor on the video display.
5. Actor action: User moves the cursor off of the video display area.
   System response: The vertical and horizontal orientation are set to the last position of the cursor
   before it left the area, until the cursor returns to the video display area.
Postcondition: The nature of this use case requires that any of these
steps may be performed in any order, so it is necessary that during
and after each step, the video feed continues to be processed and
displayed.
Use Case:
disconnect from network and close interface application
Actors:
RC car driver
Goals:
gracefully disconnect from network and close program
Precondition:
The "Open control interface application and select network settings" use case has been completed.
Steps:
1. Actor action: User clicks the disconnect button on the GUI.
   System response: Disconnects the user's computer from the network (car considered dead).
2. Actor action: User kills the program.
Use Case Model
Figure 55: Use Case model
Task Breakdown
1. Design
a. Hardware Design
i. Design how car will be controlled
1. Determine what will be used to transmit signals to the RC car’s
steering and acceleration “boxes”
a. What kinds of signals will be used (RF or wired).
b. What will be used to transmit the signals (signal
generation).
2. Design signal acquisition hardware
a. Research networking and TCP/IP protocols
b. Research and decide what hardware will best suit our on-board (on the RC car) computing/networking needs
c. test our system in increments
i. test remote computer to on-board
hardware communication
ii. figure out what we can use as output
from hardware
d. figure out what RC car’s hardware needs as input (do
testing on servo connections and determine Voltages, etc)
e. research how to get input / output to agree or at least shake
hands.
3. Safety recovery
a. research an efficient way for on board system to memorize
all maneuvers made.
b. If signal is not recovered in a designated amount of time (1
min) car should backtrack, using the memory.
c. stop backtracking once car has a strong signal again.
ii. Design video
1. Use a camera mounted on the car for visual feedback to the user
a. research and design application to receive video stream
from camera.
b. first design dummy applications to communicate with IP
camera, using the VAPIX® API.
iii. Design how car and electronics on it will be powered
1. Design electronics to use car battery power or another source
b. Software Design
i. Design user interface
1. Live video feed from camera on car is displayed
2. Current signal strength is displayed.
3. Controls for steering and accelerating are simple.
a. Possible use of a joystick or something similar?
4. Alert if signal is lost.
5. Alert if malfunction (such as loss of steering control).
ii. Design network connection to car
1. User side
a. Program will try to find car and will alert user when a
connection is established or if a connection cannot be
established (research network connections)
b. Signals must be generated based on user commands to
control the car. (write applications that send data over
network)
c. Program will monitor connection status to warn the user
when the signal is weak. (research how to monitor our
wireless signal)
d. Program has a scan function to continuously try to detect a
car if connection lost in a non-graceful manner. (again
research network connections)
2. Car side
a. Signals must be interpreted from user and then sent to
proper controls on the car (do a lot of testing here)
b. Video feed must be interpreted and relayed back to the user
program. (Using the VAPIX® API, retrieve the stream from the
camera to our interface.)
iii. Design safety recovery
1. Car must keep track of movements and then be able to retrace them
in case of a loss of signal
2. If unable to acquire signal, car must be able to return to area where
it had a signal.
c. Purchase required hardware/software/books/etc.
2. Implementation
a. Car side
i. Build and test network connection hardware and software
1. Make sure computer on car can send and receive signals
2. Implement software interpreter on the car for signals
ii. Build and test controls
1. Connect computer to controls
2. Test whether signals sent to the car are converted correctly so that
desired actions actually happen (for example the car accelerates
and decelerates when it is supposed to).
iii. Connect camera to car
1. Test to make sure video is being sent to the computer on the car
and that it is then interpreted correctly for transmission to the user
interface
iv. Build safety recovery system
1. Test if car can safely navigate the way it came
2. Test that car finds a “safe” location if it is unable to recover the
signal.
b. User side
i. Implement and test network connection software
1. Implement software to establish a network connection to car
2. Test to make sure it connects correctly to the car and that the
connection is “realized” by the software
3. Create signal generator
a. Test that signals are sent to the car and received from the
car correctly.
b. Test ability to control the car from the user side
ii. Implement user interface
1. Make sure all necessary information is displayed
2. Create video “window” for displaying video feed from the camera
on the car
a. Test that video is received correctly
3. Test that messages are displayed appropriately (for instance, when
a connection is lost the user must be alerted).
4. Create easy to use help menu
iii. Implement user controls
1. Keyboard or joystick commands are set (and configurable?)
2. Test controls to make sure car reacts as it should.
c. Test full implementation
i. Stress test different aspects
1. Make sure system reacts correctly when a connection is lost.
a. Make sure the connection can be reestablished.
2. Test when the network connection starts to get bogged down (i.e.
lots of quick commands)
3. Test unexpected closing of user program.
a. Car should enter safety/recovery mode.
b. User should be able to restart program and connect back to
the car.
3. Add additional features
a. Make sure they are reversible! (if something doesn’t work we should be able to
revert to previous working configuration without a problem).
4. Deliver the final product
Budget
Needed Materials:
User Side Computer - already have
IP camera with minimal functionality - $250-300
  - usable embedded system (preferably Linux)
  - on board wifi connectivity
  - usable signal output
  - tcp/ip, http protocol
  Preferred additional functionality:
  - low power requirements
  - supports encryption
  - web server
  - bandwidth efficient
  - mpeg-4 compression
  - interfacing API
RC car - already have
Wireless Networking Bk - Library
Linux Bk - Library
Other Books/Manuals - Google/Library
Website - Google Code
Development Languages:
Java - free
Processing - free
Vapix or similar API - free
Signal interpreting hardware - to be determined (not to exceed $50)
Bibliography
[1] Verizon Wireless AirCard® 595 product page. [Online]. http://www.verizonwireless.com/b2c/store/controller?item=phoneFirst&action=viewPhoneDetail&selectedPhoneId=2730
[2] Wikipedia. (2008, Nov.) IP Camera. [Online]. http://en.wikipedia.org/wiki/IP_Camera
[3] Axis 207 Network Camera. [Online]. http://www.axis.com/products/cam_207/
[4] Axis 207/207W/207MW Network Cameras Datasheet. [Online]. http://www.axis.com/files/datasheet/ds_207mw_combo_30820_en_0801_lo.pdf
[5] C. Hunt, TCP/IP Network Administration. Sebastopol, CA: O'Reilly, 2002.
[6] Axis. (2008) VAPIX version 3 RTSP API. [Online]. http://www.axis.com/techsup/cam_servers/dev/cam_http_api_index.php
[7] Axis Scripting Guide. [Online]. http://www.axis.com/techsup/cam_servers/dev/files/axis_scripting_guide_2_1_8.pdf
[8] F. Wornle. (2005, Oct.) Wytec Dragon12 Development Board. [Online]. http://www.mecheng.adelaide.edu.au/robotics/wpage.php?wpage_id=56
[9] (2009, Feb.) Software Development Kit. [Online]. http://developer.axis.com/wiki/doku.php?id=axis:sdk#compiler
[10] Axis. [Online]. http://www.axis.com/ftp/pub_soft/cam_srv/arm_tools/arm_tools.htm
[11] E. Andersen. (2008) µClibc Toolchains. [Online]. http://uclibc.org/toolchains.html
[12] J. Erickson, Hacking: The Art of Exploitation. San Francisco: No Starch Press, 2008.
[13] endolf. (2007, Jun.) Getting started with JInput. [Online].
http://www.javagaming.org/index.php/topic,16866.0.html
[14] (2002, Jun.) Linux Cross Reference. [Online].
http://www.gelato.unsw.edu.au/lxr/source/include/asm-cris/etraxgpio.h#L98
[15] D. Hodge. (2004, Jul.) GPIO select() and poll(). [Online]. http://mhonarc.axis.se/dev-etrax/msg04518.html
[16] E. Engler. Embedded Tools. [Online]. http://www.geocities.com/englere_geo/
[17] Sun Microsystems, Inc. (2008) Java™ Platform, Standard Edition 6 API Specification. [Online].
http://java.sun.com/javase/6/docs/api/
[18] M. J. Donahoo and K. L. Calvert, TCP/IP Sockets in C. San Francisco: Academic Press, 2001.
[19] T. C. Lethbridge and R. Laganière, Object Oriented Software Engineering: Practical Software Development using UML and Java. London: McGraw-Hill, 2005.
Glossary
API (application programming interface): a set of functions, procedures, methods, classes or
protocols that an operating system, library or service provides to support requests made by
computer programs.
AWT: Contains all of the classes for creating user interfaces and for painting graphics and
images. (From java.awt in the Java™ Platform Standard Ed. 6 documentation.)
Bit: A binary digit that can have the value 0 or 1. Combinations of bits are used by computers
for representing information.
Computer network: a group of interconnected computers.
Clock frequency: The operating frequency of the CPU of a microprocessor. This refers to the
number of times the voltage within the CPU changes from high to low and back again within
one second. A higher clock speed means the CPU can perform more operations per second.
Central processing unit (CPU): A machine, typically within a microprocessor, that can execute
computer programs.
Cross-Compiler: a compiler that runs on one platform (e.g. an x86 PC) but produces executable
code for a different platform (e.g. the ARM processor in the Axis camera).
Electric current: Flow of electric charge.
Electric circuit: an interconnection of electrical elements.
Encryption: the process of transforming information (referred to as plaintext) using an
algorithm (called cipher) to make it unreadable to anyone except those possessing special
knowledge, usually referred to as a key.
ftp (file transfer protocol): a network protocol used to exchange and manipulate files over a
TCP computer network, such as the Internet.
GUI (graphical user interface): a type of user interface which allows users to interact with a
computer through graphical icons and visual indicators
HTTP (Hypertext Transfer Protocol): a communications protocol used on the internet for retrieving
inter-linked text documents.
IP camera: a unit that includes a camera, web server, and connectivity board.
Interrupt: Can be either software or hardware. A software interrupt causes a change in
program execution, usually jumping to an interrupt handler. A hardware interrupt causes the
processor to save its state and switch execution to an interrupt handler.
Joystick: an input device consisting of a stick that pivots on a base and reports its angle or
direction to the device it is controlling.
Light-Emitting Diode (LED): A diode, or two terminal device that allows current to flow in a
single direction, which emits light when current flows through it. Different colors can be
created using different semiconducting materials within the diode.
Linux: a Unix-like computer operating system family which uses the Linux kernel.
Memory: integrated circuits used by a computer to store information.
Microprocessor: a silicon chip that performs arithmetic and logic functions in order to control a
device.
Modulation: the process of varying a periodic waveform.
Network Bandwidth: the capacity for a given system to transfer data over a connection (usually
in bits/s or multiples of it (Kbit/s Mbit/s etc.)).
Pulse Width Modulation: uses a square wave whose pulse width is modulated, resulting in the
variation of the average value of the waveform.
R/C (remote controlled) car: a powered model car driven from a distance via a radio control
system.
Register: Stores bits of information that can be read out or written in simultaneously by
computer hardware.
Repository: storage location from which software packages may be retrieved and installed on a
computer
RF (radio frequency): a frequency or rate of oscillation within the range of about 3 Hz to
300 GHz.
RTSP (Real Time Streaming Protocol): A control protocol for media streams delivered by a
media server.
Servo: a mechanism that converts an electrical control signal (such as a PWM pulse) into mechanical movement, used here for the car's steering and speed control.
Swing: Provides a set of "lightweight" (all-Java language) components that, to the maximum
degree possible, work the same on all platforms. (From javax.swing in the Java™ Platform
Standard Ed. 6 documentation.)
telnet (Telecommunication network): a network protocol used on the Internet or local area
networks to provide a bidirectional interactive communications facility.
TCP/IP Socket: an end-point of a bidirectional process-to-process communication flow across
an IP based network.
TCP/IP (Internet Protocol Suite): a set of data communication protocols, including Transmission
Control Protocol and the Internet Protocol.
Toolchain: a blanket term for a collection of programming tools produced by the GNU Project.
These tools form a toolchain (suite of tools used in a serial manner) used for developing
applications and operating systems.
User Interface: means by which a user interacts with a program. It allows input from the user to
modify the program and provides output to the user which denotes the results of the current
program setup.
VAPIX®: RTSP-based application programming interface to Axis cameras.
Video transmission: sending a video signal from a source to a destination.
Voltage: difference in electric potential between two points in an electric circuit.
Wi-Fi: name of wireless technology used by many electronics for wireless networking. It covers
IEEE 802.11 technologies in particular.
Wireless network: a computer network in which devices communicate without physical cabling, typically over radio links such as Wi-Fi.