Design Document
Big Dog’s Kryptonite
James Crosetto (Bachelor of Science, Computer Science and Computer Engineering)
Jeremy Ellison (Bachelor of Science, Computer Science and Computer Engineering)
Seth Schwiethale (Bachelor of Science, Computer Science)
Faculty Mentor: Tosh Kakar
12/18/08
Contents
Introduction
Research Review
Design
    Commands from the user’s computer to the IP Camera
    Hardware: Sending signals produced by software to the car
        Theory of hardware operation
        Hardware Design
    Backtracking
    GUI
Implementation
    Computer to Camera
    Camera to Microprocessor
        Camera
        Microprocessor
    Microprocessor to Car
Work Completed
Future Work
Timetable
Appendix A – Programming the Dragon12 Board
Appendix B – Porting Code from the Dragon12 to the Dragonfly12
Bibliography
Glossary
Table of Figures
Figure 1: The components necessary to the project
Figure 2: Components after consolidating the computer and camera into one unit
Figure 3: Pulse width modulation
Figure 4: The complete setup on the RC car
Figure 5: The microprocessor component is added to the design
Figure 6: Initial design of GUI
Figure 7: GUI design for a dropped connection
Figure 8: Sequence diagram for using the RC car
Figure 9: Logitech Rumblepad information
Figure 10: Getting controller events
Figure 11: Processing a command from the controller
Figure 12: Sending commands to the camera and updating the GUI
Figure 13: Sending the state to the camera
Figure 14: Controller button
Figure 15: Activating the output using iod
Figure 16: Inside iod
Figure 17: Example code showing definitions of IO variables
Figure 18: Example code for using ioctl()
Figure 19: Test program to figure out how to use ioctl() with the Axis 207W camera
Figure 20: Test program to measure the speed at which ioctl can trigger the camera's output
Figure 21: Initial code to trigger the camera's output within the server
Figure 22: Six bit signal sequence code
Figure 23: Timer register initialization
Figure 24: Microprocessor code to interpret the signals from the camera's output (initial design)
Figure 25: Microprocessor code to interpret the signals from the camera's output (final design)
Figure 26: PWM register initialization
Figure 27: Microprocessor code to create correct PWM signal for the RC car's control boxes (initial design)
Figure 28: Microprocessor code to create correct PWM signal for the RC car's control boxes (final design)
Figure 29: Timetable
Figure 30: Connecting two Dragon12 boards
Figure 31: Resetting Dragon12 board in EmbeddedGNU
Figure 32: Choose HCS12 Serial Monitor file
Figure 33: The Serial Monitor has been successfully loaded
Figure 34: Options to control the monitor
Figure 35: Creating a new Dragonfly12 project
Figure 36: Dragonfly12 initial project layout
Figure 37: Downloading code to Dragonfly12 using EmbeddedGNU
Introduction
This project involves modifying an R/C car to be controlled from a personal computer
over a wireless network connection. As long as a suitable wireless network connection is
available (refer to the requirements document), the user will be able to run the program,
establish a network connection between the computer and the car, and then control the car
(acceleration and steering) while being shown a live video stream from an IP camera on the car.
The design involves both software and hardware. The software will run on a user’s
personal computer. It will allow the user to connect to the IP camera on the car using an
existing wireless network, control the steering and acceleration of the car through keyboard
commands, and view a live video stream from the camera on the car. The IP camera supports video transmission, wireless network connectivity, and the capacity for an embedded server, as we will see in the design. The signal output on the camera will be connected to the
control boxes on the car, thus allowing control signals received by the camera to be relayed to
the car. A more detailed description of the requirements can be found in the requirements
document.
Research Review
We researched several possibilities for implementing hardware on the R/C car that can
both establish a wireless network connection and send signals to the control boxes. Initially,
we wanted to use a wireless card that can connect to the internet using cell phone signals. This
would allow the car to be used anywhere a cell phone signal is present. However, this idea was
quickly revised for several reasons. First, we needed to find a small computer which supported
this type of card. The only computers we found that might work were laptops; but most were
too large for the car and the smaller laptops generally ran Linux which wasn’t supported by the
wireless cards (for example see Verizon Wireless 595 AirCard®).[1] We decided to try to find a
small device that acted like a computer (such as a PDA or blackberry) that could connect to a
cell phone network and run some basic applications. However, most didn’t have any output
that we could use to send signals from the device to the R/C car’s control boxes and a cell
phone network card would still be required for the user’s computer. Also, the cards required a data plan[1], meaning there would be an ongoing cost just to run the car, which we didn’t want. Finally, the
details of how the cell phone networks work are proprietary, so we would have had a very hard
time configuring our software to work correctly using them.
Our second option was to use an IEEE 802.11 compatible wireless network. This way we
could have more control over the network (and we could set up a network ourselves for
testing) and most computers can connect to this kind of wireless network. All we needed was a
computer that we could put on the car. As already mentioned, most laptops were too large for
the car so we decided to look for a small device (such as a PDA) that supported wireless
networking or that had a port where we could plug in a wireless adapter. Once again, we were
unable to find one that could support wireless connectivity and have an output we could use to
send signals to the car’s control boxes. Further research yielded the discovery of IP cameras. [2]
IP cameras can connect to a network (through either a network cable or wirelessly) thus
allowing remote control of them.[2] We decided that we could use an IP camera as long as it
could connect wirelessly, provided some sort of output for sending signals to the R/C car’s
control boxes, and allowed simple programs to be written for and run on it. The reason we
needed simple programs to run on the camera is that we need to be able to customize our
ability to control the camera so we can use it to control the RC car. The camera that seemed to
fit perfectly with these requirements was the Axis 207W.[3] It is small so it will be easy to mount
onto the R/C car. It uses IEEE 802.11b/g wireless connectivity.[4] It has a video streaming
capability as well as an output for sending signals to a device (such as an alarm). [4] The camera
also has an embedded Linux operating system[4] and Axis provides an in depth guide[5] to
writing scripts for the camera, which allows for custom control of the camera. There is also an
API for manipulating video called VAPIX® that is provided by Axis for use with their cameras. The
camera uses a 4.9-5.1 Volt power source with a max load of 3.5W.[4] Therefore, we could easily
use batteries to supply power to it when it is mounted on the R/C car.
We have begun research into the TCP/IP communications protocol suite [6], to learn
about network administration. This should benefit our understanding of how the connection
between the remote computer and the IP Camera will be maintained. The VAPIX® application
programming interface is RTSP-based (Real Time Streaming Protocol)[7]. The VAPIX® API will
provide the ‘remote control’ communication of the media streams between the remote
computer and the IP Camera. We have researched scripting techniques for utilizing the API to
increase the functionality of the IP camera[5] .
The project may also require embedding C-application(s) onto the IP Camera, if scripting
is inadequate or too slow to perform some of the specific tasks needed for full functionality of
our remote control system. More research is currently under way for this possibility and AXIS
provides support for integrating their products into end-user solutions such as this, by providing
technical information[7].
As the Design section will discuss, we have decided to use a Client/Server Architecture
that will communicate between two TCP/IP Sockets[6]. It makes sense that the server be on the
Camera/Car side as it will process requests. To embed the application on the IP Camera, it will
have to be written in C. The bare bones of a server are to create a socket, bind it to a port and hostname, start listening on the port, accept connections, and handle/send messages. In C there are functions to perform these tasks that we plan on building off of, such as:
int socket(int domain, int type, int protocol)
int bind(int s, struct sockaddr *name, int namelen)
int listen(int s, int backlog)
int accept(int s, struct sockaddr *addr, int *addrlen)
These are some of the essential functions we’ll be able to use after including socket.h and manifest.h. Further research will be needed to deal with handling messages, specifically whether we will need to implement a ‘non-blocking’ server, as discussed later in the Design issues.
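To make this concrete, here is a minimal sketch of such a server in C. The port number is a placeholder and most error checking is omitted; the real server will also need the command handling described in the Design and Implementation sections.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define PORT 5000 /* placeholder; not the project's actual port */

int main(void) {
    char buf[64];
    ssize_t n;
    struct sockaddr_in addr;

    /* create a TCP/IP socket */
    int s = socket(AF_INET, SOCK_STREAM, 0);

    /* bind it to a port on any local interface */
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    /* listen for, and accept, a single client (one controller per car) */
    listen(s, 1);
    int client = accept(s, NULL, NULL);

    /* handle messages until the client disconnects */
    while ((n = read(client, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        printf("received: %s\n", buf); /* the real server acts on the command here */
    }
    close(client);
    close(s);
    return 0;
}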
In addition, we needed to find out the specifications of how the R/C car is controlled so
we contacted the manufacturer (Futaba) of the control boxes that are on the car. We found
out that both the steering and motor (speed) controls use a 1.0-2.0 millisecond square pulse
signal with amplitude of 4-6 volts with respect to the ground. The signal repeats every 20
milliseconds. Thus, we need the output signal from the camera to match this in order to control
the car.
Next we researched how we can control the signal output of the IP camera to produce
the signal required by the R/C car’s control boxes. We contacted Axis and learned that the
camera output has been used for controlling a strobe light so a controlled square pulse seems
possible. The scripting guide provided by Axis also gives some example scripts for controlling
the I/O of the camera, so we should be able to write scripts that can be used to send signals to
the R/C car’s control boxes. However, further research revealed that the camera is unable to
produce the signal required to control the RC car. According to the Axis scripting guide[5] the
camera can activate the output with a maximum frequency of 100Hz. This is inadequate
because the steering and motor speed controls use a Pulse Width Modulated (PWM) signal
with a period of 20ms that is high between 1 and 2ms. A circuit could possibly be designed to
convert the output signal of the camera to the correct PWM signal, but we decided it would be
easier and allow for greater flexibility if a microprocessor was used. Therefore, we decided to
use a microprocessor to produce the signal output required by the RC car’s steering and speed control boxes.
The microprocessor will use the output of the IP camera as input and convert the signal
to what is required by the RC car’s control boxes. We decided to use a Dragon12 development
board to design the program that will change the signal. We will then flash an actual
microprocessor that will be used on the RC car with our program. The Dragon12 development
board will be used with the CodeWarrior development suite. A detailed guide for setting up the
Dragon12 board to work with CodeWarrior is given in Appendix A.
A lot of research has been done on how to use the Dragon12 board for microprocessor
development. Most of it involves learning how to use the different aspects of the Dragon12
board to accomplish different things. An aspect of the research that pertains directly to our
project is the use of a microprocessor to produce a controlled output signal. We found out that
an output signal from the microprocessor can be precisely controlled using the clock frequency
of the microprocessor. We successfully used the microprocessor clock frequency on the
Dragon12 board to blink two LEDs at different rates. One blinked about once every 1/3 of a
second and the second one blinked twice as fast. This was accomplished using the example
projects and tutorials provided by Dr. Frank Wornle of the University of Adelaide[10].
Further research revealed that the microprocessor contains a PWM signal output. Using
that along with the microprocessor timer interrupts allows for the translation of an input signal
into an output signal. The PWM signal that is generated is based on the input signal received.
An overview is given in Tables 1 and 2 in the Hardware Design section below.
Design
As was introduced in the requirements document, we have the following issues to tackle:

- Establish a connection between the RC car and the User’s (or driver’s) PC.
- Get a real-time visual of the car’s position (to allow interactive driving opportunity).
- Send commands from the User’s PC to the RC car: this will require both software and, on the RC car’s side, additional hardware.
- Have a backtracking system, for when the car loses a signal and needs to try to retrace back to a position where it can reacquire one.
- Finally, when everything is functional, provide an attractive, user-friendly GUI.
We start with this basic picture to illustrate, in an abstract way, the modules necessary to fulfill
these requirements. Then, we will sequentially build off of this to achieve a design that
addresses full functionality:
Figure 1: The components necessary to the project
Our first objective was finding a way to connect the two sides of the system. As described in the
research section, we have decided on an IP camera to sit on the car-side and handle that end’s
connection over a wireless network.
Figure 2: Components after consolidating the computer and camera into one unit
Over this wireless connection, we will get a real time video stream and send commands to the
car.
The camera’s system includes an API called VAPIX® as mentioned in the research section, which
is based on the Real Time Streaming Protocol (RTSP) allowing us to use predefined commands
like SETUP, PLAY, PAUSE, and STOP to ‘control’ the video feed. We are able to modify many
important parameters using this API, such as bitrate and resolution. Our GUI will process the
incoming feed simply by displaying it to the screen.
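For illustration, an RTSP exchange with the camera would look roughly like the following. The axis-media/media.amp path is the media URL typically used by Axis cameras; the exact URL and parameters for the 207W still need to be confirmed against the VAPIX documentation.

SETUP rtsp://<camera-ip>/axis-media/media.amp RTSP/1.0
CSeq: 1
Transport: RTP/AVP;unicast;client_port=4588-4589

PLAY rtsp://<camera-ip>/axis-media/media.amp RTSP/1.0
CSeq: 2
Session: <session id returned by the SETUP response>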
The heftier task of communication will be controlling the RC car. This process will entail:

- Sending commands from the user’s PC to the IP Camera
- Producing and outputting signals at the IP Camera to send to an external microprocessor
- Translating and relaying the signal from the microprocessor to the car’s control boxes.
Commands from the user’s computer to the IP Camera
From our research on the TCP/IP protocol suite, we have realized that the most reliable way to
send commands would be to use the Client/Server Architecture using a connection between
TCP/IP sockets[6]. Naturally, the car side is the Server, as it will accept a connection and process
requests. The Client will be on the user’s PC. Additionally, since it will not be necessary in the scope of this project to ever have more than one ‘controller’ for a car, the Server will only allow one client to connect to it.
The Axis 207W has an embedded Linux operating system distribution called CRIS. To embed the
Server program on the IP Camera, it must be written in C and compiled using Axis’ compiler to
run on the CRIS architecture. In this way, we will be able to write our own Server that will run
on the 207W. To implement the basic functionality of a Server, we will use the methods
touched on in our research section, creating a socket and listening on it. Our Client can be written in almost any language; we will likely choose Java, as it will be highly compatible with our GUI. With a Server on the camera (and thus on the car) that can accept requests from the Client on the driver’s computer, we have the communication from the user’s PC to the IP Camera covered.
Note: A predicted issue here is that of blocking. If our client sends a command that says to ‘turn
left’ a certain amount and then the next commands says to ‘turn right’, we don’t want the
command that says ‘turn right’ to have to wait for an acknowledgement of the ‘turn left’
command from the server. This may result in an unpredictable latency. In Java, to implement a
‘non-blocking’ server, a layer is added between the client and the server called a Selector,
which breaks up incoming messages into thread-like buffer chunks and sends them to the
server synchronously using keys which hold information about the buffer-chunk being sent. I
suspect we may need to implement something similar to this, though in C, it will require more
work than using a predefined class.
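One standard way to get this behavior in C is the select() system call, which lets a single thread wait on the listening socket and the client socket at once, so a fresh command is never stuck waiting behind an earlier one. The sketch below is illustrative (the function and variable names are ours) and assumes the listening socket from the server fragment in the research section:

#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* Service loop for a single-client server in a non-blocking style.
 * s is an already listening socket; client is -1 until someone connects. */
void serve(int s) {
    char buf[64];
    int client = -1;

    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(s, &readable);
        int maxfd = s;
        if (client >= 0) {
            FD_SET(client, &readable);
            if (client > maxfd)
                maxfd = client;
        }

        /* wait until either socket has data; never block on just one peer */
        if (select(maxfd + 1, &readable, NULL, NULL, NULL) < 0)
            break;

        if (client < 0 && FD_ISSET(s, &readable))
            client = accept(s, NULL, NULL); /* one controller per car */

        if (client >= 0 && FD_ISSET(client, &readable)) {
            ssize_t n = read(client, buf, sizeof(buf));
            if (n <= 0) { /* dropped connection: forget the client */
                close(client);
                client = -1;
            }
            /* else: parse and act on the newest command immediately */
        }
    }
}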
Hardware: Sending signals produced by software to the car
The hardware aspect of this project will focus on programming a microprocessor to take
a signal sent to it by the output of the IP camera and relay a modified signal to the steering and
speed control boxes on the RC car. The details of this process are explained below.
Theory of hardware operation
The hardware design of this project can be broken down into several key components:
1. R/C Engine
This R/C car is driven by a simple electric motor. As current flows through the motor in the
positive direction, the engine spins in the forward direction. As current flows the opposite
direction, the engine spins in reverse. However, the details of operation for the engine are
unnecessary for this project since there is a controlling unit which determines the appropriate
current to be sent to the engine based on its received pulse signal. The details of the controlling unit’s function are elaborated upon in part 2 of this section.
2. Controlling Units of the R/C car
We found that there were 3 components which controlled the functions of the car. The first
controlling unit was the servo. This servo box currently takes in a radio frequency signal and
divides it into two channels: one which leads to a steering box, the other to the speed control.
Each channel consists of three wires: ground, Vcc, and a pulse width modulation signal. The Vcc carries a voltage between 4.6-6.0V. The pulse width modulation signal is what determines the
position of the steering and speed boxes of the car. The pulse is a square wave, repeating
approximately every 20ms. This pulse ranges between 1.0 and 2.0ms with 1.5ms as the normal
center. As the pulse deviates from this norm towards the 1.0min or the 2.0max, it changes the
position of the steering or speed (depending on which channel it’s feeding). For example, as the
pulse to the speed control moves towards 2.0ms, the engine speed is going to
increase in the forward direction. As the pulse moves towards 1.0ms, the engine speed is going
to increase in the reverse direction.
The graphs below are a demonstration of the pulse width modulation. As the pulse
changes, it changes the amount of voltage that is delivered. In the examples below, the first case
shows 50% of the pulse is at the Vcc, therefore, 50% of the Vcc is delivered. In the second case,
25% of the pulse is at the Vcc, therefore, 25% of the Vcc is delivered.
Figure 3: Pulse width modulation
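To put the same idea in equation form: the fraction of each period spent high (the duty cycle) sets the average voltage delivered, V_avg = (t_high / T) x Vcc. In the first example above, t_high/T = 50%, so half of Vcc is delivered; in the second, 25%. For the car’s control signal, a 1.5ms pulse in a 20ms period is a 7.5% duty cycle, and the full 1.0-2.0ms command range spans duty cycles of 5% to 10%; note that the control boxes decode the pulse width itself to set position.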
3. IP Camera
The details of this camera have been outlined above. The camera carries an embedded
Linux operating system which should allow for custom controlled output of the alarm port. This
output will be programmed to respond accordingly to input signals from the network
connection and be in the form of the necessary pulse needed for the speed and steering
controls. The voltage that this camera requires – 4.9-5.1V – will also be within the range that can be provided by a battery pack.
4. Microprocessor
A programmable microprocessor is what we are going to use to replace the servo box and
recreate the pulse width modulation. Since the camera can activate its output at most once every 10ms, it would not be adequate for supplying the 1-2ms steering and speed pulses directly. Instead, the microprocessor will
receive a signal from the IP Camera and translate it into the appropriate signals for the steering
and speed control units. Our current estimation for signals from the camera will be 100-185ms
for steering and 200-285ms for speed.
The microprocessor code will be written, debugged, and tested using CodeWarrior and the Dragon12 Rev E development board. The programming language will consist of a combination of C and Assembly; a register-level sketch of the PWM setup is given after this list.
5. Battery Pack
The standard battery pack on this vehicle is a series of six 1.2V, 1700mAh batteries. Since
equipment is going to be added, a second battery pack will be added to supply the camera and
microprocessor with power. Once the IP camera and microprocessor have been implemented, the appropriate number of batteries will be added. The current run time for the standard car and battery pack is approximately 20 minutes. The battery pack for the IP camera and microprocessor will be sized for a run time close to equal.
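As promised above, the following is a rough register-level sketch of what the PWM setup might look like on the HCS12, using the standard register names from the derivative header and assuming a 24 MHz bus clock. The prescaler and scale values are illustrative choices that give a 100 microsecond PWM tick, so a 20ms period is 200 counts and the 1.0-2.0ms range is 10-20 counts; the values actually used on the Dragon12 appear in the Implementation section (see Figure 26).

#include <mc9s12dg256.h> /* assumed derivative header with the PWM register names */

/* Configure PWM channel 0 for the 20ms servo-style signal.
 * Assumes a 24 MHz bus clock; the values below give a 100us tick. */
void pwm_init(void) {
    PWMPRCLK = 0x04; /* clock A = bus/16 = 1.5 MHz */
    PWMSCLA  = 75;   /* clock SA = clock A / (2*75) = 10 kHz */
    PWMCLK  |= 0x01; /* channel 0 runs off scaled clock SA */
    PWMPOL  |= 0x01; /* pulse is high at the start of each period */
    PWMPER0  = 200;  /* 200 ticks x 100us = 20ms period */
    PWMDTY0  = 15;   /* 1.5ms high = neutral position */
    PWME    |= 0x01; /* enable channel 0 */
}

/* Map one of the 11 command levels (0..10) onto the 1.0-2.0ms range. */
void pwm_set_level(unsigned char level) {
    PWMDTY0 = 10 + level; /* 10 ticks = 1.0ms ... 20 ticks = 2.0ms */
}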
Hardware Design
The output of the IP camera can be activated a maximum of 100 times a second (period
of 10ms) according to the Axis scripting guide[5]. As noted above, the RC car requires a 1-2ms
pulse every 20ms so this is much too slow. Therefore, we will add a microprocessor which will
take the signal from the IP camera’s output and convert it to the required signal for the control
boxes on the RC car. The output signal from the camera will consist of two alternating signals
representing the acceleration and steering of the car. The signal for the acceleration would
vary in length between 10ms and 110ms in increments of 10ms. This signal represents 11
degrees of acceleration; 5 for moving forward, one representing no movement, and 5 for
moving in reverse. The first 5 increments (10, 20, 30, 40, 50) are used for moving forward,
decreasing in speed as the signal time increases (10 is the fastest speed, 50 is the slowest speed
of the car). A 60ms signal represents neither moving forward nor backwards. The largest 5
increments (70, 80, 90, 100, 110) represent moving backwards, increasing in speed as the signal
length increases (70 is the slowest, 110 is the fastest). The second signal, used to control
steering, varies between 120 and 220ms in increments of 10ms also. This signal represents 11
degrees of freedom; 5 for turning left, one for moving straight, 5 for turning right. The first 5
increments represent turning left (120, 130, 140, 150, 160), decreasing in the degree of
sharpness as the length of the signal increases (120 is the sharpest left and 160 is a slight left
turn). A 170ms signal represents no turn. The last 5 increments represent turning right (180,
190, 200, 210, 220). The degree of sharpness of the turn increases as the signal time lengthens
(180 represents a slight right turn and 220 represents the sharpest right). The reason for
limiting the signal increments to 11 instead of allowing for continuous control is that the
camera can’t output a signal that varies continuously. It can only output signals with period
lengths that increment in 10ms. Thus, the signal must be divided, and allowing for 11 different
signal lengths will provide control that will seem nearly continuous to the user.
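Because the encoding is linear in both directions, it can be captured in a few small helper functions. This is an illustrative sketch of the mapping just described (the names are ours, not the project’s final code):

/* Acceleration level 0 (fastest forward) ... 5 (stopped) ... 10 (fastest
 * reverse) to camera output signal length in ms: 10, 20, ..., 110. */
int speed_signal_ms(int level) {
    return 10 + 10 * level;
}

/* Steering level 0 (sharpest left) ... 5 (straight) ... 10 (sharpest
 * right) to camera output signal length in ms: 120, 130, ..., 220. */
int steer_signal_ms(int level) {
    return 120 + 10 * level;
}

/* Inverse used on the microprocessor side: classify a measured signal
 * length (already rounded to the nearest 10 ms) back into a level. */
int signal_to_level(int ms) {
    if (ms >= 120)
        return (ms - 120) / 10; /* steering command */
    return (ms - 10) / 10;      /* speed command */
}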
As mentioned in the theory of operation, the input to the steering box and speed box on
the RC car requires a 20ms pulse-width modulation signal. The high part of the signal has a
variance between 1 and 2ms. In order to correspond with the eleven different signals that can be sent from the camera to the microprocessor, the high part of the signal sent to the steering and speed boxes of the RC car will be divided into eleven different lengths of roughly equal sizes. The different lengths are 1, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9 and 2ms. They correspond
with the input signals from the IP camera to the microprocessor for steering and speed control
as shown in Tables 1 and 2 below.
The microprocessor will have a single input connected to the output of the camera. It
will have two outputs. One will be connected to the steering control box and one will be
connected to the speed control box on the RC car. The steering control will receive the
appropriate pulse-width modulated signal from the microprocessor when the microprocessor
receives a signal representing a steering command from the IP camera (between 120-220ms in
length). The pulse width modulated signal sent to the steering box will vary appropriately as
well. For example, if the microprocessor receives a 120ms signal, which represents a hard left
command from the user, it will send a 20ms signal with 1ms high and 19ms low to the steering
box on the RC car, which represents a hard left. Likewise, if the microprocessor receives a
150ms signal from the IP camera, it will send a 20ms signal with 1.3ms high and 18.7ms low,
which represents a slight left turn. The chart below (Table 1) illustrates the conversions of the
signal outputted from the IP camera, which is received by the microprocessor, to the input
signal to the steering box, which is sent by the microprocessor.
Table 1: Signal conversion between IP Camera output and Steering Box input. The time given as the
Microprocessor output is how much of the 20ms pulse width modulated signal is high.
IP Camera Output Signal Period    Steering Box Input Signal Period (High)    Car Action
(Microprocessor Input)            (Microprocessor Output)
120ms                             1.0ms                                      Left Turn
130ms                             1.1ms                                      Left Turn
140ms                             1.2ms                                      Left Turn
150ms                             1.3ms                                      Left Turn
160ms                             1.4ms                                      Left Turn
170ms                             1.5ms                                      No Turn
180ms                             1.6ms                                      Right Turn
190ms                             1.7ms                                      Right Turn
200ms                             1.8ms                                      Right Turn
210ms                             1.9ms                                      Right Turn
220ms                             2.0ms                                      Right Turn
The output to the speed control box on the RC car will work similarly. For example, if
the microprocessor receives a 10ms signal from the IP camera, which represents the fastest
speed, it will send a 20ms signal with 2ms high and 18ms low to the speed control box of the RC
car. This corresponds to the fastest speed possible for the RC car. The chart below (Table 2)
illustrates the conversions of the signal outputted from the IP camera, which is received by the
microprocessor, to the input signal to the speed control box, which is sent by the
microprocessor.
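Both tables follow simple linear relationships. With c the camera signal length in ms, the high time of the 20ms output signal is t_high = 1.0 + (c - 120)/100 ms for steering and t_high = 2.0 - (c - 10)/100 ms for speed. For example, c = 150ms gives 1.0 + 30/100 = 1.3ms (a slight left), and c = 60ms gives 2.0 - 50/100 = 1.5ms (no movement).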
Table 2: Signal conversion between IP Camera output and Speed Control Box input. The time given as the
Microprocessor output is how much of the 20ms pulse width modulated signal is high.
IP Camera Output Signal Length    Speed Box Input Signal Length (High)    Car Action
(Microprocessor Input)            (Microprocessor Output)
10ms                              2.0ms                                   Move Forward
20ms                              1.9ms                                   Move Forward
30ms                              1.8ms                                   Move Forward
40ms                              1.7ms                                   Move Forward
50ms                              1.6ms                                   Move Forward
60ms                              1.5ms                                   No Movement
70ms                              1.4ms                                   Move Backward
80ms                              1.3ms                                   Move Backward
90ms                              1.2ms                                   Move Backward
100ms                             1.1ms                                   Move Backward
110ms                             1.0ms                                   Move Backward
Once completed, the connection between the IP camera, microprocessor, and RC car
steering and speed control boxes will look something like Figure 4. The microprocessor will
have a single input from the output of the IP camera. It will have two outputs; one will connect
to the speed control box and the other to the steering box. The microprocessor will continually
send signals to the two control boxes based on the input signals received. Since the IP camera
is only able to send one signal at a time, the signals for steering and speed control will alternate.
Thus, every other signal is for steering and the signals in between are for speed. While the
microprocessor is receiving a signal corresponding to steering from the IP camera, it will
continue to send the previous signal for speed to the speed control box. The reverse is true
when the microprocessor receives a signal from the IP camera for speed. This makes it so that
the commands are ‘continuous’; there is no break while the signal for the other kind of control
is being sent. This may produce a little lag, but it shouldn’t be too noticeable as the longest
signal is about 1/5 of a second (220ms). The IP camera, microprocessor, and RC car will all be
powered by batteries.
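The following sketch makes the alternating scheme concrete: each measured pulse updates only the channel it belongs to, while the hardware PWM outputs keep repeating the last value for both channels. measure_pulse_ms(), set_steer_pwm(), and set_speed_pwm() are stand-ins for the timer and PWM code described in the Implementation section, not actual project functions.

int measure_pulse_ms(void);    /* stand-in: length of the next camera pulse, in ms */
void set_steer_pwm(int level); /* stand-in: update steering PWM, levels 0..10 per Table 1 */
void set_speed_pwm(int level); /* stand-in: update speed PWM, levels 0..10 per Table 2 */

/* Main loop: classify each pulse and update only the matching channel.
 * The PWM hardware keeps repeating the last value for the other channel,
 * so both control boxes always see a continuous 20ms signal. */
void control_loop(void) {
    for (;;) {
        int ms = measure_pulse_ms();
        if (ms >= 120 && ms <= 220)
            set_steer_pwm((ms - 120) / 10); /* steering command */
        else if (ms >= 10 && ms <= 110)
            set_speed_pwm((ms - 10) / 10);  /* speed command */
        /* out-of-range pulses are ignored; previous values persist */
    }
}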
Figure 4: The complete setup on the RC car
Figure 5, below, represents the addition of the microprocessor into the overall project.
It will communicate between the IP camera and the RC car.
Figure 5: The microprocessor component is added to the design
Backtracking
The final aspect of the communication between the IP camera and the RC car is the
implementation of a backtracking system. In the case that the network connection between
the IP camera and the user’s computer is lost, the RC car will backtrack to its starting location.
While backtracking, it will periodically attempt to reconnect with the user’s computer. If
reconnecting is successful, it will discontinue backtracking and control will return to the user.
This will be implemented as part of the server on the IP camera. Every time a command
is sent to the IP camera from the user it will be placed in memory on a stack. Each command
will consist of the type, either speed control or steering, and the duration of the command. The
only difference is that the direction will be reversed (moving forward will become backward
and vice versa). For example, if the command is move forward at the fastest speed, then the
type of command is speed and, since the direction is reversed, the length is 110ms (see Table 2).
As soon as the network connection between the user’s computer and the IP camera is lost, the
commands get popped off of the stack and sent to the microprocessor. Thus the car will return
to its original location. If the connection between the user’s computer and the IP camera is
reestablished then the backtracking stops, and commands resume being added to the stack.
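A sketch of what this could look like inside the server is given below. The fixed-size stack and the reversal rule for speed come directly from the scheme above; how steering should be treated while reversing (left unchanged here) is a detail the final design will need to settle.

/* Illustrative backtracking stack; MAX_CMDS is an arbitrary cap. */
#define MAX_CMDS 1024

struct command {
    int signal_ms;   /* 10-110 = speed command, 120-220 = steering command */
    int duration_ms; /* how long the command was in effect */
};

static struct command stack[MAX_CMDS];
static int top = 0;

/* Record each command as it is sent to the microprocessor. */
void push_command(int signal_ms, int duration_ms) {
    if (top < MAX_CMDS) {
        stack[top].signal_ms = signal_ms;
        stack[top].duration_ms = duration_ms;
        top++;
    }
}

/* Pop the most recent command, reversing speed commands so forward
 * becomes backward and vice versa (10 <-> 110, 20 <-> 100, ...).
 * Returns 0 when the stack is empty, i.e. the car is back at the start. */
int pop_reversed(struct command *out) {
    if (top == 0)
        return 0;
    *out = stack[--top];
    if (out->signal_ms <= 110)
        out->signal_ms = 120 - out->signal_ms;
    return 1;
}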
GUI
We are currently planning on using the language Processing to write the GUI. Its main purpose is generating and manipulating images, so it will directly connect with the RTSP server on the IP Camera and display the received video feed. The GUI will also have an instance of our Client that will be written in Java (Processing is built on top of Java using its libraries; behind the scenes, a Processing application is wrapped in a Java class). Now we will be able to have a
user-friendly interface, where you can control the car through our Client using the arrows on
your keyboard and directly view the video feed from the IP camera mounted on the Car. Below
are a couple of mockup screenshots of what our GUI might look like. We will display a monitor of
the relative speed, based on the commands being sent to the server. We would also prefer to
have a way to monitor the signal strength of the IP Camera’s wireless connection. You can also
disconnect using the GUI.
Figure 6: Initial design of GUI
Figure 7: GUI design for a dropped connection
Below is a sequence diagram including all of our modules added to fulfill our requirements. Also
refer and compare to Use cases in the Requirements Doc:
Figure 8: Sequence diagram for using the RC car
Implementation
The implementation covers everything that was done after the initial design. This
includes problems encountered, the design changes made to handle the problems, and the final
outcome. There will be four main sections embodying the four major areas of design for this
project: Computer to Camera, Camera to Microprocessor, Microprocessor to Car, and Powering
the Project. Computer to Camera focuses on the implementation of the user interface, controls,
and communication between the server on the camera and the program on the user’s
computer. Camera to Microprocessor describes how signals are sent from the camera to the
microprocessor and how they are received by the microprocessor. Microprocessor to Car covers how the microprocessor creates the proper pulse width modulated signals and sends them to the car. Finally, Powering the Project covers the circuit design and implementation for
powering the microprocessor, car, and camera via batteries.
Computer to Camera
We wanted to add support for a controller so that the user could use either the
controller or the keyboard to drive the car. We found a Java API framework for game
controllers called JInput. Following the Getting Started with JInput post on the JInput forums,
we were able to get the API installed and set up for use in our application (endolf). We then
used the sample code given in the forum to write a simple class that looks for controllers on a
computer and displays the name of the controller, the components (e.g. buttons or sticks), and
the component identifiers. The code used was exactly what was given on the forum so I won’t
reproduce it here. We plugged in a Logitech Rumblepad 2 controller and ran the program. It
picked it up along with the keyboard and mice connected to the computer. The output for the
Logitech Controller is given in the figure below.
Logitech RumblePad 2 USB
Type: Stick
Component Count: 17
Component 0: Z Rotation
    Identifier: rz
    ComponentType: Absolute Analog
Component 1: Z Axis
    Identifier: z
    ComponentType: Absolute Analog
Component 2: Y Axis
    Identifier: y
    ComponentType: Absolute Analog
Component 3: X Axis
    Identifier: x
    ComponentType: Absolute Analog
Component 4: Hat Switch
    Identifier: pov
    ComponentType: Absolute Digital
Component 5: Button 0
    Identifier: 0
    ComponentType: Absolute Digital
Component 6: Button 1
    Identifier: 1
    ComponentType: Absolute Digital
Component 7: Button 2
    Identifier: 2
    ComponentType: Absolute Digital
Component 8: Button 3
    Identifier: 3
    ComponentType: Absolute Digital
Component 9: Button 4
    Identifier: 4
    ComponentType: Absolute Digital
Component 10: Button 5
    Identifier: 5
    ComponentType: Absolute Digital
Component 11: Button 6
    Identifier: 6
    ComponentType: Absolute Digital
Component 12: Button 7
    Identifier: 7
    ComponentType: Absolute Digital
Component 13: Button 8
    Identifier: 8
    ComponentType: Absolute Digital
Component 14: Button 9
    Identifier: 9
    ComponentType: Absolute Digital
Component 15: Button 10
    Identifier: 10
    ComponentType: Absolute Digital
Component 16: Button 11
    Identifier: 11
    ComponentType: Absolute Digital
Figure 9: Logitech Rumblepad information
The controller has 17 components, 4 of which are analog and 13 of which are digital.
The 4 analog components represent the x and y directions of the 2 analog joysticks on the
controller. We want to set up the controller so the user controls the car using the left analog
joystick. Therefore, we had to determine which joystick had which components. This was
accomplished using the program to poll the controllers given in the Getting Started with JInput
forum. It was modified to display the component name and identifier if the component was
analog. Running the program and moving the analog joystick that we wanted to use for
controlling the car showed that the two components were Component 2: Y Axis and
Component 3: X Axis.
The next step was to implement the controller in our program. First we created a new
class called Control that implements the Runnable interface. This was done so that it could be
added as a thread to the existing program. Within the run method, an array of controllers
connected to the computer is retrieved and then the program looks for a controller with the name
“Logitech RumblePad 2 USB”. If it is found, then it is saved as the controller to use, otherwise
no controllers are used. Thus the program only works with Logitech RumblePad 2 USB
controllers.
Next the program enters a while loop that continuously polls the controller for input
until the controller becomes disconnected. The code is shown in the following figure.
while (true) {
    pad.poll();
    EventQueue queue = pad.getEventQueue();
    Event event = new Event();
    while (queue.getNextEvent(event)) {
        Component comp = event.getComponent();
        String id = comp.getIdentifier().toString();
        float value = event.getValue();
        if (comp.isAnalog()
                && (id.equals("x") || id.equals("y"))) {
            //System.out.println(value);
            controllerCommand(id, value);
        }
    }
    try {
        Thread.sleep(10);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    if (!hasController)
        return;
}
Figure 10: Getting controller events
After polling the controller and getting the event queue, the program loops through the
queue and checks if an event is caused by a component that is part of the joystick being used to
control the car. If it is, the command gets sent to the method controllerCommand, which
processes the command. After looping through the queue, the thread’s execution is halted for
10ms and then execution continues. At the very bottom is an if statement that returns from
the run method, thus terminating the thread, if the controller is no longer being used.
Now we can take a look at the controllerCommand method. The code is shown in the
following figure.
public void controllerCommand(String direction, float value) {
    if (isConnected) {
        if (direction.equals("x")) {
            //full left
            if (value < -0.75) {
                steer = 0;
            } else if (value < -0.5) {
                steer = 1;
            } else if (value < -0.25) {
                steer = 2;
            } else if (value < 0.25) {
                steer = 3;
            } else if (value < 0.5) {
                steer = 4;
            } else if (value < 0.75) {
                steer = 5;
            } else {
                steer = 6;
            }
            if (steer != prevSteer) {
                sendOut();
                prevSteer = steer;
            }
        }
        //moving forward
        else if (direction.equals("y")) {
            //stopped
            if (value > -0.2) {
                speed = 0;
            } else if (value > -0.4) {
                speed = 1;
            } else if (value > -0.6) {
                speed = 2;
            } else if (value > -0.8) {
                speed = 3;
            } else {
                speed = 4;
            }
            if (prevSpeed != speed) {
                prevSpeed = speed;
                sendOut();
            }
        }
    }
}
Figure 11: processing a command from the controller
First, the method checks if the program is connected to the camera, which is
represented by the isConnected variable. If it is connected, the command is processed and the
steering or speed variables are updated accordingly. The variables prevSpeed and prevSteer
keep track of the previous values of steer and speed. If nothing has changed then nothing is
sent to the camera. If something has changed, then the sendOut method is called. Initially, the
steer variable had only three possible values because there were only three possible steering
states: left, straight, and right. This made the car very hard to control with the controller
however, so more states were added. This required a redesign of the hardware as well. The
changes are described in the Camera to Microprocessor and Microprocessor to Car sections
below. One thing to note is that the speed increases as the analog stick moves in the negative y
direction. This is because the negative y direction is actually the up direction of the joystick.
The sendOut method sends the commands to the camera and updates the GUI. The
code is given below.
private void sendOut() {
    if (steer == 3) {
        leftInd.setIcon(directions[0]);
        rightInd.setIcon(directions[2]);
    }
    else if (steer < 3) {
        leftInd.setIcon(directions[1]);
    }
    else if (steer > 3) {
        rightInd.setIcon(directions[3]);
    }
    speedGauge.setIcon(speeds[speed]);
    sc.sendOut(steer, speed);
}
Figure 12: Sending commands to the camera and updating the GUI
The method first updates the icons on the GUI to reflect the changed state and then
calls the sendOut method of sc, where sc is an object of the SendCommand class. The
SendCommand class sends the correct value to the camera. The sendOut and setState methods
are given below.
public void sendOut(int st, int sp) {
    speed = sp;
    steer = st;
    setState();
    try {
        System.out.println(state);
        output.writeBytes(state + "\n");
    }
    catch (IOException e) {
        System.out.println("Failed to send command: " + e);
    }
}

private void setState() {
    state = (speed << 3) + steer;
}
Figure 13: Sending the state to the camera
First, the variable state is calculated from the values of speed and steer. Basically, speed
is bit shifted so that it becomes the highest three bits and steer is the lowest three bits of state.
Next, state is written to DataOutputStream output, which sends the integer to the camera.
Initially, the commands were sent to the camera from within the Cockpit class, but this
had to be abstracted to the SendCommand class so that a controller could be used. Since the
controller used a separate thread for execution, it had to be able to send commands. This
would have required a Cockpit object, which would have created a whole new instance of the
program. Thus creating the SendCommand class eliminated this problem.
The Cockpit class contains a Control object and a SendCommand object. It has a button
for enabling/disabling the controller. The code for the button is given below.
else if (e.getActionCommand().equals("toggleController")) {
    if (cont != null && cont.isAlive()) { //controller connected
        con.removeController();
        while (cont.isAlive()) {} // wait for thread to terminate
        if (cont.isAlive()) {
            System.out.println("Error removing controller");
        }
        else
            useConButton.setText("Use Controller");
    }
    else { //controller not connected
        cont = new Thread(con);
        cont.start();
        useConButton.setText("Remove Controller");
    }
}
Figure 14: Controller button
The code removes the controller if there is one or enables one if there isn’t a controller.
The variable con is a Control that gets initialized on startup and is used within the controller
threads.
The Cockpit class implements the ControllerListener interface so that it can detect
when controllers are plugged in or unplugged and act accordingly. If a controller is plugged in
then a new instance of the Control class is created and a new thread started. If a controller is
disconnected, then the thread is terminated. The instance of SendCommand, sc, is initialized
when Cockpit connects with the camera. It gets passed to the Control object, con, which uses it
to send commands to the camera. In this way, both the keyboard and the controller can be
used at the same time to control the car without sending conflicting commands.
Camera to Microprocessor
This section covers the interaction between the camera and microprocessor. The
camera needs to send signals to the microprocessor using its output. The output is activated in
different sequences based on the commands received from the user’s computer. The
microprocessor then interprets the sequence so it can create the correct PWM signal. This
section will be broken into two parts: Camera and Microprocessor. The Camera section will
focus on how the camera is able to output the correct signal sequence based on the command
received by the user. The Microprocessor section describes how the signals are received and
interpreted before changing the PWM signal that’s sent to the RC car’s control boxes.
Camera
The camera uses a program called iod to activate the output. This was described in the
Axis scripting guide. It works in the following way: iod is called with the argument “–script”,
followed by the output/input identifier, followed by a colon and a forward slash to activate or a
backslash to deactivate the output or input. In our case, we wanted to activate/deactivate the
output of the camera, which has the identifier of “1”. Thus, the command would look like
“$ ./iod -script 1:/” to activate the output or “$ ./iod -script 1:\” to deactivate the output. The
program is designed to be run from a script on the camera, as can be seen by the “-script”
argument. We needed to activate the output from within our own program, however, so we
ended up using the system command execl() to call the iod function. We implemented the
following code based on the signals generated as specified in Tables 1 and 2 above. The
program was just a test to measure how fast we could activate the output.
/*
 * testExecl.c
 *
 * Created on: Apr 1, 2009
 *     Author: bigdog
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <time.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char *argv[]) {
    int i;
    pid_t pid;
    struct timeval start, stop;
    for (i = 0; i < 60; i++) {
        gettimeofday(&start, 0);
        pid = vfork();
        if (pid == 0) /* fork succeeded */
        {
            if (i % 2 == 0) /* turn on output */
                execl("/bin/iod", "-script", "iod", "1:/", NULL);
            else /* turn off output */
                execl("/bin/iod", "-script", "iod", "1:\\", NULL);
        }
        else if (pid < 0) /* fork returns -1 on failure */
        {
            perror("fork"); /* display error message */
        }
        else {
            wait(NULL);
            gettimeofday(&stop, 0);
            long difsec = stop.tv_sec - start.tv_sec;
            long dif = stop.tv_usec - start.tv_usec;
            printf("sec:%ld usec:%ld \n", difsec, dif);
        }
    }
    return 1;
}
Figure 15: Activating the output using iod
As can be seen, we create a for loop that activates and then deactivates the output 30
times each (loops a total of 60 times). First, we grab the current time. Then, a process is forked
using the system command vfork(), which makes a copy of the current process, before calling
execl(). This is because execl() replaces the image of the calling process, so a process that calls it never returns to our loop. By forking first, only the forked copy is replaced and our loop continues.
The if statements are used to perform different commands based on the process. If the process
has a pid of 0, then we know that it is the forked process and we either activate or deactivate
the output. If pid is less than 0, an error has occurred and we exit. If neither of those conditions
are true, we know that it must be the main process. In this case, we wait for the forked process
to return, get the current time again, and calculate and display the difference between the start
and end times.
We compiled and ran the program on the camera to see how fast we activate the
output. The results were very surprising. The camera was not able to output the signals in
increments of 10 ms consistently. Most of the signals generated were significantly longer than
10 ms, varying between 10 and 50 ms. This was before we even started streaming video. With
video streaming from the camera to a computer via the preview webpage of the camera, the
signal generation took even longer and with more variation. We were getting signals varying in
length between 50 ms and 200 ms. This was much slower than anticipated and wouldn’t be
satisfactory for controlling the car. The large variation made it so that the signals would be
impossible to time using a microprocessor as well. We needed a way to speed up the signals
and get a consistent signal. This was achieved through examining the contents of the program
iod.
We tried multiple times to contact the camera manufacturer, Axis, to obtain information
about how to trigger the camera’s output without having to call the program iod. They were
very unresponsive, sending us to the FAQ section of the Axis website, which doesn’t contain
any information regarding the output of the camera. We were wasting a lot of time trying to
contact them with no results, so we eventually gave up and decided to try to figure out how the
output works on our own. iod is a compiled program and we didn’t have access to the source
code for it, so we knew that we needed to figure it out using reverse engineering.
First, we searched the Internet using Google for the function iod to see if we could
obtain any information about how it worked. That didn’t provide any answers, so we decided
to see if we could find out how electronic device outputs were activated in Linux. A version of
Linux is what runs on the Axis 207W camera we were using, so we figured if we found out how
to activate an output for something else, we could apply the concept here. An online search
for how to activate an output provided some useful information. The system command ioctl()
seemed to come up a lot for triggering hardware components, such as outputs. Further
investigation revealed that the ioctl() works by taking three arguments: a file descriptor (an
integer used to reference a file) that references a “file” that represents the hardware
component to be controlled, an integer representing the command to perform, and a bit
sequence representing what will be written to the file. The way it was used varied quite a bit
between different hardware components, so it was unclear how it could be used for the Axis
207W camera. Further searching didn’t reveal much information either. Eventually, we
decided to go to the source and examine the program iod to determine if and how it uses ioctl().
Initially, we tried writing and compiling a program that used ioctl() and then doing a hex
dump of the compiled program to find the hexadecimal sequence that represents ioctl(). Next,
we performed a hex dump on the compiled iod program and searched for the ioctl() hex
sequence obtained from the hex dump of our sample program that uses ioctl(). If we could find
the hex sequence then maybe we could determine what the hex representations of the
arguments to ioctl() were. The hex representation of the arguments could be passed directly to
the ioctl() system call without worrying what they actually represented. We were able to find
part of the hex sequence, but we weren’t able to determine if it represented the ioctl() function
because it wasn’t a complete match. One of the major problems was that the arguments to the
ioctl() could be anything because they were operating system dependent. Thus, the example
arguments we used in our sample program could be completely different from the ones used in
iod. The potential effect that different arguments would have on the compiled ioctl() call was
unknown as well. Also, iod could have been compiled using a compiler different from the one
we were using, thus potentially altering the hex representation of ioctl() from the one we
obtained from our own compiled program. Thus, trying to find ioctl() and determine what the
arguments were from the hex dump of iod became extremely difficult. A different approach
was needed.
We decided to take a look at the compiled iod program. Using the Open Office word
processor, we examined the contents of the iod program. Most of the output shown in Open
Office was compiled binary, so it looked like garbage, but we were able to glean some information
about how it works from the text in the program. The text from strings within the program
doesn’t change during compilation, so it showed up directly as written in Open Office. Figure 16 below shows some of the output obtained from iod. A lot of information about outputs/inputs, iod usage, and system variables showed up in the text output. As can be seen,
several interesting things, including variables named IO_SETGET_INPUT and
IO_SETGET_OUTPUT and references to I/O devices are present. This seemed like exactly what
we needed to use for controlling the output.
Figure 16: Inside iod
What next? Back to Google. A search for "IO_SETGET_OUTPUT axis 207w" turned up several
example programs that used it as an argument to the ioctl() function. One in particular was
very useful: a file from the Linux Cross Reference site called etraxgpio.h.[1] An excerpt from
the file is shown below. The file contains definitions of variables used for the ETRAX
processor. ETRAX processors are used in the Axis cameras that support the CRIS cross
compiler. This is a different processor from the one used on the Axis 207W, but since it comes
from the same company it seemed likely that the variables would be at least similar, if not
identical.
/*
 * For ETRAX FS (ARCH_V32):
 * /dev/gpioa  minor 0,  8 bit GPIO, each bit can change direction
 * /dev/gpiob  minor 1, 18 bit GPIO, each bit can change direction
 * /dev/gpioc  minor 2, 18 bit GPIO, each bit can change direction
 * /dev/gpiod  minor 3, 18 bit GPIO, each bit can change direction
 * /dev/gpioe  minor 4, 18 bit GPIO, each bit can change direction
 * /dev/leds   minor 5, Access to leds depending on kernelconfig
 */

/* supported ioctl _IOC_NR's */
#define IO_SETBITS        0x2  /* set the bits marked by 1 in the argument */
#define IO_CLRBITS        0x3  /* clear the bits marked by 1 in the argument */
#define IO_SETGET_OUTPUT  0x13 /* bits set in *arg is set to output,
                                * *arg updated with current output pins. */
Figure 17: Example code showing definitions of IO variables
The most interesting thing about this file is its mention of the gpio files located in the /dev
directory. An examination of the /dev directory on the Axis 207W revealed several gpio files.
These were the "files" that the file descriptors passed to ioctl() were referencing. The only
problem was determining which one controlled the output on the camera. Also of interest
were the variables IO_SETBITS and IO_CLRBITS, which could potentially be what ioctl() uses to
activate and deactivate the camera output. Finally, IO_SETGET_OUTPUT shows up, and it
appears to be used to determine which pins act as outputs. Now, how does all of this fit
together?
Searching online for "gpio ioctl" led to the discovery of an example program that uses ioctl()
and gpio to control hardware output and input (Hodge). The following lines of code are taken
from that program and seem to be a good example of how the camera output can be
controlled. Once again, this is for an ETRAX camera, but the commands for our camera would
probably be very similar since both cameras are made by Axis.
if ((fd = open("/dev/gpioa", O_RDWR)) < 0)
{
    perror("open");
    exit(1);
}
else
{
    changeBits(fd, bit, setBit);
}

void changeBits(int fd, int bit, int setBit)
{
    int bitmap = 0;

    bitmap = 1 << bit;
    if (setBit)
    {
        ioctl(fd, _IO(ETRAXGPIO_IOCTYPE, IO_SETBITS), bitmap);
    }
    else
    {
        ioctl(fd, _IO(ETRAXGPIO_IOCTYPE, IO_CLRBITS), bitmap);
    }
}
Figure 18: Example code for using ioctl()
First, the file "/dev/gpioa" is opened and then the ioctl() function is called. It gets passed the
file descriptor for "/dev/gpioa" and then a system macro called _IO() that takes two
arguments, the last of which looks familiar. The IO_SETBITS and IO_CLRBITS variables were
defined in the header file etraxgpio.h, as shown in Figure 17 above. Finally, there is a bit
sequence with a one at the location of the bit that is to be cleared or set; only one bit location
in the bitmap contains a one. This looks like exactly what we need to control the output, with
some minor modifications to fit our camera. We aren't using an ETRAX camera, so the first
argument to _IO(), ETRAXGPIO_IOCTYPE, will be different. At another location in the program
that is partially shown above, GPIO_IOCTYPE was used instead of ETRAXGPIO_IOCTYPE. This
may be what we need. All that is left is to determine which gpio file controls the output and
what the arguments to ioctl() should be.
A test program was written to determine the gpio file and bitmap used to control the output
on the Axis 207W. It was assumed that _IO(GPIO_IOCTYPE, IO_CLRBITS) was the correct
second argument of ioctl() to deactivate the output and _IO(GPIO_IOCTYPE, IO_SETBITS) was
the correct second argument to activate it. One problem, however, was that the variables
GPIO_IOCTYPE, IO_SETGET_OUTPUT, IO_SETBITS, and IO_CLRBITS were undefined. We had no
idea what header file we needed to include or where it was located. We found the Linux
program Beagle, which indexes files and then searches within them for a keyword. We knew
that the header file had to be located in the cross-compiler for the Axis 207W. Using Beagle,
the entire cross-compiler folder was indexed and searched for "IO_SETBITS". This took several
hours, but eventually the file "gpiodriver.h" was found (see the last "#include" statement in
Figure 19). It contained the definitions of the four variables listed above. Now we were ready
to test.
The test program opened all of the gpio files it could and looped through them with different
bit sequences (the third argument to ioctl()) in order to determine the correct sequence for
activating the camera's output. A few gpio files were ruled out right away because they
couldn't be opened. Others caused a hardware error when ioctl() was used on them. The code
is shown below in Figure 19.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <asm/arch/gpiodriver.h>
/*
 * testExecl.c
 *
 * Created on: Apr 1, 2009
 *     Author: bigdog
 */
int main(int argc, char *argv[]) {
    int l = 0;
    int a = 1;
    int intb = 0, intc = 0, intg = 0; /* updated by IO_SETGET_OUTPUT with the current output pins */
    printf("before a");
    int fda = open("/dev/gpioa", O_RDWR);
    printf("before b");
    int fdb = open("/dev/gpiob", O_RDWR);
    printf("before c");
    int fdc = open("/dev/gpioc", O_RDWR);
    printf("before g");
    int fdg = open("/dev/gpiog", O_RDWR);
    if (fda < 0)
        printf("Hello world a\n");
    else {
        printf("inside a\n");
    }
    if (fdb < 0)
        printf("Hello world b\n");
    else {
        printf("inside b\n");
        ioctl(fdb, _IO(GPIO_IOCTYPE, IO_SETGET_OUTPUT), &intb);
    }
    if (fdc < 0)
        printf("Hello world c");
    else {
        printf("inside c");
        ioctl(fdc, _IO(GPIO_IOCTYPE, IO_SETGET_OUTPUT), &intc);
    }
    if (fdg < 0)
        printf("Hello world g\n");
    else {
        printf("inside g\n");
        ioctl(fdg, _IO(GPIO_IOCTYPE, IO_SETGET_OUTPUT), &intg);
    }
    while (l < 1000) {
        printf("gpioa %d\n", l);
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        sleep(5);
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        sleep(5);
        printf("gpiob %d\n", l);
        ioctl(fdb, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        sleep(5);
        ioctl(fdb, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        sleep(5);
        printf("gpioc %d\n", l);
        ioctl(fdc, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        sleep(5);
        ioctl(fdc, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        sleep(5);
        printf("gpiog %d\n", l);
        ioctl(fdg, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        sleep(5);
        ioctl(fdg, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        sleep(5);
        a = a << 1;
        l++; /* advance to the next bitmap/iteration */
    }
    close(fda);
    close(fdb);
    close(fdc);
    close(fdg);
    return 0;
}
Figure 19: Test program to figure out how to use ioctl() with the Axis 207W camera
The program shown above is the version that eventually ran without causing any errors. First,
gpioa, gpiob, gpioc, and gpiog were opened and given the file descriptors fda, fdb, fdc, and
fdg respectively. If a file failed to open, the message "Hello world *" was printed; otherwise
"inside *" was printed, where * is a, b, c, or g for the corresponding gpio* file. Then the
program entered a loop that tried different bitmaps (the variable 'a') within ioctl() and printed
which gpio file was being tested and which loop iteration was currently being processed. We
started with a=1 and bit shifted 'a' left by 1 each iteration, so the bitmap always had the form
1* where * represents some number of zeros. The single '1' in the bitmap corresponds with
what was used in the example program found online and shown in Figure 18 above. A five
second pause was put between calls to ioctl() so that we could see exactly when the output
got activated. We hooked an oscilloscope up to the camera output so that we could see when
it became active. As soon as the camera output became active, we could look at the output
printed on the command line to see which gpio file and loop iteration the program was on.
Using this strategy, and with a little patience, the camera output eventually became activated!
Examining the command line output, we determined that the file used for controlling the
camera output was gpioa and the bitmap was 1000 in binary (a one in bit 3). We could now
control the camera output from within our own program.
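With that, controlling the output reduces to a few lines. The sketch below distills the recipe (the function names are ours, for illustration):

/* Minimal sketch: toggling the Axis 207W output with what we learned.
 * gpioa is the controlling device and bit 3 (binary 1000) is the output. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <asm/arch/gpiodriver.h>

static int fd = -1;
static int bitmap = 1 << 3;

int output_init(void) {
    fd = open("/dev/gpioa", O_RDWR);
    return (fd < 0) ? -1 : 0;
}
void output_high(void) { ioctl(fd, _IO(GPIO_IOCTYPE, IO_SETBITS), bitmap); }
void output_low(void)  { ioctl(fd, _IO(GPIO_IOCTYPE, IO_CLRBITS), bitmap); }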
We still needed to determine whether ioctl() would be fast enough to control the output the
way we needed. To test this, we implemented another program that activated and
deactivated the camera output as fast as possible using ioctl(). The code is shown in Figure 20
below.
/*
 * outputSpeedTest.c
 *
 * Created on: May 9, 2009
 *     Author: bigdog
 */
#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
//these are for the alarm output
#include <fcntl.h>
#include <sys/ioctl.h>
#include <asm/arch/gpiodriver.h>
#include <time.h>
#include <sys/time.h>
int main(int argc, char *argv[]) {
int i,fda;
struct timeval start, stop;
//printf("COMMAND RECEIVED: %d\n", state);
int a = 1 << 3;
//six signals sent
//low order bits sent first
//e.g. if 010110=22 is sent, then the output is 011010
fda = open("/dev/gpioa", O_RDWR);
for (i = 0; i < 1000000; i++) {
gettimeofday(&start, 0);
ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
gettimeofday(&stop, 0);
long difsec = stop.tv_sec - start.tv_sec;
long dif = stop.tv_usec - start.tv_usec;
printf("sec:%ld usec:%ld \n", difsec, dif);
}
//set the output low at the end
ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
return 1;
}
Figure 20: Test program to measure the speed at which ioctl can trigger the camera's output
The program activates and then deactivates the camera's output a million times, measuring
and displaying the time taken for each iteration. We also used an oscilloscope to measure the
period of the signal and look at the waveform created by the output. Although there was
some variation in timing, we determined that the output could always be activated and
deactivated in less than one millisecond. Next, we tested it while streaming video to a PC. The
results were identical: streaming video had no effect on the speed at which the output could
be activated using ioctl(). This was much faster than we had originally planned for, allowing us
to make some speed improvements.
The ioctl() commands for controlling the output were added to the server code next. The
server activates the output in a function called pulse(), shown in Figure 21.
void pulse(int state){
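    /* fda (descriptor for /dev/gpioa) and the bitmap 'a' (bit 3) are
       globals initialized elsewhere in the server */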
int i;
printf("COMMAND RECEIVED: %d\n", state);
int high, low;
for(i = 0; i<state; i++){
ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
usleep(1);
}
//after the right amount of pulses have been sent
//sleep for 20 ms for microprocessor to know end of signal
//printf("%s","END_");
high = ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
usleep(20000);
low = ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
}
Figure 21: Initial code to trigger the camera's output within the server
The program takes in an integer representing the number of pulses to be sent to the
microprocessor. Next, the program executes a for loop that sends the required number of
signals to the microprocessor. A call to usleep() had to be added after deactivating the output
because the microprocessor wouldn’t register the signal as going low if the output was
reactivated too quickly after being deactivated. Thus every time the signal went low, we paused
before sending the next signal. We only slept for 1 microsecond, but the usleep() system call
takes around 10 ms on average to return because the operating system has to process the
call. If other programs are running at the same time, the delay could be longer. We measured
the time in a test program and it stayed consistently at about 10 ms. The final signal sent was
at least 20 ms in length. It marked the end of the signal sequence and was easily
distinguishable from the other signals, which never reached 20 ms in length.
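The timing test was along these lines (a minimal sketch, not our exact program):

/* Sketch: measure how long usleep(1) actually takes on the camera. */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int main(void) {
    struct timeval start, stop;
    int i;
    for (i = 0; i < 10; i++) {
        gettimeofday(&start, 0);
        usleep(1);              /* request 1 us... */
        gettimeofday(&stop, 0); /* ...expect roughly 10 ms */
        printf("usec: %ld\n", (stop.tv_sec - start.tv_sec) * 1000000L
                              + (stop.tv_usec - start.tv_usec));
    }
    return 0;
}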
The total length of the signal sequence varied between 20 ms (just the end signal,
representing straight and no speed) and about 160 ms (representing full speed and turning
right). See Table 3 below for the number of signals versus RC car speed and direction. In the
steering column, -1 represents turning left, 0 going straight, and 1 turning right. Speed
increases from 0 (stopped) to 4 (full speed).
Table 3: Number of signals sent and the corresponding speed and direction of the RC car
# of signals    Speed    Steering
      0           0          0
      1           0         -1
      2           0          1
      3           1          0
      4           1         -1
      5           1          1
      6           2          0
      7           2         -1
      8           2          1
      9           3          0
     10           3         -1
     11           3          1
     12           4          0
     13           4         -1
     14           4          1
The signal sequence takes over a tenth of a second at speeds 3 and 4. This is noticeable,
especially when trying to steer the RC car. We considered reversing the signal sequences so
that the faster speeds were represented by shorter sequences; the car would then be much
more responsive at higher speeds, making it easier to steer. We didn't end up implementing
that idea, however. Instead, we changed the scheme so that a sequence of six bits determined
the speed and direction.
Initially, we wanted to loop through the bits in the sequence number and activate the
camera's output if the current bit was a one, or deactivate the output if it was a zero. Each
output would be followed by a 10 ms pause (we'll call it a pulse). The microprocessor would
wait 10 ms and then read whether the output on the camera was active or not. If it was
active, it would count the bit as a one; if not, it would take it as a zero. In this way it would
reconstruct the number representing the command sent from the user to the server. For
example, if the user sent 34 (100010 in binary), the camera's output would be off for one
pulse, on for one, off for three more, and on for the final one (the pulses are sent starting
with the lowest order bits). The microprocessor would determine that 34 was sent from the
pulses it received from the camera's output. The problem was that we weren't able to get a
consistent 10 ms pause. We tried using usleep(), but it wasn't accurate enough. We thought
about making the pause longer (e.g. 20 ms), but increasing it enough that the variation in
usleep() didn't matter would slow the overall signal sequence time down considerably.
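To make the abandoned scheme concrete, it would have looked something like the following sketch (never implemented; fda, a, and the constants follow the server code above):

/* Sketch of the abandoned bit-serial idea, shown only to illustrate it. */
void pulse_bits(int state) {
    int bit;
    for (bit = 0; bit < 6; bit++) {   /* lowest order bits sent first */
        if ((state >> bit) & 1)
            ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        else
            ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        usleep(10000);  /* 10 ms bit period -- too jittery in practice */
    }
}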
Thus, we decided to split the six bit sequence so that the 3 highest bits determined the speed
and the 3 lowest bits determined the direction. The "sequence" is sent as an integer from the
client to the server, which parses the integer to separate the three highest bits from the three
lowest bits. The highest bits are stored as one integer and the lowest bits as another. For
example, if the number 34 (100010 in binary) is sent to the server, it is parsed as 4 (100 in
binary) for speed and 2 (010 in binary) for steering. The code for our program is shown in
Figure 22 below.
void pulse(int state){
    int i, j;
    printf("COMMAND RECEIVED: %d\n", state);
    int speed, steer;
    speed = state >> 3;           //high order bits
    steer = state - (speed << 3); //low order bits
    for(i = 0; i < steer; i++){   //steering bits
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        for(j = 0; j < 5000; j++){}  //pause
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        for(j = 0; j < 10000; j++){} //pause
    }
    ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
    usleep(5000); //sleep for at least 5 ms to mark end of steering sequence
    ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
    for(j = 0; j < 10000; j++){} //pause
    for(i = 0; i < speed; i++){  //speed bits
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
        for(j = 0; j < 5000; j++){}  //pause
        ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
        for(j = 0; j < 10000; j++){} //pause
    }
    ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
    usleep(20000); //sleep for at least 20 ms to mark end of speed sequence
    ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
    for(j = 0; j < 10000; j++){} //pause
}
Figure 22: Six bit signal sequence code
First, the values for speed and steering are calculated: speed is the 3 highest bits of the
variable "state" and steer is the 3 lowest bits. Next, the output is activated "steer" times,
representing the direction the car will travel. For loops were used as pauses instead of
usleep() because the length of a for loop pause can be controlled more precisely. In our case,
we only need to pause long enough for the microprocessor to register that the output has
been activated or deactivated. With usleep() we would be waiting about 10 ms minimum, as
discussed earlier; with a for loop, that time can be cut down considerably. For example, in the
program above, the for loop pause of 10000 iterations takes less than 2 ms. After sending the
appropriate number of signals for steering, a pause of at least 5 ms is inserted to mark the end
of that signal sequence. Since the for loop pauses never reach 5 ms, the microprocessor can
distinguish the terminating signal from the rest. Next, the signals representing the speed are
sent, followed by a pause of at least 20 ms. The usleep(5000) pause shouldn't take much
longer than 15 ms (5 ms for the pause and about 10 ms to process the usleep() call), so the
microprocessor can distinguish the end of the first sequence from the end of the second
based on the length of the terminating signal. This provided a considerable drop in signal
sequence length compared to the old method of pausing with usleep() after each signal and
using a 20 ms terminating signal. It also allows for additional speed and steering states
without much increase in the total signal sequence length. See the table below (Table 4) for
the updated signal combinations and lengths.
Table 4: Steering and Speed signal sequences sent and the corresponding time taken to send the signals
# of Speed Signals    # of Steering Signals    Approx. Total Signal Sequence Length (ms)
        0                      0                                 45
        0                      1                                 47
        0                      2                                 49
        0                      3                                 51
        0                      4                                 53
        0                      5                                 55
        0                      6                                 57
        1                      0                                 47
        1                      1                                 49
        1                      2                                 51
        1                      3                                 53
        1                      4                                 55
        1                      5                                 57
        1                      6                                 59
        2                      0                                 49
        2                      1                                 51
        2                      2                                 53
        2                      3                                 55
        2                      4                                 57
        2                      5                                 59
        2                      6                                 61
        3                      0                                 51
        3                      1                                 53
        3                      2                                 55
        3                      3                                 57
        3                      4                                 59
        3                      5                                 61
        3                      6                                 63
        4                      0                                 53
        4                      1                                 55
        4                      2                                 57
        4                      3                                 59
        4                      4                                 61
        4                      5                                 63
        4                      6                                 65
As can be seen, there are many more states than in Table 3. The longest sequence is only
about 65 ms, however, which is nearly 100 ms shorter than the longest sequence from Table 3
(about 160 ms). More speed signals represent faster speeds, so 0 is stopped and 4 is full
speed. For steering, 0 represents full left, 3 represents straight, and 6 represents full right.
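On the client side, composing the command integer is just the inverse of the parsing done in pulse(). A minimal sketch (the helper name is ours, not taken from the client code):

/* Hypothetical helper illustrating how a (speed, steering) pair maps to
 * the single integer sent to the server: 3 high bits for speed (0-4),
 * 3 low bits for steering (0 = full left, 3 = straight, 6 = full right). */
int encode_command(int speed, int steer) {
    return (speed << 3) | steer;  /* e.g. speed 4, steer 2 -> 100010 = 34 */
}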
This concludes the Camera side of the Camera to Microprocessor connection. Now we
turn to the Microprocessor side.
Microprocessor
Most of the microprocessor code was originally written for and tested on the Dragon12
development board, but it needed to be ported to the Dragonfly12 microprocessor, which is
what is used on the RC car while it is driving. That turned out to be much more difficult than
expected; the procedure in its entirety can be found in Appendix B. The code in this section is
what was put on the Dragonfly12 and has only minor modifications from what was developed
on the Dragon12 development board.
The microprocessor first had to be set up to be able to receive the signals from the
camera. The following code performs this step.
void init_Timer(void){
    //uses PT0 for input
    TIOS = 0x00;  //input capture on all channels (including PT0)
    TCTL4 = 0x03; //input capture on both rising and falling edges (PT0)
    TCTL3 = 0x00; //clear input control for control logic on other channels
    TIE = 0x01;   //enable interrupt for PT0
    TSCR2 = 0x07; //set prescaler value to 128 (timer freq = 375 KHz)
    TSCR1 = 0x90; //enable timer and fast flag clear
}
Figure 23: Timer register initialization
It is well commented, so there isn't much need to go into detail about what it does. The main
things to notice are that port PT0 is what is used to capture the input signals from the
camera's output. It captures on both rising and falling edges; since the interrupt is enabled for
PT0, an interrupt will be triggered both when the camera's output is activated and when it is
deactivated. The last thing to notice is that the prescaler value is set to 128, which means the
timer frequency equals the bus frequency divided by 128. This is one of the cases where the
Dragonfly12 differs slightly from the Dragon12 development board: the Dragon12 board has a
bus frequency of 24 MHz, while the Dragonfly12 has a bus frequency of 48 MHz. We want the
timer clock frequency to be 375 KHz, so we divide 48 MHz by 128 to get 375 KHz (divide by 64
when using the Dragon12 development board). The reason for 375 KHz is that the timer runs a
16 bit counter that increments by one every cycle. We want it to increment as slowly as
possible because it is used to time the signals sent by the camera's output. If it increments too
quickly, it may overflow more than once while measuring a signal, which would make it
impossible to determine the signal's length. At 375 KHz the 16 bit counter overflows about
375,000/65,536 ≈ 6 times a second. The microprocessor has a bit that gets set on a timer
overflow, so we can accurately measure about twice as much time, or up to roughly 1/3 of a
second. This is much more time than we need for any of our signals, but the extra margin for
error doesn't hurt.
The following code is what ran on the microprocessor to measure the signals sent to it from
the camera's output. It corresponds to the initial implementation of the output signals given
in Figure 21.
#include <mc9s12c32.h>
unsigned int count = 0;
unsigned int length = 0;
unsigned int begin = 0;
byte total = 0;
interrupt 8 void TOC0_Int(void){
//rising edge
if(PTT_PTT0) {
begin = TC0;
} else { //falling edge
//length = time between rising edge and falling edge in 1/1000 secs
length = (TC0 + (0xFFFF*(TFLG2_TOF)) - begin)/375;
TFLG2_TOF &= 1;//clear bit
//end of signal (signal length greater than or equal to 18 ms)
if(length > 18) {
//not turning
if(count % 3 == 0) {
PWMDTY2 = 15;
}
else if (count % 3 == 1){ //turn left
PWMDTY2 = 11;
} else{
PWMDTY2 = 19; //turn right
}
PWMDTY1 = count/3 + 153;
count = 0;
}
else{ //not the end of the signal
count++;
}
begin = 0;
}
}
Figure 24: Microprocessor code to interpret the signals from the camera's output (initial design)
This method gets called every time there is an interrupt on channel PT0, which is specified by
the "interrupt 8" at the beginning of the method header. First we determine whether it was a
rising or falling edge. If it is a rising edge, then PT0 will be high and the variable PTT_PTT0 will
be equal to 1 (representing the status of PT0). In this case we just set the variable "begin"
equal to the current value in the capture register TC0. Since PT0 is set up to capture on both
rising and falling edges, TC0 captures the current value of the timer counter on every defined
transition of PT0 (both rising and falling). If it is a falling edge, the length of the signal is
calculated in milliseconds. This is done by subtracting the value in "begin" from the value
captured in TC0 when PT0 went low. TFLG2_TOF gets set if the timer counter overflows; if it
did, we add the maximum timer value (0xFFFF) to TC0. Finally, we divide by 375: since the
counter increments 375,000 times a second (375 KHz), dividing by 375 puts the length in
milliseconds. For example, a 20 ms terminating signal spans 7,500 timer counts, and
7500/375 = 20. Next, we check whether the length is greater than 18 ms. If it is, we know we
have a terminating signal and we change the PWM output signals accordingly (see
Microprocessor to Car). Otherwise, we know it is just part of the signal sequence, and count
(the number of signals received) is incremented.
The following code was written to handle the camera's output as produced by the code
shown in Figure 22. It is only slightly different from the code shown in Figure 24, and the
initialization code shown in Figure 23 remained unchanged.
byte count = 0;
unsigned int length = 0;
unsigned int begin = 0;
int total = 0;
interrupt 8 void TOC0_Int(void){
//rising edge
if(PTT_PTT0) {
begin = TC0;
} else { //falling edge
//length = time between rising edge and falling edge in 1/1000 secs
length = (TC0 + (0xFFFF*(TFLG2_TOF)) - begin)/375;
TFLG2_TOF &= 1; //clear bit
if(length > 19) { //end of speed sequence (terminating signal of 20 ms or more)
PWMDTY1 = count+153;
count = 0;
} else if(length > 4){
//end of steering sequence
if(count == 0){
PWMDTY2 = 11; //full left
} else if(count == 6){
PWMDTY2 = 19; //full right
} else{
PWMDTY2 = count + 12; //inbetween
}
count = 0;
} else{ //not the end of the signal
count++;
}
begin = 0;
}
}
Figure 25: Microprocessor code to interpret the signals from the camera's output (final design)
The program is essentially the same up to the first if statement involving the length. If the
length is greater than 19 ms, we know it is a terminating signal and the sequence of signals
just received represents the speed; the PWM signal for speed is updated accordingly.
Otherwise, if the length is greater than 4 ms (but not greater than 19 ms), we know that the
terminating signal for the steering sequence has just been received, and the PWM signal for
steering is updated accordingly. If neither of those conditions is true, then a signal in a
sequence was received and count is incremented. Every time a terminating signal is received,
count is reset for the next signal sequence.
No major problems were encountered during the design of the microprocessor code. The
hardest part was making sure everything was set appropriately at all times: bits were cleared
correctly, timing was correct, the correct registers were being used, and so on. We did
encounter one problem that we couldn't fix and instead designed a workaround for, as
follows.
Initially, we used a variable in the microprocessor to try to keep track of whether we
were on the first sequence or the second. That way we could terminate both the steering and
speed signal sequences using a signal of the same duration (e.g. 5ms), thus potentially speeding
things up. The variable would be 1 if the camera was sending signals representing speed and 0
if the camera was sending the signal sequence for steering. During testing, however, the
variable would become reversed for some reason, thus it would be 1 during the steering
sequence and 0 during the speed sequence. The microprocessor would send the wrong signals
to the car and so nothing would work correctly. We ended up just adding in the terminating
signals of different lengths to avoid the problem, as outlined above. The microprocessor could
determine whether it received the steering sequence or the speed sequence based on the
length of the terminating signal.
Microprocessor to Car
This section focuses on the generation of the PWM signal that gets sent to the control
boxes on the RC car. It covers how the PWM register of the microprocessor is set up and how
the microprocessor determines the signal to send to the control boxes on the RC car based on
the input received from the camera.
The microprocessor PWM register is initialized as shown in the following code.
//This uses pins PT1 for speed and PT2 for steering
void init_PWM(void){
    //set up channels 0 and 1 for speed and channel 2 for steering
    PWMCTL = 0x10;  //concatenate channels 0 and 1 into a 16 bit PWM channel
    MODRR = 0x06;   //set PT1 and PT2 as outputs of PP1 and PP2 respectively
    //channel 1 is low order bits, channel 0 is high order bits
    //all options for the 16 bit PWM channel are determined by channel 1 options
    PWME = 0x06;    //enable PWM channels 1 and 2
    PWMPOL = 0x06;  //set polarity to start high/end low (channels 1 and 2)
    PWMCLK = 0x06;  //clock SA is the source for channel 1 and SB for channel 2
    //set clock A prescaler to 16 (A = E/16 = 3,000,000 Hz) and
    //clock B prescaler to 32 (B = E/32 = 1,500,000 Hz), E = 48,000,000 Hz
    PWMPRCLK = 0x54;
    PWMCAE = 0x00;  //left align outputs
    PWMSCLA = 0x0F; //SA = A/(15*2) = 100,000 Hz
    PWMSCLB = 0x4B; //SB = B/(75*2) = 10,000 Hz
    //The combined periods of channels 0 and 1 represent the period
    //for the 16 bit channel (channel 0 is high order, channel 1 low order)
    //Period for 16 bit channel = (period of SA)*2000 = (1/100,000)*2000 = 0.02 s (50 Hz)
    PWMPER0 = 0x07; //high order
    PWMPER1 = 0xD0; //low order
    PWMPER2 = 0xC8; //Period for channel 2 = (period of SB)*200 = (1/10,000)*200 = 0.02 s (50 Hz)
    //Duty cycle for 16 bit channel = (150/2000)*0.02 = 0.0015 seconds
    PWMDTY0 = 0x00; //high order
    PWMDTY1 = 0x96; //low order
    PWMDTY2 = 0x0F; //Duty cycle for channel 2 = (15/200)*0.02 = 0.0015 seconds
}
Figure 26: PWM register initialization
Most of the code can be understood from the comments. The main thing to notice is that
channels 0 and 1 of the PWM register are concatenated to create a single 16 bit channel
(versus two 8 bit channels if used separately). This allows finer control over the PWM signal
generated on channels 0 and 1. When concatenated, only the settings for channel 1 are used
and the settings for channel 0 are ignored. The RC car picks up speed very quickly, so we use
the concatenated channel to control the speed of the car; this lets us increment the speed in
much smaller steps than an 8 bit channel would. We use channel 2 as the steering output. We
don't need fine control over steering, so we leave it as an 8 bit channel. Setting the MODRR
register to 0x06 sets the pins PT1 and PT2 as the outputs of PWM channels PP1 and PP2
respectively. This is another difference between the Dragonfly12 and the Dragon12: the
Dragonfly12 has fewer pins than the Dragon12, and it lacks pins for ports PP1 and PP2. Thus,
pins PT1 and PT2 are linked to ports PP1 and PP2 so that they output the PWM signal.
Another important part is the clock setup used to generate the PWM signal. It works the same
way the timer register does to slow down the bus clock: the 48 MHz bus clock is divided down
based on the values in PWMPRCLK, PWMSCLA, and PWMSCLB. The PWM module has four
clocks it uses for creating signals, called A, B, SA, and SB. A and B are slowed versions of the
bus clock; SA is a further slowed version of clock A, and SB a further slowed version of clock B.
The value in PWMPRCLK shown in the code above divides the bus clock by 32 for clock B
(1.5 MHz) and by 16 for clock A (3 MHz). PWMSCLA then slows clock A even more: its value
divides clock A by 30, making SA equal to 100 KHz. The value in PWMSCLB divides clock B by
150, making SB equal to 10 KHz. Clock SA is used for channel 1 and clock SB for channel 2.
Next, the periods of the PWM signals are set. Both the steering and speed controls expect
20 ms PWM signals, so a 20 ms period is used for both. The period equals the clock period
multiplied by the value in the channel's PWMPER register. Channel 1 uses clock SA (100 KHz),
so the value in the PWMPER register must be 2000 to create a PWM signal with a 20 ms
period. Since channels 0 and 1 are concatenated, PWMPER0 and PWMPER1 are concatenated
as well: PWMPER0 contains the high order bits and PWMPER1 the low order bits. Likewise,
channel 2 needs a 20 ms signal, set via the PWMPER2 register. Channel 2 uses clock SB, which
runs at 10 KHz, so a value of 200 is put into PWMPER2 to create a 20 ms period.
The last step is setting the duty cycle of each PWM signal, i.e. the fraction of the period where
the signal is high. The duty cycles for the PWM signals sent to the RC car's control boxes vary
between 1 and 2 ms. A 1.5 ms pulse represents no speed when sent to the speed control box
and straight when sent to the steering control box, so this is the value initially given as the
duty cycle for both PWM signals. Since the duty cycle is given as a fraction of the period, this
means we need a duty cycle of 1.5/20 = 15/200. Since the PWMPER2 register contains 200,
the value 15 is put into the PWMDTY2 register. Likewise, the value 150 is put into the
concatenated PWMDTY0 and PWMDTY1 registers. This finishes the initialization of the PWM
registers.
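As a sanity check, the register values can be derived from a desired pulse width. The helpers below are hypothetical (ours, not part of the project code), assuming the 50 Hz periods configured in Figure 26:

/* Hypothetical helpers: convert a desired pulse width in microseconds
 * to the duty-cycle register values, given the periods in init_PWM(). */
unsigned int speed_duty(unsigned int pulse_us) {
    /* 16-bit channel: 2000 counts per 20 ms, so one count = 10 us.
       1500 us -> 150, matching PWMDTY0:PWMDTY1 = 0x0096. */
    return pulse_us / 10;
}
unsigned char steer_duty(unsigned int pulse_us) {
    /* channel 2: 200 counts per 20 ms, so one count = 100 us.
       1500 us -> 15, matching PWMDTY2 = 0x0F. */
    return (unsigned char)(pulse_us / 100);
}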
The next step is modifying the PWM signal based on the input received from the camera's
output. Once again, we start with the code, as previously given in Figure 24 above, which was
used during the initial implementation that had only three steering states.
#include <mc9s12c32.h>
unsigned int count = 0;
unsigned int length = 0;
unsigned int begin = 0;
byte total = 0;
interrupt 8 void TOC0_Int(void){
//rising edge
if(PTT_PTT0) {
begin = TC0;
} else { //falling edge
//length = time between rising edge and falling edge in 1/1000 secs
length = (TC0 + (0xFFFF*(TFLG2_TOF)) - begin)/375;
TFLG2_TOF &= 1;//clear bit
//end of signal (signal length greater than or equal to 18 ms)
if(length > 18) {
//not turning
if(count % 3 == 0) {
PWMDTY2 = 15;
}
else if (count % 3 == 1){ //turn left
PWMDTY2 = 11;
} else{
PWMDTY2 = 19; //turn right
}
PWMDTY1 = count/3 + 153;
count = 0;
}
else{ //not the end of the signal
count++;
}
begin = 0;
}
}
Figure 27: Microprocessor code to create correct PWM signal for the RC car's control boxes (initial design)
Referencing Table 3, we can see that the number of signals mod 3 can be used to determine
the direction: if the result is 0, the car should go straight; if the result is 1, the car should turn
left; and if the result is 2, the car should turn right. This is implemented above, where count is
the number of signals. If we detect a terminating signal (length > 18), the duty cycle for the
steering (PWMDTY2) is updated with the appropriate value, as stated in the comments. It can
also be seen from Table 3 that the number of signals divided by 3 equals the value for the
speed. In the code above, we add count/3 to 153 and store the result in PWMDTY1. It is added
to 153 because the car doesn't start to move until a duty cycle value of at least 154 is reached.
The speed picks up very quickly, however, so the duty cycle is only incremented by one for
each increasing speed. Since the speed duty cycle value of 153 represents 1.53 ms,
incrementing by one increases the pulse width by only 0.01 ms, or 10 µs.
The following code was given previously in Figure 25 above and represents the final version,
which allows up to 8 steering states and 8 speed states. None of the code used to initialize the
PWM register in Figure 26 had to be changed.
byte count = 0;
unsigned int length = 0;
unsigned int begin = 0;
int total = 0;
interrupt 8 void TOC0_Int(void){
//rising edge
if(PTT_PTT0) {
begin = TC0;
} else { //falling edge
//length = time between rising edge and falling edge in 1/1000 secs
length = (TC0 + (0xFFFF*(TFLG2_TOF)) - begin)/375;
TFLG2_TOF &= 1;//clear bit
//end of signal (signal length greater than or equal to 3 ms)
if(length > 19) { //end of speed sequence
PWMDTY1 = count+153;
count = 0;
} else if(length > 4){
//end of steering sequence
if(count == 0){
PWMDTY2 = 11; //full left
} else if(count == 6){
PWMDTY2 = 19; //full right
} else{
PWMDTY2 = count + 12; //inbetween
}
count = 0;
} else{ //not the end of the signal
count++;
}
begin = 0;
}
}
Figure 28: Microprocessor code to create correct PWM signal for the RC car's control boxes (final design)
There are only small differences between this code and the code in Figure 27 in terms of how
the duty cycles are changed. The main difference is that separate signal sequences for steering
and speed are received by the microprocessor. If the terminating signal is at least 20 ms long
(length > 19), the previous signal sequence was for speed, and PWMDTY1 is updated
accordingly; in this case count is just added to 153. If the length is greater than 4 ms, then the
signal sequence was for steering. Since there are only 7 steering states (0-6) but 9 possible
values for PWMDTY2 (11-19), a couple of duty cycle values had to be left out. We decided to
leave out values 12 and 18. This allows the user to turn full left (11) or full right (19) while
having finer control near the middle (13-17).
Work Completed
Our group has been researching many of the different aspects of the project to increase
our understanding of the problem. Much of this research has been mentioned above and has
shaped our ideas for the design and organization of the project.
Research areas we have covered which will likely play a role in our project:

• IP Cameras: comparing different makers and models and gaining an understanding of their
basic purpose and functionality, to evaluate how we can use them to fulfill our requirements.

• TCP/IP and other related protocols such as RTSP: to understand how we can maintain
communications and send commands between the IP Camera and the remote computer[7].

• Embedding software: to write a Server that will run on the camera, we have had to install
the Axis C compiler. Programs compiled with this compiler can be run on the IP Camera.

• The output that the camera provides: this has recently been a focal point of our efforts. We
have looked at manuals and datasheets and contacted support at Axis to convince ourselves
that we will be able to manipulate it to serve our purpose.

• The output from the servo that is currently the controlling hardware on an RC car: to
understand what exactly we are replacing.

• Understanding how to use a Dragon12 development board along with CodeWarrior. This
will be used to program the microprocessor on the RC car.
Some basic user interface mock-ups have been made to familiarize ourselves with some
aspects of the GUI that do not directly relate to the networking research we are doing. Using
the Processing language, we have successfully read in a continuous video stream from a web
cam and displayed it in an application window. This will obviously be an important aspect of
our project.

A basic Server has been written in C, using commands such as the few mentioned in the
research section. In addition, to get a little experience with 'non-blocking' client/server code,
we have written a non-blocking Client and Server in Java using the Selector class[9].

We have written a basic program for the Dragon12 development board which illustrates how
to use the microprocessor's clock frequency to control an output signal. The program used a
timing register that counts up to 2^16 - 1 (65535) to delay interrupts which would cause an
LED to turn on and off. Using one less than the maximum value (65534), an LED could be
made to blink at a rate of about once every 1/3 of a second. A second LED was programmed
to blink twice as fast (delay of 32767). This was done using CodeWarrior after setting up the
Dragon12 board to work with it (see Appendix A). The clock on the Dragon12 board runs at
approximately 24 MHz, although this can be changed. Writing programs for the Dragon12
development board with CodeWarrior is made much simpler by using CodeWarrior stationery:
a foundation that sets up the Dragon12 board to work with CodeWarrior and its libraries.
When working with stationery, the basic settings of the microprocessor are already defined,
so programs can be written and run without having to set up the microprocessor settings
every time. We used the Metrowerks CodeWarrior Stationery for Dragon12, Revision E, flat
memory model, provided by Dr. Frank Wornle[10].
Future Work
• Gain more experience with pulse modulation using our specific IP Camera and a
microprocessor programmed using the Dragon12 development board. This will be done by
James and Jeremy.

• Learn more about socket programming in C to implement the most efficient Server possible,
since we are dealing with limited resources. More experience will also be needed if we end up
having to implement a 'non-blocking' server. This will be done by James and Seth.

• Finish designing the safety recovery system. This means that the R/C car will retrace its
movements back to its starting location if the wireless connection between the user's
computer and the IP camera is unintentionally lost (if the user exits the program or terminates
the connection, the car shouldn't retrace its movements). This will require researching how to
monitor the connection between the camera and the program running on the user's
computer. It will also require researching how to write a script for the IP camera that will
record the movements of the car and then, in the event that the wireless network connection
is lost, send the correct signals to the control boxes on the car to get it to retrace its
movements in reverse order. Also, on the user side, the program will let the user know that
the connection has been lost and that the car is currently retracing its route back to the
starting position. It should also continuously 'look' for the IP camera and reestablish a
connection to it if possible. This will be done by James, Jeremy, and Seth.

• Develop the networking software while working directly with the Axis 207W IP Camera and
its embedded operating system. Commands from the Client on the user's PC will be sent to
the camera and processed by the Server running on it. This will be done by Seth and James.

• Connect the IP camera to the R/C car's control boxes. This involves implementing a circuit
between the signal output of the IP camera and the control inputs (for steering and
acceleration) of the R/C car. A microprocessor must be programmed to take an output signal
from the IP camera and modify it so that it produces the signals required by the steering and
speed control boxes on the RC car. This will involve a lot of research into how to program
microprocessors using the Dragon12 development board. This will be done by James and
Jeremy.

• Test the control of the car over the network. Test that the car can be controlled (both
steering and acceleration) using the networking framework. This tests everything that has
been implemented so far. This will be done by James.

• Implement the GUI skeleton. Create the basic outline of what the GUI will have. This
includes a place for a live video stream, a way to connect/disconnect from the IP camera, and
something that happens if a connection is lost. This will be done by Seth.

• Implement the video and controls in the GUI. This means making the GUI fully functional. It
displays the video stream from the IP camera, allows connecting to and disconnecting from
the IP camera, shows network connection status, allows mouse control of the car, and allows
graceful termination of the connection to the car. This requires researching how to do all of
this in Java, as well as integrating the Processing language (used for mouse control of the car)
with the Java code. This will be done by James and Seth.

• Implement the safety recovery system. First, this requires knowledge of how to write a
script for the IP camera. The script must store the acceleration and steering commands sent
from the user in the camera's memory. Another script must monitor the connection between
the user's computer and the IP camera. As soon as it fails, the camera starts sending signals to
the control boxes on the R/C car, causing the car to retrace its route. If a connection is
reestablished, the car stops retracing its route. On the user side, a message must be displayed
letting the user know that the connection was lost and the car is retracing its route. The
program will continuously try to reconnect with the IP camera and will let the user know if it
successfully reconnects. The hardware (car side) will be done by James and Jeremy. The
software (user side) will be done by Seth.

• Full testing. Here we perform numerous tests on the design to make sure the car, IP
camera, and software remain functional under different conditions. It is also to test the limits
of what we have implemented, including how long the car can be driven before something
fails (such as a battery dying or a network connection failing). Any issues that show up during
the testing can be fixed during this time. This will be done by James, Jeremy, and Seth.

• Additional features. Things such as joystick/gamepad compatibility can be added during this
phase, as long as the system remains working or the changes are easily reversible in case
something goes wrong. This will be done by James, Jeremy, and Seth.
Timetable
Figure 29: Timetable
Appendix A – Programming the Dragon12 Board
This is a reference for setting up the Dragon12 development board to work with the
CodeWarrior development suite. This procedure erases the default monitor program, DBug-12,
and installs the HCS12 serial monitor on the Dragon12 board. The HCS12 serial monitor is what
is used by CodeWarrior to communicate with the Dragon12 board. This procedure is taken
directly from Dr. Frank Wornle’s website[10] (see the link “HCS12 Serial Monitor on Dragon-12”).
The HCS12 serial monitor can be obtained from his website as well. Added comments appear
within ‘[]’. The main difference is that EmbeddedGNU was used instead of HyperTerminal for
reprogramming the Dragon12 development board. EmbeddedGNU can be obtained from Eric
Engler’s website[11].
Both uBug12 as well as the Hi-Ware debugger allow the downloading of user
applications into RAM and/or the Flash EEPROM of the microcontroller. With
uBug12, a suitable S-Record has to be generated. Programs destined for Flash
EEPROM are downloaded using the host command fload. Both S2 records as well
as the older S1 records (command line option ‘;b’) are supported. Some
applications might rely on the provision of an interrupt vector table. HCS12 Serial
Monitor expects this vector table in the address space from 0xFF80 – 0xFFFF.
This is within the protected area of the Flash EEPROM. The monitor therefore
places the supplied interrupt vectors in the address space just below the monitor
program (0xF780 – 0xF800) and maps all original interrupts (0xFF80 – 0xFFFF) to
this secondary vector table. From a programmer’s point of view, the vector table
always goes into the space from 0xFF80 – 0xFFFF.
The source code of the HCS12 Serial Monitor can be obtained from Motorola’s
website (www.motorola.com, see application note AN2548/D and the
supporting CodeWarrior project AN2548SW2.zip).
A few minor modifications are necessary to adapt this project to the hardware of
the Dragon-12 development board:
(1) It is proposed to use the on-board switch SW7\1 (EVB, EEPROM) as
LOAD/RUN switch. The state of this switch is tested by the monitor
during reset. Placing SW7\1 in position ‘EVB’ (see corresponding LED)
forces the monitor to become active (LOAD mode). Switching SW7\1 to
position ‘EEPROM’ diverts program execution to the user code (jump via
the secondary RESET interrupt vector at 0xF77E – 0xF77F).
(2) The monitor is configured for a crystal clock frequency of 4 MHz,
leading to a PLL controlled bus speed of 24 MHz.
(3) The CodeWarrior project (AN2546SW2.zip) has been modified to
produce an S1-format S-Record. The frequently problematic S0 header
has been suppressed and the length of all S-Records has been adjusted to
32 bytes. This makes it possible to download the monitor program using
DBug-12 and two Dragon-12 boards connected in BDM mode.
The protected Flash EEPROM area of the MC9S12DP256C (Dragon-12) can only
be programmed using a BDM interface. An inexpensive approach is to use two
Dragon-12 boards connected to each other via a 6-wire BDM cable. The master
board should run DBug-12 (installed by default) and be connected to the host via
a serial communication interface (9600 bps, 8 bits, no parity, 1 stop bit, no flow
control). A standard terminal program (e.g. HyperTerminal) [or EmbeddedGNU]
can be used to control this board. The slave board is connected to the master
board through a 6-wire BDM cable. Power only needs to be supplied to either
the master or the slave board (i. e. one power supply is sufficient, see Figure 1)
[below].
Figure 30: Connecting two Dragon12 boards
The BDM interface of DBug-12 allows the target system (slave) to be
programmed using the commands fbulk (erases the entire Flash EEPROM of the
target system) and fload (downloads an S-Record into the Flash EEPROM of the
target system). Unfortunately, DBug-12 requires all S-Records to be of the same
length, which the S-Records produced by CodeWarrior are not. CodeWarrior has
therefore been configured to run a small script file (/bin/make_s19.bat) which
calls upon Gordon Doughman’s S-Record conversion utility SRecCvt.exe. The final
output file (S12SerMon2r0.s19) is the required S-Record in a downloadable
format. [This does not need to be done as S12SerMon2r0.s19 is already included
in the download from Dr. Frank Wornle’s website].
Click on the green debug button (cf. Figure 5) to build the monitor program and
convert the output file to a downloadable S-Record file.
Start HyperTerminal [EmbeddedGNU] and reset the host board. Make sure the
host board is set to POD mode. You should be presented with a small menu
(Figure 2) [below]. Select menu item (2) to reset the target system. You should
be presented with a prompt ‘S>’ indicating that the target system is stopped.
Figure 31: Resetting Dragon12 board in EmbeddedGNU
Erase the Flash EEPROM of the target system by issuing the command fbulk.
After a short delay, the prompt should reappear.
Issue the command fload ;b to download the S-Record file of the HCS12 Serial
Monitor. Option ‘;b’ indicates that this file is in S1-format (as opposed to S2).
From the Transfer menu select Send Text File… . Find and select the file
S12SerMon2r0.s19. [When using EmbeddedGNU choose Download from the
Build menu] You will have to switch the displayed file type to ‘All Files (*.*)’ (see
Figure 3)[below]. Click on Open to start the download.
Figure 32: Choose HCS12 Serial Monitor file
Once the download is complete, you should be presented with the prompt
(Figure 4)[below]. The HCS12 Serial Monitor has successfully been written to the
target board. Disconnect the BDM cable from the target system and close the
terminal window.
Figure 33: The Serial Monitor has been successfully loaded
Connect the serial interface SCI0 of the target system to the serial port of the
host computer and start uBug12[this can be obtained from Eric Engler’s
website[11]].
Ensure that you can connect to the target by entering ‘con 1’ (for serial port
COM1). You should receive the acknowledgement message ‘CONNECTED’. Issue
the command ‘help’ to see what options are available to control the monitor.
Disconnect from the target using command ‘discon’ (Figure 5)[below].
Note that HCS12 Serial Monitor provides a set of commands very similar to that of
DBug-12: A user application can be downloaded into the Flash EEPROM of the
microcontroller using fload (;b), the unprotected sectors of Flash EEPROM can be
erased using fbulk, etc. Following the download of a user program into the Flash
EEPROM of the microcontroller, the code can be started by switching SW7\1 to
RUN (EEPROM) and resetting the board (reset button, SW6).
Figure 34: Options to control the monitor
This completes the installation and testing of HCS12 Serial Monitor on the
Dragon-12 development boards.
Appendix B – Porting Code from the Dragon12 to
the Dragonfly12
This is the procedure for getting code that works with the Dragon12 development board
to work on the Dragonfly12. It requires a Dragon12 development board with D-Bug12 on it, a
BDM cable, and a Dragonfly12 microprocessor. The Dragonfly12 uses the MC9S12C32
processor while the Dragon12 uses the MC9S12DP256 processor. You should familiarize
yourself with the differences by reading the user guide for the MC9S12C32 at
http://www.freescale.com/files/microcontrollers/doc/data_sheet/MC9S12C128V1.pdf and
comparing it against that of the DP256 chip at
http://www.cs.plu.edu/~nelsonj/9s12dp256/9S12DP256BDGV2.pdf. Any differences in terms
of the actual code will have to be taken into account separately from what is described in this
guide. This guide will cover porting code developed in CodeWarrior to the Dragonfly12. Mr.
Wayne Chu from Wytec provided a lot of help to figure out this process.
The first step is to create a new project. Select New Project from the File menu. You
should be presented with the New Project window shown in Figure 35 below.
Figure 35: Creating a new Dragonfly12 project
Enter a name for the project and click Next. The New Project Wizard should appear. Click Next.
On page 2, select MC9S12C32 from the list of Derivatives and click next. On page 3, select the
languages that you will be programming in. For this project, only C was selected. The next few
pages ask if certain Codewarrior features should be used, including Processor Expert,
OSEKturbo, and PC-lint. It is safe to answer no to all of them. On page 7, it asks what startup
code should be used. Choose minimal startup code. On page 8, it asks for the floating point
format supported. Choose None. On page 9 it asks for the memory model to be used. Choose
Small. On page 10 you can choose what connections you want. Only Full Chip Simulation is
needed. After clicking Finish, the basic project should be created (see Figure 36 below). The
green button is used to compile the code.
Figure 36: Dragonfly12 initial project layout
Whenever the code is compiled, it creates an s19 file and puts it in the bin folder within
the project. If you are using Full Chip Simulation, the file is probably called
“Full_Chip_Simulation.abs.s19”. This file is what will eventually be put onto the Dragonfly12.
First, the s19 file needs to be converted to the correct format. This is done using a program
called SRecCvt.exe. It can be downloaded from
http://www.ece.utep.edu/courses/web3376/Programs.html. Put SRecCvt.exe in the bin folder
containing the s19 file.
The conversion can be done from the command line, but it is easier to create a bat file
that will do it for you. The following line for converting the s19 file was generously provided by
Mr. Wayne Chu. Open a text editor and put the following line in it:
sreccvt -m c0000 fffff 32 -of f0000 -o DF12.s19 Full_Chip_Simulation.abs.s19
This assumes that Full_Chip_Simulation.abs.s19 is the name of the s19 file created by
CodeWarrior. If the s19 file generated by CodeWarrior is called something different, then
Full_Chip_Simulation.abs.s19 should be changed to the name of the s19 file. DF12.s19 is the
name of the output s19 file. It can be called anything, just make sure it is different from the
name of the input s19 file. Save the file to the bin folder containing SRecCvt.exe with the
extension ".bat" (e.g. make_s19.bat). Double click on the bat file and the CodeWarrior s19 file
should be converted automatically and saved as the output s19 file given in the bat file (e.g.
DF12.s19). This is the file that will be put onto the Dragonfly12 microprocessor.
Connect power to a Dragon12 development board that has D-Bug12 on it. Plug one end
of the BDM cable into “BDM in” on the Dragon12 and the other end into the BDM connection
on the Dragonfly12. Make sure the brown wire in the BDM cable lines up with the “1” on both
of the BDM connections. Switch the Dragon12 to POD mode by setting SW7 switch 1 down and
SW7 switch 2 up. See the picture below for the complete setup.
Start up EmbeddedGNU. Press the reset switch on the Dragon12 board. A menu similar to the
one shown in Figure 37 below may appear (make sure the Terminal tab is selected). If nothing
appears, try clicking in the Terminal and pressing a key; sometimes the menu doesn't display
properly and this usually fixes it. Once the menu appears, choose option 1 and enter 8000 to
set the target speed to 8000 KHz. If the R> prompt appears, type "reset" and hit enter. You
should now be presented with the S> prompt. Enter "fbulk" to erase the flash memory on the
Dragonfly12. After a brief pause, the S> prompt should reappear. Now enter "fload" and press
enter. The
terminal will hang. Go to the Build menu and choose Download. Navigate to the bin folder of
the CodeWarrior project and select the converted s19 file (e.g. DF12.s19). Click Open. A series
of stars should appear while the file is downloading. Once it is finished, the S> prompt should
reappear. The program is now on the DragonFly12.
Figure 37: Downloading code to Dragonfly12 using EmbeddedGNU
Bibliography
Axis 207 Network Camera. <http://www.axis.com/products/cam_207/>.
Axis 207/207W/207MW Network Cameras Datasheet.
<http://www.axis.com/files/datasheet/ds_207mw_combo_30820_en_0801_lo.pdf>.
Axis Scripting Guide.
<http://www.axis.com/techsup/cam_servers/dev/files/axis_scripting_guide_2_1_8.pdf>.
Axis. VAPIX version 3 RTSP API. 2008.
Engler, Eric. Embedded Tools. <http://www.geocities.com/englere_geo/>.
Hunt, Craig. TCP/IP Network Administration. Sebastopol, CA: O'Reilly, 2002.
Lethbridge, Timothy C. and Robert Laganiere. Object-Oriented Software Engineering: Practical Software
Development using UML and Java. New York: McGraw Hill, 2001.
Sun Microsystems, Inc. Java™ Platform, Standard Edition 6 API Specification. 2008.
<http://java.sun.com/javase/6/docs/api/>.
Verizon Wireless AirCard® 595 product page. October 2008.
<http://www.verizonwireless.com/b2c/store/controller?item=phoneFirst&action=viewPhoneDetail&selectedPhoneId=2730>.
Wikipedia. IP Camera. 19 Nov 2008. October 2008 <http://en.wikipedia.org/wiki/IP_Camera>.
Wornle, Frank. Wytec Dragon12 Development Board. October 2005.
<http://www.mecheng.adelaide.edu.au/robotics/wpage.php?wpage_id=56>.
Glossary
API (application programming interface): a set of functions, procedures, methods, classes or
protocols that an operating system, library or service provides to support requests made by
computer programs.
Bit: A binary digit that can have the value 0 or 1. Combinations of bits are used by computers
for representing information.
Computer network: a group of interconnected computers.
Clock frequency: The operating frequency of the CPU of a microprocessor. This refers to the
number of times the voltage within the CPU changes from high to low and back again within
one second. A higher clock speed means the CPU can perform more operations per second.
Central processing unit (CPU): A machine, typically within a microprocessor, that can execute
computer programs.
Electric current: Flow of electric charge.
Electric circuit: an interconnection of electrical elements.
Encryption: the process of transforming information (referred to as plaintext) using an
algorithm (called cipher) to make it unreadable to anyone except those possessing special
knowledge, usually referred to as a key.
GUI (graphical user interface): a type of user interface that allows users to interact with a
computer through graphical icons and visual indicators.
HTTP (Hypertext Transfer Protocol): a communications protocol used on the internet for
retrieving inter-linked hypertext documents.
IP camera: a unit that includes a camera, web server, and connectivity board.
Interrupt: Can be either software or hardware. A software interrupt causes a change in
program execution, usually jumping to an interrupt handler. A hardware interrupt causes the
processor to save its state and switch execution to an interrupt handler.
Joystick: an input device consisting of a stick that pivots on a base and reports its angle or
direction to the device it is controlling.
Light-Emitting Diode (LED): a diode (a two-terminal device that allows current to flow in a
single direction) that emits light when current flows through it. Different colors can be
created using different semiconducting materials within the diode.
Linux: a Unix-like computer operating system family which uses the Linux kernel.
Memory: integrated circuits used by a computer to store information.
Microprocessor: a silicon chip that performs arithmetic and logic functions in order to control a
device.
Modulation: the process of varying a periodic waveform.
Network Bandwidth: the capacity of a given system to transfer data over a connection, usually
measured in bits per second (bit/s) or multiples such as kbit/s and Mbit/s.
Pulse Width Modulation: a technique that varies the width of the pulses in a square wave,
thereby varying the average value of the waveform. For example, a square wave that switches
between 0 V and 5 V with a 50% duty cycle has an average value of 2.5 V.
R/C (remote controlled) car: a powered model car driven from a distance via a radio control
system.
Register: a small storage element in computer hardware that holds a group of bits which can
be read or written as a unit.
RF (radio frequency): a frequency or rate of oscillation within the range of about 3 Hz to
300 GHz.
RTSP (Real Time Streaming Protocol): A control protocol for media streams delivered by a
media server.
Servo: a mechanism that converts an electrical control signal into mechanical motion, typically
rotating to a position determined by the signal.
TCP/IP Socket: an end-point of a bidirectional process-to-process communication flow across
an IP based network.
TCP/IP (Internet Protocol Suite): a set of data communication protocols, including Transmission
Control Protocol and the Internet Protocol.
User Interface: the means by which a user interacts with a program. It accepts input from the
user to control the program and provides output to the user reflecting the results of the
current program setup.
VAPIX®: RTSP-based application programming interface to Axis cameras.
Video transmission: sending a video signal from a source to a destination.
Voltage: difference in electric potential between two points in an electric circuit.
Wi-Fi: a wireless networking technology used by many electronic devices. It covers the IEEE
802.11 family of technologies in particular.
Wireless network: a computer network whose nodes communicate without physical cabling,
for example over radio links.