A Distributed Flight Control System
Architecture for Small UAVs
by
ENS Ryan Bruce Norris, USN
B.S. Systems Engineering
United States Naval Academy, 1996
Submitted to the Department of Electrical Engineering and Computer Science
In Partial Fulfillment of the Requirements for the Degree of
MASTER OF SCIENCE IN ELECTRICAL ENGINEERING
AND COMPUTER SCIENCE
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
June 1998
© 1998 Ryan Bruce Norris.
All rights reserved.
Signature of Author
Department of Electrical Engineering and Computer Science
May 20, 1998
Approved by
Dr. David S. Kang
Project Manager, Charles Stark Draper Laboratory
Technical Supervisor
Certified by
Dr. John J. Deyst, Jr.
Professor of Aeronautics and Astronautics
Thesis Advisor
Accepted by
Arthur C. Smith
Chairman, Committee on Graduate Students
Department of Electrical Engineering and Computer Science
A Distributed Flight Control System
Architecture for Small UAVs
by
ENS Ryan Bruce Norris, USN
Submitted to the Department of Electrical Engineering and
Computer Science on May 20, 1998, in Partial Fulfillment of the
Requirements for the Degree of Master of Science in Electrical
Engineering and Computer Science
Abstract
The current practice for designing a small unmanned aerial vehicle (UAV) flight control
system is to use a traditional centralized processing architecture. Because of its
characteristic lack of modularity, this type of architecture does not satisfy the need for
frequent upgrades, replacements, and redesigns of vehicles and their subsystems in the
most time-and-cost-effective manner. This thesis explores the potential of a data bus-based distributed processing architecture to better meet that need.
A hypothetical centralized architecture is generated representing a current practice
baseline for small UAV architectures. The benefits of utilizing a distributed architecture
for the small UAV flight control system are analyzed, and some design methods and
relevant technologies are discussed. Specifically, two distributed flight control system
architectures are presented: a LONWorks (Local Operating Network) control bus
architecture and a Universal Serial Bus (USB) high performance serial bus architecture.
To demonstrate the relative advantages of the distributed approach, a set of metrics is
used to compare the costs of altering the centralized architecture and the USB
architecture, when the specific system must be redesigned or the system must be
expanded with an additional flight control sensor.
Technical Supervisor: Dr. David S. Kang
Title: Project Manager, Charles Stark Draper Laboratory
Thesis Advisor: Dr. John J. Deyst, Jr.
Title: Professor of Aeronautics and Astronautics
ACKNOWLEDGMENTS
Many individuals helped contribute to the success of this thesis, and I would like to extend great
thanks to each and every one of them:
First, I would like to thank my three thesis advisors from the Draper Lab: Dave Kang, Paul
Rosenstrach, and Bob Powers. Dave was my advisor during the two years I spent at Draper and I
thank him for giving me the opportunity to work in the Intelligent Unmanned Vehicle Center at
Draper. Dave and Paul were great advisors during the research phase of my thesis. Dave never
let me lose sight of the big picture, and Paul taught me the importance of not making
assumptions. He would always push me to research a subject until I fully understood it. Bob
was an excellent advisor when it came time to write my thesis. He was always available to read
and comment on sections of the thesis, and he taught me a great deal about writing a good
technical paper. I would also like to thank John Deyst, my advisor at MIT, for taking the time to
read and comment on my thesis.
Many of the references for this thesis came from discussions I had with engineers at the Draper
Lab. They were always willing to make time in their day to meet with me, and their opinions on
various subjects guided me in my research. I would like to extend thanks to the following
engineers at the Draper Lab:
Brent Appleby
Bob Butler
Jim Connelly
Paul Debitteto
Bruce Dow
John Dowdle
Dave Hanson
Tony Kourepenis
Bob Leger
Mark Prestero
Gary Schwartz
Mike Socha
Tom Thorveldson
Paul VanBrokhoven
Skip Youngberg
The two years that I spent at Draper gave me a chance to work with many other students, and at
the same time, make some great friends. I would like to thank the students at Draper with whom
I worked. I learned a great deal from many of them, and they made the work at Draper fun and
interesting. Specifically, I would like to thank Chuck Tung, Christian Trott, Mohan Gurunathan,
and Bill Kaliardos. Chuck, Christian, and Mohan were always available for advice on anything
electrical, Mohan gave away his secrets on taking great digital pictures, and Bill always had time
to talk and give suggestions concerning any thesis problems I was having.
I would like to thank the librarians at Draper for their patience and help in finding many of the
references I used for this thesis.
I would like to thank all of my friends and family. Their love and support has motivated me over
the years of my education. I would especially like to thank my beautiful wife Zoe. Her smile
and ability to make me laugh can always turn a bad day into a good day. I dedicate this thesis to
her.
Finally, I would like to thank God for giving me the opportunity to enjoy life and continue to
learn day after day. Because of him, all of this is possible.
This thesis was prepared at The Charles Stark Draper Laboratory, Inc., under IR&D Contract
#13316.
Publication of this thesis does not constitute approval by Draper or the sponsoring agency of the
findings or conclusions contained herein. It is published for the exchange and stimulation of
ideas.
Permission is hereby granted by the Author to the Massachusetts Institute of Technology to
reproduce any or all of this thesis.
Ryan Bruce Norris
TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
CHAPTER 1
1.1 Motivation
1.2 Unmanned Vehicles
1.3 Modularity
1.4 DOTS
1.5 The Modularity Advantage: a ULV Case Study
1.5.1 EOD Architectural Design
1.5.1.1 System Components
1.5.1.2 Wiring and Interconnection of Components
1.5.2 EOD Architectural Problems and Solutions
1.6 Approach
CHAPTER 2
2.1 Narrowing The Application
2.2 Defining Vehicle Requirements
CHAPTER 3
3.1 Traditional Architectures
3.2 Defining New Requirements
3.3 Initial Architectural Conclusions
CHAPTER 4
4.1 Data Bus Characteristics
4.1.1 Topology
4.1.1.1 The Ring
4.1.1.2 The Star
4.1.1.3 The Tree
4.1.2 Interfaces
4.1.3 The OSI Model
4.1.4 Smart Nodes
4.2 Data Bus Technology
4.2.1 Control Bus
4.2.2 High Performance Serial Bus
4.2.3 Why Not 1553?
4.3 LONWorks
4.3.1 LONWorks Node
4.3.2 LONTalk Protocol
4.3.3 Communications and Power
4.3.4 LONWorks Control Module
4.4 Universal Serial Bus
4.4.1 USB Topology
4.4.2 USB Host
4.4.3 USB Transactions and Bus Protocol
4.4.4 Power
4.4.4.1 Bus-Powered Hub
4.4.4.2 Self-Powered Hub
4.4.4.3 Bus-Powered Function
4.4.4.4 Self-Powered Function
4.4.4.5 Voltage Requirements
4.4.5 Latency and Timing
CHAPTER 5
5.1 SCPB Flight Control System
5.2 Ideal Solution
5.3 LONWorks Solution
5.3.1 LONWorks Architectural Design
5.3.2 LONWorks Bandwidth Analysis
5.4 USB Solution
5.4.1 USB Architectural Design
5.4.2 USB Power Options
5.4.3 USB Bandwidth Analysis
CHAPTER 6
6.1 Functional Factors
6.2 Cost Factors
6.2.1 New Design Costs
6.2.2 System Upgrade Costs
6.3 Conclusions
CHAPTER 7
APPENDIX A
REFERENCES
LIST OF FIGURES

FIGURE 1-1: THE EOD MICRO-ROVERS
FIGURE 1-2: EOD MICRO-ROVER WIRING DIAGRAM
FIGURE 1-3: WIRING DIAGRAM FOR A BUS-BASED EOD MICRO-ROVER
FIGURE 2-1: PAYLOAD AND WINGSPAN OF EXISTING UAVS
FIGURE 2-2: RANGE AND ENDURANCE OF EXISTING UAVS
FIGURE 3-1: TRADITIONAL CENTRALIZED PROCESSING ARCHITECTURE
FIGURE 3-2: INTERCONNECTION OF TRANSDUCERS
FIGURE 4-1: RING TOPOLOGY FOR A DATA BUS
FIGURE 4-2: STAR TOPOLOGY FOR A DATA BUS
FIGURE 4-3: TREE TOPOLOGY FOR A DATA BUS
FIGURE 4-4: BANDWIDTH AND COMPLEXITY OF DATA BUS TECHNOLOGIES
FIGURE 4-5: NEURON MICROCONTROLLER BLOCK DIAGRAM
FIGURE 4-6: LONWORKS NODE
FIGURE 4-7: LONWORKS CONTROL MODULE
FIGURE 4-8: USB TOPOLOGY
FIGURE 4-9: USB FOUR WIRE CABLE
FIGURE 4-10: USB HOST COMPONENTS AND DATA CONVERSION PROCESS
FIGURE 4-11: USB HUB
FIGURE 4-12: VOLTAGE DROP TOPOLOGY
FIGURE 4-13: NRZI ENCODED DATA
FIGURE 4-14: WORST CASE TOTAL ROUND TRIP DELAY
FIGURE 5-1: SCPB SUBSYSTEM INTEGRATION DIAGRAM
FIGURE 5-2: SCPB SUBSYSTEM INTERCONNECT DIAGRAM
FIGURE 5-3: IDEAL ARCHITECTURE FOR A DISTRIBUTED FLIGHT CONTROL SYSTEM
FIGURE 5-4: LONWORKS DISTRIBUTED FLIGHT CONTROL SYSTEM ARCHITECTURE
FIGURE 5-5: LONWORKS CONTROL MODULE INTEGRATED CIRCUIT BOARD
FIGURE 5-6: USB DISTRIBUTED FLIGHT CONTROL SYSTEM ARCHITECTURE
FIGURE 5-7: USB FRAME FOR FLIGHT CONTROL SYSTEM DATA
FIGURE 6-1: COSTS FOR REDESIGNING AND UPGRADING A FLIGHT CONTROL SYSTEM
LIST OF TABLES

TABLE 1-1: THE EOD MICRO-ROVER SYSTEM COMPONENTS
TABLE 2-1: UAV REQUIREMENTS
TABLE 2-2: INTEGRATED NAVIGATION SYSTEM REQUIREMENTS
TABLE 4-1: THE OSI MODEL FOR COMMUNICATION SYSTEMS
TABLE 4-2: LONWORKS TRANSCEIVER TYPES
TABLE 4-3: USB CHARACTERISTICS AND ESTIMATED PROTOCOL OVERHEAD
TABLE 5-1: SCPB NAVIGATIONAL SUBSYSTEM REQUIREMENTS
TABLE 5-2: INTEL USB CHIP SPECIFICATIONS
TABLE 5-3: USB INERTIAL INSTRUMENT AND GPS BANDWIDTH CALCULATIONS
TABLE 6-1: TRADITIONAL VERSUS USB FUNCTIONAL FACTORS
TABLE 6-2: ADDED VOLUME OF USB SYSTEM COMPONENTS
TABLE 6-3: TRADITIONAL VERSUS USB COST FACTORS
TABLE 6-4: NEW DESIGN COSTS FOR TRADITIONAL AND USB ARCHITECTURES
TABLE 6-5: SYSTEM UPGRADE COSTS FOR TRADITIONAL AND USB ARCHITECTURES
Chapter 1
Introduction
1.1
Motivation
The increasing variety of missions on which unmanned aerial vehicles (UAVs)¹ are being
deployed, and the rapidly advancing technology of flight control sensors, both indicate a coming
need for frequent upgrades, replacements, and redesigns of these vehicles and their subsystems.
Because of its inflexible nature, the traditional centralized architecture used in many existing
UAVs will not satisfy this need in the most time and cost effective manner. As the sensors,
actuators, or other control components required for particular missions change or become
outdated, a distributed architecture would allow for easy substitution and replacement of those
components.
This plug-and-play concept would save some of the time and money that are
normally put into redesigning and constructing a new vehicle whenever these changes occur.
1.2
Unmanned Vehicles
Over the past century, the U.S. has assumed the role of peacekeeper of the world. To uphold this
role in the 21st century, our nation must maintain an effective military capable of deploying
forces for a variety of purposes, in many parts of the world, in a timely and effective manner.
The ability to rapidly obtain information about the enemy can benefit mobile forces in mission
areas such as reconnaissance and surveillance, battle damage assessment (BDA), target
designation, and nuclear, biological, and chemical (NBC) detection.
Success in these areas,
however, often requires knowing the conditions of the battlefield before forces actually arrive
there. Unmanned vehicles can aid immensely in this task.
Three classes of unmanned vehicles exist:
the unmanned land vehicle (ULV), the
unmanned underwater vehicle (UUV), and the unmanned aerial vehicle (UAV).
¹ Refer to Appendix A for a listing of acronyms used throughout this thesis.

Information
concerning the conditions of a battlefield can be rapidly obtained with air reconnaissance
missions. Consequently, there is great interest in UAVs.
UAV missions are generally flown to cover areas determined to be too hazardous or
expensive for manned reconnaissance aircraft. U.S. Military operations in Grenada, Lebanon,
and Libya, in the early 1980s, identified the need for an on-call, inexpensive UAV [7]. Thus, the
Pioneer UAV was developed in the mid-1980s and has since flown missions in the Persian Gulf
and Bosnia. Currently, the U.S. funds six major UAV programs and supports research in many
related areas.
Recent UAV developments have caused the U.S. government to define
requirements for UAVs to support an increasing variety of operations and identify a need for
different classes of UAVs to support these operations [22].
Different missions require different levels of performance from a vehicle, and it is almost
impossible to design a single vehicle capable of supporting all possible missions. Current UAV
design practice falls short of meeting the implied challenge: rapid and economical adaptation to
changes in mission requirements.
1.3
Modularity
The dictionary defines a module as: a separable component [of a system], frequently one that is
[easily] interchangeable with others ... [20]. In general, a module refers to a device within a
larger system that performs a given function to support the overall function of the larger system.
This module function can be described by the output of the device in relation to some input to the
device.
The user of the device (e.g. the flight control computer) is not concerned with the
internal components and overall operation of the device, but rather the information the device or
module provides as output or the information that must be supplied to the device or module as
input. If the module must be replaced or upgraded, the flight control computer will not care if a
different module replaces the old module, just as long as the new module provides the computer
with correct information or readily accepts the information provided by the computer.
As
defined here, a modular system is a system in which many modules can easily be connected to or
removed from a central control computer. In general, the more modular a system is, the easier it
will be to adapt an old system to meet new requirements and missions. The interface between a
module and the flight control computer, as well as the interface between two modules, usually
determines the difficulty of connecting and disconnecting modules to and from the flight control
computer and to each other. In general, the more standardized an interface is within a system,
the easier it will be to reconfigure and expand the system, and the higher the modularity will be
for that system.
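To make the interface idea concrete, consider the following minimal sketch in C, a typical language for embedded flight software. Every type and function name here is a hypothetical illustration, not something specified in this thesis:

```c
/* Hypothetical standardized module interface: the flight control computer
 * sees only these three entry points, never the device internals. */
typedef struct {
    int  (*init)(void);                     /* power up and configure the device */
    int  (*read)(float *out, int max_len);  /* return data in engineering units  */
    void (*shutdown)(void);                 /* release the device                */
} sensor_module;

/* Caller code depends only on the interface, so an old gyro module can be
 * swapped for a new one without changing this function. */
float read_rate(const sensor_module *gyro)
{
    float rate_dps = 0.0f;
    gyro->read(&rate_dps, 1);  /* e.g., one body rate sample in deg/s */
    return rate_dps;
}
```

Because the caller depends only on the standardized input and output, replacing the module reduces to supplying a new implementation of the same three functions.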
Updating an old vehicle to perform a new mission could be accomplished with a modular
flight control system. A modular flight control system could be used across several different
vehicles by reconfiguring it to accommodate specific missions, thus reducing the development
costs of designing a new vehicle every time a new mission is encountered. A modular flight
control system could easily accommodate changes in technology as well.
With the rapid
technological pace of today, when the sensors aboard an aircraft become outdated, they can be
replaced by current, up-to-date sensors, without redesigning the entire system.
1.4
DOTS
Much of the motivation for the research involving a modular UAV was provided by a project at
the CSDL called "Draper Off-the-Shelf (DOTS)". The primary goal of the DOTS project was to
develop off-the-shelf subsystems to strengthen total system development market potential by
minimizing development cost and time over the life of a product. Given an application with
market potential for Draper, the overall objectives of this project were to select a candidate
DOTS subsystem, identify and compare methods to satisfy a range of requirements at minimum
development cost and time, and if possible, develop a prototype system using these methods.
The application chosen for review was the UAV, and because of broad interest within Draper,
the UAV flight control system was selected as the candidate DOTS subsystem.
1.5
The Modularity Advantage: a ULV Case Study
The concept of modularity is by no means only applicable to the flight control system of a UAV.
The vision of a modular control system can be extended to other unmanned vehicles as well. In
fact, the experience gained from working with other students on the architectural development of
a highly centralized ULV has indicated a need for system modularity to ease the ongoing
development and maintenance of the vehicle.
Since the start of the decade, different series of ULVs have been designed and built at
the Intelligent Unmanned Vehicle Center (IUVC) of the Charles Stark Draper Laboratory
(CSDL) in Cambridge, Massachusetts. The most recent series of vehicles are those known as the
EOD (Explosive Ordnance Disposal) Micro-Rovers.
The EOD rovers are intended for use in the detection and retrieval of unexploded
ordnance (UXO).
The EOD rovers must be able to perform several tasks which include:
building a hazard map, navigating to within 1 m² of the UXO, precisely locating the explosive,
picking up the UXO, and carrying the explosive to a disposal site. The EOD rovers are similar in
design to the MITy-x series of rovers which were also designed and built at the IUVC at Draper.
Two rovers were built for the EOD project, EOD-1 and EOD-2.
EOD-1 is a tele-operated
system, which serves to show the mobility capabilities of the mechanical platform as well as the
option for tele-operated control. EOD-2 is envisioned as a fully autonomous system that can
complete an assigned task with minimal operator assistance. This case study will present the
basic architecture implemented for the EOD rovers as well as the problems associated with the
architecture which motivated the research for a more robust, efficient, and modular design.
1.5.1
EOD Architectural Design
1.5.1.1
System Components
The EOD vehicles are equipped with a six-wheel drive flexible frame, a front and rear
Ackermann steering system similar to that used on automobiles, and a grappler device for UXO
retrieval. The EOD micro-rovers are shown in Figure 1-1.
The flexible frame of the EOD micro-rovers provides them with a high degree of
maneuverability, enabling the rover to traverse rocks, curbs and uneven terrain. The frame is
constructed of three individual platforms connected by spring steel wire. Table 1-1 summarizes
the vehicle components for both EOD-1 and EOD-2.
EOD-1 is used primarily for parallel
development purposes and it shares the same basic mechanical architecture with EOD-2, but it
lacks a complete sensor suite and grappler mechanism.
The components are housed in the three individual vehicle platforms. The front platform
contains the two degree of freedom grappler mechanism, the metal detector unit, the sonar, and
the front bumper sensors. The middle platform houses the main processor, the video camera and
transmitter, the wireless modem, the local positioning system transponder, and the micro-mechanical rate gyro. The rear platform contains the power regulation circuitry, the motor driver
electronics, and the batteries.
Figure 1-1: The EOD Micro-Rovers
Component                                                EOD-1   EOD-2
Six-wheeled, three-platform mechanical architecture        •       •
Six drive wheel motors with integrated encoders            •       •
Grappler mechanism for retrieval of UXO                            •
12 MHz Z-World Little Giant Microprocessor with
    512 KB SRAM and PIO096 Expansion Board                 •       •
Systron-Donner micro-mechanical gyroscope                          •
Video camera and transmitter                               •       •
Front bumper sensors                                               •
Local Positioning System (LPS)                                     •
Polaroid sonar ranging module array                                •
Proxim wireless modem                                      •       •
Metal Detector                                                     •

Table 1-1: The EOD Micro-Rover System Components
1.5.1.2
Wiring and Interconnection of Components
The EOD micro-rovers were designed using a traditional centralized architecture approach in
which every sensor and actuator throughout the system is connected directly to the main central
processing unit (CPU). The characteristics of a centralized architecture will be discussed in
greater detail in Chapter 3.
The main processor of the EOD micro-rover is located on the center platform of the
vehicle. The main processor is a 12 MHz Little Giant microcontroller that is based on the Zilog
Z180 microprocessor. An input and output (I/O) expander board is used as well to accommodate
the number of digital I/O lines required to interface to the many sensors and actuators. This main
processor stack stores and executes the micro-rover's main program and is responsible for the
following tasks:
collecting data from the sensors, processing the navigation and control
algorithms, commanding the actuators, and communicating with the ground station.
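The structure of that centralized loop might be sketched as below. This is only an illustration of the task sequence just described; all names are invented, and on the actual rover these tasks ran on the Little Giant with dedicated I/O lines to each device:

```c
#include <stdbool.h>

/* Hypothetical types and stub tasks for a centralized control loop. */
typedef struct { float gyro_rate, sonar_range; }   sensor_data;
typedef struct { float heading, x, y; }            nav_state;
typedef struct { float wheel_speed, steer_angle; } actuator_cmds;

static void read_all_sensors(sensor_data *d) { d->gyro_rate = 0.0f; d->sonar_range = 0.0f; }
static void run_navigation(const sensor_data *d, nav_state *s) { s->heading += d->gyro_rate; }
static void run_control(const nav_state *s, actuator_cmds *c) { c->steer_angle = -s->heading; c->wheel_speed = 1.0f; }
static void command_actuators(const actuator_cmds *c) { (void)c; /* drive the motor electronics */ }
static void service_ground_station(void) { /* exchange telemetry and operator commands */ }

int main(void)
{
    sensor_data raw = {0};
    nav_state state = {0};
    actuator_cmds cmds;
    while (true) {                      /* one pass per control cycle */
        read_all_sensors(&raw);         /* every transducer is wired directly to this CPU */
        run_navigation(&raw, &state);   /* sensor processing and navigation algorithms   */
        run_control(&state, &cmds);     /* compute actuator commands                      */
        command_actuators(&cmds);       /* motors, steering, grappler                     */
        service_ground_station();
    }
}
```

The point of the sketch is that every task, down to raw device handling, competes for time on the single processor.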
A simplified version of the EOD wiring diagram is shown in Figure 1-2.
As was
previously stated, the system was implemented using a traditional centralized processing
architecture and each peripheral device in the system uses a number of dedicated data lines
which connect it to the CPU. The vehicle is broken into three platforms. Each platform has a
DB50 data connector and a DB50 power connector, allowing the platforms to be easily
connected and disconnected. The data connector carries the low current signals that connect the
sensors to the CPU while the power connector carries the high current signals that drive the
motors. Inside the center platform is a backplane through which all the wires to and from the
CPU must pass. Should the processor fail, this design allows it to be easily replaced.
Although the EOD system architecture seems very simple and modular to some level, its
highly centralized architecture made it difficult and very frustrating to develop and maintain.
The next section will discuss some of the problems encountered with the architecture as well as
possible solutions.
Figure 1-2: EOD Micro-Rover Wiring Diagram
1.5.2
EOD Architectural Problems and Solutions
Throughout the construction, development, programming, and maintenance of the EOD micro-rovers, various problems were discovered which led to the desire for architectural changes to alleviate them.
During programming, problems began to arise with the Little Giant microprocessor. The
Little Giant was having difficulty processing all of the low level driver code required for each
peripheral in addition to the high level behavioral code required for the vehicle navigation and
control algorithms. Initially, to relieve this burden on the Little Giant, the high level tasks were
moved to the ground station and the rover lost the ability to perform its own path-planning and
navigation, without ground station assistance. Eventually, a decision was made to replace the
Little Giant microprocessor with an Intel 486 (x86-based) processor running at 50 MHz. This type of processor possesses much more computing power than the Little Giant
and would allow the path-planning and navigation code to remain on-board the rover. However,
there was one major problem associated with the x86 based processor. Unlike the Little Giant,
which has many digital and analog I/O ports for directly controlling each peripheral, the x86
based processor has only two serial ports and relies on the addition of expansion cards for
additional I/O ports. To avoid adding these expansion cards to accommodate the various sensors and actuators, it was determined that a different method of communication was needed.
Because of the central processing architecture used on the vehicle, the intensely
burdensome task of low level hardware management was placed on the main processor. The
sensors communicate all raw data to the main processor where it is processed into meaningful
data which can then be used by the high level algorithms. In a similar manner, the actuators
would receive low level hardware commands directly from the main processor. This burden can
be relieved by distributing the processing load. A dedicated embedded microcontroller placed at
each sensor and actuator would allow any of the high bandwidth control loops to be performed
locally at the specific peripheral devices.
The sensor or actuator with the embedded
microcontroller becomes a "smart" node; this idea will be discussed further in Chapter 4.
Essentially, the CPU is required to only send higher level commands to each device in a
predefined data format and the process by which those commands are carried out is left to the
microcontroller at each device. With respect to modularity, this concept allows easy substitution
and interchanging of sensors and actuators. Because of the predefined data format, the CPU will
always communicate with a specific sensor or actuator in the same manner. When a sensor or
actuator must be substituted into the system, the only major task is to program the locally
embedded microcontroller to transform the device data into the proper format.
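A minimal sketch of such a predefined command format, and of the handler that would run on the node's embedded microcontroller, is shown below in C. The field layout and names are invented for illustration; the thesis does not specify a particular format:

```c
#include <stdint.h>

/* Hypothetical predefined message format for CPU-to-node commands. */
typedef struct {
    uint8_t node_address;  /* which smart node this command targets  */
    uint8_t command;       /* high-level command code                */
    int16_t argument;      /* command parameter in engineering units */
} node_command;

enum { CMD_SET_SETPOINT = 1, CMD_REPORT_STATUS = 2 };

/* Runs on the node's local microcontroller: the CPU sees only the
 * predefined format, while the device-specific work stays local. */
void handle_command(const node_command *msg)
{
    switch (msg->command) {
    case CMD_SET_SETPOINT:
        /* close the high-bandwidth loop locally, e.g. a motor speed
           loop, using msg->argument as the commanded setpoint */
        break;
    case CMD_REPORT_STATUS:
        /* translate local raw device data into the predefined
           reply format before placing it on the bus */
        break;
    }
}
```

Substituting a different device then means rewriting only this local handler, while the CPU-side software is untouched.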
The final problem encountered on the EOD rover was due to the wiring architecture.
The centralized processing architecture required dedicated data lines for each peripheral device
and each device was wired directly into the CPU. The disadvantages associated with this type of
approach include: large numbers of dedicated I/O lines, difficult documentation and debugging,
and high material and labor costs. The large number of wires caused a major inconvenience
during the construction phase of the rovers. A team of students was dedicated, for over a week,
to the task of making the approximately 2,000 wire crimps necessary. Also, because of the
many wires run throughout the system, it was very difficult to locate wiring problems when they
occurred. It was not uncommon for two students to spend an entire day trying to locate a faulty
wire. Sometimes the documentation was not up to date and this would exacerbate the problem.
It was determined that the best way to alleviate this problem was through the use of a data bus;
the data bus will be discussed in Chapter 4.
A data bus allows the different sensors and
actuators, throughout the system, to share the same transmission medium, when communicating
with the CPU, thus reducing the total number of wires run throughout the system and reducing
the cost and time to manufacture the product as well.
A fellow student from the IUVC has designed a system which implements the previously
mentioned architectural changes for the EOD micro-rovers [21]. A simplified version for the
updated wiring diagram would be similar to that in Figure 1-3. As will be explained in Chapter
4, a data bus is limited by the maximum bandwidth it can support. Devices can be added to the
data bus as long as the rate at which information must be passed to or from the device does not
exceed the maximum bandwidth of the bus. This allows the system to be readily expanded and
developed.
Figure 1-3: Wiring Diagram for a Bus-Based EOD Micro-Rover
With these ideas in mind, and guided by the lessons derived from the EOD rover
experience, it will be feasible to develop and present an improved architectural approach for a
UAV flight control system.
1.6
Approach
The work of this thesis can be broken down as follows:

* Identify a span of system level requirements for a class of UAVs. An initial list of vehicles will be considered and this list will be narrowed through an elimination process based on criteria such as mission, size, or Draper production potential for specific vehicles. Next, traditional requirements will be defined for the specific UAVs chosen. Traditional requirements refer to general system requirements such as size and weight as well as instrument requirements such as the drift of inertial instruments. Finally, a set of desired requirements or characteristics for an alternative architecture will be defined.

* Determine the optimum level of standardization and select the best architecture to satisfy the defined requirements. Initially, it was assumed that adopting an architecture which would allow such characteristics as modularity and expandability would require some sort of standardization. The degree of standardization will have to be determined, as well as the best method to implement standardization. Data bus technology offers many benefits in this context.

* Analyze and draw conclusions based on a predefined set of metrics for the proposed architecture versus a traditional centralized architecture. In order to effectively measure the validity of an alternative architecture, a list of metrics will have to be established. Using these metrics will allow conclusions to be drawn on two major issues: the expected development cost and time over the life of a product in the event of a system redesign or expansion.
The goal of this thesis is not to point out major design flaws in existing systems, but
rather to study the existing systems and gain insight into the overall design and decision
processes. The vehicles examined were each originally designed with one specific purpose or
mission in mind and the architecture chosen for the vehicle best accomplished that mission.
However, when multiple purposes or missions are considered and a design must be tailored to
many platforms, in the most timely and cost effective manner, these existing architectures are
impractical. Therefore, an alternative architecture that best meets these multiple missions or
needs must be defined.
Chapter 2
Requirements Definition Process
2.1
Narrowing The Application
Examining the missions and requirements of different existing UAVs is a good first step in
architectural development. This type of examination will reveal advantages and disadvantages of
different vehicles and their architectures, and allows conclusions to be drawn which can then be
used to narrow the class of UAVs under study. Figure 2-1 below gives the reader an idea of the
relative size of a variety of existing UAVs. They are compared by wingspan of the aircraft and
the weight of the payload the aircraft will carry. Figure 2-2 on the following page also gives the
reader an idea of the relative mission characteristics for the same UAVs. It compares the mission
duration or the endurance of the vehicles with the range of the vehicles.
Figure 2-1: Payload and Wingspan of Existing UAVs
The UAVs considered in the study were:

* Global Hawk - a fixed-wing high altitude endurance UAV used by U.S. forces
* Predator - a fixed-wing medium altitude endurance UAV used by U.S. forces
* Hunter - a fixed-wing tactical UAV used by U.S. forces
* Pioneer - a fixed-wing tactical UAV used by U.S. forces
* Pointer - a small fixed-wing UAV used by the U.S. Army and U.S. Marine Corps
* Draper Small Autonomous Aerial Vehicle (DSAAV) - a small rotary-wing aircraft used at Draper
* Wide Area Surveillance Projectile (WASP) - a Draper funded, guided munition project at MIT
* Ultra-light - a class of very light fixed-wing aircraft
* Micro-UAV - a proof of concept study being conducted by the Defense Advanced Research Projects Agency (DARPA)
Figure 2-2: Range and Endurance of Existing UAVs
This list of vehicles was narrowed down to a much smaller group of vehicles. The
endurance vehicles were eliminated from the study because their very large airframe made them
unrealistic for production by Draper. Further research indicated that it would even be unrealistic
for Draper to be the prime contractor for aircraft the size of Hunter or Pioneer, and they were
eliminated from the study as well. The Ultra-light class of vehicles was eliminated because the
placing of expensive electronics on a cheap airframe seemed unrealistic.
Due to a lack of
information on its navigational system requirements, the Pointer was also eliminated from the
study. The remaining aircraft, in order by size, included: the Micro-UAV, the WASP, and the
DSAAV.
In summary, the aircraft under study were narrowed down to a group of "small" UAVs
for the reasons mentioned.
In addition, no commercial off-the-shelf (COTS) flight control
systems were found to meet the requirements and missions for these vehicles. The elimination of
the larger tactical and surveillance vehicles was driven by the fact that many COTS flight control
systems are already in production for this class of vehicles. There is heavy competition between
larger prime contractors in this area and the vehicle program scale would be unrealistic for
Draper to get involved in.
2.2
Defining Vehicle Requirements
After narrowing down the application to a small class of UAVs, the next step was to gather
requirements for these specific aircraft, examine those requirements, and draw conclusions about
different methods to use in order to satisfy the range of requirements across the different
vehicles.
All three vehicles have some level of autonomy which is provided by the flight control
system.
As stated before, the Micro-UAV is still being researched by DARPA and the
requirements are therefore still preliminary. The aircraft is small enough to fit in the palm of a
human hand and its primary mission is surveillance-another possible use would be NBC
detection. The mission of the WASP can be divided into two sub-missions. It is a precision
guided munition which transforms itself into a fixed-wing aircraft to be used for surveillance.
The requirements used for the WASP in this thesis will cover the guided munition portion of the
mission.
The DSAAV is an autonomous helicopter whose primary missions are object and
vehicle recognition and small object retrieval.
A large list of requirements for the three vehicles was generated. In order to initially
narrow the flight control system study, the navigational subsystem was chosen as the primary
subsystem to examine. Therefore, the requirements gathered included both the physical aircraft
requirements and the navigational subsystem requirements. The navigational subsystem requirements included primarily the requirements for the GPS and inertial subsystems. A condensed list of the requirements can be seen in Table 2-1 below. It includes only those requirements from the original list that had any significant meaning and were useful for comparing the vehicles.
Parameter                           Micro-UAV   WASP        DSAAV
Max Endurance (hrs)                 1           0.1         0.5
IMU Weight (lbs)                    0.02        0.11        1.96
IMU Volume (in³)                    1           5           33.5
IMU Input Voltage (V)               TBD         ±12, +5     ±15
IMU Power (W)                       <5          3           7
IMU Shock (g)                       TBD         16000       200 (peak)
Gyro Bias Magnitude (deg/hr)        3600        18000       156
Accelerometer Bias Magnitude (g)    TBD         <2          0.006

Table 2-1: UAV Requirements
Two conclusions were drawn from analyzing these requirements. First, instrument performance and utilization vary widely. Second, there is a great difference in the mentioned requirements based on the missions of the three UAVs studied. The requirements which varied the most were volume, weight, shock, endurance, and the bias magnitudes for the gyros and accelerometers. These conclusions led to the following question: could a single flight control system satisfy the span of all vehicle requirements without too much overkill?
In order to answer this question, three systems were examined.
The three systems
included a hypothetical centralized flight control system and two COTS flight control systems:
the C-MIGITS II, a miniature integrated GPS/INS tactical system from Boeing, and the
BGH1251, a miniature flight management unit from Honeywell.
The hypothetical system is
based on the current practice in small UAV flight control system design and will be referred to as
the Small UAV Current Practice Baseline (SCPB). The requirements gathered for the systems
were similar to those gathered for the vehicles and included the physical requirements of the
system as well as the performance requirements of the navigational components. A condensed
list of the requirements is shown in Table 2-2. It includes only those requirements from the
original list that had any significant meaning and were useful for comparing the systems.
Parameter                           SCPB             C-MIGITS II      BGH1251
                                    (Hypothetical)   (Boeing)         (Honeywell)
IMU Weight (lbs)                    0.10             2.1              2
IMU Volume (in³)                    4.5              43.2             33
IMU Input Voltage (V)               ±12, +5          +28              ±15
IMU Power (W)                       3                17               8
IMU Shock (g)                       16000            150 for 11 ms    Not Available
Gyro Bias Magnitude (deg/hr)        10000            10               1-10
Accelerometer Bias Magnitude (g)    1                0.0015           0.001

Table 2-2: Integrated Navigation System Requirements
A number of conclusions were drawn based on the two previous tables. First, the COTS
systems are too large and power hungry to satisfy small UAV requirements. The BGH1251 does
come very close to satisfying the requirements for the DSAAV. This instrument, however, cannot
satisfy the requirements for the other two vehicles because of its large size.
Next, it was
concluded that the SCPB flight control system would satisfy the requirements for the WASP but
not the DSAAV or the Micro-UAV.
This is a result of the mission duration and the bias
magnitude for the inertial instruments-a longer mission will require more precise inertial
instruments.¹ Furthermore, because the SCPB system is a centralized, point design solution, higher bandwidth sensors such as vision-based navigation may overload the system.²
Substitution of a more precise inertial instrument into the SCPB flight control system could
allow the system to meet the requirements for the DSAAV and the Micro-UAV, solving the
problem.
The conclusions drawn from the requirements definition process have suggested
something about the vehicle and existing system architectures. In fact, the process can actually
be viewed as a proof of concept for initial assumptions that were made about flight control
system architectures.
The major assumption was that the traditional centralized architecture,
under which most flight control systems of today are designed, would be impractical for tailoring
a design to many platforms in the most timely and cost effective manner. This assumption has
been verified by the conclusions drawn in this chapter. Specifically, the existing centralized
flight control systems examined could not meet the requirements of multiple vehicles. As a
result, an alternative architecture would have to be defined, and the conclusions suggest that a
scalable architecture that preserves the prior design, but allows for sensor and processor
improvements, is a good approach. The following chapter builds on this approach and looks at
flight control system architectures in order to draw initial conclusions on implementation
methods.

¹ It is assumed that the threat of GPS jamming exists.
² The addition of vision-based navigation on the WASP is being researched [6].
Chapter 3
Traditional Flight Control System
Architectures
The conclusions drawn from the requirements definition process suggest an approach for
satisfying the span of requirements across the vehicles. That approach is a scalable architecture
which preserves the prior design of the system, but also allows for component improvements
throughout the life of the vehicle. This chapter will build on that suggestion by examining flight
control system architectures. First, the current practice in these architectures will be analyzed
and conclusions will be drawn based on traditional architectures. Next, alternative architectures
that will satisfy a list of desired characteristics will be considered.
This step will include
exploring different methods of connecting transducers to a CPU, defining the desired system
characteristics, and finally drawing conclusions on the different ways to achieve those
characteristics.
3.1
Traditional Architectures
Flight control systems are traditionally designed using a centralized processing architecture. In
fact, a majority of the small UAV flight control systems being used today are still designed using
this approach. The design focuses around a single central control unit (CCU), where a main CPU
is used to perform all of the sensor fusion, navigational algorithms, and other computational
tasks.
Generally, the CCU will have multiple analog and digital I/O pins to which the
transducers¹ are wired directly. Figure 3-1 shows the overall layout of such an architecture. As
described, the CCU is at the center of the diagram, with the transducers branching off from the
I/O pins.
¹ As used here, a "transducer" is any sensor or actuator used by a system.
Figure 3-1: Traditional Centralized Processing Architecture
There is one major advantage to this particular architecture.
If a designer has one
particular purpose in mind, or a single list of requirements to meet, very high performance of the
system can usually be obtained. For example, when it is determined that a system must be
designed to tackle a single mission, the designer may concentrate all of his or her efforts on
designing a system that will best perform that particular mission. In technical terms, the designer
"custom designs" the system to best meet his or her needs.
However, if in the future, the
designer must adapt this custom design to perform additional missions, he or she will likely find
this to be a difficult task. The major disadvantages of traditional architectures are that they are
inflexible, they are not modular, and they are not expandable.
This can be due to a number of reasons. First, the system could have a specific limited
CPU capacity. This could come in the form of the number of I/O pins available, the amount of
memory available, or the speed at which the CPU is capable of running the system.
Next, substitution of transducers will impact all specific hardware and software
interfaces. For example, suppose that the system contains some general transducer with specific
system requirements (e.g. power, I/O, signal conditioning circuitry). Now, suppose a decision
has been made to upgrade that transducer with a more sophisticated sensor with the same general
purpose. On the hardware side, this might require an additional power rail to be run through the
system. If all the I/O lines are already committed, circuitry must be designed to extend the I/O of
the CCU.
Also, the connectors used to interconnect the various components may be full,
requiring a mechanical alteration of the system to run additional connectors.
Finally, this
mechanical alteration may require more changes in the electrical system. A similar result occurs
on the software side as well. When the low-level software used to control the transducer is
rewritten, changes in the higher-level software may be required as well. These domino effects
end up wasting time and money, and they can possibly be avoided with the implementation of an
alternative architecture [1].
Finally, because the transducers are directly wired into the CCU, and possibly to each
other, many wires will be run throughout the system. The more wires that are run throughout a
system, the greater the complexity of the system, and the more chance for errors and associated
problems. In addition, such complexity makes it much more difficult to troubleshoot the system
when the problems do occur.
As a result, a major conclusion can be drawn about traditional centralized processing
architectures.
The disadvantages mentioned, as well as the overall inability to upgrade the
system, will cause the product to become obsolete after a period of time. It is hypothesized that
this period of time can be extended at minimum development cost and time through
implementation of an alternative architecture.
3.2
Defining New Requirements
When considering alternative architectures, it makes sense to first study the various architectures
used in the interconnection of systems and then to generate a list of desired system characteristics
or new requirements.
The different methods of connecting transducers to a main CPU are shown in Figure 3-2.
The different methods are labeled with letters and increase in complexity from left to right.
Method A shows a "dumb"² transducer connected directly to the main CPU.
Any A/D
conversion or signal conditioning for the transducer is done centrally in the main CPU and is the
method previously shown in Figure 3-1. Method B shows a dumb transducer connected to the
main CPU through local circuitry-either A/D conversion or simple signal conditioning. This
method begins to remove a portion of the data computation from the main CPU. Method C
shows a "smart node"³ connected to the CPU via a data bus. The smart node will be discussed in
Chapter 4. Recall from the ULV case study in Chapter 1 that a data bus allows the different
sensors and actuators of a system to share the same transmission medium when communicating
with the CPU. The main point here is that a smart node includes three elements: a dumb
transducer, a local microcontroller for data calculations, and a bus interface which allows the
device to talk to the main CPU through the data bus. These are enclosed in the dashed line boxes
in Figure 3-2. Method D shows a multiple sensor smart node, connected to the main CPU via the
data bus. Finally, method E depicts what is known as peer-to-peer communication. This form of
communication allows transducer to transducer communication without involving the main CPU,
and is readily accomplished with the use of smart nodes.
Figure 3-2: Interconnection of Transducers

² As used here, a "dumb" transducer is any sensor or actuator which outputs only raw data; no computation of the data is done local to the transducer.
³ Pradip Madan, from Echelon Corporation, defines a "smart node" as any device integrating a transducer element and basic electronics for linearization, calibration, and communication [15].
After initially analyzing traditional flight control system architectures and then
specifying the different methods of connecting transducers to a main CPU, new requirements for
an alternative flight control system can be defined. The term "new requirements" refers to a list
of desired characteristics for the alternative architecture. Many of these characteristics have
been mentioned previously, and they support the request for a scalable architecture made in
Chapter 2:
* System modularity and upgrades
* Continuing expansion of system capability
* Easy troubleshooting of transducers - they can be tested off-board and more easily maintained using common test facilities
* Distributed processing as required
Recall that a body of technology exists and is in wide use to implement plug-and-play
systems of sensors and actuators and meet these desired characteristics. It is known as the data
bus. In order for a product to meet these requirements, it must possess an architecture which is
different from many systems currently on the market.
Direct processor interface designs in
traditional architectures are difficult to modify and eventually become outdated. In order for a
system to be modular, a plug-and-play simplicity with standard interfaces is desired, where
different parts of the flight control system could be interchanged and have as little effect as
possible on the entire system. Chapter 4 will show that a data bus can meet all of these needs.
3.3
Initial Architectural Conclusions
This chapter has tied together many ideas to form initial architectural conclusions.
The stage
was set with the requirements definition conclusion calling for a scalable architecture which
preserved the prior design of the system, but also allowed for component improvements
throughout the life of the vehicle. It was determined that the traditional centralized processing
architecture could not easily allow for component improvements, and its direct processor
interface design eventually led to an obsolete product after a short period of time. A plug-and-play capable system with standard interfaces is desired, and it was determined that the best way
to implement this type of system is through a distributed architecture implementing a data bus.
The next chapter will discuss the data bus, its overall characteristics, and existing data bus
technology.
Chapter 4
The Data Bus
A data bus offers many benefits to engineers who must design and implement distributed
architecture systems. This chapter will discuss the major characteristics of the data bus, its
advantages over the point-to-point solution in a centralized architecture, and the issues
surrounding the implementation of a distributed architecture using the data bus. In addition,
existing data bus technologies will be discussed and comparisons between various busses will be
made. The chapter will end with a description of two existing data busses.
4.1
Data Bus Characteristics
A bus is a common set of electrical wires connecting a group of devices, such as sensors and
actuators, in order to transfer data among them. Bus communication is not like point-to-point communication in traditional centralized architectures.
Unlike point-to-point connections, which allow only two devices to communicate by
connecting the I/O ports of the first device with the I/O ports of the second device, a bus joins a
larger number of devices and allows the exchange of data among all of them. In point-to-point
communication, when two devices are connected to each other, they are usually free to send data
back and forth when and how they like. On the other hand, bus communication requires a set of
rules, known as the protocol, to let information flow from one device to another.
Information travelling over the bus has a digital format and is transferred serially in bits
(zeros and ones) or bytes (eight bits). Serial communication has the advantage of only requiring
a limited number of lines to transfer information. The serial interface requires only one signal
wire to carry data in one direction and two wires to carry data simultaneously in two directions.
Sometimes a bus might have a positive and a negative data line. If two data lines are run close
together along the same path, it is likely that they will be subjected to the same noise. The
purpose of positive and negative data lines is to subtract the one line containing the noise, from
the other line containing the signal and the noise, leaving the true signal. This process is referred
to as common mode rejection of the noise.
In many cases, information sent across the bus will be seen by all devices connected to
the bus. Each device is given a number or address which distinguishes it from other devices
connected to the system. The address is attached to the packet of information containing the
message data, and devices can read this address to determine which device the message is
destined for.
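To make this addressing scheme concrete, the following C sketch shows one way a node might filter bus traffic by destination address. The packet layout, field names, and addresses are hypothetical illustrations, not the format of any particular bus protocol.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical bus packet: a destination address attached to the message data. */
typedef struct {
    uint8_t dest_addr;   /* address of the device the message is destined for */
    uint8_t length;      /* number of valid payload bytes */
    uint8_t payload[8];  /* message data */
} BusPacket;

#define MY_ADDR 0x12     /* address assigned to this node at configuration */

/* Every node sees every packet; each keeps only packets bearing its address. */
void on_packet_received(const BusPacket *pkt)
{
    if (pkt->dest_addr != MY_ADDR)
        return;                      /* not addressed to this node: ignore */
    printf("packet accepted: %u payload bytes\n", pkt->length);
}

int main(void)
{
    BusPacket p = { 0x12, 2, { 0xA5, 0x01 } };
    on_packet_received(&p);          /* accepted: address matches */
    p.dest_addr = 0x34;
    on_packet_received(&p);          /* silently ignored */
    return 0;
}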
The advantages previously mentioned give the data bus a flexible topology making it
easy to add new devices to existing systems. One limitation of the data bus is bandwidth. The
bandwidth of a data bus determines the maximum amount of information that can travel over the
bus during a period of time. A device can be added to the data bus as long as the bandwidth
requirement of the device does not exceed the maximum bandwidth of the bus.
4.1.1
Topology
The topology of a system is the structure or arrangement of the system's nodes in relation to each
other on the data bus. Recall from Chapter 3 that a node was defined as the integration of a
transducer, a microcontroller for data calculations, and a bus interface to allow the device to
communicate over the bus. The topology of a system usually determines how data will flow
throughout that system. When designing a system, an engineer must decide which topology best
meets his or her needs. The most common topologies are the ring, the star, and the tree.
4.1.1.1
The Ring
As shown in Figure 4-1, a ring topology has all of its nodes connected in series around a "ring",
allowing data to travel in only one direction around that ring. When the system is configured,
each node is given an identifier or address which it uses to distinguish itself from other nodes
connected to the system. When a node receives the signal, it extracts any information that was
labeled with its address. Then, if it wants, it adds information and a destination address to the
signal, and repeats the signal to the next node in the ring. There are limitations to the ring. The
system cannot be extended while it is running, because doing so would require breaking the
ring. A single point failure also exists because if one node fails, the entire system will fail.
Figure 4-1: Ring Topology for a Data Bus
4.1.1.2
The Star
As shown in Figure 4-2, the star centers around a master or host node. Every node in the system
is connected to that host and all communications must go through the host. This type of structure
is generally considered a master-slave architecture. Nodes can be easily added to the system
without interrupting the operation of the system. Also, the failure of one of the slave nodes does
not crash the whole system. A single point failure still exists at the central node; if the central
node fails, the entire system will fail. The star structure appears to be very similar to that of a
centralized architecture. Recall from Chapter 3 that the transducers of a centralized architecture
are wired directly into the central node making replacement and substitution of the transducers
difficult. A star structure differs from the true centralized architecture in that each slave node is
connected to the master node through a dedicated port or interface. Data bus interfaces will be
discussed in Sub-section 4.1.2.
Figure 4-2: Star Topology for a Data Bus
4.1.1.3
The Tree
The tree structure is the most popular data bus topology. As shown in Figure 4-3, the tree
consists of nodes branching off of the bus. The tree can be configured to allow communications
in one of two modes. In one mode of communication, a node can be designated as the host node
and the tree can operate in a master-slave protocol. In a different mode of communication, the
nodes can talk among themselves through a peer-to-peer communication protocol. Peer-to-peer
communication requires no intervention from a central controller. In either mode, the benign
failure of one of the nodes will not crash the system. However, a node that fails by
broadcasting out of turn on the bus can cause a system crash. As shown in Figure 4-3, the tree
topology requires terminating resistors at the ends of the bus. When a signal is sent over the bus,
it will reach all of the nodes and then will reach the end of the bus. In many cases, because the
signal cannot just disappear at the end of the bus, abrupt termination of the bus will cause a
reflection of the signal and disturb communications. For that reason, the terminating resistors are
placed at each end of the bus to absorb the signals and prevent reflections.
Figure 4-3: Tree Topology for a Data Bus
4.1.2
Interfaces
While node topology determines the overall structure of the nodes on the bus, the interface
determines how the nodes are physically attached to the bus. Generally, there are three types of
interfaces for connecting transducers to a main controller or to each other: a custom interface, a
standard interface, and a universal interface.
A custom interface is characteristic of the traditional centralized architecture and
transducers are hardwired directly into their own specific port. To change a transducer would
require unwiring the old transducer and wiring in the new transducer. This type of interface
could not exist in a modular system.
With a standard interface, different ports are assigned to handle specific transducers.
The different sensors and actuators are interfaced internally in their local region or node and then
connected to these specific ports.
For example, the system might have a "gyro port", an
"accelerometer port", and a "GPS port". Every type of gyro, accelerometer, and GPS will have a
standard interface which will allow the instrument to communicate through its port. Each port is
given an address or identifier which remains the same throughout the life of the system. This
allows a given port to be accessed at any time through its address. This type of interface adds
more modularity to a system than a custom interface.
The universal interface gives a system the greatest degree of modularity.
With a
universal interface, all ports on the main controller or bus have the same interface and can handle
all possible transducers. Where a gyro, accelerometer, and GPS would be connected to their own
ports with a standard interface, a universal interface would allow the instruments to be connected
to any port on the main controller or bus. When a node is attached to the system, a main
controller will configure it and assign an address or identifier to its port so that software will
know how to access the node.
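The configuration step described above can be sketched in C as follows. When a node is attached to any port, the main controller assigns it the next free address so that software can reach it regardless of which port it occupies. The table layout and the assign_address helper are hypothetical, intended only to show the idea.

#include <stdint.h>
#include <stdio.h>

#define MAX_NODES 16

typedef struct {
    uint8_t address;        /* identifier assigned by the main controller */
    const char *node_type;  /* e.g., "gyro", "accelerometer", "GPS" */
} NodeRecord;

static NodeRecord table[MAX_NODES];
static uint8_t next_addr = 1;       /* address 0 reserved for the controller */

/* Called when a new node is plugged into any port of the universal interface. */
int assign_address(const char *node_type)
{
    if (next_addr >= MAX_NODES)
        return -1;                  /* no free addresses */
    table[next_addr].address = next_addr;
    table[next_addr].node_type = node_type;
    printf("configured %s at address %u\n", node_type, next_addr);
    return next_addr++;
}

int main(void)
{
    /* Any instrument may be attached to any port; order does not matter. */
    assign_address("gyro");
    assign_address("accelerometer");
    assign_address("GPS");
    return 0;
}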
An interface defines how nodes are connected to a data bus. The manner in which these
nodes communicate will be defined next.
4.1.3
The OSI Model
The OSI (Open System Interconnection) model, developed by the ISO (International Standard
Organization), is a framework for identifying the relationship of different functions in
communication systems.
Because many devices transfer data over the same communication
medium, the bus access protocol must contain functions for handling situations that would not be
encountered if the devices communicated in a point-to-point manner. Some typical functions
include:
data formatting and framing, addressing, routing, error and collision detection, and
collision avoidance.
Specifically, the OSI model defines seven layers of functions for
communication systems. These layers and the functions performed by each layer are shown in
Table 4-1 [3].
LAYER 7: Application - Defines the application or node interface
LAYER 6: Presentation - Data interpretation: code conversion, data encryption, data compression
LAYER 5: Session - Handles interaction between two end points in setting up a session, provides the transport layer with information needed to establish the session (directory assistance), and deals with access rights (security)
LAYER 4: Transport - Establishes a logical channel of communication between two points, breaks messages into packets and reassembles them on the other side, provides acknowledgements, and multiplexes sessions
LAYER 3: Network - Handles addressing, routing, and flow control
LAYER 2: Data Link - Handles frame formatting, error detection, and collision detection and avoidance
LAYER 1: Physical - Provides the virtual link for bit transmission and defines the physical characteristics of the communication portion of the circuit (voltage levels and currents)

Table 4-1: The OSI Model for Communication Systems
The method by which these layers function and move data through communication
systems is very similar to the method by which a gift is packaged and sent through the mail.
First, a person (layer) must obtain and prepare the gift (data) that must be sent. Then the gift is
packaged by wrapping it and securing it in the package. The package is then sent out through the
mail just like the data is sent out over the communications medium. Upon receipt of the package
by the receiver, it is then unpacked or reconstructed to obtain the gift inside.
4.1.4
Smart Nodes
The idea behind distributed control is that modularity can be achieved by adding appropriate
"intelligence" at the sensors and actuators. "Smart nodes", a common name for intelligent nodes,
are the fundamental functional components of distributed control systems. In the most liberal
interpretation, a smart node consists of a transducer combined with signal conditioning and other
circuitry to produce output information signals that have been logically operated upon to increase
the value of the information [17].
Nader Najafi of MRL/Sensors in Essex Junction, New
Hampshire, identified a set of attributes for the ideal smart node in March of 1993. He said a
smart node should digitize analog transducer outputs, have an external bi-directional bus for
communications, contain a specific address for user access and identification, and be able to
execute commands or logical functions received over a digital bus [13].
The intelligence in a smart node is generally added in the form of a microcontroller as
was shown in Figure 3-2. By placing a microcontroller local to a sensor or actuator, a more
useful and effective node can be achieved. This microcontroller is responsible for running the
application code that is necessary for the transducer to perform its function. One of its primary
tasks is to perform the computations required to turn the raw data from the transducer into a more
useful standard form.
Consider once more the example of substituting a new sensor for an existing sensor in a
system. Both sensors perform the same function except the new sensor might be smaller and/or
lighter, and it might have different power requirements and/or a different output data format.
With smart nodes on a data bus, this change is relatively easy to make. Because the interface to
the bus will be identical for every node (in the case of a universal interface), there will be no
reason to alter any of the system hardware. If the bus supplies a nominal voltage, a power
converter can be added to the node to convert this nominal voltage into the voltage needed by the
new sensor. Because data is passed across the bus only in a standard format, the microcontroller
on the new node will be responsible for converting the raw data from the new sensor into this
format. This standard format is expected by the central controller or host node, and therefore
changing a sensor will have little effect on the main system software. It makes no difference to
the host node where and in what format the data originated, just as long as the node is able to
supply the data in the standard predefined format. This method of operation makes matters
easier for an engineer who must design a new sensor for an existing system.
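A minimal C sketch of the microcontroller's job in such a node is given below. The scale factor, the raw-sample source, and the standard output format are hypothetical; the point is only that the conversion to the predefined format happens locally, so the host never sees the sensor's native representation.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical standard format expected by the host: rate in millidegrees/s. */
typedef struct {
    uint8_t  node_addr;
    int32_t  rate_mdeg_per_s;
} StandardSample;

#define NODE_ADDR       0x07
#define COUNTS_PER_DEG  250      /* made-up scale factor for the new sensor */

/* Stand-in for reading the transducer's A/D converter. */
static int16_t read_raw_counts(void) { return 1250; }

/* Stand-in for the bus driver. */
static void bus_send(const StandardSample *s)
{
    printf("addr 0x%02X -> %ld mdeg/s\n", s->node_addr, (long)s->rate_mdeg_per_s);
}

int main(void)
{
    /* High-bandwidth work stays local: raw counts are converted here,
       and only the low-bandwidth standard sample crosses the bus. */
    StandardSample s;
    s.node_addr = NODE_ADDR;
    s.rate_mdeg_per_s = (int32_t)read_raw_counts() * 1000 / COUNTS_PER_DEG;
    bus_send(&s);
    return 0;
}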
The overall operation of a distributed control system, utilizing smart nodes, is quite
simple. High bandwidth operations specific to the transducer are carried out locally in a
microcontroller, and bus interface circuitry connects the transducer module to the bus. The host
node can then use software to either read from a sensor with a specific address or write to an
actuator with a specific address. This method of operation offers many advantages.
First, because all high bandwidth operations are done in the microcontroller local to each
transducer, the amount of data that must be sent to the main controller over the bus is reduced.
This makes it possible to design classes of nodes which could supply the same kind of low
bandwidth output to the bus. This output could then be accessible to the main controller through
software.
To allow for rapid prototyping for specific missions, different nodes could be
connected and disconnected without affecting any of the other nodes. These devices could be
tested off-board before they are added to a specific vehicle. Fault tolerance can also be achieved
with a distributed architecture [1]. Redundant nodes can be used to prevent a single failure of
one node from affecting the performance of the system. As was mentioned at the beginning of
this section, available bandwidth is the primary limitation to a data bus architecture. Expansion
of the whole system is facilitated and nodes can be added as long as there is enough bandwidth to
support them.
The disadvantages of a traditional centralized architecture in flight control systems were
covered in Chapter 3. Many of the disadvantages that were mentioned are alleviated with the use
of a distributed architecture, leaving a modular, expandable, and upgradable system.
4.2
Data Bus Technology
Many different types of data bus technologies exist today. Two major issues arise which provide
a means for comparison between these different busses:
varying complexity of bus
implementation and bandwidth limitations of the bus. As used here, complexity refers to the
amount of overhead or difficulty, in terms of hardware and software, required to implement the
data bus into a system. It is likely that increased hardware and software at the transducer level
may require higher initial costs. However, this increased hardware and software helps add a
plug-and-play capability to the system and once it has been added to the system, the cost of
redesigning or upgrading the system will go down.
The use of a data bus is essential for
development of a modular plug-and-play system. High bandwidth and low complexity are two
desirable characteristics for a small UAV flight control system.
Figure 4-4 compares the
bandwidth and complexity of existing data bus technologies.
While SCSI and TCP/IP data bus approaches offer high bandwidths, both technologies
would require extensive hardware and software development for a small UAV flight control
system implementation. The complexity of such technologies is judged to be very high. At the
opposite extreme, the RS-232, I²C, and SPI busses require little hardware and software
development and are both judged to have a low complexity. However, both technologies have a
very low maximum bus bandwidth and would be unfeasible for implementing a small UAV flight
control system. The MIL-STD-1553 data bus was not considered as an option because of its
bulky system components. The control bus¹ and the high speed serial bus (USB or Firewire)
provide a compromise featuring relatively low hardware and software complexity as well as
reasonable bandwidth for flight control problems. The remainder of this chapter will explore
both of these busses in greater detail.

¹ The control bus is also referred to as the field bus or sensor bus.
Figure 4-4: Bandwidth and Complexity of Data Bus Technologies
4.2.1
Control Bus
Approximately a decade ago, the desire for quick and easy interconnection of sensors and
actuators, in the manufacturing environment, motivated the industrial measurement and control
market to begin looking for alternatives to traditional centralized control strategies [1]. Systems
engineers began studying the possibility of creating a "control" bus: an inexpensive, small, low-power standardized I/O network that would allow for quick and easy interconnection of sensors
and actuators. The most popular control bus protocols used on the market today include:
* Seriplex - Automated Process Control, Inc. (APC)
* Controller Area Network (CAN) - Robert Bosch & Intel
* LONWorks (Local Operating Network) - Echelon
Each of the above control busses supports a maximum bandwidth of approximately 1 Mbps.
Seriplex operates with a specialized ASIC and does not require any additional controllers to run
the protocol. However, it is primarily designed to be used with very simple devices such as
push-button sensors which control on/off actuators such as lights. It is feasible to implement a
flight control system using this data bus, but the implementation would require far too much
additional hardware and customized designs. The CAN bus offers the user the bottom three
layers of the OSI seven-layer standard (network layer, data link layer, and physical layer). The
upper four layers can be implemented with one of two protocols: the Smart Distributed System
(SDS) from Honeywell Microswitch or DeviceNet from Allen Bradley. Microprocessors are
available from many vendors which have the CAN protocol integrated on the chip. These
microprocessors can be used for the creation of nodes throughout a system. LONWorks offers a
similar solution to CAN. Its protocol offers the user all seven layers of the OSI standard and its
own "neuron" chip, available from Motorola and Toshiba, is used for node implementation.
CAN and LONWorks offer many of the same features applicable to flight control system
implementation.
The bandwidth of each bus is comparable as well.
However, because
LONWorks is a more mature technology with more readily available resources, it will be
explored further as a candidate for a control bus implementation of a flight control system.
4.2.2
High Performance Serial Bus
The high performance serial bus is a recent data bus technology used primarily in desktop
computers.
Its primary application is the integration of I/O components and the personal
computer using a low-cost, high-speed serial interface. Its use, however, is slowly spreading
into the area of embedded systems. The invention of the high performance serial bus was driven
primarily by the rapidly growing need for mass information transfer. Typical local area networks
(LANs) cannot provide connection capabilities in a cost-effective manner, nor do they easily support guaranteed bandwidth for "mission critical" applications. The cost per node of a LAN based system is normally at least $1000 [19]. The
universal interface provided by the high performance serial bus will likely replace all existing
serial, parallel, and SCSI interfaces currently used for attaching external peripherals to PCs.
Two high performance serial busses currently exist: the Universal Serial Bus (USB) and
the Firewire.² Similarities between the two busses include:
universal interfaces, support for
dynamic (or hot) swapping³ and plug-and-play, and support for a cabled transmission medium
carrying both data and power. The two busses differ primarily in terms of bandwidth and the
number of devices that they will support. While USB supports bandwidths of 1.5 and 12 Mbps,
Firewire will support bandwidths of 100, 200, and 400 Mbps. For current small UAV flight
control system applications, the need for data rates beyond 12 Mbps is not envisioned. USB will
support 127 devices while Firewire will support 62 devices. Other than bandwidth and device
support, the general characteristics of the busses are very similar. In the area of implementation,
however, Firewire is more hardware and software intensive than USB. While USB has many
one-chip node solutions, Firewire still requires separate chips for the link layer and
the physical layer within a node. Thus, USB is judged to be a better candidate for a high
performance serial bus implementation of a flight control system.

² The Firewire is an IEEE standard (IEEE P1394). Although USB is not currently an IEEE standard, it is owned by members of the USB Implementers Forum and has backing by large companies such as Microsoft, Intel, and Texas Instruments.
³ Hot swapping allows peripherals to be attached, configured, used, and detached while the host and other peripherals are in operation.
4.2.3
Why Not 1553?
The digital data bus, MIL-STD-1553, was designed in the early 1970's to replace analog point-to-point wire bundles between electronic instrumentation in both military and commercial
aircraft. The latest version of the serial LAN for military avionics, known as MIL-STD-1553B,
was issued in 1978 and continues to be the most popular militarized network today [16].
Based on its widespread use in military avionics, one would think that the 1553 data bus
would be a prime candidate for a distributed flight control system architecture in small UAVs.
However, it was not considered for implementation because of two reasons. First, although it is
still very popular and in widespread use, the 1553 is two decades old. All of the data bus
technologies considered in this thesis are less than a decade old. As technology continues to
advance, the newer technologies will generally outperform the 1553. The second reason that the
1553 was not considered for implementation was because of its large size and high power
consumption. Instruments are connected to the 1553 bus through bus couplers which reduce
reflections and maintain signal impedance levels. These couplers vary in size from 3.7 in³ for a single instrument coupler, to 25 in³ for an eight instrument coupler. The system must be able to handle the addition of these bulky couplers every time a sensor or actuator is added. For comparison, a LONWorks control module (with 11 I/O pins) has a volume of 2.6 in³, a USB function controller has a volume of 0.188 in³, and a USB hub has a volume of 0.188 in³. These
components will be discussed later in this chapter. Bus operation also requires a bulky VME
interface card and power in the range of 5 W. Data can be transmitted and received at a
maximum bit rate of 1 Mbps [16,18]. This bit rate is comparable to the bit rate of the control
busses and much less than that of the high performance serial busses. Due to its bulky nature,
high power, and mediocre bit rate, the 1553 data bus is not judged to be a good choice for
implementing a distributed flight control system on a small UAV.
4.3
LONWorks
This section gives an overview of the LONWorks technology and is provided to allow the reader
to better understand the LONWorks architectural implementations that will be discussed in the
following chapter. The information hereafter, concerning LONWorks, was accumulated from
the following four sources which the reader should consult for more detailed information:
[1,9,14], and the LONWorks home page at http://www.echelon.com/.
LONWorks technology was created by Echelon Corporation and includes all of the
elements required to design and support control networks, specifically:
the Neuron Chip, the
LONTalk protocol, and LONWorks transceivers. A local operating network (LON) uses these
elements to allow intelligent devices, or nodes, to communicate with each other over a variety of
communications media. The topology of LONWorks is a tree structure with standard interfaces.
The number of nodes the network will support is only limited by the bandwidth available on the
bus, up to a maximum of 32,385 nodes.
4.3.1
LONWorks Node
Motorola and Toshiba are the two manufacturers of the Neuron microcontroller chips. These
chips perform the network-and-application-specific processing within a node.
Each Neuron
microcontroller has three microprocessors, memory, a hardware port consisting of 11 I/O lines,
and a communications port that allows access to the communications medium. Figure 4-5 shows
a block diagram of the Neuron microcontroller.⁴
Figure 4-5: Neuron Microcontroller Block Diagram

⁴ There are a number of Neuron microcontrollers available with different memory options. For a comparison of the different versions of the Neuron, consult reference [14].
The three processors in the Neuron Chip work together to implement all seven layers of
the OSI seven-layer network protocol. Processor #1 is the Media Access Control (MAC) layer
processor and it handles layers one and two by driving the communications subsystem hardware
and executing the collision avoidance algorithm. The collision avoidance algorithm allows the
data bus to operate under conditions of overload and still carry its maximum capacity, rather than
have its throughput degrade due to excess collisions. Processor #2 is the Network Processor and
it handles layers three through six by processing network variables, addressing, routing, and
managing the network.
Processors #1 and #2 are preprogrammed and they require no
programming from the user. Processor #3 is the application processor and it implements layer
seven by executing the code written by the user to control the device. The three processors
communicate through buffers in shared memory.
A node usually consists of a Neuron Chip, a power source, a transceiver which provides
the interface between the node and the communications medium, and application electronics for
interfacing the device being controlled to the I/O pins of the Neuron.
These pins may be
configured in numerous ways to provide flexible input and output functions for the device.
There are 34 different configurations that include bit, byte, serial, and parallel communications.
The application electronics required are dependent on the specific transducer being interfaced
within the node. Although the application processor on the Neuron Chip is supposed to handle
all application software, sometimes a transducer may require more intensive processing than can
be provided by the Neuron. If this is the case, a more powerful "host processor" can be used to
run all of the application software. The Neuron Chip will remain responsible for media access
and network control. Figure 4-6 shows a typical LONWorks node for each case mentioned.
"Neuron Chip-Based Node"
"Host Processor-Based Node"
Figure 4-6: LONWorks Node
4.3.2
LONTalk Protocol
The Neuron Chip uses processors #1 and #2 to implement a complete networking protocol called
LONTalk. This protocol allows the application code running on processor three to communicate
with applications running on other nodes elsewhere on the same network.
LONTalk was
designed specifically for control system networks and provides for both master-slave and peer-to-peer communications between devices. The protocol is based on an enhanced carrier sense
multiple access (CSMA) network access algorithm.
This algorithm provides a collision
avoidance scheme to allow for consistent network performance. The purpose of the collision
avoidance algorithm was discussed in the previous sub-section.
4.3.3
Communications and Power
The protocol processing on the Neuron Chip is media-independent, which allows the Neuron
Chip to support a wide variety of communications media. A number of different LONWorks
transceivers are shown in Table 4-2.
The transceiver interfaces the node with the
communications medium.
Transceiver Type                   Data Rate
EIA-232                            39 Kbps
Twisted Pair with Transformer      78 Kbps, 1.25 Mbps
Twisted Pair Link Power            78 Kbps
Power-line                         2 Kbps, 5 Kbps, 10 Kbps
RF (300 MHz)                       1200 bps
RF (450 MHz)                       4800 bps
RF (900 MHz)                       9600 bps
IR                                 78 Kbps
Fiber Optic                        1.25 Mbps
Coaxial                            1.25 Mbps

Table 4-2: LONWorks Transceiver Types
Communication among nodes is carried out over these media using either "network
variables" or explicit messages. The network variables may be specified as either input or output
objects and are implemented using a programming language developed by Echelon called Neuron
C.⁵ Assignment of a value to a network output variable causes propagation of that value to all
nodes declaring the variable as an input variable. The communication port of the Neuron Chip
consists of five pins that can be configured to interface to a wide variety of the network
transceivers in order to send these messages across the network.
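In Neuron C, a declaration such as network output SNVT_count reading; makes reading a network variable, and a simple assignment to it triggers propagation. The plain C sketch below emulates that behavior with an explicit publish step; the variable name, node count, and value are hypothetical.

#include <stdint.h>
#include <stdio.h>

#define NUM_INPUT_NODES 3

/* Copies held by nodes that declared the same network variable as an input. */
static uint16_t remote_copies[NUM_INPUT_NODES];

/* In Neuron C, assignment to a 'network output' variable does this implicitly:
   the new value is propagated to every node declaring it as an input. */
void publish(uint16_t value)
{
    for (int i = 0; i < NUM_INPUT_NODES; i++)
        remote_copies[i] = value;
    printf("propagated %u to %d input nodes\n", value, NUM_INPUT_NODES);
}

int main(void)
{
    uint16_t reading = 42;   /* e.g., a fresh sensor sample */
    publish(reading);        /* Neuron C equivalent: a plain assignment */
    return 0;
}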
Node power is most easily obtained by individually powering each node or through the
use of power converters. A link power transceiver is available which would carry power to each
node in the same fashion as the communications media does with data. However, this transceiver
is only available in a slower 78 Kbps version.

⁵ Neuron C is a derivative of the ANSI C programming language.
4.3.4
LONWorks Control Module
Control modules provide a simple, cost-effective method of adding LONWorks technology to
any control system. A control module consists of a miniature circuit card integrating a Neuron
Chip, a PROM socket, a communication transceiver, and connectors for power, I/O, and the
network. Figure 4-7 shows the layout of a LONWorks control module:
Figure 4-7: LONWorks Control Module
The small size of the control modules permits them to be mounted on or inside a device,
directly adjacent to the transducer that the module will control. The modules are economically
priced for all users at $58 per module. In conclusion, the LONWorks control module allows the
user to quickly and effectively implement LONWorks nodes throughout a system.
4.4
Universal Serial Bus
This section gives an overview of the USB technology and is provided to allow the reader to
better understand the USB architectural implementations that will be discussed in the following
chapter. The information hereafter concerning the USB was accumulated from the following
three sources which the reader should consult for more detailed information:
[2,23], and the
USB home page at http://www.usb.org/.
The USB is a cable bus that allows data to be exchanged between a host computer and
many peripherals. These peripherals share the USB bandwidth through a host-scheduled, token-based protocol. The USB supports data rates of either 1.5 Mbps or 12 Mbps, hot attachment
and removal of devices, and a maximum of 127 total devices.
4.4.1
USB Topology
The USB connects USB devices with the USB host through a tiered star topology. As shown in
Figure 4-8, the tiered star topology is very similar to the star topology mentioned earlier in this
chapter. Instead of just one star centered around a host, however, the tiered star topology has
many stars branching from the host node. The host contains the root hub which is the origin of
all USB ports in the system. This root hub provides a number of USB ports (only one is shown
in Figure 4-8) to which USB devices are attached. USB devices include hubs and functions.
Hubs are the center of each star and provide additional attachment points to the system, while
functions act as nodes and provide capabilities to the system. To expand the system, functions
are attached to hubs and when more connections are needed, hubs are attached to hubs. The
system can be expanded to a maximum of six tiers.
Figure 4-8: USB Topology
Hubs and functions both have an upstream (toward host) connection and hubs generally
have multiple downstream (away from host) connections.
All USB devices have a universal
interface and are connected upstream to a hub with a single cable. The USB transfers data and power over this four-wire cable, as shown in Figure 4-9. Data is carried over two differential data lines, which allow common mode rejection of noise and a cleaner signal. The USB also carries a bus voltage of +5 V and ground (GND). Sub-section 4.4.4 describes USB power issues.
Figure 4-9: USB Four Wire Cable
The USB supports four types of data transfers:
control, bulk, interrupt, and isochronous.
Control transfers are used to configure a device when it is attached. Bulk transfers are used for
large quantities of data.
Interrupt transfers are used for frequent updates of a device and
isochronous transfers are used for real time data transfers which occupy a guaranteed amount of
USB bandwidth with a guaranteed delivery latency.
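As a compact summary of the four transfer types, the C sketch below tags each with the role described above; the enum names are illustrative and are not taken from any USB header.

#include <stdio.h>

/* The four USB transfer types and the roles described above. */
typedef enum {
    XFER_CONTROL,      /* configure a device when it is attached */
    XFER_BULK,         /* large quantities of data */
    XFER_INTERRUPT,    /* frequent updates of a device */
    XFER_ISOCHRONOUS   /* real-time data, guaranteed bandwidth and latency */
} TransferType;

static const char *describe(TransferType t)
{
    switch (t) {
    case XFER_CONTROL:     return "device configuration";
    case XFER_BULK:        return "large data blocks";
    case XFER_INTERRUPT:   return "frequent updates";
    case XFER_ISOCHRONOUS: return "real-time streaming";
    }
    return "unknown";
}

int main(void)
{
    for (TransferType t = XFER_CONTROL; t <= XFER_ISOCHRONOUS; t++)
        printf("%d: %s\n", (int)t, describe(t));
    return 0;
}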
4.4.2
USB Host
As stated in the previous sub-section, all USB transactions are initiated by the USB host. The
host consists of different hardware and software components which are used to carry out these
transactions between the host and different USB devices. The two host hardware components
are the host controller and the root hub, and the two host software components are the client
software and the USB system software. The client software contains the USB device drivers
while the USB system software contains the USB driver and the host controller driver.
Communication originates in the host under software control and is physically transferred over
the USB with the hardware. Figure 4-10 shows the different host components and the flow of
data from the client software to the actual bus.
Figure 4-10: USB Host Components and Data Conversion Process
USB transactions are initiated by USB device drivers that wish to communicate with
USB devices. Device drivers must exist for each class of function attached to the USB. For
example, any gyro that might be attached to a USB flight control system would be run by a gyro
class device driver. The device drivers send requests to the USB driver with I/O Request Packets
(IRPs). These IRPs are used to initiate transfers to and from USB devices. The USB device
drivers also supply a memory buffer which is used to store data when transferring data to or from
a USB device. The device drivers are unaware as to how data is actually transferred over the
USB and therefore, must rely on other host software to manage their requests.
The USB system software comprises the USB driver (USBD) and the host controller
driver (HCD), and provides the interface between the USB device driver and the USB host
controller. Each transfer between a given register⁶ within a USB device and the device driver
occurs via a communication pipe which the USB system software establishes during device
configuration.
When IRPs are received from the device drivers, the USBD translates and
organizes these requests into transactions that will be executed during a series of 1 ms frames.
This task is performed by the USBD based on the capabilities of the USB system, which it
already has knowledge of, and the device requirements which it reads from device descriptors
during device configuration. The USB HCD then schedules these transactions for broadcast over
the USB by building a series of transaction lists which are to be performed during each 1 ms
frame. The HCD does so by building a linked list of data structures, called transfer descriptors
(TDs), in memory. These transfer descriptors define the transactions that are scheduled to be
performed during each frame. The transfer descriptors contain information needed to generate
the transactions, including: the USB device address, the type of transfer, the direction of transfer
(to or from the host), the amount of data to be transferred, and the address of a device's memory
buffer.
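A transfer descriptor might be sketched in C as shown below. The field set follows the list just given, but the exact layout and types are simplified illustrations, not the actual UHC or OHC descriptor formats.

#include <stdint.h>
#include <stddef.h>

/* Simplified transfer descriptor: one scheduled transaction in a 1 ms frame. */
typedef struct TransferDescriptor {
    struct TransferDescriptor *next;  /* linked list built by the HCD */
    uint8_t  device_address;          /* which USB device to address */
    uint8_t  transfer_type;           /* control, bulk, interrupt, isochronous */
    uint8_t  direction;               /* 0 = to device, 1 = from device */
    uint16_t byte_count;              /* amount of data to be transferred */
    void    *buffer;                  /* device driver's memory buffer */
} TransferDescriptor;

int main(void)
{
    uint8_t data[64];
    TransferDescriptor td = { NULL, 0x07, 2, 1, (uint16_t)sizeof data, data };
    (void)td;  /* in a real HCD this would be linked into the frame list */
    return 0;
}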
The host controller and the root hub are the hardware components responsible for
physically sending transactions over the USB.
The host controller initiates read and write
transactions over the USB via the root hub. If a write transaction is required, the host controller
reads the data from the memory buffer provided by the device driver, performs the serial
conversion on the data via the Serial Interface Engine (SIE), creates the USB transaction, and
then forwards it to the root hub to send over the USB. On the other hand, if a read transaction is
required, the host controller builds the read transaction list and sends it to the root hub where it
gets transmitted over the USB. The USB devices being addressed then transmit data back to the
root hub. This data is then forwarded back to the host controller. The host controller performs
the serial conversion on the data and transfers it to the memory buffer provided by the device
driver. There are currently two different implementations of the host controller and the HCD:
the Universal Host Controller (UHC), developed and owned by Intel, and the Open Host
Controller (OHC), developed by a number of companies from the USB Implementers Forum.
Each implementation performs the functions previously mentioned, but in different ways.

⁶ These registers are called USB endpoints.
The root hub and other USB hubs provide the attachment points that are necessary for
connecting downstream USB devices to the upstream host. As shown in Figure 4-11, a hub
consists of a hub repeater and a hub controller.
The hub repeater handles bus traffic by
forwarding data messages. Transmissions originating at the host arrive at the upstream port of
the hub and are repeated to all enabled ports. In a similar manner, devices which respond to the
host must transmit messages upstream. The hub must then repeat these messages from the
downstream port to the upstream port. The hub controller provides the interface registers to
allow communication to and from the host. The hub controller is responsible for controlling
different aspects of hub operation such as power management and port enabling and disabling.
Figure 4-11: USB Hub
In summary, the USB host initiates all communications over the USB.
Transactions
originate in client and USB system software and are then transmitted over the bus via the host
controller and USB root hub.
4.4.3
USB Transactions and Bus Protocol
USB transactions typically involve the transmission of one, two, or three packets of information,
depending on the type of transaction. Each transaction begins with a "token packet" sent out by
the host controller. This token packet describes the type and direction of the transaction, the
USB device address, and the endpoint number. After decoding this information, the device or
devices addressed in this token packet will prepare information to send to the host or prepare to
receive information from the host, depending on the direction of the transaction. The source of
the transaction will then send the "data packet", which carries the payload associated with the
transfer. The destination then will send a "handshake packet" indicating whether or not the data
was received without error. If errors do exist in the packets, they will be resent. All types of
transfers, except for isochronous transfers, will utilize this handshake packet. To allow for real
time data transfers and a guaranteed amount of USB bandwidth in isochronous transfers, any
errors in electrical transmissions are not corrected by retries.
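The token/data/handshake sequence can be summarized as a small C sketch; the printed packet names and the retry decision are illustrative only, and real USB packet identifiers are not modeled.

#include <stdbool.h>
#include <stdio.h>

/* One simplified non-isochronous transaction: token, data, then handshake. */
bool run_transaction(bool data_corrupted)
{
    printf("host:   token packet (type, direction, address, endpoint)\n");
    printf("source: data packet (payload)\n");
    if (data_corrupted) {
        printf("dest:   handshake reports error - packet will be resent\n");
        return false;
    }
    printf("dest:   handshake - received without error\n");
    return true;
}

int main(void)
{
    while (!run_transaction(false))  /* retry until the handshake succeeds */
        ;
    return 0;
}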
4.4.4
Power
The USB cable carries power as well as data, and all USB ports can supply power for attached
hubs and functions. Hubs and functions have the option of using the available cable power or
they can utilize their own power supply instead. The method by which the hub or function is
powered determines its classification.
The four classifications are: bus-powered hubs, self-powered hubs, bus-powered functions, and self-powered functions.
4.4.4.1
Bus-Powered Hub
Bus-powered hubs draw all of their power from the USB connector power pins. They may draw
up to one unit load⁷ at power up and a total of five unit loads during operation. This power is
split between the hub controller, any embedded functions, and all external downstream ports.
The downstream ports of a bus-powered hub can supply only one unit load per port regardless of
the current drawn on any of the other ports attached to that hub. Furthermore, a bus-powered
hub will only support a maximum of four downstream ports. This is because the maximum
current available to a bus-powered hub is five unit loads and one unit load must be used to power
the hub controller. As a result, if more ports are desired, the hub will have to be a self-powered hub.

⁷ A unit load is defined to be 100 mA of current.
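The arithmetic behind the four-port limit is easily checked: five unit loads arrive over the cable, one powers the hub controller, and each external port must be able to source one full unit load. A short C sketch, using the unit load defined in the footnote:

#include <stdio.h>

#define UNIT_LOAD_MA        100  /* one unit load = 100 mA */
#define HUB_BUDGET_UNITS    5    /* maximum draw of a bus-powered hub */
#define CONTROLLER_UNITS    1    /* the hub controller itself */

int main(void)
{
    int ports = HUB_BUDGET_UNITS - CONTROLLER_UNITS;  /* one unit load each */
    printf("max downstream ports: %d (%d mA available of %d mA total)\n",
           ports, ports * UNIT_LOAD_MA, HUB_BUDGET_UNITS * UNIT_LOAD_MA);
    return 0;
}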
4.4.4.2
Self-Powered Hub
Self-powered hubs have their own local power supply that distributes power to any embedded
functions and to all external downstream ports. The power for the hub controller can be obtained
from an upstream port similar to that of bus-powered hubs, or it can be obtained from the local
power supply in the same manner as the downstream ports. Using power from an upstream port
allows the hub controller to communicate with the host even when the local power supply is off,
making it possible for the host to differentiate between disconnected devices and devices that are
simply not powered. This method also allows the system to conserve power in special cases
where powering a device is not necessary.
As was previously stated, the local power supply provides power to all downstream
ports. Where the maximum number of ports supported by a bus-powered hub was four, the
number of ports that can be supported by a self-powered hub is limited only by this local power.
Each port is capable of supplying five unit loads, but in order to meet regulatory safety limits, no
single port can deliver greater than 5 A.
4.4.4.3
Bus-Powered Function
There are two types of bus-powered functions: a low-power, bus-powered function and a high-power, bus-powered function. Both types of bus-powered functions draw power from the USB
cable and have a regulator which provides the proper voltage to the function controller. The
difference in the two bus-powered functions is that while the low-power function draws less than
one unit load from the USB cable when fully operational, the high-power function will draw over
one unit load and a maximum of five unit loads from the USB cable.
4.4.4.4
Self-Powered Function
Self-powered functions are similar to self-powered hubs in that power is supplied to the function
from a local power supply. In the same manner as for the self-powered hub controller, the
function controller can be powered from either the bus or from a local power supply. The
advantage of communicating with the host when the device is not powered, to conserve power in
certain situations, still exists. The amount of power a self-powered function can draw is limited
only by this local power supply, and due to the fact that the function is not required to power any
downstream ports, there is no limit on the amount of current the device can draw.
4.4.4.5
Voltage Requirements
As power is dispersed over the USB, voltage drops can occur because the power must pass
through hubs.
This places certain requirements, in terms of a voltage drop budget, on the
different devices. In particular, the voltage supplied by the host or self-powered hub ports is
between 4.75 V and 5.25 V. Bus-powered hubs have a maximum drop of 350 mV from an
upstream source to a downstream sink.
All hubs and functions must be able to provide
configuration information to the host with a minimum of 4.40 V and functions drawing more
than one unit load must be able to operate with a minimum of 4.75 V. This idea is shown below
in Figure 4-12.
Figure 4-12: Voltage Drop Topology
4.4.5
Latency and Timing
The USB system software must determine if the USB can support a given communications pipe
before the pipe will be set up for communicating with a device endpoint. Each device requires a
specific bus bandwidth or bus time for a given transaction. The USB system software reads this
bus time from device descriptors during device configuration. The bus time read from the device
descriptors, however, only contains the time needed for the data payload and does not include
any overhead time or latency time. Therefore, the USB system software must calculate the total
time (data plus overhead) for a given transaction. This calculation is required to ensure that the
time available in a frame is not exceeded. Several parameters are included in the bus bandwidth
calculation for a given pipe; a worst-case estimate is sketched in C after this list. The parameters include:
* Number of Data Bytes - This is the byte count of a given payload and is read from the MaxPacketSize field in the device descriptor.
* Transfer Type - This parameter determines the amount of overhead associated with a given
transaction type. The different structure of each transaction type makes this parameter vary
depending on the transaction. For instance, isochronous transfers do not require a handshake
packet and therefore will not be limited by the handshake overhead. An overhead exists for
the token packet, the data packet, and the handshake packet, if it is required. The overhead
data usually includes bits for the following items:
synchronization, packet identification,
device address, cyclic redundancy check (CRC), and end-of-packet notification (EOP).
Table 4-3 shows the USB characteristics and estimated overheads for different transactions,
calculated in a USB throughput analysis by John Garney of Intel [10].
Bus Speed                                   12 Mbps
Bit Time                                    8.33 x 10^-8 sec
Byte Time                                   7.0 x 10^-7 sec
Max Bandwidth                               1.5 MBps
Frame Time                                  0.001 sec
Max Bytes per Frame                         1500 Bytes
Protocol Overhead for Non-Isochronous       20 Byte Times per Frame (BTF)
Protocol Overhead for Isochronous Output    16 BTF
Protocol Overhead for Isochronous Input     17 BTF

Table 4-3: USB Characteristics and Estimated Protocol Overhead
* Host Recovery Time - This is the time required for the host controller to recover from the
last transmission and prepare for the next transmission. This parameter is host controller
implementation specific.
* Bit Stuff Time -
Data transferred via the USB is encoded using Non-Return to Zero,
Inverted (NRZI) encoding to help ensure integrity of data delivery, without requiring the
delivery of a separate clock signal with the data. Figure 4-13 shows a serial data stream and the resulting NRZI data. Transitions in the NRZI data stream represent 0's, while no transitions represent 1's. Bit stuffing forces transitions into the NRZI data stream in the event that six consecutive 1's are transmitted. This ensures that the receiver detects a transition in the NRZI data stream at least every seventh bit time, which enables the receiver to maintain synchronization with the incoming data. Bit stuffing is a function of the data stream and is unknown to the host software. Therefore, a worst case theoretical maximum number of additional bit times is included for bit stuffing: 1.1667 x 8 x MaxPacketSize.
Figure 4-13: NRZI Encoded Data
* Depth in Topology - The time required to send messages across the transmission medium is dependent on the number of cable segments that the transmission must cross. The maximum round-trip delay is 16 bit times or 2 BTF, as shown in Figure 4-14.
Figure 4-14: Worst Case Total Round Trip Delay
All bandwidth calculations have been made on a worst case basis. Therefore, there will
normally be bandwidth remaining after all scheduled transactions have completed during a frame.
Host controllers can be designed to reclaim this bandwidth and perform additional transactions to
utilize the bus more efficiently. This process, however, is implementation dependent.
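As a rough illustration of the scheduling check, the following C sketch accumulates worst case bit times for one transaction from the parameters above: the bit stuff ceiling of 1.1667 x 8 x MaxPacketSize, the per-transaction protocol overhead from Table 4-3 (in byte times), and the 2 BTF round-trip delay. Host recovery time is omitted, and the numbers are estimates rather than a normative USB budget.

#include <stdio.h>

#define FRAME_BIT_TIMES   12000.0   /* 12 Mbps x 1 ms frame */
#define ROUND_TRIP_BITS   16.0      /* worst case depth-in-topology delay */

/* Worst-case bit times for one transaction of max_packet_size payload bytes,
   with protocol overhead given in byte times per frame (BTF). */
double transaction_bit_times(int max_packet_size, int overhead_btf)
{
    double data_bits  = 1.1667 * 8.0 * max_packet_size;  /* incl. bit stuffing */
    double extra_bits = 8.0 * overhead_btf + ROUND_TRIP_BITS;
    return data_bits + extra_bits;
}

int main(void)
{
    /* e.g., a 64-byte non-isochronous transfer (20 BTF of overhead) */
    double bits = transaction_bit_times(64, 20);
    printf("worst case: %.0f bit times, %.1f%% of one 1 ms frame\n",
           bits, 100.0 * bits / FRAME_BIT_TIMES);
    return 0;
}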
This chapter examined the data bus and specifically looked at two candidate data busses that
could be used for implementing a distributed flight control system. The next chapter will utilize
these data busses in a case study of an existing flight control system.
Chapter 5
Alternative Flight Control System
Architectures
As suggested in the previous chapter, two candidate data bus technologies show promise for
implementing a distributed flight control system: the LONWorks control bus and the USB high
performance serial bus. In this chapter, alternative distributed architectures, utilizing these data
bus technologies, will be contrasted with a baseline centralized flight control system. First, the
architecture and requirements for the SCPB flight control system that was introduced in Chapter
2 will be presented.
Then three distributed architectural solutions will be suggested as
alternatives to this centralized architecture: an ideal solution, a LONWorks solution, and a USB
solution.
Under these alternative architectures, the system will likely take a performance penalty in areas such as size, weight, or power. The question is whether these
penalties are acceptable if the alternative architecture will allow cheaper system redesigns and/or
upgrades, and will also allow the system to be used in different applications and on different
platforms. Chapter 6 will attempt to answer this question.
5.1
SCPB Flight Control System
The question of whether or not an alternative distributed architecture can save time and money in
the event of a system redesign or upgrade is the primary concern. Before this question can be
answered, however, the feasibility of the alternative architecture must be assessed.
It makes
sense to select a test flight control system, determine its requirements, and then attempt to satisfy
those requirements with an alternative architecture. If any of the requirements cannot be met by
the alternative architecture, the requirement should be noted, and the impact of failing to meet
the particular requirement should be evaluated.
The candidate centralized flight control system should have stringent system
requirements because if the alternative architecture can meet those requirements, chances are it
can meet the requirements of most other systems. Therefore, the SCPB flight control system, as
described in Section 2.2, is baselined for control of a highly maneuverable vehicle having strict
size, weight, and power constraints. The SCPB subsystem architecture is shown in Figure 5-1.
Figure 5-1: SCPB Subsystem Integration Diagram
Notice that all of the subsystems are connected directly to the flight control computer.
Figure 5-2 shows the layout of a typical backplane that would be used to connect the various
subsystems of the SCPB.
The large number of interconnections make the system highly
centralized and much more difficult to work with in the event of a redesign or upgrade.
Typically, a field programmable gate array (FPGA) is used to handle all of the I/O to the flight
control computer. Each time a sensor is added to the system, this FPGA must be redesigned to
handle the new I/O. The process of redesigning the FPGA is usually a very difficult task for
anyone except for the original designer [4].
Figure 5-2: SCPB Subsystem Interconnect Diagram
As stated in Chapter 2, attention is to be focused on the navigational subsystem of the
flight control system, which comprises the inertial sensors and the GPS. The requirements for
the SCPB subsystems were chosen as follows. There are six inertial sensors: three gyros (X-axis, Y-axis, and Z-axis) and three accelerometers (X-axis, Y-axis, and Z-axis). Each sensor will
require ±12 V and +5 V for operation.
Three analog signals will enter the flight control
computer from each instrument carrying, respectively, the rate or acceleration measurement
signal, a temperature signal used for temperature compensation of the instruments, and a
reference signal used for common mode rejection of noise. In the flight control computer, these
signals will enter a multiplexer (MUX), pass through a 14-bit A/D converter, and get compensated with temperature and other compensation algorithms, before finally entering a Kalman filter as ΔVs and Δθs at 300 Hz (a simplified sketch of this path follows Table 5-1). The GPS will communicate serially with the flight controller at a
rate of 1 Hz. It will require +5 V for operation and will have a 19200 baud rate. The
navigational system requirements are summarized in Table 5-1.
Navigational Sensor    Voltage Requirements    Power              Bit Rate
Gyro                   ±12 V, +5 V             72 mW, 200 mW      12.6 Kbps
Accelerometer          ±12 V, +5 V             72 mW, 130 mW      12.6 Kbps
GPS (operating)        +5 V                    2 W                19.2 Kbps
GPS (stand-by)         +5 V                    400 mW             19.2 Kbps
Table 5-1: SCPB Navigational Subsystem Requirements
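To make the centralized signal path concrete, the C sketch below walks one inertial channel through the steps just described: select the MUX channel, take a 14-bit A/D sample, apply temperature compensation, and hand the result to the filter at the 300 Hz rate. All constants, helper names, and the linear compensation model are hypothetical stand-ins for the SCPB's actual hardware and algorithms.

#include <stdint.h>
#include <stdio.h>

#define ADC_BITS        14
#define SAMPLE_RATE_HZ  300

/* Stand-ins for the MUX/A-D hardware; real values come from the instruments. */
static void    mux_select(int channel)     { (void)channel; }
static int16_t adc_read_14bit(void)        { return 4096; }   /* raw counts */
static int16_t adc_read_temperature(void)  { return 210; }    /* temp counts */

/* Hypothetical first-order temperature compensation of a rate measurement. */
static double compensate(int16_t raw, int16_t temp)
{
    const double scale      = 0.01;     /* deg/s per count (made up) */
    const double temp_coeff = 0.0005;   /* deg/s per temp count (made up) */
    return raw * scale - temp * temp_coeff;
}

int main(void)
{
    /* One 300 Hz cycle for the X-axis gyro channel of the MUX. */
    mux_select(0);
    int16_t raw  = adc_read_14bit();
    int16_t temp = adc_read_temperature();
    double rate_dps = compensate(raw, temp);
    printf("X gyro: %.3f deg/s (fed to Kalman filter at %d Hz)\n",
           rate_dps, SAMPLE_RATE_HZ);
    return 0;
}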
In the following three sections of this chapter, three data bus architectures will be
presented as alternatives to the SCPB. First, an ideal solution will be presented. The ideal
solution will revisit the desired characteristics of a distributed flight control system and lay out a
standard against which to judge the two candidate data bus architectures.
5.2
Ideal Solution
The problem at hand is to simplify system redesigns and upgrades, saving both design time and
money. A modular flight control system has many obvious benefits.
The life of the flight
control system can be extended by protecting the system from obsolescence in the event of a
system upgrade or redesign caused by advances in technology. Also, a modular flight control
system could be more easily reconfigured than a centralized flight control system, allowing it to
be used across multiple platforms. In Chapter 3, a set of "new requirements" for a flight control
system were defined and included: system modularity and upgrades, continuing expansion of the
system capability, easy troubleshooting of transducers, and distributed processing as required.
The author's vision of an ideal architecture for a modular flight control system is shown in
Figure 5-3.
Figure 5-3: Ideal Architecture for a Distributed Flight Control System
The ideal architecture should be simple and inexpensive. It should utilize a data bus for ease of modularity and upgrading of the system. A tree topology allows transducers to be added easily, and the failure of one transducer will not crash the entire system. A four wire bus carries both data and power to all of the sensors attached to the bus. Distributed processing and smart nodes should be used to handle data in the most efficient manner possible. A universal interface gives the system plug-and-play capabilities and allows transducers to be tested and maintained separately from the vehicle in common test facilities. Finally, the ideal solution would have an unlimited amount of bus bandwidth available, so that the system could be expanded whenever desired.
5.3 LONWorks Solution

5.3.1 LONWorks Architectural Design
A distributed architecture for a flight control system designed utilizing the LONWorks control
bus is shown in Figure 5-4.
Figure 5-4: LONWorks Distributed Flight Control System Architecture
Each sensor or actuator used in the flight control system is designed using a LONWorks
control module. Recall from Chapter 4 that a LONWorks control module consists of a miniature
circuit card integrating a Neuron Chip, external memory, a communication transceiver, and
connectors for power, I/O, and the bus. The control module is responsible for formatting the data
from each sensor or actuator into the standard digital format required for transmission over the
communications medium.
Each transducer becomes a virtual "LONWorks Transducer" consisting of the transducer, the LONWorks control module, and any electronics needed to interface the transducer to the LONWorks control module. Referring once more to Figure 5-4, the dotted box labeled "LONWorks Gyro" demonstrates this concept. LONWorks control modules are available for purchase from Echelon for $58 per module, a small price to pay for the time saved in integrating the different components. The physical size of a module is also rather small: the dimensions of a LONWorks control module are 2.42" x 1.61" x 0.67" (Length x Width x Height), or 2.6 in³. Figure 5-5 shows a control module from Echelon.
[Figure: photograph of the control module circuit board, with callouts for the transceiver, Neuron Chip, network pins, I/O pins, and external memory, shown next to a U.S. dime for scale]

Figure 5-5: LONWorks Control Module Integrated Circuit Board
The LONWorks topology is a tree structure, which allows transducers to be easily added to the system. The communications medium used for the data bus is a two wire bus which carries data into and out of the flight control computer. A theoretical data rate of 1.25 Mbps is achievable using a twisted pair transceiver. A power transceiver is available which would carry power to each node, parallel to the communications medium. However, this power transceiver is only available in a slower 78 Kbps version. To attain the maximum theoretical data rate of 1.25 Mbps, power must be supplied individually to each node.
Advantages of designing a system in this manner were stated in earlier chapters, and they become clearer when examining an actual architecture. If a transducer must be updated or changed for some reason, the only major task is to turn that transducer into a LONWorks transducer which will supply the proper preformatted output to the bus. It is not a problem if the new transducer has a different signal format at its output, or even additional signals. Only the circuitry and programming that are local to the transducer inside the LONWorks transducer must be altered. This leads to little or no alteration of software inside the flight control computer.
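As a sketch of why only the node-local code changes, consider the fragment below. LONWorks nodes are actually programmed in Neuron C; Python is used here only to illustrate the idea, and the message layout, field names, and rescaling factor are assumptions invented for this example.

    # Illustrative sketch: the control module's only job is to map a
    # transducer's native output into the one standard message the bus
    # expects, so swapping transducers touches only this local adapter.
    import struct

    STANDARD_FMT = ">BhhH"   # assumed layout: node id, measurement, temperature, status

    def pack_standard(node_id: int, meas: int, temp: int, status: int) -> bytes:
        """Format a reading into the fixed on-bus message."""
        return struct.pack(STANDARD_FMT, node_id, meas, temp, status)

    # Old gyro: 14-bit counts already in the expected units.
    def old_gyro_to_message(counts: int, temp: int) -> bytes:
        return pack_standard(0x01, counts, temp, 0)

    # Replacement gyro: different native scale plus an extra self-test flag.
    # Only this adapter changes; the flight control computer is untouched.
    def new_gyro_to_message(millideg_per_s: int, temp: int, self_test_ok: bool) -> bytes:
        counts = millideg_per_s // 12            # assumed rescaling to the old units
        return pack_standard(0x01, counts, temp, 0 if self_test_ok else 1)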
The LONWorks architecture satisfies many of the system traits set forth in the ideal
architecture. Whether or not it can meet the requirements of the SCPB is the next question that
should be answered.
5.3.2 LONWorks Bandwidth Analysis
A good initial requirement to analyze is bandwidth. The maximum amount of bandwidth that a bus can supply is a major concern with most data bus technologies. If an architecture can meet the bandwidth requirements of the SCPB with enough room for system expansion, it could probably fall short on other requirements and still be an acceptable solution. Therefore, it must be determined whether or not the LONWorks architecture can satisfy the bandwidth requirements of the SCPB.
Recall from Table 5-1 that the inertial instruments in the SCPB require 12.6 Kbps per instrument and the GPS requires 19.2 Kbps. The inertial instruments require 42-bit messages (3 signals x 14 bits/signal). With the distributed architecture, only two signals are actually required, because the third is a reference signal used for common mode rejection of noise at the instruments and would not normally need to be passed over the bus. It is included here as a worst case, in the event that it is for some reason desired for use in the flight control computer. In practice, due to all of the protocol overhead involved, LONWorks is only capable of achieving 280 messages/sec for 42-bit messages, short of the 300 messages/sec needed to deliver all of the inertial information to the flight control computer. This equates to 11.8 Kbps per instrument, which is less than both the 12.6 Kbps required by each inertial instrument and the 19.2 Kbps required by the GPS.
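A few lines of Python confirm the arithmetic, using only the figures cited above:

    # LONWorks throughput for 42-bit messages versus the SCPB requirement.
    MSG_BITS = 3 * 14                  # 3 signals x 14 bits/signal = 42-bit message
    ACHIEVABLE_MSGS_PER_SEC = 280      # practical LONWorks limit cited above
    REQUIRED_MSGS_PER_SEC = 300        # inertial data rate, Hz

    achievable_kbps = ACHIEVABLE_MSGS_PER_SEC * MSG_BITS / 1000   # 11.76 ~ 11.8
    required_kbps = REQUIRED_MSGS_PER_SEC * MSG_BITS / 1000       # 12.6
    print(achievable_kbps, required_kbps, achievable_kbps >= required_kbps)
    # -> 11.76 12.6 False: LONWorks falls short of the inertial requirement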
A distributed architecture using LONWorks technology therefore cannot meet the requirements of the SCPB. It is important to note that the architecture presented exhibits the highest modularity possible, with each inertial instrument considered a separate LONWorks transducer. Should a designer decide that the system will use three-axis or six-axis inertial instruments in one package, then the LONWorks transducer in the dotted box in Figure 5-4 would contain only one LONWorks control module and three or six inertial instruments, respectively. This design option would lower the required bandwidth and also allow more of the compensation to be done on the LONWorks control module connected to the sensors. For that type of design, LONWorks may be an acceptable solution, but for the SCPB presented at the beginning of this chapter, it is not.
5.4 USB Solution

5.4.1 USB Architectural Design
A distributed architecture for a flight control system designed utilizing the USB high performance serial bus is shown in Figure 5-6.
Attached to the flight control computer will be a USB host controller. Because the USB technology was originally designed for personal computers, the normal method of interfacing this host controller to the flight control computer is to purchase a flight control computer with a PCI (Peripheral Component Interconnect) bus and then purchase a PCI to USB interface card to be used between the host and the flight control computer. Recently, however, companies have responded to consumer demand for the USB technology in embedded systems. This has resulted in an initial push toward the development of USB products which can be used in embedded systems similar to the SCPB. Cypress is defining a one chip host controller that will be available at the beginning of 1999 [12]. This chip can be used in embedded systems, and in the SCPB system application it will interface directly to the flight control computer, either in parallel or in series. It will be available in versions with either one or two downstream ports. The part number for this product is CY7C67XXX, and a preliminary data sheet should be available for the product this summer at the web site http://www.cypress.com.
[Figure: USB tree of hubs and functions attached to the flight control computer's host controller, with connections leading to additional hubs or functions]

Figure 5-6: USB Distributed Flight Control System Architecture
Each sensor or actuator used in the flight control system becomes a USB function and is
designed using a USB function controller.
The USB function controller is responsible for
formatting the data from each sensor or actuator into the standard serial format for transmission
over the communications medium to the USB host controller.
Similar to the LONWorks
architecture, each transducer becomes a virtual "USB Transducer" consisting of the transducer,
the USB function controller, and any electronics needed to interface the transducer to the USB
function controller.
Referring back to Figure 5-6, the dotted box labeled "USB Gyro"
demonstrates this concept. Each USB transducer in the flight control system is attached to the
host controller through a USB hub.
Many companies have various one chip hubs and function controllers available. At the same time as Cypress puts out its CY7C67XXX, it also expects to come out with a function controller with the part number CY7C642XX. This function controller will have an SPI (Serial Peripheral Interface) interface, which is useful for connecting to the many A/D converters that provide SPI output. The function controller will also have a regular serial interface, which is useful for connecting to the GPS in the SCPB. In the meantime, however, Intel provides a very useful four port hub¹ and a function controller. Both the hub and the function controller have an embedded microcontroller on the chip which provides an internal clock and is capable of interfacing serially to any transducer. Information on the Intel hub and function controller is shown in Table 5-2.
                       Internal      Number     Chip        Part       Stock        Price
                       Clocks        of Pins    Volume      Number     Number       Per Chip
Function Controller    6 or 12 MHz   68         0.188 in³   8x930Ax    N80930AD4    $8.70
Hub                    12 MHz        64 or 68   0.188 in³   8x930Hx    N80930HFO    $11.10

Table 5-2: Intel USB Chip Specifications

¹ Texas Instruments has a seven port hub with fewer capabilities than the Intel four port hub. The author speculates that Intel will come out with a seven port hub in the near future.
5.4.2 USB Power Options
The USB topology is a branching star topology where USB hubs are the center of each star and
allow transducers to easily be added to the system. The communications medium used for the
data bus is a four wire bus which carries both data and power to the transducers. Single ended
cables which can be integrated into a USB function are available from many vendors for about
$7 per cable.
Two options exist for powering the transducers in the USB architecture. The first option
is to bus power the hubs and the function controllers and individually power each transducer as
was done in the LONWorks architecture. Bus powering the hubs and function controllers allows
the system to be configured without the transducers being powered, thus conserving power. The
second option is to bus power the hubs, the function controllers, and the transducers. The +5 V
carried by the USB cable will satisfy the +5 V requirement for the inertial instruments and the
GPS. Recall from Chapter 4 that a bus-powered function can draw up to 500 mA from the bus.
Each gyro will only draw 40 mA and each accelerometer will only draw 26 mA. The GPS will
draw 400 mA in operating mode and 80 mA in stand-by mode. For the gyro and accelerometer ±12 V requirement, a DC-DC power converter can be used. The MAX743 from MAXIM provides a dual output of ±12 V from a +5 V source. The power converter draws 125 mA for operation, and each inertial instrument will draw 6 mA, for a total of 131 mA, much less than the 500 mA maximum. The MAX743 has a volume of 0.065 in³ and a cost of $8.48 per chip. If an A/D converter is used, Analog Devices makes a +5 V supply, eight channel, 14-bit A/D converter that draws only 12 mA of current. The AD7856 has a volume of 0.081 in³ and a cost of approximately $10 per chip. Because USB ports are current limited, there should be little concern about high currents into these devices. Also, the MAX743 has on-board current limiting.
The USB architecture meets all of the SCPB power requirements with bus power. Therefore, either option one or option two can be used to power the transducers, with option two (bus power to all components) providing the highest modularity to the system.
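The current figures above can be collected into a simple worst-case budget check against the 500 mA limit on a bus-powered port. The grouping of devices per port in this sketch is an assumption made for illustration; the document itself does not specify how loads are distributed across ports.

    # Worst-case bus-power budget per fully bus-powered USB port.
    USB_PORT_LIMIT_MA = 500

    # One inertial module: MAX743 converter and its instrument on +/-12 V,
    # plus the AD7856 A/D converter; instrument +5 V draw added per type.
    max743_ma = 125
    instrument_12v_ma = 6        # 72 mW / 12 V
    ad7856_ma = 12
    inertial_module_ma = max743_ma + instrument_12v_ma + ad7856_ma   # 143 mA

    gyro_5v_ma = 40              # 200 mW / 5 V
    accel_5v_ma = 26             # 130 mW / 5 V
    gps_operating_ma = 400

    for name, load in [("gyro module", inertial_module_ma + gyro_5v_ma),
                       ("accelerometer module", inertial_module_ma + accel_5v_ma),
                       ("GPS", gps_operating_ma)]:
        print(f"{name}: {load} mA, within limit: {load <= USB_PORT_LIMIT_MA}")

Every load comes in well under the 500 mA ceiling, which is what allows option two above.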
5.4.3 USB Bandwidth Analysis
The USB architecture satisfies almost all of the system traits set forth in the ideal architecture.
Its bus power also meets the power requirements of the SCPB. The bandwidth test that the
LONWorks architecture failed must be applied to the USB architecture as well.
With a USB architecture, the first issue in performing a bandwidth analysis is to
determine what type of transfer will be used to move messages over the bus.
Because the
measurements from the inertial instruments are desired to be as near real time as possible, it
makes sense to use isochronous transfers to move the data over the bus. This will give the
instruments a guaranteed portion of the bus bandwidth. A USB frame is sent over the bus from
the host every 1 msec. In the case of isochronous transfers, information is sent to the host
controller every 1 msec. Each inertial instrument requires messages to be sent to the flight
control computer 300 times per sec or once every 3.3 msec. Because the messages from the
instruments are fairly small (only a few bytes), a way of making better use of the bus bandwidth
would be to use interrupt transfers. With interrupt transfers, each device is polled periodically by
the host and the device is given a guaranteed maximum service period. Using interrupt transfers
also adds an error checking benefit that is not available with isochronous transfers.
Each instrument could be polled once every two or three frames. For the bandwidth calculations, though, a worst case scenario should be used. This worst case would be to use isochronous transfers for everything, thereby requiring every function to take up a part of each frame. Frames would look similar to the frame shown in Figure 5-7.
[Figure: a 1 msec USB frame divided into isochronous slots for each inertial instrument and the GPS, with the remainder unallocated for actuators, telemetry, or additional transducers]

Figure 5-7: USB Frame for Flight Control System Data
Recall from Table 4-3 that the maximum number of bytes per USB frame is 1500. Isochronous transfers are guaranteed 90% of the bandwidth, or 1350 bytes. The method used to calculate the amount of bandwidth required per frame for each instrument is to determine the number of bytes per frame for each instrument, then add in the required overhead (also given in Table 4-3 of Chapter 4) and the worst case bit stuffing, giving a maximum desired bandwidth. For the inertial instruments, it was assumed that each 14-bit signal requires two bytes. This process is shown in Table 5-3 for one inertial instrument and the GPS.
                       Bits      Bytes     Bytes Per   Overhead   Worst Bit     Max Desired
                       Per Sec   Per Sec   Frame       Bytes      Stuff Bytes   Bandwidth
Inertial Instrument    14400     1800      1.8 -> 2    17         3.2 -> 4      23 BTF
GPS                    19200     2400      2.4 -> 3    17         3.3 -> 4      24 BTF

Table 5-3: USB Inertial Instrument and GPS Bandwidth Calculations
As a result, the six inertial instruments together require 138 bytes per frame and the GPS requires 24 bytes per frame, for a total of 162 bytes per frame. Subtracting this from the original total of 1350 bytes leaves 1188 bytes of unallocated space per frame. Therefore, even in the worst possible case, the bandwidth requirements of the SCPB can be met with the USB architecture. Furthermore, the 1188 bytes remaining per frame can be used for actuators, telemetry, or additional sensors. There is even plenty left for a real time, MPEG2 6 Mbps isochronous video camera, which requires approximately 798 bytes per frame [10].
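The frame budget above is easy to reproduce. The sketch below implements the method of Table 5-3: payload bytes per 1 msec frame rounded up, plus 17 bytes of protocol overhead, plus worst-case bit stuffing at one extra bit per six bits sent.

    # Reproducing the worst-case isochronous frame budget of Table 5-3.
    import math

    BYTES_PER_FRAME_ISO = 1350    # 90% of the 1500 byte times per frame

    def byte_times_per_frame(bytes_per_sec: int, overhead: int = 17) -> int:
        """Payload byte times per 1 msec frame (rounded up), plus protocol
        overhead and worst-case bit stuffing (one extra bit per six)."""
        payload = math.ceil(bytes_per_sec / 1000)
        stuff = math.ceil((payload + overhead) / 6)
        return payload + overhead + stuff

    inertial = byte_times_per_frame(1800)     # -> 23 BTF, matching Table 5-3
    gps = byte_times_per_frame(2400)          # -> 24 BTF

    used = 6 * inertial + gps                 # 6 x 23 + 24 = 162
    print(used, BYTES_PER_FRAME_ISO - used)   # 162 used, 1188 unallocated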
In this chapter, the SCPB, a centralized flight control system representing the current practice baseline for small UAVs, was presented as a test case flight control system. An ideal solution was presented for a distributed architecture design of the SCPB. Two alternative data bus architectures followed and were compared against the requirements of the SCPB and the traits of the ideal architecture. LONWorks failed to meet the bandwidth requirements of the SCPB, while USB satisfied the SCPB requirements and met many of the traits of the ideal architecture. Chapter 6 will define a set of metrics which will be used to compare the SCPB with the USB architecture in terms of the cost of redesigning and expanding the particular system.
Chapter 6
Analysis of Metrics
As stated in Chapter 1, there exists a need for frequent upgrades, replacements, and redesigns of
UAVs and their subsystems.
Because of its inflexible nature, the traditional centralized
architecture used in many existing UAVs does not satisfy these needs in the most time and cost
effective manner.
In Chapter 5, a USB implementation of a distributed flight control system architecture
was presented as the best alternative to a SCPB centralized flight control system architecture.
The alternative architecture met all of the system requirements of the SCPB architecture. One
question still remains though: does this alternative architecture allow the flight control system to
be upgraded or redesigned in a more timely and cost effective manner than the traditional
architecture? Answering this question is not an easy task. In this chapter, a list of metrics will
be used to compare both architectures and best answer this question. This list of metrics was
generated with the help of Dave Hanson, an engineer at the Charles Stark Draper Laboratory,
whose work involves determining the costs of designing and packaging embedded systems [11].
These metrics are separated into a group of functional factors and a group of cost factors.
6.1 Functional Factors
There are many methods which can be used to compare two systems. The purpose here is to
explore whether a distributed architecture is more cost effective than a centralized architecture,
when the system must be redesigned, upgraded, or expanded. More specifically, it is desirable to
determine whether the architecture presented in Chapter 5, using the USB high performance
serial bus, will be less expensive than the traditional SCPB architecture when these system
changes occur. Before costs are compared though, it makes sense to compare other functional
characteristics of the two systems to show that the new architecture does not add too many
drawbacks to the traditional system. Table 6-1 compares some functional factors of the two
systems against each other.
Design Description
  * Traditional Approach: connections through a custom backplane and flex cables as the transmission medium
  * USB Approach: distributed bussing of digital signals and power over a four wire cabled transmission medium

Modularity
  * Traditional Approach: low level of modularity, because the interconnect must be redesigned each time the system is altered or a module is added
  * USB Approach: high level of modularity, because the system can easily be altered and new modules can easily be added

Reliability
  * Traditional Approach: many off-module interconnections give it a lower reliability
  * USB Approach: fewer off-module interconnections give it a higher reliability

Size & Weight
  * Traditional Approach: smaller and lighter modules because of less on-module electronics, but adds the size and weight of the backplane and flex cables
  * USB Approach: larger and heavier modules because of more on-module electronics; adds the weight of hubs and the host, but has no backplane or flex cables

Table 6-1: Traditional Versus USB Functional Factors
The functional factors are divided into four groups: design description, modularity, reliability, and size and weight. The design description was covered in Chapter 5. For the conventional approach, modules are connected via a custom backplane and flex cables. The complexity of this backplane was shown in Figure 5-2. For the USB approach, modules are connected to a four wire bus which carries digital signals and power to each module. The method by which the modules are connected in each system says a great deal about system modularity and reliability.
The modularity of the USB architecture is definitely higher than that of the traditional SCPB. The traditional approach has a low level of modularity because of its centralized nature: adding a module to the system requires an interconnect redesign of the backplane. By way of contrast, in the USB approach, modules can be readily added to the system, giving it a much higher level of modularity. Due to the high bandwidth of the bus and the universal interface of each module, the only work required for a module addition is the hardware and software in the module itself.
The reliability of the USB system is higher than that of the traditional approach. In both systems, the failure of the flight control computer will crash the system. However, the fact that the traditional system has many more off-module interconnections leaves more room for errors in that architecture. The flight control computer may typically have between thirty and forty connections on the backplane, versus the two connections for a serial interface to the USB host. If one of the modules in the USB system fails, its port can be disabled. In the traditional system, if one of the modules fails (e.g., a short in one of its interconnections), it is likely that the entire backplane may be shorted out, because many of the interconnections are routed very close together.
One of the constraints placed on the SCPB was that the system should be very small.
Size and weight then become a major issue when considering a distributed architecture. This is
because adding intelligence to the modules of a system requires more up front hardware,
resulting in a higher overall volume.
It makes sense to first consider a worst case scenario. For a worst case scenario, one
should assume that the following components are added to the USB system: a USB host
controller, enough USB hubs to support all the modules, a USB function controller for each
module, and an A/D converter and power converter for each inertial instrument. The new design
will not have the volume of a backplane and the volume for the flex cables is assumed to be
equal to the volume of the bus wiring. The total additional volume is summarized in Table 6-2.
System Component       Volume Per Component (in³)   Quantity of Components   Total Volume (in³)
Host Controller        0.200¹                       1                        0.200
USB Hub                0.188                        4                        0.752
Function Controller    0.188                        9                        1.692
A/D Converter          0.081                        6                        0.486
Power Converter        0.065                        6                        0.390

Table 6-2: Added Volume of USB System Components
Adding up the last column of Table 6-2 and subtracting the volume of a typical backplane gives a total added volume of 3 in³.²

¹ Because the host controller from Cypress will not be out until the beginning of 1999, this volume was estimated to be a little larger than the hub and function controller volumes.
² It must be assumed that the added components will add weight as well as volume to the system. Accurate weights for the components were not available and therefore could not be included.

Although this is a significant penalty for a small UAV, there are additional factors to be considered. First, it is likely that future flight control systems may move to inertial modules which have an A/D converter integrated into the module's ASIC and provide ΔVs and Δθs serially as output. This option removes the need for an A/D converter for each USB module. Second, it is also likely that a desire to use three degree of freedom modules may exist. If this is the case, only one function controller is required for every three inertial instruments. The final course of action for preserving the volume of the system is to make use of available USB synthesizable cores. Synthesizable cores are pre-designed, proven building blocks for the host controller, hub, or function controller that can be easily integrated into an ASIC. For example, if the size and weight of a function controller for each inertial instrument is a big issue, the function controller synthesizable core can be purchased and integrated into the ASIC that already exists for each inertial instrument. This would literally transform a gyro into a "USB gyro". The synthesizable cores are not cheap and can cost near $100,000 for one core. If a company plans to mass produce many USB products, though, the synthesizable cores could become very economical.
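The 3 in³ figure follows directly from Table 6-2. In the sketch below, the backplane volume is not stated in the document; the 0.52 in³ value is inferred here from the difference between the 3.52 in³ component sum and the stated 3 in³ net figure.

    # Tallying Table 6-2: added component volumes for the worst-case USB design.
    components = {                       # (volume per chip in in^3, quantity)
        "host controller":     (0.200, 1),
        "USB hub":             (0.188, 4),
        "function controller": (0.188, 9),
        "A/D converter":       (0.081, 6),
        "power converter":     (0.065, 6),
    }
    added = sum(v * q for v, q in components.values())       # 3.52 in^3
    backplane = 0.52    # assumed typical backplane volume, inferred from the
                        # stated net figure of 3 in^3 (not given in the text)
    print(round(added, 2), round(added - backplane, 2))      # 3.52, 3.0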
In conclusion, the USB architecture in general will provide more modularity and reliability than a traditional centralized architecture. The one drawback is the volume and associated weight that it adds to the system. A designer must decide whether this added volume is acceptable in exchange for the cost savings on redesigns and upgrades that will be shown in the next section. As noted, future advances in technology may make volume less of an issue than it is today.
6.2 Cost Factors
Cost factors can be used to determine the overall differences in cost between the two systems in various situations. The situations of highest concern here are: development costs, costs for a new design if one is required, and the cost of an upgrade or expansion to the system. The costs used cover both labor and materials and supplies. The costs for any design time, system assembly or layout, or system rework are labor costs; the costs for any fabrication or individual components are for materials and supplies. Table 6-3 compares the cost factors of the two systems against each other.
Development Costs
  * Traditional Approach: none
  * USB Approach:
    - Cost of breadboarding and testing the new architecture = $20,000³

New Design Costs
  * Traditional Approach:
    - Backplane design = $15,000
    - Backplane fabrication = $250
    - Backplane system assembly = $500
    - Three flex cable designs = $9,000
    - Three flex cable fabrications = $300
  * USB Approach:
    - Cost of a host = $20⁴
    - Cost of a hub = $11
    - Added single module layout costs = $500
    - Cost of electronics for an added module = $50⁵
    - Cost of bus wiring = $7

Cost Per System Change
  * Traditional Approach:
    - Backplane redesign = $3,000
    - Backplane fabrication = $250
    - Backplane system rework = $500
    - New flex cable design = $3,000
    - New flex cable fabrication = $100
  * USB Approach:
    - Cost of using an existing available hub port = $0
    - Cost of a new hub = $11
    - Added single module layout costs = $500
    - Cost of electronics for an added module = $50
    - Cost of a bus cable = $7

Table 6-3: Traditional Versus USB Cost Factors
As stated earlier, the cost factors are divided into three groups. As a worst case scenario, it is assumed that a SCPB architecture already exists, so there will be no development costs for the traditional system. For the USB approach, however, development costs are required. These costs include the cost of breadboarding and testing the new architecture, estimated at $20,000. Note that if one assumes that both systems will be designed from scratch, the development costs for the two systems should be assumed to be equal.
³ A cost of $20,000 is used for one man-month of work [1].
⁴ Again, because the host controller from Cypress will not be out until the beginning of 1999, this amount was estimated to be almost double the cost of a hub.
⁵ This cost includes a $9 cost for the AD7856 A/D converter, a $10 cost for the MAX743 power converter, and costs for other miscellaneous module circuitry.
6.2.1 New Design Costs
If it is determined that an existing design must be altered sufficiently to require a redesign of the
system, the costs of redesigning the system are less for the USB approach than they are for the
traditional approach. Table 6-4 shows the breakdown of these costs.
        Component                  Component Cost   Number Required   Total Component Cost
SCPB    Backplane Design           $15,000          1                 $15,000
        Backplane Fabrication      $250             1                 $250
        Backplane System Assembly  $500             1                 $500
        Flex Cable Design          $3,000           3                 $9,000
        Flex Cable Fabrication     $100             3                 $300
        Total System Cost:                                            $25,050

USB     Host                       $20              1                 $20
        Hubs                       $11              4                 $44
        Module Layout              $500             9                 $4,500
        Module Electronics         $50              9                 $50
        Bus Cable                  $7               13                $91
        Total System Cost:                                            $4,705

Table 6-4: New Design Costs for Traditional and USB Architectures
The new design costs for the SCPB architecture are much higher than those of the USB architecture: $25,050 for the SCPB versus $4,705 for USB. For each new design, the SCPB requires a new backplane design, fabrication, and assembly, as well as flex cable design and fabrication. On the other hand, for each new design, the USB architecture only requires a host, enough hubs to support all of the modules, module layout and electronics for each module, and enough bus cables to connect the hubs to the host and the modules to the hubs. It is assumed that the cost of software layout and development will be approximately the same for both systems. This is also a worst case scenario, because if the system already exists and then must be redesigned, chances are that for a USB architecture the modules will already exist. This removes the module layout and electronics costs, leaving the redesign of the topology of the USB architecture as the only major cost.
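The totals in Table 6-4 can be checked with a few lines of Python. The line items are carried exactly as tabulated, so the sums reproduce the $25,050 and $4,705 figures stated above.

    # Reproducing the new-design totals of Table 6-4.
    scpb = {
        "backplane design":          15_000 * 1,
        "backplane fabrication":        250 * 1,
        "backplane system assembly":    500 * 1,
        "flex cable design":          3_000 * 3,
        "flex cable fabrication":       100 * 3,
    }
    usb = {
        "host":                20 * 1,
        "hubs":                11 * 4,
        "module layout":      500 * 9,
        "module electronics":  50,       # table total, carried as given
        "bus cable":            7 * 13,
    }
    print(sum(scpb.values()), sum(usb.values()))   # 25050 4705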
6.2.2 System Upgrade Costs
The other situation that must be covered is the difference in cost for a system upgrade or expansion. If it is desired to add a new sensor or transducer to the current system, the system must be altered to accept that new transducer. The costs for the two architectures were again compiled, and the results are summarized in Table 6-5.
        Component                 Single Component Cost   Total System Cost
SCPB    Backplane Redesign        $3,000
        Backplane Fabrication     $250
        Backplane System Rework   $500                    $6,850
        Flex Cable Design         $3,000
        Flex Cable Fabrication    $100

USB     Hub                       $44
        Module Layout             $500
        Module Electronics        $50                     $601
        Bus Cable                 $7

Table 6-5: System Upgrade Costs for Traditional and USB Architectures
The system upgrade costs for the SCPB architecture are again higher than those of the USB architecture: $6,850 for the SCPB versus $601 for USB. The situation considered is that a single transducer will be added to the system. For each transducer added, the SCPB requires a backplane redesign, fabrication, and rework, as well as a single flex cable design and fabrication. On the other hand, for each transducer added to the USB system, the USB architecture only requires an additional hub, module layout and electronics for the added module, and a bus cable to connect the module to a hub port. It is assumed that the cost of any software layout and development for the added module will be approximately the same for both systems. Again, this is a worst case scenario, because if the module added to the USB architecture is an off-the-shelf, already-designed USB module, there will be no costs for the additional module layout and electronics. Also, if there is an existing available port on one of the hubs already in the system, the cost of an additional hub can be subtracted.
6.3 Conclusions
Using the results from the previous section, different situations can be presented which show that using a distributed architecture to design a flight control system not only adds modularity to the system but also extends the system's life. Figure 6-1 shows the costs for both systems throughout the life of the system in a specific instance where the system required an additional sensor, was redesigned, and then required an additional sensor once more. The USB system is shown in two cases: one where the development costs are considered and one where they are not.
[Figure: cumulative cost, from $0 to $50,000, plotted at four stages (initial development, sensor addition, system redesign, sensor addition) for the SCPB system, the USB system with development costs, and the USB system without development costs]

Figure 6-1: Costs for Redesigning and Upgrading a Flight Control System
Figure 6-1 shows the costs beginning with initial development. Because the current practice for flight control systems is to use the traditional centralized architecture, a worst case scenario is to assume that there is no initial development cost for the SCPB system and a cost of $20,000 for the USB system. Even when this worst case is considered, the SCPB system ends up costing more even before the system redesign stage. After the second sensor addition to the system, the total cost of the SCPB system is almost $13,000 more than that of the USB system. If the initial development costs are taken to be equal, the total cost of the SCPB system after the second sensor addition is almost $33,000 more than that of the USB system.
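The trajectories plotted in Figure 6-1 follow directly from the per-event costs in Tables 6-3 through 6-5, as the sketch below shows; the two printed differences match the "almost $13,000" and "almost $33,000" figures quoted above.

    # Reconstructing the cumulative cost curves of Figure 6-1.
    SCPB_EVENTS = [0, 6_850, 25_050, 6_850]     # dev, add, redesign, add
    USB_EVENTS  = [20_000, 601, 4_705, 601]

    def cumulative(events):
        out, running = [], 0
        for cost in events:
            running += cost
            out.append(running)
        return out

    scpb, usb = cumulative(SCPB_EVENTS), cumulative(USB_EVENTS)
    print(scpb[-1] - usb[-1])                    # 12843: "almost $13,000"
    print(scpb[-1] - (usb[-1] - USB_EVENTS[0]))  # 32843: "almost $33,000",
                                                 # i.e., equal development costs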
The conclusions presented in this chapter support the initial hypothesis that a distributed flight control system architecture will satisfy the need for frequent upgrades, replacements, and redesigns of vehicles and their subsystems more efficiently than the traditional centralized flight control system architecture. The distributed architecture adds modularity and reliability to the system, but it also adds volume and weight. Factors which can mitigate the additional volume and weight were presented, and it is the designer's decision whether or not the additional volume is acceptable in order to give the system the modularity to permit redesigns and upgrades in a very cost effective fashion.
Chapter 7
Conclusions and Future Research
The goal of this thesis was to explore the potential of a distributed architecture versus a
centralized architecture in small UAV flight control systems to better meet the needs of
redesigning and upgrading those systems.
A systematic approach was taken to explore that potential. First, the requirements of various existing small UAVs were examined, which led to the conclusion that requirements vary widely from vehicle to vehicle. It was shown that neither an existing COTS flight control system nor a theoretical flight control system based on the current practice in flight control system design could satisfy the requirements across multiple vehicles. This suggested that a scalable architecture, one that preserved the prior design but also allowed for sensor and processor improvements, was a good approach.
Methods for attaining this scalable architecture were
explored and a bus architecture was chosen as the best method for implementation. Many data
bus technologies exist and a study of the existing technologies revealed two specific bus
architectures with the potential to satisfy flight control system requirements:
the LONWorks control bus and the USB high performance serial bus. As a current practice baseline, the SCPB architecture was used as a point of comparison for evaluating distributed architectures. The LONWorks architecture could not meet the requirements of that theoretical system. More specifically, the bandwidth that the LONWorks communications medium can support is not sufficient for the theoretical system. The USB architecture, however, can support more than
enough bandwidth to satisfy the flight control sensor requirements. Sets of functional and cost
metrics were then used to compare the USB architecture to the SCPB architecture. The analysis
of these metrics revealed that the USB architecture was more modular and reliable than the SCPB
architecture. The analysis also revealed that in cases where the system had to be redesigned or
upgraded with an additional sensor, the system costs for the USB architecture were much less
than the system costs for the SCPB architecture, over the life of the flight control system.
Therefore, compared to a centralized flight control system architecture, a distributed flight control
system architecture is more modular and will better meet the needs of frequent upgrades,
replacements, and redesigns of vehicles and their subsystems.
The modularity of a distributed flight control system architecture was a major contributor to the results obtained in this thesis. The direct processor interface design in a traditional centralized architecture is difficult to modify and eventually becomes outdated; it lacks modularity. In order for a system to be modular, a plug-and-play simplicity with standard interfaces is needed, where different parts of the flight control system can be interchanged with as little effect as possible on the entire system. This plug-and-play simplicity saves some of the time and money that are normally put into redesigning and constructing a new vehicle whenever these parts must be interchanged. Modularity adds other benefits to the system as well: the system can be continually expanded as long as the bandwidth requirements of the bus are met, and the standard interfaces of the transducers allow them to be tested and maintained more easily off-board the vehicle using common test facilities. Because the different parts of the flight control system can be interchanged with little effect on the entire system, when the desire arises to actually make those changes, the cost to the designer is lessened because a majority of the overall system is preserved. Thus, the life of the flight control system is extended.
It is unlikely that distributed flight control technology will be the best solution in all
cases. In general, the best solution is usually a mixture of these two approaches. Furthermore,
the best solution greatly depends on the requirements and overall objectives that the vehicle must
meet.
In this thesis, one of the major goals was to find the best architecture to meet the
requirements across multiple platforms. This was shown to be a modular architecture. The other
major goal was to meet system redesign and upgrade requirements in the most cost effective
manner.
The best architecture to meet these goals was a distributed flight control system
architecture. In other instances, this might not be the case. For example, consider a system that
does have very strict size and weight constraints and cannot afford any extra volume.
If a
distributed architecture is used in this case, more microcontrollers and electronics are added to the
system as intelligence in the nodes, increasing the overall volume of the system. These added
microcontrollers require more power which might add even more volume to the system.
Therefore, in this particular case, rather than a distributed system, a more centralized system is
desired, which would remove a lot of the excess electronics. Methods for integrating nodes on
the bus to cut down on the size and weight of a system should be researched.
The USB architecture in this thesis utilized a function controller for each instrument. Another option is to integrate more than one instrument on a single function controller. For example, if the system is using an altitude reading from an altimeter and a GPS, it may be possible to integrate these two instruments with a single function controller. The altimeter could aid the GPS, or an estimate based on the two readings could be sent over the bus. This method is used for an integrated IMU, in which all six inertial instruments are integrated with a single function controller.
The type of data bus technology to use in implementing a distributed flight control
system architecture also depends on the requirements and overall objectives of the vehicle. For
the SCPB system in this thesis, LONWorks technology could not meet the requirements and USB
technology more than met the requirements of the system. Similar results have occurred in other
industry studies.
Several organizations within the Boeing Aircraft Company are working to
define a standard interface that would allow a data acquisition system to record information from
a large number of smart sensors at a bit rate of 12.8 Kbps [8]. In an evaluation of the current
data bus technology, LONWorks and CAN were first eliminated because they would not meet
system data rate requirements. They are looking into other field busses (ProfiBus and World FIP)
as well as high-speed serial busses (USB and Firewire). One of their final options is to take an
existing field bus and modify it to add a different physical layer and any required protocol
changes. They feel this might be the only way to meet system requirements. However, if for
some reason in the future, flight control systems start requiring more and more bandwidth for
sensors, the USB technology may not be the best solution. Designers may have to explore the
Firewire technology, another high performance serial bus which is very similar to the USB
technology, but can provide much higher bandwidths. On the other hand, if another flight control
system desires modularity and has fairly low bandwidth requirements for sensors and actuators,
either LONWorks technology or CAN technology may provide the best solution.
Many
technologies currently exist and it is up to the design engineer to determine which technology
best meets the needs of his or her system.
To this point, the research done for this thesis has been strictly a paper study; no hardware implementation has been done. A next step would be to take the results and overall theory from this thesis and implement an actual system using one of the data bus technologies described herein.
Appendix A
Acronyms
A/D      Analog-to-Digital
ASIC     Application Specific Integrated Circuit
BDA      Battle Damage Assessment
bps      Bits Per Second
Bps      Bytes Per Second
BTF      Byte Times Per Frame
CAN      Controller Area Network
CCU      Central Control Unit
COTS     Commercial Off-the-Shelf
CPU      Central Processing Unit
CRC      Cyclic Redundancy Check
CSDL     Charles Stark Draper Laboratory
CSMA     Carrier Sense Multiple Access
D/A      Digital-to-Analog
DARPA    Defense Advanced Research Projects Agency
DOTS     Draper Off-the-Shelf
DSAAV    Draper Small Autonomous Aerial Vehicle
EEPROM   Electrically Erasable Programmable Read Only Memory
EOD      Explosive Ordnance Disposal
EOP      End of Packet
FPGA     Field Programmable Gate Array
GPS      Global Positioning System
HCD      Host Controller Driver
I/O      Input and Output
I2C      Inter-Integrated Circuit
IMU      Inertial Measuring Unit
INS      Inertial Navigation System
IP       Internet Protocol
IRP      I/O Request Packet
ISO      International Standard Organization
IUVC     Intelligent Unmanned Vehicle Center
Kbps     Kilo-Bits Per Second
LAN      Local Area Network
LON      Local Operating Network
LPS      Local Positioning System
MAC      Media Access Control
Mbps     Mega-Bits Per Second
MUX      Multiplexer
NBC      Nuclear, Biological, and Chemical
NRZI     Non-Return to Zero, Inverted
OHC      Open Host Controller
OSI      Open System Interconnection
PCI      Peripheral Component Interconnect
PROM     Programmable Read Only Memory
PWM      Pulse Width Modulation
RF       Radio Frequency
SCPB     Small UAV Current Practice Baseline
SCSI     Small Computer System Interface
SDS      Smart Distributed System
SIE      Serial Interface Engine
SOP      Start of Packet
SPI      Serial Peripheral Interface
TBD      To Be Determined
TCP      Transport Control Protocol
TD       Transfer Descriptor
UAV      Unmanned Aerial Vehicle
UHC      Universal Host Controller
ULV      Unmanned Land Vehicle
USB      Universal Serial Bus
USBD     Universal Serial Bus Driver
UUV      Unmanned Underwater Vehicle
UXO      Unexploded Ordnance
WASP     Wide Area Surveillance Projectile
REFERENCES
[1] Adam, S.P. A Distributed Control Network for a Mobile Robotics Platform. SM Thesis (CSDL-T-1296), MIT, May 1996.

[2] Anderson, Don. Universal Serial Bus System Architecture. Addison Wesley Longman, Inc., Reading, MA, 1997.

[3] Bertsekas, Dimitri, and Robert Gallager. Data Networks. Prentice Hall, Upper Saddle River, NJ, 1992.

[4] Butler, Robert. CSDL Employee. Technical Discussion, January 14, 1998.

[5] Dahl, Glenn. Echelon Systems Engineer. Technical Discussion, February 18, 1998.

[6] Deyst, John. Professor of Aeronautics and Astronautics at MIT. Technical Discussion, February 6, 1998.

[7] "Early Use of UAVs". Available from http://www.acq.osd.mil/daro/uav/early.html, April 29, 1997.

[8] Eccles, Lee H. "A Smart Sensor Bus for Data Acquisition". Sensors, March 1998.

[9] Echelon's LonWorks Products Data Book: 1995-1996 Edition. Echelon Corporation, 1995.

[10] Garney, John. "An Analysis of Throughput Characteristics of Universal Serial Bus". Available from http://www.usb.org/developers/, March 31, 1998.

[11] Hanson, Dave. CSDL Employee. Technical Discussion, April 24, 1998.

[12] Kriplani, Nick. Cypress USB Technician. Technical Discussion, March 25, 1998.

[13] Leonard, Milt. "Creating the Truly Smart Sensor". Electronic Design for Engineering and Engineering Managers Worldwide, April 15, 1993.

[14] LONWORKS Technology Device Data. Motorola, Inc., U.S.A., 1995.

[15] Madan, Pradip. "LonWorks Technology for Intelligent Distributed Interoperable Control Networks". Available from http://www.echelon.com/solutions/wpapers.htm, January 21, 1998.

[16] MIL-STD-1553B Digital Data Bus Components. MilesTek, Inc., Denton, TX, (800) 524-7444, Catalog 1553-3.

[17] Ormond, Tom. "Smart Sensors Tackle Tough Environments". EDN (Electronic Technology for Engineers and Engineering Managers Worldwide), October 14, 1993.

[18] Schwartz, Gary. CSDL Employee. Technical Discussion, March 12, 1998.

[19] Smith, Samuel M. "An Approach to Intelligent Distributed Control for Autonomous Underwater Vehicles". Proceedings of the 1994 Symposium on Autonomous Underwater Vehicle Technology, Cambridge, MA, July 1994.

[20] The Random House Dictionary of the English Language, The Unabridged Edition. Random House, New York, 1983.

[21] Tung, Charles P. A Distributed Processing Network for Autonomous Micro-Rover Control. BS & SM Thesis (CSDL-T-1300), MIT, January 1998.

[22] UAV Annual Report: FY 1996. Defense Airborne Reconnaissance Office, Washington, DC, 1996.

[23] Universal Serial Bus Specification, Revision 1.0. Available from http://www.usb.org/developers/, March 20, 1998.