Wireless Extension of an Avionics Bus for Prototyping and Testing Reconfigurable UAVs

G.D. Chandler*, C.P. Harr*, O.A. Rawashdeh*, D.M. Feinauer*, D.K. Jackson*, A.W. Groves*, and J.E. Lumpp, Jr.†
University of Kentucky, Lexington, KY 40506, USA

* Research Assistant, Department of Electrical and Computer Engr., AIAA Student Member.
† Associate Professor, Department of Electrical and Computer Engr., jel@uky.edu, AIAA Member.

This paper presents current research to extend the electronic systems of a UAV via a wireless link. Augmenting the onboard system with external resources offers a flexible testing environment and allows rapid prototyping of new hardware and software components that may not yet be adapted to meet the weight, space, and power restrictions of a small UAV. Additionally, keeping expensive prototypes on the ground lowers the risk of damage during experimentation. We are currently evaluating a number of hardware components and software techniques to wirelessly extend an onboard network.

I. Introduction

Light UAVs constitute a large percentage of the UAVs being developed for both military and domestic applications. Developing avionics systems for these vehicles requires additional time and expense to miniaturize designs to fit on a compact UAV, and higher risks are assumed when one-of-a-kind systems are placed in a test environment. The approach presented in this paper addresses these issues by placing only the minimal hardware for control and communications onboard the aircraft, while additional hardware (and software) resources remain on the ground. The challenge lies in making the wireless link layer transparent to all hardware and software modules on the aircraft and on the ground.

The benefits gained from the physical separation of system components are threefold. First, integration tests of expensive sensor packages can be performed with the sensor in a protected environment separate from the aircraft. Second, the computing resources of large and power-hungry prototypes can easily be made available to the UAV avionics, greatly increasing the speed of development. Third, time spent testing in the field can be used more efficiently because the aircraft no longer needs to land to have onboard systems replaced; instead, these systems can be hot-swapped while the aircraft remains in the air.

The design and verification of light UAV avionics is more constrained than that of larger vehicles. The economics of a large-scale system require that much of the verification be conducted through simulation before any flight testing is performed. Compact vehicles are often field tested with software and hardware that lack this level of verification because the systems are typically cheaper to replace in the event of a failure. Although this difference promotes rapid development, the inherent risks can become unacceptable as the systems increase in cost.

Small UAVs also possess significantly fewer onboard resources than larger aircraft. The fundamental constraints of size, weight, energy storage capacity, and processing power must all be managed. Traditionally, the only option has been to build prototypes that conform to these constraints, a step that is often expensive and time consuming.

A solution to these problems is to link computing resources on the aircraft to resources on the ground. Although remote processing is not a new concept, the specifications of such a system are typically fixed at design time and many aspects of the system must take this into account. The novelty of the approach presented here is that processing elements may be arbitrarily located and relocated at any time (bench test, field test, deployment, etc.) without modifying hardware or software modules.
The technique is based on the wireless extension of automatically reconfiguring multiprocessor systems, which is described in the next section.

III. Ardea Framework

Ardea (Automatically Reconfigurable Distributed Embedded Architecture) can be used to address the inherent unreliability of wireless links [1]. Much of the discussion presented in this paper expands upon the ideas and concepts of Ardea. Ardea is used to specify and create distributed systems that degrade gracefully: in the presence of faults (both hardware and software), such a system automatically reduces the set of services it provides in order to maintain some level of operation. This reconfiguration happens dynamically, stopping and restarting only those services affected by the outage.

In Ardea, application software is developed in a modular fashion. The software architecture is captured graphically in dependency graphs [2] (DGs), which specify the flow of data and the dependencies among software modules. Data variable nodes represent the information passed between modules. The data variable (input) requirements of a software module are expressed using a set of dependency gates (comparable to logic gates), allowing the specification of a module with several distinct resource requirements. The software modules therefore form the basic degradable units of the architecture.

An example DG for a hypothetical flight control system is shown in Figure 1. Input and output devices (i.e., sensors and actuators) are shown as oval nodes on the left and right hand sides of the DG, respectively. The driver modules for the airspeed sensor and the inertial measurement unit (IMU) produce the airspeed and attitude data variables, which, together with the desired airspeed data variable from another subsystem, are required by the two flight control modules ("required" inputs are specified with AND dependency gates). Each flight control module produces an elevator angle and a rudder angle data variable. The two servo drivers, one controlling the rudder and one controlling the elevator, read the desired angle data variables and update their servos. The XOR dependency gates specify that only one of the flight control modules will be operational in a given system configuration. A possible encoding of this graph as a data structure is sketched below.

Figure 1. Example dependency graph.

The hardware components of a gracefully degrading system as defined by Ardea are a system manager, a communication network, processing elements, and input/output devices. A pictorial representation of an Ardea system is shown in Figure 2. At the bottom of the diagram are the input/output devices, each of which must be attached to a processing element (PE). Although PEs are typically responsible for acquiring sensor data and controlling actuators, they may also simply exist as members of a general processing pool. The processing elements communicate with each other through a unicast/multicast network. The data on this network is of two types: application messages between the software modules running on the PEs, and management messages to and from the system manager.

Figure 2. Ardea hardware block diagram.
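To make the dependency-gate notation of Figure 1 concrete, the graph can be represented as a small data structure that the system manager evaluates when computing configurations. The following C sketch is illustrative only; the type names, the data variable list, and the fixed array sizes are assumptions made for this example and do not reflect Ardea's actual internal representation.

/* Hypothetical encoding of the Figure 1 dependency graph. */
#include <stdint.h>

typedef enum { GATE_AND, GATE_XOR } gate_type_t;

/* Data variables passed between software modules. */
typedef enum {
    VAR_AIRSPEED, VAR_ATTITUDE, VAR_DESIRED_AIRSPEED,
    VAR_ELEVATOR_ANGLE_A, VAR_RUDDER_ANGLE_A,   /* from primary flight control */
    VAR_ELEVATOR_ANGLE_B, VAR_RUDDER_ANGLE_B    /* from backup flight control  */
} data_var_t;

/* A dependency gate groups the inputs a module requires. */
typedef struct {
    gate_type_t type;
    uint8_t     num_inputs;
    data_var_t  inputs[4];
} dep_gate_t;

/* A software module: its input gates and the data variables it produces. */
typedef struct {
    const char *name;
    uint8_t     num_gates;
    dep_gate_t  gates[2];
    uint8_t     num_outputs;
    data_var_t  outputs[2];
} sw_module_t;

/* Primary flight control: requires airspeed AND attitude AND desired airspeed. */
static const sw_module_t flight_ctrl_primary = {
    "flight_ctrl_primary",
    1, { { GATE_AND, 3, { VAR_AIRSPEED, VAR_ATTITUDE, VAR_DESIRED_AIRSPEED } } },
    2, { VAR_ELEVATOR_ANGLE_A, VAR_RUDDER_ANGLE_A }
};

/* Elevator servo driver: accepts the elevator angle from either flight control
 * module, but only one may be active at a time (XOR gate).                     */
static const sw_module_t elevator_driver = {
    "elevator_driver",
    1, { { GATE_XOR, 2, { VAR_ELEVATOR_ANGLE_A, VAR_ELEVATOR_ANGLE_B } } },
    0, { 0 }   /* drives the servo directly; no data-variable outputs */
};

With an encoding along these lines, the system manager can evaluate each module's gates against the set of data variables currently being produced to decide whether the module can be included in a candidate configuration.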
The system manager (SM) is a hardware/software subsystem connected to the communication network with the following responsibilities: tracking the availability and status of system hardware and software resources; computing new system configurations, using the DGs, that map a set of software modules onto the available PEs; deploying new system configurations; and handling data checkpointing for application software modules. Failures of the system manager component itself are not handled by Ardea; instead, the SM is made robust using traditional fault tolerance techniques. The expense of such a manager is justifiable because it can be applied to a variety of systems, so the cost of validation can be spread over multiple projects. Failures of PEs, sensors, actuators, software modules, and communication links, on the other hand, are supported by the Ardea system model.

There are many scenarios in which the system operates in a degraded mode. For example, insufficient hardware resources may be available, causing non-critical functions to be dropped. Alternate, lower-quality sources of data (software modules) may be used when the preferred sources are not available. If a system designer specifies outputs with soft deadlines, the system can provide degraded performance by producing outputs that do not meet their deadlines. Finally, if further correct operation cannot be achieved, the system may shut down in a controlled, pre-specified manner in response to system changes.

IV. Wireless Extension

In this work we propose that the Ardea framework of Figure 2 be extended to include a second physical network, with the resulting partitions linked by two network-to-wireless bridges, as shown in Figure 3. These bridges effectively serve as proxies for all the network-attached PEs on the complementary bus. This structure allows resources to be shared between the aircraft and the ground. However, the wireless link introduces new problems: links of this type have limited bandwidth and can suffer intermittent data loss. Both of these problems must be addressed.

Figure 3. Wirelessly unified avionics system.

The key to linking resources over unreliable wireless connections is the ability to dynamically reconfigure. Although Ardea is designed to support failure detection below the node level (sensor and actuator failures), its ability to detect the absence of processing elements is sufficient for this wireless application. When communication fails, the local bridge can no longer provide proxied information from the processing elements on the other bus. To the system manager, this appears as the simultaneous failure of a number of processing elements. While the link is unavailable, the system manager receives no heartbeat signals from the remote PEs, considers them to have failed, and reconfigures accordingly. Once the link returns, the heartbeats are received again, triggering the system to reconfigure once more. Techniques can be employed to prevent repeated reconfiguration when the link exhibits sporadic behavior; these features are not currently part of the Ardea specification, but are an extension of the software currently under development.
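One simple way to prevent the system from thrashing between configurations over a sporadic link is to declare remote PEs failed quickly when their heartbeats stop, but to readmit them only after heartbeats have been received steadily for some time. The following C sketch illustrates such a hysteresis scheme; the timing constants, structure, and function names are assumptions made for illustration and are not part of the Ardea specification.

/* Hypothetical heartbeat supervision with hysteresis for remote PEs. */
#include <stdint.h>
#include <stdbool.h>

#define HEARTBEAT_TIMEOUT_MS   500u   /* missed heartbeats -> declare PE failed   */
#define REJOIN_STABLE_MS      5000u   /* link must be steady before rejoining     */

typedef struct {
    uint32_t last_heartbeat_ms;   /* time of last heartbeat from this PE    */
    uint32_t stable_since_ms;     /* start of current uninterrupted period   */
    bool     online;              /* currently part of the configuration     */
} pe_status_t;

/* Called whenever a heartbeat message from this PE arrives. */
void pe_heartbeat(pe_status_t *pe, uint32_t now_ms)
{
    if (!pe->online && (now_ms - pe->last_heartbeat_ms) > HEARTBEAT_TIMEOUT_MS) {
        /* first heartbeat after an outage: restart the rejoin timer */
        pe->stable_since_ms = now_ms;
    }
    pe->last_heartbeat_ms = now_ms;
}

/* Called periodically by the system manager; returns true when the PE's
 * availability has changed and a new configuration should be computed.  */
bool pe_update(pe_status_t *pe, uint32_t now_ms)
{
    if (pe->online && (now_ms - pe->last_heartbeat_ms) > HEARTBEAT_TIMEOUT_MS) {
        pe->online = false;            /* treat the PE as failed immediately */
        return true;
    }
    if (!pe->online &&
        (now_ms - pe->last_heartbeat_ms) <= HEARTBEAT_TIMEOUT_MS &&
        (now_ms - pe->stable_since_ms)  >= REJOIN_STABLE_MS) {
        pe->online = true;             /* rejoin only after a stable period  */
        return true;
    }
    return false;
}

The asymmetry (fail fast, rejoin slowly) keeps the onboard backup configuration in control during brief dropouts rather than handing control back and forth on every momentary loss of signal.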
Consider the example of the sporadic wireless linking of a ground-based experimental autopilot, shown in Figure 4. For evaluation purposes, a processor-intensive vehicle control algorithm is run on a high-performance desktop computer. The data bus on the aircraft is extended wirelessly, via the methods presented here, to the desktop computer, where the algorithm receives state information and returns control surface adjustments. To all systems onboard the plane, the PC appears as a local data provider and consumer. As a backup to this experimental system, a stable, thoroughly tested autopilot algorithm with less functionality is implemented onboard the aircraft.

Figure 4. Aircraft system and bus extension.

During testing, the aircraft's flight path exceeds the range of the wireless link and communication is lost. The wireless bridge can no longer receive information from the desktop computer, and the system manager onboard the aircraft recognizes this as the failure of a processing element. It responds by evaluating the onboard resources and scheduling the onboard autopilot algorithm. The onboard algorithm turns the craft back toward the runway, and after some time the aircraft is again within range of the wireless transmitter. The heartbeats from the ground-based systems reappear on the airborne network, and the system manager transitions control of the aircraft back to the ground-based autopilot.

V. Current Development

We are currently implementing a test bed for the wireless extension of the Ardea bus. The Controller Area Network (CAN) bus has been selected to provide the network fabric, and extended 8051 microcontrollers will be used for the processing elements, the system manager, and both ends of the wireless link bridge. Several data modems are being evaluated with respect to their range and bandwidth.

The CAN bus fulfills the needs and specifications of Ardea. The CAN standard is optimized for systems that need to transmit relatively small amounts of information reliably to any or all other nodes [3]. A carrier sense multiple access with collision detection (CSMA/CD) protocol is used in conjunction with nondestructive bitwise arbitration to allow corruption-free transmissions [4]. Data transfer across the network is message based: packets containing an identifier and data are broadcast, which allows a great amount of system flexibility and non-invasive monitoring. The combination of high-speed (up to 1 megabit per second) asynchronous signaling, cyclic redundancy checking, differential signaling, and fail-silence guarantees (through "fault confinement") makes this network well suited to our real-time reliability needs. A software library that supports the communication requirements of the extended avionics bus has been developed and is being used on a bench-top CAN network running at reduced speed. The test bed is shown in Figure 5.

Figure 5. CAN bus bench top system.

Extended 8051 microcontrollers are well suited to handle the processing needs of light UAVs. A variety of devices are currently available on the market, from high-performance units to very low cost CAN adapters used to form dedicated sensor and actuator interfaces.

The evaluation of radio links for this project is ongoing. 900 MHz and 2.4 GHz serial modems with several miles of range are currently being used for bi-directional communication between airborne vehicles and ground stations. An image of one such modem installed in an aircraft is shown in Figure 6. Commercial off-the-shelf solutions such as Bluetooth, IEEE 802.11b, and other 900 MHz and 2.4 GHz devices are also being investigated. The most promising solution employs 802.11 devices [5] and 2.4 GHz amplifiers; such a system may be capable of bidirectionally transmitting all message traffic on the CAN networks at full speed over tens of miles using amateur frequencies and licensing.

Figure 6. Radio modem onboard aerial platform.
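In its simplest form, each bridge repeatedly reads frames from its local CAN controller and relays them over the serial modem, where the opposite bridge reinjects them onto the remote bus. The C sketch below shows one direction of such a relay using a trivial length-prefixed framing; the driver functions (can_read, radio_write) and the framing are hypothetical, since the actual bridge firmware and link protocol are still being designed.

/* Minimal sketch of one direction of a CAN-to-radio bridge. */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

typedef struct {
    uint16_t id;        /* 11-bit CAN identifier (also the arbitration priority) */
    uint8_t  dlc;       /* data length code: 0..8 bytes                          */
    uint8_t  data[8];
} can_frame_t;

/* Hypothetical drivers supplied elsewhere in the bridge firmware. */
extern bool can_read(can_frame_t *frame);                  /* nonblocking */
extern void radio_write(const uint8_t *buf, uint8_t len);  /* blocking    */

void bridge_can_to_radio(void)
{
    can_frame_t f;
    uint8_t buf[2 + 1 + 8];

    while (can_read(&f)) {
        /* pack identifier, length, then payload */
        buf[0] = (uint8_t)(f.id >> 8);
        buf[1] = (uint8_t)(f.id & 0xFF);
        buf[2] = f.dlc;
        memcpy(&buf[3], f.data, f.dlc);
        radio_write(buf, (uint8_t)(3 + f.dlc));
    }
}

Relaying every frame in this way is only adequate when the radio can sustain the full CAN data rate; otherwise, as discussed below, the bridge must be more selective about what it forwards.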
Design options for the architecture and link protocol of the bridges, which operate between the local network and the wireless modem, are also being evaluated. The primary challenge is to design these layers so that each bridge can serve as a proxy on its local network for all of the processing elements attached to the remote network. This also involves maintaining message order and bandwidth on both networks. When the wireless bandwidth is less than that of the networks being linked, more processing than simple serial conversion is required. Options include declining to forward messages that no device on the remote network consumes (the network traffic does not contain information identifying producers and consumers) and filtering out low-priority messages. Further options to explore include compression, limiting bus bandwidth, and developing intelligent filters that understand the Ardea protocols and route messages appropriately.

This research will continue to address the bandwidth issues surrounding the use of this bus extension system. It is hoped that the communication rate of the CAN network can be increased to the maximum allowable baud rate while still carrying 100% of this traffic wirelessly. In addition, CAN network controllers with larger feature sets than those in the current design will be evaluated.

VI. Summary

By incorporating increased test functionality into the core architecture of a UAV system, prototyping new hardware and software modules becomes far less complex, time consuming, and risky. Extending the system bus to a ground location not only allows safer testing of new components, but also allows testing of modules that may not meet the strict constraints imposed by a small-scale UAV design. The first steps in building both the highly reconfigurable architecture and its wireless extension have been completed. Further work is in progress to increase the data rates and complete a full implementation.

Acknowledgments

The authors gratefully acknowledge the support of the Kentucky Space Grant Consortium and Kentucky NASA EPSCoR under the direction of Drs. Richard and Karen Hackney. This research is partially funded by a NASA Workforce Development Grant and the Commonwealth of Kentucky Research Challenge Trust Fund.

References

[1] Rawashdeh, O. A., and Lumpp, J. E., Jr., "A Dynamic Reconfiguration System for Improving Embedded Control System Reliability," IEEE Aerospace Conference, March 2005.
[2] Rawashdeh, O. A., Lumpp, J. E., Jr., et al., "A Dynamically Reconfiguring Avionics Architecture for UAVs," AIAA Infotech@Aerospace Conference, September 2005.
[3] Robert Bosch GmbH, CAN Specification 2.0, 1991.
[4] Pazul, K., Controller Area Network (CAN) Basics, Microchip Technology Inc., Application Note AN713a, 1999.
[5] Brown, T. X., Argrow, B., Dixon, C., Doshi, S., Thekkekunnel, R.-G., and Henkel, D., "Ad hoc UAV-Ground Network (AUGNet)," AIAA 3rd Unmanned Unlimited Technical Conference, Chicago, IL, September 2004.