System Architecture of Networked Sensor Platforms

Introduction
Wireless sensor networks (WSNs) consist of groups of sensor
nodes that perform distributed sensing tasks over a wireless
medium.
Characteristics
- low-cost, low-power, lightweight
- densely deployed
- prone to failures
- two deployment strategies: random, or pre-determined (engineered)
Objectives
- Monitor activities
- Gather and fuse information
- Communicate with global data processing unit
–
Recent sensor network research involves almost all the layers and can
be categorized into the following three aspects [Akyildiz+ 2002,
Elson+ 2002]:

Energy Efficiency:
small devices, limited amount of energy, essential to prolong system
lifetime

Scalability:
deployment of thousands of sensor nodes, low-cost

Locality:
nodes in large networks cannot depend on having global state
Why Sensor Platforms?
–
Traditional mechanisms for exploring networks (analysis and
simulation) are not adequate for exploring such large-scale, dynamic,
and resource-constrained networks, because of the difficulty of
modeling every aspect of the system as a whole
–
For example, the energy consumption model of the hardware platforms,
including sensing, computation, and communication, is not fully
considered, and overly simplified assumptions have been made
–
The application-specific nature of WSNs makes it even harder for
existing research mechanisms to obtain meaningful results
–
Therefore, the demand to build real platforms is increasing; e.g., Berkeley’s
motes and MANTIS
–
Compared to analysis and simulation techniques, designing a system
platform has the following advantages:

Provides a genuine execution environment: various proposed
algorithms can be evaluated exactly; a good way to examine existing
design principles and discover new ones under different
configurations

More attention can be focused on the application-layer

A real system platform can accelerate the pace of research and
development
General WSN System Architecture
–
Constructing a platform for a WSN falls into the area of embedded system
development, which usually consists of a development environment and
hardware and software platforms.
1.
Hardware Platform
Consists of the following components:
a) Processing Unit
Is associated with a small storage unit (on the order of tens of kilobytes) and
manages the procedures that enable the node to collaborate with other
nodes to carry out the assigned sensing task
b) Transceiver Unit
Connects the node to the network via various possible transmission
media such as infrared, light, radio, and so on
c) Power Unit
Supplies power to the system from small batteries, which makes
energy a scarce resource
d) Sensing Units
Usually composed of two subunits: sensors and analog-to-digital
converters (ADCs). The analog signals produced by the sensors are
converted to digital signals by the ADCs and fed into the processing unit
e) Other Application-Dependent Components
A location finding system may be needed to determine the location of sensor
nodes with high accuracy; a mobilizer may be needed to move sensor
nodes when this is required to carry out the task
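To make the ADC's role in the sensing unit concrete, here is a minimal sketch of the conversion a processing unit might perform on a raw sample; the function name and the 10-bit range with a 3.0 V reference are illustrative assumptions, not the interface of any specific platform.

```c
#include <stdint.h>

/* Hypothetical helper: convert a raw 10-bit ADC count (0..1023) into
 * millivolts, given the reference voltage in mV. With a 3.0 V reference,
 * a full-scale reading of 1023 maps back to 3000 mV. */
uint32_t adc_to_millivolts(uint16_t raw, uint32_t vref_mv)
{
    return ((uint32_t)raw * vref_mv) / 1023u;
}
```

The integer-only arithmetic matters on an 8-bit MCU without floating-point hardware; the multiply is done in 32 bits to avoid overflow.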
Figure 1: The components of a sensor node [Akyildiz+2002]
2.
Software Platform
Consists of the following four components:
a) Embedded Operating System (EOS)
Manages the hardware efficiently and supports
concurrency-intensive operations. Apart from traditional OS tasks such as
processor, memory, and I/O management, it must be real-time, to respond
rapidly to hardware-triggered events, and multi-threaded, to handle
concurrent flows
b) Application Programming Interface (API)
A set of functions provided by the OS and other system-level components
to assist developers in building applications on top of them
c) Device Drivers
A set of routines that determine how the upper-layer entities
communicate with the peripheral devices
d) Hardware Abstraction Layer (HAL)
An intermediate layer between the hardware and the OS. It provides uniform
interfaces to the upper layer, while its implementation is highly dependent
on the underlying hardware. With a HAL, the OS and
applications can easily be ported from one hardware platform to another
Figure 2: The software platform for WSN
3.
System Development Environment
Provides various tools for every stage of software development on
the specific hardware platform
a) Cross-Platform Development
Generally, an embedded system, unlike a PC, does not have the ability
to develop software for itself. The final binary code that runs on that system,
termed the target system, is generated on a PC, termed the host system, by
cross-platform compilers and linkers, and the resulting image is downloaded
via a communication port onto the target system
b) Debug Techniques
Because of the difficulties introduced by the cross-platform development mode,
debugging techniques become critical for the efficiency of software
production. For this reason, many chips on such systems provide an
on-chip debugger, such as JTAG, to reduce development time.
Berkeley Motes [Hill+ 2000]
–
Motes are tiny, self-contained, battery-powered computers with radio
links, which enable them to communicate and exchange data with one
another, and to self-organize into ad hoc networks
–
Motes form the building blocks of wireless sensor networks
–
TinyOS [TinyOS], a component-based runtime environment, is designed to
support these motes, which require concurrency-intensive
operations while being constrained by minimal hardware resources
Figure 3: Berkeley Mote
Hardware Platform
–
Consists of
o
micro-controller with internal flash program memory
o
data SRAM
o
data EEPROM
o
a set of actuator and sensor devices, including LEDs
o
a low-power transceiver
o
an analog photo-sensor
o
a digital temperature sensor
o
a serial port
o
a small coprocessor unit
Figure 4: The schematic of a representative network sensor platform
–
The processing unit
o
MCU (Atmel AT90LS8535), an 8-bit architecture with 16-bit addresses
o
provides 32 8-bit general registers and runs at 4 MHz and 3.0 V
o
has 8 KB of flash as the program memory and 512 bytes of SRAM as
the data memory
o
MCU is designed such that the processor cannot write to instruction
memory; the prototype uses a coprocessor to perform this function
o
the processor integrates a set of timers and counters which can be
configured to generate interrupts at regular time intervals
o
three sleep modes: idle (shuts off the processor), power-down (shuts
off everything but the watchdog and the asynchronous interrupt logic
necessary to wake up), and power-save (keeps the asynchronous timer running)
–
The sensing units
o
contains two sub-components: a photo sensor and a temperature sensor
o
the photo sensor is an analog input device with simple control
lines that eliminate power drain through the photo resistor when not
in use
o
the temperature sensor (Analog Devices AD7418) represents a large
class of digital sensors that have internal A/D converters and
interface over a standard chip-to-chip protocol (the synchronous two-wire
I2C protocol), with software on the micro-controller synthesizing
the I2C master over general I/O pins. There is no explicit arbiter, and
bus negotiations are carried out by the software on the micro-controller
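Synthesizing an I2C master in software over general I/O pins boils down to toggling the clock pin and setting the data pin bit by bit. The sketch below is illustrative, not the mote's actual driver: the pin-control helpers are hypothetical stand-ins that record the levels instead of driving hardware, and the ACK phase is only noted in a comment.

```c
#include <stdint.h>

/* Recorded SDA level at each SCL rising edge, so the shifted-out bit
 * sequence can be checked without hardware. */
static uint8_t sampled_bits[8];
static int     sample_count;
static int     sda_level;

static void sda_write(int level) { sda_level = level; }

/* One clock pulse: an I2C slave samples SDA on the rising edge of SCL. */
static void scl_pulse(void)
{
    if (sample_count < 8)
        sampled_bits[sample_count++] = (uint8_t)sda_level;
}

/* Bit-banged write: shift one byte out MSB-first, one SCL pulse per bit. */
void i2c_write_byte(uint8_t byte)
{
    for (int bit = 7; bit >= 0; bit--) {
        sda_write((byte >> bit) & 1u);
        scl_pulse();
    }
    /* a real master would now release SDA and clock once more
     * to sample the slave's ACK bit */
}
```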
–
The transceiver unit
o
consists of an RF Monolithics 916.50 MHz transceiver (TR1000), an
antenna, and a collection of discrete components to configure the
physical-layer characteristics, such as signal strength and sensitivity
o
operates in an on-off keyed (OOK) mode at speeds up to 19.2 Kbps
o
control signals configure the radio to operate in either transmit,
receive, or power-off mode
o
the radio contains no buffering, so each bit must be serviced by the
controller on time
o
the transmitted value is not latched by the radio, so the jitter at the
radio input is propagated into the transmission signal
–
The power unit is an Energizer CR2450 lithium battery rated at
575 mAh
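The 575 mAh rating translates directly into a lifetime budget: hours of operation are capacity divided by average current draw, and duty cycling between active and sleep modes dominates the average. The sketch below shows that arithmetic; the current figures used in the usage example are illustrative assumptions, not measured values for this mote.

```c
/* Back-of-the-envelope lifetime estimate:
 *   average current = duty * active + (1 - duty) * sleep
 *   lifetime (hours) = capacity (mAh) / average current (mA) */
double lifetime_hours(double capacity_mah,
                      double active_ma, double sleep_ma,
                      double duty_cycle /* fraction of time active */)
{
    double avg_ma = duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma;
    return capacity_mah / avg_ma;
}
```

For example, a hypothetical 1% duty cycle at 20 mA active and 0.01 mA asleep gives an average near 0.21 mA, stretching the 575 mAh cell to months rather than the day or so it would last if always active.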
–
The other auxiliary components include:
The coprocessor
o
represents a synchronous bit-level device with byte-level support
o
MCU (AT90LS2343, with 2 KB of instruction memory and 128 bytes each of
SRAM and EEPROM) that uses I/O pins connected to an SPI
controller, where SPI is a synchronous serial data link providing high-speed
full-duplex connections (up to 1 Mbit/s) between peripherals
o
the sensor can be reprogrammed by transferring data from the
network into the coprocessor’s 256-Kbit EEPROM (24LC256)
o
can be used as a gateway to extra storage by the main processor
–
The other auxiliary components include:
The serial port
o
represents a synchronous bit-level device with byte-level controller
support
o
uses I/O pins that are connected to an internal UART controller
o
in transmit mode, the UART takes a byte of data and shifts it out
serially at a specified interval
o
in receive mode, it samples the input pin for a transition and shifts in
bits at a specified interval from the edge
o
interrupts are triggered in the processor to signal completion of the
events
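The shifting described above follows the standard asynchronous serial framing: a start bit, eight data bits LSB-first, and a stop bit, one line level per bit interval. The sketch below is illustrative, not the mote's UART driver; it records the levels in an array instead of driving a pin.

```c
#include <stdint.h>

/* Frame one data byte the way a UART transmitter shifts it out:
 * start bit (low), 8 data bits LSB-first, stop bit (high).
 * Writes 10 line levels into out[] and returns the count. */
int uart_frame_byte(uint8_t byte, uint8_t out[10])
{
    int n = 0;
    out[n++] = 0;                        /* start bit */
    for (int bit = 0; bit < 8; bit++)    /* data bits, LSB first */
        out[n++] = (byte >> bit) & 1u;
    out[n++] = 1;                        /* stop bit */
    return n;
}
```

The receive side mirrors this: on detecting the start-bit transition, it samples the input pin at the same fixed interval measured from that edge, exactly as described in the bullets above.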
–
The other auxiliary components include:
Three LEDs
o
represent outputs connected through general I/O ports; they may be
used to display digital values or status
Software Platform
–
based on the Tiny Micro-Threading Operating System (TinyOS), which is
designed for resource-constrained MEMS sensors
–
TinyOS adopts an event model so that high levels of concurrency can be
handled in a small amount of space
–
A stack-based threaded approach would require that stack space be
reserved for each execution context
–
A complete system configuration consists of a tiny scheduler and a graph
of components
–
A component has four interrelated parts:
o
a set of command handlers
o
a set of event handlers
o
an encapsulated fixed-size frame
o
a bundle of simple tasks
–
tasks, commands and event handlers execute in the context of the frame
and operate on its state
–
each component declares the commands it uses and the events it signals
–
these declarations are used to compose the modular components in a
per-application configuration
–
the composition process creates layers of components where higher-level
components issue commands to lower-level components and lower-level
components signal events to the higher-level components
Frames
–
fixed-size and statically allocated, which allows the memory
requirements of a component to be known at compile time -- prevents the
overhead associated with dynamic allocation
Commands
–
non-blocking requests made to lower level components
–
typically, a command will deposit request parameters into its frame and
conditionally post a task for later execution
–
can invoke lower-level commands, but must not wait for long
–
must provide feedback to the caller by returning a status indicating whether it
was successful or not
Event handlers
–
Invoked to deal with hardware events, either directly or indirectly
–
The lowest level components have handlers connected directly to
hardware interrupts which may be external interrupts, timer events, or
counter events
–
An event handler can deposit information into its frame, post tasks, signal
higher level events or call lower level commands
–
in order to avoid cycles in the command/event chain, commands cannot
signal events
–
both commands and event handlers are intended to perform a small, fixed amount
of work, which occurs within the context of their component’s state
Tasks
–
perform the primary work
–
atomic entities with respect to other tasks; they run to completion but can be
preempted by events
–
can call lower level commands, signal higher level events, and schedule
other tasks within a component
–
run-to-completion semantics make it possible to allocate a single stack
that is assigned to the currently executing task, which is essential in
memory-constrained systems
–
allow concurrency to be simulated within each component, since tasks
execute asynchronously with respect to events
–
must never block or spin-wait; otherwise, they will prevent progress in
other components
Task scheduler
–
Utilizes a bounded-size scheduling data structure to schedule
tasks based on a FIFO, priority-based, or deadline-based policy,
depending on the requirements of the application
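The frame/command/task model described above can be sketched in a few lines of C. This is a minimal illustration of the execution model, not the real TinyOS API: a statically allocated frame, a non-blocking command that deposits its parameter and posts a task, and a bounded FIFO scheduler that runs each task to completion. All names are hypothetical.

```c
#include <stdint.h>

typedef void (*task_t)(void);

/* Bounded FIFO task queue: the scheduling structure has a fixed size,
 * so posting fails (returns 0) rather than allocating when full. */
#define QUEUE_SIZE 8
static task_t queue[QUEUE_SIZE];
static int head, tail, count;

int post(task_t t)
{
    if (count == QUEUE_SIZE)
        return 0;
    queue[tail] = t;
    tail = (tail + 1) % QUEUE_SIZE;
    count++;
    return 1;
}

/* Run the next task to completion (events could preempt a real task,
 * but other tasks never do). */
void run_next_task(void)
{
    if (count == 0)
        return;
    task_t t = queue[head];
    head = (head + 1) % QUEUE_SIZE;
    count--;
    t();
}

/* Component frame: fixed-size and statically allocated. */
static struct { uint16_t pending_sample; uint16_t processed; } frame;

/* The posted task does the primary work in the context of the frame. */
static void process_task(void) { frame.processed = frame.pending_sample * 2u; }

/* Command: deposits its request parameter in the frame, posts a task
 * for later execution, and returns a status instead of blocking. */
int sample_command(uint16_t raw)
{
    frame.pending_sample = raw;
    return post(process_task);
}
```

Because every task runs to completion, one shared stack suffices, which is the memory saving the slides attribute to this model.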
Figure 5: The schematic for the architecture of TinyOS
MANTIS [Abrach+ 2003]
–
MANTIS (MultimodAl system for NeTworks of In-situ wireless Sensors)
provides a new multi-threaded embedded operating system integrated
with a general-purpose single-board hardware platform to enable flexible
and rapid prototyping of wireless sensor networks
–
the key design goals of MANTIS are
o
ease of use, i.e., a small learning curve that encourages novice
programmers to rapidly prototype sensor applications
o
flexibility such that expert researchers can continue to adapt and
extend the hardware/software system to suit the needs of advanced
research
–
MANTIS OS is called MOS
o
MOS adopts the classical structure of layered multi-threaded
operating systems, which includes multi-threading,
preemptive scheduling with time slicing, I/O synchronization via
mutual exclusion, a standard network stack, and device drivers
o
MOS chooses a standard programming language: the entire
kernel and API are written in standard C. This design choice not only
almost eliminates the learning curve, but also accrues many of the
other benefits of a standard programming language, including
cross-platform support and reuse of a vast legacy code base. C also eases
development of cross-platform multimodal prototyping environments
on x86 PCs
Hardware Platform
–
the MANTIS hardware Nymph’s design was inspired by the Berkeley MICA and
MICA2 Mote architectures
–
the MANTIS Nymph is a single-board design, incorporating the micro-controller,
analog sensor ports, RF communication, EEPROM, and serial
ports on one dual-layer 3.5 x 5.5 cm printed circuit board
–
the Nymph is centered around the Atmel ATmega128(L) MCU, including
interfaces for two UARTs, an SPI bus, an I2C bus, and eight analog-to-digital
converter channels. It provides an additional 64 KB of EEPROM external
to the MCU, in addition to the 4 KB of EEPROM included in the MCU
–
the unit is powered either by batteries or by an AC adapter, and a set of three
on-board LEDs is included to aid in the debugging process. It is designed
to hold a 24 mm diameter lithium coin cell battery (CR2477), but any
battery in the range of 1.8 V to 3.6 V can be connected
–
in order to facilitate rapid prototyping in a research environment, the Nymph
has solderless plug connections for both analog and digital sensors,
which eliminates the need for an external sensor board in many applications
–
each connector provides lines for ground, power, and sensor signal,
allowing basic sensors such as photo sensors, or complex devices such
as infrared and ultrasound receivers, to be connected easily
–
the Chipcon CC1000 radio was chosen to handle wireless
communication. It supports four carrier frequency bands (315, 433, 868,
and 915 MHz) and allows frequency hopping, which is useful for multi-channel
communication. It is one of the lowest-power commercial radios
and allows MOS to optimize the radio to further reduce power
consumption
–
for additional modules, the Nymph includes a JTAG interface, which allows
the user to easily download code to the hardware. This addition
eliminates the need for a separate programming board, simplifying the
process of reprogramming the nodes while reducing the cost of the overall
system. As an added benefit, the JTAG port allows the user to single-step
through code on the MCU, and it also supports the remote shell
–
the Nymph uses one of the UARTs to supply a serial port (RS-232) for
connection to a PC, while the second one is used as an interface to an
optional GPS unit
–
a MAX3221 RS-232 serial chip is used, and it may be set to three different
power-saving modes: power-down, receive-only, and shutdown
Figure 6: MANTIS Nymph
Software Platform
–
MANTIS OS (MOS) adheres to a classical layered multi-threaded design
–
the top application and API layers expose a simple C API, which promotes
ease of use, cross-platform portability, and reuse of a large installed code
base
–
in the lower layers of MOS, the classical OS structures are adapted to achieve
a small memory footprint
System APIs
–
MANTIS provides comprehensive System APIs for I/O and system
interaction
–
the choice of C language API simplifies cross-platform support and the
development of a multimodal prototyping environment
–
since the MANTIS System API is preserved across both physical sensor
nodes and virtual sensor nodes running on x86 platforms, the same
C code developed for MANTIS sensor Nymphs with the Atmel MCU can be
compiled to run on x86 PCs with little or no alteration
Kernel and Scheduler
–
the design of the MOS kernel resembles that of classical UNIX-style schedulers
–
The services provided are a subset of POSIX threads, most notably
priority-based thread scheduling with round-robin semantics within a priority level
–
binary (mutex) and counting semaphores are also supported
–
the goal of the kernel design is to implement these familiar services
efficiently in the resource-constrained environment of a sensor node
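The scheduling policy above (priority levels, round-robin within a level) can be sketched as a pick-next routine over a small thread table. This is an illustration of the policy under assumed data structures, not the actual MOS kernel code; the table layout and field names are hypothetical.

```c
/* Priority-based scheduling with round-robin within a priority level:
 * always serve the highest non-empty level, rotating a cursor inside
 * that level so ready threads of equal priority take turns. */
#define NUM_PRIORITIES 3   /* 0 = highest priority */
#define MAX_THREADS    8

typedef struct { int ready; int priority; } thread_t;

static thread_t threads[MAX_THREADS];
static int last_run[NUM_PRIORITIES];   /* round-robin cursor per level */

int pick_next_thread(void)
{
    for (int p = 0; p < NUM_PRIORITIES; p++) {
        /* scan this level starting just past the thread last run at it */
        for (int i = 1; i <= MAX_THREADS; i++) {
            int t = (last_run[p] + i) % MAX_THREADS;
            if (threads[t].ready && threads[t].priority == p) {
                last_run[p] = t;
                return t;
            }
        }
    }
    return -1;  /* nothing ready: a real kernel would put the MCU to sleep */
}
```

Returning -1 when no thread is ready marks the point where an energy-aware kernel would drop the processor into a sleep mode.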
Network Stack
–
focused on efficient use of limited memory, flexibility, and convenience
–
implemented as one or more user-level threads
–
different layers can be implemented in different threads, or all layers in the
stack can be implemented in one thread
–
the tradeoff is between performance and flexibility
–
designed to minimize memory buffer allocation through layers
–
the data body of a packet is common to all layers within a thread
–
the headers of a packet are variably sized and are prepended to the single
data body
–
designed in a modular manner with standard APIs between layers,
thereby allowing developers to easily modify or replace layer modules
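The single-body-with-prepended-headers scheme above can be sketched as a buffer with reserved headroom: the body is written once, and each layer prepends its header in front of it without copying the body. Sizes and field names below are illustrative assumptions, not MOS structures.

```c
#include <stdint.h>
#include <string.h>

#define HEADROOM 32          /* space reserved for all layers' headers */
#define BODY_MAX 64

typedef struct {
    uint8_t buf[HEADROOM + BODY_MAX];
    int start;               /* offset of the first valid byte */
    int len;                 /* valid bytes from start */
} packet_t;

/* Place the data body once, after the reserved headroom. */
void packet_init(packet_t *p, const uint8_t *body, int body_len)
{
    p->start = HEADROOM;
    p->len = body_len;
    memcpy(p->buf + p->start, body, (size_t)body_len);
}

/* Each layer on the way down prepends its variably sized header in
 * front of the existing bytes; the body is never moved or copied. */
int packet_prepend(packet_t *p, const uint8_t *hdr, int hdr_len)
{
    if (hdr_len > p->start)
        return 0;            /* out of headroom */
    p->start -= hdr_len;
    p->len += hdr_len;
    memcpy(p->buf + p->start, hdr, (size_t)hdr_len);
    return 1;
}
```

One statically sized buffer per packet, rather than one allocation per layer, is what keeps the memory cost bounded on a node with a few kilobytes of RAM.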
Device Drivers
–
Adopts the traditional logical/physical partitioning in device
driver design for the hardware
–
The application developer need not interact with the hardware directly to
accomplish a given task
Figure 7: MANTIS OS Architecture
System Development
–
application developers need to be able to prototype and test applications
prior to distribution and physical deployment in the field
–
during deployment, in-situ sensor nodes need to be capable of being both
dynamically reprogrammed and remotely debugged
–
in order to facilitate these capabilities, MANTIS identifies and implements
three key advanced features for expert users of general-purpose sensor
systems
o
multimodal prototyping environment
o
dynamic reprogramming
o
remote shell and commander server
Multimodal Prototyping Environment
–
Provides a framework for prototyping diverse applications across
heterogeneous platforms
–
A key requirement of sensor systems is the need to provide a prototyping
environment to test sensor networking applications prior to deployment
–
Postponing testing of an application until after its deployment across a
distributed sensor network can incur severe consequences
–
the MANTIS prototyping environment extends beyond simulation to provide a
larger framework for developing network management and
visualization applications as virtual nodes within a MANTIS network
o
MANTIS enables an application developer to test
execution of the same C code first on virtual sensor nodes and later
on in-situ physical sensor nodes
o
Seamlessly integrates the virtual environment with the real deployed
network such that both virtual and physical nodes can co-exist and
communicate with each other in the prototyping environment
o
Permits a virtual node to leverage other APIs outside of the MANTIS
API; e.g., a virtual node with the MANTIS API could be realized as a
UNIX X Windows application that communicates with other rendering
or database APIs to build visualization and network management
applications
Figure 8: Multimodal prototyping integrates both virtual and physical sensor
nodes across heterogeneous x86 and Atmel sensor platforms
Dynamic Reprogramming
–
Sensor nodes should be remotely reconfigurable over a wireless multi-hop
network after being deployed in the field. Since sensor nodes may be
deployed in inaccessible areas and may scale to thousands of nodes, this
simplifies management of the sensor network
–
MOS supports dynamic reprogramming at several granularities: re-flashing
the entire OS, reprogramming a single thread, and changing variables
within a thread
–
Another useful feature would be the ability to remotely debug a running
thread. MOS provides a remote shell that enables a user to login and
inspect the sensor node’s memory
–
MOS includes two programming modes (a simpler and a more advanced one) in
order to overcome the difficulty of reprogramming the network
–
The simpler programming mode is similar to that used in many other
systems and involves a direct communication with a specific MANTIS node
–
On a Nymph, this would be accomplished via a serial port: the user simply
connects the node to a PC and opens the MANTIS shell
–
Upon reset, MOS enters a boot loader that checks for communication from
the shell. At this point, the node will accept a new code image, which is
downloaded from the PC over the direct communication line
–
From the shell, the user has the ability to inspect and modify the node’s
memory directly, as well as spawn threads and retrieve debugging
information, including thread status, stack fill, and other statistics from the OS
–
The boot loader transfers control to the MOS kernel on command from the
shell, or at startup if the shell is not present
–
The more advanced programming mode is used when a node is already
deployed, and does not require direct access to the node
–
The spectrum of dynamic reprogramming of in-situ sensor networks
ranges from fine grained reprogramming to complete reprogramming
–
MOS has a provision for reprogramming any portion of the node up to and
including the OS itself while the node is deployed in the field
–
This is accomplished through the MOS dynamic reprogramming interface
Remote Shell and Commander Server
–
MOS includes the MANTIS Command Server (MCS) which is
implemented as an application thread
–
From any device in the network equipped with a terminal, the user may
invoke the command server client (also referred to as the shell) and log in
to either a physical node (e.g., on a Nymph or Mica board) or a virtual
node running as a process on a PC
–
MCS listens on a network port for commands and replies with the results,
in a manner similar to RPC
–
The shell gains the ability to control a node remotely through MCS
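An RPC-style command server of this kind typically reduces to a lookup table mapping command names to handler functions. The sketch below illustrates that dispatch pattern only; the command names and handlers are hypothetical examples, not the actual MCS command set.

```c
#include <string.h>

/* Each handler takes the argument string and returns a status code
 * that would be sent back to the shell, RPC-style. */
typedef int (*cmd_handler_t)(const char *args);

static int cmd_threads(const char *args) { (void)args; return 0; /* would report the thread table */ }
static int cmd_peek(const char *args)    { (void)args; return 0; /* would read node data memory */ }

/* Dispatch table: command name -> handler. */
static const struct { const char *name; cmd_handler_t fn; } commands[] = {
    { "threads", cmd_threads },
    { "peek",    cmd_peek },
};

/* Look up a command received on the network port and run its handler.
 * Returns the handler's status, or -1 for an unknown command. */
int mcs_dispatch(const char *name, const char *args)
{
    for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++)
        if (strcmp(commands[i].name, name) == 0)
            return commands[i].fn(args);
    return -1;
}
```

Keeping the table `const` lets it live in flash rather than scarce RAM, and adding a shell command is just adding a row.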
–
The user may alter the node’s configuration settings, run or kill programs,
display the thread table and other OS data, inspect and modify the node’s
data memory, and call arbitrary user-defined functions
–
The shell is a powerful debugging tool, since it allows the user to examine
and modify the state of any node without requiring physical access to the
node
References
[Abrach+ 2003] H. Abrach, S. Bhatti, J. Carlson, H. Dai, J. Rose, A. Sheth, B. Shucker,
J. Deng, and R. Han, MANTIS: System Support for MultimodAl NeTworks of In-Situ
Sensors, 2nd ACM International Workshop on Wireless Sensor Networks and
Applications (WSNA 2003), September 2003.
[Akyildiz+ 2002] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, A Survey
on Sensor Networks, IEEE Communications Magazine, Vol. 40, No. 8, pp. 102-114,
August 2002.
[Elson+ 2002] J. Elson and K. Romer, Wireless Sensor Networks: A New Regime for
Time Synchronization, First Workshop on Hot Topics in Networks (Hotnets-I),
Princeton, USA, October 2002.
[Hill+ 2000] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and K. Pister, System
Architecture Directions for Networked Sensors, Architectural Support for
Programming Languages and Operating Systems (ASPLOS), 2000.
[TinyOS] TinyOS: a component-based OS for the networked sensor regime.