Tracking Motion Direction and Distance
With Pyroelectric IR Sensors
Piero Zappi, Elisabetta Farella, and Luca Benini, Fellow, IEEE
Abstract—Passive IR (PIR) sensors are excellent devices for
wireless sensor networks (WSN), being low-cost, low-power, and
presenting a small form factor. PIR sensors are widely used as a
simple, but reliable, presence trigger for alarms and automatic
lighting systems. However, the output of a PIR sensor depends on
several aspects beyond simple people presence, e.g., the distance
of the body from the sensor, direction of movement, and presence
of multiple people. In this paper, we present a feature extraction and sensor fusion technique that exploits a set of wireless
nodes equipped with PIR sensors to track people moving in a
hallway. Our approach has reduced computational and memory
requirements, thus it is well suited for digital systems with limited
resources, such as those available in sensor nodes. Using the proposed techniques, we were able to achieve 100% correct detection
of direction of movement and 83.49%–95.35% correct detection
of distance intervals.
Index Terms—Classifier, distance, passive IR (PIR), tracking.
I. INTRODUCTION
PYROELECTRIC IR (PIR) sensors belong to the class of thermal detectors. Thermal detectors can measure incident radiation by means of a change in their temperature. When an appropriate absorbing material is applied to the detector element surface, they can be made responsive over a selected range of wavelengths. PIR sensors are designed to detect human bodies, thus the wavelengths of interest are mainly in the range of the IR window at 8–14 μm, in which the IR emission of bodies at 37 °C also peaks.
Being low-cost, low-power, and providing a reliable indication of people presence, PIR sensors have achieved worldwide
diffusion. Furthermore, they can be manufactured with a reduced form factor that allows a large number of them to be unobtrusively integrated around us. Nowadays, many buildings include automatic light switching and surveillance systems based on a large number of PIRs scattered in different rooms.
Beyond simple presence, the output of a PIR sensor depends
on several characteristics of the body moving in its field of view
(FoV), such as direction of movement and distance of the body
from the sensor. This observation has motivated our effort in
developing a novel technique to extract these features. In particular, our objective is to implement a human tracking system based on a dense array of PIR sensors. Previous works demonstrated how such a system can be used to improve video surveillance systems [1] and preserve privacy [2].

Manuscript received May 19, 2009; revised September 18, 2009, November 09, 2009, and November 30, 2009; accepted December 14, 2009. Date of current version July 21, 2010. An earlier version of this paper was presented at the IEEE SENSORS 2008 Conference and was published in its proceedings. The associate editor coordinating the review of this paper and approving it for publication was Prof. Ralph Etienne-Cummings.
The authors are with the Department of Electronic Informatics and Systems, University of Bologna, 40123 Bologna, Italy (e-mail: piero.zappi@unibo.it; elisabetta.farella@unibo.it; luca.benini@unibo.it).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/JSEN.2009.2039792
In this paper, we present a technique to track people using an
array of PIR sensors distributed in the environment. This technique requires low computational power, is suitable for a parallel implementation, and is based only on low-cost, low-power
devices. Hence, it is well suited for implementation on wireless
sensor network (WSN) nodes, further reducing the obtrusiveness and cost since no wires are needed [3].
Our approach adopts a simple hierarchical structure. Several
autonomous clusters of PIR sensors cover the area of interest (AoI). Each cluster is made up of two nodes able to detect the direction of movement and classify the person's position within three
possible regions (close to one sensor, middle, and close to the
other). Within each cluster, the nodes are organized as a hierarchical data graph. Each node locally extracts a set of features and
sends them to a selected node (the cluster head) that performs
sensor fusion. This information is forwarded to the system that
monitors the AoI.
The rest of the paper is organized as follows. In Section II, we
introduce PIR sensors and their working principles. Section III reviews related work. Section IV describes the system and its hierarchical structure. Our technique to track people moving in the AoI is presented in
Sections V and VI. Finally, we present experimental results and
conclude the paper.
II. PIR SENSORS
Pyroelectricity is the electrical response of a polar, dielectric
material to a change in its temperature. A pyroelectric element
converts incident IR flux into an electrical output through two
steps: the absorbing layer transforms the radiation flux change
into a change in temperature and the pyroelectric element performs a thermal to electrical conversion [4].
Commercial-off-the-shelf (COTS) PIR sensors include two
sensitive elements placed in series with opposite polarization
(see Fig. 1). This configuration makes the sensor immune to
slow changes in background temperature and shortens the time needed for the output to settle once changes in the input radiation cease.
The PIR sensors are used in conjunction with Fresnel lenses
to augment and shape their FoV [5]. Fresnel lenses are good energy collectors that can be molded out of inexpensive plastic and
present a much more compact form factor with respect to normal
lenses. Typically, an array of Fresnel lenses is used to divide the
PIR sensor FoV into several, optically separated cones. The motivation is that the PIR elements detect only changes to incident
IR radiation. If a single lens is used, as a body moves through
the FoV of the PIR (especially if it covers a wide area), only negligible changes in input IR radiation will be sensed. On the other hand, when using multiple lenses, the body moves between different cones of view and is sensed for the whole traversal.

Fig. 1. Schematic of a typical COTS PIR. Two sensing elements are used in series with opposite polarization; the output is preamplified through a built-in FET transistor. The FoV of each sensing element is highlighted with shading. It is worth noting how, in proximity of the device, the two FoVs overlap.
III. RELATED WORK
The PIR sensors are widely used in surveillance systems [6]
and automatic light switching systems [7] as simple but reliable triggers. They also have shown promising capabilities as
low-cost camera enhancers in video surveillance systems. The
work of Rajgarhia et al. [2] uses PIR sensors in conjunction with
cameras to address privacy issues. PIR sensors are deployed in
private rooms, while cameras are placed in public areas. Human tracking
is performed by correlating information from the two systems.
That paper demonstrates the benefits of reducing camera deployment in favor of PIR sensors. In fact, a survey of 60 people highlights how motion sensors are considered less invasive of people's privacy than cameras. In Bai and Teng [8], the design of a
board for home surveillance is proposed. The board includes an
ARM processor together with a Web camera and a PIR sensor.
The latter triggers the Web camera when an intruder is present, in order to capture a snapshot and send it to a remote server.
Cucchiara et al. [1] propose a technique to fuse information
from a dense network of PIR sensors with the video streaming
from a set of cameras to improve consistent labeling of people
moving within the AoI. PIR sensors detect people presence and
their direction of movement, and these features help distinguish
reflections and changes of movement behind obstacles.
Other works present different approaches to perform people
tracking using only PIR sensors. Since PIRs are sensitive to
changes in incident radiation, Hashimoto et al. use an array of
sensors in conjunction with a chopper wheel [9]. The wheel has
the same temperature as the background; therefore, this module
produces an output only in the presence of a body whose temperature differs from that of the background, and behaves as a thermal
imager. Gopinathan et al. [10] developed a pyroelectric motion
tracking system based on coded apertures. Four PIR detectors
are shaded using a frame with a set of apertures designed to
Fig. 2. Schematic view of the system architecture. Several clusters of nodes
monitor a hallway. All the nodes of the network collect and preprocess data
from PIR sensor. Within each cluster, the cluster head collects and fuses the data
preprocessed by the nodes of the cluster. Then, it forwards the local information
to the SM that supervises the status of the network.
modulate PIR visibility over a 1.6 × 1.6 m area. Fifteen cells can be discriminated by measuring which PIRs sense the body
presence and which do not. The work of Song et al. [11] analyzes the performance and the applicability of PIR sensors for
security systems and proposes a region-based human tracking
algorithm. The authors define a deployment strategy based on
overlapped FoVs that identify different regions of the AoI. This
technique has been implemented and tested in a real environment, and the authors claim high indoor localization accuracy.
Hao et al. have developed a wireless pyroelectric sensor system
used to track people and as a biometric system. The system
is made up of a number of modules, each embedding several pyroelectric detectors. The sensors' FoVs are modulated using plastic packages. In [12], each module includes eight PIR sensors that together cover 360°. The module samples, filters, and
digitalizes the data from the PIR sensors in order to deduce the
angular position of the body with respect to a local coordinate
system. Four modules are deployed in a room to track single
people movements. A similar approach using modules with
different form factor and number of PIR sensors is presented in
[13]. The use of the same module to track multiple people is
presented in [14].
The work presented in this paper falls into the latter group.
A common characteristic of the reviewed state of the art is that
these works use the PIR sensor as a digital indication of presence/absence. For example, in the work of Hao et al. [12], the
angular position is obtained by observing which sensor output
is above a threshold (digitalization step). In contrast to this approach our technique explores the use of analog features, such
as signal amplitude and duration, that can be extracted from
a COTS PIR sensor by a low-power, low-cost wireless sensor
node. Furthermore, our system has a modular structure that supports deployment flexibility by using a variable number of independent building blocks. Finally, a driving factor for our algorithm development has been to limit power consumption. In
contrast to the other works where this is achieved only because
of the low power consumption of the PIR detectors, here, three
strategies have been adopted: 1) use of algorithms with low
computational requirements adequate for resource-constrained
low-power hardware; 2) distributed processing among nodes to
reduce wireless communication; and 3) use of the PIR sensor
also as a trigger to wake up the sensor node from an ultralow-power state.

Fig. 3. FoV of the PIR sensor. The left picture has been taken from the IS-215T datasheet [20]. The right picture shows the modified FoV, where only the cones associated with the central lens are left. Note that the presence of two cones is due to the two PIR sensitive elements.
IV. SYSTEM DESCRIPTION
A WSN developer must deal with several issues related to the
specific characteristics of a WSN. Power consumption is one of the most critical ones, since battery scaling is the main limit to
sensor node miniaturization [15]. Furthermore, battery replacement in many cases is either impossible or unfeasible. Thus, efficient energy management is essential.
The simplest way to reduce power consumption is to use
low-power devices and passive sensors (such as PIR sensors).
Low-power microcontrollers usually present low computational
power and memory capabilities, typically less than 10 kB of
RAM and less than 512 kB of program memory. For this reason,
the algorithms developed for these nodes should present limited complexity. Note that, although single-node capabilities are
limited, the network as a whole, being composed of a large number of nodes, can provide enough computational power to perform complex algorithms. Thus, algorithms with a high degree of parallelism are desired. Usually, power consumption in wireless
sensor nodes has peaks when the radio is active. Thus, wireless
communication should be limited. This pushes for approaches
where sensor data are locally processed and aggregated on their way to the final user rather than streamed to a central base
station where it is processed.
Starting from these considerations, we developed a system
with a hierarchical architecture to monitor and track people
passing. A scheme of our approach is presented in Fig. 2. In
this scheme, we can see how the AoI, which in our scenario is
a hallway, is covered by several nodes organized in clusters.
Each cluster is made up of two nodes placed on opposite walls
and facing each other. One node of the cluster has the role of
cluster head. All nodes of the network process incoming data
from their sensors and extract a set of relevant features (see
following sections). These features are sent to the cluster head
that fuses them and extracts information on people moving in
its FoV. This indication is sent to the system manager (SM)
that uses it together with the ones from the other cluster heads
to track people movements.
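As an illustration of the data flow just described, the sketch below shows one possible layout for the per-passage report a node could send to its cluster head. The field names, widths, and message framing are editorial assumptions, not details taken from the paper.

```c
#include <stdint.h>

/* Hypothetical per-passage report sent from a PIR node to its cluster head.
 * Only extracted features travel over the radio; raw samples stay local.    */
typedef struct {
    uint8_t  node_id;      /* which node of the cluster observed the passage */
    uint8_t  direction;    /* sign of the first output peak (see Section V)  */
    uint16_t duration_ms;  /* passage duration: first crossing to settling   */
    uint16_t amplitude_mv; /* peak-to-peak output amplitude                  */
} passage_report_t;

/* The cluster head pairs the two reports belonging to the same passage,
 * estimates the distance range (Section VI), and forwards direction and
 * range to the system manager (SM).                                          */
```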
A. Node Overview
The wireless sensor node we developed is built on top of
a Zigbee developer board (SARD) [16], which includes an
8-bit microcontroller (GT60 of the 8-bit family HCS08) and a
Zigbee compliant transceiver (MC13192). Zigbee is a low-cost,
limited-bandwidth, low-power, wireless protocol developed
for WSN [17]. The GT60 embeds 60 kB of flash memory for
program and data, 4 kB of RAM and operates at 16 MHz.
Our prototype PIR sensor board has been designed using
COTS components. The detector is a Murata IRA-E710 [18]
and the signal conditioning circuit is a double stage amplifier,
which achieves a total gain of about 1400 and operates as a
bandpass filter between 0.57 and 11 Hz. This is a suitable
range for detecting moving people [19]. Furthermore, the circuit biases the output voltage at a fixed reference level when no movement is detected.
The conditioning circuit board also includes a low-power voltage regulator, used to decouple the power supply lines from the transceiver ones, and a comparator, used to generate a wake-up signal when the board is in a low-power state. The sensor and
its conditioning circuits are hosted in the package of a COTS
PIR presence detector, IS-215T [20].
Typically, the array of Fresnel lenses produces an FoV that
spans up to 110°–120° on the horizontal plane and 90° on the vertical one. This approach is not suitable when more than one person is moving in the AoI, as it does not allow distinguishing
between two people moving in separate areas covered by two
PIRs and a single person moving in the area where the FoVs of
the two sensors overlap.
In contrast, our approach relies on sensors whose Fresnel
lenses produce an FoV that, according to our measurements, spans only 20° on the horizontal plane. This shape has been obtained
by shading with a metallic tape the package of the IS-215T and
leaving only the central lens uncovered, as shown in Fig. 3. As
a consequence, each PIR's FoV covers a thin slice of the AoI, and we assume that only one person at a time can stand in each slice.
The PIRs in a cluster have overlapping FoVs, thus each cluster is responsible for a small part of the AoI. Information from different slices is correlated at the upper level by the SM.
Fig. 4. Output of the PIR sensor when a single lens is used and a person moves back and forth in front of it.

In the case of isolated people, each passage can be segmented using two thresholds placed above and below the bias level. The start of the passage is detected when one of the thresholds is crossed, the
end when the PIR output remains between the thresholds for a
certain time (settle-down time). According to results from previous work [21], we placed the thresholds 300 mV above and below the bias level. The comparator on the PIR board generates a wake-up signal to the MCU when one of these thresholds is crossed. As a consequence, we can keep the whole system in an ultralow-power state when no passages are detected.
The choice of the threshold and of the settle-down time is a tradeoff between node sensitivity and the capability to distinguish
subsequent passages. In particular, as the distance of the body
increases, amplitude decreases. Thus, a high threshold may
result in the loss of passage detections. Statistics collected on a
previous dataset of passages performed at increasing distances
between 1 and 14 m showed that with a threshold of 300 mV,
passages up to 8 m can be detected. On the other hand, with
lower thresholds the PIR output requires a longer time to
settle down between the two thresholds; therefore, subsequent
passages of more than one person may be confused if too close.
The settle-down time is necessary to avoid false positives. In fact, as can be seen from Fig. 4, after a passage, during the settle-down time, the output presents an overshoot that may cross the threshold and be considered as another peak.
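The segmentation just described can be summarized by a small state machine. The sketch below is an editorial illustration under the assumptions stated in the comments (the bias level and settle-down time are placeholders; the 300 mV threshold is the one quoted above); it is not the firmware run on the nodes.

```c
#include <stdint.h>
#include <stdbool.h>

#define BIAS_MV        1650  /* placeholder quiescent output level (mV)       */
#define THRESHOLD_MV    300  /* thresholds placed at bias +/- 300 mV          */
#define SETTLE_TICKS    100  /* placeholder settle-down time, in samples      */

typedef struct {
    bool     active;          /* true while a passage is being segmented      */
    uint16_t quiet_ticks;     /* consecutive samples between the thresholds   */
    uint16_t duration_ticks;  /* passage duration, in samples                 */
    int16_t  min_mv, max_mv;  /* running extrema for peak-to-peak amplitude   */
    int8_t   first_peak_sign; /* +1/-1: which threshold was crossed first     */
} passage_t;

/* Call once per ADC sample of the PIR output (in mV).
 * Returns true when a complete passage has just been segmented.              */
static bool passage_update(passage_t *p, int16_t sample_mv)
{
    int16_t dev = sample_mv - BIAS_MV;
    bool outside = (dev > THRESHOLD_MV) || (dev < -THRESHOLD_MV);

    if (!p->active) {
        if (outside) {                        /* start of a passage           */
            p->active = true;
            p->quiet_ticks = 0;
            p->duration_ticks = 0;
            p->min_mv = p->max_mv = sample_mv;
            p->first_peak_sign = (dev > 0) ? 1 : -1;
        }
        return false;
    }

    p->duration_ticks++;
    if (sample_mv < p->min_mv) p->min_mv = sample_mv;
    if (sample_mv > p->max_mv) p->max_mv = sample_mv;

    p->quiet_ticks = outside ? 0 : (uint16_t)(p->quiet_ticks + 1);
    if (p->quiet_ticks >= SETTLE_TICKS) {     /* output settled: passage over */
        p->active = false;
        return true;   /* duration, peak-to-peak amplitude, and first-peak
                          sign are now available as features                  */
    }
    return false;
}
```

In a deployment, the comparator described above would keep the MCU asleep until the first threshold crossing, so a routine like this only runs while a passage is in progress.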
V. DIRECTION OF MOVEMENT
In the presence of a single lens, the passage of a body results
in a PIR output signal made up of two peaks, one positive and
one negative (see Fig. 4). The reason is that the two sensing elements detect the body in sequence. Being placed in series with opposite polarization, each of them produces a peak of opposite sign.
As can be seen in Fig. 4, the direction of movement can be easily detected using a single PIR oriented with its FoV orthogonal to the body's direction of movement, by looking at the sign of the first peak. This extremely simple task can be easily implemented on an 8-bit microcontroller.
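Continuing the sketch above, mapping the sign of the first peak to a direction is a one-line decision; the mapping between sign and physical direction depends on how the node is mounted and is assumed arbitrarily here.

```c
typedef enum { DIR_LEFT_TO_RIGHT, DIR_RIGHT_TO_LEFT } direction_t;

/* first_peak_sign is +1 or -1, as recorded when the passage started. */
static direction_t direction_from_first_peak(int first_peak_sign)
{
    return (first_peak_sign > 0) ? DIR_LEFT_TO_RIGHT : DIR_RIGHT_TO_LEFT;
}
```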
VI. DISTANCE OF MOVEMENT
At this stage of development, the PIR network does not
need to provide a precise estimate of the body position. For
this reason, we want to identify whether a body is moving
within one of three ranges from the sensors of a cluster: close to the first sensor, in the middle, or close to the second sensor. These ranges have been chosen since they are representative of an indoor scenario (i.e., monitoring people moving within a hallway).

Fig. 5. Output of a PIR sensor in case of passages at different distances.

Fig. 5 shows
an example of PIR output as a function of distance when a
person is walking within these three ranges. From this figure,
we see how signal duration (calculated as the time between the
instant when the PIR output exceeds one of the two thresholds
and the instant when it settles between the thresholds for the settle-down time)
increases with distance while signal amplitude (calculated as
the difference between the maximum and minimum value of
the PIR output) is at a maximum for passages in the middle
distance.
The increase in signal duration with distance is due to the conic shape of the FoV. In fact, the PIR is mostly sensitive to bodies that enter and leave its FoV, and the time window defined by these two instants increases with the distance of the walking person from the sensors.
Considering the output signal, the peak-to-peak amplitude presents a maximum because this feature depends on two contributions: the amount of incident radiation and the overlap of the FoVs of the sensitive elements. Far from the sensor, the
amplitude decreases with distance because farther bodies result
in a smaller change in incident radiation. In proximity of the sensor, instead, there is a region where the FoV cones of the two sensitive elements overlap (see Fig. 1). As a consequence, in this area, the contribution of one element compensates the other, resulting in a smaller signal amplitude.
According to the considerations above, a model of the PIR
sensor could be built to relate signal amplitude and duration to
the body distance. However, this approach is not suitable due
to the high variability of the chosen features for movements
within the same distance range. In Table I, we report the average, maximum, and minimum values for both signal duration
and amplitude over 400 passages performed at each of these
three distances. From this table, we can see how these two features can give an indication of the distance of the body, but they
do not allow its clear identification. This is clearer when looking at Fig. 6, where we plotted the duration-amplitude pairs at different distances. From this plot, we can see how passages in two of the three ranges result in similar values of amplitude and duration, making them almost impossible to distinguish.
TABLE I
AVERAGE, MAXIMUM, AND MINIMUM VALUE OF PIR OUTPUT AMPLITUDE,
AND DURATION AT DIFFERENT PASSAGE DISTANCES
Fig. 7. Cluster used to detect distance is made up of two PIR sensors that autonomously monitor a slice of the AoI. The space between them is divided into
three slices.
Fig. 6. PIR output amplitude and duration as a function of distance.
This variability is due to the fact that, even if the tester was
told to walk within the selected ranges of distance, he was not
forced to do it exactly on the same line and at a fixed speed. The
latter parameter, in particular, influences both the signal duration
(since a faster crossing of the FoV requires less time) and amplitude (since the preamplifier integrated within the COTS PIR case acts as a low-pass filter [22] and faster bodies result in an output with spectral components at higher frequencies). Furthermore, it has
been shown that the analog output of a PIR is influenced by the
gait of people [23].
To improve our accuracy and isolate the contribution related
to body distance, we use two PIR sensors placed on opposite
walls and facing each other, as shown in Fig. 7. With this setup,
we expect to increase the performance of our detection since
the effect of body speed and gait will produce similar changes
in both sensors output, while the only difference will be in the
body distance.
When a crossing is detected, each sensor calculates its duration and the PIR output amplitude. Only these two features are
wirelessly sent to the cluster head (which can be implemented on either of the two sensor nodes or on a third node) to evaluate the user's crossing distance range, thus reducing the power consumption
related to wireless communication and the bandwidth required.
Fig. 8. Relative feature vectors as a function of position.

To estimate the crossing distance, we tested two possible alternatives. The first uses the four features collected from the two PIRs to
build a 4-D feature vector (raw features case). In the second, the
cluster head calculates the ratio between homogeneous features
(relative features case). In the latter case, each transit results in
a two-element feature vector, thus it is less complex and has
lower memory requirements.
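For illustration, the two feature representations can be built by the cluster head as in the sketch below; the names are assumed (a and b denote the reports of the two nodes of the cluster), and the division-by-zero guard is our addition.

```c
typedef struct { float duration; float amplitude; } node_features_t;

/* Raw features: 4-D vector [dur_a, amp_a, dur_b, amp_b].                     */
static void build_raw(const node_features_t *a, const node_features_t *b,
                      float out[4])
{
    out[0] = a->duration;  out[1] = a->amplitude;
    out[2] = b->duration;  out[3] = b->amplitude;
}

/* Relative features: 2-D vector of ratios between homogeneous features.      */
static int build_relative(const node_features_t *a, const node_features_t *b,
                          float out[2])
{
    if (b->duration == 0.0f || b->amplitude == 0.0f)
        return -1;                   /* degenerate report: skip this transit  */
    out[0] = a->duration  / b->duration;
    out[1] = a->amplitude / b->amplitude;
    return 0;
}
```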
In Fig. 8, we plotted the vectors of features for a subset of
samples from passages at different distances when relative features are used. As can be seen from this plot, the three classes
are better separated than in Fig. 6, yet it is not possible to define a distinct, nonoverlapping region for each range of distances; therefore, we decided to rely on a classifier to increase the recognition ratio.
A. Classifiers
We tested and compared the use of three supervised classifiers: Naïve Bayes, support vector machines (SVM), and k-Nearest Neighbor (k-NN). Classification of new instances is a lightweight task that can be implemented in real time on
low-cost, low-power devices, thus allowing distributed implementation through the sensor network.
1) Naïve Bayes: The Naïve Bayes classifier is a simple probabilistic classifier based on Bayes theorem, and the assumption
that input features are independent.
Authorized licensed use limited to: Universita degli Studi di Bologna. Downloaded on July 20,2010 at 13:08:50 UTC from IEEE Xplore. Restrictions apply.
ZAPPI et al.: TRACKING MOTION DIRECTION AND DISTANCE WITH PYROELECTRIC IR SENSORS
Using the Bayes theorem, the classifier calculates the posterior probability of all classes given the input features. A decision
rule selects the output class: in this paper, we assign the instance to the class with the highest posterior probability.
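A minimal sketch of this decision rule is given below, assuming Gaussian class-conditional densities for the numeric features (as Weka's NaiveBayesSimple does) and per-class means, variances, and priors estimated offline; it is an illustration, not the code run on the nodes.

```c
#include <math.h>

#define NB_CLASSES  3   /* close to sensor 1, middle, close to sensor 2       */
#define NB_DIM      2   /* e.g., the two relative features                    */

typedef struct {
    float prior;             /* P(class)                                      */
    float mean[NB_DIM];
    float var[NB_DIM];
} nb_class_t;

static int naive_bayes_classify(const nb_class_t model[NB_CLASSES],
                                const float x[NB_DIM])
{
    int best = 0;
    float best_score = -INFINITY;
    for (int c = 0; c < NB_CLASSES; c++) {
        /* log prior plus the sum of log Gaussian likelihoods; the constant
           term shared by all classes is dropped                              */
        float score = logf(model[c].prior);
        for (int i = 0; i < NB_DIM; i++) {
            float d = x[i] - model[c].mean[i];
            score += -0.5f * (logf(model[c].var[i]) + d * d / model[c].var[i]);
        }
        if (score > best_score) { best_score = score; best = c; }
    }
    return best;             /* class with the highest posterior              */
}
```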
2) Support Vector Machines: SVMs belong to the class of linear discriminant classifiers. Such classifiers use discriminant functions that are a combination (either linear or nonlinear) of
the input vectors’ components. Geometrically, a discriminant
function defines a hyperplane that separates two classes [24].
Several solutions have been proposed to deal also with nonseparable data.
SVMs use a set of kernel functions to preprocess the input vectors and represent them in a higher dimensional space where they can be separated more easily [25]. The training phase looks for the support vectors, which are the (transformed) training instances closest to the separating hyperplane and are used to build the hyperplane for the classification.
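For a linear kernel the support vectors can be folded into a single weight vector, so the decision step reduces to a dot product, which is what keeps the memory cost of the linear SVM in Table IV small. The sketch below shows a single binary decision; handling the three distance classes would combine several such classifiers (e.g., by pairwise voting), and all names here are illustrative.

```c
#define SVM_DIM 2   /* e.g., the two relative features */

typedef struct { float w[SVM_DIM]; float b; } linear_svm_t;

/* Returns +1 or -1 for the two classes separated by this hyperplane. */
static int svm_decide(const linear_svm_t *m, const float x[SVM_DIM])
{
    float s = m->b;
    for (int i = 0; i < SVM_DIM; i++)
        s += m->w[i] * x[i];
    return (s >= 0.0f) ? 1 : -1;
}
```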
3) k-Nearest Neighbor (k-NN): Given a set of reference instances, k-NN assigns a new pattern to the class most represented among its k closest reference instances [26]. The k-NN training phase is simply the collection of a set of reference instances from each class.
The drawback of this approach is that its complexity and
memory cost increase with reference dataset dimensions, which
may be relatively large. Moreover, the accuracy of the algorithm
can be severely limited by noisy training instances, especially if k is small.
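A sketch of the k-NN decision step follows; the stored reference set is exactly the memory cost discussed in Section VII-B. The sizes below are placeholders and the code is an editorial illustration, not the IBk implementation used for the evaluation.

```c
#include <float.h>

#define KNN_DIM      2    /* feature vector length                            */
#define KNN_K        3    /* e.g., the 3-NN configuration discussed later     */
#define KNN_REFS     300  /* number of stored reference instances             */
#define KNN_CLASSES  3

typedef struct { float x[KNN_DIM]; unsigned char label; } knn_ref_t;

static int knn_classify(const knn_ref_t refs[KNN_REFS], const float q[KNN_DIM])
{
    float best_d[KNN_K];
    unsigned char best_l[KNN_K];
    for (int k = 0; k < KNN_K; k++) { best_d[k] = FLT_MAX; best_l[k] = 0; }

    /* keep the K smallest squared Euclidean distances (the ranking does not
       change if the square root is omitted)                                   */
    for (int r = 0; r < KNN_REFS; r++) {
        float d = 0.0f;
        for (int i = 0; i < KNN_DIM; i++) {
            float t = q[i] - refs[r].x[i];
            d += t * t;
        }
        for (int k = 0; k < KNN_K; k++) {
            if (d < best_d[k]) {           /* insert, shifting worse entries  */
                for (int j = KNN_K - 1; j > k; j--) {
                    best_d[j] = best_d[j - 1];
                    best_l[j] = best_l[j - 1];
                }
                best_d[k] = d;
                best_l[k] = refs[r].label;
                break;
            }
        }
    }

    /* majority vote among the K nearest labels (labels are 0..KNN_CLASSES-1) */
    int votes[KNN_CLASSES] = {0}, best_c = 0;
    for (int k = 0; k < KNN_K; k++) votes[best_l[k]]++;
    for (int c = 1; c < KNN_CLASSES; c++)
        if (votes[c] > votes[best_c]) best_c = c;
    return best_c;
}
```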
TABLE II
CORRECT CLASSIFICATION RATIO WHEN RAW FEATURES AND
RELATIVE FEATURES ARE USED
TABLE III
CLASSIFIERS COMPUTATIONAL EFFORT TO PERFORM THE CLASSIFICATION OF A
SINGLE INSTANCE AND MEMORY COST (NUMBER OF FLOAT) TO IMPLEMENT
THE CLASSIFIER WHEN RAW FEATURES ARE USED
N = 153, N = 129 AND T = 300
TABLE IV
CLASSIFIERS COMPUTATIONAL COST TO PERFORM THE CLASSIFICATION OF A SINGLE INSTANCE AND MEMORY COST (NUMBER OF DOUBLES) TO IMPLEMENT THE CLASSIFIER WHEN RELATIVE FEATURES ARE USED
N = 257, N = 235 AND T = 300
VII. TEST AND RESULTS
To validate our approach, we recorded about 200 passages
for each of the three classes that we want to recognize. Samples
have been collected and processed on a PC to obtain reliable
data and separate the problem of distance and direction estimate
from that of wireless communication and network stability.
A. Presence and Direction Estimate
On the collected dataset, using the proposed thresholds, we
achieved 100% correct detection of passages and direction of
movement.
This easy task is performed by a single analog-to-digital (ADC) conversion once the MCU has been woken up; the conversion reveals the direction of the first peak in the PIR output. This
information can be sent immediately after the beginning of a
passage, however, to reduce power consumption it is included
in the message with the distance estimate (see Section VII-B).
Once the end of a passage is detected, the node can enter a low-power state to save energy.
Previous work has shown how the number of people walking
in a row can be detected with a single PIR [21]. This information
can be extracted in this setup as well; however, the close movement of another person alters the signal duration and prevents the estimation of the distance.
B. Distance Estimate
In order to test the selected classification algorithms, we used
the Waikato Environment for Knowledge Analysis software developed at the University of Waikato [27]. The algorithms used
are: NaiveBayesSimple for Naïve Bayes, SMO with a polynomial kernel for SVM, and IBk for k-NN. To evaluate the results, we used fourfold cross-validation. This technique divides the available instances from each class into four groups (folds); three
of them are used to train the classifier and one to validate it.
The training and validation steps are repeated four times, each
one using a different fold for validation. As a consequence, the
results presented in this section are drawn from a validation
set made up of all available instances. We compared the proposed classifiers using both raw features and relative features.
In Table II, we present the correct classification ratio (CCR),
which is the ratio between the number of correctly classified instances and the total number of instances presented to the classifier. Tables III and IV present the computational complexity
and memory cost of the different classifiers when using both
the proposed features (raw and relative).
The results presented in Table II highlight how the CCR increases when using raw features. The classifier that benefits most from raw features is the quadratic SVM. However, when using raw features, the complexity and memory cost increase too (see Tables III and IV).
A deeper understanding of the classification performance can be gained by looking at the classifier confusion matrix.
Table V presents, as an example, the confusion matrix when
using Naïve Bayes classifier. By looking at the matrix, we can
see how instances from the classes close to 1 and close to 2 are never confused, indicating limited uncertainty in the position estimate. Similar findings hold for the other classifiers.

TABLE V
NAÏVE BAYES CLASSIFIER'S CONFUSION MATRIX

TABLE VI
COMPUTATIONAL COST OF USED FLOATING POINT OPERATIONS

TABLE VII
PROTOTYPE POWER CONSUMPTION FOR DIFFERENT OPERATING MODES (3.3 V OPERATING VOLTAGE)
Computational and memory costs are important factors, since
we are dealing with low-cost devices that embed few kilobytes
of program memory and RAM. According to the specification
presented in Section IV-A and considering that the Zigbee protocol and the device libraries that we use require 47 kB of flash
memory and 2.6 kB of RAM, only 13 kB of flash memory and
1.4 kB of RAM are available to implement the classifier. For
this reason, classifiers such as the quadratic and cubic SVM and k-NN may not be the best choice if the node must also perform other tasks (e.g., if it embeds other sensors). Moreover,
the microcontroller does not have a floating point coprocessor.
Thus, floating point operations should be emulated in fixed point.
An estimate of the performance of a floating point emulator designed for 8-bit microcontrollers from Freescale is presented in Table VI [28]. For example, if we implement a 3-NN classifier with raw features, we need to store 1200 doubles (4800 Bytes) and perform 2100 sums, 1200 multiplications, and 300 square roots, for a total of 9 852 000 CPU cycles. In contrast,
if we implement a linear SVM with relative features we need to
store six doubles (24 Bytes) and perform six sums and six multiplications, for a total of 20 094 CPU cycles. Even if both solutions can be implemented on the GT60 microcontroller, a careful evaluation must be carried out if devices with lower memory and computational capabilities are used.
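As a back-of-the-envelope check of the 3-NN figures quoted above, assuming 300 stored reference instances of four features each, with every value stored in 4 bytes (an assumption consistent with the 4800-Byte figure):

```c
/* Each reference instance costs 4 subtractions + 3 additions, 4 squarings,
 * and 1 square root when computing its Euclidean distance to the query.     */
enum {
    REF_INSTANCES = 300,
    RAW_FEATURES  = 4,

    STORED_VALUES = REF_INSTANCES * RAW_FEATURES,   /* 1200              */
    STORED_BYTES  = STORED_VALUES * 4,              /* 4800 Bytes        */

    SUMS  = REF_INSTANCES * (RAW_FEATURES + 3),     /* 2100 sums         */
    MULTS = REF_INSTANCES * RAW_FEATURES,           /* 1200 multiplies   */
    SQRTS = REF_INSTANCES                           /* 300 square roots  */
};
```

These counts match the operation counts quoted above; the 300-instance reference set size is an assumption consistent with them.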
The CPU computational effort needed to extract the required
features (amplitude and duration) is limited. As a passage is detected (rising edge), a timer is started; it generates an interrupt at every ADC sampling period used in our experiments. When a timer interrupt occurs, the PIR output is sampled and the timer is restarted. The PIR sample is
used to update the maximum peak-to-peak amplitude and to detect the end of the passage. This task requires only a few comparisons and assignments and can be executed in parallel with the classification. Therefore, the only limitation of our setup is given by the passage duration, which, according to Table I, is on the order of a few seconds. Once the passage is over, in the worst case (9 852 000 CPU cycles at 16 MHz), the classification
output is computed in 0.616 s. This information is forwarded
to the SM, which can correlate it with previous ones and with inputs from other systems that may be deployed in the AoI, such as cameras or RFIDs. Note that the time needed for classification is less than the settle-down time, which is the theoretical minimum passage duration; thus, there is no risk that classification tasks overlap.

C. Power Consumption
As can be seen from Table VII, the power consumption of the
node is maximum when the RF module is active. Distributed
computing reduces radio use to a single message for each passage. Furthermore, local classification of the body position reduces the complexity of the software running on the SM, thus
improving system scalability.
To send a wireless message, a Zigbee radio should be turned
on for 30 ms. In a scenario where an average of 60 passages
per hour occurs, each passage takes 3 s, and 0.616 s are needed
to perform a classification. A node powered with a 1500 mA h
Li-ion battery can operate for 1014 h as a cluster head and 1083
h as the other node of the cluster.
In the same scenario, without the PIR trigger, a node lifetime
would be 179.3 h as a cluster head and 179.7 h as the other node.
Finally, notice that the major contribution to power consumption
in the OFF state is related to the SARD development board. In a
custom design where all extra hardware is not present, the power
consumption in a deep-sleep state is driven by power consumption of the circuits on the PIR board. If we assume 0.66 mW
power consumption in the OFF state, a node lifetime is 2063 h
for a cluster head and 2400 h for the other node of the cluster.
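The lifetime figures above follow from a duty-cycle estimate of the average power drawn across operating modes. The sketch below reproduces only the structure of such an estimate; the per-mode power values are placeholders (Table VII is not reproduced here), apart from the 0.66 mW deep-sleep figure of the custom-design case mentioned above, so the printed result is indicative only.

```c
#include <stdio.h>

int main(void)
{
    /* placeholder per-mode power consumption, in mW (see Table VII)         */
    const double p_off_mw    = 0.66;   /* deep sleep, custom-design case     */
    const double p_active_mw = 50.0;   /* placeholder: MCU awake             */
    const double p_radio_mw  = 100.0;  /* placeholder: radio transmitting    */

    /* scenario used in the text: 60 passages/hour, 3 s per passage,         */
    /* 0.616 s of classification, 30 ms of radio-on time per message         */
    const double passages_per_hour = 60.0;
    const double t_passage_s = 3.0;
    const double t_class_s   = 0.616;
    const double t_radio_s   = 0.030;

    double t_awake = passages_per_hour * (t_passage_s + t_class_s);
    double t_radio = passages_per_hour * t_radio_s;
    double t_off   = 3600.0 - t_awake - t_radio;

    double avg_mw = (p_active_mw * t_awake + p_radio_mw * t_radio
                     + p_off_mw * t_off) / 3600.0;

    /* 1500 mAh at 3.3 V expressed in mWh, divided by the average power      */
    double lifetime_h = (1500.0 * 3.3) / avg_mw;
    printf("estimated lifetime: %.0f h\n", lifetime_h);
    return 0;
}
```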
VIII. DISCUSSION
This paper demonstrates how low-power, low-cost devices
can provide rough, yet useful, information about the movements of people within smart environments. Simple COTS PIR
sensors connected with low-cost wireless sensor nodes provide
enough information to detect the position of a person in a small
part of the AoI. A potential application can exploit the information from a dense mesh of PIR sensors to build statistics on
movement of the people working or living within the environment for an efficient management of lighting or HVAC systems.
Furthermore, these statistics can be exploited to detect unusual
or dangerous behaviors and trigger an alarm.
The proposed architecture has not been developed as an alternative to video surveillance systems, but as a complement
of it. In fact, a rough estimate of people movements in the environment can be sufficient in several scenarios. For example,
where privacy issues and cost are driving factors, or where cameras cannot be deployed in the environment or cover just a portion of the whole AoI.
The current prototype uses the optics of a COTS sensor in conjunction with a board designed by us. In this case, inaccurate
alignment between the lenses and the sensor introduces noise
in the sensor reading and in the shape of the FoV. This translates into more variability in the sensor reading, especially between different prototypes, consequently decreasing classifier
performance.
IX. CONCLUSION
Low-cost, low-power PIR sensors are used in surveillance
and automatic light switching applications because of their
ability to provide a reliable indication of people presence. The
output of a PIR sensor depends on several characteristics of
the body moving in its FoV. In this paper, we show how we
can perform people tracking, using an array of PIR sensors
scattered in the environment.
Our approach relies on clusters made up of two PIR sensors facing each other. Each sensor locally extracts two features: passage duration and output amplitude. These features
are sent to the cluster head node that uses a classifier to estimate if the person is moving close to one sensor, in the middle
or close to the second sensor. Local position knowledge is forwarded to the SM that tracks people position in the environment. This architecture distributes the computation through the
network and minimizes wireless communication, thus is well
suited for energy constrained WSN. We tested two alternative
sets of features: raw features (output amplitude and passage duration from the two PIRs are used to build a four-element feature vector) and relative features (the cluster head computes the
ratio between homogeneous features from the two PIR sensors).
Using raw features, we achieved higher classification performance (from 85.90% to 95.35%); however, a classifier that uses
such features requires higher computational power and memory
than in the case of relative features. On the other hand, relative features achieve lower classification accuracy (from 83.49%
to 93.75%) but have more relaxed computational and memory
cost.
The PIR-based tracking system can be integrated within
a video surveillance system to provide a coarse indication
of people movements. This contribution allows privacy to be preserved or power to be saved (in the case of wireless video nodes), since cameras can be turned on only when more information on the people movements is required.
REFERENCES
[1] R. Cucchiara, A. Prati, R. Vezzani, L. Benini, E. Farella, and P. Zappi,
“Using a wireless sensor network to enhance video surveillance,” J.
Ubiquitous Comput. Intell. (JUCI), vol. 1, pp. 1–11, 2006.
[2] A. Rajgarhia, F. Stann, and J. Heidemann, “Privacy-sensitive monitoring with a mix of IR sensors and cameras,” in Proc. 2nd Int. Workshop Sens. Actor Netw. Protoc. Appl., Boston, MA, Aug. 2004, pp.
21–29.
[3] P. Zappi, E. Farella, and L. Benini, “Pyroelectric infrared sensors based
distance estimation,” in Proc. IEEE Sens., Oct. 2008, pp. 716–719.
[4] G. Milde, C. Hausler, G. Gerlach, H.-A. Bahr, and H. Balke, “3-D modeling of pyroelectric sensor arrays Part II: Modulation transfer function,” IEEE Sensors J., vol. 8, no. 12, pp. 2088–2094, Dec. 2008.
[5] G. A. Cirino, R. Barcellos, A. Bereczki, S. P. Morato, and L. G. Neto,
“Design, fabrication and characterization of Fresnel lens array with
spatial filtering for passive infrared motion sensors,” in Proc. Photon.
North Int. Conf. Appl. Photon. Technol., 2006, pp. 1–12.
[6] M. Moghavvemi and L. C. Seng, “Pyroelectric infrared sensor for intruder detection,” in Proc. IEEE Region 10 Conf. (TENCON 2004),
Nov. 2004, vol. 4, pp. 656–659.
[7] Y. W. Bai and Y. T. Ku, “Automatic room light intensity detection
and control using a microprocessor and light sensors,” IEEE Trans.
Consum. Electron., vol. 54, no. 3, pp. 1173–1176, Aug. 2008.
[8] Y.-W. Bai and H. Teng, “Enhancement of the sensing distance of an
embedded surveillance system with video streaming recording triggered by an infrared sensor circuit,” in Proc. SICE Annu. Conf., Aug.
2008, pp. 1657–1662.
[9] K. Hashimoto, K. Morinaka, N. Yoshiike, C. Kawaguchi, and S.
Matsueda, “People count system using multi-sensing application,”
presented at the Int. Conf. Solid State Sens. Actuators (TRANSDUCERS 1997), Chicago, IL, Jun. 1997.
[10] U. Gopinathan, D. Brady, and N. Pitsianis, “Coded apertures for efficient pyroelectric motion tracking,” Opt. Exp., vol. 11, no. 18, pp.
2142–2152, 2003.
[11] B. Song, H. Choi, and H. S. Lee, “Surveillance tracking system using
passive infrared motion sensors in wireless sensor network,” in Proc.
Int. Conf. Inf. Netw. (ICOIN 2008), Jan. 2008, pp. 1–5.
[12] Q. Hao, D. Brady, B. Guenther, J. Burchett, M. Shankar, and S. Feller,
“Human tracking with wireless distributed pyroelectric sensors,” IEEE
Sensors J., vol. 6, no. 6, pp. 1683–1696, Dec. 2006.
[13] M. Shankar, J. B. Burchett, Q. Hao, B. D. Guenther, and D. J. Brady,
“Human-tracking systems using pyroelectric infrared detectors,” Opt.
Eng., vol. 45, no. 10, pp. 106401-1–106401-10, 2006.
[14] N. Li and Q. Hao, “Multiple human tracking with wireless distributed pyro-electric sensors,” Proc. SPIE, vol. 6940, no. 1, pp.
694033-1–694033-12, 2008.
[15] J. Paradiso and T. Starner, “Energy scavenging for mobile and wireless
electronics," IEEE Pervasive Comput., vol. 4, no. 1, pp. 18–27, Jan.–Mar. 2005.
[16] “13192 Developer’s Starter Kit, Application Note 2762,” Freescale,
Tech. Rep., 2007.
[17] Zigbee Alliance. [Online]. Available: http://www.zigbee.org/
[18] Pyroelectric Infrared Sensors, Murata Manufacturing, 2005 [Online].
Available: http://www.murata.com/catalog/s21e.pdf
[19] Frequency Range for Pyroelectric Detectors, PerkinElmer. [Online].
Available: http://www.perkinelmer.com
[20] IS-215T Datasheet, Honeywell Security and Custom Electronics,
2008. [Online]. Available: http://library.ademconet.com/MWT/fs1/9/
4434.pdf
[21] P. Zappi, E. Farella, and L. Benini, “Enhancing the spatial resolution
of presence detection in a PIR based wireless surveillance network,” in
Proc. IEEE Conf. Adv. Video Signal Based Surveill. (AVSS 2007), Sep.
2007, pp. 295–300.
[22] Detector Basics, InfraTec, 2009. [Online]. Available: http://www.infratec.de/fileadmin/media/Sensorik/pdf/Application_Detector_Basics.pdf
[23] J.-S. Fang, Q. Hao, D. J. Brady, M. Shankar, B. D. Guenther, N. P.
Pitsianis, and K. Y. Hsu, “Path-dependent human identification using
a pyroelectric infrared sensor and Fresnel lens arrays,” Opt. Exp., vol.
14, no. 2, pp. 609–624, 2006.
[24] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification. New
York: Wiley, 2001.
[25] C. J. C. Burges, “A tutorial on support vector machines for pattern
recognition,” Data Mining Knowl. Discov., vol. 2, no. 2, pp. 121–167,
1998.
[26] B. V. Dasarathy, Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. Los Alamitos, CA: IEEE Comput. Soc. Press,
1990.
[27] I. H. Witten and E. Frank, Data Mining: Practical Machine Learning
Tools and Techniques (Morgan Kaufmann Series in Data Management
Systems), 2nd ed. San Mateo, CA: Morgan Kaufmann, 2005.
[28] AN974 MC68HC11 Floating-Point Package, Freescale Corporation,
2004. [Online]. Available: http://www.freescale.com/files/microcontrollers/doc/app_note/AN974.pdf
Piero Zappi received the M.S. and Ph.D. degrees in electronic engineering from
the University of Bologna, Bologna, Italy, in 2005 and 2009, respectively.
He is now a Postdoctoral Researcher at the System Energy Efficiency Laboratory, University of California San Diego (UCSD), where he is developing a
distributed air quality monitoring system. His research is mostly in the field of
wireless sensor networks (WSNs) and embedded systems. Main topics include
implementation of Zigbee-based WSN, use of Pyroelectric InfraRed (PIR) detector for ambient monitoring, data management in redundant WSN, tangible
interfaces, and smart objects. He spent six months visiting ETH (Zurich) for
joint research activity with the Wearable Laboratory, Institute of Electronics.
Elisabetta Farella received the Ph.D. degree in electrical engineering and computer science from the University of Ferrara, Ferrara, Italy, in March 2005.
She is a Postdoctoral Researcher at the Department of Engineering, Computer Science and Systems, University of Bologna, Bologna, Italy, and research
Supervisor at T3lab. She is part of the Ami group at Micrel Laboratory, where
she supervises research on wireless sensor networks as enabling technology for
Ambient Intelligence applications. In particular, her interest is in body area network for pervasive healthcare, smart assistive environments, novel natural interaction techniques, ICT applied to cultural heritage. She cooperates in many
EU projects (FP6 SENSACTIONAAL, FP7 SMILING, ARTEMIS CAMMI,
ARTEMIS SOFIA) and industrial cooperation on ambient assisted living, ambient intelligence, and e-inclusion topics.
Luca Benini (S’94–M’97–SM’04–F’07) received the Ph.D. degree in electrical
engineering from Stanford University, Stanford, CA, in 1997.
He is a Full Professor at the University of Bologna, Bologna, Italy. He also
holds a visiting faculty position at the Ecole Polytechnique Fédérale de Lausanne (EPFL). His research interests are in the fields of multiprocessor and network systems-on-chip, ambient intelligence systems design, energy-efficient
smart sensors, and sensor networks. He has published more than 500 papers in
peer-reviewed international journals and conferences, three books, several book
chapters, and two patents.
Dr. Benini has been Program Chair and General Chair of the Design Automation and Test in Europe Conference. He is an Associate Editor of the IEEE
TRANSACTIONS ON COMPUTER-AIDED DESIGN OF CIRCUITS AND SYSTEMS, the
IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, and the ACM Transactions
on Embedded Computing Systems.